Competitive Design

CIRP – Design 2009

Competitive Design is an essential driver of innovation and creativity. In today’s fast-changing engineering environment, the need to enhance the role of design and creativity in all aspects of business is increasingly acknowledged.

The application of good design techniques within both product and service organisations has grown in importance, encompassing areas such as communication, behaviour and environment, enabling engineering organisations to develop and maintain a competitive edge. The papers presented in this book focus on the notion of design as a pivotal activity, creating and setting in motion the vision of the future within an engineering environment.

The proceedings present multidisciplinary research encompassing concepts, methodologies and infrastructure development for successful competitive design. The following topics are covered:

• Innovative and creative design
• Design methods, tools and techniques
• Design to cost
• Affordable design
• Risk in design
• User centric design
• Requirements engineering and management
• Design for customisation
• Distributed and collaborative design
• Design management
• Product life cycle management
• Design knowledge and information management
• Computer aided design
• Product platforms and modular design
• Adaptable design
• Virtual design and testing
• Manufacturing systems design
• Design optimisation
• Intelligent design
• Global design
• Management of outsourced design
• Design and sustainability

Professor Rajkumar Roy
Rajkumar Roy is Professor of Competitive Design and Head of the Decision Engineering Centre at Cranfield University. He is also the President of the Association of Cost Engineers. His research interests include design optimisation and cost engineering for products, services and industrial product-service systems.

Dr. Essam Shehab
Essam Shehab is a Senior Lecturer in Decision Engineering at Cranfield University. His research and industrial interests cover multi-disciplinary areas including design engineering, cost modelling and knowledge management for innovative products and industrial product-service systems.

Competitive Design
Proceedings of the 19th CIRP Design Conference

Rajkumar Roy, Essam Shehab
Editors

Editors
Professor Rajkumar Roy, Dr. Essam Shehab
Cranfield University, Cranfield, Bedford MK43 0AL, UK

ISBN 978-0-9557436-4-1
Cranfield University Press
© Cranfield University 2009. All rights reserved. No part of this publication may be reproduced without the written permission of the copyright owner.

19th CIRP Design Conference

Competitive Design
30-31 March 2009, Cranfield University, UK

Organised by Cranfield University, UK

Sponsored by Mori Seiki

Conference Chairman
R. Roy, Cranfield University, UK

International Scientific Committee

CIRP: A Bernard, France; A Bramley, UK; D Brissaud, France; M Cantamessa, Italy; J Corbett, UK; J Duflou, Belgium; W ElMaraghy, Canada; J P van Griethuysen, Switzerland; P Gu, Canada; B Hon, UK; I S Jawahir, USA; H Kaebernick, Australia; B Kaftanoglu, Turkey; S G Kim, USA; F Kimura, Japan; F L Krause, Germany; S Kumara, USA; B Lauwers, Belgium; S Lu, USA; E Lutters, The Netherlands; V D Majstorovich, Serbia; L Monostori, Hungary; D Mourtzis, Greece; A Nee, Singapore; B Pierre, France; G Schuh, Germany; G Seliger, Germany; M Shpitalni, Israel; A Sluga, Slovenia; G Sohlenius, Sweden; N Suh, USA; S Tichkiewitch, France; T Tolio, Italy; T Tomiyama, The Netherlands; M Tseng, Hong Kong; K Ueda, Japan; J Vancza, Hungary; F van Houten, The Netherlands; M Weck, Germany; P Xirouchakis, Switzerland

Non-CIRP: E Antonsson, USA; K Case, UK; G Cascini, Italy; D Cavallucci, France; A Chakrabarti, India; J Clarkson, UK; A Duffy, UK; E Goodman, USA; A Leifer, USA; C McMahon, UK; T Triggs, UK; M C van der Voort, The Netherlands

Local Organising Committee: A Al-Ashaab, P Baguley, R Barrett, D Baxter, E Benkhelifa, S Bolton, P Datta, I Ferris, M Goatman, M Grant, H Hassan, J Mehnen (Organisation Chair), D Saxena, E Shehab (Programme Chair), A Tiwari (Finance Chair), B Tjahjono, Y Xu, T Bandee (Organisation Secretary), E Pennetta, L Brady

Foreword

Studying how to make products and services more competitive through design is essential in a global market. The 19th CIRP Design Conference has therefore emphasised competitive design. The conference focuses on the notion of design as a pivotal activity, creating and setting in motion the vision of the future within an engineering environment. These proceedings present 86 technical papers from 20 countries reporting multidisciplinary research that encompasses concepts, methodologies and infrastructure development for successful competitive design. The papers are from both CIRP and non-CIRP communities.

The term ‘design’ is synonymous with innovation and creativity. In today’s fast-changing engineering environment, the need to enhance the role of design and creativity in all aspects of business is increasingly acknowledged. The application of good design techniques within both product and service organisations has grown in importance, encompassing areas such as communication, behaviour and environment, enabling engineering organisations to develop and maintain a competitive edge.

The conference has 18 parallel sessions, 4 keynote sessions from academia and industry, and one opening address. Over 100 participants are expected to attend. The sessions range from collaborative design, design optimisation and design evaluation to design methods, inventive design and creative design. The multidisciplinary approach to design is the key to success.

I would like to take this opportunity to thank all the authors for their quality research, the international scientific committee members for their support in reviewing the papers, and the local organising committee for their meticulous preparation for the conference. I would like to specially thank Dr. Jorn Mehnen, Dr. Essam Shehab, Dr. Ashutosh Tiwari and Mrs Teresa Bandee for their significant contributions towards the success of the conference. I would also like to thank our sponsor Mori Seiki, the machine tool company, and the exhibitors for their support of the conference.

Professor Rajkumar Roy
Chairman, CIRP Design Conference 2009

Table of Contents

Keynote Paper

Competing in Engineering Design – the Role of Virtual Product Creation
R. Stark, F.-L. Krause, C. Kind, U. Rothenburg, P. Müller, H. Stöckert .......... 1

Collaborative Design

Web-based Collaborative Working Environment and Sustainable Furniture Design
D. Su, J. Casamayor .......... 9

How to Answer to the Challenges of Competencies Management in Collaborative Product Design?
B. Rose, V. Robin, S. Sperandio .......... 17

Requirements Models for Collaborative Product Development
C. Stechert, H-J. Franke .......... 24

Modelling Product and Partners Network Architectures to Identify Hidden Dependencies
S. Zouggar, M. Zolghadri, Ph. Girard .......... 32

Integrated Design at VIDA Centre Poland
Z. Weiss, R. Konieczny, J. Diakun, D. Grajewski, M. Kowalski .......... 40

Design Optimisation

Optimal Design of Planer Parallel Manipulators 3RRR Through Lower Energy Consumption
A. A. Rojas-Salgado, Y. A. Ledezma Rubio .......... 45

Artificial Neural Networks to Optimize the Conceptual Design of Adaptable Product Development
J. Feldhusen, A. Nagarajah .......... 51

Work Roll Cooling System Design Optimisation in Presence of Uncertainty
Y. T. Azene, R. Roy, D. Farrugia, C. Onisa, J. Mehnen, H. Trautmann .......... 57

Integrating Conventional System Views with Function-Behaviour-State Modelling
T. J. van Beek, T. Tomiyama .......... 65

Grid Services for Multi-objective Optimisation
G. Goteng, A. Tiwari, R. Roy .......... 73

Knowledge & Information Management

Automated Retrieval of Non-Engineering Domain Solutions to Engineering Problems
J. Stroble, R. B. Stone, D. A. McAdams, M. S. Goeke, S. E. Watkins .......... 78

Structured Design Automation
M. J. L. van Tooren, S. W. G. van der Elst, B. Vermeulen .......... 86

Modular Product Design and Customization
J. Pandremenos, G. Chryssolouris .......... 94

A Criteria-based Measure of Similarity between Product Functionalities
D. P. Politze, S. Dierssen .......... 99

Dynamic Learning Organisations Supporting Knowledge Creation for Competitive and Integrated Product Design
R. Messnarz, G. Spork, A. Riel, S. Tichkiewitch .......... 104

Product Lifecycle Management

A Constraints Driven Product Lifecycle Management Framework
J. Le Duigou, A. Bernard, N. Perry, J-C. Delplace .......... 109

Using a Process Knowledge Based CAD for a More Robust Response to Demands for Quotation
L. Toussaint, S. Gomes, J. C. Sagot .......... 116

Development of a Software Tool to Support System Lifecycle Management
V. Robin, S. Brunel, M. Zolghadri, P. Girard .......... 120

Integrated Design and PLM Applications in Aeronautics Product Development
D. Van Wijk, B. Eynard, N. Troussier, F. Belkadi, L. Roucoules, G. Ducellier .......... 128

The Mechanisms of Construction of Generic Product Configuration with the Help of Business Object and Delay Differentiation
S-H. Izadpanah, L. Gzara, M. Tollenaere .......... 134

Interoperability and Standards: The Way for Innovative Design in Networked Working Environments
C. Agostinho, B. Almeida, M. J. Nuñez-Ariño, R. Jardim-Gonçalves .......... 139

Product Lifecycle Management Approach for Sustainability
N. Duque Ciceri, M. Garetti, S. Terzi .......... 147

Through-Life Integration Using PLM
M. Gomez, D. Baxter, R. Roy, M. Kalta .......... 155

Implementing an Internal Development Process Benchmark Using PDM-Data
J. Roelofsen, S. D. Fuchs, D. K. Fuchs, U. Lindemann .......... 163

How to Make “Value Flow” for a Start-Up Enterprise
W. Beelaerts van Blokland, B. Dumitrescu, R. Curran .......... 171

Design Evaluation

Design and Manufacturing Uncertainties in Cost Estimating within the Bid Process: Results from an Industry Survey
S. Parekh, R. Roy, P. Baguley .......... 178

Design Interference Detector – A Tool for Predicting Intrinsic Design Failures
V. D'Amelio, T. Tomiyama .......... 185

A Generic Conceptual Model for Risk Analysis in a Multi-agent Based Collaborative Design Environment
J. Ruan, S. F. Qin .......... 193

A Methodology for Variability Reduction in Manufacturing Cost Estimating in the Automotive Industry based on Design Features
F. J. Romero Rojo, R. Roy, E. Shehab .......... 197

Assessing the Complexity of a Recovered Design and its Potential Redesign Alternatives
J. Urbanic, W. H. ElMaraghy .......... 202

A Study on Process Description Method for DFM Using Ontology
K. Hiekata, H. Yamato .......... 210

The Use of DfE Rules During the Conceptual Design Phase of a Product to Give a Quantitative Environmental Evaluation to Designers
H. Alhomsi, P. Zwolinski .......... 216

Developing a Current Capability Design for Manufacture Framework in the Aerospace Industry
A. Whiteside, E. Shehab, C. Beadle, M. Percival .......... 223

Design for Low-Cost Country Sourcing: Motivation, Basic Principles and Design Guidelines
G. Lanza, S. Weiler, S. Vogt .......... 229

Design Rework Prediction in Concurrent Design Environment: Current Trends and Future Research Directions
P. Arundachawat, R. Roy, A. Al-Ashaab, E. Shehab .......... 237

Systematic Processes for Creative and Inventive Design

A Method of Analyzing Complexity by Effects and Rapid Acquisition of the Most Ideal Solution Based on TRIZ
P. Zhang, F. Liu, D. R. Zhang, R. H. Tan .......... 245

Interrelating Products through Properties via Patent Analysis
P. A. Verhaegen, J. D'hondt, J. Vertommen, S. Dewulf, J. R. Duflou .......... 252

The Product Piracy Conflict Matrix – Central Element of an Integrated, TRIZ-based Approach to Technology-based Know-how Protection
G. Schuh, C. Haag .......... 258

Computer-Aided Conceptual Design Through TRIZ-based Manipulation of Topological Optimizations
G. Cascini, U. Cugini, F. S. Frillici, F. Rotini .......... 263

Interpretation of a General Model for Inventive Problems, the Generalized System of Contradictions
S. Dubois, I. Rasovska, R. De Guio .......... 271

Long-Run Forecasting of Emerging Technologies with Logistic Models and Growth of Knowledge
D. Kucharavy, E. Schenk, R. De Guio .......... 277

A TRIZ Based Methodology for the Analysis of the Coupling Problems in Complex Engineering Design
G. Fei, J. Gao, X. Q. Tang .......... 285

TRIZ Evolution Trends in Biological and Technological Design Strategies
N. R. Bogatyrev, O. A. Bogatyreva .......... 293

Procedures and Models for Organizing and Analysing Problems in Inventive Design
D. Cavallucci, F. Rousselot, C. Zanni .......... 300

Achieving Effective Innovation Based On TRIZ Technological Evolution
J. G. Sun, R. H. Tan, G. Z. Cao .......... 309

Design Case Studies

Modelling the Product Development Performance of Colombian Companies
M. C. Herrera-Hernandez, C. Luna, L. Prada, C. Berdugo, A. Al-Ashaab .......... 316

Design of a Virtual Articulator for the Simulation and Analysis of Mandibular Movements in Dental CAD/CAM
E. Solaberrieta, O. Etxaniz, R. Minguez, J. Muniozguren, A. Arias .......... 323

Contribution of Two Diagnosis Tools to Support Interface Situation during Production Launch
L. Surbier, G. Alpan, E. Blanco .......... 331

The Drift of the Xsens Moven Motion Capturing Suit during Common Movements in a Working Environment
R. G. J. Damgrave, D. Lutters .......... 338

Reconfigurable Micro-mould for the Manufacture of Truly 3D Polymer Microfluidic Devices
S. Marson, U. Attia, D. M. Allen, P. Tipler, T. Jin, J. Hedge, J. R. Alcock .......... 343

Creative Design

Creative Approaches in Product Design
H. Abdalla, F. Salah .......... 347

An Engineering-to-Biology Thesaurus to Promote Better Collaboration, Creativity and Discovery
J. K. Stroble, R. B. Stone, D. A. McAdams, S. E. Watkins .......... 355

How do Designers Categorize Information in the Generation Phase of the Creative Process?
J. E. Kim, C. Bouchard, J. F. Omhover, A. Aoussat .......... 363

The Value of Design-led Innovation in Chinese SMEs
S. Bolton .......... 369

We are Designers Because We Can Abstract
A. Adel, R. Djeridi .......... 377

Bridging the Gap between Design and Engineering in Packaging Development
R. ten Klooster, D. Lutters .......... 383

Supporting Knitwear Design Using Case-Based Reasoning
P. Richards, A. Ekart .......... 388

Investigating Innovation Practices in Design: Creative Problem Solving and Knowledge Management
D. Baxter, N. El-Enany, K. Varia, I. Ferris, B. Shipway .......... 396

Design Methods

Set-Based Design Method Reflecting the Different Designers’ Intentions
M. Inoue, H. Ishikawa .......... 404

On the Potential of Function-Behaviour-State (FBS) Methodology for the Integration of Modelling Tools
A. A. Alvarez Cabrera, M. S. Erden, T. Tomiyama .......... 412

Function Orientation beyond Development – Use Cases in the Late Phases of the Product Life Cycle
A. Warkentin, J. Gausemeier, J. Herbst .......... 420

An Approach to the Integrated Design and Development of Manufacturing Systems
H. Nylund, K. Salminen, P. H. Anderson .......... 428

An Improved Method of Failure Mode Analysis for Design Changes
R. Laurenti, H. Rozenfeld .......... 436

Object-Oriented Simulation Model Generation in an Automated Control Software Development Framework
M. J. Foeken, M. J. L. van Tooren .......... 443

Improving Patient Flow Through Axiomatic Design of Hospital Emergency Departments
J. Peck, S-G. Kim .......... 451

Combining Axiomatic Design and Case-Based Reasoning in a Design Methodology of Mechatronics Products
N. Janthong, D. Brissaud, S. Butdee .......... 456

Set-Based Concurrent Engineering Model for Automotive Electronic/Software Systems Development
A. Al-Ashaab, S. Howell, K. Usowicz, P. Hernando Anta, A. Gorka .......... 464

Symbiotic Design of Products and Manufacturing Systems Using Biological Analysis
T. N. AlGeddawy, H. A. ElMaraghy .......... 469

Scenario Based Design

Supporting Scenario-Based Product Design: The First Proposal for a Scenario Generation Support Tool
I. Anggreeni, M. C. van der Voort .......... 475

The Procedure Usability Game: A Participatory Game for the Development of Complex Medical Procedures and Products
J. A. Garde, M. C. van der Voort .......... 483

Scenarios and the Design Process in Medical Application
R. Rasoulifar, G. Thomann, F. Villeneuve .......... 490

Scenario-Based Evaluation of Perception of Picture Quality Failures in LCD Televisions
J. Keijzers, L. Scholten, Y. Lu, E. den Ouden .......... 497

Applying Scenarios in the Context of Specific User Design: Surgeon as an Expert User, and Design for Handicapped Children
G. Thomann, R. Rasoulifar, F. Villeneuve .......... 504

User Centric Design

Analysing Discrete Event Simulation Modelling Activities Applied in Manufacturing System Design
J. Johansson .......... 512

A User Centred Approach to Eliciting and Representing Experience in Surgical Instrument Development
J. Restrepo, T. A. Nielsen, S. M. Pedersen, T. C. McAloone .......... 518

Equating Business Value of Innovative Product Ideas
S. Brad .......... 526

Affordance Feature Reasoning in Some Home Appliances Products
J. S. Lim, Y. S. Kim .......... 533

A Methodology of Persona-centric Service Design
S. Hosono, M. Hasegawa, T. Hara, Y. Shimomura, T. Arai .......... 541

Design Education

Stanford’s ME310 Course as an Evolution of Engineering Design
T. Carleton, L. Leifer .......... 547

Educating T-shaped Design, Business and Engineering Professionals
T-M. Karjalainen, M. Koria, M. Salimäki .......... 555

European-wide Formation and Certification for the Competitive Edge in Integrated Design
A. Riel, S. Tichkiewitch, R. Messnarz .......... 560

ED100: Shifting Paradigms in Design Education and Student Thinking at KAIST
M. K. Thompson .......... 568

Virtual Design

A Knowledge Based Approach for Affordable Virtual Prototyping: The Drip Emitters Test Case
P. Cicconi, R. Raffaeli .......... 575

Real 3D Geometry and Motion Data as a Basis for Virtual Design and Testing
D. Weidlich, H. Zickner, T. Riedel, A. Böhm .......... 583

Enhancement of Digital Design Data Availability in the Aerospace Industry
E. Shehab, M. Bouin-Portet, R. Hole, C. Fowler .......... 589

Competing in Engineering Design – the Role of Virtual Product Creation

R. Stark, F.-L. Krause (1), C. Kind¹, U. Rothenburg¹, P. Müller², H. Stöckert²
¹ Fraunhofer Institute for Production Systems and Design Technology (IPK) and
² Institute for Machine Tools and Factory Management, Chair for Industrial Information Technology, Berlin Institute of Technology,
Pascalstr. 8-9, D-10587 Berlin, Germany
[email protected], [email protected]

Abstract
Product creation is facing the next level of fundamental changes. Global demands are growing substantially to achieve energy efficient and sustainable value creation networks for products, production and services without compromising traditional success factors such as time to market, cost and quality. To stay competitive within such an environment, development partners in industry and public sectors will require new interplay solutions for engineering design execution, domain knowledge representation, expert competence utilization and digital assistance systems. This scenario offers the chance for virtual product creation solutions to become critical for the future by offering unique engineering capabilities which have not yet been explored or deployed. The paper investigates key elements of modern virtual product creation – such as agile process execution, functional product modeling and context appropriate information management – towards their competitive role in satisfying increasing numbers of product requirements, in delivering robust systems integration and in ensuring truly sustainable product lifecycle solutions.

Keywords: Virtual Product Creation, engineering design, digital technologies, information and competence management, sustainability, systems integration, process execution

1 INTRODUCTION AND MOTIVATION
Competition in engineering design is characterized by execution actors (designers, engineers, OEMs, suppliers, engineering service providers etc.), by technical targets and economic factors within the field of application, and by higher level needs of global and regional environments and social equity. In addition, engineering design competition is influenced by implicit aspects such as general or published knowledge of an industry branch or a technical domain and special competence set-ups in enterprise environments and project teams. Each one of the above elements can lead to distinct differences in approach, operation and technology support (both physical and digital). Three fundamental aspects are laid out in this paper as drivers for the benchmark criteria which are then used to assess the role of virtual product creation within engineering design competition.

The first aspect deals with the question “what is the subject area of the activity engineering design?”. Competition in designing a special type of product, machine, facility or service is characterized by the industry branch and oftentimes by its specific implicit design behaviors and practices: the development of an aircraft is organized in functional systems engineering activities in order to achieve the best possible flight operation attributes and lifetime characteristics (weight, load capacity, fuel consumption, system robustness, safety redundancy, operating cost). The development of a fixture for an automotive welding station, however, focuses on design modularization and tool standards to enable a high chance of reuse across plants and assembly lines. These opposite examples indicate that different types of knowledge, engineering collaboration and virtual product creation technologies will serve as competing factors amongst the key development partners.

The second aspect of competition is all about “how engineering design is executed” and “which main activities are associated to the design execution factory”. The execution of engineering design within industry uses principal elements of the traditional design methodologies (e.g. VDI 2221 or Pahl/Beitz, see [1], [2]), but in the majority of cases it does not follow them systematically. The reasons are manifold:
1. Most companies have not been active during recent years in using function structures to arrive at new design principles. The need for more intelligent products and combined systems with mechanical components, electronic and electrical modules as well as control-loop based software enablers, however, will raise the importance of function oriented design.
2. The traditional design methodologies have not taken into active consideration the complexity of products and the specific technical challenges of systems integration and verification.
3. The use of virtual product creation solutions including related processes, methods, models, tools and information standards was not yet part of engineering design when those design methodologies were developed.

The V-model of systems engineering is another very popular development guideline and is used in most industry branches. For many development tasks the consistent application of the V-model is limited, too, due to problems in finding objective criteria to conduct target cascading from the entire product function down to system, sub-system and component property/attribute levels. Product and systems integration, as indicated on the right branch of the V-model, is also missing consistent mapping to requirements, to target cascading and to the complex parameter relations of mechanical systems, electronic modules and (control) software.

Due to the nature of technical complexity, engineering design activities involve many experts from different

domains. A typical pragmatic approach is to connect and “integrate” those expert activities with the help of company specific development milestone charts. Project managers with limited capabilities in technical design and validation activities serve as gatekeepers to fulfill metric based milestone deliverables. Project reviews with stakeholders often replace proper expert design reviews and serve as a control unit for turbulent engineering design execution. In addition, engineers and designers have difficulties coping with information complexity, PLM technologies and virtual product creation skill needs. Hence, the robustness of engineering design and design efficiency suffer. The third aspect deals with the question of “who are the competitors?”. Unlike schoolbook scenarios, which put a single designer at the center of activity, the challenge of today’s engineering design competition is characterized by the following facts:

• Significant “time to market” reductions have enforced a separation of design responsibilities amongst bigger teams of design experts executing design tasks in parallel.
• The official responsibility split between OEMs, suppliers and engineering service providers requires a high number of solid interface agreements.
• The provision of project resources as “warm bodies”, which can be leased like a commodity on the market, oftentimes conflicts with the need to develop competencies and critical development skills in the mid and long term.

The above described aspects of competition in engineering design lead to another key question: can information technology help (or not) to overcome turbulent factors of competition in engineering design? If yes, which key factors are important, and how can virtual product creation enable companies to acquire a critical advantage in executing engineering design? The following sections will therefore investigate those questions in more detail, as they are part of the research at the Chair of Industrial Information Technology at the Berlin Institute of Technology and at the division of Virtual Product Creation of the Fraunhofer Institute for Production Systems and Design Technology (IPK) in Berlin.

2 COMPETITION IN ENGINEERING DESIGN – OVERVIEW, DRIVERS AND DEMANDS
Engineering design serves as the fundamental discipline to deliver appropriate design models and descriptions in order
• to meet a high number of product requirements,
• to enable robust manufacturing with high quality,
• to deliver sound profits on competitive markets,
• to fulfill customer expectations during use,
• and to enable a sustainable future.

The above mentioned principal drivers for engineering design have to interact with the three fundamental aspects of competition in engineering design, as described in the previous section:
• What is the subject area of the activity engineering design?
• How is engineering design executed, and which main activities are associated to the design execution factory?
• Who are the competitors in engineering design?

The authors of this paper have conducted research in order to find out which benchmark criteria might exist to clarify the question to which extent IT technologies and virtual product creation solutions can positively influence the three aspects of engineering design competition. The following eight criteria have been selected after analyzing megatrends around the three competition aspects and the bigger product creation needs of the future. They will serve as the benchmark criteria (bmc) set in the following sections:

1. Avoidance of physical prototypes
In order to reduce energy and material consumption and to avoid unnecessary pollution such as carbon dioxide during the product creation phase, physical prototypes should be reduced to a minimum or eliminated altogether (the “0-prototype target”). This target points directly to an increase of analytical and virtual engineering capabilities.

2. Offering of task and context oriented information and knowledge
Future demands for sustainable products which are in harmony with society and environmental needs require the active interpretation of an increasing number of linked information sets. Today, development engineers are already exhausted and overwhelmed in using loosely coupled information databases for engineering reasoning. This stress will become worse unless better ways can be delivered for information offering, maintenance and active use.

3. Ensuring best suitable collaboration (incl. cultures and individual characters)
Product creation activities meanwhile have to rely on expert networks and dispersed project teams around the globe. Different languages, cultures and individual education backgrounds, as well as multiple approaches to design engineering, make it difficult to keep the focus on development project time and content targets. In addition, today’s collaboration methodologies have not yet proven to deliver intelligent and clever solutions matching the theoretical potential of those teams. The question remains what might be achievable via best possible collaboration.

4. Enabling robust and transparent decision making
Simultaneous and cross-enterprise development processes need constant operational and milestone oriented decision making. Still today, the disciplines of project management and engineering design do not follow the same conceptual thinking. As a consequence, project and engineering progression oftentimes are not in synchronization, making robust decision making impossible. As a result, major technical compromises are accepted in order to deliver projects on time, and due to missing decision transparency, lessons learned are not possible.

5. Provision of a creative, individually adaptable and intuitive working environment
Human beings remain the most valuable asset in agile and precise engineering execution. The early engagement with non-physical artefacts of future products requires new kinds of work places (“new generation of work desk laboratories”). Creativity zones will play a more important role if new levels of intelligent products need to be achieved.

6. Delivering extended lifecycle views
In the beginning of the 21st century it is no longer sufficient to concentrate on the production and use of products and to leave out subsequent life cycles such as MRO and end of life recycling. Even 2nd cycle product planning and verification methods will become important.

7. Steady maintenance and extension of competence
Schoolbook knowledge and job experience are no longer sufficient to meet future design engineering skill requirements. New levels of knowledge capture, consistent use and rapid innovation need to be explored to allow for future generation systems engineering and related competence networks.

8. Product creation process planning and adaptation
Process competence is one of the key competitive factors and a core competence of industrial companies. Beyond general guidelines and high level milestone maps, there is almost no explicit representation of product creation processes available. Process models and associated target oriented deployment are highly desired to analyze and improve engineering design systematically.

3 VIRTUAL PRODUCT CREATION (VPC) SOLUTIONS (PROCESS, METHODS, TOOLS) TO SUPPORT COMPETITIVE ENABLERS IN ENGINEERING DESIGN

3.1 Development methodology and process simulation/execution
A development methodology is a comprehensive set of specific engineering rules, methods, and procedures that are used to develop or design systems or products in an industrial environment. There are some well known and often cited approaches such as VDI 2221 [1] or the V-model [3], which are commonly “applied” in industry, though mostly adapted to meet the specific requirements of the industrial area and the needs of the individual company. However, those approaches do not consider the increasing complexity and variety of products arising from the integration of different domains (e.g. mechatronics) and the accumulation of requirements regarding sustainability, life cycle aspects and product-related services besides the “common” needs defined by cost, time and quality. With respect to the 3rd benchmark criterion (bmc 3), this means that current methodologies do not take into account the different types of engineering approaches and therefore do not support collaboration sufficiently. Furthermore, development methodologies mostly focus on phases and the outcome (products, services, software, systems), but not on the engineers and organizations applying them. This means that an individual adaption of the methodical procedures is not possible and not even intended. Accordingly, appropriate methods need to be developed that consider the collaboration of a heterogeneous network of product developers, representing different domains, life cycle phases and companies and characterized by different cultures and individual backgrounds.

Additionally, the traditional engineering methodologies as described in [1], [2] and [3] hardly take into account the potentials offered by information technologies, since at the time those methodologies were developed computers were only just setting out to conquer the engineering world. However, information technologies and specific application systems are a prerequisite for avoiding physical prototypes, reflecting the first benchmark criterion (bmc 1). Also, the general engineering methodologies do not specifically aim at reducing physical prototypes. As a consequence, each company has to invest its own logic, considerations and efforts to use design approaches and verified computer models and to adjust virtual prototyping processes to obtain physical prototype reductions. Therefore, academic researchers should use more intensively the opportunity to develop more suitable development methodologies with the direct integration of computer technologies. This, however, will make it indispensable to establish consistent product models for the different conceptual layers of design methodology (requirements, product ideas, system and design layout, embodiment design). Consistent computer supported design methodology deployment today is still limited by the necessity to permanently convert data between different application systems or database systems. Furthermore, engineers need method and process assistance by intelligent assistant systems. According to the 2nd and the 5th benchmarking criteria (bmc 2, bmc 5), such lack of intelligent method and process assistance makes it difficult to offer a working environment both tailored to the needs of the designer and adapted to the current process state and product maturity. To realize the potentials of IT systems it is necessary to organize the product development process appropriately and to allow for flexibility that enables an adjustment to these objectives.

As stated above, engineering design involves many stakeholders from different domains. To control an interdisciplinary development process it is necessary to handle versatile knowledge of various domains, which needs to be represented appropriately in IT applications. Development methodologies will only be able to fully enable the 2nd (“task oriented information and knowledge”) and 6th (“supporting life cycle views”) benchmarking criteria if appropriate information management solutions are available to help control the way information is generated and used (please compare the next section). The combined methodology of business processes, process management, project management and systems engineering could have significant potential for several benchmark criteria. Today, however, with respect to bmc 4 (“robust decision making”), engineering development methodologies and project management are not yet correlated. With project management taking the lead of product development projects, the deployed development methodologies need to be adjustable in order to synchronize project and engineering progression.

Customers increasingly ask for complete solutions instead of single products. While services offered for specific products are usually developed separately - often even after completing the product development - the integrated development of products and services is pursued to realize added value and new functionality, cp. [4]. Appropriate VPC solutions to develop Product Service Systems (PSS) and value co-creation need to be able to compare PSS variants, to support collaboration and to deploy distributed decision making according to the 3rd benchmarking criterion (“ensuring best possible collaboration”).

The changing global conditions with respect to economy, ecology and socialization have a strong influence on the procedural approach of creating industrial products and require an adaptation of organizational, methodical and technical aspects. Actually, this addresses the 2nd, 3rd and 6th benchmarking criteria.

For the development and creation of sustainable products in general and energy efficient products in particular, both the number of people involved in the development process and the amount of information to be processed increase significantly. The persons involved need to be supported to ensure best suitable collaboration; the information and knowledge have to be offered according to task and context. The expanding area of responsibilities of a company for its product, which does not stop after product delivery, makes companies take into account life cycle aspects by applying design methodologies for life cycle creation, modeling, management and evaluation.

In addition to generically deployed development methods that provide a kind of overall procedural framework, there also exists a range of specific design methods which need to be integrated into the higher design process flow. The range of those methods extends from general procedures for change management, requirements engineering and complexity management up to specific design methods in CAD system templates or other IT wizards. The latter represent particular company knowledge and support specific development tasks and solutions. Such approaches mainly address the 2nd benchmarking criterion (bmc 2) by offering task and context oriented information and knowledge.

Concluding intermediately with respect to development methodology and engineering design processes, the analysis reveals that the process itself and its deployed methodologies are key to improving the three aspects of engineering design competition. Since research is still dominantly focused on the “traditional” development methodologies, changes of boundary conditions require changes in mindsets and the development of solutions that meet virtual product creation requirements.

Process description, simulation and controlling are crucial factors for corporate success in product development [5]. However, product development activities become increasingly complex, as explained in the first section. Therefore, the active planning, optimization and adaptive execution of development processes become ever more important. Modeling and simulation of development processes provide a powerful approach to meet these objectives and address bmc 8 perfectly. However, product development processes demand modeling and simulation according to specific terms and conditions. They are characterized by creative elements and more uncertainty than conventional business processes. Unpredictable obstacles and problems frequently require the adjustment of the development plan during the development process. Moreover, product development processes are determined to a great extent by iteration loops. In order to meet these requirements, a tool for goal-oriented modeling and simulation has to be able to map the characteristics of product development processes mentioned above. Particularly, the stochastic behavior has to be represented. Process simulation will support ensuring best suitable collaboration (3rd bmc) and enabling robust and transparent decision making (4th bmc) if the following prerequisites are met: (a) representation of aspects and parameters that influence collaboration and decision making, and (b) project management characteristics. Another aspect of the 4th bmc refers to the development process itself. One objective of product development process modeling is to create a predictive model. This model improves managerial decision making and optimizes process predictability [6]. Processes can be defined that are more robust in case of changing conditions. Current problems here are the difficulties and high effort involved in analyzing processes and generating appropriate and usable process models.
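A minimal sketch may make the notion of a stochastic process model concrete. The following Python fragment is purely illustrative: the stage names, duration distributions and rework probabilities are invented and are not taken from the MIKADO tooling mentioned below. It samples a development process whose stages can loop back for rework and estimates the spread of the total duration by Monte Carlo simulation.

```python
import random
import statistics

# Hypothetical process model: each stage has an uncertain duration
# (triangular distribution: optimistic, most likely, pessimistic, in days)
# and, optionally, a rework loop back to an earlier stage.
STAGES = [
    ("requirements", (5, 10, 20), None, 0.0),
    ("concept",      (10, 15, 30), "requirements", 0.15),  # 15% rework chance
    ("design",       (20, 40, 80), "concept", 0.25),
    ("validation",   (10, 20, 40), "design", 0.30),
]
INDEX = {name: k for k, (name, _, _, _) in enumerate(STAGES)}

def simulate_once(max_loops=20):
    """Walk through the stages once, re-entering earlier stages on rework."""
    total, i, loops = 0.0, 0, 0
    while i < len(STAGES):
        name, (opt, likely, pess), loop_to, p_rework = STAGES[i]
        total += random.triangular(opt, pess, likely)  # stochastic duration
        if loop_to is not None and loops < max_loops and random.random() < p_rework:
            i = INDEX[loop_to]   # iteration loop: jump back and redo
            loops += 1
        else:
            i += 1               # proceed to the next stage
    return total

durations = sorted(simulate_once() for _ in range(10_000))
print(f"mean: {statistics.mean(durations):.1f} days, "
      f"90th percentile: {durations[9_000]:.1f} days")
```

Even such a toy model makes visible what a deterministic milestone chart hides: how the rework probabilities of late stages stretch the tail of the schedule distribution, which is the kind of predictive statement needed for robust decision making (bmc 4) and process planning (bmc 8).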

For process planning, the product development process is modeled and analyzed prior to its execution. To realize fast benefits, interest is directed towards time to market, cost, and quality. Considering further evaluation criteria such as the environmental impact of the development process itself is possible. In any case, the model has to provide the ability to point out the effects of process adjustment by simulation. Common ‘adjusting knobs’ are the improvement of human resources, changes in organizational aspects, and the use and enhancement of the capabilities of information technology [7]. Accordingly, process simulation can be applied to evaluate and optimize development processes with respect to bmc 5, bmc 6 and bmc 7 as well. However, it is necessary to represent and implement the specific characteristics of the real process in the simulation model. Most companies have not yet seriously started to invest in such process modeling capabilities.

Some of the issues mentioned above have been investigated in VPC research projects and first results have been achieved. For instance, in the joint research project MIKADO, solutions are being developed to support the development of mechatronic products by improving the coordination and adaptation of mechanical, electronic and software development processes and by systematically extending and integrating approaches and tools from these three domains. The solutions comprise a systematic approach for designing and evaluating mechatronic development processes using predefined reference processes, and a software tool for modeling and simulating multidiscipline development processes. New modeling and simulation features allow for a more precise prediction of real process behavior and more reliable identification of possible flaws in the process design.

3.2 Context appropriate PLM Information Management (authoring and consumption)
Product lifecycle information is crucial to virtual product development and to the 1st benchmark criterion, the avoidance of physical prototypes. Avoiding one single physical object leads to generating a myriad of information objects. The integration of a virtual prototype requires the incorporation of a high number of different data elements, and a virtual prototype cannot carry information the way a physical prototype can. This is why information management is essential already today and has to cope with additional challenges in the future. With regard to the other seven benchmark criteria, many shortcomings and opportunities exist within the technologies of Virtual Product Creation (VPC) concerning context appropriate Product Lifecycle Management (PLM). Especially the 2nd and the 4th benchmark criteria are not covered by industrially available VPC solutions and are also mostly out of scope in today’s research activities. Information is not provided context oriented, but rather all at once. Robust and transparent decision making is therefore not yet possible.

Many efforts are under way in the field of benchmark number 3 (“engineering collaboration”). As a result, a range of semi-functional collaboration solutions is already available within commercial software products or has been investigated scientifically. Recent research work has been conducted by Gärtner [8] and Langenberg [9] in the Ad-Hoc-Collaboration project and in the CoVes project. Market-ready and basically functional software applications are, for example, PTC CoCreate® and Dassault Systèmes Enovia® 3D live Collaborative Review. The collaboration in large project volumes is not yet satisfying. Such solutions require better reduced and context appropriate information provision, as well as possibilities to alter 3D models in the manner of computer aided design applications.
Partial solutions exist for benchmark criteria 5, 6 and 7. Whereas the provision of creative, individually adaptable and intuitive working environments is being heavily investigated by human factors research activities in many industrial fields, the focus on product development environments is still comparatively low. Product Lifecycle Management is meanwhile a popular discipline in IT technology and underlines that extended lifecycle views represent a key research area with widespread approaches from science and industry. The next necessary step is a context appropriate lifecycle view, which provides relevant information for different life cycle steps, adapted to the requirements of the specific step.

Heavily investigated in the scientific world, but not yet much implemented in industrial solutions, is the 7th benchmark criterion, ‘steady maintenance and extension of competence’. Even if a lot of academic research has been conducted in the field of knowledge and competence management, also focused on product development contexts, there is no serious assistance system available on the market. Current solutions such as NX™ Knowledge Fusion, CATIA® Knowledge Expert or CATIA® Knowledge Advisor are an approach to knowledge management, focused on process knowledge and support of particular development questions, separating expert knowledge from the experts and storing it in databases. Competence, though, cannot be separated from the individual. Competence management aims at the development of the personal qualifications and experiences of product developers. The integration of competence management processes into the product development process is still not satisfying. To provide information in an appropriate way, the competence of the information consumer, that is, the engineer, is one of the most relevant context parameters. Intensive research concerning competence management in product development processes has been done by Strebel [10] and further elaborated by Stöckert et al. [11].

Due to the steadily ongoing development of information systems, the amount of information created is literally snowballing. But even if increasing amounts of information throughout the product lifecycle become available for product designers, engineers, marketing personnel and others, they are not becoming better informed. The growing abundance of information is not properly structured, edited and visualized. Every piece of information, every document, every product model and every working instruction is available at any point in time and without sensitivity to the context. This information overload actually manifests itself as a lack of information.

Context sensitivity research has been done throughout the last decade. Current approaches include weighted links, well known from Amazon.com® book recommendations, implicit feedback mechanisms [12], complex adaptive systems (CAS) in the form of multi-agent solutions [13] and information retrieval based on quantum theory [14]. Another well-established utilization is context sensitive user help within software applications. An adoption in the industrial field of product development and product development software systems has not yet taken place. One main reason is the abundance of different, non-standardized processes in product development, which is strongly connected to the lack of consistent information classifications in this industry. Even if the standardization of innovative work is not to be expected, certain information classifications for virtual product creation are possible and already overdue.
The possibility to adopt these techniques in a functional way must not be taken for granted; serious, extensive research still has to be conducted. What has to be done to comply with the mentioned benchmark criteria? To secure the 4th benchmark criterion, robust and transparent decision making, information has to be provided context sensitively. Five dimensions are necessary to fully describe a context in product development (a toy illustration follows the list):

• Domain
• Product
• Tool
• Process
• Person
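As a toy illustration only (the text defines no data model), the five dimensions can be read as a context descriptor against which information objects are scored. The weighting scheme below is a hypothetical stand-in for the weighted-relationship classification discussed later in this section; all names and numbers are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """The five context dimensions named in the text."""
    domain: str    # e.g. "automotive"
    product: str   # e.g. "welding fixture"
    tool: str      # e.g. "CAD"
    process: str   # e.g. "embodiment design"
    person: str    # e.g. "novice" or "expert"

@dataclass
class InfoObject:
    name: str
    # Weighted relationships: strength (0..1) of the tie between this
    # object and particular values of each dimension; values are invented.
    weights: dict = field(default_factory=dict)

    def relevance(self, ctx: Context) -> float:
        """Average the weights that match the current context."""
        dims = ("domain", "product", "tool", "process", "person")
        return sum(self.weights.get((d, getattr(ctx, d)), 0.0) for d in dims) / len(dims)

ctx = Context("automotive", "welding fixture", "CAD", "embodiment design", "novice")
docs = [
    InfoObject("fixture design guideline",
               {("domain", "automotive"): 0.9, ("process", "embodiment design"): 0.8}),
    InfoObject("aircraft certification note", {("domain", "aviation"): 0.9}),
]
# Push instead of pull: offer only objects that clear a relevance threshold.
for doc in sorted(docs, key=lambda d: d.relevance(ctx), reverse=True):
    if doc.relevance(ctx) > 0.2:
        print(doc.name, round(doc.relevance(ctx), 2))
```

The threshold filter at the end anticipates the pull-to-push shift discussed below: instead of the engineer querying for documents, only the objects whose weighted relationship to the current working context clears a relevance bar are offered.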

Domain and product are related and overlapping dimensions, as the domain is related to product groups such as the automotive industry, the aviation industry, or plant engineering and construction. Products are rather single components and can belong to more than one domain. Attributes of the product dimension are material, production technique and quality requirements in general. Different tools have different user interfaces and address different working foci. Information, regardless of the information source, has to be integrated in the corresponding environment and is therefore another relevant dimension for context appropriate information management. Tool and process are again dimensions which interact. Project milestones, development maturity and the underlying process model reflect the process dimension. Person related attributes contain the knowledge and competence of the product development subject in general and, more specifically, experience with situations similar to the current task. Also widely varying between individuals are cognitive models and the resulting ways of learning and information reception. A striking example is the theoretical learner, who gathers information in terms of formulas and concepts, whereas the practical learner needs tangible examples to get the idea. These and many more individual preferences need to be met to comply with the 5th benchmark criterion, creative, individually adaptable and intuitive working environments.

Again, five points have to be taken into account when talking about context sensitive product lifecycle information:
1. Generation
2. Classification
3. Embedding
4. Provision (Visualization)
5. Controlling

To be able to properly prepare information for context appropriate usage, these five steps have to be systematically planned and executed. Information generation sets the foundation of information management and therefore has by far the broadest impact on later phases. In the moment of creation, a lot of meta information is available that has to be documented to simplify later information reception and embedding processes. Examples are the context of information creation regarding decision-making processes, possible addressees of the created information and dependencies on other information objects. All these information elements have to be laboriously recaptured if they are not systematically documented in the first place. Many parts of this information capturing can be processed automatically with today’s state of the art technology; some are rather to be determined by interaction with the information draftsman. Information classification has to be conducted for additionally generated information as well as for already existing information sets. Present information objects are instrumental in establishing classification structures, since they show the actual sources and drains of information in everyday business. This leads to two different ways of classification: ontologies and weighted relationships.


Whereas ontologies, i.e. semantic links, claim to be ubiquitous, weighted relationships are a representation of factual connections between information elements. Both ways of classification are necessary to provide information only in the correct context and not by indiscriminate all-round distribution.

Embedding requested information in specific product development contexts is the next step. For this, the working context has to be identified by the VPC tool automatically or with the cooperation of the product developer. Learning systems are able to capture contexts according to the aforementioned weighted relationship classification. This step is the core of context appropriate information management, since the problem of context sensitivity has to be tackled here, and it links closely to the following step. Both aim at the target of selected dissemination of information (SDI). Not every possible piece of information is appropriate in every context, but only the one needed in terms of project milestone, precognition, relevance and all other attributes of the five context dimensions. Eventually, this approach will lead from a pull to a push strategy: information will not be requested, but provided.

Provision is the next step after creating, classifying and embedding information in the appropriate context. Information from different sources has to be integrated into single work environments and the corresponding surrounding conditions. Appropriate visualization, adaptable to individual working environments and personal preferences, supports the fulfillment of the 2nd and 5th benchmark criteria: task and context oriented information and knowledge in creative, individually adaptable and intuitive working environments. Extended lifecycle views, requested in the 6th benchmark criterion, are part of this step, too.

Last but not least, and regularly not taken into account, is the need for controlling in information management. Besides information quality, there is also an issue concerning efficiency. As information management is no end in itself, there is always a relation between its effort and benefit or, more tangibly, between complexity and degree of assistance. Determining factors for this ratio are the number of cooperating parties, the depth of the classification structure and the frequency of use of information structures. As there will always be the necessity for somebody to clean up information libraries, the resulting amount of administrative tasks largely influences this ratio, too. Even if these fundamental coherences have been identified, information controlling in product development still lags far behind other techniques in product development and far behind controlling in other disciplines such as manufacturing, sales or logistics.

Some of the mentioned issues are already being investigated at the Chair of Industrial Information Technology of the Berlin Institute of Technology and at the Fraunhofer IPK in research projects, and first results have been achieved:

1. Ad-Hoc-Collaboration
Best possible collaboration environments, as requested in the 3rd benchmark criterion, are the object of research in the Ad-Hoc-Collaboration project. Particularly with regard to today’s outsourcing, reduced vertical integration and multinational design teams, distributed design activities affect product lifecycle quality, time and costs. A functional prototype for collaborative engineering and virtual design reviews has already been implemented. The project is almost finished, but a renewal proposal to conduct further detailed research activities, based on the preliminary achievements, is already in preparation.

2. ProGRID
Spreading and dressing up information for every person acting at any step of the product lifecycle becomes more and more a question of computing and communication technology performance. Especially mathematical and visual simulations are crucial to transform stodgy information into tangible and immersive experiences as well as to predict the behavior of virtual prototypes. The ProGRID project researches the utilization of high-capacity grid computing for virtual engineering purposes.

3. MIKADO
Handling complex, multidisciplinary contexts in product development is particularly tricky. Therefore the joint project MIKADO has been started to establish a coherent and integrated systems engineering basis for the development of mechanical, electric and control components as well as software. Cross-company information and cooperation models are being developed and implemented in tools to support requirements engineering and the predictability of total system behavior. Main functionalities are virtual validation, testing capabilities and diagnosis procedures.

3.3 Functional product modeling and simulation
The availability of appropriate product modeling and simulation technologies and methods can be regarded as one of the decisive factors in supporting competitive engineering design performance. It is obvious that technical advances in these areas particularly contribute to the avoidance of physical prototypes by replacing them with digital counterparts. Although remarkable improvements have been achieved, there are a number of aspects which are subject to further development [15].

By now, 3D CAD modeling technologies have achieved a very high application depth and maturity. Existing tools have evolved into versatile but also complex systems. Provided modeling methods such as parametric design or template technology allow rapid changes of parts/assemblies, fast generation of variants and capturing of design knowledge (bmc 7). Created 3D CAD models serve as the common basis for all engineering processes such as CAE, as the blueprint for CNC manufacturing, and as the Digital Mock-Up (DMU). In analogy to the Physical Mock-Up (PMU), the DMU provides a computer-internal product representation which is mainly targeted at avoiding mistakes and identifying problems of a design [16]. It is noticeable that current tools such as CATIA® DMU-Navigator or Teamcenter® Visualization Mockup predominantly address geometric and spatial validation tasks such as clash detection, interference checking, evaluation of space requirements, computation of physical properties, or measurement of distances. The necessary process chains for the modeling and generation of the used lightweight geometry models are aligned accordingly and recognized as robust and highly automated. With respect to criteria 2 and 3, existing integration into PLM environments assures access to up-to-date models and application dependent views.

Integrated methods for the validation of the dynamic aspects of a product are only partially tackled yet. Kinematic or ergonomic simulations have been established, for example, but mostly the inspection is covered by specialized, application dependent simulation tools. The combination of these tools with other domain-specific simulation or design tools represents today’s implementation of a functional Digital Mock-Up, also called a Functional Mock-Up (FMU). Via this incorporation, mechatronic interactions and a more realistic behavior of virtual products and prototypes can be simulated.
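The function oriented approach argued for below requires linking requirements, functions and geometry in one information model. The following sketch is entirely illustrative - the schema, the component and the simulation stub are invented, not prescribed by the paper - and only shows the kind of relationship chain involved.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Requirement:
    text: str
    check: Callable[[float], bool]    # acceptance test on a simulated value

@dataclass
class Function:
    name: str
    requirements: list = field(default_factory=list)
    simulate: Callable[[], float] = lambda: 0.0   # behavior-model stub

@dataclass
class Component:
    name: str
    geometry_ref: str                 # link into the CAD/DMU model
    functions: list = field(default_factory=list)

def verify(component: Component) -> dict:
    """Trace each requirement through function -> simulation -> verdict."""
    report = {}
    for fn in component.functions:
        value = fn.simulate()
        for req in fn.requirements:
            report[(fn.name, req.text)] = req.check(value)
    return report

# Invented example: a fixture clamp whose function must hold at least 500 N.
clamp = Component(
    "clamp", geometry_ref="cad://fixture/clamp_v3",   # hypothetical reference
    functions=[Function(
        "hold workpiece",
        requirements=[Requirement("holding force >= 500 N", lambda f: f >= 500)],
        simulate=lambda: 620.0,       # stand-in for a coupled simulation run
    )],
)
print(verify(clamp))  # {('hold workpiece', 'holding force >= 500 N'): True}
```

With such links in place, determining the fulfillment of requirements by product functions becomes a traversal of the model rather than a manual search across disconnected CAD and simulation files, which is precisely one of the drawbacks named in the list below.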

6

products and prototypes can be simulated. Some applications such as LMS Virtual.Lab, Simulia® or MSC SimManagerTM already offer full product simulation packages. However, the analysis of product functions is still expensive and it partially lacks integration, which causes several drawbacks such as: 

high effort for model preparation due to manual collection of information,



delay in the availability of simulation results,



high effort in the management of simulation data (models, parameters, results),



multiple generation of product information in different systems and at different levels of detail/abstraction,



no rapid investigation of product functions,

acceleration of product development processes and to the improvement of decision making. The goal in using VR is to provide an intuitive and natural work environment for digital prototypes similar to the human interaction with real prototypes. This would enable even responsible management to access to a functional experience within digital supported decision making. Meanwhile, the VRTechnologies have reached a remarkable level of industrial application. Examples of currently applied VR tools are IDO (ICIDO) or DeltaView (Realtime Technologies). The next development steps are the extension of real time capabilities of computational algorithms as they have an important influence to the interaction between user and digital prototype. Thus, the focus of development is real time methods for interactive dynamic simulation and physically correct deformation simulation. Additionally, haptic interaction methods have to be improved and supplemented by real time collision detection or generation of contact forces for large assemblies. Several of the above described challenges are already covered by running research projects which are conducted by the chair of Industrial Information Technology of the Berlin Institute of Technology and the division Virtual Product Creation at the Fraunhofer IPK. For instance in the joint research project “AVILUSplus” the topics of PDM/CAx-VR-Integration for functional validation, real time physical simulation of flexible parts, and tangible interaction in Virtual Environments are addressed.



cumbersome determination of the fulfillment of requirements by product functions To overcome these limitations and also to cope requirements caused by the strong demand to validate mechatronical products, new comprehensive and integrative approaches are required. Also the focus will be shifted to a cross-domain design, modeling and simulation of the whole system, whereas a holistic optimization of the component interactions will come to the force. A continuously function oriented approach for engineering design promises to eliminate mentioned drawbacks, but requires to create relationships between requirements, functions and geometry as well as physical properties or even better to aggregate them into one information model. Consequently, new methods for the definition and modeling of functional assemblies need to be provided. Yet disjunctive methods for geometric and abstract modeling have to be joined and aligned. First research directed to a system oriented modeling is actually undertaken. To fully archive this goal a centralized and seamless data management has to be established not only for geometric information, but also for part properties, simulation models as well as results etc.. Furthermore, the process chain for simulation-model creation needs to be configured, automated and integrated into the product lifecycle management. With these enhancements implemented the essential foundation for coupled cross domain simulation is laid and thus a full behavior model of the digital product can be derived. This represents the first important step to a real fulfillment of the criterion 1 and also criterion 7 for an improved knowledge capturing. Furthermore Functional Mock-Ups also have to support verification methods related to the benchmark criterion of delivering extended lifecycle views. For example the disassembly simulation of a product needs to regard the fact that components properties change during its life. Recent research work has been conducted with the objective to provide methods of the simulation of product use and the consideration of its influence on form and function as well as their impact on the disassembly process [17]. With respect to support robust and transparent decision making as well as an intuitive working environment new Human-Machine-Interfaces (HMI) are advised for the realization of an intuitive interaction with digital prototypes. Virtual Reality (VR) can support new ways of interaction with digital prototypes, not only by integrating simulation methods, but also with the help of new HMI (Human Machine Interface) techniques. Both can contribute to the
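Returning to the coupled cross-domain simulation described above: such a coupling can be pictured as a master algorithm that advances several domain models in lockstep and exchanges coupling variables at fixed communication points. The following sketch is purely illustrative — the class names and the two toy models are assumptions, not taken from the paper or from any of the tools mentioned above:

```python
# Minimal fixed-step co-simulation master: a mechanical model and a
# controller exchange coupling variables every step. All names and
# models here are illustrative assumptions.

class Mechanical:
    """1-DOF mass on a spring/damper, driven by a control force."""
    def __init__(self, m=1.0, k=40.0, c=2.0):
        self.m, self.k, self.c = m, k, c
        self.x, self.v = 0.1, 0.0          # initial displacement, velocity

    def step(self, force, dt):
        # explicit Euler integration of m*a = -k*x - c*v + force
        a = (-self.k * self.x - self.c * self.v + force) / self.m
        self.v += a * dt
        self.x += self.v * dt
        return self.x                       # coupling output: position

class Controller:
    """Simple proportional-derivative controller (the 'control domain')."""
    def __init__(self, kp=20.0, kd=5.0):
        self.kp, self.kd, self.prev_x = kp, kd, 0.0

    def step(self, x, dt):
        dx = (x - self.prev_x) / dt
        self.prev_x = x
        return -self.kp * x - self.kd * dx  # coupling output: force

def cosimulate(t_end=2.0, dt=0.001):
    mech, ctrl = Mechanical(), Controller()
    force, t = 0.0, 0.0
    while t < t_end:
        x = mech.step(force, dt)            # advance mechanical domain
        force = ctrl.step(x, dt)            # advance control domain
        t += dt
    return mech.x                           # residual displacement

if __name__ == "__main__":
    print(f"final displacement: {cosimulate():.5f}")
```

In a real FMU framework, the per-domain `step` functions would be provided by the respective simulation tools, and the master would additionally manage model parameters, result data and versions — precisely the data-management burden identified in the drawbacks listed above.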

4 CONCLUSIONS AND PROSPECTS

The chart in Figure 1 gives a final overview of the relations of VPC technologies and solutions to the benchmark criteria concerning engineering design competition, according to the current research assessments. It shows a qualitative estimation of how the current state of VPC technology deployment meets the benchmarking criteria. Looking ahead, it also shows the potential for future research and development activities with a perspective of the next five years. With respect to development methodology and process simulation, the following characteristics exist: for the 2nd and 7th benchmarking criteria, only implicit but no explicit support yet exists. This means that executing a process always implies a context for information relevance and competence orientation; additional potential exists in doing this more actively and explicitly. High potential has been identified for the 3rd, 4th and 8th criteria. In particular, an active modeling of product development processes and the application of these models for process optimization purposes, for instance by process simulation, will offer great benefits. Concerning the research field of information management, high potential is evident for the 2nd, 3rd, 6th and 7th criteria. Information management is not only a contributor in these cases, but an active enabler of further developments and innovations. Significant potential in the area of functional product modeling and simulation exists in relation to criteria 1, 5 and 6. Although the level of maturity of the single applicable technologies is already high, the further development of Functional Mock-Up frameworks, the validation of lifecycle aspects, and Virtual Reality enabling new HMI will tap the full potential.


Figure 1: Current state and potentials of VPC technologies with respect to benchmark criteria (bmc)

5 ACKNOWLEDGMENTS

The authors are grateful to the German Research Foundation (DFG) for funding the research projects "Kompetenzabhängige Personal- und Prozessplanung für die Produktentwicklung" and "Kooperative Produktentwicklung als ad hoc-Prozess". We also thank the Federal Ministry of Education and Research for funding the research projects MIKADO, ProGRID and AVILUSplus.

6 REFERENCES
[1] Verein Deutscher Ingenieure, 1993, VDI guideline 2221 – Methodical development and design of technical products, Beuth Verlag.
[2] Pahl, G., Beitz, W., Feldhusen, J., Grote, K. H., 2007, Engineering Design – A Systematic Approach, Third Edition, Springer-Verlag, London.
[3] Verein Deutscher Ingenieure, 2004, VDI guideline 2206 – Design methodology for mechatronic systems.
[4] Tukker, A., 2004, Eight Types of Product-Service Systems, DOI: 10.1002/bse.414.
[5] Krause, F.-L.; Heimann, R.; Kind, C.: An Approach towards a Design Process Language. Proceedings of the 2001 International CIRP Design Seminar, Stockholm, Sweden, 6-8 June 2001, pp. 7-12.
[6] Krause, F.-L.; Kind, C.; Voigtsberger, J.: Adaptive Modelling and Simulation of Product Development Processes. In: Annals of the CIRP 53/1 (2004), Krakow, Poland.
[7] Krause, F.-L.; Raupach, C.; Kimura, F.; Suzuki, H., 1997, Development of Strategies for Improving Product Development Performance, Annals of the CIRP, 46/2:691-692.
[8] Gärtner, H.: Optimierte Zulieferintegration in der Produktentwicklung durch Ad-Hoc-Kooperationswerkzeuge, IRB Verlag, 2008, Stuttgart, Germany.
[9] Langenberg, D.: Collaborative Virtual Engineering for SMEs: Technical Architecture, in: Proceedings of the 14th International Conference on Concurrent Enterprising, 2008, Nottingham, UK.
[10] Strebel, M.: Kompetenzabhängiges Simulationsverfahren zur Optimierung von Produktentwicklungsprozessen. Fraunhofer IRB, 2008, Stuttgart, Germany.
[11] Stöckert, H.; Debitz, U.; Kind, C.; Hacker, W.: Kompetenzentwicklung in der Produktentwicklung, in: Zeitschrift für wirtschaftlichen Fabrikbetrieb, 11, 2008, Hanser Verlag, München, Germany.
[12] Shen, X.; Tan, B.; Zhai, C.: Context-sensitive information retrieval using implicit feedback, in: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2005, ACM, New York, NY, USA.
[13] Clymer, J.: Expansionist/context-sensitive methodology: engineering of complex adaptive systems, April 1997, Volume 33, Number 2, pp. 686-695.
[14] Song, D.: Towards Context-sensitive Information Retrieval Based on Quantum Theory: With Applications to Cross-media Search and Structured Document Access, Engineering and Physical Sciences Research Council, http://gow.epsrc.ac.uk/ViewGrant.aspx?GrantRef=EP/F014708/1, retrieved 2008-11-20.
[15] Krause, F.-L.; Franke, H.-J.; Gausemeier, J.: Innovationspotenziale in der Produktentwicklung. Carl Hanser Verlag, München, Wien, 2007.
[16] Krause, F.-L.; Rothenburg, U.: Advanced Product Validation using Functional DMU. In: Bley, H.; Jansen, H.; Krause, F.-L.; Shpitalni, M. (Eds.): Advances in Methods and Systems for the Development of Products and Processes, 2nd German-Israeli Symposium on Design and Manufacture, Berlin, 7-8 July 2005. Fraunhofer IRB Verlag, Stuttgart, 2005, pp. 73-80.
[17] Krause, F.-L.; Romahn, A.; Rothenburg, U.: Simulation Tools for Disassembly Design. In: Seliger, G. (Ed.): Sustainability in Manufacturing – Recovery of Resources in Product and Material Cycles. Springer, Berlin – Heidelberg – New York, 2007, pp. 170-182.


Invited Paper

Web-based Collaborative Working Environment and Sustainable Furniture Design

D. Su and J. Casamayor
Advanced Design and Manufacturing Engineering Centre, School of Architecture, Design and the Built Environment, Nottingham Trent University, UK
{daizhong.su; jose.casamayor}@ntu.ac.uk

Abstract
To meet the demand for online collaborative design, a Web-based Collaborative Working Environment (CWE) has been developed, and an approach to enhance sustainable furniture design by utilizing the CWE is proposed. In this paper, the CWE framework, which consists of upperware, middleware and resource layers, is briefly presented; then three key aspects of sustainable furniture design utilizing the CWE are presented, including material and manufacturing process selection, design for disassembly and damaged furniture returns, followed by an example of damaged furniture return using the CWE to further illustrate the approach.

Keywords: Collaborative Working Environment, Web/Internet technology, Sustainable Design, Furniture Design

1 INTRODUCTION

Successful product design through the total design process usually requires team work, and the team members are often geographically dispersed, which requires collaboration amongst different sites. Such a situation places a great demand on Web/Internet software tools and techniques to effectively support the representation, collection and exchange of product information during the design process [1].

Along with the development of Web/Internet technologies, Web-based collaborative design was initially based on the first generation of Web technology, such as CGI, Servlets, etc. Monplaisir [2] described an approach applying Computer Supported Collaborative Work (CSCW) technology to the product design and development process. Within this approach, virtual integrated product design is viewed as a distributed and iterative process that involves significant sharing of information early in the design process. Such systems have integrated applications and databases distributed at different locations. They are primarily closed systems, i.e. only selected users are allowed access.

In order to obtain the full benefits of object-to-object interactions, collaborative design then integrated second-generation Web/Internet technology, such as the Object-based Modelling Environment (OBME), Remote Method Invocation (RMI) and the Common Object Request Broker Architecture (CORBA). In OBME, an application at one location provides live services that can be used by other users at different locations through Internet/Web tools; end users can also develop new applications using the existing services provided [3]. In addition, several commercial Internet/Web infrastructure development tools, such as WebSphere by IBM and WebLogic by BEA, have been introduced to reduce the effort of implementing service-based Web applications. Rosenman and Wang [4] presented an open-system architecture for a collaborative CAD system supporting virtual product development. The system contains reusable software packages or applications designed as component agents. All the standard applications have two main parts, the implementation and the data, and the interaction between two applications is achieved through the database. In that system, the ORB, COM and EJB, which allow the components to work together, are called 'middleware'. Kan [5] described a Web-based Virtual Reality Collaborative Environment (VRCE) developed using VNet, Java and the Virtual Reality Modelling Language. The system provides a portable, low-hardware-demand and customizable environment with a comprehensive set of functions that facilitate virtual collaboration for product design. VRCE uses a client-server architecture, so that clients with low computing power can connect to a more powerful server to which computation-intensive jobs can be delegated [6].

With the development of Web/Internet technologies such as Grid, Web services, wireless computing, peer-to-peer and mobile agents, the combination of multiple Web/Internet techniques to form a more powerful collaborative working environment (CWE) has been emerging, known as the third generation of Web technology. As an emerging Web technique, it has been attracting researchers' attention due to its great advantages for application in collaborative product design and manufacture. For example, Wang and Zhang [7] proposed an integrated collaborative approach for complex product development in a distributed heterogeneous environment. Xiong et al. [8] developed a service-oriented approach for software sharing consisting of three components: an Application Proxy Service (APS), which provides a surrogate for a running shared application process to control access to it, an application proxy factory service, and an application manager service. In an Internet environment, multiple users can access a Web service with any client at the same time. The Advanced Design and Manufacturing Engineering Centre at Nottingham Trent University has been actively



involved in the application of the CWE to collaborative design and manufacture, for example the project of a Web-enabled environment for intelligent manufacture supported by the EU-Asia IT&C programme [19], the development of a mobile collaborative environment for product design [20], and a CWE for furniture design [11]. The research reported in the following sections results from these efforts to apply the CWE to sustainable design, with particular concern for furniture design.

2 FRAMEWORK OF WEB-BASED CWE

As shown in Figure 1, the CWE consists of three layers: upperware, middleware and resources. It connects with the applications via the upperware.

Upperware: The upperware layer interacts with the applications to provide specific services for collaboration enabled by the middleware and the tools. It has the following functions:

• Coordination of the utilization of multiple middleware techniques including Grid, Web services, mobile agent and meta-data techniques.

• Process control of concurrence and consistency as well as synchronous and/or asynchronous messages. Process elements (security, services, monitoring, etc.) can be shared across applications to provide horizontal services; decoupling these reusable application components facilitates more rapid changes in these processes.

• A Group Task Coordinator, responsible for the task assignment of each requester, monitoring the action state of each requester and accepting the communication requests and services from each provider.

• Provision of an interface to the applications, establishing the connection between the applications and the middleware techniques specified in the underlying layer.

• Support for plug and play; for this, a knowledge-based module will be developed, and investigation will be carried out to help integrate dynamic plug and play for better system performance.

Underlying middleware: The underlying middleware is a class of software technologies that manage the complexity and heterogeneity inherent in distributed systems. It is defined as a layer of software above the operating system but below the application program that provides a common programming abstraction across a distributed system, connects parts of a distributed application with data pipes and passes data between them. It has two parts. The first part mainly coordinates the four main enabling middleware techniques: grid, Web services, mobile agents and meta-data technologies. Each of the middleware techniques has different advantages, and the combination of multiple middleware techniques enhances the functions of the system; the decision of when to use which technique is made in the upperware layer. The second part provides resource management in the system. It provides functionalities to publish, search, locate and wrap the resources, and to control the transmission of all feature model resources in the system.
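As a concrete illustration of the publish/search/locate/wrap cycle described above, the following sketch models the middleware's resource-management part in Python. The class and method names are hypothetical — the paper does not disclose the actual implementation — and only the four default resource kinds named in the text are assumed:

```python
# Illustrative sketch of the CWE middleware resource registry.
# Names and structure are assumptions, not the authors' implementation.

from dataclasses import dataclass, field

RESOURCE_KINDS = {"grid", "web_service", "mobile_agent", "meta_data"}

@dataclass
class WrappedResource:
    """A physical resource wrapped so the CWE can recognize it."""
    name: str
    kind: str                       # one of RESOURCE_KINDS
    endpoint: str                   # where the resource can be reached
    tags: set = field(default_factory=set)

class ResourceRegistry:
    """Publish, search and locate wrapped resources."""
    def __init__(self):
        self._resources = {}

    def wrap_and_publish(self, name, kind, endpoint, tags=()):
        if kind not in RESOURCE_KINDS:
            raise ValueError(f"unsupported resource kind: {kind}")
        self._resources[name] = WrappedResource(name, kind, endpoint, set(tags))

    def search(self, kind=None, tag=None):
        """Return resources matching an optional kind and/or tag."""
        return [r for r in self._resources.values()
                if (kind is None or r.kind == kind)
                and (tag is None or tag in r.tags)]

    def locate(self, name):
        return self._resources[name].endpoint

# Example: publishing a material-selection service and finding it again.
registry = ResourceRegistry()
registry.wrap_and_publish("material_db", "web_service",
                          "http://example.org/material", tags={"materials"})
print([r.name for r in registry.search(kind="web_service")])
```

In the actual CWE, the upperware would decide which of the four middleware techniques to invoke for a given request; the registry above only covers the resource-management half of the middleware layer.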

Figure 1: CWE framework (applications – furniture and packaging design, furniture returns, material selection, processes selection and design for disassembly – connected through the upperware, the middleware with its Grid, Web services, mobile agent and meta-data techniques, and the resource layer)

Resources layer: The physical resources are wrapped into specific technology-enabled ones so that they can be recognized by the collaborative working environment; four kinds of resources are supported by default: Grid, Web services, mobile agent resources and meta-data facilities. These resources exist independently in the system.

The CWE has been utilized in areas such as mechanical design [21], furniture design [11] and inventory management [12], but it has not yet been applied to sustainable furniture design. In this research, an important feature of the CWE, its plug-in function, has been utilized to provide the flexibility for external applications to join the CWE. As shown in Figure 1, the external applications currently considered include furniture and packaging design, furniture returns, material selection, production process, and assembly.

3 SUSTAINABLE FURNITURE DESIGN SUPPORTED BY WEB-BASED CWE

The Web-based CWE, as explained previously for other areas, can facilitate the online exchange of heterogeneous data among geographically distributed collaborators within the total furniture design process. This platform can also be used to assist the design and development of sustainable furniture and packaging, thus leading to sustainable practices within furniture companies. This can be done by enhancing the exchange of data between collaborators and software/databases through Web-based platforms. Thus, the Web-based CWE can assist with three critical areas within the design of sustainable furniture, namely:

• Selection of materials and manufacturing processes
• Design for disassembly
• Furniture returns and damages

3.1 Selection of materials and manufacturing processes

The need for more sustainable design practices in furniture requires easier access to more data in order to make better informed design decisions in relation to sustainable practices. There is a need for tools that can assist the design process by taking into account the selection of materials and manufacturing processes within the furniture industry with environmental issues in mind. Although there is already software on the market (SimaPro, ECO-it, etc.) that can estimate the impact a product will have before it has been produced, these are stand-alone tools that have not yet been integrated within a Web-based CWE, which restricts the scope of the data they produce and its accessibility for the varied and geographically distributed people involved in the design and development processes.

3.2 Design for Disassembly

As indicated by its wording, Design for Disassembly makes the furniture easy to disassemble, which can also reduce the impact of furniture on the environment. Guidelines about design processes that help to reduce the total number of furniture pieces and how these are assembled are important in order to allow shorter and easier disassembly processes, which may lead to higher rates of recyclable and remanufactured parts.

3.3 Furniture returns and damages

Better and more effective management of information related to furniture damage during deliveries and handling can reduce or eliminate furniture returns within distribution processes. The CWE, as will be seen in the following section, can assist in reducing the number of furniture returns by providing a platform that supports designers in making more informed design decisions, thus improving the furniture design structure, packaging and delivery styles. This can be done through systematic and effective recording of data that can then be exchanged through platforms such as the CWE.

4 FURNITURE RETURNS

4.1 Furniture returns overview

Furniture returns remain an important issue within the furniture industry; return rates are considered to be 5-15% in the furniture industry, caused during transit, delivery and storage [13]. Returns negatively affect the economy of the company and the environment, as they produce incremental costs due to additional processes (finishing, assembly, manufacture, and packaging) as well as the subsequent increase in energy expenditure. In addition, they produce customer dissatisfaction, thus decreasing sales. Current legislation related to packaging waste [14] — the Producer Responsibility Obligations (Packaging Waste) Regulations 1997 and Directive 94/62/EC on Packaging and Packaging Waste, as amended by Directive 2004/12/EC [15] — is already pressing companies towards a more sensible management of their waste packaging. This means that companies have to be more aware and implement more efficient management strategies in order to create more efficient and fit-for-purpose packaging, which will not only produce environmental benefits but can also increase their turnover by 4%, and by as much as 10% in some companies [13]. It has also been observed that there is a lack of feedback between companies and retailers/users for solving the problems related to inadequate packaging and product returns [16], which reinforces the need for a more efficient and sustainable system that improves the communication between these collaborators within the total furniture design process, so that furniture designers, packaging designers and company logistics can make better informed design decisions.

4.2 Damage found in furniture returns

Furniture returns occur because damage is caused to furniture during its transit, delivery and storage. Inappropriate handling, packaging (under-packaged or packaged in the wrong areas) and storage can lead to different types of damage. In a report carried out by FIET [17], seven types of damage were identified, namely: breakage, bruising, scratching, abrasion, soiling, discoloration and climatic degradation. These types of damage can be avoided through adequate packaging, furniture design structure and methods of handling, delivery and storage. Thus, all these design and development processes have to be informed about the quantity and quality of the damage found in furniture after distribution. The main distribution hazards found by FIET [17] during transit, delivery and storage were: shock, vibration, compression and climatic hazards. During the transit, distribution and storage of furniture, distribution hazards take place, leading to furniture damage (see Figure 2). In order to understand why damage occurs, distribution hazards have to be matched with furniture damage so that a cause-effect relation can be identified. This is not possible with current practices in the furniture industry, where information about damage is not recorded in a database that could be used in real time by the other departments involved in the design and development of the furniture, packaging and distribution methods. There is a lack of a database platform to record and share this information between in-house and external departments/collaborators.

4.3 Models to reduce furniture returns

Traditional furniture return processes are based on feedback of general information related to the damaged furniture, without any precise description and record of the type of damage in a database. As a result, the company and the departments involved in the design and development of the furniture, packaging and distribution methods cannot make informed design decisions, and the distribution hazards, and therefore the damage, will recur in future deliveries. As can be seen in Figure 3, when a piece of furniture is damaged, the retailer or user informs the company in order to receive a brand new piece. This information is obtained by phone, fax or e-mail, and is usually very general, making it difficult to analyze the problems in order to find a solution. When this information is not recorded in a systematic manner in a database, it is lost and might not be transferred to the different departments involved in the design of furniture, packaging and distribution methods. The introduction of the Web-based CWE for the management of data related to the types of damage suffered during the distribution processes can provide a platform where information can be systematically recorded and exchanged among distributed locations, whether in-house departments or external collaborators (Figure 4). The furniture return process supported by the Web-based CWE is based on the same structure used in the platform for the total furniture design process [18], but this model is focused on the specific application of managing data related to the furniture return process between the different collaborators of the total furniture design process.

Figure 2: Factors that influence furniture damage.

Figure 3: Traditional furniture return model.

Figure 4: Furniture return model supported by Web-based CWE.

4.4 Advantages of the integration of Web-based CWE for furniture returns

The integration of this model has advantages in comparison with the traditional models used in furniture companies. The main advantages can be seen in the chart (Figure 5).

Figure 5: Traditional model vs. Web-based CWE furniture return model.

Traditional models do not usually record the damage in a database, and when this is done, the information is very general and cannot be easily accessed by other internal or external departments/collaborators in real time. The integration of the Web-based CWE in furniture returns allows more detail about the type of damage found in furniture, and the recording of this data in a platform that can be easily accessed by other internal or geographically distributed team members. In addition, this data can be used to monitor the problems and improvements made in the distribution process, as well as to support better informed design decisions. The use of this model can lead to the reduction or elimination of furniture returns, whereas in the traditional model design decisions are less informed or not informed at all.

4.5 Example of furniture return using the CWE

In order to show how a furniture return process takes place, what the distribution hazards and steps during transit and delivery are, and how these are dealt with in traditional furniture return processes in comparison with furniture return assisted by the CWE, an example is used of a piece of furniture (a chair) manufactured and finished by outsourced companies, and assembled and packaged with in-house facilities. The model of the chair is basically made of a lacquered metal frame and a seat-back piece framed in wood and upholstered.

In a traditional distribution process (Figure 6), the distribution hazards begin from the moment the chair has been assembled and is stored for packaging, storage and delivery. Once the parts of the chair are made and finished, they are sent to the company to be assembled. Assembled chairs then have to be handled (transit) for storage until they are packaged. During this transit period, the handling of the product can cause damage to the chair, i.e. scratching on the lacquered metal parts or on the fabric of the seat-back piece. While the parts are stored waiting for assembly and packaging, exposure to direct strong sunlight or high humidity can cause damage to the chair such as discoloration of fabrics as well as tensions in the wooden frame joints, caused by the movement of the internal wood fibers. During the packaging of the chair, careless handling can dirty the fabric of the upholstered parts, as can the use of adhesive tape on lacquered areas, which after long periods of time and heat can damage the finish when the chair is unpackaged. In addition, inadequate stacking of chairs and packaging in order to use the space inside boxes more efficiently can lead to scratches and bent legs when these are not properly protected and correctly positioned. The handling of the boxes and their placement in the truck can also lead to damage to the edges of legs and to upholstered parts due to impacts or weight pressure and movement when they are stacked. Long periods of time inside compartments under high temperatures can damage lacquered parts, especially if they are in contact with packaging materials that can adhere to them. This can also affect the structural parts by weakening the mechanical properties of the glue used for wood joints, as well as causing movements of the wood in joints, which can have aesthetic consequences for wood that has not been designed with these movements in mind.

Once the consumer (retailer/final user) receives the packaged chair and realizes it has suffered damage, a report with a general description of the damage is sent to the company by phone or fax. It is here that the information, although it can be filed (fax), usually gets lost or is not entered systematically into an accessible database where different teams and internal/external collaborators could access it in order to improve and solve the damage caused by distribution hazards. If this is not done, the order goes into production without this information being recorded, making the subsequent analysis of the quantity and quality of problems found in the furniture difficult. Thus, new pieces will be produced with the same inadequate chair structural design and packaging design, and distributed using the same distribution methods. Obviously, this will lead to more chair returns, so no improvement will have been made. On the contrary, if an inspection and record of the quantity and quality of damage (Figure 5) through a simple Web-based application (Figure 6) is carried out when the chair is received, then this data can be exchanged through the Web-based CWE and accessed in real time by other internal/external collaborators working on design and development, and can be used to inform future design decisions, leading to the elimination of damage in future distribution and thus reducing or eliminating furniture returns. Although the introduction of this quality system might take more time at the beginning, as data has to be recorded, it would save the company time and money as well as reduce the impact on the environment in the long term.

Following the example, the damage found in the chair (breakage, scratching and bent legs) is recorded in the furniture return application for the Web-based CWE (Figure 7). The main types of damage found in furniture distribution can be selected after inspection of the furniture, as well as the intensity of each type of damage (on an intensity scale from 0 to 6). In addition, the 3D model can be rotated in order to visually select the specific area that has been damaged according to the different types of damage found. There is also a space for other comments related to damage that does not fit the main damage types, as well as the possibility to upload photos of the damage so that collaborators can better judge what design decisions should be taken for future products, packaging or distribution styles in future deliveries.
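The damage record behind such an application can be modelled very simply. The sketch below is an assumed illustration of the data captured per return — the seven FIET damage types and the 0-6 intensity scale come from the text, while the class names, fields and example identifiers are hypothetical:

```python
# Illustrative data model for a CWE furniture-return damage report.
# Damage types and the 0-6 intensity scale follow the text; everything
# else (names, fields, example IDs) is an assumption for illustration.

from dataclasses import dataclass, field
from typing import List

DAMAGE_TYPES = {"breakage", "bruising", "scratching", "abrasion",
                "soiling", "discoloration", "climatic degradation"}

@dataclass
class DamageEntry:
    damage_type: str     # one of DAMAGE_TYPES
    intensity: int       # 0 (none) .. 6 (severe)
    area: str            # region selected on the rotatable 3D model
    comments: str = ""
    photos: List[str] = field(default_factory=list)   # uploaded photo files

    def __post_init__(self):
        if self.damage_type not in DAMAGE_TYPES:
            raise ValueError(f"unknown damage type: {self.damage_type}")
        if not 0 <= self.intensity <= 6:
            raise ValueError("intensity must be on the 0-6 scale")

@dataclass
class ReturnReport:
    product_id: str
    reporter: str        # retailer or final user
    damages: List[DamageEntry] = field(default_factory=list)

# Example: the damaged chair from the text (product ID invented).
report = ReturnReport("chair-M42", "retailer")
report.damages.append(DamageEntry("breakage", 4, "wooden frame edge"))
report.damages.append(DamageEntry("scratching", 2, "seat side"))
print(len(report.damages), "damage entries recorded")
```

Because each entry is structured rather than free-form, the reports can be aggregated over time — exactly the historical monitoring of damage that the following section describes.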

Figure 6: Furniture return process without/with the support of CWE.

Figure 7: Screen capture of the Web-based CWE furniture return application.


Figure 8: Exchange of data between collaborators using Web-based CWE for furniture returns.

This information can also be used as a historical record of damage, for monitoring improvements and for analyzing inadequate past design decisions. Like a quality system, it enables the development of more efficient designs and methods. The recorded data can then be accessed in real time by all the collaborators in the design and development of furniture and packaging. Thus, breakage at the edges of the wooden frame of the upholstered part could be addressed by reinforcing the packaging cushioning in those areas or by rounding the design of the chair in order to avoid edges. Bent legs can be produced by compression when boxes are stacked in the truck, so more rigid frame boxes, more rigid leg sections or a different way of stacking the boxes (less weight on top or avoiding stacking to great heights) can bring improvements. Scratching on the sides of the seat can be addressed by different stacking positions or by increasing the cushioning in those areas. Again, the design could also be modified if the other solutions result in expensive changes. It is here that different design alternatives are 'weighted' depending on price, materials, processes, etc., and where easy access to this data by external (possibly geographically distant) collaborators (suppliers, engineers, marketing, etc.) can help to make more informed decisions backed by their expertise in their areas. This information can not only be transferred to these collaborators in the form of specifications, but can also be accessed by them individually in the original form in which it was obtained (Figure 8).

5 CONCLUDING REMARKS AND DISCUSSION

The research reported in this paper applies the CWE to sustainable design with particular concern for furniture design. The CWE framework is briefly described first; then three key aspects of sustainable furniture design utilizing the CWE are presented, including material and manufacturing process selection, design for disassembly and damaged furniture returns, followed by an example of damaged furniture return using the CWE to illustrate the proposed approach. As a new generation of Web/Internet techniques, the CWE has been utilized in areas such as mechanical design, furniture design and inventory management, but it had not yet been applied to sustainable furniture design; therefore, the research reported in this paper is a novel contribution.

The Web-based CWE can facilitate the online exchange of heterogeneous data among geographically distributed collaborators within the total furniture design process and, hence, can assist the design and development of sustainable furniture and packaging, thus leading to sustainable practices within furniture companies. The research reported in this paper currently concentrates on the three aspects mentioned above; however, more aspects will be addressed in future work. In this research, an important feature of the CWE, its plug-in function, has been utilized to provide the flexibility for external applications to join the CWE. The external applications currently considered include furniture and packaging design, furniture returns, material selection, production process, and assembly.

Damaged furniture return is used as a vehicle to demonstrate the CWE approach for sustainable total furniture design. It may be argued that in some cases it costs more to return damaged furniture to the manufacturer in comparison with other means; however, this practice is used by some companies, particularly for expensive classic furniture products. It is also an important way to protect the environment by reducing waste, and, hence, the CWE for sustainable furniture design is a useful sustainable technique. The research is at an early stage, and a simple product, a chair, is used in the example for illustration purposes only. More complex case studies will be developed at the next stage of the research in order to explore more features and advantages of the proposed approach.

REFERENCES
[1] R. Sudarsan, S. J. Fenves, R. D. Sriram and F. Wang, "A product information modeling framework for product lifecycle management", Computer-Aided Design, November 2005, 37(13), pp. 1399-1416.
[2] L. Monplaisir, "An integrated CSCW architecture for integrated product/process design and development", Robotics and Computer-Integrated Manufacturing, 15 (1999), pp. 145-153.
[3] A. S., W. D., S. N. and S. P., "Integrated design in a service market place", Computer-Aided Design, 2000, 32(2), pp. 97-107.
[4] M. Rosenman and F. Wang, "A component agent based open CAD system for collaborative design", Automation in Construction, 10 (2001), pp. 383-397.
[5] H. Y. Kan, V. G. Duffy and C. J. Su, "An Internet virtual reality collaborative environment for effective product design", Computers in Industry, 45 (2001), pp. 197-213.
[6] E. R. Harold, Java Network Programming, 1st Edition, O'Reilly, Cambridge, 1997.
[7] H. W. Wang and H. M. Zhang, "An integrated collaborative approach for complex product development in distributed heterogeneous environment", International Journal of Production Research, Taylor & Francis, 46(9), May 2008, pp. 2334-2344.

[8] Y. Xiong, J. Liu, P. Fitzgerald and D. Su, "Service Oriented Software Package Bank", Proceedings of the 9th International Conference on Computer Supported Cooperative Work in Design, 2005, pp. 661-667.
[9] W. Lee, "Deploying personalized mobile services in an agent-based environment", Expert Systems with Applications, 32 (2007), pp. 1194-1207.
[10] L. Peng and X. Long, "Design and Implementation of Embedded Mobile Agent", Computer Engineering and Design, 2006, 21.
[11] J. Feng, D. Conway, D. Su, J. Mottram and S. Rutherford, "Web based collaboration for furniture design: survey and the structure of a collaborative working environment", Proceedings of the 5th International Conference on Digital Enterprise Technology (DET2008), Nantes, France, 22-24 October 2008 (forthcoming).
[12] Y. Xiong, "Collaborative design and manufacture supported by multiple Web/Internet techniques", PhD thesis, Nottingham Trent University, 2008.
[13] Why reduce waste in the furniture industry?, Envirowise, 2001, www.envirowise.gov.uk.
[14] The Producer Responsibility Obligations (Packaging Waste) Regulations 1997 (as amended), The User Guide, 2nd edition, 2003, Department for Environment, Food and Rural Affairs (DEFRA), http://www.defra.gov.uk/ENVIRONMENT/waste/topics/packaging/pdf/userguide.pdf.
[15] European Parliament and Council Directive 94/62/EC on Packaging and Packaging Waste, EUR-Lex, http://eur-lex.europa.eu/LexUriServ/site/en/consleg/1994/L/01994L0062-20050405-en.pdf.
[16] Pack Guide, A Guide to Packaging Eco-Design, Industry Council for Packaging and the Environment (INCPEN), Envirowise, 2008, www.envirowise.gov.uk.
[17] Furniture Packaging Best Guide, Furniture Industry Environment Trust, 2001, Furniture Industry Research Association (FIRA), FIRA International Ltd.
[18] I. Feng, D. Su and J. Casamayor, "Web-based collaboration work environment for furniture design", International Workshop on Modern Science and Technology, 2008.
[19] D. Su, Keynote Speech, "Web-based Collaborative Working Environment and Its Applications in Collaborative Design and Manufacture", 13th International Conference on Machine Design and Production (UMTIK 2008), 3-5 September 2008, Istanbul, Turkey, Conference Programme & Abstract Book, pp. 7-8.
[20] D. Su and Y. Zheng, "Development of a Prototype Mobile Collaborative Environment for Product Design", in: Expanding the Knowledge Economy: Issues, Applications, Case Studies, P. Cunningham and M. Cunningham (Eds.), IOS Press, Amsterdam, 2007, ISBN 978-1-58603-801-4, pp. 749-756.
[21] D. Su and Y. Zheng, "Utilization of the Collaborative Working Environment for Online Computer Aided Mechanical Design", Proceedings of the 13th International Conference on Machine Design and Production (UMTIK 2008), 3-5 September 2008, Istanbul, Turkey, ISBN 978-975-429-271-8, pp. 25-42.

How to Answer the Challenges of Competencies Management in Collaborative Product Design?

B. Rose 1, V. Robin 2 and S. Sperandio 2
1 LGECO Laboratory, ULP/INSA Strasbourg, 24 boulevard de la Victoire, 67084 Strasbourg Cedex, France
2 IMS Laboratory – LAPS Department, UMR 5131 CNRS, University of Bordeaux, 351 cours de la Libération, 33405 Talence Cedex, France
[email protected], [email protected], [email protected]

Abstract
Collaboration is an essential factor in the performance of design activities. This collaboration occurs between actors with varied expertise, coming from various trades and thus building a real network around the design project. The follow-up and the capitalization of the information exchanged within this network must be managed effectively. With the aim of increasing the performance of the design activity, setting up competencies management tools within design teams is now necessary. However, competencies management within a collaborative design framework must answer various challenges. This article presents various proposals to answer them.

Keywords: Collaborative Design, Dynamic Competencies Management, Design Process Management

1 INTRODUCTION

Human resources directly influence the efficiency of relationships in companies and of decision-making in product design. They play a crucial role in this process [1] by evolving the object on which they are working through their successive choices. They also influence the resolution of problems by using their knowledge and expertise. In design, the evolution of the product and the resolution of problems are closely influenced by the management of human resources in the organization. Garel et al. [2] highlight the problem of adapting human resources policies: how can the policies and human resources management tools that were historically developed for functional organizations be adapted? The authors also point out that these policies are focused on formal knowledge and not on the capacity to diffuse and capitalize it. Thus, even if technical skill remains an element of choice in the assignment of actors, it is no longer the single parameter to be taken into account. Within the framework of collaborative design, in which human resources evolve, the creation of collective competencies supposes good interaction and good collaboration between the various actors of the workgroups. These competencies are certainly based on individual expertise and on competencies in management, but they also strongly call upon the actors' interpersonal skills ("knowledge-being") [3]. In parallel, the use of "collective knowledge", and in particular of popularization knowledge [4], seems necessary to mobilize and reveal each actor's knowledge, as well as the knowledge capitalized in the actors' networks. Popularization knowledge is the basis of "knowledge to collaborate" and "collective competencies".

In this paper, we describe the challenges and needs relating to the development of competencies management tools to support collaborative product design. We then present solutions to answer these challenges by proposing a "network" approach to consider the evolutions of the context in which the design activities proceed. We also present specific matrices to chart competencies. Finally, we propose prototype software solutions to answer these challenges.

2 CHALLENGES AND NEEDS FOR COMPETENCIES MANAGEMENT

2.1 Study of the functionalities of existing tools supporting competencies management

Even if many studies on Knowledge Management (KM) exist, the tools to support KM are still in development. Moreover, they often lock organizations into a static structure circumscribed by the system. They are not flexible and not adaptable to new work contexts or new organisational orientations. Lindgren classifies the research relating to the development of competencies management tools according to three currents [5]:

• The CSCW approach. Its goal is to facilitate the coordination of workgroups and the cooperation between the different actors. Effective collaboration in a stable and specific context is not considered.

• The Information System approach. Its aim is to offer users and researchers methodological guides and toolkits to implement a competencies management tool adapted to an organization [6].

• Contributions from Organisational Theory in Knowledge Management, which recommend methodologies to develop KM systems and competencies management systems for KM-driven organizations.

We locate our research in this last current, proposing software applications to support these contributions. These applications must be integrated into the Information System, alongside the other design tools. They must also consider an essential parameter of competencies management: its dynamic evolution.

2.2 A real need to consider the dynamic components of the organization

Competencies management tools have to provide information about the actors' personal expertise in an organisational context, but they also have to propose a more global vision of collective competencies. They have to promote the effective sharing of this knowledge and of collaborative competencies. This concept is significant in the context of design, in which the search for performance is permanent and innovation underlies each design activity. Such a context requires a perpetual dynamic evolution of the organization. Taking this evolution into account, in terms of adaptability to organisational changes, reinforces the concept of team and community in design projects. Our work proposes to provide a methodological guide and tools to consider the dynamics of the organization according to the four challenges stated by Stenmarck [7]:

• The challenge of competencies cartography: to index and make available the various existing competencies in a service or in a group of actors.

• The challenge of competencies evolution: to propose an updated cartography of these competencies, as well as the tools and methods able to anticipate these evolutions in the organization.

• The challenge of the collection of input data, implying that a competencies management system must be enriched by the individuals it indexes. In our case, the decision-makers must feel the interest of such a system in providing them with adequate and useful information.

• The challenge of data isolation, which concerns the provision of key information and is a preoccupation with confidentiality.

We base our proposals on the design environment concept [8] and on Robin's research on the evaluation of the performance of design systems [9]. In the following section, we analyze the contribution of the cartography of the existing networks within projects. Then we propose the use of competencies matrices within a framework that considers the organizational evolutions to support design activities.

3 A NETWORK OF DESIGNERS

3.1 A cartography of the networks to support design project management

Following the model of networks in the industrial marketing field by Håkansson et al. [10], Nowak et al. [11] proposed a design process model. They suggested that the design environment can be seen as a "network" gathering the elements "actor", "activities" and "resources". Such a model allows us to make a distinction between a network of actors, a network of activities carried out between these actors, and a network of resources used by these actors [12]. The "actor-activity-resources" vision of networks was first developed to explain the interactions between inter-connected networks in the context of transactions in industrial marketing. This model is adapted in the context of product design to chart and specify the various existing relationships within design teams [13]. The visualization of these links contributes to the evaluation of the intensity of the various relationships. Figure 1 illustrates the visualization of relationships from a real case study, concerning our industrial partner during the design process of a sheet stator of an electric motor. The study of this network is interesting since it allows us to identify the actors who collaborate within the group. It highlights the actors who are able to use their expertise to inform their colleagues and to develop the competencies of their interlocutors.

Figure 1: Cartography of the various networks (actor bonds, activity links, resource ties and shared knowledge) established between the Manufacturing Department, Electrical Calculations, Research & Development, the Mechanical Calculations Expert, the Subcontractor and the Customer when designing a sheet stator of an electric motor

This visualization partially answers the challenge of competencies cartography. It highlights in which field and how the actors collaborate within the current organization. It also brings a response to the challenge of competencies evolution within the organization, in particular through the study of the knowledge exchanged between the actors in the collaborative networks, which gives an indication of the evolution of each actor's competencies. A quantitative analysis of the various existing links between the actors makes it possible to generate performance indicators concerning various aspects of the collaboration. In the case of repetitive studies, the valuation of these performance indicators for specific situations could be used to support decision-makers in the choice of actors in similar situations [12], [14]. However, the study and the cartography of the various existing networks within the design team do not answer the last two challenges. For instance, the project manager has difficulties recovering pertinent information and establishing the reality of these networks. To go beyond the simple cartography of the network, we propose to complete our approach with a competencies cartography. This makes it possible to define the preliminary concepts needed to develop an effective KM tool to help design project managers.
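The quantitative analysis of links mentioned above can be prototyped in a few lines of code. The sketch below is an assumed illustration — the actor names come from the case study of Figure 1, but the link intensities and the indicator itself are invented for demonstration:

```python
# Illustrative sketch: deriving a simple collaboration indicator from an
# actor-activity-resource network. The link weights are invented; only
# the actor names come from the case study of Figure 1.

from collections import defaultdict

# (actor A, actor B, link type, intensity) - intensities are hypothetical
links = [
    ("R&D", "Electrical Calculations", "activity", 5),
    ("R&D", "Mechanical Calculations Expert", "shared_knowledge", 3),
    ("R&D", "Manufacturing Department", "resource", 2),
    ("Manufacturing Department", "Subcontractor", "activity", 4),
    ("R&D", "Customer", "activity", 1),
]

def collaboration_degree(links):
    """Sum link intensities per actor: a crude indicator of how central
    an actor is in the collaborative network."""
    degree = defaultdict(int)
    for a, b, _type, intensity in links:
        degree[a] += intensity
        degree[b] += intensity
    return dict(degree)

for actor, score in sorted(collaboration_degree(links).items(),
                           key=lambda kv: -kv[1]):
    print(f"{actor:35s} {score}")
```

Comparing such indicators across repeated studies is one way the valuation for "similar situations" described above could be operationalized; a production tool would of course distinguish the three link types rather than simply summing them.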

4 PROPOSITIONS FOR A COMPETENCIES CARTOGRAPHY IN COLLABORATIVE DESIGN

In the collaborative design context, the actors' qualities have to be considered to constitute efficient design teams [15]. The identification and selection of the actors must be done with pragmatic tools. In our case, we chose to use competencies matrices to help decision-makers. Such a tool allows a company to identify and capitalize information relating to the actors' competencies on given problems. We present in this section the various policies to develop the competencies matrices.

4.1 Use of competencies matrices

Two types of competencies matrix permit the various actors' competencies in a company, in a project or in a group within an organization to be specified and revealed:

• the matrix from a "trade" viewpoint,
• the matrix from a "product" viewpoint.

Concerning the design activity, the competencies matrix from a "trade" viewpoint is based on existing company documents describing the activities of the various design actors according to their trade. It is also possible to have a classification according to the actors' function within the hierarchy of the studied service. The activities are categorized in four levels: functional objectives related to the responsibilities, intermediate objectives describing the missions associated with the tasks, actions, and finally the software which is part of the actors' environment. For each activity, a level of control of the described competencies is associated with each actor. This solution allows dynamic competencies management through the parameter "level of expertise", separated into three criteria:

• The "necessary" level of expertise, which is the minimum required by the activity to ensure the good course of the process.

• The "specific" level of expertise, which corresponds to the difficulties expected for given design activities. This level can be filled in according to the actors' empirical experience at the beginning of the project and/or can be based on documents estimating the difficulty of a project on the basis of preset criteria. It permits the "a priori" identification of the actors most suited to the resolution of particular problems.

• The "reached" level of expertise, which is filled in at the end of the project, for each activity, during the end-of-project meeting. The examination of the divergences between the expected levels and those really reached can then be used as a performance indicator for the evaluation of the formed group.

Concerning the competencies classification from a "product" viewpoint (Figure 2), the trade nomenclature is broken up into macroscopic or microscopic sets that represent the different levels of expertise:

• The microscopic level concerns an actor's level of expertise on a specific product, as defined by Rakoto et al. [16]: the expert provides his expertise to his internal and external customers; he capitalizes and shares his expertise, and promotes knowledge of the field in which he is recognized as a reference.

• The macroscopic level represents competencies according to the traditional triptych "knowledge, know-how and knowledge-being" for each actor intervening during the product design process.

When the design process is relatively well defined, structured and controlled, this approach offers great visibility of the products designed and permits the identification of the most qualified resource to achieve a task on a specific product. As proposed by Hadj Hamou and Caillaud [15], this visibility can moreover be increased by adding a level of cooperation necessary for the various participants within the design process. Figure 2 presents an example of a competencies matrix with a "product" viewpoint for a team in charge of the design of asynchronous motors. Here, the microscopic level that emphasizes the actors' expertise is developed according to four levels. Each level is composed of requirements concerning:

• the knowledge needed to achieve the task,
• the achievement of the activity to which the actor is assigned,
• the actor's autonomy of work,
• the quality of the analysis of the results.

Such a structuring allows information concerning the macroscopic level of an actor's competence to be rolled up, with the caveat that knowledge-being is not an easily quantifiable concept. The use of this kind of matrix helps the management of the collaborative design process by identifying the actors and their positioning on a given problem. Matrices are tools for the dissemination of the competencies management policy within a design project and design department, without violating the challenge of data isolation (these data are not of a confidential nature and generally remain diffused within the project). Matrices also answer the previously stated challenge of the collection of information, since actors see the benefit of such an action. Indeed, within the framework of collaboration on a given activity, in addition to the interpersonal conflicts which can always exist within a team, this visualization of each actor's levels could be beneficial. Actors will be able to see the potential advantages of increasing their own competencies; this measurement potentially enables them to close their gaps. This ambition requires a financial profit-sharing on the overall policy of the company to increase the decision-makers' interest in reporting their qualification level objectively. Even if it can seem utopian, this winner/winner position can nevertheless improve the overall synergy of the design teams. It is advisable not to fall into a perverted use of such matrices which would aim at selecting the best elements in the various categories and detaching them from their services. Moreover, one major disadvantage of these competencies matrices is that they give a static vision of the situation of the company, a project or a group of actors within a sub-project.
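A competencies matrix of this kind maps naturally onto a small data structure. The sketch below is an assumed illustration of the "trade" viewpoint with its three expertise criteria — the API, the activity names and the selection rule are invented, not taken from the paper:

```python
# Illustrative sketch of a competencies matrix with the three expertise
# criteria (necessary / specific / reached) described in the text.
# All names, levels and the selection rule are assumptions.

from dataclasses import dataclass

@dataclass
class ActivityRequirement:
    activity: str
    necessary: int      # minimum level required for the process to run
    specific: int       # level expected given this project's difficulty
    reached: int = 0    # group level observed at the end-of-project meeting

# Actors' control levels per activity (1-4, as in Figure 2); the actor
# initials follow Figure 2, the activities are hypothetical.
control = {
    "P.A.C": {"electromagnetic sizing": 4, "shaft design": 2},
    "D.M":   {"electromagnetic sizing": 2, "shaft design": 3},
}

def most_suited(control, req):
    """'A priori' assignment: pick the actor with the highest control
    level for the activity in question."""
    ranked = sorted(control.items(),
                    key=lambda kv: kv[1].get(req.activity, 0), reverse=True)
    return ranked[0][0]

req = ActivityRequirement("electromagnetic sizing", necessary=2, specific=3)
print(most_suited(control, req))            # -> P.A.C
req.reached = 3
print("gap:", req.reached - req.specific)   # end-of-project indicator
```

The divergence between `reached` and `specific` is exactly the end-of-project performance indicator the text proposes; capturing it per activity turns the static matrix into a time series, which mitigates the "static vision" drawback noted above.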

Competencies matrix: Asynchronous motor N3

Product: Asynchronous motor N3. Sub-products: Rotor, Magnetic parts, Ventilation, Main shaft, Carcass, Stator. Titles and names: Project Manager (P.A.C.), Engineers (D.M., V.L., J.B.D., Ph. A.), Contractor (M.O.).

Level definitions, per criterion:
• Knowledge – Level 1: possess all the basic knowledge for the job; Level 2: ability to accomplish all the tasks of the job; Level 3: ability to accomplish and to explain all the tasks of the job; Level 4: ability to accomplish and to explain all the tasks of the job, and to be a trainer.
• Activity – Level 1: apply standards; Level 2: apply operative modes; Level 3: apply operative modes, with the ability to select and apply the appropriate standard; Level 4: apply operative modes, select and apply the appropriate standard, create new standards, and realise unusual operations.
• Autonomy – Level 1: work with a tutor; Level 2: work alone by respecting operative modes; Level 3: work alone and make propositions of improvement actions; Level 4: work alone, make propositions and participate in improvement actions.
• Quality – Level 1: control his own work with a tutor; Level 2: control his own work alone; Level 3: control his own work alone and understand the results, making corrective actions with a tutor; Level 4: control his own work alone and understand the results, making corrective actions alone.

A level is achieved when all the criteria of this level are achieved.

Figure 2: Competencies matrix with a "produced" viewpoint and caption notation.


4.2 Prototypes of software to support human resources management

During the IPPOP project [17] we developed the PEGASE application, which integrates the results obtained in the competencies matrices; the objective was to use it to manage design projects. Initially, an administrator connects to the PEGASE database by means of a protected connection. According to the competencies matrices previously defined, he identifies the elements to be taken into account in the database and creates it. When competencies are implemented, he assigns to each actor his own competencies. These data are then accessible to the project manager via various tables and diagrams in a dedicated Graphical User Interface (GUI) (Figure 3). The project manager can exploit them during the course of the project to assign resources and to create teams. This application answers the challenge of making the data available, while ensuring their confidentiality and durability and making sure of their capitalisation and their reuse across the other projects of the company. This software capitalises a static vision of actors' competencies and is not yet able to give information about the actors' possible evolution.
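As an indication of the kind of selection PEGASE supports, the following sketch (our illustration; the actual PEGASE data model and GUI logic are not described at this level of detail) picks, for a given product, the most qualified actor who is still sufficiently available:

from dataclasses import dataclass

@dataclass
class Actor:
    name: str
    availability: float   # fraction of time still free, 0..1
    levels: dict          # product -> competency level (1-4)

actors = [
    Actor("D.M.", 0.5, {"rotor": 3, "stator": 2}),
    Actor("V.L.", 0.8, {"rotor": 2, "stator": 4}),
    Actor("J.B.D.", 0.2, {"rotor": 4, "stator": 1}),
]

def best_resource(actors, product, min_level=2, min_availability=0.3):
    """Return the most qualified, sufficiently available actor for a product."""
    candidates = [a for a in actors
                  if a.levels.get(product, 0) >= min_level
                  and a.availability >= min_availability]
    return max(candidates, key=lambda a: a.levels[product], default=None)

print(best_resource(actors, "rotor").name)  # D.M. (level 3, 50% available)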

Figure 3: Graphical User Interface for the choice and the allocation of a resource on a project in the PEGASE application. (Screenshot content: a human resources management summary with structure, resources, competencies and statistics views; search for a resource by technical, organisational, social and general competencies; allocation of a resource, e.g. a car architect with 50% availability across all projects, to the activity "Chassis Integration" of the project "Chassis M90".)

To complete this approach, Rose [18] was interested in conflict management in collaborative product design. This situation is one of the most constrained cases of collaboration [19]. In order to instrument conflict management, a software application, CO²MED (COllaborative COnflict Management in Engineering Design), was developed. It is based on a reference frame improving the effectiveness and preserving the memory of the resolution of technical conflicts. This resolution process is integrated into an engineering process; it has characteristics which can be parameterised and indicators that make it possible to judge its effectiveness. These indicators (metrics) give information about the actors' involvement in previous conflicts according to given fields of expertise (Figure 4).

(Screenshot content of Figure 4: indicators concerning the application, e.g. number of iterations: 56, number of emitted solutions: 14; indicators concerning one conflict, e.g. number of iterations: 6, number of emitted solutions: 2, number of actors concerned: 3; indicators concerning human resources, e.g. number of iterations per user and per keyword, with search by keyword and user.)

Figure 4: GUI for the consultation of the performance indicators in CO²MED


In this particular context, the mastery of the technical fields involved in the project and the comprehension of the specificities of each project are necessary to ensure a good mutual understanding within the group. Indeed, competence on a project is built progressively: "it is in its unfolding even, as the various aspects are explored, that the compromises are analyzed and that the collective memory on the singular adventure is constituted" [2]. In the case of a conflict resolution, a standard process cannot be defined, and the actors' strategy, their determination and their capacities of dynamic adaptation to the situation are central. Competence is expressed here by adapting and selecting the steps and solutions according to the target, the specific context of the project, but also the as-is organisation. Thus, in this particular context of work, it is necessary to identify the actors who are potentially the most interested and interesting to facilitate the conflict resolution and to create a synergy around this resolution. The actors' capacities to collaborate have to be highlighted. To adopt a dynamic vision of these capacities to collaborate, CO²MED uses a structure and a representation of the various exchanges between the actors invited to solve a conflict. This structuring, which also exists in theories of negotiation [20], aims at showing the sequence of the answers to the various iterations, and makes it possible to present the traceability of the knowledge exchanged by taking into account the type of contribution (argumentation/criticism of a solution, or contribution of a solution) in the construction of the tree structures. The visual analysis of these sequences offers the opportunity to distinguish two cases (Figure 5):

Case n°1: A strong vertical deployment corresponds to a phase of mediation where a significant production of solutions is observed. This phase appears in particular at the beginning of conflict resolution, when its causes are not exactly identified. Some of these new solutions will find a development as presented in case n°2.

Case n°2: A strong horizontal development of the tree structure corresponds to a process of discussion on a given proposal. The actors generate iterations by explaining, arguing or revoking the suggested solution.

This structuring of the tree gives dynamic information and updates the capacities to collaborate of the actors solving the conflict. Indeed, a strong vertical deployment of the tree structure reveals a good level of creativity among the various protagonists, but their capacity to collaborate is relatively low since they do not manage to converge towards a single solution. This can express a lack of interest in the objective of the conflict resolution. In the same way, a tree structure developed horizontally highlights sterility in the actors' capacity to collaborate. This representation, made up of arguments and counter-arguments, can be characterised by the presence of personalities with strong character, wanting absolutely to impose their point of view. This real-time visualisation of the exchanges animating the conflict resolution, associated with the consultation of the performance indicators relating to the actors' participation in the discussions, makes it possible for the project manager to make the decisions that are essential, such as the withdrawal or addition of actors in the conflict resolution process. It can also highlight the increase in competencies of actors who express themselves more and more in a field of expertise not initially identified as theirs.
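The distinction between the two deployment patterns (see Figure 5) can be operationalised with a simple shape indicator on the iteration tree. The sketch below is our illustration, not CO²MED's internal representation; it stores each iteration with the iteration it answers and compares the depth of the tree with its maximum breadth:

# Each iteration points to the iteration it answers (None for the root).
# Many children of one node = vertical deployment (case 1, production of
# alternative solutions); a long chain of answers = horizontal development
# (case 2, argumentation on one given solution).
parents = {"IT1": None, "IT2": "IT1", "IT3": "IT1",
           "IT4": "IT1", "IT5": "IT4", "IT6": "IT4"}

def depth(node):
    return 0 if parents[node] is None else 1 + depth(parents[node])

children = {}
for node, parent in parents.items():
    children.setdefault(parent, []).append(node)

max_breadth = max(len(kids) for kids in children.values())
max_depth = max(depth(n) for n in parents)

if max_breadth > max_depth:
    print("case 1: vertical deployment, many alternative solutions")
else:
    print("case 2: horizontal development, discussion of one solution")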

This software application thus answers the challenges of evolution and dynamic update of competencies, both from the point of view of the level of expertise and of the capacity to collaborate. Nevertheless, even if the access to and the representation of these competency fields can prove easy for the management of the design activity at the operational level, it is not the same if one addresses the tactical or strategic levels. In this way, a fusion of the two software applications presented (PEGASE and CO²MED) would make it possible to answer completely the four challenges previously evoked, while guaranteeing a dynamic evaluation of the capacity of the actors of the design to collaborate.


Figure 5: Logical representations of the sequence of the exchanges in the form of iterations (Case n°1: vertical deployment; Case n°2: horizontal development).

5 CONCLUSION

In a context where the search for design activity performance is constant, an effective management of the competencies of the actors to be involved in a design project seems today to have become a decisive point. In this paper, we examined how to answer the various challenges of this competencies management (the provision of data, their dynamic evolution, their collection and their confidentiality), within the framework of individual expertises but also within the framework of the collaborations specific to the new forms of design currently used. We tried to bring various answers to these challenges: via the use of a "network" approach allowing project managers to chart the relations existing between actors and thus their way of collaborating, via the use of competencies matrices to chart these competencies, and via the use of software prototypes to assist competencies management. Future research will try to improve the assistance with competencies management intended for design project managers. This work can thus address the precise quantification of the level of collaboration between actors, as well as the evaluation of the collaborative competencies deployed in a design project, in correlation with researchers in social sciences. From a tools point of view, we showed how our complementary applications for the dynamic management of the actors of the design could support decision-making. The next step in our research will be the integration of these software applications.

6 ACKNOWLEDGMENTS

Many thanks to the Alstom Moteurs Company (Nancy, France), which provided us with the industrial case study, and particularly to the employees of the design and engineering department for their collaboration in the establishment of the competencies matrix.


7 REFERENCES

[1] Longueville B., Le Cardinal J., Bocquet J.-C., 2002, Decision based knowledge management for design project of innovative products, International Design Conference, Dubrovnik, Croatia, 14-17 May.
[2] Garel G., Giard V., Midler C., 2001, Management de projet et gestion des ressources humaines, IAE de Paris, Cahiers de recherche GREGOR.
[3] Robin V., Rose B., Girard P., 2007, Modelling collaborative knowledge to support engineering design project manager, Computers in Industry, 58/2: 188-198.
[4] Rose B., Robin V., Girard P., Lombard M., 2007, Management of engineering design process in collaborative situation, International Journal of Product Lifecycle Management, 2/1: 84-103.
[5] Lindgren R., 2002, Competence studies, Gothenburg Studies in Informatics, Report 23, June.
[6] Hao Q., Shen W., Zhang Z., Park S.W., Lee J.K., 2006, Agent-based collaborative product design engineering: An industrial case study, Computers in Industry, 57: 26-38.
[7] Stenmarck D., 2004, Managing knowledge through everyday activity, Online Information 2004.
[8] Girard P., Robin V., 2006, Analysis of collaboration for project design management, Computers in Industry, 57/8-9: 817-826.
[9] Robin V., Rose B., Girard P., 2007, Modelling collaborative knowledge to support engineering design project manager, Computers in Industry, 58/2: 188-198.
[10] Hakansson H., Johanson J., 1992, A Model of Industrial Networks, Routledge, ed. by B. Axelsson and G. Easton.
[11] Nowak P., Rose B., Saint-Marc L., Callot M., Eynard B., Gzara-Yesilbas L., Lombard M., 2004, Towards a design process model enabling the integration of product, process and organisation, 5th International Conference on Integrated Design and Manufacturing in Mechanical Engineering, Bath, UK, 5-7 April.
[12] Robin V., Rose B., Girard P., Lombard M., 2006, Management of Engineering Design Process in Collaborative Situation, Advances in Design, Springer, eds. H.A. ElMaraghy, W.H. ElMaraghy: 257-269.
[13] Darses F., Détienne F., Visser W., 2001, Assister la conception : perspectives pour la psychologie cognitive ergonomique, ÉPIQUE 2001, Proceedings of the « Journées d'étude en Psychologie ergonomique », Nantes, France, 29-30 October, http://www-sop.inria.fr/acacia/gtpe/GTPEActes-epique-2001-tdm.html
[14] Gzara L., Rose B., Lombard M., 2006, Specification of a repository to support collaborative knowledge exchanges in IPPOP project, Computers in Industry, 57: 690-710.
[15] Hadj Hamou K., Caillaud E., 2004, Cooperative design: a framework for a competency-based approach, 5th International Conference on Integrated Design and Manufacturing in Mechanical Engineering, Bath, UK, 5-7 April.
[16] Rakoto H., Clermont P., Geneste L., 2002, Le retour d'expérience, un processus socio-technique, 1er Colloque du groupe de travail Gestion des Compétences et des Connaissances en Génie Industriel, « Vers l'articulation entre Compétences et Connaissances », Nantes, France.
[17] IPPOP, « Intégration Produit Processus et Organisation pour l'amélioration de la Performance en conception », RNTL project, website: http://ippop.laps.u-bordeaux1.fr/index.php
[18] Rose B., 2004, Proposition d'un référentiel support à la conception collaborative : CO²MED (COllaborative COnflict Management in Engineering Design), prototype logiciel dans le cadre du projet IPPOP, Thèse de doctorat, Université Henri Poincaré Nancy 1.
[19] Rose B., Lombard M., 2003, Gestion du cycle de vie d'échanges formalisés en conception collaborative : capitalisation et évaluation, in « De la GDT au PLM », special issue of Revue internationale de CFAO et d'informatique graphique, eds. Eynard B., Caillaud E., 18/4.
[20] Baker M.J., 1993, Dialogic Learning: Negotiation and Argumentation as Mediating Mechanisms, Proceedings of AI-ED '93: World Conference on Artificial Intelligence in Education, Edinburgh, UK.

Requirements Models for Collaborative Product Development

C. Stechert, H.-J. Franke
Institute for Engineering Design, Technische Universität Braunschweig, Germany

Abstract Multidisciplinary product development in collaborative networks is a typical working condition for today's engineers. After describing the main deficits and dangers of such development situations, a modelling approach using the Systems Modelling Language (SysML) is shown. The approach focuses on the generation of a requirements model as a basis for discussion and analysis of the real project aims. A short and simplified example from the field of parallel robots illustrates the approach. Keywords: Requirements management, collaborative networks, design methodology, parallel robots

1 INTRODUCTION

The multidisciplinary development of complex products is often performed in collaborative networks. Thence, the system is decomposed into smaller and manageable subsystems (or subtasks). Ideally, these subsystems can be handled independently. It is a big challenge to keep the subsystems consistent, because boundaries are diffuse and changes can have an effect on several subsystems at the same time. A goal-oriented approach should consider all important system views during the product lifecycle. Requirements are seen as the core of a successful product development. All steps of the development process and every subsystem should support the initial goals. A requirements model is shown that uses the Systems Modelling Language (SysML) as a basis. SysML is a widely known standard e.g. in software development, electronics, and automation. A SysML model is able to integrate several different model elements (e.g. scenarios, function structures, mechanical structures, and manufacturing processes). These elements can be visualised and documented. Based on a classification of requirements and relations (both qualitative and quantitative), a semi-automatic analysis is possible that helps detect goal conflicts and estimate change impact. The benefit of the method is shown for high dynamic parallel robots. Parallel robots are mechatronic, typically customised products. Every task at each customer is unique. Thus, the need for a domain-integrating method that supports modular systems was the main motivation for this work.

2 PRODUCT DEVELOPMENT IN COLLABORATIVE NETWORKS

In this work, a collaborative network is seen as a group of organisationally and locally separated companies or departments that work together on the development of a specific product. One aim of product development in collaborative networks is to use the specific expertise of different enterprises in diverse domains to generate a domain-spanning product [1], e.g. mechatronic products like robots, cars, or airplanes.



In particular, small and medium-sized enterprises (SMEs) cannot provide high expertise in many different domains. On the other hand, a small company can be an expert and world market leader in a delimited area. Another aim of product development in collaborative networks is to decrease the time-to-market by parallelisation of work packages. A well-known strategy is simultaneous or concurrent engineering. According to [2], simultaneous engineering leads to 30-60% less costs, 30-90% shorter development time, and a 30-87% increase of quality. Along with these advantages, collaborative networks bring some difficulties and dangers. The most concerning aspects of often-cited difficulties and dangers are summarised and presented from the following seven viewpoints.

2.1 Communication between development partners

The more distant the different sites are, the higher the effort for communication will be [3] (e.g. travel, communication techniques), and if no clear code of behaviour is defined, there will be efficiency losses [3]. Different spoken languages and different cultural backgrounds fortify these effects [4]. Thence, it is important to define a common language as a basis for effective communication. This is necessary at least at those high levels of abstraction (e.g. requirements clarification) where many different partners are involved.

2.2 Exchange of information

According to Zanker [5], the storage, selection, and processing of information and the management of knowledge are the main weak points. It is not always clear to every project member whether he works with a valid version of the dataset. In addition, different data formats have to be converted with losses, or cannot be handled by every project member [3]. Interfaces are badly defined and no rules for information exchange are specified [2, 3]. If rules are set, it often leads to a higher amount of time spent learning special notations or project-specific procedures.

Some product-specific approaches (e.g. [6]) were made to assist a project (data) coordinator in assuring a consistent exchange of model data and making it more time-efficient. However, a development environment often has to be adapted to new technologies. This would lead to problems if the possibility for such extensions was not considered during programming.

2.3 Division of labour

A big danger for efficiency losses is a bad clarification of project goals [4]. The goals must be clear to every project member and the completion of every subtask should be a step towards the fulfilment of a goal. A bad disjunction of subtasks leads to extra work [4, 5]. Moreover, responsibilities for results are not fixed and not controlled [5]. A single member of the project team is not able to overview the whole system, thence is not aware of weak points and cannot imagine their importance for the whole system [3, 5]. Once established, it is difficult to change the distribution of subtasks. The organisational effort for coordination of work and maintenance of interfaces is comparably high [5].

2.4 Combination and integration of results

Often there is no continuous and no systematic workflow present, thus the right results are not available at the right moment [5]. Interfaces are often inexpedient in type and number [5]. Thence, "optimal" results of subtasks (according to the first set of requirements) are integrated and form a suboptimal total system. The documentation of decisions and results is often seen as a bothersome duty. If at all, project members document results at the end of a project, or in a way that is clear only for the person who created it. If personnel and boundary conditions change, decisions cannot be retraced, and in a worst-case situation work has to start over.

2.5 Human behaviour and human relations

Communication between human beings is seen as the main factor to avoid errors, but the greater the distance, the lower the likelihood of communication. New techniques try to virtually reduce this distance (e.g. videoconferences, application sharing). Nevertheless, some project members are inexperienced or too narrow-minded for using methods and new techniques of communication. In addition, design engineers see themselves as creative inventors, which might lead to internal resistance against the use of methods [3, 4]. Besides efficiency losses due to different cultures and structures in the different companies [3, 5], each partner follows different aims and strategies [3, 5]. If partners do not trust each other, this will lead to "inside-the-box thinking" of employees, i.e. holding back information [3, 4, 5].

2.6 Use of methods and tools

Especially in multidisciplinary product development for complex products, a huge variety of different methods and tools is used [7]. Different companies apply different methods for the same type of problem. Furthermore, experts often use their favourite tools (e.g. the use of a specific CAD system sometimes seems to be a "philosophical" rather than a technical/economical question). It follows that project members use methods wrongly, not adapted to the actual problem, or not at all [4]. When methods are used, such a broad number and variety of them are applied that they cannot be handled easily [4]. Van Beek and Tomiyama [8] state that the integration of different methods is the main challenge. Some approaches (e.g. [6, 9]) already showed that it is possible to create integrated development environments. However, these are limited to the predefined set of implemented methods. Furthermore, special formalisms are used to apply the methods. Following an idea stated in [10], Figure 1 shows the benefit of formalism as a curve over the degree of formalism. Apparently, the maximum benefit lies in between a too low and a too high degree of formalism. It is clear that the shape of the curve highly depends on the type of project. The narrower the boundary conditions (i.e. number of domains, project members involved), the more the curve moves to the left. Thence, Figure 1 shows a useful area of formalism located around the optimum of the described curve. Figure 2 shows a similar approach of [11], now distinguishing between the effort for exchange and the effort for standardization of information. A minimum total effort (and thus a maximum benefit) is reached in between recommendations and norms as a measure for standardization grade. However, if the norm exists and is well known, the effort for that standardization will not count for the actual project.

2.7 Organizational aspects

Limited resources, time pressure, dynamic boundary conditions, and goal conflicts affect performance and productivity [4]. Decision-making processes and the managing of development goals are main weak points [5], i.e. the definition of goals and, if necessary, their adjustment due to changing environments. Due to hierarchy (e.g. steering committee, project management, and work team), domain (e.g. mechanics, electronics, software) and place (e.g. different companies/departments, different sites), a number of somehow independent "islands of knowledge" emerge. Each island uses a specific subset of the whole project knowledge and is often not sufficiently connected to others [3].

Figure 1: Benefit of formalism, related to [10]. (Curve of the benefit of formalism over the degree of formalism, with a useful area of formalism around the optimum.)

Figure 2: Effort for exchange/porting depending on standardization grade, [11]. (Curves of the effort for exchange, the effort for standardization, and the total effort for exchange, over a standardization grade ranging from recommendations to norms.)


The organisation of a collaborative multidisciplinary project for the development of complex products needs a big effort in planning before the project starts. Main points that have to be considered are scheduled time and cost frames. Also the development risk and the possibility of changes (e.g. of customer wishes, laws, available technologies) are main influencing factors for the success of a project. A very comprehensive summary is given in [12]. Here, the key elements of successful operational design coordination are identified as coherence, communication, task management, schedule management, resource management, and real-time support.

3 MODELLING THE PRODUCT

As mentioned earlier, the development of a product makes use of a variety of different models or "partial models" that describe a specific view on the whole system. Two definitions of model are cited below:

"Models express relations between real conditions in an abstract form. As a copy of reality models own simplifications that on the one hand cause a loss of realness, but on the other hand bring transparency and controllability of real relations." [13]

"Model is a purpose-dependent, finite, simplified, but still adequate representation of whatever is modelled, allowing us to abstract from its unimportant properties and details and to concentrate only on the most specific and most important traits." [11]

Both definitions contain the aspects of abstraction and simplification to bring transparency and to concentrate on the most important characteristics. Additionally, Ort [13] states that models bring controllability of real relations: we cannot control what we do not understand.

3.1 Requirements on models

According to [11], a model should represent the subject to be modelled, ignore unimportant details (abstraction), and allow a pragmatic usage. The purposes are to support and improve the understanding of the matter and to build a common basis for discussion and information exchange. Moreover, models should allow comparison of different solutions as well as analysis and prediction of behaviour and characteristics of the system to be designed. The organisation of a model should contain its structure and architecture. Furthermore, interactions between components, component interdependencies, and important external relations should be taken into account. One important aspect of a model is its representation, especially the visualization of its contents. Salustri et al. [14] mention that "there is relatively little use of diagrammatic visualization of qualitative information in the early stages of designing", although designers are seen as "visual thinkers". Within this context two principles are stated [14]:

• Simplicity is power.
• Diagrams augment cognition.

A model of the designed product is always some kind of documentation, too. It documents the actual state of work, allows for discussions, and gives a basis for presentations to e.g. stakeholders. If personnel changes during the project, the documentation should help the new project member to acquaint himself with the topic. For follow-up projects, the rationales for decisions are helpful for the development of a new product or for reconfiguration of the product in operation. The necessary degree of formalism depends on the project and its state. The more creativity is demanded, the more formalism would hinder.


For instance, in the early phases a good designer would start with a freehand sketch rather than using a CAD system for his very first ideas. On the other hand, in later phases a formalised workshop drawing or a detailed 3D-CAD model is necessary to exchange the generated information in a commonly understandable language and to transfer information to other models, e.g. FEM. Many product development processes thus suggest a from-rough-to-detailed approach. The model should just be as detailed as necessary to provide a commonly understandable basis for all persons who might get in touch with this model. Wherever more detailed aspects are needed, a submodel for a subset of project members should be generated.

3.2 SysML

The Systems Modelling Language (SysML) is an approach to model a product on different levels of abstraction and with different viewpoints. It is a widely known notation within the fields of software development, electronic design, automation, and (in parts of) mechanical engineering. SysML uses parts of UML (Unified Modelling Language) and special extensions for systems modelling (e.g. requirements diagrams). Commercial and open source modelling tools support UML and SysML profiles. OMG SysML v1.0 was issued as Available Specification in September 2007 [15] and provides a common basis, and thus better exchangeability, to describe requirements, as well as structure (e.g. blocks, packages, constraints) and behaviour (e.g. activities, use cases). So far, SysML mainly focuses on the needs of software and electric development, but new profiles containing new classes and stereotypes can be generated for customisation. All relations can be visualised in diagrams and formatted to desired views. Moreover, view-specific lists (e.g. requirements lists) and matrices (e.g. Design Structure Matrix (DSM), traceability tables) can be generated. Since UML and SysML are widely known and taught at universities, most project members do not have to learn a new and complex modelling language. It can be assumed that this leads to a higher acceptance than for completely new notations. Of course, some new and project-specific elements have to be integrated. In [16] function structures and in [17] an airplane structure are described with UML class diagrams. This shows that existing methods can be flexibly integrated and handled with the existing tools. Commercial tools already provide a variety of useful functions. For instance, view-specific requirements lists and traceability matrices can be generated from the model and then used e.g. in standard office software. Macros can be programmed to extend the functionalities. Client-server architecture allows collaborative work with the model at different sites, and security procedures are already implemented in the software. The principles of versioning (coming from software development) allow simultaneous collaborative work on the models. One drawback of commercial software is the limited access via defined interfaces and the possible change of these interfaces with a new version. Another is the cost of licenses, which is important especially for SMEs. One possibility to overcome these is open-source software, which, however, brings other disadvantages with it.
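As an indication of what such generated views can contain, the following sketch (our illustration in Python, not the feature of any specific SysML tool) derives a simple requirement-to-block traceability table from hypothetical allocation relations:

# Allocation relations: requirement -> blocks that satisfy it (hypothetical).
allocations = {
    "speed": ["drive", "kinematic chain"],
    "damping": ["rod"],
    "transmission rate": ["middleware"],
}

blocks = sorted({b for bs in allocations.values() for b in bs})

# Print a traceability matrix: 'x' where a requirement is allocated to a block.
print("requirement".ljust(18) + "".join(b.ljust(16) for b in blocks))
for req, allocated in allocations.items():
    row = req.ljust(18)
    row += "".join(("x" if b in allocated else "-").ljust(16) for b in blocks)
    print(row)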

3.3 The requirements model

In a collaborative network, as in every major development process, the system has to be decomposed into a smaller, manageable, and at the same time consistent set of subsystems (subtasks).

Figure 3: Requirements and constraints for the parallel robot HEXA (schematic overview).

The requirements model is the core that forces the development process to fulfil the initial customer needs, the companies' strategic aims, and other constraints, e.g. arising from laws and regulations. Figure 3 illustrates this thinking approach as a schematic overview for a parallel robot of type HEXA (six DoF). The requirements model is one of the first models of the product. However, it is not complete a priori. The further the product development process advances, the more requirements evolve. This is due to the augmentation of knowledge. The better the understanding of the matter, the more detailed the requirements that are specified, and the better the idea of boundary conditions will be. Furthermore, after each decision made (e.g. choosing a solution or design principle) new requirements evolve. For instance, the decision "rack should be welded" leads to requirements like "use steel profiles". The decision "rack should be casted" would lead to very different requirements like "allow for big radii". Experience points out that for typical projects around 50% of the requirements evolve after the "clarification of task" phase. Figure 4 is taken from [18], who analysed interdisciplinary product development. In the first phases the intensity of a conscious clarification of requirements is high, but decreases to a very low amount within the first third of the project.

Figure 4: Qualitative display of requirements clarification during a project (curves over time: number of necessary requirements, number of specified requirements, number of documented requirements, and intensity of requirements clarification), according to [18].

The documentation of requirements does not advance after this phase. Nevertheless, requirements are specified also in later phases, but not documented. However, the number of necessary requirements is higher during all phases. In [19] a holistic approach of requirements management within a PLM context is demanded. This paper focuses on four aspects concerning the requirements model: surroundings, structure, relations, and analysis.

Surroundings

One of the first steps in the development process is to analyse the product surroundings in order to recognise the important requirements (e.g. [20]). Here, the whole product lifecycle has to be taken into account, including different scenarios (e.g. [21, 22, 23]) or use cases, with all related actors, the surrounding environment and possible disturbances. Furthermore, the different product views from different domains have to be considered, and those requirements generated by later development steps (e.g. simulation, manufacturing) should be gathered. A systematic documentation helps to identify and to use the collected requirements and constraints. It further shows which requirements derive from which surrounding elements. If several requirements from different domains cause trouble with respect to the same surrounding elements, it might be interesting to think of extending the system boundaries.

Structure

For better accessibility, information should be structured [24, 25]. Requirements can be structured in a hierarchy such as goal, target, system requirement, and subsystem requirement. Furthermore, they can be allocated to a domain and to a purpose in the development process. As they get more concrete, requirements can be allocated to the subsystems they concern. In addition to well-established attributes (wish / minimum / fixed, source), it makes sense to assign certainty and change probability.
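A minimal data structure reflecting this structuring could look as follows (a sketch under the assumptions of this section; the attribute names and the example values are ours and hypothetical):

from dataclasses import dataclass

@dataclass
class Requirement:
    ident: str
    text: str
    level: str                  # "goal" | "target" | "system" | "subsystem"
    domain: str                 # e.g. "mechanics", "software"
    kind: str                   # "wish" | "minimum" | "fixed"
    source: str                 # e.g. customer, law, earlier decision
    certainty: float            # 0..1, how settled the requirement is
    change_probability: float   # 0..1, likelihood of later change

r = Requirement("R42", "max. acceleration at TCP >= 100 m/s^2",
                level="system", domain="mechanics", kind="minimum",
                source="use case 'handling of muffins'",
                certainty=0.8, change_probability=0.3)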


Relations

Model elements are related to each other. In earlier work [26] a basic classification for relations was suggested, according to development steps, granularity, support, direction, linking, and quantifiability. The systematic integration of relations into the model helps designers understand the matter and be aware of interfaces to other disciplines. During synthesis steps, becoming aware of relations might lead to new model elements. In [18] this was shown for an interdisciplinary development in the field of medical apparatuses. In addition, the modelling of relations is the basis for the analysis of the model, discussed below.

Analysis

One important aspect during the development of complex products is to detect goal conflicts, both in early qualitative and later quantitative phases. The earlier one becomes aware of possible goal conflicts, the higher the benefit will be. This means not just rejecting possible solutions early, but also being aware of problems that might occur in later phases and being prepared for their solution. Often goal conflicts do not appear on the abstract level, but due to decisions on a more concrete level. On these levels, goals are described as (technical) requirements and allocated to the total system or to components. Parameters of some components are already established. As the system is subdivided into many different subsystems of different domains with diffuse boundaries, it is difficult to trace relations without systematic assistance. Hence, traceability is important to follow the traces from the requirements model through a number of more or less concrete partial models and back. In addition, the impact a certain change will have on the whole system can be estimated by following these traces. If a boundary condition changes during the development, the analysis shows the affected areas of the product.

Figure 5: Requirements and surroundings for a new robot in the hierarchical explorer view.


It thus helps to decide on how and where to adapt the actual concepts, or to start all over again. If in dynamic environments one knows the (un)certainty of a specific requirement, it is possible to plan the development process efficiently (e.g. focus on the certain aspects and leave uncertain areas solution-independent as long as possible). According to [10], a system is characterised by a bigger number of relations between system elements within the system boundaries than outside them. The subsystems of a product are to be developed as independently as possible. This approach might lead to modular systems with redundant structures, i.e. synergetic effects are not used and the same problem is solved twice, because module interfaces are generated just because of departmental or domain separation. In addition, the development of modular systems often focuses on just one aspect of the product (e.g. assembly). The requirements model describes early the real aims of the modularity, and by analysing the relations, modules can be separated more purposefully.
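Once relations are captured as a graph, following the traces can be automated. The sketch below (our simplification of such an analysis; the element names are hypothetical) propagates a change from one element over the recorded relations to estimate the affected area:

from collections import deque

# Directed relations between model elements (hypothetical excerpt).
relations = {
    "workspace": ["kinematic chain"],
    "speed": ["drive", "kinematic chain"],
    "drive": ["power supply", "cost"],
    "kinematic chain": ["joint", "rod"],
}

def change_impact(start):
    """Breadth-first traversal: every element reachable from the change."""
    affected, queue = set(), deque([start])
    while queue:
        element = queue.popleft()
        for successor in relations.get(element, []):
            if successor not in affected:
                affected.add(successor)
                queue.append(successor)
    return affected

print(change_impact("speed"))
# e.g. {'drive', 'kinematic chain', 'power supply', 'cost', 'joint', 'rod'}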

4 PARALLEL ROBOTIC SYSTEMS FOR HANDLING AND ASSEMBLY

Within the Collaborative Research Centre 562 "Robotic Systems for Handling and Assembly – High Dynamic Parallel Structures with Adaptronic Components", concepts for the design and modelling of parallel robots for high operating speeds, accelerations and accuracy are developed. Due to the use of closed kinematic chains, parallel robots feature relatively small moved masses (drives are mainly placed in the rack) and high stiffness. In comparison with serial mechanisms, they offer higher dynamics and high accuracy, especially when new and optimised structure components (e.g. adaptive joints [27] and rods [28]) are used. The disadvantages compared to serial robots are mainly a small ratio of workspace to installation area and the existence of singularities within the workspace. Thence, new design, analysis, and control methods were developed to overcome these drawbacks. As a mechatronic product, several disciplines and many different partial models [7] are necessary to set the robot in operation. This results in relatively complex products with complex relations. As of now, parallel robots are not sold as mass products but are customised to the needs of a specific customer. The re-use of knowledge, thence configuration through a modular concept, and effective change management through a systematic holistic view are helpful to provide the desired fast time-to-market as well as high quality and products optimally matched to the customer needs. The developed SysML-based requirements model reduces the abovementioned difficulties of collaborative product development, providing transparency, communicability, exchangeability, and coherence. The following examples illustrate the approach. Figure 5 shows in the explorer view the packages of a model for a project "New Robot". Besides the SysML and UML profiles, one can see the packages "Requirements" and "Surrounding". The requirements are hierarchically distinguished into "Goals", "Targets", and "Technical Requirements". "Surrounding" documents the "product environments" and "Use Cases". For instance, one use case in the product lifecycle phase "Use" describes the handling of muffins. This use case leads, amongst others, to a refinement of the requirements "workspace" and "payload" (see Figure 6). For each customer a unique use case has to be developed, considering the specialties of that specific surrounding.

For instance, the number and arrangement of conveyor belts, the size and weight of the objects, and the subsequently planned manipulation steps have to be considered. It might be that the surroundings show synergetic or parasitic effects. For instance, if a mechanism could help to orient the objects in a way that the robot needs one DoF less, the whole system could get cheaper, especially regarding Total Cost of Ownership (TCO), i.e. energy costs. However, the new use case shows similarities and differences to already performed projects. Besides fulfilling the specific use case "Handling of muffins", one important goal is a short cycle time. There are a number of targets that support this goal. However, not every one of them is related to the development of the robot, i.e. not all lie within the project's boundaries. For instance, prior or following manipulation steps can decrease the cycle time by supplying the objects in a more efficient way or by picking them in a more flexible way (e.g. objects are grouped relatively to each other, but not absolutely to a fixed coordinate system; another robot picks the grouped objects and orients them).

Figure 6: Goal-oriented view on the product, excerpt of an extended requirements diagram based on SysML. (Diagram content: goals "quality" and "cycle time"; targets "high accuracy" and "high dynamic"; requirements "damping", "acceleration", "speed", "workspace", "payload" and "transmission rate"; constraint blocks "velocity", with v_max² = 2 · a_max · l_max, and drive "power"; blocks "drive", "kinematic chain" and "middleware"; use cases: assembling, handling cell phones, handling sausages, handling muffins.)
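Read as a parametric constraint, the velocity relation of Figure 6 can be evaluated as soon as first estimates exist. A small sketch with purely illustrative numbers (not project data):

import math

# Constraint from Figure 6: v_max^2 = 2 * a_max * l_max, assuming constant
# acceleration a_max over the characteristic workspace length l_max.
a_max = 100.0   # m/s^2, illustrative value for the requirement "acceleration"
l_max = 0.5     # m, illustrative value for the requirement "workspace"

v_max = math.sqrt(2 * a_max * l_max)
print(f"reachable v_max = {v_max:.1f} m/s")  # 10.0 m/s

# Early, rough check against the requirement "speed":
v_required = 8.0  # m/s, illustrative
print("speed requirement satisfiable:", v_max >= v_required)  # True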

Figure 7: Simplified example of the hierarchical view on requirements of components (left) and component structure in a kinematic chain (right). (Diagram content: goals "quality" and "cycle time"; targets "high accuracy" and "high dynamic"; requirements "stiffness", "speed", "clearance" and "friction"; constraint "contact" with Fr = µ · Fn; blocks "drive", "crank", "rod" and "joints" forming the kinematic chain.)

The target "high dynamic" is directly related to the robot and supported by the requirements "high acceleration" and "high speed". As a simplified example, the triangular relationship between acceleration, speed, and workspace can be described by an equation considering constant acceleration at the tool centre point (TCP). As long as the concretion level is low, this simplified equation can just give an idea of a reasonable area. However, it contains the danger of prejudgement. Another supporting requirement of "short cycle time" is the transmission rate of the middleware (right side of Figure 6). That means that even if the robot accelerates pretty fast, it would not necessarily lead to short cycle times. If it had to stop and wait for new data (because of a delay in transmission), the high acceleration would gain nothing. If the acceleration were designed for the specific task in such a way that no waiting periods evolve, drives could be cheaper or less energy consuming. Developers from these different domains should thence work together to find the optimal – goal supporting – solution. The diagrams thus represent a common level of knowledge of the whole system and its relations. Moreover, the overall development goals are always present to each developer. At the high-level representation, a domain-spanning discussion is supported. Then, in the software development domain, a more detailed view on the middleware is necessary. For this purpose developers use e.g. sequence diagrams, which are supported by the SysML notation and modelling tools, too. The combination of the targets "high dynamic" and "high accuracy" leads to the requirement "damping" (see left side of Figure 6). The kinematic structure is designed following lightweight principles to fulfil the target "high dynamic"; thus the internal structural damping is relatively small. When the robot stops, the structure oscillates, and thus the accuracy is affected. Due to the lightweight structure, the oscillations die out relatively slowly. Adaptive components are able to suppress oscillations actively [29]: they initiate oscillations opposite in phase, thence accelerating the die-out. Many different disciplines are now involved.


The adaptronic components (e.g. planar piezo actuators) have to be physically integrated on the rods, a suppression strategy must be developed, and control hardware and software must be designed. However, this technology leads to additional costs in development, manufacturing, use, and recycling. The consideration of the product surroundings shows that the technology leads to big benefits in the area of precise handling or assembly, namely to gain better cycle times. For some other use cases, the benefit would not justify the costs. Figure 7 shows on its right side the decomposition of the kinematic chain into its structural components. Regarding the kinematic chain in relation to the aforementioned targets, a critical component is found. A joint that supports high accuracy should provide low clearance between moving surfaces. Then a joint that supports high dynamics should provide low friction. The left side of Figure 7 displays both requirements within the block "joint", because they are directly related to that component. The arrows show their dependencies on the system requirements, and it is possible to follow up to the level of goals. The constraint "contact" illustrates that for a low clearance there is a normal force at the moving surfaces, which generates a friction force. Hence, the smaller the clearance, the higher the friction becomes. In conventional joints, this goal conflict is handled by finding the best compromise [30]. The analysis of the model showed that the concerned requirements do not have to be fulfilled at the same time during operation, i.e. when moving fast a low friction is needed, but when assembling the movement is relatively slow and the low clearance is of importance. To allow a time-dependent change of joint characteristics, new joints with an active adaptability were developed [27, 31].

In later phases sometimes changes appear. For instance, a customer rethinks the cost target: the robot has to be cheaper, otherwise he would back out. Analysing the relations shows that in that case the simplest possibility is to exchange the drives for cheaper (but less powerful) ones. This would mainly affect the "high dynamic" target and, to a certain degree, the goal "cycle time". The model allows generating a specific view that makes all these relations transparent and communicable to the customer. Then it would be up to the customer to decide whether the cheaper robot justifies the loss in cycle time.

One important aspect of working with models, especially in collaborative networks, is the management of different versions. Modelling software often provides a multi-user repository that allows team members to work concurrently with the same model. Normally a model tree is generated, so that a trunk contains the universally valid versions. The tip version is the actual common state of the project. The different project members generate branches ("private sandbox") to extend and modify the model according to their subtask. Versions in a branch can be merged in two different manners. "Rebasing" means integrating the trunk tip version into the branch version and creating a new version in the branch. "Reconciling" means integrating the branch version into the trunk tip version and creating a new version in the trunk that constitutes the new common state of the project. Whilst merging two model versions, differences can be analysed, i.e. the impact a change in one domain had on model elements also used in another domain is displayed.
However, the project management has to ensure a regular merging of outcomes, an analysis regarding redundant elements, and communication between project members, as well as the making of comments and notes on the changes made. Then it is possible to trace changes and even restore the model to a former condition.
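The difference analysis performed while merging can be pictured with a minimal sketch (our illustration; real modelling repositories operate on richer model elements than flat dictionaries):

def diff(trunk, branch):
    """Report elements added, removed, or changed in a branch version."""
    added = {k: branch[k] for k in branch.keys() - trunk.keys()}
    removed = {k: trunk[k] for k in trunk.keys() - branch.keys()}
    changed = {k: (trunk[k], branch[k])
               for k in trunk.keys() & branch.keys() if trunk[k] != branch[k]}
    return added, removed, changed

trunk_tip = {"drive": "type A", "joint": "conventional"}
branch    = {"drive": "type B", "joint": "conventional", "rod": "adaptive"}

added, removed, changed = diff(trunk_tip, branch)
print(added)    # {'rod': 'adaptive'}
print(removed)  # {}
print(changed)  # {'drive': ('type A', 'type B')}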


Another opportunity of versioning is the building of variants. Starting from the common state version, different branches can be modelled. These branches can detail different possible solutions to figure out the best one. Moreover, it is possible to start from a generic model and detail the different branches according to different development aims, e.g. to develop two different robots for relatively similar boundary conditions but for different tasks. As pointed out earlier, one problem in product development projects is an inadequate documentation of results. Using the models as a development tool is at the same time a kind of documentation. It gives at least a proper basis for generating the documentation. Modelling software often supports a (semi)automatic document generation. Thence, documents are generated "on the fly", which facilitates this unloved duty of documenting.

5 CONCLUSIONS

Multidisciplinary product development in collaborative networks is a typical working condition for today's engineers. After describing the main deficits and dangers of such development situations, a modelling approach using the Systems Modelling Language (SysML) is shown. The approach focuses on the generation of a requirements model as a basis for discussion and analysis of the real project aims. To this end, it discusses the four aspects of surroundings, structure, relations, and analysis. A simplified example from the field of parallel robots illustrates the approach and highlights the benefits. The paper shows that the modelling of requirements is an essential step in the development of complex products. Especially in collaborative networks, it helps to concentrate on and to communicate goals, targets, and requirements. It assists the decision-making processes and makes them more transparent. SysML is a known notation, thence the effort to formalise the model is comparably small, but it gains a good exchangeability, e.g. for remote and concurrent working. The generated diagrams are quite easy to understand and hence augment cognition.

6 ACKNOWLEDGMENTS

The authors gratefully thank the German research association (DFG) for supporting the Collaborative Research Centre SFB 562 "Robotic Systems for Handling and Assembly – High Dynamic Parallel Structures with Adaptronic Components".

7 REFERENCES

[1] Franke, H.-J., Huch, B., Herrmann, C., Löffler, S. (eds.), 2005, Ganzheitliche Innovationsprozesse in modularen Unternehmensnetzwerken, Logos Verlag, Berlin.
[2] Steinmetz, O., 1993, Die Strategie der integrierten Produktentwicklung, Vieweg Verlag, Frankfurt.
[3] Gaul, H.-D., 2001, Verteilte Produktentwicklung – Perspektiven und Modell zur Optimierung, Verlag Dr. Hut, München.
[4] Bender, B., 2001, Zielorientiertes Kooperationsmanagement in der Produktentwicklung, Verlag Dr. Hut, München.
[5] Zanker, W., 1999, Situative Anpassung und Neukombination von Entwicklungsmethoden, Shaker Verlag, Aachen.
[6] Franke, H.-J., Wrege, C., Stechert, C., Pavlovic, N., 2005, Knowledge Based Development Environment, The 2nd International Colloquium of the Collaborative Research Center 562, Braunschweig, Germany, 10-11 May: 221-236.

[7] Stechert, C., Alexandrescu, I., Franke, H.-J., 2007, Modelling of Inter-Model Relations for a Customer Oriented Development of Complex Products, The 16th International Conference on Engineering Design ICED 07, Paris, France, 28-30 August.
[8] van Beek, T.J., Tomiyama, T., 2008, Requirements for Complex Systems Modelling, The 18th CIRP Design Conference - Design Synthesis, Enschede, The Netherlands, 7-9 April.
[9] Kläger, R., 1993, Modellierung von Produktanforderungen als Basis für Problemlösungsprozesse in intelligenten Konstruktionssystemen, Shaker Verlag, Aachen.
[10] Daenzer, W.F., Huber, F. (eds.), 2002, Systems Engineering - Methodik und Praxis, Verlag Industrielle Organisation, Zürich.
[11] Avgoustinov, N., 2007, Modelling in Mechanical Engineering and Mechatronics: Towards Autonomous Intelligent Software Models, Springer Verlag, London.
[12] Coates, G., Duffy, A.H.B., Whitfield, I., Hills, W., 2004, Engineering management: operational design coordination, Journal of Engineering Design, 15/5: 433-446.
[13] Ort, A., 1998, Entwicklungsbegleitende Kalkulation mit Teilebibliotheken, Papierflieger, Clausthal-Zellerfeld.
[14] Salustri, F.A., Eng, N.L., Weerasinghe, J.S., 2008, Visualizing Information in the Early Stages of Engineering Design, Computer-Aided Design & Applications, 5: 1-4.
[15] The Official OMG Systems Modelling Language (SysML) site, 2007, http://www.omgsysml.org/
[16] Johar, A., Stetter, R., 2008, A Proposal for the Use of Diagrams of UML for Mechatronics Engineering, The 10th International Design Conference - DESIGN 2008, Dubrovnik, Croatia, 19-22 May: 1287-1294.
[17] La Rocca, G., van Tooren, M.J.L., 2006, A modular reconfigurable software tool to support distributed multidisciplinary design and optimisation of complex products, The 16th CIRP International Design Seminar, Kananaskis, Alberta, Canada, 16-19 July.
[18] Jung, C., 2006, Anforderungsklärung in interdisziplinärer Entwicklungsumgebung, Verlag Dr. Hut, München.
[19] Maletz, M., Blouin, J.-G., Schnedl, H., Brisson, D., Zamazal, K., 2007, A Holistic Approach for Integrated Requirements Modelling in the Product Development Process, The 17th CIRP Design Conference - The Future of Product Development, Berlin, Germany, 26-28 March: 197-207.
[20] Franke, H.-J., 1976, Untersuchungen zur Algorithmisierbarkeit des Konstruktionsprozesses, VDI Verlag GmbH, Düsseldorf.
[21] Anggreeni, I., van der Voort, M.C., 2008, Classifying Scenarios in a Product Design Process: a study towards semi-automated scenario generation, The 18th CIRP Design Conference - Design Synthesis, Enschede, The Netherlands, 7-9 April.
[22] Brouwer, M., van der Voort, M.C., 2008, Scenarios as a Communication Tool in the Design Process: Examples from a Design Course, The 18th CIRP Design Conference - Design Synthesis, Enschede, The Netherlands, 7-9 April.
[23] Miedema, J., van der Voort, M.C., Lutters, D., van Houten, F.J.A.M., 2007, Synergy of Technical Specifications, Functional Specifications and Scenarios in Requirements Specifications, The 17th CIRP Design Conference - The Future of Product Development, Berlin, Germany, 26-28 March: 235-245.
[24] Stechert, C., Franke, H.-J., 2007, Requirement-Oriented Configuration of Parallel Robotic Systems, The 17th CIRP Design Conference - The Future of Product Development, Berlin, Germany, 26-28 March: 259-268.
[25] Franke, H.-J., Krusche, T., 1999, Design decisions derived from product requirements, The 9th CIRP Design Conference - Integration of Process Knowledge into Design, Enschede, The Netherlands, 24-26 March: 371-382.
[26] Stechert, C., Franke, H.-J., 2008, Managing Requirements as the Core of Multi-Disciplinary Product Development, The 18th CIRP Design Conference - Design Synthesis, Enschede, The Netherlands, 7-9 April.
[27] Stechert, C., Pavlovic, N., Franke, H.-J., 2007, Parallel Robots with Adaptronic Components - Design Through Different Knowledge Domains, The 12th IFToMM World Congress, Besançon, France, 17-21 June.
[28] Rose, M., Keimer, R., Breitbach, E.J., Campanile, L.F., 2004, Parallel Robots with Adaptronic Components, Journal of Intelligent Material Systems and Structures, 15/9-10: 763-769.
[29] Rose, M., Keimer, R., Algermissen, S., 2003, Vibration Suppression on High Speed Parallel Robots with Adaptronic Components, The 10th International Congress on Sound and Vibration (ICSV), Stockholm, Sweden, 7-10 July.
[30] Otremba, R., 2005, Systematische Entwicklung von Gelenken für Parallelroboter, Logos Verlag, Berlin.
[31] Pavlovic, N., Keimer, R., 2008, Improvement of Overall Performance of Parallel Robots by Adapting Friction of Joints Using Quasi-Statical Clearance Adjustment, Adaptronic Congress 2008, Berlin, Germany, 20-21 May.

31

Modelling Product and Partners Network Architectures to Identify Hidden Dependencies

S. Zouggar, M. Zolghadri, Ph. Girard IMS-LAPS, Bordeaux University, 351, cours de la libération, Talence 33405, France [email protected]

Abstract
This paper explores the mutual dependencies between partners of a New Product Design (NPD) project during the realization phase, highlighting bidirectional influences between components and partners. Product and network architectures are linked through the gBOMO (generalised Bill Of Materials and Operations), which expresses both the use of product components and the interventions of partners in the network. We develop an approach for building a dependency strength matrix; analysing this matrix reveals obvious as well as hidden dependencies between partners, which should be anticipated in the early stages of the NPD project. The approach is illustrated through its application to an engine.
Keywords: Partners network, Product architecture, Logical dependencies, NPD project.

1 INTRODUCTION

In the current competitive economic context, companies seek to develop sustainable partnerships through a durable network of partners, commonly called a supply chain, in order to increase their responsiveness and innovation capability. Such partnerships are crucial for SMEs, whose means and capacities are limited. However, taking part in a partnership means sharing not only opportunities but also risks. The question of supply chain design is therefore critical for all companies, and especially for SMEs with limited resources. The scientific literature contains a large body of work in this field, see for example [1]; more recently, Meixell and Gargeya provided an interesting literature review [2]. The difficulty in designing a supply chain lies in deciding among feasible options such as the number and location of production facilities, the amount of capacity at each facility, the assignment of each market region to one or more locations, and supplier selection for sub-assemblies, components and materials [3]. The work in [4], discussed in [5], distinguishes plant sizing, product/material selection and allocation decisions, which belong to the strategic level, from tactical decisions such as production and inventory levels.

However, companies must often cope with failures occurring at advanced stages of the NPD project, due either to inadequate components or to partnership dysfunctions undetected in the early stages of decision-making cascades. [6] consolidates this idea, arguing that the efficiency of the various collaborative networks (with customers, suppliers and competitors), such as supply chains, influences the success of new product introduction. Their study highlights that technological collaboration within the supply chain has a positive impact on innovation capability and shows that suppliers are important contributors to product innovation. Girard and Robin [7] also point out the necessity of focusing not only on the product but also on the relations between designers. They argue that the design process is an outcome of a collaboration process and propose a methodology to manage these collaborations. Studying the design and development of the partners' network while the product itself is designed and developed is therefore a relevant way to identify future potential dysfunctions.

This study presents supply chain design and development from the point of view of the company that launches the NPD project, called here the Focal Company. To do so, it is first necessary to identify the links between the architectures of the product and of the network (or supply chain). Based on these links, logical and temporal dependencies between the partners and the focal company are identified and qualified, which allows hidden dependencies within the supply chain to be highlighted. The architectures of the product and the network are modelled through interconnected formalisms which reveal direct and indirect interfaces intra-product (between its components) and intra-network (between its partners).

This paper is organised as follows. A brief state of the art is presented in section two. Sections three and four present the necessary concepts of the approach. Section five describes the case study. Some conclusions and perspectives end the paper.

2 SOME FUNDAMENTAL AND RELATED WORKS

Ulrich [8] defines product architecture as "the scheme by which the function of product is allocated to physical components". He reminds us that product architecture consists not just of a certain number of components, but also of the way they work together and are assembled. The notion of interaction between components is also evoked by [9]. Complexity emerges from these relationships, which are often hard to identify and model, and thorny to monitor; modelling mutual dependency links is therefore of utmost importance. This point is also underlined by [10], who argues that the product architecture, when properly defined and articulated, can serve as a coordination mechanism. Various product characteristics have consequences (enabling or constraining) on decisions made during the product life cycle.

Roughly speaking, two kinds of architecture can be distinguished: modular and integral [8], [11], [12]. A modular architecture includes a one-to-one mapping from functional elements to physical components of the product; modules are independent and have clear interfaces between them [13]. According to [14], modularity is an interesting way of providing flexibility in technical development without entirely modifying the design. Hölttä and Otto in [15] outline the characteristic of a good module: the ease with which the module could be redesigned without impacting its interfaces and the rest of the product. Integral architectures include a complex mapping between functional elements and physical components. [10] uses a Function-Component Allocation matrix (FCA, in short) to define this mapping, with the product's functions listed in columns and its components in rows; through this approach he provides a descriptive product architecture framework and shows how it is linked to many decisions across the domains of product, process and supply chain. Current research in the literature also provides considerable tooling to coherently identify and achieve reliable technical solutions answering technical specifications. The Design Structure Matrix (DSM) is one of them: used to represent the architecture, it has been studied extensively, for instance by [9], [16]. [17] uses this tool to model interdependency and to make explicit the likelihood, impact and risks of changes, together with their propagation effects.

3 MODULAR PRODUCT

3.1 Modelling the product architecture

The concept of product modularization shows that a product is a complex system made up of many interacting parts. To master this complexity, the product is designed as a set of sub-assemblies (sub-systems) whose assembly constitutes the new product. Through product modularization, the manufacturer can create many products by assembling different sub-assemblies within a short product development lead time.


Figure 1: Product modules interfaces.

Product modularity influences the network architecture by imposing interfaces between pre-defined modules. Figure 1 shows a final product provided by the focal company: X is the core of the final product, to which the modules a, b, c, d and e, made by five different partners, are attached. Interfaces are represented by dotted lines. For example, module (a) has three interfaces, with X, (c) and (b), whereas module (c) interfaces with X, (a) and (d). Connections between parts of a product, authorized through interfaces, might be of different types: energy, movement and data [18]. Two other links between the modules can also be defined: geometrical constraints (all spatial positioning constraints of modules on the product) and physical constraints (all electromagnetic, thermodynamic and mechanical constraints). When two modules are interfaced, it means that at least one of the quoted connections exists. These connections are easily represented in the components linkage matrix (Table 1).

      a   b   c   d   e
 a    -   1   1   0   0
 b    1   -   0   0   0
 c    1   0   -   1   0
 d    0   0   1   -   1
 e    0   0   0   1   -
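To make the structure concrete, such a linkage matrix can be held as a simple symmetric adjacency structure. The sketch below is our illustration, not the authors' implementation; the module names a–e follow Table 1.

```python
# Components linkage matrix of Table 1 as a symmetric adjacency set.
# An entry (x, y) means modules x and y share at least one interface.
LINKS = {("a", "b"), ("a", "c"), ("c", "d"), ("d", "e")}

def linked(x: str, y: str) -> bool:
    """Return True if modules x and y are interfaced (order-independent)."""
    return (x, y) in LINKS or (y, x) in LINKS

if __name__ == "__main__":
    for pair in [("a", "c"), ("b", "c"), ("d", "e")]:
        print(pair, "->", linked(*pair))   # (a,c) True, (b,c) False, (d,e) True
```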

Table 1: Components linkage matrix.

Based on this configuration, the goal is to model the corresponding realization process, which requires the involvement of several collaborating partners. The next step therefore consists of modelling the connections between the focal company and its partners (suppliers, sub-contractors) through the gBOMO (generalised Bill-Of-Materials and Operations).

3.2 From the Bill-Of-Materials to the BOMO

Starting from the traditional, well-known Bill-Of-Materials, which partially represents the product architecture, this concept is extended into the gBOMO in order to link the product and network architectures together. All potential connections must be outlined in order to perceive the gradual intervention of partners. A BOM shows the participation of parts (sub-assemblies, intermediate parts, raw materials) in the production of a parent part; its usefulness is perceived through its usage in master production planning. Generally, two kinds of BOM exist (Jiao et al., 2000) [19]: the eBOM (engineering BOM) and the mBOM (manufacturing BOM). The distinction between the two is that an eBOM structures the way a product is designed and, according to Jiao, consists of functional assemblies of subsystems, while an mBOM defines the way a product is produced. In both cases, the BOM encloses the number of items (raw materials or sub-assemblies) used to produce the parent sub-assembly. A BOM is represented in a simple way through a table where the components are put close to the parent assembly, with the quantity of each component needed to make one unit of the parent assembly.

Going beyond the BOM, [19] introduces the concept of the BOMO, a fusion of the BOM and the Bill-Of-Operations (BOO), to facilitate better production planning and control, order processing and engineering change control. A BOO, represented by a process flow diagram, gives the production structure of a given product. As the fusion of these two concepts, the BOMO can specify the sequence of production operations required for making an intermediate part, a sub-assembly or a final product, as well as the materials and resources required at each operation. A kitting activity, consisting of preparing all necessary components, tools and fixtures, is added by Jiao before any operation.

3.3 Generalized Bill-Of-Materials and Operations, gBOMO

The gBOMO concept (see Figure 2) was introduced in [20] as an adaptation of Jiao's BOMO [19]. This representation jointly gathers the technical data of the BOO (Bill Of Operations) and the BOM (Bill Of Materials) of the considered product. The BOM allows perceiving the connections between the focal company and a subset of its major suppliers.


The BOO, in turn, allows the definition of the collaboration with sub-contractors. It also contains complementary data describing the sequence of synchronisation situations. The employment of the BOMO is justified by the purpose of giving managers enough data to support them in production planning and control tasks; that is the reason why the kitting activity (preparing materials, tools and resources before any activity, especially assembly) is also used.
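The paper defines the gBOMO graphically (Figure 2). Purely as an illustration of the idea that operations carry both product and network information, the following is a minimal data model of our own devising — the class, field and partner names are assumptions, not the authors' formalism.

```python
# Illustrative gBOMO-like data model (our sketch, not the authors' formalism):
# each operation records the partner that performs it and the parts it
# consumes, so product links and network links stay joined in one structure.
from dataclasses import dataclass, field

@dataclass
class Operation:
    name: str                  # e.g. "shipping", "subcontracting", "assembly"
    partner: str               # focal company, supplier or sub-contractor id
    inputs: list = field(default_factory=list)   # consumed parts/materials
    output: str = ""           # produced intermediate part or final product
    kitting: bool = False      # kitting kept only before assembly operations

# One possible fragment of product X's realization process:
steps = [
    Operation("subcontracting", "P", inputs=["S3_part", "S4_part"], output="sub1"),
    Operation("assembly", "FC", inputs=["sub1", "S5_part"], output="X", kitting=True),
]
```

Because each operation carries both a partner and the parts it consumes, walking such a structure yields exactly the junction points used in the next subsection.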

Figure 2: gBOMO representation of the product X (from [20]).

The following points represent the differences of the gBOMO compared with the BOMO concept:
• Purchased parts and raw materials are gathered in one class.
• The sub-contracted processes are directly connected to the external partner by a bi-lateral connection.
• The systematic use of the kitting process is dropped; kitting is maintained for assembly processes only.
• When no confusion is possible, the intermediate parts before the kitting process are not represented in the model.

The idea of the formalism is to represent the technical data (BOO and BOM) of a product jointly, from a given point of view. This means that the formalism is applied at the analyst-manager's decision level; the aggregation mechanisms of [21] can be used in this modelling approach too.

3.4 Works connection graph

The gBOMO formalism introduced previously allows visualizing the expected execution of the product realization process and its requirements in terms of components, data and external interventions. To observe chronologically the necessary interventions of the different actors involved in the realization process, a works connection graph can be identified.

To obtain the works connection graph (see Figure 3) from the gBOMO, the following steps ought to be followed:
• Associate a node in the network with each activity of shipping, subcontracting and assembling. Each node of this graph represents a work (or activity). These activities vary according to the nature of the performed operations: shipping, subcontracting, assembling. In other words, the nodes can be seen as junction points where the intervention of the concerned partners is required to finalize the current activity and to allow the execution of the next one.
• Link two adjacent nodes involved in the same workflow by an edge. The edges of the graph represent the antecedence relationship between the various activities.
• Neglect intermediate activities which do not require partners' participation.

The junction points identified in this graph are then: 1) shipping: J1, J2, J3, J4, J5, J6; 2) subcontracting: J7, J8, J11; 3) assembly: J9, J10, J12.

Figure 3: Works connection graph (node types: assembling work; supplier/sub-contractor work).

In the works connection graph, the valuation of the edges is of great help: it distinguishes the different relationships between actors (the focal company and the partners). The valuation between two adjacent nodes A and B expresses their mutual influence, or the criticality level of their complementarity. In a real application case, these valuations are obtained after discussions and brainstorming among the company's experts. The scale of criticality is defined as follows: 1 stands for high criticality, 2 for average criticality and 3 for low criticality.
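A works connection graph with valuated edges is simply a small weighted graph. The sketch below is our encoding of a subset of Figure 3, not the authors' tool; the paper gives only the amplified path products of section 4.1, not every individual edge value, so the weights on intermediate edges are assumed to be 1 and the remaining weights are chosen to reproduce the worked example.

```python
# Junction-only subset of Figure 3, directed along the workflow.
# Edge weights are expert criticalities (1 = high, 2 = average, 3 = low).
GRAPH = {
    "J1": {"J7": 2},   # S1's first shipment enters the workflow at J1
    "J2": {"J9": 3},   # S1's second shipment enters at J2
    "J7": {"J9": 1},
    "J9": {"J11": 1},
    "J11": {"J12": 2},
    "J6": {"J12": 2},  # S5's shipment enters at J6
    "J12": {},
}
# Which junction nodes each partner's work feeds into:
PARTNER_NODES = {"S1": ["J1", "J2"], "S5": ["J6"]}
```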

For example, the value of the edge between J1 and J7 equals 2: the work performed at node J1 has an average criticality for the completion of the work at junction point J7, and vice versa.

4 PARTNERS DEPENDENCY MODELLING

In [20] it is suggested to follow two steps before analysing the network dependency: gBOMO identification and the synchronization graph. The situation is different for the works connection graph and its edge valuation. Through the interpretation of the gBOMO as a works connection graph, the focus is put on the complementarity of the transformations made by the various partners. Edge valuation gives the relevant linkage between adjacent partners, useful for analysing all dependencies, hidden or explicit, between partners. The global approach to determine these dependencies is illustrated in Figure 4. Once the gBOMO is identified, the works connection graph is extracted from it using the rules presented in §3.4, and the edges are valuated. From this model, the modelling of partners' dependencies can begin: after obtaining the Amplified Work Criticality Level matrix ψ(i, j), the partners dependency strength matrix is deduced. By extracting the suppliers dependency strength matrix from it and comparing it to the product components linkage matrix, the hidden dependencies between partners can be highlighted.

Figure 4: Modelling approach (gBOMO identification → works connection graph representation → valuation of edges → Amplified Work Criticality Level matrix elaboration → partners dependency strength matrix → extracted suppliers dependency strength matrix, compared with the product components linkage matrix → highlighting of hidden dependencies).

4.1 Amplified Work Criticality Level Matrix

The Amplified Work Criticality Level (AWCL) matrix determines the mutual criticality between two partners — not only adjacent ones, but every couple of partners. It is assumed that criticality is amplified (multiplication of the dependency values of the edges) along the paths of the graph. This amplification models the criticality of the value added to items along the workflow, and reflects two facts: 1) any little error or mistake made at the beginning of a workflow can have important influences on the final result of the workflow; 2) the later such errors, mistakes or dysfunctions are detected, the higher the cost of the correction activities. The AWCL matrix is then defined as ψ(i, j), i, j ∈ {Partners}.

The assessment of the paths (P) between two entities (i, j) constitutes the criticality value. It is done by evaluating the value of the path from each entity to their next common junction point, i.e. the first point where (i) and (j) both contribute to the realization of an activity; for instance, J12 is the next common junction between S1 and S5 (see Figure 5). Among all possible paths, the minimal path value is used: this minimal value corresponds to the highest criticality, because it determines the real impact that the work of one partner can have on the other over the whole work realization. When a partner intervenes at different steps, as S1 does, involving different criticality values, the minimum value — representing the highest criticality — is likewise chosen; this takes and amplifies the potential risks within the project. For instance, there are two paths between S1 and its next common junction point J12 with S5. The calculation is then as follows:

P(S1 → J12) = J1 → J7 → J9 → J11 → J12 = 2 × 2 = 4
P(S1 → J12) = J2 → J9 → J11 → J12 = 3 × 2 = 6
P(S5 → J12) = J6 → J12 = 2
ψ(1, 5) = min{min(4, 6), 2} = 2

To summarize the calculation of the AWCL matrix, we can write:

ψ(i, j) = min{min P(i → common junction), min P(j → common junction)}

Figure 5: AWCL examples.

In this way, the AWCL values for all partners are found. The results are summarized in Table 2.

      S1   S2   S3   S4   S5   P    Q    R    FC
S1    0    1    4    4    2    4    4    2    1
S2    1    0    2    2    2    2    2    1    1
S3    4    2    0    2    2    2    1    2    1
S4    4    2    2    0    2    2    1    2    1
S5    2    2    2    2    0    2    1    2    1
P     4    2    2    2    2    0    1    2    1
Q     4    2    1    1    1    1    0    2    1
R     2    1    2    2    2    2    2    0    1
FC    1    1    1    1    1    1    1    1    0

Table 2: AWCL matrix.

Hypotheses:
• The focal company's work is highly critical to all of its partners, so these links equal 1.
• The main diagonal of the matrix equals 0.
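The ψ(i, j) computation is a shortest-path-style search in which the path "length" is the product of the edge criticalities and "shortest" means the smallest product. The sketch below is ours, not the authors' implementation; it reuses the GRAPH and PARTNER_NODES encoding from §3.4 and is only checked against the worked S1/S5 example.

```python
from itertools import product as cartesian

def path_products(graph, start, goal, seen=None):
    """Yield the product of edge values along every simple path start->goal."""
    seen = (seen or set()) | {start}
    if start == goal:
        yield 1
        return
    for nxt, w in graph.get(start, {}).items():
        if nxt not in seen:
            for rest in path_products(graph, nxt, goal, seen):
                yield w * rest

def awcl(graph, partner_nodes, i, j, junctions):
    """psi(i, j): minimum, over common junctions, of the cheapest amplified path."""
    best = None
    for junc in junctions:
        for a, b in cartesian(partner_nodes[i], partner_nodes[j]):
            pi = min(path_products(graph, a, junc), default=None)
            pj = min(path_products(graph, b, junc), default=None)
            if pi is not None and pj is not None:
                cand = min(pi, pj)          # keep the highest criticality
                best = cand if best is None else min(best, cand)
    return best

# With the Figure 3 subset of section 3.4:
# awcl(GRAPH, PARTNER_NODES, "S1", "S5", ["J12"])  ->  2, as in the text.
```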


4.2 Dependency strength matrix elaboration δ(i, j)

The dependency strength, noted δ(i, j), is defined as the value that characterizes the mutual dependency between partner i and partner j. As the criticality scale goes from 1 (highest) through 2 (average) to 3 (lowest), the minimal amplified criticality value obtained between any couple of partners represents the highest dependency existing between them. The dependency strength can therefore be expressed as a decreasing function of the criticality value:

δ(i, j) = [ψ(j, i)]⁻¹, for all i, j (see Figure 6).

      S1   S2   S3   S4   S5   P    Q    R    FC
S1    ∞    1    1/4  1/4  1/2  1/4  1/4  1/2  1
S2    1    ∞    1/2  1/2  1/2  1/2  1/2  1    1
S3    1/4  1/2  ∞    1/2  1/2  1/2  1    1/2  1
S4    1/4  1/2  1/2  ∞    1/2  1/2  1    1/2  1
S5    1/2  1/2  1/2  1/2  ∞    1/2  1    1/2  1
P     1/4  1/2  1/2  1/2  1/2  ∞    1    1/2  1
Q     1/4  1/2  1    1    1    1    ∞    1/2  1
R     1/2  1    1/2  1/2  1/2  1/2  1/2  ∞    1
FC    1    1    1    1    1    1    1    1    ∞

Figure 6: Dependency strength matrix (S1–S5: suppliers; P, Q, R: sub-contractors; FC: focal company).

4.3 Analysis

According to the results of this matrix it can be noticed, unsurprisingly, that all partners are more or less dependent on each other. More interesting observations can be deduced by comparing this matrix with the components linkage matrix, which reveals some hidden dependencies (Figure 7). The focus can be put on the suppliers only, through their dependency sub-matrix extracted from the global dependency strength matrix. This sub-matrix is adjusted by replacing all values lower than or equal to 1/4 by 0, such dependencies being considered negligible.

Adjusted suppliers dependency strength matrix:

      S1   S2   S3   S4   S5
S1    ∞    1    0    0    1/2
S2    1    ∞    1/2  1/2  1/2
S3    0    1/2  ∞    1/2  1/2
S4    0    1/2  1/2  ∞    1/2
S5    1/2  1/2  1/2  1/2  ∞

Components linkage matrix:

      a   b   c   d   e
 a    -   1   1   0   0
 b    1   -   0   0   0
 c    1   0   -   1   0
 d    0   0   1   -   1
 e    0   0   0   1   -

Figure 7: Comparison of the adjusted suppliers dependency strength matrix and the components linkage matrix.

Different possible situations can therefore be identified, summarized in the following table:

Category      Supplier links   Components links
Category 1    Do not exist     Do not exist
Category 2    Do not exist     Exist
Category 3    Exist            Do not exist
Category 4    Exist            Exist

Table 3: Situations categorization.

Category 1: This is not a critical case: no link between the suppliers and no link between the components. It does not require any specific attention (e.g. S2 and S3 are not dependent and b and c are not linked).

Category 2: There is no dependency link between the suppliers, but the components that they supply are linked (e.g. S1, S3 and a, c). These are the cases of hidden links, and the most important sources of potential dysfunctions. The focal company has to master the manufacturing requirements of both suppliers concerning the realization phase by anticipating them within the design process.

Category 3: The suppliers are dependent, but the components that they supply are not linked (e.g. S1, S4 and a, d). The focal company has to define explicit design rules for each couple of dependent partners and identify the way they impact each other's work; this impact can lie in manufacturing capacity planning or in direct/indirect temporal synchronization. The focal company has to make sure that the two partners handle their relationship and dependency in a face-to-face collaboration.

Category 4: The suppliers are dependent and the components that they provide are linked (e.g. S1, S2 and a, b). In this case, all dependencies are obvious, and the focal company has specific procedures and treatments corresponding to this case.
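The four categories follow mechanically from the two matrices. The sketch below is ours: it derives δ from ψ, applies the ≤ 1/4 cut-off, and labels supplier pairs, assuming ψ and the linkage matrix are supplied as plain dicts keyed by pairs.

```python
from fractions import Fraction

def dependency_strength(psi: dict) -> dict:
    """delta(i, j) = 1 / psi(i, j); the psi = 0 diagonal is simply skipped."""
    return {pair: Fraction(1, v) for pair, v in psi.items() if v > 0}

def categorize(delta: dict, links: dict, threshold=Fraction(1, 4)) -> dict:
    """Map each supplier pair to its category (1..4) of Table 3."""
    cats = {}
    for pair, d in delta.items():
        dependent = d > threshold            # adjusted: values <= 1/4 become 0
        interfaced = links.get(pair, 0) == 1
        cats[pair] = {(False, False): 1, (False, True): 2,
                      (True, False): 3, (True, True): 4}[(dependent, interfaced)]
    return cats

# Three pairs from the 5-supplier case (psi from Table 2, links from Table 1):
psi = {("S1", "S2"): 1, ("S1", "S3"): 4, ("S1", "S4"): 4}
links = {("S1", "S2"): 1, ("S1", "S3"): 1, ("S1", "S4"): 0}  # a-b, a-c, a-d
print(categorize(dependency_strength(psi), links))
# -> {('S1','S2'): 4, ('S1','S3'): 2, ('S1','S4'): 1}
```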

Category 2 highlights the hidden dependencies that it is interesting to exploit in the early phases of an NPD project. It allows early consideration of the constraints lying on the realization phase, ensuring coherent choices during the design phase with respect to the way the network of partners works.

5 APPLICATION CASE

The application case is a simplified version of the structure of an engine. The basic components of the engine and their assignment to suppliers are given below; the components linkage matrix is built according to the existing interfaces between the components.

S1 → C1: Steel
S2 → C2: Iron
S3 → C3: Piston
S4 → C4: Connecting rod
S5 → C5: Oil pump, oil sump, cylinder head gasket, cylinder head cover
S6 → C6: Flywheel
S7 → C7: Cylinder head block
S8 → C8: Bearing cap
S9 → C9: Timing belt
S10 → C10: Spark plug
S11 → C11: Alternator
S12 → C12: Lead assay
FC → X: Camshaft, crankshaft, cylinder block

The gBOMO of the engine is represented in Figure 8. The realization process of the engine is well known, which allows a complete gBOMO to be obtained. It is not always obvious to get such full knowledge of the realization phase, because of potential novelties compared with a known case; such additions imply new linkages and dependencies that are not always perceptible at first view.

Figure 8: gBOMO of the engine.

The works connection graph extracted from the gBOMO makes explicit the link value between each couple of adjacent partners involved in a local work (see Figure 9); it is obtained according to the steps enounced in section 3.4 and by interviewing experts in mechanical engineering.

The developed approach is applied to this case study, gradually obtaining the AWCL matrix, the dependency strength matrix and the adjusted suppliers dependency matrix.
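Under the same sketch conventions as before, the engine case only changes the input data; the categorization of §4.3 can be reused unchanged. The example below feeds it three of the component/supplier pairs discussed in the analysis at the end of this section, with ψ values from Table 5 and interfaces from Table 4.

```python
# (psi value from Table 5, component interface from Table 4) for three of
# the analysis examples; categorize() and dependency_strength() are the
# sketches from section 4.3.
psi = {("S5", "S7"): 4,    # cylinder head cover / cylinder head block
       ("S3", "S9"): 2,    # piston / timing belt
       ("S9", "S11"): 2}   # timing belt / alternator
links = {("S5", "S7"): 1, ("S3", "S9"): 0, ("S9", "S11"): 1}
print(categorize(dependency_strength(psi), links))
# -> {('S5','S7'): 2, ('S3','S9'): 3, ('S9','S11'): 4}
```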

Figure 9: Works connection graph for the engine.

The engine components linkage matrix and the resulting AWCL matrix are given in Tables 4 and 5.

      C1  C2  C3  C4  C5  C6  C7  C8  C9  C10 C11 C12 X
C1    -   1   0   1   1   0   0   0   0   0   0   0   1
C2    1   -   0   1   1   0   0   0   0   0   0   0   1
C3    0   0   -   1   0   1   0   1   0   0   0   0   0
C4    1   1   1   -   0   0   0   1   0   0   0   0   1
C5    1   1   0   0   -   0   1   0   0   0   0   0   1
C6    0   0   1   0   0   -   0   0   1   0   0   0   0
C7    0   0   0   0   1   0   -   0   0   0   0   0   0
C8    0   0   1   1   0   0   0   -   0   0   0   0   0
C9    0   0   0   0   0   1   0   0   -   0   1   1   0
C10   0   0   0   0   0   0   0   0   0   -   0   0   0
C11   0   0   0   0   0   0   0   0   1   0   -   1   0
C12   0   0   0   0   0   0   0   0   1   0   1   -   0
X     1   1   0   1   1   0   0   0   0   0   0   0   -

Table 4: Engine components linkage matrix.

      S1  S2  S3  S4  S5  S6  S7  S8  S9  S10 S11 S12 P   Q   FC
S1    0   2   1   2   2   2   2   2   2   2   2   2   2   2   1
S2    2   0   1   2   2   2   2   2   2   2   2   2   2   2   1
S3    1   1   0   1   2   2   4   2   2   2   2   2   6   1   1
S4    2   2   1   0   2   2   4   2   2   2   2   2   6   2   1
S5    2   2   2   2   0   2   4   2   2   2   2   2   4   2   1
S6    2   2   2   2   2   0   2   2   2   2   2   2   2   2   1
S7    2   2   4   4   4   2   0   4   2   2   2   2   2   4   1
S8    2   2   2   2   2   2   4   0   2   2   2   2   6   2   1
S9    2   2   2   2   2   2   2   2   0   2   2   2   2   2   1
S10   2   2   2   2   2   2   2   2   2   0   2   2   2   2   1
S11   2   2   2   2   2   2   2   2   2   2   0   2   2   2   1
S12   2   2   2   2   2   2   2   2   2   2   2   0   2   2   1
P     2   2   6   6   4   2   2   6   2   2   2   2   0   6   1
Q     2   2   1   2   2   2   4   2   2   2   2   2   6   0   1
FC    1   1   1   1   1   1   1   1   1   1   1   1   1   1   0

Table 5: AWCL matrix for the engine.

      S1   S2   S3   S4   S5   S6   S7   S8   S9   S10  S11  S12  P    Q    FC
S1    ∞    1/2  1    1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1
S2    1/2  ∞    1    1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1
S3    1    1    ∞    1    1/2  1/2  1/4  1/2  1/2  1/2  1/2  1/2  1/6  1    1
S4    1/2  1/2  1    ∞    1/2  1/2  1/4  1/2  1/2  1/2  1/2  1/2  1/6  1/2  1
S5    1/2  1/2  1/2  1/2  ∞    1/2  1/4  1/2  1/2  1/2  1/2  1/2  1/4  1/2  1
S6    1/2  1/2  1/2  1/2  1/2  ∞    1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1
S7    1/2  1/2  1/4  1/4  1/4  1/2  ∞    1/4  1/2  1/2  1/2  1/2  1/2  1/4  1
S8    1/2  1/2  1/2  1/2  1/2  1/2  1/4  ∞    1/2  1/2  1/2  1/2  1/6  1/2  1
S9    1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  ∞    1/2  1/2  1/2  1/2  1/2  1
S10   1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  ∞    1/2  1/2  1/2  1/2  1
S11   1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  ∞    1/2  1/2  1/2  1
S12   1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  ∞    1/2  1/2  1
P     1/2  1/2  1/6  1/6  1/4  1/2  1/2  1/6  1/2  1/2  1/2  1/2  ∞    1/6  1
Q     1/2  1/2  1    1/2  1/2  1/2  1/4  1/2  1/2  1/2  1/2  1/2  1/6  ∞    1
FC    1    1    1    1    1    1    1    1    1    1    1    1    1    1    ∞

Table 6: Dependency strength matrix for the engine.

The analysis of the suppliers dependency strength matrix compared with the components linkage matrix (see Table 7) allows achieving the previously evoked categorization. Table 7 juxtaposes the components linkage matrix of Table 4 (without X) with the suppliers sub-matrix of Table 6, adjusted as before by replacing all values lower than or equal to 1/4 by 0:

      S1   S2   S3   S4   S5   S6   S7   S8   S9   S10  S11  S12
S1    ∞    1/2  1    1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2
S2    1/2  ∞    1    1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2
S3    1    1    ∞    1    1/2  1/2  0    1/2  1/2  1/2  1/2  1/2
S4    1/2  1/2  1    ∞    1/2  1/2  0    1/2  1/2  1/2  1/2  1/2
S5    1/2  1/2  1/2  1/2  ∞    1/2  0    1/2  1/2  1/2  1/2  1/2
S6    1/2  1/2  1/2  1/2  1/2  ∞    1/2  1/2  1/2  1/2  1/2  1/2
S7    1/2  1/2  0    0    0    1/2  ∞    0    1/2  1/2  1/2  1/2
S8    1/2  1/2  1/2  1/2  1/2  1/2  0    ∞    1/2  1/2  1/2  1/2
S9    1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  ∞    1/2  1/2  1/2
S10   1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  ∞    1/2  1/2
S11   1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  ∞    1/2
S12   1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  1/2  ∞

Table 7: Components linkage matrix and adjusted suppliers dependency strength matrix compared.

Category 1 is illustrated by the non-linked components flywheel/connecting rod and the non-dependent partners S6/S4. In this case no specific treatment is required during the design phase, neither on the components interface nor in the suppliers' relations.

Suppliers S5/S7, with a negligible dependency but supplying the linked components cylinder head cover/cylinder head block, belong to Category 2. In order to avoid potential dysfunctions, the focal company has to consider the realization aspects of both suppliers as early as possible.

Category 3 is shown through the example of the non-linked components piston/timing belt and the dependent suppliers S3/S9. This case requires defining explicit design rules for each partner, taking into account each partner's work specifics and ensuring operational compatibility.

For Category 4, we evoke the example of the linked components timing belt/alternator and the dependent suppliers S9/S11.

6 CONCLUSION & PERSPECTIVES

This paper highlights and reinforces the importance of partners' dependencies, and especially of hidden ones, which it is essential to identify as soon as possible. This area has been under-explored, academics often assuming that only the dependencies between directly linked components and partners matter. The paper explores the implications of setting up valuated arcs between junction points, which induce a relative dependence between two adjacent nodes; this adjacency is then extended to the whole set of partners. The major contribution of this research lies in developing a formal approach for analysing the network (gBOMO, works connection graph) and providing useful managerial insights through the dependency strength matrix, which outlines the level of dependency existing between all the partners involved in the network. The approach has been applied to a specific product (an engine). Interesting interpretations come from the obtained results: 1) even if components are interfaced in a given product, their suppliers are not necessarily closely dependent; 2) even if partners seem to be "far" from each other, they are not necessarily so. The latter is the main pitfall with which many decision-makers are confronted. It is judicious to oversee and anticipate the


potential impact of an unexpectedly influential partner at an early stage. In future works, the analysis will be extended to the design phases using a Design-gBOMO. This study confirms that partners who are apparently not dependent can be truly dependent in the realization process (because of inherent timing or assembling constraints), so that their mutual dependency should be taken into account even while the product is designed with the participation of different actors.

7 REFERENCES

[1] Beamon, B.M., 1998, Supply chain design and analysis: Models and methods, International Journal of Production Economics, 55: 281-294.
[2] Meixell, M.J., Gargeya, V.B., 2005, Global supply chain design: A literature review and critique, Transportation Research Part E, 41: 531-550.
[3] Chopra, S., Meindl, P., 2004, Supply Chain Management: Strategy, Planning, and Operations, 2nd Edition, Pearson Prentice Hall, Upper Saddle River, NJ: 511-512.
[4] Alonso-Ayuso, A., Escudero, L.F., Garin, A., Ortuno, M.T., Perez, G., 2003, An approach for strategic supply chain planning under uncertainty based on stochastic 0-1 programming, Journal of Global Optimization, 26: 97-124.
[5] Shen, Z.-J.M., 2006, A profit-maximizing supply chain network design model with demand choice flexibility, Operations Research Letters, 34: 673-682.
[6] Nieto, M.J., Santamaria, L., 2006, Technological collaboration: Bridging the innovation gap between small and large firms, Working Paper 06-66, Business Economics Series 20.
[7] Girard, Ph., Robin, V., 2006, Analysis of collaboration for project design management, Computers in Industry, 57/8-9: 817-826.
[8] Ulrich, K.T., 1995, The role of product architecture in the manufacturing firm, Research Policy, 24: 419-440.
[9] Browning, T.R., Eppinger, S.D., 2002, Modeling impacts of process architecture on cost and schedule risk in product development, IEEE Transactions on Engineering Management, 49/4: 428-442.
[10] Fixson, S.K., 2005, Product architecture assessment: a tool to link product, process, and supply chain design decisions, Journal of Operations Management, 23/3-4: 345-369.
[11] Shibata, T., Yano, M., Kodama, F., 2005, Empirical analysis of evolution of product architecture: Fanuc numerical controllers from 1962 to 1997, Research Policy, 34: 13-31.
[12] Otto, K., Wood, K., 2001, Incorporating lifecycle costs into product architecture, Product Design, Prentice-Hall.
[13] Dahmus, J.B., Gonzalez-Zugasti, J.P., Otto, K.N., 2001, Modular product architecture, Design Studies, 22/5: 409-424.
[14] Ericsson, A., Erixon, G., 1999, Controlling Design Variants: Modular Product Platforms, Society of Manufacturing Engineers and American Society of Mechanical Engineers, Fairfield, NJ.
[15] Hölttä, K., Suh, E.S., de Weck, O., 2005, Trade-off between modularity and performance for engineered systems and products, The International Conference on Engineering Design ICED 05, Melbourne, Australia, 15-18 August.
[16] Sharman, D.M., Yassine, A.A., 2004, Characterizing complex product architectures, Systems Engineering Journal, 7/1: 35-60.
[17] Clarkson, P.J., Simons, C., Eckert, C., 2004, Predicting change propagation in complex design, Journal of Mechanical Design, 126: 788-797.
[18] Ulrich, K., Eppinger, S., 2003, Product Design and Development, McGraw-Hill.
[19] Jiao, J., Tseng, M.M., Ma, Q., Zou, Y., 2000, Generic Bill-of-Materials-and-Operations for high-variety production management, CERA, 4(8): 297-321.
[20] Zolghadri, M., Baron, C., Girard, P., 2008, Modelling mutual dependencies between products architecture and network of partners, International Journal of Product Development, to be printed.
[21] Zolghadri, M., Bourrieres, J.-P., 2006, Data aggregation mechanisms for production planning function of firms, Journal of Decision Systems, 15/4: 425-452.

Integrated Design at VIDA Centre Poland

Z. Weiss, R. Konieczny, J. Diakun, D. Grajewski, M. Kowalski
Virtual Design and Automation Centre (VIDA), Poznań University of Technology (PUT), Piotrowo 3 Street, PL-60-965 Poznań, Poland
[email protected]

Abstract
The modern design process forces a tight integration of the processes connected with product development. All steps of the design process are based on geometrical models created in CAD systems; the geometry of the product is used for programming the virtual environment of the product, its behaviour, its manufacturing and so on. It must be remembered that an integrated design process demands special skills and knowledge from engineers. This paper presents a brief description of the VIDA Centre at Poznań University of Technology, its structure and its skills in the area of integrated design. A couple of examples of projects realized in the VIDA Centre environment as integrated product design are described as well.
Keywords: Product development, Integrated design, VIDA Centre environment, VIDA skills in integrated design

1 INTRODUCTION

Nowadays, product design is understood rather as product development over the whole life cycle (Fig. 1). The design should take into consideration all the stages of the product life cycle: manufacturing, exploitation, recycling and so on. This forces the integration of all processes into product development, which in turn requires new design methods as well as methods for managing the design process.


Figure 1: Integrated product development New product development embraces all stages including the defining of requirements, product planning, conceptual design, product engineering, manufacturing engineering, product and process simulation and validation, all the way up to the realization of production. To manage the integrated design process the PDM/PLM (Product Data Management/Product Life Cycle Management) system is becoming more common. The product development is based on virtual model of product and on virtual method used for its research, testing, simulation, description and all activities necessary for its production and is named Virtual Product Development. Virtual Product Development can be considered as a complete set of activities that are necessary to bring new

CIRP Design Conference 2009

40

devices, technologies, and services to the marketplace. These activities span the entire product life-cycle from the identification of a market opportunity or need, through design, prototyping, testing, manufacturing and distribution, and end of product life. The application of computer technologies enables virtual product design that incorporates all the stages of the product life cycle, from product and production planning, through the creation of prototypes, the modification of its geometry and its functionality, the simulation of manufacturing (machining and assembly), its automation and determination of production costs. Product creation processes are increasingly performed digitally to reduce the numbers of hard prototypes. This approach results from the need for increased flexibility and a faster response to the market. To develop a product, VR (Virtual Reality) technology is used more and more often. Using VR technology to develop a product sometimes involves the designer importing the geometrical model from different CAD systems. A Digital Mock Up for an analysis of the behaviour of the product may also be used. Reliable communication system between designers working at different work stations is the crucial point in this approach. Often for managing the whole design process a PDM system is used. A PDM system enables to organize the team working within the framework of a particular project and reflects the structure of the organization (team and its members) and the roles of particular members within the team and project (i.e. designer, approver, etc.). This may be achieved mainly by user management functionality. Specific for PDM systems are data and document management function that ensure the mirroring of the process characteristics for design (i.e. the approval or rejection of a document or a project's stage). The product is developed during its whole life cycle but the main development stage takes place during the design process. The PDM system is designed mainly for the management of documentation that may appear during work on the product and for the management of work

• fluid power technology, especially hydraulic linear drives,

flow, but the PLM system also embraces the other life cycle phases. The integration of product development creates interaction and overlap between the stages in the whole product development process and it is mainly achieved by:

• planning of modern machining technologies and CAM, • robotics and montage planning, • assembly and disassembly with regard of recycling,

• enhancement of new product development process,

• production process management,

• variety of product definition, • organizational context,

• design, automation and investigations of different machines and equipment, • machine design methodology, its optimisation, CAD and FEM, • mechatronics, automation of machines, • forming and possible application of modified face toothing,

control

and

• machining and its diagnostic,

• teaming. Integration can be perceived on three main levels. The basic layer concerns data, where it is ensured that the output data from the result of one stage in the product development and the input data for the next stage are of appropriate and compatible formats. The second layer is related to the tools based (mainly) on the computer tools being used in the activities at a particular stage of the process (mainly systems belonging to the CAx group) as well as tools that enable communication within the team. On the third, top layer, are the methods used in the process to ensure that resources are used as effectively and efficiently as possible. Methods should be assessed to ensure they are prepared in order to achieve maximum benefits by the team. 2 PUT AS ENVIRONMENT FOR VIDA CENTRE The PUT (Poznań University of Technology) research and development activities encompass the broad area of machine tool design and related manufacturing processes, as seen from the perspective that spans the whole product life cycle. PUT has a long tradition in machine tools design and their automation. It holds significant achievements as well as highly qualified personnel in this field. The information technologies for machines design process are used at PUT effectively for almost 20 last years. Numerous software systems, from CAD, CAD/CAM, CAPP, FEM, PPC together with production simulation systems are used at PUT. Institutes have numerous computer stands and well organized and equipped computer laboratories. The existing computer systems allow for virtual product design, starting from the stage of product and production planning through prototyping, geometric characteristics modification, fitness for use creation to manufacturing (e.g. machining and assembly) processes simulation and manufacturing costs computation. Nevertheless, in many cases, it is required to purchase new software or to upgrade the versions of the software or to work out the interfaces. The infrastructure of PUT was developed within the confines of several national projects (founded by Polish Science Committee) and European projects (founded by Tempus, Copernicus, Inco-Copernicus Programmes). The PUT has specific contributions to the advancement of the understanding the scientific and technological problems to be solved in home country. The many investigations have been made in cooperation with home country enterprises and with others universities. The results of research have made an important contribution to development of Polish science and industrial practice in the following areas:

planning,

• new cutting tools design and their investigations, • application of CAD, CAPP, CAD/CAP and PPC in industry, • flexible manufacturing systems and CIM, • design of production quality control systems, • design of metrology and diagnostic systems in mechanical enterprises, • automation of measurement and monitoring, • modelling and simulation of production capacity of products (machines, apparatus and devices) in range of assembly and disassembly with regard of recycling, • planning of modern machining technologies for machine elements taking into consideration costs of its realization, • practical applications of industrial information system. In the area of Poznań there are many mechanical enterprises and small and medium companies. PUT is especially focused on cooperation with those companies and their area of interests. The research groups of PUT for several years co-operate with numerous industrial enterprises in Poland, especially in Poznań City and Wielkopolska region. There are numerous amount of enterprises in Poznan and its nearby terrain, representing the following industrial branches: automotive industry (VW Poznań, MAN companies), motor industry (HCP Poznań Company), furniture industry (in Swarzedz and Oborniki cities), plastics industry (Wavin Buk Company), chemical industry (Beiersdorf-Lechia Company), food industry (Lech Browary Company), machine tools and machine equipment industry (JAFO, FAMOT companies), building industry (Metalplast Company in Poznań and in Oborniki Wielkopolskie). The VIDA Centre, which is described in the next chapter, exploits the infrastructure that has been prepared so far. Joint of individuals laboratories (research areas) by usage of Internet technology and state-of-the-art information technology in data and processes management allow to make the data available and to exchange the data generated within the next phases of product design and manufacturing processes development. It is the platform to join the VIDA Centre with other centres through networking. Such solution will also make easier to join the European research area and will allow the Institute to become a partner within the Framework Programme. 3 SCOPE OF DESIGN IN VIDA CENTRE The concept of creating VIDA was a consequence of the so far development of the Poznań University of Technology. The VIDA Centre integrated several stages of product and process design and enabled demonstration of the integrated design process as a whole as well as the selected steps only, if required. Connecting several laboratories of PUT over the Intranet and application of the cutting edge information technologies in the field of design, data, and process

41

management has resulted in sharing fully the information generated at the subsequent steps of product and process design. The organizational structure of the VIDA Centre is based on the structure of the Faculty of Mechanical Engineering and Management. The working groups are completed in dependence of current realized projects. The VIDA includes the following aspects that are innovative in comparison with current state of the research: •

Intranet is fully used to integrate the design process,



design is based on the virtual model of a product and manufacturing process,



a comprehensive demonstration of design process and its constituent elements is possible in virtual reality .environment,



new or emerging design and information technology is used, such as rapid prototyping, rapid tooling, DMU.



deep integration of the VIDA Centre with other centres will be possible thanks to usage of appropriate common software platform. The environment of VIDA can allow designers to digitally define and evolve product models in a more natural manner and that will provide designers with the implicit, intuitive methods for expressing and evolving their projects. Designers at the VIDA Centre are able to use the latest, up-to-date tools for CAD/CAM/CAE modelling, Virtual Reality, Virtual Prototyping, Reverse Engineering, Rapid Prototyping and Rapid Tooling methods. A virtual product created within the VIDA centre can enable the real-time visualization and review of the 3D product as it evolves, thereby will streamline a collaborative review and decision-making and driving innovation. The DMU can allow design teams to digitally build the product prototype and its environment, and then analyze it to gain an early insight into the key factors determining design quality, product performance, and ultimate market success. The VIDA environment can provide extensive support for industry. Using Rapid Prototyping and Simulating tools for testing and analysis can reduce and even eliminate the time and cost invested in build it / break it scenarios requiring multiple real prototypes. Reverse engineering technologies can be used for transforming hand-made design into virtual reality and for the reconstruction of lost design knowledge from previous designed products. Rapid Tooling and Rapid Manufacturing tools will shorten the time to market for short series products. VIDA also is able to disseminate the innovative technologies in the area of virtual product and process development in trainings. Worked out special applications that support the data management tool and the planning of manufacturing operations are based on the recorded knowledge rules. VIDA is involved in research in the area of Virtual Product Development on regional, national and European level supported by a video-conferencing system. On the regional level, VIDA cooperates with SMEs and manufacturing clusters, disseminate the result of projects and exchange the know-how and experience in cooperation with the Wielkopolska Chamber of Industry and Commerce. On the national level, VIDA Centre cooperates with the Polish Network SDPP/ProNet (Network of Excellence of Production Processes) and other Polish Technological Platforms. On the European level, the VIDA Centre is incorporated into the international research activities in the framework

42

of the European Research Area (as well as in Technological Platforms) through the EMIRAcle Research Association (earlier EU NoE VRL-KCiP). The collaborative design environment of VIDA is based on PDM system as a management system for supervision of design process and information flow. The PDM system enable preparation of the system towards particular project’s needs – it encompasses definition of users, groups, permissions, etc. reflecting roles and functions of the persons involved in the project, definition of project’s stages, tasks within them, persons to whom these element will be ascribed, input and output data and time constraints. Nowadays within the frames of VIDA there are working groups in following domains: 3D Modelling and Reverse Engineering, VR (Virtual Reality), Design for Manufacturing, Design for Assembly and Disassembly, Design for Recycling, Rapid Prototyping and Rapid Tooling, Production Planning and Simulation, Virtual Engineering, Design of Drive and Control systems and Quality Engineering (Fig. 2).

Figure 2: VIDA research area in design CAD modelling is an important step in a new-product design that is base for a further design and applications. The 3D modelling in CAD/CAM systems (i.e. CATIA, ProEngineer, IDEAS) is developed in connection within research in Rapid Prototyping, Reverse Engineering, Virtual Engineering and Virtual Reality applications. The CAD models are used for programming of manufacturing, assembling, disassembling processes and simulation of these processes too. In the area of production planning and simulation the modelling and simulation of workstations layout, modelling and simulation of material and information flow and visualization of manufacturing processes were developed. The Artificial Intelligence methods and Computer Aided Process Planning systems are used in this field. The VIDA carries out research of assembly sequence of machine parts and sets using the graph theory, analysis and investigation the technological ability of construction of part, sets and whole product, using the theory of Petri Nets, heuristic and genetic algorithms to balancing assembly line, designing for automated and robotized operation of assembly and disassembly, and investigation of CAx systems to designing of assembly structure and simulation assembly processes. In the domain of Virtual Engineering the use of Virtual Prototyping techniques for casting and plastic working was developed. The research addresses the problems of simulation systems application for modelling of mass and heat transfer, and microstructure modelling. Research in the domain of design of drive and control systems concerns on:

automation of technological machines and processes,



magneto-rheological shock and energy absorbers,



deliver common services of data transfer between design stands. EUResearch Area



electronic controllers of devices with magnetorheological fluid. In the area of Quality Engineering the following domains are investigated: the methodology for product design and process planning oriented towards quality,



usage of methods and tools for quality improvement,



data processing for quality control e.g. patterns classification on control charts based on artificial intelligence (AI) methods and neural networks,



development of data model for quality methods and tools integration,



process – oriented information flow modelling for quality control circles in enterprise,



quality systems audits,



measurement systems design and analysis.

VIDA COLLABORATION PLATFORM FOR INTEGRATED DESIGN Integration processes during design in VIDA proceed on four levels:

Geometry Modelling



Research Areas Virtual Reality



Rapid Protyping & Rapid Tooling

applications of artificial intelligence in control,

ensure the mechanism for data classification and search,

Reverse Engineering





Virtual Engineering

linear and rotary dampers,

Manufacturing Simulation



Quality Engineering

applications of magneto-rheological fluids,

reflect the methods of particular team members (work within the established environment should be as close as possible to the already existing methods and tools),

Drive and Units Design





Recycling-oriented Design

control of electro hydraulic servo drives and modelling of electro hydraulic servo drives and mechatronic devices,

Design for Assembly



VIDA Integration platformand information exchange pipeline

Software integration layer (PLM)

Research Units: - Institute of Mechanical Technology (Poznań), - Institute of Material Technology(Poznań), - IPK-DZViPro (Berlin), - ZGDV(Rostock), - Laboratoire 3S (Grenoble), - IVF (Braunschweig)

Industrial Partners: - AmicaWronki S.A., - Leszczynska Fabryka Pomp Sp. z o.o.

Participants

Figure 3: VIDA collaboration platform for integrated design

4

• organizational, consisting of the structure of particular teams within the sphere of development project: their members (designers) and the roles played by individuals during the design process, • data, based on a choice of document (file) format which is suitable for all participants, • methods of product development, according to standard •

tools, perceived mainly as a common collaboration platform (the tools and methods for collaboration), guidelines, eg. VDI-Richtlinen or Lemach. The collaboration platform (Fig. 3) enables: •

the organization of the team working within the framework of a particular project,



reflect the structure of an organization (teams and their members),



the simultaneous execution of different projects,



reflect the roles of particular members within the team and project (i.e. designer, approver, etc.),



reflect the characteristics of design information systems (i.e. approval or rejection of documents or project stages),



aid in the utilisation of documents of different kinds and formats,



use already existing IT infrastructures (Intranet),



encourage the speedy implementation of project work within the system,



organize data access and documents management in relation to the project,



manage workflows,

5

EXAMPLES OF THE INTEGRATED DESIGN AT VIDA CENTRE The “Virtual refrigerator” project which was conducted at VIDA Centre will be presented as a first example. It was a prototype project for putting the new virtual reality (VR) technology into design process practice. The aim of the project was to create the model for possible product presentation within the factory showroom. A very important element of the VIDA Centre activities is cooperation with industrial partners. The model created for VR presentation was the model of refrigerator manufactured by the Amica Wronki S.A. company, the leading polish household product supplier. This cooperation allows carrying out the tests with the regular product. The great advantage was the accessibility of 3D product model worked out in PTC ProEngineer system with the complete geometrical information. After consultation with the industrial partner, the model was supplied with interactive controlled functionality: opening and closing of refrigerator and freezer doors, opening and closing of the refrigerator multiboxes, modification of the casing version, operating the multifunction electronic control panel. In order to create all the declared functions in the virtual model, it was necessary to use scripting programming. All the work carried out in this project was based on CAD geometry created in the ProEngineer CAD system. Work on the project proceeded in two stages and began even before the VIDA Centre was equipped with the stereoscopy projection system. In the first stage, a VRML model of the refrigerator was created and then, after the equipping of the laboratory, the EON Studio system was used to create a model for EON ICatcher that drives 3D large format stereoscopy visualization. There is the significant similarity between the sequence of model programming in the VRML language and the method implemented in EON. The development of model functions is also based on defining the nodes, the prototypes and the routes. However, the EON software

43

introduces the graphical user interface to the model development process and allows users of all levels of experience to build quickly and easily the complex interactive virtual applications. An interesting solution implemented in the EON system is the simulation tree, which is similar to the model structure tree in CAD software and which shows the hierarchy and boundaries of programmed model interactions that occur as a response to the individual programmed events. It is especially essential when working with complex models. The project was based on PDM system to check the collaborative design possibility [1]. The goal of the project was to create model of a refrigerator in immersive virtual reality environment. For this purpose a project team was established, where a number of designers from area of CAD and virtual reality, equipped with appropriate computer tools, were involved. The project consists of three general stages: definition of requirements, data conversion and finally, creation of virtual environment (Fig. 4).


Figure 4: General stages of the virtual refrigerator project
Two groups of designers were appointed to the project team: a CAD group and a VR group. In each group particular participants played different roles – in every group there was a person (the group supervisor) whose duty was to verify the results of the other group members' work. In the whole project there was also one general project manager, who defined the detailed project stages, the time constraints and the input and output data at every project stage. Based on these data the project was implemented in the PDM. The pilot project will be used for the development of cooperation between VIDA members. Design teams and their members working on common projects are equipped with hardware (mainly computer workstations) and software according to their competences and role in the project. Participation in the common virtual design environment is ensured by the PLM client present on each workstation. Another example of a project conducted in the VIDA design environment was the water pump design. At the beginning, the 3D CAD model of the pump was created in the Catia V5 system. This model was transferred to the FOD (Functional Oriented Design) system, where the pump's functions were modelled and the costs of the variants were estimated. The variant that fulfilled the customer's requirements was transferred back into Catia in order to make geometrical modifications and generate the technical documentation. Then CNC programs for selected parts of the product were also created in the Catia environment. Moulds for four cast parts were designed, together with simulation of the metal flow during casting and cooling, which contributed to the optimization of the shape and dimensions of the moulds.
Figure 5: General stages of the water pump project (3D model, function modelling, documentation, CNC design, assembly design, mould design, production scheduling)


6 SUMMARY
To manage integrated design activities, a collaboration platform based on a PDM system is crucial. The use of a collaboration platform changes the character of communication between team members. Communication without a PDM implementation is based mainly on spontaneous data and information exchange. Thanks to the internal system mechanisms of the PDM, the elimination of several minor tools is possible (this mainly applies to popular Intranet and Internet service clients). The implementation of PDM changes the method of organising the IT infrastructure. Individual workstations become the location for data processing, while data storage is handled centrally. As presented in the refrigerator project example, which was developed using the PDM system in connection with VR applications, the use of a collaborative platform allowed the work of the different VIDA research groups to be coordinated.
7 REFERENCES
[1] Weiss Z., Weiss E., Konieczny R., Kasica M., Kowalski M., 2005, "Some Experiences with Virtual Technique Implementation in Household Product Development", Virtual Concept 2005, 8-10.11.2005, Biarritz, France.
[2] Weiss Z., Diakun J., 2005, "Integration of Product Development using PDM", Computer Integrated Manufacturing – Intelligent Manufacturing Systems, 16-19.05.2005, Gliwice–Wisła, Poland.
[3] Weiss Z., Diakun J., 2004, "Virtual Integrated Design in Intranet Environment", e-Work 2004, 27-29.10.2004, Vienna, Austria.
[4] Weiss Z., Konieczny R., Kasica M., Kowalski M., 2004, "Application of VRML for Interactive Models Design", 1st VIDA Conference, 03-04.06.2004, Poznan, Poland.
[5] Krause F.-L., Hayka H., Pasewaldt B., 2004, "Efficient Product Data Sharing in Collaboration Life Cycles", 14th International CIRP Design Seminar, Cairo.

Optimal Design of Planar Parallel Manipulators 3RRR Through Lower Energy Consumption
A. A. Rojas-Salgado, Y. A. Ledezma-Rubio
Departamento de Ingeniería de Diseño, Facultad de Ingeniería, UNAM. Circuito Exterior s/n, Ciudad Universitaria, Anexo de Ingeniería, Del. Coyoacán, D.F., C.P. 04510
[email protected], [email protected]
Abstract
In most existing studies, the solutions of planar parallel manipulators are restricted to a feasible region of a solution. This research provides an optimal solution in link dimensions of planar parallel manipulators to a defined trajectory and structure of the links, minimizing the mechanical energy of the manipulator. An algorithm will be obtained that allows adequate dimensioning of the manipulator for a specific task, by means of a passive reconfiguration. With this method most of the energy is used by the manipulator to execute a task, not for the manipulator's movement. The process is illustrated with an example.
Keywords: Parallel manipulator, Parallel and serial singularities, Energy, Optimization

1 INTRODUCTION
In contrast to serial manipulators, parallel manipulators present a higher complexity of analysis, since they have more complex closed kinematic chains, correlated to give a solution to the system. Not only serial singularities (q) appear, in which the geometry does not allow certain points to be reached when the movement is blocked once the links are aligned [1], but also parallel singularities (θ). Those refer to points in which the rigidity of the system is lost, as well as the control of movement, since the number of degrees of freedom of the system changes [2, 3, 4]. Each of these singularities (q or θ) can cause the loss of control over the desired movement, be it separately or when both appear simultaneously. The feasible region is not only defined by the task area of the manipulator; the orientations and singularities within the task area also have to be taken into account to determine if a point is part of the above mentioned region [5]. For different models of serial manipulators, conditions have been defined to determine the feasible regions [4]. Some studies related to parallel manipulators focus on defining the feasible regions according to the Jacobian matrix of the system [6, 7]. Serial and parallel matrices are generated which, when reduced in rank, are at a singular point. One way to plan trajectories with this type of model is to use redundant systems, leaving a free variable such as the orientation of the end effector link, so as to be able to avoid the singularity points for a defined task trajectory [7]. The planar parallel manipulator 3RRR consists of a triangular mobile plate connected by three arms to a base. Every arm has three rotational joints, with axes parallel among them and among the arms. This defines the planar movement of the triangular plate with change in the task point (center of the plate) and its orientation. The motorized joints are those that are connected to the inertial base, one in every arm. The triangular plate in movement as well as the base are equilateral triangles (Figure 1).


The optimum design depends on what is considered to be optimum. If the price is the most important requirement, the optimum will be the lowest cost; alternatively, low infrastructure for construction or a better weight-strength relation may be the important requirement. The suitable solution depends on what is needed for a specific problem [4, 8].

Figure 1. Parallel manipulator 3RRR
In this paper, the lengths of the links are obtained for a parallel manipulator, and the one that uses the least energy for its task is defined as optimum. This method uses an algorithm which uses the lengths of the manipulator to determine the objective function. This solution must not only comply with the criterion of least energy; it must also be within the feasible region, restricted by the singularities of the system.

2 THEORETICAL MODEL
In the optimization process the mechanical energy of the model is the objective function to consider. This function depends on different variables, such as the dimensions of the links, speeds, masses and the task at hand. A pre-established task is to be executed, with the position and orientation of the movable plate predetermined. This work trajectory is established and cannot be changed, be it because it complies with a specific task or because it is a trajectory optimized by other methods. Examples would be specific painting or welding processes, or microchip assembly, where there are initial and final positions to be complied with. In this paper the dimensions of the manipulator's links are given as the variables to be optimized, leaving the whole system based on them. For the optimization process and the computation of the examples described, Mathematica 6 software will be employed, which allows the symbolic use of the equations, as well as the implementation of the search method of Hooke and Jeeves (Simplex Method). The following steps have to be taken for this method:
1. Define the energy model of the manipulator.
2. Define the dynamic characteristics of the manipulator.
3. Analyze the task trajectory.
4. Define the energy function with the design variables.
5. Define the criteria of the feasible region for the model.
6. Optimize the energy function of the manipulator with the restrictions of the model.
2.1 Energy model
The model is based on the concepts of classic thermodynamics. For this case, the moving links of the manipulator are defined as the system. It is considered to be a closed system with boundaries around the movable links, and the energy flows between the system and its surroundings will be defined [9]. The energy depends on the current conditions of the system, such as the speed and position of the center of mass of each of the links that are taken into account. The following equation includes all of the work and energy terms of the system:

δQ − δWexp − δWmech = dEint + dEmech   (1)

Of the interactions between the system and its surroundings, only those of mechanical origin are to be considered. The actuators are considered idealized, that is, heat flows δQ or internal energy changes dEint of the links are not considered; since each link is modeled as a rigid body, the expansion or deformation work δWexp is eliminated as well. By way of simplification, the following equation is obtained:

δWmech = dEPG + dEPE + dECR + dEPM + dECL

where only mechanical interactions are considered, so the magnetic and electric interactions are also eliminated from the equation. The energy transformations that may exist within the actuators are not taken into consideration. From this model the following flows are obtained:
1. Energy of the inertial base to the manipulator,
2. Energy of the manipulator to the task to be performed,
3. Energy dissipated in the actuators in the form of heat,
4. Energy used for the movement of the manipulator.
Of those flows, the one to be optimized is the energy required for the movement of the manipulator, so that the energy introduced into the system is used in the execution of the main task. To simplify, in this case ideal actuators (that do not dissipate energy) are used as a first approximation to reality. Thereby the following schematic results (Figure 2).

Figure 2. Energy flow (energy supply, energy to manipulator, energy to task, friction losses)

It is important to point out the considerations taken into account for the analysis of the system. The links are rigid bodies, so the deformation and expansion effects are not considered. The actuators involved are ideal, without friction losses. With these simplifications the model has only one input, the energy provided to the manipulator, and two outputs, the energy provided for the task and the energy for the movement of the manipulator. The latter is the one we look to optimize.
2.2 Dynamics of the manipulator
The equations of the mechanical energy of the proposed model are developed. The arms of the manipulator are defined as constant cross-section links (Figure 3), whereas the movable plate is considered of constant thickness, both of materials of uniform density ρ. This density is given as data, in the case of the bars as density per unit length, and in the case of the triangular plate as density per unit surface. For a link of length Li:

m = ρ Li,  I = ρ Li³ / 12   (2)

Figure 3. Mass and inertia of link i

Knowing the masses and inertias of the links in movement of the manipulator, the energy associated to each one of them can be defined as follows (Eqs. 3):

ECLineal = ½ m v²   (3a)
ECAngular = ½ I ω²   (3b)
EP = m g h   (3c)
E = ECLineal + ECAngular + EP   (3d)

For each of the binary links (arms) and for the ternary link (plate), similar equations are generated, with the speeds being those of their mass centers and the angular velocities those of each link. In order to determine the speeds of the mass centers, the inverse kinematics of the model is used. For each orientation of the movable plate, 8 possible solutions or operation modes are generated (Figure 4), depending on the way the plate is moving [6]. The equations generated involve the design dimensions in the speed equations. In this way, both the mechanical properties of mass and inertia of each link and the speeds of each link element are functions of the design dimensions.
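As an illustration of Eqs. 2 and 3, the following minimal Python sketch computes the mechanical energy of a single uniform link; the numerical values in the usage example are assumptions for illustration, not data from the paper.

def link_energy(rho, L, v_cm, omega, h=0.0, g=9.81):
    """Mechanical energy of a uniform, constant cross-section link.

    rho   -- density per unit length [kg/m]
    L     -- link length [m]
    v_cm  -- speed of the link's centre of mass [m/s]
    omega -- angular velocity of the link [rad/s]
    h     -- height of the centre of mass [m]; zero when the manipulator
             moves in a plane perpendicular to gravity
    """
    m = rho * L                   # mass of the link, Eq. 2
    I = rho * L**3 / 12.0         # inertia about the centre of mass, Eq. 2
    e_lin = 0.5 * m * v_cm**2     # linear kinetic energy, Eq. 3a
    e_ang = 0.5 * I * omega**2    # angular kinetic energy, Eq. 3b
    e_pot = m * g * h             # potential energy, Eq. 3c
    return e_lin + e_ang + e_pot  # total link energy, Eq. 3d

# Assumed example: a 1 m link of 2 kg/m, centre of mass at 0.5 m/s,
# rotating at 1 rad/s in a horizontal plane (no potential energy term).
print(link_energy(rho=2.0, L=1.0, v_cm=0.5, omega=1.0))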

Figure 4. 3RRR modes of operation
2.3 Task trajectory
The task trajectory involves not only the position, but also the orientation of the movable platform. For the analysis, a defined trajectory, which cannot be changed, is used to optimize the elements of the manipulator. The variables of the trajectory are X(t), Y(t), φ(t). By means of the inverse kinematics equations, the angles of each link are established, based not only on the trajectory but also on the lengths of the links to be optimized. The task trajectory must be within the feasible region, that is to say, where the manipulator is free of singularities. Within this feasible region, depending on the combination of angles and lengths of the links, the orbits leading to singularities may be found [2, 3]. These singularities may be of Type 1 or serial, of Type 2 or parallel, or of Type 3 when both singularities appear simultaneously. The model used to analyze the type of singularity is based on the Jacobian matrices associated with the manipulator (Eqs. 4), [1]:

Jθ dθ/dt = Jx dx/dt   (4a)
x(t) = (X(t), Y(t), φ(t)),  θ(t) = (θ1, θ2, θ3)   (4b)

where dX/dt, dY/dt, dφ/dt are the rates of change of the trajectory with respect to time, and dθ1/dt, dθ2/dt, dθ3/dt are the angular velocities of the active links of the arms of the manipulator. The relation between those is the one that determines the conditions of singularity. When the rank of the parallel or serial Jacobian matrix is reduced, i.e. its determinant is zero, the system is in a singularity. Therefore, in order to evaluate if the point analyzed is able to generate a solution with less energy, it is also necessary to verify that it is within the feasible region, by means of the behavior of the Jacobian matrices of the model, over the whole trajectory.
2.4 Objective function, restrictions and optimization
The objective function is the energy associated to the links of the manipulator, based on the lengths of the links and the trajectory of the task (Eqs. 5). The restrictions that the model presents are introduced through the determinants of the Jacobian matrices, which are also based on the design lengths and the trajectory of the task. With this information the optimization begins.

Eesl = f(X(t), Y(t), L1, L2, …, Li)   (5a)
Eesl = f(X(t), Y(t), φ(t), L1, L2, L3)   (5b)
Etot = Σ Eesl   (5c, 5d)
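A minimal Python sketch of the feasibility check implied by Eqs. 4 is given below; the Jacobian functions and the sample path are assumed placeholders, since the paper does not list the 3RRR Jacobian entries explicitly.

import numpy as np

def serial_jacobian(pose, lengths):
    # Placeholder for J_theta of the 3RRR manipulator at the given pose;
    # the actual entries follow from the inverse kinematics (not listed here).
    return np.eye(3)

def parallel_jacobian(pose, lengths):
    # Placeholder for J_x at the given pose.
    return np.eye(3)

def trajectory_is_feasible(lengths, n_samples=200, tol=1e-6):
    """True if det(J_theta) and det(J_x) never vanish along the sampled task."""
    for t in np.linspace(0.0, 2.0 * np.pi, n_samples):
        pose = (0.2 * np.cos(t), 0.2 * np.sin(t), 0.0)  # assumed sample path (x, y, phi)
        for J in (serial_jacobian(pose, lengths), parallel_jacobian(pose, lengths)):
            if abs(np.linalg.det(J)) < tol:
                return False  # Type 1, 2 or 3 singularity on the trajectory
    return True

print(trajectory_is_feasible([1.0, 1.0, 0.4]))  # True with the placeholders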

As data we have the trajectory and orientation of the central point of the movable triangular plate, the lengths of the links being the free variables. For this type of manipulator, symmetry has great advantages for the description of different trajectories: the primary links are chosen with the same length in each one of the arms, and likewise for the secondary links. For the movable triangular plate, a triangle circumscribed to a circumference is assumed, whose radius is the design dimension of the plate. As the optimization method the simplex method is used, through the following steps:
1. For each set of lengths of the manipulator, a point whose coordinates are (L1, L2, L3) is considered, and the energy of the manipulator is obtained over the whole trajectory. For this, n+1 initial points are chosen, n being the number of dimensions to optimize, in this case 3.
2. For each point it has to be evaluated whether the criterion of non-reduction of the rank of the associated Jacobian determinants is fulfilled at all times. If a specific point does not comply with this requirement, another one is chosen.
3. The point with the highest value of energy (objective function) is eliminated, and the following point is generated through the reflection of the point with the greatest value with respect to the centroid of the subspace generated by the other points (Figure 5).
4. Another iteration is done, until the convergence criteria necessary for the method are met.


Figure 5. 3D simplex method
This analysis can be complex, depending on the type of trajectory proposed, or, for other types of manipulators, on the complexity of the inverse kinematics of the manipulator. In this case, having 8 modes of operation, it is necessary to review each one, since it is possible that some do not fulfill the criteria of the feasible region while others do. The search for the optimal point considers the energy as much as the non-singularity of each of the working modes of the system.
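The following minimal Python sketch illustrates the reflection-based simplex step with the singularity filter described above; the objective and feasibility functions are assumed placeholders standing in for the energy model (Eqs. 5) and the Jacobian criterion (Eqs. 4), not the authors' implementation.

import numpy as np

def trajectory_energy(lengths):
    # Placeholder objective: energy over the task trajectory for link
    # lengths (L1, L2, L3). Replace with the manipulator energy model.
    L1, L2, L3 = lengths
    return (L1 - 0.65)**2 + (L2 - 0.65)**2 + (L3 - 0.2)**2

def is_feasible(lengths):
    # Placeholder constraint: the Jacobian determinants must stay
    # non-zero along the whole trajectory (no serial/parallel singularity).
    return bool(np.all(np.asarray(lengths) > 0.1))

def simplex_search(x0, step=0.1, iters=50):
    n = len(x0)
    # n + 1 initial points, each offset along one coordinate axis
    pts = [np.asarray(x0, float)]
    pts += [pts[0] + step * np.eye(n)[i] for i in range(n)]
    pts = [p for p in pts if is_feasible(p)]
    for _ in range(iters):
        pts.sort(key=trajectory_energy)
        worst, rest = pts[-1], pts[:-1]
        centroid = np.mean(rest, axis=0)
        reflected = centroid + (centroid - worst)  # reflect the worst point
        if is_feasible(reflected) and trajectory_energy(reflected) < trajectory_energy(worst):
            pts[-1] = reflected
        else:
            # shrink toward the best point if the reflection fails
            pts = [pts[0] + 0.5 * (p - pts[0]) for p in pts]
    return min(pts, key=trajectory_energy)

print(simplex_search([1.0, 1.0, 0.4]))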


3 EXAMPLE
In this case a defined trajectory will be considered, initiating with proposed dimensions and optimizing the model according to the behavior of the solutions generated by the method up to the optimal point. The simplifications mentioned are assumed: ideal actuators, without losses through friction or heat. The trajectory and orientation proposed for the movable plate are given by the equations (Eqs. 6), which generate a non-simple desired task and orientation, shown in the graphs of Figure 6:

x(t) = 0.3 sin(2t) sin(4t)   (6a)
y(t) = 0.3 sin(2t) sin(4t)   (6b)
φ(t) = sin(4πt)   (6c)

Figure 6. Task trajectory and orientation
The movement of the links is made on planes perpendicular to the field of gravitation, in order to eliminate the effects of potential energy. With the initial dimensions for a point in the optimization process, which will be denominated original, the following results for this model are achieved, with the corresponding graphs of energy and singularities (Figure 7):
l1 = 1.0 m, l2 = 1.0 m, l3 = 0.4 m, ETot(BAA) = 6855.77 J
Figure 7a. Original serial singularity
Figure 7b. Original energy

For this case, the best configuration is BAA, which has the lowest energy and in which neither serial nor parallel singularities apply. The amount of energy against which to compare the results of the subsequently generated solution points is noted. Based on the first iterations, models of smaller dimensions are generated, but not all comply with the singularity criteria, showing parallel singularities (Figure 8) and serial singularities (Figure 9), where the behavior of the dimensions and the objective function during the search are shown. For the case of serial singularity, the following dimensions apply in ABB mode:
l1 = 0.519 m, l2 = 0.489 m, l3 = 0.197 m, ETot(ABB) = 2249.23 J

When serial singularities exist, the points are eliminated as they generate infinite energies, as shown in the case of the following combination of dimensions:
l1 = 0.565 m, l2 = 0.547 m, l3 = 0.2 m

At iteration 14, the system meets the necessary convergence criteria, generating the following solution, in which the diminution of energy can be appreciated (Figure 10), as well as the nonexistence of singularities in the development of its trajectory (the graph of the Jacobian determinant is never zero at any point):

Figure 8a. Parallel Jacobian singularity
Figure 8b. Energy in parallel singularity
Figure 9a. Serial singularity
Figure 9b. Energy in serial singularity
Figure 10a. Last iteration, lower energy

Figure 10b. Serial Jacobian in last iteration
l1 = 0.652 m, l2 = 0.653 m, l3 = 0.2 m, ETot(AAA) = 1256.2 J

In this case, the energy required by the initial model is 6855.77 J, optimized to a value of 1256.2 J, which represents an 81.68% saving.



4 CONCLUSIONS
When applying this method to parallel manipulators:
1. The model with the smallest consumption of energy according to the mentioned criteria is obtained.
2. The model can be reconfigured in order to use the least energy possible for the required process.
3. The solution, in spite of involving complex equations, does not require a lot of computer time.
A disadvantage arises at the moment of computing the solutions, where the behavior of each of the operation modes has to be compared, since some of them may fall into singularity whereas others may be a solution. The decisive factor between models that display the same amount of energy (the speeds and the masses of the links are similar) is then the one that does not fall within a singular configuration; this may improve when analyzing each one of the operation modes separately and comparing the final result with the other operation modes, which could have different dimensions. With regard to serial singularities, they are eliminated as they would need infinite energy values when the system is blocked; however, for the analysis of parallel singularities, the use of the Jacobian matrix is indispensable. The advantage of this method is that it leads quickly to a solution. In the example of a 3RRR manipulator, it takes only 5 minutes with a 1.4 GHz Intel processor. As the complexity of the system increases, it becomes increasingly harder to find a solution, since it then depends on the capacity of the optimization method used, as well as on the complexity of the equations of the proposed inverse kinematics, as is the case with spatial parallel manipulators. However, saving energy in a task of multiple repetitions, such as the assembly of a circuit or the execution of laser cuts, is an even greater advantage. The benefits arise when the least amount of energy is consumed for a certain process.

5 ACKNOWLEDGMENTS
The research work reported here was made possible by grant program IN101007-2 of PAPIIT, by DGAPA, UNAM.
6 REFERENCES
[1] Gosselin C., Angeles J., 1990, Singularity Analysis of Closed-Loop Kinematic Chains, IEEE Trans. on Robotics and Automation, vol. 6, no. 3, 281-290.
[2] Bonev I. A., Zlatanov D., Gosselin C. M., 2003, Singularity Analysis of 3-DOF Planar Parallel Mechanisms, Transactions of the ASME Journal of Mechanical Design, vol. 125, 573-581.
[3] Chablat D., Wenger P., 2006, "Self Motions of Special 3RPR Planar Parallel Robot", Advances in Robot Kinematics, Springer, October, 221-228.
[4] Ma O., Angeles J., 1993, Optimum design of manipulators under dynamic isotropy conditions, IEEE International Conference on Robotics and Automation, 485-492.
[5] Kumar Dash A., Chen I-M., Yeo S. H., Yang G., 2005, Workspace Generation and Planning Singularity-Free Path for Parallel Manipulators, Mechanism and Machine Theory, vol. 40, no. 7, 776-805.
[6] Gosselin C., 1990, Dexterity Indices for Planar and Spatial Robotic Manipulators, IEEE International Conference on Robotics and Automation, Cincinnati, OH, USA, 13-18 May, 650-655.
[7] Alba O. G., Pámanes J. A., Wenger P., 2007, Trajectory planning of a redundant parallel manipulator changing of working mode, 12th IFToMM World Congress, June 18-21.
[8] Gosselin C., Angeles J., 1988, The Optimum Kinematic Design of a Planar Three-Degree-of-Freedom Parallel Manipulator, Journal of Mechanisms, Transmissions, and Automation in Design, vol. 110, 35-41.
[9] Rojas A. A., Ledezma Y. A., 2008, "Minimum Energy Manipulator Design", Advances in Robot Kinematics: Analysis and Design, Springer, 89-99.


Artificial Neural Networks to Optimize the Conceptual Design of Adaptable Product Development
J. Feldhusen, A. Nagarajah
Chair and Institute for Engineering Design (ikt), RWTH Aachen University, Steinbachstrasse 54B, Aachen, 52074, Germany
[email protected], [email protected]
Abstract
This paper describes an approach applying Artificial Neural Networks (ANN) to support the developer in optimizing the conceptual design of Adaptable Product Development. The ANN learns from the previous mappings between the requirements and the product solutions. For a new development process of an adaptable product, the ANN analyses existing solutions and sorts them according to their probability of success. The advantages of this approach are, on the one hand, that the ANN is able to identify the most suitable product solution for a new development order and, on the other hand, that the ANN automatically stores the knowledge of the developer.
Keywords: Product Development Process; Engineering Design; Neural Network; Adaptable Products

1 INTRODUCTION In today’s automotive supplier industry majority of the products are developed by adaptation or by varying already existing products. This kind of Product Development is named as Adaptable Product Development. One reason for such a Development can surely be found in the demands of the supplier to reduce cost and time required for Product Development on an ongoing basis [1]. Thereby, the enterprises face the dilemma of depending on existing solutions on one hand but on the other hand to develop innovative solutions to enhance their own competitive ability compared to the international competitors. To solve this conflict, the developers have to be supported to automate the Processes, which are very time consuming but not sophisticated in respect of creativity. One of this process is searching for already gathered, but insufficiently documented knowledge [2]. What is today's mode of operation of the product developers for an adaptable product development? From numerous research projects in the automotive industry it can be derived that the developer does not select the suitable parent product which will fulfil the new customer requirements but selects the last product version as a base for the new development. Therefore the development is aggravated unnecessarily by the missing knowledge. An expert system can produce relief. The inauguration of these systems is very critically discussed in the industry due to the fact that those systems give the impression to be able to substitute the human developer in doing his tasks and therefore making him dispensable. This can be seen as one reason for the less-proliferation of those systems in the industry [3]. Thus such an expert system, which has to be developed, has not only to be technically successful, but also has to cope with the social environment of an enterprise. To solve this


problem, a NN-based expert system seems to be success-oriented. This system will be taught the knowledge of the developers by mapping the parent product solutions with the associated requirements. In Figure 1 the approach is described. For the handling of the requirements in a NN-based system it is essential to collect them completely and to quantify them, i.e. to describe them with an unequivocal numerical value for representation in the computer. Hence the requirement description should occur with the help of a visualisation language, which should lead to better communication with the customer. To work out such a language it is necessary to subdivide the requirements into elementary constituents. After the requirements have been determined and described for a certain product, these should be related to the realised product. This mapping should be done for all products developed in the past. The NN-based system shall then be trained with these mappings. For a new development order, which is given in principle by changed values of the requirements (denoted in Fig. 1 by regulators), the existing solution with the most potential to solve the new order can be identified.
2 STATE OF THE ART
In the first stages of the product development process a preferably complete, fast and consistent evaluation of the requirements is essential for the success of a product. But in this stage a part of the requirements has limited use in the form in which they are defined in the conceptual formulation. It is incumbent upon the developer to prepare these requirements purposefully [4]. In today's automotive supplier industry, it can be observed that the developer orients himself on already existing solutions when developing a new product. He uses methods, e.g. benchmarking or reverse

Figure 1: The adaptable Product Development
engineering, to get the necessary knowledge from his own products or from competitor products. Using these methods, it is required both that the developer has identified the functions fulfilling the customer's requirements and that he is able to deduce the functions from the design. In the design, all information about its development is stored. By analysing the shape of a design, the restrictions can be identified that could arise in later stages of a product development. In spite of using these methods for identifying the requirements, it is repeatedly found that requirements have not been correctly clarified or have to be amended afterwards [4]. That this is still a large problem for the automotive industry can be seen from numerous publications regarding this topic [5]. In the IT industry, software developers use the visual specification language UML (Unified Modelling Language) for communication with customers. The demand for the use of UML in other industrial sectors led the Object Management Group (OMG) to introduce the modelling language SysML for commonly used technical systems. This modelling language has to be examined for the possibility of migration for communication with customers in the field of product development in the automotive industry. Moreover, the modelling language requires a basic description of the requirements in the form of attributes. For instance, the specifications that in practice are submitted in written text form have to be transferred to a basic type of description. In this context the VDA (German automotive industry association) guideline can be used as a basis. It describes the requirements by condition, subject, demand word, object and action [6]. Artificial Neural Networks recreate natural neural networks like the brain. Neural Networks are used e.g. for pattern recognition and the creation of language, as

artificial intelligence for games and for the optimization of processes. In the field of engineering science, they are utilized for the forecast of costs using modular construction systems or for the calculation of constructive parameters. They are consulted when forecasts are to be deduced empirically from a large number of historical data [7]. An application of Neural Networks in product development, and especially for linking the requirements with product components, does not yet exist.
3 THE APPROACH OF THE RESEARCH PROJECT
For a successful procedure of this research, it is necessary to divide the main goal into the following three sub-goals:
1. The first aim is to describe requirements in the form of basic requirement modules. Based on these modules, a graphical "language" for the visualization of requirements should be developed.
2. Development of an Artificial Neural Network that supports the developer in a meaningful way with the accomplishment of the Adaptable Product Development for new requirements, by offering purposeful parent CAD models (representing parent product solutions) which should be suitable for the intended requirements.


3.1 Procedure for the creation of basic requirement components
A basic prerequisite for the aimed goal is the creation of a description of the requirements which is as unambiguous as possible. The first step, according to the VDA, should be a type of description for the requirements in the form of basic components (Fig. 2). The basic components, which should consist of subject, object and predicate, should be set up specifically for the type of business and should be stored in a database. The user should only use these defined components for the description. This ensures that errors which may arise during the communication with the customer are minimized. A specification sheet with a description of the requirements in text form comprises a high error potential due to its large space for interpretations. An elementary form of description improves this, but cannot be seen as the optimal solution, due to the fact that in this way the behaviour of the system can still only be identified with difficulty. A graphical visualization of the requirements would lead to a further optimization of the communication with the customer.
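A minimal Python sketch of how such basic requirement components might be encoded and validated against a controlled vocabulary is shown below; the vocabulary entries and the Requirement structure are illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass

# Assumed business-specific vocabulary, stored in a database in practice;
# the user may only combine these pre-defined components.
SUBJECTS = {"driver", "control unit"}
PREDICATES = {"shall open", "shall lock", "shall signal"}
OBJECTS = {"door", "window", "warning lamp"}

@dataclass(frozen=True)
class Requirement:
    subject: str
    predicate: str
    obj: str

    def __post_init__(self):
        # Reject free-text requirements: every part must come from the
        # controlled vocabulary, minimising interpretation errors.
        if (self.subject not in SUBJECTS or self.predicate not in PREDICATES
                or self.obj not in OBJECTS):
            raise ValueError("component not in controlled vocabulary")

r = Requirement("driver", "shall open", "door")
print(r)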

Figure 2: Basic requirement components
Complicated facts cannot be described in a generally understandable way in the form of plain text. In that case graphical visualization helps a lot. Graphics also contain a defined syntax with defined semantics. The composition of graphics of that kind results in a model of reality which can easily be transformed to others. This model has to be understandable for all people involved in the development process. The largest accumulation of models that are able to describe commonly used systems was developed within the Object Management Group (OMG): the SysML (Systems Modelling Language). The functional limitations of the systems are described in SysML by means of use-case diagrams. Use cases have been developed in order to register the functional limitations of the systems, i.e. the functional requirements. At this point we have a complete picture: every functional requirement of the specification sheet can be assigned to one use case. Derived from the other requirements, the use case can be linked to protagonists (which could be persons or systems) who will interact with the system, and to the pre- and post-conditions that result from the functional requirements. The requirement components are categorized for the compilation of a graphical shape in a first step. The relationships between the classes should be described with attributes like "is affiliated with" and "in contradiction to" (Fig. 3). When developing a new type of product, it can be checked if the ascertained customer requirements can be assigned to an already existing class of requirements. Thereby possible contradictions should be identified at the beginning of the development process. If a class of requirements has been non-existent until now, a "dummy" class should be generated. The "non-existence" of a requirement is a first hint that there is no technical solution within the own company to realize that requirement.
Figure 3: Relationship between requirement classes
On the basis of these classes of requirements, a graphical modelling language for this type of industry can be developed, following the graphical modelling language SysML. As mentioned above, it is possible to define the desired features of a product within a model of requirement classes, and with a use case model the desired behaviour can be visualized (Fig. 4).


Figure 4: SysML use case for opening a car
Every type of business has its own language and symbols. These should be used for the modelling language. For effective communication, the system has to be seen like an organism. After a first visualization of the requirements by means of the modelling language, the illustration has to be developed further to get a more detailed view, using further interactions with the customers. Within the graphical presentation the possibility exists to update changes of the requirements during the process of a product development, using the already existing illustration. Moreover, the illustration should be assembled hierarchically using a top-down approach.
3.2 Development of an Artificial Neural Network
Artificial Neural Networks are networks consisting of artificial neurons. They are one branch of artificial intelligence and in principle an object of research of neuroinformatics. The origin of Artificial Neural Networks can be found – as with artificial neurons –

in biology. They are compared with natural neural networks, which form the cross-linking of nerve cells in the brain and the spinal cord. Altogether it is about an abstraction (construction of a model) of information processing, and less about the replication of biological neural networks [8]. Artificial Neural Networks are mostly based on the cross-linking of many McCulloch-Pitts neurons or slight modifications of them. In principle, other artificial neurons can also be used in Artificial Neural Networks, e.g. the high-order neuron. Depending on its task, the network topology (the assignment of connections to nodes) has to be chosen very carefully. After the construction of a network, a training phase follows. In this period the network will "learn". Theoretically, a network can learn using the following methods [8]:
• development of new connections, deleting existing ones,
• change of the weights (the weights wij of neuron i to neuron j),
• adaptation of thresholds of neurons,
• adding or deleting neurons.
Moreover, the learning behaviour changes when changing the activating function of the neurons or the learning rate of the network. Practically, a network mainly "learns" by modifying the weights of the neurons [8]. An adaptation of thresholds can be done by a bias neuron (bias means to distort in this context). That is why ANNs are able to learn complex non-linear functions by a "learning" algorithm that tries to identify all parameters (influencing factors) of the function from existing input values and desired output values by an iterative or recursive course of action. Thus ANNs are a realization of implicit learning (the so-called connectionist paradigm), because the function consists of many elementary, similar parts; only as a whole does the behaviour become complex.
Perceptron
The perceptron (named after the English word percept) is a simplified Artificial Neural Network, first presented by Frank Rosenblatt in 1958. In its basic version (the simple perceptron) it consists of one single artificial neuron with adaptable weights and a threshold. From this concept several variations of the original model are derived, differentiated into one-layer and multi-layer perceptrons. The basic principle is to convert an input vector to an output vector, therefore forming a simple associative memory.
One-layer perceptron
A one-layer perceptron has only one single layer of artificial neurons, which represents the output vector at the same time. Therefore, every neuron is represented by a neuron function and receives the entire input vector as a parameter. The processing occurs completely similarly to the so-called Hebb's rule for natural neurons. However, the activating factor of this rule is substituted by the difference between set point and actual value. Due to the fact that Hebb's (learning) rule relates to the weights of the individual input values, the learning of a perceptron is done by adapting the weight of every neuron. Once the weights have been learned, a perceptron is able to classify input vectors that deviate slightly from the vector learned. Precisely therein lies the perceptron's desired ability to classify, from which it owes its name.

Frank Rosenblatt showed that a simple perceptron with two input values and one single output neuron can be used for the representation of the simple logic operators AND, OR and NOT. Marvin Minsky and Seymour Papert demonstrated in 1969 that a one-layer perceptron is not able to resolve the XOR operator (the linear separation problem). This led to a termination of the research on Artificial Neural Networks (ANN).
Multi-layer perceptron
Later on, the limitation of the one-layer perceptron was overcome by the multi-layer perceptron, which, besides the output layer, also has at least one further layer of hidden neurons (the so-called "hidden layer"). All neurons of a layer are completely linked to the neurons of the next layer forward (so-called feed-forward networks). Further topologies have also proved themselves:

• Full connection: the neurons of one layer are connected with the neurons of all following layers.
• Short-cuts: some neurons are not only connected with all neurons of the next layer, but additionally with further neurons of the layer after next.

Amongst others, a multi-layer perceptron can be trained with the back propagation algorithm. In this algorithm, the weights of the connections are changed such that the network can classify the desired patterns after supervised learning.
Method of learning
Back propagation, or back propagation of error [1], is a widespread practice for training artificial neural networks. Although first formulated in 1974 by Paul Werbos, it only became well known through the paper of David E. Rumelhart, Geoffrey E. Hinton and Ronald J. Williams from 1986, and led to a "renaissance" of the research on artificial neural networks. It belongs to the group of supervised learning methods and is applied as a generalization of the delta rule to multi-layer networks. In addition, it is necessary to have an external teacher who knows the desired output, called the target value, at every moment of input. Back propagation is a special case of the common gradient method in optimization, based on the mean squared error. With the "learning problem" for arbitrary networks, a transformation of given input vectors to given output vectors that is as exact as possible is aimed for. For this purpose, the quality of the transformation is described by an error function. The aim is the minimization of this error function, whereas in general only a local minimum can be found. The learning process of an artificial neural network takes place with the back propagation method by adaptation of the weights, due to the fact that the output value of the network is – besides the activating function – directly dependent on them. The algorithm for back propagation can be divided into the following phases (a minimal sketch follows the list):

54

An input pattern is applied and propagated forward through the Network.



The output of the Network is compared with the desired output. The difference between these two values is considered as the error of the Network.



The error is now propagated backwards, from the output layer to the input layer where depending on their influence on the error, the weights of the neuron connections, will be changed. This guarantees a convergence to the desired target output when attaching an input again.
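The following minimal Python sketch runs these three phases on the XOR problem mentioned above, which a one-layer perceptron cannot solve; the 2-3-1 network size, the sigmoid activation, the learning rate and the iteration count are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # input patterns
T = np.array([[0.], [1.], [1.], [0.]])                  # desired (target) outputs

W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros(3)     # hidden layer (assumed 3 neurons)
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros(1)     # output layer

def sig(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Phase 1: apply the input patterns and propagate them forward
    H = sig(X @ W1 + b1)
    Y = sig(H @ W2 + b2)
    # Phase 2: compare with the desired output; the difference is the error
    E = Y - T
    # Phase 3: propagate the error backwards and change the weights
    # according to their influence on the error (gradient descent)
    dY = E * Y * (1.0 - Y)
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= H.T @ dY; b2 -= dY.sum(axis=0)
    W1 -= X.T @ dH; b1 -= dH.sum(axis=0)

print(np.round(sig(sig(X @ W1 + b1) @ W2 + b2).ravel(), 2))  # close to [0, 1, 1, 0]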

3.3 ANN for Adaptable Product Development
In Chapter 3.2 the essential basics for designing a NN which is able to fulfil the requirements of this approach are described. The neural network will be built up from a set of multi-layer perceptrons and should be trained by the developer. The intention of this approach is to design a system which optimizes the development process of an adaptable product. Therefore the system to be developed should be trained with the procedure of the developer. It is important to recognize which requirement has led to which product characteristic. For a new product development the product characteristics can be derived immediately, if it is possible to recognize these assignments. But these assignments are so complex that they cannot be identified without the support of adequate tools. The system should be trained by the developer, because he determines the assignments. With Artificial Neural Networks, Adaptable Product Development should be made more effective and efficient. Artificial Neural Networks enable a continuous learning of thinkable and desired product components according to a customer's requirement. Therefore the requirements have to be prepared for the implementation in a NN system. The quantifiable requirements like cross-section, forces, etc. can be processed relatively easily. The qualitative requirements like ergonomics, haptics etc. are difficult to process. In a first step these requirements should be transferred into numerical values by the estimation of experts. The requirements which belong to a product are represented as a vector. Hence different values from different orders lead to different requirement vectors. Due to this approach it is possible to detect a proposal for the product component that fulfils the essential customer requirements, but is not glutted with unnecessary versions. By implementing a Neural Network, the required development time for familiar requests should be shortened effectively. By this, it is possible to automatically check whether the requirement can be realized by the existing variants of products as soon as the requirements are determined, thus whether the solution of the requirement is within the solution range (Fig. 5).
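A minimal Python sketch of the proposal step is shown below: existing solutions are scored against a new requirement vector and sorted by their score. The scoring function is an assumed placeholder for the trained network, and the solution data are illustrative assumptions.

import numpy as np

def score(requirements, solution_vector):
    # Placeholder for the trained network's output in [0, 1]: here simply
    # a similarity measure between requirement vectors.
    d = np.linalg.norm(requirements - solution_vector)
    return float(np.exp(-d))

solutions = {                          # requirement vectors of past products
    "variant A": np.array([0.9, 0.2, 0.5]),
    "variant B": np.array([0.4, 0.8, 0.1]),
    "variant C": np.array([0.7, 0.3, 0.6]),
}
new_order = np.array([0.8, 0.3, 0.55])  # quantified new requirements

ranked = sorted(solutions.items(),
                key=lambda kv: score(new_order, kv[1]), reverse=True)
for name, vec in ranked:                # proposals sorted by success score
    print(f"{name}: success score {score(new_order, vec):.2f}")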

Figure 5: Mapping of specifications and product solutions
The results would be visualized to the user – sorted by their probability of success – in the form of an intuitively operable system for proposals.
4 SUMMARY
The goal of this research project is to apply a NN-based system to shorten the time for the conceptual design of an Adaptable Product Development. The shortening of the conceptual design phase should be achieved by applying the NN-based system to identify the existing solutions that are suitable for a new product order. The identified solutions for the new product requirements should be adapted with a minimal effort on research and design. Applying the NN system to support the conceptual design phase of product development is the novelty of this approach. An important precondition for this application is the quantification of the requirements. To apply the requirements to a NN-based system they have to be quantified. The quantifiable requirements can be processed relatively easily. The qualitative requirements are difficult to process. In a first step these requirements should be transferred into numerical values by the estimation of experts. In a further step, fuzzy modelling should be applied to these requirements to obtain numeric values automatically. A further precondition for the success of this approach is surely the complete identification of the requirements. For this, a visualisation language for better communication with the customers is to be developed. In an Adaptable Product Development there are not many differences between product generations. Therefore the requirements can be adopted directly in the visualisation language. For this, the modelling language SysML, which stems from informatics, should be adapted to suit the needs of the automotive industry. The basis for the design of such a language is the description of the requirements in elementary components. For this, the VDA (German automotive industry association) guideline can be used as a basis.
5 ACKNOWLEDGEMENTS
The authors would like to thank the German Federal Ministry of Education and Research for supporting this research project.
6 REFERENCES
[1] Ehrlenspiel, K., Kiewert, A., Lindemann, U., 2007, Cost-Efficient Design, Amer. Society of Mechanical Engineers.
[2] Feldhusen, J., Nurchaya, E., Loewer, M., 2007, Variant Creation Using Configuration Of A Reference Variant, ICED International Conference On Engineering Design, Paris, France, 28-31 August.
[3] Rammert, W., Schlese, M., Wagner, G., 1998, Wissensmaschinen: soziale Konstruktion eines technischen Mediums: das Beispiel Expertensysteme, Campus Verlag.
[4] Kruse, P., 1996, Anforderungen in der Systementwicklung, VDI-Verlag.
[5] Almefelt, L., Andersson, F., Nilsson, P., Malmqvist, J., 2003, Exploring Requirements Management in the Automotive Industry, ICED International Conference On Engineering Design, Stockholm.
[6] Verband der Automobilindustrie, 2006, Automotive VDA-Standardvorlage Komponentenlastenheft, Frankfurt a.M.
[7] Pulm, U., 2004, Eine systemtheoretische Betrachtung der Produktentwicklung, PhD thesis, Munich.
[8] Zell, A., 2000, Simulation neuronaler Netze, Oldenbourg Verlag.

Work Roll Cooling System Design Optimisation in Presence of Uncertainty
Y. T. Azene¹, R. Roy¹, D. Farrugia², C. Onisa², J. Mehnen¹, H. Trautmann³
¹ School of Applied Science, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK
² Corus Research, Development & Technology, Swindon Technology Centre, S60 3AR, UK
³ Department of Computational Statistics, TU Dortmund University, Germany
{y.tafesseazene, r.roy, j.mehnen}@cranfield.ac.uk, [email protected]

Abstract
The paper presents a framework to optimise the design of work rolls based on the cooling performance. The framework develops meta-models from a set of Finite Element Analyses (FEA) of the roll cooling. A design of experiments technique is used to identify the FEA runs. The research also identifies sources of uncertainty in the design process. A robust evolutionary multi-objective algorithm is applied to the design optimisation in order to identify a set of good solutions in the presence of uncertainties both in the decision and objective spaces.
Keywords: Roll cooling design, Uncertainty, Design optimisation, Multi-objective optimisation

1 INTRODUCTION
Roll cooling optimisation can be considered as a process of finding the best set of manufacturing parameters which guarantees an efficient use and application of water cooling. The optimisation of the rolling system is crucial for improving process time, reducing cost as well as increasing product quality. To obtain the desired cooling conditions, it is essential to know and to control the relevant process parameters as accurately as possible. The hot rolling process takes place in a relatively harsh environment with safety implications due to high temperature, machinery, moving stock and overall conditions (space, etc.). Following a detailed mapping of the main input factors (dependent, independent) affecting roll cooling and, hence, roll life, factors such as roll temperature, stock temperature, roll speed, roll/stock contact length, cooling heat transfer coefficient, delay time and roll gap heat transfer coefficient have been considered as the main factors influencing cooling conditions. All these factors have inherent uncertainty, i.e. they follow some statistical distribution, which is in general a priori not known. Previous studies identified that the temperature difference in the roll as well as the developed stresses (thermal and mechanical) are key roll cooling design quality factors or responses [1, 2]. Both measures can contradict each other, depending on the amount of under- or overcooling, i.e. the improvement in one quality factor comes with a decrease in the other factor. Therefore, no single optimum solution exists, but a set of best possible compromise solutions can be found from which experts can choose depending on their preference [3]. Stress and temperature in the roll cannot be assumed to be perfectly constant. In practice they fluctuate slightly. To address the issues of roll cooling design as described above, this paper presents the application of robust evolutionary multi-objective optimisation using a new dominance-criteria technique. This technique is designed to identify Pareto fronts in noisy environments. A predecessor of this algorithm with noisy fitness functions is described in [4]. In this paper the new technique is adopted and applied in the


case of a noisy decision space as well as noisy fitness functions. More work on the technique can also be found in [5]. The paper also presents the underlying roll cooling model and an experimental application of the new optimisation technique on the model. Result analysis and concluding remarks are also presented.

Figure 1: Principle of rolling system.
2 DEVELOPMENT OF ROLL COOLING MODEL
This section develops a mathematical model of a roll cooling system design. The model is to represent the complex behaviour of a real-life rolling process in a simplified and controllable manner. Design of Experiments (DoE) methods are used to develop the surrogate model. The alternative/surrogate model represents the underlying characteristics of the issues being investigated, such as the rolling process factors and parameters, as well as the influence of those factors on the thermal behaviour of rolls during rolling [6]. The proposed meta-modelling framework was introduced to carry out computationally intensive design simulations. Since the framework is based on response surface methodology, it inherits the following advantage: providing insights into the relationship between the output responses y and the input design variables x, which can be used to evaluate design process parameter uncertainty. Nevertheless, there is also uncertainty in the meta-model. Uncertainty in the meta-model is due to the fact that it is an approximate representation of the real-world rolling practice, where there is an inevitable forced accuracy compromise and loss of information during the

design of experiments. In the next section the model building methodology and the evaluation of the uncertainty will be discussed.
2.1 Model building methodology
Experimental procedures
Problem definition: The purpose is to identify the main factors influencing effective roll cooling, thereby minimising the effect of thermal fatigue whilst increasing roll life. The problem definition leads to the identification of the changes in characteristics and behaviour of rolls which occur during cooling. The changes in characteristics and behaviour of rolls are later used as a measure in determining the solution for optimum roll cooling. Here, roll wear and its influencing factors, as well as the dependency, if any, between factors, are also investigated.
2.2 Identifying the optimum roll cooling measure
Change in roll surface temperature (∆T) is an important roll cooling design objective that expresses the effect of roll cooling during hot rolling. It is a suitable measurement since it displays the roll's thermal behaviour (i.e., how well the current cooling design meets the requirements). The change in temperature is measured as the difference between maximum and minimum values over a cycle in quasi-steady-state heat exchange rolling conditions: ∆T = T2 − T1, measured in Kelvin [K].
Roll stress [MPa]: Another equally important measure/objective in optimising roll cooling is keeping the roll maximum principal stress (MPS) at the roll surface as low as possible. The behaviour of stress in the roll is a useful objective to consider since it has a proportional effect on the change in temperature in rolls and hence on thermal fatigue. The roll surface passing under the water jets undergoes a cyclic state of tensile stresses, due to the roll cooling being applied after that surface has been in contact with the hot stock, where the stress is compressive in nature. This tensile stress is a contributory factor to thermal crack growth.
Identifying contributing factors: The aim is to understand the issues concerning the roll cooling problems and to identify specific factors contributing to the problems identified. This step also lists the most important design variables from a large number of potentially important factors. Regions of interest were defined according to roll cooling experts. The choice of design variables was driven by the need to mimic the real design problem experienced in the plant. Seven variables were identified and their operating ranges specified. Table 1 below shows the factors identified and the factor levels recommended. HTC 1 and HTC 2 are the heat transfer coefficient values for roll cooling and roll/stock contact respectively (kW/m²·K). The factors have been given a range of design space, called factor levels, lying between acceptable upper and lower boundaries; therefore modelling problems caused by factor variability can be resolved. The boundaries are assigned based on information from real-world rolling practices. Higher model accuracy is expected from a higher number of levels in the design space [7, 8]. Therefore 3 levels are allocated for each of the seven identified main factors.
Table 1: Factors and factor levels used in the simulations.

Experiment: The finite element runs were performed using Abaqus Standard version 6.2.2. Due to its high thermal resistance characteristics, high chromium steel was selected as the work roll material. The same loading and boundary conditions were applied in all simulations so that the responses are measured under similar conditions. For each run, values of the two response variables are recorded. Response values for ∆T are collected from the roll at a depth calculated from the roll speed and the roll/stock contact length, at the time when the temperature reaches the end of the steady state [9]. The depth indicates the maximum heat penetration in the roll when in contact with the stock at a given roll speed. The heat penetration depth can be expressed mathematically as:

P = √(6αt)   (1)

where α is the thermal diffusivity of the roll material, expressed as a function of the thermal conductivity (K) and the product of density and specific heat capacity (ρ Cp):

α = K / (ρ Cp)   (2)

with ρ and Cp representing the roll material density and specific heat capacity respectively. The parameter t is the roll/stock contact time, expressed as the roll/stock projected contact length (L) divided by the product of roll rotational speed (Ω) and roll radius (r):

t = L / (Ω r)   (3)

Based on the roll material considered for the simulation, high chromium steel, the following values are used to calculate the roll heat penetration depth: K = 48 W/(m·K), ρ = 7833 kg/m³, Cp = 478 J/(kg·K).
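As a quick numerical check of equations (1)-(3), a minimal Python sketch using the material values above; the contact length, roll speed and roll radius are illustrative assumptions, not values from the paper:

```python
import math

# Material properties for high chromium steel (as quoted in the text)
K = 48.0        # thermal conductivity [W/(m.K)]
rho = 7833.0    # density [kg/m^3]
cp = 478.0      # specific heat capacity [J/(kg.K)]

# Illustrative process values (assumptions, not from the paper)
L = 0.025       # roll/stock projected contact length [m]
omega = 5.0     # roll rotational speed [rad/s]
r = 0.35        # roll radius [m]

alpha = K / (rho * cp)          # thermal diffusivity, eq. (2) [m^2/s]
t = L / (omega * r)             # roll/stock contact time, eq. (3) [s]
P = math.sqrt(6.0 * alpha * t)  # heat penetration depth, eq. (1) [m]

print(f"alpha = {alpha:.3e} m^2/s, t = {t:.4f} s, P = {P * 1000:.2f} mm")
```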

2.3 Finite element analysis and data extraction
The finite element runs were performed using Abaqus version 6.2.2. The change in roll temperature and the stress are the responses to be collected and analysed. The sample results below show the effect of the combination of variables and their contribution to the variation of roll temperature during rolling. An X-Y plot of the field output (ODB) of the simulation results was used to analyse and determine the trend and the exact values of the responses from the roll, as shown in Figures 4 and 5. The responses recorded from each simulation run are later used to develop the meta-model using a statistical tool. Sample results of the recorded responses are shown below. The FEA responses show how variation of the design variable parameters and the scheduling design set affect the thermal behaviour (temperature) and mechanical property (stress) of the rolls, and how the cooling system reacts in normalizing that effect during the hot rolling process. Temperature is calculated as the difference between the roll temperature after simulation, taken at a depth, and the roll initial temperature before simulation (Figure 5), while stress is the value directly measured from the roll surface after simulation (Figure 4). The samples in Table 2 below show an example input parameter set and the data/responses extracted from the finite element simulation output. Considering the two steps of the process, rolling and delay time, the response data have been collected at the beginning and end of each step, giving a total of four data values. The delay time during rolling is a time when no stock passes through the roll gap; it can occur at any time in the process and results from controllable or uncontrollable activities. Generally, unsolicited delay time, whether too short or too long, is considered an


uncertainty in the rolling process, since it has a direct effect on the roll temperature distribution.

Table 2: Input parameters and responses used in the modelling.

Temperature = -0.7915 * (x1+ε'1) + 0.0014 * (x1+ε'1)² - 0.0488 * (x2+ε'2) + 2.851×10⁻⁵ * (x2+ε'2)² + 30.6809 * (x3+ε'3) + 30.6809 * (x3+ε'3)² - 1.7359 * (x4+ε'4) + 0.0200 * (x4+ε'4)² - 0.0038 * (x5+ε'5) + 6.9791×10⁻⁴ * (x5+ε'5)² - 2.4565 * (x6+ε'6) + 0.0721 * (x6+ε'6)² - 1.4177 * (x7+ε'7) + 0.1333 * (x7+ε'7)² + 83.5805 + ε1   (4)

Stress = -1.9147 * (x1+ε'1) + 0.0272 * (x1+ε'1)² + 0.0489 * (x2+ε'2) - 6.7312×10⁻⁵ * (x2+ε'2)² - 1.4309×10⁻² * (x3+ε'3) + 70.5653 * (x3+ε'3)² + 1.3606 * (x4+ε'4) - 0.0096 * (x4+ε'4)² - 0.5041 * (x5+ε'5) + 0.0067 * (x5+ε'5)² + 4.0248 * (x6+ε'6) - 0.1024 * (x6+ε'6)² - 16.5929 * (x7+ε'7) + 0.8525 * (x7+ε'7)² + 1.3624×10⁻² + ε2   (5)

with x1 to x7 being the input factors defined in Table 1 (left to right). The inherent uncertainty in the fitness functions is accounted for by the error terms ε1 and ε2. Both error terms follow an a priori unknown probability distribution; for reasons of convenience a normal distribution εi ~ N(0, σi²), i = 1, 2, has been assumed in the experiments. It is assumed that x1 to x7 are disturbed by external noise as well: each factor has its own specific error term ε'1, …, ε'7. The noise in the decision variables has not been considered in the model fitting, but is added after the models have been estimated. In the experiments, for the sake of convenience, these error terms also follow a normal distribution ε'i ~ N(0, σ'i²).

Validation of the Model: This section justifies the acceptability of the meta-model by analysing post-processing statistical features from the regression. These features help to determine the relevance of the independent input factors in the model building process, as well as measuring the ability of the model to predict the system response over the search space. The performance criteria are based on two measures, R² and R²adj, which measure the amount of variation explained by the model. R² equals 1 for a perfect fit, i.e. when all N model outputs equal their corresponding simulation outputs; a higher R² implies lower variation between observed and predicted values and therefore a better model. The basic quality of the fit to the deterministic FEM data, R² and R²adj, is 0.91 and 0.81 respectively for change in temperature, and 0.95 and 0.89 for stress.
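For readers who want to experiment with the surrogate, a minimal Python sketch of how equations (4) and (5) can be sampled. The coefficient arrays transcribe the equations above, while the σ values and the design point in the usage example are placeholders, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Coefficients of eqs. (4) and (5): linear terms a, quadratic terms b, intercept c
TEMP = (np.array([-0.7915, -0.0488, 30.6809, -1.7359, -0.0038, -2.4565, -1.4177]),
        np.array([0.0014, 2.851e-5, 30.6809, 0.0200, 6.9791e-4, 0.0721, 0.1333]),
        83.5805)
STRESS = (np.array([-1.9147, 0.0489, -1.4309e-2, 1.3606, -0.5041, 4.0248, -16.5929]),
          np.array([0.0272, -6.7312e-5, 70.5653, -0.0096, 0.0067, -0.1024, 0.8525]),
          1.3624e-2)

def noisy_fitness(x, model, sigma_x, sigma_f):
    """One noisy sample of a fitness function: decision-space noise eps'_i is
    added to x, and objective-space noise eps to the model output."""
    a, b, c = model
    xp = x + rng.normal(0.0, sigma_x, size=x.shape)  # x_i + eps'_i
    return a @ xp + b @ (xp ** 2) + c + rng.normal(0.0, sigma_f)  # + eps

# Illustrative call: design point and sigmas are hypothetical placeholders
x = np.array([10.0, 50.0, 0.5, 20.0, 100.0, 5.0, 2.0])
samples = [noisy_fitness(x, TEMP, sigma_x=0.5, sigma_f=2.0) for _ in range(5)]
print(samples)
```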

Figure 2: Roll/stock contact during hot rolling.

Figure 3: Depth of heat penetration in the roll after steady state.

Figure 4: Stress (S-Maximum) data from the roll surface after steady state.

Figure 5: Temperature data from the roll after steady state, at a depth of 3.6 mm; roll initial temperature = 40 °C.

Fitting the model: The models for the temperature and stress response surfaces were generated by fitting a second order polynomial to the results from the FEM simulations. The fit was carried out using the Statistica® software package, resulting in the two models given in equations (4) and (5) above.
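The fit itself is straightforward to reproduce. Below is a sketch of an equivalent least-squares fit reporting R² and adjusted R², with Python/numpy standing in for the Statistica workflow; the synthetic data are placeholders for the FEM runs, which are not reproduced here:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit y = c + sum_i a_i x_i + b_i x_i^2 (no cross terms, as in eqs. 4-5)
    by ordinary least squares and report R^2 and adjusted R^2."""
    n, k = X.shape
    A = np.hstack([np.ones((n, 1)), X, X ** 2])  # design matrix: 1, x_i, x_i^2
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    p = A.shape[1] - 1                           # number of predictors
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)
    return coef, r2, r2_adj

# Synthetic stand-in data: 60 hypothetical runs over 7 factors
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(60, 7))
y = 80 + X @ rng.normal(size=7) + (X ** 2) @ rng.normal(size=7) + rng.normal(0, 0.1, 60)
coef, r2, r2_adj = fit_quadratic_rsm(X, y)
print(f"R^2 = {r2:.3f}, adjusted R^2 = {r2_adj:.3f}")
```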


Figure 6: Method for the modelling and optimisation process.

3 REVIEW OF ROLL THERMAL MODELLING
Cooling of rolls is a critical concern in rolling system design, particularly in the operation of hot mills. Untimely loss of rolls is a common occurrence during the hot rolling process. The main sources of this phenomenon are the

severe temperature variations and the resulting thermal stresses during work-roll contact in the process. To control roll thermal stresses and roll life, it is necessary to know the temperature variations in the work-roll during the hot rolling process. A number of published studies have focused on determining the temperature field in the work-roll and how it affects roll life during rolling. Parke and Baker [10] used a computational method for determining the temperature field in the finishing stand work-roll; the results from their model were then used to design the optimum water spray condition. A two-dimensional finite element method was used by Sluzalec [11] to predict the temperature distribution within the work-rolls in a roll forging process. Devadas and Samarasekera [12] utilized a one-dimensional heat transfer model based on the finite difference method; the model was coupled with the assumption of homogeneous work to estimate the steady state temperature distributions in the work-rolls and the rolled metal during the finishing stage. Tseng et al. [13] combined experimental and numerical methods to predict temperature distributions in work-rolls and to evaluate roll life. In other work, Tseng et al. [14] used an analytical method to solve the heat transfer partial differential equations and thus determine the temperature field in a work-roll for a single pass hot strip rolling process. The cooling of both the work-rolls and the product was simulated with the aid of a mathematical model and the results are presented in [15]; in that paper the temperature fields in the work-roll and the rolled metal are predicted and the effects of various cooling conditions on work-roll temperature variations are determined. In all of these works, however, the results show only the prediction of work-roll temperature transfer and an estimate of how it affects roll life during rolling. Most of these papers also do not examine the uncertain and fuzzy issues in the rolling system that affect the cooling process. To fill these gaps, this paper therefore focuses on modelling a rolling system design for optimum cooling of rolls, by integrating the uncertainties in the rolling process affecting cooling, the deterministic parameters, and the qualitative nature of rolling that causes unexpected temperature variation in rolls and leads to untimely roll wear. Tackling cooling problems therefore requires a multi-dimensional approach; hence a multiple model approach is most appropriate for achieving a better cooling solution. This requires multiple representations and multiple models for each form of information (subjective and deterministic variable representation). The challenge, however, is in building a system with a selection of representations integrated as one model. Today, due to its accuracy, thermal modelling is one of the most important ways to model quantitative representations of data in metal forming. The most common and most used technique is the Finite Element Method (FEM). This method is the only one which can give the behaviour of the temperature during metal forming with acceptable results. Because of developments in numerical analysis, the FEM is no longer "nice to have", but is rapidly becoming a cost effective way of representing/modelling real life forming problems. The finite element method is a technique based on discretisation: a number of finite points, called nodes, are identified.
The work piece is thus divided into an assemblage of elements connected together. Once the boundaries are known, the flow equations can be resolved. This is the best technique for analysing temperature in a metal forming process, since the method gives the temperature distribution on the roll at any point/node required. Walters gives an example of an application of the finite element method in forging and notes two advantages [16]. The first is the capability of obtaining detailed solutions of

stress, strains and temperature; the second is that a computer code can be used several times and for different kinds of problems.

4 UNCERTAINTY AND SOURCES OF UNCERTAINTY IN THE PROBLEM
Design uncertainty comprises design imprecision, uncertainty in choosing among alternatives, and stochastic uncertainty, usually associated with measurement limitations [18, 19, 20]. This summarises the inevitability of uncertainty in engineering design optimisation: if a reliable optimal solution is to be found, this inevitability must be considered in the optimisation task. Several general sources contribute to the uncertainties in simulation predictions. These contributors can be categorized as follows:
• Variability of input values x (including both design parameters and design variables), called "input parameter uncertainty";
• Uncertainty due to limited information in estimating the characteristics of model parameters p, called "model parameter uncertainty"; and
• Uncertainty in the model structure F(·) itself (including uncertainty in the validity of the assumptions underlying the model), referred to as "model structure uncertainty".
The robust non-dominance technique considered here aims to address all or most of these uncertainty issues to achieve the best/optimised solution. Once the uncertainties and the sources of uncertainty related to the problem have been identified, the next step is to represent the uncertainty mathematically in the model. The integrated model is then introduced into the optimisation using the robust non-dominance criterion genetic algorithm technique.

5 FEATURES OF THE ROBUST NON-DOMINANCE CRITERION
The robust non-dominance criterion is a GA-based multi-objective optimisation method designed to reflect the general situation of real world applications, such as rolling systems, where a highly disturbed process environment and inevitable uncertainty in the input variables occur. The technique is also designed to find solutions for problems with uncertainty by introducing a non-dominance criterion between design points in the solution space that are created as a result of the presence of noise in the problem. The idea is based on the standard approach of evaluating the objective functions a fixed number of times k for a given decision vector. The problem at hand is then to estimate the true Pareto front PFtrue from a set of k noisy samples. The criterion for dominance between points is determined by computing the median values of the objective functions for all points of the Pareto set. The initial step is computing the k design solution points, i.e. realisations of the two objective functions, for seven different points in the decision space, together with the related medians. Afterwards the points are connected to one another so that convex hulls can be formed, and the required average distances are computed. A measure of the uncertainty of a solution in m-dimensional objective space can then be introduced as the average deviation of a sample set from the estimate of the solution in each coordinate direction. Taking P := med(fk) as a robust estimate of a solution, the convex hull of all k sample points around P describes a worst case representative of solution P containing all k samples. The absolute distances in each dimension of all points in the convex hull to P can be used to define the uncertainty vector. A robust dominance criterion is then determined: given the uncertainty bounds around a solution P, all points within the box formed by the bounds are represented by P. This implies that the conventional Pareto-dominance definition may no longer hold if two points P and Q are inside the uncertainty vicinity of each other. Although these points may dominate each other in a noise-free case, in the case with noise it is impossible to tell which point dominates the other. For the analysis in this paper a real-coded Matlab version of NSGA-II was chosen, to provide a source of comparison for the technique used here to deal with problems with uncertainty. More details of the technique can be found in [4, 5].
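The paper's criterion uses convex hulls of the k samples; the sketch below (Python, our paraphrase rather than the authors' Matlab implementation) approximates the hull-based uncertainty vector with per-objective mean absolute deviations, which keeps the idea visible in a few lines:

```python
import numpy as np

def robust_estimate(samples):
    """samples: (k, m) array of k noisy evaluations of one solution in
    m-dimensional objective space. Returns the median estimate P and
    per-objective uncertainty bounds (mean absolute deviation from P)."""
    P = np.median(samples, axis=0)                 # P := med(f_k)
    bounds = np.mean(np.abs(samples - P), axis=0)  # average deviation per objective
    return P, bounds

def robustly_dominates(sp, sq):
    """Robust dominance test (minimisation): P dominates Q only if P is
    Pareto-better AND Q lies outside P's uncertainty vicinity."""
    P, bp = robust_estimate(sp)
    Q, bq = robust_estimate(sq)
    pareto_better = np.all(P <= Q) and np.any(P < Q)
    inside_vicinity = np.all(np.abs(P - Q) <= bp + bq)  # boxes overlap everywhere
    return pareto_better and not inside_vicinity

# Illustrative use with k = 10 noisy samples of two 2-objective solutions
rng = np.random.default_rng(2)
sol_p = np.array([1.0, 2.0]) + rng.normal(0, 0.05, size=(10, 2))
sol_q = np.array([1.5, 2.5]) + rng.normal(0, 0.05, size=(10, 2))
print(robustly_dominates(sol_p, sol_q))  # True: clearly separated despite noise
```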


6 OPTIMISATION EXPERIMENT AND RESULTS
This section presents the engineering design optimisation with uncertainty using the non-dominance criterion technique. The optimisation was carried out on problems with uncertainty in the decision (parameter) space and in the objective space, and the technique was also applied to the design optimisation problem where uncertainty is present in both spaces. The experimental results are presented below.

6.1 Experimental details and discussion of results
The experiment is intended to provide a case study of a real life engineering design optimisation problem with uncertainty. The experiments are conducted on the mathematical model developed in the previous section for the real life roll cooling design problem. The robust non-dominance optimisation approach was applied to the mathematical model, and a minimisation of the change of temperature (∆T) and of the work rolls' maximum principal stress (MPS) was sought.

6.2 Experimental details
This section investigates various levels of uncertainty and their effect on the cooling of rolls. For comparison, a deterministic search (i.e. a search with no noise) was initially carried out; then the search problem with noise was experimented with, using the new Pareto-dominance technique. A total of 6 experiments were carried out based on the sets shown in Table 3 below. In the experiments a standard NSGA-II setting has been applied, with crossover probability pc = 0.9 and mutation probability pm = 1/n, where n is the number of decision variables. The distribution indices for the crossover and mutation operators are νc = 20 and νm = 20 respectively. A population size of pop = 200 resulted in sufficient spread of the solutions along the Pareto front, and all experiments have been performed with gen = 200 generations.

Table 3: Experimental set (optimisation with and without uncertainty in the DS and FF).

Experiment:  1  2  3  4  5  6
DS:          1  2  1  3  1  3
FF:          1  1  2  1  3  3

Key: FF = fitness function; DS = decision space; 1 = deterministic search (no uncertainty in the problem); 2 = uncertainty with lower sigma (5% DS & 6.25% FF); 3 = uncertainty with higher sigma (10% DS & 12.5% FF).

The experimental results are presented in the next section. The first part is the grid search on the 7-dimensional decision space; here no uncertainty is


considered; thus the result highlights the (estimated) true Pareto front. The second part presents the results of experiments taking uncertainty into account. Based on information from real world rolling practice, two levels of uncertainty, lower and higher sigma values, are considered. The levels are 5% and 10% of the decision space, and uncertainty of 6.25% and 12.5% for the fitness function. The two levels are used to represent the commonly noticed degree of uncertainty and the worst case scenario in rolling practice. The performance of the solutions is based on these uncertainty values, introduced into the problem in the form of a perturbation, where the perturbation follows a normal distribution with sigma (σ) values. The sigma is calculated as a percentage of the decision space of each decision variable listed in Table 1, i.e. σ = 5% or 10% of (xi,max - xi,min), where i = 1, …, 7.

Experimental results: Below are the results from the grid mapping (point cloud) and the Pareto search (the thick line under the cloud). Here no uncertainty is applied. The results are used to illustrate and provide a comparison between the true Pareto front of the problem without uncertainty, obtained from the grid search, and the impact of the new Pareto-dominance criterion. As shown in Figure 7 the result gives a Pareto front with the same convex shape as the grid search from standard NSGA-II. This means that in a deterministic environment the new Pareto-dominance criterion behaves like the conventional dominance criterion, as expected.
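The perturbation described above can be written directly; a small Python sketch, in which the factor ranges are placeholders standing in for the values of Table 1:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder factor ranges standing in for Table 1 (x_min, x_max per factor)
x_min = np.array([0.0, 10.0, 0.1, 5.0, 50.0, 1.0, 0.5])
x_max = np.array([20.0, 90.0, 0.9, 35.0, 150.0, 9.0, 3.5])

def perturb(x, level=0.05):
    """Add decision-space noise with sigma = level * (x_max - x_min),
    as used for the 5% and 10% uncertainty experiments."""
    sigma = level * (x_max - x_min)
    return x + rng.normal(0.0, sigma)

x = (x_min + x_max) / 2   # a nominal design point
print(perturb(x, 0.05))   # commonly noticed uncertainty
print(perturb(x, 0.10))   # worst-case scenario
```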

Figure 7: ∆f / S map generated by exhaustive grid search of the decision space.

Uncertainty in the Decision Space: Here the results of the problem with uncertainty in the decision space are presented. Two experiments were carried out, using σ = 5% and σ = 10%, so that both the common uncertainty margin of 5% and the worst case scenario margin of 10% could be tested in the optimisation. The robust non-dominance optimisation technique was applied and the results observed. The results show that the spread of the Pareto front is clustered and scattered (Figures 8.1, 8.2). This is in fact an expected feature of solutions under Pareto dominance in an uncertain environment. However, unlike the results of the other experiments, presented in the next sections for uncertainty in the fitness function, this property is observed more strongly for problems with uncertainty in the decision space. Nevertheless, the overall spread of the solutions lies around and behind the true Pareto front. From the results it has been learnt that in this particular case more investigation is required to study the scattering behaviour and improve the solution, which is beyond the scope of this paper. Additional work dealing with optimisation problems with uncertainty in the decision space, and a solution proposed for improved results, are presented in [5].

Figure 8.1: Optimisation with uncertainty in the decision space, σ = 5%.

Figure 8.2: Optimisation with uncertainty in the decision space, σ = 10%.

Uncertainty in the fitness function: Here an experiment was conducted on the design optimisation problem with uncertainty in the model, i.e. in the fitness functions ∆f (change in temperature) and MPS (maximum principal stress). Unlike the results observed for uncertainty in the decision variables, here the robust non-dominance technique finds solutions that are evenly spread along the true Pareto front (Figure 9.1). The same problem was then experimented with at the worst case scenario uncertainty level, considered very severe in real life rolling practice. Although the result shows a slight increase in scattering and a shift away from the true Pareto front, the solution nevertheless remains close to it (Figure 9.2). This means that uncertainty in the model, even in a worst case scenario, can be dealt with in the optimisation using the robust non-dominance criterion technique presented in section 5.

Figure 9.1: Optimisation with uncertainty in the fitness functions, σ = 6.25%.

Figure 9.2: Optimisation with uncertainty in the fitness functions, σ = 12.5%.

Uncertainty in the decision space and objective space: Here the experiment was carried out to observe the optimisation of problems with uncertainty in both the decision space and the fitness functions. As presented above, the two cases were experimented with separately, each resulting in a Pareto front with unique characteristics. Here the result shows that the Pareto-dominance criterion technique finds optimal solutions, but with fewer design solution points in comparison with the results presented in the previous sections. This may be due to the higher overall noise level, and particularly to the presence of an uncertain decision space in the problem, so that fewer non-dominated points are detected in the convex hull (Figures 10.1 and 10.2). However, the results suggest that uncertainty in the decision space and the fitness function can be dealt with in the optimisation using the robust non-dominance criterion technique. As presented below, the algorithm finds a Pareto front that is very close to the true Pareto front. For comparison, three random samples of design solutions for the problem without uncertainty and with uncertainty in the decision space and fitness function (Figure 10.2) are presented in Tables 4 and 5.

Figure 10.1: Optimisation with uncertainty in the decision space and fitness function, σ = 5% and 6.25% respectively.

Figure 10.2: Optimisation with uncertainty in the decision space and in the fitness function, σ = 10% and 12.5% respectively.

Table 4: Design solution at three random points along the true Pareto front (Figure 7).


Table 5: Design solution at three random points along the Pareto front of the problem with uncertainty (Figure 10.2).

Key to Tables 4 and 5: ∆T = change in temperature; S = stress (principal); f1 = fitness function 1 (maximum principal stress); f2 = fitness function 2 (change in temperature); x1, x2, x3, x4, x5, x6, x7 are the input variables/design points.

7 DISCUSSION AND CONCLUSIONS
Engineering design optimisation is a challenging discipline. The obvious challenge is in decision making, and decision making is even more difficult in the presence of uncertainty. Uncertainty, such as input variability, is a common occurrence in real life engineering processes and thus needs to be addressed in the optimisation. Many real life engineering processes are chaotic and characterised by high disturbance; design optimisation of a real life process is therefore a complex task, and the need to develop a representative mathematical model is unavoidable. However, the mathematical model is an approximation of the real life process. This approximation and the inherent input variable variability are the sources of uncertainty in the model in terms of its accuracy. This paper presents a design optimisation of a real life roll cooling process, where the process is represented by the approximate mathematical model described in section 2. The paper addresses the uncertainty of the input variables and of the model (fitness functions) in the optimisation using a multi-objective optimisation technique called the robust non-dominance criterion. A number of experiments have been conducted on the surrogate model with varying degrees of uncertainty, i.e. a commonly occurring degree of uncertainty and a worst case scenario. The initialisation of the degree of uncertainty is motivated by information from current real world rolling practice. The paper has shown that an optimisation problem with uncertain input variables and fitness functions/model can be handled using the robust non-dominance criterion technique. The technique converges to a set of solutions that gives good nominal performance while exerting maximum robustness, giving an important rolling system design parameter set for achieving optimum roll cooling. As the experimental results show, the technique is able to find optimal design solutions even with highly uncertain input parameters and uncertain fitness functions.

8 ACKNOWLEDGMENTS
The authors would like to thank the Engineering and Physical Sciences Research Council (EPSRC) for funding this work. Many thanks also to CORUS RD&T UK, Swindon Technology Centre and the Cranfield Decision Engineering Centre for their support of this work.

REFERENCES
[1] Parke, D. M. and Baker, J. L., 1972, 'Temperature Effects of Cooling Work Rolls', Iron and Steel Eng., 49: 675-680.
[2] Tseng, A. A., Lin, F. H., Gunderia, A. S. and Ni, D. S., 1990, 'Roll Cooling and its Relationship to Roll Life', Metallurgical Trans., 20A: 2305-2320.
[3] Roy, R., 1997, 'Adaptive Search and the Preliminary Design of Gas Turbine Blade Cooling Systems', PhD thesis, University of Plymouth, UK.
[4] Mehnen, J. and Trautmann, H., 2008, 'Robust Multi-objective Optimisation of Weld Bead Geometry for Additive Manufacturing', ICME 2008 Intelligent Computation in Manufacturing Engineering.
[5] Roy, R., Azene, Y. T., Farrugia, D., Onisa, C. and Mehnen, J., 2009, 'Evolutionary Multi-objective Design Optimisation with Real Life Uncertainty and Constraints', CIRP Annals.
[6] Oduguwa, V. and Roy, R., 2006, 'A Review of Rolling System Design Optimisation', International Journal of Machine Tools and Manufacture, 46/8: 912-928.
[7] Cheng, C.-L. and Van Ness, J., 1999, Statistical Regression with Measurement Error, Oxford University Press, New York.
[8] Roy, R., Tiwari, A. and Corbett, J., 2003, 'Designing a Turbine Blade Cooling System Using a Generalised Regression Genetic Algorithm', CIRP Annals, 52/1: 415-418.
[9] Guo, R.-M., 1993, 'Heat Transfer of a Finite Length Roll Subject to Multiple Zone Cooling and Surface Heating Boundary Conditions', Moving Interface Problems in Manufacturing Heat Transfer, ASME Winter Annual Meeting.
[10] Parke, D. M. and Baker, J. L., 1972, 'Temperature Effects of Cooling Work Rolls', Iron and Steel Eng., 49: 675-680.
[11] Sluzalec, A., Jr., 1984, 'A Preliminary Analysis of Temperatures within Roll-Forging Dies, Using a Finite Element Method', Int. Journal of Machine Tool Design and Research, 24: 171-179.
[12] Devadas, C. and Samarasekera, I. V., 1986, 'Heat Transfer During Hot Rolling of Steel Strip', Ironmaking and Steelmaking, 13: 311-321.
[13] Tseng, A. A., Lin, F. H., Gunderia, A. S. and Ni, D. S., 1990, 'Roll Cooling and its Relationship to Roll Life', Metallurgical Trans., 20A: 2305-2320.
[14] Tseng, A., Gunderia, S. and Sun, P., 1991, 'Cooling of Roll and Strip in Steel Rolling', Steel Research, 62: 207-215.
[15] Lin, Z. and Chen, C., 1995, 'Three-dimensional Heat Transfer and Thermal Expansion Analysis of the Work Roll During Rolling', Journal of Materials Processing Technology, 49: 125-147.
[16] Walters, J., 1991, 'Application of Finite Element Method in Forging: an Industry Perspective', Journal of Materials Processing Technology, 27: 43-51.
[17] Roy, R., Hinduja, S. and Teti, R., 2008, 'Recent Advances in Engineering Design Optimisation: Challenges and Future Trends', CIRP Annals - Manufacturing Technology, 57/2: 697-715.
[18] Jones, P., Tiwari, A., Roy, R. and Corbett, J., 2004, 'Multi-objective Optimisation with Uncertainty', Proceedings 451, Artificial Intelligence and Soft Computing.
[19] Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T., 2002, 'A Fast and Elitist Multi-Objective Genetic Algorithm: NSGA-II', IEEE Transactions on Evolutionary Computation, 6: 182-197.
[20] Chen, W., Allen, J., Tsui, K. and Mistree, F., 1996, 'A Procedure for Robust Design: Minimizing Variations Caused by Noise Factors and Control Factors', ASME Journal of Mechanical Design, 118: 478-485.


Integrating Conventional System Views with Function-Behaviour-State Modelling T.J. van Beek, T. Tomiyama Intelligent Mechanical Systems, Delft University of Technology, Mekelweg 2, Delft, 2628 CD, The Netherlands [email protected], [email protected]

Abstract
The main contribution of this paper lies in observations made in industry, resulting in an approach that integrates the Function-Behaviour-State (FBS) model with user workflow and interface models to create a complex system overview. On one side of the spectrum the approach focuses on modelling the usage of the system, while on the other side it considers the modelling and managing of interfaces. The choice for both these views is based on industrial experience with the clinical Magnetic Resonance (MR) imaging system, with the aim of managing design complexity. The paper gives a real example of the approach.

Keywords: Design complexity, function model, overview



1 INTRODUCTION
The conceptual engineering design process of complex mechatronic systems rarely starts from scratch. Mostly an existing system will serve as a starting point for the new, changed or improved functionality. Producing high quality design solutions under these circumstances is a difficult task for system architects, because they have to consider a large number of design aspects simultaneously and make sure the solution is obtained in a timely manner. Complexity management is essential in this process but has not yet been satisfactorily addressed in literature and practice [1]. The development of a Magnetic Resonance (MR) imaging system is presented here as an example of a complex mechatronic system. MR systems have only been around since the early 1980s and represent a relatively young class of systems; therefore, MR development still evolves rapidly. New functionalities are developed for each product release, and compatibility of new functionalities with the existing system is essential. Besides the design process complexity, MR design is characterized by a strongly multidisciplinary nature (e.g. mechanics, electronics, computer science, materials science, clinical science and fundamental physics). Managing and coordinating this multidisciplinary product development process is extremely difficult [2] and exceeds the comprehension of a single engineer, who cannot understand every detail [1, 3, 4]. The research presented in this paper is a continuation of ongoing research [5, 6]. This research aims at developing a method that supports system architects in their complex design activities by giving them a clear bird's-eye view of the system architecture, see Figure 1. By extending the FBS model [7] with additional system views, a consistent system architecture ontology is targeted. Creating a formalism that describes this ontology will allow for semi-automated reasoning on the large number of design concepts and address scaling issues. The main contribution of this paper lies in the observations done in industry, resulting in an approach of integrating the FBS model with user workflow and interface requirements models to create overview. On one side of the spectrum it focuses on modelling the usage of the system, while on the other it considers the modelling and managing of interfaces. The choice for both these views is based on industrial experience with the MR system to manage design complexity. The paper gives a real example of the approach. Lindemann and Maurer [8] recognize that controlling product complexity has become an important issue in product development, and they state that although reducing complexity is purposeful, it is not favourable at any cost. Controlling complexity is not the same as managing it, as this paper proposes. This paper hypothesises that by managing the design complexity with increased overview, the design complexity is decreased.

Figure 1. Schematic representation of system complexity: the number of details in the system description ranges from 10⁰ at the abstract top level, through the systems level (the current situation), to 10⁶ at the detailed component level.

The Function and Key drivers (FunKey) method [9] proposes relating a system's functions to key drivers and requirements and coupling them in a matrix. The method seeks mainly to provide an easy way of documenting a certain choice for an architecture and its performance, thereby providing the system architect with an overview of his choices. The contribution is focussed

at the architect himself. This paper proposes to also facilitate communication and design knowledge sharing among architects and other stakeholders in the design process. Boersting et al. [10] give an important contribution that relates requirements to functions and deals with complex design to gain product overview by means of models. This enables them to manage and predict change propagation in complex design. They deal with the important relation between requirements and functions, whereas this paper tries to relate workflow to functions; workflow aims more at the modelled usage of the product. The CAFCR model [11] also recognizes the importance of a system architecture overview. It proposes a decomposition of the architecture into five main views that capture the need of the customer, the functions the product performs, and the design of the product from the conceptual and realization points of view. The work is of importance because it presents a method to create different views, but it does not specify how to implement these methods in a design support tool. This paper will first give the motivation and background of the research. It will elaborate on the MR system design to illustrate the complexity, and it will discuss FBS as a model that can reduce complexity. In section 3 the approach of integrating workflow, FBS and interface models is proposed, and in section 4 an example is given using the patient support table of the MR system.

2 MOTIVATION

2.1 Industry-as-Laboratory
The research presented in this paper is part of a research project conducted in close collaboration with the MR division of Philips Healthcare. The goal of this collaboration is to bring the academic and industrial worlds closer together. Research driven by real industrial problems ensures relevant research topics, and proposed methods and solutions can later be tested using industrial practice as a laboratory. A short introduction to the MR system and development organization follows to illustrate its complexity.

MR System
The authors would like to acknowledge their colleague Alexander Ulrich Douglas for his work on the following illustrative description of the MR. MR is a clinical imaging modality that visualizes small changes in the magnetism of the nuclei of hydrogen atoms. The magnetic properties temporarily change once excited by the MR's static and dynamic magnetic fields. The static field strength ranges from 1 to 3 Tesla and is produced by a superconductive magnet constantly cooled to temperatures close to absolute zero (0 K ≈ -273 °C). Combined with a dynamic magnetic field created by large amplifiers and electromagnetic coils, the hydrogen nuclei are excited according to a predetermined waveform. After the excitation the MR turns into a highly sensitive sensor and measures the magnetic response of a specific part of the human body. Dedicated receiver coils are developed for specific parts of the body to support different clinical applications; in other words, to produce an image of the patient's neck a different receiver coil is used than when an image of a leg is produced. The coil sensor signals then need to be captured and processed in real time (of the order of nanoseconds) by the data acquisition system. Powerful computers make sure that the signal is conditioned such that an image reconstruction is possible.

Besides the workstation used as an interface between the system and the operator, several computers are embedded to pre- and post-process, control and plan the scans. Three hospital rooms are needed to house the MR system: a technical room with amplifiers, control units and cooling equipment; an examination room with the magnet and the patient environment; and an operator room with the workstation. From the description above it can be seen that the MR system spans the disciplines of mechanics, electronics, control, physics, materials science, software, clinical science and marketing.

Philips Healthcare MR development involves 400 people across all the aforementioned disciplines, spread over 3 main and several smaller sites all over the world. The software archive contains about 10 different programming languages, resulting in 7 million lines of code; 150 software developers work on the archive concurrently to add new functionality to the system. In preceding work of the authors [5], three main issues in the product development process have been identified:
• Lack of design traceability
• Lack of design understanding
• No support in decomposing the design problem into smaller pieces

Design Traceability
The transitions from one level of abstraction to another are often iterative processes in both directions. Because of the large amount of uncoupled design information content, good traceability of the relations between design aspects at different levels of abstraction is difficult to realize in complex multi-disciplinary design processes.

Design Understanding
There is a need for better traceability of design requirements and system decomposition choices [12]. Both the amount of information embedded in the designed product and the amount of information gathered in the design process are growing. The size of the problems has grown beyond the limits of one person's comprehension [1, 3, 4]. In our research it was estimated by architects that maybe 0.5% of all employees have a total system overview. Not understanding the overall system is a source of uncertainty and errors in the design.

System Decomposition
System architects decompose the system into smaller sub systems. Where two sub systems meet, an interface should be defined. Creating an ideal interface description for one sub system often conflicts with the ideal interface for another sub system. The systems are highly customizable and therefore configurations exist as a sub set of all available sub systems. It was observed that navigating through the product configuration space is very difficult without models and tools that support the architects.

2.2 Bird's-eye view
To increase design traceability we need models of complex systems that connect high levels of abstraction to low levels of abstraction. Most models used now do not span different levels of abstraction [13]. For example, a mechanical 3D CAD model concerns only the geometry of components, and does not link to functional information. The link between these aspects is missing: they are not considered in parallel and connected, but sequentially, and are only linked in the minds of the designers. When, for example, changes are executed in the workflow models of the system, the designer has to determine


manually where he has to change the requirements and function models.

To increase system understanding, a map (shared model) is needed that communicates the system composition and outline between the architects. An MR system typically has details that reach O(10⁷). Other products have similar properties: an aircraft, for example, has unique components of this order, and complex mechatronic machines (e.g. mobile phones, medical systems, printers, hybrid cars) are controlled by software with a number of lines of code in the same order of magnitude. At the top level there are abstract functional descriptions; at the bottom, component details of that order are needed. At this level, descriptions are very much mono-disciplinary and their complexity is high but manageable if engineers are provided with dedicated tools. However, the middle layer is systems level and multi-disciplinary, and the current industrial situation lacks a good way to deal with this level.

Function Modelling
What is needed is a model that connects different design aspects (e.g. models of system usage, requirements, functions, interfaces and components) at different levels of abstraction. In this paper a method is proposed based on the function modelling technique of FBS modelling [7]. An FBS approach is considered because it already integrates design concepts at different levels. The FBS model creates system overview from the early, abstract levels of functions, through the concepts of objective system behaviour, all the way to low level detailed state descriptions.

2.3 Function Behaviour State Model
An FBS model (see Figure 2) can be divided into three connected levels: function, behaviour and state. In the function layer an F hierarchy of the system is maintained. For a complete reference on all the FBS nomenclature and definitions the reader is referred to [7, 14, 15]. Some useful definitions are reproduced here. A function is defined as:

Function = 'a description of behaviour recognized by a human through abstraction in order to utilize it'

In another form, a function can be defined as 'to do something'. Functions are related to behaviour(s) by means of the many-to-many F-B relationship.

Behaviour = 'sequential one or more changes of state over time'

The behaviour, or state transitions, of the system are caused by Physical Phenomena (PP). And state is defined as:

State = a triplet (E, A, R), where:
E: identifiers of entities included in this state
A: attributes of the entities
R: relations in this state

Relations can occur among entities, between entities and attributes, and among attributes. Figure 3 gives an example of an FBS model. The subjective function description 'to cool down' is connected to the objective entities water and bottle through a physical phenomenon named fluid flow. Figure 3 also illustrates the different possible relations.

Extend FBS
Although FBS is a good starting point for creating a multi-level-of-abstraction system overview model, some extensions to the FBS model were proposed [5] to create complex system transparency and adequately address the problems mentioned in section 2.1 of this paper. Three


areas were identified where extensions to the current FBS paradigm are needed:
1. Missing modelling entities
2. Ontology problem
3. System decomposition support

Figure 2. FBS model scheme.

Figure 3. Illustrative FBS example for a single function: F 'to cool down' is linked via B 'pour water' and PP 'fluid flow' to the entities E: water and E: bottle, with relations R: in, R: has-attribute and R: D = W/V over the attributes A: weight 1 kg, A: volume 1 dm³ and A: density 1 kg/dm³.
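To make the triplet definition concrete, a small sketch encoding the Figure 3 example as data; this is our Python illustration, not the authors' formalism:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class State:                  # State = a triplet (E, A, R)
    entities: list            # E: entities in this state (attributes A live on them)
    relations: list           # R: relations among entities/attributes

@dataclass
class Behaviour:
    name: str
    physical_phenomenon: str  # PP causing the state transitions
    states: list              # sequential changes of state over time

@dataclass
class Function:
    description: str          # subjective 'to do something' description
    behaviours: list          # many-to-many F-B relationship

# Figure 3 example: 'to cool down' realised by pouring water into a bottle
water = Entity("water", {"weight": "1 kg", "volume": "1 dm3", "density": "1 kg/dm3"})
bottle = Entity("bottle")
state = State([water, bottle],
              [("water", "in", "bottle"), ("density", "D = W/V", "weight, volume")])
pour = Behaviour("pour water", "fluid flow", [state])
cool = Function("to cool down", [pour])
print(cool.description, "->", cool.behaviours[0].physical_phenomenon)
```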

Missing modelling entities
High level functions often do not map one-to-one onto user needs and marketing requirements. Other design models, e.g. requirements and usage scenarios, are primarily used to clarify the design task and serve as input for the creation process of a function decomposition of the system. Boersting et al. [10] recognize that many methods in engineering design rely on functional models. The main problem with many of these methods is that they can be very sensitive to the quality of their input information, and that 'overlooked' relations can bias the results obtained by analysing the models. Therefore, considering as many functional relations as possible is crucial for building functional models. This missing link is recognized by [10], who propose a method to link the function view and the requirements view to improve the ability to predict and manage design change.

Ontology Problem
The defined FBS ontology provides the frame in which the system is captured. If the frame is too narrow it might not allow certain functions to be included in the model. When the frame is too broad it allows all functions to be included, but it will be difficult to create a manageable design object data model, since all objects are allowed to be so different. An example of a function that proved difficult in FBS was 'to facilitate cable management' in a system.

System Decomposition
FBS currently does not have a facility to consider systems-of-systems decompositions. The design process almost always means adding or changing functionality in an existing system. This means that the models used to support the design process should facilitate adding new or changing model entities in an existing system model. This paper addresses the first of the three discussed problems and tries to identify views to extend the FBS model.

2.4 Observations of FBS applied in industry
In the scope of this research a design project for a new functionality was observed, and an FBS model was created and evaluated. The new functionality is developed to answer a new clinical application of the MR. The project involves people from the clinical science, marketing and engineering departments of Philips Healthcare MR. The developed system will add hardware and software to the existing main MR system in order to facilitate this new clinical application. Based on design documentation, interviews and discussions, an FBS model was created by the researchers and presented to the architects. The aim of the FBS model was to test whether it would support the architects with a clear overview of why the system is developed (F), what it does (B) and what it consists of (S). Communication support among architects, between architects and engineers, and among other members of the design team is needed to keep overview and understanding of the system. Observations regarding the FBS model were:
• The first impression of the FBS model is that it is too complex. It has too many nodes and edges to give an instant overview. It needs studying before the model is understood.
• Causal or ordered relations between functions are absent.
• The FBS model bridges high level system functions to detail level state, or components.

3 APPROACH: INTEGRATING VIEWS
As a result of discussions with system architects, advantages and disadvantages of the FBS model application were found. FBS was found to be useful in gathering information and containing it in a human-understandable manner; the graphical overview it provides can be improved. Regarding the functional layer, it was suggested by the architects to combine the F view with their workflow view, because their workflow view seemed close to the functional view. The workflow views have a sequential form. Their workflow view corresponds to a user scenario described in [16], where a scenario is defined as: Scenarios = 'explicit descriptions of the hypothetical use of a product'. This definition fits the use of the term workflow in the MR development organization: it is an envisioning of the use of the product. The discussed interface requirements document serves as a starting point for the divide-and-conquer of the design problem in the phases following conceptual design. Based on the interface specifications, engineers who develop one part of the system can communicate with engineers developing other parts. Having a clear definition of the interfaces helps reduce the number of design errors or forgotten relations in the design. Constructing the interface document in a flexible manner is experienced as troublesome; in this process the architects rely on their system understanding and experience.

Figure 4. By combining user workflow, FBS and interface models, part of the pyramid (number of details in the system description, proposed situation) is covered.

Conventional System Views
So-called 'workflow' and 'interface' views were used during the design process of the new system. The workflow view described how the developed system should be used and could be used. Multiple possible workflows are identified at the start of the project. Interviews with clinical experts, a marketing expert and engineers all mentioned the workflow view as an important tool for communication. To assess the impact of the new system on the existing system, the architects construct an interface requirements document. The document consists of a graphical node-edge view of the systems: named edges in the graph represent interfaces between parts of the system, and a worksheet is attached where the requirements on those interfaces can be looked up. The document has one zoom level. A choice was made to describe it at a certain modular level, with interfaces between these modules described, but the document does not contain interface requirements for components inside the modules.

Table 1. Advantages and disadvantages of the proposed views.

Model: FBS
Advantage: Bridges high level to low level design concepts.
Disadvantages: Time needed to get to know the model; not intuitive.

Model: Workflow
Advantage: Intuitive for people from multiple disciplines; models the use.
Disadvantages: Not connected directly to system properties.

Model: Interface
Advantage: Guideline for embodiment and detail design phases; supports communication.
Disadvantages: Static in nature; inflexible to changing design choices.


A static level of abstraction of the interfaces is problematic: a slight change in the system decomposition changes the interfaces, but a static document as it is used now does not follow these changes. Such changes occur frequently in the early phase of the design process.

Proposed Method
It is proposed to combine the advantages of the FBS model with those of the workflow and interface views. By connecting the models, the disadvantages of the individual models are reduced. See Table 1 for a short summary of the advantages and disadvantages of the different models.

Workflow-F View Connection
A workflow starts with a storytelling of the use of a product from the point of view of a certain stakeholder. Multiple stakeholders may be considered and multiple scenarios per stakeholder are possible. A nice graphical representation of a workflow was found in a common flow diagram: a directed graph with nodes and edges illustrates the workflow of the system. A connection between the workflow nodes and the function nodes is determined manually by using a dependency matrix with Boolean indicators for a connection, see Figure 5. The function view starts with the function tree of the existing MR system, and the new functions are added onto that. No formal workflow descriptions are used, and the workflow view has no formalized ontology at this moment; a formalized workflow description could in the future facilitate (semi-)automatic reasoning on the connection between workflow entries and function descriptions. Both views are developed by the design team in parallel. The workflow view facilitates the discussions among architects and between architects, clinical experts and marketing; therefore fewer items are overlooked in the function view.

Figure 5. Connecting the workflow and function models by a dependency matrix (workflow nodes against function nodes, with Boolean entries marking connections).
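A minimal sketch of such a Boolean dependency matrix in Python; the workflow steps and functions are taken from the patient support example in section 4, while the particular couplings are illustrative assumptions:

```python
import numpy as np

# Workflow nodes and function nodes (from the section 4 example)
workflow = ["lay patient on bed", "lift table", "position left-right", "position front-rear"]
functions = ["to comfort patient", "to support weight", "to move in X", "to move in Y", "to move in Z"]

# Boolean dependency matrix: M[i, j] = True if workflow step i relies on function j
M = np.zeros((len(workflow), len(functions)), dtype=bool)
M[0, [0, 1]] = True   # laying the patient: comfort and support weight
M[1, 4] = True        # lifting the table: move in Z
M[2, 3] = True        # position left-right: move in Y (illustrative mapping)
M[3, 2] = True        # position front-rear: move in X (illustrative mapping)

def functions_for(step):
    """Functions a workflow step depends on, read off the Boolean matrix."""
    i = workflow.index(step)
    return [functions[j] for j in np.nonzero(M[i])[0]]

print(functions_for("lift table"))  # ['to move in Z']
```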

FBS view
When the first function and workflow views are constructed, it is time to update the B and S models. Changes and additions to the function model are then translated into changes of the behavioural and state models, as discussed in section 2.3. Figure 6 shows schematically how to update the FBS model.

Figure 6. Updating the FBS model (function, behaviour and state models).

Interface View
Interfaces exist at different levels in a complex mechatronic system like the MR. Depending on the activities, design phase and interest of the architects, interface requirements descriptions at different levels are needed. A high-level-of-abstraction interface requirement could, for example, be a description of the patient support table to the MR magnet: the patient table and magnet are complex mechatronic systems themselves, but they have an interface. The engineering interfaces are mechanical, software and physical (e.g. contact surfaces, data streams and the strong magnetic field). A low-level-of-detail interface could, for example, be a shaft-to-bearing interface inside the patient support table. Therefore, it is desirable to create a 'zoomable' interface requirements description depending on the activities of the architect. Figure 7 illustrates the proposed method for connecting the FBS model to the interface requirements model. The diamond-shaped nodes in the state model represent FBS entity relations (see section 2.3). Entity relations typically relate attributes of different entities to each other; a relation could for example be named 'in', as in the example mentioned in section 2.3. In the knowledge base connected to the FBS model [14] it is defined that the 'in' relation allows force to be transmitted, and that the magnitude of that force depends on the weight of the water. The interface between the water and the bottle can now be described using the relations between entities and their attributes.

Figure 7. Connection between FBS relations and the interface model, with the interface model zoomable to module level (Module 1, Module 2) and component level.

When higher level interfaces need to be described, a grouping of entities occurs. How to define these modules is outside the scope of this paper and the reader is referred to [17-19]; system decomposition and known module boundaries are assumed for now, as displayed in Figure 7 with the boxes labelled 'Module 1' and 'Module 2'. Considering both modules, there is one edge that crosses the boundaries of those boxes; this is apparently the only interface between the two modules. The description of

this interface would be constructed in a similar manner to that of a component-component interface. By keeping the connections between the different models as live links, a view consistent with the current design situation is realized. Responsibility for keeping the models up to date has to be assigned to specific people in the design team.
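As a sketch of the 'zoom' operation described above (our Python illustration; the module and entity names come from the section 4 example, while the adjacency itself is an illustrative approximation): a component-level dependency matrix is collapsed to module level, and any surviving cross-boundary entry is an inter-module interface.

```python
import numpy as np

# Component-level dependency matrix (symmetric; a 1 marks a relation such as
# 'on', 'fix' or 'in' between two entities). The edge list is illustrative.
components = ["cushion", "table-top", "x-rail", "y-rail", "frame", "leg1", "leg2", "bearing", "floor"]
modules = {"cushion": "Bed", "table-top": "Bed", "x-rail": "X-Y", "y-rail": "X-Y",
           "frame": "Scissor", "leg1": "Scissor", "leg2": "Scissor",
           "bearing": "Scissor", "floor": "Floor"}

D = np.zeros((9, 9), dtype=int)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 6), (5, 6), (5, 7), (6, 8), (5, 8)]
for i, j in edges:
    D[i, j] = D[j, i] = 1

def module_interfaces(D, components, modules):
    """Collapse a component dependency matrix to module level: an interface
    exists between two modules if any cross-boundary component edge exists."""
    names = sorted(set(modules.values()))
    idx = {m: k for k, m in enumerate(names)}
    M = np.zeros((len(names), len(names)), dtype=int)
    for i, j in zip(*np.nonzero(D)):
        mi, mj = idx[modules[components[i]]], idx[modules[components[j]]]
        if mi != mj:
            M[mi, mj] = 1
    return names, M

names, M = module_interfaces(D, components, modules)
print(names)
print(M)
```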

4 EXAMPLE: PATIENT SUPPORT TABLE
The patient support sub system of the MR system is presented here (Figure 8) as an example of the method described above. The example does not go into full detail because of the confidential nature of the data; besides that, the diagrams would become too big to present in A4 format.

Figure 8. Example of the Workflow – FBS – Interfaces connection for the patient support table. Workflow layer: a 'Position Patient' workflow (lay patient on bed, lift table, position left-right, position front-rear) and an 'Emergency' workflow (pull table out). Function layer: to position patient, decomposed into to carry patient (to comfort patient, to support weight) and to move patient (to move in X, Y and Z directions). Behaviour layer: adapting bed, apply force, horizontal motion and vertical motion. State layer: modules Bed (cushion on table-top), X-Y (X-rail and Y-rail) and Scissor (frame, legs 1 and 2, bearing), connected by 'on', 'fixed' and 'in' relations down to the floor. Interface layer: a dependency matrix over these state entities and modules with 'on', 'fix' and 'in' entries.

As the name suggests the patient support table carries the patient during a MR exam. Depending on the type of clinical exam the patient is placed on the table in the down position. Once the patient is on the bed, the table rises to the right level and the table slides into the bore of the magnet. Depending on the body part being examined the table slides in a certain distance. The main function of the patient support table is to position the patient at a predetermined location. It realizes this function by a vertical (Z-axis) scissor-lift mechanism combined with a horizontal (X-axis) slider mechanism. Al axis are actuated and controlled by the MR host computer or a manual operator. The Patient support table serves as a good example because it was recently upgraded with a new functionality; A second axis (Y-axis) in the horizontal motion. The new functionality was introduced for a new type of MR systems called ‘Open MR’. An open MR no longer uses one horizontal cylindrical magnet, but it uses two vertical oriented magnets. The patient is positioned in between the two magnets and experiences a more open environment. In the example we have indicated some nodes of the graphs with a different colour. These are the nodes that have been changed/added after the introduction of the new functionality. The interface model is shown here as a dependency matrix. This matrix shows a big resemblance to the Design Structure Matrix (DSM) [18]. A DSM could be used to determine the modules of the system, but this process is outside the scope of this paper. The entries in the interface model dependency matrix have properties determined according to the nature of the attributes and entities connected to the relations. 5 CONCLUSION AND FUTURE WORK This paper has proposed an approach of integrating three different system views to create a better system overview for the system architects. It was found that using the FBS model as a basis and attaching the workflow and interface models an integrated system view can be created that supports the architects in reasoning from the use model all the way to the engineering interfaces. By creating this overview the complexity of the MR system becomes more transparent and manageable. By making the complexity manageable the system complexity is reduced. Future work of this research aims at; •

• Finding more models as 'missing modelling entities', the first candidate being the requirements model.



• Solving the 'Ontology Problem': describing a formal ontology that facilitates the proposed method and allows for semi-automated model reasoning. A tool will be developed to support the model creation process, including formalising the workflow and interface views.



• System decomposition support: in this paper a decomposition was assumed, but in future work this process should be supported.
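To make the dependency-matrix idea mentioned above concrete, the following is a minimal sketch in C of how a few of the Figure 8 entities and relation types could be held as a dependency (interface) matrix. It illustrates the data structure only; the entity names and relation codes are assumptions for illustration and do not reproduce the authors' tooling.

#include <stdio.h>

/* Hypothetical entities from Figure 8; names assumed for illustration. */
enum { CUSHION, TABLE_TOP, X_RAIL, Y_RAIL, FRAME, LEG1, LEG2, BEARING, FLOOR, N_ENT };

/* Relation types used in the example: none, On, Fixed, In. */
typedef enum { REL_NONE, REL_ON, REL_FIX, REL_IN } rel_t;

static const char *rel_name[] = { "-", "On", "Fix", "In" };

int main(void)
{
    rel_t dsm[N_ENT][N_ENT] = { { REL_NONE } };  /* dependency (interface) matrix */

    /* A few interface relations taken from the patient-table example. */
    dsm[CUSHION][TABLE_TOP] = REL_ON;
    dsm[TABLE_TOP][X_RAIL]  = REL_FIX;
    dsm[X_RAIL][Y_RAIL]     = REL_ON;
    dsm[LEG1][BEARING]      = REL_IN;
    dsm[LEG2][FRAME]        = REL_FIX;
    dsm[FRAME][FLOOR]       = REL_ON;

    /* Print the upper triangle: one line per existing interface. */
    for (int i = 0; i < N_ENT; i++)
        for (int j = i + 1; j < N_ENT; j++)
            if (dsm[i][j] != REL_NONE)
                printf("entity %d -- %s --> entity %d\n", i, rel_name[dsm[i][j]], j);
    return 0;
}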

6 ACKNOWLEDGMENTS
This work has been carried out as part of the DARWIN project at Philips Healthcare under the responsibility of the Embedded Systems Institute. The project is partially supported by the Dutch Ministry of Economic Affairs under the BSIK program.


7 REFERENCES
[1] Szykman, S., et al., A web-based system for design artifact modeling. Design Studies, 2000. 21(2): p. 145-165.
[2] d'Amelio, V. and T. Tomiyama, Predicting the unpredictable problems in mechatronic design, in International Conference on Engineering Design, ICED'07. 2007, Ecole Centrale Paris: Paris.
[3] Tomiyama, T. and B.R. Meijer, Directions of Next Generation Product Development. Advances in Design, 2005: p. 27-35.
[4] Tomiyama, T., et al., Complexity of Multi-Disciplinary Design. CIRP Annals - Manufacturing Technology, 2007. 56(1): p. 185-188.
[5] van Beek, T. and T. Tomiyama, Connecting Views in Mechatronic Systems Design, a Function Modeling Approach, in MESA08. 2008, IEEE and ASME: Beijing. Accepted for conference.
[6] van Beek, T. and T. Tomiyama, Requirements for Complex Systems Modeling, in CIRP Design Conference 2008. 2008, Springer: Enschede, The Netherlands.
[7] Umeda, Y. and T. Tomiyama, FBS Modeling: Modeling scheme of function for conceptual design, in Proc. of the 9th Int. Workshop on Qualitative Reasoning. 1995. p. 11-19.
[8] Lindemann, U. and M. Maurer, Facing Multi-Domain Complexity in Product Development, in The Future of Product Development, Proceedings of the 17th CIRP Design Conference. 2007. Berlin: Springer-Verlag.
[9] Bonnema, G.M., FunKey Architecting, An Integrated Approach to System Architecting Using Functions, Key Drivers and System Budgets. 2008, University of Twente: Enschede, The Netherlands.
[10] Boersting, P., et al., The Relationship between Functions and Requirements for an Improved Detection of Component Linkages, in DESIGN 2008. 2008: Dubrovnik, Croatia. p. 309-316.
[11] Muller, G., CAFCR: A Multi-view Method for Embedded Systems Architecting. 2004, Delft University of Technology: Delft, The Netherlands.
[12] Maletz, M., et al., A Holistic Approach for Integrated Requirements Modeling in the Product Development Process, in The Future of Product Development, Proceedings of the 17th CIRP Design Conference. 2007. Berlin: Springer-Verlag.
[13] Bonnema, G.M., Use of models in conceptual design. Journal of Engineering Design, 2006. 17(6): p. 549-562.
[14] Yoshioka, M., et al., Physical concept ontology for the knowledge intensive engineering framework. Advanced Engineering Informatics, 2004. 18(2): p. 95-113.
[15] Umeda, Y., et al., Function, Behaviour and Structure, in Application of Artificial Intelligence in Engineering V, Vol. 1: Design, J.S. Gero (ed.). 1990, Computational Mechanics Publications: Boston.
[16] Anggreeni, I. and M. van der Voort, Classifying Scenarios in a Product Design Process: a study towards semi-automated scenario generation, in CIRP Design Conference 2008. 2008, Springer: Enschede, The Netherlands.
[17] Albers, A., et al., A Modularization Method in the Early Phase of Product Development, in DESIGN 2008. 2008: Dubrovnik, Croatia.

[18] Browning, T.R., Applying the Design Structure Matrix to System Decomposition and Integration Problems: A Review and New Directions. IEEE Transactions on Engineering Management, 2001. 48(3): p. 292-306.

[19] Stone, R.B., K.L. Wood, and R.H. Crawford, A Heuristic Method for Identifying Modules for Product Architectures. Design Studies, 2000. 21(1): p. 3-31.


Grid Services for Multi-objective Optimisation G. Goteng, A. Tiwari, R. Roy Manufacturing Department, Cranfield University, Cranfield, Building 50, Bedford, MK43 0AL, UK {g.l.goteng, a.tiwari, r.roy}@cranfield.ac.uk

Abstract
The emerging grid technology is defined as an infrastructure for secure and coordinated large-scale resource sharing. In this paper, we describe the architecture and grid services of DECGrid. DECGrid enables distributed design experts to collaborate and share resources during design optimisation. Mathematical models are built by experts using these services. These models are then linked directly to an NSGA-II optimisation algorithm service, which allows design experts to enter design parameters of their choice. A real-life case study, the welded beam problem, was used to validate the prototype. The results obtained showed a wider spread in the solution space compared to results in the literature.

Keywords: Grid services, Multi-objective optimisation, Design optimisation, Mathematical model

1 INTRODUCTION
Competition and the desire to retain existing customers as well as attract new ones make the optimisation of product design crucial for companies that want to remain relevant in today's global business. The quality of a product is a reflection of the skills put into the design phase by bringing together the different talents of design experts. Collaborative design goes beyond producing products that meet customers' requirements in terms of performance, quality and cost; products should also have novel and good-looking external features [1]. The same authors observed that, for this to happen, engineers and designers need access to tools that support design methods. DECGrid (Decision Engineering Centre Grid) aims at providing these tools and resources that support multi-objective design optimisation (MDO) as grid services for distributed designers to collaborate and work together. DECGrid consists of a mathematical model building service, a design parameter input service and an NSGA-II optimisation service. These services are built within the Globus Toolkit middleware and run in a Linux environment. Grid technology has the capability to allow designers to recreate data, integrate distributed resources and reuse them [2]. DECGrid uses this feature to enable designers to manage workflows of design processes as well as to reuse or recreate them.

2 RELATED WORK
Since the Global Grid Forum (GGF) met in 1993 to demonstrate the computational synergy obtained by linking 17 supercomputing centres, many grid projects have emerged to solve computationally and data intensive jobs. These projects range from middleware [3] and schedulers [4] to problem solving environments (PSEs) [5]. This paper concerns a PSE for solving MDO problems. Similar projects that have developed PSEs for MDO applications are Geodise (Grid-Enabled Optimisation Design Search Environment), DAME (Distributed Aircraft Maintenance Environment) and FIPER (Federated Intelligent Product Environment). Geodise provides computational resources as services for optimisation engineers to access and use through a user interface [6]. Geodise consists of toolboxes for



computation, optimisation, visualisation and a knowledge repository, which are presented to users through a grid portal. A wizard-like process guides design engineers in carrying out the optimisation. The DAME project develops grid-based fault diagnosis and prognosis for aircraft maintenance on the fly [7]. This is done by capturing data and information on various parts of the aircraft and providing this information to maintenance engineers as grid services when there is a deviation from normal behaviour, helping engineers take prompt decisions. The aim of FIPER is to provide an intelligent system that leverages emerging web technologies, in which engineering tools such as CAD (Computer Aided Design), CAE (Computer Aided Engineering), PDM (Product Data Management) and optimisation algorithms act as distributed service providers as well as service requestors, communicating through intelligent context models for concurrent engineering design optimisation [8]. The difference between these PSEs and DECGrid is that, while these three provide service specifications at the programmer level, DECGrid provides a specification document for both programmers and end users of the services. This ensures compliance with service level agreements between service providers and requestors.

3 FRAMEWORK AND ARCHITECTURE OF DECGRID
DECGrid runs on a Linux platform built on the Globus middleware. It consists of 8 nodes, which perform different complementary design and optimisation functions, and uses Condor for scheduling. MDO, being a multidisciplinary field, calls for a service-oriented grid platform that provides resources to users as services for easy data and information sharing. For example, FIPER is used in the preliminary design optimisation of complex engineering products such as the gas turbine, which requires interaction between different professionals in aerodynamics, heat transfer, structural analysis, finite element analysis and computational fluid dynamics, and must satisfy the scalability required for these interactions [8]. DECGrid is proposed to extend this sort of scalability, so that optimisation resources for MDO applications can be described, published, subscribed to and used, and so that mathematical models can be built. This

allows distributed users to share limited computational resources for optimisation.

3.1 Service Framework
The framework is the process of service specification that provides design engineers with an environment in which to specify design requirements. It takes the form of a document that states the functionalities of the design resources and the quality of service; this document represents the bond among the collaborating parties and ensures the delivery and review of services. Figure 1 is the class diagram of the framework. The initial interaction is between the service provider and the requestor, based on a service level agreement [9]. The service provider then uses the functionality of the Globus Toolkit to register design tools and resources in WebMDS (Web Monitoring and Discovery Service). All resources are aggregated in the design Aggregator Source. A search strategy interface is provided for optimisation engineers to perform searches using deterministic or stochastic search algorithms. After subscription, requestors execute the DesignService, which has interfaces for the step-by-step process of building a mathematical model for design optimisation in a particular field. The first interface presents the main domain, allowing the design engineer to select the domain (FEA, CFD, etc.) in which the mathematical model is needed. The criteria, design parameters and constraints are then obtained to generate the mathematical model. The design Collaboratory allows distributed design engineers to securely share data, make queries and collaborate.
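A minimal sketch of the kind of record such a service-specification document might correspond to is given below; the field names and values are illustrative assumptions, not DECGrid's actual schema.

#include <stdio.h>

/* Illustrative service-specification record; fields are assumed,
   not taken from the DECGrid implementation. */
struct service_spec {
    const char *provider;        /* who publishes the design resource */
    const char *requestor;       /* who consumes it                   */
    const char *functionality;   /* e.g. "explicit model building"    */
    const char *domain;          /* e.g. "FEA", "CFD"                 */
    int         max_runtime_s;   /* quality-of-service bound          */
};

int main(void)
{
    struct service_spec s = {
        "node3.decgrid", "design-engineer-1",
        "explicit model building", "FEA", 3600
    };
    printf("%s provides '%s' (%s) to %s, QoS <= %d s\n",
           s.provider, s.functionality, s.domain, s.requestor, s.max_runtime_s);
    return 0;
}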

Figure 1: Service specification framework

3.2 DECGrid Architecture
The framework is implemented using 8 computers that serve as grid nodes. CentOS 4.0 Linux is used as the operating system, Globus Toolkit 4.0.4 (GT4) as the middleware and Condor 7.0.0 as the resource scheduler. An Apache web server and WSRF (Web Services Resource Framework) allow the application to run both as grid services and as web services. Figure 2 describes the DECGrid architecture. The service provider publishes design and optimisation resources, and requestors (design engineers) access the resources and collaborate. The architecture ensures that only authorised design engineers have access to optimisation resources. This is done by entering users' grid certificate details in the map-file of grid-security, as well as by allowing host machines access to the grid resources on other nodes. Users have different levels of rights to ensure the integrity of design data. The certificate authority ensures that certificates expire every 24 hours and requires users to renew their certificates after expiration. The DECGrid architecture makes use of grid service data elements (SDEs) to present the properties of optimisation resources as MDO services. SDEs can be static or dynamic [10]. The SDEs are described in the XML schema that implements the different MDO services, whose lifetime management is done by uniquely identifying each service instance through an element called the grid service handle (GSH), supported and held by the grid service record (GSR). This enables the MDO services to be invoked at the same time by different collaborating users without much interference, as each service instance has its state and lifetime managed separately through the GSH and GSR. The important grid service interfaces (portTypes) are GridService, NotificationSource, NotificationSink, Registry and Factory. The GridService portType uses FindServiceData to query optimisation resources and is responsible for service discovery; NotificationSource notifies service requestors of service instances; Registry registers or deregisters services; and Factory creates service instances using the CreateService operation.

Figure 2: DECGrid Architecture

4 CASE STUDY: THE WELDED BEAM PROBLEM
To demonstrate the workings of DECGrid, a real-life case study, the welded beam problem, is used. This case study is chosen because it requires the optimisation of at least two objectives and has variables that are sensitive to different parameters, especially the lower and upper bounds. The objectives can be optimised on different nodes of the grid by running optimisation services on those nodes, as well as by making some nodes build the mathematical model while others run the parameter input service at the same time. This demonstrates how different grid services can handle different optimisation tasks concurrently for efficient design optimisation of products that have multiple objectives, constraints and variables. The problem describes how a beam needs to be welded onto another beam and must carry a certain load. The objectives of the design are to minimise the cost of fabrication and to minimise the end deflection. Here, the overhang portion of the beam and the applied force are specified, making the cross-sectional dimensions of the beam (b, t) and the weld dimensions (h, l) the variables. The problem has four constraints. The first constraint is that the shear stress at the developed support location of the beam must be smaller than the allowable shear strength of the material. The second is that the normal stress at the support location must be smaller than the allowable yield strength of the material. The third ensures that the thickness of the beam is not smaller than the weld thickness from a practical standpoint, and the fourth ensures that the allowable buckling load of the beam is more than the applied load.

4.1 Mathematical model building service
A mathematical model building service provides the interactive session for design engineers to build models collaboratively. DECGrid has two graphical user interfaces (GUIs) for the mathematical model building service: an implicit interface and an explicit interface. The explicit interface has four input fields that enable experts to build generic explicit mathematical models, while the implicit interface provides a field where experts can upload mathematical models from a file. This case study uses the explicit interface, so this paper concentrates on the explicit mathematical model building service. Figure 3 shows the explicit mathematical model service builder running. The interface is run on two nodes of the grid, allowing one expert on each node to build part of the welded beam model. The first expert sits at the first node and enters '6000' in the variable field and ';' in the right operators field (because the last variable in a relation must end with a semi-colon in the C programming language); the next step is to enter 'F=' (force applied) in the output field, which, when submitted and displayed, will show F=6000; as created. The expert then submits this, which gets saved in the database for reuse.

This process is repeated by this expert for the first part of the model, which is shown below.

F=6000; //force applied
b= xreal[0]; //beam width
t= xreal[1]; //beam length
h= xreal[2]; //weld width
l= xreal[3]; //weld length
l1=pow(l,2);
ht1=pow(h+t,2);
dt= 2.1952/(pow(t,3)*b); //objective
r1= F/pow(2*h*l,0.5);

NSGA-II uses 'xreal[]' for the real variables, starting with index 0 for up to n variables, and 'constr[]' and 'obj[]' for the constraints and objectives respectively. The second expert uses a similar interface on the second node to build the remaining part of the model. The second part of the model is shown below.

c1=1.10471*pow(h,2)*l;
c2=0.04811*t*b;
c3=14.0+l;
c4=pow(r1,2);
c5=0.25*(l1+ht1);
c6=pow(c5,0.5);
c7=F*(14.0+(0.5*l));
c8=(l1/12)+(0.25*ht1);
c9=0.707*h*l;
c10=2*c9*c8;
c11=c7*c6;
r2=c11/c10;
c12=pow(r2,2);
c13=l*r1*r2;
dt= 504000/(pow(t,2)*b); // objective 2
r=pow((c4+c12+(c13/c6)),0.5);
PC= 64746.022*(1-(0.0282346*t))*(t*pow(b,3));
obj[0]=c1+(c2*c3); //objective 1
obj[1]= dt; //objective 2
constr[0]= 13600-r; //constraint 1
constr[1]= 30000-dt; //constraint 2
constr[2]= b-h; //constraint 3
constr[3]= PC-F; //constraint 4

It is assumed that these two experts are specialists in the different mathematical components of the welded beam problem. This collaboration may trigger innovative and creative thinking in the design process [11]. The two components are merged, and the complete model can be accessed by authorised grid users as an optimisation grid resource. Companies need to engage in such a collaborative design approach to gain competitive advantage, as customers are demanding better and customised products [12].
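For context, the two fragments above would be merged into the evaluation routine that NSGA-II calls once per candidate solution. The sketch below shows the shape of the resulting 'problemdef.c' entry with declarations added; the signature follows, to our knowledge, the convention of Deb's publicly available C implementation of NSGA-II, and should be treated as an assumption rather than the DECGrid source.

#include <math.h>

/* Sketch only: the experts' fragments merged into one evaluation function. */
void test_problem (double *xreal, double *xbin, int **gene,
                   double *obj, double *constr)
{
    double F, b, t, h, l, l1, ht1, dt, r1;                        /* expert 1 */
    double c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13,
           r2, r, PC;                                             /* expert 2 */
    (void)xbin; (void)gene;          /* no binary variables in this model */

    F = 6000;                        /* force applied */
    b = xreal[0];  t = xreal[1];     /* beam dimensions */
    h = xreal[2];  l = xreal[3];     /* weld dimensions */
    l1  = pow(l, 2);
    ht1 = pow(h + t, 2);
    dt  = 2.1952/(pow(t,3)*b);       /* overwritten below, as in the fragments */
    r1  = F/pow(2*h*l, 0.5);

    c1 = 1.10471*pow(h,2)*l;  c2 = 0.04811*t*b;    c3 = 14.0 + l;
    c4 = pow(r1,2);           c5 = 0.25*(l1+ht1);  c6 = pow(c5,0.5);
    c7 = F*(14.0+(0.5*l));    c8 = (l1/12)+(0.25*ht1);
    c9 = 0.707*h*l;           c10 = 2*c9*c8;       c11 = c7*c6;
    r2 = c11/c10;             c12 = pow(r2,2);     c13 = l*r1*r2;
    dt = 504000/(pow(t,2)*b);
    r  = pow((c4+c12+(c13/c6)), 0.5);
    PC = 64746.022*(1-(0.0282346*t))*(t*pow(b,3));

    obj[0]    = c1+(c2*c3);   /* objective 1: fabrication cost */
    obj[1]    = dt;           /* objective 2: end deflection   */
    constr[0] = 13600-r;      /* constraint 1: shear stress    */
    constr[1] = 30000-dt;     /* constraint 2: normal stress   */
    constr[2] = b-h;          /* constraint 3: geometry        */
    constr[3] = PC-F;         /* constraint 4: buckling load   */
}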

Figure 3: Explicit mathematical model building service interface

4.2 NSGA-II optimisation service
The mathematical model is now available as a service for other experts to use. The system has the capability to link the mathematical model to NSGA-II through the 'problemdef.c' file (the file that contains mathematical models ready for optimisation). The third expert sits at the third node and clicks to perform this linking automatically. The expert then uses the NSGA-II optimisation facility to start the NSGA-II optimisation service. The service consists of three interfaces. The first interface allows the expert to enter five parameters, namely the number of generations, the population size, the number of objectives, the number of constraints and the number of real variables. When these five parameters are entered, they get stored in a text file as well as in the PostgreSQL database, from which they can be retrieved and reused. The last value (the number of real variables) is used to generate a sub-interface, under the first interface, for the lower and upper bounds of the real variables. This process then takes the expert to the second service interface, which enables the expert to enter the probability of crossover of real variables, the probability of mutation of real variables, the distribution index for crossover, the distribution index for mutation, and the number of binary variables. A further sub-interface is generated for the lower and upper bounds of the binary variables if the last value is more than 0. The last interface allows the expert to enter the choice of display (0 or 1), the objective index for the x-axis and the objective index for the y-axis. The optimisation can now be run and the results observed in graphical or text format. The interface for the NSGA-II parameter service is shown in Figure 4. The mathematical model building service, which brought together experts from different disciplines, and the optimisation parameter service, which allows interdependent sub-services to interact, demonstrate the efficiency of distributed grid services for design optimisation.

Parameter   Description
500         number of generations
100         population size
2           number of objectives
4           number of constraints
4           number of real variables
-5          lower bound for first variable
10          upper bound for first variable
0           lower bound for second variable
15          upper bound for second variable
-3          lower bound for third variable
3           upper bound for third variable
0           lower bound for fourth variable
5           upper bound for fourth variable
0.8         probability of crossover of real variable
0.05        probability of mutation of real variable
10          distribution index for real variable crossover
50          distribution index for real variable polynomial mutation
0           number of binary variables
1           enter 1 to display gnuplot or 0 to display only results
1           x-axis objective index
2           y-axis objective index

Table 1: Welded beam input parameters
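Section 4.2 notes that the entered parameters are stored in a text file for reuse. The following is a minimal sketch of serialising Table 1's values; the file name and field ordering are illustrative assumptions, not the DECGrid format.

#include <stdio.h>

/* Sketch: persist the Table 1 inputs for reuse. */
int main(void)
{
    FILE *fp = fopen("nsga2_params.txt", "w");   /* assumed file name */
    if (fp == NULL) return 1;
    fprintf(fp, "generations 500\npopulation 100\n");
    fprintf(fp, "objectives 2\nconstraints 4\nreal_variables 4\n");
    fprintf(fp, "bounds -5 10  0 15  -3 3  0 5\n");
    fprintf(fp, "p_crossover 0.8\np_mutation 0.05\n");
    fprintf(fp, "eta_crossover 10\neta_mutation 50\n");
    fprintf(fp, "binary_variables 0\n");
    fprintf(fp, "display 1\nx_obj 1\ny_obj 2\n");
    fclose(fp);
    return 0;
}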

Figure 4: First parameter interface

Figure 5: One solution of the welded beam problem

5 RESULTS
The system is linked to the GNUPLOT text and graphical result display interface. Different parameters are used to arrive at an acceptable optimum solution for the welded beam problem. Continuous refinement of the parameters and re-running the optimisation produced varying and interesting results. One of the results of running 500 generations with a population of 100, with the corresponding parameters shown in Table 1, is shown in Figure 5. The welded beam problem has been used by many researchers [5], [13], [14]. When compared to some results in the literature, our results in certain circumstances show a wider spread along the cost values and a shorter spread along the deflection values, and vice versa. Using more nodes to distribute the computations gave a better spread of results than using fewer nodes. This demonstrates the computational synergy of grid services in obtaining results that show both convergence and diversity, so that experts can make better decisions on the choice of design variables and parameters.

6 SUMMARY
MDO is a field that brings together multidisciplinary experts to collaborate, and these experts may be geographically distributed. The grid provides a problem solving environment (PSE) in which MDO experts can collaborate and publish mathematical models, algorithms and parameters as grid resources and services. DECGrid is an implementation of such PSE collaborative tools that enables companies to use grid services for multi-objective optimisation. These services provide tools to designers as grid services to enhance exploration in design and ultimately produce a wider optimum solution space for design engineers to choose from.

7 ACKNOWLEDGEMENTS
We wish to thank the Petroleum Technology Development Fund (PTDF) of Nigeria and Cranfield University, UK, for sponsoring this research.


8 REFERENCES
[1] Liu, H., Tang, M. and Frazer, J.H., 2004, Supporting Creative Design in a Visual Evolutionary Computing Environment. Advances in Engineering Software, 35:261-271.
[2] AlSairafi, S., Emmanouil, F.S., Ghanem, M., Giannadakis, N., Guo, Y., Kalaitzouplos, D., Osmond, M., Rowe, A., Syed, J. and Wendel, P., 2003, The Design of Discovery Net: Towards Open Grid Services for Knowledge Discovery. The International Journal of High Performance Computing Applications, 17(3):297-315.
[3] Foster, I., Kesselman, C. and Tuecke, S., 2001, The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of Supercomputer Applications.
[4] Hiroyasu, T., Miki, M., Shimosaka, H. and Dongarra, J., Optimization Problem Solving System using Grid RPC. Nihon Kikai Gakkai Sekkei Kogaku, Shisutemu Bumon Koenkai Koen Ronbunshu, L1283A.
[5] Parmee, I.C., Araham, J., Shackelford, M., Rana, O.F. and Shaikhali, A., 2005, Towards Autonomous Evolutionary Design Systems via Grid-Based Technologies. Proceedings of the ASCE 2005 International Conference on Computing in Civil Engineering.
[6] Eres, M.H., Pound, G.E., Keane, A.J. and Cox, S.J., 2004, User Deployment of Grid Toolkits to Engineers. Proceedings of the UK e-Science All Hands Meeting (AHM).
[7] Jackson, T., Austin, J., Fletcher, M. and Jessop, M., 2003, Delivering a Grid Enabled Distributed Aircraft Maintenance Environment (DAME). FGCS.
[8] Goel, S., Taya, S.S. and Sobolewski, M., 2005, Preliminary Design using Distributed Service-Based Computing. Next Generation Concurrent Engineering, 113-120, ISBN: 0-9768246-0-4.
[9] Litke, A., Konstanteli, K., Andronikou, V., Chatzis, S. and Varvarigou, T., 2008, Managing Service Level Agreement Contracts in OGSA-Based Grids. FGCS, 24:245-258.
[10] Antonioletti, M., Atkinson, M., Baxter, R., Borley, A., Chue Hong, N.P., Collins, B., Hardman, N., Hume, A.C., Knox, A., Jackson, M., Krause, A., Laws, S., Magowan, J., Paton, N.W., Pearson, D., Sugden, T., Watson, P. and Westhead, M., 2005, The Design and Implementation of Grid Database Services in OGSA-DAI. Concurrency and Computation: Practice and Experience, 17:357-376.
[11] Gero, J.S., 1994, Computational Models for Creative Design Processes. In: Dartnall, T. (ed.), Artificial Intelligence and Creativity, Springer, 269-280.
[12] Fan, L.Q., Kumar, A.S., Jagdish, B.N. and Bok, S.H., 2008, Development of a Distributed Collaborative Design Framework within a Peer-to-Peer Environment. Computer Aided Design Journal.
[13] Deb, K., 2000, An Efficient Constraint Handling Method for Genetic Algorithms. Computer Methods in Applied Mechanics and Engineering, 186:311-338.
[14] Coello Coello, C.A., 2000, Use of a Self-Adaptive Penalty Approach for Engineering Optimization Problems. Computers in Industry, 41:113-127.


Knowledge & Information Management

Automated Retrieval of Non-Engineering Domain Solutions to Engineering Problems
J.K. Stroble1, R.B. Stone2, D.A. McAdams3, M.S. Goeke1, S.E. Watkins1
1Electrical and Computer Engineering, Missouri University of Science and Technology, 1870 Miner Circle, Rolla, MO 65409, USA
2Interdisciplinary Engineering, Missouri University of Science and Technology, 1870 Miner Circle, Rolla, MO 65409, USA
3Mechanical Engineering, Texas A&M University, MS 1250, College Station, TX 77843, USA
[email protected]

Abstract
Biological inspiration for engineering design has occurred through a variety of techniques, such as the creation and use of databases, keyword searches of biological information in natural-language format, prior knowledge of biology, and chance observations of nature. This research focuses on utilizing the reconciled Functional Basis function and flow terms to identify suitable biological inspiration for function based design. The organized search provides two levels of results: (1) results associated with the verb (function) only, and (2) narrowed results associated with the verb-noun (function-flow) pair. A set of heuristics has been compiled to promote efficient searching using this technique. An example of creating smart flooring is also presented and discussed.

Keywords: Engineering design, Function, Biomimicry, Information retrieval

1 INTRODUCTION
Ideas and inspiration for engineering designs can be arrived at in a multitude of ways. Formal methods include, but are not limited to, brainstorming, analogy, the 635 method, gallery, Delphi, synectics, analysis of current products and analysis of natural systems [1]; brain-ball, C-sketch and morphological analysis [2]; functional reasoning [3]; systematic classification [4]; and creative conceptual design [5][6]. Whichever method is preferred and utilized, design is about meeting a need of society by fulfilling its required functions [7]. Function based design, encompassing the above idea generation and inspiration methods, aims to represent a system or product in its most abstract form using functionally descriptive words. Using the same functionally descriptive keywords to search for analogies or solution strategies for the desired function is an obvious corollary. Therefore, the search algorithm presented in this paper is formulated in the manner stated by Abbass: "we need somehow to choose the problem solving approach before representing the problem" [8].

"Science is the study of the natural world; it is concerned with understanding what is. Engineering design is concerned with creating new things; it makes extensive use of science, but is a quite different activity. … it is only the support of science that has made possible the quickened pace and great achievements of engineering today, and in most fields nowadays the designer must have a solid background of scientific knowledge." – [9]

Science, as inspiration for analogous solution principles, has greatly influenced engineering as a whole. For instance, the recently formalized and growing field of Biomimetics, or Biomimicry, is a design discipline devoted to studying biological systems and imitating the related principles to solve engineering problems [10]. Many engineering breakthroughs have been based on biological phenomena, and it is evident that mimicking biological systems or using them for inspiration has led to successful innovation (e.g., velcro, flapping-wing micro air vehicles, synthetic muscles, self-cleaning glass, self-cooling buildings, etc.).



Searching for and retrieving analogous solution information as part of function based design has several methods. Searching the Internet, a database or a subject domain specific corpus are common approaches; however, each method has unique properties that determine how successful the results are. The same is true for this research, and in Section 5 a set of heuristics (not related to modern heuristics such as genetic algorithms, tabu search, ant colony optimization, simulated annealing or immune systems) is introduced that allows engineering designers to mine data for inspiration or direct solutions.

2 RELATED WORK
2.1 Information Retrieval in Design
A general approach to design information retrieval was undertaken by Wood et al., who created a hierarchical thesaurus of component and system functional decompositions to capture design context [11]. Through a framework for the systematic formalization of informal information in the early design process, they propose that informal knowledge in design can be reused. Strategies for retrieving issue-based and component/function information, similar to search heuristics, were presented. Cheong et al. developed a set of search cases, specific to the incorporation of biology in engineering design, for determining biologically meaningful keywords for sets of engineering keywords [12]. Although the results are subjective, the process for retrieving the words is systematic. They were successful in determining biologically meaningful words for several functions found in the reconciled Functional Basis [13].

2.2 Biology in Design
Previous problem solving by inspiration based on biological principles may have happened by chance, but several engineering design researchers have created methods for transferring biological phenomena for use in the engineering domain. A short list of prominent research in biologically inspired products, theories and design processes includes [14-16] and [17,18]. With the right tools, knowledge transfer between the biological and engineering domains can be facilitated and biomimetic designs can be systematically realized.

Chakrabarti et al. developed a software package entitled Idea-Inspire that allows one to search a database by choosing a verb-noun-adjective set [19]. Their database comprises biological and engineered mechanical systems. Each entry's action is described functionally by behavioral language in the form of a function-behavior-structure model. Chakrabarti's Idea-Inspire software yields seven behavioral constructs for each biological phenomenon within the search results; the aim is to inspire ideas rather than solve the problem directly. Wilson and Rosen approach biological systems through reverse engineering to determine behavioral characteristics [20]. This method begins by assuming a biological system has been identified using other methods, and involves functionally abstracting or decomposing the biological system into physical and functional parts. A behavioral model and truth table depicting system functionality allow the designer to describe the biological system with domain-independent terms, which allows for the transfer of general design principles. A searchable database that focuses on technology transfer between biology and engineering is the TRIZ-based (Teoriya Resheniya Izobretatelskikh Zadatch) method of [21]. TRIZ provides the user with the results of analyzing 3,000,000 patents and over 6,000 physical, chemical, mathematical and engineering solutions, all classified in terms of function. By reformulating the problem into an abstract representation, a list of possible solutions is generated, which leads the designer to a specific solution. Shu et al. have developed a method for identifying relevant biological analogies by searching through biological knowledge in natural-language format using functional keywords [22,23]. The engineering domain keywords are expanded using WordNet to create a set of natural-language keywords that yield better search results, which can be based on multiple keywords. This method has successfully generated biologically inspired solutions to engineering problems [24]. Researchers at the University of Toronto have also worked to provide designers with biologically meaningful words that correspond to the Functional Basis functions. They analyzed the functions at the secondary and tertiary levels, as well as the correspondent terms, to develop groups of words that were similar according to WordNet [12]. Biologically meaningful words were identified through a methodology developed by Chiu et al. [25] using bridge verbs: verbs that were modified by a frequently occurring noun. Four cases for identification are discussed and examples presented [12]. Based on semantic relationships, the engineering function terms of the Functional Basis were used to systematically generate a list of biologically connotative keywords. Synonyms, troponyms and hypernyms of functions were identified, effectively creating a thesaurus of biological terms that map to Functional Basis functions.

3 RESEARCH APPROACH
Due to the vast array of information available via the Internet, books, libraries and other sources, knowing where to start searching for design inspiration can be difficult. There are no wrong ways to search for analogies in design; rather, there may be more effective and efficient ways. The approach presented in this paper is to conduct an organized verb-noun search of a non-engineering corpus, such as a textbook or Internet resource. This search utilizes the lexicon of engineering terms known as the reconciled Functional Basis and the automated retrieval tool created at Missouri University of Science and Technology. The majority of non-engineering corpora are written in natural-language format, which prompted the use of a Functional Basis function (a verb) and a flow (a noun). In general, by selecting an introductory text that covers a broad spectrum of topics in the non-engineering domain, a wide range of analogies presented at a level that is understandable for engineering design can be found. The goals of this research are to:
• Create a method for concept generation to identify suitable non-engineering domain inspiration for function based design from a searchable corpus.
• Create a general approach for design by analogy that supports the search of non-engineering domain inspired solution strategies for engineering design problems.
• Extract analogy-inspired solution strategies for specific engineering problems from non-engineering texts through organized verb-noun searches with the automated retrieval tool.
Two specific design strategies are used for generating solutions to engineering problems through automated retrieval of information: (1) the Functional Basis, and (2) the organized verb-noun search algorithm. Both tools are summarized in the next sections to give the reader the background for understanding the organized verb-noun search algorithm. Following that, an overview of the organized verb-noun search algorithm advocated in this research is presented.

3.1 Functional Basis
The set of generally valid functions and flows proposed by Pahl and Beitz [1] was further evolved by Stone and Wood into a well-defined modeling language comprised of function and flow sets with definitions and examples, entitled the Functional Basis [26]. Hirtz et al. later reconciled the Functional Basis into its most current set of terms [13]. Here, a function represents an operation performed on a flow of material, signal or energy. There exist eight classes of functions and three classes of flows, both with increasing specification at the secondary and tertiary levels. There are 24 tertiary functions, each with a set of correspondent terms to aid the designer in choosing the correct function. Similarly, there are 22 tertiary flows, certain ones having correspondent terms. Tables 1 and 2 provide the function set and the flow set at the class and secondary levels, respectively. Note: the function set in Table 1 comprises the verbs for the automated verb-noun search in Section 4; however, the nouns referenced in Sections 4-6 are not those listed in Table 2.

Class      Secondary
Branch     Separate, Distribute
Channel    Import, Export, Transfer, Guide
Connect    Couple, Mix
Control    Actuate, Regulate, Change, Stop
Convert    Convert
Provision  Store, Supply
Signal     Sense, Indicate, Process
Support    Stabilize, Secure, Position

Table 1: Partial Functional Basis function set [25].

Class     Secondary
Material  Gas, Human, Liquid, Solid, Plasma, Mixture
Signal    Control, Status
Energy    Acoustic, Biological, Chemical, Electrical, Electromagnetic, Human, Hydraulic, Magnetic, Mechanical, Pneumatic, Radioactive/Nuclear, Thermal

Table 2: Partial Functional Basis flow set [25].

3.2 Organized Verb-Noun Search Strategy
In this section, we present a specific strategy developed to work with non-engineering subject domain specific information. The majority of non-engineering domain texts are written in natural-language format, which prompted the investigation of using both a Functional Basis function and a flow term when searching for solutions. Recognizing how the topic of the text is treated increases the extensibility of the organized verb-noun search algorithm. This organized verb-noun combination search strategy provides two levels of results: (1) those associated with the verb only, which the user can choose to utilize or ignore, and (2) the narrowed results associated with the verb-noun pair. This search strategy requires the designer to first form an abstraction of the unsolved problem using the Functional Basis terms. The verbs (functions) provided in the Functional Basis are used as keywords in the organized search to generate a list of matches, and subsequently a list of words that occur most frequently in proximity to the searched verb in those matches.
The generated list contains mostly nouns, which can be thought of as flows (materials, energies and signals), analogous to the correspondent words already provided in the Functional Basis flow set. The noun listing is then used, in combination with the verb, for a second search to locate specific excerpts that describe how the non-engineering domain systems perform the verb (function) used in the organized searches. This search strategy is embodied in an automated retrieval tool that allows an engineering designer to selectively choose which documents to search and to upload additional searchable information as it becomes available. The user interface initially presents the designer with function (verb) and flow (noun) entry fields with search and directory options. The search options prompt the designer to choose from exact word, derivatives of the word, and partial word. If the designer does not want to search by verb-noun, a function-only option is also available. For this paper, the non-engineering domain chosen for the examples is biology. To enhance this search strategy, the biologically meaningful function terms identified by Cheong et al. [12] can be included with the original search verb when performing the second search. Additionally, the biologically meaningful flow terms identified by Stroble et al. in their engineering-to-biology thesaurus can be used to relate engineering flows to their biological synonyms [27]. The search typically yields more than one biological phenomenon; the designer examines the text excerpts and decides which biological phenomena are best suited to solving the problem. The designer utilizing this organized search technique does not need an extensive background in the non-engineering domain; rather, the designer must have sufficient engineering background to abstract the unsolved problem to its most basic level utilizing the Functional Basis taxonomy.

4 ORGANIZED VERB-NOUN SEARCH ALGORITHM
The sub-sections below describe how the organized verb-noun search algorithm is executed. The designer initiates the search using a function word and chooses how the search word is treated; all other steps are automated. Search results are then presented to the designer, who must decide which biological phenomena are best suited to solving the problem. The search heuristics provided in Section 5 cover scenarios an engineering designer might encounter when searching for design inspiration or solutions.

4.1 Step 1: Initial Verb Search
Functional Basis functions are the foundation of the organized verb-noun search. In this context, the "verb" is any secondary or tertiary level function from the Functional Basis. To produce additional search results, the class-level Functional Basis words can also be used. However, those words are generally discouraged, as the resulting text excerpts or matches are often repeated and offer limited insight. The designer should therefore choose the correct verb from the Functional Basis, based on the definitions and examples provided in the appendix of [13], for the problem that needs to be solved. For example, to find out how biological systems measure various parameters, the word measure (a tertiary-level function under Signal in the Functional Basis) should be the chosen search verb. The results of the verb search are used to generate the list of frequently occurring subject domain specific nouns in Step 2.
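Step 1 reduces to a sentence-level keyword scan. The following is a minimal sketch, assuming the corpus is a plain-text file (the file name chapter45.txt is hypothetical) and using a simple case-insensitive substring test as a stand-in for the tool's exact/derivative/partial word options:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Case-insensitive substring test (sketch only). */
static int contains(const char *sentence, const char *word)
{
    size_t n = strlen(word);
    for (; *sentence; sentence++) {
        size_t i = 0;
        while (i < n && tolower((unsigned char)sentence[i]) ==
                        tolower((unsigned char)word[i]))
            i++;
        if (i == n) return 1;
    }
    return 0;
}

int main(void)
{
    char sentence[2048];
    FILE *fp = fopen("chapter45.txt", "r");   /* assumed corpus file */
    if (fp == NULL) return 1;

    /* Step 1: report every sentence (here approximated by a line)
       containing the search verb. */
    while (fgets(sentence, sizeof sentence, fp))
        if (contains(sentence, "detect"))
            fputs(sentence, stdout);

    fclose(fp);
    return 0;
}

The same scan underpins Steps 2 and 3: Step 2 counts the content-bearing words that co-occur with the matched verb, and Step 3 repeats the scan requiring both the verb and a collocating noun in the same sentence.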

4.2 Step 2: Generate a list of frequently occurring collocating nouns
The "nouns" generated in this step can be likened to Functional Basis flows, as they are often representations of materials, energies and signals within the chosen subject domain. Using a function as the search keyword yields many results; however, a poor choice of keyword can yield vague or unhelpful ones. By generating the set of subject domain specific nouns, a complementary set of search keywords to the original verb keyword is provided to focus the search and minimize the time spent searching. Utilizing the collocation strategy of [28], the results of Step 1 are passed through a script that generates the set of subject domain specific nouns. The result is a list of words that occur in the Step 1 results, together with the number of occurrences of each word. With the most frequent word starting the list, the words are sorted in descending order down to single occurrences. Additionally, only content-bearing words are counted; English-language articles, spelled-out forms of numbers, single letters, common adjectives, verb phrase headers and other frequently used verbs are ignored. Each of the subject domain specific nouns is used in the second search (Step 3). Occasional adjectives or verbs appearing in the noun list should be dismissed, as most, if not all, are abstract in nature. The designer now has the two pieces that are needed for the complete organized search.

4.3 Step 3: Perform Verb-Noun search
This salient step is the key to eliminating the majority of non-relevant search results, leading to a quick judgment of the search keyword and its success, thus saving time and effort. With the original search verb and a list of subject domain specific nouns, the designer can search non-engineering knowledge in natural-language format to identify focused matches, in this particular case relevant biological phenomena. The nouns used for this portion of the algorithm are not input into the automated retrieval tool in the same fashion as the verb keyword.

The nouns generated in Step 2 are automatically paired with the verb keyword and searched. Control of the noun list is exercised only through the choice of non-engineering corpus. An additional search option, to increase search robustness, is to cross-reference the search verb with the biologically meaningful functions identified by Cheong et al. [12]. One who is familiar with the Functional Basis can think of this as a function-flow search. To perform the verb-noun search, each period-delimited sentence of the chosen documents is scanned for concurrent instances of the verb and noun pair, not necessarily in consecutive order. This verb-noun search executes recursively for each subject domain specific noun generated in Step 2, starting with the most frequent, while keeping the verb the same. If the designer chooses to include thesaurus terms, the search algorithm repeats the verb-noun search, switching only the verb. For example, the search pairs might be (biologically meaningful words corresponding to stabilize are italicized [12]):
• Stabilize-cell, membrane, temperature, structure, …
• Connect-cell, membrane, temperature, structure, …
• Bind-cell, membrane, temperature, structure, …

4.4 Step 4: Display search results
The resulting matches from each verb-noun search are compiled to form the set of phenomena that correspond to the function represented by the verb, and are displayed to the designer on screen. Results are displayed according to the type of search, separated by searched document and presented in a sequential format. If verb-only searches were conducted, only the sentences containing the verb are displayed. For verb-noun searches, the list of subject domain specific nouns is listed first, with frequency numbers and references to their occurrences in the searched document. As mentioned before, searching multiple documents displays each set of results individually to avoid confusion.

4.5 Step 5: Interpreting results
The organized verb-noun search algorithm results can suggest direct solutions by examining the resulting

compiled excerpts that describe biological phenomena and mapping them to possible engineering design solutions using analogical reasoning. Determining whether or not to use a particular biological phenomenon is largely up to the designer's discretion.

5 HEURISTICS FOR INFORMATION RETRIEVAL
Strategies for retrieving focused results from a subject domain specific corpus are presented here. Depending on the stage of the design process, search criteria may vary widely; thus a set of heuristics has been compiled to promote efficient searching using this technique. Our definition of heuristics for information retrieval is: a method of extracting useful information from a user defined corpus, empirical in nature, to aid in engineering design. Functional models are used in function based design and are considered input for these scenarios. Stone et al. [29] identified three modular heuristics for functional models; two of those heuristics have been adapted to this research and modified with the concept of primary/carrier flow by Nagel et al. [30]. The Delta cordless circular saw functional model in Figure 1 is provided to demonstrate several of the heuristics. The function based design heuristics that grew out of using the automated retrieval tool presented in this paper are listed below:

• General inspiration search. To provide the broadest set of analogies, perform a verb-only search with the Functional Basis function that is closest to the desired functionality of a conceptual design. Creating a black box model is a reliable method for narrowing down possible functions when performing the verb-only search. Furthermore, opting for the partial word search will return the most matches by generating the most nouns related to the subject of the searched corpus.

• Dominant flow: (a) concept has a dominant material flow; (b) concept has a dominant energy flow; (c) concept has a dominant signal flow. A dominant flow can be of material, energy or signal type. It must enter or be initiated within the system, pass through until it exits or is converted into another flow, and be of importance to the system as a whole. The dominant flows, or primary flows, in Figure 1 are: (1) the on/off switch control signal, (2) the variable trigger switch control signal, (3) the electrical energy of the battery, (4) the mechanical energy of the rotating blade, and (5) the human hand guiding the blade. With a dominant flow, nouns are of great importance, and choosing the exact search word option during a verb-noun search yields only the text excerpts for the prime nouns, reducing the clutter in the results.

• Branching flow. A branching flow is a material, energy or signal flow that creates parallel function chains. Human energy as force and the blade in Figure 1 are examples of branching flows. From a functional standpoint they are carrier flows, as they are necessary to the system but not of primary importance to the user. Thus, a verb-noun search with the derivatives-of-the-search-word option chosen yields nouns that support the function word, leading to encompassing yet focused results.

• Redesign phase: (a) rework components for a more elegant solution; (b) concept needs innovation; (c) looking for analogy. Perhaps the most interesting heuristic is that of redesign. The full potential of the automated retrieval tool is realized when innovation or analogies are the result. Calling upon any synonymous functions during a verb-noun search will greatly increase the designer's chances of discovering a direct solution. Increasing the elegance of an older design can be achieved by updating system components. This goes back to primary flows achieving the function; thus an exact-search-word verb-noun search is the best choice for this scenario.

• User defined verb – non-Functional Basis verb. The fifth heuristic is included for the anomalous case when the Functional Basis functions do not produce usable search results for the chosen non-engineering domain corpus. In this instance other verb-noun pairs must be generated by the designer.

Figure 1: Delta cordless circular saw functional model.

6 EXAMPLE
To demonstrate the organized verb-noun search, a smart flooring example is presented. The automated retrieval tool is utilized to search for biology-inspired analogies that can be implemented in a product. When using the Functional Basis for product design, there are a few basic steps needed before the search for inspiration or solutions can be performed. First, one must define the customer needs and convert them into engineering terms [31]. Second, one must develop the conceptual functional model of the desired new product using the Functional Basis function and flow terms. Examples can be found in [32] and [30]. The designer then has several Functional Basis functions that could be used with the automated retrieval tool to gain inspiration. However, to minimize the search time, the designer should start with a black box representation of the desired solution, which designates the main function and flow term [1]. The main function is most likely the keyword (verb) to choose for generating useful biological phenomena via the automated retrieval tool.

6.1 Smart Flooring
The customer wants to create a security/surveillance product that looks like ordinary carpet, mats, rugs, etc. and detects intruders, a presence or movement. Requirements for the smart flooring include being unseen by the human eye, durability, and being composed of common materials. Given these customer needs, the black box model (Figure 2) and the functional model (Figure 3) are created. The main function, detect, is input into the automated retrieval tool for a verb-noun search and the exact word option is selected, as Figure 3 contains many dominant flows. The search resulted in 29 text excerpts, shown in Figure 4 in the form of individual sentences. Both relevant and non-relevant text excerpts are displayed in Figure 4 to demonstrate the format of the search results. The corpus chosen for this automated retrieval of biological solutions to engineering problems was Life, The Science of Biology by Purves et al. [33]. Our corpus comprises 58 separate files, one for each chapter of the textbook. The results are separated by each file in the corpus, and each instance of a match references the paragraph and sentence within the searched file. Additionally, if the designer wishes to observe the nouns generated from a verb-noun search, they are displayed in alphabetical order, with paragraph and sentence citations, above the collection of results.

Figure 2: Smart flooring black box model.

Figure 3: Smart flooring functional model.


Results for chapter 11
(1) Paragraph 107 Sentence 0: Since both AT and GC pairs obey the base-pairing rules, how does the repair mechanism "know" whether the AC pair should be repaired by removing the C and replacing it with T, for instance, or by removing the A and r… The repair mechanism can detect the "wrong" base because a newly synthesized DNA strand is chemically modified some time after replication.
(2) Paragraph 120 Sentence 2: This technique measures the length of the DNA fragments, and can detect differences in fragment length as short as one base.
Results for chapter 12
(3) Paragraph 8 Sentence 2: This fact is important because it means that even recessive mutant alleles are easy to detect in experiments.
Results for chapter 13
(4) Paragraph 155 Sentence 0: Sequencing has also provided the necessary information for the design of primers and hybridization probes used to detect these and other pathogens.
Results for chapter 17
(5) Paragraph 73 Sentence 0: In addition to genes for antibiotic resistance, several other marker genes are used to detect recombinant DNA in host cells.
(6) Paragraph 93 Sentence 4: The second is the use of "DNA chips" to detect the presence of many different sequences simultaneously.
(7) Paragraph 102 Sentence 5: This method may provide a rapid way to detect mutations in people.
(8) Paragraph 130 Sentence 2: Some diabetics' immune systems detect these differences and react against the foreign protein.
(9) Paragraph 176 Sentence 5: If an organism is present in small amounts, PCR testing will detect it.
Results for chapter 18
(10) Paragraph 106 Sentence 1: We will describe their use to detect the mutation in the β-globin gene that results in sickle-cell anemia.
(11) Paragraph 145 Sentence 2: It is also possible to detect early in life whether an individual has inherited a mutated tumor suppressor gene.
(12) Paragraph 169 Sentence 1: Scientists attending this conference quickly realized that the ability to detect such damage would also be useful in evaluating environmental mutagens.
(13) Paragraph 169 Sentence 2: But in order to detect changes in the human genome, scientists first needed to know its normal sequence.
Results for chapter 19
(14) Paragraph 108 Sentence 1: For example, they have been invaluable in the development of immunoassays, which use the great specificity of the antibodies to detect tiny amounts of molecules in tissues and fluids.
Results for chapter 23
(15) Paragraph 78 Sentence 2: However, as soon as they detect homoplasies, systematists change their classifications to eliminate polyphyletic taxa.
Results for chapter 24
(16) Paragraph 6 Sentence 1: Because modern molecular techniques enable us to detect substitutions at the level of nucleotides, molecular evolutionists can measure even these nonfunctional changes.
(17) Paragraph 23 Sentence 2: One way to detect homologous genes in distantly related organisms is to find identical or nearly identical families of genes that produce similar effects in a wide variety of organisms.
(18) Paragraph 40 Sentence 2: The more types of molecules we use, the better we can detect homoplasies.
Results for chapter 40
(19) Paragraph 10 Sentence 6: Smell and taste receptors, for example, are epithelial cells that detect specific chemicals.
Results for chapter 45
(20) Paragraph 1 Sentence 6: Dogs can be trained to detect the signature odors of such items, so they are used by police, customs agents, and other investigators to identify those odors wherever suspicious activities are occurring.
(21) Paragraph 32 Sentence 16: The mammalian inner ear has two equilibrium organs that use hair cells to detect the position of the body with respect to gravity: semicircular canals and the vestibular apparatus.
(22) Paragraph 51 Sentence 5: Thus while the hawk is flying, it sees both its projected flight path and the ground below, where it might detect a mouse scurrying in the grass.
(23) Paragraph 67 Sentence 3: These sensory cells enable the fish to detect weak electric fields, which can help them locate prey.
(24) Paragraph 67 Sentence 6: Any objects in the environment, such as rocks, plants, or other fish, disrupt the electric fish's electric field, and the electroreceptors of the lateral line detect those disruptions.
(25) Paragraph 71 Sentence 5: Eyes vary from the simple eye cups of flatworms, which enable the animal to sense the direction of a light source, to the compound eyes of arthropods, which enable the animal to detect shapes and patterns, to the lensed eyes of cephalopods and vertebrates.
Results for chapter 49
(26) Paragraph 44 Sentence 4: Electrodes placed on the surface of the body at different locations, usually on the wrists and ankles, detect those electric currents at different times and therefore register a voltage difference.
Results for chapter 50
(27) Paragraph 35 Sentence 4: Bats use echolocation, pit vipers sense infrared radiation from the warm bodies of their prey, and certain fishes detect electric fields created in the water by their prey (see Chapter 45).
(28) Paragraph 110 Sentence 1: That is why urine tests are used to detect illegal drug use by athletes and other individuals.
Results for chapter 53
(29) Paragraph 72 Sentence 4: The larger a flock of pigeons, the greater the distance at which they detect an approaching hawk, and the less likely the hawk is to succeed in capturing a pigeon.

Figure 4: Search results for the smart flooring example.


6.2 Discussion of Results
For the purposes of this example, each result or match was given a number to make referencing individual results easier. The verb-noun search, utilizing the verb detect, yielded a list of 29 results. Of those results, 8 are relevant; these are bolded and italicized in Figure 4: matches #5, 19, 21, 23, 24, 25, 27 and 29. All other matches were deemed irrelevant because the corresponding descriptions referred to performing the function detect using non-biological means or equipment operated by humans. The relevant matches are summarized by the following phenomena: (1) the hair cell, a mechanoreceptor found in the ear of most animals or on the body of most insects; (2) electric fields, sensed by the electroreceptors of electric fish; (3) smell and taste receptors, i.e. epithelial cells; (4) genes that mark recombinant DNA; and (5) birds flocking to deter predators. These are natural ways of detecting and can provide analogies for an engineered sensing solution. We were unable to derive an analogous solution that fulfills the customer need of being unseen to the human eye using "genes that mark recombinant DNA" as the stimulus. Remaining as sources of potential inspiration and analogy are the phenomena of hair cells, electroreceptors and epithelial cells. Flocking birds are difficult to adapt directly to smart flooring, but the idea of grouping several sensors together would increase the detection rate. Detecting a certain material could adapt the idea of epithelial cells by "tasting" materials. Electroreception could be used like radar and even detect the presence of an object when it is just above the flooring. Perhaps the natural phenomenon that most readily allows an analogy and inspiration is (1): hair cells are like cantilevers and would detect a presence when disturbed, such as being stepped upon. It can be concluded that the automated retrieval tool was successful at extracting specific biological phenomena that perform the function detect. To demonstrate how biological inspiration from hair cells could be utilized in a smart flooring product, an image that embodies one possible conceptualization is provided in Figure 5. This conceptualization exploits the tactile response provided when a hair cell is forcefully deformed. Shaded strands in the close-up view of the carpet fibers represent durable feedback elements woven into the carpet, providing hidden sensing capabilities.
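The core of such a verb-noun search can be approximated with a few lines of text processing. The sketch below is a minimal, hypothetical Python reimplementation, not the Missouri S&T tool itself: it scans the sentences of a plain-text corpus for an engineering function verb (here detect) and reports chapter, paragraph and sentence locations. The published tool additionally couples the verb with domain-specific nouns and applies ranking heuristics, which are omitted here.

```python
import re

def verb_noun_search(chapters, verb_forms):
    """Scan plain-text chapters for sentences containing a function verb.

    chapters: dict mapping chapter number -> list of paragraph strings.
    verb_forms: inflections of the function verb, e.g. ("detect", "detects").
    Returns (chapter, paragraph index, sentence index, sentence) tuples.
    """
    pattern = re.compile(r"\b(" + "|".join(verb_forms) + r")\b", re.IGNORECASE)
    matches = []
    for chapter, paragraphs in sorted(chapters.items()):
        for p_idx, paragraph in enumerate(paragraphs):
            # Naive sentence splitter; a real tool would use an NLP library.
            sentences = re.split(r"(?<=[.!?])\s+", paragraph)
            for s_idx, sentence in enumerate(sentences):
                if pattern.search(sentence):
                    matches.append((chapter, p_idx, s_idx, sentence))
    return matches

# Toy two-sentence corpus (hypothetical data):
corpus = {45: ["Hair cells detect vibration. Feathers insulate the body."]}
for ch, p, s, text in verb_noun_search(corpus, ("detect", "detects")):
    print(f"Chapter {ch}, Paragraph {p}, Sentence {s}: {text}")
```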

Figure 5: Image of conceptualized smart flooring product.

7 SUMMARY AND CONCLUSIONS
In efforts to bridge biology and engineering through functionality, an automated retrieval tool was presented, which provides engineering designers with analogies from non-engineering domains to the engineering systems they wish to create. The automated retrieval tool developed at the Missouri University of Science and Technology significantly reduces the time spent searching biological information for solutions or inspiration by utilizing the function term of interest and subject domain specific nouns. Furthermore, the search extracts solutions to engineering problems from non-engineering domain texts and narrows the resulting matches based on functionality. Non-engineering flow terms (nouns) combined with engineering function terms create a powerful tool for designers seeking biological phenomena to solve human problems or improve existing products. The engineering tools necessary for the automated retrieval were presented along with the organized verb-noun search algorithm. A set of heuristics for generating specific types of results was developed, and the method behind the four most significant ones was discussed. The heuristics are in place to make searching a non-engineering text user-friendly and to provide precise results. An example demonstrating how the automated retrieval search tool was used to generate biological phenomena that could be utilized to provide or improve detection for security or surveillance was presented. Biological phenomena relevant to the function detect were discussed and analyzed. It was shown that starting with a black box representation of the desired solution minimized the time spent searching, as suitable biological solutions were captured in the first search. Analogical reasoning points to hair cells as the best solution for the smart flooring example out of the five resulting biological phenomena that perform the function detect. The organized verb-noun search algorithm of the automated retrieval tool provides targeted results, which quickly prompt creative solutions and stimulate designers to make connections between the biological and engineering domains.

8 FUTURE WORK
Future work includes developing a web-based version of the automated retrieval tool to increase accessibility and ease of use. Also, a feature that allows results to be downloaded is proposed. With the designer in mind, the automated retrieval results could be incorporated into the other web-based engineering design tools employed by Missouri University of Science & Technology, thus catering to many schools of thought. Furthermore, including the retrieval tool results in the Concept Generator, MEMIC [34], will provide designers with biological descriptions and analogies based on product requirements, leading to the mixing of engineering and biological principles. The organized verb-noun search algorithm was presented here utilizing information from biology; however, this retrieval tool can be used for mapping any non-engineering subject to engineering through functionality. It would be interesting to look at how law, history or even psychology maps to engineering, as the generated nouns are subject domain specific.

9 ACKNOWLEDGMENTS
This research is funded by the National Science Foundation grant DMI-0636411.


10 REFERENCES
[1] Pahl, G. and Beitz, W., 1996, Engineering Design: A Systematic Approach, 2nd ed., Berlin; Heidelberg; New York, Springer-Verlag.
[2] Otto, K.N. and Wood, K.L., 2001, Product Design: Techniques in Reverse Engineering and New Product Development, Upper Saddle River, New Jersey, Prentice-Hall.
[3] Far, B.H. and Elamy, A.H., 2005, Functional Reasoning Theories: Problems and Perspectives, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 19: 75-88.
[4] Ulrich, K.T. and Eppinger, S.D., 1995, Product Design and Development, New York, NY, McGraw-Hill.
[5] Benami, O. and Jin, Y., 2002, Creative Stimulation in Conceptual Design, ASME 2002 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Montreal, Canada.
[6] Thompson, G. and Lordan, M., 1999, A Review of Creativity Principles Applied to Engineering Design, Proceedings of the IMechE Part E: Journal of Process Mechanical Engineering, 213(1): 17-31.
[7] Collins, M.W., Atherton, M.A. and Bryant, J.A., 2005, Nature and Design, Southampton; Boston, WIT Press.
[8] Abbass, H.A., 2002, Data Mining: A Heuristic Approach, Abbass, H.A., Sarker, R.A. and Newton, C.S., Editors, Idea Group Publishing, Hershey.
[9] French, M.J., 1994, Invention and Evolution: Design in Nature and Engineering, Cambridge; New York, Cambridge University Press.
[10] Benyus, J.M., 1997, Biomimicry: Innovation Inspired by Nature, New York, Morrow.
[11] Wood, W.H., Yang, M.C., Cutkosky, M.R. and Agogino, A.M., 1998, Design Information Retrieval: Improving Access to the Informal Side of Design, ASME 1998 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Atlanta, GA.
[12] Cheong, H., Shu, L.H., Stone, R.B. and McAdams, D.A., 2008, Translating Terms of the Functional Basis into Biologically Meaningful Words, ASME Design Engineering Technical Conference, Design Theory and Methodology Conference, New York City, NY.
[13] Hirtz, J., Stone, R., McAdams, D., Szykman, S. and Wood, K., 2002, A Functional Basis for Engineering Design: Reconciling and Evolving Previous Efforts, Research in Engineering Design, 13(2): 65-82.
[14] Brebbia, C.A., Sucharov, L.J. and Pascolo, P., 2002, Design and Nature: Comparing Design in Nature with Science and Engineering, Southampton; Boston, WIT.
[15] Brebbia, C.A. and Collins, M.W., 2004, Design and Nature II: Comparing Design in Nature with Science and Engineering, Southampton, WIT.
[16] Brebbia, C.A. and Wessex Institute of Technology, 2006, Design and Nature III: Comparing Design in Nature with Science and Engineering, Southampton, WIT.
[17] Bar-Cohen, Y., 2006, Biomimetics: Biologically Inspired Technologies, Boca Raton, FL, CRC/Taylor & Francis.
[18] Vincent, J.F.V., Bogatyreva, O.A., Bogatyrev, N.R., Bowyer, A. and Pahl, A.-K., 2006, Biomimetics: Its Practice and Theory, Journal of the Royal Society Interface, 3: 471-482.
[19] Chakrabarti, A., Sarkar, P., Leelavathamma, B. and Nataraju, B.S., 2005, A Functional Representation for Aiding Biomimetic and Artificial Inspiration of New Ideas, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 19: 113-132.
[20] Wilson, J.O. and Rosen, D., 2007, Systematic Reverse Engineering of Biological Systems, ASME 2007 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Las Vegas, Nevada.
[21] Vincent, J.F.V. and Mann, D.L., 2002, Systematic Technology Transfer from Biology to Engineering, Philosophical Transactions of the Royal Society London A, 360: 159-173.
[22] Hacco, E. and Shu, L.H., 2002, Biomimetic Concept Generation Applied to Design for Remanufacture, ASME 2002 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Montreal, Canada.
[23] Chiu, I. and Shu, L.H., 2007, Using Language as Related Stimuli for Concept Generation, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 21(2): 103-121.
[24] Shu, L.H., Hansen, H.N., Gegeckaite, A., Moon, J. and Chan, C., 2006, Case Study in Biomimetic Design: Handling and Assembly of Microparts, ASME 2006 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Philadelphia, Pennsylvania.
[25] Chiu, I. and Shu, L.H., 2005, Bridging Cross-Domain Terminology for Biomimetic Design, ASME 2005 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Long Beach, California.
[26] Stone, R. and Wood, K., 2000, Development of a Functional Basis for Design, Journal of Mechanical Design, 122(4): 359-370.
[27] Stroble, J.K., Stone, R.B., McAdams, D.A. and Watkins, S.E., 2008, Generating an Engineering to Biology Thesaurus to Promote Better Collaboration, Creativity and Discovery, CIRP Design Conference 2009, Cranfield, Bedfordshire, UK.
[28] Chiu, I. and Shu, L.H., 2004, Natural Language Analysis for Biomimetic Design, ASME 2004 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Salt Lake City, Utah.
[29] Stone, R., Wood, K. and Crawford, R., 1998, A Heuristic Method to Identify Modules from a Functional Description of a Product, ASME 1998 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Atlanta, GA.
[30] Nagel, R.L., Bohm, M.R., Stone, R.B. and McAdams, D.A., 2007, A Representation of Carrier Flows for Functional Design, International Conference on Engineering Design, Paris, France.
[31] Kurfman, M., Stone, R., Rajan, J. and Wood, K., 2003, Experimental Studies Assessing the Repeatability of a Functional Modeling Derivation Method, Journal of Mechanical Design, 125(4): 682-693.
[32] Tinsley, A., Midha, P.A., Nagel, R.L., McAdams, D.A., Stone, R.B. and Shu, L.H., 2007, Exploring the Use of Functional Models as a Foundation for Biomimetic Conceptual Design, ASME 2007 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Las Vegas, Nevada.
[33] Purves, W.K., Sadava, D., Orians, G.H. and Heller, H.C., 2001, Life, The Science of Biology, 6th ed., Sunderland, MA, Sinauer Associates.
[34] Bryant, C., McAdams, D., Stone, R., Kurtoglu, T. and Campbell, M., 2005, A Computational Technique for Concept Generation, ASME 2005 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Long Beach, CA.


Structured Design Automation

M.J.L. van Tooren¹, S.W.G. van der Elst¹, B. Vermeulen²
¹ Delft University of Technology, Kluyverweg 1, 2629 HS Delft, The Netherlands
² Stork Fokker AESP, Industrieweg 4, 3351 LB Papendrecht, The Netherlands
[email protected], [email protected], [email protected]

Abstract
The next stage in the evolution of the product development process should be the automation of labour-intensive, repetitive steps. The design automation strategy should follow the trend of the supply chain management approach and deliver a flexible framework, allowing local specification and adjustment. The proposed Adjoined Design Automation Process Trajectory (ADAPT) and the associated tools provide such a framework. It covers the process from the fuzzy front-end to the evaluation of automated processes. Considerable lead time and cost savings are shown for electrical component design, and the feasibility of a Domain Specific Language approach is an important step towards the acceptance of design automation.
Keywords: Design automation, knowledge technologies, knowledge application, value stream mapping

1 INTRODUCTION
The value chain framework of Michael Porter [1], the Resource-Based View developed by, amongst others, Wernerfelt and Barney [2], [3], and the Knowledge Management Framework described by Collison and Parcell [4] provide us with a global and a local view on what to organise a firm for. Of course, knowing what to do leaves us with the question of how to do it. In this paper a framework is proposed for using structured design automation in an engineering design environment, in order to achieve a better use of resources and to implement true engineering knowledge management, improving the firm's competitive position. The proposed framework approaches engineering knowledge as a resource which can, and should, be partially addressed as a tangible asset. It shows that the development of integrated design tools has blurred our view on the functionality of their components. Only through a proper understanding of this functionality and the associated technology can one (re)discover which part of the engineering supply chain is covered by these tools, how it is covered and, finally, whether and why it should be covered in this way. The framework offers an alternative engineering knowledge management tool chain which is transparent and reconfigurable in a lean way, to fit the actual need of the company and to allow the insertion of one's own proprietary knowledge. The framework borrows elements from the lean manufacturing approach. However, although lean manufacturing can be used as a guideline to organise the structured design automation process, it needs supporting philosophies and tools to achieve the required functionality. The starting point of the proposed approach is the assumption that it will become more and more important to apply and extend the abilities of current information technology towards knowledge application technologies which can take over many of the repetitive activities currently done by scarce and expensive human intellectual capital.



Many of the necessary components for this transformation are available on the market but lack coherence and acceptance by the industry. This can be explained by the large gap between knowledge management as understood and implemented by managers, and knowledge management as seen by knowledge engineering specialists from the IT world and by a small group of believers in the engineering world. Both aim for the same goals but are too far apart through a lack of mutual understanding. Furthermore, the wide range of available tools, and the complexity of understanding their functions and benefits in a local engineering environment, lead to many disappointments and long-lasting suspicion. It is our aim to narrow this gap and to contribute to the further development of the methodology for implementing knowledge engineering in industry.

2 THE ADAPT PROCESS
The proposed Adjoined Design Automation Process Trajectory (ADAPT) is an attempt to define a business model providing a generic framework for engineering knowledge management with a clear coupling to knowledge technologies. The ADAPT approach should ensure the application, implementation and added value of these knowledge technologies and help engineering design communities to continuously evolve by enabling the re-use and extension of their knowledge base. The implementation of knowledge technologies in an industrial environment in a controlled and useful way requires an integrated, programmatic and transparent approach. The ADAPT process shown in Figure 1 is an attempt to frame existing methodologies and tools in a coherent way.

Figure 1: The ADAPT process

2.1 Process Analysis
The first phase of the ADAPT process concerns an in-depth analysis of the engineering processes performed by the engineers, the products/services they work on and the language they use in describing these products and processes (indicated as the fuzzy front-end). This analysis shall identify process improvement opportunities by applying lean principles to the product development activities. The process analysis is mainly focused on knowledge-intensive activities and on products belonging to a larger set or family, ensuring a sufficiently large applicability of the resulting knowledge application. The deliverable of the process analysis is a value stream map (see Section 3) that shows the engineering processes and the data, information and knowledge flowing through these processes or generated by them. The map is used to identify waste of (scarce) engineering resources as well as opportunities to reduce this waste using knowledge technologies that enable design automation.

For the categorisation one can use the seven types of waste that can be identified during product development, as presented in Table 1 [5]. Of these seven types of waste, identifiable in most product development processes, design automation mainly addresses 'processing' and 'correction' waste. Using Knowledge Based Engineering (KBE) techniques, expert knowledge can be captured and reused to automate repetitive and non-creative engineering activities, thereby reducing product development time and cost. The VSM is performed in close cooperation with the project managers and principal stakeholders, to create awareness of the weaknesses and possibilities within the engineering process. This way, a natural demand for the relatively unfamiliar knowledge technologies can be established, rather than a push model. Furthermore, the involvement of the project managers in the process analysis phase is important for a second reason: they are key enablers of, and responsible for, a successful implementation.

Table 1: Applying the Seven Wastes to product development [5]


The project managers are able to allocate engineering resources and can therefore stimulate the use of knowledge applications to assure business advantages, e.g. reduced cost or increased quality.
During the analysis of the engineering design process, the flow of information, the transformation of information, and the required and applied expert knowledge are monitored. The analysis focuses on four main characteristics:
• Required engineering resources
• Repetitiveness of the engineering process within the product family
• Nature and maturity of the expert knowledge
• Key performance indicators related to the identified processes (cost, time, quality etc.)
The required engineering resources and the number of process cycles provide insight into the cost involved in the different recurring processes within the non-recurring part of the development of a product family. They should also offer information about the longevity of the applied knowledge. The domain expert knowledge is assessed to determine its nature and maturity. When processes are highly frequent, time-intensive, clearly defined and not subject to change, knowledge technologies can enable automation. During the process analysis, possible knowledge technology architectures and applications are examined. Furthermore, a risk analysis is performed to identify the risks involved in the development of knowledge applications. Together, the required investments, the expected benefits and the risk analysis should justify the implementation of KBE techniques. The analysis phase is concluded with a selection of the engineering processes to be automated, the level of automation and a first draft of the architecture for the constellation of knowledge tools suitable in the context.

2.2 Knowledge acquisition
During the knowledge acquisition phase, the expert knowledge involved in the engineering process is identified, captured and structured. This phase forms the foundation for the subsequent phases of the ADAPT process. It has an iterative character and consists of identifying, capturing, structuring and validating the expert knowledge. The deliverable of the knowledge acquisition phase is a knowledge base: a digital repository containing a detailed description of the knowledge concerned with the selected engineering process. The quality and completeness of the captured knowledge largely determine the success rate of the development process and hence of the resulting design automation. To guarantee a successful result, the acquisition process is performed in close cooperation with the domain experts. The involvement of the domain experts is vital to the project for two main reasons:

• Identification and dissemination of relevant knowledge
• Validation of the quality and completeness of the captured knowledge
Using different knowledge acquisition techniques, a conceptual model of the selected engineering process is constructed, providing an informal but detailed description of the activities. In order to maximise the ability to reuse the captured knowledge in the future, it is recommended that the knowledge base, and hence the conceptual model, is not catered to one specific implementation (in this case the development of knowledge applications) and embeds a neutral structure oriented to the engineers. This enables the knowledge base to act as a general-purpose fundamental base for the reuse of knowledge. Other purposes of knowledge reuse are: providing expertise and increasing awareness among stakeholders within an organisation, or reducing the risk of knowledge loss in domains where only a small number of experts hold vital knowledge.


To obtain a neutral structure, the captured engineering knowledge is represented using natural language, terminology from the domain under consideration and predefined forms to structure the different knowledge elements. The conceptual model contains a process diagram focusing on the activities performed by the engineers and is oriented to the 'input-behaviour-output' perspective. It mainly contains procedural knowledge and therefore encompasses a comprehensive activity diagram or flow chart. Besides a detailed description of the engineering activities under consideration, the conceptual model of the knowledge base also contains a product-centric hierarchical decomposition of the system (i.e. product/service) into subsystems and components. This product model is oriented to the 'object-relation-object' (triple) perspective and mainly contains conceptual knowledge. The conceptual model of the knowledge base will form the basis for the subsequent development of the application.
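To illustrate the 'object-relation-object' perspective, the sketch below stores a fragment of a product decomposition as plain triples. This is a minimal Python illustration, not the representation of any specific knowledge tool, and the product names are hypothetical.

```python
# A conceptual product model as 'object-relation-object' triples.
triples = [
    ("wiring harness", "has-part", "connector"),
    ("connector", "has-part", "pin"),
    ("pin", "has-attribute", "gauge"),
    ("signal", "assigned-to", "pin"),
]

def parts_of(whole, facts):
    """Return the direct subsystems/components of a system."""
    return [obj for subj, rel, obj in facts
            if subj == whole and rel == "has-part"]

print(parts_of("wiring harness", triples))  # ['connector']
```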

2.3 Knowledge structuring
The third phase focuses on modelling the captured knowledge. The captured engineering knowledge is analysed and (re)structured to suit the knowledge technologies selected for subsequent development when the knowledge base is not the end product. The deliverable is considered a redesigned engineering process and provides the structure and layout for the knowledge application to be developed. It is also referred to as a specification model, since it provides a more formal definition of the engineering knowledge oriented towards software platforms. The specification model is used to support the communication between knowledge engineers and software developers. Together with the expert-oriented conceptual model created during the knowledge acquisition phase, it comprises the knowledge base. The specification model provides a structure for the software classes representing the different product and process elements and acts as a blueprint for the knowledge application. It consists of two layers. First, the specification model provides an architecture layout describing the software framework environment. Following a functional decomposition, the knowledge application is divided into several self-contained software tools to increase the reuse and the expressiveness of the related software code. The set of software tools provides the full functionality to execute the engineering activity under consideration. Furthermore, the framework enables communication between the software tools through agents and provides a loosely coupled, demand-driven structure for the application. Within the framework, each tool is considered an engineering service providing functionality to the framework, for example optimisation packages, databases and analysis tools. Second, the specification model contains a representation of the central KBE application: the model generator. The model generator is responsible for the definition and instantiation of a specific product model and is able to generate discipline-specific report files as input for the analysis tools defined in the framework environment.

2.4 Knowledge application development
The fourth phase, knowledge application development, addresses the software development of the actual knowledge application, e.g. the architecture and its constituting tools (model generator, agents, optimisers, product data management, analysis tools etc.) [6]. Due to the framework approach and the modular build-up of knowledge applications, the reuse of the different tools is ensured to a large extent. The tools composing the knowledge application are either already available (commercial off-the-shelf (COTS), or developed and applied during previous applications) or will need to be developed. Developing knowledge applications using dedicated KBE platforms requires the programming of the central model generator: defining (new) design options and constructing configurations within a product family. When exploiting dedicated KBE development platforms, for example Genworks' GDL or the former ICAD from KTI, an object-oriented and functional programming language is used to encode the knowledge. The engineering knowledge is stored in modular software objects, called High Level Primitives (HLPs). The primitives represent different design options and can be created, tailored and assembled to define new product configurations. The object-oriented characteristic allows developers to mirror the decomposition of the product defined by the conceptual model using a network of classes. Besides the conceptual knowledge, object-oriented programming also allows the incorporation of procedural knowledge using so-called facets: specific class attributes that contain procedures (methods and references) that are automatically invoked when the value of the slot is requested or changed during runtime. The specific procedures are derived from the rules in the activity diagram of the conceptual model. The encoding of the primitives and software modules is considered an iterative process. During the development of the application, additional, undiscovered or changed knowledge might be identified, and the associated models from the knowledge base then need adjustment to ensure that they accurately represent the engineering activity as well as the structure and process of the application. Using object-oriented and high-level programming languages, the resulting code volume is considered very low. Furthermore, programming languages with a high level of abstraction require lower entry-level programming skills.
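The facet mechanism described above can be mimicked in most object-oriented languages with computed attributes. The sketch below is an illustrative Python analogue, not GDL code: a property recomputes a derived slot whenever its value is requested, so the associated design rule fires on demand. The connector-flavoured names are hypothetical.

```python
class Connector:
    """Illustrative High-Level-Primitive-style class (hypothetical names)."""

    def __init__(self, total_pins, occupied_pins, max_fill_ratio=0.8):
        self.total_pins = total_pins
        self.occupied_pins = occupied_pins
        self.max_fill_ratio = max_fill_ratio  # settable design requirement

    @property
    def fill_ratio(self):
        # Facet-like slot: evaluated whenever the value is requested,
        # so it always reflects the current state of the inputs.
        return self.occupied_pins / self.total_pins

    @property
    def satisfies_fill_rule(self):
        # Design rule: occupied/available pins must not exceed the maximum.
        return self.fill_ratio <= self.max_fill_ratio

c = Connector(total_pins=150, occupied_pins=90)
print(c.fill_ratio, c.satisfies_fill_rule)  # 0.6 True
c.occupied_pins = 140
print(c.satisfies_fill_rule)  # False: the rule re-fires on demand
```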

2.5 Tool integration and deployment
The fifth phase addresses the integration of the software modules to form the knowledge-based architecture and its components. It includes the development of communication interfaces and the distribution of the application itself. The deliverable is an automated design application, based on the knowledge techniques, offering engineering services. The architecture and tools shall support performance indicators to enable their evaluation with respect to the key performance indicators identified in the first phase of the ADAPT process.

2.6 Business implementation
The last phase concerns the implementation of the knowledge application in the design process. Since the flow of information within a process will change when knowledge technology applications are deployed, a process-wide re-design is needed to prevent the occurrence of bottlenecks creating waste [7]. Configuration management and maintenance are conducted to ensure traceability of the knowledge rules invoked and reproducibility of the resulting solutions. Furthermore, an essential step in the implementation of knowledge technologies is to recognise that they imply an important change in the work of engineers. Therefore, more practical contributors to a successful implementation are the support and training of end-users. Overall, five groups of key success factors for the implementation of KBE applications can be identified [8]:
Can:
• Provide training in the operation of the application
• Provide a useful and usable user manual
• Stimulate users to share best practices in using the application
Want:
• Focus on topics important to the business and engineers
• Communicate the KBE vision, the need for the business, and the results and experiences of users
• Evaluate the usefulness and usability of the application on a regular basis
Have:
• Plan the development of the application in terms of required resources and release date
• Make reservations in the project planning to practice using the application
• Provide support during the lifetime of the application
Must:
• Convince management of the possible payback in terms of lead time and resources
• Have a well-respected engineer promote the use of the application
Measure:
• Application performance in relation to the identified key performance indicators
The last and very important aspect is the monitoring of the performance of the application, using the information supplied by the system, in relation to the identified key performance indicators. Adaptation, cancellation and expansion, where required, should be an integral part of the process.

3 FROM VALUE STREAM MAPPING TO STRUCTURED KNOWLEDGE
In the ADAPT framework, Value Stream Mapping (VSM) is a key tool to gain insight into the local engineering processes. The graphical representation of the engineering activities, as well as of the data, information and knowledge flowing through and generated by those activities, helps the communication about, and the understanding of, the local engineering practice. VSM originated in the manufacturing industry. By applying proper modifications to the original VSM, this tool can also be used to improve product development processes [9]. Where the original VSM looks critically at the flow of material, the modified VSM looks at the transformation and generation of data, information and knowledge as a series of process steps interrupted by waste: the consumption of engineering resources without adding value for the customer. By considering and mapping the current state of product development value streams and identifying waste, VSM defines a more efficient or lean future state, eliminating the waste that interrupts a continuous and even flow of data, information and knowledge. The future-state diagram provides the foundation for a future process and the subsequent action plan to implement it. As opposed to the serial value streams typical of manufacturing, typical product development processes consist of numerous interdependent value-adding activities. This interwoven character makes it difficult to define flow and to identify forms of waste. The key to superior product development is to analyse the complex network of activities into definable 'work streams', or sets of subsequent process steps transforming input into output. The work streams will not only identify the waste of resources in between the diverse streams, they will also pinpoint the waste interrupting the process steps within the individual streams. Numerous distinctions between traditional VSM and product development VSM (PDVSM) are represented in Table 2.

Product development process               | Traditional manufacturing process
Virtual data flow                         | Physical product flow
Weeks and months                          | Seconds through hours
Primarily knowledge-intensive work        | Physical manufacturing
Nonlinear and multidirectional flows      | Linear and serial evolution
Large and diverse group of domain experts | Primarily manufacturing organisation
Table 2: PDVSM versus VSM

Within the ADAPT framework, PDVSM is used to select the engineering practices that will benefit from automation. In addition, it is the first step towards the knowledge acquisition and knowledge structuring phases, which lead to the knowledge base, a crucial product of the ADAPT process. In this knowledge base we will have process maps to formalise the identified processes, trees to formalise products and product families, and taxonomies to formalise the terminology used in the maps and trees. The relations between the products and processes are formalised with ontologies (also named diagrams), Table 3. In this way the fuzzy front-end, which is the not explicitly and consistently defined collective of local engineering activities and their objectives (the processes, products and language which define the local engineering practice), is transformed into a well-defined body of knowledge suitable for further development into KBE applications or Design and Engineering Engines (DEEs) [10]. The re-use of the knowledge in multiple KBE applications or DEEs needs an additional step. Most designers and engineers are not willing to spend most of their time programming, even in a high-level language as normally used in a KBE platform. Therefore, a proper interface language is needed through which the knowledge can be reused. This will be discussed in the next section.

4 DOMAIN SPECIFIC LANGUAGES
In general, products and services are designed through a synthesis of existing and new design options into known or new configurations. The associated design options and processes are described by generic, domain-specific and discipline-specific terminology. In order to encode all design options effectively and correctly into the knowledge application, the representation of the related classes and objects needs to fulfil multiple objectives, also known as knowledge representation roles. Where the conceptual model enables the communication and visualisation between knowledge engineers and the domain experts, the specification model is used as a means of communication for both human expression and computation (the execution of activities by knowledge applications). Especially this latter category requires modifications and explicit specifications (hence the name specification model) of the underlying language in order to suit correct interpretation by virtual machines:
• The specification model should follow programming language syntax
• The model should provide a visual representation for ease of construction
• Rules governing the values of class properties are defined
• Class and object descriptions should be intelligible to humans (experts, knowledge engineers and software developers)
To alleviate the effort involved in the development of knowledge applications, a Domain Specific modelling Language (DSL) is developed, enabling the symbolic representation of products or systems of the problem domain while satisfying the abovementioned requirements. With the help of generic language concepts like the Unified Modelling Language (UML), a DSL is carefully defined to enable a representation of the conceptual classes of the physical world that is meaningful to both humans and intelligent systems. The DSL is considered a visual dictionary of noteworthy abstractions, domain vocabulary and knowledge content of the domain under consideration [11]. In addition to an ontology defining the types of elements that exist, and their relations, within a particular domain, a DSL should contain not only class types but also instances of objects and rules, in order to construct new specification models. These knowledge elements are considered the building blocks for the specification model, like words are to natural languages. During the knowledge acquisition and structuring phases it is important to obtain a thorough and formal description of the different knowledge elements. The structuring of the objects and rules applied during the design processes can benefit from a standardised categorisation. An example of a general categorisation for design rules is shown in Table 4. During knowledge acquisition, a systematic discovery of the rules applied in each of these categories is performed. The subsequent knowledge structuring should prepare for the DSL as the interface towards the formalised knowledge and the re-use of this knowledge (e.g. when building a KBE application or a DEE).

Fuzzy Front-end                             | Knowledge Base                                           | Knowledge Re-use
Processes (engineering practices and rules) | Process maps                                             | KBE applications (object oriented)
Products (design options)                   | Trees                                                    | DEEs / High Level Primitives
Jargon (discipline specific language)       | Taxonomy                                                 | Domain Specific Languages
Product-process relations                   | Diagrams (built from concepts and relations) / Ontology  |
Table 3: Relation between product and process knowledge during different phases


         | Product related                                               | Process related
Internal | Engineering Design (functional, aesthetics etc.); Management  | Engineering; Tool operation (including work-arounds); Tool interfacing; Management
External | Mathematics; Physics; Engineering Design; Law; Market         | Mathematics; Physics; Engineering Design; Tool operation (including work-arounds); Tool interfacing
Table 4: Origins of rules in a design organisation: internal and external, related to organisation boundaries

When applied to the knowledge base using knowledge management tools, the DSL provides domain experts, knowledge engineers and IT specialists with a means of communication to visualise, structure and validate their conceptual ideas. The DSL can be applied to define new product configurations and variations within the product family. Since the syntax of the DSL suits object-oriented programming languages, it enables the application of the same abstractions and vocabulary to define the different software classes underlying the knowledge applications. It can therefore be stated that the DSL increases the insight into the knowledge application and the coherence between the different knowledge application technologies. Combined with dynamic source code generation, the knowledge base can be applied to structure new product configurations using existing or new design options, and to automatically generate the software code representing the associated generative product model for the knowledge application [12].
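As a toy illustration of the DSL idea, and not the actual language developed in this work, the snippet below shows how domain vocabulary can double as executable class definitions, so that a new product variant is declared rather than programmed. All names and sizes are hypothetical, and Python dataclasses stand in for the generated software classes.

```python
from dataclasses import dataclass, field

# Domain vocabulary expressed as classes: the same terms the experts use.
@dataclass
class Pin:
    gauge: int = 20

@dataclass
class Connector:
    name: str
    pins: list = field(default_factory=list)

@dataclass
class ProductionBreak:
    connectors: list = field(default_factory=list)

# Declaring a new configuration reads like domain language, not code:
break1 = ProductionBreak(connectors=[
    Connector("J1", pins=[Pin(gauge=20) for _ in range(150)]),
    Connector("J2", pins=[Pin(gauge=16) for _ in range(100)]),
])
print(sum(len(c.pins) for c in break1.connectors))  # 250 available pins
```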

Figure 2: Connectors applied at a wiring harness

5 KNOWLEDGE APPLICATION CASE STUDY
Following the discussion of the ADAPT process, an example of a knowledge application will now be addressed. The application has been developed by applying the design automation process trajectory together with the DSL.

5.1 Wiring Harness Design Application
Electric aircraft wiring harnesses can comprise hundreds of cables and tens of thousands of wires, providing connectivity between all the mission and vehicle systems while ensuring sufficient redundancy and reliability. Electrical wiring design is often performed in parallel with structural design. Consequently, the wiring harness design is subject to changes in the aircraft structure that occur with subsequent design iterations, requiring time-consuming rework for any harnesses affected. The routing of all wires is determined manually and is strongly dependent on personal knowledge and experience. Besides, the electric wiring design is governed by numerous regulatory and functional design rules. This repetitive, time-consuming and rule-based nature makes aircraft wiring design a key opportunity for the development of knowledge applications. The development of the application is performed in close cooperation with Stork Fokker Elmo, a main international player on the aircraft electric wiring market, regarding both design and manufacturing.

Process Analysis
For the wiring harness design process, one of the key opportunities resulting from the initial VSM involves the pin assignment process. It involves the assignment of electric signals at production breaks, where connectors connect the different wiring harnesses (Figure 2). Each wiring harness connector can include up to 150 slots, called pins, to accommodate a signal. The pins can vary in size, as do the signals to be assigned. For each production break the signals are assigned to a pin and associated connector, one by one, consecutively. This pin assignment process is highly repetitive and time-consuming for several reasons:

• Separation of signals across multiple wiring harness segments or cables is enforced by numerous opposing design rules and regulations, for example redundancy of flight controls, electromagnetic compatibility or heat dissipation of power cables.
• The increasingly vast quantity of signals to be assigned ('processing' waste).
• Rework caused by changes in the input data, for example governed by the design iterations of the aircraft structural design ('correction' waste).
For the development of the application, the dedicated knowledge system GDL from Genworks has been selected. GDL is a new-generation knowledge system that combines the power and flexibility of the former ICAD system with novel web technologies. Its object-oriented programming language is based on standard ANSI Common Lisp and allows the definition of generative product models. Furthermore, ILOG CPLEX has been selected to act as the search engine: this COTS linear programming optimisation tool will analyse the models provided by the generative product model and drive the search process to a feasible and optimal design.

Knowledge Acquisition
The iterative knowledge acquisition process of capturing, structuring and validating the expert knowledge is supported by Epistemics' PCPACK, a software package supporting the process of acquiring, storing and representing knowledge. A separate ontology is developed, specifically built to suit the wiring harness domain. A comprehensive description of the involved engineering activities is defined, together with a conceptual product decomposition of the system. Furthermore, the design rules and best practices guiding the activities are captured, many of which are opposing. Some examples of applicable design rules are:

• The ratio of occupied pins over available pins has a settable maximum (design requirement)
• Signal types should be grouped among connectors to fulfil separation requirements (authority regulations)
• Per connector, signal subtypes should be centred and grouped together (manufacturing requirements)
The informal model functions as a detailed engineering handbook, decreasing the knowledge entry level required to perform the pin assignment process.

Knowledge structuring
During knowledge structuring, a large amount of specific domain knowledge is condensed into a more formal model, reflecting deep insight into the resulting knowledge application. The formal model of the knowledge base provides an architecture layout describing the software framework environment for the application. To that purpose, it takes into account the roles and capabilities of the GDL and CPLEX software tools. Although inheriting the functionality of the original process, the redesigned process might consist of entirely different sub-processes and activities. For example, when the objective is to assign 70 signals across 90 available pins while fulfilling all requirements and incorporating best practices, a human engineer will require a vast amount of time to explore most, if not all, possibilities. Applying the CPLEX optimisation software results in a much more efficient exploration of the solution space, solving the problem concurrently for all signals and thus increasing the reduction in recurring process time. The object classes that constitute the product decomposition represent the generative product model, which will be programmed during the subsequent development phase. The object classes will encompass the design rules and best practices, suiting the object-oriented approach of the GDL knowledge system. Together, these object class definitions form the DSL, representing the functional building blocks called High-Level Primitives (HLPs) [10]. The HLPs can be tailored and assembled, enabling engineers to define new product configurations and new design options. For instance, new connector types or pins with alternative gauges can be defined easily.

Knowledge application development
Once the knowledge structuring of the expert knowledge is finished and the architecture for the application is fully defined, the software modules constituting the application are developed. The application supporting the pin assignment process will consist of two modules:

• A generative product model, called a Multi-Model Generator (MMG), developed using the GDL knowledge system [10].
• A converger and evaluator, represented by the linear programming optimiser CPLEX.
Since CPLEX is a COTS tool, the development focuses on the MMG. The product decomposition, as defined in the formal model of the knowledge base, represents the structure of the software classes. The modular building blocks or HLPs are programmed using the object-oriented programming language. Each object class or HLP defined in the formal model has an equivalent software class. It becomes apparent that the formal model is a diagrammatic representation of the software structure and source code: it makes the code more expressive and clarifies the processes and rules invoked by the knowledge application. Besides the HLPs, the MMG consists of elements called Capability Modules (CMs), which are capable of extracting certain discipline-specific 'views' in order to facilitate the analysis tools. In this particular case, the only discipline involved is mathematics.


The related CM extracts a mathematical model of the connectors composing the production break, defining the supply of pins as well as the demand generated by the signals per separation code. The CM defines the objective function (minimise the number of pins occupied by a signal) and generates all the constraints derived from the applicable design rules. The output is a report file specifying a linear programming problem modelled after the instantiated pin assignment problem. This problem can be analysed and solved efficiently by CPLEX. The development of the KBE software modules is performed iteratively and can be considered domain-driven. After each iteration cycle, the formal model is adjusted to ensure that it accurately represents the structure and process of the knowledge application.

Figure 3: Graphical User Interface for the pin assignment application

Tool integration and deployment
To empower the automation of the repetitive tasks of the pin assignment problem, the framework concept of the Design and Engineering Engine is applied [10]. The DEE integrates the self-contained software modules and provides communication between the modules through the application of software agents [13]. For the pin assignment problem, the resulting framework functions as a stand-alone knowledge application and has not yet been connected to the other corporate engineering software packages. The MMG and the associated agent have been deployed on-site at Fokker Elmo, whereas the CPLEX optimiser is executed remotely, on request, as an engineering service. A Graphical User Interface (GUI) is designed to enable interaction with the engineers. The GUI allows the engineers to specify the input data (problem description) and provides identified best practices as execution options, such as the grouping of signals. The GUI also enables the engineers to manually adapt the solutions suggested by the application through incorporated selection functionality, and it provides different types of output files to accommodate manufacturing as well as design engineers. The GUI is presented in Figure 3 and illustrates the front view of the set of connectors composing the production break of the wiring harness. The different signal types are colour-coded by separation code, to enable easy verification by the engineers.

Business implementation and validation
The implementation of the pin assignment application into the business environment has not yet been performed and is scheduled for next year.
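To make the linear programming formulation concrete, here is a small illustrative sketch of a pin assignment problem. It uses the open-source PuLP modeller rather than the CPLEX/GDL toolchain described above, and the problem sizes, names and the single separation rule are hypothetical.

```python
import pulp

signals = [f"s{i}" for i in range(6)]      # signals to assign
pins = [f"p{j}" for j in range(8)]         # available pins
power = {"s0", "s1"}                       # hypothetical separation class
power_pins = {"p0", "p1", "p2"}            # pins reserved for power signals

prob = pulp.LpProblem("pin_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (signals, pins), cat="Binary")

# Objective mirroring the paper's wording: minimise the number of pins
# occupied by a signal (constant here, since each signal needs one pin,
# but it generalises once richer packing rules are added).
prob += pulp.lpSum(x[s][p] for s in signals for p in pins)

for s in signals:                          # each signal gets exactly one pin
    prob += pulp.lpSum(x[s][p] for p in pins) == 1
for p in pins:                             # each pin holds at most one signal
    prob += pulp.lpSum(x[s][p] for s in signals) <= 1
for s in power:                            # separation rule (illustrative)
    prob += pulp.lpSum(x[s][p] for p in power_pins) == 1

prob.solve()
for s in signals:
    for p in pins:
        if x[s][p].value() == 1:
            print(s, "->", p)
```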

6 CONCLUSIONS
The ADAPT process presented offers a structured approach towards a practical implementation of knowledge management and Knowledge Based Engineering in an engineering environment. It is based on a sequence of well-defined technologies, supplemented with additional tools to complete the chain. The Value Stream Mapping technique, adapted for a product development environment, is well suited to analysing the fuzzy front-end of local engineering communities and prepares well for the subsequent knowledge acquisition and analysis. With the use of Domain Specific Languages the UML concept is extended to form an interface for the re-users of the formalised knowledge. A case study showed that the methodology works and can lead to structured waste elimination and cost savings.

7 RECOMMENDATIONS
The case study presented in this paper addressed mainly the elimination of 'processing' and 'correction' waste associated with individual process steps belonging to the value-adding or core business process. However, ADAPT could also support the elimination of other types of waste. A large part of the waste in the overall business process is likely to occur in between different process steps and is considered to be greater than the waste within single process steps. Typical types of waste occurring in between process steps are 'waiting', 'overproducing' and 'inventory'. Knowledge technologies also enable the partial elimination of these other types of waste. By integrating multiple design automation and Commercial Off-The-Shelf (COTS) tools into a framework structure, communication and data handling ('conveyance' waste) can be controlled in a demand-driven approach, reducing 'waiting', 'inventory' and 'overproducing' waste.

8 ACKNOWLEDGMENTS
The authors would like to express their gratitude to Genworks, Stork Fokker AESP and Fokker Elmo for their support and contributions.

9 REFERENCES
[1] Porter, M., 1985, Competitive Advantage: Creating and Sustaining Superior Performance, Free Press, New York, NY.
[2] Wernerfelt, B., 1984, The Resource-Based View of the Firm, Strategic Management Journal, vol. 5, No. 2: 171-180.
[3] Barney, J., 1991, Firm Resources and Sustained Competitive Advantage, Journal of Management, vol. 17, No. 1: 99-120.
[4] Collison, C., Parcell, G., 2001, Learning to Fly: Practical Knowledge Management from Leading and Learning Organisations, Capstone Publishing Ltd., Chichester, United Kingdom.
[5] Morgan, J., Liker, J., 2006, The Toyota Product Development System, Productivity Press, New York, NY.
[6] Van der Elst, S., Van Tooren, M., Vermeulen, B., Emberey, C., Milton, N., 2008, Application of a Knowledge Based Design Methodology to Support Fuselage Panel Design, Aircraft Structural Design Conference, Liverpool, UK.
[7] Vermeulen, B., 2007, Knowledge Based Method for Solving Complexity in Design Problems, Delft University of Technology, Delft, The Netherlands.
[8] Van der Spek, R., Kelleher, M., Knowledge Management: Reducing the Costs of Ignorance, www.dnv.com/services/consulting/knowledge_management/Publications
[9] Morgan, J., 2002, High Performance Product Development: A Systems Approach to a Lean Product Development Process, The University of Michigan, Ann Arbor, MI.
[10] La Rocca, G., van Tooren, M., Enabling Distributed Multi-disciplinary Design of Complex Products: a Knowledge Based Engineering Approach, J. Design Research, vol. 5, No. 3: 333-352.
[11] Evans, E., 2004, Domain-Driven Design, Addison Wesley, Boston, MA.
[12] Van der Elst, S., Van Tooren, M., 2008, Domain Specific Modelling Languages to Support Model-Driven Engineering of Aircraft Systems, 26th Congress of the International Council of the Aeronautical Sciences, Anchorage, AK.
[13] Berends, J., van Tooren, M., Schut, E., 2008, Design and Implementation of a New Generation Multi-Agent Task Environment Framework, 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Schaumburg, IL.


Modular product design and customization

J. Pandremenos, G. Chryssolouris*
Laboratory for Manufacturing Systems & Automation, Dept. of Mechanical Engineering & Aeronautics, University of Patras, 26500, Greece
[email protected]

Abstract
The paper deals with modular design architecture and its capabilities for easy and fast customization of products. The basic principles of modularity are outlined and the best-known methodologies and tools for modular product design are reported. The way modularity can facilitate a product's customization is addressed. With the help of a real modular test case, a motorcycle helmet, the capabilities of modularity in customization are illustrated and a customization procedure is described. The conclusions drawn in the final section of the paper indicate that, although many modular design methods exist, none of them is capable of providing the optimum design solution. However, all of these methods undeniably facilitate the customization of products.

Keywords: Modularity, customization, design

1 INTRODUCTION
In today's market of increasingly demanding customers, companies are compelled to focus on smaller and more specific market segments of customer-oriented products. The era of so-called "mass customization" is now emerging (Figure 1). The main requirement a product should meet in order to be customizable is to keep its sensitivity to change at the lowest possible level: the lower its sensitivity, the higher its flexibility [1]. Therefore, flexible design architectures are utilized, so as to enable easy, low-cost and fast changes in the product. Modularity is such an architecture, and it is considered the most effective means of achieving these demands.

Figure 1: From mass production to mass customization [2].

In the next sections of the paper, the principles of modularity and the basic modular design methods are first described. Moreover, the way modularity facilitates customization is addressed. The main purpose of this work is for some of the existing methods to be applied to an existing case study, a modular motorcycle helmet, in order for the modularity of this product to be demonstrated, together with the ease by which it can be customized according to the customer's needs. In the last section, conclusions are drawn and discussed.

2 MODULARITY AND CUSTOMIZATION
Ulrich [3] first distinguished two main architectures in product design: the integral and the modular one. In an integral architecture, the components of a product are designed to be assigned to more than one function, and the interfaces among them are coupled. On the contrary, in a modular architecture, a one-to-one mapping exists between functions and parts, and uncoupled interfaces are specified. In recent years, however, modularity or integrality is no longer considered a binary characteristic: products may present varying degrees of one or the other architecture [4].

2.1 Design for modularity
A number of methods and tools leading to modular design have been developed over the last years. Hereafter, the methods most widely used by design engineers are described.

Design Structure Matrix
The Design Structure Matrix (DSM) is used for the better representation of a system's element structure. Through this visualization facility, a designer has the ability to better control the modularity of the product with regard to the interface complexity.

Holtta-Otto and de Weck [4] describe the interface graphs and DSMs of a fully "integral", a "bus-modular" and a fully "modular" system of seven components (Table 1). In the fully integral architecture every component interfaces with every other one, so all off-diagonal entries of the DSM are 1. In the bus-modular architecture one component acts as a bus to which all others attach, so only the first row and first column contain 1s. In the fully modular architecture each component interfaces only with its neighbours in a chain, giving a tridiagonal DSM.

Table 1: DSMs for different interface architectures [4].

These matrices are in general binary and square, and list the system's elements both down the side (as row headings) and across the top (as column headings). If a link exists from node i to node j, the value of the ij element is unity or marked with an X; otherwise the element value is either zero or it is left empty. Finally, the diagonal elements of such matrices usually have zero value or are left empty as well, since they play no role within the matrix [5].
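As an illustration of Table 1 (our own sketch, not part of the original paper; the function names are invented for this example), the three seven-component architectures can be generated and compared as binary adjacency matrices in a few lines of Python:

def integral_dsm(n: int) -> list[list[int]]:
    """Fully integral: every component interfaces with every other one."""
    return [[0 if i == j else 1 for j in range(n)] for i in range(n)]

def bus_dsm(n: int) -> list[list[int]]:
    """Bus-modular: component 0 acts as the bus; all others attach only to it."""
    return [[1 if (i == 0) != (j == 0) else 0 for j in range(n)] for i in range(n)]

def chain_dsm(n: int) -> list[list[int]]:
    """Fully modular (chain): each component interfaces only with its neighbours."""
    return [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

def interface_count(dsm: list[list[int]]) -> int:
    """Number of undirected interfaces (each link is stored symmetrically)."""
    return sum(sum(row) for row in dsm) // 2

for name, dsm in [("integral", integral_dsm(7)),
                  ("bus-modular", bus_dsm(7)),
                  ("fully modular", chain_dsm(7))]:
    print(f"{name}: {interface_count(dsm)} interfaces")
# -> integral: 21, bus-modular: 6, fully modular: 6

The interface count makes the qualitative difference between the architectures immediately visible: the integral system couples every pair of components, while the bus and chain variants need only a minimal set of interfaces.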

Axiomatic Design Theory

Axiomatic Design Theory (ADT), developed by Suh [6], is a method that transforms Customer Attributes (CAs) into Functional Requirements (FRs), and subsequently into Design Parameters (DPs) and Process Variables (PVs). The design becomes an interplay between the functional domain (FRs) and the physical domain (DPs). More than one result may arise from this procedure. The interrelation among domains can be better demonstrated with the help of a Design Matrix (DM). Using vector notation for the FRs and DPs (the same applies to the pairs of the other domains), the relationship can be expressed in an equation of the following type:

FR = A · DP    (1)

where A is the DM.

ADT is governed by two axioms:

Axiom 1 - The Independence Axiom: all FRs should remain independent throughout the design.
Axiom 2 - The Information Axiom: the information content of the design should always be kept at a minimum level.

In order to control whether Axiom 1 is satisfied, the DM is utilized. If the DM is diagonal, Axiom 1 holds; this case corresponds to an uncoupled design. If the matrix is triangular, the design is decoupled and may, under certain circumstances, satisfy Axiom 1. In all other cases the design is coupled, and each function is affected by more than one design decision. Axiom 2 is utilized when two or more designs fulfil Axiom 1: by measuring the information content of each design, the one that "carries" the minimum amount of information is selected.

[FR1]   [x 0 0][DP1]        [FR1]   [x 0 0][DP1]        [FR1]   [x x x][DP1]
[FR2] = [0 x 0][DP2]        [FR2] = [x x 0][DP2]        [FR2] = [x x x][DP2]
[FR3]   [0 0 x][DP3]        [FR3]   [x x x][DP3]        [FR3]   [x x x][DP3]

Uncoupled design            Decoupled design            Coupled design

Figure 2: FR-DP relationship according to the design matrix [6].

A correlation between ADT and modular design has been performed in [7]. According to this, an integral architecture can be compared with a coupled DM, while a modular architecture can be modeled with an uncoupled one. Finally, a design characterized by an intermediate architecture, between integral and modular, may be represented with a "semi-coupled" design matrix, where some of its entries are non-zero and thus cause coupling in the design.
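The diagonal/triangular test described above is easy to mechanize. The following minimal Python sketch (ours, not from Suh's text; the function name and the list-of-lists representation with non-zero entries standing for "x" are illustrative assumptions) classifies a design matrix accordingly:

def classify_design_matrix(A: list[list[float]]) -> str:
    n = len(A)
    off_diag = [(i, j) for i in range(n) for j in range(n)
                if i != j and A[i][j] != 0]
    if not off_diag:
        return "uncoupled"          # diagonal DM: Axiom 1 satisfied
    if all(i > j for i, j in off_diag) or all(i < j for i, j in off_diag):
        return "decoupled"          # triangular DM: may satisfy Axiom 1
    return "coupled"                # anything else: functions interact

print(classify_design_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # uncoupled
print(classify_design_matrix([[1, 0, 0], [1, 1, 0], [1, 1, 1]]))  # decoupled
print(classify_design_matrix([[1, 1, 1], [1, 1, 1], [1, 1, 1]]))  # coupled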


Modular Function Deployment

Modular Function Deployment (MFD) is also a method performing functional decomposition; here, however, the mapping takes place between module drivers and functions.

MFD comprises five main steps:

1. Define customer requirements. In this initial step, the characteristics of the product are defined, based on competition analysis and customer requirements.
2. Select technical solutions. The FRs meeting the above demands are specified. These requirements are afterwards transformed into technical solutions.
3. Generate concepts. This is the basic step of MFD, where the modules of the product emerge after the analysis of the technical solutions. The analysis is performed using twelve modularity drivers as criteria (carry-over, technology evolution, planned design changes, different specification, styling, common unit etc.).
4. Evaluate concepts. In this step, the interface relations between the modules are determined. Additionally, an economic evaluation of the modular concepts takes place.
5. Improve each module. The final step of the method includes the definition of the modules' specifications (technical information, cost targets etc.). Based on these specifications, the detailed design and optimization of each module may take place.

Finally, MFD also indicates the ideal number of modules within a product, as the square root of the number of assembly operations in the average product. Furthermore, the interface design is also addressed, taking into consideration parameters such as the fixation method, the number of contact interfaces and the information exchange between modules (material flow, energy, signals etc.) [8].

Although several design methods leading to modular architecture exist, as shown by Holtta and Salonen [9], each of them gives different results with identical input. This happens due to the different perception and application fields of each method.
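MFD's rule of thumb for the ideal module count translates directly into code; a minimal sketch (ours), assuming only that the number of assembly operations is available as a count:

import math

def ideal_module_count(assembly_operations: int) -> int:
    """MFD rule of thumb: ideal module count is roughly the square root
    of the number of assembly operations in the average product."""
    return round(math.sqrt(assembly_operations))

print(ideal_module_count(36))  # -> 6 modules for a 36-operation product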

2.2 Product customization

In order to perform product customization at low cost, with high quality and at the same time with large-volume delivery, two basic requirements have to be fulfilled. Firstly, technologies capable of performing the customization are needed; Reverse Engineering, advanced CAD techniques, Information Technology and non-conventional manufacturing methods are some of these technologies. At the same time, the product's design complexity should be kept at a minimum level. This is accomplished by making the design modular, both as far as the mapping of functions and the structuring of interfaces are concerned. A number of effective customized products already exist, covering a wide range of industrial sectors, from cars and computers to software, toys, shoes and many others. Such examples are reported in [10] - [14].

3 TEST CASE: CUSTOMIZING A MOTORCYCLE HELMET

3.1 Helmet's design

The main parts of a motorcycle helmet are shown in Figure 3.

Figure 3: Motorcycle helmet's basic parts.

In order to better illustrate the design architecture of the helmet, both in terms of functional decomposition and interface complexity, a DM with ADT and a DSM were formulated respectively.

Design Matrix formulation

The following main FRs and DPs are defined for the helmet:

FR1: Prevent penetration — DP1: Shell
FR2: Absorb energy — DP2: Expanded Polystyrene (EPS) foam liner
FR3: Provide comfort — DP3: Padding
FR4: Protect face / provide visibility — DP4: Face shield

Table 2: FRs-DPs for motorcycle helmet.

The relation between these FRs and DPs is described by the following design matrix:

[FR1]   [A11  0    0    0  ][DP1]
[FR2] = [0    A22  0    0  ][DP2]
[FR3]   [0    0    A33  0  ][DP3]
[FR4]   [0    0    0    A44][DP4]    (2)

Design Structure Matrix formulation

The graph of the interface structure of the helmet's components, as well as its corresponding DSM, are represented in Table 3.


Graph: DP4 – DP1 – DP2 – DP3 (face shield – shell – EPS foam liner – padding, a simple chain)

DSM (over DP4, DP1, DP2, DP3):

       DP4  DP1  DP2  DP3
DP4  [  0    1    0    0 ]
DP1  [  1    0    1    0 ]
DP2  [  0    1    0    1 ]
DP3  [  0    0    1    0 ]

Table 3: Helmet's components interface structure.

Design evaluation

By analyzing matrix (2), an absolute one-to-one mapping that gives a completely uncoupled design may be observed: this can be considered a modular design in terms of functional decomposition. Furthermore, Table 3 reveals a fully modular interface structure as well.
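This evaluation can be checked mechanically by reusing the helper functions from the sketches in section 2.1 (again our own illustration; the non-zero diagonal entries simply stand in for A11-A44):

helmet_dm = [[1, 0, 0, 0],
             [0, 1, 0, 0],
             [0, 0, 1, 0],
             [0, 0, 0, 1]]
print(classify_design_matrix(helmet_dm))   # -> uncoupled, i.e. modular

# DSM over (DP4, DP1, DP2, DP3): face shield - shell - EPS liner - padding
helmet_dsm = [[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]]
print(helmet_dsm == chain_dsm(4))          # -> True: a fully modular chain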

3.2 Helmet's customization

A generic customization procedure for products has been created in [15]. A motorcycle helmet is one of the test cases of the project upon which this procedure was developed. This product was selected since studies performed within the project showed that 15-20% of all full-face composite helmets were ill-fitting and that 5% of motorcyclists could not find helmets to fit their head geometry. Moreover, as the role of the EPS foam liner is to bring the head to a gentle stop, it is obvious that the smaller the gap between the head and the liner, the less serious the injury. This gap is illustrated in Figure 4.

Figure 4: Gap illustration between head and liner.

In section 3.1, the helmet's design was defined as being completely modular. This means that a change in "DP2 – EPS foam liner" would not affect the functionality of the other parts. Additionally, the interface between the liner and the shell is not altered in the customization procedure. Therefore, in order to minimize the gap and thus maximize safety and comfort, the internal geometry of the EPS liner should be customized according to the rider's geometrical and non-geometrical features. The non-geometrical requirements define the interaction in the zones of contact, such as the pressure distribution between the product and the rider's head and the level of comfort felt by the rider.

The customization procedure developed consists of five steps:

1. Capturing geometrical data. Scanned data from the rider's head are gathered with the help of a 3D body scanner.
2. Capturing non-geometrical data. These data include information about the pressure between the helmet and the user's head. For this reason, a customized recording system for static pressures has been developed so as to generate a pressure map.
3. Designing the Custom-Fit inner liner. Geometric customer data, non-geometric data and the existing helmet geometry into which the liner has to be integrated are required in the design phase. For this purpose, a design system was developed within the project that executes the required sequence of operations fully automatically.
4. Developing the manufacturing process for the inner liner. Rapid Manufacturing (RM) was selected for the manufacturing of the customized liner, due to the unique ability of this technology to build any shape that might be required. An RM system capable of mimicking the material and properties of the EPS foam has been developed within the project.
5. Manufacturing and assembling the inner liner. The STL files are generated by the design system and imported into the RM machine. The inner liner is produced with a straightforward honeycomb structure, which is a good compromise between mimicking the polyurethane foam and being cost-effective in design and manufacturing (Figure 5) [16].

Figure 5: Customized helmet's liner.
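As a rough illustration of the gap-minimization idea behind steps 1-3 (a minimal sketch under our own assumptions; the actual Custom-Fit design system is far more elaborate and its interfaces are not described in the paper), the per-point head-to-liner gap could be estimated from two point clouds as a brute-force nearest-neighbour distance:

import numpy as np

def gap_map(head_points: np.ndarray, liner_points: np.ndarray) -> np.ndarray:
    """For each scanned head point, distance to the closest liner point."""
    diffs = head_points[:, None, :] - liner_points[None, :, :]   # (H, L, 3)
    return np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)          # (H,)

head = np.random.rand(500, 3)    # stand-in for 3D body-scanner data
liner = np.random.rand(800, 3)   # stand-in for the liner's inner surface
gaps = gap_map(head, liner)
print(f"max gap: {gaps.max():.3f}, mean gap: {gaps.mean():.3f}")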

4 CONCLUSIONS

Many modular design methods exist that may be useful to designers. None of these methods leads to an optimum design solution, as they all examine the design from a different perspective. However, all of them are undoubtedly capable of facilitating the customization of a product. With the help of a test case, the paper demonstrated how a modular product can easily be customized, and a customization procedure was proposed. Since the current trend is towards products with combined design architectures, so that the benefits each architecture provides can be exploited, special attention should be given when assigning functions and interface structures to parts/modules, so that couplings are avoided in the parts to be customized.


5 ACKNOWLEDGEMENT

The work was partially supported by the EU-funded project within the 6th Framework Programme: "Custom-Fit" – "A knowledge-based manufacturing system, established by integrating Rapid Manufacturing, IST and Material Science to improve the Quality of Life of European Citizens through Custom-Fit Products" (contract number NMP2-CT-2004-507437).

6 REFERENCES

[1] Chryssolouris, G., 2006, Manufacturing Systems: Theory and Practice, 2nd edition, Springer, New York.
[2] Ramani, K., Cunningham, R., Devanathan, S., Subramaniam, J. and Patwardhan, H., 2004, Technology Review of Mass Customization, International Conference on Economic, Technical and Organisational Aspects of Product Configuration Systems, Copenhagen, Denmark, June 2004.
[3] Ulrich, K., 1995, The role of product architecture in the manufacturing firm, Research Policy, 24: 419-440.
[4] Holtta-Otto, K. and de Weck, O., 2007, Degree of Modularity in Engineering Systems and Products with Technical and Business Constraints, Concurrent Engineering, 15(2): 113-126.
[5] Ulrich, K.T. and Eppinger, S.D., 2000, Product Design and Development, 2nd edition, McGraw-Hill, New York.
[6] Suh, N.P., 2001, Axiomatic Design: Advances and Applications, Oxford University Press, New York.
[7] Pandremenos, J., Dentsoras, A., Chatzikomis, C. and Chryssolouris, G., 2008, Integral and modular vehicle design: a comparative study, Proceedings of the 2nd CIRP Conference on Assembly Technologies and Systems (CATS 2008), Toronto, Canada, September 2008.
[8] Ericsson, A. and Erixon, G., 1999, Controlling Design Variants, Society of Manufacturing Engineers, Dearborn, Michigan.
[9] Holtta, K. and Salonen, M., 2003, Comparing three modularity methods, Proceedings of ASME Design Engineering Technical Conferences, Chicago, IL, September 2003.
[10] Pandremenos, J., Paralikas, J., Salonitis, K. and Chryssolouris, G., 2009, Modularity concepts for the automotive industry: A critical review, CIRP Journal of Manufacturing Science and Technology, 1(3): 148-152, doi:10.1016/j.cirpj.2008.
[11] Duray, R., Ward, P.T., Milligan, G.W. and Berry, W.L., 2000, Approaches to mass customization: Configurations and empirical validation, Journal of Operations Management, 18(6): 605-625.
[12] Salvador, F., Forza, C. and Rungtusanatham, M., 2002, Modularity, product variety, production volume, and component sourcing: Theorizing beyond generic prescriptions, Journal of Operations Management, 20(5): 549-575.
[13] Berman, B., 2002, Should your firm adopt a mass customization strategy?, Business Horizons, July-August: 51-60.
[14] Moser, K., Müller, M. and Piller, F.T., 2006, Transforming mass customisation from a marketing instrument to a sustainable business model at Adidas, Int. J. Mass Customisation, 1(4): 463-479.
[15] Integrated Project of the 6th EU Framework Programme: "Custom-Fit – A knowledge-based manufacturing system, established by integrating Rapid Manufacturing, IST and Material Science to improve the Quality of Life of European Citizens through Custom-Fit Products" (contract number NMP2-CT-2004-507437), http://www.custom-fit.org
[16] Pandremenos, J., Paralikas, J., Chryssolouris, G., Dybala, B. and Gunnink, J.W., 2008, RM product development: design principles, simulation and tool, Proceedings of the International Conference on Additive Technologies (ICAT 2008), Ptuj, Slovenia, September 2008.

A Criteria-based Measure of Similarity between Product Functionalities

D. P. Politze1, S. Dierssen2
1 Daimler AG, Research and Development, Böblingen, Germany, [email protected]
2 Swiss Federal Institute of Technology, Zürich, Switzerland, [email protected]

Abstract
Today's customers request product functions, not components. A specific, modular description of product functions and of how they are realized is becoming widely accepted as a way to track how a product function is realized and to support future development. This results in an additional, modular product structure from a functional viewpoint that is orthogonal to the physical product structure. Because the extent of each functional module has to be defined according to some kind of similarity between product functionalities, this article introduces a corresponding concept and presents an approach for assessing it based on defined criteria.

Keywords: Product Structuring, Design Units, Function Oriented Product Descriptions, Functional Model, Similarity Criteria, Function Module Driver, Modularization

1 INTRODUCTION

The growth of system complexity in the automotive industry is mainly driven by the fact that product functions are more and more realized by the combination of mechanical, electric/electronic and software components. Many people have argued that understanding and integrating the user's requirements in the development process may be a feasible solution. Subsequently, the work of Houdek [1] and Heumesser et al. [2] describes the need and challenges for mature product specifications and raises the question of an adequate description approach. Based on that, Allmann [3] suggests an additional abstraction layer for specifying customer-related product functions, which can be compared to the level of abstraction of a user manual.

In this paper, a product function is understood as a label for "to do something". The only purpose of a product function is to serve humans or computer systems in such a way that they can communicate by referring to the same concept. Every product function is realized by a specific solution, which in turn comprises a specific functionality that is seen as the behaviour, or "what is actually happening". It is further assumed that customers demand product functions and think of them when buying a product or complaining about it. Therefore, one can infer that the quality of a product is also perceived through its functionality. That means that, in order to provide high-quality product functions, it is not sufficient to bring mechanical, electric/electronic and software components to perfection separately. Hence, the components' interactions and their contributions to the product functions become important and should be considered very early in the design process. For this reason, product functions and the way they are realized shall be captured and described, resulting in an explicit function oriented product description (FOPD), which allows reuse and improvement of existing descriptions and thus evolves into a mature function oriented product specification that can be used


for future design projects. In addition, many industrial enterprises act as integrators for development tasks that are carried out around the world, which in turn requires a detailed description of the intended product functionality [4]. In particular, this is the case when the development is done by an external supplier. Unfortunately, in the domain of complex products with high variety - which is the case in the automotive industry - the FOPD becomes very extensive and difficult to use. This is mainly driven by the high number of existing product functions, the increasing share of software and the fact that the description of a product function may differ between product variants. Thus, subsequent development steps are confronted with an unmanageable and highly complex specification.

In this context, modularization is often used as a decomposition technique to define a product structure consisting of smaller design units or subsystems in order to master the complexity. Most of the time this is seen as equivalent to defining the physical part structure of a product, but this is only one application of modularization. According to [5], the structure depends on the viewpoint, and many viewpoints exist for the same product. Particularly for the development of highly complex mechatronic products, the function-oriented structure of a product is seen as equally essential as the part structure [6]. Thus, this article subscribes to a very abstract definition of modularization, given as the "partition of a system into a set of parts (modules) connected in some way with each other" [7]. As depicted in Figure 1, such a structure may be defined by an encapsulation of functionality descriptions that are grouped together according to some kind of similarity which addresses certain aspects. Unfortunately, similarity between descriptions of product functionalities is not clearly defined. Thus, this article presents a criteria-based definition of similarity that can be used for modularization and thus for deriving a function-oriented view on a product.

Therefore, in the next section a model is given that is appropriate for describing product functionalities in a formal way and enables tool support for creating, managing and using a FOPD. In section 3, the understanding of similarity is explained and a corresponding measure is defined. Based on that, section 4 presents criteria that are appropriate for our kind of modularization task. Section 5 gives a short and simple example of how to use the FOPD model and the similarity measure. Finally, section 6 refers to related literature before section 7 concludes.

[Description of Product Functionalities (FOPD): from observation or specification; describes technical solutions]
-> [Functional Modularization: based on similarities; pays respect to certain criteria]
-> [(Function-Oriented) Product Structure: functional design units and their interfaces]

Figure 1: The functional modularization task based on the description of product functionalities.

2 MODELING OF PRODUCT FUNCTIONALITIES

In this section a formal model is given that is suitable for building up a FOPD. This means it is appropriate for describing or specifying functionalities that constitute solutions for product functions of highly variant and highly complex mechatronic products, such as automobiles. The model presented here allows device-centric functional modeling, as described by Pahl and Beitz [8], and further provides a solution for the modeling of activities, sequences, preconditions and variety, as identified in [9].

As the product functionality may be realized by a composition of subfunctions, decomposition is also part of the model. More specifically, the paradigm of Pahl and Beitz, where a hierarchy of functions works on flows, is applied. Based on that, a function is assumed to have inputs and outputs, which are defined separately in this model. By describing the corresponding input descriptors and output descriptors explicitly, the traditional distinction between function and flow is reinforced, and the integration of variety aspects is enabled. Thus, a product function may be described as working with different sets of flows, depending on specific variability aspects. It should also be noted that the provided model is meant to be used recursively, which means that every function object in the model can represent a solution for another product function.

[Figure 2 shows three function boxes ("Function 1", "Function 2", "Function 3") connected by flows; its legend distinguishes input descriptors that stop a function, input descriptors that start a function, plain input descriptors, output descriptors, and different types of flows. "Function 2" is ordered before "Function 3".]

Figure 2: Modeling of functionalities.

Figure 2 shows the basic idea of a functional model for product functions as described in [9]. It shows different types of flows and functions, depicted as lines and large white boxes respectively. For every outgoing flow there exists an output descriptor, represented by a small white ellipse; similarly, there is an input descriptor for every incoming flow. Depending on the type of associated activity, an input descriptor is represented as a small coloured box: in this example, grey and black are used, meaning that a flow starts or stops a function respectively. Additionally, there is the possibility to define a chronological order between two activities. Thus, from the figure it can be inferred that "function 1" first stops "function 2" and then starts "function 3". At this point it should be clear how "Activities" and "Sequences" may be represented with this model. Furthermore, the model allows preconditions to be defined and variety information to be handled, which is done with special attributes for functions and descriptors.
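To make the model tangible, the following sketch shows one possible in-code representation of its elements (all class and attribute names are our own assumptions; the paper defines the concepts - functions, flows, start/stop input descriptors, sequences, variety attributes - but no concrete data structures):

from dataclasses import dataclass, field
from enum import Enum

class Activity(Enum):
    START = "starts the function"
    STOP = "stops the function"

@dataclass
class InputDescriptor:
    flow: str                      # e.g. "electrical signal"
    activity: Activity
    variants: set[str] = field(default_factory=set)   # variety information

@dataclass
class Function:
    name: str
    inputs: list[InputDescriptor] = field(default_factory=list)
    outputs: list[str] = field(default_factory=list)  # output descriptors
    subfunctions: list["Function"] = field(default_factory=list)  # recursion
    before: list["Function"] = field(default_factory=list)  # sequencing

# "function 1" emits a flow that first stops "function 2", then starts "function 3"
f2 = Function("function 2", inputs=[InputDescriptor("signal", Activity.STOP)])
f3 = Function("function 3", inputs=[InputDescriptor("signal", Activity.START)])
f1 = Function("function 1", outputs=["signal"])
f2.before.append(f3)   # chronological order: stopping f2 happens before starting f3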

3 DEFINING AND MEASURING SIMILARITY

As stated in the introduction, there is a need for a measure of similarity that can be used for modularization and for deriving appropriate design units. In this section a mathematical measure is presented, using the notation introduced in Figure 3. Since two objects may be similar regarding different aspects (e.g. shape, size or colour), this article states that a measure of similarity between two product functionalities has to depend on a number of different criteria. Because these criteria may have a different importance, an adequate weighting is also needed.

fi, fj — product functionalities
s(fi, fj) — similarity between two product functionalities
n — number of different criteria
wk — weighting for criterion k
Ik(fi, fj) — indication of similarity between two product functionalities regarding criterion k
r — a positive integer

Figure 3: Definition of variables.

Similarity can then be assessed with respect to a certain aspect and quantified by an indicator function that indicates differences by returning a value between 0 (no difference) and 1 (totally different). All of the above can be consolidated in a single expression, which is a modified version of the weighted Minkowski metric and delivers a value for similarity. A higher value means a higher degree of similarity. An additive approach is preferred to a multiplicative one, because the latter would consider two functionalities as not similar whenever there is a total difference regarding just a single criterion.

s(fi, fj) = [ Σ_{k=1}^{n} wk · (1 − Ik(fi, fj))^r ]^{1/r}    (1)

Based on the formula given in (1), similarity between two product functionalities is defined as a subjective weighting of correspondences regarding given criteria. In this context it is very important to note that similarity is bound to a specific context, e.g. an application or a point of view, which is expressed by the selection of criteria. Since the application in this article focuses on modularization in order to derive a product structure from a function-oriented point of view, corresponding criteria are presented in the next section.

4 DRIVERS FOR SIMILARITIES

In order to determine the right criteria for measuring similarity within this context, a series of expert interviews was conducted at Daimler. Each of the 15 interviews with experts from the development department took about two hours and was audiotaped for later analysis. From these, individual aspects were analyzed and extracted, which were then aggregated into main criteria, or main reasons, for the function-oriented modularization task. Similar to the findings of Erixon, 8 main criteria, further referred to as Function Module Drivers, have been identified. These Drivers comprise the individual aspects for the intended purpose and serve as a basis for the assessment and measuring of similarity. Whereas the Module Drivers of Erixon [10] focus on establishing and justifying a physical part structure, the Function Module Drivers aim at a function-oriented product structure, as distinguished in the introduction of this paper. In order to better recognize the overlaps, an analogous naming of the Drivers has been chosen. The Drivers and the individual aspects that were found in the interview series are given in Table 1. In this article, a corresponding question is provided for each of the individual aspects in order to improve comprehensiveness and practicability.

In the following, the Function Module Drivers and some of the individual aspects are explained in detail. It is important to note that some of the aspects are rather subjective than objective. Comparable to Erixon, a carry-over refers to functionalities, or parts of their realization, that are re-used in different products, product generations or in several places in one product. There are also similarities in terms of a carry-over when a functionality uses the same components as another. The technical evolution designates functionalities that are bound to a certain technology that might be replaced in the future. In this context it is also likely that the old and the new technology will coexist in different products. Based upon the technical solution of a functionality, there may exist functionalities that result in the same effect but are realized in alternative ways, or functionalities that are complementary to others. The variety in the realization of a product functionality also provides a reason why such functionalities could be grouped into a function module. Further evidence of being


alternative technical solutions may be found by comparing the functions according to Pahl and Beitz [8]: whenever two functionalities fulfill the same abstract flow-based transformation function, they might also be grouped together.

Another driver that is similar to Erixon's findings is called common unit and refers to functionalities that are similar in the sense that they are used commonly, or as standard, in all product variants. Furthermore, functionalities that may be seen as infrastructure (e.g. providing power) are also addressed by this driver. Very often such functionalities have many dependencies and thus interfaces with others. Besides, functionalities that have no dependencies at all may also be grouped together, since they have isolation in common.

The driver labelled process focuses on functionalities that have to be treated specially in certain business processes. For example, one could group together all functionalities that are ready at a certain point in the assembly process, or all functionalities that may be tested automatically. This driver also addresses functionalities that are very important, critical or very time-consuming in the processes.

Carry-over
- Is a functionality also used by other products?
- Is a functionality used in different places within one product?
- Is a functionality realized by the same actuators, sensors, control units?

Technical evolution
- Is a functionality bound to a certain technology?
- Is an underlying technology known to be replaced soon?
- Is there evidence that two or more underlying technologies coexist?

Technical solution
- Is there an alternative solution/realization for a functionality?
- Is a functionality fulfilling the same abstract transformation function?
- Is there a complementary functionality (e.g. one that reverses another functionality)?

Common unit
- Is a functionality used in all markets, products and product variants?
- Is a functionality used by many other functionalities (interfaces)?
- Is a functionality not used at all?

Process
- Is a functionality very important, critical or costly/time-consuming?
- Does a functionality need special treatment (i.e. testing)?

User perception
- Is a functionality involved in creating a desired user experience?
- Is a functionality executed at the same time (or within a defined interval) as others?
- Is a functionality executed permanently or just once in a while?
- Is a functionality a reaction to / an indicator for another functionality?
- Does a functionality lead to a state that enables other functionalities?
- Does a functionality belong to a sequence that is desired by a user?

User intention
- Is a functionality evoked similarly to others (e.g. by the same button)?
- Is a functionality part of a scenario or use case of a customer?
- Is there an alternative functionality that fulfills the user's intention?

Company strategy
- Is a functionality provided by an external supplier (e.g. as software)?
- Does a functionality require high communication/coordination effort?
- Does a functionality need special attention (i.e. importance, piloting)?
- Is there a need to describe a functionality in more detail than others?
- Is a functionality too complex and its description unmanageable?

Table 1: Function Module Drivers and their individual aspects.

Besides the drivers that have some overlap with the findings of Erixon, there exist additional drivers that had not been identified before and are specific to the function-oriented viewpoint. The user perception driver aims at grouping together those functionalities that are perceived within a certain time interval or as a causal reaction to something (e.g. when pressing the button for unlocking, the car unlocks, the blinker flashes two times and a sound can be heard). The similarity lies in the fact that certain functionalities contribute to a desired user experience. Also, functionalities that are bound to the evocation of another functionality shall be grouped together according to this driver, since these functionalities may influence the perception and behaviour of a user.

In addition, user intention refers to functionalities that fulfill a customer's goal or constitute a scenario that is demanded by a user. An example could be a function module consisting of all functionalities that play a role when a customer wants to put his shopping bags in the trunk. Therefore, it should be assessed how functionalities are evoked by a user, because this implies similarities regarding the user's intention and the desired effect. Hence, a grouping can be done according to evocation, scenario or use case descriptions, and by finding alternative functionalities that lead to a desired effect.

Finally, the company strategy is an important driver for deriving a function-oriented product structure, although it is a very subjective one. It allows a grouping of functionalities that constitute special cases, such as novelties, functionalities that need a separate responsibility, functionalities that exist in the context of a pilot, or functionalities that are important in any other sense. Also, functionalities that are provided by a dedicated external supplier may be determined and grouped according to this driver. Furthermore, this driver decides whether a functionality or an existing grouping shall be split in order to maintain manageability and readability. Finally, this driver also allows grouping functionalities with many dependencies when the cost of communication and coordination becomes too high.

5 EXAMPLE

This section provides a short example of how to use the formal descriptions, the formula and the drivers to determine similarity. To keep the example simple, it is restricted to only three very simple functionalities F1, F2 and F3, and only the drivers common unit and user perception are taken into account. Regarding the formula given in (1), the value for r has been chosen as 1; furthermore, a weighting of 1 and the existence of only two products P1 and P2 are assumed. Since F2 is used in only half of the products in which F1 is used, it is easy to see that the indicator function regarding common unit, I1, delivers a value of 0.5. On the other hand, F2 is stopped by F1, which means that a user will perceive it within a certain time interval; thus there is no big difference and a value of 0.0 for I2 may be assumed. In the same way the values for F1 and F3 may be assessed, resulting in no difference regarding the commonality but a total difference regarding perception. Finally, the similarity is calculated according to the formula given in section 3, and the result shows that F1 is more similar to F2 than to F3 (Figure 4).

[Figure 4 shows F1 (available in P1 and P2) stopping F2 (available in P1 only), alongside F3 (available in P1 and P2), together with the assessed indicator values and the resulting similarities:]

I1(F1, F2) = 0.5    I2(F1, F2) = 0.0
I1(F1, F3) = 0.0    I2(F1, F3) = 1.0

s(F1, F2) = 1.0 · (1 − 0.5) + 1.0 · (1 − 0.0) = 1.5
s(F1, F3) = 1.0 · (1 − 0.0) + 1.0 · (1 − 1.0) = 1.0

Figure 4: Example.
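The calculation in Figure 4 can be reproduced with a direct implementation of formula (1); this minimal sketch (ours, with illustrative names) uses r = 1 and unit weights as in the example:

def similarity(weights, indicators, r: int = 1) -> float:
    """s = [ sum_k w_k * (1 - I_k)^r ]^(1/r), after formula (1)."""
    return sum(w * (1 - i) ** r for w, i in zip(weights, indicators)) ** (1 / r)

w = [1.0, 1.0]                      # equal weighting of both drivers
print(similarity(w, [0.5, 0.0]))    # s(F1, F2) = 1.5
print(similarity(w, [0.0, 1.0]))    # s(F1, F3) = 1.0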

6 RELATED WORK

In the past decades, functions and the modeling of functions have become part of some well-known design methodologies, and in order to support a common understanding of functions between all stakeholders in the design process, formal function representations and vocabularies (sometimes called ontologies) have been defined. The work of [11] gives a very good and detailed overview of functional modeling and distinguishes different types of such ontologies. Whereas a device ontology describes a system as being composed of black-box modules, a functional concept ontology aims at modeling the functionality of a system from the viewpoint of humans, such as the work of Chandrasekaran and Josephson [12], Umeda and Tomiyama [13] or Gero [14]. While integrating the user in the modeling of functions already points in the right direction, there is an additional need for describing functionalities, and thus technical solutions, for the development of highly complex mechatronic systems in order to enhance re-use and improve quality [2], [3]. The approach described in section 2 also points in that direction.

In the same way as functional modeling, the understanding of modularization has become very ambiguous in the past years. A collection of definitions and a very good review on that topic may be found in [15], which also agrees on different types of the modularization task, e.g. for design, for production or for use. In this context, the idea of having criteria for modularization has been applied very successfully by Erixon with his Module Drivers, which were found in case studies [16], [17]. A mathematical approach for measuring similarities or commonalities has been presented by Kota [18]. His Product Line Commonality Index is an objective measure for sharing parts across product variants, but it is not applicable to product functionalities.

7 CONCLUSION

This article refers to a model for describing product functionalities in a formal way. Based upon that, a definition of similarity is developed and expressed as a mathematical, criteria-based measure of similarity between two product functionalities. This measure shall be used for the development of highly complex mechatronic products with great variety. Furthermore, objective as well as subjective aspects are presented that have been collected at Daimler and were subsumed in question form under Function Module Drivers. Those aspects and drivers help define an application context of the similarity measure, aiming at the derivation of a function-oriented product structure. Finally, a short example has been given, showing how the formal model, the formula and the criteria may be used to determine a value for similarity.


Since five of the provided eight criteria are comparable to Erixon's Module Drivers, this research also confirms the validity of his findings in part. Future work will focus on transferring the Function Module Drivers into industrial application and on ways in which the similarity measure may be used to derive a function-oriented product structure. Therefore, we currently analyze how the data in the formal model answers the questions that correspond to the individual aspects and thus to the drivers.

8 REFERENCES

[1] Houdek F, 2003, Requirements Engineering Erfahrungen in Projekten der Automobilindustrie, Softwaretechnik-Trends, 23(1).
[2] Heumesser N and Houdek F, 2003, Towards systematic recycling of systems requirements, Proceedings of the 25th International Conference on Software Engineering, 512-519.
[3] Allmann C, 2007, Anforderungen auf Kundenfunktionsebene in der Automobilindustrie, SE 2007 – die Konferenz rund um Softwaretechnik, Hamburg.
[4] Eversheim W, 1998, Organisation in der Produktionstechnik: Konstruktion, Band 2, 3rd edition, Springer Verlag.
[5] Andreasen MM, Hansen CT and Mortensen NH, 1995, On Structure and Structuring, Workshop Fertigungsgerechtes Konstruieren, Erlangen, Germany.
[6] Eversheim W, Schernikau J and Goeman D, 1996, Module und Systeme: Die Kunst liegt in der Strukturierung, VDI-Z 138 (1996), Nr. 11/12 – November/Dezember.
[7] Stevens WP, Myers GJ and Constantine LL, 1974, Structured design, IBM Syst. J., Vol. 13:115-139.
[8] Pahl G and Beitz W, 2007, Konstruktionslehre: Methoden und Anwendung, Springer, 7th edition.

[9] Politze DP and Dierssen S, 2008, A functional model for the function oriented description of customer-related functions of high variant products, Proceedings of NordDesign'08, Tallinn, Estonia, August, (to appear).
[10] Erixon G, 1998, Modular Function Deployment – A Method for Product Modularisation, Doctoral Thesis, Royal Institute of Technology, KTH, Stockholm.
[11] Erden MS, Komoto H, Van Beek TJ, D'Amelio V, Echavarria E and Tomiyama T, 2008, A review of function modeling: Approaches and applications, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 22:147-169.
[12] Chandrasekaran B and Josephson JR, 2000, Function in device representation, Engineering with Computers, 16:162-177.
[13] Umeda Y and Tomiyama T, 1995, FBS modeling: modeling scheme of function for conceptual design, Proc. Working Papers of the 9th Int. Workshop on Qualitative Reasoning About Physical Systems, 271-278, Amsterdam.
[14] Gero JS, 1990, Design prototypes: a knowledge representation schema for design, AI Magazine, 11(4):26-36.
[15] Salvador F, 2007, Toward a product system modularity construct: Literature review and reconceptualization, IEEE Transactions on Engineering Management, 54(2):219-240.
[16] Erixon G, 1996, Design for Modularity, in: Design for X – Concurrent Engineering Imperatives, Chapman & Hall, edited by G. Huang.
[17] Östgren B, 1994, Modularization of the Product gives Effects in the Entire Production, Lic. Thesis, The Royal Institute of Technology, Stockholm, Sweden.
[18] Kota S, Sethuraman K and Miller R, 2000, A metric for evaluating design commonalities in product families, Journal of Mechanical Design, 122:403-410.

Dynamic Learning Organisations Supporting Knowledge Creation for Competitive and Integrated Product Design

R. Messnarz1, G. Spork2, A. Riel3, S. Tichkiewitch3
1 ISCN GesmbH, Schieszstattgasse 4, A-8010 Graz, Austria
2 Magna Powertrain, Lannach, Graz, Austria
3 Laboratoire G-SCOP, Grenoble INP, 46 av Félix Viallet, Grenoble, 38031, France
[email protected], [email protected], {Andreas.Riel; Serge.Tichkiewitch}@inpg.fr

Abstract
This paper shows that learning strategies and a structured approach to turning organisations into learning organisms have a major influence on the success of engineering programs in general, and on integrated design activities in particular. It points out the important relationship between dynamic learning organisations and the successful integrated development of complex mechatronic products, using the topical and typical example of safety engineering in automotive development. It presents the key properties of learning organisations and reports on a way in which they have been successfully applied to the showcase example in close collaboration with a car manufacturing company.

Keywords: Learning Organisations, Competitive Design, Mechatronics, Certified Innovation Manager, EU Certificates

1 INTRODUCTION AND METHODOLOGY

The design and modelling of mechatronic systems have acquired key roles in assuring and increasing the quality, efficiency and efficacy of the product development process. This trend poses several challenges, not only for engineering tools and for engineering education and formation, but also, to a very large extent, for organisations. The importance of continuously fertilizing the organisation with new knowledge about requirements, trends and experiences linked to the products and the concerned development methods and tools is still often underestimated.

This paper reports on experiences gained in applying principles of dynamic learning organisations to product development organisations in the automotive sector. The original project was called ORGANIC (2005 – 2007, [9]). Based on different European studies about innovation management, in which members of the partnership and leading industry have been involved, we developed a modern learning-organisation-based innovation management strategy. A company becomes an ORGANISM in which, through continuous learning spirals, the knowledge grows and the core competences increase continuously. In collaboration with innovation-leading companies, the project developed an example base which has been exchanged and used in different working task forces since 2006. This is also reflected in the way the qualification and certification of the Innovation Manager job role has been transported to industry. Meanwhile, the European Union finances a project called EU Certificates Campus (2008 – 2010) in which such key areas of knowledge are transported in the form of online short courses, together with recognised certificates for innovation management.

Chapter 2 of this paper presents an approach to identifying and modelling a learning strategy for a particular development organisation. Chapter 3 shows how the analysis of core competences serves as a key to putting in place a learning organisation.


An issue that is particularly relevant for development teams is that they are increasingly distributed. Chapter 4 reports on experiences gained in designing and establishing a learning organisation in automotive safety engineering. Chapter 5 deals with the important issue of how dynamic learning organisations can assure the sustainability of continuous innovation. Modelling and implementation strategies of dynamic learning principles are the subject of chapter 6. Chapter 7 concludes and gives an outlook on further research and development.

2 MODELLING A LEARNING STRATEGY

ORGANIC analyzed 20 competence areas that are considered keys to turning an enterprise into a learning organisation:

1. Building Basic Understanding
   - Core Competencies and Customer Relationship Management Skills
   - Innovation and EU Policies Know-how
   - Introducing Innovation Management Principles
   - Knowledge Management Competencies
   - Market Research Skills
   - Regional Innovation Strategies Involvement
   - Human Force Skills Management
2. Building Communication Skills
   - e-Challenges in Innovation
   - Innovation Skills for Reporting and Presentation
3. Building Management Skills
   - Corporate Wide Innovation Management
   - Innovation Aspects in Project Management
   - Innovation Process Management
4. Building Team-Learning and Teamworking
   - Cross Cultural Success Factors
   - Innovation Aspects in Conflict Management
   - Innovation Aspects in Motivation Building
   - Innovation Aspects in Team Communication
   - Innovation Skills for Distributed Team Management
5. Building Personal Skills
   - Cross Cultural Skills
   - Knowledge about Personal Characteristics
   - Learning Culture Establishment

Each of these competence areas has been treated with equal importance, and best practices have been collected. After running through all the proposed steps, the architectural design of a learning organisation tailored to the company-specific needs has been established. The three highlighted areas in the above listing will be explained by examples in the following sections of the paper.

3 CORE COMPETENCE ANALYSIS

One of the key success principles is that organisations understand that they are part of a learning chain. The innovation ideas of customers influence their own innovation tracks. The closer one gets to such key partners, the more dynamically the learning cycles will flow.

3.1 Step 1: Identify Core Competences

A core competence is a field of knowledge of the firm
- where they are stronger than other competitors;
- where they have already created a critical mass of competence;
- where many customers can be served with one knowledge item/function (re-usability);
- where, for years, knowledge has been dynamically extended, newly created and exploited.

3.2 Step 2: Identify Key Customers for Learning

Once an organisation has identified the core competence fields, the next step is to identify which customers most dynamically contribute ideas to this core competence. A key learning customer is identified as a firm which
- regularly gives inputs to new functions, ideas and plans for increasing the identified core competence;
- has its own known innovation leadership and can help putting new structures into place;
- is willing to enter closer collaborative partnerships for services and products in the future.

3.3 Step 3: Enable a Social Learning Strategy

Once the key customer and the core competence are identified, the organisation creates supportive social learning spaces to further enrich the communication and empower the dynamic feedback flow to the core competence.

3.4 Example: Automotive Safety Engineering

Figure 1 illustrates the example of Automotive Systems [7], where core functions of e.g. a control system are the same in all variant projects. The company then decides to develop all base functions just once and to maintain parameter sets which allow the same 80% (ready-to-use) functionality to be applied, via parameter sets, to many different customers. The company then continuously learns new functions and decides whether to include them in the base. In the long run this leads to stable systems working for many customers, and it focuses the learning on core functions which they can supply better and quicker than any of the competitors.

Figure 1: Example – Base Development Strategy in Automotive Systems Development.

Figure 2 illustrates the example of a leading automotive supplier which identified, using the analysis, that e.g. Customer B would drive the innovation in the mechatronics functions, while currently the base knowledge for safety design is created with the idea motor Customer A.

Figure 2: Example – Core Knowledge Strategy in Competitive Development.

Based on this, specific team structures are built to further increase the learning spiral.

3.5 Benefits

Imagine that one either runs 30 parallel projects (30 times the effort, 30 different results, 30 maintenance teams, etc.) or instead creates one core competence team that provides one solution adapted (by parameters and configuration options) to 30 customer variants. One can focus knowledge and resources, and concentrate on the customers that contribute further to the core knowledge.

4 SKILLED DISTRIBUTED LEARNING TEAMS

Another key success principle is that organisations are able to model and support the learning spiral (see Figures 1, 2 and 3) in the form of a role-based distributed team [1], [2], [3], [4]. This way they learn a so-called learning cooperation pattern which can be re-used to dynamically run these learning/innovation partnerships. We continue with the example presented in section 3.4, pointing out how the core competence for safety design developed further. The company analysed the currently involved roles and the current information flows in the safety-related learning cycle.


4.1 Step 4: Analyse Current Team Roles

Figure 3 illustrates the current levels of roles involved in the safety concept, safety design and safety implementation.

Figure 3: Example – Safety Learning Cycle – Actual Roles.

4.2 Step 5: Analyse Current Information Flows

Figure 4 shows the current information flows; the analysis showed that there is a bottleneck with the safety manager.

Figure 4: Example – Safety Learning Cycle in Figure 3 – Actual Flows of Information.

4.3 Step 6: Improve towards a Learning Team

The results of such an analysis are shown in Figure 5. The learning organisation would decide to create a joint learning time to unleash the power of knowledge exchange and collaboration.

Figure 5: Example – Safety Learning Cycle in Figure 3 – Social Team Learning.

4.4 Distributed Innovation – Learning Teams

A distributed innovation/learning team
- involves roles from different levels (customer, product, core competence);
- does not have bottlenecks;
- enables teamwork and feedback loops to create ideas, solutions and knowledge;
- distributes and shares information among the team members.

4.5 Benefits

The learning effect on the core knowledge (safety design in this example) is multiplied by bringing key players together in a learning team. Much information and time is lost when bottlenecks sit in the middle. Also, remember that we need to further increase the dynamics around the learning cycle: the faster it turns, the more we learn together on e.g. safety design.

5 DYNAMIC FEEDBACK LOOP BASED ORGANISATIONAL PROCESSES

In most traditional innovation management courses the content relates to patents, supporting new patents, creating idea databases and following up on the ideas, supporting innovative staff, etc. Learning organisations add to this traditional picture the organisational strategy of a continuous learning organism built around features which keep the organisation alive and leading for a long time. Therefore, another key success principle of learning organisations is the ability to create innovation processes around the learning dynamics of the organisation [1], [3], [7], [9]. Feedback loop based innovation/learning processes
- must represent continuous feedback loops;
- are created based on the learning cycles;
- support the continuous increase of core competence knowledge;
- create a critical mass of knowledge that is re-usable in many projects and services.

5.1 Step 7: Create an Innovation Process based on the Learning Cycles

Figure 6 illustrates a feedback loop process that has been designed around the safety core team.

Figure 6: Example – Safety Learning Cycle in Figure 3 – Feedback Loop Processes.

Customer and project roles collaborate closely, gather key knowledge prepared and stored by the internal team, and continuously refine the knowledge based on planned feedback loops.

5.2 Benefits

If one executes many projects that each contribute core knowledge stored only in the respective project space, the knowledge will stay in each single project and only eventually (if a staff member moves to another project) be shared. When certain knowledge has been declared as core knowledge that projects share, and a base structure (product, service, knowledge, requirements tree, etc.) is built for all projects in the centre, all knowledge flows together and the feedback loop process becomes a strategic process in the firm.

6 MODELLING AND IMPLEMENTATION STRATEGIES

6.1 Strategies for the Learning Organisation The established framework for designing a learning organisation has the 20 competence areas listed in Chapter 2. Each area has its own success principles. By running through all 20 areas an architectural design for a learning organisation can be created. The qualification and certification in these 20 areas, including the ways to implement the according principles, has been established as the job role “EU Certified Innovation Manager” [11] by the European Certification and Qualification Association (ECQA) [12]. The certificate is currently issued by iSQI (International SW Quality Institute). Implementing learning organisations aspects in integrated design teams is one of the major subjects in the development of a competence profile, training courses, and certification of Integrated Design Engineers in our recently launched project “iDesigner” [13][14].

Clusters of companies are formed who can contribute key knowledge to a SPICE core competence. Companies can only join on a win-win principle where they give (be a key player to one of the knowledge fields) and take (can access core knowledge elaborated by another cluster team). Still it is exclusive to be a member of the group because existing members must agree the integration of new members. Thus the core group contributors are no competitors, they exchange and learn from each other, and get together better than their competitors on the market. Using the same innovation learning strategy cross company learning teams on core areas of SPICE have been created also in Austria in S2QI 2005, where up to now above 10 leading Austrian firms collaborate.

6.2 Strategies for ISO 15504/SPICE
Knowledge about these innovation principles is important for process quality assessors—in the automotive industry typically SPICE [10] assessors—so that they can provide improvement recommendations which help organisations profit from their SPICE investments.





Knowing the core competence in functionality in a product segment helps to establish the requirements and test traceability for the core functionality once, and then to reuse it from there. This obviously multiplies the return on investment. Where a shared understanding of customer, system, and software requirements is demanded, such learning teams form the basis for that communication. Innovation is based on a continuous learning cycle involving the customer and core competencies which can be multiplied into many product segments and projects.

6.3 Strategies for Industry Task Forces
Cross-company learning teams on core areas of SPICE were created in SOQRATES in 2003 [15], where up to now more than 20 leading German companies collaborate. Figure 7 illustrates the collaborative innovation learning model applied in SOQRATES.

Figure 7: Example – Core Competencies Architecture for Cross Company Task Forces Model

Clusters of companies are formed that can contribute key knowledge to a SPICE core competence. Companies can only join on a win-win principle: they give (act as a key player in one of the knowledge fields) and take (access core knowledge elaborated by another cluster team). Membership remains exclusive, because existing members must agree to the integration of new members. The core group contributors are thus not competitors; they exchange and learn from each other, and together become better than their competitors on the market. Using the same innovation learning strategy, cross-company learning teams on core areas of SPICE were also created in Austria in S2QI in 2005, where up to now more than 10 leading Austrian firms collaborate.

6.4 European Networking
Since 1994 an annual European Improvement and Innovation Conference has been organised. The partnerships in the previously mentioned EU projects actively contribute to the EuroSPI² (European Systems and Software Process Improvement and Innovation) initiative, which has built a pool of approximately 500 experience reports. EuroSPI² 2009 will take place in Madrid, Spain, in September 2009 [16].

7 SUMMARY

This paper suggests an organisational approach to tackling the increasing system complexity of designing mechanical and mechatronic products, using the example of safety engineering in automotive powertrain development. Departing from a framework of 20 competence areas, each having its own success principles, the architectural design of a learning organisation is created. The advantages have been pointed out on the basis of the concrete experiences of a car manufacturer that has successfully applied the proposed transformation to its own organisation. The authors are deeply involved in several research, training and consulting activities implementing learning organisations for the development and production of complex mechanical, and often mechatronic, products. This context enables them to carry the research further, as well as to validate and cross-fertilise it in real practical environments. Their major common activities are currently in the area of defining the skills required for different job roles in order to be able to support a learning organisation in integrated engineering.


8 ACKNOWLEDGEMENTS

This project is part of the strategic long-term collaboration between the ECQA [12] and the EMIRAcle research association [17] in the field of lifelong learning. It is currently financially supported by the EU in the Leonardo da Vinci projects LLP-1-2007-AT-KA3-KA3MP (EU Cert – EU Certification Campus) and LLP-LdV-TOI2008-FR-117025 (iDesigner – Certified Integrated Design Engineer) of the Lifelong Learning Programme. Both European associations have been created from numerous projects and networking activities financially supported by the European Union in both the Sixth Framework and the Lifelong Learning programmes.

9 REFERENCES

[1] Biro M., Messnarz R., Davison A., 2002, The Impact of National Cultures on the Effectiveness of Improvement Methods – The Third Dimension, Software Quality Professional, Volume 4, Issue 4, American Society for Quality.
[2] Feuer E., Messnarz R., Wittenbrink H., 2003, Experiences With Managing Social Patterns in Defined Distributed Working Processes, in: Proceedings of the EuroSPI 2003 Conference, FTI Verlag, ISBN 3-901351-84-1.
[3] Messnarz R., Stubenrauch R., Melcher M., Bernhard R., 1999, Network Based Quality Assurance, in: Proceedings of the 6th European Conference on Quality Assurance, 10-12 April 1999, Vienna, Austria.
[4] Messnarz R., Nadasi G., O'Leary E., Foley B., 2001, Experience with Teamwork in Distributed Work Environments, in: Proceedings of the E2001 Conference, E-Work and E-Commerce, Novel solutions for a global networked economy, eds. Brian Stanford-Smith, Enrica Chiozza, IOS Press, Amsterdam, Berlin, Oxford, Tokyo, Washington.
[5] Messnarz R., Stöckler C., Velasco G., O'Suilleabhain G., 1999, A Learning Organisation Approach for Process Improvement in the Service Sector, in: Proceedings of the EuroSPI 1999 Conference, 25-27 October 1999, Pori, Finland.
[6] Messnarz R. et al., 2006, Assessment Based Learning Centres, in: Proceedings of the EuroSPI 2006 Conference, Joensuu, Finland, October 2006; also published in Wiley Interscience Journal, SPIP Proceeding, June 2007.
[7] Spork G. et al., 2007, Establishment of a Performance Driven Improvement Program, in: Proceedings of the EuroSPI 2007 Conference, Potsdam, Germany; also published in Wiley Interscience Journal, SPIP Proceeding, June 2008.
[8] Messnarz R. et al., 2007, Human Resources Based Improvement Strategies – the Learning Factor, in: Proceedings of the EuroSPI 2007 Conference, Potsdam, Germany; also published in Wiley Interscience Journal, SPIP Proceeding, June 2008.
[9] Messnarz R. et al., 2004, ORGANIC – Continuous Organisational Learning in Innovation and Companies, in: Proceedings of the E2005 Conference, E-Work and E-Commerce, Novel solutions for a global networked economy, eds. Brian Stanford-Smith, Enrica Chiozza, IOS Press, Amsterdam, Berlin, Oxford, Tokyo, Washington, 2004.
[10] ISO/IEC 15504 Standard, Parts 1-5.
[11] http://www.innovationmanger.org
[12] www.eu-certificates.org
[13] Riel A., Tichkiewitch S., Messnarz R., 2008, European-wide Formation and Certification for the Competitive Edge in Integrated Design, in: Proceedings of the CIRP Design Conference 2009.
[14] Riel A., Tichkiewitch S., Messnarz R., 2008, The Profession of Integrated Engineering: Formation and Certification at a European Level, Academic Journal of Manufacturing, Vol. 6, No. 2.
[15] www.soqrates.de
[16] www.eurospi.net
[17] www.emiracle.eu

A Constraints Driven Product Lifecycle Management Framework

Julien Le Duigou1,2, Alain Bernard1, Nicolas Perry3, Jean-Charles Delplace2
1 IRCCyN, Ecole Centrale de Nantes, 1 rue de la Noë, 44321 Nantes Cedex 03, France
2 Centre technique de l'industrie mécanique, 57 avenue Félix Louat, 60304 Senlis Cedex, France
3 LGM2B, Université de Bordeaux 1, 15 rue Naudet, 33175 Gradignan Cedex, France
{Alain.bernard, Julien.leduigou}@irccyn.ec-nantes.fr; {Julien.leduigou, Jean-charles.delplace}@cetim.fr; nicolas.perry@u-bordeaux1.fr

Abstract
The management of product information during its lifecycle is a strategic issue for industry. In this paper, a constraints driven framework is proposed to create and manage the product information. The method allows each actor who intervenes in the product lifecycle to act on the quotation, the development or the industrialisation of the product. Across each phase of the product lifecycle, the extraction, capitalisation and reuse of fundamental knowledge is coordinated by a generic meta-model. This paper explains this approach through experiments in three different SMEs of the mechanical industry.

Keywords: Product Lifecycle Management, Knowledge Based Engineering, Information System

1 INTRODUCTION

Information is becoming the strategic resource for 21st-century companies. To produce more efficiently, this information has to be shared with all the actors of the product development, including external actors such as suppliers or subcontractors. Firms increasingly connect other companies to their information system, and especially to their PLM system. But only 5% of SMEs with fewer than 100 employees use a PLM system to manage their product information [1]. Several problems discourage SMEs from going further in the integration of the digital extended enterprise. To solve these problems, SMEs of the mechanical engineering field need the implementation of specific methods and information system models. In this paper, we first introduce the scientific studies that establish the starting point of our approach. Then we present our research method, an inductive "research/action" approach, based on a spiral cycle structured in successive phases of analysis, development and experimentation, and then linking up with the methods and models enriched by experience feedback. The immersion phases will be described. They were organised in three different companies chosen with respect to a typology of mechanical engineering SMEs. These immersion phases enabled us to specify the generic meta-model and the deployment method for a PLM system mainly dedicated to product data integration and management in the extended enterprise.

2 STATE OF THE ART

In this chapter, different levels of the knowledge information system in companies are presented: from PLM as the backbone of an extended enterprise information system, to the integration of specific knowledge based systems.

2.1 The PLM concept


PLM is first an enterprise strategy [2]. It involves managing all the data concerning a product, throughout its lifecycle, and all the internal and external actors involved in the development of this product. CIMdata defines PLM as: "A strategic business approach that applies a consistent set of business solutions in support of the collaborative creation, management, dissemination and use of product definition information across the extended enterprise from concept to end of life – integrating people, processes, business systems, and information" [3]. Much work has been done in this field, especially in the aeronautic and automotive sectors, in order to propose technical data management methods [4, 5]. Others try to address the SME specificities and propose solutions, such as Delplace for sand casting foundries [6].

2.2 Knowledge management
The management of knowledge has always been done in an implicit way; today it is becoming a deliberate approach. Knowledge management can be defined as "a systematic, organized, explicit, and deliberate ongoing process of creating, disseminating, applying, renewing and updating the knowledge for achieving organizational objectives" [7]. Many methods, such as MKSM and MASK [8], CommonKADS [9] or MOKA [10], have been proposed to help the knowledge engineer in the development of knowledge based systems. Those knowledge based systems automate expertise, but that expertise still needs to be integrated into the PLM paradigm.

2.3 Integration
The works that improve the integration of the different design jobs into the product development approach are numerous. The most represented jobs are production processes such as machining [11], stamping, foundry [12] and forging [13, 14]. Others focus on the simulation stage [15, 16], maintenance or dismantling. Many jobs are thus integrated in the product development approach, but not all job applications are easily connectable to a unique system. A solution is to translate a specific model into a standard model [17], such as STEP (Standard for the Exchange of Product model data) [18, 19], or into others from different projects, such as IPPOP [20].

2.4 Conclusion

We notice that the majority of the work done on PLM and on the integration of the different jobs in product development is applied in the assembly industry, mainly automotive or aircraft companies. The proposed research approach is therefore based on immersions in SMEs of the mechanical engineering field, in order to extract the needs in terms of PLM that are not covered by current software.

3 RESEARCH APPROACH

The proposed approach is based on a methodology to structure and manage the product data of the extended enterprise. To define these methods and to reach a common approach for the different companies, an inductive three-step research approach has been implemented:
• Immersion: needs analysis and integration of specific methods.
• Generalization: creation of a generic approach.
• Validation: experiment of the approach, back to an extended enterprise.

3.1 Immersion: needs analysis and integration of specific methods
The first phase relies on interviews of companies to extract their practices in terms of digital and collaborative engineering and the best practices of implementation. Moreover, we benchmarked the PLM software tools to list their functionalities and their ability to meet SME needs. Pilot companies, representative of the mechanical industry and of its common requirements, were selected. Then, immersive periods were carried out in the different companies in order to directly and inductively integrate the technical data structuring and managing methods. This phase was coupled with the implementation of the methods with real data in the companies to verify the gap between the proposed approach and the objectives.

3.2 Generalization: a generic proposal
In this phase, based on the analysis of the different pilot companies, the method of managing product data has been generalized, and a meta-model has been created. This approach is compatible with the standards and is applicable to all types of companies in the mechanical industry, so that it may be used in an extended enterprise context.

3.3 Validation: experiment of the proposal
The experiment feedback will test, improve and validate the proposed information system structure. Tested in an extended enterprise environment, the implementation method will be verified, as well as its suitability to the requirements of product data management.

4 IMMERSION IN COMPANIES

In this section a typology of companies is proposed in order to choose the pilot companies for the immersive periods. Then the initial situation is explained and the proposed approach is implemented in one company of each SME category. Finally, those immersions are assessed.

4.1 Typology of mechanical engineering SMEs
The choice of the pilot companies was achieved through a typology that characterises the main categories of SMEs in the mechanical engineering field. It was necessary in order to obtain results that allow us to generalize the proposed approach within an extended enterprise. A differentiation axis was chosen: the number of parts that form the product (produced by the SME). Indeed, companies with a large number of parts per product often manage BoMs to master their product data. At the opposite end, companies with a small number of parts per product manage routes and operations to master their product data. And last, some companies manage both BoMs and routes. Thereby, the proposed classification is based on these three types of SMEs (Figure 2).

Figure 2: Typology of mechanical SMEs

This typology classifies the different companies present in an extended enterprise, from the toolmaker to the integrator, through all the intermediaries. From this classification, three companies covering the three zones of our typology were chosen as pilot companies. By analysing the needs of these different companies, their generic needs (the needs that are not specific to the activity of the company) were extracted and aggregated in order to obtain the specifications of the proposed generic model for the extended enterprise. The following case studies enabled us to put into practice methods of technical data structuring and management adapted to those specific companies. In order to do this, the analysis of the needs of each company in terms of PLM was carried out. Then, a specific approach was proposed to improve the initial situation. Finally, this method was validated by integrating a specific solution in the company.


4.2 An equipment manufacturer: PSL CONCEPT
4.2.1. Description of the company and initial situation
The PSL CONCEPT company produces and sells equipment for ships. Among these products are reserve rudders, pulleys and tackles, sheaves, jam cleats and various accessories. This is a type II company ("system integrator"). This company organizes the main part of its products into families. In fact, like many system integrators and equipment manufacturers, its products are made from standard products to which options and modifications are added to meet customer needs. After the audit, the main needs in this company appear to be as follows:
Knowledge capitalisation: An improvement of the design, resulting from customer feedback, a set of tests or optimization by the designer, is not reproduced on the other products of the family without the involvement of the designer on each product. This process is lengthy and is a source of error.
BoM management: The BoM (Bill of Material) management is manual, and the BoMs have to be updated whenever there is a major modification of the product design.
Reference management: Due to the diversity of existing products (1200 references for pulleys alone), the product references are hard to manage efficiently.
Quote: Giving a precise quote for a new product is complex because it is difficult to know the quantities of raw materials that will be consumed and the manufacturing time before the detailed design of the product.
Archive management: When a client comes back with a product, it is not always easy to find the original drawings of the product that was sold, with the references of the different parts.
Thus the audit phase underlines the main PLM needs of this company. Based on these first needs, an approach is proposed to give a global solution to these needs.

4.2.2. Proposed approach
The design of a pulley passes through the choice of technological solutions that respond to a set of technical and economic constraints. Those constraints are knowledge of the extended enterprise, formalised to be used by the designers. The customer's needs are incorporated in the model using constraints: the constraints related to the production capacity, the constraints linked to the product, and the constraints linked to the suppliers. The set of those constraints is coupled with the parametric CAD model of the pulley. The customer chooses a certain number of variables that activate the related constraints. Then a solver proposes a solution that satisfies the set of constraints.
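The constraint-and-solver mechanism can be made concrete with a small sketch. The following Python fragment is our illustration, not the actual PSL Concept tool: parameter names, domains and rule values are hypothetical stand-ins for the formalised craft, supplier and customer constraints described above.

```python
# Illustrative sketch of constraint-driven pulley configuration (hypothetical
# parameters and rules; the real system couples such constraints to a
# parametric CAD model and a dedicated solver).
from itertools import product

# Discrete design domains, e.g. derived from supplier/production constraints
domains = {
    "sheave_diameter_mm": [40, 50, 60, 80],
    "rope_diameter_mm":   [6, 8, 10],
    "material":           ["aluminium", "stainless_steel"],
}

# Customer choices that activate the related constraints
customer = {"working_load_kN": 12, "marine_environment": True}

def satisfies(cand):
    # Craft rule: sheave diameter must be at least 5x rope diameter
    if cand["sheave_diameter_mm"] < 5 * cand["rope_diameter_mm"]:
        return False
    # Dummy capacity rule linking geometry and material to admissible load
    factor = 1.5 if cand["material"] == "stainless_steel" else 1.0
    if 0.2 * cand["rope_diameter_mm"] ** 2 * factor < customer["working_load_kN"]:
        return False
    # Environment constraint expressed by the customer
    if customer["marine_environment"] and cand["material"] == "aluminium":
        return False
    return True

solutions = [dict(zip(domains, v)) for v in product(*domains.values())
             if satisfies(dict(zip(domains, v)))]
print(len(solutions), "admissible configurations, e.g.", solutions[0])
```

Changing one customer variable (for instance the working load) re-activates the constraint network and yields a different admissible set, which is the behaviour exploited both for design and for quotation.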

4.2.3. Integration of the approach
The study focused on a major and well-known family of products for the company: the pulleys. The different functions of the family were broken down, and then a set of functional dimensioning parameters was created for the products in order to link those parameters to the different functions. Thus, if a function is not required by the customer, and if this function is only linked to a single sub-product, then this sub-product will not be present in the final product. Moreover, the modification of a functional parameter leads to the modification of the design parameters which are linked to it. When a modification of the design is made, it is implemented on all the products of the family that use the function. There is no longer information loss when a product is improved, because if the function or the definition of the design parameters is modified, all the products of the family will be automatically modified and so benefit from the improvement. A system of significant referencing based on the functions of the product family was introduced. A table for the quotes was also made to calculate the price of a pulley depending on its functionalities. Each function has a cost; by adding the costs of all the functions required by the customer, we obtain the base for the determination of the global quote of the pulley. A global approach to structuring and managing technical data has thus been defined, which enabled us to create a software solution to automatically design a family of products and to meet the company needs [21].

4.2.4. Conclusions on PSL Concept
The implementation of the software based on this approach and the results that we obtained (the design time for a new pulley has gone from hours to minutes) prove that the proposed approach is in phase with the needs of this kind of company, an equipment integrator (type II), in the mechanical industry. The link between function and product appears clearly. The use of functional constraints to express the needs of each SME of the extended enterprise in the PLM system is a possible way of improving the integration of SMEs.

4.3 An elementary part manufacturer: Capricorn
4.3.1. Description of the enterprise and initial situation


The second pilot company is CAPRICORN. This company manufactures crankshafts, connecting rods and pistons for the up-market automotive industry and racing cars (F1, Nascar, 24H du Mans, rally, etc.). It is a type III company in our typology: "elementary part producer". This kind of company has a problematic of manufacturing technical data. The elementary part manufacturers directly receive their drawings and CAD models from their customers. They then add their expertise to draw up the plan of procedure of the product and produce it. The audit phase made us focus on the following initial situations:
External exchange: The customers directly send the CAD files to the engineering department, mostly in STEP format. When a modification occurs, a new file is sent by the customer. The modification must be made manually on the documentation in the engineering and planning departments.
Knowledge capitalisation: The first phase of a route is quite repetitive. CAPRICORN would like software to automate this phase in order to be able to launch the supply earlier.
Documentation: The operations of a route need documentation from the production department.

This documentation is produced manually, which is time-consuming and a source of error.
Internal exchange: Once the documentation is made, it has to be sent to the manufacturing department. If a modification occurs, the right version must be sent to this department.
The following paragraphs propose an approach to respond to those needs.
4.3.2. Proposal
Like the design of a product, the creation of a process plan is a series of choices of solutions that satisfy a group of constraints. Each actor involved in the creation of the process plan must be able to intervene and to add his own constraints. The customer expresses his constraints through the geometry of the part. The subcontractor has a set of possible surface treatments. The supplier has a limited set of diameters for its blanks. The shop floor is composed of a certain number of machines, more or less loaded, with specific capacities, and so on. The resolution of the set of constraints makes it possible to partially define the operations that compose the process plan of the product.
4.3.3. Integration of the approach
The study focused on the historical product of the company: the crankshaft (Figure 4). Based on a preliminary analysis, it was decided to structure the technical data in three groups: the information about the product, directly extracted from the STEP file; the information about the work centres; and the craft rules, the latter two collected during the audit phase. It is supposed that the macro process plans are already known. In a chosen route, each operation uses the information of some specific faces of the product, the tooling machine information and the craft rules to create the detailed operation. With a face recognition based on fundamental knowledge, it is possible to find the different entities cut in each operation. From the final 3D model of the product, the blank part geometry is reconstructed. Then each machining operation is simulated by a machining feature. As a result, the 3D model of each intermediary part is obtained. From these detailed operations, drawings are generated with the intermediary dimensioning, tolerances and a full title block. These drawings are then saved and sent to the production department.
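The operation-detailing step of 4.3.3 can be sketched as follows; the entity kinds, craft rules and work-centre data are invented placeholders, since the real application works on faces recognized in the STEP model.

```python
# Hypothetical sketch: matching recognized machining entities against craft
# rules and work-centre capabilities to fill in the detailed operations.
from dataclasses import dataclass

@dataclass
class Entity:            # a machined entity recognized on the crankshaft model
    kind: str            # e.g. "journal", "crank_pin", "oil_hole"
    diameter_mm: float

@dataclass
class WorkCentre:
    name: str
    operations: set      # operation kinds the machine can perform
    max_diameter_mm: float

CRAFT_RULES = {"journal": "grinding",        # entity kind -> required operation
               "crank_pin": "grinding",
               "oil_hole": "deep_drilling"}

def detail_operations(entities, work_centres):
    plan = []
    for e in entities:
        op = CRAFT_RULES[e.kind]
        machine = next((w for w in work_centres
                        if op in w.operations and e.diameter_mm <= w.max_diameter_mm),
                       None)
        plan.append({"entity": e.kind, "operation": op,
                     "work_centre": machine.name if machine else "TO ASSIGN"})
    return plan

entities = [Entity("journal", 60.0), Entity("oil_hole", 8.0)]
centres = [WorkCentre("G1", {"grinding"}, 120.0),
           WorkCentre("D4", {"deep_drilling"}, 20.0)]
for step in detail_operations(entities, centres):
    print(step)
```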

Figure 4: Interface of the software

4.3.4. Conclusion on CAPRICORN
This case study enabled us to identify the different technical data used by the process planning department during the industrialisation phase. It also enabled us to extract the knowledge needed to use those data internally as well as externally, via the exchanges with the customer and the production department. The integration of the process plan management and the intermediary part management is essential for this kind of company. It is possible to use standards, such as AP 214 of STEP, to formalise the objects and the data used in this example. So the integration of the process plan in the PLM is essential to the integration of the elementary part manufacturer in the extended enterprise.

4.4 A machine producer: SMP
4.4.1. Description of the enterprise and initial situation
The third case study company is SMP, a grinding machine producer. This company is a type I company, "machine producer". Due to the high number of components of its products and their high customisation, this type of company often encounters bill of material (BoM) problems. After the audit phase, the following initial needs were selected:
BoM creation: Currently, the BoMs are created manually from the analysis of the CAD model using a spreadsheet and sent to the manufacturing department.
BoM management: When a modification occurs in a sub-assembly, the operator has to detect all the impacts. He manually applies and checks the modifications to all the BoMs that are impacted.
BoM structuring for the production department: The production department and the engineering department have two different ways of structuring the BoM. So the modifications of the engineering BoM are difficult to propagate to the production BoM, and vice versa.
BoM integration in the ERP: The integration of the BoM in the ERP of the company is done manually, which is time-consuming and a source of error.
The next paragraph explains which approach was integrated to improve the initial situation.
4.4.2. Proposal

112

As seen in the two previous examples, constraints are put forward by the different actors of the development of the product to define a design problem. To let these actors integrate the constraints by themselves, without the intervention of a knowledge engineer, the constraints have to be integrated in the specific product model of each actor. Then the design department expresses its constraints on a structure that is copied from the CAD structure, and the production department expresses its constraints on a structured copy of the ERP structure. Those constraints are expressed on the same product. The set of constraints allows the definition of the problem to be solved.
4.4.3. Integration of the approach
The approach proposed here is based on a double view of the bill of material. Using a buffer file without the structuring of the product, we can have a different structuring in each department of the company. First of all, the different information needed by the engineering and the production departments is selected. Then, a list of attributes is made for each kind of part, sub-assembly and assembly. A BoM is then created for the engineering department, extracting from the list of attributes only those wanted by this department, and with the same structure as the 3D model. A second BoM is created for the production department, extracting from the same list of attributes only those wanted by this department, and with the same structure as the ERP model. If a modification occurs in the attributes, both the engineering and the production BoM will be automatically changed. If the change is made on a sub-assembly or a part, all the assemblies containing the sub-assembly or the part are updated.
4.4.4 Conclusion on SMP
This case study enabled us to identify the technical data that are transferred between the engineering department and the planning department in this company. It also allowed us to extract the knowledge useful to their transfer, especially concerning the multi-view of a product. This multi-view notion has been applied using a buffer file that contains all the information of the product, the structuring of the product being specific to each view. Even if the works on the BoM management problem in PLM are already consistent, the notions of multi-view and partial view remain a source of improvement for the integration of SMEs.
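A minimal sketch of this double-view mechanism, with invented item identifiers and attributes, could look as follows: a single attribute buffer feeds an engineering view structured like the CAD model and a production view structured like the ERP.

```python
# Hypothetical sketch of the buffer-file / double-view BoM mechanism.
buffer = {  # one record per item, carrying all attributes for every department
    "GRD-100": {"name": "grinding machine", "mass_kg": 2100, "supplier": None},
    "SPD-210": {"name": "spindle assembly", "mass_kg": 160, "supplier": "internal"},
    "MTR-051": {"name": "spindle motor",    "mass_kg": 45,  "supplier": "ACME"},
}

# Each department keeps its own structuring of the same items
cad_structure = {"GRD-100": ["SPD-210"], "SPD-210": ["MTR-051"]}   # 3D model tree
erp_structure = {"GRD-100": ["SPD-210", "MTR-051"]}                # flattened for ERP

def extract_view(structure, wanted_attributes):
    """Build a department BoM: its own structure, only the attributes it wants."""
    return {parent: [{"id": child,
                      **{a: buffer[child][a] for a in wanted_attributes}}
                     for child in children]
            for parent, children in structure.items()}

engineering_bom = extract_view(cad_structure, ["name", "mass_kg"])
production_bom  = extract_view(erp_structure, ["name", "supplier"])

# A change in the buffer propagates to both views on the next extraction,
# mirroring the automatic update of engineering and production BoMs.
buffer["MTR-051"]["supplier"] = "NEWCO"
print(extract_view(erp_structure, ["name", "supplier"])["GRD-100"][1])
```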

4.5 Conclusion on the immersions
During this phase of immersion, a knowledge management approach was integrated: first of all an extraction of fundamental knowledge, then a structuring of that knowledge, and finally its integration into the software. The validation of this work was done by software tests carried out by the expert. The results of those tests fed back into the knowledge extraction phase, then another structuring and integration phase, and so on until the desired results were obtained.
Knowledge extraction: The first phase of each immersion was an extraction of the data used by the expert and the rules used to process them. To obtain that information, two methods were used:
• Observation of the expert during his work: The observation of the expert allows us to get a first look at the use of technical data in the company. The dialogue with the expert enables us to extract the explicit knowledge inherent to his job.
• The practical aspect of the expert's job: To refine the knowledge about the use of those data, we actually did his job, using his workstation. We extracted the implicit knowledge that was not formalised by the expert.
Structuring: The integration of those methods needs the structuring of the technical data by group (objects and attributes) and the structuring of the craft rules by algorithms.
Integration: The automation of those methods integrates the technical data and the craft rules into software validated by the company.
Obviously, the specific applications integrated in the pilot companies are too specialised to be directly integrated into a generic model for the extended enterprise. Some of those data and some processes are really specific to the product manufactured by the company or to its production process (sheave diameter or number of crank pins may not be generic attributes of a product). Nevertheless, some others can be processed in a global way in the extended enterprise [22]. The next chapter generalizes the different approaches used in each case study to generate a global approach applicable to the whole extended enterprise.

5 A GENERIC METHOD BASED ON A PPRO METAMODEL

5.1 Aggregation of the specific model
By aggregating the three class diagrams of the three pilot companies, we obtained a meta-model that can be applied to the pilot companies and, by extension, to each SME of the mechanical industry that has the same product data problematic. The creation of this meta-model is not described here; it is the subject of an entire paper in progress at the present time.

Figure 5: PPRO meta-model

The three approaches that we developed in the pilot companies are transferable to the meta-model in Figure 5.

5.2 Aggregation of the specific method
The specific method of each pilot company was to add constraints on the objects linked with the focus of the study, and then to solve the problem as a Constraint Satisfaction Problem. In the PSL Concept case study, the design of the product is chosen by constraining the function, the process and the resource. The product attributes remain variable. By solving this system, we obtain the values of the parameters of the 3D parametric model. If the product and the resources are fixed and the process variable, then the resolution of the system gives information about the attributes of the process. In the Capricorn case study, the resources and the product are fixed and the process is variable. By solving this system, we obtain decision aid on the process planning. The SMP case study does not deal with the resolution of a problem such as the design of a product or the creation of a process plan. This case study explains the filling of the problem parameters, like the product attributes. Each actor of the development of the product can add his own constraints in the system. The method consists in solving a system expressed by adding constraints to the model. So by constraining some objects of the system and leaving some others variable, the system can be solved to partially define the variable objects. To fill the meta-model so that all the attributes of the product–process–resource system are fixed, each actor of the product development puts in his own constraints. When the system is full, the change of one constraint from the customer, the operator, a supplier, etc. changes one or more variable values of the system, i.e. of the product, the process or the organisation, depending on what is considered variable and what is considered fixed.
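The fixed/variable role-swapping at the heart of this generic method can be sketched with a single toy rule; the rule itself (cycle time × removal rate = volume) is a hypothetical stand-in for the aggregated craft rules of the PPRO meta-model.

```python
# Hypothetical sketch: one Product-Process-Resource constraint, solved for
# whichever attribute the actors have left variable (None).
def solve(product, process, resource):
    # Toy craft rule linking the three views: cycle_time * rate = volume
    if process["cycle_time_min"] is None:        # product and resource fixed
        process["cycle_time_min"] = product["volume_cm3"] / resource["rate_cm3_min"]
    elif resource["rate_cm3_min"] is None:       # product and process fixed
        resource["rate_cm3_min"] = product["volume_cm3"] / process["cycle_time_min"]
    elif product["volume_cm3"] is None:          # process and resource fixed
        product["volume_cm3"] = process["cycle_time_min"] * resource["rate_cm3_min"]
    return product, process, resource

# PSL-Concept-style use: product requirements fixed, process attribute derived
print(solve({"volume_cm3": 500.0}, {"cycle_time_min": None}, {"rate_cm3_min": 25.0}))
# Capricorn-style use: product and resources fixed, solve for the process;
# SMP-style use: actors only fill in attributes, i.e. they populate the dicts.
```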

6 CONCLUSIONS AND PERSPECTIVES

The next contribution will be a detailed explanation of the resolution of the PSL Concept case study with the generic method. That study will explain the resolution concept used to solve the system. As explained in the research approach, the method and model will then be validated by a second experimentation phase, with the integration of a collaborative tool based on this meta-model in an extended enterprise. The last point is to map this PPRO meta-model onto other existing models and methods, in order to focus on the specificities of the SME extended enterprise and to find where the existing models miss the points needed for them to be applied in an SME context.

7 ACKNOWLEDGEMENTS

We would like to thank the PSL CONCEPT, CAPRICORN and SMP companies for allowing us to carry out our study on their premises and for their technical support.

8 REFERENCES

[1] Cetim, 2007, Enquête de besoin sur le travail collaboratif, document interne.
[2] Terzi, S., 2005, Elements of Product Lifecycle Management: Definitions, Open Issues and Reference Models, PhD thesis, Université Henry Poincaré Nancy I.
[3] CIMdata Inc., 2003, Product Lifecycle Management "Empowering the future of business".
[4] Bacha, R., 2002, De la gestion des données techniques pour l'ingénierie de production. Référentiel du domaine et cadre méthodologique pour l'ingénierie des systèmes d'information techniques en entreprise, PhD thesis, Ecole Centrale Paris.
[5] Nguyen Van, T., 2006, System engineering for collaborative data management systems: Application to design/simulation loops, PhD thesis, Ecole Centrale Paris.
[6] Delplace, J.C., 2004, L'Ingénierie numérique pour l'amélioration des processus décisionnels et opérationnels en fonderie, PhD thesis, Ecole Centrale de Nantes.
[7] Ammar-Khodja, S., Bernard, A., 2008, An overview on knowledge management, in: Methods and Tools for Effective Knowledge Life Cycle Management, Springer-Verlag, pp. 3-21.
[8] Merlo, C., 2003, Modélisation des Connaissances en Conduite de l'Ingénierie : Mise en Œuvre d'un Environnement d'Assistance aux Acteurs, PhD thesis, Université de Bordeaux 1.
[9] Schreiber, G., et al., 1999, Knowledge Engineering and Management – The CommonKADS Methodology, The MIT Press.
[10] Ammar-Khodja, S., 2007, Processus d'aide à la spécification et à la vérification d'application d'ingénierie à base de connaissances expertes, PhD thesis, Ecole Centrale de Nantes.
[11] Ben Yahia, N., 2002, Elaboration automatique de processus d'usinage : application aux entités de fraisage, PhD thesis, ENIT.
[12] Martin, L., Moraru, G., Véron, P., 2006, Development of an integrated tool for the foundry industry, 6th International Conference on Integrated Design and Manufacturing in Mechanical Engineering (IDMME), Grenoble, France, 17-19 May 2006.
[13] Boujut, J.F., 1993, Un exemple d'intégration des fonctions métier dans les systèmes de CAO : la conception de pièces forgées tridimensionnelles, PhD thesis, Institut National Polytechnique de Grenoble.
[14] Thibault, A., Siadat, A., Bigot, R., Martin, P., 2007, Method for Integrated Design Using a Knowledge Formalization, in: Digital Enterprise Technology, Springer, pp. 577-584.
[15] Beylier, S., 2007, Une approche collaborative de la gestion des connaissances, application à une PME du secteur de l'ingénierie numérique, PhD thesis, Université Joseph Fourrier, Grenoble.
[16] Eynard, B., Léinard, S., Charles, S., Odinot, A., 2005, Web based collaborative engineering support system: application in mechanical design and structural analysis, Concurrent Engineering: Research and Applications, Vol. 13, No. 2, pp. 145-153.
[17] Chen, D., Doumeingts, G., 2003, European initiatives to develop interoperability of enterprise applications – basic concepts, frameworks and roadmap, Annual Reviews in Control, 27, pp. 153-162.
[18] El Khalkhali, I., Ghodous, P., Martinez, M., Fravel, J., 2002, An information infrastructure to share product models using STEP standard, 9th ISPE International Conference on Concurrent Engineering: Research and Applications, Cranfield University, 27th-31st July 2002.
[19] Chambolle, S., 1999, Un modèle produit piloté par les processus d'élaboration : Application au secteur automobile dans l'environnement STEP, PhD thesis, Ecole Centrale Paris.
[20] Projet RNTL IPPOP (Intégration Produit – Processus – Organisation pour l'amélioration de la Performance en ingénierie), 2007, http://projects.opencascade.org/IPPOP
[21] Le Duigou, J., Bernard, A., Perry, N., Delplace, J.C., 2008, Global approach for product data management, application to ship equipment part families, CIRP Journal of Manufacturing Science and Technology, doi:10.1016/j.cirpj.2008.10.005.
[22] Le Duigou, J., Bernard, A., Perry, N., Delplace, J.C., 2008, Inductive approach for the specification of a generic PLM system in an extended enterprise context, 5th International CIRP Digital Enterprise Technology Conference, Nantes.

Using a Process Knowledge Based CAD for a More Robust Response to Demands for Quotation

L. Toussaint1,2, S. Gomes1, J.C. Sagot1
1 SET Laboratory, team ERCOS, Université de Technologie de Belfort-Montbéliard, 90010 Belfort CEDEX, France
2 MarkIV Systèmes Moteurs, Z.A. Les Grands Prés N°6, 68730 Orbey, France
lionel.toussaint@utbm.fr

Abstract
Out of 100 hours of engineering work, only 20 are dedicated to real engineering and 80 are spent on what is considered routine work. To accelerate these routine processes, our research is based on methods and tools to capitalize and reuse knowledge in collaborative conception. To validate our research hypotheses, a series of experiments through a design process, with the aid of a Product Lifecycle Management (PLM) tool and a geometric modeler, has been implemented. This article defines a methodology for the design and verification of a concept through the use of knowledge capitalization and its application.

Keywords: Concurrent engineering, design for manufacturing (DFM), knowledge capitalization

1 INTRODUCTION

Designing a product, from the definition of the client's needs right up to its fabrication, is a process that requires time, attention and the capitalization of data, information, knowledge [3] and experience gathered from previous projects [5][11]. This knowledge is kept by a limited number of people, usually called 'experts', and is not necessarily capitalized in a practical, reusable way, which translates into a loss of time and delays for the projects. Engineering research in knowledge management and feedback information becomes essential to improve productivity and responsiveness during the design phase. This article focuses on developing methods of collaborative design, based on product-process knowledge, to expedite the repetitive processes of engineering. The idea behind this is to enable experts to gather the knowledge gained from previous engineering experiences and store it in an interactive and intuitive database. This database will allow designers to apply subsequent manufacturability analyses from the beginning of the response to demands for quotation phase. The results of these analyses will allow the user to comply with all the domain's rules of the trade (the company's manufacturing constraints, process constraints, standards, etc.). Between 60% and 80% of components used in products manufactured by OEMs are subcontracted [10]. Companies must bring together experts from several areas to check the manufacturability of the components from the earliest stages of conception. Nevertheless, the unavailability of experts makes this approach ineffective and leads to delays and additional costs. The integration of business rules relating to manufacturing constraints, costs and materials could improve the efficiency of designers by incorporating the concept of design for manufacturing (DFM) in their work. The integration of a flexible DFM verification tool would allow the continuous employment of the experience of experts, with the possibility of continuous updating and adaptation in a case-by-case scenario. As a result, the designer can recover all the manufacturing data related to his concept, transmit it to his supplier in a minimum amount of time, and keep track of project specifics for future reuse.


2 EXPERIMENTATION WITHIN AN INDUSTRIAL PROJECT

These assumptions were tested within the research department of a Tier 1 supplier of the automotive industry. The research work proposed in this article is positioned in a scientific context where the ultimate goal is to generate semi-automatic, robust and optimized product models, respecting all the knowledge related to their manufacture, gathered from project summaries [11] and expert know-how [5]. Once identified, this knowledge will offer different sets of optimal parameters (functional and specific) [1] respecting all the rules of the trade, particularly thanks to interfacing with multi-objective deterministic and/or meta-heuristic optimization tools [13][3]. These sets of parameters can then be transcribed into a parametric three-dimensional CAD model set that will be able to semi-automatically generate several different optimal geometries (solutions on the Pareto frontier), respecting the knowledge retained by the enterprise.

3 PROPOSED METHODOLOGY

For the proper implementation of such an approach, some functional requirements are needed to enable it to be as generic as possible:
1. Assess the technical feasibility of every new concept. We have to verify the manufacturability of the concept by the chosen means of industrialization.
2. Provide an efficient and effective feedback loop for manufacturability problems. The usage of an interactive verification tool will enable the designer to identify possible problems related to the manufacturability of his concept (unmolding, undercuts, ill-balanced pieces, constant thickness, etc.). By identifying the possible problems, this feedback will correct or update the concept to adapt it to the planned means of production.
3. Allow experts to store their rules and knowledge with a minimum of effort and time. The tool should enable different experts to identify and store their rules of the trade in a simple and fast manner to facilitate future

project developments.
4. Under the existing trade rules, allow the analysis of manufacturability at various stages of design. The design of a product passes through different stages according to the internal organization of each company: the project proposal (responding to a demand for quotation and then launching the project in case of success), the project itself (the design of the project, the development of prototypes, the industrialization and the proof of concept) and the manufacturing stage (supply, production and final delivery). The ultimate goal is not to restrict the analysis to the response to demands for quotation phase only, but to apply it throughout the whole design process [16].

3.1 Knowledge capitalization – KNOVA (Knowledge Valorization and Acquisition) lifecycle
The first part of the methodology starts with the application of Serrafero's knowledge acquisition methodology [12]. This methodology describes the process of transforming a company's tacit knowledge into a properly framed knowledge summary, comprising knowledge in five levels of granularity:
• the line of work of a company (e.g. automotive manufacturing), decomposed into several knowledge fields (e.g. plastics, sheet metal, machining, etc.),
• a field (e.g. plastics), composed of several knowledge domains (e.g. injection, extrusion, etc.),
• a domain (e.g. extruded air conducts, injected air intake manifolds, etc.), decomposed into several knowledge proficiencies (e.g. design of extruded air conducts, design of their manufacturing processes, etc.),
• a proficiency (e.g. design of extruded air conducts), decomposed into several specific knowledge items (e.g. the equivalent section area of an air conduct); proficiencies constitute the different knowledge summaries in a company,
• a specific knowledge item (or cogniton), the elementary component of a knowledge compendium.
The KNOVA methodology goes through several steps, the '10 Cs': creation, capitalization, categorization, consulting, completion, coherence, consensus, cohesion, condensation and growth, which allow for the proper gathering and storing of a company's knowledge and know-how. Through the use of knowledge summaries, knowledge can later be digitalized into a Product Lifecycle Management (PLM) tool, where it can be called up to perform automatic verification functions.

3.2 Product/process knowledge capitalization
The context for the application of this methodology requires the use of a collaborative engineering tool of the PLM style, an evolution of PDM (Product/Process Data Management) style tools [1]. The tool chosen for our work is the Project Monitoring Cooperative Workshop (in French 'Atelier Coopératif de Suivi de Projets' – ACSP). This Web environment has been developed at UTBM since 1996 to enable synchronous and asynchronous cooperation between the various members of a project [4][6]. The main feature of the ACSP system is its data, information and knowledge management capabilities. Indeed, the ACSP allows them to be capitalized in order to disseminate, share and reuse them [1]. Moreover, this knowledge can be exported in the form of exchange files (Extensible Markup Language – XML) and then used by other software such as MS Excel and CATIA V5 (via scripts). Expert product/process rules issuing from the KNOVA methodology can be capitalized into the PLM tool and reused for designing a new product. The definition of roles in ACSP, using a multi-domain, multi-view approach (Project / Product / Process / Activities) [14], allows experts to transcribe their knowledge and users to operate independently during the next steps of the methodology. The use of the PLM tool also allows the storage of the functional specifications of each product, by filling in various associated parameters and indicators (strength, cost, etc.), as well as the results of the various phases of product design such as calculations, modeling, testing or the manufacturing process (Fig. 1). The various actors involved in the project must update all information belonging to each product all along its conception. The classified storage of design specifications in the design/process database, along with a permission-to-modify system, facilitates their subsequent exploitation in all the concerned stages of design [15].

Figure 1. Knowledge reutilization methodology principles – managing knowledge (panels: Expert Phase, Conception Phase and Validation Phase, organized around the PLM/ACSP environment)

3.3 Design-verification-validation loop
During his creative phases, the designer must take into account many rules set by different experts. During the early design stages, the ability to verify compliance with each of these rules becomes very important. Success is defined by the company's capacity to generate a product that meets the specifications of the customer and is simultaneously in line with the different trade rules established by the company, according to the manufacturing process chosen. The semi-automatic verification of the design choices, through the export of knowledge in the form of scripts and its implementation in the CAD software (for example CATIA V5), can reduce the design time dedicated to human verification by the designer and the expert. This routine process is amplified when the expert and designer do not share the same geographical location or the same workload [7]. Using a database to identify the various indicators of each product, the designer can, during this design-verification-validation loop, find the critical values and exploit them. This operation is done with the chosen CAD software, using the "expert rule" in the form of a script and the critical values for each parameter established in the functional product specifications. This step establishes a feedback loop that feeds the current product database throughout its lifecycle.
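As a hedged illustration of this loop (in the experiments, rules were exported as XML from the ACSP PLM tool and replayed as CATIA V5 scripts, cf. Fig. 2), the following fragment shows the principle with an invented XML layout and invented parameter names:

```python
# Hypothetical sketch of an exported expert-rule file checked against the
# parameters of the current CAD concept.
import xml.etree.ElementTree as ET

RULES_XML = """
<rules>
  <rule name="min_wall_thickness" parameter="wall_thickness_mm" min="2.5"/>
  <rule name="equivalent_section" parameter="section_area_mm2" min="1950" max="2100"/>
</rules>
"""

cad_parameters = {"wall_thickness_mm": 2.8, "section_area_mm2": 1890.0}

def check(rules_xml, parameters):
    report = []
    for rule in ET.fromstring(rules_xml):
        value = parameters[rule.get("parameter")]
        ok = ((rule.get("min") is None or value >= float(rule.get("min"))) and
              (rule.get("max") is None or value <= float(rule.get("max"))))
        report.append((rule.get("name"), value, "OK" if ok else "DEVIATION"))
    return report

for name, value, status in check(RULES_XML, cad_parameters):
    print(f"{name}: {value} -> {status}")   # deviations feed the feedback loop
```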

Figure 2. Expert rules in script form.

4 INTERACTIVE VERIFICATION METHODOLOGY

A practical implementation of expert rule verification requires the interaction of the different actors responsible for the design of a product. The product design leader, responsible for the functional design of the product, defines the individual characteristics according to the specifications requested by the client and the knowledge generated from experience in terms of materials, components, etc. Then the various operations necessary for manufacturing, as well as the general architecture of the product, are defined.


Once the manufacturing operations are chosen, several knowledge and business rules are defined for the new product. Geometrical rules like heights, thicknesses or interference between parts, coming from the customer's specifications, the choice of manufacturing process or internal recommendations of the research department, come into play, and their values, predefined by the experience feedback loop (Fig. 1), can be recovered using the database and the PLM tool. The next step carried out by the designer is to start modeling the desired product with his CAD software. During this stage we can draw on other geometric modeling methodologies to better manage the concurrent and knowledge-based functional design of the product [1]. The methodology used adds a preliminary step to the geometric modeling to establish a product architecture (skeleton-based modeling) linked by parameters, which guarantees better monitoring and subsequent modification of the 3D model. However, not all the parameters identified for the product can be predefined beforehand. Depending on the characteristics of the product to manufacture, there are parameters that can be modified (or even ignored) by choice of the designer without the 3D model necessarily being bad or wrong. By exporting their settings and then using a script linked to an expert rule (Fig. 2), the designer may, at any time, verify the compliance of his concept with these rules and justify his choices in case of deviation. In the case of an air intake circuit for a car engine, the client's functional specification establishes the length of the line, the amount of fluid to transport and its speed, and a footprint or size to comply with. Due to the evolving nature of car engines, all these parameters cannot be defined beforehand, but they can be verified after the fact. After the definition of the path and general shape of the line, the designer can export the customer's needs from the PLM in the form of a script (Figure 2), which will enable him to verify that his concept properly responds to the constraints imposed. Using some basic geometries (generic models), the script verifies the concept, identifies relevant information, compares it with the prior values in the functional specifications, and collects the results to be exploited later. The results allow the designer to validate that his 3D model meets the demands requested (or to locate possible errors) and, in case of deviation, they provide evidence to justify the reasons for his choice (for example, the path of a conduit with its section areas, Figure 3). The advantage of this method is that it allows the designer to instantly check his work and complete the archives of the product with the direct results of his design choices. These archives will later serve to save time when making design decisions and when reviewing the manufacturability analysis of a new product.

4.1 CAD Analysis
The singularity of this verification methodology is that it processes the geometrical form of the products analyzed. The analysis does not pertain exclusively to features, as there is already research work making headway in that direction [8]. In this research, this translates into knowledge rules being made about the forms of the different products to be analyzed, and geometrical analysis being performed to verify these rules. This analysis can result in a color coding of the product currently being analyzed, depending on whether the breach of the rule needs to be corrected or whether it stands as an accepted deviation.
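Continuing the sketch above (still with assumed statuses and colours, not the tool's actual coding), each check result can be mapped to the colour displayed on the analysed geometry, distinguishing breaches to correct from justified, accepted deviations:

```python
# Hypothetical mapping from rule-check results to the colour coding discussed.
def classify(status, justification=None):
    if status == "OK":
        return ("green", None)               # rule satisfied
    if justification:
        return ("orange", justification)     # accepted, justified deviation
    return ("red", "to correct")             # breach must be corrected

print(classify("DEVIATION", "customer footprint forces a local thin wall"))
```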

5 CONCLUSION AND PERSPECTIVES

This experiment allowed the R&D department to accelerate the finalization of the first steps of the response to demands for quotation on several products, and to identify possible complications in downstream stages of the design process. This shows the importance of ongoing verification, as well as the importance of capitalizing the knowledge used in the various projects that are carried out. Reducing the number of verifications made by the experts during the early stages of design can increase the responsiveness of the R&D department as well as reduce the time dedicated to routine activities. It is recognized that 80% of the time spent in an R&D department is dedicated to routine activities, against 20% dedicated to innovation [9]. Using a knowledge database and semi-automatic tools included in the PLM will enable us to consider halving this routine work time [2]. The time saved can be invested at all stages of product design, providing a reliable and robust result with the minimal iterations and validations necessary.

Figure 3. Air conduct analysis by semi-automatic script.

This principle opens up several interesting perspectives, already mentioned in [3], with implications in the domain of generation and semi-automatic verification (Verification Phase, Figure 1) in a parametric geometrical CAD tool (CATIA V5, NX6, etc.) incorporating rules of engineering. These rules are extracted and driven directly from a functional specification and a project record in the PLM tool. For the moment the KNOVA methodology is being used to develop knowledge summaries of the different products manufactured by the company where the research is being carried out. If this investigation bears fruit, the results will be used to further the development of the methodology proposed in this article. If not, the KNOVA methodology will be revised and a proposal for a different one will subsequently be made to the company. This will be done in order to find a methodology that is perfectly tailored to the company and products in hand.

6 REFERENCES

[1] J.B. Bluntzer, S. Gomes, J.C. Sagot, 2006, Application de la modélisation CAO à base de connaissances à la conception fonctionnelle, collaborative et multi sites de produits modulaires, Colloque francophone sur les sciences de l'innovation, CONFERE 2006, 06-07 July, Marrakech, Maroc.
[2] N. Gardan, Y. Gardan, 2003, An application of knowledge based modelling using scripts, Expert Systems with Applications.
[3] S. Gomes, J.C. Sagot, 2002, A concurrent engineering experience based on a cooperative and object oriented design methodology, in: Best Paper Book, 3rd International Conference on Integrated Design and Manufacturing in Mechanical Engineering, Dordrecht, Pays-Bas, pp. 11-18.
[4] S. Gomes, J.C. Sagot, A. Koukam, N. Leroy, 1999, ACSP: an intranet forum supporting a concurrent engineering design life cycle, 6th European Concurrent Engineering Conference, ECEC'99, Erlangen-Nuremberg, 21-23 April, pp. 249-251.
[5] M. Grunstein, 2007, GAMETH : un cadre directeur pour repérer les connaissances cruciales pour l'entreprise, MG Conseil, February 2007.
[6] D.T. Liu, X.W. Wu, 2001, A review of web-based product data management systems, Computers in Industry, 44, pp. 251-262.
[7] C.K. Mok, K.S. Chin, H. Lan, 2008, An internet-based intelligent design system for injection moulds, ScienceDirect, 2008.
[8] G. Molcho, Y. Zipori, R. Schneor, O. Rosen, D. Goldstein, M. Shpitalni, 2008, Computer aided manufacturability analysis: Closing the knowledge gap between the designer and the manufacturer, CIRP Annals – Manufacturing Technology, 57, pp. 153-158.
[9] B. Prasad, 1996, Concurrent Engineering Fundamentals, Vol. 1, Prentice-Hall, Englewood Cliffs.
[10] M. Rezayat, 2000, Knowledge-based product development using XML and KCs, Computer-Aided Design, 2000.
[11] P. Serrafero, S. Gomes, D. Bonnivard, L. Jezequel, 2006, De la mémoire projet à la compétence métier : vers la synthèse de connaissances métier en ingénierie robuste des produits/process, International Conference on Integrated Design and Manufacturing in Mechanical Engineering, IDMME '06, May 2006.
[12] P. Serrafero, 2003, Cycle de vie, maturité et dynamique de la connaissance : des informations aux cognitons de l'Entreprise Apprenante, Revue ENSAM, April 2003.
[13] B. Sid, M. Domaszewski, F. Peyraut, 2005, Topology optimization using adaptive genetic algorithm and new geometric representation, OPTI '05, Ninth Int. Conf. on Computer Aided Optimum Design in Engineering, WIT Press, pp. 127-135.
[14] S. Tichkiewitch, E. Chapa Kasusky, P. Belloy, 1995, Un modèle produit multi-vues pour la conception intégrée, Congrès international de Génie Industriel de Montréal – La productivité dans un monde sans frontières, Volume 3, pp. 1989-1998.
[15] C. Yang, M. Yu, 2006, A study on product knowledge management method supporting product agile customized design, International Conference on Integrated Design and Manufacturing in Mechanical Engineering, IDMME 2006, Grenoble, 17-19 May.
[16] Z. Zhao, J. Shah, 2005, Domain independent shell for DFM and its application to sheet metal forming and injection molding, Computer-Aided Design, September 2005.


Development of a Software Tool to Support System Lifecycle Management
V. ROBIN, S. BRUNEL, M. ZOLGHADRI, P. GIRARD
IMS Laboratory – LAPS department, UMR 5131 CNRS, University of Bordeaux, 351 cours de la Libération, 33405 Talence Cedex, France.
{vincent.robin ; stephane.brunel ; marc.zolghadri ; philippe.girard} @ims-bordeaux.fr

Abstract
In the extended enterprise context, many stakeholders act on the product during all its lifecycle. They influence the product development, and managers have to be able to control all the activities, and their interactions, that generate the different processes. They also have to manage each actor involved in the project during the product lifecycle. In this paper, we propose an approach to identify, define and manage the factors influencing product development: System Lifecycle Management. PEGASE, a prototype of software to control design projects, follow up the system evolution and support decision-making, is also presented.

Keywords: System Lifecycle Management, Strategic Design Process Management, Computer-Aided Engineering.

1 INTRODUCTION
In the extended enterprise context, many stakeholders act on the product during all its lifecycle. The notion of performance in product development concerns not only the product and the process but also the organization of actors and the system as a whole. From the beginning of the development, managers have to encourage and favour collaboration between the actors involved in the project. They have to manage design teams and existing networks, but also to create new partnerships. These partnerships concern not only the design process but all the phases of the product lifecycle. In this paper, we focus on the management of the design phase, since it has a preponderant influence on the other phases of the product lifecycle. We are interested in the definition, the follow-up, the capitalization and the reuse of the performance inductors that could have an influence on the design performance. First, we study the PLM (Product Lifecycle Management) epicycle view to identify the factors influencing product development and the information flows between them. Second, we propose a model to manage these factors, and we focus on their description throughout the system, from the actors to the enterprise network. The objective is to identify the specific factors impacting the performance of each entity of the system. Finally, we present a prototype of software to control the product development process and to support decision-making.

2 SYSTEM LIFECYCLE MANAGEMENT (SLM) IN THE EXTENDED ENTERPRISE CONTEXT
Co-ordination and control of design projects are part of a global approach to new product/system development, which implies the need to identify the different situations occurring during the design process and the adequate resources to satisfy the design objectives. The design situations are described by identifying the components of the design activity and their relationships [1],[2]. In design project management, the control of the design process is defined as the understanding and the evaluation of these existing design situations in order to take decisions. These decisions will modify and improve the future process, according to the design objectives given by customer specifications or the company strategy. In a nutshell, the management of design projects is a decision-making problem: supporting designers in their activities so that they achieve an objective in a specific design context [3]. This context has an influence on the project and refers to the environment of the enterprise (society, subcontractors, market, supply chain, etc.) and to its organization [4]. Influences of the context affect each entity of the organization. Sudarsan et al. [5] proposed a high-level view of these influences in their adaptation of the epicycle diagram from [6] (Figure 1). It explains the epicycle nature of PLM and characterizes the information flow pattern in any product lifecycle. The current view of the PLM epicycle emphasises that many kinds of information have to be considered and managed to ensure a coherent multi-level project management adapted to each decision-maker at each decision level. In such a context, PLM support needs to connect the product design and analysis processes to the production and supply chain processes, including: product data management (PDM), component supplier management (CSM), enterprise resource planning (ERP), manufacturing execution systems (MES), customer relationship management (CRM), supply and planning management (SPM), and others that will undoubtedly follow [7]. The objective is to provide each project manager with a set of information representative of the real state of the system. All the data and information have to be synchronized for each project in the organization to ensure the coherence of the project management. Information also has to be continuously defined and characterized to permit efficient decision-making during the progress of the project. This is possible only if all the information flows of each project are traced, analyzed and exploited to follow up the design project. To identify and manage all these information flows, our approach was to develop a model centred on the design system (the system in which the product/system is designed) in order to analyse and describe them and to follow up its evolution.

Figure 1: PLM epicycle current view



2.1 Design system modelling
During the IPPOP project [8], a model integrating Product, Process and Organization models (PPO model) has been developed [9]. We placed this PPO model in a more global context to describe and analyze the design system. This approach puts in evidence the global and local performance inductors influencing the design system. They have to be considered to suitably follow and manage the co-evolution of the design system and the design process. Global performance inductors are [4]:
• The technological factor, which concerns the techno-physical environment (scientific and technological knowledge).
• The environmental factor, i.e. the context in which the design process takes place; it includes natural, socio-cultural and econo-organizational environments (external and internal environments).
• The human factor, i.e. the human and his different activities during the design process (the actor).
These factors influence the design process and the design system. All of them and their interactions are integrated in a model composed of a technological axis, an environmental axis and an actor axis (Figure 2). Then specific objectives, action levers and performance indicators, dedicated to the design system, have to be identified according to these elements. Interactions between these objectives, action levers and performance indicators have to be considered to supply pertinent information to decision-makers. These interactions are a composition of each element of the model and of the relationships between them. The product, process and organizational models allow us to put in evidence and manage the relationships between the factors influencing the performance of product development [4] (Figure 2). These models are local performance inductors for the design system, and the interactions between them provide a dynamic vision of the design system evolution. In this model, the description of the factors influencing the design system at each decision-making level provides a global vision of the design context. Hence, thanks to such a representation of the design context, the decision-maker can analyse the design situation and identify the particularities of each project. He is able to observe the evolution of each component (environmental, technological and actor ones) and the interactions between them, and consequently to adapt his project management method. He could also study the impact of one of his decisions by simulating the possible evolution of the system. To make this possible, the model must be completed with a methodology to follow up the design system evolution and to evaluate the design process.

Figure 2: Design system modelling, factors influencing the design system [4]

2.2 A PLM epicycle mediated-view to manage design system and design process co-evolution
The global and local performance inductors influencing the design system have to be considered to suitably follow and manage the design system and design process co-evolution. The dynamic of the design system is provided by the evolution of these factors but also by their interactions. The PPO model is used to put in evidence and manage these relationships. Sudarsan et al. [5] have proposed a model for the mediation of information flow across the activities of PLM thanks to a common set of ontological structures and information models to represent product and process: the NIST information-modelling framework. The PPO model completes the NIST framework by considering simultaneously three models that have an influence on the design system. We could thus propose the PLM epicycle mediated-view shown in Figure 3.


Figure 3: PLM epicycle-mediated view
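To make the role of the core PPO model at the centre of Figure 3 more concrete, the following minimal Python sketch shows how product, process and organization entities could be linked so that information flows can be traced across the three views. The class and attribute names are our own illustrative assumptions, not the actual IPPOP or NIST data structures:

```python
# Minimal, hypothetical sketch of a core PPO model; class and attribute names
# are our own, not the IPPOP or NIST data structures.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Product:
    name: str
    components: List["Product"] = field(default_factory=list)

@dataclass
class Actor:
    name: str
    competencies: List[str] = field(default_factory=list)

@dataclass
class Activity:
    name: str
    inputs: List[str] = field(default_factory=list)    # technical data consumed
    outputs: List[str] = field(default_factory=list)   # technical data produced
    actors: List[Actor] = field(default_factory=list)

@dataclass
class Organization:
    name: str
    decision_centres: List[str] = field(default_factory=list)

@dataclass
class PPOModel:
    """Mediates information flows between the three views."""
    products: List[Product]
    activities: List[Activity]
    organization: Organization

    def producers_of(self, datum: str) -> List[Activity]:
        # Trace which design activities deliver a given technical datum,
        # e.g. to synchronize informational flows between projects.
        return [a for a in self.activities if datum in a.outputs]
```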

Figure 4: Global performance inductors influencing PD (viewpoints: actor, scientific and technological knowledge, internal environment, external environment; each described at the actor, design system and company levels)



The aim of our approach is to propose concepts, models and software tools to obtain an extended PLM support managing the global co-evolution of the product and the system. Our ambition is to work on the opportunity to evolve models, approaches and tools from PLM to SLM (System Lifecycle Management). The SLM approach has to consider all the elements of the system influencing product development (PD), their interactions and their co-evolution, to establish the best context for decision-making. That requires a model of the enterprise and of the network in which it evolves. The objective is to capitalize and follow information about each entity of the system. This capitalization helps decision-makers to analyze and understand the as-is situation with regard to the capitalized information (the "as-was" situation) and to evaluate the impact of their decisions by considering the possible evolution of the system (the "to-be" situation). The system can be described by defining the global and local performance inductors and their interactions. The description of the system according to different viewpoints yields a great amount of information that has to be capitalized and dynamically managed [10]. Figure 4 presents a macroscopic description of the system from different viewpoints. This figure focuses on the global performance inductors; the same description exists for the local performance inductors [10]. The specification of all these elements permits the creation of a model of the system.

3 SOFTWARE TOOLS FOR SLM
According to the SLM concept and the PPO model, we developed a prototype of software to support actors during a design project: PEGASE. To ensure that our prototype respects criteria of conformity, reliability, safety, dimensioning and maintainability [11], the design phase was based on concepts proposed by the creators of the UML language [12]. This choice is justified by the fact that this method is very structured. The objective is to capitalize, manage and use information about the system and its evolution to support decision-making. PEGASE must ensure the connection between the structuring of the organization of the company and the creation and control of different kinds of projects. The information in the database has to be generic in order to offer the opportunity to help decision-makers in different situations. The detailed analysis of the processes and of the mechanisms of decision-making throughout product development allows identifying the elements that have to be managed to control the product development process (Figure 2). PEGASE has been developed to integrate and manage all these elements to ensure a coherent vision of the system, from a macroscopic vision (the network of enterprises) to a microscopic one (the actors of the projects). The administrator of the system implements and configures the database. The product development process has to be structured and planned, and resources have to be allocated; this phase is carried out by the project managers. Finally, PEGASE controls project evolution by managing the realization of the designers' activities. It also helps managers to follow up the project. In a nutshell, the control of the product development processes thanks to PEGASE results in several actions, from the genesis of the projects to their closure (a minimal sketch of these structuring concepts follows the list):
• implementation and configuration of the database;
• structuring and planning the projects and allocating resources:
  o after a project is initialized and the objectives of the company are specified, the head of project structures his project to achieve his goals,
  o he defines several sub-projects for which he specifies the objectives and the persons in charge (as local decision centres),
  o he associates the input technical data necessary to achieve the designers' goals, and the output technical data corresponding to the achievement of these objectives,
  o he defines a planning of the activities to be carried out, specifying their data and their objectives;
• realizing the activities and following up the design projects:
  o to allow the follow-up of the project, the designers generate the awaited technical data and valuate the required performance indicators.
These actions, associated with the integrated PPO model, ensure that the organization of the company, the multi-level management of the projects, the differentiation between the decisions and the transformation of product-process knowledge, the synchronization of informational and decisional flows and, finally, the follow-up of the projects are taken into account.
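As a rough illustration of the structuring step just described, the sketch below models projects, sub-projects acting as local decision centres, and the decision frame sent to each of them. All names are hypothetical and only meant to fix ideas, not to reproduce the PEGASE implementation:

```python
# Hypothetical sketch of PEGASE-like project structuring: sub-projects act as
# local decision centres and receive a decision frame from the upper level.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PerformanceIndicator:
    name: str
    target: float
    unit: str
    value: Optional[float] = None   # valuated by designers at the end of a task

@dataclass
class DecisionFrame:
    objectives: List[str]
    constraints: List[str]
    indicators: List[PerformanceIndicator]

@dataclass
class Project:
    name: str
    frame: DecisionFrame
    sub_projects: List["Project"] = field(default_factory=list)

    def create_sub_project(self, name: str, frame: DecisionFrame) -> "Project":
        # Each sub-project becomes a local decision centre at the lower level.
        sub = Project(name, frame)
        self.sub_projects.append(sub)
        return sub

root = Project("Chassis project", DecisionFrame(["integrate chassis data"], [], []))
root.create_sub_project("Welding integration",
                        DecisionFrame(["validate welding pass"], [], []))
```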

3.1 Implementation and configuration of the database
Within the framework of the GRAI R&D approach [13], the modelling of a company makes it possible to formalize its organization (functional decomposition and decisional system) and its technological system (design process). Via an administrator access, the organization is entered into PEGASE (Figure 5). The structure of the decisional system is defined thanks to the GRAI R&D grid: decision centres are identified, together with their temporal range and their nature, and the information flows connecting these centres are identified too. This structure is deployed in PEGASE by associating each element of the organization (plant, services, stakeholders, etc.) with the corresponding decision centres and by connecting them by specifying the information flows (Figure 5). The administrator configures the information flows that will be implemented in the course of the project by the various local coordinators involved, in order to ensure the coherence of their communication and their decision-making. The information and the information flows concern the data and links defined during the modelling of the system based on the macroscopic viewpoint of Figure 4.



Figure 5: Graphical User Interface (GUI) defining functional structure and organization of the company


Figure 6: GUI for the processes definition (description of the sequences of activities)

The administrator deploys the processes modelled in the organization by associating to each decision centre the sequences of tasks (Figure 6). This process could be formalized according to the quality procedures of the company. When configuration is completed, PEGASE is operational. The administrator creates and initializes a project by sending the decision frame and associated design frameworks to the decision centres concerned in the organization. The administrator access also permits defining the whole set of resources: human, material and software. The knowledge and competencies of the actors are also managed; they can be specified according to the competencies matrix of the company. Managing actors' competencies allows decision-makers to find human resources and affect them to specific tasks during the design projects (Figure 7), as sketched below.
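A competencies matrix lends itself to simple queries. The following sketch, with invented data and function names, shows how a decision-maker could search for human resources matching the competency levels required by a task:

```python
# Sketch of a competency-based search over a competencies matrix; the data
# and function names are invented for illustration.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class HumanResource:
    name: str
    availability: float              # fraction available across all projects
    competencies: Dict[str, int]     # competency name -> level (e.g. 1 to 3)

def find_candidates(resources: List[HumanResource],
                    required: Dict[str, int],
                    min_availability: float = 0.2) -> List[HumanResource]:
    """Return resources meeting every required competency level."""
    return [r for r in resources
            if r.availability >= min_availability
            and all(r.competencies.get(c, 0) >= lvl
                    for c, lvl in required.items())]

pool = [HumanResource("J. Legardeur", 0.5, {"CATIA V5": 3, "ANSYS": 2})]
print(find_candidates(pool, {"CATIA V5": 2, "ANSYS": 2}))
```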


Figure 7: GUI presenting general information about an actor

3.2 Structure, plan and follow up a design project
When the project is initialized, PEGASE systematically informs the users of the new events which concern them. So each coordinator is informed of his new status when he connects. He has information about the organisational structure of the company in order to know the other coordinators with whom collaborations will be established. He is able to reach directly the details of the new project and the decision frame or the design framework sent by the upper decisional level (Figure 8). The decision frame enables him to know his work context: his objectives, his criteria and decision variables, his constraints, his performance indicators and the resources which are allocated to achieve his goals with regard to the performance indicators. He is then able to begin the phase of control previously structured, assigned and planned. The coordinator has the opportunity to create sub-projects which will be automatically associated with decision centres at the lower decisional level. He finally defines the tasks to be carried out, by completing all or part of the tasks specified by the administrator or by introducing new tasks depending on the needs of the project. This guarantees the flexibility of the process evolution during the project. By using the preset informational links, PEGASE informs each new local coordinator of sub-projects and each designer affected to specific tasks. Project managers and designers have the same GUI (Figure 8) to understand the context in which they must carry out their tasks. The difference is that the project manager can create performance indicators, while the designer can only complete these indicators. They must, at the end of their task, indicate the values of the performance indicators, as in the sketch below.
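The valuation of a performance indicator at task completion can be pictured as follows. The indicator echoes those visible in Figure 8, but the code names and the "actual value below target" convention (suitable for cost- or duration-type indicators) are our own simplifying assumptions:

```python
# Sketch of performance-indicator valuation at task completion; hypothetical
# names, with an "actual <= target" convention for cost/duration indicators.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    name: str
    target: float
    unit: str
    actual: Optional[float] = None

    def valuate(self, value: float) -> None:
        # Called by the designer at the end of his task.
        self.actual = value

    def satisfied(self) -> bool:
        return self.actual is not None and self.actual <= self.target

cost_impact = Indicator("Welding process impact on chassis cost", 4.0, "%")
cost_impact.valuate(3.2)
print(cost_impact.satisfied())   # True: within the "no more than 4%" objective
```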


Figure 8: Dedicated actor’s GUI to consult his decision frame


A PhD student is testing PEGASE in a real case study within a partnership with LASCOM, a PLM/BPM solution developer. The company is working on the validation of our approach by adapting some concepts developed in PEGASE in their PLM solution (named ADVITIUM).

3.3 Use of the database to support decision-making
To offer new functionalities to decision-makers, we are working on new concepts to make PEGASE evolve. Our objective is to combine information in the database to allow decision-makers to obtain a particular representation of the system, the process or the actors. We have to treat and organize the capitalized information of the PEGASE database to provide decision-makers with a set of information describing an element of the database or a specific viewpoint on the system. This set of information could concern the product, the process or the organization (data of the PPO model), or elements of the system (data describing actors, environments or knowledge). According to this information, we propose to decision-makers scorecards describing the resources of the company, the tools used during the different activities and the knowledge. For the moment, this scorecard is a comparative scale (Figure 9).

Figure 9: Comparative scale (axes: knowledge, tools, human resources; internal versus external positioning)

Such a comparative scale provides decision-makers with information about the positioning of the company, the design system or the actor (depending on the adopted degree of analysis) and contributes to:
• Know "what my company / design system or an actor is able to do (internal point of view)": "I know what I know and what I am able to do".
• Identify "what I cannot do even though another company can do it (external point of view)": "I know what I cannot do and I know who can do that".
• Put in evidence the possible difficulties by defining "what I cannot do" and find solutions to provide information about this lack of knowledge: "I know what I cannot do but I don't know who is able to do that".
• Help the decision-maker by providing information about possible solutions to solve a problem (internal or external solutions, tools, and resources).
The gap between the internal and external viewpoints in the representation (white part in Figure 9) corresponds to the position "I don't know that I don't know". This gap emphasises that the scale is open and can always evolve. From now on, to create this comparative scale we correlate information about:
• The actor: who possesses information or data about the element, and what is the state of his relationships in his environment? We are able to identify whether an actor (internal or external actor, department of the company, etc.) possesses the information and what he is doing in the system.
• The supports of the information: what are the objects that permit to "make it real" and to "use" and "reuse" it? To create interactions or collaborations between actors, we have to specify the objects and supports that favour exchanges and the sharing of information. This could reveal interoperability problems.
• The knowledge: what are the theories that found the existence of the information about the element? The objective is to precisely define the information.
Beyond the comparative scale, we are also able to reveal the organization of the data and of the information. This is possible because we capitalize the way the information is used during an activity, forming a "map" of the knowledge in the company and of the actors' abilities. This capitalization allows us to obtain a set of information about the different entities describing actors, knowledge and supports of the information. Product, process and organizational models are aggregations of some of these entities. For instance, Figure 10 presents a partial view of the entities, and of their organization, that have been capitalized during the design process of a bike. The box on the left of the figure is an aggregated vision of the product model of the bike: it is composed of some ball bearings, and we have theoretical elements in relation with these elements. We also have a box to identify the resource which is able to design the product or a part of the product. Specific knowledge, exchanges and collaborations between actors, and actors' abilities or capabilities, could also be put in evidence. All this information is capitalized during the system evolution. This example shows that such a representation could provide information about each entity of a product. The entity can be decomposed and linked with other ones, and the resources which are able to work on the product or share information about it are identified too. A minimal sketch of this positioning logic is given below.
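The positioning logic of the comparative scale can be sketched by correlating, for each knowledge entity, who holds it internally or externally. The names and data below are invented for illustration, reusing the ball-bearing example of Figure 10:

```python
# Sketch of the comparative-scale positioning: correlating who holds a piece
# of knowledge (internally or externally); names and data are invented.
from dataclasses import dataclass
from typing import List

@dataclass
class KnowledgeEntity:
    topic: str
    internal_holders: List[str]   # actors or departments inside the company
    external_holders: List[str]   # partners, suppliers, other companies

def position(entity: KnowledgeEntity) -> str:
    if entity.internal_holders:
        return "I know what I know and what I am able to do"
    if entity.external_holders:
        return "I know what I cannot do and I know who can do that"
    return "I know what I cannot do but I don't know who is able to do that"

bearings = KnowledgeEntity("ball bearing design", [], ["external supplier"])
print(position(bearings))
```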

Figure 10: Partial representation of information about a bike (the product: a bike; part of the product: the ball bearings; theory concerning the ball bearings; actor able to design a part of the bike)

4 CONCLUSION
During the product lifecycle, a great amount of information concerning the product, the process and the organization is created and evolves. Furthermore, the system and its environment also evolve, and so does the information about them. Our objective is to catch information about these evolutions and to capitalize it when necessary. From now on, much information is capitalized thanks to PEGASE, but many evolutions are not considered and the database has to be manually updated frequently. Procedures to automatically capitalize some evolutions are not well established for the moment and have to be studied and integrated in our software. Despite the fact that the database does not evolve quickly, it can be used by decision-makers to analyse the situation of the system. Our propositions permit an analysis of this situation by capitalizing information about the company, the design system and the actor, and their context of evolution. The comparative scale provides a vision of what the company is able to do or not. Our approach and our prototype of software help decision-makers to analyse the as-is situation and formalize their strategies.

5 REFERENCES
[1] O'Donnell F.J.O., Duffy A.H.B., 1999, Modelling product development performance, International Conference on Engineering Design, ICED 99, Munich, Germany.
[2] Chen H.H., Kang H.Y., Xing X., Lee A.H.I., Tong Y., 2008, Developing new products with knowledge management methods and process development management in a network, Computers in Industry, vol. 59, pp. 242-253.
[3] Girard Ph., Doumeingts G., 2004, Modelling of the engineering design system to improve performance, Computers & Industrial Engineering, vol. 46, n°1, pp. 43-67.
[4] Robin V., Rose B., Girard P., 2007, Modelling collaborative knowledge to support engineering design project manager, Computers in Industry, vol. 58, n°2, pp. 188-198.
[5] Sudarsan R., Fenves S.J., Sriram R.D., Wang F., 2005, A product information modelling framework for product life cycle management, Computer-Aided Design, vol. 37, n°13, pp. 1399-1411.
[6] Lederberg J., 1990, The excitement and fascination of science: reflections by eminent scientists, vol. 3, Part 1, Annual Reviews, Inc.
[7] Sudarsan R., Subrahmanian E., Bouras A., Fenves S.J., Foufou S., Sriram R.D., 2007, Information sharing and exchange in the context of product lifecycle management: Role of standards, Computer-Aided Design, doi:10.1016/j.cad.2007.06.012.
[8] IPPOP (Integration of Product, Process and Organisation for engineering Performance Improvement) is a French RNTL network project labelled by the French Ministry of Economy, Finances and Industry. More information on http://ippop.laps.u-bordeaux1.fr/
[9] Robin V., Girard Ph., 2008, An integrated product-process-organization model to manage design system, International Journal of Product Development, accepted, to be published.
[10] Robin V., Sperandio S., Topliceanu G., Girard P., 2008, Managing product design by considering evolution of design context: from Product Lifecycle Management (PLM) to System Lifecycle Management (SLM), Proceedings of the 5th International Conference on Product Lifecycle Management, Seoul, Republic of Korea.
[11] Morlay C., 2001, Gestion d'un projet système d'information – Principes, techniques, mise en œuvre et outils, Edn Dunod.
[12] Quatrani T., 2000, Modélisation UML sous Rational Rose, Edn Eyrolles.
[13] Girard P., Merlo C., 2003, GRAI-engineering methodology for design performance improvement, Proceedings of the International Conference on Engineering Design, Stockholm, Sweden.


Integrated Design and PLM Applications in Aeronautics Product Development
D. Van Wijk1, B. Eynard2, N. Troussier2, F. Belkadi2, L. Roucoules3, G. Ducellier4
1 Pi3C, 127-129 avenue de Paris, Châlons-en-Champagne, F.51000, France
2 Université de Technologie de Compiègne, Department of Mechanical Systems Engineering, CNRS UMR 6253 - Roberval, BP 60319, rue du Dr. Schweitzer, Compiègne, F.60203, France
3 Arts & Métiers ParisTech, 2 cours des Arts et Métiers, Aix-en-Provence, F.13617, France
4 Université de Technologie de Troyes, CNRS FRE 2848 - Institut Charles Delaunay - LASMIS, BP 2060, 12 rue Marie Curie, Troyes, F.10010, France
[email protected], {benoit.eynard, [email protected], farouk.belkadi}@utc.fr, [email protected], [email protected]

Abstract
Well-known challenges in the aeronautics industry, namely reducing time to market, risks and development costs, could be addressed thanks to innovative design methods supported by PLM technologies. Such methods are based on integrated design or collaborative engineering, enabling close exchanges and cooperation between the project partners. The paper proposes a survey on integrated design methods and PLM technologies. It presents the development of a collaborative design platform, as part of the SEINE project, which aims to improve partners' cooperation in the French aeronautics supply chain. The paper also discusses how to include multiple areas of expertise and integrated design in this collaborative platform.
Keywords: Collaborative Engineering, Integrated Design, Data Exchange, PLM, Aeronautics Industry

1 INTRODUCTION
The aeronautics industry is hugely concerned by the competition between developed and emerging countries. In such a context, western companies develop strategic outlines and objectives to remain competitive [1]. By way of example, on one side the ACARE (Advisory Council for Aeronautics Research in Europe) insists on the following points [2]:
• Answering customer needs: in terms of security (Ex: five-fold reduction in the average accident rate), quality and affordability (Ex: 99% punctuality, no more than 15 or 30 min waiting at the airport), environment (Ex: 50% lower CO2 emissions, 50% lower fuel consumption) and the air traffic management system (Ex: handle 16 million flights a year)
• Securing global leadership (Ex: halve time to market with the help of advanced technologies and a new framework that permits and encourages companies to work together more effectively)
• Establishing supportive public policy and regulation (Ex: facilitate greater integration of European, national and private research programmes)
• Identifying the research agenda
On the other side, the NASA research orientations are concentrated on [3]:
• Improve mobility through the air and improve aviation capabilities.
• Improve aviation for national security and homeland defence.
• Keep aviation safe.
• Security of and within the aeronautics enterprise must be maintained.
• The US should continue to possess and develop its world-class aeronautics workforce.
• Assuring energy availability and efficiency for the growth of the aeronautics enterprise.
• The environment must be protected while sustaining growth in air transportation.

Facing these issues, the extended enterprise concept [4] aims to bring direct and indirect answers to some points of the above guidelines. In fact, it proposes a networked enterprise framework asking the actors of different companies to work together, thus allowing to find the best product solutions (by management of tasks), to reduce the development time (by improvement of the communication), to gain confidence between partners, etc. At the same time, this concept considers the constraints of the globalization context, among which the geographic dispersion of the partners. The extended enterprise is made possible through other methodologies, and this paper will consider that collaborative engineering and integrated design are two among the most important ones. Considering the current engineering way of working and the new possibilities, engineering activities could still strongly progress, especially in a collaborative mode. Indeed, there is still a lack in the way of managing product data between the different partners, so improvements in this domain could bring important advances towards the aeronautic policy objectives. Collaboration in engineering activities will always need a neutral (politically, maybe technically) mediator that will organize the team work around the project and that will manage the product data concerning this project. The SEINE ("Standard pour l'Entreprise Innovante Numérique étendue") project platform proposes to specify such a collaborative engineering mediator and to prepare the basis for a future deployment. In parallel, it cannot be ignored that an aircraft is deeply multidisciplinary and that there are still many problems in the exchange of heterogeneous data, along the whole lifecycle as well as inside the engineering step itself. In this sense, collaboration in product development has to take integrated design into account. The IPPOP project (Intégration Produit – Processus – Organisation pour l'amélioration de la Performance en ingénierie) provided results for integrated design and led to the development of a platform based on a PPO kernel (Product Project Organization). This paper will first make a short review of research works related to collaboration in engineering activities, especially in the aeronautics case; it will then present the SEINE project and its collaborative engineering platform. Finally, it will complete the vision of this platform with possible interactions with integrated design systems.

2 LITERATURE REVIEW

2.1 Aeronautic survey
The aeronautical industry is still considered a leading one concerning technology implementation and new concept development. Moreover, projects in this domain are structured in a way that is relatively well adapted to the methodologies presented in the introduction: indeed, an OEM's main aircraft project is clearly decomposed into many sub-assemblies developed by first-tier suppliers, which again decompose the assemblies under their responsibility into sub-assemblies, and so on [4] [5]. Collaboration cases can then be well identified. In consequence, many individual actions as well as governmental programs have already been engaged to develop collaborative product design. In [6], the authors present some European programs, notably the DIECoM project (Distributed and Integrated Environment for Configuration Management), which aims to improve cross-organisational integrated process and product configuration management in collaboration. The authors also mention the ENHANCE project (Enhanced Aeronautical Concurrent Engineering), whose objectives are to define a common way of working for companies in collaboration and to set up operational tools for this collaboration [7]. ENHANCE results have led to a wider project called VIVACE (Value Improvement through a Virtual Aeronautical Collaborative Enterprise). The VIVACE project was co-funded by the European Commission to address the Aeronautics Vision 2020 objectives of reducing development cost and lead time, and is decomposed into three sub-projects: aircraft studies, engine studies and advanced capabilities [8]. The last one was especially focused on collaboration (between organisations and development disciplines) and worked on an Engineering Data Framework management to improve it. Confirming the platform-centric collaboration approach, a work package was also in charge of developing a hub to implement the different concepts based on the STEP AP239 standard. Across the different projects, it can be observed that standards were more and more taken into account, as well as interoperability problems. In fact, interoperability can be considered as a support for collaboration. The interoperability needs are confirmed in projects like the ATHENA project (Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications), in which standards have also been studied (process standards: ISO 15288, CMII; as well as product standards: STEP AP214, 233, 209, 239). Although this project is supported by EADS, it does not only address the aeronautics domain but also automotive, telecommunications, etc. The expected advances are knowledge support and semantic mediation solutions, enterprise modelling in the context of the collaborative enterprise, cross-organisational business processes, an interoperability framework and services for the networked enterprise, and planned and customisable SOA (Service-Oriented Architectures) [9]. Another important point often underlined in the aeronautical research programs cited above is the progress of SMEs in terms of information technology. That was a central topic for the CASH project (Collaborative working within the aeronautical supply chain), which wanted to bring aeronautical SMEs into their clients' digital processes. The working team evaluated current leading research in order to master it and then to adapt, package and disseminate methods and best practices. The above part lists a certain number of governmental programs, without any pretension to be exhaustive; it only shows the main aeronautic priorities and the motivation for removing barriers to allow distributed and concurrent product development in the whole supply chain.

2.2 Collaborative engineering definition
The preceding sections have demonstrated that there is much interest in methodologies for engineering improvement. The following part reviews how they are understood in the literature. Collaborative engineering is still a relatively recent methodology; however, it already goes by many names. A first observation is that some people do not consider collaborative engineering as a science. In [10], the authors explain this ambiguity by its fuzzy position (between many disciplines) and situate it as the "practical application of collaboration sciences to the engineering domain". Visions like [11] are especially focused on geographical aspects: "Using collaborative engineering, the collocated team is replaced with an interactive team structure where the team members are geographically distributed and the best engineering talent can be applied to the design effort regardless of physical location". Other works are more focused on the interdisciplinary aspect and, at the same time, on the temporal aspect. In [5], it is underlined that some authors include integrated design and the lifecycle disciplines in the collaboration, while others see collaborative engineering as an extension of concurrent engineering. Another aspect that plays an important role in much of the literature is the technological aspect; however, this aspect is not the only and main one, as remarks [12], for whom collaborative engineering is "an Internet based computational architecture that supports the sharing and transferring of knowledge and information of the product life cycle amongst geographically distributed companies to aid taking right decisions in a collaborative environment". The former paragraph highlighted different definition points, but many authors have a more holistic approach. The research works of [10] introduce the following definition: "Collaborative engineering is a new socio-technical engineering discipline, which facilitates the communal establishment of technical agreements among a team of interdisciplinary stakeholders, who work jointly toward a common goal with limited resources or conflicting interests." Moreover, the authors attribute a large scope to the collaboration actions, which could be "across various cultural, disciplinary, geographic, and temporal boundaries". They enlarged this scope again in an application case with the Airbus Company, adding the organisational range (collaboration between or inside organizations). In [13] the definition is quite close to the previous one: "Collaborative Engineering supposes the Integration of the Product Development Process through Teamwork with all the areas involved in its Life Cycle. With this aim, product Design Methodologies and Tools are used to allow the regular exchange of the product-related information that is generated and to allow internal and external collaboration to take place. They are also employed to ensure that decision making is carried out in a synchronized way with general agreement, which thus allows firms to achieve the improvement of terms, quality and innovation required by the Client".

2.3 Integrated design definition
By making more and more complex items, Man began to decompose product development: we have changed from a global approach to a Cartesian one. In [14], it is underlined that this cutting of problems brought a new problem: integration. In addition, the authors add that integration has to be done at the company model and process level, at the data level and at the tools level. This last level is the main subject of [15], which takes the notion of integration effort into account: "new tools can be added to make the system more capable in solving design problems. This also implies that the system can be easily configured, e.g. without the need to change existing source codes". Considering all the different levels, [16] propose an application case using the PDES/STEP standard to integrate many disciplines. In [17], the authors agree that integrated design is a methodology based on a concurrent engineering environment. They remind us that a designer is part of a team, that a team is composed of many different competencies (technologists for technological solution choices, analysts for stress analysis with mechanical analysis tools, manufacturers, maintenance partners, etc.), and that in an integrated design approach "each of those actors must participate in the joint effort and indicate the own constraints as soon as possible during the design process". They finally propose to federate all the experts around one reference product database. A second possibility is to make interfaces between expert bases. For [18], integrated design "extends the scope of the design phase, such that the subsequent process requirements are considered alongside the product design" and, "Due to the incorporation of the later stages of the development cycle in the design stage, integrated design increases the information available to the designer and, hence, increases the design certainty in the early design process". As a short conclusion after this review, we can notice that the three notions "collaborative engineering", "integrated design" and "concurrent engineering" are strongly linked. In order to keep a simple and concise view, the paper will retain that the temporal aspect of the collaboration is mainly addressed by concurrent engineering, the multidisciplinary aspect by integrated design and the geographic aspect by collaborative engineering.

3 AERONAUTIC PLATFORM FOR MEDIATED COLLABORATION

3.1 The SEINE project
SEINE means "Standard pour l'Entreprise Innovante Numérique Etendue", which could be translated as "Standard for the Innovative Digital Extended Enterprise". It was proposed by the GIFAS ("Groupement des Industries Françaises de l'Aéronautique et Spatiale", meaning the French aeronautic and space industry group) and was in line with the French project call named "TIC&PME 2010" ("Les Technologies de l'Information et la Communication & les Petites et Moyennes Entreprises", i.e. ICT & SME, Information and Communication Technologies & Small and Medium Enterprises). The main goal was to improve and standardize, using innovative digital methods, the exchanges (data as well as processes) between OEMs and suppliers in the aeronautics and defence sector (and subsequently in other sectors having similar skills and suppliers) [19]. Those improvements are carried out along two axes: an SCM (Supply Chain Management) axis and a PLM (Product Lifecycle Management) axis, the latter being the only part of the project this paper is interested in. Like some projects presented in section 2.1, the main points treated are collaborative engineering in the aeronautical supply chain, product data standards, SME integration in the digital processes and the building of a platform. This platform approach can be explained by the essential necessity for aeronautical companies to use a "neutral" place for exchange and reconciliation, as said before, but also by the project methodology itself. Indeed, the research done here can be considered as "action research" as detailed in [20]: it addresses a complex problem requiring a systemic approach and improvement cycles, it contributes directly to problems related to the topic, studies need to be carried out at the same time as implementation, and change strategies need to be employed. In this case, a platform gave the project a base for applying this way of working. Some of the objectives to reach these goals are listed below:

• Assess and choose a standard data model.
• Specify standard collaboration processes for the specific case of aeronautics.
• Define platform services and functionalities needed to support the collaboration.
• Implement concepts through a working platform.
• Write the global specification for the deployment of an "in production" platform.

The innovating contribution compared to other projects, which have a relatively holistic view, is that the data management processes are centred on the gap between the companies (i.e. exchange processes) and not on the global company processes, even if the specification begins from companies' effective business scenarios (the reconciliation between company processes and business processes was also described). The purpose is not to redefine the complete collaboration (processes through the whole extended enterprise) but to improve exchanges and communication, according to the fact that people and companies are more productive working in their own environment [17]. Although the positioning is different from other projects, there is also a dynamic vision (processes) and a static vision (data model).

• ASP PDM: Minimum PDM services provided in ASP (Application Service Provider) mode for companies without such systems which must integrate the customer's digital chain.
• SME Shared Workspace: Creation of secure workspaces to provide an exchange place and manipulation in such environments.
• Data protection: To ensure that data dropped off on the platform are walled (needed because the platform is shared between many organizations), protected from hackers, and that accesses are controlled.
• Context delivery: Delivery of context data from the customer PDM system to the supplier in order to allow "design in context".
• Engineering data package: Product data exchange (particularly product structure) between partners through the platform.
• Collaborative review environment: 3D real-time product review between many actors on data on the platform, including all classic review functionalities (cuts, annotations, measures, communications).
Table 1: Main use cases treated in SEINE

3.2 Exchange processes specification
An example of the business scenarios considered for the specification work was a customer and his supplier designing a rough part and its machining instructions together, at the same time, in order to produce innovative alternatives and iterate on them. Another, more classical example was a customer giving a supplier the responsibility to develop a sub-assembly: the customer delivers the specification, then the supplier carries out his work and provides the customer with the completed work, which the customer has to integrate into the full product. Such scenarios led to many "Use Cases", which are seen here as the main process bricks needed to complete a scenario. After a selection, seven of them were retained to demonstrate the ideas. Table 1 illustrates the main use cases treated in the SEINE project. It has been remarked that all these different use cases can be decomposed into many different services. As an example: "Engineering data package" consists in requesting the elements to send to the platform, packaging them, delivering the package to a partner through the platform and acknowledging the partner's reception. "Context delivery" consists in selecting the context elements, sending them to the platform, distributing the context elements depending on the partners and notifying the partners. Both use cases have common services like "Send product data package", "Receive product data package" or "Archive envelope" (a sketch of this service decomposition is given below).
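As a purely illustrative sketch (the type and function names are our assumptions, not SEINE platform services), the "Engineering data package" use case could be decomposed into the common services named above as follows:

```python
# Illustrative sketch of the "Engineering data package" use case decomposed
# into common services; names are assumptions, not SEINE platform APIs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Package:
    sender: str
    recipient: str
    elements: List[str] = field(default_factory=list)

def send_product_data_package(platform: List[Package], pkg: Package) -> None:
    # "Send product data package": drop the envelope on the neutral platform.
    platform.append(pkg)

def receive_product_data_package(platform: List[Package], me: str) -> List[Package]:
    # "Receive product data package" plus acknowledgement of reception.
    received = [p for p in platform if p.recipient == me]
    for p in received:
        print(f"{me} acknowledges reception from {p.sender}")
    return received

neutral_platform: List[Package] = []
send_product_data_package(neutral_platform,
                          Package("OEM", "Supplier", ["chassis product structure"]))
receive_product_data_package(neutral_platform, "Supplier")
```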

3.3 Reference data model
The previous section proposed to unify the exchange "protocol". However, during the execution of the exchanges, this protocol has to be supported by a neutral reference model in order to allow semantic correspondences in the communication between the heterogeneous environments. We have seen earlier that heterogeneity can be solved by two kinds of solutions: systems using the same database, or translations between systems that each have their own database. And yet, the aeronautical industry needs to own, at home, the elements it works on (for questions of property rights, psychology, etc.). So, as a project (meaning a node of the OEM final product) is the intersection of each case of collaboration, SEINE proposes a walled standard reference product structure that will archive and conciliate the partners' work. Many neutral data models exist, so there is a choice to make, as was done in [21]. Among the standard possibilities, the two main candidates retained were STEP AP239 (PLCS) and STEP AP214. An evaluation has been done depending on the maturity, the application target, the nature of the standard, etc. The first choice was finally the latter (STEP AP214), because it is more mature and better meets the first need (i.e. for the demonstration step). However, the two propositions are not incompatible, and the former (STEP AP239) fits the final needs better. That is why it is planned to migrate in the long term to the PLCS standard.

3.4 Use of PLM concepts
It could be imagined to specify a new collaborative information system from nothing, but PLM systems propose interesting concepts, which are especially important in the aerospace industry, as attested in [22]. Moreover, PLM systems are already in production in aeronautical companies, and the collaboration system must not require changes to private systems, which is named a "non-invasive" system in [8]. In fact, it has been decided to consider current PLM capabilities to define the platform specifications. Among the different PLM concepts, many have been used, not to support product development as usual, but to enable performing data management exchange in engineering collaboration. The following lines give examples and suggestions.

Use of lifecycle to follow processes: the first idea people think when we talk about processes is “workflows”. However today, cross organizational workflows are difficultly realized, processes steps are not effectively traced, etc. Thus, the lifecycle could be seen as a “workflow tracer”. In this case, lifecycle expressing the workflow has to be affected to a specific object because maturity of the different objects (parts, documents, ECM: Engineering Change Management, etc.) could not be sacrificed for this use of lifecycle. Then objects carrying the exchange lifecycle are linked to the product data objects that have their on maturity and receive different signatures, process states, and other process information during the running the process



Use of the product structure: this paper agrees with the idea that people and companies are more effective when they work with their own language in their own environment. The work team will thus be more productive if the common model is able to receive the data specific to the different companies, but these specificities must be seen only by the people concerned. Moreover, due to the aeronautical project structure (see above), the collaboration can be centred on the product structure. In practice, the neutral product structure has been extended with company specificities (attributes, etc.), and access to the customized parts depends on the person's organization, role and node level in the product structure.
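A minimal sketch of that access rule, assuming a simple dictionary-based node representation (illustrative only): the neutral structure carries company-specific attribute extensions, and each organisation only sees its own.

```python
# Hedged sketch: a neutral node extended with company-specific attributes,
# filtered by organisation before display (names are invented for illustration).

node = {
    "id": "wing-rib-07",
    "neutral": {"designation": "Rib 7"},           # shared reference structure
    "extensions": {                                # company-specific attributes
        "OEM": {"stress_margin": 1.2},
        "SupplierA": {"machining_code": "MC-11"},
    },
}

def view_for(node, organisation):
    """Neutral data plus only the requesting organisation's own extensions."""
    visible = dict(node["neutral"])
    visible.update(node["extensions"].get(organisation, {}))
    return visible

print(view_for(node, "SupplierA"))   # {'designation': 'Rib 7', 'machining_code': 'MC-11'}
```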

Notification for data pull: during the exchanges, data can either be pushed by the collaborative system to the partners' own systems or pulled from it. Data push allows better synchronization but is subject to many constraints (security, etc.), while pure data pull leaves too much liberty. Subscription mechanisms were therefore used to find a compromise and simulate a flexible data push.
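The compromise can be illustrated as follows; this is a toy sketch with invented class and method names, in which the platform pushes only notifications and each partner pulls the data under its own constraints.

```python
# Minimal sketch of the subscription compromise: on change, the platform only
# notifies subscribers; each partner pulls the payload when it is ready.

class Platform:
    def __init__(self):
        self.subscriptions = {}          # object id -> list of notification callbacks
        self.data = {}

    def subscribe(self, object_id, notify):
        self.subscriptions.setdefault(object_id, []).append(notify)

    def publish(self, object_id, payload):
        self.data[object_id] = payload
        for notify in self.subscriptions.get(object_id, []):
            notify(object_id)            # notification only, not the data itself

    def pull(self, object_id):
        return self.data[object_id]

platform = Platform()
platform.subscribe("P-42", lambda oid: print(f"{oid} changed, pull when ready"))
platform.publish("P-42", {"maturity": "released"})
print(platform.pull("P-42"))
```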

4 INTEGRATED DESIGN CHALLENGES
The mediated aeronautical collaboration described so far addresses high-level exchanges between companies, where "high level" means the product structure level (Figure 1). However, the authors think that collaboration should also be possible at a lower level, namely the data (i.e., parameter) level of the different product models. The heavy product structures managed by PDM systems are necessary to develop a product in a collaborative mode, but they are not suited to integrated design because they are not flexible enough in terms of data heterogeneity (they are only linked to the 3D models). Indeed, managing the data content directly, instead of the files themselves and the product structure, avoids format constraints and simplifies heterogeneity; it also allows collaborating on exactly the specific information needed without moving the related information. In order to support collaboration on such "low-level" data, and thereby collaboration between the different competencies, another neutral data model has been considered in addition to the first neutral model used in SEINE, namely the PPO model. This paper takes an interest in the PPO neutral model because it is generic enough to map low-level, heterogeneous data. Moreover, a platform based on a kernel implementing these concepts has already been developed in the IPPOP project.
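As a hedged illustration of what parameter-level collaboration could look like (this is not the actual PPO schema), the sketch below models components that expose named, unit-qualified parameters owned by different expert tools, which is the granularity at which the XML/XSLT synchronization described next would operate.

```python
# Rough illustration: parameter-level objects that different expert tools can
# share without exchanging whole files or product structures.

from dataclasses import dataclass, field

@dataclass
class Parameter:
    name: str
    value: float
    unit: str
    owner_tool: str                      # the heterogeneous source system

@dataclass
class Component:
    name: str
    parameters: dict = field(default_factory=dict)

    def bind(self, param: Parameter):
        self.parameters[param.name] = param

rib = Component("rib_7")
rib.bind(Parameter("thickness", 3.2, "mm", owner_tool="CAD"))
rib.bind(Parameter("max_stress", 180.0, "MPa", owner_tool="FEA"))
print({n: (p.value, p.unit) for n, p in rib.parameters.items()})
```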

As the two models have to define the same product, a synchronization link has to be established between them. An interoperability module has therefore been developed between the PPO kernel and the PLM collaborative platform to synchronize the product structures and files with the PPO model, based on XML exchanges through XSLT translation [23]. The correspondence between the two models is mainly organized around the "part" concept in terms of product structure, because parts reflect the design intention: for example, the experts' first reflections will be organized around components that will lead to parts. Those parts then have to be managed (maturity, lifecycle, etc.), which is done in the PLM system.

5 SUMMARY
This paper presented the SEINE contributions in the aerospace context. It defined a reference space to receive the product data in the gap between companies. It described collaboration "protocols" focused on the exchanges between the organizations and the PLM concepts supporting the exchange processes, and it also discussed the ability of STEP to support them. Finally, it extended the product structure collaboration to data content in order to add an integrated design dimension to this collaboration. The limits of the work presented in this paper, which could be addressed in further work, are: the link and synchronization between the data content and the files themselves (because the parameters manipulated in the PPO kernel and the files using these parameters are still updated manually); the connection of any new digital tool to the two collaboration platforms with limited effort (because the mapping and connection between the considered elements is still quite heavy and manual); and relating this work to other data management systems (DMS).

6 ACKNOWLEDGMENTS
The authors would like to thank all the participants of the SEINE project for sharing their knowledge, and the colleagues of the UTT (Université de Technologie de Troyes, France) working on the PPO platform.

[Figure 1: SEINE and PPO systems interoperability. The SEINE side covers the organisations, departments and experts together with the product structure, its items/parts and the associated files/docs; the PPO side covers the product details down to the data (i.e., parameters) handled by the departments and experts.]


7 REFERENCES
[1] AIA 2008,
[2] ACARE 2008,
[3] NASA 2007,
[4] Pardessus T., 2001, The multi-site extended enterprise concept in the aeronautical industry, Air & Space Europe, 3: 46-48.
[5] Nguyen Van Th., 2006, System engineering for collaborative data management systems: Application to design - simulation loops, PhD thesis, Ecole Centrale Paris.
[6] Delpiano M., Fabbri M., Garda C., Valfrè E., 2002, Virtual Development and Integration of Advanced Aerospace Systems: Alenia Aeronautics Experience, Symposium on Reduction of Military Vehicle Acquisition Time and Cost through Advanced Modelling and Virtual Simulation, Paris, France, 22-25 April.
[7] Braudel H., Nicot M., Dunyach J.C., 2001, Overall presentation of the ENHANCE project, Air & Space Europe, 3: 49-52.
[8] VIVACE 2007, Final Technical Achievements brochure,
[9] Ruggaber R., 2005, ATHENA - Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Application, International Conference on Interoperability of Enterprise Software and Applications, Geneva, Switzerland, 23-25 February.
[10] Lu S.C.Y., Elmaraghy W., Schuh G., Wilhelm R., 2007, A scientific foundation of collaborative engineering, CIRP Annals - Manufacturing Technology, 56: 605-634.
[11] Monell D.W., Piland W.M., 2000, Aerospace Systems Design in NASA's Collaborative Engineering Environment, Acta Astronautica, 47: 255-264.
[12] Huang S., Fan Y., 2007, Web-Based engineering portal for collaborative product development, Lecture Notes in Computer Science, 4674: 369-376.
[13] Vila C., Romero F., Contero M., 2004, Implementing collaborative engineering environments through reference model-based assessment, Lecture Notes in Computer Science, 3190: 79-86.
[14] Nahm Y.E., Ishikawa H., 2005, An Internet-based integrated product design environment. Part II: its applications to concurrent engineering design, International Journal of Advanced Manufacturing Technology, 27: 431-444.
[15] Zhang W.J., Luttervelt C.A., 1995, On the Support of Design Process Management in Integrated Design Environment, CIRP Annals - Manufacturing Technology, 44: 105-108.
[16] Zha X.F., Du H., 2002, A PDES/STEP-based model and system for concurrent integrated design and assembly planning, Computer-Aided Design, 34: 1087-1110.
[17] Brissaud D., Tichkiewitch S., 2000, Innovation and manufacturability analysis in an integrated design context, Computers in Industry, 43: 111-121.
[18] Iqbal A., Hansen J.S., 2006, Cost-based, integrated design optimization, Structural and Multidisciplinary Optimization, 32: 447-461.
[19] SEINE 2007,
[20] Mejía R., López A., Molina A., 2007, Experiences in developing collaborative engineering environments: An action research approach, Computers in Industry, 58: 329-346.
[21] Moalla N., Chettaoui H., Ouzrout Y., Noel F., Bouras A., 2008, Model-Driven Architecture to enhance interoperability between product applications, International Conference on Product Lifecycle Management - PLM'08, Seoul, Korea, 9-11 July.
[22] Belkadi F., Troussier N., Huet F., Gidel T., Bonjour E., Eynard B., 2008, Innovative PLM-based approach for collaborative design between OEM and suppliers: Case study of aeronautic industry, Computer-Aided Innovation, Springer-Verlag, Berlin.
[23] Van Wijk D., Roucoules L., Eynard B., Etienne A., Guyot E., 2008, Enabled Virtual and Collaborative Engineering Coupling PLM System to a Product Data Kernel, 5th International Conference on Digital Enterprise Technology, Nantes, France, 22-24 October.


The Mechanisms of Construction of Generic Product Configuration with the Help of Business Object and Delay Differentiation
S-H. Izadpanah, L. Gzara, M. Tollenaere
G-SCOP Laboratory, Grenoble Institute of Technology, France
{Seyed-Hamedreza.Izadpanah, Lilia.Gzara, Michel.Tollenaere}@inpg.fr

Abstract
Product configuration has a central role in PLM applications. The generic product configuration, which factors out the similarities between product types, is used to facilitate production by permitting delay differentiation. In this research, the role of business objects in the construction of the generic product configuration is studied. The rules used in the procedure of structuring the generic and specific configurations are then presented. These rules are based on the "variability points" method, whose concept is to identify the major variabilities of different products by finding the dependencies between their properties. The industrial case of culinary articles is then studied.
Keywords: Product Lifecycle Management, Product Structure Model, Delay Differentiation, Genericity in product configuration

1. INTRODUCTION

Implementation of a PLM system within an enterprise is an opportunity to improve its information management and rationalize its document structure, since it first requires structuring a great deal of product information. The product information to be structured in a PLM system consists essentially of the documents and the configuration; the structuring of the product configuration is studied in this research. Analysing some industrial cases of PLM deployment [Rangan 05] shows that, in many cases, the management of product configuration amounts to constructing the configuration of each commercialized product independently of the configurations of the other similar products of the same production line. The amount of information to be managed is therefore relatively huge. This manner of configuration management can create difficulties concerning the creation and possible future modification of configurations. Obviously, many of the commercialized products fabricated by an enterprise belong to a family of products or, more explicitly, to a production line; it is therefore frequent that some of their properties are similar and common. This commonality has not been taken into account in many industrial cases: each product configuration is constructed independently of the configurations of the other similar products. This procedure makes the system huge and difficult to manage, and especially to modify.



So, in order to avoid this problem, it seems suitable to factorize the common information of all the products of a production line (from their configurations) and to create one product configuration representing the "generic product configuration", as opposed to the "specific product configurations" that are constructed separately for each product [Mannisto 01]. It should be noticed that a specific product configuration is created based on one of two logics (or viewpoints): design or fabrication [Jiao 99]. Two types of configuration may therefore be managed in PLM systems, "as-designed" and "as-built", and these two types of specific configuration can correspond to two types of generic configuration. The generic configuration "as-designed" is better known, because the generic configuration is usually created for design-based purposes. This research aims to propose a mechanism that leads to the construction of the generic product configuration "as-built" as well as "as-designed", in such a way that the creation and, above all, the modification of specific configurations becomes easier and more straightforward. The basic issues of generic product configuration, of both the "as-designed" and "as-built" types, are studied and presented in the first section. In the second section, the construction rules of the generic product configuration from the "as-designed" viewpoint are discussed and validated on the industrial case. In the following section, the construction of the generic product configuration from the "as-built" viewpoint is analysed, using the delay differentiation exigencies. The article finishes with the conclusion and perspectives.

2. GENERIC PRODUCT CONFIGURATION

The generic configuration has been proposed with the aim of regrouping all the properties of the products of the same production line, so it contains all the possible options and variants of these products [Gzara 03]. Some of its important properties are presented here. Like any other configuration, the generic configuration is essentially composed of two different relationships: the composition relationship and the specialization relationship.

- Abstraction: all the objects (documents, components, etc.) in the generic product configuration are abstract, i.e. virtual. This is because the generic configuration should cover all the similar products of the same type, so there are several parameters without values. Going from the generic configuration to a specific one is done by allocating values to these parameters, and the values of these parameters can be optional or variable.

- Heritage: within the product configuration, the specialized objects (variant objects assigned to a generic object) inherit the properties of their generic object. This means that if a decomposition schema is defined for a generic composed object, all the objects that are specializations of this generic object obey this decomposition schema. This is one of the advantages of using the concepts of object-oriented theory in the product configuration context; it is shown in Figure 1, and both relationships are also sketched in code after this list.

[Figure 1: the heritage of composition within the specialization. A generic object (with an unvalued parameter Color = "") composed of generic components 1-4 is specialized into objects with Color = "Blue" and Color = "Red", which inherit the same component decomposition.]

Thanks to this heritage, the amount of information to be stored in the system decreases, as all the common knowledge is transferred to the generic level and saved only once. Moreover, the evolution of the system is much easier, because a modification in the generic product configuration is automatically propagated to all of its specialized specific product configurations.

A point to be noticed here is the continuity of this heritage. The heritage may be applied only at the beginning of the structuring of the specific configuration, which means that the saved specific configuration is no longer dependent on the generic product configuration. The better strategy, however, is for this heritage to continue during the lifecycle of the specific product configuration; in that case, the specific product configuration is reconstructed from the pattern of the generic product configuration each time it is used.
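The sketch announced above: a toy Python model of the two relationships (names are illustrative, not a PLM schema), in which a generic object carries a decomposition and unvalued parameters, and specialization allocates values while inheriting the composition.

```python
# Toy sketch: composition + specialization with heritage of the decomposition.

class GenericObject:
    """Generic node: composition (children) plus abstract, unvalued parameters."""
    def __init__(self, name, children=(), parameters=()):
        self.name = name
        self.children = list(children)                    # composition relationship
        self.parameters = {p: None for p in parameters}   # abstract: no values yet

    def specialize(self, name, **values):                 # specialization relationship
        specific = GenericObject(name, self.children, self.parameters)
        specific.parameters.update(self.parameters)       # inherit current values
        for key, value in values.items():
            if key not in specific.parameters:
                raise KeyError(f"not a parameter of {self.name}: {key}")
            specific.parameters[key] = value
        return specific

generic = GenericObject("frying_pan", ["dish", "handle"], ["color", "diameter"])
blue = generic.specialize("frying_pan_blue", color="Blue")
print(blue.children, blue.parameters)   # inherited decomposition, 'color' now valued
```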

2.2 As-Built vs As-Designed
The product configuration is based on the business viewpoint of its user. This means that the configuration made for manufacturing or fabrication differs from the configuration of the same product structured for the design phase. The configuration for the design viewpoint is a more conceptual and functional structure, whereas the one with a fabrication viewpoint is based more on the processes and procedures of manufacturing [Gzara 03].

3. THE CONSTRUCTION RULES FOR THE "AS-DESIGNED" GENERIC PRODUCT CONFIGURATION AND THE CENTRAL ROLE OF BUSINESS OBJECTS

3.1 The definition of the basics of the method
The properties of the generic configuration have already been discussed. The genericity of the product should be conserved: the as-designed generic configuration must represent the variety of specific products designed by the company. Heritage is one of the most important relations in the product configuration: a specific product inherits the properties and relationships of its father (the generic product), and if the generic configuration cannot support this, it is not appropriate. The as-designed generic configuration also depends on the enterprise's design process. For example, the way the components of a product are codified, and their order, may influence the generic configuration; in other cases, the organization of the design department and its logic influence the configuration. Our proposed method for structuring the as-designed generic configuration is based on the variability point technique. In this technique, the whole variety of specific products is studied and compared in order to find the complete set of properties that make them different. This set of variability points is then analysed in order to find the points on which the other points mostly depend. These points are then considered as the properties that should be valued, even in the generic configuration; the other properties can be regarded as generic parameters (without values). In order to illustrate this method, an industrial case study is elaborated in the next section.

3.2 The industrial case
The case chosen to validate the proposed method of generic product configuration construction is a frying pan. The different parts of a frying pan are presented in Figure 2.



[Figure 2: the decomposition of a frying pan into its different parts.]

The initial decomposition of the specific product configuration of a frying pan is shown in Figure 3. As can be seen there, the objects presented in the product configuration are not only the components of the product, but also the tools and the documents associated with the components.

[Figure 3: the decomposition of a frying pan. The configuration tree contains the frying pan with its lid (Disk L, Tools L), its equipments (screw, basement, pin, handle), its deep dish (Disk DD, Grille, Tools DD) and its pressed disk (Tools DDD).]

Table 1: the different properties determining a frying pan.
  The Object                   The properties
  Frying pan (whole product)   Dimension of Product; Family of Product
  Deep Dish                    Dimension DD; Family DD
  Pressed Disk                 Diameter PD; Thickness; Tools DDD
  All of the equipments        Diameter; Family
  Lid                          Diameter L; Tool L

Table 1 lists the different properties of the product configuration that cover the variability points, and Table 2 presents the extracted list of dependencies between these variability points.


Table 2: the dependencies between variability points.
  The Parameters                        The dependencies
  Dimension DD                          Dimension of product
  Family DD                             Family of product
  Dimension PD                          Dimension of product
  Tools DDD                             Dimension of product
  Dimension related to the equipments   Dimension of product
  Family related to the equipments      Family of product
  Diameter L                            Dimension of product
  Tool L                                Dimension of product

These dependencies were found via business rules and experience, and were obtained from discussion meetings with the members of the technical sector of the industrial partner.

It appears that the determining variables are the "diameter of product" and the "family of product". The constant parameters in the generic frying pan are therefore these variables, so a generic frying pan is defined, for instance, as: generic frying pan of diameter 20 and of the family "Best Cuisine". This is the starting point of the specialization of the product.

4. THE CONSTRUCTION RULES OF THE "AS-BUILT" GENERIC CONFIGURATION AND THE CENTRAL ROLE OF DELAY DIFFERENTIATION

As the generic configuration "as-built" is related to the manufacturing process, its construction is based on genericity in fabrication. One of the subjects that analyses genericity in the fabrication line is delay differentiation. Delay differentiation is a production line design strategy for high-variety or highly customized production. The concept of delay differentiation is how to maintain the genericity of a product during its fabrication [Agard 02] [Ghiassi 03]. The production line consists of several stations, and at each station one of the production activities is performed. The starting product has an incomplete configuration; during fabrication, its configuration evolves and is enriched. This evolution is done by specifying or determining the values of the parameters, which means either choosing an option or a component, or valuing a parameter (such as the colour). The evolution of the product configuration is thus considered a key concept in this study. This evolution is similar to the evolution from the generic configuration to the exemplary configuration via several specific configurations.

Delay differentiation is achieved by the elimination of intermediary products and the construction of a generic intermediary product. Figure 4 shows a simple example of delay differentiation.

[Figure 4: the concept of product differentiation. Before delay differentiation, each pullover colour runs through its own coloring, kitting and distribution line; after delay differentiation, kitting is performed once on the generic product and only coloring and distribution differentiate the red and blue pullovers.]

Delay differentiation therefore consists of retarding the differentiation point of the product or process within the production line. At the differentiation point, the different specific products obtain their own valued properties, identities or codifications.

The concept proposed here is like the variability points technique used in the previous section, but the variability relates to the production process. The procedure is therefore similar to the previous one, except that here the parameters of the product that are related to fabrication are considered. Obviously, the product configuration extracted from this mechanism is different from the previous one: here, the decomposition of the product is based on the fabrication conditions.

The first stage is to find the parameters and variables¹ that specify a product, and then to categorize these parameters and variables from the fabrication viewpoint. This is done by analysing the different specific product configurations "as-built". The different stations of the production line are then studied in order to determine which parameter of the product is specified at each station. In the case of the frying pan, for example, the colour of the product becomes fixed at the coloration station, while the family and dimension of the product take their values at the press station.

¹ The parameter may have a fixed value, but the variable is not valued.

The next stage is to find the dependencies between stations, meaning the obligatory order of fabrication; these are considered as the constraints of fabrication. For example, a coloring or coating station cannot be placed before the pressing station. Finally, the stations are ordered, from the most dependent station down to the most independent, in order to fulfil the delay differentiation exigencies, and the corresponding generic product configuration is constructed based on this proposed order. The generic configuration "as-built" is thus a raw product in the first stage, and each station specializes one or more of its properties.
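One plausible reading of this ordering step, sketched below with invented station names and constraints: prerequisite-free stations are placed first, so that the stations fixing differentiating properties come as late as the fabrication constraints allow.

```python
# Hedged sketch of the station-ordering step (hypothetical data): order the
# stations so that the most dependent ones, which fix differentiating
# properties, are executed as late as possible.

constraints = {
    "pressing": set(),                    # independent station
    "coloring": {"pressing"},             # must follow pressing
    "kitting": {"pressing", "coloring"},  # most dependent, placed last
}

ordered, remaining = [], dict(constraints)
while remaining:
    ready = sorted(s for s, pre in remaining.items() if pre <= set(ordered))
    if not ready:
        raise ValueError("cyclic fabrication constraints")
    ordered.extend(ready)
    for station in ready:
        del remaining[station]

print(ordered)   # ['pressing', 'coloring', 'kitting']
```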

5. CONCLUSION

The generic product configuration is considered one of the most important structures in PLM systems. On the other hand, there is a variety of configurations made for each type of fabricated product. This diversity of products, which is the source of the huge quantity of different configurations to be managed in the system, obliges us to take the concept of genericity in product configuration into account and to search for this genericity within all the types of a product line. So, in the context of structuring the product configuration, which is done from the design viewpoint, genericity is inevitable. Moreover, in the fabrication process, delay differentiation has been introduced in order to reduce costs. This method of ordering the production line is used with the objective of preserving the genericity of the product as long as possible during production, and this genericity may be represented in the form of the generic product configuration as-built. In this study, we construct the generic product configuration as-designed from the basics of the design process and from the variability points. This "as-designed" generic configuration must be capable of facilitating the design process, the development of the product, the evolution of the configuration, etc. Subsequently, the generic product configuration as-built is extracted from the delay differentiation exigencies; the generic configuration and delay differentiation are thus considered as highly related concepts. The generic configuration as-designed concerns the static genericity of the designed product, whereas the generic configuration as-built, like delay differentiation, is dynamic: it concerns the genericity of a product during its fabrication process. In this research, the genericity of the configuration within PLM systems was studied. Another important subject to be studied is genericity in the documentation. A further subject that should be analysed in the future is the relationship between the two generic configurations, as-built and as-designed: the mechanisms of transformation between these configurations, as well as the coherence that should be maintained between them, are interesting topics for study.


6. REFERENCES

[Agard 02] Agard B., 2002, Contribution à une méthodologie de conception de produit à forte diversité, PhD thesis, INPG.
[Gzara 03] Gzara L., 2003, Product information systems engineering: an approach for building product models by reuse of patterns, Robotics and Computer Integrated Manufacturing, 19: 239-261.
[Mannisto 01] Mannisto T. et al., 2001, Multiple abstraction levels in modelling product structures, Data & Knowledge Engineering, 36.
[Jiao 99] Jiao J., 1999, An Information Modeling Framework for Product Families to Support Mass Customization Manufacturing, Annals of the CIRP, 48.
[Ghiassi 03] Ghiassi M. et al., 2003, Defining the Internet-based supply chain system for mass customized markets, Computers & Industrial Engineering, 45.
[Rangan 05] Rangan R.M. et al., 2005, Streamlining Product Lifecycle Processes: A Survey of Product Lifecycle Management Implementations, Directions, and Challenges, Journal of Computing and Information Science in Engineering, 5.

Interoperability and Standards: The Way for Innovative Design in Networked Working Environments

C. Agostinho¹, B. Almeida¹, M.J. Nuñez-Ariño², R. Jardim-Gonçalves³
¹ UNINOVA-GRIS, Group for the Research in Interoperability of Systems, Campus da Caparica, 2829-516 Caparica, Portugal
² AIDIMA – Instituto tecnológico del mueble, madera, embalaje y afines, Valencia, Spain
³ DEE, FCT-UNL – Universidade Nova de Lisboa, Caparica, Lisbon, Portugal
[email protected], [email protected], [email protected], [email protected]

Abstract
In today's networked economy, strategic business partnerships and outsourcing have become the dominant paradigm: companies focus on core competencies and skills such as creative design, manufacturing, or selling. However, achieving seamless interoperability is an ongoing challenge for these networks, owing to their distributed and heterogeneous nature. Part of the solution relies on the adoption of standards for design and product data representation, but for sectors predominantly characterized by SMEs, such as the furniture sector, implementations need to be tailored to reduce costs. This paper recommends a set of best practices for the fast adoption of the ISO funStep standard modules and presents a framework that enables the usage of visualization data as a way to reduce costs in manufacturing and electronic catalogue design.
Keywords: Interoperability, Modular Architectures, Morphisms, Standards, Visualization

1 INTRODUCTION
The globalised nature of the world economy is producing a tremendous increase in trade and investment. Nevertheless, in such an open market, the challenges to organizations, especially the smaller ones, are real, and they must protect themselves to ensure that their competitiveness does not decline. Customers demand more information every day, and it must be complete, up to date, understandable and free of errors [1]. Electronic business as the way of communication will only be effectively achieved by industrial organizations when product data, business and technology become fully aligned and interoperable. To accomplish this goal, the implementation of standards is a must: their usage accelerates technological and organisational change, thus improving innovation performance [2]. Designers and manufacturers using standards will gain a considerable advantage over those that do not. Sending and receiving e-commerce documents in a standardised format may ease access to new markets and facilitate the management of product data through the product life cycle (PLC) phases, distributing information from designers to manufacturers, retailers and e-marketplaces. These advantages make it possible to reduce administration costs when handling quotations, orders, etc., and open the way to electronic catalogues, product customization, user-centric design and e-commerce. However, in SME-based industries such as furniture, Information and Communication Technology (ICT) systems, namely those with greater concerns for interoperability, are still often viewed with some scepticism: organisations seemingly spend large amounts of time and effort trying to implement standard recommendations and training their employees [3].



Therefore, this paper, supported by the European research project INNOVAFUN (standards.eu-innova.org/Pages/Innovafun/Default.aspx), proposes a methodology based on use-cases that serves as a guideline for the adoption of STEP standards [4], covering the needs expressed and promoting innovative and error-free design. In addition, to help SMEs reduce the costs related to the manipulation of geometrical information in the design, manufacturing and commercial stages, the authors propose a framework based on open standards for the usage of visualization data. The challenge is to extract basic geometry information from complex CAD drawings and make it available to non-expert users [5].

2 ISO 10303-AP236, THE FUNSTEP STANDARD
To cope with interoperability problems in the furniture industry supply chain, the funStep group (www.funstep.org) engaged in standardization activities within the STEP group of standards and created the funStep standard, officially known as ISO 10303-236 [6]. This standard, also known as Application Protocol 236 (AP236), is the part of STEP that defines a formalized structure for catalogue and product data in the industrial domains of the furniture sector. AP236 focuses on the product definition of kitchen and domestic furniture, and is extensible to cover the whole furniture domain (e.g., bathroom, office, etc.). It is a foundation for data exchange in the furniture industry, so that all the software involved in the design, manufacturing and sale of a product understands the same vocabulary [6].

2.1 Modular Architecture
AP236 is designed to optimize the reuse of existing standard models, and modularization was the answer. Similar and common requirements were identified in existing STEP APs, and subsets of these models were selected for integration into AP236 (see Figure 1) [7][8]. This characteristic enables a faster standard development process and guarantees cross-sectorial interoperability, since some of the modules are the same. Product and interior designers, like other stakeholders, may now be part of multiple supply chains without greater concerns about interoperability issues. In addition to reuse, modularization in AP236 also makes it possible to define implementation classes and options according to stakeholder profiles. In the furniture case, for example, retailers, manufacturers, suppliers, e-marketplaces and interior designers/architects are the principal stakeholders, whose characteristics and relationships lead to different implementation requirements [9].

[Figure 1 - Modular STEP AP: within the STEP AP set of modules, an implementation class groups reused and unused external AP modules into conformance classes (CC1, CC2, ..., other CCs).]

AP236 therefore groups the standardized modules from the STEP community into six different implementation sets (designated conformance classes, CCs in Figure 1). With them, anyone can implement funStep at different levels of compliance¹, namely: 1) simplified catalogue representation (CC1); 2) catalogue data and product geometry representation (CC2); 3) parameterized catalogue (CC3); 4) interior decoration project (CC4); 5) parameterized catalogue data and product geometry representation (CC5); 6) full AP236, which encompasses the others (CC6).

¹ The enumerated names are simplified and do not correspond to the official AP236 CC names. Please refer to [1] for the formal designations.

3 BEST PRACTICE METHODOLOGY FOR THE ADOPTION AND IMPLEMENTATION OF FUNSTEP
Traditional manufacturing sectors are interested in changing and evolving. They are motivated to innovate and explore new markets by means of global integration, creative and sustainable design, and the homogenization of business methods and services, and also to explore opportunities through wider collaboration and better customer service and support [10]. To support this, the funStep standard was officially published by ISO in December 2006, and even before that, organizations had been demonstrating interest in using popular technologies such as XML to implement it [11][12][13]. Despite the value of the openness of the solution, which prevents future dependence on proprietary technology or services and thus assures the reusability of investments, companies perceive a risk in following these new technologies. Knowledge costs are also considered a threat, as extra personnel training may be required.

Due to their reduced size and lack of resources, and given the complexity of STEP technologies, SMEs have been facing difficulties understanding and implementing the standard [9]. Therefore, the funStep group, of which the authors are part, has defined a set of innovative services and implementation guidelines for the adoption of the funStep standard, in order to help organizations overcome these barriers.

3.1 funStep services
The funStep services are available to the end user in the form of: a) software services; b) training services; c) validation services; and d) consultancy services, to support funStep standard-based solutions [14]. The services have the objective of assisting with the comprehension of the funStep standard and the implementation process, and also with the development and design of new business practices in SMEs. They offer new opportunities for innovation and content management, while also achieving lower costs and more rapid deployment.

Software services: they are the key to complementing legacy systems, or to supporting new software design and development in different companies. With businesses needing closer cooperation between suppliers and customers, companies need the capability to link up their systems quickly with other companies.

Training services: they are meant to accelerate the transfer of knowledge, skills and competencies to the stakeholders according to their requirements and profiles. The training is structured in the form of modularized tutorials [15] and is delivered in different ways: traditional classroom, virtual classroom and e-learning.

Validation services: the validation of implementations plays an important role, guaranteeing that the stakeholders are using the funStep standard correctly and are interoperable.

Consultancy services: whenever the case justifies it, the funStep community may designate experts to provide external in-house services.

  Level 1 (Partially Compliant)   Catalogue   Geometry   Expressions   Interior decoration
  CC1                             √
  CC2                             √           √
  CC3                             √                      √
  CC4                             √                      √             √
  CC5                             √           √          √
Table 1 - Level 1 of funStep compliance.

3.2 funStep compliance and ICT adoption
The ideal scenario for communication between two different furniture stakeholders is that both are fully compliant with the funStep standard for product data. If that is not possible, the stakeholder receiving the information should have the same or a higher level of compliance than the sender. Considering the number of CCs implemented, it is possible to define three different levels of funStep compliance [9] (a short sketch after this list illustrates the rule):

• Level 0: the stakeholder has not adopted the funStep standard, and interoperability is never guaranteed;



• Level 1: for stakeholders that have adopted some CC modules of AP236. Within this level, there can still be different sub-levels according to the parts of AP236 implemented (see Table 1). Here, interoperability is only assured if the sub-levels implemented are the same or if the receiver's level encloses the sender's;

• Level 2: for stakeholders that have adopted the full AP236, i.e. CC6.
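The sketch announced above, under an assumed set-based representation of conformance classes (illustrative only, not funStep tooling): interoperability holds when the receiver's implemented CCs enclose the sender's.

```python
# Toy compliance check: a receiver can interpret a sender's data only if it
# implements at least the sender's conformance-class modules.

FULL_AP236 = {"CC1", "CC2", "CC3", "CC4", "CC5", "CC6"}   # a level-2 receiver

def interoperable(sender_ccs, receiver_ccs):
    return bool(sender_ccs) and sender_ccs <= receiver_ccs

print(interoperable({"CC1"}, {"CC1", "CC2"}))   # True: receiver encloses sender
print(interoperable({"CC3"}, {"CC1", "CC2"}))   # False: sub-levels differ
print(interoperable({"CC3"}, FULL_AP236))       # True: level-2 receiver
print(interoperable(set(), {"CC1"}))            # False: level-0 sender
```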

3.3 Use-case (UC) suite by level of compliance
At present, most furniture organizations have not yet adopted any part of the funStep standard and will be at level 0 of compliance. Indeed, many are still in different ICT usage situations. An analysis of the most common situations is presented below [9]:

• Situation 1 - "Does not have an ICT infrastructure". This is the case where no ICT equipment is used in the organization and all information is stored in paper format. Fortunately, this case is becoming rarer and is concentrated in micro-enterprises with fewer than 10 employees; in many of these, design specifications are still sent to manufacturers by fax;



• Situation 2 - "Has an ICT infrastructure, but it is not focused on information exchange". This case is common to the majority of SME environments: companies have computers and an internet connection, but no specialized system to enable creative design, e-commerce or any kind of information management (e.g. ERP). Companies in this situation normally store their information in MS Excel or MS Word documents, or in very specific software formats;

• Situation 3 - "Has an ICT infrastructure for information exchange and management". This case reflects the situation of companies that have already invested in a system to enable e-business and PLC management. In this situation, companies might already be adopting funStep (fully or partially), or may use proprietary formats not understandable by all, thus obstructing seamless interoperability.

Considering this, the levels of funStep compliance and the typical stakeholder profiles in SME environments, the authors propose a set of UCs that shows the actions stakeholders should carry out for a fast implementation of STEP standards, namely funStep. Depending on the starting ICT situation, Table 2 guides implementers on the order of the UCs they should follow to adopt certain parts of funStep and raise their level of compliance.



ICT adoption: Situation 1 (funStep compliance: Level 0)
  1) Uptake basic ICT (UC-01)
  2) Build data system based on funStep (UC-02)
  3) Implement system interfaces (UC-03)
  4) Populate data system (UC-04)
  5) Test the level of funStep compliance (UC-05)

ICT adoption: Situation 2 (funStep compliance: Level 0)
  1) Build data system based on funStep (UC-02)
  2) Implement system interfaces (UC-03)
  3) Migrate internal data to the funStep system (UC-06)
  4) Test the level of funStep compliance (UC-05)

ICT adoption: Situation 3 (funStep compliance: Levels 0 and 1)
  1) Find requirements that the current system does not answer (UC-07)
  2) Analyse how funStep could answer the requirements (UC-08)
  3) Discover the mapping from the internal system to funStep (if starting from level 0) (UC-09)
  4) Implement functionalities/services to transform internal data into funStep data and vice versa (if starting from level 0) (UC-10)
  5) Implement new parts of funStep (UC-11)
  6) Implement system interfaces for the new parts (UC-12)
  7) Test the level of funStep compliance (UC-05)

Table 2 - Use-Case suite for the adoption of the funStep standards.
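Encoded as plain data, the suite of Table 2 could drive a small helper that prints a company's recommended action plan; the sketch below is illustrative and only restates the table.

```python
# Table 2 as data: starting ICT situation -> ordered list of use-cases.

UC_SUITE = {
    "Situation 1": ["UC-01", "UC-02", "UC-03", "UC-04", "UC-05"],
    "Situation 2": ["UC-02", "UC-03", "UC-06", "UC-05"],
    "Situation 3": ["UC-07", "UC-08", "UC-09", "UC-10", "UC-11", "UC-12", "UC-05"],
}

def action_plan(ict_situation):
    """Recommended use-case order for a company's starting ICT situation."""
    return list(enumerate(UC_SUITE[ict_situation], start=1))

for priority, use_case in action_plan("Situation 3"):
    print(priority, use_case)
```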

The guidelines thereby eliminate part of the complexity of implementing a STEP standard, i.e. where to start [9].

3.4 A use-case and its recommended action plan
Nowadays most SMEs, independently of their profile, will be in situation 2 or 3 without any funStep CC modules implemented. To better illustrate how the UC suite works, it is best to follow an example. Take, for instance, a furniture retailer that decides to implement the funStep standard. Due to its business scope, the retailer already uses an ICT system that enables it to receive furniture catalogues electronically from different manufacturers. However, due to the heterogeneity of the information received, it has trouble enlarging its business network. Clearly, the retailer is suffering from an interoperability problem and might gain from funStep. By the description above, the retailer is in situation 3 and at level 0 of funStep compliance. Following Table 2, it should start by finding and detailing the exact requirements that the current system does not answer (UC-07). At this stage, the actions in the use-case should already be partially accomplished; otherwise the retailer would never have felt the need to change and innovate. The second step relies on a profound analysis of the standard's capabilities to see if and how it will solve the problem (UC-08). The procedure continues with UC-09, UC-10, UC-11 and UC-12, until it reaches UC-05, where it is foreseen that the organization will check whether its implementation has been successful and obtain a compliance level certificate. Due to space restrictions, only the last UC of the recommended implementation process is detailed in this paper (refer to [9] for the others).

Use-Case 05 - "Test the level of funStep compliance"
UC-05, illustrated in Figure 2, represents a scenario describing how a company tests the level of funStep compliance of its own software system. This test helps the company to know whether its system is in conformance, both syntactically and semantically, with the funStep standard, and whether it is interoperable with other systems already using AP236 [16].



[Figure 2 - Use-Case 05 "Test the level of funStep compliance" [9].]

This UC is rather complex in terms of the diversity of actors involved: six actors have actions assigned. However, the "Software Engineer" and the "funStep Consultant" take precedence over the others, the former because he/she is in charge of leading the testing process on the company side, and the latter because he/she is responsible for making the final certification on the funStep side. Using the sequence of actions represented in the use-case, an organization that wants to test the level of compliance of its software knows exactly the sequence of actions to carry out, which are: 1) the "Software Engineer" starts by analysing the available funStep methodologies in order to test the company's level of compliance; 2) he then chooses the conformance testing (CT) mechanism [16]: it can either be remote, through the funStep web services or the online testing application², or local; 3) after that, the "ICT Technician" is in charge of preparing the company's system for the CT and interoperability checking (IC) procedures [16]; 4) the next step consists of generating a sample data set covering the full extent of data that the system can handle; 5) with that, the "CT Mechanism" can execute the validation of the data set, detecting the level of compliance and reporting any errors found in the implementation of the "Company's funStep-based System"; 6) after these tests, the "Software Engineer" continues with the IC procedures, downloading the pre-prepared funStep battery of tests and feeding them to the "Company's funStep-based System"; 7) he visualizes the imported information and modifies it using the system interfaces; 8) before storing and exporting the modified information in funStep format, the "Software Engineer" takes snapshots of the displayed information, a procedure that provides a printable proof of the information inserted into the "Company's System"; 9) finally, the "funStep Central System" imports the information from the company and displays it to a "funStep Consultant", who will compare the snapshots with the displayed information. If everything matches, he/she will certify the software system as funStep compliant.

² http://gris-public.uninova.pt:8080/funStepServices/
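A compressed sketch of the CT/IC flow in steps 1)-9) (all functions hypothetical; the real services are the web-based ones referenced in the footnote above): generate a sample data set, validate it, then feed a battery of tests to the company's system.

```python
# Toy conformance-testing flow, loosely following steps 4)-7).

def generate_sample_data(system):
    """Step 4: a data set covering what the system can handle (toy payload)."""
    return {"catalogue": system["name"], "items": ["chair", "table"]}

def conformance_test(sample):
    """Step 5: validate the data set and report implementation errors."""
    errors = [key for key in ("catalogue", "items") if key not in sample]
    return {"compliant": not errors, "errors": errors}

def interoperability_check(system, battery):
    """Steps 6-7: feed the pre-prepared battery of tests to the system."""
    return all(system["import"](case) for case in battery)

system = {"name": "RetailerPDM", "import": lambda case: True}
print(conformance_test(generate_sample_data(system)))
print(interoperability_check(system, ["battery-1", "battery-2"]))
```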


4 HANDLING VISUALISATION DATA
With the path towards product data standards adoption cleared, communication, interoperability and innovation should come easily. Nevertheless, software vendors are still pushing their proprietary solutions and delaying information openness. The exchange of geometry and computer-aided design (CAD) data is one of the most prominent remaining barriers. End users experience many difficulties trying to read geometry files from other systems, and most of the time they have to pay for expensive solutions to deal with them. CAD vendors generally claim to be interoperable through the usage of translators, yet their formats remain closed and are only partially exchangeable with different systems [17]. This problem is often passed on to users who do not really need the full complexity of a rich CAD drawing. Such users might just need a "light" view of the geometry, and visualization data would be sufficient, making it possible to adapt the information to their needs, e.g. showing geographical information on a map instead of in complex tables [18].

Visualization could also complement the efficiency of the funStep standard. As described before (see Table 1), one of its parts, CC2, is meant for geometry representation. However, not all industrial stakeholders that need to deal with geometry need the full complexity of the AP236 geometry modules. One of the actual challenges in this area is therefore the creation of a framework that, regardless of the format of the geometry exchanged, can show the information in accordance with the goals of the worker, and that is thus capable of simplifying complex geometry-based product data in a way accessible to all. Activities like virtual simulation would become accessible to all, enabling optimization and sustainability.

4.1 Model-driven visualization framework design
The Object Management Group (OMG) has been proposing the Model-Driven Architecture (MDA) as a reference to achieve wide interoperability of enterprise models and software applications. Model-Driven Development (MDD) consists of starting software development from a high level of abstraction, which enables the interaction of the final user in the development phase, i.e. customization. With this, the software can meet its goals and requirements more efficiently. The MDA provides specifications for an open architecture appropriate for the integration of systems at different levels of abstraction and through the entire information systems' life cycle [19][20][21]. For these reasons, and due to the automation of the software generation process, the framework architecture was designed following the model-driven paradigm (see Figure 3). MDD requires that everything be described as a model, i.e. the diverse formats (inputs and outputs) need to be expressed as models, thus enabling the integration of different applications by explicitly relating and transforming their models. The model relationships are based on the concept of model morphisms, which addresses the problem of mapping and transformation of models [22]. In this context, there are two classes of morphisms: 1) non-altering, where, given a source and a target model, a mapping is created relating each element of the source to a correspondent element of the target, leaving the two models intact; 2) model altering, where the source model is transformed by a function that applies a set of mapping rules to the input model, modifying it into the targeted output.

These concepts have been applied in the architecture design. Looking at Figure 3, two major divisions are relevant: first, the four-level approach defined by the MDA, seen vertically from the meta-meta-model (level M3) to the data (level M0); and second, the three parts that compose the morphisms architecture, seen horizontally. The "Common Base" is the pillar of the architecture: its goal is to provide a meta-structure capable of describing the largest possible number of geometric artefacts. The authors' purpose was not to invent a new geometry representation format; therefore, in [5], they elaborated a study that selected X3D, an ISO standard for the representation of 3D scenes, as the core format of the Common Base meta-model [23]. With the "Common Base" specified, non-altering morphisms can be used to discover relationships among the "Specific Formats". These two parts of the architecture enable importing and exporting data from and to a neutral format; the process may result in information loss at the M0 level if the models have different degrees of expressiveness. The last part of the conceptual framework is related to the views. It defines the data structures that the views use to show the information graphically to the users; for instance, in a table, the model defines what information goes into which column. This part is related to the Common Base by model-altering morphisms, where the geometrical information is simplified for visualization.
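The two morphism classes can be illustrated in a few lines of Python (a toy model, not the framework's implementation): the non-altering morphism only records correspondences, while the model-altering one produces a transformed model.

```python
# Toy morphisms: models are plain dictionaries of element -> kind.

source = {"Shape": "geometry", "Label": "text"}
target = {"Mesh": "geometry", "Annotation": "text"}

# 1) non-altering: relate elements of the same kind, leave both models intact
mapping = {s: t for s in source for t in target if source[s] == target[t]}

# 2) model altering: transform the source into the targeted output
def transform(model, rules):
    return {rules.get(element, element): kind for element, kind in model.items()}

print(mapping)                      # {'Shape': 'Mesh', 'Label': 'Annotation'}
print(transform(source, mapping))   # {'Mesh': 'geometry', 'Annotation': 'text'}
```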

5 A SCENARIO FROM THE FURNITURE INDUSTRY Throughout the different product life cycle stages, there are many people working together and handling with product and geometry information. Some are working in the product design, while others are more concerned with manufacturing and others with the marketing and selling of that product. Meta-meta-model

is defined by

is defined by

Meta-model Meta-modelo

Meta-model Meta-modelo

Model

Instance Mapping

Specific Format

Model Modelo

Instance Mapping

are described by

Data

Transformation

Common Base

is described by

View Data

File X

Data

are described by

is defined by

is defined by

Results in

Model Modelo

Results in

Level M1 (Models)

is defined by

Level M0 (Data)

Meta-model

Type Mapping extends

Level M2 (Meta-models)

is defined by

Transformation

Visualization

Figure 3 –MDD principles applied on the design of the visualization framework


5 A SCENARIO FROM THE FURNITURE INDUSTRY
Throughout the different product life cycle stages, many people work together handling product and geometry information. Some work on product design, while others are more concerned with manufacturing, and others with the marketing and selling of that product. For marketing purposes, for example, people are more interested in the visual characteristics for promoting the product than in its specificities. Marketing departments also need to develop product catalogues in which they must specify all the variants of a single product. Most of the time, these activities are performed by people who are not expert users of CAD tools, leading to mistakes and extra time spent creating and updating catalogues. In these situations, visualization software is more effective for the organization, because it saves both time in training personnel and money in CAD software licences and error recovery [5].

The example introduced in section 3.4 is actually a real example of a furniture organization implementing the funStep standard. It takes advantage of visualization techniques to achieve affordable and sustainable design along its supply chain, i.e. from the product designer to the manufacturer. The advantages of using a standard for data exchange are only noticed if one's suppliers and/or customers use it as well. For this reason, and despite being a retailer, the organization felt the need to use visualization software so that it can provide its suppliers, i.e. the furniture manufacturers, with an easy tool to help in the process of semi-automatic catalogue creation following AP236 [24]. Figure 4 illustrates the process in more detail. In this particular scenario, the furniture designers continue producing and sending CAD data to the manufacturers in the traditional rich formats. The manufacturers, in turn, use that data to proceed to the fabrication of the object and its cataloguing. This last process, however, typically involves other departments and personnel not specialized in CAD. Therefore, to assure funStep data communication, the retailer provides a tool that enables the manufacturer to establish an easy link between the product specifications, visualization and configurability, thus accelerating catalogue creation and communication. Manufacturers who have neither implemented funStep nor use this tool will keep sending their catalogues in the traditional way, where errors and misinterpretations may demand further iterations between the manufacturer and the retailer (right side of Figure 4). This strategy enables the retailer to enlarge its business network, attracting manufacturers with a way of exchanging information that follows an international standard; at the same time, it enables furniture manufacturers to create electronic catalogues at low cost and widens the possibility of spreading them worldwide, with the certainty that the receiver will understand the data structure. Similar advantages apply to the designers.

5.1 Use-case matching and services applied
Based on the scenario description and on the retailer's business description, it is possible to verify that it had an ICT infrastructure for information exchange, but one using proprietary, non-standard solutions (third-party and home-made). It therefore matches ICT situation 3 and level 0 of funStep compliance [9]. Applying the use-case suite best practices of section 3.3 to the scenario implementation, the steps carried out were the following [24]: 1) in UC-07, the search for requirements that the retailer's system was not fulfilling was performed by three directors from the ICT, furniture and decoration sections of the company; the technical feasibility report reflected the need to use a standard for receiving furniture product data, the goal of the adopted solution being to use all the product data and associated CAD files of every configured product from their furniture providers in order to carry out interior decoration projects; 2) in UC-08, the analysis of how funStep could meet the requirements was performed by their software engineer in collaboration with the authors, who explained how funStep works and how it could respond to their needs; 3) in UC-09, the mapping discovery was a consequence of that collaboration, i.e. both teams joined and formalized a mapping between the retailer's internal structures and the funStep standard; 4) UC-10 followed, using a mediation database with import/export functionalities; it accepts all the information coming from the associated manufacturers' catalogued products (already in AP236 thanks to the cataloguing tool developed), but at the current piloting stage, and for security reasons, it requires management approval before synchronization with the internal structures; 5) neither UC-11 nor UC-12 was implemented at this stage, because the retailer is still evaluating the efficiency of funStep on the transactions it was already doing and is not yet enlarging its business scope; 6) the final activities were related to the testing of the implementation (UC-05); thanks to this, several misinterpretations and implementation errors were resolved. The retailer is currently CC1 compliant, but already has some parts of CC2, CC3 and CC4 working for interior decoration. As is implicit, for the execution of these steps the funStep services proved to be an added value as well [24]:

• Software services such as the online testing application² and CADEF³, the funStep cataloguing tool mentioned in the scenario, were used and adapted to their needs;



• Several training sessions explaining the standard were carried out;



• Validation services and methodologies were applied, as foreseen in UC-05;



• Consultancy was also used in the analysis and definition of the mapping between the standard and the internal information model.

Designer

CAD Data

CAD Data

Designer

Request for clarifications

At the time of the paper preparation, the retailer in question was already receiving catalogues in funStep format from 25 companies as part of a pilot project.

Visualization Data

use

use

Cataloguing tool

manufacturer

manufacturer

manufacturer

Figure 4 – funStep implementation scenario using visualization data.

3

144

Developed by AIDIMA (http://www.aidima.es)

5.2 Visualization framework: an instance The model driven framework presented in this paper (Figure 3) has been instantiated and used to provide CADEF the capability of extracting visualization information from the original CAD and merging it back together with the product characteristics in order to build a funStep compliant electronic catalogue reusing the original product design. CADEF uses the presented framework in such way that completes the product and parts information described in TM the catalogue with a CAD model in DWG format from Autodesk®. Once the CAD model is opened in the embedded viewer, all the product variability defined in the catalogue could be selected and delaminated according to the manufacturers needs [5][14].

Figure 5 – DWG™ to X3D instantiation of the visualization framework.

Figure 5 depicts the morphisms that are present in the CADEF implementation of the framework. The authors used the DWG™ specification published by the Open Design Alliance (ODA⁴) as the starting point to define the "Specific Format" model, meta-model and parser. With these defined, links with the X3D model and meta-model (the "Common Base") were detailed and specified implicitly in the tool. Finally, since the 3D viewer embedded in CADEF also uses X3D, the morphism to the output "Specific Format" was direct, and it was only required to choose the desired visualization properties so that the visualization morphism could be described. Hence, level M0 of the visualization framework represents the execution stage: when CADEF imports DWG™ data, it automatically imports it into an internal X3D structure, which makes it possible to generate an X3D file or to show the visualization data in the embedded Xj3D viewer⁵.

6 CONCLUSIONS AND FUTURE WORK
To solve interoperability problems in the furniture industry supply chain, which is comprised mostly of SMEs with heterogeneous needs, the funStep group has created an ISO standard which defines a formalized structure for catalogue and product data in the industrial domains of the furniture sector.

⁴ Open Design Alliance (http://www.opendesign.com/)
⁵ Open source Web3D toolkit (www.xj3d.org)

Due to the modularization properties of STEP, it is possible to establish direct cross-sectorial links with other ISO standards, covering the automotive, aircraft, ship-building, building & construction and other sectors relevant to the furniture segment: many furniture manufacturers and designers act as subcontracted suppliers to other sectors, such as automotive (refurbishment), ship-building (luxury yachts) or building & construction (wooden houses). However, the main benefit of adopting the ISO funStep standard is the increased efficiency that results from sharing data between different ICT systems seamlessly, bringing additional benefits without the need to re-enter information. Thus, there is a reduction in human errors and in end-to-end transaction time (lead time). Using standard compliant systems means that component or product suppliers can provide full technical information about their products to the retailer, who in turn can publish catalogues, operate e-commerce systems, manage stock control systems or supply data to interior designers in an interoperable manner, all without the need to enter any data more than once. Customer orders placed with retailers can be communicated back up the supply chain immediately, enabling components, materials and manufacturing resources to be allocated at the earliest opportunity. Furthermore, it makes it possible to combine catalogue data from several sources in a single retail management system by importing component specifications from multiple suppliers into a furniture design or manufacturing system.

However, due to the complexity associated with the implementation of standards, especially STEP standards, SMEs require a push: mechanisms to facilitate and accelerate the adoption task while simultaneously minimizing its costs are required. Therefore, this paper recommends a use-case based methodology to assist in the adoption of the funStep standard (AP236) by furniture related organizations, and proposes a framework applying the principles of Model-Driven Development to support the dynamic integration of vital geometry information, in the form of visualization data, for non-expert users. Using the public DWG™ specification made available by the ODA, the authors implemented one instantiation of the framework, developing a DWG™ model, meta-model and parser, and defining the appropriate morphisms for intelligent integration with the X3D standard open format. During the INNOVAFUN project, the presented framework was validated in an industrial scenario from the furniture industry, where CADEF, a tool to build product catalogues, was successfully integrated with the framework. It enables access to visualization data in support of manufacturer catalogue creation and design.

Manufacturing and retailing systems are complex and dynamic. They need to adapt constantly to new market and customer requirements, with customers increasingly demanding a faster and better quality service. Even standards need to be adjusted from time to time. This behaviour is reflected in a constant fluctuation and evolution of business networks and system models, which makes interoperability difficult to maintain. The authors intend to address this non-linear problem in future research involving feedback, monitoring and prognosis mechanisms as part of the business networks. With these, they intend to bring dynamism to the maintenance of morphisms among systems, thus allowing automatic readjustments of the information flows without the need to reprogram the full systems.

7 ACKNOWLEDGMENTS
The authors would like to thank all the organizations that supported the international projects which provided the budget for the development of the best practices and framework presented in this paper: namely, the European Commission, the INNOVAFUN project partners that contributed to this work, and CEN/ISSS and ISO TC184/SC4 for their effort in developing industrial standards and binding guidelines.

8 REFERENCES
[1] Global Competitiveness Council, 2006, Rising to the Challenge of Global Competition, State of Washington, USA.
[2] European Commission, DG for Enterprise and Industry, Europe INNOVA Annual Report 2006.
[3] funStep Interest Group, 2004, SMART-fm marketing material for manufacturers, www.fsig.funstep.org, retrieved 24 September 2008.
[4] Kemmerer SJ, 1999, STEP: The Grand Experience, NIST Special Publication 939.
[5] Almeida B, Agostinho C, Nuñez-Ariño MJ, Jardim-Gonçalves R, 2008, Model Morphisms as an Enabler for Open Visualization of Product Data, 4th International Conference on Intelligent Systems (IS 2008), Varna, Bulgaria, 6-8 September 2008.
[6] ISO TC184/SC4, 2006, Industrial automation systems and integration – Product data representation and exchange – Part 236: Application protocol: Furniture catalog and interior design, December 2006.
[7] Jardim-Gonçalves R, Cabrita RO, Steiger-Garção A, 2005, The emerging ISO10303 Modular Architecture: In search of an agile platform for adoption by SMEs, International Journal of IT Standards and Standardization Research (IJITSR), Vol. 3 (2), pp. 82-95, ISSN 1539-3062.
[8] Feeney A, 2002, The STEP Modular Architecture, Journal of Computing and Information Science in Engineering, Vol. 2 (2), 132.
[9] INNOVAFUN – EC INNOVA Project No. 031139, 2007, Deliverable 1.2, Use cases and action plan for standard adoption and implementation.
[10] Brown J, Jiangang Z, 1999, Extended and virtual enterprises – similarities and differences, International Journal of Agile Manufacturing Systems, Vol. 1 (1).
[11] Jardim-Gonçalves R, Agostinho C, Maló P, Steiger-Garção A, 2005, AP236-XML: A framework for integration and harmonization of STEP Application Protocols, ASME-CIE2005: International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, 24-28 September 2005, Long Beach, California, USA.
[12] Peak RS, Lubell J, Srinivasan V, 2004, STEP, XML, and UML: Complementary Technologies, Journal of Computing and Information Science in Engineering, Vol. 4 (4), 379.
[13] Jardim-Gonçalves R, Agostinho C, Maló P, Steiger-Garção A, 2007, Harmonising technologies in conceptual models representation, International Journal of Product Lifecycle Management (IJPLM), Vol. 2 (2), pp. 187-205, ISSN 1743-5129.
[14] INNOVAFUN – EC INNOVA Project No. 031139, 2008, Deliverable 2.1, Services for funStep standard adoption and design of new business practices.
[15] New Designs for Career and Technical Education, Design Review No. 61, http://newdesigns.oregonstate.edu/compendium/Organization/design61.htm, retrieved 23 September 2008.
[16] Jardim-Gonçalves R, Onofre S, Agostinho C, Steiger-Garção A, 2006, Conformance Testing for XML-based STEP Conceptual Models, ASME-CIE2006: International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, 10-13 September 2006, Philadelphia, Pennsylvania, USA.
[17] Dalton-Taggart R, 2007, Interoperability – The CAD Vendors Speak Out, CADCAMNet magazine, http://www.caddigest.com/subjects/cad_translation/select/031507_cadcamnet_cad_vendors_speak.htm, retrieved 24 September 2008.
[18] Van Wijk JJ, 2006, Views on Visualization, IEEE Transactions on Visualization and Computer Graphics, Vol. 12 (4), pp. 421-433, July/August 2006.
[19] OMG, Model-Driven Architecture (MDA), http://www.omg.org/mda/, retrieved 24 September 2008.
[20] Atkinson C, Kuhne T, 2003, Model-driven development: a metamodeling foundation, IEEE Software, Vol. 20 (5), pp. 36-41, ISSN 0740-7459, September/October 2003.
[21] Jardim-Gonçalves R, Grilo A, Steiger-Garção A, 2006, Challenging the interoperability between computers in industry with MDA and SOA, Computers in Industry, Vol. 57 (8-9), December 2006, pp. 679-689, Collaborative Environments for Concurrent Engineering Special Issue.
[22] InterOP NoE consortium, 2005, Deliverable DTG3.2: TG MoMo Roadmap.
[23] ISO/IEC FDIS 19775-1.2:2008, X3D Architecture and base components, Edition 2, December 2007.
[24] INNOVAFUN – EC INNOVA Project No. 031139, 2008, Deliverable 2.3, Report on innovation: targeted solutions.

Product Lifecycle Management Approach for Sustainability

N. Duque Ciceri¹, M. Garetti¹, S. Terzi²
¹ Department of Economics, Management and Industrial Engineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milano, Italy
² Department of Industrial Engineering, Università degli Studi di Bergamo, Viale Marconi 5, Dalmine, Bergamo, Italy
[email protected], [email protected], [email protected]

Abstract
Starting from the framework of Product Lifecycle Management (PLM), sustainability should be supported by the continuous sharing of information among the different product lifecycle phases. A PLM system provides access to the lifecycle knowledge generated through product lifecycle activities. The paper aims at presenting how PLM systems represent a very important foundation for achieving a more sustainable paradigm of life: a more sustainable development, engineering, manufacturing, use and disposal of products.
Keywords: Sustainability, Sustainable Engineering and Manufacturing, Product Lifecycle Management

1 INTRODUCTION
From the semantics of the word, sustainability is a quality that permits something to be preserved, kept and maintained: when something is sustainable, it is able to be kept. In the past, the term was mainly environmentally oriented, i.e. the quality of sustaining the environment. In current literature, however, sustainability is defined along three dimensions (environmental, social and economic), often with a fourth one added, technology [1]. Therefore, what is meant by sustainability is the ability to keep human development going in all these dimensions, which is often referred to as sustainable development. Sustainable development is not a new concept; one of the most used definitions is the one given in 1987 by the Brundtland Commission: "the development that meets the needs of the present without compromising the ability of future generations to meet their own needs" [2]. Fundamentally, engineering is a key driver of human development. Looking at the role of technology in human development, engineering is the key driver of technology-based human development, leveraging a large collaboration among many individual disciplines (i.e. industrial, mechanical, electrical, etc.). Hence, Sustainable Engineering can be defined as the way of applying engineering for sustainability purposes. This concept is depicted in Figure 1, where Sustainable Engineering is seen as a layer of engineering-oriented approaches, methods and tools crossing the four pillars of Society, Economy, Environment and Technology to achieve sustainability-oriented results.

Figure 1: Sustainable Engineering dimensions (Society, Economy, Environment, Technology).


Within this view, Sustainable Manufacturing is defined as an instance of Sustainable Engineering: applying scientific knowledge to the design and implementation of products, materials, systems, processes, etc., taking into account the constraints coming from the four pillars of sustainability, in order to develop solutions for the design, operational and organizational activities related to products, processes and services in the manufacturing sector.

As is well known, sustainability will be a major issue for the next decades. The awareness of the limited availability of resources, of problems related to pollution, of the increasing demand for goods, energy and materials from both the already developed and the newly developing countries and, as a consequence of these factors, the increase in the cost of scarce resources are all calling for a new paradigm of life, overcoming the obsolete consumerist model of modern societies. Assuming the current level of well-being of developed countries is maintained as a reference target, the shift to the new paradigm of life will be something like a Copernican revolution. In reality, this shift will be extremely difficult, as it requires revolutionizing a well-established model. Social, cultural and also psychological implications will be involved on one side, while on the other a tremendous improvement of current technologies will be required to enable this kind of change. To reach such a revolution, the product concept itself should be totally re-shaped, especially taking into account a lifecycle view: product design for low-priced and clean production, for long and safe use and for integral recycling should be provided. Not only is product design required to be substantially and continuously improved and innovated, but the development of new materials and an overall redesign of production processes will be needed, entailing, for example, totally new production processes made of sequences of small, intelligent, clean and energy-saving steps. In such a vision, Sustainable Manufacturing will surely become one of the most relevant topics for the next generation of engineers.

The present paper aims at investigating the growing role of Sustainable Manufacturing for engineers and designers, within the general framework of Product Lifecycle Management (PLM). For this purpose, the paper is organized as follows:

• Section 2 defines the main elements of the general framework of PLM.
• Section 3 summarizes the relevance of Sustainable Manufacturing, also providing a state of the art of the tools and methods for its deployment.
• Section 4 investigates the role of PLM in Sustainable Manufacturing, also defining a research agenda for the topic.
• Section 5 concludes the paper.

2 PRODUCT LIFECYCLE MANAGEMENT
In recent years, the competitive pressure coming from the opening of markets has strongly affected companies' approach to product development: shorter lifecycles and an explosion of product variety have been the main consequences, together with an ever-standing requirement for low production costs. Reduction of time to market, collaboration and delocalization have been the companies' answers to these issues, achieved by restructuring the organizational models of product design and production, while leveraging the new ICT (Information and Communication Technology) tools for collaborative product design, development and production made available by recent technology progress. The PLM paradigm emerged as "a product centric business model, ICT supported, in which product data are shared among processes and organizations in the different phases of the product lifecycle for achieving top range performances for the product and related services" [3]. PLM is already well known in the ICT market, even if, unlike other technology solutions, PLM is not a point solution or an off-the-shelf tool. Instead, it is grounded in the philosophy of connectedness of knowledge and seeks to provide "the right information, at the right time, in the right context". PLM is physically enabled by the integration of a variety of enterprise software applications, such as Computer Aided Design (CAD) tools, Product Data Management (PDM) platforms, and Enterprise Resource Planning (ERP) and Supply Chain Management (SCM) solutions. These applications are offered by many market vendors with different backgrounds and expertise, supporting enterprise information management along the diverse phases of an ideal product lifecycle. The term lifecycle generally indicates the whole set of phases which can be recognized as independent stages to be passed through by a product, from "its cradle to its grave". According to [4], the product lifecycle consists of three main phases (Figure 2):
• Beginning of Life (BOL), including design and manufacturing. Design is a multilevel phase, since it comprises product, process and plant design. Generally, a design action is performed in a recursive way: identifying requirements, defining reference concepts, carrying out more and more detailed design and performing tests and prototypes. Today's knowledge-intensive product development requires a computational framework that enables the capture, representation, retrieval and reuse of product and process knowledge. Manufacturing means production of the artefacts and the related internal plant logistics. At this stage, product information has to be shared along the production chain, to be synchronized with future updates.
• Middle-of-Life (MOL), including distribution (external logistics), use and support (in terms of repair and maintenance). During its life, a product passes from the company's hands to service suppliers (e.g. transportation suppliers, but also after-sales assistance suppliers), to arrive in the customer's hands. These handovers can happen many times, in reiterative ways. Product usage data are to be collected, transformed and used for various purposes in the service chain. For example, data on product behaviour during the usage phase can be fed back to BOL and used for design improvement.
• End-of-Life (EOL), where products are retired (recollected into the company's hands through reverse logistics) in order to be recycled (disassembled, remanufactured, reused, etc.) or disposed of. Recycling and dismissal activities require and provide useful information on product components, materials and resources from/to the design and manufacturing stages. Many different actors are involved in this phase (the company's service suppliers, customers, institutions, etc.). For many products (e.g. commodities, electronic goods), customers' environmental sensibility plays a relevant role in the management of this phase.
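As a hypothetical illustration of this three-phase reference model, the short Python sketch below records lifecycle events per phase and exposes the MOL/EOL records that could be fed back to design. All names are invented for the example and are not taken from any PLM product.

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    BOL = "beginning-of-life"   # design and manufacturing
    MOL = "middle-of-life"      # distribution, use, support
    EOL = "end-of-life"         # retire, recycle, dismiss

@dataclass
class LifecycleRecord:
    product_id: str
    phase: Phase
    data: dict

@dataclass
class ProductLifecycle:
    product_id: str
    records: list = field(default_factory=list)

    def log(self, phase: Phase, **data) -> None:
        self.records.append(LifecycleRecord(self.product_id, phase, data))

    def feedback_for_design(self) -> list:
        # The MOL/EOL records a designer could reuse in the next BOL phase.
        return [r for r in self.records if r.phase is not Phase.BOL]

p = ProductLifecycle("pump-42")
p.log(Phase.BOL, material="steel")
p.log(Phase.MOL, repaired_part="seal")
p.log(Phase.EOL, recycled=True)
print(len(p.feedback_for_design()))  # -> 2
```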

Figure 2: Reference model for product lifecycle phases (adapted from [4] and [5]).

The PLM concept is certainly a question of data visualisation and transformation, where ICT plays a fundamental role. However, PLM comprises two other important levels: processes (where data flow among actors/resources with their competences, inside and outside an organization) and methodologies (the practices and techniques adopted along the processes, using and generating product data). These three elements (ICT, Processes and Methodologies) are the fundamentals of the PLM concept, evolving along the lifecycle phases of the product (Figure 3).


Figure 3: Reference model for product lifecycle management [3].


2.1 PLM as a "system of systems"
The BOL phase deals with product design and manufacturing. These two main activities have an intrinsic difference: product design is a recursive and reiterative intellectual activity, where designers and engineers must find solutions for given problems; manufacturing, on the contrary, is a repetitive transaction-based activity, which concretizes the decisions taken by others. For this reason, design and manufacturing are supported by a plethora of different ICT PLM tools: authoring tools (CAD, etc.) and collaborative product development platforms (PDM, etc.) in the design activity, and a set of enterprise applications (ERP, SCM, CRM, etc.) in the manufacturing and distribution activities. During the BOL phase, PLM is thus basically a design support system (Figure 4): product design data must be created and managed efficiently in order to be distributed to the right actors at the right time for efficient manufacturing, while the MOL and EOL phases can provide useful information by analyzing data gathered from the field. It can honestly be affirmed that an entire PLM system supporting all the processes of the BOL phase is not currently provided by any single vendor on the market (and probably will not be in the near future); rather, PLM should be considered a "system of systems", where diverse vendors each provide a piece of a larger PLM picture.


Figure 4: PLM in the BOL phase.

The MOL phase deals with the real life of the product when it is in the customer's hands, while EOL deals with its "death". During these phases, many actors are in touch with the product: logistics service suppliers, customers, after-sales service suppliers, recycling service providers, etc. All these actors perform their repetitive activities, generally without exchanging much detailed information with other actors, being measured in terms of process efficiency. Similarly to the BOL phase, during the MOL and EOL phases PLM is basically a service support system (Figure 5 gives the example of the MOL phase), composed of a plethora of subsystems: product data are collected from the field using various tools in order to monitor and control the life status of the product, while information from the BOL phase is needed to analyze and understand the behaviour and structure of the product. Product data management during these two phases is increasingly becoming an unavoidable aspect, since regulations and legislation address diverse product data in order to improve customer safety and security. This is already a relevant aspect in the process industries (i.e. pharmaceutical, food and beverage, etc.). However, in spite of this increasing regulatory interest, during the MOL and EOL phases the information flow becomes less and less complete. For the majority of today's products (e.g. consumer electronics, household machines, vehicles), it is fair to say that the information flow breaks down after the delivery of a product to a customer. As a consequence, actors involved in each lifecycle phase have made decisions based on incomplete and inaccurate product lifecycle information from other phases, which has led to operational inefficiencies.
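As a hypothetical illustration of what closing this broken information flow would enable, the sketch below aggregates invented MOL maintenance events into a single figure a BOL designer could act on; the event schema and numbers are made up for the example.

```python
from statistics import mean

# Invented MOL field data: maintenance events gathered after delivery.
maintenance_events = [
    {"product_id": "P-001", "hours_in_service": 4200, "failed_part": "seal"},
    {"product_id": "P-002", "hours_in_service": 3900, "failed_part": "seal"},
    {"product_id": "P-003", "hours_in_service": 8100, "failed_part": "bearing"},
]

def mean_hours_to_failure(events, part: str) -> float:
    """Feedback for design: which parts fail early, and how early."""
    hours = [e["hours_in_service"] for e in events if e["failed_part"] == part]
    return mean(hours) if hours else float("nan")

print(mean_hours_to_failure(maintenance_events, "seal"))  # -> 4050.0
```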


Figure 5: PLM in the MOL phase.

In spite of this vision, PLM has not yet received much attention from industry for the MOL phase, because there are no efficient tools to gather product data over the whole product lifecycle. Single applications exist to support specific activities within these phases (e.g. maintenance and after-sales supporting tools), but a comprehensive system does not exist. Recently [4], thanks to the advent of product identification technologies such as Radio Frequency Identification (RFID), PLM has gained powerful tools to implement its vision. Product identification technologies enable products to carry embedded information devices (e.g. RFID tags and on-board computers), which make it possible to gather the whole lifecycle data of products at any time and in any place. Thus, in the near future, the whole product lifecycle could be visible and controllable, allowing all actors of the whole product lifecycle to access, manage and control product related information. In particular, the information after product delivery to customers and up to the product's final destiny could be gathered without temporal and spatial constraints. This way, BOL information related to product design and production could be used to streamline the operations of MOL and EOL. Furthermore, MOL and EOL information could flow back easily to designers and engineers for the improvement of BOL decisions.

3 SUSTAINABLE MANUFACTURING
From a general perspective, sustainability can be seen as a critical business issue driven by factors outside of industry's control, unlike many other business issues. Multiple constituencies, such as shareholders, regulators, consumers, customers and NGOs (Non-Governmental Organisations), are demanding that companies address it. Sustainability intersects with every aspect of the business. Consumer businesses rely on a wide range of natural resource inputs, such as agricultural products, water, forestry and marine fish stocks. Consumer products and packaging are also one of the largest contributors to solid waste, compared to other industries. Sooner or later in each industrial sector, it will not be enough for product manufacturers simply to design their products for disposal and recycling: manufacturers will be responsible for the actual disposal and recycling, until the end of the life of their artefacts (as per many regulations currently in place at a European level and on the road to legislation in most other industrialized countries, i.e. Table 1), facing the needs of Sustainable Manufacturing.

ELV (End of Life Vehicle): Regulation for automobiles and electronic devices that makes the product producer responsible for recycling and disposal (e.g. 85% recycling/recovery rates in terms of weight by 2006 and 95% by 2015).

WEEE (Waste Electrical and Electronic Equipment): Requires the manufacturer to have a programme to take back and recycle products such as TVs, computers and cell phones; to register the product; and to finance the collection, treatment, recovery and disposal.

RoHS (Restriction of use of certain Hazardous Substances): Mandates that companies do not manufacture products with more than a maximum concentration of certain substances, such as lead, mercury and cadmium, among others.

REACH (Registration, Evaluation, and Authorization of Chemicals): Designed to ensure that 30,000 banned chemicals do not make their way into a product at any point in the supply chain.

EuP (Energy using Product): An EuP is a product which requires or produces energy. The Directive does not introduce directly binding requirements for specific products, but does define criteria for setting product environmental requirements (i.e. energy consumption).

Table 1: Summary of some current environmental legislations.

It is clear that sustainability is not a new topic, even if it has attracted particular attention in the last two to three years. The diverse dimensions of sustainability, in their diverse declinations, have already been investigated and developed. Sustainable Manufacturing, too, has obtained much attention from the industrial and research communities, as the plethora of strategies, methods, procedures and tools in the literature demonstrates. The following classification (adopted from [6]) outlines some of the many contributions that have been developed in the field of sustainable manufacturing practices, in terms of Principles, Tools and Strategies:

• Principles (Table 2): fundamental concepts that serve as a basis for actions, and as an essential framework for the establishment of a more complex system.

• Tools (Table 3): groups or clusters of principles related to the same topic, building a more complex system and showing how to apply specific practices in order to contribute to improved industrial performance.

• Strategies (Table 4): each consists of approaches and systems connected together that are to be met in order to incorporate the principle of sustainability into everyday business activities.

Reuse: Using waste as a raw material in a different process without any structural changes.

Recycle: Recycling is a resource recovery method involving the collection and treatment of waste products for use as raw material in the manufacture of the same or a similar product.

Recover: Recovery is an activity applicable to materials, energy and waste. It is a process of restoring materials found in the waste stream to a beneficial use, which may be for purposes other than the original use.

Repair: An improvement or complement of a product, in order to increase quality and usefulness before reuse; it decreases consumption, because the product's life is extended.

Regeneration: An activity of material renewal to return it to its primary form for usage in the same or a different process.

Remanufacturing: Substantial rebuilding or refurbishment of machines, mechanical devices or other objects to bring them to a reusable or almost new state.

Factor X, Factor 4, Factor 10: A direct way of utilizing metrics in various activities that can reduce the throughput of resources and energy in a given process. The overall aim of Factor X is to enable society to achieve the same or an even better quality of life, improving human welfare, while using significantly fewer resource inputs and causing less ecosystem destruction. The Factor X concept proposes an X times more efficient use of energy, water and materials in the future as compared to today's usage.

Waste Management: Activities involving the handling of solid, liquid and gaseous wastes originating from the industrial manufacture of products (i.e. the 4Rs: Reduction, Reuse, Recycling and Recovery).

Reduction: Practices that reduce the amount of waste generated by a specific source through the redesign of products or of patterns of production or consumption.

Table 2: Principles of Sustainable Manufacturing (adopted from Glavič, 2007 and Robert, 2002).

Design for Environment: Also known as Eco-design, the integration of environmental aspects into product design with the aim of improving the environmental performance of the product throughout its lifecycle (e.g. design for recycling). [6]

Green Manufacturing: Identifies manufacturing methods that minimize waste and pollution during product design and production. [7]

Green Chemistry: The design of chemical products and processes that eliminate or reduce the use and generation of hazardous substances. [8]

Waste Minimization: Measures or techniques that reduce the amount of wastes generated during industrial production processes. [6]

Zero Emissions: Identification and development of new value-added products from existing waste streams or under-exploited by-products, the creative search for completely new educts and products, and the implementation of breakthrough technologies. [9]

Life Cycle Assessment: A methodological framework for estimating and assessing the environmental impacts attributable to the lifecycle of a product, such as climate change, stratospheric ozone depletion, ozone (smog), etc. [10]

Cleaner Production: The continuous application of an integrated preventive strategy to processes, products and services to make efficient use of raw materials, including energy and water, to reduce emissions and wastes, and to reduce risks for humans and the environment. [11]

Life Cycle Management: A comparative decision-making tool that evaluates the differences between products or processes to arrive at the most economically and environmentally viable option in a systematic business decision framework. [12]

Zero Waste: A design principle that includes 'recycling' but goes beyond it by taking a holistic approach to the vast flow of resources and waste through human society. [6]

Green Procurement: Environmentally responsible or 'green' procurement is the selection of products and services that minimize environmental impacts. It requires a company or organization to carry out an assessment of the environmental consequences of a product. [13]

Table 3: Tools for Sustainable Manufacturing.

Pollution Prevention (P2): A strategic goal for effective environmental protection. P2 techniques are designed for the reduction of the quantity and toxicity of end-of-plant waste. P2 technologies have been developed for technology change, material substitution, in-plant recovery/reuse and treatment. [6]

Industrial Ecology: A systems-oriented study of the physical, chemical and biological interactions and interrelationships both within industrial systems and between industrial and natural ecological systems. [15]

Environmentally Conscious Manufacturing: An emerging discipline concerned with developing methods for manufacturing products, from conceptual design to final delivery and ultimately to end-of-life, that satisfy standards and requirements. [16]

Total Quality Environmental Management: A method of applying total quality management approaches to corporate environmental strategies. TQEM supports continuous improvement of corporate environmental performance. Developed by a coalition of 21 companies that operate in a variety of industry sectors and share best practices. The four basic elements of TQEM are: customer identification (i.e. environmental quality is determined by customer preferences), continuous improvement, doing the job right first time (i.e. elimination of environmental risks) and a systems approach. [17]

The Natural Step programme: A method of reaching consensus about sustainable futures. The theory has given its name to a global network which describes itself as 'an international organization that uses a science-based, systems framework to help organizations and communities understand and move towards sustainability'. [18]

ISO and the environment: The ISO 14000 series is a family of environmental management standards. [19]

EMS (Environmental Management Strategy): A set of management tools and principles designed to guide the allocation of resources, the assignment of responsibilities and the ongoing evaluation of practices, procedures, processes and environmental concerns. [20]

Eco-Management and Audit Scheme (EMAS): The European Union's voluntary programme which enables organizations within the EU and the European Economic Area to seek certification for their environmental management systems. Effective from 1995. [21]

Environment, Health and Safety (EHS) Programmes: Programmes driven by occupational health and safety regulations, including environmental issues needing to be incorporated into operational practice. [22]

SA 8000: Standards in social responsibility accounting for: child labour; forced labour; health and safety; freedom of association and collective bargaining; discrimination; disciplinary practices; working hours; compensation; management systems. [23]

Responsible Care: The chemical industry's global voluntary performance guidance system. [24]

Table 4: Strategies for Sustainable Manufacturing.

Sustainable Manufacturing has already been investigated in depth, as the literature and industrial practice demonstrate, even if a comprehensive approach does not exist. Moreover, as global conditions reveal every day, sustainability is still not one of the first key factors in industrial decisions. In their day-by-day decisions, companies cannot easily take Sustainable Manufacturing into account, even though it is more and more affected by lifecycle considerations. Additionally, this happens at a global level and so requires the consideration of related technical, operational, societal and cultural issues. Many efforts have been made in the past twenty years to improve product development and lifecycle management; however, many of these efforts have been disconnected. Approaches like PLM for integrating and sharing product data can be of great help in controlling and supporting sustainability issues. As the next section aims to illustrate, PLM, being the accepted setting for product design and management [25], has the potential to incorporate all the lifecycle considerations needed for sustainable development.

4 PLM FOR SUSTAINABILITY
Sustainability has an important global dimension, and most of the major challenges cannot be solved in one isolated region of the world. The so-called "civilized" world is made of many products, consuming a large amount of global resources. This "way of life" is based on products (for living, for transportation, for dressing, for eating, etc.) which must be designed, manufactured, used, maintained, recycled and dismissed. As Global Sustainability Indicators clearly show [26], current patterns of mass production of cheap goods and over-consumption of products with a short use evidently cannot be sustained. Sustainability has a social responsibility impact, but its attainment is a matter of practical implementation: sustainability can be achieved through the optimization of the use of resources along the product lifecycle, while retaining the quality of products and services; and the optimization and quality of product related processes are strongly based on the use of information. For this reason, PLM represents a very important approach for achieving a more sustainable way of work and life, and more sustainable development, manufacturing and use. Since PLM is an ICT infrastructure able to support product data, information and knowledge sharing, it can be the foundation of the business model needed to comply with sustainability requirements (Figure 6). In particular, PLM could enhance design in the BOL phase, as well as provide new services for the MOL and EOL phases. Sustainability in the context of the product lifecycle can be seen as an optimization of all the activities belonging to the product lifecycle: a more sustainable development could be reached by managing processes and information efficiently. For example, during BOL, PLM can support sustainable product development by providing a common system in which the relevant product information is stored, managed and actually retrieved by the applicable lifecycle phases. The PLM system can be used to store and manage relevant information regarding resources (such as energy and renewables, ground and soil, water, etc.) and materials (such as hazardous substances, waste and recycling, etc.). In fact, a lifecycle approach means evaluating a product/service beginning with material extraction, continuing with manufacturing and use, and ending with recycling and disposal. PLM can house this information generation and sharing by providing the place where information is created, stored, processed and properly allocated.
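As a sketch of the kind of resource and material information a PLM record could house for such lifecycle evaluation, the following Python fragment rolls up invented material attributes into two simple indicators; the fields and factor values are purely illustrative and are not taken from any real LCA database.

```python
from dataclasses import dataclass

@dataclass
class MaterialUse:
    name: str
    mass_kg: float
    recyclable: bool
    embodied_energy_mj_per_kg: float  # assumed factor from a reference table

@dataclass
class ProductRecord:
    product_id: str
    materials: list

    def embodied_energy_mj(self) -> float:
        # Naive roll-up a lifecycle evaluation could start from.
        return sum(m.mass_kg * m.embodied_energy_mj_per_kg for m in self.materials)

    def recyclable_mass_fraction(self) -> float:
        total = sum(m.mass_kg for m in self.materials)
        recyclable = sum(m.mass_kg for m in self.materials if m.recyclable)
        return recyclable / total if total else 0.0

chair = ProductRecord("CH-100", [
    MaterialUse("beech wood", 5.0, True, 10.0),
    MaterialUse("steel", 1.5, True, 35.0),
    MaterialUse("PU foam", 0.8, False, 100.0),
])
print(chair.embodied_energy_mj())          # -> 182.5
print(chair.recyclable_mass_fraction())    # -> about 0.89
```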

Figure 6: PLM can provide support to sustainability (adapted from [26]).

In terms of the evolution of PLM for Sustainable Manufacturing, there is an urgent need to identify and retrieve decades-old product information for delivering service and sustainability along the product lifecycle. As mentioned, sooner or later in each industrial sector it will not be enough for product manufacturers simply to design their products for disposal and recycling: manufacturers will be responsible for the actual disposal and recycling, until the end of the life of their artefacts. In spite of this vision, many issues must still be resolved before PLM becomes an efficient approach for Sustainable Manufacturing. Some of the most relevant open issues are the following:

• It is a matter of fact that sustainability (and its dimensions) is assuming an increasingly relevant role in our societies. Even if a positive trend in sustainability dissemination is ongoing, much effort must still be spent on communicating the concept to the many stakeholders: not only users/customers must be involved, but more and more companies (both manufacturers and suppliers) have to understand sustainability in its wider dimensions. This question remains a relevant open issue for sustainable development and manufacturing.



• Within this growing social interest in sustainability, different business models must be instantiated within diverse industrial sectors, taking account of the diverse lifecycle stages. In particular, one-of-a-kind production (e.g. shipbuilding, tailored products) differs substantially from many-of-a-kind production (e.g. commodities). Such differences constitute a number of relevant open issues for sustainability and PLM.



• Sustainability entails an integration perspective. Sustainability is the final result of the interaction among many economic actors; the interoperation of all product related domains is a pre-requisite to the evaluation and support of sustainability issues.



• As a matter of fact, a huge amount of data is generated during product usage. These data must be archived, filtered, extracted and transformed. These operations require an efficient data management system to be developed for each PLM system/tool in operation. Such product data management remains an open issue, which must be solved to support truly sustainable product development.



• In terms of ICT, PLM is at bottom a database problem, which physically enables a product-centred business model. Information about products and processes is dispersed across a variety of information systems, which until now have been run as isolated islands. The open issue is the integration of these islands into more integrated, distributed meta-repositories, in order to provide a wider and sustainable use of product data. In this way, ICT interoperability is a relevant open issue afflicting sustainability deployment.

Given these open issues, two main challenges for making PLM an effective tool for Sustainable Manufacturing can be highlighted:

• Closing product lifecycle information loops requires the involvement of product users and contributes to the overall objective of sustainability of product systems. A strong element in this context is information and knowledge sharing between producers and consumers. This need for sharing constitutes a challenge to be met by adopting newly available product-embedded technologies. Embedded systems are complex electronic modules integrating computing devices, persistent storage, sensors and communication tools; they are already part of today's products (e.g. airbag sensors, HD and DVD controllers, etc.), and in the next years the use of such technologies will be extended from the current identification applications to a wide variety of applications through the generalization of product tagging at the item level. Embedding intelligence in products is a practical way to create "intelligent products", which can be put together to create a network of intelligent objects interacting among themselves using wired or wireless communication technologies (a minimal sketch of the idea follows this list). These technologies will play a relevant role in PLM for Sustainable Manufacturing; dedicated studies on this challenge are therefore mandatory.



• A further challenge deals with the definition of a reference model for the PLM approach to sustainability, and to Sustainable Manufacturing in particular. Sustainability is a complex matter entailing business processes, technologies, organization and culture: a model integrating all these aspects is needed to afford the epochal transition we have to face in the coming future. There is a need for socio-technical models addressing topics such as ICT interoperability, knowledge sharing in the product chain and the social implications of sustainability. A number of answers to these challenges are already under development, while a comprehensive PLM reference approach to sustainability is still lacking and should be provided soon.
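The minimal sketch promised in the first challenge follows: an item-level identity plus an on-product event log that a PLM backend could interrogate. The class, the fields and the EPC-style identifier are invented for illustration and do not reflect any specific RFID or embedded-systems API.

```python
import time
from dataclasses import dataclass, field

@dataclass
class IntelligentProduct:
    tag_id: str                       # e.g. an RFID/EPC-style identity
    events: list = field(default_factory=list)

    def sense(self, kind: str, value: float) -> None:
        """Record an observation on the product itself."""
        self.events.append({"t": time.time(), "kind": kind, "value": value})

    def report(self, since: float = 0.0) -> list:
        """What the product would hand to a PLM backend when interrogated."""
        return [e for e in self.events if e["t"] >= since]

pump = IntelligentProduct("urn:epc:id:sgtin:0614141.107346.2017")
pump.sense("vibration_mm_s", 2.4)
pump.sense("temperature_c", 61.0)
print(len(pump.report()))  # -> 2
```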

5 SUMMARY
Approaches for sustainability claim to have a holistic view. They need to manage the product lifecycle and keep track of the information flow within all of the lifecycle phases of the product. Evidently, sustainability has a social responsibility impact, but its attainment is a matter of practical implementation: sustainability can be achieved through the optimization of the use of resources along the product lifecycle, while retaining the quality of products and services. As the paper has shown, however, the optimization and quality of product related processes are strongly based on the use of information. For this reason, Product Lifecycle Management (PLM) represents a very important approach for achieving a more sustainable paradigm of work and life: a more sustainable product development, manufacturing, use and dismissal.


6 REFERENCES

[1] IMS, 2007, Strategies for Global Manufacturing: A European View of IMS Sustainable Manufacturing, 15-16 November, ETHZ, Zurich, http://cordis.europa.eu/ims/src/sustman-web.htm.
[2] Brundtland Commission, 1987, Our Common Future, Report of the World Commission on Environment and Development, published as Annex to General Assembly document A/42/427, Development and International Co-operation: Environment, 2 August.
[3] PLM IWG, 2007, A new point of view on PLM, White paper of the International Working Group on PLM, www.plm-iwg.org.
[4] Kiritsis D, Bufardi A, Xirouchakis P, 2003, Research issues on product lifecycle management and information tracking using smart embedded systems, Advanced Engineering Informatics, 17: 189-202.
[5] Terzi S, Panetto H, Morel G, Garetti M, 2007, A holonic metamodel for product traceability in PLM, International Journal of Product Lifecycle Management, 2 (3): 253-289.
[6] Glavič P, Lukman R, 2007, Review of sustainability terms and their definitions, Journal of Cleaner Production, 15 (18): 1875-1885.
[7] Barreto L, Anderson H, Anglin A, Tomovic C, 2007, Product Lifecycle Management in Support of Green Manufacturing: Addressing the Challenges of Global Climate Change, Proceedings of ICCPR2007: International Conference on Comprehensive Product Realization, 18-20 June 2007, Beijing, China.
[8] Martel, Davies JA, Olson WW, Abraham MA, 2003, Green chemistry and engineering: drivers, metrics and reduction to practice, Annual Review of Environment and Resources, 28, pp. 401-428.
[9] Eyerer P, Florin H, Kupfer T, Wolf M-A, Kuehr R, Towards Sustainability – Zero Emission (ZE) and Life Cycle Engineering (LCE), IKP, University of Stuttgart, http://www.ce.berkeley.edu/~horvath/NATO_ARW/FILES/EyererLignin.pdf.
[10] Rebitzer G, Ekvall T, Frischknecht R, Hunkeler D, Norris G, Rydberg T, Schmidt WP, Suh S, Weidema BP, Pennington DW, 2004, Life cycle assessment Part 1: Framework, goal and scope definition, inventory analysis and applications, Environment International, 30: 701-720.
[11] Robert KH, Schmidt-Bleek B, Aloisi de Larderel J, Basile G, Jansen JL, Kuehr R, et al., 2002, Strategic sustainable development – selection, design and synergies of applied tools, Journal of Cleaner Production, 10 (3): 197-214.
[12] Armstrong L, Kerr S, 2004, Life Cycle Tools for Future Product Sustainability, URS Corporation report, pp. 23-36.
[13] US Environmental Protection Agency (EPA), http://www.epa.gov/.
[14] European Environmental Agency (EEA), Glossary, http://glossary.eea.europa.eu/.
[15] United Nations Environment Programme, Division of Technology, Industry, and Economics (UNEP DTIE), Cleaner Production (CP) activities, http://www.uneptie.org/.
[16] Knopf GK, Gupta SM, Lambert AJD, 2007, Environmentally Conscious Manufacturing, CRC Press.
[17] The Global Environmental Management Initiative (GEMI), http://www.gemi.org/gemihome.aspx.
[18] Hawken P, 1995, Taking The Natural Step, Business On A Small Planet, Summer 1995, p. 36, http://www.context.org/ICLIB/IC41/Hawken2.htm.
[19] International Organization for Standardization, International Standards for Business, Government and Society, http://www.iso.org/iso/iso_catalogue/management_standards/iso_9000_iso_14000/iso_14000_essentials.htm.
[20] The International Finance Corporation (IFC), World Bank Group, http://www.ifc.org/enviro/Publications/EMS/Introduction/introduction.htm.
[21] Eco-Management and Audit Scheme (EMAS), http://www.emas.org.uk/regulation/mainframe.htm.
[22] Canadian government – Environment Canada, http://www.ns.ec.gc.ca/udo/reuse.html.
[23] Social Accountability International, http://www.sa-intl.org/.
[24] Responsible Care, http://www.responsiblecare.org/page.asp?p=6341&l=1.
[25] Joshi N, Dutta D, 2004, Enhanced Life Cycle Assessment under the PLM framework, Proceedings of the International Intelligent Manufacturing Systems Forum, 17-19 May 2004, Villa Erba, Cernobbio, Lake Como, Italy, pp. 944-95.
[26] Seliger G, 2004, Global Sustainability: A Future Scenario, Proceedings of the Global Conference on Sustainable Product Development and Life Cycle Engineering, Berlin, Germany.


Through-Life Integration Using PLM

M. Gomez¹, D. Baxter¹, R. Roy¹, M. Kalta²
¹ Decision Engineering Centre, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK
² Edwards Ltd., Shoreham-by-Sea, West Sussex, BN43 6BP, UK
[email protected]

Abstract
It is widely agreed that organisations would benefit from a PLM implementation founded on a standard structure that integrates through-life information and knowledge. This paper therefore describes a PLM data structure that provides a standard repository of data through all the stages of the lifecycle: conception, manufacture and operation. This structure classifies the data into project, product, process and resource, and has been implemented in the PDM system Teamcenter Engineering as part of a case study with a vacuum pump manufacturer. A methodology to implement a knowledge structure from an ontology editor in a PDM system is also presented.
Keywords: Product Lifecycle Management, PLM, data structure, design, manufacturing, service, knowledge

1. INTRODUCTION
In today's global and competitive market, companies face many challenges, namely shortening time to market, changing regulations, price competition, increasing product complexity and diversity, innovation, support and maintenance of products, environmental concerns, end-of-life issues, and many others. Product Lifecycle Management (PLM) systems can support these challenges by improving the accessibility and trustworthiness of intellectual assets, and by managing resources and processes across the whole product lifecycle. In fact, PLM provides a collaborative environment to create, manage, share and use all product related data, since it eliminates the technology silos which have previously limited the interchange of information across the product lifecycle. Thus, organisations are implementing PLM systems to achieve business objectives. However, in order to achieve these objectives, it becomes crucial to define a suitable data structure. Across the product lifecycle, product, process and resource related data must be created, stored and managed (i.e. CAD files, product specifications, customer requirements, NC codes, cost analyses, project plans, service processes, etc.). Though these intellectual assets do not originate in a standardised format, they must be stored in a common data structure. Today's organisations possess vast amounts of knowledge widely spread across different sources, and it continues to increase exponentially. This knowledge, which can be described as an intangible asset of the company, resides everywhere, varying from emails and instant messages to detailed reports and PowerPoint presentations. Companies are also aware of the critical importance of maintaining this expertise and intellectual property for process improvement and product innovation. Therefore, in the current distributed organisational environment, having a data structure which integrates product knowledge as a strategic asset would drive future business success.

This paper combines two relevant topics related to PLM: product data structuring and knowledge management in PLM. Various sources can be found on each of the concepts, but there is little combining both. In the next section of this paper, an overview of recent research on data structuring and knowledge integration within PLM is presented and discussed. In section 3, the current PLM environment of a vacuum pump manufacturer is presented. The data structure developed in this paper is presented in section 4. In section 5, the tools used in the execution of the research (Protégé and Teamcenter Engineering) are compared. Section 6 describes the methodology to integrate the knowledge from the knowledge editor (Protégé) into the PLM system (Teamcenter Engineering). In section 7, various scenarios implemented in PLM, which simulate real business cases, are presented. Finally, the last section summarises and concludes the research.

2. LITERATURE REVIEW
Today's organisations are looking to PLM as the strategy to achieve product excellence and business objectives. However, in order to achieve these objectives, it becomes critical to define a product structure, since it is the core of the PLM implementation. Schuh et al [1] state that the data structure describes the structured relationships between a product's components, and integrates all the data and documents related to the product (e.g. CAD files, NC codes, product specifications). In this context, organisations would benefit from a standard data structure. Nevertheless, according to [1], recent research and evidence from practice have revealed that a unique reference model cannot define a standard product structure, since processes and products vary from one company to another. Moreover, as stated in

[2], there is still a lack of standard representation of data which identifies and details complete and corporationwide integrated product information architecture. Zina et al [3] also agree that implementing a generic reference model is ineffective, so the implementation has to be adapted to the necessities of the particular situation. Thus, they propose a methodology to particularise a reference model. This methodology starts with the selection of a reference model applicable to the sector of industry related. Then this reference model is converted in a generic model by studying several case studies of the implementation of the selected reference model. And finally the generic model is particularised to the specific situation in order to achieve the required model. Based on the fact that a unique reference model cannot be applied to model the product structure of every company in the industry, [1] presents a methodology for product structuring and a set of six customised product structuring reference models to address the specific needs of each existing project type. Sudarsan et al [2] propose a product informationmodelling framework which is intended to address this issue, supporting the full range of PLM information needs. The framework consists of four major components that form its kernel: the NIST Core Product Model (CPM) and its extensions, the Open Assembly Model (OAM), the Design-Analysis Integration model (DAIM) and the Product Family Evolution Model (PFEM). The main objective of implementing the information modelling framework is to provide a standard repository of the full product information at every phase of the design process, “serving all product description information to the PLM system and its subsidiary systems using a single, uniform information exchange protocol” [2]. However, the framework described cannot be implemented in a PLM system because: (1) the framework is a first version of a product modelling architecture as it does not model and include every product information component. (2) Possible information interchange problems due to the heterogeneous nature of the framework. And (3) full PLM system products needs and product information needs have to be identified to develop a complete conceptual framework. On the other hand, [4] describes an integrated data model and implementation approach for a PDM system based on the STEP (Standard for the Exchange of Product) standard, the PDM schema. The PDM data schema proposed, which includes product structure management, configuration management, document management, workflow management, effectivity management and engineering change management; is a standardised product data architecture that fulfils the PDM functionalities. A study described in [5] presents a knowledge management framework to support consumer-focused product design; the authors state that it is crucial that the knowledge of a company integrates its product, process and resource elements. Thus, they describe the productprocess-resource model (PPR). In this model, the product configuration is represented by the following relationship: “configuring the product (product structure, materials bill) → configuring the business process (process structure, operation types) → configuring the resource (structure of system, equipment and staff types)” [5]. 
Huang and Mak [6] also express the importance of integrating product, process (activities) and resource components in the collaborative design environment and describe the product realisation process as “a triple P, A, R of Products which compete in the market, Activities

which realise products, and Resources which are available for realisation. P, A, and R are interrelated to each other. The interactions can be explained in that products consume activities and activities consume resources” [6]. Other research studies present further methods of modelling product-related information, such as the product-process-organisation (PPO) model. However, according to [7], the relationships between the three core concepts and other information objects are not comprehensibly defined. In this context, Han and Do [7] propose a top-down object-oriented model named the 4P2C model, where 4P2C stands for product, process, project, participant, cost and collaboration. Based on this, they describe six sub-models that compose a full CPDM model. On the other hand, although Kim et al [8] agree that the PPR model contributes to addressing all the engineering information related to the product, they argue that human-related information, which is frequently considered as a part of the resource information, is not managed properly in PLM systems. For this reason they propose a PPR+H model, which is an XML-based schema that integrates and manages information about product, process, resource and human in PLM. Eynard et al [9] describe a PDM system named VPMChains, based on Enovia, providing a detailed class diagram of the product structure and the workflow management. The proposed system integrates the product with the process and the resources based on a framework for collaborative work, which includes three applications: (1) a Web portal to support access to product data, (2) a BuildTime application for modelling processes, and (3) a RunTime application as a workflow engine for running the processes. Finally, an element called a briefcase provides the integration of product, process and resources; this briefcase contains elements of the process, product data and metadata. Regarding the role of PLM in knowledge management, current enterprises possess vast amounts of knowledge widely spread across different sources, and this continues to increase exponentially. This knowledge, which can be described as an intangible asset of the company, resides everywhere, from emails and instant messages to detailed reports and PowerPoint presentations. Companies have realised the crucial importance of maintaining this expertise and intellectual property for process improvement and product innovation. In fact, in the current distributed organisational environment, knowledge is a strategically important asset which drives future success and must therefore be correctly controlled. Knowledge Management (KM) is defined as “the process that deals with systematically eliciting, structuring and facilitating the efficient retrieval and effective use of knowledge. It involves tacit knowledge of experts and explicit knowledge, codified in procedures, process and tools.” [10] Sharing and controlling knowledge has become a challenge to organisations due to the complexity of the relationships within a company and the difficulty of capturing, sharing and making use of the knowledge. Recently some IT solutions like ERP, PLM, CRM and SCM have begun to support and facilitate knowledge sharing, but there is still a clear need for more effective frameworks in this area. PLM supports the integration of the wide variety of knowledge along the entire life cycle, allowing users to work more effectively. In fact, having standard knowledge management embedded into PLM improves efficiency


(i.e. it reduces learning curves), enables process excellence and encourages innovation. Ebert and Man [10] present a knowledge-centric PLM system that has allowed Alcatel-Lucent to achieve effective interaction of engineering tools, processes and people. This approach, which combines KM and PLM, brings together knowledge about products, processes and projects. The implementation of the system has brought a reduction in cycle times, improved communication, a reduction in rework and overheads, and further benefits. Other sources such as [11] propose a mechanism for the integration of a PDM system and an expert system – a type of KM system – through a Java-based program. This program, which also includes the knowledge management system Protégé 2000 created by Stanford University, can be distributed as an applet to the PDM system (in this study Windchill PTC), enabling CLIPS rules to be added to the knowledge base. It also provides an intuitive graphical user interface to control the underlying ontology of the knowledge base, and a new functionality which consists of including the process of entering knowledge into the knowledge base in the workflow. In contrast, Cheung et al [12] describe a methodology for structuring knowledge and integrating it in product development. This methodology employs a knowledge management editor – Protégé – that uses an ontology to capture, organise and represent knowledge. The knowledge is then converted into XML files so that it can be stored in a web-centric PDM system to support a distributed and collaborative product development environment. In summary, the literature has revealed that a standard data structure that includes information and knowledge related to the product through the entire lifecycle does not exist. It also suggests that such a structure would facilitate

the implementation and customisation of the PLM system in today's organisations, improving data and information sharing within and between enterprises. The product-process-resource model described has shown its benefits in integrating all life cycle data. The problem with these PPR models is that there is no top-level view which integrates the three components.

3. AS-IS MODEL
The AS-IS model, which represents a picture of the current PLM environment of the vacuum pump manufacturer, has revealed that the PLM system implementation is not mature. Dassault's SmarTeam is the PDM system implemented; however, it is only used to manage and control CAD files created by Dassault's Catia V5 – the main CAD software used in the company. In fact, the organisation is not taking advantage of any workflow implementation, such as the design release process or change management, or of any other feature such as BOM management or drawing number management. Thus, these activities are carried out by other systems such as Microsoft Excel and Access, by web-based tools or by paperwork, as shown in Figure 1. For instance, it has been highlighted by the vacuum pump manufacturer's employees that there is still plenty of paperwork in tasks such as the release process, which slows down the development of the product and could be improved by implementing these tasks in the PDM system. In addition, there is also a lack of integration among the different systems implemented. This is the case for the PDM system, which is not integrated with MAPICS, the main ERP system used in the company for manufacturing operations. This issue entails manual data entry (e.g. the BOM has to be manually entered into MAPICS), which requires a large amount of resources. Although all the

Figure 1. AS-IS model of the vacuum pump manufacturer


sites use the SmarTeam PDM system and MAPICS, there are no standard procedures across the sites, and each site maintains its own processes and systems. For instance, some sites use various CAD systems or even a different PDM system. Where, then, does the organisation store all the product-related information? The answer is in the ‘project folder’. Every project has its own project folder where all the relevant data (CAD files, requirements, FEA analysis results, manufacturing data and files, marketing information, service information, project planning, etc.) is stored. This folder is also used to share and transfer all kinds of documents and designs between engineers. However, files are often stored on personal computers and transferred by FTP, CD or other memory storage devices, or by email. This practice means that there is very little support for sharing, version control and release. The problem with this project folder is that it does not offer the functionalities a PDM system does, such as product structuring, a workflow editor, revision control and many others. For this reason, it would be beneficial for the organisation to have a data structure implemented in PLM that replaces the project folder. This implementation should be based on a data structure that integrates all product-related information.

4. PLM DATA MODEL
The data structure integrates all data and documents (e.g. CAD models, requirement documents, BOM, NC codes, etc.) through all the stages of the lifecycle. According to [2] there is a lack of a standard product information architecture. The data structure created is based on a combination of the ontology developed in a parallel research project [13] using the Protégé knowledge editor tool (see Figure 2), the project folder structure of the vacuum pump manufacturer, and findings from the literature. Implementing the lifecycle data structure in a PDM system would enhance process efficiency and improve information flow within the organisation. The data structure developed has its foundation in the product-process-resource model described in the literature. However, the structure presented extends this model with project-related information, resulting in a project-product-process-resource model, as shown in Figure 3. The ‘project’ element was added since this is a key mechanism applied within the case study company. Current product development programmes often include multiple products: a product family. Modular components and various project management resources are therefore referred to according to the project. Such a distinction makes a valuable addition to the practical element of operating a new product introduction programme, and does not detract from the relationships between products, processes and resources. The upper-level element ‘lifecycle system’ shows that the system represents a combination of project, product, process and resource to describe the entire lifecycle of the product. This top-level element is missing from the literature studies, where the product is generally the central element. The ‘project’ element contains information about marketing, project management, competitors and product family definition. The ‘product’ element describes the product, its architecture, BOM, and its components. The ‘process’ element of the data structure includes information about the design, manufacturing, service, logistics and disposal of the product. Service process information has been addressed by [14], whereas

manufacturing process information has been addressed by [13]. Finally, the ‘resource’ element of the structure integrates information about facilities, persons, equipment, information resources, fixtures and raw materials. The full data structure is depicted in Figure 4.
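To make the shape of this structure concrete, the sketch below expresses the top level of the project-product-process-resource model as a plain Python dictionary and derives the folder paths that would sit under the ‘lifecycle system’ element. This is an illustration only: the element names are taken from the description above, while the dictionary layout and the helper function are a simplification, not the actual Teamcenter implementation.

# Illustrative sketch (not the Teamcenter implementation): the top
# level of the project-product-process-resource model as a Python
# dictionary, using the element names described above.
LIFECYCLE_SYSTEM = {
    "project": ["marketing", "project management", "competitors",
                "product family definition"],
    "product": ["architecture", "BOM", "components"],
    "process": ["design", "manufacturing", "service", "logistics",
                "disposal"],
    "resource": ["facility", "person", "equipment",
                 "information resource", "fixture", "raw material"],
}

def folder_paths(structure):
    """Yield the folder paths that would sit under the top-level
    'lifecycle system' element."""
    for context, folders in structure.items():
        for folder in folders:
            yield f"lifecycle system/{context}/{folder}"

for path in folder_paths(LIFECYCLE_SYSTEM):
    print(path)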

Figure 2. Protégé class hierarchy

Once the data structure was created, it was transferred into the PDM system Teamcenter Engineering. A collaboration context object – ‘Vacuum pump lifecycle system’ – was created as a top-level element to represent the entire system. Then, immediately under the collaboration context, structure context objects were used to represent the project, product, process and resource. Finally, under each structure context, a folder hierarchy was created to collect and organise all the data and knowledge. Figure 5 shows the top-level view of the data structure in Teamcenter Engineering.

Figure 3. Top level of the data structure

5. SYSTEM COMPARISON
This section introduces the ontology editor – Protégé – and the PDM system – Teamcenter Engineering – used in this research. Protégé had been selected by the parallel research project as the ontology editor. Teamcenter Engineering was selected for convenience: Cranfield University has a fully functional installation available. PDM systems have significant costs associated with purchase, installation and support, so this was the only available commercial-scale PDM system. The two tools are significantly different mechanisms for creating, managing, sharing and reusing information and knowledge. Protégé is a free and open-source knowledge system that allows the creation of ontologies, which can be used to represent data and knowledge related to the product. Protégé is a customisable tool that provides total freedom to create, manipulate and interconnect pieces of knowledge. It is flexible and extensible, and a wide number of plug-ins can be included in the system.


Figure 4. Data structure

Moreover, Protégé has no limitation in terms of class hierarchy, number of attributes per class or knowledge interrelation. On the other hand, it is a centralised system which does not allow simultaneous access by different users. This means that it cannot be used as a collaborative tool. Another disadvantage of Protégé is that it does not allow files to be attached. For this reason it can only store certain product-related knowledge and data – character strings and numbers – but not specific product files such as specification Word documents, CAD files, hypermedia, etc. Teamcenter Engineering provides a collaborative, distributed and secure environment, allowing multiple users to access the tool at the same time from different locations. It allows the attachment of a wide variety of files, from Word documents, Excel documents and text files to CAD data. In addition, it permits the creation of data structures and workflows, and the visualisation of the attached files. On the other hand, Teamcenter Engineering is an expensive proprietary system which, although it can be customised to some extent, does not give users full freedom to personalise the different aspects of the system. The integration of knowledge is not an easy task, and knowledge has to be entered in document formats. Finally, although it allows referencing files, it is difficult to relate and link knowledge distributed across the whole data structure.

6. KNOWLEDGE INTEGRATION INTO PLM
6.1. Methodology for knowledge integration
PLM is a business strategy which embraces a wide range of tools to support the product lifecycle from the concept, through design and manufacture, to the disposal of the product. However, in today's competitive and global environment it is also crucial to support decisions with non-product-specific information related to each of the stages


of the lifecycle. Organisations have to manage one of their most precious assets, knowledge, more efficiently. Thus, the integration of knowledge into the PDM system has become a strategy to improve an organisation's competitiveness. The literature review revealed the growing importance of integrating knowledge management into PDM systems to support a collaborative and distributed product development environment. Recent studies have carried out the integration of knowledge-based systems and PDM systems by means of programming languages, creating interfaces between both systems, or by using XML files to share the knowledge with the PDM system. Nevertheless, this paper presents a methodology for the integration of design, manufacture and service knowledge captured by an ontology editor – Protégé – into the developed data structure in the PDM system without using any of the solutions described in the literature. This methodology is a generic approach, which means that it has to be customised every time it is implemented in order to adjust to the needs of the specific organisation. The methodology, which is depicted in Figure 6, comprises the following steps:
Step 1: Understand the need
The first step consists of understanding the need and future benefits that drive the integration of design, manufacturing and service knowledge into PLM, and ensuring that every stakeholder is involved in the process.
Step 2: Study the knowledge system capabilities
Secondly, the knowledge system has to be exhaustively analysed. In this case, the knowledge system utilised – Protégé – is an ontology editor, and as such, the capabilities of ontologies had to be analysed.
Step 3: Analyse the ontology

Figure 5. PLM data structure: screenshot from the Teamcenter Engineering system

It is crucial to analyse the ontology, identifying the classes, their attributes, the relations between the different objects and the constraints.
Step 4: Create the UML representation of the ontology
Once the ontology has been analysed, it is necessary to create the UML representation of the ontology, which helps to identify in detail the classes, subclasses, their properties and the relations between them.
Step 5: Study the PDM system capabilities
Step five consists of studying the capabilities, functionalities, available object types and applications of the PDM system in order to identify the most suitable manner of integrating the knowledge.
Step 6: Identify and determine the types of object
The sixth step is crucial since it relates to the way the knowledge is managed and represented in the PDM system. PDM systems allow the creation of a number of different types of object. The most common are items (e.g. to represent a product), datasets (to import documents from other applications such as Word), folders (to create the project hierarchy), BOMs (to create the product structure), collaboration contexts, forms, etc. Thus, it has to be decided into which type of object each piece of knowledge is going to be translated.
Step 7: Integrate the knowledge into the PDM system
In this step, the best manner and location to include the objects which compile the knowledge in the current data structure in the PDM system have to be identified. If necessary, modifications to the data structure can be made to make the knowledge more accessible. Finally, the knowledge is integrated.
Step 8: Identify the cross-references / links
Once the knowledge has been integrated, some pieces of knowledge could affect, or could be necessary in, different parts of the product structure due to the relation of product-related knowledge across the entire product lifecycle. Thus, references to these knowledge pieces have to be created in the required locations of the data structure.
Step 9: Validation with case studies

Once the knowledge has been integrated into the data structure, validation by experts by means of the organisation's real case studies in PLM is necessary.
Step 10: Training
The training stage accelerates the learning process of the people involved with the PDM system.
Step 11: General use, maintenance and support
Finally, in the last step, the data structure with design, manufacture and service knowledge is in full operation and used by all the stakeholders; but this knowledge has to be properly managed and maintained by the people who use the PDM system. Thus, the PDM system has to be used to capture, explore, share, reuse, manage and maintain the knowledge.
6.2. Knowledge integration into PLM
The key step in the integration of the knowledge compiled in Protégé into PLM is to determine how to transfer the knowledge to the objects available in the PDM system. Generally there is no direct transition, so the implementer has to choose into which type of PLM object to convert the knowledge. The class hierarchy or ontology structure can be translated into Teamcenter, since the PDM system allows the creation of folders to organise the data. However, the knowledge contained in Protégé can only be included in Teamcenter if it is converted into files which can be imported into Teamcenter as datasets (Word documents, Excel files, text files, etc.) or if the knowledge can be transferred to one of the various form templates available in the PDM system. In fact, this is one of the main limitations of PDM when including knowledge, since it is not possible to attach knowledge to other files such as CAD files, drawings, BOMs, etc. unless the knowledge is written in the description field of these files; the latter was considered an inappropriate solution. So, when transferring the knowledge to PDM, the process consisted of deciding the most appropriate file type. Some of the knowledge pieces integrated into PLM are the product architecture, manufacturing and service processes, machining features, facilities and many others.
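The decision in step 6 can be pictured as a simple lookup from ontology construct to PDM object type. The sketch below is a hypothetical illustration of such a mapping: the object types are the generic Teamcenter types named above, and the individual assignments follow the examples given in this paper, but the rule table itself is an assumption, not part of the implemented system.

# Hypothetical lookup table for step 6: each ontology construct is
# assigned the Teamcenter object type that will represent it. The
# object types are the generic PDM types named in the text; the
# rule table itself is an illustrative assumption.
ONTOLOGY_TO_PDM = {
    "class hierarchy": "folder",
    "product architecture": "BOM",
    "machining feature": "dataset (Excel file)",
    "facility": "dataset (Excel file)",
    "person": "dataset (Excel file)",
    "service process": "dataset (Word document)",
}

def pdm_object_for(knowledge_piece: str) -> str:
    """Return the PDM object type chosen for a piece of knowledge,
    defaulting to a generic dataset when no rule applies."""
    return ONTOLOGY_TO_PDM.get(knowledge_piece, "dataset")

print(pdm_object_for("product architecture"))  # BOM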


For example, knowledge related to machining features was integrated by creating Excel files, which included the tolerances of the feature, the magnitude, the shape, the scrap rate and other parameters. Knowledge about the facilities of the organisation, including factory, shop, cell and station, was integrated into Teamcenter as an Excel file named ‘Facility’. Moreover, knowledge about the people of the organisation was integrated by means of an Excel file, which incorporated information about their name, the factory where they were based, their role, project and task. Figure 7 shows the resource structure, which includes the knowledge files related to the machining features, facility, people and tools in Teamcenter Engineering. Other knowledge pieces, such as the product architecture, were translated into a proper BOM in the PLM system.
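As an illustration of this conversion step, the sketch below writes a set of machining-feature records to an Excel file that could then be imported into Teamcenter as a dataset. It assumes the third-party openpyxl library is available; the column names follow the parameters mentioned above (tolerance, magnitude, shape, scrap rate), while the records themselves and the file name are invented.

from openpyxl import Workbook

# Hypothetical machining-feature records; the parameter names follow
# the ones mentioned in the text (tolerance, magnitude, shape, scrap rate).
FEATURES = [
    {"feature": "hole", "tolerance": 0.05, "magnitude": 12.0,
     "shape": "cylindrical", "scrap_rate": 0.02},
    {"feature": "slot", "tolerance": 0.10, "magnitude": 40.0,
     "shape": "rectangular", "scrap_rate": 0.04},
]

def export_feature_knowledge(path: str) -> None:
    """Write the machining-feature knowledge to an Excel file that
    can be imported into the PDM system as a dataset."""
    wb = Workbook()
    ws = wb.active
    ws.title = "Machining features"
    ws.append(["feature", "tolerance", "magnitude", "shape", "scrap_rate"])
    for rec in FEATURES:
        ws.append([rec["feature"], rec["tolerance"], rec["magnitude"],
                   rec["shape"], rec["scrap_rate"]])
    wb.save(path)

export_feature_knowledge("machining_features.xlsx")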

Figure 7. Knowledge integration example

Figure 6. Implementing an ontology into PDM

7. SCENARIOS FOR PLM
In order to validate the implementation of the data structure in PLM, four scenarios related to the pump manufacturer's business were developed. The first scenario simulates the release process of a particular product. The second one simulates the performance modelling of a pump. The third scenario deals with the development of a new product based on an existing pump. And finally, the last scenario shows how information resources can be shared among different products. Focusing on the first scenario, it was decided to implement the release process in PLM because the AS-IS model of the vacuum pump manufacturer revealed that in the company it was carried out on a paper basis. This involved a large number of drawings to be signed by the different stakeholders, consuming a large amount of resources. The release process is started by the project manager, who initiates the workflow and decides who has to review the pump. Then, all the stakeholders involved in this process have to review it and approve the design. Finally, if everyone approves it, the project manager applies the release status (see Figure 8). In order to simulate this scenario in PLM, a workflow template was created. Once the release workflow template was designed, it was applied to a real product in Teamcenter in order to observe the benefits of implementing workflows in PLM. This PLM functionality allows tracking of the current stage of the process and access to the target information (CAD data, specification documents, drawings, etc.) involved in the activities the process comprises.
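The logic of this release workflow can be summarised as a small state machine: the design is released only once every selected reviewer has approved it, and a single rejection stops the release. The sketch below is a minimal, self-contained model of that behaviour; the reviewer roles and state names are illustrative, not taken from the actual Teamcenter template.

from dataclasses import dataclass, field

@dataclass
class ReleaseWorkflow:
    """Illustrative model of the release process described above: the
    project manager starts the workflow, each selected reviewer must
    approve, and only then is the release status applied."""
    reviewers: list
    approvals: dict = field(default_factory=dict)
    status: str = "in review"

    def approve(self, reviewer: str) -> None:
        if reviewer not in self.reviewers:
            raise ValueError(f"{reviewer} is not a selected reviewer")
        self.approvals[reviewer] = True
        if all(self.approvals.get(r) for r in self.reviewers):
            self.status = "released"  # project manager applies the release status

    def reject(self, reviewer: str) -> None:
        self.approvals[reviewer] = False
        self.status = "rejected"

wf = ReleaseWorkflow(reviewers=["design", "manufacturing", "quality"])
wf.approve("design")
wf.approve("manufacturing")
wf.approve("quality")
print(wf.status)  # -> released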


Figure 8. Release process

7.1. Validation
The PLM proposal was validated with four users from the case study company: two engineers and two from CAE systems. Two validations were carried out via email: the respondents answered a questionnaire about a presentation describing the proposal. Two were carried out using web conferencing: the respondents answered questions after a live demonstration. The respondents all stated that the implementation of the PLM data structure within the NPI process would offer significant benefit to the business. It was noted that the scenarios used for validation were simplistic, and did not therefore reflect the detail required

in a live project. They also recognised that the cost and effort for implementation would be substantial, so significant top management support would be required.
7.2. Limitations of the PLM proposal
The PLM data structure proposal was made on the basis of a limited case study: a small number of products, components, manufacturing data sets and processes were added to the system. In the case company, approximately 1% of the total components were added to the system (4 of 300). Coverage of knowledge within the proposed structure has therefore not been fully tested. However, the generic nature of the structure should allow for extension to new or unidentified areas. The effectiveness of the proposal has also not been tested in a live NPI scenario; such an effort would require substantial work beyond the scope of this research.

8. CONCLUSIONS
Product Lifecycle Management (PLM) has emerged as an approach to address the challenges today's organisations are facing, by improving the management of intellectual assets, resources and processes across the whole product lifecycle. PLM implementation will lead to shorter time to market, process excellence, a reduction in costs, an increase in revenues and better relationships with customers, suppliers and partners. Nevertheless, the deployment of PLM is not an easy task. Thus, having a standard data structure would facilitate the implementation and customisation of the PLM solution to the particular needs and requirements of a specific organisation. In addition, having standard knowledge management embedded in PLM would improve efficiency, enable process excellence and encourage innovation. This paper has developed a data structure that integrates all product-related information (e.g. CAD models, requirement documents, BOM, NC codes, etc.), providing a standard repository of product data through all the stages of the lifecycle. This structure classifies the data into project, product, process and resources, and has been implemented in the PDM system Teamcenter Engineering, integrating data and knowledge from a vacuum pump manufacturer. This implementation has revealed the benefits of managing all the intellectual assets, including knowledge, in a collaborative and distributed manner. In addition, the methodology developed to integrate knowledge from a knowledge editor into PDM is a new approach to embedding through-life product knowledge into an existing data structure. Regarding the vacuum pump manufacturer, the AS-IS model revealed that the current PLM implementation is not mature due to the lack of integration between the different systems and the few functionalities implemented in the PDM system, which is mainly used to store CAD data. For these reasons, in order to achieve process excellence, the organisation's PLM strategy should focus on system integration and on the role of the PDM system as the foundation of the PLM environment. In this context, the company would need to carry out three actions. First, it would have to develop a complete data structure template in the PLM system, which includes all the project files and the links between these files; secondly, the company would have to customise its PLM system by implementing the various functionalities the system provides (e.g. workflow templates, BOM); and finally it would have to improve the compatibility between all the

systems that are part of the PLM environment, such as the ERP system. Finally, the implementation of these recommendations would require a huge amount of resources in terms of money and dedication, a re-engineering of the organisation's processes, a change in people's working philosophy, and the full commitment of every person in the company, from top management to development teams.

REFERENCES
[1] Schuh, G., Assmus, D., Zancul, E., 2006, Product Structuring – the Core Discipline of Product Lifecycle Management. 13th CIRP International Conference on Lifecycle Engineering, Leuven, Belgium.
[2] Sudarsan, R., Fenves, S.J., Sriram, R.D., Wang, F., 2005, A product information modeling framework for product lifecycle management. Computer-Aided Design 37(13): 1399-1411.
[3] Zina, S., Lombard, M., Lossent, L., Henriot, C., 2006, Generic modeling and configuration management in Product Lifecycle Management. International Journal of Computers, Communications & Control 1(4): 126-138.
[4] Yeh, S-C., You, C-F., 2002, STEP-based data schema for implementing product data management system. International Journal of Computer Integrated Manufacturing 15(1): 1-17.
[5] Chandra, C., Kamrani, A.K., 2003, Knowledge management for customer-focused product design. Journal of Intelligent Manufacturing 15: 557-580.
[6] Huang, G.Q., Mak, K.L., 1999, Design for manufacture and assembly on the Internet. Computers in Industry 38: 17-30.
[7] Han, K.H., Do, N., 2006, An object-oriented conceptual model of a collaborative product development management (CPDM) system. The International Journal of Advanced Manufacturing Technology 28: 827-838.
[8] Kim, G.Y., Noh, S.D., Rim, Y.H., Mun, J.H., 2007, XML-based concurrent and integrated ergonomic analysis in PLM. International Journal of Advanced Manufacturing Technology, January 2008.
[9] Eynard, B., Gallet, T., Nowak, P., Roucoules, L., 2004, UML based specifications of PDM product structure and workflow. Computers in Industry 55(3): 301-316.
[10] Ebert, C., Man, J.D., 2008, Effectively utilizing project, product and process knowledge. Information and Software Technology 50(6): 579-594.
[11] Gao, J.X., Aziz, H., Maropoulos, P.G., Cheung, W.M., 2003, Application of product data management technologies for enterprise integration. International Journal of Computer Integrated Manufacturing 16: 491-500.
[12] Cheung, W., Bramall, D., Maropoulos, P., Gao, J., Aziz, H., 2006, Organizational knowledge encapsulation and re-use in collaborative product development. International Journal of Computer Integrated Manufacturing 19(7): 736-750.
[13] Baxter, D., Roy, R., Gao, J., Kalta, M., 2008, Development of a knowledge capture and reuse framework for inspection and machining capability for engineering design support. Proc. IMechE Part B: Journal of Engineering Manufacture, submitted August 2008.
[14] Doultsinou, A., Roy, R., Baxter, D., Gao, J., 2008, Developing a Service Knowledge Reuse Framework for Engineering Design. Journal of Engineering Design, submitted July 2008.


Implementing an Internal Development Process Benchmark Using PDM-Data

J. Roelofsen¹, S.D. Fuchs¹, D.K. Fuchs², U. Lindemann¹
¹ Institute of Product Development, Technische Universität München, Boltzmannstraße 15, 85748 Garching, Germany
² EMCON Technologies, Augsburg, Germany
[email protected]

Abstract
This paper introduces the concept of an internal development process benchmark using PDM-data. The analysis of the PDM-data at a company is used to compare development work at three different locations across Europe. The concept of a tool implemented at the company is shown, as well as exemplary analyses carried out with this tool. The interpretation portfolio provided to support the interpretation of the generated charts is explained, and the different types of reports derived from the analyses are described.
Keywords: Process Benchmark, Product Development Process

1 INTRODUCTION
Today's development-process is mainly driven by three factors: time, quality and costs. High-quality products need to be brought to the market in as little time as possible at competitive costs. In addition, flexibility and creativity have a further important impact on the development-process. With these influences competing, ways need to be found towards an optimal product development process. One way to identify potentials for development process improvement is to carry out a process benchmark. In this paper the concept of a tool to support an internal process benchmark is introduced. This tool is implemented at a company using data derived from its Product Data Management system (PDM-system). Three different development locations are compared in this way. To do so, suitable characteristics for process analysis are discussed in this paper. Furthermore, practical methods of analysing business processes based on data collection from digital data sources are introduced, and a general concept of an analysis-tool based on Microsoft Excel and a PDM-system is presented. In order to support the interpretation of the results of the development-process analyses and to draw conclusions for corrective or investigative actions, a portfolio is provided. An international automotive supplier serves as the case for the practical application of the theoretically planned analyses.

2 PROCESS CHARACTERISTICS
In this section the development-process, and the change-process in particular, are characterised. Dettmering [1] defines the required information and the required organisational departments as characteristics of a development-process. In addition, the goal and the result of a process are named as important characteristics, as well as the development-phase and the current state of process and product. In the context of product-information, Dettmering [1] gives the following list of characteristics:

 identification data (owner, revision, date, name, ID-number)
 structural data (technical classification, standard name, kind of product)
 constructive data (function, material, weight, size, tolerances)
 production data (procedure, alternative procedures, processing time)
 controlling data (material costs, machine costs, stock costs)
 purchase data (price, source)
When dealing with processes within the development of products, however, the relationship between product and process can be seen as so tight that product-information can also be used for a characterisation of the development-process. Gaul [2] mentions some characteristics in the context of distributed development-processes as well. He gives the following, mainly qualitative, aspects for a description of the development-process. By naming specific values for these characteristics, a quantitative evaluation is seen as possible in some cases, so that these aspects can be taken into consideration for an objective process-description and analysis. Some examples are given in the following:

 number of partners (two, more than two, not clear)
 distribution of locations (local, regional, international)
 time order (parallel, sequential, mixture)
 intensity of cooperation (integrated, loosely linked)
 data access (possible, not possible)
In addition to this, the different functions of PDM-systems can serve as characteristics and therefore as a source for analyses. With products and documents passing through these functions within the development-process, their main attributes are changed and can therefore be used for the characterisation of the development-process. But it has to be warned against the wrong conclusion that the PDM-functions are a complete digital implementation of the development-process. The PDM-system is not an exhaustive documentation of the development-process; only certain process steps can be seen and analysed through its data. Derszteler [3] provides a further set of variables that describe processes. He specifies seven groups of variables, which are: time, information, resources, costs, human resources, quality and flexibility. For this paper six different kinds of information are derived from these literature sources, as can be seen in Figure 1. In order to be able to analyse these characteristics, they need to be implemented and used in the PDM-system.


Figure 1: Six kinds of characteristics for process description (Master data, Temporal information, Background information, Relationships and interfaces, Economical valuation and procedure, and Results, grouped around the process)

In order to adapt process characteristics to the kind of process analysis proposed in this paper, these characteristics have to be implemented and administered in a software system, in this context a PDM-system. As far as the implementation of characteristics is concerned, the following three kinds of implementation can be distinguished in the context of PDM-systems:
 characteristics that are directly implemented as attributes (e. g. name)
 characteristics that are summarised in the history (e. g. any modification)
 characteristics that are not implemented in any attributes (e. g. development time)
Besides the form of implementation, the form of administration can also vary. On the one hand, an attribute can be administered by the PDM-system; on the other hand, an engineer might be responsible for the administration and can thereby cause inconsistencies and gaps within the documentation. The different forms of implementation and administration are discussed in more detail in the following. The first group of characteristics are directly implemented in the form of attributes, so they can be collected directly and their analysis is simple. They are administered automatically by the PDM-system or manually by the development engineers. The second group of characteristics, however, are documented in the history of an object and can therefore only be analysed indirectly. These data are subject to the supervision of the PDM-system. Moreover, these characteristics document different kinds of events related to an object (e. g. check-in, check-out, modification or promotion). Histories have a common content, even if there are different ways of notation; the recorded events are mainly documented with name, date, and application-specific information. The history can include information like the items listed below:
 kind of event
 person
 date
 status
 access-specific information
Having described the second group of implemented characteristics, the last group of characteristics can be addressed: the non-implemented ones. These characteristics might be interesting for the description of the change- and development-process, but they are not implemented and thus not applicable for process analysis. The question arises which characteristics should and can be taken into consideration. This is discussed next.

3 OBJECTS OF ANALYSES
The aim of this paper is to analyse change- and development-processes. Focused on workflows according to zur Muehlen [4], these processes can be analysed by five kinds of data:
 events (processes with irregular operations, such as aborted processes)
 activities (comparison of similar activities for analysis of efficiency)
 processes (analyses of distribution and required resources)
 resources (identification of organisational ratios or learning curves)
 business objects (analysis of the performance of processes)
In this paper, all of these are needed. But from the PDM perspective, all of these data can be seen as objects administered by the PDM-system. Due to this, they can all be summarised into a single group named “PDM-objects”. As in this case one single group of objects is not sufficient and five are too many, the objects of analysis are grouped into process- and product-specific objects. The grouping into process and product leads to two ways of analysing a process, sketched in the example below:
 direct analysis: analysing the process (process, activity, event)
 indirect analysis: analysing the product (resource, business object, event)
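The distinction between directly implemented attributes and history entries, and between direct and indirect analysis, can be illustrated with a small data model. The sketch below is an illustrative assumption, not an actual PDM-system schema: a PDM-object carries attributes that can be read directly, plus a history of events (kind of event, person, date, status) from which process information has to be derived indirectly.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class HistoryEvent:
    """One entry in a PDM-object's history (second group of
    characteristics): kind of event, person, date and status."""
    kind: str      # e.g. "check in", "check out", "modification", "promotion"
    person: str
    day: date
    status: str

@dataclass
class PdmObject:
    """A PDM-object with a directly implemented attribute (first group)
    and a history that can only be analysed indirectly."""
    name: str                        # directly implemented attribute
    history: list = field(default_factory=list)

    def promotions(self):
        """Indirect analysis: derive promotion events from the history."""
        return [e for e in self.history if e.kind == "promotion"]

obj = PdmObject("drawing-4711")
obj.history.append(HistoryEvent("check in", "jdoe", date(2008, 6, 2), "in work"))
obj.history.append(HistoryEvent("promotion", "jdoe", date(2008, 7, 1), "released"))
print(len(obj.promotions()))  # 1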

3.1 Description of the tool
The approach to the analyses-tool in this paper is based on the management-information-systems introduced by Best and Weth [5]. In contrast to the original concept, the semi-automated tool in this paper only uses a single data-source: the PDM-system. Furthermore, the update of the data and the modification of the analyses-algorithms are carried out manually instead of through the automation originally described by Best and Weth [5]. But the main difference between the two systems can be found in the inclusion of support for interpretation. While the management-information-system approach only presents variables, the semi-automated process-analyses-tool in this paper is accompanied by an interpretation portfolio to support the interpretation and therefore facilitate the handling of the system. At the beginning, the data are collected and several table reports are generated. Based on these files from the PDM-system, the information is summarised into two analysis-files, one for the background information and another one for the actual analyses. The first one, the general data, includes general information that is universally valid, such as a list of persons and the locations they work at, or a list of customers. The latter contains the


information on the objects for the actual analyses. After the generation of these two files, the information is analysed and specific values are calculated by formulas and Visual Basic for Applications macros. Based on these values, charts are generated and the most important variables are integrated into the summary-sheet. From both charts and summary-sheet, a specific report is generated for presenting the results to the management. The following figure illustrates the concept of the analyses-tool in a flowchart.

Figure 2: Concept of the analysis-tool

3.2 Analysis of products
As far as the analysis of product-change- and -development processes is concerned, some information has to be taken into consideration, as stated before. Mainly the product that passes through the processes can serve as a source for this information. Thus, the relevant objects and the analyses of their characteristics are presented. In this respect, the following PDM-objects are analysed in more detail:

 persons
 drawings
 notify- and responsibility-lists
Based on these objects, the analyses described below have been performed.
Persons and their locations and departments
The persons involved in a change- or development-process play an important role. On the one hand they have strategic functions such as planning or deciding about change requests or other processes; on the other hand they fulfil operative functions like giving estimation statements or realising the decided changes. Additionally, the analysis of persons and their locations and departments can serve as background information and a reference for further analyses. Thus, persons and their organisational and regional impact on processes are interesting to analyse. There are mainly two aspects worth investigating: the location and the department a person belongs to. As an example, the analysis of departments is described in the following. Although person-related analyses have a rather statistical character, they can serve as background information and a reference for the interpretation of further analyses. Moreover, the persons have strategic and operative influence on the processes. These are the main reasons for the importance of this investigation. The analyses are based on attributes that can be taken from the PDM-system. The relevant attributes are:

 login-name and full name of the persons
 location and department of the persons
 availability of the persons (active / inactive)

Persons per department
One analysis in the context of persons is the analysis of the number of persons per department. This analysis can serve as statistical background, as a possible reason for a


certain level of influence, and as a reference variable for further analyses. The company's organisational structure is examined in more detail in order to acquire knowledge about the real impact of the departments.
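A minimal sketch of such a person-related analysis is given below, assuming the person attributes listed above are available as exported records; the records and field names are invented for illustration.

from collections import Counter

# Hypothetical rows as exported from the PDM-system's person data;
# the field names mirror the attributes listed above.
persons = [
    {"login": "jdoe", "location": "Augsburg", "department": "Design", "active": True},
    {"login": "asmith", "location": "Garching", "department": "Design", "active": True},
    {"login": "bmeier", "location": "Augsburg", "department": "Testing", "active": False},
]

# Count active persons per department, as in the analysis described above.
per_department = Counter(p["department"] for p in persons if p["active"])
print(per_department)  # Counter({'Design': 2})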

3.3 Drawings as object of investigation
In development processes, the main objects of communication and documentation concerning geometrical information are drawings [6]. Furthermore, they contain functional information and are therefore important representations of the product. Moreover, drawings include information that cannot be taken directly from the digital product models, such as tolerances, materials or methods of production. Thus there are a few aspects that have to be analysed in more detail in order to gain information about the development process:

 number of drawings per development-phase
 number of approvals and rejections of drawings
 number of drawings generated per person
 current number of drawings per state
 number of drawings per state in trends
 relative number of revisions per drawing
 average development duration per drawing
 number of drawings with a certain duration in statistical overview
These aspects have been selected for mainly two reasons. The first one is the availability of data: not every kind of information is implemented and administered in the PDM-system, thus only available information can be considered. The second reason is that these analyses focus on topics that are tightly connected to the development process, such as development duration or releases. Therefore, the selected information is suitable for a process analysis based on PDM-data. Like persons, drawings are objects that are documented by the PDM-system. In order to carry out the analysis of the aspects named above, the following characteristics have to be taken into consideration (see the sketch after this list):

 name of the drawings and the related parts
 current revision and all revisions of the drawings
 rejections and approvals of the drawings
 current state and policy of the drawings
 date of origination and all states of the drawings
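As a minimal illustration of how these characteristics support the analyses, the sketch below counts drawings per state and computes the average development duration from origination to release. The drawing records are invented; only the field names follow the characteristics listed above.

from datetime import date
from statistics import mean

# Hypothetical drawing records built from the characteristics listed
# above (name, part-class, state, dates); the values are invented.
drawings = [
    {"name": "housing", "part_class": "C", "state": "released",
     "originated": date(2008, 3, 1), "released": date(2008, 4, 15)},
    {"name": "flange", "part_class": "D", "state": "released",
     "originated": date(2008, 3, 10), "released": date(2008, 3, 28)},
    {"name": "rotor assembly", "part_class": "B", "state": "in work",
     "originated": date(2008, 5, 2), "released": None},
]

# Current number of drawings per state (one of the aspects above).
per_state = {}
for d in drawings:
    per_state[d["state"]] = per_state.get(d["state"], 0) + 1
print(per_state)  # {'released': 2, 'in work': 1}

# Average development duration per drawing, in days, over released drawings.
durations = [(d["released"] - d["originated"]).days
             for d in drawings if d["released"] is not None]
print(mean(durations))  # mean of 45 and 18 -> 31.5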

Number of drawings per development-phase
The number of drawings per development-phase can be used as a general overview of the activities of every location. Additionally, the absolute number of drawings per development-phase can be identified. The ratio of these numbers is also important for the strategic orientation of the company; these numbers can help to identify weaknesses concerning the company's future, as there need to be enough activities in development in order to guarantee future success. In order to enable a more specific analysis, it is not the absolute numbers of all locations which are plotted, but the values for the three main development-locations.
Number of drawings originated per person
The number of drawings originated per person can be seen as an index for productivity. For enabling the

comparability between the different locations, this variable is referenced to the number of persons working at the location. Thus the formal definition is similar to the following productivity factor named by Burghardt [7]:

Factor of productivity = Expense on product / Number of persons (1)
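Since the paper takes the number of generated drawings to represent the expense (as explained below), a worked example of equation (1) with invented numbers looks as follows:

# Worked example of equation (1) with invented numbers:
# 120 drawings generated at a location staffed by 8 persons.
drawings_generated = 120      # stands in for the "expense on product"
persons_at_location = 8
factor_of_productivity = drawings_generated / persons_at_location
print(factor_of_productivity)  # 15.0 drawings per person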

Moreover, Burghardt [7] notes that this index describes a business-economical variable, but in this paper's context it can also be interpreted as an indicator of the quality of work in a process. Furthermore, this variable can indicate the workload at a location or special events in the development-process, such as a request-for-quotation phase. While the number of persons can be collected easily, the expense on the product is hard to evaluate. In this paper, the number of generated drawings has been selected to represent the expense. Thus, the analysis of drawings originated per person can be seen as an analysis of productivity and of events in the development-process. For a better overview, not only the value of a single location is plotted, but the values of the three main development-locations as well as the average value. This enables a more detailed analysis. Still, it has to be considered that the amount of work necessary to complete a drawing is related to the complexity of the part or product in question. So not only the number of drawings is taken into account, but also the complexity of the products generated over a period of time, as is done in the following type of analysis.
Average development duration per drawing and part-class
In this paper, the average development duration is analysed in relation to the complexity of the related part. These two variables are described in more detail next. According to van der Aalst [8] and Derszteler [3], the average development duration can be used as a process-indicator. It is mainly used for performance-analysis in the context of workflows. Heinz [9] warns against the wrong conclusion that the duration between two states is identical to the work time. This wrong interpretation would imply that the developing engineer is exclusively working on this single product, which is not a realistic assumption. The average duration in this context means the arithmetic mean of all durations between the date of origination and the date of releasing the drawing. In order to quantify the complexity of a part and its drawing, part-classes are assigned to the standardised part-names in the PDM-system. This provides the advantage that the products are generalised and comparable. The following six part-classes are used for the analysis:

 class A: combined assembly
 class B: complex assembly
 class C: simple assembly, complex component
 class D: simple component
 class E: small or standardised part
 class X: not assignable
The analysis is then carried out regarding the average development time for the different part-classes and the number of parts developed per class.
3.4 Notify- and responsibility-lists
Notify- and responsibility-lists can be seen as automatic documentation of information-distribution and responsibilities. While the notify-lists include all persons who need to be informed automatically about important news concerning a development project, the

responsibility-lists include all persons who take a role of responsibility in a process like the change-process. Additionally, both lists can be used as a source of information about possible contact persons. These lists are an object of investigation, as the information-distribution and responsibilities are documented here. As the analyses for both types of lists are very similar, only one example is given in this paper. The following items are aspects of the analyses:

 number of active persons per list
 number of projects per list
 number of references in lists per location
 number of departments per list
Along the lines of the analyses of drawings, these aspects have been selected because of their availability and their tight connection to the development-process. As notify- and responsibility-lists are objects in the PDM-system, they can be administered and analysed as described above. To do this, the following relevant attributes can help:

 e-mail-address of the persons
 location and department of the persons
 availability of the persons
 projects and customers of the actual list
 date of last modification
Having given the relevant characteristics, one analysis is described in more detail.
Number of active persons per list
As far as the analysis of the information-distribution and the responsibilities with the help of notify- and responsibility-lists is concerned, a very interesting point is the number of persons whose communication is documented in the lists. It is analysed whether there are enough persons communicating, but no answer can be given as to whether these are the right persons or not. For this analysis, the number of active persons in the lists is counted and the number of lists with a certain number of persons is plotted in a bar chart. Furthermore, ranges of optimal sizes are pointed out by coloured areas. These graphs, and an explanation of what to do when optimal sizes are missed, form part of the interpretation portfolio described later on.
3.5 Analyses of process
As stated above, along with the analyses of products, information can be collected by the direct analysis of processes. The process analysis adds to the already performed analyses of development-processes via product data. As the development-process was regarded intensively in the analysis carried out on product data, in this part of the contribution the focus is laid on change-processes.
The change-process
The change-process is tightly connected to the development-process. For this reason, exemplary analyses of this process can be used to obtain information about the overall development-process. With the change-process being implemented in the PDM-system as a workflow, data about this process are available and the PDM-system can serve as a data-source.


In this context several aspects can be taken into consideration. They have been selected because of their relevance for the change-process and the availability of information in the PDM-system. In order to give an overview of the analyses, the selected aspects are listed below:
 number of affected drawings
 current number of change requests per state
 number of change requests per state in trend
 number of change requests per location
 number of tasks on time
Processes can be implemented as objects in a PDM-system with characteristic attributes. As these attributes form the basis of the analyses, they are briefly introduced in the list below:

 involved persons and their roles
 start- and target-date of the change requests
 description of the change requests
 date of initiation and the states of the change requests

 related customer of the change requests
One of the analyses carried out to analyse the change-process is described in the following.
Current number of change requests per state
The number of change requests that are currently at a certain state is analysed. This analysis can serve as an answer to the question of what the affected engineers are currently working on. It can be complemented by a look at the number of drawings generated in the development process; in this way a snapshot of the momentary topics of work can be deduced. In order to allow a more specific analysis, a distinction for all locations, all customers and all supervisors is integrated.
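A minimal sketch of this counting, distinguished by location, is given below; the change-request records and field names are invented, following the attributes listed above.

from collections import Counter

# Hypothetical change-request records exported from the PDM-system;
# the attributes follow the list above (state, location, etc.).
change_requests = [
    {"id": "CR-101", "state": "in work",  "location": "Garching"},
    {"id": "CR-102", "state": "approved", "location": "Augsburg"},
    {"id": "CR-103", "state": "in work",  "location": "Garching"},
]

# Current number of change requests per state, distinguished by location.
per_state_and_location = Counter(
    (cr["state"], cr["location"]) for cr in change_requests
)
print(per_state_and_location)
# Counter({('in work', 'Garching'): 2, ('approved', 'Augsburg'): 1})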


3.6 Summarising, interpreting and reporting
Having analysed processes and their resulting products, these analyses and their results need to be summarised, interpreted and presented for the initiation of further adequate action. In this respect, zur Muehlen [4] and Derszteler [3] point out the need for different information for different users. The strategic management needs long-term-oriented information, whereas the operative management requires rather short-term-oriented facts [4]. Additionally, a third and maximally detailed level of information is important for the direct controlling by PDM-administrators. For the supply of information regarding the different needs for information, summary-sheets, the interpretation-portfolio and reporting are used.
Summary-sheets
During the analyses of processes and products in the PDM-system, a great number of different variables is calculated. But this amount of variables is neither clear, well-structured nor handy. For these reasons, summary-sheets are created within the analyses-tool. These contain the most important variables of the analyses at a glance, such as the number of drawings in a certain state or the number of missing tasks within a process. In addition, they contain some organisational information such as the date of data-collection or the name of the analysing person. Referring to van der Aalst [8], two concepts have to be traded off in this respect: aggregation and abstraction. While aggregation describes the accumulation of required information, abstraction focuses on the reduction of unnecessary information. While for strategic and operative management the information displayed in the summary-sheets is integrated into a report at different levels of detail, PDM-administrators can take the required information directly from the summary-sheets and the analyses-tool. All in all, based on the summary-sheets, the generation of reports is facilitated, the different needs for information are considered, and a clear, structured and handy overview of relevant process-information is given.
Interpretation-portfolio for analyses
The textual documentation of the interpretations for the analyses as provided by the summary-sheets does not fit the need for an easy and fast interpretation. Therefore, the interpretation-portfolio has been developed, which is described in the following. After having analysed the development-process, the results of the analyses need to be interpreted in order to enable a decision on corrective or investigative measures. Zur Muehlen [4] suggests a procedure of defining hints for predefined results of process-analyses; here, explanations can be inserted for positive or negative correlations of indicators. In contrast, Heinz [9] provides an abstract level of support for interpretation by naming five patterns of problems that can be found when analysing the data. While the first concept is not applicable for the specific situation because the hints are given automatically, the second concept is too general and therefore provides no reasonable suggestions for suitable actions. In order to fill this gap, the interpretation-portfolio has been developed. This document contains a list of all analyses carried out, along with the developed chart and its interpretation. Additionally, some restrictions are given that are related to the specific analysis. The support for interpretation is given by naming possible company-specific reasons or effects for high or low values. Moreover, tasks for further analyses or questions on the process are listed, which can prove the proposed reasons and which support further corrective or investigative actions. The optimal value is documented as well, in order to enable the documentation of process-knowledge and the provision of a reference for the analyses. The restrictions given in the interpretation-portfolio include information on limitations that apply to the analyses. They consist mainly of the following content:

• vault used for data collection
• locations that can be analysed
• important hints on the interpretation
• information about the displayed values
• visualisation-specific restrictions

Figure 3 shows an extract from the portfolio.

Figure 3: Exemplary excerpt from the interpretation-portfolio

By using the interpretation-portfolio, a clear interpretation and hints for further analyses are given to the user. In order to ensure its validity, the interpretation-portfolio has been generated in discussions and workshops and therefore documents the process knowledge of several persons. Important contributors to the creation and maintenance of the portfolio come from management and operative engineering; they all hold certain aspects of process knowledge that can be summarised in this way. Additionally, their integration into the interpretation of the processes can help to raise both acceptance of and motivation for process analyses. Regarding maintenance, it has to be mentioned that the interpretation-portfolio is not, and possibly never will be, exhaustive. Thus, the interpretation needs to be questioned every time it is applied. The portfolio should be completed and adapted during its use so that more process knowledge can be accumulated. Table 1 summarises the strengths and weaknesses of the interpretation-portfolio as introduced in this contribution (a sketch of a machine-readable portfolio entry follows the table).

Table 1: Strengths and weaknesses of the interpretation-portfolio

Strengths:
• use of a summary makes interpretation faster, safer, easier and more intuitive
• increasing process-understanding
• discovering best-practice and limit values
• documentation of long-term knowledge
• enabling common interpretations, avoiding wrong interpretations and wrong measures
• used to characterise the process
• support for interpretation and for drawing conclusions

Weaknesses:
• great expense of creation and maintenance
• danger of unreflected adoption of interpretations
• low manageability
• not exhaustive
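To make the idea concrete, a portfolio entry could be held as a simple record per analysis. This is a minimal sketch only; the field names are illustrative assumptions rather than the authors' implementation:

```python
# Sketch of one interpretation-portfolio entry as a record.
# Field names are illustrative assumptions, not the authors' schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PortfolioEntry:
    analysis_name: str                      # e.g. "change requests per state"
    chart_path: str                         # chart developed for the analysis
    interpretation: str                     # company-specific reasons for high/low values
    optimal_value: Optional[float] = None   # documented reference value, if known
    restrictions: List[str] = field(default_factory=list)     # e.g. analysed vault, locations
    follow_up_tasks: List[str] = field(default_factory=list)  # further analyses/questions
```

Keeping the entries structured like this would also ease the maintenance and completion of the portfolio during use, as called for above.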

When interpreting the result of a process analysis, the interpretation has to be viewed critically. Management as well as the works council might be affected by the analyses and their results [10], so both the analyses and their interpretations can be sensitive in terms of company politics. For example, an analysis may show that a development location seems to be less efficient than the others. A premature interpretation might be that this location is working badly, and that the replacement of the badly performing engineers is a possible measure; but the lower efficiency can also indicate a lack of manpower. Thus, the interpretation of the analyses needs to be done very carefully in order to avoid wrong interpretations. Due to non-disclosure agreements, no further details of the analysis results and the measures taken afterwards can be given here.

Reporting
After analysing and interpreting the development process, the results need to be documented and presented in order to be able to take measures for process improvement. To do so, reporting is seen as a suitable solution: based on reports, managers should be able to take strategic or operative decisions for process improvement. For the generation of the report, the generated summary-sheets and charts as well as the interpretation-portfolio are used. All components have been generated during the analyses and provide current, available and relevant information without much additional effort. The summary-sheets give the important variables from the relevant analyses. In addition, charts are presented to visualise important results. In order to allow a more intuitive understanding, the relevant interpretations from the interpretation-portfolio have been integrated into the charts in the form of message boxes. A management report adapted for the partner company was generated this way.

4 CONCLUSION AND FUTURE WORK
In this contribution a concept for an internal process benchmark using PDM-data is introduced. The process characteristics used for the analysis and the concept of the tool implemented at a partner company are described. Furthermore, some exemplary analyses are depicted, e.g. the number of drawings generated per person. The interpretation-portfolio, developed in order to support the interpretation of the generated charts, is explained. Different kinds of reports for different kinds of recipients (amongst others, management and PDM-administrators) are created. This kind of analysis of PDM-data enables an internal development process benchmark at the company, from which suggestions for possible measures or actions for process improvement are derived. Building on this benchmark, it is now planned to implement a similar kind of analysis for different development projects in order to find characteristic metrics that hint at possible project delays and thus enable early measures to prevent these delays.

5 ACKNOWLEDGEMENTS
The results presented in this contribution were generated in the research alliance ForFlow. The alliance consists of six institutes from mechanical engineering and computer science and twenty affiliated companies. ForFlow is financed by the Bayerische Forschungsstiftung.


6 REFERENCES
[1] Dettmering, H., 2005, Produktdatenmanagementsysteme, TU München, Lehrstuhl für Informationstechnik im Maschinenwesen, lecture notes.
[2] Gaul, H.-D., 2001, Verteilte Produktentwicklung – Perspektiven und Modell zur Optimierung, TU München, Lehrstuhl für Produktentwicklung, Diss.
[3] Derszteler, G., 2000, Prozeßmanagement auf Basis von Workflow-Systemen, Josef Eul, Lohmar.
[4] zur Muehlen, M., 2004, Workflow-based Process Controlling: Foundation, Design, and Application of Workflow-driven Process Information Systems, Logos, Berlin.
[5] Best, E., Weth, M., 2005, Geschäftsprozesse optimieren: Der Praxisleitfaden für erfolgreiche Reorganisation, 2nd edition, Gabler, Wiesbaden.
[6] Jania, T., 2005, Änderungsmanagement auf Basis eines integrierten Prozess- und Produktdatenmodells mit dem Ziel einer durchgängigen Komplexitätsbewertung, Universität Paderborn, Lehrstuhl für Rechnerintegrierte Produktion, Diss.
[7] Burghardt, M., 2006, Projektmanagement: Leitfaden für Planung, Überwachung und Steuerung von Entwicklungsprojekten, 7th edition, Publicis Corporate Publishing, Erlangen.
[8] van der Aalst, W., 2003, Workflow/Business Process Management, TU Eindhoven, Department of Information Systems.
[9] Heinz, K., 2002, Workflow-Management-Systeme: Datenermittlung und -analyse für die Prozessoptimierung, Verlag Praxiswissen, Dortmund.
[10] Becker, J., Kugeler, M., Rosemann, M., 2003, Prozessmanagement: Ein Leitfaden zur prozessorientierten Organisationsgestaltung, Springer, Berlin.



Invited Paper

How to make "Value Flow" for a Start-up Enterprise

W. A. Beelaerts van Blokland, B. Dumitrescu, R. Curran
Technical University Delft, Faculty of Aerospace Engineering (AMO)
[email protected]

Abstract
This paper reports research towards the design of the start-up organization OpDieFiets.nl. The start-up focuses on the market demand for quality bicycles at the lowest possible price on the Dutch market. Theories around innovation and value systems form the background for designing the organization. The 3C value flow model, currently in development, defines the core value drivers for enterprises and is used to pre-design the value system for OpDieFiets.nl. Four other start-up companies were investigated with the help of the 3C value flow model to define the value system for OpDieFiets.nl.

Keywords: 3C value flow model, Low-cost, Value networks, Lean manufacturing, Start-ups, Bicycle

1. INTRODUCTION
Today's business marketplace for various products is characterized by pressure for better quality, faster delivery and lower prices. There is constant pressure on companies to perform better than the competition and to outclass rivals by responding better to customer demand, making better products and delivering better value offerings through unique business processes.

One model that profoundly combines these concepts is the 3C value flow model [1, 2, 3]. This analytical framework for value chain innovation processes and their value drivers combines views from the literature on value systems, innovation and supply chain management. The core motto of the 3C value flow model is 'faster, cheaper, better' value and, as this paper outlines, this also applies to a starting business such as OpDieFiets.nl. Alignment with the 3C value flow model is necessary to answer the stringent demands in the marketplace, to outclass rivals and to set up companies in a lean [4, 5] and efficient way, where business innovation as a strategic factor is necessary to survive in the marketplace.

The start-up OpDieFiets.nl was initiated to answer a wide demand-and-supply problem within an insufficiently served, fragmented, but large potential bicycle market. OpDieFiets.nl is using the 3C value flow model to design its own value chain.

Three key points compose the new bicycle product concept: utmost robust quality; the lowest price of all city bicycles on the Dutch market; and availability through a web-store, which eliminates the need for physical stores, with home delivery. This paper outlines the design of the value chain of OpDieFiets.nl with the help of the 3C value flow model, which identifies three main value drivers: Continuation, Conception and Configuration. To find support for the design of the new value chain, four other companies (Taniq, AELS, Bikesonline.nl and QuinTech) are used as input for the value chain design of OpDieFiets.nl. The focus is thus on rigorous effectiveness and efficiency in the business model and on value-adding processes. Therefore the following research question arose:

'Is it achievable to start a company that will establish new value offerings in the low-end bicycle market?'

The structure of this paper is as follows. First, the theoretical framework is discussed, followed by a market, product and value flow analysis of OpDieFiets.nl along with the other cases, after which preliminary conclusions are drawn.

The innovative aspect of OpDieFiets.nl's entrepreneurship is the no-frills product concept in combination with a lean business model as the basis for the creation of new value offerings for bicycles. Through a rigorous minimization of cost [6] without compromising quality, and with easy online accessibility, the low-end segment of the bicycle market is targeted.

2. Theoretical framework
The theoretical framework of the 3C value flow model (figure 1) emerges from three pillars of value chain innovation processes; it is used to design a value chain or value system and to measure value flow at product level, consisting of customer value, supply value and focal enterprise value. The three C's stand for:



Conception: to organize and collaborate with investment-sharing (supply) partners in order to create unique and smart original processes and to accelerate customer value and supply value by co-development. Co-innovation can be seen as a form of co-development. This investment-sharing effect can be measured by the investment multiplier (IMP): the total investment into the new product divided by the investment into the new product by the focal enterprise. This multiplier expresses the ability of the focal enterprise, which can be the innovator, to multiply the investments in the development of the new product by involving other partners.

Configuration: to organize and collaborate with production-sharing partners in order to create and accelerate customer value and supply value by co-production. This production-sharing effect can be measured by the production multiplier (PMP): the total production value of the product divided by the own production value. This expresses the value contribution of the supply chain.

Figure 1: 3C value flow model (Continuation, Conception and Configuration).

The initial 3C construct was based upon major publications on the value chain, core competences, supply chain and innovation [7, 8, 9, 10, 11, 12, 13, 14] that put the generation of value in a perspective of value flow. The essence of the 3C value flow model is the clear depiction of the drivers of value involving the supply chain and the customer chain, depicting the chains as a value flow system in which only value is added and waste is minimized, as lean principles also prescribe.
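Written out as formulas (the notation is ours), the two multipliers defined above read:

\[
\mathrm{IMP} = \frac{I_{\mathrm{total}}}{I_{\mathrm{focal}}},
\qquad
\mathrm{PMP} = \frac{V_{\mathrm{total}}}{V_{\mathrm{own}}}
\]

where $I_{\mathrm{total}}$ is the total investment into the new product, $I_{\mathrm{focal}}$ the investment into the new product by the focal enterprise, $V_{\mathrm{total}}$ the total production value of the product and $V_{\mathrm{own}}$ the focal enterprise's own production value. Values above 1 indicate that partners share the investment or production burden.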

Innovation
To create new value, innovation is the research field to study. Today's innovative products are no longer developed solely by companies themselves but can be developed in a more open [15] way, introducing customers as well as suppliers to the innovation process. This approach reduces the risk of product failures, as all stakeholders are more involved. It can also reduce the invested capital, as suppliers can participate in the development of new products, eliminating waste in design processes and accommodating just-in-time supplies when the product is ready to enter the market.

Value Networks
The theory of value networks [16] is also used, as an addition, to provide a solid theoretical backbone for the analysis. To build closely interlinked cooperative relationships, interdependencies and value-adding co-innovative cooperative links (in short, value networks), firms are investing in both upstream and downstream initiatives to cope with developments. These networks are the supplier, distribution and product development networks.

SCM and Lean principles
Supply Chain Management is the management of the entire value-added chain, from the supplier to the manufacturer right through to the retailer and the final customer [17]. More and more companies are switching their corporate processes towards a leaner approach. These developments are visible throughout many industries and disciplines, because major improvements, advantages and savings can be gained by applying lean tools [18]. In addition to the lean principles, characteristics of low-cost strategies are used. As OpDieFiets.nl is anticipating a "no-frills & low-cost" strategy, it is important to state the principles and possibilities of this strategy. The low-cost strategy can be identified by three key features [19, 20, 21]:
- simple product,
- clear positioning,
- low operating cost.

3. Market Research
To start off, the bicycle market is divided into the low-end, middle and high-end segments (figure 2); this analysis focuses on the low-end segment. It is also important to first define the market and specify the customers before continuing to the analyses. It must also be noted that an apparent 'gap' exists in the market, as the low-end market is insufficiently served with a cheap, robust and quality product characterized by simplicity and low variety, as depicted in figure 2.

Figure 2: Market positioning matrix (store types such as superstores, all-round stores, brand stores, target group stores and (internet) discounters positioned by variety, price and added value, against customer groups such as youth, students, incidental bikers/YUPs and low-income groups).



Data [22, 23, 24] show that annually 1.4 million new bicycles and 650,000 second-hand bicycles are sold in the Netherlands, forming the total market potential. Selections need to be made on price range, bicycle type and purchase channel ("buy without trying"). Figures 3 and 4 provide a schematic overview of the relevant market segments.

Figure 3: The relevant market of new bicycles (of 1.4 million new bicycles/year, 167,000 form the relevant segment for OpDieFiets.nl after excluding the higher price range, hybrid, MTB, children's and electrical bikes, and segments where a product trial is required). Source: BOVAG/RAI.

Figure 4: The relevant market of second-hand bicycles (of 650,000 second-hand bicycles sold in NL annually, 132,000 form the relevant segment after excluding the higher price range (>200 €), hybrid, MTB, children's and electrical bikes, and segments where a product trial is required).

Figures 3 and 4 show that the relevant market segment in the Netherlands is 167,000 new bicycles/year and 132,000 second-hand bicycles/year, totalling 299,000 units of market potential. The target groups are students, YUPs, the youth and low-income civilians [25, 26, 27].

Product research
Based on research [28], the following arguments influenced the product choice:
- The product should be simple and robust, making use of the low-cost philosophy.
- The market should be served with a cheap, robust and good-quality bicycle in the low price range.
- The low-end market that OpDieFiets.nl is targeting is insufficiently served and fragmented. This market has a potential of 299,000 bicycles annually.

Continuation
If we look at Taniq, it understands well what the customer wants with respect to durability at affordable cost for innovative automotive products, because it cooperates closely with the customer, in this case the truck manufacturer. The product for OpDieFiets.nl had been developed upstream with the help of potential future customers, which is in line with the value flow driver Continuation. Continuity of the start-up is better preserved by involving the customer in the value development process [13].

A great deal of customization is performed for Taniq's products upstream, which is also the case for AELS and Quintech. OpDieFiets.nl offers customization, but only within a low-cost, no-frills philosophy, downstream. For Bikesonline.nl customization is available but less tempting due to its large array of products and configurations. Because some of these products will not be exactly what customers want, waste automatically emerges in the value chain. This extra waste at Bikesonline.nl is not preferable for OpDieFiets.nl. Leaving room for customization based on customer involvement is thus necessary.

Additional products will be sold via the internet as additional revenue-generating activities within the no-frills philosophy. This concept is easily expandable to other countries and cities.

Conception
For Conception, Taniq and Quintech hold a patent and that is their reason of existence, whereas AELS delivers a total package product/service combination. These companies manage the chain to keep control of their partners and deliver unique value to customers by bringing together different value-adding parties and controlling the chains as a 'value processor'. For OpDieFiets.nl, its knowledge of its customers and the specific student market is a big advantage in knowing what customers want and in designing the chain to deliver it. Taniq, AELS and Quintech all have thorough knowledge of their customers and suppliers, because they work together in their product development phase. This was also partially true for OpDieFiets.nl, but only for the start-up product development phase. For OpDieFiets.nl, Conception is its ability to integrate parties and to control the value chain as well. Minimization and control of cost in combination with maximizing customer value is the goal for OpDieFiets.nl.

Configuration
As can be seen from the different cases, Taniq, AELS and Quintech all perform some sort of coordinated integration mode, processing demand value into supply value. Waste is eliminated and only value-adding activities are integrated in the chain. OpDieFiets.nl could also initiate a coordinated integration mode processing demand value into supply value, but with a limited presence within the chain. If we look at the physical value system, the value system of Taniq most resembles the anticipated value system of OpDieFiets.nl. Bikesonline.nl shows similarity in the value system, as it also has similar value offerings, but some waste still resides in its value system. The biggest difference with the other start-ups is that OpDieFiets.nl serves the low-end market, whereas Taniq, AELS and Quintech focus on the high-end market, which gives a difference in value system; Bikesonline.nl is somewhere in the middle.

The value networks for Taniq, AELS and Quintech are product development networks. For OpDieFiets.nl and Bikesonline.nl this is the supplier network. OpDieFiets.nl holds direct relationships with its suppliers and clients as the focal company and coordinates the relations between the different entities. The other two networks are more sophisticated than the network that OpDieFiets.nl is currently in. If the company expands to other countries and develops a broader range of products, it will move to the distribution network. If OpDieFiets.nl fully designs and co-develops its own bicycles, it will adapt to the product development network. As a start-up company, OpDieFiets.nl could start in a supplier network and move toward the distribution network, whereas Taniq, AELS and Quintech already have the product development network. This can be seen from the fact that they cooperate with universities and other institutions on the development of their services and products. For OpDieFiets.nl, and especially for Bikesonline.nl, this is not the case in the start-up phase. Because the product of OpDieFiets.nl is simple and a low-end market is served, a product development network could be good for the future but is not feasible to develop in the start-up phase. From these cases it can thus be seen that this network is not realistic at present.

It can be stated that the use of the 3C value-flow model for OpDieFiets.nl is useful for analysing its value system and for value system design. Similarities with the four business cases Taniq, AELS, Quintech and Bikesonline.nl are present and help define the business model of OpDieFiets.nl.

5. Design of the value system for OpDieFiets.nl

Introduction
OpDieFiets.nl will introduce the no-frills concept, eliminating non-value-adding product features and organizational activities from a customer value perspective. The no-frills concept of OpDieFiets.nl means that one type of bicycle with only the basic features and functions is on offer. OpDieFiets.nl makes it cheaper to buy a bike for its sole purpose: a means of transportation. The product is entirely outsourced. Furthermore, the customer will be able to buy additional accessories on the site, such as locks and clip-on lights, leaving it to the customer to decide what is necessary for the bicycle. This keeps the price low and allows for flexibility. Another advantage is that additional profit can be made on the extra accessories. The complete order is communicated to the expediter and delivered to the customer. The bike will be sold through an attractive website, which has been developed and fully tested. Payments are made via the iDeal system.

Continuation
A questionnaire was held among 28 customers who went for a trial with a prototype bike. The results indicate that within this group of 28 initial customers there are positive indications about customer satisfaction. Overall it can be stated that the product is in line with expectations and customer demand for this particular market segment and the people interviewed. The people interviewed were selected on the basis of the majority in the low-end market segment that is targeted by OpDieFiets.nl and were a natural selection of respondents from that group. The product trial was performed in January 2008, during which thirty sample bikes were sold to actual customers for a reduced price. The respondents provided extensive feedback in which they were asked to give their opinion of the bike and other aspects. They were asked, on an ordinal scale, what they thought of the quality of the bike and the parts, the website, the ordering process, the assembly, the delivery, and whether there were any defects on the purchased bicycle. The most important questions, however, were what the respondents thought of the price in comparison to the quality, what the psychological ceiling of the price for the bike would be, and whether they would recommend the bike to somebody else. This ceiling was set at € 109 incl. taxes. As these results are indicative of the potential market for these bicycles, they give an optimistic and clear view of the potential to create value. Even after two months of testing, all of the people interviewed said they would recommend others to purchase the bike. Secondly, their feedback resulted in a change of various parts in order to further improve product satisfaction.

Conception
Smart trading and an efficient business organization allow for low costs and low pricing. The business concept of OpDieFiets.nl is unique for the bicycle business: no one in the bicycle industry has the same business process.


OpDieFiets.nl anticipates the market situation in which no online company is selling low-cost bicycles directly to the customer without owning a shop or having expensive personnel. The founders of OpDieFiets.nl will control only the administrative tasks and the general coordination. The core business of OpDieFiets.nl is thus the coordination between supplier, expediter and customers. The products go directly from the supplier to the expediter and on to the customer. Using online selling, the customer's details are communicated to the expediter, who handles all transportation and storage from the point the containers arrive at the port of Rotterdam to the customer's door. The supplier handles all production, assembly, shipping and customs up to the port of Rotterdam.

Internal organization is minimized by selling the bicycles through the internet and outsourcing all storage and transport responsibilities. Bicycles will be sold online, which eliminates having a store and personnel. The customers pay in advance and the bicycles are delivered at home in a 90% assembled state. An advantage is that once the brand has become well known, a large part of the direct customers can be reached and served easily. Online selling is also cheap and quick to set up, and its easy accessibility means the business can be started easily while providing for easy expansion. The value system is relatively compact and runs directly from the supplier to the end customer. In this fashion all wholesalers, middlemen and stores are cut out to save costs. There is no own store, handling personnel or storage; the business office can be small, with minimal accessories, to run and keep control of the value chain. OpDieFiets.nl acts as the coordinator and processor of value between supplier, expediter and customer. Purchasing of the bicycles and third-party delivery amounts to one part of the price the customer is paying. Alignment of Conception with Continuation is one to one, as direct communication can take place between customer and supplier through OpDieFiets.nl. The development of the low-cost bicycle was fully absorbed by the suppliers, which relieved OpDieFiets.nl from risk-taking investments.

Configuration
The configuration process was achieved by visiting three supply partners in Asia and Africa to make agreements on how and under which conditions cooperation was possible. Four interviews with logistical companies in the Netherlands were held to cooperate on and design the logistical process.

From the point the bicycles enter the harbour in Rotterdam to storage and delivery, the bicycles are handled by an expediter delivering to the customer's door at the lowest cost possible. All distribution costs, including transport and storage, are handled by the expediter, which is much cheaper than owning storage and logistics services. Customers have direct contact with the expediter for delivery appointments. All administrative tasks and contact with the suppliers and the expediter are controlled by OpDieFiets.nl. The logistical costs consist of clearance in the port, clearance to and from the storage facility, general storage, and shipping directly to the customers. These total logistical costs are covered by the payments of the customers for the shipping of their goods to their address: the total logistical cost of one container divided by the number of bicycles in one container is the additional shipping cost per customer. These costs, including a small margin, become the extra cost a customer needs to pay for shipping. So OpDieFiets.nl will not have any overhead cost regarding storage, clearance and shipping. The suppliers are chosen from 'developing' low-wage countries with special trade agreements between the Netherlands and their governments. This means that the import tax on bicycles is attractive (usually 0%) and that assistance in trade and shipping is well developed. Suppliers from countries that can compete with suppliers from these 'developing' countries, including the import tax, are also considered for the future. OpDieFiets.nl has a solid agreement with one trustworthy supplier with over 25 years of experience in the production of city bikes and the typical Dutch 'granny bicycle'. The contact has been open, professional and successful, and is based on co-development. A second supplier has been contacted as well and will be developed into a parallel second source. An expediter has also been selected, who will take care of all logistics in the Netherlands. All numbers and figures are based on existing quotations.

The value system has been designed with teachings from the four business cases. The 3C value-flow model and lean principles have also contributed to the formation of this 'lean value system', avoiding non-value-adding activities as much as possible. The result is the value system shown in figure 5.

Figure 5: Value system of OpDieFiets.nl (the figure contrasts the typical value chain for a cheap bike, running from suppliers via expediters, transport & storage and shops to customers at € 169 incl. 19% VAT, with the value chain of OpDieFiets.nl, running from suppliers via expediters and transport & storage directly to customers at € 99 incl. 19% VAT plus € 15 shipping cost incl. VAT).
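As a rough check on the figures (the price build-up is read from figure 5, and the per-step decomposition is our reading of it), the typical chain sums to

\[
86 + 10 + 38 + 10 + 25 = 169\ \text{€ incl. 19\% VAT},
\]

while the shipping charge described above follows

\[
c_{\mathrm{ship}} = \frac{C_{\mathrm{container}}}{n_{\mathrm{bikes}}} + m,
\]

with $C_{\mathrm{container}}$ the total logistical cost of one container, $n_{\mathrm{bikes}}$ the number of bicycles per container and $m$ a small margin. Cutting out the shop step is what brings the OpDieFiets.nl price down to € 99 plus € 15 shipping.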

6. Conclusion
This brings us to the answer to the main research question drafted at the beginning of this paper: "Is it achievable to start a company that will establish new value offerings in a segment of the low-end bicycle market?" After considering all important aspects for OpDieFiets.nl, it can be stated that it is indeed achievable, and innovative in terms of process, to establish this company to offer new value offerings within a new market that will be formed by these ideas. It is clear that there is a need for such a product and value offering: there is demand for a cheap, robust and good-quality bicycle in a market that is insufficiently served. This market is characterized as the low-end bicycle market. The organization is designed around the three value-driving concepts: continuation to capture the specific customer demand, configuration to involve suppliers delivering a large part of the customer demand, and conception to establish a unique value processor flowing value between demand and supply.

References
[1] Beelaerts, W.W.A., Santema, S.C., 2006, "Value Chain Innovation Processes and the Influence of Co-innovation", Tools and Methods of Competitive Engineering (TMCE) 2006, Ljubljana, Slovenia.
[2] Beelaerts van Blokland, W.W.A., Verhagen, W.J.C., Santema, S.C., 2008, "The Effects of Co-Innovation on the Value-time Curve: Quantitative Study on Product Level", Journal of Business Market Management, Vol. 1 No. 1, pp. 5-24.
[3] Beelaerts van Blokland, W.W.A., Fiksinski, M.A., Amoa, S.O.B., 2008, The Lean Value Network System: Co-investment and Co-innovation as Drivers for a Sustainable Position in the Marketplace, Delft University of Technology.
[4] Womack, J.P., Jones, D.T., 1996, Lean Thinking: Banish Waste and Create Value within your Corporation, Simon & Schuster, New York.
[5] Womack, J.P., Jones, D.T., 1990, The Machine that Changed the World, Simon & Schuster, New York.
[6] Greenwood, T., Bradford, M., Greene, B., 2002, "Becoming a Lean Enterprise", Strategic Finance, November 2002.
[7] Porter, M.E., 1985, Competitive Advantage: Creating and Sustaining Superior Performance, The Free Press.
[8] Porter, M.E., 1996, "What is Strategy?", Harvard Business Review, November-December, pp. 61-78.
[9] Prahalad, C.K., 1993, "The Role of Core Competencies in the Corporation", Research & Technology Management, Vol. 36 No. 6, pp. 40-47.
[10] Prahalad, C.K., Ramaswamy, V., 2004, The Future of Competition: Co-Creating Unique Value with Customers, Harvard Business School Press, Boston.
[11] Hamel, G., Prahalad, C.K., 1990, "The Core Competence of the Corporation", Harvard Business Review, Vol. 68 No. 3, pp. 79-91.
[12] Hamel, G., Prahalad, C.K., 1994, Competing for the Future, Harvard Business School Press, Boston.
[13] Von Hippel, E., 2005, Democratizing Innovation, MIT Press, Cambridge.
[14] Leifer, R., McDermott, C.M., O'Connor, G.C., Peters, L.S., Rice, M., Veryzer, R.W., 2000, Radical Innovation: How Mature Companies Can Outsmart Upstarts, Harvard Business School Press, Boston.
[15] Chesbrough, H., 2003, Open Innovation, Harvard Business School Press, Boston.
[16] Håkansson, H., Snehota, I., Ford, D., Gadde, L.E., 2006, The Business Marketing Course: Managing in Complex Networks, 2nd edition, Wiley & Sons.
[17] Lambert, D.M., Cooper, M.C., Pagh, J.D., 1998, "Supply Chain Management: Implementation Issues and Research Opportunities", The International Journal of Logistics Management, Vol. 9 No. 2, p. 10.
[18] Karlsson, C., Åhlström, P., 1996, "Assessing Changes towards Lean Production", International Journal of Operations and Production Management, Vol. 16 Issue 11, pp. 42-56.
[19] Barbot, C., 2004, "Price Competition amongst LCCs", CETE – Centro de Estudos de Economia Industrial, do Trabalho e da Empresa (Research Centre of Industrial, Labour and Managerial Economics).
[20] Jiang, H., 2007, "Competitive Strategy for Low Cost Airlines", Proceedings of the 13th Asia Pacific Management Conference, pp. 431-436, Australia.
[21] Mercer Management Consulting, 2002, Impact of Low Cost Airlines – Summary of Mercer Study.
[22] BOVAG, 2006, Kerncijfers 2006, Stichting BOVAG RAI Mobiliteit, Amsterdam.
[23] BOVAG, 2007, Kerncijfers 2007, Stichting BOVAG RAI Mobiliteit, Amsterdam.
[24] BOVAG, 2008, Kerncijfers 2008, Stichting BOVAG RAI Mobiliteit, Amsterdam.
[25] Hendriksen, I., 2008, Rapport Elektrische Fietsen: Marktonderzoek en verkenning toekomstmogelijkheden, HBD, BOVAG, TNO, bovag.nl.
[26] Buzaglo, 2008, interview with Ger Leeseman of Buzaglo, the largest Dutch importer of folding bicycles, on the folding-bicycle market.
[27] Meijer & Van Der Ham, BOVAG, HBD, 2006, Samenwerking en ondernemerschap bepalend voor toekomstmogelijkheden: Strategieonderzoek fietsdetailhandel en -herstelbranche, bovag.nl.
[28] Dumitrescu, B., 2008, "Price Revolution for Quality Bicycles – A Market Expansion for Low-end Bicycles", M.Sc. Thesis, Delft University of Technology.

Additional sources: www.lean.org, www.bikesonline.nl, www.OpDieFiets.nl/index.php, www.cia.gov/factbook.


Design and Manufacturing Uncertainties in Cost Estimating within the Bid Process: Results from an Industry Survey

S. Parekh, R. Roy and P. Baguley
School of Applied Sciences, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, United Kingdom
[email protected]

Abstract This paper discusses the issues of the bidding process with emphasis on the design and manufacturing uncertainties that can occur. The context of the paper is within manufacturing companies and in particular within the Defence sector. The paper presents the bidding process of a large Manufacturing company and details the main challenges and uncertainties that may occur. It also discusses the methods that are currently used to tackle uncertainty. The results of an industry survey compare the practices of other manufacturing companies and highlight the challenges at the bidding stage. The paper concludes that the development of an appropriate framework is necessary in order to effectively manage uncertainty at the bid stage. Keywords: Bidding Stage, Cost Estimation, Uncertainty

1 INTRODUCTION
The business of many organisations is based on performing contract work obtained by submitting and winning bids to client organisations in competition with other contractors. This paper describes the bidding process of a manufacturing organisation within the Defence sector, its common activities and the various challenges faced at this stage. It also presents the results of an industry survey within the manufacturing sector in order to underline common issues. Each organisation aims to prepare accurate cost estimates at the bid stage of a project whilst maximising profit and adding value for the customer. During the project lifecycle, uncertainty diminishes as time progresses, revealing more information and increasing confidence. During the bid stage, however, uncertainty is often not foreseeable, as the progression of a project may vary considerably from an early viewpoint. Uncertainty is therefore at its peak during the bid stage, particularly when the bid team are considering new projects or products involving innovative technological requirements. The scope of the project life cycle in this study is restricted to the concept, assessment, demonstration and manufacturing stages of the Ministry of Defence's (MOD) CADMID cycle; the in-service and disposal stages are out of the scope of this research. The focus is on cost estimation practices and challenges for the manufacturing of a tangible product.

2 RESEARCH METHODOLOGY
The industry survey was carried out using a semi-structured questionnaire. A face-to-face interview approach with a mixture of open and closed questions was adopted to enable a clear understanding of the problems faced in a qualitative manner, as described by [1]. The results presented here involve the responses of 14 experts from 5 different organisations across various manufacturing sectors, including aerospace (9), automotive (2), oil & gas (1) and consultancy (1). Figure 1 shows the nature of the projects that each of the participants is responsible for. The average duration of each interview was approximately 90 minutes.



Figure 1. The project context for the participants (engineering (other) 27%, manufacturing 15%, product development 13%, research & development 13%, maintenance & repair 11%, operations 11%, software development 6%, environmental 4%)

The results gathered were used to obtain an overview of uncertainty management and cost estimation at the bidding stage. Questions to set the context concerned the monetary size and industry sector of existing projects. The key areas of focus were cost estimation and uncertainty management, both within the context of the bid process. Cost estimation techniques and their utility and usefulness were explored. Some open questions were required to add knowledge to the study, such as where respondents identify problems when an estimate's accuracy is significant. Uncertainty questions were designed to name sources of uncertainty and methods used to identify them. Analysis of the results involved compiling each questionnaire and reviewing the answers based on the role category of the respondent. This provides an indication of which area of work they are typically referring to. For example, a more senior member of staff will usually portray issues that occur at a high level, such as "poor statement of work documented", whereas a design engineer may refer to "poor verification" as a problem area.

3 RELATED WORK
The business of many organisations is based on performing contract work obtained by submitting and winning bids to client organisations in competition with other contractors. Cost estimation techniques are therefore critical when utilised in the bidding process of a project/product. The focus of the bidding process in the UK defence sector is that fixed-price bids are invited for a specified piece of work, and the contractor submitting the lowest bid, all other things being equal, is awarded the contract. The client's decision is relatively straightforward, but the contractor's decision on what price to bid is more difficult. Bidding low in the face of competition increases the chance of winning the contract but reduces profitability. However, bidding at a level which ensures a good return increases the chance that a competitor will win the contract by submitting a lower bid. The problem is compounded by the difficulty encountered in estimating the probability of winning with a given bid, and by uncertainty about the costs involved in performing the contract [2].
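This trade-off can be summarised in a simple expected-value relation (the notation is ours, not taken from [2]):

\[
\mathbb{E}[\pi(b)] = P_{\mathrm{win}}(b)\,(b - c),
\]

where $b$ is the bid price, $c$ the uncertain cost of performing the contract and $P_{\mathrm{win}}(b)$ the probability of winning at that price, which typically decreases as $b$ rises. Estimating $P_{\mathrm{win}}$ and $c$ is exactly the difficulty the paragraph above describes.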

Figure 2. The bidding process (Behrens, 2003)

Figure 2 presents a graphical overview of the bidding process as presented by [3]. The first steps of the bidding process involve managerial/soft actions that include locating the customer, engaging interest and presenting the company's products. For the bid to be acceptable, the company's products must be properly customised to meet the customer's needs. The next step is to develop a proposal, and this is the step where cost estimation is largely involved. In order to develop a successful proposal that secures the deal and results in winning the bid, a cost estimate that is as accurate as possible is required. What makes the bidding process important is that, according to [2], considerable expertise is required in preparing bids, since the terms of the bid not only influence the chances of winning but also shape the working context for successful bids. Effective and efficient bidding processes, based on a sound understanding of all the important issues and the concerns of all parties involved, are critical success factors for contractor organisations.

3.1 Uncertainty Management
The focus of the current research is placed upon the identification and control of uncertainty in the context of the bidding process. Uncertainty is a relatively new concept in that it is not fully understood, even by those appointed to manage it. There is a lack of a coherent definition for uncertainty, and considerable ambiguity between uncertainty and risk within industry. One reason for this confusion may be that uncertainty is present within each risk. There are different types of uncertainty (e.g. epistemic and variability), whereas risk is a singular concept of an event leading to an unfavourable impact. Uncertainty is comparatively different to risk: it expresses a lack of definitive knowledge around a task or activity and for that reason, as opposed to risk/opportunity, uncertainty does not differentiate between positive and negative. However, the very idea of something being unclear can be intimidating, since the precise impact is unknown. The importance of uncertainty for organisational performance has only recently become clear [4]. When a project fails to meet the expected outcome, it is assessed to determine the areas that resulted in failure, and this is where elements that were not anticipated may be identified. This section starts by defining uncertainty and identifying its main sources. It then continues by providing modelling approaches and methods to manage uncertainty.

There are essentially two common approaches to classifying uncertainty. The first is to apply a classification based on the degree of uncertainty, such as that presented by [5] and shown in table 1, where uncertainties are allocated into groups according to how uncertain each element is perceived to be:
- Lack of Knowledge
- Lack of Definition
- Statistically Categorised Variables
- Known Unknowns
- Unknown Unknowns
Table 1. Uncertainty classification (Hastings, 2004)

Lack of knowledge entails elements that are not known or are only known imprecisely, which can be reduced by acquiring available knowledge relevant to the uncertainty under consideration, through research or experimentation. Lack of definition refers to the areas of a project that have not been clearly specified. Time is a critical factor in this category, as specifications need to be allocated appropriately; difficulties may arise if definitions are fixed too early or too late for specific elements. Solutions to such uncertainties are achievable through effective project management. Statistically categorised variables include events/conditions that are difficult to determine with absolute exactness but can be modelled using statistics (e.g. a probability distribution). The inflation rate is a suitable example for this category: the exact value is difficult to ascertain but is often modelled with a range of values. Known unknowns refer to events/conditions similar to the previous category but with increased vagueness in the probability of occurrence and the associated impact; a suitable example is the prediction of the inflation rate at a future date. Unknown unknowns are the most problematic uncertainties, as they are very difficult to determine, and even when efforts are made to begin identification they may appear impractical to include in an estimate. A true and disastrous event in this category is the terrorist attack on New York's Twin Towers.

The second approach entails grouping uncertainties with similar characteristics, forming an array of sub-categories.


These can be highly diversified, though examples of groups may include political, environmental, behavioural and so on. This is demonstrated by [6], who presents types of uncertainty from a variety of engineering perspectives as well as others, such as economics. His overarching types of uncertainty in complex systems are frequently referred to in the literature: epistemic, aleatory, ambiguity and interaction (less common). Ambiguity is described by this author as linguistic imprecision, while interaction uncertainty relates to unforeseen interactions between events; such interaction may result in multiple outcomes, which may be difficult to determine prior to occurrence. Epistemic uncertainty is also referred to as reducible, subjective, type B and state-of-knowledge uncertainty. It is due to a lack of knowledge regarding the context under study; hence, epistemic uncertainty can be reduced by increasing relevant knowledge. Aleatory uncertainty is also known as stochastic, variability, irreducible, type A and inherent uncertainty. Attempts to reduce it may fail, as aleatory uncertainty is inherent in nature and depicts variance in the available data, which cannot be reduced with more information [7]. The source of epistemic uncertainty is from outside the system, whereas aleatory uncertainty originates within the system as if part of it. Epistemic and aleatory uncertainty are common terms amongst many authors, including [8], [9] and [10]. [11] and [12], however, regard the use of this terminology as ineffective, since it does not place emphasis on how uncertainty should be managed. A small sketch contrasting the two notions is given at the end of this section. An uncertainty matrix, distinct from the two approaches above, was developed in [13]; it categorises uncertainty along three dimensions, namely the location, level and nature of uncertainty, each with further sub-categories for defining specific uncertainties.

The cost estimation process includes inputs from a potentially large number of stakeholders across multiple functions of an organisation. As a result, the level of subjectivity is high and the question arises whether suitable bias, such as relevant domain knowledge, has been introduced. Suitable bias enables more realistic and well-defined estimates rather than ambiguous suggestions from inexperienced members. The bid stage may regularly face projects with limited information, so that subjective input from an expert is required. The level of subjectivity increases as objectivity reduces. Objectivity is viewed as historical data from previous projects; subjectivity is that which is provided by a subject matter expert (for example a predicted value from personal experience) who may not have complete information. It is important to note that computer simulations may also tend towards subjectivity, as their assumptions are provided by expert individuals. Objectivity and subjectivity are notions that are difficult to isolate; this appears to be dependent upon those bidding for a project. A completed project will have elements of objectivity, such as the final result(s), while the subjective components can be identified in the early life of the project. It is possible to understand both these aspects at each stage of the project and form relationships to comprehend efficiency at the bid stage.
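The epistemic/aleatory distinction can be made concrete with a two-level sampling sketch; all values here are illustrative assumptions, not survey data. The epistemic part is our ignorance of the true parameter, which research could remove; the aleatory scatter remains even once the parameter is known:

```python
# Sketch: epistemic vs. aleatory uncertainty via two-level sampling.
# Example: an uncertain labour rate. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Epistemic layer: the true mean labour rate is unknown to us,
# so we model our belief about it (reducible by acquiring knowledge).
mean_belief = rng.uniform(40.0, 60.0, size=10_000)   # pounds/hour

# Aleatory layer: even for a known mean, realised rates scatter
# irreducibly around it (inherent variability in the data).
rates = rng.normal(loc=mean_belief, scale=5.0)       # pounds/hour

print(f"spread with both sources:    std = {rates.std():.1f} pounds/hour")
print(f"aleatory floor (mean known): std = 5.0 pounds/hour")
```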


4 QUESTIONNAIRE RESULTS
An overview of the results is described in this section, giving a summary of the responses provided. One of the first questions investigated the average project size in the participating companies. The responses regarding the average size of projects (in monetary terms) usually undertaken by the companies were recorded: the majority (49%) of the projects are above £10 million and 25% are between £1 million and £10 million. This sample of projects therefore requires an effective bidding process. Errors made within the pre-concept phase of a project may have a critical impact on cost, performance and schedule, and since most of the projects deal with such issues at large scale in terms of cost, it is important to ensure that projects are estimated and bid for in an efficient way.

Another question concerned the number of bids that each company has to prepare on an annual basis. With the exception of the three respondents who prepare a significantly large number of bids (between 40 and 50), the average number of bids per participant lies around 10. One respondent is also an exception, with zero bids, as he comes from an insurance position. Having to prepare and bid for ten projects per year can be a demanding process in which each successful case can play an important role as the department's performance indicator. It is therefore important to have a robust and effective bidding process in place that limits uncertainty and ensures successful cases.

The participants were also asked to propose improvements to the current bidding process that they follow. Many of them proposed a framework providing organised feedback of lessons learned, data and benchmarking. Accurate statistical analysis of the main risk factors and a clear understanding of the bidding purpose, along with a clear focus from the customer, were also proposed. The overall feeling was that there is a need for better data capture and metrics.

Two of the questions were related to uncertainty, and both were open-ended. The first involved the participants' perception and definition of uncertainty. The second sought to investigate the sources of uncertainty within the cost estimate of the proposed design at the bidding stage. The definitions given about uncertainty reveal the diversity of understanding around the subject. A common denominator in all the answers was a relation to unknown events, and uncertainty about an event's impact was mentioned frequently by the interviewees. The responses provide a wide range of explanations but not confident definitions. The respondents' definitions often referred to risk rather than uncertainty itself. Several answers were very generic and vague, whilst some were expressed in the context of cost estimation although that was not required. Five responses referred to the impact of an occurring event, which relates to project risks and not uncertainty.

The respondents were more specific on the sources of uncertainty. According to a significant number of participants, the sources of uncertainty are identified by the steps in a risk management process, for example using brainstorming and/or multi-disciplinary workshops to identify risks and evaluate probability and impact. Also, in terms of the bidding process, the bid team are asked to identify the level of uncertainty associated with the bid. Regarding the sources of uncertainty, it is interesting that although the respondents were specific, very few responses have common elements. Cost, technology and the future are three of the factors mentioned by most of the respondents.

Technology: technology maturity; new technology; technology obsolescence
Scope: poor specifications; lack of clear definition; ambiguous statement of work; poor customer understanding; customer/supplier relations
Resources: raw material costs; labour rates; skilled labour availability; asset ownership
Economic: future costs; change in exchange rates; volatility in inflation
Policy and Regulations: inappropriate terms and conditions; change in certification; changes in industry regulations; international nature of projects

Figure 3. Types of uncertainties from participants' responses

Figure 3 shows these responses grouped into types of uncertainty. These are typical uncertainties involved in long-term contracts/projects at the bidding stage. In relation to figure 3, the alternative uncertainty classification methods of [5] and [6], presented above, form part of a framework developed for decision making. One reason for confusing uncertainty with process complexity may be the current management of uncertainty: at present, uncertainty is assigned to elements of a process such as a Work Breakdown Structure (WBS). It may therefore be assumed that a complicated WBS implies a complex level of uncertainty, which is not necessarily true. From reviewing all the responses, there appears to be a tendency to incorporate a high level of subjectivity when dealing with uncertainty.

4.1 The bidding process of a Manufacturing Defence Company
This section describes the bidding process of a large manufacturing company from the Defence sector and the key uncertainties identified. Although uncertainty is a concept crossing several industry sectors, its scope may not always extend beyond the requirements of the Defence sector, as the solutions offered by other disciplines may not facilitate it. In order to gain a practitioner's perspective, a series of face-to-face interviews was carried out using a semi-structured questionnaire, each lasting approximately one hour. Topics included uncertainty management, existing bid processes and the issues surrounding these aspects. This approach was taken to best acquire knowledge from industry practitioners, as the semi-structured questionnaire allowed room for interviewees to expand upon selected topics, surfacing problems that were not originally considered.

Description of the bid process
The bid process captured is shown in figure 4. Initially the company carries out market analysis to identify potential customers. Once interest is expressed, the needs of the customer are assessed alongside current market conditions and company objectives. This leads to the starting point of the process shown in figure 4, where the company must decide whether or not to go ahead with the project. If a particular project is of no interest, the bidding process ceases, due to the incurred cost of the process itself. If the decision is to proceed with the bid, the process begins with three initial steps occurring simultaneously, with communication between all three. Alternative solutions to the bid will be presented relative to the design requirements. Concurrently, plans and schedules will be introduced into the project, consisting of historical data from previous projects; an approach by analogy is taken when utilising these to assign relevance to this particular project. A separate team will be preparing the bid proposal, dealing with the requirements and structure of the bid. Communication within these three phases is crucial, as each is essential in generating the proposal.

Figure 4. The Bid Process of the Manufacturing Defence Company (stages: Prepare Bid/No Bid; Initial Design; Initial Cost Estimate; Update Plans & Schedules; Prepare Proposal; Design Review; Design Solution to be Bid; Red Team Review; Technical Bid Review; Bid Approval/Tender Vet; Refine Offer (as required); Offer Submission; Reviewing Proposal; Negotiate Contract)



Figure 5. The Cost Estimation Process (manufacturing context) Once the proposal is complete, it undergoes the “Red Team Review”. This essentially is a group of experts that were not involved in producing the initial proposal. The review reduces inappropriate data, possibly due to optimism or pessimism bias. The review is conducted to give the proposal a “second look” to identify, correct and report potential issues and to check bid sensibility. The Technical Bid Review comprises of experts assessing the engineering costs and the associated uncertainties. This does not include the entire proposal itself but only the specific design solution selected. All the data gathered and processed will be presented to experts of selected functions in a formalised report. This will be reviewed at great detail to ensure all aspects have been covered and to question those that may be ambiguous. Uncertainties are challenged at this stage, whether it be new ones not considered or those included. This review is also known as Request for Bid Approval (RBA). If successful through all the reviews the offer is submitted to the customer. It is important to note that the customer and associated suppliers may be involved at various stages of reviews. This is to ensure that all aspects have been covered and iterations of the review process is minimised. The feedback received from the customer may entail some form of negotiation of the contract the company has submitted. If any changes have been suggested the proposal will be modified accordingly where reiteration(s) of the review process follow. If successful this will lead to project initiation and the contract will be realised. The bid team faces many problems at this point in a potential project’s lifecycle as the level of uncertainty is highest. This is usually due to incomplete information, lack of experience and further pressure introduced by allocating a time limit to submit the proposal. Major issues faced at the bid stage include both internal (organisation processes) and external functions (supply chain operation). Internal issues include lack of clarity surrounding uncertainty and its communication to multiple levels of the organisation’s hierarchy. A cost estimate will be calculated based on subjective input from multiple functions. With this large array of information, deciphering a final cost estimate proves to be a difficult task. This may lead to poor decisions by senior members, who may not have the correct level of information at that time. There also seems to be a high level of dependency on experienced staff members due to the insufficient amount of time available for junior or less experienced staff to complete bid activities. A key internal issue is that concerning historical data and its validity. There is a lack of confidence in using historical data. This is due to poor recording of data during a project’s life, which is therefore uncertain. This is


This is currently overcome by applying three-point estimates to historical data, but this gives rise to additional uncertainty that proves difficult to justify and rationalise when compiling the final cost estimate. Proposed solutions and their requirements may not be coherent, practical or affordable. These issues are addressed by working more closely together, but such interactions may be lengthy and require considerable effort to understand what the customer actually wants. In terms of suppliers, there may be a number of areas where double counting may occur, thus affecting the cost. These may be reduced by working more closely with the suppliers; however, with limited time, interfacing at length with both customer and suppliers poses a threat to a project and/or the organisational strategy.

The Cost Estimation Process – an overview
A summary of the cost estimation process is shown in figure 5; it corresponds to the 'prepare proposal' box in the bid process (figure 4). At this point the project has been approved for bidding and a multi-function review takes place to evaluate the contents. The terms and conditions are one of the outputs at an early stage, ensuring that legal and regulatory issues have been managed. The white boxes with a blue outline are estimates (or parts of an estimate) produced by the relevant functions. These are compiled together and adjusted according to the project requirements through a number of iterations, alongside several reviewing stages, as shown in figure 4. Note that figure 5 is a generic process representing milestones completed at various stages.

4.2 Challenges in Uncertainty Management for Cost Estimates at the Bid Stage
The cost estimating process was found to lack general planning [14]. With uncertainty at its peak at the bid stage, appropriate planning and methods must be utilised to predict its impact over the lifecycle of a project. Whatever is uncertain today will become more certain as time progresses (see figure 6), but the problem lies in predicting the actual impact of each uncertainty. Duplication of any risk or uncertainty is reduced by initially developing the base estimate without uncertainty and risk; these are then reviewed around the baseline estimate at a later stage, which involves multiple iterations. The process deals with both risks and opportunities, though the latter involve realisation costs to achieve a positive result. Uncertainty is regarded as the inherent variability around a 'most likely' point. It is regarded as uncertainty due to its

ambiguity and the lack of precise knowledge about each scenario. There are no categories allocated for different types of uncertainty, unlike some examples found in the literature ([5], [6] and [13]). Categorisation processes, specifically for the types of uncertainty, may be too extensive to integrate within a business environment which may already comprise complex processes.

Figure 6. Uncertainty in a Cost vs. Time perspective

Allocating uncertainty to each element of a work breakdown structure (WBS) or a cost breakdown structure (CBS) is one method practiced in the Defence sector. Workshops are often used to assess relevant risks and opportunities, usually composed of stakeholders from multiple functions/departments in order to cover all aspects of a project. A checklist of questions, drawn from company documentation, is also a common approach to cover general aspects of uncertainty. Another approach for surfacing uncertainties is to assess the similarities and differences of two or more independent cost estimates for the same project. Issues such as optimism bias, which potentially leads to underestimates, can be tackled at this stage by introducing more than one perspective and method. Monte Carlo simulation is the common method of modelling uncertainty, as it has proven to be an effective tool within industry. The inputs used are three-point estimates, which are produced with increased precision: for example, the minimum and maximum values are not simply the tails of a probability distribution but values that have been assessed and given a rationale. In some cases they will be related to the budget, where the minimum value will be the minimum budget. Once Monte Carlo simulation has been run, another tool, the tornado chart, is used to observe the sensitivity of each cost element and how it affects the final cost estimate. Prior to simulation, experts will produce rough order of magnitude (R.O.M.) estimates, which can later be compared with the simulation results in the bid process. Common practice may also involve the development of 'as is' and 'to be' models to understand the requirements against the capabilities.
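To make this concrete, the following sketch shows the kind of calculation described above: hypothetical three-point estimates for a few WBS elements are sampled with triangular distributions, summed by Monte Carlo simulation, and ranked by their influence on the total in the spirit of a tornado chart. The element names and figures are invented for illustration and are not taken from the company studied.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical three-point estimates (min, most likely, max) per WBS element.
# In the practice described above, min and max are assessed values with a
# rationale (e.g. the minimum budget), not blind distribution tails.
wbs = {
    "design":      (100.0, 120.0, 180.0),
    "manufacture": (400.0, 450.0, 600.0),
    "test":        ( 60.0,  80.0, 150.0),
}

N = 10_000
samples = {name: rng.triangular(lo, ml, hi, N) for name, (lo, ml, hi) in wbs.items()}
total = sum(samples.values())

print("P10/P50/P90 total cost:", np.percentile(total, [10, 50, 90]))

# Tornado-style sensitivity: rank each element by the strength of its
# correlation with the total estimate (a simple proxy for the bar lengths
# on a tornado chart).
for name, r in sorted(((n, abs(np.corrcoef(s, total)[0, 1]))
                       for n, s in samples.items()),
                      key=lambda kv: kv[1], reverse=True):
    print(f"{name:12s} |r| = {r:.2f}")
```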

4.3 Uncertainty at the Design Stage
There are two areas of uncertainty at the design stage: the uncertainty around the product being developed, and the process of design itself. This section concerns the latter. The effectiveness of the design process is crucial to the development of the product and so must be thoroughly managed. Any errors made at this stage may cause significant impact in terms of cost, schedule, performance and quality, as this stage dictates the nature and functionality of the product, and it affects all the stages of the life cycle. For example, when a product reaches the end of its life it may be revealed that the cost of disposal is much greater than expected. Thus qualified and experienced personnel must be allocated appropriately. Typical design-stage uncertainties are shown in figure 7.

4.4 Uncertainties at the manufacturing stage
This stage is one of operations, aiming to manufacture the product that has been designed. If there are issues with the product from the outset, manufacturing staff may not be able to address them because they may not be aware of them. The uncertainties exposed at this stage largely involve resources (labour, materials and tooling), equipment (machines) and overheads. All procurement activities need to be efficient at this stage in order to maintain an efficient production line. Communication with multiple functions is important, as staff need to be aware of schedules (particularly the critical path) in order to prevent overruns. Typical uncertainties at the manufacturing stage can be seen in figure 7.

Figure 7. Design & Manufacturing Uncertainties

5 DISCUSSION AND CONCLUSIONS
Several issues have been raised regarding uncertainty. It is proposed that a formal definition and methodology are required to improve the integrity of future projects. A clear distinction between uncertainty and process complexity must be made to reduce the level of project complexity and the lack of understanding, particularly when entering a new business domain. However, there is very little knowledge on how extreme cases (low probability of


occurrence) of uncertainty are handled at the bidding stage. Examples of such uncertainty are unknown unknowns: uncertainties that have not been considered but may still have a damaging impact on the project or organisation. There is also insufficient understanding of the distribution of a cost estimate, and some regard it as not wide enough to include uncertainties that may have been overlooked. The communication of uncertainty is not as clear as it should be; as a result, uncertainty is not understood in the same manner by all stakeholders, from engineers to senior management, since varied backgrounds lead to variations in terminology [15]. One solution would be to map customer requirements effectively to design solutions; this may be well received in qualitative form but may require attention when transferred to a quantitative form. Another solution would be to understand uncertainty in the new business environment of service solutions by retrieving accurate historical data. Current approaches create ambiguity about the accuracy of data, reducing confidence; this may have to start with techniques that record the data effectively so as to enable accurate retrieval. A high dependency on experienced members of staff was also noticed. Inexperienced staff members require more time to complete tasks than the bid allows, partly because the multiple software programs used during the bid are not integrated. The communication of the bid document to senior members of staff is also an issue, as they may not fully understand how the confidence levels were arrived at, or which uncertainties have been taken into consideration and where. As a result, more emphasis should be placed on each variable's behaviour towards the final cost estimate.

Several of these issues can be addressed by a methodology that may be developed into a software model. Examining the design and manufacturing stages shows the need to clarify uncertainty and communicate it effectively. The following are possible areas where the research will contribute towards a more reasonable cost estimate:
o A formalised methodology used during bidding to help identify and manage the uncertainties involved up to and including the manufacturing stage.
o An effective mapping system that allows visibility of the capabilities required by the customer, as functional requirements, and their respective design solutions.
o A method to identify uncertainties in a simpler fashion that will allow those with little experience to complete the tasks in the time allocated without too much involvement from experienced staff members.
o An effective means of communicating uncertainties to all the stakeholders involved in the bid process, across all hierarchies. This will give more meaning to the final cost estimate and justify its distribution, not just the 'most likely', 'worst case' and 'best case' scenarios.

This paper presented the bidding process of a large manufacturing company in the Defence sector and discussed the key challenges and uncertainties that may occur during the process. It also presented the results of an industry survey regarding these issues. The paper concludes that the development of an appropriate framework is necessary in order to effectively manage uncertainty at the bid stage. The focus areas of the


framework have been identified and involve modelling and managing uncertainty in an effective way.

6 REFERENCES

[1] Robson, C. (2002) Real World Research: A Resource for Social Scientists and Practitioner-researchers, Blackwell Publishers.
[2] Chapman, C.B., Ward, S.C. and Bennell, J.A. (2000) 'Incorporating uncertainty in competitive bidding', International Journal of Project Management, 18, pp. 337-347.
[3] Behrens, A. (2003) 'Improving bid success in increasingly competitive environments', White paper, Cambashi Limited.
[4] Klir, G.J. and Smith, R.M. (2001) 'On measuring uncertainty and uncertainty-based information: Recent developments', Annals of Mathematics and Artificial Intelligence, 32 (1-4), pp. 5-33.
[5] Hastings, D. and McManus, H. (2004) 'A Framework for Understanding Uncertainty and its Mitigation and Exploitation in Complex Systems', 2004 Engineering Systems Symposium.
[6] Thunnissen, D.P. (2005) 'Propagating and Mitigating Uncertainty in the Design of Complex Multidisciplinary Systems', Thesis, California Institute of Technology, Pasadena, California.
[7] French, N. and Gabrielli, L. (2006) 'Uncertainty and Feasibility Studies: An Italian Case Study', Journal of Property Investment & Finance, Vol. 24, No. 1, pp. 49-67.
[8] Oberkampf, W.L., Helton, J.C. and Johnson, J.D. (2005) 'Competing failure risk analysis using evidence theory', Risk Analysis, 25 (4), pp. 973-995.
[9] Bedford, T. and Cooke, R. (2001) Probability Risk Analysis: Foundations and Methods, Cambridge University Press.
[10] Rao, K.D., Kushwaha, H.S., Verma, A.K. and Srividya, A. (2006) 'Quantification of epistemic and aleatory uncertainties in level-1 probabilistic safety assessment studies', Reliability Engineering and System Safety, 92 (7), pp. 947-956.
[11] Winkler, R.L. (1996) 'Uncertainty in Probabilistic Risk Assessment', Reliability Engineering and System Safety, 54, pp. 127-132.
[12] Aughenbaugh, J.M. and Paredis, C.J.J. (2006) 'The value of using imprecise probabilities in engineering design', Journal of Mechanical Design, Transactions of the ASME, 128 (4), pp. 969-979.
[13] Walker, W.E., Harremoes, P., Rotmans, J., Van Der Sluijs, J.P., Van Asselt, M.B.A., Jenssen, P. and Krayer Von Krauss, M.P. (2003) 'Defining Uncertainty: A Conceptual Basis for Uncertainty Management in Model-Based Decision Support', Integrated Assessment, Vol. 4, No. 1, pp. 5-17.
[14] Rush, C. and Roy, R. (2000) 'Analysis of Cost Estimating Processes Used Within a Concurrent Engineering Environment Throughout a Product Life Cycle', CE2000 Conference, Lyon, France, pp. 58-67.
[15] Roy, R., Kerr, C., Sackett, P. and Corbett, J. (2005) 'Design Requirements Management Using an Ontological Framework', CIRP Annals - Manufacturing Technology, 54: 109-1

Design Interference Detector - A Tool for Predicting Intrinsic Design Failures
V. D'Amelio, T. Tomiyama
Department of BioMechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, the Netherlands
[email protected]

Abstract
Intricate failures at the system integration phase of design are mostly generated by interactions of product modules and/or engineering domains. Modules of the product are tested independently, and design failures are detected only when their integration is performed. This significantly delays machine development. This paper introduces a software tool able to predict the phenomena which generate destructive couplings of engineering domains and product modules. An example shows how these couplings influence system behaviors. The analysis is conducted at the conceptual design level and makes use of qualitative data to reason out behaviors of the product.
Keywords: Mechatronics, Unpredicted problems, Qualitative Physics, Verification, Integration

1 INTRODUCTION
The market is nowadays flooded with very complex machines. A product is complex when it has to fulfill a large number of functionalities. Current mobile phones, for instance, are no longer just devices to transmit and receive sound but may support many additional services and accessories, such as SMS for text messaging, email, packet switching for access to the Internet, and MMS for sending and receiving photos and video. Printers are generally scanners and copiers together; they can deal with many paper formats and materials while maintaining a high quality. Costs, environmental issues, energy consumption and high performance play other important roles in the development of products. Numerous technologies, skills and competencies are then required to make a product competitive. Mechatronics solutions are adopted by designers because of their efficient way of combining machine functionalities. Suh defines as complex a product in which functional requirements are coupled [1]. In this sense mechatronics solutions increase product complexity and therefore lead to a complicated product development process (PDP). Even though Pahl and Beitz [2] provide a systematic way to get from the conceptual to the detailed design and so avoid as many design failures as possible, years are still required to deliver products. To obtain a better overview of the product, standard modularization methods are used, such as the Design Structure Matrix [3]. On the one hand these methods allow a better subdivision of tasks among engineers; on the other hand they can generate a huge number of integration problems when the system is considered as a whole. Modules of the machine are designed and tested independently of each other, and only at the end of the PDP is a total test performed. Unpredicted problems are the result of the total test and they are difficult to troubleshoot and to solve. Unpredictable problems are unexpected behaviors or physical phenomena that occur within a domain or through interactions of domains. They are generated by destructive couplings of engineering domains, which are undesired and unpredictable interactions.



A fault-tree analysis (FTA) [4] is not sufficient to understand the causes of problems. Indeed, the machine architecture is also influenced by the unpredictability given by unknown connections between components. Furthermore, FTA and FMEA (Failure Mode and Effect Analysis) deal with problems coming from deterioration of the system and with faults which are consequences of anomalous circumstances, while unpredictable problems are generated intrinsically by the design of the system. This article introduces a software tool, the Design Interference Detector (DID), which is able to predict unpredicted problems of mechatronics solutions at the conceptual design level. In order to understand the capabilities of the DID, the article summarizes the main phases of the PDP with reference to well-known design methods and specifies where the DID is located in the PDP. The article then focuses on the architecture of the DID, which is implemented as an extension of KIEF (Knowledge Intensive Engineering Framework) [5]. An example shows how a total test of the machine can be roughly but efficiently performed by using a qualitative analysis. Section 4 then shows which behaviors are predicted when the DID is used for integrating sub-modules. Results are discussed in the conclusions together with the limits of and future work on this research.

2 PLACING THE DID INTO THE PRODUCT DEVELOPMENT PROCESS
The V-model represents the system development lifecycle and was developed by Stevens in 1998 [6]. Figure 1 shows the main design phases to generate a product based on the V-diagram [7]. Although the V-model is mainly used for software creation, it is also adopted in product design. All the design and verification phases are mapped in figure 1. The left side of the model describes the decomposition of requirements and the creation of system specifications. Suh in his Axiomatic Design [1] provides a methodology to evaluate whether functional requirements and specifications are adequate. He states that functional requirements must be decoupled or uncoupled. This method cannot deal with cases in which the complexity increases unexpectedly during the course of design due to undesired and unpredictable interactions among subsystems. On the right branch of the model the integration and the verification of the system are performed. In integration testing the separate modules are tested together to expose faults in the interfaces and in the interaction between integrated components. Many different verification and validation techniques are used to determine whether the system fulfills its specifications and whether its output is correct for each prototype test. Among these techniques it is important to mention Functional testing, which consists of proving all the functions of the system defined in the requirements; Structural testing, which uses architectural information of a system to verify the operation of individual components; Random testing, which can detect peculiar faults; Fault injection, in which the system is observed while working under fault conditions; and Risk analysis, which identifies the consequences of obstacles and their possibility of occurring [8].

Going down the left branch of the diagram, each design level gives more functional and architectural detail than the previous one. On the right side of the V-diagram (Figure 1) the system is tested at various levels, from components to subsystems and from subsystems to system level. Strong delays in product development are caused by failures found in this leg of the V-model. The design process of a complex product requires months, while the verification process requires years. A problem at the component level of the verification branch is easy to solve and troubleshoot because it concerns a single system unit. Problems at the system level, where the total test is performed, are non-trivial to troubleshoot and solve because they involve the entire system. Therefore, problems at the system level can lead to changes at the conceptual design of the product, which is at the top level of the left branch. The early design level and the top verification level depend on each other, but they are performed at opposite extremes of the PDP; years lie between the two development phases. The distance between a design level and its corresponding verification level increases going up the right branch of the diagram. It is risky to make mistakes at the conceptual design level because these mistakes will only appear at the end of the PDP. Failures at a high level of the prototype test correspond to flaws at an early level of design. Going back to an early design phase at the end of the PDP is obviously time consuming and cost inefficient. The ideal situation would be to have verification of the conceptual design as early as possible. The DID aims to create a shortcut between the early design level and high-level verification, as shown in figure 2. The objective is to recognize design failures caused by the integration of subsystems well in advance. This will save money in prototyping and increase design quality by using alternative ideas instead of repairing design mistakes. Compensating for design mistakes results in adding pieces of software and hardware to the system to bring the system output to nominal values, which leads to changes at the conceptual design. By means of the DID the iterative trial-and-error process will be minimized, and the best design among those proposed can be determined by analyzing several design solutions at an early stage.

Even though this article emphasizes the application of the DID methodology at the system level, the software and the methodology can also be used for subsystem and lower-level subsystem tests. At the conceptual design phase the system can be modeled and behaviors can be reasoned out using only rough information by means of Qualitative Physics (QP) [9]. This is what the next section explains by introducing the DID architecture and methodology.

Figure 1: PDP without DID.

Figure 2: PDP with DID.

3 DID ARCHITECTURE AND METHODOLOGY
In order to reason about system behaviors and the unpredictable problems which can arise at a late phase of the design, the DID employs the Function Behavior State (FBS) model as its representational scheme [10] and a QP-based reasoning system [9] as its reasoning engine. The FBS model incorporates functional, behavioral and architectural information of the product. The architecture of the system consists of entities, which are physical components of the product, and of physical relations among entities, which denote the static structure of the product. The behavioral information consists of physical phenomena, which are physical laws or rules that govern behaviors, and of states of entities, which are values of parameters associated with the entities, such as 'rotational speed' or 'pressure'. Qualitative Physics reasons qualitatively about the behavior of physical systems. This branch of artificial intelligence matches the FBS model perfectly in the way in which knowledge is structured. Qualitative Process Theory (QPT) is part of QP and was developed by Ken Forbus at the Massachusetts Institute of Technology [11]. The goal of QPT is to understand commonsense reasoning about physical processes. Processes in QPT are the only source of change in physical situations. Examples of physical processes include 'boiling', 'motion', 'acceleration' and 'rotational transmission'. The idea of using QPT for the prediction of design failures arose from two considerations: at the conceptual design level no precise details have been specified; and in troubleshooting, monitoring and diagnosis there are no precise mathematical models of failure modes, and humans operate with less detailed models [11]. Designers seldom use numbers but rather approximations and rough values for the system state.
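As a loose illustration of this reasoning style, the sketch below encodes one entity relation and one qualitative phenomenon: 'heat flow' fires only when its structural condition ('Near') holds, and then imposes a qualitative influence on an attribute. The entities, relation and rule are invented for illustration and are far simpler than the DID's actual knowledge base.

```python
# Structural model: which pairs of entities stand in which relation.
relations = {("printhead", "carriage"): "Near"}

# Qualitative attribute values: one of "-", "0", "+".
attributes = {("printhead", "temperature"): "+",
              ("carriage", "temperature"): "0"}

def heat_flow(src, dst, rels, attrs):
    """'HeatFlow' fires only if src and dst are 'Near' and src is hot."""
    if rels.get((src, dst)) == "Near" and attrs[(src, "temperature")] == "+":
        attrs[(dst, "temperature")] = "+"   # qualitative influence on dst
        return True
    return False

if heat_flow("printhead", "carriage", relations, attributes):
    print("Unpredicted phenomenon: HeatFlow printhead -> carriage")
    print("carriage temperature:", attributes[("carriage", "temperature")])
```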


The steps taken toward the detection of unpredictable design problems are schematically represented in figure 3. The first step consists in the construction of the primary FBS model. The model can be a mono-disciplinary model (Model 1D) or a multidisciplinary model (Model nD), which represents mechatronics products. The qualitative engine reasons out the behaviors which can possibly happen to the design object, including predicted and unpredicted ones. The behavior is represented in terms of state transitions. When the product is multidisciplinary it is necessary to integrate the engineering modules before reasoning behaviors. This is performed automatically by the DID, which adds interfaces between modules. These interfaces are represented by extra physical phenomena and extra connections among components. The number of physical phenomena reasoned out by the qualitative engine can easily explode for a complex system such as a mechatronics product, so a filtering method is used to constrain the number of possible behaviors reasoned out by the software [12]. All the operational blocks which are necessary to generate system behaviors are summarized in figure 4 [13]. Physical features consist of entities, relations and physical phenomena [14]; they can be thought of as high-level building blocks of the product. The designer selects physical features from the database and combines them into an FBS model. Attributes of entities are generated by a direct influence of physical phenomena and are related to each other by the indirect influence of physical laws. Attributes are assigned to entities by physical phenomena. The physical rules create the network among attributes. The QPT engine combines all the information about the product; state transitions and causal connections among attributes are then automatically generated. KIEF works as a knowledge base and reasoning system for the DID. It reasons out the possible physical phenomena occurring in the designed object by using the physical feature reasoning system (PFRS) facility. The PFRS is based on a pattern matching technique. The integration of physical features (product building blocks) is performed by comparing the FBS model, which is created by the designer, with the physical features stored in the database. This step is necessary to predict unpredictable phenomena in the product design. The DID connects modules of the product by introducing hidden relations among components, and it integrates building blocks at the behavioral level (physical phenomena) as well as at the architectural level (physical relations). This will be clarified further in the next section.

Figure 3: DID architecture.

Figure 4: DID methodology.

3.1 The DID extension for KIEF
The DID makes use of the PFRS and the qualitative reasoner of KIEF in order to reason out unpredicted physical phenomena which lead to unpredicted behaviors of the product. The PFRS generates unexpected physical phenomena. The physical feature reasoning system has been modified in the DID to also generate unpredicted physical relations of components. Unpredicted relations can cause further unpredicted physical phenomena. These connections are often implicit and the designer can overlook their importance. An example can be found in the concept of 'distance': for instance, the phenomenon 'heat transfer' connected to the source 'heater' will affect the destination entity when their relation is 'Near'. Relations represent a condition for the phenomenon to be generated. For activating the condition, the DID asks the designer to verify whether the connection exists by generating unambiguous queries. Especially when the system is very modular, it is likely for subsystems to be implicitly coupled. The DID draws the designer's attention to these unknown relations, which are important because they can activate further physical phenomena in the system. The next section shows unpredicted phenomena and unpredicted relations of product components by means of the example of the engine top of an inkjet printer.

4 EXAMPLE OF THE TOP ENGINE OF AN INKJET PRINTER
This section presents the FBS model and the behavior of the top engine of an inkjet printer, which is used to accurately position the print-head, and consequently the ink, over the paper. Results are shown for the model built by the designer (primary model) as well as for the model including unpredicted phenomena (reasoned model).

4.1 FBS representation of the primary model and of the reasoned model
The engine top of an inkjet printer is the module that allows the ink to be accurately placed on the paper. The main function of this artifact is to print dots of ink. This function consists of shooting a stream of ink forcefully forth from a nozzle while the print head moves along the guidance. The print head is supported by a carriage. A motor (generally a stepper motor) moves the carriage back and forth across the paper. A belt is used to attach the carriage to the motor. The first abstraction model of this product is taken as an example to clarify all the concepts mentioned in the previous section. In figure 5 the top engine of the inkjet printer is translated into an FBS model. The oval shapes represent functions of the product. Functions are decomposed into sub-functions until physical features can be linked to them. In the example the three physical features 'carriage print head system', 'carriage drive' and 'rotation by motor' are associated, correspondingly, with the functions 'to print drop of ink', 'to transmit motion to the carriage' and 'to rotate pulley'. The components of the model are a print-head, a carriage, which supports the print-head, and a carriage drive. The carriage drive contains a motor, a battery and a pulley mechanism. Connections of the design object are represented, for instance, by the 'coaxial connection' between motor and pulley, and by the 'electrical connection' between motor and battery, which are described in the physical feature 'carriage drive'. Figure 5 shows the physical features which are associated with the functions. The physical features are in relation to each other; indeed, the same component can appear in more than one physical feature. It is the task of the system engineer to delegate components which are the same to one unique frame. In figure 6, components which are the same have been delegated. The primary model is represented by the white blocks. The unpredicted physical phenomena and connections among entities are highlighted in the same figure with gray boxes. White and gray blocks together constitute the total behavioral model of the system, which is the reasoned model. Eleven unpredicted physical phenomena and two extra connections, such as 'Heat Generation', 'Deformation', 'Rotation', 'Heat Flow', 'Belt Transmission' and 'Near', are the reasoned results from the primary model.

The reasoned connection 'Near' generates the unpredicted phenomenon 'Melting' in the FBS model and changes the model topology and behavior. This connection is automatically generated by the KIEF extension and is an additional result in KIEF physical feature reasoning.

4.1.1 A filter for physical phenomena
The reasoned model suggests physical phenomena which can happen; no information is given on the probability of the phenomena happening. Furthermore, a complex product can lead to a large number of unpredicted phenomena. For these two reasons, a method is needed to filter reasoned phenomena based on their probability of occurring. Limit analysis and physical phenomena causal network analysis are two of the studies used to filter phenomena out of the broad range of possible physical phenomena. Limit analysis establishes the condition for a phenomenon to be generated; it is a basic operation in prediction for figuring out what kinds of things might happen next [15]. For instance, the phenomenon 'Deformation' will be activated only when 'battery temperature' turns to the value 'hot'. Qualitative values of attributes thus become crucial to identify when a phenomenon is activated. If the system does not turn to the value associated with the phenomenon, the phenomenon is erased from the list of possible phenomena. Filtering can also be performed by looking at the causal network of phenomena. Physical phenomena can be connected to each other by a causal link: a phenomenon 'B' can be caused by another phenomenon 'A' and generate a further phenomenon 'C'. The chain of phenomena is in this case A → B → C. The phenomenon C is the one with the lowest probability of appearing because it first requires the appearance of both A and B; among the three phenomena, A is the most probable to happen. The next section shows a comparison between the initial model and the reasoned model in terms of reasoned behaviors.
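A rough sketch of this causal-depth ranking, under the assumption that the causal network is available as a simple parent table (the phenomena named here are illustrative):

```python
# Parent table: each phenomenon lists the phenomena that cause it.
causes = {"Deformation":  ["HeatGeneration"],   # B caused by A
          "Misalignment": ["Deformation"]}      # C caused by B

def causal_depth(phenomenon, causes):
    """Number of phenomena that must occur before this one can appear."""
    parents = causes.get(phenomenon, [])
    if not parents:
        return 0
    return 1 + max(causal_depth(p, causes) for p in parents)

# Shallower phenomena come first: they are the most probable to appear.
ranked = sorted(["HeatGeneration", "Deformation", "Misalignment"],
                key=lambda p: causal_depth(p, causes))
print(ranked)  # ['HeatGeneration', 'Deformation', 'Misalignment']
```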

Figure 5: FBS representation of the primary model.



Figure 6: FBS representation of the reasoned model.

4.2 Behaviors of the primary model and the total behavioral model
When the FBS model is complete, which means that all the unexpected physical phenomena have been reasoned out, the model can be transferred to the qualitative physics reasoner. The task of this reasoning engine is to combine all the information represented in the FBS model. Sequences of state transitions, which represent the behavior of the system, are reasoned out. Tables 1 and 2 show the behaviors of the initial model and of the reasoned model in terms of state transitions.

4.2.1 Results for the primary model
Just four states are reasoned out for the initial model by the qualitative physics reasoner. Behavior is represented by the sequence of state transitions over time. The values of the attributes for each state are shown in table 1. The attributes related to the primary model are shown in the first column of table 1. These attributes are generated by the physical phenomena introduced by the designer (figure 6, white oval blocks). All attribute values turn to the qualitative value 'plus' in the fourth system state. This is the result expected by the designer and it can be interpreted as the nominal behavior of the inkjet printer. The first three states represent the transition to the generation of the final state. When the pulley torque is positive and all entities have positive velocity and acceleration, the print-head is moving without any problem along the guide. Nothing is specified about the print quality, but this indicates that no strange phenomena are disturbing the nominal value.


4.2.2 Results for the total behavioral model
Table 2 takes into account just one of the reasoned behaviors coming from the reasoned model. Figure 7 shows the total number of behaviors reasoned out for this example: 64 system states are generated. This represents a big transformation in comparison to the four states generated for the initial model. Each state transition represents a different system behavior. Therefore, it is evident from figure 7 that the real number of behaviors generated by the DID for the reasoned model is much larger than for the initial model. Analyzing each behavior requires a long time from the designer, and the computational time for reasoning behaviors also increases exponentially for the reasoned model in comparison to the initial model. It is important to pay attention to the value of the attribute 'Printhead Accuracy' shown in table 2, which refers to the precision with which the ink is positioned on the paper. At the fourth state the value of 'Printhead Accuracy' turns to minus while all the other attributes maintain plus or zero values. This means that print accuracy decreases while the other attributes are reaching their positive, nominal values. A low value of accuracy can represent an undesired and unexpected behavior of the system. This is an important result which was not reasoned out from the initial model. The next step in the analysis is to detect the origin of the problem by looking into the parameter network of the reasoned model, which is automatically generated by KIEF. The parameter network for the reasoned model of the inkjet printer is shown in figure 8. The result of the analysis is that print-head accuracy depends on the print-head displacement, which is connected to the displacement of the carriage, which depends on print-head temperature. The parameter network suggests to the designer that decreasing the print-head temperature reduces the carriage displacement and finally yields better print accuracy. In the example, the print head, which incorporates a heater system to fuse ink, is the main source of 'heat generation'. To avoid losing accuracy in the print, the designer has to act on the temperature of the print-head. For instance, he or she can add a pre-heater before the ink enters the print-head or use another kind of ink which has a lower melting point. In this way the effect of the print-head temperature on the carriage temperature will decrease, and the deformation leading to misalignments will not be generated. This section has shown how, already with the trivial model of an inkjet printer, the designer can create new inventive solutions at an early stage.
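The root-cause trace described above amounts to walking the parameter network upstream. The sketch below does this over a hand-coded fragment of the dependencies read from figure 8; it is an illustration, not the KIEF network itself.

```python
# Simplified fragment of the parameter network: child -> list of parents.
depends_on = {
    "printhead.accuracy":     ["printhead.displacement"],
    "printhead.displacement": ["carriage.displacement"],
    "carriage.displacement":  ["printhead.temperature"],
}

def root_causes(param, graph):
    """Walk upstream until parameters with no further dependency are found."""
    parents = graph.get(param, [])
    if not parents:
        return {param}                     # a root the designer can act on
    return set().union(*(root_causes(p, graph) for p in parents))

print(root_causes("printhead.accuracy", depends_on))
# {'printhead.temperature'} -> act on print-head temperature (e.g. pre-heater)
```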

Attribute                    | 1st state | 2nd state | 3rd state | 4th state
Belt1 Position               | 0         | 0         | 0         | 1
Belt1 Velocity               | 0         | 0         | 1         | 1
Carriage Position            | 0         | 0         | 0         | 1
Carriage Velocity            | 0         | 0         | 1         | 1
Motor&Plate Voltage          | 0         | 1         | 1         | 1
Pulley1 Angular velocity     | 0         | 0         | 1         | 1
Pulley2 Angular acceleration | 0         | 1         | 1         | 1
Pulley2 Torque               | 0         | 1         | 1         | 1

Table 1: Qualitative behavior of the primary model.

Attribute                    | 1st | 2nd | 3rd | 4th | 5th | 6th | 7th | 8th
Battery Temperature          |  0  |  1  |  1  |  1  |  1  |  1  |  1  |  1
Belt Position                |  0  |  0  |  0  |  0  |  0  |  0  |  0  |  1
Belt Velocity                |  0  |  0  |  0  |  0  |  0  |  0  |  1  |  1
Carriage Displacement        |  0  |  0  |  0  |  1  |  1  |  1  |  1  |  1
Carriage Position            |  0  |  0  |  0  |  0  |  0  |  0  |  0  |  1
Carriage Temperature         |  0  |  0  |  0  |  1  |  1  |  1  |  1  |  1
Carriage Velocity            |  0  |  0  |  0  |  0  |  0  |  0  |  1  |  1
Motor&Plate Temperature      |  0  |  0  |  0  |  0  |  1  |  1  |  1  |  1
Motor&Plate Voltage          |  0  |  0  |  0  |  0  |  0  |  1  |  1  |  1
Printhead Accuracy           |  0  |  0  |  0  | -1  | -1  | -1  | -1  | -1
Printhead Displacement       |  0  |  0  |  0  |  1  |  1  |  1  |  1  |  1
Printhead Temperature        |  0  |  0  |  1  |  1  |  1  |  1  |  1  |  1
Pulley1 Angular acceleration |  0  |  0  |  0  |  0  |  0  |  0  |  0  |  1
Pulley1 Angular velocity     |  0  |  0  |  0  |  0  |  0  |  0  |  1  |  1
Pulley2 Angular acceleration |  0  |  0  |  0  |  0  |  0  |  1  |  1  |  1
Pulley2 Angular velocity     |  0  |  0  |  0  |  0  |  0  |  0  |  1  |  1
Pulley2 Torque               |  0  |  0  |  0  |  0  |  0  |  1  |  1  |  1

Table 2: Qualitative behavior of the reasoned model.

Figure 7: State transitions of the reasoned model.



Figure 8: Parameter network connections.

5 CONCLUSIONS, LIMITS AND FUTURE WORK
This paper suggested a method and a software tool, the DID, for automatically detecting unpredictable problems in the conceptual design of complex machines such as mechatronic machines. The DID creates a shortcut between the early design stage and high-level verification. The DID is an analysis tool which identifies design failures generated by the integration of technologies. Finding and resolving problems at the conceptual phase increases design quality and helps to save money on prototypes by reducing the number of flaws at the engineering prototype phase. Two different models of the top engine of an inkjet printer were analyzed and compared: the primary model and the total behavioral model. In section 4 the differences between the two models in terms of behaviors were investigated. The analysis led to the suggestion of two alternative design solutions to prevent the quality of the print from decreasing. Such predictions at an early stage of the design save months in the development of the product, and they are cost efficient since fewer prototypes will then be needed in the verification phase. Unfortunately, the simulation time and the number of reasoned behaviors increase exponentially with the number of attributes. The risk is to generate an explosion of solutions, as happens in the reasoned model; this is why it is necessary to filter solutions. A brief introduction to filtering methods was provided in section 4. The analysis suggests using the filter to reduce the number of possible unpredicted phenomena before the qualitative physics reasoner is activated; in this way not only the number of behaviors but also the simulation time is reduced. The software has reasoned out significant results at the design level just by using qualitative information. The lessons learned by the designer concerning the product, in terms of unpredicted phenomena and the solutions provided for the problems, can be introduced into the database as physical features. This allows the reuse of the reasoned results in subsequent sessions or in a new model, and avoids having the same unpleasant surprise again. The next step for this research is to develop the two mentioned filtering methods to prioritize the behaviors reasoned out by the system based on their significance and probability of appearing.


6 ACKNOWLEDGMENTS
The authors gratefully acknowledge the support of the Dutch Innovation Oriented Research Program 'Integrated Product Creation and Realization (IOP-IPCR)' of the Dutch Ministry of Economic Affairs.

7 REFERENCES
[1] Suh, N.P., 1990, The Principles of Design, Oxford Series on Advanced Manufacturing.
[2] Pahl, G., Beitz, W., 1977, Engineering Design: A Systematic Approach, Berlin, Springer-Verlag.
[3] Browning, T.R., 2001, Applying the design structure matrix to system decomposition and integration problems: a review and new directions, IEEE Transactions on Engineering Management, 48: 292-306.
[4] Chatterjee, P., 1975, Modularization of Fault Trees: A Method to Reduce the Cost of Analysis, Reliability and Fault Tree Analysis, SIAM: 101-126.
[5] Yoshioka, M., Umeda, Y., Takeda, H., Shimomura, Y., Nomaguchi, Y., Tomiyama, T., 2004, Physical concept ontology for the knowledge intensive engineering framework, Advanced Engineering Informatics, 18: 95-113.
[6] Stevens, R., Brook, P., Jackson, K., Arnold, S., 1998, Systems Engineering: Coping With Complexity, Prentice Hall Europe, ISBN 0-13-095085-8.
[7] Kari, L., Provisec, O., 2005, Challenging Global Competition: tune up your product development, VTT working paper 34, ESPOO.
[8] Tran, E., 1999, Verification/Validation/Certification, in Koopman, Topics in Dependable Embedded Systems, Carnegie Mellon University, USA, http://www.ece.cmu.edu/~koopman/dess99/swtesting/index.html, accessed April 13, 2008.
[9] Kuipers, B., 1994, Qualitative Reasoning: Modeling and Simulation with Incomplete Knowledge, The MIT Press, Cambridge, MA.
[10] Umeda, Y., Ishii, M., Yoshioka, M., Shimomura, Y., Tomiyama, T., 1996, Supporting conceptual design based on the function-behavior-state modeler, AI for Engineering Design, Analysis and Manufacturing, 10, No. 4: 275-288.

[11] Forbus, K.D., 1984, Qualitative Process Theory, Artificial Intelligence, 24: 85-168.
[12] D'Amelio, V., Tomiyama, T., 2007, Predicting the Unpredictable Problems in Mechatronics Design, in Proceedings of the 16th International Conference on Engineering Design - Design for Society, Cité des Sciences et de l'Industrie, Paris, August 28-30, The Design Society, Paper number 153, 10 pages (CD-ROM).
[13] D'Amelio, V., Tomiyama, T., 2008, Design at the Cross-Section of Domain, the 18th CIRP Design Conference, April 7-9, Enschede, the Netherlands.
[14] Kiriyama, T., Tomiyama, T., Yoshikawa, H., 1992, Qualitative Reasoning in Conceptual Design with Physical Features, in Recent Advances in Qualitative Physics, Eds. B. Faltings, P. Struss, The MIT Press, Cambridge, MA: 375-386.
[15] Forbus, K.D., De Kleer, J., 1993, Building Problem Solvers, The MIT Press, Chapters 6-11.


A Generic Conceptual Model for Risk Analysis in a Multi-agent Based Collaborative Design Environment
J. Ruan, S. F. Qin
School of Engineering and Design, Brunel University, Uxbridge, Middlesex, UK
[email protected], [email protected]

Abstract
This paper presents a generic conceptual model of risk evaluation for managing risk through related constraints and variables in a multi-agent collaborative design environment. Initially, a hierarchy constraint network is developed to map constraints and variables. Then, an effective approximation technique named the Risk Assessment Matrix is adopted to evaluate risk level and rank priority after probability quantification and consequence validation. Additionally, an Intelligent Data-based Reasoning Methodology is expanded to deal with risk mitigation by combining inductive learning methods and reasoning consistency algorithms with feasible solution strategies. Finally, two empirical studies were conducted to validate the effectiveness and feasibility of the conceptual model.
Keywords: Conceptual Model, Risk, Collaborative Design

1 INTRODUCTION
Risk-based design is attracting significant attention in the design of large-scale products such as airplanes and ships. In conventional risk-based design projects, many risk assessment approaches have been developed in terms of design processes and activities [1]. Through these approaches it is easy for designers to determine risk sources and anticipate their consequences after quantifying their probabilities. Global collaboration is the mainstream way of distributing product design activities using up-to-date design tools and technology. However, although collaborative design is rewarding, it involves more uncertainties due to complicating factors [2]. These factors are not only related to multidisciplinary tasks and enormous resources, but also concern coordination, negotiation and decision authorities within multi-agent interactions. The complexity and associated risks in planning and managing such large-scale projects are increased by the need to integrate the functions of both technical and social teams that may be distributed across geographical regions with diverse languages and cultures [3]. Despite the importance given to risk management in collaborative design in the literature, the subject area continues to suffer from three interrelated problems: a lack of risk constraints and variables with regard to an integrated conceptual model for a multi-agent based collaborative design environment; uncertainty determination, i.e. how to quantify these constraints and variables, in the existing research; and the lack of an appropriate mitigation method for a feasible solution. This paper presents a constraint-based generic conceptual model of risk evaluation (GCMRE) specifically designed in response to these problems, explicating the processes by which involvement and risk perceptions are caused and influence one another, as well as the subsequent consequences. The conceptual model identifies three distinct dimensions of risk constraints and relates these to the relevant variables in a distributed collaborative design environment. Additionally, the validation of the


conceptual model is discussed using two empirical studies. This study is based on a novel generic conceptual model (GCMRE) which initially maps constraints and variables using a hierarchy constraint network and then utilizes an effective approximation technique named the Risk Assessment Matrix (RAM) to evaluate risk level and rank priority after probability quantification and consequence validation. The effectiveness of the model aimed at risk management in concurrent engineering (CE) projects is determined by the degree of data sharing and reuse, as well as the available support for decision-making processes within the projects [3, 4, 5]. The core of the study is an Intelligent Data-based Reasoning Methodology (IDRM), which is expanded to deal with risk mitigation by combining the inductive learning method and reasoning consistency algorithms with a feasible solution strategy. Consequently, the novel model will not only facilitate decision making from a risk perspective but also emphasize data retrieval, storage, sharing and updating.

2 A GENERIC CONCEPTUAL MODEL OF RISK EVALUATION (GCMRE)
In a concurrent multi-agent collaborative design environment, advanced technologies in computer networks have enabled designers to collaborate more effectively and to integrate a wide range of design agents and resources. Computer Supported Cooperative Work (CSCW) provides a design research area concerned with multi-agent interaction under multidisciplinary task dependencies supported by computer and web networks. Collaborative design typically has multiple functional perspectives that address interrelated aspects of a distributed product design involving communication and negotiation among engineering agents.


evaluation standards in a collaborative design system, collaborative risk evaluation is critical and needs to be further considered. GCMRE is designed as a generic conceptual risk evaluation model in a web-based multiagent collaborative design environment. The aim of GCMRE is to support a distributed collaborative design through global collaboration from the risk perspective. Figure 1 shows the generic conceptual model and is briefly illustrated below. 2.1 Contextual and Flow Structure The contextual and flow structure is established as a VB module and interacts with customer-based GUI interface. In order to structure the contextual information and model the flow related to different design phases, i.e. conceptual design, preliminary design, detail design, manufacturing and assembling, a database is built. 2.2 Constraint Mapping Constraint mapping is a technique which can manage the uncertainty, constraint relationships and all of the objects related to the constraints in a concurrent and collaborative multidisciplinary design project [2, 6]. In collaborative design, there are many restrictions among multiple design agents, including task based design criteria, design rules, design resources and the up-to-date design techniques. These restrictions under concurrent engineering (CE) can be characterized as constraints, and those classified constraints in the process of collaborative design can form a constraint network or database [2]. From the point view of risk, a constraint must have relationships with risk variables. The emergence of risk variables will result in straight

constraint violation, thus the risk variables can be derived and identified by using the constraint mapping. The constraint mapping can check whether or not the collaborative design result satisfies the whole constraint network by constraint propagation, if not, there must be risk variables that exist, and then we must track them and register the constraints. There are three ways to input constraints by capturing, classifying and registering. In order to accelerate the constraint mapping, a hierarchical constraint network technique is used in this study. The constraint can be generalized into three levels as shown in Fig.2: task-dependence level, actorinteraction level and resource-integration level. Task-dependence level constraints represent constraints of the schedule, product quality, time and so on. The actor-interaction level constraints describe the design constraint of various design actors, which link communication, negotiation decision-making, etc. Resource-integration level constraints represent restrictions on knowledge, technology; design material, funding, human resources, etc. For example, in the conceptual design phase, the task-dependence level constraints are the most important factor and the taskdependence level constraint network is propagated first to derive and identify risk variables. With the design goes further, the actor-interaction level constraint is more concerned about, and the constraint network of this level has the priority of detection. At the stage of detail design phase, we need to check the resource-integrated level constraints. Thus, with the help of the hierarchy constraint network, risk variables could be identified promptly.


Figure 2: Hierarchy of GCMRE Constraint Mapping.

2.3 Risk Identification
Risk variables are derived based on constraint mapping and classified by using some traditional techniques such as failure mode and effect analysis, fault-tree analysis, problem reports and record-tracking techniques. There are two relationships among risk variables: independent and dependent. In this study, each risk variable is assigned a unique risk identification number as a reference, in order to aid communication and tracking during the whole risk evaluation process. A comprehensive questionnaire needs to be carried out to gather general and sufficient risk information. As shown in Table 1, a list of formal risk variable representations with corresponding attributes is created and input into the risk variable database. These risk variables can also be inherited in later iterations. The risk variable database is a bank that stores and lists the risk description and other basic information about each risk variable. The database of risk variables is a good source of lessons learned and is useful for identifying risks in the future.

Table 1. Risk Attributes for Collaborative Conceptual Design.
Constraint | Risk Variable | Probability | Consequence | Risk Level | Rank Priority
XXXX       | XXXX          | Level       | Level       | Number     | Number
…          | …             | …           | …           | …          | …

2.4 Risk Assessment After identifying risk variables, a Risk Assessment Matrix (RAM) technique is used for assigning a risk level and a rank priority relating to probability quantification and consequence validation. Since a risk variable is associated with its probability and consequence, some literatures [1, 3, 7] suggest ranking them into several levels in order to quantify or validate them. In GCMRE, the RAM is adopted by a 3 × 3 Risk Assessment Matrix as show in Figure 3 [7]. The probability for each risk variables is assessed as high (H), medium (M), or low (L) according to pre-specified criteria, while the consequence here is addressed as severe (SE), moderate (MO) and Minimal (MI). Through calculation of the magnitude of probability and consequence, a risk level is validated. The risk level is an assigned value from


The 3 × 3 Risk Assessment Matrix combines probability quantification (columns) with consequence validation (rows):

Consequence \ Probability | High (H) | Medium (M) | Low (L)
Severe (SE) | 1 | 2 | 5
Moderate (MO) | 3 | 4 | 7
Minimal (MI) | 6 | 8 | 9

Risk level calculation for the example: knowledge risk level = 5, technical risk level = 7, cost risk level = 2, time risk level = 4; overall risk level = 2 (lowest number).

Figure 3: Risk Assessment Matrix method.
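A minimal sketch of the matrix lookup and the lowest-number rule, with the level values taken from Figure 3:

# Sketch: Risk Assessment Matrix lookup.
RAM = {  # (consequence, probability) -> risk level, 1 = most critical
    ("SE", "H"): 1, ("SE", "M"): 2, ("SE", "L"): 5,
    ("MO", "H"): 3, ("MO", "M"): 4, ("MO", "L"): 7,
    ("MI", "H"): 6, ("MI", "M"): 8, ("MI", "L"): 9,
}

def overall_risk_level(assessments):
    """The overall level is the lowest (most critical) value
    among all assessed risk variables."""
    return min(RAM[(cons, prob)] for cons, prob in assessments)

# The worked example of Figure 3: knowledge (SE, L) = 5, technical
# (MO, L) = 7, cost (SE, M) = 2 and time (MO, M) = 4.
print(overall_risk_level([("SE", "L"), ("MO", "L"), ("SE", "M"), ("MO", "M")]))
# -> 2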

Finally, a rank priority is identified in order to decide the sequence of disposal. A high rank priority indicates a highly significant risk variable that needs to be resolved by choosing an optimum mitigation strategy.

2.5 Risk Mitigation
Eliminating a risk variable by mitigation requires a feasible mitigation strategy and sufficient resources to execute the risk mitigation plan [7]. All mitigation strategies can be generated by iterative processes or inherited experience [3]. By combining inductive learning methods and reasoning consistency algorithms, a flow chart of the Intelligent Data-based Reasoning Methodology (IDRM) is presented in Figure 4. In the proposed IDRM, following the initial contextual and flow structure, reasoning consistency algorithms are first applied to match a series of given constraints and variables to rules or cases in the database; the constraints and variables are collected as a data bank in the database in an iterative or inherited manner. Three reasoning consistency algorithms, each related to a distinct risk mitigation strategy, are used in IDRM: the Rule-based Reasoning Consistency algorithm (RRC), the Case-based Reasoning Consistency algorithm (CRC) and a constraint or variable relaxation consistency algorithm. If the IDRM can handle the risk constraint and variable directly, the rule-based or case-based algorithm is called to deal with the risk through matched rules or cases; otherwise the risk constraints and variables are passed to the constraint or variable relaxation consistency algorithm. A corresponding rule-based or case-based mitigation strategy is implemented if matching is successful. Once risk mitigation is complete, rules or cases acquired during the IDRM process are added to the rule or case database by the inductive learning method.

2.6 Risk Monitor
To keep a continual flow of risk evaluation, it is critical to monitor risk status with accurate tracking and recording, and several practical techniques are applied for this purpose. Generally, the risk monitor chases cases, updates the database and provides the GCMRE with further information for the future.

Figure 4: Intelligent Data-based Reasoning Methodology (IDRM). The flow runs from the contextual and flow structure through constraint mapping (against the constraint and variable databases), variable identification and assessment, and rule-based or case-based matching (against the rule and case databases) to the selected mitigation strategy and risk mitigation.
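The dispatch logic of the IDRM flow can be sketched as follows; rule and case matching are reduced to dictionary lookups here, since the actual reasoning consistency algorithms are not detailed in the paper:

# Sketch: IDRM dispatch (rule match, then case match, then relaxation).
def idrm_mitigate(risk, rule_db, case_db):
    if risk in rule_db:                    # Rule-based Reasoning Consistency
        strategy = rule_db[risk]
    elif risk in case_db:                  # Case-based Reasoning Consistency
        strategy = case_db[risk]
    else:                                  # constraint/variable relaxation
        strategy = "relax constraint or variable"
    # Inductive learning step: solved risks are added back to the database
    # so they can be inherited in later iterations.
    case_db.setdefault(risk, strategy)
    return strategy

rule_db = {"schedule": "re-plan tasks"}
case_db = {}
print(idrm_mitigate("schedule", rule_db, case_db))  # -> re-plan tasks
print(idrm_mitigate("funding", rule_db, case_db))   # -> relaxation fallback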

3 VALIDATION
Two in-depth industrial field studies were conducted in order to test and validate the effectiveness and feasibility of the proposed conceptual model in two design businesses in the UK. One is ACDP in Berkshire, an integrated leading building services engineering consultancy with a wealth of experience and expertise in collaborative design; the other is the Industrial Design Human Factor (IDEF) department of Xerox Corporation (Welwyn Garden City, UK). Most projects from ACDP are small, short-term and involve less collaboration due to limited agents and resources, while the collaborative projects from Xerox are large-scale and long-term, involving more multidisciplinary agents, complicated tasks and collaborative resources. Initially, industrial interviews and questionnaire surveys were conducted in the two companies with 30 design staff each, drawn from various levels; all of them have extensive experience in collaborative design as industrial designers, design managers, product engineers, project managers, etc. During the two-month industrial field study, interviews, questionnaires and field observation were employed in the two companies in order to find out whether the proposed generic conceptual model can support or enhance risk evaluation in the multi-agent collaborative design environment. The authors participated in one design project at each company. Based on the collaborative design practice and observation of the whole set of collaborative design activities, processes and management, the authors believe that the proposed model is appropriate for these agent-based collaborative design projects, and this conclusion was validated objectively by most of the involved multidisciplinary collaborative designers through face-to-face industrial interviews. As for the questionnaire surveys, there were 67 responses, and 78% of the interviewees believed the proposed generic conceptual model was effective and feasible in collaborative risk evaluation and could be implemented in various industrial organizations. They also recommended that the model should be further developed and evaluated to include more details about how to link constraints and variables with industrial practice.

4 CONCLUSION
This paper presents a constraint-based generic conceptual model of risk evaluation (GCMRE) to manage risk through related constraints and variables in the multi-agent collaborative design environment. A hierarchical constraint network is developed to map constraints and variables. An effective approximation technique, the Risk Assessment Matrix (RAM), is then adopted to evaluate the risk level and rank priority after probability quantification and consequence validation. Additionally, the Intelligent Data-based Reasoning Methodology (IDRM) is extended to deal with risk mitigation by combining inductive learning methods and reasoning consistency algorithms with feasible solution strategies. Finally, two empirical studies were conducted to validate the effectiveness and feasibility of the conceptual model.

5 ACKNOWLEDGEMENT
This paper was supported by the CAD Group of Collaborative Design from the Engineering and Design School of Brunel University.

6 REFERENCES
[1] Qiu, Y., Ge, P., 2007, A risk-based global coordination system in a distributed product development environment for collaborative design, part 1: framework, 15, 357-367.
[2] Wang, W., Hu, J., Yin, J., Peng, Y. H., 2006, Uncertainty management in the concurrent and collaborative design based on generalized dynamic constraints network (GDCN), Proceedings of the 10th International Conference on Computer Supported Cooperative Work in Design.
[3] Kayis, B., Arndt, G., Zhou, M., Savci, S., Khoo, Y. B., Rispler, A., 2006, Risk quantification for new product design and development in a concurrent engineering environment, Annals of the CIRP, 55/1.
[4] Danesh, M. R., Jin, Y., 2001, An agent-based decision network for concurrent engineering design, Concurrent Engineering: Research and Applications, 9/1, 37-47.
[5] Rozenfeld, H., 2002, An architecture for shared management of explicit knowledge applied to product development process, Annals of the CIRP, 51/1, 413-416.
[6] Xiong, G. L., Chang, T. Q., 1998, Coordination model for the concurrent engineering product development process, High Technol. Lett., 4/2, 1-8.
[7] David, L. L., 2001, System engineering: a risk management approach, Technology Review Journal, Fall/Winter 2001, 41-54.


A Methodology for Variability Reduction in Manufacturing Cost Estimating in the Automotive Industry based on Design Features F. J. Romero Rojo, R. Roy, E. Shehab Decision Engineering Centre, Manufacturing Department, School of Applied Sciences, Building 50, Cranfield University, Cranfield, Bedford, MK43 0AL, UK. {f.romerorojo; r.roy; e.shehab}@cranfield.ac.uk

Abstract
Small to medium manufacturing companies are coming to realise the increasing importance of performing fast and accurate cost estimates at the early stages of projects to address customers' requests for quotation. However, they cannot afford the implementation of knowledge-based cost estimating software. This paper explains the development and validation of a consistent methodology for the cost estimating of manufactured parts (focused on pistons) based on design features. The research enabled the identification of the sources of variability in cost estimates, the main one being the lack of formal cost estimating procedures in manufacturing SMEs. Finally, a software prototype was developed that reduces the variability in the cost estimates by defining a formal procedure that follows the most appropriate cost estimating techniques.
Keywords: Cost estimating, manufacturing SMEs, process improvement, variability.

1 INTRODUCTION
Currently, manufacturing companies are forced to invest more and more in innovation in order to improve their products' quality, flexibility and variety, while at the same time trying to reduce their costs. This is necessary in order to survive, maintain their competitive edge and satisfy customers, who are demanding higher quality at lower prices. Cost has therefore become one of the main factors in product development. Manufacturing companies are requested by their customers to provide a quote at the early stages of the project, so it is important for these companies to make use of accurate and well-defined methodologies to estimate costs. Many big companies have in place cost estimating software that, combined with an expert system, provides good estimates [1]. However, Small and Medium Sized Enterprises (SMEs) cannot afford such an expensive system, yet they still need a procedure that allows accurate estimates in order to be competitive. There are several techniques for the cost estimating process, and most manufacturing SMEs are not aware of which of them is most suitable in each situation. On top of that, most of them do not use a well-defined procedure, which leads to high fluctuation in the estimates. Accordingly, the aim of this research is to identify the sources of inaccuracy and determine the most suitable cost estimating procedure, in order to develop a framework to improve the cost estimating process within a manufacturing environment.



2 LITERATURE REVIEW
In a manufacturing environment, it is important to estimate the costs incurred in manufacturing products in order to issue a quote to the customer. The quote submitted is required to be lower than the competitors' quotes, but high enough to make a profit; the requirements for a successful estimate are therefore accuracy, speed and consistency [2]. Reduction of effort and time, increased accuracy and higher consistency in estimates are additional advantages derived from the use of software programs for cost estimating. However, there are some disadvantages: an up-to-date database is required, and any competitor may use the same software and achieve similar results. In order to improve the reliability and accuracy of cost estimates, many support tools have been developed for the early stages of design [3]. Most authors agree that there is no single 'best' method; the most appropriate method depends on the context, that is, the company, the customers and the stage of production at which the cost estimating is carried out [4]. In broad terms, qualitative methods are more suitable than quantitative methods in the early stages, when detailed information is not available. The main limitation of the qualitative methods is that they are less accurate than the quantitative methods. Therefore, qualitative methods should be used as a decision-support tool providing a rough cost estimate at the early stages of the project, while quantitative methods should be used when detailed information is available and an accurate cost estimate is required [5].

None of these techniques is more appropriate than any other for all the possible scenarios. Therefore, as Niazi [5] states, “recent research in the field focuses on getting quicker and more accurate results by developing integrated systems combining two or more approaches”.

Considering each technique individually, Traditional Detailed Cost Estimating is more accurate than Parametric Costing, Expert Judgement and Analogy-Based Reasoning, but it has limitations in the allocation of indirect costs and overheads. Activity-Based Costing (ABC) overcomes this limitation, but the technique is very costly [6]; ABC should therefore only be implemented when accuracy in cost estimates is critical. Parametric Costing, Expert Judgement and Analogy-Based Reasoning are quicker than Detailed Cost Estimating and Activity-Based Costing [7]. Expert Judgement can be considered the quickest but least accurate method, because it is easily biased by subjectivity; it is therefore suitable for calculating an initial rough estimate and for cross-checking the result of any other method. Transparency for the cost estimator is the main advantage of Analogy-Based Reasoning over Parametric Costing, which can be considered a 'black box'. Moreover, Analogy-Based Reasoning has the ability to 'learn' from previous and new cases, although it involves a higher degree of subjectivity than Parametric Costing; the suitability of these methods therefore depends on the ease of identifying the 'cost drivers' and the availability of previous similar cases. Technological advancement is stimulating the use of Artificial Intelligence (AI) to reduce time, handle uncertainty, and increase the accuracy and reliability of estimates. However, not many companies have implemented any kind of AI yet, due to its high cost of implementation and maintenance. Another limitation of this technique is that it is also like a 'black box'; it therefore needs to be used in combination with another technique in order to validate the estimate.

3 RESEARCH METHODOLOGY

3.1 The Case Study
The case study company, Cosworth UK Ltd., is an SME founded in London in 1958 and specialised in the manufacture of engine components for automobile racing (motorsport). Cosworth had a long relationship with Ford, which began when Cosworth first started manufacturing racing engines in 1959. By the end of 2004 this relationship was broken off, so Cosworth faced a new scenario in which it had great expertise in the design, development and manufacture of automotive engines, but cost estimation for quotation emerged as a new challenge. The main objective of this research is the reduction of variability in the cost estimating process within a manufacturing environment, and it is first necessary to identify the main causes of that variability [8]. When the cost estimating process starts from scratch or is based on similar past cases, there is no predefined procedure to follow; consequently, most of the process is based on the experience of the estimator, and the estimate fluctuates considerably from one estimator to another. This is the key challenge identified in this study, because the absence of a formal procedure may mislead the cost estimator; unless this problem is addressed, the estimates are unreliable.

Several sources of variability in the cost estimates have been identified during this study. Firstly, there is no formal procedure defined for cost estimating, so the costs estimated by different employees for the same product may differ considerably. Most employees are not properly trained to perform a cost estimate; consequently, they struggle to estimate costs and the results obtained fluctuate widely. There is also a lack of data available from previous cost estimates; moreover, most of this data is stored arbitrarily, without following any logical criteria, making retrieval a very complex task. As a result, many cost estimating processes start from scratch, increasing the range of fluctuation of the cost estimate.

3.2 The Approach Adopted
The first step is to capture the current cost estimating practice within the case study company. Understanding the current practice is very important because it is necessary to identify the areas that require further improvement [8]. Once the sources of variability in the estimates had been identified, together with the need to reduce them in order to improve the results, the next step was the development of a solution. Given that there are no formal procedures defined for the cost estimating process in SMEs, developing one may reduce the variability to a great extent. A framework based on the techniques reviewed in the literature survey was developed and validated. Subsequently, in order to develop the prototype that allows the implementation of this methodology in a manufacturing SME, a workshop was held, attended by the case study company's main experts in pistons, to determine the main characteristics of pistons that are considered for the measurement of the degree of similarity. From this point, the development of the prototype was based on MS Excel and MS Access.

4 DEVELOPMENT OF COST ESTIMATING FRAMEWORK
The most suitable methodology for components that are similar to others designed and manufactured in the past is hybrid Analogy-Based / Detailed Cost Estimating [8]. This methodology combines the speed of Case Based Reasoning (CBR) with the accuracy of Detailed Cost Estimating, and is therefore a suitable method for this scenario. It is based on the set of processes defined in the Virtual Cost Estimating Studio (V-CES) project [9], but has been improved and adapted to the requirements of a manufacturing SME. The methodology is focused on pistons, but it can equally be adapted for the quotation of any other part within this scenario. The Piston Quote Generator is a tool that allows the cost estimation of any piston at the early stages of the design; it ensures the use of a well-defined procedure, which reduces the variability in the cost forecast.

Figure 1: IDEF0 diagram of the Piston Quote Generator. The analogy-based phase comprises A11 (define new piston characteristics, from the customer's request for quotation and requirements for the quote structure), A12 (identify similar piston, via the Access query and piston database), A13 (retrieve data from the most similar piston, adjusted for inflation, learning curve and complexity) and A14 (identify information to be kept and new or different operations required). The detailed phase comprises A21 (calculate the time required to perform new operations, using tool supplier information), A22 (estimate costs, using machine/labour rates in Excel) and A23 (test for omissions and duplications), performed by the cost estimator.

Figure 2: Piston Quote Generator structure. The “piston quote generator.xls” workbook guides the estimator through five tabs: TAB 1, START (summary of the processes); TAB 2, A11-A13 (define new piston characteristics, identify similar piston, retrieve data from most similar piston); TAB 3, A14-A21 (identify information to be kept and new or different operations required, calculate time required to perform new operations); TAB 4, A22 (estimate costs); and TAB 5, QUOTE (the final quote). The “piston quote generator.mdb” database holds the Pistons Database, in which all past pistons are stored; the Select_Piston form, used to specify the main characteristics of the new piston; the Pistons_Entry form, used to modify or add a piston to the database; and the Pistons Query, which retrieves from the database those pistons that match the characteristics specified in the Select_Piston form.


4.1 Cost Estimating Scenario
In this section the rationale of the tool is described thoroughly; the overall process is shown in the IDEF0 diagram (see Figure 1), and each step is explained as follows. The process starts when a 'Request for Quotation' is received from the customer, including the design specifications of the part to be quoted. The methodology commences by applying the CBR technique to identify the most similar past case. The first step is to define the main characteristics of the part that will be quoted; the cost estimator is required to assess them in order to identify the most similar part stored in the database. The characteristics are entered in a form by the cost estimator. The database is then explored to identify the past cases that match those characteristics. At this stage, the cost estimator has the flexibility to undertake different searches, including partial match or exact match retrieval; that is, the estimator is free to search on just those characteristics regarded as most important. Once the most similar past case has been identified, its quote is retrieved from the database.
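A minimal sketch of this retrieval step is shown below; the characteristic names are hypothetical, and the 0 = 'ignore this characteristic' convention follows the Select_Piston form described in section 4.2:

# Sketch: partial/exact match retrieval against the piston database.
def retrieve_similar(query, database):
    """Return past pistons matching every non-zero characteristic.
    An all-non-zero query behaves as an exact match; zeros make it partial."""
    return [p for p in database
            if all(p[key] == value
                   for key, value in query.items() if value != 0)]

past_pistons = [
    {"id": "P-101", "crown": 2, "skirt": 1, "pin": 3},
    {"id": "P-102", "crown": 2, "skirt": 4, "pin": 3},
]
# Partial match: the estimator only cares about crown and pin.
print([p["id"] for p in retrieve_similar({"crown": 2, "skirt": 0, "pin": 3},
                                         past_pistons)])
# -> ['P-101', 'P-102']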

The data included in this quote should be updated and normalised by the cost estimator with respect to inflation, learning curve and complexity. Subsequently, it is necessary to define the manufacturing route that the new part will follow. The cost estimator can then identify which operations of the past case can be maintained, which should be modified or removed, and which operations should be added. The need for any new tool required to perform a new operation is also identified at this stage (Figure 1). This is the first step of the Detailed Cost Estimating phase of the overall process. Now that the manufacturing route has been completely defined for the new product, the cost estimator, based on the information provided by the tool supplier, is able to determine the cycle-time required for each operation. Once the cycle-time for each operation has been determined, the cost incurred during the manufacturing process can easily be estimated from the machine and labour rates. The costs of materials, cosmetics, packaging, new tools and fixtures are added to the manufacturing cost in order to calculate the final quote. (Note that indirect costs, overheads, machinery depreciation and maintenance costs are already included in the machine rates.) The objective of the last step is to ensure that the quote is correct, avoiding omissions and duplications. The procedure defined for this purpose is: first, print and check a report of the cost estimate; second, correct any errors found; and third, send the quote to the sales department so it can be issued to the customer (Figure 1).
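The detailed-costing arithmetic of this step can be sketched as follows; all operation names, times and rates are illustrative values, not Cosworth data:

# Sketch: per-operation cycle times (hours) rolled up with machine and
# labour rates (currency units per hour) and the add-on cost items.
operations = {"rough turn": 0.40, "finish turn": 0.25, "drill B1-B3": 0.15}
machine_rate, labour_rate = 55.0, 30.0   # indirect costs already included

manufacturing_cost = sum(t * (machine_rate + labour_rate)
                         for t in operations.values())

# Materials, cosmetics, packaging, new tools and fixtures are added on top.
extras = {"material": 42.0, "cosmetics": 3.5, "packaging": 2.0, "new tools": 18.0}
quote = manufacturing_cost + sum(extras.values())
print(round(manufacturing_cost, 2), round(quote, 2))  # -> 68.0 133.5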

4.2 Overall Structure of the Developed Prototype
This methodology has been implemented by combining MS Excel and MS Access; the structure of the developed prototype is represented in Figure 2. The “piston quote generator.xls” subsystem contains all the instructions necessary to perform the cost estimate, and five tabs guide the cost estimator through the whole cost estimating process.

Figure 3: Selection form of the system.

The “piston quote generator.mdb” subsystem includes two forms, a database and a query. The database stores all the information related to past pistons, their characteristics and their quotes. As shown in Figure 3, the “Select_Piston” form has six lists of options, each associated with a different characteristic that defines the piston. For each characteristic there are six possible values for the assessment of the new piston, plus an extra value, “0”, that can be selected if the cost estimator decides not to include that characteristic in the search query. Every time the user selects an option, an image representative of that option is displayed. This feature, added to the form by means of Visual Basic for Applications (VBA), eases the selection process for the cost estimator.


The ‘Pistons_Entry’ form allows the user to add new pistons to the database and to modify the information about pistons already stored in it. By means of this form, the user is able to populate the database, adding every piston that has been quoted.

The Pistons Query has been developed using VBA, in order to allow the user to retrieve the pistons stored in the database that match the characteristics specified by the user.
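The query is assembled dynamically from the selected characteristics; a sketch of the same idea in Python (the original is VBA, and the table and column names here are assumptions) might look like this:

# Sketch: build a SELECT statement, skipping characteristics coded 0.
def build_query(criteria):
    clauses = [f"{col} = {val}" for col, val in criteria.items() if val != 0]
    where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
    return "SELECT * FROM Pistons" + where

print(build_query({"crown": 2, "skirt": 0, "pin": 3}))
# -> SELECT * FROM Pistons WHERE crown = 2 AND pin = 3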

5 VALIDATION
In order to validate the procedure defined for the Piston Quote Generator, the detailed IDEF0 diagram (summarised in Figure 1) was explained to the three main experts in pistons and cost estimation from the Engineering/Manufacturing department of the case study company; after checking it, all of them gave their approval. A further validation was performed to make sure that the prototype works properly and fulfils all the requirements for which it was designed. This validation was carried out by the Production Engineering Manager of the manufacturing SME, who combines expertise in pistons with expertise in cost estimating and is therefore well placed to determine whether the program meets expectations. After running the program and testing it with several hypothetical cases, it was verified that it works properly and fulfils the requirements.

6 CONCLUSIONS
The methodology defined for the rationale of the developed tool is a hybrid of detailed cost estimating and analogy-based costing. This decision is based on the literature review carried out, and on the combination of the level of effort required and the accuracy provided by this methodology. The literature review and the study of current cost estimating practice within manufacturing companies revealed a lack of formal cost estimating procedures in manufacturing SMEs. The fluctuation of cost estimates is a concern in manufacturing SMEs; the variability of the estimates has been reduced by defining a structured procedure. The customer-expectation study identified the time within which the customer expects to receive the quote for each scenario, counted from when the Request for Quotation (RFQ) is sent, and highlighted the need to reduce cost estimate development time. Considering the previous statements and the validation results, it is observed that the Piston Quote Generator has improved the cost estimating process within a manufacturing SME. The aim of the research has therefore been accomplished, reducing the variability in the cost estimates.

7 ACKNOWLEDGMENTS
This research project was carried out in collaboration with Cosworth UK Ltd., who funded the project, provided the case study and the information required, and collaborated in validating the results and the prototypes developed. Special gratitude is expressed to David Barford, the industrial supervisor of this study.


8 REFERENCES
[1] Shehab, E. M. and Abdalla, H. S. (2002), “A design to cost system for innovative product development”, Proc. Instn Mech. Engrs, Part B: J. Engineering Manufacture, vol. 216, no. 7, pp. 999-1020.
[2] Kingsman, B. G. and de Souza, A. A. (1997), “A knowledge-based decision support system for cost estimation and pricing decisions in versatile manufacturing companies”, International Journal of Production Economics, vol. 53, no. 2, pp. 119-139.
[3] Hicks, B. J. (2002), “Cost estimation for standard components and systems in the early phases of the design process”, Journal of Engineering Design, vol. 13, no. 4, pp. 271-292.
[4] Rush, C. and Roy, R. (2001), “Expert judgement in cost estimating: Modelling the reasoning process”, Concurrent Engineering, vol. 9, no. 4, pp. 271-284.
[5] Niazi, A., Dai, J. S., Balabani, S. and Seneviratne, L. (2006), “Product Cost Estimation: Technique Classification and Methodology Review”, Journal of Manufacturing Science and Engineering, vol. 128, no. 2, pp. 563-575.
[6] Lere, J. C. (2000), “Activity-based costing: a powerful tool for pricing”, Journal of Business & Industrial Marketing, vol. 15, no. 1, pp. 23-33.
[7] H'mida, F., Martin, P. and Vernadat, F. (2006), “Cost estimation in mechanical production: The Cost Entity approach applied to integrated product engineering”, International Journal of Production Economics, vol. 103, no. 1, pp. 17-35.
[8] Romero Rojo, F. J. (2007), “Cost Estimating Process Improvement within a Manufacturing Environment”, MSc Thesis, Cranfield University, UK.
[9] V-CES Consortium (2006), Deliverable 5.4: Final Report on V-CES version 2.

Assessing the Complexity of a Recovered Design and its Potential Redesign Alternatives R. J. Urbanic, W. H. ElMaraghy Intelligent Manufacturing Systems (IMS) Centre, Faculty of Engineering University of Windsor, Windsor, Ontario, Canada {jurbanic; wem}@uwindsor.ca

Abstract
Reverse engineering techniques are applied to generate a part model where there is no existing documentation or it is no longer up to date. To facilitate the reverse engineering tasks, a modular, multi-perspective design recovery framework has been developed. An evaluation of the product and feature complexity characteristics can readily be extracted from the design recovery framework by using a modification of a rapid complexity assessment tool. The results from this tool provide insight into the original design and assist with the evaluation of potential alternatives and risks, as illustrated by the case study.
Keywords: Product Complexity, Design Recovery, Redesign

1 INTRODUCTION
When designing a product, a top-down hierarchical process is followed, in which general principles are methodically applied to synthesize solutions that satisfy the need. Design parameters (DPs) are determined to fulfil the functional requirements (FRs) at the product, component and feature levels. Several engineering design methodologies, such as Value Engineering (VE) [1], Axiomatic Design [2] and the Theory of Inventive Problem Solving (TRIZ) [3], assist the designer in creating a robust design that meets the necessary FRs based on logical and rational thought processes. Consequently, when reverse engineering an engineered component there must be a methodology for recognizing the design intent of the individual features, the component structure and the product architecture in both the physical (form) and logical (FR) domains. Effective design recovery consists of linking the function and form characteristics in the context of the application and the operating environment, in order to infer the designer's intent at the system, embodiment and detail levels and produce pertinent product documentation. A comprehensive design recovery strategy must be performed to capture the essential attributes, ensuring that (i) the reconstructed design will fit within the product's architecture, and (ii) no unexpected behaviours can emerge during usage. Conditions may exist where the recovered design needs to be modified before the component can be remanufactured. These re-design requirements may be due to the introduction of a new product variant, different operating conditions or available manufacturing processes, or other design constraints. For these reasons, the design recovery framework should readily link to other formal design tools in order to assess the original design and to highlight areas of improvement. The goal of this work is to leverage the design recovery framework to quantify the product complexity of the original design and of subsequent design alterations, using an adaptation of the product complexity analysis methodology developed by ElMaraghy and Urbanic [4].



2 DESIGN RECOVERY FRAMEWORK
The design recovery framework has been developed to provide a multi-level roadmap that allows functional, structural and data information to be accumulated at different levels of abstraction (Figure 1). Information is gathered from the contextual to the detail levels to answer the 'what', 'how', 'where' and 'why' questions with respect to the design in an explicit manner. The component/feature functions are enumerated for the 'Logical: What' rubric using the National Institute of Standards and Technology (NIST) design vocabulary [5]; NIST research partners developed a comprehensive, standardized terminology to reflect the intended reasons for a component's architecture. The information contained in the 'Logical: How' rubric provides a brief description of how the functions are met in the design. The hypothesized functional requirements are presented in the 'Logical: Why' rubric. The associated design parameters and the specific dimensional and tolerance data are identified in the physical and detail layers, as illustrated in Figure 1.

Figure 1: Design recovery framework. The framework crosses the system, embodiment and detail viewpoints with the contextual, conceptual, logical, physical and detail layers; the what (function), how (interconnection), where, why (motivation) and data rubrics link functions to functional requirements (FRs) and design parameters (DPs), with functional requirements applying to the component only.

Gathering this information in a modular, systematic and comprehensive manner gives designers the means to make informed decisions as to whether the current component design is adequate, or how it may be modified

to add value and/or address the present set of design and manufacturing constraints. For a detailed description of the design recovery framework, refer to Urbanic and ElMaraghy [6]. To complement the design recovery framework, a connectivity diagram, a technique used in network design to illustrate logical and physical connections, is used to illustrate the physical feature links within a component and the interface components. Features analysed in the design recovery framework should be illustrated in the connectivity diagram along with influential interface components. The rules developed for constructing an artefact connectivity diagram are as follows:
• Each feature must be identified and provided with a concise, descriptive label.
• Feature patterns and pattern types must be identified and labelled. The pattern types are linear, circular, polar grid, linear grid, and peripheral.
• The mating components for each feature must be identified. If the mating component is an external component, it must be included and labelled appropriately.
• Critical external components, which influence the design of the component being analysed, must also be included in the connectivity diagram.
• Each feature type has a distinct font and connector style, as shown in Table 1. The appropriate connector style is drawn between the features.
• Transition geometry is included in the model at the discretion of the engineer.

Table 1: Feature summary for connectivity diagrams.
Feature Type | Font | Connector Style | Outline Shape
External component, special | Italicized, Blue | Solid | Oval
External component, standard commercial item | Italicized, Black | Phantom | Oval
Product | Normal, Black | Solid | Rectangle
Process | Bold | Dashed | Rectangle
Assembly | Normal, Red | Solid | Rectangle

Figure 2: Power steering pump pulley.

The connectivity diagram for the power steering pump pulley (Figure 2) case study is illustrated in Figure 3. The features contained in the power steering pump pulley are the:

• Crankshaft mounting bolt hole, A1 (datum -A-)
• Threaded fastener clearance holes, B1 – B3, pattern B_C1
• Threaded fastener clearance holes, C1 – C4, pattern C_C2 (which does not interface with any other component on the engine)
• Locating dowel holes D1 and D2, pattern D_C2
• V groove, V1
• Mounting face, enclosure and blending fillet.
Each feature pattern is identified by an xx_yy label, where xx is the feature label and yy is the pattern label. A common pattern label is used when multiple features are contained in a similar pattern. For this example, the dowel holes D1 and D2 lie in the same bolt circle as the threaded fastener clearance holes C1 – C4; hence the common pattern designation. The power steering pump pulley is joined to a dual groove pulley via two locating holes (D1 and D2) and the mounting face. The dual groove pulley drives the air conditioning compressor and the water pump. This pulley system is fastened to the dampener using three 3/8-24 UNF bolts through holes B1 – B3, and is connected to the crankshaft through the crankshaft mounting hole (A1). (Note: all units are in inches.) The crankshaft is the driver for this system, and the power steering pump is the driven component, using a standard V groove and belt configuration. The enclosure walls are encompassed by the air-conditioning / water pump dual groove pulley.

Figure 3: Power steering pump pulley connectivity diagram.

The design structure matrix (DSM) is employed to evaluate the actual design structure coupling based on the designer's understanding of the functional requirements and the features contained in the component being assessed. The DSM is a project development tool used to illustrate task coupling for individual activities in a matrix format. There are three matrix structures reflecting activity types: activities that occur independently are represented as a parallel structure; activities that occur in a sequence, or have dependencies, are represented as a serial structure; and highly coupled activities, where the parameters are interdependent, are represented as a crossover structure. Here the design structure matrix representation is used to illustrate the physical interconnections of the features within a component: independent features correlate to parallel activities, dependent features (i.e. a boss containing a feature that interfaces with another component) correlate to serial activities, and coupled features correlate to interacting activities.
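A DSM of this kind is easy to represent and query programmatically; the Python sketch below encodes a partial reading of the couplings reported for the case study component and flags crossover (mutually coupled) pairs:

# Sketch: the DSM as a feature-to-feature influence mapping (illustrative
# subset of the couplings; not a complete transcription of Table 2).
influences = {
    "locating holes D1, D2": ["threaded fasteners C1-C4", "V groove V1"],
    "mounting face": ["V groove V1"],
    "fillet": ["mounting face", "enclosure body"],
    "enclosure body": ["V groove V1", "fillet"],
}

def coupled_pairs(dsm):
    """Crossover structures: A influences B and B influences A."""
    return {tuple(sorted((a, b))) for a, bs in dsm.items()
            for b in bs if a in dsm.get(b, [])}

print(coupled_pairs(influences))
# -> {('enclosure body', 'fillet')}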


Coupled and dependent features are sensitive to geometric, material and surface related variations. Understanding this coupling is important when assigning tolerances and when introducing any variations to the original product design. The design structure matrix for the power steering pump pulley is presented in Table 2. The mounting face is influenced by burrs on the clearance holes and by the fillet blending, and in turn it influences the V groove position (through its thickness, flatness and parallelism). The position and orientation of the V groove and the clearance holes are influenced by the locating holes; if the mounting face were thicker, this feature would also influence the holes. The enclosure body supports the V groove and blends into the fillet. The fillet in turn supports the enclosure body and blends into the mounting face. The fillet is included in this analysis as it is a supporting feature and a potential stress point.

Table 2: Design structure matrix for the power steering pump pulley. Rows and columns: crankshaft mounting bore A1; threaded fasteners B1-B3; threaded fasteners C1-C4; locating holes D1, D2; V groove V1; mounting face; enclosure body; fillet. An X marks each feature-to-feature dependency described above.

The features catalogued in the design recovery framework, along with their functions and DPs, the connectivity diagram and the DSM inter-relationships, are used as input into the complexity model.

3 THE COMPLEXITY MODEL

3.1 Introduction
Evaluation of a product's complexity is not as simple as determining the physical characteristics of an object, as each person has a unique perception of complexity. There are highly coupled relationships between the product design, the materials, the manufacturing processes and the support systems. These elements are integrated with activities at all levels of an organization, and capturing a relevant perception of complexity can be problematic. A proper understanding of the nature of complexity is required in order to determine its characteristics and provide an effective relative measure, as the areas of complexity need to be identified before they can be effectively managed [7, 8]. As opposed to creating a specific model, an adaptable framework has been developed by ElMaraghy and Urbanic [4] to assess the product, process and operational complexity elements within a manufacturing process. Although all these elements are interlinked (Figure 4), combining too many facets of manufacturing complexity results in a loss of meaning in the final result. Consequently, a framework was developed to decouple and relink the elements of manufacturing complexity using a systematic, uncomplicated methodology, which can be adapted for use in any design or manufacturing environment. A brief overview is presented here; for a detailed description, refer to ElMaraghy and Urbanic [4].

Figure 4: Coupled component attributes.

Complexity may be, in part, associated with understanding and managing a large volume or quantity of information, as well as a large variety of information. The general manufacturing complexity model introduced by ElMaraghy and Urbanic [4] is a heuristic model that focuses on these elements. The model is composed of three basic components: the absolute quantity of information, the diversity of information, and the information content, the 'relative' measure of effort to achieve the required results (Figure 5).

Figure 5: Elements of complexity. The quantity of information, the diversity of information and the information content ('effort' to produce the desired result) combine to form complexity.

Although the quantity of information is a factor of complexity, the absolute quantity of information may contain much redundancy. Therefore a compression factor, the information entropy measure H, is used to represent the quantity of information element:

H = log2(N + 1)    (1)

where N is the total quantity of information.

The measure of uniqueness, or the diversity ratio DR, is defined as a ratio of distinct information to total information, as given by:

DR = n / N    (2)

where n is the quantity of unique information and N is the total quantity of information.

Information content is defined here as a "relative" measure of effort to achieve the required result, not a measure of the probability of success as per the Axiomatic Design Theory [2]. The higher the effort (i.e. the more required stages or tools), the more complex the feature or task is. Each work environment has a different perception of complexity, but that perception is typically consistent, and the complexity index needs to capture it effectively. To this end, the relative complexity coefficient cj is introduced, and a matrix methodology is used to determine it. This coefficient has a value between 0 and 1, complementing the diversity ratio DR. The method of determining the relative complexity coefficient cj is described in ElMaraghy and Urbanic [4], along with an example. The product complexity analysis is performed independently from any process plan, and focuses on the product features and specifications. The product complexity indices visibly reflect the influences of the feature quantity, variety and the characteristics of the product features. The product complexity index CIproduct is a combination of the diversity ratio and the relative complexity, scaled by the information entropy. This is expressed as:

CIproduct = (DR + cj) * H    (3)
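Equations (1)-(3) are straightforward to express as executable definitions; a minimal Python sketch with illustrative numbers:

# Sketch: the three complexity measures as functions.
import math

def entropy(N):                     # eq. (1): H = log2(N + 1)
    return math.log2(N + 1)

def diversity_ratio(n, N):          # eq. (2): DR = n / N
    return n / N

def ci_product(DR, c_j, H):         # eq. (3): CI = (DR + c_j) * H
    return (DR + c_j) * H

# e.g. N = 7 total and n = 3 distinct pieces of information,
# with an assumed relative complexity coefficient of 0.2:
H = entropy(7)                                    # 3.0
print(H, round(ci_product(diversity_ratio(3, 7), 0.2, H), 3))
# -> 3.0 1.886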

There are three types of complexity to be considered in a manufacturing environment: product complexity, process complexity and operational complexity, and each one flows into the other as shown in Figure 6. Only the product complexity can be assessed within the bounds of the design recovery framework.

3.3 Component and Feature Codes In order to link the design recovery framework to the product complexity index, a complexity code that represents the essential information with representative fields needs to be developed. A feature code, used to generate the feature and component complexity indices, contains information with respect to the feature quantity and variety, its form and structure, and a selection of attributes that influence the complexity (Figure 7). The attributes being considered are: the component material, the feature shape, the pattern placement for a set of features, the tolerances related to the feature, the surface finish and the spatial relationships with respect to the feature – all information contained within the design recovery framework. A factor level is associated with each attribute highlighted with an asterisk (*) in Figure 7. The factor level corresponds to the level of “effort” to produce the feature based on the attribute being considered. A multi-tier ranking system is used where low, medium, and high effort levels correspond to factor levels 0, 0.5 and 1 respectively. Feature Code

Figure 6: Manufacturing complexity cascade. 3.2 Introduction to Coding Methods In order to streamline the complexity analysis for a recovered design, and provide a basis for other tasks such as process planning, a code is introduced to classify the component and its features. Coding methods are employed in classifying parts into part families. Product codes are used with the group technology manufacturing philosophy and computer aided process planning. The product code consists of a set of alphanumeric values each of which represents design attributes. There are three types of code styles: 1. Monocode or hierarchical code, 2. Polycode or attribute code, or 3. Hybrid or mixed code. The monocode system was originally developed for biological classification in the 18th century. Each symbol depends on all of the information provided in the previous digits; hence, resulting in a hierarchical structure. The polycode symbols are independent of each other. Each

1-7 1-12

Spatial Relations

n

Surface Finish

N

Tolerance

Operational Complexity

Pattern

Process related Tasks

Complexity Analysis Attributes

Shape

Product related Tasks

Type

Process Complexity

Feature Related

Material

Environment

Basic Geometry

Volume

Total Number of Features, N Number of Distinct Features, n

Product Complexity

digit in a specific location of the code describes a unique property of the component. Therefore, each code character represents a distinct piece of information, regardless of values in other code positions. The hybrid or mixed coding method combines characteristics of the monocode and polycode systems. The Opitz classification system, widely used in industry for process planning, is an example of a hybrid code. The form is represented in the first five digits, supplementary information that represents the size, material type, raw material shape, and accuracy is contained in the next four digits. An optional four digit secondary code is utilized to identify the production operation type and sequence [9].


Figure 7: Feature codes. The fields are: N, total number of features; n, number of distinct features; basic construction geometry (1-7); feature type (1-12); and the starred complexity analysis attributes material, shape, pattern, tolerance, surface finish and spatial relations.

The total and distinct number of the general feature types, N and n respectively, the basic construction geometry and the general feature type are contained in the feature related fields. A large variety of elements is used in design; however, standard design methods are used to create any given feature. The basic geometry can be modelled as an extrusion, a surface or solid of revolution, a swept or lofted surface or solid, a 'net' or combination of surfaces, a fillet or a blended chamfer/bevel edge. The generic set of feature types, as defined in the design recovery framework database, is presented in Table 3. Certain materials are easier to manipulate than others, based on both the material characteristics (i.e. formability, castability and machinability) and the experience base within the manufacturing environment. The shape or geometry of the feature influences the value of the shape attribute: the more faces and edges within a feature (i.e. a multiple step bore), or the more curve primitives defining an edge (i.e. an irregularly shaped pocket), the higher the effort to produce the feature. The pattern type (i.e. linear or circular grid, mirror pattern,


peripheral pattern), the positional relationships between features and the number of unique features within the pattern dictate the values for the pattern attribute. The effort decreases with the amount of allowable variation in the feature's dimensions and interrelationships. The tighter the tolerances, the more material removal steps are required; this is also true for the surface finish requirements. The geometry of a feature may not be challenging, but its position or orientation may provide a manufacturing challenge, i.e. if features are positioned at an oblique angle, are recessed or undercut, or contain an internal intersection (e.g. oil holes within engine components). This effort is reflected in the spatial relationships attribute. In addition, effort levels associated with fixturing are included in this attribute.

Table 3: Feature complexity code.
Digit 1: N, total number of feature types
Digit 2: n, number of distinct feature types
Digit 3: Feature basic construction geometry: 1 - Extrusion; 2 - Revolved; 3 - Swept; 4 - Loft; 5 - Surface net; 6 - Fillet; 7 - Blended chamfer/bevel edge
Digit 4: Basic feature types: 1 - Clearance features; 2 - Complex features; 3 - Enclosing or container features (cover, o-ring groove, ...); 4 - External protrusion (boss, cooling fin, tab, ...); 5 - Fastening features (threads, rivets, ...); 6 - Free form feature (aesthetic features, contours, 3D fillets, ...); 7 - Locating features (dowels, tongue and groove, ...); 8 - Planar faces (mounting faces); 9 - Precision feature (shaft / hole); 10 - Precision / complex feature (multiple step bore, gear teeth); 11 - Seating features; 12 - Support features
Digit 5: Material: 0 - low effort; 0.5 - medium effort; 1 - high effort
Digit 6: Shape: 0 - low effort; 0.5 - medium effort; 1 - high effort
Digit 7: Pattern: 0 - low effort; 0.5 - medium effort; 1 - high effort
Digit 8: Tolerances: 0 - low effort; 0.5 - medium effort; 1 - high effort
Digit 9: Surface finish: 0 - low effort; 0.5 - medium effort; 1 - high effort
Digit 10: Spatial relationship: 0 - low effort; 0.5 - medium effort; 1 - high effort

Rules have been developed to apply these codes in generating the complexity indices, and are listed below (a worked sketch follows the list).
• Each feature is associated with a feature type. Feature types are clustered to generate a complexity index for the various feature types within the component.
• When assessing the feature complexity, only the information entropy measure H and the relative complexity coefficient cj,feature are used. If there is only one feature for a given feature type, DR will equal one, significantly distorting the feature complexity value.
• The maximum values of the attributes for a set of feature types are used for the complexity analysis.
• The total number of features N for a feature type is multiplied by a factor related to that type prior to calculating the information entropy measure H. This is done because the explicit number of dimensions and geometric modifiers is not being assessed. Typically, there are three dimensions to locate a feature in space. Maximum and minimum values or GD&T dimensions are used to describe the allowable variation of the form and to establish feature interrelationships. There are five basic GD&T categories (form, orientation, location, profile and run out). As a rule, the profile and run out categories are not used simultaneously for a feature, nor are profile and size; therefore, four categories are considered feasible for a 'simple' feature, generating a default information quantity factor of 7. This factor is used for feature types 1, 4, 8 and 12. Other feature types (i.e. threaded fasteners, complex features such as a gear form or a free form feature) contain more information than these 7 basic factors in order to convey the essential manufacturing information. Locating features and precision external features (feature types 7, 9) typically have simple geometry with precision tolerances; therefore, the default factor is set to 8. Fastener features typically include chamfer and thread information; free form features may have sets of specific curvature information; multiple step bores and seating features (i.e. bearings) have additional geometry and specifications; hence, the default factor for these features (types 3, 5, 6, 11) is 10. For complex features (gear teeth, non-standard thread forms and so forth), the default factor is set to 12 (feature types 2 and 10). The feature types and their default factors are presented in Table 5.
• The default factors can be modified based on the feature functions and inter-relationships captured in the DSM; otherwise the default values are used.
• If there are noticeable differences between features that are categorized within the same feature type (i.e. pipe thread and deep hole fastening features), a separate analysis should be performed, as there are unique factors for the features. However, features with similar characteristics (i.e. same hole size, but slightly different depths) should be clustered.
• The feature basic construction geometry is used to determine the shape effort value. A sample is presented in Table 4.
• The average 'relative effort' values for each attribute should be calculated and compared. Attributes that have higher values should be thoroughly reviewed, as the manufacturing challenges increase with higher values.
Using these rules, a feature complexity index, and subsequently a component complexity index, can be quickly extracted from the description codes.
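The per-feature-type calculation implied by these rules can be sketched as follows; the Python function and attribute dictionary are illustrative assumptions, with the example values anticipating the V groove of the case study in section 4:

# Sketch: feature-type complexity entry (H scaled by the type factor;
# c_feature as the average of the six attribute effort levels).
import math

def feature_complexity(N, factor, attributes):
    c_feature = sum(attributes.values()) / len(attributes)
    H = math.log2(N * factor + 1)
    return c_feature, H, H * c_feature   # c, H and CI_feature

# One type-10 feature (default factor 12) with medium effort for
# shape, tolerance and surface finish:
attrs = {"material": 0, "shape": 0.5, "pattern": 0,
         "tolerance": 0.5, "surface finish": 0.5, "spatial": 0}
c, H, CI = feature_complexity(N=1, factor=12, attributes=attrs)
print(round(c, 2), round(H, 3), round(CI, 3))  # -> 0.25 3.7 0.925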

206

Low Effort

Medium Effort

High Effort

(0)

(0.5)

dampener via the dual pulley system. The power steering pump pulley is encased by the dual groove pulley that drives the air conditioning compressor and the water pump; hence, the support-position function. The features, feature functions, and design parameters are presented in Table 6.

(1)

1

Basic, simple, symmetric shape, length: width ratio < 4

Complex, symmetric shape, length: width ratio > 4

Complex, asymmetric shape, draft length: width ratio > 4

2

Simple profile, no helix

Simple profile, helix

Complex profile, helix

3

N/A: a 1D sweep is an extrude

3D, non2D sweep, orthogonal simple sweep or symmetric profile complex profile

Ruled surface / solid

Complex profile sets, multiple sections+ construction Complex profile, geometry is multiple sections required to but moderate create the final amounts or no shape, synchronizing synchronizing geometry is geometry is automatic challenging

4

Feature Number | Feature Types | Factor
1, 4, 8, 12 | 1 - Clearance features; 4 - External protrusion (boss, cooling fin, tab); 8 - Planar faces; 12 - Support features | 7
7, 9 | 7 - Locating features (dowels, tongue and groove); 9 - Precision feature (shaft / bore) | 8
3, 5, 6, 11 | 3 - Container feature; 5 - Fastening features (threads, rivets, …); 6 - Free form feature (aesthetic features, contours, …); 11 - Seating features | 10
2, 10 | 2 - Complex features; 10 - Precision / complex feature (multiple step bore, gear teeth) | 12
Table 5: Default factors used to calculate H for the different feature types.

4 CASE STUDY: POWER STEERING PUMP PULLEY
4.1 Design Recovery Analysis
The power steering pump pulley for a mid-70’s high performance vehicle, shown in Figure 2, is significantly damaged, and cannot be purchased from the original manufacturer. Flexible belt-pulley systems are used to transmit power and motion between widely spaced shafts, or when the driver and driven shafts must rotate at different speeds. This power transmission method is simple, easy to install and maintain, and can be used in a variety of applications. The features to be assessed, the interface conditions and the feature inter-relationships are described in Section 2. The functions performed by the power steering pump pulley are: channel – transfer, couple – join and support – position. The power steering pump pulley channels power and torque from the crankshaft to the power steering pump. The pulley is joined to the crankshaft and harmonic dampener via the dual pulley system. The power steering pump pulley is encased by the dual groove pulley that drives the air conditioning compressor and the water pump; hence, the support-position function. The features, feature functions, and design parameters are presented in Table 6.

Feature | Function | Design Parameters
Through hole: crankshaft mounting bore A1 | Couple - join; Support - position | Diameter; Depth; Clearance tolerance
Through hole: threaded fasteners B1-B3 | Couple - join | Diameter; Depth; Clearance tolerance
Through hole: threaded fasteners C1-C4 | Couple - join | Diameter; Depth; Clearance tolerance
Locating holes D1, D2 | Support - position | Through hole: Diameter; Depth; Roundness; Location tolerance
V groove V1 | Channel - transfer | SAE 440 V groove: established standard design parameters; Parallelism to mounting face
Mounting face | Support - position | Flat base: Flatness; Surface finish
Enclosure body | Support - contain | Enclosing profile: rotationally symmetric; Clearance to work envelope
Fillet | Couple - join; Support | Simple 2D blend, maximum radius to minimize stress concentrations
Table 6: Power steering pump pulley feature – function – design parameter summary.

4.2 Complexity Analysis
The complexity analysis for the power steering pump pulley is shown in Table 7. For the power steering pump pulley, there are six feature sets being considered. As the mounting holes are similar in shape, function and design parameters, these were clustered, where N = 7 total features, and n = 3 to enumerate the distinct types. Each feature is associated with a feature type. The default factors listed in Table 5 are used in this analysis. The original design utilized steel, and rolling and stamping fabrication processes. As only one replacement component is required, the pulley will be made from aluminium billet (6061-T6), and the design modified to suit. The complexity analysis presented here is performed on the adapted design.

The feature codes and relative complexity values c,feature are developed in Table 7 (a) and the feature and component complexity calculations are demonstrated in Table 7 (b). For the component complexity analysis: N = 13 and n = 8. The sum of N × factor = 98, hence H = 6.629. The diversity ratio DR = 0.615 and the relative complexity coefficient c = 0.045. This provides a product complexity index CIproduct = 4.377.
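For reference, the relations these figures follow — inferred here from the tabulated values and consistent with the complexity model of [4], since the full formulas are not reproduced in this section — can be summarised as:

\[ H_{feature,j} = \log_2\!\left(N_j\,f_j + 1\right), \qquad H = \log_2\!\Big(\textstyle\sum_j N_j\,f_j + 1\Big) \]
\[ DR = \frac{\sum_j n_j}{N}, \qquad CI_{feature,j} = H_{feature,j}\;c_{feature,j}, \qquad CI_{product} = (DR + c)\,H \]

where N_j and n_j are the feature count and distinct types of feature set j, f_j is its default factor from Table 5, and c is the relative complexity coefficient (the weighted sum of the feature efforts, whose weighting is not fully reproducible from this excerpt). As a check, (0.615 + 0.045) × 6.629 ≈ 4.38, in line with the reported CIproduct = 4.377.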


Feature Label | Feature Type | N | n | Type | Basic | Material | Shape | Pattern | Tolerance | Surface Finish | Spatial Relations | Sum of Fields 5-10 | Average of Fields 5-10
V groove | Precision features | 1 | 1 | 2 | 10 | 0 | 0.5 | 0 | 0.5 | 0.5 | 0 | 1.5 | 0.25
Mounting faces | Planar surfaces | 1 | 1 | 2 | 8 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0.5 | 0.08
Mounting holes | Clearance feature | 7 | 3 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00
Support body | Container feature | 1 | 1 | 1 | 12 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0.5 | 0.08
Locating holes | Precision feature | 2 | 1 | 1 | 7 | 0 | 0 | 0 | 0.5 | 0 | 0.5 | 1 | 0.17
Blending fillet | Fillet | 1 | 1 | 6 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00
Table 7 (a): Feature complexity analysis.

Feature Label | Feature Type | Factor | N × factor | H, feature | c, feature | CI feature = H × c, feature | Weighted c, feature
V groove | Precision features | 12 | 12 | 3.700 | 0.25 | 0.925 | 0.019
Mounting faces | Planar surfaces | 7 | 7 | 3.000 | 0.08 | 0.250 | 0.006
Mounting holes | Clearance feature | 7 | 49 | 5.644 | 0.00 | 0.000 | 0.000
Support body | Container feature | 7 | 7 | 3.000 | 0.08 | 0.250 | 0.006
Locating holes | Precision feature | 8 | 16 | 4.087 | 0.17 | 0.681 | 0.013
Blending fillet | Fillet | 7 | 7 | 3.000 | 0.00 | 0.000 | 0.000
Sum | | | 98 | H = 6.629 | DR, part = (Sum of n)/N = 0.615 | | c = 0.045; Complexity product CIproduct = 4.377
Table 7 (b): Feature and component complexity calculations.

The average attribute factors, which are associated with the effort for a specific attribute, are plotted in Figure 7. The effort associated with producing the product to the required shape, tolerances, surface finish and spatial relations is low to moderate (0.17, 0.33, 0.08 and 0.17, respectively). No other attributes are a concern.
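A compact way to reproduce the Table 7 (b) calculations is sketched below in Python (an illustration, not the authors' code, assuming the log2-based measure inferred above; the weighting behind the coefficient c = 0.045 is not derivable from this excerpt and is taken as given):

    from math import log2

    # (label, count N_j, default factor f_j, average effort c_j) from Tables 7 (a)/(b)
    features = [
        ("V groove",        1, 12, 0.25),
        ("Mounting faces",  1,  7, 0.08),
        ("Mounting holes",  7,  7, 0.00),
        ("Support body",    1,  7, 0.08),
        ("Locating holes",  2,  8, 0.17),
        ("Blending fillet", 1,  7, 0.00),
    ]
    distinct_types = [1, 1, 3, 1, 1, 1]  # n_j per feature set; sums to n = 8

    N = sum(Nj for _, Nj, _, _ in features)                  # 13 features in total
    DR = sum(distinct_types) / N                             # diversity ratio 8/13 = 0.615
    H = log2(sum(Nj * fj for _, Nj, fj, _ in features) + 1)  # log2(98 + 1) = 6.629

    for label, Nj, fj, cj in features:
        Hj = log2(Nj * fj + 1)            # per-feature entropy, e.g. 3.700 for the V groove
        print(label, round(Hj, 3), round(Hj * cj, 3))        # H_feature and CI_feature

    c_rel = 0.045                         # relative complexity coefficient, per Table 7 (b)
    print(round((DR + c_rel) * H, 3))     # ≈ 4.378, matching the reported 4.377 to rounding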

Figure 7: Relative effort comparison for each attribute.

4.3 Redesign
Further redesign was performed on this pulley. There is no air conditioning in this vehicle, and there is no apparent use for the bolt holes C1–C4. It is speculated that these pulleys were used on multiple engine families. Based on this, it was determined to redesign and manufacture a pulley system appropriate for this vehicle, as an alternative material and manufacturing processes needed to be considered anyway. The modular nature of the design recovery framework allows the inclusion of ancillary components with minimal adjustment. The essential information is collected for all related components (water pump / air conditioning pulley and dampener). Information with respect to the water pump pulley groove V2 and the mounting hole A1 must be added. The cross section of the water pump V belt is identical to the power steering pump; hence, both grooves must conform to a standard SAE 440 type. Information with respect to the C1–C4 bolt holes and the locating features D1 and D2 on the respective pulleys is eliminated, as these features serve no function. The enclosure is of no concern, but an appropriate body to support the grooves must be developed, along with an applicable material. Minor changes have been made to the mounting holes, i.e., chamfers have been added to the fastening clearance holes. A short (1/4 inch) internal cylindrical feature is added at the lip for locating purposes. The final part is illustrated in Figure 8.

Figure 8: New pulley to drive the water pump and power steering pump – CAD model and machined part.

For the modified pulley design complexity analysis, the default factors are adjusted. There are fewer inter-feature relationships to be considered, i.e., the top of the mounting face is not a mounting interface, and the mounting holes are not related to any intermediate location geometry; therefore, the factor values for these features are reduced (bolded in Table 8 (b)). The factor for the V grooves was not adjusted, as the profile complexity and feature inter-relations are not reduced, although the assembly is being replaced by a single component.

Feature Label | Feature Type | N | n | Type | Basic | Material | Shape | Pattern | Tolerance | Surface Finish | Spatial Relations | Sum of Fields 5-10 | Average of Fields 5-10
V groove | Precision features | 2 | 1 | 2 | 10 | 0 | 0.5 | 0 | 0.5 | 0.5 | 0 | 1.5 | 0.25
Mounting faces | Planar surfaces | 1 | 1 | 2 | 8 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0.5 | 0.08
Mounting holes | Clearance feature | 4 | 2 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00
Support body | Container feature | 1 | 1 | 1 | 12 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0.5 | 0.08
Blending fillet | Fillet | 1 | 1 | 6 | 12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.00
Table 8 (a): Updated design feature complexity analysis.

Feature Label | Feature Type | Factor | N × factor | H, feature | c, feature | CI feature = H × c, feature | Weighted c, feature
V groove | Precision features | 12 | 24 | 4.644 | 0.25 | 1.161 | 0.028
Mounting faces | Planar surfaces | 5 (7) | 5 | 2.585 | 0.08 | 0.250 | 0.009
Mounting holes | Clearance feature | 6 (7) | 24 | 4.644 | 0.00 | 0.000 | 0.000
Support body | Container feature | 7 | 7 | 3.000 | 0.08 | 0.250 | 0.009
Blending fillet | Fillet | 7 | 7 | 3.000 | 0.00 | 0.000 | 0.000
Sum | | | 67 | H = 6.087 | DR, part = (Sum of n)/N = 0.667 | | c = 0.046; Complexity product CIproduct = 4.340
Table 8 (b): Updated design feature and component complexity calculations.

For the redesigned component complexity analysis: N = 9 and n = 6. There are fewer features overall, but a greater variety; hence the diversity ratio DR = 0.667. The sum of N × factor = 67, hence H = 6.087. For the new design, the overall effort to fabricate this part is approximately equivalent, as the relative complexity coefficient = 0.046. However, the overall product complexity index for the redesigned part is 4.340, which is slightly less than that of the original design due to the reduced number of features and factor multipliers. The average attribute factors are plotted in Figure 9. The shape, tolerance and surface finish attributes require the most attention.

[Figure 9 plots the averaged factors (0 to 0.8) of the Original and Modified designs across the attributes Material, Shape, Pattern, Tolerance, Surface Finish and Spatial Relations.]

Figure 9: Relative effort comparisons for each attribute.

5 SUMMARY AND CONCLUSIONS
For effective design recovery of an engineered component, the form, functions and features must be reconstructed effectively. A comprehensive, modular, multi-perspective framework was developed to assist with data collection and its transformation into relevant design knowledge. To assist with the analysis of a recovered design and potential redesign alternatives, an adaptation of the manufacturing complexity assessment methodology [4] is presented to assess the product complexity. Using the design recovery framework information, the connectivity diagram and the DSM, along with a structured set of attributes and feature-function factors, a product complexity value can be quickly determined for comparative purposes. Information with respect to the features and attributes is isolated, and can be presented in a graphical manner to highlight the critical

characteristics. Conditions may exist where the recovered design needs to be modified before the component can be remanufactured due to new design constraints, as shown by the case study. These structured tools and this systematic approach can be used to graphically and “mathematically” show the trade-offs for each important criterion. The attributes and rules within the framework can be adapted for a particular environment. To conclude, people with diverse backgrounds are able to rapidly evaluate alternatives and risks with respect to a reconstructed product’s attributes using these tools.

6 REFERENCES
[1] Miles, L., 1989, Techniques of Value Analysis and Engineering, 3rd ed., McGraw Hill.
[2] Suh, N. P., 2001, Axiomatic Design: Advances and Applications, Oxford University Press.
[3] Altshuller, G., 1997, 40 Principles: TRIZ Keys to Technical Innovation, translated by L. Shulyak and S. Rodman, Worcester, Massachusetts: Technical Innovation Center, ISBN 0964074036, USA.
[4] ElMaraghy, W. H., Urbanic, R. J., 2003, Modelling of Manufacturing Systems Complexity, Annals of the CIRP, 53/1: 363-366.
[5] Hirtz, J., Stone, R., McAdams, D., Szykman, S., Wood, K., 2002, A Functional Basis for Engineering Design: Reconciling and Evolving Previous Efforts, Research in Eng. Design, 13/2: 65-82.
[6] Urbanic, R. J., ElMaraghy, W. H., 2007, A Design Recovery Framework for Mechanical Components, Journal of Eng. Design, CJEN-2007-0082 (in press).
[7] Corbett, L. M., Brocklesby, J., Campbell-Hunt, C., 2002, Thinking and Acting: Complexity Management for a Sustainable Business, 2nd International Conference of the Manufacturing Complexity Network: 83-96.
[8] Bainbridge, A. F., 2002, Making Things Simpler: Management in Complexity, 2nd International Conference of the Manufacturing Complexity Network: 403-410.
[9] Groover, M. P., 2001, Automation, Production Systems, and Computer-Integrated Manufacturing, Prentice Hall, NJ.


Invited Paper

A Study on Process Description Method for DFM Using Ontology
K. Hiekata 1, H. Yamato 2
1 Department of Systems Innovation, Graduate School of Engineering, The University of Tokyo, 5-1-5, Kashiwanoha, Kashiwa-city, Chiba 277-8563, Japan
2 Department of Human and Engineered Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo, 5-1-5, Kashiwanoha, Kashiwa-city, Chiba 277-8563, Japan
[email protected], [email protected]

Abstract
A method to describe process and knowledge is proposed, based on RDF, an ontology description language, and IDEF0, a formal process description format. Once the knowledge of experienced engineers has been embedded into a computer system, it is no longer transferred to young engineers and will be lost in the future. In the proposed format, which is similar to a BOM, a production process is described and can be retrieved as a flow diagram to help engineers understand the product and process. The proposed method is applied to a simple production process for a common sub-assembly of ships and evaluated.

Keywords: Production process, DFM, Ontology, Computer system

1 INTRODUCTION
There are many research projects and computer programs supporting the optimization of production processes in shipyards, and the knowledge of experienced engineers is embedded into these systems. Once the knowledge is embedded into a computer system, it is not transferred to young engineers and will be lost in the future. Furthermore, in shipyards the initial design department is separated from production design and the fabrication shops, so engineers in the initial design department do not know the production process very well. A new system for describing knowledge, especially knowledge about production, is therefore needed. Among research work on knowledge management in industry, the New Energy and Industrial Technology Development Organization's 'Digital Meister Project' was a project exploring this field [1], and a knowledge base system to transfer such knowledge has been discussed [2]. In this paper, a method to describe process and knowledge in a computer system is proposed. The method is based on RDF (Resource Description Framework) [3], one of the ontology description languages, and IDEF0 [4], a formal process description format. The production process is described in a proposed format similar to a BOM (bill of materials) system [5]. The production process can be edited through an interactive user interface to design the product and production process, and can be retrieved as a flow diagram to help engineers understand the product and production process. The proposed method is applied to a simple production process for a common sub-assembly of a ship and evaluated.

2 METHODOLOGY FOR DESCRIBING PRODUCTION PROCESS

2.1 RDF and RDF Schema
RDF is an ontology language and a framework for describing metadata for objects. Metadata is data that explains an object; for example, the author, date and publisher are metadata for a book. In the RDF format, a URI (Uniform Resource Identifier) is assigned to every object, and metadata is defined for all objects using these URIs. Metadata is defined as a statement with a subject, a predicate and an object; the statement is called a triple in RDF. Two kinds of elements, Resources and Literals, can be components of a statement. Resources are objects with URIs, and can be the subject, predicate or object of an RDF statement. Literals are plain text and can be used only as objects. Complicated products or processes can be described by iterating the definition of resources and literals. RDF is usually stored in XML or another format, and is visualised as an RDF graph, in which the subject, predicate and object are shown as in Figure 1. A sample RDF graph is illustrated in Figure 2. The RDF graph indicates that the “creator” of “http://www.nakl.t.u-tokyo.ac.jp/” is “Kazuo Hiekata”, that the date of the website is “2006-04-13” and that updated information is at “http://www.nakl.t.u-tokyo.ac.jp/update.html”. The prefixes “dc” and “ex” used in Figure 2 are XML namespaces defined in other files or websites; “dc” is a standard vocabulary defined by the DCMI (Dublin Core Metadata Initiative) [6]. Figure 3 shows the RDF graph of Figure 2 in XML format. The remaining RDF expressions in this paper are illustrated as RDF graphs, because the graphs are easier to understand.

Figure 1: Simple RDF graph (Subject → Predicate → Object).

Figure 2: Example of RDF graph. The Resource http://www.nakl.t.u-tokyo.ac.jp/ has the properties dc:creator “Kazuo Hiekata” (Literal), dc:date “2006-04-13” (Literal) and ex:updateInfo http://www.nakl.t.u-tokyo.ac.jp/update.html (Resource).

Figure 3: Example of RDF in XML file.
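The triples of Figures 2 and 3 can also be constructed programmatically. The following minimal sketch uses the Python rdflib library — an illustrative assumption, since the authors' implementation is based on Jena [9][10] — to build the Figure 2 graph and serialize it to RDF/XML as in Figure 3; the namespace URI behind the “ex” prefix is a hypothetical placeholder:

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import DC

    EX = Namespace("http://example.org/ns#")  # placeholder for the paper's "ex" namespace
    site = URIRef("http://www.nakl.t.u-tokyo.ac.jp/")

    g = Graph()
    g.bind("dc", DC)
    g.bind("ex", EX)
    # One (subject, predicate, object) triple per statement, as in Figure 2
    g.add((site, DC.creator, Literal("Kazuo Hiekata")))
    g.add((site, DC.date, Literal("2006-04-13")))
    g.add((site, EX.updateInfo, URIRef("http://www.nakl.t.u-tokyo.ac.jp/update.html")))

    print(g.serialize(format="pretty-xml"))  # RDF/XML output comparable to Figure 3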

RDF schema is introduced next [7]. Metadata can be defined by RDF, but meta-languages are needed to define vocabularies and data schemas for RDF. Resources in RDF are described using the class and instance concept, as in object-oriented programming. To create models of the real world using RDF, the vocabularies that describe the objects must be defined by RDF schema. Classes are defined by RDF schema and the relationships between instances are described by RDF. By defining new vocabularies based on RDF schema, RDF can be applied to any specific domain or application.

2.2 Process description based on RDF and RDF Schema
The production process for the building blocks of a ship is taken up in this paper. The production process is described in RDF metadata using vocabularies newly defined by RDF schema. At first, the vocabularies required to describe the production process are defined; the production process is then presented in a formal manner using these vocabularies. To describe the process, parts and materials, “Operation”, “Assembly” and “Primitive” are defined by RDF schema, and “duration”, “input” and “output” are defined for relationships between the resources. The vocabularies are shown in Table 1. “Operation” is a process in shipyards, and instances of “Operation” require clear input materials and/or parts and an output assembly. To define this vocabulary, “Operation”, “input” and “output” must be defined: “input” can be the predicate of an RDF statement if the subject is an instance of “Operation”, and “output” can be the object of an RDF statement. “duration” is also defined for instances of “Operation”; it must be a Literal and is assumed to be a dimensionless number in this paper. “Primitive” is a material which cannot be divided into multiple parts, or a part whose components design engineers do not need to consider. “Assembly” is a member which has one or more instances of “Primitive” and “Assembly”. Part of the definition is shown in Figure 4. The first part is the definition of the “Assembly” class and the second part is the definition of “duration”. In the definition of “duration”, a restriction is defined by RDF schema: “rdfs:domain” indicates that “duration” must be a property of the specified class, so duration must be defined for instances of the “Operation” class, and “rdfs:range” specifies the value of the property. The value of “rdfs:range” is Literal, so “duration” must be a text string giving the man hours of a specified instance of the “Operation” class. The definition by RDF schema strictly defines each term and is useful for formal description. Processes and products/intermediate products can be described using these vocabularies. This description method is based on the concept of IDEF0. A basic process and product description using the six defined vocabularies is shown in Figure 5. Figure 5 illustrates joining two steel plates to create a skin plate; the duration of the joining process is 10. The left side of the figure is the definition of the basic members; the right side shows the process and parts. The two “Steel plate” resources are instances of the Primitive class, and “Joining plates” is a process with two “input” relations that creates “Skin plate”, an instance of the “Assembly” class. This schema is applied to the entire production process.

Figure 4: Definition of vocabularies by RDF schema (excerpt: “Assembly” — the class of assemblies of primitives; “duration” — standard duration of an operation).

Figure 5: RDF graph for process description (instances “Steel plate(1)” and “Steel plate(2)” are inputs of the “Joining plates” operation, duration 10, whose output is the “Skin plate” assembly).
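To make the vocabulary definitions concrete, a small companion sketch (same illustrative rdflib setting as above; the shipyard namespace is again a hypothetical placeholder) declares the three classes, constrains “duration” with rdfs:domain and rdfs:range, and encodes the skin-plate example of Figure 5:

    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    SY = Namespace("http://example.org/shipyard#")  # hypothetical vocabulary namespace
    g = Graph()

    # Vocabulary definition (cf. Figure 4 and Table 1)
    for cls in (SY.Operation, SY.Primitive, SY.Assembly):
        g.add((cls, RDF.type, RDFS.Class))
    g.add((SY.duration, RDFS.domain, SY.Operation))  # duration applies to Operations
    g.add((SY.duration, RDFS.range, RDFS.Literal))   # and its value is a Literal

    # Instance data: joining two steel plates into a skin plate (cf. Figure 5)
    g.add((SY.SteelPlate1, RDF.type, SY.Primitive))
    g.add((SY.SteelPlate2, RDF.type, SY.Primitive))
    g.add((SY.SkinPlate, RDF.type, SY.Assembly))
    g.add((SY.JoiningPlates, RDF.type, SY.Operation))
    g.add((SY.JoiningPlates, SY.input, SY.SteelPlate1))
    g.add((SY.JoiningPlates, SY.input, SY.SteelPlate2))
    g.add((SY.JoiningPlates, SY.output, SY.SkinPlate))
    g.add((SY.JoiningPlates, SY.duration, Literal(10)))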


Name | Description
Operation | Class for operations in the fabrication shop
Primitive | Class for parts or materials which cannot be divided into sub-components
Assembly | Class for parts consisting of one or more “Primitive”
duration | Property describing the man hours of a specified “Operation”
input | Property describing an input “Primitive” or “Assembly” for a specified “Operation”
output | Property describing the output “Assembly” for a specified “Operation”
Table 1: List of vocabularies for describing production process.

3 SOFTWARE SYSTEM

3.1 System overview
A prototype system was implemented to evaluate the proposed production process description method. The prototype was developed on the ShareFast system, an open source client/server document management system based on workflow [8]. The ShareFast system manages document files using RDF metadata based on the Jena framework [9][10]. The client program is developed in C#, and the server program is developed in Java based on servlet technology. Metadata on the server can be accessed through several servlet interfaces. The server has an RDF model management component, and the client program accesses the metadata on the server through this component via several interfaces. Figure 6 shows the system overview.

Figure 6: System overview (client with flow diagram generator; server with RDF model management; process and product description in RDF format).

Figure 7 shows the main window of the client program. The user can edit the relationships between the production process and parts using the left side of the program, where instances of the “Operation”, “Primitive” and “Assembly” classes are shown in a tree view. The system generates a flow diagram for any “Assembly” node in the tree view; the generated flow diagram is shown on the right side of the program.

Figure 7: Interface of client program.

3.2 Assembly definition
The proposed process description method consists of an iteration of “Assembly” definitions. Figure 8 shows the input dialog box for defining “Assembly” instances. This dialog is opened via the right-click menu of the tree view of the client program. The required fields of this dialog box are the name of the assembly, the parts list of the assembly, and the name and duration of the operation. The engineer is required to fill in the names, and the server then creates instances in the metadata database on the server by assigning URIs to the defined objects. Figure 8 shows the same process and parts as Figure 5.

Figure 8: Assembly definition dialog.

Figure 9 shows the same production process and parts in a tree view of the client program. The output assembly is located at the parent node, and the operation and input parts are placed as children of the output assembly, to make the relationship between the objects easy to understand. The concept of a BOM system is introduced in this user interface.

Figure 9: Representation in BOM tree view (output assembly → operation → input parts (assembly and/or primitive)).


3.3 Production process editing
After defining assemblies, design/production engineers have to organize the whole product. When the user edits the parts (“Assembly” and “Primitive”) hierarchy, the production process is also changed simultaneously, because each assembly carries process information as an instance of the “Operation” class. The client program provides a drag-and-drop feature to edit the hierarchy, the same as other common software. A drag-and-drop operation sends a request to the server to update the metadata. Figure 10 shows the hierarchy of defined assemblies, and then the assemblies being organized into one assembly.

Figure 10: Production process editing (the BOM hierarchy is edited by drag-and-drop operations).

3.4 Generating flow diagram
Production process and parts information are stored in RDF format. The data holds the parts list of each assembly and the dependency information of the parts. Using the information in the tree view, a flow diagram can be generated; the hierarchy of parts on the right side of Figure 10 is converted to the flow diagram shown in Figure 11. In this process, four assemblies are fabricated sequentially, so the flow diagram has no branches and a single track. The duration defined in each assembly definition is placed as an annotation in the flow diagram, and the total duration and the duration of the critical path are calculated automatically by the system and shown in the diagram. With this feature, engineers can obtain a flow diagram with durations just after planning the production process in a graphical user interface similar to a BOM system.
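The total duration and critical path shown in the diagram follow directly from the recursive structure of the BOM tree. A minimal sketch of that calculation (illustrative Python over a hypothetical data structure, not the authors' code; the data encodes case (b) of the experiment in Section 4):

    # Each assembly maps to (operation duration, list of input parts);
    # primitives are plain strings that never appear as keys.
    process = {
        "Panel": (40, ["Egg box", "Base plate"]),      # fabricating egg box and panel
        "Egg box": (15, ["Transverse", "Longitudinals"]),
        "Base plate": (10, ["Steel plate(1)", "Steel plate(2)"]),  # joining plates
    }

    def total_duration(part):
        # Sum of all operation durations in the sub-tree
        if part not in process:
            return 0  # a primitive requires no operation
        duration, inputs = process[part]
        return duration + sum(total_duration(p) for p in inputs)

    def critical_path(part):
        # Longest chain of sequential operations needed to produce the part
        if part not in process:
            return 0
        duration, inputs = process[part]
        return duration + max((critical_path(p) for p in inputs), default=0)

    print(total_duration("Panel"))  # 65, the total duration reported for case (b)
    print(critical_path("Panel"))   # 55, the critical path reported for case (b)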

4 EXPERIMENT
4.1 Production process for experiment
To evaluate the proposed method, a case study of planning a sample production process is described with this system. The process is illustrated in Figure 12. A plate, four longitudinals and a transverse member are fabricated into a panel in this process. Two steel plates are joined to form the base plate. Several approaches can be applied to fabricate the longitudinals and transverse members; two production processes for the same panel are described in this case study, as shown in Figure 12. A duration is assigned to each process and shown in the figure on each arc. In case (a) in Figure 12, large slits are made in the transverse member to weld the transverse member and longitudinals. The transverse member and longitudinals are easier to weld this way, but the slits must be closed after fabrication, so collar plates must be welded. The whole process is thus as follows:
1. Joining plates (10).
2. Welding longitudinals to a base plate (20).
3. Welding a transverse member to a base plate (20).
4. Welding collar plates to the slits in the transverse member (20).
In case (b) in Figure 12, a transverse member and longitudinals are fabricated into an egg box prior to being welded to the base plate, so the slits in the transverse member are not needed. But welding the egg box to the base plate is a complicated operation. The process in this case is as follows:
1. Joining plates (10).
2. Fabricating an egg box (15).
3. Fabricating the egg box and the base plate (40).

Figure 12: Sample panel production process — (a) Joining plates (10) → Welding longitudinals (20) → Welding transverse (20) → Welding collar plates (20); (b) Joining plates (10) and Fabricating egg box (15) → Fabricating egg box and panel (40).

Figure 11: Flow diagram generated by the proposed system.

4.2 Description results
The two cases shown in Figure 12 are described with the proposed system. The hierarchy of parts is shown in Figure 13. In case (a), all the intermediate products, which are instances of the “Assembly” class, are inputs of the next operation, so the four operations must be performed sequentially. In case (b), by contrast, two assemblies are fabricated in the first step, and those two assemblies are joined afterwards. The depth of the hierarchy is smaller than in case (a).

Figure 13: BOM tree for the sample panel production process (cases (a) and (b)).

4.3 Results of flow diagram generation
Flow diagrams for these two cases are generated by the proposed program. The diagrams are shown in Figure 14. The flow diagram of case (a) is straightforward and there are no branches, as explained in Section 4.1; the total duration is 70 and the duration of the critical path is also 70. As for case (b), the flow diagram has a branch, and “Fabricating egg box” and “Joining plates” can be performed simultaneously; the total duration is 65 and the duration of the critical path is 55. The flow diagram can be generated from the result of editing the BOM tree.

Figure 14: Flow diagrams generated by the system (cases (a) and (b)).

4.4 Summary
Two cases can be described by this software, and the translation from a BOM tree to a flow diagram is demonstrated. The tree view holds the dependency information of the processes and the duration of each task. Engineers can obtain the flow diagram information and the duration of the product when planning the production process.

5 DISCUSSION
The proposed method introduces only three classes and three properties to describe the production process. This simple data schema can describe the dependencies of assemblies as well as basic process information. In this case study, very limited information is stored on the server and the generated flow diagram is not very large, but the case study shows that editing the data of the product configuration changes the flow diagram of the production process. This mechanism, providing multiple views of a single data set, is evaluated in this experiment. Using this system, engineers can create parts lists in the tree view. UML [11] and IDEF are useful, but these methodologies are in most cases too difficult for engineers to use to describe the production process. A BOM system is familiar to production process engineers, so the system adopts a graphical user interface similar to a BOM system; engineers are expected to use the software by themselves. The proposed system can describe the dependency relationships between production processes and parts in a single database, thanks to the ontology based on RDF and RDF schema. This integrated description format can be extended to many other applications.

6 CONCLUSION
A method for describing production processes based on RDF schema is proposed, and a computer program for describing the production process has been developed. A sample production process from shipbuilding is described by the system, demonstrating that a BOM tree with process


information can be translated to a flow diagram. The system tells the users the total duration and the critical path.

7 FUTURE WORK
The prototype system must be applied to a large production process to evaluate whether the system is useful for detecting problems in production processes. Visualization of a BOM tree and a flow diagram might be useful, but must be evaluated in a real production design process. The data is stored in RDF format; more complicated and detailed modelling of production processes and parts can be achieved by defining other properties with RDF schema. As URIs are assigned to all the parts in the products, this system can work with existing information systems if appropriate properties related to CAD or BOM systems are defined by RDF schema.

8 ACKNOWLEDGEMENTS
This work was supported by Grant-in-Aid for Young Scientists (B) No. 20760556. We would like to thank the engineers in the shipyards for their valuable discussions.

9 REFERENCES
[1] New Energy and Industrial Technology Development Organization, Japan, 2005, Product Fabrication and IT Integration Promotion Technology (Digital Meister Project), NEDO.
[2] Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., Van de Velde, W., Wielinga, B., 2000,

Knowledge Engineering and Management: The CommonKADS Methodology, The MIT Press.
[3] Klyne, G., Carroll, J. (eds.), 2004, Resource Description Framework (RDF): Concepts and Abstract Syntax, W3C Recommendation, http://www.w3.org/TR/rdf-concepts/.
[4] National Institute of Standards and Technology, 1993, Integration Definition for Function Modeling (IDEF0).
[5] Garwood, D., 1997, Bills of Material, Dogwood.
[6] DCMI (Dublin Core Metadata Initiative), 2003, Dublin Core Metadata Element Set, Version 1.1: Reference Description, DCMI Recommendation, http://dublincore.org/documents/dces/.
[7] Brickley, D., Guha, R., 2004, RDF Vocabulary Description Language 1.0: RDF Schema, http://www.w3.org/TR/rdf-schema/.
[8] Hiekata, K., Yamato, H., Oishi, W., Nakazawa, T., 2007, Ship Design Workflow Management by ShareFast, Journal of Ship Production, 23(1): 23-29.
[9] Hewlett-Packard Development Company, 2005, Jena 2 - A Semantic Web Framework, Hewlett-Packard Development Company.
[10] McBride, B., 2001, Jena: Implementing the RDF Model and Syntax Specification, Proceedings of the Second International Workshop on the Semantic Web.
[11] Booch, G., Rumbaugh, J., Jacobson, I., 2005, The Unified Modeling Language User Guide, 2nd Edition, Addison-Wesley Professional.


The Use of DfE Rules During the Conceptual Design Phase of a Product to Give a Quantitative Environmental Evaluation to Designers
H. Alhomsi, P. Zwolinski
G-SCOP Laboratory, 46 av Félix Viallet, 38031 Grenoble, France
[email protected], [email protected]

Abstract
In order to help designers to understand and translate the environmental constraints into effective actions, methods and tools have to be developed to enable the generation of more environmentally benign design alternatives according to Design for Environment rules. This article explains how to use DfE rules earlier, during the conceptual design phase, when designers do not have simple quantitative tools or methods to evaluate their products. Two main actions have been realised: 1) identifying which kinds of rules can be applied when designers only have a functional representation of their product; 2) creating the necessary indicators to evaluate these rules depending on designers’ choices.
Keywords: Environment, Conceptual design phase, DfE rules, quantitative environmental evaluation

1 INTRODUCTION
Product design decisions have a significant impact on the environment all along the product life cycle. The main actor in these decisions is the designer, who can significantly improve the product’s environmental impact by considering Design for Environment (DfE) rules during the design process. These rules have to be integrated specifically during the conceptual design phase [1], when the designer still has the ability to easily modify the product while considering the environmental exigencies and the functional representation of the product [2]. But because of the nature of the functional representation [3], which does not provide accurate data and detailed information about the final product, the implementation of DfE rules is not easy. To help the designer to consider DfE rules earlier during the design process, a tool is proposed that consists of a list of DfE rules usable during the conceptual design phase. In the next sections, the recommended approach for using DfE rules earlier is presented, with the rules that can be used. The model and indicators that contribute to the evaluation are also presented. Finally, a case study is presented to show how these elements can be used.

2 THE SUGGESTED APPROACH TO REALLY CONSIDER DFE RULES DURING THE DESIGN PROCESS
2.1. The current approach to consider DfE rules during the design process
Much research is concerned with the integration of the environmental constraint during product design and the design of its related processes [4]. This has led to the development of new design methods based on consideration of the entire product lifecycle to reduce environmental impacts [5]. A view of the current approach is presented in Figure 1. In the early design phases, designers can use guidelines such as the Ecodesign Pilot [6] to be guided in their choices. These guidelines are well adapted to the conceptual design phase but are not systematically used, because they do not return usable quantitative indicators that could be analysed and compared with other design indicators at this stage of the design project.

Figure 1: The current approach to design environmentally friendly products (design process: conceptual design → embodiment design → detailed design; guidelines (preliminary design) in the early phases; DfE & LCA evaluation after detailed design — satisfied: continue; not satisfied: optimisation or return to earlier phases).


So, the environmental impact of the product is mainly considered during the detailed design phase. At this stage, the necessary data are available (components, weights, materials, joining techniques, manufacturing processes…), Design for Environment (DfE) tools can be applied, and a Life Cycle Analysis (LCA) can be carried out [7][8]. After these analyses, the designers validate whether or not the product satisfies the environmental requirements. If the requirements are satisfied, the design process continues. If not, the solution has to be reconsidered: with minor modifications for an optimization, or with major modifications in the conceptual design phase, which leads to a large waste of time.

This is necessary to avoid significant modifications at the end of the detailed design phase. So, in this article some rules that can be used during the conceptual design phase are presented, and it is explained how they can be used. A focus is placed on the chosen product model that supports the evaluation process and on the different indicators created in relation to the selected DfE rules.

2.2. The proposed approach to consider DfE rules during the design process
In order to help the designer to optimize the environmental point of view during the design and to minimize the duration of the design project, a simple environmental evaluation of the product has been defined that is carried out during the conceptual design phase and that can be extended until the detailed design phase (Figure 2). Our objective is to propose an evaluation tool that first uses the elements of the functional representation to return simple environmental indicators to the designer [9]. Then, at each step of the design process, an evaluation of the environmental requirements can be conducted to avoid overly large trial-and-error loops during the design. The developed indicators are not related to an environmental impact assessment such as an LCA, because of the lack of product data at this stage. But a first estimation of the product’s environmental profile is possible by applying DfE rules. The objective is not to replace LCA at the end of the detailed design phase; it is to guide the designer earlier toward a good compromise for the product by simple estimations.

The objective of this work is to apply DfE rules earlier during the design process, when designers need a quantitative tool to evaluate their propositions [10][11]. To do that, DfE rules have been selected and classified to be applied to a first definition of the product, that is, to the functional representation of the product [12].

Figure 2: The proposed approach to design environmentally friendly products (the design process of Figure 1 complemented by an evaluation tool that returns indicators, alongside the guidelines, from the conceptual design phase through embodiment and detailed design, before the DfE & LCA analyses).

3. DFE RULES TO BE USED DURING THE CONCEPTUAL DESIGN PHASE
3.1. DfE Rules classification
A classification of the DfE rules has been proposed according to the product lifecycle, which is defined by five main life cycle phases: raw materials, production, transportation, usage and, finally, the disposal phase [6]. So there are:
• Rules to choose the right materials.
• Rules to improve production processes.
• Rules to reduce the transportation.
• Rules to improve the use of the product.
• Rules to increase product durability.

Main group | Sub-group rules
To choose the right materials | To select material; To save material for a component
To improve production processes | To save material for a component during the production processes; To save energy during the production processes; To improve the assemblability of components (product assembly)
To reduce the transportation | To improve the packaging; To improve the transportation
To improve the use of the product | To improve the maintenance; To optimise the product functionality; To optimise the energy consumption in the use phase; To reduce waste in the use phase
To increase product durability | To improve the disassembly; To improve the remanufacturability; To improve the recyclability
Table (1): Main groups and sub-groups for the DfE rules.


For each DfE rule group there are sub-groups related to specific technical points, to guide designers more precisely during the product definition. Indeed, the rules presented in these sub-groups can guide the designers to integrate environmental constraints, but also other concepts, in parallel with the product design development (remanufacturing, disassembly, maintenance, …). They are defined according to practical design guides and translated to be adapted to the conceptual design phase. Table 1 shows some of these groups and sub-groups. For each of these sub-groups, two types of rules have been identified: technical rules and environmental rules (Figure 3). This classification is related to the functional requirements [1] and environmental exigencies [13-15] that are defined in the requirement list.
3.2. Technical rules
The technical rules are used to cover technical requirements which have to be satisfied by the product during its lifecycle for it to be environmentally friendly. When the designer defines the type of product (mechanical, electrical, electronic…) [6] and proposes an end of life scenario, he needs rules to guide the design process and to take the technical aspects into account. These rules propose general ideas related to component material, product structure, product assembly and disassembly axes, joining techniques and disassembly, pollutants, etc. They are inspired by DfX approaches [16][17], such as design for disassembly (DfD) [18] and design for remanufacturing (DfRem) [19]. They are supposed to guide the designer in adapting the product’s technical requirements during each life cycle phase: from material suggestions and proper production processes to the definition of the product’s end of life scenarios.
3.3. Environmental rules
The difference between the environmental rules and the technical rules is that the technical rules are means to propose technical ideas, to solve technical issues, to find functional solutions and to adapt proposed approaches like DfRem, whereas the environmental rules are more related to environmental standards and exigencies. They are related to the life cycle requirements for the whole product: energy consumption, environmental impact, end of life scenario, product durability [6], etc. They are supposed to guide the designer in adapting the product’s environmental requirements during each life cycle phase. These rules aim to improve the conceptual design process by giving goals to designers, rather than by improving technical issues.

3.4. Examples of DfE rules
In this paragraph, we illustrate for each group of DfE rules an example of a technical and an environmental rule:

• Rules to choose the right materials.
Objective: to guide the designer in material choices and to determine the effect of their presence in the product.
Technical rules: “TO USE RECYCLABLE RAW MATERIAL”; “TO MINIMISE THE NUMBER OF TYPES OF MATERIALS IN THE PRODUCT”.
Environmental rules: “TO ADAPT THE MATERIAL TO THE LIFE OF THE PRODUCT”.

• Rules to improve production processes.
Objective: to optimise the materials consumption and the level of energy used in production processes by adopting cleaner production strategies.
Technical rules: “TO USE STANDARDIZED ELEMENTS, PARTS, AND COMPONENTS FOR EASY PRODUCT ASSEMBLY”.
Environmental rules: “TO AVOID HAZARDOUS MATERIALS AS AUXILIARY OR PROCESS MATERIALS”.

• Rules to reduce the transportation.
Objective: to optimise the product packaging and the transportation impacts.
Technical rules: “TO ADOPT MODULAR STANDARD SHAPES FOR REUSABLE PACKAGING”; “TO PREFERABLY USE RENEWABLE RAW MATERIALS FOR PACKAGING”.
Environmental rules: “TO MINIMISE THE PRODUCT TRANSPORTATION”.

• Rules to improve the use of the product.
Objective: to minimise the impact of the product and of its consumables in use.
Technical rules: “TO MINIMIZE THE ENERGY CONSUMPTION IN USE”.
Environmental rules: “TO INCREASE THE LIFE TIME BY DESIGNING THE PRODUCT FOR SEVERAL USAGE PHASES”; “TO EXTEND THE USAGE TO SEVERAL USERS IN THE SAME USAGE PHASE”.


• Rules to increase product durability.
Objective: to minimise the waste and to adopt the best end of life strategy for the product.
Technical rules: “TO IMPROVE THE DISASSEMBLABILITY OF THE PRODUCT”.
Environmental rules: “TO PREFER CLOSED LOOP END OF LIFE STRATEGIES”.

Figure (3): Types and groups of the DfE rules usable during the conceptual design phase.

4. The DfE indicators to be used during the conceptual design phase
As presented in the previous paragraph, numerous DfE rules exist to evaluate a product. To improve the use of these rules during the design process we have:
• identified the product model that can support these rules during the conceptual design phase;
• defined factors related to the design rules, and weighting factors, to evaluate the preliminary solutions from an environmental point of view.

Identify the product model that can support these rules during the conceptual design phase. Define factors related to the design rules and weighting factors to evaluate the preliminary solutions from an environmental point of view.

4.1. The product model used during the conceptual design phase The product model has been chosen regarding the simplest combination needed to obtain a structure of the product. The simplest structure consists of two components and one relation (figure 4). It is the (C, R, P) [20]model for Component, Relations between the components and Product.

Figure (5): Indicators’ symbolisation

For example:
- Kcda belongs to the component characteristics group and gives information on the component disassembly axis.
- Krtype belongs to the relations characteristics group and gives information on the type of relation. There are general types, and the designer chooses the relation type from the list presented in Table 2.
- Kppcn belongs to the product characteristics group and characterizes the number of polluting components in the product. As an example, for the product illustrated in Table 2, the indicator value of Kppcn is equal to 3 (Kppcn = 3).


 | Components Group | Relation Group | Product Group
Name | Disassembly axis | Relation type | Number of polluting components
Symbolisation | Kcda | Krtype | Kppcn
Value | Kcda = List (X, Y, Z) | Krtype = List (x1, x2, x3, …) | Kppcn = Value (real number)
Illustration | (graphic) | (graphic) | (graphic)
Table (2): Example of three indicators.


4.3. Factors
A factor is an evaluated item that is linked to a DfE rule. The factor is evaluated by a formula that uses the indicators presented in the last section, and is valued as a real number belonging to the interval [0–1]. The symbolisation system of the factors looks like the indicators’ symbolisation system (Figure 6): the factors are represented by the letter (F) and their group is specified with the second letter C, R or P. The third letter is a (P) if the factor has specific relations with polluting components. At the end of the factor appears the abbreviation for the rule considered, in small letters.

Figure 6: Factor symbolisation

As an example of the factor calculation, consider the factor Fppsd (similarity of direction factor for the disassembly of polluting components). This factor belongs to the product characteristics group (P). It is related to the number of polluting components (Kppcn) and to the number of components which have the same disassembly direction (Kcpd):
Fppsd ∈ [0–1] = Max(Kcpd) / Kppcn    (1)
where Kcpd is the number of polluting components having the same disassembly direction and Kppcn is the number of polluting components. Figure 7 presents an example of calculating Fppsd for a product model with seven components, three of which are pollutants. In the first case, each polluting component has its own disassembly direction, and Fppsd = 1/3. In the second case, two of the three polluting components have the same disassembly direction, and Fppsd = 2/3.

Figure 7: Example of calculating Fppsd

4.4. Factor total and weighting indicators
After assigning factors to rules and obtaining the value of each factor, the main factor (FACTOR TOTAL) can be calculated. This factor total is specific to the whole product and represents the aggregation of all the factors into one value. The factor total value is evaluated by giving each factor a different weighting value (the value of the weighting indicator, related to the designer’s point of view) and by dividing the total by the sum of the weightings:
Fptot = ( Σ(i=1..n) Ii × Fi ) / ( Σ(i=1..n) Ii )    (2)
where:
Fptot: factor total for the whole product (P)
Ii: weighting indicator for the factor (Fi)
Fi: factor of one realised rule
n: number of factors used

To give a proper justification for the values of the factors, it is very important to link these factors with obvious criteria: links with known databases (standards, limits and reference indicators), reference to the specialty of each product, and the experience of the designer (designers and research related to new products, processes or materials). The value of the weighting indicator is defined by the designer himself: he chooses the value according to the customer’s needs and the design specification. In this research, the scale of the weighting indicator is assumed to be a scale of ten (X/10); the most important will take 10 and the least important 0, depending on the designer’s point of view. The symbolisation system of the weighting indicators looks like the indicators’ symbolisation system (Figure 8): the weighting indicators are represented by the letter (I) and their group is specified with the second letter (R, C or P). The third letter is a (P) if the indicator has specific relations with polluting components. At the end of the indicator appears the abbreviation for the main specific characteristic of the weighting indicator considered, in small letters.

Figure 8: Weighting indicator symbolisation system.

The factor value (Fptot) is finally represented as a percentage to be compared with the proper limits of the design requirements (Figure 9). Whenever the value of Fptot is near 0, the level of application of the DfE rules is not satisfied; whenever this value is near 100, the design applies the DfE rules. Figure 9 represents the proposed scale used to illustrate the final result of the total value (Fptot); the arrow underneath shows the Fptot value (X%).

Figure 9: Factor total limits to evaluate the satisfaction.
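A minimal sketch of these two computations (illustrative Python under the definitions above; the function and variable names are invented for the example):

    from collections import Counter

    def f_ppsd(polluting_directions):
        # Eq. (1): largest share of polluting components sharing one disassembly direction
        counts = Counter(polluting_directions)
        return max(counts.values()) / len(polluting_directions)

    def f_ptot(factors, weights):
        # Eq. (2): weighted aggregation of the rule factors into the factor total
        return sum(f * w for f, w in zip(factors, weights)) / sum(weights)

    print(f_ppsd(["X", "X", "Z"]))                # 2/3, as in the second example above
    print(round(f_ptot([0.6, 0.8], [10, 5]), 2))  # 0.67 for two rules weighted 10 and 5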

The objective of a single factor formulation is to provide the designer with a first estimation of his design. This estimation relates to how well the design addresses the environmental aspects during the conceptual design phase. An application of this evaluation is given in the next section.

5. Case Study
In this case study, the design of a refrigerator is considered. The main components of this refrigerator have been defined during the conceptual design phase as in Table 3. During the design process, a functional block diagram has been established (Figure 10); it shows the functional components and their relations, which are necessary to define the (C, R, P) characteristics.

To illustrate our approach, we consider in this example the rules related to the group “Rules to choose the right materials”.
5.1. First case
In this case, relating to the first rule, “TO MINIMISE THE NUMBER OF TYPES OF MATERIALS IN THE PRODUCT”, we can define the factor Fpms (factor of product material similarity). This factor is related to each component’s material type through the indicator KcMat (component material indicator). The value used in this factor is the number of components which are made of the same material (mono-material). The equation which gives its value is formed as follows:
Fpms (material number) ∈ [0–1] = KcMat / Kpnc    (3)
while taking into account that:
Fpms(1) + Fpms(2) + Fpms(3) + … = 1    (4)
where Kpms is the product material similarity, Kpnc is the number of components, and Fpms(1), Fpms(2), Fpms(3), … are the percentages of components made of materials M1, M2, M3, … in the product (P).
In our case study, Fpms = 40% (2/5), and this value can refer to two materials (steel and polystyrene). When the designer follows the rule and increases the number of components which have the same material in his design (for example, changing the external door material to polystyrene instead of steel), the factor value becomes Fpms = 60% (3/5). This shows that the product design now applies the rule at a level of 60% after modifying the external door material. Figure 11 shows, through the satisfaction scale, the result and the increase in the satisfaction level.

Figure (10): The FBD for the refrigerator

 | Component | Material type | EoL scenario
C1 | External door | Steel | Recycling
C2 | Internal door | Polystyrene | Incineration
C3 | External body | Steel | Recycling
C4 | Internal body | Polystyrene | Incineration
C5 | Cooling system | multi-materials | Not clear yet
Table (3): Components materials and end of life for the refrigerator.

Figure (11): Satisfaction scale for the first DfE rule (case no. 1: 40%; case no. 2: 60%).

5.2. Second case
Relating to the second rule, “TO USE RECYCLABLE RAW MATERIAL”, we can define the factor FpEoLns (product similarity factor of components’ EoL number). This factor is related to the number of components (Kpcn) and to the components which have the same type of end of life (KcEoL). Each type of EoL has its own factor and its own indicator. This factor helps to give an indication of the EoL number independently of the total number of components that have the same EoL. As for the last factor, the value used in this factor is the highest number of components which have the same EoL. Then we can write:
FpEoLns ∈ [0–1] = KpEoLns / Kpcn    (5)
while taking into account that:
FpEoLns(1) + FpEoLns(2) + FpEoLns(3) + … = 1    (6)
where KpEoLns is the product’s EoL number similarity indicator, Kpcn is the components number indicator, and FpEoLns(1), FpEoLns(2), FpEoLns(3), … are the percentages of components having EoL(1), EoL(2), EoL(3), … in the product.
In our case study, FpEoLns = 40% (2/5), and this value can refer to the two EoL scenarios (recycling and incineration). When the designer follows the rule and applies a recyclable-materials EoL scenario instead of the incineration scenario in his design, this will increase the factor FpEoLns. For example, by making the internal door and internal body recyclable instead of incinerated polystyrene, the factor value becomes FpEoLns = 80% (4/5). Figure 12 shows, through the satisfaction scale, that the result has increased from non-satisfied to a good level.

Figure (12): Satisfaction scale for the second DfE rule (case no. 1: 40%; case no. 2: 80%).
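Both refrigerator evaluations can be checked in a few lines (illustrative Python; component data from Table 3):

    from collections import Counter

    components = {  # name: (material, end-of-life scenario), per Table 3
        "External door": ("Steel", "Recycling"),
        "Internal door": ("Polystyrene", "Incineration"),
        "External body": ("Steel", "Recycling"),
        "Internal body": ("Polystyrene", "Incineration"),
        "Cooling system": ("multi-materials", "Not clear yet"),
    }

    def similarity(values):
        # Largest share of components sharing the same value (Fpms, eq. 3; FpEoLns, eq. 5)
        values = list(values)
        return max(Counter(values).values()) / len(values)

    print(similarity(m for m, _ in components.values()))  # Fpms = 2/5 = 40%
    print(similarity(e for _, e in components.values()))  # FpEoLns = 2/5 = 40%
    # Making the external door polystyrene raises Fpms to 3/5 (60%); recycling both
    # polystyrene parts raises FpEoLns to 4/5 (80%), matching Figures 11 and 12.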

6. Conclusion
During the last phases of the design process, it is not easy to take the environmental criteria into account, since the product is already defined and any modification to it generates additional delays or over-costs. Yet environmental aspects are currently mainly considered during the detailed design phase, by taking into account the end of life scenarios and the product life cycle analysis. For all these reasons, a method has been proposed to help the designer take the environmental exigencies into account through early estimation during conceptual design. This estimation is based on DfE rules that have been translated into valued factors. Two main contributions have been realised:
- the environmental exigencies have been translated into rules (DfE rules), and the rules that can be applied in the conceptual design phase have been detailed;
- these rules have been translated into factors. Each factor is evaluated to identify whether the rule is respected or not, and to obtain a general estimation with one total factor for the product and all the rules.
Detailed analyses should still be carried out later in the design process to validate and optimise the product. But with this first proposed approach, designers are guided toward a valued goal and should be more concerned with environmental aspects.


7. References
1. Pahl, G., Beitz, W., Engineering Design: A Systematic Approach, 3rd edition, 2007, p. 39.
2. Haoues, N., Zwolinski, P., Brissaud, D., How to integrate end of life disassembly constraints in early design stages?, Int. Sem. CIRP LCE, Belgrade, 2004.
3. Crow, K., Value analysis and function analysis system technique, DRM Associates, 2002.
4. Betz, M., Schoech, H., Design for Environment (DfE): an important tool towards an environmentally efficient product development, October 2001.
5. Kurk, F., Eagan, P., The value of adding design-for-the-environment to pollution prevention assistance options, Journal of Cleaner Production, 2008, 16(6): 722-726.
6. Vienna UT, ECODESIGN PILOT, http://www.ecodesign.at/pilot/ONLINE/ENGLISH/, 2000.
7. Wikipedia, Life cycle assessment, http://en.wikipedia.org/wiki/Life_cycle_assessment, 2008.
8. PRé Consultants, What is Life Cycle Assessment?, http://www.pre.nl/default.htm, 2008.
9. Luttropp, C., Lagerstedt, J., EcoDesign and The Ten Golden Rules: generic advice for merging environmental aspects into product development, Journal of Cleaner Production, 2006, 14: 1396-1408.
10. Ammenberg, J., Sundin, E., Products in environmental management systems: drivers, barriers and experiences, Journal of Cleaner Production, 2005, 13(4): 405-415.
11. Shinji Kawamoto, M.A., Yuji Ito, Eco-Design Guideline for Software Products, IEEE, 2005, 1-4244(0081).
12. Luttropp, C., Lagerstedt, J., EcoDesign and The Ten Golden Rules: generic advice for merging environmental aspects into product development, Journal of Cleaner Production, 2006, 14(15-16): 1396-1408.
13. European Union, 15.10.30.30 Waste management and clean technology, 2008.
14. Scipioni, A., et al., The ISO 14031 standard to guide the urban sustainability measurement process: an Italian experience, Journal of Cleaner Production, 2008, 16(12): 1247-1257.
15. Stoyell, J.L., et al., Results of a questionnaire investigation on the management of environmental issues during conceptual design. A case study of two large made-to-order companies, Journal of Cleaner Production, 1999, 7(6): 457-464.
16. Brissaud, D., Zwolinski, P., Designing products that are never discarded, 2006, p. 225.
17. Wikipedia, Design for X, http://en.wikipedia.org/wiki/Design_for_X, 2007.
18. Haoues, N., Contribution à l’intégration des contraintes de désassemblage et de recyclage dès la première phase de conception de produits, 2006.
19. Zwolinski, P., Lopez-Ontiveros, M.-A., Brissaud, D., Integrated design of remanufacturable products based on product profiles, Journal of Cleaner Production, 2006, 14(15-16): 1333-1345.
20. Alhomsi, H., Developing a method for elaborating the scenarios related to sustainable product lifecycles, master thesis, 2007.

Developing a Current Capability Design for Manufacture Framework in the Aerospace Industry
A. Whiteside1, E. Shehab2, C. Beadle1, M. Percival1
1 Rolls-Royce plc, Moor Lane, Derby, DE24 8BJ, UK
2 Decision Engineering Centre, Cranfield University, Cranfield, Bedford, MK43 0AL, UK
[email protected]

Abstract
During progressive product design and development in the aerospace industry, a lack of effective communication between the sequential functions of design, manufacturing and assembly often causes delays and setbacks, whereby production capabilities are unable to realise design intent in high-complexity product models. As a result, there is a need to formalise the progressive release of an engineering model to production functions during New Product Introduction (NPI) by defining key stages of definition maturity and information requirements through a structured process. This paper describes the development of a framework to facilitate optimal ‘design for manufacture’ based on current manufacturing capabilities within the aerospace industry.

Keywords: Design for Manufacture and Assembly, Process Capability Analysis, Aerospace Industry

1 INTRODUCTION Due to the high complexity and sensitivity of aircraft engine design, a progressive design release process is followed over a period of time during the introduction of a new product. The nature of staged product definition is built around resource planning to allow long lead-time activities such as material sourcing and machining acquisition to take place before the design is finalised. Design and manufacturing functions need to communicate and negotiate on a multitude of design factors to ensure that the product can be manufactured to the desired specifications under strict quality control. This is a key Design for Manufacture and Assembly (DfMA) principle. It has been identified that up to 80% of product costs are defined during early concept design [1]. Despite this statistic, the design function within manufacturing organisations often sits largely unconnected to sequential functions throughout the duration of a design definition. There is often a lack of formal buy-off procedures, with manufacturing and assembly functions frequently missing a quantitative means of conveying their capabilities to design via statistical analysis and key performance indicators. Consequently, up to 50% of development effort can be wasted simply correcting product designs that have been sent back as unworkable from the manufacturing and assembly functions [2]. This research paper defines a framework to facilitate optimal ‘design for manufacture’ based on current manufacturing capabilities within the aerospace industry. This framework takes the form of a progressive definition release process route-map to guide integrated product teams through the efficient release of a product master model from design to the manufacturing and assembly functions. The remainder of this paper is structured as follows. Section 2 provides an overview of research related to the topic area; Section 3 describes the research methodology



followed to undertake the study. Section 4 provides a description of the produced framework, which is validated and concluded in sections 5 and 6.
2 RELATED RESEARCH
The historical approach to engineering design and product development has largely been via a series of sequential stages [3]. Firstly, a need for a new or adapted product is identified and an initial design is formulated. This is then passed to manufacturing and assembly, who have the responsibility of making and building the product. It is then released onto the market, where its in-service performance, lifespan and success are determined. This linear method encountered many problems due to a lack of upstream communication of requirements from manufacturing and assembly to design [4]. The concurrent engineering tool of DfA (Design for Assembly) was first proposed following a number of studies into assembly constraints caused by inefficient product design [5]. Such considerations were brought into the manufacturing domain with the proposal of DfM (Design for Manufacture) techniques that promoted part reduction, simplification and the formulation of manufacturing rules for design [6]. These methodologies incorporate manufacturing and assembly capabilities into the very earliest stages of concept design, ensuring that products are designed in such a way that they can be optimally manufactured. The topic area has expanded to include various other dimensions within the product design stage, such as maintainability, quality and lifecycle management (DfMt, DfQ and DfLC) [7]. The need to implement ‘matrix management’ for the successful facilitation of DfM methodologies is constantly emphasised [8]. This moves companies away from a vertical business layout towards a matrix layout that, as well as continuing to foster functional specialists, also promotes cross-functional integrated product teams. This

concept of integrated functional teamwork emphasises the importance of communicating knowledge and information between the different functional departments working on creating, developing and maintaining a quality product. In addition to sources of explicit knowledge, such as operating manuals, product drawings and written company procedures, employees each possess substantial tacit knowledge about their work as a result of training and experience, enabling them to fulfil their responsibilities. Substantial research effort has been pursued into knowledge-based systems for capturing and representing tacit knowledge related to both the product being designed and its manufacturing environment [9]. This knowledge can then be categorised in line with the larger DfM framework according to the defined separate product and manufacturing hierarchies. The importance of considering the structure and organisation of such knowledge feed-in is highlighted in [10], to ensure that the range of knowledge input is filtered and fed in at the correct process planning stage for optimal effect. General, top-level awareness of whole process capabilities and factory capacities is used in the early, holistic views of concept assessment, whereas specific shop-floor and machine-level performance awareness is required for specific feature manufacturing analysis. The Foundation of Manufacturing Committee of the National Academy of Engineering stresses how “world-class manufacturers recognise the importance of metrics in helping to define the goals and performance expectations for the organization. They adopt or develop appropriate metrics to interpret and describe quantitatively the criteria used to measure the effectiveness of the manufacturing system and its many interrelated components” [11]. This quote emphasises the integral role of good-quality capability data and measurement information in producing quantitative performance records and metrics to drive an organisation’s strategic planning and success. Within any manufacturing process, a vast amount of measurement data is collected in order to monitor and control the process and product, ensuring quality and stability. Statistical Process Control (SPC) can be used to analyse this data, measuring process capability through numerical and graphical analysis. However, there is often no inter-relation between the establishment and promotion of a new DfMA framework and the significance of quantified process capability analysis. There is a necessity for a clear definition of what manufacturing and assembly knowledge is required throughout each stage of concept and component design within the aerospace industry. This paper describes a proposed methodology to formalise capability transfers as standard within a design and buy-off process.
3 RESEARCH METHODOLOGY
A qualitative research methodology was adopted throughout the study, using the primary tools of semi-structured interviewing and subjective observation to collect and analyse information from which to draw conclusions and build a solution. As opposed to traditional research methods, whereby a theory is built up and then applied in context, this framework was developed directly around the requirements and problem statement of the case-study company. Given the breadth of information gathered, this could then be built up to form a generic methodology, the

principles of which are wholly transferable to wider application. To assess current practice and identify the requirements for the solution framework, a series of thirty hour-long interviews and workshops was undertaken with a total of 34 employees. Participating interviewees included senior representatives from central design, assembly and manufacturing functions, in addition to teams from specific component manufacturing plants. Semi-structured questionnaires examined and scrutinised examples of previous design releases, gauging the roles and requirements of each stage in the progressive model release process:

• What are the key stages and milestones of design release?
• What individuals and functions are involved at each stage?
• What methods and media of communication are adopted?
• How are manufacturing capabilities communicated and used?
• How are lessons learnt captured and used?
• What are the major causes of setbacks or delays within current practice?
• What resource management and production planning tasks are directly coupled to the design buy-off?

The progressive release route-map was constructed and populated with information through the further use of ten hour-long interviews and workshops. These were carried out across three contrasting case-study component plants, chosen in order to collect an unbiased and broad set of company information; the focus was on understanding the reasons behind inefficiencies and problems in current practice and on finding solutions that would overcome them and shorten lead-times and design iterations. Studying such contrasting components, each with different methods of manufacture, machining and production lead-times, aided not only in highlighting all of the different considerations required to produce a generic framework, but also in exposing variation in the procedures of progressive model release and capability process control followed within different facilities.

4 THE PROGRESSIVE MODEL RELEASE FRAMEWORK
The formulated framework takes the form of an interactive process route-map within the case-study company’s production system intranet ‘How to’ guides, embedding best practice into company operating procedures. The process (Figure 1) consists of five principal activities (large ovals) and is hypertext-enabled: clicking on each stage takes the user through to sequential layers of information and links to associated documentation. Each activity consists of a series of interactive steps that are broken down into further levels of information. At the end of each activity, a ‘Gate Checklist’ poses a series of questions to the user to ensure that all requirements specified in completing that activity have been met before progression to the next stage.
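The gate logic itself is straightforward to mechanise. The sketch below is a minimal illustration only; the checklist questions shown are hypothetical stand-ins, since the route-map’s actual gate questions are company-specific.

    # Minimal sketch of a stage-gate check (hypothetical questions).
    def gate_passed(checklist):
        """A gate is passed only when every checklist question is answered 'yes'."""
        return all(checklist.values())

    release_plan_gate = {
        "Integrated product team established?": True,
        "Release stages agreed with manufacturing scheduling?": True,
        "Product introduction milestones mapped to release stages?": False,
    }

    if gate_passed(release_plan_gate):
        print("Proceed to next activity")
    else:
        failed = [q for q, ok in release_plan_gate.items() if not ok]
        print("Blocked at gate; outstanding items:", failed)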


Figure 1: Top-level interactive route map

During data gathering, IDEF0 modelling was used to structure and organise all the information to be included within the process route-map. Traditionally intended for functional modelling, IDEF0 is frequently used to pictorially represent an ordered process due to its ability to accurately portray complex processes at different levels of detail and granularity. Each function or activity is represented by a box, which can be dissected to reveal all the sub-activities contained within. The whole activity, ‘Facilitate an Effective Manufacturing Design Buy-off’, is displayed in Figure 2. The primary input to the process, the release of a concept design, is represented coming in from the left. Emerging from the right are the outputs resulting from the process taking place. These are a Master Model ready for production and a feature acceptance log and issue database completed during the buy-off process. The constraints (coming down from the top) and mechanisms (coming up from the bottom) respectively govern and facilitate the progression of the process. The main process is broken down into five principal activities (Figure 3). These activities are discussed in detail in [12]. The first activity, ‘Define Progressive Definition Release Plan’, addresses the foundations required from which to carry out an efficient model release process. It supports the establishment of an integrated product team to create a plan for the progressive release of a model, defining specific stages of release based on the constraints of manufacturing scheduling requirements and product introduction milestones.
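For readers unfamiliar with IDEF0, the structure described above maps naturally onto a small recursive data type. The sketch below is illustrative only; the field values are paraphrased from Figure 2, and the class design is not part of the published framework.

    # Minimal sketch of an IDEF0 activity node (illustrative, not from the paper).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Activity:
        name: str
        inputs: List[str] = field(default_factory=list)       # enter from the left
        outputs: List[str] = field(default_factory=list)      # leave from the right
        controls: List[str] = field(default_factory=list)     # constraints, from the top
        mechanisms: List[str] = field(default_factory=list)   # facilitators, from the bottom
        children: List["Activity"] = field(default_factory=list)  # sub-activities

    buy_off = Activity(
        name="Facilitate an Effective Manufacturing Design Buy-off",
        inputs=["Concept design release"],
        outputs=["Master Model ready for production",
                 "Feature acceptance log and issue database"],
    )
    buy_off.children.append(Activity(name="Define Progressive Definition Release Plan"))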


The second activity (Figure 4) describes how to create an enumerated capability forecast for each feature or requirement on a model through a translation of qualitative knowledge and capability performance data. Due to the high level of design and feature carryover for new products, an extensive awareness of manufacturing capability can be established before the model is first released by studying past manufacturing data from the related component family. Components are frequently grouped together based on the similarity or relation of features, promoting not only standardisation and organisation of parts but also data and information reuse across parts where appropriate. By identifying the component family, top-level methods of manufacture, specifications and operation listings can be identified in the first step of planning production and analysing capability. Statistical Process Control (SPC) data and Key Performance Indicators (KPIs) are key mechanisms used to assess current manufacturing capability levels, analysed to give an indication of present performance and to highlight any potential capability issues. Process capability indices (PCIs) form an effective means of summarising process performance relative to a set of specification limits, and prove effective tools for both process capability analysis and quality assurance. The primary indices used are the Cp and Cpk indices: the Cp index is a measure of the precision of a given process; the Cpk index is a measure of how the process distribution sits relative to the design specification limits. Other KPIs, such as non-conformance rates and percentage of scrap, also provide effective indications of capability.
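The two indices follow directly from the specification limits and the observed process spread: Cp = (USL − LSL)/6σ and Cpk = min(USL − μ, μ − LSL)/3σ, where μ and σ are the process mean and standard deviation. The sketch below computes both from sample measurements; the data and the Cpk ≥ 1.33 acceptance threshold (a common industry rule of thumb) are illustrative assumptions, not values prescribed by the framework.

    # Minimal sketch: compute Cp and Cpk for a feature from measurement data.
    # The sample data and the 1.33 threshold are illustrative assumptions.
    import statistics

    def process_capability(samples, lsl, usl):
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)            # sample standard deviation
        cp = (usl - lsl) / (6 * sigma)               # precision of the process
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # precision and centring
        return cp, cpk

    # Hypothetical bore diameters (mm) against a 50.00 +/- 0.05 mm tolerance band.
    diameters = [50.01, 49.99, 50.02, 50.00, 49.98, 50.01, 50.00, 50.02]
    cp, cpk = process_capability(diameters, lsl=49.95, usl=50.05)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
    if cpk < 1.33:
        print("Tolerance band may be too tight for current capability")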

Figure 2: IDEF0 mapping detailing the top-level process activity

Figure 3: IDEF0 mapping detailing the five primary process activities


Figure 4: IDEF0 mapping detailing the sub-process activities within stage 2: ‘Capture current manufacturing capabilities’
Predicting future manufacturing capability is also important, and proves especially relevant for new or changed manufacturing methods where existing capability knowledge is sparse. Such predictions can be collected through the use of machining trials, process modelling and computational prediction, and then incorporated into the forecast. Qualitative, tacit knowledge concerning efficient manufacturing practice can be gleaned from the individuals involved in product realisation through effective manufacturing review meetings. The framework defines specific feature acceptance logs and issue trackers, adherence to which is incorporated into the stage-gates so that progression is not permitted until they are adequately fulfilled. Manufacturing and process capabilities are typically manifested through the assignment of tolerances to all manufacturable design parameters. A tolerance is the permissible range by which a quantity may vary from its specified value without detrimentally impacting the functionality or performance of the product. Tolerance allocation is of significant importance for the functionality of mechanical products and the manufacturing cost of the parts. From a design point of view, the definition of tolerances is based around the criticality of the feature and the effect that a variation would have on its performance: the more critical or sensitive a feature, the tighter the tolerance band shall be. Conversely, from a manufacturing standpoint, tolerances reflect the capability of the manufacturing process in achieving the nominal value. These are dependent on the ability of the machines, cost, production and measurement processes used to create the feature, and there will always be an unavoidable degree of statistical variability due to common cause variation in factors


such as material quality, machining stability and environmental conditions.
Stage 2 culminates in the creation of a specific capability forecast for every feature on the drawing. Through the definition of an achievable manufacturing tolerance band in the early stages of a product concept, designers are explicitly aware of the exact production capabilities before entering into detailed design, preventing later iterations and enabling a more informed and data-driven buy-off for each specific feature. Activities 3 through 5 detail the specific company procedures and standard practices to follow in negotiating and agreeing each design specification. Reviewing the model for release and assessing the manufacturing capability on a feature-by-feature basis ensures that the final master model cannot be fixed until all drawing features are accepted by the production functions and all concerns have been resolved.
5 VALIDATION AND INTEGRATION
The validation process passed through a number of stages during the framework development. For final validation, the ten principal contacts (senior manufacturing and design engineers) involved with the research were revisited with the completed route-map, and the process was dissected step by step to ensure agreement and make any final changes. Validation was undertaken both with the functional representatives (to secure integration with company procedure) and with the specific component introduction teams (to ensure usability and case applicability).

This research project, a subset of a larger initiative, coincides with one of the key milestones in the product introduction of the company’s latest engine project. The finalised framework was carried forward upon completion for implementation within a series of design buy-off workshops and product definition meetings as part of a continuous improvement initiative.
6 CONCLUSIONS
The developed progressive definition and release framework promotes the incorporation of process capability knowledge during the design and definition of a product. Adherence to the process route-map ensures that no engineering model is released that cannot be realised by the manufacturing and assembly functions. This research amalgamated DfMA principles and process capability knowledge into the creation of a tangible process to facilitate the release of an engineering model for production. The framework was founded on an analysis of the current practice of product definition and development across the aerospace and automotive sectors and promotes the identification of (1) the major stages and activities within the progressive release of a model in order to support manufacturing production planning, (2) the individuals and functions involved within each activity and their requirements and roles in supporting the evolving model, and (3) the capability data and information required to optimally carry out each activity through informed design.
7 ACKNOWLEDGEMENTS
This research project was carried out in collaboration between Cranfield University and Rolls-Royce plc. Special thanks are expressed to all the employees who provided input and support to the study.
8 REFERENCES
[1] Shehab, E., Abdalla, H., 2006, A Cost Effective Knowledge-Based Reasoning System for Design for Automation, Proceedings of the IMechE, Part B: Journal of Engineering Manufacture, 220 (5): 729-743.

[2] Miles, B.L., Swift, K., 1998, Design for manufacture and assembly, Manufacturing Engineer, 77 (5): 221-224.
[3] Keys, L., 1988, Design for Manufacture; system life-cycle engineering design for the life-cycle, Proceedings of the IEEE/CHMT International Electronic Manufacturing Technology Symposium: 62-72.
[4] Boothroyd, G., 1994, Product design for manufacture and assembly, Computer-Aided Design, 26 (7): 505-520.
[5] Boothroyd, G., Dewhurst, P., 1983, Design for Assembly: selecting the right method, Machine Design: 94-98.
[6] Stoll, H.W., 1986, Design for Manufacture: an overview, Applied Mechanics Reviews, 39 (9): 1356-1364.
[7] Kuo, T.C., Huang, S.H., Zhang, H.C., 2001, Design for manufacture and design for ‘X’: concepts, applications and perspectives, Computers and Industrial Engineering, 41: 241-260.
[8] Swift, K.G., Brown, N.J., 2003, Implementation strategies for design for manufacture methodologies, Proceedings of the IMechE Part B: Journal of Engineering Manufacture, 217 (6): 827-833.
[9] Grant, E.B., Gregory, M.J., 1997, Tacit Knowledge, the Life-Cycle and International Manufacturing Transfer, Technology Analysis & Strategic Management, 9 (2): 149-162.
[10] Naish, J.C., 1996, Process Capability Modeling in an Integrated Concurrent Engineering System - The Feature-Oriented Capability Module, Journal of Materials Processing Technology, 61: 124-129.
[11] Ghalayini, A.M., Noble, J.S., 1996, The changing basis of performance measurement, International Journal of Operations & Production Management, 16 (8): 63-80.
[12] Whiteside, A.C., 2008, Developing a Current Capability Design for Manufacture Framework in the Aerospace Industry, MRes thesis, Cranfield University.


Design for Low-Cost Country Sourcing: Motivation, Basic Principles and Design Guidelines
G. Lanza, S. Weiler, S. Vogt
Institute of Production Science (wbk), Universität Karlsruhe (TH), Kaiserstr. 12, 76131 Karlsruhe
[email protected], [email protected]

Abstract
Not every product can be successfully sourced in low-cost countries; disadvantageous cost structures or extremely complex workpiece designs are the most frequent reasons for failure. A design that has been tailored to low-cost country sourcing offers the possibility of increasing the potential benefits while reducing risks and costs at the same time. The wbk Institute of Production Science at the Universität Karlsruhe (TH) has developed a new approach which ensures that the product design meets the requirements of the supplier. This paper identifies the factors influencing the design, deduces basic principles and illustrates guidelines for an adapted product design.
Keywords: Design for X, Low-Cost Countries, Global Sourcing, Product Development

1 INTRODUCTION
Increasingly global markets are providing the possibility of sourcing, manufacturing and distributing products in every part of the world [1]. Competitive prices are the precondition for gaining market share in the domestic market as well as abroad [2]. Fierce cost competition is forcing companies to focus on their core competencies and to pay particular attention to their purchasing decisions. Several studies show that the costs of supplied parts currently represent 60-70% of production costs [3-4]. Thus, production itself can only offer a limited savings potential, and cost reduction throughout the whole supply chain is necessary to remain competitive. Furthermore, shorter product life cycles and the reduction of non-value-adding activities, such as the inspection and testing of received parts, are forcing companies to co-operate with their supply partners along the entire supply chain [1]. As a result, the purchasing division, a mere operational procurement department in the past, is increasingly becoming a strategic entrepreneurial planning body. Figure 1 shows the impact of purchasing on corporate profits and productivity: in engineering, EBIT (earnings before interest and taxes) increases by 11% if purchasing costs are reduced by 1% [5].

[Figure 1 data: increase in EBIT at a 1% reduction in purchasing costs, by sector - Trade: 37%; Construction: 36%; Engineering: 11%; Chemical industry: 9%; Car manufacturers: 7%; Consumer goods: 4%]

Figure 1: Increase in EBIT at a cost reduction of 1% in purchasing [5]
Global procurement markets provide the basis for competitive and cost-effective products. In procurement, low-cost country sourcing plays a special role, as considerable cost savings can be realised in this area [6-7]. In times of increasingly globalised and liberalised markets, companies are required to tap these potentials in order to remain competitive.
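The leverage behind Figure 1 can be reconstructed with a back-of-the-envelope calculation (the figures here are illustrative assumptions, not data from [5]): if purchased parts account for 60% of revenue and the EBIT margin is 5.5%, a 1% reduction in purchasing costs frees 0.6% of revenue, and 0.6/5.5 ≈ 11%, matching the reported EBIT increase for engineering. The lower a sector's margin relative to its purchasing share, the larger this lever becomes.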



Great sourcing opportunities in emerging markets [8-12] in the shape of cost reductions and market developments go hand in hand with huge challenges [8-10][13-14] (see Figure 2).

[Figure 2 content - Objective: entering new markets, cost reduction. Consequence: increased risks. Challenges: sourcing items are designed to be manufactured in high-income countries; achieving the required quality standard; realising cost-saving potentials. Solution: design for low-cost country sourcing - use of comparative advantages (factor costs), developing a low-cost country manufacturing process.]

Figure 2: Interdependence of objectives and risks in low-cost country sourcing
Even relatively simple bulk products represent a considerable challenge to the purchasing department [14]. Major problems arise from insufficiently qualified low-cost suppliers, particularly with regard to quality control, production planning and manufacturing equipment [15-17]. Low-cost suppliers often do not meet the higher quality standards required by multinational companies. Even if promising suppliers are chosen, development measures such as technical support and continuous process control at the supplier’s production site are necessary in order to guarantee reliable product quality as well as on-time delivery [15][18]. Low-cost country sourcing is stimulated by low labour costs and by the new future markets that can be developed. A study has shown, however, that labour costs amounting to only 22% of those in Germany are no guarantee at all of a decrease in purchasing costs [19]. The study reveals that one in three companies pays more for sourcing their products in China than they would pay for procuring them on the local German market [19]. The companies that

were interviewed as part of this study showed a savings margin between 48% and -16% for sourcing in China in comparison to local procurement. Another study reveals that only 28% of Western companies are very satisfied with the service quality of low-cost suppliers [20]. Although low-cost countries are suitable for the production of a range of products, the examples mentioned above show very clearly that not each and every component can be successfully sourced in low-cost countries. The success of low-cost country sourcing projects is influenced not only by the respective cost structure but by several other elements and component requirements as well. The necessary technology and the existing complexity of the workpiece design have an impact on whether a product can be manufactured by a low-cost supplier. Companies that are considering having assemblies, components and products manufactured by a supplier in a low-cost country should therefore think about adapting their design to the local conditions.
2 CHANCES AND OPPORTUNITIES OF DESIGN FOR LOW-COST COUNTRY SOURCING
A number of guidelines for Design for Manufacturing (DfM), i.e. product design which meets manufacturing requirements, already exist [21-22][28-30]. These guidelines, however, were implicitly created for the manufacturing conditions in the established industrial states, the so-called high-income countries. Components are oftentimes manufactured on state-of-the-art technological equipment and therefore show a very high level of complexity [23]. High-income countries focus on automated production processes in particular, which represent a reliable approach to cost reduction without cutting back on quality. To allow an automated production process, specific design requirements already have to be considered in the development phase. The attempt to transfer this paradigm to other countries is partly responsible for the fact that outsourcing and procurement projects in low-cost countries have been accompanied by unexpectedly high costs and/or quality issues, or have even failed, because some products simply cannot be manufactured by low-cost suppliers. If it is clear, however, as early as the development phase that a component is to be purchased in a low-cost country, the complexity of this component can be adapted to the capabilities of the supplier without cutting back on its functionality. A special design provides the opportunity to increase the potential of low-cost country sourcing while reducing its risks. Furthermore, the design can be constructed in a manner that better exploits the comparative advantages (e.g. wage, energy and machine-hour costs) of low-cost countries and realises an extensive cost-savings potential by adapting production to the local conditions. It has been pointed out, for example, that an adapted product design can increase the degree of manual labour involved in production, as an alternative to a capital-intensive automated production [24]. This means that only a limited amount of capital is tied up and flexibility is increased [24]. Abele likewise favours an adapted product design for production in less developed countries [23]. This typically leads to less demanding, smaller process steps. On the one hand, this provides the possibility of realising long-term cost advantages; on the other hand, it represents a measure to face the lack of experts which is observed in many emerging markets and which, according to recent estimates, is very likely to increase [32]. Such an adaptation requires a re-design of products, which incurs short-term modification costs. If the specific needs and characteristics of low-cost countries are considered right from the outset, when the design is

agreed upon, no major costs will be incurred for adapting the product to low-cost country conditions (see Figure 3).

[Figure 3 content - axes: costs/influence (high to low) over the product development process (identify task, conceptual design, embodiment design, detail design); curves: possibility to influence costs (falling), modification costs (rising)]

Figure 3: Modification costs and possibilities of influencing costs during the product design phase [25]
Moreover, the early product design stages offer, in general, more possibilities to influence product costs (see also Figure 3). The earlier the objectives of low-cost country sourcing are integrated into the product design process, the higher the chances are of reducing costs and entrepreneurial risks. Innovative product design also paves the way for a considerable reduction of the effort and time needed to further develop and support suppliers, as this is often necessary to guarantee the required quality and productivity [23].
3 METHODOLOGY REQUIREMENTS
The realisation of cost-savings potentials while maintaining the required quality standards is an objective of low-cost country sourcing which can only be achieved with a systematic and methodical approach. This approach requires basic principles and guidelines serving as a comprehensive toolbox to fully tap the aforementioned potentials of a specific design for low-cost country sourcing. Since the scope of product development and design can never be captured in all its complexity [22], the basic principles and design guidelines to be elaborated will necessarily be heuristic. This means that the abundance of influencing factors is deliberately limited in this approach in order to develop suitable and practical solutions by means of simple rules. For this, the identification of general production factors and of the special aspects of low-cost sourcing is an important first step. As this paper deals with a newly defined research area, it is essential to provide a basis that paves the way for further research. First of all, this article aims at presenting a structure that provides this necessary framework. The aspects relevant to the further development or re-design of existing products shall be easily identifiable, resulting in the creation of a flexible range of design guidelines. These guidelines shall not be implemented as rigid, dogmatic rules but rather serve as creative ideas that designers can individually adapt [26].
4 PRODUCT DESIGN - BETWEEN MARKET REQUIREMENTS AND COMPANY PRINCIPLES
A product draft basically determines the shape, material and manufacturing process of the individual components as well as the joining processes for putting them together [25][27]. Design therefore evolves around these three parameters, which define the designer’s options.

230

Shape, material and manufacturing process are highly interdependent. The ideal functional shape of a component thus depends on the material used. A supporting structure, for example, can come in different shapes depending on whether it is made from cast iron or welded steel (see Figure 4). The relatively brittle grey cast iron is preferably loaded in compression and, as a bracket, should therefore be supported from below. Since steel, in contrast, is characterised by a high tensile strength, suspended constructions constitute an excellent use of the material.

[Figure 4 content - cast construction: strut is loaded in compression; steel construction: strut is tensile loaded]

Figure 4: Shape of a cast iron and a steel construction as a result of the material and manufacturing process used

This example illustrates that the shape of mechanically stressed components needs to comply with the characteristics of the respective material in order to achieve good material utilisation while maintaining the original function. This is necessary to manufacture the component cost-efficiently. While in this case the chosen material determines the shape, the correlation between design and material can also run the other way: if the shape of a component is roughly defined in the design draft, a material is to be chosen which fulfils the mechanical function most efficiently with respect to the given conditions. There is a third aspect as well which needs to be taken into account: which is the most appropriate manufacturing process? On the one hand, not every shape can be efficiently produced by every manufacturing process. Since cast iron components must be removed from the mould after casting, draft angles are necessary. On the other hand, not every shaping and joining process is compatible with every material. While steel sheets or tubes can be bent into different shapes, the same does not apply to cast iron components. Ehrlenspiel speaks of the triad of shape, material and manufacturing process [21]. Ashby adds a fourth, equally important aspect: function [27]. While the triad describes the material aspect of the product, functions can be seen as customer requirements (e.g. derived by quality function deployment). Supply and demand form a price that customers are willing to pay, which can serve as a basis for the company to determine its target costs. The market, therefore, determines not only the function of a product but also its maximum cost. In this context, functions also cover ergonomic, aesthetic and quality features. Since customer requirements can vary greatly between cultural spheres, regional market particularities are to be taken into account. Companies themselves add a further range of requirements which have an impact on the options in product design. Companies set up a strategic and operational framework which forms the basis for product design requirements. If the corporate identity demands that the product is “Made in Germany”, the design must meet the product target costs and functions for production in Germany. Existing supply networks can also create restrictions, such as the aim of a constant and full utilisation of production capacities. Further product design requirements arise from the supplier network. The components in question must first of all be producible by low-cost suppliers; chapter 1 showed clearly that this cannot be taken for granted. In the extreme, non-fulfilment of a requirement can mean that no supplier can be found who is able to meet the basic preconditions for the order. Should the complexity of the order exceed the supplier’s competencies, severe quality issues might be the result. This can incur additional costs, such as quality assurance measures or supplementary supplier development. “Producibility by low-cost suppliers”, as the major requirement, therefore represents a precondition for achieving the function and cost targets. The design must meet the requirements of the market as well as those of the company. It is the product more than anything, defined by the shape, material and manufacturing process of all its components, that provides the link between market and company (see Figure 5).

[Figure 5 content - market: customer requirements, competing products; company: operational and strategic framework conditions, supply network, supplier, strategy; product design options; product: shape, material, process, functions, costs]

Figure 5: Impacts on product design

5 DESIGN FACTORS FOR LOW-COST COUNTRY SOURCING
Basically, only a few product design requirements need to be added in order to achieve a design for low-cost country sourcing; existing requirements, however, need to be weighted differently. Figure 6 shows five essential factors which, with regard to [21-23], need to be taken into account in adapted product development and design: expenditure of labour, processing time, material used, requirements on manufacturing equipment, and necessary employee qualification and training. The order of these factors corresponds to the usual hierarchy applied to large-scale production in high-income countries. According to this concept, an extensive use of material, for example, can be accepted if the expenditure of labour can be reduced [21]. With regard to low-cost countries, the order of priorities is reversed. The availability of employees generally falls disproportionately as their qualification rises, whereas their cost rises disproportionately with qualification; companies should therefore try to refrain from employing specialists. Due to the technological conditions of these countries, employee qualification and manufacturing equipment requirements are to be reduced. The basic requirement, therefore, is that suppliers must be able to produce a specific design (see also chapter 4). If a product design is adapted to the competencies of the employees and the manufacturing equipment, staff and technological requirements are low, suitable suppliers can be found more easily or at all, and the risks of quality issues are

diminished and procurement prices reduced.

[Figure 6 content - general production factors, ordered from particularly critical to production in high-income countries (top) to particularly critical to production in low-cost countries (bottom): 1. expenditure of labour (assembly, set-up of equipment, finishing); 2. processing time (machine hours, working hours); 3. material used (amount of material used plus waste and defective parts); 4. requirements on manufacturing equipment (accuracy, level of automation); 5. necessary employee qualification and training; process management and quality assurance to be reduced in accordance with the hierarchy]

Figure 6: General production factors and their significance with regard to the country in which production is located
Expenditure of labour, processing time and material used have a direct impact on production costs, which will be reflected in the procurement price. The qualitative requirements on manufacturing equipment and employees (points 4 and 5) negatively influence prices as well. The five factors listed above can only be minimised individually to a certain extent; compromises are necessary. The hierarchy of these factors serves as guidance for finding compromises for each production site: one factor may be improved at the expense of a less critical one, but a compromise shall never be achieved at the expense of a factor of higher priority. This concept gives basic orientation. With a view to optimisation, the particular factor costs in the sourcing country as well as the cost structures of production in that country finally need to be taken into account. A design adapted to the cost structure means that the dominant cost drivers are reduced by constructive measures. With regard to low-cost country sourcing, not only the five general production factors are to be considered but also the aspects illustrated in Figure 7: different cultural and specialist background of suppliers, long hauling distances, tariffs and taxes, costs for coordination and support, product piracy and knowledge drain, and the dynamics of framework conditions. The eleven major factors are looked at individually in the following.

[Figure 7 content - special aspects of low-cost country sourcing: 6. different cultural and specialist background (lower quality standards; understanding of technical drawings cannot be taken for granted); 7. long hauling distances (delays, higher risks and costs, depending on weight, volume, batch size); 8. tariffs and taxes (charges of all types to authorities); 9. costs for coordination and support (supplier selection, development and support); 10. product piracy and knowledge drain (copycats, overproduction, fluctuation of employees); 11. dynamics of framework conditions (changing taxes, tariff regulations, material costs, labour costs and exchange rates)]

Figure 7: Special aspects of low-cost sourcing
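The compromise rule attached to Figure 6 (a factor may only be improved at the expense of lower-priority factors, never higher-priority ones) amounts to a lexicographic comparison. The sketch below illustrates one possible mechanisation; the factor ordering for low-cost countries and the scoring convention are assumptions for illustration, as the paper states the rule only qualitatively.

    # Minimal sketch of the compromise rule (illustrative, not from the paper).
    # Factors are listed highest priority first; for low-cost country sourcing
    # the high-income-country order is reversed. Lower score = better.
    LCC_PRIORITY = [
        "employee qualification and training",
        "requirements on manufacturing equipment",
        "material used",
        "processing time",
        "expenditure of labour",
    ]

    def acceptable_tradeoff(baseline, proposal, priority=LCC_PRIORITY):
        """A proposal is acceptable if its first deviation from the baseline,
        scanning from the highest-priority factor down, is an improvement."""
        for factor in priority:
            if proposal[factor] < baseline[factor]:
                return True   # improvement here; lower-priority factors may worsen
            if proposal[factor] > baseline[factor]:
                return False  # degradation that no lower-priority gain can offset
        return True           # identical on all factors

    baseline = {f: 3 for f in LCC_PRIORITY}
    proposal = dict(baseline, **{"material used": 4,
                                 "requirements on manufacturing equipment": 2})
    # True: equipment demands drop (higher priority) while material use rises.
    print(acceptable_tradeoff(baseline, proposal))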

5.1 Expenditure of labour
Since labour costs in low-wage countries are by definition lower, the use of labour can be increased. Companies can hereby refrain from a strongly automated manufacturing process which, in turn, reduces high initial investment costs and the cost of capital on the one hand and, on the other, lowers the requirements on the production equipment as well as on employee qualification and training. Complex manufacturing equipment always needs to be set up, monitored and maintained by specialists.
5.2 Processing time
The processing time is used to allocate machine-hour rates and wages to the units produced. The higher the processing time, the higher the costs per unit. Since labour costs and machine time in particular are cheaper in low-cost countries, a higher processing time is acceptable if it leads to benefits in other areas. It is therefore possible, for example, to use a cheaper material which requires a slower production process.
5.3 Material used
Although the factor of material used was ranked of medium importance in Figure 6, it is of higher priority in absolute terms. This is explained by the fact that the relative share of material costs in production costs is usually higher in low-cost countries, because labour and machine hours, for example, are cheaper. The economic use of material thus accounts for a relatively sharp reduction of total costs. Ehrlenspiel et al. accordingly recommend minimising material costs for production in China [21]. By substituting material costs with labour costs, it is possible to reduce the amount of material used if, for example, several parts are joined manually instead of machining components from a solid block, a procedure which is material-saving but labour-intensive. Furthermore, as early as the design phase, it needs to be considered that material should be used which can be procured from local suppliers, in order to prevent the use of expensive imported material.
5.4 Requirements on the manufacturing equipment
Machine-hour costs in low-cost countries are, in general, lower than in high-income countries. Precision and reliability, though, are oftentimes also lower if local equipment is used [6]. In order to prevent product quality issues, relatively low requirements are to be set for the manufacturing equipment. This is part of the basic principle of creating a design adapted to the competencies of the supplier. These requirements and their fulfilment are also adapted to the cost structure, since investment in high-quality manufacturing equipment in low-cost countries is relatively expensive compared to the labour costs in these countries.
5.5 Required employee qualification and training
The level of qualification and training of employees in low-cost countries is, generally speaking, much lower than in high-income countries [32]. Qualified employees are rare and, for that reason, disproportionately expensive (e.g. expatriates) [23]. A design that is adapted to the competencies of the employees must therefore compensate for their usually lower qualification.
5.6 Different cultural and specialist background of suppliers
The suppliers’ different cultural and specialist backgrounds can be the reason for serious misunderstandings [31]. While domestic suppliers can be expected to implicitly understand the aspects of

232

an order, these aspects have to be explicitly communicated to a foreign supplier. This, in turn, has an impact on the product design, which needs to be adapted to the communication needs of the supplier. Misunderstandings are to be prevented from the outset by a simple, explicit design, illustration and specification of the quality features. Quality criteria can only be successfully communicated if the designers make sure that the critical features can be specified. Several geometries, such as threads or undercuts, are subject to standards. A reference to the respective standard spares the communication of detailed information between customer and supplier and is therefore in line with the adapted communication needs. This only applies, though, if the recipient understands the reference, has access to the respective standard and is capable of complying with it. The procedure can fail if the supplier lacks the standardised tool needed to manufacture the required standardised geometry. Product design in line with standards is therefore only adapted to the communication needs of the respective cultural sphere.
5.7 Long hauling distances
Long hauling distances are a result of the geographical location of low-cost countries. Since goods manufactured in low-cost countries can oftentimes not be transported overland, they inevitably need to be transported by ship or even by aeroplane; Eastern European countries are an exception to this rule. The transport and transfer of goods incurs costs and leads to uncertainties and a loss of time. These disadvantages are to be reduced by a design adapted to transport requirements. Sea freight costs are directly dependent on the volume of the goods, whereas the costs of the more expensive air transport are based on weight [23]. It is therefore another important task of the product designers to minimise the relevant factor. Furthermore, significant economies of scale apply to all kinds of transport and should not be neglected [23]. The shipment of small entities (less-than-container loads, LCLs) costs about 40 to 50 percent more per kilogram than the shipment of full containers (full-container loads, FCLs) [23]. The same applies to the transport of goods by road or by train before and after shipping [23]. The transfer of goods in full container loads also tends to be faster and cheaper and incurs fewer risks [23]. Large batches are therefore in principle a goal which needs to be secured as early as the design phase. Large sourcing batches can be induced, for example, by deliberately choosing and designing standardised modules and components to be produced by low-cost suppliers. A design adapted to transport requirements therefore focuses on an efficient use of transport volumes, easy logistic handling and easy, efficient and, due to the relatively poor infrastructure in low-cost countries, safe packaging.
5.8 Tariffs and taxes
Whereas inside the EU, for example, the free movement of goods is enshrined in EU legislation, the import and export regulations for low-cost countries as well as the different tax regulations in these countries need to be taken into account. A design adapted to tariffs and taxes is required in order to minimise the amount of charges paid to the different authorities. The amount of outsourced product content and/or the manufacturing concept can have an impact on tariff rates. The tariff rate for the import of roller bearings, for example, amounts to 8%, whereas only 6% is to be paid for bearings already incorporated in a housing. Furthermore, a considerable share of ingoing material which is not manufactured in the EU can turn the product into a non-EU product. If such a product is sold to a customer resident in a country which has signed a free trade agreement with the EU, the customer is nevertheless obliged to pay tariffs on this product.
5.9 Costs for coordination and support
The costs incurred by the coordination and support of suppliers in low-cost countries are in general higher than for supplier relations within high-income countries. On the one hand, this is a result of the different cultural and specialist background of the suppliers. On the other hand, the long geographical distance between customer and supplier complicates close cooperation. Face-to-face meetings generally require longer preparation, more time and greater expense, particularly if an interpreter is needed to overcome language barriers [31]. Even if a design which is adapted to the competencies and especially the communication needs of the supplier reduces the scope of support, additional measures should nevertheless be taken into account. The increase in coordination costs is at least proportional to the number of players involved, i.e. manufacturers, raw-material suppliers and hauliers. The number of different product components leads to higher costs as well. If these are produced by the same supplier, however, synergy effects are to be expected. This can only be achieved if the components are similar in their material and manufacturing process. This illustrates clearly that a design adapted to coordination needs is required to create the preconditions for a simple supply chain with low costs for the coordination and support of suppliers. An appropriate product structure and/or segmentation can make a huge contribution to these improvements.
5.10 Product piracy and knowledge drain
There are different reasons why product piracy and knowledge drain are an important issue in low-cost countries. One aspect is a different mentality, characterised by low loyalty and a high fluctuation of employees; knowledge about products and manufacturing processes is therefore easily spread. In China, this development goes even further: state intelligence services have considerably promoted industrial espionage in Western companies and research institutes [33], and technology theft and product piracy have also been facilitated by laws and their application [33]. Two different scenarios are to be distinguished for the design for low-cost country sourcing:
1. The selected supplier uses free capacity for overproduction. The manufactured products, or even defective parts, enter the market without the customer’s knowledge. For this to happen, the items, of course, must be saleable. However, even individual components can be saleable, especially spare parts [31].
2. Copycats manage to gather information about the product and/or its manufacturing process from the supplier, which allows them to produce copies.
A range of measures should be taken in order to face the challenges mentioned above [33-35]. A design adapted to anti-piracy needs constitutes a basic element. There are two different appropriate strategies, which can also be combined, to prevent product piracy and knowledge drain. First, the relevant knowledge can be spread across several independent suppliers by segmenting the product into suitable manufacturing units. It is to be ensured that no single manufacturing unit is marketable on its own, contains key technologies or provides significant clues that point to the end product.
This strategy is particularly appropriate if the final assembly is to be carried out in the domestic production plant [36].

The other possibility is to design a sourcing unit in such a fashion that it is not usable without a “key component“ and therefore not saleable. The key component, which is manufactured exclusively for the supplier, is provided to the supplier in the exact amount needed for the volume of the order; overproduction is therefore not possible. Copycat activities can be prevented by the impossibility of obtaining the key component or by the high degree of difficulty in copying it [37].
5.11 Dynamics of framework conditions
Low-cost countries are undergoing rapid change as newly emerging markets; China serves as the perfect example of high dynamics [31]. Annually changing tax deductions with massive modifications for export products, for example, create considerable uncertainty on the manufacturer’s side. A design that is adapted to the dynamics of the market aims at preventing or reducing the effects, implications and added costs resulting from a change in the factors mentioned in Figure 6 and Figure 7. Companies need to identify the dynamics that are relevant to the enterprise and its product. Parameters such as the availability and price of energy and materials, freight conditions, export regulations etc. are subject to potential modifications and fluctuations which cannot be influenced. A design that is adapted to the dynamics of the market shall pave the way for seizing opportunities while limiting risks. Flexibility is of key importance, in combination with the prevention of critical dependencies and the principle of risk spreading. The risk of supply bottlenecks can be diminished if backup material can be used in production. Another possibility is permanent cost savings through gearing towards a flexible use of whichever material is currently less expensive.
6 BASIC PRINCIPLES OF THE DESIGN FOR LOW-COST COUNTRY SOURCING
Chapter 5 pointed out how the basic principles of the design for low-cost country sourcing are deduced from the main factors; Figure 8 gives an overview.

The all encompassing paradigm is simple, clear, safe. These three basic design rules apply always and everywhere, and most of the guidelines can be traced back to them [22]. For low-cost country sourcing however, they are of even greater importance. A simple design is adapted to the respective competencies, a clear assignment of functions and specifications reduces the risks of errors and, above all, is adapted to the respective communication needs. A safe design can be seen as a way to prevent errors. 7

Figure 8: The main factors and the deduced basic principles of the design for low-cost country sourcing. The figure maps the main factors (1. expenditure of labour; 2. processing time; 3. material used; 4. requirements to manufacturing equipment; 5. required employee qualification and training; 6. different cultural and specialist background of suppliers; 7. long hauling distances; 8. tariffs and taxes; 9. costs for coordination and support; 10. product piracy and knowledge drain; 11. dynamics of framework conditions) to the deduced basic principles (adapted to the respective cost structure, competencies, communication needs, transport needs, tariff and tax regulations, coordination needs, anti-piracy needs and dynamics).

7 DESIGN GUIDELINES FOR THE DESIGN FOR LOW-COST COUNTRY SOURCING
The following are specific, visualised guidelines for the basic principles of a design for low-cost country sourcing as explained in chapters 5 and 6. Some of the design proposals also refer to basic principles other than the one indicated in the headline, which is noted in the respective figure.

7.1 Adapted to the respective cost structure
Instead of choosing an expensive material to achieve a high surface quality, additional time and money should be spent on finishing the surface of a less expensive material. Guideline: desired characteristics are achieved by surface finishing instead of by high-quality material. Disadvantageous: stainless steel without surface finishing; a bulk plastic part whose material is selected for a flawless surface. Advantageous: cheaper steel finished with an anticorrosive; cheaper structural foam that is coated afterwards.

Figure 9: Finishing less expensive material

The lower wages in low-cost countries should be exploited not only in pre-assembly but also in final assembly; moreover, larger transport units are easier and cheaper to transport. Guideline: as much of the assembly as possible is carried out in the low-cost country (adapted to the respective cost structure, adapted to the respective transport needs).

Figure 10: Increased use of wage advantages

7.2 Adapted to the respective competencies
If possible, zero backlash should be realised by elasticity instead of exact fit, which creates a robust system. Guideline: the function is robust against manufacturing inaccuracies, and elasticity compensates for positional errors (adapted to potential errors).

Figure 11: Robustness vs. manufacturing inaccuracy

Early, multi-tier inspections ensure that errors are detected in time and limit the potential damage. Guideline: reliable quality checks on several levels at an early stage of the supply chain, in any case prior to transport (adapted to transport needs, adapted to the respective cost structure).

Figure 12: Early and simple quality checks

7.3 Adapted to the respective communication needs
An understanding of Western and international standards cannot be taken for granted. Explicit technical specifications that are easy to understand facilitate communication. Guideline: domestic standards are to be given as explicit specifications.

Figure 13: Foregoing standards or using regional standards

7.4 Adapted to the respective transport needs
Designing similar or different elements in a fashion that allows them to fit into each other or to be packed closely reduces package size and transport costs. Guideline: space-efficient stacking (without sticking together) and using voids in order to reduce pack size. Disadvantageous: housing elements that need a lot of space. Advantageous: housing elements that fit into each other.

Figure 14: Reducing transport volumes

7.5 Adapted to tariffs and taxes
If technical components or assemblies are subject to different tariffs, an adapted design of the supply and product structures can pay off. Guideline: influence the official classification of import articles by means of design (adapted to tariff and tax regulations). Disadvantageous: a loose bearing with a tariff rate of 8%. Advantageous: the bearing as part of an assembly with a tariff rate of 6%.

Figure 15: Adapting the design to tariff and tax regulations
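To make the tariff example above concrete, the following minimal Python sketch compares the duty per unit under the two classifications; the customs value is a hypothetical assumption, while the tariff rates are those shown in Figure 15.

unit_value = 12.0        # EUR, assumed customs value of the bearing
rate_loose = 0.08        # 8% tariff: bearing imported loose
rate_assembly = 0.06     # 6% tariff: bearing imported as part of an assembly

duty_loose = unit_value * rate_loose
duty_assembly = unit_value * rate_assembly
print("duty per unit, loose: %.2f EUR" % duty_loose)
print("duty per unit, in assembly: %.2f EUR" % duty_assembly)
print("saving per unit: %.2f EUR" % (duty_loose - duty_assembly))

Repeated over the annual sourcing volume, such a calculation indicates whether redesigning the supply structure for a different customs classification pays off.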

7.6 Adapted to the respective coordination needs
It is preferable to design whole modules in line with low-cost country sourcing: a multitude of individual sourcing units, produced by different manufacturers, leads to higher costs for coordination, support and logistics. Guideline: whole modules instead of individual parts are preferably defined and designed for low-cost country sourcing (adapted to the respective coordination needs).

Figure 16: Sourcing complete modules

7.7 Adapted to anti-piracy needs
A product is to be segmented in such a fashion that no manufacturing unit on its own runs the risk of being copied. Guideline: segment the product into manufacturing units which are of interest to copycats only in their combination (adapted to the respective risks).

Figure 17: Designing key components

7.8 Adapted to the respective dynamics
If the long-term availability and the price development of a raw material are subject to uncertainty, alternatives should be in place from the outset so that the supplier can continue production with alternative materials or semi-finished products at low cost. Guideline: provide an alternative material as an option, e.g. an "emergency solution" designed with a backup material such as ABS or SAN; compatible thermoplasts which can serve as alternatives can be processed on the same machine.

Figure 18: Considering an alternative design
8 SUMMARY AND OUTLOOK
In times of globalisation, the increasing importance of low-cost country sourcing needs to be taken into account. Realising cost-saving potentials while maintaining the required level of quality can only be achieved with a systematic and methodical approach. Basic principles and design guidelines are necessary in order to adapt products to the capabilities of the supplier and to the conditions in low-cost countries. This article first presented the motivation for and the aim of constructing a design for low-cost country sourcing. The major factors influencing the design were then identified, and from these, basic principles for an adapted product design were deduced. These basic principles were finally illustrated by examples and design guidelines. The introduced design guidelines are currently being applied and validated at one of the world's leading manufacturers of sensors, safety systems and automatic identification products for industrial applications; the company is developing a new product which will be sourced from China. In addition, the current research aims at delivering a comprehensive framework which follows the stages of the general product design process as described, e.g., in [22]. For each design stage, a step-by-step procedure shall guide the user to solutions which fit the needs of low-cost country sourcing without neglecting the numerous other objectives and constraints of the design process. This will finally result in a holistic, flexible and expandable Design for Low-Cost Country Sourcing methodology.

9 REFERENCES
[1] Pfeifer, T., 2001, Qualitätsmanagement: Strategien, Methoden, Techniken, 3rd Edition, München, Hanser
[2] Pfefferli, H., 2002, Lieferantenqualifikation – die Basis für Wettbewerbsfähigkeit und nachhaltigen Erfolg, Expert
[3] Heberling, M.E., Carter, J.R., Hoagland, J.H., 1992, An investigation of purchases by American businesses and governments, International Journal of Purchasing and Materials Management, Vol. 28, No. 4, p. 39-45
[4] Chapman, T.L., Demsey, J., Ramdsdell, G., Reopel, M.R., 1997, Purchasing – No time for lone rangers, The McKinsey Quarterly, Vol. 34, No. 2, p. 30-40
[5] Bain & Company, 2002, Einkaufsstrategien – Herausforderungen für Top Manager, Results – Bain & Company, Journal 03
[6] Fitzgerald, K.R., 2005, Big Savings, But Lots of Risk, Supply Chain Management Review, December 2005, p. 16-20
[7] Hemerling, J., Lee, D., 2007, Sourcing from China – Lessons from the Leaders, BCG Focus Report July
[8] Vlcek, J., 2006, Risk Management for Business with Low-Cost Countries (LCC), European Centre for Research in Purchasing and Supply, Vienna
[9] Kerkhoff, G., 2005, Zukunftschance Global Sourcing, WILEY-VCH Verlag, Weinheim
[10] Krokowski, W., 1998, Globalisierung des Einkaufs: Leitfaden für den internationalen Einkäufer, Springer, Berlin, Heidelberg, New York
[11] Piontek, J., 1997, Global Sourcing, R. Oldenbourg, München
[12] Gruschwitz, A., 1993, Global Sourcing – Konzeption einer internationalen Beschaffungsstrategie, M und P Verlag für Wissenschaft und Forschung, Stuttgart
[13] Harland, C., Brencheley, H., Walker, H., 2003, Risk in supply networks, Journal of Purchasing and Supply Management, Vol. 9, No. 2, p. 51-62
[14] Kaufmann, L., 2001, Internationales Beschaffungsmanagement – Gestaltung strategischer Gesamtsysteme und Management einzelner Transaktionen, Gabler, Wiesbaden
[15] Würstl, J., 2006, Risiken der Low-Cost-Country-Beschaffung – Global Sourcing aus China, Beschaffung aktuell, 10, p. 28
[16] Bogaschewsky, R., 2005, Einkaufen und Investieren in China, BME-Leitfaden Internationale Beschaffung, Band 2, Centrum für Supply Management GmbH
[17] Granier, B., Brenner, H., 2004, Business-Guide China: Absatz-Einkauf-Kooperation, Wolters Kluwer Deutschland GmbH, München/Unterschleißheim
[18] Fleischer, J., Wawerla, M., Weiler, S., 2006, Entwicklung von Low-Cost Lieferanten, Produktion, Nr. 37
[19] PricewaterhouseCoopers, 2008, Beschaffungslogistik im China-Geschäft – Kosten, Prozesse, Strategien, Bundesverband Materialwirtschaft, Einkauf und Logistik e. V., Frankfurt am Main
[20] Pulic, A., 2004, Einkaufsgeschäfte in den Niedriglohnländern boomen, Procurement Letter 12
[21] Ehrlenspiel, K., 2007, Cost-Efficient Design, 1st Edition, Springer, Berlin, Heidelberg
[22] Pahl, G., Beitz, W., 2006, Engineering Design – A Systematic Approach, 3rd Edition, Berlin, Heidelberg, Springer
[23] Abele, E., 2008, Global Production: A Handbook for Strategy and Implementation, Berlin, Heidelberg, Springer
[24] Lockström, M., 2007, Low-Cost Country Sourcing: Trends and Implications, 1st Edition, Wiesbaden, Deutscher Universitäts-Verlag
[25] Ehrlenspiel, K., 2003, Integrierte Produktentwicklung: Denkabläufe, Methodeneinsatz, Zusammenarbeit, 2nd Edition, Wien, Hanser
[26] Kuo, T.-C., Huang, S.H., Zhang, H.-C., 2001, Design for manufacture and design for 'X': concepts, applications, and perspectives, Computers & Industrial Engineering, 41, p. 241-260, Elsevier
[27] Ashby, M.F., 2005, Materials Selection in Mechanical Design, 3rd Edition, Amsterdam, Heidelberg, Elsevier
[28] Muhs, D., et al., 2003, Maschinenelemente, 16th Edition, Vieweg, Wiesbaden
[29] Ehrenstein, G.W., 2007, Mit Kunststoffen konstruieren, 3rd Edition, München, Hanser
[30] Andreasen, M.M., Kähler, S., Lund, T., 1983, Design for Assembly, IFS Publications, Berlin, Heidelberg, Springer
[31] Kasperk, G., Woywode, M., Kalmbach, R., 2008, Erfolgreich in China: Strategien für die Automobilzulieferindustrie, Springer, Berlin
[32] Aulig, T.G., 2008, Wirtschaft und Bildung in der VR China: Die Qualifikationsanforderungen des Kfz-Handwerks in der VR China – aufgezeigt an Fallstudien in Lanzhou und Weifang, Vdm Verlag Dr. Müller
[33] Blume, A., 2006, Haifischbecken China: Know-how-Abfluss und Gegenmaßnahmen, In: Seminar »Produktpiraterie – Was tun?«, Fraunhofer IPT und wzl forum, Aachen, 18. Mai 2006
[34] Schuh, G., 2006, Produktpiraten den Wind aus den Segeln nehmen – Hintergründe und Praxisbeispiele, In: Seminar »Produktpiraterie – Was tun?«, Fraunhofer IPT und wzl forum, Aachen, 18. Mai 2006
[35] Hahn, V., 2006, Erfolgreicher Marken- und Produktschutz – Strategie und Praxis am Fallbeispiel NIVEA, In: Seminar »Produktpiraterie – Was tun?«, Fraunhofer IPT und wzl forum, Aachen, 18. Mai 2006
[36] Große-Heitmeyer, V., 2006, Globalisierungsgerechte Produktstrukturierung auf Basis technologischer Kernkompetenz, Dissertation, Universität Hannover, Garbsen
[37] Neemann, C.W., 2007, Methodik zum Schutz gegen Produktimitationen, Shaker, Aachen

Design Rework Prediction in Concurrent Design Environment: Current Trends and Future Research Directions
P. Arundachawat, R. Roy, A. Al-Ashaab, E. Shehab
Decision Engineering Centre, Cranfield University, Cranfield, UK, {p.arundachawat, r.roy, a.al-ashaab, e.shehab}@cranfield.ac.uk

Abstract
This paper presents the state of the art and formulates future research areas on design rework in the concurrent design environment. The related literature is analysed to extract the key factors which impact design rework. Design rework occurs due to changes from upstream design activities and/or feedback from downstream design activities. Design rework is considered negative iteration; therefore, the value of design activities increases if design rework is reduced. Set-based concurrent engineering is proposed as an alternative design approach to mitigate design rework risk; however, the duplicated effort of designing a set of artefacts still needs to be considered before selecting set-based concurrent engineering for design activities.

Keywords: Design Rework; Concurrent Engineering; Literature Review

1 INTRODUCTION
In a concurrent design environment, downstream activities such as manufacturing are involved early. This earlier involvement in concurrent engineering is realised by overlapping design activities, and design lead time can be reduced by overlapping design tasks. However, overlapping among design tasks can cause design rework, and this rework is embedded in the overall product development lead time. Therefore, design rework activities need to be taken into consideration when estimating the design development period. In general, design rework is considered part of the iteration in every product development project; however, it is negative iteration [1]. Understanding the characteristics of design rework is beneficial for planning design activities. In the design phase, Gantt charts or project planning networks, i.e. the project evaluation and review technique or the critical path method (PERT/CPM), are commonly used for project planning. Within project planning, design duration is assumed to be given, and the duration of each task normally includes rework. The value of every project increases if rework is removed. The focus of this paper is to review the related literature on design rework; the analysis of the causes of design rework and of the methods to estimate design rework is the outcome of this paper. The paper is structured as follows. The definition of design rework is presented in Section 2. The influence of exchanging preliminary design information on design rework is discussed in section 3.2. In sections 4.1 to 4.3, information exchanges are discussed based on the types of overlapping tasks. Tools for modelling design activities that consider overlapping, together with design rework prediction techniques, are explored in section 4.4. Within these techniques, the impacts of design rework are concluded and discussed in tables 1 and 2; moreover, the factors used to explain the upstream changes or downstream feedback which impact design rework are summarised in table 3. Set-based concurrent engineering is introduced to clearly reflect the disadvantages of design rework. Finally, conclusions and implications for future research are presented.


2 DEFINITION OF REWORK
In the construction context, rework is considered the source of deviation between planned and actual progress in a construction project [2]. Cooper [3] defined rework as errors found by downstream activities; the interesting point he made is that rework might be found years after a project has finished. Love [4] concluded that rework is the unnecessary effort of re-doing a process or activity that was incorrectly implemented the first time. He collected definitions from previous studies, most of which agree on the common definition of a quality deviation from expectation. Errors, omissions, failures, damage and change orders throughout the procurement process are identified as causes of rework; rework in the construction and building context is thus concerned with quality issues. In the concurrent design context, the exploitation of preliminary information helps to reduce the lead time of product development. However, rework arises from the updating of un-finalised design information [5]: incomplete information tends to change at a later stage, so it is necessary to optimise this issue. Costa and Sobek [6] identified rework as a repetition of design at the same abstraction level, where the reason for the repetition is to correct errors. Rework is caused by: (i) receiving new information from overlapped tasks after starting to work with preliminary inputs; (ii) probabilistic changes of inputs when other tasks are reworked; and (iii) probabilistic failure to meet the established criteria [7]. Rework is thus a result of proceeding with tasks in parallel while using preliminary information. Yassine et al. [9] developed a clear graphical representation of the preliminary information exchange which leads to design rework in a concurrent design environment, as shown in figure 2.

They emphasise that downstream rework occurs due to the possibility of upstream design changes during overlapping design activities. Another dimension of rework is the required repetition of a task because it was originally attempted with imperfect information [10]. Moreover, in concurrent design, upstream rework happens because of faults detected by a downstream activity, while downstream rework occurs due to the uncertainty of preliminary information given from upstream; this framework combines the quality deviation and information change aspects of rework [11]. In this paper, design rework is defined as the unnecessary repetition of design effort. Design rework occurs because of influences from other tasks [7], which are considered dependencies among design tasks in a concurrent engineering environment. Furthermore, design rework is uncertain or stochastic in nature [8]. Krishnan et al. [15] and Loch and Terwiesch [8] are the two most cited works in the design rework estimation area. Lead time is assumed to converge linearly with each rework; therefore, changing customer requirements, which are not linear in nature, are excluded from this area. Graphical representations of rework in a concurrent design environment are shown in figures 2b and 2c. Design rework among tasks is explained in detail in section 4.

3 PRELIMINARY INFORMATION EXCHANGE IN CONCURRENT DESIGN ENVIRONMENT
3.1 Overlapping activities
Overlapping among tasks is a major characteristic of concurrent engineering and represents the early involvement of constituents [12]. The important characteristic of overlapping is the early release of preliminary information. The advantage of overlapping is that it reduces product lead time and improves product innovation capabilities. Figure 1 shows the total lead time reduction of the concurrent development approach compared with the sequential approach.

Figure 1: Comparison between the sequential and the concurrent product development approach

The concurrent design approach drives design activities to release preliminary information, so that downstream activities can detect faults and give feedback in order to solve problems earlier. In the sequential design approach, by contrast, each activity is assumed to be complete before its output is released to the next activity. For instance, if the manufacturing team waits until the design team has completed its design, faults are more expensive and time-consuming to resolve than faults found beforehand in the design stage itself. This feedback of design failures causes upstream design rework.

3.2 Risks of using preliminary information
The risk of using preliminary information in overlapping design tasks is that the information changes; the changes stem either from customer changes or from the evolution of the design. In Figure 1, the design activity provides a 'draft' design to the manufacturing activity so that development of the design tooling can start earlier; however, this draft design is likely to change or be updated over time. Eastman [13] claimed that using information released in advance raises the issue of rework due to obsolete data, extra time and effort to prepare the release, and extra delay due to confusion, as well as biasing the upstream team towards the conservative side of tolerances and specifications; this bias creates manufacturing difficulties and additional costs. Terwiesch et al. [14] reported that up to 50% of total engineering capacity has to be spent on resolving rework issues. One cause of rework is the updating of preliminary information in the concurrent design approach: for example, a preliminary CAD drawing may change after prototype testing, and the manufacturing phase then needs to redo some work based on the changes. The various factors that impact design rework are explained in section 4.

4 DESIGN REWORK ESTIMATION
4.1 Classification of overlapping tasks
The overlapping of tasks can be classified into three types: independent, dependent and interdependent [9]. The graphical representation of overlapping tasks is shown in Figure 2.

Figure 2: Patterns of task execution: a) independent task execution, b) interdependent task execution, c) dependent task execution

The advantage of independent overlapping execution is that any task can start freely (see Figure 2a); therefore, the lead time reduction of the concurrent approach can be gained in full. Interdependent overlapping execution is defined as tasks interacting with each other, so a change in one task will cause rework in others. Figure 2b illustrates that a change in the final information from task A causes rework XB; task A would also be reworked if there is feedback from task B to task A. The update of information between tasks A and B induces rework, in the couples A1-B1 ... An-Bn, until the mutual results are satisfactory. Finally, dependent overlapping execution represents the dependency of the downstream design task on the upstream design task only, as shown in Figure 2c.


The independent overlapping execution among tasks is preferable for the product design phase. Avoiding interdependent and dependent design relationships is preferred, but it is sometimes hard to achieve [15]. The design structure matrix (DSM) is a well-accepted tool for dealing with complex design activities, especially interdependent tasks [16]. The DSM helps to re-sequence activities to avoid coupled tasks through a process called partitioning; however, the new task arrangement is sometimes not achievable in reality [17]. Dependency among tasks is the major cause of design rework. Independent overlap among tasks is preferred in product development because a change in one activity will not impact the others.

4.2 Design rework in dependent overlapping tasks
Krishnan et al. [15] described that exchanging information with downstream early can cause unnecessary iteration due to the dependency of the downstream activity on preliminary information from upstream. The dependency is explained by upstream information evolution and downstream sensitivity: upstream information evolution refers to the rate at which the exchanged information reaches its final form, while downstream sensitivity measures the duration of downstream work required to accommodate changes in the upstream information. Downstream rework will be large if a downstream activity is very sensitive to changes from upstream activities. However, this work models the interaction of two activities only, while there are many activities in a product development process. Downstream rework occurs because the overlapping process makes the downstream design task rely on preliminary information [8]; this scenario is accounted for as uncertain information. The organisation's capacity plays an important role in reducing uncertainty during pre-communication before the design processes start. Overlapping is risky if the upstream information may change substantially or if a strong dependence exists between activities; changes in upstream information may then cause downstream rework. The delay due to rework needs to be traded off against the time gained from overlapping. The amount of downstream rework also depends on how far downstream has already progressed, so rework is influenced by the overlapping period and the pre-communication intensity. Again, this paper models rework for two overlapped activities only. Smith and Eppinger [18] combined the benefits of the design structure matrix and the reward Markov chain technique to estimate lead time. A Markov chain predicts the future probabilities of occurrences by analysing the present known probabilities, and the design structure matrix is used to identify the dependency strength among product design tasks, known as repeat probabilities. This work does not mention the factors influencing design rework, and it does not discuss extensively the mechanism by which rework occurs, but it is a very good example of considering multi-task overlapping in a product development project. Xiao and Si [19] combined the concept of evolution and sensitivity in information transfer [15] with the awareness of information uncertainty [8] to develop an information exchange methodology.
The main contribution of that paper is to show that exchanging information in batches can help to lower the risk of downstream rework.
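The reward-Markov-chain idea of Smith and Eppinger [18] discussed above can be illustrated with a minimal Python sketch; the transition probabilities and task durations below are hypothetical assumptions, not values from the cited work.

import numpy as np

# Two coupled design tasks A and B; Q[i, j] is the probability that
# completing task i sends task j back into (re)work. Numbers are assumed.
Q = np.array([[0.0, 0.4],    # after task A, task B must be (re)worked with prob. 0.4
              [0.3, 0.0]])   # after task B, task A must be reworked with prob. 0.3
d = np.array([5.0, 3.0])     # expected duration of one pass of each task (days)

# Fundamental matrix N = (I - Q)^-1: N[i, j] is the expected number of
# executions of task j when the process starts with task i.
N = np.linalg.inv(np.eye(2) - Q)
lead_time = N @ d            # expected lead time including all rework
print("expected executions starting from A:", np.round(N[0], 2))
print("expected lead time starting at task A: %.2f days" % lead_time[0])

The expected lead time exceeds the nominal 8 days of one pass of each task; the surplus is exactly the rework embedded by the repeat probabilities.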


Yassine et al. [9] defined downstream design rework as arising from the overlapping period and design changes. The downstream design task begins with the knowledge accumulated by the upstream design task, and this knowledge accumulation is represented by a probability. The design change is calculated as a weighted average of the probabilities of drastic and small design changes; however, the probabilities of knowledge accumulation and of drastic and small design changes come from expert judgement. The value of this work is the clarification of overlapped execution among design tasks, classified as dependent, independent and interdependent overlapped execution, although the model presented shows results for dependent overlapped execution only. Luh et al. [20] defined a repeated design task as an uncertain task whose occurrence can be represented by a probability; the criteria for defining the occurrence are, however, not explained in detail. Roemer et al. [21] provided a model to calculate the time and cost trade-offs of overlapping product development. The extended design time is recognised as a major risk of the overlapping approach: the downstream rework is called extended design time and is caused on the evolution and sensitivity basis, a concept taken from [15]. The probability of rework occurring is a non-decreasing function of the overlapping period; the procedure for extracting this probability is not explained in detail. Chakravarty [22] used the risk of design modification to predict downstream rework. This risk is defined by the probability of incompatibility in the design, and the amount of rework is the product of a mapping function analogous to sensitivity, the standard unit time for design or build, and the risk of having to modify the design work. The critical issue is that the factors influencing design incompatibility are not considered in this work. Roemer and Ahmadi [23] integrated the probability-of-rework function from [21] with an impact function. This work attempts to relate upstream evolution and downstream sensitivity to rework; in addition, work intensity is considered as an approach to relieve the impacts of rework. The interactions between work intensity and overlapping are used to calculate design lead time and cost; in this case, the probability of rework is a function of upstream achievement. Cho and Eppinger [7] assumed that task rework occurs for the following reasons: (1) new information is obtained from overlapped tasks after starting to work with preliminary inputs, (2) inputs change when other tasks are reworked, and (3) outputs fail to meet established criteria. The valuable part of this work is the classification of rework into feedback rework and feed-forward rework. Feedback rework is caused by the failure of a downstream task to meet the established criteria, so the upstream design task needs to be reworked; feed-forward rework occurs when a downstream task must be reworked due to new information generated upstream. Together, this couple is called iterative rework. Since the development process converges to its final solution through iterative rework, there are fewer and fewer chances that new information is generated and errors are discovered; therefore, the rework probability tends to decrease with every iteration. This idea is coherent with the work from [9] on modelling interdependent tasks.
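The decreasing rework probability described in [7] can be made concrete with a small Monte Carlo sketch in Python; the base duration, initial rework probability, decay factor and redo fraction below are illustrative assumptions, not values from the cited works.

import random

def task_duration(base=10.0, p_rework=0.5, decay=0.6, redo_fraction=0.4):
    # One simulated run of a task whose rework probability decays as the
    # design converges: each triggered rework redoes part of the task and
    # makes a further rework less likely.
    days, p = base, p_rework
    while random.random() < p:
        days += base * redo_fraction
        p *= decay
    return days

random.seed(1)
runs = [task_duration() for _ in range(10000)]
print("expected duration incl. rework: %.2f days" % (sum(runs) / len(runs)))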
However, the probability of rework and the expected duration of rework are estimated from experience. The cascading of rework through a product development process is also a recurring issue [24]; this knowledge is used to trade off cost and schedule risk.

In that work [24], product development is modelled as a network of tasks, so the output of one task is an input to another. Each rework is caused by a change in a particular input; an input change is caused either by the closest upstream task itself or by an impact from a task further upstream. The probability of an input change is then the product of the probability of upstream change (volatility) and the probability that a typical change from the upstream activity causes rework (sensitivity). However, all probabilities used in this model are obtained from experience. Jun et al. [25] modelled an entire product development process including all task patterns found in reality: feedback, branch and merge, no overlap, interaction, overlap, cycle and communication. The occurrence of downstream rework is estimated by a non-homogeneous Poisson process, fine-tuned using similar historical projects, while the amount of rework is estimated based on the sensitivity concept. Overlapping of design tasks is not without risks or costs [26]; some of the risks and costs are associated with overlapping because it introduces incomplete information into design tasks. Incomplete information in overlapping stems from freezing design criteria early and from the early release of preliminary information and prototypes. Although this work suggests an approach to deal with evolution and sensitivity among design activities, it does not provide a framework showing the amount of rework that could be reduced. Yassine et al. [5] used dynamic programming to estimate lead time with an optimal information transfer policy: too much information can extend lead time unnecessarily. Rework is caused by time spent on outdated information, the type of change (major or minor), and the degree of sensitivity. The probability of rework is fed into a Monte Carlo simulation to obtain the total rework cost incurred; again, all probabilities used in this model come from experience. The conclusions on the factors impacting design rework for dependent tasks are shown in Table 1.

4.3 Design rework in interdependent overlapping tasks
Smith and Eppinger [10] developed the work transformation matrix (WTM), an extension of the DSM, to model the design iteration process. The concept of eigenvalues is used to estimate rework time, and the WTM can model interdependent relationships among design tasks; the dependency data between tasks are, however, provided by experienced engineers. A minimal numeric sketch of this model follows.
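The following Python sketch illustrates the WTM calculation under hypothetical numbers; the matrix entries are assumptions for illustration, not data from [10].

import numpy as np

# Hypothetical 3-task work transformation matrix: A[i, j] is the fraction
# of task i's work that must be redone when task j completes an iteration.
A = np.array([[0.0, 0.3, 0.1],
              [0.2, 0.0, 0.4],
              [0.1, 0.2, 0.0]])
u0 = np.ones(3)  # each task starts with one unit of work

# The iteration u_{t+1} = A u_t converges only if the dominant eigenvalue
# of A is below 1; this eigenvalue governs how quickly rework dies out.
lam = max(abs(np.linalg.eigvals(A)))
print("dominant eigenvalue: %.3f" % lam)

# Total work including all rework: sum over t of A^t u0 = (I - A)^{-1} u0
total = np.linalg.solve(np.eye(3) - A, u0)
print("total work per task (initial work = 1.0):", np.round(total, 3))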

Yan and Wu [27] introduced key factors such as time, order, information, resources and overlapping time. All these key factors are optimised by a genetic algorithm (GA) before being fed into heuristic and dynamic mechanisms for scheduling purposes. An upstream design failure found downstream is a feedback taken into account for rescheduling in their model; however, the relationships describing the mechanism of rework in upstream and downstream tasks are not covered. Yan et al. [28] developed a branch-and-bound algorithm with a heuristic rule for minimising the design lead time of a concurrent product-process design activity pair. The ability of process design to discover faults in the product design is considered a factor impacting rework. The authors provide a methodology to estimate the mean duration, but the detailed relationships between upstream and downstream are not presented. Joglekar et al. [29] explored the performance of coupled development activities by proposing a performance generation model (PGM). Optimal strategies (i.e., sequential, concurrent, or overlapped) are developed with the aim of managing coupled design activities; this work is driven by a fixed amount of engineering resources and deadline constraints. The coupling, or interdependency, is modelled as performance deterioration in one task due to the rework generated by the other task, and the method of calculating rework in upstream and downstream tasks is borrowed from [10]. Wang and Yan [30] modelled the iteration between an upstream product design activity and several downstream process design activities, with the optimisation of time and cost as the goal. The distribution of fault detection by downstream tasks is classified as a non-increasing convex or non-increasing concave function, and the functions of upstream changes are assumed to follow the same trend; this model provides the optimisation framework between time and cost of product development only. Mitchell and Nault [32] set out to prove that cooperative planning can reduce uncertainty in a concurrent design environment. Upstream rework is impacted by a lack of firm experience with such projects, while upstream changes impact downstream rework, which causes project delay. Rework is defined in a frequency dimension (number of change iterations) and a magnitude dimension (amount of change) relative to the original design. They studied 120 business process (BP) redesign and IT development projects in the healthcare and telecommunications sectors, where upstream BP design and downstream IT platform design are interdependent. The relationship is formulated using a partial least squares (PLS) model and a magnitude of rework; the types of design change range from (1) incremental, (2) modular and (3) architectural to (4) radical. The reasons for design changes are, however, not covered in this work. The conclusions on the factors impacting design rework for interdependent tasks are shown in Table 2. In summary, downstream design rework is impacted by upstream changes, while upstream design rework occurs due to feedback from downstream. The factors affecting design rework are combined and classified into seven groups, as shown in Table 3. Pre-communication is a factor that reduces upstream uncertainty [8]; furthermore, crashing, i.e. increasing work intensity, is a solution to compensate for design rework [23], and these two factors are used to reduce the impacts of rework. Moreover, Terwiesch et al. [14] and Bogus et al. [26] proposed using set-based design to avoid downstream design rework; details are discussed in section 5.

4.4 Methods to estimate design rework
There are three approaches to acquiring design rework estimates: direct experiment, mathematical modelling and simulation [31]. However, only the simulation approach is present in the literature: direct experiment is hard to achieve in reality, and mathematical modelling requires precise data, which product development does not offer [31]. The tools desired should therefore represent the real-world situation in product development, in which overlapping is a compulsory characteristic to be represented. The product development process is dynamic and stochastic, so methods to estimate design rework should be able to deal with this stochastic nature. Table 1 summarises the literature related to dependent overlap execution, while Table 2 covers interdependent overlap execution; the second column lists the factors impacting rework, and the third and fourth columns give the criteria used to define the factors and the estimating methods, respectively. The factors in Tables 1 and 2 are consolidated in Table 3.
Design rework originates either from upstream changes or from downstream feedback.


Authors | Factors impacting rework | Criteria to define factors | Estimating methods
Hoedemaker et al. [33] | Project complexity | Risk of unsuccessful integration | Stochastic model
Krishnan et al. [15] | Upstream information; downstream iteration | Evolution; sensitivity | Non-linear program
Loch and Terwiesch [8] | Preliminary information; uncertainty; dependency; pre-communication | Rate of upstream changes; reduction in rate of change; impact of modifications on the downstream task; meetings before the development work starts | Non-linear program
Yassine et al. [9] | Overlapping; type of task dependency; engineering change; approach used | Level of overlapping (independent, dependent, interdependent); major/minor change; sequential, overlapped or concurrent (fully overlapped) approach | Stochastic model
Roemer et al. [21] | Incomplete information transferred in overlapping tasks | Probability of incorrect prediction for the updated design | Stochastic algorithm
Chakravarty [22] | Information exchange; design incompatibility | Probability density function | Optimisation model
Yassine et al. [34] | Change in information | Probability of change | DSM
Browning and Eppinger [24] | Change in particular inputs | Probability of direct input change; probability of change from far upstream | DSM
Terwiesch et al. [14] | Coordination among coupled tasks | Early released information; uncertainty | Qualitative framework
Xiao and Si [19] | Information exchange; uncertainty | Evolution degree; progress in downstream work | Non-linear program
Roemer and Ahmadi [23] | Upstream evolution; downstream sensitivity; incomplete information; crashing | Probability of rework; work intensity | Stochastic algorithm
Cho and Eppinger [7] | Update of preliminary information; impact of rework from far upstream; outputs failing to meet established criteria | Probability of rework | DSM; advanced simulation
Jun et al. [25] | Information exchange; overlapping; sensitivity; evolution of upstream information | Intensity (non-homogeneous Poisson distribution); information and number of related activities; sensitivity | Analytical model
Bogus et al. [26] | Lack of design optimisation; insufficient design information; sensitivity | Degree of overlapping; sensitivity function; robustness | Qualitative framework
Yassine et al. [5] | Using outdated information | Time spent using outdated information; major/minor change | Dynamic programming model

Table 1: Conclusion of factors and methods to estimate design rework for dependent overlap execution

From Tables 1 and 2, the methods can be classified into qualitative and quantitative frameworks. The qualitative frameworks are the works from [14] and [26]; both suggest suitable scenarios for point-based concurrent engineering and set-based concurrent engineering to eliminate rework impacts, and their explanations describe how factors impact rework rather than predicting it. The quantitative frameworks can be grouped into three groups. The first tries to classify which criteria impact upstream or downstream rework [32] by using a statistical technique. The second and third groups are related to the prediction of design rework; the second group ([8], [15], [10], [11]) is defined as prediction tools for simplified two-task settings.


Authors | Factors impacting rework | Criteria to define factors | Estimating methods
Smith and Eppinger [10] | Coupled tasks; imperfect information received | Rework proportion | WTM
Yan and Wu [27] | Upstream design failure found by downstream; overlapping (pre-released information); feedback | Weighting factor | Heuristic scheduling; genetic algorithm (GA)
Joglekar et al. [29] | Coupled tasks; concurrency | Rework proportion | Performance generation model
Yan et al. [28] | Risk of late detection of upstream design faults; uncertainty in input information received from upstream | Probability of fault detection | Heuristic rules; branch-and-bound algorithm
Wang and Yan [30] | Downstream design discovering faults in the upstream design | Poisson process of fault discovery; magnitude of design iteration | Probability-theory-based method (for estimating task duration); one-dimensional search algorithm
Mitchell and Nault [32] | Lack of experience (impacting upstream rework); uncertainty; cooperative planning (impacting downstream rework) | Amount of cooperative planning | Survey research (quantitative); seven-point Likert scale

Table 2: Conclusion of factors and methods to estimate design rework for interdependent overlap execution

Factor | Detailed explanation
1. Project complexity (integration issue) | Product based
2. Upstream design changes: evolution (speed of continuous change) | Product based; experience of designers; design support; technology
3. Degree of change/updating of the design (discrete changes): major or minor change | Organisation based
4. Changes from far upstream (knock-on effect) | Product based
5. Amount of overlapping tasks | Planning based
6. Dependency of tasks: sensitivity | Product based; process based
7. Faults found by downstream: chronology (early, late) | Experience of designers; design support; technology
Pre-communication (reduces rework impact) | Organisational capability
Crashing (reduces rework impact) | Increase of resources

Table 3: Conclusion of factors impacting design rework

Finally, the third group comprises methods to predict rework for multi-stage overlapping. Here, however, rework is assumed to be given, either as a probability distribution or as rework factors; the literature in this area therefore accounts for rework in lead time estimation rather than predicting it. The literature in this area can be clustered into DSM-based and non-DSM-based works. Works based on the DSM are [7], [24], [10] and [34], while the others are [5], [21], [22], [25], [27], [28], [29], [30] and [32]. The non-DSM-based tools implement discrete-event simulation techniques such as stochastic algorithms, dynamic programming models, branch-and-bound algorithms, genetic algorithms, heuristic algorithms, etc. (details in Tables 1 and 2) to predict the lead time of product development. Table 3 consolidates the rework-initiating factors from Tables 1 and 2. They are considered from the point of view of a particular design activity.


For example, if designers are designing a subsystem, their solution could be impacted either by the predetermined subsystems (upstream) or by the subsequent subsystems (downstream). Coordination and communication among team members are therefore critical in product design and development: if the members do not coordinate to solve integration issues and communicate their results to each other, rework in the project can increase unnecessarily.

Set-based concurrent engineering has been proposed to solve the downstream rework issue [26], as it directly eliminates the dependency of downstream on upstream activities (details in factor 4, Table 3). Set-based concurrent engineering is claimed to be one key to the success of the Japanese compared with the US auto industry. However, the change from the 'point' to the 'set' paradigm needs further investigation.

5. REDUCING RISK OF REWORK BY SET-BASED CONCURRENT ENGINEERING
Changes in the upstream phases of product development processes can alter not only upstream but also downstream activities. Sources of change can be either higher-level strategic decisions (market changes) or operational decisions (technical constraints) [35]. The variation of design solutions due to all these sources of change is accounted for as uncertainty in product development [8]. Ward et al. [36] revealed how TOYOTA's automotive development teams deal with uncertainty in car development. They also point out that the development lead time at TOYOTA is significantly shorter than at the other US automotive manufacturers. The key difference is that TOYOTA works with a set of possible design solutions and narrows the alternatives down in parallel until a satisfactory design is achieved, while the US manufacturers use only one solution; if there are mistakes, they have to either resolve them with a great deal of rework or find a new solution to work with. Because the US practice relies on one single 'point', it can be described as point-based concurrent engineering. Ward et al. [36] conclude that the set-based paradigm is of key importance for dealing with the uncertainty and ambiguity that occur in automotive development. Terwiesch et al. [14] argued that set-based concurrent engineering is suitable not only for uncertainty but also for ambiguity; however, design planners need to weigh the duplication cost and information starvation cost against the rework cost, an issue that still needs to be examined mathematically. Bogus et al. [26] proposed using set-based concurrent engineering to reduce the sensitivity of downstream design activities. The findings of [36], however, contrast with this proposal, because TOYOTA development teams tend to use set-based design at all levels of car development projects by using many prototypes. Ford and Sobek [37] compared the development time and cost of set-based and point-based concurrent engineering using the real-options concept. Product development is divided into three phases: conceptual design, system design and detail design. Two probability types are needed for the calculation, the probability of initially generating a change and the probability of discovering the need for a change; each design phase is assigned probability values, which are then put into the real-options calculation. The convergence time results are calibrated with survey data from industry. Time convergence can thus be modelled, but the detailed characteristics of set-based concurrent engineering are not mathematically modelled in this work. Various aspects need to be considered before implementing set-based concurrent engineering, e.g. the performance of team members: engineers at TOYOTA work on more than one vehicle development project [36]. Other aspects are the involvement of suppliers, the control exercised by chief engineers, the organisation, etc. Therefore, the comparison of cost and time between set-based and point-based concurrent engineering needs to consider the whole context of the product development process.
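The trade-off named by Terwiesch et al. [14], duplication cost against rework cost, can be sketched with a toy expected-value comparison in Python; all cost and probability figures are invented for illustration, and the model ignores the information starvation cost and schedule effects discussed above.

# Toy comparison of point-based vs. set-based concurrent engineering.
p_rework = 0.6             # prob. the single 'point' solution needs rework (assumed)
rework_cost = 80.0         # cost of reworking the point solution (k EUR, assumed)
base_cost = 100.0          # cost of developing one design alternative (k EUR, assumed)
n_alternatives = 3         # size of the design set in the set-based approach
duplication_factor = 0.35  # each extra alternative costs a fraction of a full design

cost_point = base_cost + p_rework * rework_cost
cost_set = base_cost + (n_alternatives - 1) * duplication_factor * base_cost
print("expected cost, point-based: %.1f k EUR" % cost_point)
print("expected cost, set-based:   %.1f k EUR" % cost_set)

Under these assumptions the set-based approach is more expensive; it pays off only when the expected rework cost (p_rework times rework_cost) exceeds the duplication cost, which is exactly the comparison the design planner must make.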

6. CONCLUSIONS AND IMPLICATIONS FOR FUTURE RESEARCH
Most of the literature uses simulation-based approaches to illustrate the design rework embedded in design lead time. The major finding is that most works prefer to put a 'probability' of change or feedback into their models to represent rework, while the factors impacting rework are explained qualitatively rather than expressed mathematically in the model. Furthermore, the context of implementing concurrent engineering, such as the performance of team members, the involvement of suppliers, the control exercised by chief engineers, the organisation, the tools used, etc., needs to be considered. The concurrent engineering approach could be implemented with higher efficiency in design activities if rework were lowered. The sources of design rework in concurrent engineering are upstream design changes and downstream feedback. Furthermore, rework particularly occurs when one single design choice is selected (point-based concurrent engineering). It is necessary to understand and estimate design rework in point-based concurrent engineering, and to estimate the duplication cost and starvation cost in set-based concurrent engineering, in order to select a product development approach. Based on the understanding of design rework from upstream changes and downstream feedback, all factors addressed in Tables 1 to 3 will be used for design rework estimation. This estimation will be based on an analogy approach, which allows multiple factors to be put into the estimation framework using the technique called pairwise comparison [38]. One outcome of the rework estimation is to help the development team select between a 'point' and a 'set' of designs in product development, which is lacking in [18].

7. REFERENCES
[1] Ballard, G. 2000, Positive VS Negative Iteration in Design, Proceedings of the International Group for Lean Construction 8th Annual Conference (IGLC-8), Brighton, UK
[2] Friedrich, D. R., Asce, M., Daly, J. P. and Dick, W. G., Revision, Repairs and Rework on Large Projects, Journal of Construction Engineering, 113, 3: 488-500
[3] Cooper, K. G. 1993, The Rework Cycle: Benchmarks for the Project Manager, Project Management Journal, 25, 1: 17-21
[4] Love, P. E. D. 2002, Auditing the Indirect Consequences of Rework in Construction: a Case Based Approach, Managerial Auditing Journal, 7, 3: 138-146
[5] Yassine, A. A., Sreenivas, R. S. and Zhu, J. 2008, Managing the Exchange of Information in Product Development, European Journal of Operational Research: 311-326
[6] Costa, R. and Sobek II, D. K. 2003, Iteration in Engineering Design: Inherent and Unavoidable or Product of Choices Made?, Proceedings of DETC'03, ASME 2003 Design Engineering Technical Conferences and Computers and Information in Engineering Conference

[7] Cho, S. H. and Eppinger, S. D. 2001, Product Development Process Modelling Using Advanced Simulation, ASME 2001 Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Pittsburgh, Pennsylvania, September 9-12: 1-10
[8] Loch, C. H. and Terwiesch, C. 1998, Communication and Uncertainty in Concurrent Engineering, Journal of Management Science, 44, 8: 1032-1048
[9] Yassine, A. A., Chelst, K. R. and Falkenburg, D. R. 1999, A Decision Analytic Framework for Evaluating Concurrent Engineering, IEEE Transactions on Engineering Management, 46, 2: 144-157
[10] Smith, R. P. and Eppinger, S. D. 1997, Identifying Controlling Features of Engineering Design Iteration, Journal of Management Science, 43, 3: 276-293
[11] Mitchell, V. L. and Nault, B. R. 2007, Cooperation Planning, Uncertainty, and Managerial Control in Concurrent Design, Journal of Management Science, 53, 3: 375-389
[12] Koufteros, X., Vonderembse, M. and Doll, W. 2001, Concurrent Engineering and Its Consequences, Journal of Operations Management, 19, 1: 97-115
[13] Eastman, R. M. 1980, Engineering Information Release Prior to Final Design Freeze, IEEE Transactions on Engineering Management, EM-27, 2: 37-42
[14] Terwiesch, C., Loch, C. H. and Meyer, A. D. 2002, Exchanging Preliminary Information in Concurrent Engineering: Alternative Coordination Strategies, Journal of Organization Science, 13, 4: 402-421
[15] Krishnan, V., Eppinger, S. D. and Whitney, D. E. 1997, A Model-Based Framework to Overlap Product Development Activities, Journal of Management Science, 43, 4: 437-451
[16] Browning, T. R. 2001, Applying the Design Structure Matrix to System Decomposition and Integration Problems: A Review and New Directions, IEEE Transactions on Engineering Management, 48, 3: 292-306
[17] Browning, T. R. 1998, Use of Dependency Structure Matrices for Product Development Cycle Time Reduction, The 5th ISPE International Conference on Concurrent Engineering: Research and Applications, Tokyo, Japan, July 15-17: 1-8
[18] Smith, R. P. and Eppinger, S. D. 1997, A Predictive Model of Sequential Iteration in Engineering Design, Journal of Management Science, 43, 8: 1104-1120
[19] Xiao, R. and Si, S. 2003, Research on the Process Model of Product Development with Uncertainty Based on Activity Overlapping, Journal of Integrated Manufacturing Systems: 567-574
[20] Luh, P. B., Liu, F. and Moser, B. 1999, Scheduling of Design Projects with Uncertain Number of Iterations, European Journal of Operational Research, 13: 575-592
[21] Roemer, T., Ahmadi, R. and Wang, R. 2000, Time-Cost Tradeoffs in Overlapped Product Development, Journal of Operations Research, 48, 6: 860-865
[22] Chakravarty, A. 2001, Overlapping Design and Build Cycles in Product Development, European Journal of Operational Research, 134: 392-424
[23] Roemer, T. A. and Ahmadi, R. 2004, Concurrent Crashing and Overlapping in Product Development, Journal of Operations Research, 52, 4: 606-622

[24] Browning, T. and Eppinger, S. D. 2002, Modelling Impacts of Process Architecture on Cost and Schedule Risk in Product Development, IEEE Transactions on Engineering Management, 49, 4: 428-441
[25] Jun, H. B., Ahn, H. S. and Suh, H. W. 2005, On Identifying and Estimating the Cycle Time of Product Development Process, IEEE Transactions on Engineering Management, 52, 3: 336-349
[26] Bogus, S. M., Molenaar, K. R. and Diekmann, J. E. 2006, Strategies for Overlapping Dependent Design Activities, Journal of Construction Management and Economics, 24: 829-837
[27] Yan, J. H. and Wu, C. 2001, Scheduling Approach for Concurrent Product Development Processes, Journal of Computers in Industry, 46: 139-147
[28] Yan, H. S., Wang, Z. and Jiang, M. 2002, A Quantitative Approach to the Process Modelling and Planning in Concurrent Engineering, Journal of Concurrent Engineering: Research and Application, 10, 2: 97-111
[29] Joglekar, N. R., Yassine, A. A., Eppinger, S. D. and Whitney, D. E. 2001, Performance of Coupled Product Development Activities with a Deadline, Journal of Management Science, 47, 12: 1605-1620
[30] Wang, Z. and Yan, H. S. 2005, Optimising the Concurrency for a Group of Design Activities, IEEE Transactions on Engineering Management, 52, 1: 102-118
[31] Smith, R. P. and Morrow, J. A. 1999, Product Development Process Modelling, Journal of Design Studies, 20: 237-261
[32] Mitchell, V. L. and Nault, B. R. 2007, Cooperation Planning, Uncertainty, and Managerial Control in Concurrent Design, Journal of Management Science, 53, 3: 375-389
[33] Hoedemaker, G. M., Blackburn, J. D. and Wassenhove, L. N. V. 1999, Limits to Concurrency, Journal of Decision Sciences, 30, 1: 1-17
[34] Yassine, A. A., Whitney, D. E. and Zambito, T. 2001, Assessment of Rework Probabilities for Simulating Product Development Processes Using the Design Structure Matrix, ASME 2001 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Pittsburgh, Pennsylvania
[35] Sobek II, D. K., Ward, A. C. and Liker, J. K. 1999, TOYOTA's Principles of Set-Based Concurrent Engineering, Sloan Management Review, Massachusetts Institute of Technology
[36] Ward, A., Liker, J. K., Cristiano, J. J. and Sobek II, D. K. 1995, The Second TOYOTA Paradox: How Delaying Decisions Can Make Better Cars Faster, Sloan Management Review, Massachusetts Institute of Technology
[37] Ford, D. N. and Sobek II, D. K. 2005, Adapting Real Options to New Product Development by Modelling the Second TOYOTA Paradox, IEEE Transactions on Engineering Management, 52, 2: 175-185
[38] Saaty, T. L. 1994, Fundamentals of Decision Making and Priority Theory with the Analytic Hierarchy Process: 6, RWS Publications, Pittsburgh, PA


A Method of Analyzing Complexity by Effects and Rapid Acquisition of the Most Ideal Solution Based on TRIZ
P. Zhang, F. Liu, D. R. Zhang and R. H. Tan
Institute of Design for Innovation, Hebei University of Technology, Tianjin, People's Republic of China
[email protected]

Abstract
At present, designers can only use the definitions of complexity theory to analyze the complex problems in a system. In this article, a new method is put forward: the method of analyzing complexity by effects and rapidly acquiring the most ideal solution based on TRIZ. Through a twofold mapping, events of complex problems are transformed into a chain of additional effects. Because a relationship exists between the ideality of the solution modules for additional effects and S-curves, designers can rapidly obtain the most ideal solution. Finally, an ultrasonic system example is used to verify the method.

Keywords: Complexity, TRIZ, Additional effects, Most ideal solution

1 INTRODUCTION

Complexity theory based on axiomatic design [1] is one of the important theories among problem-solving techniques. At present, designers analyze a system's complexity using nothing more than the concept of complexity itself. This paper puts forward a new method of analyzing complexity by effects and rapidly acquiring the most ideal solution based on TRIZ; the essence of design is to eliminate complexity. There are several studies on complexity theory based on axiomatic design. Suh [2] shows that designers are likely to use the c/p transformation to reduce time-dependent complexity. Liu [3] investigated how TRIZ tools can be used to reduce a system's complexity. Zhang [4] proposes that designers can reduce complexity with a design model which combines complexity theory based on axiomatic design with the evolution paths of TRIZ. Since eliminating the complexity of a system is very important, it constitutes a new problem, and this study focuses on reducing complexity. The goal of design is to satisfy the functional requirements, and a functional requirement can be realised by its effects. The effect is one of the important concepts in TRIZ [5]. If a functional requirement cannot be satisfied, the complexity can be analyzed through its effects. The method of analyzing complexity by effects is advantageous not only for analyzing the complexity but also for obtaining the solutions which can reduce it. However, the method also brings new problems: if there are many additional effects in the chain and each additional effect has several solutions, a large amount of time will be wasted on finding the TRIZ special solutions, and it is difficult to find the most ideal solution. Among the designs available from the functional point of view, one may be superior to the others in terms of achieving the design goals as expressed in the functional requirements [6]. However, the best design is usually hidden among the several TRIZ special solutions. A designer with years of experience may well find a better solution, whereas a novice designer may obtain a worse one.



worse one. It is inappropriate for a designer to select the best solution merely by his or her experience. In view of this problem, this paper puts forward a method of analyzing complexity by effects and obtaining the highest ideal solution for reducing a system's complexity. A design example of an ultrasonic system is presented to demonstrate the design process of complexity elimination.

2 COMPLEXITY THEORY

Suh put forth the Complexity Theory based on the Axiomatic Design method [1]. The design effort may produce several designs, all of which may be acceptable in terms of the functional requirements. It is likely that different designers will come up with different designs, because many designs can satisfy the functional requirements. However, one of these designs may be superior to the others, and the Complexity Theory based on Axiomatic Design is useful for selecting the best among the available designs: among the designs available from the functional point of view, one may be superior to the others in terms of achieving the design goals expressed in the functional requirements. The Complexity Theory based on Axiomatic Design states that the design with the highest probability of success is the best.

Figure 1: The relation among design range, system range and common range [1] (probability density plotted against the FR; the overlap of the design range and the system range is the common range)

Suh states that complexity is defined as a measure of uncertainty in achieving the functional requirements (FRs) of a system [1]. The overlap between the design range and the system range is called the common range, as shown in Figure 1. Complexity is a function of the relationship between the design range and the system range [1]. A design is called complex when its probability of success is low, that is, when the information content required to satisfy the FRs is high. A physically large system is not necessarily complex if its information content is low; conversely, even a small system can be complex if its information content is high [7].
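To make this definition concrete, here is a small numerical sketch of our own (the ranges are invented; the formula I = log2(system range / common range) is the standard uniform-distribution case of Suh's information content, not text from this paper):

```python
from math import log2

def information_content(system_range, common_range):
    # I = log2(1/p), with the probability of success p equal to the
    # fraction of the system range that overlaps the design range.
    return log2(system_range / common_range)

# The narrower the common range, the higher the information content
# and hence the more complex the design.
print(round(information_content(10.0, 8.0), 2))   # 0.32 bits: likely to succeed
print(round(information_content(10.0, 0.5), 2))   # 4.32 bits: high complexity
```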

3 A METHOD OF ANALYZING COMPLEXITY BY EFFECTS AND RAPID ACQUISITION OF THE HIGHEST IDEAL SOLUTION

3.1 The mapping of the complex problem
Complexity is a function of the design range and the system range [8]. According to complexity theory, designers should determine in the functional domain whether complex problems exist in the system, which is actually a comparatively difficult process. Owing to the zigzagging mapping relationship between the functional domain and the physical domain [8], designers can detect the complex problems of a system through their manifestation in the physical domain. Here, a complex problem as it appears in the physical domain is defined as an event of a complex problem. Obtaining events of complex problems is comparatively simple. Nevertheless, since describing a problem by events of complex problems is rather limiting, such events can only be solved by applying the knowledge possessed by professional designers; because of this limited knowledge, designers may not obtain solutions with high ideality, and sometimes obtain no solution at all. To describe the complex problems of a system explicitly, designers should map the events of complex problems in the physical domain onto functions in the functional domain; this process is defined as the first mapping. After determining the problem function that causes the complex problems of the system, designers can apply TRIZ tools [9] to obtain solutions. The description of complex problems by functions is comparatively clear, so the knowledge that can be used is much wider than when solving complex problems in the physical domain. Still, only a few TRIZ solutions can be obtained in this way, and the solution with the highest ideality may not be among them; owing to the great differences between the solutions, it is also relatively difficult for designers to pick the most ideal one. To obtain more TRIZ solutions easily and choose the most ideal among them, the complex problems in the functional domain should be mapped onto a chain of additional effects; this process is called the second mapping of complex problems. The additional effect is put forward on the basis of the effect concept in TRIZ. As an important concept, the effect [10] is also an important means of realizing high-level ideal solutions in TRIZ. An effect that meets the functional requirements is an ideal effect, and a chain composed of ideal effects is called a chain of ideal effects [11]; an effect that causes complex problems of the system is an additional effect, and a chain composed of additional effects is called a chain of additional effects. Usually, an additional effect is caused by the following factors: noise, coupling, the environment and random variables in the design parameters. A general effect chain contains both the ideal effect chain and the additional one, as shown in Figure 2.

Figure 2: The total chain of effects (ideal chain: Input, Effect1, Effect2, ..., Effectn, Output; additional chain: Inputa, Effecta1, Effecta2, ..., Effectax, Outputa)

Events of complex problems in the physical domain are caused by problem functions in the functional domain, while the complex problems of the system are caused by additional effects. Through the second mapping, the complex problems in the functional domain are transformed into chains of additional effects, which is very favourable for obtaining TRIZ solutions of the additional effects. The second mapping is performed on the basis of the first mapping. Through the first and second mappings, a new effect-based model for analyzing complex problems is established. The two mappings of complex problems are shown in Figure 3.

Figure 3: The two mappings of complexity (first mapping: event of complexity in the physical domain to function of complexity in the functional domain; second mapping: function of complexity to additional effects)

3.2 A method of analyzing complexity by effects

Additional effect
The purpose of design is to meet the functional requirements, and a function can be expressed by its corresponding effect; an effect, in turn, can be described by the relationship between its input and output. If the functional requirements cannot be met, the events of complex problems in the physical domain can be mapped onto the corresponding chain of additional effects through the two-level mapping. An additional effect is an effect that causes a functional requirement not to be satisfied.

Relation sketch of additional effects
The complex problems in the system are induced by additional effects, and additional effects always exist in the form of chains of additional effects: the output of a preceding additional effect affects the following one. Three basic relationships exist between the additional effects in such chains [11], namely the and, or and not gates shown in Figure 4; the relation sketch of additional effects composed of these three basic relationships is shown in Figure 5.

Figure 4: Three kinds of relationships between additional effects (and, or and not gates)

In the sketch, the inputs and outputs all carry information content. When the information content of an additional effect is "1", that effect affects the following one; on the contrary, when it is "0", the following effect is not affected. Additional effects lead to the complex problems of the system. According to the relation sketch of additional effects, at least one of the total inputs of the sketch of a system with complex problems is "1", and the total output is then also "1". Applying the relation sketch of additional effects to analyze the complex problems of a system is therefore the process of transforming the total output from "1" to "0".

Figure 5: The relation sketch of additional effects (inputs In1, In2, ..., InN-1, InN are combined through the gates into a single output Out)

In a chain of additional effects, the output of an additional effect depends only on its own effect and its input: the input content is the information content of the previous level, and the output content is the information content of the present level. The input value is "1" when an additional effect is affected by the previous ones, and "0" when it is not disturbed by them. Owing to the existence of complex problems in the system, the total input and the total output of the chain are both "1". Applying the chain of additional effects to analyze the complex problems of the system is thus the process of transforming its total output from "1" to "0". If the output of every additional effect in the chain is "0", the total output is "0". When solutions are obtained for every additional effect in the chain and each output is made "0", the total output becomes "0", the additional effects of the system are eliminated, and the complex problems of the system are solved accordingly. In practice, however, this costs designers a great deal of time.

Module of additional effects
Obtaining a solution for every additional effect in the relation sketch costs a large amount of time. The chain of additional effects can therefore be divided into sequentially arranged modules of additional effects, as shown in Figure 6. The module at the bottom of the chain, close to the total output, is called the bottom module of additional effects; the module at the top, close to the total input, is called the top module of additional effects. Introducing modules of additional effects helps designers to solve complex problems. When dividing the chain into modules, the following principles must be observed:
1. The entire input of the chain of additional effects comes from the top module.
2. The entire output of the chain of additional effects comes from the bottom module.
3. The output of a preceding module of additional effects affects the input of the following one.
4. The output of a module of additional effects depends only on its own additional effects and its input.
5. The complex problems are eliminated when the total output of the chain of additional effects is "0".

Figure 6: The relation graph of additional effects and the modules of additional effects (inputs In1, In2, ..., InN enter the top module; the bottom module produces the output Out)

After the modules of additional effects have been obtained according to these principles, designers no longer need to obtain a TRIZ solution for every additional effect in the chain. When the outputs of some module of additional effects are all changed from "1" to "0", both the input and the output of the following modules also change to "0"; transmitted from one level to the next, the total output of the chain finally changes to "0". The complex problems of a system are therefore solved as soon as the designer obtains a TRIZ solution for any one module of additional effects. Usually, however, many modules of additional effects exist in the chain, and each has many solutions, so helping designers rapidly obtain the best solution becomes an urgent problem. For this purpose, this article puts forward the method of analyzing complexity by effects and rapidly acquiring the highest ideal solution based on TRIZ.

3.3 Method of rapid acquisition of the highest ideal solution
The method of analyzing complexity by effects is favourable not only for analyzing complex problems, but also for obtaining solutions that eliminate the complex problems of the system. However, the method also brings in some new problems: if there are many modules of additional effects in the chain and each one has many solutions, obtaining all the TRIZ solutions takes a large amount of time, and it is also very difficult to find the best solution. The ideal result [12] is an important concept in TRIZ, and an important principle of TRIZ is to enhance the ideality of a system. The degree of ideality is defined as [13]:

Ideality = ∑Benefits / (∑Costs + ∑Harm)    (1)

In view of this problem, this paper puts forward a method of obtaining the highest ideal solution within the method of analyzing complexity by effects. Designers can rapidly obtain the highest ideal solution for eliminating the complex problems of the system from the distribution of the ideality of the solutions of the different modules of additional effects in the chain, and from the differences in ideality between different solutions of the same additional effect. The ideality distribution of the solutions of the modules of additional effects is an important research component of this method. According to their ideality, TRIZ solutions are classified as low-level ideal results (LIR), intermediate-level ideal results (IIR), high-level ideal results (HIR), and the ideal final result (IFR) [5]. A low-level ideal result eliminates the complexity of the system by using resources outside the system, while an intermediate-level ideal result eliminates the complexity by using resources inside the system. A high-level ideal result eliminates the complexity of the system by using the resources within the specific field; the ideality of this kind of solution is higher than that of the others, and such a solution approaches the ideal final result (IFR). Products are carriers of functions, and in TRIZ the S-curve is used to describe the ideality of products. TRIZ solutions obtained from additional effects also serve to meet the functional requirements of the system; therefore, for the TRIZ solutions of the same additional effect, the ideality distribution also complies with the S-curve, as shown in Figure 7.

Figure 7: The distribution of the ideality of the solutions of the same module of additional effects (ideality rises along an S-curve from LIR through IIR and HIR towards the IFR)

The solutions of a module of additional effects thus also comply with the S-curve, and the solution of an additional effect with the highest ideality can be identified from its ideality. However, since many modules of additional effects exist in the chain, it would take a large amount of time to obtain solutions for every module and compare their idealities. How, then, can the most ideal solution in the whole chain of additional effects be obtained, and how is the ideality of the solutions of the different modules of additional effects distributed?

According to the principles for obtaining the modules of additional effects, there is a close relationship between the modules: any solution of any module of additional effects can be used to eliminate the complexity of the system. Obtaining TRIZ solutions of the modules of additional effects in order to eliminate the complex problems of a system can thus be transformed into obtaining solutions of a function. In the product evolution theory of TRIZ, products with the same function realized under different effects follow a group of S-curves. The process of obtaining solutions of the modules of additional effects in the chain is similar to product evolution: both realize the same function through different effects. The modules of additional effects therefore also comply with a group of S-curves, so the ideality of the solutions of a preceding (top-side) module of additional effects is higher than that of the corresponding following ones. The relationship between the modules of additional effects and the group of S-curves is shown in Figure 8.

Figure 8: A group of S-curves and the ideality of the solutions of the additional effects (ideality decreases from the top module towards the bottom module)

According to the group of S-curves, the HIR is a comparatively good solution within a module of additional effects. If there are many modules of additional effects in the chain, then, since the ideality of the solutions of the bottom module is lower than that of the top module, there is no need to obtain solutions of the bottom modules once solutions of the top module are available. Designers therefore only need to look for solutions of the top module of additional effects; if no TRIZ specific solution can be found for it, they try the next module, moving in the direction from the top module towards the bottom module. Figure 9 shows the process of obtaining the highest ideal solution. By adopting this method of rapid acquisition, designers not only obtain the best solution for eliminating the complex problems, but also avoid having to derive all TRIZ solutions and calculate the ideality of every one; the best solution is obtained relatively easily and much time is saved.
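The two ingredients above can be illustrated with a short sketch (our illustration only; the gate and function names are assumptions, not the authors' implementation): Eq. (1) as a ratio, and the 0/1 information content propagated through a small chain of additional effects.

```python
def ideality(benefits, costs, harms):
    # Degree of ideality, Eq. (1): sum of benefits over costs plus harm.
    return sum(benefits) / (sum(costs) + sum(harms))

def gate(kind, inputs):
    # Combine the 0/1 information content of upstream additional effects.
    if kind == "and":
        return int(all(inputs))
    if kind == "or":
        return int(any(inputs))
    if kind == "not":
        return int(not inputs[0])
    raise ValueError("unknown gate: " + kind)

print(ideality([8.0], [1.0], [1.0]))   # 4.0

# A two-level chain: an "and" gate models effects that must all be present
# for the downstream effect to fire. The total output stays "1" while an
# unsolved additional effect still feeds it.
top = gate("or", [1, 0])            # one effect in the top module is active
print(gate("and", [top, 1]))        # 1 -> the complex problem is present

# Solving the top module drives the total output of the chain to "0".
print(gate("and", [gate("or", [0, 0]), 1]))   # 0 -> complexity eliminated
```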

3.4 A design process

This article puts forward a new method: analyzing complexity by effects and rapidly acquiring the highest ideal solution based on TRIZ. A practical example of an ultrasonic system will now be used to verify the method. Six steps need to be followed:
Step 1: Describe the complex problems in the physical domain, as events of complex problems.
Step 2: Carry out the first mapping of the complex problems: transform the events of complex problems in the physical domain into problem functions.
Step 3: Carry out the second mapping of the complex problems: transform the functions of complex problems into a chain of additional effects.
Step 4: Determine the relation sketch of additional effects according to the relationships between the additional effects.
Step 5: Determine the modules of additional effects by the dividing principles for modules of additional effects.
Step 6: Obtain the solution with the highest ideality for eliminating the complex problems, according to the method of rapid acquisition of the highest ideal solution.


Figure 9: The process of obtaining the highest ideal solution (flowchart: starting from the top module of additional effects, search each module in turn for a high ideal solution; if a module yields no solution, move on to the next module towards the bottom module; finally, compare the ideality of the solutions obtained and select the highest ideal solution)
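A pseudo-implementation of this search could look as follows (a sketch under our own naming assumptions; the ideality numbers are invented for illustration): the modules are visited from the top down, the first high-level ideal result ends the search, and otherwise the collected candidates are compared by ideality.

```python
def highest_ideal_solution(modules):
    # modules: candidate solutions per module, as (name, ideality, level)
    # tuples, ordered from the top module to the bottom module.
    found = []
    for candidates in modules:            # start at the top module
        if not candidates:
            continue                      # no solution: try the next module
        best = max(candidates, key=lambda s: s[1])
        if best[2] == "HIR":              # a high-level ideal result ends the search
            return best
        found.append(best)
    # no HIR anywhere: compare the ideality of the solutions that were found
    return max(found, key=lambda s: s[1]) if found else None

modules = [
    [("project one", 4.0, "HIR"), ("project two", 1.5, "LIR")],    # top module
    [("project three", 2.5, "IIR"), ("project four", 1.2, "LIR")], # bottom module
]
print(highest_ideal_solution(modules))   # ('project one', 4.0, 'HIR')
```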

4 A CASE STUDY

Figure 10 shows the working principle of an ultrasonic system. These devices transmit a short burst of ultrasonic sound towards another sensor, which reflects the sound back to the first sensor [14]. The system then measures the time for the echo to return to the sensor [15]. The ultrasonic system includes the ultrasonic sensors, the water tank, the intake pipe, and the outlet pipe. If the water in the pipe is static, the ultrasonic sensors measure the time for the echo to reach the other sensor. However, when water is flowing in the pipe, the ultrasonic signal becomes smaller and smaller until it disappears. Since the system cannot satisfy the functional requirements, there is a complex problem in the ultrasonic system. The method of analyzing complexity by effects and rapidly acquiring the highest ideal solution based on TRIZ can be used to analyze this system.

Figure 10: The ultrasonic system


Step 1: Describe the complex problems in the physical domain as events of complex problems. If the water in the pipe is static, the ultrasonic sensors work. However, when water is flowing in the pipe, the ultrasonic signal becomes smaller and smaller until it disappears. Even if the water then comes to rest in the pipe, the ultrasonic sensors still do not work. If the ultrasonic sensors are taken out of the water and laid in the water again, they work once more. Ultrasonic sensors for use in liquid operate at frequencies in excess of 100,000 cycles per second (hertz) [16]; if the signal of such sensors propagates through air, it diminishes.
Step 2: Carry out the first mapping of the complex problems: transform the events of complex problems in the physical domain into problem functions. The reason the system cannot satisfy the functional requirement is that the ultrasonic signal is attenuated when it transits the gas. There are few gas bubbles on the ultrasonic sensors when water is not flowing in the water tank. When water is flowing, the gas bubbles on the ultrasonic sensors become more and more numerous, while the ultrasonic signal becomes smaller and smaller; at last, the ultrasonic sensors cannot work. Even if the water comes to rest in the water tank, the ultrasonic sensors cannot work, because the gas bubbles are still on them. When the ultrasonic sensors are out of the water, only few gas bubbles remain on them, and when the sensors are put in the water again, they work. Figure 11 shows the gas bubbles on the ultrasonic sensors.

Figure 11: The gas bubbles on the ultrasonic sensors (labels in the figure: gas bubbles, sensor, flow velocity V, acoustic wedge)

Step 3: Carry out the second mapping of the complex problems: transform the functions of complex problems into a chain of additional effects. According to the results of the first mapping of the complexity, there are two additional effects in the chain of additional effects: the gravity effect and the agglutination effect.
Step 4: Determine the relation sketch of additional effects according to the relationships between the additional effects. Figure 12 shows the relation sketch of additional effects of the ultrasonic system.

Figure 12: The relation sketch of additional effects of the ultrasonic system (input, gravity effect, agglutination effect, output)

Step 5: Determine the modules of additional effects by the dividing principles for modules of additional effects. The gravity effect is the top module of additional effects and the agglutination effect is the bottom module of additional effects. Figure 13 shows the modules of additional effects in the ultrasonic system.

Figure 13: The modules of additional effects (input; gravity effect as the top module of additional effects; agglutination effect as the bottom module of additional effects; output)

Step 6: Obtain the solution with the highest ideality for eliminating the complex problems, according to the method of rapid acquisition of the highest ideal solution. Because the gas bubbles rise in the water under gravity, they leave the water and gather on the ultrasonic sensors. For the gravity effect, the designer can obtain two solutions, project one and project two. Project one, shown in Figure 14, installs the ultrasonic sensors where gas bubbles have difficulty gathering on them, and thereby eliminates the gravity effect, which is one of the modules of additional effects. Project two, shown in Figure 15, is a device with an acoustic wedge that wards off the gas bubbles.

Figure 14: Project one

Figure 15: Project two

The TRIZ specific solution whose ideality is the highest is the best. The designer has obtained two TRIZ specific solutions that eliminate the complexity of the ultrasonic system. Project one is a high-level ideal result, because it uses a resource within the specific field to eliminate the complexity of the ultrasonic system; project two is a low-level ideal result, because it uses a resource outside the system. Since project one eliminates the gravity effect, which is the top module of additional effects, project one has the highest ideality and is the best. According to the method of analyzing complexity by effects and rapidly acquiring the highest ideal solution based on TRIZ, the designer can thus obtain the highest ideal solution, which is project one.

To prove that project one is the highest ideal solution among the TRIZ solutions that can reduce the complexity of the ultrasonic system, the designer also obtains the solutions of the agglutination effect, and can then determine whether project one is the highest ideal solution among all the solutions. If the gas bubbles on the ultrasonic sensors can be removed, the agglutination effect, which is the other additional effect, is eliminated. The designer can obtain two solutions, project three and project four. Project three, shown in Figure 16, gives the ultrasonic sensors a coat of nano-paint on which gas bubbles have difficulty gathering. Project four, shown in Figure 17, is an eraser designed to remove the gas bubbles from the ultrasonic sensors.

Figure 16: Project three

Figure 17: Project four (labels in the figure: sensor, eraser)

According to the ideality classification of TRIZ solutions, project one is a high-level ideal result of the top module of additional effects, because it uses a resource within the specific field to eliminate the complexity of the ultrasonic system, while project two is a low-level ideal result of the top module, because it uses a resource outside the system. Project three is an intermediate-level ideal result of the agglutination effect, because it uses a resource inside the system, and project four is a low-level ideal result of the agglutination effect, because it uses a resource outside the system. Since the ideality of project one is the highest among all the TRIZ solutions, project one is the best.

5 CONCLUSIONS

The case study shows that the method can help designers to solve the complex problem of the ultrasonic system. Designers obtain the highest ideal solution by using the method of analyzing complexity by effects and rapidly acquiring the highest ideal solution based on TRIZ. The two mappings of complexity and the distribution of the ideality of the solutions of the additional effects are put forward here for the first time. Finally, a design example of an ultrasonic system is presented to demonstrate the method.

6 ACKNOWLEDGMENT

The research is supported in part by the Chinese Natural Science Foundation under Grant Number 50675059, the Tianjin Natural Science Foundation under Grant Number 07JCZDJC08900, the Ph.D. Education Foundation under Grant Number 20060080002 and the Hebei Province Natural Science Foundation under Grant Number E2008000101. No part of this paper represents the views and opinions of any of the sponsors mentioned above.

7 REFERENCES

[1] Suh, N. P., 2005, Complexity: Theory and Applications, Oxford University Press, New York.
[2] Suh, N. P., 1999, A Theory of Complexity, Periodicity, and Design Axioms, Research in Engineering Design, 11: 116-131.
[3] Liu, F., Zhang, P., Tan, R. H., 2007, A Method of Reducing Complexity of Product Based on TRIZ, 2007 IEEE International Conference on Industrial Engineering and Engineering Management, Singapore, 2-5 Dec: 1125-1128.
[4] Zhang, P., Tan, R. H., 2007, The Highest Ideal Solution Obtaining of Conceptual Design Based on Complexity Theory, Second IFIP Working Conference on Computer Aided Innovation, Michigan, USA, 8-9 October: 115-123.
[5] Tan, R. H., 2002, Innovation Design - TRIZ: Theory of Innovation Problem Solving, China Mechanic Press, Beijing.
[6] Suh, N. P., 1990, The Principles of Design, Oxford University Press, New York.
[7] Suh, N. P., 1995, Design-in of Quality through Axiomatic Design, IEEE Transactions on Reliability, 44/2: 256-264.
[8] Suh, N. P., 2001, Axiomatic Design: Advances and Applications, Oxford University Press, New York.


[9] Altshuller, G., 1999, The Innovation Algorithm: TRIZ, Systematic Innovation and Technical Creativity, Technical Innovation Center, Inc., Worcester.
[10] Cao, G. Z., Tan, R. H., 2006, Functional Design Based on Behavior and Effect, The 16th CIRP International Design Seminar, Calgary, Canada, July 16-19.
[11] Cao, G. Z., Tan, R. H., Zhang, R. H., 2004, Connect Effects and Control Effects in Conceptual Design, Journal of Integrated Design and Process Science, 8/3: 75-82.
[12] Domb, E., 1997, The Ideal Final Result: A Tutorial, The TRIZ Journal, February.
[13] Domb, E., 1998, Using the Ideal Final Result to Define the Problem to Be Solved, The TRIZ Journal, June.
[14] Shirley, P. A., 1989, An Introduction to Ultrasonic Sensing, Sensors, Nov: 15-18.
[15] Massa, D. P., 1987, An Automatic Ultrasonic Bowling Scoring System, Sensors, Oct.
[16] Massa, F., 1965, Ultrasonic Transducers for Use in Air, Proc. IEEE, 53: 1363-1371.

Interrelating Products through Properties via Patent Analysis

P.-A. Verhaegen¹, J. D'hondt¹, J. Vertommen¹, S. Dewulf², J. R. Duflou¹
¹ Centre for Industrial Management, Dept. Mechanical Engineering, K.U.Leuven, Celestijnenlaan 300A Bus 2422, B-3001 Heverlee (Leuven), Belgium
Corresponding author: [email protected]
² CREAX N.V., Maarschalk Plumerlaan 113, 8900 Ieper, Belgium
[email protected]

Abstract
TRIZ emerged from systematic analysis of patents, a process involving the mapping of innovative patents to extracted generic problems and generic inventive principles. During problem solving, TRIZ users, relying on their TRIZ skills, map their specific problem to a generic problem, solve it via TRIZ tools, and map back to a specific solution. A methodology and algorithm are proposed that, through identification of specific word categories in patents, analysis of term-term correlation data, and data mining techniques, automatically identify similar products, and properties relating or differentiating products. This algorithm can quantifiably guide creativity efforts and aid in patent portfolio management.

Keywords: Systematic Innovation, TRIZ, Creativity, Patent Analysis, Data mining

1 INTRODUCTION

1.1 Need for patent mining
In 2007, the United States Patent and Trademark Office (USPTO) granted 157283 patents, most of them to corporations [1]. Patent offices assign each patent to one or multiple classes, categorizing them in a hierarchical system based on topic or technological area, such as the US or IPC patent classification schemes, through which patents related to an application area can be searched; e.g. IPC groups A45B 11 to A45B 19, A45B 23 and A45B 25 all relate to patents covering umbrellas. The patent portfolios of hundreds of companies grow by 40 to over 3000 patents per year [1], making portfolio management ever more complex and demanding tools and techniques that enable the discovery of market opportunities outside the application area of the organization's own technology, as well as the identification of possibilities to license in complementary technology. While commercially available patent databases offer full text search and more specific search features based on different patent fields, such as the citation, applicant, inventor or issue date fields, these functionalities do not allow a company developing umbrellas to directly search for market opportunities, or to identify complementary technology. This research facilitates reaching these two objectives by proposing automatically identified similar products, and the properties relating or differentiating these products.

1.2 TRIZ
TRIZ is the Russian acronym for the Theory of Inventive Problem Solving, and encompasses a series of tools and a methodology for generating innovative ideas and solutions for problem solving. It was formed through the systematic interactive analysis of what TRIZ practitioners estimate to be one and a half to three million patents, from which forty thousand innovative patents were withheld and



their applied innovative solutions were mapped onto a small number of extracted inventive principles. TRIZ is based on three postulates [2] [3]:
• The Postulate of Existing Objective Laws states that engineering systems evolve according to a set of laws;
• The Postulate of Contradictions states that, in order to evolve, an engineering system has to overcome one or more contradictions; and
• The Postulate of the Specific Situation states that the problem solving process should take into account the specific problem peculiarities.
Derived from this patent analysis and based on the postulates, a set of TRIZ tools was conceived, of which the most popular are [4]:
• The Contradiction Matrix to solve technical contradictions;
• The Separation Principles to solve physical contradictions;

• Substance-Field (SU-Field) modeling and the Inventive Standards to transform technical systems;
• ARIZ as a list of logical procedures for eliminating contradictions; and
• TRIZ Trends as a system of laws that govern engineering system evolution.
TRIZ incorporates the idea of mapping a specific problem to a more general problem specification, solving this generic problem via the TRIZ toolset, and mapping back the generic solution to the specific problem. This enables TRIZ users to benefit from the generalized inventive solutions outside their fields of knowledge, but also relies heavily on the user's TRIZ skills.

As TRIZ users are interested in analogous inventions in other fields, or technological areas, that solve the same contradictions, the analyses cannot easily be automated by simple search functionalities for other patents based on the IPC classes or patent fields. Instead of requiring the user to map to and from the generic TRIZ solution space, the proposed algorithm directly relates products with products from other technological areas with similar product properties, and assumes that the observed contradictions may already be solved in products with similar properties. The following section gives an overview of related research on data mining of the structured and unstructured fields of patents, and on innovative idea generation. The third section describes the proposed methodology, while the fourth illustrates this methodology with a case study. The final section formulates the conclusions.

2 RELATED RESEARCH
Research has been conducted to automatically infer structure from non-text patent fields. Citation analysis permits functionalities such as the identification of major competitors, the construction of technology indicators, and documentary search, possibly identifying related technologies and applications [5]. This analysis is based on references given by the applicant, which is optional for the European Patent Office, and which are screened by the patent office, thus basing any further analyses on already known product or technology relations, and excluding, e.g., new application domains. In addition, most patents never get cited, or only begin to get cited after several years [6]. Patent text fields, such as the title, summary, description or claims fields, contain vital information about the patent, and can be subjected to text mining techniques, which extract relevant information from less structured textual data through use of keyword extraction, pattern recognition, linguistic analysis, and statistical techniques. A series of text mining techniques for patent analysis is presented and evaluated in [7]. In [8], Yoon and Park propose a network-based analysis as an alternative to citation analysis. This methodology is based on keyword extraction and on linking patents based on the occurrences of these keywords, instead of citations between patents. It allows users to visually identify patent network structure, such as central patents or disjoint groups. [9] proposes a case-based reasoning methodology and product innovation retrieval system (PIRS) for retrieving similar products based on 87 user-centered design (UCD) attribute dimensions. The technique relies on a large database of products scored on these attributes, a manual process performed by UCD experts. The functions of the identified products are candidate ideas for the product under investigation. Compared to the methodology proposed in this research, the PIRS system cannot retrieve products that have not been manually analyzed and inserted into the database. The use of the UCD attributes furthermore causes only products similar in these attributes to be retrieved. In [10], Yoon and Park describe a morphological analysis methodology based on a keyword dictionary developed by text mining patents, and on factor analysis of the terms. The morphology of all patents is identified, and technology gaps within a product or technology can be identified. The commercially available Goldfire InnovatorTM from Invention Machine [11] has a semantic engine to infer Subject-Action-Object (SAO) from plain text sentences in

patents and queries, and offers several TRIZ-inspired idea generating functionalities based on an indexed database of these SAOs. Other research by Cascini [12] [13] describes algorithms to automatically analyze patent text fields, revealing the invention's components, architecture, and positional and functional interrelations, and aiding in identifying the solved TRIZ contradictions. Research by He and Loh [14] proposes a text based expert system which allows classifying patents according to TRIZ inventive principles. Similar research by the same authors [15] proposes an automatic patent classification system based on clustering to categorize patents into TRIZ inventive principles, and evaluates the performance of different clustering algorithms on the selected text features. Other research by Cavallucci [16] proposed and validated the possibility to incorporate the eight original Altshuller laws of development into the design process on a manifold case study. Based on TRIZ and domain knowledge, the conclusions concerning the development potential can be translated into specific directions for future improvements of the manifold. In [17] and [18], Mann and Dewulf propose the concept of evolutionary potential, which is similar to the approach proposed by Cavallucci [16], but using more specific TRIZ trends or lines of evolution allows for a more actual and specific categorization. Later research in Directed Variation by Dewulf [19] suggests depicting the product on a radar plot of property spectra, instead of trends or lines of evolution; e.g. rigid, jointed and flexible are all properties of the spectrum flexibility. While classical TRIZ assumes that evolution usually occurs in a certain direction along the trends, Directed Variation regards changes of properties towards both directions in a spectrum as variations to ensure certain functionalities of a product; e.g. for the surface spectrum, evolving towards the flat property can decrease resistance, while evolving towards the protruded side of the surface spectrum can increase grip or allow faster cooling. The radar plot of property spectra of a product, or product DNA, can be compared to the DNA of other products to find similar products. Figure 1, copied from this research, compares the product DNA of sugar and dishwashing tablets, graphically illustrating the similarity and dissimilarity among the products, potentially inspiring the creativity of engineers.

Figure 1: Product DNA comparison of sugar and dishwashing tablets [19] (radar plot over the property spectra porosity, surface, flexibility, colour, components, state, unity, transparency, information, automation and senses)


Dewulf also identifies the link between adjectives and product properties, and between verbs and product functions. This research builds further upon this idea, proposing a method for automatic extraction of product properties and automatic comparison of products, and suggesting directions for creative efforts. This enables the discovery of market opportunities outside the application area of the organization's own technology, as well as the identification of possibilities to license in complementary technology.

3 METHODOLOGY
This research proposes an algorithm and framework that, through patent analysis and identification of word categories, can extract information concerning the properties of a given product or product family, which in turn allows the identification of properties relating or differentiating two products. A further functionality based hereon is the finding of similar products. These algorithms can assist in steering the creative efforts of the R&D department in a formalized and quantifiable manner, and aid in searching for market opportunities or identifying complementary technology in the context of patent portfolio management.

3.1 Gathering properties
Currently several modules of a test platform have been implemented, some of which are graphically depicted in Figure 2.

Figure 2: Modules of the platform (patents in XML pass through an XSLT transformation; the title, abstract and description are tokenized, POS tagged and lemmatized; the indexer builds the term-document and co-occurrence matrices, from which similar products and the dissimilar properties between two selected products are found)

Patents written in English are converted into structured XML files, which are fed into an XSLT transformation module retaining only certain patent sections for further processing. Some patent sections contain specific numerical or textual information, such as the patent number, date of application and authors. Other, more narrative, patent sections are:
• The title of the invention;
• The abstract;
• The claims section;
• The background section;
• The summary section;
• The description section.
[15] indicates the importance of including the titles and abstracts in the automatic classification of patents, while


the summary section gives only marginal improvements. Other research [20] [21] shows that the inclusion of a certain number of words or lines of the description, applications and/or claims can be beneficial to patent classification. In the proposed approach, only the title, the abstract and the description sections are retained, although the additional benefit of also processing the claims section will be analyzed at a later stage. For most patents, the title and the abstract are available in English, which is not always the case for the other patent sections, such as the description section. The XSLT transformation concatenates the text contained in the title, abstract and description fields and pipes this text, on a per patent basis, to the tokenizer module, which splits the text into a set of tokens to be interpreted by a Part-Of-Speech (POS) tagger. The tokenizer recognizes a manually assembled list of multiwords, e.g. 'de facto', which are then regarded by all subsequent steps as being one word. A TnT Tagger [22] is used to POS tag the text to the CLAWS5 tag set. This tagger is trained on a different set of patents in order to adapt the configuration files to the specific language used in patents. The trained tagger proves to correctly tag a word in more than 95% of the cases. This 5% error includes a number of words incorrectly tagged as adjectives which should have been identified as nouns that modify other nouns, i.e. attributive nouns or noun adjuncts [23]. This misclassification occurs when the tagger encounters constructions such as 'loudspeaker system', 'textile cover', 'volume control', or 'earphone jack'. It should be noted that [7] describes a method, 'keyword and phrase extracting', which allows for the identification of multiword phrases, based on the assumption that these multiwords occur several times in the document. However, such a functionality is currently not implemented, and further research will evaluate the usefulness of further decreasing the number of errors due to such misclassification. The stream of tokens is then run through a rule based lemmatizer described in [24], which normalizes these words, based on the given POS tag, to the form used as the headword in a dictionary, e.g. 'cooled' and 'cooling' both map to 'cool'. This step maps some misspelled word suffixes to a common lemma, but the main advantage of this strategy over the use of a Porter stemmer [25], which removes the word's suffixes, is that the resulting terms and further analyses are easier to interpret by humans.

3.2 Property selection
Currently, only nouns and adjectives are withheld for further processing; however, no filtering is done to extract only the relevant adjectives or nouns. In a later stage of this research, only selected adjectives will be processed, through a property selection phase explained below. The result of this step is a list of adjectives and nouns as input for the indexer. [19] defines a property as 'what a product is or has', its attributes. This is mainly expressed in adjectives and is related to physical parameters. Examples of properties are hollow, smooth, transparent, strong, and flexible. These are all generic, in contrast to product specific attributes, for example light weight or inspectable. These product specific attributes are related to functional requirements: a generic property such as hollow can lead to a product specific property such as light weight, just as transparent can lead to inspectable.
The link between adjectives and properties was further examined by the authors in [26].
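As a concrete illustration of the extraction pipeline of section 3.1, the sketch below substitutes NLTK's off-the-shelf tokenizer, tagger (Penn Treebank tags) and lemmatizer for the trained TnT/CLAWS5 setup described above; it approximates the approach and is not the authors' implementation.

```python
# Requires the NLTK data packages: punkt, averaged_perceptron_tagger, wordnet.
import nltk
from nltk.stem import WordNetLemmatizer

text = ("A foldable umbrella having a collapsible, aerodynamic canopy "
        "and a detachable handle is disclosed.")

lemmatize = WordNetLemmatizer().lemmatize
terms = []
for word, tag in nltk.pos_tag(nltk.word_tokenize(text)):
    if tag.startswith("JJ"):      # adjectives: candidate product properties
        terms.append(lemmatize(word.lower(), pos="a"))
    elif tag.startswith("NN"):    # nouns: candidate products or components
        terms.append(lemmatize(word.lower(), pos="n"))

print(terms)   # e.g. ['foldable', 'umbrella', 'collapsible', 'aerodynamic', ...]
```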

Currently, only adjectives are processed in the property selection phase, as it is assumed that adjectives can express system properties [19] and that adjectives are less domain specific than nouns. Further research will investigate the effects of including other word categories, such as verbs. As a prerequisite for this research, the authors validated the possibility to identify clusters of adjectives which relate to the same generic product property. From a random sample of 22684 non-chemical patents, the process described in the previous section produces a list of 81750 adjectives, of which 69260 occurred in only a single patent and are discarded from further processing. The remaining 12490 adjectives are run through a Porter stemmer [25], resulting in 10361 different stems. A Porter stemmer was preferred over a lemmatizer because this analysis is only performed once by TRIZ experts, and the results are never interpreted by the users of the system. A 10361 by 22684 term-document matrix is constructed, weighted with a Term Frequency Inverse Document Frequency (tf-idf) scheme [27], and normalized to account for different patent text lengths. A singular value decomposition (SVD) step [28] is performed to reduce the number of dimensions before clustering. Most related research uses a value around 300 as a rule of thumb for the number of reduced dimensions for a similarly sized collection [29]. Through experimentation, this value was set to 1000 to ensure enough discriminatory power through the explained variance, possibly leading to overfitting of the model, which is less of an issue here as the results are manually analyzed by TRIZ experts, as explained below. The terms are then grouped by clustering into 700 clusters, a value experimentally determined by sweeping this variable. Table 1 shows the first 5 clusters with the contained adjectives. These results can be used as an aid to manually identify adjectives that describe system properties. This manual step, currently performed by TRIZ experts, is still needed because the results are too noisy for full automatic extraction.
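The clustering step just described can be approximated in a few lines with scikit-learn (a sketch on a toy corpus; the corpus and the small parameter values are placeholders, while the paper's actual values of 1000 SVD dimensions and 700 clusters are noted in the comments). Table 1 below then shows the first clusters obtained on the real patent sample.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Toy stand-in corpus: one string of stemmed adjectives per patent.
patents = [
    "foldable collaps portabl", "foldable detach light",
    "aerodynam protrud rigid", "aerodynam flat rigid",
    "transpar hollow light", "transpar smooth hollow",
]

# tf-idf weighted, length-normalized term-document matrix; min_df=2
# discards adjectives occurring in only a single patent, as in the text.
X = TfidfVectorizer(norm="l2", min_df=2).fit_transform(patents)

# SVD to reduce dimensionality before clustering (the paper uses 1000
# dimensions on 22684 patents; 3 is enough for this toy corpus).
term_vectors = TruncatedSVD(n_components=3).fit_transform(X.T)

# Group the adjective stems (the paper sweeps to 700 clusters).
labels = KMeans(n_clusters=3, n_init=10).fit_predict(term_vectors)
print(labels)
```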

Cluster number | Clustered adjectives (or noun adjuncts) | Relationship
1 | Teletypewrite, prefinished, preparatory, preassembled | Preliminary action
2 | Degaussed, activated, deactivated, deactived | Activation, time segmentation
3 | Boss, multilayered, layered | Layered
4 | Dislodged, cental*, radiated | (Cars)
5 | Seam, ring, weak, bumper | Segmenting, or attaching

Table 1: Examples of adjective clusters, manual identification of related adjectives and description of the relationship (related terms were printed in italics in the original layout)
* Analysis of the patents reveals that «cental» does not refer to the weight unit, but should be interpreted as a misspelled version of the word «central».

It can be seen that cluster 4 relates to certain car parts, and not to a generic property, which can be detected in the manual step. The adjectives or noun adjuncts from this cluster should therefore not be withheld for further processing. As stated in the first paragraph of this section, the research on this adjective or property filtering is not yet

Analysis of the patents reveals that «cental» does not refer to the weight unit, but should be interpreted as a misspelled version of the word «central».

Figure 3: Co-occurrence matrix used to calculate the similarity between products
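The lookup just described can be sketched in a few lines of numpy (all counts are toy values invented for illustration; an 'X' in Figure 3 corresponds to a non-zero entry here):

```python
import numpy as np

# Toy term-document matrix A: A[i, j] = occurrences of term i in patent j.
terms = ["umbrella", "windscreen", "foldable", "aerodynamic"]
A = np.array([
    [2, 0, 1, 0],   # umbrella
    [0, 2, 0, 1],   # windscreen
    [1, 1, 1, 0],   # foldable
    [0, 1, 0, 1],   # aerodynamic
])

C = A @ A.T   # term-term correlation matrix C = A times A-transpose

u, w = terms.index("umbrella"), terms.index("windscreen")
for adj in ("foldable", "aerodynamic"):
    k = terms.index(adj)
    # the minimum of the two correlations is what Figure 4 plots per adjective
    print(adj, int(min(C[k, u], C[k, w])))

# Rough closeness figure: summed correlations of the adjectives with both nouns.
print(int(sum(C[terms.index(a), u] + C[terms.index(a), w]
              for a in ("foldable", "aerodynamic"))))   # 8
```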


This methodology can also be used to find products related to noun 1 or product 1. Looping over all different nouns, or products, permits comparing the closeness figures of different products, and finding related products in large patent databases. Adding the constraint that the two selected nouns may not co-occur in any of the patents, which is illustrated by the '0' elements in the term-term correlation matrix of Figure 3, allows directly unrelated products in different technological areas with similar product properties to be found. Depending on the size of the patent database, this constraint can be implemented by a threshold value different from zero. The methodology thus infers a link between two products that are not directly related. Such a higher order co-occurrence can also be found by techniques such as singular value decomposition, but these techniques complicate the interpretation of the property dimensions, as these become linear combinations of the adjectives. In this light, section 3.2 can be seen as a manual step to ease this interpretation.

4 CASE STUDY
To illustrate the proposed methodology, the title, abstract and description sections of a random set of 64529 patents were tokenized, lemmatized and POS tagged. The identified adjectives and nouns, collectively called terms, are mapped onto their lemma through a rule based lemmatizer. In a next step, specific chemical terms are identified, and the patents in which a certain number of these terms occur were discarded from further processing. This was primarily done because the results from the chemical domain are less easily interpreted by the researchers, given their background. It can be envisaged to also exclude a list of words commonly found in patents [7]. The lemmatized terms are stored in an index file associating them with the patents in which they are found. This data is imported into a term-document matrix A, from which the term-term correlation matrix C is calculated. The nouns 'umbrella' and 'windscreen' are selected for analysis, and based on the correlation matrix a list of adjectives co-occurring with these two nouns is retrieved. Under the assumption that these adjectives relate to product properties, they directly link the two products. Figure 4 presents the list of adjectives co-occurring with the nouns 'umbrella' and 'windscreen'. The values in the bar chart represent the minimum of the term-term correlation values of the adjectives with each of the nouns. It is noteworthy that the noun 'windscreen' is used to indicate both 'a screen for protecting something from wind' and 'a windshield of a motor vehicle' [31], which explains the high occurrence of some adjectives, e.g. aerodynamic. These figures indicate that windscreens and umbrellas are both foldable and collapsible, and that both products' patents cover aerodynamic properties. Not all resulting adjectives relate to different product properties, and some are similar, e.g. foldable and collapsible. To facilitate the interpretation of the figures, the adjectives could be grouped in meaningful clusters, as explained in section 3.2. Figure 5, a bar chart indicating the individual term-term correlation values of the same adjectives with the umbrella and windscreen nouns, illustrates how the methodology can be used to highlight the differences between the products umbrella and windscreen. A designer can use this information to transfer knowledge from one product to the other, e.g. making a foldable windscreen based on the knowledge from the umbrella product family.
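Returning to the related-product search described just before the case study, a hypothetical sketch (the function name and data are our assumptions; only the zero-co-occurrence constraint and the scoring by summed correlations follow the text) could read:

```python
import numpy as np

def closest_products(C, terms, query, nouns, adjectives, top=5):
    # Rank candidate product nouns that never co-occur with the query noun
    # (C[q, n] == 0) by their summed adjective correlations with both nouns.
    q = terms.index(query)
    scores = {}
    for noun in nouns:
        n = terms.index(noun)
        if noun == query or C[q, n] != 0:
            continue                  # the two products may not co-occur directly
        scores[noun] = int(sum(C[terms.index(a), q] + C[terms.index(a), n]
                               for a in adjectives))
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top]

terms = ["umbrella", "slider", "tent", "foldable"]
C = np.array([
    [3, 0, 1, 2],   # umbrella
    [0, 2, 0, 2],   # slider: never co-occurs with umbrella
    [1, 0, 2, 1],   # tent: co-occurs with umbrella, hence excluded
    [2, 2, 1, 4],   # foldable
])
print(closest_products(C, terms, "umbrella", ["slider", "tent"], ["foldable"]))
# [('slider', 4)]
```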


Figure 4: Minimum of the term-term correlation values of the adjectives with the windscreen and umbrella nouns (bar chart, scale 0 to 75, over the adjectives foldable, aerodynamic, collapsible, blind, protruding, interfitting, stepped, conditional, helical, bright, detachable, lost and polygonal)

Figure 5: Term-term correlation values of the adjectives indicating the differentiating properties between umbrella and windscreen (paired bar chart, scale 0 to 150, over the same adjectives as in Figure 4, with one bar per noun)

An indication of the degree of closeness of two products along their property dimensions is given by the sum of the term-term correlation values; e.g. for the umbrella and windscreen products this value is 384. This figure currently does not account for the length or number of patents, and further research will study the necessity of a normalization step. Comparing the closeness figures of different products with the umbrella product allows finding related products in large patent databases, e.g. products which are similar to umbrellas in terms of product properties. Adding the constraint that the two selected nouns may not co-occur in any of the 64529 patents in our database, the closest noun to the umbrella noun is 'slider', indicating that the designer could be inspired by looking at the slider product family. The relevance of this result is verified by the fact that in our patent database no patent contains both the words umbrella and slider. An online search on a global patent database [32] reveals that of the 35388 patents covering

umbrellas, 805 contain the word 'slider'. This indicates that the proposed algorithm can find products with properties similar to those of the product under investigation, and can be used to steer creative efforts.

5 SUMMARY
By means of a case study comparing the umbrella product category with windscreen products, it was shown that, based on term-term correlation data between adjectives and nouns, the proposed methodology can automatically find product properties related to both products and list these in order of relevance. It was also shown that further analysis of the term-term correlation matrix permits finding properties which co-occur more often with one of the two nouns, enabling the extraction of properties differentiating the products. By looping over different extracted nouns, the proposed methodology furthermore allows an automatic search for related products. This was demonstrated by the identification of the slider product, which is closely related to the given umbrella product but does not occur together with it in the database used.

6 REFERENCES
[1] United States Patent and Trademark Office, 2008, Patenting by Organizations 2007.
[2] Khomenko, N., Ashtiani, M., 2007, Classical TRIZ and OTSM as a Scientific Theoretical Background for Non-Typical Problem Solving Instruments, Proceedings of the TRIZ-Future Conference, 73-80.
[3] Altshuller, G. S., 1984, Creativity as an Exact Science - The Theory of the Solution of Inventive Problems, Gordon and Breach Science Publishers, New York.
[4] Savransky, S. D., 2000, Engineering of Creativity: Introduction to TRIZ Methodology of Inventive Problem Solving, Boca Raton, Florida.
[5] Michel, J., Bettels, B., 2001, Patent citation analysis: A closer look at the basic input data from patent search reports, Scientometrics, 51/1: 185-201.
[6] Karki, M. M. S., 1997, Patent Citation Analysis: A Policy Analysis Tool, World Patent Information, 19/4: 269-272.
[7] Tseng, Y.-H., Lin, C.-J., Lin, Y.-I., 2007, Text mining techniques for patent analysis, Information Processing and Management: an International Journal, 43/5: 1216-1247.
[8] Yoon, B., Park, Y., 2003, A text-mining based patent network: Analytical tool for high-technology trend, Journal of High Technology Management Research, 15: 37-50.
[9] Wu, M.-C., Lo, Y.-F., Hsu, S.-H., 2006, A case-based reasoning approach to generating new product ideas, The International Journal of Advanced Manufacturing Technology, 30: 166-173.
[10] Yoon, B., Park, Y., 2005, A systematic approach for identifying technology opportunities: Keyword-based morphology analysis, Technological Forecasting & Social Change, 72: 145-160.
[11] Goldfire Innovator, Invention Machine. Available from http://www.invention-machine.com
[12] Cascini, G., Russo, D., Zini, M., 2007, Computer-Aided Patent Analysis: Finding Invention Peculiarities, Monterrey Nuevo León, Mexico.

[13] Cascini, G., Russo, D., 2007, Computer-Aided Analysis of Patents and Search for TRIZ Contradictions, International Journal of Product Development, 4/1-2: 52-67.
[14] He, C., Loh, H. T., 2008, Grouping of TRIZ Inventive Principles to facilitate automatic patent classification, Expert Systems with Applications, 34/1: 788-795.
[15] Loh, H. T., He, C., Shen, L., 2006, Automatic classification of patent documents for TRIZ users, World Patent Information, 28/1: 6-13.
[16] Cavallucci, D., 2002, Integrating Altshuller's Development Laws for Technical Systems into the Design Process, Annals of the CIRP, 50/1: 115-120.
[17] Mann, D., 2002, Better Technology Forecasting using Systematic Innovation Methods, Technological Forecasting and Social Change, 70/8: 779-795.
[18] Mann, D., Dewulf, S., 2002, Evolutionary-Potential in Technical and Business Systems, The TRIZ Journal.
[19] Dewulf, S., 2006, Directed Variation: Variation of Properties for New or Improved Function Product DNA, a Base for 'Connect and Develop', Proceedings of the ETRIA TRIZ Future Conference, Kortrijk, Belgium.
[20] Larkey, L. S., 1999, A patent search and classification system, Proceedings of the fourth ACM conference on Digital libraries, ACM Press, New York, US, 179-187.
[21] Fall, C. J., et al., 2003, Automated Categorization in the International Patent Classification, SIGIR Forum, ACM Press, 37/1: 10-25.
[22] Brants, T., 2000, TnT - A Statistical Part-Of-Speech Tagger, Proceedings of the Sixth Applied Natural Language Processing Conference ANLP-2000, Seattle.
[23] Noun Adjunct, Wikipedia. Available from http://en.wikipedia.org/wiki/Noun_adjunct
[24] Carl, M., Schmidt, P., Schütz, J., 2005, Reversible Template-based Shake & Bake Generation, Proceedings of the Workshop on Example-Based Machine Translation, MT Summit X, Phuket, Thailand.
[25] van Rijsbergen, C. J., Robertson, S. E., Porter, M. F., 1980, New models in probabilistic information retrieval, British Library Research and Development Report, 5587.
[26] Verhaegen, P.-A., Vertommen, J., D'hondt, J., Dewulf, S., Duflou, J., 2008, Relating Properties and Functions from Patents to TRIZ trends, CIRP Design Conference 2008: Design Synthesis, Twente.
[27] Baeza-Yates, R. A., Ribeiro-Neto, B., 1999, Modern Information Retrieval, Addison-Wesley Longman Publishing Co., Inc., Boston, MA.
[28] Berry, M. W., Dumais, S. T., O'Brien, G. W., 1995, Using Linear Algebra for Intelligent Information Retrieval, SIAM Review, 37/4: 573-595.
[29] Berry, M. W., Dumais, S. T., Shippy, A. T., 1995, A Case Study of Latent Semantic Indexing, Technical report, UT-CS-95-271, Knoxville, TN, USA.
[30] Apache Lucene. Available from http://lucene.apache.org
[31] The Free Dictionary. Available from http://www.thefreedictionary.com/windscreen
[32] Micropatent, Thomson Reuters. Available from http://www.micropatent.com


The Product Piracy Conflict Matrix – Central Element of an Integrated, TRIZ-based Approach to Technology-based Know-how Protection
G. Schuh, C. Haag
Fraunhofer Institute for Production Technology IPT, Steinbachstraße 17, 52074 Aachen, Germany
[email protected]

Abstract

The paper gives a general introduction to product piracy as an economic and a methodological challenge. Technology-based know-how protection is presented and its potential outlined. The corporate value chain is discussed as the relevant system when implementing technological know-how protection mechanisms and as an essential dimension of the so-called Product Piracy Conflict Matrix (PPC Matrix). Forming the methodological analogy, the TRIZ contradiction table is presented as a starting point for the PPC Matrix. The development of the matrix is described and its implications as part of a comprehensive process model for technological know-how protection are discussed. Finally, a detailed and critical outlook both on its application potential and on further research needs is given.

Keywords: Product Piracy, Product Imitation, Technology Know-how Protection, Contradiction Matrix, PPC Matrix, Ideality Approach

1 TECHNOLOGY-BASED PROTECTION AGAINST THE NEW CHALLENGE OF PRODUCT PIRACY
Product and brand piracy has risen to a worldwide mass phenomenon [1], no longer burdening only luxury goods and digital media but also technology-intensive sectors such as the automotive, electronics and machinery industries. Companies are gradually facing up to this new challenge and taking action. Besides legal counteractions, an increasing number of firms are also trying to implement technology-based know-how protection as a new approach against product piracy [2]. The potential of these approaches has so far been exploited only to a limited extent, which can mainly be traced back to a lack of knowledge regarding the functionality, benefits and application conditions of these new know-how protection mechanisms. Yet, as a survey by the Fraunhofer-Gesellschaft has revealed, technological and organisational protection measures are expected to be the most effective counteractions against product piracy in the future [3].

However, companies which try to identify concrete protection mechanisms and implement these measures in their running business often face problems and conflicts which seem insurmountable: on the one hand, a burdened company searches for powerful protection strategies for its products and brands; on the other hand, it is not willing to accept excessive modifications to its products and value chain. From a corporate point of view, a common requirement, for instance, is that the general product functionalities must not be noticeably affected by the implementation of a protection feature. In other cases, financial limitations, after-sales service requirements or regulatory constraints have to be considered. Due to these restrictions, companies often face a "deadlock situation" when trying to install suitable measures against product piracy. In order to overcome such conflicts, companies require systematic methodological support in finding appropriate measures that do not influence their value chains in a negative or harmful way. In line with this claim, the article introduces the so-called Product Piracy Conflict Matrix (PPC Matrix). This new problem-solving approach has been developed by the Fraunhofer Institute for Production


Technology IPT, based on insights from numerous consulting and research activities in the field of product piracy protection.

2 THE PPC MATRIX AND ITS THEORETICAL BACKBONE
The PPC Matrix has been designed as a methodical guideline for companies to select appropriate protection measures against product piracy. Contrary to other approaches (e.g. [2] [4] [5]), the PPC Matrix pays special attention to boundary conditions within the value chain of a company, which must not be influenced in an undesired or harmful way by the implementation of protection measures. The methodology primarily addresses professionals in R&D management who are searching for means to protect their products but as yet have little experience with this issue. It can also be helpful for experts who have already considered certain protection schemes but would like to double-check their selection in order to reduce the risk that a more appropriate measure has been overlooked.

2.1 The Contradiction Table of TRIZ as a Methodological Frame
The basic idea of the PPC Matrix is derived from contradiction analysis, a well-established method within TRIZ. According to Altshuller, an inventive problem contains at least one contradiction. This insight arose from his study of 40,000 patents. He identified 39 design parameters that can induce conflicts in engineering work (e.g. reducing "weight" while enhancing "strength"). From the patents he studied, Altshuller selected several solution principles for each combination of conflicting parameters, finally compiling a list of 40 inventive principles. Out of these findings arose the TRIZ contradiction approach, which relies on expressing a challenging problem as a technical contradiction, for which solutions can be identified in a systematic way based on Altshuller's inventive principles. A technical contradiction exists if improving a parameter "A" of a system causes a different parameter "B" to deteriorate, whereas a physical contradiction exists if some aspect of a product or service must simultaneously adopt two opposing states. Expressing the problem in question as a technical or physical contradiction is therefore a prerequisite to applying the contradiction table [6]. The analysis then relies on fitting the problem to a table of conflicts between 39 technical parameters and identifying solutions based on 40 inventive principles which have proved successful in resolving these conflicts. In this sense, the contradiction table represents a comprehensive compilation of expert knowledge on the applicability of inventive principles in solving technical or design problems (for a more detailed description of the contradiction analysis see [7]).

In certain cases, the contradiction table itself is already applicable and appropriate in the context of product piracy protection. The 40 inventive principles can be very helpful in finding new, unconventional technical approaches to know-how protection. Yet in general, the direct applicability of the contradiction table is limited, mainly for two reasons:
• Solutions against product piracy are clearly not limited to technical principles, so the scope of innovative solutions generated by the contradiction table is limited in the context of piracy problems. However, the issue of creating solutions against product piracy can as a whole be viewed as a physical contradiction, due to the conflict already mentioned: on the one hand, protection mechanisms are called for; on the other, those responsible are not willing to accept negative or harmful alterations to their product. In many workshops that Fraunhofer IPT has conducted with companies in different lines of industry, this conflicting situation represented a major restriction or even a knock-out criterion against the implementation of powerful protection measures. The idea of a problem-specific contradiction analysis arose from this insight.
• Users often fail to structure and express piracy problems in terms of technical or physical contradictions due to lacking knowledge and experience, so more problem-specific guidance is required.

2.2 The Analogy Between the TRIZ Contradiction Table and the PPC Matrix
Following the basic idea of the TRIZ contradiction table, the PPC Matrix is designed to identify standard solution principles against product piracy threats, for which suitable solutions must then be worked out. The structure of the PPC Matrix is therefore quite similar to the contradiction table: the rows contain actuating parameters, or levers, for implementing protection mechanisms, while the columns represent a list of reactive parameters that could be harmfully affected by the actuating parameters. In short, the analogies between the traditional contradiction table and the PPC Matrix can be described as follows:
• Conflicts between active and reactive parameters are recorded at the intersections of the rows and columns of the PPC Matrix.
• Standard solution principles within the PPC Matrix are based on a catalogue of protection measures, a research result of the Fraunhofer IPT [2] [8].
• The application of the PPC Matrix is embedded in a comprehensive problem-solving procedure that comprises an initial problem analysis and the identification of main conflicts, followed by the development and validation of solutions based on the standard principles proposed in the cells of the matrix.

2.3 Design of the PPC Matrix
In contrast to the TRIZ contradiction table, which uses the same 39 technical parameters to structure both the rows (feature to improve) and the columns (undesired results) [6], the axes of the PPC Matrix are designed asymmetrically. Although several parameters can be found simultaneously in the columns and the rows of the matrix, the structuring frameworks for the rows and the columns are different (Figure 1): derived from a game-theoretical analysis, the parameters within the rows are classified according to the generic behavioural pattern of the imitator and the original product manufacturer. The parameters in the columns, on the other hand, are arranged according to the standard value chain of a company. These two basic structuring concepts are described in detail in the following.

[Figure 1: rows — useful parameters within game-theoretical stages; columns — harmful parameters within value chain stages; cells — technical, organisational and market-related (TOM) principles resolving the conflict of parameter modification.]
Figure 1: Concept of the PPC Matrix.

2.4 Structuring the Rows Based on Game-theoretical Analysis
From a game-theoretical point of view, there are four more or less sequential stages which generically describe the behavioural pattern of an imitator, and thus the stages of opportunity for an original product manufacturer to take counteractions [2]:
• Selection of a product to be copied
• Analysis of the product
• Reproduction of the product
• Marketing and sales of the imitation
The first decision an imitator has to make is the selection of the product he intends to copy. Needless to say, the imitator's decision mainly depends on the expected commercial benefits linked to the imitation. Hence, the initial selection and decision-making process of the imitator represents the first stage at which the original product manufacturer can take action. By shifting certain parameters within the product design or business model (e.g. in terms of production techniques or after-sales services), the original product manufacturer can deliberately lower the imitation attractiveness of his products. After an imitator has decided to copy an original product, the second lever is to make the product analysis, or reverse engineering, as time-consuming and tedious as possible for the imitator. Various counteractions can be considered, yet most of them are directly affiliated to the


product structure or design (e.g. an increase of product complexity or a limitation of product access). If the imitator has succeeded in analysing a product, the third and subsequent stage is the reproduction (in terms of manufacturing) of the original product by the imitator. For the original product manufacturer, the corresponding lever is to impede the imitator from actually realising a reproduction. At this stage, mainly measures related to supply chain management (e.g. limiting the access to essential suppliers or components) can be considered. Finally, assuming the imitator has actually succeeded in manufacturing a replica, the remaining lever for the affected company is to hinder the imitator from bringing his imitation to the market and gaining market shares. Counteractions primarily apply either directly to the design of sales channels or to implementing appropriate authentication means. These four stages represent a suitable classification framework for a generic list of actuating parameters companies can modify in order to implement protection measures. The corresponding parameters constitute the rows of the PPC Matrix.

2.5 Structuring the Columns Based on the Ideality Principle
According to "ideality thinking" as a key concept of TRIZ, a technical sub-system should fulfil its desired functions without calling forth an undesired effect in the corresponding overall system. The TRIZ dictum says: effective technical solutions evoke maximal positive effects within a system and simultaneously limit possible negative effects to a minimum. The effectiveness E is a universal TRIZ ratio to evaluate the degree of ideality of a technical system:

E = \frac{\text{sum of the positive effects}}{\text{sum of the negative effects}}

Metaphorically speaking, the ideal system provides its desired function without even existing [9]. Following this line of thinking in the context of product piracy, protective measures can be considered sub-systems of an already existing system, i.e. of the value chain of the burdened product which has to be protected. Hence, an ideal measure against product piracy fulfils its desired function without bringing harmful effects into the value chain system it is supposed to protect. A full corporate value chain generally comprises seven stages [10]:
• Research & Development
• Procurement
• Production
• Distribution
• Marketing
• Sales
• Service
As already illustrated in the introduction, certain characteristics within the value chain may be influenced in a harmful or destructive way by the implementation of protection measures. This circumstance accounts for the conflict companies face in the context of product piracy. Consequently, following the notion of ideality, the PPC Matrix offers standard solution principles which enhance the positive (protective) effects and minimise the negative (value chain-modifying) effects.

2.6 The 26 TOM Principles
Based on the research results of Neemann [2], Fraunhofer IPT has compiled a list of 26 principles that can be applied to dissolve the described conflict. Three basic categories have been distinguished to classify these principles:
• Technical principles: they are directly integrated within the product as additional features or as modifications of existing product components.
• Organisational principles: they can be implemented within the internal organisational structure, without considering external links to markets.
• Market-related principles: they are implemented according to customer requirements and relationships, i.e. taking the market of the burdened product into account.
According to the initials of the three categories, the 26 principles against product piracy are labelled the TOM principles (Figure 2). In analogy to the 40 inventive principles of TRIZ, the TOM principles represent standardised mechanisms in an abstract form. This implies that a principle considered suitable must be adjusted to the individual problem characteristics, i.e. to product and corporate boundary conditions.

Technical principles: T1 Fixed cost-intensive manufacturing; T2 Branded functionality; T3 Product (de-)activation; T4 Decomposition barriers; T5 Functional black boxes; T6 Fake black boxes; T7 De-standardisation; T8 Local increase of performance density; T9 Product authentication; T10 Product bundling.
Organisational principles: O1 Product certification; O2 Staff retention; O3 Codification of documents; O4 Proprietary development of production facilities; O5 Collaboration with imitator; O6 Chinese walls in the supply chain; O7 Contracted supplier relationships.
Market-related principles: M1 Lead time; M2 Release management; M3 Simultaneous market launch; M4 Price differentiation; M5 Product differentiation; M6 Shadow placement; M7 Mass customisation; M8 Extended life cycle services; M9 Establishment of industry standards.
Figure 2: TOM principles for product piracy protection.

3 PRACTITIONER'S GUIDANCE FOR APPLICATION: A COMPREHENSIVE APPROACH FOR TECHNOLOGY-BASED KNOW-HOW PROTECTION
The Fraunhofer IPT has elaborated a user guideline on how to apply the PPC Matrix and which critical factors to consider. A six-step procedure is suggested as a comprehensive approach to identify and implement non-legal protection measures using the PPC Matrix. Figure 3 gives an overview of the individual steps and shows where, by experience, the most effort has to be invested. The approach is introduced in the following.
Step 1: Piracy Problem Analysis
Step 2: Identification of Relevant Stages for Counteractions
Step 3: Selection of Appropriate Levers for Protection Mechanisms
Step 4: Identification of Fixed Parameters in the Value Chain
Step 5: Determination of Appropriate TOM Principles
Step 6: Evaluation of TOM Principles and Company-specific Adaptation Effort

Figure 3: PPC Matrix Application.

3.1 Step 1: Piracy Problem Analysis
A detailed problem analysis is the first step in applying the PPC Matrix. During this initial phase, the company's specific situation must be investigated, i.e. the actual or most likely imitation scenarios determined. The following

categorisation of imitation types (Figure 4) gives companies guidance in clarifying this issue: companies must get a clear idea about what kind of imitation they actually have to fear. In consideration of the different types of imitation, the potential imitators can be characterised (see Figure 5). Manufacturers of brand counterfeits normally have a poor quality level in comparison with the original product manufacturer. Such an imitator does not (need to) acquire comprehensive knowledge about the original product, and the markets he addresses differ largely from those of the original product.

[Figure 4 distinguishes imitations — full or partial reproductions of specific characteristics of a product — into counterfeits, which put one's own products under the creatorship of others (brand counterfeits, e.g. Tempo; slavish counterfeits, e.g. Nintendo), and plagiarisms, which put others' intellectual property under one's own creatorship (concept copies, e.g. Festo; slavish copies, e.g. Stihl).]

Figure 4: Generic types of product imitation (image source: www.plagiarius.com).

Slavish counterfeits also tend to have a worse quality level than the original product, but they are sometimes hard to distinguish from it; that is why the addressed markets are quite similar. Concept copies as well as slavish copies demand comprehensive know-how adaptation by the imitator. Mostly, these imitations have a high quality level, and their customers are very similar to those of the original product. On the basis of these characteristics, it has to be considered what kind and extent of commercial damage might be implied by an imitation. For instance, when simple brand counterfeits are reproduced by the imitator, this will probably not immediately result in significant sales losses, because the potential customers of the imitation are not similar to those of the original product; a decrease in brand reputation is the much more likely threat in this type of imitation scenario. In other cases, concept copies will probably not result in unjustified product liability complaints, because the original product is easy to differentiate from the imitation; only slavish counterfeits will cause such complaints and demand appropriate counteractions.

[Figure 5 characterises the potential imitator behind each imitation type along four dimensions: technology strategy (from legitimate fast-follower to illegal/illegitimate technology acquisition), identity of markets (market similarity 0-100%), quality level (poor to similar), and know-how adaptation (no know-how build-up to comprehensive know-how build-up), shown for brand counterfeits, slavish counterfeits, concept copies and slavish copies.]

Figure 5: Characteristics of potential imitators [2].

3.2 Step 2: Identification of Relevant Stages for Counteractions
As the next step, the relevant (game-theoretical) stage for taking counteractions must be identified. Questions to ask in this context are: is the considered product highly attractive for imitation? Has the product already been analysed by potential imitators, or has technical know-how already left the boundaries of the company (e.g. due to staff turnover, trade fair appearances, etc.)? Have concrete product imitations already emerged, or are they foreseeable? Depending on how far the (expected) imitation process has already progressed, companies have to define the appropriate stage for counteraction. When, for example, the product development has not been completed yet and no critical know-how has left the company so far, the product's attractiveness for imitation can be reduced by certain technical and commercial characteristics, so that imitators will not select the product for imitation in the first place. However, when the product has already been analysed by imitators and reproductions of the original product are very likely to follow, the appropriate stage for counteractions would be the marketing and sales phase of the imitation.

3.3 Step 3: Selection of Appropriate Levers for Protection Mechanisms
The third step involves identifying those levers which are available for integrating protection mechanisms. In accordance with the stage chosen for counteractions, the company must consider which parameters in its own business to change in order to prevent product piracy. This may include parameters like brand appearance, product complexity or even annual part volume. Such useful parameters for modification are listed in the rows of the PPC Matrix. Needless to say, companies do not have to focus on only one parameter, i.e. one row of the PPC Matrix, but can also take three or four parameters into account; in this case the following steps have to be conducted multiple times.

3.4 Step 4: Identification of Fixed Parameters in the Value Chain
During the fourth step, the firm must consider the entire value chain and determine which phases are likely to be harmfully affected by modifying the selected parameters. As in the example above, when decreasing the annual part volume in order to lower the imitation attractiveness of a product, the sales volume will decrease as well. It is easily foreseeable that this kind of parameter modification will hardly be accepted, because it might hurt the company's commercial performance even more than the appearance of product imitations itself. The sales volume is included in the list of harmful parameters to be changed within the company's value chain, which constitute the columns of the PPC Matrix. The juxtaposition of useful parameters against product piracy in the rows and harmful parameters of the value chain in the columns states the conflict to be solved in the following step.

3.5 Step 5: Determination of Appropriate TOM Principles
Within the fifth step, the core of the PPC Matrix is finally applied. First, the row containing the useful actuating parameter to implement a protection mechanism is


selected. Then the column corresponding to the harmful reactive parameter is identified, which represents the fixed aspect in the value chain that is negatively affected by an alteration of the actuating parameter. At the intersection of the row and the column, the solution principles capable of resolving the conflicting situation are indicated by their respective short names, as shown in Figure 6.

[Figure 6 example: the actuating parameter 'annual part volume' — a lever within the game-theoretical stage 'product selection by imitator' with a useful effect of modification — is crossed with the reactive parameter 'annual sales volume' in the value chain stage 'Marketing & Sales'; the intersection cell lists M2 (Release Management), M5 (Product Differentiation) and M7 (Mass Customisation).]

Figure 6: Identification of suitable TOM principles.

There can always be more than one principle within an intersection. For each TOM principle, a short description as well as application advice is given, so that the user can get a clear picture of the principle and its effect against product piracy.

3.6 Step 6: Evaluation of TOM Principles and Company-specific Adaptation
It is not automatically guaranteed that an identified TOM principle is an appropriate measure for a specific case. Therefore, in the next step the measure has to be evaluated regarding its actual problem-solving potential for the particular product piracy scenario. For this last step, all stakeholders that might be affected by the implementation of the measure have to be involved and asked about the principle's applicability from their point of view. After a principle has finally been approved by all experts as a suitable and applicable measure for the company to prevent product piracy, and cost calculations have also confirmed its commercial benefit, the technical, organisational or market-related adaptation of this principle can be initiated.
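To make the step 5 lookup concrete, the following Python sketch models the PPC Matrix as a mapping from conflict pairs to TOM principles. Only the single cell documented in Figure 6 is reproduced; the parameter names and the helper function are illustrative assumptions, and the full matrix content remains a Fraunhofer IPT research result.

```python
# Illustrative sketch of the step 5 lookup; not Fraunhofer IPT software.
TOM_PRINCIPLES = {
    "M2": "Release management",
    "M5": "Product differentiation",
    "M7": "Mass customisation",
}

# (actuating parameter, harmfully affected value chain parameter)
#   -> TOM principles recorded at the row/column intersection
PPC_MATRIX = {
    ("annual part volume", "annual sales volume"): ["M2", "M5", "M7"],
}

def suitable_principles(actuating, reactive):
    """Return the standard solution principles for a stated conflict,
    or an empty list if the conflict is not covered by the matrix."""
    return [(code, TOM_PRINCIPLES[code])
            for code in PPC_MATRIX.get((actuating, reactive), [])]

print(suitable_principles("annual part volume", "annual sales volume"))
# -> [('M2', 'Release management'), ('M5', 'Product differentiation'),
#     ('M7', 'Mass customisation')]
```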


4 REFLECTION AND OUTLOOK
First and foremost, the introduced approach for technology-based know-how protection, using the PPC Matrix as methodological support, can be regarded as a basic framework for companies that so far have little idea of the range of solution principles and their application potential. Furthermore, the PPC Matrix can provide valuable guidance for companies in validating previously identified mechanisms. From a more academic perspective, by analysing in detail the coherences and determining factors of protection mechanisms, the matrix represents an important step in structuring this novel field of research. In this sense, the matrix represents a comprehensive compilation of expert knowledge concerning the applicability of non-legal principles in preventing product piracy. Clearly, the matrix does not claim to deliver "turnkey solutions" for a concrete piracy problem; in analogy to the contradiction table, it generates ideas in the sense of abstract principles, stimulates creativity and forms the basis for further validation, implementation and enhancement steps. The approach has proved successful in a number of consulting projects conducted by Fraunhofer IPT. In the future, Fraunhofer IPT will continually supplement the 26 TOM principles and update the matrix accordingly. Although the set of principles can already be considered highly comprehensive, the initial set must be extended, mainly triggered by experience gained in concrete industrial applications. This process has not been finalised and will be continued on an ongoing basis, thus ensuring the up-to-date status and capability of the methodology.

5 REFERENCES
[1] OECD (2007) The Economic Impact of Counterfeiting and Piracy - Part IV: Executive Summary. OECD, Paris.
[2] Neemann, C. W. (2007) Methodik zum Schutz gegen Produktimitationen, Shaker-Verlag, Aachen.
[3] Krueger, J. and Nickolay, B. (2006) Marken- und Produktpiraterie 2006: Wahrnehmung von Marken- und Produktpiraterie und Akzeptanz technologischer Schutzinstrumente, Fraunhofer IPK, Berlin.
[4] Fuchs, H. J. (ed.); Kammerer, J.; Ma, X.; Rehn, I. M. (2006) Piraten, Fälscher und Kopierer: Strategien und Instrumente zum Schutz geistigen Eigentums in der Volksrepublik China. Gabler-Verlag, Wiesbaden.
[5] Specht, D.; Mieke, C. (2008) Strategien gegen Produktpiraterie - Schutzmaßnahmen setzen an in Forschung & Entwicklung, Produktion und Vertrieb, in: Wissensmanagement 2/2008, Büro für Medien, Augsburg.
[6] Terninko, J. et al. (1998) Systematic Innovation. An Introduction to TRIZ, CRC Press, Boca Raton.
[7] Pannenbaecker, T. (2001) Methodisches Erfinden in Unternehmen, Gabler-Verlag, Wiesbaden.
[8] Schuh, G.; Kreysa, J.; Haag, C. (2007) TRIZ-based Technology Know-how Protection: How to Find Protective Mechanisms against Product Piracy with TRIZ. In: Gundlach, C.; Lindemann, U.; Ried, H. (ed.): Current Scientific and Industrial Reality - TRIZ Future 2007. Kassel University Press, Kassel.
[9] Orloff, M. A. (2003) Inventive Thinking through TRIZ, Springer-Verlag, Berlin-Heidelberg-New York.
[10] Mueller-Stewens, G. and Lechner, C. (2003) Strategisches Management, Schaeffer-Poeschel-Verlag, Stuttgart.

Computer-Aided Conceptual Design Through TRIZ-based Manipulation of Topological Optimizations
G. Cascini¹, U. Cugini¹, F. S. Frillici², F. Rotini²
¹ Dip. di Meccanica, Politecnico di Milano, Italy
² Dip. di Meccanica e Tecnologie Ind.li, Università di Firenze, Italy
[email protected], [email protected], [email protected], [email protected]

Abstract
In a recent project the authors proposed the adoption of Optimization Systems [1] as a bridging element between Computer-Aided Innovation (CAI) and PLM to identify geometrical contradictions [2], a particular case of the TRIZ physical contradiction [3]. A further development of the research has revealed that the solutions obtained from several topological optimizations can be considered as elementary customized modeling features for a specific design task. The topology overcoming the arising geometrical contradiction can be obtained through a manipulation of the density distributions constituting the conflicting pair. So far, two strategies of density combination have been identified as capable of solving geometrical contradictions.

Keywords: Computer-Aided Innovation, Computer-Aided Conceptual Design, Embodiment Design, TRIZ

1 INTRODUCTION
Computer-Aided Innovation (CAI) is an emerging discipline within the environment of computer-based systems and applications for product development. Although CAI still requires a precise identification of its scientific foundation and of its main directions of research, it receives growing attention both from academia and industry as the class of software systems supporting any activity from the fuzzy front end of product development to the following phases of detailed design. Among the main issues to be approached by researchers in the field of CAI, proper attention should be dedicated to: (i) the poor interoperability between the computer tools currently adopted in innovation-related activities, due to the lack of formalized procedures and means to accomplish conceptual design tasks [4]; (ii) the limited usability of CAD systems for conceptual design. In fact, the generation of a geometry capable of delivering a certain function is not supported by current CAD systems, which are mainly conceived as a means for parametric variation of design details [5]. At the same time, modern sketch-based 3D modeling systems also still present several key problems limiting their usability [6].

In this context the authors have addressed the goal of improving the interoperability of computer-aided design systems by an original integration of TRIZ-based software tools with Optimization and PLM systems through the PROSIT project (www.kaemart.it/prosit) [4]. A reference book for TRIZ (Russian acronym for the Theory of Inventive Problem Solving) is [3]. The promising results obtained so far have triggered the idea of adopting the results of a topological optimization as customized modeling features in the embodiment design phase, i.e. when the abstract functional architecture defined in the conceptual design phase is molded into a system to be produced.


The next section presents some open research problems from the related art and summarizes the results of the PROSIT project relevant to the present activity. The paper then proposes an original TRIZ-based approach to combine the results of different topological optimizations in order to generate a new geometry with improved performance and characteristics compared with classical multi-objective optimizations. The fourth section reports some exemplary applications of the proposed approach to clarify its practical implementation and to discuss its expected benefits.

2 RELATED ART

2.1 Conceptual design and CAD systems
Although the importance of conceptual design is widely recognized, owing to its influential role in determining a product's fundamental features, as a matter of fact CAD/CAE systems are not conceived to allow fast input and representation of concept models, and consequently they introduce inertial barriers to experimenting with new design solutions. Indeed, they do not provide any support to designers in developing and expressing their creativity [7, 8]. In fact, commercial CAD systems let users successfully carry out tasks related to the detailed design stage, but not enough effort has been dedicated to the conceptual design phase, especially to activities such as function synthesis, concept generation and exploration. Preliminary attempts to provide conceptual design capabilities to CAD systems are in progress: in [5], shape and topological variations of a 3D model are proposed as a means to generate an optimal geometry through the application of genetic algorithms. Nevertheless, these topological and shape variations are obtained through the modification of classical 3D modeling features, which dramatically limits the design space and impacts the practical usability of the proposed method.

2.2 Topological Optimization systems
Topology Optimization is a technique that determines the optimal material distribution within a given design space by modifying the apparent material density defined as the design variable. The design domain is subdivided into finite elements, and the optimization algorithm alters the material distribution within the design space at each iteration, according to the objective and constraints defined by the user. The surfaces defined as "functional" by the user are preserved from the optimization process and treated as "frozen" areas by the algorithm. Thus, designing through the Topology Optimization technique means translating a design task into a mathematical problem with the following basic entities:
- an Objective Function, i.e. a combination of Evaluation Parameters, adopted as a reference metric to assess the degree of satisfaction of the design requirements;
- a set of Design Variables, i.e. the material density variables by which the design domain is parameterized; they constitute the Control Parameters of the system affecting the Evaluation Parameters;
- a set of External Inputs and Constraints representing the operating conditions and requirements the system has to satisfy. Among them, manufacturing constraints may be set in order to take into account the requirements related to the manufacturing process: sliding planes and preferred draw directions may be imposed for molded, tooled and stamped parts, as well as minimum or maximum sizes of the structural elements (i.e. ribs, wall thicknesses, etc.).
The optimization algorithm finds the material density distribution within the given design domain which minimizes, maximizes or, in general, "improves" the objective function, i.e. the Evaluation Parameters, while satisfying the Constraints. Topology Optimization is widely used to support the design of lightweight and stiffened components; a survey of methods is presented in [9, 10]. In recent years these methods have been integrated into several CAE tools, such as HyperWorks [11], TOSCA [12], Nastran [13], ANSYS [14] and others. Although Topology Optimization was born with the aim of supporting design tasks in the structural field, it has recently been applied to design problems in other fields, such as fluid dynamics, heat transfer and non-linear structural behavior; several works are available in the literature, with examples provided in [15-17].

However, since the design process has multidisciplinary characteristics, improving one performance of a system may result in degrading another. This kind of conflict cannot be solved using Design Optimization, since these techniques can focus the design task only on one specific performance to be improved. More precisely, Design Optimization tools manage multiple goals only by defining complex objective functions in which a weight must be assigned to each specific goal [18]. Thus, the best compromise solution is generated on the basis of an initial assumption made by the designer about the relative importance of the requirements, without taking their reciprocal interactions into account. The integration between the Topology Optimization technique and CAD tools is another very important open issue that should be addressed in order to enhance interoperability: Topology Optimization uses a material density distribution within a given design domain to represent a geometry, and this paradigm cannot be directly translated into the feature-based representation used in CAD tools.

2.3 The PROSIT project
By means of the PROSIT project, the authors addressed the integration of Computer-Aided Innovation systems, Optimization systems and PLM/EKM tools as a means to improve the innovation resources and the efficiency of a product development cycle. The rationale of the research was the lack of formalized and validated procedures allowing the systematic introduction and integration of these tools in the design process. A relevant aspect of the results achieved by the PROSIT project is the integration of apparently incompatible tools, thanks to the new role and way of usage of the Optimization Systems. The starting point is that in the design process designers have to address three subsequent interconnected tasks:
- correct problem stating (precisely formulate the right question);
- define the correct-optimal architectural-morphological answer;
- finalize the best solution taking into account the technical/engineering constraints.
In order to perform these tasks, designers have at their disposal different dedicated approaches and tools. The goal of the PROSIT project was to demonstrate that it is possible to define a coherent and integrated approach leveraging available theories, methods and tools, as illustrated in figure 1 [2].

[Figure 1 diagram: along the product development timeline, the tasks move from problem setting (high abstraction level) through the optimal morphology/topology to the best engineering solution; the corresponding approaches are the TRIZ theory, the Topology Optimization theory and KBE methods, supported respectively by CAI systems, topology optimization software and PLM systems (e.g. CAD-CAE).]
Figure 1: Methods and tools to support the tasks of a product development process.
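As a minimal illustration of the basic entities listed in section 2.2, a mono-goal topology optimization task could be represented as follows; all names are assumptions for illustration, not the interface of any of the cited CAE tools.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the basic entities of a mono-goal topology
# optimization task (section 2.2); the field names are hypothetical.
@dataclass
class TopologyOptimizationTask:
    objective: str                    # e.g. "maximize stiffness"
    design_space: tuple               # voxel grid shape of the design domain
    frozen_surfaces: list = field(default_factory=list)  # functional areas
    constraints: list = field(default_factory=list)      # e.g. mass limits
    boundary_condition: str = ""      # one loading case per mono-goal task

# One task per boundary condition, as suggested by the PROSIT approach:
tasks = [
    TopologyOptimizationTask("maximize stiffness", (50, 50, 50),
                             frozen_surfaces=["functional surface A"],
                             constraints=["mass <= m_max"],
                             boundary_condition="loading case 1"),
    TopologyOptimizationTask("maximize stiffness", (50, 50, 50),
                             frozen_surfaces=["functional surface A"],
                             constraints=["mass <= m_max"],
                             boundary_condition="loading case 2"),
]
```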

Innovation and optimization are usually conceived as conflicting activities. However, the topology and shape generation capabilities of modern design optimization technologies can be adopted as a means to speed up the embodiment of innovative concepts, and also as a way to support the designer in the analysis of conflicting requirements for an easier implementation of TRIZ instruments for conceptual design. In fact: (i) defining a single multi-goal optimization problem leads to a compromise solution; (ii) defining N complementary mono-goal optimization problems instead, each with specific boundary conditions, leads to N different solutions; (iii) these solutions can be conflicting, and this is the key to finding contradictions. In [1] a classification of these contradictions was proposed, mostly related to the geometrical differences between the results of the mono-goal optimization tasks and to the nature of the conflicting design parameters:
- Size Contradictions: a dimensional parameter of the Technical System (TS) should be big and should be small according to two or more different mono-goal optimization tasks. Three different sub-classes can be defined: 1D, 2D, 3D.


- Shape Contradictions: an element or a detail should assume different forms, e.g. sharp vs. rounded details, or circular vs. polygonal.
- Topological Contradictions: an element or a detail should assume different topologies (material distributions, e.g. monolithic and segmented) and/or orientations (e.g. horizontal and vertical).
Within the PROSIT project a set of guidelines was developed to lead the designer to the identification of the most appropriate instruments of classical TRIZ for overcoming physical contradictions, and to their consequent application in the development of the final solution. It is worth noticing that the PROSIT project did not aim at the creation of a fully automatic system for design embodiment, because both the comprehension of the root cause of a geometrical contradiction and, most of all, the translation of the TRIZ principles into a new set of optimization tasks require a creative, even if systematic, step left to the designer. Nevertheless, the obtained results suggested the investigation of semi-automatic procedures to combine the outputs of the single-goal optimization tasks as a means to reduce the creative contribution of the designer, who remains in charge of selecting the most suitable directions among those proposed by the computer-based system.

3 MANIPULATION OF TOPOLOGICALLY OPTIMIZED DENSITY DISTRIBUTIONS

3.1 Topologically optimized density distributions and TRIZ contradictions
As described in the previous section, instead of accepting a compromise solution generated by a multi-goal optimization, it is preferable to determine the best geometry for each boundary condition the technical system may encounter and, if these results conflict with each other, to adopt a TRIZ approach to overcome the emerging contradictions. The minimal contradiction involves two alternative density distributions arising from two topological optimizations of the same technical system (TS) under different boundary conditions, as schematically represented in figure 2: the symbols "+" and "-" mean that the behavior of the TS under the i-th Boundary Condition improves or worsens, respectively, according to the goal function of the optimization problem. In other words, the diagram in figure 2 should be read as follows: the density distribution should assume the topology "∨" in order to improve the behavior of the TS under Boundary Condition #1, but it then degrades the behavior under Boundary Condition #2; and it should assume the topology "∧" in order to improve the behavior of the TS under Boundary Condition #2, but it then degrades the behavior under Boundary Condition #1.

Figure 2: Geometrical contradiction derived from the comparison of two topological optimizations related to alternative boundary conditions of the technical system.

Note that the density distribution is not a scalar variable, but a 3D array representing the optimized density of each voxel.
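The paper does not prescribe how the conflict between two optimized voxel arrays is detected; as an illustrative sketch outside the authors' method, one simple heuristic is to threshold both density fields and flag a candidate contradiction when their solid regions barely overlap.

```python
import numpy as np

def solids_overlap(rho1, rho2, threshold=0.5):
    """Intersection-over-union of the solids obtained by thresholding
    two voxel density fields (values in [0, 1])."""
    s1, s2 = rho1 >= threshold, rho2 >= threshold
    union = np.logical_or(s1, s2).sum()
    if union == 0:
        return 1.0
    return np.logical_and(s1, s2).sum() / union

rng = np.random.default_rng(0)
rho_bc1 = rng.random((20, 20, 20))   # toy density, boundary condition #1
rho_bc2 = rng.random((20, 20, 20))   # toy density, boundary condition #2
if solids_overlap(rho_bc1, rho_bc2) < 0.3:   # illustrative cut-off
    print("Topologies diverge: candidate geometrical contradiction")
```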


Such a formulation clearly resembles a classical TRIZ contradiction, where the density distribution is the parameter under the control of the designer (CP) and the goal function under the different Boundary Conditions constitutes the Evaluation Parameters of the Technical Contradiction [19]. More generally, a TS can experience more than two different operating conditions, and consequently more than two topologically optimized density distributions can impact the same contradiction. The properties of such a "generalized contradiction" are still under investigation, as are the most effective directions for generating a satisfactory solution [20]. In this paper only contradictions in the form represented in figure 2 are taken into account.

3.2 Topologically optimized density distributions as customized 3D modeling features
A general conclusion can be drawn from the references mentioned in section 2.1: the modeling features currently adopted by CAD systems are too rigid to be compatible with the fuzziness of the preliminary steps of embodiment design. Moreover, the transformation of basic modeling elements (i.e. protrusions, revolutions, etc.) into more flexible features (e.g. loft, sweep) as proposed in [5] appears computationally expensive and hard to integrate with other existing design tools. In this paper we propose the density distributions generated by topological optimizations of mono-goal problems as elementary customized features for the definition of the geometry of a certain mechanical part during the embodiment stage, when its functional role must be translated into a geometry to be manufactured and coupled with other subsystems. Even if a proper discussion of this choice is postponed to the last section of the paper, it is worth highlighting some characteristics of these customized modeling features:
- as mentioned in section 2.2, the result of a topological optimization is a distribution of density, so that each cell of the design space assumes a fuzzy value between 0 and 1, which in turn means that boundaries are not rigid, as also happens with classical free-form modeling features; in fact, a density distribution can produce both topological and shape variations while, apart from a few exceptions, parametric modifications of a free-form surface produce just shape variations;
- compared with free-form surfaces, where a shape variation is obtained by moving many control nodes, the output of a topological optimization produces different specific geometries by editing just one parameter, i.e. the threshold value of the density discriminating between void and filled space.
Also according to the results of the PROSIT project, the embodiment design phase should start with the translation of the system requirements into separate boundary conditions to build complementary mono-goal optimization problems. The solutions generated by each topological optimization can be considered as elementary modeling features to be combined as described in the following section.

3.3 TRIZ-based combinations of density distributions
When a geometrical contradiction is formulated as represented in figure 2, different strategies can be considered to define a solution capable of satisfying both conflicting requirements. A TRIZ expert can recognize a certain similarity between a density distribution and a team of "smart little people" [3]. From this point of view, a first option to obtain the advantages of both "values" of the density distribution is a hybridization obtained by a weighted sum of the partial values:

\rho(x, y, z) = \frac{K_1 \rho_1(x, y, z) + K_2 \rho_2(x, y, z)}{K_1 + K_2} \qquad (1)

where:
- ρ(x, y, z) is the distribution of density in the design space overcoming the geometrical contradiction;
- ρi(x, y, z) is the distribution of density of the i-th mono-goal topological optimization problem;
- Ki is the weight assigned to the result of the i-th mono-goal optimization.
The investigation carried out by the authors on many different geometrical contradictions and related solutions (more details about their source can be found in [1]) revealed that typical solution paths can be associated with:
- a different orientation of a geometrical feature, i.e. a rotation of a geometrical element or, in TRIZ terms, "Another Dimension" (Inventive Principle #17);
- multiple copies obtained by a translation of a geometrical feature, as suggested by the Mono-Bi-Poly trend of evolution of homogeneous systems (figure 3) applied to geometrical features;
- a combination of the above, i.e. the Mono-Bi-Poly trend applied to systems with shifted characteristics, obtained by introducing multiple copies of a geometrical feature, each with a proper position and orientation (figure 4); the simplest case is obtained by duplicating a geometrical feature by means of a mirror operation (figure 4, below).

Figure 3: Mono-Bi-Poly transformation applied to geometrical features.
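Before generalizing, the hybridization of formula (1) is small enough to sketch directly on voxel arrays; the NumPy fragment below is illustrative, with toy arrays standing in for the "∨" and "∧" distributions of figure 2.

```python
import numpy as np

# Minimal sketch of the hybridization of formula (1): a voxel-wise
# weighted blend of two conflicting density distributions.
def hybridize(rho1, rho2, k1=1.0, k2=1.0):
    """rho = (K1*rho1 + K2*rho2) / (K1 + K2), element-wise."""
    return (k1 * rho1 + k2 * rho2) / (k1 + k2)

rho1 = np.zeros((10, 10, 10)); rho1[:, 4:6, :] = 1.0   # toy topology 1
rho2 = np.zeros((10, 10, 10)); rho2[4:6, :, :] = 1.0   # toy topology 2
rho = hybridize(rho1, rho2, k1=2.0, k2=1.0)  # first loading case weighted higher
# Thresholding rho (e.g. rho >= 0.5) yields the candidate geometry.
print(rho.min(), rho.max())
```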

Figure 4: Exemplary bi-features obtained by a combination of rotations and translations of the original geometry.

A general expression capable of representing all the above solution strategies is the following:

\rho(x, y, z) = \frac{\sum_{i=1}^{N} \sum_{j=1}^{M_i} K_{ij}\, \rho_i\!\left( [ROT]_{ij}\, (x, y, z)^T + (x_0, y_0, z_0)_{ij}^T \right)}{\sum_{i=1}^{N} \sum_{j=1}^{M_i} K_{ij}} \qquad (2)

where:
- N is the overall number of conflicting mono-goal optimizations (two if a classical TRIZ contradiction model is adopted);
- Mi is the number of "copies" of the i-th solution (step of a Mono-Bi-Poly trend);
- Kij is the weight assigned to the j-th copy of the i-th distribution of density;
- [ROT]ij is the rotation applied to the j-th copy of the i-th distribution of density;
- (x0, y0, z0)ij is the translation applied to the j-th copy of the i-th distribution of density.
The authors are now collecting typical values of Mi, Kij, [ROT]ij and (x0, y0, z0)ij from the database of examples collected in [1]. The weights Kij have been added to formula (2) to extend its adaptability to different situations, but in most cases binary values can be applied: 0 when the i-th solution does not contribute to the definition of the density distribution overcoming the geometrical contradiction, 1 in the other cases. Nevertheless, when approaching hybridization strategies (e.g. the first standard combination proposed in section 3.4), fuzzy values can be assigned to the weights Kij, according to the potential impact of each loading condition estimated, for instance, from the maximum stress, the maximum deformation or the strain energy.

3.4 Exemplary standard combinations
A typical combination of the density distributions obtained by different mono-goal optimizations is the hybridization obtained by assigning the following values in (2):
- Mi = 1;
- Kij = a value between 0 and 1 as a function of the relevance of each loading condition;
- [ROT]ij is the identity matrix (no rotations);
- (x0, y0, z0)ij is the null vector (no translations).
It is worth noticing that the results obtained by this strategy do not necessarily coincide with the results of a multi-goal optimization performed by commercial optimization tools, as shown in section 4.2. A second typical combination is the abovementioned mirrored geometry (figure 4, below), obtained through:
- M1 = any value (it does not impact the result due to the value assigned to the weights K1j);
- M2 = 2 (two copies of the same density distribution);
- K1j = 0 (only one optimized density distribution is used for generating the final geometry);
- K2j = 1;
- [ROT]21 is the identity matrix (no rotations);
- [ROT]22 is a 180° rotation;
- (x0, y0, z0)21 is the null vector (no translations);
- (x0, y0, z0)22 is the minimal translation suitable to eliminate the overlap between the high-density regions of the design space.
In the following section a typical combination for axial-symmetric density distributions is shown, according to the following values:
- Mi = 1;
- Kij = 1;
- [ROT]11 is the identity matrix (no rotations);
- [ROT]21 is a rotation around the axis of the system, the angle being calculated as half the periodicity of the geometrical feature;
- (x0, y0, z0)i1 is the null vector (no translations).
A sketch of how such combinations can be implemented on voxel arrays is given below.
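The following fragment illustrates formula (2) restricted to axis-aligned operations on a voxel grid, where a 180° rotation becomes an array flip and a translation an integer shift; it reproduces the mirrored bi-feature combination under these stated simplifications, not as a general implementation.

```python
import numpy as np

def shift(rho, offset):
    """Translate a density field by an integer voxel offset (zero padding)."""
    out = np.zeros_like(rho)
    src = tuple(slice(max(-o, 0), rho.shape[i] - max(o, 0))
                for i, o in enumerate(offset))
    dst = tuple(slice(max(o, 0), rho.shape[i] - max(-o, 0))
                for i, o in enumerate(offset))
    out[dst] = rho[src]
    return out

def combine(copies):
    """Normalized weighted sum of formula (2); each entry is a pair
    (K_ij, density copy already rotated/translated)."""
    total = sum(k for k, _ in copies)
    return sum(k * r for k, r in copies) / total

rho2 = np.zeros((10, 10, 10)); rho2[2:5, 2:5, :] = 1.0   # toy feature
mirrored = np.flip(rho2, axis=(0, 1))   # 180-degree rotation about the z axis
# Two copies of the same distribution, the second rotated and translated
# just enough to avoid overlapping the high-density region:
rho = combine([(1.0, rho2), (1.0, shift(mirrored, (3, 0, 0)))])
print(rho.shape, rho.max())
```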


4 EXAMPLES AND DISCUSSIONS
With the aim of explaining the approach described so far, some examples are presented here. The first case study concerns the redesign of a motor-scooter wheel which should be manufactured using plastic material instead of aluminum alloy; the second is related to the design of a linear guidance system that experiences two different loading conditions. The optimization tasks have been carried out using the commercial software OptiStruct, embedded in the HyperWorks rev. 7 suite developed by Altair Inc.

4.1 Motor-scooter wheel redesign
This test case has been inspired by a real case study developed during a collaboration of the authors with the Italian motorbike producer Piaggio [21]. The goal of the project was the design of a plastic wheel for light motor-scooters, mainly aimed at cost reduction, of course without compromising safety and mechanical performance. The traditional approach used in Piaggio to assess the conformity of a wheel to the requirements consists of three different experimental tests:
1. deformation energy under high radial loads/displacements (simulating an impact against an obstacle);
2. fatigue strength under rotary bending loads (simulating operating conditions such as curves);
3. fatigue strength under alternate torsional loads (simulating accelerations and decelerations).
These tests have been adopted as reference criteria for topology design optimization, under the constraint of manufacturability through injection molding and with the goals of minimizing mass and maximizing the stiffness distribution on the wheel rim. The optimization problem has been set up as follows:

• Objective function: maximize wheel stiffness;
• Constraints: several upper limits for the mass of the wheel; manufacturing constraints for the injection molding process;
• Loading conditions: radial and tangential loads applied on the rim of the wheel.
The rim profile and the hub have been defined as non-design areas, since they are functional surfaces (figure 5).

Figure 5: Design domain for Topology Optimization. The rim and the surface of the hub have been defined as functional surfaces, i.e. non-design areas (light gray); dark gray represents the design space.

The optimization task led to several topologies having different numbers of spokes (figure 6). Their compliance with the design criteria described above has been checked through virtual simulations. The results revealed that the three- and six-spoke wheels widely satisfy the deformation energy test only when the radial load is applied on the areas of the rim directly supported by a spoke; when the radial load is applied between spokes, the test fails. The other topologies never satisfied the deformation energy criterion, while all fulfilled the fatigue strength requirements (2, 3). A deeper investigation of the radial stiffness distribution along the wheel rim has been performed for each optimized geometry (figure 7). As supported also by intuition, when the number of spokes rises, the stiffness of the rim on the spokes decreases, while it increases between the spokes.

Figure 6: Output topologies obtained by topological optimization: boundary conditions (loads and constraints), optimization constraint (overall mass), optimization objective and density threshold are the same for all four instances; only the number of pattern repetitions differs.

Figure 7: Normalized radial stiffness distribution evaluated on the wheel rim for the different topologies (3, 6, 9 and 12 spokes): radial force applied on the spokes (dark) and in the middle between two adjacent spokes (light).


According to these results a contradiction appears: a smaller number of spokes provides the highest radial stiffness in the areas of the rim directly supported by the spokes, but the deformation between two spokes is maximum; a bigger number of spokes yields a more uniform stiffness distribution along the rim, but with lower overall values. This technical contradiction can be modeled as shown in figure 8.

Figure 8: Model of the technical contradiction: EP1 is the stiffness under radial load on the spokes (improved by a small number of spokes), EP2 is the stiffness under radial load among the spokes (improved by a big number of spokes); the control parameter CP1 is the density distribution.

Taking into account these considerations, the "three spokes" and "nine spokes" geometries have been selected to produce an improved, "manipulated" topology through formula (2). The goal is the definition of a new topology, not identified by standard optimization systems, with higher mechanical performance. As described in section 3.4, axial-symmetric density distributions can be combined by a relative rotation with respect to a common reference. Taking into account the functional surfaces, the hub axis is assumed as the reference for applying the transformation. The rotation is defined as half of the angular periodicity of the nine-spoke wheel, thus 20°: such a value provides the minimum overlap between the original distributions of density (see the sketch below). Figure 9 shows the profiles of the original distributions of density (3- and 9-spoke wheels) and the result of the manipulation; as a result of the density combination, a "Y"-shaped spoke is suggested. It is worth noticing that such a topology is definitely different from any result provided by the optimization systems. A preliminary concept of a Y-shaped-spoke wheel has been developed in order to compare its radial stiffness with the mechanical performance of the original geometries. Figure 10 summarizes the results of such a comparison.
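Using the hypothetical combine_densities helper sketched in section 3.4 above (again, our own illustration rather than the authors' code), this axial-symmetric combination reduces to a single call; rho_3 and rho_9 are assumed to be the optimized fields sampled on grids whose centre coincides with the hub axis:

```python
# Combine the 3-spoke and 9-spoke density fields: identity transform for the
# first, a 20-degree rotation about the hub axis (grid centre) for the second.
rho_Y = combine_densities([rho_3, rho_9],
                          [[(1.0,  0.0, (0, 0))],     # [ROT]11 = identity
                           [(1.0, 20.0, (0, 0))]])    # [ROT]21 = 20° rotation
```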

Figure 9: Above: conflicting distributions of density according to the contradiction modeled in figure 8 (same overall mass). Below: density distribution automatically obtained by the application of formula (2) to the conflicting pair (left) and exemplary interpreted geometry (right). The darkness of the images is directly proportional to the optimized density.

Figure 10: Comparison of the normalized radial stiffness distribution (on spokes and among spokes) of the "three spokes", "Y" and "nine spokes" wheels. "Y" has an improved stiffness among spokes with respect to "three spokes"; its behavior is similar to the "nine spokes" wheel, but with an improved stiffness on the spokes. "Y" is 20% lighter than the other configurations.

The analysis revealed that the suggested topology is 20% lighter than both the "three spokes" and "nine spokes" configurations. The "Y" version also improves the radial stiffness of the rim among the spokes. Even if the stiffness evaluated on the spokes worsens with respect to the "three spokes" wheel, the "Y" configuration satisfies the deformation energy design criterion.

4.2 Linear guidance system design
The second case study concerns the design of a linear guidance system. New applications of this kind of component (e.g. medical machines) require units with maximum stiffness. Typically these mechanical parts experience different load cases and boundary conditions during their service life. The linear guidance system considered here is typically subjected to two load cases, as shown in figure 11: an orthogonal load and a lateral load acting on the surface of the guidance.

Figure 11: Exemplary cross-section of the linear guidance and applied loads [22].

As a consequence, two different mono-objective optimizations must be performed in order to obtain the customized modeling features of the system, each corresponding to a specific loading condition. The objective of both optimization tasks is to maximize the stiffness of the structure, evaluated as the reciprocal of the total deformation energy. A mass of 16.5 kg/m has been considered as the optimization constraint. Figure 12 shows the topologies emerging from these load cases.

Figure 12: Topologically optimized density distributions of the linear guidance corresponding to load cases 1 (A) and 2 (B) of figure 11.


According to these results a geometrical contradiction arises: in fact, the best density distribution for load case 1 shows several topological differences from the optimized geometry for load case 2. The geometrical contradiction can be modeled as shown in figure 13, where topology "A" corresponds to the geometry shown in figure 12-A, while topology "B" corresponds to figure 12-B. The Evaluation Parameters are the total deformation energies in load cases 1 and 2.

Figure 13: Geometrical contradiction: "A" is the topology coming from optimization under load case 1, "B" is the topology resulting from optimization under load case 2.

Due to the lack of rotational symmetry and the constraints acting on the functional surfaces, in this case the hybridization proposed by formula (1) is the preferred approach to generate a new topology partially overcoming the contradiction represented in figure 13. Weights have been assigned in order to take into account that load case 2 involves a deformation energy greater than load case 1; thus, density distribution "B" has been weighted correspondingly more than density distribution "A". This approach led to the result shown in figure 14. In order to satisfy the mass constraint of 16.5 kg/m, a density threshold of 0.85 has been applied.

Figure 14: Above (A): hybrid solution obtained through the application of formula (1) to topologies "A" and "B" of figure 12. Below (B): resulting topology after applying a threshold equal to 0.85 to the density distribution, in order to obtain a total mass of 16.5 kg/m.

With the aim of assessing the benefits provided by the hybrid solution, a benchmark has been performed against a solution obtained through traditional multi-objective design optimization, of course keeping the same mass constraint. The following objective function has been considered for this task:

C = w1 C1 + w2 C2   (3)

where:
- C is the deformation energy of the multi-goal optimization, to be minimized;
- Ci is the deformation energy related to the i-th load case;
- wi is the weight assigned to the i-th load case.
The weights assumed in (3) have also been applied to perform the hybridization task, according to (1). The optimized topology is shown in figure 15.

Figure 15: Topology resulting from multi-objective design optimization according to the objective function (3), under a mass constraint equal to that of the hybrid topology (figure 12B).
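A minimal sketch of the hybridization-plus-threshold step described above, under the assumption (ours, not stated explicitly in the paper) that the fuzzy weights are taken proportional to the strain energies of the two mono-objective runs; rho_A, rho_B, C1 and C2 are assumed variable names:

```python
import numpy as np

# C1, C2: total strain energies of load cases 1 and 2 (load case 2 dominates);
# rho_A, rho_B: optimized density fields of figure 12.
w_B = C2 / (C1 + C2)                     # fuzzy weight, larger for "B"
w_A = 1.0 - w_B
rho_hybrid = w_A * rho_A + w_B * rho_B   # formula (1) with K_i = w_i

# Density threshold tuned so that the retained material matches 16.5 kg/m.
solid = rho_hybrid >= 0.85
```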


According to these results, an evaluation of the deformation energy of both the hybrid and the multi-objective solutions has been carried out for each load case. The analysis yields the results shown in table 1.

Total Strain Energy (mJ)   Load case 1   Load case 2
Multi-objective            6.63E-01      6.04E-01
Hybrid                     4.65E-01      4.96E-01
Δ%                         -30           -18

Table 1: Comparison of deformation energy between the multi-objective solution and the hybrid solution. The latter is more effective and thus partially overcomes the geometrical contradiction represented in figure 13.

It is worth noticing that the suggested hybridization performs surprisingly much better than the solution obtained through the traditional approach based on multi-objective optimization. In fact, the hybrid solution is somehow similar to the topology presented in figure 12B and quite different from the multi-objective optimized geometry shown in figure 15. Design optimization always leads to the best compromise density distribution, since it is driven by an objective function constituted by a combination of Evaluation Parameters related to different conflicting conditions; in this case, the optimization algorithm has presumably reached a local minimum. By contrast, hybridization considers solutions coming from mono-objective optimizations, each having the task of improving a single Evaluation Parameter. This makes it possible to preserve and extract the useful features of each solution and trim the redundant ones.

5 DISCUSSION AND CONCLUSIONS
The paper presents the preliminary results of a research effort aimed at the definition of a new approach to Computer-Aided Conceptual Design: topological optimization systems are adopted as a means to identify geometrical contradictions, i.e. conflicting density distributions responding to different boundary conditions. Those topologically optimized density distributions can be assumed as customized modeling features to generate a geometry capable of overcoming the geometrical contradiction.

In order to combine these customized modeling features, a general expression able to reproduce, at a geometrical level, several TRIZ inventive principles has been proposed. Among the different strategies to manipulate conflicting density distributions identified so far, two exemplary combinations have been detailed: hybridization, and the integration of axial-symmetric topologies obtained through a rotation around their axis. According to the results described so far, the proposed approach leads to very different topologies with respect to traditional design optimization; the resulting geometry often has better performance than an equivalent multi-objective optimization. At present the methodology is under validation through several other case studies, in order to determine further combination criteria according to the specific resources, boundary conditions, etc.

Another important issue is the definition of criteria for interpreting the manipulated density distribution. As shown in the previous section, the raw solution coming from the automatic manipulation of the conflicting density distributions presents different levels of density, thus sometimes generating fuzzy borders to be defined by means of a threshold. Indeed, such information can be kept when designing composite parts, as well as directions for the introduction of stiffening elements such as ribs. A further development of the present research is the extension of the procedure to topological optimizations not limited to the structural characteristics of the system. From this point of view, the expected trend in the field is the introduction to the market of multidisciplinary optimization systems with increased capabilities.

REFERENCES
[1] Cascini G., Rissone P., Rotini F., 2007, From design optimization systems to geometrical contradictions, Proceedings of the 7th ETRIA TRIZ Future Conference, Frankfurt, Germany, 6-8 November 2007.
[2] Cugini U., Cascini G., Muzzupappa M., Nigrelli V., 2008, Integrated Computer-Aided Innovation: the PROSIT approach, submitted for publication to the Special Issue on Computer-Aided Innovation of the journal Computers in Industry.
[3] Altshuller G.S., 1984, Creativity as an Exact Science: The Theory of the Solution of Inventive Problems, Gordon and Breach Science Publishers, ISBN 0-677-21230-5 (original publication in Russian, 1979).
[4] Cugini U., Cascini G., Ugolotti M., 2007, Enhancing interoperability in the design process – The PROSIT approach, Proceedings of the 2nd IFIP Working Conference on Computer Aided Innovation, Brighton (MI), USA, 8-9 October 2007, published in "Trends in Computer-Aided Innovation", Springer, ISBN 978-0-387-75455-0, pp. 189-200.
[5] Leon-Rovira N., Cueva J.M., Silva D., Gutierrez J., 2007, Automatic shape and topology variations in 3D CAD environments for genetic optimization, International Journal of Computer Applications in Technology, Vol. 30, No. 1/2, pp. 59-68.
[6] Burak Kara L., Shimada K., Marmalefsky S.D., 2007, An evaluation of user experience with a sketch-based 3D modeling system, Computers & Graphics, Volume 31, Issue 4, August 2007, Pages 580-597.
[7] Qin S.F., Harrison R., West A.A., Jordanov I.N., Wright D.K., 2003, A framework of web-based conceptual design, Computers in Industry, Volume 50, Issue 2, Pages 153-164.
[8] Tovey M., Owen J., 2000, Sketching and direct CAD modelling in automotive design, Design Studies, Volume 21, Issue 6, Pages 569-588.
[9] Saitou K., Izui K., Nishiwaki S., Papalambros P., 2005, A survey of structural optimization in mechanical product development, Journal of Computing and Information Science in Engineering, Volume 5, Issue 3, Pages 214-226.
[10] Kicinger R., Arciszewski T., De Jong K., 2005, Evolutionary computation and structural design: a survey of the state-of-the-art, Computers and Structures, Volume 83, Pages 1943-1978.
[11] ALTAIR: www.altair.com

[12] FE-DESIGN: www.fe-design.de
[13] MSC SOFTWARE: www.mscsoftware.com
[14] ANSYS Inc.: www.ansys.com
[15] Hutabarat W., Parks G.T., Jarret J.P., Dawes W.N., Clarkson P.J., 2008, Aerodynamic Topology Optimisation Using an Implicit Representation and a Multiobjective Genetic Algorithm, in Artificial Evolution, Lecture Notes in Computer Science, Volume 4926, Pages 148-159, Springer Berlin/Heidelberg.
[16] Bruns T.E., 2007, Topology optimization of convection-dominated, steady-state heat transfer problems, International Journal of Heat and Mass Transfer, Volume 50, Issues 15-16, Pages 2859-2873.
[17] Bruns T.E., Tortorelli D.A., 2001, Topology optimization of non-linear elastic structures and compliant mechanisms, Computer Methods in Applied Mechanics and Engineering, Volume 190, Issues 26-27, Pages 3443-3459.
[18] Spath D., Neithardt W., Bangert C., 2002, Optimized design with topology and shape optimization, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, Volume 216, Issue 8, Pages 1187-1191.
[19] Khomenko N., De Guio R., Lelait L., Kaikov I., 2007, A Framework for OTSM-TRIZ Based Computer Support to be used in Complex Problem Management, International Journal of Computer Applications in Technology, Volume 30, Issue 1/2.
[20] Eltzer T., De Guio R., 2007, Constraint based modelling as a mean to link dialectical thinking and corporate data. Application to the Design of Experiments, Proceedings of the 2nd IFIP Working Conference on Computer Aided Innovation, Brighton (MI), USA, 8-9 October 2007, published in "Trends in Computer-Aided Innovation", Springer, ISBN 978-0-387-75455-0, pp. 145-156.
[21] Cascini G., Rissone P., 2004, Plastics design: integrating TRIZ creativity and semantic knowledge portals, Journal of Engineering Design, Special Issue "Knowledge Engineering & Management Issues in Engineering Design Practices", Volume 15, Issue 4, Pages 405-424.
[22] Sauter J., Meske R., 2001, Industrial applications of topology and shape optimization with TOSCA and ABAQUS, Proceedings of the ABAQUS World Users' Conference, 29th May – 1st June, Maastricht (The Netherlands).

Interpretation of a General Model for Inventive Problems, the Generalized System of Contradictions S. Dubois, I. Rasovska, R. De Guio LGECO, INSA de Strasbourg, 24 bld. de la Victoire, 67000 Strasbourg, France [email protected]

Abstract
The design of technical systems implies either optimisation or inventive problem resolution, and resolution tools and methods exist for each kind of problem. Each family of resolution tools uses specific models for problem statement. A generic model that fits both kinds of problems has been defined: the Generalized System of Contradictions model. Within this model, a problem can be stated when no solution can be found by optimisation techniques. In this paper the Generalized System of Contradictions is linked to the Design of Experiments model. Moreover, a step towards problem resolution is proposed through the interpretation of the generic model. This interpretation is based on the definition of exhaustive concepts, i.e. concepts that make it possible to look for solutions outside of the initially defined domain. The process of stating the problem from the result of a DoE, and of interpreting the built model, is detailed and then illustrated through an example.
Keywords: Design of Experiments, problem models, contradiction.

1 INTRODUCTION
Designing new technical systems means making technical systems evolve [1]. Evolution can proceed in two main ways [2, 3]: (1) increasing the efficiency of a system by optimising its parameters, or (2) re-designing the system when, for example, the use of a new resource or the application of a new working principle is required. A hypothesis is that these two types of evolution can be matched with two kinds of problem resolution: cases where optimisation techniques enable resolution, and cases where a change in the problem model is required. In this article the first case will be referred to as optimisation problems, the second as inventive problems. At the beginning of the design process it is rarely known whether optimisation will enable the satisfaction of the requirements or whether inventive design will be required; thus it often appears necessary to shift from one strategy to another. Different techniques and methods for problem resolution exist, dedicated either to optimisation problems or to inventive ones. However, shifting from one kind of approach to the other is not obvious, as no operational technique covering both approaches has been proposed. This emphasises the need for a unified model fitting both approaches. In this paper two methods for problem resolution will be presented: Design of Experiments (DoE) for optimisation problems and TRIZ-based approaches for inventive problems. DoE makes it possible to state the problem and to rapidly check whether a solution can be found or not; it also enables the use of resolution algorithms such as those proposed for Constraint Satisfaction Problems (CSP). In [4-6], the complementary aspects of DoE and TRIZ were studied through the definition of concept solutions with TRIZ methods, made more robust by the use of DoE. In [7, 8] the comparison of the CSP approaches in terms of problem statement and problem resolution techniques was initiated. In [9] a general model for inventive problem representation, based on TRIZ


approaches, has been defined so that an inventive problem exists only when no optimisation solution exists. The exploitation of such a model has to be defined in order to move from problem statement to problem resolution. This article presents a first step towards resolution through the interpretation of the problem definition, in order to build a meaningful representation of the problem, i.e. a definition that makes it easier to search for a solution. The article focuses mainly on the problem statement, starting from a DoE model and moving to a model enabling the use of inventive problem techniques, and on the way the constructed model can help resolution. A first part describes the DoE and TRIZ-based models. A second part proposes the comparison and bridging of the two models. Then an example depicts the process of shifting from a DoE representation to a model enabling the application of resolution techniques for inventive problems.

2 PROBLEM STATEMENT

2.1 Design of Experiments model
Design of Experiments [10] is a strategy to gather empirical knowledge, based on the analysis of experimental data rather than on theoretical models. In an experiment, one or more process variables (or factors) are deliberately changed in order to observe the effects these changes have on one or more response variables (or outputs). One can recognize two kinds of tools and techniques in DoE: those dedicated to the establishment of a model and those dedicated to optimisation. The first family proposes structured methods used to determine the relationships between the different factors affecting a process and the outputs of this process. The factors are controlled parameters, usually noted X, whereas the outputs are measured ones, usually noted Y.

The results of the experiments are generally listed in a chart and enable the building of a mathematical model. One of the objectives of DoE is to obtain the most robust model with the minimum number of experiments, which can be reached by the use of Taguchi's methods [11]. The second family of DoE techniques concerns the exploitation of the obtained mathematical model. The major kind of exploitation is the determination of the requested values of the controlled parameters: given a required value of the measured variables, the mathematical model is used to find the corresponding values of the controlled parameters. In this article this second aspect of DoE, for which many algorithms exist, will not be considered, as it concerns only optimisation techniques. In the rest of the article only the results of DoE will be considered, and DoE will refer to any formalisation of a set of relationships between controlled parameters and measured ones. Traditionally, the operational steps for robust design are [12]: (1) statement of the problem and objective; (2) list of the responses and control parameters; (3) planning of the experiment; (4) running of the experiment and prediction of the improved parameter settings; (5) running of the confirmation experiment; (6) adoption of the improved design if the objective is met, or return to step (2) otherwise.

2.2 TRIZ based models
Classical TRIZ models
TRIZ [13] is a Russian acronym for the Theory of Inventive Problem Solving, a theory built on the elicitation of the modes of evolution of technical systems. Its aim is to give the axioms needed to develop methods and techniques for problem resolution in the field of technical system design, in particular for problems that cannot be solved by optimisation techniques. TRIZ was initiated and developed under the control of Genrich Altshuller, and classical TRIZ refers to the development of the theory approved by Altshuller. Within this theory, contradiction is the main problem-stating model: "a problem exists" is equivalent to "a contradiction can be elicited". TRIZ defines three kinds of contradiction:
• The administrative contradiction identifies some dissatisfaction with a situation, without any means of acting on the situation: "I know what I want, but I don't know how to reach it."
• The technical contradiction is the expression of two opposite requirements: "The satisfaction of the first requirement disables the satisfaction of the second requirement and vice versa."
• The physical contradiction is the expression of two contradictory yet required states of the same parameter: "A parameter is required to be both in state one and in its opposite state."

OTSM-TRIZ system of contradictions
The idea of contradiction has been reinforced within OTSM-TRIZ [14], for a generalized application including non-technical problems. The administrative contradiction has not been kept in OTSM-TRIZ, as this definition only refers to the objective and no corresponding solving tool exists. The two kinds of contradictions proposed in OTSM-TRIZ are the Contradiction of the System and the Contradiction of the Parameter, which respectively generalize the TRIZ technical and physical contradictions.

Moreover, a System of Contradictions is proposed in the frame of OTSM-TRIZ to build coherence between the levels of the Contradiction of the System and the Contradiction of the Parameter, as illustrated in bold in figure 1. This system of contradictions is based on the existence of a parameter contradiction and of two contradictions of the system that justify the need for the two different states of the parameter. The two system contradictions are complementary, corresponding to the increase of the first parameter implying the decrease of the second, and to the increase of the second parameter implying the decrease of the first. The two parameters of the contradictions of the system are defined in [8] as taking part in the description of the objective; they are called Evaluation Parameters, whereas the parameter of the parameter contradiction is a means to make the situation change, defined as an Action Parameter.

Generalized System of Contradictions model
In [8, 9] a postulate has been proposed to build a generic model for inventive problem statement: this model has to satisfy the following equivalence: "a contradiction exists" is equivalent to "no solution can be found by optimisation of a known model". The models proposed in classical TRIZ and in OTSM-TRIZ do not fit this requirement. Thus, in order to obtain this equivalence, we propose a generalization of the OTSM-TRIZ system of contradictions. As a result we get the Generalized System of Contradictions (GSC), as illustrated in italic in figure 1. The generalisation is based on the use of concepts, which are defined as logical assertions about the values of the parameters. As a generalization of the physical contradiction, a set of action parameters and concepts involving exclusively those action parameters respectively replace the action parameter and its values. The generalisation of the technical contradiction is then built on two concepts involving two sets of evaluation parameters. Thus the Generalized System of Contradictions is the generalisation of the OTSM-TRIZ system of contradictions where two concepts based on a set of action parameters satisfy two sets of evaluation parameters; the desired result is then the simultaneous satisfaction of the two sets of evaluation parameters. A Generalized System of Contradictions will be formulated in the example (part 4).

3 COMPARISON AND BRIDGING OF THE APPROACHES

3.1 Comparison of the models
Even if the DoE model is not explicitly defined for stating problems, it is quite compatible with the Generalized System of Contradictions model. The analogy between the two models is quite evident, as defined in table 1. Both models define two categories of parameters: those used to evaluate the result and those used to act on the system in order to reach the desired result.

                    Generalized System of Contradictions   Design of Experiments
System model        Action Parameters                      Controlled Parameters
Result evaluation   Evaluation Parameters                  Measured Parameters

Table 1: Comparison between the GSC and the DoE models.



Figure 1: OTSM-TRIZ system of contradictions (bold) and Generalized System of Contradictions (italic).

The defined analogy shows the potential, in terms of model coherence, of defining a fit between optimisation models and resolution tools on one side and TRIZ-based inventive ones on the other.

3.2 Bridging DoE and Generalized System of Contradictions
Based on the previously explained analogy, the Generalized System of Contradictions can be represented in a DoE model quite easily. Independently of the values of the action parameters, a Generalized System of Contradictions can be recognized from the arrangement of a set of evaluation parameters. Let us define a DoE characterized by a set of controlled parameters X=(x1,…,xl), a set of evaluation parameters Y=(y1,…,yr) and a set of experiments E=(e1,…,e9), as presented in table 2. An experiment ei is characterized by a set of values (vi1,…,vil) attributed to the set of controlled parameters and by a set of values (zi1,…,zir) taken by the evaluation parameters. In the rest of the article the values zij will be considered logical values, equal to 1 if the evaluation parameter yj is satisfied by the experiment ei, and equal to 0 otherwise.

      x1  …  xl    y1  …  yi  …  yr
e1    v11 … v1l    z11 … z1i … z1r
e2    v21 … v2l    z21 … z2i … z2r
…
e8    v81 … v8l    z81 … z8i … z8r
e9    v91 … v9l    z91 … z9i … z9r

Table 2: A Design of Experiments table.

If no solution exists in such a table, i.e. if no experiment satisfies all the evaluation parameters, a Generalized System of Contradictions can be formulated [9]. Identifying a Generalized System of Contradictions in such a table means looking for:
• three sets of evaluation parameters Y0, Y1 and Y2 such that Y0∩Y1=∅, Y1∩Y2=∅, Y0∩Y2=∅, Y0∪Y1∪Y2=Y, Y1≠∅ and Y2≠∅;
• three sets of experiments E0, E1 and E2 such that E0∩E1=∅, E1∩E2=∅, E0∩E2=∅, E0∪E1∪E2=E, E1≠∅ and E2≠∅.
Moreover:
• E1 is a set of experiments for which all the evaluation parameters of Y1 are satisfied;
• E2 is a set of experiments for which all the evaluation parameters of Y2 are satisfied.
Table 3, which is obtained by permuting rows and columns of table 2 in order to group the identified Ei and Yi, represents the properties of the Generalized System of Contradictions through the values of the evaluation parameters. In table 3, the values of the evaluation parameters are normalized: 1 if the parameter is satisfied, according to the objective of the resolution, and 0 if the parameter does not fit the requirement.

      X        Y1                     Y2                     Y0
E1    ∀ei∈E1   E1×Y1: zij=1           ei×Y2: ∃j / zij=0      E1×Y0
E2    ∀ei∈E2   ei×Y1: ∃j / zij=0      E2×Y2: zij=1           E2×Y0
E0    E0       E0×Y1                  E0×Y2                  E0×Y0

Table 3: Representation of a GSC in a DoE.

The matrix of table 3 has specific features (a brute-force search sketch based on them is given below):
• E1×Y1: ∀(i,j) / (ei∈E1) AND (yj∈Y1), zij=1;
• E1×Y2: ∀i / (ei∈E1), ∃j / (yj∈Y2) AND (zij=0);
• E2×Y2: ∀(i,j) / (ei∈E2) AND (yj∈Y2), zij=1;
• E2×Y1: ∀i / (ei∈E2), ∃j / (yj∈Y1) AND (zij=0).
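The following Python sketch (our illustration; the paper points to machine learning and CSP algorithms [15] for the real task) exhaustively scans partitions of the evaluation parameters and returns a GSC minimizing |E0|:

```python
from itertools import combinations

def find_gsc(Z):
    """Search a 0/1 satisfaction matrix Z (rows = experiments, columns =
    evaluation parameters) for a Generalized System of Contradictions,
    minimizing the number of discarded experiments |E0|."""
    n_exp, n_par = len(Z), len(Z[0])
    cover = lambda Y: {i for i in range(n_exp) if all(Z[i][j] for j in Y)}
    best, best_e0 = None, n_exp + 1
    for k1 in range(1, n_par):
        for Y1 in combinations(range(n_par), k1):
            E1 = cover(Y1)
            rest = [j for j in range(n_par) if j not in Y1]
            for k2 in range(1, len(rest) + 1):
                for Y2 in combinations(rest, k2):
                    E2 = cover(Y2) - E1   # keep E1 and E2 disjoint
                    if not E1 or not E2:
                        continue
                    e0 = n_exp - len(E1 | E2)
                    if e0 < best_e0:
                        best, best_e0 = (set(Y1), set(Y2), E1, E2), e0
    return best  # (Y1, Y2, E1, E2) with the smallest E0 found
```

On small tables such as the circuit-breaker DoE of section 4 (9 experiments, 6 evaluation parameters) the exhaustive scan is instantaneous; for larger tables the smarter extraction techniques discussed in the text would be needed.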

The analysis and automatic extraction of the three sets from the DoE result still has to be studied and formalized, but several algorithms exist to facilitate this extraction [15]. A manual approach to obtain the Yi and the Ei is proposed in the example of the next section.

3.3 On the use of the Generalized System of Contradictions
The main interest of formulating problems through the Generalized System of Contradictions pattern is to propose a synthetic description of the root cause of the problem. This synthetic description enables the designer to build a new understanding. In [7] the difference between optimisation methods and inventive problem resolution tools has been defined as the ability to change the parameters which model the system. The Generalized System of Contradictions focuses the attention on the conditions that strictly have to be considered, rejecting the other ones. Moreover, formulating the problem in the shape of a system of contradictions could lead to the application of OTSM-TRIZ resolution tools, as they are defined to be generic for contradiction resolution whatever the domain; however, this point has still to be tested. Thus, currently the only recognized point is the interest of this synthetic definition for rebuilding a meaningful representation of the problem out of a rich description of the problematic situation.

4 ILLUSTRATION

4.1 Description of the problem
Let us consider an electrical circuit breaker. When an overload occurs, it creates a force (due to magnets and the electrical field) which operates a piece called the firing pin. The firing pin opens the circuit by pressing the switch located in the circuit breaker. In case of a high overload, the firing pin, which is a plastic stem, breaks without opening the switch. The components are presented in figure 2.

Figure 2: Components of the electrical circuit breaker.

4.2 Problem statement
The problem has been studied and the main system parameters and their domains have been defined as:
x1: firing pin material (plastic – 1, metal – 0);
x2: core internal diameter (high – 1, low – 0);
x3: core external diameter (high – 1, low – 0);
x4: firing pin diameter (high – 1, low – 0);
x5: spring straightness (high – 2, medium – 1, low – 0);
y1: circuit breaker disrepair (satisfied – 1, unsatisfied – 0);
y2: circuit breaker reusability (satisfied – 1, unsatisfied – 0);
y3: spring core mounting (satisfied – 1, unsatisfied – 0);
y4: firing pin bobbin mounting (satisfied – 1, unsatisfied – 0);
y5: normal mode release (satisfied – 1, unsatisfied – 0);
y6: firing pin initial position return (satisfied – 1, unsatisfied – 0).
In this definition of the problem the xi are the action parameters, whereas the yi are the evaluation ones. The system behaviour was modelled by Design of Experiments and is shown in table 4. The objectives established to build the DoE are:
• the satisfaction of at least one evaluation parameter in each experiment;
• each action parameter takes each of its possible values at least once;
• to minimize the number of experiments.
Even if the assumption is not totally consistent, the action parameters have been considered independent within the limits of their defined domains.

      x1  x2  x3  x4  x5    y1  y2  y3  y4  y5  y6
e1     1   1   0   0   1     1   0   1   1   1   1
e2     0   1   1   1   1     0   1   0   0   1   1
e3     1   0   1   0   0     1   0   1   0   0   0
e4     1   1   0   0   0     1   1   1   1   0   0
e5     1   0   1   0   1     1   0   1   0   1   1
e6     0   1   0   1   2     0   1   0   1   1   1
e7     1   0   1   1   0     1   0   1   0   0   0
e8     1   0   0   0   1     1   0   0   1   1   1
e9     0   1   0   0   2     0   1   0   1   1   1

Table 4: DoE for the circuit breaker.

The first evidence is that no solution can be found in the defined DoE, as no experiment enables the satisfaction of all the evaluation parameters. Looking for a Generalized System of Contradictions in such a table could lead to several of them, at least one per evaluation parameter, as soon as each evaluation parameter is satisfied at least once. Assuming that the choice of action parameters is made in such a way that each evaluation parameter is satisfied in at least one experiment, and assuming that no solution is found in the table, each evaluation parameter has at least one experiment in which it is satisfied and one in which it is not; thus a contradiction could be formulated for each of the evaluation parameters. Of course, the Generalized System of Contradictions also enables the formulation of more complex systems of contradictions, implying two combinations of evaluation parameters. Thus a set of Generalized Systems of Contradictions can be formulated for one solutionless DoE. A first question then arises: should all the Generalized Systems of Contradictions be elicited? If not, how should the Generalized System of Contradictions, or the set of contradictions to be considered, be chosen? In this article, the postulate is to choose the Generalized System of Contradictions that minimizes the cardinality of E0, as E0 is composed of the experiments that will not be considered in the contradiction model. The hypothesis is that the more experiments the Generalized System of Contradictions includes, the more representative of the problem it will be. To build the Generalized System of Contradictions in the example, the frequency of simultaneous satisfaction of two evaluation parameters has been studied; it is presented in table 5, which shows that the parameters y5 (normal mode release) and y6 (firing pin initial position return) are simultaneously satisfied in six experiments.

      y2  y3  y4  y5  y6
y1     1   5   2   3   3
y2         1   3   3   3
y3             2   2   2
y4                 4   4
y5                     6

Table 5: Simultaneous satisfaction of pairs of evaluation parameters.
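Table 5 can be reproduced in one line from the y-columns of table 4; the snippet below (our illustration) computes the full co-satisfaction matrix:

```python
import numpy as np

# Columns y1..y6 of table 4, one row per experiment e1..e9.
Z = np.array([[1, 0, 1, 1, 1, 1],
              [0, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 1, 1, 0, 0],
              [1, 0, 1, 0, 1, 1],
              [0, 1, 0, 1, 1, 1],
              [1, 0, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 1],
              [0, 1, 0, 1, 1, 1]])

co = Z.T @ Z      # co[i, j] = number of experiments satisfying both y_{i+1} and y_{j+1}
print(co[4, 5])   # -> 6: y5 and y6 are jointly satisfied in six experiments
print(bool((Z.min(axis=1) == 1).any()))  # -> False: no experiment satisfies all six
```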

Thus, the minimization of E0 leads to a Generalized System of Contradictions where:
• E0 is made of two experiments, E0=(e3; e7);
• E1 groups the experiments where y5 and y6 are simultaneously satisfied, E1=(e1; e2; e5; e6; e8; e9);
• E2 corresponds to experiment e4, where the evaluation parameters y1, y2, y3 and y4 are satisfied.
The DoE is therefore reorganised, as shown in table 6, to graphically represent the Generalized System of Contradictions.

        x1  x2  x3  x4  x5    y5  y6    y1  y2  y3  y4
E1  e1   1   1   0   0   1     1   1     1   0   1   1
    e2   0   1   1   1   1     1   1     0   1   0   0
    e5   1   0   1   0   1     1   1     1   0   1   0
    e6   0   1   0   1   2     1   1     0   1   0   1
    e8   1   0   0   0   1     1   1     1   0   0   1
    e9   0   1   0   0   2     1   1     0   1   0   1
E2  e4   1   1   0   0   0     0   0     1   1   1   1
E0  e3   1   0   1   0   0     0   0     1   0   1   0
    e7   1   0   1   1   0     0   0     1   0   1   0

Table 6: Graphical representation of the Generalized System of Contradictions.

4.3 A meaningful representation of the problem
A first way to interpret the Generalized System of Contradictions model out of the reorganized DoE is simply to enumerate all the states of the action parameters that characterize E1 and E2. We then have two concepts C1 and C2 defining E1 and E2 respectively:
E1: C1=(x1=1.x2=1.x3=0.x4=0.x5=1) OR (x1=0.x2=1.x3=1.x4=1.x5=1) OR (x1=1.x2=0.x3=1.x4=0.x5=1) OR (x1=0.x2=1.x3=0.x4=1.x5=2) OR (x1=1.x2=0.x3=0.x4=0.x5=1) OR (x1=0.x2=1.x3=0.x4=0.x5=2)
E2: C2=(x1=1.x2=1.x3=0.x4=0.x5=0).
These concepts may be too long or too difficult for a human to understand. Thus we look for simpler concepts C'1 (C'2) that recognize every element of E1 (E2) and do not recognize any element of E2∪E0 (E1∪E0).

Exhaustiveness of discrimination
The interest of C1 is to provide a concept built on the action parameters that is discriminative. But another representation could exist that offers a more synthetic expression of the concepts. A discriminative concept of each of the three sets of experiments Ei is a definition that strictly includes the experiments of the considered set, excluding any other experiment (within the set of known experiments). The advantage is the certainty that the definition cannot include a "false" element: if an experiment fits the definition, it certainly belongs to the considered set. The pitfall of such a representation is that the definition is based on a particular point of view of the problem, made of the considered action parameters, and does not allow this point of view to be changed. A solution to the problem will have to satisfy the two sets of evaluation parameters included in the contradiction, i.e. it will have to combine the advantages of both E1 and E2. Thus it is proposed to build a synthetic representation of the two concepts C1 and C2 that allows the sets of experiments to be enlarged to unknown ones, i.e. to experiments to be discovered after redefining the problem model. The rule considered to build such a representation is to be discriminative with respect to the two other groups; for example, the definition of C'1 has to be discriminative with regard to the known elements of E2 and E0. In accordance with this rule, the new definitions of the concepts are:
E1: C'1=(x5≠0)
E2: C'2=(x5=0).(x2=1).(x3=0)
The resulting Generalized System of Contradictions is represented in figure 3 (a mechanical check of these concepts is sketched below).

Figure 3: Generalized System of Contradictions for the electrical circuit breaker problem.
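The discrimination of C'1 and C'2 can be checked mechanically against table 6; the following snippet (our own encoding, with dictionary keys standing for the action parameters) asserts the rule stated above:

```python
# Synthetic concepts as predicates over the action parameters.
c1 = lambda x: x["x5"] != 0                                      # C'1, covers E1
c2 = lambda x: x["x5"] == 0 and x["x2"] == 1 and x["x3"] == 0    # C'2, covers E2

e1 = {"x1": 1, "x2": 1, "x3": 0, "x4": 0, "x5": 1}  # an E1 experiment
e4 = {"x1": 1, "x2": 1, "x3": 0, "x4": 0, "x5": 0}  # the single E2 experiment
e3 = {"x1": 1, "x2": 0, "x3": 1, "x4": 0, "x5": 0}  # an E0 experiment

assert c1(e1) and not c2(e1)      # C'1 accepts E1, C'2 rejects it
assert c2(e4) and not c1(e4)      # C'2 accepts E2, C'1 rejects it
assert not c1(e3) and not c2(e3)  # E0 experiments fit neither concept
```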


For the domain experts the initial representation of the concepts is not meaningful at all, but the new definition is more relevant: "The straightness of the spring must not be low, to satisfy the normal mode release and the firing pin initial position return; and the straightness of the spring has to be low, with a high internal diameter of the core and a low external diameter of the core, to satisfy the circuit breaker disrepair, the circuit breaker reusability, the spring core mounting and the firing pin bobbin mounting." Such a synthetic representation brings the advantage of being more understandable and meaningful, but it carries the risk of not being fully discriminative.

5 CONCLUSIONS AND PERSPECTIVES

5.1 Benefits of the model
The proposed Generalized System of Contradictions model covers both optimisation and inventive problems: as soon as no solution can be found by optimisation algorithms, a set of Generalized Systems of Contradictions can be formulated. The first interest is to enable a link between optimisation tools and inventive problem resolution tools, such as those from TRIZ-based approaches. As the nature of the problem is most of the time unknown at the beginning of the problem resolution process, it is valuable to be able to shift from one family of resolution tools to the other. The contribution of this paper is to increase the usability of the Generalized System of Contradictions model by defining a representation of the concepts used which gives more sense to the problem statement, in accordance with an objective of resolution. This means that the proposed definition does not only restate the DoE knowledge, but makes it possible to consider more knowledge, as the solution has to be found in a domain larger than the one considered initially.

5.2 On-going work
Several steps still remain and several questions still have to be answered. The proposal of algorithms to automatically extract contradictions out of optimisation models has already been tackled in this paper; this can be done with the help of machine learning algorithms or with Constraint Satisfaction Problem methods, as presented in [7]. One of the remaining questions related to this automation is the reproducibility of the approach. In the treated

example, the comparison of the evaluation parameters by pairs was sufficient to support the minimization of E0, but in more complex systems, with a higher number of evaluation parameters, this comparison may not be sufficient. At least it will have to be completed by more complex comparisons, 3 by 3, 4 by 4 and so on. The question of which contradiction to take into account also has to be considered. In this paper the hypothesis was to consider the Generalized System of Contradictions minimizing the size of E0, but the resolution could perhaps be easier with other contradictions; the different hypotheses will be tested. Another question is the evaluation of the meaning of the contradiction; it is currently represented by the dimension of E0, but what if this dimension is high? What if it is higher than those of E1 and E2? As the resolution phases have not been tackled yet, it is difficult to propose an answer, but the relevance of a Generalized System of Contradictions will be evaluated according to the benefits of its resolution. This also means that one important criterion to evaluate the relevance of a Generalized System of Contradictions is the number of evaluation parameters considered. A last point of discussion is the way to simplify the definition of the concepts; this step could also be improved by the use of family classification algorithms [16].

6 REFERENCES
[1] Cavallucci, D. and R.D. Weill, 2001, Integrating Altshuller's development laws for technical systems into the design process. CIRP Annals Manufacturing Technology, 50(1): p. 115-120.
[2] Clarke, D.W., 2000, Strategically Evolving the Future: Directed Evolution and Technological Systems Development. Technological Forecasting and Social Change, 64(2-3): p. 133-153.
[3] Seliger, G., 2001, Product Innovation - Industrial Approach. CIRP Annals Manufacturing Technology, 50(2): p. 425-443.
[4] Hsing, J., 2001, Conflict Resolution Using TRIZ and Design of Experiment (DOE). TRIZ Journal, May 2001.
[5] Xinjun, Z., 2003, Develop new kind of plough by using TRIZ and Robust Design, in Altshuller Institute TRIZCON 2003: Philadelphia, USA.
[6] Yang, K. and H. Zhang, 2000, Enhancing Robust Design with the Aid of TRIZ and Axiomatic Design, in International Conference on Axiomatic Design, ICAD'2000: Cambridge.
[7] Dubois, S., I. Rasovska, and R. De Guio, 2008, Comparison of non solvable problem solving principles issued from CSP and TRIZ, in IFIP International Federation for Information Processing, G. Cascini, Editor, Springer: Boston: p. 83-94.
[8] Eltzer, T. and R. De Guio, 2007, Constraint based modelling as a mean to link dialectical thinking and corporate data. Application to the Design of Experiments, in 2nd IFIP Working Conference on Computer Aided Innovation, Brighton, USA: Springer.
[9] Eltzer, T., S. Dubois, and R. De Guio, 2009, A dialectical based model coherent with inventive problems and optimization problems. Computers in Industry, submitted (Special issue "Advances and Trends in Computer Aided Innovation").
[10] Montgomery, D.C., 2004, Design and Analysis of Experiments: Wiley-Interscience.
[11] Roy, R.K., 2001, Design of Experiments Using The Taguchi Approach: 16 Steps to Product and Process Improvement: Wiley-Interscience.
[12] Buyske, S., 2001, Advanced Design of Experiments, Rutgers University: Piscataway.
[13] Altshuller, G.S., 1988, Creativity as an Exact Science, New York: Gordon and Breach.
[14] Khomenko, N., et al., 2007, A framework for OTSM-TRIZ-based computer support to be used in complex problem management. International Journal of Computer Applications in Technology, 30((1), special issue Trends in computer aided innovation): p. 88-104.
[15] Michalski, R.S., J.G. Carbonell, and T.M. Mitchell, 1984, Machine Learning. An Artificial Intelligence Approach. Berlin Heidelberg New York Tokyo: Springer-Verlag.
[16] Bouzid, L., 1992, Application of conceptual learning techniques to generalized group technology. Applied Artificial Intelligence, 6: p. 443-458.

Long-Run Forecasting of Emerging Technologies with Logistic Models and Growth of Knowledge D. Kucharavy, E. Schenk, R. De Guio LGECO - Design Engineering Laboratory, INSA Strasbourg - Graduate School of Science and Technology, France [email protected]

Abstract
In this paper, applications of the logistic S-curve and of component logistics are considered in the framework of long-term forecasting of emerging technologies. Several questions and issues are discussed in connection with the presented ways of studying the transition from invention to innovation and the further evolution of technologies. First, the features of a simple logistic model are presented and diverse types of competition are discussed. Second, a component logistic model is presented. Third, a hypothesis about the usability of a knowledge-growth description and simulation for reliable long-term forecasting is proposed. Some interim empirical results of applying networks of contradictions are given.
Keywords: Component logistic model, Innovation process, Knowledge acquisition, OTSM-TRIZ

1 INTRODUCTION
An innovation can be seen as the result of a sequence of production of knowledge and information along a chain that consists of a concept definition, experimentation for validation of the concept and, finally, exploitation on the market. The outputs of these activities are knowledge or information. But as the different activities along the chain have different time constants and are not systematically harmonized from a decision point of view, the flow of knowledge between them must be synchronised; in order to do so, the knowledge and information resulting from an activity is often stored. From a quality management point of view, the trends of improvement concern the cost, delay and quality of the process leading to innovation. In order to adapt the time-to-produce and time-to-market to a context of increased competition, it becomes necessary to anticipate the production of knowledge, so that the required knowledge is available at the right time and the production resources of the innovation chain are used for producing relevant knowledge. Our research deals with forecasting problems for the exploration activities (the conceptual design stage requiring inventive activities) along the innovation chain. General research in forecasting provides general methods and technologies; our purpose is to solve the problems that prevent us from using them for forecasting the evolution of the parameters of a technological system even if this system is unknown. In this paper we review and discuss forecasting technologies based on so-called logistic models.

1.1 About innovation
Sometimes long-term technological forecasting is perceived as an attempt to predict the technological future. Yet such an attempt would be condemned to failure. Why? Because technologies are embodied in innovations – i.e., products or processes which have successfully passed the barrier of user adoption. Unfortunately for the firm putting innovative products onto the market, some innovative products and processes do not pass this barrier and hence never become innovations. It is commonly accepted that the future success


of an innovative product can hardly be predicted, as it is often the outcome of complex interactions between a set of elements: the product itself, the users (their habits, competences, etc.), the economic environment of the product (competitors and complementary products) as well as its socio-political environment (laws, social concerns, etc.). These elements are continuously evolving themselves – by direct interaction or independently – so that, in turn, the success of an innovative product appears to be rather unpredictable. Even though they belong to quite a general lexicon, the terms we are using should be explained in detail.

Invention
Invention relates to the transposition of technical and scientific principles into an "artefact" that provides a "new way" to accomplish a (more or less generic) function. In inventions, the barrier of technical feasibility has been passed. Yet uncertainties remain: will the product pass the test of standard use? Will it be possible to produce and bring the product to market in a satisfying way? Will the potential buyers eventually adopt the product? These issues are related to a series of uncertainties.

Innovative product or process
A product or a process is innovative if it is "new" for the group of people who are likely to use it. In the case of an innovative product, the uncertainty concerning industrialization and distribution has been partially overcome. This simple definition seems usable, but it is obviously relative: a product may be innovative for the group of users it is intended for, but not innovative for another group of users. For instance, a company implementing quality management methods could see that as an innovation, but the innovation is only internal to the firm.

Innovation
An innovation is an innovative product or process that has passed the barrier of user adoption. Innovative products and processes often never become an innovation because they are rejected by the "market". In case of market adoption, it

might take quite a long time until the innovation is qualified: the diffusion curve can be slow.

Innovation process
This term refers to the global process going from invention to innovation, i.e. the adoption of an innovative product/process by potential users. The innovation process consists of uncertainty reduction over a given period. Time per se is not really important; what is important in our view is the sequence of activities of the actors taking part in the process. These actors are resource providers for the process: the firm itself, but also research laboratories providing outputs of scientific research, suppliers of needed components and networks providing useful collaborations. This means that an innovation process may take a long time if resources are not provided, or are provided in an unorganized way. This leads us to an important point: efficient management of the innovation process (from invention to market) requires forecasting of the resources needed and of the actors involved along the process. Forecasting enables us to identify key resources – i.e., resources that are likely to be unavailable and to hinder the process – and to plan and organize R&D so that key resources are available "on time". Long-term forecasting will permit us to anticipate and organize the availability of the resources needed for future innovation processes.

1.2 Related work
Our research on long-term technological forecasting is inspired by the description of "the lifeline" of technological systems by Altshuller G.S. [1], where contradiction models, S-curves and the limitation of resources play important roles. The distinction between short-, medium- and long-term forecasts, based on the three phases of the S-curve, is proposed in [2]. A short-term technological forecast is about one phase of an S-curve, while a medium-term forecast considers two phases. The scope of study for a long-term forecast is usually beyond one technology, since it studies at least three phases of an S-curve and may consider several growth processes and more than one system. Our article presented at the ICED'07 conference [3] depicts some theoretical and practical results from two forecasting projects. The concept of the critical-to-X feature is proposed in order to unify qualitative and quantitative studies for long-term forecasting. How the prediction of technological barriers can be supported by mapping the contradictions, in combination with the assessment of limiting resources, is illustrated in that paper [3] by case studies for energy technologies. In our paper for the TFC'07 conference [4], emphasis is placed on the definition of the growing parameters for a logistic growth model. To deal with the case where no data exist for emerging technologies, naïve and causal methods for long-term forecasting are used. In order to address the problems of long-term technological forecasting we have distinguished two complementary directions: a bypass way, through the substitution of technologies, or a direct way, through the study of the growth of knowledge. Our paper developed for TFC'08 refers to the direction through substitution of technologies [5] and discusses some working hypotheses on using the so-called naïve methods for long-term forecasting by applying the logistic substitution models proposed by Marchetti C. [6]. In the present paper we focus our attention on the 'subsystem' direction, which applies causal methods to foresee new technology diffusion in the long run, using

knowledge growth as a causal factor of technology change. Some working definitions for the components of the knowledge acquisition process are given. Lastly, it is proposed to apply the component logistic models [7, 8] for specifying the exploration, experimentation and exploitation phases according to knowledge growth. In the following section, a short history of logistic models, the simple logistic S-curve, the concept of competition, and the component logistic model are presented. The section Applying Component Logistic Growth for Technological Forecasting explains the idea of linking the growth of knowledge with the diffusion of technologies in view of long-term forecasts. The concluding remarks propose directions for future research.

2 LOGISTIC MODELS AND COMPETITION
The first reference to the logistic equation as a model of population growth can be found in the works of Pierre-François Verhulst (1838, 1845, 1847). In 1925 and 1926, working independently, Alfred J. Lotka and Vito Volterra generalized the growth equation into a model of competition among different species and coined the predator-prey equations. Early studies of technological substitution described by S-curves were done in 1957 by Z. Griliches and in 1961 by E. Mansfield [13]. The theory of the diffusion of innovations, formalized by Everett Rogers [14] in 1962, postulated that innovations spread in society along an S-curve. A significant achievement was accomplished by J.C. Fisher and R.H. Pry by formulating the model for binary technological substitution as an extension of Mansfield's findings [15]. C. Marchetti proposed the logistic substitution model to describe technology substitution in the dynamics of long-run competition (1976-1979), extensively using the Fisher-Pry transform [6]. In 1994, P.S. Meyer proposed the component bi-logistic growth model [7]. Later on, the component logistic model with multiple logistics generalized the bi-logistic growth model [8]. Logistic substitution and component logistic models provide clear and suggestive outputs for supporting medium- and long-term forecasting of technology change [6-8, 16-19]. Logistic models are also widely and successfully used in microeconomics and econometrics for modelling individual decisions [38].

2.1 Simple logistic model
The natural growth of autonomous systems in competition can be described by the logistic equation and the logistic curve, respectively. Natural growth is defined as the ability of a 'species' (a system) to multiply inside a finite 'niche capacity' (i.e. carrying capacity [7], or physical limit of resources [1]) during a time period. Provided the function parameters can be estimated using a partial set of data (e.g. the change in the efficiency of the internal combustion engine over the last 20 years), it is possible to use the logistic equation in a predictive mode (e.g. how much efficiency will grow and by when). Nevertheless, the availability of a reliable dataset is a principal limiting factor for applying the S-curve model to technological forecasting [9, 11]. In order to describe continuous "trajectories" of growth or decline through time in socio-technical systems, one generally applies the three-parameter logistic growth model (1):

N(t) = \frac{\kappa}{1 + e^{-\alpha (t - \beta)}}    (1)

where N(t) is the number of units in the 'species', or the growing variable under study; e is the base of the natural logarithm (approximately 2.71828); κ is the asymptotic limit of growth;


α is the growth rate, which specifies the "width" or "steepness" of the S-curve (e.g., α=0.19 means roughly 19% growth per time fraction); it is frequently replaced with the characteristic duration (Δt), which quantifies the time required for the trajectory to grow from 0.1κ to 0.9κ. The characteristic duration Δt is used more often than α for the analysis of time-series data, since its units are easier to appreciate. Decline can be described by a logistic with a negative Δt. β specifies the time (tm) when the curve reaches 0.5κ: the midpoint of the growth trajectory (tm implies the symmetry of the simple logistic S-curve). These three parameters κ, α, and β are usually calculated by fitting the data. There are diverse fitting techniques. For instance, the asymptotic limit of growth κ can be estimated by expert judgment, while α and β are optimized to minimize residuals.
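As a rough illustration of this fitting procedure, the sketch below (not from the original paper; the dataset and initial guesses are invented for demonstration) fits the three-parameter logistic of equation (1) with scipy.optimize.curve_fit and reports the fitted midpoint and characteristic duration:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, kappa, alpha, beta):
    """Three-parameter logistic (1): kappa = limit of growth,
    alpha = growth rate, beta = midpoint time t_m."""
    return kappa / (1.0 + np.exp(-alpha * (t - beta)))

# Synthetic observations of a growing variable (invented data).
t = np.arange(10.0)
y = np.array([1.2, 2.5, 5.0, 10.0, 19.0, 30.0, 39.0, 45.0, 48.0, 49.5])

# Least-squares fit of kappa, alpha, beta; p0 is a rough initial guess.
(kappa, alpha, beta), _ = curve_fit(logistic, t, y, p0=[y.max(), 1.0, t.mean()])

# Characteristic duration: time to grow from 0.1*kappa to 0.9*kappa.
delta_t = 2.0 * np.log(9.0) / alpha
print(f"kappa = {kappa:.1f}, t_m = {beta:.2f}, delta_t = {delta_t:.2f}")
```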

Figure 1: Growth of a bacteria colony consuming sugar and minerals in a closed Petri dish, fitted to a logistic curve: limit of growth κ=50 species, characteristic duration Δt=2.2 days, midpoint tm=2.5 days.

In various publications it has been concluded that other, non-symmetric growth models have limited application, due to their complexity and low efficiency for technology forecasting [9-12]. Empirical studies have shown that the S-shaped curve is present in thousands of growth and diffusion processes [3, 6, 16, 18]. Therefore, this model can be applied both to systems where the mechanisms of growth are understood and to systems where the growing principles are hidden.

Naïve vs. Causal Methods
Naïve methods apply past data about the growing variable (Y) to specify trends and extrapolate them into the future (see Figure 2). Causal methods apply causal variables (X) to foresee future changes of the growing variable (Y). A causal variable (X) is one that is necessary or sufficient for the occurrence of an event (Y), where it is assumed that X precedes Y in time.



[Figure 2 schematic: naïve methods extrapolate the series Yt-d, …, Yt-1, Yt directly to Yt+h; causal methods extrapolate the series Xt-d, …, Xt-1, Xt to Xt+h, from which Yt+h is derived.]

Figure 2: Naïve and Causal methods. Adapted from [34].

When the growing mechanism is known, causal methods can be effective for a long-term forecast. However, naïve methods can be efficient even for hidden growing mechanisms, provided datasets are available.

2.2 Systems in competition, or where does logistic growth come from?
In order to really understand logistic models and to apply them in the most relevant way, it is necessary to grasp the rationales of logistic growth. Logistic growth is the outcome of a particular form of interaction within a system. For instance, in population dynamics, species can compete for a common resource (such as groups fighting for territory), or they can be part of a "biological chain" (as in the predator-prey model). These different interaction schemes generate a specific growth pattern for the species under consideration. One of the basic assumptions we apply for long-term forecasts is that all systems evolve under competition according to the law of logistic growth. In techno-economic systems, the growth variable frequently applied is the number of units or the market-share ratio. However, the growth parameter should be defined in accordance with the forecasting task [4]. In order to propose a relevant model of technology forecasting (i.e., a model that can be applied in a reproducible way), the nature of the interactions driving technology competition and diffusion must be clarified [13, 35]. In his paper on a scientific approach to managing competition, T. Modis describes six ways (with reference to a study by Kristina Smitalova and Stefan Sujan, 1991) in which two competitors can influence each other's growth rate [17]: pure competition, predator-prey, symbiosis, parasitic (win-impervious), symbiotic (loss-indifferent), and no-competition. These concepts have also been developed, under different terms, in the economics and management literature [35].

Pure competition – a situation where both species suffer from each other's existence because they exploit the same resources to survive. In economic terms, competition takes place when technologies offer similar functions: technologies that are highly substitutable compete for the same market. However, the literature on imperfect competition shows that pure competition is exceptional: transportation costs, search costs, switching costs or capacity constraints can relax the competition between products [36].

Symbiosis – a situation where both species benefit from their association. For instance, mobile phone sales trigger an increase in operators and new services; the advanced services of mobile phone operators in turn initiate sales of the latest mobile phone models. Situations of complementarity have been extensively discussed in the economics-of-networks literature [37].


Predator-prey competition – a relationship in which two 'species' interact as a 'predator' and its 'prey'. These dynamics go on in cycles of growth and decline. In economic terms, the predator-prey situation is characterized by both complementarity and competition, as in the "hawk-dove" model. Examples of these situations are found in trade economics [39].

Parasitic (win-impervious) – a relationship between two species in which one obtains some benefit while the other is unaffected. Examples include the growth of digital camera and computer sales, which triggers the growth of the external hard disk market, whereas sales of external hard disks do not influence sales of digital cameras. This situation is characterized by one-sided complementarity (and indifference on the other side).

Symbiotic (loss-indifferent) – a situation where one party suffers from the existence of the other, which remains insensitive to what is happening. In economic terms, this situation refers to the well-known negative externalities [40] – a concept well suited for analyzing pollution issues, for instance.

No-competition – a relationship between two species where there is no overlap in the resources used to evolve. For instance, sales of coffee do not affect sales of tea in the same supermarket; sales depend on seasonal variation rather than on competition between the two products.

In rather simple cases, it would be justified to study the precise pattern of interaction and competition in order to formulate a specific growth model usable for specific forecasting. But interactions among competitors are often complex (see the economic literature on technology competition, e.g. Arthur, 1988), and addressing the issue is likely to become excessively complex, time-consuming and sensitive to arbitrary modeling strategies or even hidden errors and biases. Therefore, we prefer a second option: decomposing the competition system into sub-systems in order to obtain an accurate fit of the model to the observed data.

2.3 Component logistic model
Frequently, due to difficulties with system definitions, the time-series data cannot be refined and split properly. This leads to inaccuracy when fitting a logistic S-curve to the data. Prior to discussing the component model, a problem-contradiction should be defined: system evolutions should be described by multi-parameter complex functions and curves, since systems rarely follow a single S-curve trajectory, owing to endogenous and exogenous complexities. However, system evolutions should also be described by a simple, three-parameter logistic function, to provide a clear physical interpretation, to be comparable with other systems' evolutions, to decrease errors during forecasting and to be applicable in practice. In response to the formulated contradiction, the component logistic model proposes a description of complex growth processes using a combination of simple three-parameter functions, by applying bi-logistics [7] or multi-logistics [8]. The mechanism of this combination resembles the principle of the 'nested doll' [1] and once again confirms the fractal character of the natural growth concept [20]. For instance (the example is adapted from [7]): to study the dynamics of US nuclear tests (source of data: [21]), a single simple logistic does not provide an adequate level of residuals (an accurate fit), while a bi-logistic growth curve fits the data with acceptable residuals (see Figure 3).


Such a result was interpreted in the following way: "…the fastest rate of growth (midpoint) of the first pulse occurred in 1963, following the Cuban missile crisis. While the first logistic pulse was largely the race to develop bombs with higher yields, the second pulse, centered in 1983 and nearing saturation now (1994), is probably due to the research on reliability and specific weapons designed for tactical use. The Bi-logistic model predicts that we are at 90% of saturation of the latest pulse. Processes often expire around 90%, though sometimes they overshoot. The residuals show the extraordinary, deviant increase in U.S. tests after the scare of the 1957 sputnik launch…" [7]
When a reasonable interpretation is made, the application of component models provides suggestive and attention-grabbing results. However, our practical experience shows that experts might need several weeks to propose a realistic interpretation of the obtained curves.


Figure 3: US nuclear weapon tests with the decomposed bi-logistic growth curve.

One of the remarkable characteristics of the component logistic model is that it helps in understanding the observed system: through the reduction of residuals and the decomposition of the initial dataset into multiple logistics, the initial definition of the system can be corrected and refined.
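To make the component model concrete, the sketch below (an illustration with synthetic data, not the nuclear-test dataset of Figure 3) fits a bi-logistic, i.e. the sum of two simple three-parameter logistics, and recovers the two pulses:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, kappa, alpha, beta):
    return kappa / (1.0 + np.exp(-alpha * (t - beta)))

def bilogistic(t, k1, a1, b1, k2, a2, b2):
    # Component model: superposition of two simple logistic pulses.
    return logistic(t, k1, a1, b1) + logistic(t, k2, a2, b2)

# Synthetic cumulative counts exhibiting two growth pulses (invented data).
t = np.linspace(0.0, 40.0, 41)
rng = np.random.default_rng(0)
y = bilogistic(t, 50, 0.5, 10, 80, 0.4, 28) + rng.normal(0.0, 1.0, t.size)

# Rough initial guesses for the six parameters, then least-squares fit.
p0 = [y.max() / 2, 0.3, 8.0, y.max() / 2, 0.3, 30.0]
(k1, a1, b1, k2, a2, b2), _ = curve_fit(bilogistic, t, y, p0=p0, maxfev=20000)
print(f"pulse 1: limit = {k1:.0f}, midpoint = {b1:.1f}")
print(f"pulse 2: limit = {k2:.0f}, midpoint = {b2:.1f}")
```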

3 APPLYING COMPONENT LOGISTIC GROWTH FOR TECHNOLOGICAL FORECASTING
The component logistic models can be used for testing and validating our view of the innovation process.

3.1 Innovation process as a learning process
The applied working definition of technology diffusion is the following: technology diffusion is the process of getting a (new) technology adopted through practice. In the context of long-term technological forecasting, technology diffusion can be presented as a process of transition from invention to innovation [14, 22]. For instance, the transition from the first feasible prototype to the first regular production and the creation of a new market takes time: 112 years for photography, 55 years for the steam locomotive, 47 years for rolled wire, 79 years for the lead battery, and 49 years for arc welding [22]. The innovation process can be seen as a sequence of activities, each of which has its own characteristics [27]. In many cases, the separation between invention, technical validation and commercial



exploitation is justified, as each of these phases generates an output to be used by the next phase:
Invention is defined as a result of engineering activities in a technological context, resolving the contradictions between specific needs (as they are perceived) and the known laws of nature. The output of the invention process is a feasible solution and a working prototype, but not necessarily a patent⁴.
Validation aims at producing and testing prototypes under a set of controlled laboratory conditions, to obtain a prototype that could be used as a pre-series.
Production and commercialization aim at increasing the added value of the innovation for the producer and the user. This can be done by increasing productivity itself, by developing the distribution network, or by introducing small modifications to the product to satisfy consumer requests.
Each of these three phases obviously deals with "something" different, and we expect that the use of component logistic models based on this decomposition can yield the required results. But because their purposes are different, one is faced with the difficulty of measuring their "growth" with a common indicator. For instance, the number of sales is a natural growth indicator for the production and commercialization phase, but it has no meaning for the measurement of growth in the other phases. What these phases have in common is that they describe learning processes. Learning relates to the reduction of uncertainties [18]. When enough information and knowledge has been acquired, moving to the next phase of the innovation process is required for further knowledge and information acquisition [27].

3.2 What should be measured?
Information and knowledge growth is common to the different phases we have described and is a potential growth indicator from the perspective of component logistic forecasting. A preliminary step is to define these concepts in an operational way. The measurability of knowledge and information will be our most important concern. First of all, let us propose the concepts needed to understand what learning is actually about.

Data
Data is a description of facts from a certain viewpoint using known parameters and values (measurement). In other words, it is a description (e.g. a measurement) of facts through a comparison with something known (size, colour, strength). Data acquisition is limited by the examples available for comparison and by the selected measurement units (e.g. how does one measure a personal value?). It is important to underline that the same facts can be described by different datasets (e.g. the performance of military aircraft and of passenger airplanes).

Information
Information can be defined as a structured representation using data and interpretations from a certain viewpoint. It is structured, articulated, codified, and stored in certain media. The most common forms of information are manuals, documents, and audio-visual materials. Information is not tied to individuals, but it has interpretative content. By watching the daily news from different countries, it is easy to witness how different the information about the same facts and events can be.

⁴ "A patent is a government grant to an inventor assuring him the sole right to make, use, and sell his invention for a limited period." [Collins English Dictionary, 8th Edition, 2006]


Knowledge
Knowledge is a personal way of using information to manage practical or intellectual tasks. Knowledge always belongs to an individual and includes conscious, subconscious, and unconscious components. Knowledge cannot be placed on a carrying medium, since it is dynamic and constantly changing. The applied working definition of "knowledge" is similar to the "tacit knowledge" proposed by Michael Polanyi (1951). In brief, knowledge acquisition can be presented as the result of information that has been assimilated through interpretation, validation and adaptation phases. Knowledge can also take the form of experience based on feedback from practice.

World-view
The totality of our beliefs about reality forms our world-view. In other words, it reflects how we perceive the world. One's personal view of the world, and how one interprets it, relies on the dominant world-view in society on the one hand, and influences the transformation of society's world-view through communication on the other (e.g. the ideas of Galileo Galilei). According to our experience, simply having new information does not systematically change one's world-view. However, regular knowledge acquisition contributes a great deal to the evolution of one's world-view and to an increased learning capacity.

3.3 Law of logistic growth and emerging technologies
The key question to be answered has been formulated in the following way: how can the future of emerging technologies be forecast using simple logistic S-curves, when there is no statistical data about them? Before the 'infant mortality threshold' there is no statistical data about growing variables like efficiency, market share or the number of 'species', since the system does not exist outside of laboratories. How can the logistic S-curve be constructed before statistical data for the growing variable is available? The generic question can be reformulated as follows: how can one foresee the time, place and specificity of the transition from invention to innovation in advance? The application of causal methods has been suggested to answer the above question. In the first experiment (2004-2005: a forecasting project for small stationary fuel cells), the hypothesis of studying problems (contradictions) as causal variables to foresee the future evolution of a technology was tested. The quantity of contradictions was applied as a unit of measurement to judge technology maturity. The assumption was that at the early stage of study the growth rate of problems is slow, that it later increases, and that at a certain stage the growth of contradictions slows down until no new problems are registered. This assumption was confirmed by the first experiment in 2004-2005 at the European Institute for Energy Research (EIFER, Karlsruhe, Germany) [23]. It was also tested in a project for distributed energy technologies (2005-2006, EIFER) [24]. Several small-scale tests performed afterwards showed relevant results as well. The obtained results, which demonstrate conformity with logistic growth, are regarded as preliminary until an explicit measurement mechanism for knowledge acquisition can be proposed.

3.4 Networks of contradictions
For the practical forecasting projects mentioned above, the network of contradictions was proposed [28] as a guideline for the knowledge acquisition process. Among many other roles, the network of contradictions helps differentiate signal from noise information-knowledge in the early stages of

emerging technologies. The application of networks of contradictions intensifies the learning and research processes on the one hand, and facilitates the selection of relevant information on the other. In addition, the number of contradictions and the interlinks between contradictions and critical parameters may be applied as growing variables to depict knowledge growth. During the practical forecasting projects [3, 23, 24], slow growth of the number of problem-contradictions was registered at the beginning of the study, fast growth at a certain stage of the project, and a decreasing number of new problems until the network stabilized at the end. Networks of contradictions help discover new problems and guide the knowledge acquisition process in accordance with them. The resulting networks of problems on the one hand accumulate information and structure the knowledge of experts, while on the other hand the construction of maps of contradictions contributes to the reduction of expert bias. It has also been observed that constructing a network of contradictions helps members of the working team develop their competence more rapidly. This effect takes place as soon as knowledge acquisition is combined with constructing the network. This process produces a system


effect whereby experts are forced to study new limitations of existing and emerging technologies instead of being preoccupied with existing solutions.

3.5 Growth of knowledge on the way to innovation
Currently, the extension of the original concept of 'contradictions as causal variables' for logistic growth is under examination. There are two basic assumptions behind this: 1) any process (especially problem solving) can be considered a learning process [19, 25]; 2) the outcome of any learning process involves knowledge growth. Therefore, it is proposed to measure knowledge growth during the transition from invention to innovation. In his book [22], Mensch classified innovations by the date of the first commercial sale, and inventions by the date of the first working prototype. Based on this classification, he presented historical data for 113 basic innovations. The distance between invention (feasible prototype) and innovation (first production for the market) differs between technologies: for instance, 92 years for electricity production, 23 years for dynamite, 39 years for magnetic tape recording, and 82 years for fluorescent lighting (see the working definitions for invention and innovation in section 3.1).

[Figure 4 plot: knowledge acquisition, in percent of readiness to transition to the next stage (0, 25, 50, 75, 100%), against time; consecutive S-curves for exploration (invention), experimentation (field test) and exploitation (innovation), with invention and innovation marked on the time axis.]

Figure 4: Growth of knowledge within the exploration, experimentation and exploitation phases on the way from invention to innovation.

According to the results of research in the organization and economic sciences, the transition period from invention to innovation can be described through three consecutive stages [26, 27]: exploration (laboratory research) – E1, experimentation (field tests) – E2, and exploitation (commercialization) – E3. It can be assumed that at the beginning of the exploration stage (E1), most knowledge is implicit (see Figure 4). At the end of stage E1, knowledge is represented as information, mostly in scientific papers and patents. At the experimentation stage (E2), the need for sharable information increases; nevertheless, it is necessary to protect intellectual property. Therefore, most of the information at the beginning of E2 can be found in internal reports about field tests, in reviews, and in local patents. At the end of stage E2, international patents and publications in industrial journals increase in number; conference papers and marketing articles are also numerous.

The working assumption is: if we can measure knowledge growth during the E1, E2 and E3 stages, it gives an opportunity to foresee the beginning of commercialization with the use of logistic S-curves. The amount of knowledge can be applied as a growing variable for the emerging technology. As an interim solution, the relative ratio 'knowledge acquisition in percent of readiness to transition to the next stage' is adopted as a temporary answer for measurement units (see Figure 4). One hundred percent represents the knowledge acquisition ceiling for a certain stage (e.g. exploration). In Figure 4, the broken-line curve on the right represents knowledge growth at the exploration stage for the next-generation system. It shows that knowledge is accumulated over a period of time, and when there is sufficient knowledge (saturation phase) to decide about the next stage, the S-curve of the next stage (e.g. experimentation) passes through its α-point. When accumulated knowledge approaches 90% of the growth limit, it is time for transition to the next stage.
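The transition rule read from Figure 4, that a stage hands over once accumulated knowledge approaches 90% of its growth limit, can be stated directly in code. Below is a minimal sketch under the paper's assumptions; the stage parameters and the threshold are illustrative values, not measured ones:

```python
import numpy as np

def logistic(t, kappa, alpha, beta):
    return kappa / (1.0 + np.exp(-alpha * (t - beta)))

STAGES = ("exploration", "experimentation", "exploitation")
# Illustrative parameters: each stage's knowledge ceiling is 100% of
# readiness; midpoints and growth rates are invented for demonstration.
PARAMS = {"exploration": (100, 0.8, 5),
          "experimentation": (100, 0.6, 14),
          "exploitation": (100, 0.5, 25)}
THRESHOLD = 0.9   # transition once knowledge reaches 90% of its limit

for stage in STAGES:
    kappa, alpha, beta = PARAMS[stage]
    t = np.arange(0.0, 40.0, 0.1)
    k = logistic(t, kappa, alpha, beta)
    # First time step at which accumulated knowledge crosses the threshold.
    t_go = t[np.argmax(k >= THRESHOLD * kappa)]
    print(f"{stage:15s}: ready to transition at t = {t_go:.1f}")
```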


In practice, the experimentation (field test) stage can be launched before the exploration stage of knowledge acquisition reaches its saturation phase. A weak point of the consecutive S-curves model for knowledge is the ambiguity about when the next growth curve will substitute the old one and how to distinguish between the two curves. In order to address this issue, the component logistic model, which allows decomposing a complex growth process into several simple logistic curves [8], should be employed. One more fact should be taken into account: all technologies evolve under competition, and the number of research projects and the amount of funds are limited as well. Hence it is obvious why certain inventions have never reached the experimentation stage, or why some inventions which have passed through 70% of experimentation have not arrived at the exploitation phase [18]. A reliable technological forecast should provide an explicit answer to the questions of which technology will succeed in competition, when it will happen, and where it will take place. Taking into account the described assumptions and models, it seems feasible to answer these critical questions.

3.6 How to measure knowledge?
This crucial question is answered in different ways in various specific situations. A detailed review of existing knowledge measurement techniques is planned for future publications. Within the scope of this paper we would just like to point out some fruitful research domains:

• Literature-related discovery [29];
• Patent-based analysis for quantitative estimation of technological impact [30];
• Assessment of knowledge in education [31];
• Measurement of scientific output for different fields [32];
• Text and data mining [33].
Unfortunately, after detailed consideration, it becomes evident that most of these techniques measure not knowledge but information (see the working definitions in section 3.2). Nevertheless, growth of information can be regarded, to a certain extent, as an indication of knowledge growth.

4 CONCLUSIONS
The proposed working hypothesis concerning the knowledge acquisition mechanism through problem solving is still theoretical and should be checked in practice. Given that knowledge belongs to individuals, measurement of knowledge growth should take into account information growth (e.g. publications) as well as the number of persons involved in the process of knowledge growth. Therefore, it is proposed to measure knowledge as the product of the number of specialists (including the authors of information) and the number of publications (e.g. patents, conference papers, research reports, journal articles, video titles and other kinds of information). There are three major working hypotheses to be tested in the near future:
1. To measure knowledge growth by applying a network of contradictions as a guideline to differentiate signal and noise information.
2. To employ the concept of limiting resources from a super-system for validation of the network of contradictions.
3. To adopt the knowledge growth factor as an underlying cause of the technology substitution mechanism.


1. Signal and noise information can be differentiated when one focuses attention not on existing technological solutions, but on the problems to be solved, regardless of known answers. A network of contradictions is a technology for realizing the basic principles of system thinking: "First, one should examine their objectives before considering ways of solving a problem. Second, one should begin by describing a system in general terms before proceeding to the specific." [34]
2. The application of simple logistic S-curves to represent the growth of knowledge follows the same concept of 'limiting resources' from the nearest super-system as was implemented to study the evolution of technical systems. For instance, there is a well-known situation when, at a certain stage, new laboratory experiments do not provide additional knowledge about a research topic. A typical answer to such a situation is to redesign the experiments or to conduct field tests in real conditions rather than in a laboratory. An open question for us is what the limiting resources are in the proposed example. Analysis of the limiting resources for constructed networks of contradictions helps to review and validate the obtained map of problems through the study of how the formulated problems are recognized in research and development communities. At the same time, the study of limiting resources discloses future problems and technological barriers, according to the results of two forecasting projects in energy technologies.
3. According to the preliminary results of our research, the knowledge growth mechanism is one of the major factors in the chain of technology substitution issues. Competition is the exterior side of technology substitution, while knowledge acquisition is an internal force for survival under competition.

5 ACKNOWLEDGMENTS
This research is supported by the European Institute for Energy Research (EIFER) in Karlsruhe, Germany. We would also like to thank our colleagues from the LICIA team of LGECO at INSA Strasbourg for their constant attention. We are also grateful to all our colleagues from different countries and institutions for their curiosity, questions, discussions and helpful criticism regarding the presentation of this hypothesis. Special thanks to Sarah Sands for English corrections of the proposed article.

6 REFERENCES
[1] G. S. Altshuller, Creativity as an Exact Science, Sovietskoe Radio Publishing House, Moscow, 1979. p.184. In Russian.
[2] D. Kucharavy, and R. De Guio, Problems of Forecast, ETRIA TRIZ Future 2005, Graz, Austria, 2005. 219-235.
[3] D. Kucharavy, R. De Guio, L. Gautier, and M. Marrony, Problem Mapping for the Assessment of Technological Barriers in the Framework of Innovative Design, 16th International Conference on Engineering Design, ICED'07, Ecole Centrale Paris, Paris, France, 2007.
[4] D. Kucharavy, and R. De Guio, Application of S-Shaped Curves, 7th ETRIA TRIZ Future Conference, Kassel University Press GmbH, Kassel, Frankfurt, Germany, 2007.
[5] D. Kucharavy, and R. De Guio, Logistic Substitution Model and Technological Forecasting, 8th ETRIA TRIZ Future Conference, University of Twente, Enschede, Netherlands, 2008.

[6] C. Marchetti, and N. Nakicenovic, The Dynamics of Energy Systems and the Logistic Substitution Model, International Institute for Applied Systems Analysis, Laxenburg, Austria, 1979.
[7] P. S. Meyer, Bi-logistic growth, Technological Forecasting and Social Change, 47(1), (1994) 89-102.
[8] P. S. Meyer, J. W. Yung, and J. H. Ausubel, A Primer on Logistic Growth and Substitution: The Mathematics of the Loglet Lab Software, Technological Forecasting and Social Change, 61(3), (1999) 247-271.
[9] R. U. Ayres, What have we learned? Technological Forecasting and Social Change, 62(1-2), (1999) 9-12.
[10] N. Meade, and T. Islam, Forecasting with growth curves: An empirical comparison, International Journal of Forecasting, 11(2), (1995) 199-215.
[11] T. Modis, Strengths and weaknesses of S-curves, Technological Forecasting and Social Change, 74(6), (2007) 866-872.
[12] S. Makridakis, and M. Hibon, The M3-Competition: results, conclusions and implications, International Journal of Forecasting, 16, (2000) 451-476.
[13] E. Mansfield, Technical change and the rate of imitation, Econometrica, 29, (1961) 741-766.
[14] E. M. Rogers, Diffusion of Innovations, 5th ed., Free Press, New York, 2003. p.512. ISBN 0743222091.
[15] J. C. Fisher, and R. H. Pry, A simple substitution model of technological change, Technological Forecasting and Social Change, 3(1), (1971) 75-78.
[16] T. Modis, Predictions – 10 Years Later, Growth Dynamics, Geneva, Switzerland, 2002. p.335. ISBN 2-9700216-1-7.
[17] T. Modis, A Scientific Approach to Managing Competition, The Industrial Physicist, 9(1), (2003) 24-27.
[18] A. Grubler, Technology and Global Change, International Institute for Applied Systems Analysis, Cambridge, 2003. p.452. ISBN 0-521-54332-0.
[19] J. H. Ausubel, and C. Marchetti, The Evolution of Transport, The Industrial Physicist, April/May (2001) 20-24.
[20] T. Modis, Fractal aspects of natural growth, Technological Forecasting and Social Change, 47(1), (1994) 63-73.
[21] Stockholm International Peace Research Institute Yearbook 1992, Oxford University Press, New York, 1992.
[22] G. Mensch, Stalemate in Technology: Innovations Overcome the Depression, Ballinger Pub. Co., Cambridge, Massachusetts, 1978. p.241. ISBN 088410611X.
[23] L. Gautier, M. Marrony, and D. Kucharavy, Technological Forecasting of Fuel Cells for Small Stationary Applications, HN-42/05/016, European Institute for Energy Research - EIfER, Karlsruhe, 2005.
[24] N. Avci, S. Cassen, F. Chopin, L. Henckes, K. Doerr, and D. Kucharavy, OTSM-TRIZ Approach for a Technological Forecast of Distributed Generation (DG): Results of the 8 First Working Sessions, European Institute for Energy Research - EIfER, Karlsruhe, Germany, 2006.
[25] C. Marchetti, Society as a Learning System: Discovery, Invention, and Innovation Cycles Revisited, Technological Forecasting and Social Change, 18(4), (1980) 267-282.

[26] J. G. March, Exploration and Exploitation in Organizational Learning, Organization Science, 2(1), (1991) 71-87.
[27] P. Llerena, and E. Schenk, Technology Policy and A-Synchronic Technologies: The Case of German High-Speed Trains, in Innovation Policy in a Knowledge Based Economy, P. Llerena and M. Matt, eds., Springer, Berlin, 2005, 115-134.
[28] N. Khomenko, R. De Guio, L. Lelait, and I. Kaikov, A Framework for OTSM-TRIZ Based Computer Support to be Used in Complex Problem Management, International Journal of Computer Applications in Technology (IJCAT), 30(1/2), (2007) 88-104.
[29] R. N. Kostoff, Literature-related discovery (LRD): Methodology, Technological Forecasting and Social Change, 75(2), (2008) 186-202.
[30] C. Choi, S. Kim, and Y. Park, A patent-based cross impact analysis for quantitative estimation of technological impact: The case of information and communication technology, Technological Forecasting and Social Change, 74(8), (2007) 1296-1314.
[31] G. J. Cizek, Assessing Educational Measurement: Ovations, Omissions, Opportunities, Educational Researcher, 37(2), (2008) 96-100.
[32] H. Horta, and F. M. Veloso, Opening the box: Comparing EU and US scientific output by scientific field, Technological Forecasting and Social Change, 74(8), (2007) 1334-1356.
[33] A. L. Porter, and S. W. Cunningham, Tech Mining: Exploiting New Technologies for Competitive Advantage, John Wiley & Sons Inc., Hoboken, New Jersey, 2005. p.384. ISBN 0-471-47567-X.
[34] J. S. Armstrong, Long Range Forecasting: From Crystal Ball to Computer, 2nd ed., John Wiley & Sons, Inc., 1985. p.689. ISBN 0-471-82360-0.
[35] L. M. B. Cabral, Industrial Organization, MIT Press, 2000. ISBN 9780262032865.
[36] G. J. Stigler, Competition, in The New Palgrave: A Dictionary of Economics, v. 3, MacMillan Publ., 1987. pp. 531-546.
[37] W. B. Arthur, Increasing Returns and the New World of Business, Harvard Business Review, July-Aug (1996).
[38] W. H. Greene, Econometric Analysis, 5th ed., New York University, 1991. p.802. ISBN 0-13-066189-9.
[39] C. H. Anderton, Conflict and Trade in a Predator/Prey Economy, Review of Development Economics, 7(1), (2003) 15-29.
[40] W. J. Baumol, On Taxation and the Control of Externalities, American Economic Review, 62(3), (1972) 307-322.


A TRIZ Based Methodology for the Analysis of the Coupling Problems in Complex Engineering Design

G. Fei¹, J. Gao¹, X.Q. Tang²
¹ School of Engineering, University of Greenwich, Chatham Maritime, Kent, ME4 4TB, UK
² School of Mechanical Engineering and Automation, Beijing University of Aeronautics and Astronautics, Beijing, China
[email protected]

Abstract
Conceptual design is a critical and innovative stage in engineering product and system design. In the conceptual design process, it would be ideal if all functional requirements were maintained independently, according to the axioms of Axiomatic Design theory. However, in practice, especially in complex engineering product and system design, the requirements are often not independent (i.e., they are coupled), and this makes conceptual design more difficult. In this paper, a coupling analysis methodology, framework and related techniques are proposed which integrate axiomatic design with the theory of inventive problem solving (TRIZ), in order to identify and analyse the coupling problems existing in conceptual design. An illustrative example is also presented.
Keywords: New product design, Coupling analysis, Axiomatic design, TRIZ

1 INTRODUCTION
Conceptual design, seen as a critical part of new product and system development, is getting more attention in both academia and industry. Many techniques have been proposed in the past decades in order to improve the effectiveness and efficiency of conceptual design. Some of them, such as Quality Function Deployment, Axiomatic Design [1], and the Theory of Inventive Problem Solving (TRIZ) [2], have proven successful in the conceptual design of engineering products and systems and are widely used in industrial applications. Axiomatic design provides an effective approach to developing products and systems across all design domains, including the customer domain, the functional domain, the physical domain and the process domain. The zigzagging development process and the two axioms of axiomatic design theory developed by Suh [1] are widely adopted, especially in mapping functional requirements to design parameters at the conceptual design stage. During the zigzagging process, functional requirements and design parameters are acquired with corresponding design matrices. By populating the design matrices, uncoupled, decoupled and coupled solutions can be identified, and further measures can be carried out to eliminate couplings. However, this is not viable in some complex engineering products and systems in the real world [3,4,5]. Firstly, current techniques of coupling analysis are implemented on a qualitative basis. The strengths of the couplings existing in a solution cannot be obtained. For example, when there are many couplings and not all of them can be solved together, the critical couplings need to be identified, prioritised and solved in order to improve the effectiveness and efficiency of design. Therefore, it is important to find a methodology that can analyse and quantify the strengths of couplings. Secondly, the original theory of axiomatic design is inefficient when the scale of the design matrix becomes very large. Generally, decoupling of



design is conducted in two steps, i.e., (i) the design matrix is populated so that couplings existing in the design are identified; and (ii) the design matrix is rearranged to adjust the sequence of functions and design parameters in order to make the design decoupled. However, when the number of functional requirements increases, the number of combinations grows in a geometric progression and the rearrangement of the design matrix becomes extremely time-consuming [1]; this is difficult to implement in industry. Thirdly, resources, in terms of development costs, lead-time, staffing and so on, are always precious and need to be allocated properly in most projects. The scale and complexity of some large engineering projects are enormous, and solutions to these couplings are not easily obtained. Therefore, it is unacceptable to spend too many resources on resolving the less critical coupling issues (some are even harmless). Instead, the critical couplings should be identified and resolved with intensive effort. In summary, a more practicable and efficient coupling analysis approach is needed to analyse the couplings existing in design solutions, one that is able to identify couplings quickly and enables engineers to concentrate their efforts on solving the critical couplings that are most harmful to the implementation of the required functions. In addition, the progress of the project may be sped up and unnecessary costs may be reduced by leaving the less critical or even harmless couplings unsolved.

2 LITERATURE REVIEW
2.1 Axiomatic Design
The theory of axiomatic design was proposed by Suh [1]; it is dedicated to constructing a design framework with a scientific basis and to improving design activities with a logical and analytical thinking process. Basically, there are three essential parts of axiomatic design that are

widely used in academic research and industrial applications, namely the zigzagging design process, the design axioms and the design matrix. The axiomatic design theory divides the design world into four domains, i.e., the customer domain (CAs), the functional domain (FRs), the physical domain (DPs) and the process domain (PVs). The design is gradually realised by mapping from one domain to another. Typically, the mapping process between the functional domain and the physical domain is studied more often in the literature than the others, because conceptual design is mostly undertaken at this stage. As depicted in Figure 1, the mapping system works in a top-down way. Each design parameter (DP) in the physical domain corresponds to a functional requirement in the functional domain at the same level. The design parameters at this level then derive the functional requirements at the next level, and so on until the leaf level is reached, so that functions and solutions are decomposed and obtained during this process.

[Figure 1 diagram: the FR hierarchy (FR; FR1, FR2; FR11, FR12, FR21, FR22; FR121, FR122) in the functional domain, mapped to the corresponding DP hierarchy (DP; DP1, DP2; DP11, DP12, DP21, DP22; DP121, DP122) in the physical domain.]
Figure 1: Zigzagging mapping process between functional domain and physical domain [1]

There are two axioms recognised in design, namely the independent axiom and the information axiom, accompanied by related theorems and corollaries. Design axioms are the elementary part of axiomatic design and are deemed the basis of good design; they are used to guide the design process and evaluate alternative solutions. The independent axiom indicates that the functional requirements should always be maintained independently, so that any change of the DP corresponding to one FR will not affect the functionalities of other DPs. As the basis of the axiomatic design theory, the independent axiom takes effect throughout the design process. The information axiom indicates that the best design solution should contain minimum information content; more information means a more complicated design and a higher possibility that a design parameter cannot satisfy its functional requirement. The design matrix is a technique used to analyse the coupling relationships between a group of FRs and their corresponding DPs. Normally the matrix is populated in a binary way so that all the coupling relationships are recognised qualitatively. According to the independent axiom, only uncoupled and decoupled designs are acceptable. However, in the design of some complex engineering products and systems, it is impossible to keep all FRs independent of DPs. Quantitative analysis of coupled elements should be carried out. A practical approach is needed to clarify the coupling relationships within these designs so that the direction of improvement can be pointed out.

2.2 The Substance-Field Model of TRIZ
TRIZ is the Russian acronym for the Theory of Inventive Problem Solving [2]. Since TRIZ was proposed, it has been widely used in industrial applications to solve technical problems, due to the fact that TRIZ is the result of the analysis of thousands of patents. Recently, many researchers and practitioners have been trying to apply TRIZ in other, non-technical areas, such as management, education, environment and politics. Although there is a considerable set of techniques in the theory of TRIZ, such as the contradiction matrix, inventive principles, knowledge/effects and ARIZ, the Substance-Field (Su-Field for short) analysis model is adopted in this project in order to clarify the coupling relationships during the zigzagging design process of axiomatic design. The Su-Field analysis model is based on the minimal technological system, which is also known as the triad 'object-tool-energy'. The triad system is composed of a tool, an object and the energy, and describes how the tool performs an action on the object by the force coming from the energy. Through the analysis of the triad system, interactions between elements within this system can be clarified. Along with the triad system, four kinds of actions are also identified, which include unspecified action, specified action, inadequate action and harmful action. For example, the Su-Field analysis model of driving a nail into a wall is depicted in Figure 2. In this system, mechanical force is performed on the hammer by the user, and then the hammer performs mechanical force on the nail.

Figure 2: Su-Field model example

In this project, direct or indirect interactions between DPs in the axiomatic design methodology may be identified using the Su-Field analysis method. The fields existing in interactions can be clarified as well, so that the effects caused by the fields can be estimated with specific expertise. This is important for identifying the couplings between DPs and FRs.

2.3 Integration of Axiomatic Design and TRIZ
Many attempts at integrating TRIZ and axiomatic design have been made by researchers in order to improve the product development process. Comparisons between axiomatic design and TRIZ have been carried out to identify the advantages and disadvantages of the two theories. The possibility of a complementary integration of axiomatic design and TRIZ has also been discussed [6,7,8]. It is found that, on the one hand, axiomatic design is powerful in functional analysis and provides a logical thinking approach to devising conceptual designs in a zigzagging and hierarchical structure. On the other hand, although it is effective in identifying the functional conflicts underlying solutions, there is a lack of specific tools in the axiomatic design theory for problem solving [9]. Based on a wide-ranging analysis of a large number of patents, TRIZ has become a sophisticated methodology for physical and technological problem solving. However, it is relatively less powerful in complex system analysis [6, 10]. With the advantages of TRIZ, it is possible to improve the ability of problem identification and solving within the axiomatic design theory. In the light of the above discussions, many methodologies have been proposed to enhance the capability of product design by making axiomatic design and TRIZ work together [5,8-13]. In particular, some methodologies have recently been devised, from different perspectives, for coupling analysis. Su and his colleagues [4] developed a methodology to deal with the coupling analysis of engineering


system design in a quantitative way. A comparative approach and a scale algorithm are proposed in order to transfer the binary design matrix into a quantitative one on an analytical basis. Zhang et al. [5] proposed a conceptual framework integrating TRIZ with axiomatic design. Some tools of TRIZ, such as contradiction analysis, separation principles, inventive principles and effects, are used to solve constraints and coupling problems. Shin and Park [13] classified coupled designs into six patterns. Tools of TRIZ, such as standard solutions, scientific and technical effects, the contradiction matrix, separation principles and ARIZ, are used in each pattern respectively, or combined, to solve different coupling problems. Kang [12] proposed an uncoupling methodology using the contradiction matrix and inventive principles. Within this methodology, coupling problems are formulated as contradictions and FRs are converted into standard characteristics, and then inventive principles are applied to solve all the contradictions. By reviewing the above methodologies regarding the integration of axiomatic design and TRIZ, it is found that there still exists a weakness in using these methodologies in conceptual design. TRIZ is good at solving technical and physical problems, but in conceptual design the detailed design parameters are still vague, and it is difficult, and also time-consuming, to solve problems using the principles or standard solutions of TRIZ. The aim of this project is to identify the coupling relationships within solutions and find critical paths for designers to focus on. As an ongoing project, although not all the coupling problems will be solved by the proposed methodology directly, it provides an efficient way for designers to find which path is most valuable to take for improvement.
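To make the design-matrix vocabulary of section 2.1 concrete, here is a small sketch (illustrative only, not part of any of the reviewed methodologies) that classifies a binary FR-DP design matrix as uncoupled, decoupled or coupled under the usual axiomatic-design reading (diagonal, triangular, otherwise):

```python
import numpy as np

def classify_design(dm: np.ndarray) -> str:
    """Classify a binary FR-DP design matrix per axiomatic design:
    diagonal = uncoupled, triangular = decoupled, otherwise coupled."""
    if np.array_equal(dm, np.diag(np.diag(dm))):
        return "uncoupled"
    # A decoupled design can be rearranged into triangular form; this
    # sketch only tests the matrix as given, without permutation search.
    if np.array_equal(dm, np.tril(dm)) or np.array_equal(dm, np.triu(dm)):
        return "decoupled"
    return "coupled"

dm_uncoupled = np.eye(3, dtype=int)
dm_decoupled = np.array([[1, 0, 0], [1, 1, 0], [0, 1, 1]])
dm_coupled   = np.array([[1, 1, 0], [1, 1, 0], [0, 1, 1]])
for dm in (dm_uncoupled, dm_decoupled, dm_coupled):
    print(classify_design(dm))   # uncoupled, decoupled, coupled
```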

3 THE PROPOSED COUPLING ANALYSIS METHODOLOGY
Functional design is about finding an object, or a group of objects, that can realise the functional requirements through some of their properties or through interactions between them. In other words, the function is the outcome of the operation of the triad system, in terms of TRIZ. A design parameter is one kind of property of these objects that can be used to drive the realisation of the required functions. Any unexpected actions will affect the realisation of functions. In this project, the expected interactions within and between the design elements used to realise functions are not considered. Instead, unexpected interactions are focused on, because they are the most likely to cause unexpected couplings. Different from the term "contradiction",

unexpected interactions are not contradictions or conflicts from a technical point of view. They are just functional interactions, but outside the expectations of the designers. The approach to analysing couplings in the zigzagging design process is depicted in Figure 3. As the product design is organised in a hierarchical structure by the zigzagging process, design parameters (DPs) in lower levels should be consistent with their parent ones (parent-DPs) in upper levels. In other words, the characteristics of design parameters in lower levels reflect the characteristics of those in upper levels. Given that the coupling analysis is carried out at the second level of the zigzagging process, design parameters DP11 and DP12 are identified as coupled in this solution. On account of the lack of design details at this stage, although the qualitative impact of this coupling can be roughly estimated from the inputs and outputs of DP12 and DP11, the more accurate, quantitative strength of the coupling cannot be obtained yet. Provided that the third level is the leaf level of this design, the corresponding child design parameters of DP11 are DP111, DP112 and DP113, and likewise the corresponding child design parameters of DP12 are DP121 and DP122. At the leaf level, the behaviours of these child design parameters are analysed with the Su-Field analysis model, so that couplings between design parameters derived from the same parent parameter are identified and quantified. At the same time, couplings between child parameters of different parent parameters are also identified. Returning to the second level, by analysis of the third level of design parameters, not only can the couplings within DP11 and DP12 be calculated by specific algorithms, but the coupling between DP11 and DP12, which is caused by F12(o)', can also be determined by analysing the behaviours between their child parameters (i.e. F121(o)' and F122(o)'). Due to the fact that this project primarily focuses on the analysis of coupling relationships between design parameters, the Su-Field method is partially used. Conventionally, the Su-Field method is used to analyse problems and guide designers to solve problems with standard solutions [2]. In this project, standard solutions are not involved, because no effort will be made to solve coupling problems at this stage. In other words, the triad analytical model is the only part that is used to clarify interactions within solutions. Discussion of using standard solutions or the laws of system evolution to suggest or predict improvement measures is out of the scope of this project.

Figure 3: Analysis in the Zigzagging design process


4 COUPLING ANALYSIS TECHNIQUE WITH SU-FIELD METHOD
In order to formulate the coupling analysis process, a framework for the coupling analysis methodology has been developed (see Figure 4), which is mainly composed of 8 steps. In this section, every step of this framework will be described and the related techniques used in each step will be clarified.
Step 1: Complete the zigzagging design process. The zigzagging design process is conducted by designers at the beginning of product design. Hierarchical design structures of functional requirements (FRs) and corresponding design parameters (DPs) are constructed with the current design capability of the team. A qualitative design matrix is populated and a rearrangement of the matrix is conducted so that uncoupled and decoupled functions are identified [14]. Meanwhile, the coupled blocks existing in the binary design matrix are identified as well; these are what this project looks into. Unlike the conventional axiomatic design approach, which does not decompose coupled blocks, in the proposed methodology each coupled block in the design matrix is decomposed further until it reaches the leaf level, and the interactions between the constituent elements are analysed in Step 2 by the Su-Field analysis method.

Step 2: Analyse couplings between leaf-level DPs by the Su-Field method. The coupling analysis method in this project is built upon the Su-Field analysis method, which is used to clarify the interactions between design elements and the effects caused by these interactions. For example, as depicted in Figure 5, there are three DPs whose interactions are expressed in the manner of Su-Field analysis. In this coupling analysis model, fields are denoted by F(i), F(o) and F(o)', and substances are denoted by DP.

Figure 5: Su-Field analysis of couplings among Design Parameters

F(i) denotes the expected input field of a DP, which is designated when the DP is designed. F(i) could be a field coming from outside the system, like actions from users or the environment, or a field coming from other DPs in the system. F(o) is the expected output field of a DP. Similarly to F(i), F(o) is also designated when the DP is designed. F(o) is what the system wants in order to realise the function corresponding to the DP. F(i) and F(o) are necessary for the realisation of functions, so in this paper the couplings caused by F(i) and F(o) are not considered. Another output field is F(o)', which is not expected by the initial design of the system. In other words, F(o)' is the factor that may be out of control and cause unexpected couplings between DPs. So the analysis of F(o)' will clarify what the coupling of DPs is and how the coupling happens. Another important element in this model is the DP. Strictly according to the theory of axiomatic design, a DP is a feature that can satisfy the realisation of a functional requirement. The carrier of the desired feature may be an object or a particular part of an object. For simplicity, here 'DP' denotes an object, or a part of an object, that has these design parameters, so that the expression can be consistent with the theory of Su-Field analysis as well. In terms of design parameters, their expected states are controlled by their F(i)s and their carriers. However, under the influence of F(o)'s from other design parameters, their states may vary. Thus, by comparing the state influenced by F(o)'s with the initial state expected by the design, changes of the states of DPs are examined. The effects on functional performance caused by changes of a DP's state can be quantified by a scale system, so that the strengths of couplings can be obtained.

Figure 4: The framework of coupling analysis

Step 3: Quantify coupling strengths between leaf-level DPs. Because couplings are caused by the unexpected fields F(o)' acting on DPs, the calculation of coupling strength focuses on the influence that the F(o)'s exert on DPs. To achieve this, a scale system is developed. The strength of a coupling is scaled by engineering experts according to the effect that one DP has on another DP at every level of the zigzagging design process. The relationship between coupling strengths and effects is given in Table 1. Taking the system in Figure 5 as an example, if F113(o)' has a negative effect on DP113 that significantly reduces its performance, the scaled coupling strength is marked as -5 on DP113; if F112(o)1' has a positive effect on DP113 that slightly improves its performance, the scaled coupling strength is marked as 1 on DP113, as depicted in Figure 6. As the zigzagging design progresses, the scale system expresses the coupling strength more and more accurately, because more details emerge from the top level down to the leaf level. In turn, more accurate estimates of coupling strength at lower levels can improve the estimates at upper levels with the help of an estimating algorithm.

Coupling strength   Description
 9                  Necessity of function
 7                  Extreme performance improvement
 5                  Significant performance improvement
 3                  Moderate performance improvement
 1                  Slight performance improvement
 0                  No effect
-1                  Slight performance reduction
-3                  Moderate performance reduction
-5                  Significant performance reduction
-7                  Extreme performance reduction
-9                  Function damaged

Table 1: The scale system of coupling analysis

Figure 6: Example of scaled coupling strength

Step 4: Calculate the relative importance of each DP. Before calculating the coupling strength of a DP, the importance of each DP needs to be clarified, because DPs differ in importance relative to each other, and DPs with different importance are weighted differently when their coupling strengths are calculated. Two kinds of importance are analysed in this project: the functional importance and the coupling importance.

Step 4.1: Calculate the functional importance. Among the child design parameters of the same parent parameter, each child parameter is expected to realise one child function of the corresponding parent function. These child functions play different roles in realising the parent function, so they have different relative importance. To obtain the relative importance of each child function, the Analytic Hierarchy Process (AHP) [15] is used to perform pair-wise comparisons between the child functions of a parent function. As a result, each child parameter obtains a relative importance coefficient, which is used in calculating its coupling strength. For $DP_l$, its relative functional importance coefficient is denoted as $\varepsilon_l$, where $\varepsilon_l \in (0,1)$ and $DP_l$ means a certain DP in the hierarchical structure, e.g. DP113 in Figure 5.

Step 4.2: Calculate the coupling importance. When a DP performs actions on another DP, it has the ability to influence others. Given a DP1 performing actions on DP2, in other words a coupling between them, the outcome of DP2 will be influenced by DP1; furthermore, the outcome of DP2 will act on other DPs that are coupled with DP2. Thus, it is important to consider how strongly one DP can influence others before calculating its coupling strength. The coupling importance coefficient of $DP_l$ is denoted as $\lambda_l$ and is calculated as follows. Provided that $DP_l$ has $K$ unexpected fields $F_k(o)'$ acting on $H$ DPs, each single coupling strength resulting from $F_k(o)'$ acting on $DP_h$ is denoted as $f_{hk}$, and the functional importance of each DP that is acted on by the F(o)'s is denoted as $\varepsilon_h$, where $k \in K$ and $h \in H$. Then the original coupling importance is

$$\hat{\lambda}_l = \sum_{h=1}^{H} \sum_{k=1}^{K} f_{hk} \cdot \varepsilon_h \qquad (1)$$

In order to be consistent with the functional importance, the importance coefficient should be a number between 0 and 1; thus, the original coupling importance is normalised. The normalised coupling importance coefficient is

$$\lambda_l = \frac{\hat{\lambda}_l}{\sum_{l=1}^{L'} \hat{\lambda}_l} \qquad (2)$$

where $L'$ denotes the number of child parameters of $DP_l$'s parent parameter.

Step 5: Calculate synthesised coupling strengths of DPs. The coupling of $DP_l$ can be expressed by $C_l(ct)_{cn,cp}$,

where $cp$ means the aggregate coupling strength caused by positive effects performed on $DP_l$, $cn$ means the aggregate coupling strength caused by negative effects performed on $DP_l$, and $ct$ means the aggregate coupling strength caused by all effects performed on $DP_l$. For example, if there are $n$ fields acting on $DP_l$, $p$ of them with positive effects on $DP_l$ and $q$ of them with negative effects on $DP_l$, then $cp_l$ and $cn_l$ can be calculated as follows:

$$cp_l = \varepsilon_l \cdot \lambda_l \cdot \sqrt{\sum_{i=1}^{p} f_i^2} \qquad (4)$$

$$cn_l = \varepsilon_l \cdot \lambda_l \cdot \sqrt{\sum_{j=1}^{q} f_j^2} \qquad (5)$$

where $i \in \{0,\ldots,p\}$, $j \in \{0,\ldots,q\}$, $p + q = n$ and $f$ means the coupling strength caused by a field. The aggregate coupling strength is

$$ct_l = \sqrt{cp_l^2 + cn_l^2} \qquad (6)$$
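A minimal numeric sketch of equations (4)-(6) follows; the function is ours, and the sign convention (the negative aggregate reported with a minus sign) follows Table 5 of the later example. The chosen inputs, one slight positive effect (+1) and one moderate negative effect (-3) on the riser width element RW with ε = 0.297 and λ = 0.57, reproduce the RW row of Table 5, although the paper does not list the underlying field effects explicitly.

```python
from math import sqrt

def coupling_strengths(eps, lam, effects):
    """eps: functional importance; lam: coupling importance;
    effects: scaled field effects (Table 1) acting on this DP."""
    cp = eps * lam * sqrt(sum(f * f for f in effects if f > 0))   # eq. (4)
    cn = -eps * lam * sqrt(sum(f * f for f in effects if f < 0))  # eq. (5)
    ct = sqrt(cp * cp + cn * cn)                                  # eq. (6)
    return cp, cn, ct

cp, cn, ct = coupling_strengths(eps=0.297, lam=0.57, effects=[1, -3])
print(round(cp, 3), round(cn, 3), round(ct, 3))  # 0.169 -0.508 0.535
```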

Step 6: Calculate synthesised coupling strengths of DPs in the next-upper level. For a parent design parameter, the coupling strength can be calculated by integrating the coupling strengths of its child parameters. For example, if $DP_p$ has $R$ child parameters, then the coupling strengths are:

$$cp_p = \sqrt{\sum_{r=1}^{R} cp_r^2} \qquad (7)$$

$$cn_p = \sqrt{\sum_{r=1}^{R} cn_r^2} \qquad (8)$$

$$ct_p = \sqrt{cp_p^2 + cn_p^2} \qquad (9)$$

where $cp_p$ denotes the aggregate positive coupling strength of the parent DP, $cn_p$ denotes the aggregate negative coupling strength of the parent DP, $cp_r$ denotes the aggregate positive coupling strength of a child DP, and $cn_r$ denotes the aggregate negative coupling strength of a child DP. The coupling strength of every DP in each level is calculated in this way until the top level is reached. Before calculating the coupling strengths of each upper level, the relative importance coefficients need to be calculated first.

Step 7: Search the hierarchical design structure for critical coupled paths. After obtaining the coupling strengths of every DP in each level, a search algorithm is used to identify critical coupling paths in the hierarchical design structure. Designers can find the most coupled path by searching the total coupling strengths $ct$ from the top level to the leaf level, which yields the most promising route for improving the design. Designers can also find the most negatively coupled path by searching the negative coupling strengths $cn$ in the structure, which yields the most valuable way to eliminate critical problems in the design. Additionally, designers can find the most positively coupled path by searching the positive coupling strengths $cp$ of every DP, which helps them decide whether some parts of the design can be integrated together.

Step 8: Improve the design and re-calculate the couplings. Once the most valuable paths for improving the design have been recognised, improvements are implemented and the design is refined. If the design is still not satisfactory, the coupling strengths of the design are re-calculated and further improvement work is carried out.

5 AN ILLUSTRATIVE EXAMPLE
In this section, an example demonstrates how the methodology identifies and quantifies the coupling relationships between FRs and DPs in an engineering system. The system chosen in this paper is the reactor cavity cooling system (RCCS) of General Atomics' Gas Turbine-Modular Helium Reactor (GT-MHR), which is described in the GT-MHR conceptual design description report [16,17] and was further studied by Jeff Thielman et al. [3,18] in order to evaluate and optimise the system with Axiomatic Design theory. The RCCS is one of the cooling systems of the GT-MHR and works in a passive natural-circulation cooling condition to remove decay heat when the reactor is shut down (see [10] and [12] for details). Although there are seven sub-FRs and seven sub-DPs of DP3.2.2 in Jeff Thielman's research, for the purpose of demonstrating the proposed methodology in the simplest way, only three FRs and their corresponding DPs are selected in this paper, which can be found in Table 2.

Figure 7: The reactor cavity cooling system [18]

Sub-FRs of FR3.2.2             | Sub-DPs of DP3.2.2
Air exit temperature           | Riser Width
Air velocity in riser          | Riser Height
Maximum riser wall temperature | Outlet Area

Table 2: Selected FRs and DPs of the RCCS

There are three functional requirements selected in this demonstration, namely air exit temperature, air velocity in riser, and maximum riser wall temperature. In the conceptual design of the RCCS, the air exit temperature is supposed to be kept as low as possible. The air velocity in the riser needs to be kept as high as possible so that more heat is taken from the reactor. Obviously, the maximum riser wall temperature is designed to be as low as possible, because high temperature is detrimental to the safety of the reactor. The operation model of the RCCS is built up using the Su-Field analysis method, as depicted in Figure 8. The reactor is the source of heat and delivers heat to the risers by radiation. The riser is thereby heated and passes the heat on to the air circulating inside it. By the nature of air, the heated air rises and leaves the riser; finally, the exit air is led by the outlet duct into the atmosphere. Besides these expected actions, there are also some actions that are not expected by the original design. For example, as the width of the riser increases, the air inside the riser is heated more effectively, so the velocity of air in the riser increases; meanwhile, the temperature of the riser wall decreases, because more heat is taken from the reactor. Similarly, when the height of the riser increases, the temperature of the exit air and the maximum temperature of the riser wall increase, because the damping of the air circulation increases as well and the heat-release performance decreases. The outlet area also affects the air exit temperature and air velocity in riser functions by controlling the exit of air. Thus, the coupling diagram shown in Figure 9 can be obtained from the Su-Field analysis and engineering expertise. By analysing the effects caused by the unexpected actions, relative coupling values are obtained according to the scale system of coupling analysis.

Furthermore, the coupling value of each design element can be calculated by equations (4), (5) and (6). The results are shown in Table 5.

   | Positive coupling (cp) | Negative coupling (cn) | Total coupling (ct)
RW | 0.169                  | -0.508                 | 0.535
RH | 0.011                  | -0.011                 | 0.016
OA | 1.37                   | -0.587                 | 1.49

Table 5: Coupling strengths of design elements

Finally, the coupling strength of the parent element, DP3.2.2, can be calculated by equations (7), (8) and (9). The result is displayed in Table 6. It should be noted that the coupling strengths of DP3.2.2 below are not the actual values, because only three pairs of FRs and DPs are selected for demonstration in this example.

Figure 8: Su-Field analysis of heat removal

        | Positive coupling (cp) | Negative coupling (cn) | Total coupling (ct)
DP3.2.2 | 1.38                   | -0.776                 | 1.583

Table 6: Coupling strength of DP3.2.2
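As a quick consistency check (ours, not the paper's), the Table 6 values follow from the Table 5 rows via equations (7)-(9), and the Step 7 rule of following the strongest negative coupling already singles out OA as the critical element:

```python
from math import sqrt

children = {"RW": (0.169, -0.508), "RH": (0.011, -0.011), "OA": (1.37, -0.587)}

cp_p = sqrt(sum(cp * cp for cp, _ in children.values()))   # eq. (7)
cn_p = -sqrt(sum(cn * cn for _, cn in children.values()))  # eq. (8)
ct_p = sqrt(cp_p ** 2 + cn_p ** 2)                         # eq. (9)
# 1.38 -0.776 1.584 (Table 6 reports 1.583, computed from the rounded cp, cn)
print(round(cp_p, 2), round(cn_p, 3), round(ct_p, 3))

# Step 7: the child with the strongest negative coupling strength
print(max(children, key=lambda name: abs(children[name][1])))  # OA
```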

Figure 9: Interactions and couplings between objects

After obtaining the coupling relationships between design elements, step 4 of the coupling analysis framework is carried out to calculate the relative functional importance and the relative coupling importance of each element. By the algorithm of AHP (Analytic Hierarchy Process), the relative functional importance is calculated as in Table 3.

                               | Air exit temperature | Air velocity in riser | Maximum riser wall temperature | Relative Importance
Air exit temperature           | 1                    | 2                     | 1/2                            | 0.297
Air velocity in riser          | 1/2                  | 1                     | 1/3                            | 0.164
Maximum riser wall temperature | 2                    | 3                     | 1                              | 0.539

Table 3: Relative functional importance

According to the results of the coupling analysis in Figure 9 and the relative functional importance in Table 3, the relative coupling importance of each design element can be obtained by equations (1) and (2), as shown in Table 4.

                             | RW   | RH    | OA
Relative coupling importance | 0.57 | 0.067 | 0.363

Table 4: Relative coupling importance
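For reference, the Table 3 weights can be reproduced approximately from the pairwise comparison matrix. The geometric-mean prioritisation below is our assumption, since the paper only states that AHP [15] is used; the small differences from the published 0.297/0.164/0.539 are rounding effects.

```python
from math import prod

pairwise = [
    [1,   2, 1/2],   # Air exit temperature
    [1/2, 1, 1/3],   # Air velocity in riser
    [2,   3, 1  ],   # Maximum riser wall temperature
]

gm = [prod(row) ** (1 / len(row)) for row in pairwise]  # row geometric means
weights = [g / sum(gm) for g in gm]
print([round(w, 3) for w in weights])  # [0.297, 0.163, 0.54]
```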


By calculating the coupling strengths of the three design elements, the coupling problem can be read from the results intuitively. From Table 5, the design element OA is expected to be the critical element coupled with others in the system, because both the strongest negative coupling and the strongest total coupling occur on OA. Thus, appropriate improvement effort should be assigned to the design of OA in order to reduce the couplings of the solution effectively. If the full decomposition of the design structure and the coupling analysis of all design elements were completed, there would be a hierarchical structure of coupling analysis results in which a comparative algorithm could search for the strongest couplings at each level in a top-down way. As a result, critical paths for system improvement are identified, facilitating the effectiveness and efficiency of product design.

6 CONCLUSIONS
The theory of Axiomatic Design is widely used in new product and system design, especially at the conceptual design stage. According to the Independence Axiom, it is critical to maintain the independence of functions, which minimises the disturbance to the realisation of other functions when any one of the design parameters changes. However, in the real world it is almost impossible to maintain the complete independence of all functions at an acceptable cost in some complex engineering systems. In this project, a methodology of coupling analysis is proposed by integrating TRIZ with Axiomatic Design. The Su-Field method, an important part of TRIZ, is used to identify and analyse the couplings existing in design solutions. With the assistance of this methodology, coupling relationships within designs are clarified and quantified, and it becomes much easier for designers to find clues for improving the system. Furthermore, if the number of design parameters is large, it is impossible for designers to carry out a rearrangement of the design matrix; this method can then help to find the critical coupled elements that affect the performance of the system. It can also improve the effectiveness and efficiency of engineering design, because critical coupled paths can be found by searching the hierarchical structure based on the coupling analysis results. The design team can put more effort into improving the critical aspects of the system (and less effort into less important or harmless couplings), so resources can be allocated more appropriately.

7 DISCUSSION AND FURTHER WORK
Although the proposed methodology provides a new way to analyse coupling issues in conceptual design, some uncertainties and shortcomings also appear, which are worthy of further discussion and consideration. From the perspective of TRIZ, a substance in the Su-Field method indicates a "thing" or "entity", which is normally a physical object. In this project, the carrier of design parameters is considered to be an object, or a part of an object, which possesses the feature that can realise the corresponding function; the design parameter is therefore used to represent that object, part of an object, or carrier. However, the carrier may have more than one feature realising different functions. Thus, further analysis needs to be carried out to clarify which action is performed on a certain feature and what the effect is. In this paper this analysis is done by individual designers; further research on the mapping between physical Su-Field analysis and abstract coupling analysis would be worth looking into. Another issue is that the scale system of coupling strength quantifies couplings based on the expertise of individual engineers, which may make the estimation of couplings inconsistent if the engineers in the team have different levels of experience. The scientific and technical effects of TRIZ may be helpful in estimating the coupling strengths by analysing the interactions. In this paper, the coupling strength of a design element is the value that denotes the effects caused by other design elements acting on the current design element; the effect caused by the current design element acting on other design elements has not been considered. Further research needs to be done in order to clarify the strengths of the effects that the current design element exerts on other design elements. The illustrative example in this paper is based on a complex engineering system. However, at the current stage of the research the system has not been decomposed in detail, so the coupling analysis is not based on a rigorous engineering analysis and coupling analysis at upper levels is not demonstrated. Thus, further research on the reactor cavity cooling system needs to be carried out. A real industrial case study is planned for the next stage of this project.

REFERENCES
[1] Suh, N.P., 2001, Axiomatic Design: Advances and Applications, Oxford University Press, New York.
[2] Fey, V. and Rivin, E., 2005, Innovation on Demand: New Product Development Using TRIZ, Cambridge University Press, New York.
[3] Thielman, J. and Ge, P., 2006, Applying axiomatic design theory to the evaluation and optimization of large-scale engineering systems, Journal of Engineering Design, 17(1): 1-16.
[4] Su, J.C., Chen, S.J. and Lin, L., 2003, A structured approach to measuring functional dependency and sequencing of coupled tasks in engineering design, Computers and Industrial Engineering, 45: 195-214.
[5] Zhang, R., Tan, R. and Li, X., 2005, An innovative conceptual design model using axiomatic design and TRIZ, The 15th CIRP International Design Seminar, Shanghai, China, 22-26 May: 281-286.
[6] Lee, K.W., 2005, Mutual Compensation of TRIZ and Axiomatic Design, Proceedings of the European TRIZ Association Conference, TRIZ Futures 2005, Graz, Austria, November.
[7] Mann, D., 2002, Axiomatic Design and TRIZ: Compatibilities and Contradictions, Proceedings of ICAD2002, Second International Conference on Axiomatic Design, Cambridge, MA, June 10-11.
[8] Kim, Y.-S., 2000, Reviewing TRIZ from the perspective of Axiomatic Design, Journal of Engineering Design, 11(1): 79-94.
[9] Hua, Z., Yang, J., Coulibaly, S. and Zhang, B., 2006, Integration TRIZ with problem-solving tools: a literature review from 1995 to 2006, International Journal of Business Innovation and Research, 1(1/2): 111-128.
[10] Yang, K. and Zhang, H., 2000, Enhancing Robust Design with the Aid of TRIZ and Axiomatic Design, Proceedings of ICAD2000, International Conference on Axiomatic Design, Cambridge, MA, June 21-23.
[11] Shirwaiker, R.A. and Okudan, G.E., 2008, TRIZ and axiomatic design: a review of case-studies and a proposed synergistic use, Journal of Intelligent Manufacturing, 19(1): 33-47.
[12] Kang, Y.J., 2004, The Method for Uncoupling Design by Contradiction Matrix of TRIZ, and Case Study, Proceedings of ICAD2004, The Third International Conference on Axiomatic Design, Seoul, June 21-24.
[13] Shin, G.-S. and Park, G.-J., 2006, Decoupling process of a coupled design using the TRIZ, Proceedings of ICAD2006, The 4th International Conference on Axiomatic Design, Firenze, June 13-16.
[14] Gebala, D.A. and Eppinger, S.D., 1991, Methods for Analyzing Design Procedures, ASME Conference on Design Theory and Methodology, Miami, FL, September: 227-233.
[15] Saaty, T.L., 1999, Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World, RWS Publications, Pittsburgh, Pennsylvania.
[16] General Atomics, 2002, Gas Turbine-Modular Helium Reactor (GT-MHR) Conceptual Design Description Report - Part 1, U.S. Nuclear Regulatory Commission, http://www.nrc.gov/reactors/newlicensing/new-licensingfiles/gtmhr-preapp1.pdf.
[17] LaBar, M.P., 2002, The gas-turbine modular helium reactor: a promising option for near-term deployment, International Congress on Advanced Nuclear Power Plants, ICAPP 2002, Hollywood, Florida, USA, 9-13 June: GA-A23952.
[18] Thielman, J., Ge, P., Wu, Q. and Parme, L., 2005, Evaluation and optimization of General Atomics' GT-MHR reactor cavity cooling system using an axiomatic design approach, Nuclear Engineering and Design, 235: 1389-1402.

TRIZ Evolution Trends in Biological and Technological Design Strategies
N. R. Bogatyrev, O. A. Bogatyreva
Department of Mechanical Engineering, The University of Bath, BA2 7AY, UK
[email protected]; [email protected]

Abstract
The concept of evolution in technology and biology is discussed. It appears that most of the evolution trends in technology and biology result from different development strategies. This conflict has its roots in the time when technology emerged to adapt the environment to our needs. Following that strategy to its full extent is dangerous: we also need to adapt to the environment, but current technology has neither the mechanisms for such changes nor the knowledge of which directions to go. Learning from nature is therefore a real challenge. We suggest ten new evolution trends for strategic design, to stay ahead in future markets.
Keywords: TRIZ, Evolution, Bio-Inspired Design

1 INTRODUCTION
The trends of product evolution developed in TRIZ are very helpful in product design and in strategic management for choosing the best financial investment. Obviously, the best way to predict the future is to design it. It is well known in TRIZ that at each stage of product development (expressed by an S-curve) different evolutionary trends are applicable. Actually, all the trends reflect how often we use particular sets of inventive principles or standards; in other words, they reflect the patterns of thinking we have become used to. G. Altshuller extracted the inventive principles and revealed his laws of evolution of technical systems from the analysis of a huge number of patents (figures vary around 3 million). As these trends reflect the popularity, and success, of some particular inventive principles amongst engineers, there is a question whether these pathways are programmed as working patterns in the mind, or are "TRIZ-defined" and belong to technology as an artificial phenomenon. In other words, are these principles (evolution trends) inherited or invented? So, basically, we face the question: are there any general laws of nature reflected in technology development that gave G. Altshuller [1] the idea of technological evolution trends? And if the evolution trends are "invented", are they always good to follow? In this paper we describe our quest to find the answer.

Using TRIZ we certainly learn from past experience and transfer our thinking patterns to guide future technology development. But what if we need to adapt current technology to totally new conditions where past experience is no longer valid? G. Altshuller and M. Rubin [2] described what will happen after "the final victory of technology" in its opposition to nature. The main message of that paper is that technology suppresses nature, replaces it with artefacts in our life, and finally, inevitably, we will come to the point where we live in a totally artificial environment. (Any professional ecologist will also say that after this stage the days of humankind will be numbered, but this is another story.) Thinking about technology and nature as two contrasting opposites is a common ideology from the past, and the future looks very pessimistic if technology keeps following the same trends. Yes, at the very beginning technology was


designed to extend our abilities, to replace some of our functions with machines carrying out work much better than we can. But technology has changed dramatically today. With the development of information technology and new materials we can now produce machines that may possess many, or even all, of the features of living creatures. We have even started thinking about the self-reproduction of machines: engineers have already designed and built a set of modular robots that can be combined into machines of varying sizes, which will in turn be able to construct identical copies of themselves [3], [4]. To remain ahead of this new market and make reliable forecasts, we need to know what evolutionary trends these new technological "creatures" will follow. In such circumstances the evolutionary trends of the former technology (with its counter-nature ideology) will no longer be valid or applicable, and we need to be ready for such changes. We are now at the beginning of a new technological revolution: changing the paradigm of thinking in order to survive. Unfortunately, it often happens that we make our technology "green" simply by adding the prefix "eco-". This will not help us survive any future global ecological crisis. We need, carefully and thoughtfully, to integrate our new technology into the ecosystem of the planet we belong to, and to pay nature back what we have been credited. So, the goal of our paper is to provide engineers with the trends and concepts that they need to take into account in the design of future generations of products. Some of these trends are similar to ones defined by TRIZ, but quite a few of the concepts now being learned from nature can provide us with new ideas and strategies for the engineering of the future. To do this we need to define both what is common and what is different in biological evolution and the development of engineering products.

2 BIOLOGY AND TECHNOLOGY – TWO OPPOSITES AND A CHALLENGE FOR SYNTHESIS

2.1 Comparison of living and technical systems
First of all, we decided to look at the major differences between animate nature and technology (tables 1 and 2). Technical systems are created by humans (biological systems), for humans, for some definite number of purposes; biological systems, by contrast, possess their own self-value. The use of TRIZ in biology faces serious restrictions, set by the very features of biological systems that make them alive. The simplest example: widespread mechanical engineering methods are based on extremely high or extremely low temperatures, but similar methods cannot be applied to the vast majority of living creatures, because biological systems do not survive such energy impacts. Living nature is energy efficient and avoids extremes. An ideal technical system and an "ideal" biological system look very different (table 1). For example, a living creature typically needs to put a lot of pressure on the environment to survive (changing the environment, which is in fact engineering), but also to be very adaptable (changing itself in response to the environment, which is not typical for technology). Another example: the tendency towards ideality (i.e. decreasing the size of a device while retaining its function) exists in biological systems, but only in some limited cases (for example, parasites). One of the most profound reasons for this might be that living systems are self-valuable objects of and for themselves. In other words, it is not sufficient for them to perform only their role, because their own existence is the ultimate independent value for themselves and because they affect various super-systems. Contemporary technology does not have many of the features that we find in life, but there is still something in common, which gives us hope of merging the two domains harmoniously within some synthetic disciplines (e.g. biomimetics) (tables 1 and 2). Our future depends on how we manage to adapt current technology to the dramatically changing environment and to help the biosphere include our civilisation into its cycles. We suggest that our future ideal technology will possess all the advantages of animate nature together with our current traditional technological artefacts.

One of the basic features of living systems is the emergence of autonomy, or independence of action, with a degree of unexpectedness directly related to the complexity of the living system. This gives living systems great adaptability and versatility, but on the other hand makes their behaviour difficult to predict. K. Lorenz often gave this example to describe the difference between living creatures and inanimate nature: if we take a stick and hit a ball, we can predict with high probability where the ball will land; but if we hit a dog with the same stick, the outcome is extremely variable and depends on the numerous factors that affect the dog's decision on how to react (run away, bite, scream, hide, freeze, etc.). Engineers, on the other hand, generally do not appreciate unpredictability in technical systems; indeed, they try to avoid it by any means. But we need to consider it even in our current technology, since nearly every technical system is actually a combination of a technical device and a human who operates it. This viewpoint immediately suggests a broader and more general definition of the very term "a technical system": a biological system, part of whose functions is delegated to a device that is mostly artificial and/or non-living. This consideration is commonly omitted; technical systems are often considered in isolation, neglecting any broader context, despite the fact that engineering is really a subset of human behaviour. Decision making is very common in animate nature (even amoebas make choices); in fact, it is a compulsory parameter when looking for the difference between an animate and an inanimate object. At best, neglecting the biological aspects of engineering can lead to reduced effectiveness; at worst it can produce technological catastrophes. So, there is a good reason to learn from biology how nature deals with extreme complexity and uncertainty.

No. | Ideal technical system | "Ideal" biological system | Ideal future technology
1   | Simple structure | Complex structure | Simple
2   | Everlasting or have necessary life length | Mortal | Everlasting
3   | Easy to operate (deterministic) | Difficult external operation (due to stochastics) | Easy to operate
4   | Min. use of resources | Max. use of resources | Minimal use of resources
5   | Min. waste production | Min. waste production | Minimum waste production
6   | Max. capacity reserve | Available in abundance | Max. capacity reserve
7   | Easy to repair | Sustainable | Self-repairing
9   | Has different modes of operation for different environments | Adaptive | Adaptable
10  | Automatic | Self-regulated | Self-regulated
11  | Reliable | Reliable | Reliable

Table 1: Animate and inanimate systems: two different idealities (differences are marked grey).


NON-LIVING TECHNICAL ARTIFICIAL SYSTEMS vs. LIVING BIOLOGICAL NATURAL SYSTEMS

1. Technical: Operate within sufficiently wide conditions, beyond the limits of living creatures' tolerance; utilisation of high-energy electromagnetic fields, laser, radiation, extreme temperatures and pressure is widespread. Biological: Operate within relatively narrow conditions of temperature, pressure, chemical environment, etc.; utilisation of high-energy electromagnetic fields, radiation and low temperatures is absent.
2. Technical: Most human technologies are open-ended "cycles", which causes most of the problems of misbalance and lack of sustainability. Biological: Complex living systems tend to keep balance, static (homeostasis) or dynamic (homeorhesis), due to closed cycles of energy and substance.
3. Technical: Very fast and accelerating development. Biological: Relatively slow rates of evolution.
4. Technical: Short-term effectiveness ("here and now at any price"). Biological: Long-term sustainability.
5. Technical: Slow processes are considered as shortcomings. Biological: Slow processes are widespread.
6. Technical: Economic forces make a steady shift from K- to r-mode in products. Biological: Complex ecological systems tend to drift from r-mode ("cheap", small, short-living organisms) to K-mode (large, long-living).
7. Technical: Contemporary industrial systems are unimaginable without massive global transport flows. Biological: Biological systems mostly avoid long-range transportation.
8. Technical: Evolution of technology goes from mechanisation via automation towards the nearly total replacement of humans in the technological process. Biological: A living creature mainly participates as a central figure in all the processes in which it is concerned.
9. Technical: Typically, new technology substitutes the old one to the maximum extent. Biological: Newly evolved biological systems do not necessarily substitute the old ones, but often exist in parallel.
10. Technical: The most common type of locomotion and manipulation is rotation. Biological: The most common types of manipulation and locomotion are oscillation, reciprocation and pulsation.

Table 2: The differences between living nature and technology.

2.2 TRIZ as a bridge between nature and technology
We started to merge TRIZ with biology, for the needs of biomimetics (a science that takes ideas from biology and implements them in technology), in 2000-2002 [5], [6], [7], [8], [9]. The whole aim was to use TRIZ as a bridge between biology and engineering, to enable us to implement natural principles in design and technology; in fact, biomimetic devices should provide success in the immediate future (table 1). It was also very tempting to see whether TRIZ evolutionary trends work in animate nature, as this could make a significant contribution to evolution theory; on the other hand, the trends of biological evolution might enhance the technological ones. We looked at morphology development and found that some of the trends also work in biology [10]. Analysing biological phenomena and the laws and regularities currently being developed within biology, we found all 40 "inventive principles" and also 72 "solution" bio-standards (in press) in biological systems at all levels of complexity, from cell to ecosystem [11]. So there is first evidence that biological and technological evolution reflect a more general reality and therefore look similar; such reality is the subject of complexity theory.

To enable us to compare parameters from the technological and biological domains, we established a logical framework based on the "mantra" "Things do things somewhere". This establishes six fields of operation in which all actions with any object can be executed: things (substance, structure), i.e. hierarchically structured material, following the progression sub-system – system – super-system; do things (requiring energy and information, which implies that energy also needs to be regulated); somewhere in space and time. These six


operational fields (namely substance, structure, energy, information, space and time) re-organise and condense the TRIZ classification (the contradiction matrix), both the features used to generate the conflict statements and the inventive principles [12]. This generalisation is considerably more logical and easier to use than Altshuller's 39x39 contradiction matrix. Our matrix allows the inclusion of parameters that were previously missing; moreover, our new 6x6 matrix derived from these fields has no blank cells. This more general TRIZ matrix is also used to place the inventive principles of TRIZ into a new order that more closely reflects the biological route to the resolution of conflicts. We call this new matrix the BioTRIZ matrix [11]. It is now possible to compare the types of solution for particular pairs of conflicts in technology and biology (tables 3 and 4). We have analysed 500 biological phenomena, covering over 270 functions at least 3 times each, at different levels of complexity from cell to ecosystem. In total we have analysed about 2500 conflicts and their resolutions in biology, sorted by levels of complexity [11]. As a result we revealed some crucial differences between biology and technology that should be discussed. Although the problems are commonly very similar, the inventive principles that nature and technology use to solve them are very different: the similarity between the TRIZ and BioTRIZ matrices is only 0.12, where complete identity is represented by 1 (tables 3 and 4). This is actually not surprising at all, because technology appeared as a response to the "imperfection" of biological systems. But this separation tends to increase, and it finally leads to numerous problems, such as the current ecological crisis. Thus it is the right time to look at biological systems and the ways, techniques and strategies that they employ for problem solving.

Rows: operation field to be improved; columns: operation field that causes the problem, in the order Substance | Structure | Time | Space | Energy/Field | Information/Regulation.

Substance: 6, 10, 26, 27, 31, 40 | 27 | 3, 27, 38 | 14, 15, 29, 40 | 10, 12, 18, 19, 31 | 3, 15, 22, 27, 29
Structure: 15 | 18, 26 | 27, 28 | 1, 13 | 19, 36 | 1, 23, 24
Time: 3, 38 | 4, 28 | 19, 35, 36, 38 | 22, 24, 28, 34 | 10, 20, 38 | 5, 14, 30, 34
Space: 8, 14, 15, 29, 39, 40 | 1, 30 | 6, 8, 15, 36, 37 | 1, 15, 16, 17, 30 | 4, 5, 7, 8, 9, 14, 17 | 4, 14
Energy/Field: 8, 9, 18, 19, 31, 36, 37, 38 | 32 | 14, 19, 21, 25, 36, 37, 38 | 2, 19, 22 | 12, 15, 19, 30, 36, 37 | 6, 19, 35, 36, 37, 38
Information/Regulation: 3, 11, 22, 25, 28, 35 | 30 | 2, 6, 19, 22, 32 | 2, 11, 12, 21, 22, 23, 27, 33, 34 | 9, 22, 25, 28, 34 | 1, 4, 16, 17, 39

Table 3: Matrix derived from the standard TRIZ 39x39 matrix

Rows and columns as in Table 3 (Substance | Structure | Time | Space | Energy/Field | Information/Regulation):

Substance: 13, 31, 40 | 15, 17, 24, 26 | 1, 2, 3, 15, 29, 30 | 15, 19, 27 | 13 | 1, 10, 15, 19
Structure: 15, 31 | 1, 5, 20 | 1, 15, 19 | 24, 34 | 1, 2, 4 | 10
Time / Space / Energy/Field / Information/Regulation: [cell entries not individually recoverable in this copy]

Table 4: BioTRIZ matrix derived from biological effects
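The 0.12 similarity quoted in section 2.2 can be understood as an average cell-by-cell overlap of the two matrices. The sketch below shows one plausible such measure (mean Jaccard overlap of the principle sets); the paper does not state the exact measure it used, and the two example cells reuse the substance-row entries reconstructed above.

```python
def matrix_similarity(m1, m2):
    """Mean Jaccard overlap of the principle sets in matching cells."""
    cells = m1.keys() & m2.keys()
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 1.0
    return sum(jaccard(m1[c], m2[c]) for c in cells) / len(cells)

# Cells keyed by (field to improve, field causing the problem).
triz = {("substance", "substance"): {6, 10, 26, 27, 31, 40},
        ("substance", "structure"): {27}}
biotriz = {("substance", "substance"): {13, 31, 40},
           ("substance", "structure"): {15, 17, 24, 26}}
print(round(matrix_similarity(triz, biotriz), 2))  # 0.14 over these two cells
```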

Figure 1: Biology and engineering: a comparison of TRIZ and BioTRIZ contradiction matrices.

As it is now clear that biological and technological "design" have almost completely opposite strategies, we may even regard them as two anti-systems. Technology tends to solve problems by spending energy, building up structures and changing substances, i.e. in the energy and matter domains. In animate nature, problems are mostly avoided in space and resolved in information, which is a much cleverer and less energy-demanding way of problem resolution (Figure 1). There is an obvious challenge for synthesis. To develop an approach to such a synthesis we need to know the reasons for this kind of difference; we may find the answer by comparing the evolution trends in life and technology.

3 TWO "EVOLUTIONS"
Evolution is one of the most exciting subjects, as everyone is interested in why this world is changing and why some changes lead to success while other changes cause failure and extinction. It is vital for us to understand the principles that underlie these changes. After G. Altshuller's discoveries of mechanisms and trends in the evolution of technology, many papers were published to support and enhance these ideas; nearly all TRIZ specialists have contributed to this subject. The recent book by Nikolay Shpakovsky, "Trees of Evolution" [13], is the best and most exhaustive one, and also reviews different opinions and recent achievements.

3.1 What are we studying?
Let us first define what we mean by the word "evolution". Evolution of species, evolution of societies, and evolution of aeroplanes certainly have different meanings and express different processes. The word "evolution" carries a context taken from biology: the transformation of species and the origin of new ones as a result of numerous natural mechanisms of selection. Technology evolves more as behaviour does (rather than as physiology and morphology), as it is the product of innovation and problem solving; it is also driven by human minds, and by decision making in particular. Without humans, all our clever devices are purposeless. In this sense the evolution of technology in fact reflects how our mind works; being driven by human minds, technological evolution can be regarded as, to a large extent, a subjective phenomenon. The most discussed aspect of biological evolution is the evolution of morphology (which is not mind-driven). Of course there are some fruitful hypotheses on behavioural evolution, but within the current evolutionary paradigm they can only deal with genetically inherited patterns of behaviour; the decision-making process is thus totally excluded from the evolutionary concepts in biology. According to current views, biological evolution is not driven by mind and can therefore be regarded as an objective phenomenon.

The laws of evolution formulated in TRIZ (the law of system completeness, energy conductivity, rhythm coordination, increase of ideality, uneven development of system parts, dynamisation, etc.) are not laws s.str., but trends or tendencies: they describe the process, but not its mechanisms. When we build "evolutionary" trees we are in fact applying these trends to generate product diversity and to get "lines" of evolution. The only mechanism of technological evolution is that it is driven by the resolution of contradictions. It is hard to distinguish the borders and causality amongst TRIZ "laws", "trends" and "lines" in real life, so for convenience we have put them all together into one table (table 5). There is also difficulty in revealing the mechanisms that drive biological evolution. They are different for macro-evolution (evolution of higher-rank taxa and ecosystems) and micro-evolution (evolution of species); these mechanisms are also different at the molecular level (genes) and at the ecosystem level. Moreover, there are more than 24 different concepts about the mechanisms that drive the evolutionary process in living nature [14]. In
Moreover there are more than 24 different concepts about these mechanisms that drive the evolutionary process in living nature [14]. In

297

spite of the fact that there is enormous amount of literature on modelling of the evolutionary process, we still know very little about actual mechanisms of evolution: computer models (or better to say – simulations) of evolution process currently do not give reliable predictions, they express the human opinions on the reality rather than reality itself and often unfortunately are not properly substantiated and validated by evolutionary biologists and are yet to be used in industry. The only exclusion is so called Evolutionary optimisation algorithms, which were developed from the inspiration of the works on genome evolution and had nothing to do with the real evolution of species and eco-systems at all. As we decided to deal with real and “solid” facts (visible results of evolution) we excluded from our consideration the vast amount of literature on modelling of different hypotheses of evolutionary process. So, we intentionally limited ourselves and assumed that we can neglect the mechanisms/causes which drive evolution in technology and biology as they are obviously different for animate nature and technology. We are dealing only with the results of those mechanisms in action – with trends we can observe as scientific fact (not an opinion). We operate with well known facts described in the books on comparative anatomy and physiology, evolutionary morphology, ecology (cycles of energy and substance in different eco-systems and within different time scale), palaeontology, etc. For our assumptions we used the knowledge that general biology accumulated for the last 200 years [15]. [16], [17], [18], [19], [20], [21], [22], [23, [24] and many other publications In the current paper we as experts provide ‘compressed information’, which has not been trivial to extract from numerous case studies on comparative anatomy, physiology and evolutionary morphology. So, in our discussion of the evolution trends we are leaving for the future the questions “how?” and “why?”, but answering only “what?” or “how it looks like?” questions. 3.2 New evolution trends for future technology Both of the realms – biology and technology – have profound intrinsic advantages and shortcomings. The challenge for future engineering and TRIZ as decisionsupport tool is to use positive sides and get rid of shortcomings of the both domains. In such case we will achieve the ideal result for future technology (table1). We analysed the evolution trends in technology and biology. The comparisons of biological and technological systems are presented in the table 3. From the total amount of sixteen trends (table 5) only four are common for technology and living systems, three biological trends happened to be unknown within technology, two technological ones are not described in biology and seven trends are opposite for biology and technology. Such as, for example to achieve sustainability all technological processes should follow the “steps” of longterm bio-evolutionary strategies and middle-term ecological cycles (e.g., ecological successions); increase the energy flow paths, provide enough diversity for complex engineering systems or networks to achieve reliability, etc (table 5). It is very clear that the vectors of development of animate nature and technology are opposite. 
In some cases, when we need to conquer nature, this gives local advantages, in others (or sometimes at the same time), if we want to cooperate with nature creates problems as current engineering strategies evolved to replace natural phenomenon rather than use it. If engineering eager to evolve towards nature, technology has now at least ten new strategic lines to follow to prepare it for future market conditions.

Trends in technical evolution vs. trends in biological evolution:

1. Technical: Transition of the working functions from the macro- to the micro-level. Biological: (no entry)
2. Technical: Increase of the degree of ideality: the more emptiness in a system, the better. Biological: (no entry)
3. Technical: Systems change while they grow, following S-curves. Biological: System ontogenesis can be expressed as an S-curve.
4. Technical: Systems and products evolve toward the use of higher-frequency energy and the use of fields: gravitational, mechanical, acoustic, chemical, thermal, magnetic, electric, electromagnetic. Biological: Life started as a bio-chemical phenomenon and evolved towards the active search for energy resources. Single-cellular organisms started from electro-magnetic, electrical and chemical fields, then mechanical (multi-cellular organisms) and acoustic (complex communication) in their organisation and behaviour.
5. Technical: Dynamisation, increase of the degree of freedom and flexibility. Biological: Decrease of the degree of freedom in functions; species specialisation. The more primitive the biological taxa, the greater their universality.
6. Technical: Mono-bi-poly cycles, i.e. polymerisation of monomerical parts. Biological: Trends in the evolution of morphology: oligomerisation of effectors and metamerical parts of the body.
7. Technical: Segmentation: reduction of the unit. Biological: Replication, reproduction, cloning, metamerisation: multiplication of the units.
8. Technical: Increase of automation and eventual exclusion of humans. Biological: Increase of the role of central control and sophistication of the nervous system, but decrease of automation and increase of the role of feed-forward control.
9. Technical: "Folding-unfolding" structural complexity. Biological: Morphological degradation of parasites and other super-specialised species ("folding") is the dead end of the evolutionary line.
10. Technical: Harmonisation and coordination of the system parts (materials, shape, structure, information, rhythms and energy distribution). Biological: Also true for all living systems.
11. Technical: Parts of systems (sub-systems) evolve non-uniformly, creating constantly changing opportunities for innovation. Biological: Species either change themselves or change each other. Misbalance in sub-systems' interactions causes ecosystem catastrophes or individual physiological stress and illness, and triggers changes or death.
12. Technical: Shortening of the energy flow path. Biological: Energy flow paths have become longer over the evolution of life on our planet.
13. Technical: (no entry) Biological: The acceleration of evolution speed is in direct proportion to the complexity of a system (mammals evolved faster than bacteria).
14. Technical: The life span of a product is definitely shorter than the life span of the class of similar products, and obviously shorter than the life of the whole industrial branch. Biological: The life span of an ecosystem is 4-5 times larger than the life spans of families; families live 3-4 times longer than genera, and genera 3-4 times longer than species.
15. Technical: (no entry) Biological: The higher the level of system complexity, the more diverse the forms of such systems. Eukaryotes are more complex than prokaryotes and contain 500 times more different species.
16. Technical: (no entry) Biological: Living nature evolves from short life-cycles to long life-cycles. For example, the cycle "phototrophs → reducers → mineral substances → phototrophs" evolves into the cycle "phototrophs (producers) → consumers-1 → consumers-2 → ... → reducers → mineral substances → phototrophs".

Table 5: The differences and similarities (grey) between the evolution trends in animate nature and the evolution of technology.

4 SUMMARY
In the TRIZ literature the expression "evolution of technical/technological systems" is widely accepted and employed. This is unproblematic until technology is compared with biology, where the same term has been in circulation for more than 200 years. Borrowing biological principles for engineering applications then causes serious confusion and misunderstanding of the concept of evolution. Biological systems possess at least two more types of transformation (ontogenesis and ecological succession), and these are different from evolution s.str. That is why the work we have done on comparing and analysing transformation and development in biology and technology is essential.

Engineers mostly consider the future; biologists are mostly focused on the past. Both approaches have their own advantages. Living creatures both adapt themselves to the environment and change the environment for their needs, and these two processes are very well balanced in nature. This is not true for technology: we put too much pressure on the environment and adapt very little to it. So, there are two evolution strategies: adapt to the environment, and adapt the environment. If unbalanced, these strategies become dangerously separated, as their driving mechanisms do not match each other. We could make a long list of examples of contradictions between life and technology, but we have only pointed out the main issues. Some technologists have already realised the danger of the growing


gap and have started making attempts to soften this opposition. For example, the founders of permaculture tried to formulate new approaches in agriculture and related spheres [25]. Technology still has a lot to learn about adapting to the environment. It is now obvious that we should merge the most advanced biological principles with the vast historical engineering experience [10]. In our research we found similarity in the design patterns (inventive principles), but not in the context of their application within the evolution trends of life and technology [11], [26]. This means that the evolution of animate nature and the evolution of technology are different phenomena, as a result of their original aims: to change the environment or to change oneself. The future of technology must also lie in its ability to deal with its own complexity and to build itself into the life of the biosphere. Knowing the natural principles that we have learnt from biology may contribute significantly to the future of technology, as this knowledge underpins the laws of development of any complex system. As a result of our study, future industry now has at least ten new strategic lines to follow to prepare itself for future market conditions. Our BioTRIZ tool [11] was developed to initiate this process. Taking into account the laws of development (not only evolution in the biological sense!) of living and non-living artificial systems within one engineering domain is the real challenge! Modifying TRIZ into its BioTRIZ version hopefully makes technology more ecologically sound and environmentally friendly, and therefore sustainable. When we carry out problem-solving workshops we give our customers the option to use the classical TRIZ contradiction matrix or the biological one, and nearly all participants have found their best solutions using inspiration from the BioTRIZ matrix. This does not mean that we have developed something better than Altshuller; it only shows that current market demands shape technology in such a way that it should co-evolve with life and follow the evolution trends of living systems in order to survive.

5 ACKNOWLEDGMENTS
We are sincerely grateful to our colleagues from both domains, biology and technology: biologist Prof. Julian F.V. Vincent and engineer Mr. Michael Hinds, for helpful discussions and for improving the text of the paper.

6 REFERENCES
[1] Altshuller G.S., 1973, Algorithm of Invention, "Moscow Worker", Moscow.
[2] Altshuller G.S., Rubin M., 1991, What will happen after the ultimate victory of technology? Chance for Adventure, Petrozavodsk: 221-237.
[3] Bowyer, 2007, Breed your own Design, Icon Magazine, 52.
[4] Zykov V., Mytilinaios E., Desnoyer M., Lipson H., 2007, Evolved and Designed Self-Reproducing Modular Robotics, IEEE Transactions on Robotics, 23: 541-546.
[5] Bogatyreva O.A., Vincent J.F.V., 2003, Is TRIZ Darwinian? TRIZCON-2003, Altshuller Institute, USA, 16-18 March: 17/1-17/5.


[6] Bogatyrev N.R., 2000, Ecological Engineering of Survival, Publishing House of SB RAS, Novosibirsk.
[7] Bogatyrev N.R., 2004, A "living" machine, Journal of Bionic Engineering, 1, 2: 79-87.
[8] Bogatyreva O.A., Pahl A.-K., Bogatyrev N.R., Vincent J.F.V., 2004, Means, advantages and limits of merging biology with technology, Journal of Bionic Engineering, 1, 2: 121-132.
[9] Vincent J.F.V., 2002, Smart biomimetic TRIZ, TRIZ Future, ETRIA World Conference, Strasbourg: 61-68.
[10] Bogatyrev N.R., Bogatyreva O.A., 2003, TRIZ and biology: rules and restrictions, Proc. of International TRIZ Conference, Philadelphia, USA, 19: 1-4.
[11] Vincent J.F.V., Bogatyreva O.A., Bogatyrev N.R., Bowyer A., Pahl A.-K., 2006, Biomimetics – its practice and theory, "Interface" Journal of Royal Society, 3, 9: 471-482.
[12] Bogatyreva O., Shillerov A., Bogatyrev N., 2004, Patterns in TRIZ Contradiction Matrix: integrated and distributed systems, 4th ETRIA Conference, Florence, 3-5 November: 305-313.
[13] Shpakovskii N., 2006, Trees of Evolution, Puls, Moscow.
[14] Bogatyreva O.A., 1991, The Concept of Social Succession, Nauka, Novosibirsk.
[15] Berg L.S., 1958, System der rezenten und fossilen Fischartigen und Fische, VEB Verlag der Wissenschaften, Berlin.
[16] Berg L.S., 1977, Study of Evolution Theory, Nauka, Moscow.
[17] Chaikovskii Yu.V., 2003, Evolution, Centre for System Research, Moscow.
[18] Beklemishev V.N., 1964, Foundations of Comparative Anatomy of Invertebrates, Nauka, Moscow: I & II.
[19] Foster A.S., Gifford E.M., 1959, Comparative Morphology of Vascular Plants, Freeman and Co., San Francisco.
[20] Thompson d'Arcy W., 1959, On Growth and Form, Cambridge University Press, Cambridge: I & II.
[21] Snodgrass R.E., 1935, Principles of Insect Morphology, McGraw-Hill Company, New York, London.
[22] Imms A.D., 1960, A General Text Book of Entomology Including the Anatomy, Physiology, Development and Classification of Insects, Methuen & Co Ltd, London.
[23] Kardong K.V., 2005, Vertebrates: Comparative Anatomy, Function, Evolution, McGraw-Hill Science Engineering.
[24] Greger R., 1988, Advances in Comparative and Environmental Physiology, Springer-Verlag, Berlin; London.
[25] Mollison B., Holmgren D., 1978, Permaculture One. A Perennial Agriculture for Human Settlements, Tagari Publications, NSW 2484.
[26] Pahl A.-K., Vincent J.F.V., 2002, Using TRIZ-based Evolution Trends to integrate Biology with Engineering Design, TRIZCon, St. Louis, USA.

Procedures and Models for Organizing and Analysing Problems in Inventive Design
D. Cavallucci, F. Rousselot, C. Zanni
Laboratory LGeCo, Design Engineering Laboratory, INSA Strasbourg, 24 Boulevard de la Victoire, 67084 Strasbourg Cedex, France
[email protected], [email protected], [email protected]

Abstract
One of the first tasks designers face is the gathering of all potentially interesting information for understanding an initial situation. Its main objective is the drawing of a problem statement and the understanding of all the future difficulties their project will face. In this paper, we consider the problem of highlighting challenges within an inventively oriented design process, based on expert questioning procedures. Our intention is to obtain a list of clearly formulated contradictions in the sense of TRIZ. In addition, we wish to minimize the solicitation of experts' time while guaranteeing that the highlighted inventive challenges have been exhaustively identified.
Keywords: Problem statement, Inventive Design, TRIZ, Contradictions

1 INTRODUCTION

1.1 Orientations of a design process
Prior to engaging in a design process, the understanding of the initial situation is a crucial stage, often poorly exploited by designers. If it is neglected, there is a high risk that a project evolves towards poorly effective outcomes, since, somewhere else, a similar task might already have been solved by another team. A second risk is that design efforts might be connected to a goal of secondary importance in a given field of activities, because the goals of primary importance have been missed. When designing in an inventive way, this issue is even more critical. In our research, the problem of guiding the design process in a direction consistent with the laws of TRIZ has already been exposed in a previous publication [1]. The topic discussed in this article concerns the mapping of known problems and partial solutions as a preamble to the synthesis of the contradictions of a specific field. Other articles have already dealt with the ontology building of our main concepts and their interactions [2] and with the choice of a reduced set of contradictions in order to impact appropriately on the initial problem network [3]; they are to be considered as continuations of this article.

1.2 Knowledge and graphical representations
A significant number of knowledge recording modes are nowadays available to companies, so that the experience of their experts can be both captured and formalized graphically [4]. Such representations are sometimes helpful to highlight deficiencies in the model represented and sometimes initiate proposals for solutions [5]. Other models are known to support a better understanding of the complexity of specific situations [6]. Our approach is also a proposal for knowledge recording


and representation, but it can be differentiated in the sense that our aims are turned towards assisting the formulation of the contradictions of a given field. The contradiction model is to be understood within the meaning of TRIZ, as has already been exposed in several other publications [7][8].

1.3 Optimizing versus inventive design
The paradigm in which our contribution lies resides within a particular category: inventing. Invention results from a human thinking act leading to a physical embodiment (an artefact) non-existent before. This "invention" reaches its status by the fact that one of its components proposes an original solution to a problem so far unresolved. TRIZ [9] distinguishes inventive problems, whose solution requires overcoming a contradiction (technical or physical), from those not requiring the resolution of such a contradiction. TRIZ considers the former and does not consider the latter, which are optimization problems, in opposition to inventive problems. The rest of this article relates implicitly to inventions that cannot be obtained under the procedures known within the theory of optimization. To conclude on this subject, optimizing and inventive design are complementary and respond to different logics of problem solving. Used in conjunction with optimization, invention makes it possible to exceed actual design limits. Our postulate is that invention is an unavoidable path when optimizing has exhausted its area of potential solutions and when we can no longer be satisfied with the best possible compromises [10].

2 LIMITS OF EXISTING PROCESSES FOR PROBLEM STATEMENT

In the state of the art of existing techniques and approaches for assisting the problem statement process, we can find four categories of findings.

The operational research community has achieved many interesting results in the definition of problems in an axiomatic way. Among others, CSP or non-linear analysis clearly define and constitute a mathematical orientation for addressing such kinds of problems [11]. The abundance of findings in this area also reveals the depth of the field, and several authors have highlighted that one boundary of this research is the presence of the human brain and perception, an unknown land where mathematics has little impact. Design is indeed largely a human act [12], and our purpose in this approach is neither to deal with existing data compiled in databases (which rarely represent a wide part of a domain's knowledge exhaustively) nor to reproduce human brains, but to interface with know-how that is only tacitly present in an expert's mind. For instance, an obvious limitation we foresee in using the findings of this community resides in the limited capacity of their models both to acquire knowledge in a generic way and to cover, in a detailed way, a dynamically moving breadth of known things in a mono- or multi-domain perspective.

Conceptual mapping techniques and their modes of representation of unstructured knowledge [13] constitute a complete field of research activity, spanning both education sciences and artificial intelligence. As a result, we can observe various techniques like web-pads or mind-maps of specific domains [14] established within this community. Although the approach has been proven useful for education purposes and tested in pedagogical situations, such models still have to prove their relevance in industry, where speed and the contradictory aspect of several experts' beliefs need to be taken into consideration.

A novel community, namely that working on Computer Aided Innovation software, can also be considered. Their findings are diverse, depending on the company philosophy behind them. For instance, the best known is certainly Invention Machine's Goldfire Innovator product and its "cause and effect" model. The graphical aspect is ergonomic, and its interpretation and use are rather simple. Nevertheless, the simplicity of highlighting a "core problem" obviously limits such claims to a reduced typology of (relatively simple) situations. Moreover, we were not able to find in this product the possibility either to implement a new rule for graph interpretation or to link what the model claims to be a "core problem" to any set of contradictions prior to entering the solving aspect of the study.

Finally, within ongoing TRIZ research, several models for initial situation analysis have been proposed [15][16]. Among these results, the OTSM framework has proposed some promising directions, but without a complete, thoroughly described ontology of concepts [17][7]. While we have appreciated the originality of some of these findings, we register our contribution within this field of activities with the aim of further describing (also sometimes differing from OTSM) a complete framework of knowledge acquisition, representation and manipulation, useful for inventive problem-solving concerns.

3 DESCRIPTION OF THE PROPOSED MODEL

3.1 Key terms of our approach
In this section we summarize and illustrate the main definitions associated with the key terms used in our process. For a better understanding of the concepts and their interrelations, readers may refer to figure 5 in the case study section.

Problems
A problem is expressed as a sentence (<…> + <…> + <…>) reduced to its essentials. A single idea is to be contained in its definition. In the network, and beyond its syntactic form, a problem (in the sense we give to it) describes a situation where an obstacle prevents a progress, an advance, or achieving what has to be done.

Generic aspect of a problem
As remarked in the definition of a problem, its expression must first have reached its maximum decomposition. This type of decomposition aims to remove the ambiguities which may occur in a too-generic description containing an unknown number of sub-problems, which could then not be traced to the partial solutions related to them.

Partial Solutions
In its simplest form (To + <…>), a partial solution expresses a result known in the domain and verified by experience. It may materialize tacit or explicit knowledge of one or more members of the design team based upon their past experience, a patent filed by the company or a competitor, or any partial solution known in the field of competence of the members of the design team.

Uncertainty in partial solutions
We want to recall here that a partial solution is supposed to bring the least possible uncertainty about the assertions of its effects on the problem it is attached to. Confusion can occur between a "solution concept" (which is the result of an assumption made by a member) and a partial solution, which has been validated by experience, tests, calculations or results known and verified. This distinction is important because any ambiguity inserted in the network would lower the relevance of the working hypotheses taken from the interpretation of this network.



Contradictions
A contradiction (figure 1) includes three types of components: the elements, the parameters and the values.


Elements
The elements are the constituents of a system. From a syntactic viewpoint, they may be names or groups of names or nouns (for example: "the hammer drives the nail", E = hammer). The nature of the elements can change at any time, based on the description given from a certain viewpoint. Thus "the hammer drives the nail" may become "the anvil pushes the nail" when expressed by another expert; in that case E = anvil. For a third expert, "the man pushes the nail"; in this case E = man. When identical situations are described from divergent points of view, it is important to organize a consensus by forcing the reformulation within the meaning of fundamental physics and of the systemic decomposition previously made when starting the study.

Parameters
Parameters describe elements by assigning them a specificity, which reflects explicit knowledge of the area observed. They are mainly names, objects or adverbs. The form of expression is diverse, and sometimes contradictory when expressed by different experts. We distinguish two categories of parameters:

• Active Parameters (AP): parameters whose state the designer has the power to modify (the designer can choose to design an anvil having a heavy volume or a light one; in this case volume = AP). This type of parameter generally has two directions, each of which can potentially result in positive impacts on the object or its super-system.
• Evaluating Parameters (EP): the nature of these parameters lies in their ability to evaluate both the positive and negative results of a designer's choice. The consequence of designing an anvil having an important mass is that its ease of driving is improved (in this case ease of driving = EP). This type of parameter often has a logical direction of progress (its positive direction seems obvious) while the other direction seems absurd.

Values
Values are mostly adjectives used to describe a parameter (the volume of the anvil should be heavy; in this case V = heavy). Note that the fundamental aspect of the concept of contradiction, when expressed at a physical level, is the qualitative difference between the values of a parameter: if the meaning induced by the adjective associated with V leads to positive aspects, then it is essential (in order to complete a contradiction) to investigate the adjectives qualifying V's antonyms, to highlight the contradictory aspects of the analysis and then to validate it or not. As a first step, and for practical reasons, we choose to limit the values of V to pairs consisting of an adjective and its antonym. Thus, a heavy anvil volume leads to an ease of driving, while a light anvil volume results in an ease of manipulation; in this case the pair chosen for V is heavy / light.
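To make the element / parameter / value structure concrete, here is a minimal sketch in Python (our illustration; the class and field names are ours, not the paper's), using the anvil example from the text:

```python
# Minimal sketch of a contradiction: one element, one active parameter (AP)
# with an adjective/antonym value pair, and the evaluating parameter (EP)
# that each value of the pair satisfies.
from dataclasses import dataclass

@dataclass
class Contradiction:
    element: str        # constituent of the system, e.g. "anvil"
    ap: str             # active parameter the designer can set, e.g. "volume"
    value: str          # adjective, e.g. "heavy"
    anti_value: str     # its antonym, e.g. "light"
    ep_for_value: str   # EP satisfied when AP takes `value`
    ep_for_anti: str    # EP satisfied when AP takes `anti_value`

    def statement(self) -> str:
        return (f"The {self.ap} of the {self.element} must be {self.value} "
                f"to satisfy '{self.ep_for_value}' and {self.anti_value} "
                f"to satisfy '{self.ep_for_anti}'.")

tc = Contradiction("anvil", "volume", "heavy", "light",
                   "ease of driving", "ease of manipulation")
print(tc.statement())
```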


[Figure 1 shows contradiction TCn.m as a generic table: Active Parameter APn with value Va satisfying Evaluating Parameter EPx, and with the opposite value V̄a satisfying Evaluating Parameter EPy.]

Figure 1: Generic table of a contradiction (from the TRIZ viewpoint)

3.2 Construction of a network of problems / partial solutions
The main foreseeable problem has been pointed out by [18]: consultation with experts is effective because it allows the problem space and the solution to be negotiated interactively, whereas computer-based systems simply offer passive data. Our process of building a network of problems / partial solutions is iterative and passes through a set of questions and answers between the facilitator and the members of a design team. The entry point of the questioning can be the problem that, according to the participants, appears the most critical to expert awareness. This mode of entrance into the network may seem arbitrary. Nevertheless, we do not intend here to describe a single problem, but to enter the problem space to be formalized through a specific problem (one sub-problem among others) and to discover its immediate surroundings (immediately related problems) until a satisfactory level of space coverage is reached. Here, the notion of "problem space" has to be understood as the sum of interconnected problems sufficient to completely describe the initial problematic, while each problem has to be taken as an equivalent explanation clarifying a specific part of the overall problematic. In order to be complete, a problem space must be complemented by partial solutions. The sum of partial solutions can also be called a "partial solution space", interacting with the problem space. The ending point of the domain clarification is generally observed when the participants (experts) have expressed what they had to say on the subject (the parts of their knowledge regarding the problematic situation) and when it can be observed, several times, that any new input (new problem or new partial solution) seems similar to previous ones already expressed. A saturation of problem elicitation by the experts is then reached, symptomatic of a space where most of what we wanted to represent has been revealed (a sketch of this stopping rule is given below). The next paragraphs describe how the networks may be graphically constituted (see table 1) and iterated over time, offering the possibility to add, remove or change any data as it appears.
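The saturation criterion above can be read as a simple stopping rule. The following is a minimal sketch of that reading (our illustration; ask_expert and is_similar are assumed helper functions, not part of the authors' procedure):

```python
# Elicitation loop: keep collecting problems / partial solutions from the
# design team and stop once several consecutive inputs only restate items
# already present in the network (the "saturation" symptom described above).
def elicit_until_saturated(ask_expert, is_similar, patience=3):
    """ask_expert() returns the next statement or None when experts are done;
    is_similar(item, network) decides whether it duplicates known content."""
    network, similar_streak = [], 0
    while similar_streak < patience:
        item = ask_expert()
        if item is None:              # nothing left to say on the subject
            break
        if is_similar(item, network):
            similar_streak += 1       # another near-duplicate input
        else:
            network.append(item)      # genuinely new node
            similar_streak = 0
    return network
```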

3.3 Maintenance and monitoring of the network data
It is acknowledged that companies allow little time for problem formalization in the early stages of a project. Therefore, our goal is to obtain and maintain as much information as we can within the minimum time allowed for the project. In various past situations encountered in companies, it was hardly possible to go beyond three or four meetings for the constitution of problem networks. The topic of this paragraph is therefore the activity of maintaining a network of problems / partial solutions through a series of three or four consecutive sessions.

All elements (problems, partial solutions, links) placed in the network during the first meeting are drawn in black / solid lines only when validated by all participants. Before any validation (when a conflict between two or more participants appears), the feature is drawn in the same colour but with a dotted line. Between the first meeting and the second, any additional suggestion by a member of the design team may be integrated into the network, but in the colour of the second meeting and with a dotted line. During the second meeting, we therefore begin working on the black / solid elements (what has already been approved at previous meetings), and afterwards on the dotted elements in the second meeting's colour placed in the network between meetings by one or more participants. The task of the second day thus consists in the transition from dotted elements to solid lines (validated by the group) and / or the addition of new elements, whose state varies from solid to dotted lines according to whether they have been co-validated by the design members.

3.4 Standard situations
From Problems to Partial Solutions
Any problem stated in the problem space and related to one or more experiences having led to an acknowledged result gives rise to a partial solution. The nature of the relationship between the problem space and the partial solution space can be interpreted as "one can". Example: PB1: Thermal expansion generates an uneven roll profile; "one can" PS1: Create a concave roll in cold situations.

From Partial Solutions to Problems
Any partial solution provoking no subsequent problem virtually suppresses the existence of the problem it addresses. When the implementation of a partial solution creates new problems, a link from the partial solution space to the problem space is created. This link can be interpreted as "but then". Example: PS1: One can create a concave roll in cold situations; "but then" PB2: Strip deviation is observed at start-ups.

Links between Problems
A chain of several successive problems can be created. Such a sequence means that the appearance of one problem is generated by others. This type of representation is to be used with precaution, since if a problem disappears, all subsequent problems disappear as well. Such statements are subject to precautions before being placed in the network of problems. Example: PB1: Rolls are deformed by thermal expansion "and thereafter" thermal expansion generates an uneven roll profile.

Links between Partial Solutions
A chain of several partial solutions succeeding each other is to be considered with precaution. A partial solution following another signifies that the previous one has not solved the whole problem. If not, the new partial solution probably solves another problem, either already present in the network or needing to be formalized. Example: PS1: One can create a concave roll in cold situations "and thereafter" PS2: One can create a convex roll in hot situations. Note: such situations can underline the necessity to disclose problems (if they were not mentioned by the experts before). Our example can, for instance, underline the necessity to disclose the following relation: PS1: One can create a concave roll in cold situations "but then" PB3: There is a necessity to have a stock of rolls.

3.5 Particular Cases
AND operators
At least two partial solutions may need to be associated to partially solve a problem (if one of them is removed, the remaining links no longer hold). In this case the lines joining problems and partial solutions converge in the equivalent of an "AND" cell. Note that this situation can also be used in reverse, between problems and partial solutions.

OR operators
A partial solution generates alternatively one problem or another (but not both at the same time). In such cases a line coming out of this partial solution enters an "OR" cell whose output is connected to the alternatively generated problems.

Problems only partially solved
A partial solution may only partially solve a problem without creating new problems. In this case a dashed (axis) line is created, indicating that this partial solution only partially solves the problem (the problem remains despite the existence of a partial solution reducing its effects). A minimal data-structure sketch of this formalism is given below.
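As announced above, here is a minimal data-structure sketch of the network formalism (our names, not the authors' software): problems and partial solutions as nodes, with typed links for "one can" / "but then" / partial influence and cells for the AND / OR operators.

```python
# Problems / partial solutions network with typed links, reusing the rolling
# mill examples from section 3.4. The `validated` flag mirrors the dotted
# (proposed) versus solid (group-validated) drawing convention of section 3.3.
from dataclasses import dataclass, field
from enum import Enum

class Kind(Enum):
    PROBLEM = "Pb"
    PARTIAL_SOLUTION = "Ps"

class Link(Enum):
    ONE_CAN = "one can"            # problem -> partial solution
    BUT_THEN = "but then"          # partial solution -> new problem
    PARTIAL = "partially solves"   # problem persists despite the solution
    AND = "AND"                    # all grouped antecedents are required
    OR = "OR"                      # alternatives, never simultaneous

@dataclass
class Node:
    label: str
    kind: Kind
    validated: bool = False
    edges: list = field(default_factory=list)  # (Link, Node) pairs

    def connect(self, link: Link, target: "Node") -> None:
        self.edges.append((link, target))

pb1 = Node("Thermal expansion generates an uneven roll profile", Kind.PROBLEM)
ps1 = Node("Create a concave roll in cold situations", Kind.PARTIAL_SOLUTION)
pb2 = Node("Strip deviation is observed at start-ups", Kind.PROBLEM)
pb1.connect(Link.ONE_CAN, ps1)    # PB1 "one can" PS1
ps1.connect(Link.BUT_THEN, pb2)   # PS1 "but then" PB2
```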


Graphical representations | Definitions
PBn | Problem
PSn | Partial solution
PBn → PSn | "one can" link type
PSn → PBn | "but then" link type
PSn ⇢ PBn (dashed) | Signifies that the problem is only partially influenced by this partial solution
(from Pb or Ps) → [&] → (to Pb or Ps) | "AND" cell
(from Pb or Ps) → [>1] → (to Pb or Ps) | "OR" cell
AP | Active Parameter
EP | Evaluating Parameter
a | Value (adjective) of an Active Parameter
ā | Opposite value (adjective's antonym) of an Active Parameter
TCn.m (grouping) | Signifies that this group forms a contradiction

Table 1: Graphical representations and their definitions

4 SYNTHESIZING CONTRADICTIONS OF A GIVEN DOMAIN

4.1 Knowledge location
As we have already mentioned in the introduction, the necessary sources for conducting a mapping of the problem space are twofold:
• They can be included within textual corpuses (compiled in various documents such as patents, internal reports, lists of requirements, …).
• They can be tacitly or explicitly present in experts' minds but not written down anywhere.
The first case will not be discussed in this article but will be the purpose of a further contribution. Regarding the second case, the first task is to organize the exchange between experts in order to extract elements of their knowledge that fit appropriately into our network formalism. Their respective knowledge is thereby thoroughly recorded and co-validated by the members of the design team. During this questioning, the networks of problems and partial solutions are jointly constructed.

4.2 Links between problems / partial solutions networks and the contradiction network
In our industrial experience in applying such networks, it is often apparent that each problem (when formulated as described above) may be linked to one (or several) evaluating parameters. The partial solutions, in their turn, may be linked (or give rise) to one or several active parameters. By organizing a formal relationship, when possible, among problems, partial solutions and parameters, we obtain a set of links between the networks of our explored domain.

Our common goals in the synthesis of the contradictions of an area are as follows:
• To transform key problems into the contradiction format, since we know that TRIZ uses contradictions as a base for starting the heuristics of its tools and techniques. To reveal all relevant contradictions arising from the key problems thus remains a primary objective.
• To choose, among a coherent and consistent set of contradictions, the smallest number of single contradictions having the highest impact on the problem network within the context of corporate objectives (to remove a maximum of key problems).

In order to preserve coherence with TRIZ fundamentals, let us keep in mind that a contradiction is an obstacle that stands ahead of the artefact on the laws of evolution it is supposed to follow. The identified contradictions must record their possible links with these laws if such links were expressed during the study. Otherwise, using the formulation of evolution hypotheses may facilitate the identification of these links [1].

By encouraging the emergence and the gathering of parameters, we achieve an important step in problem formulation. The next paragraphs synthesize some of our procedures.

4.3 The sources enabling the emergence of parameters

There are three sources that facilitate the emergence of parameters prior to the synthesis of contradictions:

• Multi-screens (figure 2), especially the transitions from past to present in the system, super-system and sub-system screens.

[Figure 2 locates, in the sub-system / system / super-system screens across past, present and future: the parameters that have been evolving negatively (Parameter 1 … Parameter n), the parameters that have been evolving positively (Parameter 11 … Parameter m), and the parameters emerging as important when cross-observed with the laws of engineering system evolution.]

Figure 2: Location of parameters extracted from multi-screen scheme analysis

• Discussions in relation to the laws of engineering system evolution (see figure 3); at this point the advantage is to be able to directly record the links between the parameters and the laws observed.

[Figure 3 locates Altshuller's laws (law 1; law 2; law 3; laws 4 and 5; laws 6, 7 and 8) along an « S » curve plotted against time.]

Figure 3: Summary of Altshuller's laws location along the « S » curve scheme

• The ENV template (figure 4, from OTSM-TRIZ) [7][17] reveals the missing parameters when ensuring the completeness of the poly-contradiction model.

[Figure 4 shows the ENV template: for Element 1 … of the technical system, "If <active parameter AP1> is <value Va1 / its opposite> then <evaluating parameters EP1 … EPn>".]

Figure 4: Template for ENV diagram completion (after OTSM)

Let us note that there is a high probability that the nature of the knowledge expressed by experts will not appear in the form of the template proposed in figure 4. Indeed, few experts are used to formulating both sides of a contradiction, since traditionally a single side of a contradiction is expressed. Nevertheless, through this single-side formulation, we propose to enter into our formalism with the aim of systematically questioning experts in a reverse way to highlight the opposite side of the contradiction. In case of the impossibility of finding an inverse positive situation, there might not be any contradiction attached to this AP. In other cases we can either reveal a new EP or a link with an existing one.

4.4 Links between contradictions to form a network
We have observed, within solving processes, that when contradictions having the same active parameter were considered, the solution concepts generated by design members were likely to guide the thinking process towards similar categories of ideas. This creates a limitation in the scope covered by the solutions. Conversely, when a similar couple of EPs is considered, a solution concept impacts unexpected contradictions, since we did not engage the solving process through them. As a consequence, and in order to be able to compute and observe the consequences of a specific solution concept (useful, for instance, in R&D decision making), links between contradictions having the same pair of EPs can be created and sorted according to whether their root problems are sorted the same way (a sketch of this grouping is given below).
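This grouping rule can be read as a key-based clustering on the EP pair. Here is a minimal sketch (our notation; the second contradiction and its "roll stiffness" parameter are hypothetical, added only to show a shared EP pair):

```python
# Group contradictions by their (unordered) pair of evaluating parameters,
# so that the impact of one solution concept can be traced across all
# contradictions sharing that pair.
from collections import defaultdict

def group_by_ep_pair(contradictions):
    """contradictions: iterable of dicts with keys 'ap', 'ep_a', 'ep_b'."""
    groups = defaultdict(list)
    for tc in contradictions:
        key = frozenset((tc["ep_a"], tc["ep_b"]))  # order-insensitive pair
        groups[key].append(tc)
    return groups

tcs = [
    {"ap": "surface geometry", "ep_a": "lateral movements", "ep_b": "fold appearance"},
    {"ap": "roll stiffness",   "ep_a": "fold appearance",   "ep_b": "lateral movements"},
]
for pair, group in group_by_ep_pair(tcs).items():
    print(sorted(pair), "->", [tc["ap"] for tc in group])
```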

5 CASE STUDY: THE CONTINUOUS ANNEALING PROBLEM

5.1 Problem statement and decisions
Steel material hardens after cold rolling due to the dislocation tangling generated by plastic deformation. Annealing is therefore carried out to soften the material. The continuous annealing process comprises heating, holding of the material at an elevated temperature (soaking), and cooling. Heating facilitates the movement of iron atoms, resulting in the disappearance of tangled dislocations and in the formation and growth of new grains of various sizes, which depend on the heating and soaking conditions. These phenomena make hardened steel crystals recover and re-crystallize, and thereby soften. This type of annealing involves uncoiling and welding strips together, passing the welded strips continuously through a heating furnace, and then dividing and re-joining the strips. The total length of the strip in the line is approximately 2,000 m, while its travel speed is about 200 to 700 m/min for a strip of 0.15 mm in thickness (a maximum speed of 1,000 m/min is still possible). To operate such lines, speed control, tension control and tracking control of the strip are necessary, in addition to a high level of automatic temperature and atmosphere control. Our company partner has observed for several years that an optimum situation is reachable among these parameters, but strip defects are observed and regularly provoke line interruptions. Line interruptions are provoked mainly by the thermal situation within the furnace.


The observed thermal expansion of the rolls (which transport the strip) is unevenly distributed along their volume, resulting in two different situations:
• Lateral strip movement due to the non-perpendicular velocity of the strip relative to the roll axis. As a result, the strip hits the furnace and gets degraded.
• The formation of thermal folds, depending directly on strip traction, which makes it necessary to stop the process, remove the damaged strip either partially or completely, and restart the production line.

5.2 Partnership process as it has been engaged
The partnership consisted in proposing a technologically validated solution to these recurrent problems, taking into consideration all existing attempts already tested (both partial successes and failures) and the competitors' known solutions (mostly observable through patents). To conduct this partnership, we divided the sessions allowed for the project into four parts:
• Questioning the company's experts during four sessions of about 5 hours each in order to compile their problems and partial solutions using our network formalism.
• Highlighting a key problem and decomposing it into a set of contradictions.
• Treating a reduced set of contradictions and listing a limited number of solution concepts, using TRIZ tools to solve them inventively.
• Engaging a technical description and calculation proofs to highlight that a specific solution concept is worth investing R&D funds in for its deployment.

Figure 5 partially illustrates the interaction between networks and summarizes the whole process in a global graphical representation.

[Figure 5 depicts the interacting networks: the problem network (PbN) with Pb1: "Rolls are deformed by thermal expansion and create an uneven traction profile" and Pb2: "Strip deviation is observed at start-up"; the partial solution network (PsN) with Ps1: "Create a concave roll in cold state situations" and Ps2: "Place a piston in the roll to compensate thermal deformation"; the contradiction network (TCN) with TC1.1: "<…> of <…> must be both <…> in order to satisfy <…> and <…> in order to satisfy <…>"; and the parameter network (PN) with AP1: surface geometry, EP1: lateral movements, EP2: fold appearance.]

Figure 5: Partial graphical representation of the example used to illustrate our approach

5.3 Conclusion regarding the case study
Our proposed approach was evaluated by the participants after a final meeting with R&D decision makers and research managers. Among others, several points were expressed by the participants of the workshops. The detailed aspect of the problem analysis was well appreciated and appeared new compared to the traditional project processes commonly practiced within the company. It was also evaluated as a good capitalization of the experts' actual knowledge. The original "profile" of the solution concepts was also pointed out, with a twofold aspect:


• A reduced number of solution concepts compared to the classical brainstorming sessions already organized on this problem.
• The novelty of these solution concepts, since at least one fourth of them had never been found by any past workshop related to this problem. The "simplicity" of several solutions, as well as their character (new, inexpensive, easy to test, easy to manufacture), was highly appreciated.

In brief, the case results have convinced the team that our approach can reduce the population of useless R&D attempts through a better mastering of the overall problematic. But even if some solutions have been proven to be simple, cost-effective and technologically feasible, they still need to be validated through an "on line" experiment while being fully technically developed. This perspective has been drawn up by the decision makers and will start in the near future. Its main aim is to finalize the study with a more detailed return-on-investment balance in order to convince managers of the financial effectiveness of the solution concepts obtained from our model when appropriately introduced within the company's practices and thoroughly conducted by trained animators.

6 DISCUSSIONS ON THE PROPOSED MODEL

6.1 Strong points and novelty of the proposed approach

From an inventive design perspective, it had never been clearly proposed to link the problem statement with the contradiction formalism. Here, we have proposed and tested that problems, when formulated and recorded in their simplest form, can easily be linked with EPs, in the same way as partial solutions may be linked with APs. We were led toward this assumption by observing that experts tacitly evoke parameters of an "evaluating nature" when qualifying their expressed problems, and parameters of an "active nature" when evoking partial solutions. As a result, we can draw the assumption that a large part of the relations between the problem and contradiction networks can be built automatically during the sessions. These links are crucial for R&D decisions since, when entering a solving mode, contradictions are considered and solved. As a result, solution concepts and problems are linked through contradictions, which eases the visualization of how those solution concepts may impact the initial problem statement. At this stage we can only emit "working hypotheses", since what is proposed is only an automatic interpretation of what has been compiled during the problem statement and problem solving stages; but these working hypotheses are traceable and therefore increase the confidence of R&D decision makers in their choices.

6.2 Limits and short-term perspectives related to our model
The limits of our model are similar to what many other researchers have already pointed out [19]. The time required to record all the data for a relevant use is considerable and supposes significant effort (at least time) from the company experts. The time spent capturing their know-how and translating it into exploitable data directly raises the question of the use of such a model in an industry constantly in search of time savings. We have also observed that many TRIZ experts intuitively converge on a reduced set of contradictions in a very limited time. Hence, one of our ongoing perspectives resides in the comparison of intuitive expert techniques (fast ones) with the systematic and procedural one proposed in this article. It will also be interesting to evaluate the relevance of an intuitive expert choice and clearly state its value. This research will be performed in order to evaluate the relevance of our model and to understand what separates or associates our results from or with expert practices.
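The automatic linking sketched in 6.1 can be illustrated as follows (a minimal sketch, our illustration; the affects predicate stands for the expert-validated relation between an AP and an EP, and is deliberately permissive here):

```python
# Propose contradiction candidates: every active parameter (from partial
# solutions) that influences two distinct evaluating parameters (from
# problems) is flagged as a candidate contradiction to be validated.
from itertools import combinations

def candidate_contradictions(problem_eps, solution_aps, affects):
    """problem_eps: {problem: EP}; solution_aps: {partial solution: AP};
    affects(ap, ep) -> True if choices on AP influence EP."""
    candidates = []
    for ap in set(solution_aps.values()):
        touched = sorted({ep for ep in problem_eps.values() if affects(ap, ep)})
        for ep_a, ep_b in combinations(touched, 2):
            candidates.append((ap, ep_a, ep_b))
    return candidates

eps = {"uneven roll profile": "fold appearance",
       "strip deviation at start-up": "lateral movements"}
aps = {"concave roll in cold situations": "surface geometry"}
print(candidate_contradictions(eps, aps, lambda ap, ep: True))
```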

7 CONCLUSIONS AND PROSPECTS

An ever-growing number of industries are affected by the need to formalize their innovation strategy. In this context, tools from the quality area have shown their limits, as have creativity-assisting approaches derived from brainstorming. One of our research results is to have highlighted several limits of TRIZ and identified some potential areas of its development. We now conclude that it is timely to investigate the problematic of software support for expert practices in an inventive design context. We have built prototypes of such tools, enabling designers to go beyond the current limits of TRIZ. The purpose of this software prototype is to structure experts' approaches in the frame of inventively considering complex situations in the design of artefact evolution. The procedures built, when tested on real industrial situations, have also proved their usefulness in assisting R&D decisions. For improving the exhaustiveness and the speed of knowledge gathering, we have also investigated specific text-mining procedures to find and collect data contained in documents related to the covered field (patents, specifications, papers, …), in order to populate our graphical representation and assist the formulation of the key problems of a given domain within the meaning of TRIZ. Our aim, once a complete system is finished, is to be able to claim that all problems mapping a specific situation in a specific domain, being co-constructed and co-validated, may assist decision makers in their choice to engage relevant inventive activities in accordance with the context of their corporate objectives. The traceability and relevance of these choices, controlled by our approach, will then be based on a coherent analysis rather than an intuitive one.

8 REFERENCES

[1] Cavallucci, D., Rousselot, F., 2007, Evolution Hypothesis as a means for linking system parameters and laws of engineering system evolution, 7th ETRIA International TRIZ Future Conference (TFC2007), Nov. 6-8, Frankfurt, Germany.
[2] Zanni, C., Cavallucci, D., Rousselot, F., An Ontological Basis for Computer Aided Innovation, Special Issue of Computers in Industry "Computer Aided Innovation", ISSN 0166-3615, Elsevier (paper submitted April 2008).
[3] Cavallucci, D., Rousselot, F., 2007, Structuring Knowledge Use in Inventive Design, in Trends in Computer Aided Innovation, Springer, ISBN 9780387754550.
[4] Miles, J.C., Moore, C.J., Hooper, J.N., 1990, A structured multi-expert knowledge elicitation methodology for the development of practical knowledge based systems, IEE Colloquium on Knowledge Engineering, pp. 6/1-6/3, 15 May 1990.
[5] Mertins, K., Jochem, R., 2005, Architectures, methods and tools for enterprise engineering, International Journal of Production Economics, 98/2: 179-188.
[6] Rudman, C., Engelbeck, G., 1996, Lessons in choosing methods for designing complex graphical user interfaces, in M. Rudisill, C. Lewis, P.B. Polson and T.D. McKay (eds), Human Computer Interface Design: Success Stories, Emerging Methods, Real-World Context, Morgan Kaufmann, San Francisco: 198-228.
[7] Cavallucci, D., Khomenko, N., 2007, From TRIZ to OTSM-TRIZ: Addressing complexity challenges in inventive design, International Journal of Product Development (IJPD), 4/1-2: 4-21.
[8] Cavallucci, D., Eltzer, T., 2007, Parameter Network as a mean for Driving Problem Solving Process, International Journal of Computer Applications in Technology, 30/1-2: 125-136.
[9] Altshuller, G.S., 1986, To Find an Idea: Introduction to the Theory of Inventive Problems Solving, Nauka, Novosibirsk.
[10] Cavallucci, D., De Guio, R., Engager les activités de conception dans des voies inventives pertinentes: l'apport de la TRIZ, in Systèmes techniques, lois d'évolution et méthodes de conception, Hermès (forthcoming in 2008).
[11] Razgon, I., 2006, Complexity analysis of heuristic CSP search algorithms, Recent Advances in Constraints, Lecture Notes in Artificial Intelligence: 88-99.
[12] Antonsson, E.K., Cagan, J., 2001, Formal Engineering Design Synthesis, Cambridge University Press, ISBN 0-521-79247-9, 1st edition, 523 p.
[13] Traczyk, W., 2005, Structural representations of unstructured knowledge, Journal of Telecommunications and Information Technology (JTIT): 81-86.
[14] Oxman, R., 2004, Think-maps: teaching design thinking in design education, Design Studies, 25/1: 63-91.
[15] Souchkov, V., Bolckmans, K., 2007, Selecting Contradictions for Managing Problem Complexity, Proceedings of the TRIZ Future Conference, November 6-8, 2007, Frankfurt, Germany.
[16] De Feo, J., Bar-El, Z., 2002, Creating strategic change more efficiently with a new Design for Six Sigma process, Journal of Change Management, August 2002.
[17] Khomenko, N., De Guio, R., Lelait, L., Kaikov, I., 2007, A framework for OTSM-TRIZ based computer support to be used in complex problem management, International Journal of Computer Applications in Technology, 30/1-2: 88-104.
[18] Kidd, A.L., 1985, The consultative role of an expert system, in People and Computers: Designing the Interface, Proceedings of the Conference of the BCS HCISG, Cambridge University Press.
[19] Geer, D. Jr., 2005, The Problem Statement is the Problem, IEEE Security & Privacy, 3/2: 80.

Achieving Effective Innovation Based On TRIZ Technological Evolution
J.G. Sun 1, R.H. Tan 1, G.Z. Cao 1
1 School of Mechanical Engineering, Hebei University of Technology, Hongqiao District, Tianjin, China
[email protected]

Abstract
This paper outlines the conception of effective innovation and discusses a method to achieve it. Effective Innovation is constrained to the path of technological evolution, so the corresponding path must be detected before the conceptual design of the product. The process of a product's technological evolution is a technical development process in which the product approaches the Ideal Final Result (IFR). During this process, sustaining innovation and disruptive innovation alternate. By researching and forecasting potential techniques using TRIZ technological evolution theory, effective innovation can finally be achieved.
Keywords: Effective Innovation, TRIZ, Disruptive Innovation

1 INTRODUCTION

1.1 Motivation and overview
As the economy becomes more global and increasingly competitive, innovations that increase productivity and quality while reducing costs and cycle times command the attention of firms' managers, and the rate of this change is accelerating. Between the years 1963 and 2004 the United States Patent and Trademark Office (USPTO) granted 3.7 million utility patents [1]. But not all of the innovations in patents could achieve the above effects, owing to pitfalls such as unimportant needs, poor technological practicality and bad market prospects. Effective Innovation is the opposite of Null Innovation. An innovation termed an Effective Innovation must satisfy two conditions. First, the innovation must be technologically feasible and able to solve the technical contradictions that have not been solved in previous designs. Second, it must have the market potential to develop into mainstream products, and firms must be able to profit from it.

Figure 1: The effective innovation on the path of technical evolution

As shown in figure 1, Effective Innovation is located near the actual path of product evolution during the entire


process of product evolution. An innovation that deviates from the path of product evolution and has no commercial value is called a Null Innovation. So the aim of product innovation development is to achieve Effective Innovation and avoid Null Innovation. How can Effective Innovation be achieved? We must research the path of product evolution, forecast the potential technique correctly, solve the technical contradictions using TRIZ theory, and then turn the potential technique into an Effective Innovation.

1.2 Background
Don Clausing and Victor Fey (2004) first put forward the conception of effective innovation. In their view, all inventions are divided into three parts: launch inventions, growth inventions and library inventions. Effective innovation is achieved in the former two parts after six steps, as follows [2]: technology strategy; concept generation; concept selection; robustness development; technology readiness; technology transfer. The first step is very important because a failure there would lead to the failure of the later five steps, and Don Clausing gives some strategies based on TRIZ for this first step. "TRIZ" is the (Russian) acronym for the "Theory of Inventive Problem Solving". G.S. Altshuller and his colleagues in the former U.S.S.R. developed the method between 1946 and 1985. TRIZ is an international science of creativity that relies on the study of the patterns of problems and solutions, not on the spontaneous and intuitive creativity of individuals or groups. More than three million patents have been analyzed to discover the patterns that predict breakthrough solutions to problems [3]. TRIZ mostly includes the forecast of technology maturity, technology evolution, contradiction solutions, effects, standard solutions, ARIZ, etc. CAI software [4] based on TRIZ has been developed recently. All kinds of methods in TRIZ can be used either separately or together, so that different problems in invention can be solved [5].

The process of a product's technological evolution is a technological development process in which the product approaches the Ideal Final Result (IFR). The IFR is the absolutely best solution of a problem for the given conditions, proposed by Altshuller and Shapiro in the 1950s [6]. Sustaining innovation and disruptive innovation alternate in this process (shown in figure 2). Sustaining innovation can be achieved by using traditional design theory combined with TRIZ inventive principles. Disruptive innovation is divided into new-market disruptions and low-end disruptions. The former avoids the evolution imbalance of the product's technical system caused by long-term sustaining innovation; the latter avoids the surplus over users' needs caused by long-term sustaining innovation. The coordination of the two disruptive innovations and sustaining innovation impels products to develop towards the IFR. Therefore, the most important task of the innovation process is to determine whether the innovation is a sustaining innovation or a disruptive one. If it is the latter, it is very important to distinguish between new-market disruption and low-end disruption. It is possible to achieve Effective Innovation only after completing the two correct choices above. Under these constraints, we can forecast the potential technique by using TRIZ technological evolution theory and then achieve Effective Innovation.

Figure 2: Path of products' technical evolution

1.3 Objectives
This paper has four objectives. The first is to present a methodology by which Effective Innovation may be achieved. The second is to present the principles derived from this methodology. The third is to research a method for forecasting latent technologies. The last is to illustrate a method for applying these principles through the use of a product development case study.

Figure 3: The Effective Innovation process.

2 METHOD FOR THE EFFECTIVE INNOVATION PROCESS

Now we will delve further into the Effective Innovation process. It includes three parts:
Part 1:
1. Project selection
2. Function analysis
3. IFR definition
4. Decomposition of the technological system
5. Technological evolution analysis
Part 2: Before forecasting technologies, there are two judgement problems: Are the customers' needs over-satisfied? Is the technological system's evolution unbalanced? These questions determine the type of innovation: low-end


disruptive innovation, new-market disruptive innovation or sustaining innovation. After that, according to the features of the different innovations, latent technologies are forecasted based on TRIZ technological evolution theory.
Part 3: Managers need to understand the feasibility of the obtained technologies. To achieve this objective, a robustness evaluation of the obtained technologies is carried out. If the result is not ideal, the forecasting process is carried out anew, selecting a different TRIZ technological evolution path, until an ideal robustness evaluation is obtained. Then the following four steps proceed:
1. Technical design
2. Detailed design
3. Blueprint
4. Put into production
This research methodology is illustrated in figure 3, and each of its stages is detailed further in the following sections.

2.1 Project selection
In the field of new product development (NPD), project selection is currently a topic of much interest in industrial communities. We consider the conception of an NPD project as the new product design project translating customer requirements into a product definition and a manufacturing process definition [7]. The presence of various kinds of uncertainty is one of new product development's main characteristics, making its selection quite a challenge. These uncertainties make it difficult to foresee the detailed technologies for the new product; thus appropriate technology strategies are very much needed. In order to optimize the product and increase market competitiveness, firms must develop new products according to the market. NPD is a component of the process of technological evolution. During the process, all firms have equal opportunity, but the technological difficulty of NPD differs. Mainstream firms usually have advantages during sustaining innovation and new firms have advantages during disruptive innovation (see figure 4). Hence, it is significant to select NPD strategies based on product technological evolution.

Figure 4: The difficulty of the NPD process for different companies

2.2 Function decomposition
Product design is, in its essence, the transformation from product function to product form. It relies upon the successful gathering of customers' needs and their mapping to a functional model of the product. Functional decomposition, also known as functional modelling, is the process of breaking the overall function of a product into smaller, easily solvable sub-functions. The sub-functions are related by the flow of energy, material or signal passing through the product to form a functional model, known as a function structure [8].

2.3 IFR definition and sub-evolution study
According to the results of section 2.2, function decomposition makes it possible to obtain the IFR of each subsystem. Because all technologies always evolve towards their IFR state, we can obtain different evolution lines for every subsystem separately. Technological evolution is the precondition of product evolution and should proceed towards the IFR; on this basis, technical forecasting can be carried out. TRIZ technology evolution theory provides specific operational methods for technology evolution. A law of technological system evolution describes significant, stable and repeatable interactions between elements of the system, and between the system and its environment, in the process of its evolution. Fey and Rivin [9] reduce the technical evolution laws to nine points, as follows:
1. Law of increasing degree of ideality: evolution of technological systems proceeds in the direction of an increasing degree of ideality.
2. Law of non-uniform evolution of sub-systems: the rate of evolution of the various parts of a system is not uniform; the more complex the system, the more non-uniform the evolution of its parts.
3. Law of increasing dynamism: technological systems evolve towards more flexible structures capable of adapting to varying performance regimes and changing environmental conditions, and towards multi-functionality.
4. Law of transition to higher-level systems: technological systems evolve from mono-systems to bi- or poly-systems.
5. Law of transition to micro-levels: technological systems evolve toward an increasing use of micro-level structures.
6. Law of shortening of the energy flow path: technological systems evolve in the direction of shortening the energy flow passage through the system.
7. Law of completion: an autonomous technological system consists of four principal parts: working means, transmission, engine and control means.
8. Law of increasing controllability: technological systems evolve towards enhancing their substance-field interactions.
9. Law of harmonization of rhythms: the necessary condition for optimal performance of a technological system is coordination of the periodicity of action of its parts.
The laws of technological system evolution give us directions of evolution, but they do not show the details of each direction. There are many technical evolution paths under every law, and a technology evolution line is made up of the different stations through which the technique evolves from junior to senior; these lines also offer a means of technological forecasting (see figure 5).

Figure 5: Technological system evolution model

2.3.1 Analysing customers' needs for sub-functions
According to the research performed by Clayton Christensen [10], the demand for a new marketable product changes in the sequence shown in figure 6.

Figure 6: The development of the need for a new product

With the development of a marketable product, the customer's need will shift from better functioning to cheaper. Thus, for business success, the manufacturer of the product should exactly determine the state of the customer's need. When the development of the product is faster than the growth of customer demand, the customer's need will be over-satisfied, and a low-end disruptive innovation can then be achieved.

2.3.2 Analysing technology evolution for sub-functions
Every sub-function technology is in a state of continuous change in order to bring the product to a higher stage. But the changes of those sub-functions are generally imbalanced: some technologies develop radically, whereas others lag behind. It is usually possible to detect the development status of sub-function technologies by analysing their technological evolution lines. Through improvement of lagged technologies, a new-market disruptive innovation can be achieved.

2.3.3 Technology forecasting
Summing up, when selecting the innovation type of a target product, it is helpful to take into account the following rules to forecast potential technologies (shown in figure 7):
1. For sustaining innovation, the mainstream technologies of products are continually improved; thus the next state of technology on the mainstream technological evolution line is the latent technology we want to forecast.
2. For low-end disruptive innovation, customers are over-satisfied and a simpler technology is needed, so the former state of technology on the mainstream technological evolution line is the latent technology we want to forecast.
3. For new-market disruptive innovation, due to the imbalance of sub-function technological development, technological chances always occur in lagged technologies; therefore the next state of technology on the lagged technological evolution line is the latent technology to be forecasted.

Figure 7: Latent technologies forecasting for different innovations

2.4 Robustness evaluation of latent technologies
By the process above, a new technology works well in ideal conditions, such as in a laboratory. In order to achieve Effective Innovation, the challenge is to make it work well in all future conditions, in other words to make its performance robust. The difference between ideal and actual conditions is called noise [2]: environmental variations, variations in production, and variations as the result of time and use [11]. Under real conditions, the noises will be greater and the product performance will be much worse. As shown in figure 8, for an automatic control system, robustness can be achieved by constructing a feedback control system and maintaining system stability by adjusting control parameters.

Figure 8: Feedback control system diagram for robustness of technological innovation
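Read operationally, Part 2's two judgement questions and the forecasting rules of section 2.3.3 amount to a small decision procedure. The following is a minimal sketch (our formalization, not the authors' software; the controller evolution line is an assumed illustration anticipating the case study of section 3):

```python
# Decide the innovation type from the two judgement questions, then pick the
# latent technology on the relevant evolution line (ordered junior to senior).
def innovation_type(needs_over_satisfied: bool, evolution_unbalanced: bool) -> str:
    if needs_over_satisfied:
        return "low-end"       # over-served customers: disrupt with simpler tech
    if evolution_unbalanced:
        return "new-market"    # improve the lagged sub-function technology
    return "sustaining"        # keep improving the mainstream technology

def forecast(line, current, innovation):
    """For 'sustaining' pass the mainstream line; for 'new-market' pass the
    lagged line. Returns the forecast latent technology state."""
    if innovation in ("sustaining", "new-market"):
        return line[min(current + 1, len(line) - 1)]  # next, more senior state
    if innovation == "low-end":
        return line[max(current - 1, 0)]              # former, simpler state
    raise ValueError(innovation)

controller_line = ["paddle", "D-pad gamepad", "analog gamepad",
                   "3D motion-sensing wireless controller"]
kind = innovation_type(needs_over_satisfied=False, evolution_unbalanced=True)
print(kind, "->", forecast(controller_line, 2, kind))
```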

3 CASE STUDY: WII - A DISRUPTIVE TECHNOLOGY OF VIDEO GAME CONSOLE

3.1 Background
A video game console is an interactive entertainment computer or electronic device that produces a video display signal which can be used with a display device (a television, monitor, etc.) to display a game. Video game technology has progressed tremendously since the dawn of its existence; the first video game system was made over three decades ago, which is a lot of years to cover [12]. In this paper we will discuss how video game consoles have evolved over the years and how to achieve Effective Innovation in the process of game console development.

3.2 Functional decomposition of the game console
A video game console manipulates the video display signal of a display device to display a game. There are about four parts in a game console, as follows [13]:
1. Controllers: video game controllers allow the user to input information and interact with onscreen objects.
2. Power supply: a power supply converts 100-240 volt AC utility power into direct current (DC) at the voltages needed by the electronics.
3. Console core unit: the core unit in a video game console is the hub where the television, video game controllers and game program connect. It usually contains a CPU, RAM and an audiovisual coprocessor.
4. Game software: video game consoles have their programs stored on external media.
Based on the above description, the function structure of a video game console is shown in figure 9; it results from the functional decomposition process of breaking the overall functions into smaller ones. By means of further analysis, one of the four parts, the controllers, is decomposed into a series of human-machine interfaces, such as the game control stick, face expression sensor and microphone. Furthermore, the output devices are decomposed into game picture, game audio, game feeling and game smell. All of the above components constitute the sub-function evolution module system.
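For illustration, the decomposition just listed can be written as a small function tree; a minimal sketch (our encoding of the parts and sub-functions named above):

```python
# Function structure of the game console as a nested mapping; the leaves are
# the sub-functions whose evolution lines are analysed separately in 3.3.
console = {
    "controllers": ["game control stick", "face expression sensor", "microphone"],
    "power supply": ["convert 100-240 V AC utility power to DC voltages"],
    "core unit": ["CPU", "RAM", "audiovisual coprocessor"],
    "game software": ["programs stored on external media"],
    "output devices": ["game picture", "game audio", "game feeling", "game smell"],
}

def sub_functions(tree):
    """Enumerate (part, sub-function) pairs, i.e. the decomposition leaves."""
    for part, subs in tree.items():
        for sub in subs:
            yield part, sub

for part, sub in sub_functions(console):
    print(f"{part}: {sub}")
```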

Figure 9: Function decomposition of the game console

3.3 IFR and sub-evolution generation
According to the results of section 3.2, eight sub-evolution lines are detected. However, game software always develops simultaneously with console hardware and keeps a rigorous co-evolution relationship with it; thus the sub-evolution of game software will not be taken into consideration. By removing the harmful effects and increasing the useful functions, seven IFRs are obtained, providing the opportunity to apply the backward method to detect the detailed technological evolution lines (shown in figure 10). According to the IFRs listed in the figure, combined with the current state of technology, the situation of each sub-evolution can be concluded respectively.

Figure 10: IFR analysis of the game console

3.4 Effective latent technologies forecasting
In this segment of our in-depth look at the evolution of game consoles, we cast our attention on the subset of sub-functions that customers care about, such as game picture, game sound and game controller. The performance of game picture and sound is determined by the game console core unit. As shown in figure 11, each new generation of console hardware made use of the rapid development of processing technology. Newer machines could output a greater range of colours and introduced new graphical technologies. The graphical performance of console hardware depends on many factors. The "bit" count is one way to represent processing power: the bit-value of a console refers to the word length of the console's processor. From 8 bits to 128 bits, console processors have developed greatly during the whole process of their evolution. In addition, two other influencing factors of console hardware are CPU operating frequency and memory size. In past years, great progress has been made in CPU performance and memory size. According to the FCC, the PlayStation 3 passed approval tests with the final clock speed of its Cell CPU verified at 3.2 GHz and with 256 MB of RAM. Thus we can conclude that the technologies of picture and sound are the mainstream evolutionary technologies of the game console.

Figure 11: Technology evolution of video game console main processor.


Compared with the picture and sound performance, the controller of the game console is easily disregarded, although the controller plays an important role in making the game more fun and enjoyable. According to Figure 12, over the last decades the game controller did not make remarkable progress comparable to that of the CPU and memory, until the appearance of the Wii. In conclusion, as listed in Table 1, there are two mainstream technologies and one laggard technology in the game console technological evolutionary system; the technological evolution of the game console system is unbalanced, so a new-market disruptive innovation should occur.

Figure 12: Technology evolution of video game console controller.

Mainstream evolutionary technologies | Laggard evolutionary technologies
Picture display technology of video game system (sub-evolution 4) | Action controller of video game console system (sub-evolution 1)
Audio technology of video game system (sub-evolution 5) |
Table 1: The classification of the technical system evolution of the video game system.

In order to achieve a new-market disruptive innovation, we can reduce the graphics standard and improve the game controller technology. Reducing the graphics standard is easy, but how to improve the controller technology is the important problem. Searching the TRIZ technology evolution lines, the technical evolution of the game controller coheres with TRIZ evolution principle 4, that is, the transition of the technical system to higher-level systems (shown in Figure 13). Following this evolution line, the Wii was introduced in 2006 by Nintendo with a wireless game controller, which can be used as a handheld pointing device and detects movement in three dimensions. With a powerful controller and lower-cost hardware, Nintendo solved the contradiction between higher performance and lower cost, and stated that its console targets a broader demographic than those of Microsoft's Xbox 360 and Sony's PlayStation 3, with which it competes as part of the seventh generation of video game systems.

Figure 13: Forecasting the video game console controller according to TRIZ evolution theory.

Figure 14 shows the whole evolution line of the game console system. Accordingly, we can forecast a potential Effective Innovation process: first, reduce the graphics technological standard and improve the controller standard to achieve a new-market disruptive innovation and occupy the market; then, based on the advanced controller technologies, gradually improve the graphics technology of the game console to sustain product development in the market.

Figure 14: Whole evolution line of video game system products.

3.5 Evaluation of the innovation
Impressively, the responsive Wii controller remains satisfying to use, and players' movements can become more subtle (and less energy-consuming). There is also the classic controller option, and the promise of myriad forthcoming controller shells. The Wii's ridiculously enjoyable titles and innovative, motion-sensitive controllers help make it feel more like a toy you'll want to share with a group of players than a console you'd use strictly on your own for hours on end. Because of the Wii, Nintendo officially became the most successful next-generation game console vendor in terms of introduction sales volume: 600,000 units in North America helped the company to achieve a market share of about 55% in the video game console market. To sum up, the Wii is an Effective Innovation of the game console system.


4 CONCLUSIONS
This paper presents a method for achieving Effective Innovation. Its core component is a forecasting process and method based on analysing the existing evolution principles and evolution lines, combined with the features of disruptive and sustaining innovation. The sub-technologies can be obtained by decomposing the IFR of a given product. Analysing each sub-technology evolution route, and identifying the mainstream innovation technologies and the relatively lagging evolution technologies by using the TRIZ principles, makes the forecasting of latent technologies possible. The method can be used by firms to analyse the technical systems of existing products in the market, develop new technical markets, defeat competitors, and effectively prevent their mainstream products from being defeated by newcomers.

5 FUTURE WORK
Although much progress has been made towards the goal of achieving Effective Innovation and applying the method to the detailed design stage, some problems remain. The study and analysis of latent technologies forecasting should be continued. So far the focus of innovators has been on sustaining innovations, but a thorough analysis of disruptive innovations would also be important and most useful. Further research is also needed to develop a more analytical approach to latent technologies forecasting. The approach followed thus far is rather experimental and depends, to a great extent, on the expertise and subjectivity of the examiner.

6 ACKNOWLEDGEMENT
This research is supported in part by the Natural Science Foundation of China under Grant Number 50675059, the Natural Science Foundation of Hebei under Grant Number F2006000092 and the Scientific Research Program of Hebei under Grant Number 07215602D-2. Any opinions or findings of this work are the responsibility of the authors, and do not necessarily reflect the views of the sponsors or collaborators.

REFERENCES
[1] United States Patent and Trademark Office, All Technologies (Utility Patents) Report, http://www.uspto.gov/web/offices/ac/ido/oeip/taf/all_ch.htm
[2] Clausing, D., Fey, V., 2004, Effective Innovation. New York: ASME Press.
[3] http://www.triz-journal.com/archives/what_is_triz/, July 2008.
[4] Kohn, S., Husig, S., Kolyla, A., 2005, Development of an empirically based categorisation scheme for CAI software. 1st IFIP TC-5 Working Conference on CAI, Ulm, Germany.
[5] Tan, R. H., 2004, Theory of Innovative Problem Solving, Beijing: Science Press.
[6] Altshuller, G. S., Shapiro, R. B., 1956, About the Psychology of Inventiveness, Problems of Psychology, 37/6.
[7] Dragut, A. B., Bertrand, J. W. M., 2008, A representation model for the solving-time distribution of a set of design tasks in new product development (NPD). European Journal of Operational Research, 189:1217-1233.
[8] Stone, R. B., Wood, K. L., 2000, Design Studies, 21/1:5-31.
[9] Fey, V., Rivin, E., 2005, Innovation on Demand. New York: Cambridge University Press.
[10] Christensen, C. M., 1997, The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press.
[11] Taguchi, G., 1993, Taguchi on Robust Technology Development, New York: ASME Press.
[12] http://www.answers.com/topic/video-game-console, July 2008.
[13] http://www.oswego.edu/~mhunt/project2/index.html, July 2008.

Modelling the Product Development Performance of Colombian Companies

M.C. Herrera-Hernandez¹, C. Luna², L. Prada², C. Berdugo² and A. Al-Ashaab³
¹ IMSE Department, University of South Florida, 4202 E. Fowler Ave, ENB 118, Tampa, FL 33620, USA
² Departamento de Ingeniería Industrial, Universidad del Norte, Bloque B, Piso 2, Barranquilla, Colombia
³ Decision Engineering Centre, Manufacturing Department, Cranfield University, Cranfield, MK43 0AL, UK
[email protected], [email protected], [email protected]

Abstract
This paper presents a general model of the Product Development Process (PDP) in the metal mechanics industry in Barranquilla-Colombia, since this sector contributes significantly to the productivity of this industrial city. The case study counted on a five-company sample. The main goal was to model the current conditions of the PDP according to the Concurrent Engineering philosophy. The companies were selected according to their productive profile, in order to contrast differences regarding the structure of their productive processes; the conformation of multidisciplinary teams; the integration of different areas, customers and suppliers into the PDP; and human resources, information, technology and marketing constraints.
Keywords: Product Development Process, Concurrent Engineering, Multidisciplinary Teams, Human Resources, Information, Technology and Marketing.

1 INTRODUCTION
The Product Development Process (PDP) consists of all the activities that a product goes through, from market need and concept design, through engineering and development, manufacturing planning and production, until shipment to the customer. The PDP represents a key process for manufacturing companies to achieve high levels of competitiveness. Decisions made about costs, delivery times and quality improvement during the first stages, such as concept design, impact up to 80% of the performance of the following processes, such as manufacturing, product use and maintenance. As such, the PDP plays a determinant role when designing and implementing differentiation strategies. These strategies imply offering products that are innovative and that simultaneously fulfil needs, considering constraints such as cost, quality and delivery time [1]. A PDP based on Concurrent Engineering (CE) involves the interaction among different areas within the company, such as marketing, design engineering and manufacturing, in order to design a product that satisfies the different aspects of the product life cycle. Design is a key word in the PDP concept: it engages sub-activities for defining specifications, detail design, planning and production [2]. Manufacturing companies should reinforce the PDP as a strategy for integrating innovation with knowledge and technology, thereby achieving a positive impact mainly on quality and time of response [3]. In this paper, the interest is the diagnosis of the current status of the PDP in the metal mechanics industry in Barranquilla-Colombia. The case study is based on the general results of the research project "Design of Product Development Process (PDP) in the Metal mechanics Industry in Barranquilla-Colombia, within the context of Concurrent Engineering (CE)" [4]. This project was funded by the National Research Office COLCIENCIAS and Universidad del Norte, supporting in this way the University-Industry relationship for improving


productive systems and academic knowledge as well. The metal mechanics sector was selected for the study given that, as reported by Proexport [5], this sector contributed US$121 million to the local economy in 2005 and around US$40.2 million in 2004. Compared to other sectors, this increasing contribution makes this productive sector one of the most representative economic activities in Barranquilla-Colombia as an industrial city, representing 80% of the manufacturing activity in the Colombian Caribbean region. This paper is structured as follows. A review of relevant literature about the PDP concept based on CE is presented in section 2. A review of previous research about the tools selected for modelling the PDP for this case study is presented in section 3. Sections 4 and 5 are respectively devoted to describing the methodology designed for analyzing the PDP based on CE and to presenting the results for each company of the sample. Finally, section 6 presents some concluding remarks.

2 LITERATURE REVIEW
In the literature, authors have addressed the PDP concept from a large variety of vantage points. Clearly related to design, Pahl and Beitz [6] presented the PDP as the interaction of four main phases: problem definition and planning, conceptual design, detailed listing of design tools and, finally, detailed design. According to Ulrich [7], the PDP comprises concept development, system-level design, detailed design, and product testing and refinement. Regarding the relation between PDP and CE, Koike [1] presents the PDP as a set of activities including design, management and utilization of resources. Given their organizational nature, these activities must be supported by the concept of CE to reach functional levels of integration. With this, the PDP is oriented towards facilitating a parallel, simultaneous design of the product and the manufacturing processes, instead of the classic path of executing tasks in a sequential way. To achieve the desired integration within the PDP, Koike, Luna, and Al-Ashaab and Molina propose the conformation of multidisciplinary teams in order to count on information from the customer, merchandising, sales and production areas while the PDP is in progress. Al-Ashaab and Molina [8] consider that multidisciplinary teams allow sharing relevant information, which in the short term brings relevant Product Life Cycle issues into consideration and facilitates the decision-making process from the very first design phase. Koike [1] and Luna [3] complement this position, since both authors agree that the members of multidisciplinary teams need to be selected from different functional departments within the organization, with different knowledge expertise as well. For measuring the performance of the PDP according to CE, Griffin and Page [9] presented a basic, generic set of measure categories used by companies and researchers. The list of metrics included meeting revenue goals (regarding the customer), meeting profit goals (regarding finance), and going to market on time (regarding the product and project programme). The same research work presented a parallel list of metrics that are not commonly used by companies and researchers: companies were reported to use more customer and financial measures, while the researchers' list included company-level and product-related measures. Complementing the interest in measuring the performance of the PDP, Cohen et al. [10] studied the time-to-market trade-off. Their model evaluates how fast a product is completely designed or, based on a previous version, improved by designing minor changes. These authors also consider important how large the multidisciplinary team is and how long its members are devoted to working on the PDP. The contribution of this research work regarding the PDP and CE is important given that it recognises different stages for completing the PDP, considers production and feedback as simultaneous activities and, finally, integrates design and development in the short and long run. In this paper, we present our own methodology for analyzing the PDP based on these contributions as references. Under the scope of CE, we have selected tools for collecting and displaying information, and we have adapted the multidisciplinary team and measure categories according to the real context of the metal mechanics sector in Barranquilla-Colombia.

3 TOOLS SELECTED FOR MODELING THE PDP BASED ON CE
Given that CE involves the interaction of key factors (i.e. people, material, machinery, technology and information), the Icam DEFinition level 0 (IDEF0) and the Actual-PDP-Evaluation Tool (A-PDP-ET) permit these interactions in the PDP to be modelled properly, reflecting a reliable overview of current conditions. Sections 3.1 and 3.2 describe relevant issues about these tools, in order to justify their selection for modelling the PDP in this research work.

3.1 Activity modelling using IDEF0
IDEF0 is a result of the graphic language Structured Analysis and Design Technique (SADT). According to the National Institute of Standards and Technology [11], it is used as a "function model" to produce a structured representation of the functions, activities and/or processes within the modelled system or subject area. As a communication tool supporting the PDP based on CE, IDEF0 models helped the Colombian industrial collaborators to visualize graphically the set of phases, activities and resources involved in their product development processes. The main advantage of this set of boxes is that it makes it easier for the team to identify inputs, outputs and the connections between two or more activities, whether sequential or not. The multidisciplinary team will find this information useful for making decisions about simultaneous activities. Referring to previous applications, Crump et al. [12] consider that IDEF0 captures some constraint-related information, although at a relatively coarse-grained level. In theory, objects classified as mechanisms must be the starting point for cataloguing and validating controls (constraints). For IDEF0, controls must be validated to determine whether they represent constraints or not, since IDEF0 does not explicitly capture which mechanisms enforce which controls. Bosilj-Vuksic et al. [13] compared IDEF0 with Petri Nets for business process modelling, concluding that IDEF0 does not represent all the elements important for simulation modelling, such as queues, random behaviour and process dynamics, but can provide the basic elements for simulation model development. The following are the main advantages the authors have identified while using IDEF0 with the Colombian industry:
• It makes it easier to understand the Product Life Cycle.
• It improves the planning phase for subsequent product development, based on the models obtained with this technique.
• It contributes to the definition of the information required by each activity and by the integration between two or more of them.
• It integrates the correct information in the correct place, at the correct time and using the correct format.

3.2 Actual-PDP-Evaluation Tool (A-PDP-ET)
Designed and applied by Luna [3], this tool, with a survey format and graphical representations, is useful for collecting the perception of the multidisciplinary team of the PDP. This information allows the leader of the team to analyze the current status of the PDP within the company, in order to detect areas to be improved based on CE. The A-PDP-ET tool evaluates the following elements:
• Dimensions: the five main areas a company must be composed of. These are: Organization, Human Resources, Market, Information and Technology.
• Key factors: the most representative activities that the PDP must include regarding each dimension.
• Management level: the degree of effort for conforming multidisciplinary teams and for keeping this working scheme during the PDP progress. Level zero (0) and level four (4) are the minimum and maximum values, respectively. This scale of integer values is used for quantifying the level of integration within the PDP reached by a company, given its real, current conditions.
In order to illustrate the A-PDP-ET tool, Table 1 presents the list of key factors evaluated in each dimension, and Figure 1 presents a general diagram. Once a key factor is evaluated, it is represented with a mark on the corresponding level from zero to four. This way, the diagram presents a useful overview of the results for detecting factors to be improved.

Dimension       | Key Factor                                                   | I.D.
Organization    | Support received from Board Committee                        | 1
                | Conformation of multidisciplinary teams                      | 2
                | Suppliers                                                    | 3
                | Continuous Improvement                                       | 4
                | Methodologies for supporting processes                       | 5
                | Methodologies for Planning                                   | 6
Human Resources | Empowerment                                                  | 7
                | Motivation and Creativity                                    | 8
                | Continuous education and Training                            | 9
Market          | Meeting customers' demands                                   | 10
                | Marketing analysis                                           | 11
                | Planning and checking potential markets                      | 12
                | Product Management                                           | 13
Information     | Management of product data                                   | 14
                | Documentation and utilization of Manufacturing capabilities  | 15
                | Feedback                                                     | 16
                | Information exchange                                         | 17
Technology      | Standards                                                    | 18
                | Technological strategy                                       | 19
                | Computer Aided Technology                                    | 20
Table 1: List of dimensions and key factors evaluated with the A-PDP-ET.

4 METHODOLOGY PROPOSED FOR MODELING AND ANALYZING THE PDP
The methodology applied for evaluating the current status of the PDP was proposed taking as a reference the research work by Luna [3]. This proposal included the following stages:
Stage 1: Conformation of the five-company sample for the case study. For characterizing the PDP in the metal mechanics sector, the companies conforming the sample for the case study were selected based on the following criteria:
• A PDP clearly defined in the company. This means that activities within the PDP are not usually confounded with external activities supporting the process (i.e. transportation and packing).
• The General Manager and members from different areas are willing to establish a multidisciplinary team for executing their projects.
• Inclusion of technologies in the PDP.
Stage 2: Introduction to CE. The aim of this stage is to make the company board understand the advantages and benefits yielded by CE as a strategic policy. This view of CE represents a better, more competitive management of the PDP and a positive impact on the company's profile in the market.
Stage 3: Creation of the model of the PDP in each company from the sample. This stage is addressed at comprehending how the PDP is performed in each one of the companies from the sample. Given that financial, marketing and some other characteristics change from company to company, the result of this stage is a standard model of the PDP. This model reflects agreements within the multidisciplinary team regarding activities, responsibilities and requirements of information and other resources as well. Once the model is obtained, it becomes useful for evaluating the actual level of CE involved in the PDP in each case.

5 STANDARD MODEL OF THE PDP BASED ON CE
The standard model for the PDP includes two sections: section 5.1 presents an overview of the general conformation of the PDP, where issues are common to the set of companies from the sample for the case study; section 5.2 presents the evaluation of the key factors grouped in dimensions, according to CE.
5.1 General conformation of the PDP
In order to identify general issues regarding the PDP, the required information related to each key factor was collected through direct observation, visits to the process in situ, interviews and oriented surveys applied to members of the multidisciplinary team for the design and development of products. The drawings of the standard model were obtained using the IDEF0 technique, representing the general function (Figure 1), the phases (Figure 2) and the activities (Figure 3). In general, these three levels reflect common stages within the PDP for the five companies participating in the case study. Figure 2 shows that one single input, control and/or mechanism can be equally required by more than one activity. Figures 2 and 3 illustrate the fact that each phase and each activity can receive more than one input, more than one mechanism and more than one control, if needed.

Figure 1: Standard model, General Function.

Figure 2: Standard model, Phases.


Figure 3: Standard model, Activities (for Phase 1).

The common pattern presented by the companies in the sample involves the importance of having feedback between phases (level 2) and activities (level 3). For example, the standard model reflects that the Design and Prototyping phases (Figure 2) are the most critical phases within the PDP, since their feedback allows correcting inconsistencies in the design in order to assure that the product will be as functional as planned. Simultaneously, the feedback between Design and Prototyping allows optimizing the utilization level of the manufacturing capacity according to the current conditions in each company.
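For readers who wish to manipulate such models programmatically, an IDEF0 activity can be captured as a record of its arrows (inputs, controls, outputs and mechanisms). The sketch below is illustrative; the activity contents are invented stand-ins for those of Figures 2 and 3:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    """One IDEF0 box with its input, control, output and mechanism arrows."""
    name: str
    inputs: list = field(default_factory=list)
    controls: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    mechanisms: list = field(default_factory=list)

design = Activity("Design",
                  inputs=["customer requirements"],
                  controls=["quality standards"],
                  outputs=["detail drawings"],
                  mechanisms=["multidisciplinary team", "CAD system"])
prototyping = Activity("Prototyping",
                       inputs=list(design.outputs),
                       controls=["manufacturing capacity"],
                       outputs=["prototype", "design inconsistencies"],
                       mechanisms=["workshop"])

# Feedback between phases: inconsistencies found in Prototyping re-enter Design.
design.inputs.append("design inconsistencies")
print(design)
```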

5.2 PDP Performance Measurement
A PDP performance measurement has been carried out for each participating company according to the dimensions and corresponding key factors proposed by CE. Each key factor presented in Table 1 was evaluated by members of the multidisciplinary team using a scale of integer values from zero (0) to four (4), the minimum and maximum values respectively. The evaluation assigned to each key factor quantifies the level of integration within the PDP actually reached by each company. Figures 4 to 8 present graphical representations of the evaluation given to each key factor.

Figure 4: Results for company 1.
Figure 5: Results for company 2.
Figure 6: Results for company 3.
Figure 7: Results for company 4.
Figure 8: Results for company 5.

6 CONCLUSIONS
This case study modelled the performance of the PDP in the metal mechanics sector in Barranquilla-Colombia. Based on the PDP in the five-company sample, the relevant results are:
• The IDEF0 technique has been a good tool to illustrate the functions, phases and activities within the PDP, and helped communication among the multidisciplinary team.
• Just three companies consider executing activities simultaneously (whenever possible). Just one of these three companies had previous, formal knowledge of this analysis coming from CE. This company reported having experienced a reduction in the time of response and an improvement in the utilization of the resources required in the PDP.
• All five companies tend to establish multidisciplinary teams for the PDP. Just one company establishes a team for the PDP including members from the areas related to the process, having them participate actively even if they also take part in other teams for other projects within the company. Two companies establish temporal (heavy) teams. This means that some of the initial members of the team for the PDP do not participate all the way until the end of the process; instead, they are consulted whenever necessary, while they work on their regular, daily functions.
• Just one company presents multidisciplinary teams supported 100% by the Board Committee, meaning that the team receives autonomy for making decisions related to the project in the PDP. The other four companies establish multidisciplinary teams, but members are only allowed to formulate possible scenarios for making decisions; final decisions are made by the Board Committee of each company.
• Just one company partially involves its suppliers in the PDP. The other four companies present weaknesses in evaluating and selecting suppliers. After being selected (in an informal way, based on previous experiences), these suppliers do not participate in the Planning and Design phases of the PDP.
• None of the companies recognized either the existence or the advantages of QFD, DFX, FMEA and so forth as knowledge-based methodologies oriented to support and improve the results obtained in the Design phase.
• The five companies count on technology for design, but just two companies have integrated it into the multidisciplinary activity in the PDP. Just one company presents a high evaluation regarding the acquisition and utilization of this type of technology.
The results of this research have helped the companies to have methods to identify opportunities for improvement through detailed performance measurement.


In addition, activity modelling using IDEF0 helped to build an enhanced structure of the key activities involved in product design and development. Furthermore, the analysis of the current practice helps to identify and then introduce new practices into product design and development, such as multidisciplinary teams, Quality Function Deployment (QFD) and Design for Manufacturing and Assembly (DFMA). This assisted the participating companies in gaining a better understanding of customers' needs and then translating them into product design, while at the same time gaining a deeper understanding of the impact of process and resource capabilities on product design and supporting more effective DFMA consideration.

7 ACKNOWLEDGMENTS
We extend our sincere thanks to the National Research Office ColCiencias and to Universidad del Norte for sponsoring the research work. We would also like to thank the companies from the metal mechanics sector in Barranquilla-Colombia. Finally, our thanks go to our colleagues working for the participant companies and for the University as well.

8 REFERENCES
[1] Koike, T., 2005, Interfaces for integration of Logistics in Project Design. A contribution based on the research work about a crawler tractor. Institut National Polytechnique de Grenoble, France. 310 pages.
[2] Herrera, M.C., 2006, Proposal of a methodology for reusing standards during the New Product Development Process. Universidad del Norte, Colombia. 223 pages.
[3] Luna, C., 2004, Proposal of a methodology for improving the Product Development Process. Validation in the Metal mechanics sector in Barranquilla-Colombia. Doctoral thesis. Edit. UPV, Valencia, Spain.
[4] Berdugo, C., Herrera, M.C., Luna, C., Prada, L., 2006, Designing a model of the Product Development Process in the Metal mechanics sector in Barranquilla-Colombia, from the perspective of Concurrent Engineering. National Research Office COLCIENCIAS.
[5] Proexport, 2006, National Report of the tendency to export goods and services by Regions. Dec. 2005 - Apr. 2006.
[6] Beitz, W., Blessing, F.B., Pahl, G., Wallace, K., 1996, Engineering Design: A Systematic Approach. ISBN 3540199179. Springer, 544 pages.
[7] Eppinger, S., Ulrich, K., 2004, Product Design and Development. ISBN 0071232737. Irwin/McGraw-Hill, 366 pages.
[8] Al-Ashaab, A., Molina, A., 1999, Concurrent Engineering Framework: A Mexican perspective. Concurrent Engineering Research Group. The International Conference of Concurrent Engineering, Research and Applications.
[9] Griffin, A., Page, A.L., 1993, An interim report on measuring product development success and failure. Journal of Product Innovation Management, Vol. 10, pp. 291-308.
[10] Cohen, M.A., Eliashberg, J., Teck-Hua, H., 1996, New Product Development: The performance and Time-to-market tradeoff. Management Science, Vol. 42, No. 2, pp. 173-186.


[11] National Institute of Standards and Technology, 1993, Integration definition for function modeling (IDEF0). Federal Information Processing Standards Publication 183.
[12] Crump, J.W., Fernandes, R., Mayer, R., Keen, A., Painter, M., 1995, Information Integration for Concurrent Engineering (IICE). Compendium of methods report.
[13] Bosilj-Vuksic, V., Giaglis, G.M., Hlupic, V., 2000, IDEF Diagrams and Petri Nets for Business Process Modeling: Suitability, Efficacy, and Complementary Use. ICEIS 2000.

Design of a Virtual Articulator for the Simulation and Analysis of Mandibular Movements in Dental CAD/CAM E. Solaberrieta, O. Etxaniz, R. Minguez, J. Muniozguren, A. Arias Graphic Design and Engineering Projects Department, The University of the Basque Country, Urkixo zumarkalea z/g, 48013, Bilbao, Spain [email protected]

Abstract
This paper presents a research project aiming at designing a Virtual Articulator in order to simulate and analyze the mandibular movements of the human jaw. Its main goal is to improve the design of dental prostheses, adding kinematic analysis to the design process. First, plaster models are scanned. Second, the type of articulator is selected. Third, the prosthesis is statically modelled. Fourth, excursive movements are simulated using a CAD system, analyzing occlusal collisions in order to adapt or modify the design. Finally, the current shortcomings of virtual articulator simulation are discussed in detail and a research prospect is advanced.
Keywords: Dental Virtual Articulator, Occlusal contact, Collision Detection, Dental CAD/CAM

1 INTRODUCTION
This project arises out of the need to design a dental virtual articulator in order to simulate and analyze the mandibular movements of the human jaw. This can be achieved by means of CAD systems and reverse engineering tools. The development has been carried out at the Product Design Laboratory (PDL, www.ehu.es/PDL), in the Faculty of Engineering of Bilbao (The University of the Basque Country). This laboratory has focused its research efforts on the reverse engineering and rapid prototyping knowledge areas and is currently looking for new fields of application for these new design methods, in an effort to promote technology transfer with neighbouring companies. The PDL is developing the design of this virtual articulator in collaboration with the Department of Prosthetics of the Martin-Luther University of Halle. In addition, the Dentistry Department at our university (The University of the Basque Country) has supported this project with some useful advice. In this first step, different articulators have been selected to be modelled with different CAD systems (SolidEdge and CATIA). The design process has been carried out using the measuring and reverse engineering tools available at the PDL: the Handyscan REVscan 3D scanner and its software (VXscan), reverse engineering and computer-aided inspection software (Geomagic Studio and Qualify), Rapidform XOR, as well as the ATOS I rev.2 GOM 3D scanner. Once the articulator is digitized, the next stage is to obtain the upper and lower dentures digitally. Apart from this, it is necessary to register the relative location of the occlusal surface referred to the intercondylar axis; this is achieved by means of the face bow. Afterwards, the design of the dental prosthesis is developed using the CAD system and, finally, the mandibular movements are simulated. The final


purpose is to optimize the design of the dental prosthesis whilst avoiding collisions during the excursive movements.

2 STATE OF THE ART
Nowadays, around 90% of technical dental work is carried out using the wax-up technique to generate the cast framework; the design work then finishes with the hand ceramic phase (drop by drop) (Figure 1).

Figure 1: Design of a tooth by hand.

After comparing the results of cast frameworks and CNC machining (a computer-aided system) [1], the conclusion is that CNC machining obtains results of the same accuracy in less time. Besides this, there are several advantages in producing computer-aided prostheses regarding time, data registration, material resistance, control of several parameters, etc. Therefore, nowadays there is no doubt as to the vast potential offered by CAD/CAM systems. Over the last years, thanks to 3D scanning and computing developments, some very relevant improvements have been made in digital dentistry. However, digital dentistry is still unable to offer some possibilities, and standing out among them is the kinematic design of occlusal surfaces. Focusing on this particular field, as there is no possibility of applying real movements in dental CAD systems, most of them are not as good as typical mechanical dental articulators. Current CAD/CAM systems can only work as simple mechanical occludators, allowing just one rotation movement along a hinge, so the dental prosthetist must design statically. Once the prosthesis is designed, the generated denture must go back to the mechanical dental articulator, where the movements are applied manually so that occlusal collisions can be eliminated.

2.1 Mechanical dental articulators
Mechanical dental articulators (Figure 2) are tools that simulate the movements of the human lower jaw and the TemporoMandibular Joints (TMJs). They have been used for more than 100 years for different purposes in dentistry (Figure 3). They have become indispensable instruments for dentists in their diagnostic activity, as they simulate specific patients for dental technicians in their laboratory work.

Figure 2: Human skull and mechanical dental articulator.

They enable technicians to carry out a study of the occlusal relations between dental arches and to detect harmful occlusal interferences on models before more sophisticated occlusal equilibration procedures are performed on the patient. This equilibration of partial and full dentures is also carried out in dental articulators. Together with the use of the wax-up technique, articulators enable technicians to construct fixed or removable prostheses in the dental laboratory according to the particularities of the different movements of each patient. Nowadays, this procedure is considered standard, so current efficient dentistry necessarily involves the use of mechanical dental articulators.

Figure 3: (a) Occludator, (b) Dentatus ARL and (c) Protar articulators.

Over the last 120 years, hundreds of different articulators have been constructed [2-4], yet throughout these years there has been no remarkable development in articulators. Today's articulators are handy, functional and more precise in both construction and operation. Among them, many differences can be pointed out: adjustment, cost, Arcon and Non-Arcon, versatility, etc. In order to reproduce the individual parameters of the patient, the articulator must be adjustable. The setting data are measured on the patient and, using the face bow, the relative location of the occlusal plane is transferred from the patient to the mechanical dental articulator (Figure 4).

2.2 Face bow
To ensure that movements in an articulator are as similar as possible to those of the human masticatory system, the models have to be mounted onto the articulator with the help of a so-called face bow. This ensures a relationship between the plaster models and the joints of the articulator similar to the relationship between the jaws of the patient and his/her TMJs (Figure 4). Furthermore, the upper and lower models have to be oriented to each other with a high degree of precision in the so-called intercuspal position, with the help of a wax or silicone bite.

Figure 4: Face bow mounted on the patient and on the articulator.

2.3 Dental CAD/CAM Systems
Dental CAD/CAM systems constitute a new way to produce dental prostheses. There is no doubt that these high-tech instruments will take over dentistry in the future. The dentists who already offer the most advanced technologies at their dental laboratories are starting to call this "dentistry of the single visit". In a few minutes, the dentist is able to obtain the necessary electronic impression with a scanner; it then takes about 20 minutes to have the tooth designed on a computer, and afterwards he/she will have it milled from one ceramic block in less than an hour. However, considering the limited accuracy of the occlusal surface, this type of restoration can only match the possibilities offered by simple occludators (static design). The system cannot take functional movements into consideration, so the occlusal surface of the new tooth has to be manually trimmed to these movements in the mouth or in an articulator. Even a really high-tech system such as the Cerec3 [5-7], the latest CAD/CAM development, presents this severe handicap, despite being able to make an occlusal surface fit the antagonists in the intercuspal position. This shortcoming is common to nearly all laboratory CAD/CAM systems. Unfortunately, it is not possible to integrate a mechanical dental articulator in such systems in order to take these movements into account. As a consequence, all dental CAD/CAM systems should aim to deliver occlusal surfaces as precise as those obtained when working with adjustable mechanical dental articulators. These systems should use kinematic methods for occlusal surface construction or correction.

3 CAD/CAM DENTAL LABORATORY
The new design paradigm is fully based on computer-aided tools and on virtual modelling and simulation [8]. As explained above, nowadays it is not possible to perform the whole design process on a computer-aided system.


Figure 5: The latest-technology dental laboratory.

The main goal of this project is to identify the fundamental phases of the development process of dental prostheses, as well as to verify the adequacy and limits of current hand-made tools and technologies such as the wax-up technique. The design process is intended to implement the best practices used by dental technicians, always ensuring high-level products independently of the craftsmanship of the expert manufacturing the dental prostheses. It integrates the following tools (Figure 5): reverse engineering tools for the automatic (or semi-automatic) acquisition of the patient's occlusal morphology, a modeller that allows the designer to represent both the articulator and the existing dentures of the patient, and an environment for collision-based simulation to reproduce the real behaviour of the human mandible and to verify potential interferences. Thus, the process to obtain the final design of the dental prosthesis consists of several steps, according both to the specifically adopted CAE tools and to the obtained partial results [9].

Figure 6: The new design paradigm.

The first step consists of a reverse engineering process: plaster models of the upper and lower parts of the jaw are scanned to obtain a digitized set of data of the patient. In this phase, the real geometry of the mouth and its relative location are reconstructed in a CAD system using the face bow. In the second phase, the type of articulator is selected depending on the required accuracy and/or on the patient's setting data available in each case. Once the dental prosthesis is modelled, the functional simulation is performed in order to obtain the interfering collision points, which may end up producing a disease in the temporomandibular joints. Excursive movements, such as protrusion and laterotrusion, are simulated using a CAD system, analyzing possible occlusal collisions so that the design can be adequately modified. Finally, the dental prosthesis is milled and tested in the mouth of the patient.

4 VIRTUAL ARTICULATOR DESIGN PROCESS
This step deals with the creation of the digital model of different types of mechanical dental articulators.
4.1 Selection of the articulator
The selected articulator [10] and, even more importantly, the skill and care with which it is used have a direct impact on the success of fixed or removable restorations. If the dentist's only concern is the relationship of the antagonist teeth at the point of maximum intercuspation, the design and the use of an articulator will be greatly simplified. Since the intercuspation position is static, the articulator will need to act only as a rigid hinge, which is little more than a handle for the model. The mandible, however, does not act as a simple hinge; rather, it is capable of rotating around axes in three planes. The occlusal morphology of any restoration for the mouth must accommodate the free passage of the antagonist teeth without interfering with the movement of the mandible. Because of their potential to produce pathologies, occlusal interferences must not be incorporated into restorations placed by the dentist. One way of preventing this problem is the use of fully adjustable articulators, which simulate mandibular movements with a high degree of precision. Treatments using these articulators are time-consuming and demand great skill from both dentist and technician. As a result, the cost of such treatments does not make them feasible for minor routine treatment plans. Nowadays, most single crowns and fixed partial dentures are fabricated on small hinge articulators that have a limited capacity to simulate mandibular movement, or even none at all. While many of the inaccuracies produced by this type of instrument may be corrected in the mouth using valuable chair time, the final result is an occlusion that is less than optimal. Unfortunately, many of these inaccuracies are not acknowledged and they are allowed to remain in the mouth as occlusal interferences, which frequently produce symptoms of occlusal disease. For this project several semi-adjustable articulators have been chosen, taking into consideration the following aspects: resulting accuracy, cost, time and TMJ type. The articulators that have been modelled are the ones shown in Figure 7: the Hanau H2 and the Ivoclar Stratos 200.

Once the virtual articulator is constructed, all the measures are verified. The final step deals with locating the models on the articulator. For this purpose, the relative position of the upper model is scanned using the face bow. Afterwards, its location in the virtual articulator is direct, and the location of the lower model is made using an electronic bite in Centric Relation. Then, the virtual articulator is ready to apply the kinematic simulation using the CATIA CAD system.

Figure 7: Hanau H2 and Ivoclar Stratos 200 articulators.

4.2 Process
Once the articulators are selected, their structures and shapes are analyzed in order to clarify how to use the reverse engineering and measuring tools. The general structure, that is, the upper and lower bodies, is similar in both articulators, but the TMJs, which are the most important part of the articulators, present a great variety of configurations.

Hanau H2
The first articulator that has been modelled has simple geometrical bodies (cylinders, prismatic bodies and spheres). Therefore, once several physical measures have been taken, modelling each part has not been a difficult task. However, the ATOS I 3D scanner has been used in order to have the drafts located in the correct position in space, as shown in the second step of the design process (Figure 8). To get the sections of the scanned point cloud, the Rapidform XOR software has been used. The whole articulator has been constructed combining both measured and scanned parts.

Figure 8: Modelling process of Hanau H2.

Stratos 200
The Ivoclar Stratos 200 has been modelled using the SolidEdge CAD system. As shown in Figure 9, some parts were modelled directly after measuring the mechanical dental articulator. However, due to its mobility, the Handyscan 3D scanner has been used to scan almost all of the articulator. Using the Geomagic point cloud editing software, the useful data have been extracted from the millions of points scanned.

Figure 9: Modelling process of Stratos 200.

Finally, as was done with the Hanau H2, the models have been located in the correct position, ready for the kinematic analysis.

4.3 Flexibility
The aim of this parameterization is to have the flexibility to fit the surface easily to each patient. Nevertheless, obtaining these data from each patient is not an easy task. Another advantage of this modelling and parameterization is that it makes it possible to introduce new settings which do not exist on the physical articulator. For instance, on the Hanau H2 articulator the intercondylar distance (Figure 10) can be introduced as a new setting parameter,


making the simulation more accurate because the real radius is defined. This can be done with different parameters, making the articulator much more versatile.

Figure 10: Hanau H2 virtual articulator with different intercondylar distances.
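As a toy illustration of such parameterization, the sketch below groups hypothetical setting parameters into one record. The field names and the laterotrusion-radius relation are assumptions made for illustration, not values or formulas taken from the Hanau H2 specification:

```python
from dataclasses import dataclass

@dataclass
class ArticulatorSettings:
    """Hypothetical setting parameters for a semi-adjustable virtual articulator."""
    intercondylar_distance: float = 110.0  # mm, patient-specific new parameter
    condylar_inclination: float = 30.0     # degrees
    bennett_angle: float = 15.0            # degrees

    def lateral_rotation_radius(self) -> float:
        # Assumed relation: during laterotrusion the working-side condyle acts
        # as the pivot, so the balancing-side condyle sweeps a circle whose
        # radius equals the intercondylar distance.
        return self.intercondylar_distance

# Changing the parameter re-derives the radius used by the simulated movement.
print(ArticulatorSettings(intercondylar_distance=100.0).lateral_rotation_radius())
```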

5 SIMULATING KINEMATICS
As Figure 11 shows, the steps involved in the simulation process become an extension of the product development process.

Figure 11: The steps of the simulation process.

The parts of the articulator modelled in the CAD system and the scanned dentures are converted to solids by means of the Rapidform software and are then assembled, adding the necessary constraints. Once this assembly process is finished, mechanical joints have to be created, either by automatically converting the existing assembly constraints or by manually selecting different joints between parts. Then, after adding the commands or actuators, the user is able to control the DOFs. The simulation is then run and any possible interference on the designed prosthesis is checked; if there is any collision, the interfering part is removed. There are several possibilities to generate videos or to analyse the trajectories of any of the points of the occlusal surface. Each CAD system has different possibilities and capacities for simulation. The project started using SolidEdge V18, modelling the Ivoclar Stratos 200 mechanical dental articulator. There were problems importing digitized models (.stl files); this problem was solved by acquiring V20 of SolidEdge, so the virtual articulator was able to simulate excursive movements correctly. However, there has been a limitation when simulating the TMJ parts, because this program is unable to simulate movements based on collisions between surfaces. As Figure 12 shows, the structure of the TMJ is made of primitive geometry, such as spheres, cones and cylinders. Depending on the movement, the difficulty of simulation differs. Due to a symmetric constraint, it has been possible to simulate protrusion through the relation between a cylinder and a changeable protrusion part. However, the SolidEdge CAD system is not able to calculate laterotrusion movements, because the contact surfaces change at the same time as these movements occur. The next step is to overcome these limitations by means of the Dynamic Designer software, based on the MSC.ADAMS simulation engine.

Figure 12: TemporoMandibular Joints of the Stratos 200 articulator.

Hence, the CATIA DMU Kinematics module has been used for the subsequent work. This module offers more options than the SolidEdge CAD system, so the movements of the Hanau H2 and Denar Mark II have been simulated more accurately (Figures 13 and 14). On one hand, the movement of protrusion has been simulated and the trajectory of the first lower left molar has been analysed.
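The interference check described above is performed inside the CAD packages, but the idea can be prototyped independently. The following minimal sketch (Python with NumPy, translation-only motion, a coarse bounding-box test and randomly generated stand-in geometry; none of this reflects the actual CATIA or SolidEdge implementation) sweeps the lower denture along a sampled trajectory and flags candidate collision poses:

```python
import numpy as np

def aabb_overlap(pts_a, pts_b, clearance=0.0):
    """Coarse interference test: do the axis-aligned bounding boxes of two
    tooth-surface point clouds overlap (within a clearance margin)?"""
    min_a, max_a = pts_a.min(axis=0), pts_a.max(axis=0)
    min_b, max_b = pts_b.min(axis=0), pts_b.max(axis=0)
    return bool(np.all(min_a <= max_b + clearance) and
                np.all(min_b <= max_a + clearance))

def sweep_for_collisions(upper_pts, lower_pts, trajectory, clearance=0.1):
    """Move the lower denture along a sampled mandibular trajectory (a list of
    3D translation vectors) and report the poses where interference may occur."""
    hits = []
    for i, offset in enumerate(trajectory):
        if aabb_overlap(upper_pts, lower_pts + offset, clearance):
            hits.append(i)  # candidate pose for a finer surface-level check
    return hits

# Illustrative call with random stand-in geometry (units are arbitrary).
upper = np.random.rand(500, 3)
lower = np.random.rand(500, 3) - [0.0, 0.0, 1.5]
protrusion = [np.array([0.0, 0.01 * t, 0.02 * t]) for t in range(50)]
print(sweep_for_collisions(upper, lower, protrusion))
```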


Figure 13: Denar Mark II's protrusion simulation.

On the other hand, the lateral movement has been simulated using different values of the Bennett angle (Figure 14).

Figure 14: Hanau H2's lateral movement simulation.

Apart from an environment for collision-based simulation, another possibility, reproducing theoretical movements, has been explored. Hobo et al. [11-13] presented kinematic studies of mandibular movements in which the trajectory of the centre of the TMJs is determined by a formula and then a theoretical movement is applied. When there is no possibility of getting the patient's setting data, this is a useful procedure.

6 VIRTUAL DENTAL ARTICULATORS
Virtual articulators are able to design prostheses kinematically. They are capable of:
- simulating human mandibular movements,
- moving digitized occlusal surfaces against each other according to these movements, and
- correcting digitized occlusal surfaces to enable smooth and collision-free movements.
Two different approaches have been taken until now. The virtual articulator of Kordass and Gaertner [14] from the Greifswald University in Germany was designed to record the exact movement paths of the mandible with an electronic jaw movement registration system called the Jaw Motion Analyser, and to move digitized dental arches along these movement paths on the computer (Figure 15). This software is able to calculate and visualise static and kinematic occlusal collisions. It is further planned to integrate the system into the design and correction of occlusal surfaces in CAD systems.

Figure 15: Kordass' Virtual Articulator.

Szentpétery's virtual dental articulator from the Martin-Luther University of Halle [15] is based on a mathematical simulation of the articulator movements. It is a fully adjustable three-dimensional virtual dental articulator capable of reproducing the movements of an articulator (Figure 16). In addition, the mathematical simulation offers possibilities not offered by some mechanical dental articulators, such as a curved Bennett movement or different movements with identical settings. This makes it more versatile than a mechanical dental articulator. On the other hand, since it is a mathematical approach, it behaves as an average-value articulator, and it is therefore not possible to obtain easily the individualized movement paths of each patient.

Figure 16: Szentpétery's Virtual Articulator.

This handicap was really well solved in Kordass' virtual articulator. However, it must be pointed out that it needs a sophisticated and expensive electronic jaw recording system. Hence, this project was focused on developing a different virtual articulator based on mechanical dental articulators. Knowing which setting parameters can be registered and transferred from the patient, the user can choose the most adequate articulator to use in the simulation. Therefore, the implementation of this virtual articulator makes it easier for the user (prosthetist or dentist) to manage, and it allows for a comparison between results. One of the purposes of this work is to compare the differences between using a virtual articulator and a mechanical one.

7 CONCLUSIONS
The two main practical implications of this research project are the improvement of existing dental CAD/CAM systems by adding kinematics, and the analysis of the simulations of different articulators, since each articulator has an individual pattern of movement. The research project identified several limitations of CAD systems in this specific kinematic application; these problems have been solved throughout the design process. Another remarkable conclusion is the flexibility and versatility offered by this type of virtual articulator: the technician can choose the type and adjustment of the articulator and, what is more, add setting parameters that do not exist on the physical articulator. The experience acquired highlighted how custom-fit products can be translated into a highly qualitative improvement when innovative computer-aided tools, integrating all the necessary functionalities to carry out the various dental prosthesis designs (crowns, bridges, partial or complete dentures) in a unique virtual articulator, are implemented. Ultimately, the prosthetist still relies exclusively on his/her know-how and on standard technical solutions. On this basis, we envisage the need for a development process of custom-fit products based on a virtual environment assisting the whole design process. This process integrates all the necessary tools and performs a collaborative approach where each activity is directly supported by the acquired knowledge of the specific domain. Thus, mechanical articulators are no longer needed.


8 FUTURE WORK This research project will go on developing a Virtual Articulator software that integrates the correcting software for CAD/CAM system directly into the process of construction of crowns and bridges An educational module will be constructed for didactic objectives in order to: -demonstrate and illustrate the functions of dental articulators and the human masticatory system -simulate different types of excursive movements and its influence on the occlusal surface. -analyze the role and influence of different parameter settings [16] on articulator movements. -analyze of the occlusion of digitized occlusal surfaces of natural dental arches. A digital face-bow is another aspect of this project which allows for a more precise location of the occlusal surface. At present, the face bow has to be mounted on the patient and then brought to the dental mechanical articulator. The aim is to reduce and simplify this process of transferring these data to the computer. Another advantage that is not available on mechanical dental articulators is the possibility to produce an individualized fosa cavity, introducing the setting data on the parameterized fosa surface with the Rapid Prototyping machine. Although this is an 'extra' that is not in the intended direction of the project, it does make the mechanical articulator more versatile. Finally, it is important to remark that several improvements should be made up when obtaining the patient's data. This is a main shortcoming which generates difficulties on the next step, this is, the use of the articulator and the design process. Therefore, a progress in this sense will bring important improvements on the whole process. 9

9 ACKNOWLEDGEMENTS
The authors of this paper want to thank the Faculty of Engineering of Bilbao for locating the Product Design Laboratory in their facilities, and the regional institutions of Biscay for financing this project (7/12/EK/2007/52).

10 REFERENCES
[1] Ortorp, A., Jemt, T., Back, T., Jalevik, T., 2003, Comparisons of precision of fit between cast and CNC-milled titanium implant frameworks for the edentulous mandible, Int J Prosthodont, 16(2):194-200.
[2] Hoffmann-Axthelm, W., 1976, History of Dentistry, Quintessence Publishing Co.
[3] Mitchell, D.L., Wilkie, N.D., 1978, Articulators through the years. Part I. Up to 1940, J Prosthet Dent, 39:330-8.
[4] Mitchell, D.L., Wilkie, N.D., 1978, Articulators through the years. Part II. From 1940, J Prosthet Dent, 39.
[5] Reiss, B., 2003, Occlusal surface design with Cerec 3D, Int J Comput Dent, 6(4):333-42.
[6] Kaur, I., Datta, K., 2006, CEREC - The power of technology, J Indian Prosthodont Soc, 6:115-9.
[7] Otto, T., Schneider, D., 2008, Long-term clinical results of chairside Cerec CAD/CAM inlays and onlays: a case series, Int J Prosthodont, 21(1):53-9.
[8] Colombo, G., Filippi, S., Rizzi, C., Rotini, F., 2008, A Computer Assisted Methodology to Improve Prosthesis Development Process, CIRP Design Conference 2008: Design Synthesis, Twente, The Netherlands.
[9] Acuña, C., Oclusión computerizada. 1ª parte, www.oclusion.es, Casos clínicos.
[10] Hobo, S., Herbert, T., Whitsett, D., 1976, Articulator Selection for Restorative Dentistry, Journal of Prosthetic Dentistry.
[11] Hobo, S., Takayama, H., 1997, Oral Rehabilitation. Clinical determination of occlusion, Quintessence Publishing Co.
[12] Takayama, H., Hobo, S., 1989, The derivation of kinematic formulae for mandibular movement, Int J Prosthodont, 2:285-95.
[13] Takayama, H., Hobo, S., 1989, Kinematical and experimental analyses of the mandibular movement in man for clinical application, Precision Machinery, 2:229-304.
[14] Gaertner, C., Kordass, B., The Virtual Articulator: Development and Evaluation, Int J of Computerized Dentistry, 6:11-23.
[15] Szentpétery, A., 1997, Computer Aided Dynamic Correction of Digitized Occlusal Surfaces, J Gnathol, 16:53-60.
[16] Szentpétery, A., 1999, 3D Mathematic movement simulation of articulators and its application by the development of a software articulator, Martin-Luther University of Halle.


Contribution of two diagnosis tools to support interface situation during production launch L. Surbier, G. Alpan, E. Blanco G-SCOP Laboratory, Grenoble INP-UJF-CNRS, 46 Ave. Félix Viallet, 38031 Grenoble FRANCE [email protected]

Abstract
Firms are urged to constantly introduce new products. Hence, the New Product Development process should be mastered, especially its final phase, the production launch. This paper addresses the critical issue of information exchange during production launch. Two diagnosis tools considering production launch as a key interface are presented. They make it possible to examine the information flows, to highlight their weaknesses and hence to find solutions for further improvements. This paper also presents the results of a case study where the diagnosis tools were implemented during a switchgear development project.
Keywords: New Product Development, Production Start-up, Information Exchange, Intermediary Objects.

1 INTRODUCTION
Extended globalization and increasing competition urge firms to constantly innovate and launch new, high-technology products on their markets. Therefore, New Product Development (NPD) has become a key process to master for successful companies [1, 2]. As a result, NPD has received great attention in the research literature over the past years [3-5]. New Product Development has been studied from different points of view, varying from marketing to engineering design and operations management [6]. The final phase of the NPD process is called "production launch and ramp-up" [4, 5, 7]. Production launch is described as the period when the "firm moves development into a pilot manufacturing phase" [4] and ramp-up as the period "after production launch until full capacity utilization" [8]. Production launch and ramp-up are crucial steps for the whole NPD project; indeed, their success is a necessary condition for the success of the NPD project [9, 10]. During production launch, the handover of the NPD project from R&D (Research & Development) to Production (manufacturing) occurs [7, 11, 12]. This handover requires intense collaboration and an important information exchange. But the R&D-Production handover is only one of the numerous cross-departmental activities occurring during production launch. Several actors step into the project during this phase (such as actors from the Purchasing, Procurement or Quality control departments), which intensifies the need for information exchange and collaboration. Furthermore, to increase efficiency and reduce the time-to-market, immature or uncertain information is exchanged, implying a higher risk for the completion and success of the NPD project tasks. This is why production launch is considered a very critical phase. Improving the information exchange during production launch could have sizeable positive consequences for the global NPD project.



As a result, this paper is concerned with information exchange in the context of production launch. Because of the number and variety of actors involved and the intense information exchange that is necessary, production launch will be considered in this paper as an interface situation. Indeed, an interface is defined in the management science literature as the links and interactions between several different industrial functions that are used to communicate and collaborate [13, 14]. This paper analyzes the different aspects of the interface situation during production launch. It presents the basis of two diagnosis tools, which aim at analyzing the interface situation and highlighting its weaknesses, hence giving possibilities for improvement. These diagnosis tools are implemented in a case study realized at a major original equipment manufacturer of electrical devices. The next section of this paper presents the concept of an interface, on which the analysis carried out in this paper is based. Section 3 presents in detail the diagnosis tools proposed to analyze the interface situation that arises during production launch. Section 4 addresses the case study and the implementation of the diagnosis tools. Section 5 presents the conclusions drawn from the diagnosis tools and a discussion.
2 FUNDAMENTAL ELEMENTS OF AN INTERFACE
From an organizational point of view, the concept of interface is often related to the connections, links, interactions and relationships that exist between two or more industrial functions (or teams). There has been a strong focus on the interface concept in the design and engineering management literature [11, 14, 15]. An interface can be considered either from a static point of view (describing the fundamental elements of the interface) or from a dynamic point of view (identifying the different information flows that compose the information exchange between the project actors).

Concerning the static aspect, the diagnosis tools presented in this paper are based on the characterization of an interface given by Koike et al. [13]. The authors define the concept of the interface among project actors using five fundamental elements: the stakeholders ("interface members"), the intermediary objects ("artefacts or objects"), the tools, the procedures and rules, and the interface space and time (see figure 1).

Figure 1: The five fundamental elements of an interface [13]

The project's stakeholders are persons or groups having an interest in the NPD project. In the case study presented in this paper, the three major stakeholders during production launch were the R&D department, the Production department and the Purchasing department. Other stakeholders, such as Procurement, Quality or the factory management, also took part in the project. The concept of intermediary object was first presented by Jeantet and Vinck [16]. The authors call "intermediary objects" (IO) the items that are used or created during the design process, and explain that these items have two uses. First, they are a way for actors to exchange information. But the authors also insist on the second use of IO: these objects in some way also represent the coordination that exists among their users. Analyzing who can modify the object, who is using it and how it is shared among actors brings many insights into how information is exchanged within the project. Intermediary objects used during production launch are, for example, product bills of material or component drawings. The tools are essential in a project to support the information exchange as well as the work breakdown. Several different tools are often at the project stakeholders' disposal, such as PLM (Product Lifecycle Management), ERP (Enterprise Resource Planning) and MS Office software. The rules and procedures of an interface define how the information flows and activity execution are designed and coordinated. For example, defining the participants of a project structures the information diffusion within the project. Interface spaces and times are the moments and places where stakeholders can interact during the project; they are dedicated moments and places to create or use intermediary objects. Interface times can be either synchronous (such as project status meetings) or asynchronous (such as e-mail exchanges). In this paper, only synchronous interface times will be considered.

These five fundamental elements are useful to describe the core elements of an interface. But examining these five elements is not sufficient, because they only reveal the basic, static aspects of an interface. In fact, the most important aspect of an interface is its dynamic one: an interface strongly structures itself around the information flows between its stakeholders. Focusing on the dynamic aspect of information exchange helps identify the actual information flows during production launch and hence highlight their weaknesses, giving possibilities for improvement. The diagnosis tools presented in this paper are based on the five fundamental elements of an interface illustrated above, but their principal goal is to capture the dynamic aspect of the interface situation. The diagnosis tools presented in the following section are intended to characterize the information exchange that happens within the interface between the different stakeholders.
3 DIAGNOSIS TOOLS
The diagnosis tools presented in this section are concerned with the identification and analysis of the information flows that exist within the interface situation during production launch. They are based on the five fundamental elements composing an interface, and they analyze the project interface and its information exchange through the characterization of the information and of its spaces of exchange. To analyze the dynamic aspect of the interface more deeply, the diagnosis tools focus on the intermediary objects (as supports for the information exchange) and the interface spaces (as spaces for the exchange and diffusion of information). As a result, the diagnosis tools presented here are two grids: a first grid investigating the project intermediary objects and a second grid investigating the synchronous interface times. The former will be named the IO grid and the latter the SIT grid.
The IO grid has the objective of investigating the different intermediary objects that are created during the project, because intermediary objects support the information exchange. To examine the information exchanged through the different IO, several information characteristics are emphasized in the IO grid. To identify and characterize the information flows, the IO grid focuses on the information dynamics, taking into account three characteristics:
• The information update frequency, to evaluate how often the information changes, and thus to qualify its maturity. The update frequency evaluates the rate at which changes occur in the IO information.
• The information evolution, to evaluate the tendency with which the information reaches its final value. Information evolution [17] characterizes the velocity with which the information will reach its final value. A piece of information with a fast evolution will quickly reach its final value and then only undergo small-scale changes. On the contrary, a piece of information with a slow evolution will not approach its final value until the very end of its evolution.
• The possibility of modification of the information, to evaluate whether the information can be changed after its release by the information source. An object which cannot be modified by its users is a closed IO, whereas an object which is a support for negotiation and interaction is an open IO [18]. This characteristic is used to determine the influence of the different users on the IO and whether the object is more a support for negotiation or for prescription.


However, as explained in section 1, a key element of the information exchange is the risk due to the exchange of immature or uncertain information. As a result, the IO grid also focuses on the evaluation of information maturity. The maturity of information is evaluated with respect to the different impacts an information change can have on the global project process. The IO grid entails three evaluations of information impact:
• The sensitivity of the information, to evaluate the impact of information changes on downstream tasks. Sensitivity [17] characterizes the information exchanged between an upstream activity (for which the piece of information is an output) and overlapped downstream activities (for which the piece of information is an input). A piece of information is very sensitive if its modification will have serious impacts on the downstream activities. On the contrary, information with a low sensitivity will not have a high impact on its downstream activities.
• The update duration, to evaluate the load on the person in charge of the information release when updating the information. The longer the update duration, the heavier the load on the person in charge of updating the IO.
• The information structure, to evaluate the elements of context attached to the document that are needed to interpret it. There are three degrees of information structure [19]:
- Structured Information (SI): its content and form are strongly regulated and fixed through rules and procedures. For example, a design drawing is an IO with Structured Information: all the information enclosed in the drawing sheet is mandatory and thoroughly predefined by official company rules.
- Semi-Structured Information (SSI): its content and form are only partially shaped by the company's official rules. For example, minutes of meetings could always be handed out following the same frame, but the content will always vary. SSI can be either very explicit or totally meaningless for external actors, depending on their personal knowledge.
- Non-Structured Information (NSI): the information enclosed in the IO is hardly formalized at all. Context elements are the bare minimum for the information receiver to understand.
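Taken together, these six characteristics, plus the person in charge and the users, define one row of the IO grid. As a minimal illustrative sketch (the class and field names below are ours, not part of the original grid):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IntermediaryObject:
    """One row of the IO grid; category values follow the text above."""
    name: str                  # e.g. a bill of material or follow-up list
    person_in_charge: str      # team releasing the information
    users: List[str]           # teams consuming the information
    update_frequency: str      # "high" | "average" | "low"
    evolution: str             # "fast" | "slow"
    is_open: bool              # True if users may modify the object
    sensitivity: str           # "high" | "average" | "low"
    update_duration_h: float   # updating load, in hours
    structure: str             # "SI" | "SSI" | "NSI"
```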

All these IO information characteristics allow a precise picture to be made of the information exchanged between stakeholders. The proposed frame for the IO grid, the first diagnosis tool, is presented in figure 2.

Figure 2: Analysis of the project intermediary objects – the IO grid.

Filling in the IO grid leads to a first in-depth analysis of the information exchange. The IO grid reveals which objects are critical to the information exchange within the project, and identifying the person in charge and the users of each IO helps identify the real major information flows. The IO grid, as a diagnosis tool, also allows possible weaknesses or failures in the information exchange, and hence difficulties in the actors' collaboration and/or relationships, to be identified.
The second diagnosis tool, the SIT grid, consists of listing the synchronous interface times occurring during the production launch phase of the NPD project. Indeed, these times are precious interface times, where information is exchanged and/or diffused and where IO are created and/or used. In the SIT grid, the team responsible for the interface time is registered, as well as the participants. The SIT grid also concentrates on the identification of information flows by utilizing the concept developed by Blanco et al. [18] to determine at which level the information is exchanged. Blanco et al. defined four levels of information diffusion in collaborative design activities:
• The public workspace: it is in the public workspace that official deliverables are published. It is also the place for external communication with suppliers or customers. In general, the information exchanged in the public workspace is extremely formalized.
• The project workspace: this intermediary level concerns the sharing of information within the project team. This level is still influenced by the company's formalization of information (and by the project's role segmentation).
• The proximity workspace: this level corresponds to the information producer's personal network. The invited actors accepted in the information producer's proximity workspace compose a "friendly" assistance for the sharing of information.
• The private workspace: it is in the private workspace that each stakeholder keeps his or her own information.
The SIT grid, as the second diagnosis tool presented in this paper, reveals in which workspace the information is exchanged during the listed meetings. The private workspace is not reviewed in the SIT grid because, first of all, it is difficult to access and review the personal data of each stakeholder, and second, the information kept in the private workspace is generally not shared as-is with any of the other stakeholders. So, as a second part of the diagnosis, the SIT grid illustrated in figure 3 is proposed.

Figure 3: Investigation of the synchronous interface times – the SIT grid.

The following section details the industrial case study realized to illustrate how the diagnosis tools presented above can be implemented in the production launch phase of an NPD project and contribute to drawing valuable conclusions on the weaknesses of the project.


4 CASE STUDY
The field study was carried out within a plant of Siemens AG in France. Siemens AG is a global powerhouse in electronics and electrical engineering, operating in the industry, energy and healthcare sectors. The company has around 400,000 employees working to develop and manufacture products, design and install complex systems and projects, and tailor a wide range of solutions for individual requirements. In fiscal 2007, Siemens had revenue of €72.4 billion. In its Power Transmission & Distribution business area, Siemens is the world's second largest solutions provider to power utilities and industrial customers, offering solutions for the transport and distribution of electricity from the power plant to the consumer.
The project followed was a switchgear development project. As mentioned in section 2, the three most important stakeholders of the production launch phase of the development project were the R&D department, the Purchasing department and the Production department. Concerning the R&D department, the switchgear design was carried out by two physically separated R&D teams (R&D 1 and R&D 2), one of them (R&D 1) being located in the plant where the field study was carried out. Dedicated teams were also appointed in the Purchasing department (2 to 6 persons) and in the Production department (4 to 6 persons). The Quality department and the Procurement department of the plant were also involved. The major information flow needed during the production launch of the switchgear development project is depicted in figure 4. The Siemens factory being an OEM (original equipment manufacturer) of electrical devices, the Purchasing department plays a key role in setting up the new supply chain. Hence, during production launch, the Purchasing team needs information from the R&D team so as to be able to purchase the newly created components. Further downstream, the Production team needs information from the Purchasing department about the availability of the components (see figure 4). The Purchasing team is thus at the centre of the major information flow during production launch.


Figure 4: Main information flow during production launch.

Thanks to the diagnosis tools presented in section 3, a deeper identification of the real information flows is possible. Filling in the IO and SIT grids allows depicting in detail how and when information is exchanged. As a result, the weaknesses of the information exchange are detected and improvement possibilities are identified.
4.1 Methodology
The field study was carried out by the authors through an operational involvement of the first author in the NPD project of the Siemens factory. Indeed, her involvement allowed her to keep track of what happened during the production launch phase of the switchgear development project. Several focused interviews were also carried out to enable the authors to look into imprecise or interesting topics. This operational involvement was complemented with a literature review and group meetings.
4.2 IO grid
Concerning the IO grid, the operational involvement in the production launch phase of the switchgear development project made it possible to list the principal IO used by the project stakeholders.

Then, reality-anchored criteria (listed below) were used to specify the different characteristics defined in the IO grid (presented in section 3); a small sketch of the update-frequency rule follows these lists.
For the update frequency characteristic, the following categories were chosen:
• High update frequency, if the IO was updated more than 10 times in its lifetime. The field study lasted five months, so an IO updated more than 10 times corresponds to information changes at least every two weeks.
• Average update frequency, if the IO was updated between 4 and 9 times (i.e. information updated at least once a month).
• Low update frequency, if the IO was updated 3 times or less (i.e. very few information changes).
The update duration column is filled with the average time (in hours) the person in charge of the IO needs to update the IO information (based on experience).
Concerning the possibility of modification, the following rule was applied: either the content of the IO is modifiable by the users, and the object is an open one, or the content is definitively fixed by the person in charge of the IO, and the object is a closed one.
Concerning the information structure, the following rule was applied:
• if the IO is an official object of the company (official document, official content of the ERP, etc.), the IO information is considered structured information (SI);
• if the IO information is referenced (for example, Excel-sheet columns with explicit titles) and if the document is shared by various actors without needing additional context information, the IO information is considered semi-structured information (SSI);
• if the IO information is almost raw information (raw data) with no special layout, so that the person in charge of the IO is almost the only one to understand it, then it is considered non-structured information (NSI).
The information sensitivity was evaluated with respect to the project lead time and the project cost. As in almost all NPD projects, the switchgear development project was mainly steered according to lead time and cost criteria (in order of importance). So the following categories were established:
• High sensitivity of the IO information means that a change in the IO information has a direct impact on the project lead time (i.e. the final delivery date of the first customer product).
• Average sensitivity of the IO information means that an information change implies rework for some activities and thus an additional cost, but no delay in the project lead time.
• Low sensitivity means that the global impact of the information change is not significant with respect to the project lead time and the project cost.
Concerning the information evolution, the criterion chosen was the timing of the updates in the life-cycle of the IO:
• either the updates occurred mostly at the end of the IO life-cycle (slow evolution),
• or the updates occurred mostly close to the first release date of the IO (fast evolution).
All these rules and criteria led to the filled grid shown in figure 5.
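As a small sketch of how such a categorization rule can be operationalized (the function and label names are ours; note that the source leaves an IO updated exactly 10 times unclassified, so that case is treated by assumption here):

```python
def update_frequency_category(n_updates: int) -> str:
    """Categorize an IO's update frequency over the five-month study."""
    if n_updates > 10:
        return "high"         # changed at least every two weeks
    if 4 <= n_updates <= 10:  # the source specifies 4-9; exactly 10 is
        return "average"      # treated as "average" here by assumption
    return "low"              # 3 updates or fewer: very few changes
```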


Figure 5: Intermediary objects grid realized during the field study at Siemens

This grid helps to draw conclusions about the information exchange. Indeed, the objects listed in figure 5 carry information created by one project actor (the person in charge of the IO) and needed by several other project actors (the users of the IO). An overall view of the IO grid presented in figure 5 allows three remarks to be made. First of all, all the IO listed in this grid are "high sensitivity information" IO. Since the list presented in figure 5 is certainly not exhaustive, this characteristic shows that the listed objects are at least among the most important ones for the project. Second, it emerges that the Purchasing team is always cited as a "user" of the IO (IO n°1-7) and only once as a "person in charge of the IO" (n°4). This contradicts the expectation that the Purchasing team should be at the centre of the major information flow needed during production launch (as depicted in figure 4), which goes from R&D to Purchasing and from Purchasing to Production. The Purchasing team being only a "user" of IO seems to reveal that it had a "passive attitude" towards the information exchange. Third, R&D 2 is cited only once as user and person in charge of an IO (IO n°2). This denotes a limited involvement of the R&D 2 team in the information exchange during production launch. Going deeper into the IO grid, it is remarkable that IO n°1, 2 and 3 share a common profile: the information they enclose is not updated very often and evolves quickly. Even though the impact of the information they embody is high, these objects can be considered not to be the most critical ones of the list: since their information evolves quickly, the information exchange is not the riskiest one and hence collaboration around the object is not the most intense. On the contrary, the next four objects (4, 5, 6, 7) depicted in the grid are "slow evolution" information IO.


IO n°4, the NP Purchaser follow-up list, is an object shared by only two actors (Purchasing and Production). Since it is a closed object, it leaves Production little freedom about it. This IO is not considered the most interesting one for analyzing the information exchange. IO n°5, 6 and 7 are open objects shared by numerous users. As a result, these IO are believed to be the most critical ones for the global project collaboration; they are major supports for activity coordination within the project. To conclude the diagnosis with the help of the IO grid, IO n°7, the Components-to-buy list, can be identified as the most critical intermediary object of the production launch phase in the switchgear development project, for several reasons. First, this IO is part of the three most critical IO. Then, it also has the biggest number of users (almost all the NPD project actors). Lastly, it was updated very often, even though the update duration is very long (which makes the update particularly demanding).
4.3 SIT grid
The second diagnosis tool, the SIT grid, was implemented during the case study in the switchgear development project to allow the analysis of who exchanges information and when. As mentioned in section 2, the analysis presented in this paper with the SIT grid is limited to the synchronous interface spaces (i.e. meetings). In order to fill in the SIT grid, all the gatherings and meetings observed during the production launch phase of the switchgear development project were listed. Then, precise criteria were set up to determine the characteristics of these meetings (a small sketch of this classification follows below):
• If the meeting had participants who did not belong to the project team, then its information diffusion level is the public workspace.
• If the meeting concerned only actors of the project team and it was a formal meeting (officially set in the actors' schedules), then its information diffusion level is the project workspace.
• If the meeting concerned actors of the project team and it was an informal gathering (not scheduled in the participants' calendars), then its information diffusion level is considered the proximity workspace.
Figure 6 illustrates the SIT grid realized during the field study at Siemens.
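A minimal sketch of these meeting classification rules (the function and parameter names are illustrative, not from the original study):

```python
from typing import List, Set

def diffusion_level(participants: List[str], project_team: Set[str],
                    officially_scheduled: bool) -> str:
    """Classify a meeting's information diffusion level; the private
    workspace is excluded, as in the SIT grid."""
    if any(p not in project_team for p in participants):
        return "public workspace"      # external participants involved
    if officially_scheduled:
        return "project workspace"     # formal, in the actors' schedules
    return "proximity workspace"       # informal gathering
```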

Figure 6: SIT grid, second diagnosis tool implemented during the field study.

Two major conclusions can be drawn from this grid. First of all, this grid helps to identify the intense collaboration around the Purchasing team. Indeed, the Purchasing department is involved in almost all the meetings and gatherings listed in the SIT grid (all except n°9). Second, the SIT grid presented in figure 6 points out that a vast part of the project coordination was organized within small groups of actors. Only a few meetings to support project-wide coordination were officially scheduled (n°1, with the participation of R&D 2, and n°3 and 4, for a total of 16 meetings), while numerous formal and informal meetings took place locally between two to three actors' teams (n°5-12, for a total of 120 meetings). In the following section, the conclusions drawn from the realization of the IO and SIT grids will be confronted with the reality of the switchgear project, as experienced and recorded during the case study.

5 CONFRONTATION OF THE DIAGNOSIS TOOLS' CONCLUSIONS AND DISCUSSION
Several conclusions were drawn from the two diagnosis tools implemented during the case study, as presented in section 4. First of all, one of the conclusions drawn thanks to the IO grid is that the Purchasing team seemed to have a "passive attitude" towards information exchange, even though it was supposed to be at the centre of the information exchange (see figure 4). The analysis of the SIT grid, the second diagnosis tool, also revealed that the project actors developed many means to manage and secure the information exchange and the coordination within their interface with the Purchasing team. More generally, it became clear from general observations during the field study that there existed a real communication problem between Purchasing and the other actors of the NPD team. For example, the "Components-to-buy" list (IO n°7, figure 5) emerged from a need of the Production team to have a clearer view of the progress of purchasing activities. Moreover, a particularly heavy workload during production launch was identified for the Purchasing department, implying that the Purchasing department was less proactive in the project. In future projects, being aware of the need for project actors to secure the information exchange with the Purchasing team could be very useful in order to succeed more easily in the production launch phase. Second, in the IO grid, identifying the "Components-to-buy" list (IO n°7, see figure 5) as the most critical IO of the production launch phase urges one to improve and perfect this object for similar future projects. Several improvement possibilities are entailed in the diagnosis of the IO grid. Indeed, the IO grid shows that the information of IO n°7 is not totally formalized; it is only semi-structured. Moreover, IO n°7 is an open object (modifiable by users) used by numerous actors within the project. In future projects, an interesting improvement could be to define in the company's rules and procedures what exactly the content and frame of such an object should be. A generic template could, for example, be defined. It could avoid some interpretation mistakes noticed during the lifetime of the IO and thus improve the efficiency of this IO. Besides, the "update duration" characteristic signals that updating this list was extremely absorbing for the person in charge of it, even though it needed to be updated frequently (every week). Another interesting issue could be to facilitate the creation and update of this list, here again to improve the information exchange and thus the collaboration around this IO.


Lastly, the SIT grid draws attention to the fact that most of the collaboration between actors seemed to be localized and between small groups of actors. This conclusion also sheds a very interesting light on the communication problems observed between the R&D 2 team and the other teams during the field study within the switchgear development project. As the R&D 2 team was located in a different Siemens plant from all the other teams, the information exchange was more difficult and hence poorer. This led to a very light collaboration between the R&D 2 team and the others and hence to difficulties in the achievement of common activities. A major improvement for future production launch phases could be to pay attention to teams located in other plants and consequently secure the information exchange between all the teams. For example, scheduling several dedicated meetings could be an easy solution to implement. To conclude, the diagnosis tools presented in this paper are very helpful in a production launch context to investigate the information exchange among the project stakeholders. They are valuable for analyzing who, where, when and how information is exchanged and thus how collaboration is performed. These diagnosis tools enable the detection of weaknesses in the information exchange between project actors. Besides, the in-depth analysis provided by the two diagnosis tools, the IO grid and the SIT grid, allows possible improvement solutions to be found. A first limit of this work can be identified: the analysis of the interface times is limited in the SIT grid to the synchronous interface times. Even if it would be difficult to track each interface time, trying to take into account asynchronous interface times could bring valuable insights about parallel information flows. Increasing the number of IO and interface times studied could also be very interesting to get a more acute picture of the real information flows. As a final point, an interesting further research issue could be to add a quantitative dimension to this work.

6 BIBLIOGRAPHY

[1] Carrillo, J.E., Franza, R.M., 2006, Investing in product development and production capabilities: The crucial linkage between time-to-market and ramp-up time, European Journal of Operational Research, 171(2):536-556.
[2] Meier, H., Homuth, M., 2006, Holistic ramp-up management in SME networks, 16th CIRP International Design Seminar, Kananaskis, Canada.
[3] Clark, K.B., Fujimoto, T., 1991, Product Development Performance: Strategy, Organization and Management in the World Auto Industry, Harvard Business School Press.
[4] Clark, K.B., Wheelwright, S.C., 1992, Revolutionizing Product Development: Quantum Leaps in Speed, Efficiency and Quality, The Free Press, New York.
[5] Ulrich, K.T., Eppinger, S.D., 2004, Product Design and Development, third edition, McGraw-Hill.
[6] Krishnan, V., Ulrich, K.T., 2001, Product Development Decisions: A Review of the Literature, Management Science, 47(1):1-21.
[7] Winkler, H., Heins, M., Nyhuis, P., 2007, A controlling system based on cause-effect relationships for the ramp-up of production systems, Production Engineering, 1(1):103-111.
[8] Bohn, R.E., Terwiesch, C., 2001, Learning and process improvement during production ramp-up, International Journal of Production Economics, 70(1):1-19.
[9] Uffmann, J., Sihn, W., 2006, A concept for knowledge transfer between new product projects in the automotive industry, CIRP Annals.
[10] Säfsten, K., Fjällström, S., Berg, M., 2006, Production ramp-up in the manufacturing industry - Experiences from a project under extreme time pressure, 39th CIRP International Seminar on Manufacturing Systems, Ljubljana, Slovenia.
[11] Adler, P., 1995, Interdepartmental Interdependence and Coordination: The Case of the Design/Manufacturing Interface, Organization Science, 6:147-167.
[12] Säfsten, K., et al., 2006, The content and role of preparatory production activities in the product development to production interface, 16th CIRP International Design Seminar, Kananaskis, Canada.
[13] Koike, T., Blanco, E., Penz, B., 2005, Interface issues into Life Cycle Engineering agenda: Evidences from the relationships between design engineering and logistics, 12th CIRP Life Cycle Engineering Seminar - CIRP LCE, Grenoble, France.
[14] Dowlatshahi, S., 2000, Designer-buyer-supplier interface: Theory versus practice, International Journal of Production Economics, 63:111-130.
[15] Calantone, R.J., Dröge, C., Vickery, S., 2002, Investigating the manufacturing-marketing interface in new product development: does context affect the strength of relationships?, Journal of Operations Management, 20(3):273-287.
[16] Jeantet, A., Vinck, D., 1995, Mediating and commissioning objects in the sociotechnical process of product design: a conceptual approach, in: MacLean, D., Saviotti, P., Vinck, D. (Eds.), Management and New Technology: Design, Networks and Strategies, COST A3, Brussels, pp. 111-129.
[17] Krishnan, V., Eppinger, S.D., Whitney, D.E., 1997, A Model-Based Framework to Overlap Product Development Activities, Management Science, 43(4):437-451.
[18] Blanco, E., Grebici, K., Rieu, D., 2007, A Unified Framework to Manage Information Maturity in Design Process, International Journal of Product Development, 4(3-4):255-279.
[19] Gardoni, M., Frank, C., Vernadat, F., 2005, Knowledge capitalisation based on textual and graphical semi-structured and non-structured information: case study in an industrial research centre at EADS, Computers in Industry, 56:55-69.

The Drift of the Xsens Moven Motion Capturing Suit during Common Movements in a Working Environment R.G.J. Damgrave, D. Lutters Laboratory of Design, Production and Management, Faculty of Engineering Technology University of Twente, Enschede, The Netherlands [email protected]; [email protected]

Abstract
When using inertial motion capturing technology, the accuracy of the results is strongly influenced by the so-called drift. This paper describes the drift of an Xsens Moven motion capturing suit during common movements, measured in a standard working environment. The test is performed in a room not shielded from magnetic disturbances, to acquire insight into how accurate the absolute position determination of the motion capturing system is during use in a standard environment. The test is performed by walking a path with the same start and end point, and reviewing the measured difference in the absolute position of one sensor.
Keywords: Motion Capturing; Inertial Sensing; Accuracy

1 INTRODUCTION
Capturing the movements of persons can be done using different kinds of motion capturing systems. Every system has its own advantages and disadvantages. Selecting the best-fitting motion capturing system for a specific application involves aspects like range, ease of use, installation time, costs and, perhaps most importantly, accuracy. Capturing human motion is currently often used in the movie and entertainment industry, where the movements of human actors are used for controlling virtual models and for creating special effects and animations in movies or games. Other industries that use motion capturing at this moment are the sports industry (for examining the movements of professional athletes) and the biomechanical industry (for determining the movements of a prosthesis).
1.1 Types of motion capturing
Capturing the motion data of objects can be done with different types of sensors and methods. The techniques can be sorted into two categories: with a line of sight (optical) and without a line of sight (non-optical). Optical systems detect motion with the use of video cameras and specialized software. The cameras record the setting with the moving object(s) from multiple overlapping angles; the more angles used, the more precise the motion recognition can be. The software knows the exact position of the different cameras (the cameras have to be calibrated in the room) and recognizes how the objects move in every camera view. Combining this information with the known locations of the cameras makes it possible to detect how an object moves through space. The motion of the objects is thereby calculated in relation to the room or a fixed reference point, not in relation to each other; this results in an absolute position of objects in space. There are different methods used in camera-based motion tracking, which differ from each other in the way an object is followed. Most optical systems require a fixed spot on the body of the subject to be tracked. These spots are created by placing markers on the object. The marker type with the highest accuracy is active (emitting light), followed by the less accurate passive marker (reflecting light); capturing can also be done with no marker at all (lower accuracy). The cameras follow how the markers move through space from three angles, and the software can link a body model to the movements of the markers. In contrast to the optical tracking systems, the non-optical techniques detect motion based on the position of parts relative to each other. This means that the motion of a part is referenced to the position of the part to which it is connected. The advantage of this is that the tracked subject is not limited to a specified space. The detection of movements is done with sensors placed on the body of the subject, and the data is transmitted wirelessly to a computer. Therefore, no line of sight is needed between the receiver and the tracked subject, which makes it possible to track subjects even if they are inside or behind objects. The sensors detect six degrees of freedom, so not only the movement of a sensor but also its rotation. Therefore, fewer sensors are used in comparison with the markers of the optical systems. Within this category there are multiple methods for determining the motion of a subject; the biggest difference is in the type of sensor used. The sensor types used are gyroscopes, magnetometers, accelerometers and rotation sensors.
The Xsens Moven capturing system
The Xsens Moven system is a non-optical system, which uses 16 modules, each containing a gyroscope, magnetometer and acceleration sensor [2]. The system is based on inertial motion capturing. The data between the computer and the captured subject is transferred using a wireless protocol.

1.2 Accuracy
The accuracy of the systems differs a lot. Additionally, the type of accuracy differs within a system; it can be roughly separated into three categories:
• the accuracy of the position of the tracked subject in the room;
• the accuracy of large motor-skill movements, like those of arms and legs;
• the accuracy of small motor-skill movements, like those of fingers.
For many companies, the accuracy of the Xsens Moven motion capturing system is a very important factor in determining if and how to use the suit. Some applications require a higher accuracy than others. In the movie and entertainment industry (being the origin of the Moven suit) the accuracy is less important than, for example, the ease, range and speed of use. But in other industries, like the product development sector, the accuracy can be very important in some usage scenarios. Every motion capturing technique has its own maximum achievable accuracy, which can be negatively influenced by external factors and measurement errors [1]. One of the goals of this case study is to determine the expected accuracy of Moven motion capturing during normal use. The case study focuses on the overall accuracy with respect to the positioning of the captured subject in the environment. Based on the technique used in the Moven motion capturing suit, some expectations about the accuracy can be formulated: the magnetic field measured by the magnetic field sensors in the Moven suit can be influenced by metal or magnetic surroundings [3][4], which can cause miscalculations in the pose determination.
2 THE MOTION CAPTURING SUIT
The Xsens Moven suit [5] is based on miniature inertial sensors combined with biomechanical models and sensor fusion algorithms. The Moven system offers completely wireless capture of the six degrees of freedom of the body movements of a human. The data can be recorded or viewed in real time on a desktop or laptop. The suit is made of lycra (figure 1) and has a total of 16 inertial sensors built in, which are connected in a daisy chain. Each sensor module comprises 3D gyroscopes, 3D accelerometers and 3D magnetometers. All cables and sensors are embedded in the suit, and two transmitters on the back of the subject send the information to a receiver using Bluetooth.

Figure 1: The Xsens Moven suit

With the use of sensor fusion algorithms developed by Xsens, the inertial sensors provide absolute orientation values. These are used to transform the three-dimensional linear accelerations to global coordinates, which in turn are converted into the translations of body segments. The biomechanical body model in the Moven Studio software consists of 23 segments which are connected to each other with 22 ball-socket joints (figure 2). This model includes joint constraints to eliminate drift or sliding. These constraints comprise the possible angles and movements a normal human joint can make, ranging from a ball-socket joint, which can move and rotate in nearly every direction, to a joint which can only rotate in one direction. These limitations prevent the biomechanical model from making movements or poses which are impossible for a human.

Figure 2: Biomechanical model (figure by Xsens)

The wireless range of the suit is approximately 50 metres indoors and 150 metres outdoors. Because the system has no cameras, emitters or markers, it has no occlusion or line-of-sight restrictions. Therefore, the suit can be used in every room, or even outside, without preparing the environment. The setup phase of the suit is short; putting the suit on can be done by one person (the subject), or with a little help to speed up the process, and takes approximately 5 to 10 minutes (depending on the experience of the subject). The second step is to calibrate the suit. Calibration is required for the software to know the exact location of the sensors on the body of the subject and the current magnetic field in the room. Calibration is done by entering the body size information of the subject, like the height of different body parts and the arm span. After entering that information, it is necessary to let the subject stand in four different poses for a short while. All this takes up to 5 minutes. When calibration is completed, the subject can move freely, with nearly no obstruction from the suit. The suit can also be worn underneath normal clothes if the subject feels more comfortable with that. The Xsens Moven suit determines the pose of the body by combining the information from all sensors with a 23-segment biomechanical skeleton model of a human with 22 joints. The movement of all body parts is related to one sensor on the pelvis. This sensor is the virtual reference sensor of the complete body movement. This means that the suit knows what the pose of the subject is, but not how that subject is positioned in the environment. The advantage of this method is that the model behaviour and visualization are not related or linked to the environment.


The big disadvantage is the same: the system does not know where to position the model and what its interaction with the environment is.

Figure 3: The Moven Studio interface

The result of this is that the suit suffers from some drift: although the subject is standing still, the complete virtual model moves slowly in the 3D environment. Here, drift is defined as an unwanted sliding movement in the horizontal plane of the complete biomechanical model in the motion capturing software. This drift is caused by small changes in the magnetic field and errors in the measurement of one or more of the sensors. The suit uses the earth's magnetic field to determine and correct the movements in the horizontal plane, but metal and magnetic objects can change this magnetic field, which causes the suit to think it is moving in a different direction. There is no drift in the vertical plane because the calculation used for determining the vertical position is not influenced by any magnetic field.
2.1 Suit-to-body inaccuracy
Drift can also be caused by the movement of the sensors on the body of the subject, for example if the suit is slightly moving over the skin. These small errors cause minor miscalculations, and most of the time the effect is that the whole body of the captured subject moves a bit. To test how severe this drift of the suit is in the use scenarios of this research project, the accuracy of an Xsens Moven motion capturing suit has been tested in a standard workshop room at the University of Twente.
3 PREPARATION
3.1 The room
The room used is not a magnetically free or completely shielded environment. The reasons for this are that i) in an unshielded room drift is minimal in case there is no magnetic disturbance and ii) the case study focuses on 'everyday' circumstances. In theory, no drift will occur at all if no disturbance is measured. And because the main usage application of the suit will be in standard rooms, where a little disturbance can always occur, this will give the most reliable and useable results about the drift during the most common use. The room has a concrete floor, reinforced with steel. The floor is placed on a construction made of metal beams. One wall is made of wood, one is made of stone and two walls are made of glass and steel. Furthermore, the used area of the room was completely free of objects, except one table with a synthetic table-top and metal legs (see figure 4). The total size of the used room is approximately 5 by 7 metres. There were no other electronic devices powered up in the room, except the equipment used for the test and the standard available lights.


3.2 The equipment
The test was done using the Xsens Moven suit belonging to the University of Twente. The suit was supplied with the latest available firmware (as of May 2008). The data was captured on a laptop computer with a 2,00 GHz dual-core processor and 2 GB of RAM. This computer was capable of capturing the recording session at 100 Hz without any delays or slow-downs. The connection was made with the standard Bluetooth receivers provided by Xsens, and the software used was Moven Studio 2.0 (figure 3). Furthermore, the session was captured on video using a Sony Handycam with hard disk.
4 THE MEASUREMENT
Before every recording, a complete and successful calibration phase was carried out. Every session was recorded on the laptop using Moven Studio and captured on video using the camcorder. The floor of the room was visually divided into squares of 50 by 50 centimetres. This grid was mainly used to determine the start and end point of a recording session for the captured person. A recording session consists of walking a specified pattern in the room, lasting from 10 to 50 seconds (figure 4), whereby the start and end point are at the same location. Each pattern was first walked at slow speed, whereby at least one foot was touching the ground. Later on, the speed was increased, and eventually jumping and running were added to see how the suit responds when no part of the body is in contact with an object. Although using machines/robots would render more precise results and comparison material, the test used human motion for performing the patterns. The reason for this is that, on the one hand, it is the best representation of real use, but on the other hand, it also includes the problem that sensors move inside the suit or the suit moves over the body. A small measurement error can therefore be caused by the human aspects, but these errors will also occur in real use scenarios and should therefore be taken into account. Because the pattern has the same start and end point, the recording sessions on the computer should also show the character starting and ending at the same location. The video camera is only used for visual checks and documentation. If any drift occurs during the session, it will be especially visible using a top view of the motion capturing results on the computer, and the drift will be visible as a difference between the start and end point of a test session.

Figure 4: Performing the tests

5 THE RESULTS
During the tests, a total of 40 recording sessions were made. All these sessions were captured at 100 frames per second using Moven Studio 2.0. For the sessions in which there is always contact with the ground, no post-processing is used. In sessions where the ground is not touched for some moments, the motion capture is adjusted by post-processing the result to remove ground contact during those moments. This ground contact has to be removed afterwards because the Moven Studio software processes the data under the assumption that one point of the body is always touching the ground: it adjusts the model in such a way that the body part which is closest to the ground is placed on the ground. Because this would influence the position of the biomechanical model, these contact points have to be removed afterwards. After these contact points are removed manually, the software recalculates the model without sticking it to the ground. To review the accuracy of the captured sessions, the 3D movement data of the suit is converted into the Biovision Hierarchy format (.bvh). This file includes a list of the exact coordinates of each sensor, 100 times per second, in a readable text format. This information is filtered to include only the movements of the reference sensor on the pelvis. This is done because that sensor is in the middle of the body and suffers the least from small pose changes in the arms and legs.
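As a minimal sketch of this filtering step (assuming, as is usual for BVH exports, that the pelvis is the root joint and that its first three motion channels are the X, Y and Z positions; the function name is ours):

```python
def pelvis_xz_from_bvh(path: str, step: int = 3):
    """Extract the pelvis horizontal trajectory from a .bvh file,
    keeping roughly every third frame (100 Hz -> ~33 Hz)."""
    with open(path) as f:
        lines = f.read().splitlines()
    motion_start = lines.index("MOTION") + 3  # skip Frames:/Frame Time:
    xz = []
    for row in lines[motion_start::step]:
        values = row.split()
        if len(values) >= 3:
            x, _, z = (float(v) for v in values[:3])  # drop vertical (y)
            xz.append((x, z))
    return xz
```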

Figure 5: Result in a diagram from one test

The x, y and z movement data from that sensor is imported into a spreadsheet. The x and z movements are placed in a scatter diagram to see how they move around in space. This results in a diagram (figure 5) which is in fact the top view of the location of the pelvis sensor. This diagram shows exactly how the pelvis (reference) sensor moves in the x and z directions. The y direction (vertical movement) is not evaluated because there is no drift in that direction, due to the settings of the Moven Studio software, which always places one sensor on the ground. The coordinates from the system were reviewed at 30 frames per second. This conversion from 100 to 30 frames per second was done because the start and end point were most important and the steps in between were less important. Lowering the number of steps between the start and the end made the calculation easier and faster, without losing important information. In the diagrams, the deviation of the suit is visible as an opening between the start and end point. This opening can have several causes. Part of the deviation is caused by a small difference in the pose of the user: with a human test subject it is nearly impossible to let the user stand in exactly the same position and pose at the start and the end of the test, and even a small difference in the pose of the user will be visible in the results, especially when the hips are moved to the front or the back. The deviation caused by this effect can reach up to 15 centimetres. The second cause of deviation is the movement of the sensor inside the suit and over the skin of the user. The sensor is placed in the suit, which is not completely fixed to the body of the user; when the user moves, the complete suit or a single sensor can move slightly. Although this deviation will not exceed a few centimetres, it is important to take it into account. The remaining part of the deviation can be ascribed to the drift of the complete motion capturing suit. This drift is the sum of all small measurement errors during the whole motion capture session. Table 1 gives an overview of the distance between the start and end point of the reference sensor, the total distance walked in that session, the time it took, and the resulting deviation per metre of movement over the horizontal plane and per second of recording time. Twelve sessions are included in the table; this selection from the 40 available sessions was made according to the variety of movements. The test results show that, in general, slow movements cause less drift of the suit than fast movements do. Most of the small movements will not exceed a deviation of 1,8 cm per metre moved or 1,6 cm per second of recording. An exception to this is session 17, in which the movements were slow but the drift is more than double that of the other slow-movement sessions. This may be caused by the larger steps made during that session: especially the steps of 100 cm can cause the feet to stand still for a long time while the rest of the body is moving. That this aspect can cause drift is visible in session 26: although there was nearly no movement during that session, an enormous drift occurred. This means that when standing still, the suit starts to drift more than while moving slowly. A reason for this can be that, in the calculations of the suit, measurement errors are corrected by comparing them with the movement of other sensors. When all sensors are standing still (or at least the sensors which touch the ground), it is more difficult to separate the real movement data from the error data. These kinds of effects can be decreased by post-processing the results and locking the position of the feet sensors when they touch the ground. This requires more work afterwards, but increases the accuracy; unfortunately, it cannot be used in real-time recording. Another aspect which causes drift is when the user is not in contact with the ground, for example while jumping. The suit has no reference to a fixed position at that moment and can therefore make fewer corrections for filtering out errors. Another consequence of jumping is that the suit and sensors have a bigger chance of moving over the body of the subject: the shock when the body lands can cause the sensors to move, or to measure enormous peaks in the movement. In general, the conclusion is that the faster the movements, the less ground contact there is, and the longer the movements last, the more drift will occur.
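The deviation indicators in Table 1 can be reproduced from the recorded trajectory; a minimal sketch (the helper name drift_metrics is ours) could look as follows:

```python
import numpy as np

def drift_metrics(xz_cm, duration_s):
    """Drift indicators as used in Table 1, from the pelvis trajectory
    projected on the horizontal plane (positions in cm)."""
    xz = np.asarray(xz_cm, dtype=float)
    gap_cm = np.linalg.norm(xz[-1] - xz[0])                      # start/end opening
    path_cm = np.linalg.norm(np.diff(xz, axis=0), axis=1).sum()  # distance walked
    return gap_cm, gap_cm / (path_cm / 100.0), gap_cm / duration_s

# Session 04, for example: a 16,56 cm opening after 1088 cm walked in
# 10,33 s gives 16.56 / 10.88 = 1.52 cm per metre and
# 16.56 / 10.33 = 1.60 cm per second, matching the first row of Table 1.
```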


| Session | Description | Δ(t_start–t_end) (cm) | Total distance (cm) | Total time (s) | Deviation/m (cm) | Deviation/s (cm) |
|---|---|---|---|---|---|---|
| 04 | Slow walk in line | 16,56 | 1088 | 10,33 | 1,522 | 1,602 |
| 06 | Slow walk in line 2x | 58,15 | 3387 | 48,33 | 1,717 | 1,203 |
| 08 | Slow random walk | 38,54 | 2123 | 26,67 | 1,815 | 1,445 |
| 13 | Fast walk in circle | 10,46 | 983 | 10,33 | 1,064 | 1,013 |
| 14 | Fast walk in line 2x | 52,64 | 2114 | 12,67 | 2,490 | 4,156 |
| 16 | Slow shuffle left-right and walk | 31,13 | 3562 | 42,17 | 0,874 | 0,738 |
| 17 | Slow steps of 50cm and 100cm 2x | 79,74 | 2422 | 27,90 | 3,293 | 2,858 |
| 24 | Fast shuffle left-right and walk | 109,33 | 2662 | 25,50 | 4,107 | 4,287 |
| 26 | Standing still for long time | 68,12 | 939 | 37,00 | 7,258 | 1,841 |
| 29 | Long combination of fast running and walking | 162,13 | 10319 | 91,00 | 1,571 | 1,782 |
| 32 | Run a line several times | 48,17 | 3658 | 28,23 | 1,317 | 1,706 |
| 36 | Jump in a line, less floor contact | 221,52 | 3035 | 23,67 | 7,298 | 9,360 |
|  | Average |  |  |  | 2,860 | 2,666 |

Table 1: Results from 12 test sessions

6 MINIMIZING OR PREVENTING THE DRIFT
There are multiple options for minimizing or preventing the drift of the Xsens Moven suit. The whole problem is caused by the magnetic field sensors integrated in each sensor module. The best solution would be to find a replacement technique for determining the orientation in the horizontal plane without using the earth's magnetic field. This could be an external system which determines the absolute position of the sensors in space; an optical motion tracking or local positioning system, which refers to a fixed object in the room, could be used. The disadvantage of that solution is that those techniques limit the area of use and increase the setup time of the motion capturing system. Another option is to minimize the drift by performing small calibrations during the use of the suit. This can be done, for example, by placing a sensor module at a predefined place in the environment, whereby the software can compare the measured position with the real position of the sensor module and adjust the values accordingly. If the user is interacting with an object which is also available in the virtual environment, a quick calibration can be made by using pressure sensors on the contact points of the body (hands, feet, back, and bottom). If a pressure sensor registers contact with an object, the software can check whether the virtual model is also in contact with an object. If not, an error has been measured and the software should compensate for it. This means that the complete real environment of the recording session must be available in the virtual environment of the model.
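One simple offline realisation of the calibration idea above, assuming the subject is known to start and end at surveyed reference positions, is to spread the measured position error linearly over the session (an illustrative sketch, not the Moven Studio behaviour):

```python
import numpy as np

def remove_linear_drift(xz, true_start, true_end):
    """Distribute the start and end position errors linearly over all frames.
    xz is an (N, 2) horizontal-plane trajectory; true_start and true_end are
    the known reference positions at the first and last frame."""
    xz = np.asarray(xz, dtype=float)
    err_start = xz[0] - np.asarray(true_start, dtype=float)
    err_end = xz[-1] - np.asarray(true_end, dtype=float)
    t = np.linspace(0.0, 1.0, len(xz))[:, None]   # 0 at start, 1 at end
    return xz - ((1.0 - t) * err_start + t * err_end)
```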

7 CONCLUSION
One of the major problems of an inertial motion capturing system is drift in the horizontal plane. During the test sessions, drift was encountered especially at moments when there was no contact with the ground, or during fast movements such as running. This means that the drift should be taken into account nearly all the time. During slow and small movements the drift is smaller, but the movement of the body is also smaller; the relative measured error is therefore in line with the movements made. This drift can be a problem in scenarios where the position of the user in the room is very important. In those usage scenarios an additional motion capturing or location determination technique should be added to the Xsens Moven suit to make it useful. In situations where the pose of the captured person is the most important data and the absolute location of the subject matters less, the Xsens Moven suit can be a useful tool.

8 REFERENCES
[1] Luinge, H.J., 2002, PhD Thesis, 'Inertial Sensing of Human Movement', University of Twente, The Netherlands
[2] Roetenberg, D., Luinge, H., Slycke, P., 2008, 'Moven: Full 6DOF Human Motion Tracking Using Miniature Inertial Sensors'
[3] Roetenberg, D., 2006, PhD Thesis, 'Inertial and Magnetic Sensing of Human Motion', University of Twente, The Netherlands
[4] O'Brien, J.F., Bodenheimer Jr., E., Brostow, G.J., Hodgins, J.K., 2000, 'Automatic Joint Parameter Estimation from Magnetic Motion Capture Data'
[5] Xsens Technologies, http://www.xsens.com

Reconfigurable Micro-mould for the Manufacture of Truly 3D Polymer Microfluidic Devices

S. Marson, U. Attia, D. M. Allen, P. Tipler1, T. Jin, J. Hedge, J.R. Alcock
Precision Engineering Centre, Cranfield University, Bedfordshire, UK
1 Battenfeld UK Ltd., High Wycombe, Buckinghamshire, UK
[email protected]

Abstract
This paper concerns the concept, design and manufacturing steps for the fabrication of a precision mould for the micro-injection moulding of truly three-dimensional microfluidic devices. The mould was designed using the concept of replaceable cavities, to enable flexible development of the complex microfluidic device and to reduce machining time, and therefore cost, during the prototyping, testing and subsequent production phases. The precision machining technique used for the cavity manufacture was micromilling.

Keywords: Mould, Micromachining, Microfluidics

1 INTRODUCTION
The demand for low-cost, high-quality miniature parts in the medical technology sector is growing rapidly, and the ability to introduce new microparts to the market depends on finding methods for manufacturing parts in high volumes and at low cost while ensuring high product reliability. These characteristics are particularly important for those medical products where devices must be disposable for safety considerations. Micro-injection moulding (µ-IM) is a microreplication technique that offers mass-production capabilities for polymer parts at relatively low cost, with short cycle times (a few seconds), full automation, accurate replication and good dimensional control. Hence, micro-injection moulding is currently used commercially for the production of a number of miniaturised biomedical devices. Similarly to conventional injection moulding, µ-IM is a technology in which a thermoplastic material is fed in the form of granules into the plasticating unit and then injected at high pressure into a mould, which is the inverse of the desired shape. The molten polymer freezes in the mould, becoming a solid part, which is then released by opening the mould and ejecting the plastic part with a set of ejection pins. The whole process is normally very fast, with production cycles of a few seconds. In µ-IM, the mould cavities contain features of micrometre (µm) dimensions which need to be completely filled by the polymer melt. In many cases this requires the process to be adapted, by removing air entrapped in the small features and by using additional heating elements to account for the very fast cooling of the injected melt in the small, cold mould micro-features. Moreover, in order to ensure proper cavity filling, high injection speeds and pressures are required. Machines for performing micro-injection moulding need to possess the following characteristics:
- small plasticating units, to avoid prolonged residence of the polymer melt, which could result in material degradation



- precise and repeatable shot volume control, to carefully meter the volume of material required; no material cushion must reside in the injection unit, in order to ensure material uniformity
- adjustable injection speed and pressure
- precise mould alignment and gentle open/close mould movements, to avoid deformation of the small mould features.

One of the focal points of the work currently ongoing within the Precision Engineering Centre at Cranfield University is the investigation of µ-IM as a potential technology for the high-volume manufacture of a specific category of biomedical devices commonly called microfluidic devices or "lab-on-a-chip". Lab-on-a-chip is a term for devices that integrate multiple laboratory functions on a single chip of only millimetres to a few square centimetres in size and that are capable of handling extremely small fluid volumes, down to less than picolitres. This category of products is being widely investigated at the prototype level; however, examples of polymer microfluidic devices successfully introduced to the market are very few. Since the introduction of lab-on-a-chip devices in the early 1990s, glass has been the dominant substrate material for their fabrication [1], because of its material properties and because the fabrication methods were well established in the semiconductor industry; however, the cost of producing systems in glass is driving commercial producers to seek other materials. Commercial manufacturers of microfluidic devices see many benefits in employing plastics. Polymers are a group of materials offering several advantages over conventional materials such as glass, silicon or metals [2], e.g. a wide variety of tuneable properties, relatively low costs, relative simplicity of processing and accurate repeatability in high-volume production. As part of the EPSRC-funded project 3D-Mintegration (EP/C534212/1), a multidisciplinary team based at Cranfield University and Heriot-Watt University has identified and designed a versatile, generic module for

use in the preparation of blood samples necessary for a number of lab-on-a-chip diagnostic devices based on blood analysis. The element under consideration is a blood/plasma separator aimed at producing high-efficiency plasma separation with the simplest possible design, to compete with conventional plasma extraction methods such as centrifugation, blood filtration or CD-like platforms [3]. The biomechanical Fahraeus and Zweifach-Fung effects are combined in the device design to produce a separation between blood cells and plasma within the microchannels. No filtration is used at any stage of the process, which results in a clog-free system. The method benefits from the natural plasma "skimming effect" in microchannels of dimensions below 300 µm [4], [5], [6]. This paper describes the design and manufacturing steps of a truly 3D microfluidic device for blood/plasma separation. The expression "truly 3D" here refers to those plastic parts produced by µ-IM whose geometry is such that they would not normally be demouldable; a way to overcome this limitation is to produce the 3D parts by lamination. The polymer microfluidic device was designed for functionality, manufacturability by µ-IM and easy assembly. Moreover, the micromould was designed as a set of replaceable inserts with the aim of minimising the mould manufacturing costs and increasing the responsiveness of the process to subsequent changes and adaptation.

2 MICROFLUIDIC DEVICE DESIGN
The initial design proposed [8] was based on a 2,5D structure (fig. 1), characterised by a 25 µm constriction in the whole blood inlet channel, followed by several bifurcating plasma subchannels 20 µm wide and deep. The separation of whole blood (which to a first approximation can be seen as a suspension of red blood cells in plasma) into its basic components of red blood cells and plasma is made possible in the microchannel structure by biomechanical effects. The performance of such systems is believed to be governed by the channel width ratios and the length of the constrictions; however, there is currently no definite design rule for determining the exact channel dimensions required to achieve efficient separation.

Fig. 1: Design of a 2,5D microfluidic device for plasma/blood separation [8]

The initial 2,5D design concept was reconsidered and a new 3D design was proposed. The 3D design was developed by re-conceptualising the 2D channels as 3D "disk spaces" around the inlet channel. A cross section of the design is shown in fig. 2. The polymer chosen for the plastic parts was polymethylmethacrylate (PMMA), because of its good haemocompatibility and because it is suitable for direct welding techniques.


Fig. 2: Design of the 3D microfluidic device for plasma/blood separation.

The 3D microfluidic device was designed by lamination of 5 PMMA discs, all produced from the same polymer shot during the micro-injection moulding process. This device, which consists of one unit for the constriction channel, two separation discs (each equivalent to a plasma subchannel in the 2,5D design), one blood inlet and one blood/plasma outlet, has a number of benefits compared to the 2,5D version:
- overall volume optimisation, which allows a more compact product;
- use of 2 of the functional layers to act as a top and bottom lid for the device, thereby avoiding the need to manufacture the lid in a separate process;
- optimisation of the area involved in the separation (the small channels of the 2,5D design become thin discs in the 3D design);
- modularity, via the addition of separation units in the basic module to incorporate extra separation channels if required (fig. 2);
- potential for the integration of other modular units in series with the blood separation module (for example, mixing units, detection, etc.).

Fig. 3: Laminated structure of the 3D microfluidic device

The initial dimensions as proposed by the designers in the 2,5D model were reconsidered, bearing in mind the available manufacturing processes for micromoulds and the relative lack of clear design guidelines for optimising the required dimensions of the microchannels. The new


proposed critical dimensions are shown in fig. 4. Experimental trials will determine the tolerances required on the critical dimensions.

3 DESIGN FOR MANUFACTURING BY MICRO-INJECTION MOULDING
The 3D microfluidic device was designed to be manufactured by µ-IM using a Battenfeld Microsystem 50. The 5 PMMA slices comprising the final 3D device were designed to be moulded in one shot, with the aim of minimising the shot-to-shot variability that can occur during the moulding process. The restriction on the maximum polymer shot volume, which is approximately 1.1 cm3 for the particular model available at Cranfield University, was also taken into account in the parts design.

Fig. 4: Critical dimensions in the 3D microfluidic device (dimensions labelled in the figure: 400 µm, 50 µm, 50 µm and 100 µm)

With regard to the micromould, this was manufactured by adapting an existing two-plate mould for cost considerations. This posed two major constraints: 1) the accessible surface area was restricted to about 25x25 mm2, and 2) the 5 slices were designed with features on one side only, to comply with the pre-existing two-plate mould.

4 DESIGN FOR ASSEMBLY AND JOINING
The plastic microfluidic device was designed for easy assembly. The overall size of the device (10x5 mm) allows manual handling during the assembly trials (a process which is expected to become fully automated during the production cycle). Moreover, the device has rotational symmetry of the parts around the central channel, requiring alignment of the central axis only. The most critical part for alignment is the 100 µm constriction; however, this was designed so that it is completely within one side of the plastic inserts and overlaps with a much larger feature (the large inlet channel, which has a diameter of 400 µm; bottom disc in figures 3 and 4). The joining process is seen as a very critical step, as the device must be leak-proof for correct functioning. Two different joining techniques will be evaluated: Transmission Laser Welding (TLW) and ultrasonic welding. It is expected that ultrasonic welding will prove more successful during the initial trials, because of the relative ease of tuning the process parameters; however, TLW will also be investigated, because it holds the potential for a clean and fast serial-production joining method for microfluidic devices. The polymer material selected for the device manufacture (PMMA) is in principle suitable for both techniques.

5 MICROMOULD MANUFACTURE
The mould insert was designed as a set of 5 interchangeable elements (fig. 5 and 6).

Fig. 5: Solid model of the micromould insert. The shading is used to indicate the different parts of the mould, not to represent different materials

Fig. 6: Exploded micromould assembly including the ejection system, insert holder and five inserts (parts labelled in the figure: inserts, inserts holder, mould plate, ejection system)

A configuration based on replaceable inserts such as the one proposed here has a number of potential advantages, in particular during the prototyping and pre-production stages. First, it allows optimisation of the microfluidic device design, which will be discussed below. This is crucial in particular for blood microfluidic devices, where support from simulations or modelling is absent because of the limited knowledge of blood rheology in microchannels and because of the difficulties in simulating complex fluids such as blood. A similar problem also exists in the simulation of polymer flow behaviour in microstructured inserts. Flow simulation software programs have proved very successful with conventional mouldings and allow the feasibility of a moulding process for component manufacture to be investigated without a costly R&D moulding trial. However, these software packages cannot be applied to micromoulding, as they lose accuracy when considering micro-scale flows [7]. To overcome these limitations, micromoulders typically rely on feedback from the moulded parts to optimise the mould tools. This requires measuring the first runs of polymer parts to correlate against predictions and re-machining the mould cavity to specification. However, because of the complexity of the manufacturing processes and the dimensions of the cavity, it can be very difficult, costly and time-consuming to modify an existing micromould. Developing a mould in which the cavities are replaceable is therefore highly desirable; however, this poses new challenges during micromould manufacture, because of the tight tolerances required between the inserts and the insert holder to prevent polymer flash, which may occur because of the high-pressure, high-speed conditions of the µ-IM process. The ejection system also needs to travel a more complex path through the various parts. This creates new requirements during the micromould manufacturing and assembly steps to ensure a smooth ejection process. Both the insert holder and the inserts were fabricated in Alumold 1-500 (Alcan). From a functional point of view, this type of aluminium was selected because it is appropriate for injection moulds and can be used as an alternative to the more commonly used steel. From a machining point of view, Alumold 1-500 was selected because it is a highly machinable aluminium alloy and is suitable for micromilling, polishing and, if required, subsequent diamond turning. Diamond turning is not suitable for machining steels because of the extensive diamond tool wear it causes. All five inserts were fabricated by micromilling on a Kern micromilling centre, using the CAD/CAM software Cimatron E7.1. This software package supports micromilling functions and produces optimal tool paths and the CNC programme for making the precise mould inserts. Fig. 8 shows an SEM micrograph of one of the 5 inserts, and fig. 9 the model of the respective plastic part. The cutting strategy for each insert consisted of a roughing step and a subsequent finishing step to remove the top 0.1 mm layer. Both roughing and finishing steps were performed using tungsten carbide flat-end milling cutters. The process parameters for cutting the insert of fig. 8, including the 4 slots, are shown in table 1. The overall machining time was 15 minutes.

| Tool diameter (mm) | Roughing: feed rate (mm/s) | Roughing: rotational speed (rpm) | Finishing: feed rate (mm/s) | Finishing: rotational speed (rpm) |
|---|---|---|---|---|
| 2 | 200 | 5000 | / | / |
| 1.5 | 200 | 8000 | 150 | 8000 |
| 1 (4 slots) | 200 | 8000 | 200 | 8000 |

Table 1: Micromilling process parameters
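As a side note (not from the paper), the spindle speeds in Table 1 can be related to the peripheral cutting speed v_c = π·D·n; the small helper below, with illustrative names, shows that the chosen speeds keep v_c in a similar band of roughly 25-38 m/min across the three tool diameters:

```python
import math

def cutting_speed(diameter_mm: float, rpm: float) -> float:
    """Peripheral cutting speed v_c = pi * D * n, returned in m/min."""
    return math.pi * diameter_mm * rpm / 1000.0

for d, n in [(2.0, 5000), (1.5, 8000), (1.0, 8000)]:
    print(f"D = {d} mm at {n} rpm -> v_c = {cutting_speed(d, n):.1f} m/min")
# D = 2.0 mm at 5000 rpm -> v_c = 31.4 m/min
# D = 1.5 mm at 8000 rpm -> v_c = 37.7 m/min
# D = 1.0 mm at 8000 rpm -> v_c = 25.1 m/min
```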

Fig. 8: Metal micromould

Fig. 9: Plastic part replicating the mould of fig. 8

The holes visible at the top and bottom of the micrograph in fig. 8 were machined for the ejection system. Once completed, each insert's outside diameter was machined to fit the insert holder with an H7/h6 sliding fit.

6 CONCLUSIONS
This paper describes the design and manufacture of a micromould for the manufacture of a polymer 3D microfluidic device.
- The 3D polymer device was designed to function as a blood/plasma separator for a lab-on-a-chip diagnostic device and was realised by lamination of 5 layers.
- The micromould was designed as a set of replaceable inserts, to allow for adaptations in the microfluidic design during the research, development and prototyping stages.
- The mould cavities were manufactured using a micromilling centre, by adapting a two-plate mould designed to fit onto a Battenfeld Microsystem 50 micro-injection moulding machine.
- The efficiency of the product development stage is believed to be greatly improved by the use of moulds with replaceable inserts. This allows easier testing of design prototypes, especially for those products where clear design guidelines are not available.

7 REFERENCES


[1] D. Jed Harrison et al., Micromachining a Miniaturized Capillary Electrophoresis-Based Chemical Analysis System on a Chip, Science, vol. 261 (1993), pp. 895-897.
[2] H. Becker and L.E. Locascio, Polymer microfluidic devices, Talanta, vol. 56 (2002), pp. 267-287.
[3] Madou, M.J. and Kellogg, G.J., The LabCD: A centrifuge-based microfluidic platform for diagnostics, in Proc. SPIE Systems and Technologies for Clinical Diagnostics and Drug Discovery, vol. 3259 (1998), pp. 80-93.
[4] Fung, Y.C., Biomechanics, Springer Verlag Publishers (2004).
[5] Yang, S., et al., A microfluidic device for continuous, real time blood plasma separation, Lab on a Chip, vol. 6 (2006), pp. 871-880.
[6] Faivre, M., et al., Geometrical focusing of cells in a microfluidic device: An approach to separate blood plasma, Biorheology, vol. 43 (2006), pp. 147-159.
[7] B. Whiteside and P. Manser, Reinventing Micro and Nano moulding, Medical Device Technology, March 2007.
[8] M. Kersaudy-Kerhoas, et al., Design, Manufacturing and Test of Disposable Microfluidic System for Blood-Plasma Separation, Lab-on-a-chip World Congress, Edinburgh, Scotland, May 2007, accessed at http://www.eposters.net/index.aspx?ID=1046 (16/10/07), ref number: EP10396.

Invited Paper

Creative Approaches in Product Design

H. Abdalla1 and F. Salah2
1 Product and Spatial Design Department, De Montfort University, Fletcher Building, The Gateway, Leicester LE1 9BH, UK
2 Interior Design Department, Philadelphia University, P.O. Box 1, Area Code 19392, Jordan
[email protected], [email protected]

Abstract
This research paper presents a knowledge-based system for creative design tools (CDT). The main aim of the developed CDT system is to provide designers with a flexible creative design environment to enhance their creative design thinking. Several creative thinking tools are developed and integrated with constructive knowledge databases to widen the search space and expand the design domain. CDT incorporates a user interface, creativity tools, knowledge databases and five design modules, namely: preparation, concept generation, design development, evaluation, and detailed design. A case study is discussed and demonstrated to validate the developed system.

Keywords: Creative design, Design computation, Creative design tools, Product design, Knowledge-based systems

1 INTRODUCTION
Design is considered a goal-oriented, problem-solving activity that relies on several factors, namely human experience, creative thinking and related knowledge. Creative design thinking can be supported by providing suitable means: creative design environments, developed using design computation, which incorporate the necessary elements for creative thinking. Such creative design environments require the investigation of various domains such as design, creativity, design computation, and collaboration in design. This research paper presents a holistic approach to developing such computational design environments, using advanced computer technologies to support creative design thinking. The main aim is to provide design groups with flexible design environments, to escape from the routine design space to a more non-routine design solution space [1]. The identification of new design knowledge through fluent interaction between designers and knowledge [2], flexibility in defining requirements and constraints, and integration of the whole design process with creative methods and tools that can assist the creation of new design solution spaces, with several channels of exploration open in parallel [3] [4], are considered essential to achieving that main aim. Creativity involves four components: the creative process, the creative product, the creative person, and the creative situation [5]. Abstraction is vital to the creative process, through using old information and expressing it in an abstract form [6]; this helps the identification of goals and the development of new creative ideas. To meet these goals, other means such as processes and methods are essential. Three types of creativity are distinguished according to the process used in generating creative ideas: combinational, exploratory, and transformational [7]. Various ways of encouraging creativity are relevant, depending on its type. For the combinational type, for example, widening general design knowledge, experimenting with unfamiliar combinations, and using evaluation criteria to pick the proper solutions have been proposed. Creative methods usually work by increasing the flow of ideas, removing mental blocks, and widening the search area for solutions [8]. These creative methods can be supported by special types of tools. For example, to increase the flow of ideas, brainstorming can be used to produce a large number of alternatives. Furthermore, the



use of analogical thinking can eliminate mental blocks. On the other hand, widening the search space can be achieved through evolutionary and combinational tools [9] [10] [11]. Various creative design models have been proposed in the literature. Simon's model [12] was based on the personal view of creativity, hypothesizing that creativity is a special kind of problem-solving behaviour characterised by novelty and value of the resulting product for the designer and his/her culture, unconventional thinking, high motivation, and ill-defined problems to be formulated through the design process. On the other hand, Csikszentmihalyi developed the social-cultural creativity model, concerned with where creativity is rather than what creativity is [13]. His framework is composed of three major elements: the person, the field, and the domain. The occurrence of a creative idea, object, or action is determined by the joint relation between these three elements: an idea is recognised as creative if both the person and, additionally, the society recognise it as such. Based on those two models, Liu proposed the dual generate-and-test model [14]. This dual model encapsulates two generate-and-test loops: one at the level of the individual, and the other at the level of society. Creative design is not fixing the problem and then searching for a solution; rather, it involves developing and refining both the problem formulation and the solution [15]. This can be achieved by repeated analysis, synthesis, and evaluation between the problem space and the solution space. The essential stages of divergence, transformation, and convergence were proposed in Hsiao and Chou's model of a creativity-based design process [16]. Their method combines personal sensory behaviours such as looking, thinking, comparing, and describing with stimulation, an extrinsic influence of the environment. Requirements capture (RC) is usually at the front end of the design process in any new product development. It is the process of researching and identifying the customer, user, market, design, and technical requirements. It is essential to conduct a thorough RC, through information gathering, information transformation, and requirements generation, to provide a basis on which to build design solutions and synthesis [17] [18]. Conceptual design is one of the early stages of design and demands the greatest creativity. Its main aim is to produce design principles concerning the product form and function

to satisfy requirements and be competent [19]. Large numbers of concepts are usually generated at this stage. Two main steps of divergence and convergence are identified in conceptual design and have been discussed by various researchers [20] [21]. A multiple divergence-convergence approach was proposed to increase the number of generated concepts, bring them to a level beyond abstraction that can be understood by designers, and then reduce the solution space [22]. It has been recognised that visualisation facilitates concept generation in any design process [23] [24]. Designers externalise their ideas using sketching or other forms of representation such as diagrams, concept maps, or documents created using computers. The represented results inspire designers to generate new ideas or concepts [25] [26]. Creative problem solving in design using visual creative tools has been discussed in the literature. Visual and classical brainstorming have proved to assist design groups in their concept generation process [27]. Concept mapping has proved to support creative thinking in general [28] and creative design thinking in particular [29] [30] [31]. It presents a holistic approach by making the structure of the problem more readable, and acts as a memory aid for reviewing the design problem at any stage of the design process [32]. Analogical reasoning is another technique for widening the design solution space. It is the transfer of knowledge between domains based on similarities between the target and the source space. It involves three major phases: (1) identifying the source candidates for analogy matching and retrieval; (2) mapping the source candidate onto the target; and (3) transferring knowledge between the source and the target [33] [34]. It has been recognised that combination and evolution play an important role in the production of creative work in various disciplines, design being one of them. Combination involves combining two design concepts, or subsets of them, from similar or dissimilar, unrelated ideas [35]; the combination can occur at various levels. Furthermore, evolutionary search algorithms consider a population of slightly different solutions at once, and new generations are then created through crossover and mutation. This tool has proved to produce creative design solutions [33] [36]. Collaboration has been recognised to support the design process by minimising the lead time of product development through the sharing of information and resources between individuals and organisations. Several researchers have indicated the benefits of collaboration throughout the different design processes [37] [38] [39]. Two collaboration modes have been addressed: a horizontal and a hierarchical mode. These collaboration modes are complementary in function: they establish a vertical linkage between the design and manufacturing processes, and a horizontal linkage of team work in the design phases [40]. The literature review indicated that a holistic approach is needed to enhance creative design thinking among design teams, by providing an integrated, flexible design environment with the proper knowledge, processes, and creative tools. Several limitations were identified in the existing creative design models: they lacked the integration of various creative tools and processes with proper constructive design knowledge, the use of a flexible design representation which can be adjusted by designers and reflected to all users immediately, and distance collaboration among design teams at various locations in the early stages of design.
To overcome those limitations, the CDT system has been developed taking into consideration all four aspects of creative design: the creative process, the creative product, the creative person, and the creative situation.

2 CREATIVE DESIGN TOOLS MODEL
An integrated design framework is required to achieve efficiency, creativity and synchronisation among design groups. Such a framework needs to be flexible to comply with the dynamic nature of the design process, provide complementary support for designers' thinking activities, support a smooth interface between designers and knowledge, and achieve interdisciplinary interaction between the various processes of design. The proposed system framework encompasses constructive knowledge databases, creative tools, five design modules, and a user-friendly interface. The overall structure of the CDT system is shown in Figure 1.

Figure 1: CDT Framework.

2.1 Constructive knowledge databases
It is essential that designers are provided with suitable knowledge for each design task before starting the design process. The system encompasses several constructive databases which can provide the various data required for the design task. The more experienced the design teams become, the more expert their databases will become. The provided databases depend on the type and domain of the design problem. Six knowledge databases are incorporated: general project information (GPI), active projects information (API), general design knowledge (GDK), specific design knowledge (SDK), other sample designs (OSD), and previous project sample designs (PPSD).

Design knowledge representation
In the developed CDT system, each design alternative is divided into major components, and each major component is divided into items with identified features which distinguish it and specify its characteristics. For each component and item in the design alternative, detailed functions, behaviours, and structures are identified in this hierarchical structure, in addition to the general ones of the alternative. The incorporated creative tools are structured and implemented based on this hierarchical design representation. Features are structured to include structural, behavioural, and functional data. Relationships between different parts of the designed sample are identified using an object-tree hierarchical structure with methods embedded within each object. The data identified in these representations are stored in the databases and can be retrieved and used by all the incorporated creative tools.
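As an illustration only (the class and attribute names below are assumptions, not the CDT schema), the alternative-component-item hierarchy with function/behaviour/structure features might be modelled as:

```python
from dataclasses import dataclass, field

@dataclass
class Features:
    functions: set[str] = field(default_factory=set)   # e.g. "cools", "stores"
    behaviours: set[str] = field(default_factory=set)  # e.g. "easy to clean"
    structures: set[str] = field(default_factory=set)  # e.g. "colour: white"

@dataclass
class Item:
    name: str
    features: Features = field(default_factory=Features)

@dataclass
class Component:
    name: str
    items: list[Item] = field(default_factory=list)
    features: Features = field(default_factory=Features)

@dataclass
class Alternative:
    """One design alternative: general features at the top level, plus
    detailed features on each component and item in the hierarchy."""
    name: str
    components: list[Component] = field(default_factory=list)
    features: Features = field(default_factory=Features)
```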


Design knowledge has been represented in the proposed CDT system using a structured hierarchical tree. The hierarchical tree was selected because it has many advantages over other knowledge representation schemes: easy connectivity to the databases for storing and retrieving data; flexibility and expandability of structure, where designers can add, hide, or show more detail as required; and adaptability to represent any hierarchical design knowledge for any product. An illustration of the concept alternative representation is shown in Figure 2.

Figure 2: Concept alternative representation.

Design knowledge management
Microsoft SQL (Structured Query Language) Server 2005 was used to create the databases of the CDT system. The data related to products, major components, items, behaviours, functions, and structures are stored in relational tables. The developed CDT system contains structured design knowledge in its databases and allows certain operations to be performed on that data, such as retrieval, modification, insertion, and deletion. Data management in the developed CDT system has been divided into two major classes: general data management and specific data management. General data management is concerned with the identification of general data that can fit any design problem. Designers can identify general data for the design problem under consideration, to be used later to identify the specific data for the same problem. This management of data allows the system to be used for various types of design problem, since any relevant data can be identified easily and immediately to suit a specific situation by the designers themselves, without any amendments to the system's structure. The specific data management class utilises the previously identified general data to identify new specific data related to the design problem situation. It relates the identified features (functions, behaviours, and structures) to the identified products, components, and items. This data management class provides the base for the design knowledge representation structured at later stages in the developed CDT system.
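A minimal, hypothetical sketch of such relational storage, using SQLite in place of the SQL Server 2005 database actually used (table and column names are illustrative, not the CDT schema):

```python
import sqlite3

# Products, components, items and their features (functions, behaviours,
# structures) held in relational tables, mirroring the description above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product   (id INTEGER PRIMARY KEY, name TEXT, domain TEXT);
CREATE TABLE component (id INTEGER PRIMARY KEY,
                        product_id INTEGER REFERENCES product(id), name TEXT);
CREATE TABLE item      (id INTEGER PRIMARY KEY,
                        component_id INTEGER REFERENCES component(id), name TEXT);
CREATE TABLE feature   (id INTEGER PRIMARY KEY,
                        owner_type TEXT CHECK (owner_type IN ('product','component','item')),
                        owner_id   INTEGER,
                        kind       TEXT CHECK (kind IN ('function','behaviour','structure')),
                        value      TEXT);
""")

# Retrieval by feature constraint, e.g. all components whose function is 'heating':
rows = conn.execute("""
    SELECT c.name FROM component c
    JOIN feature f ON f.owner_type = 'component' AND f.owner_id = c.id
    WHERE f.kind = 'function' AND f.value = ?""", ("heating",)).fetchall()
```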

2.2 Creative tools
The CDT system incorporates various creative design tools which are recognised for their usefulness in the production of creative design solutions. Each tool has its own way of supporting the design team in their creative design tasks. The major common feature is their visualisation abilities in sharing the same design knowledge representation. The capabilities of the incorporated creative tools are discussed in the following sections.

Brainstorming
Brainstorming is a conventional tool for creative thinking based on generating a large number of ideas in limited-time sessions, where no criticism is allowed and crazy ideas are welcomed. The brainstorming tool developed in the CDT framework provides a variety of procedures to generate ideas, namely brainwriting, brainsketching, and brainrelating, without short time-limited sessions. The generated concepts are stored in the temporary active project database for later reviewing and sharing. The sharing takes place after the sessions have ended. A selection is made based on the evaluation results to proceed with the chosen concepts.


Figure 3: Sample creation procedures.

Concept mapping
The theme behind concept mapping is to externalise concepts and visualise them simultaneously. The concept mapping tool in the CDT system provides designers with an interface to structure a concept's knowledge, store it in the databases, and link it to various file types. This tool takes into consideration the hierarchical structure of design knowledge throughout the identification of the various components, items, functions, behaviours, and structures of each concept. The tool can be used to create new maps or retrieve existing ones. The produced design representation can be used by all other incorporated tools.

Analogy
Analogy can be defined as finding solutions applied in similar situations but in other domains. The analogy tool developed and incorporated in the CDT system has been designed to assist designers in exploring more design solutions, where the emergence of creative ideas can be prompted by the analogical recognition of similar situations in other design domains. The developed tool has two main procedures: matching and retrieval, and mapping. Matching and retrieval is concerned with searching for solutions that match the existing situation of the problem. The situation of the problem is formed at the early stages

of preparation, where the functions, behaviours, and structures of the problem under consideration are identified. The matching and retrieval of analogical options are based on these previously identified features. The retrieved results may contain interesting options, which can be selected for the second procedure, mapping. The proposed matching and retrieval procedures are detailed as follows:

$\mathrm{SIM}(Fn_T, Fn_S(i)) = \max_k \mathrm{SIM}(Fn_T, Fn_S(k))$   (1)

$\mathrm{SIM}(B_T, B_S(i)) = \max_k \mathrm{SIM}(B_T, B_S(k))$   (2)

$\mathrm{SIM}(S_T, S_S(i)) = \max_k \mathrm{SIM}(S_T, S_S(k))$   (3)

$\mathrm{SIM}[(Fn_T, Fn_S(i)) \cup (B_T, B_S(i))] = \max_k \mathrm{SIM}[(Fn_T, Fn_S(k)) \cup (B_T, B_S(k))]$   (4)

$\mathrm{SIM}[(Fn_T, Fn_S(i)) \cup (S_T, S_S(i))] = \max_k \mathrm{SIM}[(Fn_T, Fn_S(k)) \cup (S_T, S_S(k))]$   (5)

$\mathrm{SIM}[(B_T, B_S(i)) \cup (S_T, S_S(i))] = \max_k \mathrm{SIM}[(B_T, B_S(k)) \cup (S_T, S_S(k))]$   (6)

$\mathrm{SIM}[(Fn_T, Fn_S(i)) \cup (B_T, B_S(i)) \cup (S_T, S_S(i))] = \max_k \mathrm{SIM}[(Fn_T, Fn_S(k)) \cup (B_T, B_S(k)) \cup (S_T, S_S(k))]$   (7)

where $\mathrm{SIM}(\cdot,\cdot)$ is a real function which measures the degree of similarity between its parameter spaces, $Fn_T$ is the target function space, $Fn_S(i)$ is the i-th source function space, $B_T$ is the target behaviour space, $B_S(i)$ is the i-th source behaviour space, $S_T$ is the target structure space, and $S_S(i)$ is the i-th source structure space.

The mapping procedure fits the retrieved options to the generated concepts so as to satisfy the design problem's goals and requirements in that specific situation. Mapping can be conducted at any level in the hierarchical structure of the design concepts: at the component, item, or feature level. The proposed mapping procedures are detailed as follows:

$D^m = Fn^m \cup B^m \cup S^m$   (8)

$Fn^m = \tau(Fn_T \cup M(Fn_S(i)))$   (9)

$B^m = \tau(B_T \cup M(B_S(i)))$   (10)

$S^m = \tau(S_T \cup M(S_S(i)))$   (11)

where $M$ is a mapping operation, $\tau$ is a transformation operator, $D^m$ is the modified design space, $Fn^m$ is the modified function space, $B^m$ is the modified behaviour space, and $S^m$ is the modified structure space.
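A minimal sketch of the matching and retrieval step, assuming feature spaces are represented as sets of labels; the paper leaves SIM abstract, so the Jaccard overlap and the summation over spaces used here are assumptions:

```python
def sim(a: set, b: set) -> float:
    """One possible SIM(.,.): Jaccard overlap of two feature sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def retrieve(target: dict, sources: list[dict],
             spaces=("functions", "behaviours", "structures")) -> dict:
    """Matching and retrieval in the spirit of eqs. (1)-(7): return the
    stored source design whose chosen feature spaces are most similar to
    the target's. Summing per-space similarities is one reading of the
    unions in eqs. (4)-(7)."""
    return max(sources, key=lambda s: sum(sim(set(target[sp]), set(s[sp]))
                                          for sp in spaces))

# spaces=("functions",) realises eq. (1); all three spaces realise eq. (7).
```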

Combination
The combination of two different ideas to come up with a new one is the main concept behind this tool. This concept of combination is valid even for the most complex products; the combination tool incorporated in the CDT system therefore takes into consideration the complexity of the products to which it can be applied. Combination can be conducted at various levels of the product's hierarchical structure, such as the whole-product level, components, items, or features. Four combination methods have been developed and incorporated within the tool, to combine options from the newly generated concepts and those already existing in the database. These methods are expressed computationally as follows:

Systematic combination:
$\bigcup_{j=1}^{n} \bigcup_{i=1}^{k} \{ u_{ij} \}$   (12)
where $n$ is the number of vertices in each tree and $k$ is the number of trees.

Tree random combination:
$\bigcup_{j=1}^{n} \bigcup_{i=1}^{k} \{ v_{g(i)j} \}$   (13)
where $g$ is a permutation on $\{1, 2, 3, \ldots, k\}$ (choosing the trees randomly).

Vertices random combination:
$\bigcup_{j=1}^{n} \bigcup_{i=1}^{k} \{ v_{i f(j)} \}$   (14)
where $f$ is a permutation on $\{1, 2, 3, \ldots, n\}$ (choosing the vertices randomly).

Total random combination:
$\bigcup_{j=1}^{n} \bigcup_{i=1}^{k} \{ v_{g(i) f(j)} \}$   (15)
where $g$ is a permutation on $\{1, 2, 3, \ldots, k\}$ and $f$ is a permutation on $\{1, 2, 3, \ldots, n\}$ (choosing the trees and vertices randomly).
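As an illustration of the random combination methods, the sketch below implements a total random combination in the spirit of eq. (15); modelling each tree as a flat list of per-vertex feature sets, and taking the union position by position, are simplifying assumptions on top of the paper's hierarchical trees:

```python
import random

def total_random_combination(trees: list[list[set]]) -> list[set]:
    """Permute both the trees (g) and the vertex positions (f), then union
    the selected vertex feature sets position by position, collecting the
    elements v_{g(i)f(j)} of eq. (15)."""
    k, n = len(trees), len(trees[0])
    g = random.sample(range(k), k)   # random permutation over trees
    f = random.sample(range(n), n)   # random permutation over vertices
    combined = [set() for _ in range(n)]
    for j in range(n):
        for i in range(k):
            combined[j] |= trees[g[i]][f[j]]
    return combined
```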

Evolution
The evolution theme is based on generating new solutions using two or more parents. It usually incorporates two procedures: the first is crossover, where different parts of both parents are crossed over between them to create new generations; the second is mutation, where some parts are altered to fit the boundaries of the situation under consideration. These two procedures have been adapted to operate on the hierarchical design knowledge representation developed in the CDT system. The crossover procedure can be conducted at various levels and is modelled as follows:

$S_{new1} = [S_{parent1} - \acute{S}_{parent1}] \cup \acute{S}_{parent2}$   (16)

$S_{new2} = [S_{parent2} - \acute{S}_{parent2}] \cup \acute{S}_{parent1}$   (17)

$Fn_{new1} = [Fn_{parent1} - \acute{Fn}_{parent1}] \cup \acute{Fn}_{parent2}$   (18)

$Fn_{new2} = [Fn_{parent2} - \acute{Fn}_{parent2}] \cup \acute{Fn}_{parent1}$   (19)

$Bh_{new1} = [Bh_{parent1} - \hat{Bh}_{parent1}] \cup \hat{Bh}_{parent2}$   (20)

$Bh_{new2} = [Bh_{parent2} - \hat{Bh}_{parent2}] \cup \hat{Bh}_{parent1}$   (21)

where $\acute{S}$ is part of the structure, $S$ is the whole structure, $\acute{Fn}$ is part of the functions, $Fn$ is all the functions, $\hat{Bh}$ is part of the behaviours, and $Bh$ is all the behaviours.

Mutation is the alteration of one or more feature variables by an external process and is modelled as follows:

$Fn_{new} = \Phi_m(Fn_{existing})$   (22)

$B_{new} = \Phi_m(B_{existing})$   (23)

$S_{new} = \Phi_m(S_{existing})$   (24)

where $\Phi_m$ is a transformation operator, $Fn$ is the function of an object, $B$ is the behaviour of an object, and $S$ is the structure of an object.
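A compact sketch of both evolution procedures on a single feature space, assuming feature spaces are sets and passing the operator $\Phi_m$ in as an arbitrary user-supplied transformation:

```python
def crossover(parent1: set, parent2: set, part1: set, part2: set):
    """Eqs. (16)-(21) applied to one feature space: remove a chosen part
    from each parent and graft in the other parent's part."""
    assert part1 <= parent1 and part2 <= parent2
    return (parent1 - part1) | part2, (parent2 - part2) | part1

def mutate(features: set, phi):
    """Eqs. (22)-(24): apply a transformation operator phi to a feature set,
    e.g. swapping one structure item for another."""
    return phi(features)

# e.g. crossover({"cools", "stores"}, {"cools", "makes ice"},
#                {"stores"}, {"makes ice"})
#      -> ({"cools", "makes ice"}, {"cools", "stores"})
```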

2.3 Design modules
The proposed CDT system is composed of five major modules, taking into consideration the systematic design process. Each module has several processes to be conducted by the design team and different tools which can assist the design process.

Preparation module
Problem definition seeks answers to many different questions, which in turn should establish the key characteristics of the problem: the problem goals, the problem space (requirements), and the problem constraints. The proposed preparation module provides essential procedures to explore and define the design problem, specify requirements, and search for existing solutions. These procedures are: client's meeting, problem formulation, search, analysis, synthesis, and problem reformulation.

Concept generation module
The concept generation module activates the application of the different incorporated creative thinking tools to generate and explore more creative concepts. In this module the design team members are encouraged to generate as many preliminary concepts as possible, share their generated concepts, explore supplementary concepts beyond the design problem space, and use divergence-convergence techniques to expand and reduce the solution space in order to select the most appropriate concepts.

Development module
Development of the generated and selected concepts focuses on enhancing them, taking into consideration various vital issues. This enhancement needs to be documented, to provide a comprehensive reference for the manufacturing process, future amendments, and the creation of new designs. In the CDT development module two major procedures have been developed, namely enhancement and documentation.

Evaluation module
The evaluation module in the CDT system is designed to be used at any stage of the design process. Evaluation is required to select suitable concepts for further development. The process of evaluation has been divided into three correlated stages. The first stage is conducted at the level of individuals, where each designer evaluates the generated concepts and saves the evaluation results for retrieval at later stages. The second stage is team evaluation, achieved by calculating the average of the individuals' evaluation scores to obtain the team's evaluation score. The third stage is society evaluation, by specialists other than the design team, such as customers, other departments' personnel, investors, suppliers and any other category of society. This stage explores how others view the design alternatives and whether or not they find them creative. Multiple criteria are used in the CDT system to evaluate design alternatives; for a design to be creative, certain objectives should be met, the two major ones being appropriateness and novelty.

Detailed design module
The detailed design module involves the production of the final specifications, CAD drawings, and 3D models. This module is supported in the developed CDT system by incorporating CAD and word-processing applications, to facilitate the creation of these detailed documents for the manufacturing process.

2.4 User-friendly interface
The CDT system user interface was designed with one major aim in mind: ease of use. In terms of navigation, a tree-view scheme is employed to provide immediate access to the major tools and modules of the CDT system. The tree-view nodes provide quick access to all parts, expandability, and a holistic view of the system's structural parts at a glance. Various visualisation formats and presentations of information, such as graphical, tabular, verbal, and written communications, were used.

351

Several techniques have been applied in the developed CDT system to minimize the learning time of users such as tool tips and help windows. Furthermore, different types of users’ input validation tools have been incorporated such as required fields validates, range validates, and compare validates to eliminate any conflicts by entering the wrong data. In terms of the UI aesthetics, design principles have been utilized in a balanced manner to get the best out of the system’s layout. Variety can be also recognized in the developed CDT user interface to prevent boredom but on the same time not to cause confusion to the users. 3

IMPLEMENTATION AND APPLICATION OF THE CDT SYSTEM The developed CDT system has been implemented using Microsoft’s ASP.NET web programming technology. This technology provided an easy platform to achieve the creation of data driven dynamic web applications with the minimal effort compared to other available technologies. The application of web-based technologies in design systems has several advantages. It overcomes the geographic factors between designers through its easy accessibility and the consistency of design knowledge it provides. Web applications provide the same experience for all users. Furthermore, the updates of the applications are reflected to all users which will minimize the administration time and cost of such applications updates. Any changes in the data are reflected immediately to all users at any locations which should enhance collaboration among design team members. 3.1 System scenario The CDT system scenario detailed in Figure 4 starts by defining the design problem space through applying various activities such as meeting clients and conducting relative searches. This includes identification of the problem’s goals, requirements, and constraints. This defined problem space can be modified each time a new knowledge evolves to adjust the problem’s situation. Identification of new design knowledge is usually conducted by the design team in two stages. The first stage is at the preparation module where designers identify general and specific knowledge for the design problem to support its situation. The second stage is at the concept generation module when the designers start generating and exploring more design concepts. Designers are provided with the proper means to identify and store new knowledge in the incorporated databases, which in turn, can be part of the design problem situation and can be utilized by all the integrated elements of the system. The generation and exploration of design concepts starts with the concept generation. Two divergent-convergent thinking approaches have been developed and implemented in this module. The generation of design concepts has been developed to apply a single divergentconvergent thinking approach at first using brainstorming and concept mapping tools. Designers brainstorm to generate as many concepts as possible individually without any criticism or evaluation of any sort. These generated concepts are structured hierarchically through the application of the concept mapping tool. At later stages, the designers evaluate the generated concepts of all the design team members to select the most suitable ones for exploration. A multiple divergent-convergent thinking approach is applied through the exploration stage. Analogy, combination, and evolution tools are the active tools at this exploration activity. The designer chooses the methods to

apply in each tool and selects the design concepts to use for exploration. Each time results are displayed, the designer selects few options to proceed with the same tool or to activate another tool. Therefore, it may take several iterations using the same tool to reach creative design concepts which are appropriate and novel.

Figure 4: CDT system scenario. 3.2 Design knowledge implementation The developed design knowledge representation described earlier is implemented using the tree view control in the ASP.NET environment which is connected to the databases or SQL documents. Each node in the tree is related to a certain field in a specific table in the database. Designers at the early stages of the design process can view and update data using the general data grid views. Further in the design process where more concepts are generated the tree view structure is used because it provides a wider scope of the design problem taking into consideration the various design details. The knowledge identification focused activity is divided into two major stages. The first stage involves general and specific data management while the second involves the preparation for the design problem under consideration. Both stages are conducted by designers to identify knowledge which is relative to the design problem under consideration. 3.3 Creative tools implementation Each incorporated creative tool has a unique implementation scheme, although they all share the same design knowledge representation format. Various data web controls have been used to display design knowledge in the different tools such as GridView, DetailsView, and TreeView controls. Drop down list controls have also been used to display the options available for the designer to choose from. Furthermore, ordinary text buttons, and image buttons have been provided to implement certain functionalities. The features identified at the early stages of data management can be used to apply constraints on the solution design space. A certain value for the function constraint can retrieve certain alternatives from the database with that specific function. If the search is for a component which function is heating, then alternatives which have this function are retrieved. Therefore, this limits the design solution space according to the constraints applied by the designer. This approach of implementing the constraints tactic is very flexible and constraints can be relaxed or tightened as required.

3.4 Case study The case study starts with the initial design data management and sharing of basic design information which formulates the base for the preparation stage where the problem is formulated and more specific requirements and constraints are identified. Afterwards, through the use of various incorporated creative tools, design alternatives are generated and explored. Evaluation is conducted to validate and select the most creative ideas for development. The creation of new features (functions, behaviours, and structures) for the refrigerator has been chosen for representation. The refrigerator product has been explored in the case study and a hierarchical structure has been represented and used by the incorporated tools of the developed system. Stage one: Design data management and sharing General data management includes the design domains, products, components, items, and features such as behaviours, functions, structure categories, and structure category items. Product design problems can belong to different domains such as electronics, furniture, mechanical, kitchen appliances and many others. Each design domain includes many specific products. Kitchen appliances for example includes, refrigerators, freezers, ovens, hobs, washing machines, tumble driers, dish washers and many other small appliances such as coffee makers or toasters. Specific data management is concerned with integrating the proper predefined features with the related predefined products, components, and items. In the case study demonstrated various functions have been defined such as cools, controls, stores, and many others. The behaviours reflect the characteristics of various components such as easy to use, easy to clean, saves energy and so on. The structure categories are the major structural features such as style, colour, size, shape, and material, while the structure category items are specific items of those structural features. Those identified specific features for each defined object are used later as constraints in the developed system to retrieve objects with specific features. Stage two: Design preparation Several processes constitute the major structures of this stage which are: client’s meeting, problem formulation/reformulation, and search. Client’s meetings provide the basis that the design team build upon the problem requirements, constraints, objectives and any other issues related to the problem. The meetings’ details are stored in the database and reflected to all design team members to be retrieved when needed by design team members. The identified problem of the case study was to design a refrigerator with new functionalities and behaviours through the use of advanced new technologies which can service the requirements of modern kitchens and enhance the activities undertaken in such a busy zone of the house. Designers search for data and information related to the design problem. They can retrieve sample solutions from the databases, based on specific features’ constraints related to the problem. Relevant and useful retrieved data is stored in the active project database to assist the generation of conceptual design solutions. Furthermore, more searches can be conducted using the World Wide Web (WWW) or any other resources. Stage three: Concept generation The generation of ideas starts usually by brainstorming tool to let the designers generate many abstract alternatives to

352

be considered. The generated ideas are stored in the active project database. The stored ideas can be retrieved by team members to select the most creative ones to develop. The ideas are displayed in a matrix form where each designer can enter his evaluation scale for the identified evaluation criteria built from the design problem requirements. Figure 5 shows a snap shot of the brainstorming tool. Concept mapping is used to create the tree hierarchy structure for the selected ideas in order to be able to apply the other creative tools namely: combination and evolution. Analogy matching and retrieval is used to retrieve more options from the stored design samples in the databases. The parameters used for this technique are the functions, behaviors, and structures already stored in the active design problem. The system displays these features for designers to select their preferred parameter to start the process as illustrated in Figure 6. These parameters are different for each design problem and reflect the type of problem under consideration. The results are displayed and few options are chosen to conduct the mapping techniques. Combination methods are also provided for the designer to choose the preferred one to conduct some combination iterations on the tree structures of the selected options. Furthermore the evolutionary cross over and mutation methods are used to explore more options. The results are displayed and the creative ones are selected and stored.

Figure 5: Brainstorming window.

Stage four: Design evaluation
After brainstorming, evaluation is used to rank the most creative options, which are then used to explore more design options. After more options have been explored and some ideas developed, evaluation is used again to choose the most creative ideas for detailing. The design evaluation stage has been implemented in the CDT system on three levels: the individual, the team and the society. Individual designers can view the stored ideas and evaluate them against a two-part criterion: appropriateness and novelty. The designers score the design options on a scale from one to ten, where one is the least creative and ten the most creative. The evaluation result for each design idea is stored in the database and associated with the idea data and the individual designer who made the evaluation.
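A minimal sketch of this scoring scheme follows; aggregating by the mean over designers and over the two criteria is our assumption, since the aggregation rule is not stated here, and the example ideas are illustrative.

    # Sketch: each designer scores each idea for appropriateness and
    # novelty on a 1-10 scale; ideas are ranked by their mean score.
    from statistics import mean

    scores = {  # idea -> [(appropriateness, novelty) per designer]
        "door touch screen": [(8, 9), (7, 8)],
        "auto-restocking":   [(6, 9), (5, 10)],
        "folding shelves":   [(9, 3), (8, 4)],
    }

    def rank(scores):
        """Order ideas by the mean of all their criterion scores."""
        overall = {idea: mean(s for pair in pairs for s in pair)
                   for idea, pairs in scores.items()}
        return sorted(overall.items(), key=lambda kv: -kv[1])

    for idea, score in rank(scores):
        print(f"{idea}: {score:.2f}")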


Figure 6: Analogy matching and retrieval.

Figure 7: Evaluation results.

4 CONCLUSIONS
A holistic approach for enhancing creative design thinking through the conceptual design phase has been presented in this paper. The integration of various creative tools within a shared design environment facilitates collaboration among design team members to produce more creative ideas, evaluate them, and select the most appropriate ones to be developed and detailed for production. The early evaluation of design concepts minimizes lead time for industry, since inappropriate ideas are eliminated early in the design process. Therefore, reductions in conflicts and inconsistencies at later design stages are achievable, and the production of more creative ideas is viable. The system demonstrates an innovative approach to integrating several factors which affect creative design thinking, namely: distributed design team members, design knowledge representation, creative thinking tools, and design processes.

5 REFERENCES
[1] Hori, K. (1997) Concept Space Connected to Knowledge Processing for Supporting Creative Design. Knowledge-Based Systems, 10(1), pp. 29-35.
[2] Candy, L. and Edmonds, E.A. (1997) Supporting the creative user: a criteria-based approach to interaction design. Design Studies, 18(2), pp. 185-194.
[3] Shneiderman, B. (2000) Creating Creativity: User Interfaces for Supporting Innovation. ACM Transactions on Computer-Human Interaction, 7(1), pp. 114-138.
[4] Hewett, T. T. (2005) Informing the Design of Computer-Based Environments to Support Creativity. International Journal of Human-Computer Studies, 63(4-5), pp. 383-409.
[5] Eysenck, H.J. (1994) The Measurement of Creativity. In: Boden, M. (ed.) Dimensions of Creativity. USA, Massachusetts Institute of Technology, pp. 199-242.

[6] Ward, T. B., Finke, R. A. and Smith, S. M. (1995) Creativity and the Mind: Discovering the Genius Within. London; New York, Plenum Press.
[7] Boden, M. (2007) Creativity: How Does it Work? Published by Creativity East Midlands for the Creativity: Innovation and Industry conference, 6th December 2007.
[8] Cross, N. (2000) Engineering design methods: strategies for product design. 3rd ed. West Sussex, John Wiley and Sons.
[9] Bentley, P.J. (1997) The revolution of evolution for real-world applications. In: Engineering Technologies '97: Theory and Application of Evolutionary Computation, University College London, 15th December 1997.
[10] Bentley, P.J. and Corne, D.W. (2002) Creative evolutionary systems. London, Morgan Kaufmann.
[11] Gero, J. (1990) Design prototypes: a knowledge representation schema for design [WWW]. Available from: http://www.arch.usyd.edu.au [Accessed 08/02/2005].
[12] Simon, H.A. (1988) Creativity and motivation. New Ideas in Psychology, 6(2), pp. 177-181.
[13] Csikszentmihalyi, M. (1996) Creativity: flow and the psychology of discovery and invention. New York, Harper Collins.
[14] Liu, Y.T. (2000) Creativity or novelty? Design Studies, 21(3), pp. 261-276.
[15] Dorst, K. (2001) Creativity in the design process: co-evolution of problem-solution. Design Studies, 22(5), pp. 425-437.
[16] Hsiao, S.W. and Chou, J.R. (2004) A creativity-based design process for innovative product design. International Journal of Industrial Ergonomics, 34(August), pp. 421-443.
[17] Bruce, M. and Cooper, R. (2000) Creative Product Design: A Practical Guide to Requirements Capture Management. 1st ed. Baffins Lane, Chichester, England, John Wiley and Sons, Ltd.
[18] Darlington, M. J. and Culley, S. J. (2004) A Model of Factors Influencing the Design Requirement. Design Studies, 25(4), pp. 329-350.
[19] Baxter, M. (2002) Product Design: Practical Methods for the Systematic Development of New Products. 3rd ed. Cheltenham, UK, Nelson Thornes Ltd.
[20] Pugh, S. (1991) Total Design. 1st ed. Wokingham, UK, Addison Wesley.
[21] Cross, N. (1997) Descriptive models of creative design: application to an example. Design Studies, 18(4), pp. 427-455.
[22] Liu, Y.-C., Bligh, T. and Chakrabarti, A. (2003) Towards an Ideal Approach for Concept Generation. Design Studies, 24(4), pp. 341-355.
[23] Dahl, D. W., Chattopadhyay, A. and Gorn, G. J. (2001) The Importance of Visualization in Concept Design. Design Studies, 22(1), pp. 5-26.
[24] Won, P.H. (2001) The comparison between visual thinking using computer and conventional media in the concept generation stages of design. Automation in Construction, 10(3), pp. 319-325.
[25] Nakakoji, K., Yamamoto, Y. and Ohira, M. (2000) Computational support for collective creativity. Knowledge-Based Systems, 13, pp. 451-458.
[26] Hoeben, A. and Stappers, P. J. (2005) Direct Talkback in Computer Supported Tools for the Conceptual Stage of Design. Knowledge-Based Systems, 18(8), pp. 407-413.
[27] Van Der Lugt, R. (2000) Developing a graphic tool for creative problem solving in design groups. Design Studies, 21(5), pp. 505-522.
[28] Buzan, T. (2002) How to Mind Map. 1st ed. London, UK, HarperCollins Publishers Ltd.
[29] Anderson, N. and Abdalla, H. (2002) A Distributed E-Commerce System for Virtual Product Development Teams. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 216(2), pp. 251-264.
[30] Anderson, N. (2002) A distributed information sharing collaborative system (DISCS). Unpublished thesis (PhD), De Montfort University.
[31] Yin, Y., Vanides, J., Ruiz-Premo, M. A., Ayala, C.C. and Shavelson, R. J. (2005) Comparison of Two Concept-Mapping Techniques: Implications for Scoring, Interpretation, and Use. Journal of Research in Science Teaching, 42(2), pp. 166-184.
[32] Kokotovich, V. (2007) Problem Analysis and Thinking Tools: An Empirical Study of Non-Hierarchical Mind Mapping. Design Studies, 29(1), pp. 49-69.
[33] Gero, J. and Kazakov, V. (1999) Using analogy to extend the behaviour state space in creative design. In: Gero, J. S. and Maher, M. L. (eds), Computational Models of Creative Design IV, Key Centre of Design Computing and Cognition, University of Sydney, Sydney, Australia, pp. 113-143.
[34] Gomes, P., Seco, N., Pereira, F. C., Paiva, P., Carreiro, P., Ferreira, J. L. and Bento, C. (2006) The Importance of Retrieval in Creative Design Analogies. Knowledge-Based Systems, 19(7), pp. 480-488.
[35] Gero, J. (2000) Computational models of innovative and creative design processes. Technological Forecasting and Social Change, 64, pp. 183-196.
[36] Bentley, P.J. and Wakefield, J.P. (1996) Conceptual evolutionary design by a genetic algorithm. Engineering Design and Automation Journal, 2(3), John Wiley and Sons, Inc.
[37] Tseng, C.J. and Abdalla, H. (2004) A human-computer system for collaborative design (HCSCD). Journal of Materials Processing Technology, 155-156, pp. 1964-1971.
[38] Abdalla, H.S. (1999) Concurrent engineering for global manufacturing. International Journal of Production Economics, 60(1), pp. 251-260.
[39] Mok, C.K., Chin, K.S. and Lan, H. (2008) An internet-based intelligent design system for injection moulds. Robotics and Computer-Integrated Manufacturing, 24(1), pp. 1-15.
[40] Li, W.D. et al. (2005) Collaborative computer-aided design: research and development status. Computer-Aided Design, 37(9), pp. 931-940.

An Engineering-to-Biology Thesaurus To Promote Better Collaboration, Creativity and Discovery
J.K. Stroble1, R.B. Stone2, D.A. McAdams3, S.E. Watkins1
1Electrical and Computer Engineering, Missouri University of Science and Technology, 1870 Miner Circle, Rolla, MO 65409, USA
2Interdisciplinary Engineering, Missouri University of Science and Technology, 1870 Miner Circle, Rolla, MO 65409, USA
3Mechanical Engineering, Texas A&M University, MS 1250, College Station, TX 77843, USA
[email protected]

Abstract Biological inspiration for engineering design has occurred through a variety of techniques such as database searches, keyword and antonym searches, knowledge of biology, observations of nature and other “aha” moments. This research aims to alleviate the knowledge gap problem by providing a link between engineering and biology with a thesaurus. The biologically connotative terms that comprise the thesaurus were collected utilizing an organized verb-noun search; collocated words were extracted from texts based on a functional search word. This thesaurus should enable the engineering and biology communities to better collaborate, create and discover. Keywords: Engineering design, Function, Analogical reasoning

1 INTRODUCTION
The natural world provides numerous cases for analogy and inspiration in engineering design. From simple cases such as hook and latch attachments to articulated-wing flying vehicles, nature provides many sources for ideas. Methods such as design by analogy and functional modeling exist to enhance creativity in the engineering design process by focusing on function rather than form or component. Biological organisms and phenomena, which are in essence living engineered systems, provide insight into sustainable and adaptable design. The evolution of natural designs offers engineers billions of years of valuable experience, which we feel can inspire engineering innovation. Though biological systems provide a wealth of elegant and ingenious approaches to problem solving, there are challenges that prevent designers from leveraging the full insight of the biological domain. A fundamental problem in effectively executing biomimetic designs is that the effort and time required to become a competent engineering designer creates significant obstacles to becoming sufficiently knowledgeable about biological organisms and phenomena (the converse can also be said). In an effort to bridge the gap between the engineering and biological domains, the creation of a partial thesaurus containing biologically connotative words related to engineering function and flow terms is envisioned. This approach should enable the search for biomimetic solutions to engineering functions and aid with comprehension of biological material. The purpose of a thesaurus is to represent information in a classified form to group related concepts. The engineering-to-biology thesaurus proposed here has a unique structure and classification; it is merged with the reconciled Functional Basis [1] as a set of correspondent terms. Thus, the classification is predetermined according to that of the authors' model; however, it remains the intermediary between the biology and engineering domains. A tool such as the engineering-to-biology thesaurus increases the interaction between the users and the knowledge resource [2]. In the following sections several points will be discussed:



(1) the nomenclature of function based design; (2) the related work and research efforts; (3) the model for designing the thesaurus structure; (4) the method used to populate the thesaurus; (5) the implications such a thesaurus has on the engineering and biology communities; and (6) two example applications of the engineering-to-biology thesaurus.

2 NOMENCLATURE
Terms used throughout this paper that are specific to this research are described in this section.
Biologically connotative term – a word that will appear in a biological text and not an engineering text.
Biological phenomena – a biological fact or situation that is observed to exist or happen.
Corpus – a collection of written material in machine-readable form, assembled for the purpose of studying linguistic structures.
Flow – refers to the type of material, signal or energy that travels through a system or a device.
Function – refers to an action being carried out on a flow to transform it from an input state to a desired output state.
Functional Basis – a well-defined modeling language comprised of function and flow sets at the class, secondary and tertiary levels with correspondent terms.
Functional model – a visual description of a product or process in terms of the elementary functions and flows required to achieve its overall function or purpose.
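To make the nomenclature concrete, the following minimal sketch renders function and flow as data and a functional model as an ordered collection of function-flow pairs; the encoding and the example terms are our illustrative assumptions, not a definition from the Functional Basis itself.

    # Sketch: a function acts on a flow; a functional model is an ordered
    # collection of function-flow pairs (terms shown are illustrative).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        kind: str   # "material", "signal" or "energy"
        name: str   # e.g. "liquid", "chemical"

    @dataclass(frozen=True)
    class Function:
        verb: str   # Functional Basis verb, e.g. "convert", "store"
        flow: Flow

    functional_model = [
        Function("import",  Flow("material", "liquid")),
        Function("store",   Flow("material", "liquid")),
        Function("convert", Flow("energy", "chemical")),
    ]
    print(" -> ".join(f"{f.verb} {f.flow.name}" for f in functional_model))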

3 BACKGROUND AND RELATED WORK

3.1 Function based design
Design specifications and requirements set by a customer, internal or external, influence the product design process by providing material, economic and aesthetic constraints on the final design. In efforts to

achieve the customer's needs without compromising function or form, function based design methodologies have been researched, developed and evolved over the years. Most notable is the systematic approach of Pahl and Beitz [3]. Since the introduction of function structures, numerous functional modeling techniques, product decomposition techniques and function taxonomies have been proposed [4-7]. The original list of five general functions and three types of flows developed by Pahl and Beitz [3] was further evolved by Stone et al. into a well-defined modeling language comprised of function and flow sets with definitions and examples, entitled the Functional Basis [8]. Hirtz et al. later reconciled the Functional Basis and the NIST-developed taxonomy into its most current set of terms [1]. The reconciled Functional Basis is utilized for developing hierarchical functional models, which describe the core functionality of products and processes in domain independent function and flow terms. Branching from the Functional Basis efforts is the lexical-analysis-based approach to function based design by Fantoni et al. [9]. By utilizing synonyms and antonyms of desired functions and exploring changes in flow, this method claims to convert each problem into an opportunity. A functional reasoning method developed by Zhang et al. is the Behavior-driven Function-Environment-Structure (BFES) modeling framework, which assigns function to behavior before function to physical structures [10]. This framework provides an opportunity to explore a wide variety of solutions based on behavior, without constraints set by function. The function/means tree is a hierarchical function based design method which demonstrates the causal relationship between function and means at different levels [11]. Xu et al. utilize hierarchical function structures to create a non-numeric key element vector, from which functional design knowledge is extracted for use with automated design synthesis [12]. Multi-objective optimization functions apply design constraints to the extracted knowledge to produce design solutions.

3.2 Creativity in design
Creativity in engineering design is considered to have two distinct forms: novelty and usefulness. Thompson and Lordan explain this dichotomy as '[n]ovelty may take the form of something completely new or it may be a combination of existing ideas or products. For something to be creative it must satisfy a need, it must serve a purpose and it must make a positive contribution' [13]. According to Cross, the generation of creative thoughts, whether satisfactory or not, can be described with four generalized models: analogy, combination, first principles, and emergence [14]. In regard to utilizing biological designs, research has shown that the use of analogy has been the most successful method in engineering design. Mak et al. [15] and Chakrabarti et al. [16] both demonstrate that creative, engineered solutions were inspired by their biological analogs, at varying levels of abstraction. Several design-by-analogy methods have been developed that go beyond formal design methods which merely include analogies and metaphors within the design process. McAdams et al. take a unique approach to design-by-analogy by utilizing a design repository of prior

engineering solutions that includes information about product functionality [17]. A quantitative measure based on functional similarity is presented and validated through case studies. Also following a model-based approach is the analogical design research by Bhatta et al., who explore analogies using two types of models: case-specific Structure-Behavior-Function models of physical devices and case-independent Behavior-Function models of physical principles [18]. Hey et al. found that while 'metaphors and analogies in design ... can enhance creativity and innovation', tools and methods that assist in the search process for suitable analogies are lacking [19]. To increase the likelihood of generating a design solution, Hey et al. suggest reframing or creating multiple representations of the design problem. Through re-representation, multiple linguistic representations are created for keyword searches in various databases, which results in a larger set of analogous solutions [19]. Goel reiterates the important application of analogies in his statement '[a]nalogical reasoning appears to play a key role in creative design' [20]. Although Goel was researching AI and concept generation software, he and all the aforementioned researchers have shown that the desirable consequence of analogical reasoning is creativity in design.

3.3 University of Toronto research effort
Researchers at the University of Toronto have recently worked to provide designers with biologically meaningful words that correspond to the Functional Basis functions. They analyzed the functions at the secondary, tertiary and correspondent levels to develop groups of words that were similar according to WordNet [21]. Biologically meaningful words were identified through a methodology developed by Chiu et al. [22]: using bridge verbs (verbs that were modified by a frequently occurring noun), categorizing bridge verbs and screening match results. Four cases for identification are discussed and examples presented: synonymous pair, implicitly synonymous pair, biologically specific form and mutually entailed pair [21]. Based on semantic relationships, the engineering function terms of the Functional Basis were used to systematically generate a list of biologically significant and connotative keywords. A short list is shown in Table 1.

4 ENGINEERING-TO-BIOLOGY THESAURUS
The engineering-to-biology thesaurus presented in this paper was developed to enhance the reconciled Functional Basis by Hirtz et al. [1]. The structure of the thesaurus was molded to fit the knowledge and purpose of the authors; synonyms and related concepts to the Functional Basis are grouped at the class, secondary and tertiary levels. In the paragraphs below, the thesaurus model and population method are explained, followed by implications of an engineering-to-biology thesaurus.

4.1 Thesaurus model
Studies have shown that user feedback in the form of questions is the most important source for analyzing information needs [23]. In the authors' experience, correlating biological terms to the Functional Basis functions was not an issue. The authors had the most difficulty understanding biological terms that were

Functional Basis term    Biologically meaningful term
Convert                  Decompose
Mix                      Exchange
Transport                Circulate
Store                    Deposit
Stabilize                Bind
Collect                  Concentrate

Table 1: Short list of biologically connotative function words.


considered flow type (material, signal and energy) when utilizing biological organisms or phenomena for idea generation or design inspiration. Guessing whether a biological material is liquid, solid or a mixture from its name generally resulted in a wrong choice, which made the biological concept perplexing. Similarly, needing a reference to look up biological terms each time a potential organism or phenomenon was found made the research tedious and disrupted thought patterns, leading to decreased efficiency. Thus, flow correspondent terms were chosen for the first draft of the engineering-to-biology thesaurus, which can be seen in Table 2.

4.2 Population method
Population of the engineering-to-biology thesaurus was achieved through functional word searches of a biological textbook that covers a broad range of topics, described as an organized verb-noun search. Chosen words were determined by their macrorelevancy, which is identified by frequency of use [2]. Functional Basis functions (verbs) were utilized for searching the biological textbook to extract biologically connotative words (nouns) that an engineering designer interested in function based design might encounter. Variations of the stem function word were not considered during the searches. For example, detect is the stem function word, and the variations of this verb (detection, detects, detected and detecting) were not included in search results. The nouns that were collocated, within the sentence, to the search word were counted and sorted by frequency, and all nouns that appeared more than two times were considered macrorelevant. Each macrorelevant term was researched in the New Oxford American Dictionary [24] and Henderson's Dictionary of Biological Terms [25] to determine whether it was of signal, material or energy type before being placed. Placement of terms in the thesaurus was at the discretion of the authors, and trite, domain independent terms were dismissed from the engineering-to-biology thesaurus. Key challenges to this approach for populating the thesaurus were the time required to search each function term to generate a noun listing, and understanding the definitions provided in the dictionary of biological terms. Several biological dictionary entries referenced other biological terms that were unclear or unknown, which required consulting more than one definition to determine the material, energy or signal type of the term in question.
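The search procedure lends itself to a short sketch; the hand-made noun set and the naive sentence splitter below are simplifications of what the authors actually did (their noun identification step is not specified here), and the example text is invented for illustration.

    # Sketch of the organized verb-noun search: for a stem function verb,
    # count nouns collocated in the same sentence and keep those occurring
    # more than two times ("macrorelevant").
    import re
    from collections import Counter

    NOUNS = {"signal", "membrane", "ligand", "receptor", "protein"}  # assumed

    def macrorelevant(corpus_text, verb, min_count=3):
        counts = Counter()
        for sentence in re.split(r"[.!?]", corpus_text):
            words = re.findall(r"[a-z]+", sentence.lower())
            if verb in words:                        # exact stem form only
                counts.update(w for w in words if w in NOUNS)
        return [noun for noun, c in counts.items() if c >= min_count]

    text = ("Receptors detect the signal. Proteins detect a ligand signal. "
            "Cells detect the signal across the membrane.")
    print(macrorelevant(text, "detect"))             # ['signal']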

4.3 Implications on Biological and Engineering Communities


Searching for biological inspiration
Searching a natural-language corpus for biological inspiration based on engineering functionality, or using engineering terms, typically produces mixed results. Hits that contain the search words often use them out of context or in a different sense than the designer intended. By utilizing the biologically connotative flow terms together with the desired functionality, search results improve and become more focused on biological systems.

Functional modeling of biological systems
The engineering-to-biology thesaurus provides direction when choosing the best-suited flow term to objectively model a biological system. Functional modeling of biological systems allows representation of solutions to specific engineering functions and direct knowledge discovery of the similarities and differences between biological and engineered systems as viewed from a functional perspective. The creation of engineered systems that implement strategies or principles of their biological counterparts, without reproducing physical biological entities, is another benefit of biological functional models.

Collaboration, creation, discovery
Terms contained within the engineering-to-biology thesaurus can be utilized to discover biological analogs to existing engineered systems and vice versa. Analogical reasoning often requires an interdisciplinary team to ensure the analogy is properly represented, whatever the mix of domains. Exploration of biomimetic designs prompts collaboration between biology and engineering researchers.

5 EXAMPLE APPLICATION
In this section, two examples utilizing the engineering-to-biology thesaurus are presented. A simple translation of the phenomenon abscission is presented first, to demonstrate how the thesaurus can aid comprehension. Second, a comprehensive example is given regarding the method of sensing within bacteria; signal transduction occurs to alert the bacteria to stimuli via a two-component regulatory system [26].

5.1 Simple Translation
A text excerpt describing abscission is presented in its original form and in a "translated" form using the term relationships established through the engineering-to-biology thesaurus. The abscission excerpt is taken from the biology textbook Life, The Science of Biology [27].
'In many species, leaves senesce (deteriorate because of aging) and fall at the end of the growing season, shortly before the onset of the severe conditions of winter. Leaf fall (abscission) is regulated by an interplay of the hormones ethylene and auxin. Finally, the entire plant senesces and dies. The effect of auxin on the detachment of old leaves from stems is quite different from root initiation. This process, called abscission, is the cause of autumn leaf fall. Leaves consist of a blade and a petiole that attaches the blade to the stem. Abscission results from the breakdown of a specific part of the petiole, the abscission zone. If the blade of a leaf is cut off, the petiole falls from the plant more rapidly than if the leaf had remained intact. If the cut surface is treated with an auxin solution, however, the petiole remains attached to the plant, often longer than an intact leaf would have. The time of abscission of leaves in nature appears to be determined in part by a decrease in the movement of auxin, produced in the blade, through the petiole.'

Material
  Human: Being, body
  Gas: Oxygen, nitrogen, chlorine
  Liquid: Acid, chemical, water, concentration, solute, cytokinin, pyruvate, fluid, nicotine, auxin, opium, glycerol, carotenoid, plasma, repressor
  Solid - Object: Body, substrate, microfilament, microtubules, structure, DNA, motor, fiber, chain, matter, nucleus, organ, tissue, muscle, ligand, cilia, GTP, flagella, RNA, tRNA, mRNA, tube, vein, heart, plant, ribosome, seed, apoplast, endotherm, ectotherm, stem, kidney, egg, ovaries, leaves, embryo, bacteria, gene, oncogene, cryptochromes, urea, chloroplasts, carbon, glucagons, adipose, angiosperm, meristems, mineral, stoma, shoot, capillary, receptors, hair, bone, tendon, neuron, photoreceptors, mechanoreceptors, host, chromosome, algae, petiole, promoter, phyla, lysosome, introns, exon, archaea, allele, cone, strand, centriole, spore, euryarchaeota, sporangia, zygote, sulfur, ctenophore, lipoproteins, STP, nephron, hyphae, plasmodesma, angiosperms, conifer, plasmid, xylem, pigment, sperm, hippocampus, somite, parathormone
  Solid - Particulate: Molecule, enzyme, virus, phloem, ribozyme, prokaryote, macromolecule, polymerase, nucleotide, polypeptide, organelle, symplast, mesophyll, brood, codon, messenger
  Solid - Composite
  Mixture - Gas-gas: Air, dioxide
  Mixture - Liquid-liquid: Solution, poison, slime, blood, urine, cytoplasm, peptide, hormone, melatonin, thyroxine, calcitonin, thyrotropin, estrogen, somatostatin, cortisol, glucagons, adrenocorticotropin, testosterone
  Mixture - Solid-solid: Adenosine, glial, glomerulus, blastula, monosaccharides, membrane, mulch, phosphate, gibberellin, plastids
  Mixture - Solid-liquid: Lipids, glutamic acid, synapse, peptidoglycan, cell, centrosomes, phytochrome, retina, insulin, protein, hemoglobin
  Mixture - Liquid-gas, Solid-gas, Solid-liquid-gas, Colloidal

Signal
  Status: Change, variation, lateral, allosteric, swelling, catalyzes, translation, exposed, active, separated, cycle, form, reaction, redox, deficiency, saturated, diffusion, broken, vicariant, hybridization, orientation, resting, cues, magnetic, volume, under, organized, fruiting, fatty, anaphase, metaphase, conjugation, osmolarity, senescence
  Status - Auditory: Sound
  Status - Olfactory: Smell
  Status - Tactile: Cold, pain
  Status - Taste: Gustation
  Status - Visual: Length, shortened, long, dark, full, double
  Control: Place, inhibit, release, excretory, development, match, inducer, digest, integrate, translation, transduction, equilibrium, grown, splicing, capture, distributed, prophase, phosphorylation
  Control - Analog: Flowering, center, synthesis, binding, photosynthesis
  Control - Discrete: Flower, translocation

Energy
  Human
  Acoustic: Echolocation, waves
  Biological: Blood, glucose, gibberellins
  Chemical: Calorie, metabolism, glucose, glycogen, ligand, nutrient, starch, fuel, sugar, mitochondria, synthesis, o, lipids
  Electrical: Electron, potential, q, feedback, charge, fields
  Electromagnetic - Optical: Light, infrared
  Electromagnetic - Solar: Light, sun, ultraviolet
  Hydraulic: Pressure, osmosis, osmoregulation
  Magnetic: Gravity, fields, waves
  Mechanical (Rotational, Translational): Muscle, pressure, tension, removing, stretch, depress
  Pneumatic: Pressure
  Radioactive/Nuclear
  Thermal: Temperature, heat, infrared

Table 2: Engineering-to-biology thesaurus (flow terms at class, secondary and tertiary levels with their biological correspondents; overall increasing degree of specification from class to tertiary level).


The translated version of the text excerpt, utilizing the term relationships of the engineering-to-biology thesaurus:
In the fall season, leaves of plants indicate a status signal to humans shortly before the onset of severe winter conditions. Leaf fall, referred to as abscission, is regulated by the interplay of liquid material and liquid-liquid mixture materials internal to the plant. The effect of the liquid material auxin on the detachment of old leaves from stems is quite different from root initiation. Leaves, which are material-solid-objects, utilize a material-solid-object called a petiole, which attaches the blade to the stem (also a material-solid-object). The abscission zone is where the material-solid-object petiole detaches from the material-solid-object stem. Separation is the main function of this phenomenon. Separation can be deterred with the material-liquid auxin, which is created in the blade of the leaf. The time of the status signal offered by plants (death, or all leaves abscising) appears to be determined in part by a decrease in a particular material-liquid flowing through plant leaves.
The translation process demonstrated above was performed manually, by reading the text excerpt describing abscission and identifying possible terms for translation. Translation of concepts occurs by substituting unclear biological terms with corresponding engineering terms, one sentence at a time. The two paragraphs from Life, The Science of Biology [27] were shortened to one concise paragraph clearly presenting the concept of abscission as events between materials. This tool allows the designer the freedom to choose which of the candidate terms are indeed translated, thus suiting the needs of engineers at novice to advanced levels in biological topics. Analyzing biological phenomena with engineering terms allows engineers to view abscission as the separation of solid materials due to liquid materials, disregarding specifics. By not confining the design space, the designer captures the underlying principle of the biological phenomenon, which increases the interaction in information retrieval and facilitates information transfer between the domains.
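A minimal sketch of how this substitution could be mechanised follows; the mapping holds only a few illustrative pairs from Table 2, and blanket whole-word replacement is a simplification of the manual, sentence-by-sentence judgement described above.

    # Sketch: substitute biological terms with their thesaurus classification.
    # Real translation is manual and selective; this replaces every match.
    import re

    THESAURUS = {  # biological term -> engineering flow term (from Table 2)
        "auxin":   "liquid material",
        "petiole": "material-solid-object",
        "stem":    "material-solid-object",
        "blade":   "material-solid-object",
    }

    def translate(text, mapping=THESAURUS):
        pattern = re.compile(r"\b(" + "|".join(mapping) + r")\b", re.IGNORECASE)
        return pattern.sub(lambda m: mapping[m.group(0).lower()], text)

    print(translate("The petiole attaches the blade to the stem; "
                    "auxin deters their separation."))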
5.2 Functional Modeling and Analogical Reasoning
In an effort to demonstrate the versatility of the engineering-to-biology thesaurus, a comprehensive example exploring the use of signal transduction in engineering design is considered next. The majority of biomimetic designs have been modeled after physical biological phenomena that can be observed, meaning direct imitations of biology, and are represented in mechanical, civil and architectural designs [28-30]. Direct mimicry has also led to two biomimetic designs based on the fly. An elementary motion detector chip based on the physiology of the common house fly was developed, which includes a photoreceptor circuit integrating each ommatidium of the compound eye and a signal processing circuit to simulate the wide-field motion-sensitive neurons found in flies [31]. Second, artificial compound eyes developed by polymeric synthesis self-align to transmit light to a CMOS sensor array [32]. Mimicking unseen phenomena, such as activity at the cellular level, is more difficult, as biological terminology becomes narrow and requires more knowledge of the subject. This example is a qualitative measure of the engineering-to-biology thesaurus. The topic of signal transduction in prokaryotes explains how bacteria sense their environment for survival. Bacteria respond to nutrients, synthesizing proteins involved in uptake and metabolism, and to non-nutrient signals, both physical and chemical [26]. Signaling pathways in bacteria consist of modular units called transmitters (sensor proteins) and receivers (response regulator proteins), which comprise the two-component regulatory system (TCRS).


Example bacterial processes that are controlled by TCRS are chemotaxis, sporulation and osmoregulation. Taiz and Zeiger explain how bacteria employ TCRS to sense extracellular signals:
'Bacteria sense chemicals in the environment by means of a small family of cell surface receptors, each involved in the response to a defined group of chemicals (hereafter referred to as ligands). A protein in the plasma membrane of bacteria binds directly to a ligand, or binds to a soluble protein that has already attached to the ligand, in the periplasmic space between the plasma membrane and the cell wall. Upon binding, the membrane protein undergoes a conformational change that is propagated across the membrane to the cytosolic domain of the receptor protein. This conformational change initiates the signaling pathway that leads to the response.' - [26]
Figure 1 provides a visual representation of the sensing process: (A) defining cellular boundaries and substances present in bacteria; (B) a conformational change sends a signal to the cytosolic domain, triggering the transmitter to release protein phosphate; (C) phosphate binds to the receiver, initiating the output response.
Figure 1: Method of sensing extracellular signals with TCRS in bacteria.
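Read as data, this sensing process reduces to the function-flow abstraction developed below (Figure 2). The following sketch encodes that chain together with the wireless door opener analogy discussed later, matching the two on function verbs alone; the encoding itself is our assumption, not the authors' tooling.

    # Sketch: TCRS sensing as a function-flow chain, matched against an
    # engineered system on function verbs alone, whatever the flows.
    TCRS = [("join",     "chemical energy", "solid material"),
            ("detect",   "stimulus signal"),
            ("regulate", "chemical energy")]

    DOOR_OPENER = [("join",     "electrical energy", "solid material"),
                   ("detect",   "stimulus signal"),
                   ("regulate", "electrical energy")]

    def analogous(chain_a, chain_b):
        """Chains are analogous when their function verbs line up; the
        specific flows may differ (adhere to the abstraction, not flows)."""
        return [step[0] for step in chain_a] == [step[0] for step in chain_b]

    print(analogous(TCRS, DOOR_OPENER))  # True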

Figure 2: Functional Model of TCRS.
To utilize TCRS with function-based design, a functional model of the method shown in Figure 1 is given in Figure 2. Ligands are found in the thesaurus under material-solid-object and energy-chemical. In the case of TCRS, ligands are utilized as chemical signals, thus chemical energy was the chosen flow. Bacterium, the singular form of bacteria, is listed as a material-solid-object in the thesaurus because a prokaryote cell does not have a well-defined nucleus. Binding of ligand to the protein is captured with the 'join chemical energy and solid material' function-flow pair. After coupling, detection of the stimulus signal occurs, which is propagated to the cytosolic domain, releasing protein phosphate. A signal pathway is now established, which regulates the chemical energy within the bacterium to produce a response. The two components of TCRS are transmitter and receiver proteins; however, from a functional standpoint, chemical energy is needed to couple with and change the bacterium material to elicit a response. This abstraction of TCRS can now be utilized for analogical reasoning. Engineered systems that mimic TCRS are wireless entry devices, such as a garage door opener or an automobile key fob. The handheld switch sends electrical energy in the form of a wireless signal to join with the receiver in the solid material; once detected, electrical energy is regulated to open the door, with the solid material changing position. With analogical reasoning it is not required to design strictly using the flows recognized in the biological system; rather, adhering to the abstraction of the biological phenomenon is most important. Although this example utilizes reverse engineering to match a biological system to an existing engineered system, engineering designers can utilize the functional model of the biological phenomenon to develop conceptual designs of new products and processes.

6 CONCLUSIONS
The natural world provides numerous cases for analogy and inspiration in engineering design. From simple cases such as hook and latch attachments to articulated-wing flying vehicles, nature provides many sources for ideas. Though biological systems provide a wealth of elegant and ingenious approaches to problem solving, there are challenges that prevent designers from leveraging the full insight of the biological domain. Biologically inspired or analogical designs require that designers have knowledge of previous design solutions during engineering design activities. The learned representations are organized at different levels of abstraction that facilitate the decomposition of design solutions, and allow analogs to be discovered with cues taken from each level. We presented an engineering-to-biology thesaurus that (1)

allows designers to focus on becoming a competent engineering designer; (2) lessens the burden when utilizing knowledge from the biological domain by providing a link between engineering and biological terms; and (3) lists biologically connotative words that an engineering designer interested in function based design might encounter. Through this research, flow type biologically connotative terms were mapped to engineering terms and placed into pre-determined classifications set by the Functional Basis structure. It was observed that the majority of biologically connotative terms can be grouped at the tertiary level, indicating the preciseness of terms in the biological domain. Several material type flow terms can be grouped as material-solid-object, material-solid-composite or material-mixture-liquid-liquid. Signal is a more subjective flow classification since, within the biological domain, materials and energies can also act as signals, as shown with the TCRS example in Section 5.2. Therefore, most signal flow terms are grouped at the secondary level. The most heavily populated secondary energy flow, unsurprisingly, was chemical energy. Many chemical substances provide energy, such as sugars or starches, which humans can relate to. Breaking down a biological solution into smaller parts, based on functionality, allows one to liken a biological organism or phenomenon to an engineered system for ease of understanding and transfer of design knowledge. The biological correspondent terms that comprise the engineering-to-biology thesaurus were collected utilizing an organized verb-noun search that extracts collocated words from a biological text based on the stem search word alone. The first draft of the engineering-to-biology thesaurus and the method for compiling the terms were presented and discussed. Implications of the proposed thesaurus on the engineering and biology communities were explored. The thesaurus will enable the engineering and biology communities to better collaborate, create and discover through comprehension of concepts, functional decomposition and guidance for inspirational searches. Furthermore, the engineering-to-biology thesaurus is a subject domain oriented, intermediary structure, which can be updated as needs are identified.

7 FUTURE WORK
An immediate step towards improving the engineering-to-biology thesaurus would be reconciling the efforts performed by researchers at the University of Toronto and Missouri S&T. Including the biologically meaningful function words with the biologically connotative flow terms presented here would create a complete set of biological correspondent terms for the Functional Basis. To further


strengthen the thesaurus, the population search method presented in Section 4.2 should be repeated to include variations of the stem function word. We predict this would lead to more results, as biology textbooks are written in natural-language format, and to a longer list of biologically connotative flow terms. Automatic translation of biological text into modified text with correspondent engineering terms inserted is another possible approach to improving the engineering-to-biology thesaurus. By automating the term swapping process, the expected outcome is increased usage, faster recognition of design parallels and overall increased knowledge transfer between the engineering and biology domains. Other design tools created at Missouri S&T could be updated to utilize the engineering-to-biology thesaurus presented in this paper. Specifically, a recently developed automated retrieval tool searches a user-defined corpus using Functional Basis functions and subject domain oriented or Functional Basis flows. The search could be enhanced by cross-referencing the user input search terms with the thesaurus and searching each possible combination of function and flow terms. The web-based repository of design information could possibly utilize the terms contained within the thesaurus to discover biological analogs to existing engineered systems. A similar type of "reverse engineering" prompted the discoveries made by Tinsley et al. [33]; however, the web-based repository was not utilized in that research.

8 ACKNOWLEDGMENTS
This research is funded by the National Science Foundation grant DMI-0636411. A special thanks to Dr. L.H. Shu of the University of Toronto for allowing Missouri S&T access to their website.

9 REFERENCES
[1] Hirtz, J., Stone, R., McAdams, D., Szykman, S. and Wood, K., 2002, A Functional Basis for Engineering Design: Reconciling and Evolving Previous Efforts, Research in Engineering Design, 13(2): 65-82.
[2] Lopez-Huertas, M.J., 1997, Thesaurus Structure Design: A Conceptual Approach for Improved Interaction, Journal of Documentation, 53(2): 139-177.
[3] Pahl, G. and Beitz, W., 1996, Engineering Design: A Systematic Approach, 2 ed, Berlin; Heidelberg; New York, Springer-Verlag.
[4] Otto, K.N. and Wood, K.L., 2001, Product Design: Techniques in Reverse Engineering and New Product Development, Upper Saddle River, New Jersey, Prentice-Hall.
[5] Ulrich, K.T. and Eppinger, S.D., 2004, Product Design and Development, Boston, McGraw-Hill/Irwin.
[6] Ullman, D.G., 2002, The Mechanical Design Process, 3 ed, New York, McGraw-Hill, Inc.
[7] Hundal, M., 1990, A Systematic Method for Developing Function Structures, Solutions and Concept Variants, Mechanism and Machine Theory, 25(3): 243-256.
[8] Stone, R. and Wood, K., 2000, Development of a Functional Basis for Design, Journal of Mechanical Design, 122(4): 359-370.
[9] Fantoni, G., Taviani, C. and Santoro, R., 2007, Design by functional synonyms and antonyms: a structured creative technique based on functional analysis, Proceedings of the I Mech E Part B: Journal of Engineering Manufacture, 221(6): 673-683.


[10] Zhang, W.Y., Tor, S.B., Britton, G.A. and Deng, Y.M., 2002, Functional Design of Mechanical Products Based on Behavior-Driven Function-Environment-Structure Modeling Framework, MIT Innovation in Manufacturing Systems and Technology (IMST), http://hdl.handle.net/1721.1/4031.
[11] Robotham, A.J., 2002, The use of function/means trees for modelling technical, semantic and business functions, Journal of Engineering Design, 13(3): 243-251.
[12] Xu, Q.L., Ong, S.K. and Nee, A.Y.C., 2006, Function-based design synthesis approach to design reuse, Research in Engineering Design, 17: 27-44.
[13] Thompson, G. and Lordan, M., 1999, A review of creativity principles applied to engineering design, Proceedings of the I Mech E Part E: Journal of Process Mechanical Engineering, 213(1): 17-31.
[14] Cross, N., 1996, Creativity in design: not leaping but bridging, Creativity and Cognition, Loughborough University, Loughborough, 27-35.
[15] Mak, T.W. and Shu, L.H., 2008, Using descriptions of biological phenomena for idea generation, Research in Engineering Design, 19: 21-28.
[16] Chakrabarti, A., Sarkar, P., Leelavathamma, B. and Nataraju, B.S., 2005, A functional representation for aiding biomimetic and artificial inspiration of new ideas, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 19: 113-132.
[17] McAdams, D. and Wood, K., 2000, Quantitative Measures for Design By Analogy, ASME 2000 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Baltimore, MD.
[18] Bhatta, S., Goel, A. and Prabhakar, S., 1994, Innovation in Analogical Design: a Model-Based Approach, Third International Conference on Artificial Intelligence in Design (AID-94), Lausanne, Switzerland, 57-74.
[19] Hey, J., Linsey, J., Agogino, A.M. and Wood, K.L., 2008, Analogies and Metaphors in Creative Design, International Journal of Engineering Education, 24(2): 283-294.
[20] Goel, A., 1997, Design, analogy and creativity, IEEE Expert Intelligent Systems and Their Applications, 12(3): 62-70.
[21] Cheong, H., Shu, L.H., Stone, R.B. and McAdams, D.A., 2008, Translating terms of the functional basis into biologically meaningful words, ASME 2008 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, New York City, NY.
[22] Chiu, I. and Shu, L.H., 2005, Bridging Cross-Domain Terminology for Biomimetic Design, ASME 2005 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Long Beach, California.
[23] Dalrymple, P.W., 1990, Retrieval by reformulation in two library catalogs: toward a cognitive model of searching behavior, Journal of the American Society for Information Science, 41(4): 272-281.
[24] McKean, E., 2005, The New Oxford American Dictionary, New York, Oxford University Press.
[25] Lawrence, E. and Holmes, S., 1989, Henderson's Dictionary of Biological Terms, New York, Wiley.

[26] Taiz, L. and Zeiger, E., 2006, Chapter 14: Gene Expression and Signal Transduction, in Plant Physiology, Sinauer Associates, Inc., Sunderland. [27] Purves, W.K., Sadava, D., Orians, G.H. and Heller, H.C., 2001, Life, The Science of Biology, 6 ed, Sunderland, MA, Sinauer Associates. [28] Brebbia, C.A., Sucharov, L.J. and Pascolo, P., 2002, Design and nature: Comparing design in nature with science and engineering, Southampton; Boston, WIT. [29] Brebbia, C.A. and Collins, M.W., 2004, Design and nature II: Comparing design in nature with science and engineering, Southampton, WIT. [30] Brebbia, C.A. and Technology, W.I.O., 2006, Design and nature III: Comparing design in nature with science and engineering, Southampton, WIT.

[31] Harrison, R.R. and Koch, C., 1999, A Robust Analog VLSI Motion Sensor Based on the Visual System of the Fly, Autonomous Robots, 7(3): 211-224.
[32] Jeong, K.-H., Kim, J. and Lee, L.P., 2005, Polymeric synthesis of biomimetic artificial compound eyes, The 13th International Conference on Solid-State Sensors, Actuators and Microsystems, Seoul, Korea.
[33] Tinsley, A., Midha, P.A., Nagel, R.L., McAdams, D.A., Stone, R.B. and Shu, L.H., 2007, Exploring the use of Functional Models as a Foundation for Biomimetic Conceptual Design, ASME 2007 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Las Vegas, Nevada.


How do Designers Categorize Information in the Generation Phase of the Creative Process? J.E. Kim, C. Bouchard, J.F. Omhover and A. Aoussat Product Design and Innovation Laboratory (LCPI), Arts et Metiers ParisTech, 151 boulevard de l'Hôpital, 75013, Paris, France jieun.kim@paris.ensam.fr

Abstract
In this paper we first provide a wide-ranging literature review on designers' cognitive activity in order to bridge the informative and generative phases in the early stages of design. In the generation phase of the creative process, designers use various levels of information and execute internal processing. We found that some of this internal processing (especially encoding, storage and recall) can be described as information categorisation. In this respect, we propose a descriptive model of information processing to generate ideas by integrating a model from cognitive psychology. We conclude by discussing the limitations of current research and perspectives for further work.
Keywords: Categorisation, information, design

1 INTRODUCTION
According to Jones (1970) [1], the designer has been described as a 'black box', because it was thought that designers generated a creative solution without being able to explain or illustrate how the solutions came out. Since the 1980s, the paradigm of 'design as a discipline' has led to a vigorous discussion of the view that design has its own things to know and its own ways of knowing them [2]. While in the past the design research community focused on the former (related to products), nowadays the growing interest is the analysis of designers' cognitive activities [3]-[11]. This interest has become a major interdisciplinary topic not only in design science, but also in psychology, computer science and artificial intelligence. Especially as the early stages of design are considered some of the most cognitively intensive stages in the whole design process [12], much more research needs to be invested in them. The early stages of the design process can be characterized by information processing and idea generation (also called 'conceptualization') [5, 9]. In the early stages of the design process, designers use various levels of information, reducing abstraction through the integration of more and more constraints [6, 13]. In this respect the designer's cognitive activity has been considered an information processing activity, which can be described as an information cycle. An information cycle includes informative, generative and decision-making (evaluation-selection) phases, whose outcome is an intermediate representation, and iterates in an evolutionary manner (see Figure 1) [5, 14]. Insofar as much design research depends on findings from empirical studies [15], the dominant research interest has been specific activities such as 'what and where' designers retrieve and collect inspirational sources and 'how' they represent their ideas using physical representations, such



as in sketching activities [16]. Therefore, these studies relatively neglected the use of implicit information and the internal processing that can bridge the informative and generative phases in the early stages of design. These uncertain phases are in accordance with the question of creativity. According to Wallas's (1926) [17] well-known four-stage model of the creative process (preparation, incubation, illumination, verification), the middle phases, i.e. how designers incubate information and how they attain creative insight, still remain incompletely understood as regards design in practice [18, 19]. In Howard's study [20, 21] comparing the 'engineering design process' and the 'creative design process', the 'creative process' was defined as 'a cognitive process culminating in the generation of an idea'. In this respect, it is supposed that cognitive studies, as a creative approach, bring insights to understand the uncertain phases mentioned above.

Figure 1: Description of an informational cycle [5].
In this paper, we first provide the state of the art on the study of designers' cognitive activities (Part 2) and also review worldwide work on computational support for designers' activity through a wide-ranging literature review (Part 3). In Part 4, we provide a descriptive model of information processing to generate ideas by integrating a model from cognitive psychology.

Finally, we discuss the limitations of current research and perspectives for further work (Part 5).

2 LITERATURE REVIEW ON DESIGNERS' COGNITIVE ACTIVITIES
The design information can be divided into external information, such as visual sources conveyed by photos and images, and mental representations of design [16]. The former comes from designers collecting inspirational information, and the latter can be structured by cognitive mechanisms [3, 22, 23, 45]. Inspirational information is an essential base in design thinking, for other alternatives and even for other completely different ideas [3, 24]. However, as mentioned above (in Part 1), insofar as much design research depends on specific activities from empirical studies, the link between external information and representation in sketching and drawing is well established [15]. By contrast, the importance of the use of mental representations in design has been relatively neglected. Currently, some pioneers have stressed the necessity of defining internal processing and the important role of non-explicit information. Restrepo [25] found in his empirical study that designers sometimes completely ignore other external information sources when they develop ideas, i.e. they can generate ideas without the aid of external representations such as image sources. Recently, Bilda's work [26] has provided quantitative results showing that the use of imagery alone supported idea development as well as sketching did. This way, even though there are still some questions about the different purpose and efficiency of the use of various levels of design information to generate ideas, we believe that external and internal information interact with each other in an evolutionary manner to generate ideas, and that designers integrate various levels of information which are gradually visually categorized and synthesized into design solutions [6]. This specific activity will be called 'information categorization' in this paper. To understand this internal processing, the authors intend to identify each cognitive action between the informative and generative phases. These five cognitive actions are Stimuli, Encoding, Storage, Retrieval (Recall, Recognition) and Externalization. They were chosen based on a human information processing model in cognitive psychology [27, 28]. Table 1 lists key research concerning designers' cognitive activities bridging the informative and generative phases, classified according to the five cognitive actions and the type of design information (external/internal) as defined above. The most interesting finding in Table 1, compared with the sequence of the information processing model (Encoding → Storage → Retrieval), is that designers operate on information less systematically. External information helps designers to structure mental representations of design ideas.

Informative phase - external information:
  Stimuli - The importance of sources of inspiration for idea generation in the design process (Design science) [3]
  Stimuli - Designers' information-collecting behaviour to solve design problems (Design science) [25]
Informative phase - internal information:
  Storage - Modeling of recollecting memory in the creation process (Cognitive science, Kansei engineering) [23]
  Encoding - Comparison of the level of Kansei preference, based on the 'Kodawari' method, between designers and non-designers (Psychology, Kansei engineering) [30]
  Storage - Structuring knowledge by establishing rules through the Values-Function-Solutions chain (Design science, Cognitive science) [6, 14]
  Storage - Use of visual analogy between expert and novice designers (Psychology, Architecture) [31, 32]
Generative phase - mental representation (imagery):
  Recognition/Recall - Comparisons of the different types and content of visualization in concept design (Cognitive science, Marketing) [33]
  Recognition - The use of existing aids (information) to enhance creative thinking (AI, Creativity) [34]
  Recall - Intensive literature review to understand the role of imagery (Architecture) [22]
  Recall - Comparison of the use of imagery alone and sketching during conceptualization (Architecture) [26, 35]
Generative phase - external representation (sketching):
  Externalization - Sketching activities as a medium of visual thinking, and study of the link between sketching and imagery (Creativity, Architecture) [24, 36, 37]
  Stimuli - Modeling of the situated Function-Behaviour-Structure (FBS) framework (AI, Creativity) [38]
  Stimuli - Protocol analysis: why freehand sketches as external representations are essential for crystallizing design ideas (Cognitive science, Architecture) [39]

Table 1: Key research concerning designers' cognitive activity in the early stages of design.


These structured mental representations are reused to generate other, refined ideas, which can re-stimulate further ideas in externalised form (diagrams, sketches, etc.) and be stored as internal information that can potentially be evoked in other design contexts [29]. This mechanism pervades professional designers' activities. Specifically, modelling the mechanism in its entirety, from internal information in the informative phase to mental representation in the generative phase, gives us valuable clues to explain the activity of 'information categorization'. This stage of work, although essential, has remained largely hidden; a descriptive model of it is provided in Part 4.

3 CURRENT COMPUTATIONAL SUPPORTS ON DESIGNERS' ACTIVITIES

Nowadays, with the penetration of Information Technology (IT), there is a growing trend towards computational tools and internet use centred on designers' activities. Designers tend to build their own digital design databases, giving them more and more importance within their activities [7, 8, 11]. In this respect, computational support encompassing the design process is very important. However, as shown in Figure 2, computational support has evolved in the reverse order of the design process. In contrast to the later execution phases, which are well served by prototyping technologies such as CAM (Computer-Aided Manufacturing), CAD (Computer-Aided Design) and CAS (Computer-Aided Styling), computational support to help idea generation and explore designers' creativity in the early stages of design (conceptualisation) is relatively undeveloped [5].

Figure 2: Evolution of computational support (model developed from [16])

Even though commercial image retrieval websites such as Google, Getty Images and Flickr allow designers to gather large amounts of information easily, retrieving external sources from the web is laborious, and the results are often inadequate as inspirational sources for designers. Moreover, given the growing size of databases, structuring the design information is increasingly difficult [16]. Nor can these tools integrate the internal sources which are generated through cognitive mechanisms. In order to develop computational tools that support designers' cognitive activities in the early stages of design, it is very important to understand those cognitive activities and to formalize the cognitive design process through the extraction of design knowledge, rules and skills [6]. Also, based on theoretical accounts from cognitive psychology, design rules need to be translated into design algorithms in order to develop computational tools.

4 DESCRIPTIVE MODEL: HOW DO DESIGNERS CATEGORIZE INFORMATION?

Having reviewed the literature in Part 2, a descriptive model of information processing to generate ideas is proposed as a linkage between internal and external processes spanning the informative and generative phases. This model is largely based on the study of memory in cognitive psychology, especially the work of Atkinson and Shiffrin [27], labelled the 'stage theory'. Our descriptive model explains the use of various levels of information between the informative and generative phases. Because the authors are particularly interested in information categorisation as internal processing, we found that the mechanism of information categorisation linked to long-term memory (LTM) strongly resembles what designers do in design practice. In design practice, even though the purpose of information categorisation may differ depending on the context of application, the activity provides a unique opportunity to see how a designer's needs for information are shaped by the information already accessed [7]. It is also very specific inasmuch as it includes the ability both to diverge, generating new categories, and to converge, classifying image resources to fit existing categories. Moreover, in observing designers' activities, we found that designers try to discover a 'new' or 'previously hidden' association between a certain piece of information and what they want to design [34]. In detail, information categorization is based on the use of attributes from low levels, such as formal, chromatic and textural attributes, to high-level descriptors, i.e. semantic adjectives, for instance 'warm colours' to represent colours from the red series. The use of semantic adjectives to link words with images, and vice versa, imposes a much greater cognitive load than low-level attributes [40]. In our descriptive model drawn from psychology (see Figure 3), three marked cognitive actions (Encoding, Storage and Recall), which are linked to LTM, relate strongly to the cognitive mechanisms of information categorisation. The definitions of these three cognitive actions in cognitive psychology are as follows [27, 28].


- Encoding: the conversion of incoming information into a form that can be stored in memory.
- Storage: retaining information in memory over time.
- Recall: reproducing information from memory, which can be assisted by cues, e.g. categories or imagery.

According to the accepted theory of LTM, information transferred from short-term memory (STM) to LTM must be encoded as chunks in order to enter LTM. LTM then stores information in an associative network of nodes and links: a node may contain concepts, words, images or any other information, and a link is an association between two nodes [27, 28, 41]. Categorised information in LTM can then be retrieved to produce new information; this process is called 'recall'.
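To make this mechanism concrete, the following minimal Python sketch expresses encoding, storage and recall over an associative network of nodes and links. It is our own illustration, not part of the cited models; the class and method names are invented for this example.

```python
# Minimal sketch of an LTM-style associative network (illustrative only).
# Nodes hold chunks (words, images, concepts); links associate two nodes.

class AssociativeLTM:
    def __init__(self):
        self.nodes = set()            # stored chunks
        self.links = {}               # node -> set of associated nodes

    def encode(self, chunk):
        """Encoding: convert incoming information into a storable chunk."""
        node = chunk.strip().lower()  # toy normalisation into a 'chunk'
        self.nodes.add(node)
        self.links.setdefault(node, set())
        return node

    def store(self, chunk_a, chunk_b):
        """Storage: retain two chunks and an association between them."""
        a, b = self.encode(chunk_a), self.encode(chunk_b)
        self.links[a].add(b)
        self.links[b].add(a)

    def recall(self, cue):
        """Recall: reproduce stored information, assisted by a cue."""
        node = cue.strip().lower()
        return sorted(self.links.get(node, set()))

ltm = AssociativeLTM()
ltm.store("warm colours", "red")           # semantic adjective linked to a hue
ltm.store("warm colours", "sunset photo")  # ...and to an image source
print(ltm.recall("warm colours"))          # -> ['red', 'sunset photo']
```

Modelling links symmetrically keeps the sketch close to the idea that recall is cue-driven: any stored chunk can serve as the cue for its associates.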

Figure 3: Descriptive model of information processing to generate ideas

5 DISCUSSION

In the generation phase of the creative process, designers use various levels of information and internal processing. External and internal types of information interact with each other in generating ideas. Designers integrate many categories of information that are gradually categorized and synthesized into design solutions [6]. These implicit activities between the informative and generative phases can be described as information categorization. Accordingly, in Part 4 a descriptive model of information processing to generate ideas was proposed. Specifically, three cognitive actions (encoding, storage and recall) were identified as possible clues to explain the cognitive mechanism of information categorisation. These three cognitive actions can also serve as a starting point for future experimental studies conducted close to designers. For the further development of computational tools for information categorization, tools for the early stages of the design process should allow designers to communicate easily with other designers and professionals involved in the early collaborative design process [6, 45]. Specifically, the need for computational support for the 'information categorization' phase was raised in this paper. Two limitations of the current research emerged. One comes from the ambiguity of the process, which stems from the fact that information categorization is mostly mental and a subjective task [8, 42, 43], even though we identified possible cognitive mechanisms related to it, as discussed above. The other limitation is the holistic nature of design information, which includes multidimensional data. In design practice, when designers encounter design information, they are naturally familiar with using various levels of descriptors to characterize it, from high-level descriptors (sociological values, abstract semantics), through middle-level ones (style), to low-level ones (colour, shape, texture) [6].
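As a toy illustration of how a low-level attribute can be lifted to a high-level semantic descriptor such as 'warm', consider the sketch below. It is our own example: the hue ranges and function names are invented and are not taken from any cited system.

```python
# Sketch: bridging low-level colour attributes and high-level semantic
# adjectives. The hue ranges are rough illustrative choices only.

WARM_HUES = range(0, 60)      # red-to-yellow on a 0-359 degree colour wheel
COOL_HUES = range(180, 260)   # cyan-to-blue

def semantic_adjectives(hue_degrees):
    """Map a low-level chromatic attribute to high-level descriptors."""
    hue = hue_degrees % 360
    adjectives = []
    if hue in WARM_HUES:
        adjectives.append("warm")
    if hue in COOL_HUES:
        adjectives.append("cool")
    return adjectives

print(semantic_adjectives(20))    # red-series hue  -> ['warm']
print(semantic_adjectives(210))   # blue-series hue -> ['cool']
```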

The high-level descriptors in particular can also serve as a source of creativity, because they strongly carry the designer's personal sensibility and ability to diverge design ideas. This, however, can create a semantic gap between designers' subjective descriptors and the digitalized, generalized descriptors translated into a computer algorithm [44].

6 CONCLUSION

The question 'How do designers categorize information in the generation phase of the creative process?' was raised in this paper. A wide range of literature related to designers' cognitive activities and to computational support spanning the informative and generative phases in the early stages of the design process has been reviewed, and a descriptive model offering promising clues to answering our question was provided. In the early stages of the design process, one of the most interesting properties is the iterative process of diverging and converging ideas to arrive at a design solution. Our research focus was the link between the informative and generative phases, that is, the passage from the divergence phase to the convergence phase. This passage is a quiet, internal process involving a great deal of implicit information, so it is necessary to understand designers' cognitive activities by interacting with work from other disciplines, for example cognitive psychology. Our descriptive model of information processing to generate ideas can benefit from this enlarged approach. In the industrial context, as customers and users expect a variety of new products within short timescales, the market and designers are keen to formalize the earliest design process and to improve the computational tools that could shorten the early stages of the design process, which remain relatively undeveloped.


In this respect, the analysis of designers' cognitive activities and computational support is recognized as a major interdisciplinary topic, not only in design science but also in cognitive psychology, computer science and artificial intelligence. Further research can benefit from insights from all these areas and communities.

7 ACKNOWLEDGMENTS

The authors are grateful to the European Commission for funding this project, and express their gratitude to all partners of the TRENDS Consortium for their collaboration (www.trendsproject.org). This study also refers to the GENIUS project, funded by the ANR (Agence Nationale de la Recherche), which runs from 2008 to 2011 (www.genius-anr.org).

8 REFERENCES
[1] Jones, J.C., 1992, Design Methods, 2nd ed., Van Nostrand Reinhold, New York.
[2] Cross, N., 2007, Forty years of design research, Design Studies, 28:1-4.
[3] Eckert, C., Stacey, M.K., 2000, Sources of inspiration: a language of design, Design Studies, 21:99-112.
[4] Bouchard, C., Aoussat, A., 2002, Design process perceived as an information process to enhance the introduction of new tools, International Journal of Vehicle Design, ISSN 0143-3369, 31.2:162-175.
[5] Bouchard, C., Lim, D., Aoussat, A., 2003, Development of a Kansei Engineering system for industrial design: identification of input data for Kansei Engineering Systems, Journal of the Asian Design International Conference, ISSN 1348-7817, (1):12.
[6] Bouchard, C., Omhover, J.F., Mougenot, C., Aoussat, A., et al., 2008, TRENDS: a content-based information retrieval system for designers, Design Computing and Cognition DCC'08, Gero, J.S., Goel, A. (eds), 593-611.
[7] Restrepo, J., 2004, Information Processing in Design, Delft University Press, The Netherlands, ISBN 90-407-2552-7.
[8] Büscher, M., Fruekabeder, V., Hodgson, E., et al., 2004, Designs on objects: imaginative practice, aesthetic categorization and the design of multimedia archiving support, Digital Creativity.
[9] Stappers, P.J., Sanders, E.-B.N., 2005, Tools for designers, products for users?, International Conference on Planning and Design: Creative Interaction and Sustainable Development.
[10] McDonagh, D., Denton, H., 2005, Exploring the degree to which individual students share a common perception of specific trend boards: observations relating to teaching, learning and team-based design, Design Studies, 26:35-53.
[11] Keller, A.I., 2005, For Inspiration Only: Designer Interaction with Informal Collections of Visual Material, Ph.D. thesis, Delft University of Technology, The Netherlands.
[12] Nakakoji, K., 2005, Special issue on 'Computational Approaches for Early Stages of Design', Knowledge-Based Systems, 18:381-382.
[13] Bonnardel, N., Marmèche, E., 2005, Towards supporting evocation processes in creative design: a cognitive approach, International Journal of Human-Computer Studies, 63(4-5):422-435.
[14] Bouchard, C., Omhover, J.F., Mougenot, C., Aoussat, A., 2007, A Kansei based image retrieval system based on the conjoint trends analysis method, IASDR 2007, Hong Kong.
[15] Coley, F., Houseman, O., Roy, R., 2007, An introduction to capturing and understanding the cognitive behaviour of design engineers, Journal of Engineering Design, 311-325.
[16] Kim, J.E., Bouchard, C., Omhover, J.F., Aoussat, A., 2008, State of the art on designers' cognitive activities and computational support, with emphasis on information categorisation, Proceedings of the EU-Korea Conference on Science and Technology, Yoo, S.-D. (ed.), Springer Proceedings in Physics, vol. 124, 355-363.
[17] Wallas, G., 1926, The Art of Thought, Jonathan Cape, London.
[18] Nakakoji, K., Yamamoto, Y., Ohira, M., 1999, A framework that supports collective creativity in design using visual images, Creativity and Cognition, ACM Press, New York, 166-173.
[19] Pasman, G., Stappers, P.J., 2001, 'ProductWorld', an interactive environment for classifying and retrieving product samples, Proceedings of the 5th Asian Design Conference, Seoul.
[20] Howard, T., Culley, S.J., Dekoninck, E., 2007, Creativity in the engineering design process, International Conference on Engineering Design, ICED'07.
[21] Howard, T.J., Culley, S.J., Dekoninck, E., 2008, Describing the creative design process by the integration of engineering design and cognitive psychology literature, Design Studies, 29(2):160-180.
[22] Eastman, C.M., 2001, New directions in design cognition: studies on representation and recall, Design Knowing and Learning, 79-103.
[23] Oshima, N., Harada, A., 2003, Design methodology which recollects memory in the creation process, 6th Asian Design Conference, Japan.
[24] Goldschmidt, G., 1994, On visual design thinking: the vis kids of architecture, Design Studies, 15(2):158-174.
[25] Restrepo, J., 2004, Information Processing in Design (Design Science Planning), Delft University Press.
[26] Bilda, Z., Gero, J.S., 2008, Idea development can occur using imagery only, Design Computing and Cognition DCC'08, Gero, J.S., Goel, A. (eds), 303-320.
[27] Atkinson, R., Shiffrin, R., 1968, Human memory: a proposed system and its control processes, in Spence, K., Spence, J. (eds), The Psychology of Learning and Motivation: Advances in Research and Theory, Vol. 2, Academic Press, New York.
[28] Dix, A., Finlay, J., Abowd, G., Beale, R., 1993, Human-Computer Interaction, 1st edition, Prentice-Hall.
[29] Gero, J.S., 2002, Towards a theory of designing as situated acts, The Science of Design International Conference, Lyon.
[30] Kang, N.G., Yamanaka, T., 2006, Kansei quality of products: comparison based on designers' and users' evaluation, Proceedings of the First International Conference on Kansei Engineering & Intelligent Systems (KEIS'06), Japan, September 5-7, 220-227.
[31] Casakin, H., Goldschmidt, G., 1999, Expertise and the use of visual analogy: implications for design education, Design Studies, 20:153-175.
[32] Casakin, H., 2003, Visual analogy as a cognitive strategy in the design process: expert versus novice performance, Design Thinking Research Symposium 6.
[33] Dahl, D.W., Chattopadhyay, A., Gorn, G.J., 2001, The importance of visualisation in concept design, Design Studies, 22(1):5-26.
[34] Sharples, M., 1994, Cognitive support and the rhythm of design, in Dartnall, T. (ed.), Artificial Intelligence and Creativity, Kluwer Academic Publishers, The Netherlands, 385-402.
[35] Bilda, Z., Gero, J.S., 2007, The impact of working memory limitations on the design process during conceptualization, Design Studies, 28(4):343-367.
[36] Goldschmidt, G., 1991, The dialectics of sketching, Creativity Research Journal, 4(2):123-143.
[37] Kavakli, M., Gero, J.S., 2001, Sketching as mental imagery processing, Design Studies, 22(4):347-364.
[38] Gero, J.S., Kannengiesser, U., 2003, A function-behaviour-structure view of social situated agents, CAADRIA'03.
[39] Suwa, M., Tversky, B., 1997, What do architects and students perceive in their design sketches? A protocol analysis, Design Studies, 18(4):385-403.
[40] Pasman, G., 2003, Designing with Precedents, Ph.D. thesis, Delft University of Technology, The Netherlands.
[41] Croft, R.S., 2004, A quick look at cognitive theory.
[42] Jung, H., Son, M.S., Lee, K., 2007, Folksonomy-based collaborative tagging system for classifying visualized information in design practice, CHI, Beijing.
[43] Maya Castaño, J., 2007, What user product experiences is it currently possible to integrate into the design process?, International Conference on Engineering Design, ICED'07.
[44] Wang, X.J., Ma, W.Y., Li, X., 2004, Data-driven approach for bridging the cognitive gap in image retrieval, 2004 IEEE International Conference (ICME'04), (3):2231-2234.
[45] Mougenot, C., Bouchard, C., Aoussat, A., 2007, A study of designers' cognitive activity in the design informational phase, ICED'07 International Conference on Engineering Design, Paris.


The Value of Design-led Innovation in Chinese SMEs S. Bolton, Centre for Competitive Creative Design, School of Applied Sciences, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, United Kingdom. [email protected]

Abstract This paper focuses on understanding the role and use of design-led innovation in Chinese SMEs. The insights were gained from a pilot study, based on an applied developmental research approach involving participatory workshops, quantitative and qualitative positioning activities, in-depth case studies and an individual pilot project undertaken with SMEs in the Pearl River Delta [PRD] over an 18-month period. The paper discusses the findings, highlighting key areas of uncertainty that SMEs experience when attempting to make the transition from OEM to OBM, and how the findings have contributed to the development of a new design-led innovation framework. Keywords: Uncertainty, Design-led Innovation, Transition, Creative Risk Management

1 INTRODUCTION Research shows that design has been established as a critical competitive tool and a means of economic value creation for manufacturing industries [1,2,3,4]. However, large numbers of manufacturers and service providers have as yet failed to explore the potential and opportunities that might be afforded by positioning design more centrally in their business and strategic development processes [5,6]. This paper focuses on understanding the role and use of design-led innovation in Chinese SMEs. The insights were gained from a pilot study, based on an applied developmental research approach involving participatory workshops, quantitative and qualitative positioning activities, in-depth case studies and an individual pilot project undertaken with SMEs in the Pearl River Delta [PRD] over an 18-month period. This process was undertaken in conjunction with Hong Kong Productivity Council SME training program activities. The PRD is a region with many SMEs who have achieved substantial financial success as original equipment manufacturers (OEMs). However, government organisations such as the Hong Kong Design Centre and the Hong Kong Productivity Council had realized that there was a need to encourage SMEs to shift from OEM-based business to developing and owning their products and brands [OBM]. This was driven by ever-decreasing margins and the realization that only one company can be the cheapest in the marketplace, what Kotler [7] refers to as "hypercompetition". This paper discusses the findings generated through the study, highlighting key areas of uncertainty that SMEs experience when attempting to make the transition from OEM to OBM. The pilot study findings have contributed to the development of a new design-led innovation framework tool that provides a usable, systematic support structure to help SMEs understand how to: [1] integrate design-led innovation into their business processes; and [2] manage the transition from OEM to OBM more effectively. It concludes by providing a case study example of the design-led innovation framework tool "Half Step



Innovation Process" [HSI] in action.

2 CHALLENGES AND CONTEXT OF THE PILOT STUDY

The key challenges facing many companies, in particular Chinese SMEs, relate to remaining competitive and increasing market share. Kotler [7] argues that we have moved to a state of "hypercompetition", associated with Michael Porter [8], which occurs when all companies focus on being the low-cost producer. Each company tries to improve its processes by adopting the best practices of its competitors; in effect, each company works harder and faster to be more efficient, and profit margins keep dropping. Hong Kong manufacturers [9] have become increasingly marginalized by this phenomenon, often competing in global markets unable to pass higher costs on to customers, as Mainland Chinese manufacturers and new and emerging low-cost producing countries such as Vietnam gain in production quality and establish business connections with international buyers. In order to remain competitive and sustain growth, it has been suggested that companies must produce products and services that exceed functional expectations and meet the emotional needs of users [10]. Given the current "hypercompetition" within the Pearl River Delta (PRD), it would be anticipated that manufacturers would be keen to embrace design and innovation to differentiate themselves against low-price-driven strategies and insulate themselves against local competition. Innovation is viewed as an important aid both to sustained success in business and to the exploitation of new ideas ahead of the competition [11]. Innovation activity and investment are clearly of growing importance to businesses, with greater numbers of companies listing innovation as one of their organisation's top three priorities [12]. More significant is the proportion of companies that indicate that innovation is their number one priority: this figure has more than doubled to 40% in the past 12 months [12]. For example, Apple's policy of continuous innovation saw its brand value increase by 24% [13].

The benefits and problems attributed to the adoption and use of design are not a new dilemma. Kotler and Rath [14] contended that "design is a potent strategic tool that companies can use to gain a sustainable competitive advantage yet most companies neglect design as a strategic tool". Design has clearly been identified as vital to innovation and to delivering enhanced financial performance [15]. Numerous studies and papers over the last two decades have shown how design can improve existing products by fulfilling user needs, addressing new market opportunities and combating competitors' products [16,4,18]. More recently, the UK DTI [15] re-affirmed that not enough businesses use design to connect new ideas to market opportunities. More specifically, the Cox Report [11] pinpoints "that many companies simply don't recognise opportunities or how to pursue them". These traits can be attributed to many SMEs within the PRD.

3 COMMUNICATING THE COMPETITIVE VALUE OF DESIGN

By 2005 the Hong Kong Design Centre (HKDC) and the Hong Kong Productivity Council had realized the need to encourage SMEs to shift from OEM-based business to developing and owning their products and brands (OBM). The HKDC's Reinventing with Design programs in 2006-07 focused on demonstrating how SMEs can profitably make this shift; two of the program themes concentrated specifically on the financial benefits of investing in design and on how companies can reduce the risk of developing and owning their products and brands [19]. During this period two important factors emerged as potential barriers to the adoption of design-led innovation in Chinese SMEs: [1] the perceived costs of undertaking design activities [Design Council, 2008]; and [2] Chinese aversion to risk. From undertaking and reviewing SME training program activities, run in conjunction with the Hong Kong Productivity Council in 2006, it became apparent that Chinese SMEs responded more positively to numbers first [return on investment] and quality of design second. From these learnings, the added value benefits associated with using design effectively were communicated to SMEs via a "numbers first strategy" at the 2006 Reinventing with Design event [19]. Table 1 provides a summary of the added value benefits associated with using design effectively.

Added value benefits associated with using design effectively (company responses):

Business performance:
- 97% of rapidly growing companies get ideas by understanding their customers [21].
- 83% of companies believed that design has contributed to an increase in market share [22].
- 44% of companies that see design as integral to business see an increase in competitiveness and turnover [23].
- 39% of companies that see design as integral to business are opening up new markets due to design [23].

Competitiveness:
- 80% of companies believed that design contributes to increased competitiveness [22].
- 75% of business comes from products/services launched in the last five years, in innovative companies [24].
- 58% of customers now buy on added value versus 29% who buy on price; availability is a key issue [21].
- 50% of businesses that were manufacturers reported that design had sharpened their competitive edge [23].

Return on investment:
- 6.3% is the increase in market share that design-led companies attribute to the use of design [23].
- 75% was the average ROI achieved on the most successful design projects, compared with an expected return of around 50% [22].
- 30% growth was achieved by 39 tracked FTSE 100 businesses that invested heavily in R&D; the group outperformed the FTSE 100 every year from 1997 to July 2003, while the FTSE declined by 15% [15].

Design effectiveness:
- 40% of companies that achieved rapid growth used design in the idea generation, research and R&D phases [22].

Table 1: Added value benefits associated with using design effectively

The "numbers first strategy" was supported by communicating, through case study examples, how rapidly growing companies: [1] use design more strategically within their business processes; and [2] systematically search out ideas by understanding their customers. In these design-led companies, where design is seen as integral to business activities, it was possible to demonstrate improvements in financial performance, defensible innovation, an improved flow of new products/services, and how they had become more competitive in their key markets [21].

Chart 1: Design Contribution to Business Performance, adapted from [20]

In practical terms it was still necessary to convey how and where the use of design contributes to improving business performance and competitiveness (Chart 1). Hong Kong, it could be argued, has had a dollar-driven culture for many decades. It was therefore important to build a clear financial case for investing in design and to demonstrate the specific areas in which successful design-led companies invest in design.


Return on investment (ROI) is often quoted as the key measure of innovation [12] and is a deciding factor in the decision whether or not to invest in design, although ROI is sometimes criticised as an over-simplistic measure on its own [25]. The Design Council has provided evidence that for every £1 spent solving a problem in product design, it costs £10 to tackle the same problem in development and £100 to rectify it after the product has been launched. More specifically, it has determined that design-led companies achieve on average a £2.25 return on every £1 spent on design [21].

4 UNDERSTANDING OBSTACLES TO CHANGE IN CHINESE SMEs

The objectives of the pilot study centred on gaining an insight into: [1] the key areas of uncertainty that SMEs experience when attempting to make the transition from OEM to OBM; and [2] the factors impacting on the effectiveness of design-led innovation. The methodology for the pilot study was based on an applied developmental research approach involving participatory workshops, quantitative and qualitative positioning activities, in-depth case studies and an individual pilot project. The pilot study sample comprised 16 SMEs operating in the fashion, electronics and jewellery sectors. The size of the companies ranged from 10 to 150 employees, with turnovers from HK$750k to HK$10m. The pilot study focused on understanding the effectiveness of design-led innovation at:
- Company level: strategic, operational and management
- Functional level: marketing, production, personnel and R&D
- Activity level: company positioning, product development, brand development and customer insight

Effectiveness of Design-led Innovation at a Company Level: At a company level, positioning and the generation and screening of ideas were the two main areas of concern impacting on the effectiveness of design-led innovation activities. The inability to shape design-led innovation activities through clearly defined brand values and characteristics was clearly impacting on operational activities, in particular idea generation and the screening of ideas (see Table 2). A high proportion of the sample felt that their businesses were operationally weak in delivering design-led innovation. The reasons for poor positioning (Table 2) started to become clear when exploring the effectiveness of design-led innovation at a functional level.

Effectiveness of Design-led Innovation at a Functional Level: Identifying and developing new product opportunities appeared to be the two main areas where more consideration was needed. Over two thirds of the SMEs rated themselves ineffective in the marketing area. From the pilot study findings, understanding user/consumer knowledge more effectively appears to be a significant factor impacting on design-led innovation activities (see Table 3).

Effectiveness of Design-led Innovation at an Activity Level: A reactive culture prevailed among the SME sample (75%), characterised by an over-emphasis on production (81%). A high proportion of the companies lacked outward-facing capabilities [customer insight, market segmentation], with an over-reliance on competitor activities (93%) as a stimulus and driver for product innovation. Many of the sample companies articulated that they lacked the confidence to identify and translate their core capabilities into brand values. This lack of confidence resulted in low levels of investment in brand development (62%).

Company activities and factors impacting on the effectiveness of design-led innovation:
- Strategic level: defining brand values and characteristics; delivering distinctive competitive advantage.
- Management level: competitor analysis; promotional planning; sales activities.
- Operational level: idea generation; communication; screening and evaluating ideas.

Table 2: Factors Impacting on the Effectiveness of Design-led Innovation at a Company Level

Company functions and factors impacting on the effectiveness of design-led innovation:
- Marketing and sales: record of new product/service development poor compared to competitors; slow conversion and bringing of new products to market; sales force not involved in idea generation and not submitting new ideas.
- Production: responding to customer requirements; adoption of new processes and materials.
- Personnel: motivating staff; leveraging staff experience to improve innovation performance.
- R&D: poor internal communication of R&D capabilities.

Table 3: Factors Impacting on Design-led Innovation at a Functional Level

Summary of Findings: In summarizing the key findings of the pilot study it became apparent that more consideration was needed of the following factors:
- maximising capabilities;
- connecting with customers and consumers;
- logos versus branding;
- fear of the quantum leap;
- worry of investing in intangibles.

When unpacking these emergent factors it was possible to identify a number of underlying reasons for the key areas of uncertainty that SMEs experience when attempting to make the transition from OEM to OBM.

Maximising capabilities: Many of the SMEs in the sample misunderstood, or had difficulty in communicating, their company strengths. This was reflected in a weakness in identifying and prioritising key internal capabilities from which to build brand experience. There was an over-emphasis on price-driven innovation, underpinned by an absence of market-based procedures for evaluating opportunities, and a constant struggle to identify which issues and parameters to compete on other than price. Insufficient time and resources were allocated to developing ideas to a point where appropriate creative risk assessment could take place.

Connecting with customers and consumers: Endemic in the SME sample was an inability to identify and connect with customers and consumers, due to a lack of market research capabilities. This was reflected in a lack of understanding of the role and use of user and market research in identifying and developing new opportunities, evidenced by an over-emphasis on "me-too" products lacking unique features and differentiation against competitors. There appeared to be a fear of investing in customer and market research activities, attributed to a lack of short-term payback and, more importantly, to poor experiences with local agencies delivering outcomes that the SMEs felt were too generic and of little use to their business activities.

Logos versus branding: There was a common misconception regarding branding amongst the SMEs: many of the sample perceived branding as a logo activity. Much of this attitude can be attributed to short-term views of the role of promotion and, again, to poor experiences with local agencies. Naming was problematic, as it required a shift in attitude from traditional family and trade-associated names to re-naming more appropriate to international markets.

Fear of the quantum leap: A key factor impacting on the transition from OEM to OBM was the perception that companies needed to go from OEM straight to OBM via a 'quantum leap'. Their anxiety was fuelled, beyond the costs involved, by their awareness of their lack of understanding of the role and use of user and market research in identifying and developing new opportunities. This perception was "freezing" attitudes to change. Greater recognition was needed of the fact that the change process takes time, driven by clearly defined transitional phases, in order to build confidence, capabilities and brand values.

Worry of investing in intangibles: A fundamental change is needed in attitude and culture towards investing in soft-innovation activities such as customer research. Soft-innovation activities need to be considered just as significant to long-term business success as purchasing new machinery, and investment in internal services has to be prioritised in order to facilitate the transition from OEM to OBM.

5 DESIGN-LED INNOVATION FRAMEWORK FOR CHINESE SMEs

The participatory workshops and the quantitative and qualitative positioning activities provided a solid foundation for the pilot study. From the in-depth case studies, a tendency emerged amongst SMEs who had attempted, or were considering, developing and launching their own brand products. The predisposition was to

incrementally develop their core capabilities to facilitate the development of their own branded products servicing existing markets, often piggybacking on existing clients' intellectual property and routes to market (see Figure 1). The emergent "overlapping innovation strategy" (see Figure 1) initially appears to offer easy entry into higher-margin business at minimal risk; its apparent attractiveness lies in its potential to negate the need to invest in user or market research and related infrastructure. However, the "overlapping innovation strategy" only masks the need for user and market research, and its weaknesses emerge at the transition point from ODM to OBM. The sharing and use of client intellectual property and knowledge of routes to market in OEM and ODM activities is often acceptable, and indeed advantageous, for achieving success and building strong long-term supplier relationships. On the other hand, when SMEs attempt to develop their own brand products driven by insight and market knowledge taken directly from key clients, this habitually leads to the fracturing of relationships, often resulting in the loss of core business. For many SMEs the fear of lost business frequently halts any attempt to move towards own-branded products.

Figure 1: The overlapping innovation strategy

Figure 2: Creating a "customer gap"

For many in the pilot sample there was a requirement to strike a balance between maintaining existing business and developing higher-margin activities through own-brand ownership. To achieve these goals the SMEs need to create a "customer gap" that enables them to target non-competing customers, ideally within the same industry, focusing on niche and/or specialist markets (see Figure 2). Coughlan and Prokopoff [26] suggest that designers create frameworks that can help simplify and unify design opportunities in order to conceive of possible futures. An easy-to-use framework that could facilitate the development of new products by building confidence to anticipate, predict and act upon new market opportunities was a prerequisite for the SME sample companies.


Half Step Innovation Process. The three phases are:
- Phase 1: OEM (Original Equipment Manufacturer); OEM capability requirements.
- Phase 2: OEM to ODM (Own Design Manufacturer); ODM capability requirements plus the OEM capabilities.
- Phase 3: ODM to OBM (Own Brand Management); OBM capability requirements plus the ODM capabilities.

Step 1: Positioning Internal Capabilities
- Phase 1: comparable best-practice manufacturing expertise of key competitors; knowledge and understanding of core materials and technologies of components; understanding and translating given product specifications; ability to produce high-quality components to price and time constraints; knowledge and understanding of regulatory requirements; knowledge of component price points.
- Phase 2: expanded manufacturing expertise to encompass all elements of the proposed product offer; introduction of new materials and processes; introduction of in-house design and R&D capabilities.
- Phase 3: ability to identify and apply differentiated materials and technologies; enhancement of in-house design capabilities to cope with brand and marketing activities.

Step 2: Defining Market Sector Requirements
- Knowledge of market size and value; knowledge of key players/competitors; knowledge of product price points.

Step 3: Researching Customer Profiles
- Phase 1: understanding of user groups; awareness of customer profiles.
- Phase 2: understanding of customer profiles; understanding of consumer attitudes and motivations.
- Phase 3: development of in-house marketing capabilities; development of distribution channels; customer service support; ability to identify and translate new and emerging customer trends.

Step 4: Developing Product and Brand Experience
- Phase 1: introduction of brand name; development of brand identity.
- Phase 2: development of a brand position.
- Phase 3: development of associated brand values and promises; development of product and service experience.

Step 5: Targeting New and Existing Customers
- Phase 1: retaining current customers by delivering effective products and services; heightening brand awareness through promotion of core services.
- Phase 2: introduction of competitor analysis skills; development of distinctive product/service offerings.
- Phase 3: development of differentiated products and service offerings; ability to identify and translate customer and user needs into new products/services.

Table 4: Factors and Characteristics Impacting on Design-led Innovation at an Activity Level
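The cumulative, staged logic of the framework (each phase re-runs the same five steps while adding capability requirements) can be summarised in a minimal sketch. The sketch below is our illustration only: the data structure, the function and the abbreviated capability entries are invented for this example and are not taken from the HSI tool itself.

```python
# Illustrative model of the Half Step Innovation framework's structure:
# three phases, each revisiting the same five steps and adding capabilities.
# Entries are abbreviated examples based on Table 4; this is a sketch only.

PHASES = ["OEM", "OEM-ODM", "ODM-OBM"]
STEPS = [
    "Positioning Internal Capabilities",
    "Defining Market Sector Requirements",
    "Researching Customer Profiles",
    "Developing Product and Brand Experience",
    "Targeting New and Existing Customers",
]

# Capability requirements per (phase, step); abbreviated examples.
REQUIREMENTS = {
    ("OEM", STEPS[0]): {"manufacturing expertise", "component price points"},
    ("OEM-ODM", STEPS[0]): {"in-house design", "new materials and processes"},
    ("ODM-OBM", STEPS[0]): {"differentiated technologies", "brand-ready design"},
}

def cumulative_requirements(phase, step):
    """Each phase adds to, rather than replaces, earlier capabilities."""
    needed = set()
    for p in PHASES[: PHASES.index(phase) + 1]:
        needed |= REQUIREMENTS.get((p, step), set())
    return needed

print(sorted(cumulative_requirements("ODM-OBM", STEPS[0])))
```

Modelling requirements per (phase, step) pair makes the "ODM capability requirements plus the OEM capabilities" accumulation of Table 4 explicit.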


To bring about the necessary change, a proposed SME design-led innovation framework would therefore need to address and support:
- the deficiencies in identifying and prioritising key internal capabilities on which to build brand experience;
- the lack of understanding of the role and use of user and market research in identifying and developing new opportunities;
- the absence of confidence in developing new product ideas;
- the fear of the quantum leap: the transition from original equipment manufacturer to own brand management.

The "Half Step Innovation Process" [HSI] was developed in response to the needs and challenges faced by the SME sample companies. The HSI focuses on connecting company capabilities with new customer opportunities. It places emphasis on an external focus, encouraging the SME to understand the role and use of user and market research in identifying and developing new opportunities. In practical and operational terms, the HSI process concentrates on building the internal capabilities needed to enable the transition from original equipment manufacturing to own brand management. It provides a practical framework for helping SMEs to identify their core capabilities, build a brand position, develop differentiated product opportunities and identify new market opportunities that do not cannibalize existing business (see Figure 2). The HSI introduces a practical support structure to help SMEs understand how to: (1) integrate design management into their business processes; and (2) manage the transition from OEM to OBM more effectively. The transition is built around a three-phase procedure underpinned by a repeating five-step process that progressively introduces new capability requirements at each phase. Integrated design management is introduced through this progressive and repeatable five-step process (see Table 5).

6 OMNI CASE STUDY: HALF STEP INNOVATION PROCESS

Omni had taken part in both the 2006 and 2007 Hong Kong Design Centre Reinventing with Design programs and had seen the preliminary findings presented at the 2007 event. They agreed to pilot the HSI Process on the basis of these findings. Established in the early 1990s, Omni had over 16 years' experience of manufacturing and marketing in the sports and fitness industries through OEM/ODM programs for major retail chains as well as for major brands in the USA and Europe. The company employs over 150 people, with facilities located in China and the United States, including its own dedicated 80,000 square foot manufacturing and product development facilities. Omni wanted to develop its own brand to target the USA and European markets. It had established Omni Sports Trend and Technology (OSTT) in order to focus on marketing Omni premium-line products, but realised that it was struggling to develop a differentiated product and brand position (see Figure 3).

Figure 3: Omni's Old Brand Product

The pilot project started in September 2007. The process commenced with a detailed HSI review of Omni's design-led innovation capabilities. The review identified several factors impacting on the effectiveness of their design-led innovation activities:
- deficiencies in defining brand values and characteristics;
- lack of experience of promotional planning;
- poor track record of new product/service development compared to competitors;
- weak response rate to customer requirements;
- lack of confidence in developing new product ideas;
- undifferentiated products/services.

Omni introduced the HSI Process following the three-phase procedure underpinned by the five-step process, progressively developing and introducing new capabilities to address the factors impacting on its design-led innovation activities. Investing in and developing the brand identity and brand experience appeared to cause the greatest problems; this reflected the findings from the pilot study, with initial reluctance attributed to previous short-term views of the role of promotion and poor experiences with local agencies. However, by May 2008 Omni had launched its new brand image and differentiated product range at the Denver (USA) Health and Fitness Business Trade Show (see Figure 4).

Figure 4: Omni’s New Brand Product


Omni Case Study: Half Step Innovation Process. Capability development by step:
- Step 1 (Positioning Internal Capabilities): enhanced manufacturing capabilities; high-quality warehousing; logistics; procurement; in-house industrial design.
- Step 2 (Defining Market Sector Requirements): premium home gym market in the USA; USA distribution channels; European professional gym products market; European distribution channels.
- Step 3 (Researching Customer Profiles): USA premium home gym retailers; European professional gym products distributors.
- Step 4 (Developing Product & Brand Experience): development of new brand image; brand message and positioning; development of a differentiated product range.
- Step 5 (Targeting New & Existing Customers): USA premium home gym retailers; European professional gym products distributors.

Table 5: SME Design-led Innovation Framework: Half Step Innovation Process

The HSI process helped Omni to build the internal capabilities needed to make the successful transition from own design management activities to own brand management. It provided a practical framework for helping them to identify their core capabilities, build a clear brand position and develop a differentiated product range. Most importantly, it enabled Omni to identify new market opportunities that did not impact on existing business activities.

7 CONCLUSIONS

The findings from the study highlighted the following issues:
- Many SMEs misunderstood, or had difficulty in communicating, their company strengths. This was reflected in a weakness in identifying and prioritising key internal capabilities on which to build brand experience.
- Endemic in the SMEs was an inability to identify and connect with customers and consumers, due to a lack of market research capabilities. This was reflected in a lack of understanding of the role and use of user and market research in identifying and developing new opportunities.
- There was a common misconception regarding branding amongst the SMEs: many of the sample perceived branding as a logo activity. Much of this attitude can be attributed to short-term views of the role of promotion and to poor experiences with local agencies.
- Greater recognition was needed of the fact that the change process takes time, driven by clearly defined transitional phases, in order to build confidence, capabilities and brand values.

To bring about the necessary change the "Half Step Innovation Process" was developed, providing a practical framework for helping SMEs to identify their core capabilities, build a brand position, develop differentiated product opportunities and identify new market opportunities that do not cannibalize existing business (see Figure 2). The HSI process demonstrated its ability to help Omni, a Chinese SME, to build the internal capabilities needed to make the successful transition from own design management activities to own brand management.

REFERENCES
[1] Borja de Mozota, B. [2003], Design Management: Using Design to Build Brand Value and Corporate Innovation, Allworth Press, New York.
[2] Heskett, J. [2002], Toothpicks and Logos: Design in Everyday Life, Oxford University Press, New York.
[3] Hertenstein, J., Platt, M.B. [2002], Developing strategic design culture, Design Management Journal, 8(2):10-19.
[4] Walsh, V., Roy, R., Bruce, M., Potter, S. [1992], Winning by Design: Technology, Product Design and International Competitiveness, Blackwell Publishers, Oxford.
[5] Topalian, A. [2006], Envisioning, visualisation and dynamic integration in design, presentation to the Design Management Symposium, KISD, Cologne. Available at: http://kisd.de/fileadmin/kisd/dm_symposium/topalian/topalian_presentation.pdf
[6] Delaney, M. [2005], About: International Markets, Design Council paper. Available at: http://www.designcouncil.org.uk/Documents/About%20design/Business%20essentials/international%20markets%20%20%20Mark%20Delaney.pdf
[7] Kotler's Top Five Marketing Tips [2005], http://brandautopsy.typepad.com/brandautopsy/2005/06/kotlers_top_fiv.html
[8] Porter, M. [1996], What is strategy?, Harvard Business Review.
[9] Farhoomand, A. [2008], From Creative Industries to Creative Economy: The Role of Education, Hong Kong Design Centre.
[10] Design Council [2002], National Survey of Firms.
[11] Cox, G. [2005], The Cox Review of Creativity in Business, HM Treasury.
[12] Boston Consulting Group [2006], 2006 Innovation Survey Report.
[13] Business Week & Interbrand [2004], Best Global Brands.
[14] Kotler, P., Rath, G.A. [1984], Design: a powerful but neglected strategic tool, Journal of Business Strategy.
[15] DTI [2003], Innovation Report, DTI Economics Paper No. 7; DTI [2003], R&D Scoreboard.
[16] Cooper, R.G., Kleinschmidt, E.J. [1987], What makes a new product winner: success factors at the project level, R&D Management, 17(3).
[17] Herstatt, C., von Hippel, E. [1992], From experience: developing new product concepts via the lead user method: a case study in a 'low-tech' field, Journal of Product Innovation Management, 9.
[18] Ulrich, K., Eppinger, S. [1995], Product Design and Development, Chapter 3: Identifying Customer Needs, McGraw-Hill, Toronto.
[19] Bolton, S. [2006], From Zero to Hero: growing businesses through effective product and brand development, Reinventing with Design Conference, Hong Kong.
[20] Design Council [2005], The Business of Design, Design Industry Research.
[21] Design Council [2004], National Survey of Firms.
[22] Design Council [2003], National Survey of Firms.
[23] Design Council [2008], Design Returns: A Review of National Design Strategy 2004-08.
[24] PricewaterhouseCoopers [2003], Innovation Survey.
[25] NESTA [2008], www.nesta.org.uk/assets/Uploads/pdf/PolicyBriefing/measuring-innovation-policy-briefing
[26] Coughlan, P., Prokopoff, I. [2004], Managing change, by design, in Boland, R., Collopy, F. (eds), Managing as Designing, Stanford Business Books.


We are Designers Because We Can

A. Adel (1), R. Djeridi (2)
(1) Magellan Research Center, I-Co-D, University Jean-Moulin Lyon 3, IAE Business and Management School, 6 cours Albert Thomas, B.P. 8242, 69355 Lyon Cedex 08, France
(2) Laboratoire des Sciences de l'Information et des Systèmes (LSIS), 2 cours des Arts et Métiers, 13725 Aix-en-Provence, France
[email protected]; [email protected]

Abstract Due to increasing system complexity, architecture design has become an important issue. It has gained interest, and its importance is framed in several domains: as a way to understand complex systems, to design them, to manage their manufacturing process and to provide long-term rationality. The purpose of this paper is, first, to survey the existing definition approaches to architecture. Second, we propose a model for architecture design which articulates the potential linkage between two principal concepts: synthesis and abstraction. Our proposed model focuses on the concept of abstraction and enables an effective top-down design approach. It also helps designers respond to the issues that characterize architecture design. Keywords: System architecture, Concept, Model, Synthesis, Abstraction.

1 INTRODUCTION Research on system architecture design approaches is still in an early phase, and several architecture design approaches have been introduced in recent years [1]. However, a consensus on the appropriate system architecture design process has not yet been established, and current system architecture design approaches may have to cope with several problems. We can also note a glaring lack of modelling and methodologies actually used in practice [2]. System architecture design is generally considered to play a fundamental role in coping with the inherent difficulties of developing large-scale and complex systems [1] [3]. It includes the early design decisions and embodies the overall structure that impacts both the quality and the cost of the whole system. We maintain that the existing architecture design approaches have several difficulties in deriving the right architectural abstractions. In the first section, we give a short background on architecture, presenting some existing definitions and our own definition. In the second section, we propose a model for system architecture design; the principal motivation for proposing this model is to help designers manage complexity in designing systems. In the last section, we conclude by outlining future work to operationalize our proposed model. 2 ABOUT ARCHITECTURE In this section, we focus mainly on the meaning of architecture by analysing some prevailing definitions in Section 2.1. In Section 2.2 we provide our own definition of architecture, based on the existing definitions, which considers architecture as a concept.

2.1 Definitions Architecture is important in several fields, such as building engineering, system engineering and software engineering. Architecture design is a central stage of any system creation or design process, so we expect certain common points between these fields; in this section we will sometimes refer in particular to software engineering. The term architecture is not new and has been used for centuries to denote the physical structure of an artefact [4]. In tandem with the increasing popularity of architecture design, many definitions of architecture have been introduced over the last two decades, though a consensus on a standard definition has still not been established. The definition approaches differ and often interact and share many common points. In our view, the multitude and coexistence of various definition approaches indicates a problem of comprehension and positioning with regard to architecture. In this section, we are interested first of all in the approaches to defining architecture in design; let us explain this by considering the development of the definitions over the last two decades. The set of existing definitions is large, and many other definitions have been collected in various publications [1], [3], [5]; we provide only those we consider representative. We can group the definition approaches to architecture into three principal categories: a first category which lays stress mainly on the internal composition of a system; a second which extends the previous approach by including the relationship with the environment and evolution over time; and, lastly, a third category which is interested in the finalities of architecture, regarding it as a sub-process of the design process. Architecture: an organized structure of components



In this definition approach, the stress is laid on the internal composition of architecture. Maier and Rechtin define architecture as the structure, in terms of components, connections and constraints, of a product, a process or a system [6]. Three basic notions stand out in this definition: components, connections and constraints. These notions characterize the internal structure of architecture.
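A minimal sketch of this components-connections-constraints view is given below. It is our own illustration of the definition, not an implementation from the cited work; the class and attribute names are invented.

```python
# Minimal sketch of the components-connections-constraints view of
# architecture. Class and attribute names are our own illustration.

from dataclasses import dataclass, field

@dataclass
class Component:
    name: str

@dataclass
class Connection:
    source: Component
    target: Component
    constraint: str = ""        # e.g. a cost, size or interface rule

@dataclass
class Architecture:
    components: list = field(default_factory=list)
    connections: list = field(default_factory=list)

    def connect(self, a, b, constraint=""):
        """Add a connection (and its constraint) between two components."""
        self.connections.append(Connection(a, b, constraint))

c1, c2 = Component("C1"), Component("C2")
arch = Architecture(components=[c1, c2])
arch.connect(c1, c2, constraint="interface must satisfy spec X")
print(len(arch.connections))    # -> 1
```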

Figure 1: Architecture: an organized structure of components (components C1, C2, ..., Cn linked by connections and constraints)

According to this approach, architecture contains primarily the composition of the system into components; it is an abstract description of the entities of a system and their interactions [1]. This definition approach relates to three levels of system architecture. First, a level that consists in determining the components of the system architecture to be conceived; components should not be selected arbitrarily, but should contribute to the satisfaction of the overall requirements and support the evolution of the architecture over time. A second level focuses on the interactions between these components. Lastly, a third level relates to the analysis of the constraints resulting from these interactions. In this approach, the organization of components and their interactions is primarily a question of optimization (performance, size, cost, etc.).

Architecture: an organized structure of components in relation with its environment and in evolution over time

In this definition approach, architecture is defined not only as an abstract description of system components and their interactions, but also in its relationship with the environment. Thus, an important notion is added: this structure of components evolves in and through its environment. Evans emphasizes this by defining architecture as the conceptualization, description and design of a system, its components, their interfaces, the interactions with the various internal and external entities, and their evolution over time [7]. This definition approach emphasizes the dynamic notion of architecture in addition to the static one. As regards software architecture, Lawson defines architecture as follows: "... we define architecture as a system design model that captures system organization and behaviour in terms of components, interactions, and static and dynamic configurations" [8].

¹ The term component is used here as an abstraction of varying components; it may refer to abstract concepts, subsystems, physical components, etc.

According to this definition, we can derive the following basic notions associated with system architecture: composition, interactions and interfaces, performance and evolution. These key notions show the importance of architecture in the design process and especially in the future evolution of the system over its life cycle. This evolution is guided, for example, by the maturity of technology.

Architecture: a sub-process of the system design process

This definition approach is quite different from the two others. Indeed, instead of attempting to define architecture by its entities, it considers architecture as a problem solving process in which the problem represents the requirement specification and the solution the structure of the architecture in terms of entities and interactions. Ulrich defines architecture as: "the scheme by which the function of the product is allocated to physical components", or also "the mapping from functional elements to physical components; the specification of the interfaces among interacting physical components" [9]. In other words, architecture is the process by which the designer starts the concretization of the solution on the basis of the system's functional definition. According to this approach, architecture is regarded as a process of abstraction of the physical solution using representations, models and syntaxes. It is the scheme by which the structure of the system is determined, and the concrete physical solution is simply an instantiation of this abstract representation of the architecture.

2.2 Architecture as a concept

From all these definitions, it is clear that architecture can be defined basically as an arrangement of components or entities and the relationships between them. However, architecture is sometimes defined as the scheme by which this arrangement of components is obtained, and thus confused with architecting. In our view, the multitude of definitions of architecture is due essentially to the multitude of perspectives. Like Van Wie, we conclude that architecture is an ill-defined design concept, and that there is a need for a definition that captures all those perspectives which are important in helping the designer [2]. Designers face continual challenges in dealing with the complexity of systems. Consequently, it is often necessary to provide them with aids. These aids may be provided in many forms: computer-based aid systems, co-worker networks, pertinent approaches and methodologies and, essentially, a suitable general framework to guide them in their problem solving tasks. Architecture is one of the most important concepts in managing complexity in engineering design. It must be given a consistent and comprehensive definition encompassing the various perspectives and viewpoints. For this reason we provide our own definition:

Architecture is a concept forming a set of abstractions, perspectives and viewpoints of a system structure

We think that this definition is general and also covers the existing definitions of architecture. It synthesizes disparate existing definitions from many domains into a new framework for understanding architecture. We consider architecture as a concept that gives an abstraction of the corresponding domain knowledge. It represents a high-level structure of a given system including, in addition to the component structure, its behaviour and the scheme by which it is obtained.


3 HOW ABSTRACTION WORKS IN ARCHITECTURE DESIGN

Architecture design can be considered as a problem solving process in which the problem represents the requirement specification and the solution represents the architecture. A well-known concept in engineering for solving problems is synthesis. Synthesis is often considered as the process by which a problem specification is transformed into a solution by decomposing the problem into loosely coupled sub-problems that are solved independently and integrated into an overall solution.

Figure 2: Synthesis principle of problem solving (the requirement specification is formulated into a design problem; sub-problems are considered, the solution domain knowledge is searched, solution abstractions are extracted and may lead to the discovery of new sub-problems; abstractions are specified into a solution specification, with satisfaction tests on both the synthesis and decomposition sides)

During the synthesis process, designers need to consider the design space that contains the knowledge used to develop the design solution. Resolving a design problem is increasingly cognitive in nature. Often, the total amount of conceptual and factual knowledge that would ideally have to be commanded in order to deliver the desired (ideal) solution exceeds the average worker's mental capacity. In product design, Van Wie notes that the stereotypical engineer is famous for solving problems in a logical and rigorous style, but these modelling and analysis capabilities are not generally used in architecture design [2]. Recent cognitive research indicates that human decision making and problem solving is mostly non-intuitive and based on associations and abstract mental models [10]. Translating these generalized and abstract representations into a concrete solution is complex. To manage this inherent complexity in problem solving, synthesis can be performed at higher abstraction levels in the design process. A higher level of abstraction reduces the difficulty of dealing with both problem and solution, or function and form. This approach reduces the complexity of designing larger systems in particular. In addition, higher-level abstractions are closer to a designer's way of thinking and thus increase understandability, which in turn makes it easier to consider various solution alternatives. Using mainly the synthesis and abstraction concepts, we propose the following model for architecture design. The requirement specification represents the requirements of the stakeholders who are interested in the system development project. This set of requirements is used to formulate the technical problem. This formulation can be considered as the process by which a functional description of the system is obtained. The functional description can be represented by functional diagrams or a function structure [11], [12]. It has a hierarchical aspect, representing the decomposition of principal functions into sub-functions, which can be decomposed further into lower-level sub-functions [12]. It can also be created at different levels of abstraction [13].


Figure 3: Model of architecture design (sub-problems are considered, solved and composed into the architecture description)

The architecture design process begins by considering a sub-problem of the whole identified problem, represented by the functional description. For a given sub-problem, designers search the solution domain knowledge. Then, a solution abstraction must be extracted from this solution domain knowledge; this step is also called concept generation. When abstracting the solution, we may discover new sub-problems, which must be integrated into the whole design problem to be solved later. Finally, the abstract solution must be specified; in other words, this step consists in allocating components to the sub-function considered. The specified solution must be tested against the sub-problem formulation. These steps must be conducted for all sub-problems, and the specified solutions obtained must be integrated to compose, or build, the whole architecture. In this way, the architecture generation process consists of a series of divergent and convergent steps, completed at different levels of solution abstraction [14]. The architecture resulting from integration must be tested against the initial requirement specification and can be modified to improve satisfaction by refinement using optimisation techniques. This process of architecture design is inherently iterative, and therefore feedback loops are not shown explicitly in the model although they certainly exist both within and between steps. This meta-model can help designers in practice and improve their ability to respond to the issues that characterize architecture design. It permits an effective top-down design approach to generate system architecture; it clarifies the process of architecture design in a practical way and tries to eliminate (or at least minimize) the need for design recursion. A minimal sketch of this loop is given below.
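As an illustration only, the loop of this model can be sketched in Java as follows; all type and method names (SubProblem, SolutionDomainKnowledge, extract, etc.) are hypothetical choices of ours, not part of the proposed model's notation:

  import java.util.ArrayDeque;
  import java.util.ArrayList;
  import java.util.Collection;
  import java.util.Deque;
  import java.util.List;

  // Hypothetical sketch of the architecture design loop of Figure 3.
  interface SubProblem { boolean satisfiedBy(SolutionAbstraction s); }
  interface SolutionAbstraction { Collection<SubProblem> discoveredSubProblems(); }
  interface SolutionDomainKnowledge { SolutionAbstraction extract(SubProblem sp); } // concept generation

  final class ArchitectureDesignLoop {
      private final SolutionDomainKnowledge domain;

      ArchitectureDesignLoop(SolutionDomainKnowledge domain) { this.domain = domain; }

      // Consider each sub-problem, search the domain knowledge and extract an
      // abstraction, discover new sub-problems, then specify and test. Accepted
      // solutions are later composed into the architecture description.
      List<SolutionAbstraction> design(Collection<SubProblem> functionalDescription) {
          Deque<SubProblem> open = new ArrayDeque<>(functionalDescription);
          List<SolutionAbstraction> accepted = new ArrayList<>();
          while (!open.isEmpty()) {
              SubProblem sp = open.pop();                    // consider a sub-problem
              SolutionAbstraction abs = domain.extract(sp);  // search + extract
              open.addAll(abs.discoveredSubProblems());      // newly discovered sub-problems
              if (sp.satisfiedBy(abs)) {
                  accepted.add(abs);                         // satisfaction test passed: specify
              }
              // otherwise the sub-problem is reformulated by the designer (feedback loop)
          }
          return accepted;                                   // to be composed into the architecture
      }
  }

The sketch makes the iterative, divergent/convergent character of the model explicit: the open queue grows as abstraction discovers new sub-problems and shrinks as solutions pass their satisfaction tests.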

4 ARCHITECTURE DESIGN USING MORPHOLOGICAL ANALYSIS METHOD

4.1 Principle

Architecture, considered as "the mapping from functional elements to physical components" [9], is obtained by an arrangement of physical or organic solutions. In the case of complex systems, the conventional approach would be to break the system down into a set of sub-functions. This process is called functional decomposition, or analysis (Figure 4). At a given level of abstraction, we can allocate to each sub-function a weight that indicates the contribution of the considered sub-function to the fulfilment of the principal function.

Figure 4: Architecture as the mapping from functional elements to physical components (a complex system is broken down by functional decomposition (analysis) into functions, to which components or entities are allocated and integrated (synthesis) into the system architecture)

Architecture is obtained by allocating physical solutions to functions. Clearly, many alternative solutions are possible. To explore all these possibilities, we can use the morphological analysis method. Morphological analysis was developed by Fritz Zwicky for exploring all the alternative solutions of a multi-dimensional complex problem. Figure 5 shows the principle of morphological analysis used to generate system architectures. As shown by this figure, an architecture is obtained by combining possible solutions of sub-functions. Depending on the outcome of the solution generation phase and the level of functional decomposition, the use of the morphological analysis method can generate a huge number of possibilities.

4.2 Quantification of architectures

It is impossible to study all the solutions obtained using the morphological analysis method, so we must somehow be able to reduce the number of alternative architectures generated. However, considering only constraints and preferences is not sufficient to help the designer select appropriate architectures; this reduction step may reflect true impossibilities as well as combinations that merely seem unrealistic. With constraints and preferences we can reduce the initial morphological field by up to 90 percent [15]. This reduction still leaves us with a considerable number of alternative architectures, called candidate architectures. So the question is: how do we reduce the number of these candidate architectures to keep only the viable and interesting ones? Clearly, quantifying architectures can help to reduce the morphological field significantly (Figure 6). As shown by Figure 4, the quantification of an architecture can be given by Equation (1):

Q = ∑i Vi × Pji    (1)

where Vi is the weight of sub-function Fi (i ∈ {1, ..., n}) and Pji is the weight of the solution Sji; this latter weight gives the degree of satisfaction of the considered sub-function by the solution Sji. In the case of the example given in Figure 7, the quantification of the considered architecture is given by Equation (2):

Q = V1 × P11 + V2 × P22 + V3 × P43 + V4 × P24    (2)

A hypothetical sketch of this enumeration and scoring is given below.
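For illustration, under our own naming (nothing here is from the paper), the morphological enumeration and the quantification of Equation (1) could be sketched in Java as:

  import java.util.Arrays;

  // Illustrative sketch: enumerate morphological combinations and rank them
  // by the quantification Q = sum_i Vi * Pji of Equation (1).
  final class MorphologicalQuantification {

      // weights[i] = Vi; scores[i][j] = Pji, satisfaction of sub-function i by its j-th solution.
      static double quantify(double[] weights, double[][] scores, int[] choice) {
          double q = 0.0;
          for (int i = 0; i < weights.length; i++) {
              q += weights[i] * scores[i][choice[i]];   // Vi * Pji
          }
          return q;
      }

      // Enumerate every combination (the morphological field) and return the best one.
      static int[] bestArchitecture(double[] weights, double[][] scores) {
          int n = weights.length;
          int[] choice = new int[n];
          int[] best = null;
          double bestQ = Double.NEGATIVE_INFINITY;
          while (true) {
              double q = quantify(weights, scores, choice);
              if (q > bestQ) { bestQ = q; best = choice.clone(); }
              // advance the "odometer" over solution indices
              int i = 0;
              while (i < n && ++choice[i] == scores[i].length) { choice[i++] = 0; }
              if (i == n) break;   // all combinations visited
          }
          return best;
      }

      public static void main(String[] args) {
          double[] v = {0.4, 0.3, 0.2, 0.1};                              // sub-function weights Vi
          double[][] p = {{0.9, 0.5}, {0.4, 0.8}, {0.6, 0.7, 0.3}, {0.2, 0.9}};
          System.out.println(Arrays.toString(bestArchitecture(v, p)));    // indices of chosen solutions
      }
  }

In practice the constraint- and preference-based reduction of Figure 6 would prune infeasible combinations before any such scoring pass.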

5 SUMMARY

Architecture design includes the early design decisions and embodies the overall structure that impacts the quality and cost of the whole system. Throughout this paper, special emphasis is placed on the practical difficulty of understanding the architecture concept, given the multitude of definition approaches. We have focused in this article on the development of a model of architecture design based on the two principal concepts of synthesis and abstraction. The proposed model and quantification method guide designers' reasoning, but they must be further operationalized by tools in order to be easily used to improve the practice of generating system architectures.


Figure 5: Principle of architecture generation using morphological analysis. Each technical function is listed against its possible solutions, and an architecture is a combination of one solution per function:

  F1:    S1F1, S2F1, ..., Sm1F1
  F2:    S1F2, S2F2, ..., Sm2F2
  ...
  Fn-1:  S1Fn-1, S2Fn-1, ..., Smn-1Fn-1
  Fn:    S1Fn, S2Fn, ..., SmnFn

where Fi is technical function no. i, SjFi is possible solution no. j of function Fi, n is the number of technical functions, and mi is the number of possible solutions of technical function Fi.

Figure 6: Steps of morphological field reduction (morphological analysis yields the morphological field of alternative architectures; reducing the field using constraints and preferences yields the candidate architectures; quantification finally yields a reduced number of architectures to examine and work with)

Figure 7: Principle of architecture quantification

6 REFERENCES
[1] Crawley, E., et al., 2004, The influence of architecture in engineering systems, Engineering Systems Monograph, Engineering Systems Division (ESD) Architecture Committee, MIT, March 29-31.
[2] Van Wie, M.J., 2002, Designing product architecture: a systematic method, PhD thesis in mechanical engineering, University of Texas at Austin, December.
[3] Ulrich, K.T., Eppinger, S.D., 2000, Product Design and Development, 2nd ed., McGraw-Hill, New York.
[4] Merriam-Webster on-line Dictionary, http://www.mw.com/cgi-bin/dictionary, 2000.
[5] Ullman, D., 2003, The Mechanical Design Process, 3rd edition, McGraw-Hill, New York.
[6] Maier, M.W., Rechtin, E., 2000, The Art of Systems Architecting, CRC Press.
[7] Arciszewski, T., 2004, Design and Inventive Engineering.
[8] Lawson, H.F., Kirova, V., Rossak, W., 1995, A refinement of the ECBS architecture constituent, Systems Engineering of Computer Based Systems, pp. 95-102.
[9] Ulrich, K.T., 1995, The role of product architecture in the manufacturing firm, Research Policy, No. 24.
[10] Bechara, A., Damasio, H., Tranel, D., Anderson, S.W., 1998, Dissociation of working memory from decision making within the human prefrontal cortex, The Journal of Neuroscience, 18(1):428-437.
[11] Hubka, V., Eder, W.E., 1988, Theory of Technical Systems, Springer-Verlag, New York.
[12] Pahl, G., Beitz, W., 1996, Engineering Design - A Systematic Approach, Springer, London.
[13] Fowler, T., 1990, Value Analysis in Design, Van Nostrand Reinhold, New York.
[14] Salonen, M., Perttula, M., 2005, Utilization of concept selection methods - a survey of Finnish industry, Proceedings of IDETC/CIE, Long Beach, California, USA.
[15] Ritchey, T., 2002, Modeling complex socio-technical systems using morphological analysis, adapted from an address to the Swedish Parliamentary IT Commission, Stockholm, December.


Bridging the Gap between Design and Engineering in Packaging Development R. ten Klooster, D. Lutters Laboratory for Design, Production and Management, Faculty of Engineering Technology University of Twente, Enschede, The Netherlands [email protected], [email protected]

Abstract
Packaging designers often come up with splendid design proposals without being aware of technical, marketing or economic restrictions. This leads to infeasible proposals, and so many packaging design projects fail. To bridge the gap between design and marketing, a tool has been developed that shows several designs to an extensive community. Designers now have the possibility to get feedback on the design proposals from the target market. In a case study (redesigning a packaging), the tool clearly showed its advantage. This paper discusses the tool, its usage and its contribution to a more effective and efficient development cycle in packaging development.
Keywords: Packaging, User involvement, Design, Management

1 INTRODUCTION

A strong characterization of packaging design is the number of aspects that have to be taken into account during the design process [1]. This asks for adequate management of the development cycle, and especially of the design and engineering decisions taken. To depict the decision process, a decision model can be used. This model, which closely resembles models used in the automotive industry, aims at interrelating the different stakeholders, methods and tools (see Figure 1). In the model, marketers define the needs of the consumer (or at least try to) and come up with a briefing for designers. Product designers and engineers have to come up with design proposals, including the design of the packaging, and process engineers have to translate all this into a properly running production process. It is interesting to note the intermediary role of the engineers in the model, interrelating the different stakeholders. With the input of human resources and the purchasing of the packaging materials and tools, production can start. Problems can be overcome by using tools like Kaizen and Six Sigma. If necessary, feedback can be given to process engineering or to product development. Product design or development in the field of packaging design is generally managed by designers and engineers in cooperation. In this, many different views on the packaging development cycle are represented. Many designers focus on the appearance and styling of the packaging without having any knowledge of, or insight into, technical or economic feasibility. The reason for this is that, often, the packaging sells the product, especially in the case of fast moving consumer goods. Other specialists in the cycle may, on the other hand, focus on specialisms like material, ergonomics or filling line behaviour.


Furthermore, the development process is complicated by the fact that, in packaging development, there is a clear lack of proper education. Many designers have no idea about the hundreds of kinds of paper and board, the use of laminates, the special paper qualities needed for wet-glue labelling, the detail design aspects of glass bottles, the microbiological problems related to food, the permeability of plastics, etc.

Figure 1: The decision model

This means that communication with specialists is extremely important and that there is an evident role for someone who can speak the language of all parties involved. The engineer has to translate all this into a solution that runs well on a packaging line, mostly with the expected overall equipment efficiency. If the communication process is not optimal, this can lead to frustration and to less effective and efficient development processes. Nevertheless, it is expected that a development stage in a well-managed design process leads to a design proposal.

A very difficult question that has to be answered at that moment is whether or not the proposed design is a successful representation of what the marketers had in mind. The immediate subsequent question is whether the design will be accepted by the market. Next to their own intuition, marketers use several tools to answer these questions. These tools encompass market research instruments like in-depth interviews, panel interviews with a selected group of customers, discussions with in-house panels, the use of test markets, etc. The problem with many of these tools is that customers behave differently from what they say they would do. It appears that the uncertainty of the outcome of research like this is very high. Designers and engineers also have many tools at their disposal to reduce uncertainties in the realization of product-packaging combinations. Examples are finite element analysis, (rapid) prototyping techniques, renderings, animations, and so on. In many cases, the marketing tools and engineering tools are strictly dissociated. Clear advantages can be gained if the gap between marketing and engineering can be bridged in an effective manner. Therefore, a tool is being developed to help designers and engineers get better feedback about design proposals.

2 PACKAGING IN PRODUCT DEVELOPMENT

Packaging is produced in tremendous amounts. Every day, a West European citizen opens on average 7 packagings. This means that more than 200 million packagings are opened every day in the EU countries. This also means that there is a high potential for economies of scale in this field. On the other hand, equipment is expensive, and return-on-investment calculations are based on payback periods of 8 to 10 years. Additionally, it is important to note that the cost ratio between the equipment and the processed material is extremely high. Most of the processed materials are rather cheap (paper and board, glass, plastics with 80% PE and PP, metals like steel and aluminium), resulting in costs of cents per packaging.

Packaging plays an important role in the protection of goods, i.e. food and other fast moving consumer goods, pharmaceuticals, durables, industrial goods and dangerous goods. Looking at the economics of packaging, less than 2% of the total value of the products is spent on packaging. Consequently, for many companies, packaging is the closing entry of the development process, despite the huge importance it has in the life cycle of the product/packaging combination. Moreover, the packaging chain is often longer and more complex than the product chain. Therefore, it can be seen as a special instrument in decision processes, referred to as the packaging development aspect chain (see Figure 2).

2.1 Packaging design

Packaging design is often mistaken for a graphical design process. This is not surprising, given the fact that for consumer products the packaging has an unmistakable influence on buying behaviour. Of all packaging material, at least two thirds is used for the packaging of food. Most food is sold in self-service shops; this means that the packaging has to be the seller of the product. However, graphical design is only one aspect of packaging development. Changing the graphical design has negligible influence on the packaging chain, because the equipment does not have to be changed and no investments have to be made. In this way a packaging design can be updated regularly and can stand for decades. Many people think that packaging design is only of importance in the fast moving consumer market. This is absolutely not true: it plays a similar role in the market of durables and industrial goods, as well as in the market of pharmaceuticals. A number of illustrative examples include:

• An Apple computer is packed in a printed box, while a Dell or Compaq computer is packed in an undecorated brown box.

• Gerkens Cacao uses white paper bags for 25 kg of cacao to show how clean their packaging process is and to convey to their customers that the risk of bacterial growth of salmonella is reduced by their way of working.

Figure 2: Simplified indication of the packaging development aspect chain [2]

• Pharmaceutical company Merck Sharp & Dohme tried to change their packaging design, a white box with just a brand logo, into a design with coloured flowers on the packaging. Based on customer response, they swiftly reverted to the old design. This shows that pharmaceutical companies, too, are looking for ways to distinguish themselves from competitors by means of packaging.

2.2 The decision process

In general, marketing departments decide that a packaging has to be redesigned. As the environment in which the product is sold and the after-purchase feelings are influenced by the packaging design, and to keep up sales, the marketer wants an adequate design. The design process mostly starts with a briefing: the new design or redesign must, for example, look more natural, fresh, dynamic, young, green, professional, etc. Designers then start designing and come up with design proposals. Sometimes they try to change the geometry of the packaging; this means that engineers and packaging technologists must be integrated in the project to discuss e.g. feasibility and costs. In many projects there is a certain flexibility concerning investments. If someone is enthusiastic about a design proposal, it can mean that the starting points for the project change. In an ideal situation,


designers and engineers come up with a joint proposal with a high and substantiated feasibility. As uncertainties always play a role in development cycles, tools can be employed to map, control and reduce these uncertainties. To finish the decision process, production finally has to agree to the proposals and has to make estimations of the coming changes and of how to anticipate them. Changes can imply additional education, routing and logistic consequences, etc. Suppliers, human resources and new tooling can be part of this process. In reality, the process is an activity in which the different steps are executed in cooperation, as in any properly integrated process. Nevertheless, a certain hierarchy can be seen in the decisions that have to be taken. Many tools are appropriate to guide and facilitate these processes; in the bottom line of Figure 1 an enumeration of these tools is given. Examples are Quality Function Deployment, Failure Mode and Effect Analysis, Measurement System Analysis, Design of Experiments and Statistical Process Control, all also known as being part of approaches like Six Sigma and Lean. Besides that, several design methodologies can be used to optimize this process, for instance the Stage Gate process model of Cooper [3]. One of the structural conflicts in packaging design is that a new design needs innovative ideas to be distinguishable in the market. Creative people can boost sales, as we know from products like the iPod, but also from packaging like the pump for liquid soap or the tube packaging of Pringles. Another good example is the new beer bottle for Grolsch, introduced in 2007; the new packaging itself caused percentage points of market growth in a saturated market with many competitors. The development process requires creativity, but it must be controllable. Moreover, it should not lead to infeasible designs, designs that will not be accepted by the market, or designs that lead to frustrated engineers. Compared to product design, a number of aspects have to be defined in a different way for packaging design. Two examples are cost and styling.

Costs
Often, the costs of the packaging and of the packaging process have a tremendous influence on the total price of the product-package combination, while profits are marginal. This is especially true for fast moving consumer goods. Even a small change might require a substantial investment, implying a radical change in the market. An example is the toothpaste tube of the brand Theramed. In the opening of the tube, a nozzle was placed to optimize convenience. When this new design, which was much more expensive, was introduced into the market, the consumer did not see the difference and hardly noticed the improvement in convenience. Consequently, the new product was rejected by the market. The total project costs were very high: not only was the tube more expensive, the toothpaste itself also had to be changed to optimize its viscosity for the functioning of the nozzle. Moreover, new equipment had to be developed for the filling process.

Styling
Another aspect that plays a role is the appearance/styling of new designs. Experiments with the design of labels on bottles indicated that changes that are hard for customers to pinpoint can nevertheless strongly influence the opinion about the product, as well as the willingness to pay for it [4]. In the experiment, two bottles of water with labels were compared.
A design with curls in the outline of the bottle and a rather straight design were labelled with two different label designs, one with a curly font and one with a straight font. People judged the quality of the water to be higher if the style of the label matched the design of the bottle.

3 TOOLS TO BRIDGE THE GAP

3.1 Cost price estimation

A distinct packaging characteristic for consumer products is the large quantities in which they are produced. Companies that produce packaging therefore usually focus on a certain speciality, within a certain range. This means that it is relatively easy to deduce key figures.

Plastics
For example, a company producing plastic bottles, for the largest part out of polyethylene (PE) with a price of 1.4 euro per kilogramme, uses 1,000 tonnes of PE, and its turnover is about 2.8 million euros. This means that the added value is twice the cost price of one kilogramme of PE. Consequently, a bottle of 20 grams costs about 7 cents. Prerequisites for these approximations are that the series have to be large and the shape has to be more or less in line with average standard bottles. It must also be possible to make estimates for different sizes and for small or large series. Because of this economy of scale and the relatively cheap materials, a fairly accurate estimation of cost prices is possible for most packaging. Therefore, a tool can be made to estimate the costs of a new packaging design based on key figures.

Corrugated cardboard
For corrugated cardboard, key figures are based on the amount of carton being used, expressed in square metres. There is a lot of difference between the available qualities of cardboard, but these differences are known. The costs of direct printing hardly influence the cost price, except when working with pre-printed materials.

Folding boxboard
For folding boxboard, key figures can be found based on the amount of material, also expressed in square metres. Here, the number of printed colours directly influences the cost price. Special inks and special printing effects can also influence the costs, as can the number of spots to be glued (for the so-called ready-glued boxes).

Glass
For glass bottles and jars, the costs can be estimated based on the weight of the design and on the price of the glass. For glass, the colour also influences the price.

Metal
Rigid metal packaging has more or less fixed prices according to the type of material, the volume and the amount of material used. Standardization determines the prices to a great extent.

Foil
Flexible plastics have to be divided into bulk materials, like PE for shrink and stretch foils, and more sophisticated applications, like multi-layer foils. In the first group, key figures are based on the weight; in the second group, on the surface area and the number of printed colours.

For an engineer it is possible to estimate the amount of material used in the design of the packaging. With 3D modelling software the amount of material can be determined and the costs can be defined. This means that an engineer can give accurate feedback on the costs of the packaging quite quickly and easily. Therefore, engineers can provide important input in the decision process. A minimal sketch of such a key-figure estimate is given below.
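As a sketch only, using the key figures of the PE-bottle example above (the class and method names are ours, and the added-value factor is an assumption based on that example):

  // Hypothetical sketch of a key-figure cost estimator; not the authors' tool.
  final class KeyFigureCostEstimator {

      // Key figure: selling price per kg of finished packaging, derived from the
      // material price per kg and a value-added multiplier (see the PE example:
      // turnover of 2.8 MEUR on 1,000 tonnes of PE at 1.4 EUR/kg).
      static double pricePerKg(double materialPricePerKg, double addedValueFactor) {
          return materialPricePerKg * (1.0 + addedValueFactor);
      }

      // Estimated cost of one packaging, given its material weight in grams
      // (e.g. as determined from the 3D model).
      static double unitCost(double weightGrams, double pricePerKg) {
          return weightGrams / 1000.0 * pricePerKg;
      }

      public static void main(String[] args) {
          double perKg = pricePerKg(1.4, 2.0);    // assumption: added value = 2x material price
          double bottle = unitCost(20.0, perKg);  // a 20 g PE bottle
          // ~0.08 EUR, the same order of magnitude as the ~7 cents quoted in the text
          System.out.printf("Estimated bottle cost: %.3f EUR%n", bottle);
      }
  }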

3.2 Judging the appearance of design proposals

It is difficult to determine how customers judge a design proposal. A packaging design can be built up out of several elements. Tijssen [5] distinguishes seven of them, in two groups: physical aspects and graphical aspects. The physical elements are material and shape; changing these can require high investments and therefore cause high costs. For this reason, often only the graphical design is changed. The graphical design elements are: colour, typography, use of images, graphic devices and the use of brand marks. An engineer can make drawings of design proposals using 3D software and make renderings or snapshots from the drawings. By projecting the graphical design onto the drawing, it is possible to show a realistic picture of the packaging design.

Figure 3 shows the elements that can be used to change a packaging design [5]. The graphical packaging design can be changed rather quickly using 2D software, and the five graphical elements can be changed separately. Colour: different colours can be shown very quickly. Typography: different typefaces and different layouts can be realised quite quickly, although the effort needed to change a design should not be underestimated. Use of images: several images can be put on the packaging, each in different printing qualities and with different appearances. Graphic devices: the design can be supported by many different devices, like lines, shapes, coloured surfaces, dots, and so on. Brand marks: a small detail can influence the opinion of a customer considerably; adding a brand mark is an important example thereof. To better assess the market acceptance of the packaging design, or to optimize the design, it would be beneficial to show the design proposal to a large group of persons. To overcome the problem that a design might be presented as if it were going to be exhibited in a museum, it must be tested in a realistic environment: among many other competitive products, in an adequate context.

Online tool

An online web tool has been developed to involve an extensive community, part of the target group, in the judgement of design proposals. (For reasons of IP protection, no images of the tool can be shown here.) This is done by displaying the designs on a virtual shelf, surrounded by designs of competitors. Small changes are made in the designs of the different packagings, like changing the colour or the fonts, using different graphical elements, or adding logos and pictures, one by one. The designs are taken up in a set of screens with pictures of the existing products. Three parameters are tested with the tool: shelf value, brand value and preference value.

Shelf value: the shelf impact of a certain package within a fully packed shelf landscape. How does a package hold up within a packed shelf with competitors?

Brand value: does the target group recognize the proposed brand identity and values in the package?

Preference value: is the target group attracted towards a package when an affectional choice needs to be made?

People out of the target group have to state their preference several times by clicking on items. In several steps the following information is gathered: general details about the person, buy-in-shelf ratio, memory value, recall of position on the shelf, brand value, buy intention and preference ratio. A score model has been made to judge the outcomes of the tool; a hypothetical sketch of such a model is given below.
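The paper does not disclose the score model itself; purely as an illustration, a score model over the three tested parameters could be a user-weighted sum (all names and weights below are hypothetical):

  // Purely illustrative: the paper does not specify its score model. This
  // sketch simply combines the three tested parameters with user-set weights.
  final class PackagingScoreModel {
      private final double wShelf, wBrand, wPreference;   // hypothetical weights

      PackagingScoreModel(double wShelf, double wBrand, double wPreference) {
          this.wShelf = wShelf; this.wBrand = wBrand; this.wPreference = wPreference;
      }

      // Each value is assumed to be normalised to [0, 1] from the clicks gathered by the tool.
      double score(double shelfValue, double brandValue, double preferenceValue) {
          return wShelf * shelfValue + wBrand * brandValue + wPreference * preferenceValue;
      }

      public static void main(String[] args) {
          PackagingScoreModel model = new PackagingScoreModel(0.4, 0.3, 0.3);
          System.out.println(model.score(0.8, 0.6, 0.7));   // compare design variants by score
      }
  }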

4 CASE STUDY

A case study has been set up in which a number of design proposals were displayed on the virtual shelf. It was a realistic test with a product that, at that time, was under development and that by now has been introduced into the supermarket. In broad lines, the model that underlies this case study addresses, amongst others, a number of distinct steps:

• Preparation of the design (create multiple variants with a controlled, purposeful divergence)

• Selection and invitation of probands (based on a >2,000 candidate database)

• Categorisation of participants by means of general questions (target group, age, gender, etc.)

• Product type analysis (focus on product type awareness, familiarity, etc.)

• Confrontation with the new packaging design (assess perceived healthiness, taste, brand image)

• Assessment of the divergence of the new packaging design (determine the relation between e.g. colours, font types, logos and graphics, and e.g. perceived naturalness, healthiness, etc.)

• Statistical analyses (determine the influence of small design changes on consumer perception)

• Designer feedback (based on the statistical analyses, the designer gets substantiated feedback on the design in terms of consumer perception, together with a sensitivity analysis focusing on the design changes vs. the perception)


The conclusions of the test were remarkable. For example, the tool showed that placing the package with the top facing the customer (instead of the side) improves the shelf value. The buying intention for one of the designed packagings scored poorly with floating (impulsive) shoppers; on the contrary, the same design scored relatively high in the buying-in-the-shelf simulation. Shoppers with no brand preference bought the new packaging design in 50% of all cases. A validation of the tool has been set up as well. The main conclusion was that the buying patterns in front of the virtual shelf resemble the actual purchasing behaviour in the supermarket to a great extent. The tool showed that marketers were able to quickly see the consequences of different designs, which reduced uncertainties in decision making and in briefing the designers and engineers.

5 CONCLUDING REMARKS

The gap between designers and engineers on the one hand and marketers on the other can be bridged by using tools like the web-based online tool presented here, in which a virtual shelf is used to compare different designs and design variations. The tool made it possible to obtain information about many aspects that play a considerable role in purchasing decisions. The appearance of the packaging design largely determines the purchasing decision; the tool can be used to optimize the design. In co-operation with designers, engineers can determine the cost price; moreover, many uncertainties can be taken away. Although the tool needs optimisation, it has already been shown that gaps can be bridged and that design methods can be optimized.

ACKNOWLEDGEMENTS

The authors would like to thank J. Tijssen for the work he did on the project, and K. Kuiper (Friesland Foods) for the co-operation in the project.


REFERENCES
[1] Klooster, R. ten, 2002, Packaging Design: A Methodical Development and Simulation of the Design Process, PhD thesis, Technical University of Delft (NL).
[2] Oostendorp, J.A., Bode, J.M., Lutters, D., Van Houten, F.J.A.M., 2006, The (development) life cycle for packaging and the relation to product design, 13th CIRP International Conference on Life Cycle Engineering, Leuven (B), pp. 207-212.
[3] Cooper, R.G., Kleinschmidt, E.J., 1992, Stage Gate Systems for New Product Success, Marketing Management, vol. 1/4, pp. 20-29.
[4] Van Rompay, T.J.L., Pruyn, A.T.H., 2007, When Visual Product Features Speak the Same Language: Effects of shape-typeface congruence on consumer response, manuscript submitted for publication.
[5] Tijssen, J., 2007, The Total Package: Package Appearance Optimisation, Master thesis in cooperation with Friesland Foods, University of Twente.

Supporting Knitwear Design Using Case-Based Reasoning P. Richards, A. Ekárt School of Engineering and Applied Science, Aston University, Birmingham, B4 7ET, U.K. {richardp, ekarta}@aston.ac.uk Abstract Knitwear design is a creative activity that is hard to automate using the computer. The production of the associated knitting pattern, however, is repetitive, time-consuming and error-prone, calling for automation. Our objectives are two-fold: to facilitate the design and to ease the burden of calculations and checks in pattern production. We conduct a feasibility study for applying case-based reasoning in knitwear design: we describe appropriate methods and show how they can be implemented. Keywords: computer-aided design (CAD), case based reasoning, knitwear design, similarity, adaptation

1 INTRODUCTION

The design and production of patterns for hand knitting is a very tedious process, involving a highly qualified team of a designer, a pattern writer, several checkers and knitters, a typesetter and proofreaders. There is very little documentation and training material available; most of the skills are acquired through learning by doing. The designers and other members of the team find it difficult to formulate the rules that decide how they perform the various steps; an outsider would have to watch them, try to understand what they are doing and then learn by asking questions. The designer first designs the pattern and produces a hand-drawn sketch on graph paper, which includes measurements. The pattern writer then manually calculates the pattern for hand knitting in different sizes. The checkers perform an initial check of the pattern and make any necessary corrections. The knitters then hand knit the pattern in one size and make alterations to the pattern if needed (for example, to obtain the desired shape of the garment). The alterations are made on the pattern for six different sizes. The checkers manually check that all the calculations are correct. For a proper check, they sometimes hand knit a sample of the complicated parts of the pattern (a finishing, for example) to make sure that the result corresponds to the description. In the next step, the typesetter typesets the pattern using a standard word processor. Finally, at least three proofreaders check the typeset pattern for any mistakes (they perform a read-through check). If alterations are needed, the typesetter will make them on the typeset pattern.


The process is very time-consuming and requires a lot of human attention and effort. Everything in the process, except the typesetting, is done manually, without the use of computing technology. We demonstrate here how specialised tools can be developed to help the designer in producing the sketch and also to automate the checks, calculations and production of the written patterns. Although knitting and knitwear design have a long history and include many calculations and steps that are almost mechanical, there have been surprisingly few attempts to use computers to improve the underlying processes. The design of machine-produced knitwear was investigated by Eckert et al. [1]. Communication difficulties arise in large design teams since the specifications of the garments are often inaccurate, incomplete and inconsistent. To alleviate this, the authors proposed that designers use an intelligent CAD system. Active critiquing, i.e. pointing out differences between a designer's goal and what is actually happening, can be used to catch errors as early as possible. In the user interface that was presented, choices were only offered if they were relevant, based on the answers to previous questions. Shapes were described using shape grammars and Bézier curves. Interestingly, they say that "case based reasoning could be employed to provide starting measurements for cutting pattern construction". A system for automatically designing knitting stitch patterns for hand knitwear was presented by Ekárt [2]. It is possible to produce many designs that are not knittable, because they violate the rules of knitting. For example, if the width of a garment does not change then the number of

stitches in a row must be equal to that of the previous row. A method of representing knitwear based on trees is able to avoid many of these invalid designs. Some heuristic measures were described which can identify aesthetically pleasing patterns. This avoids the fatigue involved in the alternative technique of asking humans to evaluate the patterns that have been generated. Many studies on creative design conclude that human designers create new designs by studying past designs from various resources and combining them, adding new elements. To assist the design and production of knitting patterns, we propose a case-based reasoning system that allows for structured step-by-step production of new patterns from scratch, or reuse and adaptation of similar patterns from the past.

2 CASE-BASED REASONING

Case-based reasoning (CBR) is a generic problem solving methodology based on the idea that a solution to a new problem can be obtained from solutions to similar problems encountered in the past [3]. In addition to using knowledge from previously experienced problems, CBR has an element of incremental learning: every new experience is recorded for future use. CBR is based on typical human problem solving: an engineer designs a new product or part by possibly reusing features of a past design whose specification presents a certain degree of similarity with the expected new product; a doctor examining a patient builds up the diagnosis based on a comparison of the symptoms with those described in a book or previously shown by another patient; a decision in court is taken by drawing the similarity to a case in the past. The key in all these situations is to remember the similar problem case from the past and adapt its solution to the new situation. The main cycle of CBR consists of four steps [3] (a minimal sketch follows the list):

• retrieve the most similar case(s) in the case base;

• reuse these case(s) to attempt to solve the new problem;

• revise the proposed solution;

• retain the new problem and its solution as a new case in the case base for later use.
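The following minimal Java skeleton of this four-step cycle is purely illustrative; the interface and method names are our own, not from the paper:

  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.Comparator;
  import java.util.List;

  // Illustrative skeleton of the four-step CBR cycle; every name here is hypothetical.
  interface Case<P, S> { P problem(); S solution(); }

  abstract class CbrEngine<P, S> {
      protected final List<Case<P, S>> caseBase = new ArrayList<>();

      abstract double similarity(P newProblem, P oldProblem);   // domain-specific measure
      abstract S adapt(Case<P, S> retrieved, P newProblem);     // reuse step
      abstract S revise(S proposed, P newProblem);              // e.g. user rating/repair

      // One pass of the cycle (assumes a non-empty case base): retrieve, reuse, revise, retain.
      S solve(P newProblem) {
          Case<P, S> best = Collections.max(caseBase,           // retrieve most similar case
                  Comparator.comparingDouble((Case<P, S> c) -> similarity(newProblem, c.problem())));
          S proposed = adapt(best, newProblem);                 // reuse
          S revised = revise(proposed, newProblem);             // revise
          caseBase.add(new Case<P, S>() {                       // retain (if sufficiently different, per the text)
              public P problem() { return newProblem; }
              public S solution() { return revised; }
          });
          return revised;
      }
  }

A concrete system fills in the three abstract methods; the knitwear instantiation of the similarity measure is discussed in Section 3.2.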

There are various methods to represent cases; in most situations the problem description and the solution are represented as attribute-value pairs. A suitable similarity function has to be devised to compare the new problem case with the ones stored in the case base. The best matching cases are retrieved from the case base and adapted for reuse. Adaptation is often very simple, such as slightly changing the values of selected attributes, or is left for the human to do. In domains where a huge number of past cases is available, it is very likely to find a

very similar past case whose solution needs little adjustment for the new case. However, where only few past cases are available, the adaptation needs to be more substantial and prepared to deal with larger differences between past cases and the new case. It is customary to allow the user to rate how well the new solution proposed by the CBR system performed, and possibly repair it in the revise phase. The new case is then retained for future use in the case base if it is judged to be sufficiently different from the existing cases. Lopez De Mantaras et al. [4] comprehensively discuss retrieval in CBR, e.g. whether to use surface features or data derived from a more in-depth analysis of the case. Some of the methods they discuss are variants on nearest neighbour algorithms; others rely on a complex representation of cases, e.g. in a hierarchy or as a graph. They mention systems where a subset of cases is retrieved and presented to the user; in some of these systems some similarity is sacrificed in order to introduce diversity. They also discuss "adaptation-guided retrieval", which uses domain-specific knowledge to reduce adaptation failures in situations where the most similar case is not necessarily the easiest to adapt. Price and Pegler [5] describe the Wayland advisor for setting up die-casting machines. Wayland consists of a series of fields, each of which has a weight chosen to set its significance; these are summed to give an overall match value. Wayland is an uncommon example of the successful use of CBR in a difficult design problem: there is a large value space for the inputs, and the relationship between those inputs and the desired outputs is not clearly understood. Nevertheless, the system was able to find a 'close enough' solution, and as a result was successfully deployed in foundries. Kolodner [6] discusses indexing and flexible methods of organising cases into a network, e.g. a memory organisation packet. She gives four methods of computing similarity: using an abstraction hierarchy, a qualitative scale, a quantitative scale, or a comparison of roles. The choice of representation and similarity measure depends on the nature of the data, e.g. whether it consists of hierarchical data, discrete choices or continuous ranges. Bergmann et al. [7] discuss case representation, pointing out that the choice of case representation and similarity measure are related. Straightforward representations are feature vectors, textual and object-oriented. Generalised cases are achieved by introducing variables into cases to represent solutions to a wide range of problems. Complex cases may involve a choice of variables and wide ranges for their values. The importance of generalised cases lies in the fact that they can enable a CBR system to perform well with a small case base.


Bergmann et al. also discuss CBR systems with a hierarchical representation. Larry Leifer of Stanford University famously said that "all design is redesign". This applies to knitwear, since at the lowest level garments are composed of the same types of stitch. Therefore, a partonomic hierarchy is conceivably a good way of representing knitwear. However, comparing the detail of complex cases in order to judge their similarity is much more difficult than simply matching on surface features. Craw [8] describes k-NN (nearest neighbour) algorithms and makes the point that recording how many times a particular case has been reused can, in future, help to identify problems with the coverage of the case base. Mejasson et al. [9] use a weighted sum algorithm in their intelligent design assistant system, which assists designers of submarine cables. The algorithm involves the sum of the squares of individual distances; the weights applied to these distances can be set by the user. In order to compare values, their range was often mapped onto a qualitative scale, with the points on the scale treated as equidistant. They use an abstraction hierarchy to measure the distance between components. Watson [10] underlines the fact that CBR is a methodology, and that implementers are free to use whatever technology is appropriate for the implementation, mentioning nearest neighbour, induction, fuzzy logic and SQL as examples. The existing work shows the prevalence of k-NN algorithms using weighted sums, and of techniques to deal with symbolic rather than numeric data. This approach is suitable for our domain, since adaptation of our hierarchical representation (see below) means we do not need to index our cases or organise them in a particularly complex way, leaving us to focus on the challenges of adaptation. CBR is particularly suitable for the knitwear design problem. Designers themselves study many past patterns before they create a new design. A new design will always contain some new element or an interesting combination of elements used in previous patterns, but will be similar to previous designs. A new sweatshirt will most commonly contain the four ordinary parts: front, back, right and left sleeves. However, their shape and structure will be designed according to new trends and wool type. There could be many neck and armhole types, various different lengths and sleeve lengths, possible accessories (such as a belt) or interesting borders, and the stitch pattern can vary across the whole sweatshirt. Sirdar Spinning Ltd. produces about 300 new patterns every year, so the case base will contain a limited number of very specialised cases. A new case can be solved by


combining and adapting several sufficiently similar cases (over different attributes) rather than adapting one single case, as it is unlikely that a very similar past case will be found across all dimensions. Once the high-level representation is created for the new case, the details at the lowest level must be rigorously produced in order to obtain a full solution. Our system cannot rely solely on the case base, as new trends may involve a new element or shape that has not been encountered before. The human user will be offered the possibility to create a new design themselves with the help of the system and then store it in the case base.

3 CBR APPLIED TO KNITWEAR DESIGN

3.1 Case Representation

We represent our cases using three levels of detail:
1. Questionnaire: a high-level specification that is similar to a sheet currently used by knitwear designers.
2. Sketch: a visual illustration of the shape, consisting of points, lines or curves between them, and rules that govern them.
3. Chart: a detailed and complete description of the knitting stitch pattern that makes up the garment.
The three-level representation is consistent with the existing process, where detail develops as the design progresses. It facilitates case-based reasoning since we can use the questionnaire in the retrieval stage, so indexes are not required. We store our cases as XML files, in a human-readable format. This affords portability and facilitates testing, since no interpretation of the files is required. A minimal sketch of this structure is given below.
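Purely as an illustration of the three-level structure (the real system stores cases as XML; all type and field names below are our own hypothetical choices), a Java view might be:

  import java.util.List;
  import java.util.Map;

  // Hypothetical Java view of the three-level case representation. Requires Java 16+.
  record Questionnaire(Map<String, String> features) {}         // high-level spec, used for retrieval
  record Sketch(List<double[]> points, List<String> rules) {}   // shape: points, lines/curves, governing rules
  record Chart(char[][] stitches) {}                            // complete stitch pattern, one symbol per stitch
  record GarmentCase(Questionnaire questionnaire, Sketch sketch, Chart chart) {}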

3.2 Similarity Algorithm for Retrieval

We use a weighted sum algorithm to assess the similarity of two garments. The first step is the calculation of the raw similarity raw_sim; the higher the value of raw_sim, the more similar the garments are judged to be. Each feature in the questionnaire for the garment that we are creating is compared to the equivalent in a previously created garment stored in the case base. If the features match, a weight is added to raw_sim. The values of the weights are determined by the user (see section 4.4). In many cases, the values in the questionnaires are "yes/no" choices. However, in some scenarios there is a choice from a list, and options may differ in their similarity to each other. For example, a wrist-length sleeve is more similar to a 3/4-length sleeve than it is to a very short one. We therefore define a score matrix over the attribute values, in which the user can assign a score to each pair of values. In these situations the weighted score is added to raw_sim.

If the feature is not present in the garment then the weight is not calculated and other features which are dependent on this feature are ignored in the existing garment. For example, if we are creating a sleeveless garment then a weight (Whassleeves) would be used for existing sleeveless garments; but not for sleeved ones. In the latter case, details such as the cuffs are irrelevant. The normalised similarity (norm_sim) is given by:

norm_sim(a,b) = (raw_sim(a,b) - min_sim(a,b)) / (max_sim(a,b) - min_sim(a,b))    (1)

where:

min_sim(a,b) = W_armholestyle × min_score_armholestyle + W_neckshape × min_score_neckshape + W_wearer × min_score_wearer

max_sim(a,b) = raw_sim(a,a)

and min_score_armholestyle, min_score_neckshape and min_score_wearer are the minimum values in the score matrix of the garment being created for the rows corresponding to the armhole style, neck shape and wearer respectively. All these attributes are mandatory and independent of other data, so it is possible that some rows of this matrix will have values that cause the minimum similarity not to be zero. Thus norm_sim(a,b) lies in the range [0,1], where 0 corresponds to completely different and 1 to identical. Finding the most similar garment to the new one is a maximisation problem; we argue that maximisation of similarity is more intuitive to the non-technical user than minimisation of distance. Let us now examine the axioms of distance metrics, as discussed by Tversky [11]. Minimality applies to our algorithm; this is exploited in our description of the normalisation, as explained above. However, as in many of the situations Tversky describes, symmetry and the triangle inequality do not necessarily apply to our algorithm. According to Craw [8], roles are important where there are asymmetric similarity measures; here the roles are 'garment being created' and 'previously created garment'. The symmetry and triangle inequality axioms need not apply in our scenario, since what is really relevant is the adaptation distance. For example, if the sleeves are removed from garment A to produce garment B, then B is quite similar to A but not vice versa, since it is easier to remove a sleeve than to re-design it. The obvious retrieval strategy is to attempt to adapt the existing garment(s) with the highest similarity. If this is unsuccessful, then the next most similar garment can be attempted, and so on. However, we may consider introducing diversity, as discussed in Aamodt et al. [5], since if the most similar case cannot be adapted then attempting adaptation on cases that are very similar to it may also be futile. Our algorithm was implemented in a Java application on a computer with an Intel Core 2 Duo 2GHz processor and 2GB of RAM. We experimented with similarity evaluation on a large case base consisting of 25,000 pseudorandomly generated cases, to ensure that efficiency is not an issue with the algorithm (we envisage only about 300 cases a year being added). Our system has been designed for ease of maintenance and flexibility; if efficiency becomes a problem, then the architecture will have to be altered to improve execution speed. A minimal sketch of the similarity computation is given below.
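A minimal Java sketch of this weighted-sum similarity follows; it makes our own simplifying assumptions (features as string attribute-value pairs; list-valued features scored via a matrix flattened to "newValue/oldValue" keys), and none of the names come from the authors' software:

  import java.util.Map;

  // Illustrative sketch of the weighted-sum similarity; not the authors' code.
  final class GarmentSimilarity {
      private final Map<String, Double> weights;                      // W_feature, set by the user
      private final Map<String, Map<String, Double>> scoreMatrices;   // per list-valued feature

      GarmentSimilarity(Map<String, Double> weights,
                        Map<String, Map<String, Double>> scoreMatrices) {
          this.weights = weights;
          this.scoreMatrices = scoreMatrices;
      }

      // raw_sim: weighted scores summed over features present in both garments.
      double rawSim(Map<String, String> newGarment, Map<String, String> oldGarment) {
          double sim = 0.0;
          for (Map.Entry<String, String> f : newGarment.entrySet()) {
              String oldValue = oldGarment.get(f.getKey());
              if (oldValue == null) continue;                  // feature absent: weight not calculated
              Map<String, Double> matrix = scoreMatrices.get(f.getKey());
              double score = (matrix != null)
                      ? matrix.getOrDefault(f.getValue() + "/" + oldValue, 0.0) // list-valued feature
                      : (f.getValue().equals(oldValue) ? 1.0 : 0.0);            // yes/no feature
              sim += weights.getOrDefault(f.getKey(), 0.0) * score;
          }
          return sim;
      }

      // norm_sim as in Equation (1); minSim is computed from the mandatory attributes.
      double normSim(Map<String, String> newGarment, Map<String, String> oldGarment, double minSim) {
          double maxSim = rawSim(newGarment, newGarment);      // max_sim(a,b) = raw_sim(a,a)
          return (rawSim(newGarment, oldGarment) - minSim) / (maxSim - minSim);
      }
  }

Note that iterating over the new garment's features, as rawSim does, makes the measure asymmetric, which matches the role-based asymmetry discussed above.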

We have explicitly coded our features and similarity measure into our Java software, as opposed to the 'adaptive programming' described by Long et al. [12], which uses metadata instead. Our users require a tool to create new cases manually, so the features must be specified in our program code anyway. Our software offers both 'creation from scratch' and creation using CBR.

3.3 Adaptation for Reuse

One of the assumptions of CBR is that if two objects are similar then one can be adapted into the other. However, the well-known '15-puzzle', explained by Archer [13], shows that this assumption is questionable in some circumstances. In the 15-puzzle, numbered tiles are moved around a board with the goal of finishing with the tiles in a particular order. Some configurations have no tile in the correct position yet are solvable, while other configurations have just two of the 15 tiles out of position yet are completely unsolvable. Further consideration is needed to establish whether the assumption holds in our domain. As a fallback, our users always have the option to create a knitting pattern manually. Mitra and Basak [14] survey the adaptation methods used in many CBR systems and classify them in various ways, e.g. knowledge-lean and knowledge-intensive. Our domain perhaps lends itself more to knowledge-intensive adaptation, since the cases are highly structured; stochastic methods are likely to require extensive repair mechanisms. Derivational replay is not applicable, since the way designs are constructed by humans is likely to be idiosyncratic and inconsistent. Substitution (swapping parts between the existing and newly created cases) and transformation (making structural changes to the newly created case) seem particularly applicable. However, the drawback of these techniques is that they require domain knowledge, so they reduce one of the claimed benefits of CBR, namely the elimination of the knowledge elicitation bottleneck.
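To illustrate transformational adaptation in this setting, the following Java sketch applies the sleeved-to-sleeveless example used earlier for asymmetric similarity. The flat feature map and the rule body are invented for illustration; real rules would encode the domain knowledge elicited from Sirdar's experts.

import java.util.HashMap;
import java.util.Map;

public class SleeveRemovalRule {

    // apply the rule when the query wants no sleeves but the retrieved case has them
    public static Map<String, String> adapt(Map<String, String> retrievedCase,
                                            Map<String, String> query) {
        Map<String, String> adapted = new HashMap<>(retrievedCase);
        if ("no".equals(query.get("hasSleeves"))
                && "yes".equals(retrievedCase.get("hasSleeves"))) {
            adapted.put("hasSleeves", "no");
            adapted.remove("sleeveShape");          // dependent features become moot
            adapted.remove("sleeveLength");
            adapted.remove("cuffStitch");
            adapted.put("armholeFinish", "border"); // assumed domain rule: a raw
                                                    // armhole edge needs a border
        }
        return adapted;
    }
}

Note that the reverse rule (adding sleeves to a sleeveless case) is not a simple inverse: it would have to invent sleeve shape, length and cuff details, which is exactly the asymmetry reflected in our similarity measure.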


It is worth noting that our search space is fairly large: the questionnaire (see section 4.3) is capable of specifying approximately 10^12 different garments. If we assume that ease of adaptation correlates with structural similarity, then the danger is that the gap between the new solution and the nearest existing case will be too large for any effective adaptation strategy. One way to avoid the large search space problem is proposed by Watson and Perera [15] in their 'divide and conquer' approach. Knitwear is composed of separate pieces, e.g. cardigans typically consist of a back, a front, and two arms. The CBR could be done separately on these pieces. So, our new cycle might be the following (a code sketch of the cycle follows the list):

• repartition the query case into constituent parts
• retrieve cases which have parts that are similar to the equivalent parts in the new case
• reuse the parts
• revise using one or more of the adaptation methods discussed
• recompose the parts back into a whole
• repair any inconsistencies between the parts
• retain the solution for the future.
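The following Java outline sketches how this part-based cycle might be orchestrated; the Piece enumeration and the methods repartition, retrieveMostSimilar, adaptTo, recompose, repairSeams and retain are hypothetical placeholders of ours, since the paper does not prescribe an implementation.

import java.util.EnumMap;
import java.util.Map;

public class PartBasedCbr {

    enum Piece { BACK, FRONT, LEFT_SLEEVE, RIGHT_SLEEVE }

    interface GarmentPart {
        GarmentPart adaptTo(GarmentPart query);   // revise the retrieved part
    }

    interface Garment {
        Map<Piece, GarmentPart> repartition();               // split into parts
        Garment recompose(Map<Piece, GarmentPart> parts);    // reassemble the whole
        Garment repairSeams();                               // e.g. fix mismatched armholes
    }

    interface CaseBase {
        GarmentPart retrieveMostSimilar(Piece piece, GarmentPart query);
        void retain(Garment solved);              // keep the solution for the future
    }

    public Garment design(Garment query, CaseBase caseBase) {
        Map<Piece, GarmentPart> queryParts = query.repartition();
        Map<Piece, GarmentPart> solvedParts = new EnumMap<>(Piece.class);
        for (Map.Entry<Piece, GarmentPart> e : queryParts.entrySet()) {
            GarmentPart retrieved = caseBase.retrieveMostSimilar(e.getKey(), e.getValue());
            solvedParts.put(e.getKey(), retrieved.adaptTo(e.getValue()));  // reuse + revise
        }
        Garment solution = query.recompose(solvedParts).repairSeams();     // recompose + repair
        caseBase.retain(solution);                                         // retain
        return solution;
    }
}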

The major disadvantage of the divide and conquer approach is that the parts need to be independent, as noted by Watson and Perera. Any inconsistencies introduced are usually dealt with in the repair phase. In a cardigan design, the inconsistency could be parts with slightly mismatched armholes, which can be repaired using well-defined rules. One could imagine a front and a back of different lengths as such an inconsistency, but this can easily be avoided by imposing strong constraints on the sizes of the parts in the repartitioning phase. Purvis and Pu's COMPOSER system [16] used multi-case adaptation in two case-based design problems. The adaptation phase was viewed as a constraint satisfaction problem; they used a greedy algorithm to find a good initial solution and then employed a minimum-conflicts algorithm, which uses heuristics to change the values of variables until all constraints are satisfied. They claim that their approach requires less adaptation knowledge. The fact that the constraints have to be identified appears to be a disadvantage of this approach, but in our case most constraints are simple to specify (for example, matching lengths and matching armholes for parts, and symmetries), and the creation of garments 'from scratch' will have to include them anyway for the purpose of consistency checking. To ensure compliance with potentially unpredicted detailed user requirements, the user will always have the final say in accepting the adaptation proposed by the


system, and may propose changes before the solution is produced.
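To make the constraint-repair idea concrete, here is a generic min-conflicts loop in Java, in the spirit of the COMPOSER approach; the variable and constraint representation is invented for illustration and is not taken from Purvis and Pu.

import java.util.List;
import java.util.Map;
import java.util.Random;

public class MinConflictsRepair {

    // a design constraint over an assignment of integer-valued variables
    public interface Constraint {
        boolean satisfied(Map<String, Integer> assignment);
    }

    private final Random rng = new Random();

    private static int conflicts(List<Constraint> cs, Map<String, Integer> a) {
        int n = 0;
        for (Constraint c : cs) if (!c.satisfied(a)) n++;
        return n;
    }

    // repairs the assignment in place; returns true if all constraints hold
    public boolean repair(Map<String, Integer> assignment,
                          Map<String, List<Integer>> domains,
                          List<Constraint> constraints,
                          int maxSteps) {
        List<String> vars = List.copyOf(assignment.keySet());
        for (int step = 0; step < maxSteps; step++) {
            if (conflicts(constraints, assignment) == 0) return true;
            // pick a variable (at random here; classic min-conflicts picks one from a
            // violated constraint) and give it the value minimising the conflict count
            String var = vars.get(rng.nextInt(vars.size()));
            int bestValue = assignment.get(var);
            int bestConflicts = Integer.MAX_VALUE;
            for (int candidate : domains.get(var)) {
                assignment.put(var, candidate);
                int c = conflicts(constraints, assignment);
                if (c < bestConflicts) { bestConflicts = c; bestValue = candidate; }
            }
            assignment.put(var, bestValue);
        }
        return conflicts(constraints, assignment) == 0;
    }
}

A constraint such as 'front and back lengths must match' then becomes a one-line lambda: a -> a.get("frontLength").equals(a.get("backLength")).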

4 ARCHITECTURE AND IMPLEMENTATION

4.1 Process Flow

A useful case-based design system effectively automates the full CBR cycle: retrieve, reuse, revise and retain [3]. However, we start with an empty case base, so manual methods must also be supported. Many studies (e.g. [17]) have confirmed that iteration must be supported when producing a semi-automatic design system. Accordingly, users are allowed to move back and forward between the questionnaire, sketch and chart stages. Automated creation may jump straight from the questionnaire to the chart.

4.2 Transformation Rules

To support the manual creation of garments, we require rules which convert the questionnaire into the sketch, and the sketch into the chart. Equally, to ensure that the representations do not become inconsistent, we need inverses of these rules. This is analogous to the discussion given in Börner [18]. The rules are coded in the system using domain knowledge elicited from experts at Sirdar. For example, the feature-vector representation of an armhole shape is just a single integer. This is transformed into a series of points which either form the outline of the shape or, in some cases, act as control points for a Bézier curve. This is then discretised in the chart into a matrix of individual stitches (a code sketch of this discretisation follows the chart description below).

4.3 Editing and Creating Patterns

Figure 1 shows the stages of our knitwear design system, starting with the questionnaire, followed by sketch editing and approval, then chart production; finally the chart is converted into the written knitting pattern. Figure 1(a) shows an example of a fully completed questionnaire. Note that we seek to hide unnecessary complexity, so things are only shown when they are relevant. Also, feedback from designers indicated that they want to be able to proceed to a sketch as soon as possible. The 'basic' tab contains everything necessary to proceed to the sketch stage; the advanced tab can be left until later. The Random button creates a garment with pseudo-randomly generated data, utilising some heuristics, and populates the fields accordingly, offering a starting point for new garments. The Clear button resets all the fields to blank. The other buttons are self-explanatory. If the divided box is checked, this indicates that the front of the garment separates, i.e. it is a cardigan rather than a sweater, and the user can choose the type of fastener; if it is buttoned then they can additionally choose how many

buttons and where they are. If there are sleeves then the user ticks the box and specifies their shape and length. If the garment has pockets, the user can choose the option indicating their position. There are additional items for the shape of the body, neck, and armhole. On the advanced tab, which is not shown here, the user can select an option indicating what the 'background stitch' is. This is the default that will show on the chart (figure 1(c)), unless the user selects another stitch later. Similarly, if the user has opted for a bottom border, cuffs, front border or collar then they can choose the stitch for these. When the user presses the Save button the design is stored as an XML file, ready for reuse.

When users open an existing garment, our system automatically jumps to the sketch stage. The sketch initially shows the whole garment as it would be constructed by a knitter, in separate pieces, i.e. sleeves, back, and front. The user can click on any one of those pieces, for example a sleeve, as shown in figure 1(b). The sketch is scalable and the line thickness can be varied. The shape of a piece can be modified by dragging points with the mouse, within the limits allowed for that particular piece. For example, a set-in sleeve can change the shape of the underlying curve (implemented using Bézier curves), but cannot become a raglan sleeve.



Figure 1(c) shows a portion of a knitting chart. The knitting chart is a matrix of elements indicating what the knitter should actually do, step by step, to achieve the shape and structure of the garment. Each element corresponds to one stitch and is represented by a meaningful symbol; for example, a circle typically means creating a hole and a triangle means combining two stitches into one. Each row in the chart corresponds to a row of knitting in the finished garment. The user can edit these symbols by numerous means, e.g. drag and drop with the mouse.
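To illustrate the rule chain of section 4.2 (shape code to control points to discretised chart), the following Java sketch samples a cubic Bézier armhole outline and rounds it into whole-stitch widths per chart row. The control points, chart dimensions and the assumption of one width value per row are ours, not the system's actual rules.

public class ArmholeDiscretiser {

    // cubic Bézier component: value at parameter t for control values p0..p3
    private static double bezier(double p0, double p1, double p2, double p3, double t) {
        double u = 1 - t;
        return u * u * u * p0 + 3 * u * u * t * p1 + 3 * u * t * t * p2 + t * t * t * p3;
    }

    // for each knitting row, how many stitches wide the piece is
    public static int[] widthsPerRow(double[] xs, double[] ys, int rows, int maxStitches) {
        int[] widths = new int[rows];
        for (int row = 0; row < rows; row++) {
            double targetY = (row + 0.5) / rows;   // row centre, normalised to [0,1]
            // crude parameter search: find t whose y(t) is closest to the row centre
            double bestT = 0, bestDist = Double.MAX_VALUE;
            for (double t = 0; t <= 1.0; t += 0.001) {
                double d = Math.abs(bezier(ys[0], ys[1], ys[2], ys[3], t) - targetY);
                if (d < bestDist) { bestDist = d; bestT = t; }
            }
            double x = bezier(xs[0], xs[1], xs[2], xs[3], bestT);   // in [0,1]
            widths[row] = Math.min(maxStitches, (int) Math.round(x * maxStitches));
        }
        return widths;
    }

    public static void main(String[] args) {
        // an invented set-in armhole outline, normalised to the unit square
        double[] xs = {1.0, 0.75, 0.7, 0.7};
        double[] ys = {0.0, 0.1, 0.5, 1.0};
        for (int w : widthsPerRow(xs, ys, 20, 60)) System.out.println("o".repeat(w));
    }
}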


The product of the design process, a written knitting pattern, is shown in figure 1(d). It is a series of textual instructions to the knitter, partly codified, using industry-standard terminology. The knitting pattern can be produced in a fairly straightforward, automatic way from the knitting chart, using predefined rules.




1st Row. K0 [2:2:0:0:1], p2 [2:2:1:2:2], * k3, p2, rep from * to last 0 [2:2:4:0:1] sts, k0 [2:2:3:0:1], p0 [0:0:1:0:0]. 2nd Row. K0 [2:2:0:0:1], p2 [2:2:1:2:2], * yon, s1, k2tog, psso, yfwd, yrn, p2, rep from * to last 0 [2:2:4:0:1] sts, k0 [2:2:0:0:1], (yon, s1, k2tog, psso, yfwd, yrn, p1) 0 [0:0:1:0:0] times. 3rd Row. K0 [0:0:1:0:0], p0 [2:2:3:0:1], * k2, p3, rep from * to last 2 [4:4:1:2:3] sts, k2 [2:2:1:2:2], p0 [2:2:0:0:1]. From 1st to 3rd row sets patt. Keeping continuity of patt as set (throughout) work until back measures 44 [46:50:54:58:60] cm (17 1/4 [18:19 3/4:21 1/4:22 3/4:23 3/4] in), ending with a rs row.

Figure 1: The phases of computer aided knitwear design: (a) fully completed basic tab; (b) sketch; (c) chart; (d) pattern.


4.4 Similarity Preferences

Figure 2: Preferences

In order to facilitate the retrieve stage of CBR, we allow the user to specify their preferences for similarity between the new pattern being created and the retrieved pattern from the case base. We allow the user to indicate the relative importance of features for comparison; our approach involves ranking features in order of preference, since people are typically more comfortable doing this than assigning numeric weights. Figure 2 shows the preferences window. On the left, users can rank the features in order of importance; items towards the top of the list are highly relevant. Items in bold are all of equal and maximum relevance, and items in the separate list at the bottom are irrelevant. Hence we cater for situations in which the user feels that several factors are highly significant but cannot decide which is more important. Also, by allowing factors to be marked as irrelevant, we let the user exclude things that have no bearing on similarity. The slider is used to set the scale. We offer a choice of seven functions for the progression of weights from one (most important) to zero (irrelevant). Figure 3 shows an example with 28 features, none of which are irrelevant; the shapes of the functions are shown, the middle one being an arithmetic progression.

In the preferences (Figure 2) we also allow the user to set the relative score for similarity between options for neck style, armhole style, etc. These options are available by clicking on the corresponding button in the top right box. Sleeve shapes are an example: set-in is similar to semi-set-in, but different from raglan. Similarities are expressed using a Likert [19] scale, which is mapped linearly onto the [0,1] range, and the weighted scores are added up to produce the overall garment similarity as described in Section 3.2.

Figure 3: Functions for converting ranks to weights
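The seven progression functions are not defined in detail in this paper, so the following Java sketch assumes a simple power-law family in which an exponent of 1.0 yields the arithmetic progression described as the middle option; the function family and the sample parameters are illustrative assumptions.

public class RankToWeights {

    // rank 0 = most important (weight 1); the last rank gets weight 0;
    // features marked irrelevant are excluded before ranking
    public static double[] weights(int featureCount, double exponent) {
        if (featureCount == 1) return new double[] {1.0};
        double[] w = new double[featureCount];
        for (int rank = 0; rank < featureCount; rank++) {
            double linear = 1.0 - (double) rank / (featureCount - 1);
            w[rank] = Math.pow(linear, exponent);   // exponent sets the curve shape
        }
        return w;
    }

    public static void main(String[] args) {
        // 28 features, as in the Figure 3 example; three of the seven assumed shapes
        for (double e : new double[] {0.5, 1.0, 2.0}) {
            double[] w = weights(28, e);
            System.out.printf("exponent %.1f: w[0]=%.2f w[13]=%.2f w[27]=%.2f%n",
                              e, w[0], w[13], w[27]);
        }
    }
}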


5 CONCLUSIONS

Knitwear design involves many creative processes, and as such it is a human activity that is hard to reproduce or automate with a computer. At the same time, the production of the associated knitting pattern includes several stages that are repetitive and mechanical for the human, which suggests automation. In this paper we have described how the process of knitwear design, including the production of the knitting pattern, can be automated using case-based reasoning.

We proposed a software system that allows both design from scratch and design based on previously created patterns. The system works in stages, starting with a simple questionnaire, then a sketch, then a detailed chart. These stages are interactive; the user can decide how much they want to change the options offered by the system. The chart is then used to produce the pattern automatically. The proposed system has two main goals: to facilitate the design and to ease the burden of calculations and checks. Designers will be able to easily reuse past patterns or their parts. Errors that are often present in the early stages of pattern production, and that propagate to later stages, will be prevented from occurring. We envision that through the use of case-based design the company will be able to respond rapidly to fashion trends. Case-based reasoning systems reuse good practice, learning with each design that is produced, and thus become an automated repository of knowledge. Our feasibility study shows that CBR is particularly suitable for this problem domain. We investigated the important aspects of CBR (representation, similarity measures, adaptation, reuse, and retention) using a prototype system for cardigans. We plan to implement adaptation next, in order to have a fully functional system.

6 ACKNOWLEDGEMENTS

We are grateful for the support of EPSRC and Sirdar Spinning Ltd. via a CASE studentship. We also thank Sue Batley-Kyle for her valuable help on knitwear design.

REFERENCES

[1] Eckert, C.M., Cross, N., Johnson, J.H., 2000, Intelligent support for communication in design teams: garment shape specifications in the knitwear industry, Design Studies, 21/1:99-112.
[2] Ekárt, A., 2007, Evolution of lace knitting stitch patterns by genetic programming, Proceedings of the 2007 GECCO Conference Companion on Genetic and Evolutionary Computation, 2457-2461.
[3] Aamodt, A., Plaza, E., 1994, Case-based reasoning: foundational issues, methodological variations and system approaches, AI Communications, 7:39-59.
[4] Lopez De Mantaras, R., McSherry, D., Bridge, D., Leake, D., Smyth, B., Craw, S., Faltings, B., Maher, M.L., Cox, M.T., Forbus, K., Keane, M., Aamodt, A., Watson, I., 2005, Retrieval, reuse, revision, and retention in case-based reasoning, The Knowledge Engineering Review, 20/3:215-240.
[5] Price, C.J., Pegler, I.S., 1995, Deciding parameter values with case-based reasoning, 1st UK Workshop on Case-Based Reasoning, Salford, 121-133.
[6] Kolodner, J.L., 1993, Case-Based Reasoning, Morgan Kaufmann Publishers Inc.
[7] Bergmann, R., Kolodner, J., Plaza, E., 2005, Representation in case-based reasoning, The Knowledge Engineering Review, 20/3:209-213.
[8] Craw, S., 2008, CM3016 Knowledge Engineering (Case-Based Reasoning), available online at http://athena.comp.rgu.ac.uk/staff/smc/teaching/cm3016, accessed 23 June 2008.
[9] Mejasson, P., Petridis, M., Knight, B., Soper, A., Norman, P., 2001, Intelligent design assistant (IDA): a case base reasoning system for material and design, Materials & Design, 22/3:163-170.
[10] Watson, I., 1999, Case-based reasoning is a methodology not a technology, Knowledge-Based Systems, 12/5:303-308.
[11] Tversky, A., 1977, Features of similarity, Psychological Review, 84/4:327-352.
[12] Long, J., Stoeckin, S., Schwartz, D.G., Patel, M.K., 2004, Adaptive similarity metrics in case-based reasoning, Proceedings of Intelligent Systems and Control, Honolulu, Hawaii, USA.
[13] Archer, A.F., 1999, A modern treatment of the 15 puzzle, The American Mathematical Monthly, 106/9:793-799.
[14] Mitra, R., Basak, J., 2005, Methods of case adaptation: a survey, International Journal of Intelligent Systems, 20/6:627-645.
[15] Watson, I., Perera, S., 1998, A hierarchical case representation using context guided retrieval, Knowledge-Based Systems, 11:285-292.
[16] Purvis, L., Pu, P., 1998, COMPOSER: a case-based reasoning system for engineering design, Robotica, 16/3:285-295.
[17] Parmee, I., Cvetkovic, D., Bonham, C., Packham, I., 2001, Introducing prototype evolutionary systems for ill-defined, multi-objective design environments, Advances in Engineering Software, 32/6:429-441.
[18] Börner, K., 1994, Structural similarity as guidance in case-based design, in: Wess, S., Althoff, K.D., Richter, M. (Eds.), Topics in Case-Based Reasoning, Springer, 197-208.
[19] Likert, R., 1932, A technique for the measurement of attitudes, Archives of Psychology, 140:1-55.

Investigating Innovation Practices in Design: Creative Problem Solving and Knowledge Management

D. Baxter1, N. El Enany1, K. Varia1, I. Ferris1, B. Shipway2
1 Decision Engineering Centre, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK
2 Technical & Engineering Directorate, MBDA, Lostock, Bolton, BL6 4BR, UK
[email protected]

Abstract
This paper describes a case study investigating two key aspects of innovation practice in an engineering company: creative problem solving (CPS) and knowledge management (KM). CPS methods offer benefit to organisations in developing novel solutions and improving operations. This research identified the key factors in applying CPS methods from the literature, and compared the creative practices of one engineering organisation with three creative organisations. KM practices can support the sharing and reuse of innovative practices and creative outcomes. A central conflict in adopting KM is codification vs. personalisation. This issue is discussed with reference to a KM framework proposal.

Keywords: Innovation, Creativity, Creative Problem Solving, Innovation Team, Knowledge Management

1 INTRODUCTION

This research set out to investigate two critical areas within the scope of an innovation team in a large engineering company (hereafter referred to as EngCo). The innovation team are a central resource charged with the promotion of innovation and creativity across the business. They operate a variety of activities and programs which are designed to foster and improve innovation, including innovation awards, an ideas scheme, and innovation booster sessions.

In order to stimulate creativity and innovation, EngCo assign teams of employees to take part in creative and technical thinking meetings to support and promote creative practices. Innovation booster sessions are one such example. They are essentially creative problem solving (CPS) workshops led by a trained facilitator. The sessions are generally initiated by a problem owner, who suggests the topic and identifies suitable attendees. The innovation office staff arrange the workshop location, as well as assigning a facilitator and additional employees to take part. Once the workshop is complete, the problem owner creates a report describing the outcome of the session. The innovation office collates these reports with the intention of sharing them across the company.

This project focused in particular on the CPS workshops, from two perspectives. The first relates to the CPS tools applied in the workshop: which tools should be used in a given scenario, and what are the best practices in applying those tools? The second relates to the knowledge management (KM) activities carried out by the innovation office: how best to share the outcome of the innovation workshops?

The academic literature in the creative problem solving domain is somewhat limited: there is a large amount of literature describing creative problem solving methods but very little describing how to link a given problem type with a particular problem solving method. In the knowledge management domain, various generic solutions have been proposed to deal with knowledge management for innovation. The term innovation is variously used to describe both 'creativity' and 'new product development'. As such, there is limited provision for knowledge management for 'creativity'.



First, this paper will describe the methodology applied in the research, followed by a description of the underlying concepts from the literature. Following that, a description of the case study findings is presented. This includes a discussion on how CPS can be applied in an engineering design scenario, and a proposal for a KM framework to support an innovation team.

2 RESEARCH METHODOLOGY

This project set out to identify the important factors in applying CPS in an engineering company, and the KM methodology associated with that activity. The approach to the research involved three key stages: a literature analysis, a case study with EngCo, and two comparative studies with other organisations (one KM and one CPS). The case study took place separately in these two distinct areas, with one researcher focusing on each. A series of semi-structured interviews was carried out (15 in each area) and notes were taken in each interview. A summary of the notes from each interview was validated by the appropriate participant. For both areas, an analysis of these summary documents was carried out to identify the important themes.

In identifying CPS practices, a comparative study was carried out with three creative organisations. The objective was to compare creative company practices with engineering company practices; their responses will therefore be considered as a group, and collectively they will be referred to as CreativeCo. A comparative study to identify the relationship between KM and innovation in other product development organisations also took place. Their responses will be considered as a group, and collectively they will be referred to as ProdCo.

3 LITERATURE FINDINGS

3.1 Creativity and creative problem solving

Creativity is an essential part of design. This applies to all types of design, including but not limited to industrial design, aesthetic design, creative design, fashion design, system design, mechanical design, and engineering design. Creative problem solving is a process which individuals go through as a way of developing a solution

to a specified problem. Creative problem solving (CPS) methods are widely recognised to offer benefit to organisations in developing novel solutions. This is of critical importance in a design context, contributing towards innovation and competitive advantage.

There are three key issues relating to the application of CPS methods that this research set out to discover. Firstly, there are a large number of CPS methods available; it is not clear which of these methods should be applied in a given context. Second, best practices in applying CPS methods should be identified. Third, important factors in the application of CPS methods need to be identified in order that they can be effectively addressed: personal qualities, skills, culture, location, and so on. It should be noted at this stage that there were no quantitative indicators identified in the literature regarding the suitability of any given CPS method to a particular situation. Best practices are also extremely sparse; the few references identified were from practitioner sources and not academic journals. Having developed an understanding of the background of CPS, of the three issues only one is properly addressed by this paper: important factors in the application of CPS methods. This is due to the limitations of the current literature, which is in part due to the unexpected complexity of the subject, but also relates to a lack of empirical studies to assess the effectiveness of CPS methods. The study into factors supporting creativity and CPS has been supported by an analysis of the literature and a case study.

3.2 Creativity in the UK manufacturing sector

In 2005 the UK government published The Cox Review, a report identifying a need for enhancing creativity in the manufacturing sector. Creativity is, according to the report, not simply a way to develop novel products and services but also a proven method to enhance productivity; however, it is not always recognised as such. Definitions of creativity, innovation and design will be adopted according to the Cox Review:

"'Creativity' is the generation of new ideas – either new ways of looking at existing problems, or of seeing new opportunities, perhaps by exploiting emerging technologies or changes in markets. 'Innovation' is the successful exploitation of new ideas. It is the process that carries them through to new products, new services, new ways of running the business or even new ways of doing business. 'Design' is what links creativity and innovation. It shapes ideas to become practical and attractive propositions for users or customers. Design may be described as creativity deployed to a specific end." [1]

3.3 Creative Problem Solving

Structured problems which are well defined can be approached with direct and systematic methods. Creative problem solving is particularly suited to problems that are ill-structured or difficult to define. According to Stouffer et al. [2], the creative problem solving process consists of four key stages: first, a notion or need (sensing, problem definition, and orientation); second, an investigation of that notion or need (testing, preparation, incubation, analysis, and ideation); third, an articulation of a new idea or solution (modifying, illumination, and synthesis); and fourth, a validation process for that idea or solution (communicating, verification, and evaluation), resulting in an idea, theory, process, or physical product.
Mauzy & Harriman were able to identify four critical qualities that underpin creative thinking: motivation, curiosity and fear, the breaking and making of connections, and evaluation [3].

3.4 Creativity in academic literature

Definitions of creativity in the literature are numerous and varied. Rhodes performed an analysis of over 40, with the intention of creating a single unified definition. Instead, the analysis led to the proposal that there are four strands of creativity: Person, Product, Process and Press (Environment). "Each strand has unique identity academically, but only in unity do the four strands operate functionally" [4]. This reflects both the multi-dimensional nature of creativity and the difficulty of creating a universal definition.

More recently, researchers have examined the relationship between creativity and cognitive styles. Kirton identified two types of creative style through observing managers in a company. The first group, called 'adaptive', was characterised by their ability to initiate changes which helped in improving the organisation, and their inability to see opportunities outside the organisation. The second group, called 'innovative', was characterised by their frequent ideas for radical change and the low acceptance rate of those ideas. Kirton's later hypothesis suggested that rather than a discrete typology there is a continuum between the two styles, which can be described as the "adaptor-innovator" continuum [5]. Essentially, 'adaptors' are individuals who work and think in a precise and methodical way, and 'innovators' are individuals who work and think in an undisciplined, 'different' way. Understanding how individuals, or cognitive styles, relate to creativity is one important component of identifying appropriate CPS methods.

3.5 Organisational Environment

An organisation's culture is determined by the basic values, assumptions and beliefs that are shared, at the deepest level, by the organisation's members. The culture manifests in the actions of those members [6, 7, 8]. Isaksen and Lauer identified ten factors which contribute to creativity in a collaborative environment, and nine dimensions which promote creativity and creative problem solving [9]. These are shown in table 1.

Dimensions which promote creativity: risk-taking; trust and openness; idea support; freedom; challenge and involvement; debate; conflict; playfulness and humour; idea time.

Factors which contribute to creativity: trust; team spirit; unified commitment; principled leadership; an elevating goal; participation in decision-making; an aptitude to adjust roles and behaviours to accommodate new emergent values; a results-driven structure; standards of excellence; external support and recognition.

Table 1: factors supporting creativity [9]

Organisational culture is frequently cited as an important factor in applying CPS; however, there are no qualitative models offering insight into how changes to certain aspects of culture influence the adoption or effectiveness of CPS. In part, this is limited by the nature of culture:


whilst the organisation has some influence, it is by no means under its direct control. Given the lack of measurement frameworks for organisational culture, it is difficult to identify the 'most important' factors.

3.6 Individual Qualities

Individual qualities relevant to creativity are increasingly prevalent in the literature. The relationship between creativity and cognitive styles, creativity skills, and the ability to learn creative methods are some examples of how creativity is related to the individual in the literature. Amabile suggested that there are three key components that support creative production, or the creative outcome, from an individual's perspective: domain-relevant skills, creativity-relevant processes, and task motivation. Domain-relevant skills refer to the knowledge each individual has and their expertise in the area. Creativity-relevant processes refer to the cognitive styles and creativity strategies that each individual adopts. Task motivation is closely related to the successful development of a creative outcome, in particular intrinsic motivation [10]. In fact, motivation is widely recognised as a critical component of creativity [11].

3.7 Creative Problem Solving Tools

There is a lack of academic material on how to identify appropriate CPS tools. In part, this is due to the wide range of CPS tools available. It is also due to the twin influence of individual qualities and organisational environment. A further complicating factor is the range of problems to which CPS tools can be applied. This research sought to identify CPS tools which can be used specifically in an engineering context. The analysis of the literature identified three sources citing the application of CPS tools in an engineering context. Note that there was no indication of selection rationale or effectiveness; these issues are not addressed in general in the CPS literature. The three sources are shown in table 2, along with the tools applied in each case.

Table 2: CPS tools used in engineering. Sources: Hall (1996), Singer & Adkins (1984) and Thompson & Lordan (1999); the tools cited include brain storming, brain writing, check lists, synectics, reversal, attribute lists, forced relationship and value engineering.

3.8 Knowledge management

Organisational knowledge provides a platform for innovation and allows individuals across the organisation to share and apply creative ideas. Innovation is very closely linked with knowledge management [12]. The definition of knowledge management adopted in this research is: "…knowledge management is the set of proactive activities to support an organization in creating, assimilating, disseminating, and applying its knowledge" [13].


Nonaka & Takeuchi argue that organisational knowledge and learning are vital in the innovation process, as innovation is predominantly a process of knowledge creation which relies heavily on the availability and readiness of knowledge [14]. A small number of knowledge management for innovation frameworks have been developed, including the integrated management framework for knowledge management and innovation [15] and the ‘know-net’ framework [16]. It is essential that an organisation manages both tacit and explicit knowledge to ensure their organisational knowledge is effectively applied. Goh [15] explains how the socialization phase (direct personal communication) of their SECI model enables individuals to share their experiences, ideas and knowledge. Li & Gao [17] highlight that knowledge could not be ‘managed’ and had to be ‘led’ through creating and managing the ‘Ba’ (a shared context in which knowledge is shared, created and utilised). Haldin-Herrgard [18] highlights that methods such as, “direct interaction, networking and action learning that include face-to-face social interaction and practical experiences” are key to sharing tacit knowledge. Bröchner et al [19] also found that face-to-face meetings were an effective knowledge transfer mechanism. Hansen et al [20] studied knowledge practices at management consulting firms, health care providers and computer manufacturers. They found that in companies that provided, ‘standardized products’ knowledge was codified and stored in databases, allowing the data to be accessed at any time. They called this the ‘codification strategy.’ Within companies that provided ‘highly customized solutions to unique problems’, knowledge was shared between ‘person-to-person contacts’ and computers were only used to help people to communicate knowledge, not to store it. They called this the ‘personalization strategy.’ Hansen et al argue that one strategy or the other will dominate. This goes against many views which reinforce that both IT infrastructure, allowing codification, and an open knowledge sharing culture, allowing personalisation, must be in place for effective knowledge management [12]. Koners and Goffin [21] highlight the importance of postproject reviews (PPRs) as a knowledge creation and sharing activity. They highlight that most researchers focus on documenting knowledge and sharing and fail to realise that there is more to learning than documentation. They carried out research on five companies from different sectors to assess how R&D companies carry out post-project reviews and whether they ‘promote the creation and transfer of tacit knowledge’. They highlight that people, time, location, duration and preparation are vital. Since a PPR is a meeting, made up of people coming together for a certain purpose, a comparison is drawn between a PPR and an ‘innovation booster’ and a PPR and a ‘booster review’. These activities will be discussed in the case study section. The framework developed in this research aims to illustrate how to use knowledge management activities for an innovation team for example, through utilising knowledge management tools [12], organised post-project reviews [21] and enabling effective tacit knowledge transfer [18]. 4 CREATIVITY CASE STUDY The CPS objective was to identify a CPS toolkit. This objective was not supported by the academic literature, so the focus of the project changed to identifying potential CPS methods and critical factors in their application. 
The 15 interviews with the EngCo employees focused on creative problem solving methods. Questions included: what CPS techniques are you aware of; which ones do you apply; how is CPS promoted; and what qualities do

you think an individual needs to be creative. In addition to the company interviews, three respondents from three creative companies (referred to collectively as CreativeCo) were also interviewed using the same semi-structured template for a comparative study. Content analysis was carried out to indicate the frequency of responses. Whilst it is recognised that the small sample sizes do not reflect broadly applicable trends, this is a useful mechanism for comparison. A summary of the analysis showing the most common responses is given in table 3 for EngCo and table 4 for CreativeCo.

Table 3: content analysis of company interviews, EngCo (15 respondents)
Question | Key themes: EngCo | Frequency
How long have you been in the company? | Over 25 years | 40%
What type of problems do you encounter at work? | Time issues | 47%
 | Communication problems | 33%
What qualities do you think an individual needs to be creative? | Open minded to other people's ideas | 66%
 | Willing to take risks | 47%
 | Not afraid to ask questions | 40%
Would you call your organisation one that takes risks? | Calculated risks | 47%
 | Needs to take more risks | 33%
[How] does your organisation promote creativity and CPS? | Innovation awards | 73%
 | Innovation workshop | 60%
 | Facilitator training | 60%
 | Idea scheme | 27%
What CPS tools are you aware of? | Mind maps | 47%
 | Brain storming | 6%
 | Not aware | 33%
Which ones do you apply in your role? | Brain storming and mind maps | 66%
 | None | 33%
Do you think job roles affect attitudes towards creativity? | Yes | 13%
 | No | 87%
Is knowledge managed well within the organisation? | Knowledge managed poorly, not shared enough | 93%
How could creativity and CPS be improved throughout the organisation? | More awareness of CPS | 60%

Table 4: content analysis of company interviews, CreativeCo (3 respondents)
Question | Key themes: CreativeCo | Frequency
How long have you been in the company? | 3 years + | 100%
What type of problems do you encounter at work? | Communication issues | 100%
What qualities do you think an individual needs to be creative? | Open minded free thinkers | 66%
Would you call your organisation one that takes risks? | Yes | 100%
[How] does your organisation promote creativity and CPS? | Training in creativity | 66%
What CPS tools are you aware of? | Brain storming and brain writing | 66%
Which ones do you apply in your role? | Brain storming and brain writing | 66%
Do you think job roles affect attitudes towards creativity? | Yes | 100%
Is knowledge managed well within the organisation? | Client meetings, intranet | 100%
How could creativity and CPS be improved throughout the organisation? | Research into creativity | 33%
 | Branding creativity | 33%

4.1 Discussion and comparison of CPS findings

The results from the interview data analysis identified that a high proportion of respondents in the engineering company, 40%, had worked there for over 25 years. The respondents from the creative companies had all been there at least 3 years. Common problems encountered at work included communication issues in both company types, in addition to time pressures in EngCo. Additional issues mentioned in EngCo which relate to the complexity of the products and organisation include understanding the project, and following numerous complex processes and procedures. CreativeCo identified their main problems in terms of communicating their work. They often work as contracted creative specialists, so problems include getting the right information from the client and (a common issue) communicating with the various partners on the project. This issue of communicating between disciplines is not restricted to creative domains: there are often conflicts brought about by a lack of understanding where people from different disciplines work together.

Key qualities thought to support creativity include (from both) open mindedness and (from EngCo) a willingness to take risks. This risk factor is an interesting quality of creativity, particularly since it was not mentioned as a factor by the CreativeCo respondents. However, the creative companies all professed to be risk-taking organisations, whereas 47% of EngCo respondents


identified that it takes 'calculated risks', and 33% suggested they need to take more risks. Risk is identified by Isaksen and Lauer [9] as a dimension which promotes creativity; this is supported by the case study findings. There is some conflict regarding the view of risk in small and large organisations: in a smaller organisation, every activity is inherently more risky, since relatively small expenditures represent a much larger proportion of company revenue than in larger companies. Similarly, the inherent risk in a single project is larger if the total number of projects is small. Even with the variance in company size and therefore 'relative risk', risk remains an important factor in creativity and should be recognised as such.

There were a variety of methods in place in EngCo to promote creativity, including innovation awards, an idea scheme, innovation workshops and workshop facilitator training. Two of the three CreativeCo respondents delivered creativity training. EngCo did not formally train people in creative methods. Regarding awareness and use of CPS tools, 66% of EngCo respondents applied brainstorming and mind maps, and the same proportion in CreativeCo applied brainstorming and brain writing. The key difference is that 33% of EngCo respondents were not aware of and did not apply any CPS techniques, whereas all of the CreativeCo respondents were aware of, and used, various CPS techniques.

All CreativeCo respondents believed that job roles affected attitudes towards creativity. One respondent in CreativeCo suggested that "individuals will only fulfil the requirements that their job roles state". The question was included in order to identify whether job roles restrict individual flexibility to operate in new areas, and in doing so restrict the potential for creativity. Our survey does not provide a full answer to this question, but does prompt further investigation into the potential adverse relationship between job roles and creativity. Another response from CreativeCo was that "creativity could be stifled depending upon which specific role you were playing". The majority of EngCo responses indicated that there was no such relationship; some also indicated that creativity is a personal issue. Of the 13% who suggested that job roles were related, one said they were "over-worked with little time to spend on problem solving or thinking of new or creative ways of tackling problems". This view of job roles is the second of three key differences between the two sectors identified through our case study, the first being the attitude to risk.

The third difference relates to knowledge management. CreativeCo responses all indicated that they try to share as much knowledge as possible with employees and clients through mechanisms such as meetings, intranet and forums. 93% of EngCo respondents suggested that knowledge was not managed well, including comments such as "Knowledge management does not seem to be visible"; "Knowledge management is applied poorly"; "there needs to be more interaction between people" and "people are too busy to share knowledge". Whilst this could indicate that EngCo is less effective in its KM practices than CreativeCo, it may also indicate a difference in understanding of what constitutes KM. Organisational structure and culture were cited as strong influencing factors in knowledge management.

In terms of improving creativity, the largest response from EngCo was awareness. This is supported by the finding that 33% of respondents were not aware of CPS techniques.
CreativeCo identified continued encouragement, research, and branding. Branding is closely related to awareness. This indicates that organisations need to adopt and promote creativity and CPS if they are to be effective; they need to be an actively encouraged and supported part of the culture.


5 KM CASE STUDY

A series of interviews was carried out during the KM case study, with 15 EngCo employees and 7 ProdCo employees. Topics covered during the interviews were: background of the project; employee background, roles and duties; "How is KM applied in your role (systems, methods)?"; and "Is there a link between KM and innovation?". Additional questions were asked of the ProdCo respondents, including "how would you rate the innovation / KM in your company", "what are the barriers to innovation", and "how closely linked are innovation and KM?"

5.1 EngCo KM findings

The innovation booster sessions are considered within this case study as the main activities of the innovation office. An innovation booster session is defined by the employees of EngCo as:

• A kind of workshop… using creative problem solving techniques (Head of Innovation).



• A method of exploring a problem or investigating a specific topic with the assistance of colleagues from varying backgrounds (Principal Engineer).

As identified in the case study, important activities include planning and logistics, facilitation, output, and follow-up. Within the scope of these activities, there are several KM tools currently in use, including the company intranet, an internal wiki, Windows SharePoint Services (WSS), and various shared drives. Knowledge sharing is not formally recognised, and there are no reward schemes in place. The knowledge management tools currently in place are not updated regularly, and in some cases are country specific. Not all information can be accessed outside of the innovation office. Capturing and documentation of information during the booster, formal report writing after the booster, dissemination and sharing of the results, and feedback and follow-up sessions all take place, but inconsistently. There are no formal processes in place for measuring the success of the boosters for any of the critical elements (planning, facilitators, output, and follow-up). The most beneficial output appears to be in an intangible form rather than in the form of reports. During the booster, conversations, interaction with other participants, understanding of different perspectives, and personal learning all take place.

5.2 ProdCo KM findings

Respondents stressed that innovation was critical to them but that barriers such as bureaucracy and individual thinking can hinder it. Three of the seven respondents rated their innovation as 'Good'. Regarding knowledge management, the respondents explained that knowledge sharing is vital, and that it is enabled by IT infrastructure, social networking and meetings. They emphasised that rewards for sharing knowledge should be in place, as well as a knowledge sharing culture. Knowledge management was defined differently by all the organisations. Four of the seven respondents rated their KM as 'Good'. The respondents identified a definite link between knowledge management and innovation, suggesting that organisations should ensure knowledge management supports innovative practices. Four of the seven respondents highlighted that knowledge management and innovation are 'extremely related'. One respondent highlighted that knowledge management and innovation are not related in their organisation, stating

'We show very little attempt to innovate and have no clear knowledge management structures'. Workshops were used within ProdCo to promote innovative and technical thinking. Respondents reported various barriers to innovation, including excessive bureaucracy, poor IT infrastructure, insufficient resources, and not having a formal procedure for submitting ideas. Six of the seven respondents reported innovation teams in their organisations. One described 'working groups', formed on a 'need to have basis', as 'the main vehicle for sharing knowledge'. Innovation teams were different in all the organisations, with some being R&D related and others dedicating an entire 'Advanced Technical Centre' to an innovation team; all of the innovation teams were unique in their activities and structure.

5.3 Discussion of KM findings

ProdCo respondents reported a variety of mechanisms for managing innovation, with workshops being a key method. KM was reported to be very closely related to innovation: a supporting function without which innovation would be less effective. The key to the success of the innovation boosters, and ultimately the innovative practices they promote, is to ensure that their planning, follow-ups and the actual meeting itself are effectively managed, measured and monitored. Within EngCo there are potential improvements to every stage.

6 PROPOSED KM FRAMEWORK

Figure 1 shows the proposed KM framework. The framework is presented as 'KM for an innovation team', meaning that it should be applicable to any innovation team, rather than just EngCo. In order to show the changes made to the current process, changed or additional tasks or information sources are shown with red borders.

6.1 Promotion and Continuous Improvement Stage

The innovation team should promote their activities and make the outcomes available across the organisation. The two tasks in figure 1 are linked to WSS and the intranet; however, any accessible platform could be used. The company investigation found that the intranet was not utilised or updated, and that the different innovation managers used separate country-specific network drives rather than WSS, where information could be shared internationally. The industry study found that knowledge management through effective tools was vital for knowledge sharing and communication across the organisation.

6.2 Contact Stage

The contact stage should be led by problem owners, as in the current situation, and also by facilitators. The investigation showed that the limited visibility of the innovation office prevented innovation sessions from being initiated.

6.3 Planning Stage

There are a variety of mechanisms applied during the planning stage. Initially, the problem is assessed for suitability. An internal wiki could be applied at this stage, to search and consult on the problem in order to understand it and to assess its suitability for a booster session. If the problem is suitable for a booster, the problem owner is provided with a 'booster pack' containing instructions, support documents and feedback forms; this documentation is not currently formally defined. A venue is arranged. In assembling the team, the innovation team

should consult the yellow pages to identify appropriate personnel based on the venue, problem type and existing participants. Further work is required to support the matching of personnel with problem type. Currently, the innovation team have access to only a small selection of personnel. The industry study found that one organisation 'would be more efficient in our execution of innovation if we had stronger global links and (IT) systems'. This highlights that other organisations also have IT issues relating to KM and global contacts.

6.4 Booster Session Stage

This activity is presented as the central part of the innovation activity, since it performs two key functions. First, the application of CPS has a direct impact in terms of improving the outcome of the problem (often a product innovation). Second, the meeting itself is a key KM activity. We have described that methods such as direct interaction and networking that include face-to-face social interaction and practical experiences are key to sharing tacit knowledge: such meetings are regarded as the most effective knowledge transfer mechanism. The booster session includes a series of CPS activities. Various CPS tools suitable for an engineering environment have been described in section 3.7. The booster pack should include details of these tools, including best practices and rules of engagement. The industry study found that the purpose of these workshops was to 'promote creative thinking' and 'break down barriers between people and groups'. The 'Advertise Booster' activity has been added to indicate that the booster should be advertised not only as a session which may provide results to a problem but also as a knowledge sharing activity which will promote idea and experience exchanges between participants from varying backgrounds. Additionally, since these are key outcomes of the booster, the company should seek to measure them periodically. The mechanism to support measurement of knowledge sharing is identified as further work.

6.5 Outputs: Follow-Up and Review Stage

Feedback is a critical part of this stage. A feedback activity should take place within the session itself, in order to identify the experiences of the participants regarding the venue, facilitator, CPS tools and quality of the outcome (i.e. the solution to the problem). The problem owner report should also be created at this stage. A template for the report is provided with the supporting documentation (the booster pack). The report is used in two ways: to share the result, and to promote the innovation activities. It should be completed, shared and made widely available. Participant feedback should be sought after a suitable period (thought to be 3-6 months) to investigate the outcome of the meeting itself for all participants: did practice change as a result of the booster session? This activity could take place with randomly selected participants, using a simple online questionnaire format to minimise the administration requirement. As a key activity supported by the innovation office, an attempt should be made to measure the value of the intangible outcome of this activity: knowledge sharing. Facilitators should meet periodically to share experiences and best practices. The company investigation found that reports are not always created, and are rarely shared. The industry study found that it is important to have a great depth of common


knowledge between people, which will lead to innovation and enable them to share this knowledge.

7 VALIDATION

The knowledge management framework was validated using a structured interview with the lead industry participant from EngCo. It was considered that implementing the framework would provide value to the innovation team and the NPI process.

8 SUMMARY

It was suggested in this paper that the creative problem solving workshop, or innovation booster, is itself a key knowledge management mechanism, promoting knowledge sharing through face-to-face communication. Two aspects of that workshop were investigated through an industry case study and comparative studies with external organisations. First, appropriate CPS methods to apply during the session were investigated. Second, the knowledge management methodology and tools were investigated, and a proposal made for KM to support innovation.

Creativity has been identified as a critical element of design and of productivity improvement in manufacturing and engineering companies. Our comparison study identified that creative organisations routinely apply creative methods. Our experience with a large engineering company supports the findings of the Cox Review: creativity practice is limited, and needs to be more widely adopted. There are a variety of important factors regarding the successful application of CPS identified in the literature. Our case study findings indicate that creative companies are better at applying creative problem solving than the engineering company we studied. Whilst this is not an unexpected finding, some of the contributing factors are interesting. For instance, the creative companies deliver specific creativity training, whereas EngCo does not focus at the individual level but instead trains people to facilitate CPS events. A proportion of the EngCo employees were not aware of CPS methods, and therefore had not applied them. This is closely related to the issue of communicating creativity and CPS methods. In a highly planned and structured organisation in which time is booked against specific projects, free time will always be limited. In such a time-limited organisation, there are intuitively two ways to improve the use of CPS methods: either build CPS into the structure, making it a part of the project delivery plan and accounting for the time required, or promote CPS and deliver individual training so that it becomes so ingrained in the work practices of every individual that it is naturally a part of their activities.

A knowledge management framework was developed to support an innovation team. The framework describes the activities and inputs required for the four key steps: planning and logistics, facilitation, output, and follow-up. A flowchart and description of the KM framework are provided. The framework emphasises the importance of knowledge sharing tools to enable communication and updated information, the meeting as a knowledge sharing activity, and the potential to measure the tacit and explicit outcomes of a knowledge sharing activity.

9 ACKNOWLEDGMENTS

The encouragement and expertise of the industrial participants in this project was greatly appreciated.

10 REFERENCES

[1] Cox, Sir George, 2005, The Cox Review. http://www.hmtreasury.gov.uk/independent_reviews/cox_review/coxreview_index.cfm. Accessed 1st October 2008.


[2] Stouffer, W.B., Russell, J.S., Oliva, M.G., 2004, Making the Strange Familiar: Creativity and the Future of Engineering Education. Proceedings of the 2004 American Society for Engineering Education Annual Conference & Exposition, Salt Lake City, USA.
[3] Mauzy, J., Harriman, R., 2003, Creativity, Inc.: Building an Inventive Organization. Harvard Business School Press, Boston, USA.
[4] Rhodes, M., 1961, An analysis of creativity. Phi Delta Kappan, 42: 305-310.
[5] Kirton, M.J., 1961, Management initiative. Acton Society Trust, London, UK.
[6] Maker, C.J., Jo, S., Muammar, M., 2008, Development of creativity: The influence of varying levels of implementation of the DISCOVER curriculum model, a non-traditional pedagogical approach. Learning and Individual Differences, 18(4): 402-417.
[7] McFadzean, E., 1998, The Creativity Continuum: Towards a Classification of Creative Problem Solving Techniques. Creativity and Innovation Management, 7(3): 131-139.
[8] Nayak, A., 2008, Experiencing Creativity in Organisations: A Practice Approach. Long Range Planning, 41(4): 420-439.
[9] Isaksen, S.G., Lauer, K., 2002, The climate for creativity and change in teams. Creativity and Innovation Management, 11(1): 74-86.
[10] Amabile, T.M., 1983, The social psychology of creativity: A componential conceptualization. Journal of Personality & Social Psychology, 45: 357-376.
[11] Gilson, L.L., Shalley, C.E., 2004, A Little Creativity Goes a Long Way: An Examination of Teams' Engagement in Creative Processes. Journal of Management, 30(4): 453-470.
[12] Du Plessis, M., 2007, The role of knowledge management in innovation. Journal of Knowledge Management, 11(4): 20-29.
[13] Hussain, F., Lucas, C., 2004, Managing Knowledge Effectively. Journal of Knowledge Management Practice, May 2004.
[14] Nonaka, I., Takeuchi, H., 1995, The Knowledge-Creating Company. Oxford University Press, USA.
[15] Goh, A., 2005, Harnessing knowledge for innovation: an integrated management framework. Journal of Knowledge Management, 9(4): 6-18.
[16] Apostolou, D., Mentzas, G., 2003, Experiences from knowledge management implementations in companies of the software sector. Business Process Management Journal, 9(3): 354-381.
[17] Li, M., Gao, F., 2003, Why Nonaka highlights tacit knowledge: a critical view. Journal of Knowledge Management, 7(4): 6-14.
[18] Haldin-Herrgard, T., 2000, Difficulties in diffusion of tacit knowledge in organisations. Journal of Intellectual Capital, 1(4): 357-365.
[19] Bröchner, J., Rosander, S., Waara, F., 2004, Cross-border post-acquisition knowledge transfer among construction consultants. Construction Management and Economics, 22(4): 421-427.
[20] Hansen, M., Nohria, N., Tierney, T., 1999, What's Your Strategy for Managing Knowledge? Harvard Business Review, 77(2): 106-116.
[21] Koners, U., Goffin, K., 2007, Learning from Post-Project Reviews: A Cross-Case Analysis. Journal of Product Innovation Management, 24(3): 242-258.

Figure 1: Proposed KM framework.


Invited Paper
Set-Based Design Method Reflecting the Different Designers' Intentions
M. Inoue, H. Ishikawa
Department of Mechanical Engineering and Intelligent Systems, The University of Electro-Communications (UEC), 1-5-1 Chofugaoka, Chofu-shi, Tokyo 182-8585, Japan
{inoue, ishikawa}@mce.uec.ac.jp

Abstract
A previous series of studies proposed a preference set-based design (PSD) method that enables flexible and robust design under various sources of uncertainty. In contrast to traditional design methods, this method generates a ranged set of design solutions that satisfy sets of performance requirements. In this study, a system based on PSD is implemented by combining 3D-CAD and CAE, and the system is applied to a real industrial design problem, i.e., an automotive front-side frame. This paper also discusses the applicability of the system for obtaining multi-objective satisfactory design solutions reflecting the different designers' intentions.

Keywords: Set-Based Design, Design Intention, Flexible and Robust Design, Multi-Objective Design, 3D-CAD

Abstract The previous series of the study have proposed a preference set-based design (PSD) method that enables the flexible and robust design under various sources of uncertainties. In contrast to the traditional design method, this method generates a ranged set of design solutions that satisfy sets of performance requirements. In this study, a system based on PSD is implemented by combination of 3D-CAD and CAE, and the system is applied to a real industrial design problem, i.e., automotive front-side frame. This paper, also, discusses the applicability of the system for obtaining the multi-objective satisfactory design solutions reflecting the different designers’ intentions. Keywords: Set-Based Design, Design Intention, Flexible and Robust Design, Multi-Objective Design, 3D-CAD

1 INTRODUCTION
The early phase of design, called conceptual and preliminary design, contains multiple sources of uncertainty in describing the design; nevertheless, the decision-making process at this phase exerts a critical effect on all design properties. Since the late 1980s, concurrent engineering (CE) has brought new possibilities for realizing faster product development, higher quality, lower costs, improved productivity, better customer value, and so on. Traditional (point-based) design practices obtain a point solution within the solution space and then iteratively modify that solution until it is satisfactory; however, the iterations needed to refine the solution can be very time consuming. In this iterative process there is also no theoretical guarantee that the process will ever converge and produce an optimal solution. In addition, a unique point solution does not express information about the uncertainties caused by many sources of variation. A previous series of studies proposed a preference set-based design (PSD) method that enables flexible and robust design while incorporating the designer's preference structure, in order to resolve these problems of the traditional design methods [1][2]. In contrast to the traditional design methods, this method generates a ranged set of design solutions that satisfy sets of performance requirements. Meanwhile, various computer-based simulation tools, such as 3D-CAD systems and CAE, are widely used in designers' everyday design work and have helped propel the practice of CE. In this study, a system based on PSD is implemented by combining 3D-CAD and CAE. This paper presents the applicability of the system for obtaining multi-objective satisfactory solutions reflecting the different designers' intentions by applying it to a real industrial design problem, i.e., the automotive front-side frame problem.



2 SET-BASED DESIGN METHOD
The PSD method consists of set representation, set propagation, set modification, and set narrowing. Figure 1 shows the procedure of the proposed method.

2.1 Set Representation
The representation and manipulation of engineering uncertainties are of great importance at the early phase of design. To capture the designer's preference structure on a continuous set, an interval set together with a preference function defined on this set, called the "preference number (PN)", is used. The PN is used to specify the design variables and performance requirements, where any shape of PN is allowed to model the designer's preference structure (see Figure 2), as well as the traditional design specifications (e.g., the-larger-the-better, the-center-the-better or the-smaller-the-better). The interval set at the preference level of 0 is the allowable interval, while the interval set at the preference level of 1 is the target interval that the designers would like to meet. Consider a variable $X_i$ ($i = 1, 2, \ldots, m$) defined on the real line $R$, and denote an element of $X_i$ by $x$. Then the quantified PN (QPN), $\tilde{X}_i$ [3], is defined by:

$\tilde{X}_i = Q X_i$  (1)

where

$Q \in \{\exists, \forall\}$  (2)

$X_i = \{(x, p_i(x)) \mid x \in X_i,\ p_i(x): x \to [0, 1]\}$  (3)

The QPN uses an interval set and a preference function $p_i(x)$. In this manner, the designers can incorporate their design intentions into the controllable or uncontrollable variables in defining both the possible design space and the required performance space. The QPNs describing design solutions and performance requirements are here called the "design QPN" and the "performance QPN", respectively.
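To make the PN construction concrete, the following sketch represents a preference number as a piecewise-linear preference function over an allowable interval. This is a minimal illustration under stated assumptions, not the authors' implementation: the class name, the piecewise-linear shape, and the alpha-cut helper are all choices made here for exposition.

```python
import numpy as np

class PreferenceNumber:
    """Piecewise-linear preference function p(x) on an allowable interval.

    Defined by breakpoints (x, p) with p in [0, 1]; the interval at
    preference level 0 is the allowable interval, and the interval at
    level 1 is the target interval the designer would like to meet.
    """

    def __init__(self, points):
        # points: list of (x, preference) pairs, sorted by x
        self.xs = np.array([x for x, _ in points], dtype=float)
        self.ps = np.array([p for _, p in points], dtype=float)

    def preference(self, x):
        """Interpolated preference level for a value x (0 outside)."""
        return float(np.interp(x, self.xs, self.ps, left=0.0, right=0.0))

    def cut(self, level):
        """Interval of x whose preference is >= level (assumes the
        level is attained and the cut is a single interval)."""
        xs = np.linspace(self.xs[0], self.xs[-1], 1001)
        mask = np.interp(xs, self.xs, self.ps) >= level
        return float(xs[mask].min()), float(xs[mask].max())

# Example: a 'the-larger-the-better' preference on a [47, 67] mm domain
width = PreferenceNumber([(47, 0.3), (67, 1.0)])
print(width.preference(55.0), width.cut(0.5))
```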

Figure 1: Procedure of the set-based design method (start; set representation; set propagation, returning to set modification if no possible solution set exists; set narrowing, repeated until the desired degree of preference and robustness is achieved; end).

Figure 2: Designer's preference structure (preference number from 0.0 to 1.0 over a design variable x, showing the target interval / most proper region, the allowable interval, and the 'not best but allowable' region).

2.2 Set Propagation and Set Modification
A set propagation method that combines decomposed fuzzy arithmetic with extended interval arithmetic (i.e., the Interval Propagation Theorem, IPT [3]) is used to calculate the possible performance spaces achievable by the given initial design space. If all the performance variable spaces have common spaces (i.e., acceptable performance spaces) between the required performance spaces and the possible performance spaces, there is a feasible subspace within the initial design space. Otherwise, the initial design space is modified in the set modification process.

2.3 Set Narrowing
If overlapping regions between the possible performance spaces and the required performance spaces exist, there are feasible design subspaces (i.e., not a single point solution) within the initial design space. However, if the possible performance space is not a subset of the required performance space, there also exist infeasible subspaces in the initial design space that produce performances outside the performance requirement. The next step is therefore to narrow the initial design space to eliminate inferior or unacceptable design subspaces, resulting in feasible design subspaces. To select an optimal design subspace out of these feasible design subspaces, robust design decisions need to be made so that the product's performance is insensitive to various sources of variation. The QPN has also been used to define the possible design space by capturing the designer's preference structure. In addition to design robustness, we should take into account which subspace is preferred by the designer. The design preference and robustness are therefore evaluated to eliminate infeasible design subspaces.

2.4 Design Metrics

Measuring design preference
A preference function has been employed to capture the varying degrees of preference of a ranged set of possible design solutions and a ranged set of performance requirements. A performance QPN $\tilde{Y}$ is specified to represent the varying degree of desirability of the performance requirement in performance variable $Y$. A preference function $p_{\tilde{Y}}(y)$ then defines the relationship between the degree of desirability $p$ and the elements $y$ of a ranged set of performance requirements. When the input QPNs of the design variables are related to the performance $Y$, the resulting performance will correspondingly be a possibilistic distribution $q_{\tilde{Y}}(y)$ of the performance $Y$. In this paper, the design preference index (DPI) [4] is adopted to evaluate the performance variation resulting from a range of solutions. Mathematically, the DPI is defined as the expected preference function value of the design performance within the range of design solutions:

$DPI(\tilde{Y}_i) = E[p(y)] = \int_{y_L(0)}^{y_U(0)} p_{\tilde{Y}_i}(y)\, q_{\tilde{Y}_i}(y)\, dy$  (4)

where $y_L(0)$ and $y_U(0)$ are the lower and upper bounds of the interval at the preference level of 0.
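As a numerical illustration of Eq. (4), the sketch below approximates the DPI by trapezoidal integration of the product of the preference function and the possibilistic distribution. The function name and the triangular test shapes are illustrative assumptions, not the authors' code:

```python
import numpy as np

def dpi(p, q, y_lo, y_hi, n=2001):
    """Design preference index (Eq. 4): expected preference E[p(y)] of
    the achieved performance distribution q over [y_L(0), y_U(0)].
    p, q -- callables returning the preference / possibilistic level."""
    ys = np.linspace(y_lo, y_hi, n)
    return float(np.trapz(p(ys) * q(ys), ys))

# Illustrative 'the-larger-the-better' preference and a triangular outcome
p = lambda y: np.interp(y, [0.2, 1.0], [0.0, 1.0])
q = lambda y: np.interp(y, [0.5, 0.7, 0.9], [0.0, 1.0, 0.0])
print(dpi(p, q, 0.2, 1.0))
```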

Measuring design robustness
Although the DPI is a good design metric for evaluating design solutions with possibilistic distributions with respect to the varying degree of preference, it can make incorrect evaluations because it cannot measure the uncertainty of the possibilistic distributions [1]. A new measure of uncertainty has therefore been proposed, called the precision and stability index (PSI) [2]. The PSI can also be used to measure design robustness: it indicates how much of the distribution is close to 0.0 and 1.0. The PSI is developed by modifying Shannon's entropy measure [5] and employing a correction factor [6]:

$PSI(\tilde{Y}_i) = C \sum_{y \in Y} PS(q_{\tilde{Y}_i}(y))$  (5)

where $C = W/A$ and

$PS(q_{\tilde{Y}_i}(y)) = \begin{cases} K - S(q_{\tilde{Y}_i}(y)) & \text{if } 0 < q_{\tilde{Y}_i}(y) < 0.5 \text{ or } 0.5 < q_{\tilde{Y}_i}(y) < 1 \\ K & \text{if } q_{\tilde{Y}_i}(y) = 0 \text{ or } q_{\tilde{Y}_i}(y) = 1 \\ 0 & \text{if } q_{\tilde{Y}_i}(y) = 0.5 \end{cases}$  (6)

$S(q(y)) = -q(y)\ln(q(y)) - (1 - q(y))\ln(1 - q(y)), \qquad K = -\ln(0.5)$  (7)

Here $C$ is a correction factor that gives a correct measure of the uncertainty degrees of subnormal distributions with different heights [6]; $W$ and $A$ denote the width of the interval at the preference level of 0 and the area of the distribution, respectively, and $S(q(y))$ is Shannon's entropy function. The closer the values of the distribution are to 0 and 1, the larger the PSI measures.
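The sketch below evaluates a discrete stand-in for Eqs. (5)-(7) on a sampled distribution. It is a minimal sketch under stated assumptions: the function name is hypothetical, and normalising the discrete sum by the number of samples is an implementation choice made here, not taken from the paper.

```python
import numpy as np

def psi(q, y_lo, y_hi, n=2001):
    """Precision and stability index (Eqs. 5-7): large when the
    possibilistic distribution q is mostly near 0 or 1, small for
    flat, uncertain distributions."""
    K = -np.log(0.5)
    ys = np.linspace(y_lo, y_hi, n)
    qs = np.clip(q(ys), 0.0, 1.0)

    def S(v):  # Shannon entropy term; 0*log(0) is treated as 0
        with np.errstate(divide="ignore", invalid="ignore"):
            s = -v * np.log(v) - (1 - v) * np.log(1 - v)
        return np.nan_to_num(s)

    ps = np.where(np.isclose(qs, 0.5), 0.0, K - S(qs))  # piecewise Eq. (6)
    W = y_hi - y_lo              # width at preference level 0
    A = np.trapz(qs, ys)         # area of the distribution
    C = W / A if A > 0 else 0.0  # correction factor C = W/A
    return float(C * np.mean(ps))
```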

Figure 3: Set-based design system based on 3D-CAD (the PSD calculation system in MS-Excel covers input of design conditions, design of experiments, setting of formulae, the response surface method and the set-based calculation solver; the software automatic cooperation system handles modification of the CAD model and analysis conditions, finite element analysis, checking of analysis results, consideration of form, and the final design).

Measuring design preference and robustness
This study measures the combined preference and robustness of possibilistic distributions, called the preference and robustness index (PRI), by combining the DPI with the PSI. To provide the relative effectiveness among design alternatives, the DPI and PSI are normalized with respect to the maximum of all DPI values and the minimum of all PSI values, respectively. The PRI is obtained by:

$PRI(\tilde{Y}_i) = \left( DPI(\tilde{Y}_i) / \max_{j=1,\ldots,n} DPI(\tilde{Y}_j) \right) \times \left( \min_{j=1,\ldots,n} PSI(\tilde{Y}_j) / PSI(\tilde{Y}_i) \right) = NDPI \times NPSI$  (8)

where NDPI and NPSI indicate the normalized DPI and normalized PSI, respectively. Since more than one performance variable is commonly considered in a multi-objective design problem, the PRIs for the multiple performances need to be aggregated, into what is called the aggregated PRI (APRI), to express the effectiveness of a design alternative with respect to all performances. A family of parameterized aggregation functions based on the weighted root-mean-power is used for the multi-objective decision-making problem [7]:

$APRI_s((PRI_1, \omega_1), \ldots, (PRI_n, \omega_n)) = \left( \dfrac{\omega_1 (PRI_1)^s + \cdots + \omega_n (PRI_n)^s}{\omega_1 + \cdots + \omega_n} \right)^{1/s}$  (9)

By varying the parameter $s$, Equation 9 produces some well-known averaging operators: min, harmonic mean (HM), geometric mean (GM), arithmetic mean (AM), quadratic mean (QM), and max. Finally, the set narrowing method first eliminates infeasible or unacceptable design subspaces that produce performances outside the performance requirement, and then selects an optimal one from the few feasible design subspaces: the one that is most preferred by the designer and provides the best design robustness (i.e., the highest APRI measure).
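The weighted root-mean-power of Eq. (9) is straightforward to implement. The sketch below (function name hypothetical, weights purely illustrative) shows how varying s moves the aggregate between min-like and max-like behaviour:

```python
import numpy as np

def apri(pri, weights, s):
    """Aggregated PRI (Eq. 9): weighted root-mean-power of the PRIs.
    Large negative s approaches min; s = 1 is the arithmetic mean;
    large positive s approaches max."""
    pri = np.asarray(pri, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((np.sum(w * pri**s) / np.sum(w)) ** (1.0 / s))

pris = [0.9, 0.6, 0.8]        # PRIs of three performance variables
for s in (-10, 1, 10):        # pessimistic, mean, optimistic aggregation
    print(s, apri(pris, weights=[1, 2, 3], s=s))
```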

3 APPLICATION TO AUTOMOTIVE FRONT-SIDE FRAME

3.1 Set-Based Design System Based on 3D-CAD
Figure 3 shows an overview of the proposed system, which consists of a PSD calculation system and a software automatic cooperation system. The PSD calculation system is implemented as an add-in program of Microsoft Excel (MS-Excel), written in Visual Basic. A designer can specify the design QPN and the performance QPN directly through the MS-Excel interface or by initiating a special QPN composer. The performances (i.e., possibilistic distributions) achievable by the given input design variables are calculated from the designer's input of the number of decompositions of the input QPN, and the result is automatically displayed in a new sheet.

The software automatic cooperation system operates a 3D-CAD system and analysis software in cooperation with MS-Excel. The system can activate and execute the analysis software and can change the geometric size of 3D-CAD models automatically. In this system, Unigraphics (EDS, Inc.) is used as the 3D-CAD system, and Nastran (MSC, Inc.) is used as the FEM analysis software.

Figure 4: Front-side frame model: (a) published automotive body, (b) published front-side frame, (c) parametric CAD model.

Figure 5: Design variables of the front-side frame model: 1. width of frame; 2. thickness of frame; 3. height of frame; 4. origin position of break point location (OA); 5. thickness of stiffened plate; 6. front point location of stiffener (OB); 7. rear point location of stiffener (OC); 8. height of stiffener (thickness of outer plate: 1.6 mm).

The relationship between the design variables and the performance variables (i.e., a surrogate model) is needed to carry out the PSD calculation. In this paper, the response surface model (RSM) is adopted to build a surrogate model of the actual computer simulation, since it is the most well-established meta-modelling technique and provides closed-form equations as the approximation model. In the RSM, combinations of design parameter values are selected through the design of experiments (DoE) technique, and least-squares regression analysis is used to fit the resulting data with a polynomial function. The value of each design variable is changed by using

DoE, and then the form of the parametric CAD model is changed. The FEM analysis is carried out with the corresponding analysis conditions. These operations are repeated for the number of DoE runs, and the results of the FEM analysis are written into the MS-Excel sheet automatically.

3.2 Setting of the Design Problem
In this paper, the design of an automotive front-side frame is chosen to illustrate the effectiveness of the proposed design method for simultaneously obtaining a multi-objective satisfactory design solution. The front-side frame shown in Figure 4(b) was extracted from the published automotive body structure of 2.0 L displacement [8] shown in Figure 4(a), and the parametric CAD model shown in Figure 4(c) was then created by defining the part sizes representing the form features of the structure. The present study applies the proposed system to the automotive front-side frame by using this CAD model. The purpose of this design is to find the values of the eight design variables shown in Figure 5. Table 1 shows the domains of the design variables, given by the designers. The performance requirements cover five performances, i.e., bending stiffness, tie-down strength, maximum reaction force, average collapse load, and mass.
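As a concrete illustration of the DoE-plus-least-squares loop described in section 3.1, the sketch below fits a quadratic response surface to sampled analysis results. It is a minimal stand-in: the `fea_mass` function and the two-variable design space are placeholders for the real CAD/Nastran loop, and all names are hypothetical.

```python
import itertools
import numpy as np

def fea_mass(width, thickness):
    """Placeholder for one CAD-update + FEM run (returns a mass in kg)."""
    return 0.002 * width * thickness + 1.5

# Full-factorial DoE over two design variables (width, thickness)
widths = np.linspace(47, 67, 5)
thicknesses = np.linspace(1.6, 2.3, 5)
samples = list(itertools.product(widths, thicknesses))
responses = np.array([fea_mass(w, t) for w, t in samples])

# Quadratic response surface: y ~ c0 + c1*w + c2*t + c3*w^2 + c4*w*t + c5*t^2
X = np.array([[1, w, t, w * w, w * t, t * t] for w, t in samples])
coeffs, *_ = np.linalg.lstsq(X, responses, rcond=None)

def surrogate(w, t):
    """Closed-form estimate used in place of the FEM run."""
    return float(np.array([1, w, t, w * w, w * t, t * t]) @ coeffs)

print(surrogate(55.0, 2.0))
```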

3.3 Setting Design Intentions for the Design Variables and Requirement Functions
To verify how the designers' intentions are reflected in the design solutions, the intentions of three different designers are represented as different design QPNs. In this paper, three designers (designers A, B, and C) are defined. Designer A emphasizes the performance, designer B emphasizes the balance between performance and cost, and designer C emphasizes the cost. Figure 6 shows these designers' design QPNs.

The method of setting the designers' intentions is explained here using the width of the frame as an example. The domain of the width of the frame in Figure 6(a) is [47, 67] (mm). First, designer A, who emphasizes performance, defines the interval set at the preference level of 1.0 as 67 mm, the widest frame. As a narrow frame makes it difficult to secure the performance, he/she assigns lower preference levels to narrower widths; the narrowest width of 47 mm is still capable of being set, so its preference level is 0.3. Second, designer B, who emphasizes the balance of performance and cost, defines the interval set at the preference level of 1.0 as [55, 60] (mm), the middle of the width range, and assigns lower preference levels towards both the narrower and the wider side. As a narrower frame has a cost advantage, he/she sets a higher preference level for the narrower side than for the wider side: the preference level of the narrowest width (47 mm) is 0.8, while that of the widest width (67 mm) is 0.5. Finally, designer C, who emphasizes cost, defines the interval set at the preference level of 1.0 as [47, 50] (mm), the narrower end of the range. As a wider frame is not preferable for cost, he/she assigns lower preference levels above 50 mm.

Table 1: Setting of design variables (coordinates: OA, OB, OC).

No. | Design variable                         | Domain (mm)
1   | Width of frame                          | [47, 67]
2   | Thickness of frame                      | [1.6, 2.3]
3   | Height of frame                         | [150, 170]
4   | Break point location (OA)               | [-30, 20]
5   | Thickness of stiffener                  | [1.0, 2.0]
6   | Front point location of stiffener (OB)  | [10, 50]
7   | Rear point location of stiffener (OC)   | [10, 100]
8   | Height of stiffener                     | [5, 30]
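The three width-of-frame intentions described above can be encoded with the PreferenceNumber sketch from section 2.1. The breakpoints follow the preference levels quoted in the text; the preference of designer C at 67 mm is not stated in the text and is set to 0.0 here purely for illustration.

```python
# Width-of-frame design QPNs for the three designers of section 3.3.
designer_a = PreferenceNumber([(47, 0.3), (67, 1.0)])                 # performance
designer_b = PreferenceNumber([(47, 0.8), (55, 1.0), (60, 1.0),
                               (67, 0.5)])                            # balance
designer_c = PreferenceNumber([(47, 1.0), (50, 1.0),
                               (67, 0.0)])  # cost; endpoint assumed

for name, qpn in [("A", designer_a), ("B", designer_b), ("C", designer_c)]:
    print(name, qpn.preference(52.0), qpn.cut(0.8))
```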

Figure 6: Preference of design variables for designers A (emphasis on performance), B (emphasis on balance), and C (emphasis on cost). (a) Design QPN of the frame: width of frame (mm), thickness of frame (mm), height of frame (mm), and break point location (mm). (b) Design QPN of the stiffener: thickness of stiffener (mm), front point location of stiffener (mm), rear point location of stiffener (mm), and height of stiffener (mm).

3.4 Setting Design Intentions for the Required Performances
Figure 7 shows the three designers' performance QPNs. In this paper, the performance QPNs are requirements common to the three designers, and the differences in emphasis between the designers are represented by weighting each performance requirement. Figure 7(a) shows the performance QPN of the bending stiffness: the higher the bending stiffness, the better. Considering the conflicting performances, a bending stiffness below 1.0×10⁴ N/mm is allowable, but its preference level is low because a need to add strength is then expected; a bending stiffness below 0.2×10⁴ N/mm is not admitted, based on past experience. Figure 7(b) shows the performance QPN of the tie-down strength. The expected load is in the range [16, 22] (kN), but a strength above 18 kN is preferable because the planned body mass may increase. Figure 7(c) shows the performance QPN of the maximum reaction force. The lower limit of 3.2×10⁵ N is set to utilize the energy absorption of the crushable zone effectively at the time of a crash; the upper limit of 4.1×10⁵ N is set to protect the cabin. Figure 7(d) shows the performance QPN of the average collapse load. A load above 9.0×10⁴ N is preferable because the frame then absorbs more energy in the first half of the crash; a load below 9.0×10⁴ N is less preferable because the structure of the seatbelt would then need to be adjusted for the passengers.

Figure 7: Preference of required performances: (a) bending stiffness (10⁴ N/mm), (b) tie-down strength (10⁴ N), (c) maximum reaction force (10⁵ N), (d) average collapse load (10⁴ N), (e) mass (kg).

Figure 7(e) shows the performance QPN of the mass: the lighter the mass, the better. A mass below 7,790 g is allowable, but the most lightweight frame in this class is achieved only if the mass is below 5,270 g.

When a design object has various required performances, some performances are weighted more highly than others. To reflect the importance of the required performances in the structure of the front-side frame, the weight of each required performance shown in Figure 7 is classified. Table 2 shows the weighting factors of the required performances. In this paper, three patterns are defined according to the designers' intentions: emphasis on performance, emphasis on the balance of performance and cost, and emphasis on cost.

Table 2: Weighting of required performances.

3.5 Results and Discussion
Figures 8 and 9 show the ranged set of solutions of the design variables and the possibilistic distributions of the performances for designer C, respectively. Figure 8 indicates that all of the ranged sets of solutions of the design variables (solid lines) are narrowed from the initial preferences of the design variables (dotted lines). Figure 9 indicates that all of the possibilistic distributions of the performances (solid lines) are limited to within the required performances (dotted lines). These results show that multi-objective satisfactory design solutions are obtained. The ranged sets of solutions that satisfy the five required performances at the preference level of 0.0 for designers A and B are shown in Table 3. Figure 10 compares, in terms of the relation between the mass and the maximum reaction force, the performance-oriented solutions (designer A), the balance-oriented solutions (designer B), and the cost-oriented solutions (designer C). This result indicates that the balance-oriented solutions lie between the performance-oriented and the cost-oriented solutions.

Figure 8: Preference of design variables (designer C): initial design QPNs (dotted) and design set solutions after narrowing (solid) for the eight design variables.

Figure 9: Preferences of design requirements (designer C): required performances (dotted) and possibilistic distributions (solid) for bending stiffness, tie-down strength, maximum reaction force, average collapse load, and mass.

Table 3: Design set solutions.

Design variables (mm):
Item                            | Design domain | A: Performance | B: Balance   | C: Cost
Frame width                     | [47, 67]      | [57.0, 62.0]   | [47.0, 52.0] | [51.5, 56.0]
Frame thickness                 | [1.6, 2.3]    | [1.95, 2.13]   | [1.95, 2.13] | [1.78, 1.95]
Frame height                    | [150, 170]    | [160, 165]     | [165, 170]   | [155, 160]
Break point location            | [-30, 20]     | [-5.0, 7.5]    | [7.5, 20.0]  | [-5.0, 7.5]
Stiffener thickness             | [1.0, 2.0]    | [1.75, 2.00]   | [1.25, 1.50] | [1.25, 1.50]
Stiffener front point location  | [10, 50]      | [30.0, 40.0]   | [30.0, 40.0] | [20.0, 30.0]
Stiffener rear point location   | [10, 100]     | [55.0, 77.5]   | [32.5, 55.0] | [32.5, 55.0]
Stiffener height                | [5, 30]       | [26.3, 30.0]   | [25.0, 30.0] | [11.3, 17.5]

Required performances:
Item                               | Requirement | A: Performance | B: Balance     | C: Cost
Bending stiffness (10⁴ N/mm)       | above 0.2   | [9.81, 12.09]  | [6.77, 9.06]   | [7.59, 9.73]
Tie-down strength (10⁴ N)          | above 1.6   | [1.79, 1.96]   | [1.90, 2.07]   | [1.72, 1.90]
Maximum reaction force (10⁵ N)     | [3.2, 4.1]  | [3.65, 4.08]   | [3.63, 4.07]   | [3.39, 3.82]
Average collapse load (10⁴ N)      | above 6.0   | [7.17, 8.13]   | [6.91, 7.86]   | [6.23, 7.18]
Mass (kg)                          | below 7.790 | [6.834, 7.569] | [6.365, 7.105] | [5.985, 6.721]

Figure 10: Comparison of the performance-, balance-, and cost-oriented solutions in terms of mass (kg) and maximum reaction force.

In this way, the proposed design method can capture the designers' preference structures and reflect the designers' intentions in their design solutions.

4 SUMMARY
In this paper, the concept of the preference set-based design (PSD) method is introduced, and a system based on PSD is implemented by combining 3D-CAD and CAE. The PSD method is an approach for achieving design flexibility and robustness while incorporating the designers' intentions under various sources of uncertainty. The implemented system is applied to a real industrial multi-objective design problem (i.e., the automotive front-side frame problem) with uncertain parameters in a simulation-based design environment. This demonstrates the potential of the system for obtaining multi-objective satisfactory design solutions that reflect the different designers' intentions.

5 ACKNOWLEDGMENTS
The proposed design method has been a test bed for innovation of product development in the Structural Design and Fabrication Committee of JSAE (Society of Automotive Engineers of Japan). The authors gratefully acknowledge the members of the committee.

6 REFERENCES
[1] Nahm, Y.-E., Ishikawa, H., 2006, Novel Space-Based Design Methodology for Preliminary Engineering Design, Int J Adv Manuf Technol, 28: 1056-1070.
[2] Nahm, Y.-E., Ishikawa, H., 2005, Representing and Aggregating Engineering Quantities for Set-Based Concurrent Engineering, Concurrent Eng, 13(2): 123-133.
[3] Finch, W.W., Ward, A.C., 1996, Quantified Relations: A Class of Predicate Logic Design Constraints among Sets of Manufacturing, Operating and Other Variations, ASME Design Engineering Technical Conference, Irvine, CA, 18-22.
[4] Chen, W., Yuan, C., 1999, A Probabilistic-Based Design Model for Achieving Flexibility in Design, Trans ASME J Mech Des, 121(1): 77-83.
[5] Zimmermann, H.-J., 2001, Fuzzy Set Theory and its Applications, Kluwer, New York.
[6] Luoh, L., Wang, W.-J., 2000, A Modified Entropy for General Fuzzy Sets, Int J Fuzzy Syst, 2(4): 300-304.
[7] Scott, M.J., Antonsson, E.K., 1998, Aggregation Functions for Engineering Design Trade-Offs, Fuzzy Sets Syst, 99(3): 253-264.
[8] National Crash Analysis Center, http://www.ncac.gwu.edu


On the Potential of Function-Behavior-State (FBS) Methodology for the Integration of Modeling Tools
A. A. Alvarez Cabrera, M. S. Erden, T. Tomiyama
Faculty of Mechanical, Maritime, and Materials Engineering, Delft University of Technology, Mekelweg 2, Delft, 2628 CD, The Netherlands
{a.a.alvarezcabrera, m.s.erden, t.tomiyama}@tudelft.nl

Abstract
Current mechatronic products tend to be very complex systems. A design team is necessary to develop such products, and appropriate modeling and design support tools are essential to aid the design team. The Automatic Generation of Control Software for Mechatronic Systems project aims to develop a set of prototype tools and a framework that integrates available modeling tools, in order to support the generation of control software for mechatronic machines. The project contemplates functional modeling as part of this framework. This paper considers the Function-Behavior-State (FBS) model as a base for the functional model, and discusses its potential regarding the integration of modeling tools.

Keywords: Function modeling, function behavior state, model integration, mechatronic systems design

1 INTRODUCTION
The development of mechatronic products brings new challenges for design, because modern mechatronic systems tend to be complex by nature. The design of such systems requires the participation of experts from several domains who cooperate to solve problems from the point of view of their specialties. Appropriate modeling and design support tools are essential to deal with system complexity, and one alternative for support is to accomplish the integration of modeling tools. The Automatic Generation of Control Software for Mechatronic Systems project aims to develop a set of prototype tools and a framework (see Figure 1) that allow seamless integration among available modeling tools, so that an interdisciplinary product development team can (almost) automatically generate control software for mechatronic machines. The project considers functional modeling and reasoning from model information (i.e., qualitative reasoning [1]) as mechanisms to reach the goal of model integration (encircled in clear dash-dot lines in the figure) and to endow the set of models with the information necessary to generate control software. Implementing these aspects seeks to cope with complexity by providing the basis for a complete system model at the most abstract levels, where attaining common understanding is more practical.

The use of functional models can be advantageous for several reasons. First, they provide a way of representing the intention of the designers of the system, both for design and for use. Second, and not less important, functions can represent a system at several levels of detail, which allows the level of abstraction at which the model is viewed to be changed while preserving what we could call the consistency of the model (i.e., the model can still represent the whole system while showing more detail where required). Additionally, functions can model hardware, software, and systems from different domains without distinction. In a sense, functional models come very close to representing the architecture of a system.

The importance of modeling functions for machine and process design was already recognized in the works of Rodenacker [2] and Pahl and Beitz [3]. There, design is



seen as a process of transformation and mapping of information from abstract concepts (i.e., functions and requirements) to concrete descriptions of physical systems that will later allow the system to be manufactured. Thus, design cannot be done without the existence of these abstract concepts that specify what the system is expected to do. Careful documentation and modeling of the functional description is then as necessary as for any other information related to the design.

Figure 1: Architecture of the proposed control software generation framework. Black dash-lined blocks correspond to existing, commercial modeling tools [4].

This paper considers the Function-Behavior-State (FBS) model [5] as a base for the functional model description in the proposed framework, and discusses the potential of such a model regarding the integration of modeling tools.

The FBS model was designed to be part of an integrated framework, but it was not intended to be the backbone of the integration activity, and thus some adaptation is necessary. Some advantages that lead to the choice of FBS are that it:
• clearly separates design intention from the objective relations between components;
• is built to support qualitative reasoning activities;
• has already been implemented in a software tool and tested to some extent (cf. the FBS modeler in [6]).
Another important reason supporting the choice of FBS is that it differs from most system models developed at an early stage of design, which do not aim to prescribe how the systems actually behave [7]. Instead, FBS also aims to simulate the behavior of the system from an objective point of view.

Section 2 presents some basic concepts regarding model integration. Sections 3 and 4 recapitulate the literature about the FBS modeler and other tools that appear in its implementations. The discussion of the potential applications of FBS for model integration, and proposals to realize them, can be found in section 5. Section 6 presents the general integration approach using FBS. Finally, section 7 describes the current progress of this research with the help of a practical example and mentions the next steps. Section 8 presents the conclusions.


2 MODEL INTEGRATION REQUIREMENTS
An integrated modeling paradigm that gives the designers a proper view of the system as a whole at several levels of abstraction, and that keeps track of the current state of the design, is fundamental to attain an integrated design that can cope with the problems brought by complexity [4]. To establish some common ground for the integration of models, the literature proposes some basic requirements:
1. It is necessary to separate the modeler from the solver in order to deal with definitional integration (i.e., of the models) and procedural integration (i.e., integration of the solvers) separately [8].
2. Definitional integration becomes possible when models can be represented in a common language. A conversion of external models to a common language is necessary [8].
3. Procedural integration may be more suitable for situations where the models and their associated solvers are of a diverse nature [8].
4. It is necessary to detect the correspondence of variables between models. This seeks to minimize the human intervention necessary at the detailed levels of the model integration process. Typing schemes offer an alternative to aid in this process [8].
5. Graphical user interfaces and views are crucial to provide model integration support [8].
6. One shared database that contains all the data of the integrated models quickly becomes a bottleneck [9].
7. Modularity, from the point of view of reusability, and the use of model libraries help to speed up the modeling and verification processes [10].


3 FBS FUNCTION MODELING
FBS is a function modeling scheme created to support conceptual design in computer-aided design (CAD) systems [5]. FBS aims to build a functional concept ontology [11]. Most components of the FBS model are based on a process ontology known as Qualitative Process Theory (QPT) [12]. As specified in [11], process ontologies focus on the effects of processes on the attributes of entities, while functional concept ontologies aim to develop models of devices from the subjective perspective of humans. An FBS model (see Figure 2) can be divided into three parts: (1) the functions layer, (2) the behaviors layer, and (3) the states layer. Each layer is connected to the next to form a framework that describes the functionality of a system and how to attain that functionality. The behavior and state representations are based on QPT. All the objects are stored in a knowledge base, which is briefly described in section 4.2. The remainder of this section contains a brief description of the concepts and main ideas of FBS [5], [13]-[15].

Figure 2: Scheme of the FBS model [14]: a functional hierarchy (super- and sub-levels) linked through F-B relationships to the behavior level, and through B-S relationships and views to the state level.

Figure 3: State of a paper weight (adapted from [14]): the entity "Paper Weight" has the attributes mass (1 kg), volume (100 cm³) and density (10 g/cm³), and the relation "on paper" to the entity "Paper".

3.1 State
To define state, the concept of entity must first be introduced. An entity corresponds to an object such as a solid, a gear, or a single tooth of a gear. The choice of an entity depends on the level of detail being modeled. Entities possess attributes that describe them. Lastly, entities are connected to other entities by relations. For modeling purposes, states and entities are treated simultaneously in FBS. A state is defined as "a set of attributes and relations between entities", and thus a state cannot be described without the use of entities. Figure 3 depicts a state, showing several attributes of the entity "Paper Weight" and how it relates to the entity "Paper".

3.2 Behavior
It is first necessary to define physical phenomena in order to ease the explanation of behavior in FBS. Physical phenomena link a group of entities and their relations to physical laws (e.g., Newton's first law) that regulate the changes of attributes and states. These changes are called state transitions. An example of a physical phenomenon is "linear motion", which connects an entity (e.g., a solid body) and its attributes to a law (e.g., F = ma). Physical phenomena are knowledge elements that contain the Behavior-State (B-S) connections among the classes of the objects. Physical phenomena become active or inactive according to a set of enabling conditions, specified by the presence of a set of entities, attributes, and relations. Behaviors constitute objective representations of what a system does. A behavior is defined in FBS as "a sequence of state transitions over time".
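To make the entity/attribute/relation vocabulary concrete, the sketch below encodes the paper-weight state of Figure 3 as plain data structures. This is a minimal illustration only; the class names are hypothetical and unrelated to the actual FBS modeler or KIEF implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """An atomic physical object with named attributes."""
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class Relation:
    """A named static relationship between two entities."""
    name: str
    source: Entity
    target: Entity

# A state is a set of attributes and relations between entities (sec. 3.1)
paper = Entity("Paper")
weight = Entity("Paper Weight",
                {"mass_kg": 1.0, "volume_cm3": 100.0, "density_g_cm3": 10.0})
state = {"entities": [paper, weight],
         "relations": [Relation("on", weight, paper)]}
```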


To model a behavior, it is possible to directly instantiate physical phenomena or groups of them. These instantiations are called physical features. Causality between the involved physical phenomena can also be specified inside a physical feature. Another modeling option is to specify a behavior as a state transition table; an additional tool (described in section 4.1) then searches for and proposes candidate physical features that are able to produce such state transitions.

3.3 Function
The definition of function tends to vary in the field of functional modeling, but many authors agree that a function is subjective in nature and carries the intention of design or use [11], [16]. In FBS, function is defined as "a description of behavior abstracted by humans through recognition of the behavior in order to utilize the behavior". Since the function is abstracted from the behavior, the function alone is not meaningful for representing the system. Therefore, in FBS a function is represented by a tuple of a function symbol and a behavior that can realize the function. Function-Behavior (F-B) relations are established when a function is connected to a physical feature. The function symbol is a text that describes the function in the form "to do something". No further restrictions or guidelines are necessary to describe the function at this level, because the function symbol itself is intended only for human recognition. Functions form a hierarchical structure that results from the decomposition of general functions into more specific subfunctions, forming a function tree [17]. Decomposition of functions is classified as either causal decomposition (i.e., into subfunctions whose execution is causally related) or task decomposition (i.e., the subfunctions can be executed independently of each other). When several functions and F-B relations have been placed in the model, the designer can proceed to connect the entities of different physical features that represent the same object. This is referred to as the unification of entities.

4 EXISTING DEVELOPMENTS RELATED TO FBS
The FBS modeling scheme proposes a framework to model functions. Even though these models are useful by themselves, other methods and tools have appeared along with the development of FBS implementations. These tools are complementary to the FBS modeler and aim to exploit the advantages of the functional model. This section introduces some of the tools that relate most strongly to the model integration goal.

4.1 Qualitative Process Abduction System and Qualitative Process Reasoner
The goal of the Qualitative Process Abduction System (QPAS) [15] is to suggest to the designer physical features that can achieve a behavior, taking as input a description of the desired behavior (by means of a state transition table). The system finds for the designer suitable ways of attaining a certain behavior (i.e., physical features) from a set stored in a database. QPAS also offers a more "stepped" solution by suggesting and instantiating physical phenomena to build a new physical feature "on the fly". After the FBS model has been defined, the Qualitative Process Reasoner (QPR) [14] can simulate it qualitatively to verify that all the phenomena in the behavior network can be executed. With this simulation the system can detect possible "side effects": physical phenomena which are not considered in the modeled behavior network, but which are activated by virtue of their enabling conditions (see section 3.2).
The qualitative reasoning system is based on QPT. A simulation consists


of generating all the possible state transition sequences (behaviors) from the model and comparing them to the desired (modeled) state transitions.

4.2 The Pluggable Metamodel Mechanism
The pluggable metamodel mechanism [18] aims to attain the integration of multiple models in design. Its implementation is the Knowledge Intensive Engineering Framework (KIEF) [6]. KIEF is supported by a knowledge base that stores concepts, including those used for the FBS model [13]. Objects from different modelers, such as FBS or geometric CAD systems, are mapped to the objects of the knowledge base. This mapping is part of the knowledge base and constitutes part of the knowledge about the modeler data. A metamodel of the system is built according to the ontology of the knowledge base. KIEF manages data transfer and consistency between modelers. Other possibilities of KIEF include suggesting modelers for a specific part of the model and creating models in a specific modeler by using information from other models. An application example of this process can be found in [18]. Next, we briefly present the concepts of the physical concept ontology [13] that specifies how to build the knowledge base:
• Entity: represents an atomic physical object.
• Relation: represents a relationship among entities to denote static structure.
• Attribute: a concept attached to an entity; it takes a value to indicate the state of the entity.
• Physical phenomenon: designates physical laws or rules that govern behaviors.
• Physical law: represents a simple relationship between attributes.
All the concepts have a name that can be used to identify them. With the exception of the physical laws, all objects can have supers, i.e., objects from which they inherit properties.

5 DISCUSSION AND PROPOSED IMPROVEMENTS
The previous sections presented the main features of FBS and related tools. The way in which these tools and concepts can be applied to obtain a concise integration of models for a design process is rather apparent and is well documented in the references. This section discusses some details that the authors consider worth developing further. It is not the purpose of this paper to evaluate the performance of the referenced implementations of FBS and its related tools, but to discuss the potential of such developments with respect to model integration and to propose improvements where available. The next subsections analyze aspects that, according to the authors, have room for improvement and will increase the value of the FBS methodology. Section 5.1 presents a metamodel integration paradigm supported by FBS. Sections 5.2 and 5.3 relate to the topic of behavior simulation, and sections 5.4 and 5.5 are related to requirements 2 and 4 in section 2. Section 5.6 relates closely to requirements 5 and 7 in section 2.

5.1 Model integration over a metamodel
The ideas from section 4.2 revolve around mapping objects from different models to a metamodel. In KIEF the metamodel is built mainly by extracting information (e.g., connections between objects) from an FBS model. The metamodel also contains other information related to objects, such as physical phenomena, physical laws, and knowledge about modelers, which are not modeled in FBS but form part of the knowledge base that FBS uses. The

proposal is to use the FBS model directly as the metamodel on which objects of other models can be mapped, by using the entire ontology of KIEF. The "building blocks" of the FBS model must be detailed enough to allow representing the objects used in other models, but at the same time these blocks must act as components that are practical for the user and allow a model to be built quickly. As described in section 7.2, the authors are currently taking the first steps towards building an FBS-based metamodel, so this challenge can be addressed in the near future.
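As a toy illustration of the mapping idea (not the KIEF implementation; all names are hypothetical, and the Entity sketch from section 3.1 above is reused), the sketch below registers an external CAD object against an FBS entity and keeps one shared attribute consistent:

```python
class MetamodelLink:
    """Maps an attribute of an external model object to an attribute of
    an FBS entity, so changes can be propagated in either direction."""

    def __init__(self, entity, attr, external_obj, external_attr):
        self.entity, self.attr = entity, attr
        self.obj, self.ext_attr = external_obj, external_attr

    def pull(self):  # external model -> FBS metamodel
        self.entity.attributes[self.attr] = getattr(self.obj, self.ext_attr)

    def push(self):  # FBS metamodel -> external model
        setattr(self.obj, self.ext_attr, self.entity.attributes[self.attr])

# Hypothetical CAD-side object with a computed volume
class CadSolid:
    def __init__(self, volume_cm3):
        self.volume_cm3 = volume_cm3

solid = CadSolid(volume_cm3=100.0)
link = MetamodelLink(weight, "volume_cm3", solid, "volume_cm3")
link.pull()  # the FBS 'Paper Weight' entity now mirrors the CAD volume
```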

Figure 4: Diagram of the proposed model integration approach [4].

A rough diagram of the idea proposed here can be seen in Figure 4 (a more extensive explanation appears in an earlier work [4]). The FBS model is represented by the system architecture block. The lower part of the figure shows different stages of the design process (labeled "Design x"), represented by sets of models. Each of these models can correspond to single or multiple domains and to different levels of detail. The data correspondence between models is mapped in the metamodel and managed by the pluggable metamodel mechanism. The metamodel contains the information necessary to link a coherent, high-level model of the system with the models mentioned before. The system architecture is not clearly visible in the objects that compose the system; some design rationale must be modeled and communicated to the users, and functional information is used to that end as well.

5.2 Qualitative reasoning
A characteristic of FBS is that it is focused on behavior simulation through computational means. Starting from an initial condition, the qualitative reasoning system generates all the state transitions reachable through the influence of the active physical phenomena. Because qualitative reasoning works with rather incomplete information about the system, all qualitative reasoning algorithms face the problem of combinatorial explosion [1]. These algorithms implement mechanisms to reduce the number of combinations or to filter the results. The current implementation of QPR presents all the results, and thus some decision or filtering mechanism is desirable. As it is, the reasoning algorithm might, for example, instantiate the effects of gravity on every entity in a system. Though this is correct, in some cases the effects of gravity are negligible compared to other phenomena, and the model grows unnecessarily. The next section proposes a partial solution for this.

5.3 Function specification and ontology
The use of the function symbol (i.e., the name of the function) in FBS is restricted to providing an object that can be identified by the user and to which we can link the

behavior information. The function symbol itself carries no meaning in the knowledge base, and the design intention is transmitted to the computer as the framework of connected concepts from the knowledge base. The function symbol carries the most abstract part of the design intention in a function. The proposal is to describe the function symbol in terms of a predefined vocabulary that carries a meaning for the reasoning algorithms in the computer. The algorithm can then use this information to guide QPR towards the phenomena of interest. This does not solve the entire problem of combinatorial explosion, but it may contribute to eliminating a good number of spurious behaviors. In this way, we approach a natural-language-like functional representation [16] with a more formal background for "functional primitives" [19], to facilitate its application in an algorithm. The basic idea is to identify the phenomena of interest as those that manipulate the main kind of energy specified by the function symbol, which in principle should be the largest portion of energy flowing through the system and, therefore, the most representative of the behaviors. As an example of a restricted vocabulary to support function modeling in software, here we consider the work of [20], also mentioned by Chandrasekaran [19]. Although this particular ontology was developed from the device perspective of functional modeling [11], the vocabulary can describe a very broad variety of functions. It can be used in the verb-object format and also supports other constructions for function symbols.

5.4 Function decomposition
Function decomposition is an essential part of the FBS model [14]. Nonetheless, the scope of function decomposition in FBS is more closely related to the ability of the algorithm to suggest physical features which are causally dependent than to giving general guidelines about how a function must be decomposed. Functional decomposition is in itself one of the core activities of the design process: a designer decomposes the required functions, arriving at more concrete descriptions in every step. Having guidelines for performing such a crucial activity helps to formalize it, so that it can be represented in a model in a reproducible way, and to justify the decomposition choice to a certain extent. The current problem can be pictured easily when making a functional description of an existing system such as a permanent-magnet DC motor. In this exercise, discrepancies appear even when the same person performs the functional decomposition of the same system several times (see the left part of Figure 7, after the references). One reason for this is that a particular point of view can be used to decompose a function. On top of that, decomposing functions while remaining in the functional domain is very hard in practice, because at each level of decomposition some concreteness must be added [21], and this points towards a particular solution. In some of the device-centered function models [22]-[24], functional decomposition is achieved by internally following the "flows" that a function processes, similar to how functional block diagrams are created [17]. This decomposition approach is not applicable to FBS because its supporting ontology does not take functions, but processes, as the objects responsible for changes.
The authors propose to use a functional decomposition approach similar to the “zigzagging” presented in the axiomatic design theory [21], where the design parameters help to guide the decomposition process. For each function a corresponding behavior is assigned. As explained in section 3.2, behavior is carried out by physical features. Physical features carry information

415

about the involved processes (i.e., physical phenomena) and structure (i.e., entities and relations). Like this, it can be seen that the model contains functional and design parameter domains similar to those used in the zigzagging decomposition process of axiomatic design. The idea is to use this method to guide the functional decomposition process, and not to consider the details related to the independence axiom of the axiomatic design theory. Looking back at the example in Figure 7, we see that both decompositions can be realized with a different choice of physical features (Figure 7 right), but in the case of the second decomposition the function “ConvertElectric EnergyToRotationalMechanicalEnergy” would be realized by the physical feature “IdealDCMotor” that contains less detail of what happens inside the motor at the second level of the decomposition (it uses the proportional relation between current and torque). The choice of a particular decomposition depends on the models used to represent its features. 5.5 Multiple level modeling and model consistency Simultaneous modeling at several levels of detail is one of the potential uses that the authors see in functional models. By analyzing a function tree it is easy to identify how functions (or the interpretation we make of them) have the property of describing a consistent model of a system while at the same time more detail can be presented for some parts. Here, a consistent model is understood as one that represents the modeled system without leaving any “holes” or unexplained parts in it. For example, a consistent model for a stepper motor might include a detailed dynamic model of the motor, geometric representations, and a “black box” controller model, while other consistent model can detail the controller structure and treat the physical part of the motor as a transfer function (which can be considered almost as a black box). Though the tendency of some users of the FBS modeler is to associate functions to physical features only for functions which are not further decomposed into subfunctions [25], FBS does not impose this restriction. The proposal is to use the F-B relations and the mapping suggested by the pluggable metamodel mechanism at different levels of detail (i.e., different levels of function hierarchy) so that a user can build and view a consistent model of the whole system while looking in detail some parts of the model. 5.6 Model and data standardization Model and data standardization are factors that strongly influence the use and acceptance of a system modeling implementation. This happens because standards are made accessible to more people by the organizations, and also because good standards tend to fill in the needs of industry better than other solutions. This is partially explained by the fact that most standardizing organisms are born from industry. The project in which the present work is carried out is closely related to industry, and thus, the advantages of standardization must be exploited as much as possible, though this is almost always desirable. FBS defines a semantic structure for the knowledge base, but it does not define any data structure for it and it is not restrictive in that sense. The KIEF implementation is programmed in Smalltalk language, and the data of the knowledge bases is specific for that implementation. These choices were driven in part by the origins of KIEF in the research community, where basically the developers are the main users of the implementation. 
Some mention of standards for data representation appears in the literature about the pluggable metamodel mechanism [18]. There, STEP (ISO 10303) is mentioned as an example of a standard data representation that can simplify retrieving data from complex products. The STEP standard is widely used by CAD systems mainly to exchange information about geometry, though the standard allows representation of other information relevant to product design, such as dimensioning, configuration management data, and assembly data. In recent years the extensible markup language (XML) has gained tremendous popularity. XML formatted data can be found in a broad range of applications such as web pages, modeling languages (e.g., UML), and mathematical notation (e.g., MathML). The STEP standard does not fall behind and is currently implementing an XML based representation for its application protocols (i.e., Part 28 XML). XML forfeits characteristics such as terseness in favor of qualities like extensibility, broad applicability, and human readability. Apart from the data representation format, model standardization is also desirable. As an example, most 3D geometry modelers in CAD tools have over the years arrived at an implicit agreement on the available operations. This agreement is also related to the data in the representation models, and allows a user to quickly switch tools and still be able to produce the desired geometry. One standard that is gaining strength in the modeling field is the Unified Modeling Language (UML). Though initially and most broadly used to describe software products, some of its "profiles" (which contain restrictions as well as extensions) are nowadays used to represent business models and real-time systems. The relatively new profile Systems Modeling Language (SysML) [26] seems suitable to represent most of the information used in systems design. It is also worth mentioning that part of the group developing SysML also belongs to the group that develops STEP [26]. SysML has been successfully applied as part of an integrated design platform in works like [27] and [28]. At this point, the proposal is to implement FBS in SysML. This puts FBS on the path of standardization for both data representation and modeling language.

6 INTEGRATION OF MODELING TOOLS
Gathering the ideas from section 5, we present the general approach to implement the integration of modeling tools using an FBS model. The function and behavior layers of the FBS model (cf. section 3) form a metamodel that plays the main role in integration. So far, the proposal mainly addresses definitional integration. The metamodel is based on knowledge about physical concepts. The models, being abstractions of reality, are compatible with such concepts, so a model-independent metamodel can be established. On the other hand, since current tool data and formats are not standardized, additional knowledge about them is necessary to integrate the tools. Using an XML compatible model aims at data compatibility in the future, and this format is already supported by many tools. The objects represented in the models are associated with the objects in the behavior layer. For example, a solid geometry represented in a CAD model can be associated with a "SolidBody" entity. Attributes in the model, like the volume of the solid, can be mapped directly to attributes of the entity. At the attribute level, a network of constraints is built using the laws attached to the phenomena. This network and the state layer may be stepping stones for procedural integration, providing information to coordinate the manipulation of the models.
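To make this association concrete, the following minimal sketch (in Python, with hypothetical names; it does not reproduce the actual implementation) shows how an object from a tool model could be bound to a behavior-layer entity and how a law attached to a phenomenon constrains the mapped attributes:

class Entity:
    """Behavior-layer entity of the metamodel, e.g. 'SolidBody'."""
    def __init__(self, name):
        self.name = name
        self.attributes = {}                 # attribute name -> value

class ToolObjectBinding:
    """Links an object in an external tool model to a metamodel entity."""
    def __init__(self, entity, attribute_map):
        self.entity = entity
        self.attribute_map = attribute_map   # tool attribute -> entity attribute

    def push(self, tool_data):
        # Copy mapped attribute values from the tool model into the entity.
        for tool_attr, entity_attr in self.attribute_map.items():
            if tool_attr in tool_data:
                self.entity.attributes[entity_attr] = tool_data[tool_attr]

# A solid in a CAD model is associated with a 'SolidBody' entity; its volume
# maps directly onto an attribute of the entity (names are hypothetical).
body = Entity("SolidBody")
binding = ToolObjectBinding(body, {"volume": "Volume", "density": "Density"})
binding.push({"volume": 1.2e-4, "density": 7850.0})

# A law attached to a phenomenon constrains attributes, e.g. Mass = Density * Volume.
body.attributes["Mass"] = body.attributes["Density"] * body.attributes["Volume"]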
The models can represent different domains and degrees of detail. The functional layer is related to the models through the behavior layer. In this way, models are linked to a layer where their differences become less relevant. From another perspective, the functional layer also communicates to the user the role of a model in the design, supporting decision making. Diversity in the models' level of detail is addressed by the hierarchical representation of the architecture, both in the function and behavior layers.

7 CURRENT PROGRESS AND FUTURE WORK
The first step was choosing FBS as a base for the functional models to be used in the project, after studying the basics of several developments related to functional modeling that can be found in the literature. Readers should refer to [11] and [19] for a review of functional modeling approaches, and to [29] for an overview of functional reasoning. Currently the authors are working to implement the physical concept ontology of KIEF (which contains the ontology of FBS) in SysML. This is done keeping in mind the ideas of section 5 while paying special attention to the model integration aspects. To illustrate the implementation, we show part of the model corresponding to a permanent magnet DC motor. For the models presented here the authors used the commercial tool MagicDraw UML and its SysML plug-in.

7.1 FBS in SysML
Obtaining a formal description of how to develop an FBS model in SysML is an important first step for the implementation of functional modeling in the framework of the project. To properly understand the SysML objects of the mapping the reader should refer to the SysML specification [26]. A first proposal for such a "mapping" is presented in this section. Italicized terms in the following paragraphs correspond to SysML terminology. Block is the term used for classes in SysML, and is thus used extensively. Most definitions are made at the class level, and can therefore be reused to define instances of the objects that will be part of the actual model.

Entity: An entity can be mapped to the SysML block class (Figure 8.a). In the model, a block for the entity is created by specifying the blocks that correspond to its supers, and then the details are added to the new class. Instances of the class with specific values will be used in the model. The example contains entities such as "rotor", "coil", and "shaft", which in turn are children of the entity "SolidBody".

Attribute: Attributes described by numeric values (like moment of inertia, torque, and angle) can be represented as ValueTypes. Units and dimensions can be defined for a ValueType (see Figure 9). For attributes that correspond to the derivative of other attributes (e.g., acceleration, velocity, and position) a directed association named "derivative" can be placed from the attribute to its derivative (e.g., from position to velocity). For other attributes that describe special conditions of an entity (e.g., matter state), enumerations that contain the set of values/string descriptions (e.g., solid, liquid) can be defined. Attributes are placed directly in the entity classes (the blocks) as properties (SysML uses value properties and other kinds of properties). Values for the attributes can be assigned to the instances.

Relation: Relations can be represented (Figure 8.b) as association blocks that connect the involved entities (blocks). The relations do not appear from the side of the connected entities, but as the type of the memberEnds in the association block. The relation holds a reference participantProperty for each connected entity.
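The spirit of this mapping can be illustrated with plain Python data structures; the classes and property names below are illustrative assumptions, not the SysML profile itself:

from dataclasses import dataclass, field

@dataclass
class ValueType:                 # SysML ValueType with unit and dimension
    name: str
    unit: str
    dimension: str
    derivative: "ValueType" = None   # e.g. from AngularPosition to AngularVelocity

@dataclass
class Block:                     # SysML block defining an entity
    name: str
    supers: list = field(default_factory=list)       # generalization relations
    values: dict = field(default_factory=dict)       # value properties

@dataclass
class AssociationBlock:          # SysML association block defining a relation
    name: str
    member_ends: list = field(default_factory=list)  # participating entities

# Entity hierarchy: 'rotor' and 'shaft' are children of 'SolidBody'.
torque = ValueType("Torque", "NewtonMeter", "Torque")
inertia = ValueType("MomentOfInertia", "KilogramMeterSquared", "MomentOfInertia")
solid_body = Block("SolidBody",
                   values={"ExternalTorque": torque, "MomentOfInertia": inertia})
rotor = Block("Rotor", supers=[solid_body])
shaft = Block("Shaft", supers=[solid_body])
joined = AssociationBlock("Joined", member_ends=[rotor, shaft])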

Physical law: Physical laws are represented by mathematical expressions. The idea is to store the qualitative relations here, so that the QPR can access this information from the metamodel. To represent this information SysML includes a very specific object called a constraint block (Figure 8.c). With constraint blocks, systems of equations can be built by connecting the ports of several blocks in the parametric diagram, linking variables between mathematical expressions. The expressions are placed as constraints in the constraint blocks. Other constraint blocks can also be nested inside a constraint block as constraintProperties. The variables that appear in the expression are defined as constraintParameters of the constraint block. Figure 8.c depicts a constraint relating three parameters.
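To illustrate how such a constraint block could be evaluated once its parameters are bound, consider the following sketch; the names are taken from Figure 5, while the evaluation mechanism itself is an assumption made for illustration:

class ConstraintBlock:
    def __init__(self, name, parameters, residual):
        self.name = name
        self.parameters = parameters   # constraintParameters, e.g. ['T', 'J', 'alfa']
        self.residual = residual       # the expression written as residual f(v) = 0

    def holds(self, bindings):
        # Evaluate the constraint for a set of bound parameter values.
        return abs(self.residual(bindings)) < 1e-9

second_law_rotation = ConstraintBlock(
    "SecondLawOfNewton_Rotation",
    parameters=["T", "J", "alfa"],
    residual=lambda v: v["T"] - v["J"] * v["alfa"],
)

# Binding connectors: attributes of a SolidBody instance supply the values.
solid_body_values = {"ExternalTorque": 0.5, "MomentOfInertia": 0.01}
bindings = {
    "T": solid_body_values["ExternalTorque"],
    "J": solid_body_values["MomentOfInertia"],
    "alfa": 50.0,                      # angular acceleration in rad/s^2
}
print(second_law_rotation.holds(bindings))   # True: 0.5 = 0.01 * 50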

Figure 5: Physical phenomenon representation. Block representation (above) and statements definition (below)

Physical phenomenon: This is one of the most complex knowledge units in FBS. Therefore, special attention is required to map this structure in SysML. A physical phenomenon is also represented as a SysML block (Figure 5). The mapping for the specific parts of a physical phenomenon is as follows:
• Name: As for all UML objects, the block has a name.
• Supers: Supers of a block are represented by a generalization relation.
• Entities: Entities are part properties of the block, typed by the blocks that define the entities.
• Attributes: They can be extracted directly from the related physical laws and entities.
• Physical laws: Defined as constraintProperties of the block.
• Statements: The constraintParameters already tell us which attributes are involved in the physical phenomenon, and they are connected through binding connectors to the attributes (value properties) in the entities (blocks).

The physical phenomenon "1DOFRotation" in Figure 5 binds attributes of "SolidBody" to their corresponding constraint parameters in the "SecondLawOfNewton_Rotation" physical law.

Physical feature: Physical features can be represented as packages that contain instances of the necessary physical phenomena, entities and relations. This limits the use of the physical feature as an object, because it already uses instances, but allows direct contact with the entities inside a physical phenomenon. The feature "ShaftCoupling" (Figure 6) contains a phenomenon "Unification_Rotation" that associates the torque attributes of three assembled entities (motor rotor, coupling, and output shaft) to a constraint that computes the total torque transmitted through the assembly.
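A rough sketch of this structure, under the simplifying assumption that the phenomenon reduces to a single constraint over the torque attributes of the joined bodies (the entities, values and constraint are purely illustrative):

class Instance:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = dict(attributes)

def total_torque(bodies):
    # Illustrative constraint of the 'Unification_Rotation' phenomenon.
    return sum(body.attributes["Torque"] for body in bodies)

shaft_coupling = {                 # the physical feature as a package of instances
    "entities": [Instance("motorRotor", Torque=0.2),
                 Instance("coupling", Torque=0.2),
                 Instance("outputShaft", Torque=0.1)],
    "relations": [("motorRotor", "Joined", "coupling"),
                  ("coupling", "Joined", "outputShaft")],
}
print(total_torque(shaft_coupling["entities"]))   # 0.5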

Figure 6: Physical feature representation in SysML

Function: Functions are represented by SysML activities (Figure 7, left). Modeling of F-B relations is done by allocating functions to the respective features. When modeling, the physical features are placed in the model, and unifying relations are created between entities from different features that represent the same real object. In this way it is possible to create a consistent model from a group of features.

7.2 Future work
The next step is to apply the implementation scheme proposed in section 7.1. The goals of that step are to test whether real systems can be modeled with the proposed implementation, to gradually build a knowledge base, and to refine the required modeling steps. For the modeling steps, special attention must be paid to the way in which the user inputs information into the model. Another aspect to investigate is the choice of appropriate visualization methods for the model. Visualization is important for the understanding and appeal of a model, which strongly influence the decision to use it or not.

8 CONCLUSIONS
FBS has good potential to work as a metamodel onto which other models can be mapped. However, the corresponding information about the modelers must be added. Also, more work has to be done to model software-related aspects, as the work here has so far focused on the representation of physical objects. Though definitive choices about the correspondence for some elements are still to be made, the current work shows that SysML is powerful and flexible enough for building metamodels that support model integration. About the modeling process in SysML it is possible to conclude that, after mapping some components of the physical concept ontology, the authors could verify the flexibility of SysML to represent a wide variety of concepts. Nonetheless, such flexibility can cause difficulties in the choice of mapping for a component or term. Some diagrams, like the parametric diagrams, easily become cluttered when using more than ten blocks or so, and this cannot always be avoided with packaging.

9 ACKNOWLEDGMENTS
The authors gratefully acknowledge the support of the Dutch Innovation Oriented Research Program 'Integrated Product Creation and Realization (IOP-IPCR)' of the Dutch Ministry of Economic Affairs.

10 REFERENCES
[1] Barr, A., Cohen, P. R., 1989, The Handbook of Artificial Intelligence, Vol. 4, Chapter 21, William Kaufmann, Los Altos, CA.
[2] Rodenacker, W., 1971, Methodisches Konstruieren, Springer-Verlag, Berlin.
[3] Pahl, G., Beitz, W., 1988, Engineering Design: A Systematic Approach, Springer-Verlag, Berlin.
[4] Alvarez Cabrera, A. A., Erden, M. S., Foeken, M. J., Tomiyama, T., 2008, "High level model integration for design of mechatronic systems," Proceedings of the IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, Beijing, China, pp. 387-392.
[5] Tomiyama, T., Umeda, Y., 1993, "A CAD for functional design," Annals of the CIRP, 42(1), pp. 143-146.
[6] Tomiyama, T., Umeda, Y., Ishii, M., Yoshioka, M., Kirayama, T., 1996, "Knowledge systematization for a knowledge intensive engineering framework," WG 5.2 Workshop on Knowledge Intensive CAD-1, pp. 33-52.
[7] Derelöv, M., 2008, "Qualitative modeling of potential failures: On evaluation of conceptual design," Journal of Engineering Design, 19(3), pp. 201-225.
[8] Dolk, D. R., Kottemann, J. E., 1993, "Model integration and a theory of models," Decision Support Systems, 9(1), pp. 51-63.
[9] Cutkosky, M. R., et al., 1993, "PACT: An experiment in integrating concurrent engineering systems," Computer, 26(1), pp. 28-37.
[10] Geoffrion, A. M., 1989, "Reusing structured models via model integration," Proceedings of the Twenty-Second Annual Hawaii International Conference on System Sciences, Vol. III: Decision Support and Knowledge Based Systems Track, pp. 601-611.
[11] Erden, M. S., Komoto, H., van Beek, T. J., D'Amelio, V., Echavarria, E., Tomiyama, T., 2008, "A review of function modeling: Approaches and applications," AIEDAM, 22(2), pp. 147-169.
[12] Forbus, K., 1984, "Qualitative process theory," Artificial Intelligence, 24(3), pp. 85-168.
[13] Yoshioka, M., Umeda, Y., Takeda, H., Shimomura, Y., Nomaguchi, Y., Tomiyama, T., 2004, "Physical concept ontology for the knowledge intensive engineering framework," Advanced Engineering Informatics, 18(2), pp. 69-127.
[14] Umeda, Y., Ishii, M., Yoshioka, M., Tomiyama, T., 1996, "Supporting conceptual design based on the function-behavior-state modeler," AIEDAM, 10(4), pp. 275-288.
[15] Ishii, M., Tomiyama, T., Yoshikawa, H., 1993, "A synthetic reasoning method for conceptual design," IFIP World Class Manufacturing '93, Amsterdam, pp. 3-16.
[16] Chakrabarti, A., Bligh, T., "An approach to functional synthesis in mechanical conceptual design. Part I: Introduction and knowledge representation," Research in Engineering Design, 6(3), pp. 127-141.
[17] European Cooperation for Space Standardization, 1999, Space Engineering – Functional Analysis (E-10-05A), (http://esapub.esrin.esa.it/pss/ecss-ct05.htm).
[18] Yoshioka, M., Sekiya, T., Tomiyama, T., 2001, "An integrated design object modeling environment - pluggable metamodel mechanism," Turk J Elec Engin, 9(1), pp. 43-62.
[19] Chandrasekaran, B., "Representing function: Relating functional representation and functional modeling research streams," AIEDAM, 19(2), pp. 65-74.
[20] Hirtz, J., Stone, R., McAdams, D., Szykman, S., Wood, K., 2002, "A functional basis for engineering design: Reconciling and evolving previous efforts," Research in Engineering Design, 13(2), pp. 65-82.
[21] Suh, N. P., 1990, The Principles of Design, Oxford University Press, Oxford.
[22] Stone, R., Wood, K., 2000, "Development of a functional basis for design," ASME Journal of Mechanical Design, 122(4), pp. 359-370.
[23] National Institute of Standards and Technology, 1993, Integration Definition for Function Modeling (IDEF0), (http://www.idef.com/pdf/idef0.pdf).
[24] Wood, W., Dong, H., Dym, C., 2004, "Integrating functional synthesis," AIEDAM, 19(3), pp. 183-200.
[25] van Eck, D., McAdams, D., Vermaas, P., 2007, "Functional decomposition in engineering: A survey," Proceedings of the ASME 2007 IDETC/CIE, Las Vegas, Nevada, USA.
[26] Object Management Group, 2007, OMG Systems Modeling Language (OMG SysML™), V1.0, (http://www.omg.org/cgi-bin/apps/doc?formal/07-09-01.pdf).
[27] Peak, R., Burkhart, R., Friedenthal, S., Wilson, M., Bajaj, M., Kim, I., 2007, "Simulation-based design using SysML: Celebrating diversity by example," INCOSE International Symposium, San Diego, (http://eislab.gatech.edu/pubs/conferences/2007-incose-is-2-peak-diversity/2007-incose-is-2-peak-diversity.pdf).
[28] Tactical Science Solutions Inc., 2007, Quicklook Final Report, (http://www.tacticalsciencesolutions.com/files/05-30-07%20Quicklook%20Final%20Report%20v1.19.pdf).
[29] Far, B. H., Elamy, A. H., 2005, "Functional reasoning theories: Problems and perspectives," AIEDAM, 19(2), pp. 75-88.

Figure 7: Example of two different functional decompositions for a permanent magnet DC motor. Function trees (left) and corresponding physical features (right)

Figure 8: SysML representations for (a) entities, (b) relations, and (c) physical laws


Figure 9: Representation of attributes and definition of units and dimensions in SysML



Function Orientation beyond Development – Use Cases in the Late Phases of the Product Life Cycle

A. Warkentin1, J. Gausemeier1, J. Herbst2
1 Heinz Nixdorf Institute, Germany, Fürstenallee 11, D-33102 Paderborn
[email protected], [email protected]
2 Daimler AG, Germany, Wilhelm-Runge-Str. 11, D-89081 Ulm
[email protected]

Abstract
The importance of a function driven way of working in the field of Electric/Electronic systems (E/E) is increasing. However, the existing methods focus on the development phase. In contrast, we performed a comprehensive use case analysis concentrating on the late phases of the product life cycle. In this paper we describe the results of this analysis by illustrating the main use cases identified. For each use case we present a solution of how to exploit the potential of function orientation. Based on this we will be able to define a concept of a function-oriented representation.

Keywords: Functions, Function Orientation, Product Life Cycle, Manufacturing Process Planning, Manufacturing

1 INTRODUCTION
Modern automobiles incorporate a large number of innovations and are characterized by high complexity, especially concerning Electric/Electronic systems (E/E). There are many functions which are distributed over several components. At the moment, the way of working is oriented towards the components of an automobile. For example, this becomes apparent in the product documentation, which is focused on components. Moreover, the arrangement of the organizational structure in development is influenced by components, and development processes concentrate on components. However, this component driven way of working is not sufficient to deal with the complexity of today's automobiles. In order to meet this challenge, there is an ongoing paradigm shift towards function orientation. Function orientation implies that the functions of an automobile are considered explicitly, i.e. by documenting functions or including functions in development processes. This way it is easier to perceive the interdependencies within a product. Moreover, functions are the most important aspect of a product from the customer's point of view. By having an explicit view on functions it is possible to ensure that these functions are fulfilled at the end of the development process. The use of functions in the early stages of the product development process has often been addressed in recent research. For example, in [1] and [2] the focus is directed to the usage of functions in order to find new product concepts and solutions. We are convinced that function orientation can also generate an additional benefit beyond the development phase, i.e. during manufacturing process planning, manufacturing and usage of a product. Therefore we performed a comprehensive use case analysis at an automotive OEM. The goal of this use case analysis was to identify areas in which a function-oriented representation improves certain tasks and to analyze what a function-oriented representation has to look like in order to support these tasks. So, a use case in our context describes a situation in which a function-oriented representation is helpful. We concentrated on the phases beyond



development. In this paper we describe the results of this analysis by illustrating the main identified beneficial use cases and presenting possible solutions. The initial modeling of the information contained in a function-oriented representation of a product is associated with time and effort. In the same way, the maintenance of this information throughout the product life cycle is labour-intensive. Consequently, it is important to know which elements of a function-oriented representation lead to a benefit in the different phases of the product life cycle. This way it is possible to find an optimum between the effort associated with modeling and updating of information on the one hand and the benefit associated with the usage of this information on the other hand. Current approaches which deal with a function-oriented representation are not concerned with this question. So, there are different approaches to represent a product in a function-oriented way, but they do not answer the question of which elements of the representation have to be modeled and updated throughout the product life cycle. Our use case analysis, by contrast, shows the different benefits resulting from a function-oriented representation in the form of use cases in the late phases of the product life cycle and presents possible solutions. These solutions define the corresponding elements needed from a function-oriented representation. Having this information will enable us to find an optimum concerning the elements to be modeled. Besides, the use cases presented in this paper are assigned to different points in time. This has an impact on the need to update the elements of a function-oriented representation. Consequently, a detailed analysis of the use cases concerning their positioning in the product life cycle will enable us to define the points in time at which the relevant elements have to be updated. The remainder of this paper is structured as follows. Section 2 discusses related work. Section 3 describes the basic terms in our context of function orientation, while section 4 presents several beneficial use cases and corresponding solutions. Section 5 concludes with a summary of the main results and an outlook on an approach

which allows configuring an appropriate function-oriented representation on the basis of desired solutions for certain use cases.

2 RELATED WORK
In this section, relevant design methodologies in the context of a function-oriented representation are discussed. In the past decades, the utilization of functions has become an important part of several general design methodologies, e.g. [1], [3], [4] and [5]. In the following we describe some of these methodologies and mention the basic concepts concerning function orientation contained in them. Axiomatic Design [3] aims at guiding the decision-making process within the development of new products by means of two axioms. Here, SUH defines several levels of abstraction, the so-called domains, which are used to handle the complexity of a design task. In the context of this paper, the customer domain, the functional domain and the physical domain are important. The customer domain is composed of customer needs. The functional domain contains functional requirements which are derived from customer needs, whereas the physical domain comprises relevant design parameters. The elements contained in one of these domains are mapped to elements in the following domain. Another basic concept used in Axiomatic Design follows from the complexity of products: to describe the functional aspects, the definition of a single function is not sufficient. Therefore, functions are decomposed into subfunctions using decomposition relations. This decomposition leads to a function hierarchy or function tree. PAHL and BEITZ [1] defined another well-known methodology aiming at the development of new products. Here, functions are used to find appropriate solutions. In this approach, functions express desired relations between inputs and outputs within a system. Thus, apart from the decomposition relations, functions are also connected with flows of energy, material or information. This is expressed with the term function network. Moreover, PAHL and BEITZ define five general function classes. The development of such a standard vocabulary to describe functions (also known as a function taxonomy) has been addressed by several approaches, e.g. [4], [6], [7], [8]. An overview can be found in [8]. The objective of such taxonomies is to establish a universal language to facilitate communication during the design process and to simplify the search for appropriate solutions. Another example of utilizing functions in the design process can be found in the specification technique for the description of the principle solution of self-optimizing systems as shown in [5]. This specification technique consists of several partial models which represent different aspects of the system to be developed. A function hierarchy is one of these partial models. The functions contained in the hierarchy are developed from defined requirements and are used to derive solution patterns. In the area of automotive E/E there are also several methods to describe systems with consideration of functional aspects, e.g. [9], [10], [11] and [12]. These methods adapt the basic concepts mentioned before to the description of E/E-systems. Here, the utilization of several levels of abstraction as shown in Axiomatic Design is also widely accepted. In [9] a specification technique for the description of automotive E/E-systems in the design phase is defined. It consists of three levels of abstraction.
In the first level, among other things, the expected functions from the customer's point of view are described. The second level,

called the logical architecture, comprises functions on a logical level. The third level, the technical architecture, contains information concerning the technical realization, subdivided into software and hardware. A similar approach for the development of automotive E/E-systems is described in [10]. Here, functions are described as they are perceived by the user. These functions are mapped to software and hardware. The approaches described in this section provide a basis for the function-oriented representation of automotive E/E-systems. However, these approaches focus on supporting the development phase within the product life cycle, and they do not deal with the question whether a function-oriented representation of the product should be continuously documented and maintained, and up to which point in the product life cycle this should be done. In contrast, the goal of this work is to answer the questions of how to utilize and profit from a function-oriented representation of automotive E/E-systems in the following phases of the product life cycle, how to adapt such a representation for this purpose, and what the documentation process for the function-oriented representation should look like. Therefore, we performed the comprehensive use case analysis presented in this paper.

3 BASIC CONSIDERATIONS
In this section, we introduce terms which are used in the remainder of this paper. These terms are based on certain approaches in the field of automotive E/E described in section 2. In the remainder of this paper, we use three levels of abstraction and corresponding terms that are based on the approaches described in [9] and [10]. In the first, most abstract level, functions are presented as they are perceived by the user or customer. This also includes a high-level description of the expected behaviour of an E/E-system. On this level, functions are independent of realization details. This level is called the user level. To represent the user level, function hierarchies are often used. Figure 1 shows an exemplary function description on the user level. The function "control tire inflation pressure" is decomposed into two subfunctions which are directly perceivable by a user. These functions are independent of realization details.

Figure 1: Exemplary User Level.

In contrast to the user level, the second level concentrates on the way the functions are realized on a logical level. Therefore, this level is called the logical architecture or design level. Here, the description of functions is more detailed. The logical architecture contains a decomposition of functions and information concerning the in- and outputs on a logical level. Another important issue is the description of the behaviour of a function, e.g. via a state transition process. Figure 2 shows an exemplary logical architecture which concretizes the functions shown in Figure 1. It becomes obvious that a logical architecture also contains functions which are not perceivable by the user, for example the function "capture tire inflation pressure". Moreover, Figure 2 shows that a logical architecture contains assumptions concerning the realization of functions, as the illustrated functions describe only one possible solution. The function "warn of a pressure loss" could for example also be


realized by comparing the number of rotations between the left and the right tire. In this case, the logical architecture would be different, whereas the function shown in Figure 1 would be the same for both possible solutions.

Figure 2: Exemplary Logical Architecture.

The third level describes the technical details of E/E-systems. Therefore, this level is called the technical architecture. The technical architecture consists of the hardware and software architecture. The hardware architecture includes the physical components of an E/E-system, above all actuators, sensors and control units. The software architecture describes the software components of an E/E-system. In our context, the relations between functions of a logical architecture and elements of the technical architecture, i.e. hardware and software components, are important. These relations describe which parts, i.e. hardware and software components, contribute to the fulfillment of the related function. They are called mapping relations. There is a wide range of possible levels of detail concerning the modeling of mapping relations, i.e. the target of a relation can be on different levels of the logical architecture or technical architecture, respectively. For example, a function can be related to a control unit. A more detailed relation could link a certain information output of a function to a physical connection between hardware components.
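A minimal sketch of how these levels and the mapping relations could be captured as data, using the tire inflation pressure example from Figures 1 and 2 (the structure is an illustrative assumption, not the representation used at the OEM):

from dataclasses import dataclass, field

@dataclass
class Function:
    name: str
    subfunctions: list = field(default_factory=list)   # decomposition relations

@dataclass
class Component:                  # hardware or software element
    name: str
    kind: str                     # e.g. 'sensor', 'control unit', 'software'

@dataclass
class MappingRelation:            # which parts contribute to which function
    function: Function
    components: list

warn = Function("warn of a pressure loss")
show = Function("show tire inflation pressure")
control = Function("control tire inflation pressure", subfunctions=[warn, show])

pressure_sensor = Component("tire pressure sensor", "sensor")
ecu = Component("tire pressure monitoring control unit", "control unit")
mapping_relations = [
    MappingRelation(warn, [pressure_sensor, ecu]),
    MappingRelation(show, [pressure_sensor, ecu]),
]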


4 USE CASES
In this section several beneficial use cases identified in our analysis are described and corresponding possible solutions are presented. The first use case is located in manufacturing process planning and deals with the prioritization of functions to be tested in manufacturing. Another use case focusing on manufacturing process planning is the specification of test cases for functional testing. The third use case deals with the extraction of compatibility information for certain purposes in manufacturing and usage. The fourth use case focuses on the capture of customer feedback during the usage of a product, and the last use case describes the update of functions.

4.1 Prioritization of functions to test
Functions on the user level represent the customer's view on a car. Thus, testing functions is the direct way to ensure the functional aspects of a car's quality from the customer's perspective. Via testing of functions in manufacturing it is possible to assure that functions are fulfilled at the end of the manufacturing process. Moreover, the high number of variants of modern automobiles increases the importance of testing functions during manufacturing. The following examples illustrate the high number of variants: Audi states that there are 10²⁰ possible configurations, at Daimler there are 10²⁷ possible configurations, and at BMW 10³² [13], [14]. Therefore, only a restricted percentage of the possible configurations can be tested in the development phase. Testing functions in each possible configuration would result in an unreasonable effort. Moreover, this effort is unnecessary, as not each possible configuration is actually ordered. This shortcoming can be resolved by additional testing of functions during manufacturing, as the tests are applied to a customer's car, i.e. to a particular configuration.

On the one hand, testing of functions during manufacturing is important, as we have mentioned. On the other hand, this testing of functions causes a high effort, as there are more than 2000 functions in a car [15]. It is not feasible to test all of these functions during manufacturing. Therefore, there is a need to prioritize the functions to be tested on the basis of defined criteria. In order to find a solution for prioritizing functions, the failure mode and effects analysis (FMEA) and the field of risk management are helpful, as these approaches address a similar issue. In FMEA and risk management, the following factors are relevant: probability of a failure and consequences of a failure [16], [17], [18]. In FMEA, the detectability of a failure is additionally taken into account. Thus, according to these approaches, the following influencing factors have to be taken into consideration in order to prioritize functions to be tested:
• Severity of the consequences caused by a failure in a function: This factor describes the seriousness of the consequences that result from a defective function from the customer's point of view.
• Probability of a failure in a function: This factor describes the likeliness of a failure to occur in a function.
• Probability of detecting a failure (detectability): This factor describes the likeliness of finding a failure before a product arrives at the customer.

The combination of these three factors leads to the prioritization of functions to be tested. There are several ways to determine values for the three factors. The first alternative is to estimate values in a subjective manner on the basis of the knowledge of experts. Thus, it is possible to prioritize functions without a comprehensive basis of information concerning functions, e.g. information about the mapping relations between components and functions. Only a documentation of the functions of an automobile on the user level is needed for the estimation of values in a subjective manner. Moreover, a documentation of the logical architecture might be helpful, as it provides a better insight into the consequences of a failure of a function. Here, the effect of a failure in a function on other functions becomes transparent. Another way is to determine or calculate estimated values for severity, probability and detectability on the basis of detailed information, as shown in Figure 3. The following examples shall deliver an insight into the possible information that could be taken into consideration. For an estimation of severity, criteria like the safety relevance and the importance of a function from a customer's perspective can be helpful. The safety relevance specifies whether there is a hazard when the considered function is not fulfilled. Consequently, this is very important information for prioritizing functions. Furthermore, the importance of a function for the customer should be regarded. Thus, a documentation of functions on the user level, including values for these criteria for each function, would be helpful for the estimation of severity. For a determination of the probability of a failure in a function it is helpful to take, among other things, the complexity and error rates of related components into consideration. Complexity can for example be estimated on the basis of the number of hardware and software components that are necessary to fulfill the considered function. Here, information like the lines of code (LOC) of participating software can give an additional hint concerning the complexity of a function. Moreover, existing information regarding error rates of the components related to the considered function improves the determination of the probability of a failure. To sum it up, information about mapping relations between components and functions is important for a determination of the probability of a failure in a function.

The probability of detecting a failure is influenced by many criteria. In the context of our use case analysis, i.e. at the OEM, the detectability is especially influenced by the ability to test the physical connections between the components that are related to the considered function. The reason is that in manufacturing the testing of connections between components dominates. With the testing of single components and of connections between components there is a kind of implicit testing of functions. So, if connections between components that contribute to a function are not testable, the considered function cannot be tested implicitly. Consequently, the probability of detecting a failure in this function is quite small when tests are limited to connections. The ability to test physical connections is determined by the type of the involved components and the corresponding connections. Consequently, information about the mapping relations between functions and components, and especially the physical connections, is helpful for determining detectability.
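A minimal sketch of the combination step, analogous to the risk priority number used in FMEA; the 1-10 scales, the multiplicative combination and the example ratings are assumptions made for illustration:

def test_priority(severity, failure_probability, non_detectability):
    # Higher score -> higher priority for functional testing in manufacturing.
    # non_detectability rates how unlikely it is that a failure is found by
    # the existing component and connection tests (10 = very unlikely).
    for factor in (severity, failure_probability, non_detectability):
        assert 1 <= factor <= 10
    return severity * failure_probability * non_detectability

priorities = {
    "warn of a pressure loss":      test_priority(9, 4, 7),   # safety relevant
    "show tire inflation pressure": test_priority(4, 4, 3),
}
ranked = sorted(priorities.items(), key=lambda item: item[1], reverse=True)
print(ranked)   # functions ordered by testing priority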

Figure 3: Influencing Factors for Prioritizing Functions.

The combination of the values for severity, probability and detectability for the function in question leads to the determination of its testing priority.

4.2 Specification of test cases
As we pointed out, there is a need to test functions during manufacturing. In order to execute tests, the corresponding test specifications have to be derived. There are several methods to test technical systems, depending on the objective of the testing. In our context, the objective of testing is to ensure that functions are fulfilled at the end of the manufacturing process. So, out of the existing testing methods, functional testing has to be used and corresponding specifications have to be generated. Functional testing means that a stimulus is created and acts on the tested automobile. Afterwards, the real response is observed and compared to the to-be response. Consequently, information about stimuli, preconditions and to-be responses is needed to specify a test (Figure 4). Preconditions can be further subdivided into conditions that have to be fulfilled at the beginning of a function and conditions that have to be fulfilled throughout the whole execution of a function. No information concerning the internal design of functions is needed for functional testing. Consequently, functional testing is also known as black-box testing [19].

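The information elements of Figure 4 could, for example, be captured in a simple data structure from which concrete test specifications are then selected; the field names and values below are illustrative assumptions:

from dataclasses import dataclass, field

@dataclass
class FunctionTestSpec:
    function: str
    stimulus: str                  # one consistent choice if several stimuli exist
    preconditions_start: list = field(default_factory=list)   # hold at the start
    preconditions_during: list = field(default_factory=list)  # hold throughout
    to_be_response: str = ""

spec = FunctionTestSpec(
    function="warn of a pressure loss",
    stimulus="set simulated tire inflation pressure below threshold",
    preconditions_start=["ignition on"],
    preconditions_during=["vehicle stationary"],
    to_be_response="warning indication active on instrument cluster",
)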

Figure 4: Information for Specification of Function Tests.

In general, the testing of functions can be executed manually or automatically, or via a combination of both. Manual execution means that a person initiates a certain stimulus and checks the response of the automobile. This method is characterized by a high congruence with reality, i.e. functions are tested just like they are used by customers [20]. However, manual testing is time consuming and not always reliable because of the probability of human errors. Automatic execution is achieved without any intervention by a person. Thus, there is sometimes the need to manipulate stimuli, e.g. to simulate that a button was pushed. This leads to a smaller congruence with reality in comparison to manual testing. However, automatic execution of testing is less time consuming and more reliable than manual execution [20]. Because of the specific advantages of executing tests of functions in a manual and in an automatic way, there is a need to support both methods. There are several ways to generate a specification for a test of a function. First of all, a specification can be created manually from a documentation of functions on the user level. To complete a specification, stimuli, preconditions and the to-be response must be defined (see Figure 4). The specification of tests can be supported by integrating this information into a function-oriented representation. A function would be described by stimuli, preconditions and the to-be response. To derive a specification for a test from such a function-oriented representation, a consistent selection out of this information has to be made. For instance, if a function can be initiated through several stimuli, one stimulus has to be chosen and integrated into the test specification. Especially the specification for the automatic execution of tests can be simplified by additional technical details concerning the elements shown in Figure 4. This is illustrated by the following examples. Stimuli, preconditions and to-be responses of a function could be detailed by specifying corresponding signals in the logical architecture. This way it is possible to generate a test in which a function is initiated by sending a certain stimulus in the form of a signal. The to-be response would be observed by controlling the corresponding signal. Additional help for specifying tests is also offered by considering information about the mapping relations between components and functions. That way it is possible to use characteristic properties of components for defining a functional test. For example, by knowing the current consumption of a component related to the response of a function, it is possible to specify a test in which the to-be response is detected by observing the current drain.

4.3 Extraction of compatibility information
The function range of modern vehicles is achieved by an interaction of many components. Therefore, it must be ensured that the components contained in a vehicle are compatible with each other as a whole, so the knowledge about compatibility must be available. With this knowledge it can be ensured over the product life cycle that the components used in an assembly are compatible, for example during manufacturing. Here, knowledge about compatibilities is essential for assuring that the components mounted in an automobile are compatible. A further example is the case of an error during the usage of an automobile.
If one or several faulty components must be exchanged for newer versions, or new software has to be brought in, a new configuration arises as a


result. So, after the elimination of errors it must be assured that the components are compatible with each other again [21], [12]. This can possibly require further measures, for example the exchange of further, non-defective components or the installation of new software. Again, knowledge about compatibilities is needed. The derivation of knowledge about compatibilities causes an effort which should be minimized. Compatibility is affected by different factors. Apart from non-functional aspects like mounting constraints or physical characteristics of components, functional criteria, and thus the view on the functions realized by the components, also play an important role. Thus a function-oriented representation is helpful for the determination of the compatibility of components. In [22] it is shown that two scenarios are important in the field of compatibility: compatibility concerning replacement (replaceability) and interaction compatibility. Replacement compatibility focuses on the versions of a single component. More precisely, replacement compatibility means that different versions of components are exchangeable. Interaction compatibility implies that the components within a configuration cooperate faultlessly and don't exclude each other [22]. There are several ways to determine replaceability concerning functional aspects. A first estimation can be given by comparing the functions related to the versions of the considered component with the help of mapping relations. A version of a component can supposedly be replaced by another version if the latter contains all the functions of the former. Obviously, this method offers only a hint concerning replaceability. In [21], functional compatibility is subdivided into structural and behavioral compatibility. Considering these two aspects brings more significance to the estimation of replaceability. Structural compatibility looks at the in- and outputs or signals, respectively. So, a documented logical architecture has to be taken into consideration. Here, several levels of abstraction, from an abstract signal delivered by a function to the corresponding concrete signal on a bus, can be taken into account. In particular, criteria of signals like the type or unit are used. An analysis of these criteria for the functions related to different versions of the component in question leads to a statement concerning structural compatibility. Behavioral compatibility focuses on behavioral aspects which are visible to the environment [21]. So, a statement concerning the behavioral compatibility of different versions of a component can be given by comparing the behavioral aspects of the related functions. In order to analyze interaction compatibility, similar to the determination of replaceability, structural compatibility has to be considered. To ensure structural compatibility, the in- and outputs of functions in a logical architecture within a configuration have to be consistent. In particular, there have to be corresponding outputs for all required inputs. Moreover, the in- and outputs must fit each other. Again, several levels of abstraction from an abstract signal to the corresponding concrete signal on a bus can be taken into consideration.
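The first functional estimation of replaceability described above can be sketched as a set comparison over the functions obtained from the mapping relations (illustrative only; as noted, structural and behavioral checks must follow):

def may_replace(functions_of_new_version, functions_of_old_version):
    # The new version must provide at least the functions of the old one.
    return set(functions_of_old_version) <= set(functions_of_new_version)

old_version = {"warn of a pressure loss"}
new_version = {"warn of a pressure loss", "show tire inflation pressure"}
print(may_replace(new_version, old_version))   # True: candidate for replacement
print(may_replace(old_version, new_version))   # False: a function would be lost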
4.4 Capture of customer feedback
The number of functions in an automobile increases more and more. This trend is accompanied by a rise in development effort and complexity, and by corresponding disadvantages like an increase in potential error sources. Therefore, it is very important to concentrate on the development of functions that are actually perceived and required by the customer. This way it is possible to avoid an unnecessary increase in the number of functions [23]. Consequently, feedback concerning functions from the


customer's perspective during usage must be gained. This feedback is especially useful as it is based on experience with real automobiles in a common environment. Having this knowledge it is possible to improve the functional aspects of new releases [24] or of a new model series, as shown in Figure 5. Feedback information related to a certain release of a particular model series can be gained during usage and utilized for an improved development of the next release or of another model series.

Figure 5: Integration of Customer Feedback.

In relation to feedback concerning functions, the following aspects are relevant: importance of functions, satisfaction with functions, issues concerning functions and demand for new functions. To analyze the importance of functions from the customer's perspective two general methods can be used: questioning customers and utilizing feedback functionalities embedded in a product. There are several approaches to analyze the importance of product properties by questioning customers. For instance, the ranking method is such an approach. Here, customers are asked to arrange properties of a product in a ranking depending on their importance. Pairwise comparison, constant-sum scales and rating scales are further examples of approaches to analyze the importance of product properties by questioning customers [25]. These approaches can also be used to analyze the importance of functions. A documentation of functions on the user level is the basis for such an analysis. Moreover, the values for the importance of functions should be contained within this documentation. However, the approaches mentioned are not sufficient in the case of a great amount of product properties [25]. As we mentioned above, the number of functions in an automobile is enormous. Consequently, the utilization of the approaches mentioned for analyzing the importance of functions is problematic. An alternative method to analyze the importance of functions is to use feedback functionalities embedded in a product. Here, the utilization of functions is monitored and analyzed automatically. In particular, the following aspects can be captured: frequency and possibly intensity and duration of function usage [24]. On the basis of this information it is possible to derive a statement concerning the importance of functions. The existence of software and sensors in the product is a precondition for this method [24]. This precondition is fulfilled in modern automobiles. For enabling this method it is important to have technical

information concerning function monitoring. For example, information about the stimuli of a function and about the possibility to detect these stimuli is necessary. Apart from the importance of functions, the satisfaction with functions is also an important issue related to customer feedback. There are several methods to measure customer satisfaction. So-called objective methods derive a conclusion concerning customer satisfaction on the basis of aggregated indicators like turnover or market share. However, these indicators are influenced by many determining factors apart from customer satisfaction [25]. Moreover, the level these indicators focus on is too coarse-grained for a statement concerning customer satisfaction with functions. The so-called subjective methods are more suitable for measuring customer satisfaction. Subjective methods are based on individual customer satisfaction judgements [24]. Customer satisfaction is usually analyzed with the help of customer surveys - either by satisfaction scales or by measuring the fulfillment of expectations [25]. These methods can also be used to analyze customer satisfaction with functions. A documentation of functions on the user level is the basis for such an analysis. However, because of the enormous number of functions, the utilization of these methods is problematic. Another important aspect of customer feedback is the capturing and documentation of issues related to functions. These issues include, for example, reports about failures of functions, about handling problems during the usage of functions, or suggestions for improvements. These issues should be linked to the corresponding functions on the user level. In this way it is possible to identify problems related to functions and to find potentials for improvements. In [24] and [26] it is stated that customer feedback can also be used to identify new requirements. So, in our context, the demand for new functions should be derived from customer feedback and integrated into the function-oriented representation.
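A sketch of how such monitored usage data might be aggregated into an importance indicator; the log format, the example entries and the indicator itself are assumptions made for illustration:

from collections import Counter

usage_log = [                      # (function, duration of use in seconds)
    ("show tire inflation pressure", 12.0),
    ("show tire inflation pressure", 8.5),
    ("seat heating", 600.0),
]

frequency = Counter(function for function, _ in usage_log)
total_duration = Counter()
for function, duration in usage_log:
    total_duration[function] += duration

for function in frequency:
    # Simple indicator: how often and for how long a function is actually used.
    print(function, frequency[function], total_duration[function])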

quired hardware and software with information about the appropriate variants and versions. Moreover, the required variant coding is to be determined. Thus, information about the corresponding variant coding for a function is helpful. The required activities for every new function are defined via a comparison of the required configuration and the actual configuration. Regarding software, the following activities can become necessary: an exchange, parameterisation or variant coding. Concerning hardware, a replacement or an addition of hardware might become necessary. After the execution of the activities for updating the functions desired by the customer it is reasonable to document the new composition of the automobile. This documentation includes the modified function-oriented representation and configuration of the automobile.

4.5 Update of functions Production series are developed further also after the beginning of the series production. Thus, new functions are integrated into vehicles during the production period of a production series. The increasing share in electronics and primarily software offers the potential to update a vehicle already produced with the new functions with relatively low effort. In this way it is possible to increase customer satisfaction and customer binding. To enable the update of new functions it is necessary to know which functions have been added during continued development in comparison to the automobile to be enlarged. Moreover, activities that have to be performed for the update must be identified. Examples of such activities are the application of new software or the exchange of components. Thus, the procedure to enable an update of functions consists of several steps as shown in Figure 6. In the first step new functions are identified by a comparison of the current functions on the user level with the functions at the time of the production of the relevant automobile. In the second step the activities which must be carried out for the realization of the desired new functions are identified. The identification of the activities can be done either on the basis of expert knowledge or on the basis of detailed information about the mapping relations between functions and software, functions and components and so on. This information has to be documented in the functionoriented representation. For every new function the configuration of the automobile required for the realization is determined. Among other things, this includes the re-

Figure 6: Procedure for Updating of Functions. [Flow chart: identification of new functions; identification of required activities; determination of required configuration; determination of actual configuration; definition of required activities (software: exchange, parameterisation, variant coding; hardware: replacement, addition); execution of activities; documentation of new composition.]
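As a rough illustration of the comparison step in Figure 6, the following sketch in Python derives the required update activities from the difference between the required and the actual configuration. All component, software and variant-coding names are hypothetical, and the replacement and parameterisation activities are omitted for brevity:

# Sketch: deriving update activities from a configuration comparison.
def required_activities(required, actual):
    activities = []
    for sw, version in required["software"].items():
        if actual["software"].get(sw) != version:
            activities.append(("software exchange", sw, version))
    for hw in required["hardware"] - actual["hardware"]:
        activities.append(("hardware addition", hw))
    for code in required["variant_coding"] - actual["variant_coding"]:
        activities.append(("variant coding", code))
    return activities

actual = {"software": {"door_ecu": "1.0"}, "hardware": {"door_ecu"},
          "variant_coding": set()}
required = {"software": {"door_ecu": "2.1"},
            "hardware": {"door_ecu", "rain_sensor"},
            "variant_coding": {"comfort_closing"}}
print(required_activities(required, actual))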

5 SUMMARY AND OUTLOOK

With the increasing complexity of the Electric/Electronic systems of modern automobiles, the so-called function orientation becomes more and more important. So far, the existing methods in this field focus on the development phase. However, function orientation can generate additional benefit in the late phases of the product life cycle, especially manufacturing process planning, manufacturing and the usage of a product. In this paper, several beneficial function-oriented use cases during these phases of the product life cycle were described. Moreover, we presented a possible solution for each use case. An appropriate function-oriented representation is a crucial factor for enabling these solutions. However, a concept for a function-oriented representation that considers the needs of use cases beyond the development phase does not yet exist. Therefore, the goal of our further research is to define a function-oriented representation which supports use cases in manufacturing process planning, manufacturing and the usage of a product. During our use case analysis we realized that there are several possible solutions for each use case. This issue has to be considered in the definition of an appropriate function-oriented representation. We will face this challenge by allocating each solution to the corresponding elements of the function-oriented representation. Figure 7 shows this approach with a simplified example. The left columns contain the use cases and the corresponding solutions. The top row contains an excerpt


of elements of a function-oriented representation as mentioned in section 3. These elements of a function-oriented representation are, among other things, a documentation of functions on the user level, a logical architecture and information about mapping relations. The latter describe which parts, i.e. hardware and software components, contribute to the fulfillment of the related function. For each solution of a use case, there is a statement about the elements of the function-oriented representation required to support this solution. For instance, solution 1 of use case 1 (e.g. prioritizing functions to be tested on the basis of an estimation of severity, probability and detectability in a subjective manner) is supported by a documented user level. An example of a more demanding solution is the estimation of values for severity, probability and detectability on the basis of detailed information for the prioritization of functions to be tested, as described in section 4.1. Here, a documentation of functions on the user level, including values for the importance and safety relevance of a function, would be helpful for the estimation of severity. Considering the function network helps, for example, with the estimation of the effects of a failure of a function. Information about mapping relations between components and functions is important for an estimation of complexity and therefore, among other things, for the determination of the probability of a failure of a function. Moreover, documented mapping relations between functions and components, and especially the physical connections, are helpful for determining detectability, as shown in section 4.1. Thus, it becomes visible which elements are required for a certain solution of a use case. By this means it will be possible to configure an optimal function-oriented representation that is suitable for the desired solutions of each use case.

Figure 7: Instrument for Configuring a Function-oriented Representation on the Basis of Desired Solutions of Use Cases. [Matrix: rows list the use cases with their solutions (use case 1 with solutions 1-3, use case 2 with solution 1, ...); columns list elements of a function-oriented representation (user level, logical architecture, mapping to hardware, mapping to software, HW, SW, ...); an 'x' marks an element required by the respective solution.]
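A minimal sketch of how such a configuration instrument could be used is given below in Python. The matrix entries are illustrative and do not reproduce the values of Figure 7; the union of the elements required by the desired solutions yields the representation to be configured:

# Sketch: configuring a function-oriented representation from desired
# solutions. The requirement sets are invented for illustration.
REQUIRED_ELEMENTS = {
    ("use case 1", "solution 1"): {"user level"},
    ("use case 1", "solution 2"): {"user level", "logical architecture",
                                   "mapping to hardware"},
    ("use case 2", "solution 1"): {"user level", "mapping to software"},
}

def configure_representation(desired_solutions):
    elements = set()
    for solution in desired_solutions:
        elements |= REQUIRED_ELEMENTS[solution]
    return sorted(elements)

print(configure_representation([("use case 1", "solution 2"),
                                ("use case 2", "solution 1")]))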

6 REFERENCES

[1] Pahl, G., Beitz, W., Feldhusen, J., Grote, K., 2007, Engineering Design - A Systematic Approach, Springer, Berlin.
[2] Roth, K., 2000, Konstruieren mit Konstruktionskatalogen, Springer, Berlin.
[3] Suh, N.P., 1990, The Principles of Design, Oxford University Press, New York.
[4] Altshuller, G.S., 1984, Creativity as an Exact Science: The Theory of the Solution of Inventive Problems, Gordon and Breach Science Publishers, New York.
[5] Gausemeier, J., Zimmer, D., Donoth, J., Pook, S., Schmidt, A., 2008, Proceeding for the Conceptual Design of Self-Optimizing Mechatronic Systems, Proceedings of the International Design Conference - DESIGN, Dubrovnik, 19-22 May: 1-12.
[6] Stone, R.B., Wood, L., 1999, Development of a functional basis for design, Proceedings of DETC99, 1999 ASME Design Engineering Technical Conferences, Las Vegas, 12-15 September: 1-15.
[7] Szykman, S., Sriram, R., Racz, J., 1999, The Representation of Function in Computer-Based Design, Proceedings of the 1999 ASME Design Engineering Technical Conferences (11th International Conference on Design Theory and Methodology), Las Vegas, 12-15 September.
[8] Hirtz, J., Stone, R., McAdams, D., Szykman, S., Wood, K., 2002, A functional basis for engineering design - Reconciling and evolving previous efforts, Research in Engineering Design, 2/2002: 65-82.
[9] Fraunhofer Institut für Software- und Systemtechnik, 2006, Modellbasierte Systementwicklung in der Automobilindustrie - Das MOSES Projekt, Fraunhofer-Gesellschaft, Berlin.
[10] Ringler, T., Simons, M., Beck, R., 2007, Reifegradsteigerung durch methodischen Architekturentwurf mit dem E/E-Konzeptwerkzeug, 13. Internationaler Kongress Elektronik im Kraftfahrzeug, Baden-Baden, 10-11 October: 199-209.
[11] Lönn, H., Freund, U., Orazio Gurrieri, L., Küster, J., Migge, J., 2004, EAST-EEA - Definition of language for automotive embedded electronic architecture, Report.
[12] Pretschner, A., Broy, M., Kruger, I.H., Stauner, T., 2007, Software Engineering for Automotive Systems: A Roadmap, International Conference on Software Engineering, Minneapolis, 20-26 May: 55-71.
[13] Andres, M., 2006, Die optimale Varianz, brand eins, 1/2006: 65-69.
[14] Katzenbach, A., 2003, Lösungsansätze zur Beherrschung der Komplexität in der Automobilindustrie, 12. AIK-Symposium - Herausforderung Komplexität - Komplexitätsmanagement und IT, Karlsruhe, 17 October.
[15] Broy, M., 2006, Challenges in automotive software engineering, ICSE '06: Proceedings of the 28th International Conference on Software Engineering, Shanghai, 20-28 May: 33-42.
[16] NN, 1980, Military Standard 1629 A: Procedures for Performing a FMECA.
[17] Pfleeger, S.L., 2000, Risky business: what we have yet to learn about risk management, The Journal of Systems and Software, 3/2000: 265-273.
[18] Amland, S., 2000, Risk-based testing: Risk analysis fundamentals and metrics for software testing including a financial application case study, The Journal of Systems and Software, 3/2000: 287-295.
[19] Wallentowitz, H., Reif, K., 2006, Handbuch Kraftfahrzeugelektronik: Grundlagen, Komponenten, Systeme, Anwendungen, Vieweg, Wiesbaden.
[20] Guddat, U., 2003, Automatisierte Tests von Telematiksystemen im Automobil, Dissertation, University of Tübingen.
[21] Bechter, M., Blum, M., Dettmering, H., Stützel, B., 2006, Compatibility models, SEAS '06: Proceedings of the 2006 International Workshop on Software Engineering for Automotive Systems, Shanghai, 23 May: 5-12.
[22] Stützel, B., Dettmering, H., 2006, Kompatibilitätsorientierte Entwicklung für softwareintensive mechatronische Systeme, 4. Paderborner Workshop Entwurf mechatronischer Systeme, Paderborn, 30-31 March: 281-291.
[23] Burger, F., 2007, Innere Werte - Mit innovativen elektronischen Steuerungseinheiten lassen sich Milliarden verdienen - aber nur, wenn die Industrie hochwertige System-Software entwickelt, McK Wissen, 20/2007: 110-113.
[24] Schulte, S., 2006, Integration von Kundenfeedback in die Produktentwicklung zur Optimierung der Kundenzufriedenheit, Dissertation, University of Bochum.
[25] Matzler, K., Bailom, F., 2006, Messung von Kundenzufriedenheit, In: Hinterhuber, H.H., Matzler, K. (Eds.): Kundenorientierte Unternehmensführung - Kundenorientierung - Kundenzufriedenheit - Kundenbindung, GWV, Wiesbaden: 241-270.
[26] Edler, A., 2001, Nutzung von Felddaten in der qualitätsgetriebenen Produktentwicklung und im Service, Dissertation, University of Berlin.


An approach to the Integrated Design and Development of Manufacturing Systems

H. Nylund, K. Salminen, P.H. Andersson
Department of Production Engineering, Tampere University of Technology, P.O. Box 589, FI-33101 Tampere, Finland
[email protected]

Abstract This paper describes an approach to integrated manufacturing systems. It aims to integrate design and development activities, as well as the entities existing in a manufacturing system. A model of manufacturing systems is presented, including manufacturing entities with different roles and domains related to them. The characteristics of the manufacturing entities are discussed, including changeability, service orientation, and learning capabilities. One of the main enablers is a digital manufacturing system, which includes tools for modeling, simulation and analysis, as well as digital information and knowledge. This is illustrated by an example process, from product ideas to the efficient production of the products. Keywords: Manufacturing System, Integration, Design, Development

1 INTRODUCTION

The competition in global markets obliges manufacturing enterprises to respond rapidly and in a cost-efficient manner to changing constraints and requirements. The enterprises are required to be context-aware and to have knowledge about their skills and capabilities. They have to be able to adapt to, for example, changing possibilities existing within the industrial environment, requirements derived from customer demands, and constraints limiting how they can do business. An integrated environment, connecting the manufacturing activities, can be one of the main enablers for successful operation in the markets. The integration of (a) design and development activities and (b) products and production systems into one system enables existing skills and knowledge to be used more efficiently. It can offer a wide knowledge and information base to be used in decision-making processes. This paper describes an approach to such integrated manufacturing systems. It is part of an ongoing scientific research project, FMS 2010. The objective is to improve the efficiency of manufacturing enterprises by offering capabilities which can support all activities, from visions and ideas to actions and customer satisfaction. A model of integrated manufacturing systems is presented. It consists of the manufacturing entities of products, resources, and orders, which have different roles in the manufacturing system. The entities are connected through the process, production, and business domains. The entities are explained with their internal structure, consisting of digital, virtual, and real parts as the autonomous part, and a communication part involved in co-operation between different entities. The entities are also examined in a context ranging from industrial ecosystems to individual entities. The changing characteristics of the system are discussed from the viewpoints of changeable, learning, and service-oriented systems and entities. This can lead to a knowledge-based manufacturing system in which the information and knowledge are also constantly changing. A digitally presented manufacturing system is one of the key enablers in the changing environment to keep the information and knowledge up-to-date and available.

2 THE FMS 2010 CONCEPT OF ADAPTIVE MANUFACTURING SYSTEMS

The aim of the FMS 2010 research project is to create a concept of adaptive and autonomous manufacturing systems. The intention is to integrate the design and development of products, production systems, and business processes into one environment. The entities of the system can exist in a distributed network both on the physical and information levels. This provides more effective use of existing knowledge and skills. Duplicate design and development processes can be reduced and more cost-effective solutions achieved.

Figure 1: The process of the FMS 2010 concept [1]. [Loop connecting needs for change and ideas, the innovation process, synthesis, the adaptive DiMS system, and implementation.]

Figure 1 illustrates the process of the FMS 2010 concept. The process combines three main phases: synthesis, solution creation, and the use of the created system. The phases are connected by processes of emergence, i.e. the creation of implementation concepts, the implementation of the new system, and the growth of skills and knowledge as the system operates. In the synthesis phase, the existing skills and knowledge of a manufacturing system are combined with new requirements and possibilities derived from ideas and needs for change. When the implementation concepts are created, solution principles are used to form the solution. In the event of contradictory situations, the different goals are analyzed using the principle of positive intention [2]. This is done to achieve a mutually acceptable solution that can be considered for implementation. When the newly implemented system is in operation, it is constantly developed further, and the knowledge and skills of the manufacturing system are updated. During each of the phases, accepted principles are added to the existing skills and knowledge, and they form the basis for how future design and development challenges are met. The process is iterative both at the level of the whole process and within its steps. For example, a synthesis can be repeated until acceptable solution alternatives are found. In a similar fashion, a whole loop can be repeated to achieve a feasible solution. The approach utilizes, to the appropriate extent, principles from the paradigms of Holonic Manufacturing Systems (HMS), Fractal Manufacturing Systems (FrMS), Bionic/Biological Manufacturing Systems (BMS), Cognitive Technology Systems (CTS), and Service-Oriented Architecture (SOA). Table 1 summarizes the main principles used.
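The iterative character of this loop can be sketched minimally as follows, in Python. The synthesis and acceptance functions are crude placeholders for the actual engineering judgements and are not part of the FMS 2010 concept itself:

# Sketch of the iterative FMS 2010 loop; all logic is a placeholder.
def synthesis(skills, needs):
    # Combine existing skills and knowledge with new requirements into
    # candidate solution alternatives (here: a trivial merge).
    return [skills | needs]

def acceptable(alternative):
    # Placeholder for resolving contradictory goals into a mutually
    # acceptable solution (principle of positive intention [2]).
    return bool(alternative)

def fms2010_cycle(skills, needs):
    while True:
        candidates = [a for a in synthesis(skills, needs) if acceptable(a)]
        if candidates:                    # otherwise the synthesis is repeated
            implemented = candidates[0]
            return skills | implemented   # knowledge grows during operation

print(fms2010_cycle({"milling"}, {"5-axis machining"}))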

Principle | A short description
HMS [3][4][5] | Autonomous and co-operative entities. Network-based teams. Modular system structure.
FrMS [6] | Horizontal and vertical self-similarity on all structuring levels.
BMS [7] | Evolving capabilities. New methods and methods integration. Intelligent and adaptive structures.
CTS [8] | Developing reasoning capabilities. Adaptive decision-making.
SOA [9] | Formal communication language and content between the entities.

Table 1: A brief summary of the main principles used in the FMS 2010 approach.

The process of the FMS 2010 concept is being piloted in several major Finnish enterprises. Each of them has its specific challenges, which differ from each other and give an individual aspect to developing the concept on a detailed level. The FMS 2010 research project is divided into work packages of:
• Challenges in state-of-the-art manufacturing systems technology.
• Manufacturing systems control architecture.
• Integration of manufacturing methods.
• Flexible automated fixtures.
• Modeling, simulation, and analysis of machine tools and robots, as well as manufacturing systems.

3 MODEL OF INTEGRATED MANUFACTURING SYSTEMS

The model of integrated manufacturing systems to be described is intended as a starting point for modeling real manufacturing systems. The basis of the model is derived from the principles behind the term 'holon'. It comes from the Greek word 'holos', which means a whole, and the suffix '-on', meaning a part. Therefore the term holon means something that is at the same time a whole and a part of some greater whole [10]. The model of integrated manufacturing systems consists of manufacturing system entities and related domains, the structure of individual manufacturing entities, and the structuring levels of the entities. A manufacturing system is, at the same time, part of a bigger system and a system consisting of entities.

3.1 Manufacturing System Entities and their Related Domains

The model of manufacturing systems explains the system with manufacturing entities and their related domains; see Figure 2. The basic entities are products, resources, and orders, based on the reference architecture of HMS: the Product-Resource-Order-Staff architecture (PROSA) [3][4]. The entities are connected by the domains of process, production, and business. Each part of the manufacturing system has a specific role, and all of them have to be considered in an integrated environment for successful operation.

Figure 2: The manufacturing system entities and their related domains, adapted from [11]. [Product, resource, and order entities connected pairwise: product and resource through the process domain (features, capability), product and order through the business domain (markets, demand), and resource and order through the production domain (capacity); together they define the competence of the system.]

Products are what the manufacturing system offers to its customers. Orders are instances of products that the customers are purchasing. The customers can be other entities within the same enterprise, or entities in the enterprise network. The ordered products are manufactured with the resources existing in the manufacturing system. The business domain connects products and orders. In the markets where a manufacturing system exists, the demand of customers has to be met with sufficient supply. In the process domain, the capability to manufacture the products is defined. The system needs to be able to manufacture all of the features of the products, i.e. the resources should be associated with corresponding methods. The resources, having the needed capabilities, also define the capacity of the system in the production domain, which is responsible for manufacturing orders at the right time. The system should have enough capacity to manufacture the required volume and the scalability to handle any variation in orders. The competence of a manufacturing system is defined by the skills needed in each of the different types of entities and their related domains. Each of them has to be efficient in order to achieve feasible and efficient results.


3.2 Structure of System Entities

The entities, despite having different roles, have similar internal structures. The structure consists of digital, real, and virtual parts forming the autonomous part of an entity. The entities are connected via the communication part, which makes co-operation with the other entities existing in the system possible.
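A minimal sketch of this internal structure is given below in Python; the field contents are illustrative assumptions, and the individual parts are described in detail after the sketch:

from dataclasses import dataclass, field

@dataclass
class ManufacturingEntity:
    role: str                                    # "product", "resource" or "order"
    digital: dict = field(default_factory=dict)  # information and knowledge
    real: str = ""                               # reference to the physical object
    virtual: str = ""                            # reference to the computer model
    inbox: list = field(default_factory=list)    # communication part

    def send(self, other, message):
        # Only the needed information crosses the entity boundary.
        other.inbox.append((self.role, message))

machine = ManufacturingEntity("resource", real="machining centre 3",
                              virtual="simulation model v2")
order = ManufacturingEntity("order")
order.send(machine, {"request": "milling", "due": "week 12"})
print(machine.inbox)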

Figure 3: Structure of manufacturing entities [11]. [Each entity combines a digital part (information and knowledge), a real part (the physical object), and a virtual part (the computer model), connected through a communication part.]

The digital part includes all the digitally presented information and knowledge. It is used for developing and controlling the real system, as well as for building the virtual models. The real part represents what exists physically in the real system, such as machines and tools, humans, and the products to be manufactured. The virtual part is a representation of the physical part as a computer model. This includes, for example, CAD models of products and production facilities and simulation models of robots, machine tools, and manufacturing systems. The communication part is responsible for the co-operation on the physical and information levels. The information part of the communication is the language and content of the data that are transferred within the system. The amount of information transferred between the system entities is kept to a minimum in order to reduce the complexity of the operations. In the context of a currently operating manufacturing system, the information for the real and virtual parts is the same as that existing in the digital part. New information and knowledge, gathered from either the real or the virtual world, is added to the digital part and made available for both. In future design and development cases, a copy of the digital part is used to avoid inaccurate information being added to the current system. This is done to eliminate false information from failed ideas in future design and development cases.

3.3 Horizontal and Vertical Self-Similarity

As a manufacturing system is a part of some larger system and at the same time consists of subsystems, it can be examined on different vertical structuring levels. A manufacturing system is also a part of a supply chain, which is its horizontal context: material comes from a supplier and is delivered to a customer. Figure 4 presents the structuring levels of different industrial entities, where a manufacturing system is part of a bigger entity and at the same time consists of several entities on lower structuring levels. At the top level is an industrial ecosystem, where all the entities of lower levels exist. Being aware of the changes in the ecosystem enables a more rapid response when new partners, suppliers, or customers are required.

Figure 4: The structuring levels from industrial ecosystem to machine tools. [Levels: industrial ecosystem, enterprise network, enterprise, factory, manufacturing system, manufacturing unit, machine tool.]

A manufacturing system entity consists of manufacturing unit entities, i.e. it is a network of the resources needed to manufacture all the features of product entities. It also has resources for storing and handling material and transferring it between manufacturing units. The product entities are typically part families from which the volume and variation of orders is composed. Manufacturing units consist of resource entities of machines, devices, workers, and other required entities, such as robots, fixtures, sensors, readers, etc. The units are designated to manufacture certain product entities, i.e. work pieces that have similarities in size, shape, features, material properties, etc. They are also required to produce a certain amount of order entities to keep the material flowing between manufacturing units. A factory entity consists of manufacturing and assembly systems, as well as storage areas for blank parts and final products, including both manufacturing and assembly units. The products are typically final products and the customers are the final users of the products. Enterprises and enterprise networks consist of factory units, which can exist globally. The distance between the entities brings logistics into the picture as an important factor. The difference between enterprises and their networks is that entities in a network may have different owners and possibly contradictory goals. The behavior on the industrial ecosystem level differs from the five lower structuring levels because it is not under any administration. A manufacturing enterprise can have a certain amount of control over its own enterprise network, but it cannot control other entities inside, outside, or coming into the ecosystem. A level above includes all the structuring levels below it. The levels are externally self-similar in terms of the structure of the entities, as they communicate in the same environment. Despite their self-similarity, their internal autonomy can vary and they can be different from each other, even when they have a similar role in the system. From another viewpoint, manufacturing entities can be similar or different depending on who examines them. A product in a manufacturing system is a resource from the customer's viewpoint. Similarly, the resources in a manufacturing system are products from the viewpoint of the resource suppliers.

4 CHANGING CHARACTERISTICS

Manufacturing systems operate in a constantly changing environment. The changes can be external or internal, direct or indirect. Typical external and indirect sources of change are politics, society, ethics, the world economy, and the environment [12]. Laws and different rules are examples of external and direct sources of change. These sources can be mandatory or voluntary. Mandatory sources force the manufacturing system to adapt to the changes. For changes that are voluntary, the manufacturing system has to choose whether to change or not; the decisions will have an impact on the competence of the manufacturing system. Customers, partners, and suppliers are external and direct sources of change from the viewpoint of a manufacturing system. They differ from the other external and direct sources in their nature, as they are similar entities communicating in the same environment as the manufacturing system. Similarly, new ideas, materials, and technologies can derive from the manufacturing system itself or from its context. External changes will cause internal changes that alter the system. The changes can affect the system entities of products, resources, or orders, as well as their related domains of business, process, and production. A change within a system will almost always cause a chain of change events until the system has adapted to the new situation.

Figure 5: Examples of external and internal changes a manufacturing system faces. [External and indirect sources (environment, politics, society, ethics, economy) and external and direct sources (laws, rules, customers, partners, suppliers, new ideas, new materials, new technologies) acting on the internal entities and domains of the system.]

4.1 Service-Oriented and Learning Manufacturing Entities

The basic conceptual model of the SOA architecture consists of service providers, service requesters, and service brokers [9]. The entities in a digital manufacturing system based on SOA have the following roles:
• Service provider entities are typically the resource entities having the needed capabilities.
• Service requester entities are typically the order holons. The resource entities can also take the role of a requester, for example when they require maintenance services.
• Service broker entities can be seen as the rules of co-operation between the entities, i.e. the autonomy of the upper level of entities.
In the proposed model of manufacturing systems, with the basic building blocks of products, resources, and orders, services happen in the domains of process, production, and business. Knowledge-based services can be seen in three dimensions: role, context, and receipt. They are based on the distributed character of knowledge: normative expectations, interactive situations, and dispositions [13], and object, cognitive state, and capability [2]. Each entity has a role in the system in which it exists, i.e. it has expectations of the other entities. It is also one of the objects existing in the system. In its context, an entity performs its activities as interactive situations in which the cognitive state of the entity collects data and information. The dimension of dispositions is seen as a receipt, i.e. data and information collected from the system, used to learn and improve the knowledge as the capability of the entity.

Figure 6: Roles, context, and receipt of products and resources. [Service between a product (service requester) and a resource (service provider) in the process domain; the context covers status, skills, and actions; the receipt covers history, rules, and models; both entities collect data, learn (theory, know-how), and update their knowledge.]

Figure 6 presents an example of a service happening in the process domain between a product and a resource. The resource entity provides a service as requested by the product entity. The service is a manufacturing process happening in a certain context. The actions during the service depend on the skills of the resource entity and the state of the system. When the service is in operation, both the resource and the product entity collect data from the process. They learn from and update the data and information they receive. When a certain product entity uses a service provided by a certain resource entity, the data collecting, learning, and updating phases include adding the same data and information to the knowledge of both entities.
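A sketch of this interaction in Python (the collected data fields are invented) shows how the same record ends up in the knowledge of both the requesting and the providing entity:

class ServiceEntity:
    def __init__(self, name):
        self.name = name
        self.knowledge = []          # history collected from services

def perform_service(requester, provider, context):
    # Both entities collect, learn from and store the same process data.
    record = {"requester": requester.name, "provider": provider.name, **context}
    requester.knowledge.append(record)
    provider.knowledge.append(record)

product = ServiceEntity("gearbox housing")
resource = ServiceEntity("milling cell")
perform_service(product, resource, {"process": "face milling", "duration_min": 12})
print(resource.knowledge == product.knowledge)   # True: shared learning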


The knowledge of a resource entity is updated by the several product entities using the services it provides. In a similar fashion, the knowledge of a product entity consists of all the services it requests.

4.2 Changeable Manufacturing Systems and Entities

The changeability of manufacturing systems and entities can be classified into changing by requirements, changing by learning, and changing structure during the lifecycle of the entity. The entities face changing requirements during their lifecycles. Typically, the entities must change during their existence both to meet new requirements and to improve their actions.

Changing by requirements
An entity may have to change because its requirements change. The need for change can be seen from the vertical structuring levels:
• Industrial ecosystem - being aware of existing and future possibilities and requirements.
• Enterprise network - rapidly forming a new enterprise network structure when markets change.
• Enterprise - transparent co-operation with suppliers, partners, and customers to get better results.
• Factory - rapid response to changing product families.
• Manufacturing system - flexibility to change manufacturing processes with minimal reconfiguration.
• Manufacturing unit - rapidly changing the system configuration for the requirements of new part families.
• Machine tool - the ability to change between work pieces with minimal setup times.

Changing by learning
Changing by learning can be understood as the evolution of skills and knowledge from unknown towards core skills and knowledge; see Figure 7. An unknown activity cannot be considered until the possibilities are known. It requires new information and knowledge to be acquired from the enterprise network or the industrial ecosystem. When it is clear that the change is possible, the technologies needed can be investigated. By having a wide network of knowledge, it is possible to gather information on the technologies, skills, and knowledge existing in the enterprise network. When the technologies are available, the system can be configured and the capabilities achieved. When the actual possibility is implemented and integrated into the system, the capability exists in the system. As the system operates, the capabilities are constantly improved towards core information and knowledge by learning from actions.

Figure 7: A change from unknown to core knowledge. [Progression: unknown -> possibilities (would) -> technologies (should) -> capabilities and configuration (could) -> implementation and integration (yes) -> core, with continuous improvement and learning.]
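The progression of Figure 7 can be read as a simple state sequence, sketched below in Python; the guard answers stand in for the actual would/should/could investigations:

STAGES = ["unknown", "possibilities", "technologies", "capabilities", "core"]

def advance(stage, answer_is_positive):
    # Move one step towards core knowledge when the investigation of the
    # current stage (would? should? could? yes?) succeeds.
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)] if answer_is_positive else stage

stage = "unknown"
for answer in (True, True, True, True):
    stage = advance(stage, answer)
print(stage)   # "core" after implementation, integration and learning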


Changing structure during the life cycle
The structure of a manufacturing entity, consisting of digital, virtual, and real parts, will change during the life cycle of the entity. Not all of the parts have to exist all the time. A product entity, in the early phases of the design process, is an idea, a vision of what it could be, and has only a vague description that can be presented digitally. When the design evolves into a detailed solution principle, there can be a virtual part, a computer model that can be used to test the functionality and present the product idea to other people. The physical part exists for the first time when prototypes are manufactured. When the product entity is accepted into production, instances of the product entity, the order entities, are realized. They can have the physical product and also digital information and virtual models of the product as parts of the service to a customer.

From the past via the present to the future
One viewpoint from which to consider digital manufacturing systems is the time span in which the entities exist. It can be seen, for example, as past, present, and future. The past exists as data and information collected from the manufacturing system. As the system operates, the events occurring in the manufacturing system are logged. The data can be examined and analyzed to find out what happened and why. By finding the root causes of the phenomena, the system can learn from its actions. It can improve the manufacturing processes, update its skills and knowledge, and be prepared for unexpected situations in the future. In the present, during the current operation of the manufacturing system, the digital and real manufacturing systems co-exist, constantly updating each other. The state of the real manufacturing system can be seen in the digital manufacturing system and vice versa. Actions can be taken with the state of the system as a starting point. The viewpoint of the future can be divided into tactical decisions and visions, the difference being the time horizon. In both cases, the operating process occurs mostly in the digital manufacturing system, because the events under investigation have not happened yet. Tactical decisions consider the near future into which the manufacturing system is heading. Future visions are similar to tactical decisions, but with a longer time horizon; their outcome is more obscure, but there are more possibilities for creativity and the investigation of ideas.

4.3 Knowledge-Based Manufacturing System
In a changing environment, managing the information and knowledge of a manufacturing system is an important factor. A manufacturing system can be characterized as a distributed knowledge system [13], and managing knowledge as a dynamic and continuous organizational phenomenon [14]. Knowledge can be divided into explicit and tacit knowledge [15]. Explicit knowledge can be presented as symbols, i.e. it is possible to represent it formally and digitally. Tacit knowledge consists of, for example, human beliefs, know-how, and skills. Managing the two dimensions of knowledge includes the processes of knowledge creation, knowledge storage and retrieval, knowledge transfer, and knowledge application [2]. A service-oriented manufacturing system, presented digitally, can enable information and knowledge to be managed. The intelligence of the manufacturing entities is kept as their autonomy. Only the needed information is

transferred between the co-operating manufacturing entities. This requires a formal communication language and information content. If all the entities can communicate formally, entities can be changed, added, or removed without changing the system itself. Each entity can exist in the system regardless of its autonomous part. This enables different types of manufacturing entities to be integrated into one system.

5 DIGITAL MODEL OF MANUFACTURING SYSTEMS FOR THE INTEGRATED APPROACH

A Digital Manufacturing System is one of the main enablers of efficient design and development processes. Presenting the information and knowledge of manufacturing systems digitally makes possible a wider outlook on all aspects of manufacturing systems, compared to the skills and knowledge of individual humans. It can be used to evaluate everything from creative ideas during the conceptual stages to detailed solution alternatives. Research on Digital Manufacturing on different levels, from enterprises to manufacturing entities, has no commonly agreed definitions, but the definitions share similar characteristics (see, for example, [16][17][18][19][20][21]):
• An integrated approach to improve product and production engineering processes and technology.
• A framework for new technologies, including a collection of systems and methods.
• Computer-aided tools, such as modeling and simulation, for planning and analyzing real manufacturing processes.
In this paper, the Digital Manufacturing System is defined as "an integrated environment for the design and development of products, production systems, and business processes" [11]. The digital manufacturing system includes modeling, simulation, and analysis using computer tools, as well as digitally presented information and knowledge. It exists only once, in a formal and up-to-date form. It can be distributed, but is accessible to all parties regardless of time and location.

5.1 Modeling, Simulation, and Analysis

Modeling in a wide sense is used to understand something better: why a system behaves in the way it does. It can be used to repeat or refine performance to achieve a specific result, as well as to extract and formalize a process in order to apply it to a different content or context [22]. Simulation, especially discrete-event simulation (DES), is used when the model evolves over time. The states of the manufacturing entities change at separate points in time. Simple models can be investigated analytically, but typical manufacturing systems and the relations between their entities are too complex to solve without simulation [23]. The use of modeling and simulation is one of the largest application areas in the design and development of manufacturing systems. Typical areas usually addressed using modeling and simulation are, for example [24]:
• The need for and the number of resources, both human and machine, i.e. defining the needed capacity of the system.
• Performance evaluation, such as throughput and bottleneck analysis.
• Evaluation of operational procedures, such as the planning, controlling, and scheduling of manufacturing activities.
In an integrated design and development environment, the modeling and simulation of manufacturing systems is part of the digital manufacturing system. It needs to be kept up to date, in contrast to a typical simulation model that is created in a project and then, after analysis of the results, becomes obsolete or is only seldom updated.
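As a minimal illustration of the discrete-event view, the following Python sketch simulates a single resource processing jobs; the paper itself prescribes no particular tool, and the job data is invented:

import heapq

def simulate(arrival_times, process_time):
    # States change at separate points in time: each job starts when it has
    # arrived and the resource has become free again.
    events = [(t, i) for i, t in enumerate(arrival_times)]
    heapq.heapify(events)
    free_at, log = 0.0, []
    while events:
        arrival, job = heapq.heappop(events)
        start = max(arrival, free_at)
        free_at = start + process_time
        log.append((job, start, free_at))
    return log

for job, start, finish in simulate([0.0, 1.0, 1.5], process_time=2.0):
    print(f"job {job}: start {start}, finish {finish}")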

5.2 An Example from Product Ideas to Efficient Solutions

Requirements from customers, needs for change, and general requirements combine with ideas to form solution principles. Further on in the process, the solution principles translate into solution alternatives, which define the manufacturing requirements. Figure 8 presents an example process from the requirements towards an efficient solution that meets customer demands. It includes the verification and validation of the manufacturing capabilities and the capacity to manufacture the new product. The three loops in the process are the product requirements loop, the capability loop, and the capacity loop.

Figure 8: An example of the integrated approach from product ideas to deliverable products and services. [Flow from ideas, visions, and needs for change through the product requirements loop, the capability loop (existing capabilities, capability reconfiguration, capability implementation), and the capacity loop (existing capacity, capacity reconfiguration, capacity implementation), each step supported by modeling and simulation and by digital manufacturing information and knowledge.]


The product is divided into features which form the service requests, the requirements for the system. For each requirement there must exist a corresponding capability, a method to manufacture the product. The resource having this capability is the service provider. The first decision is made in the product requirements loop, where it is decided whether the product design alternative is worth going on with. It is possible to go back and modify the design or to check whether the capabilities exist. In the capability loop, the result of matching each requirement against the capabilities can be classified, for example, into one of the following five categories:
• Existing capabilities: The capabilities exist for all of the product requirements without any need for changes to the system. The products can be manufactured, as the service requests have service providers.
• Possible existing capabilities: At least some of the product requirements need further investigation as to whether the capabilities exist. The requirements are close to the existing capabilities and, using modeling and simulation, the capabilities can be verified.
• Capabilities after reconfiguration: There is no existing capability, but it may be possible to reconfigure the system so that it has the capabilities. By modeling the reconfigured system the possibility can be verified.
• Capabilities after implementation: The system does not have the needed capability. It may be possible if new capabilities are added to the system. Again, this can be verified using modeling and simulation.
• No capability: The result may also be that there are no capabilities and they cannot be implemented either. This leads to the need for an alternative solution, which in turn leads to a result that fits into one of the first four categories.
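A crude sketch of this classification in Python follows; the matching rule is deliberately reduced to set membership and the requirement names are invented, while the five category names are those of the list above:

def classify(requirement, existing, near, reconfigurable, implementable):
    # The five categories of the capability loop, checked in order.
    if requirement in existing:
        return "existing capabilities"
    if requirement in near:
        return "possible existing capabilities"      # verify by simulation
    if requirement in reconfigurable:
        return "capabilities after reconfiguration"
    if requirement in implementable:
        return "capabilities after implementation"
    return "no capability"                           # alternative solution needed

print(classify("thread milling M6",
               existing={"face milling"},
               near={"thread milling M6"},
               reconfigurable=set(),
               implementable=set()))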

When it is known that the capabilities exist for all the product requirements, the efficiency of the capabilities still needs to be evaluated against factors such as cost, quality, and time. It has to be decided whether the solution alternative is good enough. It can be further investigated in the capacity loop, or it can be rejected and sent back to the capability loop. If all the needed capabilities exist, the capacity of the system has to be checked. The same five categories can be used in the capacity evaluation. If it is known that there is enough capacity, nothing else has to be done. Modeling and simulation can be used to verify that there is enough capacity; it can also be used in capacity reconfiguration and implementation issues. Modeling and simulation of capacity has the same constraints as in the case of capabilities. The capacity for the existing volume and variation still has to exist when new products are considered as an addition to existing products. In the capacity loop, the solution can be accepted or rejected, as in the capability loop. If the solution is rejected, it can be sent back to the capability loop or further back into the design requirements loop. All the solutions are the results of decisions which combine existing digital information with the new requirements and possibilities. The digital information and knowledge is input, as it is used to support decisions when existing knowledge is combined with new knowledge gathered from the new product requirements. It is also output from the solutions, as the system is updated to include the new information and knowledge.

5.3 Benefits and Challenges of Digital Manufacturing Systems

Both the digitally presented information and knowledge and the computer tools for modeling, simulation, and analysis offer efficient ways to achieve solutions for design and development activities. General benefits include, for example:
• Modeling and simulation tools offer realistic-looking 3D models and animations that can be used to demonstrate plans and train workers.
• Experiments in a digital manufacturing system, on a computer model, do not disturb the real manufacturing system; new policies, operating procedures, methods, etc. can be tried out and evaluated in advance.
• Solution alternatives and operational rules can be compared within the system constraints. Possible problems can be identified and diagnosed before actions are taken in the real system.
• Being involved in the construction of the digital manufacturing system increases individuals' knowledge of the system. The experts in a manufacturing enterprise acquire a wider outlook compared to their special domain of knowledge.
More specific advantages related to the integrated approach from product ideas to deliverable products and services, as presented in Figure 8, are, for example:
• When new products are introduced, service requests can be simulated, providing a response in terms of the system's capability to manufacture the products.
• If changes are needed, different solution alternatives can be simulated, analyzed, and compared. The most suitable solution can be selected to be considered for implementation.
• The solutions can be viewed against factors such as cost, quality, and time, as well as how they affect the operation of the existing system.
• Using the approach in the early steps of product requirement analysis makes it possible to detect change requirements in advance.
Challenges exist both in the autonomous and the co-operating parts of the digitally presented manufacturing entities. The internal part has to include only the needed information and knowledge, and it also has to improve the actions taken by individuals with their own personal skills. The entities co-operating with other entities need to have predefined ways to communicate. Both the language and the content of the transferred information and knowledge have to be formally described in such a way that both humans and machines can communicate in the same system.

6 CONCLUSIONS

Changeability is a precondition for success. It is a combination of creativity with quality and productivity [25]. An approach to manufacturing systems that integrates product, production, and business processes is an enabler for efficient design and development activities. The model of the integrated system, with manufacturing entities having different roles and self-similar structures, as well as their relations, makes it possible to construct models of real manufacturing systems. The model supports the changing characteristics of manufacturing systems by updating the information and knowledge when the system changes, for example by learning from its actions and by adapting to new requirements. If the system is presented digitally, the information and knowledge of the system can be presented formally and are available to all relevant parties, regardless of time and location. Different solution alternatives can be examined and results can be

achieved before they are put into practice in the real system. The consequences of changes in one area can be evaluated, and it is possible to see how they affect other areas and the whole system.

7 ACKNOWLEDGMENTS

This paper is part of the scientific research project FMS 2010. It is co-financed by Tekes, the Finnish Funding Agency for Technology and Innovation, and several major companies in Finland.

8 REFERENCES

[1] Nylund, H., 2008, Changeability issues in adaptive manufacturing systems, Flexible Automation and Intelligent Manufacturing, FAIM2008, Skövde, Sweden, 30 June - 2 July: 1037-1044.
[2] Dilts, R., 1998, The Principle of 'Positive Intention', Robert Dilts NLP Home Page, retrieved 15th July 2008, http://www.nlpu.com/Patterns/pattern2.htm
[3] Wyns, J., 1999, Architecture for Holonic Manufacturing Systems - The Key to Support Evolution and Reconfiguration, PhD thesis, K.U. Leuven, Belgium.
[4] Van Brussel, H., Wyns, J., Valckenaers, P., Bongaerts, L., Peeters, P., 1998, Reference architecture for holonic manufacturing systems: PROSA, Computers in Industry, 37: 255-274.
[5] Valckenaers, P., Van Brussel, H., Bongaerts, L., Wyns, J., 1994, Results of the Holonic Control System Benchmark at the K.U. Leuven, Proceedings of the CIMAT Conference, Troy, NY, USA: 128-133.
[6] Warnecke, H.J., 1993, The Fractal Company: A Revolution in Corporate Culture, Springer-Verlag, Berlin, Germany.
[7] Ueda, K., Vaario, J., Ohkura, K., 1997, Modelling of Biological Manufacturing Systems for Dynamic Reconfiguration, Annals of the CIRP, 46/1: 343-346.
[8] Zäh, M.F., Lau, C., Wiesbeck, M., Ostgathe, M., Vogl, M., 2007, Towards the Cognitive Factory (Keynote Paper), Proceedings of the 2nd International Conference on Changeable, Agile, Reconfigurable and Virtual Production, Toronto, Canada, 22-24 July: 2-16.
[9] W3C, 2004, Web Services Architecture, http://www.w3.org/TR/ws-arch/, accessed 18th July 2008.
[10] Koestler, A., 1989, The Ghost in the Machine, Arkana Books, London.
[11] Nylund, H., Salminen, K., Andersson, P.H., 2008, Digital virtual holons - An approach to digital manufacturing systems, In: Mitsuishi, M., Ueda, K., Kimura, F. (Eds.), Manufacturing Systems and Technologies for the New Frontier, The 41st CIRP Conference on Manufacturing Systems, Tokyo, Japan, 26-28 May: 103-106.
[12] Wiendahl, H.P., Heger, C.L., 2005, Justifying Changeability - A Methodical Approach to Achieving Cost Effectiveness, The International Journal for Manufacturing Science & Production, 6/1-2: 33-40.
[13] Tsoukas, H., 1996, The Firm as a Distributed Knowledge System: A Constructionist Approach, Strategic Management Journal, 17: 11-25.
[14] Alavi, M., Leidner, D.E., 2001, Knowledge Management and Knowledge Management Systems: Conceptual Foundations and Research Issues, MIS Quarterly, 25: 107-136.
[15] Nonaka, I., Takeuchi, H., 1995, The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, Oxford.
[16] Kühn, W., 2006, Digital Factory - Integration of simulation enhancing the product and production process towards operative control and optimization, International Journal of Simulation, 7/7: 27-39.
[17] Maropoulos, P.G., 2003, Digital enterprise technology - defining perspectives and research priorities, International Journal of Computer Integrated Manufacturing, 16/7-8: 467-478.
[18] Souza, M.C.F., Sacco, M., Porto, A.J.V., 2006, Virtual manufacturing as a way for the factory of the future, Journal of Intelligent Manufacturing, 17: 725-735.
[19] Reiter, W.F., 2003, Collaborative engineering in the digital enterprise, International Journal of Computer Integrated Manufacturing, 16/7-8: 586-589.
[20] Offodile, O.F., Abdel-Malek, L.L., 2002, The virtual manufacturing paradigm: The impact of IT/IS outsourcing on manufacturing strategy, International Journal of Production Economics, 75: 147-159.
[21] Bracht, U., Masurat, T., 2005, The Digital Factory between vision and reality, Computers in Industry, 56: 325-333.
[22] Dilts, R., 2005, Modeling, Robert Dilts NLP Home Page, retrieved 15th July 2008, http://www.nlpu.com/Articles/artic19.htm
[23] Law, A.M., Kelton, W.D., 2000, Simulation Modeling and Analysis, 3rd ed., McGraw-Hill, New York.
[24] Law, A.M., McComas, M.G., 1999, Simulation of Manufacturing Systems, In: Farrington, P.A., Nembhard, H.B., Sturrock, D.T., Evans, G.W. (Eds.), Proceedings of the 1999 Winter Simulation Conference, Phoenix, AZ: 56-59.
[25] Sohlenius, G., 2005, Systemic Nature of the Industrial Innovation Process - Contribution to a Philosophy of Industrial Engineering, Doctoral dissertation No. 527, Tampere University of Technology, Finland.

An Improved Method of Failure Mode Analysis for Design Changes

R. Laurenti, H. Rozenfeld
Nucleus of Advanced Manufacturing, Engineering School of São Carlos, Department of Production Engineering, University of São Paulo, Av. Trabalhador Saocarlense, 400, 13566-590 São Carlos-SP, Brazil
[email protected]; [email protected]

Abstract
Changes in customer and market behaviour drive a large part of product creation and modification. However, design changes introduce new potential failures into products. In this paper, an integrated method is presented which turns attention to the analysis of design changes. The method is based on the FMEA and DRBFM methods and on remarks from four focused interviews. The interviews showed the necessity of a structured process for managing engineering changes, multidisciplinary work, empowerment of responsibilities, committed personnel and an understanding of the modifications. However, further work must be undertaken to assess and validate the novel method.

Keywords: Failure Analysis, Design Changes, FMEA, DRBFM

1 INTRODUCTION

In the global market, product development has shown to be one of the most important business processes for companies in the achievement of competitive advantage [1]. Over the last decade, new products have been appearing at an ever increasing pace. Also, product modifications have increased significantly to meet existing needs, emerging wants, and latent expectations of consumers [2]. Most new products in engineering are designed by modification of existing products; that is, product development involves the steady evolution of an initial design [3] [4]. However, changes always create an increased failure potential in the design [5] [6]. The failures can affect the reliability and availability of a product and can cause profit loss to both manufacturer and user. This is particularly true in the automobile industry. Many studies have shown that, besides financial harm, the disclosure of product defects (such as recalls) can have negative effects on an automaker's reputation [7] [8], with consequent losses in stock market valuation [8] [9] and product sales [8] [10]. In a typical month, several recall campaigns of motor vehicles are carried out by automobile manufacturers to correct defective vehicles [9], and their incidence is increasing over time [8]. Likewise, the same evidence of profit losses was found for non-automotive recalls [11]. In this scenario, companies face the challenge of proactively preventing failures during the early stages of the new product development (NPD) process, since the later a failure is detected in the product life cycle, the bigger its financial consequences become [12]. Several methods of design failure analysis currently exist and are used in industry, but by far the most widely used is the Failure Mode and Effect Analysis (FMEA) method [13]. FMEA helps designers to understand and know the potential modes of a failure, to assess the risk of each known potential failure mode, and to identify countermeasures to prevent the failure from occurring [14]. It has been intensively applied over the years in the NPD


process. Nevertheless, as previously discussed, many defects are still being discovered by the final consumer. Moreover, the FMEA method has several shortcomings; for instance, it does not take into account potential failures due to changes. Therefore, the goal of this paper is to advance an integrated method to proactively find potential failures introduced by design changes. The method was named Failure Mode and Effect Analysis of Modifications (FMEAM). In this paper, the terms change and modification are used with exactly the same meaning. Also, the review emphasis is placed on failure analysis for product design only, not for the manufacturing process. A combination of a literature review, regarding the methods FMEA and Design Review Based on Failure Mode (DRBFM), and findings from four focused interviews were the sources of evidence used to define the proposed method. The interviews not only helped to understand the practical analyses performed to avoid failures when a product is modified, but also provided substantial ideas on how the FMEAM could be integrated into the NPD process. Design changes can be problematic because designers are not always aware of the connectivity between the different parts of a product and can inadvertently ignore the incidental effects of a change [3]. Failures introduced by a change can be avoided through high redundancy in the product or, more economically, through an intelligent anticipation of later failures due to the change [15]. Thus, the FMEAM should bring superior results to the NPD process. The rest of this paper is organized as follows: section 2 describes the research methodology; section 3 gives a brief literature review, encompassing the methods FMEA and DRBFM; section 4 describes the interviews; section 5, which constitutes the major part of this paper, presents the FMEAM and explains the procedure for carrying it out; and section 6 concludes the paper by discussing the approach taken by the FMEAM and pointing out further work.

2 METHODOLOGY

Research approaches can be divided into the categories quantitative, qualitative and mixed [16]. They can also be classified as exploratory, descriptive, predictive, and explanatory research [17] [18] [19]. Exploratory research involves [17] an attempt to determine whether or not a phenomenon exists, i.e. does X happen? Descriptive research involves [17] examining a phenomenon to define it more fully or to differentiate it from other phenomena, i.e. what is it? How is it different? Predictive research involves [17] identifying relationships that enable us to speculate about one thing by knowing about some other thing, i.e. what is it related to? Explanatory research involves [17] examining a cause-effect relationship between two or more phenomena, i.e. what causes it? The research approach and its sources of data collection have to be chosen according to the purpose of the research. In this work, the research approach taken was qualitative, and it can be classified as descriptive research. Descriptive research requires that the investigator begin with a descriptive theory. Accordingly, a review of the literature about the methods FMEA and DRBFM was undertaken. Then, four focused interviews were conducted with product design stakeholders from different companies. A focused interview is used in a situation where the respondent is interviewed for a short period of time, usually answering set questions [18]. The interviews were carried out with the aim of understanding the practical analyses performed to avoid failures when a product is modified. In all interviews, the first quarter was conversational, to comprehend the scenario of the company the interviewee works for. Specific questions dealt with how the analyses are done, their objectives, whether they are formal or informal, which employees are involved, and whether the lessons learnt are registered and reused later. Contextual questions probed the benefits and difficulties of the analyses, and the resources and efforts necessary to accomplish them.

3 LITERATURE RESEARCH

It is appropriate to present a brief review of the literature about the methods FMEA and DRBFM.

3.1 FMEA, shortcomings and adaptations

Failure Mode and Effect Analysis (FMEA) is a quality method that identifies, prioritizes, and mitigates potential problems in a given product. FMEA begins with the identification of the functions or requirements of a system, subsystem or component, the ways in which they can fail, and the potential causes of failure. A small but representative group, with members of the design team and other disciplines familiar with the product life cycle, performs the analysis in one or more meetings. For each failure mode and cause, the team estimates the probability that they can occur and scores it on a scale from 1 to 10. After identifying the effects, the team scores the severity of each end effect on a similar scale. The team documents which actions have already been taken and which actions still have to be performed in order to avoid or detect the failure mode. Finally, the detection rating scored refers to the likelihood of catching the failure mode before it happens. The product of these terms is the risk priority number (RPN), which gives a relative magnitude for each failure mode. If an FMEA is done properly, the resulting documents contain a lot of knowledge about the product design. Thus, it is a valuable source of know-how for the company. Furthermore, since it supports the early detection of weaknesses in a design, a reduction of development costs and fewer changes during series production are expected [20].
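A minimal sketch of the resulting risk prioritization is given below in Python; the failure modes and ratings are invented for illustration:

# Occurrence, severity and detection are each rated on a 1-10 scale;
# their product is the risk priority number (RPN).
failure_modes = [
    # (failure mode, occurrence, severity, detection)
    ("seal leaks under vibration", 4, 7, 5),
    ("connector pin corrodes", 2, 8, 3),
]

ranked = sorted(((o * s * d, mode) for mode, o, s, d in failure_modes),
                reverse=True)
for rpn, mode in ranked:
    print(f"RPN {rpn:4d}  {mode}")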

Furthermore, since it supports the early detection of weaknesses in a design, a reduction of development costs and fewer changes during series production are expected [20].

However, there are numerous shortcomings within the failure analysis of FMEA, its implementation and utility. These include a lack of well-defined terms [21], problems with the terminology [22], problems with identifying key failures [23], and the fact that it is treated as a stand-alone technique [20], integrated neither with the design process nor with other methods of quality management. Other common complaints are that the method is tedious and time consuming [13] [24] [25], that its analyses are subjective (based on the user's experience) [26], that engineers consider it "laborious" [20], and that the analysis is often done to check rather than to predict [22]. When concerned with product design, it is important that failure analysis is carried out early in the design process in order to reduce the necessary amount of redesign. It is important to perform failure analysis in conceptual design, but it has been reported that FMEA is commonly performed too late in the NPD cycle and has very little effect on the overall product design [27].

To overcome these shortcomings, many adaptations and improvements have been made to the FMEA process, application and target. Previous papers [25] [26] have described concepts for automated FMEA employing qualitative simulators and reasoning processes to produce a report that is more timely, complete and consistent in the design cycle. Other authors [28] take automated FMEA a step further, developing a concept for analysis of the effects of significant multiple failures as well as single failures. A software tool that uses a quantitative simulator has been developed [29] to produce results that are not only more accurate for designers, but also more useful to test and diagnostics engineers. Bayes belief networks have been employed [21] to provide a language for design teams to articulate, with greater precision and consistency and less ambiguity, a physical system's failure cause-effect relationships and the uncertainty about their impact on customers. It has been shown [30] that "function to structure mapping" can be used in the early stages of design to assess diagnosability, i.e. a measure of the ease of isolating the cause of a malfunction.

3.2 DRBFM
DRBFM is a method of discovering problems and developing countermeasures by taking notice of, and discussing, intentional changes (design modifications) and incidental changes (changes in part environment) [31]. It is carried out throughout the NPD process to guide the design engineer during the engineering change process, to integrate design, production, quality and supplier personnel into this process, and to achieve a robust design [32]. DRBFM was developed by Tatsuhiko Yoshimura, who worked at Toyota Motor Corporation for 32 years. At the Japanese automobile manufacturer, Yoshimura was one of the engineers responsible for assuring the quality and reliability of the products, dedicating his professional life to avoiding problems before they occur. However, the other employees acting as "troubleshooters", namely solving problems only when they appeared, were apparently the heroes of the company [33].
It has been reported [33] that Yoshimura's summary of this experience is similar to the findings of a study conducted at MIT: "Nobody Ever Gets Credit for Fixing Problems that Never Happened" [34].


[Figure 1 sketches the GD3 philosophy: Good Design (robust design, reducing and noticing changes, to nip problems in the bud) applied to elements (components, sub-systems) and their interfaces, combined with Good Discussion and Good Design Review (a systematic search for problems in the areas of potential problems due to design or environmental changes).]
Figure 1: GD3 philosophy.
Despite that, Toyota has successfully implemented the DRBFM method. Nonetheless, the method has not yet been deeply investigated by academic researchers, nor broadly disseminated throughout other companies. A recent study [8] showed that Toyota has the highest product variety of all car manufacturers and a low volume of recalled vehicles as a percentage of registrations. This suggests an ability on Toyota's part to simultaneously offer relatively high variety whilst retaining tight control of product development and manufacturing processes [8].

Toyota has a philosophy called GD3 or Mizen Boushi (roughly translated as "countermeasures"). GD3 stands for Good Design, Good Discussion and Good Design Review [35]. The principle of Good Design is to use, as much as possible, robust components and to avoid design changes, reducing the complexity of error prevention; it also tries to nip problems in the bud. Good Discussion and Good Design Review are the processes of thoroughly discussing design plans to discover previously undetected problems, and they are used to formulate the best countermeasures to these problems [31] [5]. The GD3 philosophy is represented in Figure 1. At Toyota, the DRBFM method is applied in this context.

4 THE INTERVIEWS
Results from the interviews and a description of the respective companies are summarised in the following subsections.

4.1 First interview
The first interview was performed with the design engineering manager of a medium-sized firm which manufactures machines and equipment for plastic transformation. They manufacture two segments of machines: for bag making (automatic machines for small and large plastic bags, T-shirt bags, rounded-bottom bags, and sleeve labels, pre-perforated or not), and for thermoforming and vacuum forming (machines for one-way packaging, i.e. pots, lids and trays).

When a family of machines is required to be modified, a design engineering group first creates a virtual model and performs virtual tests. Then a complementary failure analysis is done based upon the knowledge of the group; therefore, competent employees are needed. The analyses are done to verify the novel design and to prevent failures from happening in the field. However, the company's major difficulty is the short time available to test a new machine before sending it to the client. For this reason, sometimes, after a couple of months, when they have already sold a few machines of the new model, a design error can be found and they need to make a recall. The interviewee said that this is a better situation than not sending the machine to the client and losing the sale. Lessons learnt are registered in a validation report for further sharing, but this has not proved efficient, so most of the lessons learnt are shared verbally.

4.2 Second interview
The second interviewee is a project manager at a medium-sized manufacturer of medical-ophthalmic equipment. There, the "engineering change management" process is systematically structured and controlled. All analyses of design changes are coordinated by an improvement group. First, they document who requested the change, the type of change (structural, material, treatment, etc.), the reason (cost reduction, upgrade, improvement of technology, etc.), the product documents (drawings, procedures, lists, etc.) that may need to be modified, and the person in charge of the modification. Then the group judges the risk of the change. If there is a risk, a change process is started; otherwise, no analysis is needed. The improvement group empowers a multidisciplinary team (mechanical and electrical specialists) to assess the potential failures introduced by the modification. For that, the team uses the methods FMEA and FMECA (Failure Mode, Effects and Criticality Analysis), complemented with CAD (Computer Aided Design) and FEM (Finite Element Method) analyses, followed by several tests executed on functional prototypes and discussions among the members of the multidisciplinary team. After the tests, the new design is approved or declined by the project manager (the technical responsible).

It was said that recalls do not occur due to modifications, but rather due to upgrades and technology improvements. It was also stated that before the implementation of the design change management process, requests for modifications were constant, and they diminished only after the implementation. Nonetheless, it was added that the beginning was difficult, as employees had to realise that it was necessary to work as a group.


4.3 Third interview
A third interview was done with the research and development manager of a medium-sized company. The enterprise develops automation equipment for a broad range of segments, such as ethanol, pulp and paper, chemical, metallurgical and agricultural. They have a structured and formal process for engineering change. The process can be initiated by claims from customers, or when internal personnel or the field technical support suggest improvements. Subsequently, the manager and the head of the research and development department carry out a critical analysis identifying the impacts (on the company) of the modifications, the risks involved, the necessary resources and time, and the level of difficulty of implementation. Then, if the modification is approved, a plan is developed which includes the necessary personnel, a schedule of alterations, and the needed tests. Tests are performed by groups of software and hardware specialists in different modules. The groups can consult a database of previous problems. After the tests, the engineering development manager gathers the test results and approves the modification or not.

It was said that the analysis is carried out to clearly identify what needs to be done and to uncover the possible failures in the functionality of the equipment due to the change. The benefits reported were: higher quality and reliability of products, more efficient work (focus on necessary work), superior customer satisfaction and a reduction of rework. Good and fast product development depends on the company's labour force; therefore, the interviewee pointed out that the main difficulty is having skilled labour to correctly perform the analyses. Besides, the interviewee affirmed that a product modification brings a higher probability of failure, and added that their recalls are associated with design modifications. Although there is pressure to quickly launch the novel product onto the market, it is better to have a robust product than a weak product with a client. For that reason, they spend more time on tests.

4.4 Fourth interview
The fourth interview was carried out at a local site of a large global manufacturer of hermetic compressors for air conditioning and refrigeration products and of centrifugal pumps. They produce hermetic compressors for domestic and commercial refrigeration and air conditioners. The interview was conducted with the manager of the product engineering group. When asked how failure analyses of "new" and "variant" products are done there, the answer came straight away: "through the FMEA method". He added: "a design error discovered by the client can cause us large financial harm"; consequently, he recognizes the necessity of applying FMEA. In the company, FMEA is used to check the design using non-functional prototypes to help visualization. This is done during a meeting promoted by the product designer (leader), who summons leaders from the quality department, manufacturing process, supply chain (if a supplier is involved) and research and development (if it is a new product). However, it is common for the leaders not to attend the meeting and also to lack knowledge about what was modified in the product. Thus, as the meetings are inefficient, many of them are needed to accomplish the necessary work. In the end, it was said that the lessons learnt are documented, yet are not retrieved in further analyses.
The interviewee concluded by saying that their recalls are not frequent, occurring mainly due to failures in the manufacturing process and less due to design modifications.

4.5 Findings
Results from the interviews not only confirm that design modifications introduce potential failures into the product, but also suggest the necessity of the following:
• Multidisciplinary work;
• A structured process for managing engineering changes;
• Empowerment of responsibilities;
• Committed personnel;
• Understanding of the modifications.

5 DESIGN FAILURE MODE AND EFFECT ANALYSIS OF MODIFICATIONS (FMEAM) PROPOSAL
FMEAM was developed based on the literature about the FMEA and DRBFM methods and on remarks from the focused interviews. FMEAM attempts to find any potential failures introduced into the product by modifications. It aims to encourage creative discussion of even more issues than FMEA does, with participants stimulating one another to notice things, thus preventing problems. FMEAM links design and evaluation in order to promote an integrated prevention of failures. It should be seen as a live document integrated with the NPD process, rather than just a task to be done after the design is completed; that is, its documentation shall be constantly updated with data from evaluation results and from the field.

5.1 Conducting FMEAM in an integrated manner
Most current FMEA procedures are a one-time task done in the Testing and Validation phase, which may serve as an important design check but otherwise contributes little to the design. The proposal here is a series of activities to be done throughout the NPD process. During the Conceptual Design phase, data such as field reports, checklists and other guidelines based on lessons learned, technology advances, and the history or analysis of similar designs that have proven successful should be collected, and block diagrams should be developed to illustrate the physical and functional relationships between items and interfaces within the system. In the Detailed Design phase an FMEAM meeting shall be conducted just prior to the release of the final drawings. The following tools, regarding the system under analysis, must be provided before the FMEAM meeting:
• Block diagram: to show interdependencies of functional entities and interfaces.
• Fault tree analysis (FTA): to show the logical relationship between a failure and its causes, and to provide a logical framework for expressing combinations of component failures that can lead to system failure.
• Change point list: a list of what has been changed, or is intended to be changed, in the components of the system. This list aims to clearly identify and organize the changes. Examples of change points are changes in structure, material, surface, thermal treatment, manufacturing process and stress/load.
• Relational matrix between functions and components: a matrix which correlates the components and their functions. It is intended to clarify failure modes by identifying which function is affected.
• Previous drawings and prototypes (if available): to help visualization.
• History of field failures: to prevent a known problem from occurring again.
The tools should be used throughout the FMEAM meeting. They are meant to identify the intentional changes made and also the changes resulting from them. In addition, they should help to visualize the system structures and the functions of components during the FMEAM meeting. Results from the discussion in the FMEAM meeting shall be reflected in the Testing and Validation phase. In addition, redesigns may occur in the NPD process; therefore, those results should also be reflected in redesigns in a timely manner. Finally, during the Product Use and Support phase, FMEAM should be used as a guide to collect field data for assessing analysis accuracy and for developing maintenance troubleshooting procedures. Conducted in this manner, FMEAM shall enforce a disciplined review of the baseline design and may allow timely feedback to the design process.

5.2 Adaptations made in the traditional FMEA worksheet
The headings of the FMEA table were modified. The scores for severity, occurrence, detection, and the risk priority number (RPN) were replaced by "Adverse effects on customer", with a scale of three levels: A, B, C (A being the most important). The "customer", as referred to for the purpose of FMEAM, includes not only the "end users" but also the design staff and teams of the subsequent processes, and the engineers operating in fields such as production, assembly, service, etc. The most relevant section inserted was "Discussion Results", since the value added by the FMEAM activity depends on the extent to which new and specific items are identified and entered into this section. The section encompasses "Items to reflect in design work" and "Items to reflect in evaluation work", each of them followed by a "Responsibility and deadline" column, where the person responsible for following up the action and the deadline for its implementation are entered. Table 1 shows the FMEAM headings.

5.3 Performing FMEAM
FMEAM has to be performed in a meeting by a multidisciplinary team, to take advantage of the sharing of their specific knowledge. The design engineer responsible for the system (part) to be analysed should lead the meeting.

Engineers from the design, evaluation, production engineering and manufacturing, inspection and material departments shall be selected for the meeting. Moreover, experienced engineers with the intention of getting actively involved in the discussion, by putting themselves in the place of the design leader, should be selected. In the FMEAM meeting the design engineer (leader) should initially explain the mechanism and the functions of the concerned part, a general idea of its design, and any specific factors that have been given special consideration. The participants should then ask questions and/or make remarks about any matter of concern in relation to the given explanation, and discuss these with the design engineer. This is a way of ensuring thorough mutual understanding within the FMEAM team in relation to the concerned part. Afterwards, the discussion is conducted by filling out the sections of the FMEAM worksheet. The following elucidates all the headings of the worksheet.
1. Component name / change: enter the name of the component subject to analysis, the modification made or planned to be made to it, and the details of the modification. In the case of a newly adopted part, it is preferable to have a comparable part on hand, if possible, for the purpose of comparative evaluation.
2. Function: enter the function intended for the subject of analysis as concisely as possible. If there is more than one intended function, enter each function separately.
3. Potential failure mode due to change: enter how the component may fail as a result of the modification. If there is more than one intended function, make separate entries for each of the functions. Also make entries regarding the factors that cause the loss of commercial value, such as abnormal noise and poor appearance quality, in a concrete expression phrased from the customer's point of view.
4. Root cause / dominant cause: the causes of the failure of function and of the loss of commercial value are, in a sense, the weak points of the current design. Indicate, therefore, the root causes of the failure and of the loss of commercial value as concretely as possible, to facilitate the future implementation of the measures formulated from the design perspective and to allow participants to clearly visualize the concern.

Table 1: FMEAM headings. The worksheet is headed "Active discussion about the design changes" and contains the following columns: Component name / change; Function; Potential failure mode due to change; Root cause / dominant cause; Adverse effects on customer (system); Importance; Current design to avoid concern point (incl. design rule, design standard & check items); Discussion Results, split into "Items to reflect in design work" and "Items to be reflected in evaluation work", each followed by a "Responsibility & deadline" column; and Action Results.

5. Adverse effects on customers: customers, as explained in the previous section, can be assumed to exist at various levels. In this column, indicate the phenomena which customers will presumably experience, using expressions phrased from the customer's point of view, so that the type of adverse effect can be clearly understood. Classify the adverse effects into three ranks - A, B or C - according to the severity of the impacts, and enter the rank in the "Importance" column.
6. Current design to avoid concern point: enter the considerations made in the design to prevent the failure and loss of commercial value. Entries shall be made in such a manner that the information in this column can be readily associated with the relevant information in the "Root cause / dominant cause" column.
7. Items decided in FMEAM to reflect in design work and in evaluation work: the participants are to attempt to identify potential problems through discussion, and then summarize the decisions made into these columns using concise expressions. Enter those items that should be considered in the design work into the "Items to reflect in design work" column, and those items to be considered during the test and evaluation work, such as the conditions and items of evaluation, into the "Items to reflect in evaluation work" column. Be sure to provide clear and readily understandable instructions on what needs to be done. For each of the items decided, enter the name of the responsible employee and the deadline for implementation into the "Responsibility and deadline" column.
8. Action results: for each item listed in the previous columns (7), enter the date of implementation of the measures, the details of the measures implemented, and the consequences of the implementation.
Finally, an FMEAM report shall be written to summarize the results and to report the decisions and recommended actions to be taken for the elimination or reduction of failures. It should also include the problems which could not be corrected by design. Since FMEAM is intended to be a living document, this report shall be integrated into the NPD process as a deliverable to the design and evaluation work.

6 FINAL REMARKS
Failure Mode and Effect Analysis (FMEA) has been intensively used to ensure the quality and reliability of products, and over the years several adaptations and improvements have been made to the method. Despite the use of FMEA and the several attempts to mitigate its shortcomings, many incidents in the field are still occurring, which is costly to companies. This paper has presented a method for analyzing the effects of changes made to a design. The method was conceived from the knowledge that changes carry a high potential for failure. It is based on the FMEA and DRBFM methods and on findings from focused interviews. The novel method was called Failure Mode and Effect Analysis of Modifications (FMEAM), and it aims to assure product quality after design changes. The interviews indicated that analyses of engineering changes should follow a structured process. Additionally, they accentuated the need for a multidisciplinary work group, empowerment of responsibilities, and personnel committed to the work and aware of the modifications.

FMEAM should be applied neither in isolation nor merely as a design check. Furthermore, its practice should be continuous, within and between NPD processes, so that potential failures are constantly detected and a history of potential failures due to changes is built up. FMEAM should promote the identification of potential problems through active discussion of modifications and of the causes of such problems. Consequently, it is vital that all participants in the discussion have a good understanding of the substance of the modifications. Although the FMEAM method is based on established methods, further action research is certainly necessary in order to establish the feasibility, usability and utility of the new method. In summary, the intention of the proposed method is to meet the current necessity of companies, which is to launch novel products onto the market in shorter cycles and to effectively assure the quality of their new products.

7 ACKNOWLEDGMENTS
The authors are grateful to the companies for volunteering to take part in this research, to NUMA colleagues, and to CNPq for the financial support.

8 REFERENCES
[1] Unger, D. W., Eppinger, S. D. 2006, Improving Product Development Processes to Manage Development Risk, MIT Sloan Research.
[2] Lauglaug, A. S. 1993, Technical Market Research: Get Customers to Collaborate in Developing Products, Long Range Planning, 26(2): 78-82.
[3] Clarkson, P. J., Simons, C., Eckert, C. 2004, Predicting Change Propagation in Complex Design, Journal of Mechanical Design, 126(5): 788-797.
[4] Gerst, M., Eckert, C., Clarkson, J., Lindemann, U. 2001, Innovation in the Tension of Change and Reuse, Proceedings of the 13th International Conference on Engineering Design: Design Research – Theories, Methodologies and Product Modelling, Professional Engineering Publishing, 371-378.
[5] Schmitt, R., Krippner, D., Hense, K., Schulz, T. 2007, Keine Angst vor Änderungen! Robustes Design für Innovative Produkte, Qualität und Zuverlässigkeit, 52(03): 24-26.
[6] Chao, L. P., Ishii, K. 2007, Design Process Error Proofing: Failure Modes and Effects Analysis of the Design Process, Journal of Mechanical Design, 129(5): 491-551.
[7] Rhee, M., Haunschild, P. R. 2006, The Liability of Good Reputation: A Study of Product Recalls in the U.S. Automobile Industry, Organization Science, 17(1): 101-117.
[8] Bates, H., Holweg, M., Lewis, M., Oliver, N. 2007, Motor Vehicle Recalls: Trends, Patterns and Emerging Issues, Omega, 35(2): 202-210.
[9] Barber, B. M., Darrough, M. N. 1996, Product Reliability and Firm Value: The Experience of American and Japanese Automakers, 1973-1992, Journal of Political Economy, 104(5): 1084-1099.
[10] Haunschild, P. R., Rhee, M. 2004, The Role of Volition in Organizational Learning: The Case of Automotive Product Recalls, Management Science, 50(11): 1545-1560.
[11] Davidson, I., Worrell, D. L. 1992, Research Notes and Communications: The Effect of Product Recall Announcements on Shareholder Wealth, Strategic Management Journal, 13(6): 467-473.

[12] Schmitt, R., Krippner, D., Betzold, M. 2006, Geringere Fehlerkosten – höhere Zuverlässigkeit, Qualität und Zuverlässigkeit, 51(06): 66-68.
[13] Stone, R., Tumer, I., Stock, M. 2005, Linking Product Functionality to Historic Failures to Improve Failure Analysis in Design, Research in Engineering Design, 16(1): 96-108.
[14] Stamatis, D. H. 1995, Failure Mode and Effect Analysis: FMEA from Theory to Execution, ASQC Quality Press.
[15] Eckert, C., Zanker, W., Clarkson, P. J. 2001, Aspects of a Better Understanding of Changes, International Conference on Engineering Design, Glasgow, UK, 21-23 August.
[16] Creswell, J. 2003, Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, Thousand Oaks, USA, Sage.
[17] Dane, F. C. 1990, Research Methods, Pacific Grove, USA, Brooks/Cole.
[18] Yin, R. K. 1994, Case Study Research: Design and Methods, Thousand Oaks, USA, Sage.
[19] Marshall, C., Rossman, G. B. 1995, Designing Qualitative Research, Thousand Oaks, USA, Sage.
[20] Wirth, R., Berthold, B., Krämer, A., Peter, G. 1996, Knowledge-based Support of System Analysis for the Analysis of Failure Modes and Effects, Engineering Applications of Artificial Intelligence, 9(3): 219-229.
[21] Lee, B. 2001, Using Bayes Belief Networks in Industrial FMEA Modelling and Analysis, Proceedings of the Annual Reliability and Maintainability Symposium, IEEE Press, Philadelphia, USA, 7-15.
[22] Kara-Zaitri, C., Keller, A., Barody, I., Fleming, P. 1991, An Improved FMEA Methodology, Proceedings of the Annual Reliability and Maintainability Symposium, IEEE Press, Orlando, USA, 248-252.
[23] Bednarz, S., Marriott, D. 1988, Efficient Analysis for FMEA, Proceedings of the Annual Reliability and Maintainability Symposium, IEEE Press, Los Angeles, USA, 26-28 January, 416-421.
[24] Hunt, J. E., Pugh, D. R., Price, C. J. 1995, Failure Mode Effect Analysis: A Practical Application of Functional Modelling, Applied Artificial Intelligence, 9(1): 33-44.
[25] Price, C., Pugh, D., Wilson, M., Snooke, N. 1995, The Flame System: Automating Electrical Failure Mode and Effects Analysis (FMEA), Proceedings of the Annual Reliability and Maintainability Symposium, IEEE Press, Washington, USA, 16-19 January, 90-95.
[26] Bell, D., Cox, L., Jackson, S., Schaefer, P. 1992, Using Causal Reasoning for Automated Failure Modes and Effects Analysis (FMEA), Proceedings of the Annual Reliability and Maintainability Symposium, IEEE Press, Las Vegas, USA, 21-23 January, 343-353.
[27] McKinney, B. 1991, FMECA, The Right Way, Proceedings of the Annual Reliability and Maintainability Symposium, IEEE Press, Orlando, USA, 29-31 January, 253-259.
[28] Price, C. J., Taylor, N. S. 2002, Automated Multiple Failure FMEA, Reliability Engineering and System Safety, 76(1): 1-10.


[29] Montgomery, T., Marko, K. 1997, Quantitative FMEA Automation, Proceedings of the Annual Reliability and Maintainability Symposium, IEEE Press, Philadelphia, USA, 13-16 January, 226-228.
[30] Clark, G. E., Paasch, R. K. 1996, Diagnostic Modelling and Diagnosability Evaluation of Mechanical Systems, Journal of Mechanical Design, 118(3): 425-431.
[31] Shimizu, H., Imagawa, T., Noguchi, H. 2003, Reliability Problem Prevention Method for Automotive Components - Development of GD3 Activity and DRBFM (Design Review Based on Failure Mode), Proceedings of the International Body Engineering Conference, SAE International, Chiba, Japan, 371-376.
[32] Schorn, M., Kapust, A. 2005a, Im Fluss: Wie Toyota von DRBFM Profitiert, Qualität und Zuverlässigkeit, 50(04): 56-58.
[33] Schorn, M. 2005, Entwicklung mit System: Wie Toyota von DRBFM Profitiert, Management und Qualität, 12: 8-11.
[34] Repenning, N. P., Sterman, J. D. 2001, Nobody Ever Gets Credit for Fixing Problems that Never Happened: Creating and Sustaining Process Improvement, California Management Review, 43(4): 64-88.
[35] Schorn, M., Kapust, A. 2005b, DRBFM - die Toyota Methode, VDI-Z Integrierte Produktion, 147(7/8): 67-69.

Invited Paper
Object-Oriented Simulation Model Generation in an Automated Control Software Development Framework
M.J. Foeken, M.J.L. van Tooren
Faculty of Aerospace Engineering, Delft University of Technology, Kluyverweg 1, 2629HS Delft, The Netherlands
{m.j.foeken, m.j.l.vantooren}@tudelft.nl

Abstract
The automated development of control software for mechatronic systems requires the integration of control models for design and verification purposes. To obtain high-fidelity models, unintended behaviour of the system must be taken into account, which requires knowledge about the system's architecture and component interaction. Furthermore, integration of design and analysis tools into a meta-model framework is needed to exchange system information. This paper discusses the use of Modelica to organise libraries and model the behaviour of systems in this framework, and the need for a knowledge-based tool to automatically generate these models.
Keywords: Mechatronics control software, Object-oriented modelling, Simulation, Integration framework, Knowledge-based engineering

1 INTRODUCTION
Nowadays, computers control industrial machines, information devices, aircraft and office equipment, to name just a few. The development of such mechatronic products requires the collaboration of mechanical designers, electronic system engineers, aerodynamic engineers, and software engineers. Whereas software development for mechatronic systems in industry benefits from advances in tools and supporting systems, in general it still suffers from problems such as a lack of integration across design domains, a lack of physical modelling, the need to handle irregular situations, and, foremost, a lack of automation. To attack these problems, a project named 'Automatic Generation of Control Software for Mechatronic Systems' was started to develop a set of prototype tools and an integration framework with which an interdisciplinary product development team can automatically generate control software for mechatronic systems. Figure 1 shows the framework with the set of eight tools that will be developed within the project, each represented as a white block.

The project envisions the use of a functional model as input to the control software generation process, specifying the required functionality of the system being developed. The 'Function Modelling' tool will create a formal representation of these functions, which will be used to generate the necessary behaviour based on qualitative reasoning methods. At the same time, the function model will enable the 'Mechatronic Feature Modelling' tool to generate the product definition by using mechatronic features, or function performers [1]. The behaviour description and the mechatronic feature model serve as input for the mechanical embodiment and electrical system design. Combined with data from analysis tools such as finite element method (FEM) or computational fluid dynamics (CFD) solvers, which are often used in aerospace design, these form the basis for the control code and control model generation processes.



In Figure 1, these existing commercial software tools, e.g. CATIA for mechanical CAD design or Fluent for CFD analysis, are represented by dashed-line blocks.

Figure 1: Systems architecture, with the white blocks representing tools being developed in the project. Dashed-line blocks correspond to existing, commercial software tools. At the end of the design process, the generated code can be verified at software and hardware level, using either

the generated control models or the prototype hardware, respectively. The integration of design and analysis tools in an automated framework will support a more concurrent software design process, in contrast to the sequential process often seen in practice, by automating the sharing of information across the design domains.

The framework in Figure 1 shows the control code generation and control model generation processes in parallel. In this view, the 'control model' is defined as a model of the entire system minus the control software. This means that the control hardware, for example a micro-controller, might be part of the control model, if required. The need for a control model generator in the software development framework arises from the wish to be able to verify the software at earlier stages of development, before the real hardware has been built. A high-fidelity control model would enable verification of the control software using emulation and/or simulation methods, which partially eradicates the need for more expensive machine-based verification on the one hand, and enables software verification at earlier stages of development on the other.

A second point of interest to the project is the integration of irregular situations and operating modes into the controller design development. Software development has to deal with irregular operation modes and abnormal situations, as well as regular modes like initialization, shutdown, maintenance and calibration. In terms of verification, this requires not only the controller, but also the control model to be adjustable to these situations. To obtain a high-fidelity model, not only the intended behaviour needed to realise the required functionality, but also behaviour that was not anticipated beforehand must be included. In that way, unexpected side effects of the design implementation can be discovered before prototype testing begins. The methodology supporting the automated generation of control models while taking these requirements into account is the main focus of the current research. First, however, the implementation of modelling and simulation concepts during the controller design process will be discussed.

2 MODELLING AND SIMULATION

2.1 Object-Oriented Physical Modelling
As noted in [2], it is important to make a distinction between modelling and simulation. Whereas Webster's Dictionary defines modelling as 'to produce a representation or simulation of', the Oxford Dictionary defines it as 'to devise a mathematical model of'. More accurately, modelling can be defined as creating a simplification of reality based on physical principles, while simulation, in the broad sense, is an imitation of behaviour, for which mathematical models can be used. The mathematical model normally used in controller design, be it feedback, sequential or hybrid, is often presented as a block diagram containing transfer functions, representing the particular behaviour of a system by means of state-space matrices, eigenfrequencies and damping coefficients, and mathematical operators. Often, linear models that are only valid at a nominal design point are used. Matlab/Simulink [3], the de facto standard in controller design, fully supports this modelling paradigm. However, taking into account the entire system design, there are more types of mathematical models available to

verify whether requirements are met. FEM and CFD analyses are frequently applied methods to verify a design, each requiring a different kind of mathematical model based on other physical principles, and subsequently a different kind of simulation. Although an integrated parallel simulation might be attractive in terms of physical accuracy, the computational effort and time required for large-scale CFD or FEM simulations make the combination unsuitable for controller verification. Instead, combining these different types of simulation is normally done in a sequential order, with the results of the first used in the model of the second. With CFD methods, ranging from linearised potential flow to the Navier-Stokes equations, lift, drag and moment coefficients and stability derivatives for a range of airspeeds can be derived, which are subsequently used in flight mechanics models during controller design. Typical properties that can be derived with these analysis tools and then used in controller design are collected in Table 1.

The block diagrams frequently used in controller design tools such as Simulink are in principle nothing more than mathematical equations in a visual form, where the basic elements have no direct relation with the physical world. To model a real-life system with such elements, it is necessary for the designer to know:
• How to represent the expected behaviour of (a part of) the system in mathematical equations, and,
• In what form the equations must be written such that the input and output can be 'connected' to the equations of other parts.
An alternative to this signal-based approach is bond graphs. Independent of the physical domain, the graphs consist of basic elements like junctions, resistors and capacitors to represent the flow of energy through the system. This 'physical modelling' already makes the modelling effort less prone to error. To move the viewpoint of the modeller from the equation level to the component level, bond graph elements can be combined into a model representing a physical component.

Parallel to the application of physical modelling languages, the emergence of the object-oriented (OO) modelling paradigm and languages like Modelica [4] allowed for new model development methods. The OO modelling paradigm nicely suits the engineering view on the product definition, as models built from objects can closely mimic the real world [5]. One has to keep in mind the difference between the use of objects as basic building blocks and the use of OO programming concepts like encapsulation, inheritance and polymorphism. Encapsulation is a method to 'hide' information, by concealing the internal methods of a class from objects that interact with it; the part that is visible to other objects is called the interface. Polymorphism is also related to interfaces, and is in general a method to ensure that different datatypes, e.g. integers or characters, can be handled by a consistent interface. Finally, (class) inheritance deals with the specialization of classes by introducing subclasses which inherit the attributes and methods of their parent class and subsequently add new attributes and methods of their own. The use of polymorphism and inheritance in modelling languages will be further discussed in Section 3.2. Although a model can be built up from components that have object properties, the modelling language might not support these basic OO concepts.
Reference [6] shows that bond graphs can be viewed as a kind of object-oriented physical modelling, and that OO languages like Modelica can be used to textually describe bond graphs. Furthermore, it is argued that the principle of encapsulation and inheritance makes the use of


component libraries and 'the building of large and complex engineering systems more safe.'

2.2 Related Research
Object-Oriented Modelling
In the framework of the Open Library for Models of Mechatronics Components (OLMECO) project, [2] describes the architecture of an object-oriented library of reusable simulation models. The suggested model structure consists of three layers: technical components, physical concepts, and mathematical relations. The choice for these three viewpoints is based on the fact that each of them needs consideration when modelling. Figure 2 shows an adapted version of the top-level view of the library architecture, using UML notation. On the technical component level, the system decomposition is built up from various components, each belonging to a component class. These components can be represented by a conceptual physical description, which is built up from one or more bond graph elements, representing mathematical equations. The same reference also discusses the need for a taxonomy of component classes to handle the complexity associated with large libraries. The kind-of relations do not restrict the structure to be tree-like; instead, a lattice structure can also be obtained.

Figure 2: Top-level view of OLMECO library architecture, adapted from [2].
The use of objects is further extended in [7], which presents the concept of Composable Objects, combining form (CAD) and behaviour into a single object. By connecting these component objects to each other through their ports, it is possible to create both a system-level design description and a virtual prototype of the system. The interaction between components is port-based and reconfigurable, so that components of different levels of detail are interchangeable. The relation between form and behaviour is given by a parametric description, ensuring that both remain consistent with each other. The behaviour of a mechanical design can be derived from constraints between parts [8].

Reference [9] discusses the use of an ontology for ports for automatic model composition. With this ontology one can represent and verify compatibility between the ports in a connection, and reason about which interaction models to select automatically. The authors also note that when connecting ports one must take into account the type of interaction taking place. Often, these interaction models depend on the parameters of both subsystems involved. On the same topic, [10] presents a framework to capture the interaction in component-based design. The system checks the compatibility of a component with a certain

interaction type, preventing the coupling of incompatible components.

One of the issues when applying a component-based approach for this type of physical modelling is that the physical behaviour is not limited to the intended behaviour, which is often described in a single domain. An electric actuator, intended to transform electrical energy into mechanical energy, might produce an amount of heat that influences not only its own behaviour, but also that of other components. The ports of a component should therefore be neither fixed nor restricted to only the intended connections, but must depend on the system's implementation.

3 CONTROL MODEL GENERATION
From the literature, part of which is mentioned in the previous section, as well as from our own research, the following prerequisites have been recognised that enable automatic model generation in the context of the software development framework:
• It must be possible to build up the system architecture from basic technical components.
• These components must have one or multiple representations in the physical modelling world. The taxonomy of the physical models should be based on component classes, amongst others.
• When connecting elements, port compatibility must be checked to prevent the coupling of incompatible elements.
• Not only the intended behaviour, but also 'secondary' behaviour must be recognised and included.


• For the required physical system parameters it must be possible to trace back to the providing design or analysis tools, or e.g. a database.
Of these, the first and last items on the list relate to integration into the development framework, whereas the other three concern the methodology supporting the control model generation process. Previously, the authors have identified that using SysML [11] as a language for mechatronic system modelling supports the integration into the development framework by keeping an object-centred view on the system [12]. Modelica, on the other hand, supports a component-based modelling paradigm that can be used in combination with controller design methods and tools. In the next sections the above-mentioned points will be further discussed. Apart from showing how SysML and Modelica support the automated generation of control models, a possible methodology to come from one to the other while taking these requirements into account will be introduced.

3.1 Mechatronic System Modelling
As stated in the introduction, the project considers functional modelling as part of the framework. Reference [13] considers the application of the Function-Behaviour-State modeller [14] as a basis for this functional model and discusses its use as a meta-modeller, facilitating the integration of other modelling tools. It shows that SysML is both powerful and flexible in representing the fundamental FBS concepts, and as such can be used to build meta-models. The representation of the system's architecture in SysML is based on mechatronic or physical features. These features represent a physical component that performs a certain function, without specifying its mechanical embodiment beforehand, and can thereby be considered as a bridge between the function and the implementation

level. The level of abstraction of the functions and the mechatronic features depends on the application domain as well as on the amount of 'zoom'; as a consequence, no common level of detail, nor a fixed number of features, can be defined. However, in general the decomposition of the design continues until a physical relation between function, model and behaviour is known [15]. In Section 3.3 the discussion on these levels of detail will be extended to the use of high-, middle- and low-level primitives as a possible solution for the automated control model generation process.

The possibility to switch between models or elements of different complexity within a larger system model, named polymorphic modelling by De Vries [17][18], can be accommodated by ensuring that the ports of these model elements are identical. In object-oriented programming terms, coping with replacing objects is covered by the subtyping concept, which is based on the Liskov substitution principle [19].

In terms of technical implementation, the use of SysML and the associated XML-based XMI language enables an easy mapping of the meta-model to other languages by means of one of the available XML processing techniques. The integration of SysML into the systems development process is also discussed in [16].

3.2 Polymorphic Physical System Modelling
The Modelica language has been designed to model large, complex and hybrid physical systems and is based on differential and algebraic equations. It supports non-causal and object-oriented modelling techniques, and as such stimulates the reuse of modelling knowledge. Although the language is text-based, models can also be presented to the user as schematic block diagrams, each block representing a system component.

Figure 4: ‘DC-motor’ component in SysML, left for an idealised model, right with friction and variable resistor due to temperature effects.

In general, a Modelica class contains a public declaration of parameters, variables and class instances, followed by the definition of equations and the connections between the instances. Connectors in Modelica can be either energy or information based, depending on the domain. Similar to bond graphs, the connection between physical components is achieved by specifying a flow and a non-flow variable for each connector, which, when multiplied, have the dimension of power. The input to actuators and the output of sensors is a data stream. The idealised dynamics model of a DC motor can be represented by a combination of a voltage source, a resistor, an electrical ground, an electro-to-mechanical transformer, and an inertia element, see Figure 3. The parameters defining the behaviour of each of these separate elements, like the torque constant, are typically provided in the motor specification. Instead of showing the entire internal structure of the DC motor's model, the 'DC-motor' component can be characterised by its sub-element parameters, as in Figure 4. The information embedded in the component is, however, not restricted to parameters only, but could also contain links to mechanical CAD drawings, including dimensions, masses, inertias, etc.
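As an illustration of the flow/non-flow convention, the sketch below mirrors the electrical pin connector of the Modelica Standard Library (the exact type names may differ between library versions):

connector Pin "Electrical connector (sketch)"
  Modelica.SIunits.Voltage v "Potential at the pin (non-flow variable)";
  flow Modelica.SIunits.Current i "Current into the pin (flow variable)";
end Pin;

The product v·i has the dimension of power; in a connection, non-flow variables are set equal and flow variables sum to zero, mirroring Kirchhoff's laws.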

Figure 3: Idealised DC-motor model.
However, if a non-ideal model is required, taking into account e.g. friction and variable resistance due to heating, the DC-motor model has to be extended with additional elements, see Figure 5, requiring additional parameters as well. On the technical component level, however, the element is still a 'DC-motor'.
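As a sketch of how the idealised model of Figure 3 could be written down, the following fragment assembles the five elements from the Modelica Standard Library (component paths as in MSL 3.x; all parameter values are arbitrary placeholders):

model IdealDCMotor "Idealised DC motor of Figure 3 (sketch, placeholder values)"
  parameter Modelica.SIunits.Resistance R = 0.5 "Armature resistance";
  parameter Real k(unit="N.m/A") = 0.1 "Torque constant";
  parameter Modelica.SIunits.Inertia J = 1e-5 "Rotor inertia";
  Modelica.Blocks.Interfaces.RealInput u "Voltage command";
  Modelica.Mechanics.Rotational.Interfaces.Flange_b shaft "Output shaft";
  Modelica.Electrical.Analog.Sources.SignalVoltage source;
  Modelica.Electrical.Analog.Basic.Resistor resistor(R=R);
  Modelica.Electrical.Analog.Basic.Ground ground;
  Modelica.Electrical.Analog.Basic.EMF emf(k=k) "Electro-to-mechanical transformer";
  Modelica.Mechanics.Rotational.Components.Inertia inertia(J=J);
equation
  connect(u, source.v);
  connect(source.p, resistor.p);
  connect(resistor.n, emf.p);
  connect(emf.n, source.n);
  connect(source.n, ground.p);
  connect(emf.flange, inertia.flange_a);
  connect(inertia.flange_b, shaft);
end IdealDCMotor;

Extending this model with friction and a temperature-dependent resistor, as in Figure 5, changes only the internal structure; the connectors u and shaft remain the same, although the extra parameters mean the two models need not be subtypes in the stricter sense discussed below.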

Figure 5: DC-motor model with rotational friction and variable resistance due to heating.
While class inheritance is a well-known and often used concept in OO programming, the difference between class and interface inheritance, or nominal subtyping, lies in the fact that the latter only describes when an object can be used in place of another, and does not describe the object's implementation [20]. With class inheritance, the methods are also inherited, and can subsequently be extended or possibly changed, depending on the language. For the subtyping concept, inheritance is however not a requirement: an object can also be a subtype of another object without using interface inheritance, which is then called a structural subtype. The type, or interface, is that part of the class that enables the substitution of one class by another. In terms of physical modelling, this means that at least the ports or connectors of the component's physical description must be the same. However, characteristic parameters might also be part of the subtype, which means that the DC-motor models in Figures 3 and 5 can be considered not to be subtypes.


The subtyping mechanism in Modelica is based on the object theory of Abadi and Cardelli [21]. The language specification defines a type or interface as 'the "essential" part of the public declaration sections of a class that is needed to decide whether A can be used instead of B' [4], where A and B are classes or components. At the same time, 'A is a subtype of B, or equivalently, the interface of A is compatible to the interface of B, if the "essential" part of the public declaration sections of B is also available in A'. Due to the nature of the language, Modelica does not accommodate the re-declaration of the methods of a class (i.e. its equations) when using the class inheritance mechanism, which prevents the full use of the subtyping concept when using class inheritance. Furthermore, when comparing Figures 3 and 5, the heating resistor component in Figure 5 is not a subclass of the normal resistor, as the resistance R is no longer a fixed parameter, but a variable. These two limitations prevent the creation of a natural specialization hierarchy based on class inheritance relations alone. This problem can be partially circumvented by using a so-called 'partial' model, such that one can create an equation-less superclass from which multiple subclasses can inherit. In this way, grouping of components into component classes is still accommodated. A hierarchic class structure can be obtained by adding tagged information to each class, which can be done by using inheritance. Altogether, Modelica enables both the use of OO modelling concepts to structure the libraries and an object-based approach that supports the easy assembly and updating of the models themselves. The relation between elements on the component and control model levels is depicted in Figure 6.
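A compact sketch of how these mechanisms look in practice (class names are illustrative assumptions, not taken from an existing library): a 'partial' model declares only the connectors that define the type, and a replaceable component constrained by that interface can be redeclared with any subtype, including structural subtypes that never inherit from it:

partial model MotorInterface "Type-defining interface: connectors only, no equations"
  Modelica.Blocks.Interfaces.RealInput u "Voltage command";
  Modelica.Mechanics.Rotational.Interfaces.Flange_b shaft "Output shaft";
end MotorInterface;

model Actuator "Assembly with an exchangeable motor submodel"
  Modelica.Blocks.Interfaces.RealInput command "Voltage command";
  // IdealDCMotor (sketched above) declares the same public connectors as
  // MotorInterface, so it is a structural subtype; no inheritance is needed.
  replaceable IdealDCMotor motor constrainedby MotorInterface;
  Modelica.Mechanics.Rotational.Components.IdealGear gear(ratio=5);
  Modelica.Mechanics.Rotational.Interfaces.Flange_b out "Geared output";
equation
  connect(command, motor.u);
  connect(motor.shaft, gear.flange_a);
  connect(gear.flange_b, out);
end Actuator;

// Inside an enclosing model, a non-ideal variant (cf. Figure 5) could then be
// substituted without touching Actuator itself (HeatingDCMotor is hypothetical):
//   Actuator hotCase(redeclare HeatingDCMotor motor);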

[Figure 6 shows four stacked one-to-many compositions, one per viewpoint: a mechatronic system (meta model) is composed of components; a component (physical component classes) has one or more physical descriptions; a physical description (control model) is composed of Modelica elements; and a Modelica element (mathematics) is composed of equations.]

Figure 6: Relationships between components and elements at different viewpoints.

3.3 Model Generation
As becomes clear from the previous sections, a one-to-one mapping of components at different viewpoints is easily possible, with Modelica providing the capabilities to easily upgrade the physical model using object-oriented modelling techniques. However, if unintended behaviour due to component interaction in the model has to be included, a direct mapping method will not be sufficient, as additional modelling knowledge is required. The application of knowledge engineering for the development of conceptual simulation models is discussed in [22]. There, the focus is on how to capture, represent and organise the knowledge required for simulation modelling.


The use of expert knowledge in engineering applications, to automate the part of the design and analysis process that is repetitive, non-creative and time-consuming, can be supported by Knowledge Based Engineering (KBE) techniques and tools. In general, KBE tools implement rule-based design, parametric CAD and object-oriented programming [23]. Though the main focus is often on creating new (CAD) designs, [24] gives an example of the use of domain-specific modelling languages (DSLs) as a base for KBE models that are not restricted to the geometrical domain. The method is applied to the design of wire harnesses, which is both a geometrical and a conceptual problem, extending the application of KBE design methods beyond the geometrical domain. The basic building blocks, named high-level primitives (HLPs), represent classes containing sets of design rules that determine parameter values to instantiate objects. The collection of HLPs describing the system is called the product model and provides a parametric view on the system. Associated capability modules (CMs) describe processes that can be applied to the HLPs to generate certain views on the system, like e.g. a 3D or a finite element model. The product models constructed using current KBE systems, like the ICAD system or Genworks' GDL [25], are object-oriented and based on general-purpose programming languages. GDL is a superset of ANSI Common Lisp, one of the two main dialects of Lisp, which is often used in artificial intelligence research.

The one-to-one mapping of these high-level primitives from the product model into the software model making up the DSL described in [23], and the use of CMs to obtain a specific view on the system, provide a method to generate control models. The HLPs and CMs can be further split up into middle- and lower-level primitives, related to the various physical submodels that might be required. The decision on which physical submodels to use is based on the specifics of the system. The rules governing this decision can be formalised in such a way that information stored in or derived from the system meta-model can be used to generate the control model. This so-called procedural knowledge describes the conditions under which processes or tasks are carried out, in contrast to conceptual knowledge, which deals, among others, with how objects are related to each other. The knowledge on which a KBE tool is based is stored in a knowledge base, which can be split up into a process part and a product composition part [26]. The development of an ontology underlying the knowledge base is a shared task for the entire research project: concepts defined in the meta-model, i.e. the features, must be related to the knowledge base on which the control model generation process relies. As mechatronic systems come in quite different forms and sizes, the number and type of features are fully dependent on the type of system.

Figure 7: 'Insight' quadrotor UAV [27].

4 APPLICATION
The concepts introduced in sections 3.1 and 3.2 are exemplified using the 'Insight' quadrotor UAV as an example mechatronic application. Shown in Figure 7, the 'Insight' is a quadrotor being developed at the Faculty of Aerospace Engineering of Delft University of Technology to perform indoor surveillance missions. The aircraft weighs 72 g, has a diameter of 30 cm and an endurance of 20 minutes while providing live streaming video [27].
The use of partial models defining only the type of a class can be illustrated with the rotor components. Multiple methods are available for the calculation of rotor lift and drag: the simplest takes into account only rotor speed in combination with fixed lift and drag coefficients, while a more detailed model can take into account the local velocity field, air density and blade twist distribution. The blade's inertia and the mechanical connectors are shared components defined in a partial model, while the algorithm that calculates the force and torque generated by the rotor is added in the full model. Capturing aeroelastic effects requires advanced algorithms for the calculation of the aerodynamic force distribution in combination with elastic structural elements, which cannot easily be modelled without specialist knowledge. The same holds for phenomena like rotor-rotor and rotor-ground interference, which require extensions of the basic algorithm. Figure 8 shows the three components and two connectors of the actuator assembly built up from standard Modelica library components, with the rotor and DC-motor components replaceable by other submodels.
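The following Python sketch is an analogue of the Modelica partial-model pattern described above, with invented class names and deliberately crude physics: the shared blade inertia and connector state live in an abstract base class, and each concrete rotor supplies its own thrust calculation, mirroring Modelica's replaceable submodels:

from abc import ABC, abstractmethod

class PartialRotor(ABC):
    """Shared part of every rotor model: inertia and mechanical connector."""
    def __init__(self, inertia: float):
        self.inertia = inertia          # shared blade inertia [kg m^2]
        self.flange_speed = 0.0         # mechanical connector state [rad/s]

    @abstractmethod
    def thrust(self) -> float:
        """Concrete subclasses complete the model with a force calculation."""

class SimpleRotor(PartialRotor):
    def __init__(self, inertia: float, lift_coeff: float):
        super().__init__(inertia)
        self.lift_coeff = lift_coeff

    def thrust(self) -> float:
        # Simplest method: fixed coefficient times rotor speed squared.
        return self.lift_coeff * self.flange_speed ** 2

class BladeElementRotor(PartialRotor):
    def __init__(self, inertia: float, air_density: float, twist: list):
        super().__init__(inertia)
        self.air_density = air_density
        self.twist = twist              # blade twist distribution [rad]

    def thrust(self) -> float:
        # More detailed method: toy blade-element style summation.
        return sum(self.air_density * t * self.flange_speed ** 2
                   for t in self.twist)

# Both implementations are interchangeable wherever a rotor is expected.
rotors = [SimpleRotor(2e-6, 1e-5), BladeElementRotor(2e-6, 1.2, [0.1, 0.2, 0.3])]
for r in rotors:
    r.flange_speed = 400.0              # set via the shared connector
    print(type(r).__name__, r.thrust())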

The system meta-model in SysML consists of a hierarchical decomposition of the system into basic components. As in Figure 9, the actuator assembly consists of three components at the same level. For now, each of the components has connectors in a single domain only, such that they can only be connected in one way. These connectors are based on the type of energy they represent.

Figure 8: Actuator assembly for the quadrotor UAV using standard Modelica library components.

Figure 9: Actuator assembly hierarchical decomposition.

The one-to-one mapping of the DC-motor and gear components from the technical component view to a single physical description in the actuator assembly exposes a problem that arises when components are used not only for their main functionality but also, as in this case, as load-introducing parts of the system. With standard library components, reaction forces and torques cannot be introduced back into the system via the gear and DC motor themselves; they can only be introduced directly back into the system, or via separate 3D mechanics connectors routed through the gear and DC motor.

Figure 10: Sensors in the quadrotor assembly as used in controller development tools.

Figure 11: Part of the sensor assembly, showing intended connections and outputs of technical components.

Additional problems related to both the mapping of components and the connections between components emerge when adding sensors and actuators. Whereas in typical controller design environments the output of a 'sensor' is a value such as an acceleration or an angle, possibly with added noise and delays (Figure 10), in reality a sensor needs an external power supply, and its output is an analog or digital signal that uses a specific type of software interface, as in Figure 11. The intended behaviour of the sensor can thus be viewed at different levels, each having a completely different physical representation. This difference must also be taken into account in the control software generation process, which must extract the data from the analog or digital signals. The number of main technical components in the full quadrotor model is around 20, not counting the various parts of the structure. Mapping all of these onto not only their intended but also their unintended physical descriptions is already a task that requires expert knowledge in the various physical domains, as well as good insight into the system architecture itself. Furthermore, even for a relatively small system like a UAV, the amount of design and analysis data, such as body mass, inertia, drag coefficients, lift and torque coefficients and sensor noise characteristics, needed to obtain a basic model is substantial. For industrial applications the amount of data quickly becomes hard to manage, and tool and data integration by means of (meta-model) repositories is necessary.
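A hedged sketch of the two abstraction levels of the same sensor follows (class names, voltages and resolutions are invented): the controller-design view returns the physical value directly, while the component view needs supply power and returns a raw digital word that the generated control software must convert back.

import random

class IdealGyro:
    """Controller-design view: the output is simply the physical value."""
    def read(self, true_rate: float) -> float:
        return true_rate + random.gauss(0.0, 0.01)   # optional noise term

class DigitalGyro:
    """Component view: needs power and returns a raw digital word."""
    def __init__(self, supply_voltage: float, bits: int = 12,
                 full_scale: float = 10.0):
        if supply_voltage < 3.0:
            raise RuntimeError("sensor unpowered")
        self.scale = (2 ** bits - 1) / (2 * full_scale)
        self.full_scale = full_scale

    def read_register(self, true_rate: float) -> int:
        # Quantise the measured rate into an ADC count.
        clipped = max(-self.full_scale, min(self.full_scale, true_rate))
        return int((clipped + self.full_scale) * self.scale)

# The control software must insert the inverse conversion (counts back to
# rad/s) whenever the component-level model is used.
gyro = DigitalGyro(supply_voltage=3.3)
counts = gyro.read_register(1.5)
rate = counts / gyro.scale - gyro.full_scale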


5 SUMMARY
In relation to an automated control software development framework, the need for a 'control model generator' has been discussed. An object-oriented physical modelling approach for the control model supports the mapping of technical components or features at the meta-model level to physical descriptions in the control model view. The Modelica language is well suited to polymorphic physical modelling as well as component library development, although class specialization hierarchies cannot be created on the basis of class inheritance alone. To be able to include unintended behaviour in the control model, the interaction between components cannot be fully predetermined. This dependence on the system implementation requires the use of expert knowledge in the form of formal rules. This is exemplified in the case study, which shows that a one-to-one mapping from a component based meta-model to a physical modelling description will not result in a high-fidelity model. The application of knowledge based engineering techniques as a means to formalise and use this knowledge in an automated environment is considered. With HLPs representing technical components as the basic building blocks of the system, the development of an ontology underlying the knowledge base is a shared task in the research project.

6 FUTURE WORK
The development of a mechatronic system meta-modelling concept based on features is the basis of the project's integration framework, on which various tools and associated methodologies rely. The ontology relating concepts in the various views on the system will enable the creation of a knowledge base which can be used to develop a control model generator application. The use of a meta-model as the core of the development framework and the integration of design and analysis tools is the topic of separate research in the project, but is closely related due to the dependency of a possible tool on the information stored in the meta-model. A knowledge-based tool able to generate control models based on information in the meta-model requires the acquisition of expert knowledge on physical modelling. Supported by methods able to determine the interaction between components, the tool will be able to realise the requirements set out in the introduction. At a later stage, 3D models will be added to the control model generation process, to provide a more 'physical' view on the system's behaviour during the verification process. This requires further integration of design tools, notably mechanical CAD, into the framework, such that not only design parameters but also the visual representation can be linked to the meta-model. Further application of KBE techniques to obtain and integrate this 3D representation seems a logical step, as parametric CAD is an integral part of most KBE platforms.

7 ACKNOWLEDGMENTS
The authors gratefully acknowledge the support of the Dutch Innovation Oriented Research Program 'Integrated Product Creation and Realization (IOP-IPCR)' of the Dutch Ministry of Economic Affairs.


8 REFERENCES
[1] Lutters-Weustink, IF, Lutters, F and Van Houten, FJAM, 2004, Mechatronic features in product modeling, the link between geometric and functional modeling?, Proceedings of International Conference on Competitive Manufacturing, Stellenbosch, South Africa: 125-130.
[2] Breunese, A, Top, JL, Broenink, JF and Akkermans, JM, 1998, Libraries of Reusable Models: Theory and Application, Simulation 71(1): 7-22.
[3] The MathWorks, 2008, MATLAB and Simulink, http://www.mathworks.com.
[4] Modelica Association, 2007, Modelica Language Specification - Version 3.0, http://www.modelica.org/documents/ModelicaSpec30.pdf.
[5] Sully, P, 1993, Modelling the World with Objects, Prentice-Hall, ISBN 0-13-587791-1, Englewood Cliffs, New Jersey.
[6] Borutzky, W, 1999, Relations between Bond Graph Based and Object-Oriented Physical Systems Modelling, International Conference on Bond Graph Modeling and Simulation, San Francisco, California, USA: 11-17.
[7] Paredis, CJJ, Diaz-Calderon, A, Sinha, R and Khosla, PK, 2001, Composable Models for Simulation-Based Design, Engineering with Computers 17: 112-128.
[8] Sinha, R, Paredis, CJJ and Khosla, PK, 2000, Integration of Mechanical CAD and Behavioural Modelling, IEEE/ACM International Workshop on Behavioral Modeling and Simulation, Orlando, Florida, USA: 31-36.
[9] Liang, V-C and Paredis, CJJ, 2003, A Port Ontology for Automated Model Composition, Proceedings of the 2003 Winter Simulation Conference, 1: 613-622.
[10] Lee, EA and Xiong, Y, 2001, System-Level Types for Component-Based Design, Lecture Notes in Computer Science 2211.
[11] Object Management Group, 2007, OMG Systems Modelling Language, http://www.omgsysml.org.
[12] Foeken, MJ, Voskuijl, M, Alvarez Cabrera, AA and Van Tooren, MJL, 2008, Model Generation for the Verification of Automatically Generated Mechatronic Control Software, IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, Beijing, China: 275-280.
[13] Alvarez Cabrera, AA, Erden, MS and Tomiyama, T, 2009, On the Potential of Function-Behavior-State (FBS) Methodology for the Integration of Modelling Tools, CIRP Design Conference 2009, Cranfield, UK.
[14] Tomiyama, T and Umeda, Y, 1993, A CAD for functional design, Annals of the CIRP, 42(1): 143-146.
[15] Schut, EJ, Van Tooren, MJL and Berends, JPTJ, 2008, Feasilization of a Structural Wing Design Problem, 49th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Schaumburg, IL, USA.
[16] Friedenthal, S, Moore, A and Steiner, R, 2008, A Practical Guide to SysML: The Systems Modeling Language, Morgan Kaufmann, Burlington, Massachusetts, USA: 489-508.
[17] De Vries, TJA, 1994, Conceptual Design of Controlled Electro-Mechanical Systems, Ph.D. thesis, University of Twente, The Netherlands.

[18] De Vries, TJA, Breedveld, PC and Meindertsma, P, 1993, Polymorphic Modelling of Engineering Systems, International Conference on Bond Graph Modelling, San Diego, California, USA: 17-22.
[19] Liskov, B, 1987, Keynote Address - Data Abstraction and Hierarchy, OOPSLA '87: Addendum to the Proceedings on Object-Oriented Programming Systems, Languages and Applications, New York, NY, USA: 17-34.
[20] Gamma, E, Helm, R, Johnson, R and Vlissides, J, 1995, Design Patterns - Elements of Reusable Object-Oriented Software, 1st ed., Addison-Wesley.
[21] Abadi, M and Cardelli, L, 1996, A Theory of Objects, Springer, New York, NY, USA.
[22] Zhou, M, Son, YJ and Chen, Z, 2004, Knowledge Representation for Conceptual Simulation Modeling, Proceedings of the 2004 Winter Simulation Conference: 450-458.
[23] La Rocca, G and Van Tooren, MJL, 2007, Enabling Distributed Multi-Disciplinary Design of Complex Products: a Knowledge Based Engineering Approach, Journal of Design Research 3(5).
[24] Van der Elst, S and Van Tooren, MJL, 2008, Development of a Domain Specific Modeling Language to Support Generative Model-Driven Engineering of Aircraft Design, 26th Congress of the International Council of the Aeronautical Sciences (ICAS), Anchorage, Alaska, USA.
[25] Genworks International, 2008, General-Purpose, Declarative, Language, http://www.genworks.com.
[26] Stokes, M, 2001, Managing Engineering Knowledge: MOKA Methodology and Tools Oriented to Knowledge Based Engineering Applications, Professional Engineering Publishing Ltd.
[27] Insight Team, 2007, 'The Insight', D.S.E. final report, Faculty of Aerospace Engineering, Delft University of Technology, Delft, The Netherlands.

Characteristic | Physical domain | Input and method
Lift / downforce | Aerodynamic | 2D/3D geometry in combination with a fluid dynamics solver at certain conditions (speed, direction) to obtain the lift coefficient.
Drag | Aerodynamic | 2D/3D geometry in combination with a fluid dynamics solver at certain conditions (speed, direction) to obtain the drag coefficient.
Stability derivatives | Aerodynamic | 2D/3D geometry in combination with a fluid dynamics solver over a range of conditions (speed, direction) to obtain forces and moments; further calculations to obtain the derivatives.
Stiffness | Mechanical | 3D geometry with material properties in combination with a finite element solver to obtain stiffness matrices.
Eigenfrequency | Mechanical | 3D geometry with material properties in combination with a finite element solver to obtain eigenfrequencies.
Buckling strength | Mechanical | 3D geometry with material properties in combination with a finite element solver to obtain buckling strength.
Heat capacity | Thermodynamic | Geometry with material properties in combination with a finite element solver to obtain total heat capacity.

Table 1: System characteristics and associated analysis tools.


Improving Patient Flow Through Axiomatic Design of Hospital Emergency Departments
J. Peck, S-G. Kim
MIT Park Center for Complex Systems, Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, U.S.A.
[email protected]

Abstract
In response to crowding in hospital emergency departments (ED), efforts have been made to increase patient flow through the Fast Track (FT). The use of FT, however, has not always been accompanied by an increase in overall patient flow, sometimes leaving the FT underutilized. We find that this is mainly caused by the current practice of assigning patients to FT based only on the Emergency Severity Index. One index serving two functional requirements results in a coupling between prioritizing patients and encouraging their fast flow. By introducing a new index for patient flow, we could uncouple this design problem and significantly decrease overall patient waiting time (by about 50%) compared to that of the existing use of FT.

Key Words: Axiomatic Design, Emergency Department System Design, Patient Flow

1 INTRODUCTION

As demand for emergency care increases, hospital administrators are seeking new ways to provide treatment more efficiently. In hospital Emergency Departments (ED) this yields a need to find new ways to organize and categorize patients based on the severity and nature of their illness and how fast they can be treated. One example is when a hospital sets aside resources for patients that will go through the system quickly, an arrangement often known as a Fast Track (FT) [1-4]. In practice the FT is specifically reserved for low acuity patients. Since low acuity patients tend to have short treatment times, the FT can clear them quickly out of the system. Many Fast Track areas do not contain fully functional ED rooms or are staffed by nurse practitioners rather than doctors. This means that the FT has a lower overhead for treating patients that do not require more complex or expensive facilities [5-9]. Many hospitals have experienced the success of FT in decreasing lengths of stay for low acuity patients, which was recently highlighted as a solution to ED crowding in the American College of Emergency Physicians report on boarding [10]. In response to the success of FT, many hospitals have decided to invest more resources into operating FTs. When building a new ED, one suburban teaching hospital in the Greater Boston area set aside an extra four fully functional ED beds as well as one doctor and one nurse for FT. The rest of the new ED comprised 8 pediatric beds and 24 main Emergency Room (ER) beds. A study of this hospital's ED was performed by the authors using discrete event simulation (DES). The result of this study was that the benefit of FT, in terms of patient flow, lay in bypassing the significant bottleneck of patients being transferred to the inpatient unit (IU). Therefore, in order to make the most of a high-overhead, fully functional FT, patients of middle acuity levels should be allowed to enter it as well as those of low acuity levels, as long as they are not going to require admission to the hospital IU [11]. The study results showed that allowing patients of higher acuity level



to enter the FT led to shorter waiting times for patients of all levels. However, this improvement was not substantial and may still not have been the optimal solution for maximizing FT usage. An assumption used by most EDs is that patient acuity level, as assigned at triage, is a good indicator for deciding who should be sent to FT. Historically, the purpose of triage has been to prioritize patients based on how long they can wait to be seen (severity of illness) and how many resources they will require. Currently a very prominent triage system in the US is the five-level Emergency Severity Index (ESI) system [12]. If a patient requires immediate life-saving intervention, and therefore cannot wait to be seen, they are assigned ESI level 1. If a patient is at high risk, in severe pain, or requires many resources and has vital signs at dangerous levels, they are assigned ESI 2. Otherwise, the patient is assigned a level based on the amount of resources they will use: ESI 3 for many resources, ESI 4 for one resource and ESI 5 for no resources. The definition of a resource for the purpose of assigning an ESI level can be fairly broad: a resource can be lab work (blood or urine tests), X-rays, fluids, consultation and so forth. Therefore a patient who needs only a urine test and a patient who needs only IV fluids will each be given ESI level 4, despite the fact that their complaints and treatment requirements are very different [12]. In most hospital operations research practice, the ESI is used to predict how quickly a patient will move through an ED; ESI 4 and 5 patients are thus assigned to a "Fast" track in the hope that they can be cleared quickly. However, observation of an ED or conversations with ED staff show that the correlation between ESI and speed of treatment often does not hold. Figure 1, built from data provided by the hospital described earlier in our study, shows the mean time in the ER (plus or minus one standard deviation) for each ESI level, based on the total length of stay of a patient minus their time to bed.
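The five-level rule just described is compact enough to express directly; the following Python fragment is a simplified, hedged sketch of that logic (the boolean flags and the resource count are placeholder inputs that a triage nurse would judge, not the published handbook's full criteria):

def assign_esi(needs_lifesaving: bool, high_risk: bool,
               resource_count: int) -> int:
    """Simplified sketch of the five-level ESI logic described above."""
    if needs_lifesaving:       # requires immediate life-saving intervention
        return 1
    if high_risk:              # high risk, severe pain, dangerous vital signs
        return 2
    if resource_count >= 2:    # many resources (labs, X-rays, fluids, ...)
        return 3
    if resource_count == 1:    # exactly one resource
        return 4
    return 5                   # no resources

# A urine-test-only patient and an IV-fluids-only patient both land on
# ESI 4, which is exactly the ambiguity the text points out.
assert assign_esi(False, False, 1) == 4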

As can be seen in the figure, there are patients of ESI 1, 2 and 3 that take the same or even less time in the ED than ESI 4 and 5 patients. These high acuity patients can move quickly through the system due to the nature of their injury: perhaps they will be transferred quickly, require fewer tests, or have shorter treatment requirements. When the FT is fully functional and capable of treating these patients (as is the case with the hospital studied), they can be accepted into the FT and maintain unhindered flow. Our previous study of the FT patient flow showed that these higher acuity patients suffer longer lengths of stay due to the loss of resources to the FT that they would otherwise have priority to [11]. This situation may pose significant ethical problems. This study seeks to discover a more practical method of assigning patients to FT without facing this ethical dilemma of giving resource priority to low acuity patients.

Figure 1: Time in ER by ESI level (spread of time in ER, mean +/- one standard deviation, for ESI 1 through ESI 5).

This study employs systems analysis and Axiomatic Design to better articulate the FT design issues that cause the removal of priority from high acuity patients and that result in low utilization of the FT. From these techniques, an improved design is suggested, which employs a new triage index to identify patients based on their expected treatment times. Finally, these design suggestions are tested using a DES model.

2 MATERIALS AND METHODS

2.1 Systems Analysis
At the present time, patients are sent to FT if they are of low acuity (ESI 4 or 5), on the assumption that low acuity implies a short treatment time. However, the data shown in Figure 1 do not support this reasoning. It is therefore worth questioning whether ESI levels are a proper basis for selecting FT patients. This question leads to an analysis of the current design of the triage system.
In order to analyze the design of an ED triage system, we employed Axiomatic Design (AD). The first axiom of this design method, the Independence Axiom, requires that all functional requirements (FRs) of a design be satisfied independently, in such a way that their solutions or design parameters (DPs) do not affect one another [13]. Triage was originally created as a method to prioritize patients based on the severity of injury. However, it has evolved into an attempt to manage patients, such as assigning patients to FT [14]. In other words, modern triage has evolved such that it has two primary functional requirements (FRs):

FR1: Prioritize treatments of patients based on urgency,
FR2: Organize patients to facilitate process flow in the ED.

The design parameter (DP1) for FR1 is the ESI system. The usefulness and accuracy of this system for that purpose is beyond the scope of this paper, but it has been studied and improved by professionals such as the American College of Emergency Physicians over an extended period of time. The point in our study is that most ED management has used the same ESI as DP2 to satisfy FR2 (selecting patients for fast track). This is a classic coupled design case where two FRs have only one DP.
The solution to this coupled design problem is a simple one: the introduction of a new DP that can be used to identify patients based on how they will flow through the ED. We call this index the Park Index (PI). The PI assigns a level to a patient based on their expected treatment time in the ED. Like the ESI, the PI will require iteration and practice to discover how many levels are worthwhile and how to define these levels. Our former study [15] identified that the need for transfer to an inpatient unit causes an increased length of stay in the ED; this would therefore be a significant factor in assigning a PI level. Similarly, the number of tests a patient will need, how long those tests take, whether a consultant will be needed, and patient factors that slow treatment (disability, age, mental state, etc.) are the initial key factors that would weigh into assigning a patient's PI level. Like the ESI system, the assigning of a PI would rely heavily on the ability of experienced triage nurses to predict the treatment that a patient will undergo. With the PI, triage nurses can simply list the treatments and convert them into resource usage and time requirements.

2.2 Discrete Event Simulation of an ED
In order to test the effect of the PI in uncoupling the FT design and improving ED patient flow, this study uses the simulation model from the authors' previous study [11]. This simulation was built in Rockwell Automation, Inc.'s ARENA DES software. The model ED was built based on real patient data, ED processes, layout, and staffing from the ED of the local suburban teaching hospital mentioned earlier. Generation of the ED model began with extensive observation of the teaching hospital's ED operation and the creation of detailed flow charts for patients, doctors, nurses, and information, as well as studies of ED processes. The simulation model was built by closely following the actual processes through which a typical patient goes. Figure 2 shows the structure of the high-level model of the ED, where each block represents a detailed sub-model.


Figure 2: Conceptual simulation model (sub-models: patient arrival, ambulance and walk-in entrances, registration, triage rooms, test area, adult ER, fast track, pediatric ER, inpatient unit and discharge).

As seen in Figure 2, patients begin in the patient arrival sub-model, where patient entities are generated and then assigned the attributes that will guide how they progress through the ED.


Within the patient arrival sub-model, we made the percentage of patients assigned to FT a variable that can be controlled externally. External control is performed using a program that comes with the Arena software, known as the Process Analyzer. This program allows a user to display a list of control variables that have been established in the simulation programming, as well as a list of response variables that result from a run of the simulation. The program makes it simple for a user to change variables and quickly view the results of the changes.
After leaving the patient arrival sub-model, ambulance patients and walk-in patients are sent to different entrances. In these entrances there are recording and assignment blocks for statistical and routing purposes. Walk-in patients with ESI 1 are sent directly to triage, while all others are sent to registration, which is a simple delay. All walk-in patients go to triage and gain access to a triage nurse. After being seen by a triage nurse, patients can be assigned to receive preliminary testing. Patients are then sent to wait for entry into the appropriate treatment area. It is assumed that ambulance patients receive some level of triage in the ambulance and are therefore sent directly to their treatment area rather than going through triage.
When assigned to the main ER, a patient entity must wait to be assigned an open bed. The patient is then assigned a nurse, who performs an examination, programmed as a delay for the patient. After being seen by a nurse, the patient waits for a doctor for further examination. At this point all patients undergo testing; no patient is sent for testing more than twice. Throughout the testing process patients release doctors and wait to be seen by doctors as needed. The patient may then be seen by a consultant, who relieves the doctor and can send the patient for more testing. The patient then receives treatment by a doctor; the length of this treatment can be externally controlled. The doctor is then released and there is a final nurse treatment, the duration of which can also be controlled. After treatment, IU-bound patients wait to be assigned a free IU bed and are then transferred, while patients to be released from the ED are discharged. Finally, the nurse and bed are both released.
The pediatric ER is the same as the main ER except that it has a module which sends patients to the main ER if they are still waiting for a pediatric bed when the room closes. Even though the doors to the pediatric ER close, resources continue to work until the beds are empty. In our simulation model, both ERs are programmed such that a patient leaves without being seen if they have an acuity level of ESI 3 to ESI 5 and have been waiting for a bed for more than 4 hours.
Like the pediatric ER, the FT model begins with a module that sends patients to the main ER if the FT is closed. If any patient is receiving treatment when the FT closes, FT resources continue to work until the necessary treatment process is completed. As in the other treatment areas, an FT patient begins by waiting for a bed, then a nurse performs a preliminary examination. FT patients are tested at most once and are then seen by a physician. Patients then receive treatment from their physician. The duration of this treatment is externally controllable, and the inputs for this duration are random distribution curves


which will be discussed later. Finally, the patient is discharged.
A patient who is being admitted to the IU waits in their ED bed until an IU bed is available. Once a bed becomes available, they leave the ED. The IU sub-model is a simple delay process where the patient is held for some period of time, after which they are discharged from the hospital. It should be noted that the discharge volume exhibits a distinct pattern in most hospitals, heavily concentrated in the mid- to late-afternoon hours. To model this, we used a Poisson distribution function adopted from similar studies [4, 15].
No matter how accurate a simulation is, it will not give useful results unless its inputs are properly chosen. With this in mind, we worked with the subject hospital to obtain 12 weeks of real historical patient data, while observing all applicable Health Insurance Portability and Accountability Act (HIPAA) protocols. The data included 11540 entries; 3015 entries were discarded due to missing or clearly inaccurate information, leaving 8525 usable patient records. The patient data included important times for tracking a patient's flow through the ED, such as Triage to Bed (TTB), Triage to Doctor, Greeting Time and Length of Stay. The information also included the dates and times of the patient's visit as well as the patient's age, ESI at triage and at disposition, and where they were discharged to. From the data, we documented an arrival pattern of all ESI levels for each day of the week; this was the patient arrival input to our model. Using the patient data we were also able to calculate percentages for assigning the attributes within the patient arrival sub-model [11].
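The authors' model was built in ARENA; as a language-neutral illustration of the FT path just described (wait for a bed, nurse examination, at most one test, physician treatment, discharge), the sketch below uses the open-source SimPy library with invented capacities and exponential service times. It is a toy analogue under those assumptions, not the paper's model.

import random
import simpy

def ft_patient(env, beds, nurses, doctors, waits):
    arrival = env.now
    with beds.request() as bed:
        yield bed                                    # wait for an FT bed
        waits.append(env.now - arrival)              # time-to-bed statistic
        with nurses.request() as nurse:
            yield nurse
            yield env.timeout(random.expovariate(1 / 10))   # nurse exam
        if random.random() < 0.5:                    # tested at most once
            yield env.timeout(random.expovariate(1 / 20))
        with doctors.request() as doc:
            yield doc
            yield env.timeout(random.expovariate(1 / 15))   # treatment

def arrivals(env, beds, nurses, doctors, waits):
    for _ in range(200):
        env.process(ft_patient(env, beds, nurses, doctors, waits))
        yield env.timeout(random.expovariate(1 / 12))       # inter-arrival

env = simpy.Environment()
beds = simpy.Resource(env, capacity=4)               # four FT beds
nurses = simpy.Resource(env, capacity=1)             # one FT nurse
doctors = simpy.Resource(env, capacity=1)            # one FT doctor
waits = []
env.process(arrivals(env, beds, nurses, doctors, waits))
env.run()
print(f"mean time-to-bed: {sum(waits) / len(waits):.1f} min")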

2.3 Preliminary Park Index

For the purposes of testing the potential impact of the PI, a preliminary version of the PI is proposed. Assignment of a preliminary PI level would only be given to a patient who will:
- not have the need for later admission to the IU,
- not be pediatric,
- arrive while the FT is open.

Five PI levels were proposed; these levels are assigned to a patient based on the patient's time in the ED, as follows (a sketch of this assignment follows the list):
- PI 1: patient time in ED between 0 and 30 min
- PI 2: patient time in ED between 30 and 60 min
- PI 3: patient time in ED between 60 and 90 min
- PI 4: patient time in ED between 90 and 120 min
- PI 5: patient time in ED greater than 120 min
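A minimal sketch of this preliminary assignment follows, with invented function and argument names; the eligibility checks and the 30-minute bands follow the text above.

def assign_pi(expected_minutes: float, needs_iu: bool,
              is_pediatric: bool, ft_open: bool):
    """Return a preliminary PI level (1-5), or None if not eligible."""
    if needs_iu or is_pediatric or not ft_open:
        return None                           # patient gets no PI level
    for level, upper in enumerate((30, 60, 90, 120), start=1):
        if expected_minutes <= upper:
            return level
    return 5                                  # time in ED greater than 120 min

assert assign_pi(75, needs_iu=False, is_pediatric=False, ft_open=True) == 3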

Using the data set that was provided by the subject hospital, all eligible patients were retrospectively assigned a PI level. As mentioned earlier, in the DES model, the doctor treatment time of a patient that is sent through FT is assigned according to a random distribution. The equation for that distribution is calculated using the real ED data and is dependent on ESI and the percent of patients of that ESI being sent to FT. Therefore the patient data was separated based on ESI and PI assignments. Each PI level corresponds to a certain percentage of patients of each ESI that will be sent to FT, and that in turn corresponds to a specific random distribution for treatment time.

It is worth noting that there were no ESI 1 patients who met the criteria for a PI assignment of less than 5, and it was therefore decided not to send ESI 1 patients to FT at all. This is appropriate because in reality it is very difficult to estimate the length of stay or future needs of a patient in such an acute condition, and it would therefore be impractical to attempt to assign them a PI level at triage.

3 RESULTS
In order to see the potential impact of using the PI in our simulated ED, six possible scenarios were considered. Five scenarios covered the acceptance to FT of all PI 1, PI 1-2, PI 1-3, PI 1-4, and PI 1-5 patients respectively; the sixth scenario was the case where all FT resources were used as main ER resources instead of FT, referred to as the 28 bed scenario. The scenarios were generated by applying the changed percentages and the corresponding treatment time distribution data described above to the discrete event simulation model described in the previous section. To evaluate the impact of the changes, this study measured patient throughput and time-to-bed (TTB).
Figure 3 shows the total FT throughput broken down by ESI level for each scenario. As can be seen in the figure, there is a peak in FT throughput for the PI 1-4 scenario. This means that up to that scenario the FT is being underutilized; however, it may be overutilized in the PI 1-5 scenario, causing competition and lower patient throughput.

Figure 3: FT throughput (number of patients) with increasing PI levels accepted.

To get a better sense of how the use of the PI affects the entire ED, it is worth looking at the change in throughput for all patients in FT and the main ER that are not being sent to the IU. We refer to this throughput as "relevant ED throughput". Relevant ED throughput does not include patients sent to the IU because those patients tend to have priority, and their throughput is independent of changes in FT usage. Figure 4 shows the relevant throughput for each scenario. The use of the PI shows potential for a significant increase in total relevant ED throughput over the 28 bed scenario. This improvement in relevant ED throughput means that removing some ESI 2 and 3 patients from the competition in the main ED and placing them in FT has significant benefits for the whole patient flow.

Figure 4: Total relevant ER throughput for the 28 bed scenario and each PI scenario.

Although it is useful to look at throughput to get a sense of how changes in FT assignment affect the system, it is more important, from an administrative standpoint, to see how the use of the PI affects time-to-bed (TTB). To do this, we take an average TTB for patients of each ESI over both FT and the main ED, weighted by the number of patients that go through each track. Figure 5 shows the percent difference between this weighted TTB and the TTB in the 28 bed scenario; the figure shows this percent difference for each ESI and also a weighted total TTB difference across all ESI levels.

Figure 5: Percent difference between TTB in the 28 bed scenario and the PI scenarios, by ESI level and for all ESI levels combined.

The changes in TTB across ESI levels between the PI scenarios and the 28 bed scenario are approximately: PI 1: +14%, PI 1-2: -32%, PI 1-3: -49%, PI 1-4: 4%, PI 1-5: +27%. It is noteworthy that the PI 1-3 scenario improves TTB for all patients by more than 49%, and the improvement applies to all ESI levels.
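To make the weighted TTB comparison concrete, the toy calculation below (entirely made-up numbers, not the paper's data) averages one ESI level's TTB over both tracks, weighted by patient counts, and expresses the result as a percent difference from a baseline scenario:

def weighted_ttb(track_stats):
    """track_stats: list of (mean TTB in minutes, patient count) per track."""
    total = sum(n for _, n in track_stats)
    return sum(ttb * n for ttb, n in track_stats) / total

fast_track, main_er = (22.0, 120), (35.0, 480)   # hypothetical (TTB, count)
scenario_ttb = weighted_ttb([fast_track, main_er])
baseline_ttb = 41.0                              # hypothetical 28 bed scenario
diff = 100 * (scenario_ttb - baseline_ttb) / baseline_ttb
print(f"percent difference vs. 28 bed scenario: {diff:+.1f}%")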

4 LIMITATIONS AND DISCUSSIONS

Despite the steps that we took towards validation of the use of the Park Index, there are limitations to this study, caused by potential modeling deviations and potential data inaccuracies. We assumed that a patient can accurately be assessed for FT or the main ER in advance by a triage nurse. In addition, we assumed that a triage nurse can predict, at the time of triage, whether a patient will be admitted to the inpatient unit or not. This is not always the case, and therefore the correct patients for FT will not always be sent there. The authors believe such cases are rare and do not affect the dynamics of FT significantly. Similarly, the PI levels as defined in this paper will be difficult to implement in real life exactly as proposed. A real implementation of the PI will need clearer ways of defining a patient and may require fewer distinct PI levels, such as a 3-level system.
Triage has developed in today's EDs as a method of sorting patients based on how quickly they need to be seen, but also as a way of managing patient flow. The system used to sort patients based on urgency is the ESI system. However, this system does not take patient flow needs into account when assigning levels. Therefore, when the ESI system is used to try to facilitate patient flow, its original purpose may suffer due to a


coupled design. This exact situation was observed when the ESI system was used to assign patients to FT: the low acuity patients received quicker service while middle acuity patients suffered a loss of available resources.
In order to allow triage to satisfy both the need for prioritizing based on acuity and the need to facilitate flow, AD was applied. The use of AD led to the creation of a new index based on how long a patient is likely to remain in the ED, called the PI. In order to test that this new index would improve patient flow, PI levels were assigned retrospectively to real patient data, and this was used to find parameters to enter into the DES. The results of the simulation showed that using the PI when assigning patients to FT is associated with a significant increase in total relevant patient throughput as well as a decrease in the amount of time that a patient must wait for a bed, compared to a scenario where there is no FT. Having shown that the PI can be used successfully, future studies should focus on how to quantify the factors in an ED that make a patient's stay longer, judge whether those factors are indeed predictable, and design a practical PI system based on these findings.
It is important to note that although the simulated scenarios show that the optimal usage of FT is at PI levels 1-3, this may not be the case in real-life usage of a PI-like system. There may be a point between PI 1-3 and PI 1-4 that is in fact the optimal solution. Also, since PI 1 and PI 1-2 have no significant effect, it may not be worth having a system broken down by every half hour. Instead, real-life implementations may find it most useful to have only a three-point system with PI levels based on one-hour or even 1.5-hour increments. Although in this case the PI was only used for assigning FT, it may be used for other applications in the ED. For example, it may be worthwhile to create multiple different tracks rather than just FT and the main ER; the PI could then be used to assign patients to each of these tracks. In the end, the universal conclusion of this study is that AD justifies the use of another index, and that this index has the potential to greatly improve ED patient flow.

5 REFERENCES
[1] Cooke, M., Wilson, S., Pearson, S., 2002, The Effect of a Separate Stream for Minor Injuries on Accident and Emergency Department Waiting Times, Emergency Medicine Journal, 19: 28-30.
[2] Garcia, M., Rivera, C., 1995, Reducing Time in an Emergency Room via a Fast-Track, Proceedings IEEE Simulation Conference, 1995: 1048-1053.
[3] Nash, K., Zachariah, B., Nitschmann, J., Psencik, B., 2007, Evaluation of the Fast Track Unit of a University Emergency Department, Journal of Emergency Nursing, 33:1: 14-20.
[4] Williams, M., 2006, Hospitals and Clinical Facilities, Processes and Design for Patient Flow, in Patient Flow: Reducing Delay in Healthcare Delivery, ed. R. W. Hall, Springer, Los Angeles: 45-77.

[5] Simon, H., Ledbetter, D., Wright, J., 1997, Societal Savings by "Fast Tracking" Lower Acuity Patients in an Urban Pediatric Emergency Department, The American Journal of Emergency Medicine, Vol 15, Issue 6: 551-554.
[6] Fernandes, CM., Christenson, J., Price, A., 1996, Continuous Quality Improvement Reduces Length of Stay for Fast-Track Patients in an Emergency Department, Academic Emergency Medicine, Vol 3, No 3: 258-263.
[7] Hampers, L., Cha, S., Gutglass, D., Binns, H., Krug, S., 1999, Fast Track and the Pediatric Emergency Department: Resource Utilization and Patient Outcomes, Academic Emergency Medicine, Vol 6, No 11: 1153-1159.
[8] Meislin, H., Coates, S., Cyr, J., Valenzuela, T., Fast Track: Urgent Care Within a Teaching Hospital Emergency Department: Can it Work?, Annals of Emergency Medicine, Vol 17, No 5: 453-456.
[9] Wright, S., Erwin, T., Blanton, D., Covington, C., 1992, Fast Track in the Emergency Department: A One-Year Experience with Nurse Practitioners, The Journal of Emergency Medicine, Vol 10: 367-373.
[10] ACEP Boarding Task Force, 2008, Emergency Department Crowding: High Impact Solutions, April 2008, http://www.acep.org/WorkArea/downloadasset.aspx?id=37960.
[11] Peck, J., Lee, T., Kolb, E., Kim, SG., Redesigning Fast Track - A Discrete Event Simulation Approach Towards Emergency Department Improvement, pending publication approval.
[12] Gilboy, N., Tanabe, P., Travers, D., Rosenau, A., Eitel, D., 2005, Emergency Severity Index, Version 4: Implementation Handbook, AHRQ Publication No. 05-0046-2, Agency for Healthcare Research and Quality, Rockville, MD, http://www.ahrq.gov/research/esi/.
[13] Suh, N., 2001, Axiomatic Design - Advances and Applications, Oxford University Press, New York, NY.
[14] Hauswald, M., 2005, Triage: Better Tools but the Wrong Problem, Academic Emergency Medicine, June 2005, Vol. 12, No. 6: 533-535.
[15] Kolb, E., Peck, J., Lee, T., 2007, Effect of Coupling between Emergency Department and Inpatient Unit on the Overcrowding in Emergency Department, Proceedings of the IEEE Winter Simulation Conference 2007: 1586-1593.

Combining Axiomatic Design and Case-Based Reasoning in a Design Methodology of Mechatronics Products
N. Janthong (1,2,3), D. Brissaud (1), S. Butdee (2)
(1) G-SCOP research laboratory, University of Grenoble, France
(2) Integrated Manufacturing System Research Center (IMSRC), Department of Production Engineering, Faculty of Engineering, King Mongkut's University of Technology North Bangkok (KMUTNB), Thailand
(3) Thai-French Innovation Centre (TFIC), King Mongkut's University of Technology North Bangkok (KMUTNB), Thailand

Abstract
Current market environments are volatile and unpredictable. The ability to design products that meet customers' requirements has become critical to success. The key to developing such products is identifying functional requirements and utilizing knowledge in a scientific way, so as to provide both designers of new products and redesigners of existing products with a suitable solution that meets the customer's needs. This paper presents a method to (re)design mechatronic products by combining the axiomatic design and case-based reasoning approaches. Innovation has increased new product value, improving product efficiency and creating the need for new engineering design methods.
Keywords: Engineering design; Adaptable design; CBR; Axiomatic design

1 INTRODUCTION
Mechatronics is a technology which combines mechanics with electronics and information technology to form both interaction and spatial integration in components, modules, products and systems [1]. In fact, all electronically controlled mechanical systems are based on the idea of improving products by adding features from other types of products. The result is that new product functionality is created and more efficient technologies are utilized. These changes are driven by changing industrial circumstances, in which an existing product and its functions no longer satisfy; by adding features, the life of an existing product is extended. In an industrial environment, however, customers need specific machines to perform specific tasks in their industry, whose functions and performances may be different from or similar to those of the previous generation. In this sense, customers' needs have become very personalized and the major factor guiding the development of such products. However, the success of new product development in satisfying one customer goes through the reuse of elements of the responses to previous customers. By reusing previous designs, an engineer can reduce the duration and cost of the development cycle and the risks to product quality and performance. Moreover, the relevant and innovative information in any design discipline may also be mobilized and used to update or adapt a previous design in response to changes in technology or market preferences. As shown in Figure 1, the reuse of a design normally requires some modification or adaptation, which can occur in two ways. The first is the adaptation of a previous product of the product family to a new list of requirements; the advantage is that the producer can adapt the same design to different requirements and produce different product models. The second is adaptation across product families; the advantage comes from reusing the same 'design' for



different product families, which leads to the ability to share functions and components. Hence, organizing, storing and retrieving information on previous product designs are the most important tasks in knowledge utilization, providing both designers of new products and redesigners of existing products with a suitable solution that meets the customer's needs.

Figure 1: The reuse of design output

Our objective is to develop a new methodology to support the design of mechatronic products, based on principles of generating new ideas from both new knowledge and previous solutions grounded in company experience, while minimizing risks. Section 2 presents the literature review; section 3 develops the proposed methodology, applied to the example of an electric vehicle redesigned as an automated vehicle. Conclusions are given in section 4.

2 LITERATURE REVIEW
The main technique reviewed was case-based reasoning (CBR) applied to design. The basic idea of case-based reasoning is that new problems can be tackled by adapting solutions that were used to solve previous problems [2, 3]. Case-based reasoning is a general paradigm for problem solving based on the recall and reuse of experience. Practice shows that it is often more efficient to solve a problem by starting from the solution of a previous, similar problem than to generate the entire solution from scratch. Due to these properties, CBR systems have a multitude of applications, in architecture design [4], in chemical process engineering [5, 6], in injection mould design [7], in mechanical design [8, 9], and in design for mass customization [10], among others. The two major research issues in the CBR approach to design are the representation of design cases and the process model for recalling and adapting design cases [11]. Representing design cases requires an abstraction of the experience into a symbolic form that the system can manipulate [12, 13]. Design-case recall involves finding a relevant design experience; it is broken down into the subtasks of indexing, retrieval and selection. Indexing design cases is a critical issue in the CBR approach, and CBR systems suffer from an inability to distinguish between cases if indexing is inadequate [14]. Design-case adaptation recognizes the differences between the selected design case and the new design problem, and changes the design case so that it solves the new design problem. This process is divided into three steps: propose, evaluate, and modify.
As the literature review showed, case-based reasoning techniques have been investigated and the principles and technology are now mature. The concept of case-based reasoning defines a way to organize information or data, and this concept can be applied to ideas, innovations or any other kind of information that is to be stored and used at a later point in time. However, mechatronic product designs are especially difficult to represent as a well-structured list of features. The representation of design cases requires various models of knowledge from each domain. Highly structured representations of design knowledge can be used for reasoning. However, case-based reasoning usually requires manual pre- or post-processing, structuring and indexing of design knowledge to identify the information needed by designers. There is a need for a method that clearly determines mechatronic product design requirements. One such method that rigorously defines the design requirements is Axiomatic Design. Axiomatic Design defines design as the creation of synthesized solutions in the form of products, processes or systems that satisfy perceived needs through mappings between Functional Requirements (FRs) and Design Parameters (DPs) [15]. The implementation issues have been discussed in many publications [16, 17, 18, 19, 20, 21]. A fundamental aspect of the mapping process is the idea of breaking down through zigzagging: the design progresses from a higher, abstract level down to a more detailed level. This results in the formation of design hierarchies in the FRs and DPs which are similar in nature to standard product functional and structural hierarchies. Thus it can identify which parts of the design structure are used to perform specific functions.
To facilitate the (re)design of mechatronic products, this paper combines the axiomatic design and case-based reasoning approaches. Case-based reasoning is used as a general framework for the reuse of product designs and is applied when a similar function is required. The axiomatic design principle is used for creating cases by analyzing existing products whose FRs and DPs have been decomposed. These FRs and DPs are utilized as case indices and case representations in case libraries. It is also used for creating design databases or design libraries by identifying relationships between FRs and possible DPs of each component in a design library. The information content is used for evaluating design solutions (DPs) from design libraries or design databases, composed of various components, to fulfil a new functional requirement which does not yet exist in the case libraries. A design satisfying the independence axiom provides the sequence in which to modify DPs in the adaptation process.

3 DESIGN METHODOLOGY
The methodology, as shown in Figure 2, is based on the assumption that designers do not need to design products from scratch every time. They rely on their ability to access existing designs of related products and components, then revise them to fulfil specific customers' needs. Figure 3 is an example of a real-world problem that we solved based on the concepts mentioned above. The function structures and the physical structures of products from past design experiences were stored in a case library. Moreover, the design library kept the designs, including component information and function definitions, which come from suppliers' standard catalogues. Both the case library and the design library were utilized to create suitable design solutions to achieve the new functionalities. Reusing a case when new customers' requirements involve a similar function, combined with designing new sub-functions when the retrieved case does not have a function the customer wants, is the basic concept behind combining case-based reasoning and axiomatic design principles. The process starts by comparing new customers' requirements and constraints to the function structures and physical structures of existing products that perform similar requirements under similar constraints. The result is that functions can be separated into product functions that have already been developed in existing designs, and add-on functions that did not exist and need to be fulfilled through the design process. To achieve this, add-on functions are decomposed in terms of functional requirements; physical solutions are retrieved by comparison to other products of the family and by searching design databases and standard component libraries. The retrieval process, based on functionality and other specifications, is accomplished with the aid of an inference engine; both rules and cases are necessary for the reasoning process. Then, adaptation of the design is needed to re-configure and integrate components to achieve the new design. Thus, product architecture, platforms and modules, as well as functional and physical structures, are the main drivers to create the case base. The adaptation process needs to follow the most suitable sequence.


Figure 2: Design Methodology

Figure 3: An example of add-on functions.

3.1 Case Representation
The basic idea is to organize specific cases which share similar properties under a general structure. The scheme of a case consists of four parts, as shown in Figure 4: customers' requirements, customers' constraints, functional requirements, and design parameters. The case is represented in terms of a design hierarchy in each of two domains: functional and physical. The hierarchical structures in the FR-domain and the DP-domain correspond to the customers' requirements. An advantage of this representation is that it allows a case to be accessed as a whole or by its parts when a new problem must be solved. Similar cases at appropriate levels of abstraction are retrieved from the case base and the solutions from these cases are combined and refined; the constraints can be used to guide adaptation.

Figure 4: The scheme of a case representation
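A minimal Python sketch of the four-part case scheme follows (field and class names are invented for illustration): requirements, constraints, and the paired FR/DP hierarchies stored as nested nodes.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One level of an FR or DP hierarchy."""
    label: str
    children: List["Node"] = field(default_factory=list)

@dataclass
class DesignCase:
    customer_requirements: List[str]
    customer_constraints: dict
    fr_hierarchy: Node            # functional view
    dp_hierarchy: Node            # physical view

stacker = DesignCase(
    customer_requirements=["lift load", "travel"],
    customer_constraints={"pallet_width_mm": 1200},
    fr_hierarchy=Node("move load", [Node("lift load"), Node("travel")]),
    dp_hierarchy=Node("stacker", [Node("mast and forks"), Node("drive unit")]),
)
# Because both hierarchies are trees, a retrieval routine can match a whole
# case or descend to sub-trees, as noted in the text.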


3.2 Case Indexing
Case indexing involves assigning indices to cases for quick and easy retrieval from a case library. Axiomatic design decomposition principles are used to determine the indexing of both design cases and their solutions, as shown in Figure 5. A hierarchical case library is similar in nature to the product architecture; designers often design entire systems down to the lowest component levels that compose them. Thus, cases are indexed by their functions, allowing a case to be retrieved in several ways. This indexing scheme also allows the composition of different case pieces to create a new solution. However, it usually requires manual pre- or post-processing, structuring and indexing of design knowledge to identify the information needed by designers. Based on the axiomatic design principle, designers map from the requirements (what they want the design to do) to the solutions (how the design will achieve it). As the design progresses, broad, high-level requirements are broken down into smaller sub-requirements, which are then met by sub-solutions. It is also important to maintain functional independence. That is why the index structure was created to distinguish between the cases in the case library. An example of the index structure and solutions of the high lift stacker product is shown in Figure 6. It shows that there are many different ways to satisfy the FRs. FR skeleton sets can be generated for each of the design cases in the case library. Each step down the hierarchy represents a refinement of the unit design; this helps distinguish between cases, which leads to efficient case matching and retrieval. This example comes from past design experiences satisfying customers' needs and the underlying product architecture in the product family. It shows that firms can manage single products and platforms to deliver different products while sharing components.

Figure 5: An index structure in case library

Figure 6: An example index structure of modified high lift stacker


3.3 Case Retrieval
As aforementioned, when new customers' requirements and constraints are given, similar historical design cases are searched, matched and retrieved. The result is that two major groups of functions are classified: product functions, from existing products in the case library that have functions similar to the problem inputs, and add-on functions, which are not present in the retrieved existing products. Thus, the case retrieval process includes two phases: (i) similarity matching of product functions and (ii) similarity matching of add-on functions. Each phase relies on achieving two goals: finding a similar case set and finding the most similar case in this set.

Figure 7: Case retrieval based on similarity of product function matching.

In the first phase, the similarity matching of product functions, as shown in Figure 7, finds the similar case set from the customers' requirements (CA_i), which are compared to the product function hierarchy of each case (Case_i(FR_i)). The simplest similarity measure is to score 1 for equality and 0 for inequality:

    sim(CA_i, Case_i(FR_i)) = 1 if CA_i = Case_i(FR_i), and 0 otherwise.

Thus, the set of cases from the case base that are similar to the current input case is equal to the intersection of the (CA_i) and the (Case_i(FR_i)):

    P = (CA_i) ∩ (Case_i(FR_i)), where P is the set of relevant products.

After all similar cases are found, a mechanism to find the most similar case in this set is needed. The input constraints (Cs_i) are compared to the design parameters of each retrieved case. Then sim(Cs_i, Case_i(DP_i)) can be calculated by:

    sim(Cs_i, Case_i(DP_i)) = 1 - Σ_{i=1}^{n} (Cs_i - Case_i(DP_i)) / system range(DP_i)

where Cs_i - Case_i(DP_i) is the difference between the feature values of the input and the retrieved case, and system range(DP_i) is the range over which each DP can satisfy its FR, based on the capacity of the producer. To turn this normalized distance function into a similarity measure, its value is subtracted from 1. The set of cases is ranked by these similarity scores, and the case with the highest similarity score is retrieved.
In the second phase, the add-on functions (as shown in Figure 8) are the customers' attributes (CA_i) that were not matched to any (Case_i(FR_i)) in the first phase. There are two possibilities for the remaining CAs. The first is when the function does not exist in the retrieved cases but could exist in other product families: the system is then called to search the other product libraries by the same procedure as in the first phase.

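Before turning to the add-on phase, the two retrieval steps just described can be sketched as follows; the cases, constraint values and system ranges below are invented, and the absolute value of the difference is assumed in the distance term so that it stays non-negative.

# Sketch of two-step retrieval: function matching, then ranking by
# constraint similarity (all data hypothetical).

def function_match(customer_attrs, case_frs):
    """Binary similarity: the overlap of the CAs with the case's FRs."""
    return set(customer_attrs) & set(case_frs)

def constraint_similarity(constraints, case_dps, system_range):
    """sim = 1 - sum((Cs_i - DP_i) / system_range_i); the absolute
    difference is assumed here so the distance is non-negative."""
    dist = sum(abs(constraints[k] - case_dps[k]) / system_range[k]
               for k in constraints if k in case_dps)
    return 1.0 - dist

cases = {
    "towing_vehicle": {"frs": {"tow load", "steer"},
                       "dps": {"load_kg": 400.0}},
    "high_lift_stacker": {"frs": {"lift load", "steer"},
                          "dps": {"load_kg": 1200.0}},
}
ca = {"lift load", "steer"}      # customer attributes
cs = {"load_kg": 1000.0}         # input constraints
rng = {"load_kg": 2000.0}        # producer's system range

# Phase 1: keep cases sharing at least one product function.
candidates = {n: c for n, c in cases.items() if function_match(ca, c["frs"])}
# Rank the candidate set and retrieve the most similar case.
best = max(candidates,
           key=lambda n: constraint_similarity(cs, candidates[n]["dps"], rng))
print(best)   # 'high_lift_stacker'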

Figure 8: Case retrieval based on similarity of add-on functions matching

The second possibility is when the function does not exist at all in any case in the database: the producer has never implemented this function in any previous product. A new design of the product function must then be created. The add-on library and the design database include mechanical parts, electrical parts, software modules etc. These add-on components are defined as pairs of FR and DP for single components, and as hierarchies of FRs and DPs in the case of assembly components. Similar to matching CA_i with Case_i(FR_i) in the first phase, Design_i(FR_i) is defined to distinguish the sources of information between the case library, used for design reuse, and the design library, used for new designs. The components in the design database are evaluated to find a solution that satisfies the add-on function. If a CA corresponds to one function, a solution can be found by sim(CA_i, Design_i(FR_i)); if a CA corresponds to a hierarchy of functions, the solution of the CAs can be found by sim(CA_i(FR_i), Design(FR_i)). Thus, matching a CA_i with Design_i(FR_i) is as follows:

sim(CA_i, Design_i(FR_i)) = 1 if CA_i = Design_i(FR_i), and 0 otherwise
or
sim(CA_i(FR_i), Design_i(FR_i)) = 1 if CA_i(FR_i) = Design_i(FR_i), and 0 otherwise

The solution to satisfy each FR must be evaluated by minimizing the information content of the design based on Suh’s axiomatic design principle as follows:

I = log₂ (system range / common range)

I_total = Σ_{i=1}^{n} I_i

Figure 10: An example of a retrieved product model and add-on components
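A small worked sketch of this selection criterion, with invented range values, might look as follows:

import math

def information_content(system_range, common_range):
    """Suh's information content I = log2(system range / common range)."""
    return math.log2(system_range / common_range)

def total_information(design):
    """I_total is the sum of I over all FR/DP pairs of a candidate."""
    return sum(information_content(s, c) for s, c in design)

# Two hypothetical add-on designs, each a list of
# (system range, common range) pairs per FR; the lower I_total wins.
design_a = [(10.0, 8.0), (5.0, 5.0)]   # I = 0.32 + 0.0
design_b = [(10.0, 4.0), (5.0, 2.5)]   # I = 1.32 + 1.0
best = min([design_a, design_b], key=total_information)
print(best is design_a)   # True: design_a carries less information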

The idea mentioned above is shown in Figure 9. In a general case-based system, cases and design experience are usually used to solve new problems by evaluating similar cases and modifying or adapting the retrieved ones. In our work, we found that designing is a complex task and it is unreasonable to expect a case base to contain all possible design cases. That is why our methodology combines case-based reasoning, to initiate an appropriate design from past experience, with axiomatic design rules, to provide this design with the new functions needed. This yields combined advice that satisfies design constraints and compatibility requirements better than a CBR system alone. An example of the retrieval of a towing vehicle case is shown in Figure 10. The example shows that the matching first retrieved the towing vehicle product and then retrieved the component from the high lift stacker satisfying the function.

Figure 9: The design retrieval concept

3.4 Case Adaptation
If an exactly matching case is found by the case retrieval process, its design can be used for the new order without any modification. Otherwise, an adaptation process is invoked to detect the discrepancies between the most similar case and the new order, and to reconcile them by adapting the past design to the new situation. The adaptation knowledge is usually represented as rules. The adaptation rules specify, in a given situation, how to modify the value of a feature, or how to insert or delete certain features of the case representation, in order to generate a solution for the new problem. According to axiomatic design principles, when the relationship between FRs and DPs is an uncoupled design, the set of adaptation rules can be easily and automatically selected by the system to act on the similar old case and produce the new one. An uncoupled design occurs when each FR is satisfied by exactly one DP; the resulting matrix is diagonal and the design equation has an exact solution. The selection of adaptation rules is then done simply by comparing the conflicting differences between the new problem and the retrieved case. In addition, the sequence in which the adaptation rules are applied is also important: when the design matrix is lower triangular, the resulting design is decoupled, which means that a sequence exists in which the FRs can be satisfied by adjusting the DPs in a certain order. This is a very important finding, as the design process is determined to a great extent by this sequence. Figure 11 shows a simple case that illustrates the application of this concept. Axiomatic design is applied in the case adaptation process of the customized leg of the high lift stacker (the customer's requirement comes from the size of the pallet). The resulting matrix is lower triangular, so the resulting design is decoupled. This means that the adaptation rules first need to adjust parameter DP1 to achieve FR1 and then adjust parameter DP2 to satisfy FR2. If the case adaptation process does not follow the sequence specified by the triangular design matrix, the system appears to be very complex, which is defined as imaginary complexity [22].
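A minimal sketch of this sequencing rule follows; for illustration it assumes that each FR is a linear function of the DPs, and the matrix values and targets are invented.

# Adjust DPs in the order dictated by a lower-triangular (decoupled)
# design matrix; any other order would force revisits, i.e. the
# "imaginary complexity" noted above (all values hypothetical).

def is_lower_triangular(matrix):
    """A decoupled design: no FR depends on a later DP."""
    return all(matrix[i][j] == 0
               for i in range(len(matrix))
               for j in range(i + 1, len(matrix)))

def adapt(matrix, dps, targets):
    """Satisfy FR_i = sum_j matrix[i][j] * dp_j in index order."""
    assert is_lower_triangular(matrix), "sequence not guaranteed"
    for i in range(len(dps)):
        # Solve FR_i = targets[i] for dp_i; earlier DPs are fixed.
        partial = sum(matrix[i][j] * dps[j] for j in range(i))
        dps[i] = (targets[i] - partial) / matrix[i][i]
    return dps

# Customized stacker leg (hypothetical numbers): FR1 driven by pallet
# width, FR2 by leg clearance, with DP2 depending on DP1.
A = [[2.0, 0.0],
     [1.0, 4.0]]
print(adapt(A, [0.0, 0.0], [1200.0, 1700.0]))   # [600.0, 275.0]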


Figure 11: An example of applying the axiomatic design principle in the case adaptation process

The traditional approach of case-based reasoning does not specify how to determine the sequence in which to adjust the parameters of old case features, so there is no clear way to guarantee the correct sequence in which to apply adaptation rules. The axiomatic design principle can help designers make decisions so as to adapt old cases to solve new cases in a non-random manner that satisfies the desired system function.

4 CONCLUSIONS AND FUTURE WORK
This paper has presented the concept of combining axiomatic design and case-based reasoning to assist the design process of evolving systems of industrial products. The paper illustrates how companies can react to customers' demands for industrial products in a very competitive market. The company's knowledge can be stored, then reused and integrated with various technologies from design databases, to generate new functionalities for improving existing products. With this methodology, the customer can extend an existing product's life (referring to machines they already use) and the producer can provide customized products with new functionalities suited to the customer's needs (referring to the ability to create a variety of products). A software implementation based on this methodology is currently being developed. It consists of formalizing the case in two main parts, the problem (customers' requirements and constraints) and the product (functional requirements, design parameters and components), to create the library of cases and the design database. The problem is formalized in a manner adequate to enable the calculation of the similarity function. The product is formalized to highlight the driving parameters of the design. Two critical processes in case-based reasoning were addressed, namely the case retrieval process and the case adaptation process. The critical step in case retrieval is function classification, namely product functions, when existing products in the case library have similar functions according to the input problem, and add-on functions, when the retrieved existing product does not have such a function. In addition, the case adaptation process is the most important for achieving reusability of past designs in new situations. The axiomatic design principle is a systematic approach in engineering design which can assist design engineers in designing products and also case-based systems. Function and physical decompositions are the basic method used to represent cases; they are also used to define indexes in the case library as well as to determine new designs when no case exists in the case library. The approach also supports design engineers in achieving adaptable design through the defined sequence of the adaptation process when the design is decoupled. However, the quality of the design solution depends on the set of FRs and DPs in the case and add-on libraries. Design engineers must carefully decompose the set of FRs and DPs of existing product functions and add-on component library functions that will later be reused. The next step in this project is to address the adaptation process by extending the system with knowledge of the global behavior of the product, to fulfil the (re)design of mechatronic products.


5 REFERENCES
[1] Xu, Y. and Huijun, Z., "Function principles for a mechatronics system design", Proc. IMechE, J. Engineering Manufacture, Vol. 201, 2007, pp. 1065-1077.
[2] Aamodt, A. and Plaza, E., "Case-based reasoning: Foundational issues, methodological variations, and system approaches", AI Communications, Vol. 7, 1994, pp. 39-59.
[3] Watson, I., "Case-based reasoning is a methodology not a technology", Knowledge-Based Systems, Vol. 12, 1999, pp. 303-308.
[4] Heylighen, A. and Neuckermans, H., "A case base of Case-Based Design tools for architecture", Computer-Aided Design, Vol. 33, 2001, pp. 1111-1122.
[5] Suh, S. M., Jhee, C. W., Ko, K. Y., and Lee, "A case-based expert system approach for quality design", Expert Systems with Applications, Vol. 15, 1998, pp. 181-190.
[6] Avramenko, Y. and Kraslawski, A., "Similarity concept for case-based design in process engineering", Computers & Chemical Engineering, Vol. 30, 2006, pp. 548-557.
[7] Hu, W. and Masood, S., "An Intelligent Cavity Layout Design for Injection Moulds", International Journal of CAD/CAM, Vol. 2, No. 1, 2002, pp. 69-75.
[8] Qin, X. and William, C. R., "Applying Case-Based Reasoning to Mechanical Bearing Design", Proc. of the ASME 2000 DETC Conferences, #DETC2000/DFM-14011, 2000.
[9] Vong, C. M., Leung, T. P. and Wong, P. K., "Case-based reasoning and adaptation in hydraulic production machine design", Engineering Applications of Artificial Intelligence, Vol. 15, 2002, pp. 567-585.
[10] Tseng, M. M. and Jiao, J., "Case-Based Evolutionary design for mass customization", Computers and Industrial Engineering, Vol. 33, No. 1-2, 1997, pp. 319-324.
[11] Maher, M. L. and Gomez de Silva Garza, A., "Case-Based Reasoning in Design", IEEE Expert, 1997, pp. 34-41.
[12] Praehofer, H. and Kerschbaummayr, J., "Case-based reasoning techniques to support reusability in a requirement engineering and system design tool", Engineering Applications of Artificial Intelligence, Vol. 12, 1999, pp. 717-731.
[13] Han, Y. H. and Lee, K., "A case-based framework for reuse of previous design concepts in conceptual synthesis of mechanisms", Computers in Industry, Vol. 57, 2006, pp. 305-318.
[14] Boyle, M. I. and Rong, K., "CAFIXD: A Case-Based Reasoning Fixture Design Method. Framework and Indexing Mechanisms", Proc. of the ASME 2004 DETC/CIE Conferences, #DETC2004-57689, 2004.
[15] Suh, N. P., The Principles of Design, Oxford University Press, 1990.
[16] Harutunian, V., Nordlund, M., Tate, D., and Suh, N. P., "Decision Making and Software Tools for Product Development Based on Axiomatic Design Theory", Annals of the CIRP, Vol. 45/1, 1996, pp. 135-139.
[17] Suh, N. P., "Design of Systems", Annals of the CIRP, Vol. 46/1, 1997, pp. 75-80.
[18] Suh, N. P. and Do, S. H., "Axiomatic Design of Software Systems", Annals of the CIRP, Vol. 49/1, 2000, pp. 95-100.
[19] Melvin, J. W. and Suh, N. P., "Simulation within the Axiomatic Design Framework", Annals of the CIRP, Vol. 51/1, 2002, pp. 107-110.
[20] Deo, H. V. and Suh, N. P., "Mathematical Transforms in Design: Case Study on Feedback Control of a Customizable Automotive Suspension", Annals of the CIRP, Vol. 53/1, 2004, pp. 125-128.
[21] Goncalves-Coelho, A. M. and Mourao, J. F., "Axiomatic design as support for decision-making in a design for manufacturing context: A case study", International Journal of Production Economics, Vol. 109, 2007, pp. 81-89.
[22] Suh, N. P., "Complexity in Engineering", Annals of the CIRP, Vol. 53/1, 2004.


Set-Based Concurrent Engineering Model for Automotive Electronic/Software Systems Development

A. Al-Ashaab, S. Howell, K. Usowicz, P. Hernando Anta and A. Gorka

1 Decision Engineering Centre, SAS, Cranfield University, UK, [email protected]
2 Jaguar Engineering Centre, Abbey Road, Whitley, Coventry, UK, [email protected]

Abstract This paper presents a proposal for a novel approach to automotive electronic/software systems development. It is based on the combination of Set-Based Concurrent Engineering, a Toyota approach to product development, with the standard V-model of software development. The automotive industry currently faces the problem of the growing complexity of electronic/software systems. This issue is especially visible at the level of integration of these systems, which is difficult and error-prone. The presented conceptual proposal is to establish better processes that could handle electronic/software systems design and development in a more integrated and consistent manner.
Keywords: Set-Based Concurrent Engineering, V-model, automotive electronic/software systems development.

1 INTRODUCTION
Development of new products is now considered fundamental for corporate growth and sustained competitive advantage. This is due to increased competition, the rapid development of technology and shortened product life cycles. The success of product development depends on several factors, such as the organisation and team structure as well as the technology employed. The mechanical aspect of automotive product development has reached an impressive level of automation and computerisation of design and manufacturing using CAD/CAE/CAM, Virtual Reality and Rapid Prototyping. Although these technologies are state-of-the-art, they do not guarantee the production of a product that meets or exceeds customers' demands in terms of quality, innovation, cost, customisation features, service and delivery time. This is because nowadays cars are software driven, which gives them a luxurious and modern character. This cannot be achieved unless full integration of electronic/software (E/S) systems is well placed in the vehicle. The task of E/S systems integration should be understood by all departments involved in product development; otherwise consistency problems will arise, resulting in product and process redesign or even failures in the hands of customers. This project addresses the challenging issues of E/S systems development and integration in the automotive industry. The automotive industry currently faces the problem of the growing complexity of E/S systems. The development process of these systems is not as mature as other processes within automotive companies, for example the development of purely mechanical systems. This issue is especially visible at the level of integration of E/S systems, which is difficult and error-prone. Therefore, there is a need to establish better processes that could handle the development of E/S systems in a more integrated and consistent manner, wherein different engineering teams would be able to communicate and collaborate more effectively.


2 AUTOMOTIVE ELECTRONIC/SOFTWARE SYSTEMS

The automotive industry has been facing increasing complexity of E/S systems in automobiles for 30 years, but nowadays this has become challenging as never before. The reasons for the growing complexity are the decreasing cost of new technologies, market pressure for new innovative functionalities that can be implemented only by means of software and hardware, as well as the need to reduce petrol consumption and gas emissions and to improve overall performance. According to a recent study [1] there are 270 functions that run on 70 embedded platforms, and it is expected that these numbers will grow. Dealing with the growing complexity is difficult at every stage of development, and at the integration stage it is even more complicated. Modern cars are filled with a number of E/S systems that support different vehicle functions. According to Schäuffele and Zurawka [2], a vehicle function is a set of functional features that the driver is able to control directly or can only perceive indirectly (for example, steering is a vehicle function). Nowadays the majority of vehicle functions are electronically controlled and monitored. In every vehicle the complete E/S system can be divided into the following sub-systems:
• Powertrain,
• Chassis,
• Body (Comfort, Passive/Active Safety),
• Multimedia (Telematics, Infotainment).
Each sub-system is different; however, they are all interconnected and form a network of modules like the one depicted in Figure 1.

Figure 1. Network of electronic systems in a modern car (courtesy of Jaguar Cars)

2.1 Software Development Models
There are several software development and software process maturity models described in the professional literature that could be used in the automotive industry, namely: the Waterfall model, the V-model, the Spiral model, various incremental and iterative models, and Capability Maturity Model Integration (CMMI). The authors found that variants of the V-model are most commonly used in the automotive industry. This finding is based on the experience of Jaguar with its supply chain, its former relationship with Ford and its current relationship with TATA. In addition, other automotive companies such as Visteon and Lotus also use the V-model. The V-model is an extension of the Waterfall model. According to Schäuffele and Zurawka [2], the interactions among vehicle, electronics and software necessitate an integrated development process that covers all steps, from the analysis of user requirements to acceptance tests of the complete system. The V-model is indicated as the one fulfilling these requirements. The shape of the model, the letter "V", reflects the vital characteristics that make it especially suitable for automotive embedded software engineering: decomposition of the system on the left side of the "V" and integration of the system on the right side of the "V", accompanied by testing at each level, see Figure 2.

Figure 2. The V-Model [2]

The V-model has similar advantages and disadvantages to the Waterfall model: it clearly distinguishes well-defined stages, but its sequential nature makes it inflexible. The process begins with the analysis of user requirements, so if these are wrong, late changes are difficult to incorporate and very expensive. On the other hand, if significant effort is made during the early stages of designing the logical and technical system architecture, the development of components can be relatively easy and integration is also supported.

3 THE CURRENT PRACTICE OF ELECTRONIC/SOFTWARE SYSTEMS DEVELOPMENT
This paper is a result of an MSc group project undertaken by post-graduate students from Cranfield University with Jaguar Cars. During the project the following areas for improvement were identified:
1. Communication within the department responsible for E/S systems,
2. Lack of ownership of interfaces and of a component-based approach to E/S systems development,
3. Requirements/specifications generation, breakdown and management,
4. Supplier selection and relationships,
5. Procedures to support E/S systems integration,
6. Inappropriate or misused IT systems.
In Jaguar, searching for relevant product data is informal: personal relationships are an important facilitator in information gathering. This is time consuming due to communication problems, which affect the quality of the work. One major source of this problem is the interaction between the components and sub-systems of the vehicle's E/S system, which are not well defined or understood from the conceptual design stage. In addition, the engineer responsible for the integration of the electronic and software components is unknown and not involved until the later stages of product development. This means that the current practice and procedures are inadequate. The proposal presented in this paper addresses all of the areas for improvement above apart from point 6.

4 SET-BASED CONCURRENT ENGINEERING MODEL
Concurrent Engineering (CE) is an approach to product development in which multi-disciplinary teams work together from the requirements stage until production. The idea behind it is to ensure that the requirements of all the stakeholders involved in product development are met. It reduces the number of late changes, time-to-market and cost, as decisions at each stage of product development are based on the common point of view of people from the different disciplines involved.

Set-Based CE (SBCE) is the part of the Toyota product development system that differentiates it from other manufacturing companies. Design participants practice SBCE by reasoning, developing, and communicating about sets of solutions in parallel and relatively independently. As the design progresses, they gradually narrow the sets of solutions based on additional information from development, testing, simulation, trade-offs, customers and other participants, until they agree on one solution [3].


4.1 The Combined V-model and SBCE
As identified in the literature, the V-model is the typical development approach for automotive E/S systems engineering. Its advantages are clearly visible due to the emphasis placed on decomposition of the system, integration and testing. However, the V-model inherits typical weaknesses of the Waterfall model: its sequential nature and the prerequisite for early requirements correctness. The proposed novel model combines the typical "V" approach with SBCE. The set-based approach can overcome the need to specify all the requirements at the beginning, and enables late-binding decisions and their early verification. Concurrent and multi-disciplinary team-working, on the other hand, can help to overcome the sequential characteristic of the V-model. The proposed model intends to involve the people responsible for E/S systems integration at the level of architecture design, early in the development process. This is based on an analogy found in the mechanical domain, where manufacturing engineers are involved in the design process early to ensure manufacturability. An overview of the model is shown in Figure 3.

Figure 3. Combined V-Model and SBCE

The cone in Figure 3 represents the fact that the team responsible for the design of the architecture works with a set of conceptual alternatives, narrowing the set down with increasing detail and with results from feasibility studies. In the model presented here the emphasis is put on the system architecture level. Traditional approaches that rely on component-level design are no longer effective. There is a great need to first have a clear picture of what the overall architecture is going to be and to evaluate different concepts using both quantitative and qualitative evaluation techniques. It is important for Jaguar Cars to know when architectural decisions about hardware and software are committed, in order to support the bottom levels with an unambiguous picture of the system that is going to be developed. The proposal of the combined V-model with SBCE consists of three stages. These are:
• Capture user requirements.
• Define system architecture.
• Define sw/hw components and implementations.
The following sub-sections present each stage in some detail.

4.2 CAPTURE USER REQUIREMENTS
At the beginning of any product development process it is crucial to capture and understand customers' requirements. These can be captured and analysed in detail by a multi-disciplinary team consisting of people from both marketing and engineering, led by the chief programme engineer (CPE). This will enhance communication among the team and thereby address the first opportunity for improvement. Techniques like Quality Function Deployment (QFD) can be applied at this level. In the proposed model, customers' requirements are reflected in the CPE's vision. The CPE's vision is a written document that briefly but precisely describes high-level functional and non-functional requirements, such as cost, quality, time-to-market, dependability, scalability of the system etc. This document has input from different participants, but its final form is approved by the CPE. The recorded vision is then passed to functional engineers (e.g. electrical distribution, chassis systems, integration, infotainment etc.). The CPE's vision is supposed to be a document of major importance that all further developments have to conform to, and it has to be expressed properly so that everybody can understand it. Moreover, the CPE should be a person with a very strong engineering background, with at least several years' experience in automotive E/S systems development rather than in project management. This stage is depicted in Figure 4 and also addresses the third opportunity for improvement, requirements/specifications generation.


Figure 4. Capture user requirements and pass to functional engineers

4.3 DEFINE SYSTEM ARCHITECTURE
Once the vision is handed over to the engineers who represent the different functions, they simultaneously start to develop a set of conceptual architectural solutions for their domains that conform to the CPE's vision; for example, engineers responsible for communication buses propose a few different layouts. This step is based on the first principle of SBCE: map the design space [3]. The set of concepts is then given to a team called the System Design Team (SDT), composed of engineers responsible for the overall architecture of the system, including representatives from the integration function. This concept is called Early Integration Engineers Involvement (EIEI), similar to the concept of Early Manufacturing Involvement that comes from the manufacturing domain. EIEI could also be the basis of Design for Electrical Integration (DFEI). The SDT evaluates the different concepts, combines them and creates a set of several conceptual architectures. The overall functionality of the system, its decomposition into nearly-independent sub-systems (which will allow fairly independent development at the bottom of the "V") and the functions performed by each sub-system are specified. At this level, critical characteristics of the architecture must be taken into account, especially the degree to which the system will be distributed and how scalable and extensible it is going to be. During development, engineers should use approved knowledge that comes from previous experience and previous projects rather than personal opinions. Once the set of conceptual architectures is ready, the CPE evaluates them and chooses 2-3 for further development. These steps are depicted in Figure 5.

Figure 5. Define system architecture - Level I

The SDT then has 2-3 conceptual architectures to focus on. A document called the architecture study, which describes these conceptual architectures, is written and sent to evaluators such as the team responsible for the available working space, integration engineers, electrical engineers, mechanical engineers and key suppliers. The SDT starts to increase the detail of these architectures using feedback from the different evaluators. This stage is an iterative and incremental process during which feasible designs are refined and infeasible designs are eliminated. This step is based on the second principle of SBCE: integrate by intersection [3]; it hence supports E/S systems integration, which was identified as one of the opportunities for improvement in section 3. The architectures are now specified both at a general functional (logical) level and at a physical level, focusing mainly on the interfaces between components and systems. A clear and complete definition of interfaces is of major importance, as rigorous encapsulation is a guarantee of relatively independent development and seamless integration. As such, this proposal addresses the opportunity for improvement concerning the lack of ownership of interfaces and of a component-based approach to E/S systems development. The remaining architectures are then simulated and evaluated by expert judgement. The results of the simulations are analysed by the SDT as well as the other evaluators, and the architecture with the best results is chosen as the final one. This step is based on the third principle of SBCE: establish feasibility before commitment [3]. At this level the architecture is analysed by integration engineers and a document called the integration study is elaborated. This document should include information about expected integration procedures, guidelines and possible problems, and should be used during the integration stage as a basic guide. The final decision about the architecture is made by the CPE. These steps are depicted in Figure 6.

Figure 6. Define system architecture - Level II

4.4 DEFINE SW/HW COMPONENTS & IMPLEMENTATION
Afterwards, full-scale architecture development can be undertaken by the different teams so that all software and hardware components are fully specified. It should be unambiguous what configurations the architecture will allow: the full topology of the car network, the number of time-triggered links, the number of ECUs etc. As far as software is concerned, all software components should be defined, along with the underlying platform with a real-time operating system, the real-time requirements of the different software functions etc. In addition, decisions must be made about which sw/hw components will be outsourced and which will be built in-house. At this level it is important to have an efficient and standardised policy for dealing with component (sw/hw) suppliers. When selecting a supplier, the decision should not be made on component cost alone, but also on the supplier's capability, conformance to specified quality standards, market position and ability to deliver the component on time, as well as on a long-term cost model. This model should address the re-education cost of new suppliers and the engineering effort required within Jaguar to tailor a new supplier's component to Jaguar's needs. When considering a change of supplier, this model should be taken into account. Once the component supplier is selected, a gateway process for reviewing the design should be agreed between the supplier, the direct component engineer and the engineers responsible for the other interfacing components. At every stage of the gateway process there is a formal meeting between them, in which the supplier explains the current state of the component development and all the component engineers (direct and interfacing) provide technical feedback. This approach should ensure that the supplier is aware of the inter-relationships with other components. This is shown in Figure 7.
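Although the model is organisational rather than computational, the set-narrowing idea behind these stages can be caricatured in a few lines of Python; the architecture options and the feasibility check below are entirely invented and merely stand in for the simulation and expert judgement the paper describes, so this is only a sketch of the set-based idea, not Jaguar's process.

# Toy sketch of set-based narrowing: each function proposes a set of
# options; the SDT keeps only combinations every check accepts.

from itertools import product

bus_layouts = {"single CAN", "dual CAN", "FlexRay backbone"}
topologies = {"central gateway", "domain controllers"}

def feasible(bus, topology):
    # Stand-in for simulation / expert judgement of one combination.
    return not (bus == "single CAN" and topology == "domain controllers")

# Map the design space, then intersect: keep what survives every check.
architectures = [(b, t) for b, t in product(bus_layouts, topologies)
                 if feasible(b, t)]
# Narrow further (e.g. the CPE keeps 2-3 concepts) before committing.
shortlist = architectures[:3]
print(shortlist)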


Figure 7. Define HW/SW components and implementation

5 DISCUSSION
It is deemed essential and quite significant to note here that the proposed new model for E/S systems development is a completely novel approach. It was ascertained from the literature reviewed that there is no evidence in electronic/software product development of an approach combining concepts from the standard V-model and Set-Based Concurrent Engineering. The advantage of this combination is that SBCE was developed mainly for mechanical parts, with an emphasis on body styling [3, 4]; combining SBCE with the V-model provides the right guide for considering the detailed aspects of electronic/software development.

The electrical development team of the Jaguar engineering centre has a very positive opinion of the whole project, covering the process model developed, the opportunities for improvement (issues raised) and the proposed model that is the topic of this paper. The engineers were very impressed with the research approach taken for the study, as being practical and realistic, as well as with the standard of the research team (the MSc students). While the proposal of the combined V-model with SBCE reflects the need of the current development process to meet new challenges, further study is required to add detail to the proposal. Jaguar would like to look at the implementation of the technique as a priority concern to improve the current process. Using this technique will improve the ability to design from a systemic approach, which will reduce development timescales and costs and ultimately reduce warranty figures. It is hoped to reduce the electrical warranty bill by more than 50%, and likewise the electrical development costs, particularly from a manpower standpoint. This would also offer Jaguar's customers an improved ownership experience. Jaguar quantifies the impact of the study purely on the basis of how many development issues might have been avoided had the development team better analysed design integrity. By adopting the presented proposal, Jaguar estimates a reduction in late issue resolution of between 40 and 60%. This model, if thoroughly defined and implemented, is expected to improve integration and communication alongside the product development process. It is recommended that further studies or projects be carried out to extend the results from this project. It is believed that this model could be applicable in other automotive companies or even in other industrial sectors, such as aerospace or marine.

6 ACKNOWLEDGMENTS
Firstly, we would like to thank our sponsors, the Jaguar Engineering Centre and the Manufacturing Department of the School of Applied Sciences, for their support during the project. Special thanks go to all of the employees of the Jaguar Engineering Centre who were associated with this project, for their time, support and the valuable information provided. A special acknowledgement goes to the rest of the Jaguar MSc group project team, namely Hicham El Ammari, Jeevan Sagoo, Muaaz Almosalm, Muhammad Fahad and Sanu Omolade Williams.

7 REFERENCES
[1] Broy, M., Kruger, I., Pretschner, C., Salzmann, C., 2007, Engineering Automotive Software, Proceedings of the IEEE.
[2] Schäuffele, J., Zurawka, T., 2005, Automotive Software Engineering: Principles, Processes, Methods and Tools, SAE International.
[3] Sobek, D. K., Ward, A. C., Liker, J. K., 1999, Toyota's Principles of Set-Based Concurrent Engineering, Sloan Management Review, MIT, 40(2): 67-83.
[4] Eshati, S. and Al-Ashaab, A., 2008, Review of Set-Based Concurrent Engineering Applications, Cranfield Multi-Strand Conference, 6-7 May 2008.


Symbiotic Design of Products and Manufacturing Systems Using Biological Analysis

T.N. AlGeddawy, H.A. ElMaraghy
Intelligent Manufacturing Systems Centre, Department of Industrial & Manufacturing Systems Engineering, University of Windsor, Windsor, Ontario, Canada
[email protected], [email protected]

Abstract Changes in manufacturing systems are often driven by product design variations that exist at specific points in time, and by gradual product design changes that appear over time. Further interactions take place to fully utilize established manufacturing system capabilities when considering future product designs. This is common in eco-systems where two or more different species co-evolve simultaneously, and it has inspired the development of a new model to capture the symbiotic relationship between products and their manufacturing systems based on Cladistics, which is commonly used in biology. The obtained hierarchical order was analyzed in depth to track the record of product changes and to guide the next design steps so as to benefit from system capabilities.
Keywords: Design Change, Evolution, Symbiosis, Cladistics Analysis

1 INTRODUCTION
This research is motivated by the belief that some clues from nature can help explain product development and find parallels between its progression and that of the manufacturing systems used to produce the products. One of these clues and parallels can be found in biological evolution. Evolution in nature is considered the source of life's diversity on earth; it causes both gradual change over time and brings about distinct variations at specific times. Different species in nature do not live in isolation; they interact, change and together produce new varieties [1]. It is believed that this kind of co-evolution is not limited to nature, but can also be observed in manufacturing environments, albeit with different mechanisms, since product designs and manufacturing systems are dynamic and dependent on one another. Product design changes and variations are two main drivers that affect manufacturing system design and capabilities. In nature, change and variety accompany the evolution of species, while in manufacturing they accompany the co-evolution of products and systems. In nature, species that do not adapt and evolve fade away and become extinct. Similarly, in manufacturing, systems that do not accommodate the products' changing processing requirements become less useful and are eventually phased out. Product evolution modeling that follows the biological evolution definition was developed [2] as a first step towards a more comprehensive design framework, described in this paper, that considers not only changes in product design but also the corresponding changes in manufacturing system design. Cladistics is a mathematical technique that is commonly used in biology to help visualize the theories behind the taxonomies of different species, and eventually how they evolve and vary. That method was utilized to identify how manufactured products evolve and to illustrate the


potential knowledge and information that can be gained from the developed model. In this paper, the developed classification method [2] for product instances is extended to include the wider view of product / system design inter-dependency and symbiotic relationships. In addition, a methodology is proposed for recommending the type and direction of logical product design modifications in order to increase the life span of a current manufacturing system and to exploit its full capabilities. An innovative in-depth cladogram analysis is presented, which utilizes the historical data set of a product to shed light on its possible future design steps, taking into consideration the manufacturing system capabilities.

2 SYMBIOSIS IN MANUFACTURING
Most design methodologies for manufacturing systems found in the literature are uni-directional [3]: the needs are identified in terms of functional requirements, and the system components are expressed as design parameters, best expressed in 'Axiomatic Design' process terms [4]. However, the flow of the design tasks is normally in one direction, from functional requirements through to the definition of the system components, their relationships and design parameters. The overall view of a manufacturing system design methodology has to change to account for the noticeable mutual influence between products and manufacturing systems. In view of the apparent symbiotic relationship between changes in products and those in their manufacturing systems, where characteristics change over time and variants emerge at specific instances, a corresponding two-way flow should exist in the design process that relates the designs and changes of both products and system components. That loop is meant to capture the natural progression of product design, or technology breakthroughs, by expressing and modeling their close inter-dependence and symbiotic relationship.

Figure 1: Disruptive Symbiosis in Product / System Relationships (diagram labels: product-side drivers such as higher production rate, new product variety / mix and new materials; system-side drivers such as innovation (new technology), the shift from manual to automated and from dedicated to flexible and reconfigurable systems, new processing techniques and new equipment development; effects include design shift, paradigm shift and DFX frontiers / limitations; the two sides are linked by symbiotic relations).

A change or a variation in product design would change the processes needed to produce it, which would require changes in the manufacturing system design, unless the current system capabilities are sufficient to accommodate the product changes. The modified system capabilities in turn would present new opportunities for processing additional features that may be introduced over time as the products change or vary. The symbiosis between product design and manufacturing systems may be characterized as either disruptive, in the case of innovations and inventiveness, or steady, to account for the gradual exchange of effects between products and manufacturing systems.

2.1 Disruptive symbiosis
Drivers exist for any change; those drivers might be sufficiently strong to infuse a series of milestone effects. Such influential, strong drivers do exist in the domain of products and manufacturing systems. They do not just result in minor tweaks on both sides; they introduce total renovation. Some examples of these powerful drivers and their effects are shown in Figure 1. They may include, on the product side, the need for a higher production rate, the introduction of a totally new product and the move to a product family concept, as well as the introduction of new technologies and processes on the system side. All of these drivers are massive in their effects. They often lead to a major production paradigm shift affecting the manufacturing systems and often cause a complete re-assessment of product designs. Changing from manual operation to automation, from dedication to flexibility, and applying principles of design for automation, assembly and manufacturability (DFX) all represent possible twists in the original conditions. It should be mentioned that this kind of symbiosis is impulsive and disruptive, where innovation and creativity in producing new solutions are part of the symbiosis process.

2.2 Steady symbiosis
The other type of symbiosis between products and manufacturing systems can be considered gradual and steady; it does not involve big leaps or major changes on either side. This kind of symbiosis is the main concern of this paper; it is the type of influence that needs to be perfectly directed, with the least effort and careful pre-planning, to gain the most benefit from the changes. Introducing a new product variant to an existing product family, making a product design update, or installing additional manufacturing capabilities in a system are all common change drivers that do not require massive change in either domain; rather, they need new vision to direct these changes and their consequential effects in a clear and streamlined design process flow (Figure 2). A few examples are found in the literature that illustrate the selection of the appropriate product design from a given set of solution options to best suit a current configuration in a Reconfigurable Manufacturing System (RMS) [5], and the reconfiguration of machine layouts according to a current configuration plan to minimize cost [6, 7]. However, the notion of cyclical and bi-lateral interactions between products and manufacturing systems does not exist. A manufacturing system structure can be adjusted based on process needs and constraints, and the product design can be modified and optimized given the current system capabilities. Both change and variety are present in the two-dimensional symbiotic relationship between products and manufacturing systems.

2.3 Changeable boundaries of products families
Innovative ideas about how families of products evolve to form new species, which are closely related to the essence of biological evolution, were presented and the new term "Evolving Parts / Products Families" was coined [8]. The changes occurring in those product families over time were described as mutations, with feature losses and gains through generations leading to the appearance of new families of products. Novel models and methodologies for "Evolvable and Reconfigurable Process Plans", which are capable of responding efficiently to both subtle and major changes in "Evolving Parts and Products Families" and "Changeable and Reconfigurable Manufacturing Systems", were developed [9, 10]. In this approach, the sequence of features / operations, which represents the macro-process plan, is thought of as a genetic sequence, and the new features / operations added in the reconfigured process plan represent a mutation of that sequence obtained by optimally inserting new genes (features / operations). This is consistent with the concept of evolving parts families and the biological evolution context.


Figure 2: Steady Symbiosis in Product / System Relationships (diagram labels: product-side drivers such as a new family member or a design update; system-side drivers such as process acquiring and new installed capabilities; effects such as design tweaking, optimization and sequencing flexibility; system changes range from major (layout, MHS) through mid (machine capabilities, fixtures) to minor (tools, software); the two sides are linked by symbiotic relations).

Such evolving families with dynamic, changeable boundaries need a manufacturing system with higher adaptation capabilities, which led to the introduction of the notion of changeable manufacturing and the emphasis on building change enablers into the system [11]. However, the benefits from changeability enablers, such as reconfigurable process plans, are not yet fully utilized in manufacturing systems. Two-way links should be established to reflect the inter-dependence of product families and manufacturing system capabilities.

2.4 Evolution in Biology
In nature, organisms are always changing; their properties and characters are altered, and transformation in their form and behavior is observed through the generations. These changes are described as the biological evolution of life forms. Evolutionary modification in living things has some distinctive properties; evolution does not proceed along some grand, predictable course. Instead, the details of evolution depend on the environment that a population happens to live in and the genetic variants that happen to arise in that population. Biological evolution does not just indicate an individual temporary change in the attitude or morphology of a group of entities, but rather describes the wider inheritable changes transferred to successors from their ancestors. That is why the main characteristic of the evolution process is not only the occurrence of change, but the ability to preserve and transfer that change over time. This emphasizes the fact that evolution as described in biology is gradual and steady compared to the spontaneity of creation and innovation.

2.5 Cladistics for manufactured products change analysis
As evolution is a process of change for the involved entities, if a classification is linked to this change process, it is postulated that groups of manufacturing entities can be formed based on similar technological and behavioral attributes, and that there will exist an ideal model or solution for the group. This group reference model will then help reduce the time and costs associated with developing solutions (e.g. design procedures, process plans, tooling and fixturing methods, etc.) for individual entities within that group. The existence of a manufacturing classification is based on the process of comparative study, which enables the storage and retrieval of information to facilitate the application of generalization points. This process enhances the knowledge and understanding of entities in the manufacturing environment and enables early predictions about their behavior [12].

Cladistics is an important classification approach in biology that establishes a classification scheme based on commonalities. Cladistics is a method of classification that groups entities hierarchically into discrete sets and subsets, in order to organize their comparative data. Cladistics is used mainly in the field of biological Systematics; however, it has also been used in the field of organizational Systematics [13]. It was recently used by the authors to study evolution in manufacturing [2] and to construct the layout of an assembly line that follows a delayed product differentiation strategy [14]. In this paper, Cladistics is used to extend the earlier product evolution study to capture the steady symbiosis between products and manufacturing system(s) due to gradual changes. Cladistics was originally developed by Hennig [15], where the systematic construction process begins with choosing the end-taxa, which are the variants to be investigated, and placing them at the terminals of a cladogram (a tree-like structure) (Figure 3).

Figure 3: Cladogram Shape and its Related Terms (diagram labels: root, nodes, branches, characters, character loss, conflict, terminals and end-taxa).

Figure 4: Cladistics' Depth Analysis to Manage Change in Products (diagram: features A-H plotted against product sophistication, with the evolution trend branch and the advisory pool of features indicated).

Next, the character states inherited by each taxon are identified. While a character means a certain feature, its states are its different values, ranges, shapes, phases etc. A Cladogram's length is the number of steps appearing in the Cladogram, which is the total number of character state changes necessary to support the relationship of the taxa in a tree. Fewer steps mean a better Cladogram and a better representative hypothesis of the taxa relationship, or what is referred to as 'parsimony'.

3 DIRECTING PRODUCT DESIGN TRENDS USING CLADISTICS ANALYSIS
While Cladograms are meant only to be a visual aid for clarifying concepts in biological science regarding organisms' taxonomies, some additional and beneficial information can be derived from them using the innovative new analysis introduced in this paper for managing product change with respect to manufacturing system capabilities. A depth Cladogram analysis is performed for managing product change (Figure 4), which can be described in the following steps (a small sketch of the procedure is given at the end of this section):
• Identify the features that impose design differences among the studied product entities.
• Gather the needed historical data about the different product entities (data about product entities from different competitors may enrich the analysis and reveal more perspectives).
• Perform the Cladistics analysis and obtain the most parsimonious Cladogram.
• Search for the Cladogram branch that contains the most evolutionary twists (the largest number of character state changes); it represents the product evolution trend.
• Retrieve the existing characters on the found branch.
• Establish an advisory pool of features using those characters.
Examination of the Cladogram tree can easily identify the branch that contains the most nodes and the other branches that split out of it. An example of this kind of branching is shown at the far left side of the Cladogram in Figure 4; however, its location can be anywhere on the tree depending on the layout of the resulting Cladogram, yet the identification criterion, being the highest number of splitting nodes, remains the same. Identifying that branch corresponds to recognizing the trend of product design evolution, since usually the most sophisticated product in the entire studied set of products will be located at the end of that branch, as an end-taxon (terminal entity). Such an analysis was not originally attributed to Cladistics; however, it promises to be useful as a tool for planning, managing and analyzing the historical progression and evolution of a product. Since this analysis considers nodes, it is called "depth" analysis: as the number of splitting nodes increases, the Cladogram branches get longer and deeper. The analysis reveals a design trend that represents the evolutionary twists exhibited in the history of the most sophisticated product entity in the studied set of products. Furthermore, the features that appear along that branch (trend) can be retrieved and stored in an advisory pool of features. As the product changes gradually, the manufacturing system changes accordingly and new capabilities may be added as needed. However, in subsequent product design changes, some of these product characters / features may be lost or eliminated, while the manufacturing system continues to possess the related capabilities. The advisory pool of features then represents a design boundary. If the next product design lies within that boundary, it will be compliant with the current capabilities of the manufacturing system, and no further system changes will be needed to produce the newly designed product. The use of the advisory pool of features also excludes the undesirable product features that were proven not to hold against change and design requirements (i.e. that did not survive). Less sophisticated products appear as end-taxa of the right-hand-side branches of the Cladogram, where some features that do not appear on the evolution trend branch are found. This indicates their lack of survival and adaptation ability; hence they are excluded from the advisory pool of features.
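The sketch promised above follows; the tree encoding and the feature identifiers are invented for illustration. It walks each root-to-leaf path of a cladogram, picks the path with the most character-state changes, and returns its characters as the advisory pool.

# Sketch of the depth analysis: the cladogram is encoded as nested
# tuples (characters_changed_on_branch, children_or_leaf_name).

def paths(node, acc=()):
    chars, children = node
    acc = acc + tuple(chars)
    if isinstance(children, str):       # a leaf (end-taxon)
        yield children, acc
    else:
        for child in children:
            yield from paths(child, acc)

def advisory_pool(tree):
    """Characters along the path with the most state changes."""
    leaf, chars = max(paths(tree), key=lambda p: len(p[1]))
    return leaf, set(chars)

# Hypothetical cladogram: integer character ids mark state changes.
tree = ([], [
    ([1, 3], [([4, 5], "product_A"), ([6], "product_B")]),
    ([2], "product_C"),
])
print(advisory_pool(tree))   # ('product_A', {1, 3, 4, 5})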

A case study showing how data can be analyzed to derive an advisory pool of features for further product development is presented in the next section.

4 CASE STUDY - PLANNING FUTURE CYLINDER BLOCK CHANGES
A set of cylinder blocks consisting of six different instances is used as an example to demonstrate the use of the proposed Cladogram depth analysis and its merits.


The cylinder block variants belong to automotive engines of different makes and materials (extended data covering more cylinder blocks and more features are published in [2]). The cylinder blocks are made of either aluminum or cast iron. They belong to either inline or V-type, high-deck or low-deck, front or rear wheel drive, Over Head Cam (OHC) or Over Head Valve (OHV) engines. Table 1 identifies and summarizes the different characters, their state variations and their descriptions. In the character states column, (0) means that the cylinder block variant does not possess the character or that it is absent or primitive (low profile state), and (1) means that the character exists or is derived (high profile state). Figure 5 shows a composite part for the cylinder blocks representing the whole data set, including all derived features. The six cylinder blocks are also presented along with their inherited characters in Table 2.

Figure 5: Cylinder Block with All-Derived Characters (labelled features: deck height, deck end, cylinders closeness, oil pump, camshaft housing, wheel drive type (mounts), cylinder arrangement, crank case and water pump).

Table 1: Identifying studied characters and their states.
1 Material: 0 = Aluminum; 1 = Cast Iron
2 Cylinders Arrangement: 0 = Inline (Ø = 0˚); 1 = V-banks (Ø = 60˚ or 90˚)
3 Wheel Drive Type: 0 = Front (transverse engine position); 1 = Rear (engine mounts on block sides, longitudinal engine position)
4 Deck End: 0 = Open (block made by die casting); 1 = Closed (block made by sand casting)
5 Cylinders Closeness: 0 = Siamese cylinders; 1 = Separated cylinders
6 Skirt (Crank Case): 0 = Assembled to the block; 1 = Integrated with the block
7 Camshaft Housing: 0 = Absent from block (over head cam); 1 = Camshaft and pushrods housing exists in block (over head valve), or housing exists for balance shafts (balance shafts overcome 2nd harmonic vibrations in the engine)
8 Water Pump: 0 = Completely separable from the block; 1 = Pump housing integrated in the block
9 Oil Pump: 0 = Completely separable from the block; 1 = Mounted on the block
10 Deck Height: 0 = Low deck (stroke length < bore); 1 = High deck (stroke length > bore)

Table 2: Characters' states in the studied cylinder blocks (characters 1-10).
4A-GEU 1587cc: 1 0 1 0 0 1 0 0 1 0
711M 1691cc: 1 0 1 1 1 0 1 0 1 0
QR20DE 1998cc: 0 0 0 0 1 0 0 1 0 0
Mopar 2360cc: 0 0 0 0 0 0 0 1 1 1
Buick215 2900cc: 0 1 0 0 0 1 1 1 0 0
LS2 5967cc: 0 1 1 0 0 1 1 1 1 0

A commercial package ('WinClada') was used to perform the Cladistics analysis on the given data set. The Cladogram in Figure 6 represents the hypothetical evolutionary path of the six presented cylinder blocks. The total length of this Cladogram is 17 steps, which is the most parsimonious for this set of data. The small solid circles represent derived character states, while the small hollow circles represent the disappearance of a character in further evolutionary steps, which was allowed in this Cladistics analysis as they simulate features lost due to design considerations along the product evolution history. Depth analysis was performed on the obtained Cladogram; as Figure 6 shows, one branch exhibits the most evolutionary twists and design changes among all the other studied engines. That branch represents the evolution trend of the studied set of engines. The intended advisory pool of features can be established by retrieving the characters appearing along that trend (branch). It contains 8 characters (1, 3, 4, 5, 6, 7, 8 and 9), while two characters (2 and 10) are excluded as they appear in less sophisticated engines. Although characters 6 and 8 disappeared in later evolutionary steps in this trend, their corresponding manufacturing system capabilities remain. Hence, they can be used in a future product design, especially as their disappearance occurred late in the evolution path of this trend.
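To make the notion of Cladogram length concrete, the following sketch counts the character-state changes that a given topology forces for the Table 2 data, using a standard Fitch parsimony count; the nested-pair topology below is an arbitrary assumption for illustration only, not the most parsimonious tree obtained with WinClada.

# Fitch parsimony count: sum over characters of the state changes a
# fixed tree topology forces (Table 2 data, illustrative topology).

taxa = {
    "4A-GEU":   [1,0,1,0,0,1,0,0,1,0],
    "711M":     [1,0,1,1,1,0,1,0,1,0],
    "QR20DE":   [0,0,0,0,1,0,0,1,0,0],
    "Mopar":    [0,0,0,0,0,0,0,1,1,1],
    "Buick215": [0,1,0,0,0,1,1,1,0,0],
    "LS2":      [0,1,1,0,0,1,1,1,1,0],
}

def fitch(tree, k):
    """Return (possible state set, change count) for character k."""
    if isinstance(tree, str):
        return {taxa[tree][k]}, 0
    (ls, lc), (rs, rc) = fitch(tree[0], k), fitch(tree[1], k)
    inter = ls & rs
    if inter:
        return inter, lc + rc
    return ls | rs, lc + rc + 1   # a state change is forced here

# Hypothetical example topology (nested pairs), not the published one.
tree = ((("4A-GEU", "711M"), ("QR20DE", "Mopar")), ("Buick215", "LS2"))
length = sum(fitch(tree, k)[1] for k in range(10))
print(length)   # total character-state changes implied by this tree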

Figure 6: Depth Analysis of the Cylinder Blocks Cladogram.

5 CONCLUSIONS
Product design has always been the outcome of designers' innovative creation; however, that outcome needs to be managed and logically guided to benefit from the product's past evolution and inform the next generation of product design, which is the aim of the proposed Cladogram depth analysis and its merits. An innovative method of product design analysis based on Cladograms was introduced. The Cladistics technique is a way to logically order entities in a hierarchy according to their commonality. Hence, it was used innovatively in this paper to reveal and analyze product evolution (change), using in-depth Cladogram analysis to isolate the branch with the most evolutionary twists, which resembles the change trend in the studied set of products. The design features appearing along the change trend are used to form an advisory pool of features that defines the desired target boundary for future product designs. The features pool is consistent with the capabilities that already exist within the manufacturing system and excludes the features that were not required by the most sophisticated products. The feedback to the product designer, based on this analysis, has the benefit of prolonging the life and utility of manufacturing systems and reducing the need for their reconfiguration, re-design, re-tooling or dismantling, by advising the designer on the most promising product family boundary and the existing factory resources. This feedback is one part of the proposed design framework that captures the symbiosis between products and manufacturing systems. The presented innovative framework of this symbiotic relationship is akin to the biological co-evolution model found in nature. It aims to manage product variation and change by enhancing the design process of both the product and the manufacturing systems.

6 REFERENCES
[1] Mayr, E., 2000, The Growth of Biological Thought: Diversity, Evolution, and Inheritance, Harvard University Press.
[2] ElMaraghy, H., AlGeddawy, T., Azab, A., 2008, Modelling Evolution in Manufacturing: A Biological Analogy, CIRP Annals - Manufacturing Technology, 57(1): 467-472.
[3] AlGeddawy, T., ElMaraghy, H., 2008, Manufacturing System Design for Changeability, in Changeable and Reconfigurable Manufacturing Systems, Springer-Verlag, ISBN 978-1-84882-066-1, 280-297 (in press).
[4] Suh, N. P., 2001, Axiomatic Design: Advances and Applications, The Oxford Series on Advanced Manufacturing.

[5] Lee, G. H., 1997, Reconfigurability Consideration Design of Components and Manufacturing Systems, International Journal of Advanced Manufacturing Technology, 13: 376-386.
[6] Kochhar, J. S., Heragu, S. S., 1999, Facility Layout Design in a Changing Environment, International Journal of Production Research, 37(11): 2429-2446.
[7] Meng, G., Heragu, S. S., Zijm, H., 2004, Reconfigurable Layout Problem, International Journal of Production Research, 42(22): 4709-4729.
[8] ElMaraghy, H., 2007, Reconfigurable Process Plans for Responsive Manufacturing Systems, in Digital Enterprise Technology: Perspectives & Future Challenges, Springer Science, 35-44.
[9] Azab, A., ElMaraghy, H., 2007, Mathematical Modeling for Reconfigurable Process Planning, CIRP Annals, 56(1): 467-472.
[10] ElMaraghy, H., 2008, Changing and Evolving Products and Systems - Models and Enablers, in Changeable and Reconfigurable Manufacturing Systems, Springer-Verlag, ISBN 978-1-84882-066-1, 25-45 (in press).
[11] Wiendahl, H. P., et al., 2007, Changeable Manufacturing: Classification, Design, Operation, Keynote Paper, CIRP Annals, 56(2): 783-809.
[12] McCarthy, I., 1995, Manufacturing Classification: Lessons from Organizational Systematics and Biological Taxonomy, Integrated Manufacturing Systems, 6(6): 37-48.
[13] McKelvey, B., 1982, Organizational Systematics: Taxonomy, Evolution, Classification, University of California Press.
[14] AlGeddawy, T., ElMaraghy, H., 2008, Assembly System Design for Delayed Product Differentiation, 2nd CIRP Conference on Assembly Technologies and Systems, Toronto, Canada.
[15] Hennig, W., 1966 (republished 1999), Phylogenetic Systematics, Urbana: University of Illinois Press.

Scenario Based Design

Supporting Scenario-Based Product Design: the First Proposal for a Scenario Generation Support Tool I. Anggreeni, M.C. van der Voort Laboratory of Design, Production and Management, Faculty of Engineering Technology, University of Twente, The Netherlands {i.anggreeni, m.c.vandervoort}@utwente.nl

Abstract
Using concrete stories about product or technology use, or 'scenarios', is a promising design approach. Nevertheless, design practices face uncertainties, especially in the activities of identifying, creating and selecting scenarios. Based on a workshop series at a design company, specific problem areas in its scenario practice have been identified. Support is required in: (1) documentation of design information and creation of scenarios from that information, (2) identification and selection of scenarios for specific purposes, (3) concept evaluation using scenarios and (4) communication of scenarios to stakeholders. This paper proposes functionality for a scenario generation support tool that addresses these requirements.
Keywords: Scenario-based product design, Product design, Scenario generation, Design practice, Support tool

1 INTRODUCTION
With a more competitive market and more selective buyers, consumer products will most likely thrive by shifting the design focus to users. Despite their different personal characteristics, all users want the best value for their money. In today's market this is often translated into a single product having as many functions as possible to fulfil the diverse goals of its users. Nevertheless, one generic solution might not be the answer, because a product needs to perform well in the diverse situations its users may experience. For design teams, this means dealing with immense and not uncommonly contradictory design information. Furthermore, the various use aspects demand multi-disciplinary design teams, which often presents a challenge in team-building and communication. In facing these challenges, design teams are eager to receive support.

Using concrete stories about product or technology use, or in our term 'scenarios', seems to have the potential to address the above-mentioned challenges. The concreteness of scenarios helps to glue relevant pieces of information together and make them more meaningful. By enforcing the use of natural language and making assumptions explicit, communication within the design team as well as with users and stakeholders can be improved. This approach of applying scenarios in a design process is known as 'scenario-based design' (SBD). Originating from the software engineering discipline, SBD aims to tackle technical problems, especially in the activities of balancing reflections and actions, encouraging active participation of stakeholders and building reusable design knowledge [1]. Currently, SBD is also increasingly being applied to the full spectrum of product use. Software engineering, however, has underlying differences compared with consumer product development. In comparison with tangible products, software applications concern a more limited set of interactions and use situations. A web-based application for project management, for example, has a closed set of interactions (utilizing mouse and keyboard) and contexts of use (i.e. an office or corporate setting). Tangible consumer products, on the other hand, often have diverse means of interaction and more varied use settings, demanding a more elaborate use of scenarios. Therefore, initiatives to refine scenario use in the design of user-friendly consumer products are growing; we refer to this as 'scenario-based product design' (SBPD). Despite these new initiatives, it has been observed in e.g. [2, 3] that the use of scenarios in product design is actually not new. These works confirm the benefits of scenarios in the product design process to improve communication and afford early exploration and evaluation of design ideas.

Despite the potentials, design practice is often discouraged by the uncertainties involved in applying scenario-based approaches. Building scenarios can indeed be a waste of time if the purpose and the future use of the scenarios are not well defined. To remedy this, a design team needs to know in advance what the gain from using scenarios is, before even starting to create them. The design team will need a more solid framework of scenario use to be able to measure the efficacy of a scenario-based approach. Unfortunately, available SBPD approaches remain mainly heuristic and loosely defined. To better reflect this situation, we define SBPD in our research context as a common denominator for techniques that apply scenarios to bring actors, products, environments and their interactions into harmony. There seems to be a gap between 'theoretical SBPD' and the 'real battlefield' of product designers. Despite the promising benefits of using scenarios, designers often face doubts whether they have identified, created and communicated scenarios in an optimal way. SBPD in practice needs more practical guidance. Motivated by this need, we propose to guide scenario generation as a form of support that is likely applicable to the practice of small to medium-sized design companies. Small to medium-sized design companies are chosen because they provide a pragmatic test bed to apply and evaluate our proposed scenario use framework. This type of company is characterized by close-knit design teams, short-term deliveries and dynamic project execution. The designers will be able to quickly measure whether a framework for scenario use addresses the practical challenges they face or whether it would only obstruct the design process. A realistic challenge is therefore that the new, supported scenario-based approach has to be more reliable, time-saving and gratifying to the designers than the currently used design approach. A research question can be formulated to capture this underlying requirement: how can a framework for scenario use be created that makes design practice more effective and efficient?

This paper presents our approach to answering the research question. Scenario uses within design activities have been studied, which resulted in a generic classification of scenario types [4]. Furthermore, collaborations with product designers are a significant part of this research, to learn about practical design challenges that can be addressed by using scenarios. Based on our results so far, we propose a scenario generation support tool that acts as a framework for using and building scenarios. A support tool is flexible enough for differing design practices, meaning that designers are free to choose whether to use it or not. On the other hand, by utilizing such a tool, a better practice can be shaped and consistently sustained. The support tool is expected to help product designers structure their design knowledge, confirm their rationales and communicate their design ideas.

2 APPROACH
This research aims at supporting design practice by means of a scenario generation support tool. A study of existing frameworks and tools for applying scenarios has revealed that most of them focus only on specific areas in the design process, such as requirements capture and verification, e.g. [5, 6]. Another framework, the Design Information Framework (DIF) [7], addresses scenario use in a more holistic approach. However, it is also exhaustive and might not be immediately suitable for the differing practices in small to medium-sized design companies. The findings from this study motivated us to conduct a broader study on scenario uses in design-related domains. An overview of scenario uses in the form of a scenario classification has been developed to summarize the results of this study [4]. The scenario classification has been further used as a common reference with designers to discuss their design practice and identify challenges where scenarios can potentially bring in solutions.

In line with the intended practical value of this research, two workshops have been conducted with designers at a Dutch medium-sized design company. From here on, it will be referred to as 'the company' and the designers working there as 'the designers'. The company specializes in products for human care, medical cure and user comfort. Their approach focuses on users, since user acceptance is essential for these types of products. The designers already apply scenarios to a certain extent in their design process. Against this favourable background, the workshops have evoked the designers' valuable opinions about their motivations for using scenarios and the challenges they face in their current scenario practice. These findings cannot be generalized to all design practices in other organizations. Therefore, additional contacts with other design companies are currently planned to verify whether the company's practice is representative of other design practices.
Nevertheless, preliminary requirements for the support tool can already be drawn and will be used to direct future workshop sessions. Based on the findings so far, we can hypothesize that sustaining scenario use throughout a design project is more useful than using scenarios sporadically, only when needed. To realize sustainable scenario use in a design practice, a good foundation in identifying, creating and selecting the scenarios is needed. Therefore, we propose a support tool to guide scenario generation from empirical design information. To make sure that the support tool will be usable in practice, designers are involved actively throughout this research as the potential end-users of the tool.

3 RESULTS
Each of the two workshops conducted at the company was attended by a moderator and two designers. The participating designers have all had some experience using scenarios. They were willing to discuss their current practice and the limitations they encounter in applying a scenario-based approach. Their motivation comes down to the curiosity to know SBPD beyond their own practice and to possibly improve it. Part of the workshop objectives was to gain insight into their design practice and to discuss forms of support that are potentially applicable. Additionally, we also aimed to verify the scenario use classification from the designers' practical point of view. To create a concrete discussion during the workshops, a fictional case study of designing a bicycle luggage carrier was used to represent a design project.

The following subsection elaborates our workshop findings at the company: the designers' reasons for using scenarios, how they currently conduct their scenario-based approach and the challenges inherent in their practice. Based on these, a set of requirements is formulated and functionality for a scenario generation support tool is proposed.

3.1 Scenarios in Design Practice
The conducted workshops have indicated the loose ends of the scenario-based practice particular to the company. In general, the motivation for using scenarios at the company is to maintain information about users and the product use situations in its life cycle. For the company, the complete product life cycle may include, but is not limited to, production, delivery, shopping, usage and disposal or recycling. Each product life stage is taken into account during the design process. Taking inspiration from empirical data, the designers focus on extreme and critical situations which may present risks throughout the product life cycle. The largest and most important part of the critical situations concerns the use of the product. Henceforth, this paper will only refer to 'critical use situations' as situations that potentially lead to failing outcomes during product use.

Within the company, time and resource pressure is inherent in any project. Consequently, the critical use situations are expressed only in brief scenarios and assessed using an objective scale measure. These brief scenarios represent fragments of complete and coherent scenarios. Design activities are aimed at actualizing the product's performance measures in all critical use situations. Therefore, the list of critical use situations is heavily used by the designers to verify their design (or design changes) in quick iterations.

As a result of the workshops, three main problem areas within the company's practice have been identified. Additionally, the scenario use classification has been verified, in that the designers were able to relate their own scenario practice to the classification. The following subsections elaborate the identified problem areas in more detail.


Documenting design knowledge
The current practice: Before a project is started, the designers build intensive contact with their clients to get the business requirements right. Once the project is defined, and throughout the process, the designers compile their design information from contacts with potential end-users, observations of competitor products and close investigation of established standards (e.g. safety or ergonomics). Especially at the beginning of the design process, the design information tends to explode, because the significance of the information pieces cannot yet be determined. As a result, the designers take in all the information so that they do not miss anything that may contribute to their decisions later on.

The challenges: Within the development of a complex product, this early phase poses challenges in selecting beforehand the information relevant to the design case. Considering the amount of information, a lot of effort could be saved if the most relevant information were identified early and gathered first. Furthermore, the task of documenting all the gathered information can take enormous effort and time. While it becomes more streamlined later on, the beginning of a process is usually associated with ad-hoc documenting activities. Designers within a team would therefore benefit from a more structured manner of collaboratively documenting their findings; this can be regarded as an investment in future easy access to their design knowledge.

Identifying and prioritizing critical use situations
The current practice: Based on the information registered in the previous step, the designers pay special attention to critical use situations that may lead to failures during product use. While mostly based on observations and interviews with potential end-users, the list of critical use situations can also receive contributions from the clients, from specific requirements in established standards (e.g. ISO, NEN, Arbowet) and possibly from the designers' own imagination of plausible failures during product use. The last type of contribution, however, still needs to be verified with end-users and other stakeholders. As mentioned earlier, the designers express these situations in brief scenarios on top of a framework for risk analysis. The framework provides a structured approach to assess the magnitude of each critical use situation based on its severity and frequency. Subsequently, it also helps the designers to prioritize the most critical aspects to address with their design.

The challenges: The framework used at the company is a formal risk analysis tool. It does deliver objective analysis of the use situations of a product. However, the formality of the tool could potentially be a hindrance to designers' creativity. While the analysis is supposed to be performed up front, what often happens is that the list of critical use situations keeps growing during and after design activities. Unfortunately, risk analysis is an activity very different from the creative design process. The designers prefer to postpone this activity until they have time apart from designing, with the result that the analysis and documentation often lag behind the execution. Furthermore, the current way of registering critical use situations sacrifices coherence for time-efficiency, which could present a risk. When some elements of the critical use situations are left out, to be assumed, the assessment of the risk severity could become less reliable.
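The severity-and-frequency assessment described above lends itself to a small worked example. The following is a minimal sketch of that prioritisation logic; the numeric scales and the example situations are invented for illustration and are not taken from the company's risk analysis tool.

```python
# Hypothetical sketch: each brief scenario (critical use situation) is
# scored on severity and frequency, and their product is used as the
# magnitude that ranks which aspects to address first.

critical_situations = [
    # (brief scenario, severity 1-5, frequency 1-5)
    ("carrier tips over when loaded on one side", 4, 3),
    ("strap catches in the rear wheel",           5, 2),
    ("load blocks the rear light",                2, 4),
]

# Magnitude = severity x frequency; the largest magnitudes are the most
# critical aspects to address with the design.
for situation, severity, frequency in sorted(
    critical_situations, key=lambda s: s[1] * s[2], reverse=True
):
    print(f"{severity * frequency:>2}  {situation}")
```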
Quick evaluation of design concepts
The current practice: Idea generation in design is intuitive and rarely performed in a structured manner. Nevertheless, rationality should not be compromised: any design concept must lead to at least sufficient performance in all use situations. Within the company, the designers use the list of (critical) use situations to quickly assess their design ideas in order not to overlook any use aspect. They also take the initiative to perform the quick evaluation together with their peers to retain objectivity.

The challenges: When designing a complex product, the list of critical use situations can grow considerably. Running through the list for evaluation then becomes extremely challenging due to the many interconnections among the different situations. For example, a modification that improves the performance in use situation A might lead to worse performance in use situation B. Designers cannot simultaneously keep all the present risks in mind while they are designing. Furthermore, there can be uncertainty about which use aspects will be affected when a change is made. Despite being reliable, evaluating a concept by running it through the critical use situations can be mentally exhausting.

3.2 Scenario Generation Support Tool
Scenario building is an extension of design inquiries, which focuses on making further use of the inquiry results for communication and evaluation purposes. To make sure that designers have a good foundation for their further use of scenarios, the proposed support needs to guide: (1) the initial documentation of necessary scenario elements and the creation of scenario storylines, (2) the identification and selection of scenarios for specific purposes, (3) the evaluation of design concepts using scenarios and (4) the communication of scenarios to other stakeholders. The following subsections propose the functionality of the scenario generation support tool. To illustrate the tool, several scenarios are presented to depict a comparison between the commonly occurring design practice and the plausible future practice using the tool. The scenarios are based on a fictive design case of (re)designing a bicycle luggage transporter.

Repository of design information and generated scenarios
A design team prepares a project by researching existing products or competitors, technologies that may be useful, and most importantly the potential users of their products. Even a simple inquiry can gather a lot of information, which can be a challenge to organize. With several designers in a team, an extra challenge is to make sure that every team member has the same level of knowledge to move forward as a team. To be able to create useful scenarios, designers first of all need to gather scenario elements. The elements of a scenario comprise, but are not limited to, a user (and his/her characteristics), tools or products, a goal concerning the product use, a physical setting (where the scenario takes place), a non-physical setting (e.g. time pressure, nervousness), user actions and possible events that could happen. Leaving out too much detail of the scenario elements could present a risk of misunderstanding. Therefore, a proposed functionality of the tool is to present the types of information explicitly, so that designers can easily notice which information is still missing. These pieces of information can then conveniently be used as building blocks for scenarios, which are more meaningful and memorable.

Current practice scenarios:
Please imagine the following situation… Alice and Bob are designers at the company.
After a kick-off meeting with a client yesterday, their manager assigns them tasks within the project of designing a new breed of bicycle luggage transporter. Alice is going to visit an exhibition of the latest bicycle technology (which happens to take place at a convenient time) to find out the market situation, while Bob is going to observe and interview buyers at a reputable bicycle store in town, hoping to find some users with suitable profiles to participate later on in their research…

"How can we share our findings quickly?" After a long day, Alice and Bob come back with a lot of information; they have brought along notes, photos, brochures, etc. Both Alice and Bob wonder how they can organize this information neatly and share it quickly with their team members. They try to ask the team for a quick meeting, but it is difficult to get everyone together, especially at this moment when everyone is busy doing field studies. Preparing a document could be a good way to share the information with the team, but it takes time, especially with the different media of information that have been collected. Alice and Bob just want to "drop" their findings into a common place that everyone can refer to. This way, everyone can access the information when he or she has time.

Future practice scenarios:
Imagine a different situation… Alice returns from her field visit to a bicycle fair in Amsterdam. She is a bit exhausted after the trip and making contacts with bike manufacturers at the fair. She is satisfied, though, with what she has learned of the latest bicycle-related designs and technologies. During the fair, she had the chance to survey the current state of bicycle luggage transporters. She took many photos that highlight their main features so she could show and discuss them with the team. She also took some brochures to get references/contacts of the companies…

Alice shares her reviews of the latest bike products
Now that Alice is back in the office, she wants to quickly store all the information she has just gathered and call it a day. Alice uploads the photos she shot to the company's server, where everyone with a login can access them. But she is not done yet; she wants to give her reviews and opinions while they are still fresh in her memory. She opens her internet browser and runs an application called "Scenario Central". She finds the photos she just uploaded, gives them short descriptions and annotates some parts of the photos (Figure 1).

Bob records user profiles and disagrees with Alice
Bob comes back from surveying the bicycle stores a bit after Alice has left for home. He checks the application "Scenario Central" to get a glimpse of what Alice has put there. Aha! Bob reads Alice's positive review of the 'panniers' product that she found at the fair. Coincidentally, today Bob met a user who has been using the product for some time and is not satisfied with it. Bob immediately puts his findings in a reply to Alice's review (Figure 2). He then continues by registering the information about the users he met during the observation. In the same work area, Bob adds three user profiles of people he has had interesting conversations with. Each of them has experience with, and strong opinions about, the existing products. "These kinds of users will be a valuable information source in this project", Bob thinks. Bob connects each user profile with the product(s) they have used so far, along with the comments these users have made (which are in Bob's notes). Bob knows he still needs to give a more thorough and structured review of these users and products, but for now this is sufficient, just so that he remembers the key details.

Figure 1: An overview of existing products with designers’ comments and annotations.


Figure 2: An overview of users and the products they use.

Bob composes scenarios of user Jane
The following day, Bob has the chance to interview user Jane. He specifically notices the diverse goals and situations Jane has concerning transporting "something" on her bicycle. Bob asks Jane to describe her normal day involving her bike and bike accessories. Additionally, he also prompts Jane with some events (e.g. a reckless driver) and asks for Jane's reactions in the case of such events.

Now Bob understands Jane's situations better. He creates two scenarios based on his interview with Jane. Bob also indicates Jane's emotions as she performs the actions in the scenarios. Installing the toddler seat is no fun for Jane, hence the grim face next to it (Figure 3). The existing scenario elements can be used later on as inspiration for other scenarios. For example, the team might imagine a scenario of Jane in a different setting, e.g. a bumpy road. Or, for the goal 'transport groceries', how do users Melissa and John achieve it using their products?

Figure 3: The scenario building toolbox consists of selectable elements, which make it easier to compile scenarios.
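The paper does not commit the building-block idea to any implementation. As a minimal sketch, the scenario elements listed above could be modelled as plain records that are reused across scenarios; every class and field name below is hypothetical.

```python
# Hypothetical record model mirroring the scenario elements named in the
# text: user, product, goal, physical and non-physical setting, actions,
# events, and the emotions indicated per action.

from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    characteristics: list[str] = field(default_factory=list)

@dataclass
class Scenario:
    user: User
    product: str                 # tool or product under consideration
    goal: str                    # what the user wants to achieve
    physical_setting: str        # where the scenario takes place
    non_physical_setting: str    # e.g. time pressure, nervousness
    actions: list[str] = field(default_factory=list)
    events: list[str] = field(default_factory=list)
    emotions: dict = field(default_factory=dict)  # action -> indicated emotion

jane = User("Jane", ["cycles daily", "transports a toddler and groceries"])
scenario = Scenario(
    user=jane,
    product="toddler seat",
    goal="bring toddler to day care",
    physical_setting="suburban bicycle path",
    non_physical_setting="morning time pressure",
    actions=["install toddler seat", "ride to day care"],
    events=["reckless driver"],
    emotions={"install toddler seat": "grim"},
)
```

Because the elements are separate records, a different setting or another user can be swapped in to generate new scenarios from the same building blocks, as the text suggests.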


Figure 4: An overview of users, their goals and the products' performance in each goal.

Graspable overview of scenarios
Bits of information become more meaningful and memorable when composed into scenarios, turning them into accessible knowledge. Nevertheless, with the diverse users and use aspects, the set of possible scenarios could grow more and more extensive throughout the design process. In the course of design, a scenario could be an important decision factor, though its existence might not be known to the decision-makers. Therefore, a clear overview of all available information is crucial, so that designers at least know where to look for more detailed information. Within a scenario-based approach, a specific form of support can be given by providing the design team with an overview of scenarios and the related scenario elements.

Current practice scenarios:
Please imagine the following situation… The "bike luggage transporter" design team is meeting for the first time after the kick-off meeting. During this period, everyone has been busy doing research (desk research or field studies). Therefore, this meeting aims to be a forum where everyone can share what he or she has found during the research. And of course, if there is time left, the team can discuss what to do now, how to move on, etc.

Again, an unproductive meeting
Before the meeting, designers (individually or in groups) prepare presentations to describe their findings within a 10-15 minute time-frame. Nowadays this is most often done using PowerPoint presentations, which are quite tedious to manage afterwards. Quite often, time runs out before any meaningful discussion gets to the table. When this happens, Mike the project manager (as a representative and member of the design team) and other management will have another meeting, and later on decide what to do next…

Future practice scenarios:
Imagine a different situation… The "bike luggage transporter" design team is meeting for the first time after the kick-off meeting. During this period, everyone has been busy doing research (desk research or field studies), and now the "Scenario Central" application shows a good overview of the users and their use scenarios (Figure 4). To get a rough idea about the performance of the registered products, each product is scored on how good it is at fulfilling each particular goal. For example, from Figure 4 it can be seen that a bungee cord is not good for transporting groceries (low score). Among these, some user-product relationships have been extended into a large set of possible scenarios. Mike the project manager has asked everyone to get acquainted with all the information posted on "Scenario Central". The meeting will discuss what to do next as a team, instead of explaining the design information (which is already registered in "Scenario Central") to one another.

Well-informed designers make a productive meeting
During the meeting, designers are 'empowered' by the well-organized information, as they can easily refer to specific scenarios to back up their opinions. Mike suggests a discussion on the user goal "transporting groceries" because it looks promising as a tentative direction. The "Scenario Central" application has a function to filter scenarios based on a specific element. To aid the discussion, Mike uses the filter function to show only information relevant to the goal "transporting groceries" (Figure 5). The designers see an overview of the problems with current products when users "transport groceries", and this helps them to focus.

Figure 5: Filter on a specific goal, showing the users that share this goal and their use scenarios.
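The filter behaviour of Figure 5 can be pictured as follows. This is a minimal sketch with the repository reduced to plain dictionaries so that the example stands alone; the function name and record shape are hypothetical, not taken from the proposed tool.

```python
# Hypothetical sketch: narrowing the scenario repository down to one
# element (here the user goal), as the filter function in Figure 5 does.

def filter_scenarios(repository, *, goal):
    """Keep only the scenarios whose goal matches the selected element."""
    return [s for s in repository if s["goal"] == goal]

repository = [
    {"user": "Jane",    "product": "pannier",     "goal": "transport groceries"},
    {"user": "Melissa", "product": "bungee cord", "goal": "transport groceries"},
    {"user": "John",    "product": "crate",       "goal": "carry sports gear"},
]

for s in filter_scenarios(repository, goal="transport groceries"):
    print(s["user"], "-", s["product"])
```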


Scenarios as a test bed
An early product design, such as a rough idea, can already be tested for its validity using a good set of use scenarios. Realistic use scenarios force designers to reflect on whether a small attribute or component change could influence a use aspect. It is not easy to detect the chain of influence between the many elements in a design process. If designers can be wisely informed about the other aspects and situations that may be affected by such modifications, they also receive a concrete reference for which scenarios are suitable for testing purposes. The tool tries to realize this by suggesting a list of scenarios that may be affected by a modification elsewhere.

Current practice scenario:
Please imagine the following situation… The design team is split into two smaller teams to brainstorm ideas/concepts. Bob and Charlie are in the same team. Charlie knows only a little about the users, from the presentation Bob gave earlier. He throws in quite a few ideas for the concept they are working on together. However, Bob has to reject some of Charlie's ideas because they do not fit the users' life situations. After some arguments and slight frustration on Bob's side, Bob and Charlie eventually come up with a rough concept which seems suitable for the users.

Be careful with what you change
A few days later, Bob and Charlie continue to refine the concept. They need to change and rearrange some components to make the size smaller. After squeezing in some components and removing parts that they think are not so necessary, they do not realize that their concept design is becoming less secure. The design team will hopefully find this out during the final test, much later.

Future practice scenario:
Imagine a different situation…

Evaluate through the eyes of the users
During a brainstorm session, the design team quickly comes up with many ideas. A rough concept (#1) quickly emerges: some sketches are drawn, specific features are proposed, and a to-do list is created (i.e. further studies needed to verify that the proposed concept is feasible) (Figure 6). Of course the design team does not forget to imagine how the users would use #1; it is user-centered design after all. The team chooses to try #1 in user Jane's life situation to see how it would perform (i.e. how pleased would Jane be using it?) (Figure 7).

Figure 6: A new concept is reviewed and documented in a collaborative environment.

Figure 7: A new concept is evaluated in the hypothetical uses of the users.

A few days later, Bob and Charlie have been working together to refine #1. Although it seemed near perfect in the beginning, they still change many parts of concept #1. Luckily the "Scenario Central" application helps them to keep track of what they are doing; it points them to other related parts of #1 and to scenarios that might need to be adjusted (Figure 8).

Figure 8: Scenarios reflect the consequences of changes/modifications to a concept.
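The suggestion logic behind Figure 8 can be sketched under the assumption that the tool keeps a mapping from concept components to the scenarios that exercise them; all names below are illustrative, not part of the proposed tool.

```python
# Hypothetical sketch: when a component of concept #1 changes, look up
# which scenarios exercise it, so they can be re-checked.

affected_by = {
    # component of concept #1 -> scenarios in which it plays a role
    "mounting bracket": ["Jane: transport groceries",
                         "Jane: bring toddler to day care"],
    "locking clip":     ["Jane: transport groceries"],
}

def scenarios_to_recheck(changed_components):
    """Collect every scenario touched by any changed component."""
    return {s for c in changed_components for s in affected_by.get(c, [])}

print(scenarios_to_recheck(["locking clip"]))
# -> {'Jane: transport groceries'}
```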


4 DISCUSSION
The results described in the previous section are drawn from a workshop series at one Dutch design company. The identified problem areas, despite being familiar, could be idiosyncratic to this one particular company. Furthermore, the functionality proposed in this paper also relates strongly to the experience and needs within this company. We are aware that the contribution will not be scalable enough for design science without further verification against other design practices. Therefore, in parallel with the development and refinement of the support tool concept, we are surveying designers in different companies on their familiarity with the problem areas.

In the previous section, we detailed the problem areas within the company's design practice. While working on the verification of these problem areas, we assume that they are reliable enough as a foundation for our further proposal. As an answer to the problem areas, functionality for the scenario generation support tool has been proposed (see Table 1). We are probing design practices to assess the optimal way to realize the functionality and to improve designers' acceptance of the tool. This research aims to provide a useful and easy-to-use tool to support scenario-based practice, and therefore a firm connection with current design practice is maintained. The proposed functionality is set up in such a way that it is not rigid and can therefore be extended to address requirements that surface later on, e.g. in the follow-up workshops.

In our effort to develop a practical framework for SBPD, we have adopted scenarios in our own approach. Aided by the flexibility of scenarios, we are able to communicate the proposed functionality to our stakeholders without yet committing to any particular form. Reflecting on the fact that there are loose ends in our proposal, we have benefited greatly from using scenarios to acquire early feedback. In the future, we will continue using scenarios to involve designers in determining the most feasible form of the tool's functionality.

5 CONCLUSION AND FUTURE WORK
As the result of a workshop series at a design company, we have observed a design practice that utilizes scenarios. Our findings indicate that although scenario-based product design seems ideal in the literature, there are still loose ends in practice. We have identified three problem areas within the scenario approach of the observed company. Although this finding is yet to be verified for its scalability, we have used it to formulate requirements for support in applying a scenario-based approach. A set of functionality for a scenario generation support tool has been proposed to answer these requirements.

Our future work includes verification of both the identified problem areas and the applicability of the proposed functionality in design practice in general. To find out whether the challenges are also experienced by designers in other organizations, a questionnaire to probe design practice is being circulated. Furthermore, more contacts with diverse designers are planned to determine the most feasible form of the functionality. As we aim to develop a useful support tool, designers will be actively involved, to allow the tool to blend with their preferred future design practice. Eventually, a software prototype will be developed to demonstrate and evaluate the scenario generation support tool.

6 ACKNOWLEDGEMENT
We would like to extend our sincere gratitude to the designers who participated in the workshops, as well as to our colleagues who generously spent their time helping with the workshop preparation.

Requirement: Gather, register and organize relevant design information efficiently (going broad, taking in information).
Functionality: A "template" for documenting design information based on scenario elements, and a "toolbox" to create scenarios using the information as building blocks.

Requirement: Be in the know of what information is available, especially for choosing what is important/relevant for a specific purpose (from the extensive information, how to narrow it down to fulfil a goal).
Functionality: A visualized summary of all scenarios and the possibility to relate, trace and filter scenarios based on their elements.

Requirement: "Quick-and-dirty" evaluation of concepts/ideas (reliable without being a hindrance to designers' creativity).
Functionality: The tool as a "wise wizard" that suggests to designers the scenarios which are suitable for testing purposes. Rather than running through a long list of test cases, scenarios can be easily and quickly recalled to memory, which is less exhausting.

Requirement: Communicating scenarios to other stakeholders for specific purposes (e.g. testing functionality/user acceptance/safety, selling/marketing, convincing clients/management, brainstorming/idea generation).
Functionality: The tool presents the scenarios in narrative form, which is the basic form of other types of scenarios. The narrative form offers scenarios the flexibility to be extended to different media (e.g. storyboard or role play).
Table 1: A summary of extracted requirements and proposed functionality.

7 REFERENCES
[1] Carroll, J.M., 2000, Making Use: Scenario-Based Design of Human-Computer Interactions, London: MIT Press.
[2] Moggridge, B., 1993, Design by story-telling, Applied Ergonomics, 24/1:15-18.
[3] Suri, J.F., Marsh, M., 2000, Scenario building as an ergonomics method in consumer product design, Applied Ergonomics, 31/2:151-157.
[4] Anggreeni, I., Van der Voort, M.C., 2008, Classifying Scenarios in a Product Design Process: a study towards semi-automated scenario generation, Proceedings of CIRP Design Conference 2008: Design Synthesis, University of Twente, Enschede, The Netherlands, April 7-9, 2008.
[5] Rolland, C., Souveyet, C., Achour, C.B., 1998, Guiding goal modelling using scenarios, IEEE Transactions on Software Engineering, Special Issue on Scenario Management, 24/12:1055-1071.
[6] Maiden, N., 1998, CREWS-SAVRE: Scenarios for Acquiring and Validating Requirements, Automated Software Engineering, 5/4:419-446.
[7] Lim, Y., Sato, K., 2006, Describing Multiple Aspects of Use Situation: Applications of Design Information Framework to Scenario Development, Design Studies, 27/1:57-76.

The Procedure Usability Game: A Participatory Game for the Development of Complex Medical Procedures & Products J.A. Garde, M.C. van der Voort Department of Industrial Design Engineering, University of Twente, Drienerlolaan 5, 7522NB Enschede, The Netherlands [email protected]

Abstract
When product designers develop advanced medical appliances, they have to deal with medical treatment procedures. If treatment procedures are ignored by designers, the final products might conflict with hospital practice. Therefore, the development of procedures and product requirements should take place before, or in parallel with, appliance design. However, this development can only be realized when access to the knowledge of the users of the appliances is available. This paper discusses the application of a participatory design game to facilitate the participation of users in the development of a treatment procedure, including its appliances. The game has proven its usefulness in a case study.
Keywords: Participatory Design, Usability, Design Game, Medical Appliance

1 INTRODUCTION: USE SITUATIONS OF MEDICAL APPLIANCES
Use situations of advanced medical appliances have a complexity that challenges product designers. Five aspects contribute to this complexity; they are described below.
1. Medical appliances are technically complex products; it is therefore hard to design a simply usable interface that gives access to all functionality.
2. The appliances are integral parts of established treatment procedures that may be unfamiliar to designers.
3. Treatment procedures are complex; they usually include several people and often several appliances.
4. Medical appliances are often used by several different hospital departments [1]; users with differing backgrounds must be able to work with the appliances equally well.
5. There are not only many users but also many stakeholders for medical appliances that must be considered. These stakeholders include the hospital managers, who are responsible for the purchase of an appliance, and also the patient [1].
The design of a medical appliance, of its interface and of its compatibility always has an influence on the treatment procedures that the appliance will be used in. This means that the designer is already, possibly unconsciously, shaping the future treatment procedure when he or she is designing a medical appliance. In treatment procedures, people and appliance use must be well coordinated to prevent faults in the medical treatment. Since human wellbeing is at stake and doctors' time is costly, anticipating the consequences of design decisions is essential when designing medical appliances.


Therefore, the development of the treatment procedure that complements the appliance should take place before or in parallel with the development of the appliance. Such a design process should prevent the development of appliances that entail complicated procedures. However, product designers are often unfamiliar with treatment procedures. Therefore, the experience of actual users is crucial for the design of feasible procedures and the appliances they include.

2 CHALLENGES FOR PARTICIPATORY DESIGN
How can the experience and knowledge of specialist users be accessed? Observing specialists during their work will evoke questions about their reasoning and other invisible "know-how". Interviews or focus group techniques (group discussions) rely on verbal communication without visual aids. Therefore, they are reliant on the accurate interpretation of each other's words. Communication becomes complicated when discussing treatment procedures that include parallel actions and several actors. Additionally, observations, interviews and focus group techniques usually only provide meaningful information about the current situation. A transformation of the information into a new treatment procedure still needs to be made.

It is complicated to involve users as co-designers in the design process, especially at the concept generation stage, when there is not yet a product design concept available to reflect on. Therefore, in practice, specialists are brought into the design process at a point when the initial design choices have already been made, without considering the effects on the treatment procedure. The authors believe that the participation of specialist users in medical appliance design must start at an earlier stage. Therefore, the application of a participatory design method that can deal with the challenges of the development of complex medical appliances is proposed. The focus will be on the development of the treatment procedure instead of just the appliance itself. To deal with the problems discussed above, a participatory design approach is needed that has six qualities. It should:
1. enable the users to invent and design a usable new treatment procedure for a product that has not yet been developed,
2. include all users at the same time, so that it can be discussed immediately what a change in one user's domain of responsibility means for the domains of others,
3. give a clear overview of a lengthy and complex treatment procedure and of the consequences that changes to this procedure have,
4. trigger the participants to empathise with the new treatment procedure situation,
5. include not only the appliance under consideration but also the other appliances that are involved in the procedure,
6. be time-efficient in view of the limited time medical specialists have available.

3 THE PARTICIPATORY DESIGN GAME APPROACH
What kind of participatory approach should be used to develop new treatment procedures and the appliances they include? The authors believe that a low-tech participatory game is fit for the task. It can stimulate users to do concept development by themselves and thereby bring in their specialist expertise. The open character of a game gives the participants room for independent choices, and they can visualize the consequences of their choices [2, 3]. A game helps to achieve commitment. It is simple, and is experienced by participants as more exciting and appealing than other techniques such as focus group discussions. It creates an informal atmosphere which is productive for creative work [4]. In addition, low-tech games can be developed with little effort and at low cost, and they can show results within a short period of time. In summary, a low-tech game is a cost-effective way to evoke, structure and discuss ideas.

Looking at the existing techniques, it was found that there is no low-tech participatory game approach available that has all six qualities. Therefore, a new game was developed, called the Procedure Usability Game (PUG). The PUG is a novel combination of customized participatory techniques. Existing techniques were selected and adapted to complement one another and thereby meet the required qualities. The viability of the PUG was tested by applying the game to an authentic design problem: an actual design case within a medical appliance company. For reasons of intellectual property protection, details of the design case itself cannot be provided in this paper. However, to depict the PUG clearly, we will replace the real case with an illustration case of a design problem that has the same characteristics. This example will be presented in boxes. The product under consideration has a complexity that can be compared to a computer tomography scanner or to a laryngoscope system. It exists in the state of a "next generation" product system idea: new technical solutions, and thereby new functions, should be added to an existing product to improve the patient treatment. The new treatment that the product should deliver had been defined. However, the product requirements that would result from implementing the product in a treatment procedure in daily practice were unknown.

Illustration case: Operation room radiotherapy appliance
The topic of the case is the design of a new-generation high-tech operation room that includes an appliance for radiotherapy. The operation room set-up including the radiotherapy appliance can be used to perform radiotherapy treatment while there is still an opening in the body from surgery and the target area can be reached easily, thereby minimizing damage to surrounding tissue. Traditionally, radiotherapy treatment is a separate procedure, given after surgery in a special treatment room. A radiotherapy appliance is a complex device. A smooth implementation of a radiotherapy appliance into a surgical procedure most likely requires a redesign of the appliance itself, as it has been designed for a different use situation. It probably needs new functionality to be able to treat an area in an open body. In addition, the treatment procedure, and likely the operation room environment, need to be adapted accordingly. When starting the re-design project, a first step should be to develop a feasible treatment procedure.

To develop a feasible treatment procedure that includes the product idea, to develop this idea further and to distil product requirements, potential users must be involved: only they own the specialist knowledge about the medical procedures. The PUG was applied to stimulate the specialist users to design future treatment procedures and to develop a clearer product concept by identifying the product requirements and the possible bottlenecks that result from these procedures.

4 THE PROCEDURE USABILITY GAME

4.1 Game set-up
The Procedure Usability Game (PUG) is a low-tech design game. It is a combination of a task flow analysis and a pivot game technique. The task flow analysis is meant to chronologically organize the task flows of the different people performing a task. A pivot game, on the other hand, includes a scale model of an environment with persons and appliances that is used to play out tasks. The two components not only complement each other, but also serve as mutual verification tools for the generated procedure. Both components have their own objective. The task flow analysis helps to capture the procedure in a structured and detailed way. It focuses on chronology, time management, staff deployment and information flow. The pivot game component provides a hands-on experience that clarifies logistical problems and helps participants to envisage the procedure in a realistic hospital setting. By acting out the defined task flow by means of pivots, the treatment developed can be assessed, optimized and verified.


Task flow analysis component
The task flow analysis within the PUG is inspired by the CUTA approach [5] and the CARD technique [6]. It helps the participants to sort out which tasks they wish to achieve using the new medical appliance and in what chronological order these tasks should be executed. A simple card layout was used, based on the activity-oriented CUTA cards, which contain fields to fill in an activity, the person that performs the activity and a duration. However, two fields were added: one regarding the information the user needs to fulfil the task described on the card, and a second to indicate whether a task is performed alone or in cooperation with other actors. These additional fields enabled us to record the required information flow and cooperation between users. The developed task flow card scheme was expected to provide a good overview of the procedure and to be easy and efficient to use. It facilitates the recording of the developed procedure by keeping previous steps continuously visible for all participants. Additionally, it supports an iterative development process, since rearrangement of the task flow is easily manageable.

Pivot game component
The task flow component of the game does not, however, take care of logistics and might, due to its high level of abstractness, not stimulate the participants to consider all aspects of the treatment procedure. Therefore, the pivot game component has been added to the PUG. It helps participants to envisage the procedure in a realistic hospital setting and to clarify the logistics of the new treatment procedure. A pivot is a "physical, symbolic representation that allows a person to move back and forth between a Figured (imagined) world and the real world" [7]. It has been stated in constructionist learning theory that learning happens most effectively when people are actively creating things in the real world [8]. Designing a new procedure is a process of applying changes and learning what the effects are. Therefore, building the treatment procedure with pivot elements is most likely to support this process. A pivot game also has the capability to bring together people from different backgrounds. The game pieces work as "boundary objects" [4], because the physical game elements make it easy to exchange information [7] and oversee the situation. Many pivot techniques (for example [7]) include only a limited set of rules and are therefore very open. However, sometimes the principle of structured play is used in a pivot game to give it more direction. This means that the interaction the participants must play out is prescribed to some extent. The PUG employs structured play by providing a general treatment scenario, consisting of a fictitious patient record and treatment advice, at the beginning of the game session.

Why low-tech?
For reasons of both effectiveness and efficiency, it was chosen to implement the PUG as a low-tech game. The PUG could have been implemented digitally, as a computer or virtual game, but this would have taken away the hands-on experience. Furthermore, making adaptations to the procedure in a digital setting would have required a certain level of computer skills from the participants, and therefore possibly special training. This would have been time-consuming and might have distracted from work on the actual procedure problem.


4.2 Game participants To develop a feasible treatment procedure, the PUG should ideally be played by the same team of specialists that currently handle the medical treatment procedures in their hospitals.

In the example case the participating team would consist of a surgeon, a surgical nurse and an anaesthetist from the former surgical procedure as well as a radiotherapist, a technician and a clinical physicist from the radiotherapy department. The patient would not be included as a participant since within the considered treatment he or she will be sedated most of the time.

The game is played by one hospital team at a time. As Törpel [9] points out, the relations of power within the product usage field should be taken into account when developing participatory design games. This means that in our game the higher-ranking surgeon should probably not play together with his support staff, since there would be a chance that the doctor might enforce the realization of his own ideas above those of the other staff. On the other hand, including participants from different backgrounds can be beneficial, because they have to reflect on each other's views and are thereby inspired to think beyond their own boundaries [7]. Furthermore, the presence of a game moderator is meant to prevent such conflicts. It was chosen to invite all main users to one joint session to combine the different insights and benefit from the cross-fertilization effect. Within the PUG, the moderator function is shared; the moderating team consists of a game facilitator, an expert support worker and an observer. The role of the expert support worker is to ensure that the company designers can obtain as much information as possible from the game, by asking the participants to clarify or motivate their decisions. This is a task that requires detailed medical knowledge. In our case, the role was therefore fulfilled by an employee of the company with a relevant medical background.

4.3 Game elements
Game material for the PUG was specially designed. The design case was analyzed in co-operation with the medical appliance company that had provided the case. Based on this, the game elements were chosen. For every game element, a degree of freedom [9] was determined. Degrees of freedom were, for example, whether the time frame of the introduction of the new product onto the market should be limited, or how explicit the description of the problem case (patient data information) should be. Determination of the degree of freedom consists of a trade-off between minimizing the risk of influencing the game participants (and thereby the treatment procedure developed) and the "usability" of the game itself, since the game could be impaired by too open and complex a structure. To prevent the participants from not taking the game seriously, an abstract design of the game pieces was selected. All game material was designed for simplicity, while at the same time making the game look well designed and appealing.

Game board
A central element in the game is the game board, which forms the environment for the pivot playing. The layout of the game board is constructed by the users themselves. Since a new treatment procedure must be developed, the participants are given the assignment to "build" the ideal facilities for this procedure by placing "room cards" on a hospital layout game board and assigning room characteristics to these cards. This technique helps the participants to go beyond their own hospital context. In the example case, the participants might build an operation room, a recovery room and additional rooms. A new characteristic of the operation room might be wall shielding, since nuclear radiation will be released during the treatment.

Figure 1 shows the game board and some of the gaming material.

Figure 1: The game board.

Task cards
The layout of the cards the players fill in as part of the task flow analysis is based on the activity-oriented CUTA technique, complemented by fields for the required information and the cooperation with other actors. An example of a task card is shown in Figure 2. Cards were specific to the players: for the main players there are three types of personal cards, each with a representational picture of the player's character on it. This made it possible to hand every participant his or her own set of cards, underlining that every participant is a co-owner of the developed procedure. Wild cards were also provided for additional characters the players might want to introduce, such as extra assistants. The PUG cards are colour-coded to indicate categories of activities. A two-colour code indicates whether a task in the procedure represents an addition or a change to the existing procedure. With the help of this code it is easy to see which part of the procedure has been redesigned; these new parts of the procedure are the most important to explore in further product development.

Figure 2: A task card.

Pivots
In between filling in the task cards, the game participants play out the defined treatment procedure on the game board with pivots representing themselves. To find out which (future) appliances users would like to use, they are provided with game pieces representing appliances. There were pieces for existing appliances as well as unassigned game pieces to represent the new appliances that are needed to perform the new procedure. In the case of the radiotherapy appliance in the operation room, the participants would be provided with small representations of radiotherapy appliances, operation tables, anaesthesia trolleys, lights, computers with planning systems and surgical instrument tables.

Product/tool cards
Within medical treatment procedures, appliances cannot be associated with single users, since they are often used by several users simultaneously. Therefore, separate cards are needed to define the task flow of persons and objects. To give participants the opportunity to assign product characteristics to the appliances used, and to document these, product/tool cards are included in the PUG. Game participants are asked to fill in a product/tool card for every appliance or product they would like to use; on these cards, preferred product characteristics can be listed. A product/tool card for the operation table might state, for example, that it must be compatible with the radiation appliance. The card for the radiation appliance might list technical specifications that are prerequisites for radiation therapy inside the body.

At the end of the game, all product/tool cards are integrated within the task flow by placing them next to the task flow and by linking every product/tool card with the task cards of the tasks the product or tool is needed for. A completed task flow scheme is shown in Figure 3.
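The linking step just described is a many-to-many association: one appliance may serve several tasks, and one task may need several appliances. Continuing the earlier illustrative sketch (the appliance characteristics are taken from the example above; the task names are partly invented for illustration):

```python
# Preferred characteristics recorded on each product/tool card.
product_cards = {
    "operation table": ["compatible with the radiation appliance"],
    "radiation appliance": ["suitable for radiation therapy inside the body"],
}

# Each product/tool card is linked to the task cards of the tasks the
# appliance is needed for, mirroring the physical placement next to the flow.
task_links = {
    "operation table": ["position the patient", "accept the treatment plan"],
    "radiation appliance": ["accept the treatment plan", "deliver the radiation dose"],
}
```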


Event cards
After the participants have developed a complete treatment procedure using the task cards, product/tool cards and pivots, "event cards" are introduced. Participants are confronted with five descriptions of events that might conflict with the procedure they have conceptualized. An example event is: "The local database crashes… all prepared patient data is lost."

The participants are asked to discuss the impact of the events on their procedure, pick out the one event with the greatest impact and adapt the procedure to deal with it. The introduction of events forces the participants to reflect once more on the developed procedure and the identified product requirements and to verify their robustness under all circumstances.

Figure 3: A task flow scheme with product/tool cards (white).

4.4 Overall game session set-up
To give the participants an idea of what awaited them, they received a preparation letter in advance of the game session. This letter introduced the general idea for the product improvement and described the fictitious patient case that would form the basis of the game. The participants were also given an overview of the game and its goals, together with several questions about the treatment procedure that they were asked to reflect upon in advance. The game session itself started with a short interview and a discussion with all participants, intended to stimulate team-building and to obtain some general knowledge about the participants' mindset. Next, the facilitator gave a short overview of the game structure.


The detailed explanation of the game was divided into small pieces, so the players could start to engage with the game and would receive the next bit of information just at the moment they were ready for it. The general game structure consisted of alternating the task flow analysis and the pivot game. When the participants were satisfied with the basic procedure and the corresponding product requirements they had developed, events were introduced and, if required, changes to the procedure were made accordingly. After the procedure development, a discussion about the feasibility of the results was initiated. The session ended with a debriefing and a "thank you" to the participants. Several weeks after the session, the participants received the session report for confirmation of the procedure.

5 RESULTS
To date, two PUG sessions have been run. The application of the PUG proved to be efficient: a redesign of the procedure could be made and the required information fully obtained in a three-hour gaming session. Both sessions resulted in a complete overview of a new procedure set-up, the required appliance characteristics, the information flow, and the actor and appliance movements within the hospital.

Before the game, participants were sceptical, since they were not familiar with "serious gaming" techniques. Afterwards, they were surprised at how much coherent information they had been able to generate within a time frame of just three hours. All players were engaged during the game and the participants worked well together. Doctors, technicians and supporting staff alike decided what would be written on their own task cards and took part in the discussions. The task flow set-up was not created in one pass, but was revised several times, prompted by discussions or by the pivot game component revealing that the initially proposed task flow did not work well. The level of detail of the task flow analysis was limited to higher-level tasks such as "accepting the treatment plan"; lower-level tasks such as "pushing the button" were not described. This level of detail was applied by the participants themselves and yielded enough information for them to develop a new treatment procedure and identify appliance requirements. Product/tool cards were filled in with several product requirements during pivot playing.

During the discussion at the end of the game, participants stated, without being asked to comment on it, that the game set-up had been really useful. They said that the game had helped them address all elements of the procedure and the appliances involved, and had prevented them from overlooking the consequences of choices they had made. The procedure design and requirement information was directly accessible after the game, as it was recorded in the task flow scheme and the product/tool cards. The complete sessions were documented by means of observational reports and video recordings to capture the discussions between the participants. The appliance company was satisfied with the quality of the results and the efficiency with which they were obtained. The resulting task flow and utilized appliances were similar in the two sessions. Furthermore, important information about bottlenecks in the procedure was discovered. The most relevant criteria in decision making proved to be the best possible treatment for the patient, the time that the doctor needed for the treatment, and practical logistics.

Having product users participate in the product development by playing the PUG resulted in an effective and efficient design process: the results gave designers detailed insight into the ideal treatment procedure and the corresponding product requirements, all developed in consensus between the expert users.

6 DISCUSSION

6.1 Working of the game
The PUG's complementary set-up worked as intended. It proved to have the six identified qualities that are needed to develop a new treatment procedure.
1. The PUG makes the development of a new, complex treatment procedure possible. It facilitates and structures discussions and makes the involved elements "visible and touchable".
2. All specialist users that work with the appliance were included in one game session and their participation was good. It can therefore be stated that the voice of every relevant specialist user of the hospital was heard and reflected in the developed procedure and the identified product requirements.
3. The task cards and product/tool cards supported the systematic development of the procedure. The task flow card scheme provided a good overview of the procedure and made adjustments to tasks set up in an earlier stage manageable. The pivots and the game board supported the imagination process. The alternative techniques worked as control mechanisms for each other: the game helped the participants to consider all elements of the procedure and prevented them from overlooking the possible consequences of choices they had made.
4. The game set-up worked well in triggering the participants to empathize with the new treatment procedure and product requirements.
5. The set-up included all users, all necessary medical appliances (some of which were new concepts that did not yet exist), pieces of equipment and required rooms.
6. Participants were able to set up a complete, yet new, treatment procedure and identify product requirements within three hours.

The results from two game sessions cannot be seen as scientific proof of the working of the game, or of the use of gaming for the development of sophisticated medical appliances. However, scientific proof is hard to realize in this field: every game session evolves differently, and nobody likes to pay for large numbers of sessions that do not deliver relevant new information for the business case. The value of a design game can therefore only be related to the amount of worthwhile design information obtained by playing it, the effort needed to obtain this, the satisfaction of the company with the results, and the economic success and performance of the product that is developed. As for the amount of information gained, the realised efficiency and the company satisfaction, the PUG achieved a satisfactory score: a large amount of information was gained, a completely new treatment procedure was designed by the users, an overview of the main product requirements was made, and there were new insights into bottlenecks in both the new procedure and the product requirements. With respect to the economic success and performance of the product, no indication can be given yet, as the new product is still under development. However, the company is very satisfied with the results of the first gaming sessions and is planning to continue using gaming techniques.

Overall, the application of the PUG has shown its value in triggering participants to empathize with a new treatment procedure situation and in providing a clear overview of a lengthy and complex treatment procedure, including the consequences that changes to this procedure have for the product requirements. Merely "talking" about the treatment procedure would most likely have demanded an enormous memorization effort from the participants.

6.2 Possible game improvements
In optimizing the PUG, improvements could be made to the game set-up or to the organizational set-up of the sessions. Regarding the session set-up, it is sometimes advisable to let people work out something individually first (as done within LEGO® Serious Play™; see [10]) and have them afterwards discuss and combine their ideas in shared sessions, in order to prevent one leader from dominating the whole session while the other participants remain passive. In the PUG, this was indirectly realised by means of the questions in the preparation letter. Although no passive behaviour or "overruling" of participants was perceived, starting with individual development sessions might be worth a trial to see whether this would deliver a broader spectrum of results. However, this adaptation might have consequences for the time frame of the game session and thereby its efficiency, which is one of the PUG's strengths. Every participant will possibly develop the optimum solution with a different priority list of goals in mind, for example: efficiency of the treatment, best treatment for the patient, costs, or maintenance of existing structures. The bottlenecks discovered in the procedure can only give some indirect information about this. Although this might be interesting to investigate further, from a commercial perspective it is far more important to know where the boundaries of feasibility are located for the whole team of specialists.

7 CONCLUSION
The concept of using a low-tech participatory design game to develop a new treatment procedure, including an innovative new medical appliance, was presented. Because of the difficulty of overseeing a medical procedure with all the persons involved, the additional appliances and the consequences that changing this procedure might have, a complementary game of combined participatory techniques was developed. Intended future users of the appliance were asked to participate in this game to provide insight into their roles in the treatment and to benefit from their specialist knowledge and experience. The Procedure Usability Game (PUG) comprises a custom-made combination of a task flow analysis and a pivot game. Within this complementary set-up, the task flow analysis supports a structured development of the procedure, whereas the pivot game stimulates envisioning the whole procedure and all the elements within it. The PUG was tested in a commercial design case for a medical appliance company. In the organized game sessions, the specialist users were able to design a completely new treatment procedure within a time frame of only three hours. They managed to do this despite not being skilled designers. The players showed commitment to solving the problems and enjoyed taking part in the game.


Playing the PUG resulted in a large amount of useful information, the design of a complete treatment procedure, and insights into possible bottlenecks in both the new procedure and the product requirements. The results obtained, as well as the efficiency of the application of the PUG, were appreciated by the company. It is therefore concluded that the Procedure Usability Game can successfully support the development of a new treatment procedure including an innovative new medical appliance. As the first applications of the Procedure Usability Game have been a success, there are plans to organize more such gaming sessions for further research on both the procedure and the game. Furthermore, a "follow-up" game will be developed that uses the procedure developed in the first game as a starting-point scenario, with the intention of working out the treatment procedure in more detail and enabling users to find and validate corresponding product requirements. The complementary game set-up is likely to work for non-medical, similarly structured design problems as well. It is not only the designers of medical products who are confronted with an early design phase in which a new product function is defined in general terms while the effects of its implementation in a use situation still need to be discovered. We believe that the PUG could be beneficial in the early development phases of every product that is used within lengthy procedures, complex use situations or by various specialised users.

8 ACKNOWLEDGEMENTS
We would like to thank all the people who participated in the game sessions and the company that provided the design case.

9 REFERENCES
[1] Martin, J.L., Norris, B.J., Murphy, E. & Crowe, J.A., 2008, Medical device development: The challenge for ergonomics. Applied Ergonomics, Vol. 39, pp. 271-283


[2] Brandt, E. and Messeter, J., 2004, Facilitating collaboration through design games. In: Proceedings of Participatory Design Conference 2004.
[3] Johansson, M. and Linde, P., 2005, Playful Collaborative Exploration: New Research Practice in Participatory Design. Journal of Research Practice, 1(1): Article M5.
[4] Brandt, E., 2006, Designing exploratory design games: A framework for participation in participatory design? In: Proceedings of Participatory Design Conference, pp. 57-66. Trento, Italy
[5] Lafrenière, D., 1996, CUTA: A simple, practical low-cost approach to task analysis. Interactions, Vol. 3, Issue 5, pp. 35-39
[6] Tudor, L.G., Muller, M.J., Dayton, T. and Root, R.W., 1993, A participatory design technique for high-level task analysis, critique and redesign: The CARD method. In: Proceedings of HFES'93, pp. 295-299. Seattle WA, US
[7] Engbakk, S., Rafn, J.K., Urnes, T., Weltzien, Å. & Zanussi, A., 2002, Pivots and structured play: Stimulating creative user input in concept development. In: Proceedings of NordiCHI, pp. 187-195. Århus, Denmark
[8] Harel, I. and Papert, S. (eds.), 1991, Constructionism. Norwood, USA: Ablex Publishing Corporation.
[9] Törpel, B., 2006, The design game in participatory design and design education – Chances, risks and side effects. In: Proceedings of Participatory Design Conference, pp. 77-86. Trento, Italy
[10] Rasmussen and Associates. The science of LEGO Serious Play. Available at www.rasmussen-andassociates.com, accessed March 2008

Scenarios and the Design Process in Medical Applications

R. Rasoulifar, G. Thomann, F. Villeneuve
G-SCOP Laboratory, Grenoble Institute of Technology, France
{Rahi.rasoulifar, Guillaume.thomann, francois.villeneuve}@inpg.fr

Abstract
Scenarios have been widely used in the design processes of different engineering disciplines, mostly in software engineering and Human-Machine Interaction, where they help software designers to close in on user and usage requirements. Although the design process in many fields of product engineering deals with tasks and functions and steps away from the final users, in medical and healthcare product design the understanding and specification of user requirements becomes an important issue. Scenarios can thus play an essential role as a tool that helps engineers identify and determine the usage of medical devices. This paper investigates the use of scenarios in the design development of healthcare products, and proposes a new concept of using scenarios in the evaluation phase of user-dependent healthcare products. Our proposed model for a scenario-driven approach represents the confrontation between engineering (device design) and medicine (usage) through the scenario specification. The new scenario specifies: cure procedure, device functions, usage situation and observation.

Keywords: Scenario-Based Design (SBD), Medical Application, User integration

1. INTRODUCTION

Modern technology has transformed practice in the medical domain. Physicians and medical doctors are now able to see where they could not before, conduct interventions and operations with minimal trauma, intervene at the genetic level, replace whole natural organs with functional artificial ones, make rapid diagnoses, and peer into the workings of the brain. Much of the credit for these advances goes to the engineers, designers and industries who together identified what needed to be done, the science required to support it, and how it could be made practical. Engineering design for medical products has taken a large step towards providing multifunctional solutions for new requirements. Historically, medical engineers have tried to bring new technical solutions to medical applications, applying various sciences such as biophysics, applied mathematics, physiological modelling, biomechanics and control, imaging, and electrical engineering. These developments, and the particularity of the design process in the medical domain, have attracted design researchers. The design process of medical devices has been characterised as participatory design [1], design for patient safety [2], validation-based design [3], and so on.



The design and development of new healthcare devices needs the participation of the health agent, from the innovation phase at the beginning to the medical validation at the end [4]. From the medical point of view, surgeons consider themselves the innovators [5]. Undoubtedly, the health agent has an important role in the design, but the development process is usually advanced by engineers. A design approach should therefore be developed that reconciles users' ideas, medical requirements and technical possibilities. The problem of usage and user integration is not limited to healthcare research. In Human-Machine Interaction (HMI) studies, the use of scenarios has been surveyed extensively: Scenario-Based Design (SBD) and Usability Engineering have been proposed and used to help designers improve their understanding of user requirements and the usage situation [6, 7, 8]. Likewise, scenarios could help medical and healthcare designers integrate professional users' needs into the design process. This research asks: what are the role and importance of scenarios in the healthcare design process, and how can designers in this field use the scenario as a design tool?

In this paper, firstly, the concept of the scenario and the evolution of its use in design are reviewed. In section three, a survey of the use of scenarios in medical and surgical engineering is presented and discussed. Section four defines the characteristics of our proposed model for a scenario-driven approach, which represents the confrontation between engineering (device design) and medicine (usage). These characteristics are divided into four main categories: usage procedure (cure), device prototype functions, usage situation and observation. Finally, we explain the scenario-based approach in design and the advantages of using this approach in some design cases.

2. SCENARIO AND THE DESIGN PROCESS

A substantial amount of current research and development activity is focused on creating a more use-oriented perspective on new product development. One key element in this perspective is the user-interaction scenario, a narrative description of what users do and experience as they try to make use of new products. Thus, the first question is: what is a scenario and what does it look like?

2.1 Scenario: definition and usage

Scenarios are simply stories about people and their activities [7], and these stories are increasingly attractive to researchers trying to find the logic of a design by studying the essential aspects of the problem and the birth of the solution. The first research on scenarios sought to characterize the story by a set of elements [9]. In the same context, researchers made efforts to discover new aspects of the scenario: agents and actors, goals and objectives, and actions and events were included as the main notions, and in different domains researchers started to use the scenario as a design tool or to introduce it consciously into the design process.

Historically, the first uses of scenarios were in strategic gaming and the military [10, 11]. In management and economics, scenarios have been used for analyzing the consequences of actions and policies. Since the first proposal of the use of scenarios in Human-Computer Interaction (HCI) [12], researchers have employed scenarios as representations of system requirements to improve the communication between developers and users. The scenario identifies the person as having certain motivations toward the system, describes the action taken and some reasons why these actions were taken, and characterizes the results in terms of the user's motivations and expectations [6]. The idea of recognizing consequences in the description of activities involving actors and details of the manipulation situation has led researchers to use the term "scenario-based" in their methodologies. A superficial search turns up many scenario-based methodologies in various disciplines, such as decision making [13, 14], technology (software) development [15], requirements analysis [16], accounting [17], and finally design, as Scenario-Based Design [6, 18, 19]. Although some observers regard scenarios as "one of the least understood recent success stories in the information technology (IT) and management areas" [20], there are some main domains in which the use of scenarios stands out. Jarke et al. reviewed scenarios from three major disciplines (strategic management, human-computer interaction, and software and system engineering) and proposed an interdisciplinary framework for scenario management [20]. They also concluded that, despite some diversity in terminology and use, two particular qualities emerge from their study. First, a scenario is a context-dependent and purposeful description of the world with a focus on task interaction. Second, scenarios are a means of communication among stakeholders. Their findings are summarized by Hertzum in terms of the underlying role of the scenario: to ground decisions in a sound and communicable understanding of the use situation [21]. The scenario is supposed to capture and explore the finer structure of the operative psychology in the situation of use [22]. Kurakawa proposes the situation as one of the three essential components of a scenario, and defines it as "the setting surrounding the actor/agent and the state before and after the actor/agent takes a particular action or there occurs a particular event" [23]. Mostly, the description of the situation of use is given by the scenario. This description should be narrative and detailed [22], and should be written very carefully [24], but unfortunately there is no accurate study of the situation of use, except for the issue of task analysis. The specification of the environment and the different elements of the use situation is very important, particularly when we realize that an artifact cannot be used free of environmental elements. Scenario-based design provides a framework for managing the flow of design activity and information in the task-artifact cycle [7]. Designers can see their work as artefacts-in-use and, through this focus, realize usage and other use-related constraints in the design process. Moreover, researchers can use scenarios to analyze the varied possibilities afforded by their designs through many alternative views of usage situations. This concept is represented in Figure 1.

Figure 1 - Challenges and approaches in scenario-based design, from [7].

Once the scenario has been defined and characterised in its original context, we can pose the next question: how does the scenario help designers to make a good design in the healthcare industry? Considering that the answer depends on the nature of the product and cannot be prescribed strictly, a framework of procedures explaining what to do and how to do it should be provided. A scenario helps to clarify what the usage is supposed to be and how the design can satisfy the predicted use. For instance, in the design process of surgical instruments, the scenario in the classical perception is limited to the operation procedure.


By contrast, the scenario can serve as a more powerful tool and can contain more details.

In this paper, we use the scenario-based approach borrowed from HMI sciences as a concept for user integration in the design progression. However, concepts and approaches from HMI cannot be used directly in the healthcare design domain; they need modification. The next section explains the specification of scenarios in the healthcare design domain.

3. SCENARIOS IN HEALTHCARE DESIGN

There are a large number of studies on the usability of medical devices. A survey of critical-care nurses shows that health agents are very much concerned about usability and feel that manufacturers should place additional emphasis there. Moreover, a significant number of those surveyed would like a role in the development of future devices [25]. How can designers solve this problem? Scenarios have been used in the design of healthcare systems mostly to deal with the use activities and the work situation of clinicians and surgeons. With scenarios, researchers build a task-based workflow of what actors should do. Scenarios are even used to explore the user's knowledge [26]. The literature shows that scenarios are used as design tools to demonstrate the situation in which the designed artifact would be used. Table 1 shows some examples of the use of scenarios in the context of healthcare. Two widespread uses of scenarios are excluded from the table: first, scenarios solely as surgical operation procedures (techniques of surgery), which are the subject of surgical publications (e.g. Journal of Surgery, American Journal of Surgery); second, scenarios as sets of mechanical tasks for a new instrument without human interference (e.g. the automatic tasks of an artificial organ in the body), which are very frequent in the design of biomedical engineering devices.

Regarding the literature, scenarios are used to characterise the interaction between the following subjects: the user, the design artifact and the environmental situation [32]. Figure 2 suggests a schematic view of these interactions. The user-artifact interaction is normally a treatment procedure through a device, like the incision in surgery. The artifact-situation relation concerns the compatibility of the artifact with the other devices or subsystems in the context. The user-situation relation is the interaction of the user and his work atmosphere; the last becomes very important when the design subject is a human system (for example, teamwork in emergency units). However, in a real application all three subjects are related. Scenarios in this context can be used to describe the relations in detail and to help the designer understand and take into account all important factors of the required solution.

Figure 2 - User, artifact and situation in scenarios

Scenarios address the goals and motivations of medical users, describe the design alternatives and demonstrate the medical environment in which the requirements should be satisfied. The use of scenarios acknowledges that the end users should themselves play an active role in the design process. But is it possible to leave all the use aspects of the design progression to the healthcare users? Or, on the contrary, is it reliable to let engineers imagine the clinical exigencies? This is the dilemma faced in every design progression of a user-centred healthcare artifact. The point is that the current use of scenarios in design does not seem to be the best way to integrate the user and the usage situation into the design process. In the next section, a new insight into using scenarios in the design process of medical products is explained.

4. SCENARIO DRIVEN APPROACH

Designers can use many different approaches to become acquainted with the user. Perhaps the simplest approach is just to watch the user perform a medical task and then talk to him. This step helps the designer to understand the basic requirements and may bring him some ideas about the solution. The next step is to come back with a solution, preferably a physical prototype, to hear the user's opinion about it. The user can always comment on the design propositions, but to obtain a reliable evaluation from the expert user, the designer needs to prepare a realistic usage environment. For instance, by watching a surgery, a designer finds some basic ideas for designing or modifying an instrument to help the surgeon. The discussion after the surgery makes things clearer. In the following step, the designer prepares a prototype to show to the surgeon and collect his comments and critiques.

Medical devices are often used in specialized environments (for example, the operating room or the intensive care unit) and it is not usually possible for researchers to simulate the necessary conditions and obtain useful results from testing in a controlled environment. Moreover, surgeons need to manipulate the prototype in a near-real situation.

Study Context | User | Usage Situation | Artifact | Source
EMS patient handling | Nurses | Hospital, emergency | Handling devices | [27]
Wireless connection in-hospital and e-emergency environments | Physicians, nurses | Hospital, emergency | Connection system | [28]
Practice skills and evaluate performance of a healthcare team | Training surgeon | Operating room | Computer simulation | [29]
Identifying and studying a general problem about continuity in interaction | Surgery team | Operating room | Distributed medical workplace | [30]
Identify the overuse and underuse of medical procedures | Physicians, nurses | Hospital | Evaluation criteria | [31]

Table 1: Use of scenarios in the design of medical devices and systems


Some critical studies have claimed that the oral validation of a prototype by surgeons is not always reliable and that the prototype should be tested by them in a real medical situation. Thus designers have to prepare a phantom of the concerned organ (in the case of surgery) or even use a cadaver, and they prefer to carry out the evaluation in an operating room, with all its restrictions and limits. Organising such a situation needs a prescript, which we call a scenario or a part of the scenario.

Thus, by creating scenarios, the designer can prepare the evaluation step properly. The scenario describes the required actions for the artifact in use, and it describes the usage situation that is important for the user's activities. In earlier work by the authors, two further aspects of design in medical applications were identified for integration into the scenario: the prototype functions and the observation [33]. As a result, we define a scenario as a document explaining which main functions of the future product are realized in the present prototype, for which usage activities the prototype is going to be tested, and under which situation. The fourth proposed element, the observation, has become increasingly important because of the complexity of usage and the limits of direct observation (for example, in minimally invasive surgery). Under observation we group all the means the designer uses to capture information during the user's evaluation. The simplest is to take notes and set up a camera, but more efficient observation requires additional tools, such as sensors and professional cameras. One important issue in the observation is to record the expert user's comments and critiques on the prototype. The scenario also describes how the observation will take place. Figure 3 shows a schematic form of a scenario integrated into our coevolutive model.

Figure 3 - Scenario driven approach: preparation, data capturing and analysis

The movement sketched above, from preparation of document (i), through prototype evaluation in the emulation, to data capturing and analysis, often involves a shift in the conceptual focus of the scenario. The scenario plays the role of a design guide document. Early preparation requires the medical experts' opinion to prepare the optimised usage situation, according to the actual prototype and usage. "Emulation" is the concept we use for the evaluation of the prototype by the expert user in the real situation. The scenario previews the tasks and activities and, based on this, the designer can set up the observation system. Finally, the analysis of the scenario provides sets of causal relations among the functions of the prototype, the features of the use situation and the behaviours of the user. In other words, the scenario as a report shows what the prototype was supposed to have and what the user was supposed to do, how the cooperation went, and what is needed for the next step: modifications to the prototype, the usage procedure, the usage situation and the observation setup.

Table 2 shows an example of a scenario in the context of the design of a new surgical instrument. In this example, a new instrument is needed to perform a new operation, in order to transform an open surgery into a minimally invasive one. Considering the (i) position, the prototype (i) is made, the surgical procedure (i) is described and, in the same way, the emulation situation and the observation setup are described in the scenario. In the design of a medical device, physician-designer cooperation is the most important issue. The scenario is a tool to improve communication and to facilitate decision making. Moreover, the design process in this context can be considered a coevolutionary progression of the instrument and the usage, because from the medical point of view the usage, such as the operation technique, is part of the design artifact: it starts from an idea and, step by step, approaches a validated medical treatment. More details on the coevolutionary design model can be found in [34]. Nonetheless, by using scenarios, designers can trace the evolution of the prototype and the usage. Finally, some advantages of the scenario driven approach for user-centred medical applications are:

• Data accumulation of the progression of prototype-usage
• User participation and integration in the design process
• Analysis of decision making based on tasks and criteria
• Organising and facilitating communication
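Purely as an illustration of the four-part scenario document defined above, the record below sketches how one scenario iteration (i) could be captured; the class and field names are our own invention, not the authors' method, and the example values are taken from Table 2:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Scenario:
    """One scenario iteration (i): usage, prototype functions, situation, observation."""
    usage_procedure: List[str]      # cure/operation steps the user will perform
    prototype_functions: List[str]  # functions of the future product realized in prototype (i)
    usage_situation: str            # where and under which conditions the emulation takes place
    observation: List[str]          # means of capturing data during the expert evaluation

scenario_i = Scenario(
    usage_procedure=["incision (1-2 cm)", "insert tubular retractor", "position screw"],
    prototype_functions=["hold the rod during insertion", "release the rod"],
    usage_situation="operating room; spine mannequin with a fracture on L1",
    observation=["general camera on operation site",
                 "frontal camera on the surgeon's head",
                 "sound recorder"],
)
```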

Surgical procedure (procedure for the new operation), performed by the surgeon, with screw preparation and rod-holder charging by a technician: receive the rod (swerving mechanism); identify the fractured vertebra; position screws (3 pairs of screws on 3 vertebras): incision (1-2 cm), insertion of the tubular retractor, position screw; insert and fix the rod (2 times): incision (0.5 cm), insertion of the rod by the rod holder, fix the rod in the screw heads, release the rod holder.
Instrument prototype functions (objects: surgical instruments): hold the rod during insertion; release the rod.
Evaluation criteria: ergonomics of the handle; weight of the prototype; size of the rod housing part; feasibility of manipulation in the presence of the tubular retractor.
Usage situation: operating room at the hospital; amplifier radio; mannequin: spine model with a fracture on L1, positioned in a box filled with synthetic plastic balls and covered by artificial tissue.
Observation: a general camera on the operation site; a frontal camera on the surgeon's head, capturing where he looks; sound recorder; photographs; notes by engineers.

Table 2: An example of a scenario in the context of innovative surgical instrument design

5. CONCLUSION
The notion of the scenario, originating in HMI sciences, was reviewed, and the specification of a scenario-based approach in healthcare product design was discussed in detail. An aim of our research was to point out the importance of such an approach for the designer in the healthcare domain. Designers need to understand and specify the requirements of their (professional) users. Our proposed approach uses the coevolution concept and specifies the scenario as a tool for the design process, particularly for the evaluation phase, which distinguishes design methodology in healthcare from other design disciplines. On the one hand, as argued by other researchers, users of a device are mostly viewed by designers as one of the subsystems of the global device [35]. On the other hand, health agents would like to participate in the design process, and for new complex medical devices the integration of expert users in development is inevitable. The scenario driven approach proposes a design tool that provides the organisation and design practices with which designers can integrate their medical expert users in the development. We have focused on supporting design situations in which the design process is coevolutive and the artifact is an instrument-usage pair, situations in which the designer is trying to have the medical expert user integrated in the design and the evaluation. This approach helps designers to increase the reliability of the expert evaluation phase.


The scenario supports a fluid exchange of reasoning between the prototype functions and the usage implementation, such that user know-how can inspire the new product and the new product can extend the expert user's manipulation. Our research focused on a new medical product in which the user is a professional or expert, and the design artifact is the couple of instrument and usage (operation). Nonetheless, the discussion is valuable for other medical devices and systems in which the user plays an important role. This approach is being used at the University of Grenoble, in the Design for Technological & Surgical Innovation (DESTIN) project, for the design development of surgical instruments in collaboration with Grenoble Hospital, and for the adaptation of musical instruments for handicapped children in collaboration with the AE2M project. However, the scenario driven approach needs more detail in its "how to do" aspects and should be supported by a workflow management tool to better serve designers and project managers; we are working on this at present.

6. REFERENCES
[1] G. Thomann and J. Caelen, "Proposal of a new Design Methodology including PD and SBD in Minimally Invasive Surgery," in The International Federation for the Promotion of Mechanism and Machine Science (IFToMM), Besançon, France, 2007.
[2] P. J. Clarkson, P. Buckle, R. Coleman, D. Stubbs, J. Ward, J. Jarrett, R. Lane, and J. Bound, "Design for patient safety: A review of the effectiveness of design in the UK health service," Taylor & Francis, 2004, pp. 123-140. Available: http://www.informaworld.com/10.1080/09544820310001617711
[3] K. Alexander and P. J. Clarkson, "A validation model for the medical devices industry," Taylor & Francis, 2002, pp. 197-204. Available: http://www.informaworld.com/10.1080/09544820110108890
[4] C. Lettl, "User involvement competence for radical innovation," Elsevier Science Publishers B. V., 2007, pp. 53-75.
[5] D. Riskin, M. Longaker, M. Gertner, and T. Krummel, "Innovation in Surgery: A Historical Perspective," Annals of Surgery, vol. 244, pp. 686-693, 2006.
[6] J. M. Carroll, Scenario-based design: envisioning work and technology in system development. New York: Wiley, 1995.
[7] J. M. Carroll, Making Use: Scenario-Based Design of Human-Computer Interactions. Cambridge, MA: MIT Press, 2000.
[8] J. Nielsen, Usability Engineering. Academic Press Limited, 1993.
[9] V. Y. Propp, Morphology of the Folktale, 1958.
[10] H. Becker, "The role of gaming and simulation in scenario projects," in Stahl (ed.), Operational gaming: an international approach. Laxenburg, Austria: International Institute for Applied Systems Analysis, 1983.
[11] S. Brown, "Scenarios in systems analysis," in Quade ES, Boucher WE (eds.), Systems analysis and policy planning: applications in defense. Elsevier, pp. 298-390, 1968.
[12] M. Y. Richard and B. Phil, "The use of scenarios in human-computer interaction research: turbocharging the tortoise of cumulative science," ACM, 1987, pp. 291-296.
[13] Y. Bontemps and P.-Y. Schobbens, "The computational complexity of scenario-based agent verification and design," Journal of Applied Logic, vol. 5, pp. 252-276, 2007.
[14] R. Blanning, "A decision support framework for scenario management," in International Symposium on Decision Support Systems, Hong Kong, 1995, vol. 2, pp. 657-660.
[15] K. Weidenhaupt, K. Pohl, M. Jarke, and P. Haumer, "Scenario usage in software development: current practice," IEEE Software, pp. 34-45, 1998.
[16] K. Jintae, K. Minseong, and P. Sooyong, "Goal and scenario based domain requirements analysis environment," Elsevier Science Inc., 2006, pp. 926-938.
[17] P. Pacharn and L. Zhang, "Accounting, innovation, and incentives," Journal of Engineering and Technology Management, vol. 23, pp. 114-129, 2006.

[18] H. Morten, "Making use of scenarios: a field study of conceptual design," Academic Press, Inc., 2003, pp. 215-239.
[19] T. Yin-Leng, G. Dion Hoe-Lian, L. Ee-Peng, L. Zehua, Y. Ming, P. Natalie Lee-San, and W. Patricia Bao-Bao, "Applying scenario-based design and claims analysis to the design of a digital library of geography examination resources," Pergamon Press, Inc., 2005, pp. 23-40.
[20] M. Jarke, X. T. Bui, and J. M. Carroll, "Scenario Management: An Interdisciplinary Approach," Requirements Engineering, vol. 3, pp. 155-173, 1998.
[21] M. Hertzum, "Making use of scenarios: a field study of conceptual design," International Journal of Human-Computer Studies, vol. 58, pp. 215-239, 2003.
[22] J. M. Carroll and M. B. Rosson, "Getting around the task-artifact cycle: how to make claims and design by scenario," in Proceedings of a Workshop on Human-Computer Interface Design: Success Stories, Emerging Methods, and Real-World Context, Boulder, Colorado, United States, 1995.
[23] K. Kurakawa, "A scenario-driven conceptual design information model and its formation," Research in Engineering Design, vol. 15, pp. 122-137, 2004.
[24] D. Diaper, "Scenarios and task analysis," Interacting with Computers, vol. 14, pp. 379-395, 2002.
[25] M. E. Wiklund, Medical Device and Equipment Design: Usability Engineering and Ergonomics. USA: CRC Press, Taylor & Francis Group, 1995.
[26] M. Offredy, S. Kendall, and C. Goodman, "The use of cognitive continuum theory and patient scenarios to explore nurse prescribers' pharmacological knowledge and decision-making," International Journal of Nursing Studies, vol. 45, pp. 855-868, 2007.
[27] K. M. Conrad, P. A. Reichelt, S. A. Lavender, J. Gacki-Smith, and S. Hattle, "Designing ergonomic interventions for EMS workers: Concept generation of patient-handling devices," Applied Ergonomics, vol. 39, pp. 792-802, 2008.
[28] E. Klaoudatou, E. Konstantinou, G. Kambourakis, and S. Gritzalis, "Clustering oriented architectures in medical sensor environments," in 3rd International Conference on Availability, Reliability and Security, Barcelona, Spain, 2008, pp. 929-934.
[29] K. Yaeger and J. Arafeh, "Making the move: from traditional neonatal education to simulation-based training," J Perinat Neonatal Nurs, vol. 22, pp. 154-158, 2008.
[30] E. Kaldoudi and D. Karaiskakis, "A service based approach for medical image distribution in healthcare Intranets," Computer Methods and Programs in Biomedicine, vol. 81, pp. 117-127, 2006.
[31] P. G. Shekelle, J. P. Kahan, S. J. Bernstein, L. L. Leape, C. J. Kamberg, and R. E. Park, "The Reproducibility of a Method to Identify the Overuse and Underuse of Medical Procedures," 1998, pp. 1888-1895. Available: http://content.nejm.org/cgi/content/abstract/338/26/1888
[32] A. I. Antón and C. Potts, "A Representational Framework for Scenarios of System Use," Requirements Engineering, vol. 3, pp. 219-241, 1998.
[33] R. Rasoulifar, G. Thomann, and F. Villeneuve, "Integrating an expert user in design process: How to make out surgeon needs during a new surgical instrument design," in International Symposium Series on Tools and Methods of Competitive Engineering (TMCE), Izmir, Turkey, 2008.
[34] R. Rasoulifar, G. Thomann, F. Villeneuve, and J. Caelen, "Proposal of a New Design Methodology in the Surgical Domain," in ICED, Paris, 2007.
[35] F. Darses and M. Wolff, "How do designers represent to themselves the users' needs?," Applied Ergonomics, vol. 37, pp. 757-764, 2006.


Scenario-Based Evaluation of Perception of Picture Quality Failures in LCD Televisions

J. Keijzers 1, L. Scholten 1, Y. Lu 1, E. den Ouden 1,2
1 Sub Department of Business Process Design, Department of Industrial Design, Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
2 Philips Applied Technologies, Industry Consulting, High Tech Campus 5, 5656 AE Eindhoven, The Netherlands
[email protected]

Abstract
In innovative Consumer Electronics products, such as LCD televisions, consumers often perceive the product's malfunctioning differently than designers do. To support critical design decisions, it is therefore important to understand how consumers perceive potential product failures. This paper discusses the development of realistic failure scenarios related to the picture quality of an LCD television. The impact of television content as well as failure origin on the perception of failures is evaluated in an experiment. Advantages and drawbacks of using the scenarios to evaluate design decisions, and implications for further research, are discussed.

Keywords: Scenario, Product Design, User-Perceived Failure, Consumer Electronics

1 INTRODUCTION
Innovative Consumer Electronics (CE) products, such as LCD televisions, are becoming increasingly complex, both in terms of the embedded technologies (i.e. increasing software content, ambient technologies, and open systems) [1] and in terms of the number of functionalities provided [2]. For example, the LCD television of today can be used to access the Internet, watch digital photos, and connect to a personal computer to watch downloaded movie content. Furthermore, because of the globalization of demand, these products require a high level of connectivity with other products and other brands. Research by Den Ouden [3] has shown that companies report an increasing number of customer complaints and even product returns for such products. A closer look at these consumer complaints has shown that the number of 'No Fault Found' (NFF) failures in the CE industry had risen to nearly 50% of the complaints in 2004 [4]. NFF failures are failures where the product performs according to the technical specifications but is still rejected because it does not meet the consumers' expectations [4].

Because consumers do not always understand how these complex products function [5], they often perceive the product's malfunctioning differently than designers do. Even when a product is still functioning according to technical specifications, a consumer might think otherwise. For example, configuration problems with the channel settings of an LCD television could be attributed by a consumer to the cable provider, the design of the television, bad instructions in the manual, or even a lack of knowledge of the user. To prevent such product failures in the field, there is a need to incorporate more user focus in the new product development process [3].



Specifically, designers need more insight into how consumers perceive potential product failures, to support critical design decisions early in the product development process. Consumer tests can be used to measure consumer perception of product failures by letting consumers experience potential product failures and subsequently asking them to attribute the perceived failure to the perceived cause. However, reproducing realistic failures, which occur in actual product usage situations, in a controlled experiment is often difficult, especially when the resources and time available are limited. Alternatively, product failures can be presented to subjects by different means, for example by showing simulated failures using video recordings [6] [7]. Yet how to create the realistic failure scenarios that need to be tested is still a question to be addressed. The goal of this paper is therefore to develop failure scenarios, related to picture quality problems of a high-end LCD television, which can be used to measure consumer perception of those failures.

The remainder of this paper is organised as follows. First, in section 2, relevant literature related to picture quality failures in LCD televisions and scenario-based product evaluation will be discussed; this section concludes with the hypotheses addressed in this paper. In section 3, the development of the failure scenarios related to the picture quality of an LCD television will be discussed. Subsequently, in section 4, the set-up of an experiment used to evaluate the developed scenarios will be described. The results of this experiment will be discussed in section 5. Finally, in section 6, the paper concludes with a discussion of the advantages and drawbacks of using the developed failure scenarios to evaluate design decisions.

2 LITERATURE REVIEW AND HYPOTHESES

2.1 Picture quality failures in LCD Televisions
LCD televisions are a typical example of the trends in the development of CE products discussed in section 1.

TV systems have an increasing number of features and the amount of software embedded in the TV is increasing dramatically [8]. Although the TV system has changed dramatically from a technical point of view, many consumers do not understand these changes. Consequently, as shown by De Visser [6], consumers perceive product failures, and the severity of failures in TV systems, differently than designers do. For this research, the choice was made to focus on developing failure scenarios related to the picture quality of LCD televisions. Picture quality was chosen because it is one of the most important aspects of the quality of LCD televisions [6] and because picture-quality-related failures are easy to simulate (see for example the research by Puchihewa and Bailey on different artefacts in image and video systems [9]). Regarding user-perceived failures in picture quality, among other things (see also De Visser [6]), two aspects are important when evaluating the use of failure scenarios. Firstly, it is important to investigate whether consumers perceive (i.e. notice) the simulated failures in picture quality when using the failure scenarios. Secondly, it is important to investigate the perceived failure impact. According to De Visser [6], failure impact can be defined as 'the percentage loss of functionality as a result of a failure' and is an important predictor of user-perceived failure severity.

2.2 Scenario-Based Evaluation of Product Failures
The use of scenarios in product design is widely supported. Scenarios can be used as a communication tool between designers, users, and stakeholders; they require less time and cost than prototypes; and they provide designers flexibility [10] [11] [12]. Furthermore, scenarios can have many different views and forms [11] [13] and can be used for many different purposes throughout the design process [12]. Since product failures in CE products are difficult to reproduce in a real product for use in a controlled laboratory experiment, it is interesting to investigate whether product failure scenarios can be used instead. In the literature, several examples can be found of the use of scenarios to evaluate product failures [7] [14] or service failures [15]. However, how to create realistic failure scenarios of complex CE products that can be used to let consumers evaluate the failures, as input for critical design decisions, is currently not addressed. Research has shown that user-perceived failures depend strongly on the characteristics of the failure and the user, as well as on the context in which the failure occurs [6]. Consequently, the choice was made to focus this research on the influence of a scenario contextual variable (television content) and a product-failure-specific variable (failure origin) on user-perceived failures. These variables and the related hypotheses are discussed in sections 2.3 and 2.4.

2.3 Television Content
Earlier research by Ghinea and Thomas [16] has demonstrated the relation between the content of video clips and the level of user perception and understanding of that content. Highly dynamic and information-rich video clips have a negative impact on the users' understanding and information assimilation, because users have difficulty absorbing audio, visual, and textual information concurrently. The importance of audio, video, or textual information also varies between video clips with different content [7].
For example, a news-broadcast has different properties than a music video or a wild-life documentary. This research implies that there is a possible effect of television content on user-perceived failures in television picture quality.

One can therefore hypothesize that television content that is sufficiently captivating to the user could cause a failure in picture quality to go unnoticed. As a result, the following hypothesis was formulated:

Hypothesis 1: Captivating television content negatively influences the perception of a failure in picture quality of LCD televisions.

Besides the hypothesized influence of content on the perception of the presence of a failure, television content could also influence the perceived impact of the failure. In other words, one could hypothesize that, depending on personal preferences for television content, a failure occurring during an interesting news broadcast or an exciting movie is perceived as having a higher impact than the same failure occurring during a broadcast of a political debate. This resulted in the following hypothesis:

Hypothesis 2: Captivating television content positively influences the level of perceived failure impact in picture quality of LCD televisions.

2.4 Failure Origin

According to technical experts, television picture quality can be influenced by problems internal to the TV (e.g. faults in the software) when processing or displaying the TV signal, as well as by problems external to the TV, such as bad weather or cable connection problems. Problems internal to the TV could, for example, result in ghosting or blocking artefacts (see [9] for more examples). Bad weather or cable connection problems could, for example, result in noise on the screen. Because consumers might attribute externally caused problems to the TV system, or the other way around, it is important for product developers to gain insight into how consumers perceive these different types of failures in picture quality. Furthermore, because failure impact is an important predictor of user-perceived failure severity and subsequent consumer complaint behaviour, it is interesting to investigate whether failure origin influences perceived failure impact. Accordingly, the following hypotheses related to failure perception and perceived failure impact were developed:

Hypothesis 3: Failure origin influences the perception of a failure in picture quality of LCD televisions.

Hypothesis 4: Failure origin influences the level of perceived failure impact in picture quality of LCD televisions.

To test these hypotheses, failure scenarios with different television contents (captivating and less captivating) and different failure origins (internal or external to the TV) were developed. The design of these scenarios is discussed in the following section.

3 SCENARIO DESIGN

3.1 Design Methodology

Many factors have to be taken into consideration when creating the required failure scenarios. In this section these factors, and the methods used to incorporate them, are discussed. According to Rolland et al. [13], scenarios can be designed along four different views: purpose, content, form and lifecycle. This framework was used as a guideline to define and develop the scenarios for this paper. As previously mentioned, the purpose of developing failure scenarios here is to let participants evaluate picture quality problems by viewing failure scenarios that simulate experiencing the same failures in a realistic use situation. To achieve this, an iterative design process was used in which digital TV system experts (both picture quality experts and system testers) were


actively involved in designing the scenarios and evaluating their quality and realism. The specific design choices and the resulting failure scenarios are discussed in the following sections.

3.2 Selection of Scenario Form

A scenario can be made using different media, each with its own advantages and disadvantages. Seawright and Sampson [17] compare written scenarios to video scenarios, emphasizing that a written scenario offers an empirical experience but not a real (timely) experience. The interpretation variance also differs: with a written scenario the user needs to interpret the meaning of words (e.g. 'fast', 'slow'), whereas in a video scenario the user needs to interpret the simulation of the situation. On the other hand, Hamberg and De Ridder [18] argue for the use of still pictures instead of moving video to measure perceptual image quality. The use of still pictures simplifies the stimulus material significantly, because stills do not have time variation of the scene content. In more recent research by Pinson and Wolf [19], moving video scenarios are successfully used for the assessment of picture quality, because they use a continuous assessment method in which the timeframe again matters. Since timing and realism are of the highest importance for experiencing picture quality failures in LCD televisions, it was decided to use a video scenario.

3.3 Design of Scenario Context

The main disadvantage of using any type of scenario for product failure evaluation is the detachment of the failure experience from its original context. Although consumer experiments in a laboratory setting have the same drawback, the context used in a failure scenario is important because such scenarios lack any type of user-product interaction. Consequently, the context of the scenario in which the picture quality failures were shown to the subjects was carefully designed. To simulate a realistic environment, a video recording of a living room environment was used in which someone acted as the user of an LCD television. Each failure scenario video had a duration of 90 seconds (30 seconds for the introduction, 30 seconds for television content without failure, and 30 seconds for television content with the implemented failure) and consisted of the following fragments (a timing sketch follows the list below):

• Introduction to the living room setting: a family member enters the living room and switches on the TV.



• Interaction with the TV: the family member switches through several TV channels and ends up at the channel with the implemented failure in picture quality. A fragment of this part is shown in Figure 1.



• The camera zooms in on the TV.

• The video clip with the failure scenario is shown on the TV, in which after 30 seconds the implemented failure occurs. To provide a frame of reference, a fragment of the wild-life video without an implemented failure in picture quality is shown in Figure 2.
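The 90-second structure described above can be summarised in a short sketch. This is purely illustrative: the phase labels and the SCENARIO_TIMELINE constant are our own names for the three 30-second phases described in the text, not part of the original study materials.

```python
# Hypothetical summary of the 90-second failure scenario layout
# (three 30-second phases, as described in section 3.3).
SCENARIO_TIMELINE = [
    ("introduction: living room setting and channel zapping", 0, 30),
    ("television content without failure", 30, 60),
    ("television content with implemented failure", 60, 90),
]

for phase, start, end in SCENARIO_TIMELINE:
    print(f"{start:2d}-{end:2d} s  {phase}")
```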



Figure 1: Scenario context in living room.

Figure 2: Fragment of scenario without a failure.

3.4 Selection of Scenario Content

To investigate the influence of TV program content, two video clips that differ in their level of captivating content were selected. The content description of both clips is shown in Table 1. Each clip contained a meaningful segment of a TV program. To conduct an accurate study, the two contents needed to differ on only one factor: how captivating they are. This meant that all other factors, for example movement, lightness, sound level, duration, and quality, needed to be kept as constant as possible. What counts as captivating TV content is personal: each person has their own taste and interest in television programs. However, the assumption in this study was that an action movie is experienced as more captivating for the selected test participants (see section 4.4) than a wild-life documentary. This assumption, however, needs to be validated by consumers when evaluating the developed scenarios.

Scenario content           Description
Captivating content        Genre: action movie. Content: fragment of 'Matrix Revolutions' with a motor and car chase scene.
Less captivating content   Genre: wild-life documentary. Content: fragment of 'Earth' documentary with walking and swimming elephants.

Table 1: Failure scenario content description.

3.5 Design of Failure Scenarios

To investigate the influence of failure origin, two different failures were selected in agreement with the digital TV system experts. The two variations chosen to be integrated in the scenarios are:

• A failure most likely to be caused by (software) faults in the television: blocking artefacts on the TV screen.

• A failure most likely to be caused by a signal disturbance in the cable or a bad cable (connection): noise on the TV screen.

For both failure scenarios it is important to note that the failure origin can differ depending on, among other things, the type of cable signal (analogue versus digital), the TV system configuration, the duration of the failure, and the appearance of the failure. Both failure scenarios used for this research were selected and designed to represent the difference in failure origin as discussed above. Because only failure origin was varied, all other factors, for example start time, duration, and severity, were kept as constant as possible. Both the failure scenario with blocking artefacts and the failure scenario with noise were created with video editing software. Since videos are used as a medium, it was very important to use the highest video quality possible. On the one hand this ensured that the picture quality failures were clearly visible, while on the other hand the implemented failures could not be attributed to bad video quality or to the program used to display the video. Both scenarios were reviewed by the digital TV system experts on quality, perceived failure impact and perceived failure origin to verify whether the scenarios were realistic. The scenarios were approved after adjustments made during two iterative cycles. A fragment of the wild-life video with blocking artefacts and a fragment of the same video with noise are shown in Figure 3 and Figure 4 respectively. Both are exactly the same fragment as the one without an implemented failure shown in Figure 2. In the next section, the set-up of the experiment used to test the hypotheses and to evaluate the use of failure scenarios is discussed.
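The two failure types can be illustrated with a minimal image-processing sketch. This is not the video editing procedure used in the study (which is not specified); it is a hedged approximation of how noise and blocking artefacts of the kind classified by Punchihewa and Bailey [9] could be simulated on a single frame, assuming frames are available as NumPy arrays.

```python
import numpy as np

def add_noise(frame: np.ndarray, strength: float = 0.3) -> np.ndarray:
    """External failure: additive noise ('snow'), as from a bad cable signal."""
    noisy = frame.astype(np.float64) + np.random.normal(0.0, 255 * strength, frame.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_blocking(frame: np.ndarray, block: int = 16) -> np.ndarray:
    """Internal failure: blocking artefacts, approximated by flattening each
    block to its mean colour (mimicking overly coarse block-based decoding)."""
    out = frame.copy()
    h, w = frame.shape[:2]
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            out[y:y + block, x:x + block] = frame[y:y + block, x:x + block].mean(axis=(0, 1))
    return out

# Example: apply both failure types to a synthetic 64x64 RGB frame.
frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
noisy, blocky = add_noise(frame), add_blocking(frame)
```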

Figure 3: Fragment of scenario with blocking artefacts.

Figure 4: Fragment of scenario with noise.

4 EXPERIMENT METHODOLOGY

4.1 Overview

To investigate the influence of television content and failure origin on the perception of the failure, a 2 x 2 between-subjects experiment was set up. The subjects were asked to fill in a web-based survey in which the failure scenarios were implemented. To evaluate the validity of the failure scenario design, the questionnaire consisted of three parts. In the pre-experimental part, a video with either the action movie or the wild-life documentary without a failure was shown to measure perceived quality. Subsequently, in the experimental part, a video with the implemented failure and with the same content as the introduction video was shown to measure failure perception and perceived failure impact. Finally, in the post-experimental part, several control variables and demographics were measured.

4.2 Experimental Variables

The independent variables were the content of the television program (wild-life documentary versus action movie) and the origin of the failure (blocking artefacts versus noise). As dependent variables, failure perception and perceived failure impact were measured. Failure perception was measured by a single yes/no question. Failure impact was measured on a 5-point scale (with 1 = not annoying and 5 = very annoying, adjusted from De Visser [6]). Additionally, several control variables were measured to evaluate the realism of the scenario. To test the assumption that the action movie is more captivating than the wild-life documentary, interest in the content was measured on a 5-point scale (with 1 = interesting and 5 = not interesting). Furthermore, to evaluate the quality of the designed scenarios, the subjects were asked to rate the realism of the scenarios on a 5-point scale (with 1 = realistic and 5 = unrealistic). Finally, to evaluate the successful simulation of picture quality failures, subjects were asked to evaluate the picture quality of the LCD television for both the introduction and failure scenario on a 5-point scale (with 1 = very good and 5 = very bad).

4.3 Apparatus and Materials

The web-based questionnaires with inserted videos were set up using LimeSurvey [20]. Each subject filled in the questionnaire on a computer at the university campus. For each subject, a researcher was present to access the survey and select the appropriate settings (i.e. choosing the appropriate failure scenario to achieve equal sample sizes). No additional materials were provided and the


subjects were not allowed to access the Internet to search for any additional information.

4.4 Participants

Since the goal of this research was to validate the design and use of failure scenarios, it was considered important to use a group of participants with homogeneous characteristics (similar to the research by De Visser [6]). Therefore, the experiment was carried out with students of the faculty of Industrial Design at Eindhoven University of Technology. They were chosen as participants because they form a group with homogeneous characteristics and because they are easily accessible. To further ensure that differences in perception of the picture quality failures could not be attributed to differences in personal characteristics among the participants, levels of familiarity (usage and ownership) and expertise [21] regarding televisions were measured. In total, 40 participants (27 males and 13 females) with a mean age of 22.0 years (SD = 2.42 years, range = 19 to 29 years) were randomly assigned to one of the experimental groups. Non-parametric Kruskal-Wallis tests [22] showed no significant differences between the personal characteristics of the participants in the four groups.

4.5 Procedure

At the beginning of the questionnaire, the participants were instructed that the experiment was set up to evaluate the quality of LCD televisions. The entire questionnaire was written in Dutch and consisted of the following parts:

• Introduction to the purpose of the questionnaire and general instructions.



• Introduction scenario: video with no implemented failure. Additionally, information was provided on the capabilities of the LCD television displayed in the scenario and on the configuration of the whole set-up (i.e. type of cable signal and conditions under which the content on the TV was shown).



• Control questions on the introduction scenario: measurement of interest, picture quality and failure perception.



• Failure scenario: video with implemented failure.



• Questions on the failure scenario: measurement of picture quality, failure perception, failure impact and perceived realism.



• Questions on familiarity and expertise regarding LCD televisions.

• Questions on demographics.

Filling in the entire questionnaire took approximately 15 minutes. The entire procedure and all questions were pre-tested in a pilot test, and based on the comments several small adjustments were made to the formulation and order of the questions.
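As a minimal sketch of the design described in sections 4.1 and 4.3, the balanced assignment of 40 participants to the four cells of the 2 x 2 design could look as follows. The function and factor names are hypothetical illustrations, not part of the study's materials; the paper only states that a researcher selected the scenario per subject to achieve equal sample sizes.

```python
import itertools
import random

CONTENTS = ["wild-life documentary", "action movie"]   # television content factor
ORIGINS = ["noise", "blocking artefacts"]              # failure origin factor

def balanced_assignment(n_participants: int = 40, seed: int = 1):
    """Return a shuffled list of (content, origin) conditions with
    equal cell sizes: 40 participants / 4 cells = 10 per cell."""
    cells = list(itertools.product(CONTENTS, ORIGINS))
    schedule = cells * (n_participants // len(cells))
    random.Random(seed).shuffle(schedule)
    return schedule

for participant, (content, origin) in enumerate(balanced_assignment(), start=1):
    print(f"participant {participant:2d}: content = {content}, failure = {origin}")
```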

5 RESULTS

5.1 Overview of Results

In this section, an overview is given of the results of the experiment, and the validity of the scenarios is discussed. The results for the most important measurements for the different scenarios are shown in Table 2 below. The measures did not meet the assumptions of normality and equal variances between the different groups, which are prerequisites for using parametric tests. In order to compare differences across the scenarios, one-way Kruskal-Wallis analyses were run. In order to compare differences between groups, separate pair-wise Mann-Whitney U tests [23] were used. Holm's sequential Bonferroni correction [24] was used to determine the corrected significance level when multiple comparisons were made. The level of significance was set at p = 5%. Results within the less restrictive 10% level are indicated as marginally significant.

Before testing the hypotheses, the validity of the designed scenarios is discussed. First of all, the results of a non-parametric Mann-Whitney U test confirmed that the action movie content is more interesting to the participants than the wild-life documentary content (p=0.012). Furthermore, from the results in Table 2 it can be seen that all of the scenarios are considered reasonably realistic, although the blocking artefacts scenario is considered less realistic than the noise scenario. Results of a Kruskal-Wallis test show there are no significant differences between the levels of perceived realism of the scenarios (χ2(2) = 2.83, p < 0.42). Subsequently, the Wilcoxon signed-rank test [22] was used to test for a significant difference between the picture quality measurements of the introduction video (without failure) and the subsequent failure scenarios. The results show that the manipulation of the picture quality failures was successful (p=0.000). Finally, the failure impact measurements of all the scenarios were compared to test whether the severity of the failure was perceived as constant between the two failure scenarios within each content group. The results of a Kruskal-Wallis test show that the measurements of failure impact are significantly different across the scenarios (χ2(2) = 11.46, p < 0.01). However, the separate Mann-Whitney U tests confirmed that, when comparing the noise and blocking artefacts scenarios for the wild-life documentary (p=0.422) and for the action movie (p=0.112), the perceived failure impact is not significantly different. Overall, it can therefore be concluded that the manipulation of the failure scenarios was successful. In the next section the results of the hypothesis tests are discussed.

5.2 Test of Hypotheses

First, the results of Mann-Whitney U tests show that there is no significant influence of television content (p=0.602) or failure origin (p=1.000) on failure perception. In other words, hypotheses 1 and 3 cannot be accepted.
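The non-parametric pipeline described above can be sketched in a few lines. The ratings below are hypothetical placeholder data (the study's raw responses are not reproduced here), and holm_bonferroni is our own helper implementing the sequential correction cited from [24]; only the SciPy test functions are real library calls.

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical 5-point failure impact ratings for the four scenario groups.
wildlife_noise = [3, 4, 3, 4, 3]
wildlife_blocking = [3, 2, 3, 3, 3]
action_noise = [4, 4, 5, 4, 4]
action_blocking = [5, 4, 5, 5, 4]

# Omnibus comparison across scenarios, then a pair-wise follow-up test.
h_stat, p_omnibus = kruskal(wildlife_noise, wildlife_blocking, action_noise, action_blocking)
_, p_pair = mannwhitneyu(wildlife_blocking, action_blocking, alternative="two-sided")

def holm_bonferroni(p_values, alpha=0.05):
    """Holm's sequential Bonferroni: compare the i-th smallest p-value
    against alpha / (m - i); stop rejecting at the first failure."""
    m = len(p_values)
    rejected = [False] * m
    order = sorted(range(m), key=lambda i: p_values[i])
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            rejected[idx] = True
        else:
            break
    return rejected
```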

                               Wild-life documentary                    Action movie
                               No Failure  Noise  Blocking artefacts    No Failure  Noise  Blocking artefacts
Interest in scenario content      2.60       x          x                  1.85       x          x
Picture quality                   2.20      4.60       3.90                2.20      4.0        4.20
Failure perception [%]             x         90         90                   x        80         80
Failure impact                     x        3.40       2.80                  x       4.12       4.62
Scenario realism                   x        2.0        2.67                  x       2.38       2.75

Table 2: Overview of mean scores for the experimental variables (x = not measured).

For all the scenarios, almost all the participants noticed a failure in picture quality. Although previous research discussed in section 2.3 showed that captivating content negatively influences attention when watching a video, the duration of the failure scenario videos might have been too short and too fragmented to fully capture the attention of the participants. Furthermore, both failures were made quite severe to compensate for the use of videos (which, on a computer, are always smaller than a large LCD TV screen).

Secondly, the influence of television content and failure origin on perceived failure impact was tested. Mann-Whitney U tests confirmed that a picture quality failure in the action movie is perceived as having a higher failure impact than a picture quality failure in the wild-life documentary (p=0.002). Consequently, hypothesis 2 is accepted. A similar comparison on failure origin is less conclusive, since the result is only marginally significant (p=0.088). In other words, hypothesis 4 is rejected, but the results indicate that there might be an influence of failure origin on failure impact. When comparing the scenarios one by one for perceived failure impact, only the comparisons between the action movie blocking artefacts scenario on the one hand, and the wild-life documentary blocking artefacts scenario (p=0.008) and noise scenario (p=0.008) on the other hand, are significant after applying the Bonferroni correction. One possible explanation for these mixed results is the fact that blocking artefacts in fast moving video content cause more disturbance of the video, and have different colour, contrast and detail variations, than in slow moving content.

6 DISCUSSION AND CONCLUSIONS

To support critical design decisions in the development of complex CE products, it is important to understand how consumers perceive potential product failures. This study investigated a scenario-based evaluation method, which was used to test the influence of television content and failure origin on user-perceived failures in the picture quality of an LCD television. Four different failure scenarios were designed and implemented in a simulated living room context. A 2 x 2 between-subjects experiment with 40 participants was carried out to evaluate the design of the scenarios and to test the formulated hypotheses.

6.1 Use of Failure Scenarios to Measure User-Perceived Failures

In this study, a different approach to the use of scenarios was investigated by focusing specifically on user-perceived product failures. Several control variables were used to measure the validity of the designed scenarios. Overall, based on the statistical results discussed in section 5.1, it can be concluded that the design of the scenario context and the implementation of the failures were successful. Although the blocking artefacts scenarios were perceived as less realistic than the noise scenarios, this may be explained by the fact that, in practice, noise on a TV screen is more common than blocking artefacts.

The advantages of using failure scenarios instead of trying to reproduce realistic product failures during actual product usage situations are not only the limited amount of resources and time required to design the scenarios, but also the possibility to use evaluation methodologies which easily allow larger sample sizes (e.g. a survey instead of a laboratory experiment).
Similar to the argumentation by Carroll [11] in the context of scenario-based product design, the use of failure scenarios also provides a significantly larger degree of flexibility to explore different product failures that would be practically infeasible to reproduce in actual product usage situations.

However, the scenario-based approach to user-perceived product failure evaluation also suffers from drawbacks which require careful design and pilot tests, and which limit its applicability in certain contexts. Firstly, the scenario-based approach is only useful for failures which do not interact with the form in which the scenario is shown. For example, video scenarios cannot be used to evaluate the user perception of damaged pixels or small disturbances in the sound quality, since such failures could also be attributed to the video itself. Moreover, scenarios remove the failure from the context in which it occurs (e.g. no user-product interaction, different start time and duration of the failure, etc.). This can only partly be adjusted for with a careful design of the scenario context. Therefore, the input from and evaluation by product designers, developers, and testers is of crucial importance for designing realistic product failure scenarios.

6.2 User-Perceived Failures in Television Picture Quality

Although the main goal of this paper was to evaluate the use of failure scenarios for evaluating user-perceived failures, several insights were gained about user-perceived picture quality failures. Firstly, the results of the experiment show that, overall, failure origin does not significantly influence the perception of a failure or the perception of failure impact. Secondly, the results did confirm that, for the student sample, failures in captivating content have a higher perceived impact than the same failures in less captivating content. To be of importance for supporting product design decisions, among other things, the consumers' attribution of different picture quality failures to their perceived causes should be measured. Such insights could be used to support design decisions in the design of the user interface or the user manual.

6.3 Further Research

This study is part of a larger project to investigate the influence of failure and user characteristics on user-perceived failures in CE products (see also De Visser [6]). Although this explorative study was conducted with a narrow sample and a limited sample size, the insights gained can be used to improve scenario-based evaluation in future studies measuring the perception and attribution of product failures. However, further research with a larger and different sample (i.e. more heterogeneous in, for example, age and product expertise [21]) and with different products is needed to validate the use of failure scenarios. Furthermore, more research is needed to account for the drawbacks of the scenario-based approach and, more specifically, to improve the design of the context in which the failure scenarios are implemented. For example, one may evaluate the inclusion of user-product interaction in a scenario, or enrich the scenario by using a more elaborate context description to place the failure in a relevant and realistic context.

7 ACKNOWLEDGEMENTS

This work has been carried out as part of the Trader project under the responsibility of the Embedded Systems Institute. This project is partially supported by the Netherlands Ministry of Economic Affairs under the BSIK program. We would like to thank Rob Golsteijn, Iulian Nitescu and Roland Mathijssen for their help with designing the failure scenarios. Furthermore, we would like to thank Jun Hu for his support with the survey software.


8 REFERENCES
[1] Siewiorek, D.P., Chillarege, R., Kalbarczyk, Z.T., 2004, Reflections on Industry Trends and Experimental Research in Dependability, IEEE Transactions on Dependable and Secure Computing 1(2): 109-127.
[2] Norman, D.A., 2007, Three Challenges for Design, Interactions January + February 2007: 46-47.
[3] Den Ouden, E., 2006, Development of a Design Analysis Model for Consumer Complaints, Ph.D. Thesis, Eindhoven University of Technology, The Netherlands.
[4] Brombacher, A.C., Sander, P.C., Sonnemans, P.J.M., Rouvroye, J.L., 2005, Managing Product Reliability in Business Processes 'Under Pressure', Reliability Engineering and System Safety 88(2): 137-146.
[5] Cooper, A., 1999, The Inmates are Running the Asylum: Why High-Tech Products Drive us Crazy and How to Restore the Sanity, Sams, Indianapolis.
[6] De Visser, I.M., 2008, Analyzing User Perceived Failure Severity in Consumer Electronics Products: Incorporating the User Perspective into the Development Process, Ph.D. Thesis, Eindhoven University of Technology, The Netherlands.
[7] Jumisko-Pyykkö, S., Kumar, V., Korhonen, J., 2006, Unacceptability of Instantaneous Errors in Mobile Television: From Annoying Audio to Video, Proceedings of the MobileHCI 2006 Conference, Helsinki, Finland.
[8] Tekinerdogan, B., Sözer, H., Aksit, M., 2008, Software Architecture Reliability Analysis Using Failure Scenarios, Journal of Systems and Software 81(4): 558-575.
[9] Punchihewa, A., Bailey, D.G., 2002, Artefacts in Image and Video Systems: Classification and Mitigation, Proceedings of Image and Vision Computing New Zealand, Auckland, New Zealand: 197-202.
[10] Carroll, J.M., 2000, Making Use: Scenario-Based Design of Human Computer Interactions, The MIT Press, Cambridge, Massachusetts.
[11] Carroll, J.M. (ed.), 1995, Scenario-Based Design: Envisioning Work and Technology in System Development, John Wiley & Sons Inc., New York.
[12] Anggreeni, I., Van der Voort, M.C., 2008, Classifying Scenarios in a Product Design Process: A Study Towards Semi-Automated Scenario Generation, Proceedings of the 18th CIRP Design Conference, Enschede, The Netherlands.
[13] Rolland, C., Achour, B., Cauvet, C., Ralyté, J., Sutcliffe, A., Maiden, N., Jarke, M., Haumer, P., Pohl, K., Dubois, E., Heymans, P., 1998, A Proposal for a Scenario Classification Framework, Requirements Engineering, 3: 23-47.
[14] Lancellotti, M.P., 2004, Technological Product Failure: The Consumer Coping Process, Ph.D. Thesis, University of Southern California, United States of America.
[15] Seawright, K.K., DeTienne, K.B., 2008, An Empirical Examination of Service Recovery Design, Marketing Intelligence & Planning 26(3): 252-274.
[16] Ghinea, G., Thomas, J.P., 1998, QoS Impact on User Perception and Understanding of Multimedia Video Clips, Proceedings of the Multimedia Conference 1998, Bristol, United Kingdom.
[17] Seawright, K.K., Sampson, S.E., 2007, A Video Method for Empirically Studying Wait-Perception Bias, Journal of Operations Management 25(5): 1055-1066.
[18] Hamberg, R., De Ridder, H., 1995, Continuous Assessment of Perceptual Image Quality, Journal of the Optical Society of America A: Optics, Image Science and Vision 12(12): 2573-2577.
[19] Pinson, M., Wolf, S., 2003, Comparing Subjective Video Quality Testing Methodologies, Proceedings of Visual Communications and Image Processing 2003, Lugano, Switzerland.
[20] LimeSurvey v1.71, an open source survey application, www.limesurvey.org.
[21] Alba, J.W., Hutchinson, J.W., 1987, Dimensions of Consumer Expertise, Journal of Consumer Research 13(4): 411-454.
[22] Montgomery, D.C., Runger, G.C., 1999, Applied Statistics and Probability for Engineers, Second Edition, John Wiley & Sons Inc., New York.
[23] Mann, H.B., Whitney, D.R., 1947, On a Test of Whether one of two Random Variables is Stochastically Larger than the Other, Annals of Mathematical Statistics 18(1): 50-60.
[24] Howell, D.C., 2002, Statistical Methods for Psychology, Fifth Edition, Thompson Learning, Pacific Grove.

Applying Scenarios in the Context of Specific User Design: Surgeon as an Expert User, and Design for Handicapped Children

G. Thomann, R. Rasoulifar, F. Villeneuve
G-SCOP Laboratory, Grenoble Institute of Technology, France
{Guillaume.thomann, Rahi.rasoulifar, francois.villeneuve}@inpg.fr

Abstract
In the context of a specific user (in the surgery or handicap domain), the list of requirements is especially difficult to establish. We propose to reflect on the design methodology, and especially on the application of Scenario & Emulation. The aim is to make needs emerge from users who have a specific relation to the final product. In this paper, we illustrate the research with two projects: the DESTIN Project (know-how and experience of surgeons) and the AE2M Project (handicapped children). User, usage and product are the parameters we chose to define and categorize, with the objective of optimising the design process.

Keywords: Emulation & Scenario, User Centred Design, Design Methodology, Handicapped Children, Design in Surgery, Specific User

1 INTRODUCTION

In the design process, the early stage of conceptual design consists of preparing a requirement list according to the user needs. In most cases, a single discussion is sufficient to acquire enough understanding of the problem. In specific contexts, however, designers as a team need to change this classic methodology and adapt their approach to the specific user. In this article, we study two cases (the AE2M and DESTIN projects) which allow us to suggest reflections on the design methodology. These two specific design projects follow the user centred design, participatory design and scenario based design methodologies, but the user capacities and capabilities are completely different. The aim of this article is to work towards a generalisation of a proposed methodology covering two situations where the position of the user is essential in the design process.

The aim of the AE2M project (Ergonomic Adaptation of Musical Materials) is to enable handicapped children to play musical instruments. In this case, the user has to play the music, but a specific system has to be designed for that. Three proficiencies have been identified not only as a necessity, but also as interactive tasks for the design of adapted systems for handicapped children (engineers, musicians and paramedical specialists). The DESTIN project (DEsign for Surgical-Technological INnovation) concerns the design of innovative surgical instruments that are used by surgeons during Minimally Invasive Surgery (MIS) interventions. In this case, the place of the expert user is essential to guide the design team towards an optimal instrument. The know-how and the knowledge of the user must be taken into account efficiently during the design process.

To better understand these two specific design situations and to work on a design methodology, we based our comparison on three main parameters:



user, usage and product. The definitions of these parameters will be described in detail in the article, and the comparison will provide some ways to better adapt the design process in each situation. In spite of well-identified differences between these two projects, the article shows that a design process that integrates a specific user can use the scenario based approach. Our proposition concerns the application of scenarios (emulation) in these design cases and the organisation needed to apply them efficiently, considering the expressive capacities of the final user of the product under development. These reflections and some applications are experimented with during courses at the University of Grenoble: with the engineering students, we experiment with the SBD methodology to design user-adapted products in these two projects. Even if the projects are completely different, we observe some similarities and can give some interesting information on the product.

2 TOWARDS THE USER CENTRED DESIGN METHODOLOGY

The purpose of the research behind this article is to propose an adapted design methodology and to find the place of the specific user in the design process, particularly when he has a specific position (as an expert or a handicapped person, for example) and when the design deals strongly with his expertise. In this case, i.e. when the user is not only the client of a product but also its final specific user, the design process has to be adapted to the situation.

The great importance of human aspects in industrial environments has changed the viewpoints of designers and led to the development of Human-Centred design approaches. One of the fundamentals of this approach is to consider human factors at all stages of the design process. The integration of human factors in the phases of the design process requires the effective use of appropriate human models. In [1], the authors present definitions of human models and their classification in industrial applications, with emphasis on industrial design processes. The authors also focus on the application of human models in a human centred industrial system approach.

In the domain of advanced manufacturing systems, for example, researchers explain that human impacts can significantly impair system performance and the future capability of the company to react to market requirements [2]. As discussed there, these authors propose the development of an alternative 'human-centred' approach: technical and human aspects of advanced systems can be considered in parallel from the start of the design process. Advanced production technology is not only characterized by higher automation of production flow and control, but is more and more measured at the level of the ergonomics of man-machine interaction. Although much effort has been devoted to user friendly design and improved interface techniques, today's systems do not take into account their individual users' problems and tasks [3]. One possible answer to this problem is the design of 'cooperative', adaptable or adaptive user interfaces. The idea proposed is to adapt interface behaviour (presentation and dialog control) to individual user differences or user problems, by reasoning about user intentions in situational work contexts.

Several comprehensive user-related design methodologies have been published in the last decade, like User Centred Design (UCD) and Participatory Design (PD), but while they all focus on users, they disagree on the definition of the user, on what relation exists between user and product, on what activities should take place during the user needs analysis, and on how these findings should be observed, presented, documented and communicated. All these aspects assume that the user's knowledge, capabilities, limitations and needs have to be taken into account. Moreover, there is the actual use situation and environment, which has a great effect when the degree of expertise of the user increases. In PD, the users are involved in the development of the products; in essence they are co-designers. A great number of projects are currently carried out around Software, Web and Human-Machine Interface development [4]. The main advantage of the UCD approach is that a deeper understanding of the psychological, organizational, social and ergonomic factors that affect the use of computer technology emerges from the involvement of the users at every stage of the design and evaluation of the product [5]. The involvement of users assures that the product will be suitable for its intended purpose in the environment in which it will be used. This approach leads to the development of products that are more effective, efficient, and safe. UCD as a design approach was introduced for the first time in the format of the standard ISO 13407: Human-Centred Design Processes for Interactive Systems [6]. The idea of developing usable products and services has always pushed design approaches towards placing the user in the design process. There is much literature on UCD, also called Human-centred design and usability design, with the same basic principles: develop products and services that will meet the needs and expectations of the end users through user involvement, iterative design and multidisciplinary teamwork [7] [8] [9] [10]. The main issue is how to involve and integrate the user in the design process. The general reference model of UCD principles and process is the model presented by ISO 13407. It identifies five UCD activities, one main activity laying out the design process and four others that deal with its substance [11].

Regarding this main issue, the details of integrating the user in the design process are very interesting from a research point of view. Some researchers have proposed a novel process model of UCD, contrasted it with existing models, and reported their experience of using the model; see Jokela in [12]. The original aim of this kind of research is to learn how to improve the performance of UCD processes in product or system development through the improvement of user interaction with the process.

The main idea of Jokela's new UCD process model is to intercommunicate the user with usability in a cyclic process, as shown in the schema (figure 1).

[Figure 1 diagram: identification of user groups; context of use; usability engineering process; user interaction design processes; user requirements; outputs: user training assets, user documentation, product package and user interface.]

Figure 1: Jokela's UCD Process Model.

"User interaction", as he defined it, aims to produce the interaction between user and design process which leads to four outcomes: user training assets, user documentation, product package and user interface. On the other hand, this model is supposed to be an effective tool for training in the essentials of UCD. Some feedback indicates that gathering needs is more practical than focusing on methods. The other important issue in UCD is how to identify and select relevant users in the development work. In practice it is commonly possible to involve only a limited number of users, and it is therefore very important how the "representative users" are selected, so that the design can be centred on their requirements and expectations. There are several studies on different themes, such as [13] [14] [15], trying to avoid misunderstanding the representative needs. Nowadays, the UCD and PD methodologies are experimented with in many different studies and domains. Their application to the design process, with the concrete integration of the user, needs more understanding of the Scenario Based Design (SBD) Methodology.


3 SCENARIO BASED DESIGN APPROACH FOR CONCRETE APPLICATION OF DESIGN FOR SPECIFIC USERS

As a more global methodology, the PD workshop is one in which developers, designers, business representatives and users work together to design a solution [16]. PD workshops are most effective early in the design process, when ideas can be less constrained by existing code or other infrastructure. The methodologies presented above show a lot of research interested in working with a specific placement of the users in the design process. It is at the beginning of the design process that the list of requirements is essential for the design team. Nowadays, in specific domains, i.e. when the final user has his own know-how and experience, communication with the design team is very difficult, and the terms often employed by users to explain their needs do not allow an instantaneous understanding by the designers. Moreover, when the aim of the designers is to work on radical innovations, they need a more precise approach to user observation and work analysis. In this article, we present two different real situations that need a specific organisation between final users and designers in the context of innovative product design. To enable the designers to better understand the user requirements, we propose to integrate the concept of scenarios into the methodology by means of Emulation. In our case, emulation consists of the exact reproduction of the activity applied on one system but using a different system.

Many papers deal with the advantages of Scenario-Based Design (SBD) and with the way of creating scenarios [17] [18] [19]. In SBD, descriptions of situations become more than just orienting examples and background data; they become first-class design objects. To write a scenario, it is necessary to describe in simple language the interaction which needs to be put in place. It is important to omit references to technology, except when technology represents a design constraint which must be represented [18]. It is thus always necessary to have the scenario read again by a user, to be sure that it is representative of the real world in which he evolves. In [20] and [21], we identified the scenario as a means to better communicate the requirements of the users to the designers. The SBD methodology is defined around a person who does things in a certain context. Using scenarios during design ensures that all participants understand and agree to the design parameters, and specifies exactly what interactions the system must support. In this study, the first step was to identify the needs and the goals of the surgeon. Based on observations in his real environment and discussions with users, the conclusion was the design of "new surgical instruments adapted to the requirements of MIS, associated with a new operative procedure adapted to these instruments". This case study shows that it is necessary to place the user in an environment as real as possible for an effective experiment. This confrontation is the identified link between the user and the prototype being designed. It is realised thanks to the scenario, which allows an effective expression of requirements by the user.

In the two following sections, we present two original projects which illustrate the theory developed above. The first one, called the AE2M project (Ergonomic Adaptation of Musical Materials), concerns specific users: handicapped children. The aim of this project is the design and manufacture of electromechanical systems allowing these users to play normal musical instruments. In the second


example (the DESTIN Project, DEsign for Surgical-Technological INnovation), the goal is to design an innovative surgical instrument for a new type of Minimally Invasive Surgery intervention.

4 AE2M PROJECT

4.1 Presentation

In one of the specialised institutes for handicapped children in Grenoble, France, a music teacher observed that some children have a significant response to music and are even motivated to play. Because of their low physical capacities, these children cannot use the usual instruments. This context inspired the idea of the AE2M project (http://projetae2m.free.fr/), in which the goal was to develop solutions for the handicapped children, built by engineering students [22]. The final aim of this project is to provide handicapped children with conditions for playing music similar to those of children who use normal musical instruments. The ambition is to create an orchestral musical practice for both normal and handicapped children. This multidisciplinary project (with musical, ergonomic, social and technical aspects) places engineering students in a real-world context and teaches them to deal with numerous important specifications and different constraints around the special users of the product in the design and manufacturing process.

The authors propose a representation of the interactions around the AE2M project. Figure 2 shows the "competencies triangle" with the three main speciality areas: Engineers, Musicians and Paramedical specialists. These three competencies have been identified not only as a necessity, but also as interactive tasks for the design of adapted systems for handicapped children. For a better comprehension of this competencies triangle, it is necessary to describe the roles and the work of these specialities in the project. The design engineers are represented by the engineering students. Depending on the project, this team can be composed of 2 to 6 students in the AE2M project:

• They have to discuss with the paramedical specialists, who spend all their time with the handicapped children and know all the physical capacities of these children.



• They should be familiar with the properties of the musical instrument the children would try to play. They therefore consult the music specialists of the project, in order to acquire enough knowledge about the good position of the body and the playing mechanism, such as the velocity and the position of the contact, for instance for percussion instruments.

[Figure 2 diagram: the handicapped child at the centre of a triangle of competencies: Engineers (mechanics, informatics, electronics, physics/acoustics, design, marketing, etc.), Musicians (acousticians, specialised instrument professors, instrument manufacturers) and Paramedical specialists (doctors, ergotherapists, kinesitherapists, orthoptists).]

Figure 2: Competencies triangle of the AE2M Project: competencies around the final user.

All these specific requirements gained from the paramedical and music specialists should be taken into account in the design of an adapted product. The mechanical engineering students' team has to compile these numerous data and to propose and discuss user requirements based on them.

4.2 Application and Methods

This study aims to focus on the design and manufacturing process in the mechanical engineering field, and at the same time to open the students' minds to other considerations of the final user, such as handicaps, and to deal with social and human relations and realities. After three years of work on the AE2M Project, some prototypes have been manufactured at the Grenoble Institute of Technology. Currently, the main musical field is percussion instruments. Three mechanical and electromechanical systems are used by handicapped children in the Motrice Educational Institute with the help of the pedagogic teachers. For example, during the last four months of study, some experiments in usage situations were carried out and two successive prototypes were tested. Then, the students' team proposed a final system. During this study, the emotional behaviour of the children playing music with the prototypes guided some mechanical design decisions. Despite the severe handicaps of the children, the students learned that users are able to give essential requirements thanks to their expressive behaviour.

In this situation, scenarios are written to prepare the emulation with the children in the usage situation:

• What kind of data to recover, and how and why to instrument the child and the prototype?
• To list and prepare facilities for filming, recording and taking pictures, to anticipate the future analysis.
• To prepare a questionnaire to submit to the child, but also to the other users of the prototype (musical teacher, paramedical specialists, etc.).
• To take care that the children's authorisations are obtained.

The added activity (Scenario & Emulation) imposed on the students shows them the necessity of adapting their approach to the design context. The aim is to make needs emerge from users who have a specific relation to the final product. Using the same design methodology of playing the scenario, two other systems are in development this year. One system allows a child to play a wind instrument, and the aim of the other is to help children support the weight of their upper limbs to play music.

At the University of Grenoble, the AE2M projects are examples of multidisciplinary studies that can contribute to the complete formation of engineering students. Currently, electrical and mechanical engineering students from two different engineering schools are collaborating with paramedical and musical specialists around the handicapped children. We have clearly identified the necessity of organising this difficult work around the final specific user. Not only the children's requirements and physical capacities, but also the other technical requirements (musical and paramedical), have to be taken into account by the engineering team. Currently, the best means of success in design is to confront the user with the successive prototypes as soon as possible in the design process. With the engineering students, teachers propose that the final user successively evaluates the manufactured prototypes. A lot of different informatics

means are employed to get information and to analyse the effective use.

5 DESTIN PROJECT

5.1 Presentation

The DESTIN Project consists of a reflection on the complexity of medical instrument design in Minimally Invasive Surgery (MIS), and on Scenario Based Design (SBD), which constitutes another proposition for better integration of the user in the design process [20]. Currently, the post-operative consequences of heavy classical surgical interventions are very handicapping. The scientific progress of the last decades makes it more and more possible to satisfy the needs of surgeons in terms of surgical materials, and more precisely surgical tools. The main objectives of MIS are to make the post-operative constraints less painful for the patient, mainly by modifying the operative process with the aim of introducing miniaturized or modified tools inside the human body. In this study, we try to propose an innovative design process method with a better integration of the user, which allows the designer to design surgical tools better adapted to the surgical procedure and more supportive of the surgeon's ideas. This proposition consists of a new idea on the integration of the Scenario Based Design Methodology and the confrontation of:

• the evolution of the surgical tool prototypes,
• the modification of the surgical procedure.

This approach should decrease the differences between the surgeons' needs and the designed tools and their usage. The proposal is to develop and apply a Co-Evolutive Design Process methodology, with an explanation of the prototype and surgical procedure evolutions (figure 3). In the specific surgical application studied, a particular lumbar fracture is caused by 50% of serious sport accidents (motorbike, ski and paragliding falls, etc.). Currently, the "classical" lumbar arthrodesis operation (placing an implant on the L1 vertebra) is carried out with tools introduced through a 25 cm incision in the patient's back. It is a heavy surgical operation consisting of reforming the fractured vertebra, after first having repositioned the adjacent vertebrae in their original positions. The post-operative consequences are very handicapping for the patient. Following this observation, the surgeon explained his need for minimally invasive surgical tools.

[Figure 3 diagram: a cycle linking Instrument, Emulation and Use.]

Figure 3: Co-evolutive Design Methodology proposal in the surgical domain.


The complexity of designing medical instruments has thus been shown. A proposition for designing medical systems as close as possible to the ideas and needs of the users is to design with them. The proposal is to integrate the surgeon into the design process as early as possible. The proposed model shows the co-evolution of the instrument and its use. The idea is to confront the instrument developed by the engineers with the use idea of the physician. Thus, the successive prototypes can evolve at the same time as the surgical procedure. In this case of product innovation, the surgeon's first idea of use is in his mind and comes from his know-how and his experience. Successively, the use and the instrument will converge to a final and efficient solution. The passage from one position to another goes through a stage called emulation.

5.2 Application and Methods

In an emulation, the surgeon and the designer participate by making arguments, but they use different means of communication and different knowledge references. The surgeon communicates almost exclusively with words from his own vocabulary, while the designer uses his own words, and both have a limited capacity of technical expression. In the experiment, the surgeon was asked to explain the operation and to give his comments in real time. We decided to capture and observe the emulation to validate the concept of emulation and to better understand the surgeon-instrument and surgeon-prototype interactions. One objective of this study is to develop an analysis model for detailing the surgical procedure, in order to have a focused vision of prototype validation. This model also aims to provide some help in requirements analysis for the designer in conceptual design [23]. We used a scenario-based approach to be certain about the observation recordings. The scenario was used to clarify what the usage situation was supposed to be, and how the emulation observation could feed design modifications. We define the scenario as a written document in which the characteristics of the confrontation between instrument and usage are explained. These characteristics can be divided into four main categories (a minimal data-structure sketch follows at the end of this section):

1. Surgical procedure. A scenario is supposed to explain the objectives of the operation, the operative tasks of the surgeon, and the tools required for surgery. If the operation is realized by a team, their roles (e.g. nurse, technician) and tasks should be mentioned.

2. Instrument prototype functions. In this part, the scenario describes the functions of the actual prototype and the elaboration of the principle solutions. Functions are determined by requirements, so in each emulation, depending on the understanding of the requirements, different functions are proposed.

3. Usage situation. The instrument is going to be used by a surgeon in an operative situation. This situation can be characterized by environmental factors of where the usage is taking place, including the patient. It is possible, and very common, that the usage situation does not happen on the patient but on a box, mannequin, or cadaver. Some general information about the emulation, like the characteristics of the mannequin, the external equipment, and the general environment, should be prepared.

4. Observation. How the surgeon and the emulation should be observed must be prepared beforehand. In normal practice,


there is a general camera for filming the whole scene. The technique of filming is somewhat informal and project-dependent, and is a subject of discussion [24].

In this design case, we can consider the surgeon as an expert [25]. This study was made to determine whether the level of expertise of the user affects the design process in the development of a new surgical instrument. From the results, the following conclusions can be drawn:

• The classification of users can take their level of expertise into account when prioritising requirements for the modification of the instrument prototype and the operative procedure.



• Emulation seems to be essential for observing the expert user's needs, because the expert user is like an artist: he is used to manipulating the artefact in his hands and in a real situation.



While an expert activity is dependent on actor’s authority and decision making, data interpretation should be verified with more than one expert in each profile of the classification. As we have shown in this study, it is essential to use the scenario based design methodology to adapt the future product to the expert user requirements. We have proposed a co-evolutive instrument-surgical Methodology where the essential step called “emulation” is realized thanks to a scenario that consists in four main categories. Effective emulations coupled with adapted instrumented observations and analysis must largely contribute to a final product completely in phase with the use of the expert user. To complete this study in the medical domain, this is an other example of design organisation [26]. In the fuzzy front end, designs with medical information asks special attention to the communication between designer and specialist, otherwise there could be lost opportunities for an optimized product design. The knowledge must be brought directly on the level of understanding and sharing the medical and technical information. The shared information should be known to all the involved parties for a successful project with an optimized product design or working prototype as result. Projects with medical science as starting point require a new design approach to develop a design concept or a prototype. The new approach is doing of observation research in the orientation phase of the process by following a medical treatment and the medical steps forward which the medical researcher wanted to reach with the project. But the acceptance of design progress belongs also to this new approach. Shortly, communication and understanding the design progress by all involved parties motivate to come to a new product design or prototype. 6

TOWARDS A MORE ADAPTED METHODOLOGY

6.1 SBD in the AE2M and DESTIN Projects

In SBD, descriptions of situations become more than just orienting examples and background data; they become first-class design objects. Scenario-based design takes literally the adage that a tool is what people can do with it, that is, the consequences it has for them and for the activities in which they use it. In SBD, scenarios of established work practice are constructed. Each scenario depicts actors, goals, supporting tools and other artefacts, and a sequence of thoughts, actions and events, through which goals are achieved, transformed, obstructed and/or abandoned. The scenarios are iteratively analysed, revised and refined.

In the two projects presented above, some scenarios have been experimented with users in real situations: in the Motrice Educational Institute, children have tested the designed systems with the musical teacher (AE2M Project, figure 4). In the DESTIN project, the surgical instrument is used on a phantom and a cadaver during emulation in the operating room (figure 5). The objectives of these two projects are concrete, and the manufactured systems satisfy the requirements of the users. The main difficulty is to integrate the requirements of all the actors who work around the user. The Scenario Based Design methodology is used in all the cases, and some systems are currently used in real situations by the users.

Figure 4: A handicapped child testing a mechanical system prototype with a teacher and an engineering student

Figure 5: The surgeon and his assistant using the prototype of the surgical tool on a cadaver in the operating room

6.2 Proposal of an adapted classification

During the design process, we have identified three factors that can influence the organisation of the scenario and its practical application: the user, the product and the usage. Firstly, the user can be identified in the working context with his know-how and experience. For example, an expert surgeon must not be considered in the same manner as a beginner, and the scenario will be prepared differently. The user's means of expression have to be analysed a priori by the designer, and the observation of the emulation must be adapted to the situation. The user must not be disturbed by the camera and the other observers. It is also important to identify the user in the usage context. A user in a team and a user using the product alone do not react in the same way, and more interactions with the team members change the user's behaviour. The usage of the product depends essentially on the working environment and on the user himself. In the beginning, it is defined by the user from his own

experience and needs. Throughout the design process, the usage often evolves, and it is essential to confront different users with the prototype. Depending on the type of user, it is finally possible to propose either a common prototype for multiple users (usable with each usage) or a generic but adaptable prototype for one type of use. The product must satisfy the list of requirements. The user must practise all the tasks the product or system is designed for. If something does not work well (from the user's point of view), the user's first reaction will be directed towards the product, without calling his own usage into question. These three definitions allow us to propose Table 1, which classifies the two presented projects with regard to the user, product and usage definitions.

           DESTIN Project         AE2M Project
User       with a team            alone
Product    generic & adaptable    generic & adaptable
Usage      personalised           unique

Table 1: Representation of the user, product and usage for the DESTIN and AE2M Projects

The user
In the DESTIN Project, the surgeon always works with the complete medical team (anaesthetist, instrument nurse, young surgeon, etc.). It often happens that the surgical operation is held with two surgeons participating at the same time and with multiple other surgical instruments; this must be taken into account. Moreover, it is not possible for them to prepare the instrument before using it on the patient. Thus, the preparation of this kind of emulation needs more organisation and anticipation than for a "simple" user. All the team members have to give their opinion on the product at their own level of expertise. Even if the handicapped child needs assistance to install and arrange the system, he uses it alone with his own physical capacities. So the musical teacher and the child use the system successively, and it has to be designed for these conditions. Once the system is installed for the child, he must be able to use it alone for approximately one hour without anybody's intervention. Indeed, the musical teacher has to coordinate a lot of children at the same time.

The product
The product, whether a surgical instrument for the surgeon or an electromechanical system for the handicapped child, is what the user is waiting for. In the surgical domain, even if it is advisable to test the product with many users, in the end there will be a single specimen of the surgical tool. In the AE2M Project, each child has his own handicap, but it is better if the system can be used by many children.

The usage
We can define the usage as the manner in which the user uses the product in situation. Each surgeon manipulates the surgical instrument according to his experience and know-how. We observe that the same product in situation is used differently. In situation, the product must be adapted to the handicap of the child. But even if each user is unique, the product and the usage are unique. It means that the relation


between the product and the usage is always the same. Indeed, the objective of the project is that all the handicapped children will be able to play music effectively, as easily as able-bodied children.

7 INFLUENCE OF THE THREE FACTORS ON THE DEVELOPED PROJECTS

Here we propose that there are interconnections between the three parameters: user, usage and product (figure 6). The product must be personalised to the user, and we are convinced that it has to be designed not only for him but with him. The figure also represents the usage that is imposed by the user. In fact, all the users will finally use the same generic product (definition of a frontier) with their own usage and with the same final result. This is represented by the third arrow.


Figure 6: Relation between user, product and usage (usage imposed by the user; product personalised to the user)

During the design, designers always have to think about the place of this frontier: the designed product must answer the requirements of the specific user, but it has to be as generic as possible. It means that the frontier of the product has to be as large as possible compared to those of the user and the usage. In the two cases studied, the product must be as generic as possible but adaptable. We will explain the main differences below. In the study made with the surgeons, the experience and the know-how of the user allow us to represent this scheme as shown in figure 7.


Figure 7: Representation of the scheme for the DESTIN Project

The product is the link between the user and the patient. In the medical field, each physician defends his own practice. The consequence is that the surgeon adapts his usage to the product developed. With the same product, and because of the history of each physician, the result of the surgical operation will be exactly the same with a completely different usage. The ergonomic factor is essential in this design case with surgeons, and this is the main link with the user. On the other hand, this aspect has a great influence on the usage. That is the reason why the usage mainly influences the design process in this case. In the final situation of use, all the expert users will be able to realise the surgical interventions with the team, using a generic and adapted instrument, with a personalised usage. The influences of the user, the product and the usage on the design of systems


that allow handicapped children to play music can be represented as shown in figure 8.

Figure 8: Representation of the scheme in the AE2M Project

In this case, the physical capacities of the children are limited, but a lot of knowledge must be taken into account to satisfy them. The designers mainly have to take into account the remarks coming from the experience of the medical specialists who work with the user daily. The most important thing in this context is the limited capacity of the user to express his satisfaction and dissatisfaction during the applied scenarios. Once this large amount of information has been recovered and analysed, the usage becomes simple. All the products manufactured during this project correspond to this scheme. A lot of time is spent with the user before beginning the design with the musical teacher. In the final situation of use, all the handicapped users will be able to play the generic and adapted musical instrument alone, with their own usage.

8 CONCLUSION AND PERSPECTIVES

Our main contribution to the design methodology consists of the proposition and classification of three parameters: the user, the product and the usage. We illustrate their inter-relations with the examples of the DESTIN and AE2M projects. These projects were chosen because they strongly involve two different types of user, with concrete product design objectives. We show firstly that a lot of researchers are working on and developing the UCD and PD methodologies. The application of these methodologies allows the designer to focus on the user and his work practices. The Scenario and the Emulation are proposed to observe the user in situation and then to analyse the usage in a real environment. It is important to recall that "the scenario is a manner to better communicate the requirements of the users to the designers". Using scenarios and emulation several times during design ensures that all participants (not only the users but also the other persons who interact with the product) understand and agree to the design parameters, and specifies exactly what interactions the system must support. The two projects presented are completely dependent on the SBD methodology. The users, as they are considered specific users, are put in situation during emulation. These concrete projects have allowed us to propose three patents so far: two surgical instruments have been tested with two different surgeons, and one electromechanical system allows disabled children to play percussion instruments with the help of a contactor. The proposed classifications of the user, the product and the usage, and of their interactions, must allow the designer to better organise the preparation and the application of the methodology. The integration of Scenarios and Emulations is essential in the design process for the evolution of the product in these cases.

In future design projects, we will try to better pre-analyse the usage situation of the product, with the objective of adapting the Scenario & Emulation to this context.

9 ACKNOWLEDGMENTS

We gratefully acknowledge the support of Dr. Jérôme Tonetti and Dr. Hervé Vouaillat from the Service Orthopédie-Traumatologie, Grenoble Hospital, for their collaboration in the DESTIN project. The authors gratefully acknowledge all the engineering students of the University of Grenoble for their important work in the AE2M Project. They especially thank Jacques Cordier (musical teacher), Alain Di-Donato (mechanical engineer) and Julie Thony (ergotherapist) for their involvement in this project. The authors also thank the other participants (teachers, paramedical specialists, and the management team of the ME Institute) for their motivation and their devotion.

10 REFERENCES
[1] Shahrokhi, M., Pouliquen, M., Bernard, A., Human Modelling in Industrial Design, Proceedings of the 14th International CIRP Design Seminar, May 16-18, 2004, Cairo, Egypt, 12 pages.
[2] Slatter, R.R., Husband, T.M., Besant, C.B., Ristic, M.R., A Human-Centred Approach to the Design of Advanced Manufacturing Systems, CIRP Annals - Manufacturing Technology, Vol. 38, Issue 1, 1989, pp. 461-464.
[3] Spath, D., Weule, H., Intelligent Support Mechanisms in Adaptable Human-Computer Interfaces, CIRP Annals - Manufacturing Technology, Vol. 42, Issue 1, 1993, pp. 519-522.
[4] Katz-Haas, R., "Ten Guidelines for User-Centred Web Design", Usability Interface, Vol. 5, No. 1, July 1998. http://www.stcsig.org/usability/topics/articles/ucd%20_web_devel.html
[5] Abras, C., Maloney-Krichmar, D., Preece, J., "User-Centered Design", in Bainbridge, W. (ed.), Encyclopedia of Human-Computer Interaction, Thousand Oaks: Sage Publications, 2001.
[6] ISO 13407, "Human-Centred Design Processes for Interactive Systems", International Organization for Standardization, Geneva, Switzerland, 1999.
[7] Hix, D., Hartson, H.R., "Developing User Interfaces: Ensuring Usability through Product and Process", Wiley, New York, NY, 1993.
[8] Nielsen, J., "Usability Engineering", Academic Press Limited, 1993.
[9] Holtzblatt, K., Beyer, H., "Contextual Design: Defining Customer-Centered Systems", Morgan Kaufmann Publishers, San Francisco, 1998.
[10] Mayhew, D.J., "The Usability Engineering Lifecycle: A Practitioner's Handbook for User Interface Design", Morgan Kaufmann, San Francisco, CA, 1999.
[11] Rasoulifar, R., Thomann, G., Villeneuve, F., Integrating an Expert User in the Design Process: How to Make Out Surgeon Needs during a New Surgical Instrument Design; Case Study in Back Surgery, Proceedings of TMCE 2008, Seventh International Symposium on Tools and Methods of Competitive Engineering, April 21-25, 2008, Izmir, Turkey, pp. 415-426.
[12] Jokela, T., "Making User-Centred Design Common Sense: Striving for an Unambiguous and Communicative UCD Process Model", Proceedings of the Second Nordic Conference on Human-Computer Interaction, Aarhus, Denmark, ACM Press, 2002.
[13] Carr-Chellman, A., Cuyar, C., et al., "User-Design: A Case Application in Health Care Training", Educational Technology Research and Development, 1998, 46(4), pp. 97-114.
[14] Bekker, M., Long, J., "User Involvement in the Design of Human-Computer Interactions: Some Similarities and Differences between Design Approaches", HCI'2000, Springer-Verlag, 2000.
[15] Wilson, A., Bekker, M., et al., "Helping and Hindering User Involvement - A Tale of Everyday Design", CHI'97, ACM Press, 1997.
[16] Gaffney, G., Participatory Design Workshop, Usability Techniques Series, Information & Design, 1999. http://www.infodesign.com.au
[17] Rosson, M.B., Carroll, J.M., Scenario-Based Usability Engineering, Symposium on Designing Interactive Systems, 2002, p. 413.
[18] Gaffney, G., Scenarios, Usability Techniques Series, Information & Design, 2000. http://www.infodesign.com.au
[19] Carroll, J.M., Five Reasons for Scenario-Based Design, Proceedings of the 32nd Hawaii International Conference on System Sciences, 1999.
[20] Rasoulifar, R., Thomann, G., Caelen, J., Villeneuve, F., Proposal of a New Design Methodology in the Surgical Domain, International Conference on Engineering Design, ICED'07, Cité des Sciences et de l'Industrie, August 28-31, 2007, Paris, France, 12 pages.
[21] Thomann, G., Caelen, J., Proposal of a New Design Methodology Including PD and SBD in Minimally Invasive Surgery, 12th IFToMM World Congress, June 18-21, 2007, Besançon, France, 6 pages.
[22] Thomann, G., "Ergonomic Adaptation of Musical Materials Project: First Experience Feedbacks of a Two-Year Multidisciplinary Human Experience of Mechanical Engineering Students", The 10th International Conference on Engineering and Product Design Education, E&PDE08, September 4-5, 2008, Barcelona, Spain, 6 pages.
[23] Rasoulifar, R., Thomann, G., Villeneuve, F., Engineering Design in Surgery: An Analysis Model for Prototype Validation, CIRP Design Conference 2008: Design Synthesis, April 7-9, 2008, Twente, The Netherlands, 6 pages.
[24] Mondada, L., Describing Surgical Gesture: The View from the Researcher's and the Surgeon's Video Recording, Gesture Conference, 2002.
[25] Rasoulifar, R., Thomann, G., Villeneuve, F., Integrating an Expert User in the Design Process: How to Make Out Surgeon Needs during a New Surgical Instrument Design; Case Study in Back Surgery, Proceedings of TMCE 2008, Seventh International Symposium on Tools and Methods of Competitive Engineering, April 21-25, 2008, Izmir, Turkey, pp. 415-426.
[26] Langeveld, L.H., Design with Medical Information, Proceedings of the International Design Conference, DESIGN 2008, May 19-22, 2008, Dubrovnik, Croatia, pp. 449-456.


Analysing Discrete Event Simulation Modelling Activities Applied in Manufacturing System Design

J. Johansson
Department of Engineering and Sustainable Development, Mid Sweden University, Östersund, SE-832 25, Sweden
E-mail: [email protected]

Abstract

In manufacturing industry, Discrete Event Simulation (DES) is applied in just a small fraction of the cases where it could give significant value. The complexity of the DES technology itself is one barrier to achieving its full potential in manufacturing system design. This paper focuses on methodologies which can be used in studying the modelling process, with the specific intent of integrating DES into the engineering design process of manufacturing systems. Ethnographical methodologies are proposed, as they are successfully applied to the analysis of the engineering design process. Observations from industrial practice indicate that visualisation in DES software is essential for the sequential verification of design activities.

Keywords: Discrete Event Simulation, Manufacturing system design, Engineering design

1 INTRODUCTION

Why DES is applied in just a small fraction of the cases where it could give significant value is a frequently debated question. According to Banks [1], the complexity of the technology itself is the foremost barrier to the broad deployment of DES technology. Moreover, DES is also considered a time-consuming and expensive expert tool by potential users in industry [2]. Despite these negative attitudes, DES must be considered a top-ranked decision-making tool capable of capturing the dynamic complexity of a manufacturing system. DES is also a versatile tool, and the potential areas of application in manufacturing industry include a wide range of examples, such as operative planning support, system analysis and system design. This study takes a particular interest in the integration of DES into the engineering design process of wood manufacturing systems. The engineering design process of such a system may include the construction and modification of equipment for the processing and handling of materials, the configuration of computerised control systems and automation, and the design of the system layout. Using traditional design methods, only static capacity analyses can be carried out. To manage the system complexity and the dynamics of material flow, the employment of DES is needed. The intention is to integrate DES already in the early development stages as a design support tool and to verify design stages continuously during the development of the system. This is in contrast to the most frequent situation, where DES is used to verify the capacity and properties of an already completed system design which has been developed by traditional methods. Many of the conditions typically encountered in wood manufacturing can also be of relevance to other industrial sectors. Typical characteristics of wood manufacturing include highly automated processing and an extensive


material handling of large physical volumes on conveyors which come in a myriad of variations. Design details often have a decisive influence on the capacity and functionality of these systems. Therefore, a valid DES model of these systems must also include the functionality and logics associated with these specific design issues. Only a few DES tools enable this kind of detailed modelling; they are based on a "manufacturing-oriented simulation language" [3]. To comply with the requirement of detailed modelling capability, our reference projects were carried out in AutoMod from Applied Materials (http://www.appliedmaterials.com/products/automod_2.html). AutoMod has the capacity to deal with large models built in a flexible modelling language in a true-to-scale 3D model environment, with collision detection between moving objects in the material flow. Objects drawn in 3D CAD (Computer Aided Design) can be imported, and the effects resulting from the geometry of moving objects and distances in the layout are implemented with no extra calculation involved. To realise the intention of integrating DES into the engineering design process of wood manufacturing systems, the perceptions of engineering design methodologies are adopted. This also includes observation methods employed to study and analyse the design process. Analysis of the design process has been important for some time, but the changes in current engineering design practices brought about by the revolution in information technology make it a more vital issue today than ever before. Virtual representation of 3D objects, many types of modelling and simulation applications, and the opportunity of working in geographically spread organisations are some examples of the factors that have come into focus. This approach, based on engineering design methodologies, stands in contrast to and complements other studies that see the matter of DES integration into the design process mainly in terms of DES technology.
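For readers new to the technique, the following is a minimal sketch of the event-queue mechanism that underlies any DES tool; commercial environments such as AutoMod add a modelling language, 3D visualisation and statistics on top of essentially this loop. The saw/conveyor timings are invented for illustration:

```python
import heapq
import itertools

class Simulator:
    """Minimal discrete-event core: a simulation clock plus a time-ordered
    event queue. Each event is an action that may schedule further events."""

    def __init__(self):
        self.clock = 0.0
        self._queue = []               # heap of (time, id, action)
        self._ids = itertools.count()  # tie-breaker for simultaneous events

    def schedule(self, delay, action):
        heapq.heappush(self._queue, (self.clock + delay, next(self._ids), action))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.clock, _, action = heapq.heappop(self._queue)
            action()

# Invented example: a saw releases a board every 30 s; the conveyor
# delivers each board 12 s after it is cut.
sim = Simulator()

def cut_board():
    print(f"t={sim.clock:5.1f} s: board cut")
    sim.schedule(12.0, lambda: print(f"t={sim.clock:5.1f} s: board delivered"))
    sim.schedule(30.0, cut_board)

sim.schedule(0.0, cut_board)
sim.run(until=90.0)
```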

The focus on the integration of DES into the engineering design process is, for example, extensively discussed in the dissertations by Klingstam and Randell [4, 5], but in the context of the large automotive industries. The main purpose of this paper is to illustrate how observation methodologies and the perceptions of engineering design methodologies are applicable to a better understanding of the integration of DES into the engineering design process. Since DES modelling is a complex and iterative process of intermixed debugging and verification, including validation tasks and exercises [6], it may involve a number of activities important for model design and the detection of design criteria that can be of value to the engineering design process.

2 THE ENGINEERING DESIGN PROCESS AND OBSERVATION STUDIES

For a long period of time, the engineering design process has been a subject for analysis, but mainly as an individual activity of a technical character [7]. An increased complexity in working with industrial product development has, however, changed the character of traditional development and design work. Today, the need for collaboration between teams at a long distance from each other has become more important. Computer-supported activities and their ability to represent and share information in a virtual environment are also conditions that influence both local and distributed design work. Due to this dramatic change in the role of engineering design, there is an increased interest in studying and analysing the design process from a multifaceted perspective, which focuses on the collaborative work in design [8, 9]. The study and analysis of the collaborative design process include interdisciplinary approaches which involve the social context, the language, the creation and use of artefacts, and other factors which form the design process [10]. To study the engineering design process, methods are simply "borrowed" from the social sciences and applied by engineers [11]. Some examples of these methods include open-ended and focused interviews, structured interviews and surveys, the documentation and use of archival records, as well as observation studies. It should be emphasised that these kinds of studies and the subsequent analysis do not necessarily result in a model or a descriptive principle of work flow, as is often expected by engineers [10]. Instead, they are applied to provide frameworks for the understanding of the wide range of conditions in which the design process is accomplished in real-world industries. According to Larsson, engineering design is an "inherently social activity, and the focus of interest is to study, better understand and describe how engineers actually carry out their work, in detail, not to prescribe how they should work". Observations collected by a qualitative method common in ethnography are often used by social scientists and are applicable to the discipline of engineering. An observer could obtain information without participating in activities, or act as a participant observer, fully participating in the studied activities. According to Larsson [10], the choice between these observational extremes is not necessary; ethnographers often move back and forth between different degrees of participation. One way to obtain and document observations is to take field notes. They serve more as 'triggers' to remember activities in a certain situation than as complete notes of experience and observation. An essential complement to such notes is videotaping, since human activities happen so quickly that it is impossible to capture their complexity by observation alone. The playback of video

taping is particularly valuable in analysing such situations. Informal interviewing allows the participants to create the dialogue in their 'natural' environment, where they feel familiar and where they have access to the people and objects that might be of importance for the conversation. The foregoing summary of ethnographical methodologies refers mainly to Larsson [10].

3 WHAT TO OBSERVE?

To provide some insight into ethnographically informed fieldwork (arrangements, observations, interpretations), the article "Coordinating joint design work: the role of communication and artefacts" provides substantial examples [8]. Two independent design projects were studied by two separate observers with the shared objective to examine the significance of "social and organisational interactions, and the ongoing nature of the knowledge representation and transformation work that takes place through the use of design artefacts". Findings from this work suggest "that design and engineering are constructed through the interactions of multiple actors, and that artefacts and representations of the design process have a key function in the organisation of this work." The creation and use of artefacts have prominent positions in a large number of studies, as they compose objects of interaction and have important roles as communicative resources. According to Perry and Sanderson, two categories of artefacts can be defined: design and procedural [8]. Design artefacts include things like pen and paper sketches, tables of data, guidelines, cardboard models, and computer visualisations of objects in 3D. Procedural artefacts include change requests, office memos, letters, schedules, and Gantt charts. Artefacts are also put forward as linguistic elements that help designers to bridge thought and object, function and structure [9]. The opposition between formal specification and work with prototyping explains how some organisations can be driven by specifications and others by prototyping [12]. An odd and interesting observation of artefacts is the extreme form of spatial location defined as the "airboard" [13]: "On occasion, people explain things to others by drawing in the air. Later in the conversation, people referred to 'that idea' by pointing to the spot in the air where they first had 'drawn' the idea."

4 EXAMPLES OF OBSERVATIONS FROM COMPLETED DES PROJECTS

Experiences from performed DES projects, seen from the perspective of engineering design and the analysis of the design process, imply that a great number of factors determine the work progress. The following examples are based on projects in wood manufacturing industries, with their characteristics.

4.1 Distributed modelling activities

In one project, the industrial representative was an experienced computer user, and a spreadsheet model had already been developed, which could be implemented as a part of the DES-model user interface built in MS Excel. The need to collaborate became more and more accentuated, and to overcome the geographical distance a "shared desktop" software application was used. By means of a shared desktop over distance, spreadsheets, flowcharts and layout drawings were developed with both parties involved simultaneously. The verbal dialogue was simply managed by phone, and the dialogue was truly enriched by the interactivity provided by the shared desktop. Early in the project stage, the industry also purchased a so-called


“runtime licence”, which enabled runs of a DES model in an executable form. The industry could thereby verify the model design continuously during its development, using various input-data scenarios defined in the Excel user interface under development. It should be emphasised that the DES run-time licence was applied frequently to verify the model in various design stages, not only to the completed model. This is an example of distributed work in DES model design that was realised thanks to the industrial representative, who became absorbed in the software technology involved. Subsequent project meetings turned out to be successful and inspiring to others, owing to the unexpected progress in work without travelling. Time and money were saved, and the model design was verified sequentially, with the involvement of industrial representatives and “run-time models”. This simple example of a “shared desktop”, at first glance considered doubtful, was used frequently and with success.

4.2 Artefacts reinforce the dialogue

This example concerns material handling in the process line and includes nine object variations of rectangular geometries, rotated 4-5 times on the flat, along with positions of 90-degree direction change. The length is often double the width, which considerably influences the capacity of the feed and the conveyor logics. These sequences of material handling had not been documented in advance by the industry, and after two unsuccessful attempts to model this part of the system, a new approach to the problem was needed. This time, the support of business cards clarified the situation. They were moved and rotated along the layout until the logic could be documented, step by step, in an index structure that served the DES model logics.
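The paper does not reproduce the actual index structure; a hypothetical reconstruction, with invented variant names, conveyor sections and actions, might look as follows:

```python
# Hypothetical reconstruction of such an index structure; the section
# names, variants and actions below are invented for illustration only.
HANDLING_INDEX = {
    # (object variant, step number): (conveyor section, handling action)
    ("A", 1): ("infeed",     "rotate 90 degrees on the flat"),
    ("A", 2): ("cross-feed", "change travel direction 90 degrees"),
    ("A", 3): ("outfeed",    "pass straight through"),
}

def handling_steps(variant):
    """Return the ordered handling steps for one object variant."""
    keys = sorted(k for k in HANDLING_INDEX if k[0] == variant)
    return [HANDLING_INDEX[k] for k in keys]

for section, action in handling_steps("A"):
    print(section, "->", action)
```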

Figure 1: Material handling illustrated

The use of business cards constitutes an example of simple arrangements without which it would have been rather difficult to systematise and document the handling sequences. Data collection is a time-consuming and difficult part of the modelling stage, where the support of artefacts, in this case business cards, can make the process more efficient. Whether semantic differences were present in the dialogue was not observed, but if that was the case, common references in artefacts would also be of crucial importance to bridge dialectal variations between knowledge areas.

4.3 Detecting design criteria in a DES model

A case of a material handling problem concerns a machine process which included objects of various dimensions on a conveyor with carriers. The carrier spacing was 500 mm, and the carriers moved objects by their leading edge. The next object in line could not be closer than 400 mm, which means that not all carriers could be used for all objects if a distance longer than 400 mm was to be maintained, as illustrated in figure 2. The feeding direction of the objects meant that there were 18 possible


dimensions between 400 and 1600 mm. At first, the industry did not provide these conditions, just the feed rate, without information on the carriers and their logic. The consultant, on the other hand, was not aware of these conditions until it became clear that the capacity of the real system did not correspond to the feeding rate in the model. When it became clear that the system with carriers and objects of various dimensions could be represented in the DES model, a number of other aspects regarding the design and spacing of carriers could also be discussed lively among the individuals in the team of industry representatives. Before the DES project, the process technology with carriers had long been considered a problem, and when it was clear that these conditions could be re-created in a model, further questions could be raised regarding an alternative design for the spacing of carriers.
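As a reading aid, the carrier rule can be expressed as a small calculation. The interpretation below (object leading edges aligned to carriers, at least 400 mm of clearance before the next object may follow) is our assumption from the text, not a formula given in the paper:

```python
import math

CARRIER_SPACING = 500  # mm between consecutive carriers
MIN_GAP = 400          # mm minimum clearance before the next object

def carrier_slots(object_length_mm):
    """Carrier positions occupied by one object before the next may be
    loaded: the next leading edge must sit at least MIN_GAP behind this
    object's trailing edge, and can only sit on a carrier."""
    return math.ceil((object_length_mm + MIN_GAP) / CARRIER_SPACING)

for length in (400, 900, 1600):
    print(f"{length} mm object -> {carrier_slots(length)} carrier slots")
```

Under this reading, objects of 400 to 1600 mm occupy between two and four carrier slots, which is one way to see why the real capacity fell below the plain feed rate first given to the consultant.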

Figure 2: Detailed conveyor design

This example of detailed modelling of material handling, shown in detail in the DES model, illustrates how collaboration between DES experts and system experts can be based on visual references in a DES model. It also illustrates how an alternative machine design can be detected, modified and evaluated with the support of a DES model. In this case, it points to a possible redesign for an optimal spacing of carriers according to object dimensions.

5 DISCUSSION

Implementing new technology into an existing organisation is challenging, and the dependence on the external competence of a DES consultant makes it even more difficult. The approach to solving this problem is, however, often considered to be of a technical character only, since technology is a concrete subject whose application is stimulated by its fast development. Nevertheless, in addition to technology, methodology and organisation are two components that have to be taken into consideration in the quest to overcome barriers and hindrances. A saying is that it takes one part of technology, ten parts of methodology, and one hundred parts of organisation to manage the implementation of new technology [4]. In many cases, just the attitude towards new technology may be a decisive factor [14]. This can be the case in tradition-bound industries, of which the wood manufacturing industries are a good example. Figure 3, which illustrates specific capabilities in DES application, composes an applicable reference to the circumstances discussed. Unlike a number of attempts to systemise the modelling process [15], the approach here emphasises the collaboration between simulation expert, engineer and technician, and the factors that maintain the integration between system and simulation know-how in the design process. Since collaboration can be of many different kinds, according to the context of dependences and situations, it is important to tailor the setup of DES applications to the specific needs. This is complex work,

however, which is also emphasised within the area of engineering design and its most recent development towards increased collaboration in distributed organisations. As a basis for clarity in collaborative work, Ostergaard and Summers have proposed a taxonomy [16]. Such a taxonomy could in many respects compose a starting point for adopting and tailoring a taxonomy for the area where DES is involved [17].

Figure 3: Specific capabilities [15]

Figure 4: Details of cross-feeding system and glue-spread machine [22]

While DES modelling is a complex and iterative process, it must contain a great deal of creativity for problem solving and collaborative processes that involve both system and simulation know-how. To bridge these two competences, a fundamental prerequisite is a dialogue that is based on mutual references to the two know-how areas. This is where design and procedural artefacts play their role. The simulation model, the factory layouts, the flowcharts, the paper sketches, or the usage of business cards compose some examples of vital artefacts that maintain this dialogue. At first sight, these artefacts and the examples of observations listed may appear trivial or obvious to a skilled DES user. However, with the introduction of DES technology to industries as a point of departure, any condition that makes DES technology concrete and involves industry personnel must be put forward. From this perspective, one must remember that DES is a virtual and complex representation of mathematical modelling. In the best case, according to the character of the problem and the DES tools used, visualisation in the DES model is provided as a support to bridge competences. If this is not possible, other artefacts, which present DES capabilities and model verification in concrete form, must be given a prominent position. In the area of product design, such design artefacts, no matter their character (rough or complete, virtual or palpable, sketch or mock-up), constitute an essential medium which serves to bridge thought and object among the team members [9]. An essential quality of design artefacts is also their ability to express non-verbal characteristics [18]. It is, however, emphasised in the DES literature [19] that the use of a tabletop scale model, embodying the system in focus for a DES analysis, is of great support. Moreover, the examples of observations from the performed DES projects compose facts that, in the context of engineering design, would have formed determining factors for further analysis of the modelling process. Today's development of DES model visualisation and flexible model design provides a comprehensive possibility of virtual representation in a concrete form [20, 21]. Figures 4 and 5 illustrate the capabilities of detailed DES modelling that would have been impossible without the visualisation of geometrical references that can be achieved in the DES model. Collision detection between objects in the material flow here provides a realistic and valid representation of the material handling, and of how objects are supposed to queue in front of a process, which is a common design solution in wood manufacturing systems. For the case of various material dimensions, collision detection becomes a determining factor for a valid understanding of dynamic effects.

Figure 5: Geometrical references in the skeleton of the DES model; compare with Figure 4 [22]

An increased awareness of virtual design artefacts and their role in the model development process appears to be essential for the integration of DES in the design process. Observations show that the features for detailed model design provided by some DES tools are significant for design criteria. The possibility to manage geographical distance constitutes another concrete example where visualisation is a fundamental prerequisite for collaboration in the design process. A decisive factor for success in distributed work was the positive attitude towards experimenting with various technical solutions to enable collaboration over distance. It could also be noticed that the social character of the dialogue changed from formal to more relaxed and familiar, and that ideas could therefore be deliberated without prestige. The importance of social factors in the design process is well established [10, 23].

6 CONCLUSION

In the effort to integrate DES into the engineering design process of wood manufacturing systems, it has to be remembered that DES may appear an abstract and complex technology to an industry uninitiated in the nature and benefits of DES technology. Therefore, understanding the kinds of factors that support a more concrete form of DES technology is essential. In a context where DES is integrated into the engineering design process, the need for DES in a concrete form, to support design issues and the related dialogue between competences, is even more accentuated. Examples of features in AutoMod that provide a more concrete form of DES which supports the design process are the visualisation of CAD-drawn equipment, moving material, and collision detection between items of moving material. Furthermore, everything is represented in a 3D model environment that is true to scale. The value of the employment of this kind of visualisation for design


purposes is proved by the case presented in part 4.3. It illustrates how shared references in a DES model enable a dialogue about specific design criteria between DES expertise and design engineers. The importance of visualisation is also proved in part 4.1, where the entire dialogue is related to visualisation on the shared desktop, which enables collaboration in layout design, among other things, over geographical distances. Simple physical objects can also be of importance in clarifying the complexity of material movement, as established in part 4.2. In addition to the fact that the dialogue between competences can be greatly reinforced by design artefacts in physical and digital form, the essential insight in this study is associated with the adoption of the perceptions of engineering design methodologies. This provides an enriched perception of the design activities that occur in DES projects, as demonstrated in the case of a wood manufacturing system design. The application of observation studies in engineering design helps to focus on the essentials in the design process. The importance of design artefacts is a typical example of the kinds of factors that are rarely mentioned in the context of DES applications. Within engineering design, design artefacts are frequently put forward as an essential factor in uniting thought and object, bridging competences, communicating design solutions, managing complexity and much more [8, 9, 12, 13, 24, 25]. Visualisation in the context of DES application is, however, often considered cosmetic, and common advice is not to "drown in visualisation". Observations of the kind presented in this study may, in the eyes of a DES expert, appear trivial. Visualisation in DES is, however, of various kinds and cannot be generalised. It varies from simple 2D, through 2D with perspective icon graphics and 3D with predefined objects, to the CAD-like 3D environment provided by AutoMod [20]. It is obvious that manufacturing system design issues are placed in a different context depending on the capacities of different DES tools. A striking insight is, however, that only two DES software programmes appear to provide the kind of detailed and flexible modelling technique required; these are AutoMod and Quest [3]. Whether the development of DES in recent years has supplied the market with additional DES tools with these capabilities is not investigated in this study. Finally, not only technology but also methodologies and organisation constitute crucial factors in enabling efficient collaboration in design projects. The specific capabilities of DES are illustrated in figure 3, and demonstrate in many respects factors of relevance for DES consultant services in small industries, such as the wood industries. Observation methodologies and a taxonomy tailored in accordance with DES technology and manufacturing system design may be fundamental to further research into the integration of DES into the engineering design process. It is an urgent issue, because DES is the only technology that manages system complexity and its dynamics, i.e. the effects of time.

7 SUMMARY

Using the perceptions of engineering design and its methods for analysing the design process, the integration of DES in manufacturing system design is illustrated from a multifaceted perspective. To exemplify the advantages of visualisation with geometrical references, DES modelling is here put in a context where design issues are concretised and communicated among different competences.
To achieve such insights, observation methodologies were used and successfully applied to the analysis of the engineering design process. Interpretations from observations provide us with a better understanding

of what we do, why we do it, and what we could do better. This approach helps us to observe many aspects other than simply the technological, as is often the case. Seen in the general perspective of DES application in industry, this example of manufacturing system design is probably just one of many areas of application for DES where engineering design methodologies could bring support.

8 ACKNOWLEDGMENTS

The academic part of this study has been financed by the European Union and by Mid Sweden University, Department of Engineering and Sustainable Development. It has been supervised by Professor Anders Grönlund, Luleå University of Technology, Division of Wood Technology. Collaboration with industry has been realised through Swedwood International AB, IKEA of Sweden, and the consultant firm ÅF Engineering - Manufacturing & Logistics Development in Gothenburg. Their involvement has truly inspired this study.

9 REFERENCES

[1] Banks, J., 1999, What Does Industry Need from Simulation Vendors in Y2K and After? A Panel Discussion, Winter Simulation Conference.
[2] Banks, J., Carson, J., Nelson, B.L., Nicol, D., 2004, Discrete-Event System Simulation (4th Edition), Prentice-Hall International Series in Industrial and Systems Engineering.
[3] Law, A.M., McComas, M.G., 1997, Simulation of Manufacturing Systems, Winter Simulation Conference.
[4] Klingstam, P., 2001, Integrating Discrete Event Simulation into the Engineering Process: Strategic Solutions for Increased Efficiency in Industrial System Development, Doctoral Thesis, ISSN 0346718, Chalmers University of Technology.
[5] Randell, L., 2002, On Discrete-Event Simulation and Integration in the Manufacturing System Development Process, Doctoral Thesis, ISBN 91-628-5319-8, Lund University.
[6] Carson, J.S., 2002, Model Verification and Validation, Winter Simulation Conference.
[7] Cross, N., Christiaans, H., Dorst, K., 1996, Analysing Design Activity, John Wiley and Sons Ltd.
[8] Perry, M., Sanderson, D., 1998, Coordinating Joint Design Work: The Role of Communication and Artefacts, Design Studies, Vol. 19, pp. 273-288.
[9] Bucciarelli, L.L., 2002, Between Thought and Object in Engineering Design, Design Studies, Vol. 23, pp. 219-231.
[10] Larsson, A., 2005, Engineering Know-Who: Why Social Connectedness Matters to Global Design Teams, Doctoral Thesis, ISSN 1402-1544; 2005:19, Luleå University of Technology.
[11] MacGregor, S.P., 2002, Describing and Supporting the Distributed Workspace: Towards a Prescriptive Process for Design Teams, Doctoral Thesis, 54948797, University of Strathclyde.
[12] Schrage, M., 1993, The Culture(s) of Prototyping, Design Management Journal, Winter 1993, pp. 55-65.
[13] Hinds, P.J., Kiesler, S., 2002, Distributed Work, MIT Press.
[14] Andreou, N., Donald, D.L., Abell, J., Schreiber, R.J., 1999, The New Design: The Changing Role of Industrial Engineers in the Design Process through the Use of Simulation (Panel), Winter Simulation Conference.
[15] Bley, H., Franke, C., Wuttke, C.C., Gross, A., 2000, Automation of Simulation Studies, 2nd CIRP International Seminar on Intelligent Computation in Manufacturing Engineering (ICME 2000).
[16] Ostergaard, K.J., Summers, J.D., 2003, A Taxonomy for Collaborative Design, Design Engineering Technical Conferences and Computers and Information in Engineering Conference.
[17] Johansson, J., Bäckström, M., 2007, Collaboration: A Key Factor for Application of DES in Real World Manufacturing Industries, 11th International Conference on Human Aspects of Advanced Manufacturing: Agility and Hybrid Automation (HAAMAHA).
[18] Ferguson, E.S., 1992, Engineering and the Mind's Eye, MIT Press, Cambridge.
[19] Kelton, D., 1991, Simulation Modelling and Analysis, 2nd ed., McGraw-Hill Education.
[20] Rohrer, M.W., 2000, Seeing is Believing: The Importance of Visualization in Manufacturing Simulation, Winter Simulation Conference.
[21] Rohrer, M.W., McGregor, I.W., 2002, Simulating Reality Using AutoMod, Winter Simulation Conference.
[22] Johansson, J., 2002, Modelling and Simulation in the Early Stages of the Development Process of a Manufacturing System: A Case Study of the Development Process of a Wood Flooring Industry, Licentiate Thesis, ISSN 1402-1757, Luleå University of Technology.
[23] Nahapiet, J., Ghoshal, S., 1998, Social Capital, Intellectual Capital, and the Organizational Advantage.
[24] Hutchins, E., 1995, Cognition in the Wild, The MIT Press.
[25] Ferguson, E.S., 1992, Engineering and the Mind's Eye, The MIT Press.


A User Centred Approach to Eliciting and Representing Experience in Surgical Instrument Development

J. Restrepo, T.A. Nielsen, S.M. Pedersen, T.C. McAloone
Department of Management Engineering, Technical University of Denmark, DTU Byg 404, DK-2800 Kgs. Lyngby
{jdrg;tmca}@man.dtu.dk ; [email protected]

Abstract

Requirements elicitation for surgical equipment development is a challenging task. On the one hand, designers need to acquire a significant amount of domain and procedural knowledge to understand and discuss surgical tasks, which is difficult and resource intensive. On the other hand, there are restrictions on observing surgeons, related to ethics, regulations and availability. This paper proposes a framework for the elicitation, representation and communication of surgeons' experience, and for its translation into requirements and concepts for new surgical instrument development. The framework is showcased by the development of an improved surgical instrument for laparoscopic surgery.

Keywords: User Experience, User Centred Design, Surgical Task Analysis, Requirements Engineering

1 INTRODUCTION

The development of medical equipment, and of surgical instruments in particular, is characterised by a series of factors that make the process difficult and challenging. On the one hand, regulatory authorities impose stringent restrictions in terms of biocompatibility, robustness, reliability, repeatability of tests, suitability for the task or treatment, and documentation. On the other hand, design teams are often confronted with the need to process large amounts of domain and procedural knowledge to be able to understand, discuss and contribute to the development of new instruments or the improvement of existing ones. Moreover, the ability of a company to identify new opportunities is limited by the ability of the responsible team to understand the surgical procedures, the factors affecting the quality of the surgery, the safety of the patient and the safety of the surgeons. A series of interviews with medical companies has revealed that current participatory design methods are insufficient to capture surgeons' experience, and are limited in their ability to support the identification of new areas for instrument improvement or new developments. This paper reports on the results of those interviews, and proposes a framework to support the processes of capturing, representing and communicating surgeons' experience, of generating requirements, and of translating them into new concepts for new surgical instrument development. The framework is constructed using knowledge classification theory [2][10][11] and was built to map which observation and data analysis techniques [1][8] are best suited to the specific type of knowledge being captured. Most methods proposed in the framework have been adapted to the restrictions of carrying out observation in an operating theatre, such as ethical considerations of video filming and difficulties in communicating with medical personnel during surgery. The framework is showcased as an approach used for the development of an improved surgical instrument for laparoscopic surgery. The case study includes a validation cycle in which both the relevance of the requirements and


the suitability of the concepts generated are tested with hospital surgeons. The paper concludes with a set of recommendations to companies on how best to use resources to capture and use surgeons' experience in the development of new surgical equipment.
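The heart of the framework is a mapping from knowledge type to suitable elicitation techniques, developed in the sections that follow. Purely as an illustrative sketch, with Blackler-style knowledge types and example technique names chosen by us rather than taken from the framework itself, such a mapping might be encoded as:

```python
# Illustrative only: candidate elicitation techniques per knowledge type.
# The assignment shown here is an assumption for demonstration, not the
# validated mapping of the framework itself.
ELICITATION_MAP = {
    "procedural": ["surgical task analysis", "contextual inquiry"],
    "domain": ["interviews", "literature study"],
    "socio-technical": ["observation in theatre", "informal interviews"],
}

def techniques_for(knowledge_type):
    """Look up candidate elicitation techniques for one knowledge type."""
    return ELICITATION_MAP.get(knowledge_type, [])

print(techniques_for("procedural"))
```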

2 INTRODUCTION TO THEORY AND METHODS

2.1 Requirements and Requirements Elicitation

Current requirements elicitation techniques, such as those proposed by Beyer and Holtzblatt [1], are designed to be generic. These generic techniques are insufficient to elicit requirements in complex environments. Complex environments are characterized by:

• critical decision-making
• low tolerance for errors
• team collaboration required
• highly specialised knowledge required
• highly specialised skills required to operate
• possible unforeseeable events that can have catastrophic consequences

Surgical environments can be considered complex environments, as they comply with the above characteristics. Additionally, eliciting requirements in these environments is challenging due to legal and ethical issues and the low availability of the stakeholders (i.e. surgeons) [6]. The term elicitation is preferred to the more commonly used term requirements capture, as the latter implies that requirements are readily available to be captured. In these environments, the knowledge and experience that is elicited needs to be interpreted, represented, communicated and validated with the design team, and ultimately transformed into requirements, concepts and product features. The methods used for these elicitation processes are highly influenced by the type of knowledge being processed. In this paper, we deal with a process in which surgeons' experience is to be elicited in order to

identify new opportunities for product improvement or for new concept development.

2.2 Current Participatory Design Methods and Requirement Elicitation Techniques

Elicitation methods vary greatly in their use of resources, level of detail and the kind of knowledge that needs to be captured. A classical distinction in elicitation methods considers them as classical methods (interviews, surveys and questionnaires), group methods (workshops and user panels), cognitive methods (protocol analysis, laddering and cognitive walkthroughs), and contextual inquiry [11][1]. Efforts have been made to provide medical device companies with good practices for requirements elicitation. Cysneiros [6] presents a series of recommendations on which methods to use or to avoid and how the use of these methods can be affected by the conditions of medical environments. For instance, videoing can have legal, ethical and privacy-related consequences, and protocol analysis can be inconvenient when doctors are attending acute situations. Alexander et al. [5] present a framework for requirements elicitation and good practice, and Jalote-Parmar et al. [7] present a template-based observation technique developed to elicit requirements for the development of information systems to support surgeons. However, none of these works specifically addresses the assessment of the relevance of the knowledge captured, the matching of elicitation methods to the type of knowledge to be elicited, or the modelling and representation of this knowledge. This paper intends to contribute to these areas.

2.3 Knowledge Categories

Understanding and distinguishing the different types of knowledge from a user environment is critical for the success of the elicitation process. Acknowledging the existence of explicit and tacit knowledge, and the subdivision into Blackler's knowledge types, is an important part of the elicitation process, but how this knowledge is integrated into actual methods is not entirely clear, nor easy. The elicitation process can be optimised by structuring it with a view to handling knowledge gathering in accordance with Nonaka's knowledge spiral [11], where each transition is analysed and structured to ensure the quality of the information gathered. By analysing each transition of knowledge, e.g. tacit to tacit, the selection or even creation of methods can lead to a narrower focus and thus an optimised use of resources. By focusing only on the relevant information, a more effective and targeted research process is performed, i.e. an optimisation of the visits to the user environment. Blackler's categories of knowledge [2] are applicable to the research process as a more precise description of certain knowledge areas; they are part of the basis that supports the structure of a framework. As mentioned above, the awareness of the types of knowledge desired is reflected in the selection and structure of the elicitation methods. The methods can be selected or even created to meet the requirements for gathering a specified type of knowledge. Three knowledge types that are central to consider in development are procedural knowledge, domain knowledge and socio-technical knowledge. Procedural and domain knowledge are significant when a particular working situation is to be understood, whereas socio-technical knowledge is central when a broader perspective is needed. They are not expected to be equally important for all kinds of development projects; it depends on the type of organisation, the nature of the work, the nature of the people, etc. Blackler mentions different kinds of organisations in which different kinds of knowledge are more dominant and hence more important for the focus of attention. It is necessary for the researchers and designers to understand the basics of the behavioural and perceptual patterns of the user, with the purpose of designing the product in the right way. Apart from using knowledge types to understand users, relating to theory on human cognition, cognitive aspects and skills enables the designers to understand the users' perception of a product and other objects in their everyday work situation.

3 METHODOLOGY

The research carried out to elicit the user requirements in such a complex environment as the medico-technical field represents had to be carefully designed, in order to gain deep insight into actual user and usage experience, in a professional field bound by a series of legal, ethical and practical limitations. Our research method was therefore constructed to include a series of interviews in companies and case studies in hospitals, which were structured, informed and reflected on by the selection and study of literature relevant to the research. The main elements of our methodology are described in the following.

3.1 Interviews

One of the aims of carrying out this piece of research was to contribute insight and structured input to the actual development of improved medico-technical devices, both in terms of concrete recommendations for device improvements and (most importantly) in terms of methodological organisation for the product development methodologies themselves. We therefore placed a large focus on understanding how medico-technical companies develop their products today, including how they currently gain and organise their insight into user requirements in the product development process. This research was carried out by interviewing employees of five medico-technical equipment producing companies. The companies interviewed were chosen on the basis of their product lines, which were related to use at hospitals. The participating companies all had more than 300 employees and could hence be classed as large companies, according to the EU definition. All interviews took place at the companies, and all but one interview included several informants, each of whom represented different professions, but all of whom were employed in either the Marketing or R&D departments of their respective companies. The professions represented in the interviews included nurses, designers, pharmacists and engineers at master's and PhD level (mechanical, electronic, bio, materials and production). All informants were either project leaders or project team members, but not managers at the strategic level in the company.

3.2 Case Study Research

On the receiving end of the product development chain in the medico-technical field, the users are largely hospital surgeons and, to a lesser extent, nurses and related staff. It is obviously extremely important to ensure that the voice-of-the-customer is heard when eliciting user requirements. But equally important is to be able to observe the actions-of-the-customer, the experiences-of-the-customer and a range of other non-explicit related activities that cannot be captured through merely interviewing the users. We therefore chose to carry out a set of case studies in hospitals. By carefully balancing and structuring a set of inquiry methods, inspired by Yin [17], we carried out two initial case studies in two hospitals in Copenhagen, at the beginning of the research. These
Blackler mentions different kinds of organisations in which different kinds of knowledge are more dominant and hence more important as a focus of attention. It is necessary for researchers and designers to understand the basics of the behavioural and perceptive patterns of the user in order to design the product in the right way. Apart from using knowledge types to understand users, relating to theory on human cognition, cognitive aspects and skills enables designers to understand the users' perception of a product and of other objects in their everyday work situation.

3 METHODOLOGY
The research carried out to elicit user requirements in an environment as complex as the medico-technical field had to be carefully designed in order to gain deep insight into actual user and usage experience, in a professional field bound by a series of legal, ethical and practical limitations. Our research method was therefore constructed to include a series of interviews in companies and case studies in hospitals, which were structured, informed and reflected on through the selection and study of literature relevant to the research. The main elements of our methodology are described in the following.

3.1 Interviews
One of the aims of carrying out this piece of research was to contribute insight and structured input to the actual development of improved medico-technical devices – both in terms of concrete recommendations for device improvements and (most importantly) in terms of methodological organisation of the product development methodologies themselves. We therefore placed a strong focus on understanding how medico-technical companies develop their products today – including how they currently gain and organise their insight into user requirements in the product development process. This research was carried out by interviewing employees of five medico-technical equipment producing companies. The companies were chosen on the basis of their product lines, which were related to use at hospitals. The participating companies all had more than 300 employees and could hence be classed as large companies according to the EU definition. All interviews took place at the companies, and all but one interview included several informants, each of whom represented different professions – but all of whom were employed in either the marketing or R&D departments of their respective companies. The professions represented in the interviews included nurses, designers, pharmacists and engineers at Master's and PhD level (mechanical, electronic, bio-, materials and production engineering). All informants were either project leaders or project team members, but not managers at the strategic level of the company.

3.2 Case Study Research
On the receiving end of the product development chain in the medico-technical field, the users are largely hospital surgeons and, to a lesser extent, nurses and related staff. It is obviously extremely important to ensure that the voice-of-the-customer is heard when eliciting user requirements. But it is equally important to be able to observe the actions-of-the-customer, the experiences-of-the-customer and a range of other non-explicit related activities that cannot be captured through merely interviewing the users. We therefore chose to carry out a set of case studies in hospitals. By carefully balancing and structuring a set of inquiry methods, inspired by Yin [17], we carried out two initial case studies in two hospitals in Copenhagen at the beginning of the research. These


case studies – which were constructed from methods such as observations, interviews, discourse analysis and document analysis – were used initially to familiarise us with the hospital environment and the use environment of the medico-technical tools in live operation situations. Later in the research, cases were also used to test the framework developed for the communication of surgeons' experiences.

3.3 Literature study
The literature study carried out for this research prepared the researchers for the topics encountered during the study (e.g. during interviews and observations) and for gaining insight into specific topics related to the problem focus (e.g. regulations in the design of medical devices, outcome-driven innovation, knowledge types, etc.). The literature study is not the topic of this paper but is documented in detail in [13].

3.4 Limitations in the medico-technical field
It is worth mentioning that a series of limitations surround the medico-technical field. Limitations regarding company confidentiality, patient confidentiality, ethical codes of conduct, access to the main stakeholders (surgeons) during the usage situation (the operation), and the sheer complexity of the field were very apparent from the start of the study. These limitations set clear boundaries for the planning, execution and reporting of our research; nevertheless, we managed to create a generic and usable set of observations and recommendations for the improvement of product development in the medico-technical field.

4 DIAGNOSIS OF CURRENT PRACTICE: RESULTS FROM INTERVIEWS
From the semi-structured interviews carried out in the five companies it was possible to gain both specific and generic overviews of how the companies elicit user requirements in their product development processes. A summary of the findings from the interview informants follows.
The companies differ in their approach to developing new products. They appear to be skilled in generating and handling ideas but experience difficulties in analysing the use-context of the new product. This is expressed in their reported difficulty in finding the right methods and processes for investigating the requirements that users raise for the products. The results of the company analysis show that ongoing work is performed to optimise the process of understanding the user-environment and thus to identify the right requirements and needs. New approaches have been implemented to identify opportunities and thereby needs, e.g. Outcome Driven Innovation (ODI) [16], but the subsequent elicitation of requirements is still problematic, as it is unstructured and affected by the preferences of the current project leader. Companies are experienced in methods for research in user environments, but find it difficult to assess which ones are the strongest and/or most appropriate; thus there is a need for a well-structured requirement elicitation process in these complex user environments. Regulations, complexity and communication through sales departments were anticipated to rank among the reasons for the difficulties in elicitation. The complexity of the environment was not explicitly mentioned as a cause of difficulties (the word was avoided by the researchers during the interviews), but there was a recognition of the difficulty of eliciting requirements in medical environments and operating theatres. This was reflected in the constant changes in the companies' requirement elicitation


processes. Communication through the sales department is problematic and often insufficient, as it does not convey sufficient or reliable insight back to the designers. Four of the five companies interviewed are changing their requirement elicitation processes to include larger and more multidisciplinary teams in their marketing and R&D departments.
In conclusion, the companies use a wide range of methods, primarily for identifying opportunities and for field research in the user environment. The companies' follow-up on the acquired data is, however, less systematic and reported to be challenging to handle. None of the companies can present an overview or sequence of activities in the elicitation process, and the lack of such a structure is expressed as a shortcoming. Due to this, the focus of improvement in the elicitation process was found, in all five cases, to be in the stage of opportunity analysis in their respective Front End of Innovation (FEI) processes [9]. This means any new framework should necessarily focus on the process of capturing, representing and communicating user experience in the form of requirements and product features. Project motivation and conceptualisation are both touched upon during the case work, but neither of them is directly targeted by the framework developed. However, conceptualisation cannot be completely separated from the process of eliciting requirements, as ideas for a given issue emerge during the elicitation process. Methods for conceptualisation or project motivation are not addressed directly in the building of the framework.

5 FRAMEWORK
In order to elicit requirements based on user experience drawn from complex user-environments, a framework was constructed. The aim was that the framework should aid the selection of methods and structure the requirement elicitation process. The framework consists of three parts:
• Part I, the knowledge-map, illustrates user experience in terms of different knowledge types and enables the user of the framework to map methods against the knowledge they target.
• Part II, the methods, are the practical means by which knowledge is captured, represented or communicated.
• Part III, the process-model, drives the process of eliciting requirements through the stages of capture, represent and communicate. The process is inspired by Nonaka's work on organisational learning [10][11].

5.1 Part I: Knowledge Map
Knowledge can be classified, for instance, according to whether it contributes to the ways of doing something (procedural knowledge) or whether it provides the background for making decisions (domain knowledge) [4]. Similarly, it can be either tacit or explicit [14]. These four categories allow the researcher to classify the immediate availability and the abstraction level of the knowledge to be captured. An additional classification provided by Blackler [2] gives an indication of where the knowledge resides and provides a more detailed description of each area in the knowledge-map. Additional knowledge related to values, relations and culture is represented as socio-technical knowledge [3]. User experience is thought to reside in the overlapping areas of these knowledge categories (see Figure 1) and is the user's ability to plan and perform a task, make decisions with incomplete or fuzzy information, cope with uncertainty and take action when unexpected events occur.
In order to create an overview of the respective knowledge types and their characteristics, a knowledge map was created. In this map, Blackler's categories or images of knowledge are positioned in the span between tacit and explicit knowledge. These categories or images are called embodied, embrained, encultured, embedded and encoded, as depicted in Figure 1.

Figure 1 Knowledge Map

Embodied knowledge is embedded in the body of an individual through his/her actions and in social systems. Take, for instance, the example of riding a bike: the individual cannot explain exactly how it is done; it has to do with practical experience and physical cues when the bike is at hand. Embodied knowledge is largely tacit to the individual and is hence difficult to encode and share with others.
Embrained knowledge is dependent on the individual's conceptual and cognitive ability. It is "intellectual" knowledge. New knowledge is gained from higher levels of abstract thinking, such as understanding complex causations, e.g. deriving specific requirements in product design on the basis of insight into a complex user environment.
Encultured knowledge is socially constructed and deals with the process of achieving shared understanding. It describes the norms and values in social structures and is therefore dependent on language and other means of interacting. As encultured knowledge is socially constructed, it can develop or change over time. An example of this is the language in a development team; it will gradually change as the team collects knowledge on the project's specified area. This type of knowledge is acquired through dialogue and mutual experiences, but it also consists of tacit knowledge, which is not formulated.
Embedded knowledge is closely related to encultured knowledge, since embedded knowledge is also created in and related to social structures, but it can more easily be analysed from more formalised structures such as technical routines, procedures, processes and technologies. This stands in contrast to encultured knowledge, which can only be analysed from the social structures between individuals. Hence, embedded knowledge is more explicit than encultured knowledge.
Encoded knowledge is explicit and can be found in literature and other media, and can be passed on as codified information, e.g. articles, manuals and the Internet. Encoded knowledge is de-contextualised and limited to a selected representation of the knowledge presented.
The different types of knowledge categorised and described by Blackler can be used to analyse the situation in which the companies research and gather information as input to their development processes. An understanding of knowledge types and of the circumstances that surround knowledge is crucial when gathering information. Each of the knowledge types demands the use of specific methods to elicit information, and careful consideration when processing the knowledge, to avoid misinterpretations and the loss of important elements in the understanding of the environment being researched.

5.2 Part II: Mapping Methods
The knowledge map was subsequently used in the study to classify knowledge capturing methods according to their suitability for targeting particular knowledge types, as relevant to the designers. Knowledge capturing requires different methods according to which type of knowledge is to be targeted. A method is to be perceived as a technique for capturing, representing or communicating knowledge. Figure 2 illustrates how the methods used for capturing in the casework were mapped. Each phase in the requirement elicitation process needs to have a separate knowledge-map, so that the methods mapped in it are consistent in purpose. Methods are selected from the knowledge-map depending on the kind of knowledge desired; a minimal sketch of such a mapping is given after Figure 2.

Figure 2 The knowledge map with methods in capture phase
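To make the mapping concrete, a fragment of such a knowledge-map can be encoded as a plain data structure. The sketch below is our illustration, not part of the framework as published: the method-to-knowledge assignments follow the examples reported in Section 6.3, while the structure and the helper function are assumptions made for exposition.

```python
# Minimal sketch of a knowledge-map for one elicitation phase (capture),
# relating knowledge types to the methods that target them.
# Method assignments follow Section 6.3; the structure is illustrative.

CAPTURE_MAP = {
    "procedural (tacit)": ["template observation after surgical task analysis",
                           "semi-structured validation interview"],
    "embedded":           ["follow-the-actor", "disciplined attention"],
    "domain":             ["laddering", "surgical workflow analysis"],
    "socio-technical":    ["use cycle analysis (UCA)"],
}

def methods_for(knowledge_type: str, phase_map: dict) -> list:
    """Return the methods mapped against the requested knowledge type."""
    return phase_map.get(knowledge_type, [])

if __name__ == "__main__":
    for kt in CAPTURE_MAP:
        print(f"{kt}: {', '.join(methods_for(kt, CAPTURE_MAP))}")
```

In this reading, each phase of the elicitation process would hold its own map of this kind, so that the methods selected for capture, represent and communicate remain consistent in purpose.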

5.3 Part III: Process Model
The final element of the framework is the process model, constructed to inform the requirements elicitation process. The process model is structured into three stages: capture, represent and communicate. These stages are related to the first three stages of Nonaka's SECI (Socialisation, Externalisation, Combination, Internalisation) model [10][11]. Capture is the socialisation stage, in which new knowledge is captured from the user-environment. Represent is the externalisation of findings from tacit to explicit. Finally, the findings are combined through a communication stage, where the recipients of the communication are the developers.
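The loop structure of the process model can be sketched as follows; this is again our illustration under the assumption that one knowledge-map exists per stage (cf. Section 5.2), not tooling described by the authors.

```python
# Sketch of the iterative process-model loop: each stage draws its
# methods from a stage-specific knowledge-map, and the loop repeats
# until a concept is ready (the process converges as the project runs).

STAGES = ("capture", "represent", "communicate")  # SECI: S, E, C

def plan_elicitation(stage_maps: dict, knowledge_targets: list,
                     iterations: int = 2) -> list:
    """Plan methods per stage for each pass through the loop."""
    plan = []
    for n in range(iterations):
        for stage in STAGES:
            methods = [m for kt in knowledge_targets
                       for m in stage_maps[stage].get(kt, [])]
            plan.append((n, stage, methods))
    return plan

# usage: plan_elicitation({"capture": CAPTURE_MAP, ...}, ["domain"])
```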



Figure 3 Process Model and examples of knowledge maps

The intention of the process model is that product developers will use the knowledge obtained to articulate requirements and come up with ideas and concepts. The process is iterative, repeating once a concept is ready, and becomes convergent as a project progresses; thus the model is depicted as a circular loop. The methods selected through use of the knowledge-map are entered into the process and planned in an appropriate sequential order. Figure 3 illustrates the process-model in combination with the knowledge-map.

6 CASE STUDY

6.1 MIS as a complex environment
The environment of a Minimally Invasive Surgery (MIS) operating room was chosen as a case study, as it complies with several characteristics of complex environments, namely time-critical decision-making, low tolerance for errors, team collaboration, highly specialised knowledge, highly specialised skills and possible unforeseeable events which can have catastrophic consequences. Furthermore, a significant amount of domain and procedural knowledge is distributed amongst many actors, it is difficult to gain access to this environment, and there are significant ethical issues in collecting data.

Figure 4 The MIS operating theatre as a complex environment

6.2 Case scope and boundaries
The scope of the case was defined as increasing the safety of surgeons performing minimally invasive surgery by improving their work environment. An obvious element to improve is the risk of discomfort and eventual occupational injuries caused by poor instrument design (see for example Figure 5).

Figure 5 Example of bad ergonomic postures and risk areas to the surgeon


No initial knowledge of minimally invasive surgery
The domain of minimally invasive surgery was relatively new to the researchers, apart from knowledge acquired in other concept development projects within the medical field. All insight into the domain had to be gained from a limited number of visits to the operating theatres. The researchers did not know beforehand which types of knowledge would be the most relevant to a product development project. To prepare the process of eliciting requirements, preliminary research was conducted in the form of brief visits to operating theatres accompanied by a doctor who explained the setup. After the preliminary research, the aim was to make an incremental improvement, in line with the purpose of evaluating the framework. Knowledge from the preliminary research was also used to select and prepare the methods applied in the requirement elicitation process.

Methods not mapped when initiating the process of eliciting requirements
The framework was developed prior to the observations in the case study, but various parts were adjusted as a result of the learning process of capturing knowledge from the user-environment. As part of the development of the framework, the methods used were not positioned in the framework before they were applied. The outcome of the methods in terms of knowledge types was predicted based on experience of using the methods in other projects. Not all the methods had the predicted outcome, and an adjustment to the selection of methods was needed to target specific knowledge, partially due to the limitations and complexity of the user-environment.

6.3 Using the framework
Capturing
Capturing is the most challenging of all phases, as it is necessary to identify which knowledge is relevant to capture as well as which methods are best suited for capturing such knowledge. In this phase, the knowledge map proved useful in aiding the assessment of the relevance of knowledge and the selection of methods. For instance, procedural tacit knowledge is best captured by using templates prepared in advance after a surgical task analysis [15], and then validated after the surgery through semi-structured interviews. Another example is embedded knowledge, e.g. knowledge made explicit by the products' use-cues, which is best captured using methods such as follow-the-actor or disciplined attention.

Representing
Finding suitable ways of representing the captured knowledge is important, as it not only allows communicating it later to the product development team but also allows for the institutionalisation of this knowledge, facilitating its later re-use in other projects. For instance, domain knowledge is best represented using the outcomes of laddering, (surgical) workflow analysis and functional analysis, whereas socio-technical knowledge is best represented using Use Cycle Analysis (UCA) [13].

Communicating
Communicating this vast amount of captured knowledge requires significant effort and resources. Some knowledge can be communicated using reports, video clips and posters, but some, such as embodied knowledge, requires an empathic approach. To test different methods, a group of novice designers were invited to a workshop where they were given the assignment of redesigning a handle for a laparoscopic grasper. Knowledge was communicated by means of UCA and ergonomics posters, pictures of surgeons and a home-made laparoscopic simulator, LIS (see Figure 6, and note that the posture of the participants is similar to the posture of the surgeons in Figure 5). The use of this simulator was the single most powerful tool to convey the difficulty of operating the laparoscopic instruments and the ergonomic problems associated with it.

Validating with surgeons
The validation was carried out by presenting a surgeon with the requirements generated, together with low-resolution functional models and a high-resolution mock-up of the handle (see Figure 8). Each of the requirements generated was discussed with the surgeon, who in turn explained why it was relevant or irrelevant. 75% of all requirements generated were deemed relevant by the surgeon interviewed. The irrelevant requirements were explained by the researchers not fully understanding some of the surgical tasks, or by the fact that other (better) instruments could be used for the foreseen uses of this instrument.

Figure 6 Example of designers trying LIS, a laparoscopic surgery simulator

Figure 7 Roadmap of concepts generated

Requirements Generated
The ability to trace back the origin of requirements is deemed important in the medical design industry. The requirements generated during the workshop were recorded and, with the help of the process model, we have been able to trace all the requirements generated back to the individual stages, the methods used and the raw data collected. Of course, many new requirements appeared during concept generation, as concepts impose new conditions that need to be taken into consideration. A roadmap for the requirements can be seen in Figure 7. Note that most of the requirements come from the workflow analysis and from the empathic design exercise done with the simulator. Table 1 presents a summary of the type, proportion and origin of the requirements generated during the workshop.

Concepts Generated
The quality of the concepts generated during the workshop was less important to the research than testing whether the type, format and amount of information provided to the participants were adequate for generating relevant requirements. In this paper we only present the final chosen concept, which was used in the validation loop with the surgeons at the hospital. The concepts focus on reducing the strain on the surgeons' wrists and elbows and on allowing a higher number of degrees of freedom at the tip of the tool. See Figure 8 and [13].

Figure 8 Example of concepts, compared to existing instruments

7 DISCUSSION

7.1 Framework
The framework is established as a synthesis between company findings and theory. Findings from the companies helped to establish the focus, as well as an understanding of the (company) context of the new framework. The theory is the major source for the framework itself: it is constructed on a knowledge-based approach, and the idea is that knowledge of the object environment is essentially what is needed; thus elicitation is set up as a process of organisational learning [10][11]. Knowledge-related terms such as information and experience cannot be disregarded in spite of the knowledge approach, but the intent is that the comprehensive mapping of knowledge creates a picture complete enough for researchers to map out their methods in the knowledge map.


Origin       | Type                                                                                                                                    | Proportion
Simulation   | Requirements linked to the experience of the participants performing their assignments in the simulation, as well as toying with the equipment. | 20/42 = 47.6%
Posters/task | Requirements derived from the background knowledge of the domain and users, presented on posters; some were explicitly formulated in the design task. | 13/42 = 31.0%
Other        | Requirements linked to ideas discussed by the participants, general considerations of a designer, artifacts placed for inspiration or other things. | 9/42 = 21.4%

Table 1 Origin and proportion of requirements generated during the workshop. A total of 42 requirements were generated.

The three parts that form the framework each serve a purpose: the knowledge-map supports the "strategic" choice of methods, the process-model implements the learning process, and the methods are the practical part, specialised for the selected purpose. Complementing these three parts are the preconditions, which are important considerations prior to elicitation. With regard to complex environments, the idea is that complexity comes from many sources, especially the challenge of understanding different knowledge levels. It must be faced with different kinds of methods, each of which addresses a relevant subject. The framework is meant to deal with this by mapping out knowledge, identifying relevant focus areas and using methods suitable for the knowledge area to be elicited from.
The analysis of the interviews with companies revealed that the most sought-after models or methods are those which address opportunity identification in the Front End of Innovation, and eliciting requirements in general. Although the companies did not explicitly use the phrase 'complex environment', it was evident to the researchers that the shortcomings in their approach were due to the difficulties of capturing and understanding knowledge in medical environments, legal and ethical aspects, and resource limitations. The framework is a suitable solution for eliciting requirements by systematically gaining such knowledge. The expected future implications of regulation are only partly handled in the framework: documentation of the process can be achieved, though exact tracking of requirements is not yet possible. The framework might also be extended to support more radical innovation processes.
We argue that the framework has a number of strengths:
• It is well adapted to face diverse challenges in complex environments.
• It gives a structure to an otherwise "fuzzy" process, which is hard to control. The structure includes important phases, each of which contributes to the company in a different way.
• It can help to create a good outline of possible means of eliciting the necessary knowledge.
• It gives an overview of the knowledge levels a user addresses when activities are performed.
• It is easily visualised by means of the knowledge-map and the process-model.
• It is based on the theory of organisational learning, combined with the experience of companies, which can accelerate a company's ability to increase its knowledge pool.
The framework has weaknesses too:
• The abstract notions of knowledge types are an Achilles heel of the framework. They are fundamental to it, but also a weakness, as they require some education to understand and use.
• The value of the framework is limited as long as the number of methods is not increased and the relations between them are not explored.
• The framework has thus far been created and verified only within the limited boundaries of the case described in this paper. Our future work will strengthen this situation by applying the framework in the participating companies. It is expected to be a challenge to assess when enough knowledge has been gained and it is time to move on from the requirement elicitation process.

7.2 Recommendations about how to best use resources
For companies willing to include the framework in their development processes (or researchers intending to apply it), the following recommendations apply when considering company needs. If the company has little experience with the use of methods, start by looking into the methods proposed in this project to strengthen parts of the process; 'template observation', graphical representations and simulations in particular have proven effective in this project. If the company is well acquainted with the use of a diverse palette of elicitation methods, focus on structure. Improvement of structure with the aid of the framework starts by using the knowledge-map: consider the placement of the focus area, map the methods on the map, and see whether they correspond. Do this for all the phases, to identify areas in which the company is unacquainted with good methods. Find appropriate methods to fill in the gaps, and consider how the methods would link together. Use the process-model as a systematic approach; try not to take any shortcuts, but work through each phase to become acquainted with it. This makes the process more controlled. And please communicate your experiences to the authors!

8 CONCLUSIONS
Although the interviewed companies already use current participatory design and requirements elicitation techniques, there is a lack of understanding of which methods are best suited to capture the different kinds of knowledge, which methods are most fitting for modelling or representing experience, and how to transform requirements into product features. The introduction in this project of a focus on knowledge types, and the subsequent mapping of methods and approaches to these, has led to a promising and novel way of eliciting many types of user requirement that would not have been captured through traditional or generic observation techniques.

The knowledge map has been tested and proved to be a useful tool for understanding surgical experience, for mapping the surgeons' knowledge and the contextual aspects of surgical procedures, and for selecting the best capturing, modelling, representing and communicating techniques. Additionally, companies have expressed that the process and knowledge maps can be used as a tool for documenting and communicating the experience and requirements elicitation processes internally to other stakeholders. The approach to user-centred design adopted in this project is broader than simply listening to the voice-of-the-customer, and involved a socio-technical approach to practice-oriented research, participatory design and organisational learning.

9 REFERENCES
[1] Beyer, H.; Holtzblatt, K. (1998) Contextual Design: Defining Customer-Centered Systems. Morgan Kaufmann, San Francisco.
[2] Blackler, F. (1995) Knowledge, Knowledge Work and Organisations: An Overview and Interpretation. Organisation Studies.
[3] Bruun Jensen, C.; Lauritzen, P.; Olesen, F. (2007) Introduktion til STS. Hans Reitzels Forlag.
[4] Christiaans, H. (1992) Creativity in Design: The Role of Domain Knowledge in Designing. Lemma, Utrecht.
[5] Alexander, K.; Clarkson, J.; Bishop, D.; Fox, S. (2002) Good Design Practice for Medical Devices and Equipment: Requirements Capture. Engineering Design Centre, Cambridge University.
[6] Cysneiros, L.M. (2008) Requirements Engineering in the Health Care Domain. In: Proceedings of the IEEE International Conference on Requirements Engineering. IEEE.
[7] Jalote-Parmar, A.; Badke-Schaub, P. (2008) Workflow Integration Matrix: a framework to support the development of surgical information systems. Design Studies, 29(4), pp. 315-424.
[8] Kanis, H. (2003) Research in Design: Situated Individuality versus Summative Analysis. IEA 2003, pp. 1-4.
[9] Koen, P.; Ajamian, G.M.; Boyce, S.; Clamen, A.; Fisher, E.; Fountoulakis, S.; Johnson, A.; Puri, P.; Seibert, R. (2002) In: Belliveau, P.; Griffin, A.; Somermeyer, S. (Eds.), The PDMA ToolBook 1 for New Product Development. Wiley.
[10] Nonaka, I.; Toyama, R. (2003) The knowledge-creating theory revisited: knowledge creation as a synergising process. Knowledge Management Research & Practice, 1, 2-10.
[11] Nonaka, I.; Takeuchi, H. (1995) The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press.
[12] Nuseibeh, B.; Easterbrook, S. (2000) Requirements Engineering: A Roadmap. In: Proceedings of the Conference on The Future of Software Engineering, Limerick, Ireland.
[13] Pedersen, S.M.; Nielsen, T.A. (2008) Eliciting Requirements from a Complex Environment: A Framework. M.Sc. Thesis, Department of Management Engineering, Technical University of Denmark.
[14] Polanyi, M. (1966) The Tacit Dimension. Peter Smith, Gloucester, MA; Christensen, K.S. (2007) 'Perspektiv(er) på knowledge management', in Knowledge Management, Børsens Ledelseshåndbøger, Børsens Forum A/S.
[15] Sarker, S.; Chang, A.; Albrani, T.; Vincent, C. (2008) Constructing hierarchical task analysis in surgery. Surgical Endoscopy, 22, 107-111.
[16] Strategyn (2008) Outcome Driven Innovation. www.strategyn.com [accessed 01.01.2008].
[17] Yin, R. (2002) Case Study Research: Design and Methods, 3rd Edition. Applied Social Research Methods Series, Vol. 5. Sage Publications.


Equating Business Value of Innovative Product Ideas

S. Brad
Research Lab of Competitive Engineering in Design and Development, Technical University of Cluj-Napoca, Ctin Daicoviciu 15, 400020 Cluj-Napoca, Romania
[email protected]

Abstract
Investing in new product development crucially depends on the capacity to estimate the business value of the product idea in the very early phases of the development process. Several empirical formulations have been proposed to date in this direction, but they lack scientific tools to relate the business potential of the new product to the key decision-making factors. Strategic and general dimensional analyses are applied to define the relationship between the product business value and the influential factors. A key finding is the strong non-linear relationship between the business value of the new product and market acceptance.

Keywords: New product business value, product innovation, dimensional analysis

1 INTRODUCTION
High-tech industries are key driving forces for economic development at regional and national levels. This justifies the interest of governments and venture capitalists in supporting the foundation and development of businesses operating in the high-tech sector. High-tech companies are those engaged in the design, development and introduction of new products and/or innovative manufacturing processes through the systematic application of scientific and technological knowledge [1]. Various studies have revealed that the high-tech industry is characterized by market uncertainty, technological uncertainty and competitive volatility [1], [2], [3]. High levels of uncertainty and volatility generate high business risks. Because of these aspects, and because the cost of developing the prototype of a high-tech product is very high relative to the costs of reproduction, the conception and development of new high-tech products require a careful analysis before obtaining financial support, whether from governments (e.g. in the form of spin-offs) or from venture funds [3], [4].

2 ABOUT PRODUCT BUSINESS VALUE

2.1 Components of business value
In management theory, business value is a concept encompassing the various forms of "value" that determine the well-being of an organization in the long run [5]. This means the concept of value is expanded beyond economic value (economic profit) to include other forms of value such as customer value, supplier value, employee value, partnership value, managerial value and societal value. These categories of value have an indirect effect on the economic value, even if they are not directly measured in monetary terms. Viewing business value from this broader perspective, methodologies like balanced scorecards appear to be very popular for measuring and managing business value [1], [2], [6]. To date, there are no well-grounded theories about how the various categories of business value are related to each other and how they might contribute to the organization's long-term success [7]. A promising approach is the business model, but countless


opinions indicate that a well-formalized model has not yet been proposed.

2.2 Criticisms of business value
There is no consensus on the meaning of business value, either in theory or in practice, nor on its role in effective decision-making, as long as measuring the economic value is considered enough to guide the decision-making process [1], [5], [6], [8]. While it would be very desirable to reduce all categories of business value to a single economic measure, many practitioners and theorists believe this is either not feasible or theoretically impossible [9], [10]. Therefore, advocates of business value consider that the best approach is to measure and manage multiple forms of value as they apply to each issue under consideration [5], [6], [9], [10].

2.3 Assessing the business value of new product ideas
The present paper focuses on a specific component of business value – the one related to the market value of a new product. From this perspective, the ratio between the "benefits" associated with the extended product (product plus related services plus related relations) and the "sacrifices" (monetary and non-monetary) has to be determined. This actually leads towards measuring the effects of both the driving and the constraining factors acting upon the product (seen as a system) and determining to which side the balance is inclined.

3 THE PROBLEM
Various studies have shown that new ventures have a high rate of failure [4]. Statistics highlight that only a small percentage of these initiatives (around 30%) succeed in surviving more than three years [3], [4]. There are various causes that keep the success rate at this level, but a major one is a lack of understanding of the complex nature of innovation by the people who initiate innovative businesses. In most cases, the entrepreneurs are mainly focused on technological innovation (product innovation), overlooking the key roles which some other business aspects

play upon the commercial success [2]. Usually, the initiators of high-tech innovative companies are people with very good technical and creative skills, but with poor managerial and business aptitudes and experience – this makes them unable to "see the forest for the trees". To get financial support from venture capitalists or business angels, entrepreneurs need to demonstrate the market potential of their innovative, "product-oriented" business idea [11]. In this respect, several approaches have been tried for quantifying the market potential of a given innovative product idea. These approaches are mainly based on empirical formulations, which use criteria for identifying the most effective mechanisms to measure the performance of product introduction [3], [11], [12]. However, as other research demonstrates [13], [14], [15], [16], the market impact of products also depends on factors that are not well quantified by the current empirical approaches for assessing a product's business value (e.g. technical factors – design and manufacturing – as well as contextual and customer-related factors). Therefore, efforts are required to set up tools capable of linking, on a scientific basis, the market potential of a new product with the key decision-making factors.
The literature in the field does not reveal much work on mathematical models for quantifying the market value of a product-related innovation. So far, an unconventional model for calculating the value of innovation by association with Albert Einstein's equation of energy-mass equivalence (E = mc², where E is the energy, m the mass and c the speed of light, 3·10⁸ m/s) has been reported in [17]. Adapting Einstein's equation for valuing innovation, one gets:

V = H · (K·C·S / 3)² ,    (1)

where V is the value of innovation, H the resources, K the knowledge level, C the capacity to combine knowledge into feasible solutions, and S the part of an ideal solution that can be implemented into practice [17]. However, relationship (1) is somewhat doubtful for practitioners, as it is not well directed towards the most common decision-making factors used by investors in assessing new technology ventures. In this respect, an improved model for assessing the market potential of an innovative product idea is introduced in this paper. It attempts to define a mathematical relationship between the value of innovation and the major factors influencing the commercial success of a technology-based venture. The proposed model intends to be beneficial for developers, entrepreneurs and investors alike in estimating the potential of a given high-tech business idea from the very early phases of its life-cycle, thus minimizing failure rates and preventing investment in business initiatives with low market potential. The next sections of the paper describe the methodology applied to set up the model, the model itself, and a case study exemplifying its use in practice.

4 THE PROPOSED MODEL

4.1 The methodology
An unconventional approach, namely general dimensional analysis [18], is proposed for establishing the mathematical relationship between the value of innovation and the major factors influencing the commercial success of an innovative product. Prior to the application of this formalism, several preparatory actions are required. Thus, a five-step algorithm is considered for defining the model which quantifies the market potential of an innovative product idea. These steps are:
Step 1: Determine the key influential factors for the commercial success of an innovative product idea.
Step 2: Establish the mechanical equivalence both for the influential factors and for the market potential of the innovative product.
Step 3: Visualize the units of measurement for each mechanical equivalent.
Step 4: Apply general dimensional analysis to elaborate the mathematical model.
Step 5: Normalize the variables of the mathematical model to harmonize the units of measurement.

4.2 Identifying the major influential factors
In the algorithm of section 4.1, the first step requires determining the key influential factors. This is a major challenge, as there are many opinions around this topic [1], [2], [3], [4], [5], [6], [7], etc. Many questions arise in this respect: Which are all these factors? Which of them are essential? What is the reference system for measuring essentialness? And so on. To break this vicious circle, an unconventional approach has been considered. It emerges from the philosophy of complex systems, which says that there is no single optimal solution to a complex problem, but several possible solutions [19]. The approach consists of relating all known influential factors to a set of critical business objectives. Within the exhaustive set of influential factors there is a minority of factors that mainly affect the whole set of business objectives. These are the major influential factors; in other words, they carry most of the "value weight" contributing to the business success of the new product idea. By screening the representative literature in the field, a set of eight widely accepted business objectives has been worked out and used for analysing the complex system concerning the valuation of the market potential of innovative product ideas. The following business objectives have been finally considered as comprehensive in relation to this subject:
Objective 1: Achieving as high as possible financial performance.
Objective 2: Leading to as low as possible technical risks.
Objective 3: Leading to as low as possible commercial risks.
Objective 4: Achieving as high as possible competitive advantages.
Objective 5: Meeting market needs.
Objective 6: Achieving on-time product launch.
Objective 7: Leading to a long-run product.
Objective 8: Leading to product extensions.
These objectives might be ranked, but in a dynamic business environment, and in relation to various business contexts, a ranking of objectives has little relevance. Therefore, a balanced approach should consider all business objectives as being of the same significance; in fact, ranking business objectives somewhat exceeds the scope of this analysis. In order to identify potential influential factors, a comprehensive literature in the field has to be examined,


too. The survey led to over 50 possible influential factors. Framing the business objectives and the possible influential factors into a strategic analysis matrix makes possible the identification of the key influential factors. This means placing the business objectives along the matrix's rows and the possible influential factors along the matrix's columns, and afterwards filling the intersection box of each pair "business objective – influential factor" with a value showing the level of relationship. Transferring know-how from the field of quality planning to this problem, the following levels of relationship can be used: 0 – no relationship; 1 – weak or possible relationship; 3 – medium relationship; 9 – strong relationship; 27 – very strong (critical) relationship. An influential factor belongs to the sub-set of major influential factors if it has a strong or very strong relationship with more than 50% of the business objectives. Following this approach, nine major influential factors have been extracted. They are shown in Table 1, and their relationships with the business objectives are shown in Table 2. Symbols (see Table 1) have been attached to the major influential factors for more convenient use in the forthcoming mathematical formulation. As Table 2 reveals, all nine factors have strong or very strong relationships with the majority of the business objectives. In theory, every influential factor could be taken into account for modelling the business value of new product ideas.

No. | Factor                                                      | Symbol
1   | Emergency for satisfying a certain market need              | U
2   | Market size                                                 | M
3   | Financial power of the target market                        | P
4   | Difficulty to copy the idea by competitors                  | D
5   | Originality/novelty (opening a completely new market niche) | O
6   | Return on investment (investor; customer)                   | R
7   | Market elasticity                                           | E
8   | Market resistance to changes                                | I
9   | Effort required to put the idea into practice               | L

Table 1: Major influential factors.
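The screening rule above (a factor qualifies as major if it holds a strong or very strong relationship – level 9 or 27 – with more than 50% of the eight objectives) is mechanical enough to sketch in code. The ratings below are placeholders for illustration only; the paper's actual ratings are those of Table 2.

```python
# Illustrative screening of influential factors (hypothetical ratings;
# the paper's actual matrix is Table 2). Levels: 0, 1, 3, 9, 27.
STRONG = 9

# factor -> relationship level with each of the 8 business objectives
ratings = {
    "U": [27, 9, 27, 9, 27, 9, 3, 1],   # placeholder values
    "X": [3, 1, 0, 9, 3, 1, 0, 3],      # a factor that would be screened out
}

def is_major(levels, threshold=0.5):
    """Major factor: strong/very strong links with > 50% of objectives."""
    strong = sum(1 for v in levels if v >= STRONG)
    return strong > threshold * len(levels)

major = [f for f, levels in ratings.items() if is_major(levels)]
print(major)   # -> ['U']
```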

Table 2: The relationships between business objectives and major influential factors.

This approach would, however, lead both to complications in the mathematical formulation and to a less practical model (because of too many variables in the system). The idea is to consider the minor set of factors which brings the major influence into the system (the 80-20 rule).

4.3 Model elaboration
As step 2 of the algorithm requires (see section 4.1), dimensional analysis is applied for modelling the relationship between the system's variables [18]. In order to apply the formalism of dimensional analysis, equivalence with homogeneous physical entities must be established for the influential factors (e.g. mechanical entities). Table 3 reveals the physical equivalent of each influential factor. A new symbol is also added in Table 3, namely V, associated with the market value of the innovative product idea. The unit of measurement (U.M.) of each physical equivalent is also given in Table 3.

Symbol | Equivalent entity          | U.M.
V      | Energy                     | kg·m²·s⁻²
U      | Pressure                   | kg·m⁻¹·s⁻²
M      | Surface                    | m²
P      | Power                      | kg·m²·s⁻³
D      | Percussion (force impulse) | kg·m·s⁻¹
O      | Impulse                    | kg·m·s⁻¹
R      | Volume flow-rate           | m³·s⁻¹
E      | Elastic modulus            | kg·m⁻¹·s⁻²
I      | Inertial force             | kg·m·s⁻²
L      | Mechanical work            | kg·m²·s⁻²

Table 3: Equivalence with physical entities.

With this information available, the next step of the algorithm consists in formulating the system of equations according to the methodology of general dimensional analysis [18]. In this particular problem there are m* = 10 variables (V, U, M, P, D, …, L) and d* = 3 fundamental quantities (kg, m and s) (see Table 3). For the set m*, the distributed relationship is the following:

f1(V, I, L) = f2(U, M, P, D, O, R, E) .    (2)

The dimensional matrix of the m* variables, for the d* fundamental quantities, is:

       V   I   L   U   M   P   D   O   R   E
kg     1   1   1   1   0   1   1   1   0   1
m      2   1   2  -1   2   2   1   1   3  -1    (3)
s     -2  -2  -2  -2   0  -3  -1  -1  -1  -2

The monomial relation, with unknown exponents for all variables, is:

V^p · I^i · L^j = k · U^a · M^c · P^d · D^e · O^f · R^g · E^h .    (4)

In (4), k represents a constant; it should be determined for each market sector by experimental analysis. The system of linear algebraic equations expressing the dimensional homogeneity is presented in (5). As one can see, it is an indeterminate system of equations. In such cases, the rule of Diophantine systems of equations is applied: according to the theory of general dimensional analysis and the theorem of quantity exponent values, the rule for solving the system in (5) is to identify positive, integral and small values for the quantity exponents [18].

p + i + j = a + d + e + f + h
2p + i + 2j = -a + 2c + 2d + e + f + 3g - h    (5)
-2p - 2i - 2j = -2a - 3d - e - f - g - 2h
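The dimensional bookkeeping can be checked by machine. The short script below is our illustration, not part of the paper: using the dimension vectors of Table 3, it confirms that the exponent set solved for in the next paragraph renders both sides of (4) dimensionally homogeneous.

```python
# Sanity check: verify that p=1, i=3, j=1, a=c=d=e=f=g=h=1 satisfy the
# homogeneity system (5), i.e. that V*I^3*L and U*M*P*D*O*R*E carry the
# same physical dimensions.

# Dimension vectors (kg, m, s) taken from Table 3.
DIM = {
    "V": (1, 2, -2), "I": (1, 1, -2), "L": (1, 2, -2),
    "U": (1, -1, -2), "M": (0, 2, 0), "P": (1, 2, -3),
    "D": (1, 1, -1), "O": (1, 1, -1), "R": (0, 3, -1), "E": (1, -1, -2),
}

def dims(term: dict) -> tuple:
    """Total (kg, m, s) exponents of a product of powered variables."""
    return tuple(sum(DIM[v][axis] * exp for v, exp in term.items())
                 for axis in range(3))

lhs = dims({"V": 1, "I": 3, "L": 1})
rhs = dims({v: 1 for v in "UMPDORE"})
assert lhs == rhs == (5, 7, -10)   # both sides: kg^5 * m^7 * s^-10
print("dimensional homogeneity holds:", lhs)
```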

Solving the system of equations (5), the following results are obtained: p = 1; i = 3; j = 1; a = 1; c = 1; d = 1; e = 1; f = 1; g = 1; h = 1. This leads to the following relationship between the market value of an innovative product idea and the factors influencing the commercial success of the innovative business:

V = k · (U·M·P·D·O·R·E) / (I³·L) .    (6)

As relationship (6) reveals, the market resistance to change is the most critical factor for the success of the business. This is also borne out in practice: there are cases of breakthrough innovations which are so revolutionary that markets are not prepared to assimilate them. Therefore, a huge effort for educating the market is required, as well as considerable time. Hence, innovative companies run wide marketing campaigns well in advance of the new product launch, to familiarise the market with the upcoming product and to show its benefits.

4.4 Model normalization
For better use of model (6) in practice, some transformations are required. They stem from the fact that the variables have different units of measurement; the usual procedure in such cases is normalization. In order to normalize the right-side variables in equation (6), two cases have to be considered: a) variables that should be maximized; b) variables that should be minimized. For normalization, the so-called "utility" function is introduced. In the case of variables that have to be maximized, utility 0 is associated with every value below the limit value and utility 5 with every value that reaches or exceeds the target value; for values in between, the rule of three is applied. In the case of variables that have to be minimized, the rule is the opposite. Thus, variables U, M, P, D, O, R and E have to be maximized, whereas variables I and L have to be minimized. Considering a generic variable Σ that has to be maximized and denoting with ΣT the ideal value to be achieved (the target value), with ΣL the limit value (the lowest acceptable value) and with ΣE the estimated value, the normalized value ΣN of the variable Σ is calculated with the formula:

ΣN = 5·(ΣE - ΣL) / (ΣT - ΣL) .    (7)

For example, taking the variable M (market size) and assuming, for the sake of exemplification, the values MT = 1 000 000 units, ML = 100 000 units and ME = 600 000 units, the normalized value MN, calculated with formula (7), is 2.78 out of the maximum value 5. Considering a generic variable Ω that has to be minimized and denoting with ΩT the ideal value to be achieved (the target value), with ΩL the limit value (the highest acceptable value) and with ΩE the estimated value, the normalized value ΩN of the variable Ω is calculated with formula (8):

ΩN = 5·(ΩL - ΩE) / (ΩL - ΩT) .    (8)

For example, taking the variable L (effort required to put the idea into practice), counted in money spent for product development, implementation and launching, and assuming for a hypothetical case LT = 10 000 000 €, LL = 30 000 000 € and LE = 15 000 000 €, the application of formula (8) leads to the normalized value LN of 3.75 out of the maximum value 5. Thus, the model which expresses the business value of a new product idea takes the following normalized form:

VN = 125·k · [ ∏Σ (ΣE - ΣL)/(ΣT - ΣL) ] / [ ∏Ω (1 - (ΩL - ΩE)/(ΩL - ΩT))^Y ] ,    (9)

where Σ runs over {U, M, P, D, O, R, E}, Ω runs over {I, L}, and Y = 3 for Ω = I and Y = 1 for Ω = L.

A new product idea brings value to the business if and only if VN > 1. The higher the value of VN, the higher the business potential embraced by the respective product idea. Of two product ideas, the one having the higher value of VN should be primarily taken into account.

5 APPLICATION EXAMPLE
To exemplify the practical application of the model expressed in relationship (6) (or (9)), the case of a software product emerging from a research project, run under the coordination of the author of this paper in the framework of a grant financed by the Romanian Ministry of Education and Research, is considered. The innovative software product is an expert system for quality cost planning, monitoring and control. Quality cost management is an important means of increasing the visibility of business processes and of making the maturity of the processes within an organization more "tangible" – and, from there, of making the market value of the business more "visible". Therefore, companies oriented towards capitalizing their businesses have an interest in implementing quality cost management systems. For software prototyping (the beta version), a budget of 300 000 € was spent (considering the costs of the Romanian labour market).

Figure 1: Screenshots exemplifying the expert system.

The results have been tested in a large chemical plant, thus proving the technical potential of the new tool. Screenshots with examples of reports produced by the expert system under discussion are illustrated in Figure 1. In this context, the idea of transferring the results to a spin-off has been encouraged by the university. Opportunities for further financing the development of the product from structural funds in the amount of 200 000 €, as well as the possibility for the entrepreneurs to add two other prototypes in the framework of this initiative, additionally motivate the initiation of a high-tech spin-off. To simplify the exercise, for this case study the target market in the introduction phase of the product's life-cycle is limited to Romanian mid- and large-size companies, where the economic context has generated opportunities to sell such products. The beginning of the year 2008 was taken as the temporal reference for analysing the business potential.
For the variable U (emergency onto the market), considering the characteristics of the target market, the following values have been obtained: UT = 1/0.2 years⁻¹, UL = 1/2 years⁻¹ and UE = 1/0.5 years⁻¹; therefore UN = 1.66 (see formula (7)). In the case of the second variable, M (market size), for the product under consideration and the target market, the following results have been obtained: MT = 6 000 units, ML = 500 units, ME = 2 000 units; from formula (7), the value of MN is 1.36. For the third variable, P (financial power of the target market), the focus is on the price policy. In the context of the target market, the following results were taken into account: PT = 7 000 €/unit, PL = 3 500 €/unit, PE = 5 000 €/unit; the normalized value, calculated with formula (7), is PN = 2.14. For the variable D (difficulty to copy the idea by competitors), the estimations concern how much money a potential competitor has to spend in order to obtain at least the same results as the current product. In this respect, the following results were obtained: DT = 700 000 €, DL = 200 000 € (the same as for the current product), DE = 480 000 €; from formula (7), the normalized value is DN = 2.80. In the case of O (originality), the focus is on identifying the unique features of the current product with respect to other competing products, and further on calculating the costs for competitors to bring these features within their products, too. For the product under consideration, the following results have been obtained: OT = 150 000 €, OL = 50 000 €, OE = 100 000 €; the normalized value is ON = 2.50 (formula (7)). For the sixth variable, R (return on investment, ROI), the focus is on determining the mean value between the ROI for customers and the ROI for the producer. In this case, the following results are reported: RT = 250%, RL = 150%, RE = 200%; thus the normalized value is RN = 2.50 (see formula (7)). For the seventh variable, E (market elasticity), the following results have been obtained in the case under consideration: ET = 1.5, EL = 1.0, EE = 1.2; using formula (7), the normalized value is EN = 2.00. With respect to variable I (market resistance to changes), the challenge is to estimate the effort (in monetary units) necessary for educating the target market about the utility of the product and about how to use it effectively and efficiently. For the product under consideration, the estimations over a time horizon of 5 years have led to the following results: IT = 40 000 €/time horizon, IL = 200 000 €/time horizon, IE = 150 000 €/time horizon; with formula (8), the normalized value is IN = 1.56. For the last variable, L (effort required for putting the idea into practice), the following results are reported (in monetary units): LT = 200 000 €, LL = 350 000 €, LE = 300 000 €; the normalized value of L is LN = 1.66 (calculated with formula (8)).
The business value of the product idea is calculated with the formula below:

VN = k · (UN·MN·PN·DN·ON·RN·EN) / ((5 - IN)³ · (5 - LN)) .    (10)

Replacing the symbols in (10) with their numeric values for the case study under consideration and performing the calculations, the estimated potential of the product idea at the moment of analysis is VN = 1.24·k. In this very specific case, the constant k reflects the potential to sell the product on new (and quite large) markets, too, as well as the capacity to attach the product as a module to various ERP platforms (opportunities for business partnerships). Thus k > 1, which makes the product even more attractive. As the results reveal, VN > 1, and therefore the conclusion is that this product idea has market potential. However, the value 1.24 is not very high with respect to the limit 1. This means that, for the product under consideration, there are business risks in the respective target market. Therefore, a decision to invest in this specific business will be strongly influenced by the capability of the entrepreneurs to co-finance the business, especially for supporting promotion and market education. An opportunity might be access to structural funds for supporting spin-offs, in the amount of 200 000 €, distributed as 105 000 € for promotion, marketing and communication and 95 000 € for development and launching. This means that the new values of LE and IE from the perspective of a venture capitalist will be LE = 205 000 € and IE = 45 000 €/time horizon; thus the new values of LN and IN will be LN = 4.83 and IN = 4.84. The new value of VN will be VN = 294.71·k >> 1, which makes the project very attractive for a potential investor. A remark has to be made with respect to formulas (9) and (10): if LE or IE reaches its target value, the result is a division by 0; in such cases, instead of 1 (in formula (9)) or 5 (in formula (10)), a slightly different value should be used (e.g. 1.1 or 5.1). This adjustment does not affect the conclusions at all.
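For readers who want to retrace the arithmetic, the short sketch below recombines the rounded normalized values reported above. The algebraic form of formula (10) shown here is our reconstruction (it reproduces the reported baseline VN = 1.24 for k = 1); the script is an illustration, not the authors' tool, and the variable names are shorthand.

```python
# Plausibility check of the case-study figures, using the rounded normalized
# values reported in the text. The structure of formula (10) is a
# reconstruction, not the authors' verbatim formula.
U_N, M_N, P_N, D_N = 1.66, 1.36, 2.14, 2.80
O_N, R_N, E_N = 2.50, 2.50, 2.00

def business_value(k, i_n, l_n):
    # V_N = k * (U_N*M_N*P_N*D_N*O_N*R_N*E_N) / ((5-I_N)^3 * (5-L_N))
    numerator = U_N * M_N * P_N * D_N * O_N * R_N * E_N
    return k * numerator / ((5 - i_n) ** 3 * (5 - l_n))

print(round(business_value(1.0, 1.56, 1.66), 2))  # 1.24: marginal market potential
print(business_value(1.0, 4.84, 4.83) > 1)        # True: attractive after co-financing
```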

It should be noted that the calculation of VN is a supplement to the feasibility study and the business plan, and must not replace them. However, it offers a way of speeding up the decision of whether or not to invest in a business idea. If VN does not show attractive values, the supplementary effort of preparing feasibility studies and business plans may not be justified.

6 FURTHER RESEARCH
Future research will investigate whether it is justified (in terms of the effectiveness of the conclusions) to enhance relationship (6) with new factors. The idea of defining equivalent relationships for other areas of innovation (e.g. marketing innovation, process innovation, organizational innovation, etc.) is also of interest.

7 CONCLUSIONS
A novel model for quantifying the market potential of an innovative product idea is described in this paper. The quality of the model arises from its ability to reveal the type of relationship (direct or inverse) and the weight of each factor with respect to the market value of the new product. The use of scientific tools, both for the identification of the key influential factors and for the elaboration of the mathematical formula of the model, also contributes to the quality of the results. The key influential factors are strongly related to the business objectives taken as inputs to the system. This issue might induce a certain relativity in the model, in the sense that some additional influential factors not included in the model could also be critical. However, this aspect does not affect the methodology, and the business objectives considered here appear to be quite comprehensive.

According to this model, market inertia is very critical in the equation of product competitiveness. This conclusion is extremely important, as many (if not most) spin-off entrepreneurs do not spend enough effort on marketing, promotion, advertising and communication, believing that technical innovation is the major driving force behind the commercial success of a new product.

The model is relatively simple without losing the essence of the problem. People usually dislike writing elaborate documents, such as feasibility studies or business plans, before having some assurance that their idea is of interest to potential investors. This model gives them the chance to prove the business potential without missing the essential aspects of the business. Nevertheless, although the model looks somewhat simple, it is at the same time comprehensive: all important aspects that should be included in a business plan or a feasibility study are elegantly comprised in the model. In addition, the model forces one to think in terms of targets and acceptable limits, as well as in terms of estimates of the effective capability of the entrepreneurs to put the idea into practice, taking the market conditions into account.

The model is also sensitive to the temporal and spatial location of the business. This means that applying the model to the same product idea on different markets, at different moments in time, and for different entrepreneurs might produce different results. This is in fact true of any business plan, too; thus the model proves its "aliveness". In conclusion, the model provides a practical tool for assessing the potential of new technology ventures. It is a useful guide for both entrepreneurs and investors in the incipient phases of the innovation process.

8 ACKNOWLEDGMENTS
Financial support from the Romanian Ministry of Education and Research within the research grant CEEX / INOVEX 140 is acknowledged with gratitude.

Affordance Feature Reasoning in Some Home Appliances Products
J.S. Lim¹, Y.S. Kim²
¹ Hyundai-Kia Motor Company, Jangduk-dong, Hwaseong, Kyunggi-do 445-706, Korea
² Creative Design Institute, Sungkyunkwan University, 300 Chunchun, Jangan, Suwon 440-746, Korea
¹ [email protected], ² [email protected]

Abstract
Interactions between humans and objects are made through specific features of the object that are adequate to the task context. During interactions, the perceived features vary according to the context. Affordances are the messages that products provide and users perceive, such that user actions are naturally induced with the help of these messages. In this research, we define affordance features as structural elements providing affordances. We classify affordance features into functional, ergonomics, and informative aspects. For some home appliances, common affordances and affordance features are identified so that they can be used in designing other products.
Keywords: Affordances, Affordance Features, Task Context, Home Appliances Products

1 INTRODUCTION
Affordance is considered as the property of a product that induces human actions for its operation [1][2]. Interactions between humans and objects are made through specific features of the object that are adequate to the given task context. During interactions, the perceived features of the object vary according to the context given to the human. For example, grasping features of an object are critical during grasping, and manipulation features during control. Features can be regarded as engineering-significant aspects of the geometry of a part or assembly, and thus play a very important role in product design, product definition and reasoning for various applications [3][4]. That is, the feature concept long used in design and manufacturing captures the structural aspect of a product needed to provide affordances.

The term affordance was coined by the perceptual psychologist James Gibson. His essential concept is that a relationship exists between an animal and its environment, and that some parts of this relationship are invariant features of the environment permitting the animal to do things [1]. From an investigation of the affordances of everyday things such as doors and telephones, it was argued that the form of everyday things provides strong clues to their operation as a result of the mental interpretation of them, where this mental interpretation is based on a human's past knowledge and experience [2]. For affordances involving complex actions, Gaver introduced two further concepts: sequential affordances and nested affordances [5]. A sequential affordance describes situations in which acting on one affordance leads to new affordances over time, while a nested affordance concerns the grouping of affordances in space.

In the field of engineering design, considerable research effort has been made to develop design theory and methodology reflecting the concept of affordance. Maier and Fadel proposed the Affordance-Based Design (ABD) methodology [6][7]. In ABD, affordances are categorized into positive affordances (what the artifact should afford) and negative affordances (what the artifact should not afford), as well as artifact-artifact affordances (between multiple artifacts) and artifact-user affordances (between artifact and user). Maier et al. introduced the Affordance Structure Matrix (ASM) for evaluating and grading the affordances embedded in each component of a product; this matrix can illustrate correlations among affordances as well as among components [8]. Galvao and Sato proposed the Function-Task Interaction (FTI) method, which adds an affordance method, notably the FTI matrix, to a general product development process [9][10]. In the FTI method, product functions and user tasks are derived from function decomposition and task analysis and then linked to each other in the FTI matrix. The FTI method has also been applied to identifying affordances of interior spaces such as a conference room [11], where affordances for social issues were addressed beyond function-oriented affordances. Murakami et al. attempted to formulate affordance features for product design through experiments with simple-shaped (elliptical-, conical- or rectangular-section) objects regarded as control devices. Their research showed that geometric features such as height and the aspect ratio between width and length are strongly associated with human actions such as pushing, pulling, turning and tilting [12].

In this research, we identify common affordances and affordance features through case studies of some home appliances: a remote controller, a power screwdriver and a toaster.

2 AFFORDANCE FEATURES REASONING
In this research, we define affordance features as structural elements of products that provide affordances. We classify affordance features into functional, ergonomics, and informative aspects, as shown in Figure 1. Functional-affordance features (FAF) are related to the physical properties governing the behaviors of the object according to physical laws. Ergonomics-affordance features (EAF) are related to convenience of operation. Informative-affordance features (IAF) are related to the properties that help the human understand the functionalities and so guide him/her to proper operations.

Figure 1: Affordance Feature Classification (affordance features divide into functional, ergonomics and informative features: for behaviors of the object by physical laws, for the user's convenience, and for the user's understanding of the functionalities)

3 CASE STUDY
The purpose of this case study is to identify common affordances and their affordance features in some home appliances so that these affordance features can be used in designing other products. We selected a remote controller for window blinds, a power screwdriver and a toaster based on the following criteria: 1) commonly used in everyday life, 2) movable, 3) potential for human body action according to user context. The case study products are shown in Figure 2. During this research, we conducted function decomposition and user task analysis. We then combined the sub-functions with user tasks and actions in the Function-Task Interaction (FTI) matrix to identify affordances. Finally, feature reasoning was conducted with respect to the identified affordances, as shown in Figure 3.

Figure 2: Products for Case Study; (a) Remote Controller for Window Blinds, (b) Power Screwdriver, (c) Toaster

Figure 3: Affordance Feature Reasoning Process (function decomposition and feature recognition, user task analysis, linking sub-functions and user tasks, identifying affordances, affordance feature reasoning)

3.1 Remote Controller for Window Blinds
3.1.1 Affordances Identification
The overall function of the remote controller (for window blinds in this case study) can be defined as control window blinds motion. As a result of function decomposition using the functional basis [13], the remote controller (briefly, RC) has three main functions, with the human hand and hand force as well as electricity being converted into a signal as flows. The electricity is supplied from a battery by the user's pressing action and is converted to a signal which can reach the target object (the window blinds) to be controlled, as shown in Figure A.(a) of the Appendix. From the user task analysis, we obtain the tasks and actions shown in the first two rows of Figure 4. The user holds the device by grasping, generally orients it toward the target object, and then presses a button for starting or stopping the rolling motion of the window blinds, or for changing direction, until the window blinds reach the proper height. If the device is used inside a space where the signal is strong enough to reach the target object, the orienting action may be skipped. By combining sub-functions with user tasks and actions, we identified RemoteControl-ability and HandControl-ability, the latter consisting of Grasp-ability and Handling-ability. RemoteControl-ability has FingerControl-ability and Electric-ability as well as Signal-ability. FingerControl-ability is composed of FingerNavigate-ability and FingerPress-ability, as shown in Figure 4. On-ability, MovingDirectionControl-ability and Off-ability share the same affordances, namely FingerControl-ability and Electric-ability as well as Signal-ability, as their components.
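The FTI linkage just described — tasks and actions paired with the sub-functions they trigger, from which affordances are read off — can be pictured as a sparse mapping. The sketch below is ours: the cell entries are taken from the row and column labels described in the text, but the grouping rule is a deliberately naive illustration, not the paper's procedure.

```python
# Minimal sketch of an FTI-style linkage: (task, action) pairs mapped to the
# sub-functions they trigger. The chosen cells and the naive grouping rule
# are illustrative assumptions.
fti_links = {
    ("Hold RC", "Grasp"): "Import Human Hand",
    ("Start Rolling", "Finger Navigate"): "Actuate EL",
    ("Start Rolling", "Finger Press"): "Switch EL",
    ("Stop Rolling", "Finger Press"): "Switch EL",
}

def affordances_from(links):
    # each occupied (task, action) x sub-function cell suggests an affordance
    return sorted({f"{action}-ability" for (_, action), _ in links.items()})

print(affordances_from(fti_links))
# ['Finger Navigate-ability', 'Finger Press-ability', 'Grasp-ability']
```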

Figure 4: Affordances Identified from Function-Task Interaction for Remote Controller

3.1.2 Affordance Features Reasoning
We recognized the affordance features associated with the affordances identified in the FTI matrix, as shown in Figure 5. These affordance features are listed in Tables 1-4 and are classified according to the affordance feature classification shown in Figure 1.

Figure 5: Affordance Features of Remote Controller (annotated with the feature symbols referenced in the text, e.g. SPL, APL, GA, BD, LO, SP, PR, FS and the overall dimensions OL, OT, OW)

For Grasp-ability, a reflective symmetry (represented by the symmetric plane, SPL) and a grasping area (GA) are reasoned as affordance features, as shown in Figure 5, where the grasping area is the region contained in a human hand when the device is grasped. The asymmetry (represented by the asymmetric plane, APL), caused by the location of the button group where one side is too small to afford any grasping, tells the user how to hold the RC in the correct upright orientation. Note that bad affordances would be given if the RC were symmetric with respect to the plane APL. Due to the symmetry, both left-handed and right-handed persons can operate the RC equally well. The grasping area should have a proper size to accommodate the human hand. Thus all affordance features for Grasp-ability form a hierarchical structure, as shown in Table 1, and are classified as ergonomics-affordance features (EAF).

As this product is a hand-held device, we apply a taxonomy of grasping in reasoning about it. A partial taxonomy of grasps was introduced by Cutkosky for a manufacturing expert system [14]. The grasp type for this RC is 'light-tool grasp' or 'thumb-3 or -4 fingers grasp', as shown in Table 1. Note that the orientation and the grasping area are sustained regardless of whether a 'light-tool grasp' or a 'thumb-3(4) fingers grasp' is used, due to the asymmetry created by the location of the button group.

Table 1: Affordance Features and Grasp Type for Grasp-ability of Remote Controller
Grasp-ability
- EAF: Reflective Symmetry (SPL)
- EAF: Grasping Area (GA)
  - EAF: Reflective Asymmetry - Location of Buttons WRT Grasping Area (APL)
  - EAF: Size - Length, Thickness, Width
+ Grasp Type: Light-tool Grasp; Thumb-3 (4) Fingers Grasp

Affordance features for Handling-ability, FingerNavigate-ability and FingerPress-ability are also shown in Figure 5 and Tables 2-4. Weight and overall volume, the latter consisting of overall length (OL), overall thickness (OT) and overall width (OW), are identified as affordance features for Handling-ability, because Handling-ability is related to the lifting and orienting actions performed by human hand force.

Table 2: Affordance Features for Handling-ability
Handling-ability
- EAF: Weight
- EAF: Overall Volume - Overall Length (OL), Overall Thickness (OT), Overall Width (OW)

FingerNavigate-ability has the affordance features of a boundary by protrusion (BD) for tactile guiding, the size of the area inside the boundary, and location (LO). Location is further decomposed into position and orientation sub-features with respect to the grasping area, as these are the critical factors in determining whether a button can be selected while grasping the RC in one hand.

Table 3: Affordance Features for FingerNavigate-ability
FingerNavigate-ability
- EAF: Boundary by Protrusion (BD)
- EAF: Size of Area inside Boundary
- EAF: Location WRT Grasping Area (LO)

FingerPress-ability, shown in Table 4, has a protrusion (PR) affordance feature that is physically separated (SP) for pressing. This protrusion has affordance sub-features of height (HT), arbitrary flat surfaces (FS), and a size of area accommodating the finger palm. FingerPress-ability also has a location with respect to the grasping area. All affordance features of FingerNavigate-ability and FingerPress-ability are classified as ergonomics-affordance features, except the separation for FingerPress-ability, which is a functional-affordance feature.

Table 4: Affordance Features for FingerPress-ability
FingerPress-ability
- EAF: Protrusion (PR)
  - FAF: Separation (SP)
  - EAF: Height (HT)
  - EAF: Arbitrary Flat Surface (FS)
  - EAF: Size accommodating Finger Palm
- EAF: Location WRT Grasping Area (LO)

Affordance features for MovingDirectionControl-ability are shown in Figure 6 and Table 5. A clue to functionality is provided by the difference of shapes (DS), triangles versus an ellipse, which imply motion and stop respectively. The relative location (RL) of the two triangles provides the relative direction. The most critical affordance feature of the RC is that the triangle of the upward-motion button points in the correct (upward) direction when the RC is held as afforded, so that this coincides with the upward motion of the window blinds (U-u). To lower the window blinds, the downward button is pressed, as afforded by the triangle pointing downward (D-d). We organize these affordance features as sub-features of the affordance feature coincidence of configuration and behavior of target object while grasping, and classify them as informative-affordance features (IAF).

Figure 6: Affordance Features for MovingDirectionControl-ability of Remote Controller (the triangle buttons (u), (d) with their difference of shapes DS and relative location RL correspond to the upward (U) and downward (D) motions of the window blinds)

Table 5: Affordance Features for MovingDirectionControl-ability
MovingDirectionControl-ability
- IAF: Coincidence of Configuration and Behavior of Target Object while Grasping
  - IAF: Relative Location of Buttons while Grasping (RL)
  - IAF: Difference of Button Shapes (DS)
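The hierarchical tables above lend themselves to a simple machine-readable encoding of the kind that could populate the feature repository suggested in the conclusion. The sketch below is a hypothetical encoding: the class and field names are ours, not the paper's, and the data mirrors Table 4 for the remote controller.

```python
# Hypothetical encoding of a hierarchical affordance-feature table
# (here mirroring Table 4). Names are illustrative, not from the paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feature:
    name: str
    kind: str                          # "FAF", "EAF" or "IAF"
    label: str = ""                    # symbol used in the figures, e.g. "PR"
    subfeatures: List["Feature"] = field(default_factory=list)

finger_press = [
    Feature("Protrusion", "EAF", "PR", [
        Feature("Separation", "FAF", "SP"),
        Feature("Height", "EAF", "HT"),
        Feature("Arbitrary Flat Surface", "EAF", "FS"),
        Feature("Size accommodating Finger Palm", "EAF"),
    ]),
    Feature("Location WRT Grasping Area", "EAF", "LO"),
]

def collect(features, kind):
    # depth-first collection of all features of a given kind
    hits = []
    for f in features:
        if f.kind == kind:
            hits.append(f.name)
        hits.extend(collect(f.subfeatures, kind))
    return hits

print(collect(finger_press, "FAF"))  # ['Separation']
```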

3.2 Power Screwdriver
3.2.1 Affordances Identification
We now consider the case of a power screwdriver (briefly, PSD). The overall function of a power screwdriver can be defined as tighten (or loosen) screw. By decomposing the overall function, we obtain the sub-functions shown in Figure A.(b) of the Appendix. A PSD is a kind of screwdriver that uses electric energy for generating torque. Hand force is used only to stabilize the PSD during screwing. Finger force is used to depress the trigger to actuate the electric motor, and to push/pull the switch to change the rotating direction of the screw. Note that adjusting the screw motion must still be performed by the human hand even though the PSD is an electric device. As shown in Figure 7, the user holds the PSD by hand grasping, connects the PSD with a screw by aligning as well as mating, and starts the PSD by pressing the trigger. He/she should select the rotating direction according to the situation or task, for either tightening or loosening. During operation, stabilizing and adjusting are sustained by supporting and maintaining the PSD. Generally, the user's other hand should remain free, so that it can be used for finger grasping, lifting and locating a screw at the proper position before it is connected and aligned with the PSD. From the FTI matrix we identified affordances of the PSD such as HandControl-ability and PowerScrew-ability, the latter consisting of Mate-ability, FingerControl-ability, Maintain-ability, Rotate-ability and RotDirectionControl-ability, as shown in Figure 7. RotDirectionControl-ability has as its components Electric-ability, concerning electricity, and FingerPushPull-ability, for operating a sliding switch. FingerControl-ability of the PSD has one more component, FingerPushPull-ability, compared with the FingerControl-ability of the RC.

Figure 7: Affordances Identified from Function-Task Interaction for Power Screwdriver

3.2.2 Affordance Features Reasoning
The PSD contains the Grasp-ability features of a grasping area (GA) and a reflective symmetry (represented by the symmetric plane, SPL), as shown in Figure 8, similar to those of the RC. The reflective symmetry feature allows both left-handed and right-handed persons to operate the PSD.

Figure 8: Affordance Features of Power Screwdriver (annotated with the symmetric plane SPL, grasping area GA and trigger containment, relative shape and size RSS, overall dimensions OL, WD, OH, and the rotating axis RX, grasping axis GX and arm axis AX with the distance DT and angle AG between them)

For FingerPushPull-ability, we identify an affordance feature of a protrusion in a slot, with sub-features of separation (SP), height, two flat surfaces (FS), and location (LO) with respect to the grasping area. Moreover, the two flat surfaces face in opposite directions (OD) along the slot direction. This FingerPushPull-ability is supported by the affordance features of FingerNavigate-ability, which has a boundary by depression (BD) for tactile guiding and a size of area inside the boundary, as well as a location (LO). These affordance features and their hierarchical relations are shown in Table 6 and Figure 9.

Table 6: Affordance Features for FingerPushPull-ability
FingerPushPull-ability
- EAF: Protrusion (PR1)
  - EAF: (Position) inside Slot (SL)
  - FAF: Separation (SP)
  - EAF: Height
  - EAF: Two Flat Surfaces (FS) - In Opposite Directions along Slot (OD)
  - EAF: Size accommodating Finger Palm
- EAF: Location WRT Grasping Area (LO)

Figure 9: Affordance Features for FingerPushPull-ability of Power Screwdriver (showing the boundary by depression BD, the protrusion PR1 inside the slot SL, the flat surfaces FS in opposite directions OD, the separation SP, and the location LO with respect to the grasping area GA)

Affordance features for Maintain-ability are shown in Figure 8 and Table 7. The PSD has a hand grasping axis (GX) [15] which does not have to be coaxial with its rotating axis (RX), because the screw rotation is caused not by a hand rotating motion but by torque converted from electricity. Thus, the human hand need only support and maintain the PSD to stabilize it and adjust the screw motion during screwing. While maintaining the PSD during screwing, the distance (DT) between the rotating axis (RX) and the arm axis (AX) plays a critical role: this distance is a functional-affordance feature, acting as the moment arm for resisting the reaction force from screwing. The angle (AG) between the rotating axis (RX) and the grasping axis (GX) is an ergonomics-affordance feature for Maintain-ability, because this angle affects the comfort of the person's wrist during the screwing operation. The containment of the trigger within the grasping area affords the user stabilized operation during screwing: because the trigger is fully contained in the grasping area, strongly pressing the trigger achieves two functions, causing triggering and simultaneously producing a strong grasp that secures stability. This containment is therefore related to Maintain-ability as well as FingerPress-ability, falling within our ergonomics-affordance features. The soft rubber (in black) of the handle absorbs the impact of the reaction force, so it is also identified as an ergonomics-affordance feature for Maintain-ability.

Table 7: Affordance Features for Maintain-ability
Maintain-ability
- EAF: Containment in Grasping Area
- FAF: Distance between Rotating Axis (RX) and Arm Axis (AX) (DT)
- EAF: Angle between Rotating Axis (RX) and Grasping Axis (GX) (AG)
- EAF: Soft Rubber

In addition, this product has affordance features for Handling-ability, Rotate-ability, and Mate-ability. Because the PSD has the overall function tighten (loosen) screw, a rotating axis (RX) exists as a functional-affordance feature for Rotate-ability. Though this PSD has the functionality of interchanging bits, we assumed that a bit proper to the context is already installed, so we did not consider Interchange-ability, which would be equivalent to Mate-ability in the context of our case study. Mate-ability has relative shape and size (RSS) as its affordance features, as shown in Figure 8.

The critical affordance feature of the PSD is the coincidence of the configuration with the behavior of the target object. The configuration for normal operation is the coincidence of the forward direction, induced by the forward motion (pushing) of the thumb along a slot when the PSD is held as afforded, with the forward motion of the screw (F-f). For the unscrewing operation, the slide switch is pulled, as afforded by the backward motion (pulling) of the thumb along the slot (B-b). Therefore, we identified this coincidence as an informative-affordance feature for RotDirectionControl-ability, as shown in Figure 10 and Table 8. While MovingDirectionControl-ability of the RC emphasizes the upward or downward motion of the window blinds, RotDirectionControl-ability of the PSD emphasizes the rotating motion of the bit (sharp tip), which is a component of the PSD itself. This difference in the object being emphasized arises because the RC is a device using a signal, without mechanical behavior, whereas the PSD does have mechanical behavior. This distinguishes MovingDirectionControl-ability from RotDirectionControl-ability in our case studies.

Figure 10: Affordance Features for RotDirectionControl-ability of Power Screwdriver; (N): Neutral Position for Manual Screwing (showing the direction of the slot DSL, the forward (f) and backward (b) thumb motions, the relative direction of arrows RD, and the forward (F), neutral (N) and backward (B) positions)

Table 8: Affordance Features for RotDirectionControl-ability
RotDirectionControl-ability
- IAF: Coincidence of Configuration WRT Behavior of Target Object while Grasping
  - IAF: Direction of Slot (DSL)
  - IAF: Relative Direction of Arrows while Pushing or Pulling Sliding Switch (RD)

3.3 Toaster
The toaster (briefly, TST), shown in Figure 2(c), is a kind of home appliance, in contrast to the remote controller and the power screwdriver, which are hand-held devices. TST is generally used on a table in a kitchen, although it is movable by a human.

3.3.1 Affordances Identification
TST has the overall function toast bread (bagels), as shown in Figure A.(c) of the Appendix. This product heats slices of bread (bagels) contained in its chamber until they reach the proper temperature, determined by the human user according to his/her taste. Because this product converts electricity to thermal energy (heat), an emergency or manual stop control exists. As already mentioned, TST is used in a pre-determined place, such as on a table, thus the set up toaster task is generally not conducted. Furthermore, some users always keep TST plugged into an electric outlet, and may leave the chamber uncovered between uses. For toasting, the user inserts slices of bread (bagels) down into the heating chamber and lowers them by depressing a control bar. The user also selects the desired time duration of heating or the final temperature. When the desired temperature or time duration is reached, the toasted bread pops up to allow the user to grasp and lift it. To dispose of the bread crumbs that accumulate during operation, the user removes the crumbs by grasping and withdrawing a crumb chamber. These user tasks and actions are shown in the first two rows of the FTI matrix in Figure 11.

We identified many affordances of TST, as shown in Figure 11. Setup-ability allows TST to be set in a place. Wind-/Unwind-ability of the cord and Insert-ability of the plug into an electric outlet are identified as sub-features of Handling-ability. Contain-ability (of bread) is composed of Insertdown-ability and Lower-ability. HeatControl-ability is decomposed into Stop-ability, HandControl-ability and HeatSet-ability. Finally, Clean-ability consists of Handling-ability and Accumulate-ability. Additionally, this product has Popup-ability for exporting the toasted bread.

Figure 11: Affordances Identified from Function-Task Interaction for Toaster

3.3.2 Affordance Feature Reasoning
We have identified two very remarkable affordances: Insertdown-ability and Lower-ability. In this paper, we focus on these two affordances only and describe their affordance features. Insertdown-ability and Lower-ability are shown in Tables 9-10 and Figure 12.

The user's downward insertion is guided by the only accessibility direction of the chamber. Thus Insertdown-ability has the affordance feature of a depression with an upward opening direction (UOD). The opening has an opening size (OS) for accommodating a slice of bread, and the depression has a volume (DV) for containing it. This depression is a pocket form feature as defined in [4], whose accessibility cone comprises the vertical upward direction only, i.e. the direction opposite to gravity. The possible mating directions for inserting the bread into the depression are obtained as the complement of the accessibility cone, i.e. vertically downward only (a toy numeric illustration follows Table 10). Thus the opening at the top provides a clue for the human's cognitive effort to recognize the Insertdown-ability affordance.

Table 9: Affordance Features for Insertdown-ability
Insertdown-ability
- Depression
  - IAF: Upward Opening Direction (UOD)
  - FAF: Opening Size (OS)
  - FAF: Depression Volume (DV)

Lower-ability has the affordance features of the initial position of a slide bar and a downward direction in a slot. The initial position of the slide bar is just below the height of the top of the opening (JB), which affords the human hand a continued, natural downward motion for conducting an additional task or action just after inserting the bread. This subsequent downward motion of the hand can thus depress the slide bar and lower the bread into the heating chamber. These relations exhibit Gaver's sequential and nested affordances [5].

Figure 12: Affordance Feature for Insertdown-ability and Lower-ability of Toaster (annotated with the opening at the top (OS, UOD), the depression volume DV, the slide bar just below the opening (JB), and the downward motion during control)

Table 10: Affordance Features for Lower-ability
Lower-ability
- IAF: Relative Position between Slide Bar and Opening
  - Just below Opening (JB)
  - Downward Direction
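As promised above, the accessibility-cone reasoning can be illustrated with a toy computation. The vector convention and function names below are ours, not the paper's.

```python
# Toy numeric illustration: when a pocket's accessibility cone contains only
# the vertical upward direction, the possible mating (insertion) directions
# are its complement, i.e. vertically downward only.
def mating_directions(accessibility_cone):
    # the complement of the accessibility cone reverses every direction
    return [(-x, -y, -z) for (x, y, z) in accessibility_cone]

pocket_accessibility = [(0, 0, 1)]  # vertical upward only (opposite to gravity)
print(mating_directions(pocket_accessibility))  # [(0, 0, -1)]: insert downward
```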


4 CONCLUSION
In this research, affordance features are defined as structural elements of products that provide affordances. Affordance features are classified into functional, ergonomics and informative aspects, and are identified in a hierarchical structure. We have identified affordances by analyzing hand-held devices such as a remote controller (RC) and a power screwdriver (PSD). Those affordances are HandControl-ability, consisting of Grasp-ability and Handling-ability, and FingerControl-ability, composed of FingerNavigate-ability, FingerPress-ability and FingerPushPull-ability. For hand-held devices, especially those with controlling functions such as the RC and the PSD, the critical affordance features are related to the coincidence between the operations of the human hand and the behavior of the object to be controlled. This behavior can be a physical motion of the object, or a variation of the object's state, such as a change in temperature. For the toaster, a kind of kitchen appliance, we have identified the affordances of Insertdown-ability and Lower-ability as well as their critical affordance features.

Specific affordance features identified from diverse home appliances could be stored in a repository with a suitable classification scheme covering affordances and affordance features as well as task context. Such a repository could support the affordance-based design of other devices involving interaction with human hands.

5 REFERENCES
[1] Gibson, J. J., 1979, The Theory of Affordances: In the Ecological Approach to Visual Perception, Houghton Mifflin.
[2] Norman, D. A., 2002, The Design of Everyday Things, Basic Books, New York, NY.
[3] Shah, J. J., 1991, An Assessment of Features Technology, Computer-Aided Design, 23(5): 331-343.
[4] Kim, Y. S., 1992, Recognition of Form Features using Convex Decomposition, Computer-Aided Design, 24(9): 461-476.
[5] Gaver, W., 1991, Technology Affordances, Proc. of the SIGCHI Conf. on Human Factors in Computing Systems: Reaching Through Technology, ACM Press, New York, NY, USA: 79-84.
[6] Maier, J. R. A. and Fadel, G. M., 2003, Affordance-Based Methods for Design, Proc. of ASME Int'l Conference on Design Theory and Methodology, Chicago, Illinois, USA: DETC03/DTM-48673.
[7] Maier, J. R. A. and Fadel, G. M., 2005, A Case Study Contrasting German Systematic Engineering Design with Affordance Based Design, Proc. of ASME Int'l Conference on Design Theory and Methodology, Long Beach, CA, USA: DETC2005-84954.
[8] Maier, J. R. A., Ezhilan, T. and Fadel, G. M., 2007, The Affordance Structure Matrix - A Concept Exploration and Attention Directing Tool for Affordance Based Design, Proc. of ASME Int'l Conference on Design Theory and Methodology, Las Vegas, Nevada: DETC2007-34526.
[9] Galvao, A. B. and Sato, K., 2005, Affordances in Product Architecture: Linking Technical Functions and Users' Tasks, Proc. of ASME Int'l Conference on Design Theory and Methodology, Long Beach, California: DETC2005-84525.
[10] Galvao, A. B. and Sato, K., 2006, Incorporating Affordances into Product Architecture: Methodology and Case Study, Proc. of ASME Int'l Conference on Design Theory and Methodology, Philadelphia, Pennsylvania, USA: DETC2006-99404.
[11] Kim, Y. S., Kim, M., Lee, S. W., Lee, C. S., Lee, C. H. and Lim, J. S., 2007, Affordances in Interior Design: A Case Study of Affordances in Interior Design of Conference Room Using Enhanced Function and Task Interaction, Proc. of ASME Int'l Conference on Design Theory and Methodology, Las Vegas, Nevada: DETC2007-35864.
[12] Murakami, T., Cheng, L. M., Higuchi, M. and Yanagisawa, H., 2006, Trial for Formulation of Affordance Feature for Product Design, Proc. of the Human Interface Symposium: 403-408 (in Japanese).
[13] Little, A., Wood, K. and McAdams, D., 1997, Functional Analysis: A Fundamental Empirical Study for Reverse Engineering, Benchmarking and Redesign, Proc. of ASME Int'l Conference on Design Theory and Methodology, Sacramento, CA, USA: DETC97/DTM-3879.
[14] Cutkosky, M. R., 1989, On Grasp Choice, Grasp Models, and the Design of Hands for Manufacturing Tasks, IEEE Trans. Robotics and Automation, 5(3): 269-279.
[15] Hasser, C. J. and Cutkosky, M. R., 2002, System Identification of the Human Hand Grasping a Haptic Knob, Proc. of the 10th Symposium on Haptic Interfaces for Virtual Environments & Teleoperator Systems (HAPTICS'02): 171-180.

APPENDIX: FUNCTION DECOMPOSITION

Figure A: Function Decompositions of (a) Remote Controller, (b) Power Screwdriver, (c) Toaster (flows of human hand, hand force, electricity, torque, signal and thermal energy through the sub-functions of each product)

A Methodology of Persona-centric Service Design

S. Hosono¹, M. Hasegawa¹, T. Hara², Y. Shimomura³, T. Arai²
¹ Service Platforms Research Laboratories, NEC Corporation, Japan
² Department of Precision Engineering, The University of Tokyo, Japan
³ Department of System Design, Tokyo Metropolitan University, Japan
¹ [email protected]

Abstract In order to clarify the customer’s inclination and provide them with better services, the persona-scenario method has been accepted in Requirements Engineering. A CAD tool proposed in Service Engineering also adopts persona-scenario and focuses on denoting and visualizing the quality factors in service receivers. However, the process of service development with persona has not been fully discussed yet. This paper proposes a methodology to identify the boundary of a whole service system with role and user personas, and formalize the procedure of service development with them. The methodology in service development is exemplified through e-learning services. Keywords: User centric design; Requirements Engineering; Service Engineering; CAD

1 INTRODUCTION
Transformation from product business to service business is a trend in the manufacturing industry. In order to meet the needs of the time, research has been done on the Product-Service System (PSS) [1] and, more specifically, the Industrial Product-Service System (IPS2) [2]. This research will contribute to reducing environmental load and realizing a sustainable society, though so far these discussions have mainly argued conceptually for turning functions offered by products into services. The transformation will be realized by reconfiguring the conventional value space in the business model and re-bundling it by promoting the density of opportunities in the system [3]. To realize this transformation and facilitate the trend toward a service economy, there is a need to establish a methodology to systemize and industrialize the process of servicing. To this end, the following procedures are necessary: (1) notionally disintegrate conventional products and services into the functions that have been offered to users, and (2) systemize a mechanism to reorganize those functions. Establishing a method for modelling the structure of a service system in which both provider and receiver take part will enable us to realize these procedures. From this standpoint, our current research can be classified into two categories:

User Centric Design
User Centric Design (UCD) makes product planning user-oriented. As human-centred design has become recognized and established (e.g. ISO 13407: human-centred design processes for interactive systems), product planning has come to put more emphasis on a needs-based development approach, which starts from knowing customers' preferences and behaviour, than on the seeds-based approach of technology-oriented product development [8]. The Persona Method [9][10] is a branch of UCD. It proceeds with product development by designing virtual users in detail and assuming that they use the product. The method enables the design of a virtual user reflecting family lifestyle, motivation for work, or the demands for achieving his/her goals. Companies are expected to adopt the method increasingly. UCD has also been discussed in Requirements Engineering [11], a field of Software Engineering. User-Centred Requirements Engineering has been proposed to manage users who have various goals. The persona-scenario-goal methodology integrates Goal Orientation with the Persona Method to negotiate conflicting purposes [12].

Service Design
Service Engineering [4][5][6][7] studies the mechanism of service and provides denotations and expressions that computers can handle. A CAD tool, Service Explorer, aims to reuse service knowledge for designing and evaluating a service. It focuses on the human acts of possessing and consuming products and services, i.e. the subjective view of people and organizations. It enables us to draw the interactions between a service provider and a receiver in a flow diagram, and to depict state changes in receivers using parameters.

In both research categories, personas are effectively applied to the early phase of service design. However, comprehensively meeting customers' expectations is still difficult to realize. Unlike products, services have unique features: 'intangibility', 'simultaneity', 'heterogeneity' and 'perishability'. These features show that the introduction and aftercare of a service are important factors for customer satisfaction. Although conventional persona methods mainly focus on the service-use phase, the phases before and after the service experience must also be considered.


However, these phases of the service encounter include several service functions, such as promotion or follow-up aftercare, which multiple people with roles on the vendor side provide. The service system therefore tends to expand greatly, and it is hard to ascertain the boundary of the service system comprehensively and to optimize the system. Solving these issues and establishing a framework for the procedures of designing services are urgent tasks. The next chapter explains the procedure of designing services.

2 METHODOLOGY OF PERSONA-CENTRIC SERVICE DESIGN
We propose a persona-centric service design methodology. The methodology models a service system with stakeholders and their interactions in terms of service functions. It minimizes the disparities between the quality levels designed by service providers and the quality levels expected by service receivers. The persona-centric service design methodology denotes all stakeholders involved in a service system as personas. Personas are defined in two types: the 'User Persona' is the final user of the service, while the rest of the stakeholders are denoted as 'Role Personas'. Figure 1 shows the architecture of the persona-centric service design system. A user persona has an 'important value list' and a 'use case list divided by phase'. A use case divided by phase of the service encounter is a combination of a 'scenario' and a 'degree of importance'. A role persona has a 'function list' which denotes each function provided to other personas. The next section explains the procedures for developing personas, identifying the service system boundary and minimizing quality disparities between service providers and receivers.
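The two persona types just described can be pictured as simple records. The sketch below is a hypothetical encoding; the class and field names are ours, not taken from the persona-centric service design system.

```python
# Hypothetical data shapes for user and role personas (names illustrative).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UseCase:
    phase: str          # phase of the service encounter
    scenario: str
    importance: int     # degree of importance

@dataclass
class UserPersona:
    name: str
    important_values: List[str]
    use_cases: List[UseCase] = field(default_factory=list)

@dataclass
class RolePersona:
    role: str
    functions: Dict[str, int] = field(default_factory=dict)  # function -> importance

user = UserPersona(
    name="final user",
    important_values=["credibility", "low price"],
    use_cases=[UseCase(phase="check-in", scenario="searches for a course", importance=3)],
)
help_desk = RolePersona(role="help desk", functions={"courteous response": 4})
print(user.use_cases[0].phase, "->", list(help_desk.functions))
```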

2.1 System Modelling with Personas
System modelling with personas first designs a target user and then models the service functions provided by the role personas on the provider side. Figure 2 shows the steps.
1. Collect a mass of individual information on services - how much importance people put on quality - through questionnaires to the target user group.
2. Cluster the collected data and choose an arbitrary group whose characteristics suit the target, based on business decision-making.
3. Extract important value data from the information on quality importance by methods such as multivariate analysis. Then list the extracted results as an important value list.
4. Interview some individuals who are in the group mentioned above and make a list of their important values.
5. Calculate the degree of similarity of the important values, and identify the person with the highest similarity (a minimal computational sketch follows below). Then complete the user persona information, using his/her use cases as supplementary information.
6. Nominate stakeholders among service providers and receivers, and nominate the functions that each persona gives to other personas. Then make the role personas. Using the interview results, assign an importance to each function and complete the role persona information.
These steps enable us to find the service functions and depict the service system broadly. The following procedures clarify the functions and the boundary.
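Step 5 depends on a similarity measure between importance profiles, which the paper leaves unspecified. The sketch below uses cosine similarity as one plausible choice; the quality-factor names and ratings are invented for illustration.

```python
# Minimal sketch of step 5: picking the interviewee whose important-value
# ratings best match the target cluster's profile. Cosine similarity is an
# assumption, not the paper's prescribed measure.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# importance ratings for, e.g., (credibility, appeal, low price, convenience)
cluster_profile = [4.5, 4.1, 3.8, 3.2]
interviewees = {
    "A": [4.6, 4.0, 3.9, 3.1],
    "B": [3.0, 4.8, 2.5, 4.0],
    "C": [4.2, 3.5, 4.4, 2.9],
}
best = max(interviewees, key=lambda p: cosine_similarity(cluster_profile, interviewees[p]))
print(best)  # 'A': this person's scenarios seed the user persona
```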

Figure 1: Persona-centric Service Design System


Figure 2: Flow chart of designing User Persona

2.2 Identifying Service Boundary
After nominating the personas with their functions, the boundary of the service system is specified. The Provider-Receiver Map (PRM) retains the correspondence between the consumer's use-case phases and the service functions from the provider. By having importance ratings for both use cases and functions, the difference in service quality levels between provider and receiver becomes clearly observable. The first half of figure 3 shows the steps.
7. Establish a one-to-one correspondence between each use case and a function of a role persona. In the persona-centric service design system (fig. 1), this step connects them on the editor.
8. Confirm that every use case based on the user persona's process is related to some function of a role persona. When there is no corresponding function in any role persona, the analysis and extraction of role personas was inappropriate; in this case, extract functions on the provider side again until the two sides correspond perfectly.
It is difficult for service developers to confirm how many stakeholders they should consider. With these steps, however, the necessary and sufficient stakeholders with their functions are extracted. As a result, the service system boundary is identified in a user-oriented way.

2.3 Optimizing Service Functional Quality
After determining the service system, the design of the quality levels of the service functions can be optimized. The latter part of figure 3 shows the steps; a small computational sketch of steps 9 and 10 is given below.
9. When the correspondence is clarified, persona information can be processed comparatively by normalizing each value on the user persona importance and the role persona importance respectively. Complete the PRM table with them.
10. Find disparities by comparing the relative importance.
11. According to the disparities found, the provider redesigns the functions of the service.
The next chapter verifies the methodology and evaluates its effectiveness by applying these procedures to actual services.
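As a minimal sketch of steps 9 and 10, the snippet below normalizes importance ratings on both sides of the PRM and flags disparities. The pairing of items, the ratings and the threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of PRM steps 9-10: normalize both sides, then flag items the
# receiver values far more than the provider does (redesign candidates, step 11).
def normalize(ratings):
    total = sum(ratings.values())
    return {k: v / total for k, v in ratings.items()}

receiver = normalize({"Being aware of services": 5, "Ease of finding course": 3,
                      "Experiencing free-trials": 5})
provider = normalize({"Being aware of services": 0, "Ease of finding course": 3,
                      "Experiencing free-trials": 0})

for item in receiver:
    gap = receiver[item] - provider.get(item, 0.0)
    if gap > 0.1:  # illustrative threshold
        print(f"redesign candidate: {item} (gap {gap:.2f})")
```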


Figure 3: Flow chart of designing Role Persona and optimizing the quality of functions

3 CASE STUDY
E-learning services have spread rapidly in Japan since 2000, and the market continues to grow by 10% annually. The BtoB segment accounts for 83% of the market and the BtoG or BtoA segment for 13%; the BtoC segment (business to individual customers) is only 4%. There is therefore potential for business opportunities in the BtoC area. The following shows a case study of developing a new e-learning service for individual customers.

3.1 System Modelling with Persona
User Persona
1, 2. Firstly, the items and structure of the questionnaires were designed. In order to cover quality factors comprehensively, 'word lists' - a part of the Service Templates based on a standard list of quality factors defined in Service Explorer - were used. A survey was then conducted on the net with around 10,000 men and women. Figure 4 shows segmentations of the surveyed people by personality (specifically, by level of self-motivation) and by experience in e-learning. From a marketing point of view, this group accounts for only 3.7%; however, people who are self-motivated and inexperienced in e-learning are the most promising customers.

Figure 4: Segmentations

3. Secondly, we extracted the data of people who are self-motivated and inexperienced in e-learning, and analyzed it. Table 1 shows the customers' attitude towards quality factors: how important they think those factors are, and how much they expect from them. Some factors, such as 'credibility', 'appeal' and 'low price', show relatively large gaps between importance and expectation. Focusing on these factors is significant for stimulating prospective customers to start using e-learning services. How customers access services differs from person to person, so it is difficult to depict the concrete lifestyle of a person from the statistical data alone. To draw realistic scenarios, the most suitable person was selected from the survey and his/her scenarios were used.

Table 1: Quality factors of e-learning for target customers


4, 5. In-depth interviews were conducted with 10 businessmen and businesswomen. Five people (A, B, C, G and H) who are in the target segment were chosen. Their factors were then compared, and the person whose factors are most similar to the patterns from step 1 was identified. Person A in Table 2 was the most similar to the persona; therefore his scenarios were used to depict the lifestyle. The user persona finally identified is shown in Figure 5 and Table 3. The persona's name is Toru Mori, aged thirty-six. He has a wife and a daughter. He is a section manager at a manufacturing company. He studies a lot to keep up with the changes in his business environment. He likes to watch human documentary films. In his everyday life, he values 'fun and enjoyment in life', 'sense of accomplishment', and 'warm relationships with others'.

Table 2: Quality factors and their importance

Figure 5: User Persona

Table 3: Quality factors
Activity | Requirement | Quality Factor
Advertisement | Get the latest course information | Appeal
Participation | Select a hands-on course | Reasonable fees
Content | Select a hands-on course | Trend
Search | Ease of finding the best course | Ease of finding course
Course Information | Select a hands-on course | Sincerity, Reliability, Intellectuality
Provider | Select a reliable provider | Reliability
Business results | Expect favourable outcome | Sincerity, Reliability, Intellectuality
Required environment | Get the IT environment information | Comprehensiveness
Placement | Take a suitable course | Convenience, Certainty
Application procedure | Certainly apply for a course | Ease of applying
Business hours | Have flexible timetable | Ease of use, Reasonable fees
Materials | Get practical knowledge, skills | Interest, Intellectuality
Payment | Pay tuition fee easily and safely | Convenience
Accomplishment | Get a sense of achievement | Superiority
Customer service | Maintain motivation | Sincerity
Skill improvement | Develop potential ability |
Course line up | |

3.2 Identifying Service Boundary 7, 8. Aligning actions in accordance with phases of service encounters comprehensively, the boundary of the e-leaning service was determined. The e-learning service system is shown in figure 7. 3.3 Optimizing Service Functional Quality As the service system is defined, the PRM helps to determine how to optimize the quality level of functions. 9. Five e-learning service providers were surveyed, and clarified the degree of importance of each function among vendors was clarified. Then, the PRM was completed to show the gap between customers’ demands and providers’ perceptions.

3.2 Identifying Service Boundary

7, 8. By comprehensively aligning actions with the phases of service encounters, the boundary of the e-learning service was determined. The e-learning service system is shown in figure 7.

3.3 Optimizing Service Functional Quality

Once the service system is defined, the PRM helps to determine how to optimize the quality level of each function.

9. Five e-learning service providers were surveyed, and the degree of importance of each function among vendors was clarified. Then, the PRM was completed to show the gap between customers' demands and providers' perceptions.

Figure 7: E-learning service system

10. Comparing the functions in the providers' perceptions with the persona's needs produced interesting results. In table 4, functions are listed from top to bottom in line with the phases of service encounters. Some items appear only on the provider's side or only on the receiver's side. More precisely, 'courteous response', 'reporting achievement', and 'improving services' appear only on the providers' side, while 'being aware of services', 'experiencing free-trials', and 'comprehension of course outlines' appear only on the receivers' side.

4 DISCUSSION

Figure 8 shows the BtoB and BtoC service models. In the BtoB model, the customers who pay for the service are the managers of companies, and the users are the employees who take the courses. In the BtoC model, however, the customer and the user are the same: the persona in the BtoC model both uses the service and pays for it. The border between the provider and the receiver has to be shifted when changing the target from corporate users to the consuming public. The proposed methodology clarifies the gaps in service quality between the provider and the receiver. It also reveals the predominant perceptions on the provider side. The BtoB segment accounts for more than 80% of the e-learning business. The major customers are the managers of companies, who evaluate the achievement of the employees and ask the providers to customize or improve courses for their business goals. Therefore, the functions targeting the managers' satisfaction dominate the providers' perception. The comparison helps the providers realize that important factors have been left unnoticed, so they can redesign functions in the new service.

Table 4: Importance of each factor in quality (PRM)

Phase | Provider | Importance | Receiver | Importance
Access | - | 0 | Being aware of services | 5
Check-in | Ease of finding course | 3 | Ease of finding popular courses | 3
Check-in | Ease of comparing programs | 3 | Ease of estimating ROI | 3
Check-in | Ease of getting course information | 3 | Ease of confirming business records | 3
Diagnosis | - | 0 | Experiencing free-trials | 5
Diagnosis | Providing sample contents | 3 | Confirming IT environment through free-trials | 3
Diagnosis | - | 0 | Comprehension of course outline | 5
Diagnosis | Course information | 3 | Ease of accessing course information | 3
Diagnosis | Sincerity for inquiries | 3 | Ease of applying for courses | 3
Delivery | Ease of distributing contents | 3 | Ease of preparing IT environment | 3
Delivery | High-quality contents | 3 | Interest in contents | 3
Delivery | Ease of paying tuition fee | 3 | Ease of paying tuition fee | 3
Delivery | Customizability of certificates | 3 | Sense of accomplishment | 3
Check-out | Courteous response | 5 | - | 0
Check-out | Quick response to inquiries | 3 | Sincerity and rapidity toward inquiries | 3
Follow-up | Course promotion (direct-mail) | 3 | Receiving next-step courses | 3
Follow-up | Reporting achievement | 5 | - | 0
Follow-up | Improving services | 3 | Renewing courses | 0
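The reading of the PRM described in step 10 above reduces to scanning each row for a function that one side values and the other does not (importance 0). A minimal sketch over a subset of Table 4's rows; the importance values come from the table, but the tuple layout is an assumption made for illustration:

```python
# Subset of Table 4 as (phase, provider function, provider importance,
# receiver function, receiver importance); None marks an absent function.
prm = [
    ("Access",    None,                 0, "Being aware of services", 5),
    ("Check-out", "Courteous response", 5, None,                      0),
    ("Follow-up", "Improving services", 3, "Renewing courses",        0),
]

for phase, p_func, p_imp, r_func, r_imp in prm:
    if p_imp == 0 and r_imp > 0:
        print(f"{phase}: receivers value '{r_func}', absent from providers' perception")
    elif r_imp == 0 and p_imp > 0:
        print(f"{phase}: '{p_func}' exists only in the providers' perception")
```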

Figure 8: Service boundary and border

5 CONCLUSION

It has been shown that the methodology of persona-centric service design provides a framework for service development. A case study on e-learning services showed that the methodology can identify the gaps that exist between the providers' recognition and the receivers' expectations, and that service developers can thereby become aware of the factors that are important for providing services. Although the work has demonstrated the methodology's effectiveness for this particular service, further evaluation should be conducted on other services in different business domains. The feedback will improve the methodology. The methodology will contribute to Service Engineering, strengthen Service Explorer, and accelerate service production.

6 ACKNOWLEDGMENTS

This work was partly supported by the Ministry of Economy, Trade and Industry.

7 REFERENCES

[1] Baines, T. S., Lightfoot, H. W., Evans, S., Neely, A., Greenough, R., Peppard, J., Roy, R., Shehab, E., et al., 2007, State-of-the-art in Product Service Systems, IMechE, Part B: Journal of Engineering Manufacture, 1543-1552.
[2] Meier, H., Volker, O., 2008, Industrial Product-Service-System – Typology of Service Supply Chain for IPS² Providing, The 41st CIRP Conference on Manufacturing Systems, 485-488.
[3] Normann, R., 2001, Reframing Business – When the Map Changes the Landscape, John Wiley & Sons, Ltd.
[4] Arai, T., Shimomura, Y., 2004, Proposal of Service CAD System – A Tool for Service/Product Engineering, Annals of the CIRP, 53(1), 397-400.
[5] Arai, T., Shimomura, Y., 2005, Service CAD System – Evaluation and Quantification, Annals of the CIRP, 54(1), 463-466.
[6] Hara, T., Arai, T., Shimomura, Y., 2006, A Concept of Service Engineering: A Modeling Method and a Tool for Service Design, Proc. of IEEE International Conference on Service Systems and Service Management 2006, 13-17.
[7] Arai, T., Hara, T., Shimomura, Y., 2008, Scientific Approach to Services: What is the Design of Services?, The 41st CIRP Conference on Manufacturing Systems, 25-30.
[8] Iga, A., et al., 2007, A Style of Craftsmanship (in Japanese), Tokai University Press.
[9] Cooper, A., 1999, The Inmates Are Running the Asylum, SAMS.
[10] Pruitt, J., Adlin, T., 2006, The Persona Lifecycle: Keeping People in Mind Throughout Product Design, Morgan Kaufmann.
[11] Sutcliffe, A., 2002, User-Centred Requirements Engineering: Theory and Practice, Springer.
[12] Aoyama, M., 2007, Persona-Scenario-Goal Methodology for User-Centered Requirements Engineering, Proc. of the IEEE International Requirements Engineering Conference.


Invited Paper

Stanford's ME310 Course as an Evolution of Engineering Design

T. Carleton, L. Leifer
Center for Design Research, Stanford University, 424 Panama Mall, Stanford, CA 94305, U.S.A.
[email protected], [email protected]

Abstract

ME310 is a radical course that has been taught at Stanford University since 1967. The year-long course is a graduate-level sequence in which student teams work on complex engineering projects sponsored by industry partners. Student teams complete the design process from defining design requirements to constructing functional prototypes that are ready for consumer testing and technical evaluation. This paper presents the first longitudinal study of ME310 and characterizes the course in terms of nine eras, each with distinctive teaching philosophies and class dynamics. By looking at one engineering design course in its entirety, a rough parallel is gained of how the field of engineering design itself has evolved over the last forty years. Data for this study were drawn from 80 surveys, 28 interviews, and 42 years of historical university enrollment records, course archives, and course bulletins.

Keywords: Engineering Design Education, Problem-Based Learning, Innovation, Immersion, Simulation

1 INTRODUCTION

Despite its age, ME310 is not your traditional engineering class. Taught since 1967, ME310 has developed a strong reputation at Stanford University as a cross between a senior capstone course, a prototyping laboratory, and a microcosm of Silicon Valley. The course combines the best of interdisciplinary teaching and problem-based learning for engineering design. ME310 also offers a successful formula of global networked innovation and provides a documented test bed of engineering education. In short, it is remarkable that the same course has been taught continuously for 42 years. Why does ME310 work? What has changed and what has held constant over this time span? How has the course influenced other educational practices in the U.S. and around the world? This paper presents the first longitudinal study of ME310, examining the dynamics between engineering design education and practice and the effects on diverse course participants, including faculty, students, project coaches, and industry liaisons.

2 COURSE OVERVIEW

ME310 is a year-long graduate course offered through Stanford's School of Engineering. It is mandatory for Stanford master's students specializing in Engineering Design and an elective for students from other disciplines. Due to various Stanford policies, the course was originally listed as ME201 from 1967 to 1974, then as ME210 from 1975 to 1998, and as ME310 from 1999 to 2009. The course is generally referred to as ME310 throughout this paper. Students are required to enroll in all three quarters of the academic year. In this Stanford course, student teams work on complex engineering projects sponsored by industry partners. Example industry partners are Autodesk, BMW, Lockheed Martin, Nokia, Panasonic, and Xerox Corporation. Each team of students selects a real problem or opportunity to pursue from those provided by the industry partners. Each


team also receives a hefty project budget and dedicated lab space (commonly known as the "310 loft"). Teams are typically composed of three or four Stanford students, and in recent years each team has collaborated with a similarly sized team at a global partner university. All student teams complete the engineering design process from defining design requirements to constructing functional prototypes that are ready for consumer testing and technical evaluation. Throughout the year, teams may choose to enlist the help of vendors, faculty, or students from other Stanford courses, the latter frequently from computer science, for their projects. The course culminates in a student project showcase, and each industry partner receives detailed documentation and prototypes for their respective projects. Moreover, a broader network supports the student teams each year. Project coaches are assigned to specific teams, providing relevant expertise and project advice. Coaches are often faculty or industry professionals, many of whom took the course as students. In addition, multiple teaching assistants and a small administrative team coordinate ME310 operations and logistics. In the last ten years, the course has been made available remotely to working professionals through the Stanford Center for Professional Development (SCPD) and to graduate students at global academic partners. Global academic partners for 2008-09 include Pontificia Universidad Javeriana (Colombia), Helsinki University of Technology (Finland), the Hasso Plattner Institut (Germany), and Universidad Nacional Autónoma de México (Mexico). In recent years, several student teams at Stanford have been matched with a corresponding student team from a global academic partner. Every global team also has its own faculty, teaching assistants, project coaches, and dedicated lab space. Figure 1 presents a visual summary of the key relationships occurring within the course at two points: when the course was established as a year-long sequence in 1972, and in 2009.

Figure 1. Network view of ME310 in 1972 versus 2009.

3 DATA COLLECTION

3.1 Research Objectives

The objective of this research study is to describe the evolution of ME310 from its inception in 1967 to 2009. Evolution is an apt term because, in order to thrive, the course has had to adapt throughout its history to multiple conditions arising from within Stanford University, as well as to external drivers in the global economy. By synthesizing multiple data sources, a more complete understanding of this dynamic course can be formed. Data sources for the study are described below.

3.2 Web Surveys

The total population of ME310 was not available to survey due to a lack of information about all members. For example, certain faculty members are deceased, older student alumni have drifted from contact with Stanford, and various industry liaisons have changed company affiliations. Two small and carefully chosen samples were used to represent the ME310 population, based on either available sample size or course influence. First, a convenience sample of student alumni was drawn from an online community composed of 128 members. In total, 47% of the student alumni (n=60) participated in the survey. A large contingent (39%) of them returned in other course roles, namely as project coaches (17%), teaching assistants (15%), or researchers (7%) of ME310 in following years. Sixteen participants (27%) were from global academic partners. Second, a random sample of 104 industry liaisons and project coaches was generated from a course database. These project coaches were all senior working professionals. Approximately 19% of this industry sample responded (n=20). Interestingly, 35% of the industry liaisons were ME310 student alumni, and two (5%) of the global faculty also served as project coaches to student teams at their respective universities. Both surveys were conducted online, and all responses were confidential. Although not statistically significant, taken together, the two web surveys (n=80) provide a rough approximation of the total course population. Survey questions addressed prior experience, course participation, lessons learned, and personal outcomes.

3.3 Individual Interviews

Interviews offered a way to gain deeper perspective on specific roles and intervals in course history. Interview candidates were identified by their course role and year of participation in order to generate a greater diversity of viewpoints. Twenty-eight individual interviews were conducted over a five-week period across the entire ME310 network, including faculty, industry liaisons, project coaches, teaching assistants, student alumni, global academic partners, and administrative staff. Many of the interview subjects served multiple roles over the years, for example, returning as course teaching assistants and later as project sponsors. All interviews were semi-structured and followed a common interview guide.

3.4 Course Archives

Each year, all student teams in ME310 produce detailed documentation about their project, including a final report. All reports are available in hard copy (digital only in recent years) for content analysis. Project reports serve as a valuable body of knowledge about ME310 and, at a minimum, reveal information about team size, industry category, project type, and solution timeframe. A representative sample of 135 project reports was reviewed for this study. In addition, all industry partners provide a project proposal to their respective student team at the start of the course, and available proposals from recent years were also examined. Other materials, such as videotapes and prototypes, were not examined.

3.5 University Course Descriptions

In addition, ME310 faculty have updated the course description since 1967. Consequently, 42 years of course descriptions have been captured in the annual Stanford University Bulletin, the university's official catalog of courses and degree requirements. By analyzing these course descriptions over time, broad trends can be detected in curriculum focus and language use. In other words, has ME310 been communicated differently to students, and what do these changes reveal about the course in light of its overall evolution?

3.6 University Enrollment Records

The University Registrar maintains all student records, including course enrollment. This study examined the change in ME310 enrollment by quarter to help identify


average attendance, peak years, and drop-off rates, as well as changes in faculty and teaching staff. Course records were only available from 1983 to the present. The U.S. Family Educational Rights and Privacy Act (FERPA) requires that all student data in academic files remain private, so only general enrollment data was reviewed.

3.7 Additional Research of ME310

Other scholars have at different times explored specific dynamics of ME310, such as team interaction [1], coaching [2], collaboration support [3], and team performance [4]. These studies provided further context.

4 COURSE PEDAGOGY

4.1 Design Engineering Education

Much has been written about the state of engineering education. Recent studies have highlighted growing challenges, specifically in globalization and innovation, which require improved skills in synthesis thinking and system building by engineering students [5][6]. One might argue that this need has been constant over the last century, and by reviewing longitudinal studies, the changes and progress of engineering education over time can be understood, specifically in the field of design engineering. In one of the few examples of a longitudinal study of design education, the authors discussed the need to train students in both hard and soft skills [7]. At Stanford University, ME310 was designed to address, and continues to address, exactly the issues raised by these authors. The course has functioned as a dynamic combination of problem-based learning (PBL), immersion, and simulation, as illustrated in Figure 2. Most other courses in Stanford's engineering curriculum and broader design program combine at most two of these approaches; ME310 consistently unites all three for student learning. The hands-on design experience becomes invaluable knowledge for the students' work and research after ME310. Each approach is discussed in more detail below.

Figure 2. ME310 as the dynamic combination of problem-based learning, immersion, and simulation approaches.

4.2 ME310 as Problem-Based Learning

According to the literature, characteristics of PBL include an emphasis on problem solving, the role of a facilitator or coach, and the use of reflection and self-directed exercises [8]. Students are actively engaged in their own learning process, becoming co-responsible for their education. A general finding of PBL is that student levels of interaction and participation increase tremendously. While the origins of PBL are often traced to medical education in the late 1960s [9], its foundations at Stanford date to the mid-1950s, lying at the roots of the


ME310 course. Professor John Arnold was recruited from the Massachusetts Institute of Technology to Stanford's Mechanical Engineering department in part because of his success in PBL teaching. Arnold brought together students from multiple disciplines to work in teams on industry-based (and also future-based) problems [10]. Writing in 1952, Howe noted: "Professor Arnold wants to develop men who can find drastic new solutions to old problems, and discover and solve new problems not yet recognized" [11]. ME310 is a PBL course in which students analyze real-life problems from industry and synthesize new opportunities. Stefik and Stefik noted that ME310 had adopted a project-based model using coaches in place of traditional product design education [12]. As a variant of PBL, ME310 has focused on product-based learning, in which students are given the opportunity to directly define and build a complex product component or system from concept to prototype [13]. Rather than evaluating mock scenarios, students are challenged to define real requirements and build solutions for real companies. Several different PBL models have been proposed over the years, most recently by Savin-Baden, who posits five models of PBL, including Model II, which is "focused on a real-life situation that requires an effective practical resolution" [14]. Model II may come closest to describing the nature of ME310. Savin-Baden has found that this type of model arises from curricula with strong ties to industry and tends to emphasize process skills, such as teamwork and communication, over content skills. The other models typically present sample problem scenarios to students, not necessarily drawn from the real world.

4.3 ME310 as Immersion

ME310 also provides an immersive experience. Students are thrust into a realistic situation that requires their full concentration over three quarters. Every detail in the project, such as vendor selection and billing, requires their real-time attention and decisions. It is a time-consuming engagement, often to the detriment of other courses, yet on reflection, nearly all students recall it as one of their best memories from college. While a preponderance of immersion studies can be found in advanced virtual environment research, several studies have discussed the benefits of immersive environments in other applications [15]. ME310 uses a combination of hardware and software tools to create an immersive physical space that functions as a central base throughout the year. The physical environment strongly influences student behavior, and the objective has been to augment the real space, stimulating the imagination using video and other digital equipment. In addition, all global teams interact with Stanford teams through mediated channels.

4.4 ME310 as Simulation

Lastly, ME310 serves as a simulator. The course is a training ground. Students learn by doing, prototyping the design process and the role of a design engineer. They gain practice in how to interact with other engineers and how to design in context. Beyond testing the prototype, many students also test different project roles, alternating responsibilities within their team. ME310 is a safe environment in which to experiment, fail, and try again. Simulation training is highly effective and sees extensive use today in medical applications [16] and the military [17]. Kneebone notes, "Simulators can provide safe, realistic learning environments for repeated practice, underpinned by feedback and objective metrics of performance" [16].
ME310 has also often been likened to a pre-incubator. In many ways, this comparison is not surprising because studies show that successful incubators are closely linked

with academic institutions. Ample research has been done in recent years on university-related incubators, which provide a simulation environment for technical entrepreneurs to start a new business with the support and resources of a university. Smilor and Gill documented several case studies of the earliest efforts by American universities [18]. One major finding from their research was that no single ideal model exists, due to multiple variables, and that any successful model may not be transferable in its entirety to another area. Another key finding was that many incubators address the need for entrepreneurial training and education through a combination of formal and informal programs. The objective is to instill additional business skills and know-how, so that the entrepreneurs can effectively build their businesses outside the safety of the incubator. Recent research by Tornatzky, Sherman, and Adkins found that the majority of best-in-class incubators were connected to a research-intensive university, medical research institution, or research laboratory [19]. It proves to be a mutually beneficial relationship. While the incubator provides a mechanism for commercializing university research, the university fulfills an emerging obligation to contribute directly to regional economic development. ME310 industry partners who participated in the study stressed the benefits of their Stanford affiliation and collaboration. In addition, university-related incubators are often used as a source of research for university faculty and students [18]. Similarly, ME310 has served as a research laboratory throughout its history.

5 COURSE EVOLUTION

5.1 Nine Eras in History

By looking at one course in engineering design in its entirety, a rough long-term parallel is gained of how the field of engineering design itself has evolved over the last forty years. Several trends are apparent as the course has shifted from phase to phase. The evolution of ME310 is analyzed primarily from an internal viewpoint, looking at the changes driven from within the course that have directly affected course pedagogy. ME310 has been characterized by nine eras, each with distinctive teaching philosophies and class dynamics. In short, engineering design has been taught (a) as synthesis, (b) as an immersive process, (c) as real-world problems, (d) as mechatronics, (e) as redesign, (f) as distributed teamwork, (g) as entrepreneurship, (h) as global innovation, and most recently, (i) as foresight. Although these eras are presented as separate time periods, in actuality they overlap. Table 1 summarizes the nine eras in the course history. Smilor and Gill, when examining case studies of university-related incubators, found that "In many instances, the unique character of an incubator is determined by the personality of the management team" [18]. Likewise in ME310, the faculty drove many of the changes that sparked each era, often bringing their personal teaching beliefs about engineering design and learning to the forefront. Savin-Baden makes a similar comment about faculty influence in PBL approaches, noting that "the positioning of knowledge in a problem-based learning programme will tell us more about the pedagogical stance of the staff than the forms of knowledge in action" [14]. The nine eras are described in the following sections.

5.2 Era I: Synthesis (1967–1972)

It would help to explain the context of Stanford University during the late 1960s. At this time, the Mechanical Engineering department was organized into three major divisions: Design, Thermosciences, and Nuclear. The

Design Division was largely concerned with "comprehensive systems design, product design, mechanical analysis and mechanisms design, and design components" [20]. In 1966, the actual development of student designs in any course was optional, subject to the instructor's approval. The precedent was set in 1967 with ME219, a three-quarter series that allowed graduate students to gain practice designing a machine: "The intent of the series is to involve the student in a major portion of the design-development process". The class was updated to stress multi-disciplinary thinking, and students turned working drawings into functioning systems. Also in 1967, Professor Henry Fuchs and other faculty introduced a new graduate course, in which students analyzed real-life case studies from industry using a combination of interviews, artifacts, and other records. This course also fulfilled a degree requirement in "Engineering Synthesis", which emphasized the value of integrating analytical skills with creative skills. This provided additional exposure to how practitioners worked and the problems they faced in engineering design. Professor Jim Adams became the director of Stanford's Design Division in 1970. Adams and Fuchs were invited to Harvey Mudd College, a small private college in Southern California, to tour the Engineering Clinic, which had been established in 1963 as a series of required courses "in which junior students form interdisciplinary teams to tackle company-sponsored design and research projects" [21]. Similar programs in cooperative "co-op" education were underway at other universities at the time, providing students with practical work experience. The visit prompted Adams to reconsider Stanford's course.

5.3 Era II: Immersive Process (1972–1974)

Both Adams and Fuchs were impressed with these existing practical models and decided in 1972 to expand ME201 into a three-quarter sequence that fit Stanford's design culture. They took the synthesis focus a step forward by emphasizing the immersive process of design in the second era of ME310. Not only was it important to unite multiple knowledge areas, it was also beneficial for students to directly experience the design process. The course was focused on learning by doing. Each quarter built on knowledge from the prior quarter, so the entire year was integrated. In particular, product testing and debugging was an important principle for Adams, helping students to understand "the difference between theory and actuality". From prior industry experience, he knew problems in hardware were complex, and the earlier a student could learn how to prototype and test, the more successful the final result could become. The new course appealed to local industry partners, and Adams explained, "It was a good way to bootleg ideas." Looking back, one student, whose degree specialized in engineering design, reflected, "For me, it was the first time I had ever really done an engineering design project." In terms of structure, each student team typically worked independently as a unit and had little interaction with other project teams. A Stanford Design Division faculty member served as a project advisor to every student team, so the entire division was engaged with the student projects. Aside from general metrics, course success was primarily measured by annual reviews conducted by Tau Beta Pi, the engineering honor society.
5.4 Era III: Real-World Problems (1975–1981)

As Adams took on different responsibilities at Stanford, the course transitioned to other faculty, including Professor Philip Barkan, over the next seven years. The third era of ME310 focused even more on real-world problems, and the course language reflected an emphasis on design considerations in manufacturing.


A co-instructor said, "The projects all came out of the corporate world. It was very much oriented to real design. We had clients come in from industry to critique [students'] designs. That was a very positive part of the program." In 1979, Barkan began the tradition of submitting final project reports to the James F. Lincoln Arc Welding Foundation, which sponsors an annual competition to recognize and reward achievement by engineering and technology students in solving design, engineering, or fabricating problems. For many subsequent years, Stanford University dominated Lincoln's college graduate division [22].

5.5 Era IV: Mechatronics (1981–1990)

By the early 1980s, the course shifted again to combine knowledge of mechanical engineering with electrical engineering and computer programming. With the growth in mechatronics and smart products – a class of products that rely on computer processing technologies and embedded systems – design for manufacturability had become a main concern. A project advisor of that period explained that the objective for students was to "learn systematic tools during design to evaluate manufacturing". One student noted his graduate degree concentration as "mechatronics" in the survey, and another student explained that he took the course because he "wanted to use a CAE (computer-aided engineering) package for a real industry project", reinforcing the growing importance of engineering software at the time. By 1988, Professor Larry Leifer was the lead instructor for ME310. He had taken ME310 from Adams as a graduate student in the 1970s and had then been involved as a project coach for several years. (Leifer also remembers the 'Philosophy of Design' course he took with John Arnold in the early 1960s, which ingrained in him the importance of asking questions, a lesson that Leifer repeats to his students today.) As director of Stanford's Smart Product Design Laboratory, Leifer had earlier expanded the graduate course in mechatronics into a three-quarter series with industry-sponsored projects, in hopes of mirroring the success of ME310. He explained, "Mechatronics is a particularly good medium for introducing PBL (product-based learning) because of its dependence on interdisciplinary collaboration" [13]. Working with other Design faculty, Leifer began to formalize elements of the emerging model of design thinking that had come to exemplify the department's teaching, building the foundations for what would become the product design firm IDEO and the Hasso Plattner Institute of Design at Stanford. ME310 became a gradual blend of design research and practice. Leifer also revised the teaching model; instead of assigning a faculty member per student team, he engaged industry professionals, experienced students, and other volunteers as project advisors. These advisors were soon referred to as industrial coaches, recognizing the value of hands-on guidance and mentoring on the student teams.

Era I (1967–1974) | Faculty: Fuchs, Adams, Staff | Pedagogy: Synthesis
  First mention of key phrases: "examination of artifacts and records", "interviews with engineers", "prepare written case histories"
Era II (1972–1974) | Faculty: Staff | Pedagogy: Immersive process
  First mention of key phrases: "project work accompanied by investigations of the design process", "fabrication", "testing", "team-taught"
Era III (1974–1981) | Faculty: Chilton, Piziali, Liu, Barkan, Staff | Pedagogy: Real world problems
  First mention of key phrases: "Real engineering projects presented by local industry", "Designs will be developed by small groups of students", "Industrial sponsor", "prototype", "methodology", "patents"
Era IV (1981–1990) | Faculty: Barkan, Chilton, Leifer, Staff | Pedagogy: Mechatronics
  First mention of key phrases: "Provides experience in technical presentations", "Students unfamiliar with manufacturing process and drafting", "Smart Product Design", "Designs will be developed through hardware phase", "design for manufacturability", "exposure to machine design and design methodology", "industrial 'coaches'", "automation technology"
Era V (1990–1995) | Faculty: Leifer, Staff | Pedagogy: Redesign
  First mention of key phrases: "Project-centered", "Rapid Prototyping", "design alternatives", "industrial team focuses on methodology", "teaching team focuses on methodology", "design exercises", "incremental test/assessment development cycles", "full-scale functional product prototypes", "projects are formally presented to an industrial audience", "Design Affiliates Conference"
Era VI (1995–1998) | Faculty: Leifer, Cutkosky | Pedagogy: (Distributed) teamwork
  First mention of key phrases: "Cross-Functional Systems Design", "communication", "Experiences in Team-Based Design", "Team-Based Design-Development with Corporate Partners", "design by immersion", "interdisciplinary, distributed, engineering design-teams", "Series of four design-development cycles", "Work guided by case readings", "sociotechnical infrastructure for self-management", "professional coach"
Era VII (1998–2004) | Faculty: Leifer, Cutkosky | Pedagogy: Entrepreneurship
  First mention of key phrases: "Tools for Team-Based Design", "limited SITN/global enrollment", "entrepreneurial design", "effective engineering design team in a business environment", "benchmarking", "deliverable is a detailed document with specifications", "part of the student's portfolio", "Each team functions like a small start-up company"
Era VIII (2004–2009) | Faculty: Leifer, Cutkosky | Pedagogy: Global innovation
  First mention of key phrases: "Team-Based Design Global Teaming Lab", "global design team with students in Sweden or Japan", "Project-Based Engineering Design, Innovation, and Development"
Era IX (2009–) | Faculty: Leifer, Cockayne | Pedagogy: Foresight
  First mention of key phrases: "The art, science, and practice of design innovation", "global foresight research team", "anticipatory research"

Table 1. Nine eras in ME310 history.
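The "first mention of key phrases" entries in Table 1 were drawn from 42 years of bulletin course descriptions. A mechanical version of that phrase-tracking analysis might look like the sketch below; the file layout (one plain-text description per year) and the phrase list are assumptions for illustration, not the authors' actual procedure.

```python
# Sketch: find the first year each key phrase appears in the course
# descriptions. Assumes files named bulletins/1967.txt ... bulletins/2009.txt.
from pathlib import Path

PHRASES = ["rapid prototyping", "industrial coaches", "entrepreneurial design"]

first_mention: dict[str, int] = {}
for bulletin in sorted(Path("bulletins").glob("*.txt")):
    year = int(bulletin.stem)
    text = bulletin.read_text(encoding="utf-8").lower()
    for phrase in PHRASES:
        if phrase in text:
            first_mention.setdefault(phrase, year)  # earliest year wins

for phrase, year in sorted(first_mention.items(), key=lambda kv: kv[1]):
    print(f"{year}: first mention of '{phrase}'")
```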

5.6 Era V: Redesign (1990–1995)

The next era of ME310 gradually moved away from an emphasis on mechatronics towards a growing emphasis on rapid prototyping. Student assignments in the first quarter taught them about the journey of product realization, starting with raw product concepts. Students were pushed to iterate and rework all mockups and prototypes, and they were encouraged to fail early and to fail often to


improve their thinking. One of Leifer's fundamental design axioms became "All design is re-design." He gradually added, "All learning is re-learning. All coaching is re-coaching." A visiting lecturer, who co-taught ME310 one year, noted: "The course somehow embodied the Design Division. You get physical, you mock things up, you test your ideas in a disciplined and creative way." A student, who later became a project coach, took ME310 because he had heard about the course's reputation: "It was a straight jump into Stanford's design philosophy". Another student echoed this comment: "I thought I would get indoctrinated in the Stanford way of thinking." During this era, benchmarking and instrumenting the design process became critical, allowing ongoing design activity and knowledge sharing to be recorded by teams. By 1993, all project documentation and team communication tools had moved online. Leifer explained, "The focus is on capturing and re-using informal and formal design knowledge in support of 'design for redesign'" [23]. In 1994, the course was offered remotely to professional students through the SCPD (then called SITN) program, which provided class lectures live via television broadcast and on videotape as a variant of distance education. Although SCPD students missed experiencing lectures in person, most lived locally, so they often joined their respective project teams outside work hours.

5.7 Era VI: (Distributed) Teamwork (1995–1998)

Over the next four years, Leifer increased the emphasis on teamwork, experimenting with different ways to enhance team culture and cohesion. Leifer realized that students in mechanical engineering could not become students of electrical engineering or computer science overnight, and that it was more effective if different types of students collaborated and shared skill sets. Leifer built on another axiom: that design is a social process. For example, multiple assignments in the first quarter allowed teams to mix up members repeatedly, so students could learn each other's working styles and skills before choosing a final project team. Team formation was directed to achieve optimal balance and diversity by using modified profiles of Myers-Briggs and Jung attributes, which many felt positively influenced project success [22]. A student alumnus from this era felt that, of all course activities, participating in group discussions had the strongest value and that providing peer reviews on other projects had lasting value – both of which rank highly in team interaction and collaboration. Other course traditions became firmly established, including a weekly beer bash called SUDS (soon translated as a Slightly Unorganized Design Session), which helped establish a sense of community among students. Leifer joked, "I lived off the donut cart at Hewlett Packard, so that was in there as a notion. I learned one can do that; one should do that." In 1996, ME310 was opened to select global team members to further increase team diversity. Professor Mark Cutkosky became a co-instructor in 1997, quickly immersing himself in the ME310 culture and allowing Leifer to step away to establish and oversee the Stanford Learning Lab, now rebranded as the Stanford Center for Innovations in Learning (SCIL).

5.8 Era VII: Entrepreneurship (1998–2004)

Cutkosky led ME310 for the next several years, and the character of the course sharpened even more.
The definition of design engineers was broadened in scope to emphasize skills in entrepreneurship and leadership, reflecting the Silicon Valley zeitgeist and growing startup fervor. Stanford engineering students responded positively. The course was an opportunity to learn about

"a business environment", develop a corporate-sponsored project that was "part of the student's portfolio", and function "like a small start-up company" [24]. The final report was recast as a "deliverable", adopting business jargon, and the digital collaboration tools were further improved. Cutkosky joked that the course itself is "like a company that has 100% turnover every two years," with the instructors and coaches providing the thread of continuity. In the spirit of redesign, Cutkosky tweaked several assignments and added several new design methods to the course curriculum. He wanted students to continually challenge their assumptions throughout the design process. He explained, "It grew out of my frustration that students were reaching premature closure" and shrinking the design solution space unnecessarily. For example, the Critical Function Prototype asked students to build a mockup that focused on the one most vital feature of their product concept, which allowed them to refocus and prioritize their efforts, ideally from a user perspective. In addition, the Dark Horse Prototype required students to build a mockup that was potentially promising but had been rejected earlier in favor of a preferred approach, in order to revisit first hunches and further push the limits of team creativity. These two methods have since become embedded in Stanford's design ethos. Leifer returned from his term at the Stanford Learning Lab with new ideas about active team learning and communication. Leifer and Cutkosky decreased the emphasis on global collaboration and instead focused on student interaction. Cutkosky explained, "The challenge is to create a 'community' atmosphere that promotes learning between teams as well as within each team" [25]. Local team bonding increased even more. One student dropped a competing course, which combined mechanical engineering with business skills, because ME310 "seemed more fun, like a community." Another student said, "We had at least one other class party at some point where we did DDR (Nintendo's Dance Dance Revolution) and we regularly did dinner together, took other classes together, did karaoke, went skiing, etc. The teaching assistants were also instrumental in the class bonding, in addition to being good sources of help during the course. I believe that the depth and extent of our class community was more significant than any other class I've seen since." Reflecting on lasting lessons for career and life, a third student from that era reported that, "Personalities affect design just as much or more so than design skills."

5.9 Era VIII: Global Innovation (2004–2009)

Building on what they had learned about team collaboration, Leifer and Cutkosky expanded the influence of engineering design in the most recent era of ME310. By 2004, engineering design was truly multidisciplinary, multicultural, and even multi-purpose. Since the mid-2000s, the rhetoric of design thinking had risen, showing the world of business how design provides a viable strategy to convert user needs into market demand. More than entrepreneurship, engineering design was now an essential element of innovation, both in terms of process and outcome. Design was also enmeshed in a global business context, and Leifer was particularly interested in exposing Stanford students to more of the world outside Silicon Valley. By 2005, nearly half of the Stanford student projects were paired with global academic partners, and by 2007, all projects had a sister global team.
All global partnerships were organically structured, requiring each student team to actively decide and negotiate their own relationships. Several student alumni commented strongly about learning global team


management, both positively and negatively, as a lesson for their careers and lives. Students who took ME310 during this era were also more business-savvy, with 30% bringing at least two years of previous industry experience into class. The reasons students gave for enrolling in the course also ranged widely; one said that he desired the "practical application of design thinking to business proposals." In addition, unlike in all previous eras, the students surveyed from this era ranked traditional "soft" process skills – such as project coordination, team management, presentation skills, and startup mentality – as having lasting value, compared to discipline-specific content skills. ME310 was an opportunity to connect with future employers, and 21% of the alumni surveyed said that they received a job offer from one or more ME310 industry partners. Others used ME310 to build a personal network, and over a third of student alumni were in touch with 10 or more other participants.

5.10 Era IX: Foresight (2009–)

A ninth era has emerged this academic year, focused on foresight. Analysis of the data shows that, from 1967 to 2004, all proposals from industry partners asked students to address an immediate problem, and the corresponding solutions were to be built in the next product cycle. By 2004, industry partners began to extend the project time horizon, requiring students to contemplate solutions in the far future. Sample project proposals described "future elderly environments" and the "technician of the future". Instead of short-term design solutions, a growing number of industry partners wanted students to explore possible opportunities and future users 15 or more years in the future. Figure 3 depicts the steady rise in long-term industry proposals.

Figure 3. Trend of long-term ME310 industry projects.

Responding to this recent trend, Leifer engaged Professor William Cockayne to develop a sister course, ME410, which was piloted in 2008-09 at Stanford. Built on an existing foresight program underway since 2002, ME410 has taught students complementary methods in foresight strategy and long-range innovation, so that they can develop a broader context for their subsequent efforts in engineering design [26]. Time will tell about the exact nature of this shift in pedagogy and in industry partner interests. ME310 has continued to emphasize design thinking and innovation.

6 INFLUENCE ON OTHER ACADEMIC PROGRAMS

At Stanford University, ME310 has positively influenced the development of other courses, such as the three-quarter course series about smart product design. The broader impact of ME310 has occurred in two primary areas: other American universities and global academic institutions. Table 2 summarizes several example programs directly inspired by the ME310 course model. Please note that this table does not represent an exhaustive list; instead, it is intended to demonstrate a representative diversity of courses. Several ME310 student alumni who have since become course instructors or faculty have adapted ME310 entirely or integrated key aspects of the pedagogy to enhance their respective curricula. For example, Professor Natalie Jeremijenko explained, "ME310 has been enormously influential on me, and influenced a whole program I developed as faculty in Yale Engineering, which influenced the ABET accreditors. It has influenced the capstone projects of the environmental studies program at NYU, and at UCSD, and now I am modeling my new systems design masters' degree on the 310 model" [27].

University | Location | Year | Course Name
Aalto University | Finland | 2007 – current | Kon-41.4002 – Product Development Project
Loyola Marymount University | U.S. | 2006 – current | MECH/SELP/MBAH 673 – New Product Design and Development
Luleå University of Technology | Sweden | 2001 – current | M7017T – SIRIUS: Creative Product Development
Massachusetts Institute of Technology | U.S. | 1980 – 1987 | 2.731 – Advanced Engineering Design; 2.732 – Advanced Design Projects
New York University | U.S. | 2006 – current | Systems Design Masters; VIS149 / ICAM130 – Feral Robotics
Reykjavik School of Art & Design | Iceland | 2008 – current | HFR0122H – HowStuffIsMade
Santa Clara University | U.S. | 1986 – current | ME194, ME195, ME196 – Advanced Design I-III
Univ. of California at San Diego | U.S. | 2005 – 2006 | Vis 147B – Feral Robotics
University of Maryland | U.S. | 2007 | ENME 472 – Integrated Product & Process Dev.
University of St. Gallen (HSG) | Switzerland | 2005 – current | 7,004-2 – Design Thinking & Business Innovation
Yale University | U.S. | 2003 – 2004 | E&AS 996 – SynThesis; ME 386 – Feral Robotics: IT in the Wild

Table 2. Sample academic programs inspired by Stanford's ME310 course.

7 CONCLUSION

It is remarkable to witness how one course has had an unusually large effect on the lives of multiple participants at Stanford University, including roughly 3223 students over the years, many of whom have returned to the course as project coaches or teaching assistants. One student alumna acknowledged the hands-on experience she gained in ME310 and reflected that, "In retrospect, now that I've been out in the workforce, I see what a rare environment and opportunity we had to work in at the [ME310] loft."

7.1 Research Limitations

While this study's analysis may be illuminating, several limitations in the data are important to recognize. All survey and interview responses are self-reported, and older memories are subject to the vagaries of time. Moreover, the survey sample is not statistically significant, nor does it accurately represent the entire population of ME310. Lastly, the course bulletins were used as a proxy for faculty beliefs about pedagogy and may not necessarily reflect their true intentions, or what actually occurred during the early years of the course.

7.2 Future Directions

This study offers just a start to understanding the complete body of knowledge in ME310. It would be interesting to compare the various eras described here with broader engineering educational trends or economic activity to see if any close linkages exist. In particular, one lens is to examine the pattern of external drivers, such as changes in the course's industry partners, on the development of ME310. Another question is raised about the changing nature of student development. Has the type of engineering design student changed considerably, and are there any corresponding shifts in student expectations, skills, and backgrounds over the years? Furthermore, the extensive ME310 course archives, including student reports and multimedia, provide another source of considerable data that has not yet been fully mined. ME310 has an amazing legacy built on 42 years at Stanford University, helping to redefine the frontiers of engineering design. My hope is that this Stanford course has additional decades ahead to pioneer.

8 ACKNOWLEDGMENTS

The authors would like to thank Dr. William Cockayne for his thoughtful insights during revisions of the paper.

9 REFERENCES

[1] Eris, O., and Leifer, L., 2003, Facilitating Product Design Knowledge Acquisition: Interaction Between the Expert and the Team, International Journal of Engineering Education, 19(1): 124-152
[2] Reich, Y., Ullmann, G., van der Loos, M., and Leifer, L., 2007, Perceptions of Coaching in Product Development Teams, Proceedings of the 16th International Conference on Engineering Design, The Design Society
[3] Ju, W., Ionescu, A., Neeley, L., and Winograd, T., 2004, Where the Wild Things Work: Capturing Physical Design Workspaces, Proceedings of the Conference on Computer Supported Cooperative Work (Chicago, IL), 533-54
[4] Mabogunje, A., and Leifer, L. J., 1997, Noun Phrases as Surrogates for Measuring Early Phases of the Mechanical Design Process, Proceedings of the 9th International Conference on Design Theory and Methodology, ASME (Sacramento, CA)
[5] Sheppard, S. D., Macatangay, K., Colby, A., and Sullivan, W. M., 2008, Educating Engineers: Designing for the Future of the Field, Jossey-Bass, San Francisco, CA
[6] Vest, C. M., 2008, Special Guest Editorial: Context and Challenge for Twenty-First Century Engineering Education, Journal of Engineering Education, 97(3): 235-236
[7] Naveiro, R. M., and de Souza Pereira, R. C., 2008, Viewpoint: Design Education in Brazil, Design Studies, 29: 304-312

[8] Bridges, E. M., and Hallinger, P., 1995, Implementing Problem-Based Learning in Leadership Development, ERIC Clearinghouse on Educational Management, Eugene, OR
[9] Evensen, D. H., and Hmelo, C. E., 2000, Problem-Based Learning: A Research Perspective on Learning Interactions, Lawrence Erlbaum Associates, Mahwah, NJ
[10] Hunt, M., 1955, The Course Where Students Lose Earthly Shackles, Life magazine, May 2, 186-202
[11] Howe, H., 1952, 'Space Men' Make College Men Think, Popular Science, October, 124
[12] Stefik, M., and Stefik, B., 2004, Breakthrough: Stories and Strategies of Radical Innovation, MIT Press, Cambridge, MA
[13] Leifer, L., 1997, Suite-210: A Model for Global Product-Based Learning With Corporate Partners, ASME Curriculum Innovation Award
[14] Savin-Baden, M., 2000, Problem-Based Learning in Higher Education: Untold Stories, The Society for Research into Higher Education and Open University Press, Buckingham, UK, 126 and 124
[15] Psotka, J., 1995, Immersive Training Systems: Virtual Reality and Education and Training, Instructional Science, 23: 405-431
[16] Kneebone, R., 2003, Simulation in Surgical Training: Educational Issues and Practical Implications, Medical Education, 37: 267-277
[17] National Research Council, 1997, Modeling and Simulation: Linking Entertainment and Defense, National Academy Press, Washington, DC
[18] Smilor, R. W., and Gill, Jr., M. D., 1986, The New Business Incubator: Linking Talent, Technology, Capital, and Know-How, D.C. Heath & Company, Lexington, MA, 78
[19] Tornatzky, L., Sherman, H., and Adkins, D., 2003, Incubating Technology Businesses: A National Benchmarking Study, National Business Incubation Association, Athens, OH
[20] Office of the University Registrar, 1986, Stanford University Bulletin: Courses and Degrees, 19(7): 50 and 156, Stanford University, CA
[21] Harvey Mudd College, 2007, Engineering Clinic Guidelines Handbook, Harvey Mudd College, Claremont, CA, introduction
[22] Wilde, D. J., 2008, Teamology: The Construction and Organization of Effective Teams, Springer, Germany, 2-3
[23] Hong, J., and Leifer, L., 1995, Using the WWW to Support Project-Team Formation, ASEE/IEEE Frontiers in Education 95 Conference
[24] Office of the University Registrar, 1998, Stanford Bulletin, 1(34): 167-168, Stanford University, CA
[25] Cutkosky, M., 2000, Developments in (Global) Project-Based Design Education, presentation given at the Tokyo Metropolitan University of Technology, March 29, Tokyo, Japan
[26] Cockayne, W., 2009, Becoming a Foresight Thinker, Funktioneering Magazine, 1: 12
[27] Apfel, R. E., and Jeremijenko, N., 2001, SynThesis: Integrating Real World Product Design and Business Development with the Challenges of Innovative Instruction, International Journal of Engineering Education, 17(4&5): 375-380

Invited Paper

Educating T-shaped Design, Business and Engineering Professionals

T-M. Karjalainen, M. Koria, M. Salimäki
International Design Business Management Program, Helsinki School of Economics, P.O.B. 1210, FI-00101 Helsinki, Finland
[email protected], [email protected], markku.salimä[email protected]

Abstract

The paper provides an insight into the International Design Business Management Program (IDBM), an attempt to educate T-shaped professionals who can combine design, business and technology knowledge. IDBM is creating a new master's program, in which a solid theoretical basis, delivery methods, and structure are being developed to improve both coherence and relevance within the curriculum. To contribute to the coherence of systemic competence development, a model of five major dimensions – tools, environment, management, process, organisation (TEMPO) – is proposed. To ensure the relevance of the program, four professional orientation tracks are built into the curriculum: research, management, consulting and entrepreneurship.

Keywords: Multidisciplinary Education, Systemic Competence, Curriculum Development, Design Strategy

1 INTRODUCTION

Design, business and technology form a combination that is increasingly sought after in the contemporary business environment. The strategic management of modern companies and brands, linked to the NPD and innovation activities of the company, is a complex endeavor and requires multi-talented development teams. Getting designers, marketers, strategists and engineers to work effectively in a joint process is, however, not always easy. If communication alone can create a severe obstacle to effective teamwork within specific disciplinary areas with consistent knowledge bases, the difficulties are multiplied within larger multidisciplinary groups [1]. The global operating context of many companies adds the communicative friction created by different cultural backgrounds to the picture. It is assumed that most challenges result from a poor understanding of the languages, tools, practices, and thinking models that team members with different backgrounds possess. Moreover, effective product development teams and organizational structures are prerequisites for the innovation and creativity that are increasingly sought after in many companies. The more innovative the product, the more creativity is required and the greater the need for different kinds of expertise in the team [2]. Innovative teams are constructed of members who hold not only disciplinary expertise but also strong multidisciplinary knowledge and experience. Well-functioning product development teams need experts in design, technology, and business who can master their discipline-specific tasks and, in addition, are able to work effectively with representatives of the other disciplines.

2 IDBM PROGRAM

In order to provide students with practical and realistic teamwork experiences already during their studies, industry collaboration and project-based learning are becoming increasingly important topics in education.


The International Design Business Management Program (IDBM) is an example of an educational initiative tailored to meet the expectations of the modern business field. IDBM educates future professionals by preparing them to work in multidisciplinary teams and providing them with a strategic view into design management and, more precisely, the management of international design-intensive businesses, operations, and NPD. IDBM is a joint teaching and research program of three leading Finnish universities: the Helsinki School of Economics (HSE), the University of Art and Design Helsinki (TAIK), and the Helsinki University of Technology (TKK). Since its establishment in 1995, over 450 students have completed the program, many of whom today hold high positions in industry. IDBM has also established close contacts with industry: about 150 company projects have been executed with some 70 Finnish and foreign companies. The purpose of the Program is to bring together students from the key disciplines within the concept of design business management. IDBM trains skilled professionals for key roles in international design business by underscoring the importance of design as a competitive factor in different industries. Arising particularly from the needs of companies, the program provides business, design and engineering students with an opportunity to practice interdisciplinary and interpersonal skills through shared projects and courses. A multidisciplinary approach, an international context, a strategic view, and hands-on company collaboration are the cornerstones of the IDBM Program. Eight-month-long industry projects form the core of the program. The multidisciplinary knowledge that students gain from theory courses is applied in practice particularly through these projects. A typical IDBM industry project is completed in teams of 3 to 5 students representing the different knowledge areas of HSE, TAIK and TKK. Moreover, 2-3 project tutors are selected for each team, both from the participating universities and from industry, to supervise the progress of the project.

The projects have dealt with new product concepts (design and user interface issues), the definition of customer needs and the future environment of a product, the analysis of markets and customer feedback, and the examination of corporate identity, communications and design management, among other things. Through these projects, IDBM has not only been able to provide students with practical coaching in a real-life corporate and NPD environment, but has also gained wide experience of the challenges and working forms of industry-university collaboration. The projects have been well appreciated by companies, who are able to come into contact with creative and innovative students and to obtain first-hand information on the most recent research and training in the field. During the course of the project, the company also has an opportunity to evaluate the students in view of possible future collaboration.

3 MULTIDISCIPLINARY KNOWLEDGE
IDBM is not only about developing students' practical team work skills and affinity towards different mental models and disciplinary practices. It is suggested that multidisciplinary exposure has the potential to create novel knowledge that would not occur in a purely disciplinary context. In other words, well-functioning teams not only get along in daily activities but can also create a shared body of knowledge that is more than the sum of the individual members' own knowledge and skills. This concerns the creation and sharing of explicit and tacit knowledge, but especially of so-called "embedded knowledge" within the teams [2]. Embedded knowledge can be defined as a combination of team members' tacit knowledge that is potentially created as soon as team members get together. This type of knowledge is inherent in well-functioning teams: collective knowledge that cannot be held efficiently by individual members. It is proposed that the better team members recognize and acknowledge the roles, strengths and limitations, as well as the practices and thinking models, of their team mates, the more purposeful embedded knowledge is created. This is also the fundamental ground of the IDBM Program. Sensitivity to generate embedded team-specific knowledge, or what could also be called multidisciplinary knowledge, can be nurtured through project-based learning. The embedded knowledge that a team possesses is transferred to "embodied knowledge" in the new product that the team develops [2]. How successfully the embedded knowledge transfers to embodied knowledge, in other words, how well the product meets the intended goals, is a central challenge in multidisciplinary team work.

4 T-SHAPED
Successful and effective embodiment of knowledge is the fundamental goal of the IDBM Program. It aims at creating trust between different disciplines and boosting hands-on interaction through project-based learning. Through this, information redundancy is enabled, meaning that only a minimal amount of (formal) information sharing is required within teams. These aspects are all important characteristics of knowledge embodiment [2]. These "exogenous" aspects thus contribute to the construction of the multidisciplinary knowledge base.

This base can be characterized by the concept of the "T-shaped" skill profile. The concept of persons with T-shaped skills was originally proposed by Iansiti [3]. T-shaped persons are experts in specific areas (the T's vertical stroke) and know how their discipline interacts with others (the horizontal stroke). In addition to their specific disciplinary knowledge, they are exposed to the experience and knowledge of other disciplines. The T-shaped approach generates shared mental models, prior knowledge of how things are supposed to be, as well as NPD routines, concretized in the form of regular and predictable patterns of organizational behavior, and inherent innovativeness within product teams [2]. Even though the composition of teams and the professional tasks that the students will face in their future careers differ from those constructed in the study phase, they will supposedly be better prepared and more sensitive to work effectively and efficiently in different teams and contexts. Before entering the IDBM Program, these students have been taught within their respective disciplines to reach the level of expertise and knowledge that suffices for performing their disciplinary tasks well. IDBM then develops their multidisciplinary skills, and thus forms the horizontal stroke of the T-shaped skill set, which enables team members to interact with one another. The T-shaped approach proposes that students' vertical skills are a prerequisite for the creation of new embedded knowledge within the teams. Interaction of different knowledge sets can result in creativity and new ideas [2, 4]. It is assumed that the higher the disciplinary knowledge level of individual members, the greater the potential for creative ideas within the team. Multidisciplinary interaction can create "creative abrasion", a deliberate conflict of different ideas at a cognitive level that leads to increased effectiveness and efficiency, as well as innovativeness, of NPD [2]. This remark is important: without T-shaped skills, teams may end up in a state of abrasion that is not creative but destructive. Tim Brown, CEO and President of IDEO, who has profound experience in innovative processes and multidisciplinary team work, states that T-shaped people work, and need to work, in a highly experiential manner [5]. Innovative products are created through trial and error. Multidisciplinary education must also take place through structures and practices that allow and develop the creative teamwork skills of students, and must comprise challenges that are sufficiently demanding and derived from a real-life context. This approach is embraced in the IDBM Program specifically through the industry projects that the students execute for industry under senior supervision.

5 STRATEGIC VIEW
T-shaped professionals possess knowledge both on the practical (or operational) level and on the strategic level. In addition to practical skills, the focus of the IDBM Program is to improve the strategic knowledge base of the participants. In a multidisciplinary context, this knowledge is often tacit. Students have acquired the main body of their disciplinary knowledge and skills, that is, their practical abilities and expertise as professionals, in their own universities. The idea of IDBM is not to educate a business graduate to become a designer, or an engineer to learn an array of marketing tools, but to get them to acknowledge the existence and profiles of the tools, practices, and mental models employed by other professions.


This conceptual notion, however, overlooks the fact that multidisciplinary team work naturally involves a variety of practical and operational skills that the team members incorporate in interaction, whether explicitly or tacitly. When entering the Program, students have already developed different professional profiles. Business, and often also technology, students are usually better trained in strategic thinking than design students, who typically have a better command of the practical skills linked to design processes and NPD. In this situation, the greatest outcome of the IDBM program for the design student is to become more accustomed to strategic thinking and planning processes, while the business students learn about product development practices.

6 IDBM TOMORROW
IDBM has generated a convincing track record in terms of company collaboration and positive experiences of the participating students, and has developed a widely acclaimed image in the international context through project collaboration with a number of foreign universities. The IDBM program has also functioned as one of the forerunners of the forthcoming Aalto University that is forming in the Helsinki area. Aalto University, starting operations in 2009, merges HSE, TAIK and TKK into one high-quality university that applies the IDBM approach in a larger context. Within the Aalto initiative, IDBM is currently undergoing drastic development towards increased activities. The aim is to further foster multidisciplinary education and research in close collaboration with companies, and more strongly in the global context. Thus the successful program is facing new challenges; in many ways, the achievements of the IDBM program have paved the way for this integration process, and the initiative is now in danger of being overtaken by its own success. To remain at the forefront of innovative knowledge delivery, the IDBM program is currently developing a new 2-year master program that would start at HSE in 2010. In addition, there is an increased emphasis on doctoral education, as well as on forming a global research alliance with a number of international partner universities to support IDBM education and research. The new joint Masters program in IDBM has the objective of educating world-class multidisciplinary professionals in global business development with design and technology. This will be achieved by upgrading the current IDBM program into a full M.Sc., developing further the theoretical grounding, reviewing the course structure for coherence and relevance while adding and trimming the course offering to suit the new needs, and developing further the appropriate teaching methods for the multidisciplinary and cross-cultural content in a global setting. Furthermore, the program is creating a strong research structure that is able to cross-fertilize the offering through an enhanced knowledge base.

7 SYSTEMIC THINKING
Several key elements underpin the thinking behind the new program. In the first place, the program is seen to be highly multidisciplinary, which is understood as different disciplines addressing common challenges as equal stakeholders, creating new knowledge and aiming at increasing integration.


The program is also cross-cultural, as it is based on the interactivity and exchanges of individuals that act beyond and above national and cultural groups. This is closely linked with the global perspective, which is understood to contain the idea of blending and transformation of the local and the supranational into a single system built on harmonious co-existence and diversity. Furthermore, the program has a future orientation, as it seeks to develop capabilities that reach for the future in new business, product and service development. All of the above elements have been present in the program already to date, explicitly or implicitly, either as named objectives or as assumed and emergent phenomena. The review and verification process for the upgraded program has, however, led to an additional need to make explicit the systemic nature of the IDBM. Systemic thinking is seen to be based on holistic, synthetic views that replace the traditional reductionist, analytic perspectives [7]. Systemic thinking builds on the observation that the whole cannot always be reduced to its parts without loss of knowledge. Understanding a system requires a holistic view of the system itself and, in many cases, interactive participation within the system. The study of holistic systems emerges from systems theory, cybernetics and engineering, and is linked to intelligence research, philosophy and complexity. The key link to the program is derived from the perception that multidisciplinary, cross-cultural, global and future-oriented undertakings are complex and require highly developed systemic competences from those individuals that intend to operate in the said context (see Figure 1).

Figure 1: Competence building objectives in the IDBM master program.

Systemic competences have been defined in this context as the abilities related to whole systems. This is generally seen to include the diffusion and transferability of skills, and combinations of knowledge and understanding. While many first-cycle degree (BA/B.Sc.) holders possess viable and adequate instrumental and interpersonal skills and competence, according to our experience they often lack systemic, integrative competences. As such, the perspective of businesses as complex social systems has been around for a while [8][9], and business schools especially have addressed the issue through their offerings. That being said, the complexity of the IDBM context is seen to warrant special and explicit attention to the issue. To diffuse systemic thinking within the program, two specific key issues have been taken up in a comprehensive manner: coherence and relevance. Through addressing these two issues in a holistic and integrated fashion, the IDBM program is seen to be able to start on the journey towards a systemic thinking platform that will enable the participants to gain real and tangible advantage in their future activities.

8 PROGRAM COHERENCE
In the development of the new IDBM Program and similar educational approaches, there exist a number of practical challenges and fundamental decisions to be made. One of them is linked to the internal coherence of the program offering. To address the issue of internal coherence, the program has developed a series of cross-cutting competence dimensions. Through the TEMPO dimensions (Tools, Environment, Management, Processes, Organization), the program intends to ensure that cross-cutting issues are addressed in a holistic fashion throughout. The identified dimensions underpin the development of systemic competence and enable coherence in the educational delivery. The TEMPO dimensions chosen are seen to cut across all business, design and technology activity. These dimensions are used throughout as the key content evaluation criteria for the course and project offering in the program. Having such criteria is critical in an environment that cuts across various institutions with differing worldviews and backgrounds. On another level, TEMPO enables the rapid verification not only of the offering but also of the demand; in other words, it also serves to evaluate the choices of participants. In order to achieve balanced learning profiles, the cross-cutting dimensions can be used to verify the balance of the personal study planning (see Figure 2).
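As a toy illustration of such a balance check, consider the following minimal sketch. Only the five TEMPO dimension names come from the program; the credit figures, the data structure and the threshold rule are invented here purely for illustration.

```python
# TEMPO dimensions as named in the text; all numbers below are invented.
TEMPO = ["Tools", "Environment", "Management", "Process", "Organization"]

# Credits of a hypothetical personal study plan, tagged by dimension.
study_plan = {"Tools": 12, "Environment": 3, "Management": 9,
              "Process": 10, "Organization": 2}

def balance_report(plan: dict, tolerance: float = 0.5) -> None:
    """Flag dimensions that fall clearly short of the mean credit load."""
    mean = sum(plan.values()) / len(TEMPO)
    for dim in TEMPO:
        credits = plan.get(dim, 0)
        flag = "  <-- underweighted" if credits < tolerance * mean else ""
        print(f"{dim:12s} {credits:3d} credits{flag}")

balance_report(study_plan)  # flags Environment and Organization here
```

A real verification would of course weigh courses and projects rather than raw credits, but the principle of screening a plan against all five dimensions is the same.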

Figure 2: Cross-cutting dimensions in systemic competence building.

The dimensions above are not presented in an order of importance; they are all important in terms of achieving systemic competences. There is, however, a time-linked causality between the Tools dimension and the others. The Tools dimension is incorporated in the very first activities that the participants undertake in the program. This is due to the need to develop complementary instrumental competences that enable the subsequent development of more systemic thinking. Tools include, among other things, developing skills in project management, design audits, and qualitative and quantitative methods. These enable higher-level processes of conceptualizing, planning and executing, especially in the interactive project work of the program. In many ways, the Tools respond to the need of learning how to undertake activities. The TEMPO dimensions further include building and sustaining Environments that enable creativity in global, cross-cultural and multidisciplinary contexts. The importance of conducive physical and virtual environments to successful operations is well established. What is not so clear is how these environments can be built up and sustained in the types of complex settings that the future IDBM professionals can be seen to be operating in. This is also

a clear point of reflection for the program itself; how to go about establishing best practice in this area appears to be quite challenging. Undoubtedly, cognitive psychology, the study of work, and innovations in space, work and living are all issues to be considered in this context. The environment dimension is all about the place where things happen. Thirdly, TEMPO includes consideration for developing systemic and strategic Management (of and in) design business. There is a wide range of issues that need to be covered under this heading, specifically those related to business management. Starting from functional issues in HR, marketing and finance, and including the management of entre- (and intra-) preneurship, the dimension can also cover aspects related to, say, managing continuous innovation in administration. Broadly speaking, the management dimension is understood as the driver of business. Key activities of the IDBM program are related to the Process of developing new business, product or service concepts. These involve a clear future orientation, through developing new ideas into business, incubation, growth through design, re-invention, NPD, and technological and service innovations. In many cases the operational forum is linked to the categories of firms aptly named "Born Globals" [10]. Processes are at the very core of the IDBM essence, the what dimension of the whole program. Lastly, enabling novelty, utility and success in Organizations is a key dimension of the program. From innovative teams to major players, organizations form the institutional setting of all design business effort. In many ways, the organizational settings act as the enablers that are needed for successful operations. The IDBM program is therefore widely interested in settings that enable multidisciplinary creativity. Based on organizational studies on project-based work, temporary organizations, organizational innovation, and mergers and acquisitions, to name a few, the program expects to deliver updated best practice. As with the environments dimension, significant clarification of the most appropriate knowledge base needs to be established; this is recognized as a major undertaking.

9 RELEVANCE OF THE PROGRAM
Another key issue in the development of the new program is linked to the relevance that it has for the participants. In many cases it appears that programs assume relevance through inference, i.e. that participation is an explicit indicator that the program is relevant in terms of its educational offering. While this may be so, this approach does not offer active tools to plan, verify and direct the program's relevance to the participants. To address this issue, the IDBM program under development has created the concept of professional orientation tracks (see Figure 3). These orientation tracks consist of a number of possible pathways along which future professionals might proceed in their careers. At present, four tentative tracks have been identified that are highly relevant to the IDBM program: research, management, consulting, and entrepreneurship. The first track, research, forms the key pathway towards future doctoral research and studies. It is also highly relevant to individuals who expect to operate in business intelligence, market research, organizational development and other similar activities that require intimate knowledge of the methods, approaches and cross-disciplinary tools that can be used to make sense of highly ambiguous and volatile realities.


Secondly, the management track aims to chart the roadmap for future professionals involved in, say, design management, NPD and service management, among other such tasks. The key differentiating factor from generic management practice is linked to the need to understand, manage and influence multidisciplinarity in complex settings. This is made more difficult still by the often global, cross-cultural settings and a high level of ambiguity. While activities related to consulting are often related to managerial action, the field has specific and exceptional characteristics that warrant a separate track. The delivery environment, often project-based, is usually specific, and the temporality of the undertakings is distinct from the ongoing nature of more constant managerial action. The roles that consultants adopt - or are assigned - in design business also differ from day-to-day managerial action. Entrepreneurship and intrapreneurship are key features of design business initiatives and are seen to warrant a special orientation track. The overarching aim of these tracks is to ensure that the program is relevant for the foreseeable future of the participants. The orientations themselves are not set in stone: revisions are expected over time, and new tracks may be added while others are retired.

Figure 3: Multidisciplinary professional orientation tracks.

Timing-wise, the orientation tracks are taken onboard at the start of the studies and developed through the course offering. They are present throughout the elective courses and projects, and emerge finally at the thesis stage as an orientation for the thesis work. In this way they form a logical framework for the entire course of study.

10 SUMMARY
This paper has reported on key issues related to creating a new cross-institutional masters program combining multidisciplinarity, cross-culturalism, and systemic thinking, inside three well-established and fairly traditional institutions that do not fully share a common ethos.


Harmonizing the current degree structure and delivery practices within the three major university players poses significant challenges that are currently being addressed. The approach of the new program has been to build organically on the existing IDBM platform, while developing the theoretical base, delivery methods, and structure to achieve a new, improved coherence and relevance. A central challenge of the IDBM master program is to apply the strategic, goal-oriented, high-level, and long-term approach to the different disciplinary views of the constituent institutions. In order to achieve coherence in systemic competence development, an approach consisting of five major dimensions – tools, environment, management, process, organisation (TEMPO) – was proposed. Furthermore, to ensure the relevance of the program to the participants, four professional orientation tracks are built into the programme: research, management, consulting and entrepreneurship. These perspectives allow the program to ensure that wide systemic competences are built up in a contextually relevant fashion.

11 REFERENCES
[1] Peeters, M.; van Tuijl, H.; Reymen, I.; Rutte, C., 2007, The Development of a Design Questionnaire for Multidisciplinary Teams, Design Studies, 28: 623-643.
[2] Madhavan, R.; Grover, R., 1998, From Embedded Knowledge to Embodied Knowledge: New Product Development as Knowledge Management, Journal of Marketing, 62 (October 1998): 1-12.
[3] Iansiti, M., 1993, Real-World R&D: Jumping the Product Generation Gap, Harvard Business Review, 71(3): 138-147.
[4] Simon, H.A., 2001, The Sciences of the Artificial, 3rd edition, MIT Press, Cambridge, MA.
[5] Brown, T., 2007, Strategy by Design, FastCompany.com, http://www.fastcompany.com/magazine/95/design-strategy.html, accessed 19.12.2007.
[6] Leiviskä, E., 2001, Creative Interdisciplinarity – Engineering, Business, and Art&Design Students' Collaboration and Learning in the International Design Business Management (IDBM) Program, Research Report 227, University of Helsinki.
[7] Atwater, J.; Kannan, V.; Stephens, A., 2008, Cultivating Systemic Thinking in the Next Generation of Business Leaders, Academy of Management Learning & Education, 7(1): 9-25.
[8] Senge, P., 1990, The Fifth Discipline: The Art & Practice of the Learning Organization, Doubleday/Currency, New York.
[9] Deming, W., 1994, The New Economics: For Industry, Government, Education, MIT Center for Advanced Educational Services, Cambridge, MA.
[10] Gabrielsson, M.; Kirpalani, V.H.M., 2004, Born Globals: How to Reach New Business Space Rapidly, International Business Review, 13(5): 555-571.

European-wide Formation and Certification for the Competitive Edge in Integrated Design

A. Riel¹, S. Tichkiewitch¹, R. Messnarz²

¹ Laboratoire G-SCOP, Grenoble INP, 46 av Félix Viallet, Grenoble, 38031, France
² ISCN GmbH, Schiesstattgasse 4a, Graz, 8010, Austria
[email protected], [email protected], [email protected]

Abstract
Competitive Product Design is more and more linked to mastering the challenge of the complexity and multidisciplinary nature of modern products in an integrated fashion from the very earliest phases of product development. Design Engineers are increasingly confronted with the need to master several different engineering disciplines in order to gain a sufficient understanding of a product or service. Industrialists demand the certification of these skills, as well as their international recognition and exchangeability. This paper describes the approach that EMIRAcle takes together with the ECQA in order to define and establish job roles, curricula and certifications in the domain of Integrated Engineering on a European level.
Keywords: Integrated Engineering, Integrated Design Engineer, System Competence, Product Development Improvement, Lifelong Learning, Certification, Professional Training

1 INTRODUCTION
Integrated Engineering is characterised by a highly multidisciplinary approach to product development. Engineers are increasingly confronted with the need to master several different engineering disciplines in order to gain a sufficient understanding of a product or service. Likewise, engineering teams are becoming increasingly interdisciplinary, and thus demand a mutual understanding and collaboration between domain-expert team members [1][2]. Although university curricula are starting to be adapted to this development on an international scale, it is evident that there is an urgent need for interdisciplinary education and certification programs at a postgraduate level [3]. While universities are supposed to teach in-depth knowledge in specific engineering areas, lifelong learning programs and curricula are needed that teach the transversal links between the different engineering disciplines according to criteria that are defined by industry. Industrialists demand the certification of these skills, as well as their international recognition and exchangeability. Today, such internationally recognized training and certification programs for job roles in modern product creation do not exist. This paper describes the approach that EMIRAcle (the European Manufacturing and Innovation Research Association, a cluster leading excellence – www.emiracle.eu) takes together with the ECQA (the European Certificates and Qualification Association – www.eu-certificates.org) in order to define and establish job roles, curricula and certifications in the domain of Integrated Engineering on a European level. The target is to define and describe the skill sets that characterise Integrated Engineering, as well as to provide skill-specific training modules and the corresponding training material. Once these are defined, sets of test questions have to be formulated, which shall provide the basis for assessment and certification of candidates.


This paper points out skill requirements of job roles in Integrated Engineering that are demanded by industry, in particular Integrated Design Engineering. It shows how they are used to develop education and test programs, as well as certification criteria. This activity is part of the EU Certification Campus (EU Cert) initiative in the Leonardo da Vinci Programme of the EC, launched by the ECQA and EMIRAcle at the beginning of 2008. It is the first in a planned series of projects that aim at implementing a number of training and certification programs in Integrated Engineering on the well-established IT platform of the ECQA, and at offering those in a number of education institutions all over Europe. The great success of the ECQA's system and platform in the software engineering domain provides an important basis for this collaborative work. Chapter 2 introduces the background of the work of the ECQA. Chapter 3 points out the requirements on Integrated Design Engineering skills and makes evident that those are not sufficiently taken into account in current education schemes. Chapter 4 looks into automotive powertrain development to show why Integrated Design Engineering skills on an enterprise level can lead to a significant competitive edge. Chapter 5 introduces the training, testing and certification concept that has been established by the ECQA, and suggests its application to implement the proposed Integrated Design Engineering skill set.

2 BACKGROUND

2.1 Success Factors of Innovation
The success of an innovation or improvement does not only depend on the correct technical approach. Numerous learning-strategy-related aspects also influence the success. This fact has been demonstrated by the following European studies, among others:
• a study at 200 firms in 1998 [4];
• a study at 128 multinational firms in 2002 [5];
• a study in 59 networked European organisations in 2003 [6][7][8].
Besides top management support (26%), the studies outlined a positive learning culture (15%; learning from mistakes, team learning, knowledge sharing, etc.) and a supporting organizational infrastructure (17%) which helps with the implementation of the learning organisation [5]. A learning organisation [9][10] creates a positive learning culture and enables team learning and synergy exploitation in an organisation. Through team learning, knowledge is spread much more quickly and a highly skilled human force is maintained. Human skills are regarded as a complementary asset needed in addition to qualified processes to be successful on the market.

2.2 Processes, Job Roles, and Skills
Figure 1 illustrates that processes require roles, which need specific skills to perform the job efficiently. In ISO 15504 (SPICE, Software Process Improvement and Capability dEtermination [11][12]), capability level 3 would, for instance, require the definition of competence criteria per role. The combination of this approach with the learning-organisation-related approach outlined in section 2.1 leads to a framework in which it becomes extremely important to think in terms of job-role-based qualification and skills. This concept is described in greater detail in e.g. [13].
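As a rough, purely illustrative sketch of this job-role-based view, the chain from process to role to skill set can be expressed as plain data. The class and field names below are our own invention, not part of ISO 15504 or of the ECQA scheme.

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    """An individual ability a role must demonstrate."""
    name: str
    acquired: bool = False  # set by a skills assessment

@dataclass
class Role:
    """A job role participating in a process."""
    name: str
    skill_set: list[Skill] = field(default_factory=list)

    def skill_profile(self) -> float:
        """Fraction of the skill set already demonstrated (0.0 to 1.0)."""
        if not self.skill_set:
            return 0.0
        return sum(s.acquired for s in self.skill_set) / len(self.skill_set)

@dataclass
class Process:
    """A process (e.g. one from ISO 15504) performed by one or more roles."""
    name: str
    roles: list[Role] = field(default_factory=list)

# A process assessment looks at the process; a skills assessment looks at
# the roles' skill profiles -- the two views meet in this common model.
designer = Role("Integrated Design Engineer",
                [Skill("Requirements Engineering", acquired=True),
                 Skill("Integrated Product Design")])
npd = Process("New Product Development", [designer])
print(f"{designer.name}: {designer.skill_profile():.0%} of skill set demonstrated")
```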

Figure 1: Integration of Process and Human Skills in an Integrated Model

3 INTEGRATED DESIGN ENGINEER KEY SKILLS

3.1 Background and Motivation
Engineering design is a crucial component of the industrial product realization process. It is estimated that 70 percent or more of the life cycle cost of a product is determined during design. Effective engineering design, as some foreign firms especially have demonstrated, can improve quality, reduce costs, and speed time to market, thereby better matching products to customer needs. Effective design is also a prerequisite for effective manufacturing [14]. In connection with this complexity, the field of engineering design can be viewed as consisting of three independent categories of variables and abstractions: (1) a wide variety of problem types, (2) a wide variety of persons who may be required to solve the problems, and (3) a wide variety of organizations and environments (including tools and available time) in which the persons may be required to function. Attempts to discover crucial variables and abstractions that apply to persons and the environment are likely initially to be either unmanageably complex or else greatly oversimplified.

Moreover, research methodology in these categories is cumbersome and difficult to plan and implement. Obstacles faced in the cognitive, social, and environmental aspects of design are much the same as those faced by researchers in such fields as education, sociology, and management. This section suggests five skill units which should complement the expert skills of Integrated Design Engineers, departing from the fact that design is at the root of every product development. The skill sets that make up these units, as well as additional units, will be developed in the frame of this research.

3.2 Requirements Engineering
The key to making a product successful on the market is to design it according to all sorts of key requirements that come from a number of different sources. These are all the actors directly involved in the product life cycle, as well as the product's "environment", like government, laws, the economy, etc. Outstanding actors and factors are:
1. the target customers,
2. the manufacturing process,
3. the product's life cycle,
4. its manufacturability and maintainability,
5. the development time and costs,
6. etc.
Identifying requirements is in general a complex activity. Very often the requirements specifications that are given to designers are imprecise and/or incomplete. Knowledge about systematic requirements collection and management helps designers collect missing or incomplete requirements information. Requirements management is a complex procedure that is difficult to carry out systematically without the use of appropriate tools. There already exists a large number of requirements management tools (about 40 are listed in [15]), which are typically specialized for use in certain domains. Even if in some (especially bigger) organizations development tools are chosen at a higher management level, it is often the engineers who are asked to propose a choice of tools. User-centric methods like Scenario-Based Design [16] and Use Case Design [17] are becoming more and more important, as they force requirements engineers to think from a product-use point of view rather than in terms of solutions. Scenarios are important tools for exercising an architecture to gain information about a system's fitness with respect to a set of desired quality attributes. A use case is a description of a system's behaviour as it responds to a request that originates from outside of that system. The use case technique is used in software and systems engineering to capture the functional requirements of a system. Use cases describe the interaction between a primary actor, the initiator of the interaction, and the system itself, represented as a sequence of simple steps. Actors are something or someone that exists outside the system under study and takes part in a sequence of activities in a dialogue with the system to achieve some goal: they may be end users, other systems, or hardware devices. Each use case is a complete series of events, described from the point of view of the actor. Use case design thus enables design engineers and anyone else concerned with the product to adopt an application- and user-oriented viewpoint which largely facilitates the derivation of the detailed functional requirements to the product.
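To make the use case notion concrete, the following minimal sketch captures one as a primary actor, a goal and an ordered sequence of steps. The example content is invented; it is not taken from any specific method, tool or project.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """A use case: an actor's goal and the event sequence that achieves it."""
    name: str
    primary_actor: str   # someone or something outside the system
    goal: str
    steps: list[str]     # a complete series of events, actor's point of view

configure_product = UseCase(
    name="Configure product variant",
    primary_actor="End customer",
    goal="Order a product tailored to personal needs",
    steps=[
        "Customer selects a base model",
        "System presents the compatible options",
        "Customer picks options; system checks the constraints",
        "System confirms price and delivery date",
    ],
)

# Detailed functional requirements can then be derived step by step,
# e.g. one requirement for each system response in the sequence.
for step in configure_product.steps:
    print(step)
```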


3.3 Integrated Product Design
Current design methodology has developed a lot of tools called "Design for X", each intended to take into account one specific domain X (where X stands for assembly, maintenance, manufacturing, etc.). Such tools are made to optimize one specific view, disregarding the fact that the global optimization of a system is in general not achieved by the local optimization of a series of components. Moreover, what normally has to be a constraint for the system is transformed into an objective function in these systems: does an assembly really have to be minimized, or is it sufficient to respect its operability if another solution can be less costly or complicated?

Integrated product design considers that the different constraints previously cited are the aim of different actors who have to control them but who "belong to the same world" [18]. The common goal is to reduce the cost, to reduce the time to market, to take sustainability into account and to increase quality. Such actors have to work in a concurrent engineering context, having access to a common product model where they can have their own contextual views. They have to respect the "just need" [19], which consists of putting a constraint on the system as soon as possible if such a constraint can be proved.

An application of integrated design to wood furniture can be found in [20]. It is shown how the actors of the design process have to exchange information before starting a new design in order to understand the consequences of the different decisions they have to make for the other actors, and which information has to be propagated. Choosing an assembly system for joining two boards is directly guided by a quality requirement, but also has consequences on the mechanical models used to determine the deflections of the boards and on the manufacturing features to be realized (and therefore also on the cost). The assembly set can be considered as an intermediate object for the communication between the people in charge of assembly, mechanical behaviour and manufacturing. As such it acts as a vehicular object (as opposed to a vernacular one). At the same time, however, this assembly set cannot be sized without knowing the thickness of the board, which depends on the mechanical model used. It turned out that an interactive process between the assembly actor and the people in charge of mechanics must arise during the design activity. This interactive process is a way to solve imaginary complexity.

Other particularly representative confirmations and urgent demands of the above issues have been published notably in the automotive industry [3][21], where product development is outstandingly multidisciplinary and interdependent. According to the above, product design does not seek to optimize one single objective, but rather aims at finding the best compromise solution under multiple, often coupled restrictions like the following:
• Producibility,
• Assembly/Disassembly,
• Modularity,
• Testability,
• Product Variant Creation,
• Environmental Sustainability,
• Product-Service Optimization,
• Maintainability,
• Cost Minimization,
• etc.

Certainly an Integrated Design Engineer cannot in general master all the associated complex disciplines by himself. He should, however, be able to understand domain experts, and be able to translate their requirements into his design task.

3.4 Product Lifecycle Engineering and Management
Integrated Design Engineering is a synonym for well understanding the product and the way it is created, used, disposed of, and recycled. Product Lifecycle Management (PLM) is the process of managing the entire lifecycle of a product from its conception, through design and manufacture, to service and disposal [22]. It is one of the four cornerstones of a corporation's information technology structure. All companies need to manage communications and information with their customers (CRM - Customer Relationship Management) and their suppliers (SCM - Supply Chain Management), and the resources within the enterprise (ERP - Enterprise Resource Planning). In addition, manufacturing engineering companies must also develop, describe, manage and communicate information about their products (PDM - Product Data Management).

Although a product lifecycle is specific to a product, there are some basic facts, aspects, and phases that are common to almost any type of product. An Integrated Design Engineer needs this basic knowledge in order to be able to analyse and understand specific product lifecycles. The core of PLM is the creation and central management of all product data and the technology used to access this information and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but can be viewed as the integration of these tools with methods, people and the processes through all stages of a product's life. It is not just about software technology but is also a business strategy. For simplicity, the stages are listed below in a traditional sequential engineering workflow; the exact order of events and tasks will vary according to the product and industry in question, but the main processes are [23]:
• Conception,
• Specification,
• Concept Design,
• Design,
• Detailed Design,
• Validation and Analysis (Simulation),
• Tool Design,
• Realization,
• Plan Manufacturing,
• Manufacturing,
• Build/Assembly,
• Test (Quality Check),
• Service,
• Selling and Delivery,
• Usage,
• Maintenance and Support,
• Disposal and Recycling.

The reality is however much more complex: people and departments cannot perform their tasks in isolation, and one activity cannot simply finish and the next activity start. Design is an iterative process, and designs often need to be modified due to manufacturing constraints or conflicting requirements.
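As a toy illustration of this staged yet iterative workflow, the sketch below holds the stages from the list above as an ordered sequence and adds explicit back-edges for rework. The back-edges and the function are invented for illustration; real PLM systems model this far more richly.

```python
# Idealized sequential stages (from the list above).
STAGES = [
    "Conception", "Specification", "Concept Design", "Design",
    "Detailed Design", "Validation and Analysis", "Tool Design",
    "Realization", "Plan Manufacturing", "Manufacturing", "Build/Assembly",
    "Test", "Service", "Selling and Delivery", "Usage",
    "Maintenance and Support", "Disposal and Recycling",
]

# Example rework loops: a failed test sends the design back to detailing;
# manufacturing planning can force a design modification.
BACK_EDGES = {
    "Test": "Detailed Design",
    "Plan Manufacturing": "Design",
}

def next_stage(current: str, rework: bool = False) -> str:
    """Advance the workflow, or jump back along a rework edge."""
    if rework and current in BACK_EDGES:
        return BACK_EDGES[current]
    i = STAGES.index(current)
    return STAGES[min(i + 1, len(STAGES) - 1)]

print(next_stage("Plan Manufacturing", rework=True))  # -> Design
```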


Companies that design successfully have carefully crafted Product Creation Processes (PCP) that extend over all phases of product development, from initial planning to customer follow-up. Their PCP is their plan for continuous improvement. The decision to develop and operate under a PCP is a corporate one. Successful operation of a PCP requires extensive cooperation among a firm's marketing and sales, financial, design, and manufacturing organizations [12][14]. In the foregoing idealized account of the PCP, everyone cooperates, the desired quality is achieved, and the product succeeds in the marketplace. In practice, the process is difficult and full of conflict and risk. Converting a concept into a complex, multi-technology product involves many steps of refinement. The design process requires a great deal of analysis, investigation of basic physical processes, experimental verification, complex trade-offs between conflicting elements, and difficult decisions. For example, there may be insufficient space for a desired function unless costly development is undertaken, or space is taken from another function, affecting quality, fabrication yields, or ease of assembly. The original concept may not function as planned, and additional work may be required, affecting the schedule or requiring a change in specifications. Satisfying the different and conflicting needs of function, manufacturing, use, and support requires a great deal of knowledge and skill. Although Collaborative Engineering is based on the support of the organization, it is very much facilitated by the awareness of each engineer of his role in the process, as well as of the roles of others.

3.5 Networked Collaboration
Due to the involvement of many different experts, Integrated Product Design can only be done in teams which are inherently heterogeneous and very often international. Although design tools support this collaborative work increasingly well, Integrated Design Engineers need skills that go beyond tool operation in order to succeed in collaborative engineering tasks. In the development of the Integrated Design Engineer's profile, this research focuses on the following ones:
1. Teamworking skills,
2. Intercultural skills,
3. Knowledge Management,
4. Knowledge Capitalisation,
5. Knowledge Sharing.
Teamworking and intercultural skills are indispensable in modern international engineering teams. Knowledge management is certainly a subject of the whole organisation, which is under the responsibility of the management levels. Understanding the purposes and challenges of knowledge management and knowledge capitalisation, as well as the concepts of typical knowledge management and knowledge modelling tools, is an important prerequisite for the participation of Integrated Design Engineers in the related efforts of an organisation [22]. Collaborative design involves product designers, manufacturing engineers, and representatives of purchasing, marketing, and field service in the early stages of design in order to reduce cycle time and improve manufacturability [19]. This practice helps resolve what is sometimes called the designer's dilemma: the fact that most of a product's cost, quality, and manufacturability is committed very early in design, before more detailed information has been developed. Assembling a multidisciplinary design team permits pertinent knowledge to be brought to bear before individuals become wedded to their approach and much of the design cost has been invested. Differences are more easily reconciled early in design, and the reductions in design cycle time that result from the use of this method invariably reduce total product cost. The key to the successful use of collaborative design concepts is the ability to organize and manage concurrent processes and cross-functional, typically distributed teams effectively. Obtaining this know-how is not a matter of studying textbooks; it rather demands a balanced blend of solid experience and theoretical background. This is what the professional seminar program to be conceived in this research shall convey in sector- and national-specific contexts.

3.6 Knowledge Management
Engineering design is a knowledge-based, knowledge-intensive intellectual activity. As pointed out in section 3.4, designers and others involved in the design of any product or process bring to bear extensive technical knowledge, product knowledge, manufacturing process knowledge, design process knowledge, memories of previous projects, and so forth [24]. Much of this knowledge is presently ad hoc and heuristic, residing implicitly with individuals or within organizations, and neither accessible to, nor of a form that is easily accessible by, others within the firm, much less in other firms or disciplines. The handbooks, textbooks, catalogs, trade journals, research journals, and company guidelines in which much of this knowledge has been recorded are generally useful only if close at hand (some say "within reach") and if they deal specifically with the designer's current problem [14]. As a database, this collection is extremely inefficient in terms of accessibility. A design knowledge base more generally and completely accessible to all engineering designers would be tremendously powerful. For this vision to be realized, existing knowledge must be captured, organized and, where possible, generalized. Once this is done, the knowledge might be made available to designers via CAD systems or computer networks in a form which is adequate to support design engineers efficiently in a maximum of tasks. Every phase of this process is very complex, as all phases deal with originally implicit knowledge. As both the knowledge providers and the final target users are design engineers, they should be able to participate in this process as much as possible, which requires a basic understanding of the motivation and targets of knowledge management for product design. Furthermore, professional seminars allow professional users to exchange related experiences, which is particularly essential for the improvement of existing and upcoming approaches to knowledge management in product creation [25].

individuals become wedded to their approach and much of the design cost has been invested. Differences are more easily reconciled early in design, and reductions in design cycle time that result from the use of this method invariably reduce total product cost. The key to the successful use of collaborative design concepts is the ability to organize and manage concurrent processes and cross-functional and typically distributed teams effectively. Obtaining this know-how is not a matter of studying textbooks but it rather demands a balanced blend of solid experience and of theoretical background. This is what the professional seminar program to be conceived in this research shall convey in sector- and national-specific contexts. 3.6 Knowledge Management Engineering design is a knowledge-based, knowledgeintensive intellectual activity. As pointed out in section 3.4, designers and others involved in the design of any product or process bring to bear extensive technical knowledge, product knowledge, manufacturing process knowledge, design process knowledge, memories of previous projects, and so forth [24]. Much of this knowledge is presently ad hoc and heuristic, residing implicitly with individuals or within organizations and neither accessible to, nor of a form that is easily accessible by, others within the firm, much less in other firms or disciplines. The handbooks, textbooks, catalogs, trade journals, research journals, and company guidelines in which much of this knowledge has been recorded are generally useful only if close at hand (some say “within reach”) and if they deal specifically with the designer's current problem [14]. As a data base, this collection is extremely inefficient in terms of accessibility. A design knowledge base more generally and completely accessible to all engineering designers would be tremendously powerful. For this vision to be realized, existing knowledge must be captured, organized and, where possible, generalized. Once this is done, the knowledge might be made available to designers via CAD systems or computer networks in a form which is adequate to support design engineers in a maximum of tasks efficiently. Every phase of this process is very complex, as they all deal with originally implicit knowledge. As both the knowledge providers, as well as the final target users are design engineers, they should be able to participate in this process as much as possible, which requires a basic understanding of the motivation and targets of knowledge management for product design. Furthermore, professional seminars allow professional users to exchange related experiences, which is particularly essential for the improvement of existing and upcoming approaches to knowledge management in product creation [25] 4

INTEGRATED ENGINEERING SYSTEM COMPETENCE

AS

A KEY

TO

4.1 The Importance of System Competence
Integrated Engineering by its very definition covers multiple expert domains and thus usually separate and specific threads of communication, specific tools, specific ontologies, etc. Classic product development organisations typically resemble expert domains in their departmental and/or project structures, thus further intensifying and augmenting the difficulties of realizing integrated engineering. With increasing system complexity, obtaining competence over the whole final product as a system, and as the result of a networked system of development tasks, has become practically impossible in such environments. System competence is, however, the foundation of being able to perform consistent integrated engineering, and thus an increasingly important competitive advantage.

The development process of automotive powertrains is a stereotypical example of this problem. The automotive industry is one of the most highly innovation-driven industries. This chapter presents selected results of a detailed analysis of this process [21], and their implications for the need for integrated engineering skills to attain and improve system competence.

4.2 Case Study: The Automotive Powertrain Development Process
Figure 2 shows the most essential phases of the automotive powertrain development process [21]. The engine and transmission development processes run in parallel in very similar phases, and they are closely linked by consecutive "vertical" tasks if the powertrain is developed in a holistic way. The horizontal line arcs indicate the various horizontal activities that ideally need to be carried out throughout the whole process, as they are all closely linked to the performance and quality of the final product. Most of them, however, require the whole powertrain and/or the vehicle to be available before these have actually been built. This is especially true for the engine and powertrain electronic control units (ECU - Engine Control Unit, TCU - Transmission Control Unit). In the traditional approach, prototypes of the missing parts are manufactured, or they are taken from a suitable predecessor model. In the modern, still heavily researched approach, simulation models with different levels of detail are used to mimic real components that are not yet available, from concept simulation via tests and calibrations on various kinds of testbeds to the phase with the vehicle prototype on the chassis dynamometer. This enables "front-loading" development activities to the early phases of the process, which are mostly linked to design. In this scenario, it may well happen that the transmission exists before the engine has been built and vice versa.

Both these approaches, and any approach in between, represent cases in need of intensive integrated engineering and system competence on an individual engineer's level as well as on a distributed team level. They involve engineers with several different education and expertise profiles, who all have to work towards the same final targets, which are all linked to the global performance of the whole vehicle, mainly in terms of drivability (a specific "feeling"), fuel consumption and emissions. The inputs of one activity depend on the results of several other activities, which are all linked to different domain experts. [2][19][22] treat these subjects exhaustively, with special regard to their implications for integrated design. [21] develops the so-called Behavioural Mock-Up (BMU) concept that extends the well-established Digital Mock-Up (DMU) concept to support the entire development process. The permanent interactions and synchronizations between the two processes are sketched with the inclined arrows in Figure 2. Networking the engine and transmission development processes can be achieved by the seamless use of simulation tools and consistent simulation models. Closely connected to this is the process of collecting all the data that are required for the models used [26]. Primarily due to the stringent demands imposed by quality assurance, members of the different, typically distributed engineering teams need to have comparable levels of engineering skill on a system level, because it is on the system level that the teams' tasks are linked and have their dependencies: engine and transmission, control electronics and powertrain, comfort electronics and cabin, to name only a few. Design engineers play a key role in this process, as all the individual phases pass by design iteratively. This becomes most evident in the fact that the DMU technology is at the centre of the process. It serves not only design decisions, but is also increasingly a means of interaction and synchronisation between different expert departments, and it serves major project management related decisions.

Figure 2: Representative Automotive Powertrain Development Process


Qualified Integrated Design Engineers are able to understand this context, and are thus better able to help the enterprise capitalize on this technology by making the best use of it.

4.3 Model-Based Integrated Development
In the ideal model-based integrated development process, sketched in Figure 3, the early CAE models act as the single source of data for all the later models. This assures the consistency of all the models.

Figure 3: Model-based Integrated Development

Real-time models are derived from CAE models by target-oriented simplification or re-structuring, which typically includes the replacement of analytical calculations by pre-calculated maps and the exclusive use of fixed-step solvers. CAD/CAE data and models are used for test planning and definition, and a seamless feedback loop from the testing environment has to be established for model verification and improvement. A practical example can be found in [21]. This engineering "control loop" relies on a working flow of vehicular knowledge between the involved groups and departments. Realizing such a loop relies on the system competence of the engineers involved: each part of the loop has to understand what the other parts need in terms of the characteristics of the system models, the parameters and the data.
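A minimal sketch of this derivation step follows; it is ours and purely illustrative, not taken from [21] or from any production tool chain. An analytical torque computation from a CAE model is replaced by a pre-calculated lookup map, and the resulting model is integrated with a fixed time step, as real-time execution requires.

```python
import bisect
import math

# Pre-calculated map exported from the CAE model: engine speed (rpm)
# versus full-load torque (Nm). All numbers are invented; real maps
# are typically two- or three-dimensional.
RPM = [1000, 2000, 3000, 4000, 5000, 6000]
TORQUE = [120.0, 180.0, 210.0, 220.0, 200.0, 170.0]

def torque_from_map(rpm: float) -> float:
    """Linear interpolation in the map -- replaces the analytical model."""
    rpm = min(max(rpm, RPM[0]), RPM[-1])          # clamp to map range
    i = min(max(bisect.bisect_right(RPM, rpm) - 1, 0), len(RPM) - 2)
    t = (rpm - RPM[i]) / (RPM[i + 1] - RPM[i])
    return TORQUE[i] + t * (TORQUE[i + 1] - TORQUE[i])

# Fixed-step integration (explicit Euler), as required for real-time use:
# no adaptive step size, so the execution time per step is predictable.
INERTIA = 0.25   # crankshaft plus load inertia, kg*m^2 (illustrative)
DT = 0.001       # fixed step, s
LOAD = 80.0      # constant load torque, Nm

omega = 1000 * 2 * math.pi / 60   # initial speed, rad/s (1000 rpm)
for _ in range(2000):             # simulate 2 s of acceleration
    rpm = omega * 60 / (2 * math.pi)
    omega += DT * (torque_from_map(rpm) - LOAD) / INERTIA
print(f"engine speed after 2 s: {omega * 60 / (2 * math.pi):.0f} rpm")
```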

4.4 IT-Infrastructure in Integrated Engineering Organizations
A fundamental requirement on integrated engineering support systems is that they integrate neatly into existing IT infrastructures. Both manufacturers and suppliers have invested a lot in their tool and IT infrastructures. CAD, ERP and PDM systems are more or less the three IT "pillars" within a product development enterprise [21]. Figure 4 shows the close relationships between the integrated engineering environment (here represented by the BMU) and all the other important complementary information sources within the enterprise.

Figure 4: Networked Integrated Engineering

Integrated Engineers have to understand the role of each system in order to be able to use the whole IT infrastructure in a way that leverages the work of all the engineering teams concerned. Once more, the engineer has to be aware of the fact that he is one part of a highly networked, dependent and complex system, in which his work depends on that of others and vice versa [27].

5 QUALIFICATION AND CERTIFICATION OF INTEGRATED ENGINEERING SKILLS
This chapter gives an overview of the system and the platform proposed and implemented by the ECQA [13]. One of the major aims of this research is to show that both the system and the platform are very well suited to specify, implement and roll out the qualification and certification of modern job roles in Integrated Engineering environments.

5.1 Skills Acquisition with the ECQA Platform
The ECQA has set up a partnership of experienced partners in 18 European countries to create a pool of knowledge for specific professions. This pool can be extended to further professions. All the professions that have been configured in the system so far are based in the ICT area, and are thus closely related to software development. As integrated product development processes are increasingly related and/or linked to software development, new job roles from the Integrated Engineering domain will profit from this sound basis [28]. Figure 5 gives an overview of the uncomplicated but efficient skill acquisition process supported by the ECQA platform: when a need arises, a person can attend a course for a specific job role online through an advanced learning infrastructure. The candidate starts with a self-assessment against the skill set [29], then signs into an online course, in which a tutor guides the candidate and corrects the homework. Finally, the homework and the real work done in the candidate's own project are sufficient to demonstrate the skills.
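A minimal sketch of the self-assessment step is given below. The skill names and the scoring scale are invented for illustration; the actual Capability Adviser portal differs in detail.

```python
# Candidate's self-assessment: per skill element, evidence from 0 (none)
# to 3 (demonstrated in real project work). All names are illustrative.
self_assessment = {
    "Requirements Engineering": {"elicitation": 3, "use case design": 2},
    "Integrated Product Design": {"design for X": 1, "trade-off analysis": 0},
    "Networked Collaboration": {"team working": 3, "knowledge sharing": 2},
}

def skill_profile(assessment: dict) -> dict:
    """Aggregate the evidence into a 0-100% profile per skill unit."""
    profile = {}
    for unit, elements in assessment.items():
        profile[unit] = 100 * sum(elements.values()) / (3 * len(elements))
    return profile

for unit, pct in skill_profile(self_assessment).items():
    print(f"{unit:28s} {pct:5.1f}%")

# The gaps in the profile tell the candidate which online course modules
# and homework to take before attempting certification.
```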

Figure 5: Integrated European Skills Acquisition System

The learning platform is based on the web-based open-source learning management system Moodle (www.moodle.com). The assessment process is supported by the so-called Capability Adviser, a web-based assessment portal system with a defined database interface for connecting the systems. Network Quality Assurance (NQA) is a web-based teamworking tool which was developed in the EU IST 2000 28162 project [28].
5.2 Provision of Skill Sets
The ECQA pool of knowledge is enhanced on an annual basis. Existing skill sets are reworked and new skill sets are added. Joint knowledge is configured in the form of a job role with standard content structures [7][10], such as a skills set, a syllabus, learning materials and online configuration, as well as sets of test questions.
So-called Job Role Committees decide upon the content for a specific skills set. These committees are composed of academics and industrialists. The job role committee for the Innovation Manager, for instance, created a skills set for an innovation manager together with a set of online courses, etc. People can register from their workplaces.
5.3 Qualification and Certification
Nowadays, and according to the Bologna Process, it is very important that training courses are internationally recognized, and that successful course attendees receive certificates that are valid in all European countries. The EU supported the establishment of the European Qualification Network (EQN), from which the ECQA has evolved, with exactly this target in mind. This has resulted in a pool of professions in which a high level of European comparability has been achieved through a Europe-wide agreed syllabus and skills set, a European pool of test questions, European examination systems (computer-automated through portals), a common set of certificate levels and a common process for issuing certificates. The partners jointly developed quality criteria for accepting new job roles into the ECQA, for accrediting training organisations and certifying trainers promoted by the ECQA, and for testing and certifying attendees who have completed the training for a specific job role. The existing skills assessment portals (already used by more than 5000 students in different learning initiatives) are being extended to cover the new requirements of the ISO 17024 (General Requirements for Bodies Operating Certification of Persons) standard. Among the international certification organizations that provide ECQA-compliant certification is the ISQI (International Software Quality Institute, www.isqi.org).
5.4 Importance for Universities
From what has been presented in this paper, it may seem that universities and initial education are neither affected by nor involved in the proposed activities in professional qualification and certification. However, this is certainly not the case. Universities can profit from the skills set descriptions and the industrial case studies developed, using them to adapt their curricula to industry needs. They will be able to prepare engineers better for their jobs in industry, and they will find it easier to enter into collaboration contracts with industry. Moreover, in many respects it may be attractive for universities to act as qualification institutions for certain training modules and/or to provide trainers. Training courses present very good opportunities to meet employees from industry and to learn about their problems and experiences.
6 SUMMARY
This paper points out that there is a strong industry need for international training, qualification and certification of modern job roles in Integrated Engineering, in particular in Integrated Design Engineering. Using the automotive powertrain development process as an example, it identifies key skills of Integrated Design Engineers. The lifelong learning concept of the ECQA, which is already very well established in the ICT domain and has set a Europe-wide standard there, is proposed for this purpose. Moreover, the ECQA provides a strong IT platform with all the applications required for learning, testing, and certification already in place. An indispensable key to the success of this program of projects is the
involvement of industrialists in the creation and maintenance of the skill sets and the certification criteria.
7 ACKNOWLEDGMENTS
This project is the first in the long-term lifelong learning strategy of the EMIRAcle association, which is well aligned with the strategic objectives of the European Technology Platform in Manufacturing, ManuFuture (www.manufuture.org). The launching activities were supported by the European Commission under contract NMP2-CT-2004-507487 of the FP6 Network of Excellence VRL-KCiP. This research is currently supported by the EU in the Leonardo da Vinci project LLP-1-2007-AT-KA3-KA3MP (EU Cert - EU Certification Campus) of the Lifelong Learning Programme.
8 ACRONYMS
BMU ............. Behavioural Mock-Up
CAD ............. Computer Aided Design
CAE ............. Computer Aided Engineering
DMU ............. Digital Mock-Up
ECU ............. Electronic Control Unit
ECQA ............ European Certification and Qualification Association
EMIRAcle ........ European Manufacturing and Innovation Research Association - a cluster leading excellence
ERP ............. Enterprise Resource Planning
EU Cert ......... EU Certification Campus
EQN ............. European Quality Network
HiL ............. Hardware in the Loop
ICT ............. Information and Communication Technologies
NQA ............. Network Quality Assurance
PCP ............. Product Creation Process
PDM ............. Product Data Management
PLM ............. Product Lifecycle Management
SCM ............. Supply Chain Management
TCU ............. Transmission Control Unit
9 REFERENCES
[1] Riel A., Tichkiewitch S., Molcho G., Shpitalni M., Uys W., Uys E., du Preez N., 2008, Improving Product Development Organisations using Knowledge Mining: Requirements, Methods and Tools, in: Knowledge Management in Product Development, Proceedings, Enschede, CD-ROM
[2] Tichkiewitch S., Brissaud D., 2004, Methods and Tools for Co-operative and Integrated Design, Kluwer Academic Publishers, ISBN 1-4020-1889-4, pp. 488
[3] Menne R., 2007, Ford: teach integrated engineering, in: Automotive Engineer, December 2007, Professional Engineering Publishing Limited, London, UK, p. 5
[4] Messnarz R., Stöckler C., Velasco G., O'Suilleabhain G., 1999, A Learning Organisation Approach for Process Improvement in the Service Sector, in: Proceedings of the EuroSPI 1999 Conference, 25-27 October 1999, Pori, Finland
[5] O'Keeffe T., Harrington D., 2001, Learning to Learn: An Examination of Organisational Learning in Selected Irish Multinationals, Journal of European

Industrial Training, MCB University Press, Vol. 25, Number 2/3/4
[6] Biro M., Messnarz R., Davison A., 2002, The Impact of National Cultures on the Effectiveness of Improvement Methods - The Third Dimension, in: Software Quality Professional, Volume Four, Issue Four, American Society for Quality
[7] Feuer E., Messnarz R., 2002, Best Practices in E-Commerce: Strategies, Skills, and Processes, in: Proceedings of the E2002 Conference, E-Business and E-Work, Novel solutions for a global networked economy, eds. Brian Stanford Smith, Enrica Chiozza, IOS Press, Amsterdam, Berlin, Oxford, Tokyo, Washington
[8] Feuer E., Messnarz R., Wittenbrink H., 2003, Experiences With Managing Social Patterns in Defined Distributed Working Processes, in: Proceedings of the EuroSPI 2003 Conference, 10-12 December 2003, FTI Verlag, ISBN 3901351841
[9] Gemünden H.G., Ritter T., 2001, Inter-organisational Relationships and Networks, Journal of Business Research
[10] Messnarz R., Nadasi G., O'Leary E., Foley B., 2001, Experience with Teamwork in Distributed Work Environments, in: Proceedings of the E2001 Conference, E-Work and E-commerce, Novel solutions for a global networked economy, eds. Brian Stanford Smith, Enrica Chiozza, IOS Press, Amsterdam, Berlin, Oxford, Tokyo, Washington
[11] van Loon H., 2007, Process Assessment and ISO 15504, Springer, ISBN 978-0-38730-048-1
[12] van Loon H., 2007, Process Assessment and Improvement, Springer, ISBN 978-0-38730-044-3
[13] Messnarz R. et al., 2008, The EQN Guide, Graz, Austria
[14] Committee on Engineering Design Theory and Methodology, National Research Council, 1991, Improving Engineering Design - Designing for Competitive Advantage, National Academy Press, ISBN 978-0-30904-478-3
[15] Ludwig J.I., 2008, Requirements Management Tools, available at: http://www.jiludwig.com/Requirements_Management_Tools.html, Accessed: 2008-04-14
[16] Carroll J.M., 1999, Five Reasons for Scenario-Based Design, in: Proceedings of the 32nd Hawaii International Conference on System Science, Hawaii, IEEE No. 0-7695-0001-3/99
[17] Bittner K., Spence I., 2002, Use Case Modeling, Addison Wesley Professional, 2-3, ISBN 0-201-70913-9

[18] Boltanski L., Thevenot L., 1991, De la justification, les économies de grandeur, Gallimard
[19] Brissaud D., Tichkiewitch S., 2000, Innovation and manufacturability analysis in an integrated design context, in: Computers in Industry, 43:111-121
[20] Pimapunsri K., Tichkiewitch S., Butdee S., 2008, Collaborative negotiation between designers and manufacturers in the wood furniture industry using particleboard and fibreboard, Design Synthesis, CIRP Design Conference, Enschede, CD-ROM
[21] Riel A., 2005, From DMU to BMU: Towards Simulation-Guided Automotive Powertrain Development, PhD Thesis, Vienna University of Technology, Vienna, Austria
[22] Draghici G., Brissaud D., 2000, Modélisation de la connaissance pour la conception et la fabrication intégrées, Editura Mirton, Timisoara
[23] Saaksvuori A., 2005, Product Lifecycle Management, Springer, ISBN 978-3-54025-731-4
[24] Ameri F., Dutta D., 2005, Product Lifecycle Management: Closing the Knowledge Loops, Computer-Aided Design & Applications, Vol. 2, No. 5, pp 577-590
[25] Bernard A., Tichkiewitch S. (Eds.), Methods and Tools for Effective Knowledge Life-Cycle Management, Springer, ISBN 978-3-54078-430-2
[26] Riel A., Brenner E., 2004, Shape to Function: From DMU to BMU, in: Experiences from the Future. New Methods and Applications in Simulation for Production and Logistics, Fraunhofer IRB Verlag, Stuttgart, ISBN 3-8167-6640-4, pp 275-288
[27] Schmidt C., Temple B.K., McCready A., Newman J., Kinzler S.C., 2007, Virtuality-based understanding scheme (VirUS) - a holistic typecast supporting research and behaviour-oriented management of virtual teams, in: Cheng K., Webb D., Marsh R. (eds.): Advances in e-Engineering and Digital Enterprise Technology: Proceedings of the 4th International Conference on E-Engineering and Digital Enterprise, Wiley, ISBN 978-1-86058-467-1
[28] Riel A., 2006, EU Certificates and Knowledge Communities in Europe: An unbeatable Symbiosis, Keynote at the EQN Founding and Dissemination Conference, Krems, Austria, CD-ROM
[29] Messnarz R., Stubenrauch R., Melcher M., Bernhard R., 1999, Network Based Quality Assurance, in: Proceedings of the 6th European Conference on Quality Assurance, 10-12 April 1999, Vienna, Austria


Invited Paper

ED100: Shifting Paradigms in Design Education and Student Thinking at KAIST

M.K. Thompson
Department of Civil and Environmental Engineering, KAIST, 373-1 Guseong-dong, Yuseong-gu, Daejeon, 305-701, Korea
[email protected]

Abstract
Freshman design courses offer a number of benefits to incoming students and are becoming increasingly popular in universities around the world. At KAIST, an innovative freshman design program has been developed that challenges some of the existing paradigms in design education in general and freshman design education in particular. This paper will discuss the basic format, goals, and philosophy for the freshman design program at KAIST. It will also address the successes, challenges, and future implications for the course.

Keywords: First Year Education, Design Theory

1 INTRODUCTION
Freshman design courses offer a number of benefits to incoming students and are becoming increasingly popular in universities around the world. At KAIST, an innovative freshman design program has been developed that challenges some of the existing paradigms in design education in general and freshman design education in particular. The aim of the course is to improve the students' abilities to think independently, consciously, rationally, systematically, and synthetically. The course is intended to help the students become leaders by causing a paradigm shift in the way that the students think, view education, view the world, and view their role in the world. This is accomplished by having students apply formal design theories including Axiomatic Design Theory, traditional product design, and TRIZ to semester-long design projects. This paper will discuss the basic format, goals, and philosophy for the freshman design program at KAIST. It will also address the successes, challenges, and future implications for the course.
2 PRIOR ART
Freshman design courses are offered in a variety of formats and for a variety of reasons. The last major survey of freshman design courses was done in 1997 by researchers at Stanford University. It revealed that the format of freshman design courses varies widely. Some courses focus on individual work, while others focus on teams. Course activities include case studies; deconstructing and reverse engineering artifacts; actively engaging in design projects; or a combination of these activities [1]. Freshman design courses sometimes appear as general or department electives. For example, 2.00B: Toy Product Design is a successful freshman elective currently offered by Barry Kudrowitz and Prof. David Wallace from the Mechanical Engineering Department at MIT. Freshman design courses are often part of the required curriculum for students in individual departments or for all
engineering majors. Both Northwestern University (DSGN106: Engineering Design and Communication) and Harvey Mudd College (E4: Introduction to Engineering Design) require a freshman design course for all first-year students in the School of Engineering. These courses sometimes include modules to expose students to engineering technology (drawing and sketching, CAD, etc.) and common software programs (MATLAB, Excel, etc.) that they will need during their engineering careers. It is also increasingly popular to focus part of the course on teamwork, communication, and other "soft" skills which are important for professional careers in engineering. Only two schools are known to require a design subject for all incoming students regardless of major. Colorado School of Mines offers EPIC151: Design I. KAIST offers ED100: Introduction to System Design and ED101: Communication for Design. Both schools specialize in mathematics, engineering, science and technology and do not offer general liberal arts degrees. Many of today's biggest and most successful freshman design courses are what Sheppard and Jenison refer to as 'team process centered courses'. These courses are "principally centered around" and "dominated" by "one or several multi-week design projects." [1] These courses are sometimes the first in a larger undergraduate design sequence. They are treated as 'cornerstone courses' and are developed with the intent of ultimately preparing students for design capstone courses.
3 MOTIVATION
There are many good reasons to offer freshman design courses. They help to reduce the attrition of undergraduate engineering students, address requests from industry for a more prepared workforce, and satisfy ABET requirements for design in engineering education [2]. Freshman design classes are often fun and exciting. They help students gain hands-on engineering experience which provides context and motivation for

upper-level engineering courses. They often have concrete results which can increase student satisfaction and build confidence. Finally, most project-based freshman design courses rely on mentoring systems which allow students to have more personal contact with faculty, graduate students, upperclassmen, and engineering professionals. Many of these benefits are shared by ED100 and ED101 at KAIST; however, they were not the primary motivation for the creation of the course. Instead, the new freshman design course at KAIST is part of a larger initiative to revolutionize the university and its student population.
3.1 KAIST Revolution
During his inaugural address at KAIST, President Nam P. Suh stated that one of the major goals of the university was "to produce the next generation of leaders for society, industry, and academia." [3] His vision was for KAIST to become "the place where innovative, new ideas and concepts are created that change the way people think and approach challenging issues. It will be where … disruptive technologies are generated. Most of all, it will be the place where our planet's future leaders - in all fields of human endeavor - are groomed through the rich education and varied experiences they receive and the professional and personal relationships they form." [3] To achieve these goals, KAIST is working to create a campus-wide culture of "design thinking."
3.2 Design Thinking
Dym et al. say that good design thinking includes: divergent-convergent thinking; systems thinking; the ability to tolerate ambiguity and uncertainty; the ability to make decisions; the ability to work in teams; and the ability to communicate through various media and in the multiple languages of design [4]. Stephen Lu adds the following characteristics of good design thinking: "synthetic (rather than analytical) thinking; functional (rather than physical) thinking; … constructionist (rather than determinist) thinking; solution-neutral (rather than solution-specific) thinking; demand-driven (rather than supply-based) thinking; want-pull (rather than need-push) thinking; price-based (rather than cost-based) thinking; top-down (rather than bottom-up) thinking; [and] socio-technical (rather than pure-technical) thinking." [5]
3.3 Need for ED100
Surveys have shown that 85-90% of the students in the incoming freshman class at KAIST have never participated in a design project before. Their education before entering university has been rigidly structured, with little freedom for choosing courses or exploring interests. Information has been "pushed" to the students, instead of giving them the opportunity to "pull" the information that they want or need. As a result, students are often more preoccupied with grades than learning. High school coursework for KAIST students has typically focused more on memorization and calculation than on analysis and synthesis. The students are used to working with specific instructions, rather than independently evaluating the situation and choosing the best path for their work. Finally, the evaluation of their work has been done with more tests than projects. As a result, these students have little experience with open-ended, poorly-defined questions and are initially uncomfortable with these types of assignments.
3.4 Goals for ED100
ED100 is intended to cause a major paradigm shift in the way that its students think, view education, view the world, and view their role in the world at KAIST. The course aims to help students to become conscious, rational,

independent, systematic, and synthetic thinkers. Students are expected to learn to question, evaluate, and make decisions. They are expected to learn how to teach themselves and learn independently. They are expected to develop and refine teamwork and communication skills, and gain experience and confidence. Finally, it is hoped that students will begin to recognize the value of their education and understand that their abilities can (and should) be used to make a positive difference in the world.
4 FRESHMAN DESIGN AT KAIST
The freshman design course at KAIST is formally composed of two courses: ED100: Introduction to System Design (3 units) and ED101: Communication for Design (1 unit). The two courses are taught as a single, unified course and are separated only for administrative purposes. The combined course will be referred to as 'ED100' in this work for simplicity. ED100 is required for all incoming students regardless of major. Approximately 400 students (half of the freshman class) take the course each semester. The course was first offered as a freshman elective in Fall 2007. It has been required since Spring 2008.
4.1 Course Overview
ED100 is a 'team process centered' course with a single 16-week-long design project. Each semester, up to 20 different projects are offered and students choose their topic by lottery. Each project is assigned to four or five teams which are composed of four to six students each. Project advisers come from all departments at KAIST and are welcome to offer any project topic that satisfies the provided guidelines. Internal and external clients who bring their own design project topics to the course may be introduced in the Fall 2009 semester. Although projects are typically related to engineering or product design, they are not required to be. During the Fall 2008 semester, a professor from the School of Humanities and Social Sciences offered a very successful project on policy design to bridge the digital divide. A project to design an educational curriculum will be offered in Spring 2009. All course lectures, laboratory sessions, materials, assignments, and activities are geared towards the deliverables of the final projects. Students attend 1 hour of design lecture and 1 hour of communication lecture per week. They also have 3 hours per week of design laboratory, where they meet with their faculty project adviser, and 1 hour per week of communication laboratory with a faculty communication adviser. Students attend all laboratory sessions as a team. In many ways, ED100 is structured more like a traditional senior capstone course than a freshman cornerstone course.
4.2 Design Projects
In any design project, the designer needs three types of knowledge: (1) knowledge about design and the design process; (2) domain-specific or subject-specific knowledge; and (3) knowledge about the particular problem at hand. Most incoming students in a design class will have no previous formal experience or knowledge of design and will have to learn that material during the course. This is true for both capstone and cornerstone students. In addition, all designers, no matter how experienced, have to study their particular problem as part of the design process. This is equally true for freshmen and seniors. The major difference between capstone and cornerstone courses is that the first-year students will not have the same domain-specific or subject-specific knowledge that juniors and seniors in a


similar course will have. To address this, ED100 requires that all projects offered be unsolved and important real-world problems which do not require strong domain-specific knowledge. Because the course focus is on conceptual design, problems must be defined in a solution-neutral manner and have a large solution space. Any domain-specific information or resources that the students need are provided by their project advisers and teaching assistants or learned through background research.
4.3 Design Lecture
In ED100, there are 10 lectures during the 16-week semester. There are no classes or laboratory sessions during the mid-term or design review weeks. The remaining weeks in the semester are unscheduled to give students more time to work on their projects. "Design lectures are primarily based on material from Axiomatic Design (AD) Theory [7] and traditional product design [8]. Classical AD assumes that the student is already familiar with design and that they will use AD to supplement and modify their design thinking, rather than building it from scratch. The material from product design is used to create a more holistic course for novice designers. The lectures are also supplemented with materials from Altshuller [9], Pahl and Beitz [10], Simon [11], Suh [12], and others. The lectures introduce various definitions of design, design methods vs. design methodologies, and design thinking. Problem identification, problem clarification, and background research are discussed. Different design processes are introduced and compared. Customer needs and customer research are addressed. Functional thinking, functional requirements, and the independence axiom are introduced. Strategies, concepts, and design parameters are explored and compared. Concept refinement techniques from AD, TRIZ, and other areas are introduced. Students are encouraged to locate and fulfil hidden needs; eliminate coupling, conflict and bias; consider physical integration; introduce flexibility and modularity in their designs; use hidden or free resources; recognize and increase the level of innovation in their concepts; and to increase the overall ideality of their designs. Students learn about concept testing, concept selection, customer testing, and prototyping. A guest lecture on intellectual property in the US and in Korea is offered. The process domain and design implementation are discussed. Finally, the design matrix is discussed in more detail and advanced techniques for identifying coupling in the matrix are presented. Bonus lecture materials are available on complexity and the information axiom but are not presented in class." [6]
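The design-matrix and coupling ideas in the lecture material above can be made concrete with a small sketch. In Axiomatic Design, a matrix relating functional requirements (FRs) to design parameters (DPs) is uncoupled if it is diagonal, decoupled if it can be reordered into a triangular form, and coupled otherwise. The sketch below is illustrative only: the example matrices are invented, and the greedy ordering test is just one simple way to check for a triangular reordering, not a technique prescribed by the course.

    # Illustrative sketch (not course material): classifying an FR-DP design
    # matrix per the independence axiom. A nonzero entry A[i][j] means design
    # parameter DP_j affects functional requirement FR_i.
    import numpy as np

    def classify_design_matrix(A):
        """Return 'uncoupled', 'decoupled' or 'coupled' for a square matrix."""
        B = (np.asarray(A) != 0)                 # keep only the coupling pattern
        n = B.shape[0]
        if np.array_equal(B, np.eye(n, dtype=bool)):
            return "uncoupled"                   # diagonal: FRs fully independent
        # Greedy test for a triangular reordering: repeatedly pick an FR that
        # depends on a single remaining DP; if that is always possible, a safe
        # design sequence exists and the design is decoupled.
        rows, cols = set(range(n)), set(range(n))
        while rows:
            row = next((i for i in rows if sum(B[i][j] for j in cols) == 1), None)
            if row is None:
                return "coupled"                 # no safe next design step
            col = next(j for j in cols if B[row][j])
            rows.remove(row); cols.remove(col)
        return "decoupled"

    # Invented 2x2 examples (a faucet with separate valves vs. one mixed handle
    # is the classic illustration in AD texts):
    print(classify_design_matrix([[1, 0], [0, 1]]))   # uncoupled
    print(classify_design_matrix([[1, 0], [1, 1]]))   # decoupled
    print(classify_design_matrix([[1, 1], [1, 1]]))   # coupled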


4.4 Uniqueness of Lecture Materials
The emphasis on design theory and design thinking in ED100 is very unusual in both cornerstone and capstone classes. Sheppard noted that "[w]hile all of the [multi-week project based] courses reviewed do talk about design methodologies to some extent, in some cases this discussion is much more extensive. For example, at Harvey Mudd College, students engage in a number of exercises that have them explicitly consider a variety of design methods/strategies. In addition, Harvey Mudd's course relies heavily on exposing students to design case studies." [1] The Harvey Mudd course addresses various aspects of the design process including: problem definition; objectives and functions identification; morph charts; performance specifications and metrics; generating and evaluating alternatives; and proof of concept and prototyping [13]. But it does not seem to cover the material in the same breadth or depth that ED100 requires. In this respect, ED100 stands alone. Axiomatic design theory is offered primarily in graduate engineering subjects [14-17], as university professional short courses [18-19] and through short courses offered by industry [20]. AD has been used in capstone design courses in the Mechanical and Electrical Engineering Departments at the University of Idaho [21]. It has been combined with a variety of other design tools and theories in an undergraduate capstone course at Ryerson University in Canada [22]. It is also compared to other design processes in an undergraduate materials design course at Northwestern University. However, AD and other formal design theories are still relatively uncommon at the undergraduate level and unheard of in the growing field of freshman design education.
4.5 Grading Philosophy
The unique motivation and philosophy of ED100 are also apparent in the way that the course deliverables are defined and evaluated.
4.5.1 To Build or Not To Build
Many undergraduate design courses strongly emphasize design realization (building). However, there is a risk that students will focus on "doing" at the expense of "thinking" when faced with the pressure of impending deadlines (students sometimes refer to this as "hacking things together"). Design implementation in ED100 is encouraged but not required. Some projects will have full working prototypes, but the majority will rely on sketches, sketch models, movies, dioramas, or other media to communicate their ideas. It is expected that students will have additional opportunities to do detailed design and build-and-test in upper-level design courses offered within their departments.
4.5.2 Breaking the Rules
"Novices in all fields, including design and communication, tend to seek "the rules", while experts tend to ask "what are we trying to do?" In ED100, there are no "rules" which students must obey. Instead, students are exposed to different ideas, opinions, tools, and guidelines. The students then choose which aspects of the lecture materials to apply to their design projects and how to apply them based on their needs. The emphasis is on whether or not the students' decisions make sense, and whether or not they can explain and defend those choices." [6]
4.5.3 Grading Guidelines
A full 50% of the final grade in ED100 is based on the final deliverables (10% poster, 20% paper, 20% technical evaluation). The rest of the grade is based on design and communication laboratory attendance, participation, and assignments, and on peer review. Roughly half of the grade is based on individual work and half is determined by the group's performance. The paper and poster grades are based on how well students have communicated their ideas both verbally and visually. The technical evaluation is based on how well students have understood and applied formal design theories and other lecture materials to their project. Students are specifically judged on their problem statement (7.5%); design process (12.5%); design concept, feasibility, and results (50%); risks and countermeasures for their design (10%); and their use of axiomatic design theory (20%). The contribution of axiomatic design theory to the final grade is relatively small to allow students the freedom to use other design theories if desired. Trial-and-error and intuitive design are not permitted and result in a severe grading penalty. All design decisions must be explained and justified. Success is evaluated not just based on the quality of the resulting design from the viewpoint of the faculty members doing the grading, but based on the students' ability to understand, explain, and substantiate their work. Instructions and grading criteria are available to the students for all assignments throughout the semester, including the final deliverables, so there is no confusion about course expectations.
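Purely as an illustration of the weighting scheme just described (the scores below are invented, and treating the technical-evaluation percentages as fractions of that 20% component is an interpretation rather than something the course states explicitly), the final-deliverable portion of a grade could be computed as follows:

    # Illustrative sketch of the ED100 final-deliverable weighting described
    # above. All scores are invented example values in the range 0..100.
    TECH_WEIGHTS = {
        "problem statement": 0.075,
        "design process": 0.125,
        "concept, feasibility and results": 0.50,
        "risks and countermeasures": 0.10,
        "use of axiomatic design theory": 0.20,
    }

    def final_deliverable_score(poster, paper, tech_scores):
        """Return the 50% final-deliverable contribution to the course grade."""
        tech = sum(TECH_WEIGHTS[k] * v for k, v in tech_scores.items())
        return 0.10 * poster + 0.20 * paper + 0.20 * tech

    example = {k: 80.0 for k in TECH_WEIGHTS}    # hypothetical team scores
    print(final_deliverable_score(poster=85.0, paper=75.0, tech_scores=example))
    # 0.10*85 + 0.20*75 + 0.20*80 = 39.5 out of a maximum of 50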

5 RESULTS
The success of ED100 has been evaluated through a variety of metrics including the quality of the final projects, continuing work, and unsolicited feedback from students and faculty.
5.1 Final Projects
Overall, the final projects in ED100 have been very good and are improving every semester. Teams have strong statistics, customer data and/or expert interviews to demonstrate the need for their design and substantiate their customer needs and functional requirements. Designs tend to be uncoupled or decoupled in accordance with principles from axiomatic design and TRIZ. The level of innovation for most of the projects is high. Few projects are merely new combinations of existing ideas and no projects rely on incremental improvements. The viability of the projects is supported by calculations, experiments, or customer testing data. In addition, some teams have full working prototypes. The number of working prototypes in ED100 is on the rise despite the fact that prototypes are not required. Teams produced working ducted-fan type unmanned aerial vehicles (UAVs, figure 1) and air-drop vaccine containers which successfully survived being thrown off tall buildings. Modular eco-friendly paper furniture was produced, including portable benches (figure 2), a desk which retracts into the ceiling, and bookshelves which can be reconfigured into chairs. Students also designed and built bio-mimetic robots that could climb stairs (figure 3) and navigate rough terrain. Some of the designs and prototypes that are being produced are junior/senior-level work and not what one would normally expect from a freshman design class. Finally, all projects use formal design theories and processes to produce their final design. Not only do the students use design theories, tools, and techniques that are presented in class and discussed in the course texts, they have begun to use tools, techniques, and theories from other areas of design and from fields outside of design. One team from the Fall 2008 semester used the '3C STP 4P' framework from marketing to integrate non-functional requirements and qualities into their design [24]. Other teams used evaluation graphs, gap maps, pairwise comparison matrices, and synthetization in their concept selection processes, although none of these techniques were presented in class or in the textbook [25-27]. These examples show that students are demonstrating genuinely synthetic thinking and are starting to "pull" information from other classes and other sources to meet the needs of their design projects.
5.2 Continuing Work
At the beginning of ED100, students rarely know about patents, publications, and other indications of success outside of grades. Fewer still understand the value of these indicators. Students will often request extra credit for filing patents in the hopes that this will improve their

grades. The response is always to inform the students that a patent, paper, or other type of publication is more valuable than the grade and to remind them that the focus of the course is on their future instead of on their GPA. It seems that some of that message is getting through. After the Fall 2007 pilot of ED100, three teams were invited to present their design projects at the Fifth China-Japan-Korea Joint Symposium on Optimization of Structural and Mechanical Systems in Jeju, S. Korea. A fourth team continued their work as an Undergraduate Research Project (URP). These students presented their work as a research paper at the 21st International KKCNN Symposium on Civil Engineering in Singapore [28] and won an award for best student presentation.

Figure 1. Agricultural (left) and Surveillance (right) UAVs

Figure 2. Portable Eco-Friendly Paper Bench

Figure 3. Stair-climbing "Slinky" Robot (Video Screen Captures) [23]

After the Spring 2008 semester, one team continued their work as a URP and filed six patents on their design for a remote control (RC) helicopter charging platform. In addition, Samsung Electronics invited nine teams with projects related to the company's interests to participate in a 'KAIST Freshman Invitation Competitive Seminar'. After the Fall 2008 semester, two teams filed patents based on their designs for innovative toy water guns. Students from three additional teams traveled to Lisbon, Portugal to present their work as research papers at the


5th International Conference on Axiomatic Design [23-24, 29].
5.3 Unsolicited Feedback from Faculty
"Unsolicited feedback from various faculty members associated with ED100 has generally been very positive. Many faculty members regularly voice their support for the course and express interest in continuing to be a part of the course as time allows. However, there were some initial reservations about the course, including concerns that the students did not have enough domain knowledge to do design, or that the course material was too nontraditional or not applicable to all students and majors. As time goes on, those concerns seem to be diminishing. One of the project advisers from the Spring 2008 semester sent the course coordinators (and the president of the university) an email with the following statement: "At the beginning of this semester, I was uncertain about whether this kind of design course would work for freshmen. … However the seriousness and heated atmosphere of the students in the team discussion convinced me that they know what they are doing and this course will work. I was also reconvinced that you don't need to be a master or PhD to be a good designer." The greatest strength of any educational experiment is not shown by its initial supporters, but in those who are convinced after experience with the project." [6] Comments of this kind not only indicate the success of the course. They also demonstrate a shift in faculty perceptions about the course and about design education.
5.4 Unsolicited Feedback from Students
"Similarly, the initial response of the students to ED100 is frequently mixed. The course material is new to all of the students and very challenging. Students often complain that the course work load is too high and the course itself is too fast-paced. They also sometimes feel that the lecture material is "trivial" or "useless" at first and that the course should not be required. However, these opinions often change after the students have completed their project and participated in the poster fair. One student email to the course coordinators from the Spring 2008 semester said: "I want to give my thanks to you. Frankly speaking, even until the last period of the semester, I didn't like this class because the homeworks [sic] was too hard, big and a lot. But, during doing the poster fair and presentation, I changed my mind. I thought that it is just hard and doesn't help my study, but now I think that it changed my view of thinking. And I also could feel the happiness of accomplishing something with the members with same object. It was really the one of the happiest things in my first semester. I like your class and thank you for giving me the chance to have this good experience^^." (Note: The double carets at the end of the statement are the local equivalent of a smiley face.) Similar sentiments were echoed by a student from the Fall 2008 semester: "To be honest, this course was one of the toughest courses that I have learned since my elementary school years :) Also, as our team's project topic was not making any tangible thing, but rather creating a policy, it was a lot tougher. Getting started was such a huge job that it took us more than about three weeks to get the idea of what we are going to do. However, after the poster fair and all those difficult days are past, I think we learned a


lot! I feel really thank you for this course for giving me such precious lessons! Hope the coming freshmen students next year learn a lot from this course as well :)" These statements are significant for three reasons. First, again they show that the course is successful in accomplishing its goals in changing the students' attitudes towards their education and their role in the world. Second, they show that students who were not initially supportive of the course were convinced of its value through their experiences. But they are most important because surveys have shown that most undergraduate students do not realize the full value of their experiences in design courses until 5 years after graduation. The fact that these students are beginning to recognize the value of ED100 both for themselves and for future ED100 students after only a single semester is phenomenal." [6]
5.5 Paradigm Shift in Student Culture
There are strong indications of the beginning of a paradigm shift in student attitudes as a result of ED100. ED100 faculty have observed that students are increasingly comfortable with expressing themselves in English. They are becoming more vocal and pro-active both inside and outside of the class. Their questions and comments frequently demonstrate a very mature and impressive understanding of design. They actively seek help and look for feedback. They are beginning to debate with each other and their professors. And, we are finally starting to see students valuing the results of their work (and the opportunities and rewards) that exist outside of grades. Although these changes may seem small, they are a drastic departure from the traditional Korean educational system. However, the observations from the students themselves are even more important. One group of students observed that because of ED100 "[t]he homework mentality was broken, and rather than considering the tasks as a simple assignment, the students generally tried their best to create something to the best of their ability. The students generally react to courses considering cost to credit/grade ratio, despite the course ED100 being another three credit course, students spent immense amounts of time and thought independently in order to improve upon their design, exploring possible applications of their design, and also exploring other possible applications of the design process. Even jokingly the students would bring up concepts from the lecture during casual conversation, indicating that the concepts and theory taught in during the course were deeply penetrating." [24] The same students also observed that the course changed the landscape of competition between students. "The competition, although subtle, was also a large factor in motivating the students to strive for excellence. Considering the grade/credit to time invested ratio, grades were not the cause for competition, especially because the projects were not graded on a curve. We were able to identify three major sources that created competition. The first was the potential recognition and award, highlighted by an award ceremony at the end of the semester. The second source was competition amongst the students across different projects, all striving to generate the best possible designs for each project. However, the strongest competition was between the teams that dealt with the same projects.
This friendly, but fierce, competition motivated the students to come up with better and more creative solutions than the other teams, creating different types of satisfaction in the involvement of the course.”[24] This again indicates that students are beginning to value their education over their grades and are beginning to understand that solving problems is sometimes more important than the potential rewards involved.

Changes in attitudes towards teamwork were also observed. "[T]he unique characteristic about ED100 is that it does not allow the students to split the work load and work independently. It requires the students to work together. Our advising professor Jung Kim repeatedly informed us that collaboration and harmony would be required for the success of the project rather than equal distribution and specialization of the tasks." [24] Perhaps most surprising are not the changes in the attitudes and behavior of the students, but the fact that the students themselves recognize the changes and are able to articulate them so well.
5.6 The Long Road Ahead
Despite the apparent successes, there is still a long way to go. There is still a lot of confusion and debate, for both the students and the faculty, about the definition of "design" and about the value of AD. The term "design", when translated directly into Korean, strongly implies aesthetic or industrial design. It is also frequently equated with "creativity" and "optimization" in Korea. It is uncommon to see design discussed as a larger field and within a larger context. This is demonstrated in some of the comments from students in their final surveys. One student recognized the differences between the more common definitions of design that they are used to and the course material. However, they do not appreciate the role of axiomatic design in the design process. AD is seen as an impediment to creativity and ideation, instead of a way to help organize and focus those efforts. "What I've found out is that the way most of the teams thought of 'designing' was very different from the 'designing' that this course tended to do. We thought all we needed to do was think of a good idea and finalize it into an awesome product. But this ED100 designing was trying to create 'something' from 'nothing' which didn't allow any creative, popping ideas to be fulfilled directly. If I were to teach this class, I'd give the topic and develop it without the FRs and DPs and get onto specifying people's ideas right away. In this way, the teams will be relieved from the stress of FRs and have fun making their product more attractive and useful." Another student's comments indicate that the course has not adequately explained why it will never be possible to optimize a poor design into a successful one. It does, however, seem to have succeeded in helping them learn to value patents: "It was helpful in that we had to find solutions for problems in a different method, but we did not have a chance to optimize existing systems, which would actually be the realistic, "patent inducing" design approach that could actually assist in creating realistic solutions." These alternate or limited views of design are sometimes reinforced by television, faculty, family, and friends. Shifts in student thinking sometimes happen very rapidly, but changes in the attitudes of those around them can take much more time. Despite the obvious disappointments, these detailed comments show that the students are beginning to value "design" - whatever it is. They are also beginning to evaluate the design process that they used and suggest alternatives or improvements. These represent the third (valuing), fourth (organization), and fifth (characterization by value set/internalization of the value) levels of Krathwohl's taxonomy in the affective domain [30]. This, in itself, is a major achievement.

Other student comments from the final surveys do express an understanding of and an appreciation for axiomatic design theory and the course materials. The extent to which the majority of students do (or do not) appreciate some of the more formal aspects of the course is not known at this time.
6 DISCUSSION
There are many challenges associated with running any large design course, and ED100 is no exception. However, some of the challenges in ED100 are specific to the course. Most of the design theories covered in ED100 were originally developed by or for mechanical engineering or product design. Although many of them were intended to be universally applicable to all areas, the course material is still more suitable for some projects than for others. This is a challenge for both the faculty and the students and is reflected strongly in the survey responses. In addition, because the course material is being combined from different sources and because some of the material has never been taught to first-year students, the course material is constantly evolving and no unified textbook is currently available for the students. A textbook is planned for the course and should be available within a few years, but this is little consolation for the current students. The course currently uses either Ulrich and Eppinger [8] or the Northwestern EDC text by Yarnoff et al. [31]. Despite the challenges, there are also many opportunities, especially for the advancement of design education and design theory. ED100 provides an unprecedented occasion to study how undergraduate students learn axiomatic design theory and other formal design theories and apply them to non-traditional areas including chemical and biological engineering; human-computer interaction; policy design; educational design; and more. It is also an excellent opportunity to better understand how these various theories and design fields work together and to identify the agreements and disagreements between them.
7 CONCLUSIONS
A new required freshman design course at KAIST has been developed which challenges traditional ideas about freshman design education and which is successfully producing a paradigm shift in student thinking, attitudes, and culture. Despite the challenges, the future of the course, both as an educational vehicle and as a research opportunity for design theory and education, looks very bright.
8 ACKNOWLEDGEMENTS
First and foremost, thanks are due to KAIST President Nam P. Suh and the Republic of Korea for creating and sponsoring this program. Much credit is due to Prof. G. J. Park, Prof. S. D. Cha, Prof. G. Y. Nam, and Prof. S. Y. Lu for developing and running the ED100 pilot program; to Dean S. O. Park, Dean K. H. Lee, and Dean Y. H. Noh for their unwavering support for the program; to Prof. T. S. Lee for his outstanding contributions as the other ED100 course coordinator and design lecturer; and to Prof. S. Y. Kim, Prof. G. Furst, Prof. C. Vale, Prof. R. Gordon, Prof. C. Surridge, and Prof. D. Persram for their excellent work in developing and running the communication component of the course.


Special thanks are due to Prof. G. B. Olson, Prof. P. L. Hirsch, and the Northwestern EDC faculty for their kind help and advice in the evolution of the course. ED100 was heavily influenced by the EDC and both the course and the students have benefitted immensely from ED100's relationship with the EDC. Last, but certainly not least, the author would like to acknowledge the ED100 faculty project advisers and teaching assistants. It is their efforts that make all of the difference.
9 REFERENCES
[1] Sheppard, S. and Jenison, R., 1997, Examples of Freshman Design Education, Int. J. Eng. Educ., 13 (4): 248-261.
[2] Sheppard, S. and Jenison, R., 1997, Freshman Engineering Design Experiences: an Organizational Framework, Int. J. Eng. Educ., 13 (3): 190-197.
[3] Suh, Nam P., 2006, KAIST Inauguration of the 13th President: Inaugural Speech, Daejeon, S. Korea, 14 July.
[4] Dym, C. L., et al., 2005, Engineering Design Thinking, Teaching, and Learning, Journal of Engineering Education, January: 103-120.
[5] Lu, S. C., 2007, Module 2: What is Design, and Design Thinking? Design Thinking Lecture Series, KAIST Institute for the Design of Complex Systems, Daejeon, Korea, Fall.
[6] Thompson, M. K., 2009, Teaching Axiomatic Design in the Freshman Year: A Case Study at KAIST, Proceedings of the 5th International Conference on Axiomatic Design, Campus de Caparica, March 25-27.
[7] Suh, Nam P., 2001, Axiomatic Design: Advances and Applications, Oxford University Press, Oxford.
[8] Ulrich, K. T., and Eppinger, S. D., 2008, Product Design and Development (4th Ed.), McGraw-Hill International Edition, Singapore.
[9] Altshuller, G., 2005, 40 Principles Extended Edition: TRIZ Keys to Technical Innovation, Technical Innovation Center, Worcester, MA.
[10] Pahl, G., and Beitz, W., 2005, Engineering Design: A Systematic Approach (2nd Ed.), Springer, London.
[11] Simon, H. A., 1996, The Sciences of the Artificial (3rd Ed.), MIT Press, Cambridge, MA.
[12] Suh, Nam P., 2005, Complexity: Theory and Applications, Oxford University Press, Oxford, UK.
[13] Dym, C. L., Lape, N. K., Spjut, R. E., and Wang, R., 2005, Handbook for E4: Introduction to Engineering Design, Harvey Mudd College. Available: http://www4.hmc.edu:8001/Engineering/E4/E4%20Handbook%20(S06).pdf [Accessed Feb. 10, 2009.]
[14] Massachusetts Institute of Technology (MIT), 2008, MIT Course Catalog 2008-2009. Available: http://student.mit.edu/catalog/m2c.html [Accessed Dec. 8, 2008.]
[15] Worcester Polytechnic Institute (WPI), 2008, Graduate Course Catalog 2008-2009. Available: http://www.wpi.edu/Pubs/Catalogs/Grad/Current/mecourses.html [Accessed Dec. 8, 2008.]
[16] KAIST, 2008, KAIST International Summer School List of Courses Offered in English. Available: http://summer.kaist.ac.kr [Accessed Dec. 8, 2008.]
[17] Tate, D., Lu, Y., 2004, Strategies for Axiomatic Design Education, Proceedings of ICAD2004, the Third


International Conference on Axiomatic Design, Seoul, Korea, June 21-24.
[18] Massachusetts Institute of Technology Professional Institute (MITPI), 2008, Axiomatic Design for Complex Systems [2.882s]. Available: http://web.mit.edu/mitpep/pi/courses/axiomatic_design.html [Accessed Dec. 8, 2008.]
[19] Brown, C. A., 2008, Axiomatic Design Short Courses. Available: http://www.axiomaticdesign.org [Accessed Dec. 8, 2008.]
[20] Axiomatic Design Solutions Inc., 2008, Training. Available: http://www.axiomaticdesign.com/services/training.asp [Accessed Dec. 8, 2008.]
[21] Odom, E., Beyerlein, S., Brown, C. A., Drew, D., Gallup, L., Zimmerman, S., Olberding, J., 2005, Role of Axiomatic Design in Teaching Capstone Courses, Proceedings of the 2005 American Society for Engineering Education Annual Conference & Exposition, ASEE.
[22] Salustri, F. A., and Short, L. P., 2003, Using Student Design Projects for Secondary School Outreach, International Conference on Engineering Design (ICED 2003), Stockholm, August 19-21.
[23] Yeo, S. J., Jeon, B. S., Jeong, Y. C., Ha, D. Y., Kwak, K. W., Kim, S. H., 2009, Bio-Mimetic Articulated Mobile Robot Overcoming Stairs by using a Slinky Moving Mechanism, Proceedings of the 5th International Conference on Axiomatic Design, Campus de Caparica, March 25-27.
[24] Park, A., Chung, S., Lee, B., Lee, S., and Kim, J., 2009, Learning the Fundamentals of Design through the Axiomatic Design Process: A Case Study on ED100 at KAIST, Proceedings of the 5th International Conference on Axiomatic Design, Campus de Caparica, March 25-27.
[25] Cho, M. K., 2008, Interactive Gaming System Utilizing Bio-Signals: 6th Homework Assignment, Unpublished Document, KAIST, Daejeon, S. Korea, Oct. 30.
[26] Yang, H. H., Maeng, J. H., Cho, M. K., Hwang, S. H., 2008, Interactive Gaming System Utilizing Bio-Signals: ED100 Final Report, Unpublished Report, KAIST, Daejeon, S. Korea, Dec. 17.
[27] Heo, S., Kim, S., Kwon, H., Lee, K., Son, S., 2008, Syringe Healing for Africa: Effective Vaccine Container Targeting Children in Western and Central Africa, Unpublished Report, KAIST, Daejeon, S. Korea, Dec. 17.
[28] Thompson, M. K., Ibragimova, E., Lee, H., Myung, S., 2008, Design and Evaluation of an Eco-Friendly Tidal Dam using Axiomatic Design Theory, Proceedings of the 21st International KKCNN Symposium on Civil Engineering, Singapore, Oct. 27-28: 230-233.
[29] Park, J., Lee, H., Ha, K., Hwang, Y., Oh, A., 2009, The Application of Axiomatic Design Theory on a Cell Phone Interface for Location-Based Bus Application, Proceedings of the 5th International Conference on Axiomatic Design, Campus de Caparica, March 25-27.
[30] Krathwohl, et al., 1964, The Classification of Educational Objectives Handbook II: Affective Domain, David McKay Co., Inc., New York.
[31] Yarnoff, C., et al., 2009, Engineering Design and Communication: Principles and Practice, Northwestern University, Evanston, IL.

A Knowledge Based Approach for Affordable Virtual Prototyping: the Drip Emitters Test Case

P. Cicconi, R. Raffaeli
Department of Mechanics, Faculty of Engineering, Polytechnic University of Marche, Brecce Bianche, I-60131, Ancona, Italy
{p.cicconi, r.raffaeli}@univpm.it

Abstract
Virtual prototyping lacks application in SMEs due to the costs of software systems and the need for skilled operators. The aim of this work is to improve the drip emitter design process while reducing costs. A knowledge base is presented which gathers data on product behaviour in terms of experimental data and simulation results for a set of meaningful test cases. Input design parameters were linked to performance indices on the basis of the correlations that emerged in the analysis. Specifications for a new product can be used to extract similar cases and to define a possible solution in terms of a combination of them.

Keywords: Virtual Prototyping, Knowledge Based Engineering, Design of Experiments, Drip emitters

1 INTRODUCTION

Today companies must increase their competitiveness in the global market. In the mechanical production field, this objective can be pursued by reducing time-to-market and increasing product quality. Both these aspects require an optimization of the design process. Good product design in a short time is achievable through sophisticated virtual models which represent all functional and manufacturing aspects. Nowadays, virtual prototyping technologies permit high-level simulations of many design aspects: geometry, kinematics, strength, fluid dynamics, production processes, etc. By performing a wide range of such analyses on a computer, the number of physical product prototypes can be greatly reduced. That means lower costs, shorter times to market and an overall higher quality level. However, this approach presents some problems, especially for Small and Medium Enterprises (SMEs). These firms often operate in low-value mass-product markets. Here, performance and quality are ever more important in order to maintain and possibly increase market share and to resist competitors from emerging countries. Therefore, virtual prototyping techniques need to be effectively employed in design departments. On the other hand, SMEs often lack the resources and competences to effectively employ such systems. Software is usually expensive and requires highly skilled, dedicated operators. By contrast, in a small department people are required to have a wide general knowledge in order to cope with different design aspects. This means that the tools used are often limited to CAD systems and that product optimization is performed through time-consuming trial-and-error approaches. For instance, even if the need for Finite Element Analysis (FEA) or Computational Fluid Dynamics (CFD) tools is recognised, their costs are too high compared to product value and batch sizes. This research work aims to develop a method to improve the design process in SMEs while keeping the costs of assets and resource allocation low. The objective is the introduction of knowledge base systems storing the information on a certain number of meaningful test cases. Each case is deeply analysed from a virtual and experimental point of view before being added to the knowledge base. These template products are characterised by a certain number of meaningful parameters, which are linked to the main specifications.


template products are characterised by a certain number of meaningful parameters, which are linked to the main specifications. When the system has collected a sufficiently wide number of examples, statistical rules are introduced to derive new solutions in response to new specification requests. The knowledge base is traversed in order to find the most similar cases, and the required solution is expressed as a combination of them. The aim of the approach is to limit the cost of deep product analysis to a restricted number of test cases. This work can be outsourced if the company lacks virtual prototyping tools or test machines. Once sufficient data has been gathered, a knowledge base is developed and rules are established between design parameters. When a new product instance comes, it is characterised by its own parameters drawn from the requirements. By introducing them into the knowledge base, similar solutions are extracted. Using statistical rules, an attempt solution is found which is believed to be close to the desired one. Design activity is then limited to optimization and physical prototyping. Since the attempt solution benefits from previous results, it requires a much shorter review process and a limited number of iterations. As a test case, a typical activity of a small mechanical company has been analysed: a plastic moulding business focused on the production of drip emitter components used for irrigation purposes. The design and realization of such components is quite complex and requires taking into account many different functional and production aspects. The long-term objective is a knowledge based tool to support the design of these particular devices. It needs a multidisciplinary approach, since it gathers data on product specifications, geometrical layout, numerical simulations, production choices, experimental tests and customer reports. The tool elaborates specification inputs, such as water discharge and geometrical constraints, maps them to the knowledge base and comes up with a suitable design.
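The retrieval-and-combination step described above can be pictured with a short sketch. The following Python fragment is illustrative only: the parameter names, the normalisation and the inverse-distance weighting are assumptions of ours, not the authors' implementation.

```python
import math

# Each test case stores input parameters and the measured output.
# Field names (path_depth, pipe_thickness, discharge) are illustrative.
CASES = [
    {"path_depth": 0.75, "pipe_thickness": 0.15, "discharge": 1.51},
    {"path_depth": 1.00, "pipe_thickness": 0.15, "discharge": 1.77},
    {"path_depth": 1.25, "pipe_thickness": 0.30, "discharge": 1.85},
]

def distance(case, spec):
    # Euclidean distance over the input parameters given in the spec.
    return math.sqrt(sum((case[k] - spec[k]) ** 2 for k in spec))

def attempt_solution(spec, k=2):
    # Pick the k most similar stored cases and blend their outputs
    # with inverse-distance weights to form a first-attempt value.
    ranked = sorted(CASES, key=lambda c: distance(c, spec))[:k]
    weights = [1.0 / (distance(c, spec) + 1e-9) for c in ranked]
    total = sum(weights)
    return sum(w * c["discharge"] for w, c in zip(weights, ranked)) / total

print(attempt_solution({"path_depth": 0.9, "pipe_thickness": 0.15}))
```

In practice such a blended value would only seed the optimization and physical prototyping mentioned above, not replace them.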

1.1 Drip emitter description
The drip emitter is an important device in water-saving agriculture and characterizes the development of modern agriculture. The use of drip emitters is fundamental in arid regions or where rainfall is decreasing. The task of this component is to dissipate pressure and to deliver water at a constant rate by lowering the pressure energy. Shapes are various, as shown in figure 1. Dimensions are usually very small, and the water flow crosses micro-orifices, such as labyrinth channels, which make the pressure drop. The discharge rate is usually 1 to 8 L/h and is linked to the small width and depth of the flow path, which is about 0.5 to 1.5 mm high.

Figure 1: Various design solutions for drip emitters
Drippers are equally spaced inside irrigation lines, which are laid on the ground or just a few centimetres below the surface level. During pipe extrusion, the drippers are welded to the pipe's inner surface. The pipe diameter is around 16 mm and its thickness varies between 0.12 and 1.5 mm. In agriculture many pipelines are used and the intake pressure is variable. In horizontal fields, the nominal pressure is 1 bar, while in sloping fields the pressure can reach 4 bar in lower-level areas (figure 2). There are two big families of drip emitters: the flat type and the round type. Each of them can be divided into two subfamilies: unregulated and regulated drippers. The flow rate in an unregulated dripper varies with the inlet water pressure. On the contrary, a regulated emitter maintains a relatively constant flow rate at varying water pressure, within the limits specified by the manufacturer. The latter show good performance in sloped fields, where the intake pressure is inevitably variable.

Figure 2: Dripper emitters in irrigation lines
The most important properties in drip tubing irrigation systems are uniformity, anti-clogging capacity and the lifespan of all components. A well designed dripper device should maximise these aspects and ensure good hydraulic performance. Uniformity is the property of each dripper of a piping line to provide almost the same discharge rate, within a range of ±10%. Anti-clogging capacity is the property of an emitter to reduce the precipitation of suspended particles; in fact, these devices can clog easily. Efficient turbulence can create reverse whirlpools in low-velocity zones, and this effect prevents the sedimentation of suspended particles. Another method to reduce clogging is the introduction of a filter at the water inlet section. This filter is often made of a grid which blocks particles larger than a third of the labyrinth's smallest cross-section.
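The two quantitative rules just stated, discharge uniformity within ±10% and a filter aperture of one third of the smallest labyrinth cross-section, translate directly into simple checks. The sketch below is a minimal illustration; the function names are ours and the mean discharge is assumed as the reference value.

```python
def is_uniform(discharges_lph, tolerance=0.10):
    """True if every dripper's discharge lies within +/-10% of the line mean."""
    mean = sum(discharges_lph) / len(discharges_lph)
    return all(abs(q - mean) <= tolerance * mean for q in discharges_lph)

def filter_aperture_mm(smallest_cross_section_mm):
    """Grid aperture blocking particles larger than a third of the
    labyrinth's smallest cross-section."""
    return smallest_cross_section_mm / 3.0

print(is_uniform([2.0, 2.1, 1.95, 2.05]))  # True: all within +/-10% of mean
print(filter_aperture_mm(0.8))             # ~0.27 mm for a 0.8 mm section
```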

Dripper life is linked to the plastic material used to produce the device. Many producers employ only thermoplastic materials. Most drippers are made of high-density polyethylene, since this choice offers a good compromise between physical and moulding properties.
1.2 Current state in drip emitter design
The drip emitter design process is commonly based only on the experience of engineers, supported by CAD-CAM systems and trial-and-error procedures. Nowadays, Computer Aided Engineering (CAE) systems can be successfully employed to investigate the performance of emitters without any realization of physical prototypes. In particular, CAE systems include Computational Fluid Dynamics (CFD) software, which is useful to calculate the hydraulic performance of the emitter, such as the output flow rate and the pressure drop in the labyrinth. On the other hand, production can be analysed with the help of moulding simulation systems in order to investigate product integrity, mould cycle duration and efficiency. The integration of virtual prototyping tools in the design flow is very important in shortening the whole production cycle. However, some specific knowledge is required for a correct interpretation of the results. CFD outcomes depend strongly on the geometry, but the latter is not well known. In fact, the nominal CAD model differs from the effective dimensions of a real dripper assembled into a pipeline. The extrusion process, used to form the pipe and stick the dripper to it, creates a permanent junction between the parts. The dentate path penetrates into the internal face of the pipe and the actual depth of the channel is reduced. The effective depth is not easily predictable, because it depends on the type of materials, the geometry, the external pipe thickness, the extrusion temperature, the extrusion speed, etc. Therefore, CAD/CAE system outputs must be matched with experimental tests in order to draw correct results.
2 STATE OF THE ART
In this section a brief review of the state of the art related to this research is outlined. In particular, Knowledge Based Systems, the Design of Experiments method and approaches to drip emitter fluid dynamics are presented.

2.1 Knowledge Based Systems
Knowledge Based Engineering (KBE) is a technical domain that includes methodologies and tools to acquire, formalize and represent in IT systems the knowledge of a specific application field. KBE is a special type of Knowledge Based System with a particular focus on product engineering design and downstream activities such as analysis, manufacturing, production planning, cost estimation and even sales. The development of such applications aims to shorten the product configuration phase, to aid decision-making activities and to automate repetitive procedures. Nowadays, many companies invest in KBE systems. Configuration is often applied in consolidated production situations to standardise functional groups and improve economies of scale. By means of a suitable analysis, it is possible to determine the product platform for future production. A further development is represented by the definition of variants through the assembly of "intelligent" modules that encapsulate the configuration rules and the design parameters [1].


However, this research is focused on those cases whose final solutions cannot be determined solely on the basis of specific design parameters. Here, the final configuration is the result of many design activities. The impact of each single selection or choice needs to be assessed in terms of cost, performance, assemblability and so on. In the absence of decision support tools, such a task is generally performed intuitively on the basis of the expert's personal skill. In order to evaluate alternative solutions, the designer must be able to manage the different types of knowledge that are part of the configuration model. The goal is to develop a system to support the expert during his or her decision-making activity. The problem of formalising, integrating and structuring the different types of knowledge involved in both the design-for-configuration and the configuration-of-the-solution phases is therefore a crucial point. The implementation of this support tool requires knowledge relative to the product domain. This knowledge can be classified into at least two kinds: explicit knowledge and tacit knowledge. Explicit knowledge is rational and sequential, and can be found in books, manuals and catalogues. On the contrary, tacit knowledge is more linked to individual experience, so it is very difficult to describe. Knowledge is mainly drawn from the development team, made up of people with different tasks and composed of internal and external collaborators. In SMEs some competences cannot be found internally due to the reduced staff, so it is important to formalise and store this knowledge in order to avoid continuous outsourcing expenses [2][3]. Knowledge recovery should be carried out so as to gather information without slowing down enterprise activities. In this analysis phase the basis for future development is established, since rules and tacit knowledge are collected. The development phase then follows: the expert team defines the tasks and implements a methodology and the related tools. The third step is the system test, in which the tools start to be employed in the design department.
2.2 Design of Experiments method
The Design of Experiments (DOE) method, developed by the mathematician Ronald Fisher, is used to determine the relationship between the different parameters (Xs) affecting a process and the output of that process (Y), using structured data matrices [4]. The advantage of a DOE tool lies in the acquisition of the tacit knowledge which is normally based only on designer experience. The method involves several steps: the definition of the objective, the choice of a (preferably small) number of experiments, and the definition of the input and output variables. It requires designing a number of experiments in which the principal variables are varied. By analysing the results, it is possible to find the optimal solution to a problem, the dependent and independent variables, and the relations between all parameters. In the areas of research and development DOE is fairly widespread, but it can be expensive, so to contain costs it is wise to perform as few experiments as possible. The DOE approach requires the identification of the influencing parameters of the problem. Since each experiment costs time and money, it is recommended to ask whether the experiments are really needed, so that a minimum number of them is organized and performed. Instead of randomly changing the design parameters, the DOE method distributes the experiment nodes as uniformly as possible. With this methodology, costs and result deviations can be calculated in advance.
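As an illustration of distributing experiment nodes uniformly rather than at random, the fragment below enumerates a full-factorial plan over the kind of factor levels used later in this paper; the variable names are ours, while the level values are taken from the test cases of Section 5.

```python
from itertools import product

# Factor levels as used for the flat-dripper test cases (Section 5).
path_types = ["A", "B", "C"]          # dentate labyrinth geometry
path_depths_mm = [0.75, 1.0, 1.25]    # channel depth
pipe_thicknesses_mm = [0.15, 0.30]    # assembly pipe wall

# Full-factorial plan: every combination becomes one experiment node.
plan = list(product(path_types, path_depths_mm, pipe_thicknesses_mm))
print(len(plan))        # 3 x 3 x 2 = 18 experiments, as in Table 3
for run in plan[:3]:
    print(run)
```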


2.3 Dripper fluid dynamics related works
Recently, several researchers have studied the fluid dynamics in the dentate path with numerical and experimental methods. However, these studies are often pure computational fluid dynamics simulations of the flow inside the labyrinth channels, whose main objective is the verification of the presence of a turbulent flow. There are not many research papers focusing on the behaviour of the dentate path and the influence of the geometry on the discharge rate [5][6][7]. An important question concerns the Reynolds numbers inside the labyrinth: as long as the particular dentate geometry has not been analysed, there is no actual knowledge of the critical Reynolds number that fixes the transition from laminar to turbulent flow [8]. According to Kamrmli [9], the critical Reynolds number is about 2000. Maintaining fluid dynamics conditions above this value, the flow can be considered turbulent, providing energy dissipation inside the path and an anti-clogging effect; the effects of reverse vortices along the path were shown in that work. From an experimental point of view, it is quite difficult to measure the effective Reynolds number in a path only about 0.8 mm wide, so many authors rely only on CFD simulation results. Zhang [10] set up an experiment to measure the flow with a Laser Doppler Velocimetry device on a magnified plexiglas model (dimensional ratio 15:1), according to the Reynolds number similarity method.
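To give a feel for the orders of magnitude involved, the following sketch estimates the Reynolds number in a rectangular labyrinth channel from the discharge rate, using the hydraulic diameter. The channel dimensions and water properties are nominal assumptions, not values from the cited studies.

```python
def reynolds_number(q_lph, width_mm, depth_mm, rho=1000.0, mu=1.0e-3):
    """Estimate Re = rho * v * D_h / mu for a rectangular channel."""
    q = q_lph / 1000.0 / 3600.0            # L/h -> m^3/s
    w, d = width_mm / 1000.0, depth_mm / 1000.0
    area = w * d                           # flow cross-section, m^2
    d_h = 4.0 * area / (2.0 * (w + d))     # hydraulic diameter, m
    v = q / area                           # mean velocity, m/s
    return rho * v * d_h / mu

# A 0.8 mm wide, 1 mm deep path over the 1-8 L/h discharge range:
for q in (1, 2, 4, 8):
    print(q, "L/h ->", round(reynolds_number(q, 0.8, 1.0)))
```

Under these assumptions the values straddle the critical Reynolds number of about 2000 cited from [9], which is why the turbulence of the flow must be verified case by case.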


3 PROPOSED APPROACH
The aim of this work is the development of a framework for the implementation of knowledge based applications to support the design of products requiring complex virtual and experimental analyses. The steps to arrive at a valid knowledge base to be embodied in a support tool can be summarised as follows: an investigation phase based on a dialogue with customers and suppliers, research about the product, the application of virtual prototyping tools, the study of the production and assembly processes and of the materials, and finally the study of the particular experimental set-ups. The principle of the approach is derived from the DOE method: characteristic input and output parameters are used for the specific problem, and the test cases in the knowledge base play the role of the experiments [11]. After the data has been gathered, it needs to be stored in a system following the steps listed here:
Target identification: the definition of the principal objectives of the study.
Input parameter identification: the analysis of all variables on which the problem depends. These parameters can be divided into geometrical, physical, process and operating parameters. They respectively represent physical constraints, material properties, production process parameters and parameters linked to the operating conditions.
Output parameter identification: these parameters are affected by changes in the input ones, so they are part of the specifications and must be experimentally verified. They can be divided into functional and quality parameters.

Figure 3: Diagram showing the proposed approach

Selection of design parameters: only the parameters which most influence the study are analysed. Reducing the number of parameters saves time and money in the next steps. This selection must be justified to guarantee the consistency of the analysis; as a result of the verification step, these parameters may be changed.
Experiment planning: this step is important to predict the duration and cost of the analysis. It is recommended to reduce the number of experiments to the essential ones. This phase can be integrated with, or substituted by, virtual prototyping technologies.
Experiment execution and data collection: this phase is based on the experimental set-ups and the measurement operations. All data must be organised in a structured database.
Data analysis: the central step, in which the virtual and experimental results are used to find rules and conditions. This phase is linked to knowledge capture; the output is a first-attempt theory.
Verification: here the theoretical assumptions and their correctness are verified. This step can lead back to the selection of design parameters.
Knowledge formalization: the final step, in which the product knowledge is formalised in terms of parameter correlations, so that it is ready to be used in similar problems.
Once the data is acquired and the knowledge formulated, a tool to support the design process can be implemented. The core of this support tool is a structured multidisciplinary database, which collects all design aspects of the analysed test cases and the rules linking input and output parameters. A specific design solution is then extracted by recognising the product category and similar test cases. The parameter correlations and rules are then used to predict the product behaviour.

4 DRIPPER EMITTERS DESIGN PROCESS
The proposed approach has been tested on the design process of dripper emitters used in irrigation applications. The research program has been funded by and carried out in collaboration with F.G.R. srl, a small Italian company operating in the design and production of moulds for plastic components.

4.1 Drippers design process
The research is focused on dripper design and production. Companies define the exact shape of the drippers on the basis of specifications and then design and produce the injection moulds for their realization. Usually they sell the product, but sometimes only the moulds; above all, they provide a specific dripper design service. Dripper production follows the mass customisation paradigm, which is now widespread in the global market. The dripper is not a standard product, and the customers are pipe producers. These firms buy drippers which are inserted in the pipeline during the extrusion process. Every customer requires different specifications, based on the specific irrigation application and the technologies being used to manufacture the pipeline. When a new order comes, it specifies overall dimension requirements, a specific flow rate, a certain intake pressure, specific environmental working conditions and other functional requirements. All these variables lead to the necessity of a new design, which may often be similar to a previous one. However, this does not mean that the design process can be fully recovered: small changes require the repetition of all the design, manufacturing and testing steps, as pointed out below. Currently the time for designing and realising the final prototype of a drip emitter is quite long (almost 3 months) and includes four steps: the design of the emitter, the design of the injection moulding process, the assembly process between the dripper and the external pipe, and the experimental set-up of the emitter pipelines (figure 4).

Figure 4: Diagram showing dripper design phases and iterations between companies
A poor initial design can greatly increase the cost of the whole realization process. For instance, negative results from the experimental set-up require a product revision and the repetition of all the design and manufacturing steps. Moreover, drippers are usually designed and produced by one firm while they are assembled by the pipe producers. That means the overall iteration time is long, and the whole design process can span months. The first design step is the most complex, because engineers must consider at least four fundamental aspects: fluid dynamics, geometrical and dimensional constraints, the influence of the geometry on the moulding process, and the choice of materials. A new project begins with the analysis of the geometrical constraints. Overall dimensions depend on the different extruding machines which insert the drippers into the pipeline.


Each of them makes use of a particular track to convey the emitters, so it is not possible to standardize the geometrical limits of the product. Secondly, the designer must fulfil the customer's fluid dynamics specifications. In particular, every dripper has its own characteristic discharge rate, linked to a particular agricultural application. This parameter is very critical: there are no rules or methods to compute this value analytically, due to the complexity of the geometry. CFD simulation may be employed, but many parameters influence the results, and it is important to know them precisely in order to obtain good outcomes. That means the designer usually bases his work only on experience. At first he fixes a possible labyrinth path; then he works only on the depth of the channel. In choosing the geometry he must take into account the anti-clogging properties, the lifespan of the parts and the overall performance. A dentate design is usually preferred since it meets these aspects. The profile is often triangular, since it guarantees a turbulent flow which increases pressure dissipation and prevents the sedimentation of suspended grains. In addition, an intake filter is added to stop bigger particles. After the geometry definition, the moulding process is designed. The main aspects are related to line productivity and to the correct and constant properties of the product. This is very important for the quality of dripper pipelines, because every dripper must emit almost the same water quantity to guarantee a balanced irrigation of every plant in the field. Discharge uniformity is a central parameter which the designer must control throughout the process. Besides, the realization of the moulds requires many types of machine tools, such as copper electrodes and mills, with an accuracy of about 0.01 mm. The compromise between performance, cost and fast realization is hard to reach and requires experience-based knowledge. After a pilot batch has been obtained, the customer tests a first assembly line to experimentally measure the effective discharge rate. The results are often not very good, so the first dripper model may need a deep revision and the repetition of all the previous phases. This leads to a trial-and-error loop which terminates only when the experimental results are sufficiently good. This loop spans all production steps, so it is very expensive for the company, which needs to employ many resources to implement changes to the first dripper design.

5 A KNOWLEDGE BASE FOR SUPPORTING DRIPPER DESIGN
In this section the problem of the construction of a knowledge base for a dripper design support tool is addressed. Some meaningful test cases are examined both from a virtual and from an experimental point of view. This information is used to extract the main design parameters and their correlations.
5.1 Dripper design parameters
To test the introduced methodology, two different cases from the flat and the round dripper families have been analysed. The input parameters, which influence the performance of all the drippers, can be divided into: geometric parameters, such as the dentate path shape, the path depth and the pipe thickness; process parameters, such as the moulding pressure, the moulding temperature and the assembly process temperature; dripper and pipe material properties; and operating parameters, such as the water pressure, the water temperature and the clogging state. The main output parameters are the discharge rate and the lifetime. All these parameters are numerous, heterogeneous and linked in a complex way, so some hypotheses were formulated to simplify the approach. The study was focused on the geometric and operating parameters. Factors linked to the material and to the moulding process were considered constant: the material was fixed as high-density polyethylene for both the dripper and the pipe, the operating temperature was fixed at about 23°C, and the absence of any clogging sediment was assumed. As output parameter, only the discharge rate was taken into account, while the lifetime was ignored since it mainly depends on the chosen material and the conditions of use. In the test cases a constant pressure of about 1 bar was fixed, and the attention focused on geometrical parameters, such as the dentate path geometry and the path depth, which deeply influence the dripper performance. Generally speaking, the parameters were chosen out of convenience considerations. A new design very often starts from an existing model which maintains most of the geometric choices, such as the structure, the labyrinth shape, the inlet position and so on. For that reason a new design is often based on the choice of a product family and then concentrates on parameters such as the path depth, the overall length and the number of labyrinth bends which, conveniently varied, lead to the desired performance.
5.2 Chosen test cases description
Three kinds of flat drippers, characterised by three different dentate labyrinths, were analysed. Moreover, for each flat device three different path depths were considered. All the drippers were also experimentally tested on pipelines of different pipe thickness.
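One possible way to record such test cases in the structured database mentioned in Section 3 is sketched below. The field names and types are our assumptions, not the authors' schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DripperTestCase:
    # Geometric input parameters (Section 5.1)
    family: str              # "flat" or "round"
    path_type: str           # dentate labyrinth geometry: "A", "B", "C"
    path_depth_mm: float
    pipe_thickness_mm: float
    # Operating conditions held constant in this study
    pressure_bar: float = 1.0
    temperature_c: float = 23.0
    # Output parameters, filled in by simulation and testing
    cfd_discharge_lph: Optional[float] = None
    measured_discharge_lph: Optional[float] = None

case = DripperTestCase("flat", "A", 0.75, 0.15,
                       cfd_discharge_lph=1.65, measured_discharge_lph=1.57)
print(case)
```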


Figure 5: Three types of dentate paths analysed to study the flat dripper (flow path in dark)
In particular, the external dimensions of the flat drippers are 35x8 mm (flat type A), 20x8 mm (flat type B) and 30x8 mm (flat type C). The path depth takes the values 0.75, 1 and 1.25 mm. Finally, the thickness of the assembly pipe on which the drippers were installed was either 0.15 or 0.3 mm. The three types of path are shown in figure 5. At first sight, it may be observed that the first path is long but each dentate tip is very rounded, while the second path is very sharply

shaped, and the last path has an almost rectangular dentate module. After the simulations and experiments, it is possible to discuss the influence of the geometry design on the discharge rate. Round emitters, on the other hand, have cylindrical symmetry, so they are quite different from the flat type. The approach used is the same as for the flat ones: three labyrinth types were chosen (see figure 6), but in this case only the pipe thickness was studied, taking the values 0.7, 1 and 1.2 mm, while the channel depth was kept fixed. The pipe thickness effect is more evident here than in flat drippers. This is mainly due to the cooling phase after pipe extrusion: radial tensions weld the dripper to the pipe with a partial overlapping of the materials. A thicker pipe causes stronger tensions and therefore a deeper material deformation. As a result, the effective cross-section of the labyrinth channel is smaller than the nominal one.

Figure 6: The round dripper chosen as second test case (flow path in dark)
The external diameter of these emitters is 16 mm, while the lengths are 50, 40 and 35 mm. All three types have a nominal labyrinth depth of about 0.8 mm. As happens for most designs, two parameters were investigated: the path depth and the pipe thickness. The latter is not strictly a dripper design parameter, but it highly influences the results, so it can be considered as one of them. The other parameters were maintained constant among homogeneous product families.
5.3 Product virtual analysis
The chosen dripper models were both experimentally and numerically analysed. The fluid dynamics aspects were simulated with a commercial CFD system, Fluent by Fluent Inc. All geometries were meshed with grids of 0.1 mm spacing, leading to more than 1x10^5 cells. From the literature it is clear that the water flow in the emitter dentate path can be considered turbulent, so the k-ε model was used to calculate the fluid dynamics quantities [12]. The flow inside the emitters can be considered a viscous, steady, incompressible flow described by the following fundamental equations [13]:

Continuity equation:

$$\frac{\partial u_i}{\partial x_i} = 0 \qquad (1)$$

Navier-Stokes equation:

$$\rho \frac{\partial (u_i u_j)}{\partial x_i} = -\frac{\partial P}{\partial x_j} + \frac{\partial}{\partial x_i}\left[\mu_e \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\right] \qquad (2)$$

Gravity and surface roughness effects were neglected. In the simulations, standard boundary conditions for the flow inlets and outlets were set: the relative pressure at the inlet was set to 1 bar, corresponding to the normal working pressure of the emitters, while at the outlet the pressure was fixed to zero. The outcomes of the numerical CFD analyses are reported in the following section.
5.4 Experimental tests
The experimental phase consisted of the design of the moulds and the realization of the different flat and round drippers discussed above. Tests were then carried out to measure the output parameters.

Figure 7: The test dripper machine

The data were gathered in two different ways: with a standard discharge rate measurement on some extruded emitter piping, and by means of an innovative test machine. Since the discharge depends on the type of pipe, the second test was designed to simulate the tube interference effect: basically, a silicone cylinder encloses the dripper and lets the water flow into the labyrinth. A detail of this machine is shown in figure 7.

Figure 8: Schematic diagram of the experimental set-up used to measure the discharge rate of drippers.
Figure 8 illustrates the scheme of this innovative dripper measurement set-up. The machine can test drippers while simulating the effects of the pipe, leading to time and cost savings: dripper performance can in fact be measured before pipe extrusion.


Flat and round drippers were tested by means of the machine. For the flat drippers nine experiments were planned, given the three different dentate paths and the three path depths. For the round drippers, three typologies of dentate path were analysed, all sharing the same depth. In the following tables the experimental results are reported along with the CFD outcomes.

Type   Path depth (mm)   CFD Discharge Rate (L/h)   Measured Discharge Rate (L/h)   Difference
A      0.75              1.65                       1.57                            11.5%
A      1                 2.05                       1.89                            8.5%
A      1.25              2.48                       2.11                            12.8%
B      0.75              1.32                       1.22                            11.5%
B      1                 1.81                       1.63                            10.4%
B      1.25              2.28                       2.04                            9.8%
C      0.75              2.52                       2.30                            11.7%
C      1                 3.08                       2.73                            11.4%
C      1.25              3.54                       3.12                            12.2%

Table 1: Comparison between simulated water discharge rate and measured values for flat drippers.

Type   Path depth (mm)   CFD Discharge Rate (L/h)   Measured Discharge Rate (L/h)   Difference
A      0.8               3.22                       2.42                            33.1%
B      0.8               5.41                       4.21                            28.5%
C      0.8               4.14                       3.12                            32.7%

Table 2: Comparison between simulated water discharge rate and measured values for round drippers.

Afterwards, experiments with the classical method for dripper discharge rate measurement were carried out. A measuring station was set up, with a pump that provides water to five drip tubing lines. Each pipeline is one metre long, with a total of 25 drippers. These measurements are very time consuming compared with the ones realized with the test machine, but they permit analysing the effect of the pipe on the dripper performance. For the flat drippers, eighteen measurement combinations were used because of the two tube thicknesses; for the round drippers, a further nine tests were carried out. These measurements are reported in tables 3 and 4.

Type   Path depth (mm)   Pipe thickness (mm)   Discharge Rate (L/h)
A      0.75              0.15                  1.51
A      1                 0.15                  1.77
A      1.25              0.15                  1.95
B      0.75              0.15                  1.32
B      1                 0.15                  1.67
B      1.25              0.15                  1.83
C      0.75              0.15                  2.22
C      1                 0.15                  2.56
C      1.25              0.15                  3.01
A      0.75              0.30                  1.42
A      1                 0.30                  1.62
A      1.25              0.30                  1.85
B      0.75              0.30                  1.02
B      1                 0.30                  1.22
B      1.25              0.30                  1.62
C      0.75              0.30                  2.02
C      1                 0.30                  2.36
C      1.25              0.30                  2.78

Table 3: Flat drippers discharge rate data measured with the standard method.

Type   Path depth (mm)   Pipe thickness (mm)   Discharge rate (L/h)
A      0.8               0.7                   2.32
B      0.8               0.7                   3.95
C      0.8               0.7                   2.98
A      0.8               1.0                   2.01
B      0.8               1.0                   3.45
C      0.8               1.0                   2.48
A      0.8               1.2                   1.92
B      0.8               1.2                   3.22
C      0.8               1.2                   2.21

Table 4: Round drippers discharge rate data measured with the standard method.

5.5 Design parameters discussion and correlation
The data were analysed to find correlations between the input and output parameters, and some one-to-one correlations emerged. For instance, the effect of the path depth on the discharge rate is evident in flat emitters: the relation between the two parameters is almost linear in our test cases (figure 9). In flat dripper Type A the ratio between discharge rate and path depth is almost 2 L/h for each mm; in other terms, a depth increase of 25% causes a 25% higher water flow. This behaviour can be observed in round emitters too, but no experimental or CFD data are available for it in this study.

Figure 9: Discharge rate versus path depth for flat drippers

The influence of the pipe thickness on the discharge rate is similar. In fact, especially in round emitters, a thicker pipe reduces the cross-section of the labyrinth channels and thus makes the flow rate decrease. Usually this effect is neglected, and this leads to problems with the pipeline assembly firms: each customer registers a different dripper performance depending on the pipe being used and on the specific extrusion process parameters, for instance the extrusion speed or the cooling effects.
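The near-linear depth-discharge relation noted above can be checked directly on the Type A test-machine data of Table 1. This is only a verification sketch; numpy is assumed to be available.

```python
import numpy as np

# Flat dripper Type A, test-machine measurements (Table 1).
depth_mm = np.array([0.75, 1.00, 1.25])
discharge_lph = np.array([1.57, 1.89, 2.11])

# First-order least-squares fit: discharge ~ slope * depth + intercept.
slope, intercept = np.polyfit(depth_mm, discharge_lph, 1)
print("slope:", round(slope, 2), "L/h per mm")   # sensitivity to path depth
print("intercept:", round(intercept, 2), "L/h")
```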

The CFD analyses show large differences from the experimental data: the gap is from 28 to 33% for round emitters and from 8 to 12% for the flat ones. These errors are of course linked to the quality of the mesh model created for the CFD analysis and to the accuracy of the test machines. Moreover, the CFD results also depend on some assumptions on hydraulic parameters which would require further investigation. However, the main reason is that the CFD analyses were based on the nominal dripper geometry, which does not consider the effect of the pipe collapsing into the labyrinth. Therefore, the discharge rate measured on the test machine is always smaller than the CFD result. The numerical experiments should be corrected to account for this effect, for instance by reducing the nominal path depth for simulation purposes only. This was not done here, and the data are reported as they came out of the virtual or physical models. In fact, the aim of this paper is to show the correlation of design parameters among homogeneous families of products: as long as the CFD error is repeatable and the correlation between parameters is assured, the behaviour of new designs can be predicted on the basis of known older ones. The correlation between the labyrinth area and its volume is also worth further exploration. Each single dentate tip causes a pressure drop linked to its geometry; however, considering the labyrinth as a whole, the area-to-volume ratio takes into account the frictional effects on the walls. Increasing this ratio leads to more flow resistance and thus a reduction of the water discharge. The correlations which emerged apply to the specific dripper design family. It was noticed that different dripper types show different levels of correlation between parameters. However, among homogeneous families, the results can be extended to new designs and the performance predicted with a sufficient degree of reliability.

6 CONCLUSIONS AND FUTURE DEVELOPMENTS
This work has presented the approach which was followed to gather data on a specific design problem, that of drip emitters. Numerical simulations were performed on a certain number of meaningful test cases and verified from an experimental point of view. Design parameters were identified and correlated on the basis of empirical rules, as in the Design of Experiments methodology. The aim was to form a knowledge base made of the gathered data and of design rules to help the definition of a new product as new specifications come in. The future development of this work will be the implementation of a knowledge based tool that organizes and manages all these experimental data along with the empirical laws. To this aim, all data must be organized in a structured database, together with the experimental rules drawn from the data analysis. The system will manage the various design families and parameters in order to predict the water discharge rate. An interaction with a CAD system could also be useful to define the geometrical layouts. Finally, in order to widen the knowledge base, all new products must be stored in the database, so as to make the design information explicit and to add new experiments with which stronger parameter correlation rules can be elaborated.

7 ACKNOWLEDGMENTS
The authors wish to thank Mr. Francesco Ruschioni and Eng. Alessandro Telarucci of FGR srl for their precious contribution to the development of this research program.

8 REFERENCES
[1] Mandorli, F., Bordegoni, M., 2000, Product Model Definition Support for Knowledge Aided Engineering Applications Development, Proceedings of ASME-DETC'00, Proceedings CD-ROM.
[2] Cederfeldt, M., Elgh, F., 2005, Design automation in SMEs – current state, potential, need and requirements, Proceedings of ICED 05.
[3] Bermell-García, P., Fan, I.-S., Li, G., Porter, R., Butter, D., 2001, Effective abstraction of engineering knowledge for KBE implementation, Proceedings of ICED 01, 1:99-106.
[4] Montgomery, D. C., 2001, Design and Analysis of Experiments, 5th ed., John Wiley & Sons Inc., New York.
[5] Wang, W.Z., Liu, Y.Z., Jiang, P.N., Chen, H.P., 2007, Numerical analysis of leakage flow through two labyrinth seals, Journal of Hydrodynamics, 19(1): 107-112.
[6] Li, Y.K., Yang, P.L., Ren, S.M., Chen, H.P., 2006, Hydraulic characterizations of tortuous flow in path drip irrigation emitter, Journal of Hydrodynamics, 18(4): 449-457.
[7] Demir, V., Yurdem, H., Degirmencioglu, A., 2007, Development of prediction models for friction losses in drip irrigation laterals equipped with integrated in-line and on-line emitters using dimensional analysis, Biosystems Engineering, 96(4): 617-631.
[8] Dazhuang, Y., Peiling, Y., Shumei, R., Yunkai, L., 2007, Numerical study on flow property in dentate path of drip emitters, New Zealand Journal of Agricultural Research, 50: 705-712.
[9] Kamrmli, D., 1997, Classification and flow regime analysis of drippers, Journal of Agricultural Engineering Research, 22(2): 165-173.
[10] Zhang, J., 2007, Numerical and experimental study on hydraulic performance of emitters with arc labyrinth channels, Computers and Electronics in Agriculture, 56: 120-129.
[11] McCreary, M.L., 2007, Tips and tricks for using simulation DOE to assess the complex interactions of your process, Proceedings of the 2007 Winter Simulation Conference.
[12] Launder, B.E., Spalding, D.B., 1974, The numerical computation of turbulent flows, Computer Methods in Applied Mechanics and Engineering, 3: 269-289.
[13] Wei, Q., 2006, Study on hydraulic performance of drip emitters by computational fluid dynamics, Agricultural Water Management, Elsevier, 130-136.

Real 3D Geometry and Motion Data as a Basis for Virtual Design and Testing
D. Weidlich 1,2, H. Zickner 1, T. Riedel 1, A. Böhm 1
1 Institute for Machine Tools and Production Processes, Chemnitz University of Technology, Reichenhainer Strasse 70, D-09126 Chemnitz, Germany
2 Fraunhofer Institute for Machine Tools and Forming Technology Chemnitz, Reichenhainer Strasse 88, D-09126 Chemnitz, Germany

Abstract
A consistent and accurate digital data model plays a major role during the whole product life cycle of production facilities. This article introduces both technologies: 3D laser scanning for the acquisition of 3D geometry data, and motion capturing for real motion data, with a focus on their application in the product life cycle of production facilities. It gives an overview and comparison of different hardware and software solutions. Workflows for concrete tasks in the planning process show how both technologies can be combined and how the acquired data needs to be handled for integration with CAD tools or Virtual Reality systems. Finally, the article gives an outlook on the possible future development of these technologies.
Keywords: Virtual Reality, 3D Laser Scanning, Motion Capture, Ergonomics

1 MOTIVATION

What is required to successfully distribute and establish products in domestic and international markets? What does the buyer expect of the product? Typical expectations are high functionality, quality, reliability, advanced design, efficiency and safety at low cost. In view of this development, the importance of integrating advanced digital technologies into all stages of product development, from the first draft to the finished prototype, is increasing. A great number of basic decisions in the development process, such as variant decisions, layout planning or FME analyses, are made today based on digital three-dimensional data. Combining this basic data with virtual and augmented reality technologies allows early statements regarding component arrangements, solution variants and space conditions, and even regarding the design of the man-machine interfaces. In principle, two original types of digital data inventories can be considered. When developing new machines and plants using current CAD systems, 3D CAD data is available at early stages of product development for virtual test environments and for checking manifold issues. In addition to new designs, the optimization, reconstruction or extension of existing machines, plants and production facilities have a high priority in planning processes. Under these conditions, early and optimal answers to planning questions frequently require a combination of data on real objects, real environments and real human motion. Advanced processes such as 3D laser scanning for the digitalization of real environments, as well as Motion Capture for the recording and analysis of human motions, create the conditions for building these complex virtual test environments.


3D laser scanners can be found in many and varied areas of industry and research, where they constitute indispensable tools. Functions such as highly precise measuring, checking and documenting support engineers in their daily work. The use of Motion Capturing in an industrial environment is only just starting, but it creates conditions that facilitate realistic motion in human models and thus optimum results regarding the ergonomic design of machines and plants. Development processes can be comprehensively accompanied, supported and corroborated by combining this data in a VR environment as the basis for virtual testing and optimization processes, including tests involving humans. Many different disciplines can utilize this integrated environment and expand it to become a joint basis for work and discussion. It reduces design times, cuts costs and makes decision-making processes transparent [1].

2 INVENTORY-TAKING WITH 3D LASER SCANNING

2.1 Technologies at a glance
Laser scanning, also referred to as laser sensing, "denotes the row by row or grid-like scanning of surfaces or bodies with a laser beam" [13], in order to obtain a model. Laser scanners only capture surfaces that are visible in the vector of the laser pulse; rear surfaces and hidden objects remain occluded. This problem is solved today either by dynamic referencing using INS/DGPS (coupling inertial sensors and differential global positioning systems) and additional sensors during measurement, or by later, partially automated referencing. The technologies in Figure 1 offer different options. While airborne laser scanning (ALS) only allows a macroscopic, strip-like scan (2D profile lines) of the Earth's surface, with typical clouding due to vegetation and other objects, a terrestrial laser scanner (TLS) with spherical scanning (3D scan) in the near range (approx. 1 to 10 m from the laser source) may be able to eliminate clouding by a change of location. Mobile laser scanning (MLS) for the near range combines 2D profile scanners with a movable object on the ground (such as a vehicle) and can be compared to ALS. The most intelligent and most accurate method to date to avoid clouding for relatively small working spaces and short contactless measuring distances is provided by the laser arm, which can be manually brought into position in the respective space. The captured data is immediately displayed during measurement as a 3D point cloud and serves as the basis for a quasi control circuit. The configuration of the various devices also depends on the size of the object to be scanned and the associated measuring accuracy (see Figure 1). Regardless of their technological design, laser scanners generate a finite number of points over the time of a measurement, which are digitally stored with a position and brightness value. A model representation in point form is limited to this information and, unlike a surface or volume model, does not contain any topology. It is not always necessary to generate surfaces (see Figure 4). Point clouds depict a quasi-surface in a specific quality depending on point density, point thickness and color value. Interpretation by software tools is critical in this case.

Technology         Accuracy                           Resolution                                      Range            Rate                 Model size
ALS                +/-15 cm at 1200 m axially,        3 to 50 cm                                      1200 to 3500 m   up to 100,000 P/s    -
                   +/-0.6 cm at 1200 m horizontally
MLS                +/-1 cm at 100 m                   0.0025° angular / 0.004° scan line              2 to 300 m       up to 11,000 P/s     -
TLS                +/-2 to 3 mm at 25 m               0.009° angular / 0.00076° scan line             1 to 80 m        up to 120,000 P/s    -
                                                      (phase difference)
Laser Tracker      +/-50 µm at 10 m                   0.5 µm length measurement                       up to 35 m       up to 10,000 P/s     70x70 m
Laser Scan Arm     +/-35 to 50 µm                     -                                               up to 95 mm      up to 19,200 P/s     4x4 m
Laser Microscope   +/-1 to 10 nm                      up to 450 nm axially, up to 150 nm laterally    up to 16.5 mm    -                    20x20 mm

Figure 1: Laser capturing systems (Sources: Hansa Luftbild, FARO, Riegl, Zeiss, Optech)

2.2 Prerequisites for obtaining an effective model
Laser scanning is of interest for capturing existing machines and plants, or general objects, for which the digital models are incomplete or missing altogether. Elementary variables are the achievable accuracy or resolution, with the associated systematic distance variation, and the capturing rate, i.e. which volume can be captured and used per unit of time. Plants can range from a few to several hundred square metres. The TLS is currently best suited for such object sizes, with a measuring accuracy in the millimetre range and flexible positioning (see Figure 1). The main criterion is the visibility of the components that are relevant for documentation and revision. If complete three-dimensional objects are to be captured, the number of measuring positions is always greater than one. This brings up the problem of the location referencing of each individual scan. Practicable methods include referencing using pass markers, feature extraction, and iterative closest point (ICP) methods.
2.3 Laser scanning field tests
Field tests with the FARO LS 880 laser scanner in a typical production environment have shown that the lowest resolution possible should be selected for a maximum capturing rate, and that color information should be deactivated if it is not required (see Figure 2).

Resolution      Time per Scan   Photo Option   Re-Positioning
0.09° (1/10)    1 min           ~ 7 min        ~ 5 min
0.045° (1/5)    5 min           ~ 7 min        ~ 5 min

Figure 2: Capturing time examples with the FARO LS 880

Smaller resolutions also produce considerably smaller data volumes (see Figure 3). Such models can be represented smoothly, and without reduction, as segmented VRML by 3D viewers in VR environments.

Resolution      Data Volume (FARO Scene)   Data Volume (xyz)
0.09° (1/10)    ~ 5/150 Mbytes             ~ 50 Mbytes
0.045° (1/5)    ~ 18/165 Mbytes            ~ 200 Mbytes

Figure 3: Data volume examples for color scans with FARO

Pass marker referencing is only useful for individual plants or smaller production areas, since the resolution for geometry fitting (referencing) is insufficient at distances over 7 m (empirically determined using the FARO Scene software) and the effort of dragging the pass markers along would rise tremendously for big objects (such as halls). Suitable algorithms that can handle feature extraction and ICP (such as Geomagic) can achieve overlap accuracies of a few millimetres between two scans, averaged across all scanner sites. The strong divergence of two laser pulses, which leads to a decrease in point resolution as the distance increases, should be taken into account, and the accuracy may have to be adjusted depending on the respective application. A TLS scan at a low resolution is sufficient for a simple visual VR reference in conjunction with motion capturing test scenarios. But if components such as those of a machine tool are to be included in a CAD re-engineering process, systems such as a laser tracker or the laser scan arm may be better suited, since they permit considerably higher accuracies of approx. 35-50 µm. A TLS is in this context suitable only for simple surface feedback requiring accuracies of several mm, such as in a VR visualization: a detail size of several centimetres can only just be resolved visually and geometrically in a scan. In testing practice with TLS, a square site grid with a maximum width of 5 to 6 m and a resolution of 0.09° has proven its worth if no pass markers are to be used. The relatively small grid width automatically eliminates clouding at a high object density. An overall model is achieved by the iterative coupling of stationary measurements. It should be noted that error propagation may lead to curvatures (so-called "banana referencing") of planar areas (such as a building footprint), which can result in deviations in the metre range over 100 m of object length. This problem can be solved using a higher-order system of coordinates to which the orientation and positioning of the scans are aligned as well. If objects cover an area of up to 100 sq. m and the referencing accuracy requirements are of a few centimetres (absolute, across all scans), this factor can typically be neglected.
2.4 Direct utilization of point cloud records: a hands-on example
TLS is preferable for generating static and highly accurate models for documentation and visual referencing (a quasi 3D photograph). TLS allows capturing machines and plants, and their associated structures, in a relatively short time (compared to manual inventory-taking) and using them as a reference. MLS is unsuitable for typical production environments due to its specific application requirements, although its technology of automatic locating holds enormous potential for TLS in the future [3][11][12]. TLS can clearly do better than the practice-oriented accuracies of up to +/-100 mm achieved by manual digitalizations.
The visTABLE planning system is an example of the direct use of point cloud data in VR. The basis for planning, including detail planning, is created using a reference of the structure as a point cloud. The point cloud model primarily acts as a visual collision control in this context, or defines the static spatial limitations in a production facility, similar to a surface or volume model. It is likewise possible to segment individual parts of the point cloud and thus extract machines and plants as models and arrange them accordingly in the planning model. This method, also called "rapid planning" [2], allows planners to obtain high-quality statements in a shorter time.
Point clouds can also be used as references for surface feedback in design tools such as AutoCAD or ProEngineer. The manual generation of surface models based on a point cloud is still relatively complex today and may take up 75% of the time for inventory-taking, which equals four times the effort [4]. It follows that it is useful to mix existing CAD data with scan data to reduce planning costs. The Pointools software renders such mixed records in conjunction with visualization tools such as 3D Studio Max and avoids complex surface reconstructions, for example when visualizing motion capture records in an existing production area. 3D laser scanning thus allows the generation of basic data for virtual studies of the most varied issues in the process of developing machines and plants.
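The decrease in point resolution with distance mentioned above follows directly from the angular step of the scanner. The short sketch below computes the spacing between neighbouring points on a surface at a given range; it is an idealised small-angle estimate of ours, not a property of any specific device.

```python
import math

def point_spacing_mm(angular_step_deg, distance_m):
    """Approximate spacing between neighbouring scan points on a
    surface perpendicular to the beam (small-angle approximation)."""
    return distance_m * math.radians(angular_step_deg) * 1000.0

# Angular steps as in the field tests (Figure 2), at growing ranges:
for d in (5, 10, 25):
    print(d, "m:",
          round(point_spacing_mm(0.09, d), 1), "mm at 0.09 deg,",
          round(point_spacing_mm(0.045, d), 1), "mm at 0.045 deg")
```

At 25 m the coarser setting already spaces points several centimetres apart, which matches the statement above that details of several centimetres can only just be resolved.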


Figure 4: Model representation – point cloud, area model

3 INTEGRATION OF THE HUMAN ELEMENT INTO THE PLANNING PROCESS
3.1 More ergonomic man-machine interfaces
A well thought-out ergonomic construction of a machine, together with an ergonomic design solution, represents a major factor determining the decision in favor of or against a product. Buyers expect that a machine handles well, does not cause fatigue and provides ease and safety of operation, maintenance, retooling and service. Humans themselves are the yardstick of ergonomics. The design of the interfaces between man and machine has the purpose of allowing clear and unambiguous operation, operator safety and the optimization of work flows.
The European Machine Directive 98/37/EC (MRL) [5][14] (updated as from 12/29/2009 by 2006/42/EC) requires machine manufacturers to minimize physical and mental strain on machine operators by applying ergonomic principles as early as the blueprint stage of machines and plants. For a designer, complying with this criterion means including ergonomic considerations early in the development process, at a time when all that is available is CAD design data and the relevant provisions in the respective standards. Experience gained over decades, analyses of customer requests and well-designed components from suppliers may help improve ergonomics, but the relevant provisions in European standards [6][7] must be observed. One approach to meeting these requirements is to use the CAD data for first ergonomic studies. Data obtained by 3D laser scanning can be integrated into virtual test environments, in the form of point clouds or of models from surface feedback, to optimize the remodeling or extension of existing environments such as assembly stations.
3.2 Virtual ergonomics tests
The construction of virtual test scenarios focusing on the ergonomic design of man-machine interfaces has decisive advantages in product design over the use of desktop systems alone. A bond is created between humans and geometry by the true-to-scale representation of, and interaction with, the test scene, so that the geometry is perceived by the human and can be evaluated using a structured approach. In addition to perception by a real person and his or her natural flow of motion, the integration of virtual human models into the test environment provides an opportunity to assess issues under specific anthropometric aspects.
The functionality of the digital human models used in VR environments is already quite comprehensive, and they are constantly being adjusted to their real entities. Digital VR-capable human models such as JACK (a digital human model for improving the ergonomics of product designs, by Tecnomatix), RAMSIS
It is likewise scanning can be integrated into virtual test environments possible to segment individual parts of the point cloud and in the form of point clouds or models from surface thus extract machines and plants as a model and arrange feedback to optimize the remodeling or extension of them accordingly in the planning model. This method, is existing environments such as assembly stations. also called "rapid planning" [2] ,allows planners to obtain 3.2 Virtual ergonomics tests high-quality statements in shorter time. The construction of virtual test scenarios focusing on the Point clouds can also be used as references for surface ergonomic design of man-machine interfaces has feedback in design tools such as AutoCAD or decisive advantages in product design over the use of ProEngineer. Manual generation of surface models based desktop systems alone. A bond is created between on a point cloud is still relatively complex today and may humans and geometry by the true-to-scale representation take up 75% of the time for inventory-taking, which equals and interaction with the test scene so that geometry is four times the effort [4]. It can be derived from this that it perceived by the human and can be evaluated using a is useful to mix existing CAD data with scan data to structured approach. In addition to perception by a real reduce planning costs. The Pointools software renders person and his or her natural flow of motion, the mixed records in conjunction with visualization tools such integration of virtual human models into the test as 3D Studio Max and avoids complex surface environment provides an opportunity to assess issues reconstructions, for example when visualizing motion under specific anthropometric aspects. capture records in an existing production area. 3D laser scanning allows the generation of basic data for virtual The functionality of digital human models used in VR environments is already quite comprehensive while they are constantly being adjusted to their real entities. Digital 3 Averaged across all scanner sites VR-capable human models such as JACK 8 , RAMSIS 4 So-called "banana referencing" 5 Absolutely across all scans 6 8 Quasi 3D photograph Digital human model for improvment the ergonomics of 7 product designs by Tecnomatix For manual inventory-taking

585

VR (a computer-supported anthropometric mathematical system for vehicle occupant simulations, by Human Solutions), VirtualANTHROPOS [8][9], and IDO:ERGONOMICS (a digital human model for the analysis of ergonomics, by IC:IDO) provide the user with comprehensive model variants from anthropometric databases and catalogues. Depending on the analysis task, the model can be defined with characteristics such as age, gender, percentile (in ergonomics: the distribution characteristic for the dimensional proportions of the human body, stated as a percentage), region of origin and special physical features. The models are supported and moved by approx. 90 joints with up to five axes, which allow biomechanically correct movements [8][9]. Visualizing the visual, gripping and working spaces using auxiliary geometries, and graphically representing important joints in the VR environment, enable designers to make fast statements regarding the comfort or discomfort of defined model types in relation to the geometries to be examined (see Figure 5).





Fig. 5: Checking the view of tool/workpiece with a digital human model

3.3 Creating virtual test scenarios: hands-on examples
In cooperation with the machine toolmaker StarragHeckert GmbH (an international manufacturer of high-precision milling machines), a virtual check was performed during the development of the 5-axis HEC 630 X5 processing center to assess the view of the tool-workpiece interface and to check the accessibility of the working space and of specific modules that require disassembly and assembly as part of service operations [10]. The objective of this virtual pre-examination was to eliminate ergonomic deficiencies regarding views and accessibility at an early point in the planning process of the HEC 630 X5 and subsequent designs. The IDO:ERGONOMICS VR environment was used to combine all data relevant for the viewing process (see Figure 6):




• the CAD design data of the HEC 630 X5,



• the ergonomic issues relating to the design,



• the standards and directives on ergonomic machine design, and
• the human factor (real person and digital model) as a yardstick of ergonomics.

The CAD data generated in the design process was exported to VRML format, a standard exchange format for 3D scenes and at the same time one of the target formats of the VR environment (in addition to Performer Binary and Open Inventor). The checklist and evaluation sheets of BGI 5048-1, "Ergonomic Machine Design", and the associated BGI 5048-2, "Information on the Ergonomic Machine Design Checklist", were selected as working material from the statutory and European directives and standards. This practical guideline includes the most important criteria of ergonomic machine design from 30 individual standards and directives. In a third step, the relevant issues were assigned to the 11 higher-order inspection topics of the checklist. Relevant subordinate questions for examining the VR environment were derived from the more general topics "Machine Access", "Workplace Dimensioning", "Observation of the Working Cycle in the Manufacturing Process", "Manually Operated Controls" and "Keyboards, Keys, and Input/Output Devices".
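Since both the CAD data and the scan data end up as VRML in the workflow described above, a point cloud can be handed to the VR environment in the same format. The sketch below writes a minimal VRML97 PointSet; the file name and the sample coordinates are arbitrary illustrations, not part of the described project.

```python
def write_vrml_pointset(path, points):
    """Write scan points (x, y, z in metres) as a minimal VRML97 PointSet."""
    coords = ",\n      ".join(f"{x} {y} {z}" for x, y, z in points)
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape {\n  geometry PointSet {\n"
                "    coord Coordinate {\n      point [\n      "
                + coords + "\n      ]\n    }\n  }\n}\n")

write_vrml_pointset("scan_sample.wrl", [(0, 0, 0), (0.5, 0, 0), (0, 0.5, 0)])
```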


The examined scene was completed by using the user himself and the virtual human model of the IDO:ERGONOMICS VR environment. Depending on the specification selected, the virtual model visualizes the viewing space of the person, the gripping and working space (depending on physical body dimensions) as well as discomfort markers on joints indicating extreme postures (green: normal, yellow: critical, red: discomfort) using auxiliary geometries The first examination was performed subjectively using the users body dimensions and movements in the VR environment. No deficiencies regarding access and viewing space were detected. The subsequent use of the virtual human model allowed the inclusion of different physical dimensions and proportions in the viewing process. These functionalities of digital human models make it possible to determine ergonomic constraints for different physical dimensions, e.g. that people in the 5-percentile will not be able to reach the outer function button unless the control panel can be pivoted, for example, in parallel to the machine frame. Direct movement and turning the body towards the control panel would be required, which is an unfavorable sequence of movements for the user from an ergonomic point of view. The objective studies using different body proportions allow an optimized design of the man-machine interface. The special advantage of virtual scenarios is the combination of all relevant data and information in one environment, the VR system. Ergonomic questions such as view and accessibility can be assessed easily when taking a systematic approach. The geometry can be evaluated based on the researcher’s own feeling while the use of the virtual model of a human being allows the implementation of more functionality for evaluations taking into consideration anthropometrics. Measuring functions integrated into the VR system and the movement of components further support fast results. Early and, most of all, true-to-life assessment of design data in conjunction with man as the standard for ergonomics is ensured.
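The 5th-percentile reach problem described above reduces to a simple geometric test: does the control element lie inside the gripping space of a given percentile model? The sketch below uses placeholder anthropometric values for illustration only; a real study would take shoulder height and arm reach from a validated database for each percentile.

    import math

    # Placeholder anthropometric values in metres (not from any real database).
    REACH = {"p5":  {"shoulder_height": 1.25, "arm_reach": 0.62},
             "p50": {"shoulder_height": 1.40, "arm_reach": 0.70},
             "p95": {"shoulder_height": 1.55, "arm_reach": 0.78}}

    def can_reach(percentile, stand_xy, target_xyz):
        """True if the target lies inside the spherical gripping space."""
        body = REACH[percentile]
        shoulder = (stand_xy[0], stand_xy[1], body["shoulder_height"])
        return math.dist(shoulder, target_xyz) <= body["arm_reach"]

    button = (0.55, 0.35, 1.60)   # hypothetical outer function button (x, y, z)
    for p in ("p5", "p50", "p95"):
        print(p, "reaches the button:", can_reach(p, (0.0, 0.0), button))

With these placeholder numbers the 5th-percentile model fails the test while the 50th and 95th pass, mirroring the pivoting-panel finding reported above.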


Figure 6: Virtual test scenario, focus on ergonomics, combination of all relevant data in the VR environment

4 INTEGRATION OF REAL MOVEMENT SEQUENCES INTO VIRTUAL TEST SCENARIOS

4.1 Capturing motion (e.g. at an assembly station) using Motion Capture
In addition to the early inclusion of ergonomic issues in the design process of machines and plants, the evaluation and optimization of physically existing environments are important factors for improving the productivity and safety of human operators. The combination of various technologies in the planning and optimization process offers great benefits. If the goal is the restructuring or expansion of manual assembly stations, planning should be performed both with real people and with virtual corroboration. Creating virtual test scenarios allows the merger of existing geometries, which may have been captured using 3D laser scanning, with resources and facilities that are still being developed and are only available in the form of CAD data. Motion capture allows humans to be included and a human's natural sequence of movements to be transferred to a virtual environment. It can be used to record human movements and to store them in data formats that allow analysis of the recorded movements and their use in more advanced studies, such as driving virtual avatars. Stored functional content, such as the display of critical joint angles by discomfort markers in the virtual human model, allows the detection of uncomfortable sequences of movement and their elimination in a subsequent virtual optimization process.

4.2 Motion capture systems for whole-body tracking
Whole-body movement tracking, as required for studies of complex sequences of movement, is mainly performed with optical, electromagnetic, electromechanical, and inertial systems. Optical systems require multiple special cameras with a fast refresh rate and high resolution. The actor's body is equipped with active markers (light-emitting diodes) or passive markers (reflecting, non-luminescent markers) that are captured by the cameras [e.g. VICON, a provider of infrared cameras for motion capturing]. Despite their high accuracy and the actor's freedom of movement, optical systems are not suitable for capturing movements at assembly stations: the camera mounting effort would be enormous, and when using passive markers, other reflections in the room have to be prevented since they would be captured as ghost markers. Electromechanical systems consist of linkages (an exoskeleton) equipped with potentiometers that measure the rotation and orientation of the actor's joints [such as Gypsy 6™ by Meta Motion]. The disadvantage of such systems is the limitation of movement due to the design constraints of the skeleton; their use in ergonomic studies is not beneficial, since natural movements cannot be performed without restriction. In electromagnetic systems, a transmitter unit generates a low-frequency electromagnetic field; the sensors attached to the actor are activated by induction, and the data is transmitted to a control unit that determines position and orientation in 3D space. The disadvantages of these systems for motion capture at assembly stations are the rather small radius of action, limited to the generated magnetic field, their susceptibility to interference, and the actor's cabling. Motion capture systems with acceleration/inertial sensors, like the MOVEN whole-body motion capture suit by Xsens, currently offer the most comfortable option for motion capture at assembly stations. The suit is based on miniature sensors that are integrated into the garment and require no external markers or cameras. Data transfer is wireless and in real time. No calibration effort is required, and the actor's movements are not restricted, enabling a large scope of action and the recording of any type of movement (see Figure 7).

Figure 7: Motion capture at a real machine tool and virtual interpretation by the real actor

The 16 inertial sensors integrated into the suit record the 3D position, 3D orientation and, optionally, the acceleration, speed, angular speed and angular acceleration of each segment. The data obtained drives the avatar of the VR application. Skeleton models provide the link between the two processes: the skeleton model of the actor who performs the movements, and the skeleton of the virtual human model to which the motion data is applied. The skeleton structures have to be identical, which includes the topology and the number and labeling of skeleton elements. The MOVEN SDK (the "Moven" Software Development Kit) provides a stream of motion data for immediate use in the virtual scene. The standardized BVH data format (BioVision Hierarchical data) provides an interface for data transfer into systems with integrated virtual human models; the movements are exported as a global translation plus the rotations of the joints.
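As noted above, the actor skeleton and the avatar skeleton must match in topology and in the number and labeling of elements before motion data can be applied. The following Python sketch illustrates such a compatibility check; the joint names and the dictionary representation are illustrative assumptions, not the MOVEN SDK or a BVH parser.

    from typing import Dict, Optional

    # A skeleton as a mapping: joint name -> parent joint name (None for root).
    Skeleton = Dict[str, Optional[str]]

    def retarget_compatible(actor: Skeleton, avatar: Skeleton) -> bool:
        """Precondition for applying motion data: same joint names and topology."""
        return actor.keys() == avatar.keys() and all(
            actor[joint] == avatar[joint] for joint in actor
        )

    actor = {"Hips": None, "Spine": "Hips", "Head": "Spine",
             "LeftArm": "Spine", "RightArm": "Spine"}

    avatar_ok = dict(actor)                 # identical: motion can be applied
    avatar_bad = dict(actor, Head="Hips")   # same names, different topology

    print(retarget_compatible(actor, avatar_ok))    # True
    print(retarget_compatible(actor, avatar_bad))   # False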

5 SUMMARY AND OUTLOOK
For optimum support of design processes across all design stages, the methods of modern 3D digitalization increasingly have to be integrated and coupled with planning and development tools. Complex questions and early product assessments require simple but interdisciplinary solutions.







Further development of 3D laser scanning can enhance the creation of virtual test scenarios, particularly for the optimization and extension of existing environments. Scanned inventory data and CAD design data can be integrated as early as the blueprint stage for early detailed planning, e.g. when changing the design of existing plants. Another objective is automatic surface feedback based on feature recognition algorithms that allow the fast creation of basic 3D data for real-time VR environments. Automatic meshing is used in precise 3D body scanning to obtain 1:1 human models for analyzing optimum sizing. With the increasing accuracy of TLS and improved derivatives thereof, surface feedback of production areas will become more accurate and cost-effective. Another requirement is the fully automated referencing of TLS data across multiple measurements, to ensure that the position and orientation of highly precise point cloud records relative to each other are determined as accurately as possible.

Motion capture is for the most part applied in the professional entertainment industry to develop video games and create special effects. Another established field of application is sports medicine, where human movements are recorded and used to optimize athletes' performance. However, the inclusion of the human operator in the development of technological systems is increasingly demanded by the market, so motion capture applications are finding their way into industrial planning. The design of interfaces that are more suitable for humans is the top priority for using this technology. Optimum support for ergonomic issues in the planning and development of machines and plants will require the further merger of ergonomically functional, VR-capable human models with motion data from motion capture systems. The functionality of the human model can then be used for the fast and unambiguous evaluation of motion captured in real time, e.g. discomfort indication in critical joint positions during assembly operations with a limited gripping space. The examination of the virtual environment also creates an opportunity to place resources in the planning process in such a way that critical postures and sequences of movement are corrected and optimum working conditions are created for different physical proportions (example: 5th-, 50th-, 95th-percentile study; DIN 33204). The more human features are included in the planning process through modeling, the better suited the designed interfaces will be for humans. Which digital data are used in VR scenarios will always depend on the issues to be studied in the respective planning process and on the desired results. The intelligent and controlled application of advanced technologies such as 3D laser scanning and motion capture, in combination with virtual reality technologies, facilitates the creation of comprehensive test environments for early planning and design evaluations. This can prevent time-consuming and costly later design changes from the start.

REFERENCES
[1] Riedel, T.; Polzin, T.; Zickner, H.; Weidlich, D.; Neugebauer, R.: Effektive Modellgewinnung mittels Laserscanning: Chancen für die Anlagenplanung. In: 1. Internationaler Workshop "VAR² – VR/AR-Technologien für die Produktion", Chemnitz, 15/05/2008, pp. 77-86. ISBN 978-300-024677-7.
[2] Fraunhofer IPA: Jahresbericht 2006 – Rapid Planning – Fabrikplanung im Kundentakt – Kopplung von Laserscanner und partizipativer 3-D-Layoutplanung, p. 42.
[3] Kutterer, H.: Kinematisches terrestrisches Laserscanning – Stand und Potenziale. In: Luhmann, T.; Müller, C. (eds.): Photogrammetrie, Laserscanning, Optische 3D-Messtechnik – Beiträge der Oldenburger 3D-Tage 2007. Wichmann, 2007, pp. 2-9. http://www.intergeo.de/archiv/2007/Kutterer.pdf; accessed 25/06/2008.
[4] Böhm, J.; Schuhmacher, S.: Erste Erfahrungen mit dem Laserscanner Leica HDS 3000. 2005, p. 10.
[5] Gehlen, P.: Funktionale Sicherheit von Maschinen und Anlagen – Umsetzung der Europäischen Maschinenrichtlinie in der Praxis. Publicis Corporate Publishing, June 2007. ISBN 3895782815.
[6] BGI 5048-1, Berufsgenossenschaftliche Information für Sicherheit und Gesundheit bei der Arbeit; Hauptverband der gewerblichen Berufsgenossenschaften: Ergonomische Maschinengestaltung – Checkliste und Auswertungsbogen. Carl Heymanns Verlag, October 2006. http://www.heymanns.com/servlet/PB/show/1225114/bgi5048_2.pdf; accessed 05/05/2008.
[7] BGI 5048-2, Berufsgenossenschaftliche Information für Sicherheit und Gesundheit bei der Arbeit; Hauptverband der gewerblichen Berufsgenossenschaften: Ergonomische Maschinengestaltung – Informationen zur Checkliste. Carl Heymanns Verlag, October 2006. http://www.heymanns.com/servlet/PB/show/1225114/bgi5048_2.pdf; accessed 05/05/2008.
[8] Bauer, W.; Lippmann, R.; Rößler, A.: Echtzeitorientierte Evaluation mit Hilfe von Virtual Anthropos. In: Landau, K. (ed.), Gesellschaft für Arbeitswissenschaft (GfA): Mensch-Maschine-Schnittstellen. Methoden, Ergebnisse und Weiterentwicklung arbeitswissenschaftlicher Forschung. Bericht zur Herbstkonferenz der Gesellschaft für Arbeitswissenschaft. Stuttgart: IfAO Institut Arbeitsorganisation, 1998. ISBN 3-932160-09-6.
[9] Rößler, A.; Lippmann, R.: Virtuelle Menschmodelle in der Produktentwicklung. Spektrum der Wissenschaft, 9/1997.
[10] Neuber, D.; Böhm, A.; Weidlich, D.: VR-gestützte Ergonomiebetrachtungen an einem 5-Achs-Bearbeitungszentrum. In: 1. Internationaler Workshop "VAR² – VR/AR-Technologien für die Produktion", Chemnitz, 15/05/2008, pp. 77-86. ISBN 978-300-024677-7.
[11] Rieger, P.; Studnicka, N.; Ullrich, A.: "Mobile Laser Scanning" – Anwendungen. http://www.riegl.co.at/terrestrial_scanners/3d_projects_/city_modeling_/pdf/MobileLaserScanning.pdf; accessed 10/09/2008.
[12] Rieger, P.; Studnicka, N.; Ullrich, A.: "Mobile Laser Scanning" – Key Features and Applications. http://www.intergeo.de/archiv/2007/Riegl.pdf; accessed 20/08/2008.
[13] Teutsch, C.: Model-based Analysis and Evaluation of Point Sets from Optical 3D Laser Scanners. Volume 1 of Magdeburger Schriften zur Visualisierung. Shaker Verlag, 2007. ISBN 978-3-8322-6775-9.
[14] The European Parliament and the Council of the European Union: Richtlinie 98/37/EG, 22/07/1998.

Enhancement of Digital Design Data Availability in the Aerospace Industry

E. Shehab 1, M. Bouin-Portet 2, R. Hole 2, C. Fowler 2
1 Decision Engineering Centre, Cranfield University, Cranfield, Bedford, MK43 0AL, UK
2 Airbus, Filton, Bristol, UK
[email protected]

Abstract
This paper presents the development of a roadmap to enhance the current utilisation of digital design data within the aerospace engineering discipline. Within companies, many decisions have to be taken throughout product design and development based on access to data and information. The quality and timeliness of such decisions depend on data availability. Therefore, there is a need to enhance digital data availability in order to make better decisions in a shorter time, reducing the product lead time. This research project, which was conducted in collaboration between Airbus in the UK and Cranfield University, has identified opportunities to exploit the access and sharing of digital data, specifically within the stress population, through the use of Digital Mock-Up colouring and lightweight visualization.

Keywords: Digital Design Data; Product Design; Aerospace Industry

1 INTRODUCTION
In the current globally competitive environment, companies need to face and respond to many challenges. One of these is the data explosion that has taken place in the last decade, driven in part by the increase and integration of outsourcing across the extended enterprise. Companies need to manage ever-growing datasets and must allow their employees and partners to access, share, use and integrate them easily, at the right time, in the right place, in the right format and at the right quality. The massive quantity of data and tools represents the main barrier to the access of digital data. To compete, companies need to manage data as they manage any other resource, and they have to make the best use of technology. For instance, in the design process, 3D modelling has increased significantly, and access to it allows people to use the information much more rapidly. Improvements in data management tools and technology have helped companies to deal with this new challenge. However, the quantity and complexity of the tools and processes can make this data difficult to access. With the constant improvement of technology and the Internet revolution, digital data has become the source around which decisions are based. Specifically, in the design process, companies have to deal with a large amount of data and many processes. Entering a 3D world was a huge revolution, but it creates a large amount of additional data. Engineering companies are significantly increasing their dependency on 3D data, in line with a "3D as master" policy. For instance, some of these companies have proposed to eliminate 2D drawings by enriching the 3D model.



As the quantity and quality of the data in the 3D model increase, digital data accessibility and availability have to be reviewed in order to identify new customers and new uses of the data. The current combination of increased data volumes, more complex systems and processes, and a more widely spread extended enterprise makes it difficult to communicate clearly and to enable everyone to take advantage of the new technology, capability and flexibility provided by digital data. Thus, there is a need to enhance digital design data availability for use in supporting product design, engineering and development, in order to make better decisions in a shorter time. This research project therefore focuses on digital design data in the aerospace industry. Its main aim is to develop a roadmap to enhance the utilisation of digital data within the aerospace engineering discipline, both by optimizing existing processes and by identifying new visual management techniques.

2 RELATED LITERATURE
Nowadays, the data explosion is one of the most important problems that companies need to manage. According to Walker [1], a business in 2007 needed to store ten times more data than in 2000. In the same vein, Gartner Consulting estimated that by 2012 this factor will have increased to thirty. In the past 30 years, the use of information technology has become widespread, mostly within the design, engineering and manufacturing processes. Thus, data management methods need to focus on the retrieval of data: storing data is nowadays quite quick and easy, but finding the data and information you need within the vast quantity of stored data is definitely much more difficult [2].

It has been identified that up to 80% of product costs are defined during early concept design [3]. Engineering companies need to improve their product life cycles constantly by working on the cost-quality-time triptych. For instance, recent years have seen the life cycle of a car shrink from 10 years to only 4 to 6 years [4]. Accessing and sharing digital data thus appears to be a key success factor at the centre of this significant change. Indeed, with globalisation, companies have to work with suppliers and partners all over the world, and sharing quality data in real time is a daily challenge [5]. In recent years, one of the major improvements in sharing and accessing data has been the implementation and utilisation of Virtual Reality (VR), resulting in reduced product development lead time. VR represents a model "which is not real, but may be considered to be real while using it" [6]. One of the main advantages of using VR is the replacement of the majority of physical prototypes with virtual ones: the design can be reviewed and modified before any physical prototype is built [7]. Indeed, VR allows simple visualization, representation and analysis of a complex 3D model [8]. Therefore, the utilisation of virtual systems brings a large improvement in terms of time, cost and quality, achieving a better integration of processes and a focus on producibility and affordability. There are many different VR systems, ranging from video games and glove systems to head-mounted display systems, the Digital Mock-Up (DMU), graphical computers and many others. Wang [9] describes the DMU as a virtual prototyping technology that comprises the use of virtual reality and other technologies to allow the building of digital prototypes. The DMU is a 3D representation integrating all engineering, design, maintenance and manufacturing requirements. McBeth et al. [10] emphasised that DMU utilisation has improved the sharing of data between the OEM and the supply base. The DMU, if used daily by the design teams, can provide a dynamic representation of the current state of the product design's evolution and can be employed to resolve issues. However, the main benefits are gained in the manufacturing areas, due to the improved communication between design and manufacturing during the design process. The DMU also provides benefits in predicting the structural behaviour of a product, forecasting the aerodynamics, designing and testing complex systems, anticipating human interaction with the product and predicting the effectiveness of maintenance [11]. In terms of ways of working, using the DMU mainly improves speed, schedule shortening, technical quality, the concurrent engineering approach and cost reduction [12]. Technology is increasingly used by companies and workers to help them accomplish their daily tasks, communicate with others and learn new competencies. In response to the continuing geographic spread of companies and the increasing number of suppliers, who need to work on the same project with people around the world, a new technology has emerged: the virtual meeting room [13]. This virtual environment is a huge improvement in terms of technology and exchange of information, as it incorporates the feeling of "being there". This new type of collaboration allows the user to interact directly with the virtual environment as if it were a real room, since it comprises audio, text communication, documents and characters [14, 15].
Several companies, such as Intel, Raytheon, BP and HP, have started to develop and use this type of virtual room. They have even gone further, using it not only for meetings but also for training, customer relationships and private collaboration. For these companies, this new technology has unique advantages, such as maintaining people's attention, not feeling alone, seeing facial expressions, collaborating in real time, and solving problems more quickly than with current tools [16, 17]. However, this new technology is still expensive and in its pioneering days. It is currently quite difficult to use the virtual room, as its use is not sufficiently intuitive and people need practice to navigate the environment. Within five years, however, the virtual room may well take on the same importance in a worker's life as the Web, as it opens up the world of the 3D Internet [13]. The literature review has highlighted a lack of research projects focused on enhancing the utilisation of digital data. This research project therefore attempts to fill this gap by developing a set of recommendations to enhance the current utilisation of digital design data, especially in the aerospace industry.

3 RESEARCH METHODOLOGY

3.1 The Case Study Company
Airbus is a leading manufacturer of commercial aircraft, built on innovation, cultural diversity and commitment. Specifically, Airbus in the UK is the centre of excellence for the design and manufacture of aircraft wings. The design office at Filton, one of six within Airbus, manages the design of all wings for the whole Airbus family of aircraft. It is also responsible for the design integration of the landing gear and fuel systems [18]. Within such a multicultural company, employees need to work with partners and suppliers worldwide, exchanging knowledge, experience and ideas.

3.2 The Approach Adopted
The research project commenced with a familiarization stage, through the literature, informal interviews with key employees and regular visits to Airbus in the UK. The data collection focused on the processes, the customers and the uses of digital design data. This phase identified the users of digital data and captured their expectations and requirements. Figure 1 illustrates the adopted research methodology. The different opportunities for improvement in the access and sharing of digital data were identified through the data analysis phase. To identify possible improvements in the utilisation of digital data within the design process, the most important step was to capture the requirements and expectations of the employees: it was essential to understand what they needed to achieve, and the way they were actually working, in order to capture any requirements that could help them achieve their deliverables. Key representatives from different disciplines and aircraft programs were selected. A semi-structured questionnaire was employed in this study, characterized by five main areas of focus: introduction; job description and tools used; access to digital data; Key Performance Indicators (KPIs); and facilities. The questionnaire was designed to capture the role and responsibilities of the interviewees, their daily tasks and the tools they use to accomplish their work.


[Figure 1 summarizes the four-phase research methodology. Phase I: project definition and familiarization (company visits, interviews), delivering the client research brief and project flyer and familiarizing the team with the field of the project. Phase II: process understanding and data collection (company visits, interviews, questionnaires), delivering an analysis of the existing design process, the identification of users and the capture of their satisfaction. Phase III: data analysis (meetings with engineers, analysis of the questionnaires, tacit knowledge and experience), identifying the dependency on KPIs. Phase IV: roadmap development and validation (meetings, expert interviews), delivering the roadmap for improvement and its validation.]

Figure 1: The Research Methodology Adopted

The following are samples of the questions employed in this study:
• What are the various tasks in your day-to-day job, and what tools do you need to accomplish them?
• What kind of digital design data do you need to access?
• What kind of digital data do you need to extract?
• From your point of view, what are the areas that could be improved?

In order to capture the satisfaction and requirements of Airbus in the UK employees in the design process, fifteen interviews were carried out with representatives of different aircraft programs. The specific functions interviewed were Designers, DMU Managers, Engineering Program Managers, Lead Designers, Weight Engineers, Stress Engineers, Senior Design Integrators and the Head of Product Enablement. Finally, the last phase comprised the creation of a roadmap, giving the set of recommendations and improvements necessary to enhance the digital data use process. The roadmap was validated by a set of experts within the company in order to refine and gain acceptance of the proposed solutions.

4 THE ROADMAP DEVELOPMENT
The roadmap has identified several areas for improvement. This paper presents three of them: improved access to 3D models for stress engineers, DMU colouring, and the use of lightweight visualization. A further four areas of improvement can be found in Bouin-Portet [19]. These areas were selected on the basis of short-term implementation, technical feasibility and a rapid return on investment.

4.1 The Stress Issue
In Airbus in the UK, not all stress engineers have direct access to native 3D data. The stress population has two specific requirements for access to the 3D model. They need access to the 3D model for:
• Visualizing the 3D model, such as assemblies or components, in order to understand how it fits together; for this, they need a global view of the model.
• Obtaining component measurements and dimensions in order to perform stress and finite element analysis; for this, they need access to a detailed view with a specific accuracy.
However, the stress engineers may need to work with large volumes of 3D data, and they therefore need assistance from designers, who have CATIA access, to find the necessary 3D information for them. Accordingly, a possible lead-time improvement has been identified for both the stress and the design populations: it was estimated that stress engineers and designers could save between 2 and 2.5 hours per week by having direct access to the 3D model.

4.1.1 Global View Solution
For the visualization requirement, two solutions were investigated: the use of a lightweight visualization format, and the use of Product View. A lightweight visualization is a light-view format of the 3D model, created from the CATIA model. Product View is software that allows the 3D model to be viewed through a Product Data Management (PDM) system. Both can give the stress engineer a global view of the 3D model. After an in-depth study of both solutions, the best option proved to be the lightweight visualization format, as the Product View solution cannot visualize large assemblies, owing to restrictions in the PDM capabilities. The use of a lightweight visualization allows the user to obtain a quick global view of the 3D model. Moreover, one of the most significant advantages of such a format is that the user does not need any training to access it: the user only needs to install a viewer to visualize the files.

4.1.2 Detailed View Solution
For the measurement requirement, the lightweight visualization solution is not appropriate, since it is not accurate enough. Thus, three different solutions were investigated: the use of CATIA, the use of the DMU and the use of Product View. The conclusions of this investigation were as follows:
• The DMU turned out to be not accurate enough for taking measurements.
• CATIA would allow measurements to be taken, but it involves at least four full days of training to obtain access, and it is complex for people who do not use it daily.
• Product View appeared to be a good solution, but its accuracy needed to be checked.

In order to investigate whether Product View is accurate enough to obtain component measurements and dimensions, a test was carried out to understand its limitations. A stress engineer calculated the necessary measurements in CATIA and in Product View in parallel, for the same component, in order to compare the two in terms of measurement and ease of use. The conclusions of using Product View were the following:
• Component measurements can be obtained to within 0.001 mm accuracy.
• Measurements are easy to take.
• Loading a part is quicker than in CATIA.
• Only a PC is needed, not a Unix workstation.
The direct time saving of using both lightweight visualization and Product View was estimated at one hour a week for the stress engineers and two hours a week for the designers.

4.2 The DMU Colour
The case study employees need easy access to certain information located in the Product Data Management system. For instance, they need a global view of the maturity phase and the release status of components and assemblies in order to monitor the design progress, or they need to identify the parts from a certain supplier. Moreover, they need to see this information at part level while visualizing a section such as the Trailing Edge. Currently, to access this kind of information, employees need to extract it from the PDM system and transfer it into an Excel sheet; they then reorganize it in order to keep the data they need and make the information easier to read. This is an inefficient process, which can be improved through better integration between the data source and the visualization tools. The solution proposed is to colour the DMU in order to highlight any attribute required by the customer, for example the different levels of maturity of design data and the Data for Manufacturing (DFM) phase by part, or the parts belonging to a specific supplier. A script already exists to colour the DMU. However, to apply this macro, employees need to generate a KPI table, open the DMU in CATIA, run the macro manually from the KPI table and save the file in another folder; this is why the solution has not yet been deployed within Airbus in the UK. Thus, the goal is to automate the macro and combine the results with the lightweight visualization, allowing the management population and non-CATIA specialists to access this kind of information easily. The use of colour in the DMU will allow Airbus to save significant time in review meetings and in daily work, to achieve better tracking, control and monitoring, and to introduce visual management in line with a lean engineering approach. However, due to the nature of the requirements and benefits, it is quite difficult to quantify these savings in terms of time and money.
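The colour-mapping step that the macro automates can be sketched independently of CATIA. Everything below is hypothetical: the KPI table layout, the status values and the RGB colours are assumptions used only to illustrate attribute-driven colouring.

    import csv

    # Traffic-light mapping from maturity status to an RGB colour (values assumed).
    STATUS_COLOUR = {"released": (0, 170, 0),      # green
                     "waiting":  (255, 191, 0),    # amber
                     "in_work":  (200, 0, 0)}      # red

    def colour_table(kpi_csv_path):
        """Read a KPI table with part_number,status columns; yield (part, rgb)."""
        with open(kpi_csv_path, newline="") as f:
            for row in csv.DictReader(f):
                rgb = STATUS_COLOUR.get(row["status"], (128, 128, 128))  # grey: unknown
                yield row["part_number"], rgb

    # The (part, colour) pairs would then be applied to the DMU geometry by the
    # existing CATIA macro, or baked into the lightweight visualization files.
    for part, rgb in colour_table("wing_kpi_table.csv"):  # assumed file name
        print(part, rgb)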


The implementation of the DMU colour solution requires the following stages:
• automatic creation of KPIs in line with customer requirements;
• creation of a script to automate the macro;
• saving of the files as lightweight visualizations to make them available to non-CAD users.
It will then be possible to colour the DMU with different colours according to specific criteria and to make it easily accessible to the management population.

4.3 The use of lightweight visualization
The lightweight visualization is an interactive visualization tool used in a PC environment. It allows a global view of the 3D model in a non-CAD environment and complements the Digital Mock-Up; however, it cannot assist structural analysis. The main audience for this tool is the non-CAD users, the management population and most of the engineering population, such as stress engineers, lead designers and program managers. Hundreds of people are currently using the lightweight visualization in a recent aircraft program. In Airbus in the UK, the lightweight visualization files are currently created manually two or three times a week, and users can access these files according to their user rights. The proposed solution involves the following steps:
• automating the creation of lightweight visualization files by writing a script (representing one to two weeks of work);
• updating the files every day;
• creating a portal or a webpage to allow people to access the files and to achieve better control;
• communicating this new tool through splash screens, team managers and presentations in the digital data sharing facilities.
Thus, the first stage in the implementation of this solution will involve the creation of a script to allow the automatic creation of lightweight visualization at section level. Many benefits are associated with the use of the lightweight visualization format. A user does not need any CAD application to access it and can visualize large geometry once the files are created. To maintain data integrity and Intellectual Property Rights, the user cannot easily share the files, for instance by sending presentations, which allows better control and tracking of the information. The lightweight visualization also allows a particular configuration or product structure to be visualized. Finally, one of the most important advantages of such a format is that it gives non-CAD users easy access to the 3D model without any training, improving communication, encouraging management in the 3D world, and supporting the concurrent engineering and single-source approach of Airbus.

5 VALIDATION
The Digital Wing team and the users of the digital design data carried out the validation of the recommendations. A workshop with a set of 12 experts was organized to present the proposed solutions. DMU Managers, Mock-Up Integrators, PDM Support, Application Support, the Head of Product Enablement, the Digital Data Manager, Digital Data Capability and Enablement, and Digital Data Quality and Performance Managers were involved in this workshop. The workshop commenced with a presentation of the problems and the proposed solutions, followed by a discussion of each of the areas of focus.


The workshop members were enthusiastic and interested in the proposed improvements. Several issues emerged from the discussion. For instance, for the stress issue, it appears important to obtain an official document proving that Product View has the right accuracy to allow measurements. The DMU colouring was considered really useful; as the Head of Product Enablement put it: "This solution shows that the DMU is really powerful but not used to its full capabilities". Some employees would like additional information to be included, such as materials, in order to differentiate between metallic and composite parts and between supplier sources.

6 ROADMAP IMPLEMENTATION: FIRST STEPS
After the validation of the proposed roadmap, several steps have been started in preparation for the implementation of the DMU colour and lightweight visualization changes.

6.1 DMU Colour
In order to develop an automated DMU colouring capability, a pilot was presented against a sample dataset of an aircraft. To allow the development of the sample dataset, several tasks needed to be performed. First, a KPI table was created automatically, including the part numbers associated with the stage of maturity or DFM. Then the macro that had already been created was applied to this KPI table. Finally, the files created were saved as lightweight visualizations to facilitate access and make them available to non-CAD users. The DMU colour differentiated the parts that are released (green), those that are waiting to be released (amber) and those that are in work (red).

6.2 Use of lightweight visualization
In order to develop the lightweight visualization solution and to allow easy access to the files, several actions have been put in place. The technical side of the implementation includes the following steps (a minimal automation sketch follows the list):
• formalizing a structure breakdown;
• setting up a secure space to store the files, by locking off the X-drive for better control and security;
• creating a script for CATIA to create the lightweight visualization files automatically;
• creating an interface for an intranet webpage.
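The scripting step in the list above can be sketched as a simple batch job. This is only a pattern sketch under stated assumptions: the folder names, the `.lwv` extension and the `cad2lwv` converter command are placeholders; in practice the conversion would be done by a CATIA script or a vendor tool.

    import subprocess
    from pathlib import Path

    SOURCE = Path("/pdm_export/sections")        # assumed CAD export location
    TARGET = Path("/secure_xdrive/lightweight")  # assumed locked-down store

    def publish_lightweight(converter_cmd="cad2lwv"):  # hypothetical converter
        """Convert each CAD section to a lightweight file, skipping current ones."""
        TARGET.mkdir(parents=True, exist_ok=True)
        for model in SOURCE.glob("*.CATProduct"):
            out = TARGET / (model.stem + ".lwv")
            if out.exists() and out.stat().st_mtime >= model.stat().st_mtime:
                continue  # already up to date, so the daily re-run stays cheap
            subprocess.run([converter_cmd, str(model), "-o", str(out)], check=True)

    if __name__ == "__main__":
        publish_lightweight()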

The first two steps have been performed. The third step is waiting to be carried out by the Data Exchange Department, which means that in the short term the files will be created manually. The last step requires a specific request to allow the creation of the webpage. Finally, in order to deploy the tool, the chosen communication includes a presentation to key managers with a demonstration in the available facilities, emails and splash screens, and possibly an article in the company's internal magazine.

7 CONCLUSIONS
The roadmap to enhance the current utilisation of digital design data within the aerospace industry has been presented. There are opportunities to enhance digital data availability in order to make better decisions in a shorter time and so reduce the product lead time. The paper focuses on three main areas: the stress issue, Digital Mock-Up (DMU) colouring and the potential use of lightweight visualization. The proposed solutions were developed and validated, presenting the best way to achieve the goal, considering

the short-term implementation necessary to obtain immediate benefits and the technical feasibility of the new changes. The initial stages of implementing the developed roadmap have commenced within Airbus in the UK. The main benefit of implementing the new solutions will be time savings for the digital design data users.

8 ACKNOWLEDGMENTS
The authors would like to thank Airbus in the UK for sponsoring this project. They would also like to express their gratitude to all the people who contributed directly or indirectly to the success of the project.

9 REFERENCES
[1] Walker, D.M. (2007). How Data Works. White Papers and Research.
[2] Teradata (2006). Master Data Management. Available: http://www.teradata.com/t/pdf.aspx?a=83673&b=145653 (accessed June 18, 2008).
[3] Shehab, E., Abdalla, H. (2006). A Cost Effective Knowledge-Based Reasoning System for Design for Automation. Proceedings of the Institution of Mechanical Engineers (IMechE), Part B: Journal of Engineering Manufacture, 220(5): 729-743.
[4] Netto, A.V., De Oliveira, M.C. (2004). Industrial application trends and market perspectives for virtual reality simulation. Revista Produção, Vol. 4, No. 3.
[5] Bennis, F. et al. (2005). Virtual Reality: A human centered tool for improving Manufacturing. In: Proceedings of the Virtual Concept 2005 Conference, Biarritz, France.
[6] Hand, C. (1994). Other Faces of Virtual Reality. In: East-West International Conference on Multimedia, Hypermedia and Virtual Reality.
[7] Bao, J.S. et al. (2002). Immersive Product Development. Journal of Materials Processing Technology, 129, pp. 592-596.
[8] Monacelli, G. (2003). VR Applications for reducing time and cost of Vehicle Development Process. ATA – TORINO, Vol. 56, Part 7/8, pp. 236-241.
[9] Wang, G.G. (2002). Definition and Review of Virtual Prototyping. Journal of Computing and Information Science in Engineering, Vol. 2, No. 3, pp. 232-236.
[10] McBeth, C., Tennant, C., Neailey, K. (2006). Developing products in the global environment using digital technology – A case study. ICE Proceedings, 25 July 2006.
[11] Dassault Systèmes (2006). Digital Mock-Up – Revolution in the Air. Available: www.cdcza.co.za/DMU%20in%20Aerospace_FINAL_Eng.doc (accessed June 16, 2008).
[12] Giordano, P. (2002). Innovative Technologies in the AIT Process: The Digital Mock-Up for Integration and FMECA Modelling for Troubleshooting Support. In: 1st ESA Space System Design, Verification and AIT Workshop, ESTEC, Noordwijk.
[13] LAW (2007). Virtual Room 101. Available: http://www.law.com/jsp/legaltechnology/pubArticleLT.jsp?id=900005497672 (accessed July 5, 2008).
[14] Casanueva, J., Blake, E. (2000). Presence and Co-Presence in Collaborative Virtual Environments. Technical Report CS00-06-00, Department of Computer Science, University of Cape Town.
[15] Mogensen, P., Gronboek, K. (2000). Hypermedia in the Virtual Project Room – Toward Open 3D Spatial Hypermedia. In: 11th Conference on Hypertext and Hypermedia, San Antonio, Texas, pp. 113-122.
[16] BusinessWeek (2007). The Virtual Meeting Room. Available: http://www.businessweek.com/technology/content/apr2007/tc20070416_445840.htm (accessed July 23, 2008).
[17] HP (2006). HP Virtual Rooms. Available: http://h10076.www1.hp.com/education/hpvr/hpvr_orderfaq.htm (accessed July 15, 2008).
[18] http://www.airbus.com
[19] Bouin-Portet, M. (2008). Enhancement of Digital Data Availability in the Aerospace Industry. MSc Thesis, Cranfield University.
