
Software Testing Foundations


About the Authors

Andreas Spillner is a professor of Computer Science in the Faculty of Electrical Engineering and Computer Science at Bremen University of Applied Sciences. For more than 10 years, he was president of the German Special Interest Group in Software Testing, Analysis, and Verification of the German Society for Informatics. He is an honorary member of the German Testing Board. His work emphasis is on software engineering, quality assurance, and testing.

Tilo Linz is CEO of imbus AG, a leading service company for software testing in Germany. He is president of the German Testing Board and was president of the ISTQB from 2002 to 2005. His work emphasis is on consulting and coaching projects on software quality management and on optimizing software development and testing processes.

Hans Schaefer is an independent consultant in software testing in Norway. He is president of the Norwegian Testing Board. He has been consulting and teaching software testing methods since 1984. He organizes the Norwegian Special Interest Group in Software Testing for Western Norway. His work emphasis is on consulting, teaching, and coaching test process improvement and test design techniques, as well as reviews.


Andreas Spillner · Tilo Linz · Hans Schaefer

Software Testing Foundations
A Study Guide for the Certified Tester Exam

3rd Edition

Foundation Level ISTQB compliant


Andreas Spillner: [email protected]
Tilo Linz: [email protected]
Hans Schaefer: [email protected]

Editor: Dr. Michael Barabas
Copyeditor: Judy Flynn
Layout and Type: Birgit Bäuerlein/Nadine Thiele
Cover Design: Helmut Kraus, www.exclam.de
Printer: Courier Corporation
Printed in the USA
ISBN: 978-1-933952-78-9
3rd Edition © 2011 by Rocky Nook Inc.
26 West Mission Street Ste 3, Santa Barbara, CA 93101
www.rockynook.com

This 3rd English edition conforms to the 4th German edition, "Basiswissen Softwaretest – Aus- und Weiterbildung zum Certified Tester – Foundation Level nach ISTQB-Standard" (dpunkt.verlag GmbH, ISBN 3-89864-358-1), which was published in May 2010.

Library of Congress Cataloging-in-Publication Data
Spillner, Andreas.
Software testing foundations : a study guide for the certified tester exam / Andreas Spillner, Tilo Linz, Hans Schaefer. -- 3rd ed.
p. cm.
ISBN 978-1-933952-78-9 (soft cover : alk. paper)
1. Computer software--Testing. 2. Computer software--Verification. 3. Computer software--Evaluation. I. Linz, Tilo. II. Schaefer, Hans, 1952- III. Title.
QA76.76.T48S66 2011
005.1'4--dc22
2010043374

Distributed by O'Reilly Media, 1005 Gravenstein Highway North, Sebastopol, CA 95472

All product names and services identified throughout this book are trademarks or registered trademarks of their respective companies. They are used throughout this book in editorial fashion only and for the benefit of such companies. No such uses, or the use of any trade name, is intended to convey endorsement or other affiliation with the book.

No part of the material protected by this copyright notice may be reproduced or utilized in any form, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the copyright owner.


Foreword

We really do mean "Foundations"
The International Software Testing Qualifications Board (ISTQB) was officially founded in Edinburgh in November 2002. A lot can happen in the space of eight years in the IT world: new developments find their way into everyday use, and new versions of established processes become available. In spite of this constant change, information technology is built on fundamental know-how that generally remains constant. From the outset, we have always taken the "Foundations" part of our title seriously, and we avoid addressing aspects of IT that are not yet proven. Specialty skills such as web application or embedded system testing are not IT basics. Where necessary, we reference current literature on specialist topics.

Knowledge is the cornerstone of IT
The major advantage of true grass-roots knowledge is that it can be applied to many different disciplines. The ISTQB Foundation Level certification is in worldwide demand, as is clearly demonstrated by the strength of the ISTQB's membership, which has grown continually from a handful of members in 2002 and today counts 47 Testing Boards around the world. A current list of the individual country websites can be found on the ISTQB Worldwide tab at www.istqb.org. It is no exaggeration to say that the ISTQB and its Certified Testers are a globally recognized force that contributes significantly to worldwide software testing dependability.

Thanks are due …
This success has only been possible due to the outstanding cooperation between all those involved. We would like to thank everyone at ISTQB, without whom the Certified Tester scheme couldn't have achieved the level of global acceptance it enjoys today.

What's new in this edition?
So why are we publishing a new edition of this book? We have corrected a number of errors and inconsistencies, and we would like to thank the readers who drew our attention to some of these. We have also increased the precision of some definitions and updated others to match the latest version of the ISTQB glossary. This version of the book also conforms to the amendments and additions that have been made to the ISTQB's 2010 syllabus, and we have updated the bibliography to include new reference books and standards. We have updated the quoted URLs where necessary.

Our new website
Additionally, we have launched a new website at http://www.softwaretest-knowledge.net/, where we will regularly post any changes that are made to the syllabus or the glossary after this book goes to print. The website will also include any corrections that we may have to make to the book text.

Advanced Level books
Our web pages include references to books that cover the Advanced Level certification syllabus.

We wish our readers every success in turning the theory contained in this book into practice. If you are using the book to prepare for the Certified Tester Foundation Level examination, we wish you all the best on the big day.

Andreas Spillner, Tilo Linz, and Hans Schaefer
Bremen, Möhrendorf, and Valestrandsfossen, December 2010


Contents

1  Introduction
2  Fundamentals of Testing
   2.1  Terms and Motivation
        2.1.1  Error and Bug Terminology
        2.1.2  Testing Terms
        2.1.3  Software Quality
        2.1.4  Test Effort
   2.2  The Fundamental Test Process
        2.2.1  Test Planning and Control
        2.2.2  Test Analysis and Design
        2.2.3  Test Implementation and Execution
        2.2.4  Evaluation of Exit Criteria and Reporting
        2.2.5  Test Closure Activities
   2.3  The Psychology of Testing
   2.4  General Principles of Testing
   2.5  Testing Ethics
   2.6  Summary
3  Testing in the Software Lifecycle
   3.1  The General V-Model
   3.2  Component Test
        3.2.1  Explanation of Terms
        3.2.2  Test Basis and Test Objects
        3.2.3  Test Environment
        3.2.4  Test Objectives
        3.2.5  Test Strategy
   3.3  Integration Test
        3.3.1  Explanation of Terms
        3.3.2  Test Basis and Test Objects
        3.3.3  Test Environment
        3.3.4  Test Objectives
        3.3.5  Integration Strategies
   3.4  System Test
        3.4.1  Explanation of Terms
        3.4.2  Test Basis, Test Objects, and Test Environment
        3.4.3  Test Objectives
        3.4.4  Problems in System Test Practice
   3.5  Acceptance Test
        3.5.1  Testing for Acceptance According to the Contract
        3.5.2  Testing for User Acceptance
        3.5.3  Operational (Acceptance) Testing
        3.5.4  Field Testing
   3.6  Testing New Product Versions
        3.6.1  Software Maintenance
        3.6.2  Release Development
        3.6.3  Testing in Incremental Development
   3.7  Generic Types of Testing
        3.7.1  Functional Testing
        3.7.2  Nonfunctional Testing
        3.7.3  Testing of Software Structure
        3.7.4  Testing Related to Changes and Regression Testing
   3.8  Summary
4  Static Testing
   4.1  Structured Group Examinations
        4.1.1  Foundations
        4.1.2  Reviews
        4.1.3  The General Process
        4.1.4  Roles and Responsibilities
        4.1.5  Types of Reviews
   4.2  Static Analysis
        4.2.1  The Compiler as Static Analysis Tool
        4.2.2  Examination of Compliance to Conventions and Standards
        4.2.3  Data Flow Analysis
        4.2.4  Control Flow Analysis
        4.2.5  Determining Metrics
   4.3  Summary
5  Dynamic Analysis – Test Design Techniques
   5.1  Black Box Testing Techniques
        5.1.1  Equivalence Class Partitioning
        5.1.2  Boundary Value Analysis
        5.1.3  State Transition Testing
        5.1.4  Cause-Effect Graphing and Decision Table Technique
        5.1.5  Use Case Testing
        5.1.6  General Discussion of the Black Box Technique
   5.2  White Box Testing Techniques
        5.2.1  Statement Testing and Coverage
        5.2.2  Decision/Branch Testing and Coverage
        5.2.3  Test of Conditions
        5.2.4  Further White Box Techniques
        5.2.5  General Discussion of the White Box Technique
        5.2.6  Instrumentation and Tool Support
   5.3  Intuitive and Experience Based Test Case Determination
   5.4  Summary
6  Test Management
   6.1  Test Organization
        6.1.1  Test Teams
        6.1.2  Tasks and Qualifications
   6.2  Test Planning
        6.2.1  Quality Assurance Plan
        6.2.2  Test Plan
        6.2.3  Prioritizing Tests
        6.2.4  Test Entry and Exit Criteria
   6.3  Cost and Economy Aspects
        6.3.1  Costs of Defects
        6.3.2  Costs of Testing
        6.3.3  Test Effort Estimation
   6.4  Definition of Test Strategy and Test Approach
        6.4.1  Preventative vs. Reactive Approach
        6.4.2  Analytical vs. Heuristic Approach
        6.4.3  Testing and Risk
   6.5  Test Activity Management
        6.5.1  Test Cycle Planning
        6.5.2  Test Cycle Monitoring
        6.5.3  Test Cycle Control
   6.6  Incident Management
        6.6.1  Test Log
        6.6.2  Incident Reporting
        6.6.3  Incident Classification
        6.6.4  Incident Status
   6.7  Requirements to Configuration Management
   6.8  Relevant Standards
   6.9  Summary
7  Test Tools
   7.1  Types of Test Tools
        7.1.1  Understanding the Meaning and Purpose of Tool Support for Testing
        7.1.2  Tools for Test Management and Control
        7.1.3  Tools for Test Specification
        7.1.4  Tools for Static Testing
        7.1.5  Tools for Dynamic Test
        7.1.6  Tool Support for Performance and Monitoring
   7.2  Selection and Introduction of Test Tools
        7.2.1  Cost Effectiveness of Tool Introduction
        7.2.2  Tool Selection
        7.2.3  Tool Introduction
   7.3  Summary
Appendix
   A  Test Plan According to IEEE Standard 829-1998
   B  Important Information on the Curriculum and on the Certified Tester Exam
   C  Exercises
Glossary
Literature
Index

1  Introduction

Software has become enormously widespread in recent years. There are few machines or facilities left today that are not controlled by software or that at least include software. In automobiles, for example, more and more functions, from the engine to the transmission and the brakes, are controlled by microprocessors and their software. Thus, software is crucial to the functioning of devices and industry.

High dependence on the correct functioning of the software
Likewise, the smooth operation of an enterprise or organization depends largely on the reliability of the software systems used to support its business processes or particular tasks. The speed at which an insurance company is able to introduce a new product, or even a new rate, most likely depends on how quickly its IT systems can be adjusted or extended. In both sectors (embedded and commercial software systems), the quality of software has become the most important factor in determining the success of products or enterprises. Many enterprises have recognized this dependence on software and strive for improved quality of their software systems and their software engineering (or development) processes. One way to achieve this goal is the systematic evaluation and testing of the developed software. In some cases, appropriate testing procedures have found their way into the daily practice of software development. However, in many sectors there remains a significant need for education with regard to evaluation and testing procedures.

Basic knowledge for structured evaluation and testing
With this book, we offer basic knowledge that helps to achieve structured and systematic evaluation and testing. Implementing these evaluation and testing procedures should contribute to improved quality of the software being developed. This book does not presume previous knowledge of software quality assurance. It is designed as a textbook and is meant for self-study. A single, continuous case example is included that helps explain each topic and its practical application.


We want to appeal to the software testers in software and industry enterprises who strive for a well-founded, basic knowledge of the principles behind software testing. We also address programmers and developers who are already performing testing tasks or will do so in the future. The book will help project managers and team leaders to improve the effectiveness and efficiency of software tests. Even those in related disciplines close to IT jobs, as well as other employees involved in the acceptance, introduction, and further development of IT applications, will find this book helpful for their daily tasks.

Evaluation and testing procedures have a high cost in practice; expenditures in this sector are estimated to be 25% to 50% of software development time and cost [Koomen 99]. Yet there are few universities, colleges, or vocational schools in the field of computer science that offer courses that teach this topic intensively. This book is of value to both students and teachers, as it provides the material for a basic course.

Certification program for software testers
Lifelong learning is indispensable, especially in the IT industry, and many companies therefore offer further education to their employees. The general recognition of a course certificate is, however, only possible if the contents of the course and the examination are defined and checked by an independent body. In 1998, the Information Systems Examinations Board [URL: ISEB] of the British Computer Society [URL: BCS] started such a certification scheme.

International initiative
Following the British example, other countries took up these activities and established country-specific Testing Boards in order to make it possible to run training and examinations in the language of the respective country. These national boards cooperate in the International Software Testing Qualifications Board [URL: ISTQB]. A current list of all ISTQB members can be found at [URL: ISTQB-Members].

National Testing Boards
The International Software Testing Qualifications Board coordinates the national initiatives and ensures the uniformity and comparability of the teaching and exam contents in the countries involved. The national boards are responsible for issuing and maintaining curricula in their country's language and for organizing and executing examinations in their countries. They assess the seminars offered in their countries according to defined criteria and accredit training providers. The testing boards thus guarantee a high standard of quality for the seminars. After passing an exam, seminar participants receive an internationally recognized qualification certificate.


The ISTQB Certified Tester qualification scheme has three steps [URL: ISTQB]. The basics are described in the curriculum (syllabus) for the Foundation Level. Building on this is the Advanced Level certificate, showing a deeper knowledge of testing. Expert Level qualifications are available or planned for the following subjects: Test Process Improvement, Test Management, Test Automation, and Security Testing.

The content of this book corresponds to the requirements of the ISTQB Foundation Certificate. The knowledge needed for the exam can be acquired by self-study. The book can also be used to extend knowledge after, or parallel to, participation in a course.

Chapter overview
The topics of this book, and thus the rough structure of the course contents for the Foundation Certificate, are described below.

Basics
In chapter 2, the basics of software testing are discussed. In addition to the motivation for testing, the chapter explains when to test, with which goals, and how intensively. The concept of a fundamental test process is introduced. The chapter also deals with the psychological difficulties of testing one's own software and with blindness to one's own errors.

Testing in the software life cycle
Chapter 3 discusses which test activities should be done during the software development process and how they relate to other development tasks. In addition to the different ➞test levels and ➞test phases, it deals with the differences between functional and nonfunctional tests. The economy of testing, how to test changes, and testing during maintenance are also discussed.

Static test
Chapter 4 discusses static methods, i.e., procedures in which the test object is analyzed but not executed. Reviews and static analyses are already used by many enterprises with positive results. The various methods and techniques are described.

Dynamic test
Chapter 5 deals with testing in the narrower sense. The classification of dynamic testing into black box and white box techniques is discussed, and various test techniques are explained in detail with the help of a continuous example. The end of the chapter illustrates the reasonable use of exploratory and intuitive testing, which may be applied in addition to the other techniques.

Test management
Chapter 6 shows which aspects should be considered in test management, what systematic incident handling looks like, and some basics about establishing adequate ➞configuration management.

Test tools
Testing software without the support of appropriate tools is very labor and time intensive.


Notes to the subject matter and for the exam are in the appendix


The seventh and last chapter of this book therefore introduces different classes of tools for supporting testing and gives hints for tool selection and implementation. The appendix offers notes and additional information on the subject matter and on the Certified Tester exam. Further sections of the appendix contain explanations of the test plan according to [IEEE 829], exemplary exercises, a glossary, and the list of literature.

Technical terms are marked with an appropriate hint (➞) when they appear for the first time in the text; the hint points to a detailed definition in the glossary. Text passages that go beyond the material of the syllabus are marked as excursions.

2  Fundamentals of Testing

This chapter explains some basic facts of software testing, covering everything that is required for understanding the following chapters. Important terms and essential vocabulary are explained using an example application. This example appears frequently throughout the book to illustrate and clarify the subject matter, and it is introduced in the following section. The fundamental test process and the different activities of testing are illustrated, psychological problems of testing are discussed, and finally the ISTQB Code of Tester Ethics is presented.

The procedures for testing software presented in this book are mainly illustrated by one general example. The fundamental scenario is described as follows:

A car manufacturer develops a new electronic sales support system called VirtualShowRoom (VSR). The final version of this software system is supposed to be installed at every car dealer worldwide. Any customer who is interested in buying a new car will be able to configure their favorite model (model, type, color, extras, etc.), with or without the guidance of a salesperson. The system shows possible models and combinations of extra equipment and instantly calculates the accurate price of the configured car. This functionality will be implemented by a subsystem called DreamCar. When customers have made up their mind, they will be able to calculate the most suitable payment plan (EasyFinance) as well as place the order online (JustInTime). Of course, they will also have the option to sign up for the appropriate insurance (NoRisk). Personal information and contract data about the customer are managed by the ContractBase subsystem. Figure 2–1 shows the general architecture of this software system.

Every subsystem will be designed and developed by a separate developer team. Altogether, about 50 developers and additional employees from the respective user departments are involved in working on this project. External software companies will also participate.

Case study “VirtualShowRoom” – VSR


Before the VSR-System is shipped, it must be tested thoroughly. The project members who have been assigned to test the software apply different techniques and procedures. This book contains the basic knowledge for those techniques and procedures of software testing.

[Figure 2–1: Architecture of the VSR-System. The VirtualShowRoom (VSR) system consists of the subsystems DreamCar, ContractBase, NoRisk, EasyFinance, and JustInTime, connected to a host system. The subsystems exchange car data (1), contract data (2), and order data (3).]

2.1  Terms and Motivation

Requirements

Software is immaterial

During the construction of an industrial product, the parts and the final product are usually tested to check whether they fulfill the given ➞requirements. It must be determined whether the product solves the required task. There may be differences between the requirements and the implemented product. If the product exhibits problems, necessary corrections must be made in the production process and/or in the construction.

What generally applies to the production of industrial products also applies to the production and development of software. However, testing (or evaluating) partial products and the final product is more difficult, because a software product is not a physical product. A direct examination is not possible. The only way to examine the product directly is by reading (reviewing) the development documents and the code very carefully. The dynamic behavior of the software, however, cannot be checked this way. It must be examined through ➞testing, where the tested software is executed on a computer and its behavior is compared to the given requirements. Thus, the testing of software is a very important and difficult task in the software development process. It contributes to reducing the ➞risk of using the software, because bugs can be found by testing.


Testing and its documentation are sometimes also required by contract or by legal or industrial standards.

Example
To identify and repair possible faults before delivery, the VSR-System from the case example must be tested intensively before it is used. If, for example, the system executes order transactions incorrectly, this could result in frustration for the customer and serious financial or image loss for the dealer and the car manufacturer. In any case, bugs that are not found pose a high risk when the system is used.

2.1.1  Error and Bug Terminology

What is an error, failure, or fault?
When does the system behave in a way that does not conform to the requirements? A situation can be classified as incorrect only after we know what the expected correct situation is supposed to look like. Thus, a ➞failure is a nonfulfillment of a given requirement: a discrepancy between the ➞actual result or behavior (identified while executing the test) and the ➞expected result or behavior (defined in the specifications or requirements).

Failures
A failure is present if a warrantable (user) expectation is not fulfilled adequately. Examples of failures are products that are too hard to use or too slow but that still fulfill the functional requirements.

Faults
In contrast to physical systems, software failures do not occur because of aging or abrasion. They occur because of ➞faults in the software. Every ➞fault (or ➞defect or ➞bug) has been present in the software since it was developed or changed. Yet the fault materializes only when the software is executed, becoming visible as a failure. To describe the event when a user experiences a problem, [IEEE 610.12] uses the term failure; other terms like "problem" or "incident" are also often used. During testing or use of the software, the failure becomes visible to the ➞tester or user; for example, an output is wrong or the application crashes.

Causal chain
We must differentiate between the occurrence of a failure and its cause. A failure has its roots in a fault in the software. This fault is also called a defect or internal error; programmer slang for it is a "bug". An example might be wrongly programmed or forgotten code in the application.

Defect masking
It is possible that a fault is hidden by one or more other faults in different parts of the application (➞defect masking). In that case, a failure only occurs after the masking defects have been corrected. This demonstrates that corrections can have side effects.
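To make the distinction concrete, the following small sketch (hypothetical code, not taken from the book's case study) contains a fault that was built in during development but materializes as a failure only for certain inputs: the leap-year rule for century years is missing, so the function returns a wrong result for years such as 1900.

```python
def is_leap_year(year: int) -> bool:
    # Fault: the Gregorian century rules were forgotten during development.
    # Correct would be: divisible by 4, except century years,
    # unless the year is also divisible by 400.
    return year % 4 == 0


# Three test cases: input value plus expected result.
test_cases = [(2008, True), (2023, False), (1900, False)]

for year, expected in test_cases:
    actual = is_leap_year(year)
    verdict = "pass" if actual == expected else "FAIL"
    print(f"is_leap_year({year}) -> actual={actual}, expected={expected}: {verdict}")

# Only the last test case makes the fault visible as a failure;
# for the other inputs the defect stays hidden in the code.
```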


One problem is that a fault can cause none, one, or many failures for any number of users, and the fault and the corresponding failure may lie arbitrarily far apart. A particularly dangerous example is a small corruption of stored data, which may be found a long time after it first occurred. The cause of a fault or defect is an error or mistake by a person: for example, wrong programming by the developer, a misunderstanding of the commands in a programming language, or forgetting to implement a requirement. Faults may even be caused by environmental conditions, like radiation or magnetism, that introduce hardware problems; this last factor, however, is not further discussed in this book. More detailed descriptions of the terms within the domain of testing are given in the following paragraphs.

2.1.2  Testing Terms

Testing is not debugging
A test is a sample examination

To be able to correct a defect or bug, it must be localized in the software. Initially, we only know the effect of a defect but not its precise location in the software. Localizing and correcting defects is the job of the software developer and is called ➞debugging. Repairing a defect generally increases the ➞quality of the product, provided that no new defects are introduced. Debugging is often equated with testing, but testing and debugging are totally different activities: debugging is the task of localizing and correcting faults, whereas the goal of testing is the (more or less systematic) detection of failures (which indicate the presence of defects).

Every execution of a test object (even using more or less random samples), in order to examine it, is testing. The conditions for the test must be defined, and the actual and the expected behavior of the test object must be compared [1]. Testing software has different purposes:

■ Executing a program in order to find failures
■ Executing a program in order to measure quality
■ Executing a program in order to provide confidence [2]
■ Analyzing a program or its documentation in order to prevent defects

[1] It is not possible to prove correct implementation of the requirements. We can only reduce the risk of serious bugs remaining through testing.
[2] If a thorough test finds few or no failures, confidence in the product will increase.
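As a minimal illustration of comparing actual and expected behavior, the sketch below exercises a price calculation with fixed inputs and checks the result against a precalculated expected value. The function calculate_price, its parameters, and the pricing rule are assumptions made up for this illustration only; they are not the book's specification of the VSR-System.

```python
def calculate_price(base_price: float, extras: float, discount_rate: float) -> float:
    # Assumed pricing rule for this sketch only:
    # total = (base price + extras), reduced by the discount rate.
    return round((base_price + extras) * (1.0 - discount_rate), 2)


def test_price_with_discount() -> None:
    # A test needs defined conditions, concrete inputs, and an expected result.
    expected = 22_800.00
    actual = calculate_price(base_price=20_000.00, extras=4_000.00, discount_rate=0.05)
    # Compare actual and expected behavior; a mismatch reveals a failure.
    assert abs(actual - expected) < 0.01, f"expected {expected}, got {actual}"


test_price_with_discount()
print("test case passed: actual result matches expected result")
```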


Tests can also be performed to acquire information about the test object, which is then used as the basis for decision-making, for example, whether one part of a system is appropriate for integration with other parts of the system.

The whole process of systematically executing programs to demonstrate the correct implementation of the requirements, to increase confidence, and to detect failures is called test. In addition, a test includes static methods, i.e., static analysis of software products using tools as well as document reviews (see chapter 4). Besides the execution of the test object with ➞test data, the planning, design, implementation, and analysis of the test (➞test management) are also part of the ➞test process.

A ➞test run or ➞test suite consists of the execution of one or more ➞test cases. A test case contains defined test conditions (mostly the requirements for execution), the inputs, and the expected outputs or the expected behavior of the test object. A test case should have a high probability of revealing previously unknown faults [Myers 79]. Several test cases can often be combined to create ➞test scenarios, whereby the result of one test case is used as the starting point for the next test case. For example, a test scenario for a database application can contain one test case that writes a date into the database, another test case that manipulates that date, and a third test case that reads the manipulated date out of the database and deletes it. All three test cases are then executed, one after another, in a row.

No complex software system is bug free
At present, there is no known bug-free software system, and there will not be one in the near future if the system has nontrivial complexity. Often the reason for a fault is that certain exceptional cases were not considered during development or during testing of the software. Such faults could be an incorrectly calculated leap year, an overlooked boundary condition for timely response, or resources that were not taken into account. On the other hand, there are many software systems in many different fields that operate reliably, day in and day out.

Absolute correctness cannot be achieved with testing
Even if all the executed test cases do not reveal any further failures, we cannot conclude with complete certainty (except for very small programs) that no further faults exist.

Excursion: Naming of tests
There are many confusing terms for different kinds of software testing tasks. Some will be explained later within the description of the different ➞test levels (see chapter 3). This excursion explains some of the different terms. It is helpful to differentiate these categories of testing terms:


1. ➞Test objective or test type: The test is named by its purpose (e.g., ➞load test).
2. ➞Test technique: The test is named after the technique used for the specification or execution of the test (e.g., ➞business-process-based test or ➞boundary value test).
3. Test object: The test is named after the kind of test object to be tested (e.g., GUI test or DB test (database test)).
4. Testing level: The test is named after the level or phase of the underlying life cycle model (e.g., ➞system test).
5. Test person: The test is named after the group of persons executing the tests (e.g., developer test, ➞user acceptance test).
6. Test extent: The test is named after the level of extent (e.g., partial ➞regression test).

Thus, not every term means a new or different kind of testing. In fact, only one of these aspects is pushed to the fore; it depends on the perspective we use when we look at the actual test.
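Returning to the test scenario described above, where the result of one test case is the starting point of the next, a minimal sketch of the three chained database test cases could look as follows. The in-memory dictionary stands in for a real database and is, like the key and values used, purely an assumption for illustration.

```python
# A trivial in-memory stand-in for the database used in the scenario.
database: dict[str, str] = {}


def test_case_1_write_date() -> str:
    """Test case 1: write a date into the database; expected: the record exists."""
    database["delivery_date"] = "2011-03-01"
    assert "delivery_date" in database
    return "delivery_date"


def test_case_2_change_date(key: str) -> str:
    """Test case 2: manipulate the stored date; expected: the new value is stored."""
    database[key] = "2011-04-15"
    assert database[key] == "2011-04-15"
    return key


def test_case_3_read_and_delete(key: str) -> None:
    """Test case 3: read the manipulated date and delete it; expected: record is gone."""
    assert database.pop(key) == "2011-04-15"
    assert key not in database


# The scenario executes the test cases one after another, in a row;
# each test case starts from the result of its predecessor.
key = test_case_1_write_date()
key = test_case_2_change_date(key)
test_case_3_read_and_delete(key)
print("test scenario finished without any failure")
```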

2.1.3  Software Quality

Testing of software contributes to improving ➞software quality. This is done by identifying defects and subsequently correcting them through debugging. But testing is also a measurement of software quality: if the test cases are a reasonable sample of software use, the quality experienced by the user should not be too different from the quality experienced during testing.

Software quality, however, entails more than just the elimination of failures that occurred during testing. According to ISO/IEC Standard 9126-1 [ISO 9126], the following factors belong to software quality: ➞functionality, ➞reliability, usability, ➞efficiency, ➞maintainability, and portability. All these factors, or ➞quality characteristics (also: ➞quality attributes), have to be considered while testing in order to judge the overall quality of a software product. It should be defined in advance which quality level the test object is supposed to show for each characteristic, and the achievement of these requirements must then be examined with capable tests.

Example: VirtualShowRoom

In the example of the VSR-System, the customer must define which of the quality characteristics are most important. Those have to be implemented and examined in the system. The characteristics of functionality, reliability, and usability are very important for the car manufacturer. The system must reliably provide the required functionality. Beyond that, it must be easy to use so that the different car dealers can use it without any problems in everyday life. These quality characteristics should be especially well tested in the product.

We discuss the individual quality characteristics of ISO/IEC Standard 9126-1 [ISO 9126] in the following section.

Functionality
Functionality comprises all characteristics that describe the required capabilities of the system. The capabilities are usually described by a specific input/output behavior and/or an appropriate reaction to an input. The goal of the test is to prove that every single required capability was implemented in the specified way. According to ISO/IEC Standard 9126-1, the functionality characteristic contains the subcharacteristics adequacy, ➞interoperability, correctness, and security. An appropriate solution is achieved if all required capabilities exist in the system and work adequately. It is therefore important to pay attention to, and to examine during testing, the correct or specified outputs or effects that the system generates. Software systems must interoperate with other systems, or at least with the operating system (unless the operating system is the test object itself). Interoperability describes the cooperation between the system to be tested and other systems; trouble with this cooperation should be detected by the test. One area of functionality is the fulfillment of application-specific standards, agreements, legal requirements, and similar regulations.

Security
Many applications attach high importance to the aspects of access security and data security. It must be proven that unauthorized access to applications and data, whether accidental or intentional, is prevented.

Reliability
Reliability describes the ability of a system to keep functioning under specific use over a specific period. In the standard, this quality characteristic is split into maturity, ➞fault tolerance, and recoverability. Maturity refers to how often a failure of the software occurs as a result of defects in the software. Fault tolerance is the capability of the software product to maintain a specified level of performance, or to recover from faults, in cases of trouble such as software faults, environment failures, or incorrect input. Recoverability is the capability of the software product to reestablish a specified level of performance and recover the data directly affected in case of failure. Following a failure, a software product will sometimes be "down" for a certain period of time. Recoverability describes the length of time it takes to recover, the ease of recovery, and the amount of work required to recover. All this should be part of the test.

Usability
Usability is very important for interactive software systems. Users will not accept a system that is hard to use. How much effort is required for the different user groups to use the software? Understandability, ease of learning, operability, and attractiveness, as well as compliance to standards, conventions, style guides, or user interface regulations, are partial aspects of usability. These quality characteristics are examined in ➞nonfunctional tests (see chapter 3).

Efficiency
The test for efficiency measures the required time and the consumption of resources for the fulfillment of tasks. Resources may include other software products, the software and hardware configuration of the system, and materials (e.g., print paper, network, and storage).

Changeability and portability
Software systems are often used over a long period on varied platforms (operating system and hardware). Therefore, the last two quality criteria are very important: maintainability and portability.

Maintainability
Subcharacteristics of maintainability are analyzability, changeability, stability against side effects, testability, and compliance to standards.

Portability
Adaptability, ease of installation, conformity, and interchangeability have to be considered for the portability of software systems. Many of the aspects of maintainability and portability can only be examined by ➞static analysis (see section 4.2).

Prioritize quality characteristics
A software system cannot fulfill every quality characteristic equally well. Sometimes fulfilling one characteristic conflicts with another: a highly efficient software system, for example, can become hard to port, because the developers usually exploit special characteristics (or features) of the chosen platform to improve efficiency, which in turn affects portability negatively. Quality characteristics must therefore be prioritized. The quality specification is used to determine the test intensity for the different quality characteristics. The next section discusses the amount of work involved in these sorts of tests.

2.1.4  Test Effort

Complete testing is not possible
Testing cannot prove the absence of faults. In order to do this, a test would need to execute a program in every possible way with every possible data