1

TESTABILITY: FACTORS AND STRATEGY Robert V. Binder [email protected] Google Test Automation Conference Hyderabad October 28, 2010 © 2010, Robert V. Binder

2

Overview

• Why testability matters
• White box testability
• Black box testability
• Test automation
• Strategy
• Q&A

3

WHY TESTABILITY MATTERS

4

Testability Economics

Testability defines the limits on producing complex systems with an acceptable risk of costly or dangerous defects.

• Assumptions
  • Sooner is better
  • Escapes are bad
  • Fewer tests means more escapes
  • Fixed budget: how to optimize?
• Efficiency: average tests per unit of effort
• Effectiveness: average probability of killing a bug per unit of effort
• Higher testability: more and better tests at the same cost
• Lower testability: fewer, weaker tests at the same cost

5

What Makes an SUT Testable?

• Controllability
  • What do we have to do to run a test case?
  • How hard (expensive) is it to do this?
  • Does the SUT make it impractical to run some kinds of tests?
  • Given a testing goal, do we know enough to produce an adequate test suite?
  • How much tooling can we afford?
• Observability
  • What do we have to do to determine test pass/fail?
  • How hard (expensive) is it to do this?
  • Does the SUT make it impractical to fully determine test results?
  • Do we know enough to decide pass/fail/DNF?

6

Testability Factors

[Detailed fishbone diagram relating dozens of testability factors (oracle, specification, requirements, traceability, configuration management, built-in test, assertions, test tools, test suite artifacts, staff and process capability, and others) to overall Testability; the major branches are summarized on the next slide.]

7

Testability Factors

[Summary diagram: Representation, Implementation, Built-in Test, Test Suite, Test Tools, and Test Process all feed into Testability.]

8

Examples

• GUIs
  • Controllability: impossible without an abstract widget set/get, brittle with one; latency; dynamic widgets, specialized widgets.
  • Observability: cursor for structured content; noise and nondeterminism in non-text output; proprietary lockouts.
• OS exceptions
  • Controllability: 100s of OS exceptions to catch, hard to trigger.
  • Observability: silent failures.
• Objective-C
  • Controllability: test drivers can't anticipate objects defined on the fly.
  • Observability: instrumenting objects on the fly.
• Mocks with DBMS
  • Controllability: too rich to mock.
  • Observability: the DBMS has better logging.
• Multi-tier CORBA
  • Controllability: getting all DOs into a desired state.
  • Observability: tracing message propagation.
• Cellular base station
  • Controllability: RF loading/propagation for a varying base station population.
  • Observability: proprietary lock-outs; never "off-line".

9

WHITE BOX TESTABILITY

10

White Box Testability

• Diminished by: complexity, non-deterministic dependencies (NDDs)
• Improved by: PCOs, built-in test, state helpers, good structure

11

The Anatomy of a Successful Test

• To reveal a bug, a test must
  • Reach the buggy code
  • Trigger the bug
  • Propagate the incorrect result
• The incorrect result must be observed
• The incorrect result must be recognized as such

int scale(int j) {
    j = j - 1;      // should be j = j + 1
    j = j / 30000;
    return j;
}

Out of 65,536 possible values for j, only six will produce an incorrect result: -30001, -30000, -1, 0, 29999, and 30000.
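To make the reach/trigger/propagate/observe chain concrete, here is a minimal test sketch (not from the original slides; the driver and expected values are illustrative and assume the corrected j = j + 1 behavior):

#include <cstdio>

int scale(int j) {      // buggy version from the slide
    j = j - 1;          // should be j = j + 1
    j = j / 30000;
    return j;
}

int main() {
    // This input reaches and triggers the fault, but the integer division
    // masks it: the buggy and correct versions both return 0, so the test
    // passes and nothing propagates to the output.
    std::printf("scale(100):   got %d, expected 0 -> %s\n",
                scale(100), scale(100) == 0 ? "PASS" : "FAIL");

    // One of the six revealing inputs: correct code returns 30000/30000 = 1,
    // the buggy code returns 29998/30000 = 0. The wrong value propagates to
    // the return value, the comparator observes it, and the test fails.
    std::printf("scale(29999): got %d, expected 1 -> %s\n",
                scale(29999), scale(29999) == 1 ? "PASS" : "FAIL");
    return 0;
}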

12

What Diminishes Testability?

• Non-deterministic dependencies
  • Race conditions
  • Message latency
  • Threading
  • CRUD on shared, unprotected data

13

What Diminishes Testability?

• Complexity
  • Essential
  • Accidental

15

What Improves Testability?

• Points of Control and Observation (PCOs)
• State test helpers
• Built-in Test
• Well-structured code

16

PCOs

• Point of Control or Observation
  • Abstraction of any kind of interface (see the sketch below)
  • See TTCN-3 for a complete discussion
• What does a tester/test harness have to do to activate SUT components and aspects?
  • Components are easier
  • Aspects are more interesting, but typically not directly controllable
• What does a tester/test harness have to do to inspect SUT state or traces to evaluate a test?
  • Traces are easier, but often not sufficient, or noisy
  • Embedded state observers are effective, but expensive or polluting
  • Aspects are often critical, but typically not directly observable
• Design for testability: determine requirements for aspect-oriented PCOs
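As a rough sketch (hypothetical names, not TTCN-3 syntax and not from the slides), a PCO can be modeled as an abstract interface the harness uses to drive and observe one SUT interface:

#include <string>

// Hypothetical point of control and observation: the harness programs against
// this abstraction, while a concrete subclass wraps a message queue, a GUI
// widget layer, or an instrumented internal interface of the SUT.
struct Pco {
    virtual ~Pco() = default;
    virtual void send(const std::string& stimulus) = 0;   // control: inject an event
    virtual bool observe(std::string& captured) = 0;      // observe: read a response or trace
};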

17

State Test Helpers

• Set state
• Get current state
• Logical State Invariant Functions (LSIFs) (see the sketch below)
• Reset
• All actions controllable
• All events observable
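A minimal sketch of state test helpers on a TwoPlayerGame-like class (the helper names, point threshold, and data members are illustrative assumptions, not the original implementation):

class TwoPlayerGame {
public:
    enum class State { GameStarted, Player1Served, Player2Served,
                       Player1Won, Player2Won };

    // ... production interface: p1_Start( ), p1_WinsVolley( ), etc. ...

    // Test helpers: put the object directly into any state, report the
    // abstract state, and return to a known baseline without replaying
    // a whole game through the public interface.
    void  test_setState(State s, int p1, int p2) { state = s; p1Points = p1; p2Points = p2; }
    State test_getState() const                  { return state; }
    void  test_reset()                           { test_setState(State::GameStarted, 0, 0); }

    // Logical State Invariant Function: true iff the concrete data members
    // are consistent with the claimed abstract state (assumes 21 points win).
    bool lsif_Player1Won() const { return state == State::Player1Won && p1Points == 21; }

private:
    State state    = State::GameStarted;
    int   p1Points = 0;
    int   p2Points = 0;
};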

18

Implementation and Test Models

[UML class diagrams and statecharts. TwoPlayerGame: +TwoPlayerGame( ), +p1_Start( ), +p1_WinsVolley( ), -p1_AddPoint( ), +p1_IsWinner( ), +p1_IsServer( ), +p1_Points( ), the corresponding p2_* operations, and +~( ). ThreePlayerGame extends TwoPlayerGame, adding +ThreePlayerGame( ), +p3_Start( ), +p3_WinsVolley( ), -p3_AddPoint( ), +p3_IsWinner( ), +p3_IsServer( ), +p3_Points( ), and +~( ). The statecharts model states alpha, Game Started, Player 1/2/3 Served, Player 1/2/3 Won, and omega, with transitions such as p1_WinsVolley( ) [this.p1_Score( ) < 20] / this.p1AddPoint( ) simulateVolley( ) and p1_WinsVolley( ) [this.p1_Score( ) == 20] / this.p1AddPoint( ).]

19

Test Plan and Test Size

• K events, N states
• With LSIFs: K × N tests
• Without LSIFs: K × N³ tests

Events for ThreePlayerGame:

1. ThreePlayerGame( )
2. p1_Start( )
3. p2_Start( )
4. p3_Start( )
5. p1_WinsVolley( )
6. p1_WinsVolley( ) [this.p1_Score( ) < 20]
7. p1_WinsVolley( ) [this.p1_Score( ) == 20]
8. p2_WinsVolley( )
9. p2_WinsVolley( ) [this.p2_Score( ) < 20]
10. p2_WinsVolley( ) [this.p2_Score( ) == 20]
11. p3_WinsVolley( )
12. p3_WinsVolley( ) [this.p3_Score( ) < 20]
13. p3_WinsVolley( ) [this.p3_Score( ) == 20]
14. p1_IsWinner( )
15. p2_IsWinner( )
16. p3_IsWinner( )
17. ~( )

[Diagram: the transition tree generated from the ThreePlayerGame statechart, covering alpha, Game Started, Player 1/2/3 Served, Player 1/2/3 Won, and omega.]
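For a rough sense of scale (illustrative arithmetic, not from the slides): with the K = 17 events above and the N = 7 named states of the ThreePlayerGame model, LSIFs keep the suite near 17 × 7 = 119 tests, whereas without them the bound grows to roughly 17 × 7³ = 5,831.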

20

Built-in Test

• Asserts
• Logging
• Design by Contract
• No Code Left Behind pattern
• Percolation pattern

21

No Code Left Behind

• Forces
  • How to have extensive BIT without code bloat?
  • How to have consistent, controllable logging, frequency of evaluation, and options for handling detection?
  • Define once, reuse globally?
• Solution (a sketch follows below)
  • The developer adds an invariant( ) function to each class
  • invariant( ) calls InvariantCheck( ), which evaluates the necessary state
  • InvariantCheck( ) carries the global settings
  • Selective run-time evaluation
  • const and inline idiom, so no object code is left in production builds
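A minimal sketch of the idea in C++ (the class names, reporting style, and NDEBUG guard are assumptions; the original relies on a const/inline idiom so release builds emit no object code):

#include <cstdio>

// One globally configured checker, defined once and reused by every class.
struct InvariantCheck {
    static bool enabled;                       // global setting: evaluate or skip
    static void report(const char* cls, bool ok) {
        if (enabled && !ok) std::fprintf(stderr, "invariant violated in %s\n", cls);
    }
};
bool InvariantCheck::enabled = false;          // turned on in test builds only

class Account {
public:
    void deposit(long cents) { balance += cents; invariant(); }
private:
    long balance = 0;

    // Each class contributes only its own invariant( ); when the guard is off,
    // an optimizing build inlines this away and leaves no code behind.
    inline void invariant() const {
#ifndef NDEBUG
        InvariantCheck::report("Account", balance >= 0);
#endif
    }
};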

22

Percolation Pattern

• Enforces Liskov substitutability
• Implement with No Code Left Behind (a sketch follows below)

[Class diagram: Base declares +Base( ), +~Base( ), +foo( ), +bar( ) and protected #invariant( ), #fooPre( ), #fooPost( ), #barPre( ), #barPost( ). Derived1 adds +fum( ) with #fumPre( )/#fumPost( ); Derived2 adds +fee( ) with #feePre( )/#feePost( ); both override the inherited contract functions.]
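A condensed sketch of how percolation might be coded (function bodies are placeholders; the point is the chaining rule, ORing preconditions and ANDing postconditions and invariants, so a subclass can only weaken the former and strengthen the latter):

#include <cassert>

class Base {
public:
    virtual ~Base() = default;
    void foo() {
        assert(fooPre());                   // check contract before the work
        // ... foo's real work ...
        assert(fooPost() && invariant());   // check contract after the work
    }
protected:
    virtual bool invariant() const { return true; }
    virtual bool fooPre()    const { return true; }
    virtual bool fooPost()   const { return true; }
};

class Derived1 : public Base {
protected:
    // Percolate: a derived precondition may only weaken the base's (logical OR);
    // a derived postcondition or invariant may only strengthen it (logical AND).
    bool fooPre()    const override { return Base::fooPre() || localFooPre(); }
    bool fooPost()   const override { return Base::fooPost() && localFooPost(); }
    bool invariant() const override { return Base::invariant() && localInvariant(); }
private:
    bool localFooPre()    const { return true; }
    bool localFooPost()   const { return true; }
    bool localInvariant() const { return true; }
};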

23

Well-Structured Code

• Many well-established principles
• Significant for testability:
  • No cyclic dependencies
  • Don't allow build dependencies to leak over structural and functional boundaries (levelization)
  • Partition classes and packages to minimize interface complexity

24

BLACK BOX TESTABILITY

25

Black Box Testability

• Diminished by: size, nodes, variants, weather
• Improved by: test model, oracle, automation

26

System Size

• How big is the essential system?
  • Feature points
  • Use cases
  • Singly invocable menu items
  • Commands/subcommands
• Computational strategy
  • Transaction processing
  • Simulation
  • Video games
• Storage
  • Number of tables/views

27

Network Extent

• How many independent nodes?
  • Client/server
  • Tiers
  • Peer to peer
• Minimum spanning tree: must have at least one of each online

[Figure: MS-TPSO, Transaction Processing System Overview]

28

Variants

• How many configuration options?
• How many platforms supported?
• How many localization variants?
• How many "editions"?
• Combinational coverage (see the arithmetic sketch below)
  • Try each option with every other at least once (pair-wise)
  • Pair-wise worst case: product of option sizes
  • May be reduced with good tools
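For a rough sense of the arithmetic (illustrative numbers, not from the slides): 20 options with 5 values each allow 5^20 ≈ 9.5 × 10^13 exhaustive combinations, while pair-wise coverage needs at least 5 × 5 = 25 configurations, and good covering-array tools typically get within a small multiple of that lower bound.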

29

Weather – Environmental

• The SUT has elements that cannot be practically established in the lab
  • Cellular base station
  • Expensive server farm
  • Competitor/aggressor capabilities
  • Internet: you can never step in the same river twice
• Test environment not feasible
  • Must use the production/field system for test
  • Cannot stress the production/field system

30

More is Less

• Other things being equal, a larger system is less testable
  • Same budget spread thinner
• Reducing system scope improves testability
• Try to partition into independent subsystems: additive versus multiplicative extent of the test suite (see the example below)
• For example:
  • 10,000 feature points
  • 6 network node types
  • 20 options, five variables each
  • Must test when financial markets are open
  • How big is that?

[Image: M66 galaxy, about 95,000 light years across; 1 pixel ≈ 400 light years; the Solar System ≈ 0.0025 LY.]
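To see why partitioning helps (illustrative arithmetic, not from the slides): two genuinely independent subsystems with 100 and 200 behaviors to cover need on the order of 100 + 200 = 300 tests, whereas if their behaviors interact they must be exercised in combination, on the order of 100 × 200 = 20,000 tests.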

32

Understanding Improves Testability

• Primary source of system knowledge
  • Documented, validated?
  • Intuited, guessed?
  • IEEE 830
• Test model
  • Different strokes
  • Test-ready or hints?
• Oracle
  • Computable or judgment?
  • Indeterminate?

33

TEST AUTOMATION

34

Automation Improves Testability

• Bigger systems need many more tests
  • Intellectual leverage
  • Huge coverage space
  • Repeatable
  • Scale up functional suites for load test
• What kind of automation?
  • Harness
    • Developer harness
    • System harness
  • Capture-replay
  • Model-based
    • Generation
    • Evaluation

35

Test Automation Envelope

[Chart: reliability (effectiveness, 1 to 5 "nines") plotted against productivity (efficiency, 1 to 10,000 tests/hour). Manual testing, scripting, bespoke model-based testing, and the model-based "vision" occupy successively higher positions along the envelope, from roughly one nine at a handful of tests per hour toward five nines at thousands of tests per hour. Source: mVerify Corporation.]

36

STRATEGY

37

How to Improve Testability?

WBT = BIT + State Helpers + PCOs + Structure - Complexity - NDDs
(maximize the positive terms, minimize the negative ones)

BBT = Model + Oracle + Harness - Size - Nodes - Variants - Weather
(maximize the positive terms, minimize the negative ones)

38

Who Owns Testability Drivers?

[Table mapping each driver to its owners among Architects, Devs, and Test. BBT drivers: Model, Oracle, Harness, PCOs, Size, Variants, Nodes, Weather. WBT drivers: BIT, State Helpers, Good Structure, Complexity, NDDs.]

Testers typically don't own or control the work that drives testability.

39

Strategy

[2×2 matrix: White Box Testability (low to high) against Black Box Testability (low to high).
• High BBT, low WBT: Emphasize Functional Testing
• High BBT, high WBT: Incentivize Repeaters
• Low BBT, low WBT: Manage Expectations
• Low BBT, high WBT: Emphasize Code Coverage]

40

Q&A

41

Media Credits

• Robert V. Binder. Design for Testability in Object-Oriented Systems. Communications of the ACM, September 1994.
• Diomidis D. Spinellis. Internet Explorer Call Tree.
• Unknown. Dancing Hamsters.
• Jackson Pollock. Autumn Rhythm.
• Brian Ferneyhough. Plötzlichkeit for Large Orchestra.
• Robert V. Binder. Testing Object-Oriented Systems: Models, Patterns, and Tools.
• Microsoft. MS-TPSO, Transaction Processing System Overview.
• NASA. Hubble Space Telescope, M66 Galaxy.
• Gustav Holst. The Planets: Uranus. Peabody Concert Orchestra, Hajime Teri Murai, Music Director.
• mVerify Corporation. Test Automation Envelope.