Pattern-Based Debugging of Declarative Models

Vajih Montaghami and Derek Rayside
Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
{vmontagh, drayside}@uwaterloo.ca

Abstract—Pattern-based debugging compares the engineer’s model to a pre-computed library of patterns, and generates discriminating examples that help the engineer decide if the model’s constraints need to be strengthened or weakened. A number of tactics are used to help connect the generated examples to the text of the model. This technique augments existing example/counter-example generators and unsatisfiable core analysis tools to help the engineer better localize and understand defects caused by complete overconstraint, partial overconstraint, and underconstraint. The technique is applied to localizing, understanding, and fixing a defect in an Alloy model of Dijkstra’s Dining Philosophers problem. Automating the search procedure remains essential future work.

I. INTRODUCTION

Debugging can be a cumbersome and time-consuming task that persists throughout the software lifecycle [11, 22, 23]. Bugs in the specification, whether due to incompleteness or unsoundness, can be especially pernicious [10, 13]. Modern software modelling languages, such as Alloy [8], often have associated analysis tools that assist the engineer in confirming the soundness and completeness of the specification. These tools might include both an example/counter-example generator, and highlighting of the unsatisfiable core of the model when no example or counter-example can be generated. Such tools are a tremendous assistance to model engineers, and our work builds on them.

An example/counter-example generator helps the engineer understand when the model is under-constrained. The generated example might be one the engineer wishes to reject, and so the engineer refines the model to prevent that example. However, common tools take the first available example, which is not necessarily an interesting one, nor one the engineer wants to reject. Our technique produces discriminating examples, which help the engineer decide whether a property should be strengthened or weakened (or stay as is).

When the model is fully overconstrained (that is, no instances are possible), an unsatisfiable core analysis highlights the parts of the model that are in conflict. With high probability, if the model is not intended to be overconstrained, the fix must be made somewhere in the core. But where? Our technique produces discriminating examples that help the engineer localize which part of the core needs fixing.

Partial overconstraint occurs when some instances that the engineer would accept are prohibited by the model, whereas others are correctly generated. This circumstance is underserved by existing tools. Again, discriminating examples can

help the engineer decide if certain properties need to be strengthened or weakened.

The need to debug arises because the expressed meaning of the model differs from the intended meaning, but the engineer does not know where or why [12]. We propose pattern-based debugging as a semi-automatic technique to assist with localizing and understanding these differences. Pattern-based debugging comprises pattern-based semantic inference, semantic mutation, and a collection of tactics.

In this context a pattern is a general idea such as acyclicity. When a pattern is instantiated with respect to a particular relation we call it a property, e.g., r is acyclic. Pattern-based semantic inference is the process of discovering which properties the model implies. The conjunction of these properties is an approximation of the expressed meaning of the model. Semantic mutation is the process of changing this inferred meaning by either strengthening or weakening individual properties.

We have computed both an implication graph and a conjunction graph that are used to guide this reasoning process. The implication graph records which patterns strengthen or weaken which other patterns. The conjunction graph records which patterns are mutually satisfiable, and which patterns are in conflict (not mutually satisfiable).

The mutated model is used to produce examples for the engineer to accept or reject. Through this dialogue the mutated model moves closer to the engineer’s intentions. The engineer is informed of how the original model differs from his or her intentions in three ways. First, the models can be compared directly, to generate discriminating examples that are true in one but not the other. Second, these discriminating examples can be compared with the original model using the unsatisfiable core. Finally, the mutations themselves can be reported: relax property p to p′, strengthen property q to q′, etc.
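The strengthen/weaken dialogue can be sketched in Alloy itself. The following toy model is purely illustrative (the sig and predicate names are ours, not from the pattern library): a discriminating example is an instance satisfying a weaker property but not a stronger candidate, which the engineer can then accept or reject.

```alloy
-- Toy sketch (illustrative names): a discriminating example satisfies
-- the weaker property but not the stronger one, so the engineer can
-- accept it (keep the weak form) or reject it (strengthen the model).
open util/ordering[Time] as to

sig Atom {}
sig Time { r: set Atom }

-- weaker: r never shrinks from one time step to the next
pred Expand { all t: Time - to/last | t.r in to/next[t].r }

-- stronger: r strictly grows at every time step
pred Expandd {
  all t: Time - to/last | t.r in to/next[t].r and t.r != to/next[t].r
}

-- ask the Analyzer for an instance satisfying Expand but not Expandd
run { Expand and not Expandd } for 3
```

Any instance found lies in the q ∧ ¬p region of Figure 3, with Expandd playing the role of p and Expand the role of q.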
Through the dialogue with the engineer the debugger uses a number of tactics. These are divided into generation tactics, used to generate examples or counter-examples, and correspondence tactics, used to highlight to the engineer where generated examples (or counter-examples) correspond or diverge from the expressed model. The generation tactics include antecedent satisfaction, disjunction vacuity detection, and conjunction vacuity detection. The correspondence tactics include quantifier unrolling, and function and predicate inlining. These tactics will be explained individually as they are used in the presented case study. The Alloy Analyzer has shipped with a model of Dijkstra’s

     

Inclusion × Ordering × Column × Staticness × Emptiness

  Inclusion:  { Expand, Expandd, Contract, Contractt, Mutate }
  Ordering:   { HeadOf, HeaddOf, TailOf, TaillOf, ∅ }
  Column:     { Middle, Right }
  Staticness: { _MiddleStatic, _RightStatic, ∅ }
  Emptiness:  { _FirstLeftEmpty, _LastLeftEmpty, ∅ }

Fig. 1: Parametric Temporal Patterns’ structure for a ternary relation like r: Left→Middle→Right

TABLE I: Examples satisfying predicates in Figure 1.

ExpandHeadOfRight_MiddleStatic
  L0: M0→R0
  L1: M0→R0
  L2: M0→R0, M1→R1
  L3: M0→R0, M1→{R1, R2}

ContracttTaillOfRight_MiddleStatic
  L0: M0→{R0, R1, R3}
  L1: M0→{R1, R3}
  L2: M0→{R3}
  L3: –

dining philosopher’s problem for several years. This model had a partial over-constraint bug that went undetected until 2012. The bug was originally discovered — and fixed — when a translator from Alloy to the KeY theorem proving system was written [21]. However, the depth of reasoning required to fix this bug was not adequately explored in previous publications. We demonstrate that pattern-based debugging can guide the engineer through this complex, multi-step reasoning. II. PATTERN -BASED S EMANTIC I NFERENCE A pattern is a general concept such as monotonicity, whereas a property is a pattern applied to a specific relation in the model. Pattern-based semantic inference checks which properties, from the cross-product of instantiating every pattern on each relation, are implied by the model. The conjunction of these properties is an approximation of the meaning of the model. Pattern-based semantic inference is guided by both an implication graph and a conjunction graph of patterns. These graphs are computed once for the library of patterns and then stored for use in analyzing individual models. The implication graph records which patterns imply others, whereas the conjunction graph records which patterns are mutually satisfiable. If pattern p ⇒ q, and the model m ⇒ p[r] (that is, the model implies pattern p applied to relation r), then there is no need to check m ⇒ q[r]. Similarly, if pattern p and pattern x are not mutually satisfiable (i.e., if there is no edge between them in the conjunction graph), and the model m ⇒ p[r] then there is no need to check m ⇒ x[r]. III. A L IBRARY OF T ERNARY PATTERNS We present a library of ternary patterns. A previous study of Alloy models showed that ternary relations are often used in temporal models [18]: a third column is added, at either end, to index a binary relation through time. Previously, Mendel [15] developed a library of binary patterns.

ContracttMiddle_LastLeftEmpty
  L0: M0→R0, M1→R1, M2→R0
  L1: M0→R0, M2→R0
  L2: M2→R0
  L3: ∅

MutateHeadOfMiddle
  L0: M0→R0, M1→R1
  L1: M0→R0, M2→R0
  L2: M0→R1, M2→R0
  L3: –

The library of patterns is generated from a five-dimensional cross-product (Figure 1). There are 450 patterns produced by this cross-product. An analysis with Alloy reveals that only 180 of these are satisfiable. These can be grouped into 160 equivalence classes (two patterns are in the same equivalence class if they are equisatisfiable). The implication graph for this library has 12 source nodes, 6 sink nodes, and a maximum path length from source to sink of 6. Some nodes are reachable from only a subset of the source nodes.

Consider the ternary relation r, with columns Left, Middle, and Right: sig Left { r: Middle→Right }. Table I shows example values for this relation that conform to four different patterns.

Consider the pattern ContracttTaillOfRight_MiddleStatic. We see that L0 → M0 → {R0, R1, R3}; in the next time step the tail of the right column (atom R0) has been removed (contracted), while the middle column remains the same: L1 → M0 → {R1, R3}. The double ‘t’ on ‘Contractt’ indicates that the contraction must happen at each time step.

By contrast, the pattern ExpandHeadOfRight_MiddleStatic does not have a double ‘d’ at the end of ‘Expand’, so the expansion does not need to happen at every time step: things might stay the same, as they do from L0 to L1. When atoms are added to the right column, they must be greater than the existing atoms in that column. For example, state L2 adds R1, and L3 adds R2. The MiddleStatic part of the pattern indicates that once an association is made between a middle atom and a right atom, that association persists. For example, the association M0 → R0 exists in the first state (L0) and in all subsequent states. Similarly, in state L2 the association M1 → R1 is made, and it persists for all future states.

Pattern ContracttMiddle_LastLeftEmpty strictly shrinks the number of atoms in the middle column as the states progress, ending in the empty set. For example, at state L0 we see {M0, M1, M2}.
Then at state L1 this has been reduced to {M0, M2}. There is no ordering constraint in the pattern, which is why M1 could be removed. The LastLeftEmpty constraint in the pattern enforces that in the last state, L3, there are no associated middle or right atoms.

The example for the MutateHeadOfMiddle pattern shows that the head of the middle column changes from M1 to M2 as the state progresses from L0 to L1. The transition from L1 to L2 reveals some subtleties of this pattern. First, the head of the middle is not required to change at each time step, because ‘Head’ does not have a double ‘d’, and from L1 to L2 the middle column does not change. Second, the contents of the right column do change from L1 to L2, which at first might seem confusing. This change is permitted because the pattern does not include RightStatic.

Meta-programming techniques are used to generate the formulas for patterns in this library, as shown below for ExpandHeadOfRight_MiddleStatic. The highlighting shows which parts of the formulas are generated from which concepts in the pattern.

open util/ordering[Left] as lo
open util/ordering[Middle] as mo
open util/ordering[Right] as ro



pred ExpandHeadOfRight_MiddleStatic {
  all l: Left - lo/last | let l' = lo/next[l] |
  all m: Middle |
    let i = m.(l.r) | let j = m.(l'.r) | let delta = j - i |
    (i in j) and
    (some delta implies ro/lte[ro/min[i], ro/min[delta]])
}
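An example in the style of Table I can be produced by asking the Analyzer for an instance of this predicate directly, assuming the library module declares the Left, Middle, and Right sigs shown above (the scope here is illustrative):

```alloy
-- generates an instance such as the ExpandHeadOfRight_MiddleStatic
-- column of Table I
run ExpandHeadOfRight_MiddleStatic for 4
```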

The library of formulas is online at https://goo.gl/WhwKiR

IV. DINING PHILOSOPHERS CASE STUDY

The Alloy Analyzer ships with a model of Dijkstra’s dining philosophers problem. The purpose of the model is to show that Dijkstra’s criterion for ordering the mutexes prevents deadlock. Previously, another research group [21] used a theorem prover to discover that the original model (which shipped with Alloy for several years) was over-constrained: deadlock was prevented because no instances were possible. They provided a fix, but did not document the complex and subtle reasoning required to create that fix.

Dijkstra’s well-known idea is that processes will not deadlock if they grab mutexes in order. The Alloy model is defined with the entities Process, Mutex, and State, and the relations holds and waits:

sig Process {}
sig Mutex {}
sig State { holds, waits: Process → Mutex }

The following assertion checks the absence of deadlock:

assert DijkstraPreventsDeadlocks {
  some Process and GrabOrRelease and GrabbedInOrder
    implies not Deadlock
}
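Checking this assertion in a bounded scope reports no counterexample, which is what originally hid the defect (the scope here is illustrative):

```alloy
-- no counterexample is reported, but, as explained next, only
-- because the antecedent is unsatisfiable
check DijkstraPreventsDeadlocks for 5
```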

As will be explained, the root of the problem is that the original GrabbedInOrder predicate is overconstrained, so while the assertion about the absence of deadlock is true, it holds vacuously:

pred GrabbedInOrder {
  all pre: State - so/last |
  let post = so/next[pre] |
  let had = Process.(pre.holds), have = Process.(post.holds) |
  let grabbed = have - had |
  (some grabbed) ⇒ grabbed in mo/nexts[had]
}

The remainder of this section describes a sequence of four fixes that lead to the final solution, which is logically equivalent (although syntactically different) to the solution previously presented in the literature [21].

A. The First Fix: Removing Overconstraint

Since the absence-of-deadlock assertion is satisfied, the engineer does not even realize that there is a problem with this model. The debugger sees that the assertion is written as an implication, and applies the Antecedent Satisfaction tactic to generate and check the following assertion. The idea is that if the engineer wrote an implication, they expected the antecedent to be true at least some of the time. The tactic will generate variants of the following assertion, potentially adding existence conditions such as some holds. In this model, holds is the most frequently mentioned relation. Again, if the engineer put a relation in the model, they probably intend that it has the possibility of being non-empty.

pred DijkstraPreventsDeadlocksAntecedent {
  some Process and GrabOrRelease and GrabbedInOrder
  and some holds
}
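The tactic then asks the Analyzer for a witness to the antecedent (scope illustrative); in the original model this run yields no instance, triggering the unsatisfiable-core analysis described next:

```alloy
-- unsatisfiable in the original model: the antecedent of the
-- implication can never hold together with some holds
run DijkstraPreventsDeadlocksAntecedent for 3
```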

The unsatisfiable core for this assertion reveals that GrabOrRelease and GrabbedInOrder are in conflict. Which one needs to be fixed? Pattern-based semantic inference discovers that there is a property, ExpandRight_FirstLeftEmpty, that is implied by GrabOrRelease and is inconsistent with GrabbedInOrder. This property is used to generate a discriminating example:

       holds    waits
  S0   –        –
  S1   P0→M0    –

The engineer accepts this example, which localizes the problem to GrabbedInOrder. Had the engineer rejected this example, the problem would have been localized to GrabOrRelease.

The debugger now applies the quantifier unrolling tactic to GrabbedInOrder, which has a universal quantifier over a set of states. The unsatisfiable core of the unrolled GrabbedInOrder (Figure 2) with the discriminating example further localizes the problem: GrabbedInOrder is inconsistent with the first state of the discriminating example. GrabbedInOrder comprises a single implication. For that implication to be in conflict with the discriminating example, the antecedent must be satisfied and the consequent must be where the conflict is. So the problem has been localized to the following expression:

  grabbed in mo/nexts[had]

The debugger also reports to the engineer that had is empty. Now the engineer realizes that the problem is that the nexts function will not return any successors for the empty set (had). The engineer replaces this faulty expression with a conditional that returns all mutexes when had is empty and mo/nexts[had] otherwise.

pred GrabbedInOrderState0 {
  let pre = so/first |
  let post = so/next[pre] |
  let had = Process.(pre.holds), have = Process.(post.holds) |
  let grabbed = have - had |
  some grabbed implies grabbed in mo/nexts[had]
}
pred GrabbedInOrderState1 {
  let pre = so/next[so/first] |
  let post = so/next[pre] |
  let had = Process.(pre.holds), have = Process.(post.holds) |
  let grabbed = have - had |
  some grabbed implies grabbed in mo/nexts[had]
}
run {
  GrabbedInOrderState0
  GrabbedInOrderState1
  -- The statement encoding the partial instance:
  no waits
  some disj s0, s1: State, p0: Process, m0: Mutex | {
    s0 = so/first
    s1 = so/next[s0]
    holds = s1→p0→m0
  }
}

Fig. 2: Unrolled GrabbedInOrder

Fig. 3: (diagram) The dark centre circle represents instances satisfying property p, whereas the lighter outer circle represents instances satisfying property q. The containment relationship indicates that p implies q. Discriminating examples are inside the dotted line, but outside the dark centre circle: i.e., they satisfy q ∧ ¬p. If the engineer accepts such an example then p must be weakened towards the dotted line, whereas rejecting such an example means that q must be strengthened towards the dotted line.


pred GrabbedInOrder_Fixed_1 {
  all pre: State - so/last |
  let post = so/next[pre] |
  let had = Process.(pre.holds), have = Process.(post.holds) |
  let grabbed = have - had |
  (some grabbed) ⇒
    (grabbed in (no had implies Mutex else mo/nexts[had]))
}

B. The Second Fix: Strengthening Underconstraint

The model now generates non-trivial examples, and so is no longer totally overconstrained. It might still be partially overconstrained or underconstrained. The debugger attempts to produce discriminating examples to help the engineer decide if the model is a faithful expression of his or her intent.

To begin, the debugger identifies the properties consistent with each constraint in the antecedent of the revised expression. Four such properties are identified: ExpandRight, ExpandRight_FirstLeftEmpty, ExpanddRight, and ExpanddRight_FirstLeftEmpty. The implication graph shows that each of these four properties is implied by a stronger form; thus, if the engineer rejects an example that is consistent with the weaker form of a property but inconsistent with the stronger form, then the expression is underconstrained and requires correction. Figure 3 illustrates the idea of weakening and strengthening.

The debugger generates the following discriminating example, where at state S2 process P0 holds two mutexes, {M0, M2}. In the next state, S3, process P0 additionally acquires mutex M1. The debugger is testing the hypothesis that M1 can be added to a set that already contains M2. The weaker form of the property, which the engineer has actually written in the model, permits this; but the stronger form of the property, which the debugger knows from the implication graph, would not permit this addition.


       holds              waits
  S2   P0→{M0, M2}        –
  S3   P0→{M0, M2, M1}    –

The engineer rejects the discriminating example: their intent is that mutexes be acquired in order, so acquiring M1 after M2 should be forbidden. The debugger localizes the problem, again, to the consequent in GrabbedInOrder_Fixed_1. It is able to do this because it knows the property that is too weak, what it needs to be strengthened to, and how these relate to parts of the model in terms of the implication and conjunction graphs.

Here the engineer needs to think of the fix: how can the expression grabbed in mo/nexts[had] permit this example? The problem is that had refers to a set that includes M0, and M1 is in the nexts of M0. The engineer decides to rewrite this expression in terms of prevs instead of nexts:

pred GrabbedInOrder_Fixed_2 {
  all pre: State - so/last |
  let post = so/next[pre] |
  let had = Process.(pre.holds), have = Process.(post.holds) |
  let grabbed = have - had |
  (some grabbed) ⇒ (no (grabbed & mo/prevs[had]))
}

C. The Third Fix: Strengthening Underconstraint

The debugger explores another hypothesis that the model is underconstrained: should the property ExpandHeaddOfRight[holds] be strengthened to ExpandHeaddOfRight_MiddleStatic[holds]? It generates an example consistent with the former but not the latter:

       holds              waits
  S0   –                  –
  S1   P0→M2              –
  S2   P0→M2, P1→M3       –
  S3   P0→M2, P1→M3       P1→M2
  S4   P1→M2, P1→M3       –

The engineer rejects this discriminating example because in state S4 process P1 takes mutex M2 when it already has mutex M3, thereby violating the intention that mutexes are acquired in order. The debugger explains that, from its perspective, the issue is that it changed the middle column (processes) while keeping the right column (mutexes) fixed, and that the syntactic difference between the predicates it is experimenting with is a quantifier over the middle column (see §III). The engineer realizes that the set have should be defined for each process individually, rather than for all processes collectively, and introduces a quantifier to do this:

pred GrabbedInOrder_Fixed_3 {
  all p: Process |
  all pre: State - so/last |
  let post = so/next[pre] |
  let had = Process.(pre.holds), have = p.(post.holds) |
  let grabbed = have - had |
  (some grabbed) ⇒ (no (grabbed & mo/prevs[had]))
}

D. The Fourth Fix: Strengthening Underconstraint

Unfortunately, the revised model now permits deadlock. Alloy’s regular example generator produces the following instance. The engineer examines this instance and realizes that it is almost acceptable: if only the last tuple of the waits relation were changed from S4→P0→M0 to S4→P0→M2.

       holds              waits
  S0   –                  –
  S1   P1→M0              –
  S2   P1→M0, P0→M1       –
  S3   P1→M0, P0→M1       P1→M1
  S4   P1→M0, P0→M1       P1→M1, P0→M0   (change M0 to M2)

The debugger finds that this corrected example implies the property ExpanddHeadOfRight over the union of waits and holds. This union also turns out to be the only non-vacuous combination of these relations: their intersection and differences are empty. This new property is also aligned with the engineer’s intentions on all of the previously generated examples and counterexamples. The property is reported to the engineer, who chooses to simply conjoin it with the existing predicate:


pred GrabbedInOrder_Fixed_4 {
  GrabbedInOrder_Fixed_3
  ExpanddHeadOfRight[waits + holds]
}

With this fourth fix deadlock is prevented.
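One way to phrase the concluding checks in Alloy is to re-check the original assertion and confirm that its antecedent is now satisfiable, so the result is no longer vacuous (scopes are illustrative):

```alloy
-- deadlock is prevented, and no longer vacuously: the
-- antecedent-satisfaction predicate from §IV-A now has instances
check DijkstraPreventsDeadlocks for 5
run DijkstraPreventsDeadlocksAntecedent for 5
```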

E. Termination of the Debugger

This fourth fix is equisatisfiable with the previously published solution [21], and so the case study ends here. In addition to this social criterion, the debugger knows that the model prevents deadlock and is consistent with the engineer’s intent on all of the discriminating examples generated so far. The debugger additionally checks that the fourth fix does not introduce accidental partial overconstraint; such an accident was the original cause for this entire case study, after all. The debugger uses Alloy to confirm that the third model and the fourth model are equivalent so long as the possibility of deadlock is excluded. Therefore, the fourth model is no more overconstrained than the third.

In this early stage of the research, the debugger’s search procedure operates by the researchers manually applying tactics. If the search procedure were fully automated, one might imagine further checks the debugger could perform in addition to the ones described above. For example, the debugger might check that every one-step semantic mutation of the model is killed by one of the discriminating examples that the engineer has already accepted or rejected. The debugger might also check that its example generation tactics (antecedent satisfaction, disjunction vacuity detection, and conjunction vacuity detection) produce no new information. If all of these conditions pass, then the engineer could have a fairly high degree of confidence that the model, now including the collection of generated examples, is an accurate expression of his or her intent.

V. CONCLUSION AND FUTURE WORK

Pattern-based debugging has the potential to enhance existing model debugging tools, including example/counter-example generators and unsatisfiable core analyzers. This potential derives from a capacity to create semantically meaningful variants of the engineer’s model, and to explain those variants to the engineer semantically, syntactically, and by example.
This process can guide the engineer towards a model that more accurately reflects their intent. Pattern-based debugging is in the spirit of example-driven modelling: an approach that emphasizes the importance of examples, in addition to abstractions, for creating complete models [1].

A key idea in pattern-based debugging is the discriminating example: an example that helps the engineer decide if a property should be strengthened or weakened. Discriminating examples additionally help to focus unsatisfiable cores on where the fix should be.

This paper reports on the idea of pattern-based debugging, a library of patterns for ternary relations, and a case study of a subtle and complex fix for an Alloy model of Dijkstra’s dining philosophers problem. Pattern-based debugging comprises pattern-based semantic inference, semantic mutation, and a collection of tactics. Essential future work includes automating the semantic mutation and tactic application search procedures, as well as the tactics themselves. There are also possibilities to expand the library of patterns.

VI. RELATED WORK

There are a variety of existing libraries of specification patterns, including for LTL [5], real-time systems [6, 14], and service-based applications [7, 9, 17]. Closer to our work, Mendel [15] created a library for binary relations. There are also debuggers and visualizers for LTL and CTL [2, 4, 13]. Some of these make use of Reiter’s general theory of diagnosis [19]. Aluminum has a similar goal of improving example exploration, but is based on mathematical definitions of minimality rather than discriminating between variants of a property [16]. Seater [20] proposed non-examples to help explain the purpose of each constraint in a specification. This is complementary to our idea of discriminating examples. Pattern-based debugging is inspired by mutation testing [3].

ACKNOWLEDGMENT

The authors thank Steven Stewart for his assistance with the prose, and the reviewers for their thoughtful comments. This work was funded in part by the Natural Sciences and Engineering Research Council of Canada (NSERC).

REFERENCES

[1] Kacper Bąk, Dina Zayan, Michał Antkiewicz, Krzysztof Czarnecki, Zinovy Diskin, and Derek Rayside. Example-Driven Modeling: Model = Abstractions + Examples. In New Ideas and Emerging Results Track at ICSE, May 2013.
[2] Ilan Beer, Shoham Ben-David, Hana Chockler, Avigail Orni, and Richard Trefler. Explaining counterexamples using causality. In Proc. 21st CAV, volume 5643 of LNCS, pages 94–108. Springer-Verlag, July 2009.
[3] T. A. Budd, R. A. DeMillo, R. J. Lipton, and F. G. Sayward. Theoretical and empirical studies on using program mutation to test the functional correctness of programs. In Proc. 7th POPL, January 1980.
[4] Marsha Chechik and Arie Gurfinkel. A framework for counterexample generation and exploration. In Proc. 8th FASE, pages 220–236. Springer-Verlag, April 2005.
[5] Matthew B. Dwyer, George S. Avrunin, and James C. Corbett. Patterns in property specifications for finite-state verification. In Proc. 21st ICSE, 1999.
[6] Volker Gruhn and Ralf Laue. Specification patterns for time-related properties. In International Symposium on Temporal Representation and Reasoning, pages 189–191. IEEE Computer Society, 2005.
[7] Sylvain Hallé, Roger Villemaire, and Omar Cherkaoui. Specifying and validating data-aware temporal web service properties. TSE, 35(5):669–683, 2009.
[8] Daniel Jackson. Software Abstractions: Logic, Language, and Analysis. The MIT Press, revised edition, January 2012.
[9] Slim Kallel, Anis Charfi, Tom Dinkelaker, Mira Mezini, and Mohamed Jmaiel. Specifying and monitoring temporal properties in web services compositions. In Seventh IEEE European Conference on Web Services (ECOWS’09), pages 148–157. IEEE, 2009.

[10] Sagi Katz, Orna Grumberg, and Daniel Geist. ‘Have I written enough properties?’ A method of comparison between specification and implementation. In Proc. 10th IFIP WG 10.5 Advanced Research Working Conference on Correct Hardware Design and Verification Methods (CHARME ’99), pages 280–297. Springer-Verlag, 1999.
[11] Andrew J. Ko and Brad A. Myers. A framework and methodology for studying the causes of software errors in programming systems. Journal of Visual Languages & Computing, 16(1):41–84, 2005.
[12] Andrew J. Ko, Robin Abraham, Laura Beckwith, Alan Blackwell, Margaret Burnett, Martin Erwig, Chris Scaffidi, Joseph Lawrance, Henry Lieberman, Brad Myers, Mary Beth Rosson, Gregg Rothermel, Mary Shaw, and Susan Wiedenbeck. The state of the art in end-user software engineering. ACM Computing Surveys, 43(3):21:1–21:44, April 2011.
[13] Robert Könighofer, Georg Hofferek, and Roderick Bloem. Debugging formal specifications: a practical approach using model-based diagnosis and counterstrategies. International Journal on Software Tools for Technology Transfer, pages 1–21, 2011.
[14] Sascha Konrad and Betty H. C. Cheng. Real-time specification patterns. In Proc. 27th ICSE, pages 372–381. ACM, 2005.
[15] Lucy Mendel. Modeling by example. Master’s thesis, MIT, September 2007.
[16] Tim Nelson, Salman Saghafi, Daniel J. Dougherty, Kathi Fisler, and Shriram Krishnamurthi. Aluminum: principled scenario exploration through minimality. In Proc. 35th ICSE, pages 232–241, 2013.
[17] Franco Raimondi, James Skene, and Wolfgang Emmerich. Efficient online monitoring of web-service SLAs. In Proc. 16th FSE, pages 170–180, November 2008.
[18] Derek Rayside, Felix Chang, Greg Dennis, Robert Seater, and Daniel Jackson. Automatic visualization of relational logic models. In First Workshop on the Layout of (Software) Engineering Diagrams (LED’07), September 2007.
[19] R. Reiter. A theory of diagnosis from first principles. Artificial Intelligence, 32(1):57–95, April 1987.
[20] Robert Morrison Seater. Core extraction and non-example generation: debugging and understanding logical models. Master’s thesis, MIT, 2004.
[21] Mattias Ulbrich, Ulrich Geilmann, Aboubakr Achraf El Ghazi, and Mana Taghdiri. A Proof Assistant for Alloy Specifications. In Proc. 18th TACAS, volume 7214 of LNCS, pages 422–436. Springer-Verlag, 2012.
[22] I. Vessey. Expertise in debugging computer programs: an analysis of the content of verbal protocols. IEEE Transactions on Systems, Man, and Cybernetics, 16(5):621–637, September 1986.
[23] Andreas Zeller. Isolating cause-effect chains from computer programs. In Proc. 10th FSE, pages 1–10, November 2002.
