! "#
$ % & ' ( & $ ' ' $ )* ,+
A bibliography of object-oriented testing, with a short general survey of the area

Mike Stannett
MOTIVE Project Group, Dept of Computer Science, Sheffield University
Regent Court, 211 Portobello Street, Sheffield S1 4DP, UK.
[email protected] Tel. +44 (114) 222-1911 Fax. +44 (114) 222-1810

Acknowledgement. This research is sponsored by the UK-EPSRC as part of the MOTIVE Object-Oriented Testing project (Grant number GR/M56777).

Summary. As part of an EPSRC project considering new approaches to object-oriented testing (OOT), we have constructed a comprehensive bibliography of existing OOT literature. This bibliography shows a worrying trend, for while industrial uptake of OO technologies has increased dramatically in recent years, the same period has seen the number of publications and conference reports in the subject decline markedly. Even so there have been some significant developments, and we include these in this summary of current concepts in OO testing. We include the bibliography itself as a resource to assist and encourage future research in the field.

Keywords. Object-oriented testing, functional testing, class testing, integration testing.
Introduction

Testing has long been recognised as a vital component of the software development process, and many books and papers have been devoted to the subject (see [Mye79] for a traditional pre-OO account of testing principles, and e.g. [BG99, MS01] for a recent discussion of OO testing). The field of object-oriented testing (OOT) concerns those aspects of software system testing that arise specifically in relation to using an object-oriented development approach.

The first question that needs to be discussed is whether OOT has any relevance. Does object-orientation actually impinge upon the testing process, or is OOT simply the application of standard procedural testing strategies to OO language constructs? From a theoretical point of view there is now no doubt that OOT raises important questions that are not addressed by the general theory of software testing, but the wider question, whether OOT is perceived by the OO community as being relevant, is harder to assess.

One approach is to consider the number of research publications in the subject. To the extent that OOT addresses special problems, we would expect to see this reflected in a relatively large number of related publications. Conversely, if object-orientation is largely considered to be irrelevant as far as testing is concerned, we would expect to find relatively few papers on the subject; it would be left instead to researchers of general software testing. To this end I have tried to construct as complete a bibliography of OOT as possible, and have drawn on library catalogues, conference reports, and the reference sections of journal papers. The results, based on 322 entries and shown in Chart 1, are not necessarily complete (it is likely that some citations for 2001 have been missed), but they are nonetheless revealing. Authors on OOT often note that practitioners are remarkably unconcerned with testing, but the chart suggests that the situation is far worse than this – the level of interest is not merely low, but may actually be decreasing.

Early authors on OO development (OOD) were quite certain that OOT was of little consequence. Indeed, Rumbaugh et al [RBP+91] specifically excluded testing from their
! "#
$ % & ' ( & $ ' ' $ )* $
discussion of OO modelling and design, arguing (p.144) that “testing and maintenance are simplified by an object-oriented approach, but the traditional methods used in these phases are not significantly altered.” With hindsight, this argument is obviously untenable, because the two styles of programming entail completely different choices for the basic unit of testing. By common consensus, the basic unit for procedural systems is the procedural function (procedure), while for OO systems it is the class (sometimes collections of classes). Now, procedures are typically designed to use local variables that are hidden from other parts of the system, while objects are designed to share a single underlying set of local variables across an entire family of methods. This entails that interactions between functions are likely to be of far greater importance in the OO world than in the procedural one, and this is confirmed by that other well-known consequence of the move from procedural to OO development: whereas procedural functions are typically large with complex control structures, OO methods tend to be small with relatively simple control structure. From the tester’s point of view this explains why OO methods are generally easier to test than their procedural counterparts, while integration testing is typically harder [AO00]. In addition, OO languages employ inherently more complex relationships than their procedural counterparts – it is the concise expressive power inherent in these relationships that makes the OO approach so valuable in the first place – and these also influence the sorts of question that testing can ask.
[Chart 1. Number of OO Testing publications for each year, 1986-2002, as listed in the bibliography.]
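To make the point about shared state concrete, consider the following minimal C++ sketch (the class and its methods are invented for illustration, not drawn from any cited paper):

    #include <stdexcept>

    class Account {            // illustrative class: all methods share 'balance' and 'frozen'
        double balance = 0.0;
        bool frozen = false;
    public:
        void deposit(double amount) {
            if (frozen) throw std::logic_error("account frozen");
            balance += amount;
        }
        void freeze() { frozen = true; }
        double getBalance() const { return balance; }
    };

    // Each method is trivial to unit-test in isolation, yet the sequence
    // freeze(); deposit(100); only reveals its behaviour when the methods are
    // tested together, because they interact through shared state.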
Apart from the change of unit from procedure to class, Binder’s landmark survey [Bin95f] identified a number of features in which authors seemed particularly interested:
• inheritance
• encapsulation
• polymorphism
• testing styles (white-box, black-box, and state-based)
• test metrics (adequacy and coverage)
• integration strategies
! "#
•
$ % & ' ( & $ ' ' $ )* (
test process strategy
and we can now add several more issues to his list •
reusability
•
the Internet (concurrency, security, mobility, timing)
•
automation of testing
•
delivered-as-tested
•
mixed-mode development and formal methods
The most important development over the last decade has been the realisation that testing must occur across the entire life cycle [Kor98, BAS99, BG99], but this realisation is common to software testing in general and not restricted to OOT. The specifically OOT-centred issues identified by Binder are still relevant today, but in some cases the emphasis has changed. For example, Binder considered white-box [Par98], black-box [Edw01] and state-based [TR92b, TR93a, TR93c, Bin95a, Bin95b, TR95] testing. It is now largely accepted that white-box testing cannot succeed in isolation; in real situations functional techniques are also required, and many approaches focus on the integration of both approaches within a single strategy [CTCC98].

As computers have become more powerful, so test strategies that were once considered infeasible are becoming routine; at the same time, systems have continued to grow, and the complexities of some components require new approaches. Consequently, Binder’s list of techniques now needs to be extended to include statistical [TW96, BBG+98], mutation [OLR+96, OP97, Edw01] and dual-refinement testing. Statistical testing accepts that perfect knowledge of a system is generally unattainable, and tries instead to find statistical support for correctness. Mutation testing is a particular approach to the generation of test cases, and has been boosted recently by the discovery that only a small number of mutation types are needed to achieve high coverage [OLR+96, Edw01].

The dual-refinement strategy (originally applied to high-level functional testing of VLSI [HSR89]) is still under review (see the description of MOTIVE below). It starts with the observation that a functional system specification can also be regarded, albeit abstractly, as a functional test specification, because it tells us what observable outcomes should be generated when stimuli are supplied under well-defined conditions. Consequently, just as we can envisage automatically refining the specification into program code for the application, we can also envisage refining it into a detailed functional test script. Provided we ensure that the two refinement processes are carried out in tandem, we will automatically have generated a complete, correct and consistent test-frame for the application in question. The usual problems associated with specification-based testing, that one sometimes needs to know implementation details, are completely avoided by this approach, because of the extra information provided a priori by the dual-refinement process. We know that the test specification and design specification describe the same system, because they have been constructed with this property in mind. Unlike most specification-based scenarios, the implementation has not been constructed in logical isolation from the test script, but as part of a scheme for ensuring that the two are in precise agreement.

Reusability. Given the prevailing focus on proceduralism when OO testing first emerged as a separate discipline, it is perhaps inevitable that the issues cited in Binder’s survey are themselves intrinsically proceduralist — they concern ways of doing things. It is one of the great ironies of OO testing that procedural thinking still dominates the ‘micro architecture’ of the discipline. To be sure, today’s researchers address new developments relating to the various categories of testing, but the categories they use continue to be those constructed around ‘procedural’ exemplars.
Even so, as the OO approach has evolved over the past 7 years, testers have begun asking a question that is more in keeping with the OO philosophy: what can this entity be used for? This question (which is the basis for the long-standing importance of use-cases [Jac92, Amb94, Maj98]) allows us to add reusability to the list of key issues [MS91, Chu95, Bin96c, McG96c, McG97g]. Reusability is by no means a new concept; indeed, it can be regarded as one of the original goals of OO development. Early writers hoped that once a class was fully tested you would rarely need to test it again: the class could therefore be used safely as an off-the-shelf component. Re-expressing this in the analogous language of bricks and mortar, it amounts to the hope that bricks can be used safely as off-the-shelf components in the construction of buildings, and this is generally true. But there are always exceptional circumstances where general rules break down. Can bricks be used to build a skyscraper, or would they crumble under the excessive load? We can only decide by performing integration tests to see how they perform as a collective. Such tests may tell us that a stronger brick is required, or we may find that no type of brick is strong and light enough to use for high-rise construction, and that a system of steel-reinforced concrete is required instead. This is not the brick’s fault; it has passed its tests and is a perfectly safe object in its own terms, but the moment we place the brick in a larger context we bring forces to bear that cannot be internalised. In other words, external factors can impinge upon the acceptability or otherwise of a class, even if it precisely satisfies its specification, and can even place a class under so much stress that it fails completely. The validity of a class can never be determined absolutely — validation is only ever relative to the environment in which the class will operate. In particular, any class that is designed to be reusable is bound at some point to become unusable, because in the real world the set of instances for which a given condition holds is always open; we can never precisely identify the boundary points [Vic89].

The Internet has obvious implications for OOT [MD97, MF99]. The idea that components can migrate across the Net and collaborate concurrently at dynamically specified locations essentially precludes proceduralism — it is hard to see how a non-OO approach might be used to implement such processes. Concurrency was an early concern in OOT [HY88, Chu89], and is becoming so again with the use of concurrent OO languages [Cor98]. Mobility is obviously important for components that traverse the Net, and will presumably become more so given current demand for mobile communications. Probably the most significant factor here is the development of new process algebras [BFT00] (especially Milner’s π-calculus [MPW91]), as these increasingly enable us to reason about such systems systematically. Likewise, security has introduced new barriers to testing: how does one test compatibility with a component which is specifically designed not to provide information? Timing is usually thought of as a non-functional issue, but in this context it also concerns the order in which an object is prepared to accept method invocations. It has usually been left to programmers to ensure that methods are called in the correct order, but this is probably impossible to ensure when objects are expected to interact autonomously. Consequently objects should not have to rely on programmers getting it right, but need their own means of ensuring that incorrect timing won’t compromise their behaviour [Buc00].
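As a minimal illustration (the class and its states are invented here, and are not drawn from [Buc00]), an object can police its own method ordering by consulting an explicit state before accepting an invocation:

    #include <stdexcept>
    #include <string>

    class NetFile {
        enum class State { Closed, Open };
        State state = State::Closed;
    public:
        void open() { state = State::Open; }
        std::string read() {
            if (state != State::Open)        // the timing constraint is checked
                throw std::logic_error("read before open");  // by the object itself
            return "...";
        }
        void close() { state = State::Closed; }
    };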
Delivered as tested applies equally to OO and procedural code, but is included here because of the particular mechanism used to achieve it. Delivered as tested refers to the rule that the code we deliver should be the code we tested. Investigating the properties of an object without adding instrumental code usually involves declaring test classes as friends of the class under test [ESS+98]. As friends of the class, test objects have privileged access to the objects they scrutinise, and the additional overhead is avoided at delivery by the simple expedient of not including the test classes in the final build; the semantics of languages like C++ do not require that a friend class actually be present. (We are assuming that subsidiary factors, like the size of compiled code, do not affect functional behaviour.) For this approach to work, however, the implementation language must support friend declarations, or something analogous. For example, inner classes in Java 2 have full access rights to the members of their enclosing class, and might be used to simulate friends, but the fact that the inner class is part of the code of the outer class is an unwelcome complication (we certainly could not delete them at delivery, for example).

Given the possibility of ‘friendly’ test objects, let’s call the program element which validates the class under test its auxiliary. Then the OOT solution to delivered as tested involves two changes to the code: the introduction of hooks into the original class (these allow the auxiliary to interface with the class; for example, friend declarations), and the introduction of the auxiliary itself; a minimal sketch follows the list below. According to [Edw01] the key requirements for this kind of built-in testability (BIT) are:
• hooks shouldn’t modify the behaviour of the underlying component
• hooks shouldn’t impose excessive overheads
We should also add a basic consistency requirement:
• if we disable the auxiliary, the integrated layer containing the underlying component should have essentially the same behaviour as before the hooks were added.
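The following C++ fragment sketches the hook-plus-auxiliary mechanism just described; the class names and the TESTING build flag are invented for illustration:

    #include <cassert>

    class Counter {
        int value = 0;                  // private state the auxiliary will scrutinise
        friend class CounterAuxiliary;  // the hook: C++ does not require the friend
                                        // class to be present in the final build
    public:
        void increment() { ++value; }
    };

    #ifdef TESTING                      // hypothetical flag: auxiliary omitted at delivery
    class CounterAuxiliary {            // the auxiliary
    public:
        static void checkInvariant(const Counter& c) {
            assert(c.value >= 0);       // privileged access via friendship
        }
    };
    #endif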
Given that these conditions are satisfied, we would enable the auxiliary for functional tests, and disable it for non-functional tests and delivery. Without our consistency requirement, however, it is debatable whether this really amounts to delivered as tested, because of the somewhat arbitrary decision that auxiliaries are not to be regarded as part of the system, even though components of the system are specifically amended (they have hooks inserted) with auxiliary interactions in mind.

Automation in this context refers to automating the testing process. Several stages are involved in the automatic testing of entities (a skeletal driver sketching several of them follows the list):
• generate a model of the entity
• generate test data
• generate a model of the expected behaviour given these data
• use the data to drive the entity’s evolution
• extract information from the entity
• compare the reported behaviour with the expected behaviour
• produce a documentary summary of the test run
• ensure as far as possible that test data and documentation are reusable
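As promised above, here is a skeletal C++ sketch of a driver covering several of these stages; the entity, its behavioural model and the test data are all invented for illustration:

    #include <iostream>
    #include <vector>

    struct Entity { int apply(int x) { return 2 * x; } };   // entity under test

    int expectedBehaviour(int x) { return 2 * x; }          // model of expected behaviour

    int main() {
        std::vector<int> data = {0, 1, -1, 42};             // generated test data
        int failures = 0;
        Entity e;
        for (int x : data) {
            int reported = e.apply(x);                      // drive the entity
            if (reported != expectedBehaviour(x)) {         // compare behaviours
                ++failures;
                std::cout << "FAIL on input " << x << "\n"; // document the run
            }
        }
        std::cout << failures << " failure(s) in " << data.size() << " test(s)\n";
        return failures == 0 ? 0 : 1;
    }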
All of the research articles listed in the bibliography consider one or more of these stages, but to the best of our knowledge none proposes a unified approach that covers them all.

Mixed-Mode Development and Formal Methods. Finally, we should remember that testing is part of an ongoing development process, and as such does not occur in isolation. In the late 1980s the development of formal methods (FM) led many to hope that testing would one day become obsolete. (These should not be confused with semi-formal methods like Yourdon or SSADM.) The goal was correctness by construction. Unfortunately formal methods have so far failed to deliver, in part because the mathematical demands placed on users were far too high. It is probably fair to say that no formal method has yet been fully adopted by industry. This is an unfortunate situation, because the wholesale rejection of FM entails ignoring it even for the simple tasks where it is known to be successful. I’d like to end this section on a positive forward-looking note with a very brief description of two mixed-mode (testing with formal methods) European projects which are trying to bring formal methods back into the real world.

DeVa (Design for Validation) is a massive EU-funded research project [DEVA01] aimed at validating critical computing systems. Instead of asking the proceduralist criterion ‘does it calculate exactly the right function?’, it asks ‘is it understandable and dependable enough?’. It is particularly involved in working out how to structure software to make sure it is understandable and verifiable, and can be meaningfully evaluated. Key goals are testability on the one hand, and measurable reusability on the other. At the heart of the project is a raft of formal object-oriented methods [GBB+97, BBG+98] with which designs can be checked and investigated, and which allow meaningful and measurable comparisons to be made between different approaches to OO development.

On a somewhat smaller scale is our own EPSRC-funded MOTIVE [MOTIVE] project at Sheffield. This applies the dual-refinement approach to specifications constructed as X-machines [Eil74, Hol89, Lay93, Ipa95]. These can be thought of as finite state machines whose transition labels are functions which manipulate objects (a toy sketch follows below), but unlike finite state machines, which have very limited computational power, the computational power of X-machines is known to be unlimited [Sta90, Sta91, Sta01, Sta02]. In 1997, Holcombe and Ipate discovered that when systems are specified as X-machines, the dual-refinement approach generates an integration testing method that is proven to find all faults [IH97, HI98]. Their approach, however, is inherently procedural, and we are now in the final stages of extending their results to the OO paradigm. The approach is still under evaluation, but the breadth of applications so far is promising [FHI+95, IH98, BH00, EK00, KEK00, HH01a, HH01b, IH01, Van01].
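As promised, here is a toy C++ sketch of the X-machine idea (everything below is invented for illustration and is not the MOTIVE notation): a finite state machine whose transitions are labelled by functions that transform a shared memory.

    #include <functional>
    #include <map>
    #include <optional>
    #include <string>
    #include <utility>

    using Memory = int;                                   // the 'X' being manipulated
    using Input  = char;
    using TransitionFn = std::function<std::optional<Memory>(Memory, Input)>;

    struct XMachine {
        std::string state;
        Memory memory;
        // (current state, input symbol) -> (labelling function, next state)
        std::map<std::pair<std::string, Input>,
                 std::pair<TransitionFn, std::string>> delta;

        bool step(Input in) {
            auto it = delta.find({state, in});
            if (it == delta.end()) return false;          // no such transition
            auto next = it->second.first(memory, in);     // apply the label function
            if (!next) return false;                      // function rejects the input
            memory = *next;
            state = it->second.second;
            return true;
        }
    };

In Holcombe and Ipate’s method, test generation works by exercising each labelling function from each reachable state; under their design-for-test conditions the resulting test set is, as noted above, proven to find all faults.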
Summary

The field of OO testing has continued to deepen and widen since it was first established in the late 1980s, but there is a danger that the subject may now be stalling. In general, the last 10 years have seen consolidation of existing approaches rather than the introduction of new ones, but there are signs that this may change over the coming years, both because testers are beginning to ask non-procedural questions, and because previously unused combinations of techniques are now being applied.
Additional References

This section lists only those references not already included in the main bibliography of OO testing articles below.

[BFT00]
Di Blasio P, Fisher K, Talcott C. 2000. A control-flow analysis for a calculus of concurrent objects. IEEE Trans. Software Eng. 26:617-634.
[BH01]
Bogdanov K, Holcombe M. 2001. Statechart testing method for aircraft control systems. Softw. Test. Verif. Reliab. 11:39-54.
[Buc00]
Buchanan, KM. 2000. Behavioural constraints, patterns and conformance: reconciling object-oriented inheritance with temporal restrictions on the ordering of methods. PhD Thesis, Dept of Computer Science, University of Hertfordshire, UK.
[DEVA01] DeVa, Design for Validation, Esprit Long Term Research Project No. 20072. All archives accessible via http://www.newcastle.research.ec.org/deva.
[Eil74]
Eilenberg S. 1974. Automata, Languages and Machines, Vol. A. Academic Press.
[EK00]
Eleftherakis G, Kefalas P. 2000. Model Checking X-Machines: Towards integrated formal development of safety critical systems. Tech Report, Dept of Computer Science, CITY Liberal Studies, 13 Tsimiski Str., 54624 Thessaloniki, Greece.
[FHI+95]
Fairtlough M, Holcombe M, Ipate F, Jordan C, Laycock G., Duan Z. 1995. Using an X-machine to model a Video Cassette Recorder. Current issues in Electronic Modelling, 3:141-161.
! "#
$ % & ' ( & $ ' ' $ )* +
[GBB+97] Guelfi N, Biberstein O, Buchs D, Canver E, Gaudel M-C, von Henke F, Schwier D. 1997. “Comparison of Object-Oriented Formal Methods.” DeVa Tech Report No. 27. Available at http://www.newcastle.research.ec.org/deva/trs/index.html.
[HH01a]
Hierons RM, Harman M. 2001. Testing conformance to a quasi-nondeterministic stream X-machine. Formal Aspects of Computing (Special Issue on X-machines).
[HH01b]
Hierons RM, Harman M. 2001. Testing conformance of a deterministic implementation against non-deterministic stream X-machine. Working paper, Dept of Computer Science, Brunel University, UK.
[HI98]
Holcombe M, Ipate F. 1998. Correct Systems: Building a Business Process Solution. Springer.
[Hol89]
Holcombe M. 1989. X-machines as a basis for dynamic system specification. Soft. Eng. J. 3(2):69-76.
[HSR89]
Holcombe M, Stannett M, Rathore S. 1989. Very high level functional testing of VLSI – preliminary results. In: Miller DM (ed). 1989. Fourth Technical Workshop on New Directions for IC Testing. Vancouver University.
[IH01]
Ipate F, Holcombe M. 2001. Generating test sequences from non-deterministic X-machines. Formal Aspects of Computing (Special Issue on X-machines).
[IH97]
Ipate F, Holcombe M. 1997. An integration testing method that is proven to find all faults. Int. J. Computer Math. 68:159-178.
[IH98]
Ipate F, Holcombe M. 1998. A method for refining and testing generalised machine specifications. Int. J. Computer Math. 69:197-219.
[Ipa95]
Ipate F. 1995. Theory of X-machines with Applications in Specification and Testing. PhD Thesis, Dept of Computer Science, Sheffield University.
[Jac92]
Jacobson I. 1992. Object-oriented software engineering – A use case driven approach. Addison-Wesley.
[KEK00]
Kehris E, Eleftherakis G, Kefalas P. 2000. Using X-Machines to Model and Test Discrete Event Simulation Programs. In: Mastorakis N. (ed). 2000. Systems and Control: Theory and Applications. World Scientific and Engineering Society Press; 163-168.
[Lay93]
Laycock G. 1993. The Theory and Practice of Specification-Based Software Testing. PhD Thesis, Dept of Computer Science, Sheffield University.
[MOTIVE] EPSRC GR/M56777 MOTIVE: Method for Object Testing, Integration and Verification. Archives accessible via http://www.dcs.shef.ac.uk/~ajhs/motive/.
[MPW91] Milner R, Parrow J, Walker D. 1991. Modal Logics for Mobile Processes. Tech Report ECS-LFCS-91-136, University of Edinburgh.
[Mye79]
Myers GJ. 1979. The Art of Software Testing. Wiley.
[OLR+96] Offutt J, Lee A, Rothermel G, Untch RH, Zapf C. 1996. An experimental determination of sufficient mutant operators. ACM Trans Softw. Eng. Method. 3(3):99-118.
[Sta90]
Stannett M. 1990. X-machines and the Halting Problem: building a super-Turing machine. Formal Aspects of Computing 2:331-341.
[Sta91]
Stannett M. 1991. An introduction to post-Newtonian and non-Turing computation. Tech Report CS-91-02, Dept of Computer Science, Sheffield University.
! "#
$ % & ' ( & $ ' ' $ )* %
[Sta01]
Stannett M. 2001. Computation over arbitrary models of time. Tech Report CS-0108, Dept of Computer Science, Sheffield University.
[Sta02]
Stannett M. 2002. Computation and Hypercomputation. (to appear).
[Van01]
Vanak SK. 2001. Complete Functional Testing of Hardware Designs. Preliminary PhD Report, Dept of Computer Science, Sheffield University.
[Vic89]
Vickers S. 1989. Topology via Logic (Cambridge Tracts in Theoretical Computer Science). Cambridge University Press.
[Weg90]
Wegner P. 1990. Concepts and paradigms of object-oriented programming. OOPS Messenger, 1(1).
The Bibliography

These references form a resource for researchers interested in OO testing. In compiling this bibliography I have relied heavily on libraries, the internet, loans from colleagues, references cited in papers, and existing databases, but many resources may still have been missed. I have tried to include only those papers which address object-oriented testing as their primary topic, and this means that many papers from related disciplines may have been excluded. In particular, I have typically excluded general introductions to software testing that do not specifically address OO concerns. Readers are therefore advised to use the bibliography in conjunction with their own library catalogues. I have included technical reports and online resources where appropriate, but readers are reminded that these have not necessarily been subject to peer review. Where relevant I have added notes to entries explaining how they are related to one another (e.g. if the same material is available in several forms). Please send relevant details (citations, copies or URLs) of any articles that should be added to the author directly.

The entries suggest that OO testing first emerged as a separate issue in the late 1980s [HM87, HY88, Chu89, Fie89, LL89, PK90]. McGregor and Sykes almost certainly contributed the first book on OO testing [MS91], and have recently followed this up with another [MS01]. Bashir and Goel have also produced a useful practical guide [BG99]. The key journals for OO testing have been JOOP (24 entries), Object Magazine (15 entries), and Software Testing, Verification and Reliability (9 entries). The main annual event is the Software Quality Week in San Francisco (18 entries). With this notable exception, however, major conferences on both sides of the Atlantic seem to be quite hesitant about OO testing. Of the roughly 250 papers reported at OOPSLA between 1995 and 2000 only two have directly concerned testing [Yat95, VHK97]. (A 1995 panel discussion [McG95] on “OO Testing in the Real World: Lessons for All” was described in the Proceedings as taking place, but no actual account of the discussion was included in any Proceedings or Addendum.) Likewise, of the nearly 240 papers reported in the Proceedings of Europe’s ECOOP throughout the eleven years 1990 to 2000, only one directly concerns testing [FL00]. Research in the area seems to have reached a peak (59 entries) around 1994, and since then publication rates seem to have been in decline. The counts for 1997 and 1998 would have been significantly lower had McGregor not contributed his long series of instructive articles in JOOP [McG97a-g, McG98a-d]. This series confirms McGregor’s status as easily the most prolific contributor of OO testing publications throughout the period (39 entries), followed by Binder, Harrold, Hsia, Kung and McDonald.

[Ale99]
R. T. Alexander, “Testing the polymorphic relationships of object-oriented components.” Tech Report ISE-TR-99-02, Dept of Information and Software
! "#
$ % & ' ( & $ ' ' $ )* +
Engineering, George Mason University, Feb 1999. Available at http://www.ise.gmu.edu/techrep. [Amb94]
S. Ambler, “Use-Case Scenario Testing.” Softw. Development, 3(6), pp. 53-61, July 1994.
[Amb96]
S. Ambler, “Testing Objects.” Softw. Development, 4(8), pp. 55-62, Aug 1996.
[AO93]
P. Ammann & A. J. Offutt, “Functional and test specifications for the MiStix file system.” Tech Report ISSE-TR-93-100, Dept of Information and Software Systems Engineering, George Mason University, 1993.
[AO99]
R. T. Alexander & A. J. Offutt, “Analysis techniques for testing polymorphic relationships.” Proc. Thirteenth Int. Conf. on Technology of OO Languages and Systems (TOOLS USA’99), pp. 104-114, Santa Barbara, CA, August 1999.
[AO00]
R. T. Alexander & A. J. Offutt, “Criteria for testing polymorphic relationships.” Dept of Information and Software Engineering, George Mason University, Fairfax, VA 22030-4444, USA. 2000. Available at http://www.ise.gmu.edu/techrep.
[APH97]
R. Alexander, P. Payne & C. Hutchinson, “Design for Testability for Object-Oriented Software.” Object Magazine, July 1997.
[ARE96]
T. Ashok, K. Rangaraajan & P. Eswar, “Retesting C++ Classes.” Ninth Int. Softw. Quality Week in San Francisco. Software Research Inc., May 1996.
[AW97]
K. Abdullah & L. White, “A firewall approach for the regression testing of object-oriented software.” Tech Report, Case Western Reserve University, Cleveland, Ohio, 1997.
[Bar95]
S. Barbey, "Testing Ada 95 object-oriented programs." Proc. Ada-Europe 95, Frankfurt, Germany, October 2-6 1995.
[Bar97]
S. Barbey. Test selection for specification-based testing of object-oriented software based on formal specifications. PhD Thesis 1753, École Polytechnique Fédérale de Lausanne, Switzerland, 1997.
[BAS94]
S. Barbey, M. Ammann & A. Strohmeier, “Open Issues in Testing Object-Oriented Software.” Proc. Europ. Conf. on Softw. Quality (ECSQ’94), pp. 257-267, Basel, Switzerland, 1994.
[Bas99]
I. Bashir, “Object-Oriented Software Testing: Life-Cycle Perspective.” Sixteenth Int. Conf. and Expo. on Testing Computer Softw., Bethesda, USA. Frontier Technologies Inc. and USPDI, June 1999.
[BBG+98] S. Barbey, D. Buchs, M.-C. Gaudel, B. Marre, C. Péraire, P. Thévenod-Fosse & H. Waeselynck, “From requirements to tests via object-oriented design.” Esprit Long Term Research Project No. 20072. DeVa Tech Report, Lausanne, Switzerland, 1999.
[BBP96]
S. Barbey, D. Buchs & C. Peraire, “A theory of specification-based testing for object-oriented software.” Proc. Second Europ. Dependable Computing Conf. (EDCC2), Taormina, Italy, LNCS 1150, pp. 303-320. Springer, 1996. Also available as Tech Report EPFL-DI 96/163, Dept of Computer Science, École Polytechnique Fédérale de Lausanne, 1996, and included in the DeVa first year report, Dec 1996.
[BBP98]
S. Barbey, D. Buchs & C. Peraire, "A case study for testing object-oriented software: A production cell." Available at http://lglwww.epfl.ch/research/deva/production_cell/home_page.html
! "#
$ % & ' ( & $ ' ' $ )* ,+ '
[BBPS98] S. Barbey, D. Buchs, C. Peraire & A. Strohmeier, "Incremental test selection for specification-based unit testing of OO software based on formal specification." Unpublished.
[Bec94]
K. Beck, “Simple Smalltalk Testing.” The Smalltalk Report, 4(2), pp. 16-18, Oct 1994.
[Ber94]
E. Berard, “Issues in the testing of object-oriented software.” Electro’94 Int., pp. 211-219. Los Alamitos: IEEE Computer Society Press, 1994.
[BG93]
I. Bashir & A. L. Goel, “Object-Oriented Metrics and Testing.” Proc. Fifteenth Minnowbrook Workshop on Softw. Eng., Syracuse, New York, pp. 1-9. Syracuse University, July 1993.
[BG94]
I. Bashir & A. L. Goel, “Testing C++ Classes.” Int. Conf. Softw. Testing, Reliab. and Quality Assurance. IEEE, Dec 1994.
[BG97]
I. Bashir & A. L. Goel, “Metrics Guided Testing of Object-Oriented Software.” Tenth Int. Softw. Quality Week in San Francisco. Software Research Inc., May 1997
[BG99]
I. Bashir & A. L. Goel, Testing Object-Oriented Software: Life Cycle Solutions. New York: Springer. 1999.
[BG00]
S. Beydeda & V. Gruhn, "Integrating white- and black-box techniques for class-level testing of object-oriented prototypes." Computer Science Dept, University of Dortmund, 2000.
[BGS01]
S. Beydeda, V. Gruhn & M. Stachorski, "A graphical class representation for integrated black- and white-box testing." Computer Science Dept, University of Dortmund, 2001.
[BHR+00] T. Ball, D. Hoffman, F. Ruskey, R. Webber & L. White, “State generation and automated class testing.” Softw. Test. Verif. Reliab., 10, pp. 149-170, 2000.
[Bin94a]
R. Binder, “Design for testability in object-oriented systems.” Comms ACM, 37(9), pp. 87-101, Sept 1994.
[Bin94b]
R. Binder, “Testing Object-Oriented Software.” American Programmer, 7(4), pp. 22-28, April 1994. See [Bin95f], [Bin96e].
[Bin95a]
R. Binder, “State-based Testing.” Object Magazine, 5(4), pp. 75-78, July-Aug, 1995.
[Bin95b]
R. Binder, “State-based testing: Sneak paths and Conditional Transitions.” Object Magazine, 5(6), pp. 87-89, Nov-Dec 1995.
[Bin95c]
R. Binder, “Testing Objects: Myth and Reality.” Object Magazine, 5(2), pp. 73-75, May 1995.
[Bin95d]
R. Binder, “The FREE-flow Graph: Implementation-Based Testing of Objects using State-Determined Flows.” Eighth Int. Softw. Quality Week in San Francisco. Software Research Inc., May 1995.
[Bin95e]
R. Binder, “Trends in Testing Object-Oriented Software.” Object Magazine, 5(10), pp. 68-69, Oct 1995.
[Bin95f]
R. Binder, “Testing object-oriented systems: a status report.” Released online by RBSC Corporation, 1995. Available at http://stsc.hill.af.mil/crosstalk/1995/April/testinoo.asp. Originally published in the April 1994 issue of American Programmer. The online version was amended slightly by its author before being released. See [Bin94b], [Bin96e].
! "#
$ % & ' ( & $ ' ' $ )* ,+ +
[Bin96a]
R. Binder, “Modal Testing Strategies for Object-Oriented Software.” Computer, 29(11), pp. 97-99, Nov 1996.
[Bin96b]
R. Binder, “Off-the-shelf Test Automation for Objects.” Object Magazine, 5(2), pp. 26-30, April 1996.
[Bin96c]
R. Binder, “Testing for Reuse: Libraries and Frameworks.” Object Magazine, 5(6), Aug 1996.
[Bin96d]
R. Binder, “Summertime, and the testin’ is easy…” Object Magazine, 5(8), Oct 1996.
[Bin96e]
R. Binder, “Testing Object-Oriented Software: A Survey.” Softw. Test. Verif. Reliab., 6(3/4), pp. 125-252, Sept/Dec 1996. See [Bin95f].
[Bin96f]
R. Binder, “The FREE Approach to Object-Oriented Testing: An overview.” Available at http://www.rbsc.com/pages/FREE.html.
[Bin97a]
R. Binder, “Class Modality and Testing.” Object Magazine, 6(2), Feb 1997.
[Bin97b]
R. Binder, “Developing a Test Budget.” Object Magazine, 6(6), June 1997.
[Bin97c]
R. Binder, “Automated Java Testing.” Object Magazine, 6(7), July 1997.
[Bir92]
B. Birss, “Testing object-oriented software.” Sun Programmer, The Newsletter for Professional Software Engineers, 1(3), pp. 15-16, Fall/Winter 1992.
[Boi97]
J. Boisvert, “OO Testing in the Ericsson Pilot Project.” Object Magazine, 6(7), July 1997.
[BOP00]
U. Buy, A. Orso & M. Pezze, "Automated testing of classes." Available at http://www.cc.gatech.edu/~harrold/8803/Readings/orso00.pdf.
[Bos96]
O. Bosman, “Testing and Iterative Development: Adapting the Booch Method.” Thirteenth Int. Conf. and Expo. on Testing Computer Softw., Silver Spring, USA, Jun 1996, pp. 1-10. USPDI.
[Bri95]
J. W. Britt, “A Study of the Effectiveness of Selected Testing Techniques in Object-Oriented Programs.” Master’s Thesis, Mississippi State University, 1995.
[BS94]
S. Barbey & A. Strohmeier, “The problematics of testing object-oriented software.” SQM’94 Second Conf. on Softw. Quality Management, 2, pp. 411-426, Edinburgh, Scotland, 1994.
[BS95]
O. Bosman & H.W. Schmidt, "Object test coverage using finite state machines." Tech Report TR-CS-95-06, University of Canberra, Australia, 1995.
[Car95]
T. A. Cargill, “Short Tour Testing.” C++ Report, 7(2), Feb 1995.
[CDH+00] J. Corbett, M. B. Dwyer, J. Hatcliff, S. Laubach, C. S. Pasareanu, Z. Robby, H. Zheng, “Bandera: extracting finite-state models from Java source code.” Proc. of ICSE 2000. ACM Press: Limerick, Ireland; 439–448.
[CHK+95] M. C. Campbell, D. K. Hinds, A. V. Kapetanakis, D. S. Levin, S. J. McFarland, D. J. Miller & J. S. Southworth, “Object-Oriented Perspective on Software Testing in a Distributed Environment.” Hewlett-Packard J., 46(6), Dec 1995.
[Chu89]
C.-M. Chung, “Object-Oriented Concurrent Programming from Testing View.” National Computer Symposium, 1989, pp. 555-565.
[Chu95]
I. S. Chung, “Methods of comparing test criteria for object-oriented programs based on subsumption, power relation and test reusability.” J. Korea Information Science Soc., 22(5), pp. 693-704, May 1995.
! "#
$ % & ' ( & $ ' ' $ )* ,+ $
[CK97]
M.-H. Chen & H. M. Kao, “Investigating the Relationship between Class Testing and the Reliability of Object-Oriented Programs.” Eighth Int. Symp. on Softw. Reliab. Eng. (ISSRE’97), Albuquerque, USA, Nov 1997. Los Alamitos: IEEE Computer Society Press, 1997.
[CK99]
M.-H Chen & M.-H. Kao, “Testing object-oriented programs – an integrated approach.” Tenth Int. Symp. on Softw. Reliab. Eng. (ISSRE’99), pp. 73-83, Boca Raton, FL, Nov 1999. Los Alamitos: IEEE Computer Society Press, 1999.
[CKH+96] K.-N. Chang, D. Kung, P. Hsia, Y. Toyoshima & C. Chen, “Object-Oriented Data Flow Testing.” Thirteenth Int. Conf. and Expo. on Testing Computer Softw., pp. 97-100, Silver Spring, MD, June 1996. USPDI.
[CL92]
C.-M. Chung & M.-C. Lee, “Object-oriented programming testing methodology.” Fourth Int. Conf. on Softw. Eng. and Knowledge Eng., pp. 378-385. IEEE Computer Society Press, June 1992. See [CL94].
[CL94]
C.-M. Chung and M.-C. Lee, “Object-oriented programming testing methodology.” Int. J. of Mini and Microcomputers, 16(2):73-81, 1994. See [CL92].
[CL95]
J. Chen & S. Liu, "An approach to testing object-oriented formal specifications." Dept of Software Development, Monash University, Australia.
[CLW94]
C.-M. Chung, M.-C. Lee & C.-C. Wang, “Inheritance testing for OO programming by transitive closure strategies.” Advances in Modeling and Analysis, B 31(2), pp. 57-64, 1994.
[CM90]
T. J. Cheatham and L. Mellinger, “Testing object-oriented software systems.” In Proc. 1990 Computer Sci. Conf., pp. 161-165, 1990.
[CMLK96] I. S. Chung, M. Munro, W. K. Lee & Y. R. Kwon, “Applying Conventional Testing Techniques for Class Testing.” COMPSAC’96, pp. 447-454. IEEE Computer Society, Korea Information Science Society, Aug 21-23 1996.
[CMM+00] D. Carrington, I. MacColl, J. McDonald, L. Murray & P. Strooper, "From Object-Z specifications to ClassBench test suites." Softw. Test. Verif. Reliab., 10(2), pp. 111-137, 2000.
[Cor98]
J. C. Corbett, “Using shape analysis to reduce finite-state models of concurrent Java programs.” Dept of Information and Computer Science, University of Hawaii, Honolulu, HI 96822, USA, 1998.
[CR99]
R. Chatterjee & B. G. Ryder, “Data-flow-based testing of object-oriented libraries.” Tech Report 382, Rutgers University, 1999.
[CRM98] A. Cibele, A. Rosa & E. Martins, "Using a reflective architecture to validate object-oriented applications by fault injection." In Proc. Workshop on Reflective Programming in C++ and Java (Vancouver, Canada), October 1998.
[CS93]
B. P. Chao & D. M. Smith, “Applying Software Testing Practices to an Object-Oriented Software Development.” OOPSLA’93 Addendum, pp. 49-52, 1993. See [CS94].
[CS94]
B. P. Chao & D. M. Smith, “Applying Software Testing Practices to an Object-Oriented Software Development.” OOPS Messenger, 5(2), pp. 49-52, April 1994. See [CS93].
[CTC01]
H. Y. Chen, T. H. Tse & T. Y. Chen, “TACCLE: A Methodology for Object-Oriented Software Testing at the Class and Cluster Levels.” ACM Trans. Soft. Eng. Methodol., 10(4), pp. 56-109, Jan 2001.
! "#
$ % & ' ( & $ ' ' $ )* ,+ (
[CTCC98] H. Y. Chen, T. H. Tse, F. T. Chan & T. Y. Chen, “In black and white: An integrated approach to class-level testing of object-oriented programs.” ACM Trans. Soft. Eng. Methodol., 7(3), pp. 250-295, 1998.
[DF91]
R. K. Doong & P. G. Frankl, “Case studies in testing object-oriented programs.” Proc. Fourth Symp. on Testing, Analysis and Verification (TAV4), Victoria, BC, pp. 165-177, Oct 1991. New York: ACM Press.
[DF94]
R. K. Doong & P. G. Frankl, “The ASTOOT Approach to Testing Object-Oriented Programs.” ACM Trans. Softw. Eng. and Methodology, 3(2), pp. 101-130, 1994.
[DG94]
D. S. Dunaway & E. Gillan, “Applying object-oriented design principles to developing a test strategy.” Eleventh Int. Conf. and Expo. on Testing Computer Softw., pp. 341-368. ASQC/STSC, June 13-16, 1994.
[DL94]
R. J. D'Souza and R. J. LeBlanc Jr., “Class testing by examining pointers.” JOOP, 7(4), pp. 33-39, July-Aug 1994.
[DLKH98] F. Dietrich, X. Logean, S. Koppenhöfer & J.-P. Hubaux, "Modelling and testing object-oriented distributed systems with linear time temporal logic." Tech Report SSC/1998/011, Swiss Federal Institute of Technology, Lausanne, 1998.
[DMR96] I. Duncan, M. Munro & D. Robson, "Test case development during OO life-cycle and evolution." JOOP 11(9), pp. 36-40 & 44, 1999.
[Dor93]
M. Dorman, “Unit Testing of C++ Objects.” EuroSTAR’93, pp. 71-101. SQE Inc., Oct 1993.
[Dor97]
M. Dorman, “C++: ‘it’s testing, Jim, but not as we know it’.” EuroSTAR’97, Edinburgh, Scotland, 1997.
[DRW94] P. K. Devanbu, D. S. Rosenblum & A. L. Wolf, “Case studies on testing object-oriented programs.” Proc. Fourth Symp. Testing, Analysis and Verif. (TAV4), pp. 165-177, 1991.
[Dus99]
E. Dustin, “Moving from Conventional to Object-Oriented Testing.” Int. Conf. on Softw. Testing, Analysis and Review (STAR’99 East), Orlando, FL, May 1999. Software Quality Engineering.
[Edw00]
S.H. Edwards, "Black-box testing using flowgraphs: an experimental assessment of effectiveness and automation potential." Softw. Test. Verif. Reliab., 10(4), pp. 249-262, 2000.
[Edw01]
S. H. Edwards, “A framework for practical, automated black-box testing of component-based software.” Softw. Test. Verif. Reliab., 11, pp. 97-111, 2001.
[FD90]
P. G. Frankl & R. Doong, “Tools for Testing Object-Oriented Programs.” Proc. Pacific Northwest Conf. on Softw. Quality, pp. 309-324, 1990.
[FD91]
P. G. Frankl & R. K. Doong, “Testing Object-Oriented Programs with ASTOOT.” (First Int.) Softw. Quality Week in San Francisco, CA, 1991. Software Research Inc.
[FFGU96] M. Factor, E. Farchi, A. Gluska & S. Ur, "Using Snapshot to implement hierarchy of criteria." IBM Israel Science and Technology, Haifa Research Laboratory, Haifa, Israel, 1996.
[Fie89]
S. P. Fiedler, “Object-oriented unit testing.” Hewlett-Packard J., 40(2), pp. 69-74, April 1989.
[Fir93]
D. G. Firesmith, “Testing object-oriented software.” Eleventh Int. Conf. on Technology of OO Languages and Systems (TOOLS USA’93), pp. 407-426. Englewood Cliffs, NJ: Prentice Hall, 1993.
! "#
$ % & ' ( & $ ' ' $ )* ,+ -
[Fir95]
D. G. Firesmith, “Object-Oriented Regression Testing.” Report on Object Analysis and Design (ROAD), 1(5), pp. 42-45, Jan/Feb 1995.
[Fir96]
D. G. Firesmith, “Pattern Language for Testing Object-Oriented Software.” Object Magazine, 5(9), pp. 32-38, Jan 1996.
[FL00]
P. Fröhlich & J. Link, “Automated Test Case Generation from Dynamic Models.” ECOOP 2000, LNCS 1850, pp. 472-491, 2000. Springer.
[Fla99]
D. Flater, "Manufacturer’s CORBA Interface Testing Toolkit: Overview," J. Research of the NIST, 104(2), 1999. Available at http://www.mel.nist.gov/msidlibrary/summary/9904.html.
[Fla00]
D. Flater, 2000. “Testing for imperfect integration of legacy software components.” National Institute of Standards and Technology, 100 Bureau Drive, Stop 8260, Gaithersburg, MD 20899-8260, USA.
[FLN91]
W. B. Frakes, D. J. Lubinsky & D. N. Neal, “Experimental Evaluation of a Test Coverage Analyzer for C and C++.” J. Systems and Softw., pp. 135-139, 1991.
[FM99]
D. Flater & K. C. Morris, 1999. “Testability of product data management interfaces.” National Institute of Standards and Technology, 100 Bureau Drive, Stop 8260, Gaithersburg, MD 20899-8260, USA.
[Fra91]
W. B. Frakes, “Experimental Evaluation of a Test Coverage Analyzer for C and C++.” J. Systems and Softw., 16, 1991
[Fre91]
R. S. Freedman, “Testability of Software Components.” IEEE Trans. Softw. Eng., 17(6), pp. 553-564, June 1991.
[FS96]
R. Fletcher & A.S.M. Sajeev, "A framework for testing object-oriented software using formal specifications." In Ada-Europe, pp. 159-170, 1996.
[GC95]
T. J. Gattis & G. F. Cheatham. Testing object-oriented software. New York: ACM, 1995.
[GDT93]
J. A. Graham, A. C. T. Drakeford & C.D. Turner, “The Verification, Validation and Testing of Object-Oriented Systems.” BT Technology J., 11(3), pp. 79-88, 1993.
[Gil94]
T. Giltinan, “Leveraging Inheritance to Test Objects.” STAR’94, pp. 361-372, Jacksonville, FL, May 1994. Software Quality Engineering.
[GJ98]
M.-C. Gaudel & P. R. James, "Testing Algebraic Data Types and Processes: A Unifying Theory." Formal Aspects of Computing, 10, pp. 436-451, 1998.
[GKH+95] J. Z. Gao, D. Kung, P. Hsia, Y. Toyoshima & C. Chen, “Object State Testing for Object-Oriented Programs.” COMPSAC’95, pp. 232-238, Dallas, TX, Aug 9-11 1995. IEEE Computer Society.
[GKS92]
M. Ghiassi, M. Ketabchi & K. Sadeghi, “Integrated Software Testing System based on an Object-Oriented DBMS.” Proc. Twenty-fifth Annual Hawaii Int. Conf. on System Sciences (HICSS-25). IEEE Press, 1992.
[Hal91]
P. A. V. Hall, “Relationship between specifications and testing.” Information and Softw. Technology, 33(1), pp. 47-52, 1991.
[Har93]
M. J. Harrold, “Program-Based Testing of Object-Oriented Programs.” OOPSLA’93 Workshop on Testing OO Softw., Oct 1993.
[Haw98]
B. Haworth, "Adequacy criteria for object testing." Proc. 2nd Int. Softw. Quality Week Europe 1998, Brussels, Belgium, Nov 1998.
! "#
$ % & ' ( & $ ' ' $ )* ,+ -
[Hay94]
J. H. Hayes, “Testing of object-oriented programming systems (OOPS): a fault-based approach.” In E. Bertino and S. Urban (eds), Object-Oriented Methodologies and Systems (LNCS 858), Springer-Verlag, 1994.
[HF94]
D. Hoffman & X. Fang, “Testing the C Set++ Collection Class Library.” CASCON 94, Toronto, Canada, 1994. IBM Center for Advanced Studies.
[HJJ93]
B. Hinke, V. Jones & R. E. Johnson, “Debugging Objects.” The Smalltalk Report, 2(9), July/Aug 1993.
[HJL+01] M.J. Harrold, J. Jones, T. Li, D. Liang, A. Orso, M. Pennings, S. Sinha, S. Spoon & A. Gujarathi, "Regression test selection for Java software." Proc. ACM Conf. on Object-Oriented Programming, Systems, Languages and Applications (OOPSLA 2001), November 2001.
[HK97]
P. Hsia and D. Kung, “An Object-Oriented Testing and Maintenance Environment.” Proc. Int. Conf. on Softw. Eng. (ICSE’97), pp. 608-609, Boston, 1997. ACM.
[HKC95]
H. S. Hong, Y. R. Kwon & S.D. Cha, “Testing of Object-Oriented Programs Based on Finite State Machines.” Asia-Pacific Softw. Eng. Conf. (ASPEC’95), Brisbane, Australia, Dec 1995.
[HKR+97] B. Haworth, C. Kirsopp, M. Roper, M. Shepperd & S. Webster, "Towards the development of adequacy criteria for object-oriented systems." Proc. 5th European Conf. on Softw. Testing Analysis and Review, pp. 417-427, Edinburgh, Scotland, November 1997.
[HLK+97] P. Hsia, X. Li, D. Kung, C.-T. Hsu, L. Li, Y. Toyoshima & C. Chen, “A technique for the selective revalidation of OO software.” Softw. Maint.: Research and Practice, 9(4), pp. 217-233, 1997.
[HM87]
M. J. Harrold & J. D. McGregor, 1987. “Incremental testing of object-oriented class structures.” Dept of Computer Science, Clemson University, Clemson, SC 29634-1906, USA. Released with a 1989 copyright statement in favour of Free Software Foundation, Inc., 675 Mass Avenue, Cambridge, MA 02139, USA. See also [HMF92].
[HM92]
M. J. Harrold & J. D. McGregor, “Incremental testing of object-oriented class structures.” See [HMF92].
[HMF92]
M. J. Harrold, J. D. McGregor & K. J. Fitzpatrick, “Incremental testing of objectoriented class structures.” Proc. Fourteenth Int. Conf. Softw. Eng. (ICSE’92), pp. 68-80, May 11-15 1992. Citations for this paper often list only the first two authors. See [HM87].
[HP00]
K. Havelund & T. Pressburger, “Model checking Java programs using Java PathFinder.” Int. J. Softw. Tools for Technology Transfer, 2(4), pp. 366-381, 2000.
[HR94a]
M. J. Harrold & G. Rothermel, “Selecting regression tests for object-oriented software.” Los Alamitos: IEEE Computer Society Press, 1994. [Rest of details missing; please email the author if you have them]
[HR94b]
M. J. Harrold & G. Rothermel, “Performing data flow testing on classes.” Second ACM SIGSOFT Symp. on Foundations of Softw. Eng., pp. 154-163. New York: ACM Press, Dec 1994.
[HR95]
M. J. Harrold & G. Rothermel, “Structural testing of object-oriented classes.” Eighth Int. Softw. Quality Week in San Francisco, 1995. Software Research Inc.
! "#
[HR96]
M. J. Harrold & G. Rothermel, “A coherent family of analysable graph representations for object-oriented software.” Tech Report OSU-CISRC-11/96-TR60, Ohio State University, Nov 1996.
[HRHH97] T. Hammer, L. Rosenberg, L. Huffman & L. Hyatt, “Measuring requirements testing: experience report.” Proc. Int. Conf. Softw. Eng., pp. 372-379, Boston, May 1997. ACM.
[HS01]
G. J. Holzmann & M. H. Smith, “Software model checking: extracting verification models from source code.” Softw. Test. Verif. Reliab., 11, pp. 65-79, 2001.
[HS89]
M. J. Harrold & M. L. Soffa, “Interprocedural data flow testing.” Proc. Third Testing, Analysis and Verif. Symp. (TAV3 – SIGSOFT’89), pp. 158-167, Key West, FL, Dec 1989.
[HS91]
M. J. Harrold & M. L. Soffa, “Selecting and using data for integration testing.” IEEE Softw., 8(2), pp. 58-65, 1991.
[HS93a]
D. Hoffman & P. Strooper, “A case study in class testing.” CASCON 93, pp. 472-482, Toronto, Oct 1993. IBM Toronto Lab.
[HS93b]
D. Hoffman & P. Strooper, “Graph-Based Class Testing.” Seventh Australian Softw. Eng. Conf., pp. 85-91. IREE, Sept/Oct 1993.
[HS95a]
D. Hoffman & P. Strooper, “ClassBench: A framework for class testing.” Eighth Int. Softw. Quality Week in San Francisco, CA., May 1995. Software Research Inc. See [HS97].
[HS95b]
D. Hoffman & P. Strooper, “The test-graphs methodology: Automated testing of classes.” JOOP, 8(7), 1995.
[HS96]
M. Hughes & D. Stotts, “Daistish: systematic algebraic testing for OO programs in the presence of side-effects.” Proc. of the ACM SIGSOFT Int. Symp. on Softw. Testing and Analysis, pp. 53-61, 1996.
[HS97]
D. M. Hoffman & P. A. Strooper, “ClassBench: A framework for automated class testing.” Softw. Practice and Experience, 27(5), pp. 573-597, 1997.
[HSS94]
D. Hoffman, J. Smillie & P. Strooper, “Automated Class Testing: Methods and Experience.” First Asia-Pacific Softw. Eng. Conf., pp. 163-171, Washington, DC, 1994. IEEE Computer Society.
[HTP89]
K. G. Heisler, W. T. Tsai & P. A. Powell, “An object-oriented maintenance-oriented model for software.” IEEE Spring CompCon, pp. 248-253, Piscataway, NJ, Feb 1989.
[Hun95a]
N. Hunt, “C++ Boundary Conditions and Edge Cases.” JOOP, 8, pp. 25-29, May 1995.
[Hun95b]
N. Hunt, “Automatically Tracking Test Case Execution.” JOOP, 8(7), pp. 22-27, Nov/Dec 1995.
[Hun96a]
N. Hunt, “Performance Testing C++ Code.” JOOP, 9, pp. 22-25, Jan 1996.
[Hun96b]
N. Hunt, “Unit Testing.” JOOP, 9, pp. 18-23, Feb 1996.
[HW94]
J. Hurst & R. Willhoft, “The Use of Coherence Checking for Testing Object-Oriented Code.” IBM Int. Conf. on Object Technology. IBM, June 1994.
[HY88]
Y. Honda & A. Yonezawa, “Debugging Concurrent Systems Based on Object Groups.” In G. Gjessing & K. Nygaard (eds), ECOOP '88, pp. 267-282, New York, 1988. (LNCS) Springer-Verlag.
! "#
$ % & ' ( & $ ' ' $ )* ,+ -
[IO95]
A. Irvine & A. J. Offutt, “The effectiveness of category-partition testing of object-oriented software.” Tech Report ISSE-TR-95-102, Dept of Information and Software Systems Engineering, George Mason University, 1995.
[Jal92]
P. Jalote, “Specification and Testing of Abstract Data Types.” Computer Lang., 17(1), pp. 75-82, 1992.
[JE94]
P. C. Jorgensen & C. Erickson, “Object-oriented integration testing.” Comms ACM, 37(9), pp. 30-38, Sept 1994.
[JKNZ94] P. Juttner, S. Kolb, U. Naumann & P. Zimmerer, “Experiences in Testing Object-Oriented Software.” Eleventh Int. Conf. Testing Computer Softw., Washington, DC, June 1994. USPDI.
[JKSZ94] P. Juttner, S. Kolb, S. Sieber & P. Zimmerer, “Testing major object-oriented software systems.” Siemens Review (special issue), pp. 25-29, Spring 1994.
[JKZ94]
P. Juttner, S. Kolb & P. Zimmerer, “Integration and Testing of Object-Oriented Software.” EuroSTAR 94, pp. 71-101, Jacksonville, FL, Oct 1994. SQE Inc.
[JO95]
Z. Jin & A. J. Offutt, “Integration testing based on software couplings.” Proc. Tenth Annual Conf. on Computer Assurance (COMPASS 94), pp. 13-23, Gaithersburg, MD, June 1995. Los Alamitos: IEEE Computer Society Press, 1995.
[JO96]
Z. Jin & A. J. Offutt, “Coupling-based integration testing.” Proc. ICECCS’96, pp. 10-17, Montreal, 1996. This paper won the Outstanding Paper Award. Available at http://www.ise.gmu.edu/faculty/ofut/rsrch/abstracts/complex.html.
[JO98]
Z. Jin & A. J. Offutt, 1998. “Coupling-based criteria for integration testing.” Softw. Test. Verif. Reliab., 8(3), pp. 133-154, 1998. Available at http://www.ise.gmu.edu/faculty/ofut/rsrch/abstracts/couptest.html.
[Jon95]
A.M. Jonassen, “Managing Unit and Integration Testing of a Large Object-Oriented System.” Eighth Int. Softw. Quality Week in San Francisco, May 1995. Software Research Inc.
[JZNK94] P. Juttner, P. Zimmerer, U. Naumann & S. Kolb, “A complete test process in object-oriented software development.” Seventh Int. Softw. Quality Week in San Francisco, pp. 3-A-2. Software Research Inc., May 17-20 1994.
[Kan00]
M. Kangasluoma, "Test case generation from UML state charts." PhD Thesis, Finland, available online at http://www.nixu.fi.
[KCA94]
S. Kanjulal, S. T. Chakradhar & V. D. Agrawal, “A test function architecture for interconnected finite state machines.” Proc Seventh Int. Conf. on VLSI Design, pp. 113-116. Los Alamitos: IEEE Computer Society Press/VLSI Society of India, 1994.
[KCM99a] S.-W. Kim, J. Clark & J. McDermid, "Assessing test set adequacy for object-oriented programs using class mutation." In Proc. Symp. Software Tech. (SoST'99), pp. 72-83, September 1999.
[KCM99b] S.-W. Kim, J. Clark & J. McDermid, "The rigorous generation of Java mutation operators using HAZOP." In Proc. 12th Int. Conf. on Software and Systems Eng. and their Applications (ICSSEA'99), December 1999.
[KCM00a] S.-W. Kim, J.A. Clark & J.A. McDermid, "Investigating the effectiveness of object-oriented testing strategies with the mutation method." Dept of Computer Science, University of York.
[KCM00b] S.-W. Kim, J. Clark & J. McDermid, "Class mutation: mutation testing for object-oriented programs." Proc. FMES 2000, October 2000.
! "#
$ % & ' ( & $ ' ' $ )* ,+ %
[KCM00c] S.-W. Kim, J. Clark & J. McDermid, "Investigating the applicability of traditional test adequacy criteria for object-oriented programs." In Proc. Object Days 2000, October 2000.
[KGH+93] D. Kung, J. Gao, P. Hsia, J. Lin & Y. Toyoshima, “Design Recovery for Software Testing of Object-Oriented Programs.” Proc. Working Conf. on Reverse Engineering, pp. 202-211, Baltimore, 1993.
[KGH+94] D. C. H. Kung, J. Gao, P. Hsia, F. Wen, Y. Toyoshima & C. Chen, “Change impact identification in object-oriented software maintenance.” Int. Conf. on Soft. Maint., Piscataway, NJ, pp. 202-211. IEEE Computer Society Press, 1994.
[KGH+95a] D. C. H. Kung, J. Gao, P. Hsia, Y. Toyoshima, C. Chen, Y.-S. Kim & Y.-K. Song, “Developing an object-oriented software testing and maintenance environment.” Comms ACM, 38(10), pp. 75-87, Oct 1995.
[KGH+95b] D. C. H. Kung, J. Z. Gao, P. Hsia, Y. Toyoshima & C. Chen, “A test strategy for object-oriented programs.” COMPSAC’95, Dallas, Texas, pp. 239-244. Los Alamitos: IEEE Computer Society Press, Aug 9-11 1995.
[KGH+95c] D.C. Kung, J. Gao, P. Hsia, J. Lin & Y. Toyoshima, “Class firewall, test order & regression testing of object-oriented programs.” JOOP, pp. 51-66, May 1995.
[KGH+96] D. C. H. Kung, J. Gao, P. Hsia, F. Wen, Y. Toyoshima & C. Chen, “On Regression Testing of Object-Oriented Programs.” J. Systems and Softw., 32(1), pp. 21-40, Jan 1996.
[KHC+99] Y.G. Kim, H.S. Hong, S.M. Cho, D.H. Bae & S.D. Cha, "Test cases generation from UML state diagrams." In IEE Proceedings: Software, 146(4), pp. 187-192, August 1999.
[KL94]
L. M. Keszenheimer and K. J. Lieberherr, "Incremental testing of adaptive software." Tech Report NU-CCS-94-22, Northeastern University, November 1994.
[KL95]
L. M. Keszenheimer and K. J. Lieberherr, “Testing Adaptive Software During Class Evolution.” Tech Report NU-CSS-95-xx, Northeastern University, Boston, MA., Jan 1995.
[KLV+96] D. C. H. Kung, Y. Lu, N. Venugopalan, P. Hsia, Y. Toyoshima, C. Chen & J. Gao, “Object State Testing and Fault Analysis for Reliable Software Systems.” Proc. Seventh Int. Symp. on Softw. Reliab. Engineering, New York, 1996.
[Kor98]
T. D. Korson, “Testing across the lifecycle.” Slides presented at the Ninth Annual Borland Conference, Denver, August 11, 1998. Available at http://www.softwarearchitects.com.
[KPRW97] G. Kösters, B.-U. Pagel, T. de Ridder, M. Winter, “Animated requirements walkthroughs based on business scenarios.” EuroSTAR’97, pp. 24-28, Edinburgh, Scotland, 1997.
[KR97]
M. Kölling & J. Rosenberg, "Support for object-oriented testing." School of Computer Science and Software Engineering, Monash University, Australia, 1997.
[KSG+94] D. C. H. Kung, N. Suchak, J. Gao, P. Hsia, Y. Toyoshima & C. Chen, “On Object State Testing.” Eighteenth Annual Int. Computer Softw. & Applications Conf., pp. 222-227. IEEE Computer Society Press, 1994. [KT94]
S. Kirani & W. T. Tsai, “Method Sequence Specification and Verification of Classes.” JOOP, 7(6), pp. 28 -38, Oct 1994.
! "#
$ % & ' ( & $ ' ' $ )* ,+ -
[Lab97] Y. Labiche, "On testing object-oriented programs." LIS Aerospatiale, LAAS-CNRS, France.
[LFC94] J. Lee, M. Feng & C. Chung, “A Structural Testing Method for C++ Programs.” Eighteenth Annual Int. Computer Softw. & Applications Conf., IEEE Computer Society Press, 1994.
[LH95] W. Li & S. Henry, “Maintenance support for object-oriented programs.” J. Softw. Maint. Research and Practice, 7(2), pp. 131-147, March-April 1995.
[Liu96] W. Liu, "Selecting test cases for object-oriented programs." Available online at http://www.vlsi.uwaterloo.ca/~wbliu/proposal.html.
[LL89] C.-C. Lin & R. J. LeBlanc, “Event based debugging of object/action programs.” SIGPLAN Notices, 24(1), pp. 23-34, Jan 1989.
[LMR92] M. Lejter, S. Meyers & S. P. Reiss, “Support for Maintaining Object-Oriented Programs.” IEEE Trans Softw. Eng., 18(12), pp. 1045-1052, 1992.
[LOA00] M. Lee, A. J. Offutt & R. T. Alexander, “Algorithmic analysis of the impacts of changes to object-oriented software.” Proc. 34th Int. Conf. on Technology of Object-Oriented Languages and Systems (TOOLS USA ’00), Santa Barbara, CA, August 2000. Also available as Tech Report ISSE-TR-00-02, George Mason University, Dept of Information and Software Engineering, May 2000. Available at http://www.ise.gmu.edu/techrep.
[LW90] H. K. N. Leung & L. White, “A study of integration testing and software regression at the integration level.” Conf. on Softw. Maintenance 1990, San Diego, USA, pp. 290-301, November 1990.
[LX93] K.J. Lieberherr & C. Xiao, "Object-Oriented Software Evolution." IEEE Trans Softw. Eng., 19(4), pp. 313-343, 1993.
[Maj98] M. Major, “Prioritizing OO Tests Built With Use Cases.” STAR’98, Orlando, May 1998. Software Quality Engineering.
[Mar95] B. Marick. Testing Object-Oriented Software. New York: ACM, 1995.
[McD95] J. McDonald, "Inheritance and genericity in object-oriented testing." Dept of Computer Science, University of Queensland, November 1995. Available at http://www.itee.uq.edu.au/~jasonm/home.html.
[McD97] J. McDonald, "Generating test oracles from object-oriented formal specifications." PhD Confirmation Report, School of IT, University of Queensland, February 1997. Available at http://www.itee.uq.edu.au/~jasonm/home.html.
[McG94a] R. L. McGarvey. Object-oriented test development in ABBET. New York: IEEE, 1994.
[McG94b] J. D. McGregor, “Constructing functional test cases using incrementally derived state machines.” Proc. Eleventh Int. Conf. on Testing Computer Softw., pp. 377-386, Washington, DC. USPDI, June 13-16 1994.
[McG94c] J. D. McGregor, “Selecting functional test cases for a class.” Proc. Seventh Int. Softw. Quality Week in San Francisco, pp. 2-T-2. Software Research Inc., May 17-20 1994.
[McG94d] J. D. McGregor, "Testing Object-Oriented Software." Proc. Soft. Development’94, 1994.
[McG94e] J. D. McGregor, Online bibliography of 37 titles. Undated, but internal evidence suggests 1994. Available at http://www.cs.clemson.edu/~johnmc/.
! "#
$ % & ' ( & $ ' ' $ )* $ '
[McG95a] J. D. McGregor (moderator), “OO Testing in the Real World: Lessons for All.” Panel Discussion, OOPSLA’95, 1995, p. 140.
[McG95b] J. D. McGregor, "Functional testing of classes." Undated copy, but internal evidence suggests 1995. Available at http://www.cs.clemson.edu/~johnmc/.
[McG96a] J. D. McGregor, "A Component Test Strategy for Object-Oriented Software." Proc. Ninth Int. Soft. Quality Week in San Francisco, May 1996.
[McG96b] J. D. McGregor, "A Testing Effort Metric." Proc. Conf. OO Technology, June 1996.
[McG96c] J. D. McGregor, "Let Architectural Reuse Guide Component Reuse." Object Magazine, April 1996, pp. 38-47.
[McG97a] J. D. McGregor, “An Overview of Testing.” JOOP, January 1997. Available at http://www.cs.clemson.edu/~johnmc/.
[McG97b] J. D. McGregor, “Planning for Testing.” JOOP, February 1997. Available at http://www.cs.clemson.edu/~johnmc/.
[McG97c] J. D. McGregor, “Component Testing.” JOOP, March/April 1997. Available at http://www.cs.clemson.edu/~johnmc/.
[McG97d] J. D. McGregor, “Parallel Architecture For Component Testing.” JOOP, May 1997. Available at http://www.cs.clemson.edu/~johnmc/.
[McG97e] J. D. McGregor, “A Component Testing Method.” JOOP, June/July 1997. Available at http://www.cs.clemson.edu/~johnmc/.
[McG97f] J. D. McGregor, “Making Component Testing More Effective.” JOOP, August 1997. Available at http://www.cs.clemson.edu/~johnmc/.
[McG97g] J. D. McGregor, “Specifications, Reuse and Testing.” JOOP, September 1997. Available at http://www.cs.clemson.edu/~johnmc/.
[McG98a] J. D. McGregor, “An Overview of Testing.” JOOP, January 1998. Available at http://www.cs.clemson.edu/~johnmc/.
[McG98b] J. D. McGregor, “Testing Models: The Requirements Model.” JOOP, June 1998. Available at http://www.cs.clemson.edu/~johnmc/.
[McG99a] J. D. McGregor, “Instrumentation for Class Testing.” JOOP, February 1999. Available at http://www.cs.clemson.edu/~johnmc/.
[McG99b] J. D. McGregor, “Testing Distributed Objects and Components.” STAR’99 East, Orlando, FL, May 1999. Software Quality Engineering.
[McG99c] J. D. McGregor, “Test Patterns – Please Stand By.” JOOP, June 1999. Available at http://www.cs.clemson.edu/~johnmc/.
[McG9x] J. D. McGregor, Online bibliography of 37 titles. Undated, but internal evidence suggests 1994. Available at http://www.cs.clemson.edu/~johnmc/.
[MCM+98] L. Murray, D. Carrington, I. MacColl, J. McDonald & P. Strooper, "Formal derivation of finite state machines for class testing." LNCS 1493, p. 42 ff., 1998. Also available as SVRC Tech Report TR98-03 at http://svrc.it.uq.edu.au/Bibliography/svrc-tr.html?98-03.
[MCMS97a] L. Murray, D. Carrington, I. MacColl & P. Strooper, "Inheritance and specification-based object-oriented testing." Tech Report 97-18, Software Verification Research Centre, School of Information Technology, University of Queensland, Australia, 1997. Also available as "Extending test templates with inheritance." In Proc. 1997 Australian Softw. Eng. Conf. (ASWEC'97), pp. 80-87, IEEE Computer Society, 1997.
[MCMS97b] L. Murray, D. Carrington, I. MacColl & P. Strooper, "A case study in specification-based object-oriented testing with inheritance." Tech Report 97-19, Software Verification Research Centre, School of Information Technology, University of Queensland, Australia, 1997.
[MD93a] J. D. McGregor & D. M. Dyer, “Selecting functional test cases for a class.” Eleventh Annual Northwest Softw. Quality Conf., pp. 109-121, Portland, OR. PNSQC, 1993.
[MD93b] J. D. McGregor & D. Dyer, “A note on inheritance and state machines.” Tech Report, Clemson University, 1993.
[MD93c] J. D. McGregor & D. M. Dyer, “Selecting specification-based test cases for object-oriented systems.” Tech Report, Clemson University, 1993.
[MD97] P. Middleton & C. Dougan, “Grey Box Testing C++ via the Internet.” Tenth Int. Softw. Quality Week in San Francisco, May 1997. Software Research Inc.
[MDDW94] T. J. McCabe, L.A. Dreyer, A. J. Dunn & A. H. Watson, “Testing an Object-Oriented Application.” J. Quality Assurance Inst., 8(4), pp. 21-27, Oct 1994.
[MF99] K. C. Morris & David Flater, “Standards-based software testing in a net-centric world.” Manufacturing Systems Integration Division, National Institute of Standards and Technology, 100 Bureau Drive, Stop 8260, Gaithersburg, MD 20899-8260, USA, 1999.
[MH92] J. D. McGregor & M. J. Harrold, "Toward a Testing Methodology for Object-oriented Software Systems." Proc. Workshop on OO Soft. Eng. Practice, February 1992.
[MHS98] J. McDonald, D. Hoffman & P. Strooper, “Programmatic testing of the Standard Template Library container classes.” Proc. IEEE Int. Conf. on Automated Softw. Eng., pp. 147-156, 1998.
[MK94] J. D. McGregor & T. D. Korson, “Integrating object-oriented testing and development processes.” Comms ACM, 37(9), pp. 59-77, Sept 1994.
[MK96a] J. D. McGregor & A. Kare, “Parallel Architecture for Component Testing of Object-Oriented Software.” Ninth Int. Softw. Quality Week in San Francisco, May 1996. Software Research, Inc.
[MK96b] J. D. McGregor & A. Kare, "Testing Object-Oriented Components." Proc. 17th Int. Conf. Testing Computer Soft., June 1996.
[MM94] R. McDaniel & J. D. McGregor, “Testing the polymorphic interactions between classes.” Tech Report TR-94-103, Clemson University, 1994. Available at http://www.cs.clemson.edu/~johnmc/.
[MM99] M. L. Major & J. D. McGregor, “Guided Inspection.” Slides presented at SEW’99. Available via http://www.software-architects.com.
[MMB+95] F. MacDonald, J. Miller, A. Brooks, M. Roper & M. Wood, "Applying inspection to object-oriented software." Softw. Test. Verif. Reliab., 6, pp. 61-82, 1996.
[MMH93] J. D. McGregor, B. Malloy & M. J. Harrold, "The Implementation of a Simulation Language Using Dynamic Binding." 1993 Western Simulation MultiConference. SCS, 1993.
! "#
$ % & ' ( & $ ' ' $ )* $ $
[MMLS00] J. McDonald, L. Murray, P. Lindsay & P. Strooper, "A pilot project on module testing for embedded software." Tech Report 00-24, Software Verification Research Centre, School of Information Technology, University of Queensland, Australia.
[MMLS01] J. McDonald, L. Murray, P. Lindsay & P. Strooper, "Module testing embedded software - an industrial pilot project." In Proc. 7th IEEE Int. Conf. on Eng. of Complex Computer Systems, pp. 233-238, July 2001.
[MMS97] J. McDonald, L. Murray & P. Strooper, "Translating Object-Z specifications to object-oriented test oracles." In Proc. Asia Pacific Softw. Eng. Conf. (APSEC'97), Hong Kong, pp. 414-423, December 3-5 1997. Revised version available as SVRC Tech Report TR97-30 at http://www.itee.uq.edu.au/~jasonm/tr97-30.ps.
[MMS98] L. Murray, J. McDonald & P. Strooper, "Specification-based class testing with ClassBench." In Proc. Asia Pacific Softw. Eng. Conf. (APSEC'98), Taipei, Taiwan, December 2-4 1998, pp. 165-173. Also available as SVRC Tech Report TR98-12 at http://svrc.it.uq.edu.au/Bibliography/svrc-tr.html?98-12.
[MMSC98] I. MacColl, L. Murray, P. Strooper & D. Carrington, “Specification-based class testing: a case study.” Proc. Second Int. Conf. on Formal Eng. Methods, Brisbane, Australia, December 1998, pp. 222-231. Los Alamitos: IEEE Computer Society Press, 1998.
[Moh97] G. Mohl. Regressionstesten objektorientierter Software [Regression Testing of Object-Oriented Software]. Diploma Thesis (in German), University of Hagen, Germany, Aug 1997.
[MPCD00] J. D. McGregor, A. Parrish, D. Cordes & B. Dixon, "A Taxonomy of Module Interaction for Object-Oriented Testing." ACIS Int. J. Computer and Inform. Sci., Winter 2000.
[MR00] J. D. McGregor & M. L. Russ, "Selecting Test Cases Based on User Priorities." Soft. Development, March 2000.
[MS91] John D. McGregor & David A. Sykes. Object-Oriented Software Development: Engineering Software for Reuse. International Thomson Publishing, 1991.
[MS96] J. McDonald & P. Strooper, "Testing inheritance hierarchies in the ClassBench framework." In Proc. Technology of Object-Oriented Languages and Systems (TOOLS USA '96), Santa Barbara, California, July 1996.
[MS98] J. McDonald & P. Strooper, "Translating Object-Z specifications to passive test oracles." In Proc. Int. Conf. on Formal Eng. Methods (ICFEM'98), Brisbane, Australia, December 1998, pp. 165-174. Also available in extended form as SVRC Tech Report TR98-04 at http://www.itee.uq.edu.au/~jasonm/tr98-04.ps.
[MS01] J. D. McGregor & D. A. Sykes, A Practical Guide to Testing Object-Oriented Software. Addison-Wesley, 2001.
[MTW94] G. C. Murphy, P. Townsend & P. S. Wong, “Experiences with cluster and class testing.” Comms ACM, 37(9), pp. 39-47, Sept 1994.
[Mus96] J. D. Musa, “Software-Reliability-Engineered Testing.” IEEE Computer, 29(11), pp. 61-68, Nov 1996.
[MW94] T. J. McCabe & A.H. Watson, “Combining comprehension and testing in object-oriented development.” Object Magazine, 4(1), pp. 63-64, March/April 1994.
[MW98] J. Münzel & M. Winter, Testing Object-Oriented Programs — Annotated Bibliography, 1998. Available at http://www.informatik.fernuni-hagen.de/import/pi3/GI/akOOT.html.
! "#
$ % & ' ( & $ ' ' $ )* $ (
[Nau94] U. Naumann, “Experiences in testing object-oriented software.” Eleventh Int. Conf. and Expo. on Testing Computer Soft., pp. 373-376. ASQC/STSC, June 13-16 1994.
[OAT00] A. J. Offutt, A. Abdurazik & R. T. Alexander, “An analysis tool for coupling-based integration testing.” Dept of Information and Software Engineering, George Mason University.
[OI95] A. J. Offutt & A. Irvine, “Testing object-oriented software using the category-partition method.” Proc. Seventeenth Int. Conf. on Technology of OO Languages and Systems (TOOLS USA’95), Santa Barbara, California, USA, pp. 293-303, August 1995.
[OP97] A. J. Offutt & J. Pan, “Detecting equivalent mutants and the feasible path problem.” Softw. Test. Verif. Reliab., 7(3), pp. 165-192, 1997.
[OS99] A. Orso & S. Silva, “Integration testing of procedural object-oriented languages with polymorphism.” Sixteenth Int. Conf. on Testing Computer Software (ICTCS’99), Washington, DC, 1999.
[Ove92a] J. Overbeck, “Test Activities in Object-Oriented Development.” In P. Liggesmeyer, H. M. Sneed & A. Spillner (eds), Test Activities in Object-Oriented Development, pp. 168-177. Springer-Verlag, 1992.
[Ove92b] J. Overbeck, “Test Activities in Object-Oriented Development.” In P. Liggesmeyer, H. M. Sneed & A. Spillner (eds), Testen, Analysieren und Verifizieren von Software [Testing, Analysis and Verification of Software], pp. 168-177. GI, Springer-Verlag, 1992.
[Ove93] J. Overbeck, “Testing Object-Oriented Software: State of the Art and Research Directions.” Proc. First Europ. Int. Conf. Softw. Testing, Analysis and Review, London, Oct 1993.
[Ove94a] J. Overbeck, “Testing Generic Classes.” Proc. Second Europ. Int. Conf. Softw. Testing, Analysis and Review, Brussels, Oct 1994.
[Ove94b] J. Overbeck. Integration Testing for Object-Oriented Software. PhD Dissertation, Vienna University of Technology, 1994.
[PA01] K. Periyasamy & V. S. Alagar, “A rigorous method for test templates generation from object-oriented specifications.” Softw. Test. Verif. Reliab., 11, pp. 3-37, 2001.
[Par98] Parasoft, Automatic white-box testing for the Java developer, 1998. Available at http://www.parasoft.com/jtest/jtestwp.htm.
[PAS99] K. Periyasamy, V. S. Alagar & S. Subramanian, “Deriving test cases for composite operations in Object-Z specifications.” Proc. Technology of OO Languages and Systems (TOOLS 26), Santa Barbara, CA, August 1999, pp. 429-441. Los Alamitos: IEEE Computer Society Press, 1999.
[Pay98] J. Payne, “Practical Techniques for Testing Objects.” STAR’98 West, San Diego, CA, Oct 1998. Software Quality Engineering.
[PBB98] C. Péraire, S. Barbey & D. Buchs, “Test selection for object-oriented software based on formal specifications.” IFIP Working Conf. on Programming Concepts and Methods (PROCOMET’98), pp. 385-403, Shelter Island, NY, June 1998. Chapman & Hall, 1998. Also available as Tech Report EPFL-DI 97/252, École Polytechnique Fédérale de Lausanne, and published in DeVa second year report, Jan 1998.
[PBC93] A. Parrish, R. Borie & D. Cordes, “Automated Flowgraph-Based Testing of Object-Oriented Software Modules.” J. Systems and Softw., Nov 1993.
! "#
$ % & ' ( & $ ' ' $ )* $+
[PC94] A. Parrish & D. Cordes, “Applying conventional unit testing techniques to abstract data type operations.” Int. J. Soft. Eng. and Knowledge Eng., 4(1), pp. 103-122, 1994.
[PCG94] A. Parrish, D. Cordes & M. Govindarajan, “Systematic defect removal from object-oriented modules.” Seventh Int. Softw. Quality Week in San Francisco, pp. 2-T-1. Software Research, Inc., May 17-20 1994.
[Pér98] C. Péraire. Formal testing of object-oriented software: from the method to the tool. PhD Thesis 1904, École Polytechnique Fédérale de Lausanne, 1998.
[PK90] D. E. Perry & G. E. Kaiser, “Adequate testing and object-oriented programming.” JOOP, 2(5), pp. 13-19, Jan/Feb 1990.
[Pos94] R.M. Poston, “Automated testing from object models.” Comms ACM, 37(9), pp. 48-58, Sept 1994.
[PR00] M. Prasse & P. Rittgen, "Success factors and future challenges for the development of object orientation." Tech Report 20, Institut für Wirtschaftsinformatik, Fachbereich Informatik, Universität Koblenz-Landau, Germany, 2000.
[PW91] J. A. Purchase & R. L. Winder, “Debugging Tools for Object-Oriented Programming.” JOOP, 4(3), June 1991, pp. 10-27.
[RBP+91] James Rumbaugh, Michael Blaha, William Premerlani, Frederick Eddy & William Lorensen. Object-Oriented Modeling and Design. London: Prentice-Hall International, 1991.
[Ree94] D. R. Reed, “Building, Testing & Tuning C++ Programs.” C++ World Conf. Proc., New York, January/February 1994, pp. 135-137. SIGS Conferences.
[RH94] G. Rothermel & M. J. Harrold, “Selecting regression tests for object-oriented software.” Proc. Int. Conf. on Soft. Maint. 1994, pp. 14-25. Los Alamitos: IEEE Computer Society Press, Sept 1994.
[RHD00] G. Rothermel, M. J. Harrold & J. Dedhia, “Regression test selection for C++ software.” Softw. Test. Verif. Reliab., 10, pp. 77-109, 2000.
[Rin87] D. C. Rine, “A common error in the object structure of object-oriented design methods.” ACM SIGSOFT Notes, 12(4), pp. 42-44, October 1987.
[RK97] J. Rosenberg & M. Kölling, "Testing object-oriented programs: making it simple." Basser Dept of Computer Science, University of Sydney, Australia, 1997.
[Ros97] D. Rosenblum, "Adequate testing of component based software." Tech Report 97-34, Dept of Information and Computer Science, UC Irvine, Irvine, CA, August 1997.
[RSTC95] Reliable Software Technologies Corporation, “Testability of Object-Oriented Systems.” Tech Report NIST GCR 95-675, National Institute of Standards and Technology, Computer System Laboratory, Gaithersburg, MD 20899, 1995.
[Rüp95] P. Rüppel, "A class library for the automation of class testing." Fachbereich Informatik - Softwaretechnik (Skr. FR 5-6), Berlin, Germany, June 1995.
[RWB+95] M. Roper, J. Wilson, A. Brooks, J. Miller & M. Wood, "A simple dynamic analyser for C++." Tech Report RR/95/194 [EFoCS-18-95], Dept of Computer Science, University of Strathclyde, Glasgow, Scotland, 1995.
[Rym98] J. Rymer, “Unit and Integration Testing of Objects.” STAR’98 West, San Diego, CA, Oct 1998. Software Quality Engineering.
! "#
$ % & ' ( & $ ' ' $ )* $+
[SH99] S. Sinha & M. J. Harrold, “Criteria for testing exception-handling constructs for Java programs.” Int. Conf. on Soft. Maint., 1999.
[Sie94] S. M. Siegel, “OO integration testing for OO projects.” Seventh Int. Softw. Quality Week in San Francisco, pp. 4-A-4. Software Research, Inc., May 17-20 1994. (McGregor notes [McG9x] that this paper is not included in the conference proceedings.)
[Sie96] S. M. Siegel. Object-Oriented Testing: A Hierarchic Approach. New York: John Wiley & Sons, 1996.
[Sim00] A.J.H. Simons, "On the compositional properties of UML statechart diagrams." In Proc. Third Workshop on Rigorous Object-Oriented Methods, BCS EWICS. BCS, 2000.
[SK94] S. J. Samadzadeh & M. H. Khan, “Stability, coupling and cohesion of object-oriented software systems.” New York: ACM, 1994. [Rest of details missing; please email the author if you have them.]
[SL00] A. L. Souter & L. L. Pollock, “OMEN: A Strategy for Testing Object-Oriented Software.” ISSTA ’00, pp. 49-59. Portland, Oregon, 2000.
[SM94a] S. Shlaer & S. J. Mellor, “A deeper look at testing and integration, Part 1.” JOOP, pp. 8-13, Feb 1994.
[SM94b] S. Shlaer & S. J. Mellor, “A deeper look at testing and integration, Part 2.” JOOP, pp. 18-22, July/Aug 1994.
[SMT92] G. M. Schneider, J. Martin & W. T. Tsai, “An experimental study of fault detection in user requirements documents.” ACM Trans. Softw. Eng. and Methodology, 1(2), pp. 188-204, April 1992.
[SN94a] A. Serra & P. Nesi, “Object-oriented approach for a non-invasive, automatic testing tool.” Seventh Int. Soft. Quality Week in San Francisco, pp. 4-A-3. Software Research, Inc., May 17-20 1994.
[SN94b] E. Siepmann & A. R. Newton, “TOBAC: A test browser for testing object-oriented software.” Proc. ACM SIGSOFT Int. Symp. on Softw. Testing and Analysis, 1994. Also cited as SIGSOFT Softw. Eng. Notes (Special Issue), pp. 154-168, 1994.
[Spu94] David A. Spuler. C++ and C Debugging, Testing & Reliability. Prentice Hall, 1994.
[SR90] M. D. Smith & D. J. Robson, “Object-oriented programming - the problems of validation.” Proc. Seventh Int. Conf. on Soft. Maint., pp. 272-281, San Diego, CA, Nov 1990.
[SR92] M. D. Smith & D. J. Robson, “A framework for testing object-oriented programs.” JOOP, 5(3), pp. 45-53, June 1992.
[Sta94] B. Stahl, “How to test object-oriented software.” Eleventh Int. Conf. and Expo. on Testing Computer Soft., pp. 369-372. ASQC/STSC, June 13-16 1994.
[Sub97] S. Subramanian, “Object-oriented program testing using formal requirements specification.” Masters Thesis, Dept of Computer Science, University of Manitoba, Winnipeg, Manitoba, Canada, April 1997.
[Sun98] Sun Microsystems, “Java Testing Tool.” Available at http://www.sun.com/workshop/testingtools/.
[TCC95] T.H. Tse, F.T. Chan & H.Y. Chan, "An axiom-based test case selection strategy for object-oriented programs." In M. Lee, B.-Z. Barta & P. Juliff (eds), Software Quality and Productivity: Theory, Practice, Education and Training, pp. 107-114. Chapman and Hall, London, 1995.
[Thu92] N. N. Thuy, “Testability and Unit Tests in Large Object-Oriented Software.” Fifth Int. Softw. Quality Week in San Francisco, May 1992. Software Research, Inc.
[TR92a] C. D. Turner & D. J. Robson, “The testing of object-oriented programs.” Tech Report TR-13/92, University of Durham, England, 1992.
[TR92b] C. D. Turner & D. J. Robson, “A suite of tools for the state-based testing of object-oriented programs.” Tech Report TR-14/92, University of Durham, England, 1992.
[TR93a] C. D. Turner & D. J. Robson, “State-based testing and inheritance.” Tech Report TR-1/93, University of Durham, England, 1993.
[TR93b] C. D. Turner & D. J. Robson, “Guidance for the testing of object-oriented programs.” Tech Report TR-2/93, University of Durham, England, 1993.
[TR93c] C. D. Turner & D. J. Robson, “The state-based testing of object-oriented programs.” Proc. IEEE Conf. on Soft. Maint. 1993, pp. 302-311. Los Alamitos: IEEE Computer Society Press, September 1993.
[TR95] C. D. Turner & D. J. Robson, “A state-based approach to the testing of class-based programs.” Softw.: Concepts and Tools, 16(3), pp. 106-112, 1995.
[TTB91] S. Trausan-Matu, J. Tepandi & M. Barbuceanu, “Validation, verification & testing of object-oriented programs.” First East Europ. Conf. on OOP, pp. 62-71, Sept 1991.
[TW96] P. Thévenod-Fosse & H. Waeselynck, “Towards a statistical approach to testing object-oriented programs.” Esprit Long Term Research Project No. 20072, DeVa Tech Report No. 28, LAAS-CNRS, Toulouse, 1996. Also available in Proc. 27th Int. Symp. on Fault-Tolerant Computing (FTCS’97), pp. 99-108, 1997.
[TX96] T. H. Tse & Z. Xu, “Test Case Generation for Class-Level Object-Oriented Testing.” Ninth Int. Softw. Quality Week in San Francisco, May 1996. Software Research, Inc.
[VHK97] J. Vitek, R. N. Horspool & A. Krall, “Efficient Type Inclusion Tests.” OOPSLA’97, pp. 142-157. ACM, 1997.
[Voa96] J.M. Voas, “Object-Oriented Software Testability.” In S. Bologna & G. Bucci (eds), Third Int. Conf. on Achieving Quality in Softw., pp. 279-290, London, 1996. Chapman and Hall.
[Voa97] J. Voas, “How Assertions Can Increase Test Effectiveness.” IEEE Softw., pp. 118-122, March/April 1997.
[WA97] L. J. White & K. Abdullah, “A firewall approach for the regression testing of object-oriented software.” Tenth Int. Softw. Quality Week in San Francisco, May 1997. Software Research Inc.
[Win90] T. L. Winfrey, “Testing Object-Oriented Programs by Mutually Suspicious Parties.” Tech Report CUCS-041-90, Columbia University, New York, 1990.
[Win98] M. Winter, “Managing object-oriented integration and regression testing (without becoming drowned).” EuroSTAR’98, Munich, 1998.
[WK95] C.-M. Wang & Y. S. Kuo, “Class Exerciser: A Basic Tool for Object-Oriented Development.” 1995 Asia Pacific Softw. Eng. Conf., pp. 108-116. IEEE, 1995.
! "#
$ % & ' ( & $ ' ' $ )* $ +
[WKC+97] Y. Wang, G. King, I. Court, M. Ross & G. Staples, “On Testable Object-Oriented Programming.” Softw. Eng. Notes, 22(4), pp. 84-90, July 1997.
[WXT+97] Y. Wanghong, C. Xiangkui, X. Tao, M. Hong & Y. Fuqing, "C++ program information database for analysis tools." Dept of Computer Science, Washington University.
[YAT95] Barbara Yates, “Testing Smalltalk Applications.” OOPSLA’95, Addendum to the Proceedings, pp. 143-148, 1995.
[YOO83] S. Yamada, M. Ohba & S. Osaki, “S-Shaped Reliability Growth Modeling for Software Error Detection.” IEEE Trans. Reliab., pp. 475-484, Dec 1983.
[ZHK92] S. Zweben, W. Heym & J. Kimmich, “Systematic Testing of Data Abstractions Based on Software Specifications.” Softw. Test. Verif. Reliab., 1(4), pp. 39-55, 1992.
[ZOPS97] J. Zhuo, P. Oman, R. Pichai & S. Sahni, “Using Relative Complexity to Allocate Resources in Gray-Box Testing of Object-Oriented Code.” Fourth Int. Softw. Metrics Symp. (METRICS’97), Albuquerque, NM, Nov 1997. IEEE.
[Zwe89] S. H. Zweben, “Testing formally specified data-oriented modules using program-based test data adequacy criteria.” Tech Report, Ohio State University, 1989.