International Journal of Engineering Education, Vol. 30, No. 6(A), pp. 1476–1485, 2014. Printed in Great Britain.
0949-149X/91 $3.00+0.00. © 2014 TEMPUS Publications.
Teaching Software Verification and Validation Course: A Case Study*

DEEPTI MISHRA¹, TUNA HACALOGLU² and ALOK MISHRA³
¹ Department of Computer Engineering, Atilim University, Ankara, Turkey. E-mail: [email protected]
² Department of Information Systems Engineering, Atilim University, Ankara, Turkey. E-mail: [email protected]
³ Department of Software Engineering, Atilim University, Ankara, Turkey. E-mail: [email protected]
Software verification and validation (V&V) is one of the significant areas of software engineering for developing high-quality software, and it is becoming part of the curriculum of universities' software and computer engineering departments. This paper reports the experience of teaching software verification and validation to undergraduate software engineering students and discusses the main problems encountered during the course, along with suggestions for overcoming them. The study covers all the topics generally included in a software verification and validation course, including static verification and validation. It was found that prior knowledge of software quality concepts and good programming skills help students to succeed in this course. Further, teamwork can be chosen as a strategy, since it facilitates students' understanding and motivates them to study. It was also observed that students were more successful in white box testing than in black box testing.

Keywords: software engineering; education; testing; open source tool
1. Introduction

Computers have revolutionized our modern-day life, including the way that we work. Computers are now essential tools for engineers and scientists and therefore play a vital role in engineering education. Testing contributes significantly to software engineering; according to Katara [1], testing can detect errors and thereby measure quality factors. Although there are many professional software testing training organizations worldwide, universities are still the main centres for educating the software developers and testers who can satisfy the needs of the IT industry. While the share of resources spent on testing, locating and fixing errors in an average software development project can reach over 50%, the share of academic resources spent on teaching testing is much smaller than that spent on programming courses [1]. Although software testing is a major approach to software quality assurance [2] and accounts for over 50% of the total software development cost [3], few computer science or software engineering departments in universities have a fully fledged testing course as part of their curriculum. Generally, testing is taught as part of other software engineering courses rather than as an independent course, and there is a general inadequacy internationally in addressing software testing in the curricula of software engineering or related programs in formal university education [4]. The reason, as suggested by Katara [1], is that testers are generally considered to be less educated and less paid than other professionals
and therefore students do not choose testing as a career. Further, the approach to teaching software testing is often inadequate, mainly because students are not able to acquire any hands-on testing experience [5]. Teaching software testing requires the use of computer-supported automated tools, but most professional automated testing tools are expensive and therefore difficult for universities to acquire. Chen and Poon [5] also noted that the difficulty of obtaining access to the associated automated tools creates a further difficulty in teaching a testing course.

The objectives of this study are as follows:

- To report the experiences gained from teaching a software testing course, especially at undergraduate level. The main objective of this course is not only to provide students with sound knowledge of various testing strategies and techniques, but also to educate them to choose and implement these strategies and techniques using computer-based testing tools.
- To examine the students' understanding of the different topics of the software testing course.
- To compare and analyze the performance of students in the various topics of the software testing course and, further, to examine the reasons for students' low understanding of particular topics and suggest ways to overcome these.

The remainder of this paper is organized as follows. Section 2 discusses the main issues and the related work on teaching a software testing course.

* Accepted 14 August 2014.
Details of the software verification and validation course, i.e. the course plan, teaching strategy, topics covered, tools used, grading policy, etc., are provided in Section 3. Our experiences, lessons learned and implications from teaching the software verification and validation course are presented in Section 4. Section 5 presents the conclusions, limitations and further work in this area.
2. Literature review

Software verification and validation is considered by the ACM and the IEEE [7] to be one of the main knowledge areas [6] for an undergraduate software engineering curriculum. This topic is also among the top ten most important topics in terms of its appropriateness to industry [8]. Software disciplines such as requirements management and quality management are very important in the software life cycle; therefore, introducing these concepts in universities will help to satisfy business needs. Wang et al. [9] argued that qualified software programmers require special training in software quality assurance (SQA) and proposed a pedagogical model for SQA at the source code level, in which coding standards and code optimization were treated as two quality buses, while code review traversed almost the entire software development process. In the context of quality management, a focus on the testing discipline is absolutely critical both for software in embedded systems and for traditional IT systems [10]. In spite of the importance of software verification and validation for producing quality software, university curricula generally do not provide graduates with enough coverage of software testing [4]. Shepard et al. [11] argued that more testing should be taught in the university curriculum so as to better prepare graduates as software practitioners. Chan et al. [4] proposed that a review of current software engineering curricula in universities, to examine the coverage of software testing, would be useful for the development of quality software. Bin and Shiming [12] analyzed the expectations of software development organizations and proposed some reforms and practices to update course content and structure. Some experience reports have suggested that the teaching of software testing should begin as early as possible so that an adequate culture of testing can be created [13]. Earlier mastery of testing concepts and techniques would: (1) improve reasoning about the program (and its solution), leading to higher-quality products; and (2) induce and facilitate the use of testing throughout the software development process, leading to a higher-quality process, in contrast to current practices [13].
Katara [1] suggested that students should learn about novel software processes, such as Test-Driven Development (TDD), that recognize the importance of testing and embrace it from the early phases of the software development life cycle. He further stated that students should learn how to use test automation effectively and how to estimate the costs of automation, in order to make the right choice between manual and automated testing in each context.

Studies by Patterson et al. [14] and Edwards [15] pointed out several problems related to teaching software testing in introductory computer science courses:

- Software testing requires students to have experience in programming.
- Instructors need to assess program correctness first; due to time constraints, it may not be feasible to assess test cases as well.
- Students need frequent, concrete feedback on how to improve their performance at many points throughout their development of a solution, rather than just once at the end of an assignment.
- Testing is often perceived by students as boring, since much time is spent on planning and performing the tests.
- Writing test plans creates a large overhead in workload, and students fail to see the benefit of such a formal approach since their programs are small and simple.
- Testing properly is very time intensive; in particular, when regression tests are necessary, doing so without proper tool support becomes tedious.
- Good tools to support testing in a student environment are rare.

There are various studies that report experiences of teaching a software testing course. Chen and Poon [5] reported the experience of teaching the classification-tree method as a black box testing technique to students majoring in computer science or software engineering. Yu et al. [16] described the methods used by full-time and part-time undergraduate students to test their own programs, as well as their perception of the classification-tree method. Chen et al. [3] reported their experience in teaching automated test case generation with limited resources. Frezza [17] proposed an innovative approach to integrating testing into an introductory software design course for computer science and engineering undergraduates. Smith et al. [18] described their experience in incorporating peer testing into an honors data structures course without significantly reducing its heavy programming load. Liu et al. [19] reported their experience in teaching metamorphic testing to a class composed of both undergraduate and graduate students from different backgrounds and career orientations.
Williams [20] presented the experience of teaching a graduate-level software reliability and testing course in a laboratory setting, such that students can learn testing and reliability theory and apply it immediately. Mao [21] proposed a new teaching method, named the Question-Driven Teaching Method (QDTM), to enrich the education of professional testers, and applied it to the software testing course in a software engineering major. Kaner and Padmanabhan [22] laid out an optimal set of lecture notes and practice exercises to teach a specific software testing technique, domain testing. Elbaum et al. [23] developed a web-based tutorial named Bug Hunt to engage students in learning software testing strategies. Garousi [24] modernized and redesigned the lab exercises of an undergraduate software testing course, basing them on real-world systems under test, so as to keep the course up to date with the most recent testing tools and technologies. Krutz and Lutz [25] incorporated material on significant software failures, called ''The Bug of the Day'', into formal coursework to broaden students' awareness of the cost of software flaws; the goal was to instill respect for the importance of software quality and the role of testing in ensuring it.

All of the above-mentioned studies outline experiences in teaching a specific testing topic. None of them covers all the different topics generally included in a software verification and validation course. Moreover, static verification and validation is not covered adequately in any of these studies. Therefore, this study provides significant insight into teaching software verification and validation as a whole, along with the use of computers and computer-supported testing tools.
3. Course specification and implementation

3.1 Context

The course Software Verification and Validation is included in the second year of the undergraduate curriculum of the Bachelor of Engineering (Software Engineering), a four-year degree. The course is taught in a two-hour theory lecture and a two-hour laboratory session every week, in addition to assignments and exercises given at regular intervals. During the laboratory sessions, students use computers to test programs. The division of the course into theory and laboratory sessions gives students the opportunity to apply, in the laboratory, the theoretical concepts of software testing that they learned during the lecture hours. Usually, about 30–40 students attend this course every year.
Students pursuing this course have already taken introductory courses in C and C++. However, these programming courses are not prerequisites for the Software Verification and Validation course; students who have not passed the C and C++ courses can therefore also take it.

3.2 Text book/Course material

Although there is a main text book [26] and several reference books [27, 28] mentioned in the course syllabus, the instructor provides lecture notes to the students for every topic. Lecture notes are also available on the course website.

3.3 Topics covered

The fundamental concepts of software verification and validation were introduced during the lecture hours, using examples as required. The main topics discussed are shown in Table 1. The lecture hours were followed by laboratory sessions in which students practiced the topics learned during the lectures; for example, they had the chance to perform code inspection on code written by someone else, and to create and run test cases using a particular testing technique. The laboratory session was divided into four parts: static V&V, white box testing techniques, black box testing techniques, and testing tools.

The first part covered static V&V techniques. During the theory lectures, students were given the concepts of inspection, review and walkthrough, and the distinctions between them. Team structure, the roles of the different team members, and the different steps (phases) of these static V&V techniques were also explained. During the laboratory hours, the aim was to introduce the practice of finding defects in a given code segment. The emphasis was not only on the correctness of the code but also on other attributes, such as good programming style and readability, which contribute to quality attributes like testability and maintainability. Since the students were more familiar with the C language, two medium-level C programming problems were provided and the students were divided into two groups. The students wrote code for these problems, and then each student looked for defects in the code written by another student. In these sessions, the students recorded the line number of each defect as well as a description of the defect and its type: whether it was a syntax error, an understandability problem, a readability problem, etc. The same activity was repeated for C programs provided by the instructor. A sketch of the kind of exercise involved is given below.
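To give a concrete flavour of this exercise, the fragment below is a hypothetical inspection target of the kind described above; the function, its seeded defects and the sample defect log are illustrative inventions, not reproductions of the actual course material.

```c
/* Hypothetical inspection target: compute the average of the first n
 * elements of arr. The defects below are seeded deliberately. */
float average(int arr[], int n)
{
    int i, s = 0;
    for (i = 0; i <= n; i++)  /* seeded defect: '<=' reads past element n-1 */
        s += arr[i];
    return s / n;             /* seeded defects: integer division discards
                                 the fraction; no guard against n == 0 */
}

/* Sample defect log of the kind students produced:
 *   loop bound      logic        '<=' causes an out-of-bounds read
 *   return value    logic        integer division loses precision;
 *                                division by zero when n == 0
 *   identifier 's'  readability  name does not convey its purpose
 */
```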
Table 1. Main topics covered during lecture hours

| Topic | Details |
|---|---|
| Static verification and validation | Review, Inspection, Walkthrough |
| Software testing techniques—white box testing | Basis path testing, Condition testing, Loop testing, Data flow testing |
| Software testing techniques—black box testing | Error guessing, Equivalence class partitioning, Boundary value analysis, Decision table, Cause effect graphing, Test cases derived from functional requirements (i.e. use cases) |
| Software testing strategies | Unit testing. Integration testing: Big-Bang vs. incremental approach. System testing: Recovery, Stress, Performance, Security testing. Acceptance testing: Alpha and Beta testing |
| Software testing tools | The importance of testing tools was discussed, along with different professional and open source tools |
| Software measurement and metrics related to verification and validation | Introduction to software measurement and its importance; introduction to software metrics and their importance in improving the quality of the software product and process; different V&V metrics, such as Error Density and Error Removal Efficiency |
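The V&V metrics in the last row of Table 1 are not defined in the paper itself; for reference, the forms below are the standard textbook definitions (an assumption about how the course presented them), with Error Removal Efficiency also known in the literature as defect removal efficiency:

```latex
\[
\text{Error Density} = \frac{\text{number of errors found}}{\text{size of the artifact (e.g. KLOC)}},
\qquad
\text{Error Removal Efficiency} = \frac{E_b}{E_b + E_a} \times 100\%,
\]
% where E_b is the number of errors removed before delivery and
% E_a is the number of errors found after delivery.
```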
The second part, relating to white box testing techniques, was introduced during the theory session with basic concepts such as:

- What is testing and what is its significance?
- What is a test case? What is a successful test case?
- Why is it important to know the different testing techniques?
- What is white box testing?

The lecture hours covered various white box testing techniques, including concepts such as independent paths and their significance, the calculation of cyclomatic complexity and its significance, data flow testing, etc. These techniques were explained through different examples. During the lab sessions, the practical implementation of the techniques learned in theory was introduced. Small code segments were provided for students to create and execute test cases using one of the white box testing techniques. This allowed them to concentrate on understanding the testing aspect, i.e. test case creation and execution, without the added burden of programming. The students first analyzed the given code and then, according to the chosen technique, developed and executed test cases for it; the sketch below illustrates the flavour of such an exercise.
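The following is an invented example, not taken from the course handouts, of how students would derive a basis path test set: the function has two simple decisions, so its cyclomatic complexity is V(G) = 2 + 1 = 3, and three test cases suffice to cover a set of independent paths.

```c
/* Invented lab-style fragment: classify an exam mark. */
const char *classify(int mark)
{
    if (mark > 100)      /* decision node 1 */
        return "invalid";
    if (mark >= 50)      /* decision node 2 */
        return "pass";
    return "fail";
}

/* Basis path test set, one test case per independent path:
 *   mark = 120 -> "invalid"   (decision 1 true)
 *   mark = 75  -> "pass"      (decision 1 false, decision 2 true)
 *   mark = 30  -> "fail"      (both decisions false)
 */
```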
The third part related to black box testing techniques and was introduced with basic concepts such as:

- What is black box testing?
- How is black box testing different from white box testing?
- What are the advantages and disadvantages of white box and black box testing?

The lecture hours covered various black box testing techniques, such as equivalence partitioning and boundary value analysis. These techniques were explained through examples from different domains. During the lab sessions, the practical implementation of the techniques learned in theory was introduced in two steps (a sketch of the first step follows the list):

1. Students were given a group of requirements and were instructed to write test cases using one of the black box testing techniques. This part was related only to the testing aspect.
2. Later, requirements were given to students, who were directed to write a program satisfying the given requirements and also to write test cases using a black box testing technique. Thereafter, the students executed the test cases to test the program. This part checked not only the students' knowledge of black box testing but also their programming skills.
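A minimal sketch of such an exercise is given below; the requirement ("report whether an exam mark between 0 and 100 is valid") and the helper is_valid_mark are invented for illustration. The test values combine one representative per equivalence class with the values on and around each boundary.

```c
#include <stdio.h>

/* Hypothetical requirement (illustrative, not from the course): a function
 * accepts an exam mark and reports whether it is valid (0..100 inclusive). */
int is_valid_mark(int mark)
{
    return mark >= 0 && mark <= 100;
}

int main(void)
{
    /* Equivalence class partitioning: one representative per class.
     *   valid class   0..100 -> 50
     *   invalid class < 0    -> -7
     *   invalid class > 100  -> 130
     * Boundary value analysis: values on and around each boundary. */
    int cases[]    = {50, -7, 130, -1, 0, 1, 99, 100, 101};
    int expected[] = { 1,  0,   0,  0, 1, 1,  1,   1,   0};
    int i, failures = 0;

    for (i = 0; i < (int)(sizeof cases / sizeof cases[0]); i++) {
        if (is_valid_mark(cases[i]) != expected[i]) {
            printf("FAIL: mark %d\n", cases[i]);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures != 0;
}
```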
The next part was related to automated testing. During the theory session, the following points were discussed: the significance of automated testing, and an overview of various professional and open source testing tools. As most professional testing tools are expensive and require considerable expertise, an open source testing tool was used in the course. Despite the general notion that open source software is of low quality because it is available for free, the quality of open source software stems not from its management but from its openness: free access to the source code ensures that the code is tested and retested by a worldwide user base, leading to timely identification of bugs and opportunities for enhancement, and hence to greater reliability [29]. Moreover, open source software gives users the ability to modify and adapt the source code according to their requirements. These tools are also free of charge and easy to access. During the lab session, an open source testing tool was chosen and introduced to the students. Since the students were more familiar with C programming, CuTest was selected.
CuTest works in a similar way to JUnit, one of the most popular open source testing tools for Java, in terms of writing test cases using test scripts and arranging them in test suites. This is one of the main reasons that CuTest was chosen: it would be easier for the students to switch from CuTest to JUnit once they had learned the Java programming language in the later years of their undergraduate studies. As CuTest is an open source tool, it is reasonable to compare it only with another open source tool; the comparison of CuTest and JUnit is shown in Table 2. The tool was introduced briefly to students to give them an idea about:

- how the tool works and what steps they would follow to write and execute test cases (basic information about the test script was also provided during those lectures);
- how to analyze the result of executing a test suite and subsequently correct the program code if necessary.

Initially, students practiced writing test cases for given code by using the tool. During these sessions, small code segments were provided to students, who created and executed test cases using the test script. Later, students wrote test cases using different testing techniques and then used the tool to execute them. The tool was introduced at the end of the course because it is necessary for students to be familiar with the different testing techniques through manual testing before automated testing is introduced.
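A minimal sketch of such a test script is shown below, written against the test-case, suite and assertion functions documented in CuTest's standard distribution; the function under test, add, is invented, and any deviation from the exact API version used in the course is our assumption.

```c
#include <stdio.h>
#include "CuTest.h"   /* CuTest ships as a single CuTest.c/CuTest.h pair */

/* Hypothetical unit under test (not from the course material). */
static int add(int a, int b) { return a + b; }

/* A test case receives a CuTest pointer and uses CuAssert* macros. */
static void TestAdd(CuTest *tc)
{
    CuAssertIntEquals(tc, 5, add(2, 3));
    CuAssertIntEquals(tc, 0, add(-4, 4));
}

/* Test cases are collected into a suite and run from main. */
int main(void)
{
    CuString *output = CuStringNew();
    CuSuite  *suite  = CuSuiteNew();

    SUITE_ADD_TEST(suite, TestAdd);

    CuSuiteRun(suite);
    CuSuiteSummary(suite, output);
    CuSuiteDetails(suite, output);
    printf("%s\n", output->buffer);
    return suite->failCount != 0;
}
```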
According to Katara [1], test automation has failed to fulfill its promises on a wide scale and testing is still largely manual work. He further argued that the basic problem is that, while a test case may find an error, executing the same test more quickly and/or several times does not reveal any new errors; automating certain testing tasks and maintaining the automation architecture only pays off when the test runs are repeated several times.

3.4 Grading

Three exams were conducted during lecture hours, and several quizzes, along with one main final exam, were conducted during laboratory hours. The performance of 30 students in these exams is presented in Table 3.
4. Discussion and lessons learned

The following are the implications related to the different topics of this course.

4.1 Static V&V

Students' performance in checking the correctness of the code was good, and their performance in finding syntax and semantic errors was equally good. The problematic area for the students was inspecting the code and making suggestions about other quality aspects, such as readability, testability and understandability. It was difficult to convince students that there are other quality issues to be taken into account while inspecting program code. Therefore, it would be beneficial for students to take an introductory software quality concepts course or module as a prerequisite for the testing course. During code inspection, students had difficulty understanding what the program actually does; they always required an explanation of the functionality of the code. During the laboratory sessions, students were more motivated and productive when working in pairs or teams than when working individually. While working in a team, students discussed the problems and tried to find solutions together. Working together in a team improves deeper-level learning, shared understanding, critical thinking, and long-term retention of the learned material [30].
Table 2. Comparison of CuTest and JUnit

| | CuTest | JUnit |
|---|---|---|
| Language support | C programming language | Java programming language |
| Language structure | Procedural programming | Object-oriented programming |
| Testing level | Unit testing | Unit testing |
| Ease of installation | Very easy. It consists of single .c and .h files, which need to be included in the source tree. | Easy. The JUnit framework can be easily integrated with Eclipse, Ant and Maven; new Eclipse versions already include JUnit. |
| Portability | Highly portable. It works with all major compilers on Windows (Microsoft, Borland), Linux and Unix. | Highly portable. It works on Windows, Linux and Mac. |
| Type of testing environment | Part of development environment | Part of development environment |
| Knowledge of test script required | Yes | Yes |
| Usage | Test cases are written in the development environment but executed from the command prompt. | Test cases are written and executed in the development environment. |
| Origin | Open source | Open source |
Table 3. Students' performance in different topics

| Testing technique | BPT¹ | DFT² | ECP³ & BVA⁴ | CEG⁵ & DT⁶ | SG⁷ | CuTest |
|---|---|---|---|---|---|---|
| Number of exercises | 5 | 3 | 3 | 3 | 1 | 2 |
| Maximum marks | Over 100 | Over 100 | Over 100 | Over 100 | Over 100 | Over 100 |
| Student 1 | 32.9 | 88.3 | 70.7 | 48.9 | 80 | 86.7 |
| Student 2 | 90.6 | 90.0 | 52.6 | 36.7 | 80 | 73.3 |
| Student 3 | 58.2 | 39.2 | 29.3 | 24.4 | 68 | 17.5 |
| Student 4 | 68.2 | 53.3 | 65.5 | 27.8 | 60 | 0.0 |
| Student 5 | 68.8 | 84.2 | 81.9 | 31.1 | 60 | 25.0 |
| Student 6 | 74.7 | 80.8 | 71.6 | 63.3 | 60 | 70.0 |
| Student 7 | 22.9 | 23.3 | 82.8 | 56.7 | 60 | 17.5 |
| Student 8 | 77.6 | 88.3 | 79.3 | 28.3 | 80 | 86.7 |
| Student 9 | 28.8 | 76.7 | 71.6 | 57.8 | 80 | 68.3 |
| Student 10 | 40.6 | 54.2 | 27.6 | 1.1 | 0 | 42.5 |
| Student 11 | 76.5 | 81.7 | 71.6 | 68.9 | 76 | 50.0 |
| Student 12 | 30.6 | 40.8 | 70.7 | 70.0 | 60 | 45.0 |
| Student 13 | 60.6 | 78.3 | 39.7 | 60.0 | 80 | 70.0 |
| Student 14 | 75.3 | 90.8 | 73.3 | 35.6 | 80 | 65.0 |
| Student 15 | 85.3 | 83.3 | 19.8 | 27.8 | 60 | 66.7 |
| Student 16 | 37.6 | 61.7 | 53.4 | 0.0 | 28 | 0.0 |
| Student 17 | 81.8 | 91.7 | 69.0 | 28.9 | 60 | 70.0 |
| Student 18 | 48.8 | 23.3 | 81.9 | 53.3 | 60 | 80.0 |
| Student 19 | 45.9 | 71.7 | 67.2 | 6.7 | 64 | 50.0 |
| Student 20 | 72.4 | 40.8 | 63.4 | 46.7 | 60 | 36.7 |
| Student 21 | 78.2 | 43.3 | 69.8 | 4.4 | 60 | 0.0 |
| Student 22 | 50.6 | 60.8 | 31.0 | 46.7 | 80 | 71.7 |
| Student 23 | 88.2 | 76.7 | 68.1 | 20.0 | 80 | 44.2 |
| Student 24 | 57.6 | 60.8 | 71.6 | 55.6 | 60 | 63.3 |
| Student 25 | 40.0 | 39.2 | 56.9 | 16.7 | 80 | 50.8 |
| Student 26 | 79.4 | 76.7 | 81.5 | 58.9 | 80 | 70.0 |
| Student 27 | 55.3 | 0.8 | 81.0 | 32.2 | 60 | 61.7 |
| Student 28 | 23.8 | 88.3 | 80.6 | 7.8 | 72 | 70.0 |
| Student 29 | 64.1 | 58.3 | 77.6 | 52.2 | 60 | 26.7 |
| Student 30 | 82.9 | 81.7 | 67.2 | 42.2 | 60 | 58.3 |
| Mean | 60.0 | 64.3 | 64.3 | 37.0 | 64.9 | 51.3 |
| Stdev | 20.7 | 24.0 | 17.8 | 20.8 | 17.0 | 25.5 |
| Group mean | 62.15 (white box) | | 55.4 (black box) | | | 51.3 (automated) |

¹ Basis path testing. ² Data flow testing. ³ Equivalence class partitioning. ⁴ Boundary value analysis. ⁵ Cause effect graphing. ⁶ Decision table. ⁷ Scenario graphing.
BPT and DFT are white box techniques; ECP & BVA, CEG & DT and SG are black box techniques; CuTest is the automated testing tool.
It was observed that, in individual exercises, few students were motivated enough to work sincerely, and most preferred to ask each other.

4.2 White box testing techniques

The major problem encountered during these sessions was that students found it hard to test code written by someone else. It is important to understand the internal structure of the code before writing test cases using white box techniques, and students found it difficult to understand a program written by someone else. Once they understood the internal structure of the program, they could write and execute test cases using white box testing techniques without much difficulty. The reason might be the students' lack of analytic skills.
White box testing methods are mostly used when the testers are also the developers; testers tend to incorporate their knowledge of the flow of the program code into the testing methods [16]. This was not the case in this course, as the code provided during white box testing was not written by the students. From Table 3, it can be observed that the students' performance was better in the Data Flow Testing exercises than in the Basis Path Testing exercises. The reason might be that, during Data Flow Testing, students worked on a small piece of code that fulfilled just one or two requirements, and they were asked to write test cases with respect to just one variable, which simplified the testing activity. Basis Path Testing exercises, on the other hand, required students to understand the code, draw the related flow graph and then find the test cases, which demanded more effort than Data Flow Testing. The sketch below illustrates the single-variable style of the Data Flow Testing exercises.
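The fragment below is invented, not taken from the course: the definitions and uses of the variable total are annotated, and the comments list the def-use pairs a small test set must cover under an all-uses-style criterion.

```c
/* Def-use pairs for the single variable 'total':
 *   def d1: total = 0           (initialisation)
 *   def d2: total += prices[i]  (inside the loop)
 *   use u1: total += prices[i]  (reads the previous value of total)
 *   use u2: return total        (use at exit)
 * All-uses coverage for 'total' needs paths covering the pairs
 * (d1,u1), (d1,u2), (d2,u1) and (d2,u2). */
int sum_prices(const int prices[], int n)
{
    int total = 0;                /* d1 */
    for (int i = 0; i < n; i++)
        total += prices[i];       /* d2 and u1 */
    return total;                 /* u2 */
}

/* Two test cases exercise all pairs: n = 0 covers (d1,u2);
 * n = 2 covers (d1,u1), (d2,u1) and (d2,u2). */
```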
4.3 Black box testing techniques

Students should understand a program's functionality (or its informal specification written in a natural language) before writing test cases for it. The majority of the students required the teacher's explanation of ''what a particular program does''. The reason might be the students' lack of analytic skills or lack of exposure to different application areas: most undergraduate students have not yet acquired an adequate level of expertise and experience, mainly because of a lack of understanding of requirements specifications in different application areas [5]. In addition, most black box testing methods rely heavily on human intuition and hence are less systematic [5]. The students' performance was highest in the scenario graph method, although it was taught during the later part of the course and few examples were provided before the examination due to time constraints. The reason for this higher performance might be familiarity with the application area (online shopping). Another reason might be that the creation of a scenario graph is similar to the creation of a flow graph (as used for cyclomatic complexity), and many practice examples had been done on flow graphs. Performance differs significantly between the ECP & BVA methods and the CEG & DT methods, although the problems given had almost the same difficulty level and the same amount of time was allocated to these topics. The reason might be the difficulty of creating a cause and effect graph when many causes and effects are involved, as noted by Chen and Poon [5].

4.4 Comparison between black box and white box testing techniques

Deriving test cases from specifications (i.e. using black box strategies) is more popular in industry than deriving test cases from program code (i.e. using white box strategies) [31]. During the course, too, students enjoyed black box testing more and found it easier and less time consuming than white box testing, but their performance was the opposite. Contrary to the general assumption that black box testing is easier than white box testing, students were more successful in white box testing, as shown in Table 3: the average mark in questions related to white box techniques is 62.15, whereas the average mark in black box testing methods is 55.4. The difference is significant. This can be explained by the fact that the code was provided during the white box testing sessions. These code segments were generally small (15–20 lines of code) and usually fulfilled one or two simple requirements.
Therefore, understanding the code during white box testing was easier than in black box testing, where requirements were provided and the students had to produce test cases based on these requirements using a chosen black box technique. There were at least 5–6 requirements in each problem, and students therefore found it difficult to comprehend the whole problem. Further, students found it difficult to understand requirements written in English, as it is not their first language.

Generally, students find it easier to understand a topic when more exercises and examples are given on it. This was not the case in this course. Basis Path Testing was the first method taught, and the largest number of exercises was given on this topic compared with the other techniques (five of the numerous exercises were evaluated); yet, contrary to our expectations, Table 3 shows that the average mark in this technique is lower than in other techniques for which fewer exercises and examples were given. The reason might be the students' lack of analytic skills. Also, more practice on a particular topic in the classroom is of little value unless students try to solve problems on their own after the lecture. This is also supported by Chen et al. [3]: compared with sitting in the classroom and listening to lectures, students usually acquire a much deeper impression and a longer memory of things they learn on their own by going through and overcoming difficulties during learning. The students' familiarity or unfamiliarity with the given problem domain is also very important. Programming problems come from a wide range of problem domains, and understanding the problem domain is critical [32]. According to Table 3, the students' performance was best in the scenario graph problem (a black box testing technique), although the number of examples given and evaluated in this topic was the least. This particular exercise was based on online shopping, and nowadays almost every student has experience of this domain. The effective use of black box testing methods by students depends largely on their expertise and experience in the application areas of the software [5]; students can easily write black box test cases for a problem from a familiar domain because they have experienced it many times in real life.

4.5 Comparison between manual and automated testing

Students found manual testing easier than automated testing. As expected, the students received low average marks in the automated tool exercises.
In Table 3, it can be seen that the marks for the tool (CuTest) range from 0 to 86.7 and the average is 51.3, the lowest of all the exercises. Students were expected to write test cases for a given piece of code using the test script, and they found this difficult even when they knew the test cases. The test script of the tool used in this course requires knowledge of topics such as pointers, structures and strings, which demand more programming knowledge than loops, conditional statements, etc. This shows that, although the tool is not difficult to use, it still requires a good knowledge of programming. Although the students were second-year students who had taken at least one programming course before this one, they were not very successful in programming. Barbosa et al. [13] note that software testing requires learners to have experience in programming. Therefore, we suggest that some additional hours be dedicated to the practical implementation of test cases, test suites and test scripts, to overcome the programming problems involved in writing test cases with a tool. The testing tool was introduced after manual testing, by which time there was not much time left before the end of the semester; the lack of time meant that students could not practice enough with the tool. The students' relatively poor results in automated testing, compared with manual testing, may also be due to these time constraints. Becoming a successful tester requires good analytical skills, programming skills and knowledge of testing tools. Writing a successful test case is predominantly a manual activity; effort can be saved during the execution and re-execution (regression testing) of test cases by using automated tools. Ng et al. [31], in their survey, reported that although it is widely believed that software quality is improved by the use of automated testing, only 30 of the 44 respondents (68.2%) using testing tools agreed with this belief; ten organizations (22.7%) were unsure, and four organizations (9.1%) gave a negative response.
5. Conclusion

There are various approaches to delivering this course successfully, some of which have already been mentioned in the previous section. The following are suggestions for overcoming the difficulties that arose during this course:

- Prerequisite courses should be given before this course to make sure that students' knowledge is sufficient to fulfill its requirements. Sufficient knowledge of software quality and good programming skills are prerequisites for this course, and the course schedule should be arranged so as to provide the practice the course needs.
- Teamwork can be chosen as a strategy, since it facilitates the students' understanding and motivates them to study.
- The major problem encountered during the white box testing sessions was that students found it hard to test code written by someone else: students should understand a program's functionality before writing test cases for it. A short explanation of what the program does can be supplied along with the program itself to address this concern.
- The major problem encountered during the black box testing sessions was that students found it difficult to understand requirements specifications written in English, which is not their first language; their difficulties may also stem from a lack of analytic skills or of exposure to different application areas. To overcome this, the problem domain can be chosen from topics with which the students are familiar, enabling them to concentrate on applying the testing techniques rather than on understanding the question.
- Students found it difficult to write test cases using the test script during automated testing with CuTest, even when they knew the test cases. This may be due to time constraints: the tool was introduced towards the end of the semester, after the students had become well versed in the different testing techniques, so they did not get enough practice with it. This issue can be handled by introducing the tool just after the introductory lessons, so that students learn the different testing techniques and the usage of the tool alongside each other.
As this course can be introduced at different levels of undergraduate and graduate study and in different departments in universities around the world, the level of rigor and difficulty of the teaching, the programming language, and the testing tools chosen depend on the level and orientation of the students. This study reports the experiences gained from teaching a software testing course included in the second year of the undergraduate curriculum of a Bachelor of Engineering (Software Engineering); the lessons learned will be most applicable where students are from similar backgrounds. For future work, we plan to analyze the results of applying white box testing and black box testing to the same problem: students will be provided with the requirements (for black box testing) and with code implementing the same requirements (for white box testing), to compare their understanding of a problem presented in these two different ways.
Also, problems from relatively unfamiliar domains will be selected, to check whether familiarity with the topic is important in scenario graphing.
References

1. M. Katara, Improving testing education—seven observations why testing is different, Proceedings of the IEEE International Conference on Software—Science, Technology & Engineering, 22–23 February 2005, pp. 121–128.
2. B. Hailpern and P. Santhanam, Software debugging, testing, and verification, IBM Systems Journal, 41(1), 2002, pp. 4–12.
3. T. Y. Chen, F. Kuo and Z. Q. Zhou, Teaching automated test case generation, Proceedings of the Fifth International Conference on Quality Software (QSIC'05), 19–20 September 2005, pp. 327–333.
4. F. T. Chan, W. H. Tang and T. Y. Chen, Software testing education and training in Hong Kong, Fifth International Conference on Quality Software (QSIC'05), 19–20 September 2005, pp. 313–316.
5. T. Y. Chen and P. L. Poon, Experience with teaching black box testing in a computer science/software engineering curriculum, IEEE Transactions on Education, 47(1), 2004, pp. 42–50.
6. T. C. Lethbridge, R. J. LeBlanc Jr, A. E. Sobel, T. B. Hilburn and J. L. Diaz-Herrera, SE2004: Recommendations for undergraduate software engineering curricula, IEEE Software, 23(6), 2006, pp. 19–25.
7. J. Diaz-Herrera and T. Hilburn (eds), ACM/IEEE CS Joint Task Force on Computing Curricula, Software Engineering 2004: Curriculum Guidelines for Undergraduate Degree Programs in Software Engineering, available at http://sites.computer.org/ccse/.
8. B. Kitchenham, D. Budgen, P. Brereton and P. Woodall, An investigation of software engineering curricula, Journal of Systems and Software, 74(3), 2005, pp. 325–335.
9. Y. Q. Wang, Z. Y. Qi, L. J. Zhang and M. J. Song, Research and practice on education of SQA at source code level, International Journal of Engineering Education, 27(1), 2011, pp. 70–76.
10. T. Astigarraga, E. M. Dow, C. Lara, R. Prewitt and M. R. Ward, The emerging role of software testing in curricula, 2010 IEEE Transforming Engineering Education: Creating Interdisciplinary Skills for Complex Global Environments, 6–9 April 2010, pp. 1–26.
11. T. Shepard, M. Lamb and D. Kelly, More testing should be taught, Communications of the ACM, 44(6), 2001, pp. 103–108.
12. Z. Bin and Z. Shiming, Curriculum reform and practice of software testing, International Conference on Education Technology and Information System (ICETIS 2013), 2013, pp. 841–844.
13. E. F. Barbosa, M. A. G. Silva, C. K. D. Corte and J. C. Maldonado, Integrated teaching of programming foundations and software testing, 38th ASEE/IEEE Frontiers in Education Conference (FIE 2008), 22–25 October 2008, pp. S1H-5–S1H-10.
14. A. Patterson, M. Kölling and J. Rosenberg, Introducing unit testing with BlueJ, SIGCSE Bulletin, 35(3), 2003, pp. 11–15.
15. S. H. Edwards, Using software testing to move students from trial-and-error to reflection-in-action, Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education (SIGCSE '04), Norfolk, Virginia, USA, 3–7 March 2004, pp. 327–333.
16. Y. T. Yu, S. P. Ng, P. L. Poon and T. Y. Chen, On the testing methods used by beginning software testers, Information and Software Technology, 46(5), 2004, pp. 329–335.
17. S. Frezza, Integrating testing and design methods for undergraduates: Teaching software testing in the context of software design, Proceedings of the 32nd ASEE/IEEE Frontiers in Education Conference, Boston, MA, 6–9 November 2002, pp. S1G-1–S1G-4.
18. J. Smith, J. Tessler, E. Kramer and C. Lin, Using peer review to teach software testing, Proceedings of the Ninth Annual International Conference on International Computing Education Research (ICER '12), ACM, New York, 2012, pp. 93–98.
19. H. Liu, F. Kuo and T. Y. Chen, Teaching an end-user testing methodology, Proceedings of the 23rd IEEE Conference on Software Engineering Education and Training (CSEET), 9–12 March 2010, pp. 81–88.
20. L. Williams, Teaching an active-participation university course in software reliability and testing, Proceedings of the 16th IEEE International Symposium on Software Reliability Engineering (ISSRE), 8–11 November 2005, p. 5.
21. C. Mao, Towards a question-driven teaching method for software testing course, Proceedings of the 2008 International Conference on Computer Science and Software Engineering (CSSE), 12–14 December 2008, pp. 645–648.
22. C. Kaner and S. Padmanabhan, Practice and transfer of learning in the teaching of software testing, Proceedings of the 20th Conference on Software Engineering Education & Training (CSEET), 3–5 July 2007, pp. 157–166.
23. S. Elbaum, S. Person, J. Dokulil and M. Jorde, Bug hunt: Making early software testing lessons engaging and affordable, Proceedings of the 29th International Conference on Software Engineering, 20–26 May 2007, pp. 688–697.
24. V. Garousi, An open modern software testing laboratory courseware—an experience report, Proceedings of the 23rd IEEE Conference on Software Engineering Education and Training (CSEET), 9–12 March 2010, pp. 177–184.
25. D. E. Krutz and M. Lutz, Bug of the day: Reinforcing the importance of testing, Frontiers in Education Conference, IEEE, 23–26 October 2013, pp. 1795–1799.
26. E. Kit, Software Testing in the Real World: Improving the Process, ACM Press/Addison-Wesley, 1995.
27. W. Perry, Effective Methods for Software Testing, 3rd edn, John Wiley & Sons, 2006.
28. R. Black, Pragmatic Software Testing: Becoming an Effective and Efficient Test Professional, John Wiley & Sons, 2007.
29. S. Raghunathan, A. Prasad, B. K. Mishra and C. Hsihui, Open source versus closed source: Software quality in monopoly and competitive markets, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, 35(6), 2005, pp. 903–918.
30. D. W. Johnson and R. T. Johnson, Learning Together and Alone: Cooperative, Competitive, and Individualistic Learning, 5th edn, Allyn & Bacon, Boston, 1999.
31. S. P. Ng, T. Murnane, K. Reed, D. Grant and T. Y. Chen, A preliminary survey on software testing practices in Australia, Proceedings of the 2004 Australian Software Engineering Conference (ASWEC), 13–16 April 2004, pp. 116–125.
32. B. Adelson and E. Soloway, The role of domain experience in software design, IEEE Transactions on Software Engineering, 11(11), 1985, pp. 1351–1360.
Deepti Mishra is an Assistant Professor in the Department of Computer Engineering at Atilim University, Turkey. She received her Ph.D. in Computer Science from Rani Durgawati University, India in 2004 and her M.Sc. in Computer Science and Applications from Jiwaji University, India in 1994. Her research interests include software testing, software quality, software process improvement, requirements engineering, software engineering education and information systems. She has published many research papers and book chapters at international and national levels. She has been granted the Department of Information Technology Scholarship of the Government of India.
Tuna Hacaloglu completed her B.Sc. in the Department of Software Engineering at Atilim University in 2009, ranked first in her class. She completed her master's studies in 2013 in the Department of Information Systems at Middle East Technical University, where she is currently pursuing her Ph.D. Since 2009, she has been working as a research assistant in the Department of Information Systems Engineering at Atilim University. Her research interests include information systems, software engineering and software engineering education.

Alok Mishra is Professor of Software Engineering at Atilim University, Ankara, Turkey. His areas of interest and research are software engineering and information systems. He has published articles, book chapters and book reviews related to software engineering and information systems in refereed journals, books and conferences. Dr. Mishra is an editorial board member of many journals, including Computer Standards and Interfaces, Journal of Universal Computer Science, Computing & Informatics, and Electronic Government—An International Journal. Professor Mishra has extensive experience in online education and received the excellence in online education award from U21Global, Singapore. He is the recipient of various scholarships, including the national merit scholarship and the Department of Information Technology scholarship of the Government of India.