BULETINUL ŞTIINŢIFIC al Universităţii POLITEHNICA din Timişoara, România, Seria AUTOMATICĂ ŞI CALCULATOARE SCIENTIFIC BULLETIN of The POLITEHNICA University of Timişoara, Romania, Transactions on AUTOMATIC CONTROL and COMPUTER SCIENCE, Vol. Xx (Yy), No. Z, MonthAA 20BB, ISSN 1224-600X
Final preprint submitted for publication
Ranking Software Maintenance Processes in a Small Software Company in the Context of Software Process Improvement

Zeljko Stojanov*, Vladimir Brtka* and Dalibor Dobrilovic*

* University of Novi Sad, Technical Faculty “Mihajlo Pupin”, Zrenjanin, Serbia
[email protected],
[email protected],
[email protected]
Abstract – Software maintenance is recognized in literature and practice as the most demanding and the most profitable part of the software life cycle. Effective management of maintenance processes requires appropriate and timely decision making in software companies, as well as continuous assessment and improvement of the practice. Assessment and improvement of maintenance processes increase the efficiency of software companies, the quality of their products and services, and their position in the market. Several approaches have been proposed for process assessment and improvement, but small software companies face difficulties in adopting and implementing them. Due to their limitations, small companies require cooperation with researchers and experts who can assist in assessment and improvement projects, as well as a bottom-up approach that considers the specific context of small companies. This paper presents a study on ranking five typical maintenance processes in a small software company in the context of a software process improvement project. The typical maintenance processes were identified through observation of the practice in the company and analysis of the maintenance repository containing requests submitted by clients. Ranking was performed with the fuzzy screening method, classified as multi-expert multi-criteria decision making. Attributes related to the timeline of maintenance processes are used as criteria. Three programmers from the company and one researcher were engaged as experts. The whole ranking process is presented, as well as the integration of fuzzy screening into the assessment phase of the process improvement project. The article concludes with lessons learned from the experience and promising directions for further research.
Keywords: software maintenance process, small software company, fuzzy screening, decision-making, process assessment, process improvement.

I. INTRODUCTION

The increased number of small software companies is a product of the growth of the software industry over the last few decades. Small companies are dominant in economies across the globe [1]. Research focused on small companies is important because of their contribution to economic activity, employment, innovation and wealth creation in many countries [2]. Fayad et al. [3] reported that only a few percent of all software companies have over 50 employees, while 78 percent of them have fewer than 10 employees. Because of their importance, small software companies need effective practice tailored to their size and capacities. Small companies face problems similar to those of larger ones regarding the quality of products and services, but due to their specific characteristics and limitations, they find it difficult to implement assessment and improvement activities [4]. Software Process Improvement (SPI) should help software organizations to achieve their business goals and competitiveness in the market. However, experience with several SPI methods over almost two decades has revealed that many SPI projects do not end successfully [5]. This problem is even more evident in small software companies because they usually do not have defined processes and do not implement systematic control at the appropriate level. Software process assessment, as an important phase in SPI, has a significant impact on an SPI project because its goal is to assess processes and their alignment with an organization's business goals. Although SPI can be implemented without the assessment of processes, formal assessment enables identification of threats to process implementation in an organization. Therefore, SPI should be based on process assessment that serves the interests of the organization, having in mind not only technical factors, but also organizational and social (people) factors [6]. A comprehensive understanding of processes, based on the knowledge and experience of the people involved in them, is necessary for their assessment and improvement. However, small software companies are usually not aware of the opportunities and benefits that SPI might bring to them [1].

It has been recognized in the literature that software maintenance is the most costly part of the software life cycle [7][8][9]. According to Boehm and Basili, software maintenance consumes over 70% of the total costs in the typical life cycle of a software product [10]. Maintenance costs for systems that are in use for a very long time frequently exceed the costs of development [11]. Software companies, and especially small ones, do not devote sufficient attention to software maintenance management. In [12], the authors reported that the majority of software organizations do not have any defined process for their software maintenance activities. As a consequence, there is an evident crisis of management and lack of planning in software maintenance [13].

According to the ISO/IEC 12207 Software Life Cycle Processes standard, maintenance is defined as a primary process in the software life cycle [14]. A software maintenance process is a set of various activities, usually initiated by clients. The diversity of maintenance processes and activities depends on the domain where the software is used, software size, frequency of changes, and limitations in the work schedule, resources and budget [15]. Evaluation of various parameters of the maintenance process is important because of their influence on product and service quality [16]. However, according to Pino et al. [9], many software organizations do not have defined and established procedures for maintenance activities because of the lack of maintenance process management models. Improvement of the software maintenance process leads to enhancement of software product quality, which requires understanding the scope of maintenance activities and the context of daily work. This is even more important for small software companies because of their resource constraints and the problems identified in everyday maintenance practice [17][18].

Systems based on fuzzy logic have been used in different fields for making decisions, controlling systems or forecasting. For example, in [19] the authors proposed a new method based on linguistic model inversion and fuzzy rule reduction for plant control. Further, Precup et al. [20] proposed two fuzzy controllers for tire slip control in anti-lock braking systems, and the conducted simulations showed advantages of fuzzy controllers over traditional ones. In [21] the authors proposed a hierarchical multi-agent control system based on a rule-based fuzzy system that enables improving the rule base under uncertain conditions and processing a priori inserted expert knowledge, while Aisjah and Arifin [22] described a maritime weather forecasting strategy based on fuzzy logic.

Decision-making techniques have recently gained attention in many areas of software engineering, such as choosing among a finite set of architectural alternatives during system design [23], requirements prioritization [24], role assignment for personnel in software development teams [25], or improving the configuration items selection process for software development [26]. Efficient maintenance requires that managers (and maintainers) are able to forecast future maintenance activities based on predictable patterns of maintenance work. Faghihini and Mollaverdi [27] recommended a broader view for maintenance managers, in which more than one criterion is considered when making a decision. Decision makers should study an identified maintenance problem in a way that considers the variety of factors affecting planning and solving activities in maintenance. Despite this, decision-making studies are rare in software maintenance. For example, Bauer and Heinemann [28] presented an approach for automated analysis and visualization of API dependencies in software projects that supports decisions in software maintenance scenarios, while Stojanov et al. presented a fuzzy screening based approach for evaluating software maintenance tasks [29] and processes [30].

This paper presents an approach, based on fuzzy screening, for evaluating and ranking maintenance processes in the context of a software process improvement project in a small local software company. The project was implemented in the period from 2011 to 2014. The rest of the paper is structured as follows. The second section provides background on software process improvement in small software companies and the fuzzy screening method. The third section presents practical experience gained in conducting the study in the software company. The last section contains conclusions and some remarks for further research.

II. BACKGROUND

Software process improvement is a challenging and demanding activity for any software organization. The assessment phase of an improvement project is very important because in this phase the organization should identify the state of the practice for the selected set of processes and decide which ones to improve. Several approaches exist for assessing the state of processes, and they use a variety of methods. In this work, the fuzzy screening method is used for assessing processes based on the characteristic elements of the maintenance request processing timeline. To provide the relevant background for the presented approach, this section outlines process improvement in small software companies, as well as the fuzzy screening method.

A. Software Process Improvement in Small Software Companies
Process assessment can help software companies to improve through identification of critical problems and establishment of improvement priorities. Regardless of the subject matter, an assessment procedure usually includes planning the assessment, conducting the assessment, and completing the assessment with feedback to the organization involved in the process. In general, there are three types of approaches to process improvement, regardless of the process domain [31]: process refinement, process redesign and process reengineering. All these approaches require comprehensive analysis of the current processes, as well as understanding the nature and implications associated with the improvement.

In software engineering, two approaches to software process assessment and improvement can be found in the literature: model-based approaches, i.e., assessment based on best practice models such as Capability Maturity Model Integration (CMMI) [32], ISO/IEC 15504 Information Technology – Process Assessment (SPICE) [33], and ISO/IEC TR 29110 Software engineering — Lifecycle profiles for Very Small Entities [34]; and inductive approaches, i.e., assessment tailored to a specific organization and context that assumes active participation of people from the organization and researchers. Software process assessment and improvement initiatives in small software organizations have achieved some promising results. However, because of the well-known limitations and characteristics of such organizations, they have not been entirely successful [35]. In most cases in small software companies, assessment and improvement activities are simplified and the number of considered processes is reduced. A large number of factors influence the implementation of assessment and improvement initiatives [36], and understanding them is necessary for success. Several studies reported a low level of implementation of software assessment and improvement endeavors in small software companies [37][38][39][40]. According to Kuvaja et al. [41], small software companies have problems choosing and implementing an improvement approach without the assistance of external consultants or investment in the time of their software managers. Software process improvement should take into account the peculiarities of small software organizations in order to be beneficial to them [42]. Clarke and O’Connor [43] conducted an empirical study with 15 software organizations that can be qualified as micro, small or medium in size, based on the definition provided by the European Commission [44], and concluded that there is a positive association between SPI activity and the business success of these companies. According to the authors, these results should improve the motivation of small software companies to implement SPI. Baddoo et al. [45] argued that people factors are vital for implementing SPI. Perspectives and attitudes of software practitioners and of strategic and operational managers should be taken into account in order to achieve greater success in SPI initiatives. Based on these observations, inductive approaches, based on close cooperation between researchers and employees of small companies, are the most promising way of conducting software process assessment and improvement in small software organizations. These approaches should include tailoring the process improvement approach to the real state of the practice in small companies.

Pettersson et al. [46] presented a step-by-step guide to process assessment and improvement planning using a lightweight assessment and improvement planning framework (iFLAP). The two main parts of the assessment approach are a project study, an investigation of one or more projects, and a line study, an examination of the relevant parts of the organization that are not part of any project. The approach is suitable for smaller organizations. Assessment includes eliciting improvement issues based on the experience and knowledge accumulated in the organization. Multiple data sources were used to validate research findings. Practitioners were actively involved in prioritizing improvement issues and identifying dependencies between them. The iFLAP framework can be tailored in size and coverage depending on organizational needs. The authors also presented two industrial case studies, together with lessons learned.

Pino et al. [47] presented a methodology for process assessment as an integral part of an improvement framework suitable for small software companies. The improvement framework is part of a methodological framework developed under the COMPETISOFT project [40], aimed at increasing the competitiveness of small software organizations through improvement and certification of their processes. The process for software process assessment consists of the following activities: assessment planning, assessment execution, results generalization and socialization, process prioritization, and preliminary planning of improvements. The assessment methodology was applied in eight small software organizations in Spain, Colombia and Argentina as support for diagnosing processes in the improvement cycle. In [48], MESOPYME, a continuous software process improvement method oriented to SMEs with the goal of reducing effort and time during SPI implementation, is presented. SPI implementation is based on Action Packages, sets of components that provide concrete solutions to software problems, developed by domain experts. An Action Package, as the starting point in SPI, should be validated by the organization before being adopted for implementation, while the addressed problems are related to the enterprise's real needs. MESOPYME has the following four stages: (1) getting commitment to SPI from the senior management of the company; (2) assessing the software process based on the CMM (Capability Maturity Model) software process model [49]; (3) developing an action plan for improvement and providing the relevant infrastructure; and (4) implementing SPI for the selected processes in the organization.

B. Fuzzy Screening Method

The Fuzzy Screening Method (FSM), introduced by R. Yager [50][51], offers a ranking method and allows experts to express their opinions using linguistic terms such as high, low, medium, etc. As mentioned in [52], the screening
procedure is subjective and can be imprecise, so linguistic terms are very suitable for it. FSM ranks multiple alternatives, taking into account the opinions of individual experts. If an attribute a takes values from the set {a1, a2, a3}, and if the attribute values satisfy some ordering relation, e.g. a1 ≤ a2 ≤ a3, then the attribute is called a criterion attribute or simply a criterion. Generally, there are multiple criteria attributes.

Each expert gives his/her opinion (score) of how well each alternative satisfies each of the criteria. For this purpose, a scale S consisting of m elements is used, where m is usually 5 or 7. For m=5, the scale S is defined by: Very High (VH)-S5, High (H)-S4, Medium (M)-S3, Low (L)-S2, Very Low (VL)-S1. For m=7, the scale S is defined by: Outstanding (OU)-S7, Very High (VH)-S6, High (H)-S5, Medium (M)-S4, Low (L)-S3, Very Low (VL)-S2, None (N)-S1. The screening procedure proposed in [52] uses the five-element scale, while the linguistic terms are precisely defined by fuzzy membership functions. A natural ordering applies to the scale S: S_i > S_j if i > j, as well as the maximum and minimum of any two scores:

max(S_i, S_j) = S_i if S_i ≥ S_j,   min(S_i, S_j) = S_j if S_j ≤ S_i.

The negation on the scale S is defined by:

neg(S_i) = S_(m-i+1).   (1)

FSM is commonly used for the selection of a small subset of alternatives from a much larger set of alternatives [50][51], but in this investigation FSM is used for ranking. The FSM uses three components:
1. A set X of p alternatives. These are the maintenance processes.
2. A set E of r experts. These are domain experts with considerable experience in the field or domain.
3. A set C of n criteria attributes. These are attributes associated with every alternative.

The FSM is particularly suitable when there are multiple experts who do not agree on criterion importance. For example, the opinion of expert E1 may be that criterion C should not be considered at all, while expert E2 believes that criterion C is not essential but should be taken into account when considering the outcome of the decision-making process. Therefore, expert E1 will assign importance value N (None) to criterion C, while expert E2 will assign importance value VL (Very Low) or L (Low) to the same criterion. In this investigation FSM is implemented in five steps:
1. Definition of the five-point scale.
2. Definition of a table where rows represent alternatives (maintenance processes) and columns represent criteria. Every cell of the table holds the criterion value for a certain row.
3. Every expert states how important each criterion is, using the scale defined in step 1.
4. Every expert gives a score for each cell of the table, using the same scale defined in step 1.
5. The overall score U for each alternative, for each expert, is calculated by:

U = min_j { neg(I_j) ∨ s_j }.   (2)

In equation (2), I_j is the importance of the j-th criterion, s_j is the score for the j-th criterion given by a certain expert, and the disjunction ∨ is implemented as the max function.

After the five-step procedure introduced above, each alternative is associated with r overall scores calculated by (2), but a single score is needed. Some function (or operator) is used for aggregation. As stated in [51], “This function can be seen as a generalization of the idea of how many experts it feels need to agree on an alternative for it to be acceptable to pass the screening process”. It follows that some aggregation operator must be used to aggregate the scores, so that each alternative is associated with only one total score. The choice of the aggregation operator used to combine the information is important because it directly affects the total score. Commonly used operators are the arithmetic mean, the weighted mean and the Ordered Weighted Averaging (OWA) operator. As this is a multi-expert multi-criteria decision making (ME-MCDM) method, the OWA operator is the most suitable aggregator.

OWA operators were introduced by R. Yager in [53]. There are families of OWA operators [54][55], as well as some issues in obtaining OWA operator weights [57]. OWA operators are commonly used in group decision-making procedures [58] when linguistic terms are used. This feature of OWA operators is important for this investigation because attribute importance is defined by linguistic terms. OWA operators have recently been used in market segmentation applications [59], for planning of electric power systems [60], water utilities [61], etc. The fact that the OWA operator is commonly used as an aggregation function in group decision making makes it the right choice for this investigation.

An OWA operator of dimension n is a mapping F: R^n → R with an associated vector w = (w_1, ..., w_n)^T, where w_i ∈ [0,1] for 1 ≤ i ≤ n and Σ_{i=1..n} w_i = 1, such that

F(a_1, ..., a_n) = Σ_{j=1..n} w_j · b_j,

where b_j is the j-th largest element of the bag <a_1, ..., a_n>. The total score T for an alternative is calculated by:

T = max_j { Q(j) ∧ B_j },   (3)

where B_j is the worst of the j best overall scores (i.e., the j-th best score), Q(j) is the quantifier value for the j-th position, and the conjunction ∧ is implemented as the min function. After the quantifier Q is selected, the OWA operator can be used for aggregating the experts' opinions. The function Q is denoted as Q(k) = S_b(k), where

b(k) = Int[ 1 + k · (q - 1) / r ].   (4)

In (4), Int[a] returns the integer value closest to a, q is the number of points on the scale and r is the number of experts.

Fig. 1. Distribution of MRs for the observed period

A user who requires some service from the company initiates the software maintenance request process. Although the process is usually tailored to the current request and user, a general process can be shaped as presented in Fig. 2.
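For illustration, the screening procedure and OWA-based aggregation described in this section, equations (1) through (4), can be sketched in Python. The integer encoding of the linguistic scale (VL=1 up to VH=5) and all function names are assumptions made for this sketch, not part of the original method description.

```python
# Illustrative sketch of Yager's fuzzy screening (eqs. (1)-(4)).
# Scale values are encoded as integers 1..q (VL=1 .. VH=5); this
# encoding and the function names are our assumptions.

def neg(s, q=5):
    """Negation on the scale S: neg(S_i) = S_(q-i+1).  (eq. 1)"""
    return q - s + 1

def unit_score(importances, scores, q=5):
    """Overall score U for one expert: min over criteria of neg(I_j) v s_j,
    with the disjunction implemented as max.  (eq. 2)"""
    return min(max(neg(i, q), s) for i, s in zip(importances, scores))

def total_score(unit_scores, q=5):
    """Aggregate r expert scores into a total score T.  (eqs. 3, 4)"""
    r = len(unit_scores)
    b = sorted(unit_scores, reverse=True)  # B_j: the j-th best score
    # Q(k) = S_b(k), with b(k) = Int[1 + k*(q-1)/r]
    Q = [round(1 + k * (q - 1) / r) for k in range(1, r + 1)]
    # Conjunction implemented as min, then max over positions
    return max(min(qk, bj) for qk, bj in zip(Q, b))

# Worked example using the E1 evaluation of the modification process
VL, L, M, H, VH = 1, 2, 3, 4, 5
importances = [H, M, H, H, L, L]
scores = [M, M, VH, VH, M, L]
u = unit_score(importances, scores)   # -> 3 (Medium)
t = total_score([M, L, M, M])         # -> 3 (Medium)
print(u, t)
```

Note that with the five-point scale and four experts (q=5, r=4), b(k) evaluates to 2, 3, 4 and 5, i.e. Q = (L, M, H, VH), which matches the aggregation used later in the study.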
III. EXPERIENCE FROM THE PRACTICE

This study was conducted as a long-term project started in 2011 with the goal of improving maintenance practice in a selected very small local software company. The project was prepared and established in a manner that facilitates close cooperation between employees of the company and researchers from the university. This kind of partnership has recently become common, since universities are interested in applying research in practice, while companies are interested in using new scientific discoveries in their practice [62]. This cooperation is even more important in the software industry because researchers usually do not know what really happens in practice, while practitioners are deeply involved in everyday projects and do not have time to search for state-of-the-art methods [63]. The study was implemented in a very small local software company that maintains over 30 business software applications used by local clients in Serbia. Maintenance accounts for the majority of all activities in the company. Data analysis based on real data extracted from records available in the company's internal repository of maintenance requests (MRs) for the period from May 2010 to November 2011 revealed that over 84% of all activities are maintenance activities [64]. Two types of clients exist: clients with a signed Maintenance Service Agreement (MSA), who pay for maintenance services on a monthly basis, and clients without a signed MSA, who pay for each maintenance service after its completion. The distribution of MRs for the observed period, for both groups of clients, is presented in Fig. 1.
Fig. 2. MR processing
A. Software Process Improvement Project

The software process improvement project was initiated with the goal of improving the software maintenance practice in the company. The SPI project was implemented as a lightweight project starting with the resources available in the company, called Lightweight Software Maintenance Process Improvement for Very Small Software Companies (LSMPI4VSSC). Process assessment was elaborated as a lightweight inductive approach called Lightweight method for Maintenance Process Assessment based on Frequent Feedbacks (LMPAF2). Fuzzy screening, as a proven method for ranking alternatives based on incomplete information, was used in the assessment phase of LSMPI4VSSC for ranking maintenance processes based on their importance for developers. The use of the fuzzy screening method in the context of LSMPI4VSSC is presented in Fig. 3. Fuzzy screening was used for the following activities: evaluation of maintenance tasks [29], evaluation of maintenance processes [30], and ranking of improvement proposals as assistance in selecting the processes for improvement. It is worthwhile to note here that the assessment is not based only on the fuzzy screening method, but rather uses a multi-strategy approach that enables a more comprehensive and reliable understanding of assessment and improvement [65]. This means that the assessment relies on multiple data sources and multiple research methods (both quantitative and qualitative). The LMPAF2 approach is inductive [66] and participative [67], which means that it is a bottom-up approach aimed at understanding the processes implemented in the company with full involvement of the company's employees.

Fig. 3. Fuzzy screening in the context of software process improvement

The main characteristic of the LMPAF2 method is that the process to be assessed is already determined during the initial phase of the SPI project. Therefore, the entire effort is directed towards assessing the selected process. The assessment method has been developed with the following objectives:
• To enable quick and cheap process assessment in small companies.
• To enable easy diagnosis of a selected process and proposal of issues for improvement.
• To allow engineers to work on their daily tasks with low engagement in the assessment process.
• To assess the practice and adjust improvement proposals through frequent feedback sessions.

B. Selection of Processes and Attributes

All MRs are recorded in the company's internal repository. Although the MR process is usually tailored to the current request (and user), a general maintenance process with typical phases exists in the company. The following time intervals exist in the MR process:

Scheduling time (ST). The time interval from the date of recording an MR in the repository to the date when the request is assigned to a programmer. A programmer can reject a request and forward it to another programmer. Only the last successful assignment for a particular request is saved in the repository.

Acceptance time (AT). The time interval from the date of recording an MR in the repository to the date when a programmer accepts the request for realization.

Completion time (CT). The time interval from the date of recording an MR in the repository to the date when a programmer completes all technical activities related to the MR realization.

Charging maintenance services to clients is based on recorded working hours spent on each request. The analysis includes the following types of working hours:

Company Working Hours (CWH). Recorded working hours that a programmer spends on activities in the company.

Internet Working Hours (IWH). Recorded working hours that a programmer spends on activities that require Internet access to a client's information system.

Client Side Working Hours (CSWH). Recorded working hours that a programmer spends at the client side (in the client's company).

A few types of maintenance processes can be distinguished in the company. All these types of maintenance processes are recorded in the repository with the same attributes, some of which are the previously discussed time intervals and working hours. These types of maintenance processes are: modification – the most common process, with modified software as the output; installation – installation of software modules or new software products in the client's business environment; training – training of end users in a client company; administration – setting and configuring parameters of a software application installed at the client side; and support – assistance to customers in the operation of their software and/or integrated software and hardware products [68].

C. Data Set

Data analysis is based on real data extracted for typical maintenance processes recorded in the company's internal repository [64]. Data are discretized for five typical maintenance processes by using a scale with five values: Very High (VH), High (H), Medium (M), Low (L), and Very Low (VL). The analysis is based on the decisions of four experts: three software experts (E1, E2 and E3) from the company, and one researcher (E4) with more than ten years of experience with small software companies.

The timeline of typical maintenance processes is described by the following attributes: Scheduling Time (ST), Acceptance Time (AT), Completion Time (CT), Company Working Hours (CWH), Internet Working Hours (IWH), and Client Side Working Hours (CSWH). These attributes are selected as criteria for evaluating the maintenance processes. Table 1 presents the values that describe the importance of each attribute according to the experts. The values in Table 1 indicate that for expert E1 the highest importance is given to scheduling and completion time and work in the company. For expert E2 the highest importance is given to scheduling and completion time and work at the client side. For expert E3 the highest importance is given to completion time and work in the company. For expert E4 the highest importance is given to completion time and work on the Internet and in the company.
TABLE 1. Criteria importance assigned by experts.

Criteria   ST   AT   CT   CWH   IWH   CSWH
E1         H    M    H    H     L     L
E2         VH   H    VH   L     M     VH
E3         M    L    H    VH    L     M
E4         L    M    VH   H     H     L

Experts also provided the evaluation of the five typical maintenance processes by using the same scale with five values. Table 2 presents the discrete values assigned to the selected criteria for the typical maintenance processes by expert E1. Similar tables were created with the values obtained from experts E2, E3 and E4 (Tables 3, 4 and 5 respectively).

TABLE 2. Evaluation of maintenance processes by expert E1.

Processes   Modification   Installation   Training   Administration   Support
ST          M              H              M          VH               VH
AT          M              L              M          M                H
CT          VH             M              H          H                L
CWH         VH             VL             L          L                VH
IWH         M              H              L          VH               M
CSWH        L              VH             VH         H                VL

TABLE 3. Evaluation of maintenance processes by expert E2.

Processes   Modification   Installation   Training   Administration   Support
ST          H              L              L          M                H
AT          L              H              L          M                H
CT          L              H              VH         VH               L
CWH         VH             VL             M          M                H
IWH         H              M              L          VH               H
CSWH        L              H              VH         VH               VL

TABLE 4. Evaluation of maintenance processes by expert E3.

Processes   Modification   Installation   Training   Administration   Support
ST          L              M              M          M                H
AT          M              M              L          H                L
CT          VH             VH             H          H                L
CWH         VH             L              VL         H                H
IWH         H              M              VL         VH               L
CSWH        H              VH             H          L                L

TABLE 5. Evaluation of maintenance processes by expert E4.

Processes   Modification   Installation   Training   Administration   Support
ST          H              L              M          L                H
AT          M              H              H          H                M
CT          VH             L              M          M                VH
CWH         H              M              VL         M                VH
IWH         M              VH             VL         H                VH
CSWH        L              M              VH         L                VL

D. Data Analysis

The overall scores for all five processes were calculated based on the evaluation of the selected criteria and processes by each expert separately. For example, the score U11 for the first alternative (modification process) according to the first expert was calculated as:

U11 = min{max(Neg(H),M), max(Neg(M),M), max(Neg(H),VH), max(Neg(H),VH), max(Neg(L),M), max(Neg(L),L)}
U11 = min{max(L,M), max(M,M), max(L,VH), max(L,VH), max(H,M), max(H,L)}
U11 = min{M, M, VH, VH, H, H} = M

The overall scores for each alternative (process) and for each expert were calculated analogously. Table 6 contains the scores for each alternative for each expert, as well as the total scores. The total score for each alternative (process) is calculated by using the aggregation function based on the OWA operator for q=5 and r=4, by equations (3) and (4). For example, the total score for the first alternative (modification process) is calculated as:

U1 = max{min(L,M), min(M,M), min(H,M), min(VH,L)}
U1 = max{L, M, M, L}
U1 = M

TABLE 6. Alternative evaluation.

Alternative (Processes)   E1   E2   E3   E4   Total score (OWA)
Modification              M    L    M    M    M
Installation              L    L    L    L    L
Training                  L    L    VL   L    L
Administration            L    M    M    M    M
Support                   L    VL   L    M    L

According to all experts, the modification and administration processes are more important than the installation, training and support processes. This conclusion is based on the total scores for all processes. However, a more detailed insight into the scores obtained from each expert revealed that the worst ranked process is training. The modification process is usually carried out through work in the company, while administration is based on work over the Internet and at the client side. These results are very important for planning maintenance activities in terms of the programmers available in the company and the available access to the information system at the client side, and for improving performance measures such as productivity and profitability [69][70]. In addition, these results are used in the assessment phase of the SPI project with the goal of obtaining a process ranking suitable for decision making regarding further improvements.
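To make the screening procedure concrete, the computations described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' tooling: the function names and data layout are assumptions, while the five-value scale, the negation operator, the unit score Uij = min over criteria of max(Neg(importance), score), and the OWA-based total score follow the description in the text.

```python
# Five-point linguistic scale S1..S5, as used throughout the paper.
SCALE = ["VL", "L", "M", "H", "VH"]
IDX = {s: i for i, s in enumerate(SCALE)}  # 0-based position on the scale

def neg(term):
    """Negation on the scale: Neg(S_i) = S_{q-i+1}, e.g. Neg(H) = L."""
    return SCALE[len(SCALE) - 1 - IDX[term]]

def unit_score(importances, scores):
    """Unit score U_ij = min over criteria of max(Neg(importance), score)."""
    paired = [max(neg(imp), s, key=IDX.get) for imp, s in zip(importances, scores)]
    return min(paired, key=IDX.get)

def owa_total(expert_scores, q=5):
    """Total score max_k min(Q(k), b_k), where b_k are the expert scores
    sorted descending and Q(k) = S_{int(1 + k(q-1)/r)} for r experts."""
    r = len(expert_scores)
    b = sorted(expert_scores, key=IDX.get, reverse=True)
    total = "VL"
    for k in range(1, r + 1):
        qk = SCALE[int(1 + k * (q - 1) / r) - 1]   # OWA weight Q(k)
        total = max(total, min(qk, b[k - 1], key=IDX.get), key=IDX.get)
    return total

# Expert E1: importance of (ST, AT, CT, CWH, IWH, CSWH) from Table 1,
# and E1's scores for the modification process from Table 2.
e1_importance = ["H", "M", "H", "H", "L", "L"]
e1_modification = ["M", "M", "VH", "VH", "M", "L"]

u11 = unit_score(e1_importance, e1_modification)   # -> "M"
total = owa_total(["M", "L", "M", "M"])            # -> "M"
```

Running the sketch on the values from Tables 1 and 2 reproduces U11 = M, and aggregating the four expert scores for the modification process reproduces its total score M from Table 6.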
IV. VALIDITY OF THE FINDINGS

Using these results together with the findings obtained from other sources (observation of the practice, interviews with the programmers, and trend analysis) increased the validity of the study [71] through triangulation of different data sources and different data analysis methods [72][73].

The main limitation of the study relates to external validity, since the SPI approach has been implemented in only one software company. Further checking of external validity, or more precisely generalizability, requires implementation of the approach in similar settings.
V. CONCLUSIONS

Ranking software maintenance processes, as a part of software process assessment, can significantly contribute to the efficiency of software maintenance improvement. The ranking is based on criteria referring to the timing of MR processing (scheduling, acceptance and completion) and to the consumption of working hours, which provide a basis for managing the workload of the programmers in the company and for charging the services to the clients. Experience from practice revealed that fuzzy screening provides an easy-to-use and reliable approach for ranking typical maintenance processes and for distinguishing the processes (alternatives) that deserve more attention in software process improvement projects.

The presented research can be continued in several directions. The first is the development of a method and an appropriate tool for converting real data available in maintenance repositories (in companies or on the Internet) into linguistic terms suitable for screening procedures. The second is the implementation of this software process improvement method, with maintenance process ranking based on FSM, in similar small software companies, which will provide further validation of the approach. The next is the application of FSM to ranking other software life-cycle processes when improving those processes. Finally, the implementation of this improvement approach, with FSM in the assessment phase, in other, non-software small companies is a promising research direction with a large possible area of application.
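The first direction can be illustrated with a small sketch. Assuming a repository exports numeric values (e.g., working hours per maintenance request), one simple, hypothetical conversion assigns linguistic terms by equal-frequency (quintile) binning; the thresholds and function names are illustrative assumptions, not a method from the paper.

```python
from statistics import quantiles

# Five-point linguistic scale used by the screening procedure.
SCALE = ["VL", "L", "M", "H", "VH"]

def to_linguistic(values):
    """Map each numeric value to a linguistic term by the quintile it falls into."""
    # Four cut points splitting the data into five equal-frequency bins
    cuts = quantiles(values, n=5)
    terms = []
    for v in values:
        k = sum(v > c for c in cuts)   # number of cut points below v
        terms.append(SCALE[k])
    return terms

hours = [2, 5, 1, 8, 3, 13, 6, 4, 21, 7]   # hypothetical hours per request
terms = to_linguistic(hours)               # e.g., 21 hours maps to "VH"
```

With such a mapping, every request falls into one of the five terms, so the screening procedure could be applied directly to repository data.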
ACKNOWLEDGMENTS

The Ministry of Education, Science and Technological Development of the Republic of Serbia supports this research under the project “The development of software tools for business process analysis and improvement”, project number TR32044, 2011-2014.

REFERENCES
[1] I. Richardson and C. G. von Wangenheim, “Guest Editors' Introduction: Why are Small Software Organizations Different?”, IEEE Software, vol. 24, no. 1, 2007, pp. 18-22.
[2] J. Bell, D. Crick and S. Young, “Small Firm Internationalization and Business Strategy: An Exploratory Study of 'Knowledge-Intensive' and 'Traditional' Manufacturing Firms in the UK”, International Small Business Journal, vol. 22, no. 1, February 2004, pp. 23-56.
[3] M. E. Fayad, M. Laitinen and R. P. Ward, “Thinking objectively: software engineering in the small”, Communications of the ACM, vol. 43, issue 3, March 2000, pp. 115-118.
[4] C. G. von Wangenheim, T. Varkoi and C. F. Salviano, “Standard based software process assessments in small companies”, Software Process: Improvement and Practice, vol. 11, issue 3, 2006, pp. 329-335.
[5] M. Lepmets, T. McBride and E. Ras, “Goal alignment in process improvement”, Journal of Systems and Software, vol. 85, issue 6, June 2012, pp. 1440-1452.
[6] D. E. Perry, N. Staudenmayer and L. G. Votta, “People, Organizations, and Process Improvement”, IEEE Software, vol. 11, issue 4, July 1994, pp. 36-45.
[7] K. H. Bennett and V. T. Rajlich, “Software maintenance and evolution: a roadmap”, In Proceedings of the Conference on the Future of Software Engineering (ICSE '00), pp. 73-87, 2000.
[8] S. L. Pfleeger, Software Engineering: Theory and Practice (3rd edition). Prentice Hall PTR, Upper Saddle River, NJ, USA. 2006.
[9] F. J. Pino, F. Ruiz, F. García and M. Piattini, “A software maintenance methodology for small organizations: Agile_MANTEMA”, Journal of Software: Evolution and Process, vol. 24, issue 8, December 2012, pp. 851-876.
[10] B. Boehm and V. R. Basili, “Software defect reduction top 10 list”, Computer, vol. 34, no. 1, January 2001, pp. 135-137.
[11] I. Sommerville, Software Engineering (6th edition). Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA. 2001.
[12] A. April, J. H. Hayes and A. Abran, “Software Maintenance Maturity Model (SMmm): the software maintenance process model”, Journal of Software Maintenance: Research and Practice, vol. 17, issue 3, 2005, pp. 197-223.
[13] A. April and A. Abran, “A Software Maintenance Maturity Model (S3M): Measurement Practices at Maturity Levels 3 and 4”, Electronic Notes in Theoretical Computer Science, vol. 233, Proceedings of the International Workshop on Software Quality and Maintainability (SQM 2008), pp. 73-87, 27 March 2009.
[14] R. Singh, “International Standard ISO/IEC 12207 Software Life Cycle Processes”, Software Process: Improvement and Practice, vol. 2, no. 1, 1996, pp. 35-50.
[15] L. C. Briand, V. R. Basili, Y-M. Kim and D. R. Squier, “A Change Analysis Process to Characterize Software Maintenance Projects”, In Proceedings of the International Conference on Software Maintenance (ICSM '94), pp. 38-49, 1994.
[16] N. F. Schneidewind, “Measuring and Evaluating Maintenance Process Using Reliability, Risk, and Test Metrics”, IEEE Transactions on Software Engineering, vol. 25, no. 6, Nov./Dec. 1999, pp. 769-781.
[17] Z. Stojanov, “Using Qualitative Research to Explore Automation Level of Software Change Request Process: A Study on Very Small Software Companies”, Scientific Bulletin of The “Politehnica” University of Timişoara, Transactions on Automatic Control and Computer Science, vol. 57 (71), no. 1, March 2012, pp. 31-40.
[18] Z. Stojanov, D. Dobrilovic and V. Jevtic, “Identifying properties of software change request process: Qualitative investigation in very small software companies”, In Proceedings of the 9th IEEE International Symposium on Intelligent Systems and Informatics (SISY 2011), pp. 47-52. September 8-10, 2011. Subotica, Serbia.
[19] P. Baranyi, P. Korondi, H. Hashimoto and M. Wada, “Fuzzy inversion and rule base reduction”, In Proceedings of the IEEE International Conference on Intelligent Engineering Systems (INES '97), pp. 301-306, 1997. Budapest, Hungary.
[20] R.-E. Precup, S. Preitl, M. Balas and V. Balas, “Fuzzy controllers for tire slip control in anti-lock braking systems”, In Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2004), vol. 3, pp. 1317-1322. 2004. Budapest, Hungary.
[21] D. Hladek, J. Vascak and P. Sincak, “Hierarchical fuzzy inference system for robotic pursuit evasion task”, In Proceedings of the 6th International Symposium on Applied Machine Intelligence and Informatics (SAMI 2008), pp. 273-277. 2008. Herľany, Slovakia.
[22] A. S. Aisjah and S. Arifin, “Maritime weather prediction using fuzzy logic in Java sea for shipping feasibility”, International Journal of Artificial Intelligence, vol. 10, no. S13, 2013, pp. 112-122.
[23] D. Falessi, G. Cantone, R. Kazman and P. Kruchten, “Decision-making techniques for software architecture design: A comparative survey”, ACM Computing Surveys, vol. 43, issue 4, October 2011, Article No. 33.
[24] A. Felfernig, M. Schubert, M. Mandl, F. Ricci and W. Maalej, “Recommendation and decision technologies for requirements engineering”, In Proceedings of the 2nd International Workshop on Recommendation Systems for Software Engineering (RSSE '10), pp. 11-15, 2010.
[25] L. G. Martinez, J. R. Castro, A. Rodríguez-Díaz and G. Licea, “Decision making fuzzy model for software engineering role assignment based on fuzzy logic and big five patterns using RAMSET”, Intelligent Decision Technologies, vol. 6, no. 1, 2012, pp. 59-67.
[26] J. Wang and Y-I Lin, “A fuzzy multicriteria group decision making approach to select configuration items for software development”, Fuzzy Sets and Systems, vol. 134, issue 3, 2003, pp. 343-363.
[27] E. Faghihini and N. Mollaverdi, “Building a maintenance policy through a multi-criterion decision-making model”, Journal of Industrial Engineering International, vol. 8, paper 14, 2012.
[28] V. Bauer and L. Heinemann, “Understanding API Usage to Support Informed Decision Making in Software Maintenance”, In Proceedings of the 16th European Conference on Software Maintenance and Reengineering (CSMR '12), pp. 435-440. March 27-30, 2012. Szeged, Hungary.
[29] Z. Stojanov, V. Brtka and D. Dobrilovic, “Evaluation of the Software Maintenance Tasks Based on Fuzzy Screening”, In Proceedings of the IEEE 11th International Symposium on Intelligent Systems and Informatics (SISY 2013), pp. 375-379. September 26-28, 2013. Subotica, Serbia.
[30] Z. Stojanov, V. Brtka and D. Dobrilovic, “Evaluating Software Maintenance Processes in Small Software Company based on Fuzzy Screening”, In Proceedings of the 9th IEEE International Symposium on Applied Computational Intelligence and Informatics (SACI 2014), pp. 67-72. May 15-17, 2014. Timisoara, Romania.
[31] W. M. V. Chang and T. C. E. Cheng, “A process improvement choice model”, Knowledge and Process Management, vol. 6, issue 4, 1999, pp. 189-204.
[32] CMMI Product Team, Appraisal Requirements for CMMI, Version 1.1 (ARC, V1.1). Technical Report CMU/SEI-2001-TR-034. 2001. Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA, USA.
[33] ISO/IEC 15504. Information Technology – Process Assessment (Parts 1-5) (SPICE). Geneva, Switzerland.
[34] ISO/IEC TR 29110:2011. Software engineering – Lifecycle profiles for Very Small Entities (VSEs) (Parts 1-5). 2011. Geneva, Switzerland.
[35] I. E. Espinosa-Curiel, J. Rodríguez-Jacobo and J. A. Fernandez-Zepeda, “A framework for evaluation and control of the factors that influence the software process improvement in small organizations”, Journal of Software: Evolution and Process, vol. 25, no. 4, 2013, pp. 393-406.
[36] T. Dybå, “Factors of software process improvement success in small and large organizations: an empirical study in the Scandinavian context”, In Proceedings of the 9th European Software Engineering Conference held jointly with the 11th ACM SIGSOFT International Symposium on Foundations of Software Engineering (ESEC/FSE-11), pp. 148-157, 2003.
[37] G. Coleman and R. O'Connor, “Using grounded theory to understand software process improvement: A study of Irish software product companies”, Information and Software Technology, vol. 49, issue 6, June 2007, pp. 654-667.
[38] F. J. Pino, F. García and M. Piattini, “Software process improvement in small and medium software enterprises: a systematic review”, Software Quality Journal, vol. 16, no. 2, June 2008, pp. 237-261.
[39] D. Mishra and A. Mishra, “Software Process Improvement in SMEs: A Comparative View”, Computer Science and Information Systems, vol. 6, issue 1, June 2009, pp. 111-140.
[40] H. Oktaba and M. Piattini, Software Process Improvement for Small and Medium Enterprises: Techniques and Case Studies (1st ed.). Information Science Reference, Hershey, PA. 2008.
[41] P. Kuvaja, J. Palo and A. Bicego, “TAPISTRY—A Software Process Improvement Approach Tailored for Small Enterprises”, Software Quality Journal, vol. 8, issue 2, 1999, pp. 149-156.
[42] K. Kautz, “Software process improvement in very small enterprises: does it pay off?”, Software Process: Improvement and Practice, vol. 4, issue 4, December 1998, pp. 209-226.
[43] P. Clarke and R. V. O'Connor, “The influence of SPI on business success in software SMEs: An empirical study”, Journal of Systems and Software, vol. 85, issue 10, October 2012, pp. 2356-2367.
[44] European Commission, The new SME definition: User guide and model declaration. 2005.
[45] N. Baddoo, T. Hall and D. Wilson, “Implementing a people focused SPI programme”, In Proceedings of the 11th European Software Control and Metrics Conference and the Third SCOPE Conference on Software Product Quality, pp. 373-381. April 18-20, 2000. Munich, Germany.
[46] F. Pettersson, M. Ivarsson, T. Gorschek and P. Ohman, “A practitioner's guide to light weight software process assessment and improvement planning”, Journal of Systems and Software, vol. 81, issue 6, 2008, pp. 972-995.
[47] F. J. Pino, C. Pardo, F. García and M. Piattini, “Assessment methodology for software process improvement in small organizations”, Information and Software Technology, vol. 52, issue 10, 2010, pp. 1044-1061.
[48] J. A. Calvo-Manzano Villalón, G. Cuevas Agustín, T. San Feliu Gilabert, A. De Amescua Seco, L. García Sánchez and M. Pérez Cota, “Experiences in the Application of Software Process Improvement in SMEs”, Software Quality Journal, vol. 10, no. 3, 2002, pp. 261-273.
[49] D. K. Dunaway and S. Masters, CMM-Based Appraisal for Internal Process Improvement (CBA IPI): Method Description. Technical Report CMU/SEI-96-TR-007. Carnegie Mellon University, Software Engineering Institute, Pittsburgh, USA. 1996.
[50] R. R. Yager, “Fuzzy Screening Systems”, in R. Lowen and M. Roubens (eds.), Fuzzy Logic: State of the Art, Kluwer, Dordrecht, Netherlands. 1993, pp. 251-261.
[51] C. Carlsson and R. Fullér, “On fuzzy screening systems”, In Proceedings of the EUFIT '95 Conference, pp. 1261-1264. August 28-31, 1995. Aachen, Germany. Verlag Mainz, Aachen, Germany.
[52] S. Wibowo and H. Deng, “A fuzzy rule-based approach for screening international distribution centres”, Computers and Mathematics with Applications, vol. 64, issue 5, September 2012, pp. 1084-1092.
[53] R. R. Yager, “On ordered weighted averaging aggregation operators in multi-criteria decision making”, IEEE Transactions on Systems, Man and Cybernetics, vol. 18, issue 1, January/February 1988, pp. 183-190.
[54] R. R. Yager, “Families of OWA operators”, Fuzzy Sets and Systems, vol. 59, issue 2, 1993, pp. 125-148.
[55] R. R. Yager, “Constrained OWA aggregation”, Fuzzy Sets and Systems, vol. 81, issue 1, 1996, pp. 89-101.
[56] D. Filev and R. R. Yager, “On the issue of obtaining OWA operator weights”, Fuzzy Sets and Systems, vol. 94, issue 2, 1998, pp. 157-169.
[57] C. Carlsson, R. Fullér and S. Fullér, “OWA Operators for doctoral student selection problem”, in R. R. Yager and J. Kacprzyk (eds.), The Ordered Weighted Averaging Operators: Theory, Methodology, and Applications, Kluwer Academic Publishers, Boston, USA. 1997, pp. 167-178.
[58] J. Lan, Q. Sun, Q. Chen and Z. Wang, “Group decision making based on induced uncertain linguistic OWA operators”, Decision Support Systems, vol. 55, issue 1, April 2013, pp. 296-303.
[59] G. Sánchez-Hernández, F. Chiclana, N. Agell and J. C. Aguado, “Ranking and selection of unsupervised learning marketing segmentation”, Knowledge-Based Systems, vol. 44, May 2013, pp. 20-33.
[60] M. Q. Suo, Y. P. Li and G. H. Huang, “Multicriteria decision making under uncertainty: An advanced ordered weighted averaging operator for planning electric power systems”, Engineering Applications of Artificial Intelligence, vol. 25, issue 1, February 2012, pp. 72-81.
[61] R. Sadiq, M. J. Rodríguez and S. Tesfamariam, “Integrating indicators for performance assessment of small water utilities using ordered weighted averaging (OWA) operators”, Expert Systems with Applications, vol. 37, issue 7, July 2010, pp. 4881-4891.
[62] M-C. Dan, “Why Should University and Business Cooperate? A Discussion of Advantages and Disadvantages”, International Journal of Economic Practices and Theories, vol. 3, no. 1, 2013, pp. 67-74.
[63] R. L. Glass, “Guest Editor's Introduction: The State of the Practice of Software Engineering”, IEEE Software, vol. 20, no. 6, 2003, pp. 20-21.
[64] Z. Stojanov, D. Dobrilovic and J. Stojanov, “Analyzing Trends for Maintenance Request Process Assessment: Empirical Investigation in a Very Small Software Company”, Theory and Applications of Mathematics & Computer Science, vol. 3, no. 2, 2013, pp. 59-74.
[65] A. Rainer and T. Hall, “A quantitative and qualitative analysis of factors affecting software processes”, Journal of Systems and Software, vol. 66, no. 1, 2003, pp. 7-21.
[66] T. Gorschek, Software Process Assessment & Improvement in Industrial Requirements Engineering. PhD Dissertation. School of Engineering, Department of Systems and Software Engineering, Blekinge Institute of Technology. Karlskrona, Sweden. 2004.
[67] T. Dybå and N. B. Moe, “Rethinking the Concept of Software Process Assessment”, In Proceedings of the European Software Process Improvement Conference (EuroSPI '99), Learn from the Past – Experience the Future. October 25-27, 1999. Pori, Finland.
[68] M. Kajko-Mattsson, “Problems within front-end support”, Journal of Software Maintenance and Evolution: Research and Practice, vol. 16, issue 4-5, 2004, pp. 309-329.
[69] I. Alsyouf, “The role of maintenance in improving companies' productivity and profitability”, International Journal of Production Economics, vol. 105, no. 1, 2007, pp. 70-78.
[70] M. Kans and A. Ingwald, “Common database for cost-effective improvement of maintenance performance”, International Journal of Production Economics, vol. 113, no. 2, 2008, pp. 734-747.
[71] B. A. Kitchenham, S. L. Pfleeger, L. M. Pickard, P. W. Jones, D. C. Hoaglin, K. El Emam and J. Rosenberg, “Preliminary Guidelines for Empirical Research in Software Engineering”, IEEE Transactions on Software Engineering, vol. 28, no. 8, 2002, pp. 721-734.
[72] L. Bratthall and M. Jørgensen, “Can you Trust a Single Data Source Exploratory Software Engineering Case Study?”, Empirical Software Engineering, vol. 7, no. 1, 2002, pp. 9-26.
[73] J. Miller, “Triangulation as a basis for knowledge discovery in software engineering”, Empirical Software Engineering, vol. 13, no. 2, 2008, pp. 223-228.