A PRELIMINARY SURVEY OF METHOD USE IN THE UK

Colin Hardy, Barrie Thompson & Helen Edwards
School of Computing and Information Systems, University of Sunderland, UK.

Occasional Paper No: 94-12 May 1994

Commercial Software Engineering Group, School of Computing and Information Systems, University of Sunderland, Priestman Building, Green Terrace, Sunderland, SR1 3SD. Tel: (091) 5152769

© School of Computing and Information Systems, 1994.

1. Introduction

A project is currently being undertaken in collaboration between the University of Sunderland and the Government Centre for Information Systems (CCTA). CCTA is responsible for the design and future development of SSADM (Structured Systems Analysis and Design Method), the UK Government standard structured method for system development. The background to this project has been described in depth in a previous article (Hardy, 1994), but in order to provide a framework for this document it is useful to briefly describe some of its aims. One of the intentions of the project is to determine the factors associated with the usage of structured methods within companies across the UK, with particular reference to the use of customisation. In addition, since one of the outcomes of this research will be the evaluation and potential development of software to assist in the process of method customisation, it was important to discover whether companies would be more interested in a tool to support the choice of method to suit a particular project, or one to assist in the choice of techniques within a method. The resolution of these issues has, in part, been facilitated by undertaking a preliminary survey of companies. The following sections of this report detail the design of the questionnaire used for the survey, the aims of the survey, and the results of processing the replies.

2. Assumptions underlying the development of the questionnaire

The questionnaire design and development followed an established and rigorous procedure. It was developed to take into consideration a number of factors:

• Utilising non-technical wording of the questions
• Limiting the type of possible responses
• Maximising the response options available
• Ensuring the confidentiality of the respondent

2.1 Utilising non-technical wording of the questions

Given that the questionnaire was to be distributed to a random sample of companies (see the following section), it was necessary to use as few technical terms as possible. This was because the sample was likely to include a number of companies who had limited computer-based IT experience, but who were still involved in the design, implementation and maintenance of information systems. One of the major problems associated with this aspect of the development was providing sufficient explanation of each question to the respondent, whilst minimising the effort spent in the reading process.

2.2 Limiting the type of possible responses

It was decided that the responses to the questions should be constrained, in order to allow for statistical analysis of the results. Multiple-choice questions were used, because they allow for discrete responses from a range of possible options. Two of the limiting factors in the use of this type of question are:

1. Options offered may not adequately reflect the respondent's ideal response.
2. A large number of options increases the questionnaire's bulk.


Between these two extremes some compromise must be reached. It was decided that the questions should err on the side of breadth of choice, in order that the questionnaire would have maximum relevance to the wide sample frame. Unfortunately, this meant that when complete, the questionnaire ran to 13 sides. This does not mean that the respondent was expected to complete all sections (the breakdown of the questionnaire is dealt with in subsequent sections), but it is accepted that the sheer bulk of the questionnaire may have deterred some individuals from attempting to complete it.

2.3 Ensuring the confidentiality of the respondent

It was one of the key intentions of the questionnaire that it should maintain the confidentiality of the information provided by companies. In order to achieve this, it was decided to eliminate all reference to individuals and their companies from the body of the questionnaire. The address of the company concerned was already known from the sampling procedure, and all questionnaires were addressed to the IT/Computing Manager of the company; consequently, there was little point in duplicating this information within the questionnaire itself. In order that returned questionnaires could be associated with the company to which they were sent, each company was issued with a unique code number, which was duplicated on each of the documents distributed to the respondents. In addition, the respondent was repeatedly assured (at relevant points within the documents) that any information provided would be held in confidence, and would only be used for the purpose of analysis. It was believed that if respondents were requested to put their name to their responses, they might be less likely to respond openly about the success or failure of procedures used within their department.

3. Questionnaire structure, distribution and control

The questionnaire is divided into five sections, which are described in more detail later, but in overview:

Section A acquires background information about the company and the department of which the respondent is a member. This information includes the type and size of the company, and the nature of the services provided by the department.

Section B is only completed if the respondent has indicated that either or both of systems analysis and systems design are performed in house. This section is primarily concerned with whether the respondent's department utilises structured methods to support its systems development, the type of method used, the respondent's perceptions of those methods, and whether the method is customised to suit particular project needs.

Section C is only to be completed by those respondents who have not completed Section B. It is concerned with the respondent's perceptions of structured methods, and the types of techniques used within the department.

Section D should be completed by all respondents, and is concerned with the current usage of CASE tools, the envisaged workload of the department, and whether or not a tool to support method or technique choice would be attractive to various levels within the respondent's company.

Section E provides the respondent with the opportunity to make comments about the questionnaire.


3.1 Preliminary questionnaire

A preliminary questionnaire was distributed to staff within the School of Computing and Information Systems at the University of Sunderland who had relevant experience and expertise in the field of system development. They were requested to complete the questionnaire as if they were IT/Computing managers with responsibility for the control of development projects. It was accepted that, without a specific framework of on-going projects, their ability to provide accurate information was limited. Nevertheless, their input was deemed to be of importance in providing feedback on the relevance of the questions, as well as on the overall structure of the questionnaire.

3.2 Pilot questionnaire

On the basis of the results obtained, relatively minor adjustments were required to the existing structure. This resulted in a questionnaire which could then be distributed as a pilot to a number of companies which were known to have an IT department. Selection of this sample was made in a pseudo-random fashion from a document offering a wide variety of graduate opportunities within the UK (GET, 1994). It was decided to limit the number of companies within the pilot to twenty, due to the nature of the information to be obtained. The pilot study had as its aim the need to ask for feedback on the relevance of the document within existing departments. The results provided a useful comparison with the preliminary questionnaire as to the effectiveness of the questions asked.

It could be argued that the preliminary questionnaire could have been omitted. However, since the success of the questionnaire relied on the goodwill of companies who had higher priorities than completing a questionnaire, it was important to determine that it had general relevance to their current work. In addition, it allowed the questionnaire to be commented upon by staff at CCTA (the collaborating organisation). Each of the first two stages of this process provided cross-validation for the other, as well as highlighting its own individual viewpoint on the questionnaire's design.

There was a 40% return rate from the pilot study, without the necessity of issuing reminders. This is a higher return rate than would generally be expected from a questionnaire of this type. However, it may be accounted for by the small sample size, resulting in an atypical response pattern. Since the results of this study were for reference purposes only, and no statistical analysis was performed, it was decided that this number of returns would not influence the overall success of the project. As with the preliminary study, the results obtained suggested that the questions asked were appropriate to the individuals at whom they were aimed. Criticism was levelled that the questionnaire was too long, but a re-evaluation indicated that it would not be possible to significantly reduce the size of the document without losing important information. One option at this point would have been to divide the questionnaire into two, and distribute the halves as separate documents. The problem with this approach (and the reason why it was not pursued) was that it introduced additional experimental variables which were very difficult to control for; specifically, the primacy effect, whereby responses to the first questionnaire may differ qualitatively as well as quantitatively from those to the second. Conversely, to have sent each part of the questionnaire to different samples would have caused problems in the combination and interpretation of the two parts.


3.3 Sampling procedure

Having obtained confirmation that the questionnaire was applicable to the type of department activities of interest to the study, and that the questions were pitched at a level appropriate to the role of the potential respondents, the next stage was to acquire a sample from within the population. It was decided that in order to obtain sufficient returns to justify the effective use of statistics on the results, a minimum sample of 500 was required.

3.3.1 Determination of the population

The population of interest was UK-based companies. The reason why the population was not defined more precisely was the assumption that there is an increasingly widespread usage of computer-based technologies, particularly databases, both within and outside of IT departments. Whether the companies which use such technologies regard them as information systems is not of importance to the current study. It is believed that assuming all current and future developments of information systems are confined to existing IT departments would be a mistake. Similarly, if structured methods, and software engineering in general, are supposed to be relevant for most project developments, then limiting the population would not only omit a potentially large proportion of the system development population, but would also perpetuate the misconception that structured methods are only suited to large developments, and should be left to technically literate departments.

3.3.2 Determination of the sample frame

The sample frame was determined by the availability of a document which provides details of 60,000 graduate jobs and courses (GET, 1994). The reason for this choice was not only that it was readily available, but also that any company which advertised for graduate recruitment was likely to be relatively established, and its personnel literate, if not technically literate. This does not imply that systems development should be limited to a certain range of companies, but rather that, despite all efforts, certain technical terms and jargon had to be incorporated into the questionnaire; consequently, respondents would require a minimum amount of understanding in order to respond effectively. Of course, due to the assumptions underlying the questionnaire, there would be no way to judge whether individual companies had a PC, let alone a system development team handling a wide range of projects.

3.3.3 Determination of the sample

The overall sample size was determined as follows: within the GET 1994 directory, companies are divided into six broad categories in terms of the employment they offer. Since this questionnaire was not interested in the employment prospects within companies, these divisions could act as an arbitrary division of the sample frame. Each category is subdivided into a number of company types, each of which in turn relates to a number of individual companies. It was decided that 17 groups would be randomly selected (by means of a random number generation program) from each of the major categories, and that from each group, 5 companies would be randomly selected. This results in an overall sample size of 6 × 17 × 5 = 510. A sketch of this two-stage procedure is given below.
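The selection procedure can be sketched in modern terms as follows. This is a minimal illustration only: the 'directory' structure, the 'draw_sample' function and its seed are hypothetical, assuming the GET directory had been transcribed into a dictionary mapping each of the six employment categories to its company-type groups.

    import random

    # Hypothetical sketch of the two-stage random selection. Assumes:
    #   directory = {category: {group: [company, ...], ...}, ...}
    # with six employment categories; names and seed are illustrative.
    def draw_sample(directory, groups_per_category=17, companies_per_group=5,
                    seed=1994):
        rng = random.Random(seed)
        sample = []
        for category, groups in directory.items():
            # Stage 1: randomly select 17 company-type groups per category.
            for group in rng.sample(sorted(groups), groups_per_category):
                # Stage 2: randomly select 5 companies from each chosen group.
                sample.extend(rng.sample(groups[group], companies_per_group))
        return sample  # 6 categories x 17 groups x 5 companies = 510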


3.4 Contents of the materials distributed

Each company received a copy of the following documents:

• A covering letter - This explained the nature of the project and the enclosed documents, and emphasised the confidentiality of their responses
• A copy of the questionnaire
• A compliments slip - This was included to allow those who were interested to include their name and address so that they could receive a copy of the collated results. It was separate from the questionnaire, and did not include a code number, so that it could be stored separately from the questionnaire and there was no direct link between the two documents
• A return envelope - This did not include postage (due to limited resources)

A copy of the letter and the questionnaire are given in Appendices A and B of this paper.

3.5 Questionnaire distribution and control

One month after the initial distribution, a reminder letter was despatched to those companies who had not replied by that date. This letter included a reply slip, which offered the respondent the choice of either requesting a replacement questionnaire, or stating that the questionnaire was not relevant to their company. The reminder also included a return envelope (postage not included). It was decided that one month from the reminder would mark the closing date for the questionnaire. This date was not made available to the respondents, because it was assumed that after such an extended period, those who were going to return the questionnaire would have done so. In addition, setting a deadline for return may have either provided an incentive to return, or acted as a deterrent due to the pressure of a deadline. Had a deadline been set, there would have been no way to judge which of these effects, if any, had influenced the responses, and it would have added an unnecessary variable to account for in the analysis of the results.

3.6 Combination of data from pre- and post-reminder groups

Up to the date of the reminder letter, 14% of the sample had returned completed questionnaires (Figure 1), with a further 6% of the responses made in subsequent returns. It was therefore believed that a statistical comparison should be made between the two groups of respondents, to determine whether they could be regarded as a single sample. Due to the differing sample sizes, and the possibility that the variability between responses in one group might differ from the variance in the other group, it was decided to carry out t-tests which took these factors into consideration. t-tests were individually applied to all elements of each question in the questionnaire. Since there was no experimental hypothesis predicting that one group would have a significantly higher set of scores than the other, a two-tailed test was performed. Similarly, it was assumed that any differences that might have existed between the two groups could be detected at a significance level of p < 0.05. In no instance was there a significant difference between the responses obtained before and after the reminder letter. Therefore, in each case, there was insufficient evidence to reject the null hypothesis (that there was no difference between the two groups). Before making broad inferences on the basis of these results, it is important to raise a number of issues associated with the analysis of the differences between the two groups.


Firstly, all companies who responded after the reminder was issued stated that they performed systems analysis and/or systems design in-house (i.e. they did not complete Section C), thereby implying that there is a difference between the two groups even if the t-tests did not reflect it. Secondly, whilst it is very convenient to find that all of the t-tests failed to reach significance, it may be asked whether the test was too coarse, which would have resulted in a Type 2 error (concluding that there was no difference between the two sets of data when in fact there was). Another potential problem is the number of t-tests performed on the data (fifty-nine). It may be argued that, with such a large number of tests, by chance alone at least one of those results should have been significant. A related problem is that each of the tests was performed on the same set of data; with analyses of this type, there is a cumulative error introduced with each successive test on the same data. Given the large number of tests, this is an error factor which brings accurate interpretation of the results into doubt. Nevertheless, it is believed (with some reservation) that the results obtained in this questionnaire can be treated as if they were obtained from a single sample. The sketch below illustrates the form of test used, together with one standard way of compensating for the number of tests performed.
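The test described corresponds to Welch's two-tailed t-test, which accommodates unequal sample sizes and unequal variances. The following is an illustrative sketch only: the response values are hypothetical, and SciPy is assumed purely for demonstration (it is not the software used in the original analysis). A Bonferroni-style correction is shown as one conventional guard against the multiple-testing problem raised above.

    from scipy import stats

    # Hypothetical coded responses to one questionnaire element from the
    # pre-reminder and post-reminder groups (unequal sizes permitted).
    pre_reminder = [3, 4, 2, 5, 3, 4, 4, 2, 3, 5]
    post_reminder = [4, 3, 4, 5, 2]

    # equal_var=False gives Welch's t-test, which does not assume equal
    # variances or sample sizes; the returned p-value is two-tailed.
    t_stat, p_value = stats.ttest_ind(pre_reminder, post_reminder,
                                      equal_var=False)

    # With fifty-nine tests on the same data, a Bonferroni correction
    # compares each p-value against 0.05 / 59 rather than 0.05.
    n_tests = 59
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}, "
          f"significant after correction: {p_value < 0.05 / n_tests}")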

Figure 1 Overall response rate to the questionnaire

4. Questionnaire content

The questionnaire consists of five sections (A to E), each of which has been designed to elicit specific information. Subsequent sections of this paper will discuss the results obtained from the questionnaire, and our interpretation of them. Consequently, this is a useful point at which to discuss in more detail the nature of each of the sections, what was expected to be gained from them, and the relevance of particular questions.

Section A was developed to provide information about the organisation (A1, A2), the department (A3), its role (A4) and its resources (A5), for each of the respondents to the questionnaire. As has been noted (Section 3.3.3), the questionnaire sample was intended to be drawn from as wide a range of companies as possible, differing in both size and activity. Questions A1 to A3 were included to determine whether the responses obtained were similarly diverse, or were biased in some respect. Prior to distribution, it was not possible to determine whether organisations of a particular size or type would be more or less likely to respond to this type of questionnaire. In addition, the sample frame (GET, 1994) was unable to provide accurate information about the overall size of an organisation, or the relative size of any IT department within it. Although there is unlikely to be a direct correlation between organisation details and the nature of the systems developed, it was assumed that there would be more incentive for larger companies to invest in information system development and maintenance, because the demand for information increases with the number of individuals. Question A4 is concerned with the type of service offered by the respondent's department, and the demands these services make on its resources. It was assumed that the responses to this question would provide an indication of how important the development process is to the respondents, and how effective such activities have been in terms of the maintenance incurred. Questions A5 and A6 aim to determine the nature of the computers and operating systems used within each department; the major reason for these questions was to discover the most common platform that could be used in any future software development. All organisations responded that PCs operating under MS-DOS were used in their departments. The final question in this section was used to determine whether the departments concerned took personal responsibility for the development of systems, or bought this knowledge in. It was accepted that departments may require external support in their developments, and so this option was also provided.

Section B was completed by those departments who indicated that they performed the analysis and design of their systems (with or without external assistance). Question B1 aimed to isolate the size and frequency of the systems development projects undertaken by the departments surveyed. Since many systems development methods have been devised to be most suitable for large systems projects, the size of the projects undertaken may reflect the likelihood of method usage. The next question asks which structured method, if any, is used in the respondent's department. Not only is it useful to know what proportion of departments use which methods, but also (given the nature of the current collaborative project) what percentage of departments are currently using SSADM. Given that the aim of the project is the investigation of the customisation of structured methods for specific projects, the next three questions within this section aim to determine whether customisation occurs, where the knowledge of this activity has been obtained, and whether it is done as a matter of policy or is determined on a project-by-project basis. The remaining two questions within Section B offer a range of first positive and then negative statements about systems analysis and design methods, and the respondents were expected to indicate the degree to which they agreed with each statement. These questions were designed to provide an indication of whether or not the respondents were satisfied with the effectiveness of their current method, and of the extent to which structured methods fulfil in practice the benefits described by their proponents.

Section C was designed to be completed by those departments who indicated in question A7 that all analysis and design of systems was left to external agencies. It was also assumed that a proportion of those departments who do not develop systems by means of a structured method would also complete this section. Question C1 provides the respondents with the opportunity to indicate the extent to which they agree with negative statements about methods.
Given that this group of individuals has not had experience of system design using structured methods, it was assumed that the responses would reflect why such methods have not been widely adopted. The next question asked whether a tool to support technique choice from a method for particular projects would be attractive. This allows for a comparison between this group of potential recruits to structured methods and the more general results for the same question offered in D7. In order to determine what techniques these non-method users were familiar with, question C3 offered a range of techniques used both within and independently of structured methods.


Section D provided a range of questions covering subject areas which were relevant to the study, but did not fit the other sections. The first four questions were concerned with the CASE tool support currently available within the respondents' departments, and whether the tools supported particular structured methods. This allowed for the testing of hypotheses as to whether the choice of tool influenced the choice of method, or vice versa. In addition, usage of tools does not necessarily indicate their effectiveness or popularity; consequently, question D4 aimed to determine how well the tools used supported the needs of the department. Question D5 was concerned with the anticipated workload of the respondent departments for the next two years. This question was included because, although A4 asks about the current services offered, it does not indicate where the priorities of the department lie for future development. The final two questions in this section ask whether a tool to support either method choice, or technique choice within a method, would be attractive to the senior management of the organisation, the respondent's department or the system design team. It was intended that the questionnaire would provide an indication as to which type of tool should be investigated and potentially implemented as the outcome of the current research.

Section E was included to allow the respondents the opportunity to provide any comments they may have had about the questionnaire.

5. Analysis of the questionnaire results

The raw data collected from the returned surveys are detailed in Appendix C of this report. These data were processed using Microsoft Excel. The overall response to the survey is shown in Figure 1: 20% of the companies responded positively to the questionnaire (that is, returned questionnaires that could be analysed). This, whilst low, is not unusual in a postal questionnaire of this type (Stobart 1990, Edwards 1990, Davis 1990). In fact, given the lack of definition in the choice of target population, it may be argued that the response rate has been relatively good. Conversely, it is not possible to draw inferences about the nature of the companies who did not respond, and it should be borne in mind that the responses we did obtain may not adequately represent the views of the majority. Consequently, effort will be made not to make unsupported statements without reservation.

Figure 1 Percentage of responses to the questionnaire


5.1 Company and departmental analysis

The number of individuals within the companies who returned questionnaires was relatively evenly balanced (Figure 2). This differs from the number of individuals within the departments who have responsibility for information systems management, where the majority of departments have 20 or fewer, with 39% of the departments having between 1 and 10 individuals (Figure 3).

Figure 2 Percentage number of people within the organisation

Figure 3 Percentage number of people within the system development department


People in company              1-100     101-500    501-2000   2001+
Average department size        1-10      11-20      21-50      51-100
Most common department size    1-10      1-10       11-20      51-100
Minimum department size        1-10      1-10       1-10       1-10
Maximum department size        21-50     101-150    101-150    300+

Table 1 Statistics based on the number of people in the system development department, for companies of varying sizes

To a certain extent, the disparity between the number of people within the company and the number of individuals involved in system development is to be expected, given that the sample was taken from a wide range of companies. This is because it cannot be assumed that a company will have an IT department merely because of its size; system development may often lie in the hands of individuals co-opted to the task as a result of their experience with computers and/or systems. Table 1 makes the situation somewhat clearer. As would be expected, as the number of people within the company increases, so does the average department size. One reason for the apparently large number of small departments can be seen from the minimum department size, which is always between 1 and 10 individuals, even for the largest of companies. This preponderance of small departments appears to support the argument that companies, regardless of their size, rely on a limited amount of in-house expertise.

5.2 Services offered by the departments

The questionnaire next aimed to isolate the key responsibilities of the departments concerned. It was believed that services differed not only in the frequency with which they were used, but also in terms of the demands that a particular service made on the resources of the department. Consequently, respondents were asked about a range of services, and whether each service was a central activity of the department, or was an occasional task which placed low or high demands on the resources available (Figure 4).


Figure 4 The type of service offered by the department, and the demands of that service on departmental resources.

As can be seen from Figure 4, there is a clear indication that departments are involved in providing a wide range of services. Those which can be regarded as being central to their role, or which place high demands on their resources, may be divided into two categories: system development (System Analysis, Design and Testing) and system maintenance (System Maintenance, System Support and User Liaison). User Liaison has been included in system maintenance on the assumption that contact with the department would primarily be associated with maintenance issues. System development and maintenance are performed by over 80% of the departments, with system maintenance and support being regarded as more demanding, both in terms of the centrality of the task and the demands placed on the resources of the department. The concept of system maintenance is taken one step further in Figures 5, 6 and 7. Here, the results described in Figure 4 are broken down into those departments which use established structured methods (Figure 5), those who use in-house methods (Figure 6), and those who use no method at all (Figure 7). It might be assumed that each successive figure would indicate an increasing demand for maintenance being placed upon departments. This has not proved to be the case.

Figure 5 The level of maintenance carried out by departments who use established structured methods for developing systems, and the demands of that service on departmental resources.


Figure 6 The level of maintenance carried out by departments who use in-house methods for developing systems, and the demands of that service on departmental resources.

Figure 7 The level of maintenance carried out by departments who do not use methods for developing systems, and the demands of that service on departmental resources

Taking each of the services in turn: System Maintenance remains as central a task regardless of whether or not a method is used. There is a difference in the situation where this service is an occasional task, but one which makes high demands on the department's resources. Here there is a counter-intuitive reduction in the demands placed on the department between the structured method and no-method conditions on the one hand, and in-house method developments on the other. If taken in isolation, this result suggests that in-house methods are more effective in reducing the demands placed on a department for maintenance purposes. With regard to System Support, there is a clear increase in the importance of this service for both the in-house and no-method categories, in comparison with those who use structured methods. It is at this point that it should be questioned whether the category names offered to respondents have been interpreted in the manner intended. It was assumed that System Maintenance would include not only updates and amendments to the system, but also de-bugging of any errors found by the users, whereas System Support was intended to relate to technical support with regard to how to obtain certain functionality from the system, but not additions to that functionality. It may be suggested, however, that respondents have interpreted the System Support option as including both de-bugging and the addition of functions to the system. If this is the case, then it may account for the lack of change in the System Maintenance results, with a more marked increase in the centrality of System Support for the no-method and in-house method conditions.


The interpretation of the responses to the User Liaison option is more problematic. Whilst there is an increase in the centrality of the task between structured method use and the use of in-house methods, there is a clear decrease in its importance for the departments which use no methods. Similarly, there is a consistent increase in the percentage of departments which responded that User Liaison is an occasional task which places low demands on the department's resources. It may be argued (with reservation) that User Liaison occurs less in departments which use no methods because of the lack of structure to the systems, where liaison is more closely associated with the definition of System Support.

5.3 Determination of experience in system analysis and design

Having determined the services offered by departments, the questionnaire then aimed to divide the respondents into those who would have experience in the system development process, and those who bought in that expertise (Figure 8). Responses to this question determined whether respondents completed Section B or Section C of the questionnaire. Section B was structured to isolate information concerning the use or otherwise of structured methods, and respondents' perceptions of such methods. Section C is primarily concerned with the respondents' perceptions of structured methods, and the techniques with which they are familiar. Figure 8 indicates that most departments develop their own systems (with or without external support), and that handing over full responsibility for development to an external agency was not common.

6. Departmental experience in the systems development process

6.1 Size of projects undertaken

As noted above, Section B was aimed at those respondents who had familiarity with the system development process. Figure 9 represents the responses to question B1 (see Appendix B). The aim of this question was to determine the proportion of a department's time spent on projects of varying sizes. There is a clear indication from the diagram that the respondents to the questionnaire are primarily involved in projects of less than 20 man-months' duration. This finding is in keeping with the assumptions of the questionnaire, which were that the sample should be randomly chosen from as wide a range of companies as possible, with no pre-requisite for the existence of an IT department, nor for the size of the projects undertaken. Projects of longer duration are increasingly less common, with the exception of very large projects (141+ man-months), which are the norm more often than either large or medium-sized projects. This finding may reflect the recent movement towards the downsizing of system developments, where mainframe and mini-based systems are being transferred to more distributed developments which require shorter development times than medium to large scale projects. The presence of very large projects is likely to reflect departments which have a long-term investment in well-established system developments.


6.2 Methods used in systems analysis and design

Having determined the size of the projects undertaken, the next question aimed to determine which method, if any, is used in the development of systems by the department (Figure 9). The methods represented are those offered within the question. Whilst other methods exist, it was believed that these were the most common, and an 'Other' option was provided. It should be noted, however, that no other structured method was provided by respondents, with the exception of those developed as part of a CASE tool. This figure clearly indicates that, with regard to this sample, in-house methods are the most common type of method used. In-house methods are by definition peculiar to particular departments, and there is no specification as to what constitutes such a method; consequently, it is not possible to determine whether individual in-house methods are actually methods, or could more realistically be described as a range of techniques. Of the structured methods mentioned, SSADM is used by a greater percentage of the departments who responded than all the other structured methods added together. This is understandable given that it is the UK Government standard, and also that it is an open method which has no limitations on who can use it. Whether or not it is being used appropriately is a separate issue. A key point to be drawn from this figure is that only 44% of respondents reported using a recognised structured method or formal specifications. Contacts within the industry indicate that many 'in-house' methods are simply informal collections of techniques (rather than being closely structured like SSADM). Thus it is highly likely that a large percentage of those who responded to this question still do not take advantage of the proffered benefits of structured methods. This finding suggests that structured methods are still not as popular as their proponents would like them to be.

Figure 9 The percentage of method usage within the departments questioned


6.3 The extent to which methods are customised

As noted earlier, one of the principal aims of this questionnaire was to determine not only the extent of method usage within departments, but also whether the whole of each method is used during every development project. Figure 10 provides the responses obtained to this question. As can be seen, there is clear evidence that customisation of structured methods occurs in most of the departments who carry out system development. This confirms the assumption (Hardy, 1994) that customisation of methods for particular project needs is likely to be the norm.

Figure 10 The percentage of departments which either use the whole of their chosen method, or customise it to suit particular projects

6.4 Where departments acquire their knowledge of customisation

The results from the previous question made it clear that most departments who use a method customise it to suit the particular needs of the project development. The obvious question arising from this is 'where do such departments acquire the skills necessary to customise their methods?'; consequently, this was asked next within the questionnaire. Structured methods may support the concept of customisation, but there is often little documentary assistance within their manuals. Similarly, a review of the existing literature has made it clear that very little information has been published to supplement the basic method documentation on the customisation process (this is supported by the results outlined in Figure 11). As a result, it is assumed that most departments achieve success in customisation through the experience of trial and error.


Figure 11 indicates that 62% of the departments surveyed obtained their knowledge of customisation from experience (it is assumed that experience includes training within the department), with a further 29% buying in that knowledge from external consultancies. This supports the assumption made in the preceding paragraph about the source of customisation knowledge. Whilst departments with a broad base of knowledge in effective customisation are likely to require little external assistance in future projects involving customisation, departments relatively new to system development must either invest in external support (which does not help them to later develop their own systems, nor to maintain their existing systems), use the whole of a method (which is excessive and costly for their needs), or make potentially costly mistakes whilst learning to get customisation right. If such systems fail, it is likely that the method will be blamed for the failure, and not how the department has gone about customising it. This is likely to result in poor satisfaction rates for investment in structured methods, and this in turn may contribute to their slow rate of adoption. Several points should be raised about the use of method customisation. These include: when customisation is appropriate; which components, stages and techniques are required in given situations; how to maintain internal consistency within the method; what risks are involved in the choice of a particular customisation approach; and at what point the customisation process reduces the method to in-house or no-method status.

Figure 11 The source of a department's knowledge of method customisation. Values indicate the percentage of departments who responded to each option

In order to determine how departments perceive the methods they are using, the next two questions within the questionnaire provided the respondents with a series of positive and negative statements about such methods, and allowed for a range of responses. The overall responses to these questions are displayed in Figures 12 and 15.


6.5 Responses to positive statements about methods

Figure 12 provides the combined results from three groups of respondents: departments who currently use established structured methods, those who apply in-house methods, and those who use no method in their system development. Since this question concerns a department's perceptions of its current method, it is useful to distinguish between the first two types of department, whilst the third group, it is assumed, has not provided any meaningful responses. Consequently, Figures 13 and 14 represent a breakdown of structured method users (Figure 13) and in-house method users (Figure 14).

Figure 12 The extent to which departments agreed with positive statements about methods

There is an apparent difference between Figures 13 and 14, but before proceeding to provide an evaluation, it was deemed appropriate to carry out a statistical analysis of the results. Single-factor analyses of variance were carried out between the responses to each of the positive statements offered. Despite differences in the visual representations of the two figures, no significant result was obtained. This means that it is not possible to reject the hypothesis that there is no difference between the two groups. One interpretation of these findings is that, due to the data types assigned to the individual responses (1 = Strongly disagree → 5 = Strongly agree) and the relatively small number of responses to each option (maximum 20, for "System matches specification" and "System meets requirements" in Figure 13), the measurement was too coarse to detect differences between the two conditions. Nevertheless, the subsequent analysis should be regarded as subjective. A prior assumption of this analysis was that the results obtained from those departments using structured methods should be more positive than those associated with in-house methods, because of the comprehensive nature of the former and the potentially limited structure of the latter. However, when the absolute responses are converted into percentages of the total number of departments in each condition, the findings are not as clear cut. A sketch of the form of analysis applied is given below.
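By way of illustration only (the Likert codings are hypothetical, and SciPy is assumed purely for demonstration rather than being the software used in the original analysis), a single-factor analysis of variance between the two method conditions for a single statement takes the following form. With only two groups, the test is equivalent to a t-test.

    from scipy import stats

    # Hypothetical Likert codings (1 = Strongly disagree ... 5 = Strongly
    # agree) for one positive statement, split by method type.
    structured = [4, 3, 5, 4, 2, 4, 3, 4]
    in_house = [5, 4, 4, 3, 5, 4]

    # Single-factor (one-way) analysis of variance; a p-value above 0.05
    # means the null hypothesis of no difference between the two groups
    # cannot be rejected.
    f_stat, p_value = stats.f_oneway(structured, in_house)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")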


Questionnaire respondents frequently avoid choosing the extreme values on any scale; consequently, the results from the two positive options (Agree and Strongly agree) have been summed for discussion purposes (a sketch of this conversion is given below). As would be expected, given the statistical results detailed above, there are few conspicuous differences between the two conditions. Probably the most noticeable difference is that there is generally a consistently higher percentage of departments using in-house methods who indicate satisfaction with their method than is the case with structured methods. This is not a happy result, since it goes against the expectations of the proponents of structured methods, and is contrary to the tenets of software engineering. The problem is not so much that the results are slightly in favour of in-house methods, but rather that there is not a clear result in the opposite direction. It could be argued that, despite all efforts to the contrary, the sample of companies used in this questionnaire was biased. It may be that by not choosing only those companies who had IT departments we have eliminated a large proportion of those companies who make most (best?) use of structured methods. There is one clear result from these two figures which may support this argument: over 30% more in-house method user departments believe their method results in fewer errors in design than is the case for those who use structured methods. If the sampling procedure for the questionnaire has captured a range of departments who are relatively inexperienced in method use (either through their small size or through a lack of system development experience), then misuse of structured methods may result in more errors in design than would relatively ad-hoc development. This may be the result of either the complexity of the structured method used, or of mistakes in the customisation of the method which may lead to a loss of integrity within the method. It may be suggested that it is neither the question itself nor its structure which has resulted in the findings described above. This is because, in both conditions, less than 50% of departments believed that their method resulted in a reduction in maintenance costs. This is in keeping with the results obtained from the question associated with the services offered by departments (Section 5.2).
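A minimal sketch of this conversion, with hypothetical counts (the option labels and the condition total are assumed, not drawn from the raw data):

    # counts[option] = number of departments choosing each option for one
    # statement, within a single condition (hypothetical data).
    counts = {"Strongly disagree": 2, "Disagree": 5, "Neither": 8,
              "Agree": 12, "Strongly agree": 4}
    total_departments = 35  # all departments in the condition (assumed)

    # Convert absolute counts to percentages of the condition total, then
    # sum the two positive options into a single "positive total".
    percentages = {opt: 100 * n / total_departments
                   for opt, n in counts.items()}
    positive_total = percentages["Agree"] + percentages["Strongly agree"]
    print(f"Positive total: {positive_total:.0f}%")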


Figure 13 The extent to which departments agreed with positive statements about their current structured methods



Figure 14 The extent to which departments agreed with positive statements about their current in-house methods

6.6 Responses to negative statements about methods

As noted above, Figure 15 represents the overall responses to negative statements about the effectiveness of departments' current methods. As with the previous question, these have been broken down into the two principal categories of method, structured (Figure 16) and in-house (Figure 17), and the results obtained for both groups were statistically analysed to determine whether there is a significant difference between them. This involved applying analyses of variance to each of the options within the question. Each of the tests had a non-significant result, which means that it is not possible to reject the hypothesis that there is no difference between the two conditions. Since the data in both questions are of the same type, it is assumed that the rationale for this finding is likely to be the same as for the previous question: namely, that the measures used were too coarse, and of too limited a range, to isolate significant differences if they existed.

Figure 15 The extent to which departments agreed with negative statements about methods


Analysis of Figures 16 and 17 has been undertaken in the same way as for the previous question. That is, the absolute number of departments who responded to a particular option within the question has been converted to a percentage of the total number of departments within that condition. Similarly, the two positive options ('Agree' and 'Strongly agree') have been summed. Due to the negative wording of the statements offered, positive responses imply that these departments hold negative perceptions of their current method. Comparison of the positive totals, with two exceptions, suggests that those departments who are using structured methods are more negative about their method than are departments who use in-house methods. This is in keeping with the results obtained in the previous question, and as such reinforces the implication that those who use structured methods are not receiving the full benefits of such a method. It was intended that this question would highlight where the limitations of the methods lie. This was believed to be particularly important in the case of structured methods, where the responses to the question may indicate reasons for their lack of uptake. Consequently, each of the statements and their results will be discussed individually.

Firstly, it was important to determine whether it was the method itself, or the techniques it contained, which caused difficulty. Responses to the first negative statement ('Techniques are too complicated') suggest that, regardless of the method type, approximately one third of the departments who responded believed that the techniques used were too complicated. Unfortunately, due to the structure of the question, it was not possible to determine who finds the techniques complicated (groups such as the users, the systems developers or the project managers), nor whether the complication lay in their execution or their interpretation. This means that it is only possible to make general statements about the responses obtained. The second statement ('There are too many techniques') was concerned with whether the respondents believed that their method contained too many techniques. Here there was a suggestion that in-house method users had a slightly higher level of agreement with this statement than was the case for structured method users; however, in both conditions over 50% of departments agreed with the statement. It could be argued that this reflects a legacy of developers who would prefer a less structured approach to system development. Responses to the statement that the projects undertaken by departments are too short for the full method suggest that there is a higher level of agreement among structured method users. This is understandable given that structured methods are designed to be generically applicable, and in certain circumstances contain more techniques than would be appropriate for a particular project. The results obtained for the next statement (which argues that the method provides poor coverage of the life cycle) indicate that there is relatively higher support for this position from structured method users. Although it is not possible to deduce accurate reasons for this finding from the results obtained, it may be suggested that current structured method users believe that they are tied into a method which does not answer all their business needs. Conversely, in-house method users appear to believe that their method is sufficiently flexible in its ability to encompass the relevant areas of the life cycle applicable to the projects undertaken.


The next statement ('Poor help for method choice') concerns the extent to which a method is suitable for all types of problem domain and, by implication, is prepared to make recommendations about possible alternatives if the method is unsuitable for the project. It was anticipated that this statement would not result in a high level of agreement, particularly for structured method users, because method designers are concerned with selling their own product, emphasising its generalisability to as many situations as possible, and not advertising its limitations. With regard to those who use structured methods, 40% of departments agreed with the statement. This suggests that there is a demand for interfaces to, or recommendations of, other methods, but that departments do not know where to acquire such guidance. Of those who use in-house methods, 41% indicated that the question was not applicable. This is to be expected, given that such a method has been developed with the specific characteristics of the project in mind, and it is assumed that a decision had already been made that this approach was more suitable than that offered by structured methods. One of the most common perceptions of the process of method customisation is probably the determination of which techniques within the method are essential to the current project, and which can be safely ignored. When the respondents were offered a statement suggesting that the method used provided poor support for the choice of techniques, the results indicate that there was a similar level of agreement in both conditions (35%). This statement is particularly relevant to those departments who use structured methods, since it is assumed that in-house methods are designed to be project dependent. It is believed that there is limited documentary advice available to support the process of structured method customisation; therefore, a relatively low level of agreement with this statement is somewhat inconsistent. Although the question was worded to be as unambiguous as possible, subsequent evaluation suggests that, with regard to this statement, the respondents may have interpreted 'support for technique choice' as being support from knowledgeable colleagues, and not from the method itself. This would account for the result obtained, and would reinforce the finding that most knowledge about customisation is obtained from experience (Figure 11). The final statement offered was that the cost of the method exceeds the returns provided. As would be expected, both groups clearly disagreed with this statement, with those using structured methods being slightly less positive in their responses. This question completed Section B, and the respondents were then requested to go to Section D.


Figure 16 The extent to which departments agreed with negative statements about their current structured methods


Figure 17 The extent to which departments agreed with negative statements about their current in-house methods

7. Perceptions of methods from those who are not involved in system development

Section C was completed by those who had indicated (in Section A) that they bought in expertise to perform their systems analysis and design (Figure 8). It was also anticipated that a small proportion of non-method users, because they do no structured analysis and design, would omit Section B and proceed to Section C. The major point to note about the results obtained from Section C is that they are based on the responses of only twenty departments, and as such must have limited generalisability.

7.1 Responses to negative statements about methods

The first question in this section largely mirrors the last question in Section B, in that it allows the respondents to respond to negative statements about structured methods. However, here it is explicitly stated that the statements may be given as reasons for not using such methods (Figure 18). With regard to the first statement, that methods are too complicated, 40% of the respondents indicated that the statement was not applicable. This would be expected in a situation where responsibility for the analysis and design process lies elsewhere. However, a similar percentage agreed with the statement, whilst no department strongly disagreed with it. This finding appears to be supported by the results obtained from the responses to the next two statements: there was 60% agreement with the proposition that methods take too long to use, and 70% agreement with the statement that there are more techniques within methods than are needed by the department. It could be tentatively argued from these results that those who are not actively involved in the use of methods perceive them as large, relatively inflexible structures; this may consequently act as a deterrent to their uptake. When offered a statement suggesting that the cost of using methods exceeds the returns, there was strong agreement (40%). It could be argued that this may be the result of the respondents buying in expertise from outside; this would place an additional burden on departments, and may distance them even further from understanding the benefits of such an investment.


The remaining statements ('Insufficient guidance for method choice' and 'Insufficient guidance on technique choice') failed to provide any clear results for interpretation. This may reflect either a lack of understanding of the methods used (through limited involvement in the development process), or a lack of interest in the mechanics of system development.

7.2 Determining whether a tool to support technique choice would be attractive

The next question asks whether a tool to assist in technique choice (from within a method) would be attractive to one of three groups. This question is also present in Section D, and was included here for comparison purposes. Consequently, discussion of the findings will be left until Section 8.5.

Figure 18 The extent to which departments agreed with negative statements about structured methods

7.3 The use of individual techniques

The final question within Section C aims to determine how this group of departments represent knowledge of their systems. As can be seen from Figure 19, there is still widespread use of flowcharts, despite the fact that they become increasingly hard to read and interpret as program size grows (Thompson, 19 ). The use of DFDs (Data Flow Diagrams), ERDs (Entity Relationship Diagrams) and ELHs (Entity Life Histories), which are central diagramming techniques within SSADM, reflects the current popularity of that method. To ensure that bias was not built into this question, respondents were given the opportunity to name other techniques that they might use; however, no further techniques were offered.


Figure 19 The percentage of departments who use particular techniques

8. Other factors concerned with the development process

Section D was to be completed by all respondents, and covers a number of issues which were not specifically relevant to other sections. The first four questions investigate the current usage of CASE tools by departments within the sample, and the extent to which they support current method use.

8.1 The use of CASE to support analysis and design

The first question in Section D asks the respondents which CASE tools are currently used within their departments for analysis and design purposes. Figure 20 represents the responses for the nine tools offered, whilst Table 2 contains those tools which were reported under the 'Other' option. 45 departments stated that they currently used CASE; this represents 43% of those departments who responded. A survey of CASE tool usage in companies who were known to have data processing (DP) departments (Stobart, 1991) found that, for a similar sample size (112), 18.2% of departments were currently using CASE, whilst a further 25.5% were evaluating such tools. Before making a brief comparison of the findings, it should be noted that the current questionnaire was not designed to replicate the earlier study, and consequently the two can only be compared in general inferential terms. In terms of overall responses, it appears that since 1990 there has been an increase in the number of departments who are taking advantage of CASE tools to support their system development. This may be a result of their wider availability; certainly, a larger number of different commercially available tools were reported in the current study (28) than in the earlier research (16).
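A minimal arithmetic check of this comparison is sketched below. The respondent total of 105 for the current survey is inferred from 45 departments representing 43% of respondents; it is an assumption made for illustration, not a figure quoted above.

```python
# Sketch of the CASE-uptake comparison between the two surveys.
# The respondent total of 105 is an inferred assumption (45 / 0.43 ~= 105).
current_users, current_respondents = 45, 105
earlier_rate, earlier_sample = 0.182, 112

current_rate = current_users / current_respondents
print(f"current survey: {current_rate:.1%} of {current_respondents} use CASE")
print(f"earlier survey: {earlier_rate:.1%} of {earlier_sample} used CASE")
print(f"approximate change: {current_rate - earlier_rate:+.1%} points")
```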


Figure 20 The number of departments who indicated that they used the CASE tools offered

CASE Tool                               Number of departments
ADW                                     1
CASE Modeller                           1
EasyCase                                1
High Productivity System (Seer)         1
IE Facility                             1
Ingress (Forms)                         1
ISEE (Westmount)                        1
Jackson Speed                           1
JSP - Co                                1
Oracle *CASE                            3
PDF                                     1
Principia                               1
Rational Rose (for OOD)                 1
Select SSADM                            1
Silverrun                               1
Synon                                   1
Systems Architect                       2
Top CASE                                2
World CASE                              1

Table 2 The number of departments who offered alternative CASE tools


8.2 Method support in currently used CASE tools

With regard to which methods are supported by the tools used, Figure 21 indicates that SSADM is the most common method. This was also the case in the earlier study (with 28% of departments providing responses). In the current study, it may be that this result reflects the popularity of the method within the sample, and not necessarily the level of support for the method within CASE tools in general.

Figure 21 The number of departments which have CASE tools which support a method

In order to resolve this 'chicken and egg' question of which came first, the method or the tool which supports it, the next question aimed to isolate the level of mutual influence between the CASE tool and the method used (Figure 23). As can be seen, it is only in a very limited number of instances that an existing CASE tool had any influence on method choice. Conversely, the presence of an existing method does appear to have influenced the choice of tool. It should also be noted that a relatively high proportion of departments responded that no influence was exerted by one on the other. This may suggest either that the tool and method were bought as an integrated purchase or, alternatively, that no effort was made by the department to consider whether the tool would support the method.

Figure 23 The extent to which the choice of CASE tool influenced method choice and vice versa


8.3 Appropriateness of the tool to the needs of the department

The next question aimed to determine to what extent the CASE tool had been found to be appropriate to the department's needs (Figure 24). It is apparent from the results that departments are generally satisfied with the appropriateness of their CASE tools, with one department stating that it was completely satisfied with its tool. At the other extreme, mistakes are still being made, either in the choice of tool or in the department's ability to exploit the tool's potential.

Figure 24 The extent to which the CASE tool was appropriate to the department's needs

8.4 Expected departmental workload

The next question was concerned with the department's workload over the next two years. The intention behind this question was that the results (Figure 25) would provide an understanding of the medium-term commitments of the department (as opposed to the day-to-day services provided (Figure 4)). The results show that 47% of departments expected to be involved in the design of new systems, and a further 19% in the implementation of new systems. It is thus apparent that a large proportion of the companies contacted are considering expansion of, or replacement of, their existing systems. It is not known, however, whether this is the result of disappointment with the existing system due to its structure, or of limitations in its ability to encompass business rules.


Figure 25 The anticipated workload for departments over the next two years

8.5 The proposal of a tool to assist in method customisation

The next two questions are concerned with, firstly, whether a tool to assist in the choice of method for a particular project would be attractive to three key groups within a company (senior management, the IT department, and the system design team) (Figure 26); and secondly, whether a tool to assist in the choice of techniques within a method for a particular project would be attractive (Figure 27). It was assumed that each of these groups within a company would have a different set of requirements and constraints, and that it is not possible to consider the needs of any one group in isolation. As noted in Section 1, one of the aims of this questionnaire was to determine whether there were sufficient grounds to pursue the current research project, and also to find out what the demands of business were in terms of the tools required.

What is clear from these figures is that both the IT department and the system design team would find both types of tool attractive. However, it should be noted that a tool to support technique choice received a higher percentage of support. This finding is reinforced by the responses of non-method users to the same question (Figure 28), where over 60% of both IT departments and design teams indicated that they would be interested in such a tool. It is assumed that the relatively poor level of interest from senior management reflects considerations of additional costs over and above their current investment in systems development; a questionnaire is obviously not an appropriate forum in which to advance the proposed benefits of such a tool.


Figure 26 Responses as to whether a tool to support method choice would be popular with three key groups

Figure 27 Responses as to whether a tool to support technique choice would be popular with three key groups

Figure 28 Responses as to whether a tool to support technique choice would be popular with three key groups (non-users of methods)


9. Discussion and conclusions

The research project, of which the questionnaire described here is part, is an investigation into the customisation of structured methods to suit particular project needs, with the intention of evaluating the development of a computer-based tool to support this process. The aim of the questionnaire has been to investigate the current usage of structured methods within a wide range of UK-based companies. This has involved an examination of how these methods are used; specifically, whether the whole method is used in each project development, or whether the method is customised to suit the problem requirements.

Despite the publicised advantages of structured methods over ad-hoc development, they have not been adopted as widely as might have been hoped. Consequently, an important element of this questionnaire has been to examine perceptions of methods, in order to determine which factors have acted as a deterrent. It has been assumed that one of the possible factors underlying this situation is the limited availability of support for the customisation process.

One of the clearest findings obtained from the current questionnaire has been that most departments (88%, Figure 10) customise their method, but that their knowledge of customisation often comes from experience (Figure 11). This is a cause for some concern, since there is little indication that the customisation being performed is always successful. Structured methods are designed to be internally rigorous and consistent, and often rely on a core sequence of procedures in order to provide an accurate representation of the required system. If departments fail to meet this minimum requirement through their customisation practices, it is questionable whether the developed system will be as successful as would be anticipated from the use of such methods. Without explicit support for customisation, a department may believe that it is using a method when, in fact, due to the breakdown of the structure, the method has been reduced to a series of selected techniques. In such circumstances, the failure of a developed system to meet the specification of the users may be blamed on the method rather than on its customisation. The finding that structured method users have more negative perceptions of their current method than is the case for in-house method users (Figures 13/14 and 16/17) may therefore reflect failures of the customisation process, rather than limitations of the methods themselves.

Structured methods are only as effective as the way in which they are implemented. They are designed to be applicable to as wide a range of projects as possible, but in so doing they often include techniques or procedures which are superfluous to an individual project. Which techniques are most appropriate is constrained by the method itself, and only to a certain extent by the project type. Due to the wide range of project types, it is not feasible for method designers to provide templates to cater for all situations; consequently, systems developers are often left to their own devices. The results obtained, in addition to independent research, suggest that there is very little independent documentary guidance to assist departments in this critical task. The concern lies with those developments carried out by departments who are relatively inexperienced in the customisation process. Practitioners will learn from their mistakes and, in time, will learn how to customise a method effectively to suit their particular needs.
However, it must be asked how many mistakes will occur before they get it right. Established departments are likely to pass on their knowledge through in-house training, so that subsequent recruits to the department achieve the same level of competence. Unfortunately, situations change, whether in terms of personnel or of the nature of the systems to be developed in future; in such situations, there is no assurance that the department will not return to a trial-and-error approach to system development.


Without support for the customisation process, departments with limited experience of this activity have three choices: buy in the expertise from outside (which means that no learning takes place, with a resultant increase in cost and a reduction in understanding and control of the development); learn from their mistakes (with a possible increase in costs, both of development and of maintenance); or use the whole of the method (resulting in a longer development period, increased irrelevant documentation and a possible lack of focus to the project). In such circumstances, it is understandable that so-called in-house methods have become such a popular means of system development.

Structured method developers (such as CCTA) are concerned about this situation, but extending a method to assist in this process would create yet more documentation. There is a clear indication from this questionnaire that departments already believe that methods are too large for their needs, and the concern is that if their size were increased to cater for the customisation process, this would deter more people than it would attract. From this, it may be argued that a paper-based tool to support customisation would not be as popular as one which is computer-based. Certainly, the results obtained show a demand for such a tool, and this reinforces the need for this particular research project.

The next stage of this project aims to isolate the key concerns associated with the customisation process from individuals and departments involved in the activity. This will initially take the form of another questionnaire, but will be followed by in-depth knowledge acquisition from experts in system development and in the customisation of methods. Given that the project is in collaboration with CCTA, the focus of any development will primarily support the users of SSADM; however, it is hoped that these findings can be generalised to as wide a range of methods as possible.
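To make the proposal concrete, the sketch below illustrates one possible shape for such a computer-based aid: a small rule-based function that maps project characteristics onto candidate techniques. The attribute names, rules and suggestions are entirely hypothetical; a genuine tool would encode the expert knowledge to be gathered in the next stage of the project.

```python
# Hypothetical sketch of a rule-based technique-selection aid. The project
# attributes and the rules themselves are invented for illustration and do
# not represent SSADM guidance or any existing tool.
def suggest_techniques(project):
    """Return candidate techniques for a project described by simple flags."""
    suggestions = []
    if project.get("data_centred"):
        suggestions.append("Entity Relationship Diagrams")
        suggestions.append("Entity Life Histories")
    if project.get("process_centred"):
        suggestions.append("Data Flow Diagrams")
    if project.get("size") == "small":
        suggestions.append("consider omitting full documentation stages")
    return suggestions

# Example use with an invented project profile.
example = {"data_centred": True, "process_centred": True, "size": "small"}
for s in suggest_techniques(example):
    print("-", s)
```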


References

Davis, C.J., Thompson, J.B., Smith, P. (1990) A survey of approaches to software quality within the United Kingdom, Occasional Paper No. 92-5, School of Computing and Information Systems, University of Sunderland, Sunderland.

Edwards, H.M., Thompson, J.B., Smith, P. (1989) Results of a survey of SSADM in commercial and government sectors in United Kingdom, Information and Software Technology, Vol 31, No 1, January/February.

GET (Graduate Employment and Training) 1994 (1994), Directory Publishers.

Hardy, C.J., Thompson, J.B., Edwards, H. (1994) A Preliminary Study of Method Use in the UK, Occasional Paper No. 94-12, School of Computing and Information Systems, University of Sunderland, Sunderland.

Stobart, S.C., Thompson, J.B., Smith, P. (1991) Use, problems, benefits and the future direction of computer-aided software engineering in United Kingdom, Information and Software Technology, Vol 33, No 9, November.

