
Int. J. Project Organisation and Management, Vol. 5, No. 3, 2013

Developing systems thinking through engaging in multidisciplinary high-tech projects

Moti Frank* and Sigal Kordova

Faculty of Technology Management, HIT-Holon Institute of Technology, 52 Golomb St., Holon 58837, Israel
E-mail: [email protected]
E-mail: [email protected]
E-mail: [email protected]
*Corresponding author

Abstract: The larger, more complex, more dynamic, and more multidisciplinary technological projects get, the greater the need for project managers and engineers with a systems view of the project to ensure project success. This study examined the potential contribution of engagement in a multidisciplinary project to developing capacity for engineering systems thinking (CEST) in project team members. The subjects were senior students conducting capstone projects in their 4th year of studies. Their CEST was assessed twice, at the beginning and end of the academic year. A significant difference was found between the mean CEST score at the end of the academic year, after completing the project, and the mean CEST score at the beginning of the academic year (p < 0.01). These results imply that CEST may be developed and/or improved while working as a member of a multidisciplinary project team.

Keywords: systems thinking; engineering systems thinking; capacity for engineering systems thinking; CEST; project-based learning; PBL.

Reference to this paper should be made as follows: Frank, M. and Kordova, S. (2013) ‘Developing systems thinking through engaging in multidisciplinary high-tech projects’, Int. J. Project Organisation and Management, Vol. 5, No. 3, pp.222–238.

Biographical notes: Moti Frank obtained his BSc in Electrical Engineering in 1981 from the Technion – Israel Institute of Technology and worked for more than 20 years as an Electronics and Systems Engineer. He obtained his MSc in 1996, and his PhD in Industrial Engineering and Management and Education in Technology and Science in 1999, both from the Technion. He is a Professor of Systems Engineering and Project Management at HIT-Holon Institute of Technology. His research interests are systems engineering, systems thinking and project management. Currently, he is a Visiting Faculty member at the Department of Systems Engineering and Operations Research, George Mason University, Fairfax, Virginia.

Sigal Kordova obtained her BSc in Industrial Engineering and Management in 1989, MSc in 1997 and PhD in 2010, all from the Technion – Israel Institute of Technology. Currently, she is an Adjunct Lecturer at the Faculty of Technology Management in HIT-Holon Institute of Technology and the Director of the Industrial Engineering and Management Department at the Amal College.

Copyright © 2013 Inderscience Enterprises Ltd.

1 Introduction

The larger, more complex, more dynamic, and more multidisciplinary technological projects get, the greater the need for project managers and engineers with a systems view of the project to ensure project success (Hitchins, 2003). Capacity for engineering systems thinking (CEST) is a major high-order thinking skill that enables engineers to successfully perform project management and systems engineering tasks. Much has been written on systems thinking, yet a core question remains unresolved. This question expresses the following research gap: Is this ability learned or innate? Can it be developed through experience, job rotation, education and training? Can it be developed through engaging in projects? The research objective of the study presented in this paper was to examine whether CEST can be learned and developed through active learning in a project-based learning (PBL) environment. Since we deal here with two components, (1) CEST and (2) PBL, let us begin by explaining these terms.

2 Literature review

2.1 CEST

Systems thinking, according to Senge (1994), is a discipline for seeing wholes. Engineering systems thinking is a major high-order thinking skill that enables engineers to successfully perform systems engineering tasks, and helps project managers to successfully manage complex technological projects. To successfully perform project management and systems engineering roles, project managers and systems engineers need a systems view or a high CEST. There is an ongoing argument in the literature about whether systems thinking ability is inherited (innate) or learned (acquired). It was found that this ability is a consistent personality trait, and that it can be used to distinguish between individual engineers (Frank, 2006). Individuals with high CEST are more capable of dealing with the conceptual and functional aspects of a given system without first understanding all the details. The systems approach is crucial for successfully managing complex projects (Kerzner, 2006). According to Frank and Waks (2001), engineering systems thinking is the ability to:

1 See the big picture – the ability to: grasp and understand the whole system and the big picture, conceptually and functionally, without understanding all its minutiae and all of the system’s details; understand the interconnections and the mutual influences and interrelations among system elements; describe a system from all relevant perspectives; derive the synergy of a system from the integration of the subsystems; identify the synergy and emergent properties of combined systems; understand the system as a whole and anticipate all the implications (including side effects) of changes in the system, engineering and non-engineering alike; understand and describe the operations, purposes, applications, advantages, and limitations of a new system/product or idea/concept immediately after receiving an initial explanation; and remedy system failures and problems.

2 Implement managerial considerations – the ability to grasp and implement managerial, organisational, political and broad-perspective considerations.

3 Acquire and use interdisciplinary knowledge – the ability to: deal with interdisciplinary knowledge; use this knowledge for various engineering and managerial tasks; and make analogies and parallelisms between disciplines.

4 Analyse the needs/requirements – the ability to capture, understand and analyse the customer/market requirements/needs and future technological and business developments.

5 Be a systems thinker – the ability to be curious and innovative, to be an initiator and independent learner, and to possess the ability to develop and ask good questions.

Systems thinking is a method for describing, analysing, and designing complex systems of diverse types (Holmberg, 2000). Many researchers relate to the need to see the big picture in the problem-solving process. For example, breaking down a problem into components and finding separate solutions for each component only rarely results in an effective solution (Senge, 1994). In fact, the exact opposite is true: dealing with the problem as a whole, without breaking it down into parts, results in a more effective solution in most cases. A general consensus exists among researchers regarding the importance of systems thinking as a tool for improving organisational performance. Despite this, in most organisations its use is often not sufficiently developed (Holmberg, 2000) to be of real benefit. The main reason for this is the limited number of tools existing in an organisation that could increase the practical value of systems thinking. Scott (2005) found that the ISO 9001 standard is an effective tool for teaching and enriching management groups regarding systems thinking. Not enough has been done by the education system to accurately examine the process by which this ability is acquired, and to incorporate tools that could encourage the development and assessment of students’ systems thinking ability.

2.2 Project-based learning

In PBL, learning is achieved through a process in which students, working in teams, build a product. The product may be something tangible, such as a computerised product or a written product. The product must answer a question, solve a problem or meet certain requirements or needs. This is an integrative learning environment, requiring the learner to solve problems using high-level thinking. The benefits of the PBL approach are numerous: the possibility of gaining meaningful multidisciplinary knowledge while working in a real-world context; working in an authentic and active learning environment; developing deep, integrated understanding of content and process; promoting responsibility and independent learning; engaging students in various types of tasks, thereby meeting the learning needs of many different students; developing thinking skills; synthesising (and not just analysing); developing teamwork skills; gaining experience in the design process; gaining experience in the ‘top down’ approach; becoming familiar with the importance of optimal design; becoming familiar with the principles of project management; developing long-term learning skills; increasing students’ self-confidence, motivation to learn, creative abilities, and self-esteem; improving academic achievements; and developing systems thinking (Krajcik et al., 1999; Frank et al., 2003).


2.3 The relationship between CEST and PBL

As mentioned above, the objective of the study presented in this paper was to examine whether CEST can be developed through active learning in a PBL environment. The relationship between CEST and PBL was investigated in a prior study (Frank and Elata, 2005). As the students in this prior study were executing their project in a PBL environment, they had to ‘see’ the whole (the final product) and understand the interrelationships and interdependencies among the components of the product that they were attempting to design and build. This galvanised students into improving their systems thinking abilities. Observing the students’ activities while working on their projects made it clear that most of the students tried to begin by clarifying the ‘big picture’ and considering the widest aspects of the system and the environment in which it should perform. One conclusion of this prior study, drawn from the students’ answers, was that awareness of the notion of a ‘big picture’, even though this big picture may not be clearly seen, is of great importance in itself. For an experienced engineer, seeing the big picture means a concrete vision of the system from a broad perspective. However, for inexperienced students, the realisation that there is a big picture is an important first step.

3 Method

3.1 The tool

In organisations and projects, there are many different kinds of job positions that may be included in the project management and systems engineering category. Different positions require different competencies, characteristics, abilities, traits and attributes. Despite this, it was found that a set of core characteristics, abilities, traits and attributes, necessary for all systems engineers and project managers of complex technological projects independent of their specific position, does in fact exist: 14 cognitive characteristics, 12 capabilities, nine behavioural competencies, and three items related to education, background and knowledge (Frank, 2006). Based on these findings, Frank (2010) introduced a tool for assessing CEST and provided results from three studies aimed at examining its reliability and validity. In its original form, the tool was designed to be used by engineers and project managers. In the current study, the tool was modified to better suit the needs of students. The modified tool comprised 31 pairs of statements. For each pair, the examinee had to choose between the two statements according to his/her preference. Each ‘systems-thinking-related’ answer received three points. Thus, the range of the score for each participant was 0–93. The set of core characteristics, abilities, traits and attributes found in the prior study can be divided into four subsets: cognitive characteristics, capabilities, behavioural competencies and background. Consequently, the 31 items of the modified tool (the questionnaire used in the current study) were divided into four categories as follows: 11 items related to cognitive characteristics (maximum score is 33 points, three points per item); six items related to capabilities (maximum score is 18 points); nine items related to behavioural competencies (maximum score is 27 points); and five items related to knowledge (maximum score is 15 points).
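To make the scoring scheme concrete, the following Python sketch computes a total CEST score and the four category subscores from a completed questionnaire. The item-to-category ordering is an assumption made for illustration; the actual item assignment of the instrument is not reproduced here.

```python
# Illustrative scoring of the 31-item forced-choice CEST questionnaire.
# Category sizes follow the paper (11 cognitive, 6 capability, 9 behavioural,
# 5 knowledge items); the item ordering below is an assumption.

CATEGORIES = {
    "cognitive": range(0, 11),      # max 33 points
    "capabilities": range(11, 17),  # max 18 points
    "behavioural": range(17, 26),   # max 27 points
    "knowledge": range(26, 31),     # max 15 points
}

def score_cest(answers):
    """answers: list of 31 booleans, True = systems-thinking-related choice.
    Returns (total score in 0-93, per-category subscores)."""
    if len(answers) != 31:
        raise ValueError("expected 31 items")
    points = [3 if a else 0 for a in answers]  # three points per ST answer
    subscores = {cat: sum(points[i] for i in idx)
                 for cat, idx in CATEGORIES.items()}
    return sum(points), subscores

# a respondent choosing the systems-thinking statement on every item
total, subs = score_cest([True] * 31)
# -> total is the maximum, 93
```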

3.2 Participants

The study population included all senior technology management students who were registered for the ‘capstone project’ course in the 2009 academic year at Holon Institute of Technology. We used the simple random sampling method; the sample size was 42 senior students who were randomly selected from the population (sampling error 9.17%, p ≤ 0.05). For the purpose of this study, our control group was a group of one hundred and eleven (111) 12th grade high school students. These students were required to prepare a ‘final project’ as part of their ‘matriculation exam’ in the subject of ‘industrial engineering’. The structure, objective, duration, required outcomes and assessment method of the high school ‘project exam’ were very similar to those of the ‘capstone project’ academic course. The 111 high school students were from two subgroups: 73 were from the ‘basic’ track and 38 were from the ‘advanced’ track. The four main differences between the basic and advanced tracks are the number of topics the students have to cover, the level of in-depth research involved, the tools, and the type of statistical methods students have to use for analysing the findings.

3.3 Procedure

The students, working in small teams (two to three students in each team), had to examine several alternatives for resolving a problem, issue, question or dilemma set by the mentors: a faculty member and a representative of the high-tech company in which the project was conducted. The required outcomes were to submit a final written report and to present the project and its results in front of faculty members, project stakeholders, and colleagues. Several teams were also required to write a software program or to build an artefact. All students were required to apply theories they had learned in various courses. The projects’ outcomes were presented to the management of the high-tech organisations that hosted the student teams. During the fourth year of studies towards a BSc in technology management, the students had to find a high-tech organisation that would enable them to study the organisation and one of its projects; to identify a problem, issue, question, problematic process or need, in the current situation, which the organisation had undertaken to resolve; and to quantitatively and qualitatively analyse the organisation and the project they were investigating. The students first defined the needs and the gap between the existing and desired status, and then looked for relevant information by reviewing textbooks, papers, e-journals and databases, websites, and other sources. They investigated alternatives for resolving the problem, collected and analysed data through a process of investigation and collaboration, and conducted a feasibility study, after which they chose the optimal alternative. The optimisation criterion was cost-effectiveness, and the analytic hierarchy process (AHP) was the technique used by most students for scoring and rating the solution alternatives.
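As an illustration of the AHP step, the sketch below derives priority weights for three alternatives from a pairwise comparison matrix, using the common row geometric-mean approximation of the principal eigenvector. The matrix values are invented for illustration and do not come from any student project.

```python
import math

# Minimal AHP priority computation (row geometric-mean approximation).
# The pairwise comparison matrix is illustrative: entry [i][j] says how much
# alternative i is preferred over alternative j on Saaty's 1-9 scale.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_weights(matrix):
    """Approximate the principal eigenvector by normalised row geometric means."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

w = ahp_weights(A)
# the weights sum to 1 and rank the alternatives; here the first dominates
```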


Working in small teams, the students were given two semesters to complete their project. They were encouraged to consult with graduate students, teaching assistants, industry experts, and faculty staff whose expertise was relevant to the project. The quality of the work was assessed, by peers, the mentor from the organisation in which the project was being conducted, and faculty members, in two ways: written reports and presentations in the classroom in front of faculty members, colleagues, industry mentors, and invited experts. The final report included details of the work process; a discussion of the management and engineering considerations that led to the students’ conclusions and the steps which followed; a review of the relevant engineering and management literature; and all relevant findings, conclusions and recommendations. All 42 senior BSc students and 111 high school students were required to complete the questionnaire (presented in Section 3.1) twice: before beginning their projects (pre-test) and at the end of the academic year (post-test). The objective was to test whether there was a significant difference in the mean CEST score of each group before and after conducting the project. The main idea behind this test was to examine whether being a member of an interdisciplinary complex project might develop and/or improve systems thinking ability. The study plan was designed according to the principles of a controlled experiment (e.g., Brown and Melamed, 1990; Kirk, 1995). A controlled experiment generally compares the results obtained from an experimental sample against a control sample, which is practically identical to the experimental sample except for the one single variable whose effect is being tested. In the study presented in this paper, the single independent variable was the ‘project type’ (seniors, high-school basic, high-school advanced). The dependent variable was ‘CEST’. In this type of study, all variables affecting CEST should be practically identical, except for the tested variable, which in this case was the ‘project type’. An analysis of the other courses taken during the year by both the senior students and the high school students revealed that all other courses are traditional courses characterised by a focus on details. These findings enable us to assume that the CEST improvement is related to involvement in the ‘capstone projects’. The following are examples of capstone projects performed by the senior students: a driver fatigue awareness system; developing an ERP system for small enterprises; and a trade study, business plan and risk management for a homeland security project.

4 Results

4.1 The tool’s reliability

As mentioned above, the original tool for assessing CEST was designed for project managers and systems engineers. For the purpose of the current study, the tool was modified in order to adapt it to students. Due to these modifications, it was necessary to recheck the tool’s reliability and validity. A measurement’s reliability is represented by the extent to which it is accurate (Anastasi, 1988). The obtained Cronbach’s alpha of the original tool was 0.836 in one study, and 0.855 in another study (Frank, 2010). In the current study, the alpha coefficient of the modified tool was calculated using the SPSS program and the obtained result was 0.765. The difference between the alpha of the original and modified tool makes sense, because engineers and managers are much more mature than students, which probably leads to more consistent answers. However, all these results are higher than the commonly accepted minimum value (0.70) cited in the statistical literature. Another measurement of reliability examined within the framework of the current study was inter-judge reliability. The modified questionnaire was sent to three senior experts in the field of project management. These experts were asked to complete the questionnaire and evaluate each item’s suitability to the tested subject. After analysing their answers, several items in the questionnaire were revised. Once the revisions had been completed, wide agreement among all referees was demonstrated.
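Cronbach’s alpha can be computed directly from a respondents-by-items score matrix, as the following sketch shows. The response data are invented for illustration; the study itself used SPSS.

```python
# Cronbach's alpha for a k-item scale:
#   alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
# The response matrix below (5 respondents x 4 items, 0/3 scoring) is
# illustrative, not the study data.

def cronbach_alpha(rows):
    """rows: list of respondents, each a list of item scores."""
    k = len(rows[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([r[i] for r in rows]) for i in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

data = [
    [3, 3, 3, 0],
    [3, 0, 3, 3],
    [0, 0, 0, 3],
    [3, 3, 0, 0],
    [3, 3, 3, 3],
]
alpha = cronbach_alpha(data)
```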

4.2 The tool’s validity

The validity of a measurement is the extent to which it represents the measured quantity (Anastasi, 1988). Content validity was achieved by basing the tool’s items on findings from a previous study (Frank, 2006). Contrasted groups validity is determined by comparing the grades of two contrasted groups; in the current study, the two groups were senior students and high school students. Concurrent validity is the correlation between the scores obtained by two assessment tools; in the current study, it was checked by comparing the subjects’ CEST scores with their teachers’ assessments. Construct validity indicates the extent to which the tool measures a theoretical construct or characteristic; in the current study, it was checked by two factor analyses: an exploratory factor analysis carried out using the SPSS program, and a confirmatory factor analysis carried out using the AMOS program (see Appendix).

4.2.1 Contrasted groups validity

In the current study, two CEST measurements were taken for each subject. At the end of the academic year, a repeated measures ANOVA with between-subjects factors was performed. This sort of analysis is performed when there are both within-subjects factors (time, in the current study) and between-subjects factors (project type: senior capstone, high-school basic, high-school advanced, in the current study). The between-subjects factor analysis was conducted in order to check whether a significant difference exists between the mean CEST scores of the senior students and the high school students. We expected that the mean score of the senior university students would be significantly higher than the mean score of the high school students, because in high schools the focus is usually on understanding details, while the focus in the 3rd and 4th year technology management curriculum is on systems aspects. Table 1 presents the result of the between-subjects ANOVA:

Table 1  Between-subjects ANOVA (measure: MEASURE_1; transformed variable: mean CEST score)

Source      Type III sum of squares   df    Mean square      F           Sig.
Intercept   1,177,227.307             1     1,177,227.307    7,153.201   0.000
Project     10,047.504                2     5,023.752        30.526      0.000
Error       24,686.026                150   164.574


From Table 1 we can see that a significant difference was found between the mean CEST scores of the three groups, the senior students and the two groups of high school students (F(2, 150) = 30.526, p ≤ 0.05). The next analysis refers to the root of this difference. We defined a new variable as follows: DIFFER = the difference between the CEST score of each subject at the end of the academic year (after submitting the final report) and the CEST score of each subject at the beginning of the academic year. A one-way ANOVA was performed with DIFFER as the dependent variable; the results are presented in Table 2:

Table 2  One-way ANOVA with DIFFER as dependent variable

Source           Sum of squares   df    Mean square   F       Sig.
Between groups   1,732.645        2     866.323       4.801   0.010
Within groups    27,069.473       150   180.463
Total            28,802.118       152

From Table 2, we can see that a significant difference between the DIFFER of the senior students and the DIFFER of the two groups of high school students was found (F = 4.801, p ≤ 0.05). A Post-Hoc Tukey test revealed a significant difference between the senior students and the advanced track high school students (7.50 points, p ≤ 0.05), and a significant difference between the senior students and the basic track high school students (7.56 points, p ≤ 0.05). These results also confirm the contrasted groups validity of the tool used in the current study.
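The one-way ANOVA underlying Table 2 can be reproduced from first principles, as sketched below with invented DIFFER-style improvement scores (the actual study data are not reproduced here).

```python
# One-way ANOVA F-statistic computed from first principles.
# The three groups stand in for DIFFER scores of the senior, basic-track and
# advanced-track groups (illustrative data only).

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of sample groups."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # between-groups sum of squares: group sizes times squared mean offsets
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-groups sum of squares: squared deviations from each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = len(groups) - 1, n_total - len(groups)
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

seniors = [9, 12, 7, 10, 8]
basic = [1, 3, 0, 2, 1]
advanced = [2, 0, 1, 3, 2]
F, df_b, df_w = one_way_anova([seniors, basic, advanced])
# a large F here reflects the seniors' much larger improvement
```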

4.2.2 Concurrent validity

Concurrent validity is the correlation between the scores obtained by two assessment tools. In the current study, the concurrent validity was checked by calculating the correlation between the subjects’ CEST scores (as measured by the tool presented in Section 3.1) and their teachers’ assessments. The Pearson correlation coefficient was found to be 0.74 (p ≤ 0.05).
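The Pearson coefficient reported above can be computed as follows; the paired scores are invented stand-ins for CEST scores and teacher assessments.

```python
import math

# Pearson correlation between two paired score lists (illustrative data).

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

cest = [60, 72, 81, 55, 90]      # hypothetical CEST scores
teacher = [65, 70, 85, 60, 88]   # hypothetical teacher assessments
r = pearson_r(cest, teacher)
# r close to 1 would indicate strong agreement between the two tools
```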

4.2.3 Construct validity

Construct validity indicates the extent to which the tool measures a theoretical construct or characteristic. The construct validity of the tool used in the current study was examined by performing two factor analyses: an exploratory factor analysis performed using the SPSS program, and a confirmatory factor analysis (CFA) performed using the AMOS program. The results of the CFA are presented in the Appendix. Figure A1 in the Appendix is a graphical presentation of the model using the AMOS software; the ellipse in Figure A1 represents the latent (unobserved) variable, engineering systems thinking, and the observed variables are marked with a rectangular shape. Figure A2 in the Appendix presents the 23 observed variables that were used as indicators of four latent variables. The exploratory factor analysis conducted in the current study revealed four factors: cognitive characteristics, capabilities, behavioural competencies and knowledge. These results are compatible with the four subsets found in a prior study (see the method section) and they confirm the construct validity of the tool used in the current study.
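A rough, illustrative counterpart to the exploratory step is the Kaiser criterion: inspect the eigenvalues of the item correlation matrix and retain as many factors as there are eigenvalues above 1. The sketch below simulates data with a known two-factor structure; the real analysis used SPSS on the 31-item questionnaire, and the simulated loadings are assumptions.

```python
import numpy as np

# Kaiser criterion on simulated data with a known two-factor structure.
rng = np.random.default_rng(0)

# 200 respondents, 8 items: the first 4 items load on factor 1, the last 4
# on factor 2 (loadings 0.8), plus independent noise.
latent = rng.normal(size=(200, 2))
loadings = np.array([[0.8, 0.0]] * 4 + [[0.0, 0.8]] * 4)
items = latent @ loadings.T + 0.5 * rng.normal(size=(200, 8))

corr = np.corrcoef(items, rowvar=False)            # 8x8 item correlations
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending eigenvalues
n_factors = int(np.sum(eigvals > 1.0))
# with this structure, two eigenvalues stand clearly above 1
```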


4.3 Developing CEST through being engaged in multidisciplinary projects

In order to examine whether CEST might be improved through being a member of a high-tech project team, a repeated measures ANOVA test was performed. A repeated measures design refers to studies in which the same measures are collected multiple times for each subject but under different conditions, or in which change over time is assessed. A repeated measures ANOVA test is done when the same subjects are used for each treatment. In the current study, the CEST scores of the same subjects were collected twice: at the beginning and the end of the academic year. The within-subject factor reflecting the repeated measures was called time; the CEST score at the beginning of the year was called PRE and the CEST score at the end of the year was called POST (see Table 3).

Table 3  Within-subject factors

Time   Dependent variable
1      PRE
2      POST

The variable reflecting the three study groups is the project type (see Table 4).

Table 4  Between-subject factors

Project   Value label                           N
1         Senior students                       42
3         Basic track high school students      73
5         Advanced track high school students   38

As mentioned in Section 3.2 (entitled ‘Participants’), there were three groups of subjects in the current study: senior students and two subgroups of high school students. There were 42 senior students and 111 high school students; of the high school students, 73 were from the ‘basic’ track and 38 were from the ‘advanced’ track. Further statistical analysis found that the distribution function was quite close to the normal distribution, with a slight positive skew (skewness = 0.321, kurtosis = 0.422). We may therefore assume that the distribution of CEST scores is bell-shaped. Table 5 presents the descriptive statistics of the repeated measures analysis of variance according to the general linear model.

Table 5  Descriptive statistics in the repeated measures ANOVA

Time            Project                           Mean CEST score   Standard deviation   N
PRE (time 1)    Senior students                   68.79             13.730               42
                ‘Basic’ high school students      60.41             12.128               73
                ‘Advanced’ high school students   58.66             11.518               38
                Total                             62.27             13.018               153
POST (time 2)   Senior students                   77.79             8.418                42
                ‘Basic’ high school students      61.85             10.218               73
                ‘Advanced’ high school students   60.16             11.115               38
                Total                             65.80             12.401               153


Table 5 shows that the mean CEST score of the senior students improved from 68.79 to 77.79. As for the control group, the high school students improved their scores only very slightly: the mean CEST score of the ‘basic’ students improved from 60.41 to 61.85, while the mean CEST score of the ‘advanced’ students improved from 58.66 to 60.16. An interesting, though non-significant, finding is that the basic high school students achieved higher scores than the advanced high school students in both the pre and post measurements. A possible explanation for this finding might be that the basic track students are educated to use interdisciplinary knowledge, while the advanced track students are required to be experts in details and to use analytical thinking. As mentioned above, from the repeated measures analysis (Table 1) we can see that a significant difference was found between the mean CEST scores of the three groups, the senior students and the two subgroups of high school students (p ≤ 0.05). To validate the repeated measures factor analysis (Table 1), Mauchly’s sphericity test was performed (Table 6). Sphericity relates to the equality of the variances of the differences between levels of the repeated measures factor: it requires that the variances of each set of difference scores are equal. Sphericity is an assumption of an ANOVA with a repeated measures factor. When the significance level of Mauchly’s test is < 0.05, sphericity cannot be assumed.

Table 6  Results of the sphericity test in the repeated measures analysis of variance (measure: MEASURE_1)

Within-subject effect   Mauchly’s W   Approx. chi-square   df   Sig.
Time                    1.000         0.000                0    .

From Table 6 we can see that Mauchly’s sphericity test produced no significance value (sig. = .): with only two time points there is a single set of difference scores (df = 0), so sphericity holds trivially and the regular ANOVA test can be used. Table 7 presents the results of the ANOVA test for comparing the CEST scores at both points in time.

Table 7  Results of the repeated measures ANOVA (measure: MEASURE_1)

Source                              Type III sum of squares   df    Mean square   F        Sig.
Time (sphericity assumed)           1,116.543                 1     1,116.543     12.374   0.001
Error (time) (sphericity assumed)   13,534.736                150   90.232

The result presented in the sphericity-assumed line shows that a significant difference was found for the ‘time’ source at the two time points (F(1, 150) = 12.374, sig. = 0.001). This means that the effect of time on the CEST score within groups is significant. Only the sphericity-assumed lines are shown, because the sphericity assumption was validated as explained above and presented in Table 6. As mentioned above, CEST is comprised of four components (Frank, 2006): cognitive characteristics, abilities, behavioural competencies and knowledge. A paired-sample t-test was used to compare these components before and after project completion. As shown in Table 8, there was a significant difference between the pre scores (T1) and the post scores (T2) in the following components: cognitive characteristics, behavioural competencies and knowledge.


Table 8  Results of the paired t-test

Component                   Pre mean CEST score (T1)   Post mean CEST score (T2)   Sig. (two-tailed)
Cognitive characteristics   23.4762                    27.3571                     0.004
Abilities                   11.7857                    11.9286                     0.743
Behavioural competencies    21.9286                    24.8571                     0.005
Knowledge                   11.5952                    13.6429                     0.012

5 Discussion

By comparing the subjects’ CEST scores (as measured by the tool presented in Section 3.1) before and after executing the project, it was found that all of the senior students achieved higher scores in the post test, which implies that they improved their CEST. This might mean that systems thinking ability may be developed or improved by being a member of a high-tech multidisciplinary project team. Specifically, the senior students’ CEST improvement was found to be significant regarding three dimensions – cognitive aspects, behavioural competencies and knowledge. The starting point of the seniors was higher than the control group students (two subgroups of high school students). The mean CEST score of the senior students was higher than the mean CEST score of the control group students, both at the beginning and end of the year. The differences at the two time points were found to be significant. This result confirms the contrasted groups validity of the tool used in the current study. In addition, the CEST improvement of the senior students was significantly higher than that of the high school students. This makes sense because in high schools the focus is usually on understanding details, while the focus in the 3rd and 4th year technology management academic curriculum is on systems aspects. Senior university students are much more mature than high school students. Furthermore, most of the seniors who participated in the current study were already employed in high-tech organisations in project-management-related job positions. The finding that being engaged in a capstone project in the 4th year of study toward a BSc in technology management might improve the capacity for systems thinking may be explained by analysing the students’ tasks during the year. They are required to analyse the organisation and its environment from all aspects, including management, business, marketing, financing, resource management, quality management, research and development, etc. 
They deal with a wide range of considerations: technological, business, organisational, ‘political’, human resource, configuration management, project management and systems engineering, all of which relate to the project they are investigating. Each student has to consider his/her project as a temporary sub-organisation within the organisation or, in other words, as a sub-system within a mega-system.

From the findings of the current study, we may conclude that senior students executing 4th year projects, project managers and systems engineers all need a high CEST in order to bring their projects to a successful conclusion. This means that they must all understand the whole system and see the big picture, understand interconnections and systems without getting stuck on details, think creatively, have a high tolerance for ambiguity and uncertainty, understand all implications of a proposed change, understand a new concept immediately upon presentation, understand analogies between disciplines, understand the system from multiple perspectives, and take all relevant factors – technology, business, financing, marketing, management, the organisation, politics, etc. – into consideration. Moreover, project managers, systems engineers and 4th year students must be able to ask good questions, lead teams, communicate effectively and control work plans. In addition, they are expected to be innovators, initiators, promoters and originators, and they must have interdisciplinary and multidisciplinary knowledge relevant to the projects in which they are involved (Frank, 2006).

The findings clearly show that CEST may be improved and acquired through learning, experience and membership in a project team. This result is compatible with the findings of prior studies. For example, Davidz and Nightingale (2008) include a ‘wide and varied background’ among the individual characteristics that enable systems thinking development. Frampton et al. (2006) found that successful IT architects have broad experience in all facets of the software development life cycle. Frank (2006) found that successful systems engineers usually accumulate experience by performing varied engineering and systems engineering jobs. Frank and Elata (2005) found that freshman engineering students studying in a PBL environment may develop approaches and strategies related to systems thinking.
All these results imply that systems thinking may be developed through learning and experience, and these results are compatible with the findings of the current study.

6 Summary

The main finding of this study is that all of the senior students achieved significantly higher scores in the post-test than in the pre-test. This implies that the senior students improved their CEST while being engaged in a multidisciplinary project with a high-tech organisation, and it might mean that systems thinking ability may be developed or improved by being a member of a high-tech multidisciplinary project team. Perhaps this is evidence that supports the notion that CEST may be improved and acquired through learning and experience. It is therefore recommended that organisations create a supportive environment that enables and encourages systems thinking development in project managers and systems engineers. This conclusion should be further verified and validated by future studies: a series of additional and more extensive tests should be conducted in other cultures, sectors and organisations, with larger numbers of participants. It is also important to understand the mechanisms behind effective systems thinking development; a better understanding of the processes by which systems thinking develops can provide a better foundation for project management educational programmes.


References

Anastasi, A. (1988) Psychological Testing, 6th ed., Macmillan Publishing, New York.

Brown, S.R. and Melamed, L.E. (1990) Experimental Design and Analysis, Sage Publications, Newbury Park, CA.

Davidz, H.L. and Nightingale, D.J. (2008) ‘Enabling systems thinking to accelerate the development of senior systems engineers’, Systems Engineering, Vol. 11, No. 1, pp.1–14.

Frampton, K., Thom, J.A. and Carroll, J. (2006) ‘Enhancing IT architect capabilities: experiences within a university subject’, Proc. Australasian Conference on Information Systems (ACIS 2006), Adelaide, Australia, paper 50, available at http://aisel.aisnet.org/acis2006/50.

Frank, M. (2006) ‘Knowledge, abilities, cognitive characteristics and behavioral competences of engineers with high capacity for engineering systems thinking (CEST)’, Journal of Systems Engineering, Vol. 9, No. 2, pp.91–103.

Frank, M. (2010) ‘Assessing the interest for systems engineering positions and other engineering positions’ required capacity for engineering systems thinking (CEST)’, Journal of Systems Engineering, Vol. 13, No. 2, pp.161–174.

Frank, M. and Elata, D. (2005) ‘Developing the capacity for engineering systems thinking (CEST) of freshman engineering students’, Journal of Systems Engineering, Vol. 8, No. 2, pp.187–195.

Frank, M. and Waks, S. (2001) ‘Engineering systems thinking: a multifunctional definition’, Systemic Practice and Action Research, Vol. 14, No. 3, pp.361–379.

Frank, M., Lavy, I. and Elata, D. (2003) ‘Implementing the project-based learning approach in an academic engineering course’, Int’l J. of Technology and Design Education, Vol. 13, No. 3, pp.273–288.

Hitchins, D.K. (2003) Advanced Systems Thinking, Engineering and Management, Artech House, Boston, MA.

Holmberg, S. (2000) ‘A systems perspective on supply chain measurements’, International Journal of Physical Distribution & Logistics Management, Vol. 30, No. 10, pp.847–852.

Kerzner, H. (2006) Project Management: A Systems Approach to Planning, Scheduling and Controlling, 9th ed., John Wiley & Sons, Hoboken, NJ.

Kirk, R.E. (1995) Experimental Design: Procedures for Behavioral Sciences, Brooks/Cole Publishing Co., St. Paul, MN.

Krajcik, J., Czerniak, C. and Berger, C. (1999) Teaching Science: A Project-Based Approach, McGraw-Hill College, New York.

Scott, J. (2005) ‘ISO 9000 in service: the good, the bad and the ugly’, Quality Progress, Vol. 38, No. 9, pp.42–48.

Senge, P.M. (1994) The Fifth Discipline: The Art and Practice of the Learning Organization, Doubleday, New York.


Appendix

Analysis of the findings by structural equation modelling

The analysis tool: analysis of moment structures (AMOS, SPSS add-on module)

In order to gain additional insight beyond regression, we used the AMOS program, which tests relationships among observed and latent variables. The program enables confirming the theoretical basis of the research study by means of confirmatory factor analysis, which complements the exploratory factor analysis performed with the SPSS program. The great advantage of AMOS is that it draws a graphic model, reflecting the saying ‘a picture is worth a thousand words’. In this type of analysis, we measured the abstract concept of engineering systems thinking via a set of indicators. Below, we present a model in which the factors are related to each other, and which combines factor analysis with regression.

Structural equation modelling (SEM) includes both observed and unobserved variables. Here, 23 observed variables were used as indicators of four latent variables, with single-headed arrows going, for example, from ‘traits’ to the other three latent variables (see Figure A2). The relationship between the four latent variables and the observed variables (the 23 items of the questionnaire) is called the measurement model, in which the observed variables serve as indicators of unobserved factors.

The ellipse in Figure A1 represents the latent (unobserved) variable, engineering systems thinking; the observed variables are marked with rectangles, and the latent variable and the observed variables are connected by single-headed arrows. Engineering systems thinking is a theoretical latent (hidden) variable that cannot be observed directly (this is the construct in Figure A1), and the items in the questionnaire are its predictive indicators (boxes X1, X2 and X3, numbered according to the items in the questionnaire that match the relevant latent variable). The values e1, e2 and e3 represent the residual variables, i.e., the statistical errors.

Figure A1 Graphical presentation of the model using AMOS software

[Path diagram: the construct engineering systems thinking points to the observed indicators X1, X2 and X3, each of which also receives its own residual term e1, e2 or e3.]

Notes: X1, X2, X3 – the predictive indicators, i.e., the items in the questionnaire that match the relevant latent variable; e1, e2, e3 – the residual variables (statistical errors).


Based on theoretical-philosophical considerations, we can assume that the latent engineering systems thinking variable exists prior to its predictive indicators; hence, the relationship between the latent variable and the indicators can be regarded as causal. Using the analysis made with the AMOS software, we can estimate the extent to which the latent variable explains each indicator. We aspire to find items/indicators that are saturated in the latent variable, so that a large part of the variance in these items can be explained by the hidden variable.

Relying on the division proposed by Frank (2006), according to which engineering systems thinking includes four different aspects – knowledge, individual traits, cognitive characteristics and capabilities – we developed a suitable structural model using the program. According to this division, there are four latent variables: knowledge, individual traits, cognitive characteristics and capabilities. We matched the relevant items in the questionnaire to each of these latent variables. According to Frank (2010), each aspect/latent variable of systems thinking includes some of the following components:

Knowledge

• interdisciplinary and multidisciplinary knowledge

• extensive experience in dealing with systems tasks, technical experience

• education and knowledge in systems thinking.

Behavioural competences

• managerial skills

• group leadership

• good interpersonal skills, building relationships of trust with interested parties, good communication skills, the ability to ‘read’ people

• self-study skills, personal reflection

• desire to deal with systems, strong desire to succeed

• perceiving failures and mistakes as challenges, decisiveness, tolerance to difficulties

• self-confidence and personal motivation.

Cognitive characteristics

• understanding the whole system, seeing the big picture

• creative thinking

• understanding the system without being familiar with all of its details, tolerance for situations of uncertainty

• understanding the synergy between different systems

• curiosity, innovation, originality, invention, promotion

• asking good questions

• setting limits

• considering non-engineering factors, such as economic, commercial and political factors.

Capabilities

• ability to carry out requirements analysis

• abstract thinking and the ability to develop the solution

• functional analysis

• ‘seeing the future’, future vision

• use of simulation and engineering tools

• optimisation

• resolving system failures and problems

• ability to offer several solutions to a problem.

It is worth mentioning that these components were originally developed for analysing the CEST of systems engineers, as opposed to other types of engineers. In this study, the CEST was examined with respect to high school and university students; therefore, some of these components are not relevant for the research groups under discussion. As previously mentioned, the questionnaire assessing the CEST was adapted to students and was comprised of items affecting engineering systems thinking that pertain to the students’ work on their final-year projects.

Similar to the factor analysis, each item (indicator) loads on a latent theoretical variable, and we would like to verify how well each theoretical variable explains the item. In the current research study, we verify the actual expressions of engineering systems thinking in the questionnaire’s items or, in other words, how ‘loaded’ each item in the questionnaire is with respect to the different aspects of engineering systems thinking. In order to build the structural model, we first built a separate model for each latent variable that includes the relevant items. Next, we built the structural model, which presents the relationships between the latent variables – knowledge, individual traits, cognitive characteristics and capabilities.

Figure A2 shows the structural model, built using the AMOS software. The exogenous variable in the model is ‘capabilities’ (which appears in the figure as ‘ability’); arrows come out of this variable and go towards the endogenous variables. The model also shows the relevant items for each latent variable; for example, items p11, p18, p19 and p23 are the predictive indicators for the latent variable ‘ability’. This analysis confirms the tool’s construct validity.
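The assignment of questionnaire items to latent variables can be sketched as a simple mapping. Here only the ‘ability’ items named in the text (p11, p18, p19 and p23) are used; the respondent’s answers are invented for illustration, and a plain subscale mean stands in for the model-based factor score that AMOS would estimate.

```python
import statistics

# Items that load on the latent variable 'ability' (per the text above).
ABILITY_ITEMS = ["p11", "p18", "p19", "p23"]

def subscale_score(responses, items):
    """Mean response over the items that load on one latent variable."""
    return statistics.mean(responses[item] for item in items)

# Hypothetical 1-5 Likert responses for one respondent (not the study's data).
respondent = {"p11": 4, "p18": 5, "p19": 3, "p23": 4}
score = subscale_score(respondent, ABILITY_ITEMS)
print(f"ability subscale score: {score}")
```

Unlike this unweighted mean, the SEM analysis weights each item by its estimated loading, which is what allows the comparison of how strongly different items express each aspect of engineering systems thinking.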


Figure A2 Combined model according to Frank (2006)

[Structural equation model diagram: the four latent variables – traits, ability, cognitive and know(ledge) – are shown as ellipses, each measured by its questionnaire items (e.g., p11, p18, p19 and p23 for ‘ability’), with a residual term (e1–e26) attached to each item, and single-headed arrows connecting the latent variables.]
