Human Behavioral Modeling for Enhanced Software Project Management

Sandeep Athavale
Tata Consultancy Services, 54-B, Hadapsar Industrial Estate, Pune, India 411013
91 20 6608 6436
[email protected]

Vivek Balaraman
Tata Consultancy Services, 54-B, Hadapsar Industrial Estate, Pune, India 411013
91 20 6608 6453
[email protected]
ABSTRACT
Despite advances in technology and the growing maturity of processes, software project outcomes are still highly dependent on the performance of the members of the software project team. Individual and team performance depends on human aspects (cognitive, psychological and sociological), apart from enablers such as infrastructure and the environment.

Project managers need tools that factor in human aspects for improved software project planning and management. Such tools would enable them to take preventive actions or make interventions that reduce the uncertainty of project outcomes and increase the likelihood of project success. Human behavioral modeling and simulation can provide precisely such decision support. In this paper, we develop a computational model of human behavior in the context of a software development project. We model human aspects that affect task performance in a software project. We use the agent-based modeling and simulation (ABMS) paradigm as it can capture the autonomous actions performed by individual members as well as the interactions between members and with the environment. Through human behavioral modeling and simulation, we demonstrate the potential impact of human aspects on software project outcomes, which can lead to fresh management insights. At the same time, we also recognize the need to study these aspects further, individually and together, to make the model robust.
Categories and Subject Descriptors
D.2.9 [Management]: Productivity, Time estimation, Software process models, Programming teams; K.6.1 [Project and People Management]; K.6.3 [Software Management]: Software development

General Terms
Management, Measurement, Performance, Experimentation, Human Factors.

Keywords
Human Aspects, Agent Based Modeling and Simulation, Software Project Management, Human Behavior Modeling, Computational Organization Theory

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. 7th International Conference on Software Engineering (CONSEG), 15th-17th November, 2013, Pune, India. Copyright ©2013 Computer Society of India (CSI)

1. INTRODUCTION
Software project managers primarily use metrics and dashboards to monitor the state of a software project and take corrective actions. They may also use project planning and simulation tools to analyze projects from operational perspectives. A majority of these tools take into account aspects such as budgets, duration, team size, skill and task dependencies, but not the human aspects. White and Fortune [1] give a detailed account of the methods, tools and techniques used in the current practice of project management. They also list the limitations of such tools and techniques; the top two are the inability to account for complexity and the inability to represent the 'real world', which is human centric.
In their classic Peopleware [2], DeMarco and Lister state that more projects fail because of sociological and people-related issues than for technical reasons. In [3], Cerpa and Verner present an analysis of why software projects fail. In particular, they list the most common failure factors: inappropriate estimation of schedule and effort, long working hours, lack of consideration for team members' personal lives, unpleasant interactions, lack of motivation, lack of (knowledgeable) resources and gaps in project management. The list is perhaps indicative of the lack of focus on human aspects. Project managers, even when aware of the need to consider human aspects, may lack decision support tools that include these aspects. During our literature survey we tried in particular to locate tools or products that factor in human aspects, but did not find any. While there may be a lack of such systems now, we believe the next stage in the evolution of software project management theory and practice lies in incorporating human behavioral models. We believe the benefits will be manifold and multi-dimensional. Project simulation using such models will give project managers a more realistic sense of project planning. We give two examples to illustrate the benefits. Consider a project where the planners know the team that will execute the project. They may want to know the productivity factor for this known team in order to estimate the effort. The productivity factor (PF) is used in techniques like Function Point Analysis (FPA) [4]. Once the project size is arrived at in function points, it is multiplied by the PF to get the effort. The PF is supposed to be derived from historical data of the organization. However, productivity of humans as an 'averaged number' accounts neither for the different situations within a project nor for its temporal variations. Human performance is dynamic: it changes over time, varies within a day, varies from person to person and is affected by interactions and events. If planners had the ability to simulate a project considering this variability of the PF, they would be better at estimating the project effort and schedule.
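To make the contrast concrete, the following minimal sketch (ours, not part of the paper's model; the 40-hour week, the 0.7-1.3 drift band and all function names are illustrative assumptions) compares a conventional single-number FPA estimate, effort = size in FP multiplied by PF, with a toy simulation in which each person's weekly PF drifts around the historical average.

```python
import random

def effort_constant_pf(size_fp: float, pf_hours_per_fp: float) -> float:
    """Classic FPA-style estimate: effort = size in FP x average PF."""
    return size_fp * pf_hours_per_fp

def effort_variable_pf(size_fp: float, base_pf: float, team: int,
                       seed: int = 42) -> float:
    """Toy simulation: each person's weekly PF drifts around the base value,
    e.g. due to mood, interruptions or learning; the 0.7-1.3 band is illustrative."""
    rng = random.Random(seed)
    remaining, effort = size_fp, 0.0
    while remaining > 0:
        for _ in range(team):                      # one simulated week of the team
            if remaining <= 0:
                break
            pf = base_pf * rng.uniform(0.7, 1.3)   # this person's PF this week
            done = min(remaining, 40.0 / pf)       # 40 work hours in the week
            effort += done * pf                    # hours actually spent
            remaining -= done
    return effort

print(effort_constant_pf(1000, 8))            # a single fixed estimate: 8000
print(round(effort_variable_pf(1000, 8, 8)))  # varies with the sampled PFs
```

Even this crude form of variability makes the effort figure change from run to run; exposing such uncertainty in a principled, behaviorally grounded way is the aim of the model developed in this paper.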
As another example, consider a project with a tight schedule. The question many project managers wrestle with is whether, given the tight schedule, one should ask the team to work extra hours and/or on weekends to ensure the project gets done on time. A project model that does not factor in human aspects might suggest that an increase in deployed resources increases the available effort and reduces the time to completion. However, if human aspects are considered, the model may point to very different insights. The extra work hours may disturb individual routines and hence adversely impact cognitive and affective states, leading to deterioration in productivity or quality of work [5]. The poor productivity or quality may necessitate rework, leading to even longer work hours, further stress and further deterioration in performance. This spiral can go on until the project goes into crisis, requiring additional management intervention to handle schedule and cost overruns. The examples above suggest that if we factor in the non-linear, dynamic human aspects we may get a very different, counter-intuitive view of software projects that may be closer to how they are in practice. This is the intuition that drives our research into human behavioral models of software projects. In this paper we model five dimensions of a software project (individual, team, task, organization and environment), their aspects, associations and the behavioral rules. Our objective is to understand if and how the performance of a project varies as a result of variable behavioral characteristics of team members, and thus to aid better planning. While modeling only a few behavioral aspects with validation would have resulted in better rigor and accuracy, our intent in this paper is to point out the likely impact on project outcomes when we consider multiple dimensions that represent reality. We use ABMS to simulate the model in order to observe the emergent behavior of a virtual development team working on a virtual project through its duration. Through the simulation we get a sense of the impact of human aspects on software project outcomes. Our choice of ABMS over alternatives such as Discrete Event Simulation (DES) and System Dynamics (SD) is because ABMS is particularly suited to bottom-up modeling of projects. As Bonabeau argues in [6], ABMS is well suited to model situations where the interactions between agents are complex, nonlinear, discontinuous or discrete (for example, when the behavior of an agent can be altered dramatically, and even discontinuously, by other agents). As these are the characteristics we need in human behavioral modeling of software projects, we felt that ABMS was the right platform for this model.
2. RELATED WORK
Carley and Prietula [7] suggest that organizations are basically groups of people working together to attain common goals. Organizations can also be considered Complex Adaptive Systems composed of intelligent, task-oriented, boundedly rational and socially situated agents that face an environment that also has the potential for change. A way of studying such organizations is the toolset of Computational Organization Theory (COT), which often employs ABMS models in which the organization is viewed as a composition of a number of intelligent agents. Our work on modeling software projects can be situated in this area of COT.
Kellner et al. [8] state that software process simulation and modeling is increasingly being used to address a variety of issues, from the strategic management of software development, to supporting process improvements, to project management training. They provide a useful overview of work in this area. The increased prominence of ABMS is resulting in growing interest in the simulation of software organizations. Joslin and Poole [9] have proposed ABMS for software project planning. Spasic and Onggo [10] advocate the use of ABMS for simulation of the software development process and demonstrate it using real data. Abdel-Hamid and Madnick [11] develop a comprehensive system dynamics model of software development that enhances the understanding of, provides insight into, and makes predictions about the process by which software development is managed. Agarwal and Umphress [12] also propose simulation of the software development process as an easy way for managers to test different configurations of the process and understand the effects of various policies. The literature discussed above gives us direction on how software projects can be modeled to generate insights for planning and performance. Crespo and Ruiz [13] have applied simulation techniques to project management within a process framework in order to achieve process improvement, specifically a multi-paradigm simulation model in the realm of CMMI. They model task and programmer parameters. We draw from them the idea of segregating input parameters and using weights to calculate outcomes, though we focus much more on behavioral parameters. Gosenpud and Miesing [14] examine the relative impact of several factors (including behavioral factors such as motivation) that influence performance in business simulation. They carry out the experiment in a university setting and conclude that although several factors play a role, only a few have significant impact. In this paper we take the significant factors that they pointed out and apply them in the context of a software organization. Wickenberg and Davidsson [15] investigate the applicability of ABMS for simulating software development processes (SDP) and provide general guidelines on when to apply ABMS for SDP simulation. One such scenario is when there is a need to study the sensitivity of the SDP to individual characteristics, e.g. to measure the impact of changes in the behavior of individuals. Our work fits this scenario. Martínez-Miranda et al. [16] have proposed a tool for management decision making in which software agents model human social behavior at work: human characteristics are represented by a set of fuzzy values, and fuzzy rules model the interaction between the agents to generate the possible performance of a work team. Our model is similar in objectives but very different in structure, in the grounding of the rules in the social sciences and in the use of routines and events. We draw insights from Pérez-Pinillos et al. [17] about modeling the motivations, personality traits and emotional states of agents for automated planning using the ABMS technique; however, they do not specifically apply the model in the software domain. Human Behavior Representation, which refers to computer-based models that mimic either the behavior of a single human or the collective actions of a team of humans, has been studied in defence and military applications by Pew and Mavor [18]. We have made an attempt to represent human behavior in the software project domain.
3. HUMAN BEHAVIOR CENTRIC MODEL
Curtis [19] proposed a layered behavioral model of the software development world. The model manifests the layers as concentric semi-circles starting with the individual, then the team, the project, the organization and lastly the business milieu. He suggested that software development be studied at several levels and not restricted to the individual layer (developer abilities). The model emphasizes not merely cognitive factors but also the social and organizational factors that affect software development. In this paper we propose a model with five dimensions (figure 1): (a) the individual agent/actor/programmer/developer, having cognitive abilities and behavior, (b) the software project team, as a set of relationships, (c) the software engineering tasks, (d) the organization, which has goals and which also provides the enablers, and (e) the environment.

Figure 1. Five dimensional project model

Though we retain the classification from Curtis [19], we represent the dimensions as independent dimensions interconnected by a common 'Context' instead of as concentric layers. We found a limitation with the layered model: each dimension can interact with the others in some way, and the layers do not represent that. Although we do not model inter-dimension interaction directly, we build a (project) context in which all dimensions interact. A model is a representation of a system for the purpose of studying the system. The proposed model is a logical representation of the system of a software project, which we intend to study from a human behavior standpoint. We describe the physical implementation/prototype of our logical model later in this paper. The components of the model include the dimensions mentioned above, the aspects (or attributes) of the dimensions and the associations between aspects. Each dimension has multiple aspects. For example, the individual agent dimension has knowledge, personality, emotion and self-efficacy aspects; the engineering task dimension has aspects such as complexity; the organization dimension has infrastructure quality as an aspect. The aspects can be static (personality) or dynamic (emotion). The model includes the set of rules that form the associations, the set of rules that determine performance and the set of rules that change the state (or value) of an aspect. A clock provides the time dimension. Events are generated either based on time or stochastically. The events trigger the rules for the association of aspects. For example, a 'start coding' event makes an individual start writing programs (virtually). The sequence of such events is primarily based on the work plan and an individual's routines for the day. The sequence can, however, be altered by stochastic events such as receiving a phone call.

We lay out these components to give a systems view in figure 2.

Figure 2. Components of the model

Further, the aspects impact each other based on their states and governing rules. For example, a phone call interaction with a coworker can change the 'affective state' aspect of an agent. Figure 3 captures the flow and direction of the influence between the aspects. Individual agent aspects (blue), team aspects (pink) and other aspects (grey) influence each other, resulting in a cumulative impact on the work performance (green) in a software project context.
Figure 3. Influence of aspects on performance

The knobs indicate that a value can be set externally or is set stochastically (inside the model). The blue arrows indicate the direction of impact. We have modeled one feedback loop, indicated by the dotted line, that reinforces specific behavior: low quality work results in rework, high overtime, pressure and a negative affective state, which leads to further low quality. These aspects, whose impacts and influences are generally qualitative when studied in the social sciences, need to be represented in quantitative terms to make the model 'computational'. We found this conversion challenging but attempted it by using consistent scales throughout. In the next phase of our work we will use field-level surveys and other instruments to help calibrate the computational model with real data from the field. In the present model, we use a scale of 0 to 1 (low to high) to represent the range of each aspect. Given that each sample will have its own range, the conversion scale, as it stands currently, will vary from one setting to another. Next we describe the dimensions and aspects in detail.
3.1 The individual agent/actor dimension
The individual human actor/agent/programmer dimension defines the cognitive and behavioral aspects of the individual. Giordano et al. [20] suggest that realistic representation of individual humans or groups of humans in computer simulations has served as an overarching goal for those considering next-generation simulations. However, this is easier said than done. We were challenged by the lack of specific guidance on which individual aspects have significant impact and need to be represented in the model. We referred again to the work of Curtis [19], who suggests that programming performance is a function of cognitive abilities on one side and motivational, personality and behavioral characteristics on the other. We also referred to the literature discussed in the related work section, especially [13], [14], [16] and [17], and did a frequency analysis to identify key aspects in software project performance. Table 1 summarizes the individual aspects thus identified and their impact on performance. Aspects 1 and 2 form the cognitive side and aspects 3 to 7 represent the behavioral side.

Table 1 – Individual aspects
Aspect | Range | Performance Impact
1. Knowledge | 0 to 1 | Quality, speed of work
2. Learnability | 0 to 1 | Acquisition of knowledge
3. Self-Efficacy | 0 to 1 | Quality of work
4. Conscientiousness | 0 to 1 | Quality of work
5. Agreeableness | 0 to 1 | Team cohesion
6. Neuroticism | 0 to 1 | Amplitude of swings in affective state
7. Affective State (internal) | 0 to 1 | Quality and speed of work
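As a minimal illustration (a Python sketch rather than our NetLogo implementation; the default values, the 0.0-0.4 'low' band and all names are assumptions), the aspects of Table 1 can be carried on each agent as fields on the 0-to-1 scale, with the static traits fixed at creation and the affective state left free to change during a run.

```python
import random
from dataclasses import dataclass

@dataclass
class Developer:
    """One agent; every aspect uses the paper's 0-to-1 scale."""
    knowledge: float = 0.5          # aspect 1 (cognitive)
    learnability: float = 0.5      # aspect 2 (cognitive)
    self_efficacy: float = 0.5     # aspect 3
    conscientiousness: float = 0.5 # aspect 4
    agreeableness: float = 0.5     # aspect 5
    neuroticism: float = 0.0       # aspect 6 (static trait)
    affective_state: float = 0.5   # aspect 7 (dynamic, changes with events)

def make_team(n: int, high_conscientious_share: float,
              rng: random.Random) -> list:
    """A given share of the team scores 'high' (0.6-1.0) on conscientiousness,
    mirroring the control-panel sliders of section 4.6; the 0.0-0.4 band for
    'low' scorers is an assumption."""
    team = []
    for _ in range(n):
        high = rng.random() < high_conscientious_share
        c = rng.uniform(0.6, 1.0) if high else rng.uniform(0.0, 0.4)
        team.append(Developer(conscientiousness=c))
    return team

team = make_team(8, 0.8, random.Random(0))   # roughly 80% highly conscientious
```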
3.1.1 Knowledge aspect
We represent the cognitive side as the knowledge or skill the individual possesses to perform the required task, together with learnability (aptitude). Both aspects have a positive correlation with programming performance, as noted by Hunter [21]. We model individual knowledge as a value between 0 and 1. Higher knowledge results in higher quality and quantity of work. Individuals with less knowledge can learn through formal opportunities, peer interactions and self-learning. The pace of learning is driven by the learnability factor, which varies from 0 to 1: the higher the factor, the higher the likelihood and pace of gaining knowledge.
3.1.2 Personality aspects
We derive the personality aspects from the Five Factor Model (FFM) of personality by McCrae and John [22]. The five factors (or traits) are Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. Barrick and Mount [23], in their meta-analytic findings, indicate that the Conscientiousness and Neuroticism (also called emotional stability) traits have a significant impact on work performance compared to the other traits across most types of jobs. Hence we model these two traits. As per that study, employees who are low on neuroticism are generally calm, adaptable, confident and not easily depressed. They are able to produce consistent performance and show smaller variations in affective state. We model neuroticism as having a value of 0 (low) or 1 (high). Neuroticism impacts the affective state, which in turn impacts performance. We discuss affective state in a further sub-section.

Similarly, employees who have high conscientiousness display responsible, disciplined, reliable, resilient and determined behavior when carrying out the tasks entrusted to them. This results in better quality performance. Since our model is stochastic, it accommodates the fact that individual behavior varies. Therefore, high conscientiousness (or neuroticism) in our model means the individual is likely to display that trait more often than not (but not always).

3.1.3 Self-Efficacy aspect
We model self-efficacy to represent motivation (self-efficacy leads to motivation), based on the Schunk model of self-efficacy [24]. The Schunk model suggests a reinforcing loop in which people acquire information to appraise self-efficacy from their performances, experiences, forms of persuasion and physiological reactions. Drawing on the work of Albert Bandura, Schunk refers to self-efficacy as "people's judgments of their capabilities to organize and execute courses of action required for attaining designated types of performances". Schunk suggests that work or learning performance is higher with higher self-efficacy. Hence in our model, high self-efficacy results in high quality and quantity per hour (speed) of work and high learnability. This aspect varies over time based on feedback from coworkers.

3.1.4 Behavior – Routines
Modeling behavior is challenging due to the varied interpretations of the term 'behavior'. We refer to the work of Triandis [25], whose Theory of Interpersonal Behavior (TIB) suggests that antecedents (habits, social and affective factors) form behavior. Drawing from this, we use routines (habits) and 'affective state' to represent the individual aspect; the social factors appear later as team aspects. First we discuss routines. Routines are repetitive behavioral sequences. Alterman and Zito-Wolf [5] suggest that routines are customized to the agent's environment (habitat). We model agent routines as a sequence of activity segments in a day. It is a kind of timetable for the day and is different for every individual. Typical routines have segments such as pre-work, at-work and post-work. The pre-work segment may consist of tasks such as daily chores at home, exercise, breakfast and traveling to work. The at-work segment may include activities (or tasks) such as coding, meetings, lunch, etc. A snapshot of the day's routines (in a sample simulation run for a 9-person team, taken at 2 PM) is shown in figure 4. Here 3 agents each are at lunch and coding, 2 are at home and 1 is idle.
Figure 4. Agents in various segments of routines

We assign routines to individuals randomly to begin with and allow them to be impacted by events (we detail events in table 4). Events modify routines by inserting or deleting segments inside the existing sequence, as shown in figure 5.

Figure 5. Events impact routines
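A small sketch (ours; the segment names, times and splice rule are illustrative) of how a day's routine can be held as an ordered sequence of segments and modified by an event, in the spirit of figure 5:

```python
from typing import List, Tuple

Routine = List[Tuple[str, int]]   # (segment name, start hour), e.g. ("coding", 9)

def default_routine() -> Routine:
    return [("pre-work", 7), ("travel", 8), ("coding", 9), ("lunch", 13),
            ("coding", 14), ("review", 17), ("travel-home", 18)]

def apply_event(routine: Routine, segment: str, hour: int) -> Routine:
    """Splice an event-driven segment (e.g. an unplanned meeting) into the
    remaining part of the day, keeping all later segments after it."""
    before = [s for s in routine if s[1] < hour]
    after = [s for s in routine if s[1] >= hour]
    return before + [(segment, hour)] + after

day = apply_event(default_routine(), "meeting", 11)
# ... ("coding", 9), ("meeting", 11), ("lunch", 13), ...
```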
3.1.5 Behavior – Affective State
Now we discuss affective state. Rothbard and Wilk [26] suggest that affective state is impacted by start-of-day mood and by events that happen during the day. Based on this, affective state in our model is impacted by events, and the amplitude of the impact is modulated by the neuroticism trait. We model affective state as a variable aspect having values between 0 and 1 for every individual. The polarity of an event causes a positive or negative impact on affective state. The impact of an event is larger (+/- 0.3) for a person scoring high on neuroticism and smaller (+/- 0.1) for a person scoring low. For example:
{Good event (e.g. appreciation email) + High Neuroticism} → High positive variation in affective state
{Bad event (e.g. kickoff with a tighter project schedule) + Low Neuroticism} → Low negative variation in affective state
A positive affective state impacts work performance (quality) positively and vice versa, drawing from Weiss and Cropanzano [27]. Although [27] suggests a circumplex model with high and low PA (positive affect) and high and low NA (negative affect) as four poles, we have reduced it to a single axis for simplicity. The list of typical events (and their polarity) is shown in table 2.

Table 2 – Events impact Affective State
Event | Negative Polarity | Positive Polarity
Phone call | Bad news | Good news
Email | Criticism | Appreciation
Review outcome | Overwork (rework, cross-allocation) | Rest/Break
Chat | Bad interaction | Good interaction
Work allocation | Undesirable change in routine | Desirable change in routine
Enabler activation | Infra failure/issues | Learning
Project start | Schedule pressure |
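The event-driven update of affective state can be written compactly; the sketch below is ours, with the +/- 0.3 and +/- 0.1 amplitudes taken from the text and the 0.5 cut-off for 'high neuroticism' assumed (the paper models neuroticism as 0 or 1).

```python
def update_affective_state(state: float, event_polarity: int,
                           neuroticism: float) -> float:
    """Shift affective state by the event's polarity (+1 good, -1 bad).
    High-neuroticism agents swing by 0.3, low-neuroticism agents by 0.1,
    as in section 3.1.5; the 0.5 cut-off for 'high' is an assumption."""
    amplitude = 0.3 if neuroticism >= 0.5 else 0.1
    return max(0.0, min(1.0, state + event_polarity * amplitude))

# An appreciation email reaching a high-neuroticism developer:
print(update_affective_state(0.5, +1, neuroticism=1.0))   # 0.8
```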
3.2 The project team dimension
Organizational citizenship behaviors (OCB) and counterproductive work behaviors (CWB) affect work performance. Kumar et al. [28] suggest that OCBs are themselves related to personality, especially agreeableness. We also note from [23] that individuals with a high score on agreeableness exhibit better teamwork and relationships. To capture the project team dimension we model two aspects: agreeableness and relationships. The interpersonal relationships of agents are modeled through a cohesion factor, which varies from 0 to 1. A factor of 0 means no one has a good relationship with anyone else in the team; 1 means everyone has a good relationship with everyone else; values in between represent a mixed set of relationships. In our model we assume relationships are symmetric, which means that if person A has a good relationship with B, then B also has a good relationship with A. For example, in a team of 8, a factor of 0.5 means that out of a possible 28 symmetric relationships, 14 are good and 14 are not, and the good ones are assigned to persons high on agreeableness.
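One possible way to realize the cohesion factor computationally (our sketch; the random tie-breaking and the exact preference for agreeable members are assumptions) is to rank the n(n-1)/2 symmetric pairs by the members' agreeableness and mark the top fraction as good:

```python
import itertools
import random

def assign_relationships(team_ids, agreeableness, cohesion, seed=1):
    """Mark a `cohesion` fraction of the n*(n-1)/2 symmetric pairs as good,
    preferring pairs whose members score high on agreeableness."""
    rng = random.Random(seed)
    pairs = list(itertools.combinations(team_ids, 2))
    rng.shuffle(pairs)                                   # break ties randomly
    pairs.sort(key=lambda p: agreeableness[p[0]] + agreeableness[p[1]],
               reverse=True)
    n_good = round(cohesion * len(pairs))
    return {frozenset(p): i < n_good for i, p in enumerate(pairs)}

team = list(range(8))
agree = {i: random.Random(i).random() for i in team}
relations = assign_relationships(team, agree, cohesion=0.5)
print(sum(relations.values()), "of", len(relations), "are good")   # 14 of 28
```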
Both the agreeableness and relationship aspects impact an individual's affective state during work and non-work interactions. Good relationships and agreeableness cause positive interactions. In our model, good interactions impact affective state and self-efficacy positively, and vice versa. A team's geographical distribution and leadership are two other important aspects of the team dimension which we intend to consider in the future, though the present model does not include them.
3.3 The software engineering/task dimension
We have modeled the software engineering dimension as a set of task types that the human agents (programmers) need to perform. Thus, instead of modeling a project as a set of phases, we model a project as a set of tasks, each of which is an instance of a task type. We view the task types as being similar to therbligs as defined by Gilbreth and Gilbreth [29]. Task types are thus the atomic cognitive or behavioral action blocks of a project which recur multiple times in a phase or in multiple phases; for example, the "Discussion" task type occurs in all phases. We used the SEI CMMI for Development handbook [30] to identify 'software therbligs' from the practices and sub-practices listed therein. We extracted a set of action words/verbs from the statements defining sub-practices in the CMMI document, for example 'create' from 'create project plan' and 'discuss' from 'discuss design alternatives', and categorized them into a set of 18 elemental task types, which we consider a first cut of therbligs in software, as listed in table 3.

Table 3 – Elemental task types
Acquire | Plan | Analyze
Discuss | Create | Evaluate
Design | Code | Deploy
Review | Demonstrate | Maintain
Comply | Consult | Manage
Learn | Think | Idle
For the purposes of this paper, we have restricted the scope to a single phase (coding/development) consisting of 6 task types: Plan, Code, Review, Discuss, Learn and Idle.

3.3.1 Planning
In the planning task, the code units are allocated/distributed to the team. On the first day, work units are evenly distributed. From the next day onward, work is allocated based on the backlog at the end of the previous day plus fresh work for the present day.
3.3.2 Coding
This task type is where the agents virtually write code. As a coding task is completed, the counter for 'code units completed' is incremented for the individual and the counter for 'code units remaining' in the project is decremented. Coding tasks are of varying complexity. The complexity is varied using a normal distribution with values from 0 to 1, and each team member is given the same distribution. For example, out of 10 allocated tasks, each individual gets 3 simple, 3 complex and 4 medium-complexity tasks. The more complex the task, the more time required to complete it.
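As an illustration (our sketch; the mean, the spread and the linear time scaling are assumptions, the paper only specifies a normal distribution clipped to 0-1), task complexities can be drawn and mapped to task duration as follows:

```python
import random

def task_complexities(n_tasks: int, rng: random.Random):
    """Complexities drawn from a normal distribution centred at 0.5, clipped
    to the 0-to-1 range; the mean and spread are illustrative choices."""
    return [min(1.0, max(0.0, rng.gauss(0.5, 0.2))) for _ in range(n_tasks)]

def hours_for_task(complexity: float, base_hours: float = 1.0) -> float:
    """More complex tasks take longer; the linear scaling is an assumption."""
    return base_hours * (1.0 + complexity)

rng = random.Random(7)
for c in task_complexities(3, rng):
    print(round(c, 2), "->", round(hours_for_task(c), 2), "hours")
```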
3.3.3 Discussion
A work meeting (discussion) is a task which involves more than one individual. Meeting outcomes could be modeled as a complex function of parameters that include roles, agreeableness, self-efficacy, knowledge, appeal of the idea and the relationships between the individuals. Presently, however, we take a simplistic approach: if the majority of the people in the meeting have an agreeable personality, the meeting results in an agreement/conclusion. Meetings take an hour out of the day's schedule for all participating individuals. An individual returns satisfied if his/her idea is accepted. Other individuals return satisfied or dissatisfied based on the valence of their relationship with the proposer. An individual's affective state is impacted positively by satisfaction and vice versa.
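The simplistic meeting rule could look roughly like the following sketch (ours; the 0.5 agreeableness threshold and the callback used to feed satisfaction back into affective state are assumptions):

```python
def run_meeting(participants, proposer, agreeableness, good_relation,
                update_affect):
    """The idea is accepted if a majority of participants are agreeable
    (>= 0.5 is an assumed threshold). The proposer is satisfied if the idea
    is accepted; others are satisfied if they have a good relationship with
    the proposer. Satisfaction feeds affective state through the supplied
    update_affect(person, polarity) callback."""
    agreeable = sum(1 for p in participants if agreeableness[p] >= 0.5)
    accepted = agreeable > len(participants) / 2
    for p in participants:
        if p == proposer:
            satisfied = accepted
        else:
            satisfied = good_relation.get(frozenset((p, proposer)), False)
        update_affect(p, +1 if satisfied else -1)
    return accepted
```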
3.3.4 Learning
This task is modeled as happening in three ways: formal classroom learning, interaction with peers and self-learning. Such learning opportunities can result in an increase in an individual's knowledge, based on learnability.
3.3.5 Review
The review task decides which (and whose) work units need rework. Agents may need to work overtime to complete the rework or pending work. The rework is calculated based on the percentage of poor quality output. The review can also result in re-allocation of pending tasks from low performers to high performers (after a threshold). This reallocation increases workload and can impact affective state negatively.
3.3.6 Idle During Idle time, there is no work done by the individual, but the time counter is advanced. However idle time can result in affective state changing towards positive.
3.4 The organizational dimension
The organizational dimension of a software development project is embodied by the IT services vendor (offering software development services). The organization has a structure, functions and goals; it has a culture. The organization provides enablers, such as infrastructure, to the individuals to perform their functions. We have considered the organizational dimension for the sake of completeness and introduced only a single 'enabler' aspect, which is infrastructure. We refer to the work of Guynes [31], who suggests that better infrastructure results in better productivity (all other factors being the same). In our model, the quality of infrastructure can vary from 0 to 1 (1 being very good infrastructure). Good infrastructure results in higher speed/quantity per hour and fewer failures (we set that threshold to at most 1 failure in 3 weeks).
3.5 The environment dimension
The environment is everything outside of the four dimensions discussed so far (individual, team, task and organization). It includes the business environment of the organization, the social environment of the team, the physical environment of the individual, etc. At the moment, however, we model only one aspect of the environment, which is the ambience. We consider that good ambience means fewer disruptions.

DeMarco and Lister have indicated that the more the disruptions, the lower the performance [2]. Disruptions are modeled as events that are generated randomly. The ambience quality determines the frequency of disruptive events. Ambience quality varies from 0 to 1 (1 being very good ambience). Good ambience results in fewer disruptions (we set it to fewer than 3 per day). Apart from disruptions, routine events and task association events are the two other types of events modeled. Such events start or end tasks and daily routine segments. The complete list is shown in table 4.

Table 4 – Events list
Task Events | Routine Events | Disruptive Events
Start/End Meeting | Lunch break | Travel to work issues
Start/End Thinking | Stay overtime | Phone call
Start/End Learning | Leave early | Email
Start/End Review | Idle/Rest | Infra issues
Start/End Coding (incl. rework) | Leave for home | Chat with colleague
Start/End Planning | Take off day |
Project Start | |
3.6 The context – project and performance
The context is where the associations between the aspects happen, the rules take effect and the actors perform tasks. It represents a specific software project with an instance of aspects and their values. Performance is represented as a combination of the quality and quantity of work completed in a unit of time (one hour): certain combinations of aspect values impact quality and certain other combinations impact quantity. The performance rules and thresholds that presently govern the model are shown in table 5.

Table 5 – Determinants of Performance
Q1: High Quantity/hr (speed), High Quality
Knowledge ≥ 0.5 AND Self-Efficacy ≥ 0.5 AND Conscientiousness ≥ 0.5 AND Affective State ≥ 0.5 AND Infra Quality ≥ 0.5 AND Task Complexity < 0.5
Q2: Low Quantity/hr (speed), High Quality
(Knowledge ≥ 0.5 AND Self-Efficacy ≥ 0.5 AND Conscientiousness ≥ 0.5 AND Affective State ≥ 0.5) AND (Infra Quality < 0.5 OR Task Complexity ≥ 0.5)
Q3: High Quantity/hr (speed), Low Quality
(Knowledge < 0.5 OR Self-Efficacy < 0.5 OR Conscientiousness < 0.5) AND (Infra Quality ≥ 0.5 AND Task Complexity < 0.5)
Q4: Low Quantity/hr (speed), Low Quality
(Knowledge < 0.5 OR Self-Efficacy < 0.5 OR Conscientiousness < 0.5) AND (Infra Quality < 0.5 OR Task Complexity ≥ 0.5)

We classify performance into four quadrants. Quadrant 1 (Q1) indicates high quality and high quantity per hour (speed) of performance, whereas Q4 indicates low quality and low quantity.
Table 6 gives the realized performance for every hour. Presently the thresholds are based on sample project data. The 'review' task uses this table to determine the completed work and rework quantity for every individual. For example, performance in quadrant Q4 in a particular hour means 70% rework is required for the output of that hour. The planning task uses this review output to allocate fresh plus pending work units for the next day.
Table 6 – Performance Quadrants and Productivity
Q1: Highest output (1.2 times planned productivity)
Q2: 0.7 times planned productivity
Q3: 0.5 times planned productivity
Q4: Lowest output (0.3 times planned productivity)
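Tables 5 and 6 together define a small decision procedure; the following sketch (ours, not the NetLogo implementation) applies the 0.5 thresholds of Table 5 to classify an agent-hour into a quadrant and then looks up the Table 6 output factor:

```python
def performance_quadrant(knowledge, self_efficacy, conscientiousness,
                         affective_state, infra_quality, task_complexity):
    """Classify an agent-hour into Q1..Q4 using the Table 5 threshold rules."""
    person_ok = (knowledge >= 0.5 and self_efficacy >= 0.5 and
                 conscientiousness >= 0.5 and affective_state >= 0.5)
    context_ok = infra_quality >= 0.5 and task_complexity < 0.5
    if person_ok and context_ok:
        return "Q1"   # high quality, high quantity/hr
    if person_ok:
        return "Q2"   # high quality, low quantity/hr
    if context_ok:
        return "Q3"   # low quality, high quantity/hr
    return "Q4"       # low quality, low quantity/hr

# Table 6: realized output as a multiple of planned productivity
OUTPUT_FACTOR = {"Q1": 1.2, "Q2": 0.7, "Q3": 0.5, "Q4": 0.3}

def realized_output(planned_units_per_hour, quadrant):
    return planned_units_per_hour * OUTPUT_FACTOR[quadrant]

q = performance_quadrant(0.8, 0.7, 0.6, 0.4, 0.9, 0.3)
print(q, realized_output(1.0, q))   # affective state below 0.5 -> Q3, 0.5 units
```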
4. PROTOTYPE
We have implemented an executable ABMS prototype (physical model) based on the logical model presented in the sections above. As our model is complex (multiple agents, each with a distinct set of aspect values) and dynamic (the aspect values change based on a set of rules and events), a computational prototype becomes essential. The computational prototype enables us to generate and observe outcome patterns from such a complex and dynamic model, which would not be possible using mathematical equations alone.
4.1 Implementation Platform
We used NetLogo [32], a popular ABMS platform/tool, for our implementation. We found that NetLogo provides a simple yet powerful programming language, built-in graphical interfaces and sufficient documentation. NetLogo also requires comparatively less modeling effort, as indicated by [33].
The implementation was done by creating (programming) the visual display, the virtual agents, tasks, other aspects, events, rules and measures. The display enables visualization of agent movement and the current activity at work, as shown in figure 6.

Figure 6. Visual display of Simulator

4.2 Setting up Agents
First, we created a virtual world (office) where agents (software developers) perform tasks. Agents are created as virtual persons in this world using the agent dimension of the model. We have implemented only one type of agent, the developer. Initial values of the aspects can be set up through user controls to represent a real-world distribution. Agents start their actions based on initial routines assigned randomly, but keeping some basic tenets: a number of 'coding' segments and the arrive, go-home and lunch segments appear for each individual.

4.3 Setting up Tasks
We have modeled tasks based on the task types in the task dimension. Plan, Code and Review tasks happen daily. Planning happens in the morning and Review in the evening. Discuss, Learn and Idle tasks happen when random events are generated calling for them.

4.4 Setting up Rules
We have earlier (table 5) described the aspect values and the rules that affect work performance. We have also described the rules that cause changes of state and the rules that create associations (or work allocation). These rules are programmed in the prototype.

4.5 Setting up Measures
As the fourth step, we set up measures, or outcomes for observation. We have two categories of measures: individual and project. Individual measures include hourly work performance, hourly state of routine (which segment), hourly affective state and cumulative completion of work units. Figure 7 displays a sample performance plot (colored, based on table 5) across the team.

Figure 7. Team member's variable performance over time

For a project, we measure estimated versus actual schedule, estimated versus actual effort, planned versus actual work progress and planned versus actual productivity. Figure 8 shows a sample graph of actual versus estimated work progress on a temporal axis. Outcomes such as the amount of rework, the amount of overtime, etc. can also be measured.

Figure 8. Actual vs. Estimated work progress
4.6 Running the Simulation
A project's size is defined by setting the total code size in function points (FP) for the development phase. The project duration (in weeks) and the planned productivity are also set.

The simulation begins with the user/manager setting up the project code size, productivity and duration. The team size is automatically calculated from these. The personality, efficacy and knowledge distributions of the team members are then set up through the control panel, as shown in figure 9.

Figure 9. Control Panel

The working of the sliders can be understood from an example: when the Conscientiousness slider is set to 0.8, 80 percent of the team members will display high conscientiousness. The 'high' value itself varies stochastically between 0.6 and 1 for each individual.

The team size is calculated as the total project size divided by the planned productivity (units/person/week) and the duration. Presently we assume the team size is uniform throughout. For example, a project of 1000 FP, with a 25-week deadline and a productivity of 1 FP/day/person, results in a team size of 1000 / (25 * 5 * 1) = 8.

Code units are assigned to individuals based on the planned productivity; in the above example, 1 FP per person per day. For modeling, we break an FP into 8 code units, so 1 FP = 8 code units.

As multiple runs of the simulation give a range of outcomes due to the stochastic nature of the environment modeled, the simulation is run in sets to obtain average values. Each simulation tick corresponds to one hour of real-world time. The simulation continues until the entire work is done (all work units completed).

The manager/user of the tool can fine tune the parameters and rerun the simulation to observe which combinations result in desired outcomes, and use that in planning.
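The team-sizing and allocation arithmetic of this subsection can be summarized in a few lines (our sketch; rounding the team size up to a whole person is an assumption):

```python
import math

CODE_UNITS_PER_FP = 8   # the model breaks 1 FP into 8 code units

def team_size(project_size_fp, duration_weeks, fp_per_person_per_day,
              workdays_per_week=5):
    """Team size = project size / (duration x per-person weekly productivity);
    e.g. 1000 FP, 25 weeks, 1 FP/day/person -> 1000 / (25 * 5) = 8 people."""
    weekly = fp_per_person_per_day * workdays_per_week
    return math.ceil(project_size_fp / (duration_weeks * weekly))

def daily_allocation(fp_per_person_per_day, backlog_units=0):
    """Fresh code units for one person for one day, plus any backlog carried
    over from the previous day's review."""
    return fp_per_person_per_day * CODE_UNITS_PER_FP + backlog_units

print(team_size(1000, 25, 1))     # 8
print(daily_allocation(1, 3))     # 11 code units to get through today
```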
5. RESULTS AND ANALYSIS
We ran the simulation with multiple input settings to compare the effort, schedule and productivity outcomes. We also tried to generate some interesting insights from these runs. We believe such results (after further validation of the rules) can equip managers to better manage projects.
5.1 Impact of human aspects
We created 5 sets of inputs with varying aspects, as indicated by the columns in table 7, and compared the results (which can feed back into plans). In the first set the human aspects are not considered: it is assumed that the team will have the required abilities, personality and motivation to perform, and the productivity achieved is the same as the planned productivity. In this set, we assume the end state can be determined by the equations discussed earlier and we do not need simulation.

Table 7. Impact of Human Aspects

In sets 2, 3 and 4 we varied a few parameter pairs while holding the others constant. Set 5 is a project where all parameters deviate from the ideal. The simulation is run multiple times for each set (sets 2 to 5). The results show that varying combinations of human aspects have varying impact on the outcomes. This might appear self-evident, but the model allows us to make such formal correlations.

5.2 Generating useful insights
Here we present some interesting combinations of simulation settings that can generate additional insights for planning. The actual results displayed below are less significant than the fact that such what-if scenarios or specific conditions can be generated using this model.

5.2.1 Non-linearity of outcomes
It would be intuitive to expect that a team consisting of members who display higher neuroticism (low emotional stability) is likely to be affected by schedule pressure, but we may not know by how much. We try to find that "how much". We also try to find whether one aspect alone or a combination of aspects makes a big difference. We did two sets of simulation runs, both with a 25% squeezed schedule. In the first set, we varied only the percentage of members with low stability, from 0 to 90. We observed a linear change in effort slippage, as shown in figure 10 (blue curve). In the second set, we additionally varied the percentage of people with low conscientiousness from 0 to 90. We observed a non-linear effort slippage (brown curve). This indicates that a combination of more than one aspect can lead to non-linear outcomes rather than an algebraic summation of the impacts of the individual aspects.

Figure 10. Pressure and Personality

5.2.2 Learning and transformation
Another simulation examined how knowledge was affected by the learnability and self-efficacy aspects. We set up a team in which 70% of the people had inadequate knowledge to perform the required tasks, but we set learnability high for the entire team. Since knowledge directly impacts performance, we tracked performance to observe the gain in knowledge. We then did two sets of runs. First we kept self-efficacy at 100% (all members with high self-efficacy). The result (figure 11) indicates that an initial performance in quadrant Q3 (low quality, high quantity, blue) soon improved and people transitioned to Q1 (high quality, high quantity, red).

Figure 11. Learning improves performance

This was because people gained knowledge and started performing better. In the second set, we set self-efficacy high for only 30% of the people (keeping the settings for knowledge and learnability the same as in the first set). This resulted in a longer learning time and hence a longer time to shift performance to Q1 (shown in figure 12).

Figure 12. Learning takes time with low Self-Efficacy

5.2.3 Individuals and Groups
Relationships between individuals have an impact on outcomes. We did 5 sets of simulation runs to measure the impact of good relationships (within the team) on schedule slippage. For each set we varied the share of good relationships from 0% to 100%. Figure 13 shows the resulting graph. It is interesting to note that initially the schedule slippage decreased with improving relationships, because good relationships allowed individuals to improve their self-efficacy and affective state.

Figure 13. Relationships and Performance

However, after a certain point (more than 50% good relationships), the slippage increased again with better relationships. This is because time is spent in non-work-related interactions, and a few more good relationships do not increase efficacy beyond a point.

5.2.4 Vulnerability
A specific combination of situational and individual aspects can make individuals vulnerable (people can fall sick, seek to quit the project, etc.). In the model, we created a simple combination index of high work pressure (overwork, change of routines), low agreeableness, low learnability (or fewer opportunities to learn), leaves taken and low team cohesion to indicate vulnerability. We ran the simulation through the duration of the project for a team of seven members (TM1 to TM7). Figure 14 indicates that some people are more vulnerable than others, and that vulnerability can change over time. Such insights can allow project managers to plan their limited resources and schedule wisely.

Figure 14. Vulnerability

5.3 Data for the model
While the current model conceptually demonstrates the impact of human aspects on project outcomes, our future versions will be driven by project data held by our organization. An interesting question is what kinds of data can be used. We see data for both model refinement and validation coming from three sources: academic papers and public-domain data sources, data mining on our organizational (project and personnel) databases, and data collection in live projects through surveys, interviews and unobtrusive sensing techniques.

6. CONCLUSION
Transferring behavioral models from the fields of sociology and psychology into computational models has the potential to provide innovative ways of studying and addressing problems. We have demonstrated with a prototype that simulating a software project based on human behavioral modeling can provide additional insights to project managers. We argue and show that project estimates, schedules and plans will be more realistic when human aspects are accounted for. Managers can plan and execute projects better by generating scenario-specific insights. Organizations can use this approach for training managers as well as to share and discuss the likely outcomes transparently with customers.

Finally, we acknowledge that there can be limitations to techniques that are based on numeric correlations between human aspects. However, refinement of such models based on real-world feedback can lead over time to both model improvements and an understanding of the precise limitations of such models.

7. FUTURE WORK
We have presented a human-centric approach to the modeling and planning of software projects. This work is in its early stages and we envisage specific refinements and extensions. First, the rules, i.e., the impact that each aspect has on the others, need fine tuning. We know, for example, that a positive affective state/mood affects work performance positively, but further research is needed to determine exactly how much. Next, simplification can make the model more usable; the key is to identify the few aspects which cause more impact than others. Lastly, validation of the results with real-world data, as discussed in section 5.3, and recalibration will make the model more realistic. We also need to extend the model to more phases and to varied project sizes and types.

Ideas for furthering this work include building a simulation game for project managers to help them learn to navigate project situations, integration with task/resource scheduling algorithms, aid in designing stimulants such as customized rewards and recognition programs, supporting an early warning system for risk management, and possibly developing a model of an entire organization as a human-centric system.
8. REFERENCES
[1] White, Diana, and Joyce Fortune. "Current practice in project management - An empirical study." International Journal of Project Management 20.1 (2002): 1-11.
[2] DeMarco, Tom, and Timothy Lister. Peopleware: Productive Projects and Teams (2nd ed.). Dorset House Publishing Co., 1999.
[3] Cerpa, Narciso, and June M. Verner. "Why did your project fail?" Communications of the ACM 52.12 (2009): 130-134.
[4] The International Function Point Users Group. Function Point Counting Practices Manual - Release 4.3, 2009.
[5] Alterman, Richard, and Roland Zito-Wolf. "Agents, habitats, and routine behavior." IJCAI, 1993.
[6] Bonabeau, Eric. "Agent-based modeling: Methods and techniques for simulating human systems." Proceedings of the National Academy of Sciences of the United States of America 99, Suppl 3 (2002): 7280-7287.
[7] Prietula, Michael J., and Kathleen M. Carley. "Computational organization theory: Autonomous agents and emergent behavior." Journal of Organizational Computing and Electronic Commerce 4.1 (1994): 41-83.
[8] Kellner, Marc I., Raymond J. Madachy, and David M. Raffo. "Software process simulation modeling: why? what? how?" Journal of Systems and Software 46.2 (1999): 91-105.
[9] Joslin, David, and William Poole. "Agent-based simulation for software project planning." Proceedings of the 37th Conference on Winter Simulation. Winter Simulation Conference, 2005.
[10] Spasic, Bojan, and Bhakti S. S. Onggo. "Agent-based simulation of the software development process: a case study at AVL." Proceedings of the 2012 Winter Simulation Conference (WSC). IEEE, 2012.
[11] Abdel-Hamid, Tarek K., and Stuart E. Madnick. "Lessons learned from modeling the dynamics of software development." Communications of the ACM 32.12 (1989): 1426-1438.
[12] Agarwal, Ravikant, and David Umphress. "A flexible model for simulation of software development process." Proceedings of the 48th Annual Southeast Regional Conference. ACM, 2010.
[13] Crespo, Daniel, and Mercedes Ruiz. "Decision making support in CMMI process areas using multiparadigm simulation modeling." Proceedings of the 2012 Winter Simulation Conference (WSC). IEEE, 2012.
[14] Gosenpud, Jerry, and Paul Miesing. "The relative influence of several factors on simulation performance." Simulation & Gaming 23.3 (1992): 311-325.
[15] Wickenberg, Tham, and Paul Davidsson. "On multi agent based simulation of software development processes." Multi-Agent-Based Simulation II. Springer Berlin Heidelberg, 2003. 171-180.
[16] Martínez-Miranda, Juan, et al. "Modelling human behaviour at work using fuzzy logic: The challenge of work teams configuration." 2006.
[17] Pérez-Pinillos, Daniel, Susana Fernández, and Daniel Borrajo. "Modeling Motivations, Personality Traits and Emotional States in Deliberative Agents Based on Automated Planning." Agents and Artificial Intelligence. Springer Berlin Heidelberg, 2013. 146-160.
[18] Pew, Richard W., and Anne S. Mavor, eds. Modeling Human and Organizational Behavior: Application to Military Simulations. National Academies Press, 1998.
[19] Curtis, Bill. "Fifteen years of psychology in software engineering: Individual differences and cognitive science." Proceedings of the 7th International Conference on Software Engineering. IEEE Press, 1984.
[20] Giordano, John C., Paul F. Reynolds Jr, and David C. Brogan. "Exploring the constraints of human behavior representation." Proceedings of the 36th Conference on Winter Simulation. Winter Simulation Conference, 2004.
[21] Hunter, John E. "Cognitive ability, cognitive aptitudes, job knowledge, and job performance." Journal of Vocational Behavior 29.3 (1986): 340-362.
[22] McCrae, Robert R., and Oliver P. John. "An introduction to the five-factor model and its applications." Journal of Personality 60.2 (1992): 175-215.
[23] Barrick, Murray R., and Michael K. Mount. "The big five personality dimensions and job performance: a meta-analysis." Personnel Psychology 44.1 (1991): 1-26.
[24] Schunk, Dale H. "Self-efficacy, motivation, and performance." Journal of Applied Sport Psychology 7.2 (1995): 112-137.
[25] Triandis, Harry. Interpersonal Behavior. Monterey, CA: Brooks/Cole, 1977.
[26] Rothbard, Nancy P., and Steffanie L. Wilk. "Waking up on the right or wrong side of the bed: Start-of-workday mood, work events, employee affect, and performance." Academy of Management Journal 54.5 (2011): 959-980.
[27] Weiss, Howard M., and Russell Cropanzano. "Affective Events Theory: A theoretical discussion of the structure, causes and consequences of affective experiences at work." Research in Organizational Behavior 18 (1996).
[28] Kumar, Kuldeep, Arti Bakhshi, and Ekta Rani. "Linking the Big Five personality domains to organizational citizenship behavior." International Journal of Psychological Studies 1.2 (2009): P73.
[29] Ferguson, D. "Therbligs: the key to simplifying work." The Gilbreth (2000).
[30] CMMI Product Team. "CMMI for Development, Version 1.3: Improving processes for developing better products and services." CMU/SEI-2010-TR-033. Software Engineering Institute, 2010.
[31] Guynes, Jan L. "Impact of system response time on state anxiety." Communications of the ACM 31.3 (1988): 342-347.
[32] Wilensky, Uri. NetLogo (and NetLogo User Manual). Center for Connected Learning and Computer-Based Modeling, Northwestern University, 1999. http://ccl.northwestern.edu/netlogo/
[33] Railsback, Steven F., Steven L. Lytinen, and Stephen K. Jackson. "Agent-based simulation platforms: Review and development recommendations." Simulation 82.9 (2006): 609-623.