Architecting an Evolvable System by Iterative Object-Process Modeling

Ashirul Mubin
The Graduate School, The University of Alabama, Tuscaloosa, AL 35487
[email protected]

Rezwanur Rahman
Department of Aerospace Engineering, The University of Alabama, Tuscaloosa, AL 35487
[email protected]

Daniel Ray
Department of Mathematics & Computer Science, The University of Virginia, Wise, VA 24293
[email protected]
Abstract
The long-term evolutionary needs of a system, that is, its ability to meet new and emerging requirements, often go unnoticed because the complete picture is not visible at analysis time. As a result, many such systems become obsolete over a long period of time: their lifecycles cannot be extended, or re-engineering them into reusable systems that actually meet the new requirements is prohibitively expensive. To overcome these difficulties, we present a methodology for building a wrapper-system based on an iterative Object-Process Modeling scheme. The purpose of the wrapper-system is to coordinate three stages of iteration: first, collecting evolving factors from the system's behavior; second, updating the system state; and third, applying the necessary changes so the system meets the new requirements. Based on our analyses of system usage activity logs and detailed update-request histories of several projects over two to three years, we show that this iterative scheme can be effectively applied to architect evolvable systems with longer life expectancy.
1. Introduction
Due to a lack of adaptability to changing needs within their environment over an extended period of time [4], many traditionally developed legacy systems gradually lose value just as demands increase in terms of service expectations, resource utilization, and competitiveness. The need for evolvable systems is therefore very high [14]; ideally, an evolvable system can extend its lifecycle indefinitely to accommodate new changes and requirements over time. The intent of our work is to construct a wrapper system that generates feedback data on system behavior [12] and detects the need for evolutionary changes, which can then be applied to the system itself through iterative Object-Process Modeling [1].
2. Background
A large software system needs to be able to sense its current state of evolution, if any, as well as related changes in its environment, and provide some indication of the need to restructure itself. However, changing large software systems is extremely costly [2]. It is therefore necessary to develop a methodology that keeps a system alive so that its continuing services meet newly emerging expectations over time. Extensive work on software evolution can be found in [5,11,13], along with attempts to visualize software changes [4,7]. Analyses of historical system-usage and update records play a significant role in the study of the evolution process [7]. Beyond the availability of historical data for capturing evolutionary factors, we also need a methodology that eases the process of upgrading the system. [5] attempts to provide this feature in a limited way by forming a reflective computation system, but it lacks user involvement and flexibility in meta-structure deployment. The need for system upgrades can be deduced, at least partially, from the system itself, with the help of a number of evolutionary factors derived from update-request history, user interactions, usage patterns, paths of control and data flow, meta-data, and so on. We can thereby progressively maintain an evolvable system through an extended lifecycle.
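As a concrete illustration of deducing such factors, the sketch below computes request inter-arrival times and yearly request frequency from a timestamped update-request log. The log format and all names here are our own assumptions for illustration, not part of any system described in this paper.

```python
from datetime import date

# Hypothetical update-request history: (request date, short description).
request_log = [
    (date(2008, 1, 5), "add export to CSV"),
    (date(2008, 2, 10), "support a new user role"),
    (date(2008, 4, 2), "change the approval workflow"),
    (date(2008, 4, 20), "re-rank report features"),
]

def inter_arrival_days(log):
    """Days elapsed between consecutive update requests."""
    dates = sorted(d for d, _ in log)
    return [(later - earlier).days for earlier, later in zip(dates, dates[1:])]

def requests_per_year(log, days_observed):
    """Average request frequency, scaled to a 365-day year."""
    return len(log) * 365.0 / days_observed

gaps = inter_arrival_days(request_log)  # [36, 52, 18]
```

Both quantities are marked "Auto" in table 2: they can be derived entirely from recorded history, with no survey input.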
3. Analysis of system behavior
In traditionally developed systems, a steep decline in user satisfaction is noticeable during the system lifecycle. In the long run this causes a high recurring cost of setting up a new system to replace the outdated one. We use the term User Satisfaction Index (USI) to quantify the level of satisfaction of a system's users and affiliates; it may also quantify the level or quality of the services provided by a system [15], as well as some psychological factors [6]. For our purposes, an estimated USI indicates the level and degree of system value at an instant of time. A system architect has the freedom to define a system-specific USI that most closely quantifies the system's value with respect to its parameters and environment.

Figure 1 illustrates the impact of evolutionary factors on the USI of a traditional system. During the system lifecycle in a changing environment, users face the inception of evolutionary factors at Te1 as newly desired system changes accumulate; this causes a decline in USI, and thereby in the system's value. No steps are taken to incorporate the new requirements into the system, and after a roughly constant interval (Tdi - Tei) the users accept this fact, knowing that the changes will not be addressed as expected, and get used to the lower-valued services. This cycle repeats at random intervals (Tei+1 - Tdi) that depend on the changing expectations and the surrounding environment.

[Figure 1: Declining USI of a traditional system over its lifecycle. The USI falls in steps from USIhigh1 toward the threshold USIth across the inception and decline points Te1, Td1, Te2, Td2, ..., TeN, TdN between Tbegin1 and Tbegin2; no action is taken to revive the system value, so users continue with a lower USI until a new system replacement, at high cost, restores a higher USI (USIhigh2).]

Thus, over an extended period of time, the system's users hit the lower limit USIth of the threshold satisfactory level. At this point, the organization must take action to replace the outdated system so that it remains operational within their business processes. A carefully designed evolvable system, by contrast, should be able to accommodate new changes and emerging features (which grow over time and eventually become essential), retaining its USI within an agreeable range and effectively maintaining operations at a satisfactory level. Such a system continues through an extended lifecycle as long as the organization does not drastically shift its business processes.

Figure 2 details the general behavior of an evolvable system at the inception of evolutionary factors that may cause an evolutionary cycle to begin. As the behavioral depiction shows, such an evolvable system helps re-track the system's USI value. The USI still decreases while the system's users (or other service outcomes) wait for the new requirements to be fulfilled, but it is known that they will be fulfilled soon. Upon upgrading the system with the newly emerged requirements, the USI value is restored, maintaining overall stability and coherence. Depending on the unpredictable outcomes of incorporating new changes dynamically [5], however, the new USI value may be greater than, lower than, or equal to its previous value.

[Figure 2: A higher USI level is maintained in an evolvable system. At each inception of evolutionary factors (Te1, Te2, ..., TeN) the USI dips, and after the iteration it is restored (at Td1, Td2, ..., TdN), oscillating between roughly USIhigh and USIavg while staying above USIth from Tbegin onward.]
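Since the exact USI definition is left to the system architect, the sketch below shows one possible system-specific USI: a weighted average of a few quantifiable components on a 0-10 scale. The component names and weights are illustrative assumptions, not the definition used in this paper.

```python
# One possible system-specific USI. Component names and weights are
# assumptions made for illustration; each component score is on a 0-10 scale.
USI_WEIGHTS = {"service_quality": 0.4, "responsiveness": 0.3, "survey_score": 0.3}

def estimate_usi(scores, weights=USI_WEIGHTS):
    """Weighted average of component scores, yielding a USI in [0, 10]."""
    total_weight = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total_weight

snapshot = {"service_quality": 8.0, "responsiveness": 6.0, "survey_score": 7.0}
usi_now = estimate_usi(snapshot)  # 0.4*8 + 0.3*6 + 0.3*7 = 7.1
```

An architect would swap in whatever components best reflect the system's value, then track the estimate over time against the threshold USIth.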
By further analyzing the traditional and evolvable system behaviors described in figures 1 and 2, we can deduce relevant metrics from the evolutionary factors that will guide architects in assessing the inception of an evolutionary cycle. Table 1 summarizes the general properties of USI for both types of systems; the comparative USI analysis quantitatively distinguishes the two types of system behavior.
Table 1. USI-related metrics for analyzing system behavior

Metric                                | Traditional System                                       | Evolvable System
Duration of diminishing USI           | Tdi - Tei = constant                                     | Tdi - Tei = random variable
Duration of constant USI              | Tei+1 - Tdi = random variable                            | Tei+1 - Tdi = random variable
Average USI (during system lifecycle) | (USIhigh + USIth)/2                                      | weighted average of USITdi, within agreeable limit
Threshold USIth                       | eventually reached by the end of the system lifecycle    | |USIei - USIdi| within agreeable limit; always maintained
Restored USI                          | USITdi > USITdi+1 (each restoration lower than the last) | |USITei - USITdi| within agreeable limit
Span of USI                           | USIhigh - USIth                                          | USITdi(high) - USITdj(low)
Similar to the system-specific definition of USI, the evolutionary factors may also differ from one system to another. However, table 2 presents a general list of some vital factors, their impact on the evolutionary cycles, and how their feedback data are collected. These evolutionary factors (or a subset, depending on the system) need to be considered [5] in the evolution policy analyzer (figure 3) so that it may provide a decision [2] on whether to initiate an evolution phase.

Table 2. Evolutionary factors

Factor                                           | Impact on the evolutionary cycle                 | Feedback collection
Inter-arrival time of change/update requests     | plays a major role in predicting the next cycle  | Auto
Frequency of update/change requests              | helps formulate probability distribution models  | Auto
Wait time until a request is processed           | a psychological factor that influences USI       | Manual
Feature ranking of objects                       | affects the need for new related features        | Manual
Feature ranking of processes                     | affects the need for new related sub-processes   | Manual
Data flow of objects through probing points      | assesses the workflow contents                   | Auto
Control flow of processes through probing points | helps re-evaluate workflow paths                 | Auto
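As a toy illustration of how an evolution policy analyzer might combine such factors, the rule below triggers a cycle when the USI approaches its threshold while requests arrive faster than their historical mean inter-arrival time. The rule, its 10% margin, and all names are our assumptions, not the analyzer actually built in this work.

```python
def should_start_evolution_cycle(usi_now, usi_threshold, mean_gap_days,
                                 days_since_last_request):
    """Toy decision rule: begin a cycle when the USI is within 10% of its
    threshold AND update requests are arriving faster than their historical
    mean inter-arrival time. Both conditions and the margin are assumptions."""
    usi_pressure = usi_now <= usi_threshold * 1.1
    arrival_pressure = days_since_last_request < mean_gap_days
    return usi_pressure and arrival_pressure

decision = should_start_evolution_cycle(5.2, 5.0, 40, 25)  # True: both hold
```

A real analyzer would replace these fixed thresholds with the statistical models of section 4.2.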
4. Architecture of an evolvable system
Schematically, we can draw an overview of an architecture that wraps around a system to collect its behavior and provide a revised system state (entities and their attributes). In other words, an evolvable system can be organized into three basic stages: capture system behavior, update the system state, and apply new changes to the system. Figure 3 explains this in detail.
[Figure 3 shows the wrapper architecture: probing points inside the system boundary of the evolvable system feed a probing station, which (1) captures system behavior and sends it as feedback to the evolution policy analyzer; the analyzer, together with the system meta-data and system meta-model, (2) updates the system state; finally, (3) new changes are applied back to the evolvable system, adjusting the probing points as needed.]

Figure 3. Three basic stages of iteration
4.1. Capturing system behavior
The first step is to carefully identify system and environmental parameters and deduce relevant metrics that indicate the inception of an evolutionary phase for the existing system state. The types of parameters vary widely from one system to another, and with different workflows. A data-measurement profile for each parameter/USI-metric pair can help place the probing points in the system workflow for collecting behavioral data. In other words, the system can be instrumented programmatically, from outside the system boundary, to serve this purpose. This helps the system automatically generate valuable knowledge [5], such as workflow activity patterns, change-request history, request inter-arrival times, feature rankings, and so on. These factors, collected from the probing points, are combined in the probing station along with manually gathered survey data (if applicable) to prepare the "system behavior" at that instant of time, as feedback for the evolution policy analyzer in the next stage of the iteration, described in section 4.2. Upon inception of an evolution cycle, the probing points can be readjusted if necessary.
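Probing points can be instrumented programmatically in many ways; one minimal sketch, assuming a Python workflow and entirely hypothetical names, wraps each workflow step so that every invocation is recorded as feedback for the probing station.

```python
import time
from functools import wraps

# Probing-station log: accumulated behavioral events. The decorator style
# and all names here are illustrative assumptions, not the paper's code.
probe_log = []

def probing_point(name):
    """Wrap a workflow step so each invocation is recorded as feedback."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            probe_log.append({"probe": name, "time": time.time()})
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@probing_point("submit_request")
def submit_request(req):
    """A hypothetical workflow step inside the system boundary."""
    return "accepted:" + req

submit_request("change-42")  # probe_log now holds one recorded event
```

Because the decorator sits outside the step's own logic, the probes can be added, removed, or readjusted at the start of each evolution cycle without touching the workflow code itself.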
4.2. Updating system state
Given that we have already generated system meta-data and a meta-model during the development process, we can now apply suitable analytical techniques and statistical models [9,10] within the system context (i.e., the evolutionary factors described in table 2) to build a supporting subsystem, the "Evolution Policy Analyzer," which produces Decision Support Data (DSD) from the current system behavior (the feedback from the probing station) and the system state (the meta-data and meta-model) for the next iteration. Our preliminary analysis scheme is based on Kernel Density Estimation [10], a non-parametric technique for estimating a probability density function. As described in section 6, this analysis helps model the inter-arrival time (Tei+1 - Tei) of evolutionary-factor inceptions as a renewal process. A more detailed and rigorous analysis with hypothesis testing is yet to be incorporated into the analyzer for further refinement. By combining relevant evolutionary factors with the inter-arrival time, the analyzer will be able to predict an evolutionary phase more accurately. Based on the analyzer's results, it triggers a signal for updating the system state (objects and processes), which is framed by OPM [1], as discussed in section 5. The updated system state, in turn, directs the areas of system upgrades and changes.
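The kernel density estimation step can be sketched in a few lines. The sample inter-arrival values and function names below are illustrative, with the bandwidth chosen by Silverman's rule of thumb [10].

```python
import math
import statistics

def silverman_bandwidth(xs):
    """Bandwidth by Silverman's rule of thumb for a Gaussian kernel [10]."""
    return 1.06 * statistics.stdev(xs) * len(xs) ** (-1 / 5)

def kde(xs, x, h=None):
    """Non-parametric estimate of the density of the sample xs at point x,
    using a Gaussian kernel of bandwidth h."""
    if h is None:
        h = silverman_bandwidth(xs)
    norm = h * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) / norm for xi in xs) / len(xs)

# Illustrative inter-arrival times (days) between update requests.
gaps = [33, 25, 81, 54, 31, 47, 30]
density_at_30 = kde(gaps, 30.0)  # estimated density near a 30-day gap
```

Evaluating the estimate over a grid of candidate gaps gives the analyzer an empirical inter-arrival distribution for the renewal-process model, without assuming any parametric family.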
4.3. Applying new changes
Based on the analyzer's response, we apply a re-engineering process to the existing system model and derive newly revised objects, processes, and their interconnections. If necessary, we also adjust the probing points in the updated workflow, keeping live probes attached at appropriate places so that data collection continues for the next cycles. A number of tools [4,5] are already available for this purpose. These three basic stages iterate through an evolutionary cycle at the inception of evolving factors; if the system needs to evolve (based on the decision from the evolution policy analyzer), appropriate upgrade measures are taken from the newly updated system state (specification). These short, incremental upgrades follow recent trends in service-oriented development (as compared with routine computer hardware upgrades or help-desk services).
5. Modeling an evolvable system
We used a framework for conceptual representation of systems based on Object-Process Methodology (OPM). The benefit of this template-based approach is that it enables a rigorous what-how decomposition that is both theoretically sound and practically applicable as a system-architecting guideline, which helps construct a system-lifecycle support methodology [1,2]. OPM is a relatively new systems-engineering approach that recognizes the duality of objects and processes in describing almost any sort of problem. It uses three entities as building blocks: objects, processes, and attributes. With these building blocks and a pre-defined set of connectors, it can model a highly integrated representation of a system, built-in interaction patterns, and support tools that combine graphical and textual interfaces [3].

Figure 4: A basic high-level wrapper model for an evolvable system, drawn in OPCAT [1]

For ease of managing and updating system models in OPM, we adopted OPCAT [1] as an effective tool. Figure 4 presents a basic high-level wrapper model, as described in section 4. The actual model, with its recurring modeling capabilities, becomes very large and complex, yet remains easy to manage and browse through system objects and processes, reshaped into workable entities for both base-level and meta-level modeling. From OPM models to OPDs and OPL, the tool is also partly used to generate interacting code.
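OPM's building blocks (objects, processes, and attributes, joined by connectors) can be mirrored in a minimal data structure. The sketch below is an illustrative assumption of such a representation, not OPCAT's actual schema or its generated code.

```python
from dataclasses import dataclass, field

# Minimal stand-ins for OPM's building blocks (assumed names, not OPCAT's).
@dataclass
class OPMObject:
    name: str
    attributes: dict = field(default_factory=dict)

@dataclass
class OPMProcess:
    name: str
    consumes: list = field(default_factory=list)  # names of input objects
    produces: list = field(default_factory=list)  # names of output objects

# A fragment of the wrapper model of figure 3, expressed with these blocks.
feedback = OPMObject("SystemBehaviorFeedback", {"source": "probing station"})
analyze = OPMProcess("EvolutionPolicyAnalysis",
                     consumes=["SystemBehaviorFeedback", "SystemMetaData"],
                     produces=["DecisionSupportData"])
```

Keeping the model as plain data is what makes the meta-level modeling tractable: the wrapper can inspect and revise objects and processes programmatically between evolution cycles.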
6. General observations from empirical data
From system development records, update/change history, and regular system-usage activity logs, we developed an empirical data set that supports the proposed architectural model for an evolvable system. We present the outcome of applying the methodology to several web-based projects (systems) of a large integrated system over a few years, namely ADMIN-ROLES, APP-TRACK, GTA-WS, PVD, and ITAP. For comparison, we also provide statistics for traditionally developed older systems, namely GRAD-APP and TEST-SCORES. To quantify our USI values, we used the significance of each new update request, the system context at its inception time, the promptness of addressing the requested changes, and brief surveys. Table 3 provides the compiled data set, with averages calculated from a few years' activity logs and update history.
Table 3. Analysis of evolutionary factors for some selected sub-systems

Project/System | Avg. #change requests/year | Avg. USI change at request time | Avg. request inter-arrival time (days) | Avg. wait time (days) | Avg. USI change after upgrade

Traditional systems:
GRAD-APP    |  4.286 | -4.066 | 76 |  - | -
TEST-SCORES |  3.466 | -3.923 | 85 |  - | -

Projects with wrapper sub-systems:
ADMIN-ROLES | 14.181 | -1.846 | 33 | 99 | 4.054
APP-TRACK   | 14.285 | -1.360 | 25 | 47 | 3.000
GTA-WS      |  4.923 | -1.625 | 81 | 85 | 3.583
PVD         |  7.143 | -1.454 | 54 | 13 | 3.000
ITAP        | 12.444 | -1.678 | 31 | 30 | 3.762
For the traditional older systems, GRAD-APP and TEST-SCORES received beneficial update requests that could not easily be addressed; note their comparatively larger drops in USI. Upon reaching USIth, as described in figure 1, both systems had to be rebuilt. The newer systems, in contrast, show smaller drops in USI because the needed upgrades were carried out, as delineated in figure 2. To identify the inception of an evolution cycle (renewal process), we used two key factors in our data analyses: request inter-arrival times and the frequency of update requests (which influences the evolution), based on the corresponding USI values. By appropriately grouping [9] the inter-arrival update requests, we projected that the number of evolutionary cycles requiring system updates (described in section 4.3) was 3, 1, 2, 3, and 2, respectively, for the evolvable projects listed in order in table 3. Thus, the new methodology arguably demonstrates the benefit of maintaining a higher satisfaction level (USI) while remaining open to accommodating future changes (to an agreeable extent) over time.
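One simple way to group inter-arrival update requests into candidate evolutionary cycles, shown below as a stand-in for the grouping techniques of [9] with an assumed cutoff parameter, is to start a new group whenever the gap since the previous request exceeds a cutoff.

```python
def group_requests(arrival_days, gap_cutoff):
    """Split sorted request arrival times (days since a start date) into
    groups: a new group begins whenever the gap to the previous request
    exceeds the cutoff. Each group is one candidate evolutionary cycle."""
    groups = [[arrival_days[0]]]
    for prev, cur in zip(arrival_days, arrival_days[1:]):
        if cur - prev > gap_cutoff:
            groups.append([cur])
        else:
            groups[-1].append(cur)
    return groups

# Illustrative arrivals: three bursts of requests -> three candidate cycles.
arrivals = [0, 10, 18, 95, 101, 180]
cycles = group_requests(arrivals, gap_cutoff=30)  # 3 groups
```

The cutoff here is a free parameter; in practice it would be informed by the estimated inter-arrival density from section 4.2 rather than fixed by hand.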
7. Conclusion
Significant benefits can be obtained from an evolvable system with an iteratively integrated meta-structure: better and more timely services, clearer insight into the system, and increased control over maintaining and upgrading it. In this era of virtualization, service-oriented solutions, internet-based rich applications, and cloud computing, system architects and developers now have much wider freedom to develop evolvable systems. However, an automated way to update system objects, processes, and code needs further investigation. An in-depth study of the psychological impact on USI of each system upgrade would also help refine the related metrics and identify effective evolutionary factors. Applying further statistical analyses to the accumulated historical data will enable more accurate prediction of evolutionary cycles.
8. References
[1] Dov Dori, Object-Process Methodology, Springer, 2002. http://objectprocess.org/
[2] D. Coleman, D. Ash, B. Lowther, P. Oman, Using Metrics to Evaluate Software System Maintainability, IEEE Computer, August 1994.
[3] Hong Liu, David P. Gluch, Conceptual Modeling with the Object-Process Methodology in Software Architecture, CCSC Southeastern Conference, 2003.
[4] Qiang Tu, Michael W. Godfrey, An Integrated Approach for Studying Architectural Evolution, Proceedings of the IWPC, 2002.
[5] Stephen Rank, A Reflective Architecture to Support Dynamic Software Evolution, Ph.D. Thesis, Department of Computer Science, University of Durham, 2002.
[6] D. Svetinovic, M. Godfrey, Attribute-Based Software Evolution: Patterns and Product Line Forecasting, ACM ICSE '02, Buenos Aires, Argentina.
[7] J.F. Ramil, M.M. Lehman, Challenges Facing Data Collection for Support and Study of Software Evolution Processes, ICSE Workshop, Los Angeles, May 18, 1999.
[8] Stephen Cook, He Ji, Rachel Harrison, Dynamic and Static Views of Software Evolution, University of Reading, UK.
[9] Richard O. Duda, Peter E. Hart, David G. Stork, Pattern Classification, 2nd Ed., Wiley, 2001.
[10] B.W. Silverman, Density Estimation for Statistics and Data Analysis, CRC Press, 1986.
[11] M.M. Lehman, Programs, Life Cycles, and Laws of Software Evolution, Proc. IEEE, 68:1060-1076, 1980.
[12] M.M. Lehman, Feedback in the Software Evolution Process, Imperial College, London SW7 2BZ.
[13] K. Bennett, V. Rajlich, Software Maintenance and Evolution: A Roadmap, Future of Software Engineering, Ireland, 2000.
[14] M.M. Lehman, Software's Future: Managing Evolution, IEEE Software, January-February 1998.
[15] K. Chen, C. Huang, P. Huang, C. Lei, Quantifying Skype User Satisfaction, ACM SIGCOMM '06, Pisa, Italy.