
Modularization of a Large-Scale Business Application: A Case Study

Santonu Sarkar, Accenture Technology Labs
Shubha Ramachandran, G. Sathish Kumar, Madhu K. Iyengar, K. Rangarajan, and Saravanan Sivagnanam, Infosys Technologies

This case study describes the modularization approach that one company adopted to reengineer a monolithic banking application beset with maintenance and complexity problems.


In industries such as banking, retail, transportation, and telecommunications, large software systems support numerous work processes and develop over many years. Throughout their evolution, such systems are subject to repeated debugging and feature enhancements. Consequently, they gradually deviate from the intended architecture and deteriorate into unmanageable monoliths. To contend with this, practitioners often rewrite the entire application in a new technology or invest considerable time in documenting the code and training new engineers to work on it.

However, for very large systems, such approaches are typically impossible to carry out. As an alternative, researchers have proposed several tools to automatically modularize software that has grown inadequate in both quality and scalability. The modularization approach segregates the code base into domain modules, identifies well-defined interfaces to these modules, and restricts intermodule interactions to these interfaces. We adopted a modularization approach to contend with our own unmanageable monolith: a banking application that was developed in the late ’90s to serve a single bank’s requirements but had expanded to power more than 100 large installations across more than 50 countries. The application—which grew from 2.5 million to 25 million LOC (MLOC)—has endured more than 10 mainline releases and is supported by several hundred engineers. Although the original application was


programmed in C, developers had implemented subsequent functional enhancements in several different languages, including Java, JavaScript, and JavaServer Pages. The application’s rapid growth significantly increased both its essential and accidental complexities.1 As the law of increasing complexity predicts,2 we faced considerable maintenance problems as a result. In this case study, we describe the modularization approach we adopted to address this situation, as well as certain other benefits we unearthed as a result of this reengineering exercise.

Problem Analysis

We observed the following maintenance problems with the banking application:

■■ The application learning time for new employees was continuously growing, more than


doubling—from three months to almost seven months—over the past five years.
■■ Despite code reviews and testing, bug fixes invariably introduced new problems due to incomplete impact analysis.
■■ Product extendability for even seemingly simple features was taking an inordinately long time.
■■ Even when we enhanced only one functional area, we had to test the entire application before the release. Consequently, partial deployment was often impossible.

To alleviate these problems, the development and maintenance team tried a few off-the-shelf code analysis tools but found that they were neither scalable nor useful in analyzing the problems quantitatively. Infosys then formed a solution task force to perform root-cause analysis. The team comprised 20 collocated senior employees from the company’s product architecture, development, testing, management, and applied-research groups; on average, team members had 10 years’ experience in application development and maintenance. Their analysis uncovered five root causes:

■■ RC1. Over time, developers had created shared libraries as assorted collections of business functions, irrespective of their domain modules. Almost 60 percent of the shared library code had this problem.
■■ RC2. Developers had mixed presentation and business logic in the code; presentation logic (about 2 to 3 percent of the code) was spread across the entire code base.
■■ RC3. Lower-granularity functions, such as date validation, existed in the same library—and sometimes in the same source-code file—as complex domain functions, such as interest calculation. That is, the system didn’t have a layered architecture.
■■ RC4. Code wasn’t organized by functional domain, such as interest calculation, loan, or user account. One directory, named “sources,” had more than 13,000 C files and 1,500 user-interface-related files.
■■ RC5. Functional modules with clearly defined module APIs were nonexistent. A quick manual inspection of a part of the application showed that 5 percent of potential API functions were duplicates.

Prior to adopting our solution, management had aimed for operational efficiencies by offering product-specific training to newcomers, creating extensive documentation, using pair programming, and so on. However, as Meir M. Lehman and L. Belady observed, such process-centric strategies are limited in their ability to control the increase in structural complexity.2 The solution task force therefore proposed a long-term solution: restructure the entire system by creating a set of independently compilable, testable, deployable, and releasable domain modules, while keeping the same programming language and platform (see Table 1).

Table 1. Mapping solution strategies to root causes.

S1: Identify coarse-grained domain modules such that each has its own life cycle—namely, build-test-deploy—and can be maintained by respective module owners/teams. (Addresses RC1, RC4, RC5.)
S2: For each important module, create separate, independently loadable libraries and executables. (Addresses RC4, RC5.)
S3: Enforce a structured intermodule communication protocol so that developers can implement modules in isolation (as far as possible). (Addresses RC1, RC5.)
S4: Publish a set of fine-grained, reusable common functionalities to form the system’s infrastructure layer. (Addresses RC2, RC3.)
S5: Identify and organize base libraries and tables and make them stable (that is, not subject to frequent changes). (Addresses RC1, RC3.)

Solution Overview

Research suggests that developers can organize a large body of software into a set of modules, each of which

■■ consists of cohesive functions, files, and data structures that together implement a functionality; and
■■ operates relatively independently of other modules.


According to David L. Parnas, a module should capture and encapsulate a set of design decisions (implementation details) that is hidden from other modules.3 Furthermore, he says, the interaction among modules should occur primarily through module interfaces, such as each module’s API. Researchers now widely accept that a large body of software’s overall quality is enhanced when its modules are loosely coupled, with interactions restricted to the modules’ published APIs. On the basis of studies of several legacy systems, Magnus Ramage and Keith Bennett,4 as well as Jesús Bisbal,5 indicate a positive correlation between software maintainability and the extent of the software’s modularization.

Modular Design Guidelines

The solution task force recommended a set of design guidelines to define modules, module APIs, module interactions (derived from Parnas3), and a layered system organization. The software community at large has also treated these guidelines—which appear intuitively plausible—as good design practices.

Module Creation

A good module should provide a set of services related to a specific purpose. Given this, the task force defined an overall guideline to identify modules on the basis of domain concepts such as interest calculation, loans, accounts, and calendars. The task force also suggested two additional guidelines. First, developers shouldn’t create a module consisting of logically unrelated services, such as date format conversion, credit-card issue, and user profile creation. Such a module would eventually have high coupling with other modules and might be frequently impacted by seemingly unrelated changes. The existence of such modules would also hamper deployment and porting of any one functional module, such as porting only credit-card management functionality to another platform. Second, developers should limit the sharing of data structures and function definitions across modules so that they can build and test modules more independently.
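In C terms, the second guideline maps naturally onto the opaque-pointer idiom. The sketch below is ours, not code from the banking product, and every name in it (Loan, loan_open, and so on) is hypothetical; it simply shows how a module can expose behavior through functions while keeping its data layout private to its own translation unit:

```c
/* --- loan.h (hypothetical provided header of a loan module) ---
   Client modules see only an opaque handle, never the struct layout. */
typedef struct Loan Loan;               /* incomplete type: fields hidden */

Loan  *loan_open(const char *account_id, double principal);
double loan_outstanding(const Loan *loan);
void   loan_close(Loan *loan);

/* --- loan.c (inside the loan module, the full definition is visible) --- */
#include <stdlib.h>
#include <string.h>

struct Loan {                           /* layout can change without     */
    char   account_id[32];              /* recompiling client modules    */
    double principal;
    double repaid;
};

Loan *loan_open(const char *account_id, double principal) {
    Loan *l = calloc(1, sizeof *l);
    if (!l) return NULL;
    strncpy(l->account_id, account_id, sizeof l->account_id - 1);
    l->principal = principal;
    return l;
}

double loan_outstanding(const Loan *loan) {
    return loan->principal - loan->repaid;
}

void loan_close(Loan *loan) { free(loan); }
```

Because clients hold only a Loan pointer, the loan module can reorganize its internal data without forcing other modules to rebuild—exactly the independent build-and-test property the guideline aims for.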

Intermodule Interaction

The solution task force recommended that each domain module define two types of interfaces: the provided API (PI) and the required API (RI). Both UML 2.0 and the rich body of literature on software architecture description languages6 use the PI and RI notions to define component interfaces and interactions. According to both, a PI declares the services that the module implements, whereas an RI declares the set of services that the module needs (from other modules) to implement its own intended functionality. Whenever a module, m1, needs to invoke a service offered by m2, the task force recommended that developers create a logical connection—rather than a direct reference—between the declared RI of m1 and the declared PI of m2. This indirection helps developers reduce compile-time dependencies among modules and eases configuration management.
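To make the PI/RI pairing concrete, here is a minimal sketch in C under our own naming assumptions (pi_tfin.h, ri_loan.h, and both function names are invented): the loan module compiles only against its required-interface declaration, and the RI-PI infrastructure supplies the binding to trade finance’s provided function later.

```c
/* pi_tfin.h -- provided API (PI) declared by trade finance (m2).
   Hypothetical name and signature. */
int pi_tfin_issue_guarantee(const char *account_id, double amount);

/* ri_loan.h -- required API (RI) declared by the loan module (m1).
   The loan module codes against this declaration and never
   includes pi_tfin.h directly. */
int ri_loan_issue_guarantee(const char *account_id, double amount);

/* A logical connection, configured in the RI-PI infrastructure rather
   than hard-wired at compile time, later routes
   ri_loan_issue_guarantee() calls to pi_tfin_issue_guarantee(). */
```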

Layered Module Organization

For a system this complex, it’s neither desirable nor feasible to keep all functions at the same utility level. At minimum, we must organize the system into a layered architecture, where each layer, comprising a set of modules, has a specific responsibility. During analysis, the solution team observed that developers couldn’t assign all the source code to domain modules because some functions actually provided infrastructure utilities that the domain modules used. These functions are more stable than those that handle domain-specific logic. Hence, we clearly needed to introduce a layered architecture during modularization. Although layering provides a logical organization of modules, the benefits are realized only when developers apply certain rules to the layers. The task force therefore specified the following high-level guidelines for the layered architecture:

■■ Modules residing in the same layer can communicate with each other through their RI-PI infrastructure.


■■ Modules residing in layers at upper levels can communicate directly with the layers below.
■■ Modules residing in layers at lower levels should not communicate with modules in higher layers.

Having established our design guidelines, we then used them to guide our modularization efforts.

The Modularization Approach

We used the task force guidelines to classify the entire code base into a set of business domains (such as trade finance, loan, and term deposit). To control the impact of the change, we extracted one domain at a time. Figure 1 shows the modularization process for extracting a domain’s modules. Figure 2 shows the system’s modularized architecture—which includes a domain module, a submodule, a domain-module-level API, and layering—after the first modularization phase. As the figure shows, the solution team decided to create three layers, as well as sublayers within the business layer.

[Figure 1. A process flowchart of the key modularization tasks: subject matter experts (SMEs) provide a list of probable modules and of each module’s probable functional services; all source code is assigned to modules based on rules (which involves file splitting/rewriting); intermodule calls are identified; candidate functions that would eventually form the provided API are identified and refactored; any intermodule call that doesn’t go through a candidate API triggers further code refactoring (which might introduce new functions and dependencies); finally, the required and provided APIs are created from the candidate functions. The annotation associated with each task indicates the extent to which it has been automated—completely automated, partially automated, or completely manual—using modularization assistance tools.]

Module Creation

For each domain, we identified every relevant domain-specific business operation. We then marked each operation’s related code artifacts (such as .c, .cpp, .h, .hxx, .cxx, .java, and .html files), along with its generated files, make files, and so on, to create a domain module using three heuristics: tables, naming conventions, and functionality.

We manually associated the product’s database


tables to the domain modules on the basis of domain experts’ inputs. For the loan domain, for example, we identified all tables related to loans and then used their information to classify code. In several cases, we used domain-related naming conventions for function and source-file names. For example, a file with a name like loanXXX.cxx or laXXXXX.cxx would typically have loan-related functionality. These heuristics offered a quick way to broadly classify a domain’s files. This was particularly important when there were tens of thousands of files. However, some domains—such as loan—are particularly complex and have many business operations. For an operation such as creating a loan repayment schedule, for example, we had to identify the set of tables responsible for the repayment schedule. We identified the operation’s code on the basis of the related database tables and the function call graph of access to these tables. We decomposed a domain module’s artifacts into submodules—such as securitization and corporate loans in the loan domain—on the basis of business operation complexity and organized them into an appropriate directory-subdirectory structure.
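The naming-convention heuristic is easy to automate. The following sketch is our illustration, not the team’s actual tool; the loan/la prefixes come from the examples above, while the other prefixes and module names are invented:

```c
#include <stdio.h>
#include <string.h>

/* Map file-name prefixes to candidate domain modules.
   "loan" and "la" come from the examples in the text (loanXXX.cxx,
   laXXXXX.cxx); the remaining entries are invented placeholders. */
static const struct { const char *prefix; const char *module; } rules[] = {
    { "loan", "loan" },
    { "la",   "loan" },
    { "tfin", "trade_finance" },
    { "td",   "term_deposit" },
};

static const char *classify(const char *filename) {
    /* Longest matching prefix wins, so "loan" beats "la". */
    const char *best = "UNCLASSIFIED";
    size_t best_len = 0;
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++) {
        size_t n = strlen(rules[i].prefix);
        if (n > best_len && strncmp(filename, rules[i].prefix, n) == 0) {
            best = rules[i].module;
            best_len = n;
        }
    }
    return best;
}

int main(void) {
    const char *samples[] = { "loanSched.cxx", "laRepay01.cxx", "misc.c" };
    for (size_t i = 0; i < 3; i++)
        printf("%-14s -> %s\n", samples[i], classify(samples[i]));
    return 0;
}
```

Files that no rule matches would fall through to the slower table-driven and call-graph analysis described above.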

[Figure 2. The three-layer modularized architecture. The driver layer (3 percent of the system) contains modules that offer entry points into the application. The business layer (64 percent)—holding domain modules such as loan and trade finance, each exposing a provided API (PI) and a required API (RI) at its module boundary—is divided into three domain sublayers. The base layer (33 percent) consists of modules that provide infrastructure support, such as database access and general-purpose utilities.]

Intermodule Interaction and the Layered Architecture

Once we’d classified the source code into domain modules, we identified the PI for each module as follows. First, using tools we describe later, we identified all the intermodule calls and the functions that receive them. With the help of domain experts, we classified a function as PI if it

■■ supports domain-specific queries by fetching domain data,
■■ validates domain data, and
■■ supports domain-specific processing.

Identifying PI revealed cases in which functions that were supposed to be internal to a module were actually exposed to other modules. It also revealed cases in which functionality had been placed in the wrong module, resulting in unnecessary coupling. In one loan-related module, for example, we found that, as the product evolved, a particular kind of date-formatting functionality had become accessible to calls from other domain modules. Obviously, such a functionality shouldn’t exist in a loan-related module. As a part of modularization, we removed this date-formatting functionality from the loan-related module and put it into a date module (with an appropriate API) in the lower architectural layer.

After we’d identified a domain module’s PI, we identified a set of RI functions. If, for example, a loan submodule, loan1, needs to call the API function of a trade finance submodule, tfin1, we create an RI-infrastructure library, RI_loans, and a PI-infrastructure library, PI_tfin. The loan1 submodule then calls a function in RI_loans, which in turn dynamically loads a function in PI_tfin, which calls the required tfin1 function. For implementation simplicity, we strictly adhered to this RI-PI framework only for interactions across domain modules. Thus, when a submodule in the loan domain needs to call another submodule in trade finance, the call must go through the RI-PI infrastructure. We didn’t strictly enforce the RI-PI infrastructure for interactions within a domain module; as Figure 2 shows, these go through PI functions whenever it’s easy and meaningful to extract them.
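The article doesn’t name the loading mechanism, but on a POSIX platform the behavior it describes—an RI-infrastructure function that dynamically loads and forwards to the provider’s PI library—could look roughly like this sketch. The library name libpi_tfin.so and the symbol pi_tfin_issue_guarantee are our assumptions:

```c
/* ri_loan.c -- hypothetical RI-infrastructure stub for the loan module.
   loan1 calls ri_loan_issue_guarantee(); this stub loads the
   trade-finance PI library on first use and forwards the call. */
#include <dlfcn.h>
#include <stdio.h>

typedef int (*tfin_issue_fn)(const char *account_id, double amount);

int ri_loan_issue_guarantee(const char *account_id, double amount) {
    static tfin_issue_fn target = NULL;
    if (!target) {
        void *pi_lib = dlopen("libpi_tfin.so", RTLD_LAZY);  /* PI library */
        if (!pi_lib) {
            fprintf(stderr, "RI-PI load failure: %s\n", dlerror());
            return -1;
        }
        target = (tfin_issue_fn)dlsym(pi_lib, "pi_tfin_issue_guarantee");
        if (!target) {
            fprintf(stderr, "RI-PI bind failure: %s\n", dlerror());
            return -1;
        }
    }
    return target(account_id, amount);  /* forward to the PI function */
}
```

A stub like this also suggests why such code is a good target for the code generation tools described next: its shape is identical for every RI function, varying only in names and signatures.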

Modularization Assistance Tools

The task force recommended three home-grown tools to automate a few laborious and repetitive tasks during modularization. Figure 1 illustrates the tasks that we partially automated using these tools.

Function-dependency analysis. To analyze intermodule calls and identify API functions, we must create a repository of all function-call dependency information. We built our home-grown toolset to extract function-call information from the source code. Function pointers make this dependency extraction challenging. Ideally, we can accurately resolve call dependencies through function pointers


during runtime analysis; during static analysis, we can only approximate such dependencies. Fortunately, in our case, the function pointer definitions follow a specific naming convention, and we could thus easily resolve such dependencies through lexicographic analysis.

Cross-reference tools. We built several cross-reference tools to extract relationships among source-code artifacts (files, functions, and variables) and the underlying database tables and business operations. These tools helped us classify source-code artifacts into the various domain modules and design intermodule calls in the RI-PI framework.

Code generation tools. Our solution team felt that a significant part of the code that goes into the RI-PI infrastructure could be generated. We therefore created a code generation toolset, mostly based on scripting languages, to generate those parts.

Complex Scenarios

Several complex scenarios required us to significantly and manually modify the existing source code, even after we created a reasonable set of domain modules.

First, many modules were strongly coupled through global variables. In such cases, subject matter experts had to analyze the associated business operation and global-variable usage to reduce this coupling. In several cases, modules used only a global variable’s definition (rather than its value), yet the definition was cloned in every module even though only one module accessed it. When modules used global variables to pass data, we encapsulated the global variables in appropriate module API functions to ensure that data was passed only through API function parameters (a sketch of this refactoring appears at the end of this section).

The second scenario related to a single function containing the logic of multiple domains. To deal with this, we restructured the function as follows. When a function, such as open_account(), contained business logic pertaining to different domains—such as opening a deposit account and opening a loan account—we split the function into open_deposit_account and open_loan_account and assigned them to the respective domains. When domain logic was intertwined with a utility functionality (such as audit trail or calendar manipulation), we moved the utility functionality to functions in the lower utility layer.

Third, generic business operations such as interest calculation are applicable to multiple domains, such as loan and trade finance. Consequently, functions implementing interest calculation contained all the applicable domain-specific nuances. We split these generic operations into multiple domain-specific operations. When such operations were associated with the user interface, we split the user interface into several domain-specific parts and a common part. We then moved the common part to a lower utility layer.

Fourth, during restructuring, we unearthed a complex intertwining of business logic and external-environment integration logic. For example, to decide the output delivery mechanism, the loan domain’s interest calculation logic checked whether the interest calculation had been invoked from the batch environment or online. This observation prompted the team to introduce a driver layer in the architecture (see Figure 2). We then restructured the interest calculation functionality and delegated infrastructure-related tasks (online/batch handling, user authentication) to the new modules in the driver layer.

Finally, the premodularized product extended core functionality through a home-grown “customization infrastructure” that was similar to a plug-in architecture. However, the infrastructure was never implemented with a modular design in mind. Consequently, we had to load the entire product at runtime, along with the plug-in component, every time an interaction used the customization infrastructure. This led to high runtime memory requirements. During modularization, we created a special “API factory” module to intercept all external calls from a plug-in and load the appropriate module—instead of the entire application—in memory.
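For the first scenario, the before-and-after shape of the global-variable refactoring might look like the sketch below; the exchange-rate variable and the fx_ function names are invented for illustration:

```c
/* Before: modules passed data through a shared global (high coupling).
       double g_exchange_rate;    -- defined in one module,
                                     extern'd or cloned in others  */

/* After: the owning module keeps the value private and exposes it
   through API functions, so data crosses module boundaries only
   via function parameters and return values. */
static double exchange_rate;          /* now private to this module */

void   fx_set_exchange_rate(double rate) { exchange_rate = rate; }
double fx_get_exchange_rate(void)        { return exchange_rate; }
```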

Project Management

Our plan required both a phased modularization and continuous integration of the modularized code with the main product. To ensure that we achieved this with minimal schedule slippage and defects, we adopted the following strategy:

■■ We involved senior managers (who sponsored the project) and other key stakeholders right from the beginning.
■■ We conducted fortnightly reviews with senior management and frequently shared intermediate results to help them better appreciate the modularization effort and quickly resolve any issues.

The project management team carried out the modularization exercise in a separate, isolated area and subsequently integrated the modularized code into the mainstream code base. To enforce modularization compliance during source-code build and

check-in time, the team created a set of home-grown gatekeeper tools that can

■■ detect whether a module is directly calling another module’s function rather than its PI;
■■ detect when certain application-specific guidelines are violated, such as when an online function calls a batch function; and
■■ detect when a lower-layer function calls an upper-layer function.

While the modularization project was underway, we decided to measure the extent of modularization at the end of the first phase to provide a benchmark for future modularization phases. We therefore developed a framework and a set of tools to quantify the modularization.
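The gatekeeper tools aren’t described in detail; the sketch below shows one way the three checks could be expressed over extracted call edges. The FuncInfo structure, the layer encoding, and the function names are all our assumptions:

```c
#include <string.h>

typedef struct {
    const char *module;   /* owning domain module                 */
    int         layer;    /* 0 = base, 1 = business, 2 = driver   */
    int         is_pi;    /* 1 if the function is a PI function   */
    int         is_batch; /* application-specific attribute       */
} FuncInfo;

/* Returns nonzero if a call edge violates a modularization rule. */
int gatekeeper_check(const FuncInfo *caller, const FuncInfo *callee) {
    int cross_module = strcmp(caller->module, callee->module) != 0;
    if (cross_module && !callee->is_pi)
        return 1;  /* direct call to another module's internals    */
    if (caller->layer < callee->layer)
        return 2;  /* lower-layer function calling an upper layer  */
    if (!caller->is_batch && callee->is_batch)
        return 3;  /* online function calling a batch function     */
    return 0;
}
```

Run over every edge in the function-dependency repository at build and check-in time, a rule set like this keeps the modular structure from eroding as development continues.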

Modularization: Quantitative Analysis

To manage the modularization, we earmarked only 12.5 MLOC from the entire product for the first phase, on the basis of the code’s age and the need for modularization. Implementing the design guidelines and modularizing 7 MLOC (56 percent of the 12.5 MLOC) required nearly two years—about 520 person-days for design and 2,100 person-days for coding and preliminary testing. At times, the project went dormant; at its peak, it had 13 staff members. The modularized system comprises 10 newly created/extracted domain modules and about 52 submodules. As we described earlier, it has three layers, and the business-logic layer is further divided into three sublayers. The following discussion is based on our initial modularization of 7 MLOC.

Measuring Modularization Quality

To measure modularization quality, our applied research group invented a new set of metrics7 to compare the premodular and modular product versions.

The module interaction index (MII) measures the extent to which intermodule calls go through API functions, whereas the non-API function closedness index (NC) measures the extent to which non-API functions don’t participate in intermodule calls. As per solution strategy S3 in Table 1, we spent significant effort in identifying domain module APIs and ensuring that all intermodule calls go through the RI-PI framework. We therefore observed significant improvement in both MII (from 0.25 to 0.76) and NC (from 0.86 to 0.91).

The module-interaction stability index (MISI)


measures how closely we’ve adhered to the notion that each module should depend only on more stable modules that reside in the same or a lower layer. The layer organization index (LOI) measures how closely we’ve honored the layered-architecture principle. As per strategies S4 and S5, we enforced a layered architecture and aimed to make the base-layer modules stable. We observed an improvement in MISI (from 0.74 to 0.84) and in LOI (from 0.5 to 0.63).

The normalized testability-dependency metric (NTDM) measures module dependency during testing; ideally, modules should be as independent as possible. We observed an improvement in NTDM from 0.67 to 0.71.

Finally, the module-size boundedness index (MSBI) measures how closely module sizes match a desired size provided by the subject matter experts. Our modularization exercise created several fine-grained modules that are more or less similarly sized. This metric value improved from 0.05 to 0.39.
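Reference 7 gives the precise definitions of these metrics; as a rough intuition consistent with the prose above (our paraphrase, not the exact formulation), the two call-traffic metrics behave like the ratios

```latex
\[
\mathit{MII} \approx
  \frac{\#\{\text{intermodule calls that land on API functions}\}}
       {\#\{\text{all intermodule calls}\}},
\qquad
\mathit{NC} \approx
  \frac{\#\{\text{non-API functions untouched by intermodule calls}\}}
       {\#\{\text{all non-API functions}\}}.
\]
```

Both lie between 0 and 1, which is why the reported movements (MII from 0.25 to 0.76, NC from 0.86 to 0.91) read directly as larger fractions of well-routed calls and well-hidden internals.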

Observations: Benefits and Limitations

To meet our goals of continuous deployment, enhancement, and modularization, we had to immediately integrate newly modularized parts into the mainstream product. Given this state of continuous evolution, it’s hard to precisely measure various operational metrics, such as productivity improvement, and associate them with the modularization activity. Nonetheless, we did observe the following benefits and limitations of our approach during the first phase of modularization.

Benefits

Among our observed benefits are faster fault localization, faster regression testing, and faster change-request response time. Our code base consisted of an intricately meshed set of functionalities related to multiple domains. Our modularization process therefore concentrated on functional domain-based partitioning and on rerouting interdomain module calls through the RI-PI framework, which has reduced domain module coupling. Consequently, any functional change requirement (say, a modification in open_loan_account) or introduction of a new loan-related feature has a localized impact, restricted to the loan module or its submodules, as opposed to a large part of the code base.

Fault localization. Our improvements in fault localization mean that now, for a fault in a loan-related functionality, we need to analyze only the


loan-related code, which is about 9 percent of the code base. Our product quality group’s preliminary calculation shows a 50 percent effort reduction in localizing simple faults and a 20 to 25 percent reduction for complex ones.

Regression testing. We’ve also observed a 15 percent reduction in the defects detected during regression testing, although we’ve not yet reduced the number of regression test cases. We’re currently working to optimize the regression test cases and segregate them by domain to further reduce our overall regression-testing effort.

Memory requirements. Our reduction in runtime memory requirements is also notable. Previously, we had to load the complete code executable in memory, even though we needed only a small part of it. With modularization, we can now create module-specific shared libraries and an infrastructure to load or unload them on demand. A quantitative measurement shows that the application takes nearly 43 percent less memory to load compared to the premodularized system.

Link-line dependency. In reducing link-line dependency, the modularization process has also significantly reduced start-up time for an executable. Application load time, for example, has improved by 30 percent for a batch job postmodularization.

Build time. Given the extremely high cyclic dependencies among code and header files, the premodularized system required almost a complete build for even a small change. Because complete builds are time consuming, there was only a small time window for new development. Following our first modularization phase, there’s been a nearly 30 percent reduction in overall build time. We expect further improvements once modularization is complete.

Ownership among developers. Because of modularization, our domain modules are developed and maintained exclusively by specialists. Now, newcomers need only understand their domain module; previously, everyone needed to be familiar with the entire product to make any change. Our new system has created both a strong sense of ownership and a significant increase in domain knowledge.

Development process. As mentioned earlier, modularization helped our management team incorporate a set of hard compliance checks in the build and check-in process. This ensures that the modular design that we’ve achieved thus far won’t be subsequently compromised.

Limitations

Introducing the RI-PI framework caused some difficulty in that the architecture is harder for developers to understand. The framework also caused a marginal increase (about 5 percent) in coding effort for enhancements involving cross-domain module calls. However, our significant reduction in runtime memory requirements has compensated for this overhead.

Also, the engineering team hasn’t been able to achieve plug-and-play of domain modules yet, primarily owing to the RI-PI framework’s proprietary nature. Currently, it’s not possible to independently release a domain module or to integrate one easily with a third-party application. Moreover, we haven’t completely modularized the underlying database schema.

Furthermore, the RI-PI framework’s current design assumes in-memory interactions among modules. Under this framework, it would be hard to achieve a more loosely coupled, message-passing-based interaction if such a requirement were to arise.

Our next modularization phase aims to extract modules from the remaining 44 percent of the system’s code. We’re also working to assign database objects to different domain modules and prevent their cross-module referencing/processing. We also plan to further layer the base layer and enforce the RI-PI framework on the submodules.

Overall, our modularization project has been a success. The application has now existed in a stable state in the production environment for one year. As a result, all teams have begun enthusiastically participating in the modularization project’s second phase. We’ve also begun building a comprehensive modularization framework. The solution is expected to last for another decade, and we’re planning no additional reengineering tasks of this magnitude for the foreseeable future.

References
1. F.P. Brooks, “No Silver Bullet: Essence and Accidents of Software Engineering,” Computer, vol. 20, no. 4, 1987, pp. 10–19.
2. M.M. Lehman and L. Belady, Program Evolution—Processes of Software Change, Academic Press, 1985.
3. D.L. Parnas, “On the Criteria to Be Used in Decomposing Systems into Modules,” Comm. ACM, vol. 15, no. 12, 1972, pp. 1053–1058.
4. M. Ramage and K. Bennett, “Maintaining Maintainability,” Proc. Int’l Conf. Software Maintenance (ICSM), 1998, pp. 214–223.
5. J. Bisbal et al., “Legacy Information Systems: Issues and Directions,” IEEE Software, vol. 16, no. 5, 1999, pp. 103–111.
6. N. Medvidovic and R.N. Taylor, “A Classification and Comparison Framework for Software Architecture Description Languages,” IEEE Trans. Software Eng., vol. 26, no. 1, 2000, pp. 70–93.
7. S. Sarkar, G.M. Rama, and A.C. Kak, “API-Based and Information-Theoretic Metrics for Measuring the Quality of Software Modularization,” IEEE Trans. Software Eng., vol. 33, no. 1, 2007, pp. 14–32.

About the Authors

Santonu Sarkar is a vice president of R&D at Accenture Technology Labs, India, and previously was a principal architect at Infosys Technologies. His research interests include architecture modeling, metrics, program comprehension, and reengineering techniques. Sarkar received his PhD in computer science from the Indian Institute of Technology Kharagpur. Contact him at [email protected].

Shubha Ramachandran is a senior technical architect at Infosys Technologies, India. Her current research interests are in metrics, program comprehension, and reengineering methods. Ramachandran received her BTech in civil engineering from the Indian Institute of Technology Mumbai. Contact her at [email protected].

G. Sathish Kumar is a delivery manager at Infosys Technologies, India. His research interests include design patterns, process automation, and system integration. Kumar received his MS in software systems from the Birla Institute of Technology, Pilani. Contact him at [email protected].

Madhu K. Iyengar is a delivery manager at Infosys Technologies, India. His research interests include software product enterprise architecture. Iyengar received his BTech in instrumentation engineering from the Sri Jayachamarajendra College of Engineering. Contact him at [email protected].

K. Rangarajan is a principal architect at Infosys Technologies, India. His research interests include efficient algorithms and computational theory. Rangarajan received his BTech in aerospace engineering from the Indian Institute of Technology Chennai. Contact him at [email protected].

Saravanan Sivagnanam is a project manager at Infosys Technologies, India. His research interests include porting large-scale enterprise applications. Sivagnanam received his MTech in thermal sciences from the College of Engineering, Guindy, Anna University. Contact him at [email protected].

