Magnify – a new tool for software visualization

Cezary Bartoszuk, Robert Dąbrowski, Krzysztof Stencel, Grzegorz Timoszuk
Institute of Informatics, University of Warsaw
Banacha 2, 02-097 Warsaw, Poland

Abstract—Modern software systems are inherently complex. Their maintenance is hardly possible without precise, up-to-date documentation. Unfortunately, handwritten documentation quickly gets outdated. Moreover, it is often tricky to document dependencies among software components by looking only at the raw source code. We address these issues by researching new software analysis and visualization tools. In this paper we focus on software visualization. Magnify is our new tool that performs static analysis and visualization of software. It parses the source code, identifies dependencies between code units and records all the collected information in a repository based on a language-independent, graph-based data model. Nodes of the graph correspond to program entities of disparate granularity: methods, classes, packages etc. Edges represent dependencies and hierarchical structure. We use colours to reflect quality, sizes to display the importance of artefacts, and the density of connections to portray coupling. This kind of visualization gives a bird's-eye view of the source code. It is always up to date, since the tool generates it automatically from the current revision of the software. It can equally well visualize the whole system or a single package. In the former case nodes are modules, while in the latter case nodes are classes or methods. In this paper we discuss the design of the tool and present visualizations of sample open-source Java projects of various sizes.

I. INTRODUCTION

The complexity of software systems and their development processes has grown rapidly in recent years. In numerous companies the high-level structure of dependencies between software components is kept only in the heads of developers. This makes the development process fragile, since teams composed of humans are inherently volatile. If a project has any architectural documentation, it is usually handwritten and maintained manually. Such an approach either significantly increases the cost or decreases the quality of the documents, since they quickly become out of date. Therefore, development teams need methods and tools to inspect the current state of complex systems and their fragments. In this paper we describe Magnify, a software visualization tool that caters for these needs. It shows top-level views of software entities of disparate granularities: systems, components, modules, classes etc. These views contain a graphic presentation of the importance, the quality and the coupling of subcomponents. In our opinion Magnify constitutes a progress over manually maintained diagrams that is similar to the improvement brought by automated unit tests over inline comments. The difference between comments and unit tests lies in verifiability. By running a test suite, a software engineer attests that the code

matches the specification. No such verification is possible for comments. Magnify does not rely on any documentation that would have to be verified against the source code. Instead, it takes a different approach: we assume that all the information needed about the current revision of a system is already present in its source code. The problem is how to extract that information. Magnify does that by visualizing software as a graph composed of artefacts and the relationships between them. Given a source code bundle, Magnify parses and analyzes it in order to create a graph-based model [1], [2]. This model is then persisted in the architecture warehouse that provides data for software intelligence [3]. One of its methods is visualization. Magnify reads the data from the warehouse and produces a graph laid out on a plane. Nodes of the graph are entities (classes, packages etc.) of the provided source code. The size of a node reflects the importance of the corresponding artefact. In the current version of Magnify we use PageRank to assess importance. We plan to add more importance measures in the future. The colour of a node represents the quality of the corresponding piece of code. Any software metric accompanied with threshold values for green and red can be employed. In [4] we showed how to express state-of-the-art software metrics in the graph model. In the examples presented in this paper, we use the number of lines per class. In the case of a package view, these numbers are aggregated by taking the average of the measures for the enclosed classes. Edges represent relationships of two kinds: structural inclusion and dependency. Structural inclusion, represented by red edges, shows the package tree and the class containment relationships. A class depends on another one if it knows about the existence of the latter. Class dependencies are escalated to package dependencies, i.e. a package depends on another package if there exists a class in the first package that depends on a class in the second package.
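To make the escalation rule concrete, here is a minimal Java sketch (our illustration, not Magnify's actual code; the class and map names are assumptions) that lifts class-level dependencies to package-level ones:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DependencyEscalation {

    /** Extracts the package name from a fully qualified class name. */
    static String packageOf(String className) {
        int dot = className.lastIndexOf('.');
        return dot < 0 ? "" : className.substring(0, dot);
    }

    /** Lifts class-to-class dependencies to package-to-package ones. */
    static Map<String, Set<String>> escalate(Map<String, Set<String>> classDeps) {
        Map<String, Set<String>> packageDeps = new HashMap<>();
        for (Map.Entry<String, Set<String>> entry : classDeps.entrySet()) {
            String fromPkg = packageOf(entry.getKey());
            for (String target : entry.getValue()) {
                String toPkg = packageOf(target);
                if (!fromPkg.equals(toPkg)) { // intra-package edges are not escalated
                    packageDeps.computeIfAbsent(fromPkg, k -> new HashSet<>()).add(toPkg);
                }
            }
        }
        return packageDeps;
    }
}
```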

The paper is organised as follows. In Section II we address the motivation and related work. In Section III we recall the theoretical foundations of our research. In Section IV we describe the design of Magnify and possible extension points in its architecture. In Section V we present visualizations of sample open-source Java projects of various sizes. Section VI concludes.

II. MOTIVATION

The idea described in this paper draws on several existing approaches and practices. A unified approach to software systems and software processes has already been presented in [5]. Software systems were perceived as large, complex and intangible objects developed without a suitably visible, detailed and formal description of how to proceed. It was suggested that software processes should be included in software projects as parts of programs with explicitly stated descriptions; a software architect should communicate with developers, customers and other managers through software process programs indicating the steps to be taken in order to achieve product development or evolution goals. Nowadays, the process of architecture evolution is treated as an important issue that severely impacts software quality. Formal languages have been proposed to describe and validate architectures, e.g. architecture description languages (ADLs) [6]. In that sense, software process programs and programs written in ADLs would be yet more artefacts in the graph recalled in this paper. Multiple graph-based models have been proposed to reflect architectural facets, e.g. to represent architectural decisions and changes [7], to discover implicit knowledge from architecture change logs [8] or to support architecture analysis and tracing [9]. Graph-based models have also become helpful in UML model transformations, especially in model-driven development (MDD) [10]. Automated transitions (e.g. from use cases to activity diagrams) have been considered [11], along with the additional traceability that could be established through automated transformation, e.g. relating requirements to design decisions and test cases. An approach that automatically generates activity diagrams from use cases while establishing traceability links has already been implemented (RAVEN). As system complexity increases, the role of architectural knowledge also gains importance. Multiple tools have been developed that support storing and analysing that knowledge [12]–[15]. Architectural knowledge also influences modern development methodologies [16], [17], and it can be extended by data gathered during software executions [18]. Tracing architectural decisions to requirements has been thoroughly investigated in [19]–[21]. An analysis of gathering, managing and verifying architectural knowledge has been conducted and presented in [22]. Changes in architecture management over the last twenty years have been summarised in the survey [23]. Visualization of software architecture has been a research goal for years. Tools like Bauhaus [24], Source Viewer 3D [25], Gevol [26], JIVE [27], the evolution radar [28], code_swarm [29] and StarGate [30] are interesting attempts at visualization. However, none of them simultaneously supports aggregation (e.g. package views), drill-down, and picturing code quality and dependencies. Moreover, all of them are significantly more

complex when compared to our proposal. In our opinion notably better effects can be achieved with simpler facilities. Movies and the third dimension are not necessary to quickly assess the quality, robustness and resilience of an architecture.

III. THEORETICAL FOUNDATIONS

A. Model

We recall the theoretical model [1] for the unified representation of architectural knowledge. The key reason for introducing such a model is the need for a representation of architectural knowledge that is scalable by nature; abstract from programming paradigms, languages, specification standards, testing approaches, etc.; and complete, that is, allowing the representation of all software system and software process artefacts [5]. The definition of the model is based on a directed labelled multigraph. According to the model, the software architecture graph is an ordered triple (V, L, E) where V is the set of vertices that reflect all artefacts created during a software project, E ⊆ V × L × V is the set of directed edges that represent dependencies (relations) among those artefacts, and L is the set of labels which qualify the artefacts and their dependencies. There can be more than one edge from one vertex to another (multigraph), as artefacts can be in more than one relation. The relations are typically not symmetric (the graph is directed). Vertices of the project graph are created when artefacts are produced during the software development process. By default vertices represent parts of the source code, like modules, classes, methods and tests (e.g. unit tests, integration tests, stress tests). Other examples of vertices are: documents (e.g. requirements, use cases, change requests), coding guidelines, source code in higher-level languages (e.g. yacc grammars), configuration files, make files or build recipes, tickets in issue tracking systems etc. Vertices may be of different granularities (densities). The containment relation is easily represented in the model, using a special containment label to relate vertices representing artefacts of different granularities; on graph print-outs it can be presented as an edge or using vertex nesting. During software development vertices are subject to change (due to changes in requirements, the implementation process, bug fixing and code refactoring), therefore the vertices must be versioned. This is easily represented in the model using a special label containing the version number (revision) attached to each vertex and/or label. It means that multiple vertices can exist for the same artefact in different versions; however, graph print-outs usually show vertices from one revision only, e.g. the most recent one.

Example 1. Each artefact can be described by a set of labels, and labels may have additional properties, e.g. a method can be described by labels showing that it is a part of the project source code (code); written in Java (java); its revision is 456 (r:456); it is abstract and public. Edges are directed and may have multiple labels as well, e.g.: a package contains a class; a method calls another method.
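For illustration, the triple (V, L, E) can be transcribed almost literally into code. A minimal Java sketch, with names of our own choosing that are not part of the model:

```java
import java.util.Set;

// A direct transcription of the model's definition: a software architecture
// graph is an ordered triple (V, L, E) where E is a subset of V x L x V.
public class SoftwareGraph {

    /** One directed, labelled edge: an element of V x L x V. */
    public static final class Edge {
        public final String from;   // source artefact id (in V)
        public final String label;  // dependency label (in L)
        public final String to;     // target artefact id (in V)

        public Edge(String from, String label, String to) {
            this.from = from;
            this.label = label;
            this.to = to;
        }
    }

    public final Set<String> vertices; // V: all artefacts of the project
    public final Set<String> labels;   // L: labels qualifying artefacts and edges
    public final Set<Edge> edges;      // E: parallel edges with distinct labels
                                       // give the multigraph behaviour

    public SoftwareGraph(Set<String> vertices, Set<String> labels, Set<Edge> edges) {
        this.vertices = vertices;
        this.labels = labels;
        this.edges = edges;
    }
}
```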

The transformations and metrics recalled below give the foundation for the software intelligence layer of tools [3].

B. Transformations

Our graph model is general and scalable, fits both small and huge projects, and has been tested in practice [2]. The tests proved that in the case of a large project the graph becomes too complex to be human-tractable as a whole. This confirmed that rules for graph transformation are a must, since software architects are interested both in an overall (top-level) picture and in particular (low-level) details, especially details that satisfy some additional restrictions. Intuitive examples of such transformations are: return the subgraph including only the methods that call a given method; return the subgraph of all public methods of a given class (either including or excluding inherited ones). In the graph model the queries that compute such transformations are computationally inexpensive, as they usually only need to traverse the graph (or even a small fraction of it).
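For instance, the first transformation above (methods calling a given method) reduces to a reverse-neighbourhood lookup. A minimal sketch against the Blueprints API, assuming call dependencies are stored as edges labelled "calls" pointing from the caller to the callee (the label name is our assumption):

```java
import com.tinkerpop.blueprints.Direction;
import com.tinkerpop.blueprints.Vertex;

public final class CallQueries {
    // Returns the vertices of all methods that directly call the given
    // method, by following the incoming "calls" edges.
    public static Iterable<Vertex> callersOf(Vertex method) {
        return method.getVertices(Direction.IN, "calls");
    }
}
```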

Example 2. For a given software graph G = (V, L, E) and a subset of its labels L′ ⊆ L, its filter is the transformation G|L′ = (V′, L′, E′) where V′ and E′ contain exactly those vertices and edges that have a label in L′.

Example 3. For a given software graph G = (V, L, E) and t : L × L → L′, its closure is the transition G^t = (V, L′, E′), where E′ is the set of new edges resulting from the transitive closure of t calculated on pairs of neighbouring edges from E. The vertices V remain unchanged.

Further examples of transformations include: a selection that maps a graph into one of its subgraphs; a transition that maps a graph into a new graph and introduces new vertices or edges. Software architects may also choose to stop distinguishing certain differences in artefact or dependency types (adopt a higher level of abstraction, e.g. hide fields and methods while preserving class dependencies); for this purpose we define the map transformation. The transformations can be combined.

C. Metrics

For complex projects quantitative evaluation is a must. The graph-based approach is in line with best practices for metrics [31], [32], allows for an easy translation of existing metrics into graph terms [33], and ensures that they can be efficiently calculated using graph algorithms. It also allows for designing new metrics, e.g. ones that integrate both software system and software process artefacts.

Example 4. For a given software graph G = (V, L, E), a metric is a transformation m : G → R where R denotes the real numbers and m can be effectively calculated by a graph algorithm on G.

Example 5. Let NOC (Number Of Children) denote the metric counting the number of direct subclasses. Since calculating such a metric in the graph-based model reduces to filtering and counting neighbours, it can be done quickly, in time O(|G|).

Graph metrics depend only on the graph's structure; they are independent of any programming language. Hence storing and integrating all architectural knowledge in one place facilitates tracing not only dependencies in the source code but also among documentation and meta-models. This opportunity gives rise to new graph-defined metrics concerned with software processes.

Example 6. Let CHC (Cohesion of Classes) denote the metric counting the number of strongly connected components of the graph. Software is cohesive if this metric is 1. In the graph-based model it is computed quickly, in time O(|G|).
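As an illustration of how such metrics reduce to neighbour counting, here is a hedged sketch of NOC (Example 5) over the Blueprints API; the "extends" edge label is our assumption about how inheritance is recorded in the graph:

```java
import com.tinkerpop.blueprints.Direction;
import com.tinkerpop.blueprints.Vertex;

public final class Metrics {
    // NOC: the number of direct subclasses of a class, i.e. the number of
    // incoming inheritance edges of its vertex.
    public static long numberOfChildren(Vertex classVertex) {
        long children = 0;
        for (Vertex subclass : classVertex.getVertices(Direction.IN, "extends")) {
            children++;
        }
        return children;
    }
}
```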

IV. MAGNIFY

We implemented Magnify as a server system, based on a graph database, with a web front-end written in Scala. Magnify allows loading source code bundles, analyzing them and displaying the resulting graphs of software components. See functionality samples in Figures 1 and 2.

Fig. 1. Magnify functionality - main view

Fig. 2. Magnify functionality - visualisation parameters

The analytical backend is composed of three main modules: the parser, the graph storage and the analysis engine. The parser is the only part of Magnify that must be programming-language specific. Its responsibility is to transform the contents of a source code bundle into an abstract software graph. This graph is then persisted. Currently the implementation contains a Java 5 parser based on the javaparser library. The graph storage is based on the Tinkerpop Blueprints specification. At present Magnify uses the reference Tinkergraph implementation. Besides Tinkergraph, numerous compatible graph database engines like Neo4j or OrientDB are available. Since they share the same Blueprints API, they can substitute the currently used simple database. When the source code has been converted and stored in the abstract language-agnostic form, the analysis engine runs. The implementation computes the PageRank of every node in the graph and code quality metrics for classes. These metrics are then escalated to the package level. Sample visualizations presented in this paper use lines of code per class. See the referential deployment diagram in Figure 3.
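To illustrate how the storage layer is accessed, the following hedged sketch builds a tiny software graph through the Blueprints API; the property names and edge label are our assumptions, not necessarily the ones Magnify uses:

```java
import com.tinkerpop.blueprints.Graph;
import com.tinkerpop.blueprints.Vertex;
import com.tinkerpop.blueprints.impls.tg.TinkerGraph;

public class GraphStorageSketch {
    public static void main(String[] args) {
        // Only the Graph interface is used below, so the TinkerGraph
        // instance could be swapped for a Neo4j or OrientDB Blueprints
        // implementation without changing the rest of the code.
        Graph graph = new TinkerGraph();

        Vertex pkg = graph.addVertex(null);
        pkg.setProperty("kind", "package");
        pkg.setProperty("name", "org.example");

        Vertex cls = graph.addVertex(null);
        cls.setProperty("kind", "class");
        cls.setProperty("name", "org.example.Service");

        // Structural inclusion: the package contains the class.
        graph.addEdge(null, pkg, cls, "contains");

        graph.shutdown();
    }
}
```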

Fig. 3. General architecture of Magnify

The architecture of Magnify is flexible enough to replace any of its components and to add new constituents. As noted above, we can replace the repository with any graph database that conforms to the Blueprints API. New data providers can be added, like parsers for more programming languages, runtime profilers, analyzers of version control systems, and analyzers of web data (e.g. forum discussions). On the other hand, the content of the repository can be read, analyzed and augmented by arbitrary additional intelligence modules, e.g. design pattern detectors, code-smell detectors, code query systems, refactoring tools, visualization tools, and dependency miners.
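As a sketch of what such an intelligence module might look like, the following toy detector scans the repository for suspiciously large classes; the property names ("kind", "linesOfCode", "name") are assumptions made for illustration:

```java
import com.tinkerpop.blueprints.Graph;
import com.tinkerpop.blueprints.Vertex;

public class LargeClassDetector {
    // Reports every class vertex whose recorded line count exceeds the
    // given threshold.
    public static void report(Graph graph, int threshold) {
        for (Vertex v : graph.getVertices()) {
            if ("class".equals(v.getProperty("kind"))) {
                Integer loc = v.getProperty("linesOfCode");
                if (loc != null && loc > threshold) {
                    System.out.println("Large class: " + v.getProperty("name")
                            + " (" + loc + " LOC)");
                }
            }
        }
    }
}
```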

V. EVALUATION

In this Section we show sample visualizations produced by Magnify. We have chosen five open-source projects that vary significantly in size and quality. All of them have a noteworthy number of users and are well adopted by the software development community. These are JUnit, Cyclos, Play, Spring and Karaf. For each system we present its top-level visualization created by Magnify. Then we analyze the resulting images and enumerate the conclusions that can be drawn from them.


Fig. 4. The visualization of Spring context 3.2.2 produced by Magnify

A. Spring context 3.2.2



Spring is one of the most popular enterprise application frameworks in the Java community. It provides an infrastructure for dependency injection, caching, transactions, database access and much more. Figure 4 shows the visualization of Spring produced by Magnify. The structure of dependencies implies that Spring is well designed. The graph is notably sparse. The only packages detected as important are empty vendor packages: the root package, org and org.springframework. All the packages that do contain classes are of the same importance. This indicates well-balanced software. Figure 4 contains no bright red packages. This means that on average the classes are small in most packages. Thus, the overall quality is satisfactory.

Fig. 5. The visualization of JUnit 4.10 produced by Magnify

B. JUnit 4.10

JUnit is a standard unit testing framework for the Java programming language. Figure 5 shows its visualization. The overall quality of JUnit's code is good, since there are no bright red packages. Furthermore, all important packages are green. The web of dependencies is manageable. This implies the desired low coupling of components.

C. Cyclos 3.7

Cyclos is a complete on-line payment system. It offers numerous additional modules such as e-commerce or communication tools. The project allows local banks to offer banking services that can stimulate local trade and development. Cyclos is published under the GPL licence. Figure 6 presents the visualization of this system produced by the Magnify tool. Unfortunately, this time the dependency graph is exceptionally dense. Software engineering experience indicates that the development and maintenance of software systems with such tight coupling is difficult, costly and error-prone. On the other hand, Figure 6 shows only a few packages in which classes are big on average. That means that the overall complexity of the classes themselves is acceptable. Cyclos is a profound example of a system that should be split into an orchestrated group of communicating systems. This kind of refactoring would significantly improve the quality of this software. It would also reduce the cost of further development and maintenance.

Fig. 6. The visualization of Cyclos 3.7 produced by Magnify

D. Play 1.2.5

Fig. 7. The visualization of Play 1.2.5 produced by Magnify

Play is a popular Scala and Java web framework. It has a lightweight, stateless and web-friendly architecture. Play is heavily influenced by web frameworks for dynamic languages, such as Rails and Django. The architecture of Play makes it a simpler development environment when compared to other Java platforms. Figure 7 shows the visualization of Play produced by Magnify. It presents a small project with a decent amount of dependencies. The flat package structure is typical for dynamic languages. The biggest node corresponds to the project root package play. Bright red packages reveal the potentially high complexity of their classes.

E. Apache Karaf 3.0.0 RC1

Apache Karaf is a small OSGi container to deploy various components and applications. Karaf supports the hot deployment of OSGi bundles, native operating system integration and more. Figure 8 shows the visualization of Karaf produced by Magnify. Even though Karaf is split into many packages, the number of dependencies is small. Many subtrees of the package hierarchy have only a single dependency on the rest of the system. Thus, Karaf is well packaged. Figure 8 shows that the overall code quality in Karaf is good. There are only a few packages where the average class size is alarming. One refactoring seems worth considering: the subpackages of org.apache.karaf.shell should be encapsulated, since they tend to spread a web of dependencies in the top-right corner of the picture.

Fig. 8. The visualization of Karaf 3.0.0-RC1 produced by Magnify

VI. CONCLUSION

We follow the research on the analysis and visualization of software and software processes, and promote an approach that avoids the separation between software and software process artefacts. We demonstrate that the implementation of such an approach is feasible. We start with a graph-based theoretical model; then we use a graph database to build a software architecture warehouse to collect knowledge on artefacts and their relations. On top of the warehouse we implement tools that perform software intelligence by running analyses and generating visualizations. We execute experiments on open-source Java programs using those tools. In this paper we presented Magnify, a tool that performs static analysis and visualization of software systems. It focuses on the relationships between components rather than on their internal structure. Software architects can use Magnify to quickly comprehend and assess software. The idea is to automatically generate a visualization of the software such that architects can instantly see the importance and quality of software components. They can do it at the level of abstraction they

require. The most eye-catching elements are big red nodes. A big red node means an important artefact of poor quality. Most probably, severe bugs are hidden there. It may indicate, e.g., a God Module anti-pattern that should be decomposed immediately. Architects can also see how node sizes are distributed. Our experiments have shown that in many cases a sparse software graph and almost uniformly distributed node sizes indicate a proper modular architecture. On the other hand, one node dominating the others in size might also indicate a shared-kernel architecture, where other functionalities are implemented as services floating around the kernel. Magnify is a general tool that can adopt other quality metrics and importance estimates. The flexibility of its design allows replacing any of its components and adding new parts. In order to support analyses for another programming language, we have to add only an appropriate parser. All other facilities (the repository and the analytic algorithms) need not be changed. We used PageRank as the algorithm to compute artefacts' importance. It has proven to be effective in practical applications, though its adequacy can be questioned. For example, a common technique for encapsulating a module in an object-oriented language involves depending on the module's interfaces and obtaining instances via a façade. The PageRank importance of the façade will be significantly higher than the importance of the implementation classes. This is usually a poor reflection of real importance. We emphasise, however, that PageRank can be easily substituted with other metrics. Magnify is implemented using standards for graph representation and visualization, like the Blueprints API and the GEXF graph format. Measures of the importance of a node depend only on the graph model used. Thus, any algorithm working on those standard graph technologies can be easily adapted. Magnify can also be extended in other directions. Currently Magnify supports only Java. Adding support for another programming language only requires registering a new parser. Its task would be to analyse source files and insert language-specific nodes and their relations into the graph database. Since the graph-based representation of the source code is language-agnostic, all the analytics done inside Magnify will work equally well for any language with notions of packages, classes and methods. Furthermore, even though certain local complexity measures might depend on a programming language, most of them do not. Cyclomatic complexity, which takes into account execution paths, can be computed in the same way for most programming languages. Moreover, most languages use the same keywords for branching and loops. Other promising ideas worth implementing in the near future include: (1) improving the vertex clustering algorithm for module repackaging, (2) gathering information across revisions and (3) adding metadata stating how packages are split into modules and how these modules depend on each other. This kind of metadata would constitute a specification that can be matched against the source code.

REFERENCES

[1] R. Dabrowski, K. Stencel, and G. Timoszuk, "Software is a directed multigraph," in ECSA, ser. Lecture Notes in Computer Science, I. Crnkovic, V. Gruhn, and M. Book, Eds., vol. 6903. Springer, 2011, pp. 360–369.
[2] ——, "Improving software quality by improving architecture management," in CompSysTech, B. Rachev and A. Smrikarov, Eds. ACM, 2012, pp. 208–215.
[3] R. Dabrowski, "On architecture warehouses and software intelligence," in FGIT, ser. Lecture Notes in Computer Science, T.-H. Kim, Y.-H. Lee, and W.-C. Fang, Eds., vol. 7709. Springer, 2012, pp. 251–262.
[4] R. Dabrowski, K. Stencel, and G. Timoszuk, "One graph to rule them all - software measurement and management," in CS&P, ser. CEUR Workshop Proceedings, L. Popova-Zeugmann, Ed., vol. 928. CEUR-WS.org, 2012, pp. 79–90.
[5] L. J. Osterweil, "Software processes are software too," in ICSE, W. E. Riddle, R. M. Balzer, and K. Kishida, Eds. ACM Press, 1987, pp. 2–13.
[6] M. T. T. That, S. Sadou, and F. Oquendo, "Using architectural patterns to define architectural decisions," in WICSA/ECSA, T. Männistö, A. M. Babar, C. E. Cuesta, and J. Savolainen, Eds. IEEE, 2012, pp. 196–200.
[7] M. Wermelinger, A. Lopes, and J. L. Fiadeiro, "A graph based architectural (re)configuration language," in ESEC / SIGSOFT FSE, 2001, pp. 21–32.
[8] A. Tang, P. Liang, and H. van Vliet, "Software architecture documentation: The road ahead," in WICSA, 2011, pp. 252–255.
[9] H. P. Breivold, I. Crnkovic, and M. Larsson, "Software architecture evolution through evolvability analysis," Journal of Systems and Software, vol. 85, no. 11, pp. 2574–2592, 2012.
[10] J. Derrick and H. Wehrheim, "Model transformations across views," Sci. Comput. Program., vol. 75, no. 3, pp. 192–210, 2010.
[11] T. Kühne, B. Selic, M.-P. Gervais, and F. Terrier, Eds., Modelling Foundations and Applications, 6th European Conference, ECMFA 2010, Paris, France, June 15-18, 2010. Proceedings, ser. Lecture Notes in Computer Science, vol. 6138. Springer, 2010.
[12] P. Kruchten, P. Lago, H. van Vliet, and T. Wolf, "Building up and exploiting architectural knowledge," in WICSA. IEEE Computer Society, 2005, pp. 291–292.
[13] A. Tang, P. Avgeriou, A. Jansen, R. Capilla, and M. Ali Babar, "A comparative study of architecture knowledge management tools," Journal of Systems and Software, vol. 83, no. 3, pp. 352–370, 2010.
[14] D. Garlan, V. Dwivedi, I. Ruchkin, and B. R. Schmerl, "Foundations and tools for end-user architecting," in Monterey Workshop, ser. Lecture Notes in Computer Science, R. Calinescu and D. Garlan, Eds., vol. 7539. Springer, 2012, pp. 157–182.
[15] I. Gorton, C. Sivaramakrishnan, G. Black, S. White, S. Purohit, C. Lansing, M. Madison, K. Schuchardt, and Y. Liu, "Velo: A knowledge-management framework for modeling and simulation," Computing in Science & Engineering, vol. 14, no. 2, pp. 12–23, March-April 2012.
[16] N. Brown, R. L. Nord, I. Ozkaya, and M. Pais, "Analysis and management of architectural dependencies in iterative release planning," in WICSA, 2011, pp. 103–112.
[17] R. L. Nord, I. Ozkaya, and R. S. Sangwan, "Making architecture visible to improve flow management in lean software development," IEEE Software, vol. 29, no. 5, pp. 33–39, 2012.
[18] A. van Hoorn, J. Waller, and W. Hasselbring, "Kieker: a framework for application performance monitoring and dynamic software analysis," in ICPE, D. R. Kaeli, J. Rolia, L. K. John, and D. Krishnamurthy, Eds. ACM, 2012, pp. 247–248.
[19] P. Avgeriou, J. Grundy, J. G. Hall, P. Lago, and I. Mistrík, Eds., Relating Software Requirements and Architectures. Springer, 2011.
[20] G. Spanoudakis and A. Zisman, "Software traceability: a roadmap," Handbook of Software Engineering and Knowledge Engineering, vol. 3, pp. 395–428, 2005.
[21] A. Egyed and P. Grünbacher, "Automating requirements traceability: Beyond the record & replay paradigm," in ASE. IEEE Computer Society, 2002, pp. 163–171.
[22] P. Kruchten, "Where did all this good architectural knowledge go?" in ECSA, ser. Lecture Notes in Computer Science, M. A. Babar and I. Gorton, Eds., vol. 6285. Springer, 2010, pp. 5–6.
[23] D. Garlan and M. Shaw, "Software architecture: reflections on an evolving discipline," in SIGSOFT FSE, T. Gyimóthy and A. Zeller, Eds. ACM, 2011, p. 2.
[24] R. Koschke, "Software visualization for reverse engineering," in Software Visualization, ser. Lecture Notes in Computer Science, S. Diehl, Ed., vol. 2269. Springer, 2001, pp. 138–150.
[25] J. I. Maletic, A. Marcus, and L. Feng, "Source viewer 3d (sv3d) - a framework for software visualization," in ICSE, L. A. Clarke, L. Dillon, and W. F. Tichy, Eds. IEEE Computer Society, 2003, pp. 812–813.
[26] C. S. Collberg, S. G. Kobourov, J. Nagra, J. Pitts, and K. Wampler, "A system for graph-based visualization of the evolution of software," in SOFTVIS, S. Diehl, J. T. Stasko, and S. N. Spencer, Eds. ACM, 2003, pp. 77–86, 212–213.
[27] S. P. Reiss, "Dynamic detection and visualization of software phases," ACM SIGSOFT Software Engineering Notes, vol. 30, no. 4, pp. 1–6, 2005.
[28] M. D'Ambros, M. Lanza, and M. Lungu, "The evolution radar: visualizing integrated logical coupling information," in MSR, S. Diehl, H. Gall, and A. E. Hassan, Eds. ACM, 2006, pp. 26–32.
[29] M. Ogawa and K.-L. Ma, "code_swarm: A design study in organic software visualization," IEEE Trans. Vis. Comput. Graph., vol. 15, no. 6, pp. 1097–1104, 2009.
[30] K.-L. Ma, "Stargate: A unified, interactive visualization of software projects," in PacificVis. IEEE, 2008, pp. 191–198.
[31] F. Abreu and R. Carapuça, "Object-oriented software engineering: Measuring and controlling the development process," in Proceedings of the 4th International Conference on Software Quality, 1994.
[32] J. M. Roche, "Software metrics and measurement principles," SIGSOFT Softw. Eng. Notes, vol. 19, pp. 77–85, January 1994. [Online]. Available: http://doi.acm.org/10.1145/181610.181625
[33] S. R. Chidamber and C. F. Kemerer, "A metrics suite for object oriented design," IEEE Transactions on Software Engineering, vol. 20, pp. 476–493, June 1994. [Online]. Available: http://portal.acm.org/citation.cfm?id=630808.631131
