A Dynamic Software Product Line Architecture for Prepackaged Expert Analytics: Enabling Efficient Capture, Reuse, and Adaptation of Operational Knowledge

Karen Smiley, Shakeel Mahate
Industrial Software Systems, ABB Corporate Research, Raleigh, NC, USA

Paul Wood
Software Architecture, Ventyx, an ABB Company, Sacramento, CA, USA

Abstract—Advanced asset health management solutions blend business intelligence with analytics that incorporate expert operational knowledge of industrial equipment and systems. Key challenges in developing these solutions include: streamlining the capture and prepackaging of operational experts’ knowledge as analytic modules; efficiently evolving the modules as knowledge grows; adapting the analytics in the field for diverse operating circumstances and industries; and executing the analytics with high performance in industrial and enterprise software systems. A Quality Attribute Workshop (QAW) was used to elicit and analyze variability at development time and runtime for creating, integrating, evolving, and tailoring reusable analytic modules for ABB/Ventyx asset health solution offerings. Dynamic software product line (DSPL) architecture approaches were then applied in designing an analytics plugin architecture for asset health solutions. This paper describes our approach and experiences in designing the analytics product line architecture and its SME Workbench toolset, and how we achieved significant improvements in the speed and flexibility of deploying industrial analytics.

Keywords—dynamic software product line; extensibility; reusability; interoperability; knowledge; performance; industrial software systems; asset health; industrial analytics

I. INTRODUCTION

ABB (www.abb.com) is a leading global engineering company in power and automation technologies. To guide better-informed decisions and eliminate wasteful expenses for customers, ABB has been developing enhanced asset health management solutions¹ which blend business intelligence and analytics with ABB experts’ deep knowledge of industrial assets and industries. An initial customer partnership with a major US power utility² confirmed the business value of the offering and the importance of effectively embedding expert knowledge modules and decision support guidance in enterprise software systems. ABB leaders realized that the pervasive impact of asset decisions, combined with the breadth and depth of ABB’s product families (e.g. transformers and circuit breakers) and their adaptability to multiple industries (e.g. power utilities, mining, data centers), created broad opportunities for reusable analytics that leverage the deep operational knowledge of ABB experts. In this vision, offerings could integrate reusable expert analytics across many products (e.g. on-premise enterprise software systems) and services (e.g. consulting, or access to ABB-hosted software systems) for any asset and any industry.

This asset health vision introduced new architectural requirements for variability outside the development lifecycles for the analytics and for the solutions (products or services). ABB architecture researchers partnered with product and service group architects to explore application of software product line (SPL) architecture approaches [1]. The architecture team realized that these variability needs might be addressable with dynamic software product line (DSPL) architecture [2]. In the resulting DSPL for analytic solutions, reusable asset analytic module plugins form the primary asset library. The geographic and organizational dispersion of collaborations for building, evolving, deploying, and tuning analytic modules drove the conception of an innovative “Subject Matter Expert (SME) Workbench” [3] toolset for building the analytic modules, which became its own DSPL of applications with reusable extension plugins. Fig. 1 illustrates the two product lines (solutions and workbenches).

¹ “Asset health”, www.ventyx.com/en/ga/demand-asset-health.aspx.
² “Ventyx launches groundbreaking asset health solution”, www.ventyx.com/en/company/news/press/20130612-assethealth.

Fig. 1. Asset Health Solution DSPL with SME Workbench DSPL Toolset

The remainder of this paper is organized as follows. Section II provides basic background on dynamic software product lines and the functional domain of asset health management. Section III summarizes elicitation of roles, as-is process, functionalities, desired variability, and architectural drivers for asset health analytics. Section IV describes the analytics solution DSPL. Section V summarizes the high-level architecture decisions and the team’s service-oriented architecture strategy. Section VI describes the DSPL toolset, its main components, how components support development and/or runtime variabilities, and lower-level architecture decisions. Section VII highlights lessons learned in evolving the DSPL architecture and toolset. Section VIII summarizes results and conclusions on how applying DSPL tactics improved variability support for asset health solutions.

II. BACKGROUND

A. Industrial Asset Management Context and Terminology

An ‘asset’ is a physical, capital-intensive resource (e.g., power transformer) which is critical to the business process of an organization (e.g., power transmission utility). In asset-intensive industries, the net business impacts of assets evolve over asset lifecycles due to fluctuating revenues, costs, and depreciable values. A spectrum of maintenance strategies, including “run to failure”, schedule-based preventive maintenance, condition-based maintenance, and reliability-centered maintenance [4], can be selected for individual assets or groups of assets [5]. Industrial enterprises may measure how well these strategies leverage their assets by key performance indicators (KPIs). KPIs may quantify production throughput, service reliability, profit, time-to-market, safety incident rate, or budget targets. Other KPIs may be industry-specific, e.g. System Average Interruption Duration Index (SAIDI) for power utilities, or Operating Equipment Effectiveness (OEE) for process industries. Asset condition and performance, and asset-related decisions, can impact all of these objectives.

Many asset-intensive businesses face stiff challenges from a ‘perfect storm’ of aging asset infrastructures, aging workforces of asset experts, regulatory pressures, and ever-tightening budget constraints. As an example, in a power transmission utility, unplanned outages due to power transformer failures can cost up to $15 million [6]. Due to the impact of asset maintenance decisions on profitability, compliance, and customer satisfaction, asset health management is emerging as a critical business process [7]. Analytic models for asset and fleet health assessments and decision support optimization analyses can bring valuable situational awareness about asset condition and performance into enterprise asset management (EAM) systems. The Institute for Asset Management (IAM) [8] and various standards, e.g. PAS-55 [9] and the emerging ISO-55000 family [10], offer further information on industrial asset management.

B. [Dynamic] Software Product Lines

Product line engineering improves reuse and efficiency in building a family of related products, using a shared set of assets designed under a common reference architecture. In a software product line (SPL), multiple software systems are built by addressing variability at development time across two lifecycles: domain engineering and application engineering. Domain analysis techniques [11] can help to identify commonalities and variabilities. Variability realization techniques may include inheritance, extension points, parameterization, configuration, and generation [12]. With dynamic software product lines (DSPLs), while some variabilities might be addressed at development time, the DSPL architecture is explicitly designed to support specified variabilities at runtime. For instance, extensions or adaptations which go beyond mere parameterization may support [re]bindings during runtime. Variabilities are modeled across the members of an SPL or to define the adaptation scope of a DSPL. References [2] and [13] are recommended for a deeper overview of the characteristics and limitations of DSPLs.
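The difference between development-time and runtime binding of variants can be illustrated with a minimal sketch. The code below is purely illustrative; the class and method names are our own inventions, not taken from any particular DSPL framework or from the ABB/Ventyx products.

```python
# Minimal sketch of DSPL-style runtime [re]binding (illustrative only;
# all class and method names here are hypothetical).

class AnalyticModule:
    """A pluggable variant that can be bound or rebound at runtime."""
    def __init__(self, name, version, logic):
        self.name, self.version, self.logic = name, version, logic

    def run(self, data):
        return self.logic(data)

class VariationPoint:
    """Holds the currently bound variant; rebinding needs no redeploy."""
    def __init__(self):
        self._bound = None

    def bind(self, module):
        self._bound = module          # dynamic [re]binding at runtime

    def run(self, data):
        if self._bound is None:
            raise RuntimeError("no analytic module bound")
        return self._bound.run(data)

# A deployed solution starts with version 1 of a health model ...
health = VariationPoint()
health.bind(AnalyticModule("transformer_health", "1.0",
                           lambda d: min(d) / 100))
score_v1 = health.run([80, 95, 90])

# ... and is later rebound to an updated model without redeployment.
health.bind(AnalyticModule("transformer_health", "2.0",
                           lambda d: sum(d) / len(d) / 100))
score_v2 = health.run([80, 95, 90])
```

In an SPL the `bind` call would happen during application engineering; in a DSPL the same variation point can be rebound while the solution is running.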

III. ASSET HEALTH SOLUTION PRODUCT LINE SCOPING

A. Architectural Implications of ABB’s Asset Health Vision

ABB has many SMEs with deep operational knowledge of products, systems, and industries. ABB product and service groups were eager to integrate SME knowledge as executable analytic modules. This requires flexibility for deployers to add new analytic modules to solutions at development time and runtime; for SMEs to evolve modules as expert knowledge grows; and for tailoring modules to fit customer operations. In-house software architecture researchers were engaged in early 2012 to collaborate with an experienced business unit architect to address these challenges. Early discussions focused on the potential for many solution variants, depending on which ABB and non-ABB enterprise application systems could comprise the asset health solution for a given customer. However, stakeholders identified two critical issues for the team to address: handling the potential solution variations created by integrating combinations of analytic model types for diverse assets and industries, and dramatically accelerating the speed with which new reusable analytic modules could be created, validated, and deployed.

B. Architecture Approach

Domain analysis activities similar to [14] were pursued concurrently and iteratively, with primary focus on the analytic models as potential reusable assets of the software product line. Activities included understanding integratability of existing product and service solution architectures; limited feature mining; eliciting architecture requirements (e.g. scalability and performance) for major product and service variants; exploring reuse potentials and variabilities in the ways asset analytics could be developed, deployed, and adapted by groups of experts; and examining how experts collaborated in the as-is development and runtime processes for embedding asset analytic models in systems and solutions.

C. Experts’ Roles in Delivering Analytic Models

Many EAM systems and business intelligence (BI) solutions provide mechanisms for preconfigured KPIs and runtime-configurable calculations. Variabilities already available at solution runtime include configurable ‘equation editors’ to execute parameterized formulas on individual rows of data; ‘rules engines’ which enable users to define simple rules that can be automatically applied; and ‘weights and factors’ user interfaces which allow users to choose from a predefined set of available factors and adjust percentage weights for them. Although convenient, these mechanisms

were inadequate for implementing the more complex logic of ABB experts’ asset health analytics. The architecture team’s analysis of how the first complex asset analytic models were built, integrated, adapted, and deployed by groups of experts identified the six primary roles in Table I. These human processes were tightly coupled, geographically and organizationally distributed, costly in effort and calendar time, and not scalable for the experts. To enable efficient development, deployment, and tuning of asset analytics, the process needed to collapse from months down to days or even minutes; accommodate the temporal, geographic, and organizational dispersion of these experts; and allow them to focus on what they knew best.

TABLE I. EXPERTS’ ROLES IN ASSET ANALYTICS-ENHANCED SOLUTIONS

Expert Role | Competence | Key Deliverables
Industry SME | Systems in which assets operate | Logic for importance (criticality) analytics and decision support analytics
Asset SME | In-depth knowledge about a specific family of assets (equipment) | Logic for condition analytics and action plan analytics
Analytics Software Engineer | Software engineering and performance | Prepackaged asset analytic modules reflecting SME logic
Application Data Engineer | DBMS, BIM, NoSQL, enterprise warehouse, data integration | Mappings for data sources and sinks for all analytic model types
Application Software Engineer | Application integration | Glue code for integrating analytics into an application
Installer / Integrator / OEM / Partner | Customer operations and third party systems | Deployed solutions and tuned asset analytics that fit customer needs

The team proposed three ‘refactorings’ of the as-is asset analytics processes:
• decoupling the development lifecycles of analytic modules from the development lifecycles of the systems and solutions using the analytics;
• moving variability in creating and deploying analytic models, and binding them to units and data, from system development time to solution runtime; and
• enhancing runtime flexibility for analytic modules to support controlled in-the-field tuning with validation.

With these shifts, a deployed solution could dynamically bind health-aware systems to new or updated analytic models. It was acknowledged that one side effect of this shift might be increased difficulty in tuning execution performance.

D. Discovering DSPL Architecture Requirements

Requirements were elicited from 68 asset health sources and artifacts, and in a stakeholder session based on the Quality Attribute Workshop (QAW) method [15], using a Quality Attribute Scenario template extended with two new fields: major product and service variants, and relevant industries. In the session, 20 SMEs (covering multiple industries and asset types, and all six roles in Table I) generated 57 scenarios. Follow-up discussions engaged 35 other stakeholders worldwide involved with asset health related products, services, or solutions. Important variations were identified in:



• diversity of asset types (e.g. transformers, cables, pumps) and of diagnostic & monitoring capabilities;
• industry-specific system/subsystem analytics and optimizations (e.g. power, mining, water);
• flexible model deployment options (e.g. customer-hosted, ABB-hosted, offline) which could scale across ABB solutions, products, and services; and
• needs for tailoring asset algorithms to address customer/region-specific considerations (e.g. climate).

Direct input from end customers on asset health needs also informed the team’s analyses. The team identified additional variations impacting the architecture, e.g. product line variants within asset families (e.g. dry vs. oil-filled transformers), and variances in families of monitoring and diagnostic devices; fundamental differences in classes of assets (discrete or linear; fixed or mobile; operational or supportive); flexible model development options (e.g. offline work by SMEs, or collaborative work among asset experts); and future expansion of the SME group for tuning or contributing asset analytics. All of these solution variabilities were classified by the systems or analytics lifecycles in which they could be bound (e.g., a new version of a transformer health model could be deployed in a solution and bound to a subset of transformers during system application engineering, or re-bound during the runtime of the solution).

E. Architectural Drivers for the Asset Health Solution DSPL

The elicited scenarios and results of the variability analysis covered a broad set of quality attributes. During the QAW, the 20 stakeholder participants ranked the preliminary architectural drivers. The five key drivers which emerged from this ranking are summarized in Table II.

TABLE II. KEY DRIVERS FOR ASSET HEALTH SOLUTION DSPL

Quality Attribute | Development Time | Runtime
Maintainability | Extensibility: ease of integrating asset and industry analytics | Reusability: true reuse of analytic models, in more DSPL solution offerings
Flexibility | Context of use: adaptability of analytics integration in diverse solutions | Context of use: ability to tap data from diverse data systems as input for the analytics
Scalability (of Responsiveness) | Scalable execution performance: calculations for fleet updates and what-if analyses fast enough to satisfy SMEs and end users, balanced vs. computing footprint and cost for small to large development and runtime configurations (spans both lifecycles) | (see Development Time)
Interoperability | Compatibility: ability to access asset data and schemas in diverse standards-compliant data storage systems | Replaceability: ability to substitute one asset data storage system for another without requiring changes to the analytic models
Usability (Operability and User Error Protection) | Role-appropriate user interfaces (UIs) for experts and application engineers | Role-appropriate UIs for deployers (tuning and validating asset analytic models)

Other quality attributes, although not ranked highly enough by stakeholders in the QAW to be key architecture drivers, also required attention while defining the DSPL architecture. These qualities are summarized in Table III with key lifecycle impacts and some associated tradeoffs.

TABLE III. NON-KEY QUALITIES FOR ASSET HEALTH SOLUTION DSPL

Quality Attribute | Development Time | Runtime
Confidence | Accuracy, Verifiability | Integrity, Traceability
Customizability | [impacts Complexity] | [impacts Integrity]
Deployability | Flexibility [impacts Complexity] | —
Security | Confidentiality | [impacts Performance, Usability]
Flexibility | — | Operability [impacts Complexity]

The team carefully assessed priorities and tradeoffs among the qualities in designing the DSPL architecture. For instance, customizability needs were balanced with the need to constrain ‘change scope’ [16] to preserve integrity, and increased complexity was accepted where it improved confidence and deployability.

IV. DSPL FOR ASSET HEALTH ANALYTIC SOLUTIONS

Dynamic software product lines address variability at development time and at runtime, across the domain engineering lifecycle and the application engineering lifecycle. This may be achieved via a reference architecture, a shared asset library, and a supporting toolset. Fig. 2 shows the initial set of reusable elements in the shared asset library of the solution DSPL for asset health.

Fig. 2. Reusable Elements for Asset Health Solutions Product Line
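Entries in such a shared asset library need enough metadata to be selected, reused, and evolved independently of any one solution. The sketch below is a hypothetical illustration of how library entries might be described; the field names and sample values are our own assumptions, not the product’s actual schema.

```python
# Hypothetical sketch of entries in the shared asset library; field names
# and sample values are illustrative assumptions, not the product schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class AnalyticModuleDescriptor:
    name: str              # e.g. "TransformerHealth"
    model_type: str        # one of the business-service model types
    asset_families: tuple  # asset families the module applies to
    industries: tuple      # industries the logic targets
    version: str           # evolves on its own lifecycle, decoupled
                           # from the solutions that consume it

LIBRARY = [
    AnalyticModuleDescriptor(
        "TransformerHealth", "Asset Performance",
        ("power transformer",), ("power utility",), "2.1"),
    AnalyticModuleDescriptor(
        "BreakerCriticality", "Asset Criticality",
        ("circuit breaker",), ("power utility", "mining"), "1.0"),
]

# A deployer can filter the library for modules fitting a given solution.
fits = [m.name for m in LIBRARY if "mining" in m.industries]
```

Such descriptors let the same module appear in many solution variants, which is exactly the reuse the asset library is meant to enable.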

During development of the toolset supporting the solution DSPL and/or at toolset runtime, either of which may encompass domain engineering or application engineering, solution variabilities may be further enabled. Dynamic aspects of an industrial analytics solution SPL are identified by considering how analytic module subsets, logic, parameters, and/or behavior can be modified during solution or toolset application engineering or runtime. To the extent that the toolset supports variants of the tools themselves, it may be considered an SPL in its own right. If runtime variabilities in the tools are supported, it can be deemed a DSPL as well. The toolset DSPL could also enable additional runtime variabilities in the solution DSPL.

Runtime variability in solution instances of the industrial analytics solution DSPL means adding, removing, or reconfiguring analytic modules during application engineering or during solution runtime, and enabling the analytics to be rebound dynamically to new data sources. To provide these variabilities, the team created a reference architecture, a shared library of analytics, and a toolset of applications. The SME Workbench toolset has its own development time and runtime, with its own domain engineering and application engineering lifecycles. The SME Workbench provides extensible applications for establishing and managing the asset library of reusable analytic modules, in a loosely-coupled solution ecosystem for scalable development and execution of the modules. Interest later arose in offering an SME Workbench application to customers as part of a solution. As Fig. 1 illustrated, SME Workbench applications and extensions became additional reusable elements in the shared asset library of the DSPL.

Due to an internal ABB initiative on architecture documentation, the team’s architecture decisions were documented in Y-template format [17] and organized according to level (executive, conceptual, technology, vendor asset). The next sections describe the key decisions, and how the reference architecture and toolset were designed to support this dual product line concept.

V. SERVICE-ORIENTED ARCHITECTURE STRATEGY

After exploring the landscape of available industry standards and frameworks [18] and investigating the “Open O&M” integration approaches advocated by MIMOSA [19], the team selected a service-oriented strategy for the DSPL. Services provided a flexible way to achieve the loose coupling needed for asynchronous development of analytic models by multiple experts; to enable binding variability in analytic models and data sources at runtime as well as development time; to maximize interoperability with diverse enterprise application systems; to leverage reuse potentials; to better align the solution architecture with the contributing organizations; and to localize the ‘change scope’ of analytic models in asset health solutions. Within the functional domain of asset health management, summarized in Section II, the ‘adaptability scope’ of the DSPL architecture focuses on integrating reusable asset expert and industry expert analytics in asset-health-aware systems and solutions, at solution development time and at solution runtime.

A. Architecture Decisions – Executive Level

For flexibility and interoperability, the team chose a data-oriented, service-oriented, standards-oriented integration style that would build upon ISA-95³ and CIM⁴ where practical. Simple contracts maximize integratability and reusability.
The persistence scheme is agnostic (i.e., it uses a web API for connectivity to cloud or local repositories). Variabilities in toolset applications and their plugins offer SMEs multiple design paradigms for building diverse analytic models, and flexibility to validate with their own data. To support deployers while protecting model integrity, flexible but controlled tuning of models is supported.

B. Asset Health Business Processes and Asset Decisions

Asset analytic models can be leveraged to support users across the enterprise. Distinct business processes for user roles drive distinct decision support analytics with different solution objectives, each of which will leverage expert asset and industry analytics executed over different timeframes. Table IV lists some example roles and associated business processes. The solution architecture needs to accommodate these roles, with sufficient execution speed to deliver useful and timely recommendations.
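The ‘simple contracts’ idea can be sketched as a small, data-oriented request/response exchange. The payload shape, field names, and the toy health model below are assumptions made for illustration; they are not the actual ABB/Ventyx service API.

```python
# Illustrative sketch of a 'simple contract' for invoking an analytic
# model as a service; the payload shape is an assumption for this
# example, not the product's actual API.
import json

def build_request(model_id, asset_ids, arguments):
    """Client side: a deliberately small, data-oriented request."""
    return json.dumps({
        "model": model_id,          # which published analytic module to run
        "assets": list(asset_ids),  # which assets the model is bound to
        "arguments": arguments,     # input data mapped by the data engineer
    })

def handle_request(raw, models):
    """Server side: look up the model and run it per asset."""
    req = json.loads(raw)
    model = models[req["model"]]
    return {aid: model(req["arguments"][aid]) for aid in req["assets"]}

# A trivial 'health' model keyed by a dissolved-gas reading (illustrative).
models = {"transformer_health/2.0": lambda args: 1.0 - args["gas_ppm"] / 1000}
raw = build_request("transformer_health/2.0", ["T-101"],
                    {"T-101": {"gas_ppm": 250}})
results = handle_request(raw, models)
```

Because the contract carries only data (model id, asset ids, arguments), a new model version or a new data source can be substituted without changing the calling system, which is the loose coupling the strategy aims for.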

³ ANSI ISA-95 standard for Enterprise-Control Systems Integration.
⁴ DMTF standards for Common Information Model (CIM).

TABLE IV. ASSET HEALTH USER ROLES AND BUSINESS PROCESSES

User Role | Key Process | Example Role Objective
Maintenance planners | Asset maintenance planning | Make the best possible use of available O&M funds and CapEx funds in the short, mid, or long term
Financial analysts | Long-term budget planning | Proactively assess what funding or resource levels are needed to achieve various business objectives, for planning or negotiating future budgets
Reliability engineers | Component, subsystem, or fleet reliability evaluations | Assess patterns and outliers in operational reliability (e.g. based on manufacturer, servicer, operating environment, or other common facets)

C. Business Services and Types of Analytic Models

Examination of the asset health management business processes in Table IV revealed the asset analyses needed for one or more of the processes. For instance, asset performance assessments are reused for maintenance planning, long-term budget planning, and various fleet reliability evaluations. In summary, the analytic model types in Table V provide the key ‘business services’ for asset health business processes.

TABLE V. ASSET HEALTH BUSINESS SERVICES (ANALYTIC MODEL TYPES)

Model Type | Description of Business Service / Analysis
Asset Performance | Characterize the health or condition of an asset, quantify likelihood of failure or degraded performance, and identify likely causes
Asset Action Plans | Recommend one or more sets of alternative actions to improve asset condition and/or avoid or mitigate in-service failures
Asset Criticality | Quantify asset ‘importance’ by characterizing the consequences of changes in asset condition on the performance of the industrial enterprise
Asset Decision Support | Combine knowledge and data with relevant optimization techniques to deliver effective multi-timeframe, multi-constraint, multi-objective guidance on asset-related decisions
Key Performance Indicator | Quantify a business objective affected by assets and asset-related decisions

As an example, an asset performance model for a power transformer may characterize its risk of failure and indicate a probability of overheating. To address potential overheating, an action plan model may recommend degassing it or replacing it. Its criticality may depend on whether it is in a power grid supporting a rural hospital with no other source of power, or used to power cooling equipment at a data center facility with redundant power feeds. KPIs may quantify cost/benefit implications of alternative action plans on important business objectives, such as reliability (uptime) and frequency of safety incidents. An optimization algorithm may consider total risk of failure (combining condition, importance, and KPI impacts) for transformers, breakers, cables, and other critical assets, to determine the optimal overall capital and operation plans for a power transmission regional operating company for the next two years. A fleet reliability analysis may examine results from current performance models, and in turn fleet analysis results may guide model tuning or automatically adjust model parameters.
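The likelihood-times-consequence reasoning in the transformer example can be sketched numerically. The weighting scheme, function names, and sample figures below are assumptions for illustration only, not ABB’s actual risk algorithm.

```python
# Hedged sketch of combining model outputs into a risk figure, following
# the narrative's likelihood x consequence idea; the weighting scheme and
# sample values are illustrative assumptions, not ABB's actual algorithm.

def failure_risk(prob_failure, criticality, kpi_impact):
    """Risk = likelihood x consequence, scaled by KPI exposure (0..1 each)."""
    return prob_failure * criticality * kpi_impact

fleet = {
    # asset: (probability of failure, criticality, KPI impact)
    "rural_hospital_xfmr": (0.30, 0.95, 0.90),  # sole feed: high consequence
    "datacenter_xfmr":     (0.30, 0.40, 0.50),  # redundant feeds available
}

# Rank assets by total risk; identical condition scores rank differently
# because criticality and KPI impact differ.
ranked = sorted(fleet, key=lambda a: failure_risk(*fleet[a]), reverse=True)
```

The two transformers have the same condition score (0.30), yet the rural hospital unit ranks higher because its criticality and KPI impact dominate, mirroring the paper’s example.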

D. Architecture Decisions – Conceptual Level

Four key conceptual decisions were made by the team early in the DSPL architecture project to balance performance, scalability, usability, maintainability, and flexibility.

1) Development Lifecycle Tools: The team selected the concept of a flexible, code-free workbench with drag-and-drop editors and extensible ‘designers’, similar to a visual software development tool, for SMEs to create analytic models.

2) Analytic Models as Services: Using good service-oriented abstractions, SME algorithms could be mapped as endpoints of services. Analytic models would be packaged as potentially scalable REST-based web services. Aware of the potential performance impacts of handling models as services, the team included a packaging layer in the tools that allows easy addition of more efficient executable formats, if needed. Existing hand-coded analytic models could be ‘packaged’ to migrate them to the new architecture.

3) Runtime Tailoring: The team envisioned a shared Core element to handle scalable, efficient execution of analytic models, e.g. distributing the services and scaling them out, with built-in performance monitoring. A runtime component would allow tailoring of models by deployers and validation of tailored models against the originals (i.e., invoking both via the execution component and comparing results). In future, models might be tailored automatically.

4) Extensibility/Flexibility: To support the identified and anticipated needs for extensibility, the team focused strongly on layering and configurability throughout the applications.
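The validate-after-tailoring idea in decision 3 (invoking both the original and the tailored model and comparing results) can be sketched as follows. The function names, the tolerance policy, and the toy thermal-stress models are illustrative assumptions, not the product’s implementation.

```python
# Sketch of decision 3's validate-after-tailoring idea: run the tuned
# model and the original side by side and flag large deviations.
# Names, models, and the tolerance policy are illustrative assumptions.

def validate_tailoring(original, tailored, validation_data, tolerance):
    """Invoke both models on the same inputs; collect large deviations."""
    deviations = []
    for sample in validation_data:
        base, tuned = original(sample), tailored(sample)
        if abs(tuned - base) > tolerance:
            deviations.append((sample, base, tuned))
    return deviations  # empty list => tuned model passes validation

# Baseline thermal-stress model vs. a version tuned for a cooler climate.
original = lambda temp_c: temp_c / 120
tailored = lambda temp_c: (temp_c - 5) / 120

ok = validate_tailoring(original, tailored, [40, 60, 80], tolerance=0.05)
bad = validate_tailoring(original, tailored, [40, 60, 80], tolerance=0.01)
```

A loose tolerance accepts the climate adjustment (`ok` is empty), while a strict tolerance rejects every sample (`bad` holds all three), showing how controlled in-the-field tuning can be bounded to protect model integrity.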
Other conceptual decisions made by the team included use of a Subsystem shell (read-only access for fetching analytic model input arguments); a Transformation and Integration shell (metadata interfaces between analytic models and data sources); a User access management shell (authentication via Active Directory; more robust security is typically provided by industrial and enterprise application suites); a Processing shell (web services for analytic models, with extensibility to support other model integration styles, and a simple execution interface for invoking models); and a UI Client type (thick Windows 7+ clients, deployable with ClickOnce). The high-level architecture reflecting these decisions maps closely to the “generic architecture for ontology-based MDSD tool environments” in [20], but with additional layering for extensions to support the targeted variations.

E. Role-based Services Provided by Toolset Applications

Breaking the dependency between SME logic modules and build-time software product or service integration was essential for achieving the desired reuse of the analytic models across multiple application suites. Software service applications to support the business service needs (the types of analytic models in Table V) were derived based on ‘separation of concerns’ for the user roles in Table I, and for the runtime and development lifecycles of the analytic models and solutions [16]. Decoupling these activities provides flexibility that lets SMEs evolve their analytic logic independently of the complexities of data source mappings, parameter tunings, or application- or customer-specific considerations. The toolset applications are summarized in Table VI.

TABLE VI. SME WORKBENCH APPLICATION SERVICES

SME Workbench Application | Software Services Supported at Solution (D) Development or (R) Runtime
Model Development (Logic) | (D) Any SME can create, evolve, validate, and package a new analytic module
Model Validation (Test Bench) | (D) SMEs can visualize results and validate models against their own data
Model Management / Deployment | (R) Deployers can tune model parameters in the field, and validate their tuned parameter sets before deploying; (R) Deployers can configure data sources and assign published analytic models to run on one or more assets or asset subsets
Model Execution (Runtime) Engine | (D,R) Run SME models for validation; (R) Run models in a deployed solution

F. Toolset Support for Analytic Model Variabilities

The first application pursued was the Model Development UI, to help SMEs convert asset and industry experts’ knowledge into executable analytic modules. The software services needed for the Model Development UI drove the initial design of the SME Workbench. Initial application design highlighted the need for several flexible core elements, such as services for a taxonomy of asset model types and asset types; data source integration; and model execution runtime platform integration. To enable the variabilities supported by the SME Workbench, the team designed the toolset with a Core and a framework for dynamically reusing combinations of plugins or extensions from a toolset asset library. Solution developers gain development-time variability by building extensions that dynamically plug into the toolset. The architecture provides essential consistency across the toolset and the analytic modules to support reuse. The toolset enforces the architecture on the modules, while enabling adaptations to different industries, types of equipment, and customer environments – i.e., the solution variabilities identified by stakeholders. Fig. 3 illustrates the solution, analytics, and toolset lifecycles.

[Fig. 3 depicts the intersecting lifecycles as nested boxes: Asset Health Solution Development (Domain Engineering, Application Engineering); Solution Runtime (configuration, integration); Analytic Module Plugin Runtime (Selection, Mapping to Assets, and Tuning); Analytic Module Plugin Development (Domain-specific or Application-specific); SME Workbench Toolset Variant Runtimes (Applications deployed with selected, configured Extensions); SME Workbench Toolset Extension Development (Extensions for Domains and Applications); SME Workbench Toolset Development (Applications, Core, and base Extensions).]
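The Core-plus-extensions pattern just described can be sketched minimally: the Core exposes named extension points, and domain or application teams contribute extensions without modifying the Core. All identifiers in this sketch are hypothetical illustrations, not the SME Workbench’s actual interfaces.

```python
# Illustrative sketch of a Core with named extension points; a toolset
# variant is the Core plus its selected extensions. All identifiers are
# hypothetical, not the SME Workbench's actual interfaces.

class Core:
    def __init__(self):
        self._extensions = {}  # extension point -> list of plugins

    def register(self, point, plugin):
        """Plug an extension in without modifying the Core."""
        self._extensions.setdefault(point, []).append(plugin)

    def extensions(self, point):
        return tuple(self._extensions.get(point, ()))

core = Core()
# A domain team ships a model-template extension; an application team
# ships a data-source extension for the Argument Explorer.
core.register("model_types", "TransformerHealthTemplate")
core.register("argument_explorer.sources", "HistorianDataSource")

# A toolset variant is simply the Core plus its selected extensions.
variant = {p: core.extensions(p)
           for p in ("model_types", "argument_explorer.sources")}
```

Adding or removing a registration call (or, in a real system, dropping a plugin into a scanned directory) changes the variant without touching Core code, which is the development-time variability the text describes.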

Fig. 3. Intersecting Lifecycles of Solutions, Analytic Modules, Toolset

VI. DSPL ARCHITECTURE TOOLSET

The analytic modules created with the toolset form the primary “asset library” for the analytic solution DSPL. The SME Workbench toolset supports efficient delivery of diverse analytic modules at solution development time and at runtime. Table VII maps SME Workbench support, during toolset development time and runtime, for the major variabilities of asset health solution analytics across the domain engineering and application engineering lifecycles and at runtime. The key shared components listed in Table VIII were identified and developed to support consistency across the DSPL development (D) and runtime (R), and to provide the framework for variability management. Table IX and Table X summarize the main SME Workbench toolset components which support DSPL variability at analytics solution development time and runtime, respectively.

TABLE VII. SOLUTION VARIABILITIES SUPPORTED BY TOOLSET

DSPL toolset development variabilities supported, by solution aspect:
- Domain engineering: build domain-relevant extensions – Model Types, Model Templates, Visualizations.
- Application engineering: build application-relevant or customer-relevant extensions – Argument Explorer data sources, Output Packaging execution options.
- Runtime: build new plugins; drop in or remove plugins; use the toolset to evolve models; create, evolve, and deploy new analytic modules using the toolset.

Toolset runtime variabilities supported:
- Select and/or configure sets of extensions to deploy with SME Workbench applications, and optionally:
  - deploy configuration(s) of toolset application(s) with some extensions as new solution features, and/or
  - pre-integrate analytic models with solution variants.

TABLE VIII. COMMON COMPONENTS FOR ASSET HEALTH DSPL
(component – where/when used – description)

- Runtime Engine – (D,R) invoked by the Model Execution Engine – meta-execution of models, with built-in performance measurements.
- Runtime Engine API – (R) invoked by the Runtime Engine; retrieves data, runs the model, saves results – service interface using defined input and output interfaces.
- Designer / Model Type and Model Template Extensions – (D) Model Development; (R) Model Management/Deployment – defines toolbox functions and default input and output arguments.
- Argument Explorer Extensions – (R) Model Management/Deployment – maps available data source arguments to model arguments.
- Collaboration Support and Cloud Enablement – (D) Model Development; (D) Model Validation and Tuning – abstractions for data source access and shared model repositories.
- Customizable Parameter Support – (D) Model Development; (R) Model Validation and Tuning – mechanism for SMEs to specify parameter tuning options.
- Visualizations / Dashboards – (D) Model Validation and Tuning; (R) Model Management/Deployment – insights into model results.
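As an illustration of the Runtime Engine rows in Table VIII (meta-execution of a model through a defined input/output interface, with built-in performance measurement), the sketch below shows the idea in Python. The real engine is a .NET service and its interface is not given in the paper, so every name and signature here is a hypothetical stand-in:

```python
import time
from typing import Any, Callable, Dict


class RuntimeEngine:
    """Sketch: runs any model that honors a defined input/output
    interface, and records execution time (the 'built-in performance
    measurements' of Table VIII). All names are illustrative."""

    def __init__(self,
                 fetch_inputs: Callable[[str], Dict[str, Any]],
                 save_results: Callable[[str, Dict[str, Any]], None]):
        self.fetch_inputs = fetch_inputs    # retrieves data for a model run
        self.save_results = save_results    # persists model outputs
        self.timings: Dict[str, float] = {}  # model id -> last run time (s)

    def run(self, model_id: str,
            model: Callable[[Dict[str, Any]], Dict[str, Any]]) -> Dict[str, Any]:
        inputs = self.fetch_inputs(model_id)
        start = time.perf_counter()
        outputs = model(inputs)             # model obeys the defined interface
        self.timings[model_id] = time.perf_counter() - start
        self.save_results(model_id, outputs)
        return outputs


# Demo: a trivial 'asset performance' model obeying the interface.
results: Dict[str, Dict[str, Any]] = {}
engine = RuntimeEngine(
    fetch_inputs=lambda mid: {"load_pct": 85, "temp_c": 71},
    save_results=lambda mid, out: results.update({mid: out}),
)
out = engine.run("apm.transformer.v1",
                 lambda x: {"health_index": 100 - x["temp_c"] / 2})
```

Because the engine only depends on the I/O contract, any packaged model (service, DLL, or here a plain callable) can be swapped in without changing the engine.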

TABLE IX. VARIABILITIES SUPPORTED IN SOLUTION DEVELOPMENT
(supporting components and the variabilities they support during analytics solution development)

- New Asset Model Type Extensions and Template Extensions: support consistency and usability among SMEs for predefined analytic models and asset types [SMEs can still create ad hoc models and add arguments to any model].
- New Argument Explorer Extensions: data engineers can adapt models to various application suites at build or deployment time.
- New Model Packaging Extensions: SMEs or solution developers can define new ways of integrating their models.
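The Argument Explorer row in Table IX – letting a data engineer re-target a model's inputs to a different application suite's data source – amounts to binding model arguments to whatever arguments a data source extension advertises. A hypothetical Python sketch (all names and the data-source shape are invented for illustration):

```python
# Arguments advertised by one (hypothetical) historian data source extension,
# mapping tag name -> engineering unit.
historian_source = {
    "TX_TopOilTemp": "degC",
    "TX_LoadPercent": "%",
}


def bind_arguments(model_inputs, source_args, mapping):
    """Bind each model input argument to a source argument; fail fast if a
    model input has no counterpart in the selected data source."""
    bound = {}
    for name in model_inputs:
        source_name = mapping.get(name)
        if source_name not in source_args:
            raise KeyError(f"no data source argument for model input '{name}'")
        bound[name] = source_name
    return bound


bindings = bind_arguments(
    model_inputs=["temp_c", "load_pct"],
    source_args=historian_source,
    mapping={"temp_c": "TX_TopOilTemp", "load_pct": "TX_LoadPercent"},
)
```

Keeping the mapping external to the model is what lets the same packaged model run against different enterprise data systems.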

TABLE X. VARIABILITIES SUPPORTED AT SOLUTION RUNTIME
(toolset components and the variabilities they support during analytics solution usage)

- Taxonomy and associated Model Type and Template Extensions: SMEs can create new analytic models within defined model types and asset types; deployers can dynamically map operating assets to one or more asset types and models.
- Data Source Selection Interface and Templates: SMEs can build and test a deployable model in the field (without a data engineer).
- Cloud Enablement: collaborative development by SMEs; offline work by individual SMEs.
- Output Packaging Extensions: SMEs can choose how they want to publish, and optionally protect, their models.
- Execution Interface API: any application can invoke any analytic model (service, DLL, …).
- Customizable Parameters: SMEs can constrain the usage-time variability of parameter tuning; deployers can tune models within SME-specified constraints.
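The first Table X row – deployers dynamically mapping operating assets to asset types and models, constrained by the taxonomy – can be sketched as follows. The taxonomy structure, model identifiers, and asset types below are hypothetical; the paper does not specify the actual data model:

```python
# Hypothetical taxonomy: model type -> asset types it may target.
taxonomy = {
    "asset_performance": {"power_transformer", "hv_breaker"},
}

# Hypothetical published-model registry: model id -> (model type, asset type).
published_models = {
    "apm.transformer.v2": ("asset_performance", "power_transformer"),
}

assignments = {}  # asset id -> set of model ids mapped to that asset


def assign(asset_id, asset_type, model_id):
    """Map a published model onto an operating asset, enforcing that the
    model's asset type is valid in the taxonomy and matches the asset."""
    model_type, model_asset_type = published_models[model_id]
    if model_asset_type not in taxonomy[model_type]:
        raise ValueError(f"{model_id}: asset type outside taxonomy")
    if model_asset_type != asset_type:
        raise ValueError(f"{model_id} does not apply to {asset_type}")
    assignments.setdefault(asset_id, set()).add(model_id)


assign("TX-0042", "power_transformer", "apm.transformer.v2")
```

The same check is what makes the asset type list (extensible only at toolset development time, per Section VII.A) an enforced constraint during solution usage.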

A. DSPL Toolset for Analytics Development Time & Runtime

During the solution development and usage lifecycles, SMEs can create, edit, validate, and package executable analytic modules, and solution groups can select, configure, and deploy them. The main toolset components that support SMEs and solution groups are two SME Workbench applications: Model Development and Model Validation.

The Model Development application enables SMEs to quickly build and evolve the logic and arguments of an analytic model, then ‘package’ the model into an executable, without involving software engineers. SMEs can optionally specify whether model parameters are tunable and, if so, whether and how tuning in the field is constrained. The Model Development UI interacts with a repository component for storage and retrieval of models. It supports variabilities via extensions, which are discussed in the following sections.

The Model Development UI (“Test Bench”) can handle default values for input arguments to make rudimentary testing of model logic quick and intuitive for SMEs. However, validating an algorithm against larger baseline data sets, and verifying fidelity of the packaged model, benefits from more systematic bulk execution with visualizations and comparisons. The Model Validation application provides that service to SMEs. It shares some reusable DSPL components (extensions) with the Model Development UI. To support model integrity and confidence, the real executable model can be quickly generated and then used for validation. This requires high speed for both publishing and model execution to provide acceptable usability to SMEs, and drove the performance goals for the Output Packaging and Runtime Engine services.

B. DSPL Toolset for Solution Development Time & Runtime

Easy reconfiguration and customization of the analytic models in the field are needed for adapting asset analytics to operating circumstances and industry characteristics.
These elements support ‘change application policy’ [16], i.e., how and when new models are activated in one or more customer solutions. Two toolset applications manage this variability during the solution development and usage lifecycles: Model Tuning and Model Management/Deployment.
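The SME-constrained field tuning described above (an SME marks a parameter as tunable and bounds it; a deployer's tuned value is validated against that constraint) might be modeled as in this sketch. The `ParameterSpec` structure and all names are assumptions, not the Workbench's actual format:

```python
from dataclasses import dataclass


@dataclass
class ParameterSpec:
    """Hypothetical SME-authored tuning constraint for one model parameter."""
    name: str
    default: float
    tunable: bool = False          # SME decides whether field tuning is allowed
    low: float = float("-inf")     # SME-allowed range for tuned values
    high: float = float("inf")


def tune(spec: ParameterSpec, value: float) -> float:
    """Validate a deployer's tuned value against the SME's constraints."""
    if not spec.tunable:
        raise PermissionError(f"{spec.name} is not tunable in the field")
    if not (spec.low <= value <= spec.high):
        raise ValueError(f"{spec.name}={value} outside SME-allowed range "
                         f"[{spec.low}, {spec.high}]")
    return value


hot_spot_limit = ParameterSpec("hot_spot_limit_c", default=98.0,
                               tunable=True, low=90.0, high=110.0)
tuned = tune(hot_spot_limit, 105.0)  # accepted: within the SME's constraints
```

Validating tuned parameter sets before deployment (as in the services list of Section V) is then just running `tune` over the whole set.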

The Model Tuning UI reuses components from the Model Validation UI, and adds a new component for installers and OEM partners to tune (within SME-allowed ranges) the values of customizable parameters. It was initially envisioned as a separate application which would provide visualization and comparison of multiple sets of models and configurable parameters. Given the value of these capabilities to SMEs for validation purposes, the Model Validation UI and Model Tuning UI are being combined into a single prototype.

The Model Management/Deployment application supports data engineers, installers, and OEM partners in managing versions of models and parameters, and in using the analytic model Taxonomy to map model versions to customer equipment and operating circumstances.

C. Refactoring the DSPL Toolset for Reuse and Variability

The SME Workbench toolset began as a single well-layered application for a Model Development UI, with a Core and components for variable Model Designers, for model types and paradigms; Model Templates, e.g. for types of assets and industries; Output Packaging, for runtime execution platforms; and Argument Explorers, for integrating multiple data sources. To streamline development of other toolset applications, e.g. Model Validation, and to gain runtime variability benefits in deploying application variants, the Model Development prototype was refactored to convert the components to plugins which can be dropped into an Extensions folder at runtime. Fig. 4 illustrates the result.

[Fig. 4 shows the Workbench applications (Model Development; Model Validation and Tuning; Model Management & Deployment; Runtime Engine; other applications) layered on the SME Workbench “Core”, which loads Model Type / Designer, Model Template, Output Packaging, Argument Explorer, and other extensions.]

Fig. 4. Refactored High-Level Architecture of SME Workbench Toolset
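The Core-plus-drop-in-extensions structure of Fig. 4 can be illustrated with a minimal discovery sketch: at launch, the Core scans an Extensions folder and loads any module exposing a registration hook, so a Workbench variant is changed simply by adding or removing files. The real toolset is a .NET/WPF application; the file layout, the `register(core)` contract, and all names below are assumptions:

```python
import importlib.util
import pathlib
import tempfile


class Core:
    """Sketch of the SME Workbench 'Core': holds registered extensions."""

    def __init__(self):
        self.extensions = {}  # extension name -> registered capability

    def discover(self, ext_dir: pathlib.Path) -> None:
        """Auto-discover extensions at launch: load every module in the
        Extensions folder and invoke its register() hook if present."""
        for path in sorted(ext_dir.glob("*.py")):
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
            if hasattr(module, "register"):  # the hypothetical contract
                module.register(self)


# Demo: drop one 'Output Packaging' extension into a folder, then launch.
ext_dir = pathlib.Path(tempfile.mkdtemp())
(ext_dir / "output_packaging_rest.py").write_text(
    "def register(core):\n"
    "    core.extensions['output_packaging_rest'] = 'REST web service'\n"
)
core = Core()
core.discover(ext_dir)
```

Because discovery happens at launch rather than compile time, the same Core binary yields different application variants from different extension sets.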

This toolset framework enables reuse of the Core and any or all extensions in creating variants of any of the Workbench applications, while the Output Packaging extensions and Argument Explorer extensions enable easy adaptation to various runtime environments and data systems in solutions. Extension variants are developed during or following the development lifecycle of the toolset. A Workbench UI variant can be dynamically adapted at runtime simply by adding or removing extensions from its Extensions folder.

To create a toolset instance for an analytics solution, solution developers first identify the types of SME Workbench variabilities their solution needs. Then they reuse or build the extensions they need. The Core components of an application auto-discover and use new extensions at runtime (launch). A potentially infinite variety of analytic models could be applied to assets. With the extensible SME Workbench and its applications, any model type or instance can be created and integrated during solution application engineering or usage, i.e. independent of the development lifecycle for the solution.

D. Plug-in Extensions Used in SME Workbench Applications

For Model Development, at least one extension of each type in Fig. 4 is needed. Other applications use a subset.

1) Model Designers: The visual editor component of the UI is extensible to support different paradigms of model representation. Each visual editor component implements a predefined interface that allows it to register itself and to edit a model. The design of the SME Workbench allows software developers extending the DSPL toolset to create their own implementations of model editors, as long as they conform to the editor interface specification. The visual editing functionality of the component can be as comprehensive as needed, or as simple as a text editor. Like all SME Workbench extensions, Model Designers can now be added or removed at runtime. They are not used by other applications in the toolset.

2) Model Types: In the current version of the toolset, support for a Model Type defined in the taxonomy is bundled with a Model Designer extension. These may be separated in a future refactoring. Model Type definitions drive consistency across the outputs of all models of that type. For instance, for the model type “asset performance model”, all model instances for all possible asset types will have a common set of default output arguments. This extension type is currently used only in the Model Development UI.

3) Model Templates: The Model Development UI also uses these extensions to support variations within model types. For instance, Model Templates associated with a Model Type extension for asset performance models can reflect performance characteristics of individual asset types, such as power transformers or high-voltage circuit breakers. These templates drive consistency across the inputs of all models for that asset type. For instance, a template for high-voltage cables may include an input parameter specifying whether the cable is overhead or underground.

4) Argument Explorer: The Model Development UI uses Data Source extensions to help SMEs select input or output arguments to include in their model logic.
The Model Validation UI can dynamically associate model input arguments with an SME’s data sources (reusing the Argument Explorer and data source extensions).

5) Runtime Environment: The Model Development UI uses Output Packaging extensions to enable SMEs to create executable formats of their modules which can run in diverse solution instances. The Model Validation UI uses these same Output Packaging extensions to create executables for bulk execution with the Runtime Engine.

E. Architecture Decisions – Technology Level

The architecture was elaborated in concert with the executive and conceptual decisions. Key technology decisions made by the team for realizing this architecture included:

1) Platform Selection: Windows Workflow was chosen as the core platform for representing and running analytical models. It provided all the necessary attributes: persistent representation using XAML; a hosted visual designer with all functionality required to model flowcharts, state models, and activities; a workflow execution API; extensibility of workflows; and visual debugging capabilities in the workflow designer. Analytic models can be created either visually or programmatically, and persisted as XAML. At runtime, Windows Workflow rehydrates the objects and the

environment required to execute the Workflow. The team recognized that implementing the initial services as Windows Workflow Services could cause a small amount of startup overhead; this was tested and measured. Using the Microsoft T4 (“Text Template Transformation Toolkit”) library for code generation streamlined handling of variability in Model Types. Later, it enabled easily extending back-end Output Packaging options.

2) UI Widget Selection: The team decided to build UIs as single-page applications, mimicking the behavior provided by many web applications. The Windows Workflow hosted designer provided the visual editor required for editing the analytical models. However, the UI for a toolset application is much more than the visual editor. The team chose basic Windows Presentation Foundation (WPF) UI elements, supplemented with the Syncfusion toolkit, to obtain richer customization capabilities and ease of development.

3) Model Designers: Many SMEs had been observed to initially document their model logic in flowcharts before involving an analytics software engineer for implementation. To help SMEs adjust to naturally describing their expertise without having to write code, the first visual editor to be developed was a flowchart-style paradigm, with toolbox primitives for decision logic, statistics, trends, and other familiar tools. The screen shot in Fig. 5 shows part of a logic flow built by an SME using the flowchart designer.

Fig. 5. Excerpt of Model built in SME Workbench Flowchart APM Designer
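The flowchart paradigm of Fig. 5 can be approximated by a tiny interpreter over decision primitives, as in the sketch below. This is purely illustrative: the Workbench actually persists models as XAML and executes them with Windows Workflow, and the node structure, thresholds, and verdicts here are invented:

```python
# Hypothetical flowchart: decision nodes route on a test of the input
# arguments; result nodes terminate with a verdict.
flowchart = {
    "start": {"kind": "decision", "test": lambda a: a["temp_c"] > 90,
              "yes": "hot", "no": "check_load"},
    "check_load": {"kind": "decision", "test": lambda a: a["load_pct"] > 95,
                   "yes": "hot", "no": "ok"},
    "hot": {"kind": "result", "value": "inspect asset"},
    "ok":  {"kind": "result", "value": "no action"},
}


def run_flowchart(chart, args, node="start"):
    """Walk the flowchart from 'start', following yes/no branches until a
    result node is reached."""
    while True:
        step = chart[node]
        if step["kind"] == "result":
            return step["value"]
        node = step["yes"] if step["test"](args) else step["no"]


verdict = run_flowchart(flowchart, {"temp_c": 80, "load_pct": 97})
```

The point of the flowchart toolbox is that the SME only wires nodes like these together visually; no code is written by the SME.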

4) Analytic Model Representation Format: XML-based encoding was selected for internally storing model logic because it provides complete freedom to support multiple model designers. The semantics of the XML are left to the implementers of analytical model designers for the Model Logic UI. [Windows Workflow models are represented as XAML, an XML encoding.]

5) Analytic Model Repositories: The model repository is a SQL database that stores metadata and the XML encoding of each model. It imposes no additional requirements on the model logic beyond XML-based encoding. The repository is accessed via a “web API” interface, and so can be configured to operate locally or on a server (for cloud access). The toolset can utilize any repository that implements the interface.

6) Analytic Model Import/Export: The workbench allows administrators and SMEs to import and export a model. XAML was selected for import/export because of the rich set of tools available for exploring exported analytical models outside the workbench.
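The repository decision above (any store implementing the interface can back the toolset, locally or in the cloud) can be sketched with a structural interface and an in-memory stand-in. The real system uses a web API over SQL; the method names and payloads here are assumptions:

```python
from typing import Optional, Protocol, Tuple


class ModelRepository(Protocol):
    """Hypothetical repository contract: store/retrieve model metadata
    plus the XML encoding of the model logic."""
    def save(self, model_id: str, metadata: dict, model_xml: str) -> None: ...
    def load(self, model_id: str) -> Optional[Tuple[dict, str]]: ...


class InMemoryRepository:
    """Drop-in stand-in for a local or cloud repository implementation."""

    def __init__(self):
        self._store = {}

    def save(self, model_id, metadata, model_xml):
        self._store[model_id] = (metadata, model_xml)

    def load(self, model_id):
        return self._store.get(model_id)


repo: ModelRepository = InMemoryRepository()
repo.save(
    "apm.cable.v1",
    {"model_type": "asset_performance", "asset_type": "hv_cable"},
    "<Activity x:Class='CableHealth'>...</Activity>",  # XAML-style payload
)
meta, xml = repo.load("apm.cable.v1")
```

Because only XML encoding is required of the payload, any designer's model representation can be stored without repository changes.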

F. Architecture Decisions – Vendor Asset Level

A small number of vendor selections were made for implementing the toolset and proof-of-concept demonstrator. Stakeholders selected specific ABB and vendor systems to integrate as data sources and asset-health-aware application systems in the demonstrators. For compatibility with the technology selections noted above, the team selected Microsoft Visual Studio 2012, C#, Microsoft .NET 4.5 and Windows Workflow, and internally available SDKs for UI integration of visualizations (built on HTML5 and web sockets) and for the data abstraction interface.

VII. LESSONS LEARNED

As the workbench was designed and prototyped, it was demonstrated to stakeholders, SMEs, and a growing body of interested internal parties with similar needs, who provided feedback. This section briefly describes some adjustments and additions made during the first year of DSPL development.

A. Intended and Unintended Adaptability Scope

Some elicited variabilities and extensibilities were deliberately reserved to be variable at toolset development time, and are not currently adaptable during development or usage of analytics. For instance, the asset type list in the taxonomy is extensible during toolset development, not at runtime, and is enforced during the analytics and solution lifecycles.

Not all requested extensibilities were achieved in the initial project phase ending in 2013. For example, stakeholder scenarios indicated that usability could be enhanced in the Model Development UI if a component model (e.g., for the health of a transformer bushing) could easily be used as a building block in models for higher-level elements (e.g., the health of a transformer). The team used an early spike to explore dynamically adding toolbox primitives at toolset usage time. This did not appear to be a ‘quick win’, so this requested toolset runtime variability was postponed. Tactics for easing component model reuse will be explored in future work.

B. Trading off Complexity for Usability

The team successfully extended the flowchart designer toolbox with new sets of primitives to add new UI elements to the toolbox and integrate third-party libraries. While this was more challenging than anticipated, the increased complexity during the tooling development lifecycle was accepted as a tradeoff for increased usability for SMEs, and the extension mechanism for adding toolbox primitives was documented.

C. Extensibility of Model Publishing Logic

The selection of T4 templates for packaging made publication of packaged models as REST-based web services easy. However, as internal awareness of the Workbench grew, and as additional opportunities were identified for reusing analytic algorithms developed in the Workbench, it became clear that the ability to also deploy a model as a DLL would enhance reusability. Due to the team’s strategic emphasis on layering and the choice of T4 for code generation, developing the new DLL publishing extension proved to be straightforward.

D. Pre- and Post-Processing Plugin Hooks

The team realized that for some applications, glue code or transformation code might need to be invoked before or after

execution of the analytical models. Microsoft Managed Extensibility Framework (MEF) technology was leveraged to allow pre- or post-processing extensions of the deployed models. Any component packaged and deployed as a MEF library (DLL) that implements the defined interface and resides with the packaged model in a predetermined location will be invoked by the runtime. Solution software or data engineers can use this to customize model integrations. However, to date the pre- and post-processing plugin hooks have not yet been used in a deployed solution.

E. Data Flexibility for SMEs

To address SME needs for executing models with their own data stores in the Model Validation UI, more Argument Explorer extensions were added to interoperate with data source types more common to SME personal data stores than to large industrial enterprise systems. The extensible Argument Explorer has now been leveraged to enhance usability for SMEs defining input arguments in the Model Development UI. Integration of the Argument Explorer data source extensions with the Model Template and Model Type extensions will enable any SME to create, validate, and publish new models in a deployed solution in the field without data engineers, further reducing concept-to-execution time for new analytics from months or days down to minutes.

F. Expansions of the SME Pool and Solution Pool

Conversion of the toolset to a DSPL in its own right was driven by stakeholder interest in offering model creation capabilities to end-customer experts as well as ABB experts. Different asset health solution customers need different configurations of SME Workbenches pre-tailored for their solution, industry, and environment. The plugin approach used for the toolset extensions enabled easy creation of the secondary DSPL of runtime-variant toolset applications.
With a solution instance and SME Workbench instance created from the DSPLs, customer SMEs who want to build new analytics can instantly deploy them at runtime in their own solution. An example asset health solution for power transmission & distribution (T&D) can be derived from the product line and customized at solution runtime for a customer, with: industry-relevant plugin asset performance models and action plans for grid equipment, a grid criticality model, KPIs, and optimization algorithms; an SME Workbench instance with an Output Packaging extension to execute newly-deployed models in the solution’s historian; Model Designers and Model Templates for special types of grid equipment; and Argument Explorer extensions for enterprise-specific data source systems.

VIII. SUMMARY

Combining domain analysis activities with the QAW and the extended quality scenario template was both effective and efficient in identifying and prioritizing the variations and related qualities for asset analytic model development and usage. Layering was essential, and enabled even greater extensibility than originally targeted. All five variability realization techniques were used to design and implement our DSPL: inheritance (used in development of the workbench applications), extension points (e.g., pre- and post-processing plugins), parameterization (making model parameters tunable

and data sources configurable at usage time), configuration (e.g., model management/deployment), and generation (e.g., T4 code generation for templates and output packaging). Table XI and Table XII summarize how the DSPL toolset decoupled and changed experts’ roles in delivering solutions.

TABLE XI. SME WORKBENCH SUPPORT FOR SMES AND DEPLOYERS
(expert role – capabilities – toolset applications used)

- Industry SME; Asset SME – create and evolve analytic model logic – Model Development; Model Validation and Tuning.
- Installer / Integrator / OEM / partner – tune models and configure model execution options – Model Management and Deployment; Model Validation and Tuning.

TABLE XII. SME WORKBENCH SUPPORT FOR SOLUTION ENGINEERS
(expert role – capabilities – contributions to toolset)

- Analytics Software Engineer – develop and share Model Type and Template plugins – diversify model types and drive consistency in families of analytic models.
- Application Data Engineer – develop and share data source plugins – enhance interoperability of models with data systems.
- Application Software Engineer – develop and share plugins for desired executable formats – streamline integration of analytics; optionally deploy Application variants.

The success of the DSPL architecture initiative is being measured both subjectively and objectively. Prototype demonstrations began in February 2013. Feedback from SMEs and other stakeholders on these demos was highly positive, and drew many expressions of interest in broader reuse due to the decoupling and acceleration of analytics development. New Workbench variants with combinations of current extensions have been published in less than a day. Demonstration models have been built and packaged for drop-in execution in a few hours to a few days (depending on the complexity of the logic), versus the many days or months it would have taken previously. Output packaging executes in only a few seconds, which is more than fast enough to satisfy toolset usability for SMEs. To date, model execution has been at least as fast as models hand-integrated in pre-DSPL solutions.

Practical use and reuse of SME algorithms is now supported in a diversity of deployment scenarios in industrial and enterprise software systems. This approach has dramatically reduced the time from model conception by an SME to usage of the SME’s knowledge in deployed applications, and delivers the desired flexibility in the field. Due to the up-front attention paid to achieving the vision of a dynamic software product line, and the commitment of the team to disciplined layering and preservation of architectural integrity, the analytics ecosystem has to date demonstrated the desired flexibility, extensibility, and speed. The SME Workbench toolset is being enhanced and extended to integrate analytics in domains beyond asset health.

ACKNOWLEDGMENT

The authors deeply appreciate the engagement and support of ABB Corporate Research colleagues, application software architects and product managers in Ventyx and ABB, and the

experts throughout ABB who collaborated with us on developing the DSPL architecture described in this paper.

REFERENCES

[1] Software Engineering Institute, “A framework for software product line practice, Version 5.0”, www.sei.cmu.edu/productlines/.
[2] Mike Hinchey, Sooyong Park, and Klaus Schmid, “Building dynamic software product lines”, Computer, vol. 45, no. 10, pp. 22-26, Oct. 2012.
[3] Karen Smiley, Shakeel Mahate, Paul Wood, Paul Bower, Gary Rackliffe, and Martin Naedele, “Picture of health: An integrated approach to asset health management”, ABB Review, 2014Q1.
[4] Rich Overman, Society for Maintenance & Reliability Professionals (SMRP), “CORE principles of reliability centered maintenance”.
[5] Kristian Steenstrup, “Asset management and reliability: a strategic road map”, Gartner, 2010-11-15, www.gartner.com/id=1469515.
[6] Thomas Westman, Pierre Lorin, and Paul A. Ammann, “Fit at 50: Keeping aging transformers healthy for longer with ABB TrafoAsset ManagementTM – Proactive Services”, ABB Review, 2010Q1.
[7] GTM (GreenTech Media) Research, “Evaluating asset health: Prioritizing and optimizing asset management”, 2013.
[8] The Institute of Asset Management (IAM), “An anatomy of asset management”, theiam.org/what-is-asset-management/anatomy-asset-management.
[9] Publicly Available Standard PAS-55, www.pas55.net.
[10] International Standards Organization, “ISO 55000 International standards for asset management”, www.iso55000.info.
[11] Mahvish Khurum and Tony Gorschek, “A systematic review of domain analysis solutions for product lines”, Journal of Systems and Software, vol. 82, no. 12, pp. 1982-2003, Dec. 2009.
[12] Mikael Svahnberg, Jilles van Gurp, and Jan Bosch, “A taxonomy of variability realization techniques”, Software Practice & Experience, vol. 35, no. 8, pp. 1-50, 2005.
[13] Nelly Bencomo, Svein Hallsteinsen, and Eduardo Almeida, “A view of the dynamic software product line landscape”, Computer, vol. 45, no. 10, pp. 36-41, Oct. 2012.
[14] Heiko Koziolek, Thomas Goldschmidt, Thijmen de Gooijer, Dominik Domis, and Stephan Sebestedt, “Experiences from identifying software reuse opportunities by domain analysis”, in Proceedings of the 17th International Software Product Line Conference (SPLC ’13), ACM, New York, NY, USA, pp. 208-217, 26-30 Aug. 2013.
[15] Mario R. Barbacci, Robert J. Ellison, Anthony J. Lattanze, Judith A. Stafford, Charles B. Weinstock, and William G. Wood, “Quality attribute workshops, third edition”, CMU/SEI-2003-TR-016, 2003.
[16] Peyman Oreizy, Nenad Medvidovic, and Richard N. Taylor, “Architecture-based runtime software evolution”, in Proceedings of the 1998 International Conference on Software Engineering, pp. 177-186, 19-25 Apr. 1998.
[17] Olaf Zimmermann, “Architectural decision identification in architectural patterns”, in Proceedings of the WICSA/ECSA 2012 Companion Volume, ACM, New York, NY, USA, pp. 96-103, 2012.
[18] Andy Koronios, Daniela Nastasie, Vivek Chanana, and Abrar Haider, “Integration through standards – An overview of international standards for engineering asset management”, via CIEAM at cieam.com (seen 2013-06-24); CIEAM is now succeeded by The Asset Institute at theassetinstitute.com.
[19] Ken Bever, “Open O&M and MIMOSA standards introduction”, Open O&M Initiative, May 2012; Machinery Information Management Open Systems Alliance (MIMOSA), www.mimosa.org.
[20] Christian Wende, Uwe Assmann, Srdjan Zivkovic, and Harald Kuhn, “Feature-based customisation of tool environments for model-driven software development”, in Proceedings of the 15th International Software Product Line Conference (SPLC 2011), pp. 45-54, 22-26 Aug. 2011.