CROSSINGS The Journal of Business Transformation

EDITION THIRTEEN FALL 2015


TABLE OF CONTENTS

04  INTRODUCTION: the age of the digital enterprise by Chip Register
06  THE BUSINESS TRANSFORMATION IMPERATIVE: change now or become obsolete by Josh Sutton
12  DYNAMICS OF DISRUPTION: an ‘Uber’ approach to compliance reporting by Randall Orbon and Cian Ó Braonáin
16  OTC DERIVATIVES: the data management challenge, risks and opportunities by Paul Gibson and Matthew Rodgers
22  DERIVATIVES GOVERNANCE: enabling product innovation for asset managers by Geoff Cole and Jackie Colella
28  HOUSING BUBBLE 2.0: ready for another housing market crash? by Hans Godfrey and Adi Ghosh
34  FINTECHS—OPPORTUNITY OR THREAT?: a pragmatic approach for organizations to assess the value of financial technology initiatives by Sean O’Donnell
40  CLOUD-BASED SOLUTIONS: why the time is right for asset managers to consider adoption by Manish Moorjani
48  THE BUSINESS CAPABILITY MAP: a critical yet often misunderstood concept when moving from program strategy to implementation by Shiva Nadarajah and Atul Sapkal
56  DIGITAL CUSTOMER ENGAGEMENT: the key to long-term success for utilities by Yugant Sethi and Alakshendra Theophilus
62  INTRODUCTION: drive change through analytics and collaboration by Rashed Haq
64  MANAGING AN ANALYTICS PROGRAM: the three key factors for success by Barbara Thorne-Thomsen, Cassandra Howard and Shahed Haq
68  DATA QUALITY FOR ANALYTICS: clean input drives better decisions by Niko Papadakos, Mohit Sharma, Mohit Arora and Kunal Bahl
76  PREDICTIVE ANALYTICS IN INTEGRITY MANAGEMENT: a ‘smarter’ way to maintain physical assets by Ashish Tyagi and Jay Rajagopal
84  ENERGY INTELLIGENCE: the key to competitive advantage in the volatile LNG market by Ritesh Sehgal, Parry Ruparelia and Sidhartha Bhandari
90  FUEL MARKETING OPTIMIZATION: providing an advantage in an increasingly complex and competitive market by Pooja Malhotra, Rathin Gupta and Rajiv Gupta
96  SHIPPING ANALYTICS: improving business growth, competitive advantage and risk mitigation by Kunal Bahl
102 STOCHASTIC ANALYTICS: increasing confidence in business decisions by Tomas Simovic and Rashed Haq
108 CHOOSING AN APPROACH TO ANALYTICS: is a single technology platform the right investment? by Abhishek Bhattacharya


INTRODUCTION

the age of the digital enterprise

With the digital age rewriting the playbook on how to succeed in business, organizations of all shapes and sizes are finding themselves confronted with the need for change. Across the spectrum, from financial services and utilities to retail, business transformation has taken center stage as companies search for ways to remain relevant amidst continually evolving landscapes.

Regardless of industry, the underlying need is very much the same. Pressures, from plunging energy prices and stringent regulatory reforms to ever-increasing amounts of data, the always-on customer and complex technology advancements, have placed the old way of doing business on the endangered species list. It is no longer enough to focus on being the fastest, the coolest or the smartest. Firms must now offer the perfect package—one that delivers the connected, innovative and seamless experience their clients have come to expect.

As comfort zones continue to erode, companies need to closely scrutinize purpose and strategy. They need to reexamine their raison d’être and, in many cases, both redefine what they mean by value and put plans in motion to transform their business models.


Realizing such an end state means breaking down the organizational silos we have built up to keep things running in accordance with our old blueprints. It requires collaboration between finance, marketing, IT and information security—groups that have not traditionally worked together. And it necessitates unifying around a vision of the future—and a promise to the customer—regardless of the internal changes needed to bring it to fruition.

This issue of CROSSINGS is our biggest yet and includes many articles that showcase how firms are challenging traditional models. Josh Sutton introduces the business transformation imperative and highlights the changes firms are making in today’s era of the digital enterprise. Similarly, Randall Orbon and Cian Ó Braonáin discuss how firms like Uber and Airbnb are revolutionizing their industries and how such out-of-the-box thinking can provide a smarter and more sustainable approach to trade and transaction reporting. Sean O’Donnell looks at the technology side of things and demonstrates how and why financial services organizations need to view innovative initiatives as opportunities for the business. Plus, we are devoting an entire section to analytics and how its use is driving better decision-making, with examples from the energy and commodity sectors.

There is no doubt that the age of the digital enterprise is upon us. For many organizations, this will mean sweeping changes—from how we reach our customers to the technologies that enable those connections. Firms that recognize this evolution as an opportunity to deliver value in smarter and more meaningful ways for their customers will position themselves best for success in this new era.

Best Regards,

Chip Register
CEO, Sapient Consulting


THE BUSINESS TRANSFORMATION IMPERATIVE: change now or become obsolete

The financial services landscape is changing faster than at any time in history. The FinTech revolution is creating an entirely new breed of competitor that is forcing “the establishment” to look closely at itself and determine how it can transform to remain a leader in tomorrow’s marketplace. Goldman Sachs estimates that $4.7 trillion worth of revenue is at stake and could be claimed by new entrants to the financial services space if today’s market leaders do not adapt and change. While this is a concern for most CEOs, it is also a great opportunity. In this article, Josh Sutton discusses the following four topics that are central to helping financial services firms transform their businesses in the era of the digital enterprise:

› Why today’s incumbent firms have an unfair advantage
› The four pillars of business transformation
› Understanding and leveraging the firm’s culture
› Choosing the right approaches to drive business transformation

AN UNFAIR ADVANTAGE

Every start-up in the world wants the customer base, balance sheet or product offerings that nearly every leading incumbent firm possesses today. Thinking like a start-up does not mean figuring out how to do things from scratch—it means determining how to provide a better service that people will pay for and that leverages available assets. For most Silicon Valley firms, those assets are not very substantial; for today’s leading financial firms, however, they are usually fairly massive.

› Customers. There is a certain amount of “stickiness” with financial services customers. This holds true across the spectrum, as retail banking customers, institutional investors and corporate clients alike have a natural predisposition to avoid switching institutions unless there is a compelling reason to undertake that effort. Conversely, customers want to consolidate their relationships to as few providers as possible if it makes their lives easier. There is a large opportunity for financial firms to increase the amount their customers spend with them by providing improved experiences and easy-to-purchase incremental services. An example is the success of Amazon Prime. By creating an improved service platform, Amazon has been able to entice a large set of customers to join Amazon Prime. Based on data from Consumer Intelligence Research Partners, the revenue generated from an Amazon Prime customer in 2014 was $1,500 per year versus $625 for a non-Prime member.


› Capital. There is an oft-repeated quote that it takes money to make money. Nowhere is this truer than in financial services. Business lines in today’s firms, ranging from retail lending to treasury services, rely on a significant balance sheet to operate successfully. The net result is a large number of business lines that have been shielded from FinTech start-ups to date. Some new ideas, such as peer-to-peer lending, have launched as a result of Silicon Valley firms seeking to identify models that do not require large amounts of capital. This will not last forever, as funding for FinTech firms is increasing rapidly — $8 billion worth of funding was granted during the first two quarters of 2015. For the near term, however, this environment creates an opportunity for the larger, established firms to innovate and create meaningful improvements within their core business lines.

› Products. Every person and firm has slightly different financial needs. In a perfect world, these needs would be met by unique and bespoke products. Yet the complexity involved in creating such products, along with the regulatory hurdles they would impose, makes this a rather impractical solution. Today’s established firms have the benefit of possessing a large catalog of products that can be assembled into a custom portfolio that achieves the same net effect. The hurdles here are largely internal at most firms, as there is little incentive to cross-sell and deliver custom solutions. The potential is therefore significant, and this is an opportunity that is not available to new entrants to the business.

THE FOUR PILLARS OF BUSINESS TRANSFORMATION

There are four primary lenses that must be considered when undertaking a business transformation effort. While each lens can be viewed independently, the sum of all four should be considered when prioritizing investments and business model changes.

Customer Experience and Engagement

The customer is always right. In today’s world, this saying applies to nearly every industry, including financial services. The first step in transforming the business is to look at it through the eyes of the customer (either consumers or corporate customers). What are they really trying to accomplish? What would make their lives easier? What services would a firm provide to them in their ideal world? These are all questions that need to be answered without being influenced by the constraints of how the business operates today. Firms often confuse a great user interface with a great user experience. The former is a technology solution, while the latter is about business design and ensuring that the business is built around customer needs.

Employee Empowerment

What can firms do to better enable their employees to add value? These are the people who best understand the business and its customers. Oftentimes, however, the way an organization is structured can impede employees’ ability to leverage insights to improve the business. Any transformation effort must include a robust analysis of how to increase the value that a firm gets from its employees. This can be as simple as ensuring that employees are working on the same platforms and systems as their customers so they can better interact with them. It can also be about increasing employees’ leverage by using technology to make them more effective. Technology and employees work hand in hand to create better results. In the wealth management space, for example, many financial advisors fear being replaced by technology. Instead, they should be focused on leveraging technology to serve a significantly larger client base.

Process Optimization

The streets in London or Boston are, quite literally, the result of paving cow paths. Too many businesses have processes that suffer from the same mindset. A process that once had to be performed manually was automated; unfortunately, the process was never examined to determine whether there was an opportunity to improve it. Many of the costs associated with middle- and back-office processing can be traced to this behavior. Firms should take a close look at what can be done with today’s technology and business landscape that might not have been possible even five years ago. Can business lines that were once separate due to cost pressures now be integrated? Can services that were once too costly to provide now be offered to a wider range of customers?


New Business Lines

The creation of new business lines receives a significant amount of media attention, but this approach should only be considered if it provides value that customers want but are not currently receiving. New business lines can be segmented into two areas: those that are accretive and those that are disruptive.

Accretive business lines are those a firm can add as a new revenue stream without destroying any current business lines. One example would be giving retail banking customers the ability to allocate a portion of their savings to be invested in sector-specific ETFs. This provides a new set of investors for ETF products without damaging any existing business lines.

A disruptive example along the same lines would be using digital platforms and robo-advisors to provide wealth management services and advice to every retail banking customer. While potentially accretive to the firm as a whole, there would definitely be an impact on the wealth management business and how it operates. Truly disruptive areas, such as using blockchain to redefine how settlement works or artificial intelligence to identify investment opportunities, would also fit into this category.

As a general guideline, accretive business lines are best considered part of a transformation of an existing business, while disruptive business lines are best considered outside of the lens of today’s business model.

BUSINESS TRANSFORMATION VERSUS DIGITAL TRANSFORMATION

Nearly every major firm has had discussions about digital transformation and what it means to their business. There is a fear of being “uberized” or becoming the next Borders bookstore. The term “digital transformation,” while important, is ultimately misleading, as it conveys an implicit thesis that firms need to move from legacy channels to digital channels in order to conduct business. This is not always the case. To fully assess how to transform a business, a firm must first review the underlying areas that have changed as a result of technology.

› Access to information. Technology has created an environment in which people expect to have access to information on demand. The days of sending out monthly NAV reports are quickly disappearing.

› Mobility. People no longer work only in physical offices or during fixed times. Instead, they are working at their offices, homes, restaurants and a plethora of other places throughout the day and night. They need to be able to access any information from any location on any device.

› Analytics. While the industry is in the early days of big data and machine learning, these technologies are quickly redefining how people expect business to be conducted. Financial services will benefit the most from changes in this area, and the firms that lead the way will be the leaders of tomorrow. Just as sub-prime retail credit cards are a well-understood and profitable business today, it is safe to assume that a number of new business opportunities will be originated by the intelligent use of complex data analytics and even artificial intelligence.

Only after firms have reviewed their business in the context of these three changes can they begin to complete a true business transformation — which will likely involve an increased use of digital channels. However, if firms do not start with a thorough business review, they will simply be putting a veneer over a potentially obsolete business model.

CULTURE

Understanding and leveraging a firm’s culture is one of the most critical yet overlooked components of any business transformation program. The first step in this process is to undertake a frank assessment of what the culture is today. Is it aligned with taking risks or is it risk averse? Do people gravitate toward working in silos or do they prefer collaboration? Are decisions made by executive mandate or group consensus? Each of these items, along with other intangible components of the firm’s culture, is critical to understand before attempting to change the firm.

Once a firm understands its current culture, it is important to assess which components of that culture are complementary to transformation and which are counterproductive. In doing so, organizations can determine how best to leverage the characteristics of the firm to create a bias toward transformation. Certain characteristics can influence how to drive action. For example, a firm with a competitive culture can incorporate that into how it drives transformation ideas. Firms can make the funding of transformative ideas a competitive event and let the naturally aggressive culture of the firm create an environment where people are competing to create the best and most transformative ideas.

Understanding the current culture will also help firms identify any parts of the culture that will be detrimental to the firm’s ability to transform. One example is a firm that has a cultural bias toward silo-driven behavior. This can often be a showstopper for many transformation opportunities presented in today’s digital enterprise. Creating a plan to change such behaviors is critical to any well-thought-out transformation plan.

The final step in understanding the impact of a firm’s culture on its transformation efforts is to determine how best to create an environment that makes people want to be part of the program. This must be done within the constructs of the culture as it was assessed, but it will often have some common components. People must understand that transformation is important to the firm and therefore to them. This cannot be conveyed through words alone; it must also be backed by actions. Among the actions people generally notice are career advancement opportunities as well as financial rewards and penalties. Ensuring that the people who are actively exhibiting the right cultural behaviors are visibly promoted is a good first step. Another is making a clear link between increased budget funding for parts of the firm that are participating in the transformation efforts and decreased budgets for those business lines that are not.

It is important to note that people will expect transformation to be internal as well as external. For example, when a firm creates a great communication platform for its clients but its employees still use Lotus Notes, it implicitly tells people that the firm does not value its employees to the extent that it should in today’s world. Actions speak louder than words, and firms must ensure that the actions they are taking are aligned with any transformation program.

APPROACHES TO DRIVE BUSINESS TRANSFORMATION

The real challenge with business transformation is how to make it actionable and beneficial. The obstacles are substantial. There are often decades or even centuries of history at every firm. This history creates a cultural perception about how things have been done, which is difficult to overcome. Equally challenging is the natural resistance to change. People will have a vague idea at best about what the future might look like, but can understand in explicit detail what changing today’s model might mean for their personal role at a firm. With this in mind, three models have yielded successful results at firms within financial services as well as other industries:

1. Journey-driven Transformation. This is the approach best suited for situations where there is a clear executive mandate to explore and enact transformation to improve the business. It starts with a high-level articulation of the functions that are core to a firm’s business strategy. Examples could include things like mortgage origination or third-party distribution of funds. Once this list is assembled, it should be reviewed and prioritized. For each business function, a cross-functional team should create a future-state “journey” that clearly articulates what that business process would ideally look like for both customers and employees. This cross-functional team should comprise key employees as well as external team members who can challenge legacy models and prevent “group think” from taking over. These future-state journeys can then be mapped to actionable plans that can be executed. Often, a critical part of the plan is the internal communication required to ensure alignment and garner the necessary support from existing employees and customers. A common best practice is to designate a leader for the transformation program who is accountable to the CEO and does not have any specific ties to the legacy business models currently in place. This can be somebody from within the firm (divisional CEO, chief innovation officer, etc.) or someone external.


2. FinTech Investment Models. Leveraging the start-up mentality to bring new and innovative ideas into a firm is a model that is growing in popularity. At the highest level, the goal of this model is to leverage a group of people who are not part of any existing business line or burdened by any preconceived notions to develop transformative business opportunities that can generate incremental revenue or even disrupt entire business lines. There are a variety of approaches firms are taking to implement this model. Two of the more common ones are as follows:

› Venture Capital (VC) Model. The firm effectively acts as a venture capital firm for potentially disruptive start-ups in the industry. It provides them with capital in exchange for equity and access to the firm’s technology platforms, usually taking some type of leadership position, such as a board seat. This model gives the firm a wide range of visibility into potential disrupters so it can leverage those insights to inform its business decisions. Goldman Sachs’ investments in firms such as Motif and Kensho are good examples of this model in action.

› Private Equity (PE) Model. Some firms are choosing to operate as PE firms rather than VC firms. They are buying controlling stakes in firms they believe could be either accretive or disruptive to their businesses. These acquisitions are then used as core components in the transformation of their business processes, occasionally creating entirely new business lines that did not previously exist. Some examples of this model are Capital One’s acquisition of Level Money as well as BBVA’s acquisition of the start-up Simple, a Portland-based bank that operates entirely online.

3. Innovation Incubators. Innovation incubators, sometimes called accelerators or innovation labs, are a good model for creating an environment that allows new ideas to flourish in an organization that is not necessarily fully committed to transforming its core processes just yet. The high-level construct is a firm-funded lab focused on progressing disruptive concepts from ideation to a minimally viable product, at which time the product is transitioned into the core business. There are successful models in which the innovation incubator team is given a high degree of freedom to ideate in any areas they believe could improve the business, and models in which different business lines fund more targeted innovation efforts (e.g., how to disrupt and improve the mortgage origination process). Both of these models have proven successful because they combine a core group of people who know and understand the firm with a group of people from outside the firm who push the boundaries and ensure that historical precedents do not rule out potentially innovative ideas. Similarly, this model is most successful when the team is placed outside of its normal work environment—either at an entirely different physical location or, at a minimum, within a unique space inside the firm.

All of these models can be deployed independently or in parallel. The right combination will depend on a firm’s business strategy and perceived market differentiators.


CONCLUSION

The world is changing faster than it ever has before. The ability of firms to both adapt to the changes that the concept of a digital enterprise has created and capitalize on the opportunities it presents will determine future success. Disruption has been slower to attack financial services than other industries as a result of the capital and regulatory hurdles that are in place. Those hurdles are no longer viewed as insurmountable as they once were, however, and with investment in FinTech growing rapidly, disrupters are accelerating. The larger, established firms still have an advantage over these new entrants, but those advantages will only last for so long. In an industry that values risk management, perhaps the greatest risk that CEOs and boards need to be concerned with today is that of having one’s business model become obsolete.

THE AUTHOR

Josh Sutton is the global head of Sapient Global Markets’ Digital Business Transformation offering for financial services. He also serves on the executive leadership team of Sapient Consulting, a division of Publicis.Sapient. Since joining Sapient in 1995, Josh has led several transformational programs for many of its largest clients. He has also been responsible for shaping some of the company’s largest outsourcing and co-sourcing relationships related to both technology and business operations. [email protected]


DYNAMICS OF DISRUPTION:

an ‘Uber’ approach to compliance reporting

With MiFID II requirements looming, firms face the need to build the new capabilities necessary to meet complex mandates for trade and transaction reporting. Or do they? As Uber and Airbnb continue reshaping the transportation and hospitality industries, a growing number of firms are adopting a similarly disruptive approach to trade and transaction reporting—positioning themselves for greater cost efficiency, reduced risk and more time to invest in meeting customers’ expectations. In this article, Cian Ó Braonáin and Randall Orbon explore why it is no longer a matter of if—but rather when and how—firms will exit the “business” of trade and transaction reporting.

A wave of regulations has hit financial institutions, and one of the most recent—the Markets in Financial Instruments Regulation (MiFIR) and Directive (MiFID II)—is driving a fundamental “rethink” of how firms respond to changing regulations. What started as a whisper is now reaching a roar in the industry, as a growing number of participants shift to a different way of thinking. In short, participants are recognizing that they no longer can—or should—own and operate the systems that support trade and transaction reporting. Instead, they are choosing to access shared systems that address requirements without draining budgets, straining resources and distracting from their core revenue-generating businesses.

DRIVERS OF DISRUPTION

Driving this strategic rethink are new requirements that significantly increase the scope of reportable instruments and reflect the provisions in the market abuse legislative proposals. Meanwhile, firms face specific additions to the content of transaction reporting—including information related to clients, algorithms, trader IDs and short sales, as well as the price and negotiated waiver under which the trade took place. Together, these changes pose significant organizational, systems and technological challenges—which are merely the latest in a long and ongoing drumbeat of changing regulations.

Industry thought leaders are abandoning the chore of building and maintaining dedicated systems and moving toward a common platform. Such an approach enables faster, more cost-efficient access to needed reporting services, shared risk across industry participants, and the ability to focus monetary and human resources on more strategic initiatives that will strengthen the customer experience and drive top- and bottom-line growth.


THE RISING PRICE OF COMPLIANCE REPORTING

Should we exit the “business” of regulatory reporting? When faced with this strategic question, more and more firms are answering with a resounding “yes.” Cost is a major reason why. Consider, for example, that investment banks alone have spent nearly $25 million on average to achieve compliance with both Dodd-Frank and EMIR.1 Yet few participants believe these systems will deliver the adaptability, scalability and flexibility needed to meet new requirements. A recent survey by Sapient Global Markets found that 72 percent of firms are using in-house systems, 16 percent are using a third-party vendor solution and 6 percent are using a managed service solution to manage trade reporting. Among those using an in-house system, more than a quarter (26 percent) expect their trade reporting costs to increase by 50 percent or more over the next two years.

THE ‘UBER’ APPROACH TO REPORTING

So what, exactly, do Airbnb, Uber and other industry disruptors have to do with the latest regulations and those affected by them? There are two angles to consider:

› First, these innovative companies have demonstrated real success by focusing on their core services rather than the assets that support delivery of those services. Uber provides transportation yet does not own and maintain a fleet of vehicles. Airbnb is revolutionizing hospitality but does not own or manage a single hotel property. Their strength lies in the ability to provide an outstanding customer experience—a priority shared by banks in this “Year of the Customer.”

[Chart: How firms manage trade reporting today. In-house systems: 72%; Third-party vendor solution: 16%; Managed service solution: 6%; Other: 6%]

Many firms are increasingly recognizing that they no longer need full ownership over the reporting systems that support compliance—and many are questioning whether or not they can actually afford to build and maintain such systems (see sidebar: The rising price of compliance reporting). As they investigate different approaches, they are also realizing a number of other advantages.

› Second, market participants intuitively grasp the potential benefits of regulatory reporting services delivered via a “virtualized” model. Why incur the massive time and expense of building trade and transaction reporting systems? Why commit to ongoing maintenance and updates of the system? Why not find a means of tapping into the benefits of such a system without the burden of owning it?


As with other outsourced or virtualized solutions, an “Uber” solution for trade reporting positions a market participant for the following:

› Lower cost of ownership. Just as Uber provides its users with reliable transportation—without having to buy and maintain an automobile—a managed trade reporting solution helps a firm address reporting requirements with minimal capital outlay and staffing requirements. And, because operating costs are spread among multiple subscribers, a managed solution also keeps ongoing cost of ownership in check.

› Improved reconciliation. Most firms lack a reconciliation engine that pulls reports from trade repositories, TriOptima and other sources. Typically, a managed solution will offer this capability, reconciling those reports to each client’s internal database—addressing regulatory rules not only for reporting but also for reconciling portfolios, identifying discrepancies and resolving disputes (a simple illustration of this matching process follows this list).

› Reduced compliance risk. Best-in-class trade reporting solutions offer support for reporting to all global trade repositories for all asset classes and message types. In addition, they deliver regular, timely updates that reflect rule changes and are backed by the provider’s close relationships with regulators for staying abreast of new requirements.

› Improved data usability. When a solution aggregates data from multiple sources, its subscribers have new opportunities to deploy data analytics. Beyond addressing reporting requirements, a managed solution can provide a new vehicle for decision support.

› Rapid deployment. A managed solution offers an existing infrastructure that a market participant can simply plug into. Pre-configured reporting rules and message types enable integration with any source system—whether from subscribers, trade repositories, counterparties or vendors.

› Better reliability and security. Compared to onsite solutions, remotely hosted solutions offer a higher, more reliable standard for data protection.
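To make the reconciliation idea concrete, the following minimal Python sketch matches a firm’s internal bookings against reports pulled back from a trade repository and flags the typical categories of breaks. It is illustrative only; the field names ("uti", "notional") and the flat in-memory structures are assumptions for this example, not the data model of any particular repository or vendor solution.

# Minimal sketch, assuming trades and TR reports are dicts keyed by a
# Unique Trade Identifier (UTI). Field names are illustrative only.
def reconcile(internal_trades, tr_reports, tolerance=0.01):
    """Match records by UTI and return a list of reconciliation breaks."""
    internal_by_uti = {t["uti"]: t for t in internal_trades}
    reported_by_uti = {r["uti"]: r for r in tr_reports}

    breaks = []
    # Booked internally but never reported (completeness breaks)
    for uti in internal_by_uti.keys() - reported_by_uti.keys():
        breaks.append(("MISSING_AT_TR", uti, None))
    # Present at the TR with no matching internal booking
    for uti in reported_by_uti.keys() - internal_by_uti.keys():
        breaks.append(("UNKNOWN_TO_FIRM", uti, None))
    # Field-level mismatches on trades present on both sides (accuracy breaks)
    for uti in internal_by_uti.keys() & reported_by_uti.keys():
        booked, reported = internal_by_uti[uti], reported_by_uti[uti]
        if abs(booked["notional"] - reported["notional"]) > tolerance:
            breaks.append(("NOTIONAL_MISMATCH", uti,
                           (booked["notional"], reported["notional"])))
    return breaks

if __name__ == "__main__":
    firm_trades = [{"uti": "UTI-1", "notional": 10_000_000},
                   {"uti": "UTI-2", "notional": 5_000_000}]
    tr_reports = [{"uti": "UTI-1", "notional": 10_000_000},
                  {"uti": "UTI-3", "notional": 2_000_000}]
    for kind, uti, detail in reconcile(firm_trades, tr_reports):
        print(kind, uti, detail)

In a managed solution, the same matching logic runs across all subscribers and all repositories, which is why the cost and effort of maintaining it can be shared rather than duplicated at every firm.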


DO MORE—OR DO IT DIFFERENTLY?

The MiFID II requirements demand greater breadth and depth, and more investment in order to respond. For those affected, one option is doing more of what has always been done: another massive initiative, another major investment and the continued possibility of new or modified regulations. But with the latest round of deadlines approaching, the other option—a revolutionary approach to reporting—is becoming increasingly obvious and attractive. Such an approach will not only meet the most recent regulatory mandates but will also pave the way for smoother compliance with future reporting requirements. With nearly every firm either considering or actively adopting outsourced reporting, it is time for all participants to start asking how and when they will deploy this new model.

Resources

1. Bob’s Guide, “Banks Must Streamline Trade Reporting to Cope With Expanding Regulations,” June 29, 2015, http://www.bobsguide.com/guide/news/2015/Jun/29/banks-must-streamline-trade-reporting-to-cope-with-expanding-regulations.html

THE AUTHORS

Randall Orbon
As part of the Sapient Global Markets leadership team, Randall Orbon defines strategy, drives business development and executes key components of the strategy. Randall joined the company in 1996 and has built a breadth and depth of experience that spans Sapient’s capabilities. He has worked with numerous capital and commodity market participants to develop and execute transformative strategies. Randall holds a BSE in Computer Science from the University of Pennsylvania and an MBA from Columbia and London Business Schools. [email protected]

Cian Ó Braonáin is the global lead of Sapient Global Markets’ Regulatory Reporting practice, providing guidance, insight, leadership and innovative solutions to the company’s regulatory reporting and response project portfolio. Cian has over 15 years of experience as a lead business analyst, project manager and business strategist, developing methodologies and tools for solving the risks of regulatory impact and change. He has been involved in numerous regulatory reporting projects, many of which focus on pre-compliance date readiness activities, such as analysis, implementation and post-compliance assurance activities. [email protected]


OTC DERIVATIVES:

the data management challenge, risks and opportunities

Even as regulators struggle to harmonize significant inconsistency in rules across jurisdictions, they are now shifting from rulemaking to enforcement. To avoid the risk of noncompliance, trading firms are beginning to invest in data management as a discipline within their organizations. As internal architectures are reshaped, costs will likely increase in the short term. However, the return on investment is potentially exponential. In this article, Paul Gibson and Matthew Rodgers discuss how poor data management practices and a fundamental lack of standardization still pose a risk in the over-the-counter (OTC) derivatives markets. But as businesses work toward providing regulators with the ability to effectively monitor systemic risk, they may also be opening up new business opportunities for themselves.

LACK OF HARMONIZATION AND EXPECTED ENFORCEMENT OF RULES

In 2009, leaders from the Group of 20 developed nations (G-20) agreed on a series of commitments to reform OTC derivative markets with the aim of improving transparency, mitigating systemic risk and preventing market abuse in order to restrain the excesses leading up to the financial crisis. Explicitly calling out reckless behavior by capital market participants, they committed to put in place the checks and balances needed to prevent excessive risk taking and hold firms accountable for the risks they take.

Despite the tough stance taken then, many inside and outside the industry feel little progress has been made on the majority of these commitments. In fact, at the 2015 ISDA AGM, it was reported that only 15 percent of the 2009 Pittsburgh agreements have been implemented. One exception is the headway that has been made in reporting OTC derivative contracts to trade repositories (TRs). Most national regulators now have rules in place that require this but, notwithstanding the presence of the rules, regulators are becoming increasingly frustrated with their inability to understand the data being reported to them and the impact this has on their capacity to monitor systemic risk. In efforts to improve the situation, the European Securities and Markets Authority (ESMA) has increased the pressure with its level 1 and level 2 validations. Furthermore, over the past 18 months, the Commodity Futures Trading Commission (CFTC) has begun to levy fines for missing or inaccurate reporting.1

Although participants may argue that the number of different and sometimes contradictory regulatory requirements actually contributes to the problem rather than helping to solve it, regulators are likely to continue moving into a period of enforcement. Participants have dedicated considerable time and resources to begin the process of bringing transparency to the OTC derivatives market, but unless they are able to report all of their eligible trades in a timely, accurate and complete manner, they face significant risks. In order to resolve this situation, trading organizations must invest in the operational processes needed to support their complex business models. Otherwise, the effort that has been put into this period of change will be rewarded with yet more fines.


HOW HAS THE LACK OF REGULATORY HARMONIZATION AND INDUSTRY STANDARDIZATION IMPACTED FIRMS?

Until progress has been made in harmonizing the inconsistent rules and creating common standards across the global markets, participants will continue to be pulled in various directions to address G-20 commitments. With budget constraints and the changing economy increasingly weighing on investment banks’ balance sheets, this poses a significant risk to market participants. Eventually, the industry can, and will, create a more seamless, transparent landscape for regulators and participants—but it won’t be easy. And, as the business of banking and investing changes, it is likely that this period will usher in a new set of winners and losers.

While firms can point to the lack of global alignment of requirements as an impediment to transparency, it is not a valid excuse for failing to meet regulatory mandates or falling short from an operational risk and controls perspective. In short, it will not save participants from the potential for fines. Furthermore, unless firms invest in the business models required to accurately collect and disseminate the required data, they could also fall behind when it comes to new business opportunities.

The main choice firms face when deciding on a reporting model is between a central internal hub with one connection to a TR and a solution with multiple connections from one organization to the same or multiple TRs. While many agree that a centralized internal hub is the superior model, establishing one is extremely challenging for many organizations. In a recent Sapient Global Markets survey taken at the ISDA AGM in Montreal, the majority of firms (about 72 percent) said they are using their own in-house solutions to satisfy their regulatory reporting requirements.2 Additionally, the survey highlighted how the majority of firms are still working with a decentralized internal model. In fact, as seen in Figure 1, there are often a number of different reporting models scattered throughout a firm’s internal infrastructure. This is partly due to the bifurcated nature of most institutions’ systems by geographical location, regulatory obligation and strategy, as well as a lack of standardization, clarity and direction on harmonization from regulators.

There are many advantages to working toward a central internal hub. A single central location for all reporting decision-making logic, suppression rules and enrichment provides an organization with consistent management information, along with robust evidence for regulatory scrutiny of the completeness, accuracy and timeliness of its reporting. Combined with the cost-saving opportunities from leveraging third-party providers, this area of financial market infrastructure offers many opportunities for performance improvement.

When taking into account more than the G-20 OTC commitments and looking at the legacy reporting frameworks, such as MiFID, TRACE, Blue Sheets, ACT and FINRA, as well as upcoming requirements like SEC and MiFID II reporting, the number of different reporting frameworks currently in place at each firm is likely to be extremely high. Most participants struggle with data integrity issues across business lines and functions, but the speed required of implementation programs and the nascent state of data management practice within the investment banking industry have meant that poor data-management practices tend to define OTC derivative reporting frameworks rather than simply being a characteristic of them.

With operations budgets normally first in line to be scrutinized and cut, compliance monitoring and reporting requires constant justification. These budget constraints and gaps in proper oversight and governance impact the quality of the architecture that is implemented; it is not unusual to see firms making do with solutions that rely on manual reconciliations a year, or even two, after the deadline is met. In many cases, firms have disjointed systems that are growing increasingly expensive to maintain and update, as firms have to keep multiple systems current with the ever-changing rules of each regulation. Historically, due to industry pressure, TRs have interpreted the rules and requirements in different ways and have allowed firms to submit loosely validated data—contributing to the data quality issues at the TR level. These data quality issues, coupled with the maintenance of the regulatory rules engine, add to the rising cost of maintaining or upgrading the technology firms are currently using.


A solution with multiple connections provides the following functionality (a simplified sketch of the rules-engine step follows Figure 1):

1. The system takes trade, position and static data from multiple systems and translates it into a standardized data model required for reporting.
2. The rules engine decides what needs to be reported and to which TRs.
3. Built-in mappings then automatically translate and transmit data to the relevant TRs.
4. The reconciliation engine takes reports back from TRs and other sources, and reconciles them to the internal database.
5. Delegated reporting functionality provides a combined dashboard view across all counterparties and TRs for clients (unlikely to be possible without a central hub).

Figure 1: Two main approaches have been identified as most prevalent in trade reporting: a central internal hub, or a solution with multiple connections to TRs.
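To illustrate the second step above, the following minimal Python sketch shows how a reporting rules engine might decide which regimes and repositories a trade must be reported to. The regime names, region codes and routing choices are assumptions made for this example; real eligibility logic depends on the precise regulatory texts and on a firm’s own entity structure.

# Minimal sketch, assuming reportability can be inferred from the regions of the
# booking entity and counterparty. Regime names and TR identifiers are illustrative.
from dataclasses import dataclass

@dataclass
class Trade:
    trade_id: str
    asset_class: str            # e.g., "IR", "CR", "FX"
    booking_entity_region: str  # e.g., "EU", "US"
    counterparty_region: str

def reporting_destinations(trade):
    """Return the set of (regime, trade_repository) pairs this trade should go to."""
    destinations = set()
    # Assume a trade booked in, or facing, an EU entity is reportable under EMIR
    if "EU" in (trade.booking_entity_region, trade.counterparty_region):
        destinations.add(("EMIR", "EU_TR"))
    # Assume a trade booked in, or facing, a US entity is reportable under Dodd-Frank
    if "US" in (trade.booking_entity_region, trade.counterparty_region):
        destinations.add(("CFTC", "US_SDR"))
    return destinations

if __name__ == "__main__":
    trade = Trade("T-42", "IR", booking_entity_region="US", counterparty_region="EU")
    # In this toy model the trade is routed to both regimes
    print(reporting_destinations(trade))

Whether logic of this kind lives once in a central hub or is repeated across several point-to-point connections is exactly the architectural choice Figure 1 describes; duplicating it multiplies the cost of every subsequent rule change.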

Complying with upcoming G-20 regulations that follow the American and European derivative regulations, while managing ongoing updates to existing rules, is leaving firms in a state of almost perpetual scramble, facing tight implementation timelines every quarter. Simply keeping pace with regulatory requirements demands large numbers of support staff and ongoing maintenance, adding costs that firms will never be able to recoup unless they build scalable, flexible solutions. Spreads and credit are so tight in today’s financial markets that firms are unable to pass regulatory costs on to their customers or shareholders without an adverse impact on their reputation or product offerings.3

More institutions are coming under regulatory scrutiny and will need to establish proper checks and balances to survive the ever-changing regulatory landscape. This is likely to lead firms to reevaluate their strategic solution for regulatory reporting and, ideally, tie it into broader requirements, such as those from the Basel Committee on Banking Supervision (BCBS), for an effective solution. Many firms have acknowledged that they were materially non-compliant with the rules associated with some—or all—of the G-20 regulations implemented to date due to data architecture, IT limitations, accuracy and reporting integrity. For these participants, time is running out. Governing bodies are growing less tolerant of firms that rely heavily on manual processes and workarounds.


WHAT CAN FIRMS DO?

In 2013, BCBS and IOSCO identified 14 principles under four headings intended to assist firms in creating a harmonized structure for effective risk data aggregation and risk reporting.4 While there are many ways to ensure high-quality data, this approach indicates the direction regulators are looking for firms to take as they move toward the standards required for compliance. A firm-wide understanding of these types of frameworks is critical in order for participants to implement a workable approach to data governance and to day-to-day management and stewardship that is ideally globally streamlined. The principles are grouped into the four themes outlined in Figure 2, which presents the framework proposed by the BCBS for how firms can aggregate those principles into a harmonized governance model. By following a model based on these principles, firms will have greater access to their data and a better understanding of where the issues lie, while moving toward global harmonization with other institutions.

Figure 2: The four themes of the BCBS proposed framework for achieving a harmonized data governance model: Supervisory Review; Governance and Infrastructure; Data Aggregation Capabilities; Regulatory Reporting.

Supervisory Review
Compliance and legal departments should have the necessary checks in place for a periodic or random review of the reporting being distributed both internally and externally. Global policies and procedures should be validated whenever a remediation effort is required due to external triggers. The responsible department(s) should therefore have the necessary tools in place to provide a timely resolution on a particular report or reports.

Governance and Infrastructure
Taking its capabilities into consideration, a bank should promote strong governance and guidance that is consistent with what the Basel Committee established. Institutions should have a high standard of validation while ensuring procedures are fully documented. In addition, they should have designed, built and maintained a data and IT infrastructure that allows for robust data aggregation and reporting.

Data Aggregation Capabilities
Participants should have the necessary tools to generate and maintain an accurate data extract that can meet supervisory requirements pertaining to a particular regulation. They should look to consolidate and streamline their reporting processes through a single data source point of entry, while having the ability to segregate by legal entity, business unit, asset type and region. Effective risk mitigation should be in place when manual processes are incorporated into the business-as-usual (BAU) data reporting process. When a request is initiated, banks should have a systematic process in place to produce a timely and accurate report, allowing a particular supervisory body to review and assess the firm’s data quality and known risks. These reports should also accommodate any ad hoc requests received off-cycle from the frequency previously assigned by a particular regulator or client.

Regulatory Reporting
Reports generated by a firm should be accurate and transparent, with a reconciliation process in place whose overall operation is validated ad hoc or annually when outside requirements or regulations change. The reporting should be comprehensive enough to meet all necessary guidelines for frequency and distribution while covering all aspects of a firm’s risk and the necessary business lines.
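As a concrete illustration of how the completeness, accuracy and timeliness expectations above translate into day-to-day controls, the following minimal Python sketch validates a single submitted record. The required fields, the LEI format check and the assumed T+1 deadline are illustrative choices for this example, not the text of any specific rule or repository specification.

# Minimal sketch of record-level data-quality checks. Field names, the required-field
# list and the T+1 deadline are assumptions for illustration only.
from datetime import datetime, timedelta
import re

REQUIRED_FIELDS = ["uti", "lei", "asset_class", "notional", "execution_time", "report_time"]
LEI_PATTERN = re.compile(r"^[A-Z0-9]{20}$")  # an LEI is a 20-character alphanumeric code

def validate_report(report):
    """Return a list of data-quality exceptions for one submitted record."""
    issues = []
    # Completeness: every mandatory field must be populated
    for field in REQUIRED_FIELDS:
        if not report.get(field):
            issues.append("missing field: " + field)
    # Accuracy: basic format and sanity checks
    if report.get("lei") and not LEI_PATTERN.match(report["lei"]):
        issues.append("malformed LEI")
    if report.get("notional") is not None and report["notional"] <= 0:
        issues.append("non-positive notional")
    # Timeliness: assume a T+1 reporting deadline for this sketch
    if report.get("execution_time") and report.get("report_time"):
        if report["report_time"] - report["execution_time"] > timedelta(days=1):
            issues.append("reported after assumed T+1 deadline")
    return issues

if __name__ == "__main__":
    record = {"uti": "UTI-7", "lei": "529900EXAMPLE0000042", "asset_class": "IR",
              "notional": 25_000_000,
              "execution_time": datetime(2015, 9, 1, 10, 0),
              "report_time": datetime(2015, 9, 3, 9, 0)}
    print(validate_report(record))  # flags the late submission in this example

Checks of this kind are cheap to run at the point of submission; the governance challenge described in this article is making someone accountable for acting on the exceptions they raise.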


PRACTICAL NEXT STEPS

Since the financial crisis in 2007, data has remained fractured. It still lacks structured oversight and accuracy and is often maintained in silos throughout the enterprise. Bad data that is not harmonized causes gaps, leading to various internal and external control breakdowns. The majority of a firm’s data is simply not actionable. What’s more, many firms still struggle to adequately review their positions or provide transparency to external data consumers in a timely fashion, and are unable to satisfy the necessary requirements for proper oversight. While the 2013 BCBS/IOSCO risk mitigation paper highlights a structure participants could aim for, the majority are a very long way off from such a formalized approach. Despite the lack of regulatory harmonization and global data standards, regulators are still planning to levy fines.5

Firms need to become smarter about their trade reporting. This starts with adopting a culture where data matters, ensuring initiatives such as those coming out of BCBS/IOSCO are well understood, and engaging with the significant progress being made at the TR and industry association level, such as at the DTCC and ISDA. In the absence of leadership from national regulators, these organizations are taking the calls from the industry and supranational bodies for harmonization upon themselves. The methodology for agreeing upon a cross-asset, cross-jurisdiction approach that streamlines and harmonizes reported data, classifying data elements for maximum completeness, validity and accuracy in order to achieve a consistent framework, is gaining significant traction. Industry working associations are taking this approach to guide members and participants in a more streamlined way. Two significant examples currently in place are the proposal from DTCC for global data standardization6 and the ISDA EMIR Review Consultation working group, which is focused on single-sided reporting for EMIR and harmonizing collateral calls, margin and liquidity to establish more consistent models across the globe. It is the mindset of, and action by, bodies such as ISDA and DTCC that will eventually lead to a global standard for transaction reporting.7

As these initiatives progress, all market participants should begin assessing their infrastructures and operating models for inefficiencies stemming from either the lines of business or from technology and infrastructure. This analysis should include data capture, front-to-back flow analysis, level of headcount and organizational structure, as well as how technology functions across asset classes. Future-state definition should take into account current industry practices to promote and support the definition of the target process. Leveraging industry working groups, while seeking solutions both internally and externally, will allow firms to expedite any immediate needs that fall out of the assessment of their reporting infrastructure.

Another challenge is putting the right governance in place to ensure people own the data and see it as a valuable commodity in its own right on an ongoing basis. Establishing the necessary processes, with robust controls and procedures, will help alleviate compliance concerns related to data quality and harmonization. Once this is achieved, firms will be able to use the data for a variety of purposes, including analytics, management reporting, compliance oversight and more.

In fact, regulatory reporting may well be the basis on which many firms are coerced into achieving a level of data-quality maturity fit for their businesses. The regulatory requirement for firms to report all of their OTC data gives them the mandate to put these standards in place—but it may actually be the catalyst pushing the larger investment banking organizations toward business models that are sustainable going forward. In the short term, the investment required is not insignificant, but it is expected to drive down the cost of compliance in the long term and provide opportunities in risk management and asset servicing, with greater access to more reliable data.


Resources

1. U.S. Commodity Futures Trading Commission, “CFTC Orders ICE Futures U.S., Inc. to Pay a $3 Million Civil Monetary Penalty for Recurring Data Reporting Violations,” http://www.cftc.gov/PressRoom/PressReleases/pr7136-15

2. Sapient Global Markets, “Sapient Global Markets Survey Reveals Escalating Trade Reporting Costs and Concerns over Maintaining Compliance,” http://www.sapient.com/en-us/global-markets/news/press-releases/year2015/sgm_survey_reveals_trade_reporting_concerns.html

3. MarketsMedia, “Trade reporting costs on the rise,” July 9, 2015, http://marketsmedia.com/trade-reporting-costs-on-the-rise/

4. Bank for International Settlements, “Principles for effective risk data aggregation and risk reporting – final document issued by the Basel Committee,” http://www.bis.org/press/p130109.htm

5. U.S. Commodity Futures Trading Commission, “CFTC Orders ICE Futures U.S., Inc. to Pay a $3 Million Civil Monetary Penalty for Recurring Data Reporting Violations,” http://www.cftc.gov/PressRoom/PressReleases/pr7136-15

6. DTCC, “DTCC Issues Proposal to CPMI IOSCO for Global Data Harmonization,” June 2015, http://www.dtcc.com/news/2015/june/17/dtcc-issues-proposal-to-cpmi-iosco-for-global-data-harmonization.aspx

7. Committee on Payments and Market Infrastructures, “Harmonisation of key OTC derivatives data elements (other than UTI and UPI) – first batch,” September 2015, http://www.iosco.org/library/pubdocs/pdf/

THE AUTHORS

Paul Gibson is a Business Consultant currently based in New York. Specializing in new business and regulatory drivers, Paul has extensive experience in how these are impacting the capital markets industry. His current focus as program manager of a top market infrastructure provider’s data quality initiative grew out of an engagement to advise on its products and services strategy, helping to identify strategic growth areas and facilitate the board’s decision on further investment. His previous client was a top European investment bank, where he focused on the impacts of regulatory reform on the bank’s execution, clearing and reporting workflows, designing solutions and planning implementation to facilitate eventual compliance and more efficient operating models. [email protected]

Matthew Rodgers is a Business Consultant based in New York. Matthew has an extensive background in sell-side banking, where his experience has been pivotal around OTC derivative trading and regulations. His expertise has carried over to various other regulations within the OTC regulatory space, including Dodd-Frank, Canadian regulations, ESMA, ASIC, MAS, JFSA and HKMA. His current engagement entails global regulatory reporting along with advising on upcoming regulations within the derivatives and cash space for a tier 1 investment bank. His broad understanding of the “front to back” business flow is beneficial in advising clients on strategic decisions needed for compliance within the OTC space. [email protected]


DERIVATIVES GOVERNANCE:

enabling product innovation for asset managers

Derivatives are becoming a valuable tool for asset managers to boost product innovation and deliver outperformance in a risk-controlled manner. But current, reactive governance structures are creating long lead times for assessing and approving a new derivative instrument. Geoff Cole and Jackie Colella explain how strengthening governance, approval and operational due diligence can help investment managers reduce time to market and respond more quickly to portfolio managers’ needs.

Typically considered instruments reserved for hedge funds and complex investment strategies, derivatives are becoming far more common within investment portfolios and are increasingly desired by portfolio managers for exposure and risk management. The desire for benchmark outperformance and product differentiation in a crowded marketplace has led to a marked increase in the use of derivatives within US mutual fund products and newly marketed funds that cater to the needs of the retail and institutional investor communities. But with growing competition, the shift from passive to active management and new regulations for fund transparency, asset managers must be able to differentiate their offerings and find new ways to deliver innovative investment products ahead of the competition and at lower cost.

Sapient Global Markets interviewed a select group of asset managers to learn about their governance and processes supporting the assessment, implementation and trading of new derivative instrument types. These discussions centered on the following key aspects of enabling a new derivative instrument type to be operationalized within the investment management process:

› Derivatives committee structures and responsibilities
› Operational assessment and approval for new and currently traded derivative instrument types
› Risk management and controls related to derivative trading across the investment management industry
› Legal agreement management and the changing regulatory environment

In speaking with heads of derivative operations and primary derivative leads, the issues investment managers face in effectively and efficiently implementing a new derivative instrument type became clear: enabling the trading of a new instrument type quickly enough to support a portfolio manager’s request, while taking into account technology constraints and mitigating operational and reputational risks, requires significant due diligence and a robust yet flexible governance model to support the process.


THE CHANGING DERIVATIVES LANDSCAPE

Large investment managers are increasingly utilizing derivatives within their portfolios to support the introduction of new, innovative products that seek to utilize more advanced methods for interest rate, credit and currency risk management, as well as provide unique exposure opportunities potentially not offered by or accessible to competitor products. This response is motivated by downward pressure on both fees and firm profitability, and by concern over underperformance relative to benchmark-tracking passive strategies and exchange-traded funds. At the macro level, the prolonged low interest rate environment has made outperformance within fixed-income products particularly challenging, while the anticipation of global changes to interest rates and central bank policies has left institutional investors with few alternatives to appropriately manage the risk.

Derivatives are increasingly used as an additional tool for portfolio managers seeking exposure to countries, currencies, rate differentials and the like, driving broader, more complex derivatives usage to become a key enabler of product innovation and the way asset managers structure portfolios. Products and strategies that use derivatives more intensively, such as unconstrained bond funds or liquid alternatives, are growing in number as another avenue for investment management firms to increase revenues, capture sophisticated investors' assets, execute upon unique investment ideas from research teams and manage risk more effectively. Additionally, as clients and products become more internationally distributed, more complex hedging strategies are required to reduce risk and return profits to local currency or protect against unfavorable future yield environments.

The ability to better understand and govern derivatives usage has enabled investment management firms not only to execute derivatives at a lower cost but also to scale in terms of both volume and the ability to support complexity in the form of new product launches without significantly adding to that cost. Nimble governance structures can help asset managers unlock the full value of technology and operations in reducing time to market and giving portfolio managers access to a comprehensive range of tools at a reasonable, incremental cost.

The evolution of traditionally sell-side-oriented technology platforms to better cater to buy-side needs for derivative trade execution, risk and lifecycle management is an indicator of the blurring of the lines between investment managers and the dealer community. Lagging behind is investment in the onboarding and management of new derivative instrument types for client accounts, legal agreements and internal governance.

From a governance and operational support perspective, the investment management industry is beginning to consider derivatives as an asset class alongside equity and fixed income. Having a more complete range of derivative capabilities enables asset managers to nimbly manage risk, volatility and liquidity, as well as seek out and execute upon the numerous investment team ideas envisioned for their clients' portfolios. With this evolution, firms are confronting more complex issues associated with trading and managing derivatives positions and new instruments, because the level of complexity and inconsistency industry-wide is greater than for traditional cash securities.


THE CHALLENGE FOR ASSET MANAGERS

The complexity of derivatives, combined with firms' disparate abilities to implement and manage the operational risk of introducing new derivative instrument types, greatly increases the difficulty of governing and onboarding new derivative instruments into the front-to-back investment management infrastructure. The industry is struggling to determine the level of operational and legal due diligence necessary to create an appropriate level of comfort for firms to trade new derivative products, while balancing investment managers' desire to be first to market with a product offering that provides a unique exposure or risk-management approach.

Sapient Global Markets' observations across the industry suggest that operationalizing the trading of new derivative instrument types can extend the lead time of new product introduction by three to six months. The intricacies of trading derivatives across markets require large operational assessment efforts that can often delay the inclusion of a new instrument in a portfolio, leading to missed opportunities in the market.

Investment managers are seeking a broader range of exposures using an increasingly diverse set of instruments. When Sapient Global Markets asked asset managers how they currently use derivatives and how they expect to use them in the future, three primary trends emerged:

1. The primary use for derivatives is hedging, followed by generating alpha and liquidity management
2. The majority of investment managers stated that pooled vehicles held most of their derivative strategies and investment products, followed by institutional as well as individual separately managed accounts (SMAs)
3. All of the asset managers indicated that SMAs add a layer of complexity to implementing a new derivative instrument type, due to the additional legal agreements required as well as the coordination of client approvals

In addition, most firms expect an increase in derivative trade volumes over the next one to three years, based on market conditions and/or strategy diversification. Some firms expect sharp increases in volumes and trades as the multi-asset/sector space gains traction, while others expect unchanged volumes in anticipation of the impact of new regulations, or a potential decrease in the number of trades as transaction sizes increase due to costs.

THE CURRENT STATE OF GOVERNANCE MODELS AND PRACTICES

Governance plays an integral role in onboarding and enabling a new instrument type for trading across the investment management technology and operations infrastructure. It is therefore essential for firms to assess their current governance models and practices to identify strengths, weaknesses and limitations.

Many firms have governance committees responsible for approving the operational aspects of new instruments; however, the process for implementing the changes varies widely from firm to firm. In most cases, different committees are responsible for approving derivative usage at a portfolio or fund level, but most committees only approve operational capability. Firms should consider implementing a dedicated, fully resourced derivatives team with appropriate product knowledge and capacity to support the onboarding lifecycle. In addition, reviews should be conducted on a regular basis. The majority of firms review derivative usage bi-weekly or monthly, yet almost all said their committee convenes on an ad hoc basis to review any new issues that arise with a portfolio manager's new instrument request. Of the firms Sapient Global Markets interviewed, however, none felt they were fully staffed for onboarding new derivative instruments.


Firms should also determine the level of efficiency in the current process for assessing their readiness to trade a new instrument type. Incorporating a streamlined process to approve and implement a new derivative instrument is paramount to mitigating operational risk and reducing time to market for new products and investment strategies. Half of the firms Sapient spoke to stated that their governance structure is far from streamlined and that challenges and backlogs exist primarily in operations and technology.

FINDING THE RIGHT BALANCE

Asset managers are searching for the right balance between enabling portfolio managers and investment teams to express their investment views through any means possible (including the usage of derivatives) and achieving the optimum level of operational control and reputational risk management. However, major operational challenges occur in the process of assessing and approving new derivatives, managing legal agreements, meeting regulatory mandates, and achieving fast time to market for new investment products while controlling operational risk.

Legal Agreements

Legal agreements pose an interesting challenge for investment management firms. The due diligence needed to manage master umbrella agreements is cumbersome and requires qualified staff with knowledge of the intricacies of derivatives documentation. When Sapient Global Markets asked asset managers about legal agreements, a small percentage said their clients negotiate their own agreements with counterparties. If an investment manager chooses to trade a new derivative not stipulated in the original client-negotiated agreement, it may take weeks or months to complete all of the paperwork, delaying the ability to capitalize on that derivative trade.

Regulatory Implications and Constraints

The wave of new regulatory requirements for the trading and clearing of derivatives has created greater challenges for firms' legal review and documentation processes. The new requirements have changed the legal review and documentation process in the following ways:

› Additional "touch points" requiring clients to sign off on each new requirement add weeks to months to the time it takes for documents to be returned
› Extra legal team resources are needed to review regulatory changes
› Most changes occur only in the documentation
› Because the regulatory environment may change, the process is largely handled "case by case," with firms often "informing" rather than "requesting" approval from the client
› Regulatory mandates in Europe are especially challenging for derivatives

Figure 1: Finding the Right Balance.

As product innovation accelerates and competition for assets increases, derivatives usage will continue to grow in both volume and complexity. To position for long-term growth, investment management firms must reach a greater level of maturity with respect to the governance model so they can support the increased usage of derivative instrument types as part of existing and new product offerings.


Balancing Time to Market with Operational Risk

In Sapient Global Markets' interviews, asset managers stated that it can take anywhere from three weeks to one year to completely onboard a new derivative instrument type. The majority of firms also said that most instruments are traded with manual workarounds, without taking into account post-trade operational processing, including settlement, collateral management and even client reporting. In many cases, an instrument that is too complex for existing systems can delay implementation by over a year and will sometimes lead to the decision not to make the instrument type available to portfolio managers at all.

In addition, the majority of firms said they complete full end-to-end testing of any new derivative instrument. However, in some cases this testing is completed for one specific business unit rather than firm-wide, which can increase operational and business risk in the trade lifecycle if another business unit subsequently attempts to trade that newly enabled instrument. Reliance on standard vendor packages for trading and risk management may provide out-of-the-box support for most instruments, but changes to interfaces and configuration may be more complex than anticipated or require close coordination with software providers. Essentially, each new instrument request becomes a joint business and technology project, requiring scope, funding and prioritization against all other IT projects, which can further prolong the period between the request to trade and the first execution.

IMPROVING THE GOVERNANCE MODEL AND PRACTICES

For asset managers looking to continually innovate, introduce new products and equip their investment professionals with a full toolkit of market access and risk-management tools, the time to enable trading of a new derivative instrument type must be significantly compressed. Revamping governance models and approval processes is required to streamline, centralize and balance the time-to-market push against operational risk. Additionally, investment in workflow tools for transparency and tracking, dedicated derivatives/new instrument due diligence teams and the active involvement of operations teams is necessary to inspire and enable the cultural change needed to support the usage of more complex product types.

These changes are often overlooked dimensions of a robust target operating model (TOM) initiative that can address the definition of roles, responsibilities and accountability, as well as identify opportunities for improvement and investment across a firm. As product innovation accelerates, fee and cost pressures persist and competition for assets increases, asset managers must tie together all of the capabilities supporting derivatives, including legal, client service, collateral management, risk management, reporting and project management, in the form of a nimble and responsive governance model to enable a true competitive advantage.

Improvements in governance models and practices must also take into consideration future industry, market and regulatory shifts. For example, asset managers are encouraged to consider a number of factors, such as determining whether using Special Investment Vehicles (SIVs) across accounts is a viable option, preparing for BCBS 269 compliance and other regulatory change, and providing all personnel with appropriate derivatives education and training.

TURNING CHALLENGES INTO OPPORTUNITIES

As product innovation accelerates and competition for assets increases, derivative usage will continue to grow in both volume and complexity. While most asset managers recognize this, the focus of investment and operational improvement has typically been directed toward front-to-back trade flow improvements. In order to support increased usage of derivatives, most firms need to refresh their governance, approval and operational due diligence processes. Yet the majority of the investment managers interviewed have reactive governance structures, which is a major contributor to the time lag in assessing and approving a new derivative instrument. In addition, no investment manager was continuously improving its governance structures, suggesting that derivatives governance is not recognized as a vital investment area.

Asset managers need new products and outperformance to compete, differentiate and win. Derivatives are a valuable tool for product innovation and delivering outperformance in a risk-controlled manner. The opportunity exists to refresh or realign governance structures to better support organizational growth in accordance with derivative usage plans. Adopting new practices for governance and operational risk management specific to derivatives can help asset managers reduce time to market and more quickly respond to portfolio managers' needs.

THE AUTHORS

Geoff Cole is a Director of Business Consulting with Sapient Global Markets' Investment Management practice based out of New York. Geoff focuses on supporting investment managers with business, data and technology strategy. Most recently, Geoff has led projects helping global asset managers design operating models to support broader derivatives usage as well as select and implement solutions for performance attribution and risk analytics. [email protected]

Jackie Colella is a Senior Manager of Business Consulting. Based in Boston, Jackie plays a leadership role in helping Sapient Global Markets' clients drive innovative strategies to improve business performance and manage risk. Jackie is a primary interface for our clients, structuring and planning engagements, implementing critical delivery teams and developing relationships. [email protected]


HOUSING BUBBLE 2.0:

ready for another housing market crash?

A study of the current housing finance market reveals that the multidimensional reaction to the events of the last decade is still in play throughout the global financial system. New regulation and regulatory bodies, wholesale legislative changes, the formation and adoption of new risk-management frameworks, reduced securitizations by private-label banks and increased scrutiny by the press are just a few of the factors contrasting today's mortgage market with the pre-crisis era. But are mortgage markets truly more stable now than they were before 2008? In this article, Hans Godfrey and Adi Ghosh discuss how the government and regulators, as well as the primary and secondary markets, are preparing to mitigate the factors that caused the 2008 crisis. They also highlight the counter-effects and additional risks that these new policies, tools and systems may create.

Movements in the housing market, then and now, are widely recognized as a leading indicator of economic cycles. In 2015, a large degree of economic growth is being driven by expansionary monetary policy and the access to easy credit this enables. Combined with a renewed focus on affordable housing and upward-trending jobs data, home prices have seen significant appreciation since the US mortgage market bottomed out in 2012. However, questions are being asked about the sustainability of this appreciation, and there is already talk of the lead-up to the next housing bubble.1

While economic cycles and housing bubble formations are inevitable, the current housing finance structure differs significantly from that of seven years ago. Old and new regulatory agencies implemented rules with sometimes unintended consequences, causing a dramatic change to the housing market landscape in less than a decade. For example, risk aversion in the market has given rise to players outside of traditional banking, while the increasing use of technology has disrupted traditional business processes.

REGULATIONS—DIRECT AND INDIRECT CONSEQUENCES

Despite the pace of change, significant modifications have been made to the regulatory framework in the last few years, particularly in terms of a more concentrated focus on overall standardization. Better-defined terms, more transparent guidelines and eligibility requirements, and increased accessibility and transparency of data are some of the areas where new regulations have made an immediate impact.


A work-in-progress example of this drive toward standardization is the development of the Common Securitization Platform (CSP) under the auspices of the Federal Housing Finance Agency (FHFA). The CSP provides a common infrastructure for Fannie Mae and Freddie Mac to securitize loans and helps ensure consistency in terms of security onboarding, pricing and transparency. Efforts to create a single security between Fannie Mae and Freddie Mac are another step toward bringing consistency to the housing finance market. From a guidelines perspective, defined standards, such as Basel III, the Private Mortgage Insurer Eligibility Requirements (PMIERS) and the Servicer Total Achievement and Rewards (STAR) program, all help manage counterparty service levels and risk.

While each of these is undoubtedly helping to mitigate some of the risks and eventualities that led to the meltdown, the potential unintended impact of these regulations is also worth examining. The global financial system continues to increase in size, depth, complexity and interconnectedness, and the US mortgage market is no exception. Regulators have attempted to protect the markets from risk through stress testing and increasingly stringent compliance reporting. However, despite the laudable efforts to protect markets and consumers from systemic risk, the actions of the regulators have in many ways had the unintended effect of "Balkanizing" the leading financial services economy.2 This can potentially create a situation in which a mass of regulations operating in silos has the same result as no regulations (at many times the cost of compliance and reporting). Enhanced regulations have also raised operational costs for many organizations, including the cost of enhancing data, regulatory compliance reporting and adherence to different rules and standards. In certain segments of financial services that are heavily dependent on legacy infrastructure, the costs have been dramatic, as new regulation has forced wholesale re-architecting of business processes and the systems that support them.

The changes in housing finance regulations, in particular, should encourage organizations to think more strategically about their operational and technology investments. Although this task is by no means easy, it should include the following:

› Data Consolidation and Availability. A critical step is increased investment in enterprise data integration programs to consolidate both structured and unstructured data. While most organizations have already invested in enterprise-wide data management, the consolidation and availability of data, rather than its production, is increasingly becoming the key to unlocking its potential.

› Standardization. Investments in standardized enterprise data transfer mechanisms and protocols—whether through defining canonical data models for the enterprise or participation in key industry standards like those championed by the Mortgage Industry Standards Maintenance Organization (MISMO)—are increasingly critical to reducing integration and change management costs and driving these efficiencies (a simple illustrative sketch of the canonical-model idea follows this list).

› Partnership. The recent elimination of desktop underwriter fees to allow originators to run the same underwriting checks as the enterprises is an example of using partners to mitigate business risk. The ability of primary and secondary market parties to work with their technology partners and other enterprises to co-invest in solutions and share best practices would be of immense benefit to the overall housing finance ecosystem.

› Adoption. For many of these programs, the key success criterion is adoption, making investment in an enterprise-wide adoption and change management program critical. For example, one way to drive adoption across the firm could be through compliance scorecards with funding linked to quarterly goals for business units.

It will be interesting to see in the next few years how technology and partnerships on both sides (regulators and market participants) may help reduce systemic risk and lower associated operational costs. With the right investments in technology and standards, regulators and organizations may be able to better integrate regulatory frameworks, dynamically access the right information and make more informed decisions proactively through the use of advanced analytics and visualization techniques.
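To make the canonical-model idea concrete, the following is a minimal sketch under stated assumptions: a hypothetical CanonicalLoan record is populated from two hypothetical lender-specific feeds via per-source field mappings. The field names, source formats and conversions are illustrative assumptions only; they are not MISMO definitions or any particular firm's data model.

```python
# Illustrative sketch only: a toy "canonical data model" for loan records.
# Field names, source formats and mappings are hypothetical, not MISMO standards.
from dataclasses import dataclass

@dataclass
class CanonicalLoan:
    loan_id: str
    original_balance: float   # original principal, in USD
    interest_rate: float      # annual rate as a decimal (0.045 = 4.5%)
    origination_year: int
    originator_type: str      # "depository" or "non-bank"

# Per-source mappings: each canonical field maps to the source system's
# field name plus a conversion function.
SOURCE_MAPPINGS = {
    "lender_a": {
        "loan_id": ("LoanNumber", str),
        "original_balance": ("OrigBal", float),
        "interest_rate": ("NoteRatePct", lambda v: float(v) / 100.0),
        "origination_year": ("OrigYear", int),
        "originator_type": ("Channel", lambda v: "non-bank" if v == "NB" else "depository"),
    },
    "lender_b": {
        "loan_id": ("id", str),
        "original_balance": ("balance_usd", float),
        "interest_rate": ("rate", float),  # already a decimal
        "origination_year": ("year", int),
        "originator_type": ("type", str.lower),
    },
}

def to_canonical(source: str, record: dict) -> CanonicalLoan:
    """Translate a source-specific record into the canonical model."""
    mapping = SOURCE_MAPPINGS[source]
    fields = {canon: convert(record[src]) for canon, (src, convert) in mapping.items()}
    return CanonicalLoan(**fields)

if __name__ == "__main__":
    raw_a = {"LoanNumber": "A-1001", "OrigBal": "250000", "NoteRatePct": "4.25",
             "OrigYear": "2014", "Channel": "NB"}
    raw_b = {"id": "B-2002", "balance_usd": 410000, "rate": 0.039,
             "year": 2013, "type": "Depository"}
    print(to_canonical("lender_a", raw_a))
    print(to_canonical("lender_b", raw_b))
```

The point of such a mapping layer is that downstream consumers (risk, reporting, analytics) code against a single schema, so adding a new data source becomes a mapping exercise rather than a change to every consuming system.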


NEW PARTICIPANTS—THE SIX-MINUTE LENDERS

A recent study3 by the Harvard Kennedy School analyzes the increasing role played in loan origination by non-banks (defined as firms unassociated with a depository institution). Given the regulations defined by the Dodd-Frank Act in conjunction with Basel III, the total volume of capital available to depository institutions for lending has been reduced. This has resulted in a non-bank boom in the lending space: the share of non-banks among the top 40 mortgage originators has increased from 16 percent to nearly 38 percent. This is validated by a recent study that listed regulatory factors (or the reduced level thereof for non-banks) as a leading competitive advantage for non-bank originators and servicers.4 As a counterpoint, however, examination standards that remain consistent across the board and risk retention regulations place disproportionate pressure on non-banks as compared with larger depository organizations.

[Chart: "Fewer Depository Institution Originations Help Explain the Non-Bank Boom: Top 40 Mortgage Originators by Type ($ bil., 2005-14)," showing annual origination volumes for depository institutions versus non-banks.]

Figure 1: Increasing share of non-bank originations.

The other change to the lending framework has been in the area of alternative lending. These non-bank players, also known as "Six-Minute Lenders" (a reference to the ease of origination), include lending clubs and peer-to-peer lenders leveraging high-end technology platforms. Increasingly making their presence felt, these new entrants are no longer restricted to small venture capital-funded organizations. In fact, some of the biggest financial organizations (especially larger hedge funds) are getting on the bandwagon through conduits.


Given technology’s ability to disrupt existing business processes, it is very likely that non-banks will continue to play a critical role in changing the dynamics of the mortgage market. Naturally, the presence of these non-bank institutions creates another potential area for systemic risk—one not necessarily accounted for in the current regulatory framework. However, just as technology has served as a force for disruption in the market, this may be an opportunity to again leverage technology to ensure that post-crisis mitigation measures are enhanced to effectively address current and future scenarios that were not anticipated. Technology has made rapid strides in the combination of data and analytics. By running big data analytics, organizations are able to analyze a mix of structured, semi-structured and unstructured data in search of valuable business information and insights—a key technology differentiator from 2008.

REPORTING AND DATA

It is widely recognized that inconsistent and ambiguous terminology and poor data quality exacerbated the crisis through inaccurate classifications and reporting. One example is the classification of sub-prime loans as prime, based on their acceptance by the enterprises, even though they were accepted under lowered underwriting standards as part of the affordable housing program. Post-crisis, regulators, industry groups and market participants have acknowledged the need to improve data quality, including the clear definition of regulatory reporting guidelines and the consistency of definitions and attributes at the loan, borrower and security levels. Most players have recognized that good-quality data benefits the market because it allows for improved correlation analysis and predictive modeling, which in turn improves efficiency.

Most would agree that the volume of data and the velocity of its creation will only increase. However, there is no guarantee that the increasing amount of available data will necessarily help investors correctly value what they buy or allow regulators to properly measure systemic risk. Market participants must develop clear strategies for ensuring data availability and quality in their organizations—data consolidation and adoption, strategic prioritization of modeling initiatives and an overhaul or consolidation of reporting frameworks and tools are all steps in that direction. A development that will be keenly followed is the Regulation AB II standards around asset-level disclosure. An organization's ability to develop early warning signals based on historical trend analysis would be a definite competitive advantage. The market is at the beginning of a long learning curve in understanding the effective use of data. Although the boom-bust economic cycle seems inevitable, technology's ability to help draw meaningful conclusions from increasing volumes of data will be a crucial factor in mitigating systemic risk and in industry participants' ability to adapt to changing market conditions.
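As a small illustration of the early-warning idea, the sketch below flags observations that deviate sharply from their recent trend using a rolling mean and standard deviation. The series, window length and threshold are arbitrary illustrative choices, not a production risk model or any specific index methodology.

```python
# Illustrative sketch only: flag observations that deviate sharply from
# their recent trend. Data, window and threshold are made up for the example.
from statistics import mean, stdev

def early_warnings(series, window=12, threshold=2.0):
    """Return indices where the value is more than `threshold` standard
    deviations away from the mean of the preceding `window` observations."""
    alerts = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

if __name__ == "__main__":
    # Hypothetical year-over-year home price appreciation (%) by month.
    hpa = [3.1, 3.4, 2.9, 3.3, 3.0, 3.5, 3.2, 3.6, 3.1, 3.4, 3.0, 3.5,
           3.3, 3.6, 6.9, 7.4, 7.8]  # sudden acceleration at the end
    print("Alert at observations:", early_warnings(hpa))
```

In practice, firms would use far richer models and data than a rolling z-score, but the operational point stands: the signal is only as useful as the consolidated, clean historical data behind it.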

RISK MANAGEMENT—CHANGING SHAPE AND FORM

Risks are inherent to financial systems. While regulations and processes do help manage risk, they will never completely eliminate it. Risk changes form and evolves in response to any action, given the high degree of interdependency in the modern financial ecosystem. Since 2008, several steps have been taken to manage credit risk, such as the launch of credit risk transfer initiatives to distribute risk across investors, lenders and insurers, and the definition of documented standards and guidelines for counterparties like insurers and servicers. Similarly, interest rate risk management for the portfolio of loans the enterprises retain on their books has also seen several advances to manage both extension and pre-payment risk. However, with the heavy reliance on partnerships for risk distribution, counterparty risk, in terms of both exposure and the number of counterparties, has significantly increased.

More sophisticated risk models, regular disclosures, defined guidelines and more frequent stress testing and reporting have been put in place to address the causes that led to the 2008 crisis. It remains to be seen whether these measures adequately manage systemic risks. Moreover, a Housing Bubble 2.0 will be impacted by other factors that were non-existent seven years ago, such as the composition of the portfolio shifting toward non-banks, the role played by regulators, spiraling operational and compliance costs for lenders, and an increasingly complex market in terms of volume, leverage and stakeholders and their interactions.

It is impossible to quantify the element of risk in today's housing finance market and measure it against the market as it was seven years ago. However, it is fair to say that while regulators, housing finance organizations and their technology partners have invested significantly in mitigating known risks, the true test will be the ability to anticipate and mitigate the yet-unseen risks posed by new players, the direct and indirect consequences of regulations and disruptive technology.

CONCLUSION

Financial markets, whether organized and separated by asset class, geography or other factors, are more interconnected than ever before. No one can predict when the next bubble will burst, but the boom-bust cycle will likely continue, and the interconnectedness of markets is sure to result in difficult-to-predict ripple effects. Markets will inevitably evolve faster and in less predictable ways with the entry of new players who leverage new technologies to disrupt old businesses. To combat this disintermediation, incumbent participants must have clear strategies as well as the will to boldly execute those strategies. Further, the ability of technology to outstrip the bounds of regulation is readily apparent from the rise of non-bank financial entities. Regulators must likewise develop proactive strategies that leverage technology and data as a means to mitigate systemic risk when guidelines fail to do so.

The ability of the housing finance market to absorb a potential Housing Bubble 2.0 may depend upon two key factors. The first is a market participant's ability to individually invest in long-term strategic initiatives that address the foundational issues of enterprise data. Risk management at an individual level is a first step toward mitigating risk at a more macro level; it will be a challenge for market participants to do this without getting distracted by more tactical and reactionary goals. The second is the market's ability to create value networks through partnerships between regulators, primary and secondary market parties and their technology partners. These value networks have the potential to mitigate future financial crises through a combination of risk frameworks and regulatory measures backed by advanced analytics technology. Financial markets have generally shown an inclination toward integration (e.g., the swaps clearing process), and if properly executed, housing markets could benefit from this as well. In any event, technology and the power of data will be key factors in either helping spark a Housing Bubble 2.0 or making it just a passing reference in the history of housing finance.


Resources

1. Zero Hedge, "Mark Hanson Is In 'Full-Blown, Black-Swan Lookout Mode' For Housing Bubble 2.0," http://www.zerohedge.com/news/2015-05-13/markhanson-full-blown-black-swan-lookout-modehousing-bubble-20
2. Cato Institute, "Regulatory Fragmentation, the Balkanization of Financial Markets and the Competitiveness of the American Financial Services Sector," http://www.cato.org/publications/testimony/regulatory-fragmentation-balkanization-financialmarkets-competitiveness
3. Harvard Kennedy School, "What's Behind the Non-Bank Mortgage Boom?," http://www.hks.harvard.edu/content/download/76403/1714118/version/1/file/Final_Nonbank_Boom_Lux_Greene.pdf

THE AUTHORS

Hans Godfrey is a Vice President within Sapient Global Markets. In this role, Hans leads Sapient Global Markets' engagements in the mortgage and securitization space, partnering with industry stakeholders to deliver next-generation infrastructure. Hans has over 20 years of business experience and has worked with government-sponsored enterprises, multilateral development banks, regulatory agencies and commercial groups to deliver business value through technology enablement. [email protected]

Adi Ghosh is a Washington, DC-based Director focused on the primary and secondary housing finance market. He works closely with key industry participants to increase operational efficiency and bring strategic alignment in the housing finance space through technology solutions and value-driven partnerships. Adi has over 15 years of product development and business advisory experience across mortgage-backed securitization, loan origination, servicing and delinquency management. [email protected]


FINTECHS–OPPORTUNITY OR THREAT?:

a pragmatic approach for organizations to assess the value of financial technology initiatives

In the last six years, a proliferation of new financial services technology or "FinTech" ventures, eager to capitalize on shifting market needs and preferences, has emerged. Rather than sit back and watch these new models eliminate them, financial services organizations need to treat these innovative initiatives as opportunities rather than threats. In this article, Sean O'Donnell reviews the drivers of the FinTech evolution, where and how FinTechs are transforming financial services, and approaches for businesses to adopt (rather than run from) these new offerings.

While the mainstay of the financial services industry was busy dealing with the global financial crisis in 2008, start-up organizations in Silicon Valley, New York, London and other major financial and technology hubs were turning their attention away from social media plays and looking to reinvent financial services. Despite the 2008 global recession and slowing population growth, the continued rise in global gross domestic product (GDP) paints a profitable picture for FinTech start-ups. According to a study by the Center for Financial Inclusion, global GDP is predicted to reach $85 trillion by 2020—a four-fold increase over four decades.1 This rise in real incomes across all regions of the developing world will translate into greater demand for financial services.

Many of today's banks and other financial organizations are still stuck in post-crisis mode, grappling with current and pending regulatory changes, bloated business models and shrinking profit margins. As such, investing in new technologies to better meet their business and customer needs has been very low (or non-existent) on the priority list. This complacency has opened the door for innovative thinkers to come through with better, faster solutions that address most organizations' legacy technologies and complex processes. Leveraging open source software, mobile, cloud and digital technologies, these new competitors are providing intuitive apps and tools, streamlined processes and a fresh approach to a usually monotonous services industry. In the process, they are changing the game and turning the financial services world upside down.

From payments to wealth management, from peer-to-peer lending to insurance, emerging FinTech initiatives threaten to grab $4.7 trillion in revenue and $470 billion in profits from traditional Wall Street firms.2 Moreover, with heavy financial backing—$12 billion in FinTech investments in 2014, up from $4 billion the prior year—this digital revolution can no longer be ignored.3


WHAT IS FUELING THE FINTECH FIRE?

While open source software, an agile delivery process, cloud technology and mobile computing form the foundation of the FinTech sector, its explosive growth has been driven by significant changes within the financial services industry. Most established organizations have been either too busy (particularly with regulatory changes), too cash strapped, too inflexible or simply unable to address these changes on their own. Seeing an industry ripe with opportunity, innovative start-ups have quickly stepped in to fill the void. Here are five main issues that have helped fuel the FinTech evolution:

1. Eye Not on the Ball: Changing Regulatory Landscape. Stringent regulatory changes over the last few years have increased reporting and transparency requirements and costs for all capital market participants. Many firms invested heavily in their legacy systems to meet requirements, but did little to streamline their processes or improve their analytical capabilities. Given the industry's shortcomings, new technologies have emerged to focus on institutional-only problems. Examples include Algomi, which leverages social concepts to ease bond trading, and Tradier, which provides full-fledged brokerage services as APIs.

2. No Money: Shrinking Budgets and Margins. In response to a wave of new regulations, most organizations made significant investments in their legacy systems and point solutions to achieve compliance. With rising cost pressures and shrinking profit margins, companies know they need newer and less costly delivery models, but most have little to invest in new technology initiatives. FinTech companies, unencumbered by complex systems and processes, are looking at all areas of finance for opportunities to simplify and improve margins. For example, Roostify is tackling the mortgage business by taking the traditionally complex process of buying a home and turning it into a streamlined, instant online process. Its cloud-based service helps lenders process loans faster and reduce risk, while improving the homebuyer's experience.

3. Falling Behind: Shifting Customer Preferences. Influenced by mobile technology and social media, rising customer experience and service expectations, as well as lower switching costs for customers to take their business elsewhere, have dramatically changed the competitive landscape for banks and other financial services companies. Held back by legacy systems and rigid business models, many organizations are finding it nearly impossible to deliver sophisticated, technology-driven solutions to meet their customers' changing preferences. FinTech start-ups have quickly stepped in. One such area is wealth management, where robo-advisors Wealthfront and Betterment are attracting young investors interested in financial advice yet seeking simplicity and speed. Betterment now manages $2.2 billion for 85,000 clients, while its rival Wealthfront has amassed $2.3 billion in 27,000 accounts.4

4. Everywhere Access: Growing Use of Mobile. Cisco Systems predicts the number of mobile users will rise to 4.9 billion in 2018 from 4.1 billion in 2013 as consumers in emerging markets come online.5 This continued growth in mobile users is fueling a wave of new mobile technologies, many focused on financial services, including mobile banking, payments, location-based commerce and personal financial management. According to CEB TowerGroup, bank investments in mobile banking technologies are expected to increase at an annualized rate of 13.4 percent through 2017.6 While customer demands have forced most banks to develop mobile banking apps, other financial service sectors have been slower to develop mobile-specific solutions.

5. Predictive Intelligence: Increasing Role of Analytics. Today, companies need forward-looking, predictive insights to help shape their business decisions. This is a significant challenge for many financial services organizations that operate with siloed databases and systems. FinTech companies, built from the ground up with a focus on data, are able to combine their internal data with external information, such as social media, demographics and big data, to quickly determine which efforts are most profitable, evaluate their risk exposure, streamline processes and analyze future margins. This fast analytical power enables FinTechs to quickly improve and enhance their products, provide highly personalized offerings and strengthen their competitive positions.


WHY ORGANIZATIONS CAN NO LONGER AVOID THESE INITIATIVES

As technology advances at an ever-increasing rate, customers, partners and employees of the financial services industry are demanding more in terms of exceptional service (e.g., faster access to information, personalized products, quicker response rates and transaction speeds, enhanced analytics and real-time decision support systems), and they are not willing to wait. Meeting these demands with five-year implementation plans is no longer feasible. In this age of digital transformation, agile organizations that can quickly integrate and drive innovation into the business will succeed. Those that do not are destined for the fate of Polaroid, which watched profits plummet as digital photography blossomed.

Determining the right path for your organization

Faced with a new brand of competitors, financial services organizations should quickly determine if and how these new initiatives map to their existing businesses, and whether they pose a significant opportunity or threat. Figure 1 highlights one suggested path to determine how best to integrate a FinTech initiative into an organization.

Figure 1: A framework for deciding how to embrace FinTech initiatives.

Step 1: Determine the impact of the initiative on your current market.

The first step is to define the FinTech initiative's core value proposition and what impact it currently has on your market. The goal is to determine whether an initiative will be of value to your customers (opportunity), is irrelevant at this time (no impact) or provides significant value to your customers to the detriment of your business (threat). Some areas to investigate are included in the following chart.

                                           Threat    Opportunity   No Impact
Strong customer/client demand for it       Yes       Yes           No
Have lost customers/clients to it          Yes       No            No
Entrants have captured market share        Yes       Expected      No
Has become an industry standard            Yes       Expected      No
Regulators require it                      Yes       Expected      No

If this initial assessment shows no impact, organizations should take a "wait and watch" position as the market and technology continue to evolve. If the initiative is clearly a threat or presents an opportunity to the business, management should move quickly to determine the best course of action (engage or defend). First, however, a company must determine how valuable the technology will be to its own business.

Step 2: Determine the value of the initiative to your business.

The more areas of the business an initiative can bring value to, the higher its overall value will be. Some areas to consider are included in the following chart.

                                               High Value   Low Value
Help retain existing customers/clients         Yes          No
Add new customers/markets/channels             Yes          No
Improve our existing process(es)               Yes          No
Reduce our costs                               Yes          No
Add to or diversify our products or services   Yes          No
Reduce our current risk                        Yes          No
Improve our compliance capabilities            Yes          No
Reduce our competition                         Yes          No
Fits our strategic vision for innovation       Yes          No

If an organization determines the technology will offer little or no value, it may decide on a "wait and see" strategy in order to preserve resources or eliminate the risk of potentially harming the business. A company may also opt to divest the product, service or business segment under attack and focus on opportunities that are more profitable. Additionally, an organization may decide to take defensive measures and leverage its established brand and competitive position to fend off the new competitor.

Step 3: Determine your organization's level of readiness and select a course of action.

If the value is deemed moderate to high, the business should consider the best course of action to "engage." This should be determined by evaluating a number of readiness factors, such as budget, culture, experience and desired time to market. These factors can assist an organization in deciding whether a strategic partnership, investing in or acquiring an established FinTech, or building the capabilities in-house is the best engagement model. For some organizations, a combination of the various approaches (e.g., partner and invest) will be most viable. A simple illustrative sketch of how these three steps might be scored follows the readiness chart below.

                                                      Partner*   Invest     Buy        Build**
Innovative culture exists (or will establish)         No         No         Yes        Yes
High technology aptitude exists (or will establish)   No         No         Yes        Yes
Culture easily embraces change                        No         No         Yes        Yes
Desired timeline to go to market                      < 1 year   < 1 year   < 1 year   < 1 year
Available investment budget                           Low        Low        High       High
Resources available to manage initiative              No         No         Yes        Yes
Strong strategic fit with target company              Yes        Yes        Yes        No
Past experience with partner/invest/buy/build model   Yes        Yes        Yes        Yes
Willing to share data/information with a partner      Yes        Yes        No         No

* Partner with either a FinTech leader or a third-party provider with a blend of technology, digital and financial markets expertise.
** Build innovation in-house or in collaboration with a third-party provider.
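To illustrate how the three-step assessment might be operationalized, the sketch below scores an initiative on impact and value and then suggests an engagement model from a few readiness inputs. The question lists, scoring rules and thresholds are hypothetical simplifications for illustration; a real assessment would weight and calibrate these to the organization rather than use this logic verbatim.

```python
# Illustrative sketch only: a toy scoring of the three-step FinTech assessment.
# Question lists, weights and thresholds are hypothetical, not a prescribed method.

IMPACT_QUESTIONS = [
    "strong customer demand", "lost customers to it", "entrants captured share",
    "becoming industry standard", "regulators require it",
]
VALUE_QUESTIONS = [
    "retain customers", "add customers/markets", "improve processes", "reduce costs",
    "diversify products", "reduce risk", "improve compliance", "reduce competition",
    "fits innovation vision",
]

def assess_impact(answers: dict) -> str:
    """answers maps an impact question to True/False."""
    hits = sum(bool(answers.get(q)) for q in IMPACT_QUESTIONS)
    if hits == 0:
        return "no impact: wait and watch"
    return "threat/opportunity: engage or defend"

def assess_value(answers: dict) -> int:
    """Count of value questions answered 'yes'."""
    return sum(bool(answers.get(q)) for q in VALUE_QUESTIONS)

def suggest_engagement(value_score: int, budget: str, tech_aptitude: bool,
                       open_to_sharing: bool) -> str:
    """Very rough mapping from value and readiness to an engagement model."""
    if value_score < 3:
        return "wait and see / divest / defend"
    if budget == "high" and tech_aptitude:
        return "buy or build"
    if open_to_sharing:
        return "partner or invest"
    return "invest"

if __name__ == "__main__":
    impact = {"strong customer demand": True, "entrants captured share": True}
    value = {"retain customers": True, "reduce costs": True, "improve processes": True,
             "fits innovation vision": True}
    print(assess_impact(impact))
    score = assess_value(value)
    print("value score:", score)
    print("suggested model:", suggest_engagement(score, budget="low",
                                                 tech_aptitude=False,
                                                 open_to_sharing=True))
```

Even a crude scoring harness like this forces the yes/no questions from the charts above to be answered explicitly and consistently across candidate initiatives, which is most of its value.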


The options in terms of the chosen engagement model include:

1. Partner with a FinTech leader or third-party services provider. The least risky and least expensive option to add value is to partner with an established player or innovative start-up. A strategic partnership can strengthen a company's competitive position and reduce the time needed to develop and bring to market new products or services. For example, the UK's Santander Bank has partnered with peer-to-peer lender Funding Circle to grow its small business loan business.7 Santander refers rejected business loan applicants to Funding Circle for assistance. In return, Funding Circle directs businesses to Santander when they require traditional banking services such as a relationship manager, international banking and cash management.

2. Invest in a FinTech. Some organizations may prefer to have a bigger stake in a FinTech initiative, particularly if the value to its market is high and time to market is critical, in which case the organization may invest directly or form a dedicated VC arm to fund promising technology. Goldman Sachs has made significant investments in payment platforms Square and Revolution Money; payments security firm Bluefin Payments; and bill presentment and payment start-up Billtrust.8

3. Acquire a FinTech start-up. For many established financial players, legacy technology and the lack of an innovative culture are huge barriers to making an in-house innovation lab successful. For these organizations, the best option for gaining new technology and innovative processes is to acquire an existing FinTech. This approach can often be less risky if the technology is already established, and can add immediate value to the business in terms of revenue and customers. Additionally, buying new intellectual property, products or services may be more cost effective than developing them in-house. Earlier this year, DH Corp of Toronto acquired Fundtech for $1.25 billion.9 Fundtech's transaction banking software will help DH expand its service offerings to global financial institutions and large US banks. Last year, BBVA acquired the start-up Simple, a Portland-based bank that operates entirely online, for $117 million.10

4. Build innovation in-house or in collaboration with a third-party provider. While a more expensive approach, the build option enables a business to develop innovations from the ground up and enhance existing ideas already in the market. This is just what Charles Schwab did earlier this year: the launch of its Intelligent Portfolios platform seeks to capture a piece of the rapidly growing robo-advisory market. According to an A.T. Kearney report, robo-advisors will manage 5.6 percent of Americans' investment assets, totaling about $2 trillion, by 2020.11 Others are establishing innovation labs to discover and build their own FinTech innovations. Capital One has three Digital Innovation Labs in the United States tasked with advancing Capital One's enterprise-wide digital agenda.12 Its team of entrepreneurs focuses on building products and experiences for Capital One customers. When a company lacks specific capabilities, it can partner with a service provider that offers a blend of technology, digital and finance expertise, providing innovative insights and helping to accelerate the project.

CONCLUSION

Today's digital technology can bring both great opportunities and daunting challenges to the financial services industry. New business models have already proven they can reduce costs, create efficiencies and improve the customer/client experience. Banks and other financial services organizations can no longer avoid embracing FinTech initiatives. Stepping into this emerging world does not need to feel threatening, overwhelming or impossible. By using a pragmatic approach, as well as seeking strategic guidance from consultants who understand the FinTech space, organizations can quickly understand what they are up against and determine the best approach for future success.


Resources

1. "Financial Inclusion 2020," Center for Financial Inclusion, June 2013: http://www.centerforfinancialinclusion.org/fi2020/mapping-the-invisible-market/growing-incomegrowing-inclusion
2. "The Next Big Acquisition Craze: FinTech," USA Today, May 18, 2015: http://www.usatoday.com/story/money/2015/05/17/silicon-valley-wall-streetfintech/27321361/
3. "The FinTech Revolution: A Wave of Startups is Changing Finance—For the Better," The Economist, May 9, 2015: http://www.economist.com/news/leaders/21650546wave-startups-changing-financefor-better-fintechrevolution
4. "Robo Advisors Take On Wall Street," Barron's, May 23, 2015: http://www.barrons.com/articles/robo-advisorstake-on-wall-street-1432349473
5. "Mobile Traffic Will Continue To Rise, Rise, Rise As Smart Devices Take Over The World," Forbes, February 5, 2014: http://www.forbes.com/sites/connieguglielmo/2014/02/05/mobile-traffic-willcontinue-to-rise-rise-rise-as-smart-devices-take-overthe-world/
6. "The Rise of FinTech: New York's Opportunity for Tech Leadership," Accenture, 2014: http://pfnyc.org/wpcontent/uploads/2014/06/NY-FinTech-Report-2014.pdf
7. "Goldman Sachs Investment Activity into FinTech Startups Intensifies," CB Insights, May 10, 2015: https://www.cbinsights.com/blog/goldman-sachs-fin-tech-startups/
8. "DH Expands FinTech Portfolio by Acquiring FundTech," Proactive Investors, March 31, 2015: http://www.proactiveinvestors.com/companies/news/60775/dhexpands-fintech-portfolio-by-aquiring-fundtech-60775.html
9. "Banking Startup Simple Acquired For $117M, Will Continue To Operate Separately," TechCrunch, February 20, 2014: http://techcrunch.com/2014/02/20/simpleacquired-for-117m-will-continue-to-operate-separatelyunder-its-own-brand/
10. "Robo Advisors: The Next Big Thing in Investing," CNN Money, June 18, 2015: http://money.cnn.com/2015/06/18/investing/robo-advisor-millennials-wealthfront/
11. "Peek Inside 7 of The Banking World's Coolest Innovation Labs," The Financial Brand, June 8, 2015: http://thefinancialbrand.com/52177/7-of-the-coolest-innovationlabs-in-banking/

THE AUTHOR

Sean O'Donnell is a Director of Technology based in London. Sean's background is in designing, building and running financial trading platforms, particularly in FX, CFDs, metals and some money markets for tier one, tier two and large broker clients. His technology experience is primarily in Java and open standard platforms, KDB+, UX, and high-performance/scale technologies deployed as cloud-based services. Over the last 10 years, Sean has worked in Product Management roles, bridging business and technology, and driving business development in Europe, North America and Asia. [email protected]


CLOUD-BASED SOLUTIONS:

why the time is right for asset managers to consider adoption

Faced with a slew of new industry, client and technology pressures, today's asset management landscape looks quite different than it did just 10 years ago. In this article, Manish Moorjani discusses the evolution of the asset manager's ecosystem, looks at the technology solution transformation with the advent of cloud computing and details the decision criteria CIOs can use to determine whether cloud solutions fit into their future plans and strategies.

Over the past decade, the asset management business has been impacted by several factors, including the popularity of exchange-traded funds (ETFs) as investment vehicles; new regulations; the rise of automated investment services, such as robo-advisors; increased scrutiny around risk management; and a focus on the end-user experience due to the growing popularity of tablets and smartphones. These factors have led to the "new world" of asset management, with a different set of opportunities and challenges.

Technology solutions have also gone through a significant transformation. In the past, firms could either develop custom technology solutions or implement a commercially available "off the shelf" product. That decision was primarily influenced by the importance of the business function, the ability of the internal technology team to support it and the availability of mature products in the space. However, the advent of cloud computing has added a completely new dimension to this decision-making process. It has opened up avenues to host custom-developed applications on third-party-managed platforms and created opportunities to use software as a service (SaaS). According to a recent survey, using cloud-based solutions rather than onsite solutions over a four-year period could lower the total cost of ownership by 55 percent.1 Numbers like these are making CIOs around the world take notice and realize that cloud computing provides an opportunity to shift to an entirely new technology operating model. With this shift, IT can move from managing applications on an internal infrastructure to managing the integration of different cloud services, platforms and cloud-based solutions.

While the potential benefits are clear, firms must conduct proper due diligence and understand the impact before making the move. In the case of SaaS, for example, it is important to consider factors such as information security and integration with the existing application architecture. For platform as a service (PaaS) offerings, it is important to understand the impact of adapting the new ecosystem to the existing IT organization.

THE "NEW WORLD" OF ASSET MANAGEMENT

Traditionally, asset managers have viewed investment management, or front-office functions, as generating returns, while the middle- and back-office functions diminish those returns. However, this perspective has been challenged in recent years due to several disruptions to the business model, as shown in Figure 1.


Figure 1: Key business and market pressures.

These market pressures are forcing asset managers to think about optimizing operations in order to deliver higher operational efficiency, which in turn would lead to more profitability for their clients. Asset managers are discovering ways to broaden their product offerings and improve the quality of information they provide to their customers. The matrix in Figure 2 defines specific themes for business functions based on their impact. The two main themes are as follows:

› Economize and Standardize: Add value by standardizing a business function in a cost-effective manner. For example, extensive regulatory reporting is a reality for today’s asset managers. An asset manager who can meet regulatory requirements on time and cost effectively will add real value to the business and also build credibility with customers and regulators.

› Optimize and Differentiate: Offer a unique value proposition to the customer in the most optimal manner. For example, an asset manager can provide custom product offerings with risk and return characteristics that meet the requirements of sophisticated institutional investors.


Figure 2: Business functions and “new world” themes.


Figure 2 shows that the paradigm of a profit center versus a cost center is far less relevant in the new world, and the lines between how asset managers view the front, middle and back offices have become blurred. For example, trade execution, which used to be core to asset management and was expected to drive differentiation, is becoming more standardized across the industry. Today, customers increasingly evaluate asset managers on their other functions, such as risk management and client reporting. In a CEB TowerGroup survey, 53 percent of executives said that client reporting provides high business value and is a key differentiator.2

TECHNOLOGY SOLUTION MODELS
The traditional approach to solving a business problem was limited to custom development or package implementation to meet the requirements—both of which came with the additional overhead of application hosting and maintenance. The introduction of cloud computing has added a new dimension to these solutions, enabling firms to completely break free from application hosting responsibilities and reduce maintenance overhead. Cloud-based solutions, however, are perceived as posing challenges in terms of information security or leakage (specifically when the solution is built and managed by a competing firm), integration with a firm’s existing technology infrastructure and change requirements for existing processes. Figure 3 describes the four major post-cloud technology solution models available, along with their strengths and weaknesses.

Figure 3: Post-cloud technology solution models.


Some of the factors listed in Figure 3 complement each other, and the different models therefore cater to different business functions. For example, SaaS provides the ability to standardize, which is helpful for regulatory reporting, where organizations want to stay aligned with industry standards. Custom solutions provide a greater degree of differentiation, which is something an asset manager requires for activities like portfolio management. Another point of caution: the strengths and weaknesses of these solution models are based on a set of assumptions and trends typically seen in the industry, and there are always exceptions to the rule. For example, several SaaS-based products are able to provide information security and can be easily integrated. Similarly, some custom-built solutions were developed in a way that lowers the cost of ownership compared to a SaaS offering.

Figure 4: Solution model table.

TECHNOLOGY SOLUTIONS FOR THE NEW WORLD OF ASSET MANAGEMENT
To stay competitive and ahead of the curve, asset managers must evolve in response to industry change, maximize opportunities and successfully tackle new challenges. The difficulty, however, lies in determining what to change and how to make the change in terms of people, processes and technology. Figure 4 offers a point of view on answering the question, “Which solution model best caters to the needs of specific business functions in the asset management world?” Areas in blue highlight the most preferred solution, while the red areas highlight the least preferred one.

The solution model table highlights an interesting pattern around the use of SaaS, specifically for middle- and back-office functions. It shows that SaaS (points 2 and 3) is the preferred option to solve problems when lower cost and standardization are the most important factors, whereas a custom solution (point 1) is the right approach for functions where differentiation is the primary driver.

Where do packaged implementations and PaaS solutions fit in? These solutions are relevant when:
› Differentiation is important but cost is still a major driver
› The firm lacks the in-house capability needed to build a custom solution
› Time to market is critical
› The firm uses a mature industry product and customizes it to meet the business need

It is important to note that the model in Figure 4 is one point of view based on a certain set of assumptions around the business model, size and focus area of an asset manager, and may require adjustments to meet specific needs. For example, if an organization with a world-class trade execution system that is tightly coupled with an upstream (order management) and downstream (trade confirmation and accounting) system wants to replace or upgrade the trade confirmation system, it would need to decide between building a custom solution or implementing a SaaS offering. The firm would also need to understand how the new system would impact the trade execution and accounting systems. Additional factors to consider include the level of customization within existing user processes.

CIOs who follow this framework should take these steps to make an informed decision:
› Review the business needs in conjunction with the existing application ecosystem
› Short-list the business functions that need a significant technology investment, either to meet critical new business requirements or to replace an existing application that can no longer support the business
› Agree on the theme for each business need (Optimize & Differentiate versus Economize & Standardize)
› Prioritize the business needs based on themes and timelines, which will lead to a roadmap
› Based on the proposed framework, select the top two recommended solution models for further evaluation, taking into account firm-specific factors such as size, technology capability, time to market, etc. (one way to structure this comparison is sketched after this list)
› Pick a solution model and begin defining the technology stack or evaluate off-the-shelf products
› Choose between using a cloud-based solution or one that would be hosted by the asset manager
› Perform a proof of concept to validate the approach and ensure the solution will meet the business requirements
› If it does, proceed with a full implementation
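The sketch below shows one hypothetical way to structure the solution-model comparison referenced in the steps above: score each model against weighted, firm-specific criteria. The criteria, weights and scores are illustrative assumptions only, not part of the framework described in this article.

```python
# Minimal sketch: score solution models against weighted criteria.
# Criteria, weights and scores (1 = weak, 5 = strong) are hypothetical assumptions.

CRITERIA_WEIGHTS = {
    "differentiation": 0.20,
    "cost_of_ownership": 0.30,
    "time_to_market": 0.25,
    "in_house_capability_fit": 0.15,
    "information_security": 0.10,
}

# Example scores for a middle-office function such as regulatory reporting.
MODEL_SCORES = {
    "custom_build":     {"differentiation": 5, "cost_of_ownership": 2, "time_to_market": 2,
                         "in_house_capability_fit": 3, "information_security": 5},
    "packaged_product": {"differentiation": 3, "cost_of_ownership": 3, "time_to_market": 4,
                         "in_house_capability_fit": 4, "information_security": 4},
    "paas":             {"differentiation": 4, "cost_of_ownership": 3, "time_to_market": 3,
                         "in_house_capability_fit": 3, "information_security": 3},
    "saas":             {"differentiation": 2, "cost_of_ownership": 5, "time_to_market": 5,
                         "in_house_capability_fit": 4, "information_security": 3},
}

def weighted_score(scores: dict) -> float:
    """Sum of criterion score times its weight."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

if __name__ == "__main__":
    ranked = sorted(MODEL_SCORES, key=lambda m: weighted_score(MODEL_SCORES[m]), reverse=True)
    for model in ranked:
        print(f"{model:18s} {weighted_score(MODEL_SCORES[model]):.2f}")
```

The weights would be set per business function; a function tagged Optimize & Differentiate would weight differentiation more heavily, while an Economize & Standardize function would weight cost and time to market.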

CLOUD-BASED SOLUTIONS USED BY ASSET MANAGERS
Cloud-based solutions are not new to the asset management space. According to a 2014 CEB TowerGroup report, more than 71 percent of firms confirmed their intent to adopt cloud computing or increase its usage by 2017.3 This will happen when firms feel more comfortable with SaaS-based solutions and the right products are available for adoption through the cloud. Interestingly, the survey says that 2015 will be the year in which product vendors will start supporting a majority of their products as cloud-based solutions. Providing cloud-based solutions also benefits service providers; it helps to reduce software upgrade and maintenance costs, as well as optimize application performance and support through a centralized application and client support team.


A number of players in this space offer product suites (in addition to stand-alone products) to reduce the integration challenges associated with cloud computing. For example, Charles River’s Investment Management solution is a suite of products for portfolio management, compliance, trading and order management, execution, trade settlement, risk and attribution, position and cash management, covering almost all functions across the front and middle offices.4 Similarly, Bloomberg Asset and Investment Manager (AIM) is a SaaS-based solution that focuses on portfolio management, order management, compliance and trade matching/settlement and integrates well with other Bloomberg platforms like EMSxNet, Bloomberg FIT and BVAL. It also integrates with BVAULT, which is Bloomberg’s data archiving platform, and can be used to meet regulatory and legal reporting needs.

Newer asset managers who are still building their client bases can opt for a SaaS-based solution rather than making a significant investment in building and maintaining a technology infrastructure. Likewise, these solutions are appealing to niche asset managers who want to focus on specific client segments and may never want to grow too large. On the other hand, solutions like Barclays POINT, BI-SAM Go and Wilshire AXIOM focus on providing business value in specific areas such as risk, analytics and attribution. Risk and performance attribution functions require software to process large quantities of transactional and reference data using complex mathematical models to generate the desired output. This activity utilizes considerable computing power in short periods of time. In this scenario, SaaS-based solutions are ideal since computing power can be made available on demand.

In the back-office space, products from SIMCORP, such as CORIC Web Report, and Kurtosys are providing SaaS-based solutions for client reporting. This is one area that is growing in popularity as more asset managers look to support changing client needs across multiple platforms, including tablets.


With so many players in this space offering a variety of cloud-based solutions (including stand-alone products and complete product suites) across the front, middle and back office, why are we still talking about cloud adoption? Although a 2014 survey by the TABB Group claimed that 71 percent of asset managers intend to use cloud solutions, a TABB survey published in Q1 2015 provides a slightly contrasting picture.5 Only 23 percent of respondents said they were comfortable using a public cloud and more than two-thirds cited concerns such as compliance, security and data control. In a similar survey by NASDAQ OMX (for capital market firms) in 2013, the same proportion of respondents, only 23 percent, said that they were actively embracing the cloud.6 These results seem to suggest that perceptions and adoption have not changed significantly over the last two years. They also indicate that the information security and integration concerns identified in those studies are still preventing today’s firms from fully embracing cloud-based solutions.

To address these issues, companies have begun to explore a hybrid cloud approach that connects data centers, public and private clouds in any combination. In a hybrid model, firms have the ability to use a public cloud for non-sensitive data or testing and rely on a private cloud for critical data and applications. While retaining their independence, the individual clouds are bound together to facilitate the portability of data and applications. Products such as VNS3:net by Cohesive Networks allow firms to build their own custom cloud network, enabling them to extend onto a public cloud infrastructure while remaining inside their own network. Even with new options like these, the current adoption and usage of cloud-based solutions suggests that there is still some way to go before the industry is prepared to unlock the full potential of the cloud.

CONCLUSION
In today’s “new world” of asset management, cloud-based solutions are available to help asset managers gain a competitive advantage. Even though the benefits of cloud computing have been well documented, firms still need to use a logical framework for evaluating and selecting the right technology solutions. They can choose to follow a bottom-up model that begins with the business need, such as the one presented in this article, or a top-down model that looks at what other firms in the industry or related industries are doing and then decide which technology solutions to adopt. Regardless of the model, it is important to view adoption criteria through the same lens to ensure any new solutions will help firms achieve their business goals.

THE AUTHOR
Manish Moorjani, CFA, FRM, is a Senior Manager of Business Consulting at Sapient Global Markets with more than 12 years of experience working with firms in the capital and commodity markets. Manish has worked on the design and implementation of portfolio management, order management and trade execution systems across multiple asset managers and investment banks. He has also led multiple business and development teams across middle- and back-office functions. [email protected]

Resources
1. Confluence, “Five Drivers of the Cloud in Asset Management”: http://www.confluence.com/uploads/mkt-i-whitepaper_five-drivers-of-cloud-_final_edited_2.pdf
2. CEB, “Use Client Reporting As a Market-Facing Differentiator”: https://www.executiveboard.com/blogs/use-client-reporting-as-a-market-facing-differentiator/
3. The Bull Run, “Cloud Computing Adoption in Asset Management”: https://thebullrun.wordpress.com/2014/03/19/cloud-computing-adoption-in-asset-management/
4. Charles River, “SaaS Managed Services”: http://www.crd.com/assets/pdfs/Charles_River_SaaS_ManagedServices_US.pdf
5. CloudTech, “Cloud making inroads into the capital markets sector”: http://www.cloudcomputing-news.net/news/2015/feb/23/cloud-making-inroads-capital-markets-sector-report-finds/
6. InformationWeek Wallstreet & Technology, “Capital Markets Cloud Adoption: Food for Thought or Empty Calories?”: http://www.wallstreetandtech.com/infrastructure/capital-markets-cloud-adoption-food-for-thought-or-empty-calories/a/did/1267911


THE BUSINESS CAPABILITY MAP:

a critical yet often misunderstood concept when moving from program strategy to implementation

The use of business capabilities for planning and analysis has been on the rise in recent years, yet the value they provide is not fully understood. For most critical initiatives at large firms, the transition from vision to strategy to implementation is a multiyear program involving numerous stakeholders. During the early inception phases—amidst aggressive timelines, pressure to produce estimates, budget constraints and other challenges—project teams tend to jump into execution without truly comprehending the overarching business strategy. It is difficult to visualize end-to-end risk and business value without a business capability map. In this article, Shiva Nadarajah and Atul Sapkal show where business capabilities fit into the overall program cycle and how and when they can be used to drive meaningful decisions to gain alignment from strategy through execution. Additionally, they illustrate the value of the business capability map using an example from the wealth management industry.

Business unit (BU) heads typically focus on new markets, products and customers on the revenue side and operational challenges on the cost side. Most capital budget projects for large companies follow a top-down approach that begins by defining a program vision, the business strategy and then finally, the IT strategy and execution roadmap. For a multiyear initiative, the business case may be revisited every year only to find that costs keep increasing and that the initiative fails to achieve its desired value either due to delays, higher costs or lost business opportunities, which can have a real impact on a company’s short-term and long-term profitability. Information technology plays an important role in actualizing the business strategy. Because many CIOs are gaining more visibility with their boards and BU heads, they are well-informed of upcoming strategies and can begin evaluating the IT impact and plan accordingly. Despite this partnership between business and IT, a large percentage of projects do not meet their schedules and/or budgets and ultimately fail to deliver the anticipated value. Though each failed project is unique, a common theme among most unsuccessful projects is the inability to manage risk and determine clear priorities for delivering value. The vision and goals articulated by the business do not reach IT in the right framework, causing information and priorities to get lost along the way. Also, IT may not communicate the platform and technology risks to the business in a timely and comprehensive manner. But who is to blame — business or IT? Ultimately, it’s neither. These problems stem from a lack of a common language or an effective method of communication. This is where business capability mapping can help.


WHAT ARE BUSINESS CAPABILITIES?
Business capabilities define “what” a business does, not the “why” or “how.” Business capability mapping is about structuring the functional capabilities of a business in a hierarchical fashion. Business capabilities are a good way to connect the vision of various stakeholders to the execution value chain. Capabilities are a great communication tool across the organization, and breaking down capabilities to the appropriate level of granularity can help bridge the gap between different functional and IT groups. The key is to map capabilities to strategic components and the vision as the initiative moves up the strategy chain, and to map them to requirements and IT initiatives as the project moves through the execution chain (see Figure 1).

Consider an example from the wealth management industry. Taking a holistic view of the business unit or organization, the BU head and corporate strategy team together define the vision for the business unit. The next step involves the corporate strategy team working with the front office and other stakeholders to define the strategic components that may include the business case and the underlying business model needed to realize the vision. Once the high-level business strategy has been identified, business product managers, in collaboration with consultants, corporate strategy and relevant stakeholders, create the business capability map, which connects the high-level strategic components to granular and actionable business capability areas. At this stage, the business capability (such as Insurance Planning) may be aligned to a strategic component (such as Asset Protection). Ideally, every strategic component should have relevant business capabilities identified. Some examples include: linking the strategic component “Asset Transfer” to the capability “Estate Planning,” or linking the strategic component “Financial Plan” to the capabilities “Client Profile Management” and “Multi-goal Planning.” Once the capability map is complete, it can be linked to business processes and services. At the end of the exercise, relevant IT projects and initiatives can be identified.
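To make these linkages concrete, here is a minimal sketch of how capability map entries might be represented in code, reusing the wealth management examples above and the service names shown in Figure 1. The data structure and field names are illustrative assumptions, not a prescribed model.

```python
# Minimal sketch of a capability map linking strategic components to
# capabilities, services and IT initiatives. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    strategic_component: str                           # the strategic component it supports
    services: list = field(default_factory=list)       # e.g., getAccountData()
    it_initiatives: list = field(default_factory=list)

capability_map = [
    Capability("Estate Planning", strategic_component="Asset Transfer"),
    Capability("Insurance Planning", strategic_component="Asset Protection"),
    Capability("Client Profile Management", strategic_component="Financial Plan",
               services=["getAccountData()"]),
    Capability("Multi-goal Planning", strategic_component="Financial Plan",
               services=["calculateTaxRate()", "analyzeCurrentAssetMix()"],
               it_initiatives=["BPM Tool Implementation"]),
]

def capabilities_for(component: str):
    """Which capabilities realize a given strategic component?"""
    return [c.name for c in capability_map if c.strategic_component == component]

if __name__ == "__main__":
    print(capabilities_for("Financial Plan"))
    # ['Client Profile Management', 'Multi-goal Planning']
```

Even a simple structure like this lets both business and IT query the map in either direction: from a strategic component down to the services and projects that deliver it, or from a project back up to the value it supports.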

Figure 1. Where business capability fits within the organizational/program context:

› Vision (BU Heads, Corporate Strategy): an aspirational statement of where the organization or business unit would like to be in the near term or long term. E.g., provide a customized high-net-worth offering that builds long-term relationships in the $2M+ investable asset customer segment.
› Strategy (BU Heads, Corporate Strategy, Distribution/Front Office): a method, approach or plan of action, which may include how organizational resources, skills, tools and stakeholders across the value chain are aligned and optimized to achieve a desired result or reach a goal; it may also include the business case and business model. E.g., Asset Protection, Income Protection, Comprehensive Financial Plan.
› Business Capabilities (Corporate Strategy, Distribution, Business Product Management): define “what” a business does, not “why” or “how”; business capability mapping structures the functional capabilities of a business in a hierarchical fashion. E.g., Estate Planning, Insurance Planning, Client Profile Management, Multi-goal Planning.
› Business Processes/Value Streams (Distribution, Business Product Management, Mid-Office, Back-Office): a series of logically linked activities and tasks that collectively accomplish a goal; may also define the guidelines, tools, timeframes and information exchanged as stakeholders interact to add value at each step. E.g., Establishing Client Relationship, Gathering Client Information, Developing Recommendations, Transferring Assets.
› Services (Business Product Management, IT Product Management, IT Architect): a self-contained function that delivers a specific business outcome. E.g., getAccountData(), calculateTaxRate(), analyzeCurrentAssetMix().
› IT Projects and Initiatives (IT Product Management, IT Architect, Production Support): an organized activity with a defined goal and distinct timeframe. E.g., BPM Tool Implementation, Third-party Data Integration.


HOW AND WHERE CAN BUSINESS CAPABILITIES BE USED?
Business capabilities can be used across the board in a variety of business situations, such as:
› Post-merger and acquisition (M&A) IT consolidation. For M&A integration projects where cost synergies can be achieved by consolidating certain core and support functions, a business capability map for both organizations can act as a starting point to identify areas of overlap.
› Strategic planning and IT investments. Business capabilities provide a foundation for the capital budgeting exercise for multi-year, long-term planning. A gap analysis between current-state and future-state business capabilities can identify key areas of investment and help allocate planning dollars appropriately.
› Product definition and roadmap. A new service, product or offering that needs to be launched can use a capability map to conceptualize the overall offering. In an agile- and Minimum Viable Product (MVP)-based product culture, a capability map can keep the final product vision in perspective while defining and articulating the product roadmap.
› Application portfolio rationalization. The same functionality is often duplicated across different applications in different business units of the same organization. Any business case for application portfolio rationalization could benefit from a business capability map as the primary input.

HOW TO DRAFT BUSINESS CAPABILITIES
While there is no one-size-fits-all approach, firms can use a top-down or bottom-up approach, or a combination of the two. Figure 2 illustrates a four-step, top-down approach.

Figure 2. A four-step approach to draft business capabilities:

A. Industry and Vision Alignment (outcome: Program Vision Canvas, optional)
1. Understand the key industry drivers and company strengths and how those feed into the program vision.
2. Create a program vision canvas to understand various components, such as:
   a. What is the value being created?
   b. Which customer segments see the highest value?
   c. How does the business model change the relationship with existing customers and what new relationships will be established?
   d. Who are the key stakeholders in the value chain?
   e. What channels are used to interact with customers and stakeholders?
   f. What are the key activities performed?
   g. What are the key costs that could impact the program in the long run?
   h. What are the key sources of revenue?
   i. What are the key strategic components and what are the key performance indicators (KPIs) for these components?

B. Capability Draft: Level 1, Level 2 Creation (outcome: Level 1, Level 2 Capability Map)
1. Create a Level 1/Level 2 capability map. Start with a Level 1 capability map by focusing on the following areas:
   a. Activities related to the customer, such as onboarding, account access, etc.
   b. Activities related to stakeholders, such as lead management, book management, etc.
   c. General activities applicable to the industry domain, such as custody services, account aggregation, etc.
   d. Specific activities related to the key product/service/offering, such as investment planning, tax planning, insurance planning, etc.
   e. Distribution channel-related activities, such as client service management, relationship management, etc.
   f. Business support activities, such as accounting, risk, compliance, etc.
   g. Shared service-related activities, such as customer data warehouse management, infrastructure management, etc.

C. Capability Review and Prioritization (outcome: Prioritized List of Business Capabilities)
1. Review Level 1/Level 2 with key business and IT stakeholders to identify any missing capabilities.
2. Prioritize Level 2 capabilities in a 3 by 3 matrix of “Customer Value” versus “Business Unit Strength” (see Figure 3):
   a. Capabilities with high customer value and BU strength can have a high priority on the roadmap.
   b. Capabilities with the lowest customer value and BU strength can have a low priority on the roadmap.

D. Capability Decomposition (outcome: Level 3, Level 4 Capability Map)
1. Focus on capabilities that have high customer value and start distilling them further into Level 3 and Level 4 capabilities wherever applicable.
2. These Level 3 and Level 4 capabilities can be translated into business requirements and mapped to services.


Figure 3. Prioritizing business capabilities for the roadmap. The matrix maps each business capability’s “Customer Value” against “Business Unit Strength” to assign a roadmap priority:

› High customer value: higher priority (high BU strength), medium-to-higher priority (medium BU strength), medium priority (low BU strength)
› Medium customer value: medium-to-higher priority (high BU strength), medium priority (medium BU strength), lower-to-medium priority (low BU strength)
› Low customer value: medium priority (high BU strength), lower-to-medium priority (medium BU strength), lower priority (low BU strength)
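To make the rule in Figure 3 easy to apply programmatically, the sketch below encodes the matrix as a simple lookup table. It is a minimal illustration assuming a three-point scale for both dimensions; it is not part of the authors' framework.

```python
# Minimal sketch of the Figure 3 prioritization matrix.
# Customer value and BU strength are rated "high", "medium" or "low".

PRIORITY_MATRIX = {
    ("high", "high"): "Higher",
    ("high", "medium"): "Medium-to-Higher",
    ("high", "low"): "Medium",
    ("medium", "high"): "Medium-to-Higher",
    ("medium", "medium"): "Medium",
    ("medium", "low"): "Lower-to-Medium",
    ("low", "high"): "Medium",
    ("low", "medium"): "Lower-to-Medium",
    ("low", "low"): "Lower",
}

def roadmap_priority(customer_value: str, bu_strength: str) -> str:
    """Look up the roadmap priority for a capability."""
    return PRIORITY_MATRIX[(customer_value.lower(), bu_strength.lower())]

if __name__ == "__main__":
    print(roadmap_priority("High", "Medium"))   # Medium-to-Higher
    print(roadmap_priority("Low", "Low"))       # Lower
```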

BUSINESS CAPABILITY CHALLENGES
Creating and maintaining business capabilities is not necessarily a straightforward process. A major roadblock is that stakeholders often lack clarity and awareness of the use and advantages of business capabilities, which leads to weaker support for the initiative across the organization. Teams have a difficult time appreciating the business capability effort until they see something tangible. To address this issue, the business capability team should consider educating the audience on the benefits of business capabilities using case studies and examples. Another hurdle is confusion over capability governance. This occurs when there is no clarity in the organization about who is responsible for creating and managing the capability map. To resolve this, the business sponsor should identify the appropriate owners for the business capability map. The capability owners should act as champions for the organization-wide business capability effort. Additionally, some capability discussions move into the “how” instead of the “what” too early in the process, thereby derailing the core objective of the business capability effort. The team tasked with the effort should create business capability guiding principles early on to align the group toward the long-term objective of the initiative.

Wealth management example
When an asset manager wants to enter the private wealth management space, its corporate strategy team builds a business case that includes the operating model (e.g., broker/dealer with a hub-and-spoke model) and major strategic components (e.g., asset protection, income protection) that describe how the new business unit will operate within the overall organization.


The business product manager and corporate strategy team work in collaboration to take the strategy to the next level. At this stage, the business product manager makes sure the offering/service that needs to be defined and built aligns with the overall program vision. The product manager understands the key industry drivers, such as the importance of technology in financial planning, fee transparency and evolving demographics; recognizes the company’s strengths, such as strong distribution and economies of scale; identifies the key customer segments (e.g., >$2M in investable assets, specific geography, etc.); and studies the proposed operating model. Also, key stakeholders in areas such as risk, legal and distribution are identified and the strategic components are clearly defined.

The next step is creating Level 1 (e.g., Planning, Advisor Support, Customer Engagement) and Level 2 (e.g., Advisor Support broken down to Level 2 capabilities, such as Lead Management, Book Management and Alerts) business capabilities and prioritizing them with key stakeholders. The priority is based on the customer value and company/business unit strength. Once the key capabilities that need further definition are identified, the team delves into Level 3/Level 4 capabilities. The next logical step is defining business requirements and linking these to existing business services, such as getAccountData(), calculateTaxRate(), etc. This can lead to further definition of other IT projects, including implementation of business process management (BPM) tools and third-party data integration.

At the same time, IT clearly knows what business capabilities they are delivering and how they fit into the context of the overall program. An example from an IT perspective could be: the project “BPM Tool Implementation” that is delivering “Shared Services” and “Advisor Support” business capabilities is on track, but the project “Third-Party Data Integration” that is delivering the business capability “Third-Party Services” will not meet its schedule. Depending on the priority of the business capability, IT would know how to realign and reprioritize IT projects as needed. Also, the capability map helps IT understand the business rationale and priority for investment. Though IT is working on delivering specific business capabilities, they can relate their project to the overall business context and accelerate specific sections of the project as needed. Here, the capability map acts as a prioritization tool to reallocate IT resources and link it to business value. Essentially, the detailed capability map becomes the de-facto communication tool between business and IT as IT begins implementation.

Once the business capability definition is complete, it becomes clear that the business capability map plays an important role in bridging the gap between strategy and implementation. The business product manager can use this capability map to communicate status to business stakeholders and senior executives during quarterly or semi-annual executive updates. An example of a status update could be: the project is on track to deliver the “Advisor Dashboard -> Book Management” and “Advisor Dashboard -> Lead Management” capabilities, but it is behind schedule to deliver the “Third-Party Services” capabilities. The key decision maker can use this status update in the context of the overall capability map to understand which projects need further investment depending on the capability delivery schedule. In this capacity, the capability map acts as a decision-making tool for prioritizing IT investments.


Figure 4. Preview of a sample capability map for private wealth management:

› Level 1 capabilities: 1.1 Planning, 1.2 Advisor Support, 1.3 Customer Engagement, 1.4 Channel Management, 1.5 Methodology, 1.6 Platform, 1.7 Business Support, 1.8 Third-Party Services, 1.9 Shared Services
› Level 2 capabilities (under 1.1 Planning): 1.1.1 Wealth View, 1.1.2 Strategic Goals, 1.1.3 Investment Planning, 1.1.4 Retirement Planning, 1.1.5 Insurance Planning, 1.1.6 Estate Planning, 1.1.7 Small Business Planning, 1.1.8 Product Selection and Implementation, 1.1.9 Tax Planning
› Level 3 capabilities (under 1.1.9 Tax Planning): 1.1.9.1 Assumptions, 1.1.9.2 Tax Analysis, 1.1.9.3 Reporting and Communication, 1.1.9.4 Tax Strategies
› Level 4 capabilities (under 1.1.9.4 Tax Strategies): 1.1.9.4.1 Tax Estimation, 1.1.9.4.2 Scenario Modeling
› Example business requirements (linked to the Level 4 capabilities): ability to model a withdrawal hierarchy during retirement; ability to model a savings hierarchy for retirement; ability to model deductions that can be deferred to avoid Alternative Minimum Tax; ability to model the tax impact for different filing statuses

CONCLUSION
The use of business capability maps is on the rise, but “business capabilities” as a discipline will take some time to become a mainstream tool for planning, communicating and actualizing business strategies. The key to success is increased adoption by mid-level executives from both the business and IT. In an era when time to market is a priority for new products and services, business capability maps, when combined with the Agile methodology, can be an important tool to articulate, realize and balance both short-term and long-term organization and project goals.

THE AUTHORS Shiva Nadarajah is a Director with expertise in the assessment and delivery of largescale, mission-critical engagements. He has experience conducting holistic reviews of large, complex programs to help clients diagnose potential effectiveness issues and recalibrate and mobilize initiatives that require rapid turnaround. Shiva helps executives evaluate the case for investment and manages teams that work with clients to establish programs for success. [email protected]

Atul Sapkal is a Boston-based business and management consultant providing tactical and strategic consulting services to clients in financial services, particularly capital markets and wealth management. With over 11 years of experience developing new products and services, designing and implementing enterprise solutions and effecting industry change, he brings clients clear insights and proven solutions that help them discover new business opportunities and overcome challenges. [email protected]


DIGITAL CUSTOMER ENGAGEMENT:

the key to long-term success for utilities

Power utility businesses around the world are rapidly waking up to the enormous changes taking place in the global energy market. Stronger environmental policies, increased competition, empowered customers and evolving demographics add to the factors compelling the traditional electric utility industry to shift to a more competitive model. In this article, Yugant Sethi and Alakshendra Theophilus discuss the changing utility market, the benefits of digital transformation and the challenges many utilities are facing with their digital initiatives.

THE CHANGING STATE OF POWER UTILITIES
The traditional services model for utilities was one in which customers had little interaction with their energy providers. But that is rapidly changing. With the evolution of the smart grid, an automated system that helps balance electrical consumption with supply, customers now have more reasons to communicate with their service providers. What’s more, utilities are now offering a wide range of energy options, including green and retail through deregulation, as well as energy efficiency products and services, giving customers the power of choice. For utilities, keeping customers satisfied, engaged and loyal has never been more important.

The utility industry faces drastic change as renewable energy turns consumers into producers and undermines the dominance of utilities. In Germany, 27.7 percent of electricity came from renewable sources in 2014.1 The big four German utilities—E.ON, RWE, EnBW and Vattenfall—are nearly absent in this sector. Renewable generating capacity broke the 100GW barrier in 2014,2 equivalent to the entire nuclear power capacity of the United States, a United Nations report shows.3 Utilities that serve many of the deregulated states in the United States may lose the comfort of guaranteed consumer demand as new third-party or renewable energy vendors join the market. Failure to provide engaging customer service that fosters loyalty can increase the risk of today’s utilities being left behind as consumers switch to these new companies. Competition with other providers, new digital technologies, rising customer expectations and the need to support more sophisticated customer interactions are driving the push to deliver a better customer experience.


UTILITY DIGITAL BUSINESS TRANSFORMATION
According to a report by the American Customer Satisfaction Index (ACSI), customer satisfaction with investor-owned utilities dropped 1.3 percent to 74 out of 100. Customer satisfaction levels in Britain dropped from 78 percent in 2012 to 55 percent in 2013.4 As a result, the utility industry is seeing a shift to the use of more technology and data as a means of addressing the growing dissatisfaction of its customers. There is clearly a need to re-energize the customer experience and improve customers’ satisfaction and willingness to engage regularly with their service provider. To drive long-term customer engagement, utilities must transform themselves from energy suppliers to energy service providers—with a focus on meeting the needs of customers beyond just energy supply. Social media as well as web and mobile apps are rapidly becoming the preferred channels for customers to interact with their service providers, leading to the emergence of new business models.

Digital initiatives undertaken by utilities help gather more detailed data through specialized billing, energy efficiency, demand response and behavioral programs. The market drivers to go digital are complemented by the following digital initiatives, as shown in Figure 1:
› Real-time insights on energy usage and personalized advice on reducing consumption help customers reduce their energy bills
› An ability to make payments online to avoid late fees or to report billing errors strengthens customer service
› Demand forecasting reduces demand uncertainty and equips the utility to better manage peak capacity
› Energy efficiency programs help customers reduce energy bills through consumption analysis while minimizing peak-time purchases from spot markets
› Discount offers and promotional campaigns are customer-engagement practices that help foster retention and build trust

Figure 1: Digital initiatives and drivers to go digital.


WHY CUSTOMER ENGAGEMENT MATTERS
Customer engagement, through digital transformation, can deliver business value in a variety of ways. First, it can provide a wealth of data that utilities can use for continuous improvement of customer-facing programs. Second, a well-designed customer communication channel with personalized messaging can reduce the burden on customer service representatives and help lower the cost of call-center operations. Third, it can help utilities better optimize asset and capital investments by predicting savings from energy efficiency and demand-response programs.

Figure 2: The rise of the empowered customer.

Utilities that successfully engage their customers have reported peak consumption reductions up to 30 percent greater than non-peak reductions. These utilities reduce excess capacity generation, and therefore, can quantify the benefit of their effective customer engagement. The potential to lower energy bills through real-time insights on energy usage and personalized advice on reducing consumption will enable utilities to create more value for themselves and for their customers. This will result in greater customer satisfaction and loyalty.

ADOPTION OF DIGITAL INITIATIVES AND ITS CHALLENGES
Sapient Global Markets recently conducted a study of 40 North American (NA) and European utility providers to assess the adoption of digital initiatives to improve the customer experience. The research focused on their web, mobile and social media initiatives and evaluated each utility based on their use of the following:
› Social media platforms to provide information, address customer complaints, communicate outages and run promotions
› Mobile apps and web self-service features to access and pay bills online, view usage comparison and outage information and participate in home energy management and energy efficiency programs

Figure 3: Mobile app feature support (percentage of utilities offering the feature).

Figure 4: Web feature support (percentage of utilities offering the feature).

The study revealed the following:
› Although more than 50 percent of NA and European utilities allow customers to pay bills online through their official websites or mobile apps as part of a digital initiative to reduce operating costs, billing issues continue to be a cause of concern for customers.
› Nearly 68 percent of NA utilities provide outage information on their websites and 53 percent offer it via their mobile site, while approximately 30 percent of European utilities offer this information on both mobile and web. However, beyond basic outage reporting, there is rarely a web or mobile mechanism available for customers to provide feedback.
› 70 percent of European utilities are sharing customer usage information and promotions on mobile apps and the web, while NA utilities are just beginning to follow suit.
› Very few utilities in NA and Europe provide real-time insights on energy usage and personalized advice on reducing consumption through mobile apps or websites. However, this number is expected to increase over the coming years as utilities expand their digital presence and launch social media campaigns.

Driving the Need for Better Data Management
Utilities must work with a rapidly increasing amount of data. In the past, utilities had 12 meter readings per year. Today, smart meters are capable of delivering usage data every 15 minutes—or 35,040 times a year. In addition to smart meter data, utilities need to analyze customer data such as location, energy efficiency measures in place, large appliances installed and energy usage patterns, along with data on rates, billing, demand-side management, load growth and more. The growing volume of information, multiplied by millions of customers, highlights the fact that utilities will quickly be overwhelmed by data that is usually stored, managed and maintained via different tools and technologies. This need for better data analytics is underscored in a recent study by the Utility Analytics Institute that anticipates spending on analytics by utilities will grow 32 percent a year, from $180.4 million in 2011 to $718.9 million in 2016.5
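The arithmetic behind that growth is straightforward, as the short sketch below shows; the per-reading payload size and customer count are illustrative assumptions, not figures from the studies cited.

```python
# Back-of-the-envelope smart meter data volume.
# The payload size per reading and the customer count are illustrative assumptions.

READ_INTERVAL_MINUTES = 15
READINGS_PER_DAY = 24 * 60 // READ_INTERVAL_MINUTES   # 96
READINGS_PER_YEAR = READINGS_PER_DAY * 365             # 35,040

def annual_volume_gb(customers: int, bytes_per_reading: int = 100) -> float:
    """Approximate raw interval-data volume per year, in gigabytes."""
    return customers * READINGS_PER_YEAR * bytes_per_reading / 1e9

if __name__ == "__main__":
    print(READINGS_PER_YEAR)                         # 35040 readings per meter per year
    print(f"{annual_volume_gb(2_000_000):,.0f} GB")  # roughly 7,000 GB/year for 2M customers
```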

› Utilities have made social media a part of their communication strategies—using it to share energy efficiency tips or information during outages.
› Thermostats are an important step toward creating smart homes and helping consumers better manage energy costs. They are gaining market attention as they are implemented by emerging players in the field of energy management.

How do utility customers feel about digital initiatives?
The study collected opinions from a wide range of sources, including social media sites, blogs and online discussion boards, as shown in Figure 5. The data suggests that only 30 percent of online customers view digital transformation as positive, while others complain about overcharges, especially for delayed payments and inaccuracies in billing periods and rate plans. Of those who complained, 91 percent cited the poor responsiveness of mobile apps, 45 percent were unhappy about being charged for the app service and more than 20 percent had app login and connectivity issues.

Figure 5: Customer Sentiments.


ENGAGING WITH THE CUSTOMER
Seeking more engagement, customers have asked utility companies to provide more value by helping them save more, informing them about their energy usage and delivering usage information via both traditional bills and apps. And when things are not going well, they want to be able to inform the utility through new channels, such as social media from their smartphones. Effective customer engagement opens up new opportunities for utilities by creating a dialogue that had not existed before. To transform the digital customer experience, web and mobile channels must be expanded and social media customer service capabilities must be strengthened. This can be done in the following ways:
› More efficiently delivering bills via online and web channels, with options for customers to report errors easily
› Educating customers and keeping them informed about new products and services through live chat or energy savings tips
› Informing customers about changes to pricing and billing so they can switch tariff plans and reduce energy consumption
› Addressing questions and enabling virtual customer conversations by leveraging trained social media personnel
› Setting up a dedicated social media account, such as a Twitter handle, to address customer issues
› Taking creative approaches to bring customers into the utility’s fold, such as gamification, which turns the energy management experience into a game and provides a platform to enhance customer engagement, achieve higher energy efficiency and identify new sources of revenue

By offering a self-service model that provides fast, accessible and personalized resources, utilities can help enhance day-to-day customer interactions involving outages, maintenance and billing. This improves customer satisfaction and customers’ willingness to engage regularly with their service company.

DIGITAL UTILITY—A WAY FORWARD
If utilities are going to succeed in a much more competitive market and meet increasing customer expectations, they need to make the shift from simply supplying energy to servicing the needs of the customer. Ultimately, this means changing their mindset and business structure to seamlessly incorporate digital channels as part of their customer touch-point strategy. This digital transformation is the key to long-term customer engagement and retention. But it brings challenges, such as the need for utilities to manage and extract insight from a much larger data set. How utilities address these challenges and their ability to keep customer needs at the forefront will be critical to their long-term success.


Resources
1. BloombergBusiness, “Renewables Take Top Share of German Power Supply In First,” Stefan Nicola: http://www.bloomberg.com/news/articles/2014-10-01/german-renewables-output-tops-lignite-for-first-time-agora-says
2. International Renewable Energy Agency, “Rethinking Energy 2014”: http://www.irena.org/rethinking/rethinking_fullreport_web.pdf
3. BBC News, “UN: New renewables broke through 100GW barrier in 2014,” Mark Kinver: http://www.bbc.com/news/science-environment-32119463
4. American Customer Satisfaction Index, “ACSI Utilities, Shipping, and Health Care Report 2015”: http://www.theacsi.org/news-and-resources/customer-satisfaction-reports/reports-2015/acsi-utilities-shipping-and-health-care-report-2015/acsi-utilities-shipping-and-health-care-report-2015-download
5. Reuters, “Utility Customer Analytic Market to Quadruple by 2016”: http://www.reuters.com/article/2012/06/21/idUS110009+21-Jun2012+BW20120621

THE AUTHORS Yugant Sethi is a Senior Manager of Business Consulting with Sapient Global Markets in Gurgaon. He leads Sapient Global Markets’ research efforts to support the company’s strategy and thought leadership programs. Yugant has over 15 years of experience and is a certified SAP consultant. With deep knowledge of C/ETRM product development and implementation, he has led several large advisory and technology implementation engagements in the energy industry. [email protected]

Alakshendra Theophilus is a Sapient Global Markets Business Consulting Associate based in Gurgaon. He is a Business Analyst (BA) with an emphasis on financial services strategy and is currently working with a utilities and trading major. Alakshendra is a certified energy risk professional. [email protected]


INTRODUCTION

drive change through analytics and collaboration

Over the last decade, enormous strides have been made in information and consumer technologies and many industries are leveraging these advancements to innovate and gain competitive advantages. One example is the increasing investment in financial services technology (FinTech) and the integral part it is playing in reshaping financial companies’ traditional business models and processes. Over the same period, many energy companies have struggled to continue to grow their bottom line, which has become increasingly difficult with the recent downturn in commodity prices. While confronting the realities of these challenges, energy markets are waking up to the benefits of analytics and collaboration. In the previous issue of CROSSINGS, we discussed the renewed interest in industrial collaboration for those activities that are not proprietary. While this was primarily an efficiency play to reduce costs, firms are now looking to do more. In fact, many are leveraging the immense power of analytics to increase confidence in decision-making and ultimately improve their bottom line in today’s environment of shrinking margins and dwindling revenue. Making these changes is not always easy. It requires smart investments in order to effect change within the organization. As the energy and commodity markets continue to face new challenges and market shifts, driving these innovations will be imperative to consistently and confidently improve decision-making. But those that do are being rewarded with:
› greater confidence in decisions under uncertainty
› enhanced competitive advantage
› improved time and cost savings
› incremental revenue gains


This special section in CROSSINGS is dedicated to articles that explore innovative ways firms can leverage analytics across the energy and commodity industry. For example, Pooja Malhotra, Rathin Gupta and Rajiv Gupta explain how analytics can provide an advantage for fuel marketing companies; Niko Papadakos, Mohit Sharma, Mohit Arora and Kunal Bahl discuss the importance of data quality in analytics; and Barbara Thorne-Thomsen, Cassandra Howard and Shahed Haq highlight the key success factors needed to successfully manage an analytics program. I hope you find these articles informative as you adapt your growth strategy to address the challenges of this rapidly changing industry.

Rashed Haq Vice President and Global Lead for Analytics & Optimization for Commodities


MANAGING AN ANALYTICS PROGRAM: the three key factors for success

Analytics programs bring a different level of execution and delivery complexity involving many unknowns and constant changes. In this article, Barbara Thorne-Thomsen, Cassandra Howard and Shahed Haq discuss the three key challenges for developing an analytics project, plus three key success factors for making them work.

Many companies recognize that they have opportunities to use data and analytics to enhance productivity, improve decision-making capabilities and gain a competitive advantage. However, managing and executing an analytics program can be challenging. It requires setting a strategy; drawing a detailed roadmap for investing in assets such as technology, tools and data sets; and tackling the intrinsic challenges of securing commitment from stakeholders, improving processes and changing organizational behavior.

CHALLENGES
Unknown and uncertain requirements
Analytics programs often begin with a specific idea or a potential opportunity in mind. However, unlike conventional IT programs, where requirements are clearly defined, refined and validated early in the design phase, analytics programs are characterized by unknown or uncertain requirements. The end users involved are often unclear about what data is available, how to analyze it or the details of how they want to achieve their vision. As the team begins to analyze and uncover more information, new ideas surface, leading to changes in the requirements. This can present a major challenge in the management of the project’s scope, schedule and budget.

Data identification and integrity
Analytics programs use data as the foundation; therefore, the quality of the data defines the quality of the solution and the decisions made from that solution. It is critical to identify and cleanse the data as well as keep that data from degrading in the future. Both are challenging to achieve considering the vast amount of data available and the additional time, effort and difficulty in keeping it clean and issue-free.

End-user adoption
An analytics program is only as successful as the people who use it. User resistance is seen in various forms—most of which arise from not fully understanding the capability of analytics. For example, users may not want to take the time to maintain high data quality because they do not directly see the impact. Users may also reject the idea of analytics for fear of losing their decision-making authority to an automated solution. Still others may refuse to participate because they do not fully understand how to use and apply analytics. Left unchecked, all of these forms of resistance will lead to poor overall end-user adoption.


KEY SUCCESS FACTORS
The fast-paced, ever-evolving nature of analytics initiatives requires program management methodologies that can react to constant change, while keeping the team focused and working together as one. Success hinges on the following factors:
1. Adopting an iterative approach to planning and scope management
2. Obtaining a clean and robust data set
3. Painting the “big picture”

Table 1: Key Challenges and Success Factors of Analytics Programs.

Iterative Planning and Scope Management
Due to the ongoing discovery of and changes to requirements and scope, the planning approach for analytics programs usually consists of quick, iterative and tight implementation cycles that are planned one to two cycles out. Low-level, long-term planning is not in the program’s best interest as substantial re-estimation and re-planning will most likely occur. The lack of a detailed, long-term plan, however, does not mean that there is no long-term vision. There is an objective that needs to be met, and the program manager must ensure that, regardless of change and evolution, the program is still working toward that objective.

The key to achieving this is to continually prioritize and group the known scope items and always focus on the most critical and important items first. As implementation cycles are being executed, any new scope items are added in with the remaining items. These are then reprioritized and regrouped based on any new findings and lessons learned. This iterative process continues until the long-term objective is achieved. It is important to note that there is a fine line between effectively managing scope and blindly executing scope. The project manager needs to ensure each item aligns with the long-term vision while also adhering to the budget and high-level timeline.
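A minimal sketch of that cycle-by-cycle loop appears below, assuming each scope item carries a simple priority score and a fixed cycle capacity; both are illustrative assumptions rather than a prescribed method.

```python
# Minimal sketch of iterative scope management: plan one cycle at a time,
# fold newly discovered items back into the backlog and reprioritize.
# The priority scores and cycle capacity are illustrative assumptions.

def plan_cycle(backlog, capacity=2):
    """Pick the highest-priority items that fit into the next cycle."""
    backlog.sort(key=lambda item: item["priority"], reverse=True)
    return backlog[:capacity], backlog[capacity:]

if __name__ == "__main__":
    backlog = [
        {"name": "Data cleansing for source feeds", "priority": 9},
        {"name": "Demand forecast model",           "priority": 8},
        {"name": "Executive dashboard",              "priority": 5},
    ]
    for cycle in (1, 2):
        committed, backlog = plan_cycle(backlog)
        print(f"Cycle {cycle}: {[item['name'] for item in committed]}")
        # New requirements discovered during the cycle re-enter the backlog
        # and compete for priority in the next planning pass.
        backlog.append({"name": f"New requirement from cycle {cycle}", "priority": 7})
```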


Obtaining Clean and Robust Data
Because data is the crux of an analytics project, the value and outcomes of the solution built will only be as good as the data it is fed. The pursuit of high-quality data can quickly become overwhelming, expensive and time-consuming. To prevent this, the overall scope of data must be managed by identifying and using only the subset that directly and immediately addresses the question or problem at hand. The next step is to assess the quality of the data and rectify any issues (a simple sketch of such checks appears at the end of this section). Firms must plan sufficient time and effort for this activity, as many issues tend to surface after the analysis phase begins. Since addressing data quality issues is a long-term strategic activity, maintenance processes are critical. The solution must either create the proper data governance and architecture or enhance any existing frameworks.

Painting the Big Picture
Consistent and active end-user involvement increases the chance of successful project delivery and adoption. To achieve this, the program must help end users understand the big picture in terms of business objectives and their role within it. Providing this insight will also help address some of the mysteries associated with analytics. It is critical to emphasize the importance of the end user’s active participation in everything from the design sessions to the cleansing and maintenance of the data. This again needs to be done in the context of the bigger picture, because end users are often not directly or immediately impacted. The process includes addressing any fears or concerns that end users may have around analytics. This can be done through tool demonstrations, proofs of concept and/or Q&A sessions. These activities should focus on how users will interact with the solution and explain how analytics can help support their decisions and actions. Since one of the most common concerns is having decision-making authority automated, it is important to stress that end users will still be the decision owners and that the analytics solution is a tool that enables their decision-making process. Lastly, the program team needs to ensure that the solution is easy to use and understand by incorporating appealing visualizations and embedding behavioral science.
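As one small example of what the data-quality assessment step can look like in practice, the sketch below runs basic completeness, duplicate and range checks on a tabular data set using pandas. The column names and valid ranges are illustrative assumptions, not a prescribed standard.

```python
# Minimal data-quality assessment sketch using pandas.
# Column names and valid business ranges are illustrative assumptions.
import pandas as pd

def assess_quality(df: pd.DataFrame, valid_ranges: dict) -> dict:
    """Run basic completeness, duplicate and range checks on a data set."""
    return {
        "rows": len(df),
        "missing_values": df.isna().sum().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        # Count values falling outside the expected business range per column.
        "out_of_range": {
            col: int((~df[col].dropna().between(lo, hi)).sum())
            for col, (lo, hi) in valid_ranges.items()
        },
    }

if __name__ == "__main__":
    data = pd.DataFrame({
        "meter_reading_kwh": [1.2, 1.5, 150.0, None],   # 150.0 looks like a misplaced decimal
        "outdoor_temp_c": [21.0, 22.5, 23.0, 20.5],
    })
    print(assess_quality(data, {"meter_reading_kwh": (0, 10), "outdoor_temp_c": (-40, 55)}))
```

Checks like these are cheap to run on every load, which is what makes them useful as part of an ongoing maintenance process rather than a one-off cleansing exercise.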

CONCLUSION Executing an analytics program is a complex and resource-intensive endeavor. Analytics programs have unique characteristics, and therefore require a different approach than large, conventional IT programs. Implementing an iterative IT approach, focusing on identifying and cleansing the right data and painting the big picture for end users and the team are all essential to a successful analytics program. Considering these unique characteristics and planning to overcome the challenges will increase the chances of producing a more reliable, accurate and timely solution that will deliver value to a business.


Resources

1. D. Marchand and J. Peppard, "Why IT Fumbles Analytics," Harvard Business Review, Jan-Feb 2013, https://hbr.org/2013/01/why-it-fumbles-analytics
2. Stijn Viaene and Annabel Van den Bunder, "The Secrets to Managing Business Analytics Projects," MIT Sloan Management Review, Fall 2011
3. "Making data analytics work: Three key challenges," McKinsey & Company, March 2013

THE AUTHORS Barbara Thorne-Thomsen is a Senior Associate of Program Management based in Houston. She is working as the program manager of a visualization and analytics program at a major oil company. Prior to this, Barbara worked as a project manager on numerous large-scale energy trading and risk management deployments, focusing on the change management and cutover phases of the projects. [email protected]

Cassandra Howard is a Manager with Sapient Global Markets. She has 15+ years of experience in the energy commodity industry, including 10 years of experience working in front- and mid-office roles for leading companies in the energy trading industry. Cassandra has worked on multiple large-scale integrated system deployments, demonstrating a strong understanding of impact from front to back office. Prior to joining Sapient, Cassandra worked with major oil companies in trading, risk and accounting operations. [email protected]

Shahed Haq is a Director of Program Management based in Houston, specializing in large-scale ETRM implementations. He has recently worked as a program manager on an analytics program for a Canadian ISO. Prior to that, Shahed worked as a program manager on multiple large-scale, global ETRM implementations for a major oil company. [email protected]


DATA QUALITY FOR ANALYTICS:

clean input drives better decisions

Organizations are increasingly relying on analytics and advanced data visualization techniques to deliver incremental business value. However, when their efforts are hampered by data quality issues, the credibility of their entire analytics strategy comes into question. Because analytics traditionally is seen as a presentation of a broad landscape of data points, it is often assumed that data quality issues can be ignored since they would not impact broader trends. But should bad data be ignored to allow analytics to proceed? Or should analytics stall so that data quality issues can be addressed? In this article, Niko Papadakos, Mohit Sharma, Mohit Arora and Kunal Bahl use a shipping industry scenario to highlight the dependence on quality data and discuss how companies can address data quality in parallel with the deployment of their analytics platforms to deliver even greater business value.

AN ANALYTICS USE CASE: FUEL CONSUMPTION IN THE SHIPPING INDUSTRY

Shipping companies are increasingly analyzing the financial and operational performance of their vessels against competitors, industry benchmarks and other vessels within their fleet. A three-month voyage, such as a round trip from the US West Coast to the Arabian Gulf, can generate a large volume of operational data, most of which is manually collected and reported by the onboard crew.

Fuel is one of the largest cost components for a shipping company. Optimum fuel consumption in relation to the speed of the vessel is a tough balancing act for most companies. The data collected daily by the fleet is essential to analyze the best-fit speed and consumption curve. Figure 1 demonstrates an example of a speed versus fuel consumption exponential curve plotted to determine the optimum speed range at which the ships should operate. With only a few errors made by the crew in entering the data (such as an incorrect placement of a decimal point), the analysis presented is unusable for making decisions. The poor quality of data makes it impossible to determine the relationship between a change in speed and the proportional change in fuel consumption as presented in Figure 1.


[Figure 1: Speed – Fuel consumption curves (including data quality issues). Series: Vessel A, C and F, each in BALLAST and LADEN condition; x-axis: speed, y-axis: fuel consumption.]

If the outliers are removed, the analysis shown in Figure 2 provides a clear correlation between the speed of the vessel and its fuel consumption.

[Figure 2: Speed – Fuel consumption curves (cleaned data by removing outliers). Series: Vessel A, C and F, each in BALLAST and LADEN condition; x-axis: speed, y-axis: fuel consumption.]


As shown in these examples, most analytics programs are designed based on the belief that removing outliers is all that is needed to make sense of the data, and there are many data analysis tools available that can help with that. However, what if some of those outliers are not outliers at all, but the result of a scenario that needs to be considered? For instance, in the example, what if some of the outliers were actual fuel consumption points captured when the ship encountered inclement weather? By ignoring these data points, users can make assumptions without considering important dimensions—and that could lead to very different decisions. This approach not only makes the analysis dubious, but also often leads to incorrect conclusions.

In some cases, the practice of removing outliers can lead to the deletion of a significant number of data points from the analysis. But can users get the answer they are looking for by ignoring 40 percent of the data set? Companies need to be able to determine the speed at which vessels are most efficient with far more certainty. Data quality issues only reduce the confidence in the analysis conducted. In the shipping example, a difference in speed of 1 to 2 knots can potentially result in a difference of $500,000 to $700,000 in fuel consumption for a round trip US West Coast to Arabian Gulf voyage at the current bunker price.

Does this mean that data needs to be validated 100 percent before it can be used for analytics? Does the entire universe of data need to be clean before it is useful for analytics? Absolutely not. In fact, companies should only clean the data they intend to use. The right approach can help to determine which issues should be addressed to manage data quality.
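To make the outlier strategy concrete, the sketch below shows one way suspect daily reports could be flagged (rather than silently deleted) before fitting a speed-consumption curve. It is a minimal illustration; the column names (speed_knots, fuel_mt, weather_flag), the bucket-based flagging rule and the exponential fit are assumptions for the example, not any particular company's method.

```python
import numpy as np
import pandas as pd

def flag_outliers(df: pd.DataFrame, col: str = "fuel_mt", k: float = 3.0) -> pd.DataFrame:
    """Flag, rather than delete, reports whose fuel figure is implausible.

    A point is flagged when it sits more than k median-absolute-deviations
    from the median of its 1-knot speed bucket. Flagged rows are kept so they
    can be reviewed (decimal-point errors vs. genuine heavy-weather days).
    """
    df = df.copy()
    df["speed_bucket"] = df["speed_knots"].round()
    med = df.groupby("speed_bucket")[col].transform("median")
    mad = df.groupby("speed_bucket")[col].transform(lambda s: (s - s.median()).abs().median())
    df["suspect"] = ((df[col] - med).abs() > k * mad.replace(0, np.nan)).fillna(False)
    return df

def fit_speed_consumption(df: pd.DataFrame) -> tuple[float, float]:
    """Fit fuel = a * exp(b * speed) on rows that are neither suspect nor
    weather-affected, via linear regression on log(fuel)."""
    clean = df[~df["suspect"] & ~df["weather_flag"]]
    b, log_a = np.polyfit(clean["speed_knots"], np.log(clean["fuel_mt"]), 1)
    return float(np.exp(log_a)), float(b)

# Example: a small set of noon reports containing one decimal-point error.
reports = pd.DataFrame({
    "speed_knots":  [10.2, 11.0, 12.1, 12.0, 13.5, 12.2],
    "fuel_mt":      [28.0, 31.5, 36.0, 360.0, 44.0, 39.0],   # 360.0 is a typo
    "weather_flag": [False, False, False, False, True, False],
})
flagged = flag_outliers(reports)
a, b = fit_speed_consumption(flagged)
print(flagged[["speed_knots", "fuel_mt", "suspect"]])
print(f"fitted curve: fuel ~= {a:.2f} * exp({b:.3f} * speed)")
```

Keeping the flagged rows in the data set preserves the option of reclassifying them later, for example as legitimate heavy-weather consumption rather than entry errors.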

DATA USED FOR ANALYTICS: WHERE SHOULD I USE MY CLEANSING TOOLS?

Analytics use cases have specific needs in terms of which pieces of data are critical to the analysis. For each piece of data, the rules or standards required to make it suitable for the analysis must also be defined. But not all data standards have equal priority. For instance, in the shipping example above, it might be more important to ensure that the data used for analysis is accurate as compared to ensuring that all the data is available. In other words, using 80 percent of 100 percent accurate data to generate the trend is better than using 100 percent of data that is only 80 percent accurate. An organization should focus most of its energy on data used by high-impact business processes.

To manage the quality of data, organizations need a robust data quality management framework. This will enable them to control, monitor and improve data as it relates to various analytics use cases.

APPROACH TO DATA QUALITY MANAGEMENT

Data is created during the course of a single business process, and it moves across an organization as it goes through the different stages of one or more business processes. As data flows from one place to the next, it transforms and presents itself in other forms. Unless it is managed and governed properly, it can lose its integrity. Although each type of data needs a distinct plan and approach for management, there is a generic framework that can be leveraged to effectively manage all types of data. As shown in Figure 3, the data quality management framework consists of three components: control, monitor and improve.


[Figure 3: Data quality management framework. Control: validate before loading; Monitor: assess periodically; Improve: fix when data quality drops.]

Control

The best way to manage the quality of data in an information system is to ensure that only data that meets the desired standards is allowed to enter the system. This can be achieved by putting strong controls in place at the front end of each data entry system, or by putting validation rules in the integration layer responsible for moving data from one system to another. Unfortunately, this is not always feasible or economically viable when, for example, data is captured manually and only later entered into a system, or when modifications to applications are too expensive, particularly with commercial off-the-shelf (COTS) software.

Monitor

It is natural to think that if a company has strong controls at each system's entry gate, then the data managed within the systems will always be high in quality. In reality, as processes mature, the people responsible for managing the data change, systems grow old and the quality controls are not always maintained to keep up with the desired data quality levels. This generates the need for periodic data quality monitoring by running validation rules against stored data to ensure the quality meets the desired standards.

In one particular case, a company decided against implementing changes to one of its main data capture COTS applications that would have enforced stricter data controls. They relied instead on training, monitoring and reporting on the use of the system to help them improve their business process, and as a result, experienced improved data quality. However, companies that have implemented strong quality controls at the entry gates for every system have realized very effective data quality management.

In addition, as information is copied from one system to another, the company needs to monitor the data to ensure it is consistent across systems or against a "system of record." Data quality monitors enable organizations to proactively uncover issues before they impact the business decision-making process. As shown in Figure 4, an industry-standard five-dimension model can be leveraged to set up effective data quality monitors.


› Correctness: Measure the degree of data accuracy.
› Completeness: Measure the degree to which all required data is present.
› Currency: Measure the degree to which data is refreshed or made available at the time it is needed.
› Conformity: Measure the degree to which data adheres to standards and how well it is represented in an expected format.
› Consistency: Measure the degree to which data is in sync or uniform across the various systems in the enterprise.

Figure 4: The five Cs of data quality.
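To illustrate how such monitors might be expressed in code, the sketch below computes simple completeness, conformity and consistency scores for a small batch of records and compares them against a system of record. The column names, plausibility ranges and pandas-based approach are assumptions made for the example, not a prescribed implementation.

```python
import pandas as pd

def completeness(df: pd.DataFrame, required: list[str]) -> float:
    """Share of required fields that are populated across all rows."""
    return float(df[required].notna().mean().mean())

def conformity(df: pd.DataFrame) -> float:
    """Share of rows whose values fall within expected ranges/formats."""
    ok = df["speed_knots"].between(0, 30) & df["fuel_mt"].between(0, 300)
    return float(ok.mean())

def consistency(df: pd.DataFrame, system_of_record: pd.DataFrame, key: str) -> float:
    """Share of records whose fuel figure matches the system of record."""
    merged = df.merge(system_of_record, on=key, suffixes=("", "_sor"))
    return float((merged["fuel_mt"] == merged["fuel_mt_sor"]).mean())

reports = pd.DataFrame({
    "report_id":   [1, 2, 3],
    "speed_knots": [11.5, None, 45.0],   # one missing and one out-of-range value
    "fuel_mt":     [32.0, 35.5, 38.0],
})
sor = pd.DataFrame({"report_id": [1, 2, 3], "fuel_mt": [32.0, 35.5, 39.0]})

scores = {
    "completeness": completeness(reports, ["speed_knots", "fuel_mt"]),
    "conformity":   conformity(reports),
    "consistency":  consistency(reports, sor, key="report_id"),
}
print(scores)   # these scores would feed a dashboard like the one in Figure 5
```

Correctness and currency monitors follow the same pattern but typically require a trusted reference value and a timestamp on each record.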

An example of a monitoring dashboard is shown in Figure 5. It is built to provide early detection of data quality issues. This enables organizations to perform root-cause analysis and to prioritize their investments in training, business process alignment or redesign.

Figure 5: Sample data quality monitoring dashboard.


Improve

When data quality monitors report a dip in quality, a number of remediation steps can be taken. As mentioned above, system enhancements, training and adjusting processes involve both technology and people. When a dip in quality occurs, it may be the right time to start a data quality improvement plan. Typically, an improvement plan includes data cleansing, which can be done either manually by business users or via automation. If the business can define rules to fix the data, then data cleansing programs can be easily developed to automate the data improvement process. This, followed by business validation, ensures that the data is back to its desired quality level.

Often, organizations make the mistake of ending data quality improvement programs after a round of successful validation. A critical step that is often missed is enhancing data quality controls to ensure the same issues don't happen again. This requires a thorough root-cause analysis of the issues and data quality controls that need to be added to the source systems to prevent the same issues from reoccurring. Implementing these steps is even more critical when a project includes reference or master data, such as client, product or market data. Also, organizations that are implementing an integration solution will benefit from taking on this additional effort as it enables quality data to flow across the enterprise in a solution that can be scaled over time.

The most effective data quality management programs are centrally run by an enterprise-level function and are only successful if they are done in partnership with the business. Ultimately, it is the business that owns the data, while the IT teams are the enablers. But how can the business contribute to these seemingly technical programs?
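Where the business can state its fix-up rules explicitly, the cleansing step can be automated along the lines of the sketch below. The two rules shown (scaling back an apparent decimal-point error and defaulting a missing load condition) are invented for illustration; real rules would come from the business validation described above, and the audit log supports that validation.

```python
import pandas as pd

def apply_rules(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Apply business-defined cleansing rules; return (cleaned data, audit log)."""
    df = df.copy()
    audit = []

    # Hypothetical rule 1: fuel figures ten times larger than physically
    # plausible are treated as decimal-point errors and scaled back.
    bad_fuel = df["fuel_mt"] > 300
    audit.append(("fuel decimal-point fix", int(bad_fuel.sum())))
    df.loc[bad_fuel, "fuel_mt"] = df.loc[bad_fuel, "fuel_mt"] / 10

    # Hypothetical rule 2: a missing load condition defaults to BALLAST,
    # pending review by the data steward.
    missing_cond = df["condition"].isna()
    audit.append(("default load condition", int(missing_cond.sum())))
    df.loc[missing_cond, "condition"] = "BALLAST"

    return df, pd.DataFrame(audit, columns=["rule", "rows_changed"])

reports = pd.DataFrame({
    "fuel_mt":   [31.0, 360.0, 35.0],
    "condition": ["LADEN", None, "BALLAST"],
})
cleaned, log = apply_rules(reports)
print(cleaned)
print(log)
```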

DATA QUALITY IS AS MUCH ABOUT THE PEOPLE AS IT IS ABOUT TECHNOLOGY

In addition to the technical challenges faced by most data projects, there are often organizational hurdles that also must be overcome. This becomes particularly pronounced in organizations where data is vast, diverse and often owned by different departments with conflicting priorities. Therefore, a combination of data governance, stakeholder management and careful planning is needed, along with the right approach and solution. Key challenges that must be addressed for data quality initiatives include the following:

1. Stewardship—Like any corporate asset, data needs stewardship. A data steward is needed to provide direction and influence resources to control, monitor and improve data. The data steward should be someone with a strategic understanding of business goals and an interest in building organizational capabilities around data-driven decision making. Having a holistic understanding will help the data steward direct appropriate levels of rigor and priority to improve data quality.

2. Business Case—Organizations are unlikely to invest in data quality initiatives just for the sake of improving data quality. A definition of clean data and a justification for why it is important for analytics as well as operations needs to be documented. Some of the common themes in the business case include accurate and credible data for reporting, reduced rework at various levels and good quality decisions. The business case should present the data issues as opportunities that can unlock significant gains in the form of analytics and/or become the foundation of future growth.

3. Ownership—Often, personnel other than data stewards and data entry personnel (data custodians) use the data for decision making. In that context, it is imperative for custodians to understand the importance of good quality data. The drive and ownership for entering and maintaining good quality data needs to grow organically. As an example, the crew onboard a vessel is more likely to take ownership of entering good quality and timely data about port time or fuel consumption if they know that the decisions involving asset utilization and efficiency are driven from data reported by the crew.


4. Sustainable Governance—Making data quality issues visible or measuring the quality of data is good information to have, but ultimately does not move the needle in terms of improving data quality. A sustainable governance structure with close cohesion between data stewards, data custodians and a supporting model is required. It is nice to know that the data supporting a certain business process is at 60 percent or 90 percent quality, but that in and of itself will not automatically drive the right behaviors. A balanced approach of educating and training data custodians and enforcing data quality standards is recommended. With a changing business landscape and personnel, reinforcing the correct data entry process from time to time may improve quality. On the other hand, to ensure that overall data quality does not drop over time, effective monitoring and controls are also equally important. Doing one without the other may work in the short term, but may not be sustainable over time. For real change and improvement to happen, organizations need to implement a robust and sustainable data governance model.

5. Communication—Any data quality initiative is likely to meet resistance from some groups of stakeholders, and poor communication can make matters worse. Therefore, a well-thought-out communication plan must be put in place to inform and educate people about the initiative and quantify how it may impact them. Also, it is important to clarify that the objective is not just to fix the existing bad data, but to also put tools and processes in place to improve and maintain the quality at the source itself. This communication can be in the form of mailers, roadshows or lunch-and-learn sessions. Further, the sponsors and stakeholders must be kept engaged throughout the lifecycle of the program to maintain their support.

6. Remediation—Every attempt should be made to make the lives of data stewards easier. They should not view data quality monitoring and remediation routines as excessive or a hindrance to their day-to-day job. If data collection can be integrated and the concept of a single version of truth replicated across the value chain, it will ultimately improve the quality of data. For example, if the operational data captured by a trading organization (such as cargo type, shipment size or counterparty information) is integrated with pipeline or marine systems, it will ultimately enable pipeline and shipping companies to focus on collecting and maintaining data that is intrinsic to their operation.


CONCLUSION

As organizations increasingly rely on their vast collections of data for analytics in search of a competitive advantage, they need to take a practical and fit-for-purpose approach to data quality management. This critical dependency for analytics is attainable by following these principles:

› Tackle analytics with an eye on data quality
› Use analytics use cases to prioritize data quality hot spots
› Decide on a strategy for outliers and use the 80/20 rule when pruning the data set
› Ensure decisions are trustworthy and make data quality stick by addressing root causes and implementing a monitoring effort
› More than any other program, make this one business-led for optimum results

THE AUTHORS Niko Papadakos is a Director at Sapient Global Markets in Houston, focusing on data. He has more than 20 years of experience across financial services, energy and transportation. Niko joined Sapient Global Markets in 2004 and has led project engagements in key accounts involving data modeling, reference and market data strategy and implementation, information architecture, data governance and data quality. [email protected]

Mohit Sharma is a Senior Manager and Enterprise Architect with eight years of experience in the design and implementation of solutions for oil and gas trading and supply management. During this time, Mohit was engaged in multiple large and complex enterprise transformation programs for oil and gas majors. Most recently, he developed a total cost of ownership (TCO) model for a major North American gas trading implementation. [email protected]

Mohit Arora is a Senior Manager at Sapient Global Markets and is based in Houston. He has over 11 years of experience leading large data management programs for energy trading and risk management clients as well as for major investment banks and asset management firms. Mohit is an expert in data management and has a strong track record of delivering many data programs that include reference data management, trade data centralization, data migration, analytics, data quality and data governance. [email protected]

Kunal Bahl is a Senior Manager in Sapient Global Markets’ Midstream Practice based in San Francisco. He is focused on Marine Transportation and his recent assignments include leading a data integration and analytics program for an integrated oil company, process automation for another integrated oil company and power trading system integration for a regional transmission authority. [email protected]


PREDICTIVE ANALYTICS IN INTEGRITY MANAGEMENT: a ‘smarter’ way to maintain physical assets

Safe and reliable transportation of products is the backbone of pipeline companies. In order to avoid costly and hazardous product leaks, pipeline companies spend considerable amounts of money to maintain the integrity of their assets. Ensuring the integrity of assets, such as pipes, pumping units, meters and valves, requires a robust maintenance strategy that minimizes asset/equipment failures. In this article, Ashish Tyagi and Jay Rajagopal discuss how predictive analytics can help make asset integrity management more reliable and cost-effective.

TRADITIONAL APPROACHES TO INTEGRITY MANAGEMENT

Some people who own cars are so hard pressed for time that they neglect to maintain them. When something goes wrong, they take their cars to repair shops to get them fixed. However, this process wastes time, energy and money when many of these repairs could have been avoided with the proper maintenance. Similarly, companies that manage physical assets, such as pipes, turbines and vessels, have long used a similar corrective maintenance approach to manage and operate their systems. And while they know the risks and costs associated with it, they are often constrained by other higher-priority activities.

Other car owners are aware that it pays to prevent costly repairs before breakdowns occur. They diligently perform prescribed maintenance activities according to the car manufacturer's schedule, such as changing the oil every few thousand miles. As a result, the chances of an unexpected outage using this preventive maintenance approach are much lower since inspection and service tasks are preplanned. However, a set amount of money is still spent for such activities and the car owner is left with questions, such as, "Will my car run fine if I delay an oil change for another 1,000 miles?" or "If I drove the same number of miles this summer in hotter, dustier conditions than I did last summer, should I take my car in for service sooner?" A fixed maintenance schedule cannot answer these questions since it does not take into account operating conditions, which are key influencers of the performance of an asset.

THE PREDICTIVE APPROACH TO INTEGRITY MANAGEMENT

With significant technology advances, companies are much better equipped to remotely monitor assets and put in place a more intelligent system that senses the state of various components and predicts the type of maintenance required based on actual operating conditions. Honda's Maintenance Minder System is one such example. It shows the remaining oil life and assigns a code that helps the owner identify which service activities should take place during the next visit. This smarter way of taking care of assets is the predictive maintenance approach to managing integrity, and while it still encompasses preventive maintenance activities, it does so in a more focused and cost-effective way.


Predictive maintenance involves the use of continuous or periodic equipment monitoring or prior events to predict the need for maintenance before an unexpected failure actually occurs. This is different from preventive (or planned) maintenance, in which maintenance is conducted on a scheduled basis, and corrective (or reactive) maintenance, in which maintenance is conducted after a failure has occurred.

Advantages of predictive maintenance

Predictive maintenance significantly helps to lower costs, improve operational availability and optimize frequency.

1. Cost: Predictive maintenance significantly lowers cost in comparison to corrective maintenance, since a catastrophic event could take longer to fix (compared to a preventive maintenance activity), thus resulting in longer interruptions to operations (e.g., a pipeline that is out of service for a long time).

2. Operational availability: Since predictive maintenance is planned in advance, it allows equipment to be serviced when it is idle or when the outage is planned, whereas reactive maintenance may lead to costly equipment downtime while waiting for spare parts or skilled resources to become available.

3. Optimized frequency: Predictive maintenance is typically based on models that take into consideration the current or latest equipment performance, providing a more optimized maintenance frequency. Preventive maintenance, on the other hand, sometimes happens more often than required (resulting in higher costs), and sometimes less often than required (resulting in potentially faster asset degradation).

PREDICTIVE ANALYTIC TECHNIQUES

Predictive analytics is not a new concept or field. In fact, predictive techniques date back to the 1600s when insurance companies used historical data to predict risk and use it for underwriting purposes. The concept still holds true today with the fundamental distinctions in techniques more closely related to who performs them. In the field of equipment maintenance, two fundamental approaches include:

1. Experience-based prediction by individuals: Business subject matter experts (SMEs) have an understanding of past failure patterns and are able to predict potential failures purely based on their experience.

2. Model-based prediction by systems: Analytical models are created that use historical data as input and provide future failure predictions as output. An element of experience-based prediction is present in models since they need to mimic real-world experiences as closely as possible.

Human Factor
› Human-based prediction: A high dependency on individuals can lead to potential losses when individuals need to be replaced.
› Model-based prediction: By encapsulating the experience into algorithms, the dependency on people is reduced, freeing them up for more value-add roles.

Costs
› Human-based prediction: No hardware or software set-up costs are involved; however, one must factor in labor costs to perform manual analysis. These costs could be very high to get to the same level of quality as a model, but achieving that quality is unlikely since humans will need to segment the problem and optimize their specific area, leading to an overall sub-optimal solution.
› Model-based prediction: Costs vary based on how realistic the model needs to be. It frees up people (and therefore costs) to perform more strategic activities, such as validating or overriding model outputs, and making decisions in exception scenarios.

Accuracy
› Human-based prediction: The human brain is more prone to errors, especially when it compensates for a lack of data. In addition, the sheer number of variables for accurate and consistent analysis may overwhelm people.
› Model-based prediction: Models have the ability to predict consistently and with better accuracy when using greater varieties and especially larger volumes of input data sets.

Table 1: The key differences between the two predictive analytic approaches.


PREDICTIVE MAINTENANCE—THE MODEL-BASED APPROACH

Simply put, a model-based predictive maintenance initiative involves gathering equipment and operating data that would be relevant to the analysis, constructing a statistical/mathematical model (typically some form of regression model) that the data fits into, and using that model to extrapolate into the future—thus making predictions about unknown events. These unknown events may or may not materialize, but actual data related to them will continue feeding the models, which can then be further tweaked to help increase the accuracy of future predictions.
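As a minimal sketch of what such a model could look like, the example below fits a regression to hypothetical failure-interval history for a single pump unit and extrapolates when the interval between failures is likely to fall below a tolerable threshold. The data, the linear trend and the threshold are illustrative assumptions only, not a specific company's model.

```python
import numpy as np

# Hypothetical history: days between successive failures of one pump unit.
# A shrinking interval suggests accelerating degradation.
failure_intervals_days = np.array([210, 195, 185, 170, 160, 148, 139])
failure_number = np.arange(1, len(failure_intervals_days) + 1)

# Fit a simple linear trend: interval ~= slope * failure_number + intercept.
slope, intercept = np.polyfit(failure_number, failure_intervals_days, 1)

# Extrapolate the next few intervals and flag when they drop below the
# minimum interval the maintenance plan can tolerate (illustrative value).
MIN_TOLERABLE_INTERVAL_DAYS = 120
next_numbers = np.arange(len(failure_intervals_days) + 1, len(failure_intervals_days) + 6)
predicted = slope * next_numbers + intercept

for n, days in zip(next_numbers, predicted):
    note = "  <- plan maintenance before this point" if days < MIN_TOLERABLE_INTERVAL_DAYS else ""
    print(f"failure #{n}: predicted interval of about {days:.0f} days{note}")
```

The same pattern extends to the periodic and near real-time sub-types described below, with operating-condition readings replacing failure counts as the model inputs.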

[Figure 1: Life cycle of predictive modeling. Analyze Business Requirements → Identify & Gather Data → Assess Data Relationships → Apply Statistical Models → Predict Outcomes → Iterate Based on Learnings.]

Model-based prediction can further be divided into three sub-types when considering the types of data as well as how frequently that data is fed into the model:

1. Failure Event Data. Only past equipment failure (or near-failure) events are captured on a timeline to create a relationship between events and time. A suitable regression model is created based on this relationship and future events are predicted.

2. Operational Data Monitoring—Periodic. Periodic operational data from equipment—such as vibration, temperature, viscosity of commodity flowing in the pipe—is used to create a model that establishes a baseline relationship between operational data and equipment performance. The deviation between the equipment's baseline performance and actual performance is regressed to predict future equipment failure.

3. Operational Data Monitoring—(Near) Real-Time. Real-time (or near real-time) operational data from equipment is used to create a model that establishes a baseline relationship between the operational data and equipment performance. The deviation between the equipment's baseline performance and actual performance is regressed to predict future equipment failure.


Predictive Modeling Requirements

Irrespective of the sub-type of predictive modeling being considered, there are a few key requirements for the model to succeed in terms of data, people and technology.

1. Data
a. Quantity: Just as more experience typically leads to better decisions, the more data there is, the more accurate a model's predictions are likely to be. Failure event extrapolation and periodic operational monitoring-based models require at least 15 to 20 valid, historical data points under varying operational conditions to provide a semblance of meaningful predictions. Real-time monitoring-based models do not need a lot of history since they can ingest and use the required number of data points within minutes.
b. Quality: The quality of historical data is of vital importance. Random bias or low precision in historical data will skew predictions. The analysis of past data can help identify potential data capture issues and provide a roadmap to improving the data capture process and other data governance processes in the future.

2. People
a. Business users: Historical failure event extrapolation needs few inputs from business users or technical SMEs. However, periodic and real-time monitoring-based predictions need deeper involvement from business users who can guide technology teams on the engineering concepts involved with data point readings in order to interpret their relevance to the prediction.
b. Technology implementation team: Historical failure event extrapolation needs only a basic knowledge of statistical concepts. However, periodic and real-time monitoring-based predictions will require business analysts who are not only adept at mathematics, statistics and information technology, but can also grasp the basics of the engineering concepts involved with equipment operations.

3. Technology
a. Asset/Equipment: Historical event extrapolation needs a smaller technology footprint when compared to operational data-based models that need sensors to monitor operational parameters and transmit them (in real time or near real time if required) to Supervisory Control and Data Acquisition (SCADA) systems.
b. Information (i.e., IT): Historical event extrapolation has minimal software requirements (a database and visualization/presentation layer will suffice in most cases). Periodic operational data monitoring-based prediction models may be created using similar, minimal software requirements with optional statistical modeling tools depending on the sophistication of the requirements. Near real-time operational data monitoring-based predictions have higher software requirements, primarily to deal with the acquisition and storage/management of large volumes of data, along with more sophisticated statistical modeling to deal with potentially one new data point every second.


Data Availability
› Failure event data: At least 15 to 20 meaningful, historical data points to create statistical relationships.
› Operational data – periodic monitoring: At least 15 to 20 meaningful, historical data points to create statistical relationships.
› Operational data – near real-time monitoring: Not much need for historical data as long as data can be acquired in near real time and stored.

Accuracy of Prediction / Power of Decisions
› Failure event data: Low, because it doesn't consider current operating conditions as a factor in failure prediction.
› Operational data – periodic monitoring: Medium, because it allows for degradation of performance to be detected as long as it is within bounds of data analysis frequency.
› Operational data – near real-time monitoring: High, as it enables real-time decisions to be taken since the current conditions are factored into analysis.

Budgetary Needs
› Failure event data: Low, since it is only based on historical events of failure or equipment service.
› Operational data – periodic monitoring: Medium, because it needs more data management than a few historical events, but is not as cost-intensive as real-time monitoring.
› Operational data – near real-time monitoring: High, due to costly real-time data acquisition and transmission equipment (e.g., viscometers and vibration sensors), if not already in place.

When to Use
› Failure event data: If embarking on the analytics journey and want a starting point; can live with lower prediction accuracy for specific types of equipment.
› Operational data – periodic monitoring: If real-time monitoring is not truly needed for the equipment being maintained (the model would not have the knowledge of current conditions and therefore will suffer from a prediction lag).
› Operational data – near real-time monitoring: If equipment is flagged as highly critical and can thus justify the higher budgets needed to put this in place.

Table 2: Comparison of modeling types for predictive maintenance.

EXAMPLE OF PREDICTIVE MAINTENANCE AT A PIPELINE COMPANY

Meter Proving: What is it?

Meter proving is the process of determining the accuracy of a meter by comparing its register reading to the register reading of a base meter (prover) that is known to be accurate. Ideally, the meter should show the same reading as the prover. However, due to changes in operating conditions, meters must be regularly proven so that measurement accuracy is maintained. The meter factor is the ratio of a meter's reading to a prover's reading under the same operating conditions.

Why Prove Meters?

Federal rules require all pipeline operators to calibrate (or prove) their meters upon a change of operating conditions that affects the meters' performance. Factors such as changes in pressure, temperature, density (water content), viscosity and flow rate are some examples that can trigger the need to reprove a meter. However, there are no specific criteria regarding the amount of change that warrants a proving, as each meter has its own operational range. If a meter is left unproven, it leads to the following:

1. Inaccurate billing and loss of productivity:
a. If a meter consistently undermeasures, the company loses revenues on deliveries and gains on receipts
b. If a meter consistently overmeasures, the company loses revenues on receipts and gains on deliveries


In either case, if the measurement error is more than a specific threshold, the company is required to send adjustment invoices, which results in loss of productivity.

2. Revenue loss due to false volume imbalance alarms: Flow rate data from multiple meters in the same pipeline is typically used as a criterion for leak detection. When operating conditions impact a meter's performance characteristics beyond its control limits, the volume measured per unit time will not be the same across different meters in the same pipeline, which would cause an alarm even though there is no leak. Such alarms may cause the pipeline to be shut down until the reason for the alarm is known, which causes revenue loss.

3. Unwanted repair/replacement costs: When a meter's performance degrades below a specific threshold, it needs a repair or replacement. Prediction of meter failure in advance can prevent billing issues as well as costly outages before they have occurred.

Is there a smarter way to address meter proving?

› Traditional proving reports show a shift in meter factor since the last time the meter was proved, but do not show whether it is shifting consistently in one direction (i.e., if it is at risk of crossing control limits). In order to pull together this information, either a lot of manual effort is required or an analytics model can be used that analyzes trends based on historical data and can predict when a meter will require repair or replacement.

› Traditional meter proving is done on a fixed schedule (either time-based or volume-based), which may not be the most optimal approach. An approach that takes operating conditions into account and leverages them in statistical models can help predict when the meter requires reproving.

High-Level Solution Approach

The high-level steps shown in Figure 2 can be used to implement solutions to any analytics modeling problem. For meter proving, the following basic modeling steps are useful to keep in mind before factoring in specific models for the situation at hand (see the sketch after this list):

1. Develop Meter Reference Curve. Prove the meter in various operating conditions (or use historical proving data) to create a baseline curve.

2. Calculate Reference Meter Factor. Use the reference curve shown above to calculate a reference meter factor corresponding to the actual meter factor on proving reports.

3. Plot Meter Drift. Using the reference and actual meter factors, plot the deviation (the lower the deviation, the better the meter is performing).

4. Extrapolate Meter Drift Curve to Control Limits. Develop a regression model of the meter drift and predict when the meter will reach the control volume level (based on current flow rates) and will thus drift beyond thresholds that trigger a reproving.
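A minimal sketch of step 4 follows, assuming the reference meter factors and the history of actual meter factors are already available. The sample values, the control limit and the linear drift model are illustrative assumptions only.

```python
import numpy as np

# Hypothetical proving history: deviation of the actual meter factor from the
# reference meter factor (step 3) at increasing cumulative throughput.
cumulative_volume_mbbl = np.array([0, 50, 100, 150, 200, 250])
meter_factor_drift = np.array([0.0000, 0.0004, 0.0009, 0.0013, 0.0018, 0.0022])

# Step 4: regress drift against throughput and extrapolate to the control limit.
slope, intercept = np.polyfit(cumulative_volume_mbbl, meter_factor_drift, 1)
CONTROL_LIMIT = 0.0050   # illustrative threshold that triggers a reproving
volume_at_limit = (CONTROL_LIMIT - intercept) / slope

# Translate the remaining throughput into days at the current flow rate.
CURRENT_FLOW_MBBL_PER_DAY = 12.0
days_to_reproving = (volume_at_limit - cumulative_volume_mbbl[-1]) / CURRENT_FLOW_MBBL_PER_DAY
print(f"predicted reproving needed in about {days_to_reproving:.0f} days "
      f"(at {CURRENT_FLOW_MBBL_PER_DAY} mbbl/day)")
```

A real model would also account for the operating conditions noted above (pressure, temperature, density, viscosity and flow rate) rather than throughput alone.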

[Figure 2: Approach to tackling an analytics problem. Define objectives (understand business problem, understand the goal) → Data analysis (create initial hypothesis regarding required data, analyze data patterns, understand data gaps) → Proof of concept (create a prototype model for the intended problem, analyze other analytics use cases that emerged during data analysis) → Implement solution (implement the model on a larger scale, evaluate real-world results to iterate).]


Solution Benefits

In addition to the benefits of meter proving listed earlier, a predictive approach offers the following:

1. Cost Efficiencies:
a. The company need only prove when necessary (apart from federally mandated provings)
b. The company need not spend resources chasing false imbalance/leak alarms

2. Improved Equipment Reliability:
a. The company has the ability to control operating conditions to extend the life of meters
b. Timely diagnostic information leads to proactive maintenance

CONCLUSION

The pipeline workforce continues to be burdened by administrative tasks, especially sifting through and analyzing data. These tasks impair the ability to accomplish important value-added functions. As the power of analytics continues to be leveraged, it is important to use technology not just to provide people with reports, but to actually carry the burden of analysis and to provide insights and predictions. Employees will then be able to focus more time and effort on important decisions.

The future is even more exciting with the prospects of full automation, such as analytics solutions integrated with maintenance management systems that order replacement parts directly with minimal human intervention. But until that day arrives, pipeline companies can begin taking steps in that direction. Doing so offers firms an opportunity to leverage the benefits of predictive analytics, stay ahead of the competition and make their workplace a much better environment for employees.


Resources

1. Preventive Maintenance Strategies using Reliability Centered Maintenance: http://oce.jpl.nasa.gov/practices/pm4.pdf
2. Honda's Maintenance Minder System: http://owners.honda.com/service-maintenance/minder

THE AUTHORS Ashish Tyagi is a Manager with Sapient Global Markets and leads analytics engagements for midstream clients. He has helped large energy and investment banking clients with data analysis and modelling, data migration and application performance management. [email protected]

Jay Rajagopal is a Director with Sapient Global Markets and focuses on building and executing strategic initiatives for midstream companies. He has led several large advisory and technology implementation engagements across the midstream value chain including pipeline management systems, gas utility systems and the implementation of trading packages. [email protected]


ENERGY INTELLIGENCE:

the key to competitive advantage in the volatile LNG market

Today's liquefied natural gas (LNG) industry faces extreme price volatility and uncertainty in supply and demand. The recent oil price bust, continued growth of LNG spot trades over the last decade, and an increase in the number of LNG exporters and importers across the globe have added more complexity to the process of identifying the right market (for maximum profit) in which to trade LNG. As a result, firms that are interested in remaining competitive and protecting profits have begun to critically evaluate their business processes and the technology that enables them. Many are exploring ways to become more efficient, cut unnecessary costs and identify incremental revenue opportunities. In this article, Ritesh Sehgal, Parry Ruparelia and Sidhartha Bhandari examine how LNG organizations can integrate cargo intelligence with trading insights in order to make more informed buying and selling decisions—and strengthen their competitive advantage.

The world LNG markets are seeing an unprecedented shift in the evolution and expansion of LNG exporting and importing countries (see Figure 1). Countries such as Australia, Indonesia and Malaysia are investing to catch up with Qatar, the current market leader in LNG exports. In addition to the Asia Pacific region, demand is also on the rise in Egypt, Jordan, Pakistan, the Philippines, Poland and Uruguay where investment in building regasification plants to cater to large import LNG cargoes has been on the rise.


[Figure 1: Trends—LNG import/export with volume of LNG trades, 1990-2013. Volume of LNG trade in MTPA (left axis); number of LNG exporting and importing countries (right axis). Source: IHS, IEA, IGU.]

Factors, such as low crude oil prices, the upcoming increases in LNG supply from Australia, combined with Japan moving closer to restarting some of its nuclear reactors, are causing a slump in LNG prices (see Figure 2). This has further increased the pressure on LNG traders to stay profitable.

[Figure 2: 2014 vs. 2015 LNG prices. World LNG estimated landed prices, August 2014 vs. June 2015; for example, Japan $11.35 → $7.45, Korea $11.35 → $7.45, China $10.95 → $7.30, India $11.20 → $7.35, Spain $9.70 → $6.80, UK $6.59 → $6.38, Lake Charles $4.00 → $2.50.]

In fact, gas price spreads between Asia and Europe have remained volatile due to the fluctuating demand from Asian consumers such as Japan, South Korea, China and India.

Real-time decision making based on market and weather events

Historically, weather events, such as hurricanes and tsunamis, can have a big impact on LNG prices. For example, in March 2011, the tsunami on the eastern coastline of Japan led to the Fukushima power plant nuclear disaster, which ultimately resulted in shutting down approximately 50 Japanese nuclear reactors. This pushed up LNG prices in the Asian market and created a sharp increase in the gas price spread between Japan and Europe—making Asia a more favorable destination to trade LNG post-March 2011.


Similarly, market events, such as the addition of a new LNG supply plant in Papua New Guinea (which started shipping LNG to Asia in May 2014) and a dip in crude oil prices from $100 to $60 by the end of December 2014, created a slump in the LNG price.

Margin efficiency by reducing transportation costs

In recent years, the number of LNG tanker fleets has been growing. There are roughly 450 LNG tankers in service and many more under construction. However, earnings for LNG ship owners have been cut in half to $70,000 per day as compared to 2011. Since ship owners receive a thin margin, the market has grown increasingly competitive and more pressure has been put on charterers and ship owners to maximize margins.

Operational efficiency

With commercial analysts, traders and charterers monitoring the LNG market, it is critical that they have a comprehensive view of global LNG cargo movement and receive timely alerts in case of LNG cargo diversions, port closures and weather events. Often, this work is done manually (such as fixture data collection from various brokers) and in silos by regional analysts, resulting in inefficiencies, delayed data availability and error-prone results.

ENERGY INTELLIGENCE

Given the high volatility of LNG prices tied to such market and weather events, LNG traders are using advanced simulation algorithms, or energy intelligence platforms, in order to build forward-looking views listing the expected LNG imports and exports by destinations or cargo ownership. This, in turn, will enable them to make real-time decisions to divert, buy or sell an LNG cargo as a result of these events. Such unique digital platforms integrate available trading insights with cargo intelligence information. They help firms gain a competitive advantage and increase profit margins by using advanced data visualization and analytical capabilities that allow them to make informed commercial decisions.

In order to build an energy intelligence platform, the following tasks should be considered:

1. Gather cargo information (from the current day to 90 days out) from multiple market data providers. This cargo information includes volume of product, vessel International Maritime Organization (IMO) number/name, source and destination terminal information, estimated load and discharge dates and counterparty information.

2. Collect the annual product import/export plans of company-owned LNG liquefaction and regasification plants, or refineries in the case of products, and overlay that with cargo information.

3. Allow traders to enter proprietary market intelligence information, such as knowledge around cargoes that are in transit or anticipated LNG plant shutdowns.

4. Integrate with the Automatic Identification System (AIS) data feed to plot the real-time location of the vessels as well as identify diversions and where cargo is heading.

5. Indicate whether the vessel is loaded via algorithms that can use the draft percent.

6. Integrate with weather alerts or port closure alerts prompting businesses to take the required action.


7. Offer various analytical views for different user segments:
a. Traders: the real-time location of ships and how much cargo is on the water, along with estimated times of arrival (ETA).
b. Analysts: a forward 90-day view and historical data that can be used to forecast supply and demand imbalance.
c. Charterers: views that can be used to assess traffic around terminals to understand seasonality patterns and assess the inbound versus outbound traffic at LNG terminals. Higher inbound traffic at a terminal presents the opportunity to negotiate a better shipping rate with the ship owner.
d. Schedulers: real-time weather alerts and port closure alerts that can help schedulers make better decisions, track if cargo is onboard and confirm whether the vessel has reached its destination on time.
e. Risk professionals: views that can help ensure that price risk is hedged properly and is in line with the physical delivery of the cargo. For example, in case of cargo delays, hedging may need corrections.

Note that an energy intelligence platform is not restricted to LNG. It can also be applied to various other commodities, since the steps will remain the same but just the underlying data will change.

Why Now for LNG?

Traditionally, the LNG market was dominated by long-term, off-take contracts. Without these, it would not have been possible to make significant capital investments in extraction, transportation, storage and regasification that are all necessary to build the LNG supply chain. But over the years, the short-term/spot LNG market has been growing and has reached the 30 percent mark (30 percent of the total LNG traded = 70 million mt in 2014). Factors such as high volatility of LNG prices due to sudden market and weather events, seasonal gas consumption peaks and delays or disruption of domestic gas production, in conjunction with the growing demand for cleaner and safer energy fuel, are essentially driving the LNG spot market and have created the need for an LNG energy intelligence tool that can help businesses make faster and better commercial decisions.

CRITICAL SUCCESS FACTORS

In order to establish an energy intelligence platform, companies need to ensure core capabilities in data quality, analytics and visualization, and business change management.

Data Quality

Underlying data quality is critical for the platform to be successfully implemented and adopted. Since data may be collected from different sources, identifying the right data feed provider is important. If a company uses multiple market data providers, the same data may enter the system twice, requiring an intelligent solution that can discard duplicates. It is also possible that two brokers will provide different cargo destination information for the same vessel. In that case, a data feed priority mechanism can be used or the user can be prompted to override the destination.

Analytics and Visualization

Today's enterprise users are demanding more from their workplace software applications than ever before. With advances in front-end technologies, and touchscreen uses and gestures constantly being defined, more organizations are leveraging user-centered design professionals in order to carefully architect and design their software applications. This approach provides a greater emphasis on system intelligence and drives insight not previously possible. With good visualization, it is critical that data is well correlated to provide meaningful business insight that drives faster analysis and better decision-making.
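As a simple illustration of the duplicate-handling and feed-priority logic described above, the sketch below merges cargo records from two hypothetical providers, keeping one record per vessel and voyage and preferring the higher-priority feed when destinations disagree. The provider names, field names and priority scheme are assumptions made for the example.

```python
import pandas as pd

# Lower number = more trusted feed (hypothetical priority scheme).
FEED_PRIORITY = {"provider_a": 1, "provider_b": 2}

def merge_cargo_feeds(feeds: list[pd.DataFrame]) -> pd.DataFrame:
    """Combine cargo records from several providers into one record per
    (vessel IMO, load date), preferring the highest-priority feed."""
    combined = pd.concat(feeds, ignore_index=True)
    combined["priority"] = combined["provider"].map(FEED_PRIORITY)
    combined = combined.sort_values("priority")
    # keep="first" retains the most trusted record for each vessel/voyage key
    return combined.drop_duplicates(subset=["imo", "load_date"], keep="first")

feed_a = pd.DataFrame({
    "provider":    ["provider_a"],
    "imo":         [9334567],
    "load_date":   ["2015-07-01"],
    "destination": ["Futtsu"],
})
feed_b = pd.DataFrame({
    "provider":    ["provider_b", "provider_b"],
    "imo":         [9334567, 9112233],
    "load_date":   ["2015-07-01", "2015-07-03"],
    "destination": ["Incheon", "Zeebrugge"],   # conflicting destination for 9334567
})
print(merge_cargo_feeds([feed_a, feed_b]))
```

In a real platform, the discarded record would also be logged so that a user can be prompted to review and, if necessary, override the chosen destination.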


Business Change Management

Change management is critical for solution adoption. If not handled properly, a new platform or solution can cause confusion, lost opportunity, wasted resources and poor morale. Typically, business users are resistant to change if not engaged in the right way. The adoption of an energy intelligence platform that drives commercial decisions can become even more challenging due to its complexity. Therefore, the key to success is to engage business users from day one—involve them in iterative checkpoints to ensure data quality and usability issues are tackled upfront, and use actual data to drive day-to-day decisions (i.e., stay in parallel mode for a while before moving to production) as part of testing to gain business user confidence.

CONCLUSION

As the LNG market continues to fluctuate, trading organizations that come out ahead will need to re-envision how they make decisions—and the tools they use to gain intelligence. An energy intelligence platform can give them the technology solution they need to integrate cargo intelligence with trading insights in order to make more informed buying and selling decisions. But to be successful, they will need to focus on data quality and management, explore new approaches, such as visualization, and ensure adoption by establishing well-planned business change management.

For more information about this topic, reference the article, "ENERGY INTELLIGENCE: detecting new revenue opportunities," on page 18 of the spring 2013 issue of CROSSINGS, and the article, "LIQUEFIED NATURAL GAS: the case for an integrated portfolio approach," on page 27 of the spring 2011 issue of CROSSINGS.


Resources

1. International Gas Union, World LNG Report, 2014 Edition, http://www.igu.org/sites/default/files/node-page-field_file/IGU%20-%20World%20LNG%20Report%20-%202014%20Edition.pdf
2. BG Group, Global LNG Market Outlook 2014/15, http://www.bg-group.com/480/about-us/lng/global-lng-market-outlook-2014-15/
3. OilPrice.com, Oil Prices May Recover, But Not LNG, http://oilprice.com/Energy/Natural-Gas/Oil-Prices-May-Recover-But-Not-LNG.html

THE AUTHORS Ritesh Sehgal is a Senior Manager within Sapient Global Markets’ Trading & Risk Management Practice. Based in the Houston office, Ritesh has been involved in a wide array of advisory and business consulting initiatives — particularly in risk management and supply logistics. He is a certified Energy Risk Professional and GARP chapter director for Houston and is currently working as a solution architect for a US oil major. [email protected]

Parry Ruparelia leads Sapient Global Markets’ User Experience and Visualization Practice in Houston with a particular focus on the capital and commodity markets. He has led user-centered design projects from conception to delivery for 15 years. Parry is an expert in creative leadership, facilitating stakeholder workshops, gathering and developing user requirements, providing project leadership in producing design concepts, and addressing key visualization and experience design opportunities. [email protected]

Sidhartha Bhandari is a Director based in Sapient Global Markets’ Houston office. He partners with clients to advise, establish and manage C-TRM business and IT transformation engagements with a specific focus on organizational design, capability development, outsourcing and change management. [email protected]


FUEL MARKETING OPTIMIZATION:

providing an advantage in an increasingly complex and competitive market

Fuel marketing companies are faced with a volatile commodity market and an increasingly stringent regulatory environment. Better decision support systems are required to provide insights to grow margins and effectively utilize assets. In this article, Pooja Malhotra, Rathin Gupta and Rajiv Gupta discuss how recent advances in computing make a strong case for fuel marketing companies to evaluate and invest in optimization tools. They also explain how users can leverage these tools to more efficiently and effectively evaluate multiple scenarios, uncover opportunities and make better business decisions.

For decades, fuel marketing companies have been connecting customers to refiners, giving refiners demand predictability to plan refinery runs and customers a secure supply at competitive prices. Because the fuel marketing business is a low-margin business, companies have traditionally relied on cost savings through operational efficiencies and economies of scale to maintain and increase profitability. Although these methods have been effective in the past, companies are realizing that this approach has resulted in an underinvestment in people, tools and process standardization, which is making earning the next dollar more difficult.

Compounding the problem, a typical organization with separate procurement, sales and marketing teams may be using different models, tools, data and assumptions to make decisions. This leads to suboptimal outcomes and a lack of transparency into the decision-making process. The underlying inconsistencies also make communication across groups more challenging. To survive in the current environment, companies need to have an increased focus on standardizing data, tools and processes, as well as providing appropriate information to the decision makers at the right frequency.

The fuels supply chain is inherently complex with many moving parts running at different velocities, making it difficult to manage and adjust course on short notice. It is impossible to assess the impact of assumptions as well as decisions on the supply chain and profitability without the use of strong analytical tools which can help users test hypotheses, evaluate the current state and optimize future decisions.

A SYSTEMATIC APPROACH TO BETTER DECISIONS

Recent advances in computing technology, particularly for big data analytics, have made it easier and significantly less expensive to leverage technology in order to get more granular insights into the supply chain and make informed decisions.


Portfolio optimization can help companies grow revenue, increase market share and improve profitability. It also requires them to treat all their assets, including terminals, supply contracts and logistics contracts, as an integrated portfolio that is co-optimized to maximize overall portfolio profitability. All contractual obligations should be met while deriving the most value from the entire portfolio. With the concepts of advanced analytics applied to the fuel marketing business, models can be developed to enable outcomes that maximize profit margins while staying within risk parameters and operational constraints. It is important to note that:

› People at different levels use different types of data for decisions. The people on the ground typically drive operational decisions, while the leadership team drives more strategic decisions.
› Many decisions must be made in a relatively short span of time.
› When making decisions, both controllable and non-controllable factors are usually in play, resulting in extremely high levels of uncertainty.
› Decisions made today have an impact on tomorrow's decisions, creating the need for a more agile decision-making process that can rapidly adapt to changing conditions.

As a consequence, the approach used needs to focus on optimizing near-term to longer-term operational decisions to operate the existing supply chain as well as making strategic decisions about evolving the business.

Optimizing existing operations | Evolving the business
Ensure the best economic utilization of assets to meet receipt and delivery commitments | Improve management decisions, decide on the right asset portfolio mix with what-if analysis and drive asset changes (acquisitions or divestitures)
Evaluate and monetize arbitrage opportunities, maximizing total profit | Evaluate sales channel profitability and determine the optimal mix of flow through different sales channels
Maximize contract incentives and variability options | Enable commercial decisions through enhanced contract terms and renegotiation of existing contracts
Determine blending opportunities and negotiate capacity | Evaluate investments into blending facilities
Optimize the inventory, decide on inventory draw or build based on market outlook | Identify the most valuable locations to build storage and connectivity
Drive annual planning decisions with multiple scenario analysis | Identify new channels for investment

Table 1: Optimizing operations versus evolving the business.


To achieve incremental margin gains, all decision parameters across the value chain need to be evaluated, leveraging the optionality embedded in the portfolio assets and contracts and monetizing market conditions while adhering to compliance limits and risk thresholds. These recommendations also need to be revalued and adjusted on a timely basis to account for market volatility and any unforeseen events.

There are multiple algorithmic tools available for managing strategic and operational plans and enterprise risk goals, such as linear programming or stochastic modelling tools. These decision support tools can be extended to manage the information and computational complexity of the supply chain assets, contracts, logistics and positions in recommending optimal plans. The major benefit of such tools is to provide decision makers with the ability to run multiple variations of scenarios to evaluate execution feasibility. These tools also provide the ability to monitor the execution options available across the business units, along with the potential risk profile, before execution decisions are made. Tools can also better prepare the organization by stress testing existing plans and by simulating pricing and volume stress scenarios to evaluate the impact on portfolios and response strategies.

For an optimization tool to provide relevant, valid output and drive decision making, it is important that the business has insight into all the contracts as well as compliance, risk and operational constraints. These include the following:

› All contracts for receiving and delivering commodities
› All logistics contracts and operational constraints for moving/processing commodities
› All demand and supply forecasts
› Current prices and the future outlook
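To give a feel for how such a decision support model is typically expressed, the sketch below formulates a toy fuel portfolio as a linear program using the open-source PuLP library. The terminals, contracts, prices and volumes are hypothetical placeholders for the contract, logistics and pricing data a real implementation would draw from the systems listed above; it is a minimal sketch, not a production model.

```python
# A minimal sketch, assuming hypothetical supply contracts, demand channels and costs.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

supplies = {"RefineryA": 120_000, "RefineryB": 80_000}             # max contracted volume (bbl)
channels = {"Rack": (95.0, 90_000), "Wholesale": (92.5, 70_000)}   # (sales price $/bbl, max demand)
cost = {("RefineryA", "Rack"): 88.0, ("RefineryA", "Wholesale"): 87.0,
        ("RefineryB", "Rack"): 89.5, ("RefineryB", "Wholesale"): 86.5}  # supply plus logistics $/bbl

prob = LpProblem("fuel_portfolio", LpMaximize)
flow = {(s, c): LpVariable(f"flow_{s}_{c}", lowBound=0) for s in supplies for c in channels}

# Objective: total margin = sales price minus delivered cost, summed over all flows
prob += lpSum((channels[c][0] - cost[s, c]) * flow[s, c] for s in supplies for c in channels)

# Supply contract limits and channel demand limits
for s, cap in supplies.items():
    prob += lpSum(flow[s, c] for c in channels) <= cap
for c, (_, dem) in channels.items():
    prob += lpSum(flow[s, c] for s in supplies) <= dem

prob.solve()
for (s, c), v in flow.items():
    print(s, "->", c, round(v.value(), 0), "bbl")
print("Expected margin: $", round(value(prob.objective), 0))
```

Extending this kind of formulation with storage, blending and time periods is how the what-if questions discussed next can be evaluated consistently rather than by spreadsheet guesswork.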

Figure 1: Portfolio optimization workflow. Inputs such as supply chain costs (storage, transportation, blending and fixed costs), contracts (purchase, sale, transportation and storage), physical assets (trucks/rail/pipeline/ship, terminals/storage and blending facilities) and pricing (rack prices, index prices and differential price curves) feed a model and optimization engine that supports what-if analysis and produces an optimal plan and recommendations: minimized supply chain costs, optimized contracts, maximized profits and maximized incentives.

USES AND BENEFITS

An optimization tool provides clear advantages for fuel marketing businesses in terms of both monthly and annual planning.

Improved annual planning: At the start of a planning year, fuel marketing businesses carry out planning activities in which product demand is matched with supply to balance the supply chain and determine annual profitability targets. Given the complex nature of the supply network and the variability in supplier and customer contracts, creating a plan that is both holistic and insightful becomes very difficult. Traditionally, companies plan based on historical numbers and "guesstimates" derived from market conditions and business knowledge. For such scenarios, an optimization tool can help business managers make much more astute planning decisions by evaluating all possible permutations and drawing out new insights. The optimization model can be used to run simulations by changing different variables and evaluating for maximum profitability. The model can provide results that account for operational constraints at the lift and delivery locations and for variability in supply and delivery volumes and costs. The real advantage of an optimization tool is that what-if analysis can be done by simply changing the inputs to the model (a brief sketch of such a scenario loop follows the planning questions below). These results, when compared with previous model runs, can give powerful insights into the business, such as:

› What should the allocation of products to the different sales channels be, given the supply contract constraints, in order to maximize profitability and, in turn, drive the sales effort?

› Will it be valuable to introduce a blending facility to sell a product spec versus buying from a third party? Under what price and volume ranges will it be profitable?
› What will be the impact to the existing supply chain if a new supply chain network is acquired?

Improved monthly planning: At a monthly level, fuel marketing businesses plan the nominations for next month's deliveries, which translate into weekly or biweekly dispatch plans for logistics operators. Traditionally, these plans have been driven by historical precedence rather than true economic value. Given the uncertainties, complexity and lack of tools for evaluation, people have simplified assumptions to develop a feasible and easy-to-use solution. Unfortunately, these shortcuts prevent all the options from being considered and, often, money is left on the table. An optimization model can consider various opportunities to identify the cheapest source for procurement as well as specific customer delivery, contract and logistics constraints. This helps businesses reach goals such as maximum profitability or maximum contract utilization. The model can answer questions such as:

› In case of limited supply, which contracts can be reduced within contract constraints while maximizing profitability?
› What is the optimal way to schedule distribution throughout the month if the market is in contango versus normal backwardation?
› How can maximum demand be met in case of logistics constraints due to outages?
› In which scenarios is it better to meet requirements through spot contracts versus long-term negotiated contracts while minimizing the impact to contract performance benchmarks?

› How should the supply contracts be utilized, given a mix of demand across sales channels, to maximize specific contract incentives?
› What will be the impact to supply chain profitability if a new storage terminal is introduced at a specific location?
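As a hedged illustration of the what-if workflow referenced above, the sketch below wraps a model run in a simple scenario loop and compares outcomes against a base case. The solve_portfolio function is a hypothetical stand-in for the optimization model and is stubbed with a trivial margin formula so the loop runs; the scenario names and numbers are illustrative only.

```python
# A minimal sketch of what-if analysis: rerun the same model under different input
# assumptions and compare margins. solve_portfolio() is a hypothetical stand-in for the
# optimization engine, stubbed here with a simple formula so the example executes.
def solve_portfolio(inputs):
    # Stub: margin grows with rack price and available supply; a real implementation
    # would call the optimization model with these inputs.
    return (inputs["rack_price"] - 88.0) * inputs["supply_cap"]

base_inputs = {"rack_price": 95.0, "supply_cap": 200_000}

scenarios = {
    "base": {},
    "lower_rack_price": {"rack_price": 93.0},
    "supply_outage": {"supply_cap": 150_000},
    "expanded_supply": {"supply_cap": 220_000},
}

results = {}
for name, overrides in scenarios.items():
    inputs = {**base_inputs, **overrides}   # apply scenario-specific overrides
    results[name] = solve_portfolio(inputs)

base = results["base"]
for name, margin in results.items():
    print(f"{name:>18}: ${margin:,.0f} (vs base: ${margin - base:+,.0f})")
```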


THE CHANGE MANAGEMENT FACTOR

As an organization moves toward the adoption of better optimization tools, it is important to have a concrete user adoption plan in place. Four significant areas need to be covered as part of the change management process:

› Overcoming historical bias: The natural tendency is to take the "safe" route and do what was done in the past. This requires challenging tribal assumptions and constraints that have built up over time. To address this, the decision-making process needs to be actively assessed to check that data-driven insights, rather than historical bias, are driving decisions.
› Overcoming silos: The organization will need to rethink its collaboration strategy along with the performance metrics for different groups to ensure that individual metrics are aligned to the same goal. It must also ensure that when model recommendations are executed, the impact is measurable. Modeling data from multiple groups (for example, distribution and supply) in an optimization tool can provide powerful metrics to ensure that strategic goals are met. Changing input data, running the model multiple times and then comparing the results can help identify areas of collaboration and ensure maximum profit for the organization as a whole.
› User buy-in: New tools can create uncertainty among employees who feel threatened by them and fear their jobs may be in jeopardy. Leadership support is required to help users appreciate that more powerful decision support tools will improve their decision making. Sufficient user training will also be needed.
› Data governance: The quality of output depends on the quality of input data. Given the complexity of the supply chain and the multiple systems in existing enterprise architectures, it is important to ensure the data is systematically captured, validated and automated to flow between systems. This helps ensure that data quality is not compromised as data flows throughout the enterprise. Business users must be educated on the importance of good-quality data and instructed that exceptions need to be corrected.


CONCLUSION

Fuel marketing companies will need to evolve to maintain or grow their market share in an increasingly competitive and volatile commodity market. The systematic incorporation of advanced analytics and optimization tools in the decision-making process will allow companies to swiftly respond to and capitalize on changing market conditions and gain a competitive edge.

THE AUTHORS

Pooja Malhotra is a Manager at Sapient Global Markets with over 11 years of experience working primarily with energy firms in the oil and gas space. She combines a strong industry background with process and system knowledge to provide creative solutions for improving business operations. Her current work involves modeling and implementing mathematical optimization solutions for commodity trading, especially in the areas of oil supply chain optimization and natural gas supply chain optimization including gathering, processing, storage and distribution networks. [email protected]

Rathin Gupta is a Manager of Business Consulting at Sapient Global Markets with nine years of experience in business/management consulting and investment banking. Most of his career has been in the oil and gas industry, particularly in the energy trading and risk-management space as well as supply and logistics. [email protected]

Rajiv Gupta is a Director of Business Consulting at Sapient Global Markets based in Houston. He works in an advisory role with clients and provides project leadership in the commodities domain. Rajiv has over 15 years of full project lifecycle experience working with start-ups, utilities, investment banks and integrated oil majors across front, middle and back offices. [email protected]


SHIPPING ANALYTICS:

improving business growth, competitive advantage and risk mitigation

Data analytics is driving incremental value for ship owners and charterers by influencing decisions across the various business functions of the marine business—such as voyage management, vessel operations and manning, as well as chartering and third-party risk assessment. As information collection and integration throughout the shipping value chain continues to evolve, shipping companies are beginning to harness data to make a range of decisions, from managing routine activities to improving operations and driving strategic decisions focused on transforming the business. In this article, Kunal Bahl presents analytics use cases that show how charterers and ship owners can utilize the power of data and analytics to improve decision making.

CHARTERING

Over the last two decades, technological advancements such as electronic trading have reduced the cost of transactions while increasing competition and transparency in the trading industry. Similar technological advancements have made other industries such as insurance, healthcare, transportation and retail more competitive. However, the marine transportation business has not yet seen such large-scale transformation, which has resulted in a largely outdated and burdensome decision-making process.

Finding the right ship for cargo at the most economical price is a key function performed by charterers. However, charterers' access to this information is limited to what is provided by known brokers and ship owners. Since the information is shared "selectively," it may or may not reflect the most efficient option. Charterers who have established relationships with many brokers will most likely be able to find a suitable ship to transport cargo, but the same is not true for small ship owners and charterers who lack access to timely information. In such a situation, how can charterers ensure they have made the right decision if the information provided is incomplete or suspect? There is an opportunity to utilize readily available, accurate and actionable information to improve decision making.

Consider a charterer who is looking for a third-party vessel to move cargo from the Arabian Gulf to South East Asia for a certain cargo size and date. Rather than relying on ship brokers for options, freight rates and other information, a simple information portal (see Figure 1) can provide alternatives. The charterer can provide pertinent inputs, such as load area (Arabian Gulf), cargo size (280,000 MT) and trade dates (October 26 to October 30). The information portal will then provide a list of suitable vessels available in the Arabian Gulf around that time.


This is made possible by integrating Automatic Identification System (AIS) information, position reports, estimated times of arrival, vessel particulars (such as size) and market information into an exchange portal used to find all available alternatives as well as the freight forecast. This type of portal can give charterers and ship owners access to more options, thus improving transparency and competitiveness.

Charterers can further improve decision making by integrating vessel availability data with their internal or external vetting information. If a vessel does not meet the required standards and has below-par feedback, then the ship can be removed from the selection process early on, saving time in selecting the best available vessel for the cargo.
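The screening step behind such a portal can be sketched as a simple data filter. The sketch below uses pandas with hypothetical vessel records and column names; a real portal would join AIS positions, position reports, vessel particulars, vetting feeds and freight estimates rather than a hand-built table.

```python
import pandas as pd

# Hypothetical vessel availability data; column names and values are illustrative only.
vessels = pd.DataFrame({
    "vessel_name": ["VLCC Alpha", "VLCC Beta", "Suezmax Gamma"],
    "open_area": ["Arabian Gulf", "Arabian Gulf", "Arabian Gulf"],
    "dwt_mt": [300_000, 310_000, 160_000],
    "open_date": pd.to_datetime(["2015-10-27", "2015-10-29", "2015-10-26"]),
    "vetting_score": [82, 55, 91],
    "freight_estimate_usd": [3_400_000, 3_150_000, 2_100_000],
})

query = {
    "load_area": "Arabian Gulf",
    "cargo_size_mt": 280_000,
    "window": (pd.Timestamp("2015-10-26"), pd.Timestamp("2015-10-30")),
}

candidates = vessels[
    (vessels["open_area"] == query["load_area"])
    & (vessels["dwt_mt"] >= query["cargo_size_mt"])
    & vessels["open_date"].between(*query["window"])
]

# Drop vessels that fail internal/external vetting before any commercial discussion
shortlist = candidates[candidates["vetting_score"] >= 70].sort_values("freight_estimate_usd")
print(shortlist[["vessel_name", "dwt_mt", "open_date", "freight_estimate_usd"]])
```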

Figure 1: Alternatives analysis page for charterers.

VETTING

The lion's share of marine transportation of bulk oil and gas is enabled through third-party ships. Unlike in other modes of transportation, in marine transportation charterers are responsible for the quality of the vessel and its operator. For risk-averse charterers, the viability of the vessel and its operator is as important as the charter hire rate.

Understanding the importance of quality, vessel owners and operators are focusing more attention on ensuring that their fleets are deemed acceptable for use by charterers, and they do so as efficiently and cost effectively as possible. Instead of improving vessel quality, their focus is on meeting or passing the acceptance criteria.

The vetting process, as illustrated in Figure 2, includes feedback from various entities such as inspectors, terminals and port state authorities, as well as operator self-assessment. Some of this information is subjective in nature and can result in extremely slow and/or poor vetting decisions.


Figure 2: Typical information used in the vetting decision process, including SIRE inspections, terminal feedback, casualties, detentions, TMSA self-assessments and previous performance or ad hoc reports.

Data analytics can help charterers, along with integrated oil companies and vetting organizations, analyze the different sources of information and select the right vessel with the least amount of risk. While evaluating a vessel or an operator’s entire fleet, it is important to look at granular information by slicing it into different risk categories. Risk categorization and comparison to the industry average or averages for certain types of fleet can provide valuable insights about vessel performance to charterers. Figure 3 shows a much more objective representation of the risk rating for a vessel compared to the rest of the fleet or other hired vessels.
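One hedged way to make that comparison concrete is sketched below: vetting findings are aggregated by risk category and compared for a single vessel, its operator's fleet and the industry sample. The findings table, category names and flag threshold are hypothetical illustrations, not a prescribed scoring scheme.

```python
import pandas as pd

# Hypothetical vetting observations: one row per finding with a 0-100 risk score.
findings = pd.DataFrame({
    "vessel":   ["MT Example", "MT Example", "MT Other", "MT Other", "MT Third"],
    "operator": ["Example Shipping"] * 2 + ["Other Marine"] * 2 + ["Third Carriers"],
    "category": ["Navigation", "Safety Management", "Navigation", "Manning", "Navigation"],
    "risk_score": [65, 40, 30, 35, 25],
})

vessel_risk   = findings[findings["vessel"] == "MT Example"].groupby("category")["risk_score"].mean()
operator_risk = findings[findings["operator"] == "Example Shipping"].groupby("category")["risk_score"].mean()
industry_risk = findings.groupby("category")["risk_score"].mean()

comparison = pd.DataFrame({
    "this_vessel": vessel_risk,
    "this_operator": operator_risk,
    "industry_average": industry_risk,
})
# Flag categories where the vessel is materially riskier than the industry average
comparison["flag"] = comparison["this_vessel"] > 1.2 * comparison["industry_average"]
print(comparison.sort_values("this_vessel", ascending=False))
```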

Figure 3: Risk categorization based on all inputs, showing how a vessel and its operator compare with the average and the lower-to-higher risk range across categories such as pollution preparedness, safety management, navigation, cargo and ballast, policies and procedures, engine room and steering, manning, and condition and appearance.

An organization’s approach to vetting needs to be nimble enough to respond to changing regulatory requirements and market dynamics. Although nearly all voyages happen without any serious incident, safety cannot merely be classified as the absence of accidents or incidents. A good test of an organization’s vetting model can be performed by simulating events, such as an incident, a detention or a casualty, one day prior to such an occurrence. If the model gives the right answer, such as recommending that the ship not be hired, then the charterers can expect the model to be reliable.
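That back-testing idea can be sketched as a small evaluation loop. In the sketch below, the model scores, rejection threshold and incident records are all hypothetical; a real test would rebuild each vessel's risk profile as of the day before the event from the organization's own data.

```python
import pandas as pd

# Hypothetical back-test records: the model's risk score for each vessel as of the day
# before a recorded incident, alongside the accept/reject threshold in force at the time.
backtest = pd.DataFrame({
    "vessel": ["MT A", "MT B", "MT C", "MT D"],
    "incident_date": pd.to_datetime(["2015-02-01", "2015-04-15", "2015-06-30", "2015-09-12"]),
    "model_score_day_before": [78, 45, 82, 67],   # higher = riskier (hypothetical scale)
    "reject_threshold": [70, 70, 70, 70],
})

# The model "passes" an event if it would have recommended against hiring the vessel
backtest["model_would_reject"] = backtest["model_score_day_before"] >= backtest["reject_threshold"]
hit_rate = backtest["model_would_reject"].mean()

print(backtest[["vessel", "incident_date", "model_would_reject"]])
print(f"Model flagged {hit_rate:.0%} of vessels the day before an incident")
```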

OPERATIONS

Operating a vessel at its optimum speed is difficult. Like automobiles, ships have an optimum speed (by design), and at the time of delivery tests are conducted to determine the optimum speed for fuel consumption. Over time, the optimum speed changes due to a variety of factors such as engine wear and maintenance. It is very important for ship owners to always know the fuel consumption of their fleet at given speeds. Ship operators can use analytics to determine the optimum speed, taking into consideration such factors as bunker cost, freight rates and schedule.

Apart from optimum speed, fuel consumption data can be used for cost-benefit analysis of vessel maintenance such as hull cleaning and propeller polishing. Traditionally, these decisions are based on intuition or a fixed schedule rather than empirical evidence of a vessel's performance. Data analytics can make it easier for operators to decide when to perform maintenance and to quantify its benefits. In Figure 4, the normal curve shows speed and fuel consumption data from all voyages, while the maintenance curve shows speed and fuel consumption data within a certain number of days of the maintenance being performed. In this example, at 14.5 knots there is a difference of 19 MT/day of fuel consumption before and after a certain type of maintenance. At current bunker prices ($350/MT), this translates into a difference of roughly $450,000 for a single US West Coast-to-Arabian Gulf round-trip voyage. If the total cost of maintenance and vessel downtime is less than $450,000, the maintenance is worth performing regularly.
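The arithmetic behind that comparison is simple enough to sketch directly. The 68-day round-trip duration below is an assumption chosen so the numbers roughly reproduce the $450,000 figure above, and the maintenance cost is purely illustrative.

```python
# A minimal sketch of the maintenance cost-benefit calculation described above.
fuel_saving_mt_per_day = 19      # difference at 14.5 knots, before vs. after maintenance
bunker_price_usd_per_mt = 350    # current bunker price
round_trip_days = 68             # assumed US West Coast-to-Arabian Gulf round-trip duration

voyage_saving = fuel_saving_mt_per_day * bunker_price_usd_per_mt * round_trip_days
print(f"Fuel savings per round trip: ${voyage_saving:,.0f}")   # roughly $452,000

maintenance_cost = 250_000       # hypothetical: yard cost plus opportunity cost of downtime
if maintenance_cost < voyage_saving:
    print("Maintenance pays for itself within a single round trip")
```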

Figure 4: Speed and fuel consumption curve before and after maintenance, plotting fuel consumption (MT/day) against speed (knots) for the normal and post-maintenance cases.


VOYAGE OPERATIONS

Data analytics also helps voyage partners access information in a more efficient manner. From time to time, terminal operators, voyage managers or port agents need to know certain information, such as a ship's estimated time of arrival (ETA) and cargo details. Instead of relying on notes, emails or phone calls, they can track vessels using dashboards. This helps them make more effective decisions about terminal and berth allocation, cargo handling and route tracking. It also improves situational awareness regarding the crew on board as well as upcoming maintenance and inspections. Figure 5 shows the current position of a vessel on a voyage from Ras Tanura to Long Beach, along with the ETA, cargo quantity and discharge window details. This information is even more valuable for short-haul voyages (e.g., US domestic, inland barges, Black Sea) with shorter turnaround times.

The voyage operations dashboard can also provide information about any deviations from optimum performance. The ideal route, the weather service-provided route and the actual route can be tracked as the voyage is underway rather than after the fact. Any operational changes to speed, ETA and other factors can also be managed in real time, ensuring that the voyage performs to plan and remains profitable.
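A hedged sketch of the ETA calculation behind such a dashboard follows: the great-circle distance from the latest reported position to the discharge port is divided by the average speed over ground. The coordinates and speed are hypothetical illustrative values, and a real system would also account for routing, weather and canal transits.

```python
import math
from datetime import datetime, timedelta, timezone

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two latitude/longitude points."""
    r_nm = 3440.065  # mean Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

current_pos = (21.5, 150.0)       # hypothetical mid-Pacific position from the latest report
long_beach = (33.75, -118.21)     # approximate discharge port position
speed_knots = 14.5                # average speed over ground from recent position reports

distance_nm = haversine_nm(*current_pos, *long_beach)
eta = datetime.now(timezone.utc) + timedelta(hours=distance_nm / speed_knots)
print(f"Distance to go: {distance_nm:,.0f} nm, ETA (UTC): {eta:%Y-%m-%d %H:%M}")
```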

Figure 5: Latest position of a vessel en route from Ras Tanura to Long Beach.


CONCLUSION

Today's companies in the marine transportation industry may not always fully utilize the power of the data at their disposal—data that is simple to collect, store and integrate. The use cases discussed in this article are just a few examples of how the marine transportation business can use sophisticated data analytics techniques to improve opportunities for business growth, competitive advantage and risk mitigation. The technologies that enable data integration, analytics and discovery have matured greatly in the last decade and offer a way to build a foundation for a long-term, sustainable and analytical approach to improving decision-making and, ultimately, the business itself.

THE AUTHOR

Kunal Bahl is a Senior Manager in Sapient Global Markets' Midstream Practice based in San Francisco. He is focused on Marine Transportation and his recent assignments include leading a data integration and analytics program for an integrated oil company, process automation for another integrated oil company and power trading system integration for a regional transmission authority. [email protected]


STOCHASTIC ANALYTICS:

increasing confidence in business decisions

With the increasing complexity of the energy supply chain and markets, it is becoming imperative for businesses to make decisions faster and with more confidence in the face of increasing uncertainty. In this article, Tomas Simovic and Rashed Haq discuss how today's technology and advanced models are enabling firms not simply to improve decision-making but to reengineer it entirely. They also explain how, by solving the right problem, using the right tools, approximating business problems and leveraging visualization, companies can impart valuable insights to their business users.

INTRODUCTION

Whether in financial markets or commodity trading, decision makers are faced with decisions involving a large number of variables and data points, many of them uncertain. These decisions are often about the most efficient use of assets, such as using financial products as collateral, managing a large oil or gas supply chain, balancing renewable portfolios, scheduling or routing fleets, or managing hydro generation or gas storage. The uncertainty can come from market prices and spreads or from volumes such as customer demand, weather and technical disruptions. As margins decline and the markets grow more integrated, fast-paced and increasingly competitive, decisions must be made more quickly and with greater confidence, even in the face of increasing uncertainty. Relying on just experience or basic analytics may work some of the time. However, there are times when a robust decision-making framework can have a greater positive business impact.

Skepticism about advanced analytics lingers among business users, due in large part to past difficulties. A decade ago, most successful advanced decision aid tools were built as monolithic bespoke systems, involving teams of PhDs, partnerships with the academic world, big budgets and long project lead times. Oftentimes, the real business requirements were not well understood by the quantitative experts. And even if business requirements were understood, the process usually took so long that by the time the decision aid tool was finished, the business had moved on. As such, only large companies with big research budgets and patient management could afford to go down this road.

The situation has improved in recent years. A quantum leap in computing power has been made, along with an exponential decrease in costs driven by cloud computing and big data technologies. Moreover, practical advances in numerical optimization have made it possible to harness this computing power to solve real-world problems. This know-how has been packaged into an increasing number of highly usable tools, some licensed and others open source. Software vendors have become adept at selecting the best PhDs and partnering with leading researchers to provide extremely powerful numerical optimization tools at a fraction of the cost of custom development. Therefore, instead of requiring large IT projects, these applications can be built from a variety of building blocks: a calculation engine from one vendor, a modelling language from another source and a GUI from a third party. This leaves more time to gather business requirements, roll out prototypes, interact with business users and manage project costs.


MATHEMATICAL OPTIMIZATION: CONCEPTS

It is important to situate decision-making under uncertainty in the analytics space. Advanced analytics are often divided into three categories: descriptive, predictive and prescriptive. Descriptive analytics answers the question: "What has happened in the past?" Predictive analytics answers the question: "What is likely to happen in the future?" or alternatively "What is the set of possible futures?" Finally, prescriptive analytics provides business users with suggestions on "What should we do?" or "What is the set of possible decisions and their implications?" Decision-making under uncertainty includes all three categories: descriptive analysis informs predictive methods, which are used to forecast the possible values of uncertain data points. Prescriptive methods compute the right decision to make given these forecasts and any constraints, while maximizing or minimizing some objective such as profit or cost.

Figure 1: Prescriptive analytics suggests what decisions to make based on predictions (what will happen, when it will happen and why), and often allows users to simulate the effects of those decisions.

PREDICTIVE METHODS IN COMMODITIES

The most likely uncertain data points a business decision maker in commodities will face are prices, customer demand and weather (e.g., hydro inflows, renewable production). Prices are notoriously hard to predict. However, this is not a new problem for most commodity trading companies: it is very likely that forward curves of some quality are computed to value portfolios and calculate risk metrics. Risk departments are also quite familiar with Monte Carlo simulation methods that allow the computation of a large number of possible price paths. Predicting demand or weather might be a newer problem, but if enough historical data is available, forecasts can be obtained using statistical tools such as principal component analysis. Predictive analytics is an emerging area, garnering increasing interest from both business users and software vendors.

The area of applied mathematics that can model business problems using decision variables, constraints and an objective function to maximize or minimize is called mathematical optimization. When all data is "certain" or "deterministic" and all constraints and decision variables are linear and real valued, a problem can be modeled as a "linear program." If mutually exclusive possibilities exist in the business problem, some variables might have to be restricted to binary or integer values, leading to business problems being modeled as "mixed integer linear programs." Finally, if some data points are uncertain, or stochastic, and the optimization criterion is expected profits or costs, this uncertainty can be modeled as a set of scenarios and the problem formulation becomes a "stochastic program." An important point is that, in some business problems, not all decisions have to be made at once, and sometimes new information becomes available over time that can help improve the current best decision. Depending on how many such decision points there are, these are referred to as "single-stage," "two-stage" or "multi-stage" stochastic programs.
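As a hedged illustration of the two-stage structure, the sketch below uses the open-source PuLP library for a toy gas storage decision: the injection volume is committed before the future price is known, while the sale volume can respond to whichever scenario unfolds. All prices, probabilities and capacities are hypothetical.

```python
# A minimal two-stage stochastic program sketch using PuLP. The first-stage decision
# (volume injected into storage today) is shared across scenarios; the second-stage
# decision (volume sold later) depends on which price scenario materializes.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

buy_price = 3.0                                                          # $/MMBtu paid today
scenarios = {"low": (0.3, 2.5), "mid": (0.5, 3.4), "high": (0.2, 5.0)}   # (probability, future price)
capacity = 1_000_000                                                     # storage capacity in MMBtu

prob = LpProblem("gas_storage", LpMaximize)
inject = LpVariable("inject", lowBound=0, upBound=capacity)              # first-stage decision
sell = {s: LpVariable(f"sell_{s}", lowBound=0) for s in scenarios}       # second-stage decisions

# Expected profit: scenario-weighted sales revenue minus the cost of the injected gas
prob += lpSum(p * price * sell[s] for s, (p, price) in scenarios.items()) - buy_price * inject

# In every scenario, no more can be sold than was injected
for s in scenarios:
    prob += sell[s] <= inject

prob.solve()
print("Inject today:", inject.value(), "MMBtu; expected profit: $", round(value(prob.objective), 0))
```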


Figure 2: Different types of mathematical programming, depending on what types of variables and what type of data are being used.

As more elements are added to a mathematical optimization model, going from a linear program to a multi-stage stochastic program, the computational burden increases very quickly with the size of the problem. Before going with a full-fledged stochastic model, it makes sense to perform sensitivity analysis on the input data: some types of data might influence the result more than others. Another commonly used method that allows business users to deal with uncertainty while restricting the modeling and computation effort to a deterministic model is "scenario analysis." The kinds of data to which the model result is most sensitive can be modeled as multiple scenarios, with the model run individually for each scenario and the results compared.

While sensitivity analysis and scenario analysis are inexpensive ways to work with uncertain data, in some cases they might give imperfect insight to the decision maker. First, deterministic solutions can often be fragile—a small change in input data may result in a significant change in the recommended business decision. Second, "optionality," or solution flexibility, may have a lot of value in some business problems. Deterministic solutions give zero value to solution flexibility and thus may not appear as smart choices to a savvy business user. Finally, in many business problems, not all information arrives at the same point in time, and not all decisions have to be made at the same time and set in stone afterwards. When in doubt, calculating metrics such as the "expected value of perfect information" will help determine whether going down the stochastic route is a worthwhile investment compared with sticking to a deterministic model.
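To make that last metric tangible, the sketch below computes the expected value of perfect information (EVPI) for a small three-scenario hedging decision: the difference between the wait-and-see value (deciding after the scenario is known) and the best here-and-now decision. The payoff table and probabilities are hypothetical.

```python
# A minimal sketch of the expected value of perfect information (EVPI).
probabilities = {"low": 0.3, "mid": 0.5, "high": 0.2}

payoffs = {                      # profit of each candidate decision under each scenario
    "hedge_fully":   {"low": 80, "mid": 90, "high": 95},
    "hedge_partial": {"low": 60, "mid": 100, "high": 120},
    "stay_open":     {"low": 20, "mid": 95, "high": 160},
}

# Best decision made now, before knowing the scenario (maximize expected profit)
expected = {d: sum(probabilities[s] * v for s, v in row.items()) for d, row in payoffs.items()}
best_here_and_now = max(expected.values())

# Best achievable if the scenario were known in advance (wait-and-see value)
wait_and_see = sum(probabilities[s] * max(payoffs[d][s] for d in payoffs) for s in probabilities)

evpi = wait_and_see - best_here_and_now
print(f"Here-and-now: {best_here_and_now:.1f}, wait-and-see: {wait_and_see:.1f}, EVPI: {evpi:.1f}")
```

A large EVPI relative to the cost of the extra modeling effort suggests the stochastic formulation is worth pursuing; a small EVPI suggests a deterministic model with scenario analysis is adequate.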


EFFECTIVE IMPLEMENTATION OF QUANTITATIVE DECISION TOOLS

The question now is how to turn all of the concepts above into a working application that provides valuable insights to business users. The many elements of success can be grouped into the following categories: solving the right problem, using the right tools, approximating business problems and leveraging visualization.

In the past, it was impossible to collect, analyze and act on data centrally in a timely manner, leading to organizations that function as silos, with decision making fragmented across different business units. Today, technology can give a more holistic view of company operations and examine possible solutions and outcomes far beyond the cognitive abilities of a human being. Therefore, instead of simply automating or streamlining the current decision-making process, it makes sense to examine to what extent it can be reengineered. For example, a large European utility company used to have several layers of decision-making to schedule complicated river chains. When the central dispatching model was able to respect more complicated physical constraints, eliminating the need for elaborate manual schedule modifications, the organizational structure could be streamlined.

Using the right tools is the second element required for success. One important lesson to remember is that, at least in the first version of an optimization application, custom development of the actual calculation engines should be kept to a minimum. There is a wide variety of calculation engines, and it is extremely unlikely that a couple of months of custom development can beat these packages, even if done by extremely gifted quants.

Given the increasing availability of high-quality implementations of sophisticated algorithms, if projects are to be successful, the main emphasis of the initial stages of a decision aid application project must be business modeling. The question that should be addressed is the following: "Given the size, type and mathematical characteristics of the problem and the availability of third-party calculation engines, what is the right mathematical approximation of the business problem, representing most of the important problem characteristics, that can still be solved in a reasonable computation time?" Edison once said that "the real measure of success is the number of experiments you can run in 24 hours." Rapid prototyping is therefore crucial, and using the right tools, such as algebraic modeling languages, is essential for a project to succeed.

Rapid prototyping is not only important to find a good mathematical formulation for a given problem; it is also essential to refine the actual problem statement. Most business users cannot be expected to sign off on a particular constraint or set of constraints expressed in mathematical equations. To elicit detailed problem properties, it is usually much better to show business users increasingly complex solutions of a prototype model and validate the acceptability of these solutions. It may be hard for them to express exactly how they expect the model to work, but they are usually quite vocal about what constitutes an unacceptable solution. This interaction around the rapid prototyping process can also dispel the initial skepticism a business user might have toward the ability of a quantitative tool to satisfactorily address the business domain where her expertise lies. It will also prove that decision aid tools are not there to replace jobs but to enhance the decision-making skill of a business domain expert.

An important element of the success of a decision aid application is the proper visualization of results. Complicated mathematical models can generate a large set of numbers, and understanding these numbers quickly and finding business insights is not a trivial task. Intuitive visualizations of these results allow a wider audience in a typical company to leverage the results of quantitative tools in their daily tasks.


CURRENT AND FUTURE APPLICATIONS

There is wide variation in the quantitative sophistication of companies across regions and industries, but large power utilities seem to be among the most sophisticated users of advanced mathematical optimization tools. This is due to such factors as the need for centralized dispatching of the generation fleet, a long-time focus on efficient asset utilization and experience successfully executing technically complicated projects. The applications currently in use can be divided into strategic decision aid applications and tactical or operational applications. Strategic decision aid applications are used to support investment decisions, calculate fundamental prices or simulate policy choices. Tactical applications are used mainly in the daily operation of physical asset portfolios. For example, the most sophisticated operators of hydro storage use multi-stage stochastic optimization models to manage the water levels of their hydro assets. These models inform trading and hedging decisions, the scheduling of maintenance outages and flood prevention.

Figure 3: A successful application of multi-stage stochastic programming—computing optimal hydro policies for large storage reservoirs.

So where are the next real applications of stochastic optimization going to come from? They are likely to come mainly from tactical decision aid applications as business users move from Excel-based tools and heuristic decision-making to more sophisticated quantitative tools. The applications with the biggest ROI will appear in areas with the following characteristics:

› Physical assets with a large number of complicated physical constraints: quantitative tools are more likely to find interesting, counterintuitive solutions if the underlying problem is very complex
› Large systems/portfolios and centralized decision making: the larger the portfolio the business decision affects, the larger the savings or additional profit in absolute terms
› A certain level of sophistication in the IT landscape: high data availability and quality is a precondition to successfully deploying advanced decision aid applications


CONCLUSION

Many large oil and gas companies still have some way to go to reach the level of sophistication in tactical decision-making of the most advanced power utilities. Different parts of operations, such as production, storage, transportation, refining and trading, can be represented in a single model and optimized globally, at least for a single geographical region. Taking full advantage of decision aid tools to streamline operations can be a major lever to cut costs and restore profitability if oil and gas prices stay at their current low levels. Merchant traders that have acquired physical assets in recent years would also benefit from integrating their physical operations more tightly with trading activities through optimization models.

For more information on this topic, see the article "ANALYTICS STRATEGY: creating a roadmap for success" on page 28 of the spring 2013 issue of CROSSINGS.

THE AUTHORS

Tomas Simovic is an industry expert with a strong quantitative background in mathematical optimization and modeling of physical assets and over six years of experience designing and implementing optimization models. He has engaged with internal and external clients to deliver high-quality custom optimization applications well suited to business needs. His work includes models for short-term production planning, energy and ancillary services bid calculation, hydro portfolio optimization, oil and gas supply chain modeling and pricing of exotic storage deals.

Rashed Haq is Vice President and Lead for Analytics & Optimization for Commodities at Sapient Global Markets. Based in Houston, Rashed specializes in trading, supply logistics and risk management. He advises oil, gas and power companies to address their most complex challenges in business operations through innovative capabilities, processes and solutions. [email protected]


CHOOSING AN APPROACH TO ANALYTICS:

is a single technology platform the right investment?

There is virtually no debate about the business value of analytics. Effective analytics provide insights into what happened, why it happened and what is likely to happen in the future, as well as the factors that could help shape different outcomes. But when it comes to the "how" of analytics—including which technology platform(s) will be used to support them—there is far less clarity. In this article, Abhishek Bhattacharya explores some of the fundamental challenges of building an analytics capability, including the pros and cons of investing in an all-encompassing technology platform.

THE BUILDING BLOCKS OF SUCCESSFUL ANALYTICS

Although technology platforms for analytics are the focus of this article, it would be a disservice to readers not to acknowledge that technology represents only a piece of the picture. When it comes to building an analytics capability, the real complexity lies not in the technology but in the business case and the supporting analytics models. To be successful, every analytics initiative must start with a clear understanding of appropriate business cases. How and where will the analytics be used? What are the critical performance indicators and/or business questions that must be measured and analyzed? From there, sources of data must be identified, and data must be modeled—that is, structured appropriately for the analytics engine. Analytical models, also known as quantitative models, are a key difference between traditional descriptive reporting and more sophisticated analytics, including those that can help predict or optimize outcomes. With those models in place, the next challenge is ensuring the quality of the data that enters the analytics engine. The phrase "garbage in, garbage out" applies here. If data is of sub-par quality, the output will be too, and business stakeholders will lose trust in the analytics.

All of those steps must be executed for every new business problem, and all are technology agnostic. Together, they represent about 80 percent of the effort within any analytics initiative. The remaining 20 percent focuses on technology: choosing a platform; creating production and testing environments; and conducting performance testing and tuning. The core steps of the effort should help inform the technology decisions—and one of the most fundamental is whether to build a one-size-fits-all platform or to develop a series of platforms, each designed for a specific requirement or set of requirements.


COME ONE, COME ALL?

Opting for a single enterprise analytics platform can seem like a logical decision—a way to ensure consistency and cost-effectiveness across all of an organization's analytics initiatives. Yet an all-encompassing platform is unlikely to succeed for a number of reasons.

The first reason is the sheer diversity of analytics needs. An organization may be able to address its current range of needs, but it can be difficult, if not impossible, to anticipate all of the possible types of analytics it will need in the future. Building a platform around all of those theoretical "needs" would also come at a very high price in financial terms.

Second, a one-size-fits-all platform could cost the organization in terms of opportunity. As the user community becomes more proficient in analytics, they will ask for advanced features and capabilities. In most cases, that involves an evolution from descriptive ("what happened?") to predictive ("what is likely to happen?") and prescriptive ("how can we increase the likelihood of our desired outcomes?") analytics. Building incrementally as these needs arise is a much more palatable solution. An incremental approach also leaves open the opportunity to tap into ongoing innovations. Technology platforms represent a fast-moving, ever-changing landscape, where committing to a single stack can cost you the chance to leverage something newer and better.

Finally, the advent of cloud computing has revolutionized the way an analytics environment is set up and a technology platform is built. The cloud has significantly reduced the work so that it is now possible to have an environment up and running in days, if not hours. It affords real flexibility, with the ability to grow a platform to support additional users and new types of analytics. It also makes it easy to start small, building credibility and momentum over time.

For all of those reasons, building an enterprise platform for analytics is probably not well advised. However, the opposite approach—building each component individually and then harmonizing those components—

can be equally expensive and ineffective. Many of those high costs are spent in integration, data movement and harmonizing the components.

STRIKING THE RIGHT BALANCE

If neither an all-encompassing platform nor a conglomeration of platforms is the right approach, how should organizations proceed? The key is to strike a balance between building everything and building the bare minimum. Ideally, such an approach would yield an all-encompassing architecture (see Figure 1) that does the following:

› Embraces layers. Rather than focusing on the platform specifically, think in terms of the layers of any analytics capability. An effective architecture will include layers for data, data ingestion and business intelligence. Compared to traditional techniques, this approach affords much more flexibility over time. In particular, when traditional star schemas are used, everything is driven out of the schema—making it difficult to evolve the platform as analytics needs change.
› Offers components within each layer. For greater effectiveness and agility, each layer should be built with modular components. The data layer must provide the ability to manage structured, analytical, unstructured and streaming data. The data ingestion layer should have modules for master data management, extract, transform and load (ETL), and data quality management. The BI layer must offer self-service, various types of analytics, visualization capabilities and support for multiple device types (see the configuration sketch after this list).
› Is designed for evolution. It is important to build an architecture that can easily accommodate change. As part of that, work to build an understanding of the dimensions of potential change (business problems and types of quantitative models, for instance). By understanding the aspects likely to change, you can identify appropriate components and technologies at each layer. Fortunately, modern technologies—from columnar databases to the Hadoop open-source software framework—are inherently flexible and do not force every part of the solution to be tied to a specific quantitative model or schema.
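One hedged way to express such a layered, component-based architecture is as declarative configuration, so that a component can be swapped within one layer without disturbing the others. The layer and component names below mirror the architecture in Figure 1, but every concrete technology choice is a hypothetical placeholder.

```python
# A minimal sketch of a layered analytics architecture expressed as configuration.
# Each layer lists interchangeable components; all concrete choices are hypothetical.
architecture = {
    "business_intelligence": {
        "self_service": "notebook-based exploration",
        "descriptive": "standard reporting",
        "predictive": "regression and time-series models",
        "visualization": "dashboarding tool",
        "devices": ["desktop", "tablet", "mobile"],
    },
    "data_ingestion_and_enrichment": {
        "etl_elt": "batch pipeline framework",
        "data_quality": "rule-based validation service",
        "master_data_management": "reference data hub",
    },
    "data_management_and_processing": {
        "relational": "row-store database",
        "analytical": "columnar database",
        "non_relational": "document store",
        "streaming": "message bus",
        "entitlements": "role-based access control",
    },
    "data_sources": ["OLTP systems", "line-of-business apps", "devices", "web", "documents"],
}

def swap_component(layer: str, component: str, new_choice: str) -> None:
    """Replace one component in one layer, leaving the rest of the architecture untouched."""
    architecture[layer][component] = new_choice

# Example of designed-for-evolution: upgrade only the analytical store.
swap_component("data_management_and_processing", "analytical", "cloud data warehouse")
print(architecture["data_management_and_processing"]["analytical"])
```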


Figure 1: All-encompassing analytics architecture, with layers for business intelligence and analytics (self-service, descriptive, predictive, visualization, devices), data ingestion and enrichment (ETL/ELT, data quality, master data management), data management and processing (relational, non-relational, analytical, streaming, entitlements) and data sources (OLTP, LOB, structured data, devices, web, documents).

THINK BIG, START SMALL

Every successful analytics initiative will be built upon a sound framework that includes identifying business value, building models, sourcing data and, ultimately, driving adoption. At every step, technology is a critical enabler, but it should not be the central focus. Nor should it be a barrier. With many analytics technologies now available on the cloud, it is possible to get started with very little upfront capital cost. Using Amazon Web Services, Azure and other cloud services, an organization can begin to build an all-encompassing architecture—starting small, building something and showing value to the business before making substantial investments.

THE AUTHOR

Abhishek Bhattacharya is a Vice President of Technology based in Noida, India and leads the Technology Practice at Sapient Global Markets. Abhishek has spent the last 15 years architecting and designing technology solutions for companies around the world. His team is focused on developing market-leading solutions and frameworks for financial and energy services companies. [email protected]



ABOUT SAPIENT GLOBAL MARKETS

Sapient Global Markets, a part of Publicis.Sapient, is a leading provider of services to today's evolving financial and commodity markets. We provide a full range of capabilities to help our clients grow and enhance their businesses, create robust and transparent infrastructure, manage operating costs, and foster innovation throughout their organizations. We offer services across Advisory, Analytics, Technology, and Process, as well as unique methodologies in program management, technology development, and process outsourcing. Sapient Global Markets operates in key financial and commodity centers worldwide, including Boston, Calgary, Chicago, Dusseldorf, Frankfurt, Houston, London, Los Angeles, Milan, New York, Singapore, Washington D.C. and Zürich, as well as in large technology development and operations outsourcing centers in Bangalore, Delhi, and Noida, India. For more information, visit sapientglobalmarkets.com.

© 2015 Sapient Corporation. Trademark Information: Sapient and the Sapient logo are trademarks or registered trademarks of Sapient Corporation or its subsidiaries in the U.S. and other countries. All other trade names are trademarks or registered trademarks of their respective holders. Sapient is not regulated by any legal, compliance or financial regulatory authority or body. You remain solely responsible for obtaining independent legal, compliance and financial advice in respect of the Services.


GLOBAL OFFICES

Headquarters
Boston: 131 Dartmouth Street, 3rd Floor, Boston, MA 02116, Tel: +1 (617) 621 0200

Bangalore: Salarpuria GR Tech Park, 6th Floor, "VAYU" Block #137, Bengaluru 560066, Karnataka, India, Tel: +91 (080) 410 47 000
Calgary: 888 3rd Street SW, Suite 1000, Calgary, Alberta T2P 5C5, Canada, Tel: +1 (403) 444 5574
Chicago: 30 West Monroe, 12th Floor, Chicago, IL 60603, Tel: +1 (312) 458 1800
Delhi: Unitech Infospace, Ground Floor, Tower A, Building 2, Sector 21, Old Delhi - Gurgaon Road, Dundahera, Gurgaon 122016, Haryana, India, Tel: +91 (124) 499 6000
Düsseldorf: Speditionstrasse 21, 40221 Düsseldorf, Germany, Tel: +49 (0) 211 540 34 0
Frankfurt: Skyper Villa, Taunusanlage 1, 60329 Frankfurt, Germany, Tel: +49 (0)69 505060594
Geneva: Succursale Genève, c/o Florence Thiébaud, avocate, rue du Cendrier 15, 1201 Geneva, Switzerland, Tel: +41 (0) 58 206 06 00
Houston: Heritage Plaza, 1111 Bagby Street, Suite 1950, Houston, TX 77002, Tel: +1 (713) 493 6880
London: Eden House, 8 Spital Square, London, E1 6DU, United Kingdom, Tel: +44 (0) 207 786 4500
Los Angeles: 1601 Cloverfield Blvd., Suite 400 South, Santa Monica, CA 90404, Tel: +1 (310) 264 6900
Milan: Sapient Italy S.r.l, Viale Bianca Maria 23, 20122 Milan, Italy, Tel: +39-02-00681538
Mumbai: Sapient Consulting Pvt. Ltd, R-Tech Park, Goregaon (E), 13th Floor, Building 2, Off Western Express Highway, Mumbai, Maharashtra - 400063, India, Tel: +91-22-44764567
Munich: Arnulfstrasse 60, 80335 München, Germany, Tel: +49 (0) 89 552 987 0
Noida (NCR of Delhi): "Oxygen", Tower C, Ground - 3rd floor, Plot No. 7, Sector 144 Expressway, Noida 201304, Uttar Pradesh, India, Tel: +91 (120) 479 5000
New York: 40 Fulton Street, 22nd Floor, New York, NY 10038, Tel: +1 (212) 206 1005
Singapore: 158 Cecil Street, #03-01, Singapore 069545, Tel: +65 6671 4933
Toronto: 129 Spadina Avenue, Suite 500, Toronto, Ontario M5V 2L3, Canada, Tel: +1 (416) 645 1500
Washington DC: 1515 North Courthouse Road, 4th Floor, Arlington, VA 22201-2909, Tel: +1 (703) 908 2400
Zürich: Seefeldstrasse 35, 8008 Zürich, Switzerland, Tel: +41 (58) 206 06 00