Past Experiences and Future Challenges using Automatic Performance Modelling to Complement Testing

Paul Brebner, CTO, a NICTA/Data61/CSIRO spin-out company, 16/03/2016

© Performance Assurance Pty Ltd

1

Performance modelling background
• My background is analysis of distributed systems, middleware, GRID, architecture, performance, benchmarking (e.g. SPECjAppServer), sensor web performance, etc.
• Since 2007: a NICTA project to develop tools that help (mostly government) systems of systems perform better in advance
• Service Oriented Performance Modelling tool
  • Model driven (SOA performance meta-model)
  • GUI
  • Simulation for metric prediction
  • Enables modelling at the level of workloads, composite and simple services, and servers
• Used during the early, middle, and late lifecycle for many real systems


Performance modelling background
• BUT manual model building (structure, parameterisation, calibration) is:
  • Time consuming
  • Expensive
  • Error prone
  • Limited to the model complexity that can be built manually
  • Not easily repeatable or maintainable
  • Not accurate enough for some problems (needs performance data of high quality and quantity)
  • Not fast enough for agile development
• For the last 3 years we have been a start-up company and have to make $$$$$$
• Most customers have APM products
• All customers want to increase the speed and number of releases and reduce the time and cost of testing
• Solution: automatic model building from APM data
  • Cheaper, faster, and more accurate
  • Solves new problems, e.g. DevOps


Automatic performance modelling from APM data
• Only use available APM data
• Use automatable (or potentially automatable) ways of getting the data from the APM into our Service Oriented Performance Modelling (SOPM) modelling/simulation tool (SaaS)
• Automatically build and parameterise the performance model from the APM data
• Multiple model types with different trade-offs: accuracy for capacity/response times vs model complexity and the ability to change model aspects
• Currently the different model types are produced as part of the APM -> modelling-tool transformation phase
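As a sketch of the kind of transformation this involves, the snippet below averages per-service execution times from a PurePath-like XML export into simple per-service demands. The XML layout and field names are invented for illustration; they are not the real Dynatrace schema.

```python
# Sketch: deriving per-service demands from APM trace data.
# The XML layout below is hypothetical, NOT the real Dynatrace PurePath schema.
import xml.etree.ElementTree as ET
from collections import defaultdict

SAMPLE = """
<purepaths>
  <purepath transaction="checkout">
    <node service="web" exec_ms="12"/>
    <node service="risk" exec_ms="30"/>
  </purepath>
  <purepath transaction="checkout">
    <node service="web" exec_ms="8"/>
    <node service="risk" exec_ms="34"/>
  </purepath>
</purepaths>
"""

def build_demands(xml_text):
    """Average per-service execution time (ms) across traced transactions."""
    totals, counts = defaultdict(float), defaultdict(int)
    for node in ET.fromstring(xml_text).iter("node"):
        svc = node.get("service")
        totals[svc] += float(node.get("exec_ms"))
        counts[svc] += 1
    return {svc: totals[svc] / counts[svc] for svc in totals}

demands = build_demands(SAMPLE)
```

These averaged demands are the raw service parameters a capacity model needs; the real converter also has to recover model structure (call trees, workloads) from the traces.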


[Figure: automatic model-building pipeline. Dynatrace session files (SF) and PurePath XML are obtained from the Dynatrace Server REST API and the PurePath dashboard (steps 1-2); a Converter transforms the PurePath XML into a Model XML file (steps 3-4); the model is then loaded into the Modelling SaaS and viewed in the browser (step 5). The Dynatrace transaction flow dashboard is also shown as a source.]


Produces: Simple capacity model


Dynatrace PurePath Dashboard (detailed per transaction call tree)


Produces: Transactional model (portion)


Experiences with three projects
• Project 1
  • P2V migration
• Project 2
  • C2V test -> prod
• Project 3
  • DevOps
  • The focus of this talk; come to the main ICPE talk for the others


Project 3
• DevOps
  • Focus on response-time SLAs
  • Deployment/resources
  • Faster cycle time
  • More releases
  • Less and cheaper testing
• Challenge
  • Proprietary in-house APM tool
  • "Profile point" times only
  • Required pre-processing (using Hive)


Focus
• Risk service
  • Heavily used
  • Multiple services
  • New services added all the time
  • Services had different time and memory profiles
  • Would a new service break the SLA?
• Baseline model accurate to within 10% on response time
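A minimal way to frame the "would a new service break the SLA?" question is a single-queue approximation: the extra service adds demand, demand drives utilisation, and utilisation drives response time. The M/M/1 assumption and all numbers below are illustrative, not from the project.

```python
# Sketch: does adding a new service's demand push response time past the SLA?
# M/M/1 approximation: R = S / (1 - rho), rho = arrival_rate * service_time.
def mm1_response(service_time_s, arrival_rate_per_s):
    rho = arrival_rate_per_s * service_time_s
    if rho >= 1.0:
        return float("inf")  # server saturated, unbounded queueing
    return service_time_s / (1.0 - rho)

baseline_s = 0.040       # illustrative aggregate demand per request (s)
new_service_s = 0.020    # illustrative extra demand from a proposed service
rate = 15.0              # illustrative arrival rate (requests/s)
sla_s = 0.200            # illustrative response-time SLA (s)

before = mm1_response(baseline_s, rate)
after = mm1_response(baseline_s + new_service_s, rate)
```

Note the nonlinearity: a 50% increase in demand here multiplies the response time several times over, which is exactly why a calibrated model beats back-of-envelope scaling.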


Alternatives modelled
• Changing the transaction mix
• Changing arrival rates
• Making some services asynchronous and concurrent
• Adding new risk assessment services
• More complex: optimising the deployment of services to multiple servers, taking into account memory and CPU usage and response time
  • A type of box/bin packing problem
  • 4 services out of 30 used 50% of the CPU
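The deployment question can be sketched as a two-resource bin-packing heuristic, e.g. first-fit decreasing on the dominant resource. The services, their CPU/memory figures, and the server capacities below are made up for illustration.

```python
# Sketch: first-fit-decreasing packing of services onto servers, subject to
# per-server CPU and memory capacities. All figures are invented.
def pack(services, cpu_cap, mem_cap):
    """services: {name: (cpu, mem)}; returns a list of server assignments."""
    servers = []  # each: {"names": [...], "cpu": used, "mem": used}
    # Heaviest-first by CPU, the dominant resource in this example.
    for name, (cpu, mem) in sorted(services.items(), key=lambda kv: -kv[1][0]):
        for s in servers:
            if s["cpu"] + cpu <= cpu_cap and s["mem"] + mem <= mem_cap:
                s["names"].append(name)
                s["cpu"] += cpu
                s["mem"] += mem
                break
        else:
            servers.append({"names": [name], "cpu": cpu, "mem": mem})
    return servers

services = {"riskA": (40, 2), "riskB": (35, 1), "svcC": (10, 4),
            "svcD": (10, 1), "svcE": (5, 2)}
layout = pack(services, cpu_cap=60, mem_cap=4)
```

A heuristic like this gives a feasible layout quickly; checking whether that layout also meets the response-time SLA is then a job for the simulation model rather than the packer.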


Challenges
• Pre-processing the APM data "profile points"
• APM data was sampled at low load compared with the target load
  • Used calibration from load tests on pre-production to improve accuracy
• No CPU time breakdown in the APM data
  • But GC had a profile point (and was significant)
• Transaction types not in the APM data
  • Had to infer them; inference gave either too few or too many



DevOps
• Goal is to shift left and shift right
• Shift right
  • Build and continuously maintain a performance model of production that accurately models response times, scalability, capacity, and resource requirements under target production loads
• Shift left
  • Calibrate the production performance model for development
  • Enable developers to make code changes and explore their impact, using unit tests and development APM to incrementally rebuild performance models
    • To understand the likely performance and scalability impact
  • Speeds up the development cycle, as developers no longer have to wait (weeks) for performance testing


Existing Dev, Test, Prod lifecycle
[Figure: Dev -> deploy to test -> Test -> deploy to prod -> Prod, with late feedback from both Test and Prod. Delays in feedback: each iteration takes weeks, the test environment is a bottleneck, and the environments differ.]

DevOps + APM
[Figure: Dev -> deploy to test -> Test -> deploy to prod -> Prod, with each stage instrumented by APM. Feedback from Test is earlier (Prod feedback is still late) but not completely accurate: the environments differ, so the APM data differs across the lifecycle.]

DevOps + APM + Modelling
[Figure: the same lifecycle with APM at every stage, plus modelling. A baseline model is built from Prod APM data and calibrated for Dev; the Dev model is then incrementally updated with dev changes, giving early feedback, earlier and more accurate performance predictions, and decreased cycle time.]


Benefits
• Changes to code in Dev
  • Unit test
  • APM performance data
  • Incrementally update the calibrated performance model
  • Predict the performance and scalability impact for the Prod environment
• Cheaper and faster than waiting for testing and deployment to Prod
• Sensitivity analysis could determine the areas of greatest sensitivity to changes, and thresholds
  • These would be subject to more rigorous modelling and testing
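One-at-a-time sensitivity analysis of this kind might look like the sketch below: bump each modelled service time by 10% and report the relative change in an M/M/1-style response time. The services, rates, and the single-queue model are all illustrative assumptions.

```python
# Sketch: one-at-a-time sensitivity of end-to-end response time to each
# modelled service time. All figures and the M/M/1 form are illustrative.
def response(demands_ms, rate_per_ms):
    total = sum(demands_ms.values())
    rho = rate_per_ms * total
    return float("inf") if rho >= 1 else total / (1 - rho)

def sensitivity(demands_ms, rate_per_ms, bump=0.10):
    """Relative response-time change from bumping each service's time by `bump`."""
    base = response(demands_ms, rate_per_ms)
    out = {}
    for svc in demands_ms:
        perturbed = dict(demands_ms, **{svc: demands_ms[svc] * (1 + bump)})
        out[svc] = (response(perturbed, rate_per_ms) - base) / base
    return out

sens = sensitivity({"web": 10.0, "risk": 30.0}, rate_per_ms=0.02)
```

Services with the largest sensitivities are the ones whose code changes would warrant the more rigorous modelling and testing mentioned above.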


DevOps + APM + Modelling: in reality there are lots of devs and different environments
[Figure: many Dev instances feed into Test -> deploy to prod -> Prod. Each Dev has its own APM and its own Dev model; a baseline model is built from the Test and Prod APM data and shared across the Dev models.]


Challenges
• Calibration of performance models for use in Dev from Test and Prod
• Once predictions are made, how do we test whether they are supported by the APM data? i.e. if the null hypothesis is "changes in dev will have no impact on prod", how do we determine whether the evidence supports it?
• Is it scalable?
  • Lots of developers, each changing a subset of the code
  • Concurrent and compounding changes would need a centralised model with all changes incorporated
  • What about changes to infrastructure code that could impact everything?
• How to support this in Dev APM and modelling tools
• ROI
  • Depending on the cost of testing, the cost of initially setting up the APM and modelling tools plus incremental costs, the number of tests and modelling predictions per cycle, and the value of reduced cycle times and earlier performance predictions, ROI may occur earlier, later, or never...
  • Example
    • Assumes the model is calibrated once per release from performance APM data
    • Assumes one actual load test per release
    • What is the trade-off between multiple tests per release vs one test plus multiple modelling predictions?


Costs: modelling cheaper after 3 changes
[Figure: cost ($0-$45,000) of the LoadTestOnly and hybrid (Modelling) approaches against the number of changes tested/modelled (0-12). The modelling approach becomes cheaper after 3 changes.]

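The break-even in the cost comparison can be reproduced with simple arithmetic. All dollar figures below are invented, chosen only so that the crossover lands at three changes, in line with the slide; the real costs would be project-specific.

```python
# Sketch: load-test-only vs hybrid (one calibration test + model predictions)
# cost per release. Dollar figures are invented for illustration.
def load_test_only_cost(n_changes, cost_per_test=4000):
    # Every change gets its own load test.
    return n_changes * cost_per_test

def hybrid_cost(n_changes, setup=6000, cost_per_test=4000,
                cost_per_prediction=500):
    # One-off tooling setup, one calibration load test per release,
    # then a cheap model prediction per change.
    return setup + cost_per_test + n_changes * cost_per_prediction

breakeven = next(n for n in range(1, 100)
                 if hybrid_cost(n) < load_test_only_cost(n))
```

The structure, not the numbers, is the point: a fixed setup cost is amortised across changes, so modelling wins whenever enough changes are evaluated per calibration.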

Speed: average hours to test/model a number of code changes (per model calibration)
[Figure: average time (0-90 hours) to test/model changes against the number of changes per calibrated model (0-12), comparing LoadTestAvgHourPerChange with ModellingAvgHoursPerChange.]


Send us your data
• Free trial of simple Dynatrace capacity models
• http://www.performance-assurance.com.au/send-us-your-data/
• http://www.performance-assurance.com.au/introduction-to-automatic-model-building/
• Send us a sample Dynatrace session file and we'll send you a link to a demo capacity model
• Particularly interested in trending technologies and use cases, e.g. microservices, containers, Big Data, IoT, etc.
• Free personal Dynatrace license from: http://bit.ly/dtpersonal
