Table of Contents   

ABSTRACT
I. INTRODUCTION
II. RISK-METER: A BRIEF SUMMARY
III. CLOUD RISK-METER (OR RM) ESSENTIALS
IV. DISCUSSIONS AND CONCLUSION
V. REFERENCES
APPENDIX A: RESPONDENT RESIDUAL RISK RESULTS TABLE
APPENDIX B: RESPONDENTS' RESIDUAL RISK ASSESSMENT & OPTIMIZATION SCREENSHOTS
APPENDIX C: CLOUD RISK SURVEY (XML FORMAT)
ABOUT THE AUTHORS





CLOUD RISK ASSESSMENT & MANAGEMENT SURVEY TOOL FOR USERS AND PROVIDERS

M. Sahinoglu*, S. Morton, C. Vadla, K. Medasani
Auburn University at Montgomery (AUM)
[email protected], [email protected], [email protected], [email protected]
* Corresponding Author

ABSTRACT

Cloud computing providers endeavor to deliver an optimal service with minimal disruptions and problems. To ensure that, they need to know what their customers experience so that they can counter the vulnerabilities and risk factors that threaten security, usability, and overall satisfaction. To that end, Cloud computing providers need to conduct customer satisfaction evaluations to determine their customers' experience with the service. The Cloud Risk Meter is a software tool, built on an algorithm developed by the principal author, that facilitates the assessment and management of Cloud risk. Using game-theoretic and statistically driven methodologies, it provides an objective, quantitative risk assessment and, unlike any other tool available today, guidance for allocating resources to bring an undesirable risk down to a user-determined "tolerable level". The Cloud Risk Meter thus provides a critical assessment and management tool for cloud computing providers as well as their customers. As such, those in industry and their customers will be greatly aided in their efforts to achieve greater Cloud security by this rational and objective tool for assessing and mitigating risk.

Keywords: Cloud Computing, Risk Assessment and Management, Risk Meter Software Tool, Vulnerability, Threat, Countermeasure, Customer Satisfaction





I. INTRODUCTION

Even with all of the data centers' assurances of complete security, it is still less safe to host important data on a virtual CLOUD server than on a dedicated physical machine (Anthes, 2010). Some strong voices include the following: "Imagine what would happen if the hackers gained access to thousands of people's data. It would be nothing less than a catastrophe (especially for businesses), and the data center would pretty much have to stop all or some outgoing data while they solve the problem, which means downtime for not only one, but a lot of clients and their sites and data" (Greengard, 2010). Boland (2011) studies the private Cloud. See Srinivasan et al. (2011) for more details.

The CLOUD Risk-Meter is an automated tool for gathering information and for quantifying, assessing, and cost-effectively managing risk. It further provides objective, dollar-based mitigation advice, allowing users to see where their funds will be best allocated to lower risk to an acceptable level. CLOUD computing risk will be examined in the context of vulnerability categories (Grobauer et al., 2011), the threats presented, and specific countermeasures. Threat countermeasures are used to mitigate risk and lower it to a desirable level. Using game-theoretic optimization techniques, users will see how their budgetary resources can best be spent toward an optimal allocation plan, so as to lower the undesirable risk to a more tolerable level (Sahinoglu, 2011). Before delving into the CLOUD RM, it is timely to briefly summarize the essentials of the Security (or Risk) Meter methodology (Sahinoglu, 2005, 2007, 2008, 2009, 2010, 2016). Innovative quantitative risk measurements are much needed to objectively compare risk alternatives and manage risks, as opposed to conventional guesswork with hand calculators.





II. RISK-METER: A BRIEF SUMMARY

The Security Meter (SM) or Risk-Meter (RM) design provides the quantitative tool that is imperative in the security world. For a practical and accurate statistical design, security breaches are recorded so as to estimate the model's input probabilities using the risk equations developed. Undesirable threats (with and without bluffs) that take advantage of hardware and software vulnerabilities can break down availability, integrity, confidentiality, nonrepudiation, and other aspects of software quality such as authentication, privacy, and encryption. Figure 1 below illustrates the constants in the SM (or RM) model: the utility cost or dollar asset, and a criticality constant. The probabilistic inputs are vulnerability, threat, and lack of countermeasure, all valued between 0 and 1 (Sahinoglu, 2005). See Benini et al. (2008) for more resources. The SM is described in the following subsections.

Figure 1: Security Meter Model of probabilistic, deterministic inputs, calculated outputs.





Probabilistic Tree Diagram

Given that a simple sample system or component has two or more outcomes for each risk factor (vulnerability, threat, and countermeasure), the following probabilistic framework holds within the tree diagram structure in Figure 2 on the next page: the sums Σvi = 1 and Σtij = 1 for each i, and LCM + CM = 1 for each ij. Using the probabilistic inputs, we get the residual risk = vulnerability x threat x lack of countermeasure. That is, if we add all the residual risks due to lack of countermeasures, we can calculate the overall residual risk. We apply the criticality factor to the residual risk to calculate the final risk: final risk = residual risk x criticality. Then we apply the capital investment cost to the final risk to determine the expected cost of loss, ECL ($) = final risk x capital cost, which helps to budget for avoiding (before the attack) or repairing (after the attack) the entire risk.

Algorithmic Calculations

Figure 1 leads to the example probabilistic tree diagram of Figure 2 for performing the calculations. For example, out of 100 malware attempts, the number of penetrating attacks not prevented gives the estimate of the percentage of LCM. One can then trace the root cause of the threat level retrospectively in the tree diagram. A cyber-attack example: 1) A hacking attack occurs as a threat. 2) The firewall software does not detect it. 3) As a result of this attack, whose root threat is known, the "network" vulnerability is exploited. This illustrates the "line of attack" on the tree diagram, as in Figure 2. Out of those attacks not prevented by a certain countermeasure (CM), how many were caused by threat 1 or 2, etc., to a particular vulnerability 1 or 2, etc.? We calculate, as in Figure 2, Residual Risk (RR) = Vulnerability x Threat x LCM for each branch, and then sum the RRs to obtain the total residual risk (TRR).
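The arithmetic above can be sketched in a few lines. The branch probabilities below are illustrative placeholders (chosen only to satisfy the Σvi = 1, Σtij = 1, and LCM + CM = 1 constraints), not values from the paper's case studies:

```python
# Risk-Meter arithmetic sketch: residual risk per branch, total residual risk,
# final risk, and expected cost of loss (ECL). All inputs are hypothetical.
branches = [
    # (vulnerability, threat, lack_of_countermeasure), one tuple per tree branch
    (0.4, 0.6, 0.3),
    (0.4, 0.4, 0.5),
    (0.6, 0.7, 0.2),
    (0.6, 0.3, 0.4),
]

# Residual Risk (RR) = vulnerability x threat x LCM, summed over branches (TRR).
total_residual_risk = sum(v * t * lcm for v, t, lcm in branches)

criticality = 0.5      # deterministic criticality constant, between 0 and 1
capital_cost = 10_000  # dollar asset value (utility cost)

final_risk = total_residual_risk * criticality     # final risk = TRR x criticality
expected_cost_of_loss = final_risk * capital_cost  # ECL ($) = final risk x capital cost
```

With these placeholder inputs, the TRR works out to 0.308, the final risk to 0.154, and the ECL to $1,540.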
Let’s assume that we have the input risk tree diagram in Figure 3 and input risk probability chart in Table 1 (following pages) for a sample health care study where only the highlighted boxes in the tree diagram of Figure 3 are selected for a case study.





Figure 2: General tree diagram (V-branches, T-twigs, LCM-limbs) used for the RM model.





[Figure 3 tree content: the "Ambulatory Healthcare Cybersecurity" root branches into the vulnerabilities HC Clinician Settings, Outpatient Facilities, Urgent Care/Ambulatory Surgery Centers, Local Health Centers, Pharmacies, and Homes/Residential Facilities, each with threats drawn from Patient Records, Internet, Insurance Records, Staff HIPAA, Pharmacy Records, Customer Fraud, Doctor's Records, and Prescriptions.]

Figure 3: Health Care Related Security Meter’s Tree Diagram with highlighted selections.





Table 1: Vulnerability-Threat-Countermeasure Input Risk Data for Figures 3 and 4 (Healthcare Tree Diagram and RM).





Risk Management Clarifications for Table 1 and Figures 3, 4

Using the input Table 1 and the results from Figures 2 and 3, and in order to mitigate the base risk from 26% to 10%, we implement the four first-prioritized recommended actions:
1) Increase the CM capacity for the vulnerability "Outpatient Facilities" and its threat "Patient Records" from the current 70% to 100%.
2) Increase the CM capacity for the vulnerability "Urgent Care/Ambulatory Surgery Centers" and its threat "Patient Records" from the current 96% to 100%.
3) Increase the CM capacity for the vulnerability "Local Health Centers" and its threat "Patient Records" from the current 72% to 98.54%.
4) Increase the CM capacity for the vulnerability "Local Health Centers" and its threat "Internet" from the current 70% to 99.99%.
In taking these actions, as in Figure 4 on the following page, a total of $510 is dispensed (< $513.30 as advised), each action within the limits of the optimal costs annotated and staying below the breakeven cost of $5.67 per % improvement. The next step proceeds with optimization to the next desirable percentage once these acquisitions or services are provided, such as mitigating from 10% to 5% if the budget allows. The RM tool can thus serve as an auditing expert system to circumvent criticisms of budgeting plans to manage the risk.
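The budget check described above can be sketched as follows. The four CM improvement percentages come from the listed actions; the per-action dollar costs are hypothetical stand-ins for the optimal costs annotated in Figure 4:

```python
# Sanity-check sketch for the cost-optimal mitigation step. Only the $513.30
# advised budget, the $5.67/% breakeven cost, and the four CM improvements
# (70->100, 96->100, 72->98.54, 70->99.99) come from the text; costs are assumed.
advised_budget = 513.30
breakeven_cost_per_pct = 5.67

# (CM improvement in percentage points, assumed optimal cost in $) per action
actions = [(30.00, 168.0), (4.00, 22.0), (26.54, 150.0), (29.99, 170.0)]

total_improvement = sum(pct for pct, _ in actions)  # 90.53 percentage points
total_cost = sum(cost for _, cost in actions)       # $510 dispensed

# The advised budget equals breakeven cost x total improvement (5.67 x 90.53).
assert abs(breakeven_cost_per_pct * total_improvement - advised_budget) < 0.01
# Stay within the advised budget, and below breakeven on every single action.
assert total_cost <= advised_budget
assert all(cost / pct < breakeven_cost_per_pct for pct, cost in actions)
```

The check makes explicit why $510 is acceptable: it is under the $513.30 advised ceiling, and no individual action exceeds $5.67 per percentage point of improvement.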





Figure 4: Example of a Game-theoretic Cost-Optimal Risk Management for input Table 1 and tree diagram of Figure 3.





III. CLOUD RISK-METER (OR RM) ESSENTIALS

The CLOUD RM has two versions embedded. The CLOUD RM provider version is geared toward service providers and corporate users. The CLOUD RM client version is geared toward individual and smaller corporate end users, for whom a new vulnerability titled Client Perception (PR) & Transparency is included. Akin to Figure 3, let's begin with a relevant comprehensive tree diagram, as in Figure 5 on the following page. A thorough list of vulnerabilities for the CLOUD RM with their related threats (Sahinoglu and Morton, 2011):

Accessibility & Privacy. Threats:
- Insufficient Network-based Controls
- Insider/Outsider Intrusion
- Poor Key Management & Inadequate Cryptography
- Lack of Availability

Software Capacity. Threats:
- Software Incompatibility
- Unsecure Code
- Lack of User-Friendly Software
- Inadequate CLOUD Applications

Internet Protocols. Threats:
- Web Applications & Services
- Lack of Security & Privacy
- Virtualization
- Inadequate Cryptography

Server Capacity & Scalability. Threats:
- Lack of Sufficient Hardware
- Lack of Existing Hardware Scalability





Figure 5: Tree Diagram for CLOUD Risk-Meter: Comprehensive (both Client and Host inclusive). 



- Server Farm Incapacity to Meet Customer Demand
- Incorrect Configuration

Physical Infrastructure. Threats:
- Power Outages
- Unreliable Network Connections
- Inadequate Facilities
- Inadequate Repair Crews

Data & Disaster Recovery. Threats:
- Lack of a Contingency Plan
- Lack of Multiple Sites
- Inadequate Software & Hardware
- Recovery Time

Managerial Quality. Threats:
- Lack of Quality Crisis Response Personnel
- Inadequate Technical Education
- Insufficient Load Demand Management
- Lack of Service Monitoring

Macro-Economic & Cost Factors. Threats:
- Inadequate Payment Plans
- Low Growth Rates
- High Interest Rates
- Adverse Regulatory Environment

Client Perceptions (PR) & Transparency. Threats:
- Lack of PR Promotion
- Adverse Company News
- Unresponsiveness to Client Complaints
- Lack of Openness
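For implementation purposes, the taxonomy above maps naturally onto a dictionary keyed by vulnerability. A minimal sketch, showing only three of the vulnerabilities, purely as an illustrative data structure and not the RM tool's actual code:

```python
# Partial CLOUD RM vulnerability-to-threats mapping, transcribed from the list
# above; the remaining vulnerabilities would be added the same way.
cloud_rm_tree = {
    "Accessibility & Privacy": [
        "Insufficient Network-based Controls",
        "Insider/Outsider Intrusion",
        "Poor Key Management & Inadequate Cryptography",
        "Lack of Availability",
    ],
    "Internet Protocols": [
        "Web Applications & Services",
        "Lack of Security & Privacy",
        "Virtualization",
        "Inadequate Cryptography",
    ],
    "Client Perceptions (PR) & Transparency": [
        "Lack of PR Promotion",
        "Adverse Company News",
        "Unresponsiveness to Client Complaints",
        "Lack of Openness",
    ],
}

# Each survey question then hangs off one (vulnerability, threat) pair.
n_threats = sum(len(threats) for threats in cloud_rm_tree.values())
```

Keeping the taxonomy in one place like this makes it straightforward to attach the Appendix C question banks to their (vulnerability, threat) pairs.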





Nature of CLOUD Risk Assessment Questions

Questions are designed to elicit the user's response regarding the perceived risk from particular threats, and the countermeasures users may employ to counteract those threats. For example, regarding the Internet Protocols vulnerability, the questions on Virtualization include both threat and countermeasure questions. Threat questions include:
- Do your provider's virtualization appliances have packet inspection settings set on default?
- Is escape to the hypervisor likely in the case of a breach of the virtualization platform?
- Does your provider use Microsoft's Virtual PC hypervisor?
- Does your provider fail to scan the correct customer system?
- Does your provider fail to monitor its virtual machines?
Countermeasure questions include:
- Did the provider's virtualization appliances inspect all packets?
- Did the provider extend their vulnerability and configuration management process to the virtualization platform?
- Did the provider patch the vulnerability or switch to another platform?
- Did the provider read in current asset or deployment information from the CLOUD and then dynamically update the IP address information before scans commence?
- Did the provider utilize Network Access Control-based enforcement for continuous monitoring of its virtual machine population and for virtual machine sprawl prevention?

Risk Calculation and Mitigation

Essentially, users respond yes or no to these questions. These responses are used to calculate the residual risk. Using a game-theoretic optimization approach, the calculated risk index is then used to generate a cost-optimal plan to lower the risk from unwanted or unacceptable levels to tolerable ones. Mitigation advice is generated to show the user in which areas the risk can be reduced to optimized or desired levels, such as from 52.83% to 20.00% in the Median Respondent screenshot (displaying threat, countermeasure, and residual risk indices; optimization options; and risk mitigation advice). For this study, a random sample of 31 respondents was taken; their residual risk results are tabulated in Appendix A, along with their individual residual risk assessment screenshots in Appendix B at the end of this paper. Also see Appendix C for the Cloud Risk Survey in XML format. Respondents' familiarity with cloud security risk comprised both personal and corporate experience.

Risk Management Clarifications for Figures 5 and 6

Using the RM results for the risk assessment step from Figure 6 on the following page, and in order to mitigate the base risk from 52.83% down to 20.00%, we implement the three prioritized RM-recommended actions:
1) Increase the CM capacity for the vulnerability "Internet Protocols" and its threat "Lack of Security & Privacy" from the current 50.00% to 99.99%, for an improvement of 49.99%.
2) Increase the CM capacity for the vulnerability "Managerial Quality" and its threat "Lack of Quality Crisis Response Personnel" from the current 45.00% to 100.00%, for an improvement of 55.00%.
3) Increase the CM capacity for the vulnerability "Managerial Quality" and its threat "Lack of Service Monitoring" from the current 40.00% to 58.91%, for an improvement of 18.91%.
Additional steps proceed with optimization to the next desirable percentage once these acquisitions or services are provided, such as mitigating to 10.00% from 20.00% if the budget still exists. See Figure 6 for the mitigation and optimization step.
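One plausible way the yes/no responses could feed the risk arithmetic is sketched below. The answer vectors and the equal vulnerability weighting are illustrative assumptions, not the RM tool's actual scoring code:

```python
# Sketch: turn yes/no survey answers for one (vulnerability, threat) pair into
# the probabilistic inputs of the Risk-Meter model. All answers are hypothetical.
threat_answers = [True, True, False, True, False]  # "yes" = threat indicator present
cm_answers = [True, True, True, False, True]       # "yes" = countermeasure in place

threat = sum(threat_answers) / len(threat_answers)  # 3/5 = 0.6
lcm = 1 - sum(cm_answers) / len(cm_answers)         # lack of CM: 1 - 4/5 = 0.2
vulnerability = 1 / 9  # e.g. nine vulnerabilities weighted equally (an assumption)

# Residual risk contribution of this branch, as in Section II.
residual_risk = vulnerability * threat * lcm
```

Summing such branch contributions over every (vulnerability, threat) pair would yield the respondent's total residual risk index of the kind tabulated in Appendix A.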





Figure 6: RM Risk Assessment Results Applying Figure 5. 



IV. DISCUSSIONS AND CONCLUSION

CLOUD computing, also viewed as a fifth utility after water, electric power, telephony, and gas, is set to expand dramatically if issues of availability and security can be trustfully resolved. The CLOUD simulator tool (CREA) by the author, and its further refinement, will aid in that expansion (Sahinoglu, 2011). Monte Carlo VaR is an alternative method for day-to-day monitoring of the CLOUD (Kim et al., 2009). Both CRAM and Monte Carlo VaR (simulation methods, only cited here) and the CLOUD-RM (an information-gathering customer survey method) will provide quantitative risk assessment and management solutions, provided the correctly collected data needed for both approaches can be justified. The Markov-chain method, by contrast, is useful only for small-scale problems, utilized as a theoretical comparison alternative, mainly because large-scale problems with excessive numbers of Markov states are intractable to compute even with supercomputers exceeding 500 servers. The CLOUD Risk-Meter breaks new ground in that it provides a quantitative assessment of risk to the user as well as recommendations to mitigate that risk. A cross-section of draft questions (subject to change as required by the CLOUD management organization) is listed in Appendix C after the References. As such, the CLOUD RM will be a highly useful tool for both end users and the IT professionals involved in CLOUD service provision, given mounting customer complaints about breaches of reliability (Worthen and Vascellaro, 2009). Further research demands reliable random data collection practices to render these two recommended methods, i.e., discrete event simulation (DES) and the RM (Risk-Meter), useful and applicable ("most bang for the buck") in aiding managers with assessing and managing CLOUD risk, a task conventionally left to chance.





V. REFERENCES

Anthes, G. (2010). "Security in the Cloud", Communications of the ACM, 53(11), pp. 16-18. doi:10.1145/1839676.1839683.
Benini, M. and Sicari, S. (2008). "Risk Assessment in Practice: A Real Case Study", Computer Communications, 31(15), pp. 3691-3699.
Boland, R. (2011). "Approval Granted for Private Software to Run in Secure Cloud", SIGNAL (www.afcea.org), Information Security, pp. 35-38.
Greengard, S. (2010). "Cloud Computing and Developing Nations", Communications of the ACM, 53(5), pp. 18-20.
Grobauer, B., Walloschek, T., Stocker, E. (2011). "Understanding Cloud Computing Vulnerabilities", IEEE Security & Privacy, 9(2), pp. 50-57.
Kim, H., Chaudhuri, S., Parashar, M., Marty, C. (2009). "Online Risk Analytics on the Cloud", CCGRID'09: Proceedings of the 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, IEEE Computer Society, Washington, DC, USA.
Leavitt, N. (2009). "Is Cloud Computing Really Ready for Prime Time?", IEEE Computer, January issue, pp. 15-20.
Sahinoglu, M., Cueva-Parra, L. (2011). "CLOUD Computing", WIREs Computational Statistics, 3, pp. 47-68. doi:10.1002/wics.139.
Sahinoglu, M. (2007). Trustworthy Computing: Analytical and Quantitative Engineering Evaluation, New York: John Wiley and Sons, Inc.
Sahinoglu, M., Morton, S. (2011). "CLOUD Computing Risk Assessment with Risk-o-Meter", AFITC (Air Force Information Technology Conference), Montgomery, AL.
Sahinoglu, M. (2005). "Security Meter - A Practical Decision Tree Model to Quantify Risk", IEEE Security and Privacy, 3(3), April/May 2005, pp. 18-24.





Sahinoglu, M. (2008). "An Input-Output Measurable Design for the Security Meter Model to Quantify and Manage Software Security Risk", IEEE Transactions on Instrumentation and Measurement, 57(6), pp. 1251-1260.
Sahinoglu, M. (2009). "Can We Quantitatively Assess and Manage Risk of Software Privacy Breaches?", IJCITAE - International Journal of Computers, Information Technology and Engineering, 3(2), pp. 65-70.
Sahinoglu, M., Yuan, Y.-L., Banks, D. (2010). "Validation of a Security and Privacy Risk Metric Using Triple Uniform Product Rule", International Journal of Computers, Information Technology and Engineering, 4(2), pp. 125-135.
Sahinoglu, M. (2008). "Generalized Game Theory Applications to Computer Security Risk", Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, May 18-21.
Sahinoglu, M. (2016). Cyber-Risk Informatics: Engineering Evaluation with Data Science, New York: John Wiley and Sons, Inc.
Srinivasan, S., Getov, V. (2011). "Navigating the Cloud Computing Landscape: Technologies, Services, and Adopters", IEEE Computer, March issue, pp. 22-28.
Worthen, G., Vascellaro, J. (2009). "E-Mail Glitch Shows Pitfalls of Online Software", Media and Marketing, Wall Street Journal, pp. B4-B5.





APPENDIX A: RESPONDENT RESIDUAL RISK RESULTS TABLE

Tabulated survey results for the Cloud Risk Meter, ranked overall. Median: 52.83% (Respondent13); Mean: 51.29% (Respondent1, at 51.34%, is the closest result).
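As a small illustration of how the median and closest-to-mean respondents could be identified, here is a sketch with a hypothetical five-respondent sample in place of the 31 tabulated scores (only Respondent1's 51.34% and Respondent13's 52.83% echo the figures quoted above; the rest are invented):

```python
import statistics

# Hypothetical residual-risk scores (%), stand-ins for the Appendix A table.
scores = {
    "Respondent1": 51.34,
    "Respondent7": 40.00,
    "Respondent13": 52.83,
    "Respondent21": 55.00,
    "Respondent29": 60.00,
}

median_score = statistics.median(scores.values())
mean_score = statistics.mean(scores.values())
# Respondent whose score lies closest to the sample mean
closest_to_mean = min(scores, key=lambda r: abs(scores[r] - mean_score))
```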





APPENDIX B: RESPONDENTS’ RESIDUAL RISK ASSESSMENT & OPTIMIZATION SCREENSHOTS


APPENDIX C: CLOUD RISK SURVEY (XML FORMAT)

- Are network-based controls not sufficient to guarantee privacy?
- Has the cloud service been breached by intruders?
- Is key management haphazard?
- Is the cloud frequently not available?

- Does your provider fail to segregate your data from that of others?
- Are you not required to authenticate your identity?
- Are simple passwords not allowed?
- Are users allowed to never have to change their password?
- Is your connection not https or VPN?
- Did your provider have systems in place to prevent data leaks or access by third parties?
- Did your provider have a robust Identity Management System?
- Did your provider require users to have complex passwords?
- Did your provider require you to change passwords regularly?
- Did your provider offer a secure, encrypted connection?

- Does the provider use Windows LM Hash cryptography?
- Are your provider's encryption algorithms known to others?
- Do you store the encryption key on your hard drive?
- Do you use someone else's public key to identify the site?
- Do you write down the pass key somewhere not secure?
- Does your provider use 128-bit or higher encryption?
- Did your provider use strong encryption functions?
- Do you remove the hard drive from the computer and store it in a secure location when not in use?
- Did you only use public keys that you received directly from the provider?
- Did you store the pass key in a secure location like a safe?





- Is service getting interrupted?
- Is the service not quick to load?
- Are data changes not quickly propagated?
- Is the service not restored in minutes?
- Is data lost by the provider?
- Did the provider guarantee 24/7 service?
- Did the provider have a scalable service?
- Did the provider have sufficient bandwidth?
- Did the provider have robust hardware and software redundancy?
- Did the provider have the ability to recover lost data?

- Is the cloud software incompatible with your system?
- Is the application code unsecure?
- Is the application software not user friendly?
- Are cloud applications not up to your task?



- Is it difficult to switch providers due to the provider's proprietary technology?
- Is your system sometimes incompatible with your provider's technology?
- Does your provider fail to use open source standards?
- Do you not know what technology your provider uses?
- Do you receive indecipherable error messages?
- Did you select a provider that uses open standards?
- Did you research what technology your provider uses?
- Were providers chosen on the basis of compatibility?
- Did potential providers provide compatibility guides?

- Is the service subject to injection of client-side script into web pages viewed by other users?
- Is the service subject to buffer overflows or arrays overwriting data?
- Do you receive unintelligible error messages?
- Is the application subject to database manipulation?
- Are the provider's data formats not standard?
- Did the provider use anti-cross-site scripting (XSS) libraries?
- Did the provider minimize the use of unsafe string and buffer functions?
- Did the provider validate input and output?
- Did the provider avoid string concatenation for dynamic SQL statements?
- Did the provider utilize standard data formats?

- Is the sequence through a task not obvious?
- Are system interface commands difficult to find and use?
- Does the system interface fail to offer more than one way to accomplish a task?
- Do tasks take a long time to accomplish?
- Do the intentions of the application designer fail to match those of the user?
- Did the provider have software that followed a clear, logical sequence?
- Were system interface commands clearly laid out and well designed?
- Did the system interface offer various means to accomplish a task?
- Was the system interface optimized to require a relatively small number of steps to accomplish tasks?
- Does the system always support the user's chosen plan of action?

- Does the cloud application have fewer features than a PC-based one?
- Are application interface commands difficult to figure out?
- Does the cloud application freeze up or crash?
- Do applications take a long time to save or update?
- Do you have to download patches and upgrades?
- Did you research cloud application functionality?
- Were application interface commands logical and obvious?
- Did the cloud application recover automatically?
- Does the provider have sufficient servers and bandwidth?
- Does the cloud application have centralized updating?





- Are your web applications not secure?
- Do you worry about security?
- Does your provider use virtualization?
- Is the cryptography used not adequate?

- At the IaaS (Infrastructure as a Service) level, is communication among host computers not secure?
- At the PaaS (Platform as a Service) level, does the provider make provisions for multiple users?
- At the SaaS (Software as a Service) level, did you fail to review the provider's documentation on data separation?
- Do you not know what cloud computing standard your provider uses?
- Are your provider's applications lacking in portability?
- Did the provider secure communication among host computers with channel-level encryption?
- Did the provider threat-model for multi-user risks?
- Did the provider securely separate your data from that of others?
- Did the provider adopt standards that allow for interoperability?
- Do your cloud applications work across different platforms?







- Does the provider fail to implement session handling?
- Do you use public computers to access cloud applications?
- Do you rely on deleting internet history for removing cached data?
- Does your provider use forms to accept data from users in an http request?
- Does your provider accept files from users without validation?
- Did the provider's web applications provide some idea of session state?
- Did you use VPN or https to access your cloud applications?
- Do you have software that can delete cached data files?
- Did the provider validate data received from an http request before using it in an application?
- Did your provider validate the source before loading data?

- Does your provider's virtualization use default settings for packet inspection?
- Is escape to the higher-level hypervisor likely in the case of a breach of the virtualization platform?
- Does your provider use Microsoft's Virtual PC hypervisor?
- Does your provider fail to scan the correct customer system?
- Does your provider fail to monitor its virtual machines?
- Did the provider's virtualization appliances inspect all packets?
- Did the provider extend their vulnerability and configuration management process to the virtualization platform?
- Did the provider patch the vulnerability or switch to another platform?
- Did the provider read in current asset or deployment information from the cloud and then dynamically update the IP address information before scans commence?
- Did the provider utilize Network Access Control-based enforcement for continuous monitoring of its virtual machine population and virtual machine sprawl prevention?

- Does the provider use an improper algorithm, such as using a hash function for encryption?
- Does the provider have encryption implementation errors?
- Is the provider's key scheme weak?
- Does the provider use insecure algorithms such as DES (Data Encryption Standard) or MD5 (Message Digest Algorithm)?
- Does the provider fail to update keys?
- Did the provider use an appropriate encryption algorithm?
- Did the provider use standard cryptographic implementations/libraries?
- Were the keys provided long and sufficiently random?
- Did the provider select a robust encryption algorithm such as PGP?
- Were keys updated regularly?

- Does your provider lack sufficient hardware?
- Does your provider lack hardware scalability?
- Is the server farm capacity sometimes outstripped by customer demand?
- Are the provider's servers not properly configured?

- Does your cloud service not respond quickly?
- Is upload/download time not fast?
- Is cloud service downtime not uncommon?
- Does your cloud service return errors?
- Has your cloud service lost data?
- Did the provider have sufficient servers?
- Did the provider have enough bandwidth?
- Did the provider have sufficient redundancy?
- Did the provider have sufficient routers and switches?
- Did the provider have the storage capacity for frequent backups?

- Do you not know just how scalable your cloud service is?
- Does your vertical scalability require more and more computing resources?
- Do you complain about service unavailability in times of high demand?
- Do your service costs rise geometrically with high demand?
- Do applications not make use of network-side scripting?
- Does your hardware fail to meet the required specifications?
- Did the provider test for scalability?
- Did the provider utilize load balancing?
- Did the provider guarantee quick scalability?
- Did the provider's scalability charges increase at a reasonable arithmetic progression?
- Are applications written with vertical scalability in mind?

- Does your provider lack a failover capability?
- Has your provider been overwhelmed by demand?
- Has your provider been overwhelmed by DoS (Denial of Service) attacks?
- Is your provider a startup?
- Does your provider charge more for storage across multiple availability zones?
- Could your provider switch over automatically to redundant servers?
- Did the provider assign enough servers to scale as advertised?
- Did the provider utilize secure routers?
- Did your provider have the financial resources to adequately fund their server farms?
- Did you select a provider that offers storage across multiple availability zones?







- Are servers not updated?
- Are network ports unprotected?
- Are service accounts for SQL servers often granted more access to the platform or network than is necessary?
- Are the features and capabilities of SQL servers exposed when not necessary?
- Are extended procedures stored that allow for access to the operating system or registry?
- Were servers updated as patches and upgrades became available?
- Did the provider use firewalls for its servers?
- Were service accounts for SQL servers set up under the principle of least privilege?
- Did the provider use the SQL Server Configuration Manager to control features and other components?
- Did the provider not enable stored procedures that allow for access to the operating system or registry?

- Are server farms subject to power outages? Are connections sometimes lost? Is the server farm run-down? Are repair crews ad hoc?

- Does the provider not have a reliable power source nearby? Does the provider sometimes have power outages? Do the provider’s facilities have uncontrolled climates? Are power facilities and backup relatively open? Do hours pass before the electricity is restored? Did the provider locate their server farm near a power source such as a dam or high-tension power lines? Did the provider have an Uninterruptible Power Supply (UPS) with backup such as diesel generators for their facility? Did the provider’s facilities all have rigorous environmental controls? Were critical areas for power generation and backup secured from unauthorized entry? Were repair crews on standby 24/7 in case of power outage?

- Do downloads/uploads get dropped? Are applications very slow to update? Do cloud connections take a long time to be established? Did the provider fail to examine the maximum transmission unit (MTU) size of the local link, as well as the entire projected path to the destination? Do you receive network-related error messages? Did the cloud provider utilize a quality ISP? Did the cloud provider have sufficient bandwidth even in peak demand times? Did the provider have sufficient routers? Does your cloud provider fail to use IPv6 (Internet Protocol version 6)? Did the provider address error conditions such as data corruption, packet loss and duplication, as well as out-of-order packet delivery?
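The MTU questions above concern how packet sizing along the projected path affects transfers. A small sketch of the underlying arithmetic: how many IPv4 fragments a payload needs when it exceeds the path MTU (the 1500-byte MTU and 20-byte header are the common Ethernet/IPv4 defaults, not measured figures):

```python
import math

def fragments_needed(payload_bytes, path_mtu=1500, ip_header=20):
    """IPv4 fragments required for a payload on a link with the given MTU.

    Each fragment carries at most `path_mtu - ip_header` bytes of payload,
    and fragment offsets must be multiples of 8 bytes, so the usable payload
    per fragment is rounded down to a multiple of 8. This sketches only the
    sizing arithmetic, not a full IP implementation.
    """
    per_fragment = (path_mtu - ip_header) // 8 * 8
    return math.ceil(payload_bytes / per_fragment)

print(fragments_needed(4000))  # → 3
```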

- Does the server farm look run down? Does the hardware seem older than two years? Is the facility open to entry by anyone? Does the facility have proper backup? Does the facility lack environmental controls? Was the cloud provider fully capitalized? Did the provider invest in the latest hardware? Did the provider invest in securing the facility? Did the provider invest in proper power and data backup? Did the provider have a facility that did not fluctuate in temperature?

- Do service outages take a long time to repair? Do repair crews take a long time to respond? Did the provider fail to vet repair crews? Are repair crews assembled on an ad hoc basis? Depending on the situation, can repair crews be quickly scaled up? Did the cloud provider have repair crews stationed at each facility? Were repair crews available 24/7? Were repair crews subject to a background check? Were repair crews thoroughly trained? Did the provider have the capability to bring in extra crews quickly as needed?







- Did the provider lack a contingency plan? Did the provider lack multiple sites? Did the provider have inadequate software and hardware? Does recovery take days?

- In the event of facility disaster, do you know how your provider would respond? Does the provider fail to rehearse data and disaster recovery? Does the provider have the same contingency plan as in the beginning? In the event of facility disaster, do personnel know what to do? Are critical services restored at the same time as others? Did the cloud provider have a well-planned data and disaster recovery contingency plan? Did the provider rehearse their data and disaster recovery plan several times a year? Did the provider keep their plan current? Were personnel thoroughly prepared for data and disaster recovery? Did the provider prioritize restoration of the most critical services first?

- Are backups kept at the same site? Were backup sites merely storage facilities? Is restoration time different depending on the site? Are unavailable sites apparent to clients? Does backup and archiving take weeks due to multiple facilities? Were backups kept at completely different sites? Were backup sites fully functional? Was restoration time the same or similar regardless of which backup site you were restoring from? Did the provider use load balancing so that unavailable sites would not be visible to clients? Was content copied as it was archived?
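The backup questions above ask whether copies at completely different sites are genuinely equivalent. One simple way to check is to compare cryptographic checksums of each site's copy against the primary; a sketch using SHA-256 (the site names and data are hypothetical):

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 fingerprint of a backup blob."""
    return hashlib.sha256(data).hexdigest()

def verify_replicas(primary: bytes, replicas: dict) -> list:
    """Return the names of backup sites whose copy diverges from the primary."""
    want = checksum(primary)
    return [site for site, copy in replicas.items() if checksum(copy) != want]

# Hypothetical site copies: zone-c silently dropped a byte.
blob = b"customer-archive-2016"
print(verify_replicas(blob, {"zone-b": blob, "zone-c": blob[:-1]}))
# → ['zone-c']
```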

- Does the provider expand storage on an ad hoc basis? Does the provider require days for recovery? Is data lost despite recovery efforts? Do recovery efforts sometimes encounter unexpected results? Does the provider have to bring in hardware and software for recovery? Did the provider have the ability to quickly scale storage capability as needed? Did the provider have a fully resourced recovery effort? Did the provider have the latest full functionality recovery software? Did the provider periodically test recovery hardware and software? Did every provider facility have recovery hardware and software?

- Does the provider try to get everything running all at the same time? In the event of cloud downtime, do you receive a refund? Has the provider made known a time frame when certain services can be expected to be restored? In the case of multiple facility recovery, has the provider taken into account overhead associated with parallel restoration? Did the provider seem not to know when services may be restored? Does restoration take a long time in the event of cloud downtime? Did the provider follow a timeline for recovery, prioritizing the most important services? Did the provider prorate their charges according to time down? Did the provider set strict Recovery Time Objectives as part of their recovery plan? Did the provider address parallel restoration in their recovery plan? Did the provider do restoration test runs so they have a good idea when services may be restored?
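The questions above suggest recovery should follow strict Recovery Time Objectives rather than restoring everything at once. A minimal sketch of deriving a restoration order from per-service RTOs (the service names and RTO values are hypothetical):

```python
def restoration_order(services):
    """Order services for recovery: strictest Recovery Time Objective first.

    `services` maps a service name to its RTO in minutes; ties are broken
    by name so the plan is deterministic.
    """
    return sorted(services, key=lambda name: (services[name], name))

plan = restoration_order({"billing": 240, "auth": 15, "storage": 60})
print(plan)  # → ['auth', 'storage', 'billing'] — auth has the strictest RTO
```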

- Did the provider lack qualified crisis response personnel? Did management lack technical understanding? Was there insufficient load demand management? Did management fail to monitor the cloud service?

- Are crisis response teams formed only during adverse events? Are crisis response teams hired without background checks? Do crisis response team members know what to do, or do they have to receive instructions? Did the provider fail to test crisis response teams? Do hours pass before crisis response teams arrive? Were crisis response teams fully established and ready to respond? Were crisis response team personnel vetted as a condition of employment? Were crisis response teams thoroughly trained in disaster recovery? Did crisis response teams undergo periodic drills? Were crisis response teams on call 24/7?

- Is cloud service management all composed of MBAs and lawyers? Is senior management drawn from other industries? Did management fail to see the need for the newest hardware and software? Do corporate cost-cutting measures lessen the cloud service quality? Does your cloud service provider seem to be concerned with profits more than anything else? Did cloud service senior management have technical expertise? Was management experienced in cloud service provision? Was management supportive of fully resourcing hardware and software needs? Was management supportive of fully resourcing testing, monitoring, and recovery personnel? Was provider management most concerned about providing a high-quality, extremely low-downtime cloud service above all else?

- Did the provider fail to monitor load demand? Does your provider not distribute incoming application traffic across multiple instances? Does your provider fail to reroute traffic in the case of unhealthy service instances? Does your provider use multiple availability zones? Does your provider fail to use load balancing software? Did the provider seamlessly scale up during peak demand? Did your provider automatically distribute incoming application traffic across multiple instances? Did your provider automatically reroute traffic to healthy service instances? Did the provider scale up across multiple availability zones? Did the provider utilize load balancing software that allowed for a seamless, transparent session?
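The load-balancing questions above describe distributing traffic across instances and rerouting it away from unhealthy ones. A toy round-robin sketch of that rerouting (the instance ids and health states are hypothetical; a real balancer would also re-probe unhealthy instances and handle the all-unhealthy case):

```python
import itertools

def healthy_round_robin(instances, health):
    """Yield instances in round-robin order, skipping unhealthy ones.

    `health` is a callable returning True for instances that pass the
    health check; traffic is rerouted away from the rest. Assumes at
    least one instance stays healthy (otherwise this would spin forever).
    """
    for inst in itertools.cycle(instances):
        if health(inst):
            yield inst

status = {"i-1": True, "i-2": False, "i-3": True}
lb = healthy_round_robin(["i-1", "i-2", "i-3"], lambda i: status[i])
print([next(lb) for _ in range(4)])  # → ['i-1', 'i-3', 'i-1', 'i-3']
```

The unhealthy instance `i-2` never receives traffic, which is the transparent rerouting the survey asks about.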

- Are there periods when the cloud service is unmonitored? Are event logs and performance history only available to the provider? Does the provider fail to notify you regarding problems in your account configuration? Do you know when new instances are launched and ended? Are you in the dark regarding the health of your cloud session? Did the cloud service provider monitor the service 24/7 with a dedicated monitoring staff? Did the provider make log events and performance history freely available for its clients? Did the provider alert you to poorly configured cloud accounts or faulty auto scaling? Did the provider make available cloud utilization alerts? Did your cloud service provider offer a tool to monitor your cloud session?
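The monitoring questions above mention cloud utilization alerts. A minimal sketch of threshold-based alerting over the latest metric samples (the instance ids, percentages, and 80% threshold are hypothetical; a real monitor would stream these from the provider's metrics service):

```python
def utilization_alerts(samples, threshold=80.0):
    """Flag monitoring samples that breach a CPU-utilization threshold.

    `samples` maps an instance id to its latest CPU percentage; the result
    contains only the instances that should trigger an alert.
    """
    return {inst: pct for inst, pct in samples.items() if pct >= threshold}

print(utilization_alerts({"i-1": 35.0, "i-2": 92.5, "i-3": 80.0}))
# → {'i-2': 92.5, 'i-3': 80.0}
```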

- Is your provider’s payment plan inconvenient? Is the economy growing slowly or negligibly? Are interest rates high? Is there an adverse regulatory environment?

- Are you required to pay up front and make a deposit? Are paper bills received through the mail, where they may be lost or go unnoticed? Are you in the dark as to how your bill is calculated? Does your provider increase charges geometrically as more and more computing resources are consumed? Are payment plans more for the benefit of the provider than you? Did the company offer payment plans? Did the company digitize invoicing? Were service charges explicitly spelled out in the Service Level Agreement (SLA)? Did increased computing resource charges progress arithmetically? Did the provider offer plans tailored to your needs and convenience?

- Is the economy growing slowly? Are you cutting back IT spending? Is consumer demand low? Does the provider have significant overhead? Did the provider fail to diversify its client base? Did the provider discount their cloud service? Did the provider show how money can be saved through cloud computing? Did the provider offer consumer promotions or discounts? Did the provider engage in corporate cost cutting? Did the provider offer services to consumers, government, or internationally?

- Are interest rates high? Are payments late or partial? Does the provider have significant overhead? Did the provider fail to diversify its financing? Did the provider offer alternative payment options? Did the provider raise capital through bond or stock offerings rather than borrowing? Did the provider offer a discount for paying upfront? Did the provider ease payment terms? Did the provider use forward contracts to mitigate interest rate rises? Did the provider raise capital in overseas markets or privately?

- Is the industry facing increased regulatory scrutiny? Does the provider have a predominant position in the industry? Will pending legislation have an adverse impact on the industry? Are new ventures hard to launch due to regulatory measures? Did the provider fail to monitor regulatory events? Did the industry hire lobbying and public relations representation? Did the provider spin off corporate entities? Did the industry present its case in Washington? Did the provider have foreign subsidiaries in less regulated markets? Did the provider have legal and corporate staff monitoring regulatory events?

- Does the provider lack a PR effort? Has the provider received negative publicity lately? Are your complaints ignored? Is the provider lacking in openness?

- Does the provider make little effort to shape public perceptions of itself? Does company news rarely originate from the company itself? Are company spokesmen rarely heard from? Is the provider rarely in the public eye? Did the provider fail to create public good will? Did the provider have a PR campaign? Did the provider have a corporate relations department providing news releases? Did the company provide speakers or representatives available for interview? Did the provider engage a PR firm? Did the provider engage in charitable giving or free service provision to schools or other non-profits to create public good will?

- Has the company received negative publicity? Did the company let negative news slide without reaction? Does the company fail to explain service outages? Does news unfold without a timely reaction? Are responses to news events left to more junior corporate personnel? Did the company counteract the publicity with its own speakers and representatives? Did the company fully explain its side of a particular situation? Did the company state that corrective measures were taken and preventive measures put in place? Did the company’s corporate relations department provide regular and timely news updates? Did the CEO or other senior corporate management step forward?

- Are billing errors a frequent source of your complaints? Did the provider charge the same even if there were service outages? Does the provider take days or even weeks to respond? Is making account or configuration changes frustrating? Is major research required to figure out charges? Did you have a dedicated office or account executive you can easily reach to correct billing errors? Did the provider prorate or refund in the event of service outages? Did the provider have a policy of responding within 24 hours to your complaints or questions? Did the provider offer tutorials or live help to address account or configuration changes? Are service terms and charges spelled out clearly in the SLA?
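The complaint-handling questions above ask whether the provider prorates or refunds charges for service outages. The proration arithmetic is simple (the monthly fee, downtime, and 30-day month are hypothetical figures):

```python
def prorated_charge(monthly_fee, downtime_hours, hours_in_month=720):
    """Prorate a monthly charge for service downtime.

    The refund is the fee times the fraction of the month the service
    was down; 720 = 30 days x 24 hours.
    """
    refund = monthly_fee * downtime_hours / hours_in_month
    return round(monthly_fee - refund, 2)

print(prorated_charge(100.0, 36))  # → 95.0 after 36 hours of downtime
```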





- Does the provider have a closed corporate culture? Is getting information out of the company very difficult? Is reaching a live person in the company difficult? Does it seem like the company hides behind lawyers and their legalese? In the event of crisis, does the company respond slowly? Did the company make visible efforts to be open to the public? Did the company clearly explain its actions and rationale in public fora? Are spokesmen and other corporate representatives always available? Was the company’s information, charges, contracts, etc. presented in easily understood language? In the event of crisis, did the provider react rapidly in the public eye?





ABOUT THE AUTHORS

Dr. M. Sahinoglu is the founding Director of the Informatics Institute (2009) and founder of the Cybersystems and Information Security Graduate Program (2010) at Auburn University at Montgomery (AUM). Formerly the Eminent Scholar and Chair Professor of Troy University’s Computer Science Department, he holds a B.S. (1973) from METU, Ankara, Turkey, an M.S. (1975) in Electrical and Computer Engineering from the Victoria University of Manchester, UK, and a joint Ph.D. (1981) in Electrical Engineering and Statistics from Texas A&M University. He conducts research in Cyber-Risk Informatics and is the author of Trustworthy Computing (2007) and Cyber-Risk Informatics: Engineering Evaluation with Data Science (2016), both published by Wiley, Hoboken, New Jersey, USA.

Scott Morton graduated summa cum laude with an M.S. in Computer Science from Troy University. He is a research associate at AUM and a Computer Science instructor at Troy University Montgomery and South University Montgomery. His primary research interest is cybersecurity risk assessment.

C. Vadla and K. Medasani are graduate students at Auburn University Montgomery enrolled in the Cybersystems and Information Security Program.