Designing an Automated Audit Tool for the Targeted Risk Exposure Reduction

Yurdaer Doganata and Francisco Curbera
IBM T J Watson Research Center, 19 Skyline Drive, Hawthorne NY 10532
{yurdaer,curbera}@us.ibm.com

Abstract. The risk exposure of an organization is the cost of being non-compliant for all process instances that are subject to auditing; it can be reduced by auditing internal controls for every process instance and by detecting and eliminating the causes of non-compliance. This paper discusses the design considerations for an automated auditing tool intended to achieve a desired level of risk exposure reduction. A method is provided to measure the effectiveness and the limits of such tools and to adjust their performance for various risk exposure levels.

Keywords: continuous assurance, risk management, risk exposure, automated auditing, internal controls.

1 Introduction

Detecting compliance failures helps organizations better control their operations and remain competitive. The quality of products and services cannot be ensured in a business if the processes do not conform to design goals and comply with the rules and regulations. Moreover, organizations may be subject to serious financial penalties as well as civil and penal consequences if they fail to comply with established guidelines, rules and regulations. Hence, non-compliance may have severe consequences and needs to be managed, either by correcting violations or by reducing the associated risk. Companies invest significantly in detecting compliance failures to ensure governance and manage risk. The cost of reducing the risk of being non-compliant can run into millions of dollars [1]. An AMR Research survey reveals that company spending on governance, risk management and compliance was expected to grow to $29.8 billion in 2010, up nearly 4% over the $28.7 billion spent in 2009 [2]. Internal controls can provide reasonable assurance that the goals of an organization are met, and for some organizations they are required by law. In order to manage the risk associated with compliance failures and to provide continuous assurance of business goals, organizations use an Enterprise Risk Management (ERM) framework such as COSO ERM [3]-[5]. An ERM framework provides a systematic way of creating internal control points as part of audit and compliance activities. In completely managed processes, compliance against internal controls can be checked by using process automation software. In the absence of process automation software that can control and record who did what and when, the compliance check is a costly and time-consuming task performed manually by auditors [6]. Automated continuous auditing systems, on the other hand, can provide an almost cost-free auditing opportunity if the initial cost of building such a system is excluded. Such a system can run continuously and perform evaluations for all process instances without adding to the cost of auditing.

While continuous auditing systems eliminate or reduce the dependency on audit professionals, they are not infallible. The tools that are built to realize automated continuous auditing rely on information extraction from process events, data and documents, including e-mail transactions between the people within the organization. The information about the processes extracted from these sources can contain errors and, due to these errors, the audit results may be faulty. Moreover, testing certain compliance conditions requires a level of text analysis that is not yet available in automated systems. Hence, automated systems can perform fast and extensive auditing of the internal control points at the cost of making mistakes. As a result, some compliance failures may be missed, while other cases that are compliant may be declared non-compliant. The focus of this paper is the factors that impact the effectiveness of continuous assurance with automated audit tools. The subject is important for organizations that need to determine how much they should invest to remain compliant. This paper aims to help readers understand the characteristics of the operational environment that affect the efficiency of automated tools, and in particular the conditions that necessitate hiring experts for manual auditing to avoid compliance failures. Ultimately, companies expect to reduce their risk exposure by at least as much as they spend on compliance assurance.

F. Daniel et al. (Eds.): BPM 2011 Workshops, Part II, LNBIP 100, pp. 356–369, 2012. © Springer-Verlag Berlin Heidelberg 2012
Therefore, they need to know how they can optimize the return on their investment. Another aspect of the research presented here is to provide a mathematical foundation for designing automated audit tools that reduce the risk of being non-compliant to a targeted level. The performance of an automated audit tool depends on various parameters that determine its ability to differentiate compliant from non-compliant process instances. Hence, the design considerations include how to select the parameters that affect the performance of the tool so as to achieve the desired level of risk exposure reduction. The paper is organized as follows. After the related work, section 3 defines the risk exposure an organization incurs by running potentially non-compliant processes. Section 4 describes the method of using automated auditing tools along with limited auditor intervention to reduce the risk exposure; the effectiveness measures of such tools are also introduced and calculated in this section. Section 5 focuses on designing an auditing tool with the desired effectiveness.

2 Related Work

The effect of using automated auditing tools on detecting compliance failures is investigated in [7], and a methodology is proposed to measure their effectiveness. The approach is based on evaluating all process instances by using the automated audit machine and asking experts to randomly re-evaluate some of the instances marked as compliant and non-compliant by the automated machine. As a result, it is shown that the average number of non-compliant instances detected can be increased by a factor if the automated auditing tool is utilized. The improvement factor, which is the effectiveness measure, is found as a function of the sensitivity and specificity of the tool. In [8], the risk of being non-compliant is measured and the cost of reducing the risk by performing internal audits with the help of automated audit tools is calculated. This paper expands the results of [7][8] in two areas. Firstly, a feasibility analysis of building an automated auditing tool to achieve a desired reduction in risk exposure, in an environment where the prevalence of non-compliance is known, is presented. Secondly, a method to design an automated auditing tool that achieves a targeted performance is introduced and the design considerations are discussed. The problem of using automated audit machines to determine compliance failures is equivalent to determining the prevalence of a medical condition by screening the population with a medical diagnostic test which is not a gold standard [9][10]. The statistical methods used in epidemiology are therefore also applicable to detecting compliance failures and building continuous assurance systems.

3 The Risk of Being Non-compliant

As mentioned in the introduction, there is a cost associated with non-compliant process instances. The cost of being non-compliant is determined by the penalties the company will pay for not complying with the rules and regulations, as well as by the cost of not being able to ensure quality and remain competitive. If processes are executed with the potential to violate compliance rules, the organization that executes them takes a risk. Compliance officers are responsible for determining the amount of risk exposure due to non-compliance and for finding ways to minimize this risk. The risk exposure of an organization can be defined as the cost of being non-compliant for all process instances that are subject to auditing. It is proportional to the number of process instances and to the penalty paid for every non-compliant case. The risk of running potentially non-compliant processes can be reduced by auditing internal controls for every process instance, detecting and eliminating the causes of non-compliance. While risk exposure can be completely eliminated by auditing every process instance, this may not be a cost-effective solution, since manual auditing incurs a significant cost. The risk exposure in an organization for running potentially non-compliant processes is expressed as:

ℜ = Npωr

where ω is the ratio of the process instances externally audited, N is the total number of process instances, p is the prevalence of non-compliance in the population and r is the penalty to be paid per non-compliant instance. Assuming early detection and intervention to correct failures on some or all of the non-compliant process instances, the prevalence of non-compliance is improved, since the overall number of non-compliant instances is reduced. If λ is the ratio of the process instances that can be audited manually, then the average number of non-compliant instances that can be fixed is λNp. Hence, the new prevalence of non-compliance after manual auditing is reduced to p(1 − λ), where 0 ≤ λ ≤ 1. One of the challenges of reducing the risk exposure is to decide how much budget should be allocated for auditing. The budget allocation must be sufficient to justify the investment by reducing the cost associated with risk exposure. In order for the investment to make financial sense, the return on investment must be at least positive. In this case, the return is defined as the amount by which the risk exposure is reduced. A company is expected to reduce its risk exposure by at least as much as it spends on compliance assurance.

4 Effectiveness of an Automated Audit Tool to Ensure Compliance

In order to measure the effectiveness of an automated audit tool, we consider a methodology that enables detecting the largest number of non-compliant instances within a budget constraint. The methodology is based on evaluating all process instances by using the automated audit machine and asking experts to randomly re-evaluate some of the instances marked as compliant and non-compliant by the automated machine. We assume that the budget permits the expert evaluation of only M = M1 + M2 cases, where M1 is the number of cases marked as non-compliant and M2 the number of cases marked as compliant by the audit machine. This way, the sample space that the experts operate in is reduced. The effectiveness of the proposed methodology can be measured by comparing the expected number of non-compliant process instances detected. If that number is higher than what experts would have detected under the same budget constraint without the methodology, then we can conclude that the methodology improves the auditing process in general. The analysis of this methodology is detailed in [7], where it is shown that using automated auditing tools together with the methodology described above improves the detection of non-compliant instances by a factor χ, as below:

χ = 1 / (p(1 − ψ) + ψ)   (1)

where

ψ = (1 − θ)/η   (2)

where θ is the specificity, η is the sensitivity of the tool, and p is the prevalence of non-compliant instances in the population. Equation (1) reveals that if the sum of the sensitivity and the specificity of the tool is 1, then ψ = 1 and there is no improvement: a tool with ψ = 1 does not help detect more non-compliant process instances, since the improvement factor is χ = 1. On the other hand, for ψ < 1 the improvement factor is χ > 1; hence the detection of non-compliant instances can be improved by a factor χ by using auditing tools and limited manual auditing, as expressed in equation (1). Note that the improvement factor depends on the prevalence of non-compliance and on the sensitivity and specificity of the auditing tool. This is an improvement over what can be done manually on a limited set of process instances; due to budget constraints, that set is usually much smaller than the total number of process instances. Hence, by using the methodology described in the previous section and the automated audit tool, the prevalence of non-compliance can be reduced by λχp, where 0 ≤ λ ≤ 1 is the ratio of the process instances that can be audited manually within the budget constraint. By detecting and fixing some of the non-compliant cases within the set of all process instances, the prevalence of non-compliance is improved, since there are fewer non-compliant instances after the detected ones are fixed. As a result, the new prevalence of non-compliance is found as follows:

p' = p(1 − λχ) = p(1 − λ / (p(1 − ψ) + ψ))   for λχ ≤ 1   (3)
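A small sketch of equations (1)–(3); the function names are mine:

```python
# Sketch of equations (1)-(3): psi, the improvement factor chi, and the
# new prevalence p' after tool-assisted auditing.
def psi(sensitivity, specificity):
    """psi = (1 - theta) / eta, equation (2)."""
    return (1.0 - specificity) / sensitivity

def improvement_factor(p, psi_value):
    """chi = 1 / (p(1 - psi) + psi), equation (1)."""
    return 1.0 / (p * (1.0 - psi_value) + psi_value)

def new_prevalence(p, lam, psi_value):
    """p' = p(1 - lam * chi), equation (3); valid only while lam * chi <= 1."""
    chi = improvement_factor(p, psi_value)
    assert lam * chi <= 1.0, "lam * chi must not exceed 1"
    return p * (1.0 - lam * chi)
```

For instance, a tool with sensitivity 0.9 and specificity 0.8 gives ψ ≈ 0.22 < 1, so χ > 1 and the prevalence drops below p; a tool whose sensitivity and specificity sum to 1 gives ψ = 1 and χ = 1, i.e. no improvement.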

Since the risk exposure is proportional to the prevalence, the percentage reduction in risk exposure, Φ, can be found as 100λχ:

Φ = 100 λ / (p(1 − ψ) + ψ)   for λ ≤ p(1 − ψ) + ψ   (4)
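Equation (4) can be checked numerically; the sketch below (function name is mine) reproduces the roughly 20% reduction obtained for p = 0.3, λ = 0.1 and ψ = 0.3:

```python
def risk_reduction_percent(p, lam, psi_value):
    """Phi = 100 * lam / (p(1 - psi) + psi), equation (4).
    Feasible only while lam <= p(1 - psi) + psi, i.e. Phi <= 100."""
    denom = p * (1.0 - psi_value) + psi_value
    assert lam <= denom, "requested lam exceeds the feasibility bound of (4)"
    return 100.0 * lam / denom

if __name__ == "__main__":
    print(risk_reduction_percent(p=0.3, lam=0.1, psi_value=0.3))  # about 19.6
```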

In other words, if the risk exposure has to be reduced by Φ%, then (4) has to be satisfied. Equation (4) has some practical implications. The desired reduction percentage may not be achievable due to the constraints on λ, the ratio of the process instances that can be audited manually within the budget constraint, and ψ, the performance measure of the tool, for a given prevalence of non-compliance p. This means that for a given p, it may not be possible to build a tool that could reduce the risk to a desired level. Fig. 1 shows the risk exposure reduction percentage as a function of the performance of the audit tool, ψ. In order to reduce the risk exposure to the desired level for a given prevalence p, the λ and ψ values should be selected as plotted in Fig. 1, based on equation (4). The values of λ and ψ determine the operating point, where λ is controlled by the number of manually audited process instances and ψ is related to the performance measure of the tool, which is tunable. Fig. 1 shows that risk exposure is reduced more with higher λ values and lower ψ values. As an example, when the prevalence of non-compliance is p = 0.3, the risk exposure can be reduced by 20% provided that the operating point is λ = 0.1 and ψ = 0.3. Since λ is the ratio of the process instances that can be audited manually within the budget constraint, it is directly proportional to the cost of hiring auditors for manual auditing. ψ, on the other hand, is related to the sensitivity and the specificity of the automated audit tool, and there is a cost associated with building tools with the desired performance measures. Hence, the reduction in risk exposure must be large enough to cover the cost of hiring experts and tuning the automated auditing tool for the desired performance.

[Figure: risk exposure reduction % vs. ψ = (1-sensitivity)/specificity, curves for λ = 0.1, 0.3, 0.5, 0.7, 0.9 at p = 0.3]

Fig. 1. Reduction of risk exposure as a function of ψ for different λ values when prevalence is 0.3

Figure 1 shows that risk exposure is reduced most with higher values of λ. This is expected, since λ is related to the number of process instances, labeled as non-compliant by the auditing tool, that are manually audited by experts. As λ increases, the number of actually non-compliant process instances in the system is reduced along with the risk. In the example depicted by Fig. 1, when λ = 0.9, i.e., 90% of all process instances labeled non-compliant are examined by experts, and the (1-sensitivity)/specificity ratio of the audit tool is 0.9, the risk exposure is almost completely eliminated. This is the case when either the sensitivity or the specificity of the tool is very high; hence almost all the process instances labeled non-compliant are actually non-compliant, and they are all detected and eliminated by experts. Fig. 2, on the other hand, shows the effect of the prevalence of non-compliance on the risk exposure reduction when λ is constant. The designers of risk management systems need to know how the sensitivity and specificity values of the automated tool should be tuned to reduce the exposure by the desired amount when the prevalence of non-compliance is constant.

[Figure: risk exposure reduction % vs. ψ = (1-sensitivity)/specificity, curves for p = 0.1, 0.3, 0.5, 0.7, 0.9 at λ = 0.3]

Fig. 2. Reduction in risk exposure as a function of ψ for different prevalence values when λ = 0.3

[Figure: ψ = (1-sensitivity)/specificity vs. risk exposure reduction %, curves for p = 0.1, 0.3, 0.5]

Fig. 3. ψ = (1-sensitivity)/specificity as a function of desired level of risk exposure reduction

Achieving the desired risk exposure reduction for a given prevalence of non-compliance may not be possible with an automated audit tool if the sensitivity and the specificity measures of the tool cannot be tuned. Fig. 3 demonstrates this fact for different λ and prevalence values p. As an example, for λ = 0.3 and p = 0.1, represented as the solid line in the second group of curves, the risk exposure reduction cannot be more than 60% no matter how good the auditing tool is. On the other hand, for p = 0.1, the reduction in risk exposure can be increased to 70% by adjusting the ψ level to 0.2. This means that even if the sensitivity of the tool is 1, i.e., the tool is capable of identifying all actual non-compliant cases, the specificity of the tool, i.e., the proportion of negatives (compliant cases) correctly identified, must be larger than 0.8, since ψ ≤ 0.2 can only be satisfied with a specificity greater than 0.8 when the sensitivity is 1.

5 Designing an Auditing Tool for the Desired Exposure Reduction

Fig. 3 shows the relationship between the performance of the auditing tool and the desired level of risk exposure reduction. This can be used as a guideline in designing auditing tools. The sensitivity and the specificity measures of such a tool can be computed directly using the relationship depicted in Fig. 3. As an example, in an environment where the prevalence of non-compliance is 0.1 and the ratio of manually audited processes is 0.2, by using equation (4), which is plotted in Fig. 3, the ψ value is found as a function of the risk exposure reduction factor Φ as

ψ = (200 − Φ) / (9Φ)   (5)

Therefore, for a Φ = 50% risk exposure reduction, ψ is found as 150/450 = 1/3, or η = 3(1 − θ). Hence, if an auditing tool is designed with the right specificity and sensitivity, the risk exposure can be reduced by 50%. The question of how to design an automated audit tool with the right specificity and sensitivity ratios is addressed in this section. As defined in the previous section, sensitivity is the proportion of actual positives that are correctly identified and specificity is the proportion of actual negatives that are correctly identified. Therefore, the sensitivity and the specificity of the automated tool are expressed as follows:
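Equation (5) follows from solving (4) for ψ. The sketch below derives the required ψ for arbitrary λ and p (the general rearrangement and the function name are mine) and checks the Φ = 50% example:

```python
def required_psi(target_reduction_percent, lam, p):
    """Solve equation (4) for psi: the largest psi that still achieves the
    target reduction Phi, i.e. psi = (100 * lam / Phi - p) / (1 - p)."""
    return (100.0 * lam / target_reduction_percent - p) / (1.0 - p)

if __name__ == "__main__":
    # For lam = 0.2 and p = 0.1 this reduces to (200 - Phi) / (9 * Phi),
    # i.e. equation (5):
    print(required_psi(50.0, lam=0.2, p=0.1))  # approximately 1/3
```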

η : Sensitivity = TP/(TP + FN) = Pr(F = 1 | I = 1)   (6)

θ : Specificity = TN/(TN + FP) = Pr(F = 0 | I = 0)   (7)

where F indicates what the tool observes and I indicates the actual state. So, when (F = 1, I = 1), the tool observes a positive when the sample is actually positive. Similarly, (F = 0, I = 1) indicates that the tool observes a negative while the actual sample is positive. Designing an auditing tool with the desired sensitivity and specificity levels requires an understanding of how these measures are controlled. This depends on the definition of the internal controls and on the methodology used by the tool to detect compliance failures. This is explained further with an example below.
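The definitions in (6) and (7) amount to counting a confusion matrix over labeled instances; a small sketch with my own helper names:

```python
def confusion_counts(flags, truths):
    """Count (TP, FN, TN, FP) from tool outputs F and actual states I,
    where 1 denotes a positive (non-compliant) instance."""
    tp = sum(1 for f, i in zip(flags, truths) if f == 1 and i == 1)
    fn = sum(1 for f, i in zip(flags, truths) if f == 0 and i == 1)
    tn = sum(1 for f, i in zip(flags, truths) if f == 0 and i == 0)
    fp = sum(1 for f, i in zip(flags, truths) if f == 1 and i == 0)
    return tp, fn, tn, fp

def sensitivity(tp, fn):
    """eta = TP / (TP + FN) = Pr(F = 1 | I = 1), equation (6)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """theta = TN / (TN + FP) = Pr(F = 0 | I = 0), equation (7)."""
    return tn / (tn + fp)
```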


The internal control points of the firewall rule revalidation (FWRR) process can be used to understand how to design an automated audit tool with the desired level of performance [7][14]. The FWRR process is developed particularly for e-business hosting companies that manage customer machines and make sure that they are not hacked. The purpose is to ensure that proper firewall rules are implemented and that the customer is informed. FWRR ensures that both the e-hosting account representatives and the customers understand what rules exist in the customer environment, and that the customer is aware of existing deviations from the best practices defined by the e-hosting security policy. If the process is not executed properly, customers are at risk of not being made aware of what protocols are in place and required for the support of their environment. As a result, the e-hosting company may be held liable for insecure activities if the customer is not informed of, and does not sign off on, the risk involved. In the FWRR process, the information security advisor reviews the firewall rulesets and modifies them as needed before sending the ruleset to the account team representative. The ruleset is then sent to the customer for validation. One of the internal controls is to check whether the account team has sent the firewall ruleset to the customer within five days after receiving it from the information security advisor. The account team sends the rulesets to the customers as e-mail attachments. The auditing tool checks all the e-mails sent within five days after the ruleset is created and determines whether a member of the account team sent an e-mail to the customer for the purpose of revalidating the firewall rulesets. The decision is based on extracting features of the e-mail and using these features to calculate the likelihood of one of the following hypotheses:

H0: The e-mail is from the account team to the customer
H1: The e-mail is NOT from the account team to the customer

Here H0 is the null hypothesis and H1 is the alternative hypothesis. A true positive, TP, is rejecting H0 when H1 is valid. Similarly, a false negative, FN, is accepting H0 when H1 is valid. Hence, the sensitivity η and the specificity θ of the auditing tool are expressed as:

η = Pr(reject H0 | H1 is valid)   (8)

θ = Pr(accept H0 | H0 is valid)   (9)

In order to decide whether an e-mail is from the account team to the customer regarding the annual firewall rule revalidation, all the e-mails sent from the account team to the customer within five days after the ruleset is created are analyzed. The decision is based on text analysis of the body and the subject of the e-mail and of the content of the "To:" and "From:" fields. The body, subject, "From:" and "To:" fields constitute the salient features of an e-mail. As a result of analyzing these salient features, the e-mail is rated by assigning a number based on the features found in it. The selected features should help differentiate between the null hypothesis and the alternative one.


The rate of an e-mail, T, depends on the weights assigned to each observed feature and can be expressed as:

T = Σi=1..N [ ci1 Fi + ci0 (1 − Fi) ],   ci1, ci0 ∈ ℜ, Fi ∈ {0, 1}   (10)

Here Fi is a binary variable which is equal to "1" if the i-th feature exists and can be extracted from the e-mail, and "0" otherwise, and ci1, ci0 are the associated weights of the feature. Hence, if a salient feature is observed, the rate of the e-mail is increased by the associated weight ci1; if the salient feature does not exist, the (negative) weight ci0 is added instead, reducing the rating. As an example, an e-mail sent from a member of the account team to the customer may contain the features listed in Table 1. Each feature has a certain weight in determining the type of the e-mail. The weights of the features associated with the e-mail sent to the customer are shown below:

Table 1. The salient features of the e-mail used in hypotheses testing

i   Feature of the e-mail                                              ci1                     ci0
1   The e-mail address in the FROM: field belongs to a member          10                      -10
    of the account team
2   The e-mail address in the TO: field belongs to the customer        20                      -20
3   Subject contains the key words "Annual", "rule", "revalidation"    10 (for each keyword)   -10 (for each keyword)
4   The body contains the key words "eBH security policy",             10 (for each keyword)   -10 (for each keyword)
    "rulesets", "validate"
5   The body has an attachment                                         10                      -50
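The rating of equation (10) over the Table 1 features can be sketched as follows, treating each keyword as its own binary feature (an illustrative simplification; the weight layout below is my own flattening of the table):

```python
# Hypothetical flattening of Table 1: one entry per binary feature,
# keywords expanded, as (weight_if_present, weight_if_absent).
WEIGHTS = [
    (10, -10),                          # FROM: is a member of the account team
    (20, -20),                          # TO: belongs to the customer
    (10, -10), (10, -10), (10, -10),    # subject keywords
    (10, -10), (10, -10), (10, -10),    # body keywords
    (10, -50),                          # body has an attachment
]

def rate_email(features, weights=WEIGHTS):
    """T = sum_i c_i1 * F_i + c_i0 * (1 - F_i), equation (10)."""
    return sum(c1 if f else c0 for f, (c1, c0) in zip(features, weights))

if __name__ == "__main__":
    print(rate_email([1] * 9))   # all features present
    print(rate_email([0] * 9))   # no features present
```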

In this example, the value of T is distributed between 90 and -130. If an e-mail contains all the features listed in Table 1, it is rated as 90; if none of the features are observed, it is rated as -130. The rate variable T can be assumed to have a beta distribution Beta(α, β) without loss of generality, because Beta(α, β) is a flexible family of distributions and a wide range of density shapes can be derived by changing its parameters [13]. Hence, the distribution of T is assumed to be Beta(α0, β0) under the null hypothesis H0 and Beta(α1, β1) under the alternative hypothesis H1, where the (α, β) values depend on the weights ci of the selected features. Achieving the desired auditing tool performance therefore depends on selecting the feature weights appropriately. The audit tool calculates the value of T for every e-mail and classifies each one under the null hypothesis H0 or the alternative hypothesis H1. The decision regions for the tool to classify an e-mail as one sent by the account team to the customer are defined by the following probabilities:

Pr(reject H0) = Pr(T ≤ T0) and Pr(accept H0) = Pr(T > T0)   (11)

where T0 is the decision threshold. The probability of accepting or rejecting the null hypothesis can be found by using the cumulative density function of T:

Pr(T ≤ T0 | H0 is valid) = IT0(α0, β0) = Beta(T0; α0, β0) / Beta(α0, β0)
  = [ ∫ Tmin..T0 t^(α0−1) (1 − t)^(β0−1) dt ] / [ ∫ Tmin..Tmax t^(α0−1) (1 − t)^(β0−1) dt ]   (12)

where the cumulative density function IT0(α, β) is called the regularized incomplete beta function; clearly, for T0 = Tmax, IT0(α, β) = 1. In Fig. 4, the probabilities that T is less than T* under the null and the alternative hypothesis are shown as IT*(α0, β0) and IT*(α1, β1), respectively.

Fig. 4. The regularized incomplete beta function under null and alternative hypothesis

The sensitivity and the specificity of the tool can now be expressed in terms of the cumulative density function for the selected threshold T* as

η = Pr(reject H0 | H1 is valid) = 1 − IT*(α1, β1)   (13)

θ = Pr(accept H0 | H0 is valid) = IT*(α0, β0)   (14)


Hence, the equation for ψ to achieve the targeted level of risk reduction is found in terms of α0, α1, β0, β1 and T* as:

ψ = (1 − θ)/η = (1 − IT*(α0, β0)) / (1 − IT*(α1, β1))   (15)
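Under the beta assumption, η, θ and ψ in (13)–(15) can be evaluated numerically. The sketch below uses a simple midpoint-rule approximation of the regularized incomplete beta function and assumes T has been rescaled to [0, 1]; all parameter values are hypothetical:

```python
def reg_inc_beta(x, a, b, steps=2000):
    """Midpoint-rule approximation of the regularized incomplete beta
    function I_x(a, b); adequate for a sketch, not for production use."""
    def f(t):
        return t ** (a - 1.0) * (1.0 - t) ** (b - 1.0)
    h = 1.0 / steps
    total = sum(f((k + 0.5) * h) for k in range(steps)) * h
    partial = sum(f((k + 0.5) * h) for k in range(int(round(x * steps)))) * h
    return partial / total

def tool_performance(t_star, a0, b0, a1, b1):
    """Return (eta, theta, psi) from equations (13)-(15) at threshold T*."""
    eta = 1.0 - reg_inc_beta(t_star, a1, b1)      # sensitivity, eq. (13)
    theta = reg_inc_beta(t_star, a0, b0)          # specificity, eq. (14)
    return eta, theta, (1.0 - theta) / eta        # psi, eq. (15)
```

With, say, Beta(2, 5) under H0 and Beta(5, 2) under H1 and T* = 0.5, the two densities are well separated and ψ comes out well below 1.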

Using equation (10) and the fact that the distribution of T is beta, the first and the second moments of T are expressed as:

T̄ = E(T) = α/(α + β) = Σi (ci0 pi0 + ci1 pi1)   (16)

S² = Var(T) = αβ / ((α + β)²(α + β + 1)) = Σi (ci0² pi0 + ci1² pi1) − T̄²   (17)

where pi0 denotes the probability that the i-th feature Fi does not exist and pi1 the probability that Fi exists in the sample. Hence, the parameters of the beta distribution are estimated by using the mean T̄ and the variance S² of T as follows:

α = T̄ ( T̄(1 − T̄)/S² − 1 ),   β = (1 − T̄)( T̄(1 − T̄)/S² − 1 )   (18)
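Equation (18) is the standard method-of-moments fit for a beta distribution. A sketch, assuming T̄ and S² have already been normalized to the unit interval (the function name is mine):

```python
def beta_from_moments(mean, variance):
    """Method-of-moments estimates of equation (18):
    alpha = mean * k, beta = (1 - mean) * k, with k = mean(1 - mean)/variance - 1.
    Requires 0 < mean < 1 and variance < mean * (1 - mean)."""
    k = mean * (1.0 - mean) / variance - 1.0
    assert k > 0.0, "variance too large for a beta fit at this mean"
    return mean * k, (1.0 - mean) * k

if __name__ == "__main__":
    # Round-trip check with the true moments of Beta(2, 5).
    mean = 2.0 / 7.0
    variance = (2.0 * 5.0) / (7.0 ** 2 * 8.0)
    print(beta_from_moments(mean, variance))  # approximately (2.0, 5.0)
```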

Equation (18) shows that α and β can be expressed as functions of the feature coefficients by using (16) and (17). Hence:

α = fα(C0, C1),   β = fβ(C0, C1)   (19)

where C0 = [c00 c01 ... c0N], C1 = [c10 c11 ... c1N], and c0i, c1i are the feature weights. Similarly, the desired level of risk reduction can also be expressed as a function of the feature coefficients from (15) and (19):

ψ = fψ(C0, C1, T*)   (20)

Note that the distribution of the features Fi can be estimated by using the e-mail samples labeled for the null hypothesis H0 and the alternative hypothesis H1. As an example, if there are 100 e-mails labeled AT2CUST ("From account team to customer") and 95 of these e-mails have attachments, then p5,1 = 0.95. Similarly, if only 20 out of 100 e-mails labeled otherwise have attachments, then p5,0 = 0.2. If the statistics of T under the null hypothesis H0 are used, equation (18) yields (α0, β0); otherwise, it yields (α1, β1). Once the performance indicator ψ of the tool is expressed in terms of the feature weights, designing the optimal auditing tool is reduced to solving the following nonlinear optimization problem:

min over C0, C1 ∈ ℜ of ψ = fψ(C0, C1, T*)   (21)

subject to

α0 = fα0(C0, C1) > 0   (22)

α1 = fα1(C0, C1) > 0   (23)

β0 = fβ0(C0, C1) > 0   (24)

β1 = fβ1(C0, C1) > 0   (25)

IT*(α1, β1) < TFN   (26)

1 − IT*(α0, β0) < TFP   (27)

where TFN is the threshold for false negatives and TFP is the threshold for false positives. If the minimum value of ψ found as a solution of (21) is greater than the value that satisfies the desired level of risk exposure reduction Φ found in (4), then there exists no solution. This means that an automated auditing tool with the targeted performance cannot be built. Otherwise, the solution of (21) yields a set of coefficients ci that achieve the targeted risk exposure reduction when applied in rating each e-mail as described in equation (10). In addition, a decision threshold for classifying the e-mail is found as a solution of (26) and (27).
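As a simplified illustration of the design problem, the sketch below fixes the beta parameters and searches only over the decision threshold T*, keeping candidates that satisfy the error-rate constraints (26)–(27) and returning the one with the smallest ψ. The full problem in (21) additionally optimizes the feature weights, which is omitted here; all names and parameter values are mine:

```python
def reg_inc_beta(x, a, b, steps=2000):
    """Midpoint-rule approximation of the regularized incomplete beta I_x(a, b)."""
    def f(t):
        return t ** (a - 1.0) * (1.0 - t) ** (b - 1.0)
    h = 1.0 / steps
    total = sum(f((k + 0.5) * h) for k in range(steps)) * h
    partial = sum(f((k + 0.5) * h) for k in range(int(round(x * steps)))) * h
    return partial / total

def design_threshold(a0, b0, a1, b1, t_fn, t_fp, grid=49):
    """Sweep T* over a grid; among thresholds satisfying the false-negative
    constraint (26) and the false-positive constraint (27), return the
    (threshold, psi) pair with the smallest psi, or None if infeasible."""
    best = None
    for k in range(1, grid + 1):
        t_star = k / (grid + 1)
        fn_rate = reg_inc_beta(t_star, a1, b1)        # I_T*(a1, b1), eq. (26)
        fp_rate = 1.0 - reg_inc_beta(t_star, a0, b0)  # 1 - I_T*(a0, b0), eq. (27)
        if fn_rate < t_fn and fp_rate < t_fp:
            psi_value = fp_rate / (1.0 - fn_rate)     # (1 - theta) / eta
            if best is None or psi_value < best[1]:
                best = (t_star, psi_value)
    return best
```

A return value of None corresponds to the infeasible case discussed above: no threshold meets both error-rate constraints, so a tool with the targeted performance cannot be built for these distributions.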

6 Concluding Remarks

The mathematical foundation described in this paper can be used to determine the effectiveness of an auditing tool in detecting the compliance status of internal controls. The effectiveness depends on the prevalence of non-compliance and on the budget constraints that limit the use of expert auditors. Hence, before the effectiveness is calculated, a statistically significant number of process instances should be labeled to estimate the prevalence of non-compliance. The effectiveness of the tool is measured by the increase in the number of non-compliant instances detected by the experts after the tool eliminates the instances that are compliant. The auditing tool thus helps the experts detect more non-compliant instances with less effort, which improves compliance without increasing the budget for expert auditing. In addition to measuring the effectiveness of automated auditing tools, it is also important to understand the limitations of such a tool and the factors to be considered in designing a tool with the desired performance. The design parameters depend on the features of the control point, and the extraction of these features is often nondeterministic. When there is no certainty about the observed features of business artifacts, a decision about compliance is made by using statistical hypothesis testing. Hence, the design of the automated audit tool requires selecting the right features around the key control points and estimating their statistical properties. In this paper, the problem of finding the optimum weights for the features of the business artifacts that are related to the internal controls is reduced to a non-linear optimization problem. This work focuses on selecting the optimal weights of the features; the features themselves are assumed to be known. The problem of optimally selecting the features related to the internal controls is left as future work.

References

1. Greengard, S.: Compliance Software's Bonus Benefits. Business Finance Magazine (February 2004)
2. Gartner: Simplifying Compliance: Best Practices and Technology, French Caldwell, Business Process Management Summit (2005)
3. Enterprise Risk Management – Integrated Framework, Committee of Sponsoring Organizations of the Treadway Commission (COSO), Jersey City, NJ (2004)
4. COSO: Guidance on Monitoring Internal Control Systems. American Institute of Certified Public Accountants (2009)
5. COSO – Committee of Sponsoring Organizations of the Treadway Commission, http://www.coso.org
6. AMR Research: The Governance, Risk Management, and Compliance Spending Report (2010)
7. Doganata, Y., Curbera, F.: Effect of Using Automated Auditing Tools on Detecting Compliance Failures in Unmanaged Processes. In: Dayal, U., Eder, J., Koehler, J., Reijers, H.A. (eds.) BPM 2009. LNCS, vol. 5701, pp. 310–326. Springer, Heidelberg (2009)
8. Doganata, Y., Curbera, F.: A Method of Calculating the Cost of Reducing the Risk Exposure of Non-compliant Process Instances. In: Proceedings of the 1st ACM Workshop on Information Security Governance, WISG 2009, pp. 7–9 (2009)
9. Joseph, L., Gyorkos, T.W., Coupal, L.: Bayesian Estimation of Disease Prevalence and the Parameters of Diagnostic Tests in the Absence of a Gold Standard. American Journal of Epidemiology 141, 263–271 (1995)
10. Enøe, C., Georgiadis, M.B., Wesley, O.J.: Estimation of Sensitivity and Specificity of Diagnostic Tests and Disease Prevalence When the True Disease State is Unknown. Preventive Veterinary Medicine 45, 61–81 (2000)
11. Hagerty, J., Hackbush, J., Gaughan, D., Jacobson, S.: The Governance, Risk Management, and Compliance Spending Report, 2008–2009. AMR Research Report (March 25, 2008)
12. Corfield, B.: Managing the Cost of Compliance, http://justin-taylor.net/webdocs/tip_of_the_iceberg.pdf
13. Katsis, A.: Sample Size Determination of Binomial Data with the Presence of Misclassification. Metrika 63, 323–329 (2005)
14. Mukhi, N.K.: Approaches towards Dealing with Complex Systems Configuration. In: Meersman, R., Dillon, T., Herrero, P. (eds.) OTM 2010. LNCS, vol. 6428, pp. 35–37. Springer, Heidelberg (2010)