Towards Understanding the Importance of Variables in Dependable Software

Matthew Leeke
Department of Computer Science, University of Warwick
Coventry, UK, CV4 7AL
[email protected]

Abstract—A dependable software system contains two important components, namely, error detection mechanisms and error recovery mechanisms. An error detection mechanism attempts to detect the existence of an erroneous software state. If an erroneous state is detected, an error recovery mechanism will attempt to restore a correct state, so that errors are not allowed to propagate throughout the software system, i.e., errors are contained. The design of these software artefacts is known to be very difficult. To detect and correct an erroneous state, the values held by certain important variables must be ensured to be suitable. In this paper we develop an approach to capturing the importance of variables in dependable software systems. We introduce a novel metric, called importance, which captures the impact a given variable has on the dependability of a software system. The importance metric enables the identification of critical variables whose values must be ensured to be correct.

Keywords: Dependability; Fault Injection; Spatial Impact; Temporal Impact; Variable Importance; Importance Metric

I. INTRODUCTION

The pervasive nature of current computer systems has led to an increase in our reliance upon such systems to provide correct and timely services. As functionality is increasingly defined by software, this software needs to be dependable [20]. To design for dependability, software must contain two important artefacts, namely, (i) error detection mechanisms (EDMs) and (ii) error recovery mechanisms (ERMs) [5]. Examples of EDMs include run-time checks [22] and error detection codes, whilst examples of ERMs include exception handlers and retry [37].

When a dependable software system is executing, an EDM in the software attempts to detect whether the system state at a given time during the execution can threaten the proper functioning of the software system. Such a state is generally referred to as an erroneous state, i.e., EDMs attempt to detect whether a state is erroneous. If a software state is erroneous, then an EDM can be said to have detected an error [20]. When an EDM detects an error, an ERM may attempt to restore a suitable state, i.e., recover from the erroneous state, so that the error is contained within a certain boundary and does not propagate throughout the whole software system. A failure to contain the propagation of errors within a software system is known to make recovery more difficult [3].
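As an illustration of these two artefacts (not taken from the paper; all names and values are hypothetical), an EDM can be realised as a run-time predicate check over program state, and an ERM as a recovery action that overwrites corrupted variables with checkpointed values:

```python
def edm_altitude_ok(state):
    """EDM: a run-time predicate check that detects an erroneous state."""
    return 0.0 <= state["altitude"] <= 50000.0

def erm_restore(state, checkpoint):
    """ERM: recover by overwriting corrupted variables with known-good values."""
    state.update(checkpoint)
    return state

state = {"altitude": 1200.0, "speed": 140.0}
checkpoint = dict(state)        # last known-good state
state["altitude"] = -7.5e12     # a transient fault corrupts a variable

if not edm_altitude_ok(state):  # detection...
    state = erm_restore(state, checkpoint)  # ...then recovery, containing the error

print(state["altitude"])  # -> 1200.0
```

The predicate implemented by the EDM and the location at which it is checked are exactly the design decisions the paper's importance metric is intended to inform.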

Arshad Jhumka Department of Computer Science University of Warwick Coventry, UK, CV4 7AL [email protected]

To contain errors, EDMs and ERMs must both be effective. The effectiveness of these artefacts is usually evaluated using measures such as coverage and latency [25]. The effectiveness of an EDM has been shown to depend on two important factors, namely, (i) its location in the software and (ii) the predicate it implements [15]. The location of an EDM is the program statement it is protecting, i.e., the EDM ensures that the software state in which the statement is executed will not result in a software failure. For some program statements this predicate, which is a boolean expression defined over a set of program variables, is non-trivial [4]. The properties required for this non-triviality have been shown to be accuracy and completeness [15]. On the other hand, an ERM seeks to restore a suitable safe state for a software system. This implies that some corrupted variables need to be overwritten with appropriate values.

The described observations regarding EDMs and ERMs imply that, in order to maintain correct and timely software execution, some variables, viz. the critical ones, need to hold suitable values that will make the software state safe for further execution. If these critical variables are known, it is easier to develop predicates and to determine appropriate locations for EDMs and ERMs. Thus, knowledge of critical variables eases the design and placement of EDMs and ERMs. Specifically, EDMs will monitor the identified critical variables, while ERMs will ensure that these critical variables hold appropriate values.

Little research on error propagation or containment has addressed the described problems using a variable-centric approach. In [10] Hiller et al. addressed the error propagation problem by identifying vulnerable modules, which could subsequently be wrapped to enhance dependability.
Also, several works have focused specifically upon determining appropriate locations for EDMs (e.g., [10], [16]), whilst other works have focused upon the efficiency of EDMs and ERMs for error containment (e.g., [8], [31], [33]). However, little work has focused specifically upon the design of predicates for EDMs and ERMs, aside from [4] and [15], where finite state programs were considered in the automated design of EDMs. The predicate for an EDM captures a relationship among program variables that defines a set of safe states for the statement the EDM is protecting to execute

in. In general, some specific predicates can be obtained from a system specification [11], [33]. However, several other classes of effective predicates are not easily designed [22].

In this paper we develop a variable-centric approach to understand and capture the importance of variables in an effort to contain error propagation. As determining the set of critical variables for the design of efficient EDMs and ERMs generally requires in-depth program knowledge, we take two factors into consideration when defining the importance metric: (i) the spatial impact of variables and (ii) the temporal impact of variables. The spatial impact metric captures the extent to which a system is corrupted when a given variable is corrupted. In contrast, the temporal impact metric captures the duration for which a system remains corrupted when a given variable is corrupted. To minimise the likelihood of a software failure, each aspect, i.e., spatial and temporal impact, needs to be adequately handled, i.e., the number of corrupted variables, and hence modules, and the duration of the corruption must be minimised.

A. Contributions

In this paper we make the following specific contributions:

• We introduce a novel metric, known as importance, which captures how critical a given variable is to the correct functioning of a software system. Moreover, we explain how the proposed metric can be instantiated to deal with various aspects of dependability.
• We introduce two additional novel metrics, known as spatial and temporal impact, upon which the proposed importance metric is built.
• We present a fault injection approach for experimentally estimating the proposed importance metric.
• We apply the framework to a large software system to demonstrate the type of results that can be obtained.

B. Paper Structure

This paper is structured as follows: In Section II we provide a survey of related literature. We develop the adopted system and fault models in Section III.
In Section IV we introduce the spatial impact, temporal impact and importance metrics. In Section V we explain the experimental setup used to estimate the proposed metrics. In Section VI we present and discuss the results derived. In Section VII we conclude the paper with a summary of what has been achieved and a discussion of future work.

II. RELATED WORK

A key challenge in the design of dependable software systems is to limit the extent to which errors propagate through a system. EDMs and ERMs are essential software artefacts that help to achieve this goal. Several previous approaches to this problem have focused on experimentally evaluating the coverage and latency of EDMs and ERMs [2], [8], [33], often using fault injection [14]. Through these

approaches it was established that EDMs exhibiting high coverage and low latency reduced error propagation. It was subsequently recognised that the locations of EDMs play an important role in their effectiveness. This observation has led to research on locating EDMs [13], [16], [19]. In particular, the approach developed in [13] extended work in [10] and [11] to provide a framework for identifying vulnerabilities in software. This framework is based upon a basic measure, error permeability, which captures how likely errors are to propagate from a module input to a module output. In order to employ this framework, accurate data flow information is required to gain an understanding of error propagation. On occasions where it can be employed, this framework identifies permeable modules, which then become candidate locations for EDMs.

In contrast to the significant body of knowledge relating to the placement of EDMs, little is known that pertains to the design of effective EDMs. In this paper we address the problem of evaluating the importance of variables to the correct functioning of a given software system. Variables deemed to be important not only need to be protected, i.e., EDMs need to check the values held by these variables; they also, indirectly, provide candidate locations for EDMs.

In [10] fault injection experiments were used to give an estimate of otherwise difficult-to-establish vulnerability metrics. Further, it was shown in [28] that fault injection experiments can be used to determine optimal combinations of hardware-based EDMs. The metrics proposed in [19] were intended to identify modules which do not propagate errors. The approach detailed in [34] and [36] proposed to use sensitivity analysis in order to identify vulnerabilities, a technique subsequently applied in [35]. Static analysis has been shown to be an effective approach to detecting vulnerabilities in software [39].
The completeness of static code analysis makes it a particularly appealing approach to vulnerability detection. However, the possibility of false positives, and the fact that the severity of an identified vulnerability cannot be accurately estimated or ranked using static techniques, make the approach less appealing. The proposed importance metric can assess the relative severity of vulnerabilities, allowing software engineers to focus their efforts more precisely and address the most severe vulnerabilities as a matter of urgency. This is in contrast with the static analysis employed in [23], which classified the quality of software modules based upon the number, rather than the severity, of identified vulnerabilities.

A variety of guidelines and heuristics have been proposed to inform the design of EDMs and ERMs. The high-level guidelines described in [9] and [11] advocate a rigorous approach to code location analysis combined with the use of an established methodology such as FMEA or FMECA. In [26] a broad set of guidelines for the design and placement of executable assertions is described. A formal specification can inform the design of EDMs and

ERMs. In [27] and [29] it was proposed that a formal specification can be used to derive programmatic tests which capture some aspects of functional correctness. Further, in [26] it was shown that predicates for executable assertions could be derived directly from a formal specification.

Other approaches to error containment have been proposed. For example, the influence measure proposed in [30], and evaluated in [17], focuses upon quantifying error propagation between interacting modules, such that modules with a high influence factor can be aggregated and located on distinct processors, thus confining errors to these processors. In contrast, the work proposed in this paper focuses upon identifying variables that have a high influence (i.e., importance) on system dependability, such that errors can be confined to their respective components.

The nature of some errors and their associated EDMs dramatically simplifies the problems of design and placement. For example, EDMs for control flow error detection may not require explicit predicate derivation and can have their placement determined by program structure [1], [24], [38]. However, as a study in [32] showed that around 33% of transient faults result in a control flow error, the general coverage of these approaches is questionable.

The problem of error propagation was studied at the operating system level in [18], where the authors profiled the propagation of errors due to failures of device drivers. In this work the authors proposed a measure called driver error diffusion, which is analogous to the spatial impact metric proposed in this paper. However, the definition of spatial impact relates to variables, whereas driver error diffusion relates to device drivers, which can be considered software modules.

III. MODELS

In this section we present the system model and fault model assumed in this paper.

A. System Model

In this paper we consider software systems to be grey box, i.e., access to the source code is allowed, but knowledge of functionality and structure is not available, i.e., white box access with black box knowledge. We assume the software S to be a tuple consisting of a set of components, C1 ... Cn, and a set of connections. A component Ck consists of an import interface Ik, an export interface Ek, a set of parameters Pk, and a body Bk. Ik represents the set of resources Ck needs in order to provide the set of resources specified in Ek. Here, the resources are, in general, functions, and an interface is a set of resources. Pk contains disjoint sets of variables Vk and constants ck. The body Bk provides the implementation of Ek, using Ik. Two components Ck and Cl are connected if Ck exports a function f to Cl, i.e., a function f required by Cl is provided

[Figure 1. System model diagram: components C1-C4 linked by functions a-h, with ENV supplying input a and receiving output h.]

by Ck defines a connection between Ck and Cl. Thus, a software system S = (CMP, CON), where CMP = {C1 ... Cn} and CON = {(Ck^f, Cl^f)}, where Ck exports a function f to Cl. When Ck exports a function f to Cl, we will say that Ck sends a message f to Cl. We assume a special component called ENV (read: environment) that exports inputs to the software and imports outputs from the software. Henceforth the terms component and module will be used interchangeably.

Our grey box model implies that, although Bk is available for all k, knowledge and understanding of Bk is not necessary. Further, in the context of this paper the set of connections CON need not be available. This model of software, together with our grey box assumption, fits well with software engineering practices. For example, it may be the case that one team will develop a software system, whilst another team will be responsible for imparting dependability and may not have detailed knowledge of (Bk, CON).

Figure 1 shows an example of the assumed system model. Relating this example to the definitions given, it follows that if a software system S = (CMP, CON), then CMP = {C1, C2, C3, C4} and CON = {(C1^b, C2^b), (C1^c, C3^c), (C1^d, C3^d), (C2^e, C3^e), (C2^f, C4^f), (C3^g, C4^g), (ENV^a, C1^a), (C4^h, ENV^h)}.

B. Fault Model

A transient value fault model is assumed in this paper. The transient fault model is commonly used to model hardware faults as bit flips which cause instantaneous changes to the values held in areas of memory. We assume that any variable or returned value can be affected by a transient fault, i.e., can have its value temporarily corrupted.

IV. THE IMPORTANCE METRIC

In this section we introduce the importance metric, which is based on two related metrics, namely, the spatial impact metric and the temporal impact metric.

A. Spatial Impact

Given a software system whose functionality is logically distributed over a set of distinct components, the spatial impact of a variable v of component C in a run r, denoted as σ^r_{v,C}, is defined as the number of components that get corrupted in r. We then define the spatial impact of a variable v of component C, denoted as σ_{v,C}, as follows:

    σ_{v,C} = max_r {σ^r_{v,C}}        (1)

Thus, σ_{v,C} captures the diameter of the affected area whenever a variable v in a component C is corrupted. The higher σ_{v,C} is, the more difficult it is to recover from the corruption. Observe that it is possible to have different definitions for spatial impact, e.g., by accounting for the average number of corrupted components. However, in this paper we have focused upon the worst-case situation by using the maximum extent of corruption.
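Equation 1 reduces to a maximum over injected runs; a minimal sketch, assuming each run log records the set of components observed to be corrupted (component names are hypothetical):

```python
def spatial_impact(run_logs):
    """sigma_{v,C}: worst-case count of corrupted components over all runs
    in which variable v of component C was the injection target."""
    return max(len(corrupted) for corrupted in run_logs)

# three hypothetical injected runs targeting the same variable
runs = [{"C1"}, {"C1", "C2", "C3"}, {"C1", "C4"}]
print(spatial_impact(runs))  # -> 3
```

An average over runs, as mentioned above, would simply replace the maximum with a mean; the temporal impact metric defined next is estimated analogously, with corruption duration in place of corruption extent.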

B. Temporal Impact

Given a software system whose functionality is logically distributed over a set of distinct components, the temporal impact of a variable v of component C in a run r, denoted as τ^r_{v,C}, is defined as the number of time units over which at least one component of the software remains corrupted in r. We then define the temporal impact of a variable v of component C, denoted as τ_{v,C}, as follows:

    τ_{v,C} = max_r {τ^r_{v,C}}        (2)

Thus, τ_{v,C} captures the amount of time the program state remains affected whenever a variable v in a component C is corrupted. The higher τ_{v,C} is, the higher the probability of a system failure. As for spatial impact, it is possible to have different definitions for temporal impact, e.g., by accounting for the average duration of system corruption. However, in this paper we have focused upon the worst-case situation by using the maximum duration of corruption.

C. Importance

When calculating the importance of a variable in a given component, several considerations must be taken into account, though most crucially:

1) A variable v in a component C may have a high spatial impact, which means that, by the time the error disappears, several other components have been corrupted. This can limit the effectiveness of recovery and lead to a system failure.
2) A variable v in a component C may have a high temporal impact but a low spatial impact. This means that, even though very few components are affected, the recovery is not effective, and thus a system failure can occur.

This means that both the spatial impact and the temporal impact of a variable will affect the dependability of a system. Further, it follows from these observations that each factor, i.e., spatial impact and temporal impact, may affect dependability to a different degree. Thus, the impact of each factor needs to be taken into consideration when developing a definition for the importance metric.

Consequently, a general form for a metric which accounts for the described factors in expressing the importance of a variable v in a component C, using arbitrary functions G, K and L, can be taken to be:

    I_{v,C} = G[K(σ_{v,C}), L(τ_{v,C})]        (3)

In the next sections we instantiate Equation 3 and develop an approach to evaluate the proposed importance metric. We subsequently analyse the type of results that can be obtained using the proposed framework and discuss the implications of this work for the development of dependable software.

V. EXPERIMENTAL SETUP

In this section we introduce the target system used in our experimentation. We then explain its instrumentation for fault injection and describe the test cases used in the associated experiments. Finally, we provide a high-level definition of failure in the context of the target system.

A. Target System

The FlightGear Flight Simulator project is an open-source collaboration which aims to develop an extensible yet highly sophisticated flight simulator to serve the needs of the academic and hobbyist communities [7]. The target system was chosen because it is modular, contains over 220,000 lines of C/C++ and simulates a situation where dependability is critical. The software is designed so that it can be built and run on multiple platforms, including variants of Linux, Windows, and Mac OS X. All source code and resources are available under the GNU General Public License.

An aircraft takeoff procedure was executed in all test cases, the precise details of which are described in the next section. An input vector was provided at each iteration of the main simulation loop by a control module that was designed and implemented specifically for this purpose.

B. Test Cases

In each test case the target system executed the aforementioned takeoff procedure for 2700 iterations of the main simulation loop, where the first 500 iterations correspond to an initialisation period and the remaining 2200 iterations correspond to pre-injection and post-injection periods. To create a varied and representative system load, each instrumented variable was subjected to 9 distinct test cases: 3 aircraft masses and 3 wind speeds uniformly distributed across 1300-2100 lbs and 0-60 kph respectively.

C. System Instrumentation

Instrumented modules were chosen at random from the 312 modules used in the execution of all test cases. All functions within the randomly selected modules were instrumented.
The number of function-scope and module-scope variables instrumented for each module accounted for no less than 90% of the total number of variables associated with that

Table I. Global Variable Instrumentation

    Representation (bits)   Variables   Injection Locations
    8                       1           1
    32                      3           5
    64                      2           4

Table II. Module A Instrumentation

    Representation (bits)   Variables   Injection Locations
    8                       6           10
    32                      9           14
    64                      12          27

module. The number of system-scope variables instrumented in the experiments associated with each module accounted for no less than 90% of the total number of system-scope variables. Variable instrumentation locations were selected randomly. It was ensured that no less than 80% of all possible code locations were instrumented for each instrumented variable. Variables and code locations that were not instrumented were associated with test procedures which would never be executed under normal circumstances, i.e., only executed in a controlled environment. The details of the system instrumentation for each module are shown in Tables I-VI. Table VII summarises the number of fault injection experiments performed for each module.

D. Characterising Failure

Failure criteria for test cases were established using a combination of golden run observation and relevant aviation information [6]. A failure in the execution of a test case was considered to fall into at least one of three categories: speed failure, distance failure and angle failure. A run was considered a failure with respect to speed if the aircraft failed to reach its safe takeoff speed (V2) after first passing through critical speed (V1) and velocity of rotation (VR). A run was considered a failure with respect to distance if the takeoff distance exceeded that specified by the aircraft manufacturer, where the specified distance is increased by 10 meters for every additional 200 lbs over the aircraft base weight. A run was considered a failure with respect to angle if a pitch rate of 4.5 degrees was exceeded before the aircraft was clear of the runway or the aircraft stalled during climb out. All golden runs created during experimentation were ascertained not to have failed by the stated failure criteria.

E. Fault Injection and Logging

A modified form of the Propagation Analysis Environment (PROPANE) was used for both fault injection and logging [12].
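A transient single bit flip in a 64-bit variable, the fault type injected in these experiments, can be sketched as follows (a simplified stand-in for illustration; this is not PROPANE's actual interface):

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Inject a transient fault: flip one bit of an IEEE-754 double."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", value))  # reinterpret as u64
    bits ^= 1 << bit                                         # flip the chosen bit
    (corrupted,) = struct.unpack("<d", struct.pack("<Q", bits))
    return corrupted

print(flip_bit(1.0, 51))  # flipping the top mantissa bit of 1.0 -> 1.5
```

Flipping the same bit twice restores the original value, which reflects the transient nature of the modelled fault: the corruption is an instantaneous change to stored state, not a permanent defect.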
A golden run was created for each test case, during which the state of all modules was logged. The instrumentation details for all instrumented variables, including bit

Table III. Module B Instrumentation

    Representation (bits)   Variables   Injection Locations
    8                       2           5
    32                      1           1
    64                      14          34

Table IV. Module C Instrumentation

    Representation (bits)   Variables   Injection Locations
    8                       15          24
    32                      21          41
    64                      6           15

representation information, can be seen in Table VII. Bit flips were injected at each bit position for all instrumented variables. Each injected run entailed a single bit flip in a variable at one of these positions, i.e., no multiple injections were performed. Each single bit-flip experiment was performed at 3 distinct injection times uniformly distributed across the 2200 simulation loop iterations that follow system initialisation: 600, 1200 and 1800 simulation loop iterations after the initialisation period of 500 iterations. The data logged during injected runs was compared with the corresponding golden run, with any deviation being considered erroneous.

VI. RESULTS

In this section we demonstrate the type of results that can be generated using the proposed metrics. After instantiating Equation 3 we demonstrate how estimates for both the spatial and temporal impact metrics can be experimentally derived. We then explain how the instantiated importance metric can meet specific sets of system requirements, offering insights into the use of the importance metric in situations where factors such as error detection and error recovery are primary concerns, e.g., in fail-safe fault tolerance and non-masking fault tolerance design respectively. Finally, we provide importance values for all instrumented variables and discuss how this relative ranking can be utilised by dependability engineers in the design and placement of EDMs and ERMs.

Numerous instantiations of Equation 3 could be conceived, particularly when its components are to be estimated using fault injection. For example, a definition could easily incorporate the definition of erroneous state applied in [10], which views erroneous state in terms of deviation from expected values. However, in this paper we are concerned with detecting erroneous states that lead to failure, which makes failure rate an important issue. We therefore use a definition that can rank the importance of variables in situations where avoiding system failures or minimising the extent of state corruption are of primary concern. Specifically, we define the importance of a variable v in a component C as:

    I_{v,C} = (1/(1-f))^n × (σ_{v,C}/σ_max + τ_{v,C}/τ_max)^m        (4)

Table V. Module D Instrumentation

    Representation (bits)   Variables   Injection Locations
    8                       0           0
    32                      5           7
    64                      16          28

Table VI. Module E Instrumentation

    Representation (bits)   Variables   Injection Locations
    8                       12          18
    32                      2           7
    64                      11          21

Table VII. Summary of Fault Injection Experiments

    Module ID   8-bit Injections   32-bit Injections   64-bit Injections   Total Injections
    Globals     216                12960               13824               27000
    A           12960              108864              559872              681696
    B           2160               864                 822528              825552
    C           77760              743904              155520              977184
    D           0                  30240               774144              804384
    E           46656              12096               399168              457920
    Total       139752             908928              2725056             3773736

A. Calculating Importance

Equation 4 accounts for factors which influence the importance of a variable but are not directly captured by the spatial and temporal impact metrics. Specifically, the instantiation ensures that the rate of failures, f, associated with variable v in component C is accounted for in the definition of importance, thus allowing the definitions of σ_{v,C} and τ_{v,C} to remain independent of failure rate. Normalisation of the spatial and temporal impact factors is performed in order to ensure that the addition of these quantities does not mask or enhance the significance of either impact metric. This normalisation is achieved by expressing each impact factor as a proportion of the maximum possible extent of module corruption, σ_max, and the maximum duration of module corruption, τ_max. Further, where system-specific knowledge is available, σ_max and τ_max can be deliberately configured to enhance the significance of either the spatial or temporal impact metric.

The definition of importance given in Equation 4 can be applied in situations where emphasis is to be placed upon either the need to detect errors or the need to recover from them. This adaptability is achieved by varying the values assigned to n and m, which dictate whether emphasis is placed upon the need to avoid failures or the need to prevent widespread system corruption in the spatial and temporal domains. Whilst analysis concerning the sensitivity of Equation 4 to changes in n and m has been omitted for brevity, several general properties of this instantiation can be identified. In general, if n > m the importance metric will be oriented towards identifying variables that are implicated in system failures. Conversely, if n < m the importance metric will be focused upon identifying variables which are implicated in the most severe instances of system corruption. Further, by setting n = 0 and m > 0 the importance metric can be made to focus solely upon system corruption in the spatial and temporal domains, whilst setting m = 0 and n > 0 focuses the importance metric solely upon system failures.
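A sketch of Equation 4 makes these limiting cases easy to check. The normalisation constants used below (σ_max = 312, matching the module count of the experimental setup, and τ_max = 2000 iterations) are assumptions inferred from the reported data, not values the text states explicitly:

```python
def importance(f, sigma, tau, n, m, sigma_max=312, tau_max=2000):
    """Equation 4: n weights the failure-rate term, m weights the
    normalised spatial-plus-temporal corruption term.
    sigma_max and tau_max are assumed normalisation constants."""
    return (1.0 / (1.0 - f)) ** n * (sigma / sigma_max + tau / tau_max) ** m

# m = 0: the corruption term vanishes and only failure rate f matters
print(importance(0.5, 10, 100, n=1, m=0))  # -> 2.0
# n = 0: the failure term vanishes and only corruption extent/duration matter
print(importance(0.5, 10, 100, n=0, m=1))
```

With n = 2 and m = 1, and the Table VIII row for currentThrust (f = 0.010417, σ = 8, τ = 2000), this sketch appears to reproduce the corresponding Table IX value of 1.047348, which supports the assumed normalisation constants.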

In order to evaluate the importance metric for all instrumented variables, the failure rates and the values of the spatial and temporal impact metrics, σ and τ, associated with these variables must be determined. In this paper we estimate these values experimentally using fault injection.

Spatial Impact: Applying the definition given in Section IV, the spatial impact of a variable is the maximum number of components that were corrupted during any run where that variable was the target of an injection, i.e., when the variable was corrupted. The experimentally derived spatial impact of selected variables is shown in the σ column of Table VIII.

Temporal Impact: Applying the definition given in Section IV, the temporal impact of a variable corresponds to the maximum duration over which at least one component remains corrupted during a run where that variable was the target of an injection. The experimentally derived temporal impact of selected variables is shown in the τ column of Table VIII. In our analysis corruption duration is measured by the number of iterations of the main simulation loop. For example, if a variable has a temporal impact of 1, the software system remained corrupted for exactly 1 iteration of the main simulation loop. In other words, when an error is injected into such a variable in iteration n, the state corruption remains in iteration n+1, but is not present in iteration n+2.

Failure Rate: Before values for the importance metric can be calculated, the failure rate associated with the instrumented variables must be established. The failure rate of selected variables is shown in the f column of Table VIII.

Weighting: The definition of variable importance given in Equation 4 is adaptable, in the sense that it can be configured to yield a variable ranking which focuses either upon the need to avoid failures or the need to prevent widespread system corruption. In this paper we set n = 2 and m = 1; thus the analysis performed here is oriented towards identifying variables which are implicated in system failure. Whilst future work is required to determine optimal values for n and m in each context, the values chosen are appropriate for the purposes of this paper. The importance rankings presented are based upon the following configuration of the importance metric:

    I_{v,C} = (1/(1-f))^2 × (σ_{v,C}/σ_max + τ_{v,C}/τ_max)        (5)

Table VIII. Components of the Importance Metric

    Identifier              Module   f          σ_{v,C}   τ_{v,C}
    fgFlightTime            G        0.003472   4         2000
    delta_time_sec          G        0.006944   3         2000
    dump                    C        0.013889   3         1
    EmptyWeight             D        0.011905   2         2000
    InitializedEngines      C        0.001389   5         77
    IsBound                 C        0.004167   1         1
    HasInitializedEngines   C        0.003472   3         2000
    Mass                    D        0.011905   12        1432
    numEngines              C        0.002315   21        2
    numTanks                C        0.004630   1         2000
    TotalQuantityFuel       C        0.004167   1         2000
    Weight                  D        0.003472   13        2000
    bixx                    D        0.000992   2         2000
    bixy                    D        0.000992   2         2000
    bixz                    D        0.000992   2         2000
    biyy                    D        0.000772   2         2000
    biyz                    D        0.000868   2         2000
    bizz                    D        0.000868   2         2000
    currentThrust           C        0.010417   8         2000
    dt                      C        0.005208   3         983
    firsttime               C        0.001736   2         2000
    i_run_1                 C        0.000868   2         1
    i_run_2                 C        0.000992   14        1
    i_run_3                 C        0.000992   18        1
    Ixx                     D        0.002778   4         78
    Iyy                     D        0.001488   4         78
    Izz                     D        0.003472   4         78
    overage                 C        0.002778   3         4
    PM_total_weight         D        0.000771   7         1432
    shortage                C        0.001984   3         4
    Tw                      C        0.001157   17        71

Table IX. Module C Variable Importance Ranking

    Identifier              Importance
    currentThrust           1.047348
    HasInitializedEngines   1.016663
    numTanks                1.012560
    TotalQuantityFuel       1.011618
    firsttime               1.009914
    dt                      0.506376
    Tw                      0.090196
    numEngines              0.068625
    i_run_3                 0.058308
    InitializedEngines      0.054677
    i_run_2                 0.045462
    overage                 0.011680
    shortage                0.011662
    dump                    0.010402
    i_run_1                 0.006922
    IsBound                 0.003736

    Remaining Variables
    Weight                  1.048938
    EmptyWeight             1.030808
    bixx                    1.008410
    bixy                    1.008410
    bixz                    1.008410
    bizz                    1.008160
    biyz                    1.008160
    biyy                    1.007966
    Mass                    0.772751
    PM_total_weight         0.739576
    Izz                     0.052182
    Ixx                     0.052110
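The Module C ranking in Table IX can be reproduced from the Table VIII components under the same assumed constants (σ_max = 312 and τ_max = 2000, inferred rather than stated, with the paper's weighting n = 2, m = 1); a sketch for three of the variables:

```python
def importance(f, sigma, tau, sigma_max=312, tau_max=2000):
    # Equation 5: Equation 4 with n = 2 and m = 1, as configured in this paper
    return (1.0 / (1.0 - f)) ** 2 * (sigma / sigma_max + tau / tau_max)

# (f, sigma, tau) taken from Table VIII for three Module C variables
rows = {
    "currentThrust":         (0.010417, 8, 2000),
    "HasInitializedEngines": (0.003472, 3, 2000),
    "dt":                    (0.005208, 3, 983),
}
ranked = sorted(rows, key=lambda v: importance(*rows[v]), reverse=True)
print(ranked)  # -> ['currentThrust', 'HasInitializedEngines', 'dt']
```

The resulting order and values match the corresponding Table IX entries to the precision shown there, which is a useful sanity check when re-deriving the ranking from raw fault injection logs.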