2009 Society for Design and Process Science Printed in the United States of America

HIERARCHICAL MORPHOLOGICAL COMPOSITION OF WEB HOSTING SYSTEM

Mark Sh. Levin
Institute for Information Transmission Problems, Russian Academy of Sciences, Moscow, Russia

Stanislav Yu. Sharov
Moscow Institute of Physics and Technology (State University), Dolgoprudny, Russia

In this paper, the hierarchical combinatorial synthesis of a Web-hosting system is described. The composition problem is examined as the analysis and selection of system component alternatives (e.g., hardware, software, and communication alternatives). The modular design process is based on the Hierarchical Morphological Multicriteria Design (HMMD) approach: (i) design of a tree-like model of the system; (ii) generation of design alternatives for each leaf node of the system model; (iii) generation of criteria and their scales for each node of the system model; (iv) assessment of the design alternatives upon the corresponding criteria; (v) multicriteria analysis of the alternatives for system components/parts to get ordinal quality estimates for each design alternative; and (vi) composition of the selected alternatives into a resultant combination, or composite design alternative, for the higher hierarchical level of the system model (while taking into account the ordinal quality of the alternatives above and the ordinal quality of compatibility between the alternatives). The multicriteria analysis of the design alternatives and their composition is based on a hierarchical bottom-up framework. A two-criteria combinatorial problem (the morphological clique problem) is used for the composition, and Pareto-effective solutions are searched for. The example Web-hosting system consists of the following basic parts: (a) facilities (servers, telecommunication facilities), (b) Internet access, and (c) software (Web server, Web technology). The set of basic criteria involves price, reliability, scalability, stability of work, maintenance cost, etc. Numerical examples illustrate the system modeling and design processes.

Keywords: modular system, hierarchy, morphological design, composition, web-based system

1. Introduction

In recent decades, the significance of modular system design has increased in engineering and computer science/information technology (Baldwin and Clark, 2000; Buede, 2009; Huang and Kusiak, 1998; Kusiak, 1999; Levin, 2006). Thus, many methods for modular system design have been suggested, examined, and used. Some basic methods and corresponding references are presented in Table 1. Mainly, modular system design is targeted at system configuration and/or reconfiguration (Levin, 2009), and the following approaches to these problems are often used: (a) AI methods based on satisfiability problems (SAT) (McDermott, 1982; Stefik, 1995; Wielinga and Schreiber, 1997); (b) morphological approaches (Levin, 1998; Levin, 2009); and (c) combinatorial optimization based methods (Levin, 2009; Poladian et al., 2006; Yu et al., 2007).


Table 1 Methods for modular system design.

Approach: References
Structured and object-oriented design: Baresi and Pezze, 1998; Coad and Yourdan, 1991; DeMarco, 1979
Formal methods: Antonsson and Cagan, 2001; Bowen and Hinchey, 1995; Clarke and Wing, 1996
Grammatical design and algebraic approach: Brown, 1997; Mullins and Rinderle, 1991; Shaw and Garlan, 1996; Stiny, 1991
Component-based design: Crnkovic et al., 2005; Guo, 2002; Nierstrasz et al., 1992
AI based methods: Birmingham et al., 1988; Stefik, 1995
Hierarchical decision making: Bascaran et al., 1992; Harhalakis et al., 1992; Kuppuraju et al., 1985
Morphological approaches: Jones, 1981; Levin, 1998; Levin, 2009a; Zwicky, 1969
Ontology based approach: Jololian, 2005
Evolutionary multiobjective optimization: Coello, 2001; Deb, 2001

Applied Web-based information systems and services are widely used, e.g., in information searching, E-business, electronic government, educational and scientific applications, and private systems (Agrawal et al., 2001; Chen and Mohapatra, 2002; Leymann et al., 2002; Madhusudan and Uttamsingh, 2006; Orriens et al., 2004; Rabinovich and Aggarwal, 1999). As a result, there is a need for the design and maintenance of such systems (Battacharjee et al., 2001; Benatallah et al., 2003; Madhusudan and Uttamsingh, 2006). Evidently, it is reasonable to apply modular approaches to Web-based information systems and services; thus, it is often necessary to design (to compose) a system configuration (Battacharjee et al., 2001; Benatallah et al., 2003; Buchhiarone and Gnesi, 2006; Orriens et al., 2004; Wynn et al., 2004). It is worth pointing out contemporary approaches in the close domain of Web service composition, for example: (1) Petri net based composition (Tang et al., 2007; Xiong et al., 2009; Zhovtobryukh, 2007); (2) the HTN method (Sirin et al., 2004); (3) multidisciplinary design optimization (Shen and Ghenniwa, 2003); (4) logic-based methods (Rao et al., 2004); (5) ontology based approaches (Li et al., 2005); (6) a method based on dependency graphs (Gu et al., 2008); (7) an AND/OR graph based search algorithm (Liang and Su, 2005); (8) an integer programming based method (Gao et al., 2005); (9) genetic algorithms (Canfora et al., 2005); (10) requirements/constraints driven approaches (Aggarwal et al., 2004; Zhang et al., 2004); and (11) a semantic E-flow composition method (Cardoso and Sheth, 2003).

In this paper, the Hierarchical Morphological Multicriteria Design (HMMD) approach is used for the modular composition of a Web hosting system. The HMMD approach (a bottom-up hierarchical solving framework) involves two basic problems: (i) multicriteria analysis/ranking of design alternatives (DAs) for system parts/components, and (ii) composition of the selected DAs into a resultant combination (while taking into account the ordinal quality of the alternatives above and their compatibility or interconnection, IC). A special two-objective combinatorial model (the morphological clique problem) is used for system composition (i.e., Pareto-effective solutions are searched for). Generally, HMMD consists of a hierarchical (i.e., series-parallel) solving scheme and provides the following (Levin, 1998; Levin, 2006; Levin, 2009):
(1) a multistage hierarchical process: (i) the possibility to check/analyze intermediate results, (ii) the possibility to repeat parts of the solving framework after a preliminary analysis/evaluation of intermediate results, and (iii) the possibility to increase the problem dimension (important for combinatorial problems);


(2) partitioning of the solving process: (i) the possibility to use different experts/methods for different parallel parts of the solving framework, and (ii) the possibility to increase the problem dimension;
(3) taking into account the ordinal quality of system design alternatives and the ordinal quality of compatibility among them (which extends traditional morphological design approaches);
(4) the possibility to examine various system design requirements as a hierarchy of requirements (criteria) for the system and its parts, where the requirements/criteria for system parts can be examined independently; and
(5) concurrent usage of evaluation techniques and expert judgment.

The considered example of a Web-hosting system consists of three basic parts: (a) facilities (servers, telecommunication facilities), (b) Internet access, and (c) software (Web server, Web technology). Eight criteria are applied for the assessment of design alternatives (price, reliability, scalability, required skill level of maintenance staff, up-to-dateness, performance, stability of work, and maintenance cost). Illustrative numerical examples describe the system modeling and design processes.

2. Support Problems

2.1. Multiple Criteria Ranking

Let $H = \{1, \ldots, i, \ldots, t\}$ be a set of items which are evaluated upon criteria $C = \{C_1, \ldots, C_j, \ldots, C_d\}$, and let $z_{i,j}$ be an estimate (quantitative or ordinal) of item $i$ on criterion $C_j$. The matrix $\{z_{i,j}\}$ can be mapped into a partial order on $H$. The following partition of $H$ into linearly ordered subsets is searched for:

$$H = \bigcup_{k=1}^{m} H(k), \quad H(k_1) \cap H(k_2) = \emptyset \ \text{if}\ k_1 \ne k_2, \quad i_2 \preceq i_1 \ \ \forall i_1 \in H(k_1), \ \forall i_2 \in H(k_2), \ k_1 \le k_2.$$

The set $H(k)$ is called layer $k$, and each item $i \in H$ gets a priority $r_i$ equal to the number of its layer. Basic techniques for multicriteria selection/ranking include: (1) multiattribute utility analysis (Keeney and Raiffa, 1976); (2) multicriteria decision making (Korhonen et al., 1984); (3) the Analytic Hierarchy Process (AHP) (Saaty, 1988); and (4) outranking techniques (Roy, 1996).

2.2. Hierarchical Morphological Design

A brief description of Hierarchical Morphological Multicriteria Design (HMMD) is as follows (Levin, 1998; Levin, 2006; Levin, 2009). The examined composite (modular, decomposable) system consists of components and their interconnections (IC) or compatibility. The basic assumptions of HMMD are: (a) a tree-like structure of the system; (b) a composite estimate of system quality that integrates the qualities of the components (subsystems, parts) and the qualities of the IC (compatibility) across subsystems; (c) monotonic criteria for the system and its components; and (d) evaluation of the quality of system components and IC on coordinated ordinal scales. The designations are: (1) design alternatives (DAs) for the leaf nodes of the model; (2) priorities of DAs (r = 1, ..., k; 1 corresponds to the best level); and (3) ordinal compatibility estimates for each pair of DAs (w = 0, ..., l; l corresponds to the best level). The basic phases of HMMD are:
1. Design of the tree-like system model.
2. Generation of DAs for the leaf nodes of the model.
3. Generation of criteria and their scales for each node of the system model.
4. Multicriteria analysis of DAs (assessment of DAs upon the criteria, assessment of compatibility between DAs).
5. Hierarchical selection and composition of DAs into composite DAs for the corresponding higher level of the system hierarchy.


6. Analysis and improvement of composite DAs (decisions).

Let S be a system consisting of m parts (components) P(1), ..., P(i), ..., P(m). A set of design alternatives is generated for each system part above. The problem is: find a composite design alternative S = S(1) * ... * S(i) * ... * S(m) (one representative design alternative S(i) for each system component/part P(i), i = 1, ..., m) with non-zero IC estimates between the design alternatives. A discrete space of system excellence is used, based on the vector N(S) = (w(S); n(S)), where w(S) is the minimum pairwise compatibility between the DAs corresponding to different system components (i.e., over all $P_{j_1}$ and $P_{j_2}$, $1 \le j_1 \ne j_2 \le m$, in S), and n(S) = (n_1, ..., n_r, ..., n_k), where n_r is the number of DAs of the r-th quality in S (so that $\sum_{r=1}^{k} n_r = m$). As a result, we search for solutions (composite decisions) that are non-dominated with respect to N(S).

Fig. 1 Example of composition (morphological structure S = X * Y * Z with DA priorities shown in parentheses: X1(1), X2(2), X3(1); Y1(3), Y2(1), Y3(2); Z1(1), Z2(2), Z3(1); a resultant composite DA S1 is marked).

Fig. 2 Concentric presentation (the DAs of X, Y, and Z arranged on concentric levels, with pairwise compatibility estimates shown).

Thus, the following layers of system excellence can be considered: (i) the ideal point; (ii) Pareto-effective points; and (iii) a neighbourhood of Pareto-effective DAs (e.g., a composite decision from this set can be transformed into a Pareto-effective point by one or more improvement actions).


Figure 1 and Figure 2 illustrate the composition problem by a numerical example for a system consisting of three parts, S = X * Y * Z. The priorities of the DAs are shown in parentheses in Figure 1 and depicted in Figure 2; the compatibility estimates are shown in Figure 2. In this example, the resultant composite DA is S1 = X1 * Y1 * Z3 with N(S1) = (1; 1, 1, 1). Figure 3 depicts the lattice of system quality for N = (w; n1, n2, n3), w = const, l = 3.
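As an illustration, the sketch from Section 2.2 can be applied to this three-part example. The DA priorities below are those of Figure 1; the pairwise compatibility estimates of Figure 2 are not recoverable from the text, so the values used here are purely hypothetical.

```python
parts = [["X1", "X2", "X3"], ["Y1", "Y2", "Y3"], ["Z1", "Z2", "Z3"]]
priority = {"X1": 1, "X2": 2, "X3": 1,     # priorities from Fig. 1
            "Y1": 3, "Y2": 1, "Y3": 2,
            "Z1": 1, "Z2": 2, "Z3": 1}

# Hypothetical compatibility estimates (Fig. 2's actual values are not
# reproduced here): every cross-part pair gets 2, a few pairs get 3.
compat = {frozenset({a, b}): 2
          for i, part in enumerate(parts)
          for a in part
          for other in parts[i + 1:]
          for b in other}
for a, b in [("X1", "Y2"), ("Y2", "Z3"), ("X1", "Z3")]:
    compat[frozenset({a, b})] = 3

for combo, N in pareto_compose(parts, priority, compat):
    print(combo, N)
# -> ('X1', 'Y2', 'Z3') (3, (3, 0, 0)): with these made-up estimates, a
#    single composite DA dominates all others (three priority-1 DAs, w = 3).
```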



Fig. 3 Lattice of system quality (positions ordered from the worst point up to the ideal point; the position of S1 is marked).

3. Web Hosting System

3.1. Hierarchical System Model and Components

The tree-like model of the considered Web-hosting system (including DAs for the system components) is the following (Figure 4):
0. Web hosting system S = F * I * O.
1. Facilities F = A * B.
1.1. Servers A: complete end-to-end solution from Cisco A1, complete end-to-end solution from Sun A2, complete solution from an unknown brand A3, partial solution from a brand A4, and custom-built solution A5.
1.2. Telecommunication facilities B: Cisco B1, 3Com B2, Dell B3, and D-Link B4.
2. Internet access I: dial-up I1, DSL I2, FTTH (Fiber To The Home) I3, Radio Ethernet I4, and satellite Internet I5.
3. Software O = D * E.
3.1. Web server D: IIS D1, WebLogic D2, WebSphere D3, Apache D4, and JBoss D5.
3.2. Web technology E: DHTML E1, PHP E2, ASP E3, and JSP+EJB E4.
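One straightforward way to encode this tree-like model for experimentation is sketched below; this is an illustrative data structure of our choosing, not something prescribed by the paper. Internal nodes map subsystem names to their children, and leaves hold the DA lists.

```python
# Leaf components hold their DA lists; internal nodes name their subsystems.
web_hosting_model = {
    "S": {                                        # S = F * I * O
        "F": {                                    # facilities, F = A * B
            "A": ["A1", "A2", "A3", "A4", "A5"],  # servers
            "B": ["B1", "B2", "B3", "B4"],        # telecom facilities
        },
        "I": ["I1", "I2", "I3", "I4", "I5"],      # Internet access
        "O": {                                    # software, O = D * E
            "D": ["D1", "D2", "D3", "D4", "D5"],  # Web servers
            "E": ["E1", "E2", "E3", "E4"],        # Web technologies
        },
    },
}

def leaf_components(node):
    """Yield (name, DA list) for every leaf of the tree-like model."""
    for name, child in node.items():
        if isinstance(child, list):
            yield name, child
        else:
            yield from leaf_components(child)

print(dict(leaf_components(web_hosting_model["S"])))
```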


Fig. 4 Structure of Web hosting system: S = F * I * O = (A * B) * I * (D * E); facilities F = A * B with servers A (A1, ..., A5) and telecom facilities B (B1, ..., B4); Internet access I (I1, ..., I5); software O = D * E with Web server D (D1, ..., D5) and Web technology E (E1, ..., E4).

3.2. Criteria, Estimates, and Compatibility

The same set of criteria with ordinal scales is used for all system components (Table 2; the scales for criteria 1, 4, and 8 have a negative orientation, i.e., the maximal value corresponds to the worst case). The corresponding estimates for DA i are z_i = (z_{i,1}, ..., z_{i,8}). Table 3 contains ordinal estimates of the DAs upon the above-mentioned criteria (expert judgment) and the priorities of the DAs obtained as a result of multicriteria ranking (an Electre-like method). Estimates of compatibility between DAs are contained in Tables 4 and 5 (expert judgment).

Table 2 Weights of criteria.

Criteria                                       A    B    I    D    E
C1 Price                                       5    7    8    2    0
C2 Reliability                                10   13   15    8   11
C3 Scalability                                10   14    6   11    9
C4 Necessary skill level of maintenance staff  3    6    1    2   10
C5 Up-to-dateness                              5    4    2    4    5
C6 Performance level                          15    9   14    9   12
C7 Stability of work                          13   14   13   10   10
C8 Maintenance cost                            6    6    2    2   10

Table 3 Estimates of design alternatives and their priorities.

      C1   C2   C3   C4   C5   C6   C7   C8   Priority
A1    50   25   20   40   20   25   30   60      2
A2    90   60   40   40   35   40   50   30      1
A3    30   18   15   30   15   15   15   20      2
A4    35   20   15   35   18   20   18   20      2
A5    20   15   10   50   13   15   13   45      3
B1    75   90   80   50   50   35   90   20      1
B2    35   35   40   30   27   33   40   25      2
B3    40   30   25   30   25   30   30   30      3
B4    40   39   30   30   30   33   35   23      2
I1     5    5    2    5    1    2    1    2      3
I2    15   15    5    7    5   10    8    7      2
I3    35   30   25   15   15   35   25   25      1
I4    20   15   20   15   15   10    8   15      2
I5    25   25   23   15   15   20   15   20      2
D1    15   25   18   13    5   20   10    5      1
D2    25   30   29   24    5   25    9    4      1
D3    20   25   20   20    5   17    7    3      2
D4     0   15   20   15    5   22    6    2      2
D5     0   15   17   10    5   15    3    2      3
E1     4   10    1    2    1   18   15   20      3
E2     8    7    4    4    4   20   12   12      2
E3    10   12   14   11    9   15   16    8      1
E4    15   12   16   11    9   11   15    8      1
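The paper ranks the DAs with an Electre-like method whose parameters are not spelled out, so the sketch below is only a simplified concordance-style stand-in. It uses the component-I column of Table 2 as criterion weights (treating C1, C4, and C8 as negatively oriented) together with the Internet-access rows of Table 3, and peels off outranking layers to assign priorities. The 0.65 threshold is our assumption, and the resulting layers need not reproduce the priorities printed in Table 3 (although I3 does land in the top layer here, matching its priority of 1).

```python
# Simplified concordance-style ranking (a stand-in for the Electre-like
# method used in the paper; the 0.65 threshold is our assumption).
WEIGHTS_I = {"C1": 8, "C2": 15, "C3": 6, "C4": 1,   # Table 2, component I
             "C5": 2, "C6": 14, "C7": 13, "C8": 2}
NEGATIVE = {"C1", "C4", "C8"}                       # larger value = worse

ESTIMATES = {                                       # Table 3, rows I1..I5
    "I1": [5, 5, 2, 5, 1, 2, 1, 2],
    "I2": [15, 15, 5, 7, 5, 10, 8, 7],
    "I3": [35, 30, 25, 15, 15, 35, 25, 25],
    "I4": [20, 15, 20, 15, 15, 10, 8, 15],
    "I5": [25, 25, 23, 15, 15, 20, 15, 20],
}

def at_least_as_good(i, j, idx, c):
    zi, zj = ESTIMATES[i][idx], ESTIMATES[j][idx]
    return zi <= zj if c in NEGATIVE else zi >= zj

def outranks(i, j, threshold=0.65):
    """Concordance test: does the weight of the criteria on which i is at
    least as good as j reach the threshold share of the total weight?"""
    conc = sum(w for idx, (c, w) in enumerate(sorted(WEIGHTS_I.items()))
               if at_least_as_good(i, j, idx, c))
    return conc / sum(WEIGHTS_I.values()) >= threshold

def rank_layers(items):
    """Peel off layer 1 (items not strictly outranked by anything left),
    then layer 2, and so on; returns dict item -> priority."""
    remaining, prio, r = set(items), {}, 1
    while remaining:
        top = {i for i in remaining
               if not any(outranks(j, i) and not outranks(i, j)
                          for j in remaining if j != i)}
        top = top or set(remaining)      # safeguard against cycles
        for i in top:
            prio[i] = r
        remaining -= top
        r += 1
    return prio

print(rank_layers(ESTIMATES))            # I3 comes out in the top layer
```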

Table 4 Estimates of compatibility.

      B1   B2   B3   B4
A1     3    2    1    2
A2     2    2    2    2
A3     2    1    1    1
A4     2    1    1    1
A5     1    1    1    1

Table 5 Estimates of compatibility.

      E1   E2   E3   E4
D1     2    1    3    1
D2     1    1    0    3
D3     1    1    0    2
D4     2    3    0    0
D5     1    1    0    2


3.3. Composite Decisions

For system part F, we get the following Pareto-effective composite DAs: F1 = A2 * B1, N(F1) = (2; 2, 0, 0); and F2 = A1 * B1, N(F2) = (3; 1, 1, 0). For system part O, we get the following Pareto-effective composite DAs: O1 = D1 * E3, N(O1) = (3; 2, 0, 0); and O2 = D2 * E4, N(O2) = (3; 2, 0, 0). It is assumed that the priorities of the obtained composite DAs for F and O equal 1. Table 6 contains the compatibility estimates for the components of S (i.e., F, I, and O).

Table 6 Estimates of compatibility.

      I1   I2   I3   I4   I5   O1   O2
F1     2    2    1    3    3    3    3
F2     2    2    1    3    3    3    3
I1                              0    2
I2                              3    3
I3                              3    3
I4                              3    3
I5                              3    3

Figure 5 illustrates the structure of the Web hosting system with the priorities of the DAs (shown in parentheses) and some composite DAs (for F, for O, and for the resultant system S). Note that the initial set of composite decisions contained 2000 combinations (5 x 4 x 5 x 5 x 4). A set of 12 final composite DAs is obtained:
(1) S1 = F1 * I3 * O1 = (A2 * B1) * I3 * (D1 * E3), N(S1) = (1; 3, 0, 0);
(2) S2 = F1 * I3 * O2 = (A2 * B1) * I3 * (D2 * E4), N(S2) = (1; 3, 0, 0);
(3) S3 = F2 * I3 * O1 = (A1 * B1) * I3 * (D1 * E3), N(S3) = (1; 3, 0, 0);
(4) S4 = F2 * I3 * O2 = (A1 * B1) * I3 * (D2 * E4), N(S4) = (1; 3, 0, 0);
(5) S5 = F1 * I4 * O1 = (A2 * B1) * I4 * (D1 * E3), N(S5) = (3; 2, 1, 0);
(6) S6 = F1 * I4 * O2 = (A2 * B1) * I4 * (D2 * E4), N(S6) = (3; 2, 1, 0);
(7) S7 = F2 * I4 * O1 = (A1 * B1) * I4 * (D1 * E3), N(S7) = (3; 2, 1, 0);
(8) S8 = F2 * I4 * O2 = (A1 * B1) * I4 * (D2 * E4), N(S8) = (3; 2, 1, 0);
(9) S9 = F1 * I5 * O1 = (A2 * B1) * I5 * (D1 * E3), N(S9) = (3; 2, 1, 0);
(10) S10 = F1 * I5 * O2 = (A2 * B1) * I5 * (D2 * E4), N(S10) = (3; 2, 1, 0);
(11) S11 = F2 * I5 * O1 = (A1 * B1) * I5 * (D1 * E3), N(S11) = (3; 2, 1, 0); and
(12) S12 = F2 * I5 * O2 = (A1 * B1) * I5 * (D2 * E4), N(S12) = (3; 2, 1, 0).
Now an additional phase of decision analysis can be used: (i) multicriteria evaluation of the final decisions above and selection of the best ones (from the viewpoint of additional top-level requirements); (ii) expert judgment of the decisions; and (iii) analysis and improvement of the decisions. Figure 6 illustrates the lattice of quality and the Pareto-effective composite DAs for part F; Figure 7 does the same for system S. A computational sketch of this final composition stage follows.
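As a cross-check, this final composition stage can be reproduced with the sketch from Section 2.2. The priorities follow the text (F1, F2, O1, and O2 are assigned priority 1; the I-priorities come from Table 3), and the compatibilities are those of Table 6; with the cumulative-count dominance rule assumed earlier, the twelve composites listed above come out as the Pareto-effective set.

```python
parts = [["F1", "F2"], ["I1", "I2", "I3", "I4", "I5"], ["O1", "O2"]]
priority = {"F1": 1, "F2": 1, "O1": 1, "O2": 1,        # assumed priority 1
            "I1": 3, "I2": 2, "I3": 1, "I4": 2, "I5": 2}  # from Table 3

table6 = {("F1", "I1"): 2, ("F1", "I2"): 2, ("F1", "I3"): 1,
          ("F1", "I4"): 3, ("F1", "I5"): 3, ("F1", "O1"): 3, ("F1", "O2"): 3,
          ("F2", "I1"): 2, ("F2", "I2"): 2, ("F2", "I3"): 1,
          ("F2", "I4"): 3, ("F2", "I5"): 3, ("F2", "O1"): 3, ("F2", "O2"): 3,
          ("I1", "O1"): 0, ("I1", "O2"): 2, ("I2", "O1"): 3, ("I2", "O2"): 3,
          ("I3", "O1"): 3, ("I3", "O2"): 3, ("I4", "O1"): 3, ("I4", "O2"): 3,
          ("I5", "O1"): 3, ("I5", "O2"): 3}
compat = {frozenset(pair): w for pair, w in table6.items()}

for combo, N in sorted(pareto_compose(parts, priority, compat)):
    print(combo, N)
# Twelve composites survive: the four with I3 at N = (1, (3, 0, 0)) and the
# eight with I4 or I5 at N = (3, (2, 1, 0)), i.e., S1...S12 above.
```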


Fig. 5 Structure of Web hosting system and decisions: S = F * I * O = (A * B) * I * (D * E); S1 = F1 * I3 * O1 = (A2 * B1) * I3 * (D1 * E3), etc.; composite DAs F1 = A2 * B1, F2 = A1 * B1, O1 = D1 * E3, O2 = D2 * E4; DA priorities in parentheses: A1(2), A2(1), A3(2), A4(2), A5(3); B1(1), B2(2), B3(3), B4(2); I1(3), I2(2), I3(1), I4(2), I5(2); D1(1), D2(1), D3(2), D4(2), D5(3); E1(3), E2(2), E3(1), E4(1).

Fig. 6 Space of system quality for F (positions of N(F1) and N(F2) relative to the ideal point; levels w = 1, 2, 3).


Fig. 7 Space of system quality for S (positions of N(S1)...N(S4) and N(S5)...N(S12) relative to the ideal point; levels w = 1, 2, 3).

4. Conclusion

In recent years, the popularity of applied Web-based systems has been increasing, for example: E-business and E-commerce, Web-based information systems, E-government and E-democracy, E-learning and E-teaching, Web-based research support systems, and Web-based systems for public policy decision making. As a result, Web engineering/science is under examination (e.g., Web-based system life cycle engineering/management, including issues of system requirements, design, and maintenance). Web hosting systems are representatives of an important class of Web-based systems that are used as basic multidisciplinary tools in many applications. In this paper, a hierarchical morphological approach (HMMD) to the composition of Web hosting system components is suggested. The considered example of a modular Web hosting system and its hierarchical model (including design alternatives, criteria, and examples of estimates for the design alternatives and their compatibility) corresponds to a versatile structure of multidisciplinary modular systems and can be used in various domains (engineering, computer science, management).

Generally, HMMD is targeted at composing a small set of the best final decisions in a very large combinatorial space of all composite decisions. HMMD provides a multistage series-parallel design framework that is a basis for controlling design process complexity in several ways: (i) algorithmic/computational complexity (i.e., partitioning the solving process so that combinatorial problems of increased dimension can be examined), and (ii) human capital and/or organizational complexity (by dividing the organizational problems of expert judgment so that different domain experts can be used separately). In addition, the multistage (hierarchical or cascade-like) nature of HMMD allows analysis/evaluation of intermediate design results and repetition of some stages of the design framework. It is reasonable to note that the discrete space of system excellence (the lattice of quality) used in HMMD is a new, useful approach to representing an integrated quality of composite (modular) system decisions.

This paper has a preliminary nature. In the future, it can be reasonable to consider the following research directions in the design of Web hosting systems: (1) examination of an extended system structure; (2) revelation of 'bottlenecks' in the obtained composite decisions and improvement of the decisions; (3) analysis of system adaptability and upgradeability; (4) multistage system design (or design of a system trajectory); (5) study of system maintenance; and (6) usage of fuzzy set approaches to take into account uncertainty in the examined problems.


The draft material for this paper was prepared within the framework of the faculty course "Design of Systems: Structural Approach" at the Faculty of Radio Engineering and Cybernetics of Moscow Institute of Physics and Technology (State University) (creator and lecturer: M.Sh. Levin) (Levin, 2006a). The above-mentioned course was partially supported by NetCracker, Inc. (http://www.netcracker.com). In the future, the material of this paper may be used as a basic example in computer science and/or engineering education to help students obtain skills in modular system modeling, analysis, and design.

5. References

Aggarwal, R., Verma, K., Miller, J., Milnor, W., 2004, "Constraint driven Web service composition in METEOR-S," in Proceedings of the 2004 IEEE Conference on Services Computing (SCC 2004), pp. 23-30.
Agrawal, R., Bayardo, R.J., Jr., Gruhl, D., Papadimitriou, S., 2001, "Vinchi: A service-oriented architecture for rapid development of Web applications," in Proceedings of the Tenth International WWW Conference (WWW10), Hong Kong, http://www10.org/cdrom/papers/506/index.html
Antonsson, E.K., Cagan, J., Eds., 2001, Formal Engineering Design Synthesis, Cambridge University Press, Cambridge, UK.
Baldwin, C.Y., Clark, K.B., 2000, Design Rules: The Power of Modularity, Vol. 1, MIT Press, Cambridge.
Baresi, L., Pezze, M., 1998, "Towards formalizing structured analysis," ACM Trans. on Software Engineering and Methodology, Vol. 7, No. 1, pp. 80-107.
Bascaran, E., Karandikar, H.M., Mistree, F., 1992, "Modeling hierarchy in decision-based design: A conceptual exposition," in Design Theory and Methodology 92, Stauffer, L.A., Taylor, D.L., Eds., ASME, New York.
Battacharjee, S., Ramesh, R., Zionts, S., 2001, "A design framework for e-business infrastructure integration and resource management," IEEE Trans. on SMC - Part C, Vol. 31, No. 3, pp. 304-319.
Benatallah, B., Sheng, Q.Z., Dumas, M., 2003, "The self-serv environment for web services composition," IEEE Internet Computing, Vol. 7, No. 6, pp. 40-48.
Birmingham, W.P., Brennan, A., Gupta, A.P., Siewiorek, D.P., 1988, "Micon: A single board computer synthesis tool," IEEE Circuits and Devices Magazine, Vol. 4, No. 1, pp. 37-46.
Bowen, J.P., Hinchey, M.G., 1995, Applications of Formal Methods, Prentice Hall, Upper Saddle River, NJ.
Brown, K., 1997, "Grammatical design," IEEE Expert, Vol. 12, No. 2, pp. 27-33.
Buchhiarone, A., Gnesi, S., 2006, "A survey on services composition languages and models," in Proceedings of the Int. Workshop on Web Services Modeling and Testing (WS-MaTe 2006), Italy, pp. 51-66.
Buede, D.M., 2009, The Engineering Design of Systems: Models and Methods, J. Wiley & Sons, New York.
Canfora, G., Penta, M.D., Esposito, R., Villani, M.L., 2005, "An approach for QoS-aware service composition based on genetic algorithms," in Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, Washington, D.C., pp. 1069-1075.
Cardoso, J., Sheth, A., 2003, "Semantic E-workflow composition," J. of Intelligent Information Systems, Vol. 21, No. 3, pp. 191-225.
Chen, X., Mohapatra, P., 2002, "Performance evaluation of service differentiating internet servers," IEEE Trans. Computers, Vol. 51, No. 11, pp. 1368-1375.
Clarke, E.M., Wing, J.M., 1996, "Formal methods: State of the art and future directions," ACM Computing Surveys, Vol. 28, No. 4, pp. 626-643.
Coad, P., Yourdan, E., 1991, Object-Oriented Design, Prentice Hall PTR, Englewood Cliffs, NJ.
Coello, C.A.C., 2001, A Short Tutorial on Evolutionary Multiobjective Optimization, LNCS 1993, Springer, Berlin.
Crnkovic, I., Stafford, J.A., Schmidt, H.W., Wallnau, K.C., 2005, "Component-based software engineering," J. of Systems and Software, Vol. 74, No. 1, pp. 1-3.
Deb, K., 2001, Multi-Objective Optimization using Evolutionary Algorithms, J. Wiley & Sons, Chichester, UK.
DeMarco, T., 1979, Structured Analysis and System Specification, Prentice Hall, Englewood Cliffs, NJ.
Gao, A.Q., Yang, D.Q., Tang, S.W., Zhang, M., 2005, "Web service composition using integer programming-based models," in Proceedings of the IEEE Int. Conf. on e-Business Engineering, Beijing, China, pp. 603-606.
Gu, Z., Li, J., Xu, B., 2008, "Automatic service composition based on enhanced service dependency graph," in Proceedings of the IEEE Int. Conf. on Web Services (ICWS 2008), pp. 246-253.
Guo, J., 2002, "A component-based systems development approach," J. of Integrated Design and Process Science, Vol. 6, No. 1, pp. 103-115.
Harhalakis, G., Lin, C.P., Nagi, R., Proth, J.M., 1992, "Hierarchical decision making in computer-integrated manufacturing systems," in Proceedings of the Third Intl. Conference on CIM, IEEE CS Press, pp. 15-24.
Huang, C.C., Kusiak, A., 1998, "Modularity in design of products and systems," IEEE Trans. SMC - Part A, Vol. 28, No. 1, pp. 66-77.
Jololian, L., 2005, "Towards semantic integration of components using a service-based architecture," J. of Integrated Design and Process Science, Vol. 9, No. 3, pp. 1-13.
Jones, J.C., 1981, Design Methods, J. Wiley & Sons, New York.
Keeney, R.L., Raiffa, H., 1976, Decisions with Multiple Objectives: Preferences and Value Tradeoffs, J. Wiley & Sons, New York.
Korhonen, P., Wallenius, J., Zionts, S., 1984, "Solving the discrete multiple criteria problem using convex cones," Management Science, Vol. 30, No. 11, pp. 1336-1345.
Kuppuraju, N., Ganesan, S., Mistree, F., Sobieski, J.S., 1985, "Hierarchical decision making in system design," Engineering Optimization, Vol. 8, No. 3, pp. 223-252.
Kusiak, A., 1999, Engineering Design: Products, Processes, and Systems, Academic Press, New York.
Levin, M.Sh., 1998, Combinatorial Engineering of Decomposable Systems, Kluwer Academic Publishers, Dordrecht.
Levin, M.Sh., 2006, Composite Systems Decisions, Springer, New York.
Levin, M.Sh., 2006a, "Course 'System design: structural approach'," in 18th Int. Conf. on Design Methodology and Theory (DTM2006), Pennsylvania, DETC2006-99547.
Levin, M.Sh., 2009, "Combinatorial optimization in system configuration design," Autom. & Remote Control, Vol. 70, No. 3, pp. 519-561.
Levin, M.Sh., 2009a, "Towards morphological systems design," in Proceedings of the 7th IEEE Int. Conf. on Industrial Informatics (INDIN 2009), Cardiff, UK (in press).
Leymann, F., Roller, D., Schmidt, M., 2002, "Web services and business process management," IBM Systems J., Vol. 41, No. 2, pp. 198-211.
Li, M., Wang, D.-Z., Du, X.-Y., Wang, S., 2005, "Dynamic composition of web services based on domain ontology," Chinese J. of Computers, Vol. 28, No. 4, pp. 643-650.
Liang, Q.A., Su, S.Y.W., 2005, "AND/OR graph and search algorithm for discovering composite web services," Int. J. of Web Services Research, Vol. 2, No. 4, pp. 48-67.
Madhusudan, T., Uttamsingh, N., 2006, "A declarative approach to composing web services in dynamic environments," Decision Support Systems, Vol. 41, No. 2, pp. 325-357.
McDermott, J., 1982, "R1: A rule-based configurer of computer systems," Artificial Intelligence, Vol. 19, No. 2, pp. 39-88.
Mullins, S., Rinderle, J.R., 1991, "Grammatical approaches to engineering design, part I: An introduction and commentary," Research in Engineering Design, Vol. 2, No. 3, pp. 121-135.
Nierstrasz, O., Gibbs, S., Tsichritzis, D., 1992, "Component-oriented software development," Comm. of the ACM, Vol. 35, No. 9, pp. 160-165.
Orriens, B., Yang, J., Papazoglou, M.P., 2004, "Service component: A mechanism for Web service composition reuse and specialization," J. of Integrated Design and Process Science, Vol. 8, No. 2, pp. 13-28.
Poladian, V., Sousa, J.P., Garlan, D., Schmerl, B., Shaw, M., 2006, "Task-based adaptation for ubiquitous computing," IEEE Trans. on SMC - Part C, Vol. 36, No. 3, pp. 328-340.
Rabinovich, M., Aggarwal, A., 1999, "RaDaR: A scalable architecture for a global Web hosting service," Computer Networks, Vol. 31, No. 11, pp. 1545-1561.
Rao, J., Kungas, P., Matskin, M., 2004, "Logic-based services composition: From service description to process model," in Proceedings of the IEEE Int. Conf. on Web Services, pp. 446-453.
Roy, B., 1996, Multicriteria Methodology for Decision Aiding, Kluwer Academic Publishers, Dordrecht.
Saaty, T.L., 1988, The Analytic Hierarchy Process, McGraw-Hill, New York.
Shaw, M., Garlan, D., 1996, Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall, Englewood Cliffs, NJ.
Shen, W., Ghenniwa, H., 2003, "A distributed multidisciplinary design optimization framework: Technology integration," J. of Integrated Design and Process Science, Vol. 7, No. 3, pp. 95-108.
Sirin, E., Parsia, B., Wu, D., Hendler, J., Nau, D., 2004, "HTN planning for web service composition using SHOP2," Web Semantics, Vol. 1, No. 4, pp. 377-396.
Stefik, M., 1995, Introduction to Knowledge Systems, Morgan Kaufmann, San Francisco, CA.
Stiny, G., 1991, "The algebras of design," Research in Engineering Design, Vol. 2, No. 3, pp. 171-181.
Tang, X.-F., Jiang, C.-J., Ding, Z.-J., Wang, C., 2007, "A Petri net based semantic Web service automatic composition method," J. of Software, Vol. 18, No. 12, pp. 2991-3000.
Wielinga, B., Schreiber, G., 1997, "Configuration-design problem solving," IEEE Expert: Intelligent Systems and Their Applications, Vol. 12, No. 2, pp. 49-56.
Wynn, M.T., Edmond, D., Milliner, S., 2004, "The estimation of invocation cost for composite services," J. of Integrated Design and Process Science, Vol. 8, No. 1, pp. 31-47.
Xiong, P., Fan, Y., Zhou, M., 2009, "Web service configuration under multiple quality-of-service attributes," IEEE Trans. Automation Science and Engineering, Vol. 6, No. 2, pp. 311-321.
Yu, T., Zhang, Y., Lin, K.-J., 2007, "Efficient algorithms for Web services selection with end-to-end QoS constraints," ACM Trans. on the Web, Vol. 1, No. 1, Article No. 6.
Zhang, L.-J., Li, B., Chao, T., Chang, H., 2004, "Requirements driven dynamic services composition for web services and grid solutions," J. of Grid Computing, Vol. 2, No. 2, pp. 121-140.
Zhovtobryukh, D., 2007, "A Petri net-based approach for automated goal-driven web service composition," Simulation, Vol. 83, No. 1, pp. 33-63.
Zwicky, F., 1969, Discovery, Invention, Research Through the Morphological Approach, Macmillan, New York.


6. Authors' Biographies

Dr. Mark Sh. Levin is a Senior Research Scientist at the Institute for Information Transmission Problems of the Russian Academy of Sciences. He received the M.S. degree in Radio Engineering from Moscow Technological University for Communication and Informatics (1970), the M.S. degree in Mathematics from Lomonosov Moscow State University (1975), and the Ph.D. degree in Systems Analysis from the Russian Academy of Sciences (1982). He has conducted research projects in Russia, Israel, Japan, and Canada. Dr. Levin is interested in systems engineering, system design, combinatorial optimization, decision making, and engineering and CS education. He has authored three books and many research articles. Dr. Levin is a member of ACM (SM'06), IEEE, SIAM, ISAI, ORSIS, and INCOSE. For more information visit: http://www.mslevin.iitp.ru/

Mr. Stanislav Yu. Sharov received the B.S. degree in Applied Mathematics and Physics from Moscow Institute of Physics and Technology (State University) (2007). He is now an M.S. student at the Faculty of Radio Engineering and Cybernetics of Moscow Institute of Physics and Technology (State University). Mr. Sharov is interested in information technology, Web services, communication networks, software engineering, and protocol engineering. He has published several conference articles.


2009 Society for Design and Process Science Printed in the United States of America

N-DIMENSIONS, PARALLEL REALITIES, AND THEIR RELATIONS TO HUMAN PERCEPTION AND DEVELOPMENT

Majid M. Naini
Universal Vision & Research, Delray Beach, FL, USA

Joan F. Naini
Pediatric Neurologists of Palm Beach, Loxahatchee, FL, USA

This paper delineates a new conceptualization of the integration of technology, psychology, medicine, and science, with an emphasis on quantum physics. The focus is on developing a new paradigm to comprehend the ultimate reality and its relation to the human journey in time, space, and beyond. As we learn more through scientific and technological advancements, we recognize that the human being has been ignoring his higher self. An increased understanding of the human brain, body, mind, and spirit is leading to the recognition of the extraordinary, yet often concealed, abilities we all possess. In addition, our increased understanding of the existing realities in parallel dimensions, utilizing superstring theory, is leading to a broader perception of the whole reality and its effect on human development and the human condition. The relationship between electromagnetic frequencies and energies and human physiology and psychology requires further study to produce a more definitive understanding of the human makeup. As we increase our understanding and measurement of electromagnetic frequencies and energy in human beings, we will be better able to manipulate this energy. Through this integration of the sciences we will finally develop a more precise map for human composition, development, health, and recovery from disease.

Keywords: parallel dimensions and realities, superstring theory, human perception and development, neuroplasticity, neuroimaging

1. Introduction

There have been many examples of transdisciplinary methodologies interspersed throughout time, especially in the fields of technology, psychology, medicine, and science. An integrated transdisciplinary education and research environment which combines the strengths of many disciplines is essential to meet the challenges of modern technological development (Ertas, Tanik, and Maxwell, 2000). By studying the latest advances, especially in the fields of engineering, biology, genetics, medicine, chemistry, physics, astrophysics, astronomy, and psychology, we will eventually be able to gather more data about the elements and angles of this new understanding, for which we do not even have complete paradigms at this time. It is the integration of disciplines which will lead to the ultimate understanding of the relationship and interaction among the electromagnetic and energy aspects of the human being and the other physical, mechanical, biological, chemical, and electrical aspects. Common transdisciplinary techniques which could be utilized in this new conceptualization


include functional MRIs, ultrasound imaging, electrical stimulation, and electromagnetic and energy therapy in medical treatments. The objective of this paper is to present a new transdisciplinary conceptualization of the integration of modern science and technology in developing a new paradigm in relation to human perception and development.

2. In Reality Everything is Alive and in Motion

From a scientific and realistic point of view, what seems to us dead and solid, such as stone, soil, wood, metal, and even gases, is in reality made of molecules, and those molecules are in turn made of atoms. Atoms consist of electrons and a nucleus. The nucleus is made of protons and neutrons, which are made of quarks, which are fluctuations of energy. In another physics theory, the final elements of an atom are superstrings, which are microscopically tiny vibrating loops of energy. Electrons are nothing but electrical energy. So in any definition the bottom line is that an atom is nothing but some energy, vibrating and moving continuously. Therefore, calling something dead or solid is not really scientific reality. No wonder Rumi [1], the great mystic, around 750 years ago ingeniously stated [2]:

From the beginning we have come from the living world,
Once again from this low end (world), we soared to the high end (world).
All the particles in motion or motionless, are proclaiming,
"Verily we are returning to the Beloved."
The worship and the praises of the hidden particles have filled the skies with an uproar.
All particles in the universe secretly are telling you days and nights,
We are listening, watching, and enchanting, but we are silent with you who are not intimate.
Since you are going toward inanimateness, how can you become intimate with the life of inanimate?
From inanimateness go toward the world of spirits and hear the sound of the particles of the world.
The glorification of God by inanimate beings will become apparent to you,
The temptation of (false) interpretations does not misguide you.
If man does not surpass his physical senses, he will be a stranger to the unseen images.
The wind, soil, water, and fire are servants (of the Beloved),
They are dead to you and me, but alive to God.

(Naini, 2010)

[1] UNESCO named 2007 as The Year of Rumi, and programs throughout the world celebrated the occasion of Rumi's 800th birthday.
[2] All English translations of Rumi's works are Dr. Naini's translations from Mysteries of the Universe and Rumi's Discoveries on the Majestic Path of Love; The Majestic Journey in Time and Space on Earth and Beyond; and Mind-Body-Spirit Relations and Mystical Balance.

3. A Brief Review of Superstring Theory

Physicists have long been searching for a "theory of everything" to explain the universe. Quantum theory describes the universe as fundamentally discontinuous and is used primarily to understand atoms, molecules, and subatomic particles. Relativity describes time, space, and gravity as a smooth, unbroken continuum and is used to understand the behavior of astronomical bodies and the universe as a whole. Each theory works well in its own realm, but neither quite works when dealing with extremely large masses, like the core of a black hole, or extremely small time periods, such as the first moments after the Big Bang. Therefore, some physicists propose that the basic units of matter and energy should not be thought of as particles, but instead as minuscule, vibrating loops or strings, which exist not just in our familiar four dimensions of space and time, but actually in ten or more dimensions. According to string theory, all of the extraordinary occurrences in the universe are reflections of one grand physical principle and one single entity: microscopic vibrating loops of energy, a billionth of a billionth the size of an atom, about 10^-33 centimeters long (Greene, 2003).

If we were able to examine an elementary particle at extremely high resolution via extraordinarily powerful tools, we would thus see a vibrating, one-dimensional, oscillating, dancing filament existing as a loop or as a strand, not a point. The way in which the string vibrates in space-time and the geometry of the space-time determine the properties of the elementary particle (Greene, 2003). Just as a violin string can vibrate in different modes, and each mode corresponds to a different sound, the modes of vibration of a fundamental string can be recognized as the different familiar particles (Zwiebach, 2004). String theory suggests the existence of ten or eleven (in M-theory) space-time dimensions, as opposed to the usual four (three spatial and one temporal) used in relativity theory; the extra dimensions are twisted and curled into complex little shapes (Duff et al., 1995).

Greene (2003) uses the example of a garden hose to explain why we do not see the additional dimensions described in superstring theory. From a distance, the hose looks like a straight line, and if an ant lived on the hose, it could move up and down its length. However, a closer inspection reveals another dimension, its girth, so the ant could walk around the hose as well. The two types of dimensions are those that are long and visible (e.g., left-right, back-forth, or up-down) and those that are tiny and curled up, existing only on the microscopic level of strings. In 1919, Theodor Kaluza conjectured the existence of a fourth spatial dimension. In 1926, Oskar Klein elaborated on this concept, proposing that space consists of extended dimensions (the usual three spatial ones) and curled-up dimensions, which are found deep within the extended dimensions and can be thought of as circles. If spheres replace the Kaluza-Klein circles, there are two additional dimensions if only the surfaces are considered. If we also include the space within the sphere, there are three additional dimensions, so the total number of dimensions is six. In 1957, Eugenio Calabi described six-dimensional geometrical shapes, and in 1977, Shing-Tung Yau mathematically proved this theory. If the spheres in curled-up space are replaced by these Calabi-Yau shapes, there are ten dimensions: three spatial, plus the six of the Calabi-Yau shapes, plus one of time. The tiny, curled-up, six-dimensional shapes predicted by string theory cause one string to vibrate in precisely the right way to produce a photon and another string to vibrate in a different way to produce an electron. According to string theory, these minuscule extra-dimensional shapes "really may determine all the constants of nature, keeping the cosmic symphony of strings in tune" (Greene, 2003).

M-theory, pioneered by Edward Witten in 1995, describes supersymmetric (a conjectured symmetry of nature) extended objects with two spatial dimensions and with five spatial dimensions. Witten has stated that the M stands for magic, mystery, or membrane. M-theory is an eleven-dimensional theory that looks ten-dimensional at some points. It may have a membrane or "brane" as a fundamental object instead of a string, although it would look like strings when the 11th dimension is curled into a small circle (Witten, 1998). Greene (2003) states that M-theory opens up a new possibility, "that our whole universe is living on a membrane, inside a much larger, higher dimensional space." He further proposes that possibly in the future we will be able to use gravity waves to communicate with other parallel universes which exist to the right and left of our universe.

4. A New Proposal: 1, 2, 3, ..., N Dimensions

The classification of the electromagnetic frequency spectrum was described in the previous paper (Naini, 2006). It was explained that the spectrum of electromagnetic radiation covers a wide range of wavelengths, from gamma rays, X-rays, ultraviolet, visible (red to violet), infrared, and microwaves, finally to radio waves. The whole spectrum carries energy, and each band provides a different view of the universe and different information due to the amount of energy found in the photons. Radio waves have low energy photons, microwaves have higher energy photons, and infrared has even higher energy photons; visible, ultraviolet, X-rays, and gamma rays have even higher energy photons than infrared. In summary, longer wavelengths have less energy and shorter wavelengths have more energy.
Greene (2003) states that M Theory opens up a new possibility, "that our whole universe is living on a membrane, inside a much larger, higher dimensional space.” He further proposes that possibly in the future we will be able to use gravity waves to communicate with other parallel universes which exist to the right and left of our universe. 4. A New Proposal: 1, 2, 3, N, Dimensions The classification of electromagnetic frequencies spectrum was described in the previous paper (Naini, 2006). It was explained that the spectrum of electromagnetic radiation covers a wide range of wavelengths from gamma rays, X-rays, ultraviolet, visible (red to violet), infrared, microwaves, and finally to radio waves. The whole spectrum carries energy and each band provides a different view of the universe and different information due to the amount of energy found in the photons. Radio waves have low energy photons, microwaves have higher level photons, and infrared has even higher energy level photons; visible, ultraviolet, X-rays, and gamma-rays have even higher energy level photons than infrared. In summary, longer wavelengths have less energy and shorter wavelengths have more energy.


In the above classification, there were seven broad sections. As we know, the concept of the three dimensions X, Y, and Z was expanded by the ingeniousness of Albert Einstein to the four dimensions X, Y, Z, and T, where T stands for time: for an event taking place at the same X, Y, and Z coordinates but with two different values of T, the time variable makes the two events completely separate and distinct. I would like to propose that if we combine the four dimensions X, Y, Z, and T with these seven fields of frequencies, assuming that each field is by itself defined as one dimension, then this would increase our previous dimensions to eleven, or more, depending upon further distinction and separation of the grand spectrum of electromagnetic frequencies into more than seven categories. By increasing the number of different categories of the spectrum from seven to K, and adding the other four dimensions X, Y, Z, and T, we could have any defined dimension N based upon our classification and definition.

To give a simple example that illustrates the above proposal and concept, let us look at the reality for a human, a honeybee, and a snake in reference to their visual perception and the way that each one operates in a defined, specific range. Keep in mind that the human retina can only distinguish frequencies between red and violet, while a honeybee's vision operates in ultraviolet frequencies, and a snake's in infrared frequencies. Now imagine a human being, a snake, and a honeybee simultaneously present in the same place in which some event is taking place. The X, Y, Z, and T variables for that particular event are identical, and yet each observer, through its own unique visual perception, which is active in its respective range, observes and views something different. The three observers have different pictures, understandings, and realities, although they are at the same time, location, and place, observing the same event. In superstring theory, all the mathematical calculations bring the number of dimensions up to ten or eleven, or N in general, where N is the number of variables that define the total number of parameters. As proposed above, adding the number of dimensions X, Y, Z, T, and K (the spectrum of frequencies) results in N. So if we assume that K = 7, the different fields of frequencies from radio waves to gamma rays, then N = 11. This manner of thinking would allow one to imagine and better understand the higher dimensions, in addition to the four regular dimensions X, Y, Z, and T, which are familiar to most of us.

5. Reality of Perception

To expand on the prior concept, let us also consider hearing, smelling, touching, and tasting. In the same X, Y, Z, and T dimensions, a sound could be heard in different ways by different observers. For example, a human being's hearing organ is equipped to hear from 20 to 20,000 Hz. Other animals hear high frequency sounds that a human cannot hear, such as a dog from 60 to 45,000 Hz, a mouse from 1,000 to 90,000 Hz, and a bat from 3,000 to 120,000 Hz (Schiffman, 2001). An elephant hears in the infrasound range from 1 to 20,000 Hz, so it also hears low frequency sounds that a human being cannot hear. Therefore, a hearing event at the same location and time has diverse meaning, understanding, and recognition for different observers. This concept can also be expanded to other senses. For example, humans have an olfactory membrane of about 4 sq. cm, and dogs up to 150 sq. cm.
In humans there are about 40 million olfactory receptors, while in the German Shepherd dog there are about 2 billion olfactory receptors. In terms of taste, humans have approximately 10,000 taste buds, while a catfish has approximately 100,000 taste buds. If this concept is expanded to touch, we should consider that whatever is defined as a physical atom in the periodic table is supposedly touchable by our sensory organs. In fact, everything we are able to see on Earth is most likely made from combinations of electrons, up-quarks, and down-quarks, but the universe itself has additional constituents. For example, in the early 1930s Wolfgang Pauli discovered neutrinos, which are ghostly particles that only rarely interact with other matter. Billions of them are passing through our body and skin, and we do not even feel or see them (Greene, 2003). So it is of the utmost importance that we recognize that an observer and the tools of observation play a very significant role in understanding our universe, reality, and existence.


Imagine that a ghost like the cartoon character Casper, made of neutrinos, would have a much different understanding of all the attributes of the same surroundings, such as smell, color, and sensing of the physical and nonphysical environment, than a physical organism with a thicker and more condensed being. Please keep in mind that the human body is over 99.9999% empty space as seen from a quantum physicist's perspective, since it is actually an empty void filled with scattered electrical charges and discharges. The feeling that our body parts are solid is an illusion, in the same way as the 'wall' that separates two repelling magnets. The void between the nucleus of an atom and the electrons orbiting around it is proportionally more significant than the space between the Earth and the sun (Chopra, 2006). Hopefully, when we develop better scientific tools, we will have a better picture and understanding of the so-called human physical body and its combined electromagnetic frequencies and energies.

Perceptions vary even from person to person when presented with the same situation. Meanings may also change for an individual as one's perspective changes. For example, if we look at a common Müller-Lyer optical illusion (Figure 1), we are tricked into assigning a different meaning to what we see. Although all horizontal lines in the figure are exactly the same size, most people see them as different sizes. Therefore, what reaches our brain is not exactly what exists in reality, although it may be patterned after reality or contain aspects of it, in the same way as a painting is a copy of a scene as perceived by the artist.

Fig. 1 Optical illusion (Müller-Lyer).

Smith (2002) defines an illusion as "any perceptual situation in which a physical object is actually perceived, but in which that object perceptually appears other than it really is." For example, we may perceive a white wall in a yellow light as yellow, a sweet drink as tasting sour if we have just eaten something sweeter, or a quiet sound as loud if it is very near. In addition to illusions, humans also may see, hear, touch, taste, and smell hallucinations, which are experiences that seem exactly like perceptions of a real, mind-independent object, although there is no mind-independent object of the relevant kind being perceived. For example, most of us have experienced a visual hallucination like seeing water in the desert (Smith, 2002). In fact, even movement is actually an illusion, since the human retina cannot detect movement, unlike the retina of frogs. Instead, the human brain, much like the frames projected on a theatre screen, matches up image elements from frame to frame over space and time. To see the motion of a retinal image feature means to interpret the real-world events that gave rise to it, and that interpretation is impossible without contextual factors (Albright, 1995). It turns out that our perception of any familiar object, although it appears to be immediately subjective, is really derived from past experience (Russell, 1921). Heidegger explains, "We never ... originally and really perceive a throng of sensations, e.g., tones and noises, in the appearance of things...; rather, we hear the storm whistling in the chimney, we hear the three-engine aeroplane, we hear the Mercedes in immediate distinction from the Volkswagen. Much closer to us than any sensations are the things themselves. We hear the door slam in the house, and never hear acoustic sensations or mere sounds." (Heidegger, 1977: 156)


So what is the reality of human perception? At any given moment our five senses are taking in more than 11,000,000 pieces of information. The receptor neurons in each sensory system deal with electromagnetic, mechanical, or chemical energy, and all convert a stimulus from the environment into electrochemical nerve impulses, the common language of the brain. Our eyes alone receive and send over 10,000,000 signals to our brains each second. Also, the retina is read by the brain every 0.1 seconds, meaning that we are not actually seeing anything in the present, but something that happened a fraction of a second ago. To further complicate our situation, in addition to the traditional five senses, there are the abilities to sense temperature, vibration, acceleration of our bodies, the positions of our limbs, forces acting on our body, hunger, thirst, pain, and other sensations related to our internal state. According to Freeman (1991), the chaotic, but simultaneous and cooperative, activity of millions of neurons is vital for perception, as the brain transforms sensory messages into conscious perceptions almost instantly, consciously processing about 40 pieces of information per second. The chaotic behavior may seem random, but there is a hidden order, as the brain responds flexibly to its environment, generates novel activity patterns, and shifts from one complex activity pattern to another in response to even minute bits of information. Within a fraction of a second, the limbic system in the brain sends a search command to the motor systems and alerts all sensory systems to respond. At this point there is synchronous activity in every neuron of a given participating region, which is transmitted back to the limbic system, where it combines with output from other sensory systems to form a gestalt. Thus, an act of perception is not the copying of an incoming stimulus; it is a step in a journey in which brains develop, reorganize themselves, and accommodate their environment to their own advantage. The final pattern of activity caused by any given stimulus configuration depends upon genetic programming in conjunction with cumulative environmental influences through development and growth (Kandel, 2000). How ingeniously Rumi described the complex human being around 750 years ago:

Our physical body became our cover (veil) in this world,
We are like an ocean covered under this straw (cover).
Oh my good friend, you are not one dimensional, but you are a galaxy and a deep ocean,
One of your grand dimensions, which has nine hundred other dimensions,
Is an ocean that can drown a hundred of you. (Naini, 2010)

6. Reality of Emotions

How can human emotions be based on reality when our ability even to perceive reality is so limited and biased? In fact, mental contamination, or contact with some unwanted agent that taints mental processing, is extremely difficult to avoid due to our narrow access to and use of mental processes (as discussed above) and an individual's belief system. If we better understand our limitations, are we then able to change our emotional reactions in response to stimuli? According to cognitive behavioral psychology, painful emotions arise from an individual's beliefs, rather than from reality.
Cognitive behavioral psychologists work with patients to relieve emotional distress by reframing or changing the content of their thoughts, which in turn challenges their belief system. In contrast, Buddhist meditation changes the relationship to emotions, allowing us to view mood fluctuations, so we can navigate around them, instead of impulsively acting on a biased viewpoint. Meditation, through its actions on the prefrontal cortex, allows one to decrease affective arousal from the limbic system, the emotional system in the brain (Ellison, 2006).


After taking readings of the activity in the right and left prefrontal areas of hundreds of people, Davidson, a neuroscientist at the University of Wisconsin, established a bell-curve distribution, with the majority of people in the middle, showing a mixture of good and bad moods. The individuals who fall farthest to the right are the most likely to suffer from affective disorders such as depression or anxiety, while those who fall farthest to the left rarely have troubling moods and recover from bad moods rapidly (Goleman, 2003). Can we learn to increase the activity in the left prefrontal area? Davidson states that qualities such as happiness should be thought of as skills that can be trained, rather than as fixed traits. Several research studies involving meditation support this assertion. In one study, after non-Buddhists were given an eight-week meditation training course, magnetic resonance imaging and other testing revealed 50 percent more electrical activity in the left frontal regions of the brain, associated with positive emotions and anxiety reduction, and an increase of as much as 25 percent in protective antibodies. Even more exciting, some changes lasted up to four months (Savory, 2004). This is an important finding, since it has been shown that individuals with increased left-sided anterior activation exhibit faster recovery after a negative provocation (Davidson, 2000). Therefore, it seems as if the conscious act of meditation actually rearranges the brain and thus changes our emotional responses. As Rumi so aptly states:

People are all the prisoners of thoughts,
For this they are broken hearted and sad

(Naini, 2010)

Oh my brother, you are all thoughts, the rest are bones and flesh.
If your thoughts are flowers, you are a flower garden.
And if your thoughts are shrubs, you are firewood for burning.
If you are rose water, you are sprinkled on head and bosom,
And if you are urine, you are poured out (in a toilet).

(Naini, 2002)

Rumi's above verses are so true. In these present times, we know from the latest medical and psychological research that happiness is really a state of mind. The mind's state has remarkable effects on the body and its functions, especially in the areas of age reversal and the production of neurotransmitters, growth hormones, and growth factors. The recent scientific evidence of the mind's direct impact on the body is astonishing.

7. Functioning in Parallel Worlds and Realities

In a previous paper (Naini, 2006), two examples were used to illustrate the concept of several parallel worlds and realities existing at the same time. One was about a glass full of clear water. If we add some red color to the water, the water becomes red. If we then add some yellow color, the water becomes a mixture of red and yellow everywhere. If we add more colors, the color of the water becomes a combination of all the added colors. Even though we see a different color as the final result, each color that we added remains present simultaneously in that mixture. Another example illustrating this concept is to imagine filling a room with different types of gases: every gas is present at the same time in every part of the room. In the same way, we can think of different existences, realities, and physical matters being present everywhere at the same time at different vibrational frequencies. Our normal matter is made up of atoms from the periodic table of elements, while other matter exists at higher vibrational frequencies. One theory proposes that some of this higher-vibrational matter consists of particles all around us known as WIMPs (Weakly Interacting Massive Particles). Physicists predict that they have about 100 times the mass of a proton, yet they pass through the densest matter, even a planet, like ghosts moving through walls. A single such particle may not seem like much, but billions and billions of trillions of them could weigh enough to account for the missing matter of the universe.
There is currently a multi-institution search, CDMS (Cryogenic Dark Matter Search), aimed at detecting WIMPs. Is the human mind capable of functioning in parallel realities by rewiring our neurological pathways and systems so as to relate to different, higher dimensions and to access our higher senses (Naini, 2006), enhancing our natural abilities and expanding the limitations of our sensory organs? Will we eventually be able to enhance the brain's ability to comprehend and learn? Is it possible to learn how to develop neurological pathways in young brains for special abilities that could develop and mature in later years into various extrasensory and psychic abilities, allowing us to better comprehend the reality of our environment, the universe, and other dimensions?

8. Neuroplasticity

“The principal activities of brains are making changes in themselves.”
Marvin L. Minsky (1986)

The capacity of the brain to change with learning is known as plasticity. Plasticity refers to alterations in circuits in the brain in response to experience or sensory stimulation, and the process of neuroplasticity is now being studied at the cellular and molecular levels. Periods of rapid change or plasticity occur in the brain under four main conditions: 1) developmental plasticity, when the immature brain begins to process sensory information; 2) activity-dependent plasticity, when changes in the body, such as a problem with eyesight, alter the balance of sensory activity received by the brain; 3) plasticity of learning and memory, when we alter our behavior based on new sensory information; and 4) injury-induced plasticity, following damage to the brain (JFK Center for Research on Human Development, 2007). According to Drubach (1999), underlying this brain plasticity are adjustments in the strength of connections between brain cells, via changes in the internal structure of neurons and an increase in the number of synapses between neurons. Our brains actually change as a result of the skills we have learned and the actions we have taken. For example, in an individual who has been blind from a young age, when no transmissions arrive from the eyes, the visual cortex can learn to hear, feel, or even support verbal memory. When signals from the skin or muscles flood the motor cortex or the somatosensory cortex (which processes touch), the brain expands the area that is wired to move certain body parts (Begley, 2007). It is clear that our brains are constantly adapting to our environment.

Many experiments have been conducted to study the power of visualization, with results illustrating how mental practice can produce or maintain a skill at the same level as physical practice. A classic example is a study conducted at the University of Chicago. Students with approximately the same ability in basketball were divided into three groups and asked to shoot baskets, and the percentage of baskets each group made was recorded. For thirty days, the first group was instructed not to practice or play basketball, the second group to practice shooting baskets every day for one hour, and the third group not to play any basketball but to practice shooting baskets in their minds for an hour each day. At the end of thirty days, Group 1 made no improvement over their original percentage of baskets. Group 2, which had physically practiced, improved performance by 24%. Group 3, which had only utilized visual imagery, improved performance by 23% (Clark, 1960).
This leads back to our section on the reality of perception, as our brains do not seem to distinguish between an actual event and a vividly imagined one. This principle is now being used to improve performance in athletics and other areas. Another good example of neuroplasticity is the retraining of people recovering from brain injury, in which the normal side of the brain compensates for the damaged tissue of the contralateral side. Also, the ability of the nondominant hemisphere to sustain language functions has been studied in various clinical situations. In patients with early lesions of the left hemisphere, acquired either prenatally or in early childhood, the right hemisphere can support nearly normal development of language, provided
there is no associated epilepsy (Muter et al., 1997). Recently it has been shown that difficulties in specific learning skills can be improved if the neurological origin of the problem is accurately determined. For example, Tallal (2000) discovered that in some children difficulty learning to read stems from a language processing delay in the brain, so she and her colleagues developed a computer program to accelerate the processing of the sounds that make up the written word.

9. The Human Journey in Time, Space, and Beyond

The human being is really a biological, mechanical, chemical, electrical, and electromagnetic being. As we know, life in the human being is the combination and interaction of biological, mechanical, chemical, electrical, and electromagnetic phenomena. The electromagnetic, or energy, part is the most important aspect of life and the blueprint of life itself. After our physical death, the biological, chemical, electrical, and mechanical functions of our bodies cease to exist. What happens to the electromagnetic or energy part of the human being? It does not die, because that energy takes on a different form of existence. This accords with the well-known law of conservation of energy, which states that energy may neither be created nor destroyed and that, therefore, the sum of all the energies in a system is constant. Thus, our energy and essence (soul) continue existing even after our physical bodies are dead, and our journeys continue. Rumi ingeniously describes the cycle of life, its process, and its evolution in the following verses from the Mathnavi:

I died from solid and became plant, from plant I was transformed to animal.
From animal I died and became a human, then why should I be afraid of death? I never became less from death.
Next I die from human, and soar to become an angel.
I soar again from an angel, and become something that you cannot even imagine.
This way he went from world to world, so now he has become wise and learned and strong.
He does not remember his former intelligence, from this (human) intelligence he will also evolve.
So he will escape from this intellect, full of greed and desire,
He will surprisingly see (find) a hundred thousand intelligences.
Even though he fell asleep and became forgetful of the past,
How would they leave him in his forgetfulness?
Again from that sleep (death) they will drag him to wakefulness, so he will laugh at his own state.
(Saying) “Why was I grieving in my sleep? How did I forget my true status?
How did I not know that that sorrow and illness, is the function of sleep and is illusion and fantasy?
Similarly, this world which is the sleeper's dream, the sleeper thinks that this is permanent.
Until suddenly the dawn of death arises, and he is freed from the darkness of suspicion and deception.
(He) will laugh at his sorrows, when he sees his eternal place and home.
Whatever you did in this sleep in the world, will appear to you at the time of awakening.
So you should not think, that this evil act is permissible in this dream (world),
And has no consequence. For this, God has called this world a play,
Because this punishment is a play, in comparison with that punishment. (Naini, 2002)

The cycle of life briefly described in the above verses is also scientifically accurate. We all know that vegetation and plants grow using nutrients in the soil.
At the atomic level, we realize that all the atoms in vegetation and plants come from the soil and water in the ground from which they grow. The animals eat the vegetation and plants, and those atoms become part of their bodies. People eat
the meat and other parts of the animals, and so those atoms are transferred to their bodies. Through procreation, those atoms are passed on to their children. If we view the human body as a chemical, mechanical, biological, electrical, and electromagnetic being, then every aspect of our existence is affected by the environment in which we operate, and each environmental influence acts on the corresponding category, dimension, or function of our makeup. To illustrate this concept better, think of a regular medical checkup with blood and urine tests, X-rays, CT scans, MRIs, ultrasounds, and other scans used to make a diagnosis and educated guess about the proper function or malfunction of a specific part or system of the human body. Imagine that in the near future it may be possible to obtain a very detailed scan and measurement of the vibrational frequencies and energy status of different parts of the body, to further diagnose the regularity or irregularity of so-called normal health and functioning of the human makeup.

10. Practical Applications

Ertas, Tanik, and Maxwell (2000) presented a new transdisciplinary model for higher education and research and stressed a need for multidisciplinary fusion. We believe that there is also a critical need to bridge the gap between computational neural network-level models, human psychology, neuroscience, and general educational practice. One example is the Mind, Brain, and Education master's degree program at the Harvard Graduate School of Education, which seeks to utilize brain-based research to produce the most effective teaching strategies. As research advances, the significant impact of relationships, learning environments, and early interventions on shaping children's brains is becoming clear. In a recent book, Educating the Human Brain, Posner and Rothbart (2006) illustrate that even innate brain properties, such as attention, can be enhanced via early explicit instruction. Also, case studies of child prodigies in various areas show that highly advanced learning is possible at a very young age. More and more science-based interventions have been researched for reading, math, and memory disorders as we learn how to shape the developing brain for optimal learning.

The neuroscience research team at the University of Wisconsin (Lutz et al., 2004) has concluded from extensive research with Tibetan monks that meditation not only produces temporary changes in the workings of the brain, but also produces permanent changes, especially an increase in gamma wave activity, the pattern of brainwaves associated with higher mental activity. Altered states of consciousness have a long history as part of the general experiential and behavioral research tied to the exploration of consciousness expansion, meditation, LSD, and mystical practices, and have been connected with changes in cortical activity and arousal levels. Vaitl (2005) explains that in Eastern cultures, techniques for altering consciousness are “embedded in religion and the philosophy of human destiny and personal growth.” In schizophrenia, it is hypothesized that individuals are unable to adequately inhibit “irrelevant” stimuli and thus suffer a kind of sensory overload. Perhaps we can learn how to achieve this state of heightened sensory sensitivity without any overload, and thereby reach a different aspect or dimension of reality. It is clear from the prior sections that brain restructuring, or rewiring, exists. Neuroimaging techniques have shown that cortical functioning becomes fine-tuned as the child develops.
Human neurophysiology labs are attempting to localize the brain generators of attention, emotion, language, and cognition. Technological advances in imaging allow us to observe the activity of cognitive systems and processes as the individual reads, writes, calculates, estimates, etc. Steven Hyman, former Director of the National Institute of Mental Health, has called for increased collaboration between cognitive scientists and physicists, computer scientists, physicians, and school teachers (Murray, 2000). Therefore, we propose that future educational programs should encompass a fully integrated system to teach children how to perform mental and psychic exercises that enable them to more efficiently use and develop their brain circuitry. For example, imagine if an elementary school student, as a part of the curriculum, had some hours and subjects dedicated to such mental and psychic exercises to
enhance the ability for different kinds of learning across a diverse spectrum of existence and reality. This different kind of intelligence could be associated with Harvard psychologist Howard Gardner's (1983) concept of multiple intelligences, which has become increasingly popular among educators. It is therefore vital that we better understand the developmental critical periods when synaptic and neuronal changes usually occur, and the best methods to stimulate effective and efficient learning.

11. Conclusions

If we adopt a new transdisciplinary educational model, with state-of-the-art methodology, architecture, and technology, with the intention of taking greater advantage of human neurological ability, we will be able to expand our range and the current limitations of our sensory organs to obtain a wider, more accurate spectrum of reality in this universe and beyond. Hopefully, all of our progress will help us realize that life is a journey in time and space, in all dimensions and beyond. The advancements in science and technology will inevitably propel human beings to a higher, more evolved consciousness. This higher state will finally guide us to our real senses so we can fully comprehend this everlasting journey of life. We must take advantage of our knowledge and resources to relieve our fellow human beings and our environment from misery, pain, destruction, and pollution. Great mystics, seers, and sages of all paths recognized long ago that love is the only magical force that can quicken the energy of our souls to soar to the highest state of ecstasy, where we become an island of peace and serenity. Let us conclude with a translation of a profound verse from Rumi, the master and servant of love:

I was dead, I became alive, I was tears, I became laughter;
The majesty of love came, and I became an everlasting majesty myself.

12. References

Albright, T.D. and Stoner, G.R., 1995, “Visual motion perception,” Proceedings of the National Academy of Sciences, Vol. 92, pp. 2433-2440.
Begley, S., 2007, “How the brain rewires itself,” www.time.com (cited January 19, 2007).
Chopra, D., 2006, “Seeing what you believe, believing what you see,” www.forbes.com (cited April 18, 2006).
Clark, L.V., 1960, “Effect of mental practice on the development of a certain motor skill,” Research Quarterly, Vol. 31, pp. 560-569.
Davidson, R.J., 2000, “Affective style, psychopathology, and resilience: Brain mechanisms and plasticity,” American Psychologist, Vol. 55, pp. 1196-1214.
Drubach, D., 1999, “The brain explained,” Prentice-Hall, New Jersey.
Duff, M.J., Liu, J.T., and Minasian, R., 1995, “Eleven-dimensional origin of string/string duality: a one-loop test,” Nuclear Physics B, Vol. 452, No. 2, pp. 261-282.
Ellison, K., 2006, “Mastering your own mind,” Psychology Today Magazine, Sept/Oct 2006.
Ertas, A., Tanik, M.M., and Maxwell, T.T., 2000, “Transdisciplinary engineering education and research model,” Transactions of the SDPS, Vol. 4, No. 4, pp. 1-11.
Freeman, W., 1991, “The physiology of perception,” Scientific American, Vol. 264, No. 2, pp. 78-85.
Gardner, H., 1983, “Frames of mind,” Basic Books, New York.


Goleman, D., 2003, “Behavior; finding happiness: Cajole your brain to lean to the left,” New York Times, Feb. 4, 2003.
Greene, Brian, 2003, “The elegant universe: Superstrings, hidden dimensions, and the quest for the ultimate theory,” W.W. Norton & Co, New York.
Greene, Brian, 2003, “The elegant universe, the string's the thing,” PBS Special, Oct. 2003.
Heidegger, Martin, 1977, “The origin of the work of art,” in: D.F. Krell (editor), Martin Heidegger: Basic writings, Harper and Row, New York, pp. 139-212.
JFK Center for Research on Human Development, Vanderbilt University Staff, 2007, “Brain plasticity,” http://kc.vanderbilt.edu/kennedy/research/topics/plasticity.html (cited 2007).
Kandel, E.R., Schwartz, J.H., and Jessell, T., 2000, “Principles of neural science,” fourth edition, McGraw-Hill Publishing Co, New York.
Lutz, A., Greischar, L.L., Rawlings, N.B., Ricard, M., and Davidson, R.J., 2004, “Long-term meditators self-induce high-amplitude gamma synchrony during mental practice,” Proceedings of the National Academy of Sciences, Vol. 101, No. 46, pp. 16369-16373.
Minsky, M., 1986, “The society of mind,” Simon & Schuster, New York.
Murray, B., 2000, “From brain scan to lesson plan,” APA Monitor on Psychology, Vol. 31, p. 3.
Muter, V., Taylor, S., and Vargha-Khadem, F., 1997, “A longitudinal study of early intellectual development in hemiplegic children,” Neuropsychologia, Vol. 35, pp. 289-298.
Naini, Majid M., 2002, “Mysteries of the universe and Rumi's discoveries on the majestic path of love,” Universal Vision & Research, Delray Beach, Florida.
Naini, Majid M., 2006, “Integration of science, technology, and human development,” Integrated Design and Process Technology Conference Proceedings, San Diego, California, June 25-30, 2006.
Naini, Majid M., 2010, “The majestic journey in time and space on Earth and beyond,” to be published 2010.
Naini, Majid M., 2010, “Mind-body-spirit relations and mystical balance,” to be published 2010.
Posner, M.I. and Rothbart, M.K., 2006, “Educating the human brain,” American Psychological Association, Washington, DC.
Russell, B., 1921, “The analysis of mind,” The Macmillan Company, New York.
Savory, E., 2004, “Meditation: The pursuit of happiness,” www.cbsnews.com (cited April 23, 2004).
Schiffman, H.R., 2001, “Sensation and perception: An integrated approach,” John Wiley & Sons, New York.
Smith, A.D., 2002, “The problem of perception,” Harvard University Press, Cambridge, Massachusetts.
Tallal, P., 2000, “Experimental studies of language learning impairments: From research to remediation,” in: Bishop & Leonard (editors), “Speech and language impairments in children: Causes, characteristics, intervention, and outcome,” Psychology Press, Hove, United Kingdom.
Vaitl, D., Birbaumer, N., Gruzelier, J., Jamieson, G.A., Kotchoubey, B., Kübler, A., Lehmann, D., Miltner, W.H., Ott, U., Pütz, P., Sammer, G., Strauch, I., Strehl, U., Wackermann, J., and Weiss, T., 2005, “Psychobiology of altered states of consciousness,” Psychological Bulletin, Vol. 131, No. 1, pp. 98-127.
Witten, Edward, 1998, “Magic, mystery and matrix,” Notices of the AMS, October 1998, pp. 1124-1129.
Zwiebach, B., 2004, “A first course in string theory,” Cambridge University Press, Cambridge, United Kingdom.

Journal of Integrated Design and Process Science

MARCH 2009, Vol. 13, No. 1, pp. 60

13. Authors' Biographies

Professor Majid Naini has been a Professor, Department Chair, and College Dean in Computer Science, Engineering, and Information Technology at several major universities, including the Universities of Pennsylvania, Hawaii, Oman, Cairo, Colorado, and Florida. He has been an active researcher, a designer of several state-of-the-art computer and IT projects, and the author of many innovative books and papers on science, technology, and humanity. Dr. Naini is one of the foremost Rumi scholars and a promoter of world peace. He has been featured in over 500 newspapers, magazines, websites, and TV/radio shows and has been the keynote speaker at over 500 national and international conferences, seminars, and programs, including at the United Nations.

Dr. Joan (Firoozeh) Naini is a developmental neuropsychologist and clinical psychologist, and a former professor in the Department of Neurology, University of Miami School of Medicine. She has also been Director of Psychological Assessment at the Mailman Center for Child Development, Department of Pediatrics, University of Miami School of Medicine. Dr. Naini specializes in the assessment of and intervention for neurological, genetic, learning, attention, and behavior disorders.


2009 Society for Design and Process Science
Printed in the United States of America

USING BAYESIAN APPROACH FOR SENSITIVITY ANALYSIS AND FAULT DIAGNOSIS IN COMPLEX SYSTEMS

Ozge Doguc
Jose Emmanuel Ramirez-Marquez
School of Systems and Enterprises, Stevens Institute of Technology, NJ, USA

System reliability is important to systems engineers, since it is directly related to a company's reputation, customer satisfaction, and system design costs. Improving system reliability has been an important task for systems engineers, and a number of studies have been published discussing methods for improving it. For this purpose sensitivity analysis and fault diagnosis have been used in various studies, where the identification of significant and problematic components plays an important role. Both sensitivity analysis and fault diagnosis require understanding the system structure and component relationships, and Bayesian networks (BN) have been shown to be an effective tool for modeling systems and quantifying component interactions. In this study, we use BN for sensitivity analysis and fault diagnosis to improve system reliability. We focus on complex systems, where the number of components and component interactions can be very large.

We first discuss sensitivity analysis in complex systems using BN, which can be used to identify significant system components. Sensitivity analysis using BN is concerned with the question of how sensitive the system reliability is to possible changes in the nodes of the BN. In this paper we demonstrate that BN can be efficiently and effectively used for sensitivity analysis of complex system reliability. This study is the first that considers component reliabilities and uses BN for sensitivity analysis in complex systems. As a part of our method for sensitivity analysis, an efficient algorithm (SA) is introduced. Our SA algorithm is based on a graph traversal algorithm that can be effectively used in BN: it traverses the BN through the connected nodes and evaluates the reliabilities to perform sensitivity analysis. Our method helps systems engineers understand the cause and effect relationships between system components and their reliability, and discover the key components that have significant effects on system reliability. Once the key components are identified, the system structure can be revised to improve the overall system reliability.

Next we discuss fault diagnosis in complex systems and show how fault diagnosis can be used to improve complex system reliability. Due to component aging and environmental factors, the components in real-life complex systems may fail or not function as expected. Such failures may cause unexpected changes in the system reliability values, affecting not only the reliability of the failed component but also that of the overall system. One important issue in complex systems is that systems engineers must process large amounts of information before making operational decisions. Since BN combine expert knowledge of the system with probability theory for the construction of effective diagnosis methodologies, they have been applied to fault diagnosis in various studies. In this paper, we present a new method for fault diagnosis in complex systems. Our method uses the complex system reliability to detect faulty components: we continuously monitor the overall system reliability value, and our fault diagnosis mechanism is triggered only when significant changes to system reliability are detected. As part of our method, an efficient search algorithm, empowered with popular heuristics, is designed specifically for BN.
In this paper, we discuss how our method can be efficiently applied to complex systems, since our search
algorithm needs to check only a small portion of the system's components before detecting the failed one. We believe that our method provides systems engineers with invaluable information to diagnose the faulty component and improve reliability in complex systems.

Keywords: complex systems, system reliability, sensitivity analysis, fault diagnosis, Bayesian networks

1. Introduction

System reliability is important to systems engineers, since it is directly related to a company's reputation, customer satisfaction, and system design costs. In order to improve system reliability, a number of studies using various methods have been published in the literature. Sensitivity analysis and fault diagnosis have been commonly used by systems engineers, and both require understanding the system structure and component interactions. In the literature, Bayesian networks (BN) have been used as an efficient and effective tool to evaluate and quantify component interactions, and there are a number of studies on sensitivity analysis and fault diagnosis using BN (Steenland and Greenland, 2004, Weiss, 1996, Pfingsten, 2006, Horvitz and Barry, 1995, Kappen et al., 2003, Painter, 2003). BN help systems engineers understand the cause and effect relationships between the reliabilities of system components and discover the key and problematic components that have significant effects on the system reliability. Once these components are identified, the system structure can be revised to improve overall system reliability.

In this study, we focus on complex systems with large numbers of components and component interactions, and we use BN to model them. We first discuss sensitivity analysis for complex systems and show that it can be used to identify significant system components. This process requires understanding component interactions and their effects on the overall system reliability, which is not a straightforward task in the case of complex systems. Therefore we use our BN model for sensitivity analysis, which is concerned with the question of how sensitive the system reliability is to possible changes in the nodes of the BN.

On the other hand, even when the key components in a system are identified and the necessary measures are taken to improve the system reliability, component failures may occur for several reasons, such as component aging and environmental factors. Such failures may cause unexpected changes in the system reliability value and affect not only the reliability of the failed component, but also that of the overall complex system. Diagnosing unexpected changes in the system reliability and detecting the source of a change are essential for removing faulty components, replacing them with better ones, restructuring the system architecture, and thus improving the overall system reliability. However, modern complex systems with large numbers of components and complex component relationships create new challenges for systems engineers trying to understand and troubleshoot possible system problems. Therefore efficient monitoring and fault diagnosis methods are required for these systems.

This paper demonstrates that BN can be efficiently and effectively used for sensitivity analysis and fault diagnosis in complex system reliability. Although BN have been used as an efficient tool for sensitivity analysis, this is the first study that considers component reliabilities for sensitivity analysis and fault diagnosis in complex systems. We first provide an efficient sensitivity analysis method based on a graph traversal algorithm to evaluate component reliabilities and perform sensitivity analysis in complex systems. Then we introduce a new fault diagnosis method based on continuous monitoring of the overall system reliability value using BN.
In our method, the fault diagnosis mechanism is triggered to find the failed component when significant changes to the system reliability are detected. As part of our method, an efficient search algorithm, empowered with effective heuristics, is designed specifically for BN. In this paper, we discuss how our method can be efficiently used in complex systems, since our search algorithm needs to check only a small portion of
the system's components before detecting the failed one. We believe that our method provides systems engineers with invaluable information to diagnose the failed components and improve reliability in complex systems.

This paper is organized as follows: Section 2 provides a summary of the related work in the literature. Section 3 discusses BN, and Section 4 presents our method for sensitivity analysis using BN. Section 5 introduces our fault diagnosis method. In Section 6 two example scenarios are provided to illustrate our methods for sensitivity analysis and fault diagnosis. In Section 7 an experimental analysis of our methods is given. Finally, in Section 8 we provide our conclusions.

2. Literature Survey

In the literature, system success or failure conditions have been evaluated using different metrics such as availability, functionality, maintainability, etc. In addition to these, one important metric is the reliability of the system, which can be defined as the probability that a system will perform its intended function during a specified period of time under stated conditions (Gran and Helminen, 2001). Traditionally, engineers estimate reliability by understanding how the different components in a system interact for system success. However, for complex systems, understanding component interactions, which usually requires the intervention of a domain expert, may prove to be a challenging problem. BN have been proposed as an alternative to traditional reliability estimation approaches, partly because they are easy to use in interaction with domain experts in the reliability field (Sigurdsson et al., 2001). The idea of using BN in systems reliability has mainly gained acceptance because of the simplicity with which they represent systems and the efficiency of obtaining component associations (Doguc and Ramirez-Marquez, 2009). The concept of BN has been discussed in several earlier studies (Gran and Helminen, 2001, Doguc and Ramirez-Marquez, 2009, Boudali and Dugan, 2006). More recently, BN have found applications in software reliability (Fenton et al., 2002, Gran et al., 2000) and general reliability modeling (Bobbio et al., 2001). Currently, predefined BN are used for reliability estimation of specific systems. For example, Gran and Helminen (2001) provide a BN for nuclear power plants and introduce a hybrid method for estimating the reliability of the plant. In another study, Helminen and Pulkkinen (2003) present a BN-based method for reliability estimation of a computer-based motor protection relay. BN have also been used for sensitivity analysis and fault diagnosis, which are the focus of this study. The following sections summarize the studies that use BN for sensitivity analysis and fault diagnosis.

2.1. Sensitivity Analysis Using BN

Sensitivity analysis quantifies the dependence of system behavior on the parameters that affect the process dynamics. Due to its importance, sensitivity analysis has found applications in various areas such as pharmaceuticals, medicine, civil engineering, political science, and computer science (Steenland and Greenland, 2004, Blake et al., 1988, Castillo et al., 2006). In these applications, different methods have been used for sensitivity analysis, such as discrete stochastic processes, Monte Carlo simulations, random sampling, etc. (Gunawan et al., 2005, McCandless et al., 2006). As an example, Gunawan et al. (2005) use an analog of the classical sensitivity and the Fisher Information Matrix based on density function (distribution) sensitivity.
In another study, McCandless et al. (2006) use Monte Carlo simulation for settings where there is a possible unknown risk factor for the outcome that also predicts exposure. The common purpose of these applications is to observe the effects of different parameters on the overall system. However, formulating sensitivity analysis with an algebraic structure has been shown to be a hard problem, especially due to complex calculations (Castillo et al., 2001). Moreover, it has been argued that the existing sensitivity analysis procedures are not very effective in
revealing nonlinear relations; therefore new methods should be used, especially when limited information is available (Helton, 2008). To address these issues, BN have been shown to be an efficient tool for sensitivity analysis in different studies (Steenland and Greenland, 2004, Weiss, 1996, Pfingsten, 2006). Weiss (1996) introduced the concept of sensitivity analysis using BN, and Wang (Weiss and Wang, 1998) followed his work. Also, in his PhD thesis, Wang (2004) first showed that BN are sensitive to inaccuracies in the numeric values of their probabilities and then proved that BN can be efficiently used for sensitivity analysis. However, these studies on sensitivity analysis in BN focused on single parameters, where the goal is to understand the sensitivity of single parameter changes and to identify single parameter changes that would enforce certain conditions. Later, Chan and Darwiche (2005) extended this work to multiple parameters, which may be in the conditional probability table (CPT) of a single variable or in the CPTs of multiple variables. Also, Coupé and Van der Gaag (2002) show that the amount of algebraic computation can be significantly reduced, making sensitivity analysis of BN more efficient and practical. More specifically, they show that the probability of interest in sensitivity analysis can be expressed as a function of conditional probabilities, and computing these functions requires a small number of evaluations. Similarly, Wang et al. (2002) present a method that uses sensitivity analysis for a selective update of the probabilities when learning a BN. They first run sensitivity analysis on a BN learned with uniform parameters to identify the most important probability parameters. Then they update this set of probabilities to their accurate values by acquiring their informative parameters. The process is repeated until further recalculation of the probabilities does not improve the performance of the BN.

2.2. Fault Diagnosis Using BN

Since BN combine expert knowledge of the system with probability theory for the construction of effective diagnosis methodologies, they have been used in various fault diagnosis applications (Horvitz and Barry, 1995, Kappen et al., 2003, Painter, 2003). During the last decade there has been a surge of applications using BN models for system analysis and diagnosis in various application areas. Currently, BN models are being used for building intelligent agents and adaptive user interfaces (Microsoft (2005), NASA (Horvitz and Barry, 1995)), process control (NASA (Horvitz and Barry, 1995), General Electric, Lockheed), fault diagnosis (Hewlett Packard, Intel (Kappen et al., 2003), American Airlines), pattern recognition and data mining, medical diagnosis (BiopSys, Microsoft), and security and fraud detection (credit cards, AT&T) (Painter, 2003). In more recent applications, Sahin et al. (2007) used BN for fault diagnosis in airplane engines; they specifically used swarm optimization, which works in parallel, increasing the efficiency for large domains. In another study, Kang et al. (2007) used BN for fault diagnosis of gear train systems, combining BN with Back Propagation Neural Networks (BPNN). Also, Romessis and Mathioudakis (2006) built a BN for gas turbine fault diagnosis in jet engines. The studies mentioned in this section use specifically built BN for fault diagnosis purposes. The next section discusses BN from a systems engineering perspective.
3. Using BN for System Reliability

As discussed in the previous sections, BN have been used in various studies for estimating system reliability (Cowell et al., 1999, Jensen, 2001, Pearl, 1988). BN can be defined as an approach that represents the interactions between variables from a probabilistic perspective. This representation is modeled as a directed acyclic graph (DAG), where the nodes represent the variables and the links between each pair of nodes represent the causal relationships between the variables. From a systems engineering perspective, the variables of a BN are defined as the components in the system, while the
links represent the interactions of the components leading to system “success” or “failure”. Under a reliability analysis perspective, a variable A in the BN constitutes the success of a specific system component and, therefore, p(A) represents the probability of success for that component. Strictly speaking, the probability of success of a component is conditional on the available evidence from other components. In a BN this dependency is represented as a directed link between two components, forming a child-parent relationship, so that the dependent component is called the child of the other. Therefore, the success probability of a child node is conditional on the success probabilities associated with each of its parents (Fenton, Krause and Neil, 2002). The conditional probabilities of the child nodes are calculated using Bayes' theorem from the probability values assigned to the parent nodes. Also, the absence of a link between any two nodes of a BN indicates that these components do not directly interact for system failure/success; thus, they are considered independent of each other and their probabilities are calculated separately. To illustrate these concepts, the BN shown in Figure 1 presents how five components of a system interact. Each component in the system is represented by a node in the BN. In this BN, the child-parent relationships of the components can be observed, where on the quantitative side the degrees of these relationships (associations) are expressed as probabilities (Langseth and Portinale, 2005). Finally, the overall system reliability is represented by the System Behavior node in the BN.
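
To make these CPT mechanics concrete, the following minimal C sketch (the experiments in Section 7 are likewise implemented in C) estimates the CPT of a two-parent node from 0/1 historical observations by simple relative frequency. The observation array reuses the X1, X2, X3 columns of Table 1 in Section 6, while the function name and the bit-packed row indexing are illustrative choices of this sketch, not part of the method itself.

#include <stdio.h>

/* Estimate P(child = 1 | parent configuration) from 0/1 historical
 * observations by relative frequency.  Row index k encodes the parent
 * states: bit 0 = first parent, bit 1 = second parent. */
void estimate_cpt(int obs[][3], int n_obs, double cpt[4])
{
    int seen[4] = {0}, child_on[4] = {0};
    for (int r = 0; r < n_obs; ++r) {
        int k = obs[r][0] | (obs[r][1] << 1);  /* parents in columns 0 and 1 */
        seen[k]++;
        child_on[k] += obs[r][2];              /* child in column 2 */
    }
    for (int k = 0; k < 4; ++k)
        cpt[k] = seen[k] ? (double)child_on[k] / seen[k] : 0.0;
}

int main(void)
{
    /* Columns: X1, X2, X3 -- the ten observations of Table 1 (Section 6). */
    int obs[10][3] = { {1,1,0}, {0,1,1}, {1,1,0}, {0,0,0}, {1,0,1},
                       {1,0,0}, {1,1,0}, {1,1,1}, {1,0,1}, {0,0,0} };
    double cpt[4];
    estimate_cpt(obs, 10, cpt);
    for (int k = 0; k < 4; ++k)                /* reproduces Table 2 below */
        printf("P(X3=1 | X1=%d, X2=%d) = %.2f\n", k & 1, (k >> 1) & 1, cpt[k]);
    return 0;
}

Run on these ten observations, the sketch reproduces the conditional probabilities listed later in Table 2 (0, 1, 0.67, and 0.25 for the four parent configurations).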

Fig. 1 A sample BN.

In Figure 1 the topmost nodes (X1, X2, and X4, representing components 1, 2, and 4, respectively) do not have any incoming edges; therefore they are conditionally independent of the rest of the components in the system. The prior probabilities assigned to these nodes should be known beforehand, with the help of a domain expert or from historical data about the system. Based on these prior probabilities, the conditional probabilities of a dependent node, such as X3, can be calculated using Bayes' theorem and stored in its CPT. With the help of Bayes' theorem and the CPTs, all conditional probabilities can be efficiently calculated and stored in the BN.

4. Sensitivity Analysis of Complex System Reliability Using BN

In this section, we provide a new method for sensitivity analysis of complex system reliability using BN. As discussed in Section 2.1, sensitivity analysis using BN has been an active area of research during the last two decades, and several methods have been proposed in the literature (Steenland and Greenland, 2004, Blake et al., 1988, Castillo et al., 2006, Gunawan et al., 2005, McCandless et al., 2006). The main purpose of using BN for analysis is to understand the conditional probabilities of target nodes in the
network when enough evidence (historical data) is available. Sensitivity analysis using BN is concerned with the question of how sensitive the conditional probabilities are to possible changes in the nodes of the BN. For this purpose, sensitivity analysis requires manipulating system variables and monitoring the effects of these changes on the conditional probabilities. Other possibilities have been presented in (Spiegelhalter and Lauritzen, 1990), which presents a fully Bayesian approach, and (Tessem, 1992), which propagates intervals instead of single values for the probabilities. Another way of performing sensitivity analysis is suggested in (Laskey, 1995), which measures the impact of small changes in one parameter on a target probability of interest.

This section demonstrates that BN can be efficiently and effectively used for sensitivity analysis of complex system reliability. As discussed above, one of the main problems in sensitivity analysis is to find out the effect of one component on another and on the overall system reliability. Understanding the cause and effect relationships between system components helps systems engineers optimize the system architecture accordingly. Once the significant components are identified, the system structure is revised to prevent unexpected changes to these components and increase overall system reliability. Almost all components in a system may affect the overall system reliability in some way; however, quantitative evaluation of this effect is not straightforward in the case of complex systems. For example, system components may influence each other through other components in the system. Thus, in order to evaluate the actual effect of one component on another, all intermediate components should be examined first. As an example, in the small-size BN shown in Figure 1, the effect of the node X1 on System Behavior is mediated by the node X3; thus two relationships should be considered: X1 → X3 and X3 → System Behavior. Here, the most important problem is the number of relationships to be evaluated, owing to the size of the complex system under consideration. As the BN encodes the relationships between the components, and the degrees of these relationships are quantitatively stored in the CPTs, this information can be used for sensitivity analysis in complex systems. As discussed in Section 3, BN provide efficient ways of estimating and storing the conditional probabilities between components of a complex system. Once the conditional probabilities are estimated using Bayes' rule, the effect of one system component on another can be evaluated by iterating through the intermediate nodes and recalculating their probabilities in cascade.

In this section, a new algorithm for sensitivity analysis (SA) is introduced. The algorithm is given in Figure 2, an example scenario is provided in Section 6, and its performance analysis is discussed in Section 7.1.

Algorithm SA (B, s, δ)
Input: The Bayesian network B for a given complex system; a set s of two components Xi and Xj in the BN; and the amount of change δ in the reliability of node Xi.
Output: The amount of change in the reliability of node Xj.
Prerequisite: Node Xi must precede node Xj in B.

For each node in the child set of node Xi:
    Pick the node with the smallest index and call it Xk
    Apply the change δ to the CPT of Xk
    Calculate the change in the reliability of Xk and call it δ'
    If Xk = Xj, return δ'
    If Xk has children, set Xi = Xk

Fig. 2 Pseudo-code for the SA algorithm.
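
As one possible reading of Figure 2, the C sketch below follows the lowest-indexed-child chain from a source node and accumulates the induced reliability change until the target node is reached. The Figure 1 topology and the numeric edge weights, which stand in as a linearized substitute for the full CPT recalculation performed by the actual SA algorithm, are assumptions of this sketch, and the depth-first backtracking branch is elided for brevity.

#include <stdio.h>

#define N 6  /* nodes 0..4 = X1..X5, node 5 = System Behavior */

/* edge[i][j] != 0 means j is a child of i; the value is an assumed
 * sensitivity of j's reliability to i's, standing in for the CPT
 * update of the real algorithm.  All numbers are illustrative. */
static const double edge[N][N] = {
    /*           X1   X2   X3   X4   X5   SB  */
    /* X1 */ { 0.0, 0.0, 0.3, 0.0, 0.0, 0.0 },
    /* X2 */ { 0.0, 0.0, 0.4, 0.0, 0.2, 0.0 },
    /* X3 */ { 0.0, 0.0, 0.0, 0.0, 0.0, 0.5 },
    /* X4 */ { 0.0, 0.0, 0.0, 0.0, 0.6, 0.0 },
    /* X5 */ { 0.0, 0.0, 0.0, 0.0, 0.0, 0.4 },
    /* SB */ { 0.0, 0.0, 0.0, 0.0, 0.0, 0.0 },
};

/* Follow the lowest-indexed-child chain from src (cf. Figure 2) and
 * return the change induced at dst; 0.0 signals a dead end. */
double sa(int src, int dst, double delta)
{
    int node = src;
    for (;;) {
        int next = -1;
        for (int j = 0; j < N; ++j)
            if (edge[node][j] != 0.0) { next = j; break; }  /* leftmost child */
        if (next < 0) return 0.0;   /* no children: dst not on this chain */
        delta *= edge[node][next];  /* propagate the change into the child */
        if (next == dst) return delta;
        node = next;
    }
}

int main(void)
{
    printf("change at SB for delta = +0.20 on X1: %+.4f\n", sa(0, 5, 0.20));
    return 0;
}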


The SA algorithm traverses the BN B along a path between the initial node Xi and the final node Xj, and evaluates the changes in the CPTs of all intermediate nodes. The algorithm utilizes a commonly used graph traversal method called depth-first traversal. According to this approach, the leftmost child (i.e., the one with the smallest index) is visited first while traversing the BN. At some point, the algorithm may visit a node that does not have any children; in this case, the algorithm backtracks to the last node it visited and moves on to its next child. As a result, the algorithm is expected to traverse the whole BN in the worst case, and half of the nodes (n/2) in the average case.

Our method for sensitivity analysis of complex system reliability uses the SA algorithm to quantify the impact of components on the overall system reliability and to identify the important system components. When the SA algorithm is run for all system components, the important components can be discovered by comparing their individual impacts on the overall system reliability.

5. Fault Diagnosis in Complex Systems

As discussed in Sections 1 and 2, fault diagnosis has been used for enhancing system design and improving system reliability. This section presents a new fault diagnosis method for complex systems, using continuous monitoring and BN. In real-life complex systems, components may fail or stop functioning as expected over time. For example, due to aging or environmental factors, a system component may start failing more frequently, so that the mean time between failures (MTBF) for that component decreases. Such failures may cause unexpected changes in the evaluated reliability values, affecting not only the reliability of the failed component but also that of the overall system. For systems engineers it is very important to detect unexpected changes and diagnose their causes in order to improve system reliability.

In this study, a methodology is developed for monitoring the overall system reliability value and diagnosing the failed components in complex systems. BN are useful tools for diagnosis, and efficient search algorithms can be employed within the BN (Doguc, 2006). We provide an efficient method specifically for BN, empowered with simple and efficient heuristics. As the first step of our method for fault diagnosis in complex systems, we detect whether any significant changes occur in the system reliability. For this purpose, we continuously monitor (Gaurav, 2000) the evaluated system reliability value and observe significant changes in the CPT of the System Behavior node in the BN. Our method suggests monitoring the reliability value at intervals of t, chosen so that there can be at most one significant change in the system reliability per interval. The value of t should be decided by the systems engineer or a domain expert who has adequate knowledge of the system characteristics. In our method, the CPTs of the nodes in the BN are saved into the Current set at each observation o. The values in the Current set are then compared with the ones in the Previous set, which was created during the previous monitoring step t time units ago (Figure 3).


Fig. 3 Continuous monitoring for fault diagnosis using BN.

As can be seen from Figure 3, as part of our fault diagnosis method, continuous monitoring requires only two sets (Current and Previous) at any time, so that earlier sets can be discarded. This approach reduces the amount of resources (i.e., disk space and memory) required for monitoring and fault diagnosis. Next, if a change in the overall system reliability value is detected between two consecutive observations, our method requires mechanisms to search for the system component that caused this change. However, not every change in the system reliability value is caused by failed components. Therefore, a threshold value must be defined to decide whether the difference between two sets constitutes a significant change. This threshold value depends on the general system characteristics and the frequency of component failures in the system. When a significant change is detected, the diagnosis mechanisms are triggered to find the failed component in the system.

When a system component fails or stops functioning as expected, it affects not only the overall system reliability, but also the reliabilities of the other components. Using the BN representation of a system, the affected component(s) can be diagnosed by using search algorithms within the BN. Starting from the System Behavior node, the search algorithm traverses the BN until it finds the source of the change. In this study, as a part of our fault diagnosis method, we introduce a new search algorithm using an efficient heuristic. We also compare the performance of our algorithm with a commonly used naïve search algorithm, depth-first search (DFS). As a blind search algorithm, DFS always examines the leftmost parent in a graph first. It keeps iterating until it reaches the end of the graph; then it backtracks by one level and tries the next node. The algorithm either finds the desired node or keeps running until it has checked all the nodes in the graph (Kopec et al., 2004). The DFS algorithm can be used to search for the failed component in a complex system by using the BN model and the Current and Previous sets explained above. In this case, the DFS algorithm starts from the System Behavior node, whose CPT was changed due to the failed component in the system. Next, the algorithm checks the parents of the System Behavior node for changes in their CPTs, since they are the candidates for the failed component. In the next step, the DFS algorithm picks the leftmost parent whose CPT was also changed. The algorithm keeps iterating this loop until it reaches the end of the network, or until it finds a node whose parents are not altered at all. In both cases the DFS algorithm returns that node as the source of the fault in the system.
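
A minimal C sketch of this monitoring step is given below. As in Figure 3, only the Previous and Current snapshots are kept; summarizing each snapshot by the single overall reliability value (rather than by every node's CPT) and the particular threshold value are simplifying assumptions of the sketch.

#include <math.h>
#include <stdio.h>

#define THRESHOLD 0.01  /* significance threshold; system-specific (Section 5) */

static double previous_r = -1.0;  /* Previous snapshot; -1 = not yet observed */

/* Called once every t time units with the freshly evaluated overall
 * reliability; returns 1 when the diagnosis mechanism (e.g. the DFC
 * search below) should be triggered. */
int monitor(double current_r)
{
    int trigger = previous_r >= 0.0 &&
                  fabs(current_r - previous_r) > THRESHOLD;
    previous_r = current_r;  /* the older snapshot is discarded */
    return trigger;
}

int main(void)
{
    /* Illustrative readings; 0.47 -> 0.45 mirrors the Section 6 scenario. */
    const double readings[4] = { 0.47, 0.47, 0.45, 0.45 };
    for (int o = 0; o < 4; ++o)
        printf("observation %d: reliability %.2f -> %s\n", o + 1, readings[o],
               monitor(readings[o]) ? "significant change, run diagnosis"
                                    : "no action");
    return 0;
}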


Fig. 4 Flowchart for our DFC algorithm.

In contrast to DFS, our proposed algorithm uses a more sophisticated mechanism (a heuristic) to choose the nodes to be considered while searching for the failed one. Our Diagnose Failed Component (DFC) algorithm uses the heuristic to select the node to be considered next, rather than always picking the leftmost one. The heuristic reduces the number of nodes to be considered; thus our DFC algorithm is more efficient than the popular DFS algorithm. According to our heuristic, if there is a change in the CPT of a node, the DFC algorithm chooses the parent that is closest to the source of that change. The difficulty here is to define such a heuristic, i.e., to determine which node is closest to the source of the alteration. Moreover, an important property of the heuristic is that it must be efficient, since it is re-evaluated at each iteration of the DFC algorithm. The heuristic defined in this study uses the percentage of change in the CPT of the considered node: the node whose CPT changes the most is also the closest to the source of the change in the system, and therefore it should be picked first. This heuristic is accurate since, according to Bayes' rule, the values in the CPTs are calculated by multiplying the probabilities from the parent nodes (Gran and Helminen, 2001). As a result, the change in the CPT of a parent node is reflected in its children with some probability, and in their children with a lower probability, and so on. Therefore, if a component is closer to the failed one, its CPT changes more than the others', and this can be used to find the failed component.
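
The C sketch below illustrates this heuristic under the assumption that each node's CPT can be summarized by a single reliability value, so that "the parent whose CPT changed most" becomes the parent with the largest absolute reliability change. The Figure 1 topology encoded here and the numeric snapshots, which anticipate the X4 scenario of Section 6, are illustrative assumptions.

#include <math.h>
#include <stdio.h>

#define N 6  /* nodes 0..4 = X1..X5, node 5 = System Behavior */

/* parent[i][j] = 1 if j is a parent of i (Figure 1 topology, assumed). */
static const int parent[N][N] = {
    /*          X1  X2  X3  X4  X5  SB */
    /* X1 */ {  0,  0,  0,  0,  0,  0 },
    /* X2 */ {  0,  0,  0,  0,  0,  0 },
    /* X3 */ {  1,  1,  0,  0,  0,  0 },
    /* X4 */ {  0,  0,  0,  0,  0,  0 },
    /* X5 */ {  0,  1,  0,  1,  0,  0 },
    /* SB */ {  0,  0,  1,  0,  1,  0 },
};

/* DFC: starting from System Behavior, repeatedly step to the parent whose
 * value changed the most; stop at a node none of whose parents changed. */
int dfc(const double prev[N], const double curr[N])
{
    int node = 5;  /* System Behavior */
    for (;;) {
        int best = -1;
        double best_delta = 0.0;
        for (int j = 0; j < N; ++j) {
            double d = parent[node][j] ? fabs(curr[j] - prev[j]) : 0.0;
            if (d > best_delta) { best_delta = d; best = j; }
        }
        if (best < 0) return node;  /* no altered parent: this is the culprit */
        node = best;
    }
}

int main(void)
{
    /* Snapshots mirroring the Section 6 scenario: X4 drops from 0.5 to 0.4.
     * The X5 and System Behavior values are illustrative. */
    double prev[N] = { 0.7, 0.5, 0.47, 0.5, 0.50, 0.47 };
    double curr[N] = { 0.7, 0.5, 0.47, 0.4, 0.44, 0.45 };
    printf("faulty component: X%d\n", dfc(prev, curr) + 1);  /* prints X4 */
    return 0;
}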

6. Example Scenarios

In this section we provide two example scenarios to demonstrate our methods for sensitivity analysis and fault diagnosis for complex system reliability. In these scenarios we use the BN model shown in Figure 1 and the historical data from Table 1. By using Doguc and Ramirez-Marquez's
6. Example Scenarios

In this section we provide two example scenarios to demonstrate our methods for sensitivity analysis and fault diagnosis for complex system reliability. In these scenarios we use the BN model shown in Figure 1 and the historical data from Table 1. By using Doguc and Ramirez-Marquez's method (Doguc and Ramirez-Marquez, 2009) for system reliability estimation, the initial reliability of our example system can be assessed as 0.75. For the first scenario, assume that the reliability of node X1 is increased from 0.5 to 0.7 according to new system data (as shown in Table 1 below). In this case the overall System Behavior will be affected, as well as several other nodes in the BN. According to the SA algorithm (and as shown in Figure 2), the first affected node will be X3. The CPT of the X3 node after reevaluation is shown in Table 2. The reliability value of the X3 node can be estimated as 0.47 after this change.

Table 1 Historical data after a change in node X1.

Observation:      1  2  3  4  5  6  7  8  9  10
X1:               1  0  1  0  1  1  1  1  1  0
X2:               1  1  1  0  0  0  1  1  0  0
X3:               0  1  0  0  1  0  0  1  1  0
X4:               0  1  1  0  0  0  1  1  0  0
X5:               0  1  1  0  0  0  1  1  0  0
System Behavior:  0  1  1  0  1  0  1  1  1  0

Table 2 CPT of the X3 node after reevaluation.

X1  X2  P(X3 = 0)  P(X3 = 1)
0   0   1          0
0   1   0          1
1   0   0.33       0.67
1   1   0.75       0.25
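As a sketch of the marginalization behind the 0.47 estimate (assuming, for illustration, P(X2 = 1) = 0.5, read off the Table 1 data, 5 of 10 observations), the reliability of X3 is the CPT column P(X3 = 1 | x1, x2) weighted by the parent marginals:

    # P(X1=1) = 0.7 is given above; P(X2=1) = 0.5 is an assumption from Table 1.
    p_x1, p_x2 = 0.7, 0.5
    cpt_x3 = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.67, (1, 1): 0.25}  # P(X3=1|x1,x2)

    r_x3 = sum(p * (p_x1 if x1 else 1 - p_x1) * (p_x2 if x2 else 1 - p_x2)
               for (x1, x2), p in cpt_x3.items())
    print(round(r_x3, 2))  # -> 0.47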

According to the SA algorithm, the change in the reliability of X3 will also affect its children, starting with the System Behavior node. Assuming that nothing else has changed in the BN and using 0.47 in the calculations, the new value for the System Behavior node (i.e., the overall system reliability) is estimated as 0.788. Based on these results we can observe that a 20% change in the reliability of the X1 node affects the overall system reliability by 3.8%. For the next scenario, assume that the reliability of the component X4 has dropped from 40% to 30% due to aging and bad maintenance. As discussed before, this change affects the overall system reliability, which drops from 0.47 to 0.45. If our threshold for the change in the system reliability is set to 1%, the fault detection mechanism is triggered, since the 0.02 drop exceeds the threshold. According to our continuous monitoring mechanism, the CPTs of all nodes are stored in the Current and Previous sets, which will be used to find the faulty component. Since the immediate parents of the System Behavior node in the BN are X3 and X5, our DFC algorithm checks the Current and Previous sets of these nodes for any changes. In Figure 5, the Current and Previous sets for both nodes are shown.


Fig. 5 Previous and Current sets for X3 and X5.

It can be seen from Figure 5 that there are no differences between the Current and Previous sets for X3, while significant differences can be observed in the Current and Previous sets for X5. Thus, the DFC algorithm decides that X5 is either the faulty component itself or a child of the faulty component. Therefore the DFC algorithm moves to the parents of X5, which are X2 and X4. Similar to the previous step, the algorithm checks the Current and Previous sets of X2 and X4. Since X2 and X4 are leaf nodes in the BN (i.e., they do not have any parents), they do not have CPTs. In this case, the DFC algorithm checks the reliabilities of these nodes to detect any changes. According to this scenario, the reliability of X4 was changed from 0.5 to 0.4, while X2's reliability stayed constant. The DFC algorithm detects this change and decides that X4 is the faulty component, since it does not have any parents to check. In this example scenario, the DFC algorithm finds the faulty component in only two iterations in a five-component system, without checking all components in the system. As will be discussed in the next section, the running time of our DFC algorithm does not increase linearly with the system size, which is a very important feature for large complex systems.

7. Experimental Analysis

In this section, we provide experimental analysis of the performance of our methods for sensitivity analysis and fault diagnosis in complex systems. We created test suites to model example complex systems and measured the running times of our algorithms on the test suites. We implemented our algorithms in C. For our experiments we used a PC equipped with an Intel Core2 Duo 2.1GHz CPU and 2.5GB RAM. In order to increase the statistical significance of our experimental results, we repeated our experiments 10 times and used the mean values in the figures.

7.1. Experimental Analysis of Our Sensitivity Analysis Method

As discussed in Section 4, our sensitivity analysis method for complex system reliability uses the SA algorithm, so the performance of our method is highly dependent on the efficiency of the SA algorithm. In this section we focus on evaluating the performance of the SA algorithm.


It can be observed from our SA algorithm in Figure 2 that the number of iterations of the algorithm depends on the size of the input BN. In other words, the SA algorithm iterates between components Xi and Xj, and more iterations are required when the distances between the components increase with the system size. Moreover, the performance of the SA algorithm is affected by the number of children of each node (i.e., the size of the child set), since the algorithm evaluates each child node. For the sake of simplicity, in our experiments we assumed that the size of the child set is fixed for all nodes in the BN. For our tests, we created BNs of different sizes, varying from 10 nodes to 1000 nodes. Since the focus of this study is complex systems, we created test BNs as large as 1000 nodes. Also, in order to measure the effect of the number of children on the running time, we varied the size of the child set (u) in the test BNs from 2 to 10. A small child set represents a sparse system, while a large child set indicates high connectivity. For example, every component is connected to 10 others when u is equal to 10. Our experimental results in Figure 6 show that the running time of the SA algorithm increases sub-linearly with the number of components in the system. That is, when the system size increases by 10 times, from 10 to 100, the running time increases by about 8 times, from 2.76 seconds to 22.1 seconds (when u=2). This result shows that the running time of the SA algorithm remains considerably small even for very large systems.


Fig. 6 Running time of the SA algorithm (elapsed time in seconds vs. number of components, for u = 2, 5, 8, and 10).

7.2. Experimental Analysis of Our Fault Diagnosis Method

In this section, we evaluate the performance of our DFC algorithm, which is the main part of our fault diagnosis method. We compare the two algorithms (DFS and DFC) discussed in Section 5 to show how our DFC algorithm reduces the time to find the failed component in the system. For comparison purposes, we use different systems that are modeled as BNs, so that larger systems require more nodes in their BN representation. We evaluated the two algorithms with different BNs and recorded the elapsed time for the algorithms to find the failed component. In Figure 7 it can be observed that our DFC algorithm finds the failed component in significantly less time than the DFS algorithm, especially for large systems (i.e., # of nodes > 100). Also, unlike that of the DFS algorithm, the running time of our DFC algorithm is not directly proportional to the number of components in the system.



Fig. 7 Comparison of the DFS and DFC algorithms (elapsed time in seconds vs. number of components).

8. Conclusion

Sensitivity analysis and fault diagnosis have been commonly used by systems engineers for decades to improve system reliability. However, they require understanding the system structure and component relationships, which can be a difficult task in the case of complex systems. BNs have been used as an efficient and effective tool for modeling systems and quantifying component interactions. In this study, we discuss sensitivity analysis and fault diagnosis for complex system reliability using BNs and provide two new methods. First we discuss sensitivity analysis, which helps systems engineers understand the relationships between system components and their effect on overall system reliability. Sensitivity analysis is invaluable for engineers, especially for system analysis and design. Although the BN has been used as an efficient tool for sensitivity analysis, this is the first study that uses reliability values of individual system components to analyze overall system reliability. In this paper we provide an efficient algorithm (SA) to traverse the BN and perform sensitivity analysis in complex systems. We also show that our SA algorithm is reasonably efficient even for very large complex systems. Next, we discuss fault diagnosis in complex systems. During a complex system's life-cycle, the system components may fail due to aging or environmental factors. Failure of a system component not only reduces its own reliability but also affects overall complex system reliability. In this paper, using the BN model, we suggest an efficient method for fault diagnosis when an unexpected change occurs in a complex system's reliability. We simulated the BN model of the complex system in time and monitored the changes in the overall system reliability. Then we defined the DFC algorithm to detect significant changes in system reliability and efficiently find the failed component that causes the change. The DFC algorithm employs a simple heuristic to find the failed component efficiently within the system. As a benchmark, we compared the performance of our DFC algorithm with a naïve search algorithm, DFS. Although the DFS algorithm's performance was directly related to the number of components in the complex system, our method performed almost independently of the


number of components. We show that our DFC algorithm performs much better than DFS for complex systems with a large number of components.

9. References

Steenland, K.; Greenland, S., "Monte Carlo Sensitivity Analysis and Bayesian Analysis of Smoking as an Unmeasured Confounder in a Study of Silica and Lung Cancer." American Journal of Epidemiology 2004, 160, (4), 384-392.
Weiss, R. E., "An Approach to Bayesian Sensitivity Analysis." Journal of the Royal Statistical Society, Series B 1996, 58, 593-607.
Pfingsten, T., "Bayesian Active Learning for Sensitivity Analysis", 17th European Conference on Machine Learning, Berlin Heidelberg, 2006.
Horvitz, E.; Barry, M., "Display of Information for Time-Critical Decision Making", Eleventh Conference on Uncertainty in Artificial Intelligence, Montreal, Canada, 1995; pp. 296-305.
Kappen, B.; Wiegerinck, W.; Akay, E.; Neijt, J.; Beek, A. v., "A Clinical Diagnostic Decision Support System"; 2003.
Painter, J., "Uses of Bayesian Statistics"; 2003.
Gran, B. A.; Helminen, A., "A Bayesian Belief Network for Reliability Assessment." Safecomp 2001, 2187, 35-45.
Sigurdsson, J. H.; Walls, L. A.; Quigley, J. L., "Bayesian Belief Nets for Managing Expert Judgment and Modeling Reliability." Quality and Reliability Engineering International 2001, 17, 181-190.
Doguc, O.; Ramirez-Marquez, J. E., "A Generic Method for Estimating System Reliability Using Bayesian Networks." Reliability Engineering and System Safety 2009, 94, (2), 542-550.
Boudali, H.; Dugan, J. B., "A Continuous-Time Bayesian Network Reliability Modeling and Analysis Framework." IEEE Transactions on Reliability 2006, 55, (1), 86-97.
Fenton, N.; Krause, P.; Neil, M., "Software Measurement: Uncertainty and Causal Modeling." IEEE Software 2002, 10, (4), 116-122.
Gran, B. A.; Dahll, G.; Eisinger, S.; Lund, E. J.; Norstrøm, J. G.; Strocka, P.; Ystanes, B. J., "Estimating Dependability of Programmable Systems Using BBNs", Safecomp 2000; Springer, 2000; pp. 309-320.
Bobbio, A.; Portinale, L.; Minichino, M.; Ciancamerla, E., "Improving the Analysis of Dependable Systems by Mapping Fault Trees into Bayesian Networks." Reliability Engineering and System Safety 2001, 71, (3), 249-260.
Helminen, A.; Pulkkinen, U., "Quantitative Reliability Estimation of a Computer-Based Motor Protection Relay Using Bayesian Networks." Safecomp 2003, 2788, 92-102.
Blake, J. T.; Reibman, A. L.; Trivedi, K. S., "Sensitivity Analysis of Reliability and Performability Measures for Multiprocessor Systems", ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, Santa Fe, New Mexico, United States.
Castillo, E.; Mínguez, R.; Castillo, C., "Sensitivity Analysis in Optimization and Reliability Problems." In ESREL 2006, 2006.
Gunawan, R.; Cao, Y.; Petzold, L.; Doyle, F. J., III, "Sensitivity Analysis of Discrete Stochastic Systems." Biophysical Journal 2005, 88, 2530-2540.
McCandless, L. C.; Gustafson, P.; Levy, A., "Bayesian Sensitivity Analysis for Unmeasured Confounding in Observational Studies." Statistics in Medicine 2006, 26, 2331-2347.
Castillo, E.; Kjaerulff, U.; Gaag, L. v. d., "Sensitivity Analysis in Normal Bayesian Networks"; 2001.
Helton, J. C., "Uncertainty and Sensitivity Analysis for Models of Complex Systems." In Computational Methods in Transport: Verification and Validation, Springer Berlin Heidelberg, 2008; Vol. 62, pp. 207-228.
"Bayesian Sample Size Computations in Complex Models with Application to Repeated Measures Random Effects Model Design"; 1998. Wang, H. "Building Bayesian Networks: Elicitation, Evaluation, and Learning." University of Pittsburgh, Pittsburgh, 2004. Journal of Integrated Design and Process Science


Chan, H.; Darwiche, A., "Sensitivity Analysis in Markov Networks", Nineteenth International Joint Conference on Artificial Intelligence, Edinburgh, Scotland, 2005; pp. 1300-1305.
Coupé, V. M. H.; Gaag, L. C. v. d., "Properties of Sensitivity Analysis of Bayesian Belief Networks." Annals of Mathematics and Artificial Intelligence 2002, 36, 323-356.
Wang, H.; Rish, I.; Ma, S., "Using Sensitivity Analysis for Selective Parameter Update in Bayesian Network Learning"; 2002.
"DTAS." http://www.research.microsoft.com/dtas
Sahin, F.; Yavuz, M. Ç.; Arnavut, Z.; Uluyol, Ö., "Fault Diagnosis for Airplane Engines Using Bayesian Networks and Distributed Particle Swarm Optimization." Parallel Computing 2007, 33, (2), 124-143.
Kang, Y.; Wang, C.-C.; Chang, Y.-P., "Gear Fault Diagnosis in Time Domains by Using Bayesian Networks." Advances in Soft Computing 2007, 41, 618-627.
Romessis, C.; Mathioudakis, K., "Bayesian Network Approach for Gas Path Fault Diagnosis." Journal of Engineering for Gas Turbines and Power 2006, 128, (1), 64-73.
Cowell, R. G.; Dawid, A. P.; Lauritzen, S. L.; Spiegelhalter, D. J., "Probabilistic Networks and Expert Systems." Springer-Verlag: New York, NY, 1999.
Jensen, F. V., "Bayesian Networks and Decision Graphs." Springer-Verlag: New York, NY, 2001.
Pearl, J., "Probabilistic Reasoning in Intelligent Systems." Morgan Kaufmann: San Francisco, CA, 1988.
Langseth, H.; Portinale, L., "Bayesian Networks in Reliability"; 2005.
Spiegelhalter, D. J.; Lauritzen, S. L., "Sequential Updating of Conditional Probabilities on Directed Graphical Structures." Networks 1990, 20, 579-605.
Tessem, B., "Interval Probability Propagation." International Journal of Approximate Reasoning 1992, 7, 95-120.
Laskey, K. B., "Sensitivity Analysis for Probability Assessments in Bayesian Networks." IEEE Transactions on Systems, Man, and Cybernetics 1995, 25, 901-909.
Doguc, O., "An Assessment, Monitoring and Diagnosis (AMD) Tool for System Operational Effectiveness." Master of Engineering thesis, Stevens Institute of Technology, Hoboken, 2006.
Gaurav, B., "Auto-diagnosis of Field Problems in an Appliance Operating System", USENIX Annual Technical Conference, San Diego, California, 2000; USENIX Association: Berkeley, CA, USA, 2000; p. 24.
Kopec, D.; Cox, J.; Marsland, A., "The Computer Science and Engineering Handbook." 2nd ed.; CRC Press, 2004; Vol. 63.

10. Authors' Biographies

Ozge Doguc is a Business Analyst at Asset Control Inc. in New York, NY. Prior to that, she worked as a Research Assistant in the Systems Engineering Department at Stevens Institute of Technology between 2004 and 2006. She holds an M.E. degree in Systems Engineering from Stevens Institute of Technology. She graduated from Beykent University in Istanbul, Turkey, with B.S. degrees in Mathematics and Computer Sciences and in Management Information Systems. She is currently a PhD candidate in Systems Engineering, and her research interests include Reliability Estimation in Complex Systems, Grid Systems, and System Operational Effectiveness.

Dr. Jose E. Ramirez-Marquez is an Assistant Professor at Stevens Institute of Technology in the Department of Systems Engineering. His research interests include system reliability and quality assurance, uncertainty modeling, advanced heuristics for system reliability analysis, applied probability and statistical models, and applied operations research. He has authored more than 12 articles in leading technical journals on these topics. He obtained his Ph.D. in Industrial and Systems Engineering at Rutgers University. He received his B.S. degree in Actuarial Science from UNAM in Mexico City in 1998 and M.S. degrees in Industrial Engineering and Statistics from Rutgers University.


11. Appendix

Acronyms
BN     Bayesian Network
DFS    Depth First Search
DFC    Diagnose Failed Component
BFS    Best First Search
CPT    Conditional Probability Table
MTBF   Mean Time Between Failure

Nomenclature
p(A)       Probability of an event A
p(H|E)     Probability of event H given the evidence E
Xi         Component i in the system
PreviousX  Set of CPTs from the previous observation of the system
CurrentX   Set of CPTs from the current observation of the system
δ          Change in the reliability of a component
t          Time interval between two observations
u          Maximum number of children in the BN
n          Total number of nodes in the BN
B          BN to represent a complex system
s          The set of two nodes Xi and Xj in the BN


2009 Society for Design and Process Science Printed in the United States of America

ANALYSIS, MODELING AND SIMULATION OF NETWORK TRACES FOR VIDEO TRANSMISSION OVER IP NETWORKS

Ming Yang
Mathematical and Computer Science Department, Jacksonville State University, AL, USA

Nikolaos Bourbakis
College of Engineering and Computer Science, Wright State University, OH, USA

In the best-effort IP network, quality of service (QoS) cannot be guaranteed, and thus packets may be delayed or lost. Packet delay/loss inevitably degrades the perceptual quality of real-time multimedia-over-IP services, such as Voice-over-IP (VoIP) and Video-on-Demand (VoD). In general, packet loss/delay exhibits temporal dependence. In order to efficiently conduct error recovery/concealment and improve the perceptual quality of the transmitted multimedia content, packet loss/delay has to be precisely modeled. Different mathematical models, such as the Bernoulli Model, Gilbert Model, and Extended Gilbert Model, have been proposed to model network traces. However, none of them is able to model the networked multimedia trace as precisely as the General Markov Model (GMM), which has rarely been applied in practice due to its massive computational resource requirements. With the development of modern computing hardware and parallel processing algorithms, GMM is becoming computationally feasible. In this paper, in order to obtain network traces for real-time VoD transmission, different connections have been set up to simulate a real VoD system. Data packets have been transmitted between the server and clients under the RTP/UDP/IP protocol stack. Different models have been applied to analyze and model the obtained video transmission network traces. Specifically, a 6-state GMM has been applied to analyze the network trace, and the parameterized model has been obtained for further error recovery/concealment. Compared to the other models, GMM offers the best modeling precision in terms of loss-run distribution and Forward Error Correction (FEC) performance prediction. The parameterized GMM is very useful for modeling and analyzing network traces and further improving the QoS of multimedia-over-IP based on the modeling and analysis.

Keywords: modelling, simulation, network traces, video transmission, IP networks

1. Introduction

Multimedia streaming applications over current best-effort network architectures always suffer from packet loss (Xue et al., 2001, and Markovski et al., 2001). For example, in a VoD application (Bourbakis et al., 2003), the multimedia stream will suffer both video packet loss and audio packet loss, which can lead to serious media quality degradation and desynchronization between the audio and video streams (Bertino et al., 1998, Meyer et al., 1993, Little et al., 1990, and Schmidt et al., 1995). Basically, video data is more sensitive to packet loss than to arrival delay, while audio data is more sensitive to network delay than to packet loss. Current video/audio coding standards, such as MPEG-X and H.26X, do not offer efficient approaches to deal with packet loss. In order to efficiently compensate for packet loss, the behavior and pattern of packet loss need to be precisely modeled and simulated.


In video streaming over a packet-switched network, one frame is packetized into several packets. Usually the macro-blocks along the raster scanning path (called a slice) are packetized to construct a packet. The MPEG standard suggests that the number of packets that form an I-frame is three times the number of packets that form a P-frame, which is in turn five times the number of packets that form a B-frame. In a packet-lossy environment, the decoding and reconstruction of the video frames will be degraded by the packet loss. As can be seen in Figure 1, the macro-blocks surrounded by red lines are damaged due to packet loss.


Fig. 1 Frame damage due to packet loss.

In the Group-of-Pictures (GOP) structure defined in MPEG-X and H.26X, the correct decoding of a P-frame depends on the correct decoding of the preceding I-frame or P-frame, and the correct decoding of a B-frame depends on the correct decoding of both the preceding and succeeding I- or P-frames (Figure 2). Due to the inter-frame dependency in the GOP, a single packet loss may be disastrous (Tekalp 1995). For example, if the first I-frame in the GOP is damaged, all the following P-frames and B-frames that belong to the same GOP cannot be reconstructed. The error will propagate until the beginning of the next GOP, and thus the number of damaged pictures can be very large. This phenomenon is called error propagation, and it is illustrated in Figure 3.


Fig. 2 Coding dependence in a group of pictures (GOP).



Fig. 3 Error propagation in a GOP.

Different approaches have been proposed to deal with video packet loss (Wah et al., 1999, Wang et al., 2000, Perkins et al., 1998). Aly and Youssef (Aly et al., 2000, 2001) have addressed lip synchronization in a video-lossy environment. Their work was based on the frame level, and one of their assumptions was that a whole frame is either delivered successfully or lost entirely. In fact, finer error recovery can be implemented at the packet level. The main goal of the research in this paper is video packet loss modeling for MPEG-X/H.26X data transmission. There are different Internet transmission protocols for multimedia communication. Due to the error control scheme in the TCP protocol, multimedia data can be reliably transmitted but with nearly unacceptable delay. Obviously, TCP transmission is not suitable for real-time multimedia applications such as Video-on-Demand (VoD). With UDP transmission, the network delay can be considered negligible, but the quality of service cannot be guaranteed in terms of packet loss rate. Thus, the UDP protocol is more suitable for real-time VoD applications but requires error recovery in case of packet loss. In order to efficiently recover from packet loss and bit errors, the statistics and modeling of packet loss within Internet traffic have to be investigated.

2. Network Loss Patterns

Packet loss results from two situations. In the first case (network loss), the packet is lost during transmission and cannot reach the destination (Koodli et al., 1999). In the second case (late loss), the arriving packet is discarded because it arrives too late and misses the playback deadline (Alexandraki et al., 2005). Basically, these two kinds of packet loss make no difference to end users in terms of perceptual quality degradation. The effects of network loss and late loss are illustrated in Figure 4.


Fig. 4 Network loss and late loss.


Simple metrics such as average loss, average delay, and network jitter are not able to accurately and adequately describe network packet loss patterns. In order to better model the network packet loss behavior, network loss can be characterized with two more metrics: the first is the distribution of the length of bursty loss (loss-run), i.e., the number of consecutively lost packets in a bursty loss (Sanneck et al., 2005); the second, known as Inter-Loss Distance (ILD) or good-run (Yang et al., 2006), is the distribution of the distance between bursty losses. The basic metrics used to evaluate network packet loss are illustrated in Figure 5.
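A minimal sketch (illustrative Python, not from the paper) of extracting both metrics from a binary trace, where 1 marks a lost packet and 0 a delivered one:

    from itertools import groupby

    def loss_metrics(trace):
        # Collapse the trace into runs; loss-runs are runs of 1s, and the
        # inter-loss distances are the interior runs of 0s between bursts.
        runs = [(bit, sum(1 for _ in grp)) for bit, grp in groupby(trace)]
        loss_runs = [n for bit, n in runs if bit == 1]
        ild = [n for bit, n in runs[1:-1] if bit == 0]
        return loss_runs, ild

    print(loss_metrics([0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0]))
    # -> ([2, 1, 3], [3, 1])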


Fig. 5 Network packet loss patterns.

VoD is basically one-way communication: the server continuously delivers streamed video content to the client, and thus interactivity is not so important. The playback deadline (Alexandraki et al., 2005) depends on two factors: (1) the waiting time that is tolerable to the end-viewers; (2) the size of the client buffer. End-users can typically tolerate a delay on the order of seconds before the movie actually starts. Buffer size is usually limited in hardware implementations of the client decoder, but it is not an issue in software decoder implementations on a general computer. The clients of VoD can usually tolerate an initial delay of up to 10 seconds, which can be guaranteed with a relatively large buffer. In this case, packet delay can always be compensated with the receiver buffer, and thus only packet loss is considered here. The performance of error concealment and FEC depends on the final loss pattern of the network packets. Network loss will inevitably degrade the perceptual quality of the multimedia presentation. Different network behavior patterns also directly affect the performance of error concealment. As we know, for a data block that consists of n data packets, at most one packet loss or error can be recovered by the FEC code, and thus FEC is suitable for random, sparse, non-bursty packet loss. For bursty network loss, FEC needs to be combined with data interleaving to achieve similar performance. Based on these considerations, we will need to design different error concealment strategies for different network behavior patterns.

3. System Modelling Methodologies

In order to quantitatively model the network packet loss behavior, several models have been proposed in recent research (Yajnik et al., 1999). In order of increasing complexity, these are the Bernoulli Model, Gilbert Model, Extended Gilbert Model, and GMM (Jiang et al., 2000). A more complex model uses more parameters and more states to describe the packet loss patterns, and thus achieves more accuracy.

3.1. Bernoulli Model

The Bernoulli Model has only one parameter: the average loss rate $p$. By definition, $p = m_1/m$, where $m_1$ is the number of lost packets and $m$ is the total number of packets. With the Bernoulli Model, the loss-run distribution is $p_k = p^{k-1}(1-p)$, where $p_k$ is the probability of a loss-run of length $k$.
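A sketch of parameterizing the Bernoulli Model from such a binary trace and evaluating its loss-run prediction (illustrative names):

    def bernoulli_loss_run(trace, k):
        # p is the average loss rate; p_k = p**(k-1) * (1 - p).
        p = sum(trace) / len(trace)
        return p ** (k - 1) * (1 - p)

    trace = [0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0]   # 6 losses in 12 packets
    print(bernoulli_loss_run(trace, 1))  # -> 0.5
    print(bernoulli_loss_run(trace, 3))  # -> 0.125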


Due to its limited number of parameters, the Bernoulli Model is not able to model network packet loss very precisely. With the Bernoulli Model, the single packet loss probability is usually over-estimated, and the probability of longer loss-runs is usually under-estimated. Because of the over-estimated single packet loss probability, the performance of FEC is always over-estimated.

3.2. Gilbert Model

The Gilbert Model is illustrated in Figure 6. In the Gilbert Model, $p$ is the probability that the next packet is lost provided that the previous one arrived successfully, and $q$ is the probability of the opposite case. $1-p$ is the probability that a delivered packet is followed by another delivered packet, while $1-q$ is the probability that a lost packet is followed by another lost packet.


Fig. 6 Gilbert model.

From the definition, we can compute the probabilities of states 0 and 1:

$$\pi_0 = \frac{q}{p+q}, \qquad \pi_1 = \frac{p}{p+q},$$

which also represent the average arrival probability and the average loss probability. The probability distribution of loss-runs with respect to loss length $k$ can be computed as $p_k = (1-q)^{k-1} q$. Obviously, the Gilbert Model describes the network packet loss behavior more accurately than the Bernoulli Model does. Sanneck, Carle and Koodli (Sanneck et al., 2005) recommended using the Gilbert Model to model the temporal dependency of packet loss. However, the Gilbert Model does not fully characterize the loss pattern because it does not have enough states.

3.3. Extended Gilbert Model

Unlike the GMM, which assumes that the past $n$ events affect future events, the Extended Gilbert Model assumes that only the past $n$ consecutive lost packets affect a future loss. The parameters to be determined in the Extended Gilbert Model are $P[X_i \mid X_{i-1} \text{ to } X_{i-n} \text{ are lost}]$. The overall packet loss pattern can be modeled with the corresponding transition matrix:

 p00 p  01 P= 0   ...  0 

Transactions of the SDPS

p10

p20 ...

0

0

...

0

p12

0

...

0

...

...

...

...

0

0

... p( n − 2)( n −1)

p( n − 2) 0

p( n −1) 0  0  0   ...  p( n −1)( n −1) 

(1)


where $p_{01} = \left(\sum_{i=1}^{n-1} m_i\right)/m_0$, $p_{(k-1)(k)} = \left(\sum_{i=k}^{n-1} m_i\right) / \left(\sum_{i=k-1}^{n-1} m_i\right)$, and $p_{(k-1)(0)} = 1 - p_{(k-1)(k)}$ ($m_i$ denotes the number of bursty packet losses with $i$ consecutive lost packets, and $m_0$ denotes the number of successfully delivered packets). The loss probabilities $(\pi_0, \pi_1, \ldots, \pi_{n-2}, \pi_{n-1})$ can be computed as follows:

$$P \times \begin{pmatrix} \pi_0 \\ \pi_1 \\ \vdots \\ \pi_{n-1} \end{pmatrix} = \begin{pmatrix} \pi_0 \\ \pi_1 \\ \vdots \\ \pi_{n-1} \end{pmatrix} \quad \text{and} \quad \sum_{i=0}^{n-1} \pi_i = 1 \qquad (2)$$
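As a sketch (illustrative names and data, not from the paper) of how these transition probabilities can be obtained from loss-run counts, the following helper applies the formulas above; the self-loop of the last state is omitted for brevity:

    def extended_gilbert(m, n):
        # m[0]: number of delivered packets; m[i] (i >= 1): number of bursts
        # of exactly i consecutive losses; n: number of model states.
        p01 = sum(m[1:n]) / m[0]
        probs = {(0, 1): p01, (0, 0): 1 - p01}
        for k in range(2, n):
            stay = sum(m[k:n]) / sum(m[k - 1:n])   # p_(k-1)(k)
            probs[(k - 1, k)] = stay
            probs[(k - 1, 0)] = 1 - stay
        return probs

    # Counts from the example trace used earlier: 6 delivered packets and one
    # burst each of lengths 1, 2 and 3.
    print(extended_gilbert([6, 1, 1, 1], n=4))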

The Extended Gilbert Model fully describes the bursty packet loss pattern. However, it has two main drawbacks. (1) It does not have the ability to describe the distance between adjacent loss bursts. This disadvantage can be compensated with the ILD metric, which describes the distance between bursty packet losses. The ILD metric is useful in two respects. First, if many of the loss runs are close to each other, then these packets have small inter-loss distances, and thus the final perceptual quality is going to be degraded. Second, small ILDs may also degrade the performance of FEC. The Extended Gilbert Model therefore needs to be combined with the ILD metric to describe the network packet loss behavior more precisely. (2) The Extended Gilbert Model has an upper bound on the length of the loss-run. Loss-runs with lengths of nearly ten thousand packets sometimes occur in real network traces, and the Extended Gilbert Model does not have the capability to describe this kind of trace.

3.4. The n-order General Markov Model (GMM)

In the Extended Gilbert Model, the major assumption is that the loss probability of a certain packet depends on the past $n$ consecutively lost packets. This assumption is not always true in reality. In the GMM, the loss probability of a certain packet depends on the loss behavior of the past $n$ packets. The GMM has the ability to fully describe the network loss pattern, including the bursty loss runs and the inter-loss distances, but it has many states and is thus computationally demanding. Let $X_i$ denote the binary event for the $i$-th packet, 1 for loss and 0 for non-loss. The parameters to be determined in an $n$-th order GMM are $P[X_i \mid X_{i-1}, X_{i-2}, \ldots, X_{i-n}]$ for all combinations of $X_i, X_{i-1}, X_{i-2}, \ldots, X_{i-n}$. A general 3-state GMM is illustrated in Figure 7.


Fig. 7 A 3-state general Markov model.

The GMM is a powerful tool for describing the relationship between events that are temporally related to each other. Intuitively, it makes sense to use the GMM to describe the network loss pattern. First of all, the successful arrivals or losses of adjacent packets are temporally related to each other: if the preceding packet is lost, the succeeding packet is also highly likely to be lost, whether through network loss or late loss.


Wah and Su (Wah et al., 1999) tested Internet loss behavior by transmitting video data packets from Hong Kong to Japan and the USA, and they found that Internet packet loss tends to be bursty rather than random. This means consecutive packet losses are more likely to happen than isolated single losses. For example, if packet n is discarded because of delay, then packet n+1 is also likely to be delayed and discarded. Secondly, the GMM drops the assumption made by the Extended Gilbert Model and is thus more generic. Finally, since the GMM has more states than the Extended Gilbert Model, it may be able to describe the network loss behavior more precisely. The GMM used to be considered computationally demanding. However, with the advance of modern computing hardware and software, the originally infeasible high-order GMM is now becoming computationally feasible. It is possible to make use of parallel computers or computing clusters for computationally demanding tasks, so it is now feasible to investigate the GMM with higher orders and more states.

3.5. Pareto Model

The Pareto distribution has been used by Markovski (Markovski 2000) to model the loss-run distribution of network traces, based on end-to-end measurements of network traffic. The Pareto distribution can be modeled with a shape parameter $\beta$ and a location parameter $\alpha$. The probability density function of the Pareto distribution is:

$$f(x) = \beta \alpha^{\beta} x^{-\beta-1}, \quad \text{for } \alpha, \beta > 0,\ x \geq \alpha \qquad (3)$$

The corresponding cumulative distribution function of the Pareto distribution is:

$$F(x) = P(X \leq x) = 1 - \left(\frac{\alpha}{x}\right)^{\beta}, \quad \text{for } \alpha, \beta > 0,\ x \geq \alpha \qquad (4)$$
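As a sketch, loss-run lengths following this distribution can be drawn by inverting the CDF of equation (4): for $u$ uniform on $(0,1)$, $x = \alpha (1-u)^{-1/\beta}$. The parameter values below are illustrative:

    import random

    def pareto_sample(alpha, beta, rng=random):
        # Inverse-CDF sampling of the Pareto distribution in equation (4).
        u = rng.random()
        return alpha * (1.0 - u) ** (-1.0 / beta)

    random.seed(0)
    print([round(pareto_sample(1.0, 1.5), 2) for _ in range(5)])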

The Pareto distribution is an accurate model for describing the loss-run distribution of network traffic. Its drawback is that it does not capture the inter-loss distance or the temporal dependence between losses. The Pareto distribution can be combined with the Gilbert Model, Extended Gilbert Model, or GMM to model network traffic.

4. IP Traffic Trace Generation

Here, in order to simulate the network trace of video streaming (Kamath et al., 2002), the data packets need to be generated as if they were produced by a real video streaming application. We are mainly concerned with video streaming for Video-on-Demand (VoD). For VoD service, Variable Bit Rate (VBR) video coding is used. In VBR coding, constant video quality is maintained, which is perceptually comfortable to end viewers. The video perceptual quality does not have to be very high, because human eyes can adapt to the quality of the displayed content. However, if the source video is Constant Bit Rate (CBR) coded, the quality of the video content (in the sense of PSNR) will fluctuate, which is unpleasant to end viewers. Thus, in the application scenario of VoD, VBR video coding is adopted. On the other hand, for VoD applications, a high average bitrate of 1 Mbps to 8 Mbps is usually used, depending on the available network bandwidth. The popular video coding standards, such as MPEG-2, MPEG-4, and H.264/AVC, generate video data streams at different rates. For example, the bitrate of MPEG-1 is generally 1.5 Mbps with a maximum of 1.86 Mbps. For MPEG-2 Main Profile @ Main Level, the bitrate ranges from 1 Mbps to 10 Mbps. A video stream with DVD quality is usually 5 Mbps. The latest video coding standard, H.264/AVC, covers a wide range of applications, and its bitrate ranges from 64 Kbps to 10 Mbps.


For VoD service, the video stream is delivered over the best-effort IP network, so it is also important to investigate the network conditions and bandwidth. For the public switched telephone network (PSTN), the download speed is 56 Kbits/s and the upload speed is 33.6 Kbits/s. For ISDN, the maximum rate is 128 Kbits/s over two 64 Kbits/s channels. For DSL, the download speed is generally 1.5 Mbits/s. The maximum speed of cable TV transmission is 1 Mbits/s. A T1 connection (also referred to as a DS1 line) with a dedicated phone line can have a rate of up to 1.544 Mbits/s. Ethernet can reach a transmission rate of up to 1 Gbits/s. In wireless connections, IEEE 802.11b can have a rate of up to 11 Mbits/s, and IEEE 802.11g up to 54 Mbits/s. A Bluetooth connection has a communication speed of 723 kbits/s within a range of 10 meters in an open-air environment and 5 meters in an office environment.

4.1. Network Connections

The architecture of the client-server model used in this study is illustrated in Figure 8. As can be seen, three connections have been set up. We use a workstation in the lab as the VoD server, which is connected through the Internet with three clients: one local client in the same local area network (LAN), one local client in a neighborhood of the same city, and one remote client in Indiana. The VoD server simulates VoD service by continuously sending UDP packets to these three clients.


Fig. 8 Network connections for trace measurement.

In order to generate the video trace, we assume that the video source has a bitrate of 1.5 Mbps. The packet payload size is 1460 bytes, and the content is random. The data packets are transmitted under the RTP/UDP/IP protocol stack. As we know, the protocol for real-time multimedia service is UDP, in order to reduce the network overhead. Including the UDP header (8 bytes) and IP header (20 bytes), the overall packet size is 1488 bytes, which is around the same as the Maximum Transmission Unit (MTU) size of 1500 bytes. This size is assumed in most research; the reason for making the packet size close to the largest allowed MTU size is to reduce the header overhead. We ran 10 experiments for each connection, sending 20000 packets to each client in each experiment. During each experiment, the video server sends data packets to all three clients in order to simulate simultaneously serving multiple clients with background traffic.

4.2. Experimental Data

Here is the configuration of the system structure:
• VoD Server: 130.108.15.124, Port #: 2009 (Located in WSU, Dayton, OH, USA)
• VoD Client (Local): 130.108.15.122, Port #: 2009 (Located in WSU, Dayton, OH, USA)
• VoD Client (Local): 69.223.45.97, Port #: 2009 (Located in Dayton, OH, USA)
• VoD Client (Remote): 69.209.48.139, Port #: 2009 (Located in Highland, IN, USA)
• VoD Client (Remote): 69.209.75.50, Port #: 2009 (Located in Highland, IN, USA)


Table 1 Network connections.

Connection #  Source IP       Destination IP  Bitrate    Payload Size  Payload Content  Packet Size
Cnx-1         130.108.15.124  130.108.15.122  1.5 Mbps   1460          Random           1488
Cnx-2         130.108.15.124  69.223.45.97    1.5 Mbps   1460          Random           1488
Cnx-3         130.108.15.124  69.209.48.139   1.5 Mbps   1460          Random           1488
Cnx-4         130.108.15.124  69.209.75.50    1.25 Mbps  1460          Random           1488

Table 2 Local area network connection.
Sender: 130.108.15.124 (port #: 2352) → Receiver: 130.108.15.122 (port #: 2009)

Trace  Date        Time     Bitrate  Packets  Loss  Loss Ratio  Delay  Jitter
01     09/15/2005  10:44PM  1.5Mbps  20000    0     0%          3ms    0ms
02     09/15/2005  10:57PM  1.5Mbps  20000    0     0%          2ms    0ms
03     09/15/2005  11:12PM  1.5Mbps  20000    0     0%          3ms    0ms
04     09/15/2005  11:22PM  1.5Mbps  20000    0     0%          8ms    0ms
05     09/15/2005  11:47PM  1.5Mbps  20000    0     0%          3ms    0ms
06     09/15/2005  11:59PM  1.5Mbps  20000    0     0%          2ms    0ms
07     09/16/2005  00:05AM  1.5Mbps  20000    0     0%          3ms    0ms
08     09/16/2005  00:12AM  1.5Mbps  20000    0     0%          3ms    0ms
09     09/16/2005  00:25AM  1.5Mbps  20000    0     0%          2ms    0ms
10     09/16/2005  00:37AM  1.5Mbps  20000    0     0%          3ms    0ms

Table 3 Neighborhood connection.
Sender: 130.108.15.124 (port #: 2352) → Receiver: 69.209.41.40 (port #: 2009)

Trace  Date        Time     Bitrate  Packets  Loss  Loss Ratio  Delay  Jitter
01     09/20/2005  08:28PM  1.5Mbps  20000    40    0.2%        5ms    0ms
02     09/20/2005  08:57PM  1.5Mbps  20000    45    0.225%      5ms    0ms
03     09/20/2005  09:02PM  1.5Mbps  20000    32    0.16%       5ms    0ms
04     09/20/2005  09:26PM  1.5Mbps  20000    1     0.005%      3ms    0ms
05     09/20/2005  09:28PM  1.5Mbps  20000    47    0%          5ms    0ms
06     09/20/2005  09:42PM  1.5Mbps  20000    112   1%          6ms    0ms
07     09/20/2005  09:55PM  1.5Mbps  20000    520   3%          44ms   1ms
08     09/20/2005  10:18PM  1.5Mbps  20000    4     0%          3ms    0ms
09     09/20/2005  10:26PM  1.5Mbps  20000    1     0%          4ms    0ms
10     09/20/2005  11:02PM  1.5Mbps  20000    24    0%          5ms    0ms


Table 4 Remote connection (OH → IN).
Sender: 130.108.15.124 (port #: 2352) → Receiver: 69.209.48.139 (port #: 2009)

Trace  Date        Time     Bitrate  Packets  Loss  Loss Ratio  Delay  Jitter
01     09/16/2005  11:34AM  1.5Mbps  20000    2829  14%         332ms  2ms
02     09/16/2005  11:57AM  1.5Mbps  20000    2820  14%         332ms  2ms
03     09/16/2005  12:12PM  1.5Mbps  20000    2825  14%         321ms  2ms
04     09/16/2005  12:22PM  1.5Mbps  20000    2834  14%         329ms  2ms
05     09/16/2005  12:47PM  1.5Mbps  20000    2891  14%         324ms  2ms
06     09/16/2005  12:59PM  1.5Mbps  20000    2839  14%         330ms  2ms
07     09/16/2005  01:08PM  1.5Mbps  20000    2821  14%         331ms  2ms
08     09/16/2005  01:22PM  1.5Mbps  20000    2858  14%         330ms  2ms
09     09/16/2005  01:28PM  1.5Mbps  20000    2894  14%         325ms  2ms
10     09/16/2005  01:37PM  1.5Mbps  20000    2850  14%         328ms  2ms

Table 5 Remote connection (OH → IN).
Sender: 130.108.15.124 (port #: 2352) → Receiver: 69.209.75.50 (port #: 2009)

Trace  Date        Time     Bitrate   Packets  Loss  Loss Ratio  Delay  Jitter
01     09/19/2005  07:44PM  1.25Mbps  20000    8     0%          24ms   1ms
02     09/19/2005  07:57PM  1.25Mbps  20000    72    0%          118ms  3ms
03     09/19/2005  08:02PM  1.25Mbps  20000    72    0%          118ms  3ms
04     09/19/2005  08:16PM  1.25Mbps  20000    24    0%          17ms   1ms
05     09/19/2005  08:28PM  1.25Mbps  20000    37    0%          41ms   2ms
06     09/19/2005  08:59PM  1.25Mbps  20000    23    0%          35ms   1ms
07     09/19/2005  09:25PM  1.25Mbps  20000    3     0%          64ms   2ms
08     09/19/2005  09:48PM  1.25Mbps  20000    2     0%          88ms   2ms
09     09/19/2005  10:12PM  1.25Mbps  20000    126   1%          38ms   1ms
10     09/19/2005  10:37PM  1.25Mbps  20000    549   3%          48ms   2ms

As can be seen from Tables 1-5, within the local area network there is no loss at all. In the neighborhood connection there is very little loss, which can be neglected. A high loss percentage (14%) has been observed on the connection between Ohio and Indiana at a data rate of 1.5 Mbps. With the same connection between Ohio and Indiana, the data rate was lowered to 1.25 Mbps, and the loss ratio was thereby reduced to a very low level. The loss behaviors of the different traces are also graphically illustrated in Figure 9. Our future work will concentrate on the network traces obtained with the connection between Ohio and Indiana.

5. Experimental Results

5.1. Simulation Strategies

In order to investigate the effectiveness of the Bernoulli Model, Gilbert Model, and GMM, we have applied these models to the network traces obtained above. In the tests, the following simulation strategies have been applied (a sketch of steps (2)-(3) for the Gilbert Model is given after this list):
(1) Each network traffic trace can be represented as a binary array, in which 1 represents a lost packet and 0 represents a successfully delivered packet. The whole binary sequence can be viewed as a random process.
(2) The models are parameterized with the original network trace. For the Bernoulli Model, we need to estimate the mean loss ratio; for the Gilbert Model, we need the transition probabilities from 0 to 1 and from 1 to 0; for the GMM, we need the transition matrix, which describes the transition probabilities from one state to another, and the state probability for each state.
(3) Generate pseudo traces with the parameterized models. Since we basically use the models to predict the statistical properties of the traces, we generate traces with each model to evaluate the precision of the prediction.
(4) Compare the generated traces with the original traces in terms of statistical properties such as mean loss rate, loss-run distribution, inter-loss distance distribution, the performance of FEC (an indirect accuracy measurement), etc.
(5) In order to minimize the random error introduced by the pseudo random number generator when generating the pseudo network traces, the generation program has been run multiple times and the averages of the statistical properties have been taken.
(6) In order to avoid the randomness of any particular network traffic trace, different traces have been analyzed and averaged to generate the parameters of each model, which more precisely reflect the statistical properties of real networked multimedia transmission.
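As a minimal sketch of steps (2) and (3) for the Gilbert Model (assumed helper names; the input trace must contain both losses and deliveries), the two transition probabilities are estimated from an original binary trace and then used to generate a pseudo trace:

    import random

    def fit_gilbert(trace):
        # Estimate p (0 -> 1) and q (1 -> 0) from consecutive packet pairs.
        n01 = n0 = n10 = n1 = 0
        for prev, cur in zip(trace, trace[1:]):
            if prev == 0:
                n0 += 1
                n01 += cur
            else:
                n1 += 1
                n10 += 1 - cur
        return n01 / n0, n10 / n1

    def gilbert_trace(p, q, length, seed=0):
        # Generate a pseudo trace by simulating the two-state chain.
        rng, state, out = random.Random(seed), 0, []
        for _ in range(length):
            out.append(state)
            if state == 0:
                state = 1 if rng.random() < p else 0
            else:
                state = 0 if rng.random() < q else 1
        return out

    p, q = fit_gilbert([0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0])
    pseudo = gilbert_trace(p, q, length=20000)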


Fig. 9 Loss ratios for experiments with different connections (LAN, neighborhood, and remote connections at 1.5 Mbps, and remote connection at 1.25 Mbps).

5.2. Markov Chain Order Estimation

When we try to set up a GMM, one of the major concerns is determining the appropriate order of the Markov chain. In other words, we need to know how many previous states determine the status of the current event. Finesso, Liu, and Narayan (Finesso et al., 1996) and Merhav, Gutman, and Ziv (Merhav et al., 1989) described algorithms for order estimation. In the proposed algorithm, an empirical k-th order conditional entropy is defined, which indicates the predictive power of a history of length k. The goal is to find the shortest history that has the same predictive power as a predefined maximum history length. For example, suppose that the maximum possible order of the GMM is a predefined integer m, obtained through empirical experiments, and that the actual order is an integer smaller than m. In order to obtain the actual order of the GMM, we calculate the conditional entropy of the GMM at orders m, m-1, m-2, m-3, ..., 1, 0. If the conditional entropy starts to decrease at order n, the history of length n+1 has the same predictive power as the predefined maximum history length m, and the order of the GMM is therefore n+1. It is found that an order of 6 is generally sufficient for the prediction of network traffic traces. In the conducted experiments, we therefore use a GMM of order 6.
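A sketch of the empirical conditional entropy computation (assumptions: binary trace, base-2 logarithms, illustrative names); the order can then be chosen by comparing cond_entropy(trace, k) for k = m down to 0:

    from collections import Counter
    from math import log2

    def cond_entropy(trace, k):
        # Empirical H(X_i | X_{i-1}, ..., X_{i-k}) of a 0/1 trace.
        joint = Counter(tuple(trace[i - k:i + 1]) for i in range(k, len(trace)))
        hist = Counter(tuple(trace[i - k:i]) for i in range(k, len(trace)))
        n = len(trace) - k
        return -sum(c / n * log2(c / hist[ctx[:-1]])
                    for ctx, c in joint.items())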


To parameterize a GMM, we need to generate the transition matrix. Table 6 shows part of the transition matrix for Trace #01.

Table 6 Part of the transition matrix of the 6-order GMM for Trace #1.

State #  State   State Count  State Ratio  0 Count  0 Ratio   1 Count  1 Ratio
0        000000  4419         0.221005     2462     0.55714   1957     0.44286
1        100000  2376         0.11883      1956     0.823232  420      0.176768
2        010000  2378         0.11893      2259     0.949958  119      0.0500421
3        110000  118          0.00590148   117      0.991525  1        0.00847458
4        001000  2454         0.122731     2357     0.960473  97       0.0395273
5        101000  24           0.0012003    21       0.875     3        0.125
6        011000  77           0.00385096   77       1.0       0        0.0
7        111000  42           0.00210053   41       0.97619   1        0.0238095

5.3. Performance Evaluation – Loss-Run Distribution

We have applied the Bernoulli Model, Gilbert Model, and GMM to predict the loss-run distributions of different traces. In Figure 10, the performance of the different models on traces #1, #2, #3, and #4 is plotted. As can be seen, the parameterized GMM has the best capability to predict the loss-run distributions of the network traces, because the loss-run distribution of the network trace generated by the parameterized GMM is always close to that of the original network trace. It can also be observed that the Bernoulli Model tends to over-estimate the loss probabilities for loss-runs longer than 1 packet. The Gilbert Model predicts the loss-runs of lengths 1 and 2 very well, but it performs poorly on the prediction of loss-runs longer than 2 packets.

5.4. Performance Evaluation – Forward Error Correction

One of the major applications of the network packet loss models is predicting the performance of FEC. Here we choose a block size of 3. If only one packet within a block is lost, then FEC has the capability to recover it. If more than one packet is lost within the block, then the lost packets are un-recoverable by FEC. The number of un-recoverable lost packets and the un-recoverable ratio are computed under each model for each of the ten traces. The performance of the different models in terms of FEC for Trace #1 is shown in Table 7, and the performance for all ten traces is graphically illustrated in Figure 11. As can be seen from Figure 11, the parameterized GMM that we have obtained predicts the performance of FEC very precisely. The Bernoulli Model and Gilbert Model tend to over-estimate the performance of FEC because they are incapable of capturing the long-run distribution of the network trace.

6. Video Data Recovery

In order to evaluate the advantage of GMM modeling for VoD network traces, some error recovery results based on the obtained GMM are presented in this section. The YUV video content "foreman.qcif" is compressed with H.264/AVC at a bitrate of 1.5 Mbps. During video data packetization, the parameterized GMM obtained through the experiments (with 14% loss) is applied to simulate a real lossy IP network communication channel. At the receiver end, the video data is depacketized and decoded. The video frames will obviously be damaged due to packet loss. Based on the obtained GMM, we have proposed an information hiding based error recovery algorithm.



Fig. 10 Loss-run distributions of traces #01-#04: panels (a)-(d) show the normalized loss-run distribution vs. loss-run length for the original, Bernoulli, Gilbert, Pareto, and Markov traces.


6.1. Subjective Quality

The comparisons between the damaged frame and the frames recovered under different recovery schemes are displayed in Figure 12. As can be seen, the methodology adopted by H.264/AVC to recover damaged slices (caused by lost packets) is spatial extrapolation, which copies the luminance and chrominance values of adjacent slices to the damaged slices (Figure 12(a)). This approach does not produce good results, since the textural details are all gone. According to the displayed frames, the proposed methodology produces better results for the damaged slices, especially with the schemes "4-bit @ 4x4 block", "8-bit @ 4x4 block", and "16-bit @ 4x4 block". With the scheme "4-bit @ 8x8 block", although the information hiding data rate is the same as for the scheme "1-bit @ 4x4 block", the recovered frame has better visual quality as well as better objective quality. The same applies to the schemes "2-bit @ 4x4 block" and "8-bit @ 8x8 block".

Table 7 FEC of Trace #01.

FEC        Packet #  Lost Packet #  Loss Ratio  Un-recoverable Packets  Un-recoverable Ratio
Original   20000     2829           14.145%     269                     9.50866%
Bernoulli  20000     2765           13.825%     708                     25.6058%
Gilbert    20000     2780           13.9%       489                     17.5899%
GMM        20000     2832           14.16%      209                     7.37994%
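As a sketch of the indirect accuracy measure behind Table 7 (illustrative code): with a block size of 3, losses in a block are recoverable only when the block contains at most one lost packet:

    def fec_unrecoverable(trace, block=3):
        lost = unrec = 0
        for i in range(0, len(trace) - len(trace) % block, block):
            losses = sum(trace[i:i + block])   # 1 = lost packet
            lost += losses
            if losses > 1:                     # FEC recovers at most one loss
                unrec += losses
        return lost, unrec, (unrec / lost if lost else 0.0)

    print(fec_unrecoverable([0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0]))
    # -> (6, 6, 1.0)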


Fig. 11 Performance evaluation on FEC (normalized non-recoverable packets for traces #1-#10, comparing the original, Bernoulli, Gilbert, Pareto, and Markov traces).

6.2. Bit Error Localization

In the proposed error recovery algorithm, there are some regions that cannot be recovered, which are called un-recoverable regions. Usually an un-recoverable region is caused by packet loss and frame damage. One of the most important tasks is minimizing the un-recoverable region. Based on the parameterized GMM obtained in the experiments, we have proposed a bit error localization approach, which is able to largely reduce the un-recoverable region. With this approach, the performance of error recovery is significantly improved. As can be seen from Figure 13, the un-recoverable region of the video frame (identified by the black dots) shrinks significantly with the bit error localization approach.


Fig. 12 Subjective quality of recovered frames. Top row: (a) damaged frame; (b) 1 bit @ 4x4 block; (c) 2 bit @ 4x4 block; (d) 4 bit @ 8x8 block. Bottom row: (a) 8 bit @ 8x8 block; (b) 4 bit @ 4x4 block; (c) 8 bit @ 4x4 block; (d) 16 bit @ 4x4 block.


Fig. 13 Unrecoverable region (a) without and (b) with bit error localization.

7. Conclusions

In this research, three IP connections were set up between the VoD server and three clients (in the local area network, a city neighborhood, and Indiana, respectively). Network transmission experiments were conducted on these connections, and 0.8 million data packets (within 40 time slots) were transmitted under the RTP/UDP/IP protocols. Four mathematical models – the Bernoulli Model, Gilbert Model, Pareto Model, and GMM – have been investigated. The parameterized Bernoulli Model, Gilbert Model, Pareto Model, and GMM have been used to generate pseudo network traces. The generated traces have been analyzed to evaluate the predictive capabilities and effectiveness of the different models; mainly, the loss-run distribution and the un-recoverable ratio under FEC have been analyzed. The parameterized GMM generally possesses the best predictive capability for the loss-run distribution and FEC performance. The Gilbert Model works well in the prediction of FEC performance, but not of the loss-run distribution. The Bernoulli Model generally over-estimates the single packet loss probability and under-estimates the longer loss-run distribution. For these reasons, the Bernoulli Model has poor


ability to predict the performance of FEC. Some preliminary error recovery results, based on the parameterized GMM obtained in the experiments, have also been presented. The performance improvements in the error recovery approach have verified the advantages of the GMM in the simulation of network traces.

8. Acknowledgement

The research work in this paper is partially supported by an AIIS (Centerville, OH) grant.

9. References

Alexandraki, A., and Paterakis, M., "Performance Evaluation of the Deadline Credit Scheduling Algorithm for Soft-Real-Time Applications in Distributed Video-on-Demand Systems", Cluster Computing 8(1), 2005, pp. 61-75.
Aly, S.G., "Frame Estimation and Audio-Video Synchronization in Video-Lossy Environment", Ph.D. dissertation, The George Washington University, October 2, 2000.
Aly, S.G. and Youssef, A., "Real-time Motion-based Frame Estimation in Video Lossy Transmission", Proc. of the 2001 IEEE Symposium on Applications and the Internet (SAINT'01), 2001, pp. 139-147.
Bertino, E., and Ferrari, E., "Temporal Synchronization Models for Multimedia Data", IEEE Transactions on Knowledge and Data Engineering, Vol. 10, No. 4, Jul./Aug. 1998, pp. 612-631.
Bourbakis, N., Dollas, A., "A SCAN based Method for Multimedia on Demand", IEEE Multimedia Magazine, July-Sept. 2003, pp. 79-87.
Finesso, L., Liu, C., and Narayan, P., "The Optimal Error Exponent for Markov Order Estimation", IEEE Transactions on Information Theory, Vol. 42, No. 5, Sept. 1996, pp. 1488-1497.
Jiang, W., and Schulzrinne, H., "Modeling of Packet Loss and Delay and Their Effect on Real-time Multimedia Service Quality", Proc. of 10th International Workshop on Network and Operations System Support for Digital Audio and Video, June 2000.
Kamath, P., Lan, K., Heidemann, J., Bannister, J., and Touch, J., "Generation of High Bandwidth Network Traffic Traces", Proc. of the 10th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems (MASCOTS'02), pp. 401-405.
Koodli, R., and Ravikanth, R., "One-way Loss Pattern Sample Metrics", Internet Draft, Internet Engineering Task Force, June 1999.
Little, T.D.C., and Ghafoor, A., "Synchronization and Storage Models for Multimedia Objects", IEEE Journal on Selected Areas in Communication, Vol. 8, No. 3, Apr. 1990, pp. 413-427.
Markovski, V., "Simulation and Analysis of Loss in IP Networks", Master's thesis, Simon Fraser University, Oct. 2000.
Markovski, V., Xue, F., and Trajkovic, L., "Simulation and Analysis of Packet Loss in Video Transfers using User Datagram Protocol", The Journal of Supercomputing, Kluwer, Vol. 20, No. 2, 2001, pp. 175-196.
Merhav, N., Gutman, M., and Ziv, J., "On the Estimation of the Order of a Markov Chain and Universal Data Compression", IEEE Transactions on Information Theory, Vol. 35, No. 5, Sept. 1989.
Meyer, T., Effelsberg, W., and Steinmetz, R., "A Taxonomy on Multimedia Synchronization", Proc. of 4th International Workshop on Future Trends in Distributed Computing Systems, 1993, pp. 56-62.
Perkins, C., Hodson, O., and Hardman, V., "A Survey of Packet Loss Recovery Techniques for Streaming Audio", IEEE Network, Vol. 12, No. 5, Sept./Oct. 1998, pp. 40-48.
Sanneck, H., Carle, G., and Koodli, R., "A Framework Model for Packet Loss Metrics based on Loss Run-Lengths", Proc. of SPIE/ACM SIGMM Multimedia Computing and Networking Conference, 8-10 June 2005, Montreal, Canada, pp. 17-23.
Schmidt, B.K., Northcutt, J.D., and Lam, M.S., "A Method and Apparatus for Measuring Media Synchronization", Proc. of the 5th International Workshop on Network and Operating System Support for Digital Audio and Video, Durham, NH, April 1995, pp. 130-141.


Tekalp, A.M., "Digital Video Processing", Prentice Hall, ISBN 7-302-02927-X.
Wah, B.W., and Su, X., "Streaming Video with Transformation-based Error Concealment and Reconstruction", Proc. of the IEEE International Conference on Multimedia Computing and Systems, Vol. 1, 1999, pp. 238-243.
Wang, Y., Wenger, S., Wen, J., and Katsaggelos, A.K., "Review of Error Resilient Coding Techniques for Real-Time Video Communications", IEEE Signal Processing Magazine, July 2000.
Xue, F., Markovski, V., and Trajkovic, L., "Packet Loss in Video Transfers over IP Networks", Proc. of the 2001 IEEE International Symposium on Circuits and Systems (ISCAS 2001), Vol. 2, 2001, pp. 345-348.
Yajnik, M., Moon, S., Kurose, J., and Towsley, D., "Measurement and Modeling of the Temporal Dependence in Packet Loss", Proc. of IEEE INFOCOM 1999, Vol. 1, March 1999, pp. 345-352.
Yang, M., and Bourbakis, N., "A Prototyping Tool for Analysis and Modeling of Video Transmission Traces over IP Networks", Proc. of the IEEE International Workshop on Rapid System Prototyping, Crete, Greece, June 2006.

10. Authors' Biographies

Dr. Ming Yang (M-IEEE '07) received his B.S. and M.S. degrees in Electrical Engineering from Tianjin University, China, in 1997 and 2000, respectively, and his Ph.D. degree in Computer Science and Engineering from Wright State University, Dayton, Ohio, USA, in 2006. He has been with Jacksonville State University as an Assistant Professor in Computer Science since 2006. His research interests include digital image/video coding, multimedia communication and networking, and information security. He has actively participated in and significantly contributed to various large-scale research projects funded by the US National Science Foundation and the Department of Defense. He is the author or co-author of over fifteen publications in leading computer science journals and conference proceedings, including IEEE Transactions on Broadcasting and the SPIE Journal of Electronic Imaging. He serves as a reviewer for twelve international journals and conferences in the areas of multimedia communication and networking, digital signal/image processing, artificial intelligence, etc. He is currently leading a group at Jacksonville State University conducting research on medical image security and privacy, H.264/AVC video coding, and video streaming over wireless networks. He has secured two research grants, from the US National Science Foundation and the Jacksonville State University Faculty Research Committee, and is the recipient of the Jacksonville State University Faculty Research Award in 2007 and 2008. For more information please visit: http://mcis.jsu.edu/faculty/myang/index.html.

Dr. Nikolaos Bourbakis (IEEE Fellow '96) received his Ph.D. in Computer Engineering from the Department of Computer Engineering & Informatics, University of Patras, Patras, Greece, in 1983. He currently is the Associate Dean for Research, the OBR Distinguished Professor of IT, and the Director of the Assistive Technologies Research Center (ATRC) at Wright State University, OH. He also is affiliated as a Research Professor with the Technical University of Crete, Greece. Dr. Bourbakis' industrial experience includes service to IBM, CA, and Soft Sight, NY. He is the founder and Vice President of AIIS, Inc., NY. He pursues research in applied AI, image/video analysis, machine vision, bioinformatics and bioengineering, assistive technology, information security, and parallel/distributed processing, funded by US and European government and industry ($13M). He has published more than 300 articles in refereed international journals, book chapters, and conference proceedings, and 10 books as an author, co-author, or editor. He has graduated 14 Ph.D. students. He is the founder and Editor-in-Chief of the International Journal on AI Tools, and the founder and General Chair of several international IEEE Computer Society conferences, symposia, and workshops. He is or was an Associate Editor of IEEE Transactions on Knowledge and Data Engineering, IEEE Transactions on Multimedia, and numerous international journals, and a Guest Editor of 17 special issues in IEEE and international journals related to his research interests. He is an IEEE Fellow, a Distinguished IEEE Computer Society Speaker, an NSF University Research Programs Evaluator, an NSF reviewer for Research Centers, an IEEE Computer Society Golden Core Member,
an External Evaluator in University Promotion Committees, an Official Nominator of the National Academy of Achievements for Computer Science Programs, and a keynote speaker at several international conferences. His research work has been internationally recognized with several prestigious awards: IBM Author Recognition Award 1991, IEEE Computer Society Outstanding Contribution Award 1992, IEEE Outstanding Paper Award ATC 1994, IEEE Computer Society Technical Research Achievement Award 1998, IEEE I&S Outstanding Leadership Award 1998, IEEE ICTAI 10 Years Research Contribution Award 1999, IEEE BIBE Symposium Outstanding Award 2003, ASC Award for Assistive Technologies 2005, SETN Research Recognition 2006, and Diploma of Honor, University of Patras, Greece, 2007. For more information please visit: http://www.cs.wright.edu/atrc/atrc_director.htm.
