Proceeding International Conference on Mathematics, Statistics and its Applications 2012 (ICMSA 2012)

ISBN 978-979-96152-7-5

Estimation and Control Design of Mobile Robot Position

Erna Apriliani, Subchan, Fitri Yunaini, and Santi Hartini
Institut Teknologi Sepuluh Nopember
[email protected]

Abstract. The mobile robot is a type of unmanned vehicle (UV) that can control its direction of motion by itself. The present position of a mobile robot can be detected by the global positioning system (GPS), and the direction of motion can be controlled based on the estimate of that position. In this paper, we estimate the one-step-ahead position of a mobile robot by using the Ensemble Kalman Filter (EnKF), and based on that estimate we control the one-step-ahead direction of the mobile robot. We derive the dynamical model of the mobile robot and discretize it with respect to time. Before applying the EnKF to estimate the position of the mobile robot, we specify the trajectory path that the robot should follow. We discretize the trajectory path into segments and compute the tangent of each segment; the angle of the segment tangent is used as the drive angle. In our simulations, we apply three types of path: a linear path, a circular path and a path with a corner. The EnKF is a data assimilation method for estimating the state variables of nonlinear dynamic stochastic systems.

Key words: estimation, control, position, mobile robot, Ensemble Kalman Filter (EnKF)

1 Introduction

The mobile robot is a type of unmanned vehicle (UV) that can control its direction of motion by itself. The present position of a mobile robot can be detected by the global positioning system (GPS), and the direction of motion can be controlled based on the estimate of that position. There are several papers on mobile robots, such as Mobile Robot Position Estimation Using the Kalman Filter [8] and Robot Position Tracking Using Kalman Filter [4]; they applied the Kalman filter and the extended Kalman filter to linear dynamic mobile robot models. The ensemble Kalman filter (EnKF) is a method for estimating the state variables of nonlinear dynamic stochastic systems [5]; it is one of the data assimilation methods. Several papers study applications of the EnKF, such as groundwater pollution estimation [1], the air pollution problem [2], the estimation of plankton populations and their nutrients [6], and the estimation of mobile robot position by the ensemble Kalman filter [3]. In this paper, we estimate the one-step-ahead position of a mobile robot by using the EnKF, and based on that estimate we control the one-step-ahead direction of the mobile robot. We derive the dynamical model of the mobile robot and discretize it with respect to time.


Fig. 1. The dynamics of the mobile robot [7]

Before applying the EnKF to estimate the position of the mobile robot, we specify the trajectory path that the robot should follow. We discretize the trajectory path into segments and determine the tangent of each path segment; the angle of the segment tangent is used as the drive angle. In our simulations, we apply three types of path: a linear path, a circular path and a path with a corner.

2 Mathematical Model of Mobile Robot

The dynamics of the mobile robot can be written as the nonlinear differential system [7]:

\[ \dot{x} = v\cos\varphi - \frac{v}{L}\left(a\sin\varphi + b\cos\varphi\right)\tan\alpha \tag{1} \]

\[ \dot{y} = v\sin\varphi + \frac{v}{L}\left(a\cos\varphi - b\sin\varphi\right)\tan\alpha \tag{2} \]

\[ \dot{\varphi} = \frac{v}{L}\tan\alpha \tag{3} \]

where $x$, $y$, $\varphi$ are the position in the $x$-direction, the position in the $y$-direction and the heading angle of the mobile robot, respectively. The system has the steering angle $\alpha$ and the velocity $v$ as inputs. Suppose we want to control the dynamics of the mobile robot so that it moves on a desired path; then we must set the steering angle based on the position of the mobile robot. The position and dynamics of the mobile robot are represented in Figure 1.

Here, the mobile robot is assumed to pass along three types of path: a linear path (Fig. 2), a circular path (Fig. 3) and a path with a corner (Fig. 4). Because the EnKF applies to discrete-time systems, we must discretize the dynamics of the mobile robot with respect to time and divide the trajectory path into segments. The discretized dynamics of the mobile robot are

\[ x_{k+1} = \Delta t\left[v_k\cos\varphi_k - \frac{v_k}{L}\left(a\sin\varphi_k + b\cos\varphi_k\right)\tan\alpha_k\right] + x_k \]

\[ y_{k+1} = \Delta t\left[v_k\sin\varphi_k + \frac{v_k}{L}\left(a\cos\varphi_k - b\sin\varphi_k\right)\tan\alpha_k\right] + y_k \]

\[ \varphi_{k+1} = \Delta t\left[\frac{v_k}{L}\tan\alpha_k\right] + \varphi_k \]

Fig. 2. The desired linear path

Fig. 3. The desired circular path

Fig. 4. The desired path with a corner

We estimate the position $x_{k+1}, y_{k+1}$ and the heading angle $\varphi_{k+1}$ by using the EnKF, and based on that estimated position we set the input control $\alpha$. The trajectory path is divided into a number of segments, for example 10, 20 or 40; here, we divide the path into 20 segments, so that we have 20 coordinates $(x_k, y_k)$ along the path.
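To make the discretization concrete, the one-step update can be written as a small function. This is a minimal sketch; the geometry parameters a, b, L and the sampling time dt below are placeholders, not values taken from the paper.

```python
import numpy as np

def robot_step(state, v, alpha, dt=0.1, a=0.1, b=0.1, L=0.5):
    """One step of the discretized mobile robot dynamics above; a, b, L are
    placeholder geometry parameters and dt is the sampling time."""
    x, y, phi = state
    t = np.tan(alpha)
    x_next = dt * (v * np.cos(phi) - (v / L) * (a * np.sin(phi) + b * np.cos(phi)) * t) + x
    y_next = dt * (v * np.sin(phi) + (v / L) * (a * np.cos(phi) - b * np.sin(phi)) * t) + y
    phi_next = dt * (v / L) * t + phi
    return np.array([x_next, y_next, phi_next])
```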

3 The Algorithm of Ensemble Kalman Filter

Consider the nonlinear dynamic stochastic system

\[ x_{k+1} = f(u_k, x_k) + w_k \tag{4} \]

with the measurement system

\[ z_k = H_k x_k + v_k \tag{5} \]

where $w_k \sim N(0, Q)$ and $v_k \sim N(0, R_k)$ are the system noise and the measurement noise, respectively. We use the EnKF to estimate the state variable $x_k$. The algorithm of the EnKF is [5]:

– Initialization. Generate an ensemble of N initial estimates, $x_{0,i} = x_0 + w_i$, and determine the mean of the ensemble:

\[ \hat{x}_0 = \frac{1}{N}\sum_{i=1}^{N} x_{0,i} \]

– Prediction step.

\[ \hat{x}^{-}_{k,i} = f(\hat{x}_{k-1,i}, u_{k-1,i}) + w_{k,i} \]

with the prediction estimate

\[ \hat{x}^{-}_{k} = \frac{1}{N}\sum_{i=1}^{N} \hat{x}^{-}_{k,i} \]

and the error covariance

\[ P^{-}_{k} = \frac{1}{N-1}\sum_{i=1}^{N}\left(\hat{x}^{-}_{k,i} - \hat{x}^{-}_{k}\right)\left(\hat{x}^{-}_{k,i} - \hat{x}^{-}_{k}\right)^{T} \]

– Correction step. Generate an ensemble of N measurements, $z_{k,i} = z_k + v_{k,i}$. The Kalman gain is

\[ K_k = P^{-}_{k} H^{T}\left(H P^{-}_{k} H^{T} + R_k\right)^{-1} \]

the estimates are

\[ \hat{x}_{k,i} = \hat{x}^{-}_{k,i} + K_k\left(z_{k,i} - H\hat{x}^{-}_{k,i}\right), \qquad \hat{x}_k = \frac{1}{N}\sum_{i=1}^{N} \hat{x}_{k,i} \]

and the error covariance is

\[ P_k = (I - K_k H)\,P^{-}_{k} \]
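The algorithm above maps directly onto code. Below is a minimal sketch of one EnKF cycle for a generic model function f and a linear measurement matrix H, in the perturbed-observation form used above; the ensemble size, noise covariances and model are supplied by the caller. For the mobile robot, f would be the discretized dynamics of Section 2 and H would select the measured position components.

```python
import numpy as np

def enkf_step(ensemble, f, u, H, z, Q, R, rng):
    """One EnKF prediction-correction cycle; ensemble has shape (N, n)."""
    N, n = ensemble.shape
    m = H.shape[0]
    # Prediction: x_i^- = f(x_i, u) + w_i, with w_i ~ N(0, Q)
    pred = np.array([f(x, u) for x in ensemble])
    pred = pred + rng.multivariate_normal(np.zeros(n), Q, size=N)
    mean = pred.mean(axis=0)
    A = pred - mean                                 # ensemble anomalies
    P = A.T @ A / (N - 1)                           # forecast covariance P_k^-
    # Correction with perturbed measurements z_i = z + v_i, v_i ~ N(0, R)
    Z = z + rng.multivariate_normal(np.zeros(m), R, size=N)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    analysis = pred + (Z - pred @ H.T) @ K.T
    return analysis, analysis.mean(axis=0)
```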

4 Estimation and Control of Mobile Robot Position

In this paper, we specify the trajectory path that will be followed by the mobile robot. First, we divide the trajectory path into segments, and then we apply the estimation of the position and the control of the direction of the mobile robot. The steps in estimating and controlling the mobile robot position are as follows (a computational sketch of step 2 follows this list):

1. Estimate the mobile robot position. We use the EnKF algorithm to estimate the position of the mobile robot at time k. The estimation is based on the measured position at time k − 1, $[x_{k-1}, y_{k-1}]$, and yields the estimated pose $[\hat{x}_k, \hat{y}_k, \hat{\varphi}_k]$.

2. Control the direction of the mobile robot. To control the direction of the mobile robot at time k + 1, we determine the gradient between the desired position at time k + 1, $(x_{k+1}, y_{k+1})$, and the estimated position at time k, $(\hat{x}_k, \hat{y}_k)$:

\[ \tan\varphi_{k+1} = \frac{y_{k+1} - \hat{y}_k}{x_{k+1} - \hat{x}_k} \]

where $(x_{k+1}, y_{k+1})$ is the desired position at time k + 1. We use this segment angle $\varphi_{k+1}$ to determine the steering angle of the mobile robot wheel, $\alpha_{k+1}$:

\[ \alpha = \tan^{-1}\left[\frac{L}{v\,\Delta t}\left(\varphi_{k+1} - \varphi_k\right)\right] \tag{6} \]

3. We then return to step 1, estimating the position at time k + 2 using the EnKF algorithm, followed by the control step of determining the gradient.

We used three types of trajectory path: a linear path, a circular path and a path with a corner. In the simulations, the velocity is constant for each path: v = 60 for the linear path, v = 40 for the path with a corner, and v = 20 for the circular path. The simulation results are presented in Figures 5, 6 and 7.
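As referenced in step 2 above, here is a minimal sketch of the steering computation, combining the segment gradient with Eq. (6). The wheelbase L is a placeholder value, and np.arctan2 is used so the segment angle is resolved to the correct quadrant.

```python
import numpy as np

def steering_angle(x_hat, y_hat, phi_hat, waypoint, v, dt, L=0.5):
    """Steering angle alpha_{k+1} from the EnKF estimate (x_hat, y_hat, phi_hat)
    and the next desired waypoint (x_{k+1}, y_{k+1}); L is a placeholder wheelbase."""
    x_d, y_d = waypoint
    phi_next = np.arctan2(y_d - y_hat, x_d - x_hat)           # segment angle (gradient)
    alpha = np.arctan((L / (v * dt)) * (phi_next - phi_hat))  # Eq. (6)
    return alpha

print(steering_angle(0.0, 0.0, 0.1, (1.0, 2.0), v=20.0, dt=0.1))
```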

Fig. 5. Simulation results for the linear path (estimated path by EnKF vs. desired path)

Fig. 6. Simulation results for the path with a corner (estimated path by EnKF vs. desired path)

Fig. 7. Simulation results for the circular path (estimated path by EnKF vs. desired path)

From these simulations, we see that for the linear path and the path with a corner, the estimated and controlled path is almost the same as the desired path (Fig. 5, Fig. 6), but for the circular path the estimated and controlled path differs slightly from the desired path (Fig. 7). We must choose a suitable velocity for each path: on the linear path we can drive the mobile robot faster than on the path with a corner or on the circular path. This is no different from driving a real car: on a straight road we can drive fast, but when turning at a corner or on a circular road we drive slowly.

5 Conclusion and Further Research

Based on the simulations and discussion above, we conclude that:
– The EnKF can be applied to estimate the position of a mobile robot.
– The input control α is determined from the gradient between the desired position at time k + 1 and the estimated position at time k.
– The estimation and control results are also influenced by the velocity of the mobile robot and the type of path.
In these simulations, we applied a constant velocity. In further work, we will apply a time-varying velocity so that the algorithm can handle other types of path, and we will apply the algorithm to a real mobile robot.

6 Acknowledgements

This paper is part of our research projects "Data Assimilation: Algorithm Development and Its Application" and "Guidance and Control of Unmanned Vehicle Navigation".

References
1. Apriliani, E., Sanjaya, A.B., Adzkiyah, D.: The Groundwater Pollution Estimation by the Ensemble Kalman Filter. Canadian Journal on Science and Engineering Mathematics, June (2011)
2. Apriliani, E., Sanjaya, A.B.: The Square Root Ensemble Kalman Filter to Estimate the Concentration of Air Pollution. International Conference on Mathematics and Applied Engineering, Kuala Lumpur, Malaysia (2010)
3. Apriliani, E., Subchan, Hartini, S.: The Ensemble Kalman Filter to Estimate the Position of Mobile Robot. International Conference on Mathematics and Sciences, October 2011, Surabaya, Indonesia (2011)
4. Casanova, O.L., Alfissima, F., Machaca, F.Y.: Robot Position Tracking Using Kalman Filter. Proceedings of the World Congress on Engineering 2008, Vol. II (2008)
5. Evensen, G.: The Ensemble Kalman Filter: Theoretical formulation and practical implementation. Ocean Dynamics, Vol. 53, pp. 343-367 (2003)
6. Purnama, D.K., Apriliani, E.: Estimasi Populasi Plankton dengan Ensemble Kalman Filter [Estimation of Plankton Populations with the Ensemble Kalman Filter]. Jurnal Ilmu Dasar, Vol. 9, No. 1 (2008)
7. Rezaei, S., Sengupta, R.: Position Estimation of the Car via Kalman Filter. University of California, Berkeley, http://www.docin.com/p-73746816.html (downloaded April 2012)
8. Suliman, C., Crucerul, C., Moldoveanu, F.: Mobile Robot Position Estimation Using the Kalman Filter. Scientific Bulletin of the Petru Maior University of Tirgu Mures, Vol. 6 (XXIII) (2009)


Pedestrian flow characteristics in a least developing country

Khalidur Rahman 1,3, Noraida Abdul Ghani 1, Anton Abdulbasah Kamil 1, Adli Mustafa 2

1 Mathematics Section, School of Distance Education, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia
2 School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia
3 Department of Statistics, School of Physical Sciences, Shahjalal University of Science and Technology, Sylhet-3114, Bangladesh

Abstract: In least developing countries, scarcity of transportation, poor traffic management and a limited ability to bear transportation fares compel a significant portion of urban inhabitants to walk long distances. However, research on pedestrian flow characteristics in least developing countries is limited compared to developed countries. In this study, pedestrian speed-flow-density relationships in Dhaka, Bangladesh are estimated by the traditional ordinary least squares (OLS) method and are compared to those of other studies. The results indicate that pedestrian flows on the sidewalks in Dhaka have some particular characteristics that differ from the others. The study recommends refraining from the direct adoption of foreign designs and parameters for pedestrian facilities in Dhaka.

1 Introduction

Although walking is not usually considered a transportation mode, every human trip begins or finishes with walking. The pedestrian mode has been gaining recognition as a vital building block at every age of urbanization. Even in the mechanized world of today, short trips and intermodal transfers in cities depend on that vital means of locomotion, walking, which is impossible to duplicate. In least developing countries, however, scarcity of transportation, poor traffic management and a limited ability to bear transportation fares compel a significant portion of urban inhabitants to walk long distances. Thus, to build a walking-friendly urbanization, in which pedestrian demand is in equilibrium with the capacity of pedestrian facilities, proper evaluation of local pedestrian flow characteristics and travelling behaviour is essential. Research on pedestrian movements and flow characteristics began in the 1960s, mostly in developed countries.


However, such research is limited compared to research focused on vehicle traffic flow. There are a number of physiological, psychological and environmental factors that contribute significantly to the free-flow movements of a pedestrian. These factors include the age, gender and baggage-carrying of a pedestrian and the walkability of a facility [1], the gradient or roughness of the surface [2], the time of day [3], the intention, intelligence and physical fitness of a pedestrian [4], indoor versus outdoor walkways [5], and the type of walking facility [6]. For pedestrian movements on a public walkway facility, the most important factor is the presence or absence of other pedestrians [2]. Studies that consider pedestrian movements in the presence of other pedestrians can be broadly categorized into two groups: microscopic and macroscopic. A microscopic-level study considers individual units with traffic characteristics such as individual speed and individual interaction [7]. A macroscopic-level study, on the other hand, considers the movements of all pedestrians in a pedestrian facility and aggregates their characteristics into traffic flow. The main concern of macroscopic pedestrian studies is the space allocation for pedestrians that minimizes pedestrian conflicts and the expenditure of human energy and time on the pedestrian facilities. The recognized design parameters for pedestrian facilities, such as maximum flow and jam densities, are mostly established from the user characteristics of developed and western countries. Pedestrian movement characteristics from developing countries, and in particular from least developing countries, are by and large neglected in standard capacity parameters. Thus, the direct use of foreign design codes and the unavailability of well-recognized local parameters for pedestrian facilities have been a concern in least developing countries. These apprehensions motivated the current macroscopic study of pedestrian flow characteristics in Dhaka, Bangladesh, which is considered a typical capital city of the least developing countries.

2 Data collection

Dhaka, a mega city of 15 million people with a metropolitan area of 1,530 km², is one of the major cities of South Asia [8]. It ranks 28th among the most densely populated cities in the world. The economic, political and cultural life of Bangladesh is centered in the capital city Dhaka. Although Dhaka has the most developed urban infrastructure in the country, it suffers from urban problems such as overpopulation and air pollution. In the last few decades, transport and communications in Dhaka have been modernized, but these efforts are not enough to meet the movement demands of a highly dense population. Due to the shortage of transport, commuters are forced to walk and to use

alternative facilities. As a result, heavy pedestrian traffic usually forms on the sidewalks. Therefore, in this study several sidewalks in Dhaka have been selected to calibrate the traffic parameters.

2.1 Study locations

Farmgate is one of the major business centers of Dhaka, where a number of government, NGO (non-government organization) and commercial institutions are located. It is one of the main transportation hubs of the city, catering to different types of passengers travelling to other parts of Dhaka as well as to other parts of the country. Pedestrian and vehicle traffic congestion is a common scene at Farmgate, and thus the main sidewalk in front of Commissioner's Market is often crowded. There are two overbridges that connect Farmgate to Kazi Nazrul Islam Avenue. At least eight educational institutions are located adjacent to this segment of the avenue (between the two overbridges). The biggest wholesale market of Dhaka, Kawran Bazar, and the highly crowded Sezan Market are also located on this avenue; thus, a high density of pedestrians on the sidewalk in front of Tejgaon Govt. Girls' School is a common phenomenon throughout the day. In addition, the sidewalk of Mirpur Road near the Science Laboratory intersection is used daily by a large number of pedestrians travelling from and to Dhanmondi, one of the most planned areas in the city. One location was selected on each sidewalk to study the pedestrian flow characteristics; the details of the locations are provided in Table 1. At each study location, pedestrians were assumed to have different trip objectives, and movement was bi-directional with no entry from or exit to other walkways. Data were collected on typical weekdays under clear and dry weather conditions, covering low to high pedestrian densities at each location. Pedestrian flow characteristics at all selected locations were assumed to be the same.

Table 1. Details of selected locations on sidewalks

Location no.  Location description                                                  Length (m)  Width (m)
1             Commissioner's Market Sidewalk, Farmgate                              6.60        2.80
2             Kazi Nazrul Islam Avenue (in front of Tejgaon Govt. Girls' School)    7.00        4.20
3             Mirpur Road (Science Lab Signal Right Corner, approaches to Mirpur)   6.00        2.10

2.2 Data Collection Procedure

Self-adhesive masking tapes were used to prepare longitudinal pedestrian traps, and the effective width remained constant throughout the observed length for each data set. The observed locations were occupied by vendors, except at Kazi Nazrul Islam Avenue; in those cases the widths were measured by excluding the space occupied by the vendors. Where vendors existed, a small number of 'vendor gazers' were noted and included in the counts of pedestrians as well as in the measurement of densities; however, these numbers were too small to have any effect on the general flow. To record the required data on pedestrian average speed, flow and density, a video recording technique providing a bird's-eye view of the selected dimensions was used. The prevailing conditions were natural, as pedestrians were not aware that their movements were being recorded. The recorded videos were later converted to digital files, and Adobe Premiere Pro software was used to play back the videos at a film speed of 25 frames/sec. The frame-by-frame videos ensured accurate data reading, even at high pedestrian flow densities. For credible relationships among the flow variables (traffic flow rate, density and speed) and a reasonable capacity analysis, a 30-second time interval was used to extract the data from the recorded videos, as pointed out by [9].

Data Extraction

The following methods were used for calculation and data collection on pedestrian flow, density and speed (a small computational sketch follows this description).

Pedestrian flow: From each video clip of a 30-second interval, the total flow was measured as the number of pedestrians in both directions passing a line of sight across the width of the sidewalk. For comparison purposes, that number was then divided by the width of the sidewalk and by 30 to express the pedestrian flow rate as the number of pedestrians per meter width of sidewalk per second (ped/m/s).

Pedestrian density: As the volume of pedestrians fluctuated within the 30-second cycle, and approximately 3 to 10 seconds were required for a pedestrian to pass the selected trap length, the numbers of pedestrians in the observed dimensions at the 8th, 15th and 22nd seconds were counted. Under low flow conditions, pedestrians at any three instants were considered. The average of these three values was taken as the total number of pedestrians in the selected dimensions corresponding to the flow of the 30-second interval. Pedestrian density was expressed as the total number of pedestrians divided by the area of the observed dimensions, giving pedestrians per square meter (ped/m²).

Pedestrian speed: A random sample of twelve pedestrians, two from each direction at the 0th, 10th and 20th seconds, was selected in each 30-second interval to estimate the average pedestrian speed. When fewer than 12 pedestrians travelled the selected trap in a 30-second interval under low flow conditions, all pedestrians were considered. The traverse time of a particular pedestrian over the marked-off pedestrian trap length was obtained using Adobe Premiere Pro software. The pedestrian trap length was then divided by the average of the travelling times of the selected pedestrians to obtain the pedestrian average speed (space mean speed), expressed in m/s, corresponding to the flow of the 30-second interval.
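As an illustration of the extraction arithmetic above, the sketch below computes flow, density and space-mean speed for one 30-second clip; the numbers are hypothetical and the trap dimensions are placeholders.

```python
def clip_measurements(n_passing, counts_at_instants, travel_times_s,
                      width_m=2.80, trap_length_m=6.60, interval_s=30.0):
    """Flow (ped/m/s), density (ped/m^2) and space-mean speed (m/s) for one
    30-second clip; width and trap length are placeholder trap dimensions."""
    flow = n_passing / (width_m * interval_s)
    density = (sum(counts_at_instants) / len(counts_at_instants)) / (width_m * trap_length_m)
    speed = trap_length_m / (sum(travel_times_s) / len(travel_times_s))  # space mean speed
    return flow, density, speed

# Hypothetical clip: 95 pedestrians crossed; 14, 17, 12 counted at the
# 8th, 15th and 22nd seconds; 12 sampled travel times in seconds.
times = [5.2, 6.1, 4.8, 5.5, 6.3, 5.0, 5.9, 4.7, 6.0, 5.4, 5.6, 5.1]
print(clip_measurements(95, [14, 17, 12], times))
```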

3 Analysis and discussion

The ranges of densities observed at the selected locations are not similar, and no statistically significant relationships between speed, density, flow and space could be established at a single location. Thus, under the assumption that pedestrians at each location follow the same set of attributes, data from all locations have been pooled to develop the appropriate relationships. In addition, the inferential study is done on the basis of a single-regime approach. The relationships for pedestrian traffic flow on sidewalks have been established by the OLS method. The fundamental equation for pedestrian traffic flow relates speed, flow and density and is analogous to fluid flow [10,11]:

\[ q = v\,k \tag{1} \]

where q = mean flow rate (ped/m/s), v = pedestrian mean speed (m/s), and k = pedestrian mean density, or concentration (ped/m²). Eq. (1) will be used to derive the required relationships.

Speed-Density Relationship

Figure 1 shows the scatter diagram for the speed-density relationship, where each dot corresponds to the mean speed and the mean density of a sample pedestrian cohort during a chosen 30-second interval. The best-fitting straight line to the collected data is also depicted in the diagram. The following linear relation is fitted, with R² equal to 0.79:

\[ v = 1.33 - 0.36\,k \tag{2} \]

Eq. (2) can be used to determine the average pedestrian speed at different densities. However, this relationship is not valid under the free flow condition: the state of pedestrian walking under which the traffic density and the conflicts between pedestrians are minimal enough for a pedestrian to select her/his desired normal walking speed. The determination of the pedestrian mean speed under the free flow condition (defined as the free flow speed) is necessary to evaluate the constraints on pedestrian movements that occur at higher concentrations (densities) of traffic [10]. [12] observed that up to densities of about 0.6 ped/m² no significant deterioration of the pedestrian desired speed occurs. According to [13], however, on a level surface the free flow walking condition remains valid up to a density of 0.8 ped/m². Hence, interpreting the intercept value of 1.33 in Eq. (2) at k = 0 as the mean free-flow walking speed is not acceptable, even though such an interpretation has been used in some previous studies [14,15]. It is also evident from the study of [5], and from a comparison between the intercept of the current study (1.33) and the free flow speed of 1.20 m/s obtained by [1] using the same data set, that substituting k = 0 into Eq. (2) to estimate the free-flow speed is meaningless. In addition, even with the existence of a single pedestrian the density can never be zero, i.e., k ≠ 0. It should also be noted that under the free flow condition, pedestrian speed (free flow speed) is likely to be more sensitive to the age, gender and baggage-carrying of a pedestrian and the walkability of a facility than to density [1,16].

Fig. 1. Speed-Density Relationship for sidewalks in Dhaka

The jam density on the sidewalks in Dhaka is 3.69 ped/m² (the density at which Eq. (2) gives v = 0). This value is smaller than the value of 3.89 ped/m² in the British study on shopping streets [2], the value of 3.99 ped/m² in the American study at a bus terminal [10], and the value of 4.20 ped/m² in the Indian study at an intermodal transfer terminal [15]. However, the value is comparable to and greater than the jam density values in mixed traffic conditions of 3.6 ped/m² [17] and 3.44 ped/m² [14], respectively. The lower jam density can probably be explained by the socio-security conditions in Dhaka. Pedestrians in the capital city are generally unknown to each other, and hence everyone usually tries to avoid close contact with others for security and safety reasons. In addition, in a moderate Muslim country, female pedestrians always intend to maintain a certain distance to refrain from conflicts with male pedestrians. Thus, pedestrians on sidewalks in Dhaka tend to form lower concentrations than shoppers and commuters. However, the rate of contribution of density (0.36) to the decline of speed is close to the rates of 0.34, 0.34 and 0.35 found by [2], [10] and [15], respectively.

Flow-Density Relationship

The scatter diagram for the flow-density relationship, together with the fitted curve formulated from Eq. (1) and Eq. (2), is depicted in Figure 2. Each dot corresponds to the flow rate and the mean density of a sample pedestrian cohort during a chosen 30-second interval. The parabolic flow-density curve derived by the OLS method is:

\[ q = 1.33\,k - 0.36\,k^2 \tag{3} \]

Table 2. Pedestrian characteristics at maximum flow on different walking facilities

                                        Sidewalks      Shopping streets  Bus terminal  Intermodal transfer terminal
Maximum flow rate (ped/m/s)             1.23           1.30              1.35          1.53
Density at maximum flow rate (ped/m²)   1.85           1.95              2.00          2.10
Speed at maximum flow rate (m/s)        0.66           0.65              0.68          0.74
Author of the study                     current study  [2]               [10]          [15]

Fig. 2. Flow-Density Relationship for sidewalks in Dhaka

Since the product of speed and density gives the flow rate, and there is an inverse linear relationship between speed and density, an optimum value of the product that generates a maximum flow should arise at the middle of the flow curve. From the OLS-derived parabolic flow curve of Eq. (3) and the speed line of Eq. (2), it is observed that the maximum flow of 1.23 ped/m/s occurs at a density of 1.85 ped/m² with a speed of 0.66 m/s. A comparison of pedestrian characteristics at maximum flow on different walking facilities is summarized in Table 2. The table shows that the pedestrian characteristics at maximum flow are comparable on the sidewalks and on the shopping streets; comparability may also exist between the bus terminal and the intermodal transfer terminal.

Speed-Flow Relationship

The OLS-derived relationship between speed and flow is calculated from Eq. (1) and Eq. (2) and is depicted in the scatter diagram of Figure 3. The equation of the speed-flow curve is:

\[ q = \frac{v\,(1.33 - v)}{0.36} \tag{4} \]

Although each flow has two corresponding speeds, one from the upper stage and another from the lower stage, the reverse is not true: there is one unique pedestrian flow q for each speed v. At the upper stage, speed and flow have a negative relationship; at this stage there is no downstream bottleneck affecting the forward movements, and the decrease of speed is instead caused by the increase of physical interactions among the pedestrians. After the maximum flow, speed and flow have a positive relationship: once the maximum flow is passed, there is a downstream bottleneck and the flow rate decreases thereafter, while the further increase of physical interactions among the pedestrians reduces speed as well. Although it has been done in some previous studies, it seems unusual to use the pedestrian speed to estimate the flow rate instead of using the flow rate to estimate the corresponding speed. Hence, in this study we have inverted Eq. (4) as follows:

\[ v = \frac{3.69 \pm \sqrt{13.62 - 11.11\,q}}{5.56} \tag{5} \]

Fig. 3. Speed-Flow Relationship for sidewalks in Dhaka

The speed corresponding to the maximum flow produced by the OLS method is 0.66 m/s. For the collected data, only upper-stage speeds need to be approximated by Eq. (5); as a consequence, only the positive sign before the square root in Eq. (5) should be used. In addition, using Eq. (4), it is found that the OLS method yields 10 percent invalid flow rate estimates, i.e. negative flow rates for the corresponding densities.
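A minimal sketch of the computations behind Eqs. (2)-(5): fit the speed-density line by OLS, then derive the jam density, the density and speed at maximum flow, and the capacity. The sample arrays are hypothetical stand-ins for the extracted 30-second observations.

```python
import numpy as np

# Hypothetical (density, speed) observations; the real data come from the
# 30-second video intervals described in Section 2.
k = np.array([0.4, 0.8, 1.2, 1.6, 2.0, 2.4, 2.8, 3.2])
v = np.array([1.20, 1.05, 0.88, 0.75, 0.60, 0.47, 0.32, 0.18])

# OLS fit of v = a - b*k  (cf. Eq. (2): a ≈ 1.33, b ≈ 0.36)
slope, a = np.polyfit(k, v, 1)
b = -slope

k_jam = a / b           # jam density: v = 0
k_opt = a / (2 * b)     # density at maximum flow (vertex of q = a*k - b*k^2)
v_opt = a / 2           # speed at maximum flow
q_max = a**2 / (4 * b)  # maximum flow rate (capacity)
print(k_jam, k_opt, v_opt, q_max)
```

With the paper's coefficients a = 1.33 and b = 0.36, these formulas reproduce the reported values: jam density 3.69 ped/m², maximum flow 1.23 ped/m/s at density 1.85 ped/m² and speed 0.66 m/s.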

4 Conclusion and future research

In this study, an analysis of the characteristics of pedestrian flow on the sidewalks in Dhaka has been carried out based on the traditional OLS method. The results can help the concerned policy makers to design and make appropriate improvements to the sidewalks, ensuring safe, smooth and pleasant movement for pedestrians. From the comparison of the pedestrian characteristics of Dhaka with those of other studies, the study also recommends refraining from the direct adoption of foreign designs and parameters, and instead using local designs and parameters for pedestrian facilities on sidewalks in Dhaka, Bangladesh. A feature shared by the current and prior studies is that, although the flow rate could be determined easily, a sample of pedestrians had to be used for the measurement of mean speed and mean density. Such sample-based measurements, as well as the omission of influential factors, are among the major problems with these studies. For the speed-density relationship, a model with specification errors has been estimated by ordinary least squares (OLS), which renders biased and inconsistent parameter estimates [18]. Therefore, the validity of the relationships and conclusions drawn from such studies is open to question and should be examined further. The predictive power of the relationships derived from the estimated speed-density relationship was also not justified. In a further study, we intend to use an appropriate estimation method for the speed, flow and density relationships that will improve the predictive power and mitigate part of the OLS bias incorporated in the previous studies.

5 Acknowledgements

This study was supported by the Research University (RU) Grant Scheme, [Acct. No.: 1001/PJJAUH/811097], Universiti Sains Malaysia. Khalidur Rahman wishes to thank Universiti Sains Malaysia for the financial support (USM Fellowship).

References
1. Rahman, K., et al.: Analysis of Pedestrian Free Flow Speed in a Least Developing Country: A Factorial Design Study. Research Journal of Applied Sciences, Engineering and Technology (accepted for forthcoming issue) (2012)
2. Older, S.: Movement of pedestrians on footways in shopping streets. Traffic Engineering & Control 10, 160-163 (1968)
3. Hoel, L.A.: Pedestrian travel rates in central business districts. Traffic Engineering 38, 10-13 (1968)
4. Robertson, H.D., Hummer, J.E., Nelson, D.: Manual of Transportation Engineering Studies. Institute of Transportation Engineers, Englewood Cliffs, N.J.: Prentice Hall (1994)
5. Lam, W.H.K., Morrall, J.F., Ho, H.: Pedestrian flow characteristics in Hong Kong. Transportation Research Record 1487, 56-62 (1995)
6. Tanaboriboon, Y., Guyano, J.: Analysis of pedestrian movements in Bangkok. Transportation Research Record 1294, 52-56 (1991)
7. Teknomo, K.: Microscopic pedestrian flow characteristics: Development of an image processing data collection and simulation model. Dissertation, Tohoku University (2002)
8. World Bank: Country assistance strategy for the People's Republic of Bangladesh for the period FY11-14. World Bank, p. 4 (2010)
9. Jianhong, Y., Xiaohong, C.: Optimal Measurement Interval for Pedestrian Traffic Flow Modeling. Journal of Transportation Engineering 137, 934 (2011)
10. Fruin, J.J.: Designing for pedestrians: A level-of-service concept. Polytechnic Institute of Brooklyn (1970)
11. Highway Capacity Manual. TRB, National Research Council, Washington, DC: Transportation Research Board (2000)
12. Polus, A., Schofer, J.L.: Pedestrian flow and level of service. Journal of Transportation Engineering 109, 46 (1983)
13. Ando, K., Ota, H., Oki, T.: Forecasting the flow of people. Railway Research Review 45(8), 8-14 (1988)
14. Laxman, K.K., Rastogi, R., Chandra, S.: Pedestrian Flow Characteristics in Mixed Traffic Conditions. Journal of Urban Planning and Development 136, 23 (2010)
15. Sarkar, A., Janardhan, K.: Pedestrian flow characteristics at an intermodal transfer terminal in Calcutta. World Transport Policy & Practice 7(1) (2001)
16. Smith, R.: Density, velocity and flow relationships for closely packed crowds. Safety Science 18(4), 321-327 (1995)
17. Gerilla, G., Hokao, K., Takeyama, Y.: Proposed level of service standards for walkways in Metro Manila. Journal of the Eastern Asia Society for Transportation Studies 1(3), 1041-1060 (1995)
18. Koutsoyiannis, A.: Theory of Econometrics: An Introductory Exposition of Econometric Methods, 2nd ed. Macmillan (1977)


NUMERICAL SOLUTION TO CONTROL THE EXPLOITATION OF GROUND WATER POTENTIAL

Suharmadi Sanjaya
Department of Mathematics, Institut Teknologi Sepuluh Nopember, Surabaya 60111, Indonesia
E-mail: [email protected]

Abstract: Water is one of the natural resources that play a very important role in human life on earth. Water normally comes from rain and from surface water. In Indonesia most people use surface water for their daily activities, but because drought occurs all over the country, people nowadays use ground water for irrigation as well as for their daily activities. Controlling the stock of ground water has become a very significant problem. This paper proposes a numerical solution to control the potential of ground water. The ground water potential model is solved using a finite difference approach to find the condition of the water in the aquifer.
Keywords: finite difference solution, ground water, level control.

1. Introduction

Water is one of the natural resources that play a very important role in human life on earth. The supply of water normally comes from rain and from surface water. In Indonesia most people use surface water for their daily activities. But since drought occurs everywhere in Indonesia due to the long dry season, people nowadays, especially during the dry season, use ground water as the source of water for irrigation as well as for their daily activities. Controlling the stock of ground water has now become a very significant problem in Indonesia. Thousands of hectares of rice fields dry out and become "puso" (the Indonesian word for a crop unable to reach harvest time) during the dry season, and many more thousands of hectares during the rainy season, making life difficult for the farmers. This paper proposes a numerical solution to control the potential of ground water. The solution provides a reasonably accurate calculation of how to keep the stock of ground water in an equilibrium state by controlling the use of the water; otherwise there is a negative impact, because the head level of the aquifer decreases significantly.

2. Finite Difference Solution

The model of the ground water potential (the drawdown s around a pumped well) can be presented as follows:

\[ \frac{\partial^2 s}{\partial r^2} + \frac{1}{r}\frac{\partial s}{\partial r} = \frac{S}{T}\frac{\partial s}{\partial t} \tag{2.1} \]

where T is the transmissivity, S the storage coefficient, Q the pumping debit of the well, s the drawdown level (m), r the radius from the well (m), and t the pumping time (seconds). If the ratio T/S is substituted by the hydraulic diffusivity, the equation can be rewritten in terms of a single parameter.
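A minimal sketch of the finite difference approach the paper names, applied to the radial drawdown model reconstructed in Eq. (2.1); the explicit time-stepping scheme, the grid, the aquifer parameters and the pumping boundary condition are all illustrative assumptions.

```python
import numpy as np

def drawdown_fd(T=0.01, S=0.1, Q=0.005, rw=0.1, R=100.0, nr=200, dt=1.0, nt=600):
    """Explicit finite differences for ds/dt = (T/S) * (d2s/dr2 + (1/r) ds/dr);
    all parameter values are illustrative, not taken from the paper."""
    r = np.linspace(rw, R, nr)
    dr = r[1] - r[0]
    alpha = T / S                          # hydraulic diffusivity T/S
    assert alpha * dt / dr**2 <= 0.5, "explicit scheme stability limit"
    s = np.zeros(nr)                       # initial drawdown is zero everywhere
    for _ in range(nt):
        s_new = s.copy()
        s_new[1:-1] = s[1:-1] + alpha * dt * (
            (s[2:] - 2.0 * s[1:-1] + s[:-2]) / dr**2
            + (s[2:] - s[:-2]) / (2.0 * dr * r[1:-1])
        )
        # pumping well: impose the Darcy flux 2*pi*rw*T*ds/dr = -Q at r = rw
        s_new[0] = s_new[1] + Q * dr / (2.0 * np.pi * rw * T)
        s_new[-1] = 0.0                    # undisturbed head at the outer radius
        s = s_new
    return r, s

r, s = drawdown_fd()
print(s[:5])                               # drawdown near the well
```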

Let the mixture weights p_i satisfy p_i > 0 and p_1 + ⋯ + p_k = 1; let μ_i be the mean, or location, of the ith Gaussian (μ_1 < μ_2 < ⋯ < μ_k) and σ_i be the standard deviation, or scale, of the ith (σ_1 < σ_2 < ⋯ < σ_k).

3. Empirical Analysis

3.1 Data Description

The preliminary empirical results are considered to check the appropriateness of the models discussed above. Gold prices are taken from the daily closing prices, from August 2002 to December 2011, of the gold futures traded on the NYMEX, a total of 2364 observations. The gold exchange is a small and rather volatile market; furthermore, historical experience shows that during periods of stock market slump, gold tends to trend higher. The use of the gold price as an economic indicator has drawn the attention of many economists. Mirmirani [28] asserted that more than 70% of the change in the inflation rate can be explained by the price movements of gold. Other studies have found that gold's sensitivity to news varies through time; Hess, Huang, and Niessen [29] present evidence that it depends on the state of the economy, with sensitivity increasing during recessions. Therefore, the study of the gold price is important for practitioners as well as researchers.

Figure 1: Logarithm of gold daily returns from August 2002 to December 2011

The logarithm of daily gold returns (equal-weighted returns) is shown in Figure 1 and is defined as

\[ r_t = \log\frac{S_t}{S_{t-1}} \]

where $S_t$ is the gold price adjusted close value at time t, $S_{t-1}$ the adjusted close value at time t−1, and $r_t$ the return at time t. Some basic statistical characteristics of the return series are summarized in Table 01. (The unit root tests indicate no evidence of non-stationarity in the returns of the gold index.) The kurtosis, the Jarque-Bera (J-B) normality test statistic (= 1743.862) and the Kolmogorov-Smirnov test statistic confirm the strong non-normality of the return index. Furthermore, the normal QQ-plot (Figure 2) of the log returns confirms that the empirical distribution of the log returns has a heavy left tail and a Gaussian-like right tail.

Table 01: Summary statistics of the equal-weighted returns (January 2002 to December 2011)

Mean     Maximum   Minimum   Std. Dev.  Skewness  Kurtosis  Jarque-Bera       Kolmogorov-Smirnov
0.00063  0.096910  -0.08661  0.01390    0.191487  7.1901    1743.86 (0.0000)  0.056579 (0.0000)

(Parentheses contain the p-value of the test; 1% significance level.)

Figure 2: Q-Q plot of the returns
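A brief sketch of how the Table 01 statistics can be reproduced from a return series, here with scipy; the price array is a hypothetical stand-in for the NYMEX series.

```python
import numpy as np
from scipy import stats

# Hypothetical daily adjusted closes standing in for the NYMEX gold series.
prices = np.array([310.0, 312.5, 311.8, 315.2, 314.0, 316.9, 320.1, 318.7])
r = np.log(prices[1:] / prices[:-1])     # log returns r_t = log(S_t / S_{t-1})

jb_stat, jb_p = stats.jarque_bera(r)     # normality test as in Table 01
print(r.mean(), r.max(), r.min(), r.std(ddof=1))
print(stats.skew(r), stats.kurtosis(r, fisher=False), jb_stat, jb_p)
```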

The visual distribution comparisons of the equal-weighted returns of the gold data are shown in Figure 3. In this graph, the models discussed above are compared with the kernel density of the returns. The plot clearly indicates that the empirical return distribution has longer and fatter tails than the normal distribution. On the other hand, all the models fitted to this data, excluding the normal distribution, are very close to each other and hardly distinguishable. It should be noted, however, that in the tails there is a rather limited amount of data; consequently, all inferences concerning the extreme tails are quite difficult.

Figure 3: Density estimates of the returns from the smooth kernel, Student-t and stable distributions

3.2 Model Selection and Parameter Estimation

Table 02 reports the maximum-likelihood estimates that result from fitting the theoretical distributions described in the previous part to the series of daily gold returns.

Table 02: Parameter estimates for the gold future index, for various fitted distributions

Model                           Degrees of freedom  Index of stability  Skewness parameter  Location parameter             Scale parameter
Normal Distribution             -                   -                   -                   μ = 0.00063851                 σ = 0.0139023
Student T Distribution          ν = 4.37            -                   -                   μ = 0.000442984                σ = 0.0103444
Stable Distribution             -                   α = 1.75082         β = 0.1339          μ = 0.000723005                σ = 0.0082113
Mixture of Normal Distribution  -                   -                   -                   μ1 = 0.0022465, μ2 = 0.0002724  σ1 = 0.02413908, σ2 = 0.01019322

The goodness-of-fit tests are performed in order to compare the relative fit of the theoretical distributions considered. As expected, the normal distribution provides the worst fit among all the specifications considered. For the stable distribution, two test statistics indicate a better fit, but the Pearson test statistic is rejected at the 5% significance level. The Student-t and the mixture of normal distributions provide an accurate fit under all test statistics; furthermore, the mixture of normal distributions is not significant at the 1% level of significance. It is clear that, in terms of the log-likelihood values and the other model selection criteria (Anderson-Darling, Cramér-von Mises and Pearson χ²) in Table 03, the Student-t distribution is the best model for the gold return future index at the 1% level of significance.

Table 03: Goodness-of-fit tests

Distribution                    Values     Anderson-Darling  Cramér-von Mises  Pearson χ²  Log-Likelihood
Normal Distribution             Statistic  14.38663          2.437553          166.1269    6753.39
                                P-Value    0.00000           0.000001          0.00000
Student T Distribution          Statistic  0.453483          0.062502          48.1827     6886.17
                                P-Value    0.794504          0.79823           0.205028
Stable Distribution             Statistic  0.919264          0.165818          66.8376     6874.07
                                P-Value    0.402512          0.344781          0.004917
Mixture of Normal Distribution  Statistic  0.681568          0.125015          59.98477    6876.9
                                P-Value    0.574393          0.475541          0.054589
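A minimal sketch of the model-selection step: fit the normal and Student-t distributions by maximum likelihood and compare log-likelihoods, as in Table 03 (scipy does not ship stable or mixture fitters, so those are omitted here); the simulated series below is a stand-in for the gold returns.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in return series, drawn from a t distribution with plausible parameters.
r = stats.t.rvs(df=4.4, loc=0.0006, scale=0.0103, size=2364, random_state=rng)

# Normal: MLE is the sample mean and (biased) standard deviation.
mu, sigma = stats.norm.fit(r)
ll_norm = stats.norm.logpdf(r, mu, sigma).sum()

# Student-t: MLE over (df, loc, scale).
df, loc, scale = stats.t.fit(r)
ll_t = stats.t.logpdf(r, df, loc, scale).sum()

print(f"normal log-likelihood:    {ll_norm:.2f}")
print(f"Student-t log-likelihood: {ll_t:.2f}  (df={df:.2f})")
```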

4. Discussion and Conclusions

The main objective of this paper is to show that the Student-t distribution is the appropriate and best-fitting distribution for gold future returns when modeling financial data. We also compared the Student-t distribution with the other distributions (the stable distribution and the mixture of normal distributions). Our next step is to derive an option pricing formula based on the underlying asset price distribution, which can be formulated from the empirical distribution of asset returns with an application of the risk-neutral valuation method.

References
1. Bachelier, L.: Théorie de la Spéculation. Annales Scientifiques de l'École Normale Supérieure III, 1900.
2. Black, F., Scholes, M.: The Pricing of Options and Corporate Liabilities. Journal of Political Economy, 1973. 81: p. 637-659.
3. Bates, D.: Jumps and stochastic volatility: Exchange rate processes implicit in deutsche mark options. Review of Financial Studies, 1996. 9: p. 69-107.
4. Rubinstein, M.: Implied binomial trees. Journal of Finance, 1994. 69: p. 771-818.
5. Fama, E.: The behaviour of stock market prices. Journal of Business, 1965. 38: p. 34-105.
6. Mandelbrot, B.: The variation of certain speculative prices. Journal of Business, 1963. 36: p. 394-419.
7. Cont, R.: Empirical properties of asset returns: stylized facts and statistical issues. Quantitative Finance, 2001. 1(2): p. 223-236.
8. Mantegna, R., Stanley, H.E.: An Introduction to Econophysics. 2000, Cambridge: Cambridge University Press.
9. Voit, J.: The Statistical Mechanics of Financial Markets. 2003, Berlin: Springer.
10. Hull, J., White, A.: The pricing of options on assets with stochastic volatilities. Journal of Finance, 1987. 42: p. 281-300.
11. Heston, S.: A closed-form solution of options with stochastic volatility with applications to bond and currency options. Review of Financial Studies, 1993. 6: p. 327-343.
12. Engle, R.: ARCH, Selected Readings. 1995, Oxford, U.K.: Oxford University Press.
13. Bollerslev, T.: A conditional heteroskedastic time series model for speculative prices and rates of return. Review of Economics and Statistics, 1987. 69: p. 542-547.
14. Merton, R.C.: Option pricing when underlying stock returns are discontinuous. Journal of Financial Economics, 1976. 3: p. 125-144.
15. Bouchaud, J.P., Potters, M.: Theory of Financial Risks and Derivative Pricing. 2004: Cambridge University Press.
16. Duffie, D.: Dynamic Asset Pricing, 3rd ed. 2001, Princeton, NJ: Princeton University Press.
17. Longin, F.: The asymptotic distribution of extreme stock market returns. Journal of Business, 1996. 69: p. 383.
18. Schoeffel, L.: arXiv:1110.1006.
19. Mu, G.-H., Zhou, W.-X.: Physical Review E, 2010. 82: 066103.
20. Borak, S., Härdle, W., Weron, R.: Stable Distributions. SFB 649 Discussion Paper 2005-008, 2004.
21. Platen, E., Rendek, R.: Empirical evidence on Student-t log-returns of diversified world stock indices. 2007.
22. Jones, M.C.: Student's simplest distribution. The Statistician, 2002. 51(1): p. 41-49.
23. Zhu, D., Galbraith, J.W.: A generalized asymmetric Student's-t distribution with applications to financial economics. 2009.
24. Hu, W., Kercheval, A.N.: Portfolio optimization for Student t and skewed t returns. Quantitative Finance, 2010. 10: p. 91-105.
25. Hansen, B.E.: Autoregressive Conditional Density Estimation. International Economic Review, 1994. 35: p. 705-730.
26. Press, J.: A Compound Events Model for Security Prices. Journal of Business, 1967. 40: p. 317-335.
27. Thorne, L.R.: Fat Tails Quantified and Resolved: A New Distribution to Reveal and Characterize the Risk and Opportunity Inherent in Leptokurtic Data. 2011.
28. Mirmirani, S., Li, H.: Gold Price, Neural Networks and Genetic Algorithm. Computational Economics, 2004. 23: p. 193-200.
29. Hess, D., Huang, H., Niessen, A.: How Do Commodity Futures Respond to Macroeconomic News? Financial Markets and Portfolio Management, 2008. 22.


Control System of Rocket RKX-200 LAPAN Using a PID Controller

Subchan, Putra Setya Bagus J.N. and Idris E.P.
Department of Mathematics, Faculty of Mathematics and Natural Sciences, Institut Teknologi Sepuluh Nopember, Surabaya
Jl. Arief Rahman Hakim, Surabaya 60111
E-mail: [email protected]

Abstract. Roket Kendali Eksperimen 200 (RKX-200) LAPAN is a rocket that can be guided. The RKX-200 missile motion consists of a longitudinal and a lateral-directional mode, both controlled by the fins: the elevator control fin, the rudder control fin and the aileron control fin. In this research, PID controllers are used to design the control systems of RKX-200 LAPAN. The controller parameters are obtained using the Particle Swarm Optimization (PSO) method in order to obtain good system performance and a robust controller. The simulation results show that the PID controller performance is robust. The controller for the short period motion is able to eliminate the overshoot and the steady state error, and speeds up the settling time to 2.12 seconds. The PID controller for the phugoid motion speeds up the rise time to 0.4408 seconds, with a settling time of 3.9 seconds, and eliminates the steady state error. In the spiral motion, the controller is capable of eliminating the steady state error, with a rise time of 0.0157 seconds and a settling time of 0.98 seconds. The controller for the roll motion speeds up the rise time to 0.0431 seconds and eliminates the steady state error. In the dutch roll motion, the controller eliminates the steady state error and decreases the overshoot to 0.1397%.

1 Introduction

Roket RKX-200 LAPAN is a flying vehicle made by the researchers of Lembaga Penerbangan dan Antariksa Nasional (LAPAN). The rocket is designed to be a guided missile that can be used on a variety of missions for scientific purposes and for the defense of the region; it has propulsion, control systems and targeting systems. In flight, the motion of RKX-200 LAPAN can be divided into a longitudinal mode and a lateral-directional mode, both controlled by the rocket control fins: the elevator control fin, the rudder control fin and the aileron control fin. The longitudinal mode has the pitch motion vector, and the lateral-directional mode has the yaw and roll motion vectors. Roket RKX-200 LAPAN has six degrees of freedom (6 DOF) in flight, and consequently its flight path is unstable: without control, the rocket tends to fly turning and spinning up or down unpredictably.


2 Rocket RKX-200 LAPAN

Rocket RKX-200 LAPAN is a controlled rocket whose movement is determined by the deflection angles of its control fins. These fins are divided into three types: elevator control fins, rudder control fins and aileron control fins. In the longitudinal mode, control is carried out through the elevator control fin angle (two horizontal fins), while in the lateral-directional mode, control is carried out through the rudder control fin angle (two vertical fins) and the aileron control fins (a combination of the two horizontal and two vertical fins) [5].

Equations of Motion

The rocket motion equations in this research use the body-axis reference system, the axis system attached to the rocket body: the X axis lies along the longitudinal axis of the rocket, positive forward; the Z axis lies in the rocket's plane of symmetry, perpendicular to the X axis and positive downward in level flight; and the Y axis is perpendicular to the plane of symmetry and positive to the right. To obtain the transfer functions of the rocket, the first essential step is to derive the equations of motion, which follow from Newton's second law and consist of the force equations and the moment equations.

Force equations

The forces on the rocket consist of the thrust, lift, drag and gravity forces. The resultant force resolved along the three axes X, Y and Z is [8]:

\[ X - mg\sin\theta = m(\dot{u} + qw - rv) \tag{1} \]
\[ Y + mg\cos\theta\sin\varphi = m(\dot{v} + ru - pw) \tag{2} \]
\[ Z + mg\cos\theta\cos\varphi = m(\dot{w} + pv - qu) \tag{3} \]

Moment equations

Moments arise because the aerodynamic forces act at the center of pressure (CP), which does not coincide with the center of gravity (CG). The moment equations can be written as [10]:

\[ L = I_{xx}\dot{p} + qr(I_{zz} - I_{yy}) - I_{xz}(\dot{r} + pq) \tag{4} \]
\[ M = I_{yy}\dot{q} + rp(I_{xx} - I_{zz}) + I_{xz}(p^2 - r^2) \tag{5} \]
\[ N = I_{zz}\dot{r} + pq(I_{yy} - I_{xx}) + I_{zx}(qr - \dot{p}) \tag{6} \]

Table 2. Variables of motion in the body-axis system

Parameter            X     Y     Z
Angular velocity     p     q     r
Linear velocity      u     v     w
Aerodynamic forces   X     Y     Z
Aerodynamic moments  L     M     N
Moment of inertia    Ixx   Iyy   Izz
Euler angle          φ     θ     ψ

The force equations (1)-(3) and the moment equations (4)-(6) are nonlinear. They can be linearised using small-disturbance theory [7].

Equations of the Longitudinal Mode

The longitudinal mode involves the forward and upward linear velocities, the angular (pitch) velocity and the pitch angle:

\[ \dot{u} = X_u u + X_w w - g\cos\theta_0\,\theta + X_{\delta_e}\delta_e \tag{7} \]
\[ \dot{w} = Z_u u + Z_w w + U_0 q - g\sin\theta_0\,\theta + Z_{\delta_e}\delta_e \tag{8} \]
\[ \dot{q} = (M_u + M_{\dot{w}} Z_u)u + (M_w + M_{\dot{w}} Z_w)w + (M_q + M_{\dot{w}} U_0)q - gM_{\dot{w}}\sin\theta_0\,\theta + (M_{\delta_e} + M_{\dot{w}} Z_{\delta_e})\delta_e \tag{9} \]
\[ \dot{\theta} = q \tag{10} \]

• Short Period Motion
Short period motion is influenced by two parameters: the pitch rate (q) and the vertical velocity (w). Taking w and q as state variables, the equation of the short period motion becomes [4]:

\[ \begin{bmatrix}\dot{w}\\ \dot{q}\end{bmatrix} = \begin{bmatrix} Z_w & U_0\\ M_w + M_{\dot{w}}Z_w & M_q + M_{\dot{w}}U_0 \end{bmatrix}\begin{bmatrix}w\\ q\end{bmatrix} + \begin{bmatrix} Z_{\delta_e}\\ M_{\delta_e} + M_{\dot{w}}Z_{\delta_e}\end{bmatrix}\delta_e \tag{11} \]

• Phugoid Motion
This motion involves two parameters of the rocket motion: the pitch angle (θ) and the speed (u) [4]:

\[ \begin{bmatrix}\dot{u}\\ \dot{\theta}\end{bmatrix} = \begin{bmatrix} X_u & -g\cos\theta_0\\ -\dfrac{Z_u}{U_0} & 0 \end{bmatrix}\begin{bmatrix}u\\ \theta\end{bmatrix} + \begin{bmatrix} X_{\delta_e}\\ -\dfrac{Z_{\delta_e}}{U_0}\end{bmatrix}\delta_e \tag{12} \]

Lateral-Directional Mode
The lateral-directional mode involves the sideward linear velocity, the angular velocities, and the yaw and roll angles. It describes the horizontal movements, which include rolling and turning [10]:

\[ \dot{\beta} = \frac{Y_\beta}{U_0}\beta + \frac{g\cos\theta_0}{U_0}\varphi - \left(1 - \frac{Y_r}{U_0}\right) r + \frac{Y_{\delta_r}}{U_0}\delta_r \tag{13} \]
\[ \dot{p} = L_\beta\beta + L_p p + L_r r + L_{\delta_a}\delta_a + L_{\delta_r}\delta_r \tag{14} \]
\[ \dot{r} = N_\beta\beta + N_p p + N_r r + N_{\delta_a}\delta_a + N_{\delta_r}\delta_r \tag{15} \]
\[ \dot{\varphi} = p + r\tan\theta_0 \tag{16} \]

• Spiral Motion
The spiral motion is influenced by the yaw and roll motions, with a relatively small sideslip angle (β). It consists of slow circling and turning; the roll rate (p) is very small compared to the yaw rate (r) [4]:

\[ \dot{r} = \left(N_r - \frac{L_r N_\beta}{L_\beta}\right) r + L_{\delta_a}\delta_a \tag{17} \]

• Roll Motion
The roll motion is purely rotational, so all the dynamical equations can be ignored except the roll rate (p); for the roll motion it can be assumed that β = r = 0 [4]:

\[ \dot{p} = L_p p + L_{\delta_a}\delta_a \tag{18} \]

• Dutch Roll Motion
For the dutch roll motion, the roll rate (p) and the roll attitude (φ) can be assumed zero, because the roll motion is negligible. Taking the parameters β and r, the state space of the dutch roll motion can be reduced to [4]:

\[ \begin{bmatrix}\dot{\beta}\\ \dot{r}\end{bmatrix} = \begin{bmatrix} \dfrac{Y_\beta}{U_0} & -\left(1 - \dfrac{Y_r}{U_0}\right)\\ N_\beta & N_r \end{bmatrix}\begin{bmatrix}\beta\\ r\end{bmatrix} + \begin{bmatrix}\dfrac{Y_{\delta_r}}{U_0}\\ N_{\delta_r}\end{bmatrix}\delta_r \tag{19} \]

A. Particle Swarm Optimization (PSO)

PSO is an optimization technique of the evolutionary computation type, adapted from social-psychological theories. The method is inspired by the dynamic motion of a flock of birds or a school of fish searching for food: they move together as a group, not as individuals, sharing information within the group. The modification of the velocity and position of each particle is computed from the current velocity and the distances from pbest_{i,d} and gbest_d, as shown in the following equations:

\[ v_{i,m}^{(t+1)} = w\,v_{i,m}^{(t)} + c_1 R\left(pbest_{i,m} - x_{i,m}^{(t)}\right) + c_2 R\left(gbest_m - x_{i,m}^{(t)}\right) \tag{20} \]
\[ x_{i,m}^{(t+1)} = x_{i,m}^{(t)} + v_{i,m}^{(t+1)} \tag{21} \]
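A minimal sketch of the PSO update rules (20)-(21), here minimizing a toy quadratic; the constriction-equivalent values w = 0.7298 and c1 = c2 = 1.49618 used later in this paper are assumed as defaults.

```python
import numpy as np

def pso_minimize(f, d=3, n=50, iters=100, w=0.7298, c1=1.49618, c2=1.49618,
                 lo=-5.0, hi=5.0, seed=0):
    """Plain PSO: velocity update per Eq. (20), position update per Eq. (21)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, d))        # particle positions
    v = np.zeros((n, d))                   # particle velocities
    pbest, pcost = x.copy(), np.array([f(p) for p in x])
    g = pbest[pcost.argmin()]              # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, d)), rng.random((n, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # Eq. (20)
        x = x + v                                               # Eq. (21)
        cost = np.array([f(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[pcost.argmin()]
    return g, pcost.min()

best, val = pso_minimize(lambda p: np.sum((p - 1.0) ** 2))
print(best, val)
```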

3 Discussions

The missile control system design for RKX-200 LAPAN is performed at Mach 0.5 at zero degrees angle of attack.

A. Aerodynamic Parameter Calculation for Rocket RKX-200 LAPAN

The aerodynamic parameter coefficients of rocket RKX-200 LAPAN were determined using the Missile Datcom software. In this research, the input data vary the angle of attack of the rocket from -9.0 degrees to 10.0 degrees, at speeds that also vary from Mach 0.1 to Mach 2.0, flying at an altitude of 500 meters.

B. State-Space Formation of the Longitudinal Mode

The longitudinal mode has the elevator fin deflection (δe) as the control input, and the outputs are the pitch rate (q) and the pitch attitude (θ):

\[ \begin{bmatrix}\dot{u}\\ \dot{w}\\ \dot{q}\\ \dot{\theta}\end{bmatrix} = \begin{bmatrix} -0.6747 & 0.0383 & 0 & -9.81\\ -0.3834 & -0.2224 & 34 & 0\\ 0.0002 & -0.0752 & -6.15 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix}u\\ w\\ q\\ \theta\end{bmatrix} + \begin{bmatrix}13.0348\\ -1.3035\\ -0.6554\\ 0\end{bmatrix}\delta_e \tag{22} \]

• Short Period Motion Approach

\[ \begin{bmatrix}\dot{w}\\ \dot{q}\end{bmatrix} = \begin{bmatrix} -0.2224 & 34\\ -0.0752 & -6.15 \end{bmatrix}\begin{bmatrix}w\\ q\end{bmatrix} + \begin{bmatrix}-1.3035\\ -0.6554\end{bmatrix}\delta_e \tag{23} \]

• Phugoid Motion Approach

\[ \begin{bmatrix}\dot{u}\\ \dot{\theta}\end{bmatrix} = \begin{bmatrix} -0.6747 & -9.81\\ 0.0113 & 0 \end{bmatrix}\begin{bmatrix}u\\ \theta\end{bmatrix} + \begin{bmatrix}13.0348\\ 0.0383\end{bmatrix}\delta_e \tag{24} \]

C. State-Space Formation of the Lateral-Directional Mode

The lateral-directional mode has the rudder fin deflection (δr) and the aileron deflection (δa) as control inputs, and the outputs are the roll rate (p), the yaw rate (r) and the roll attitude (φ):

\[ \begin{bmatrix}\dot{\beta}\\ \dot{p}\\ \dot{r}\\ \dot{\varphi}\end{bmatrix} = \begin{bmatrix} -3.6267 & 0 & -0.9948 & 0.2885\\ -2.6913 & -4.8681 & 0.0594 & 0\\ 15.4758 & 0.0003 & -3.1334 & 0\\ 0 & 1 & 0 & 0 \end{bmatrix}\begin{bmatrix}\beta\\ p\\ r\\ \varphi\end{bmatrix} + \begin{bmatrix} 0 & -0.0383\\ 26.9131 & 0\\ 0 & -0.1511\\ 0 & 0 \end{bmatrix}\begin{bmatrix}\delta_a\\ \delta_r\end{bmatrix} \tag{25} \]

• Spiral Motion Approach

\[ \dot{r} = -2.792\,r + 26.9131\,\delta_a \tag{26} \]

• Roll Motion Approach

\[ \dot{p} = -4.927474\,p + 26.9131\,\delta_a \tag{27} \]

• Dutch Roll Approach

\[ \begin{bmatrix}\dot{\beta}\\ \dot{r}\end{bmatrix} = \begin{bmatrix} -3.6267 & -0.9948\\ 15.4758 & -3.1334 \end{bmatrix}\begin{bmatrix}\beta\\ r\end{bmatrix} + \begin{bmatrix}-0.0383\\ -0.1511\end{bmatrix}\delta_r \tag{28} \]

D. PID Parameter Tuning with Particle Swarm Optimization (PSO)

In this research, the ISE (integral squared error) performance index is used to estimate the PID parameters:

\[ ISE(t) = \int_0^T e^2(t)\,dt \tag{29} \]

The fitness function to be optimized is expressed as follows:

\[ J(t) = \alpha\,ISE(t) + \beta\,|O(t)| \tag{30} \]

where α and β are weighting factors and O is the overshoot.

where α, β are weighting (improvement) factors and O is the overshoot. J is the fitness function, and each particle in the swarm has dimension 3, describing the parameters Kp, Ki and Kd. In general, researchers applying the constriction-factor PSO algorithm set φ = 4.1 with φ1 = φ2 = 2.05, which yields the constriction value C = 0.729; this is equivalent to an inertia weight w = 0.729 with c1 = c2 = 1.49618 [6]. In this research, the PSO parameters are therefore set to n = 50, d = 3, T = 100, w = 0.7298, and c1 = c2 = 1.49618.

Table 3. PID parameters tuned by PSO

Motion         Kp        Ki         Kd
Short Period   -110      -250       -0.1
Phugoid         65        7          28
Spiral         -5.4667   -3.5        0
Roll           -2.3      -3.7        0
Dutch Roll     -13       -100.117    0

E. Control System Design Criteria of the LAPAN RKX-200 Rocket
The criteria used in this research refer to MIL-F-8785C, "Military Specification: Flying Qualities of Piloted Airplanes". The LAPAN RKX-200 rocket can be categorized as follows:
1. Based on weight, it is classified as a Class I flying object, weighing less than 5000 kg.
2. Based on the flight phase, it is grouped into Category B, i.e. nonterminal flight phases normally accomplished gradually, without precision tracking maneuvers.
3. Based on its ability to accomplish the mission, it is classified under Level I flying qualities, able to carry out the mission flight phase (cruising).

The specifications required of the rocket's system are [13]:
1. Rise time, Tr ≤ 2.5 s
2. Settling time, Ts ≤ 5 s
3. Overshoot, Os ≤ 5%
4. Steady-state error, Ess ≤ 2%
F. Control System Simulation
Short Period Motion. The transfer function of the short period motion control system with parameters Kp = -110, Ki = -250, and Kd = -0.1 is

G_sp(s) = (0.06554 s³ + 72.1 s² + 169.1 s + 11.93) / (1.066 s³ + 78.47 s² + 173 s + 11.93)   (31)
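A rough numerical check of these specifications against the closed-loop transfer function (31) can be done with scipy; the 10-90% rise time and the 2% settling band used here are assumptions, since the paper does not state its exact definitions.

import numpy as np
from scipy import signal

# Closed-loop transfer function G_sp(s) of equation (31)
num = [0.06554, 72.1, 169.1, 11.93]
den = [1.066, 78.47, 173.0, 11.93]
t, y = signal.step(signal.TransferFunction(num, den), N=5000)

y_final = y[-1]
overshoot = max(0.0, (y.max() - y_final) / y_final * 100)    # Os in percent
t10 = t[np.argmax(y >= 0.1 * y_final)]                       # 10-90% rise time
t90 = t[np.argmax(y >= 0.9 * y_final)]
outside = np.where(np.abs(y - y_final) > 0.02 * y_final)[0]  # 2% settling band
ts = t[outside[-1]] if len(outside) else 0.0

print(f"Tr ~ {t90 - t10:.3f} s, Ts ~ {ts:.2f} s, Os ~ {overshoot:.2f} %")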

4 Conclusion
Based on the analysis and discussion, it can be concluded that:
1. The PID controller with parameters obtained using PSO is a robust controller.
2. Control of the short period motion eliminates the overshoot and the steady-state error, and accelerates the settling time to 2.12 seconds.
3. Control of the phugoid motion accelerates the rise time to 0.4408 seconds and the settling time to only 3.9 seconds, and eliminates the steady-state error.
4. The spiral motion controller eliminates the steady-state error, with a rise time of 0.0157 seconds and a settling time of 0.98 seconds.
5. Control of the roll motion yields a rise time of 0.0431 seconds and eliminates the steady-state error.
6. The dutch roll motion controller eliminates the steady-state error and reduces the overshoot to 0.1397%.
7. The controller is able to overcome internal disturbances in the form of changes in the aerodynamic parameter coefficients: in the longitudinal mode when enlarged up to 40.5% and reduced down to 1%, and in the lateral-directional mode when enlarged up to 60% and reduced down to 775.5%.
8. The short period motion control overcomes external disturbances in the form of impulse signals up to 10 N and square signals up to 5 N; the phugoid motion, impulse signals up to 200 N and square signals up to 10 N; the spiral motion, impulse signals up to 0.3 N and square signals up to 0.25 N; the roll motion, impulse signals up to 0.285 N and square signals up to 0.15 N; and the dutch roll motion, impulse signals up to 17 N and square signals up to 1 N.
9. In the setpoint tracking test, the system response follows the given setpoint changes with good results.

REFERENCES

[1] Alrijadjis and Astrowulan, K. 2010. Optimasi Kontroler PID Berbasis Particle Swarm Optimization (PSO) untuk Sistem dengan Waktu Tunda. Surabaya: Department of Electrical Engineering, ITS.
[2] Aulia, M., et al. 2010. RKX 200 24092010. Bogor: Bidang Struktur LAPAN.
[3] Blake, W. 1998. Missile Datcom User's Manual - 1997 Fortran 90 Revision. Ohio: Air Force Research Laboratory, Air Vehicles Directorate, Wright-Patterson Air Force Base.
[4] Caughey, D. 2011. Introduction to Aircraft Stability and Control, Course Notes for M&AE 5070. New York: Sibley School of Mechanical & Aerospace Engineering, Cornell University, Ithaca.
[5] Fitria, D. 2010. Desain dan Implementasi Pengontrol PI Optimal pada Gerak Longitudinal Roket RKX-200 LAPAN. Bandung: Department of Physics Engineering, ITB.
[6] Kennedy, J., et al. 2007. Particle Swarm Optimization. UK: Springer.
[7] McLean, D. 1990. Automatic Flight Control Systems. UK: Prentice Hall International.
[8] Mukherji, T. 2004. Aircraft Autopilot Design. Bombay.
[9] Nataraj, P.S.V. 1990. Design of Flight Controllers using Quantitative Feedback Theory. Bombay: Systems and Control Engineering, IIT.
[10] Nelson, R. 1990. Flight Stability and Automatic Control. Singapore: McGraw-Hill Book Co.
[11] Pasadena, W. 2010. RKX Berat dan CG Saat di cog awal 17 nov 2010. Bogor: Bidang Struktur LAPAN.
[12] Reveles, D. N. 2000. Longitudinal Autopilot Design. Georgia.
[13] Siouris, G. 2004. Missile Guidance and Control Systems. New York: Springer-Verlag.
[14] MIL-F-8785C. 5 November 1980. Military Specification: Flying Qualities of Piloted Airplanes.

Fig. 3. Step response of the closed-loop short period motion system (pitch rate q, deg/sec, vs. time; Kp = -110, Ki = -250, Kd = -0.1 vs. without controller).

Fig. 4. Step response of the closed-loop phugoid motion system (pitch angle, deg, vs. time; Kp = 65, Ki = 7, Kd = 28 vs. without controller).

Fig. 5. Step response of the closed-loop spiral motion system (yaw rate r, deg/sec, vs. time; Kp = -5.4667, Ki = -3.5, Kd = 0 vs. without controller).

Fig. 6. Step response of the closed-loop roll motion system (roll rate p, deg/sec, vs. time; Kp = -2.3, Ki = -3.7, Kd = 0 vs. without controller).

Fig. 7. Step response of the closed-loop dutch roll motion system (yaw rate r, deg/sec, vs. time; Kp = -13, Ki = -100.117, Kd = 0 vs. without controller).

Fig. 8. Step response of the short period motion system under disturbances (aerodynamic coefficients enlarged 30-50%, reduced 0.1-10%; impulse 10-15 N; square 5-10 N).

Fig. 9. Step response of the phugoid motion system under disturbances (impulse 200-400 N; square 10-40 N).

Fig. 10. Step response of the spiral motion system under disturbances (coefficients enlarged 50-70%, reduced 500-800%; impulse 0.3-1 N; square 0.25-1 N).

Fig. 11. Step response of the roll motion system under disturbances (coefficients enlarged 50-70%, reduced 500-800%; impulse 0.285-1 N; square 0.15-1 N).

Fig. 12. Step response of the dutch roll motion system under disturbances (coefficients enlarged 50-70%, reduced 500-800%; impulse 17-30 N; square 1-10 N).

Fig. 13. Step response of the system to setpoint tracking (short period, dutch roll, phugoid, spiral, and roll motions against the setpoint).


Modified Feige-Fiat-Shamir Signature Scheme with Message Recovery

Dessi Nursari, Elena Sabarina, and Rizki Yugitama
National Crypto Institute, Jl. H. Usa, Putat Nutug, Ciseeng, Bogor, Indonesia, 16330
[email protected], [email protected], [email protected]

Abstract. Cryptography provides several security services, namely confidentiality, integrity, authentication, and non-repudiation. A digital signature is a signature included in a message to fulfill the security services of authentication, integrity, and non-repudiation. The Feige-Fiat-Shamir signature scheme is a digital signature scheme with appendix. In this paper, we modify the Feige-Fiat-Shamir signature scheme with appendix into a Feige-Fiat-Shamir signature scheme with message recovery, in which the message is recovered from the signature itself. The modified Feige-Fiat-Shamir scheme is secure against adaptive chosen-message attacks and can prevent existential forgery attacks.

1 Introduction
In communications, some messages need an assurance of authenticity. Ownership of a message can be authenticated by a signature in that message: the signature means the message truly comes from the authentic sender, and the message corresponds to the signer, with the signature serving as the sender's identity. For digital messages there is the digital signature; as in the case above, a digital signature is an identity of the authentic sender, so we can know that the message is really trustworthy. In cryptography, digital signatures are used to ensure data integrity, user and data authentication, and non-repudiation. Like other schemes, digital signature schemes keep growing: many digital signature schemes have been designed and researched. One of them is the Fiat-Shamir signature scheme, developed by Adi Shamir and Amos Fiat in 1986. This scheme is designed to reduce the number of modular multiplications necessary for generating a signature in the RSA scheme; using multi-component private and public keys, Fiat and Shamir generate signatures faster than with RSA. The Feige-Fiat-Shamir signature scheme is a modification of this earlier signature scheme of Fiat and Shamir, and was


developed by Uriel Feige, Amos Fiat, and Adi Shamir. The scheme was adopted from the Feige-Fiat-Shamir identification scheme and requires a one-way hash function h : {0, 1}* → {0, 1}^k for some fixed positive integer k [1]. In this paper we explain a new modification of this digital signature scheme: the Feige-Fiat-Shamir signature scheme with message recovery.

2 Basic Theory
2.1 Cryptography
Cryptography is the study of mathematical techniques related to aspects of information security such as confidentiality, data integrity, entity authentication, and data origin authentication [1]. Cryptography addresses information security assurance and provides security services such as:
1. Confidentiality is a service used to keep the content of information from all but those authorized to have it. There are numerous approaches to providing confidentiality, ranging from physical protection to mathematical algorithms which render data unintelligible.
2. Data integrity is a service which addresses the unauthorized alteration of data. To ensure data integrity, one must have the ability to detect data manipulation by unauthorized parties. Data manipulation includes such things as insertion, deletion, and substitution.
3. Authentication is a service related to identification. This function applies to both entities and the information itself: two parties entering into a communication should identify each other.
4. Non-repudiation is a service which prevents an entity from denying previous commitments or actions. When disputes arise because an entity denies that certain actions were taken, a means to resolve the situation is necessary. For example, one entity may authorize the purchase of property by another entity and later deny that such authorization was granted; a procedure involving a trusted third party is needed to resolve the dispute.
2.2 Digital Signature
A digital signature is a data string which associates a message in digital form with some originating entity [1]. A digital signature must be verifiable. One significant implementation of the digital signature is the public key certificate, which is very widely used in networks. The concept and necessity of digital signatures were published some years before the practice could be realized. The first digital signature method was the RSA signature scheme, and nowadays many signature schemes have been

published, giving many alternative signature schemes to choose from. There are two general classes of digital signature schemes [1]:
1. Digital signature schemes with appendix require the original message as input to the verification algorithm.
2. Digital signature schemes with message recovery do not require the original message as input to the verification algorithm; in this case, the original message is recovered from the signature itself.
2.3 Feige-Fiat-Shamir Signature Scheme
The Feige-Fiat-Shamir signature scheme is a modification of an earlier signature scheme of Fiat and Shamir, and requires a one-way hash function h : {0, 1}* → {0, 1}^k for some fixed positive integer k [1].
Key Generation. Each entity creates a public key and a corresponding private key. Each entity A should do the following:
1. Generate random distinct secret primes p, q and form n = pq.
2. Select a positive integer k and distinct random integers s1, s2, ..., sk ∈ Z*_n.
3. Compute v_j = s_j^(-2) mod n, 1 ≤ j ≤ k.
4. A's public key is the k-tuple (v1, v2, ..., vk) together with the modulus n; A's private key is the k-tuple (s1, s2, ..., sk).
Signature Generation. To sign a binary message m of arbitrary length, entity A should do the following:
1. Select a random integer r, 1 ≤ r ≤ n − 1.
2. Compute u = r² mod n.
3. Compute e = (e1, e2, ..., ek) = h(m ∥ u), each ei ∈ {0, 1}.
4. Compute s = r · ∏_{j=1}^{k} s_j^{e_j} mod n.
5. A's signature for m is (e, s).
Signature Verification. To verify A's signature (e, s) on m, B should do the following:
1. Obtain A's authentic public key (v1, v2, ..., vk) and n.
2. Compute w = s² · ∏_{j=1}^{k} v_j^{e_j} mod n.
3. Compute e′ = h(m ∥ w).
4. Accept the signature if and only if e = e′.

3 Modified Feige-Fiat-Shamir Signature Scheme

The original Feige-Fiat-Shamir signature scheme belongs to the class of digital signature schemes with appendix, because the verification process requires the original message as input. In this research, we try to modify the Feige-Fiat-Shamir signature scheme into a digital signature scheme with message recovery. In the modified scheme, we replace the one-way hash function h : {0, 1}* → {0, 1}^k by a redundancy function. The redundancy function used in this scheme is the one from the modified Rabin signature scheme, i.e. R(m) = 16m + 6. The redundancy function R has been chosen because it is complex enough to prevent an existential forgery attack.
Key Generation. Each entity creates a public key and a corresponding private key. Each entity A should do the following:
1. Generate random distinct secret primes p, q and form n = pq.
2. Select a positive integer k and distinct random integers s1, s2, ..., sk ∈ Z*_n.
3. Compute v_j = s_j^(-2) mod n, 1 ≤ j ≤ k.
4. A's public key is the k-tuple (v1, v2, ..., vk) together with the modulus n; A's private key is the k-tuple (s1, s2, ..., sk).
Signature Generation. Entity A signs a message m, where 1 ≤ m ≤ n − 1, as follows:
1. Select a random integer r, 1 ≤ r ≤ n − 1.
2. Compute m̃ = R(m) = 16m + 6.
3. Compute u = r² mod n.
4. Compute e = (m̃ · u) mod n and change the integer e to binary form; denote the bit length of e by l, each ei ∈ {0, 1}.
   a. If l < k, pad k − l zero bits in front of e so that the bit length of e is k.
   b. If l > k, take the k most significant bits of e and ignore the remaining bits.
5. Denote the result of step 4 by e′ = (e1, e2, ..., ek).
6. Compute s = r · ∏_{j=1}^{k} s_j^{e′_j} mod n.
7. A's signature for m is (e, s).
Signature Verification. Any entity B can verify this signature using A's public key. To verify A's signature (e, s) on m, B should do the following:
1. Obtain A's authentic public key (v1, v2, ..., vk) and n.
2. Compute e′ in the following way:
   a. Change the integer e to binary form; denote the bit length of e by l, each ei ∈ {0, 1}.
   b. If l < k, pad k − l zero bits in front of e so that the bit length of e is k.

   c. If l > k, take the k most significant bits of e and ignore the remaining bits.
3. Denote the result of step 2 by e′ = (e1, e2, ..., ek).
4. Compute w = s² · ∏_{j=1}^{k} v_j^{e′_j} mod n.
5. Compute m̃ = (e · w⁻¹) mod n.
6. Recover m = R⁻¹(m̃) = (m̃ − 6) / 16.
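A minimal sketch of the modified scheme, assuming Python 3.8+ for modular inverses via pow(x, -1, n); the toy parameters match the implementation example that follows, and the helper names are illustrative rather than part of the scheme.

from math import prod

p, q, k = 13, 7, 5
n = p * q                                             # modulus n = 91
s_priv = [2, 5, 17, 25, 31]                           # private k-tuple s_j
v_pub = [pow(pow(s, 2, n), -1, n) for s in s_priv]    # v_j = s_j^(-2) mod n

def e_bits(e):
    """Binary form of e: left-pad with zeros if l < k, keep k MSBs if l > k."""
    bits = bin(e)[2:].zfill(k)
    return [int(b) for b in bits[:k]]

def sign(m, r):
    m_red = (16 * m + 6) % n                          # redundancy R(m) = 16m + 6
    e = (m_red * pow(r, 2, n)) % n                    # e = (R(m) * u) mod n
    s = (r * prod(pow(sj, ej, n) for sj, ej in zip(s_priv, e_bits(e)))) % n
    return e, s

def verify(e, s):
    w = (pow(s, 2, n) * prod(pow(vj, ej, n) for vj, ej in zip(v_pub, e_bits(e)))) % n
    m_red = (e * pow(w, -1, n)) % n                   # recover R(m) = e * w^(-1)
    return (m_red - 6) // 16                          # m = R^(-1)(R(m))

e, s = sign(4, 11)         # the paper's example: m = 4, r = 11
print(e, s, verify(e, s))  # expected output: 7 53 4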

Implementation
Key Generation:
1. Entity A generates the primes p = 13 and q = 7 and computes n = pq = (13)(7) = 91.
2. A chooses the positive integer k = 5 and the random integers (s1, s2, s3, s4, s5) = (2, 5, 17, 25, 31).
3. A computes v_j = s_j^(-2) mod 91, 1 ≤ j ≤ 5.

Table 1. The selection of s_j and the computed v_j

j                      1    2    3    4    5
s_j                    2    5    17   25   31
s_j^(-1) mod 91        46   73   75   51   47
v_j = s_j^(-2) mod 91  23   51   74   53   25

4. A's public key is the k-tuple (23, 51, 74, 53, 25) and the modulus 91; A's private key is the k-tuple (2, 5, 17, 25, 31).
Signature Generation:
A signs the message m = 4, where 1 ≤ 4 ≤ 90.
1. Suppose entity A chooses the random integer r = 11, where 1 ≤ 11 ≤ 90.
2. Compute m̃ = R(4) = 16(4) + 6 = 70.
3. Compute u = 11² mod 91 = 30.
4. Compute e = (70 · 30) mod 91 = 2100 mod 91 = 7.
   - Changing the integer e to binary form gives e = 111, with bit length 3.
   - Because 3 < 5, pad two zero bits in front of 111 to obtain 00111, so the bit length of e is 5.
5. e′ = (e1, e2, e3, e4, e5) = (0, 0, 1, 1, 1).
6. Compute s = r · ∏ s_j^{e′_j} mod n = (11)(17)(25)(31) mod 91 = 53.
7. A's signature is (e, s) = (7, 53).
Signature Verification

1. Obtain A's authentic public key (23, 51, 74, 53, 25) and 91.
2. Compute e′ in the following way:
   - Changing the integer e = 7 to binary form gives e = 111, with bit length 3.
   - Because 3 < 5, pad two zero bits in front of 111 to obtain 00111, so the bit length of e is 5.
3. e′ = (e1, e2, e3, e4, e5) = (0, 0, 1, 1, 1).
4. Compute w = s² · ∏ v_j^{e′_j} mod n = (53)²(74)(53)(25) mod 91 = 30.
5. Compute m̃ = (e · w⁻¹) mod n = (7 · 30⁻¹) mod 91 = (7 · 88) mod 91 = 70.
   - To compute 30⁻¹ mod 91: 91 = 3·30 + 1, so 1 = 91 − 3·30, hence 30⁻¹ mod 91 = −3 mod 91 = 88.
6. Recover m = R⁻¹(70) = (70 − 6)/16 = 4.
B accepts the signature because the signature is valid.

Mathematical Proof of the Verification Process

w ≡ s² · ∏ v_j^{e_j} ≡ r² · ∏ s_j^{2e_j} · ∏ v_j^{e_j} ≡ r² · ∏ (s_j² v_j)^{e_j} ≡ r² ≡ u (mod n),

since v_j = s_j^(-2) mod n implies s_j² v_j ≡ 1 (mod n). Because w ≡ u, it follows that e = e′.

4 Security Analysis
The following is the analysis of the modified Feige-Fiat-Shamir signature scheme:
1. In this scheme, the signer does not give the attacker enough information to perform cryptanalysis. The signer only sends the values e and s to the verifier, so if the communication is intercepted, the attacker cannot learn anything beyond what the signer sends to the verifier.
2. All entities use a modulus of the same form to generate p, q and the public and private key pairs, so attacking the scheme requires solving the square root problem.
3. Based on the problem of computing square roots modulo n, the scheme is secure against an adaptive chosen-message attack. The choice of the algorithm's parameters is the same as in the Fiat-Shamir scheme: the message m consists of t bits, so the private key amounts to k·t bits.
4. There exists a redundancy function which can prevent an existential forgery attack.

5 Conclusion

We have shown a modification of the Feige-Fiat-Shamir signature scheme with message recovery, in which the message can be recovered from the signature. In this scheme, we replace the hash function by the redundancy function of the modified Rabin scheme. The redundancy function is used to make the scheme stronger and to resist signature forgery attacks. It remains an open problem to strengthen this digital signature scheme further, for example by doing more research on the redundancy function used.

References
1. Menezes, Alfred J., Paul C. van Oorschot, Scott A. Vanstone: Handbook of Applied Cryptography. Boca Raton: CRC Press LLC, 1997.
2. Ong, H., C.P. Schnorr: Fast Signature Generation with a Fiat-Shamir-Like Scheme. Proceedings of Eurocrypt '90, 2006.
3. Stinson, Douglas R.: Cryptography: Theory and Practice. Chapman & Hall/CRC.


The effect of the use of the MDS matrices in the T-020 block cipher algorithm

Sutoro and Bety Hayat Susanti
Sekolah Tinggi Sandi Negara, Bogor, Jawa Barat
[email protected], [email protected]

Abstract. In this paper, we discuss the effect of using Maximum Distance Separable (MDS) matrices in the T-020 block cipher algorithm. The MDS matrices used are the Twofish MDS matrix and the Square MDS matrix. In order to measure the diffusion effect of an MDS matrix, we test it using the Strict Avalanche Criterion (SAC) and the Bit Independence Criterion (BIC) on a single MDS matrix and on the F function. We also test the T-020 algorithm as a whole using SAC. The test results show that the Twofish MDS matrix has a better effect than the Square MDS matrix on the T-020 block cipher algorithm. Keywords: MDS matrix, SAC, BIC, T-020 block cipher algorithm

1 Introduction
According to Shannon [5, 6], confusion and diffusion are two mandatory properties of a secure cipher. Confusion makes the statistical relationship between ciphertext and plaintext more complicated, while diffusion is associated with the dependency of the output bits on the input bits. A cipher with good diffusion satisfies the SAC and the BIC. Serge Vaudenay [7] suggested using MDS matrices in cryptographic primitives to produce multipermutations. These functions have perfect diffusion: for a change of t input bits out of m bits, at least m − t + 1 of the output bits change. A Feistel Network (FN) is a general method of transforming the input block of a cipher into a permutation through the repeated application of keyed, non-linear F functions [1]. It was invented by Horst Feistel and popularized by the Data Encryption Standard [2]. An Unbalanced Feistel Network (UFN) is a Feistel network where the "left half" and the "right half" are not of equal size. The T-020 algorithm is a block cipher based on a UFN that is homogeneous, complete, and consistent [11]. In this paper, we discuss the effect of using an MDS matrix in the T-020 UFN algorithm.


2 Theoretical Background
2.1 Maximum Distance Separable (MDS) Matrix
An MDS code over a field is a linear mapping from m field elements to n field elements, with the property that the minimum Hamming distance between any two distinct vectors is at least n + 1 [6]. The Hamming distance between two vectors equals the Hamming weight of their difference, where the Hamming weight is defined as the number of nonzero components of a vector. An MDS code can be represented by an MDS matrix M consisting of m × n elements. Using the matrix M, the relation between the output bits C and the input bits P can be described as

C = M · P

MDS matrices are used for diffusion in block ciphers like AES, Twofish and Khazad [10]. MDS matrices, which are mainly derived from Reed-Solomon codes, deliver the diffusion properties that make them one of the vital constituents of modern ciphers like the Advanced Encryption Standard (AES) and Twofish [9]. In the Twofish algorithm, the MDS matrix serves as the component that provides diffusion. Twofish uses a single 4-by-4 MDS matrix over GF(2^8); this matrix multiplication is the principal diffusion mechanism in Twofish. The MDS property guarantees that the number of changed input bytes plus the number of changed output bytes is at least five [6]. The Twofish MDS matrix, as specified in [6], is given (in hexadecimal) by:

      [01 EF 5B 5B]
M  =  [5B EF EF 01]
      [EF 5B 01 EF]
      [EF 01 EF 5B]

In the Twofish MDS matrix the mapping is unique, meaning that different single input bytes produce different output bytes; this is because no row (or column) of the matrix is a rotation of another row (or column). In the Square algorithm, the MDS matrix is used to minimize the maximum probability of differential trails and the maximum correlation of linear trails, in order to resist linear and differential cryptanalysis [8]. The MDS matrix provides the high diffusion properties of the Square algorithm and increases the number of active S-boxes. The Square MDS matrix is also used in the AES block cipher as the MixColumn matrix, and is given (in hexadecimal) by:

      [02 03 01 01]
M  =  [01 02 03 01]
      [01 01 02 03]
      [03 01 01 02]
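A sketch of the product C = M·P over GF(2^8) may make the mechanism concrete. It uses the Square/AES MixColumn matrix with the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11B); the Twofish matrix is applied the same way but over its own reduction polynomial. The test vector is the standard AES MixColumns example, so the expected output is a known value.

def gf_mul(a, b, poly=0x11B):
    """Multiply a and b in GF(2^8), reducing modulo the given polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

SQUARE_MDS = [
    [0x02, 0x03, 0x01, 0x01],
    [0x01, 0x02, 0x03, 0x01],
    [0x01, 0x01, 0x02, 0x03],
    [0x03, 0x01, 0x01, 0x02],
]

def mds_multiply(matrix, column):
    """C = M * P over GF(2^8); XOR plays the role of addition."""
    out = []
    for row in matrix:
        acc = 0
        for m, p in zip(row, column):
            acc ^= gf_mul(m, p)
        out.append(acc)
    return out

print([hex(b) for b in mds_multiply(SQUARE_MDS, [0xDB, 0x13, 0x53, 0x45])])
# FIPS-197 MixColumns test column: expected ['0x8e', '0x4d', '0xa1', '0xbc']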

2.2 Strict Avalanche Criterion (SAC)
According to [4], the Strict Avalanche Criterion (SAC) combines the concepts of completeness and the avalanche effect. A cryptographic function satisfies the SAC if each output bit changes with a probability of one half whenever a single input bit is complemented, formulated as follows:

P(Δf_j = 1 | Δx_i = 1) = 1/2, for all i, j   (1)

Equation (1) can be modified to determine the SAC parameter p_ij as

p_ij = (1/N) · #{x : f_j(x) ≠ f_j(x ⊕ e_i)}   (2)

where p_ij lies in the range [0, 1] and can be interpreted as the probability of a change in the j-th output bit when the i-th input bit changes. If p_ij is not equal to 1/2 for some pair (i, j), the function does not satisfy the SAC. The relative error of the SAC results can be obtained by the formula

ε_ij = |p_ij − 1/2| / (1/2)   (3)

An S-box satisfies the SAC within a maximum relative error ε_max if for every i and j the following equation holds:

(1/2)(1 − ε_max) ≤ p_ij ≤ (1/2)(1 + ε_max)   (4)
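An illustrative sketch of the SAC estimate under the definitions above: sample random inputs, flip one input bit at a time, count output-bit changes as in equation (2), and report the worst relative error of equation (3). The 8-bit test function is a hypothetical stand-in for an S-box or an F function.

import random

def f(x):
    # hypothetical nonlinear 8-bit test function; any S-box could be plugged in
    return ((x * 167 + 13) % 256) ^ (x >> 3)

N, n_bits = 2 ** 12, 8
flips = [[0] * n_bits for _ in range(n_bits)]
for _ in range(N):
    x = random.randrange(256)
    y = f(x)
    for i in range(n_bits):
        dy = y ^ f(x ^ (1 << i))        # avalanche vector for input bit i
        for j in range(n_bits):
            flips[i][j] += (dy >> j) & 1

# relative error of eq. (3): |p_ij - 1/2| / (1/2) = |2 p_ij - 1|
worst = max(abs(2 * flips[i][j] / N - 1) for i in range(n_bits) for j in range(n_bits))
print(f"largest SAC relative error: {worst:.4f}")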

2.3 Bit Independence Criterion (BIC)
A function satisfies the BIC if, for all i, j, k with j ≠ k, inverting input bit i causes output bits j and k to change independently [4]. To measure the bit independence concept, one needs the correlation coefficient between the j-th and k-th components of the output difference string, which is called the avalanche vector A. The bit independence parameter corresponding to the effect of the i-th input bit change on the j-th and k-th bits of A is defined as

BIC(j, k) = max_i |ρ(a_j^{e_i}, a_k^{e_i})|   (5)

Kwangjo Kim [3] explains that, in order to find the correlations between pairs of avalanche variables, the correlation coefficient can be calculated as follows:

ρ(A, B) = cov(A, B) / (σ_A σ_B)   (6)

where ρ(A, B) is the correlation coefficient between A and B; cov(A, B) = E(AB) − E(A)E(B) is the covariance of A and B; σ_A = sqrt(E(A²) − E(A)²) is the standard deviation of A (and similarly for σ_B); and E(·) denotes the expectation (mean). From equation (6), the avalanche variables generate a correlation coefficient whose absolute value lies in the range [0, 1], which means:
a. if the value is 1, the avalanche variables are always identical or complements of one another;
b. if the value is 0, the avalanche variables are independent.
In the BIC criteria analysis, this value plays the role of the relative error. Thus, for an S-box, the maximum such value is called the maximum relative error of the BIC results, denoted by

ε_BIC = max_{i, j≠k} |ρ_jk^(i)|   (7)
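A companion sketch for the BIC: for each flipped input bit, collect the avalanche bits over many samples and compute the correlation coefficient of equation (6) for every output-bit pair, keeping the largest absolute value as in equations (5) and (7). The toy test function is the same kind of stand-in as in the SAC sketch.

import random
from statistics import mean, pstdev

def f(x):
    # the same hypothetical 8-bit stand-in function as in the SAC sketch
    return ((x * 167 + 13) % 256) ^ (x >> 3)

def corr(a, b):
    """Correlation coefficient of eq. (6); zero-variance pairs count as independent."""
    sa, sb = pstdev(a), pstdev(b)
    if sa == 0 or sb == 0:
        return 0.0
    cov = mean(x * y for x, y in zip(a, b)) - mean(a) * mean(b)
    return cov / (sa * sb)

N, n_bits = 2 ** 12, 8
worst = 0.0
for i in range(n_bits):
    av = [[] for _ in range(n_bits)]          # avalanche variables per output bit
    for _ in range(N):
        x = random.randrange(256)
        dy = f(x) ^ f(x ^ (1 << i))
        for j in range(n_bits):
            av[j].append((dy >> j) & 1)
    for j in range(n_bits):
        for k in range(j + 1, n_bits):
            worst = max(worst, abs(corr(av[j], av[k])))
print(f"maximum BIC value: {worst:.3f}")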

2.4 T-020 Block Cipher Algorithm [11]
The T-020 algorithm is a block cipher based on a UFN. It encrypts data in 128-bit blocks: a 128-bit block of plaintext goes in one end of the algorithm and a 128-bit block of ciphertext comes out the other end. T-020 is a symmetric algorithm: the same algorithm and key are used for both encryption and decryption (except for minor differences in the key schedule). The key length is 192 bits and the number of rounds is 13. T-020 has a 96-bit source block (s) and a 32-bit target block (t). T-020 is a UFN that is homogeneous, complete, and consistent: homogeneous because the F function used in each round is the same; complete because in each round every bit of the block is part of the source block or the target block, in other words s + t = n; and consistent because s, t, and n are the same for every round. Like other Feistel algorithms, the T-020 algorithm has as its most important component the F function. The constituent components of the F function of the T-020 algorithm are the AES S-box, MDS matrix multiplication, addition, and XOR operations. Figure 1 and Figure 2 present the structure of the T-020 algorithm and its F function.

Fig. 1. Structure of T-020 algorithm

Fig. 2. F function of T-020 algorithm

3 Methodology
We performed the SAC test on the T-020 algorithm as a whole, while the tests performed on the MDS matrix and on the F function of T-020 are SAC and BIC tests. The MDS matrices tested are the Twofish MDS matrix and the Square MDS matrix. When the plaintexts of the T-020 algorithm are treated as independent variables, the keys as control variables are held constant at zero; similarly, when the keys are treated as independent variables, the plaintexts as control variables are held constant at zero. We use the constant zero value for the control variables in order to eliminate their influence. The independent variables were taken randomly using simple random sampling with the random function of the Matlab programming language. The same applies when the MDS matrix and the F function of the T-020 algorithm are the objects of study. The T-020 algorithm requires a 128-bit plaintext and a 192-bit key as input in a single process; therefore, the total population of plaintexts is 2^128 and that of keys is 2^192. The MDS matrix requires a 32-bit input, so the total population of MDS inputs is 2^32, while the F function of T-020 takes a 96-bit input and a 64-bit subkey, so the total population of function inputs is 2^96 and of subkeys 2^64. The number of samples used in testing the T-020 algorithm with the plaintext as independent variable is 2^16 out of the population; likewise, with the key as independent variable, 2^16 keys out of the whole population are used. The number of samples used in testing the F function of T-020 with the function input as independent variable is 2^16 out of the population.

Whereas when the subkey is the independent variable, the sample used is 2^16 out of the entire population of subkeys. In testing the MDS matrix, the sample is 2^16 out of the entire population of inputs. Deniz Toz et al. [12] used a sample of 2^12 in SAC testing of the AES algorithm; therefore, sampling 2^16 inputs in the testing of the T-020 algorithm is expected to fairly represent the characteristics of the population. Table 1 summarizes the variables used in this work.

Table 1. The research variables of the T-020 algorithm, the MDS matrix, and the F function

No   Testing   Object of testing   Independent variable   Control variable    Output
1.   SAC       MDS matrix          MDS input              -                   MDS output
               F function          F function input       Subkey              F function output
               F function          Subkey                 F function input    F function output
               T-020 algorithm     Plaintext              Key                 Ciphertext
               T-020 algorithm     Key                    Plaintext           Ciphertext
2.   BIC       MDS matrix          MDS input              -                   MDS output
               F function          F function input       Subkey              F function output
               F function          Subkey                 F function input    F function output

4 Results and Analysis
4.1. Analysis of a Single MDS Matrix
The test results for a single MDS matrix are presented in Table 2, while the test results for the F function of T-020 using the Twofish MDS and Square MDS matrices are presented in Table 3.

Table 2. Comparison of single MDS matrices

Object        SAC (maximum error)   Maximum BIC
Twofish MDS   0.985321              1.00
Square MDS    0.985351              1.00

According to Table 2, neither the Twofish MDS matrix nor the Square MDS matrix passes the SAC and BIC tests; hence, they do not have good diffusion properties when treated as a single matrix. When the Twofish MDS matrix is

used in the F function of T-020, this function has a smaller error value than the F function of T-020 using the Square MDS matrix (see Table 3).

Table 3. Comparison of the MDS matrices in the F function of T-020

                                        SAC (maximum error)             Maximum BIC
Object                                  Plaintext ind.   Key ind.       Plaintext ind.   Key ind.
F function of T-020 using Twofish MDS   0.013443         0.013169       0.017            0.015
F function of T-020 using Square MDS    0.014694         0.015061       0.033            0.016

On the SAC test, the largest error in the F function of T-020 using the Twofish MDS matrix is 0.013443, while the largest error in the F function of T-020 using the Square MDS matrix is 0.015061. On the BIC test, the F function of T-020 using the Twofish MDS matrix has a largest BIC value of 0.017, while with the Square MDS matrix the largest BIC value is 0.033. The SAC and BIC test results thus indicate that the Twofish MDS matrix provides a better diffusion effect than the Square MDS matrix when used in the F function of the T-020 algorithm (see Table 3).
4.2. Analysis of the T-020 algorithm
The SAC test results of the T-020 algorithm with the plaintexts treated as independent variables show that the T-020 algorithm passed the SAC test, with the minimum and maximum values shown in Table 4.

Table 4. SAC test results with plaintext as independent variable

SAC value   Unit vector   Avalanche vector   SAC value (%)   Relative error   Explanation
Min         98            101                49.23629        0.0152742        Passed
Max         100           94                 50.7454         0.014908         Passed

According to Table 4, the largest error value, obtained using equations (3) and (4), is 0.0152742, which gives the SAC acceptance interval [0.4923629, 0.5076371], i.e. [49.23629%, 50.76371%].

Table 4 shows that the T-020 algorithm has good diffusion properties, as indicated by a largest error value of less than 2%. The largest error occurs at input bit position (unit vector) 98 and avalanche vector position 101, meaning that when plaintext input bit 98 is changed, output bit 101 changes with a probability of 49.23629%. In general, a one-bit change in the plaintext causes each output bit to change with a probability of about 50%, with a largest relative error of 0.0152742. It can therefore be stated that the T-020 algorithm has good diffusion properties.
The SAC test results of the T-020 algorithm with the keys treated as independent variables show that the T-020 algorithm passed the SAC test, with the minimum and maximum values shown in Table 5.

Table 5. SAC test results with key as independent variable

SAC value   Unit vector   Avalanche vector   SAC value (%)   Relative error   Explanation
Min         98            46                 49.26223        0.014755         Passed
Max         121           11                 50.72099        0.01442          Passed

According to Table 5, the largest error value, obtained using equations (3) and (4), is 0.014755, which gives the SAC acceptance interval [0.4926223, 0.5073777], i.e. [49.26223%, 50.73777%].

Table 5 shows that the T-020 algorithm has good confusion properties, as indicated by a largest error value of less than 2%. The largest error occurs at input bit position (unit vector) 98 and avalanche vector position 46, meaning that when key input bit 98 is changed, output bit 46 changes with a probability of 49.26223%. In general, a one-bit change in the key causes each output bit to change with a probability of about 50%, with a largest relative error of 0.0147554. It can therefore be stated that the T-020 algorithm has good confusion properties.
4.3. Analysis of the F function of the T-020 algorithm
The SAC test results of the F function of the T-020 algorithm with the plaintext (function input) treated as independent variable show that it passed the SAC test, with the minimum and maximum values shown in Table 6.

Table 6. SAC test results with plaintext as independent variable

SAC value   Unit vector   Avalanche vector   SAC value (%)   Relative error   Explanation
Min         49            25                 49.33699        0.01326          Passed
Max         65            26                 50.67215        0.013443         Passed

According to Table 6, the largest error value, obtained using equations (3) and (4), is 0.013443, which gives the SAC acceptance interval [0.4932785, 0.5067215], i.e. [49.32785%, 50.67215%].

Table 6 shows that the F function of the T-020 algorithm has good diffusion properties, as indicated by a largest error value of less than 2%. The largest error occurs at input bit position (unit vector) 65 and avalanche vector position 26, meaning that when input bit 65 is changed, output bit 26 changes with a probability of 50.67215%. In general, a one-bit change in the input causes each output bit to change with a probability of about 50%, with a largest relative error of 0.013443. It can therefore be stated that the F function of the T-020 algorithm has good diffusion properties.

The SAC test results of the F function of the T-020 algorithm with the keys (subkeys) treated as independent variables show that it passed the SAC test, with the minimum and maximum values shown in Table 7.

Table 7. SAC test results with key as independent variable

SAC value   Unit vector   Avalanche vector   SAC value (%)   Relative error   Explanation
Min         30            5                  49.38277        0.012345         Passed
Max         19            11                 50.65842        0.013169         Passed

According to Table 7, the largest error value, obtained using equations (3) and (4), is 0.013169, which gives the SAC acceptance interval [0.4934155, 0.5065845].

Table 7 shows that the F function of the T-020 algorithm has good confusion properties, as indicated by a largest error value of less than 2%. The largest error occurs at input bit position (unit vector) 19 and avalanche vector position 11, meaning that when key input bit 19 is changed, output bit 11 changes with a probability of 50.65842%. In general, a one-bit change in the key causes each output bit to change with a probability of about 50%, with a largest relative error of 0.013169. It can therefore be stated that the F function of the T-020 algorithm has good confusion properties.
The BIC test results of the F function of the T-020 algorithm, with the function inputs and the subkeys treated as independent variables, show that the F function passed the BIC test, with the values shown in Table 8.

Table 8. BIC test results of the F function of the T-020 algorithm

Independent variable   Unit vector   Correlation row   Correlation column   BIC value
F function input       16            21                34                   0.017
Subkey                 2             32                23                   0.015

Table 8 shows that the maximum BIC value is 0.017, and this value lies within the acceptance interval, so the F function of the T-020 algorithm passed the BIC test. This indicates that the avalanche variables of the F function of the T-020 algorithm are independent: a change in input bit i causes output bits j and k to change independently. It can therefore be stated that the F function of the T-020 algorithm has good diffusion properties.

5 Conclusion
In this paper we test and analyze the effect of the MDS matrix used in the T-020 algorithm. The test results show that the Twofish MDS matrix provides a better effect than the Square MDS matrix when used in the F function of the T-020 algorithm. The Twofish MDS matrix has a smaller error than the Square MDS matrix both as a single MDS matrix and when used in the F function of T-020.

References
1. Schneier, B. and Kelsey, J.: Unbalanced Feistel Networks and Block Cipher Design. In Proceedings of Fast Software Encryption 1996. Springer-Verlag (1996).
2. FIPS 46-3: Data Encryption Standard. Federal Information Processing Standard (FIPS), Publication 46-3, National Institute of Standards and Technology, U.S. Department of Commerce, Washington D.C., October (1999).
3. Kim, Kwangjo: A Study on the Construction and Analysis of S-boxes for Symmetric Cryptosystems. Yokohama National University (1990).
4. Webster, A.F. and Tavares, S.E.: On the Design of S-boxes. Department of Electrical Engineering, Queen's University (1989).
5. Shannon, C.: Communication Theory of Secrecy Systems. Bell System Technical Journal, 28(4) (1949).
6. Schneier, B., Kelsey, J., Whiting, D., Wagner, D., Hall, C., Ferguson, N.: Twofish: A 128-bit Block Cipher (1998).
7. Vaudenay, S.: On the Need for Multipermutations: Cryptanalysis of MD4 and SAFER. FSE, Second International Workshop Proceedings, Springer-Verlag (1995).
8. Daemen, J., Knudsen, L.R., and Rijmen, V.: The Block Cipher Square. Fast Software Encryption, LNCS 1267, E. Biham, Ed., Springer-Verlag (1997).
9. Malik, Seon No: Dynamic MDS Matrices for Substantial Cryptographic Strength. Seoul National University.
10. Murtaza, Ikram: Direct Exponent and Scalar Multiplication Classes of an MDS Matrix. National University of Science and Technology, Pakistan.
11. Sutoro: Desain Algoritma Block Cipher T-020 Based on Unbalanced Feistel Network. Unpublished. Bogor: Sekolah Tinggi Sandi Negara (2012).
12. Deniz Toz, et al.: Statistical Analysis of Block Ciphers (2006).


RAN Signature Scheme

Novita Loveria, Rizkya Mardyanti, Ayubi Wirara
National Crypto Institute, Bogor, Indonesia
[email protected], [email protected], [email protected]

Abstract. Digital signatures are among the most important cryptographic tools and are widely used today. They provide a method to assure that a message is authentic to one user. The basic idea is that the signature on a message can be created by only one person, but checked by anyone. Signatures can be forged and documents can be altered after signing; however, we are willing to live with these problems because of the difficulty of cheating and the risk of detection. In this paper we propose a new digital signature scheme called the RAN signature scheme. The main idea is based on modified RSA and ElGamal signature schemes.

1 Introduction
Cryptography is defined as the study of mathematical techniques related to aspects of information security such as confidentiality, data integrity, entity authentication, and data origin authentication (A. Menezes, P. van Oorschot and S. Vanstone, Handbook of Applied Cryptography). There are four fundamental cryptographic goals that form the aspects of information security:
1. Confidentiality is a service used to protect the content of information from being accessed by anyone except those having a secret key or the authority to disclose the encoded information.
2. Data integrity is a service which addresses the unauthorized alteration of data. In order to maintain data integrity, a system must have the ability to detect data manipulation by unauthorized parties, such as the insertion, deletion, and substitution of other data into the original data.
3. Authentication is a service related to identification, of both whole systems and the information itself. Two parties entering into a communication should identify each other; delivered information should be authenticated as to origin, date of origin, data content, time sent, etc.
4. Non-repudiation is a service which prevents an entity from denying previous commitments or actions. When disputes arise because an entity denies that certain actions were taken, a means to resolve the situation is necessary.


In general, cryptographic techniques are typically divided into two generic types: symmetric-key and public-key. Symmetric-key algorithms use the same key in the encryption and decryption processes, while asymmetric algorithms use different keys for encryption and decryption. A hash function is a function that maps bit strings of arbitrary finite length to strings of fixed length (hash values). The basic idea of hash functions is to calculate the hash value of a key or original value and compare it, without having to check the contents of a table one by one, making lookups more efficient. Digital signatures are among the most important cryptographic tools and are widely used today. They provide a method to assure that a message is authentic to one user. A digital signature of a message is a number dependent on some secret known only to the signer and, additionally, on the content of the message being signed. In the real world, a digital signature takes the form of a series of bytes that can be checked to verify whether a digital document, including email, is derived from a particular person or not. For example, Alice sends an important document to Bob via email. Trent learns of this and tries to forge Alice's email by changing the document attachment. When Bob receives Alice's email (with the attachment replaced by Trent), he feels something is strange, because it does not accord with the previous discussion. Bob checks (verifies) the digital signature on the email and finds that the letter does not match the signature. Digital signatures are divided into two classes, namely:
1. Digital signatures with appendix: the verification process includes the original message as input.
2. Digital signatures with message recovery: the verification process does not require the original message; the message is recovered from the signature itself.

2 RAN Signature Scheme
2.1 Algorithm

The RAN algorithm is an asymmetric algorithm modified from the RSA and ElGamal signature schemes. The modification uses a key generation similar to that of RSA, whereas the signing and verification processes are nearly similar to those of the ElGamal signature scheme.

Key generation for the RAN signature scheme. Each entity A should do the following:
1. Choose two primes p and q at random. These numbers should be large enough (at least 100 digits).
2. Compute n = pq. The number n is called the security parameter.
3. Calculate φ(n) = (p − 1)(q − 1).
4. Choose an integer e, 1 < e < φ(n), where gcd(e, φ(n)) = 1.
5. Calculate d, 1 < d < φ(n), where ed ≡ 1 (mod φ(n)).
6. Specify the hash function h for the message m.
7. Choose α, 1 ≤ α ≤ n − 1.
8. The public key is (n, e, α); the private key is (p, q, d).
RAN signature generation and verification.
Signature generation: entity A should do the following:
1. Represent the message m by its hash value h(m).
2. Compute r = …
3. Calculate s = … h(m) …
4. A's signature for the message m is (r, s).
Verification: to verify A's signature (r, s), B should:
1. Obtain A's authentic public key (n, e, α).
2. Verify that …; if not, reject the signature.

2.2 Example (RAN signature generation with artificially small parameters)

Key Generation:
1. Entity A selects the primes p = 11 and q = 7.
2. Compute n = p·q = 7·11 = 77.
3. Calculate φ(n) = (11 − 1)(7 − 1) = 10·6 = 60.
4. Choose an integer e, 1 < e < 60, where gcd(e, φ(n)) = 1.
5. Compute d, 1 < d < 60, …

…we set the null hypothesis that X first-order spatially dominates Y against the alternative hypothesis that X does not first-order spatially dominate Y:

H0(1) : Λ_X(T, x) ≤ Λ_Y(T, x) for all x ∈ R
H1(1) : Λ_X(T, x) > Λ_Y(T, x) for some x ∈ R

For second-order spatial dominance, we set the null hypothesis that X second-order spatially dominates Y against the alternative hypothesis that X does not second-order spatially dominate Y:

H0(2) : Λ_X^(1)(T, x) ≤ Λ_Y^(1)(T, x) for all x ∈ R
H1(2) : Λ_X^(1)(T, x) > Λ_Y^(1)(T, x) for some x ∈ R

and for third-order spatial dominance, we set the null hypothesis that X third-order spatially dominates Y against the alternative hypothesis that X does not third-order spatially dominate Y:

H0(3) : Λ_X^(2)(T, x) ≤ Λ_Y^(2)(T, x) for all x ∈ R
H1(3) : Λ_X^(2)(T, x) > Λ_Y^(2)(T, x) for some x ∈ R

The test statistics can be based on the Kolmogorov-Smirnov statistic, comparing the uniform distance between the estimated s-order integrated spatial distribution functions of X and Y as follows:

D_N^(s)(T) = √N · sup_{x ∈ R} [ Λ̂_{N,X}^(s−1)(T, x) − Λ̂_{N,Y}^(s−1)(T, x) ],  s = 1, 2, 3   (16)

where Λ̂_N^(s−1)(T, x) = (1/N) Σ_{k=1}^{N} L̂_k(T, x) is the estimator of the spatial distribution, with

L̂(T, x) = Σ_{i=1}^{n} δ e^(−riδ) 1{X_{iδ} ≤ x}   (17)

where δ denotes the observation interval, so that the number of observations over the period of time [0, T] is n = T/δ. An asymptotic result was shown by Park (2006), which holds for fixed T when δ → 0 and n → ∞.
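An illustrative sketch of the first-order statistic in equation (16) on simulated data: the discounted spatial distribution of each path is estimated on a grid following the form of the estimator (17), and the scaled supremum of the difference is reported. The discount rate, drifts, and grid here are assumptions for the demonstration, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
N, n, delta, r = 200, 500, 0.01, 0.05   # paths, steps, interval delta, discount rate

def spatial_distribution(paths, grid):
    """Estimated Lambda(T, x) averaged over N paths, following eq. (17)."""
    t = delta * np.arange(1, n + 1)
    w = delta * np.exp(-r * t)                       # discounted time weights
    ind = paths[:, :, None] <= grid[None, None, :]   # indicator 1{X_{i delta} <= x}
    return (w[None, :, None] * ind).sum(axis=1).mean(axis=0)

X = np.cumsum(rng.normal(0.02, 0.1, (N, n)), axis=1)   # simulated process X
Y = np.cumsum(rng.normal(0.00, 0.1, (N, n)), axis=1)   # simulated process Y
grid = np.linspace(-3.0, 3.0, 121)

D1 = np.sqrt(N) * np.max(spatial_distribution(X, grid) - spatial_distribution(Y, grid))
print(f"first-order statistic D_N^(1)(T) = {D1:.4f}")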

3 Conclusion
In this paper, we conclude that the spatial dominance approach, which uses the main features of spatial analysis, is applicable not only to time-invariant stationary processes but also to nonstationary processes whose distributions change over time.

4 Acknowledgements
We would like to express our sincere gratitude to the Directorate of Higher Education, Ministry of Education and Culture, for funding this research as part of the fundamental research grant under the title "A spatial dominance approach for analysis of poverty distribution", and to the library of Bank Indonesia for providing a research article required for our research.

References
1. Davidson, R. and J.-Y. Duclos: Statistical Inference for Stochastic Dominance and for the Measurement of Poverty and Inequality. Econometrica, 68, 1435-1464 (2000).
2. Foster, J.E. and A.F. Shorrocks: Poverty Orderings and Welfare Dominance. Social Choice and Welfare, 5, 179-198 (1988).
3. Ibara-Ramirez, R.: Stocks, Bonds and the Investment Horizon: A Spatial Dominance Approach. Working Paper 2011-3. Banco de Mexico, Mexico (2011).
4. Kim, C.S.: Test for Spatial Dominances in the Distribution of Stock Returns: Evidence from the Korean Stock Market Before and After the East Asian Financial Crisis. Studies in Nonlinear Dynamics & Econometrics, Volume 13, Issue 4 (2009).
5. Madden, D. and Smith, F.: Poverty in Ireland, 1987-1994: A Stochastic Dominance Approach. The Economic and Social Review, Vol. 31, 187-214 (2000).
6. Park, J.Y.: The Spatial Analysis of Time Series. Working Paper, Texas A&M University (2006).
7. Wolfstetter, E.: Stochastic Dominance: Theory and Application. Topics in Microeconomics, Cambridge University Press (1999).


THE CORRELATION BETWEEN HYDRODYNAMIC OF RIVER AND POLLUTANT DISPERSION IN A RIVER

1 Basuki Widodo, 2 Bambang Agus S., 3 Setiawan
1 Mathematics Department of ITS Surabaya, [email protected]
2 Post Graduate Student of Mathematics of ITS Surabaya, [email protected]
3 Statistics Department of ITS Surabaya, [email protected]

Abstract. The river is one of the natural water resources that should be protected from the influence of liquid waste or pollutants: its water quality must be maintained and secured against causes of pollution, such as discharges of industrial, domestic, agricultural and other wastes into the river. Because the load of liquid waste discharged into rivers keeps increasing, maintaining the required water quality demands monitoring efforts. However, river water quality monitoring is still mostly done by plotting graphs of the relationship between pollutant concentrations from laboratory analyses of water samples (in mg/L) and the longitudinal distance along the river (in meters or kilometers) at certain points and times. Meaningful linkages between the hydrodynamic elements, such as discharge, velocity, and changes in the structure of the river, and the spread of pollutants in the river have not become apparent. Therefore, this study examines the dispersion of pollutants based on the correlation between the quality of the river and its hydrodynamic elements. The patterns of relationships were analyzed using structural equation modeling (SEM), assisted by the Lisrel 8.80 software. The analysis shows that the pattern of pollutant dispersion in rivers is affected by the pollutant concentrations and the hydrodynamic elements of the river. Keywords: Hydrodynamic of River, Dispersion of Pollution, River Pollutant, SEM

1 INTRODUCTION
Disposal of industrial or non-industrial waste water into rivers, whether treated or untreated, has the potential to pollute the river. This is because each load of waste water discharged into the river contains physical, chemical, and biological parameters that can alter the river's water quality or affect the value of dissolved oxygen in the river [6] and [8]. River water quality is still mostly assessed by plotting graphs of the relationship between the pollutant concentrations obtained from laboratory analyses of water samples (in mg/L) and the longitudinal distance (in meters or kilometers) along the river at certain points and times. Meaningful linkages between the elements of


hydrodynamics, such as discharge, velocity, and changes in the structure of the river, and the spread of pollutants in the river have not become apparent [1], [2], [4] and [7]. Water quality monitoring carried out by Jasa Tirta I includes off-line and on-line monitoring by taking samples at predetermined sampling points. In order to obtain representative sampling locations that describe the real condition of the river water quality, the distribution pattern of the pollutant concentration based on the hydrodynamics of the river and the pollutant must be known. There are thus three latent variables thought to be related to each other, namely the quality of the river, the river hydrodynamics, and the pollutant dispersion. This study assesses the level of river water quality based on the correlation between the spread of pollutant concentrations in the river, its hydrodynamic elements, and the linkage with the sampling sites. The pattern of relationships is analyzed using structural equation modeling (SEM), aided by the LISREL software [9]. According to Ramadiani [5], the use of SEM allows researchers to examine relationships between complex variables and obtain a comprehensive picture of the overall model. Unlike the usual multivariate analyses (multiple regression, factor analysis, MANOVA), SEM can test the structural model and the measurement model together; merging these two models allows researchers to test measurement error and factor analysis in conjunction with hypothesis testing. The data used in this study are secondary data from sampling at on-site sampling points along a predetermined segment of the Surabaya River (Jrebeng bridge - Gunungsari Dam, [3] and [7]). The data comprise the hydrodynamic elements of the river (flow velocity, flow rate, water depth) and the quality of the river (pH, temperature, DO, and COD), while the pollutant dispersion pattern consists of the river length (x axis) and the river width (y axis). The variables used in this study consist of two exogenous latent variables (ξ) and one endogenous latent variable (η), whose indicators are as follows:

Table 1. Notations/Symbols of Variable and Parameter

Indicators for river quality:
X1 Temperature
X2 pH
X3 DO (dissolved oxygen)
X4 COD (chemical oxygen demand)
Indicators for river hydrodynamics:
X5 Flow rate (m³/s)
X6 Flow velocity (m/s)
X7 Water depth (m)
Indicators for pollutant dispersion:
Y1 Length direction (x axis)
Y2 Width direction (y axis)

To achieve the objectives of the study, the following steps are taken: (1) develop a model based on the concept/theory, (2) specify the model, (3) form a path diagram and construct a model with significant indicator variables, (4) establish the measurement model for all significant indicators and then select the model structure, (5) calculate factor scores for each latent variable.

2 RESULTS AND DISCUSSION
This section describes the results and discussion of SEM applied to the secondary data, with the pollutant dispersion of the river as the endogenous latent variable. Based on the description in the previous section, the research path diagram can be depicted as follows:

Fig. 1. Path Diagram of Mathematical Modeling Using SEM (quality of river with indicators X1-X4 and river hydrodynamics with indicators X5-X7 pointing to pollutant dispersion with indicators Y1 and Y2).

From the path diagram above, the model structure is

η = Γξ + ζ

with
η : vector of latent endogenous variables (pollutant dispersion)
ξ : vector of latent exogenous variables
ζ : vector of errors in the structural equation
Γ : coefficient matrix of the latent exogenous variables

The structural model of the path diagram above is

η = γ1 ξ1 + γ2 ξ2 + ζ

or, in matrix form,

[η] = [γ1  γ2] [ξ1; ξ2] + [ζ]

with ξ1 the quality of the river and ξ2 the hydrodynamic elements of the river. The measurement model is the part of a structural equation model that describes the relationship between the latent variables and their indicators. From the path diagram, the measurement model for the variable Y is

Y1 = λ1 η + ε1
Y2 = λ2 η + ε2

or in matrix form Y = Λ_y η + ε, where Y1 is indicator 1 of the endogenous latent variable (longitudinal direction) and Y2 is indicator 2 of the endogenous latent variable (lateral direction). The measurement model for the latent variable X with the exogenous variable ξ1 is

X1 = λ1 ξ1 + δ1,  X2 = λ2 ξ1 + δ2,  X3 = λ3 ξ1 + δ3,  X4 = λ4 ξ1 + δ4

or in matrix form X = Λ_x ξ1 + δ. The measurement model for the latent variable X with the exogenous variable ξ2 is

X5 = λ5 ξ2 + δ5,  X6 = λ6 ξ2 + δ6,  X7 = λ7 ξ2 + δ7

or in matrix form X = Λ_x ξ2 + δ.

In this study, the considered variables consist of three latent variables, two of which are exogenous latent variables and one of which is the endogenous latent variable, namely the dispersion of pollutants in the river. To test whether each indicator is valid in measuring its latent variable, confirmatory factor analysis is used on each latent variable.
2.1 Descriptive Statistics of the Indicator Variables
Descriptive statistics of the indicator variables are shown in Table 2.

Table 2. Descriptive statistics of the indicator variables

Indicator   Minimum   Maximum   Mean     Std. deviation
X1          28        31        29.626    1.001
X2          6.4       7.4       6.754     0.366
X3          5         7.2       5.886     0.709
X4          9.9       12.8      11.301    1.191
X5          38.41     72.35     51.386   12.625
X6          0.265     0.933     0.338     0.063
X7          1.7       2.8       2.411     0.356
Y1          1.67      3.63      2.604     0.631
Y2          1.34      3.75      2.880     0.879

2.2 Running the Path Diagram in LISREL 8.8

Fig. 2. The basic model, the t-value

The test results for the latent variable quality of the river, shown in Figure 2, indicate that all indicator variables X1, X2, X3, and X4 are significant in measuring the quality of the river and are positively correlated with it. Likewise, the validity test for the latent hydrodynamic variable, also depicted in Figure 2, shows that the variables X5, X6 and X7 are significant in measuring the hydrodynamics. Further, the endogenous latent variable dispersion of pollutants, shown in Figure 2, is measured by two indicators: the direction of the river length (longitudinal direction), Y1, and the direction of the river width (lateral direction), Y2. From the results it is known that, statistically, all indicator variables taken to measure the dispersion of pollutants, namely Y1 and Y2, are positively and significantly correlated.

2.3 Overall Model Fit Test

The overall model fit is assessed through the Goodness of Fit (GOF) statistics generated by the program; the GOF measures are summarized in Table 3.

Table 3. Overall model fit test

GOF measure        Target level                              Estimate                               Fit
Chi-square (p)     Small value, p > 0.05                     2110, p = 0.00                         Poor
NCP (interval)     Small value, narrow interval              353.06 (293.79; 419.77)                Poor
RMSEA, p(close)    RMSEA <= 0.08, p(close fit) >= 0.50       0.30, p = 0.00                         Poor
ECVI               Small, close to saturated ECVI            M = 2.71, S = 0.58, I = 11.55          Good
AIC                Small, close to saturated AIC             M = 417.06, S = 90.00, I = 1778.85     Good
CAIC               Small, close to saturated CAIC            M = 493.89, S = 271.95, I = 1815.24    Good
NFI                >= 0.90                                   0.92                                   Good
NNFI               >= 0.90                                   0.97                                   Good
CFI                >= 0.90                                   0.98                                   Good
IFI                >= 0.90                                   0.98                                   Good
RFI                >= 0.90                                   0.94                                   Good
CN                 >= 200                                    219.87                                 Good
Standardized RMR   <= 0.05                                   0.027                                  Good
GFI                >= 0.90                                   0.95                                   Good
AGFI               >= 0.90                                   0.91                                   Good

(M = model, S = saturated, I = independence.)

Table 3 shows that three GOF measures indicate a poor fit while twelve indicate a good fit, so it can be concluded that the overall model fit is good.

2.4 Analysis of the Measurement Model

Once the overall fit of the model and data is good, the next step is the evaluation, or analysis, of the measurement model. This evaluation is performed on each measurement model (construct) separately, by evaluating first the validity of the measurement model and then its reliability. The evaluation of the validity of the measurement model is given in Table 4.

Table 4. t-values, standardized loading factors (SLF), and validity

Observed   QUALITY            HYDRO              DISPERSION         Validity
           SLF      t         SLF      t         SLF      t
X1         1.00     17.53     -        -         -        -         Good
X2         0.90     14.52     -        -         -        -         Good
X3        -0.99    -17.19     -        -         -        -         Good
X4         0.68      9.53     -        -         -        -         Good
X5         -        -         1.00     17.55     -        -         Good
X6         -        -        -0.27     -3.41     -        -         Good
X7         -        -         0.29      3.73     -        -         Good
Y1         -        -         -        -         0.99     10.08     Good
Y2         -        -         -        -         0.73    -10.09     Good

From Table 4 it can be seen that all t-values of the factor loadings exceed 2, so every factor loading in the model is significant, i.e., different from zero. Except for the standardized loading factors (SLF) of X6 and X7, all standardized loadings exceed 0.70. It can therefore be concluded that the validity of all observed variables with respect to their latent variables is good. A summary of the reliability evaluation of the measurement model is given in Table 5. To measure reliability in SEM, two measures can be used: the composite (construct) reliability measure and the variance extracted measure. Construct reliability is calculated as

  CR = \frac{(\sum_i \lambda_i)^2}{(\sum_i \lambda_i)^2 + \sum_i e_i}

and variance extracted as

  VE = \frac{\sum_i \lambda_i^2}{\sum_i \lambda_i^2 + \sum_i e_i}

where λᵢ are the standardized loadings and eᵢ the indicator error variances. From the reliability calculations it can be concluded that all constructs have Construct Reliability (CR) ≥ 0.70 and Variance Extracted (VE) ≥ 0.50, except for the hydrodynamic construct. Nevertheless, the reliability of the measurement model (constructs) can be considered good overall.

Table 5. Construct reliability, variance extracted, and reliability

Variable     CR     VE     Reliability
QUALITY      0.94   0.81   Good
HYDRO        0.61   0.40   Poor
DISPERSION   0.91   0.76   Good
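A small sketch of the two reliability measures, assuming standardized indicators so that each error variance is 1 − λ²; the printed Table 5 values come from the LISREL estimates, so this illustration will not reproduce them exactly:

import numpy as np

def construct_reliability(loadings, error_vars):
    # CR = (sum lambda)^2 / ((sum lambda)^2 + sum e)
    lam, err = np.asarray(loadings), np.asarray(error_vars)
    return lam.sum() ** 2 / (lam.sum() ** 2 + err.sum())

def variance_extracted(loadings, error_vars):
    # VE = sum lambda^2 / (sum lambda^2 + sum e)
    lam, err = np.asarray(loadings), np.asarray(error_vars)
    return (lam ** 2).sum() / ((lam ** 2).sum() + err.sum())

# Illustration with the HYDRO loadings of Table 4; the error variances are
# approximated here as 1 - lambda^2, an assumption of this sketch.
lam_hydro = [1.00, -0.27, 0.29]                    # X5, X6, X7
err_hydro = [max(0.0, 1.0 - l ** 2) for l in lam_hydro]
print(construct_reliability(lam_hydro, err_hydro))
print(variance_extracted(lam_hydro, err_hydro))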

3 Structural Model Analysis

This section concerns the evaluation of the coefficients (parameters) that indicate a causal relationship, i.e., the influence of one latent variable on another latent variable.

The evaluation of the structural model, based on Figures 2 and 3, includes:

t-values of the coefficients/parameters:
(i) QUALITY → DISPERSION: -7.44; |−7.44| > 1.96
(ii) HYDRO → DISPERSION: -1.97; |−1.97| > 1.96
so all coefficients in the structural model are significant.

Standardized solution values of the coefficients/parameters:
(i) QUALITY → DISPERSION: -0.04
(ii) HYDRO → DISPERSION: -0.01

From this evaluation it can be concluded that both the quality of the river and its hydrodynamic elements affect the dispersion of pollutants.

4 Conclusion

Based on the data analysis performed using structural equation modeling, the following can be concluded. The overall fit test of the model relies on the Goodness of Fit analysis; of the 15 GOF measures, 3 indicate a poor fit, so overall the model fit can be judged good. The validity of the observed variables is good for all latent variables, except for the indicators velocity (X6) and depth (X7), which can nevertheless still be retained. The reliability of the measurement model is good except for the latent hydrodynamic variable, which is also still considered usable. The dispersion of pollutants in the river is influenced by the quality of the river and also by the river's hydrodynamic elements.

5 Acknowledgements

We would like to thank ITS, which provided the research grant that allowed this research to be completed and disseminated at the International Conference on Mathematics, Statistics and Its Applications (ICMSA 2012), Bali, 19-21 November 2012. The study was funded from non-tax revenues in accordance with the ITS assignment agreement for the implementation of ITS research laboratories in 2012, number 1027.108/IT2.7/PN.01/2012, dated May 2nd, 2012.

References
1. Chagas, P. F. and Nayfeh, A. H.: Application of Mathematical Modeling to Study Flood Wave Behavior in Natural Rivers as Function of Hydraulic and Hydrological Parameters of the Basin, Hydrology Days (2010)
2. Gosiorowski, D. and Szymkiewicz, R.: Mass and Momentum Conservation in the Simplified Flood Routing Models, Journal of Hydrology, v. 346, p. 51-58 (2007)
3. Hermin: Evaluation of Sampling Points in the Brantas River Water Quality Model Application HP2S, Thesis, Environmental Engineering, ITS (2007)
4. Keskin, M.E. and Agiralioglu, N.: A Simplified Dynamic Model for Flood Routing in Rectangular Channels, Journal of Hydrology, v. 202, p. 302-314 (1997)
5. Ramadiani: Structural Equation Model for Multivariate Analysis Using LISREL, Journal of Mulawarman, vol. 5, no. 1, p. 14 (2010)
6. Slamet, A. and Karnaningroem, N.: The Influence of Hydrodynamics on the Spread of Pollutants in the River with 2-Dimensional Horizontal Flow, Environmental Engineering, ITS, Surabaya (2003)
7. Widodo, B., Karnaningroem, N. and Anwar, N.: Construction of Hydrodynamic Models of the Distribution of Water Pollutants in Rivers, Phase III, Higher Education, Jakarta (2006)
8. Widodo, B., Fatahillah, A. and Rahayuningsih, T.: Mathematical Modeling and Numerical Solution of Iron Corrosion Problem Based on Condensation Chemical Properties, Australian Journal of Basic and Applied Sciences, 5(1), pp. 79-86 (2011)
9. Yamin, S. and Kurniawan, H.: Structural Equation Modeling: Learning Questionnaire Data Analysis Techniques More Easily with LISREL and PLS, Second Book Series, Salemba Infotek (2009)


Stochastic Divination Reckoning Enactment on Multi class Queueing System

K. Sivaselvan¹ and C. Vijayalakshmi²

¹ Department of Mathematics, Jeppiaar Engineering College, Chennai
² Department of Mathematics, VIT University, Chennai
[email protected], [email protected]

Abstract. Stochastic systems play a prominent role in computer graphics because of their success in modeling a variety of complex natural phenomena. The usefulness of a particular stochastic model depends both on its advantages and on the extent to which it can be adjusted to describe different phenomena. In communication networks, the network size is growing rapidly, and the computation effort to find a path between source-destination pairs has increased massively. Routing plays a vital role in the performance and functionality of computer networks. Routing means identifying a path in the network that optimizes a certain criterion, which is called Quality of Service (QoS) routing, and it fails to scale in large networks: the storage and updating cost of a routing procedure becomes prohibitive as the number of nodes grows large. Network partitioning is a key solution for improving scalability in large networks; its main aim is to minimize the computation effort by maximizing the probability that a source-destination pair lies within one block of the network. This paper deals with the specification and analysis of routing procedures that are effective for large store-and-forward packet-switched computer networks. A new stochastic partitioning method is introduced to resolve the scalability of Quality of Service routing algorithms. A graphical representation shows how the new method improves performance in terms of reduced computation effort.

Keywords. Routing, Scalability, Computation Effort, Large Networks, Quality of Service, Network Partitioning.

1 Introduction

Broadband integrated services networks are expected to support applications with Quality of Service requirements; many applications need service guarantees in order to function properly. A communication network consists of a set of nodes connected by a set of links. A path in the network is a sequence of communication links that eventually connects two nodes to each other. The process of finding and selecting paths in the network is termed the routing function. A routing policy is a decision rule that selects which node to visit next based on the current time and the realized network link states. The objectives of a routing technique are (i) to distribute and search the state information of the network in an optimal way and (ii) to reduce the computation effort of searching for a path.


The main drawback of all modern routing algorithms is their inability to scale to large networks proficiently. Network partitioning is the solution for enhancing scalability in large networks: it decomposes a network into subnetworks according to particular rules and considerably reduces the computation effort of routing. The stochastic partitioning method dynamically changes a network partition according to the traffic patterns in the network, in order to minimize an objective function that reflects the computation effort of the routing algorithm used in the network. In this method, the probabilities used to partition the network correspond to the frequency of connection requests between every pair of nodes. The rest of the paper is organized as follows. Section 2 surveys the literature. Section 3 describes the routing computation structure of a packet-switching network. Section 4 discusses stochastic partitioning and introduces some notation. Section 5 explains the overhead in scalable systems. Section 6 discusses the graphical representation. Finally, Section 7 concludes the paper.

2 Literature Survey

Scalability in communication networks was discussed by Amitabh Mishra (2002). Ariel Orda et al. (2002) clearly explained a scalable approach to the partition of QoS requirements in unicast and multicast. Fang Hao et al. (2002) explained the scalable QoS routing performance evaluation of topology aggregation. P. Gupta and A. L. Stolyar (2006) clearly envisaged optimal throughput allocation in general access networks. W. Ching et al. (2009) analyzed optimal service capacities in a competitive multiple-server queueing environment. E. Leonardi et al. (2005) approached joint optimal scheduling and routing for maximum network throughput. X. Lin et al. (2004) analyzed an optimization-based approach for Quality of Service routing in high-bandwidth networks. Orda et al. (2003) approached pre-computation schemes for QoS routing. S. Sinha Deb et al. (2003) gave a detailed explanation of a new approach to scaling Quality of Service routing algorithms.

3 Routing Computation Structure

The basic component of a Quality of Service routing structure is path selection, which can operate in a link-state routing protocol environment where different information can be used at two different time scales.

The goals of the routing computation structure are to reduce the impact of flow setup time, to avoid user-level re-attempts in a heavily loaded network, and to select a route quickly from the possible paths. The structure consists of three stages at different time scales: (i) the First Round Path Communicating (FRPC) stage, (ii) the Sorted Path Ordering (SPO) stage, and (iii) the Recognized Route Assortment (RRA) stage. The FRPC stage makes a preliminary determination of a set of possible paths from a source node to a destination node. The SPO stage follows a Markov process (it selects the most recent states of all links available to each node) and filters the FRPC list to provide a set of QoS-acceptable paths; moreover, this phase orders the routes from most to least acceptable. The RRA stage then selects a definite route as swiftly as possible from the pruned paths available after the SPO stage. The main advantage of this structure is that various distributed routing schemes can fit into it and multiple Quality of Service requirements can be used.

3.1 Routing for Packet-Switching Networks

Communication among the network resources is accomplished by the communication subnetwork. In a packet-switching network, messages are broken into small segments (packets), which are then transmitted through the network by store-and-forward switching. A packet transmitted from a source node to a destination node may be stored in a queue at any intermediate node before being forwarded to the next node; the selection of the next node is based on the routing policy. Routing policies fall into two categories: deterministic (design phase) and adaptive (network operation). The adaptive policy plays a vital role in the successful operation of networks, as it reflects the state of the network. A central node (Fig. 1) provides the routing information to all sub-nodes in the network and computes that information directly.

[Figure: a multiclass queueing network; traffic arriving from the Internet is classified (A/C1, B/C2, C/C3), queued, and forwarded to the output.]

Fig 1: Queue Discipline in Multiclass Network

4 Stochastic Partitioning

The stochastic partitioning technique is designed to partition the original network into a number of blocks, which enhances the scalability of routing algorithms in large networks. The network is partitioned in a probabilistic manner that corresponds to the frequency of connection requests between every pair of nodes. The main objective of stochastic partitioning is to minimize the mean computation effort spent by the routing algorithms used in the network, by maximizing the chance that a source-destination pair falls in the same block of the partition. In particular, if the source-destination pair lies in the same block, there must exist at least one path between every pair of nodes in the block; such a block is termed irreducible. For a low-connectivity network the partition into irreducible blocks is more difficult to construct, whereas it is easily constructed in a high-connectivity network. Each block should consist of at least two nodes in order to have communication within it.

4.1 Objective Function of the Partitioning Strategy

The objective is to minimize a quantitative measure, over all network partition structures, of the computation effort in routing. Let ω(K, Z) be the mean computation effort of finding a path satisfying the constraints from a source node to a destination node, averaged over all source-destination pairs, in a network of K nodes and Z links. The objective function of the computation effort is defined as

  C(\Pi) = \min_{\Pi}\Big[\sum_{i=1}^{N} P_i\,\omega_i(K_i, Z_i) + \Big(1 - \sum_{i=1}^{N} P_i\Big)\,\omega(K, Z) + P(K, Z)\Big]

where P_i is the probability that, for a given connection request, both the source and the destination node are located in block i; P(K, Z) is the partitioning overhead per connection request; \sum_{i=1}^{N} P_i\,\omega_i(K_i, Z_i) is the computation effort involved when both source and destination are located in the same block; and \big(1 - \sum_{i=1}^{N} P_i\big)\,\omega(K, Z) is the computation effort when source and destination are located in different blocks.

Let P_{ij} be the conditional probability of a connection request from source node i to destination node j. P_{ij} is estimated as

  P_{ij}(\tau) = \frac{\eta_{ij}(\tau)}{\sum_{i,j \in N,\, i \neq j} \eta_{ij}(\tau)}

where η_{ij}(τ) denotes the number of times source node i has requested a connection to destination node j in the last τ time units. P_{ij}(\tau) becomes more accurate as the time window τ increases:

  \lim_{\tau \to \infty} P_{ij}(\tau) = P_{ij}

If requests are uniform over pairs, the probability that a request falls within block b is

  P(b) = \frac{C(n_b, 2)}{C(N, 2)} = \frac{n_b(n_b - 1)}{N(N - 1)}

where n_b denotes the number of nodes in block b and N denotes the number of nodes of the network.
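The estimator P_{ij}(τ) and the objective C(Π) translate directly into code. The following is a minimal sketch, assuming a request history of (source, destination) pairs and caller-supplied cost models ω (for example a Dijkstra cost estimate); none of these names come from the paper.

from collections import Counter
from itertools import combinations

def request_probabilities(history):
    # Empirical P_ij: relative frequency of (source, destination) requests
    # observed over the last tau time units.
    counts = Counter(history)
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def same_block_probabilities(blocks, p_ij):
    # P_i: probability that a request's source and destination fall in
    # the same block i of the partition (blocks: list of sets of nodes).
    out = []
    for block in blocks:
        p = sum(p_ij.get((i, j), 0.0) + p_ij.get((j, i), 0.0)
                for i, j in combinations(sorted(block), 2))
        out.append(p)
    return out

def expected_effort(blocks, p_ij, omega_block, omega_full, overhead):
    # C = sum_i P_i*omega_i(K_i, Z_i) + (1 - sum_i P_i)*omega(K, Z) + P(K, Z)
    p_same = same_block_probabilities(blocks, p_ij)
    intra = sum(p * omega_block(b) for p, b in zip(p_same, blocks))
    return intra + (1.0 - sum(p_same)) * omega_full + overhead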

5 Scalability System Overhead

The most important constraint on partitioning is the partition overhead. There are two major types of overhead: routing update overhead and route computation overhead. Routing updates keep the information held at the network nodes continuously up to date. More frequent routing updates lead to better routing performance, but also consume more network bandwidth and processing power, while reducing the update frequency degrades routing performance through stale routing information. The routing update frequency can be reduced in two ways: (i) searching for appropriate routing update trigger policies that provide a controllable update frequency and predictable accuracy, and (ii) designing routing algorithms that minimize the impact of stale routing information.

Route computation reduction is essential for achieving high-quality routing performance and scalability. Route pre-computation and path caching are the two major approaches to reducing route computation. Route pre-computation computes and stores the paths to all destinations before any request arrives, which minimizes per-request operations; moreover, it helps to compute multiple paths to the same destination node and to balance the traffic load. Path caching avoids computing the same path again, as sketched below.
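A minimal sketch of path caching, assuming a toy adjacency-list graph and plain breadth-first search in place of a constrained QoS search; the memoization, not the search itself, is the point.

from collections import deque
from functools import lru_cache

GRAPH = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}

@lru_cache(maxsize=1024)
def cached_path(src, dst):
    # BFS over the hypothetical GRAPH; repeated (src, dst) requests are
    # answered from the cache without recomputation.
    queue, seen = deque([(src, (src,))]), {src}
    while queue:
        node, path = queue.popleft()
        if node == dst:
            return path
        for nxt in GRAPH[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + (nxt,)))
    return None

cached_path("A", "D")   # computed once
cached_path("A", "D")   # served from the cache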

6 Graphical Representations

7 Conclusion

Quality of Service routing is a main component of the routing computation structure, and network scalability is a challenging issue in large networks. In this paper, a new stochastic partitioning concept was introduced to reduce the computation effort of finding a path. The stochastic partitioning technique maximizes scalability and minimizes complexity in large networks. The graphical representation shows that stochastic partitioning speeds up the routing functions.

Acknowledgements

I wish to express my gratitude and thanks to my guide, Dr. C. Vijayalakshmi, and to my parents and family members for the valuable support and cooperation they extended in designing this model successfully.

References

[1] Amitabh Mishra, "Scalability in communication networks", IEEE Network, Vol. 16, No. 4, pp. 10-10, 2002.
[2] Ariel Orda and Alexander Sprintson, "A scalable approach to the partition of QoS requirements in unicast and multicast", IEEE INFOCOM, Vol. 1, pp. 685-694, 2002.
[3] H. Bettahar and A. Bouabdallah, "A new approach for delay-constrained routing", Computer Communications (Elsevier), Vol. 25, pp. 1751-1764, 2002.
[4] S. N. Bhatti and J. Crowcroft, "QoS-sensitive flows: Issues in IP packet handling", IEEE Internet Computing, 4, pp. 48-57, 2000.
[5] W. Ching, S. Choi and M. Huang, "Optimal service capacities in a competitive multiple-server queueing environment", Proceedings of COMPLEX 2009, Shanghai; Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, Springer, 2009.
[6] Fang Hao and Ellen W. Zegura, "On scalable QoS routing: performance evaluation of topology aggregation", IEEE INFOCOM, Vol. 1, pp. 147-156, March 2002.
[7] R. Guérin and A. Orda, "Computing shortest paths for any number of hops", IEEE/ACM Transactions on Networking, Vol. 10, No. 5, pp. 613-620, Oct. 2002.
[8] P. Gupta and A. L. Stolyar, "Optimal throughput allocation in general random-access networks", Proceedings of the 40th Annual Conference on Information Sciences and Systems, pp. 1254-1259, 2006.
[9] S. Halabi and D. McPherson, Internet Routing Architectures, Cisco Press, 2nd edn., 2000.
[10] E. Leonardi, M. Mellia, M. Ajmone Marsan and F. Neri, "Joint optimal scheduling and routing for maximum network throughput", Proc. IEEE INFOCOM 2005, Miami, FL, June 2005, pp. 819-830.
[11] X. Lin and N. B. Shroff, "An optimization based approach for quality of service routing in high-bandwidth networks", IEEE INFOCOM, Hong Kong, China, March 2004.
[12] S. Mao, S. S. Panwar and Y. T. Hou, "On minimizing end-to-end delay with optimal traffic partitioning", IEEE Transactions on Vehicular Technology, Vol. 55, No. 2, pp. 681-690, March 2006.
[13] A. Montaser, "Network partitioning for QoS routing", Thesis, University of Bradford, UK, 2002.
[14] K. Shen, A. Zhang, T. Kelly and C. Stewart, "Operational analysis of processor speed scaling", SPAA, June 2008.
[15] S. Sinha Deb and M. E. Woodward, "A new approach to scale quality of service routing algorithms", Globecom 2004.
[16] K. Siva Selvan and C. Vijayalakshmi, "Algorithmic approach for the design of Markovian queueing network with multiple closed chains", International Conference on TRENDZ Information Sciences and Computing (TISC-2010), Sathyabama University; proceedings in IEEE Xplore.
[17] Turgay Korkmaz and Marwan Krunz, "Multi-constrained optimal path selection", Proceedings of IEEE INFOCOM, pp. 834-843, 2001.
[18] Wei Liu, Wenjing Lou and Yuguang Fang, "An efficient quality of service routing algorithm for delay-sensitive applications", Computer Networks (Elsevier), Vol. 47, pp. 87-104, 2005.
[19] O. Younis and S. Fahmy, "Constraint-based routing in the internet: basic principles and recent research", IEEE Communications Society Surveys & Tutorials, Vol. 5, pp. 42-56, 2003.
[20] A. Orda and A. Sprintson, "Precomputation schemes for QoS routing", IEEE/ACM Transactions on Networking, Vol. 11, No. 4, pp. 578-591, Aug. 2003.


Using SVAR with B-Q Restriction to Examine Post-tsunami Inflation in Aceh

Saiful Mahdi¹

¹ Department of Mathematics, College of Mathematics and Natural Sciences, Syiah Kuala University (UNSYIAH), Darussalam, Banda Aceh 23111
[email protected]
WWW home page: http://math.unsyiah.ac.id/en/

Abstract. This paper is a technical postscript to a chapter of my dissertation; a non-technical version was presented at a policy conference in 2009. It introduces Structural Vector Autoregression (SVAR) with the Blanchard and Quah (B-Q) restriction to provide a basis for policy on inflation in post-tsunami Aceh. The approach produces "prescriptive" rather than "descriptive" models for the studied region. It incorporates an analysis of the output-price relation in Aceh's economy, which is central in determining the response to a policy shock. The analysis can be used to examine the policy implications of an intervention and thus help formulate the actions needed to tame inflation while maintaining economic growth in the post-tsunami region. I find that shocks based on aggregate supply (AS) policy, rather than aggregate demand (AD) policy, would have been more effective in stimulating growth while maintaining moderate inflation in Aceh.

1 Introduction

Since the December 2004 tsunami, prices in the affected regions have increased more sharply than the national average. The most dramatic increase has been in Banda Aceh, which serves as the regional hub for reconstruction activities. Year-on-year inflation in December 2005 reached 41 percent in Banda Aceh, while it was 23 percent in Medan, in the neighboring province of North Sumatra, and 18 percent in Lhokseumawe, the second biggest city in Aceh, compared to 17 percent at the national level. The major increase in prices occurred immediately after the tsunami: during the first four months, the CPI in Banda Aceh registered an increase of 15 percent. The second major increase took place during the nationwide fuel price increase in October 2005, after the government revoked the oil and gas subsidies.


Inflation in post-tsunami Aceh has been consistently higher than Indonesia's national average, as shown in Fig. 1. The gap has been narrowing since the end of 2006, but inflation remained above the national average at least until mid-2008. As of the beginning of 2008, officials and analysts alike were still concerned about the high inflation in Aceh [1].

Since then, Aceh's inflation has shown normal behavior, like that of other regions in Indonesia, with occasional hikes and dips (see Fig. 2). To understand inflation for policy purposes, researchers have tried to build models of inflation behavior. Aceh's inflation, being at the center of the world's attention after the mega-disaster, was studied quite intensively by donor agencies, especially the World Bank, which has its own office in Banda Aceh.

Fig. 2 Aceh and national inflation 2008-2010

Models depicting inflation behavior in post-tsunami Aceh, e.g., those produced by the World Bank, are, however, mostly "descriptive." While such models are useful for understanding inflation behavior descriptively, they cannot lend much help in policy formulation. Policy makers, on the other hand, need a certain kind of "prescription" as the basis for future actions. For this, understanding the output-price relation is very helpful: it enables policy makers to choose an appropriate approach to tame inflation while maintaining growth. This is especially essential in a post-disaster region, which usually witnesses wild fluctuations of prices. This paper incorporates an analysis of the output-price relation in Aceh's economy, which is central in determining the response to a policy shock. The analysis can be used to examine the policy implications of an intervention and thus help formulate the actions needed to tame inflation while maintaining economic growth in the post-tsunami region. The rest of the paper is structured as follows. In Section 2, a review of the basic economics of aggregate supply provides a theoretical basis for the approach. I then lay out the SVAR with the B-Q restriction, incorporating data from Aceh's economy, in Section 3. In Section 4, I show the empirical results and discuss their policy implications. Section 5 concludes.

2 Long-run aggregate supply

The short-run and long-run restrictions imposed in this analysis follow the theoretical concepts laid down in [2] and [3]. A demand shock temporarily increases output from Y* to Y', but in the long run output returns to its original level, Y*. In other words, the transitory shock is neutral in the long run, as exhibited in Fig. 3.

Fig.3 Impacts on Output and Price Due to Demand Shock

Based on the long-run restriction, as shown in Fig. 4, a supply shock shifts the long-run aggregate supply curve to the right and permanently increases output.

Fig. 4 Impacts on Output and Price Due to Supply Shock

From the theoretical concepts and imposed restrictions discussed above, one would expect that: (a) the aggregate supply shock has a positive impact on output and a negative impact on price; (b) the aggregate demand shock contemporaneously has a positive impact on both output and price; and (c) the output response to the aggregate demand shock goes to zero in the long run.

3 SVAR with the Blanchard-Quah (B-Q) restriction

Specifically, the Blanchard-Quah (B-Q) identification method for Structural Vector Autoregression (SVAR) is applied to real GDP and GDP deflator data for Aceh; the GDP deflator is used as a proxy for the CPI. The data are Aceh's quarterly GDP from the first quarter of 2000 to the third quarter of 2008. In the rest of this section I review the theoretical concepts of SVAR and the B-Q technique, and then describe the steps of applying this methodology to the output and price data.

3.1 SVAR and the Blanchard-Quah (B-Q) Technique

The following are the steps to formulate the SVAR and perform the B-Q decomposition, as introduced in [4] and used in [6] and [7]. Let Δy and Δp denote real output growth and the inflation rate, and ε^y and ε^p the two innovations. Inverting the unrestricted Vector Autoregression (VAR), I obtain the following Moving Average (MA) form:

  \begin{bmatrix} \Delta y_t \\ \Delta p_t \end{bmatrix} = \begin{bmatrix} c_{11}(L) & c_{12}(L) \\ c_{21}(L) & c_{22}(L) \end{bmatrix} \begin{bmatrix} \epsilon^y_t \\ \epsilon^p_t \end{bmatrix}    (1)

where the ε's are mean-zero innovations with covariance matrix Σ. Let C(L) be the matrix of coefficients with lag operators c_{ij}(L); these are the impulse-response functions of the disturbances, showing the effect of the shocks ε^y and ε^p in period t on the variables Δy and Δp in period t + j, j = 0, 1, 2, .... Hence C(0) is the identity matrix, representing the contemporaneous responses. As discussed in [6], the innovations ε are in general correlated, so the impulse responses generated by the MA form (1) do not exhibit the responses to orthogonal (i.e., uncorrelated) innovations. To solve this problem, the alternative MA form is defined as:

  \begin{bmatrix} \Delta y_t \\ \Delta p_t \end{bmatrix} = \begin{bmatrix} a_{11}(L) & a_{12}(L) \\ a_{21}(L) & a_{22}(L) \end{bmatrix} \begin{bmatrix} u^y_t \\ u^p_t \end{bmatrix}    (2)

where the u's are uncorrelated innovations with diagonal covariance matrix Ω. The MA representations (1) and (2) are linked by

  A(j) = C(j)A(0), \quad j = 0, 1, 2, \dots    (3)

and also

  A(0)\,\Omega\,A(0)' = \Sigma    (4)

Therefore, the MA form (2) can be obtained if we can identify each element of A(0). Let σ_{ij} and ω_{ij} denote the elements of the matrices Σ and Ω. Equation (4) can be rewritten as:

  \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \begin{bmatrix} \omega_{11} & 0 \\ 0 & \omega_{22} \end{bmatrix} \begin{bmatrix} a_{11} & a_{21} \\ a_{12} & a_{22} \end{bmatrix} = \begin{bmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{bmatrix}    (5)

By (5), three elements of the matrix A(0) are identified through:

  (a_{11}^2 + a_{12}^2)\,\omega_{11} = \sigma_{11}    (6)
  (a_{21}^2 + a_{22}^2)\,\omega_{22} = \sigma_{22}    (7)
  (a_{11}a_{21} + a_{12}a_{22})\,\omega_{22} = \sigma_{12}    (8)

To solve for all elements of A(0), we need more assumptions. Following the B-Q technique, we set ω₁₁ = ω₂₂ = 1, i.e., both shocks are normalized to have a standard deviation of 1. In addition, the long-run restriction is imposed by assuming that the aggregate demand shock does not affect output in the long run, as explained in Section 2. This restriction is defined mathematically as

  c_{11}(L)\,a_{11}(0) + c_{12}(L)\,a_{12}(0) = 0    (9)

The four elements of A(0) can then be computed by solving (6)-(9). After obtaining A(0), the impulse responses to the orthogonal shocks are generated by (3).

3.2 Decomposing the output and price data using the B-Q SVAR technique

The real GDP and GDP deflator data for Aceh are obtained from the Indonesian statistics office (BPS) and CEIC. The augmented Dickey-Fuller (ADF) test for a unit root suggests that when first differences of log GDP and log GDP deflator are taken, the null hypothesis of a unit root is rejected at the 1 percent level.

Table 1. ADF test for unit root. Null hypothesis: Ln(GDP) or its difference has a unit root

Series         T-statistic, Z(t)   p-value*
Ln(GDP)            -0.955           0.7694
ΔLn(GDP)           -6.440           0.0000
Δ²Ln(GDP)          -7.272           0.0000

Table 2. ADF test for unit root. Null hypothesis: Ln(GDP deflator) or its difference has a unit root

Series         T-statistic, Z(t)   p-value*
Ln(GDPDef)          0.428           0.9825
ΔLn(GDPDef)        -6.392           0.0000
Δ²Ln(GDPDef)       -9.468           0.0000

*MacKinnon approximate p-value for Z(t)
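A sketch of how unit-root tests like those in Tables 1 and 2 can be run, assuming the statsmodels package; the series name and data source are placeholders, as the paper's own computations were done elsewhere.

import numpy as np
from statsmodels.tsa.stattools import adfuller

def adf_report(series, name):
    # series: 1-D array of log-levels; tests levels, first and second
    # differences, mirroring the layout of Tables 1 and 2.
    s = np.asarray(series, dtype=float)
    for label, x in [(name, s),
                     ("d " + name, np.diff(s)),
                     ("d2 " + name, np.diff(s, 2))]:
        stat, pvalue = adfuller(x)[:2]
        print(f"{label:14s}  Z(t) = {stat:7.3f}   p = {pvalue:.4f}")

# adf_report(np.log(gdp), "Ln(GDP)")   # gdp: quarterly series, 2000Q1-2008Q3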

Then I computed the unrestricted VAR with 5 lags, following the lag-length selection method used in [4]. The Ljung-Box test indicates that no residual series is serially correlated at a lag length of 5. This yields the VAR model:

  \Delta y_t = b_0 + \sum_{i=1}^{5} b_{1i}\,\Delta y_{t-i} + \sum_{i=1}^{5} b_{2i}\,\Delta p_{t-i} + \epsilon^y_t    (10)

  \Delta p_t = d_0 + \sum_{i=1}^{5} d_{1i}\,\Delta y_{t-i} + \sum_{i=1}^{5} d_{2i}\,\Delta p_{t-i} + \epsilon^p_t    (11)

, where y t and pt are the first-difference log of real output and GDP deflator, respectively. Next, I inverted the above VAR representation to obtain a model in Moving Average (MA) form as shown in (1). As previously stated, the residuals of (1),  ty and  tp , are correlated. To obtain the orthogonal residuals, we compute the matrix A(0) by using (6) – (9) and multiplying A(0) with C(j) – as shown in (3). This calculation gives a new MA representation of (2), which has orthogonal residuals ( u ty and u tp ) . It is noted that the MA form shown in (2) can be re-written as: j 1

  \Delta y_t = \sum_{s \geq 0} a_{11}(s)\,u^y_{t-s} + \sum_{s \geq 0} a_{12}(s)\,u^p_{t-s}    (12)

  \Delta p_t = \sum_{s \geq 0} a_{21}(s)\,u^y_{t-s} + \sum_{s \geq 0} a_{22}(s)\,u^p_{t-s}    (13)

where u^y_t and u^p_t are the orthogonal residuals. To generate the decomposed series of output growth due to aggregate demand shocks, I set u^p_t to zero in (12) and obtain the series Δy^AD_t, the first-difference log of real GDP generated by aggregate demand shocks. Likewise, to obtain the first-difference log of the GDP deflator generated by aggregate demand shocks, Δp^AD_t, the value of u^p_t is set to zero in (13). The next step is to convert the first-difference log data back to levels. Equations (14) and (15) show that the conversion is done by cumulatively summing the first-difference log data; loosely speaking, this is analogous to integrating a first derivative in the continuous domain.

  y^{AD}_t = \sum_{i=0}^{t} \Delta y^{AD}_i    (14)

  p^{AD}_t = \sum_{i=0}^{t} \Delta p^{AD}_i    (15)

Then I created the scatter plot of y^AD_t against p^AD_t, exhibiting the responses of real GDP growth and inflation to aggregate demand shocks. Regressing on the scatter-plot data yields an equation estimating the relationship between y^AD_t and p^AD_t:

  p^{AD}_t = g + h\,y^{AD}_t + v    (16)

where g, h and v are the intercept, slope and residual, respectively. As suggested by economic theory, the short-run responses of Δy_t and Δp_t to the aggregate demand shock are in the same direction; this implies that the slope h obtained from the estimated regression should be positive. I applied similar steps to generate the decomposed series of output growth and inflation due to aggregate supply shocks: specifically, I set u^y_t to zero in (12) and (13) and computed the series Δy^AS_t and Δp^AS_t. After converting those series to levels, I created the scatter plot and estimated the regression line. Since economic theory holds that the aggregate supply shock temporarily causes output and price to move in opposite directions, the slope obtained from regressing p^AS_t on y^AS_t is expected to be negative.
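A minimal sketch of the B-Q identification under the stated assumptions (unit-variance shocks; a lower-triangular long-run impact matrix so that one shock has no long-run output effect), assuming numpy and statsmodels; the shock ordering here is a choice of this sketch and need not match the paper's Gauss code.

import numpy as np
from statsmodels.tsa.api import VAR

def blanchard_quah(data, lags=5):
    """data: T x 2 array with columns (d log GDP, d log GDP deflator)."""
    res = VAR(data).fit(lags)
    sigma = np.asarray(res.sigma_u)                       # residual covariance
    # Long-run MA sum C(1) = (I - B_1 - ... - B_p)^(-1)
    c1 = np.linalg.inv(np.eye(2) - res.coefs.sum(axis=0))
    # Choose A(0) with A(0)A(0)' = Sigma and C(1)A(0) lower triangular,
    # so the second shock has no long-run effect on the first variable.
    lr = np.linalg.cholesky(c1 @ sigma @ c1.T)
    a0 = np.linalg.inv(c1) @ lr
    shocks = np.asarray(res.resid) @ np.linalg.inv(a0).T  # u_t = A(0)^-1 e_t
    return a0, shocks

# Regenerating the MA sums (12)-(13) with one shock zeroed, cumulating as in
# (14)-(15), and regressing p^AD on y^AD then gives the slope h of (16).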

4 Empirical results and discussion

Plots of the AS-AD curves based on Aceh's total GDP (including oil and gas) produce anomalous results: the AS slope is always negative instead of positive, as theory predicts. This might be due to the fact that oil and gas are heavily regulated sectors, with much of their production not intended for domestic use in Aceh. Aceh's GDP without oil and gas, on the other hand, behaves as theory predicts, with a negative AD slope and a positive AS slope. I therefore focus on the series without oil and gas. Fig. 5 shows the AD and AS curves from 2000:Q1 to 2008:Q3, plotted from the data obtained with the SVAR and B-Q decomposition explained above.

Fig. 5 AS curve and AD curve in Aceh, 2000-2008

The AS-AD curves depicted in Fig. 5 show that the slopes of AS and AD accord with theory: positive for AS and negative for AD. With the known slopes it can be deduced that, along the AS curve, a positive inflation innovation of 1 percent corresponds to a positive output growth of 5.32 percent (the reciprocal of the AS slope of 0.188). Equivalently, along the AS curve a positive growth innovation of 1 percent corresponds to a positive inflation innovation of 0.19 percent. The AD curve has a much steeper slope, indicating that growth is more sensitive to inflation innovations: along the AD curve, a positive inflation innovation of 1 percent corresponds to output growth of -0.66 percent. According to [8], "to the extent that monetary authority focuses its policy on inflation control, a steep AD-Curve indicates that AS shock rather than AD policy to lower the price level would have been more effective."

In Aceh's case, it is also necessary to see how the shock caused by the December 2004 tsunami influenced the output-price relationship. Broken into the periods before and after the tsunami, the AS and AD curves again behave as theory predicts, except that the post-tsunami AS curve is flatter than the pre-tsunami and overall curves (Table 3). But since the post-tsunami AD curve also becomes flatter, a policy based on AS rather than AD shocks to increase growth while keeping prices in control would still have been more effective.

Table 3. Slopes of AS and AD curves before and after the 2004 tsunami

Period         AS curve slope   AD curve slope
Overall        0.188            -1.519
Pre-tsunami    0.274            -1.704
Post-tsunami   0.137            -1.199
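The growth-per-inflation figures quoted in the next paragraph follow from the reciprocals of the fitted slopes in Table 3; a worked check (the small differences from the quoted 7.31 and -0.84 are rounding in the reported slopes):

  \frac{1}{0.137} \approx 7.30, \qquad \frac{1}{-1.199} \approx -0.83 \quad \text{(post-tsunami, along AS and AD)}; \qquad \frac{1}{0.188} \approx 5.32 \quad \text{(overall, along AS)}.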

Based on the post-tsunami slopes, a 1 percent innovation in growth corresponds to 0.14 percent inflation along the AS curve, compared to -1.20 percent along the AD curve. Conversely, a 1 percent innovation in inflation generates as much as 7.31 percent growth along the AS curve, while AD shocks would still produce negative growth, -0.84 percent. This again indicates that a policy based on AS shocks would have been the better measure for Aceh's economy after the disaster. Therefore, policy makers in Aceh should have focused more on aggregate-supply interventions. In this regard, the macroeconomics literature notes that shifts in the AS curve can be caused by the following factors [9]:
 - changes in the size and quality of the labor force available for production
 - changes in the size and quality of the capital stock through investment
 - technological progress and the impact of innovation
 - changes in the factor productivity of both labor and capital
 - changes in unit wage costs (wage costs per unit of output)
 - changes in producer taxes and subsidies
 - changes in inflation expectations; a rise in inflation expectations is likely to boost wage levels and cause AS to shift inwards.

For further discussion of aggregate supply and, especially, its long-run behavior, a brief discussion is provided in Appendix A. Ref. [9] notes that "long run aggregate supply is determined by the productive resources available to meet demand and by the productivity of factor inputs (labor, land and capital). In the short run, producers respond to higher demand (and prices) by bringing more inputs into the production process and increasing the utilization of their existing inputs. Supply does respond to change in price in the short run." Policy makers in Aceh should, for example, continue their investments in infrastructure, especially where it directly helps increase supply to meet the demand of the manufacturing and service sectors. Better roads and functioning sea ports, for instance, can help bring down the price of materials for struggling manufacturing industries which, so far, lack comparative and competitive advantage relative to their counterparts in North Sumatra. The local government should also find new channels to integrate the now better-trained labor force left behind by the aid industry after the tsunami.

5 Conclusion

In post-tsunami Aceh, the AS and AD curves indicate that AS shocks, rather than AD policy, would have been more effective to stimulate growth while keeping inflation in control. This is in line with what has been observed by analysts in Aceh after the disaster. However, more concrete policies and actions are needed to implement aggregate-supply interventions. Investment in infrastructure should be continued, but with an agenda to channel the investment into productive sectors such as the manufacturing and services industries.

Acknowledgements

I thank the participants of AIWEST-DR 2009 for their inputs and for their encouragement to present this research at a more technical conference. Ahya Ihsan and Harry Masyrafah, of the World Bank in Jakarta and Banda Aceh respectively, helped me access the data for this research. I am also indebted to Nattapong Putanapong, Ph.D., who helped me with the Gauss code used in this research.

References
[1] Inflasi Aceh 'Lampu Kuning' (Aceh's Inflation 'Yellow Light'), Serambi Indonesia, 3 January 2008.
[2] Watson, M.W. (1986). Univariate detrending methods with stochastic trends. Journal of Monetary Economics, July, pp. 49-75.
[3] Beveridge, S. and Nelson, C.R. (1981). A new approach to decomposition of economic time series into permanent and transitory components with particular attention to measurement of the business cycle. Journal of Monetary Economics, March, pp. 151-174.
[4] Blanchard, O. and Quah, D. (1989). The dynamic effects of aggregate demand and supply disturbances. American Economic Review, 79, pp. 655-673.
[5] Gamber, E.N. (1996). Empirical estimates of the short-run aggregate supply and demand curves for the post-war US economy. Southern Economic Journal, Vol. 62, No. 4 (April), pp. 856-872.
[6] Cooley, T.F. and LeRoy, S.F. (1985). Atheoretical macroeconometrics: A critique. Journal of Monetary Economics, November, pp. 283-308.
[7] Azis, I.J. (2008). Macroeconomic policy and poverty. ADB Institute Discussion Paper No. 111, Tokyo, Japan.
[8] Azis, I.J. and Putanapong, N. (2009). Revisiting output-price relations in East Asia. China-USA Business Review, 8(1), pp. 1-11.
[9] Aggregate supply, tutor2u, http://tutor2u.net/economics/content/topics/ad_as/aggregate_supply.htm (accessed 10 August 2009).


Mathematical Modeling of Circular Cylinder Drag Coefficient with I-Type as a Passive Control

Chairul Imron¹, Suhariningsih², Basuki Widodo³, and Triyogi Yuwono³

¹ Student of Universitas Airlangga (Unair) and Lecturer of Institut Teknologi Sepuluh Nopember (ITS), [email protected]
² Universitas Airlangga (Unair)
³ Institut Teknologi Sepuluh Nopember (ITS)

Abstract. The drag coefficient of a circular cylinder can be reduced by placing an I-type passive control in front of the cylinder. We obtain a mathematical model of the drag coefficient on a circular cylinder; the model is then solved using Simpson's rule and Gauss-Jordan elimination. We find that the smallest drag coefficient is 0.90981, attained at S/D = 2.14943.

Keywords: passive control, drag coefficient.

1 Introduction

Advanced technology requires continuous research. This encourages people to keep doing various kinds of research, by experiment or by simulation, with results that are expected to help develop and discover new, more useful methods. One such area is fluid dynamics, in particular the study of fluid flow past a cylinder. The research here is conducted with the aim of determining the drag force on a circular cylinder. When fluid flows over the surface of an object, whether the flow is laminar or turbulent, the particles near the surface move slowly due to viscous forces. Fluid particles adjacent to the boundary stick to the surface and have zero velocity relative to it; the fluid farther out moves past the nearly stationary particles, and the interaction between the faster- and slower-moving fluid generates shear stress. The thin region in which the velocity gradient and viscous shear are significant is called the boundary layer. Offshore rig constructions, overpass structures and other engineering products are often designed in groups; each pile carries, besides the load from above, forces from the surrounding fluid. Piles, usually called bluff bodies, are a major factor to be considered in design. As is well known, the force on a body in a group has different characteristics from that on a single body of the same shape. This is due to the combined


interference of the flow around the bodies of the group, which exhibits a variety of interesting and unexpected phenomena. Following the discovery of the boundary-layer concept, research on fluid flow across the outer surface of an object developed rapidly; the concept reveals how shear stress plays a very important role in the drag force around objects. Several studies have considered fluid flow past a single circular cylinder (Ladjedel, 2011). Circular cylinders modified into D-type or I-type shapes have been investigated by Igarashi (2006) and Triyogi (2010). Fluid flow past more than one cylinder, of different sizes arranged in tandem, has been investigated by Bouak (1998), Lee (2004), Triyogi (2003) and Tsutsui (2002). In the real world the piling used is not single but multiple. Fluid flow across a cylinder produces a drag force that is often a disadvantage; its size is influenced by several parameters, one of which is the drag coefficient. Bouak and Lemay (1998) conducted experiments using a small circular cylinder as a passive control to reduce the aerodynamic forces on a circular cylinder; their results show that the average reduction in drag coefficient can reach 48% compared to a single circular cylinder without passive control, for the same bluff-body diameter and Reynolds number (Re). Tsutsui and Igarashi (2002) conducted a similar experiment, varying the Reynolds number from 1.5×10⁴ to 6.2×10⁴; they showed that as Re rises above 3×10⁴, the minimum pressure coefficient (Cp,min) becomes lower. Igarashi and Shiba (2006) also studied circular cylinders of D-type and I-type; the drag coefficient (CD) achieved a minimum of 50% of the drag coefficient of a plain circular cylinder. Furthermore, Triyogi Y. et al. (2003) combined the two previous approaches, using a D-type cylinder as passive control to investigate its effect on a circular cylinder; the passive control was able to provide a drag reduction of 7% relative to the circular cylinder without control. The desire to obtain the best performance, with small drag, from a fluid flow system across a bluff body lies behind the idea of passive control. One approach is to provide an upstream disturbance in the form of an object placed in front of the bluff body. The upstream disturbance shifts the location of the separation point on the bluff body: the vortices in the shear layer generated by the disturbance accelerate the transition to a turbulent boundary layer, and with a turbulent boundary layer dominating the surface of the bluff body, separation is delayed further downstream.

[Figure: flow configuration; a uniform free stream (ρ, μ, U∞) approaches an I-type passive-control cylinder of diameter d placed a distance S upstream of the main circular cylinder (bluff body) of diameter D.]

Fig. 1. Scheme of arrangement of two cylinders

Based on the above results, we consider a mathematical model of the drag coefficient experienced by a bluff body with an I-type passive-control cylinder with θs = 65°, at Reynolds number Re = 3.2×10⁴; the arrangement is shown in Figure 1. The spacing ratio S/D varies from 0.6 to 3.0, with d/D = 0.125. The results of this research can be used to shorten experimental studies.

2 Numerical Method

The Navier-Stokes equations for unsteady incompressible flow are

  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\nabla P + \frac{1}{Re}\Delta\mathbf{u}    (1)

  \nabla\cdot\mathbf{u} = 0    (2)

where u is the velocity vector, P is the pressure and Re is the Reynolds number.

2.1 Numerical Procedure

Several steps are taken to solve the above equations. First, neglecting the pressure, equation (1) becomes

  \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = \frac{1}{Re}\Delta\mathbf{u}    (3)

which can be rearranged as

  \frac{\partial \mathbf{u}}{\partial t} = -(\mathbf{u}\cdot\nabla)\mathbf{u} + \frac{1}{Re}\Delta\mathbf{u}    (4)

or, discretized in time,

  \frac{\mathbf{u}^{**} - \mathbf{u}^{*}}{\Delta t} = -(\mathbf{u}\cdot\nabla)\mathbf{u} + \frac{1}{Re}\Delta\mathbf{u}    (5)

Equation (5) can further be stated as

  \mathbf{u}^{**} = \mathbf{u}^{*} + \Delta t\Big(-(\mathbf{u}\cdot\nabla)\mathbf{u} + \frac{1}{Re}\Delta\mathbf{u}\Big)    (6)

The pressure step satisfies

  \frac{\mathbf{u}^{**} - \mathbf{u}^{*}}{\Delta t} = -\nabla P    (7)

Taking the divergence of both sides gives

  \frac{\nabla\cdot\mathbf{u}^{**} - \nabla\cdot\mathbf{u}^{*}}{\Delta t} = -\Delta P    (8)

or

  \nabla\cdot\mathbf{u}^{**} - \nabla\cdot\mathbf{u}^{*} = -\Delta t\,\Delta P    (9)

Because ∇·u** = 0, this becomes

  \nabla\cdot\mathbf{u}^{*} = \Delta t\,\Delta P    (10)

This is a Poisson equation from which we obtain P. The last step is the velocity correction,

  \frac{\partial \mathbf{u}}{\partial t} = -\nabla P    (11)
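A minimal sketch of one fractional-step update implementing (6)-(11) on a uniform periodic grid with a Jacobi Poisson solver; upwinding, boundary conditions, and the cylinder and passive-control geometry of the paper are all omitted.

import numpy as np

def laplacian(f, h):
    # 5-point Laplacian on a periodic grid
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / h**2

def grad(f, h):
    # central differences: (df/dx, df/dy), x along axis 0, y along axis 1
    return ((np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h),
            (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h))

def projection_step(u, v, p, h, dt, re, n_jacobi=50):
    # (6): advection-diffusion predictor, pressure ignored
    ux, uy = grad(u, h); vx, vy = grad(v, h)
    us = u + dt * (-(u * ux + v * uy) + laplacian(u, h) / re)
    vs = v + dt * (-(u * vx + v * vy) + laplacian(v, h) / re)
    # (10): Poisson equation  Lap(p) = div(u*) / dt, Jacobi iterations
    rhs = (grad(us, h)[0] + grad(vs, h)[1]) / dt
    for _ in range(n_jacobi):
        p = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
             np.roll(p, 1, 1) + np.roll(p, -1, 1) - h**2 * rhs) / 4.0
    # (11): velocity correction with the pressure gradient
    px, py = grad(p, h)
    return us - dt * px, vs - dt * py, p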

2.2 Mathematical Modeling

Table 1. Pressure distribution (Cp, in units of 10⁻⁵) on the circular cylinder

Point   S/D=0.6  S/D=1.8  S/D=3.0     Point   S/D=0.6  S/D=1.8  S/D=3.0
  1      10.1      3.7      8.7         19    -16.0    -12.6    -13.5
  2       4.5      1.5      3.3         20     -6.6     -5.3     -5.0
  3       7.5      2.7      3.9         21     -9.6     -8.1     -6.1
  4       3.6      2.6      1.9         22     -9.0     -7.9     -5.1
  5       0.6      1.3      0.0         23     -7.0     -6.20    -3.7
  6      -0.7      0.1     -1.2         24     -7.0     -6.1     -3.6
  7      -3.0     -0.2     -3.5         25     -9.1     -7.9     -4.6
  8      -5.2     -4.7     -6.3         26     -9.8     -8.4     -5.0
  9      -4.2     -4.2     -5.4         27     -6.7     -5.7     -3.5
 10     -10.3    -10.5    -14.1         28    -16.8    -13.9     -9.2
 11      -4.0     -4.1     -5.7         29     -6.5     -5.3     -3.7
 12      -5.8     -5.8     -8.4         30     -8.0     -5.8     -4.5
 13      -5.4     -5.3     -7.8         31     -5.1     -2.8     -2.5
 14      -4.2     -4.1     -6.0         32     -0.2     -0.4     -0.7
 15      -4.3     -4.1     -6.0         33     -0.3      0.8      0.3
 16      -5.8     -5.3     -7.7         34      2.7      2.0      2.0
 17      -7.1     -6.0     -8.2         35      6.6      2.1      4.0
 18      -5.8     -4.6     -5.6         36      0.4      1.4      3.3
                                        37     10.1      3.7      8.7

Three calculations have been done, resulting in the pressure distributions on the circular cylinder shown in Table 1; data are taken at 37 points on the cylinder, in increments of 10°. The drag coefficient formula is

  C_D = \frac{1}{2}\int_0^{2\pi} C_p \cos\theta \, d\theta    (12)

Table 2. Drag coefficient

S/D    0.6       1.8        3.0
CD     1.17538   0.923321   0.989853

Using Equation (12) and Simpson's rule, the drag coefficients in Table 2 are obtained. With these three values of the drag coefficient, and the assumption that the drag coefficient is a function of S/D, we take the fitting equation to be a parabola,

  y = ax^2 + bx + c    (13)

with y the drag coefficient CD and x the ratio S/D. Substituting the three data points gives

  1.17538 = 0.36\,a + 0.6\,b + c
  0.923321 = 3.24\,a + 1.8\,b + c
  0.989853 = 9.0\,a + 3.0\,b + c    (14)

a system of three equations in the three unknowns a, b and c. Solving this system by Gauss-Jordan elimination gives a = 0.1106, b = -0.4755 and c = 1.4209, so the parabola is

  y = 0.1106x^2 - 0.4755x + 1.4209    (15)

To obtain the smallest drag coefficient of the circular cylinder, we find the minimum of equation (15): differentiating with respect to x and setting the derivative to zero gives the axis of symmetry x = 2.14943, at which the minimum value is y = 0.90981.
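The two numerical steps of this section are straightforward to reproduce. A sketch assuming numpy and scipy (the Cp samples of Table 1 would be passed to drag_coefficient):

import numpy as np
from scipy.integrate import simpson   # SciPy >= 1.6; older releases call this simps

def drag_coefficient(cp):
    """cp: the 37 Cp samples of Table 1, taken every 10 degrees over [0, 360]."""
    theta = np.linspace(0.0, 2.0 * np.pi, len(cp))
    return 0.5 * simpson(cp * np.cos(theta), x=theta)     # Equation (12)

# Parabolic fit (13)-(15) through the three computed drag coefficients.
sd = np.array([0.6, 1.8, 3.0])
cd = np.array([1.17538, 0.923321, 0.989853])
a, b, c = np.polyfit(sd, cd, 2)          # exact fit through three points
sd_min = -b / (2.0 * a)                  # axis of symmetry of the parabola
cd_min = np.polyval([a, b, c], sd_min)
print(a, b, c)          # approx. 0.1106, -0.4755, 1.4209
print(sd_min, cd_min)   # approx. 2.14943, 0.90981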

3 Conclusion

The mathematical model of the drag coefficient of a circular cylinder with I-type passive control is Equation (15). From this equation, the smallest drag coefficient, CD = 0.90981, is obtained at S/D = 2.14943.

References
1. Milton van Dyke (1988), "An Album of Fluid Motion", The Parabolic Press, Stanford, California.
2. Bouak, F. and Lemay, J. (1998), "Passive Control of the Aerodynamic Forces Acting on a Circular Cylinder", Experimental Thermal and Fluid Science, Vol. 16, 112-121.
3. Tsutsui, T. and Igarashi, T. (2002), "Drag Reduction of a Circular Cylinder in an Air-Stream", Journal of Wind Engineering and Industrial Aerodynamics, Vol. 90, 527-541.
4. Triyogi Y. and Nuh, M. (2003), "Using of a Bluff Body Cut from a Circular Cylinder as Passive Control to Reduce Aerodynamic Forces on a Circular Cylinder", The International Conference on Fluid and Thermal Energy Conversion 2003, Bali, Indonesia, December 7-11, 2003.
5. Lee, Sang-Joon, Lee, Sang-Ik and Park, Cheol-Woo (2004), "Reducing the Drag on a Circular Cylinder by Upstream Installation of a Small Control Rod", Fluid Dynamics Research, Vol. 34, 233-250.
6. Igarashi, T. and Shiba, Y. (2006), "Drag Reduction for D-Shape and I-Shape Cylinders (Aerodynamic Mechanism of Reduction of Drag)", JSME International Journal, Series B, Vol. 49, No. 4, 1036-1042.
7. Triyogi Y. and Wawan Aries Widodo (2010), "Flow Characteristics Around a D-Type Cylinder Near a Plane Wall", Regional Conference on Mechanical and Aerospace Technology, Bali, Feb 9-10.
8. Ladjedel, A.O., Yahiaoui, B.T., Adjlout, C.L. and Imine, D.O. (2011), "Experimental and Numerical Studies of Drag Reduction on a Circular Cylinder", World Academy of Science, Engineering and Technology, 77, 357-361.


Analyzing portfolio performance of Bangladesh stock market

Md. Zobaer Hasan¹, Anton Abdulbasah Kamil¹, Adli Mustafa² and Md. Azizul Baten³

¹ Mathematics Section, School of Distance Education, Universiti Sains Malaysia, Penang, Malaysia
² School of Mathematical Sciences, Universiti Sains Malaysia, Penang, Malaysia
³ Department of Decision Science, School of Quantitative Sciences, Universiti Utara Malaysia, 06010 UUM Sintok, Darul Aman, Malaysia

Abstract. This paper examines the validity of the Capital Asset Pricing Model (CAPM) in the Dhaka Stock Exchange (DSE) of Bangladesh. For this study we use monthly stock returns of 80 non-financial companies for the period January 2007 to December 2011. We find that the intercept terms are not significantly different from zero, that the securities market line is linear, and that unique risk is insignificant for the 10 portfolios during the period. The results in terms of the slope, however, contradict the CAPM hypothesis and provide evidence against the CAPM.

Introduction

Capital asset pricing has always been an active area of the finance literature. One of the most important developments in modern capital theory is the capital asset pricing model (CAPM). The CAPM states that the expected return on any capital asset is proportional to its systematic risk, measured by beta (see the relation recalled below). The theoretical validity of the CAPM is well tested and accepted, but its practical validity is still in question. Stock markets, whether developed or emerging, play a crucial role in the economy. Emerging markets contribute to the economy through GDP growth, by attracting and expanding investment, and by developing a market place for potential investors. The use of a well-tested pricing model such as the CAPM in emerging stock markets is nevertheless rare, owing to the absence of proper validity tests of the model. A sound, well-tested and accepted pricing model can contribute to the sound operation of emerging markets, and can guide investors, management, policy makers, investment companies, consultants and regulators in those markets. The objective of this study is to examine the validity of the CAPM for emerging markets, especially for the Dhaka Stock Exchange of Bangladesh. The novelty of this study is that it investigates not only the validity of the CAPM but also the behavior of the Bangladesh capital market over the period 2005-2009.
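For reference, the relation the CAPM asserts is the standard security market line (not written out in the paper):

  E(R_i) = R_f + \beta_i\,[\,E(R_m) - R_f\,], \qquad \beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)},

where R_f is the risk-free rate and R_m the market return.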


investors, management, policy makers, investment companies, consultants, regulators of the emerging markets can be guided by a sound pricing model. The objective of this study is to examine the validity of the CAPM for emerging markets especially for Dhaka Stock Exchange of Bangladesh. The novelty of this study is to investigate not only the validity of the CAPM but also the capital market behavior of Bangladesh over the period 2005-2009. Review of literature The foundations for the development of asset pricing models were laid by Markowitz (1952) and Tobin (1958). Early theories suggested that the risk of an individual security is the standard deviation of its returns – a measure of return volatility. The computation of risk reduction as proposed by Markowitz is tedious. Sharpe (1964), Lintner (1965) and Mossin (1968) had independently developed a computationally efficient and standard method, Capital Asset Pricing Model (CAPM) that predicts the expected return on an asset is linearly related to systemic risk, which is measured by the asset’s beta. The studies conducted by, Jensen et al. (1972), Black (1972, 1993) and Fama and MacBeth (1973) had largely been supportive of the standard form of CAPM. After 1970s, CAPM came under attack as striking anomalies were reported by Reinganum (1981), Elton and Gruber (1984) and Bark (1991). Further studies on the fundamental factors of securities such as size effect of Banz (1981), book-to-market equity (BE/ME) effect of Chan, Hamao and Lakonishok (1991), earnings price (E/P) ratio of Ball (1978) and Basu (1983), and studies of CAPM models by Fama and French (1992; 1993; 1996), Davis, Fama and French (2000) showed that CAPM’s beta (β) is not a good determinant of the expected return of securities/portfolios. Early twenty-first century saw an alternative methodology for testing CAPM in the Philippine Equity Markets in Ocampo (2004) and helped to provide the evidence for the role of beta in explaining returns in the Philippines market. In 2006, Yang and Xu (2006) tested CAPM in the Chinese stock market and found that while expected returns and beta exhibited a linear relationship, the hypotheses for the intercept and the slope did not hold. Another study in the same year, in the Greek Securities Market in Michailidis et al. (2006) concluded that the tests provide evidence against the CAPM. A test in Turkey in Gürsoy and Rejepova (2007) found no meaningful relationship between beta coefficients and ex-post risk premiums under the Fama and MacBeth (1973) approach but found strong beta-risk premium relationships with the Pettengill et al. (1995) methodology. Théoret and Racicot (2007) used a new set of instruments based on higher statistical moments to discard the specification errors. According to a study by Cooper et al (2008), a firm’s annual asset growth rate emerges as an economically and statistically significant predictor of the cross section of the

US stock returns. Liu and Zhang (2008) showed that the growth rate of industrial production is a priced risk factor in standard asset pricing tests. For the DSE, several studies of market efficiency have been conducted. Hassan (1999) studied the time-varying risk-return relationship for Bangladesh using a unique data set of daily stock prices and returns compiled by the authors, and found that DSE equity returns exhibited positive skewness, excess kurtosis and deviation from normality, and that the returns displayed significant serial correlation, implying that the stock market is inefficient. Haque (2001) studied cumulative abnormal profit over the study period and described the experience of the DSE after the scam of November 1996 by applying CAPM and the EMH; based on data from four months before and four months after automation (10 August 1998), the paper measured risk-return performance, estimated the SML for big-capital and small-capital companies before and after automation, and tested the EMH. The test results indicated that the market did not improve and that manipulation continued even after automation. Kader (2005) found no evidence that the DSE is weak-form efficient, by testing whether a technical trading strategy (a K% filter rule) yielded abnormal profit. Islam (2005) analyzed the predictability of share prices on the Dhaka Stock Exchange prior to the boom in 1996 and, using heteroscedasticity-robust tests, found evidence in favor of short-term predictability of share prices prior to the boom, but not during the post-crash period. To test whether CAPM is a good indicator of asset pricing in Bangladesh, Rahman et al. (2006) applied the Fama-French (1992) methodology to five variables (stock market return, beta, book-to-market value, size as market capitalization and size as sales) and found that these variables have a significant relationship with stock returns. Uddin and Alam (2007) examined, by ordinary least squares (OLS) regression, the linear relationships between share price and interest rate, share price and growth of interest rate, growth of share price and interest rate, and growth of share price and growth of interest rate; in all cases they found that the interest rate has a significant negative relationship with the share price, and the growth of the interest rate with the growth of the share price, in the Dhaka stock market, so that the DSE is not weak-form efficient. Uddin and Khoda (2009) investigated whether stock price indexes of the Dhaka stock market can be characterized as random walk (unit root) processes using unit root (ADF) tests, and provided evidence that the DSE is not efficient even in the weak form and does not follow the random walk model.

Brief description of the Dhaka Stock Exchange (DSE)

The Dhaka Stock Exchange (DSE) was first incorporated as the East Pakistan Stock Exchange Association Limited on April 28, 1954, and was renamed Dhaka Stock Exchange (DSE) Limited on June 23, 1962. Trading continued until 1971, was suspended during the liberation war, and resumed in 1976, initially with 9 listed companies and a total paid-up capital of Tk. 137.52 million. As of 31 October 2010, the number of listed securities was 442 and the total issued capital of all listed securities was Tk. 646,490.00 million. The Securities and Exchange Commission (SEC), the regulator of the capital market of Bangladesh, was established on 8 June 1993. After the establishment of the SEC, public interest in investing in the capital market improved because of investment-friendly rules and regulations, and foreign portfolio investment began to flow owing to favourable regulatory conditions. In October 1996, a group of brokers, foreign portfolio managers and sponsors of listed companies manipulated stock prices: the All Share Price Index crossed 3600 from less than 1000 within six weeks. As a result, at the end of 1996 a few local and foreign investors made huge gains, while the general public, drawn into investing, suffered heavy losses (Uddin & Alam, 2007). Nevertheless, the Dhaka Stock Exchange is persistently trying to make the securities market an efficient, reliable and transparent organization, capable of meeting the challenges of the economic reality of the country and of making the capital market the centre of the nation's economic development. Some problems remain in the DSE. Unexpected rises and falls in share prices mostly follow from the general confidence of investors in political stability, euphoria about investment in shares, prospects of quick capital gains, and the absence of proper application of circuit breakers. Sometimes the share values of some profitable companies have been increased fictitiously, which hampers the smooth operation of the DSE, and many DSE-listed companies do not disclose their real position, so shareholders and investors have no clear idea of the state of those companies (Akhter, 2005).

Methodology

Sample selection and data description

The data were collected from the Dhaka Stock Exchange (DSE) and consist of 80 non-financial companies over the period January 2007 to December 2011. We concentrate on the DSE because it is the main and oldest stock exchange of Bangladesh. According to "Standard and

Poor’s Emerging Stock Markets Fact Book 2000”, the DSE is one of the frontier emerging markets of South Asia. The data in study covers 10 types of category of company as: Engineering, Food & Allied Products, Fuel & Power, Textile, Pharmaceuticals & Chemicals, Cement, IT, Tannery Industries, Ceramic Industry, and Miscellaneous. The data also covers the 3 groups (Group-A, Group-B and Group-Z) out of 4 groups in DSE market. In short, we can say that, the data represents the overall market. In this study, we take portfolio’s return as a dependent variable.DSE prepares individual company’s daily closing price. Using the closing price of individual company, we calculate the return of individual company as follows: Individual Company’s Return = In (Pt) –In (Pt-1) where, Pt = closing price at period t; Pt-1= closing price at period t-1 and ln = natural log. By using the individual company’s return we can find the portfolio’s return as follows (Michailidis et al, 2006): k

r

it

rpt 

i 1

k

where k is the number of companies included in each portfolio (i = 1, ..., k, with k = 8), p = 1, ..., 10 indexes the portfolios, and r_it is the excess return on company i. For this study we use monthly data for all variables, because daily data, though better for estimating the risk-return relationship, are very noisy (Basu & Chawla, 2010). The DSI Index is used as a proxy for the market portfolio; this market-value-weighted index comprises all listed companies of the exchange and reflects general trends of the Bangladesh stock market. Furthermore, the Bangladesh government T-bill rate, 0.05, is used as the proxy for the risk-free asset.
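As an illustration, the return construction above could be carried out as in the following sketch. It is a minimal example under stated assumptions: prices is a hypothetical DataFrame of daily closing prices (one column per company, not data from the study), and the month-end resampling reflects the study's use of monthly data.

import numpy as np
import pandas as pd

def company_returns(prices: pd.DataFrame) -> pd.DataFrame:
    """Monthly log returns, ln(P_t) - ln(P_{t-1}), per company."""
    monthly = prices.resample("M").last()      # month-end closing prices
    return np.log(monthly).diff().dropna()

def portfolio_returns(returns: pd.DataFrame, groups: dict) -> pd.DataFrame:
    """Equally weighted portfolio returns: r_pt = (1/k) * sum_i r_it."""
    # groups maps a portfolio name to the list of its k member companies.
    return pd.DataFrame({p: returns[cols].mean(axis=1)
                         for p, cols in groups.items()})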

Estimation of the CAPM

The Capital Asset Pricing Model (CAPM), developed in the 1960s, was a revolution in financial theory. The CAPM implies a positive linear relationship between the expected return and the beta of a security (Sharpe et al., 1999): stocks with larger beta demand a higher expected return than stocks with smaller beta. CAPM helps determine the theoretically required rate of return, and therefore the price, of an asset when added to a well-diversified portfolio: it predicts that the expected return on an asset equals the risk-free rate plus a risk premium, i.e. that it is linearly related to systematic risk as measured by the asset's beta (Basu, 2010). According to the CAPM, and following Basu (2010), returns can be explained as

$$ R_{it} - R_{ft} = \beta_i \left( R_{mt} - R_{ft} \right) \qquad (1) $$

where R_it is the return on portfolio i at time t, R_ft is the return on the risk-free asset at time t, R_mt is the market return at time t, and β_i is the beta of portfolio i, which can also be expressed as Cov(R_i, R_m)/Var(R_m). The validity of the CAPM's theory depends on (a) a positive linear relationship between beta and excess returns and (b) the sole dependence of excess returns on systematic risk as measured by beta. To test the hypotheses of CAPM, equation (1) can be estimated using a two-stage regression (Omran, 2007). In the first stage, time series data are used to estimate the systematic risk and the residual variance with the regression

$$ R_{it} - R_{ft} = \alpha_i + \beta_i \left( R_{mt} - R_{ft} \right) + e_{it} \qquad (2) $$

$$ RV = \sigma_i^2 - \beta_i^2 \sigma_m^2 \qquad (3) $$

where e_it is the random disturbance term in the regression at time t and RV denotes the residual variance (the variance of the regression residuals e_it); σ_i² is the variance of the returns of portfolio i and σ_m² is the variance of the returns of the index, the proxy for the market portfolio. Equation (2) can be estimated by ordinary least squares (OLS): for each portfolio, R_it is regressed on R_mt to estimate beta. Equation (3) measures the residual variance (RV), the difference between the total variance of the portfolio returns and the portfolio's market risk. The second-stage regression is cross-sectional:

$$ R_{it} - R_{ft} = \gamma_0 + \gamma_1 \beta_{it} + \gamma_2 \beta_{it}^2 + \gamma_3 RV_{it} + e_{it} \qquad (4) $$

where R_it is the return on portfolio i at time t, R_ft is the return on the risk-free asset at time t, β_it is the beta of portfolio i at time t (representing systematic risk), β_it² is the squared beta (representing non-linearity of returns), RV_it is the residual variance of portfolio i at time t (representing unsystematic risk), and e_it is the random disturbance term at time t; γ0, γ1, γ2 and γ3 are the parameters to be estimated. For this purpose, the excess monthly portfolio returns are regressed on beta, beta squared and residual variance, as obtained from the first stage, and the statistical significance of the coefficients is tested with the standard t test.
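To make the two-stage procedure concrete, the sketch below shows one way equations (2)-(4) could be estimated with OLS. It is illustrative only: port_ret (excess returns of the ten portfolios) and mkt_ret (excess return of the DSI index) are assumed inputs with hypothetical names, not objects from the study.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def first_stage(port_ret: pd.DataFrame, mkt_ret: pd.Series) -> pd.DataFrame:
    """Time-series regressions (eq. 2); beta and residual variance (eq. 3)."""
    X = sm.add_constant(mkt_ret)                      # [1, R_mt - R_ft]
    rows = []
    for p in port_ret.columns:
        fit = sm.OLS(port_ret[p], X).fit()
        rows.append({"portfolio": p,
                     "beta": fit.params.iloc[1],
                     "rv": fit.resid.var(ddof=1),     # residual variance
                     "mean_excess": port_ret[p].mean()})
    return pd.DataFrame(rows)

def second_stage(stage1: pd.DataFrame):
    """Cross-sectional regression (eq. 4): gamma estimates and t statistics."""
    X = sm.add_constant(np.column_stack([stage1["beta"],
                                         stage1["beta"] ** 2,
                                         stage1["rv"]]))
    fit = sm.OLS(stage1["mean_excess"].to_numpy(), X).fit()
    return fit.params, fit.tvalues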

Hypotheses of CAPM testing

For CAPM to hold true, the following hypotheses should be satisfied:
1) γ0 = 0: any excess return earned on a zero-beta portfolio should be zero;
2) γ1 > 0: there should be a positive price for the risk taken;
3) γ2 = 0: the Security Market Line (SML) should represent a linear relationship;
4) γ3 = 0: residual risk, which can be diversified away, should not affect return.
The second-stage regression model in equation (4) is estimated by ordinary least squares (OLS), and tests of significance are carried out in the following framework:
- The intercept term, the coefficient of beta squared and the coefficient of residual variance are hypothesized as not being statistically different from zero, so a two-tailed test is appropriate.
- The coefficient of beta should be positive and significant, so a one-tailed test is used.
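In code, these one- and two-tailed tests could be applied to the second-stage t values roughly as follows; the function and its arguments (the t values of γ0 to γ3 and the residual degrees of freedom df) are hypothetical, intended only to show the testing framework.

from scipy import stats

def capm_hypothesis_tests(tvals, df):
    """p-values: two-tailed for gamma_0, gamma_2, gamma_3; one-tailed for gamma_1."""
    p_two = [2.0 * stats.t.sf(abs(t), df) for t in tvals]   # H0: coefficient = 0
    p_one_gamma1 = stats.t.sf(tvals[1], df)                 # H1: gamma_1 > 0
    return {"gamma0": p_two[0], "gamma1": p_one_gamma1,
            "gamma2": p_two[2], "gamma3": p_two[3]}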

Results and discussion

Summary statistics of the main variables

Table 1 contains summary statistics of the individual companies for the main variables: average return, beta and residual variance. The table shows that the average beta during the period was 0.2129; the minimum beta was 0.0028 and the maximum 0.5928, with a standard deviation of 0.1577. No company had a negative beta during the period. The mean of the average returns for the period was -2.94%. The maximum return, -0.29%, corresponds to the company Meghna Condensed Milk; the beta estimate for that company was 0.1191 (Table 2), greater than the first-quartile beta estimate of 0.0728 but smaller than the median beta estimate of 0.1723 for the entire data set. The minimum return, -5.57%, corresponds to the company National Tubes, whose beta estimate was 0.1772 (Table 2).

Table 1. Summary statistics

                        Average return   Beta     Residual variance
Mean                    -0.0294          0.2129   0.0252
Standard deviation       0.0114          0.1577   0.0189
Minimum                 -0.0557          0.0028   0.0039
1st quartile            -0.0365          0.0728   0.0167
Median                  -0.0301          0.1723   0.0218
3rd quartile            -0.0211          0.3448   0.0304
Maximum                 -0.0029          0.5928   0.1578
Number of observations   80 companies and 10 portfolios

Significance of stock beta coefficient estimates

Table 2 reports the beta estimates for the individual companies in the DSE market of Bangladesh. The beta coefficients of 25 individual stocks were statistically significant at the 1% level, 6 at the 5% level and 3 at the 10% level; the remaining 46 companies were statistically insignificant. Among the 80 companies, the highest beta belonged to Square Textile (β = 0.5928) and the lowest to Monno Stafllers (β = 0.0028).

Portfolio construction

To test the validity of CAPM in portfolios, the next step is to construct portfolios. The companies are arranged in descending order of beta and grouped into 10 portfolios of 8 stocks each, so that Portfolio 1 contains the first 8 stocks, with the 8 highest beta values, and Portfolio 10 the last 8 stocks, with the 8 lowest beta values. This is done to achieve diversification and thus reduce errors that might arise from the presence of residual variance, as in Amanulla et al. (1998). The procedure generates 10 equally weighted portfolios of 8 companies each (Table 2).

Table 2. Stock beta coefficient estimates and the final 10 portfolios

Portfolio   Company                    Beta       t-value
1           Square Textile             0.5928*    5.60
            Heidelberg Cement          0.5592*    5.13
            Lafarge Surma Cement       0.5507*    5.02
            Singer Bangladesh          0.5486*    4.90
            Bangladesh Lamps           0.5176*    4.60
            BOC Bangladesh             0.4938*    4.32
            Confidence Cement          0.4599*    3.94
            Apex Foods                 0.4452*    3.78
2           Apex Adelchy Footwear      0.4174*    3.49
            Eastern Cables             0.4123*    3.40
            Beximco Pharma             0.3959*    3.28
            Niloy Cement               0.3913*    3.23
            Reckitt Benckiser Ltd.     0.3891*    3.21
            BATBC                      0.3780*    3.10
            The Ibn Sina               0.3761*    3.09
            Meghna Cement              0.3660*    2.99
3           Bextex Limited             0.3632*    2.97
            Olympic Industries         0.3514*    2.85
            Renata Ltd.                0.3479*    2.82
            Apex Tannery               0.3470*    2.81
            Bata Shoe                  0.3382*    2.73
            ACI Limited                0.3330*    2.69
            AMCL (Pran)                0.3306*    2.66
            Square Pharma              0.3229*    2.59
4           Aramit Cement              0.2959*    2.36
            Padma Oil Co.              0.2761**   2.19
            Beximco                    0.2732**   2.16
            Quasem Drycells            0.2691**   2.12
            Aziz Pipes                 0.2641**   2.09
            Delta Spinners             0.2635**   2.08
            Information Services Nt.   0.2503**   1.97
            Glaxo SmithKline           0.2433***  1.91
5           Apex Spinning              0.2202***  1.72
            Therapeutics               0.2116***  1.65
            Beximco Synthetics         0.1943@    1.51
            Prime Textile              0.1926@    1.49
            National Tea               0.1820@    1.40
            H.R.Textile                0.1773@    1.37
            National Tubes             0.1772@    1.37
            Ambee Pharma               0.1753@    1.36
6           Rahima Food                0.1694@    1.31
            Anwar Galvanizing          0.1639@    1.26
            Bangas                     0.1625@    1.25
            BDCOM Online Ltd.          0.1624@    1.25
            Renwick Jajneswar          0.1520@    1.17
            Pharma Aids                0.1512@    1.16
            Atlas Bangladesh           0.1442@    1.11
            Rangpur Foundry            0.1440@    1.10
7           Saiham Textile             0.1427@    1.09
            Libra Infusions Ltd.       0.1419@    1.09
            Meghna Condensed Mk.       0.1191@    0.91
            Kay & Que                  0.1104@    0.85
            Agni Systems Ltd.          0.1091@    0.84
            Dulamia Cotton             0.1040@    0.80
            Legacy Footwear            0.0972@    0.74
            Fu-Wang Ceramic            0.0968@    0.74
8           Sonargaon Textiles         0.0914@    0.70
            Stylecraft                 0.0898@    0.69
            Padma Cement               0.0754@    0.58
            Beach Hatchery Ltd.        0.0736@    0.56
            Aftab Automobiles          0.0726@    0.55
            National Polymer           0.0717@    0.55
            Orion Infusion             0.0679@    0.52
            Monno Jutex                0.0661@    0.50
9           BD.Autocars                0.0660@    0.50
            Alltex Ind. Ltd.           0.0655@    0.50
            Samata Leather             0.0651@    0.49
            Bd.Welding Electrodes      0.0634@    0.48
            Shaympur Sugar             0.0501@    0.38
            Metro Spinning             0.0479@    0.37
            Desh Garmants              0.0349@    0.27
            Tallu Spinning             0.0321@    0.24
10          Alpha Tobacco              0.0299@    0.23
            Zeal Bangla Sugar          0.0283@    0.22
            Monno Ceramic              0.0192@    0.15
            Bangladesh Plantation      0.0175@    0.13
            In Tech Online Ltd.        0.0166@    0.13
            Samorita Hospital          0.0123@    0.09
            Meghna Pet Industries      0.0091@    0.06
            Monno Stafllers            0.0028@    0.02

*, **, *** significant at the 1%, 5% and 10% levels respectively; @ insignificant.

Estimates of the OLS regression of the constructed portfolios

According to CAPM, the intercept term should be equal to zero and the coefficient of beta should be positive. The results in Table 3 indicate that, for all 10 portfolios, the intercept term was not significantly different from zero. The coefficients of squared beta and residual variance were insignificant, which indicates that the expected return-beta relationship is linear in portfolios and that residual risk has no effect on the expected return of the 10 portfolios. These findings contradict those of Basu (2010), who showed that the intercept term is significantly different from zero for all 10 portfolios, that the coefficient of beta squared is significant in five portfolios, and that the coefficient of residual variance is significant in four of the 10 portfolios. In our study, the coefficients of beta were negative in three of the 10 portfolios (Portfolios 2, 7 and 9), and for all portfolios the coefficients of beta were statistically insignificant; these beta results are similar to the findings of Basu (2010). Hence, based on the slope criterion, the CAPM hypothesis cannot be accepted for the portfolios under study.

Table 3. Results of the OLS regression in 10 portfolios

Portfolio   Constant (t)        β (t)               β² (t)              Residual variance (t)
1           -0.747@ (-0.783)     1.147@ ( 0.804)    -1.188@ (-0.857)     0.035@ ( 0.338)
2            1.648@ ( 0.373)    -1.012@ (-0.384)     1.039@ ( 0.390)    -0.038@ (-0.416)
3           -7.825@ (-0.825)     3.999@ ( 0.812)    -3.971@ (-0.800)     0.007@ ( 0.071)
4           -0.431@ (-0.192)     0.256@ ( 0.179)    -0.247@ (-0.173)    -0.014@ (-0.273)
5           -0.729@ (-0.431)     0.719@ ( 0.418)    -0.703@ (-0.410)    -0.070@ (-1.213)
6           -4.131@ (-0.960)     2.857@ ( 0.955)    -2.858@ (-0.957)     0.005@ ( 0.080)
7            0.455@ ( 0.539)    -0.839@ (-0.574)     0.853@ ( 0.576)     0.039@ ( 0.502)
8           -0.176@ (-0.190)     0.208@ ( 0.156)    -0.204@ (-0.150)     0.008@ ( 0.095)
9           -0.006@ (-0.029)    -0.079@ (-0.126)     0.057@ ( 0.091)     0.041@ ( 0.701)
10          -0.035@ (-0.684)     0.059@ ( 0.204)    -0.091@ (-0.280)    -0.003@ (-0.036)

@ insignificant.

Comparison between average portfolio returns and portfolio betas

From Table 4, the estimated stock portfolio betas range between -1.012 (minimum) and 3.999 (maximum). Among the 10 portfolios, the highest-beta portfolio was Portfolio 3 (β = 3.999) and the lowest-beta portfolio was Portfolio 2 (β = -1.012). The portfolio results did not support the proposition that higher risk (beta) is associated with a higher level of return: for example, Portfolio 3, the highest-beta portfolio, produced a lower return (-0.0315) than Portfolio 2, the lowest-beta portfolio (-0.0309). The highest return (-0.0249) was yielded by Portfolio 6, whose β = 2.857. The CAPM theory indicates that higher risk (beta) is associated with a higher level of return; the results of the study did not support this hypothesis. These contradictory results can be partially explained by the significant fluctuations of stock returns over the period examined.

Table 4. Average portfolio returns and portfolio betas

Portfolio       Average portfolio return   Portfolio beta
Portfolio_1     -0.0379                     1.147
Portfolio_2     -0.0309                    -1.012
Portfolio_3     -0.0315                     3.999
Portfolio_4     -0.0279                     0.256
Portfolio_5     -0.0323                     0.719
Portfolio_6     -0.0249                     2.857
Portfolio_7     -0.0274                    -0.839
Portfolio_8     -0.0254                     0.208
Portfolio_9     -0.0280                    -0.079
Portfolio_10    -0.0313                     0.059

Conclusion

The article examines the validity of CAPM for the Dhaka Stock Exchange. The findings are not supportive of the theory's basic hypothesis that higher risk is associated with a higher level of return. The results for the coefficients of squared beta and residual variance indicate that the expected return-beta relationship is linear in portfolios and that residual risk has no effect on the expected return of the 10 portfolios, and the intercept terms for the 10 portfolios are not significantly different from zero; these three findings support the validity of CAPM. However, CAPM also predicts that the slope should equal the excess return on the market portfolio, and the results for the slope in our study contradict this hypothesis and provide evidence against the CAPM. The study therefore concludes that CAPM is practically incomplete for this market. This may motivate researchers to search for a sound pricing mechanism beyond return and beta factors, and the study can serve as a basis of reference for future investigations.

References
1. Amanulla, S., Kamaiah, B.: Asset price behaviour in Indian stock market: Is the "CAPM" still relevant? J. Finan. Manage. Anal. 11, 32-47 (1998)
2. Bark, H. K. K.: Risk, Return and Equilibrium in the Emerging Markets: Evidence from the Korean Stock Market. J. Econ. Bus. 43(4), 353-362 (1991)
3. Basu, D., Chawla, D.: An Empirical Test of CAPM: The Case of Indian Stock Market. Global Bus. Rev. 11(2), 209-220 (2010)
4. Black, F.: Capital Market Equilibrium and Restricted Borrowing. J. Bus. 45(3), 444-455 (1972)
5. Black, F.: Beta and Return. J. Portfol. Manage. 20(1), 8-18 (1993)
6. Cooper, M. J., Gulen, H., Schill, M. J.: Asset Growth and the Cross-Section of Stock Returns. J. Finance 63, 1609-1651 (2008)
7. Davis, J. L., Fama, E. F., French, K. R.: Characteristics, Covariances, and Average Returns: 1929-1997. J. Finance 55(1), 389-406 (2000)
8. Elton, E. J., Gruber, M. J., Rentzler, J.: The Ex-Dividend Day Behavior of Stock Prices: A Re-Examination of the Clientele Effect: A Comment. J. Finance 39(2), 551-55 (1984)
9. Fama, E. F., French, K. R.: The Cross-Section of Expected Stock Returns. J. Finance 47(2), 427-465 (1992)
10. Fama, E. F., French, K. R.: Common Risk Factors in the Returns on Stocks and Bonds. J. Finan. Econ. 33, 3-56 (1993)
11. Fama, E. F., French, K. R.: Multifactor Explanations of Asset Pricing Anomalies. J. Finance 51, 55-83 (1996)
12. Fama, E. F., MacBeth, J. D.: Risk, Return, and Equilibrium: Empirical Tests. J. Polit. Economy 81(3), 607-636 (1973)
13. Gürsoy, C. T., Rejepova, G.: Test of Capital Asset Pricing Model in Turkey. Dogus University Journal 8(1), 3-56 (2007)
14. Islam, A., Khaled, M.: Tests of Weak-Form Efficiency of the Dhaka Stock Exchange. J. Bus. Finan. Acc. 32(7/8), 1613-1624 (2005)
15. Jensen, M. C., Black, F., Scholes, M. S.: The Capital Asset Pricing Model: Some Empirical Tests. In: Studies in the Theory of Capital Markets. Praeger Publishers (1972)
16. Kader, A. A., Rahman, A. F. M. A.: Testing the Weak-Form Efficiency of an Emerging Market: Evidence from the Dhaka Stock Exchange of Bangladesh. AIUB Journal 4(2) (2005)
17. Liu, L. X., Zhang, L.: Momentum Profits, Factor Pricing, and Macroeconomic Risk. Rev. Finan. Stud. 21(6), 2417-2448 (2008)
18. Lintner, J.: Security Prices, Risk, and Maximal Gains from Diversification. J. Finance 20(4), 587-615 (1965)
19. Markowitz, H.: Portfolio Selection. J. Finance 7(1), 77-91 (1952)
20. Michailidis, G., Tsopoglou, S., Papanastasiou, D., Mariola, E.: Testing the Capital Asset Pricing Model (CAPM): The Case of the Emerging Greek Securities Market. Int. Res. J. Finan. Econ. 4, 78-91 (2006)
21. Mossin, J.: Optimal Multiperiod Portfolio Policies. J. Bus. 41(2), 215-229 (1968)
22. Ocampo, P. B.: Alternative Methodologies for Testing CAPM in the Philippine Equity Market. Philippine Manag. Rev. 11(1) (2004)
23. Pettengill, G. N., Sundaram, S., Mathur, I.: The Conditional Relation between Beta and Returns. J. Finan. Quant. Anal. 30, 101-116 (1995)
24. Rahman, M., Baten, M. A., Alam, A.: An Empirical Testing of Capital Asset Pricing Model in Bangladesh. J. Appl. Sci. 6(3), 662-667 (2006)
25. Reinganum, M. R.: Misspecification of Capital Asset Pricing. J. Finan. Econ. 9(1), 19-46 (1981)
26. Sharpe, W. F.: Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. J. Finance 19(3), 425-442 (1964)
27. Théoret, R., Racicot, F. E.: Specification Errors in Financial Models of Returns: An Application to Hedge Funds. J. Wealth Manage. 10(1), 73-86 (2007)
28. Tobin, J.: Liquidity Preference as Behaviour Towards Risk. Rev. Econ. Stud. 25(2), 65-86 (1958)
29. Uddin, M. G. S., Alam, M. M.: The Impacts of Interest Rate on Stock Market: Empirical Evidence from Dhaka Stock Exchange. South Asian J. Manag. Res. 1(2), 123-132 (2007)
30. Uddin, M. G. S., Khoda, N.: Testing Random Walk Hypothesis for Dhaka Stock Exchange: An Empirical Examination. Int. Res. J. Finan. Econ. 33, 64-76 (2009)
31. Yang, X., Xu, D.: Testing the CAPM Model: A Study of the Chinese Stock Market. Master's thesis, Umeå School of Business, Sweden (2006)


IMPLEMENTATION OF THE KALMAN FILTER ALGORITHM ON A REDUCED MODEL

Didik Khusnul Arif (1), Widodo (2), Salmah (3), and Erna Apriliani (4)

(1) Ph.D. student, Department of Mathematics, Gadjah Mada University (UGM), Yogyakarta, Indonesia, and lecturer, Department of Mathematics, Institut Teknologi Sepuluh Nopember (ITS), [email protected]
(2) Department of Mathematics, Gadjah Mada University (UGM), Yogyakarta, Indonesia, [email protected]
(3) Department of Mathematics, Gadjah Mada University (UGM), Yogyakarta, Indonesia, [email protected]
(4) Department of Mathematics, Institut Teknologi Sepuluh Nopember (ITS), Surabaya, Indonesia, [email protected]

Abstract. In this paper, we first construct a reduced model of a discrete-time stochastic dynamic system. The model reduction is carried out by the balanced truncation method. We then implement the Kalman filter algorithm on the reduced model. As a case study for the simulation, we consider the heat conduction distribution problem: a reduced model of the heat conduction system is formed, the Kalman filter algorithm is implemented on the reduced system, and the estimation results are compared. Keywords: estimation, Kalman filter, model reduction.

1 Introduction

Estimation of the state variables of a system is necessary because not all state variables can be measured directly. For stochastic systems, state estimation is commonly done with the Kalman filter. In practice, the Kalman filter has several weaknesses, caused by numerical stability problems or by inaccurate system modeling. The numerical stability problem can be addressed by propagating a matrix square root of the error covariance matrix; this algorithm is known as the Square Root Covariance Filter (SRCF). To reduce the computational time of the SRCF, Verlaan and Heemink (1997) modified it using a reduced-rank square root of the covariance matrix; the proposed algorithm is known as the Reduced Rank Square Root Covariance Filter (RRSQRT). The rank reduction in this algorithm is performed by singular value decomposition, and RRSQRT successfully accelerates computation relative to SRCF. Computation time is also influenced by the order of the system; it can therefore be reduced by replacing the original model with a lower-order model without significant error. This approach is known as model reduction, and one method for forming reduced models is balanced truncation.


From existing research it can be seen that the main focus has been on obtaining a state estimation procedure that gives accurate results with a short computing time. This paper therefore aims at an estimation algorithm that is both accurate and fast. First, a reduced model of a discrete-time stochastic dynamic system is formed. As an initial study, the reduced model is obtained by balanced truncation; we examine the reduction process and, in particular, which parts of the state of the initial system are truncated and which are retained. Next, a Kalman filter algorithm is formulated for the reduced model. To assess the estimation results, the method is applied to the heat conduction distribution problem: we build a reduced model of the heat conduction system, implement the Kalman filter on it, and carry out a comparative analysis of the estimates.

2 Methodology

In this paper, a Kalman filter algorithm is implemented on a reduced model. The chosen case study is the estimation of the heat conduction distribution in a conducting wire. The steps taken are:
1. Review the process of forming a reduced model of a discrete system with the balanced truncation method.
2. Construct a Kalman filter algorithm for the reduced discrete system.
3. Build a model of the heat conduction distribution system.
4. Estimate the heat conduction distribution using the Kalman filter.
5. Form a reduced model of the heat conduction system.
6. Implement the Kalman filter on the reduced heat conduction model.
7. Compare the estimation results obtained by applying the Kalman filter to the initial system with those obtained by applying it to the reduced system.

3 Results and Discussion

3.1 Model Reduction

Model reduction is the problem of reducing the order of a system: the effort to obtain a simpler model of smaller order is called model reduction (Grigoriadis, 1995). There are many methods for model reduction, such as the singular perturbation method (Liu, 1989), balanced truncation (Sigurd, 2001), genetic algorithms (Satakshi, 2004) and extended balanced truncation (Sandberg, 2008).

Balanced truncation is the simplest of these methods: if the initial system is stable and observable, the reduced model obtained is also stable and observable. As an initial study, balanced truncation is used here to obtain the reduced model.

3.1.1 Model Reduction by Balanced Truncation

Given a deterministic, time-invariant discrete-time dynamic system

$$ x_{k+1} = A x_k + B u_k \qquad (1) $$
$$ z_k = C x_k + D u_k \qquad (2) $$

where $x_k \in \mathbb{R}^n$ is the state vector, $u_k \in \mathbb{R}^m$ is the input vector, $z_k \in \mathbb{R}^p$ is the output vector, and $A$, $B$, $C$, $D$ are constant matrices of the corresponding orders. In what follows, (1)-(2) is referred to as the system $(A, B, C, D)$. Two matrices $P$ and $Q$ are defined, the reachability gramian and the observability gramian of the system $(A, B, C, D)$:

$$ P = \sum_{k=0}^{\infty} A^k B B^T (A^T)^k \qquad (3) $$
$$ Q = \sum_{k=0}^{\infty} (A^T)^k C^T C A^k \qquad (4) $$

$P$ and $Q$ are positive definite and are the unique solutions of the Lyapunov equations

$$ A P A^T - P + B B^T = 0 \qquad (5) $$
$$ A^T Q A - Q + C^T C = 0 \qquad (6) $$

The system $(A, B, C, D)$ is called balanced if $P$ and $Q$ are equal and diagonal, i.e. $P = Q = \Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$, where $\sigma_i$ is a positive real number, the $i$-th Hankel singular value of the system $(A, B, C, D)$, ordered so that $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n > 0$. The balanced gramian $\Sigma$ of the system is obtained through a state transformation $T$:

$$ \Sigma = T P T^T \qquad (7) $$
$$ \Sigma = (T^{-1})^T Q T^{-1} \qquad (8) $$

In general, the balanced system can be written in the form

$$ \tilde{x}_{k+1} = \tilde{A} \tilde{x}_k + \tilde{B} u_k \qquad (9) $$
$$ \tilde{z}_k = \tilde{C} \tilde{x}_k + \tilde{D} u_k \qquad (10) $$

with $\tilde{A} = T A T^{-1}$, $\tilde{B} = T B$, $\tilde{C} = C T^{-1}$ and $\tilde{D} = D$. Furthermore, the balanced gramian can be partitioned as

$$ \Sigma = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix} $$

where $\Sigma_1 \in \mathbb{R}^{r \times r}$ and $\Sigma_2 \in \mathbb{R}^{(n-r) \times (n-r)}$ with $r < n$. The balanced system $(\tilde{A}, \tilde{B}, \tilde{C}, \tilde{D})$ is then partitioned conformably with the partition of $\Sigma$:

$$ \tilde{A} = \begin{bmatrix} \tilde{A}_{11} & \tilde{A}_{12} \\ \tilde{A}_{21} & \tilde{A}_{22} \end{bmatrix}, \quad \tilde{B} = \begin{bmatrix} \tilde{B}_1 \\ \tilde{B}_2 \end{bmatrix}, \quad \tilde{C} = \begin{bmatrix} \tilde{C}_1 & \tilde{C}_2 \end{bmatrix} \qquad (11) $$

Finally, the reduced model of the system $(A, B, C, D)$ is obtained by truncating the state variables of the balanced system $(\tilde{A}, \tilde{B}, \tilde{C}, \tilde{D})$ associated with the small Hankel singular values. The reduced model has order $r$, with $r < n$, and is written as

$$ \tilde{x}_{r,k+1} = \tilde{A}_{11} \tilde{x}_{r,k} + \tilde{B}_1 u_k \qquad (12) $$
$$ \tilde{z}_{r,k} = \tilde{C}_1 \tilde{x}_{r,k} + \tilde{D} u_k \qquad (13) $$

The reduced model expressed in (12)-(13) is called the reduced system $(\tilde{A}_{11}, \tilde{B}_1, \tilde{C}_1, \tilde{D})$.

3.1.2 Model Reduction Algorithm with Balanced Truncation

Based on the foregoing, a simple algorithm for obtaining a reduced model of a discrete system $(A, B, C, D)$ can be stated as follows:
1. Take the initial system as input; the system is discrete and assumed stable.
2. Determine the reachability gramian $P$ and the observability gramian $Q$.
3. Form the balanced realization:
   - Obtain a factor $R$ such that $P = R R^T$.
   - Form the matrix $R^T Q R$ and factor it as $R^T Q R = U \Sigma^2 U^T$, where $U$ is unitary ($U U^T = I$) and $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$.
   - Determine the non-singular transformation matrix $T$, i.e. $T = \Sigma^{1/2} U^T R^{-1}$, with $T^{-1} = R U \Sigma^{-1/2}$.
   - Form the balanced system $(\tilde{A}, \tilde{B}, \tilde{C}, \tilde{D})$ with $\tilde{A} = T A T^{-1}$, $\tilde{B} = T B$, $\tilde{C} = C T^{-1}$, $\tilde{D} = D$.
4. To obtain the reduced model, truncate the states of the balanced system corresponding to the smallest Hankel singular values.

In this procedure, the model reduction does not keep track of which state variables of the initial system are retained and which are truncated, so the state variables of the reduced system differ from those of the initial system. It is therefore necessary to assess which parts of the state variable are preserved by the reduction procedure, in order to carry out a comparative analysis of the estimates of the same state variable for the initial system and for the reduced system.
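As a rough illustration of the algorithm in Section 3.1.2, the following Python function computes a balanced truncation with SciPy. It is a minimal sketch, not the authors' code: it realizes the transformation T via Cholesky factors of the two gramians and a singular value decomposition, assuming the system is stable and minimal (all Hankel singular values positive).

import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, D, r):
    """Reduce a stable discrete-time system (A, B, C, D) to order r."""
    # Step 2: gramians from the discrete Lyapunov equations (5)-(6).
    P = solve_discrete_lyapunov(A, B @ B.T)      # A P A^T - P + B B^T = 0
    Q = solve_discrete_lyapunov(A.T, C.T @ C)    # A^T Q A - Q + C^T C = 0
    # Step 3: balancing transformation from Cholesky factors and an SVD.
    R = cholesky(P, lower=True)                  # P = R R^T
    L = cholesky(Q, lower=True)                  # Q = L L^T
    U, s, Vt = svd(L.T @ R)                      # s = Hankel singular values
    T = np.diag(s ** -0.5) @ U.T @ L.T           # balancing transformation T
    T_inv = R @ Vt.T @ np.diag(s ** -0.5)        # its inverse
    Ab, Bb, Cb = T @ A @ T_inv, T @ B, C @ T_inv
    # Step 4: truncate states tied to the smallest Hankel singular values.
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], D, s

One can verify the balancing numerically: T @ P @ T.T and T_inv.T @ Q @ T_inv should both equal diag(s) up to rounding.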

3.2 Kalman Filter for the Reduced System

Estimation of the state variables of a stochastic dynamic system can be performed using the Kalman filter. In the Kalman filter, the state variable is first predicted from the dynamic model and then corrected using measurement data. The prediction and correction phases are performed recursively, in such a way as to minimize the estimation error covariance $E\big[(x_k - \hat{x}_k)(x_k - \hat{x}_k)^T\big]$, where $x_k$ is the actual state variable and $\hat{x}_k$ is its estimate.

From the description of reduced-model formation for a discrete system, the construction begins with the balanced system. The balanced system is a state transformation of the initial system, and its state ordering differs from that of the initial system: in the balanced system, the states are already sorted by their Hankel singular values, so the reduced model is obtained by truncating the states corresponding to the small Hankel singular values. Applying the Kalman filter to the reduced model is therefore analogous to applying it to the balanced system. Given the initial system

$$ x_{k+1} = A x_k + B u_k + G w_k $$
$$ z_k = C x_k + v_k $$

with $w_k$ the system noise and $v_k$ the measurement noise, both stochastic and assumed Gaussian with mean zero and covariances $Q_w$ and $R$ respectively, the balanced system obtained from the initial system can be written as

$$ \tilde{x}_{k+1} = \tilde{A} \tilde{x}_k + \tilde{B} u_k + \tilde{G} w_k \qquad (14) $$
$$ \tilde{z}_k = \tilde{C} \tilde{x}_k + v_k \qquad (15) $$

with $\tilde{A} = T A T^{-1}$, $\tilde{B} = T B$, $\tilde{G} = T G$, $\tilde{C} = C T^{-1}$, where $T$ is the transformation matrix obtained in forming the balanced system. Based on the Kalman filter algorithm for stochastic dynamic discrete systems, the Kalman filter for the balanced system can then be formulated as follows.

- System model and measurement model:
$$ \tilde{x}_{k+1} = \tilde{A} \tilde{x}_k + \tilde{B} u_k + \tilde{G} w_k \qquad (16) $$
$$ \tilde{z}_k = \tilde{C} \tilde{x}_k + v_k \qquad (17) $$
$$ w_k \sim N(0, Q_w), \quad v_k \sim N(0, R), \quad \tilde{x}_0 \sim N(\bar{x}_0, P_0) \qquad (18) $$

- Initialization:
$$ P_0, \qquad \hat{\tilde{x}}_0 = \bar{x}_0 \qquad (19) $$

- Prediction phase (time update):
error covariance: $P_k^- = \tilde{A} P_{k-1} \tilde{A}^T + \tilde{G} Q_w \tilde{G}^T$ (20)
estimate: $\hat{\tilde{x}}_k^- = \tilde{A} \hat{\tilde{x}}_{k-1} + \tilde{B} u_{k-1}$ (21)

- Correction phase (measurement update):
error covariance: $P_k = \big[ (P_k^-)^{-1} + \tilde{C}^T R^{-1} \tilde{C} \big]^{-1}$ (22)
estimate: $\hat{\tilde{x}}_k = \hat{\tilde{x}}_k^- + P_k \tilde{C}^T R^{-1} \big( \tilde{z}_k - \tilde{C} \hat{\tilde{x}}_k^- \big)$ (23)

If the Kalman gain is used:
Kalman gain: $K_k = P_k^- \tilde{C}^T \big( \tilde{C} P_k^- \tilde{C}^T + R \big)^{-1}$ (24)
error covariance: $P_k = (I - K_k \tilde{C}) P_k^-$ (25)
estimate: $\hat{\tilde{x}}_k = \hat{\tilde{x}}_k^- + K_k \big( \tilde{z}_k - \tilde{C} \hat{\tilde{x}}_k^- \big)$ (26)
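A compact sketch of the recursion (20)-(21) and (24)-(26), in the Kalman-gain form and with hypothetical variable names, might look as follows. The same function applies to the initial, balanced or reduced system, since only the matrices change.

import numpy as np

def kalman_filter(A, B, G, C, Qw, R, x0, P0, u_seq, z_seq):
    """Kalman filter, gain form: eqs. (20)-(21) and (24)-(26)."""
    x_hat, P = x0.copy(), P0.copy()
    n = A.shape[0]
    estimates = []
    for u, z in zip(u_seq, z_seq):
        # Prediction (time update), eqs. (20)-(21)
        x_hat = A @ x_hat + B @ u
        P = A @ P @ A.T + G @ Qw @ G.T
        # Correction (measurement update), eqs. (24)-(26)
        S = C @ P @ C.T + R                     # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)          # Kalman gain, eq. (24)
        x_hat = x_hat + K @ (z - C @ x_hat)     # eq. (26)
        P = (np.eye(n) - K @ C) @ P             # eq. (25)
        estimates.append(x_hat.copy())
    return np.array(estimates)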

3.3 Modeling the Heat Conduction System

Consider the heat problem on a straight wire of length $L$ with heat conduction coefficient $\alpha$. The heat conduction problem can be modeled as follows. First, the $x$-axis is chosen along the longitudinal direction of the wire, with $x = 0$ and $x = L$ denoting the positions of the ends of the wire. It is further assumed that the sides of the wire are perfectly insulated, meaning that no heat can penetrate the sides, and that the temperature in the wire depends on position and time. The temperature is denoted by $u$, the position by $x$ and the time by $t$, so $u$ is a function of $x$ and $t$. The temperature variation in the wire is expressed by the heat conduction equation

$$ \frac{\partial u}{\partial t} = \alpha^2 \frac{\partial^2 u}{\partial x^2}, \qquad 0 < x < L, \; t > 0 \qquad (27) $$

where $\alpha^2$ is the thermal diffusivity of the wire material. It is also assumed that the left end of the wire is completely insulated, meaning there is no heat flux at $x = 0$, while the other end is kept at a constant temperature $\bar{u}$ for all $t$. With the forward Euler method, equation (27) yields

$$ u_i^{k+1} = u_i^k + \alpha^2 \, \Delta t \left( \frac{\partial^2 u}{\partial x^2} \right)_i^k \qquad (28) $$

where $i$ and $k$ denote the position and the time step respectively. Applying the central difference method to (28) gives the discrete system

$$ u_i^{k+1} = \lambda u_{i-1}^k + (1 - 2\lambda) u_i^k + \lambda u_{i+1}^k \qquad (29) $$

with $\lambda = \alpha^2 \Delta t / \Delta x^2$. To keep the explicit discretization stable, one must choose $\lambda \le 1/2$, or equivalently $\Delta t \le \Delta x^2 / (2 \alpha^2)$. The boundary and initial conditions are the insulated left end (so that $u_0^k = u_1^k$ in the discretization), the constant temperature $\bar{u}$ at the right end, and a given initial temperature profile $u_i^0$.

The above model assumes perfect insulation along the sides of the wire. In reality this is not the case: there is heat transfer between the wire and the surrounding air. Such influences are referred to as system noise. Including the system noise, the model becomes

$$ u_i^{k+1} = \lambda u_{i-1}^k + (1 - 2\lambda) u_i^k + \lambda u_{i+1}^k + w_i^k \qquad (30) $$

where $w_k$ is assumed normally distributed with mean 0 and variance $Q$. The Kalman filter corrects the prediction using measurement data, so a measurement equation relating the data to the system must be defined. It contains the measurement noise $v_k$ and is defined as

$$ z_k = C x_k + v_k \qquad (31) $$

The measurement noise is also assumed Gaussian, with mean 0 and variance $R$. The system in the finite difference form (30) and the measurement (31) can be written as a time-invariant state-space system

$$ x_{k+1} = A x_k + B u_k + G w_k \qquad (32) $$
$$ z_k = C x_k + v_k \qquad (33) $$

where $x_k = [u_1^k \; u_2^k \; \cdots \; u_n^k]^T$ is the state vector, $z_k$ is the output (measurement) vector, $w_k$ is the system disturbance and $v_k$ is the measurement disturbance, while $A$, $B$, $G$ and $C$ are coefficient matrices of the corresponding dimensions.
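To illustrate how (29)-(33) could be assembled in code, the sketch below builds the tridiagonal transition matrix for n grid points, with an insulated left end and the constant right-end temperature entering through a constant input term. The boundary handling is one plausible reading of the model, and the default parameter values are placeholders, not the paper's.

import numpy as np

def heat_system(n=10, alpha=1.0, dx=1.0, dt=0.1, u_bar=50.0):
    """Discrete heat model x_{k+1} = A x_k + b (from eq. 29)."""
    lam = alpha**2 * dt / dx**2
    assert lam <= 0.5, "explicit scheme requires lambda <= 1/2"
    A = np.diag((1.0 - 2.0 * lam) * np.ones(n))
    A += np.diag(lam * np.ones(n - 1), 1) + np.diag(lam * np.ones(n - 1), -1)
    A[0, 0] += lam           # insulated left end: mirror node u_0 = u_1
    b = np.zeros(n)
    b[-1] = lam * u_bar      # constant temperature u_bar beyond the right end
    return A, b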

4 Analysis

Consider the heat conduction distribution problem in a conducting wire. In this simulation, the wire is divided into 10 positions, with the boundary conditions and the thermal conductivity coefficient given. A discrete system is first formed, with an initial estimate $\hat{x}_0$ and initial error covariance $P_0 = I$, where $I$ is the identity matrix. From the measurement results, heat data are available at certain positions, namely $u_3 = 1.4$ and $u_8 = 48.5$. Taking $\Delta t = 0.1$, a discrete-time system is formed. The system noise is generated from random values with mean 0 and covariance $Q$, while the measurement noise is generated from random values with mean 0 and covariance $R$. The model reduction process and the Kalman filter algorithms are implemented in a MATLAB program. The estimated heat conduction distributions obtained with the Kalman filter can then be analyzed as follows.
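For instance, with heat measured only at positions 3 and 8 as in this simulation, the measurement matrix C simply selects those two states. The snippet below shows this wiring, with the noise covariances set to assumed placeholder values, since the paper does not report them all.

import numpy as np

n = 10
C = np.zeros((2, n))
C[0, 2] = 1.0            # sensor at position 3 (index 2): measures u3
C[1, 7] = 1.0            # sensor at position 8 (index 7): measures u8

P0 = np.eye(n)           # initial error covariance (identity, as in the text)
Qw = 0.01 * np.eye(n)    # assumed system-noise covariance (placeholder)
R = 0.01 * np.eye(2)     # assumed measurement-noise covariance (placeholder)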

Figure 1 shows the estimated heat conduction distribution for the initial system and for its balanced system. Even though the estimate is taken at the same state position, namely position 3, for both the initial system and the balanced system, the estimates differ. This shows that the position of the states in the balanced system differs from their position in the initial system.

Figure 1: Heat distribution at position 3 over all iterations (Δt = 0.1). Left panel: initial system (X_asli = true state, X_Estimasi = estimate); right panel: balanced system (Xs_asli, Xs_Estimasi). Axes: temperature versus iteration k.

Figure 2 shows the estimated heat conduction distribution at all positions at the 50th iteration, both for the initial system and for the balanced system. From both panels it appears that the heat distribution of each state, when viewed at the same position, shows different results. Overall, however, the initial system and the balanced system show a similar estimation pattern across all states.

Figure 2: Heat distribution at all positions at the 50th iteration (Δt = 0.01). Left panel: initial system; right panel: balanced system. Axes: temperature versus position.

This is also confirmed by Figure 3, which shows the estimation error covariance. For both the initial system and the balanced system, the estimation error covariance shows the same pattern, decreasing toward zero as the iterations proceed. In other words, the state estimates approach their true values.

Figure 3: Kalman filter estimation error covariance for the initial system (left) and the balanced system (right), versus iteration k.

The estimation results for the reduced system are shown in Figure 4.

Figure 4: Estimation results for the reduced system: heat distribution at position 3 over all iterations (Δt = 0.1), heat distribution at all positions at the 20th iteration (Δt = 0.01), and the Kalman filter estimation error covariance (Δt = 0.1).

In general, the estimated heat distribution over all states of the reduced system shows the same pattern as the estimates for the initial system and the balanced system. Figure 4 shows that the estimated states approach the actual state values; this is confirmed by the estimation error covariance, which converges toward zero.

5 Conclusion

From the above results, it can be concluded that the Kalman filter algorithm, applied to the initial system, the balanced system and the reduced system, shows the same estimation pattern for all states. It should be noted, however, that the estimates of the third state differ between the initial system, the balanced system and the reduced system. This shows that the position of a given state in the initial system differs from its position in the balanced system. For future work, it is therefore necessary to track how the state positions change during the formation of the reduced model.
