Time-Parameterized Sensing Task Model for Real-Time Tracking

Min-Young Nam, Chang-Gun Lee∗ (ECE Dept., The Ohio State University, Columbus, OH; {namm, cglee}@ece.osu.edu)
∗ The corresponding author is Chang-Gun Lee.

Kanghee Kim (CSE Dept., Seoul National University, Seoul, Korea; [email protected])
Marco Caccamo (CS Dept., University of Illinois, Urbana-Champaign, IL; [email protected])

Abstract

This paper proposes a novel task model whose physical and temporal parameters are specified as time-parameterized functions and whose values are finally determined at the actual dispatch time. This model is clearly differentiated from the classical task model, where parameters are fixed at the job release time. The new model better suits sensing tasks in tracking applications, since sensor parameters such as the field-of-view and the measurement duration can be properly adjusted at the actual sensing time. The new model, however, creates a cyclic dependency between the task parameters and the scheduling behavior: the task parameters depend on the scheduling behavior and the latter in turn depends on the former. This cyclic dependency makes the schedulability check even more difficult. We handle this difficulty with an iterative convergence method and a probabilistic schedulability envelope, which provide an efficient online schedulability check. The experimental study shows that the new model significantly improves the effective capacity of tracking systems without losing track accuracy.

1 Introduction

Real-time tracking of physical states is a typical example of a real-time application. This includes pure tracking applications like radar-based target tracking and also plant state tracking for supporting the final control of the plant. For such tracking to succeed, a set of associated temporal constraints must be guaranteed. The real-time community has traditionally assumed that tasks' temporal parameters such as periods, execution times, and deadlines are given by domain experts, either in deterministic or probabilistic form, and has mainly focused on respecting them. However, relatively little attention has been paid to the underlying nature of real-time constraints driven by the application algorithms, such as tracking and control algorithms. Recent papers [4, 6, 8, 15, 19] studied the joint relations


between control and scheduling and pointed out that the end performance largely depends on the physical interaction points, i.e., the sensing and actuation times, rather than the job scheduling times on the CPU. To improve the control performance, they proposed a new scheduling method that separates the sensor-read and actuator-write operations from the regular control-law computation and executes the read/write operations at the designated periodic points regardless of the scheduling jitters of the computational jobs. This solution is possible because the time for a read/write operation is negligibly small if we use a dedicated continuous sensor/actuator.

However, in more general tracking applications, we usually use a multifunctional shared sensor, like a radar antenna or an infrared camera, for tracking multiple targets concurrently. For those sensors, sensing is not a simple read operation but a time-consuming job that steers the sensor's field-of-view (FOV) and then measures the field-of-view for a certain time to capture a measurement of acceptable quality. If such a sensor is shared by multiple concurrent tracking tasks, multiple sensing jobs can be released at the same time, and thus they will be queued and scheduled, causing non-negligible deviations from the designated sensing times. Therefore, we have to address such deviations by scheduling multiple sensing jobs while jointly considering the nature of the tracking algorithms. However, current tracking system design relies on extensive simulations to achieve acceptable end performance without analytic reasoning, due to limited knowledge of real-time scheduling.

This paper jointly considers the tracking algorithm and the sensing job scheduling to build an analytic co-design framework. The contributions can be summarized as follows:

• We propose a novel task model called a time-parameterized model, which better suits tracking applications. Unlike classical real-time task models, where each job's parameters are fixed at its release time, in the new model each sensing job's parameters, such as the FOV and the measurement duration, are given as time-varying functions. With the time-varying functions, the actual parameters are finalized when the job is eventually

dispatched by the sensor scheduler. This is best illustrated by Figure 1. After the n-th sensing job is executed to measure the target position, the target keeps moving. Since the exact dispatch time of the (n+1)-th sensing job is unknown, it is better to specify the FOV as a time-varying function so that it is finalized when the job is actually dispatched. Also, as the inter-dispatch time t between the n-th job and the (n+1)-th job increases, the uncertainty of the target position increases, and hence the FOV size and in turn the measurement duration should increase accordingly. This model can effectively absorb the dispatch time jitter caused by the scheduler and thus requires much more relaxed real-time constraints compared to the worst-case based classical task model.

• We propose an analytic framework for the schedulability analysis. The challenge is the cyclic dependency between the time-parameterized task model and the scheduling behavior: a sensing job's execution time (i.e., the measurement duration, proportional to the size of the FOV in Figure 1) is determined by the scheduling behavior (i.e., the inter-dispatch time t in Figure 1), and in turn, the scheduling behavior is determined by the execution times of the sensing jobs. Existing work assumes that the job execution times are given as an input, either as constants or as probability distribution functions, to analyze the resulting scheduling behavior such as response times or deadline miss probabilities. To the best of our knowledge, no existing work can analyze the case of the above cyclic dependency. We tackle this problem offline by a so-called iterative convergence method that repeatedly applies stochastic scheduling analysis until the input execution time model converges to the one given by the resulting scheduling behavior. We abstract the results of this offline schedulability analysis as a closed-form formula called a probabilistic schedulability envelope for a simple online schedulability check.

The rest of this paper is organized as follows: In the next section, we overview the generic estimation theory for tracking. Section 3 proposes a time-parameterized sensing job model, which is more suitable for tracking applications. Section 4 explains our framework for checking online schedulability with respect to the original tracking requirement. Section 5 describes our experimental results. Section 6 summarizes the related work. Finally, Section 7 concludes this paper.

2 Overview of tracking

Tracking is possible by a repeated sequence of sensing and computation as shown in Figure 2(a). The sensing is performed by a sensor, for example, a radar antenna that steers the radar's FOV, illuminates a radar beam, and gets the reflected radar image.

Figure 1. Time-varying parameters: after the n-th sample, the uncertainty of the target position grows with the elapsed time t until the (n+1)-th sample.

In this paper, we consider a radar as an example of a shared sensor, but the same logic applies to other types of sensors such as pan/tilt-controllable infrared cameras and visual sensors. Upon reception of the measured data, the computation part processes it to detect the target's position and predict the new position at the next sampling point. This prediction gives the next sensing job, which should sense the predicted position to measure the actual target position.

The computational part runs the tracking algorithm. For the sake of simplicity, let us explain the tracking algorithm with a target moving along the one-dimensional x-axis with a constant velocity but with random acceleration noise. For this simple example, the target state s(k) at sampling step k is represented by a column vector of the target position x(k) and velocity v(k), i.e., s(k) = (x(k), v(k))^T. Then, the next state s(k+1) after t time units is represented by the following target process model:

$$
\begin{pmatrix} x(k+1) \\ v(k+1) \end{pmatrix} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x(k) \\ v(k) \end{pmatrix} + \begin{pmatrix} t^2/2 \\ t \end{pmatrix} \epsilon. \tag{1}
$$

In this equation, $A(t) = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}$ is the state transition matrix assuming constant-velocity movement during the time period t. However, the actual movement can deviate from the constant-velocity movement. To account for this, a random acceleration noise $E(t) = \begin{pmatrix} t^2/2 \\ t \end{pmatrix} \epsilon$ is added, where ε is a random variable drawn from the normal distribution N(0, σ_p²). The standard deviation σ_p is determined by the characteristics of the target motion; that is, the larger the maneuvers the target can make, the larger σ_p will be. The corresponding process noise covariance matrix is

$$
Q(t) = \begin{pmatrix} t^2/2 \\ t \end{pmatrix} \begin{pmatrix} t^2/2 & t \end{pmatrix} \sigma_p^2. \tag{2}
$$
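As a concrete reading of Eqs. (1)-(2), the short Python sketch below (ours, not from the paper) builds A(t) and Q(t); the noise level and state values are illustrative.

```python
import numpy as np

def process_model(t, sigma_p):
    """State transition A(t) and process noise covariance Q(t) of Eqs. (1)-(2)."""
    A = np.array([[1.0, t],
                  [0.0, 1.0]])              # constant-velocity transition over t time units
    G = np.array([[t**2 / 2.0],
                  [t]])                     # how the acceleration noise enters the state
    Q = (G @ G.T) * sigma_p**2              # Eq. (2)
    return A, Q

# illustrative usage: propagate a state (position 100 m, velocity 40 m/s) by t = 1 s
A, Q = process_model(t=1.0, sigma_p=0.8)
s = np.array([[100.0], [40.0]])
print(A @ s)                                # mean of the next state per Eq. (1)
print(Q)                                    # covariance contributed by the acceleration noise
```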

Figure 2. Job sequence for tracking a target: (a) computation and sensing jobs for tracking, showing the release-guard-controlled releases of computation jobs k−1, k, k+1 on the computer and the corresponding sensing jobs on the antenna around (k−1)T, kT, and (k+1)T; (b) state prediction, measurement, and update, showing the actual trajectory, the linear prediction, the measurement z(k), the estimates x̄(k|k−1), x̄(k|k), x̄(k+1|k), and the FOV diameter.

Another model the tracking algorithm uses is the measurement model. Supposing that the sensor can measure only the target position, not the velocity, the measured position z(k) at sampling step k is represented by

$$
z(k) = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} x(k) \\ v(k) \end{pmatrix} + \omega. \tag{3}
$$

In this equation, $H = \begin{pmatrix} 1 & 0 \end{pmatrix}$ is called the measurement matrix that converts the target state to the measured data. Since a sensing device can by no means be 100% accurate, the measured data z(k) can deviate from the actual target position x(k). To account for this, a random measurement noise ω is added. ω follows the normal distribution N(0, σ_m²), where the standard deviation σ_m is determined by the accuracy of the sensing device. The corresponding measurement noise covariance is R = σ_m².

For the given models, the tracking goal is to effectively filter out the process noise and the measurement noise such that the mean square error between the estimated state (x̄(k|k), v̄(k|k))^T and the actual state (x(k), v(k))^T is minimized.¹ Among many such estimation methods, we explain this with the Kalman filter [3], which is most commonly used in tracking applications. The Kalman equations solved by computation job k are Eqs. (4), (5), (6), and (7). Although the equations look sophisticated, they are actually quite intuitive.

¹ In this paper, we use a bar over a symbol to denote an estimate, in contrast to the actual value of the symbol. Also, x̄(k|k) is the estimation of x(k) at step k and x̄(k+1|k) is the prediction of x(k+1) from step k.

Let us give only intuitions using the scenario in Figure 2. In the figure, the horizontal axis is the time line and the vertical axis is the target position along the x-axis. Consider a sensing job k dispatched at kT. For serving sensing job k, the sensor's FOV needs to be steered to the predicted position x̄(k|k−1), which is given by computation job k−1 using the state-prediction equation of Eq. (6) with k replaced by k−1 (marked as "linear prediction" in Figure 2(b)). The accuracy of this prediction is maintained by a state-error covariance matrix

$$
P(k|k-1) = \begin{pmatrix} \sigma_{xx}(k|k-1)^2 & \sigma_{xv}(k|k-1)^2 \\ \sigma_{vx}(k|k-1)^2 & \sigma_{vv}(k|k-1)^2 \end{pmatrix},
$$

which is also similarly predicted by computation job k−1 using the error-covariance-prediction equation of Eq. (7) with k replaced by k−1. As long as the actual position x(k) is inside the FOV centered at x̄(k|k−1), sensing job k can give a new position measurement z(k). Once a target position measurement z(k) is made by sensing job k, computation job k solves the following equations:

$$
\begin{pmatrix} \bar{x}(k|k) \\ \bar{v}(k|k) \end{pmatrix} = \begin{pmatrix} \bar{x}(k|k-1) \\ \bar{v}(k|k-1) \end{pmatrix} + G(k)\left(z(k) - H \begin{pmatrix} \bar{x}(k|k-1) \\ \bar{v}(k|k-1) \end{pmatrix}\right) \tag{4}
$$

$$
\begin{pmatrix} \sigma_{xx}(k|k)^2 & \sigma_{xv}(k|k)^2 \\ \sigma_{vx}(k|k)^2 & \sigma_{vv}(k|k)^2 \end{pmatrix} = \begin{pmatrix} \sigma_{xx}(k|k-1)^2 & \sigma_{xv}(k|k-1)^2 \\ \sigma_{vx}(k|k-1)^2 & \sigma_{vv}(k|k-1)^2 \end{pmatrix} - G(k) \cdot S(k) \cdot G(k)^T \tag{5}
$$

$$
\begin{pmatrix} \bar{x}(k+1|k) \\ \bar{v}(k+1|k) \end{pmatrix} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \bar{x}(k|k) \\ \bar{v}(k|k) \end{pmatrix} \tag{6}
$$

$$
\begin{pmatrix} \sigma_{xx}(k+1|k)^2 & \sigma_{xv}(k+1|k)^2 \\ \sigma_{vx}(k+1|k)^2 & \sigma_{vv}(k+1|k)^2 \end{pmatrix} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \sigma_{xx}(k|k)^2 & \sigma_{xv}(k|k)^2 \\ \sigma_{vx}(k|k)^2 & \sigma_{vv}(k|k)^2 \end{pmatrix} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}^T + Q(t). \tag{7}
$$

First, Eq. (4) combines the previous state prediction (x̄(k|k−1), v̄(k|k−1))^T and the new measurement z(k) to obtain the updated state estimate (x̄(k|k), v̄(k|k))^T by optimally filtering out the process noise in the state prediction and the measurement noise in the new measurement z(k). This update is fairly intuitive. The first term is the state prediction if we did not have a measurement. The second term is called the corrector term, and it represents how much the first term needs to be corrected due to the measurement.

If the measurement noise is much greater than the process noise, the filter gain G(k) = P(k|k−1) · H^T · S(k)^{-1} (where S(k) = H · P(k|k−1) · H^T + R) will be small (that is, we do not give much credence to the measurement). On the other hand, if the measurement noise is much smaller than the process noise, G(k) will be large (that is, we give a lot of credence to the measurement). The second equation, Eq. (5), similarly updates the predicted error covariance matrix P(k|k−1) using the filter gain G(k). Finally, the third and fourth equations, Eqs. (6) and (7), predict the next state (x̄(k+1|k), v̄(k+1|k))^T and the next error covariance matrix P(k+1|k), respectively, which will be used by sensing job k+1 and computation job k+1. This sequence of prediction, sensing, and update is repeated to keep track of the target state as closely as possible.
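For illustration, the following sketch performs one cycle of Eqs. (4)-(7) with numpy: a measurement update followed by a prediction over an arbitrary inter-dispatch time t. All numerical values are made up for the example.

```python
import numpy as np

def kalman_step(x_pred, P_pred, z, t, H, R, sigma_p):
    """One Kalman cycle: update with measurement z (Eqs. 4-5), then predict t ahead (Eqs. 6-7)."""
    S = H @ P_pred @ H.T + R                      # innovation covariance S(k)
    G = P_pred @ H.T @ np.linalg.inv(S)           # filter gain G(k)
    x_upd = x_pred + G @ (z - H @ x_pred)         # Eq. (4): corrected state estimate
    P_upd = P_pred - G @ S @ G.T                  # Eq. (5): corrected error covariance
    A = np.array([[1.0, t], [0.0, 1.0]])          # state transition over the next t time units
    Q = np.array([[t**4 / 4, t**3 / 2],
                  [t**3 / 2, t**2]]) * sigma_p**2 # process noise covariance Q(t) of Eq. (2)
    x_next = A @ x_upd                            # Eq. (6): predicted state for step k+1
    P_next = A @ P_upd @ A.T + Q                  # Eq. (7): predicted error covariance
    return x_upd, P_upd, x_next, P_next

# illustrative usage with made-up numbers
H = np.array([[1.0, 0.0]])                        # measurement matrix of Eq. (3)
R = np.array([[1.0]])                             # measurement noise covariance
x_pred = np.array([[100.0], [40.0]])              # (x̄(k|k-1), v̄(k|k-1))
P_pred = np.diag([4.0, 1.0])                      # P(k|k-1)
z = np.array([[102.0]])                           # measured position z(k)
print(kalman_step(x_pred, P_pred, z, t=1.0, H=H, R=R, sigma_p=0.8))
```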

3 Time-parameterized sensing job model

In this section, the question to be answered is how to derive the sensing job parameters from the application-level tracking requirement. The tracking requirement is usually given as the target loss probability threshold Prob_loss^TH. The target loss probability is defined as the probability that the actual target position is beyond the FOV coverage, i.e., loss of the target. The loss of a target causes enormous overhead to search the entire space to recapture the target, and thus its probability should be maintained under the threshold Prob_loss^TH.

In general, sensing job k is characterized by its FOV center, denoted by L_k, its FOV diameter, denoted by W_k, and its measurement time duration E_k. As explained in the previous section (see Figure 2), sensing job k's FOV center L_k is determined by the predicted position x̄(k|k−1), which is given by computation job k−1 using the prediction equation, i.e.,

$$
\begin{pmatrix} \bar{x}(k|k-1) \\ \bar{v}(k|k-1) \end{pmatrix} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \bar{x}(k-1|k-1) \\ \bar{v}(k-1|k-1) \end{pmatrix}. \tag{8}
$$

The gap between this predicted position x̄(k|k−1) and the actual position x(k) is represented by the predicted state error covariance matrix P(k|k−1).

Figure 3. Probability density function of the actual target position x(k): starting from x̄(k−1|k−1) at time (k−1)T, the PDF of x is shown at time (k−1)T + t and at time (k−1)T + T; the FOV diameter W(T) is chosen so that Prob{x̄(k|k−1) − W(T)/2 ≤ x(k) ≤ x̄(k|k−1) + W(T)/2} = 1 − Prob_loss^TH, and it is bounded between W_min and W_max (with W_theory in between).

This matrix P(k|k−1) is also given by computation job k−1 using the error-covariance-prediction equation, i.e.,

$$
\begin{pmatrix} \sigma_{xx}(k|k-1)^2 & \sigma_{xv}(k|k-1)^2 \\ \sigma_{vx}(k|k-1)^2 & \sigma_{vv}(k|k-1)^2 \end{pmatrix} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \sigma_{xx}(k-1|k-1)^2 & \sigma_{xv}(k-1|k-1)^2 \\ \sigma_{vx}(k-1|k-1)^2 & \sigma_{vv}(k-1|k-1)^2 \end{pmatrix} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}^T + Q(t). \tag{9}
$$

Specifically, the actual target position x(k) is a random variable that follows the normal distribution with expected value x̄(k|k−1) and standard deviation σ_xx(k|k−1), i.e., N(x̄(k|k−1), σ_xx(k|k−1)²). This is depicted in Figure 3, which assumes that sensing job k−1 is dispatched at time (k−1)T and shows two PDFs, one assuming that sensing job k's dispatch time is (k−1)T + t and the other assuming that sensing job k's dispatch time is (k−1)T + T. Considering this distribution, sensing job k's FOV diameter W_k can be determined in such a way that the probability of the actual position x(k) being inside the FOV is higher than 1 − Prob_loss^TH. However, we have to note that x̄(k|k−1) and σ_xx(k|k−1) in Eqs. (8) and (9) are functions of t. Here, t represents the time passed since the last sensing, i.e., the dispatch time of sens-

ing job k − 1. t is unknown at the release time of sensing job k. It is actually determined only at the dispatch time of sensing job k by the sensor scheduler. One possible approach is to assume pure-periodic sensing, meaning that the dispatch time distance t between two consecutive sensing jobs is always equal to the period T. Then, we can determine the FOV center L_k and FOV diameter W_k at the release time of sensing job k. However, the actual dispatch time distance t can be sometimes shorter and sometimes longer than the period T because the sensor scheduler must schedule multiple sensing jobs from multiple concurrent tracking tasks. Consider Figure 3. Suppose that sensing job k's parameters are determined assuming that the dispatch time distance is T (i.e., assuming the right PDF). However, if the actual dispatch time distance is shorter (i.e., the left PDF), the predetermined sensing job k cannot cover the actual target position with a probability higher than 1 − Prob_loss^TH. Even if we always use the largest possible FOV diameter W_max, the performance will not be acceptable. Also, such an approach unnecessarily over-consumes the energy of the sensing device to obtain a measurement over the largest FOV.

Motivated by these observations, we propose a time-parameterized job model where the parameters of a sensing job are specified as time-parameterized functions at its release time, so that the parameters are finalized when the sensing job is actually dispatched by the sensor scheduler. Specifically, sensing job k's FOV center is specified as a function L_k(t) of the inter-dispatch time t, which is the predicted target location x̄(k|k−1)(t) at t time units after the previous sensing, i.e., L_k(t) = x̄(k|k−1)(t), where x̄(k|k−1)(t) is given by

$$
\begin{pmatrix} \bar{x}(k|k-1)(t) \\ \bar{v}(k|k-1)(t) \end{pmatrix} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \bar{x}(k-1|k-1) \\ \bar{v}(k-1|k-1) \end{pmatrix}. \tag{10}
$$

The FOV diameter is also specified as a time-parameterized function W_k(t) considering the t-dependent prediction of the state error covariance, i.e.,

$$
\begin{pmatrix} \sigma_{xx}(k|k-1)(t)^2 & \sigma_{xv}(k|k-1)(t)^2 \\ \sigma_{vx}(k|k-1)(t)^2 & \sigma_{vv}(k|k-1)(t)^2 \end{pmatrix} = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \sigma_{xx}(k-1|k-1)^2 & \sigma_{xv}(k-1|k-1)^2 \\ \sigma_{vx}(k-1|k-1)^2 & \sigma_{vv}(k-1|k-1)^2 \end{pmatrix} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}^T + Q(t). \tag{11}
$$

Since the actual target position x(t) at t time units after the previous sensing follows the normal distribution N(x̄(k|k−1)(t), σ_xx(k|k−1)(t)²), the required theoretical FOV diameter W_{k,theory}(t) to meet Prob_loss^TH can be determined such that the probability of the actual position x(t) being inside the FOV is larger than or equal to 1 − Prob_loss^TH, i.e.,

$$
\mathrm{Prob}\{\bar{x}(k|k-1)(t) - W_{k,theory}(t)/2 \le x(t) \le \bar{x}(k|k-1)(t) + W_{k,theory}(t)/2\} \ge 1 - \mathrm{Prob}_{loss}^{TH}. \tag{12}
$$

W_{k,theory}(t) satisfying this inequality can easily be found from the cumulative distribution function of the normal distribution, as shown in Figure 3. Note that there exist minimum and maximum FOV diameters W_min and W_max, which are given by the physical characteristics of the sensor device. Thus, the actual FOV diameter W_k(t) is specified as the following time-parameterized function:

$$
W_k(t) = \begin{cases} W_{min} & \text{if } W_{k,theory}(t) < W_{min} \\ W_{k,theory}(t) & \text{if } W_{min} \le W_{k,theory}(t) \le W_{max} \\ W_{max} & \text{if } W_{max} < W_{k,theory}(t) \end{cases} \tag{13}
$$

It is worth noting that since the standard deviation σ_xx(k|k−1)(t) of x(t) increases as t increases, the FOV size W_k(t) increases as well to meet Prob_loss^TH. This implies that W_k(t) is proportional to the inter-dispatch time distance t: W_k(t) ∝ t.

The FOV diameter of a sensing job is related to the energy consumption and the measurement time duration. For example, a wider radar beam transmission consumes more energy of the radar antenna. Also, measuring a larger FOV requires a longer measurement time duration to obtain a measurement of constant quality. For simplicity of explanation, we ignore the energy issue, but it can be included in our design framework via the energy-constrained scheduling methods in [13, 11]. We also use a simplifying assumption that the measurement time duration E_k(t) is linearly proportional to the FOV diameter W_k(t), i.e.,

$$
E_k(t) = g \cdot W_k(t) \tag{14}
$$

where g is the proportional factor. We are using this linear relation just for explanation; in fact, any actual relation can be used considering the actual sensor characteristics, and our proposed method is not limited to any specific relation. This time-parameterized sensing job specification can effectively compensate for the jitter of the inter-dispatch time t since the sensing parameters are determined exactly when the sensing job is dispatched, not at its release time.

In summary, a tracking task τ can be modeled as an end-to-end task as in Figure 2(a) that consists of two subtasks: (1) a computation subtask and (2) a time-parameterized sensing subtask. The computation subtask is represented by τ^comp = (T, E^comp, D^comp), where T is the sampling period, E^comp is the worst-case execution time for solving the Kalman equations, and D^comp (< T) is the intermediate deadline of the computation subtask. We will explain how to determine the sampling period T to meet Prob_loss^TH in the next section. The time-parameterized sensing subtask is represented by τ^sens = (T, E, D^sens), where T is the sampling period, E is the measurement time duration for sensing, and D^sens is the intermediate deadline of the sensing subtask. We omit other parameters like the FOV center and FOV diameter, since they are not relevant from the scheduling point of

view. Remember that in the time-parameterized task model, the measurement time duration E_k(t) varies depending on the inter-dispatch time t as in Eq. (14). We denote the measurement time duration by a random variable E and the inter-dispatch time by a random variable J. Their probability distribution functions are denoted by f_E(e) = Prob{E = e} and f_J(t) = Prob{J = t}, respectively. The probability distribution function f_E(·) of the measurement duration depends on the probability distribution f_J(·) of the inter-dispatch time and vice versa, creating the cyclic dependency. The two intermediate deadlines D^comp and D^sens can be determined using a deadline assignment algorithm [17, 12] such that D^comp + D^sens = T. Note that D^comp and D^sens are not hard constraints, meaning that missing them does not necessarily mean losing the target. Instead, missing those deadlines can delay the release of the next jobs, and the time-parameterized model can tolerate such delay to a certain extent. In this regard, we consider the deadlines D^comp and D^sens not as constraints but as a metric to determine the absolute deadlines, i.e., scheduling priorities, for EDF scheduling in the rest of this paper. Generally, the computation part is not a bottleneck since the execution time of the computation subtask can easily be reduced by using multiple vector processors and high-end computers. Therefore, its intermediate deadline D^comp can easily be guaranteed, which allows the sensing job to be released periodically using the release guard approach as in Figure 2(a). Thus, from now on, for simplicity, we focus only on the sensing subtask τ^sens = (T, E, D^sens), whose sensing job is periodically released (or with a minimal inter-release time of T) but has an execution time E that is a random variable unknown until the job is eventually dispatched by the sensor scheduler. Our schedulability criterion is not a deterministic guarantee of D^sens, but satisfaction of the original tracking requirement Prob_loss^TH.
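To make the time-parameterized specification concrete, the Python sketch below (ours, not the paper's) evaluates L_k(t), W_k(t), and E_k(t) of Eqs. (10)-(14) for a few candidate inter-dispatch times. W_min = 5 m, W_max = 15 m, and g = 4 match the sensor settings used later in Section 5, while the state estimate and covariance are made-up values.

```python
import math
import numpy as np
from statistics import NormalDist

def sensing_job_params(t, x_prev, P_prev, sigma_p, p_loss, w_min, w_max, g):
    """FOV center L_k(t), diameter W_k(t), and measurement duration E_k(t), Eqs. (10)-(14)."""
    A = np.array([[1.0, t], [0.0, 1.0]])
    Q = np.array([[t**4 / 4, t**3 / 2],
                  [t**3 / 2, t**2]]) * sigma_p**2
    x_pred = A @ x_prev                              # Eq. (10): predicted state at dispatch time
    P_pred = A @ P_prev @ A.T + Q                    # Eq. (11): predicted error covariance
    L = float(x_pred[0, 0])                          # FOV center = predicted position
    sigma_xx = math.sqrt(P_pred[0, 0])
    z = NormalDist().inv_cdf(1.0 - p_loss / 2.0)     # two-sided quantile for 1 - p_loss coverage
    w_theory = 2.0 * z * sigma_xx                    # Eq. (12): theoretical FOV diameter
    W = min(max(w_theory, w_min), w_max)             # Eq. (13): clip to the sensor's limits
    E = g * W                                        # Eq. (14): measurement duration
    return L, W, E

# illustrative usage: parameters are finalized only at the actual dispatch time t
x_prev = np.array([[100.0], [40.0]])                 # x̄(k-1|k-1), v̄(k-1|k-1)
P_prev = np.diag([1.0, 0.25])
for t in (0.8, 1.0, 1.4):                            # earlier or later dispatch than the period
    print(t, sensing_job_params(t, x_prev, P_prev,
                                sigma_p=0.8, p_loss=0.05,
                                w_min=5.0, w_max=15.0, g=4.0))
```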

4 Schedulability analysis

For scheduling concurrent sensing jobs, we assume a non-preemptive EDF policy due to the non-preemptive nature of sensing. At the release time of a sensing job, its absolute deadline is determined as the current time plus its intermediate deadline D^sens. Then, it is entered into the queue in EDF order. At each scheduling time point, the sensor scheduler dispatches the job at the head of the queue. The sensing job parameters, such as the FOV center L_k(t), FOV diameter W_k(t), and measurement time duration E_k(t), are finalized at that dispatch time. Once the sensing job is dispatched, the sensor measures the FOV specified by (L_k(t), W_k(t)) for E_k(t) time units without being preempted, even if another sensing job is released with a shorter deadline. Now, our problem is how to check the schedulability of the set of sensing subtasks TS = {τ_1^sens, τ_2^sens, ..., τ_n^sens} for tracking n simultaneous targets. The schedulability

criterion should respect the original tracking requirement Prob_loss^TH. For this, we develop a way to analytically model the end performance, i.e., the expected loss probability, as a function of the sensing job release period T and the probability distribution f_J(·) of the inter-dispatch time J.

First, we calculate the loss probability of sensing job k when the inter-dispatch time between sensing job k−1 and sensing job k is t. This conditional probability is denoted by Prob{loss | J = t}. From the way we determine the FOV diameter W_k(t) in Eq. (13),

$$
\mathrm{Prob}\{loss \mid J = t\} =
\begin{cases}
1 - \mathrm{Prob}\{-\frac{W_{min}}{2} \le x(t) - \bar{x}(t) \le \frac{W_{min}}{2}\} & \text{if } t < a \\
\mathrm{Prob}_{loss}^{TH} & \text{if } a \le t \le b \\
1 - \mathrm{Prob}\{-\frac{W_{max}}{2} \le x(t) - \bar{x}(t) \le \frac{W_{max}}{2}\} & \text{if } t > b
\end{cases} \tag{15}
$$

where a is the largest t that makes W_{k,theory}(t) < W_min and b is the smallest t that makes W_{k,theory}(t) > W_max. The above probability can easily be calculated since the random variable x(t) − x̄(t) follows the normal distribution N(0, σ_xx(t)²), where σ_xx(t) is given by Eq. (11).² By averaging Prob{loss | J = t} over the spectrum of possible inter-dispatch times J, i.e., [0, ∞),³ we can calculate the expected loss probability

$$
\mathrm{Prob}_{loss}(T, f_J(\cdot)) = \int_0^{\infty} \mathrm{Prob}\{loss \mid J = t\} \cdot f_J(t)\, dt. \tag{16}
$$

² In lieu of P(k|k), we can use the steady-state error covariance lim_{k→∞} P(k|k), i.e., the solution of the Riccati equation, since P(k|k) quickly converges to lim_{k→∞} P(k|k) regardless of dispatch time jitters.
³ We are not enforcing a deterministic deadline guarantee and hence the inter-dispatch time can be longer than 2T.
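A minimal numerical sketch of Eqs. (15)-(16): the integral of Eq. (16) is approximated by a sum over a discretized inter-dispatch time distribution f_J, and the growth law used for sigma_xx(t) is an illustrative assumption rather than the exact Eq. (11).

```python
import math
from statistics import NormalDist

PHI = NormalDist().cdf                               # standard normal CDF

def w_theory(sigma_xx, p_loss):
    return 2.0 * NormalDist().inv_cdf(1.0 - p_loss / 2.0) * sigma_xx

def loss_given_t(t, sigma_xx_of_t, p_loss, w_min, w_max):
    """Eq. (15): conditional loss probability given inter-dispatch time t."""
    sigma = sigma_xx_of_t(t)
    wt = w_theory(sigma, p_loss)
    if wt < w_min:                                   # t < a: FOV clipped up to W_min
        w = w_min
    elif wt > w_max:                                 # t > b: FOV clipped down to W_max
        w = w_max
    else:                                            # a <= t <= b: exactly the threshold
        return p_loss
    return 1.0 - (PHI(w / (2 * sigma)) - PHI(-w / (2 * sigma)))

def expected_loss(f_j, sigma_xx_of_t, p_loss, w_min, w_max):
    """Eq. (16): average the conditional loss over the inter-dispatch time distribution."""
    return sum(prob * loss_given_t(t, sigma_xx_of_t, p_loss, w_min, w_max)
               for t, prob in f_j.items())

# illustrative usage: a discretized f_J and a made-up growth law for sigma_xx(t)
sigma_xx_of_t = lambda t: math.sqrt(1.0 + 0.8**2 * t**2)   # assumed, not from the paper
f_j = {0.8: 0.2, 1.0: 0.6, 1.4: 0.2}                        # Prob{J = t}
print(expected_loss(f_j, sigma_xx_of_t, p_loss=0.05, w_min=5.0, w_max=15.0))
```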

One important observation here is that the end performance, that is, the target loss probability Prob_loss(T, f_J(·)), is better modeled by the scheduling behavior itself, i.e., the probability distribution function f_J(·) of the inter-dispatch time, than by a deterministic deadline requirement or a deadline miss probability. Figure 4 plots one example of the above loss probability model for a target with average velocity v̂ = 40 m/sec, maneuverability, i.e., process noise σ_p = 1, and a loss probability threshold of 0.05. In this figure, we artificially model f_J(·) as a negative-truncated normal distribution with a single parameter σ_J that represents the inter-dispatch time jitter. When σ_J = 0, sensing is purely periodic, that is, the inter-dispatch time J is always T. As σ_J increases, the inter-dispatch time deviates further and further from T. Note that such an artificial f_J(·) is used only for characterizing the end performance. The actual f_J(·) is determined later by a stochastic analysis of the scheduling behavior. For each target characterized by its average velocity v̂, maneuverability, i.e., process noise σ_p, and loss probability threshold Prob_loss^TH, we can build the end-performance model

as in Figure 4. This model guides us in picking the sampling period T for the target. If we pick a small T (for example, 0.5 sec in Figure 4), it is tolerable to a large jitter σ_J (for example, up to σ_J = 2 in Figure 4) while meeting Prob_loss^TH (for example, 0.05 in Figure 4). On the other hand, if we pick a large T (for example, 1.5 sec in Figure 4), it is tolerable only up to a smaller jitter (for example, only up to σ_J = 0.5 in Figure 4). In this paper, we pick T such that it is tolerable up to a large enough jitter σ_J, say 1.0. This choice subsumes the gap between the above artificial model of f_J(t) and the actual probability distribution of J in terms of meeting Prob_loss^TH.

Figure 4. Expected loss probability as a function of the sampling period (sec) and the inter-dispatch time jitter σ_J.

Once we determine T, the next step is to analyze the actual probability distribution f_J(·) of the inter-dispatch time J of each task and then check whether the expected loss probability given in Eq. (16) is lower than the threshold Prob_loss^TH. If we are given the probability distribution functions of the execution times of each task, the existing methods [18, 9, 10, 14, 7] can analyze the stochastic scheduling behavior such as the response time distribution and the deadline miss probability. However, in our time-parameterized task model, the execution time distribution depends on the inter-dispatch time distribution and the latter again depends on the former. To resolve this difficulty, we propose an offline iterative convergence method.

The inter-dispatch time distribution of each task varies depending on the number of concurrent tasks, i.e., the number of targets being tracked. In the offline design phase, we can identify the different types of potential targets, like a helicopter, a jet airplane, and a bomber. Each target type i is characterized by an average velocity v̂_i, process noise σ_{p,i}, and loss probability threshold Prob_{loss,i}^TH. If we have K different target types, a possible set of concurrent tasks, called a system state, is represented by a K-tuple (n_1, n_2, ..., n_K), where n_i represents the number of target instances of type i.

Let us explain our iterative convergence method for a system state (n_1, n_2, ..., n_K). In order to find the inter-dispatch time distribution of each task in the system state, we use the stochastic analysis method proposed by Diaz et al. [7]. The method takes the probability distributions of the execution time and the period of each task τ_i as inputs and analytically finds the probability distribution of the response time R_i^j of each job τ_i^j. Figure 5 shows an example with two tasks. Once the analysis produces the response time distribution f_{R_i^j}(·) of each job τ_i^j, the inter-dispatch time distribution f_{J_i^{j,j+1}}(·)

between two consecutive jobs τ_i^j and τ_i^{j+1} can simply be calculated as the probability distribution function of the random variable T + (R_i^{j+1} − R_i^j). By combining the inter-dispatch time distribution functions of all pairs of consecutive jobs during a hyperperiod (i.e., the least common multiple of all periods), we can find the probability distribution f_{J_i}(·) of the inter-dispatch time J_i of task τ_i.

We use this stochastic analysis as a basic block in the overall flow of the iterative convergence given in Figure 6 to find the final probability distribution f_{J_i}(·) of the inter-dispatch time J_i of each task τ_i. In our problem, however, the execution time distribution is not known in the beginning since the inter-dispatch time distribution is unknown. Thus, as an initial assumption, we assume hypothetical pure-periodic scheduling of each task, that is, the inter-dispatch time J_i of each task τ_i is always equal to its period T_i. This initial distribution is denoted as f_{J_i}(·)_input in Figure 6. With this initial distribution, the next block can compute the execution time distribution f_{E_i}(·) of our time-parameterized task using Eq. (14). Using this execution time distribution f_{E_i}(·) and the period T_i of each task τ_i, the aforementioned stochastic analysis can compute the inter-dispatch time distribution f_{J_i}(·)_output. Although we assume non-preemptive EDF scheduling of sensing jobs, any priority-driven scheduling including EDF, RM, and FIFO can be used, since the stochastic analysis [7] works for any priority-driven scheduling. The resulting inter-dispatch time distribution f_{J_i}(·)_output of each task is compared with the previously assumed f_{J_i}(·)_input to check convergence. If they do not match, we update f_{J_i}(·)_input with f_{J_i}(·)_output, i.e., f_{J_i}(·)_input = f_{J_i}(·)_output, and rerun the above process: (1) recalculate the execution time distribution and (2) re-perform the stochastic analysis. We repeat this until the inter-dispatch time distribution of each task converges. Although we cannot theoretically prove the convergence property for now, we empirically observe convergence for all the cases we explore, as will be discussed in Section 5.

Once we reach convergence of the inter-dispatch time distribution f_{J_i}(·) for each task τ_i in the system state (n_1, n_2, ..., n_K), the final step is to plug the resulting inter-dispatch time distribution f_{J_i}(·) into Eq. (16) to calculate the expected loss probability of each task. If the expected

loss probability is less than the required threshold Prob_loss^TH for every task in the system state (n_1, n_2, ..., n_K), we can say that all tasks are schedulable in that state and hence we name the state a feasible state.

Figure 5. Stochastic analysis of job response times: given the execution time distribution and period T_i of each task τ_i, the analysis yields the response time distribution of every job τ_i^j over a hyperperiod (the figure shows two tasks, τ_1 with jobs τ_1^1, ..., τ_1^4 and period T_1, and τ_2 with jobs τ_2^1, ..., τ_2^3 and period T_2).

Figure 6. Iterative convergence method: for the task set at system state (n_1, ..., n_K), start from an initial inter-dispatch time distribution f_J(·)_input that assumes hypothetical pure-periodic scheduling; calculate the execution time distribution f_E(·) from f_J(·)_input; run the stochastic analysis to obtain f_J(·)_output; if f_J(·)_output differs from f_J(·)_input, update f_J(·)_input ← f_J(·)_output and repeat; upon convergence, calculate the expected loss probability Prob_loss(T, f_J(·)) of each task and declare the state feasible if Prob_loss(T, f_J(·)) < Prob_loss^TH for every task, and infeasible otherwise.
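The convergence loop of Figure 6 can be sketched as the fixed-point iteration below; exec_time_dist and stochastic_analysis are placeholders standing in for the Eq. (14)-based conversion and the analysis of [7], and the toy demo distributions are made up.

```python
def distance(d1, d2):
    """Largest pointwise probability difference between two sets of discrete distributions."""
    return max(abs(d1[task].get(t, 0.0) - d2[task].get(t, 0.0))
               for task in d1 for t in set(d1[task]) | set(d2[task]))

def iterative_convergence(f_j_init, exec_time_dist, stochastic_analysis,
                          max_iters=10, tol=1e-6):
    """Fixed-point iteration of Figure 6 over the inter-dispatch time distributions.

    f_j_init:            {task_id: distribution} assuming pure-periodic scheduling
    exec_time_dist:      maps inter-dispatch distributions to execution time distributions (Eq. 14)
    stochastic_analysis: maps execution time distributions to inter-dispatch distributions [7]
    """
    f_j_in = f_j_init
    for _ in range(max_iters):
        f_e = exec_time_dist(f_j_in)              # execution time distribution of each task
        f_j_out = stochastic_analysis(f_e)        # resulting inter-dispatch time distribution
        if distance(f_j_in, f_j_out) < tol:       # converged: output matches input
            return f_j_out, True
        f_j_in = f_j_out                          # feed the output back as the next input
    return f_j_in, False                          # did not converge within max_iters

# toy usage with stand-in analysis functions
periods = {"tau1": 1.2, "tau2": 2.0}
f_j0 = {k: {T: 1.0} for k, T in periods.items()}          # pure-periodic initial assumption
toy_exec = lambda f_j: f_j                                  # placeholder conversion
toy_sched = lambda f_e: {k: {periods[k]: 0.7, 1.1 * periods[k]: 0.3} for k in f_e}
print(iterative_convergence(f_j0, toy_exec, toy_sched))
```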

The iterative convergence with repeated stochastic analysis is time consuming and hence cannot be used for an online schedulability check. To facilitate the online schedulability check, we repeat this iterative convergence test over all possible system states, marking them "feasible" or "infeasible". As a result, we can find the boundary states from which the system becomes "infeasible", meaning that at least one task violates its Prob_loss^TH requirement. Figure 7 shows the entire state space and the boundary states for a simple example with two target types. Once we find the boundary states, we can abstract them with a closed-form linear model

$$
a_1 n_1 + a_2 n_2 + \cdots + a_K n_K = b. \tag{17}
$$

We call this abstraction a probabilistic schedulability envelope since we can simply claim that a system state (n_1, n_2, ..., n_K) is schedulable, in the sense of meeting the probabilistic requirements Prob_loss^TH of all the tracking tasks, if it is inside the envelope, i.e.,

$$
a_1 n_1 + a_2 n_2 + \cdots + a_K n_K \le b. \tag{18}
$$

Figure 7. Boundary states and probabilistic schedulability envelope (linear model): number of targets of Type-1 versus number of targets of Type-2, with the feasible states below the boundary and the infeasible states above it.
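After the envelope coefficients a_1, ..., a_K and b have been obtained offline, the online admission test of Eq. (18) is a single inequality check; the coefficients in the sketch below are hypothetical.

```python
def inside_envelope(counts, a, b):
    """Eq. (18): a system state (n1, ..., nK) is schedulable if a1*n1 + ... + aK*nK <= b."""
    return sum(ai * ni for ai, ni in zip(a, counts)) <= b

# hypothetical envelope for two target types (coefficients fitted offline)
a, b = (1.0, 0.9), 14.0
print(inside_envelope((10, 3), a, b))   # True: the state is inside the envelope, admit
print(inside_envelope((10, 6), a, b))   # False: outside the envelope, reject
```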

The iterative convergence method and the probabilistic schedulability envelope are core ingredients of our analytic design framework since they resolve the cyclic dependency between the time-parameterized task model and the scheduling behavior and thereby provide an efficient schedulability check.

5 Experiments

This section presents experimental results to quantitatively show the benefit of our proposed time-parameterized task model and analysis framework. For the experiments, we consider three target types: Type 1 (helicopter), Type 2 (jet airplane), and Type 3 (bomber). They are characterized by the average velocity v̂, maneuverability, i.e., process noise σ_p, and loss probability threshold Prob_loss^TH, which are summarized in Table 1. As an example of a sensor, we consider a phased array antenna whose measurement error standard deviation is σ_m = 1, whose minimum and maximum FOV diameters are 5 m and 15 m, respectively, and whose converting factor g from the FOV size to the measurement duration is 4. With these settings, we compare three methods:

• Method A determines the FOV center at the job's release time assuming pure-periodic scheduling and always uses the maximum FOV diameter and measurement duration,
• Method B parameterizes only the FOV center as a function of the inter-dispatch time but always uses the maximum FOV diameter and measurement duration, and
• Method C parameterizes all of the FOV center, FOV diameter, and measurement duration as functions of the inter-dispatch time.

Method A is a classical method that can be used when the sensor is a dedicated device and hence sensing is actually possible exactly at the designated periodic points. For this method, the sampling period is determined such that the uncertainty after one period equals the coverage of the maximum FOV. For Method B, we conservatively estimate the maximum inter-dispatch time as twice the sampling period. Thus, the sampling period is determined such that the uncertainty after two periods (i.e., the maximum inter-dispatch time) equals the coverage of the maximum FOV. This is a classical worst-case method requiring a deterministic guarantee of all deadlines. Method C is the proposed approach. For this method, we determine the sampling period from the end-performance model as explained in Section 4. The last three columns of Table 1 summarize the periods calculated in this way for the three target types and the three methods. Note that the sampling periods of Method A are approximately twice those of Method B, since Method A assumes that the inter-dispatch time is always equal to the period while Method B assumes that the maximum inter-dispatch time is twice the period. For these two methods, we

can perform the online schedulability check using the utilization bound of 1.0 with a non-preemption penalty given by the maximum sensing duration. On the other hand, for Method C, we use the offline iterative convergence approach. Thus, the convergence property is important. We extensively checked the convergence by changing the number of targets of each type and empirically observed that it actually converges in most cases within 10 iterations. Figure 8 shows how the inter-dispatch time distribution quickly converges for an example system state where 10 helicopters, 20 jet airplanes, and 5 bombers are concurrently tracked, i.e., (n_1, n_2, n_3) = (10, 20, 5). At each iteration, the top, middle, and bottom graphs show the inter-dispatch time distributions of a target of Type 1, Type 2, and Type 3, respectively. In the initial distribution, the inter-dispatch time is equal to the period with probability 1. It eventually converges to the actual distribution at iteration 4. The only cases in which the distribution does not converge within 10 iterations are those where the average utilization is very close to 1.0. Thus, we can intentionally terminate the iteration when the average utilization is higher than a cut-off threshold, say 0.95, and declare the system state infeasible. This termination allows us to always finish the convergence check within 10 iterations. The penalty of such intentional termination with a 0.95 threshold is minor since, if the average utilization is higher than 1, the system is permanently overloaded and thus definitely infeasible even in the probabilistic sense. The cut-off threshold can be treated as an input parameter of our design framework; if we have more computing power for the offline analysis, we can push the cut-off threshold closer to 1.

In order to compare the online performance, we simulate random arrivals and departures of targets, random target motions, and also the Kalman filters for tracking the targets admitted into the system. We assume Poisson arrivals and exponential lifetimes of targets. The motion of each target is simulated with the average velocity and acceleration noise summarized in Table 1. Figure 9 shows the average number of admitted targets as the target arrival rate increases while the average lifetime of targets is fixed at 300 sec. Method B saturates below 25 since it is the most conservative method: it (1) calculates the period assuming that the maximum inter-dispatch time can be twice the period, (2) constantly uses the maximum FOV and the longest measurement duration, and (3) performs the deterministic schedulability check for admission control. On the other hand, Method A can admit a much larger number of targets since its sampling periods are generally longer than those of Method B. Method C can admit an almost similar number of targets even though its periods are shorter. This is possible because Method C admits targets as long as the stochastic behavior of the scheduling is acceptable for Prob_loss^TH, while Method A enforces the deterministic guarantee assuming the largest FOV and the longest measurement duration. Although Method A and Method C are similar in terms of the number of admitted targets, it does not necessarily mean

Table 1. Experimental parameters

target type              v̂        σ_p         Prob_loss^TH   Method A   Method B   Method C
Type 1 (Helicopter)      40 m/s   0.8 m/s^2   0.05           2300 ms    1200 ms    1200 ms
Type 2 (Jet Airplane)    80 m/s   0.2 m/s^2   0.03           4300 ms    2300 ms    2000 ms
Type 3 (Bomber)          50 m/s   0.8 m/s^2   0.01           1800 ms    1000 ms    1000 ms
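For illustration only, the following sketch shows how a sampling period in the spirit of Method A could be obtained by bisection on the condition that the one-period theoretical FOV diameter reaches W_max; the uncertainty growth law sigma_xx(t) is a made-up stand-in for Eq. (11), so the result is not one of the Table 1 values.

```python
import math
from statistics import NormalDist

def w_theory(t, sigma0, sigma_p, p_loss):
    """Theoretical FOV diameter after t seconds (illustrative uncertainty growth law)."""
    sigma_xx = math.sqrt(sigma0**2 + (sigma_p * t)**2)
    return 2.0 * NormalDist().inv_cdf(1.0 - p_loss / 2.0) * sigma_xx

def period_method_a(w_max, sigma0, sigma_p, p_loss, lo=0.01, hi=10.0):
    """Largest T whose one-period uncertainty is still covered by the maximum FOV."""
    for _ in range(60):                       # bisection on the monotone function w_theory
        mid = (lo + hi) / 2.0
        if w_theory(mid, sigma0, sigma_p, p_loss) < w_max:
            lo = mid
        else:
            hi = mid
    return lo

print(period_method_a(w_max=15.0, sigma0=1.0, sigma_p=0.8, p_loss=0.05))
```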

Figure 8. Convergence of the inter-dispatch time distribution: at each of (a) the initial assumption, (b) iteration 1, (c) iteration 2, (d) iteration 3, and (e) iteration 4, the three panels plot the probability distribution of the inter-dispatch time for a target of Type 1, Type 2, and Type 3, respectively.

Figure 9. Average number of admitted targets versus the average target arrival rate (1/sec) for Method A, Method B, and Method C.

that they perform equally well. Figure 10 shows the target loss probability as the target arrival rate increases. The target loss probability of Method A easily goes beyond the required threshold Prob_loss^TH since its parameters are determined based on the pure-periodic assumption and hence it is vulnerable to even a small deviation from that assumption, which is the common case when multiple jobs are scheduled on a single shared sensor device. In contrast, Method C always meets the requirement since the time-parameterized task model can adjust the parameters depending on the actual inter-dispatch time.

Figure 10. Target loss probability of each type: the loss probability (%) of Type 1, Type 2, and Type 3 targets versus the average target arrival rate (1/sec) for Method A, Method B, and Method C.

6 Related work

Recently, there has been much research on the co-design of control and scheduling [16, 8, 15, 19]. This research has studied the impact of the scheduling behavior on the control performance and observed that the control performance is mainly determined by the probabilistic distribution of the physical interaction points, i.e., the sensing and actuation times, rather than the job

scheduling times on the CPU. Based on this observation, a number of algorithms [6, 5] have been proposed to make the scheduling behavior of the sensing/actuation times more favorable to the end control performance. However, they consider only a dedicated and continuous sensing device for which sensing is a simple read operation and thus the sensing time can easily be controlled as desired. Due to this limitation, the existing methods are not applicable to multifunctional shared sensing devices, where multiple sensing requests compete and thus such control of the scheduling time is not always possible. Marti et al. [20] develop a way to compensate for scheduling jitters to improve the control performance. However, they consider only CPU scheduling jitters, again assuming simple sensing/actuation operations with dedicated devices, an assumption that is not valid in our target applications with multifunctional shared sensors.

Researchers have recognized that probabilistic task models better fit many soft real-time systems like multimedia and even control and tracking systems. Progress has recently been made in the analysis of real-time systems under stochastic task models [18, 9, 10, 14, 1, 2]. However, these works assume that the probability distributions of the task execution times are given as inputs and analyze the resulting scheduling behavior based on those inputs. No existing work can handle the cyclic dependency of the proposed time-parameterized task model, where the task execution times and the scheduling behavior are inter-dependent.

7 Conclusion

This paper presents a time-parameterized sensing task model where sensing job parameters such as FOV center, FOV diameter, and measurement duration are given as time-parameterized functions. With the time-parameterized


functions, the parameter values are finally determined when the sensor actually senses the targets, not at the release times of the sensing jobs. Therefore, a tracking algorithm like the Kalman filter can give significantly better tracking performance despite the dispatch time jitter of sensing jobs on a shared multifunctional sensing device. For an efficient schedulability check of time-parameterized sensing tasks, an iterative convergence method and a probabilistic schedulability envelope have been proposed. Our simulation results show that the proposed framework can significantly improve the effective capacity of tracking systems without losing track accuracy. In the future, we plan to extend the proposed design framework to more advanced estimation techniques such as MHT (multiple hypothesis tracking) and IMM (interacting multiple model).

References

[1] L. Abeni and G. Buttazzo. Stochastic Analysis of a Reservation-Based System. In Proceedings of the 9th International Workshop on Parallel and Distributed Real-Time Systems, April 2001.
[2] A. K. Atlas and A. Bestavros. Statistical Rate Monotonic Scheduling. In Proceedings of the 19th Real-Time Systems Symposium, December 1998.
[3] S. Blackman and R. Popoli. Design and Analysis of Modern Tracking Systems. Artech House, Inc., 1999.
[4] A. Cervin. Integrated Control and Real-Time Scheduling. PhD thesis, Dept. of Automatic Control, Lund Institute of Technology, 2003.
[5] A. Cervin and J. Eker. Feedback scheduling of control tasks. In Proceedings of the 39th IEEE Conference on Decision and Control, 2000.
[6] A. Cervin and J. Eker. The Control Server: A computational model for real-time control tasks. In Proceedings of the 15th Euromicro Conference on Real-Time Systems, 2003.
[7] L. Diaz, D. Garcia, K. Kim, C.-G. Lee, L. Bello, J. Lopez, S. Min, and O. Mirabella. Stochastic Analysis of Periodic Real-Time Systems. In Proceedings of the 23rd Real-Time Systems Symposium, December 2002.
[8] J. Eker, P. Hagander, and K.-E. Arzen. A feedback scheduler for real-time control tasks. Control Engineering Practice, 8(12):1369–1378, 2000.
[9] M. K. Gardner. Probabilistic Analysis and Scheduling of Critical Soft Real-Time Systems. PhD thesis, University of Illinois at Urbana-Champaign, 1999.
[10] M. K. Gardner and J.-W. Liu. Analyzing Stochastic Fixed-Priority Real-Time Systems. In Proceedings of the 5th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, March 1999.
[11] S. Ghosh, J. Hansen, R. Rajkumar, and J. Lehoczky. Integrated Resource Management and Scheduling with Multi-Resource Constraints. In Proceedings of the 25th IEEE International Real-Time Systems Symposium, pages 12–22, 2004.
[12] B. Kao and H. Garcia-Molina. Subtask Deadline Assignment for Complex Distributed Soft Real-Time Tasks. In Proceedings of the 14th IEEE International Conference on Distributed Computing Systems, pages 172–181, 1994.
[13] C.-G. Lee, P.-S. Kang, C.-S. Shih, and L. Sha. Radar Dwell Scheduling Considering Physical Characteristics of Phased Array Antenna. In Proceedings of the 24th Real-Time Systems Symposium, December 2003.
[14] J. P. Lehoczky. Real-Time Queueing Theory. In Proceedings of the 17th Real-Time Systems Symposium, December 1996.
[15] H. Rehbinder and M. Sanfridson. Integration of off-line scheduling and optimal control. In Proceedings of the 12th Euromicro Conference on Real-Time Systems, 2000.
[16] M. Ryu, S. Hong, and M. Saksena. Streamlining real-time controller design: From performance specifications to end-to-end timing constraints. In Proceedings of the Real-Time Technology and Applications Symposium, June 1997.
[17] J. Sun. Fixed Priority Scheduling of End-to-End Periodic Tasks. PhD thesis, Department of Computer Science, University of Illinois at Urbana-Champaign, 1997.
[18] T.-S. Tia, Z. Deng, M. Shankar, M. Storch, J. Sun, L.-C. Wu, and J.-S. Liu. Probabilistic Performance Guarantee for Real-Time Tasks with Varying Computation Times. In Proceedings of the Real-Time Technology and Applications Symposium, May 1995.
[19] Q. C. Zhao and D. Z. Zheng. Stable and real-time scheduling of a class of hybrid dynamic systems. Journal of Discrete Event Dynamical Systems, 9(1):45–64, 1999.
[20] P. Marti, J. M. Fuerter, G. Fohler, and K. Ramamritham. Jitter Compensation for Real-Time Control Systems. In Proceedings of the 22nd Real-Time Systems Symposium, December 2001.
