CPU Scheduling Algorithms: Case & Comparative Study

Sonia Zouaoui, Lotfi Boussaid, Abdellatif Mtibaa
[email protected], [email protected], [email protected]
Electric Department, National Engineering School of Sousse, Riadh City, 4000, Sousse, Tunisia
Abstract-- As applications grow increasingly complex, so do the microcontroller-based embedded devices that run them, such as cell phones, digital TVs, smart cars, washing machines and so on. Although these applications operate under hard constraints, embedded systems are required to respond to external events in the shortest possible time. In this paper, we present a variety of scheduling algorithms for single-microcontroller-based embedded devices, with static and dynamic priorities, and study different parameters, such as computing time, burst time, waiting time and average turnaround time, for each algorithm. This study aims to help select the appropriate operating system for a particular situation and also to conceive new solutions that can improve several metrics such as waiting time and average turnaround time.

Key Words: operating system, scheduling algorithm, burst time, turnaround time, waiting time, fixed priority, dynamic priority.
I-Introduction
All software parts of an operating system (OS) can be seen as a set of processes whose execution is managed by a particular process scheduler. A scheduler faces two major problems:
- the choice of the process to run, and
- the allocation of processor time to the chosen process.
A multitasking operating system is preemptive when it can stop (requisition) any application at any time in order to hand the processor over to the next one. In preemptive operating systems, multiple applications can run at a time and switch from one to another, or even launch an application while another performs work. There are also so-called multitasking operating systems that are in fact "cooperative multitasking". What is the difference? Cooperative multitasking allows multiple applications to run and occupy memory areas, leaving it up to these applications to manage that occupation, which can clog the system. By contrast, with preemptive multitasking the kernel always stays in control (who does what, when and how) and reserves the right to close applications that consume too many system resources, so that system freezes caused by a runaway application are largely avoided.

II-Operating system
An operating system primarily acts as a man-machine interface. Specifically, it effectively manages both the software resources (text editors, communication software, etc.) and the hardware resources (processors, memories, input/output units, etc.) of the computer system. In personal computers, the bulk of this management task is to share resources among multiple users working simultaneously: this is what is commonly called multiprogramming. An operating system performs two basic functions: information management and management of hardware resources.
- Information is managed primarily to provide users with the means to create, find and destroy objects (information in various forms), to carry out operations on them, and to make these objects available through the system's input/output management units. This function also includes the sharing and exchange of information, the mutual protection of users, and the way the operating system deals with these same users.
- As for the management of hardware resources, it concerns the allocation of main memory, secondary memory and input/output devices. A job dispatcher is, in this case, necessary to share the central processing unit (CPU) fairly in a multiprogramming context.

II-1- Information management
An operating system must provide the ability to manage information: files storing programs and data, segments in memory, variables, arrays and structures defined in user programs. The operating system allows the user to access information by indicating a symbolic name rather than a physical address on the storage unit. For example, to access a file, the user first specifies the file name. The operating system also allows multiple users to access common information in order to share it; at the same time, it must ensure the independence of the users. There are two ways to share information: create a copy of the information for each user who has expressed the need, or allow all users to access a single copy.

II-2- Hardware resources management
A computer system contains a set of hardware and software resources used by a set of executing processes or programs. The operating system is responsible for allocating these resources. It must be designed to avoid certain anomalies such as performance degradation due to resource mismanagement and the deadlock of a group of processes, a situation where each process waits for a resource owned by another process.

II-3- Characteristics of an operating system
An operating system (OS) can be seen as an ideal abstract machine. In reality it is a software layer wrapping the hardware one, whose purpose is to simplify the design of application software. It can be considered the single point of contact for programs, since it presents the following characteristics:
- It can hide hardware unavailability from software tasks.
- It may provide resource management functions that relieve programs of that burden. For example, a scheduler enables several software tasks to share resources.
- It can provide a single, uniform interface regardless of the underlying hardware component.
Programs can be seen as resource-consuming entities, and these resources are material [19]. Among hardware resources, the first is the processor on which programs are executed, followed by the memory in which they store their data. Other resources could be communication, calculation, etc.

Hardware resources are not always directly accessible because, in complex applications, resources are always shared between several programs, especially when they need the same resource. That is why the operating system can be considered a resource management system for applications: it manages the processor resource via task-scheduling algorithms, the memory resource via memory allocation and release functions, and all other resources via a peripheral manager.
III-Real Time Operating System (RTOS)
A Real Time Operating System (RTOS) is a reactive OS that must respond continuously to stimuli from the process it seeks to control. A real-time system is a reactive system that must meet time constraints: it must be able to process information from the process within a period that does not compromise control of that process. Reacting too late can lead to catastrophic consequences for the system itself or for the process. Compliance with time constraints is the main constraint to satisfy; the validity of a real-time system depends not only on the results of the computation carried out but also on their temporal aspect (a correct result delivered too late is an invalid result).

A- Real-time systems classification
Critical time constraints lead to classifying real-time systems into the following three categories [1]:
- Strict real-time system: a system subject to strict time constraints, that is to say one for which the slightest timing error can have devastating human and economic consequences. Most applications in the avionics and automotive fields, among others, are strict real-time;
- Flexible real-time system: a system subject to flexible time constraints, where a number of timing errors can be tolerated. This is known as quality of service (QoS);
- Mixed real-time system: a system subject to both strict and flexible time constraints.

B- Real-time task
A real-time task consists of a set of instructions that can run in sequence on one or more processors while meeting time constraints. In the following we will assume that a task does not run in parallel with itself. It can be repeated any number of times, possibly infinitely, and each of its executions is called an instance or a job. A real-time system consists of a set of real-time tasks subject to real-time constraints. A real-time task can be:
- Periodic: its instances repeat indefinitely and there is a constant time, called the period, between two successive activations;
- Sporadic: its instances repeat indefinitely and there is a minimum time between two successive instances;
- Aperiodic: there is no correlation between successive instances.
C-Periodic tasks
The classic model of periodic tasks, known as the Liu and Layland model, is the most widely used in modeling real-time systems [2]. This model defines several parameters for a task, of two kinds: static parameters attached to the task itself and dynamic parameters attached to each instance of the task. The basic static parameters of a periodic task τi = (Ri, Ci, Di, Ti) are (Figure1):
- Ri (release time): date of first activation of τi, when it can start its first execution;
- Ci (computing time): execution time of τi. In much of the real-time scheduling literature this parameter is taken as the worst-case execution time (WCET) on the processor on which the task will run;
- Di: relative deadline, the critical time frame for each activation of τi;
- Ti: period of τi.

Figure1: Classic model of periodic tasks

Other static parameters are derived from the basic ones:
- Ui = Ci / Ti: CPU utilization factor of τi, with Ui ≤ 1;
- CHi = Ci / Di: density of the task τi, with CHi ≤ 1.
*Concrete / non-concrete tasks
If the first activation dates of all tasks are known, the tasks are said to be concrete. If, on the contrary, the dates of first activation are unknown, the tasks are said to be non-concrete.

*Synchronous / asynchronous tasks
If all tasks are concrete and have the same date of first activation, the tasks are said to have synchronous activations. Otherwise, they are asynchronous.

*Real-time constraints
In real-time scheduling, tasks can be subject to several constraints such as deadlines, strict periodicity, dependencies and precedence, etc.

*Deadlines
Deadline constraints express a condition on the latest completion date of a given task. Consider a system of periodic tasks; according to the relationship between the period Ti and the deadline Di of each task τi, we distinguish three types of deadlines:
- Di = Ti: each activation of the task τi must complete before its next activation. We speak of deadlines on activations, implicit deadlines, or deadlines on requests;
- Di ≤ Ti: each activation of the task τi must complete on or before a date less than or equal to the date of its next activation. We speak of constrained deadlines;
- Di and Ti unrelated: there is no correlation between the latest completion date of an activation of τi and its next activation. One can have Di ≤ Ti or Di > Ti; we speak here of arbitrary deadlines.

*Strict periodicity
Consider a periodic task τi in a real-time task system. The strict periodicity constraint requires that the time elapsed between the start dates of two consecutive executions, s_i^k and s_i^(k+1), corresponds exactly to the period of the task τi. The advantage of this constraint is that knowing the actual start date of the first instance, s_i^1, implies knowing the effective start date of all subsequent instances of the same task [3], which is expressed by the relation s_i^(k+1) = s_i^1 + k·Ti, for k ≥ 1.

*Dependencies between tasks
A dependency between two tasks τi and τj can be of two types: a precedence dependency and/or a data dependency. A precedence dependency between (τi, τj) requires that task τj start running only after task τi has completely finished running [3], [4], [5], [6]. Precedence constraints are indirectly real-time constraints; we say that τi is a predecessor of τj and that τj is a successor of τi. If τi runs exactly once before each run of τj, we have a simple precedence constraint; otherwise it is a wide precedence [7], [8], [9]. A data dependency means that task τi produces a result that is consumed by τj [3], [4]; such a dependency inevitably induces a precedence between the tasks. Tasks are called independent if they are defined only by their temporal parameters.

*Latency
Latency is defined for dependent tasks, by transitivity in the case of a path of tasks. Let τi and τj be two tasks; the latency between τi and τj, denoted L(τi, τj), is the time between the start of execution of τi and the end of execution of τj.
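As a quick illustration of the strict-periodicity relation above, the short sketch below (with hypothetical values for s_i^1 and Ti, not taken from the paper) enumerates the start dates that the first one fixes:

    # Strict periodicity: s_i^(k+1) = s_i^1 + k*T_i, so the first start date
    # determines every later one. The values below are illustrative only.
    s1, T = 3, 10                            # hypothetical first start date and period
    starts = [s1 + k * T for k in range(5)]  # start dates of instances 1..5
    print(starts)                            # [3, 13, 23, 33, 43]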
IV-Scheduling real-time systems

*Scheduling Algorithms
A real-time system consists of one or more processors and a set of tasks with real-time constraints. A scheduling algorithm determines the total order and the start dates of the executions of the tasks on one or more processors. There are several classes of real-time scheduling algorithms.

*Uniprocessor / Multiprocessor
Scheduling is of the uniprocessor type if all tasks run on a single processor. If the tasks can run on multiple processors, the scheduling is multiprocessor.

*Preemptive / Non-preemptive
A scheduling algorithm is preemptive when a task running on a processor may be suspended in favor of a higher-priority task and resume its execution later. Otherwise the scheduling is non-preemptive.

*Online / Offline
An off-line scheduling algorithm builds the complete sequence of execution start dates of the tasks, based on their temporal parameters, before any task executes. In practice, the off-line schedule takes the shape of a (static) plan executed repeatedly or cyclically. An on-line scheduler constructs the sequence of execution start dates from the temporal parameters of the tasks during their execution. Online algorithms are more robust with respect to WCET overruns.

*Fixed / dynamic priority
A scheduling algorithm uses fixed priorities if task priorities are based on static parameters (e.g. the period). A scheduling algorithm uses dynamic priorities if task priorities are based on dynamic parameters (e.g. the laxity).

*Optimal / Non-optimal
By definition, an optimal scheduling algorithm for a given class of scheduling problems (offline, online, fixed or dynamic priorities, etc.) is such that, if a system is schedulable by at least one algorithm of that class, then it is also schedulable by the optimal algorithm. Consequently, if a system is not schedulable by the optimal algorithm of a given class, then it is not schedulable by any other algorithm of the same class.

*Feasibility analysis
A feasibility analysis of a set of tasks determines whether there exists a scheduling algorithm that can schedule this set of tasks. The condition that all tasks must verify in order to be feasible is called the feasibility condition.

A-Scheduler
A real-time scheduler is an online program responsible for allocating the task(s) to be scheduled to the processor(s). It is invoked at certain times, according to a given scheduling algorithm, in such a way that all tasks meet their real-time constraints.

The sharing of the processor and resources introduces several states for a task (Figure2):
- New: the task is not yet activated;
- Ready: the task is activated and has all the resources it needs to run;
- Running: the task is executing;
- Waiting: the task is waiting for resources;
- Terminated: the task has no pending request; it has finished executing.

The scheduler is responsible for the transition of a task from one state to another. At each invocation, the scheduler updates the list of ready tasks, adding the active tasks that have all their resources and removing the tasks that have finished executing or are blocked waiting for a resource. Then, among the ready tasks, the scheduler selects the highest-priority one to run. Thus, a task in the New state can move to the Ready state. A task in the Ready state may go to the Running state. A running task may return to the Ready state if it is preempted by another, higher-priority task; it goes to the Waiting state if it is waiting for a resource to be released; and if it has finished executing, it passes to the terminated (passive) state. A task can move from the Waiting state to the Ready state, and finally a task can move from the passive state back to the Ready state when it is activated again. Figure2 provides an illustration of the different states and their transitions [7].
Figure2: Process state transition diagram
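The transition rules just described can be captured in a small lookup table. The sketch below is our own illustration of Figure2 (the state names and the helper are assumptions, not code from the paper); it rejects any transition the diagram does not allow:

    # Task states and the transitions of Figure 2, as described above.
    ALLOWED = {
        "New":        {"Ready"},                 # activation
        "Ready":      {"Running"},               # elected by the scheduler
        "Running":    {"Ready",                  # preempted by a higher priority
                       "Waiting",                # blocked on a resource
                       "Terminated"},            # finished executing
        "Waiting":    {"Ready"},                 # awaited resource released
        "Terminated": {"Ready"},                 # passive task activated again
    }

    def transition(state: str, new_state: str) -> str:
        """Return new_state if the scheduler may perform this transition."""
        if new_state not in ALLOWED[state]:
            raise ValueError(f"illegal transition: {state} -> {new_state}")
        return new_state

    s = transition("New", "Ready")      # fine
    s = transition(s, "Running")        # fine
    # transition(s, "New")              # would raise ValueError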
B-Uniprocessor scheduling

*Fixed priorities
A fixed priority does not change during the execution of the task. The most common fixed-priority scheduling algorithms are "Rate Monotonic" and "Deadline Monotonic".

Rate Monotonic (RM)
The RM scheduling algorithm was introduced by Liu and Layland in 1973 [2]. It is a preemptive scheduling algorithm that applies to independent periodic tasks with deadlines on requests (Ti = Di). The task priority is inversely proportional to its period: the shorter the period of a task, the higher its priority. This algorithm is optimal in the class of fixed-priority preemptive algorithms for independent, synchronous tasks with deadlines on requests. A sufficient schedulability condition for RM on a set Γn of periodic tasks with deadlines on requests is given by [2]:

    ∑_{i=1..n} Ci/Ti ≤ n·(2^(1/n) − 1)

Deadline Monotonic (DM)
The DM scheduling algorithm was introduced by Leung and Whitehead in 1982 for tasks with constrained deadlines. The task priority is inversely proportional to its relative deadline: the shorter the relative deadline, the higher the priority [8]. This algorithm is optimal in the class of fixed-priority preemptive algorithms for independent tasks with constrained deadlines (Di ≤ Ti). A sufficient schedulability condition for DM on a set Γn of periodic tasks with constrained deadlines is given by [8]:

    ∀τi ∈ Γn:  Ci + ∑_{j ∈ hp(τi)} ⌈Di/Tj⌉·Cj ≤ Di

where hp(τi) is the set of tasks of Γn with priority higher than or equal to that of τi, not including τi.

*Dynamic priorities
A dynamic priority changes during the execution of an instance. The most used dynamic-priority scheduling algorithms are "Earliest Deadline First" and "Least Laxity First".

Earliest Deadline First (EDF)
The EDF scheduling algorithm was introduced by Liu and Layland in 1973 [2]. It can be preemptive or non-preemptive and applies to independent periodic tasks (Ti = Di). At every instant t, the highest priority is allocated to the task with the closest absolute deadline. EDF is optimal for independent preemptive tasks. A necessary and sufficient schedulability condition of EDF for a set Γn of periodic tasks is given by:

    ∑_{i=1..n} Ci/Ti ≤ 1

Least-Laxity First (LLF)
The LLF scheduling algorithm is based on the laxity: among the ready tasks, the one with the smallest laxity has the highest priority [9]. This algorithm is optimal for independent preemptive tasks. The schedulability conditions of LLF and EDF are the same; that is to say, the necessary and sufficient schedulability condition of LLF for a set Γn of periodic tasks is given by:

    ∑_{i=1..n} Ci/Ti ≤ 1
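The three tests above are easy to machine-check. The following minimal sketch (our illustration with a hypothetical task set, not code from the paper) implements the RM bound, the DM sufficient condition and the exact EDF/LLF utilization test:

    import math

    def rm_sufficient(tasks):
        """Liu & Layland bound: sum(Ci/Ti) <= n(2^(1/n) - 1). tasks = [(C, T)]."""
        n = len(tasks)
        return sum(c / t for c, t in tasks) <= n * (2 ** (1 / n) - 1)

    def dm_sufficient(tasks):
        """Sufficient DM test for constrained deadlines. tasks = [(C, D, T)].
        Shorter relative deadline = higher priority."""
        tasks = sorted(tasks, key=lambda task: task[1])
        for i, (c_i, d_i, _t_i) in enumerate(tasks):
            interference = sum(math.ceil(d_i / t_j) * c_j
                               for c_j, _d_j, t_j in tasks[:i])
            if c_i + interference > d_i:
                return False
        return True

    def edf_llf_exact(tasks):
        """Necessary and sufficient for EDF and LLF: sum(Ci/Ti) <= 1."""
        return sum(c / t for c, t in tasks) <= 1.0

    demo = [(1, 4), (2, 6), (1, 8)]               # hypothetical (C, T) pairs, U ~ 0.708
    print(rm_sufficient(demo))                    # True: 0.708 <= 3*(2^(1/3)-1) ~ 0.780
    print(edf_llf_exact(demo))                    # True: 0.708 <= 1
    print(dm_sufficient([(1, 3, 4), (2, 5, 6)]))  # True: constrained-deadline example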
V- Comparative study
Having introduced both fixed- and dynamic-priority scheduling algorithms and how to judge whether a chosen algorithm is optimal, in this section we conduct a detailed performance comparison study.

1-Fixed priority scheduling algorithms
A-Computation of Gantt chart, Waiting Time and Turnaround Time:
Consider the following set of processes, with their IDs and associated burst times, shown in Table 1 [10]; all processes are taken to arrive at time 0.

Table 1: Processes with ID and Burst Time
Process ID    Burst Time (ms)
P0            12
P1            2
P2            3
P3            2
P4            6
a. First Come First Serve
| P0 0-12 | P1 12-14 | P2 14-17 | P3 17-19 | P4 19-25 |
Figure3: Gantt chart for FCFS

b. Shortest Job First
| P1 0-2 | P3 2-4 | P2 4-7 | P4 7-13 | P0 13-25 |
Figure4: Gantt chart for SJF

c. Round Robin
Considering a quantum of 5 ms for each process:
| P0 0-5 | P1 5-7 | P2 7-10 | P3 10-12 | P4 12-17 | P0 17-22 | P4 22-23 | P0 23-25 |
Figure5: Gantt chart for RR
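The Round Robin chart of Figure 5 can be reproduced mechanically. The sketch below is our own illustration (the variable names are assumptions, not from the paper): it simulates RR with a FIFO ready queue, all arrivals at t = 0, the Table 1 bursts and a 5 ms quantum, and prints exactly the slices of Figure 5:

    from collections import deque

    # Round Robin simulation: all processes arrive at t = 0, quantum = 5 ms.
    bursts = {"P0": 12, "P1": 2, "P2": 3, "P3": 2, "P4": 6}
    quantum = 5
    queue = deque(bursts)                  # FIFO ready queue: P0..P4
    remaining = dict(bursts)
    t, slices = 0, []
    while queue:
        p = queue.popleft()
        run = min(quantum, remaining[p])
        slices.append((p, t, t + run))     # one Gantt-chart slice
        t += run
        remaining[p] -= run
        if remaining[p] > 0:
            queue.append(p)                # not finished: back of the queue
    print(slices)                          # matches Figure 5 exactly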
d. Priority Scheduling
Priorities are attributed as follows (a smaller number denotes a higher priority):

Table 2: Processes with ID, Burst Time and Priority
Process ID    Burst Time (ms)    Priority
P0            12                 3
P1            2                  1
P2            3                  3
P3            2                  4
P4            6                  2

| P1 0-2 | P4 2-8 | P0 8-20 | P2 20-23 | P3 23-25 |
Figure6: Gantt chart for PS
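As a cross-check of the four charts above, the short sketch below (our illustration, not from the paper; the helper name is an assumption) reads the time slices off a Gantt chart and recomputes the turnaround and waiting times discussed next, shown here for the Round Robin chart of Figure 5:

    # Burst times of Table 1; every process is assumed to arrive at t = 0.
    bursts = {"P0": 12, "P1": 2, "P2": 3, "P3": 2, "P4": 6}

    def times_from_gantt(slices):
        """slices: (process, start, end) triples read off a Gantt chart.
        Turnaround = completion - arrival(0); waiting = turnaround - burst."""
        completion = {}
        for proc, _start, end in slices:
            completion[proc] = end                 # the last slice ends the process
        turnaround = completion
        waiting = {p: turnaround[p] - bursts[p] for p in turnaround}
        return turnaround, waiting

    rr = [("P0", 0, 5), ("P1", 5, 7), ("P2", 7, 10), ("P3", 10, 12),
          ("P4", 12, 17), ("P0", 17, 22), ("P4", 22, 23), ("P0", 23, 25)]
    tat, wt = times_from_gantt(rr)
    print(tat)   # {'P0': 25, 'P1': 7, 'P2': 10, 'P3': 12, 'P4': 23}
    print(wt)    # {'P0': 13, 'P1': 5, 'P2': 7, 'P3': 10, 'P4': 17}
    print(sum(tat.values()) / 5, sum(wt.values()) / 5)   # 15.4 10.4, as in Tables 3 and 4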
For example, from the Gantt chart for SJF scheduling, the turnaround time is obtained as the time from submission of a process to its completion. The turnaround times for processes P0, P1, P2, P3 and P4 are 25, 2, 7, 4 and 13 respectively, so the average turnaround time is (25+2+7+4+13)/5 = 10.2 ms. The waiting time of a process, the time it spends in the ready queue, is likewise read off the Gantt chart for SJF scheduling: for P0, P1, P2, P3 and P4 it is 13, 0, 4, 2 and 7 respectively, giving an average waiting time of (13+0+4+2+7)/5 = 5.2 ms. The turnaround and waiting times of all the other algorithms are computed similarly and summarized in Table3 and Table4. From this analysis, note that First Come First Serve (FCFS) and Shortest Job First (SJF) are in general suitable for batch operating systems, while Round Robin (RR) and Priority Scheduling (PS) are suitable for time-sharing systems. SJF is characterized by its optimality among these scheduling algorithms; consequently, the SJF algorithm is suitable for all of these scenarios. Figures 7 and 8 below show a comparison of the fundamental scheduling algorithms' waiting time and turnaround time [11].

i. Waiting Time:
Figure7: Comparison of Fundamental Scheduling Algorithms (Waiting Time)
Table4: Waiting Time for Individual Processes and Average Waiting Time for Each Scheduling Algorithm
              Waiting Time (ms)
Process ID    FCFS    SJF     Round Robin    Priority Scheduling
P0            0       13      13             8
P1            12      0       5              0
P2            14      4       7              20
P3            17      2       10             23
P4            19      7       17             2
Average       12.4    5.2     10.4           10.6

ii. Turnaround Time

Table3: Turnaround Time for Individual Processes and Average Turnaround Time for Each Scheduling Algorithm
              Turnaround Time (ms)
Process ID    FCFS    SJF     Round Robin    Priority Scheduling
P0            12      25      25             20
P1            14      2       7              2
P2            17      7       10             23
P3            19      4       12             25
P4            25      13      23             8
Average       17.4    10.2    15.4           15.6
Figure8: Comparison of Fundamental Scheduling Algorithms (Turnaround Time)
It is clear that the SJF scheduling algorithm yields the optimum turnaround time, waiting time and response time compared with all the other fundamental algorithms; its throughput and CPU utilization rate are optimum as well. As the above analysis and discussion show, FCFS is the simplest algorithm but is only suitable for batch systems, with large waiting times. The minimum average waiting time and average turnaround time are given by the SJF scheduling algorithm. As for the priority scheduling algorithm, which is based on priority, the problem of starvation appears because the highest-priority jobs always run first and a low-priority job may have to wait indefinitely. Being preemptive, the RR scheduling algorithm suits interactive and time-sharing systems.

2-Dynamic priority scheduling algorithms
Because a soft real-time operating system is less restrictive, priority must be allocated to the more critical processes over the less critical ones. In a soft real-time system, missing some deadlines decreases the performance of the system although the system keeps operating. The best system utilization is offered by preemptive schedulers, which is why they are preferred over non-preemptive ones. The amount of time granted to the system to execute and complete a request after its arrival is called the deadline parameter. As mentioned above, preemptive real-time scheduling algorithms can be classified into two categories, based on how priorities are assigned: static priority and dynamic priority. Since the seminal work of Liu and Layland [2], static- and dynamic-priority scheduling has attracted a great deal of research. As mentioned above, under the RM algorithm shorter periods get higher priorities, whereas EDF is a dynamic priority assignment policy in which a task gets the highest priority if its deadline is the nearest, and vice versa. The problem with the RM scheduling algorithm is that, as in any static scheduling, resources are scheduled under the assumption that interruptions occur at their maximum rate, which effectively reduces CPU utilization; this reduction is caused by the fact that unused resources cannot be reallocated [9]. Still with static scheduling, priorities cannot easily be changed at run time, and their allocation must be based on worst-case conditions: if an operation requires 10 msec in the worst case, the static analysis must assume that the same amount of time is needed for every invocation of the task, and a further utilization penalty arises because, in the usual case, such resources sit idle for up to 3 msec. Compared with RM, EDF and Least Laxity First (LLF) have the big advantage of overcoming these utilization limitations [9][12]. In particular, the penalty decreases because EDF and LLF prioritize operations according to their dynamic run-time characteristics. EDF and LLF scheduling algorithms are able to handle harmonic and non-harmonic periods and to respond flexibly to invocations and to variations in resource requirements; thus, in terms of CPU utilization, these schedulers are optimal. Being purely dynamic schedulers, EDF and LLF set aside the utilization limitations of the static scheduler RM. However, for these algorithms (EDF and LLF), evaluating the scheduling decision at run time is costly, and with a purely dynamic scheduler there is no control preventing operations from missing their deadlines once the schedulable bound is exceeded. Consequently, if the system becomes overloaded, the risk of missing deadlines necessarily increases; this can reduce the margin of safety, especially when operations are added to the schedule to guarantee higher utilization [9] [12].
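The utilization gap described above is easy to exhibit numerically. In the following sketch (a hypothetical two-task set of our own, not from the paper), the task set exceeds the RM sufficient bound, and in fact misses a deadline under RM, yet stays under the EDF/LLF bound of 1:

    # Hypothetical (C, T) task set in the gap between the RM bound and U = 1.
    tasks = [(2, 5), (4, 7)]                 # U = 2/5 + 4/7 ~ 0.971
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    print(u <= n * (2 ** (1 / n) - 1))       # False: 0.971 > 2*(sqrt(2)-1) ~ 0.828
    print(u <= 1.0)                          # True: schedulable by EDF and LLF
    # Under RM, the response time of the long task converges to 4 + ceil(8/5)*2 = 8 > 7,
    # so it misses its deadline, while EDF meets all deadlines for this set.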
Conclusion & Future work
Among fixed-priority scheduling algorithms, thanks to its minimum average waiting time and average turnaround time, the SJF scheduling algorithm serves all types of jobs with optimum scheduling criteria, but long processes may never be served. Concerning dynamic-priority scheduling algorithms, compared with RM, EDF and Least Laxity First (LLF) have the big advantage of overcoming the utilization limitations; in particular, the penalty decreases because EDF and LLF prioritize operations according to their dynamic run-time characteristics. To improve time-management performance, we will exploit the advantages of hardware, such as parallel and dependent execution. An operating system in which hardware and software models coexist raises several topics such as communication overhead and resource optimization, portability, suitable interfaces, and synchronization mechanisms.

REFERENCES
[1] F. Ndoye, "Ordonnancement temps réel préemptif multiprocesseur avec prise en compte du coût du système d'exploitation" (Preemptive multiprocessor real-time scheduling taking the operating-system cost into account), PhD thesis, École Doctorale Sciences et Technologie de l'Information, des Télécommunications et des Systèmes, INRIA Paris-Rocquencourt, April 2014.
[2] C. L. Liu and J. W. Layland, "Scheduling algorithms for multiprogramming in a hard-real-time environment", Journal of the ACM, 20(1):46-61, 1973.
[3] L. Cucu, "Ordonnancement non préemptif et condition d'ordonnançabilité pour systèmes embarqués à contraintes temps réel" (Non-preemptive scheduling and schedulability conditions for embedded systems with real-time constraints), PhD thesis, Université Paris Sud, 2004.
[4] J-J. Hwang, Y-C. Chow, F. D. Anger, and C-Y. Lee, "Scheduling precedence graphs in systems with interprocessor communication times", SIAM J. Comput., 18:244-257, April 1989.
[5] W. W. Chu and L. M-T. Lan, "Task allocation and precedence relation for distributed real-time systems", IEEE Transactions on Computers, C-36(18), June 1987.
[6] J. Xu, "Multiprocessor scheduling of processes with release times, deadlines, precedence, and exclusion relations", IEEE Trans. Softw. Eng., 19:139-154, February 1993.
[7] A. Silberschatz and P. B. Galvin, Operating System Concepts, 8th Edition, Addison Wesley.
[8] J. Y-T. Leung and J. Whitehead, "On the complexity of fixed-priority scheduling of periodic, real-time tasks", Performance Evaluation, 2(4):237-250, 1982.
[9] R. Naik and R. R. Manthalkar, "Instantaneous Utilization Based Scheduling Algorithms for Real Time Systems", International Journal of Computer Science and Information Technologies (IJCSIT), 2(2):654-662, 2011.
[10] N. Goel and R. B. Garg, "A Comparative Study of CPU Scheduling Algorithms", International Journal of Graphics & Image Processing, 2(4), November 2012.
[11] P. Singh, V. Singh, and A. Pandey, "Analysis and Comparison of CPU Scheduling Algorithms", International Journal of Emerging Technology and Advanced Engineering, 4(1), January 2014.
[12] G. C. Buttazzo, "Rate monotonic vs. EDF: judgment day", Real-Time Systems, 29(1):5-26, 2005.