Proceedings of National Conference on Recent Trends in Parallel Computing (RTPC - 2014), November 1-2, 2014, Mangalayatan University, Aligarh, India.
A Comparative Review of CPU Scheduling Algorithms

Mohd Shoaib
Faculty of Computer Science & Engineering, Shri Ramswaroop Memorial University, Uttar Pradesh - 225053, India.
[email protected]

Mohd Zeeshan Farooqui
Faculty of Computer Science & Engineering, Shri Ramswaroop Memorial University, Uttar Pradesh - 225053, India.
[email protected]
Abstract— A number of programs can reside in memory at the same time, which allows overlap of CPU and I/O. CPU scheduling deals with the problem of choosing a process from the ready queue to be executed by the CPU. It is difficult and time consuming to develop a CPU scheduling algorithm and to understand its impact, because of the need to modify and test the operating system and measure the performance of real applications. As the processor is the most important resource, CPU scheduling is central to accomplishing the operating system (OS) design goals. The goal of CPU scheduling is to minimize the average turnaround time and the average waiting time, so that as many processes as possible can run at all times and the best use is made of the CPU. This paper attempts to summarize the major CPU scheduling algorithms proposed till date. We look at algorithms such as FCFS, SJF, SRTF, Round Robin, Priority scheduling, HRRN and LJF.

Keywords—Scheduler; Dispatcher; FCFS; SJF; SRTF; Round Robin; Priority Scheduling; HRRN; LJF
I. INTRODUCTION
CPU scheduling is the basis of multiprogrammed operating systems. The aim of multiprogramming is to maximize CPU utilization by keeping some process running at all times. In a uniprocessor system only one process may run at a time; any other process must wait until the CPU is free. Scheduling is a primary operating system function, and almost all computer resources are scheduled before use. Thus, as the CPU is one of the primary computer resources, its scheduling is central to operating system design [1]. An OS may feature different types of schedulers: the long term scheduler (also known as the job scheduler), the mid-term or medium-term scheduler, and the short-term scheduler (also known as the dispatcher or CPU scheduler) [2]. Figure 1 shows scheduling and the process state transitions. This paper outlines the major scheduling algorithms along with their relative pros and cons, covering FCFS, SJF, SRTF, Round Robin, Priority scheduling, HRRN and LJF.

A. Long Term Scheduler
The long term scheduler selects which programs are to be admitted to the system for execution and when, and also which ones should exit. The long term scheduler manages the degree of multiprogramming in multitasking systems. It follows certain policies to decide which task will be selected if more than one task is submitted, and whether the system can accept a new task submission. The compromise between the degree of multiprogramming and throughput is evident when all processes compete for a fair share of the CPU: the more processes there are, the less CPU time each of them gets for execution.
B. Medium Term Scheduler
The medium term scheduler determines when a process is to be suspended or resumed; the work done by the medium term scheduler is called swapping. Medium term scheduling primarily deals with memory management, hence it is very often designed as part of the memory management subsystem of an OS.
Figure 1. Scheduling and Process State Transition
C. Short Term Scheduler
The short-term scheduler (also known as the CPU scheduler) selects a process to allocate to the CPU from the processes in memory that are ready to execute. CPU scheduling decisions may take place when a process:
1. Switches from the running state to the waiting state (e.g., an I/O request).
2. Switches from the running state to the ready state (e.g., an interrupt occurs).
3. Switches from the waiting state to the ready state (e.g., completion of I/O).
4. Terminates.
Scheduling under conditions 1 and 4 is non-preemptive; otherwise scheduling is pre-emptive. In non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU and does not release it until it either terminates or switches to the waiting state.

D. Dispatcher
The dispatcher is responsible for saving the context of one process and loading the context of another. It is the module that gives the process selected by the short-term scheduler control of the CPU; context switching is done by the dispatcher. The dispatcher should be as fast as possible, as it is invoked during every process switch. The time taken by the dispatcher to stop one process and start another is known as the dispatch latency [1].
II. SCHEDULING CRITERIA
There are different scheduling algorithms, and the performance of each can be judged on various criteria. Different algorithms may favour different types of processes. Some criteria are as follows:
CPU Utilization: The fraction of time the CPU remains busy; the aim is to keep the CPU as busy as possible.
Throughput: Number of processes that complete their execution per unit time.
Burst Time: Amount of CPU time required by a process for its execution.
Completion Time: The time at which a process completes its execution.
Turnaround Time: The total time taken to execute a particular process, from its arrival to its completion. It is given by:

Turnaround Time = Completion Time - Arrival Time    (1)

Waiting Time: Amount of time a process spends waiting in the ready queue. It is given by:

Waiting Time = Turnaround Time - Burst Time    (2)

Response Time: Amount of time from the submission of a request until the first response is produced.

III. OPTIMIZATION CRITERIA
We want to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time and response time. In most cases we optimize the average values; in some situations we want to optimize the minimum or maximum values rather than the average. Some optimization criteria are as follows:
Maximum CPU utilization
Maximum throughput
Minimum turnaround time
Minimum waiting time
Minimum response time

IV. SCHEDULING ALGORITHMS
CPU scheduling deals with the problem of choosing a process from the ready queue to be executed by the CPU. The following CPU scheduling algorithms are described.

A. First Come First Serve (FCFS)
First come first serve (FCFS) is the simplest scheduling algorithm: the process that requests the CPU first is allocated the CPU first. The FCFS policy is implemented with a FIFO queue. The FCFS algorithm is non-preemptive (i.e., once the CPU has been allocated to a process, that process keeps the CPU until it either terminates or requests I/O). FCFS is troublesome for time-sharing systems because, due to its non-preemptive nature, each process does not get a share of the CPU at regular intervals [1][3].

B. Shortest Job First (SJF)
Shortest Job First (SJF), also known as Shortest Job Next (SJN) or Shortest Process Next (SPN), is a non-preemptive scheduling policy that chooses the waiting process with the smallest execution time to execute next [2]. Shortest job first is advantageous because of its simplicity and because it minimizes the average amount of time each process has to wait until its execution is complete [4].
However, the disadvantage of SJF is that if short processes are continually added, it starves processes that require a long time to complete. Another disadvantage of SJF is that the total execution time of a job must be known before execution, which is generally not possible [1].

C. Shortest Remaining Time First (SRTF)
Shortest Remaining Time First (SRTF), also known as shortest remaining time, is the preemptive version of the Shortest Job First scheduling algorithm. In this algorithm, the process with the minimum burst time remaining until completion is selected to execute. A process continues to execute until it completes or a new process arrives that requires a smaller amount of burst time. The advantages of SRTF are, firstly, that short processes are handled very quickly: whenever a new process arrives, only the remaining burst time of the currently executing process is compared with that of the new process, ignoring all other processes currently waiting to execute. Secondly, switching is infrequent, since a scheduling decision is made only when a new process arrives or a process completes execution. Similar to SJF, SRTF has the potential for process starvation; this threat can be minimal when process times follow a heavy-tailed distribution [5]. SRTF scheduling is rarely used outside of specialized environments because it requires accurate estimates of the runtime of all processes that are waiting to execute.

D. Round Robin (RR)
Round Robin scheduling is a method of CPU time sharing. Each process is given a certain amount of CPU time (a time slice or time quantum), and if it has not finished by the end of the time slice, the process is moved to the back of the process queue and the next process in line is given the CPU [6][7]. A common variant of Round Robin allows a process to give up the remainder of its time slice when it does not need it, either because it is waiting for a particular event or because it has completed. Round Robin is similar to FCFS, but preemption is added to switch between processes [8][9], as shown in Figure 2. If the time slice/quantum is too short, too much process switching takes place and overall execution slows down; if it is too long, the system may become unresponsive, time is wasted, and the behaviour degenerates into First Come First Served. Many varied versions of the Round Robin algorithm are available which provide better results [10][11][12][13].
Figure 2. Pictorial representation of Round Robin scheduling
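To make the time-slice rotation concrete, the following minimal sketch (in Python, for illustration only; the helper name round_robin is ours) simulates the Round Robin dispatch loop with a simple FIFO queue. The process data correspond to Table I in Section V with a quantum of 10, and the sketch reproduces the Round Robin schedule shown there.

from collections import deque

def round_robin(processes, quantum):
    # processes: list of (name, arrival_time, burst_time), sorted by arrival time.
    # Returns (name, start, end) slices, i.e. a textual Gantt chart.
    remaining = {name: burst for name, _, burst in processes}
    arrivals = deque(processes)   # processes not yet admitted to the ready queue
    ready = deque()               # ready queue (FIFO)
    time, chart = 0, []

    def admit(now):
        # Move every process that has arrived by 'now' into the ready queue.
        while arrivals and arrivals[0][1] <= now:
            ready.append(arrivals.popleft()[0])

    admit(time)
    while ready or arrivals:
        if not ready:             # CPU idle until the next arrival
            time = arrivals[0][1]
            admit(time)
            continue
        name = ready.popleft()
        run = min(quantum, remaining[name])
        chart.append((name, time, time + run))
        time += run
        remaining[name] -= run
        admit(time)               # new arrivals join ahead of the preempted process
        if remaining[name] > 0:
            ready.append(name)    # unfinished: back of the queue
    return chart

# Data set of Table I, quantum = 10 (as used in Section V)
procs = [("P1", 0, 20), ("P2", 1, 10), ("P3", 2, 25), ("P4", 3, 15), ("P5", 4, 5)]
print(round_robin(procs, 10))
# [('P1', 0, 10), ('P2', 10, 20), ('P3', 20, 30), ('P4', 30, 40), ('P5', 40, 45),
#  ('P1', 45, 55), ('P3', 55, 65), ('P4', 65, 70), ('P3', 70, 75)]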
E. Priority Scheduling
In priority scheduling a priority number (an integer) is associated with each process and the CPU is allotted to the highest priority process, as shown in Figure 3. If the priorities of two or more processes are equal, they are scheduled in FCFS order. Priority scheduling can be either pre-emptive or non-preemptive [14].
A pre-emptive priority scheduling algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
A non-preemptive priority scheduling algorithm will simply put the new process at the head of the ready queue.
Figure 3. A scheduling algorithm with four priority classes
A major problem with priority scheduling algorithms is indefinite blocking, or starvation. A process that is ready to run but kept waiting for the CPU because of its low priority is said to be blocked, and in a heavily loaded computer system a steady stream of higher priority processes can prevent a low priority process from ever getting the CPU [14].

F. Highest Response Ratio Next (HRRN)
Highest Response Ratio Next (HRRN) scheduling [6] is a non-preemptive algorithm, similar to Shortest Job Next (SJN), in which the priority of each job depends on its estimated run time and also on the amount of time it has spent waiting. The longer a process waits, the higher its priority becomes, which prevents indefinite postponement (process starvation). The priority of a process is calculated as:

Response Ratio = 1 + (Waiting Time / Estimated Burst Time)    (3)
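As a small worked illustration of equation (3), the snippet below (Python, for illustration only) computes the response ratios of the waiting processes of Table I at time t = 20, i.e. just after P1 finishes in the HRRN comparison of Section V, and shows that P5 is dispatched next.

# Response ratios per equation (3) at t = 20; arrival and burst times from Table I.
t = 20
waiting_jobs = {"P2": (1, 10), "P3": (2, 25), "P4": (3, 15), "P5": (4, 5)}  # name: (arrival, burst)

ratios = {name: 1 + (t - arrival) / burst
          for name, (arrival, burst) in waiting_jobs.items()}
print(ratios)                       # P2: 2.9, P3: 1.72, P4: ~2.13, P5: 4.2
print(max(ratios, key=ratios.get))  # 'P5' has the highest ratio and runs next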
The Highest Response Ratio Next scheduling algorithm was developed to correct certain weaknesses of Shortest Job Next (SJN)/Shortest Job First (SJF), including the difficulty of estimating the runtime.

G. Longest Job First (LJF)
Longest Job First is a non-preemptive scheduling policy that selects the waiting process with the largest execution time to execute first. LJF behaves in the opposite way to SJF: while running the shortest job first is believed to reduce the response time, processing the longest job first on the fastest resource minimizes the makespan [15]. However, LJF suffers from a slight increase in response time.

V. COMPARISON OF SCHEDULING POLICIES
We assume that we have five processes, P1 to P5, as shown in Table I. We compare the results of the discussed algorithms over the data set given in Table I.
TABLE I. SCHEDULING DATA SET

Process | Arrival Time (ms) | Burst Time (ms) | Priority
P1      | 0                 | 20              | 1
P2      | 1                 | 10              | 2
P3      | 2                 | 25              | 2
P4      | 3                 | 15              | 4
P5      | 4                 | 5               | 3
The following assumptions are made: the jobs run to completion; no new jobs arrive until these jobs are processed; the time required by each job is known in advance; and there is no suspension for I/O during the run of the jobs. The processing of the jobs is shown by a Gantt chart, from which the average waiting time and the average turnaround time are calculated. The calculated average turnaround times and average waiting times are shown in Table II. The assumed modes of operation for the comparison of all algorithms are given in Table III. In the charts below, each process name appears between its start and end times (in milliseconds).

a) First Come First Serve (FCFS)
0 --P1-- 20 --P2-- 30 --P3-- 55 --P4-- 70 --P5-- 75
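As a quick check of equations (1) and (2), the sketch below (Python, illustration only) computes the average turnaround and waiting times for the FCFS chart above; the arrival and burst times are those of Table I, and the completion times are read off the chart. The results match the FCFS row of Table II.

# Turnaround and waiting times for the FCFS schedule, per equations (1) and (2).
arrival    = {"P1": 0,  "P2": 1,  "P3": 2,  "P4": 3,  "P5": 4}
burst      = {"P1": 20, "P2": 10, "P3": 25, "P4": 15, "P5": 5}
completion = {"P1": 20, "P2": 30, "P3": 55, "P4": 70, "P5": 75}   # from the chart above

turnaround = {p: completion[p] - arrival[p] for p in arrival}      # equation (1)
waiting    = {p: turnaround[p] - burst[p] for p in arrival}        # equation (2)

print(sum(turnaround.values()) / len(turnaround))   # 48.0 ms, as in Table II
print(sum(waiting.values()) / len(waiting))         # 33.0 ms, as in Table II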
b) Shortest Job First (SJF)
0 --P1-- 20 --P5-- 25 --P2-- 35 --P4-- 50 --P3-- 75
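The non-preemptive policies compared here differ only in the rule used to pick the next job whenever the CPU becomes free. The sketch below (Python; the generic dispatcher and its key parameter are our illustration, not part of the original comparison) captures this: with the burst time as the key it reproduces the SJF chart above, and with the negated burst time it reproduces the LJF chart in g).

def non_preemptive(processes, key):
    # Generic non-preemptive dispatcher: whenever the CPU is free, run to
    # completion the arrived process that minimizes key(process).
    # processes: list of (name, arrival_time, burst_time).
    pending = list(processes)
    time, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= time]
        if not ready:                       # CPU idle until the next arrival
            time = min(p[1] for p in pending)
            continue
        job = min(ready, key=key)
        name, _, burst = job
        order.append((name, time, time + burst))
        time += burst
        pending.remove(job)
    return order

procs = [("P1", 0, 20), ("P2", 1, 10), ("P3", 2, 25), ("P4", 3, 15), ("P5", 4, 5)]
print(non_preemptive(procs, key=lambda p: p[2]))    # SJF: smallest burst first
# [('P1', 0, 20), ('P5', 20, 25), ('P2', 25, 35), ('P4', 35, 50), ('P3', 50, 75)]
print(non_preemptive(procs, key=lambda p: -p[2]))   # LJF: largest burst first
# [('P1', 0, 20), ('P3', 20, 45), ('P4', 45, 60), ('P2', 60, 70), ('P5', 70, 75)]

FCFS (key on arrival time) and non-preemptive priority scheduling (key on the priority value) fit the same pattern.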
c) Shortest Remaining Time First (SRTF)
0 --P1-- 1 --P2-- 4 --P5-- 9 --P2-- 16 --P4-- 31 --P1-- 50 --P3-- 75
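The preemptive slices above can be reproduced with a small unit-time simulation. The following sketch (Python, illustration only; process data from Table I) re-evaluates the remaining burst times after every time unit.

def srtf(processes):
    # Preemptive SRTF: at every time unit, run the arrived process with the
    # least remaining burst time (ties broken by arrival order).
    # processes: list of (name, arrival_time, burst_time).
    remaining = {name: burst for name, _, burst in processes}
    arrival = {name: at for name, at, _ in processes}
    time, slices = 0, []
    while any(remaining.values()):
        ready = [n for n in remaining if remaining[n] > 0 and arrival[n] <= time]
        if not ready:                        # CPU idle until a process arrives
            time += 1
            continue
        current = min(ready, key=lambda n: (remaining[n], arrival[n]))
        if slices and slices[-1][0] == current and slices[-1][2] == time:
            slices[-1] = (current, slices[-1][1], time + 1)   # extend the running slice
        else:
            slices.append((current, time, time + 1))
        remaining[current] -= 1
        time += 1
    return slices

procs = [("P1", 0, 20), ("P2", 1, 10), ("P3", 2, 25), ("P4", 3, 15), ("P5", 4, 5)]
print(srtf(procs))
# [('P1', 0, 1), ('P2', 1, 4), ('P5', 4, 9), ('P2', 9, 16),
#  ('P4', 16, 31), ('P1', 31, 50), ('P3', 50, 75)]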
d) Round Robin (Time Slice = 10)
0 --P1-- 10 --P2-- 20 --P3-- 30 --P4-- 40 --P5-- 45 --P1-- 55 --P3-- 65 --P4-- 70 --P3-- 75
e) Priority Scheduling
Process P4 with priority 4 has the highest priority, while process P1 with priority 1 has the lowest.
Preemptive Scheduling
0 --P1-- 1 --P2-- 3 --P4-- 18 --P5-- 23 --P2-- 31 --P3-- 56 --P1-- 75
Non-Preemptive Scheduling
0 --P1-- 20 --P4-- 35 --P5-- 40 --P2-- 50 --P3-- 75
f) Highest Response Ratio Next (HRRN)
0 --P1-- 20 --P5-- 25 --P2-- 35 --P4-- 50 --P3-- 75
g) Longest Job First (LJF)
0 --P1-- 20 --P3-- 45 --P4-- 60 --P2-- 70 --P5-- 75
VI. SUMMARY OF CPU SCHEDULING ALGORITHMS
We have now looked at a variety of scheduling algorithms. Figure 4 compares the results over the data set. It clearly shows that, for the given data set, Shortest Remaining Time First (SRTF) takes the least time to completely execute the processes and has approximately the lowest average waiting time, while Longest Job First (LJF) is the slowest in performance. Table III lists the scheduling algorithms along with their modes of operation and the criteria that need to be known for scheduling.

TABLE II. AVERAGE TURNAROUND TIME AND AVERAGE WAITING TIME
Algorithm                            | Average Turnaround Time (ms) | Average Waiting Time (ms)
FCFS                                 | 48                           | 33
SJF                                  | 39                           | 24
SRTF                                 | 34.6                         | 19.6
Round Robin                          | 51                           | 36
Priority Scheduling (preemptive)     | 38.6                         | 23.6
Priority Scheduling (non-preemptive) | 42                           | 27
HRRN                                 | 39                           | 24
LJF                                  | 52                           | 37
Figure 4. Graph showing average turnaround time and average waiting time

TABLE III. SUMMARY OF SCHEDULING ALGORITHMS

Algorithm      | Mode                        | Criteria                  | Starvation
FCFS           | Non-preemptive              | Arrival Time              | No
SJF            | Non-preemptive              | Burst Time                | Yes
SRTF           | Preemptive                  | Burst Time                | Yes
Priority Based | Non-preemptive / Preemptive | Priority                  | Yes
LJF            | Non-preemptive              | Burst Time                | Yes
Round Robin    | Preemptive                  | Time quantum / Time slice | No
HRRN           | Non-preemptive              | Response Ratio            | No
REFERENCES
[1] A. Silberschatz, P. B. Galvin and G. Gagne, Operating System Concepts, 7th ed., Wiley, 2005, ISBN 0-471-69466-5.
[2] A. S. Tanenbaum, Modern Operating Systems, 3rd ed., Pearson Education, 2008, ISBN 0-13-600663-9.
[3] J. Patel and A. K. Solanki, "CPU Scheduling: A Comparative Study", Proceedings of the 5th National Conference, INDIACom, Computing for Nation Development, March 10-11, 2011.
[4] S. Lupetti and D. Zagorodnov, "Data popularity and shortest-job-first scheduling of network transfers", Proceedings of the IEEE International Conference on Digital Telecommunications, 2006, pp. 26-26.
[5] M. Harchol-Balter, B. Schroeder, N. Bansal and M. Agrawal, "Size-Based Scheduling to Improve Web Performance", ACM Transactions on Computer Systems, Vol. 21, Issue 2, 2003, pp. 207-233.
[6] W. Stallings, Operating Systems: Internals and Design Principles, 4th ed., Prentice-Hall, 2001, ISBN 0-13-031999-6.
[7] R. J. Matarneh, "Self-Adjustment Time Quantum in Round Robin Algorithm Depending on Burst Time of the Now Running Processes", American Journal of Applied Sciences, Vol. 6, No. 10, 2009, pp. 1831.
[8] A. Bashir, M. N. Doja and R. Biswas, "Finding Time Quantum of Round Robin CPU Scheduling Algorithm Using Fuzzy Logic", The International Conference on Computer and Electrical Engineering (ICCEE), 2008.
[9] R. Mohanty, H. S. Behera and D. Nayak, "A New Proposed Dynamic Quantum with Re-Adjusted Round Robin Scheduling Algorithm and Its Performance Analysis", International Journal of Computer Applications (0975-8887), Vol. 5, No. 5, August 2010.
[10] A. Noon, A. Kalakech and S. Kadry, "A New Round Robin Based Scheduling Algorithm for Operating Systems: Dynamic Quantum Using the Mean Average", International Journal of Computer Science Issues, Vol. 8, Issue 3, No. 1, May 2011.
[11] R. Mohanty, H. S. Behera, K. Patwari and M. R. Das, "Design and Performance Evaluation of a New Proposed Shortest Remaining Burst Round Robin (SRBRR) Scheduling Algorithm", Proceedings of the International Symposium on Computer Engineering & Technology (ISCET), Vol. 17, 2010.
[12] S. M. Mostafa, S. Z. Rida and S. H. Hamad, "Finding Time Quantum of Round Robin CPU Scheduling Algorithm in General Computing Systems Using Integer Programming", International Journal of Research and Reviews in Applied Sciences (IJRRAS), Vol. 5, Issue 1, 2010.
[13] R. K. Yadav, A. K. Mishra, N. Prakash and H. Sharma, "An Improved Round Robin Scheduling Algorithm for CPU Scheduling", International Journal on Computer Science and Engineering, Vol. 02, No. 04, 2010, pp. 1064-1066.
[14] Cankaya University, Turkey, http://siber.cankaya.edu.tr/OperatingSystems/ceng328/node124.html
[15] Z. R. M. Azmi, K. Abu Bakar, A. H. Abdullah, M. S. Shamsir and W. N. W. Manan, "Performance Comparison of Priority Rule Scheduling Algorithms Using Different Inter Arrival Time Jobs in Grid Environment", International Journal of Grid and Distributed Computing, Vol. 4, No. 3, September 2011.