Workflow Scheduling on Distributed Systems

Maslina Abdul Aziz, Jemal Abawajy
School of Information Technology, Deakin University, Australia
{mabdula, jemal.abawajy}@deakin.edu.au
Rafiqul Islam School of Computing and Mathematics, Charles Sturt University, Australia.
[email protected]
Tutut Herawan Department of Information Systems, University of Malaya, Malaysia
[email protected]
Abstract— Growing evidence shows that well-managed, time-constrained workflow scheduling is needed to obtain high performance. Efficient workflow scheduling is critical for achieving high performance, especially in heterogeneous computing systems. However, it is a great challenge to improve performance while optimizing several objectives simultaneously. We propose a workflow scheduling algorithm that minimizes the makespan of a workflow application modeled by a Directed Acyclic Graph (DAG). The proposed scheduling algorithm is named the Multi Dependency Joint (MDJ) Algorithm. The performance of MDJ is compared with existing algorithms such as Highest Level First with Estimated Time (HLFET), Modified Critical Path (MCP) and Earliest Time First (ETF). The experiments show that our proposed MDJ algorithm outperforms HLFET, MCP and ETF with a 7% lower overall completion time.

Keywords— distributed systems; performance; workflow; scheduling;
I. INTRODUCTION
There are increasing demands for processing large amounts of data in real-time tasks while reducing the cost of computational resources. Because of the complexity of the operations involved, processes and tasks have multiple interdependence relationships. The Multi Dependency Joint (MDJ) Algorithm is a list-scheduling-based algorithm that produces schedules for distributed systems. The distributed system is modeled as a set R of identical, fully connected resources. A directed acyclic graph (DAG) is used to represent the tasks and their parent/child dependencies. Each task has its own execution time, priority and deadline constraint, and is associated with other tasks in the workflow. The aim of MDJ is to finish the execution of an application's tasks before the given deadline. The performance of the algorithm is compared fairly against other list scheduling algorithms from the existing literature [1-3]. For the experimental setup, we construct a network of processors with identical capabilities and speed. The result of this paper is a comparison of the performance of the proposed scheduling algorithm with the following scheduling algorithms: Highest Level First with Estimated Time (HLFET), Modified Critical Path (MCP) and Earliest Time First (ETF). The experimental results show that MDJ produces schedules with makespans that are significantly better than the others.
The remainder of this paper is organized as follows. The next section reviews the relevant literature on list scheduling algorithms. Section 3 introduces the theoretical background in detail. Section 4 presents the proposed algorithm. The experimental results are presented in Section 5 and conclusions are drawn in Section 6.
II. RELATED WORK
There are two broad families of multi-objective optimization approaches for workflow scheduling: task (list) scheduling and Genetic Algorithms (GA). A task scheduling algorithm essentially places tasks in queues based on a pre-defined priority. It is commonly used to schedule parallel tasks, especially in static scheduling heuristics. GA is a meta-heuristic that provides more flexible and dynamic solutions; however, it has a high time complexity and is best suited for large systems. Finding and selecting the best solution from the set of Pareto optimal solutions remains an open problem. Workflow scheduling problems can be divided into two classes: static and dynamic. A static algorithm has a few defining characteristics: the schedule is computed before execution, with constant task computation and communication times, and the data are usually known upfront before program execution. Many existing static scheduling algorithms have been used to solve various optimization problems (Table I). All these methods are bounded by a set of objective constraints: single-objective, bi-objective or multi-objective. Some authors propose algorithms that minimize the overall budget, energy consumption or availability of the application. The algorithms are customized and analyzed for four objectives: makespan (M), cost (C), energy (E) and reliability (R).
TABLE I. STATIC SCHEDULING

Authors | Algorithm
C.Y. Yang et al. (2009) [2] | An Approximation Scheme for Energy-Efficient Scheduling of Real-Time Tasks in Heterogeneous Multiprocessor Systems
L.Y. Tseng et al. (2009) [3] | The anatomy study of high performance task scheduling algorithm for Grid computing system
O. Sonmez et al. (2010) [4] | Performance Analysis of Dynamic Workflow Scheduling in Multicluster Grids
R. M. Bahati et al. (2011) [5] | A novel deadline and budget constrained scheduling heuristics for computational grids
Z. Zong et al. (2011) [6] | EAD and PEBD: Two Energy-Aware Duplication Scheduling Algorithms for Parallel Tasks on Homogeneous Clusters
S. Abrishami et al. (2012) [7] | Cost-Driven Scheduling of Grid Workflows Using Partial Critical Paths
A. Beloglazov et al. (2012) [8] | Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing
Q. Huang et al. (2012) [9] | Enhanced Energy-Efficient Scheduling for Parallel Applications in Cloud
Y. Ma et al. (2012) [10] | Energy-efficient deadline scheduling for heterogeneous systems
L. Wang et al. (2013) [11] | Energy-aware parallel task scheduling in a cluster
C.M. Wu et al. (2013) [12] | A green energy-efficient scheduling algorithm using the DVFS technique for cloud datacenters
J.J. Durillo et al. (2013) [13] | Multi-objective Workflow Scheduling: An Analysis of the Energy Efficiency and Makespan Tradeoff
X. Zhu et al. (2014) [14] | Adaptive energy-efficient scheduling for real-time tasks on DVS-enabled heterogeneous clusters
Meanwhile, a dynamic algorithm allows changes to be made at execution time. It has the flexibility of scheduling all the tasks at runtime. Dynamic algorithms have proven effective, as shown by the body of research of the last few decades. Among the best-known meta-heuristic, bio-inspired algorithms used for multi-objective optimization are Ant Colony Optimization (ACO), Genetic Algorithms (GA) and Particle Swarm Optimization (PSO). The studies in Table II apply dynamic algorithms to different scheduling applications to solve task-scheduling problems by finding optimal solutions for makespan (M), cost (C), energy (E) and reliability (R).

TABLE II. DYNAMIC SCHEDULING

Authors | Algorithm
H. Kim et al. (2011) [15] | Communication-aware task scheduling and voltage selection for total energy minimization in a multiprocessor system using Ant Colony Optimization
J. Kołodziej et al. (2011) [16] | Genetic Algorithms for Energy-Aware Scheduling in Computational Grids
D. Dutta et al. (2011) [17] | A Genetic-Algorithm Approach to Cost-Based Multi-QoS Job Scheduling in Cloud Computing Environment
J. Kołodziej et al. (2011) [18] | Genetic Algorithms for Energy-Aware Scheduling in Computational Grids
S. Pandey et al. (2012) [19] | A Particle Swarm Optimization-Based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments
Y. Xu et al. (2012) [20] | A Multiple Priority Queueing Genetic Algorithm for Task Scheduling on Heterogeneous Computing Systems
W. N. Chen et al. (2012) [21] | An Ant Colony Optimization Approach to a Grid Workflow Scheduling Problem With Various QoS Requirements
S. Kardani-Moghaddam et al. (2012) [22] | A Hybrid Genetic Algorithm and Variable Neighborhood Search for Task Scheduling Problem in Grid Environment
Bayi et al. (2013) [23] | An improved ant colony optimization for scheduling identical parallel batching machines with arbitrary job sizes
H. Khajemohammadi et al. (2013) [24] | Fast Workflow Scheduling for Grid Computing Based on a Multi-objective Genetic Algorithm
S. Yassa et al. (2013) [25] | Multi-objective approach for energy-aware workflow scheduling in cloud computing environments
Considering the negative correlation of these objectives (makespan, cost, energy and reliability), the use of GA has proven beneficial. Genetic algorithms provide more adaptable solutions, are able to escape local optima, are nature-inspired, and require no complicated mathematical formulation.

A. Comparison with list scheduling heuristics

The Highest Level First with Estimated Times (HLFET) algorithm is a list scheduling technique that prioritizes tasks based on the length of the longest path from each task in the workflow [26-28]. It starts by navigating downwards from the entry task and summing up the costs of all tasks along the path; this value is known as the static level (SL). Once the SL of each task has been computed, with its predecessors scheduled, a list is generated. The tasks are arranged in descending order, from the largest SL value to the smallest. The first task in the list is scheduled to the first available processor, and this process continues, with the list updated, until all tasks are scheduled. A main characteristic of HLFET is that it is the simplest list scheduling technique because it uses a constant SL value. However, as the complexity of workflow scheduling increases, this method may no longer be applicable. HLFET does not consider time as one of the factors used to prioritize the tasks, and the tasks' communication costs are ignored. When a task is scheduled to the first available processor, gaps appear between the scheduled tasks due to their different sizes. Since HLFET is a non-insertion based method, these gaps are ignored, causing long periods of idleness. When a resource is not fully utilized, the overall performance suffers.

Modified Critical Path (MCP) is one of the most frequently used list scheduling techniques; it calculates the longest path starting from the entry level, also known as the Critical Path (CP) [28-31]. This method sets the priority of a task based on the maximum time a node can be delayed, its As Late As Possible (ALAP) time, also referred to as the Latest Start Time (LST). Once the ALAP values of all tasks are computed, a list is generated and sorted in ascending order, from the smallest value to the largest. If there are tasks with the same ALAP value, the tie-breaking rule is to look at the ALAP values of their children. The nodes are scheduled to the first available processor following the prioritized list. MCP allows an insertion approach, which diminishes idle time by identifying the gaps between tasks. However, MCP has a few drawbacks too: identifying critical paths is not easy, there may be multiple critical paths, and not all tasks are part of a critical path.

The Earliest Time First (ETF) algorithm looks at the earliest start time of all tasks and selects the one with the lowest value [32-36]. This is done by computing the computation costs of all tasks and the communication costs of all edges, after which the tasks are ranked. The navigation process starts upwards from the exit task and is repeated until all tasks are scheduled. The Earliest Start Time (EST) of each node is computed. If two tasks have the same EST, the task with the higher slack time is given higher priority. However, the task with the higher SL is not necessarily scheduled first, since the main priority is the EST of each task.
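To make the SL metric concrete, the sketch below (hypothetical Python; the task set, costs and function names are ours, not taken from [26-28]) computes one common form of the static level and the resulting HLFET-style priority order. MCP and ETF follow the same pattern but rank by ALAP and EST values instead.

```python
# A rough sketch (not from the cited papers) of the static-level (SL) metric that
# HLFET uses to rank tasks: SL(t) = comp(t) + max SL over t's children, so an
# exit task's SL is just its own computation cost. Task names, costs and edges
# below are made-up illustration data.
from functools import lru_cache

comp = {"A": 4, "B": 3, "C": 2, "D": 5}                        # computation costs
children = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}  # DAG edges

@lru_cache(maxsize=None)
def static_level(task):
    succ = children[task]
    if not succ:                       # exit task
        return comp[task]
    return comp[task] + max(static_level(c) for c in succ)

# HLFET-style priority: schedule tasks in descending SL order.
priority = sorted(comp, key=static_level, reverse=True)
print([(t, static_level(t)) for t in priority])   # [('A', 12), ('B', 8), ('C', 7), ('D', 5)]
```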
III. THEORETICAL BACKGROUND
In this section we provide the basic definitions used in the rest of the paper. The aim is to minimize the makespan and therefore increase the overall performance. This is done by developing a scheduling algorithm that minimizes the total execution time of a workflow on a set of resources while satisfying a user-defined deadline. The scheduling challenge is to find an assignment of the tasks to the hosts such that the makespan is minimized.

TABLE III. BASIC DEFINITIONS

Definition | Symbol
Workflow | W
Task | t_i
Deadline | D
Resources | R = {r_1, ..., r_|R|}
Data transferred | data(t_p, t_i)
Data flow dependencies | E
Makespan | M
Execution time | ET(t_i, r_j)
A. Makespan

The proposed algorithm is modeled for homogeneous processors. Resource homogeneity is the simplest setup, with no inter-processor communication overheads and a fixed set of processors. The makespan of the schedule is the actual time at which all tasks have completed, i.e., the finish time of the exit task. The aim of the MDJ algorithm is to improve the performance of the scheduling algorithm by reducing the task execution time.

M = \max_{t_i \in W} CT(t_i)    (1)

where CT(t_i) is the number of time units it takes to complete task t_i and is given as follows:

CT(t_i) = ET(t_i, r_j) + \max_{t_p \in parent(t_i)} \left( CT(t_p) + \frac{data(t_p, t_i)}{B} \right)    (2)

where ET(t_i, r_j) is the execution time of task t_i on resource r_j and the second term is the completion time of the parent tasks. The parameter data(t_p, t_i) is the number of bytes transferred from t_p to t_i and B is the available bandwidth. Eq. (2) states that the completion time CT(t_i) is equal to the sum of the completion time of the task's execution on its resource and the completion time of its parent task. If t_i has more than one parent task, the parent task with the largest completion time is selected. Let x_{ij} be a binary variable indicating whether task t_i is assigned to resource r_j. The scheduling objective can be formulated as follows:

\text{minimize } M    (3)

\text{subject to } \sum_{j=1}^{|R|} x_{ij} = 1 \quad \forall t_i \in W    (4)

CT(t_i) \le D \quad \forall t_i \in W    (5)

x_{ij} \in \{0, 1\} \quad \forall t_i \in W, \ r_j \in R    (6)

The constraint in Eq. (4) ensures that each task is scheduled on exactly one of the resources and the constraint in Eq. (5) ensures that each resource executes its tasks without exceeding the deadline D.
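A minimal sketch of the completion-time recursion of Eqs. (1)-(2), assuming each task has already been assigned to a resource so that a single execution time per task suffices; the input values and names below are illustrative only.

```python
# Illustrative evaluation of CT(t_i) and the makespan, following Eqs. (1)-(2).
# ET[t] is the execution time of t on its assigned resource, data[(p, t)] the
# bytes sent from parent p to t, and B the available bandwidth (made-up values).
ET = {"t1": 20, "t2": 20, "t3": 15}
parents = {"t1": [], "t2": ["t1"], "t3": ["t1", "t2"]}
data = {("t1", "t2"): 8, ("t1", "t3"): 4, ("t2", "t3"): 2}
B = 1.0

CT = {}
def completion_time(t):
    if t not in CT:
        parent_term = max((completion_time(p) + data[(p, t)] / B
                           for p in parents[t]), default=0.0)   # Eq. (2), second term
        CT[t] = ET[t] + parent_term
    return CT[t]

makespan = max(completion_time(t) for t in ET)                  # Eq. (1)
print(CT, makespan)   # CT(t3) = 65.0 dominates, so the makespan is 65.0
```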
IV. THE PROPOSED ALGORITHM
In this section we introduce the proposed scheduling algorithm and compare it with Highest Level First with Estimated Time (HLFET), Modified Critical Path (MCP) and Earliest Time First (ETF). We use the makespan as the parameter for comparing the performance of the algorithms.

A. Task Prioritizing

As shown in Figure 1, the application is represented by a directed acyclic graph (DAG). Nodes represent tasks and edges represent data dependencies. Each task has a computation cost and each edge has a communication cost between tasks. In the workflow graph, if there is a directed arc from task t_i to task t_j, we say that t_i is a parent of t_j and t_j is a child of t_i.
Fig. 1. Example of a workflow with layers (tasks T1-T10 arranged in four layers)
Each workflow has layers and arcs. Each arc represents a precedence constraint, indicating that task t_i must complete its execution before task t_j can start. The example workflow of Figure 1 has four layers. Note that in layer one, a task can have no parent (e.g., T1). In layer two, there are five tasks, each with one parent (e.g., T2). In layer three there are three tasks that have one or more parents (e.g., T7, T8 and T9). We use parent(t_i) to denote the set of parents of task t_i, i.e., the tasks that must complete before t_i can start. Similarly, a task can have no child (e.g., T10 in Figure 1), one child, or more than one child (e.g., T1 in Figure 1). The task with no parent (e.g., T1) is also referred to as the entry task and is denoted t_entry. Similarly, a task that does not have a child (e.g., T10) is the exit task and is denoted t_exit.
TABLE IV. WORKFLOW TASK DEPENDENCY TABLE

Layer | Task | Comp | Comm
1 | T1 | 20 | 0
2 | T2 | 20 | 8
2 | T3 | 20 | 4
2 | T4 | 15 | 2
2 | T5 | 5 | 8
2 | T6 | 5 | 4
3 | T7 | 10 | 8
3 | T8 | 15 | 8
3 | T9 | 20 | 8
4 | T10 | 20 | 8
Based on the DAG above, there are two ways to assign priority marks: FOR_ward scanning (mark 3) and BACK_ward scanning (mark 2). The FOR_ward scanning of a task t_i looks at the parent-child dependencies starting from the entry task t_entry: for each task, the number of immediate children is counted. The FOR_ward scanning is computed recursively across the DAG, moving downward from the entry node, as follows:

INPUT:  the set of tasks T of the workflow
OUTPUT: prioritized task list
BEGIN
  i = 0; R = 0
  1: FOR each incoming task t in T DO
       IF t has a parent THEN
  2:     identify all immediate children of t
         FOR_ward(t) = number of immediate children of t
  3:     label t in the dependency mapping table
       ELSE
         initialize t as the entry task
       END IF
     END FOR
  4: restart the scanning process from the exit node
  5: FOR each task t in T DO
       identify all immediate parents of t
       BACK_ward(t) = number of immediate parents of t
  6:   increment the count of t by 1 and label t in the table
       i = i + 1; R = R + 1
     END FOR
END
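One way to read the scanning procedure above is as two counting passes: a forward pass that records each task's immediate children and a backward pass that records its immediate parents, with the per-task totals later driving the priority list. The sketch below (hypothetical Python on a small made-up DAG, not the exact edges of Figure 1) follows that reading.

```python
# Hypothetical rendering of the FOR_ward / BACK_ward scanning passes.
children = {"T1": ["T2", "T3", "T4"], "T2": ["T5"], "T3": ["T5"],
            "T4": ["T5"], "T5": []}            # made-up example DAG

# Forward pass (from the entry task downwards): count immediate children.
forward = {t: len(c) for t, c in children.items()}

# Backward pass (from the exit task upwards): count immediate parents.
backward = {t: 0 for t in children}
for t, succ in children.items():
    for c in succ:
        backward[c] += 1

# Total marks per task, as recorded in the Task Dependency Mapping Table.
total = {t: forward[t] + backward[t] for t in children}
print(total)   # {'T1': 3, 'T2': 2, 'T3': 2, 'T4': 2, 'T5': 3}
```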
The mapping process starts with the task in layer one. As mentioned above, the task in layer one is also denoted t_entry; this example workflow has a single exit task, denoted t_exit. The same process is applied for the BACK_ward scanning (mark 2): mark (2) is entered in the Task Dependency Mapping Table (Table V) according to the layers and the dependencies in each column, for the respective task in each row. For this pass the mapping process starts with the task in layer four and moves upwards.

Based on Table V, the next step is to total the marks of each node in the dependencies columns. Based on these totals, a new priority list is generated. The new priority list ranks the tasks from the highest total to the lowest, layer by layer.

a. In layer 1, task T1 is ranked first since it is the entry task t_entry.
b. In layer 2 there are five tasks with different numbers of dependencies. T2 and T4 share the highest total of 3, which means the two nodes have the same number of dependencies. As a tie-breaker, the completion time of each node is compared. Since T2 has a higher computation time than T4, T2 is ranked second and T4 third. Meanwhile, the remaining tasks T3, T5 and T6 also have the same total number of dependencies. Again, the computation time of each task is compared as a tie-breaker; since T3 has the highest computation time, it is ranked fourth. As seen in the table, T5 and T6 have the same number of dependencies and the same computation time. To set their priority, the communication times of the two tasks are compared. T5 has the larger communication time, so it is scheduled fifth, ahead of T6.
c. In layer 3 there are three tasks, T7, T8 and T9. Two of these tasks, T8 and T9, have the same number of dependencies. Therefore, by comparing the computation time of each task, the sixth task is T9 and the seventh task is T8. T7 is the eighth task in the list.
d. In layer 4 there is only one task, T10, which is placed last.
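The ranking rule in (a)-(d) (lower layers first, then more dependency marks, with larger computation time and then larger communication time as successive tie-breakers) can be written compactly as a sort. The sketch below uses the values of Tables IV and V; the code itself is our illustration of the rule, not the authors' implementation.

```python
# Layer, dependency-mark totals, computation and communication times (Tables IV-V).
layer = {"T1": 1, "T2": 2, "T3": 2, "T4": 2, "T5": 2, "T6": 2,
         "T7": 3, "T8": 3, "T9": 3, "T10": 4}
total = {"T1": 5, "T2": 3, "T3": 2, "T4": 3, "T5": 2, "T6": 2,
         "T7": 2, "T8": 4, "T9": 4, "T10": 3}
comp  = {"T1": 20, "T2": 20, "T3": 20, "T4": 15, "T5": 5, "T6": 5,
         "T7": 10, "T8": 15, "T9": 20, "T10": 20}
comm  = {"T1": 0, "T2": 8, "T3": 4, "T4": 2, "T5": 8, "T6": 4,
         "T7": 8, "T8": 8, "T9": 8, "T10": 8}

# Lower layer first; within a layer, more marks, then larger comp, then larger comm.
new_list = sorted(layer, key=lambda t: (layer[t], -total[t], -comp[t], -comm[t]))
print(new_list)   # ['T1', 'T2', 'T4', 'T3', 'T5', 'T6', 'T9', 'T8', 'T7', 'T10']
```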
TABLE V. TASK DEPENDENCY MAPPING TABLE

Layer | Task | Comp | Comm | Total marks | New list (rank)
1 | T1 | 20 | 0 | 5 | 1
2 | T2 | 20 | 8 | 3 | 2
2 | T3 | 20 | 4 | 2 | 4
2 | T4 | 15 | 2 | 3 | 3
2 | T5 | 5 | 8 | 2 | 5
2 | T6 | 5 | 4 | 2 | 5
3 | T7 | 10 | 8 | 2 | 8
3 | T8 | 15 | 8 | 4 | 7
3 | T9 | 20 | 8 | 4 | 6
4 | T10 | 20 | 8 | 3 | 9

All tasks in a workflow have the same arrival time. The start task should be at the beginning of the list and the last task at the end of the list; the priority of the list is then determined. The resulting task order is Task Scheduling = {T1, T2, T4, T3, T5, T6, T9, T8, T7, T10}. Based on the given list, the task with the highest priority is scheduled first. This is the traditional method used, for example, by the Heterogeneous Earliest Finish Time (HEFT) algorithm [34, 35]. That algorithm maps tasks to the first available resource without any modification of the list: the resource that finishes earliest gets the next task, and the communication times of all tasks are taken into consideration. However, this method takes more time to complete. Based on the experiment done earlier, a new task scheduling list can be produced by efficiently grouping and packing the tasks according to their interdependence relationships. By reshuffling the task order without violating the task priorities, the execution order of the tasks can be improved. This is possible because tasks can be allocated to the same processor according to their positions in the new list.

B. Processor Selection

The MDJ algorithm then schedules the new task scheduling list to the available resources. There are three resources (R1, R2 and R3), selected based on their readiness. Table VI shows the actual scheduling of the new prioritized list, in a ready state, onto the respective resources.

TABLE VI. TASK RESOURCE MAPPING

Task | Start on R1 | Start on R2 | Start on R3 | Processor
T1 | 0 | 0 | 0 | R1
T2 | 20 | 28 | 28 | R1
T4 | 40 | 22 | 22 | R2
T3 | 40 | 37 | 24 | R3
T5 | 40 | 37 | 44 | R2
T6 | 40 | 42 | 44 | R1
T9 | 45 | 42 | 44 | R2
T8 | 45 | 62 | 46 | R3
T7 | 45 | 62 | 60 | R1
T10 | 55 | 62 | 60 | R1
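The resource-mapping step can be pictured as follows: each task from the new list is placed on the resource that offers the earliest start time, where the start time respects both the resource's availability and the completion of the task's parents, and communication cost is charged only when a parent ran on a different resource (see the next paragraph). The sketch below is a toy version with a made-up DAG and our own variable names, not the authors' exact procedure.

```python
# Toy sketch of the earliest-start-time resource mapping described in Section IV-B.
comp    = {"A": 20, "B": 20, "C": 15, "D": 10}          # made-up task costs
comm    = {"A": 0, "B": 8, "C": 2, "D": 8}              # cost of receiving parent data
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
priority_list = ["A", "B", "C", "D"]
resources = ["R1", "R2", "R3"]

free = {r: 0 for r in resources}                         # when each resource is idle again
placed, finish = {}, {}
for t in priority_list:
    best = None
    for r in resources:
        # A task may start once r is free and every parent's data has arrived;
        # no communication cost if the parent ran on the same resource.
        ready = max((finish[p] + (0 if placed[p] == r else comm[t])
                     for p in parents[t]), default=0)
        start = max(free[r], ready)
        if best is None or start < best[1]:
            best = (r, start)
    r, start = best
    placed[t], finish[t] = r, start + comp[t]
    free[r] = finish[t]

print(placed)                       # {'A': 'R1', 'B': 'R1', 'C': 'R2', 'D': 'R1'}
print(max(finish.values()))         # makespan of this toy schedule: 55
```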
The new prioritized task scheduling list is ready to be executed on the available resources, based on the computation of the time slots to which the tasks are assigned. The resources execute the tasks with the same capability and speed, regardless of the tasks' size and type. Each resource can only execute one task at a time. If two dependent tasks are assigned to the same resource, no communication time is incurred between them. Each task is scheduled to the resource that can finish its execution earliest. The scheduling trace of the new list is given in Table VI: the execution start times of each node on all resources are computed at each step, and the nodes of the list are scheduled one by one to the available resource with the earliest start time.

V. EXPERIMENTAL RESULTS

The proposed scheduling algorithm is named the Multi Dependency Joint (MDJ) Algorithm. The performance of the MDJ algorithm is compared on basic graph structures against the most popular list scheduling algorithms: Highest Level First with Estimated Time (HLFET), Modified Critical Path (MCP) and Earliest Time First (ETF). The performance of these algorithms is measured in terms of the total schedule time and the load balancing on the processors. The total scheduling time for HLFET and ETF is 88 mins, as shown in Figure 2. The MCP schedule, with a makespan of 85 mins, is presented in Figure 3. Meanwhile, Figure 4 shows the result of the MDJ algorithm: the makespan of the workflow application is 82 mins. MDJ performs efficiently in general, with an improvement of 7% over the schedules produced by HLFET, ETF and MCP.
The main idea of MDJ is to differentiate the levels of the workflow and to identify which parent task is delaying its children. By grouping and prioritizing tasks based on the parent-child dependency relationship, the scheduling process can be improved. In other words, packing the tasks in a more effective way improves the performance of the scheduling process. Based on the experiment, a parent task that has multiple immediate children is a critical task. By considering the parent-child dependency relationship, a new scheduling list was established. The new list takes a few important parameters into consideration: task levels, task computation times and task communication times, as detailed in Section 4.

Fig. 2. Highest Level First with Estimated Time (HLFET) and Earliest Time First (ETF) schedule, with a makespan of 88

Fig. 3. Modified Critical Path (MCP) schedule, with a makespan of 85

Fig. 4. MDJ schedule, with a makespan of 82
For this experiment, we calculate the average makespan by simulating the same application on different numbers of resources; the algorithm is evaluated as the number of resources increases. The makespan of a run is derived by calculating the difference between the time at which the entry task is scheduled and the time at which the exit task finishes in the workflow. The average makespan is then calculated by averaging over all the workflows in the system. Figure 5 shows that the MDJ algorithm yields a significant reduction of 15% in the total makespan.
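As a small illustration of the averaging just described (the helper name and all trace values below are made up for the example, not taken from the experiments):

```python
# Makespan of one run: gap between the entry task's start and the exit task's finish.
def makespan(trace):
    entry_start = min(start for start, _ in trace.values())
    exit_finish = max(finish for _, finish in trace.values())
    return exit_finish - entry_start

# Traces per resource count: {task: (start, finish)}; both runs are hypothetical.
runs = {3: [{"T1": (0, 20), "T10": (60, 80)}],
        4: [{"T1": (0, 20), "T10": (48, 68)}]}
average = {n: sum(makespan(t) for t in traces) / len(traces)
           for n, traces in runs.items()}
print(average)   # {3: 80.0, 4: 68.0}
```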
Fig. 5. Makespan reduction

VI. CONCLUSION

In this paper we studied how different ways of prioritizing tasks onto processors impact the performance of the overall system. To measure this performance, we examined the makespan of a workflow application on a given set of resources. Based on the simulation results, we observe that the MDJ algorithm significantly surpasses the other approaches on the chosen performance metric (minimum overall completion time), delivering a 7% makespan improvement. This work represents our initial findings on this scheduling problem. We plan to extend this work in several directions: relating the MDJ algorithm to other parameters such as cost, reliability and energy, and analyzing the effects on performance in a more dynamic environment.

ACKNOWLEDGEMENT

This paper would not have been possible without the assistance and valuable contributions of Ida Normaya Mohd Nasir from the Faculty of Mathematical Sciences, University Technology Mara, Kedah, Malaysia.

REFERENCES

[1] Adam, T., Chandy, K., & Dickson, J. (1974). A comparison of list schedules for parallel processing systems. Communications of the ACM, 17, 685–690.
[2] Yang, C., Chen, J., Kuo, T., & Thiele, L. (2009). An Approximation Scheme for Energy-Efficient Scheduling of Real-Time Tasks in Heterogeneous Multiprocessor Systems, 694–699.
[3] Tseng, L. Y., Chin, Y. H., & Wang, S. C. (2009). The anatomy study of high performance task scheduling algorithm for Grid computing system. Computer Standards & Interfaces, 31(4), 713–722.
[4] Sonmez, O., & Epema, D. (2010). Performance Analysis of Dynamic Workflow Scheduling in Multicluster Grids. In Proceedings of the 19th ACM International Symposium on High Performance Distributed Computing, 49–60.
[5] Wang, Y., Bahati, R. M., & Bauer, M. A. (2011). A novel deadline and budget constrained scheduling heuristics for computational grids. Journal of Central South University of Technology, 18(2), 465–472.
[6] Zong, Z., Manzanares, A., Ruan, X., & Qin, X. (2011). EAD and PEBD: Two Energy-Aware Duplication Scheduling Algorithms for Parallel Tasks on Homogeneous Clusters, 60(3), 360–374.
[7] Abrishami, S., Naghibzadeh, M., & Epema, D. H. J. (2013). Deadline-constrained workflow scheduling algorithms for Infrastructure as a Service Clouds. Future Generation Computer Systems, 29(1), 158–169.
[8] Beloglazov, A., Abawajy, J., & Buyya, R. (2012). Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing. Future Generation Computer Systems, 28(5), 755–768.
[9] Huang, Q., Su, S., Li, J., Xu, P., Shuang, K., & Huang, X. (2012). Enhanced Energy-Efficient Scheduling for Parallel Applications in Cloud. 2012 12th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2012), 781–786.
[10] Ma, Y., Gong, B., Sugihara, R., & Gupta, R. (2012). Energy-efficient deadline scheduling for heterogeneous systems. Journal of Parallel and Distributed Computing, 72(12), 1725–1740.
[11] Wang, L., Khan, S. U., Chen, D., Kołodziej, J., Ranjan, R., Xu, C., & Zomaya, A. (2013). Energy-aware parallel task scheduling in a cluster. Future Generation Computer Systems, 29(7), 1661–1670.
[12] Wu, C.-M., Chang, R.-S., & Chan, H.-Y. (2014). A green energy-efficient scheduling algorithm using the DVFS technique for cloud datacenters. Future Generation Computer Systems, 37, 141–147.
[13] Durillo, J. J., Fard, H. M., & Prodan, R. (2012). MOHEFT: A Multi-Objective List-based Method for Workflow Scheduling, 185–192.
[14] Zhu, X., He, C., Li, K., & Qin, X. (2012). Adaptive energy-efficient scheduling for real-time tasks on DVS-enabled heterogeneous clusters. Journal of Parallel and Distributed Computing, 72(6), 751–763.
[15] Kim, H., & Kang, S. (2011). Communication-aware task scheduling and voltage selection for total energy minimization in a multiprocessor system using Ant Colony Optimization. Information Sciences, 181(18), 3995–4008.
[16] Kolodziej, J., Khan, S. U., & Xhafa, F. (2011). Genetic Algorithms for Energy-Aware Scheduling in Computational Grids. 2011 International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, 17–24.
[17] Dutta, D. (2011). A Genetic-Algorithm Approach to Cost-Based Multi-QoS Job Scheduling in Cloud Computing Environment (ICWET), 422–427.
[18] Pandey, S., Wu, L., Guru, S. M., & Buyya, R. (2010). A Particle Swarm Optimization-Based Heuristic for Scheduling Workflow Applications in Cloud Computing Environments. 2010 24th IEEE International Conference on Advanced Information Networking and Applications, 400–407.
[19] Xu, Y., Li, K., Khac, T. T., & Qiu, M. (2012). A Multiple Priority Queueing Genetic Algorithm for Task Scheduling on Heterogeneous Computing Systems. 2012 IEEE 14th International Conference on High Performance Computing and Communication, 639–646.
[20] Chen, W., & Zhang, J. (2009). An Ant Colony Optimization Approach to a Grid Workflow Scheduling Problem With Various QoS Requirements, 39(1), 29–43.
[21] Kardani-Moghaddam, S., Khodadadi, F., Entezari-Maleki, R., & Movaghar, A. (2012). A Hybrid Genetic Algorithm and Variable Neighborhood Search for Task Scheduling Problem in Grid Environment. Procedia Engineering, 29, 3808–3814.
[22] Cheng, B., Wang, Q., Yang, S., & Hu, X. (2013). An improved ant colony optimization for scheduling identical parallel batching machines with arbitrary job sizes. Applied Soft Computing, 13(2), 765–772.
[23] Khajemohammadi, H., Fanian, A., & Gulliver, T. A. (2013). Fast Workflow Scheduling for Grid Computing Based on a Multi-objective Genetic Algorithm, 96–101.
[24] Yassa, S., Chelouah, R., Kadima, H., & Granado, B. (2013). Multi-objective approach for energy-aware workflow scheduling in cloud computing environments. The Scientific World Journal, 2013.
[25] Yu, J., Buyya, R., & Ramamohanarao, K. (2008). Workflow Scheduling Algorithms for Grid, 173–214.
[26] Javadi, B., Abawajy, J., & Buyya, R. (2012). Failure-aware resource provisioning for hybrid Cloud infrastructure. Journal of Parallel and Distributed Computing, 72(10), 1318–1331.
[27] Simion, A., Sbirlea, D., Pop, F., & Cristea, V. (2009). Dynamic Scheduling Algorithms for Workflow Applications in Grid Environment. 2009 11th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, 254–261.
[28] Pop, F. (2012). Heuristics Analysis for Distributed Scheduling using MONARC Simulation Tool, 157–163.
[29] Falzon, G., & Li, M. (2011). Enhancing genetic algorithms for dependent job scheduling in grid computing environments. The Journal of Supercomputing, 62(1), 290–314.
[30] Sih, G. C., & Lee, E. A. (1993). A Compile-Time Scheduling Heuristic for Interconnection-Constrained Heterogeneous Processor Architectures. IEEE Transactions on Parallel and Distributed Systems, 4(2).
[31] Jiang, Y., Shao, Z., & Guo, Y. (2014). A DAG Scheduling Scheme on Heterogeneous Computing Systems Using Tuple-Based Chemical Reaction Optimization. The Scientific World Journal, 2014.
[32] Pop, F., Dobre, C., & Cristea, V. (2008). Performance Analysis of Grid DAG Scheduling Algorithms using MONARC Simulation Tool. 2008 International Symposium on Parallel and Distributed Computing.
[33] Hagras, T., & Janeček, J. (2003). Static vs. Dynamic List-Scheduling Performance Comparison. Acta Polytechnica, 43(6).
[34] Topcuoglu, H., Hariri, S., & Wu, M.-Y. (2002). Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing, 13(3), 260–274.
[35] Wieczorek, M., Hoheisel, A., & Prodan, R. (2008). Taxonomies of the Multi-Criteria Grid Workflow Scheduling Problem. In D. Talia, R. Yahyapour, & W. Ziegler (Eds.), Grid Middleware and Services: Challenges and Solutions (pp. 237–264). Springer US.
[36] Sakellariou, R., & Zhao, H. (2004). A hybrid heuristic for DAG scheduling on heterogeneous systems. In Proceedings of the 18th International Parallel and Distributed Processing Symposium. IEEE.