Efficient Service Task Assignment in Grid Computing Environments
Angelos Michalas, Technological Educational Institute of Western Macedonia, Department of Informatics and Computer Technology, Kastoria, Greece
Malamati Louta, Harokopio University of Athens, Department of Informatics and Telematics, Greece

INTRODUCTION

The availability of powerful computers and high-speed network technologies is changing the way computers are used. These technology enhancements have led to the possibility of using distributed computers as a single, unified computing resource, leading to what is popularly known as Grid computing (Foster, 2001). The term Grid is adopted from the power grid, which supplies transparent access to electric power regardless of its source. Cloud computing, scalable computing, global computing, internet computing, and more recently peer-to-peer computing are well-known names describing Grid technology in distributed systems. Grids facilitate the employment of various nodes, comprising supercomputers, storage elements and databases, that are distributed for resolving computationally demanding problems in many disciplines of science and commerce (Foster, 2001).

To utilize Grids effectively, an efficient allocation algorithm is needed to assign service tasks to Grid resources. Thus, assuming that a user wishes to perform a specific service task, which can be served by various candidate Grid nodes (CGNs), a problem that should be addressed is the assignment of the requested service task to the most appropriate Grid node. In this paper, the pertinent problem is called Service Task Allocation (STA). This study is related to pertinent previous work in the literature, since efficient resource utilization, load balancing and job scheduling are topics that attract the attention of researchers, as computational Grids have become an emerging trend in high performance computing. Most studies in the field of resource allocation schemes aim at efficiently utilizing the otherwise unutilized computing power spread throughout a network. Different global objectives could be considered, such as minimization of mean service task completion time, maximization of resource utilization (e.g., CPU time), and minimization of mean response ratio, while in most cases load balancing among nodes is considered.

A high-level statement of the problem addressed in the current version of this study is the following. Given the set of candidate Grid nodes and their layout, the set of service tasks constituting the required services, the resource requirement of each service task in terms of CPU utilization, the characteristics of each Grid node, and the current load conditions of each Grid node and of the network links, find the best assignment
pattern of service tasks to Grid nodes, subject to a set of constraints associated with the capabilities of the Grid nodes. The proposed service task allocation scheme handles complex services composed of tasks requiring communication (i.e., message exchange) with other service components (e.g., databases). Care is also taken for the case where no resource has enough spare capacity to accommodate a new service task, i.e., the system is congested. Our approach uses an Ant Colony Optimization (ACO) algorithm for service task allocation in Grid computing environments. ACO follows the behavioural pattern of real ants in nature, which travel across various paths, marking them with pheromone, while seeking food. ACO is used to solve many NP-hard problems, including routing, assignment, and scheduling problems. In our approach, each service task is represented by an ant, and the algorithm sends the ants out to search for suitable Grid nodes.
BACKGROUND

Most studies in the field of resource allocation schemes aim at efficiently utilizing the resources spread throughout a network. In most cases the problem is reduced to load balancing among specific nodes. Basic service task assignment strategies comprise the following (Balasubramanian, 2004). First, Round Robin, according to which tasks are allocated to nodes by simply iterating through the node list. Second, Random, where the nodes to be assigned the tasks are selected randomly. Third, Least Loaded, in accordance with which tasks are assigned to a specific node until a pre-specified threshold is reached; thereafter, all subsequent requests are transferred to the node with the lowest load and the aforementioned steps are repeated. Fourth, Load Minimum, where the average load of the system is calculated; in case the load of a node is higher than the average load and than that of the least loaded node by a certain amount, all subsequent requests are transferred to the least loaded location.

According to the task farming paradigm (Andrews, 1991), a pool of tasks and one worker on each node of the system are considered. Each worker repeatedly claims a task from the pool, executes it and claims the next task. This way, the system load is efficiently distributed to the available resources. Considering dynamic, distributed controlled resource allocation, schemes in most cases follow three basic types (Agrawal, 1987): Sender-Initiated, where congested nodes (nodes whose load reaches a predefined threshold) take the initiative and probe other nodes in order to determine the most suitable node (e.g., the least loaded node) for remote task execution; Receiver-Initiated, where lightly loaded nodes search for work in a similar manner (they probe other nodes in order to determine the node(s) that should be relieved of tasks, e.g., the most loaded node); and Symmetrically-Initiated, according to which both congested and lightly loaded nodes take the initiative. In (Lazowska 1986, Krueger 1988) the performance of these schemes is evaluated. The sender-initiated scheme is shown to perform better in lightly or moderately loaded systems, while the receiver-initiated paradigm is preferable at higher load conditions, under the assumption that the cost of transferring a task between nodes is comparable for the two schemes. Both sender-initiated and symmetrically-initiated schemes become unstable at high load conditions, especially when the cost of probing other nodes is taken into account. In general, many approaches have been derived that advocate adaptive switching between strategies (Svenson, 1992) and dynamic adjustment of decision parameters (e.g., a node's predefined load threshold, or the time interval at which load information should be exchanged between nodes) (Xu, 1993). However, depending on the number of nodes in the network, the load balancing technique adopted, the network status, the time required and the complexity introduced, the resource allocation scheme itself may diminish the net benefit of the overall procedure. In (Eager, 1986), the relative benefits of simple versus complex load sharing policies are examined. Using an analytical model for a homogeneous network, the authors concluded that simple policies requiring only a small amount of state information perform as well as complex policies.
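For concreteness, the following is a minimal sketch of the four basic assignment strategies listed above; the threshold and margin values, as well as all class and function names, are illustrative assumptions rather than values prescribed by the cited work.

```python
import itertools
import random

class Node:
    def __init__(self, name, load=0.0):
        self.name = name
        self.load = load                      # current CPU load in [0, 1]

def round_robin(nodes):
    """Round Robin: iterate through the node list, one task per node in turn."""
    return itertools.cycle(nodes)             # next(...) yields the next target node

def random_node(nodes):
    """Random: pick the target node uniformly at random."""
    return random.choice(nodes)

def least_loaded(nodes, current, threshold=0.8):
    """Least Loaded: keep using `current` until its load reaches the threshold,
    then transfer subsequent requests to the node with the lowest load."""
    return current if current.load < threshold else min(nodes, key=lambda n: n.load)

def load_minimum(nodes, current, margin=0.2):
    """Load Minimum: if `current` exceeds both the average load and the least
    loaded node's load by `margin`, redirect to the least loaded node."""
    avg = sum(n.load for n in nodes) / len(nodes)
    lightest = min(nodes, key=lambda n: n.load)
    if current.load > avg + margin and current.load > lightest.load + margin:
        return lightest
    return current
```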
Researchers also borrow notions from economic fields (particularly, dynamic pricing and game theory) in order to efficiently allocate network resources through the construction of market-based systems (Chavez, 1997). In (Buyya, 2002), a computational economy framework for resource allocation and for regulating supply and demand in grid computing environments is proposed. Specifically, economic models (commodity market models, posted pricing schemes, tender and auction mechanisms), system architectures and policies for resource management are provided for computational grids and peer-to-peer computing systems.

ACO algorithms are based on a behavioural pattern exhibited by ants, and more specifically their ability to find shortest paths using pheromone, a chemical substance that ants can deposit and smell across paths. These algorithms emerged in the early 1990s for the solution of optimization problems. One of the problems ACO has been applied to is the Generalized Assignment Problem (GAP), where a set of tasks i ∈ I has to be assigned to a set of resources j ∈ J. Each resource has a limited capacity a_j. Each task
i assigned to resource j consumes a quantity r_ij of the resource's capacity. Additionally, the cost d_j of assigning a task to resource j is given. The objective is to find the minimum cost task assignment pattern. Care should be taken to assign tasks to resources with enough spare capacity. In case there is no resource with enough spare capacity, the task is assigned to any resource, producing in this way an infeasible assignment. The first ACO application to the GAP was presented by (Lourenco, 2002) and is called Max-Min Ant System (MMAS). Each service task is represented by an ant and the algorithm allocates ants to resources. The pheromone trail τ_j represents the desirability of assigning a task to resource j. Initially all pheromone trails are set to the inverse of the cost of the respective resource (i.e., τ_j(0) = 1/d_j). Solutions are constructed iteratively by assigning tasks to resources. The probability according to which task i is assigned to resource j is given by:

p_j = ([τ_j]^a · [n_j]^b) / Σ_k ([τ_k]^a · [n_k]^b)

where
n_j = τ_j(0) = 1/d_j is a heuristic value known a priori for the performance of resource j, and a, b are two parameters which determine the relative significance of the pheromone trail and the heuristic information, respectively. After each iteration, the task deposits pheromone on the trail chosen. The amount of pheromone deposited on a path depends on the feasibility of the solution: a feasible solution is followed by a deposit of 0.05 units of pheromone, whereas 0.01 units of pheromone are deposited when a solution is infeasible.

Several studies of task allocation in grid environments have been proposed since the Max-Min Ant System. (Yan, 2005) uses the basic idea of MMAS. The pheromone deposited on a trail includes a) an encouragement coefficient when a task is completed successfully and the resource is released, b) a punishment coefficient when a job fails and is returned from the resource, and c) a load balancing factor related to the job finishing rate on a specific resource. (Chang, 2009) uses a balanced ACO which performs job scheduling according to the status of resources in the grid environment and the size of a given job. A local pheromone update function updates the status of a selected path after job assignment, while a global pheromone update function updates the status of all existing paths after the completion of a job. (Dörnemann, 2007) presents a metascheduler which decides where to execute a job in a Grid environment consisting of several administrative domains controlled by different local schedulers. The approach is based on the ant colony paradigm to provide a good balance of the computational load. The information exchange protocol used is the Anthill framework (Babaoglu, 2002), in which AntNests offer services to users based on the work of autonomous agents called Ants. A grid node hosts one running AntNest, which receives, schedules and processes Ants, as well as sends Ants to neighboring AntNests. State information carried by Ants is used to update pheromones on paths along AntNests. Additionally, Ants may carry jobs which have to be transferred and executed from one AntNest to another.
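To make the MMAS-GAP construction rule described above concrete, the following is a minimal sketch of the probabilistic resource selection and the feasibility-dependent pheromone deposit; all names are illustrative, and the default values of a and b are assumptions.

```python
import random

def select_resource(resources, tau, eta, a=1.0, b=1.0):
    """Pick resource j with probability proportional to [tau_j]^a * [eta_j]^b."""
    weights = [(tau[j] ** a) * (eta[j] ** b) for j in resources]
    total = sum(weights)
    r, acc = random.uniform(0.0, total), 0.0
    for j, w in zip(resources, weights):
        acc += w
        if r <= acc:
            return j
    return resources[-1]

def deposit_pheromone(tau, j, feasible):
    """Feasible solutions deposit 0.05 pheromone units, infeasible ones 0.01."""
    tau[j] += 0.05 if feasible else 0.01
```

For example, with tau = eta = {'n1': 0.5, 'n2': 1.0}, select_resource(['n1', 'n2'], tau, eta) returns 'n2' roughly four times as often as 'n1' on average.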
METHODOLOGY FUNDAMENTALS

The STA process, as a first step, requires a computational component that will act on behalf of the user. Its role will be to capture the user preferences, requirements and constraints regarding the requested service task and to deliver them in a suitable form to the appropriate Grid resource provider entities. As a second step, STA requires an entity that will act on behalf of a Grid resource provider. Its role will be to intercept user requests, acquire and evaluate the corresponding Grid node and network load conditions, and ultimately select the most appropriate Grid node for the realization of the task. Furthermore, a monitoring module is required. The monitoring module consists of a distributed set of agents which run on each Grid node. Each agent is responsible for monitoring the load conditions and available resources of the Grid node and delivering them to the Grid resource provider related entity. Additionally, a distributed set of network provider related entities will be responsible for providing the Grid resource provider entity with network load conditions and managing the network connections necessary for resource provisioning.

The following key extensions are made so as to cover the functionality identified above. First, the Grid Resource Provider Agent (GRPA) is introduced and assigned the role of selecting, on behalf of the Grid resource provider, the best service task assignment pattern. Second, the User Agent (UA) is assigned the role of promoting the service request to the appropriate GRPA. Third, the Grid Node Agent (GNA) is introduced and assigned the role of reporting the current load conditions of a CGN. Finally, the Network Provider Agent (NPA) is introduced and assigned the task of providing current network load conditions (i.e., bandwidth availability) to the appropriate GRPA. In essence, the distributed set of GNAs and NPAs forms the monitoring module. In other words, the GRPA interacts with the UA in order to acquire the user preferences, requirements and constraints, analyzes the user request in order to identify the respective requirements in terms of CPU, identifies the set of CGNs and their respective capabilities, interacts with the GNAs of the candidate Grid nodes so as to obtain their current load conditions and with the NPAs so as to acquire the network load conditions, and ultimately selects the most appropriate service task assignment pattern.
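The following compact sketch illustrates the agent roles just introduced and the state each one contributes to the decision; the class names, fields and method are illustrative placeholders, not the authors' actual Java/Voyager implementation.

```python
from dataclasses import dataclass

@dataclass
class UserAgent:                   # UA: captures the user's request
    task_instructions: int         # CPU requirement of the requested task (instructions)

@dataclass
class GridNodeAgent:               # GNA: monitors one candidate Grid node
    node_id: str
    cpu_speed: float               # e.g., instructions per second
    cpu_load: float                # current utilization in [0, 1]

@dataclass
class NetworkProviderAgent:        # NPA: monitors one network link
    link_id: str
    available_bandwidth: float     # e.g., bytes per second

class GRPA:                        # selects the assignment on the provider's behalf
    def __init__(self, gnas, npas):
        self.gnas, self.npas = gnas, npas

    def collect_domain_state(self):
        """Gather the Grid node and link state the assignment decision relies on."""
        node_state = {g.node_id: (g.cpu_speed, g.cpu_load) for g in self.gnas}
        link_state = {n.link_id: n.available_bandwidth for n in self.npas}
        return node_state, link_state
```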
Figure 1. System model and physical distribution of the service task assignment related software components

The GRPA applies an extended version of MMAS-GAP, which can be described as a three-step framework that is iteratively repeated over a set of tasks. The first step comprises the selection of an unassigned task based on a priority function and its assignment to the CGN with the maximum pheromone trail value. The second step applies a local search in order to improve the initial solution in case the system is congested and there are no resources with spare capacity to accommodate new service task assignments. Finally, in the third step, the pheromone level on the paths is updated using the current optimal solution. Our approach is presented in a detailed manner in the next paragraphs.

Regarding the system model, we consider a set of Grid nodes GN and a set of links L. Each Grid node n_i ∈ GN corresponds to a server, while each link l ∈ L corresponds to a physical link that interconnects two nodes n_i, n_j ∈ GN. Our system operates in a multi-tasking environment, i.e., several tasks may be executed on a single Grid node sharing its resources (e.g., CPU utilization, memory, disk space). Let D_i denote a set of Grid nodes grouped to form a domain. In essence, domains represent different network segments. A pattern for the physical distribution of the software components related to the service task assignment scheme is given in Figure 1. Each GRPA controls the Grid nodes of a domain. A GNA is associated with each Grid node, while an NPA is associated with the network elements (e.g., switches or routers) necessary for supporting Grid node connectivity. The role of the GNAs and NPAs is (in a sense) to represent the Grid nodes and network elements, respectively, and to assist the GRPA by providing information on the availability of resources of the Grid nodes/network elements. Domain state information (load conditions of the Grid nodes of the particular domain and link utilization) is exchanged between the GRPA and the GNAs/NPAs residing in the specific domain.
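As a rough sketch, under stated assumptions, the three-step framework above can be written as follows; the priority, capacity-check and workload callables are placeholders for the policies detailed in the next section, and the evaporation rate value is an assumption.

```python
def run_assignment_round(tasks, cgns, tau, tau0, priority,
                         has_spare_capacity, workload, rho=0.9):
    """One pass of the three-step loop: task selection, local search, trail update."""
    assignment = {}
    for task in sorted(tasks, key=priority, reverse=True):       # step 1: pick next task
        target = max(cgns, key=lambda j: tau[j])                  # highest pheromone trail
        if not has_spare_capacity(target, task):                  # step 2: local search when
            target = max(cgns, key=lambda j: tau0[j])             # congested: fall back to tau_j(0)
        assignment[task] = target
        tau[target] = rho * tau[target] - workload(task, target)  # step 3: trail update
    return assignment
```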
The ACO Algorithm

In this paper an extended version of the MMAS-GAP ACO algorithm is used to solve the STA problem. During the construction of a solution, ants choose the CGN to which the task should be assigned. The initial pheromone value of each CGN is given by the formula:
τ_j(0) = CPU_Speed_j · (1 − CPU_Load_j)    (1)
Pheromone trails are updated upon assignment of a task to a CGN and upon termination of a task, according to the formula:
τ_j^post = ρ · τ_j^pre + Δτ_ij    (2)
where τ_j^post and τ_j^pre are the trail intensities from a task to CGN j after and before the update, respectively. The excess accumulation of pheromone is controlled by the use of the pheromone evaporation rate ρ (where 0 ≤ ρ ≤ 1). The latter also allows the algorithm to avoid wrong decisions taken in the past (Lourenco, 2002). Δτ_ij is the change in the pheromone value that a particular task i deposits on the path to CGN j. Specifically, the pheromone of a resource is reduced upon task assignment in relation to the task size and to the power of the resource. Conversely, the pheromone is restored to the resource upon task termination. When task i is assigned to CGN j, Δτ_ij = −M, while when task i is completed and CGN j is released, Δτ_ij = M. M is a positive value related to the computational workload of the task. In the current version of this study:
M = Task_Instructions_i / (CPU_Speed_j · (1 − CPU_Load_j))    (3)
The factor Task_Instructions_i is the number of instructions task i contains, CPU_Speed_j is the CPU speed of grid node j, and CPU_Load_j is the CPU load of grid node j. When CPU_Load_j approaches 1, the resource does not have enough capacity and further assignments of tasks to it should be avoided. The authors consider that 1 − CPU_Load_j assumes a minimum value of 0.001, thus yielding a high value for parameter M, so that subsequent tasks are very unlikely to be assigned to the specific resource. The desirability of assigning task i to CGN j is defined by the following formula:
des_i,j = τ_j − Com_Cost_i,j    (4)
Namely, task i is assigned to the CGN j for which des_i,j takes the maximum value among all j resources. In this expression, the factor τ_j is the current trail intensity on CGN j. The factor Com_Cost_i,j is the cost of migration to CGN j, plus the communication cost introduced in case service task i needs to interact with other service components (e.g., other service tasks or databases) residing on different grid nodes to accomplish its goal. It is defined as:
Com_Cost_i,j = Task_Size_i / Bandwidth_j + Σ_k m_i,k · cc_j,l    (5)
In the above formula, the volume of messages exchanged between service task i and component k for the accomplishment of task i is represented as m_i,k, and the communication cost per unit message exchanged between grid nodes j and l is represented as cc_j,l (we suppose that service task i and component k reside on grid nodes j and l, respectively). This latter factor may be proportional to the distance (e.g., number of hops) between the two grid nodes and the load conditions (e.g., bandwidth availability) of the communication link interconnecting them. According to equation (4), the desirability value is proportional to the pheromone τ_j minus the communication cost given by formula (5). The authors have decided to include Com_Cost_i,j in their model so as to obtain a more integrated solution to the service task assignment problem. The additional cost arising from the interaction of a task with other software units needs to be considered, since most current services are composed of distinct collaborating components.

Local search takes place when there are no resources with enough spare capacity. In such a case, the task is assigned to a resource according to the initial pheromone values of the resources (τ_j(0)). Additionally, the initial pheromone values are also considered in case two or more resources have the maximum desirability value. In case of equal initial pheromone values, the task is assigned randomly to any of these resources.

Based on the aforementioned analysis, the grid node selection process, graphically illustrated in Figure 2, may be described as follows:
Step 1. The UA component is acquainted with the preferences, requirements and constraints of user u regarding service task j.
Step 2. The GRPA obtains from the UA the user preferences, requirements and constraints concerning the requested task j.
Step 3. The GRPA retrieves from a database the set of candidate grid nodes for the completion of service task j, as well as the CPU speed of each grid node.
Step 4. The GRPA computes for each service task the resources required for its completion, in terms of the number of instructions the task contains.
Step 5. The GRPA interacts with the GNAs in order to obtain the current load conditions of each CGN.
Step 6. The GRPA contacts the NPAs in order to acquire the current load conditions of the communication links.
Step 7. The GRPA estimates the communication cost for service task j, on the basis of equation (5).
Step 8. The GRPA solves the appropriate instance of the service task assignment problem (equations (1)-(4)).
Step 9. End.
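The sketch below illustrates, under stated assumptions, how equations (1)-(5) and the selection of Steps 5-8 could be computed; the dictionary field names and the evaporation rate value are hypothetical and purely illustrative.

```python
def initial_pheromone(node):
    # Equation (1): tau_j(0) = CPU_Speed_j * (1 - CPU_Load_j)
    return node["cpu_speed"] * (1.0 - node["cpu_load"])

def workload(task, node):
    # Equation (3): M, with the 0.001 floor on (1 - CPU_Load_j) described above
    headroom = max(1.0 - node["cpu_load"], 0.001)
    return task["instructions"] / (node["cpu_speed"] * headroom)

def update_pheromone(tau_pre, delta, rho=0.9):
    # Equation (2): tau_post = rho * tau_pre + delta, where delta = -M on
    # assignment and +M on completion; rho = 0.9 is an assumed value.
    return rho * tau_pre + delta

def communication_cost(task, node, messages, cc):
    # Equation (5): migration cost plus message-exchange cost. `messages` maps
    # each remote component k to the message volume m_ik, and cc(node, k) is the
    # per-unit-message cost between `node` and the node hosting component k.
    migration = task["size_bytes"] / node["bandwidth"]
    return migration + sum(m_ik * cc(node, k) for k, m_ik in messages.items())

def select_cgn(task, cgns, tau, messages, cc):
    # Equation (4) / Steps 5-8: choose the CGN with maximum desirability.
    return max(cgns, key=lambda node: tau[node["name"]]
               - communication_cost(task, node, messages, cc))
```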
Figure 2. Grid node selection process.
EXPERIMENTAL RESULTS

In this section, indicative results are provided in order to assess the proposed framework, which allows for effective service task assignment in Grid environments. In order to test the performance of the service task assignment scheme, we conducted experiments on a simulated grid environment composed of six service nodes with the following configuration: three service nodes with a 3 GHz CPU and 2 GB RAM, two service nodes with a 3.2 GHz CPU and 2 GB RAM, and one service node with a 2.7 GHz CPU and 1 GB RAM. All service nodes reside on a 100 Mbit/sec Ethernet LAN and run the Red Hat Linux OS. Concerning the implementation of our experiments, the overall Grid Resource Provisioning System (GRPS) has been implemented in Java. The Voyager mobile agent platform (Voyager) has been used for the realisation of the software components as well as for the inter-component communication. To be more specific, the system components (UA, GRPA and the monitoring module GNAs, NPAs) have been implemented as fixed agents, and the service tasks constituting the service as intelligent mobile agents, which can migrate to and execute on remote service nodes.

To evaluate the efficiency of our service task allocation method, the following experimental procedure, similar to (Chang, 2009), has been followed. We consider 1500 simple tasks, each performing matrix multiplication of real numbers. The matrix sizes vary from 400×400 up to 1000×1000. The task size depends on the matrix size and is about n × n × 4 bytes (each real number is represented by 4 bytes). The number of instructions the task contains can be derived from the task's complexity. Since matrix multiplication has O(n³) complexity, 2n³ instructions are estimated for an n × n matrix multiplication. Since the communication cost is similar for all hosts, only the computation workload of tasks is considered.
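The following one-liner simply restates the workload model above; the dictionary keys are illustrative.

```python
def matrix_task(n):
    """An n x n multiplication of 4-byte reals: ~n*n*4 bytes, ~2*n^3 instructions."""
    return {"size_bytes": n * n * 4, "instructions": 2 * n ** 3}

# e.g. matrix_task(1000) -> {'size_bytes': 4000000, 'instructions': 2000000000}
```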
In our experiment we compared the proposed ACO service task allocation scheme with the Round Robin (RR), Random (Rand), Least Loaded (LL) and Load Minimum (LM) assignment algorithms. In order to measure the efficiency of each method, we use the standard deviation of the CPU load of the CGNs. The load of each CGN is sampled after each task assignment, and the standard deviation for each method is computed per 100 samples, from 100 to 1500 tasks. The standard deviation is computed as:
σ = √( (1/N) · Σ_{i=1}^{N} (x_i − x̄)² )

where σ is the standard deviation, x_i is the CPU load of resource i, and x̄ is the average load of all resources.
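A minimal sketch of this load-balance metric, with illustrative names, is the following:

```python
from math import sqrt

def load_std_dev(loads):
    """Standard deviation of the CGN CPU loads sampled after each assignment."""
    n = len(loads)
    mean = sum(loads) / n
    return sqrt(sum((x - mean) ** 2 for x in loads) / n)

# e.g. load_std_dev([0.30, 0.35, 0.28, 0.40, 0.32, 0.31]) for the six nodes
```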
Figure 3. Standard deviation of load for the ACO, Round Robin, Random, Least Loaded and Load Minimum scheduling algorithms.

Figure 3 illustrates the standard deviation for each method. Low standard deviation values indicate that the system load is better balanced. From the obtained results, we observe a decrease in the standard deviation when the ACO service task assignment scheme is used, which verifies that the load of the CGNs is better balanced. At this point it should be mentioned that the performance improvement is related to the number of tasks being executed in the Grid environment. It may be observed that for a small number of executing tasks (under 200 tasks) there is no significant improvement among the different task assignment methods. However, when more tasks exist in the system, methods such as ACO, Least Loaded and Load Minimum, which need state information (e.g., CPU load), perform considerably better than simple allocation methods like Random or Round Robin.
CONCLUSION

In this study the service task assignment problem in Grid computing environments has been addressed using an Ant Colony Optimization (ACO) algorithm. Our objective is to find the best service task assignment pattern, i.e., an assignment of service tasks to Grid nodes that is optimal given the current load conditions and the number of service tasks being served by each Grid node. Experimental results indicate that the proposed framework produces good results in relatively simple contexts (e.g., when a task does not need to interact with other service components). Our scheme manages the load among Grid nodes effectively and performs better than other algorithms requiring similar state information. Future work includes the realization of further wide-scale trials, so as to assess the applicability of the framework presented herein, as well as the comparison of our scheme with alternative ACO scheduling algorithms. Moreover, evaluation of the performance of the proposed service task assignment scheme is required in complex contexts where interaction among software components is necessary.
REFERENCES

Voyager Platform, Recursion Software Inc. http://www.recursionsw.com/

Agrawal, R., & Ezzat, A. (1987). Location Independent Remote Execution in NEST. IEEE Transactions on Software Engineering, Vol. 13, no. 8, (pp. 905-912). IEEE Computer Society.

Andrews, G. (1991). Paradigms for Process Interaction in Distributed Programs. ACM Computing Surveys, Vol. 23, no. 1, (pp. 49-90). Association for Computing Machinery.

Babaoglu, O., Meling, H., & Montresor, A. (2002). Anthill: A framework for the development of agent-based peer-to-peer systems. In Proc. of the 22nd International Conference on Distributed Computing Systems (pp. 15-22). Vienna, Austria. IEEE.

Balasubramanian, J., Schmidt, D., Dowdy, L., & Othman, O. (2004). Evaluating the Performance of Middleware Load Balancing Strategies. In Proc. of the 8th International IEEE Enterprise Distributed Object Computing Conference (pp. 135-146). Monterey, California, USA.

Buyya, R., Abramson, D., Giddy, J., & Stockinger, H. (2002). Economic models for resource management and scheduling in Grid computing. Concurrency and Computation: Practice and Experience, Vol. 14, (pp. 1507-1542). Wiley InterScience.

Chang, R. S., Chang, J. S., & Lin, P. S. (2009). An ant algorithm for balanced job scheduling in grids. The International Journal of Grid Computing: Theory, Methods and Applications, Vol. 25, (pp. 20-27). Elsevier.

Chavez, A., Moukas, A., & Maes, P. (1997). Challenger: A Multi-agent System for Distributed Resource Allocation. In Proc. of the 1st International Conference on Autonomous Agents (pp. 323-331). New York, USA.

Dörnemann, K., Prenzer, J., & Freisleben, B. (2007). A peer-to-peer meta-scheduler for service-oriented grid environments. In Proc. of the First International Conference on Networks for Grid Applications (article no. 7). Lyon, France.

Eager, D., Lazowska, E., & Zahorjan, J. (1986). Adaptive Load Sharing in Homogeneous Distributed Systems. IEEE Transactions on Software Engineering, Vol. 12, (pp. 662-675). IEEE Computer Society.

Foster, I., Kesselman, C., & Tuecke, S. (2001). The Anatomy of the Grid: Enabling Scalable Virtual Organizations. International Journal of Supercomputer Applications, Vol. 15, no. 3, (pp. 200-222). Sage Publications.

Krueger, P., & Livny, M. (1988). A Comparison of Preemptive and Non-Preemptive Load Distributing. In Proc. of the 8th International Conference on Distributed Computing Systems (pp. 123-130). San Jose, California.

Lazowska, E., Eager, D., & Zahorjan, J. (1986). A Comparison of Receiver-Initiated and Sender-Initiated Dynamic Load Sharing. Performance Evaluation Review, Vol. 6, no. 1, (pp. 53-68). ACM SIGMETRICS.

Lourenco, H., & Serra, D. (2002). Adaptive search heuristics for the generalized assignment problem. Mathware and Soft Computing, Vol. 9, (pp. 209-234).

Svenson, A. (1992). Dynamic Alternation between Load Sharing Algorithms. In Proc. of the 25th Hawaii International Conference on System Sciences, Vol. 1, (pp. 193-201). Hawaii, United States.

Xu, J., & Hwang, K. (1993). Heuristic Methods for Dynamic Load Balancing in a Message-Passing Multicomputer. Journal of Parallel and Distributed Computing, Vol. 18, (pp. 1-13). Elsevier.

Yan, H., Qin, X., Li, X., & Wu, M. (2005). An improved ant algorithm for job scheduling in grid computing. In Proc. of the 2005 International Conference on Machine Learning and Cybernetics, Vol. 5, (pp. 2957-2961). Guangzhou, China.
KEY TERMS & DEFINITIONS

Grid Computing: A distributed network of high performance computers, storage elements, sensors and collaboration environments accessed transparently by users. Access to resources is conditional, based on factors like authorization, trust, negotiation and resource-based policies.

Job Scheduling: An optimization problem in computer science specifying which jobs should be assigned to specific resources at particular times.

Service Task Allocation: The way tasks are chosen, coordinated and assigned to resources.

Load Balancing: A technique to spread work between two or more computers, network links, CPUs, hard drives, or other resources in order to achieve optimal resource utilization, maximize throughput, and minimize response time.

Ant Colony Algorithm: Ant colony algorithms follow the behavioural pattern of real ants in nature, which travel across various paths, marking them with pheromone, while seeking food. These kinds of algorithms are used to solve many NP-hard problems, including routing, assignment, and scheduling problems.

Generalized Assignment Problem: In this problem a set of tasks i ∈ I has to be assigned to a set of resources j ∈ J. Each resource has a limited capacity a_j. Each task i assigned to resource j consumes a quantity r_ij of the resource's capacity. Also, the cost d_j of assigning a task to resource j is given. The objective is to find a task assignment pattern of minimum cost. Care should be taken to assign tasks to resources with enough spare capacity. In case there is no resource with enough spare capacity, the task is assigned to any resource, producing in this way an infeasible assignment.