Distributed allocation of the capacity of a single facility using cooperative interaction via coupling agents

In-Jae Jeong† and V. Jorge Leon‡

Revised: June 13, 2002. Accepted for publication in IJPR, June 19, 2002.
Abstract
This paper considers the problem of allocating the finite capacity of a single facility among different business organizations under partial information sharing. In distributed allocation the decision authorities and system information are dispersed among the organizations and the facility; i.e., no organization requires explicit access to system-wide information in order to effectively allocate the capacity of the shared facility. The lack of explicit information access is compensated by the careful exchange of information among organizations via the shared facility, i.e., cooperative interaction. The facility resolves conflicting interests among organizations on capacity usage and directs locally optimized solutions toward a globally optimized solution. The distributed decision-making problems associated with each organization and with the facility are formulated as linear programs. The proposed cooperative algorithm is tested under two levels of information sharing: when capacity information of the facility is unknown to the organizations, and when partial capacity information of the facility is known to the organizations. Experimental results suggest that even in this restricted information environment the proposed method yields solutions that are comparable to those obtained with methodologies that require unrestricted access to information.

† Department of Industrial Engineering, Texas A&M University, College Station, TX 77843-3131. (Email: [email protected])
‡ Department of Industrial Engineering, and Department of Engineering Technology, Texas A&M University, College Station, TX 77843-3367. (Email: [email protected])
1. Introduction
Consider a situation where a large contract manufacturer has customers that are competitors among themselves. In order to better integrate the supply chain, the customers would like to participate in the capacity planning of the contractor's facility, but they are understandably unwilling to fully disclose their own costs, demands and constraints to their competitors. The impossibility of complete information sharing prohibits global coordination between the customers' and the contractor's systems, resulting in a loss of potential savings for all involved. In a different scenario, a manufacturer has two separate production areas: one is a high-volume, low-mix continuous line, and the second is a low-volume, high-mix job shop; both areas use the same test facility. Because of these structural differences, the organization structure, operating modes and planning models used in each area are very different; for instance, each area has its own production planner and planning procedures. Typically, in these situations it is complicated and impractical to use a single monolithic model to coordinate the manufacturer's operations. The methodology presented in this paper is an initial attempt to enable close-to-optimal coordination in situations of this type, i.e., situations where only partial information sharing is present or practical. Furthermore, distributed decision models are becoming attractive alternatives to centralized planning as network and information technologies enable companies to quickly form or disband alliances as required by rapidly changing markets.

1.1. Background
The need for decentralized enterprises is becoming more important as the need for rapid response to new markets increases (Kurt and Rainer 1995). Also, advances in networking and communication technologies make decentralization more attractive to large organizations. Decentralized enterprises are characterized by the distribution of decision authority and information among local decision makers (Goedhart and Spronk 1995). In this environment, decision makers routinely have to deal with conflicting interests on shared resources and limited knowledge of enterprise-level information.
Much of the related research builds on decomposition methods, which originated as an approach to solving large linear programming problems linked by coupling constraints with special structure. The interaction between the various parts of such algorithms can serve as an initial model for the decentralized enterprise (Panne 1991). In decomposition methods, the problem is decomposed into a master problem and associated subproblems. In price-directive decomposition methods (e.g. Choi and Kim 1999, Dantzig and Wolfe 1961, Fisher 1981, Jennergren 1973), the master problem sends the prices of the shared resources to the subproblems, and the subproblems respond by sending divisional quantity proposals to the master problem; based on this information, the master problem updates the associated resource prices. In resource-directive decomposition methods (e.g. Benders 1962, Kate 1972), the master problem allocates quantities of the shared resources to the subproblems, and the divisions evaluate the prices of the common resources according to the quantities allocated to them. The application of decomposition methods in distributed organizations is limited mainly because the master problem must utilize the detailed information contained in every coupling constraint. In Dantzig-Wolfe decomposition (Dantzig and Wolfe 1961), the master problem instructs the divisions how to convex-combine their local solutions. In Lagrangian relaxation with the subgradient method, the master problem must know an upper or lower bound of the centralized problem, which can be obtained from a globally feasible solution. This implies the existence of a decision maker with unrestricted access to information on the entire system, which is not allowed in distributed organizations of the type considered in this paper.

The remainder of this paper is organized as follows. Section 2 presents the distributed capacity allocation problem and section 3 describes the solution approach based on cooperative interaction via coupling agents (CICA). Experiments and computational results are reported in section 4. Finally, concluding remarks are given in section 5.

2. Distributed capacity allocation problem
In this paper, distributed capacity allocation refers to the problem of allocating the finite capacity of a single facility to satisfy the demand associated with multiple
organizations within a given planning horizon. The allocation is termed distributed because: (1) the decision authorities and system information are dispersed among the participating organizations, and (2) complete information sharing is not required in order to achieve close-to-optimal allocations. What specific information is private is discussed later in this section. These characteristics make it impossible to apply a standard optimization method to the problem under consideration. Let us first define the necessary notation:

t = total production horizon;
m = total number of organizations in the firm;
d_i = total demand of product i of organization i, i = 1, …, m;
x_ik = production quantity of product i at time interval k proposed by organization i;
y_ik = production quantity of product i at time interval k proposed by the facility;
b_ik = benefit of selling one unit of product i for organization i at time interval k, k = 1, …, t;
c_k = available service time of the facility at time interval k;
a_ik = processing time of one unit of product i at time interval k.

Figure 1 illustrates the problem structure of the distributed capacity allocation problem under consideration. The global objective for the entire system is the maximization of the sum of the organizations' benefits, Σ_{i=1}^{m} Σ_{k=1}^{t} b_ik x_ik. Two types of constraints are considered: demand constraints and capacity constraints. Demand constraints ensure that organization i schedules production on the facility such that its demand is satisfied at the end of the planning horizon, or Σ_{k=1}^{t} x_ik = d_i. Capacity constraints ensure that the production scheduled on the facility does not exceed its available capacity at any time period k, or Σ_{i=1}^{m} a_ik y_ik ≤ c_k, k = 1, 2, …, t. Distinctive to the environment studied in this paper is that close-to-optimal allocation should be achieved when each organization only has direct access to partial information. Specifically, the following distribution of information is assumed in this paper:
• Each organization only views its local objective; i.e., the term Σ_{k=1}^{t} b_ik x_ik in the objective function is only visible to organization i and not to organization j, j ≠ i.
• Each organization only views its demand; i.e., the demand constraints Σ_{k=1}^{t} x_ik = d_i are only visible to organization i and not to organization j, j ≠ i.
• Organizations have a limited view of the facility's capacity and loading. Two cases are considered for the facility's capacity constraints Σ_{i=1}^{m} a_ik y_ik ≤ c_k, k = 1, 2, …, t, in order to study the impact of the level of information sharing on the proposed methodology. The first case is without capacity information (WOCI): organization i does not have access to the capacity constraints at all. The second case is with partial capacity information (WPCI): organization i has access to the facility capacity for each time period and hence can ensure that its own requirements do not exceed the facility capacity, or a_ik y_ik ≤ c_k, k = 1, 2, …, t. However, it cannot view the load imposed by the other organizations j, j ≠ i.

[Figure 1: each organization i has the local problem max b_i1 x_i1 + b_i2 x_i2 + … + b_it x_it s.t. x_i1 + x_i2 + … + x_it = d_i, while the facility couples the organizations through a_1k x_1k + a_2k x_2k + … + a_mk x_mk ≤ c_k, k = 1, …, t.]
Figure 1. The problem structure of the capacity allocation problem
This limited access to system-wide information makes the solution of the problem depicted in Figure 1 challenging. Clearly, the version of the problem with unrestricted information access can be solved using standard linear programming techniques.

3. Solution Approach
This section describes the proposed solution methodology based on Jeong and Leon (2000). The goal is to achieve close-to-optimal performance by means of interaction among organizations with only partial revelation of local information, as specified in the previous section. The methodology developed by Jeong and Leon (2000) is termed Cooperative Interaction via Coupling Agents (CICA). Compared to the existing decomposition methods mentioned in section 1, CICA allows the existence of multiple coupling agents that operate over arbitrary subsets of coupling constraints. In addition, the method requires only limited local information sharing among the subproblems, i.e. the coupled autonomous organizations. In this paper, the facility and the organizations are modeled as the coupling agent and the coupled autonomous organizations, respectively.

[Figure 2: the facility problem (FP) exchanges information with the organization problems (OP1, …, OPm); the facility sends the triplet {y_ik^(n−1), μ_ik^(n−1), π_ik^(n−1)} to organization i and receives the triplet {x_ik^n, α_ik^n, β_ik^n} in return.]
Figure 2. CICA model for a capacity allocation problem

Figure 2 illustrates how the capacity allocation problem is decomposed using CICA and the information exchange among the subsystems.
Based on the information distribution described in the previous section, the facility and the organizations interact by passing information triplets between them. Each triplet consists of the production quantity scheduled in each period and two marginal penalties associated with scheduling under or over the given quantity. Specifically, at iteration n:
• {x_ik^n, α_ik^n, β_ik^n} is passed from organization i to the facility, where α_ik^n and β_ik^n are the marginal penalties specified by organization i if deviations from x_ik^n occur. These penalties reflect both losses in the organization's objective value and demand constraint violations.
• {y_ik^(n−1), μ_ik^(n−1), π_ik^(n−1)} is passed from the facility to organization i, where μ_ik^(n−1) and π_ik^(n−1) are the marginal penalties specified by the facility for organization i if the scheduled quantity deviates from y_ik^(n−1). These penalties reflect both losses in the overall system compromise and capacity constraint violations.
Given each organizations’ triplets, the facility problem (FP) consists of minimizing the total costs caused from deviating from each organization’s specified quantities, subject to the facility capacity constraints. On the other hand, given the facility triplet, organization i’s problem (OPi) consists in optimizing its local objective and deviations from the facility’s recommended quantities subject to demand satisfaction constraints.
By
receiving system-wide information vectors from iteration-to-iteration from the facility, the coupled organizations can gradually gain global knowledge. Hence the lack of global information is compensated by partial information exchange through collaborative interactions.
Detailed description of the decision problems, and derivation and
calculation of these triplets are given in the following sections. 3.1. The organization problem (OPi) This section explains the organization problem OPi. If complete information about capacity constraints were available from the shared facility, then organization i could explicitly incorporate capacity constraints in its own local objective function as follows: Max
t
∑ k =1
t
bik x ik +
∑ k =1
m
θ k (c k −
∑ i =1
a ik x ik )
St.
t
∑x
ik
= di
(1)
k =1
where, θ k is the Lagrangian multiplier of the capacity constraint at time interval k. However, in this distributed environment, passing complete information about the capacity constraint is not allowed since it is assumed to be the private information of the facility. Thus, the facility passes only surrogate information about capacity constraint through a facility information vector that consists of a facility’s solution and penalty
weights, y_ik^(n−1), μ_ik^(n−1) and π_ik^(n−1), respectively. The facility's solution satisfies the capacity constraints of the facility, but it may violate the demand constraints since complete information from the organizations is not known to the facility. The penalty weights reflect the marginal penalty of violating the capacity constraints at the current facility solution. Thus, by sending this information to the organizations, the facility provides them with partial information about the global system. Given the facility information vector (y_ik^(n−1), μ_ik^(n−1), π_ik^(n−1)), OPi is the problem in which organization i estimates the objective function of (1) at the nth iteration as follows:

(OPi): max Σ_{k=1}^{t} b_ik x_ik^n + Σ_{k=1}^{t} [μ_ik^(n−1) max(0, y_ik^(n−1) − x_ik^n) + π_ik^(n−1) max(0, x_ik^n − y_ik^(n−1))]
s.t. Σ_{k=1}^{t} x_ik^n = d_i.    (2)
The new objective of the organization is to compromise between its local optimal solution and the facility's solution y_ik^(n−1). The second term of the objective function in (2) is an approximation of Σ_{k=1}^{t} θ_k (c_k − Σ_{i=1}^{m} a_ik x_ik) in (1).
Let the solution of (2) be x_ik^n. Penalty weights associated with deviations from x_ik^n can be formulated as follows:

α_ik^n = −∂[Σ_{k=1}^{t} b_ik x_ik + ρ_i^n (d_i − Σ_{k=1}^{t} x_ik)]/∂x_ik |_{x_ik = x_ik^n − Δ} = −(b_ik − ρ_i^n),    (3)

β_ik^n = ∂[Σ_{k=1}^{t} b_ik x_ik + ρ_i^n (d_i − Σ_{k=1}^{t} x_ik)]/∂x_ik |_{x_ik = x_ik^n + Δ} = (b_ik − ρ_i^n).    (4)

α_ik^n (β_ik^n) represents the cost increment of Σ_{i=1}^{m} Σ_{k=1}^{t} b_ik x_ik + ρ_i^n (d_i − Σ_{k=1}^{t} x_ik) as x_ik decreases (increases) by Δ from x_ik^n. Therefore the penalty weights contain information about both the local objective and the local constraints. In turn, x_ik^n, α_ik^n and β_ik^n are used as input for the nth iteration of the facility problem.
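For illustration only, a minimal sketch of how organization i might compute the penalty weights in (3) and (4) from its benefit coefficients and its current multiplier is given below; the function name and array layout are assumptions of this sketch, not part of the original formulation.

import numpy as np

def organization_penalties(b_i, rho_i):
    # Penalty weights (3)-(4) for organization i.
    # b_i   : benefit coefficients b_ik, k = 1, ..., t
    # rho_i : current multiplier of the demand constraint, rho_i^n
    b_i = np.asarray(b_i, dtype=float)
    alpha_i = -(b_i - rho_i)   # eq. (3): marginal cost of producing below x_ik^n
    beta_i = b_i - rho_i       # eq. (4): marginal cost of producing above x_ik^n
    return alpha_i, beta_i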
3.2. The facility problem (FP)
Given complete information, the facility's objective would be to achieve the global goal subject to the coupling constraints of the system (i.e. the capacity constraints), as follows:

max Σ_{i=1}^{m} [Σ_{k=1}^{t} b_ik y_ik + ρ_i (d_i − Σ_{k=1}^{t} y_ik)]
s.t. Σ_{i=1}^{m} a_ik y_ik ≤ c_k, k = 1, 2, …, t,    (5)

where ρ_i is the Lagrangian multiplier of the demand satisfaction constraint of organization i. However, in this paper, organizations disclose neither their local objective functions nor their demand satisfaction constraints to the facility. Rather, the organization information vector is the triplet formed by the organization's solution and penalty weights, x_ik^n, α_ik^n and β_ik^n, respectively. The approximation of (5) is as follows:

(FP): max Σ_{i=1}^{m} Σ_{k=1}^{t} [α_ik^n max(0, x_ik^n − y_ik^n) + β_ik^n max(0, y_ik^n − x_ik^n)]
s.t. Σ_{i=1}^{m} a_ik y_ik^n ≤ c_k, k = 1, 2, …, t.    (6)

Let the solution of (6) be y_ik^n. The penalty weights associated with deviating from y_ik^n can be calculated from the cost increment of Σ_{k=1}^{t} θ_k^n (c_k − Σ_{i=1}^{m} a_ik y_ik) when y_ik decreases/increases by Δ from y_ik^n, as follows:

μ_ik^n = −∂[θ_k^n (c_k − Σ_{i=1}^{m} a_ik y_ik)]/∂y_ik |_{y_ik = y_ik^n − Δ} = θ_k^n a_ik,    (7)

π_ik^n = ∂[θ_k^n (c_k − Σ_{i=1}^{m} a_ik y_ik)]/∂y_ik |_{y_ik = y_ik^n + Δ} = −θ_k^n a_ik.    (8)
Given (3), (4), (7) and (8), the organization problems and the facility problem can be reformulated by replacing μ_ik^(n−1) and π_ik^(n−1) in (2), and α_ik^n and β_ik^n in (6), as follows:

(OPi): max Σ_{k=1}^{t} (b_ik − θ_k^(n−1) a_ik) x_ik^n, s.t. Σ_{k=1}^{t} x_ik^n = d_i,    (9)

(FP): max Σ_{i=1}^{m} Σ_{k=1}^{t} (b_ik − ρ_i^n) y_ik^n, s.t. Σ_{i=1}^{m} a_ik y_ik^n ≤ c_k, k = 1, 2, …, t.    (10)
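As a hedged sketch, the reformulated subproblems (9) and (10) can be solved as ordinary LPs. The code below uses scipy.optimize.linprog (which minimizes, so the objectives are negated) and assumes nonnegative production quantities and the array shapes indicated in the comments; the function names are illustrative and not the authors' implementation, which used C and the CPLEX callable library.

import numpy as np
from scipy.optimize import linprog

def solve_OPi(b_i, a_i, theta, d_i):
    # Organization problem (9): max sum_k (b_ik - theta_k a_ik) x_ik, s.t. sum_k x_ik = d_i, x >= 0
    t = len(b_i)
    c = -(np.asarray(b_i, float) - np.asarray(theta, float) * np.asarray(a_i, float))
    res = linprog(c, A_eq=np.ones((1, t)), b_eq=[d_i],
                  bounds=[(0, None)] * t, method="highs")
    return res.x

def solve_FP(b, a, rho, cap):
    # Facility problem (10): max sum_i sum_k (b_ik - rho_i) y_ik,
    # s.t. sum_i a_ik y_ik <= c_k for every period k, y >= 0.  b and a are (m x t) arrays.
    m, t = b.shape
    c = -(b - rho[:, None]).ravel()            # variables y_ik flattened row by row
    A_ub = np.zeros((t, m * t))
    for k in range(t):
        A_ub[k, k::t] = a[:, k]                # coefficients a_ik of period k's capacity constraint
    res = linprog(c, A_ub=A_ub, b_ub=cap, bounds=[(0, None)] * (m * t), method="highs")
    return res.x.reshape(m, t)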
The organizations’ and the facility’s decision-making problems are linear programs. Thus the organization’s solution can oscillate between the vertex of the organization’s local feasible region. Also, the facility’s solution can oscillates between the extreme points of the coupling constraints. A modified version of the convex combination rule by Choi and Kim (1999) is proposed to resolve potential oscillation problems. The details of the algorithm are explained in section 3.4. 3.3. Updating the Lagrangian multiplier In order to solve (9) and (10) it is necessary to specify how to update the associated multipliers θ and ρ. Notably, the proposed methodology requires less global information for updating the multipliers than classical Lagrangian relaxation. Given y ink−1 from the facility, organization i updates the multipliers associated with its demand satisfaction constraints at nth iteration as follows: ρin
t
= ρin −1 − t in −1 (d i − ∑ y ikn −1 ) ,
(11)
k =1
ψ in −1 t in −1
t
∑b
n ik x ik
k =1
=
t
−
∑b
ik
k =1
di − y ikn −1 k =1 t
∑
ψ in −1
=
y ikn −1
,
2
(12)
ψ in − 2 if R xn −1 ≤ R xn − 2 ,
R xn − 2
= ψ in−2
R xn −1 t
where, R xn −1 =
∑
otherwise;
bik x ikn −
k =1
t
∑
di − y ikn −1 k =1
∑
t
bik y ikn −1
k =1
t
(13)
2
9
and R xn − 2 =
∑
bik x ikn −1 −
k =1
t
∑b
ik
y ikn − 2
k =1
di − y ikn − 2 k =1 t
∑
2
.
In (11), organization i updates ρ_i^n using only the available information about its own demand constraint and local objective function. Notice that the subgradient associated with the facility's objective, given y_ik^(n−1), is ±(d_i − Σ_{k=1}^{t} y_ik^(n−1)). Hence, for a maximization (minimization) Lagrangian dual problem, ρ_i^n is updated along the positive (negative) subgradient direction. The step size in (12) uses a different updating rule than classical Lagrangian relaxation: Σ_{k=1}^{t} b_ik x_ik^n − Σ_{k=1}^{t} b_ik y_ik^(n−1) is used instead of (z^n − z*) in order to avoid using a globally feasible solution in the calculations. In a distributed environment it is not easy to find a globally feasible solution to calculate z*, since no one has complete access to the constraints or the objective function of the entire system. If the organizations agree to accept the facility's solution proposed at the (n−1)th iteration, that is, x_ik^n = y_ik^(n−1) ∀i, k, then Σ_{k=1}^{t} b_ik x_ik^n − Σ_{k=1}^{t} b_ik y_ik^(n−1) = 0. Therefore Σ_{k=1}^{t} b_ik x_ik^n − Σ_{k=1}^{t} b_ik y_ik^(n−1) can be used as a criterion to measure whether the facility and the organizations have achieved a compromise solution. As shown in (13), the step length ψ_i^(n−1) is decreased by the step parameter p if Σ_{k=1}^{t} b_ik x_ik^n − Σ_{k=1}^{t} b_ik y_ik^(n−1) is not reduced compared to the previous iteration.
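A small sketch of the organization-side update (11)-(13) is given below; the caller is assumed to keep the previous-iteration quantities, the helper name and argument order are illustrative, and the subgradient d_i − Σ_k y_ik^(n−1) is assumed to be nonzero.

import numpy as np

def update_rho(rho_prev, psi_prev, R_prev, b_i, x_i, y_prev, d_i):
    # One application of (11)-(13) for organization i.
    # rho_prev: rho_i^(n-1), psi_prev: current step length, R_prev: R_x^(n-2)
    gap = float(np.dot(b_i, x_i) - np.dot(b_i, y_prev))   # sum_k b_ik x_ik^n - sum_k b_ik y_ik^(n-1)
    subgrad = d_i - float(np.sum(y_prev))                  # d_i - sum_k y_ik^(n-1)
    R_cur = gap / subgrad ** 2                             # R_x^(n-1) in (13)
    psi = psi_prev if R_cur <= R_prev else psi_prev * R_prev / R_cur   # step-length rule (13)
    t_step = psi * gap / subgrad ** 2                      # step size (12)
    rho = rho_prev - t_step * subgrad                      # multiplier update (11)
    return rho, psi, R_cur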
Conversely, given x_ik^n from the organizations, the facility updates the Lagrangian multiplier of the capacity constraint at the kth time interval for the nth iteration as follows:

θ_k^n = max(0, θ_k^(n−1) − s^n (c_k − Σ_{i=1}^{m} a_ik x_ik^n)),    (14)

s^n = τ^n Ẑ_CA^n / Σ_{k=1}^{t} (c_k − Σ_{i=1}^{m} a_ik x_ik^n)^2,    (15)

τ^n = τ^(n−1)                        if R_y^n ≤ R_y^(n−1),
    = τ^(n−1) R_y^(n−1) / R_y^n       otherwise,    (16)

where R_y^n = Ẑ_CA^n / Σ_{k=1}^{t} (c_k − Σ_{i=1}^{m} a_ik x_ik^n)^2 and R_y^(n−1) = Ẑ_CA^(n−1) / Σ_{k=1}^{t} (c_k − Σ_{i=1}^{m} a_ik x_ik^(n−1))^2.
Expression (14) updates θ_k^n along the positive (negative) subgradient direction if the Lagrangian dual problem of (1) is a maximization (minimization) problem, using only the available information about the facility's capacity constraints. In order to avoid using global information, Ẑ_CA^n is used instead of (z^n − z*) in determining the step size in (15), where Ẑ_CA^n is the objective value of the facility's problem (6) or (10). If Ẑ_CA^n = 0, the facility agrees with the solution proposed by the organizations, and the rule does not change θ_k^(n−1). If Ẑ_CA^n ≠ 0, the system solution and the local solutions differ, and the rule changes the current Lagrangian multiplier proportionally to Ẑ_CA^n. Also, the step length is reduced by the step parameter whenever Ẑ_CA^n fails to improve compared to the previous iteration, as shown in (16).

It is well known that the subgradient method does not work well on linear programs (Lasdon 1979, Jennergren 1973, Sherali and Choi 1996). The local solution always oscillates among the vertices of the local feasible region because the objective function of the Lagrangian subproblem is also linear, whereas the global optimum may lie in the interior of the local feasible region. To resolve this oscillation problem, this paper adopts the convex combination rule proposed by Choi and Kim (1999). Let X_i^n be the primal solution proposed by organization i and Y^n the primal solution proposed by the facility at the nth iteration. Using the convex combination rule, the primal solutions are defined as follows (see Choi and Kim 1999 for details):

X_i^n = (1/n) Σ_{j=1}^{n} x_i^j, i = 1, …, m, and Y^n = (1/n) Σ_{j=1}^{n} y^j,    (17)

where x_i^j is the solution of (2) and y^j is the solution of (6) at the jth iteration, j = 1, …, n. Thus (17) implies that the primal solution is recovered by giving equal weight to all of the organizations' and the facility's solutions generated so far.
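Analogously, a sketch of the facility-side update (14)-(16) and of the primal averaging (17) is shown below; Z_ca stands for the facility objective value Ẑ_CA^n of (6) or (10), R_prev for R_y^(n−1), and the names are illustrative.

import numpy as np

def update_theta(theta_prev, tau_prev, R_prev, a, x, cap, Z_ca):
    # Facility multiplier update (14)-(16). a and x are (m x t); cap is the capacity vector c_k.
    slack = cap - np.sum(a * x, axis=0)                    # c_k - sum_i a_ik x_ik^n
    denom = float(np.sum(slack ** 2))
    R_cur = Z_ca / denom                                   # R_y^n
    tau = tau_prev if R_cur <= R_prev else tau_prev * R_prev / R_cur   # step-length rule (16)
    s = tau * Z_ca / denom                                 # step size (15)
    theta = np.maximum(0.0, theta_prev - s * slack)        # projected update (14)
    return theta, tau, R_cur

def convex_combination(history):
    # Primal recovery (17): equal-weight average of all solutions proposed so far.
    return np.mean(np.stack(history, axis=0), axis=0)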
3.4. CICA algorithm for distributed capacity allocation problems
This section presents the proposed CICA algorithm for distributed capacity allocation problems; measures to evaluate the quality of a solution are proposed in the next section.

Initialization: Set the maximum number of iterations N. Set μ_ik^0 = π_ik^0 = 0 ∀i, k; s^0 = t_i^0 = 0; τ^0 = ψ_i^0 = 2; and 0 < p < 1. Set n = 1.
Step 1. Organizations' problems. For all i = 1, …, m:
  Step 1.1 Solve the organization's problem (OPi) as shown in (9).
  Step 1.2 Store the organization's primal solution X_i^n as shown in (17).
  Step 1.3 Update the step length ψ_i^(n−1) as shown in (13).
  Step 1.4 Update the step size t_i^(n−1) as shown in (12).
  Step 1.5 Update ρ_i^n as shown in (11).
  Step 1.6 Calculate α_ik^n and β_ik^n as shown in (3) and (4).
  Step 1.7 Pass x_ik^n, α_ik^n and β_ik^n, ∀i, k, to the facility.
Step 2. Facility's problem.
  Step 2.1 Solve the facility's problem (FP) as shown in (10).
  Step 2.2 Store the facility's primal solution Y^n as shown in (17).
  Step 2.3 Update the step length τ^n as shown in (16).
  Step 2.4 Update the step size s^n as shown in (15).
  Step 2.5 Update θ_k^n as shown in (14).
  Step 2.6 Calculate μ_ik^n and π_ik^n as shown in (7) and (8).
  Step 2.7 Pass μ_ik^n, π_ik^n and y_ik^n, ∀k, to organization i, i = 1, …, m.
Step 3. If n + 1 > N or convergence is achieved, stop. Otherwise set n = n + 1 and go to Step 1.

The algorithm stops if: (i) convergence, or compromise, is achieved, i.e. X^n = (X_1^n, …, X_m^n) = Y^n; the compromise solution X^n = Y^n is globally feasible but not necessarily optimal; (ii) all step parameters are close to zero, i.e. ψ_i^n ≤ ε ∀i and τ^n ≤ ε, where ε is a small positive real constant; or (iii) the given number of iterations N is reached. Similar to other Lagrangian relaxation based methodologies, this algorithm guarantees neither global feasibility nor convergence to the optimum; typically, simple heuristic methods are applied to restore global feasibility. Feasibility restoration heuristics are not implemented in this paper, so that the basic forms of the methodologies under consideration can be compared.
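Putting the pieces together, the iteration of section 3.4 can be outlined as below. The sketch reuses the illustrative helpers sketched in sections 3.1-3.3 (solve_OPi, solve_FP, update_rho, update_theta, convex_combination), initializes the multipliers to zero, and omits the explicit triplet message passing, so it is an outline under those assumptions rather than the authors' implementation.

import numpy as np

def cica(b, a, cap, d, N=100, eps=1e-5):
    # Outline of the CICA algorithm of section 3.4 for m organizations and t periods.
    # b, a: (m x t) benefits and processing times; cap: capacities c_k; d: demands d_i.
    # Relies on solve_OPi, solve_FP, update_rho, update_theta, convex_combination sketched above.
    m, t = b.shape
    theta, rho = np.zeros(t), np.zeros(m)          # multipliers theta_k and rho_i
    psi, tau = np.full(m, 2.0), 2.0                # initial step lengths
    Rx, Ry = np.full(m, np.inf), np.inf
    x_hist, y_hist = [], []
    y_prev = np.zeros((m, t))

    for n in range(1, N + 1):
        # Step 1: each organization solves (9) and updates rho_i via (11)-(13)
        x = np.vstack([solve_OPi(b[i], a[i], theta, d[i]) for i in range(m)])
        x_hist.append(x)
        for i in range(m):
            rho[i], psi[i], Rx[i] = update_rho(rho[i], psi[i], Rx[i],
                                               b[i], x[i], y_prev[i], d[i])
        # Step 2: the facility solves (10) and updates theta_k via (14)-(16)
        y = solve_FP(b, a, rho, cap)
        y_hist.append(y)
        Z_ca = float(np.sum((b - rho[:, None]) * y))       # facility objective of (10)
        theta, tau, Ry = update_theta(theta, tau, Ry, a, x, cap, Z_ca)
        y_prev = y
        # Step 3: stop on compromise, vanishing step lengths, or the iteration limit
        X_bar, Y_bar = convex_combination(x_hist), convex_combination(y_hist)
        if np.allclose(X_bar, Y_bar) or (np.all(psi <= eps) and tau <= eps):
            break
    return X_bar, Y_bar

Consistent with the paper, no feasibility-restoration heuristic is applied to the averaged solutions returned by the sketch.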
3.5. Performance measures
To experimentally evaluate the performance of CICA, performance measures for solution quality and feasibility are developed in this section. The degree of constraint violation is measured with specially designed performance measures: CV is proposed as a measure of capacity violation for the organizations' solution X, and DV as a measure of demand violation for the facility's solution Y:

CV = Σ_{k=1}^{t} max(0, Σ_{i=1}^{m} a_ik x_ik − c_k) / Σ_{k=1}^{t} c_k,    (18)

DV = Σ_{i=1}^{m} |d_i − Σ_{k=1}^{t} y_ik| / Σ_{i=1}^{m} d_i.    (19)
CV represents the total excess of capacity per unit capacity incurred by X, and DV represents the total demand violation per unit demand incurred by Y. Note that DV is zero for X and CV is zero for Y. Another measure is the closeness of X and Y to the centralized solution. The closeness of the individual solutions X and Y to the global solution can be evaluated by the percent deviation (PD) from the optimal value as follows:
PD_x = (Z_x − Z*)/Z* and PD_y = (Z_y − Z*)/Z*,    (20)

where Z_x, Z_y and Z* are the global objective values of the organizations' solution, the facility's solution and the optimal solution, respectively. The second type of closeness is the compromise gap (CG) between X and Y, defined as

CG_x = (Z_x − Z_y)/Z_x and CG_y = (Z_x − Z_y)/Z_y.    (21)
CG_x represents the percent deviation of Y from X, and CG_y the percent deviation of X from Y. A small CG does not necessarily mean that the solutions are close to the optimal solution, since both may deviate greatly from the optimal objective value. Therefore the quality of the solutions must be judged considering CV, DV, PD and CG simultaneously.
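A short sketch of how the measures (18)-(21) might be computed is given below; X and Y are the (m x t) organization and facility solutions, Z_star is assumed to come from solving the centralized LP, and absolute values are used since the reported measures are nonnegative.

import numpy as np

def performance_measures(X, Y, a, cap, d, b, Z_star):
    # CV and DV per (18)-(19), PD and CG per (20)-(21).
    CV = float(np.sum(np.maximum(0.0, np.sum(a * X, axis=0) - cap)) / np.sum(cap))  # (18)
    DV = float(np.sum(np.abs(d - np.sum(Y, axis=1))) / np.sum(d))                   # (19)
    Zx, Zy = float(np.sum(b * X)), float(np.sum(b * Y))
    PDx, PDy = abs(Zx - Z_star) / Z_star, abs(Zy - Z_star) / Z_star                 # (20)
    CGx, CGy = abs(Zx - Zy) / Zx, abs(Zx - Zy) / Zy                                 # (21)
    return CV, DV, PDx, PDy, CGx, CGy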
4. Experimental results
This section first introduces an example to discuss the convergence and performance of CICA. Next, two versions of CICA are experimentally tested on a set of randomly generated problems to explore how the amount of information sharing affects the performance of the methodology. The results are also compared with an implementation of a Lagrangian-based algorithm and with the global solution; the two latter methods are expected to outperform CICA because they have unrestricted access to global information.

4.1. Example
Consider a capacity allocation problem with one facility, two organizations and four time intervals; that is, m = 2 and t = 4. The centralized problem for this example is formulated as follows:

max 20x11 + 30x12 + 10x13 + 50x14 + 10x21 + 20x22 + 40x23 + 30x24
s.t. x11 + x12 + x13 + x14 = 400
     x21 + x22 + x23 + x24 = 400
     2x11 + 3x21 ≤ 480
     4x12 + 1x22 ≤ 480
     3x13 + 5x23 ≤ 480
     5x14 + 2x24 ≤ 480
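For reference, the centralized example can be solved directly; the sketch below feeds the coefficients transcribed above to scipy.optimize.linprog (which minimizes, so the benefits are negated) and prints the resulting objective and allocation. This is an illustration only, not the authors' code.

import numpy as np
from scipy.optimize import linprog

b = np.array([[20, 30, 10, 50],
              [10, 20, 40, 30]], float)   # benefits b_ik for products 1 and 2
a = np.array([[2, 4, 3, 5],
              [3, 1, 5, 2]], float)       # processing times a_ik
d = np.array([400.0, 400.0])              # demands d_i
cap = np.full(4, 480.0)                   # capacities c_k

c = -b.ravel()                                   # maximize total benefit
A_eq = np.kron(np.eye(2), np.ones(4))            # one demand constraint per product
A_ub = np.zeros((4, 8))
for k in range(4):
    A_ub[k, k::4] = a[:, k]                      # capacity constraint of period k
res = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=d,
              bounds=[(0, None)] * 8, method="highs")
print(-res.fun, res.x.reshape(2, 4))             # centralized objective and allocation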
The global solution can be obtained easily since this is a simple linear program. Table 1 shows the centralized solution, Lagrangian relaxation and CICA for the example problem with N = 100 and ε = 0.00001. As mentioned in section 2, two versions of CICA are implemented. The first version is CICA without capacity information (CICA-WOCI). The second version is CICA with partial capacity information (CICA-WPCI); in this case the capacity of the facility at each time interval is known to the organizations, and the organization problem becomes

max Σ_{k=1}^{t} (b_ik − θ_k a_ik) x_ik
s.t. Σ_{k=1}^{t} x_ik = d_i, a_ik x_ik ≤ c_k ∀k.    (22)
However, the organizations do not know the specific allocation of capacity among them, i.e. organization i does not know x_jk, j ≠ i. Relative to CICA-WOCI, the organizations receive more information about the system in the case of CICA-WPCI. The final results of Lagrangian relaxation and the two versions of CICA are shown in Table 1. Note that CICA-WPCI gives a solution that is very close to the optimal solution and dominates CICA-WOCI. It is interesting to compare the Lagrangian relaxation solution to the organizations' solution because both satisfy the demand constraints and may violate the capacity constraints. The results show that the Lagrangian relaxation solution dominates the organizations' solution in the case of CICA-WOCI, but is dominated by the organizations' solution in the case of CICA-WPCI.
Method                              Objective value   CV      DV      PD      CGx     CGy
Centralized algorithm               20687.06          0       0       0       N/A     N/A
Lagrangian Relaxation               20133.33          0.192   0       0.027   N/A     N/A
CICA-WOCI, organizations' solution  24655.7           0.37    0       0.19    0.125   0.14
CICA-WOCI, facility's solution      21565.9           0       0.16    0.04    0.125   0.14
CICA-WPCI, organizations' solution  20385.6           0.14    0       0.015   0.006   0.006
CICA-WPCI, facility's solution      20516.8           0       0.04    0.0082  0.006   0.006
Table 1. The solutions of the algorithms for the example

Figure 3 shows the variation of the objective value of Lagrangian relaxation (LR), CICA-WPCI (OR: organization; CA: facility) and the optimal solution (Opt) for the example problem as the iterations proceed. For LR the solution converges to a near-optimal solution quickly. CICA-WPCI behaves similarly to LR: the solution converges very fast and there is a very small gap between the organization's and the facility's solution. In addition, the two solutions, CA and OR, are very close to the global optimal solution. For CICA-WOCI, not shown in the figure, we observed that it took relatively more iterations to converge and the gap between the organization's solution and the facility's
solution was larger. The facility’s solution is close to the optimal solution but the organization’s solution is far from the optimal solution.
[Figure 3: objective value versus iteration for Opt, OR, CA and LR.]
Figure 3. LR and CICA-WPCI versus iterations for the example.

The results suggest that CICA has potential as a distributed algorithm in this problem context. As expected, the results show that the performance of CICA can be improved by sending more information about the system to the organizations. In the next section, CICA is tested on a large set of randomly generated problems.

4.2. Experiments
The experiments consider a capacity allocation problem with two organizations, m = 2, and the algorithm stops after N = 100 iterations or when the step parameters fall below ε = 0.00001. Table 2 shows the five factors and corresponding test levels considered in the experiments. In order to avoid generating infeasible centralized problems, the capacity ratio factor E is introduced. Let x'_ik, ∀i, k, be a random solution that satisfies the demand constraints. Then the capacity at time interval k is determined using the following equation:

c_k = E Σ_{i=1}^{2} a_ik x'_ik, k = 1, 2, …, t.    (23)
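A minimal sketch of the instance-generation step in (23) is shown below: a random solution satisfying the demand constraints is drawn first and the per-period capacities are then scaled by E. The helper name and the way the demand is split across periods are assumptions of this sketch, not details given in the paper.

import numpy as np

def generate_capacities(a, d, E, rng=None):
    # Capacities per (23): c_k = E * sum_i a_ik x'_ik, with x' satisfying the demands.
    if rng is None:
        rng = np.random.default_rng()
    m, t = a.shape
    w = rng.random((m, t))
    x_prime = d[:, None] * w / w.sum(axis=1, keepdims=True)   # split each d_i over the t periods
    return E * np.sum(a * x_prime, axis=0)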
If E ≥ 1.0, the centralized problem is always feasible since it has at least one feasible solution, namely the x'_ik. If E < 1.0, the problem can be infeasible. The number of problem types is 2^5 = 32 and 10 random problems are generated for each problem type, so the total number of test problems is 320. Statistical analysis is performed using DESIGN EXPERT and STATGRAPHICS PLUS. The algorithms are coded in the C language and use the CPLEX callable library to solve the linear programs.
Factor   Description                Level 1 (small variance)   Level 2 (large variance)
A        Number of time intervals   5                          10
B        Benefit coefficients       U(10,30)*                  U(10,50)
C        Processing times           U(2,5)                     U(2,10)
D        Demands                    U(300,500)                 U(300,600)
E        Capacity ratio             1.0                        1.1
* U(a,b) means the discrete uniform distribution with minimum a and maximum b.
Table 2. Factors considered

Table 3 shows the results of CICA-WOCI, CICA-WPCI and Lagrangian relaxation, respectively. In each case the table shows the minimum, average and maximum of the performance measures for the organizations' solution and the facility's solution, as well as the number of iterations needed to reach the solution. For each algorithm the table also shows the statistically significant factors for each performance measure. The significant factors, whose p-values are less than 0.01, are selected under the null hypothesis that the factor has no effect.
A factor shown in bold means that the corresponding measure increases when the factor is generated from its second level. CICA-WOCI and CICA-WPCI are compared under the null hypothesis H0: μ_CICA-WPCI = μ_CICA-WOCI and the alternative hypothesis H1: μ_CICA-WPCI < μ_CICA-WOCI for all performance measures, where μ_CICA-WOCI and μ_CICA-WPCI are the means for CICA-WOCI and CICA-WPCI, respectively. The results show that the null hypothesis is rejected with p-value less than 0.001 for all performance measures except CV, for which the p-value is 0.033. Therefore CICA-WPCI is better than CICA-WOCI at the 5% significance level. However, note that CICA-WPCI needs more iterations to reach a solution than does CICA-WOCI. Our implementation of Lagrangian relaxation with Choi and Kim's algorithm is applied to the same set of problems. The statistical analysis is performed for the null hypothesis H0: μ_CICA-WPCI = μ_LR and the alternative hypothesis H1: μ_CICA-WPCI < μ_LR for PDx and CV, where μ_LR is the mean for Lagrangian relaxation. The p-value for PDx is less than 0.001 and the p-value for CV is 0.142. Therefore CICA-WPCI is significantly better than LR for PDx, but the null hypothesis cannot be rejected for CV. Note that Lagrangian relaxation takes fewer iterations to converge than does CICA-WPCI.
                          PDx        PDy      CV       DV       CGx      CGy      # of iterations
CICA-WOCI
  Min.                    0.0000     0.0002   0.0000   0.0105   0.0005   0.0005   19.0000
  Avg.                    0.1246     0.0993   0.3554   0.2259   0.1170   0.1356   64.4594
  Max.                    0.4752     0.5363   0.9362   0.9939   0.5057   1.0229   100.0000
  Significant factors     No factor  C, E     B        B        C, E     A, C     *
CICA-WPCI
  Min.                    0.0000     0.0002   0.0000   0.0078   0.0001   0.0001   21.0000
  Avg.                    0.0261     0.0870   0.0762   0.1808   0.0889   0.0833   82.1719
  Max.                    0.2506     0.6005   0.4538   0.7407   0.6005   0.6982   100.0000
  Significant factors     E          E        E        E        E        A, C, E  *
Lagrangian Relaxation
  Min.                    0.0000     N/A      0.0000   N/A      N/A      N/A      15
  Avg.                    0.0408     N/A      0.0833   N/A      N/A      N/A      44.80938
  Max.                    0.4288     N/A      0.4272   N/A      N/A      N/A      100
  Significant factors     C          N/A      A, E     N/A      N/A      N/A      No factor
* means that we could not transform the data to maintain independency.
Table 3. The test results for CICA and Lagrangian relaxation

The statistically significant factors at the 1% significance level are shown in Table 3. The capacity ratio factor E appears to be the most significant factor for both CICA-WOCI and CICA-WPCI. For CICA-WOCI, the second level of factor E deteriorates the performance, whereas for CICA-WPCI it improves the performance. This can be explained by the way the random problems are generated in the experiments. A random solution x'_ik, ∀i, k, satisfying the demand constraints is generated first, and the processing times are drawn from a discrete uniform distribution. The capacity at time interval k is then determined by multiplying the capacity ratio E by the capacity required to produce the random solution, that is, E Σ_{i=1}^{2} a_ik x'_ik. If E is increased, the capacity is also increased. The increased capacity reduces the chance that the organizations violate the capacity constraints, i.e. CV decreases. In addition, the increased capacity enlarges the solution space, so PDx and PDy decrease. These effects hold only for CICA-WPCI, since capacity information is not given to the organizations in the case of CICA-WOCI.

5. Conclusion
This paper investigated capacity allocation problems in distributed environments where the decision-making problems are formulated as linear programs. CICA was applied as the solution methodology, and a modified convex combination rule was proposed to avoid the oscillation problem of linear programs in distributed environments. Interestingly, CICA showed better performance than Lagrangian relaxation, especially when the capacity of the facility was partially known to the organizations. It is conjectured that the performance of Lagrangian relaxation could improve given a better-quality upper bound. The experimental results showed that the proposed modified convex combination rule worked well for CICA. Further research is needed to assess the performance of the approach when more than one coupling agent is used (e.g., more facilities, or splitting the coupling constraints into arbitrary subsets).
References
BAZARAA, M. S., SHERALI, H. D. and SHETTY, C. M., 1993, Nonlinear Programming: Theory and Algorithms, 2nd edn (New York: John Wiley & Sons).
BENDERS, J. R., 1962, Partitioning procedures for solving mixed variables programming problems. Numerische Mathematik, 4, 238-252.
CHOI, G. and KIM, C., 1999, Primal recovery strategy for Lagrangian dual subgradient-based methods. The Korean Institute of Industrial Engineers/Korean Operations Research and Management Science Society, Proceedings of the '99 Spring Conference.
DANTZIG, G. B. and WOLFE, P., 1961, The decomposition algorithm for linear programs. Econometrica, 29, 767-778.
FISHER, M., 1981, The Lagrangian relaxation method for solving integer programming problems. Management Science, 27, 1-18.
GOEDHART, M. H. and SPRONK, J., 1995, An iterative heuristic for financial planning in decentralized organizations. European Journal of Operational Research, 86, 162-175.
JEONG, I. J. and LEON, V. J., 2002, Decision-making and cooperative interaction via coupling agents in organizationally distributed system. IIE Transactions - Special Issue in Large Scale Optimization, in print.
KATE, A. T., 1972, Decomposition of linear programs by direct distribution. Econometrica, 40(5), 883-898.
KURT, J. and RAINER, L., 1995, Decomposition and iterative aggregation in hierarchical and decentralized planning structures. European Journal of Operational Research, 86, 120-141.
LASDON, L., 1979, Optimization Theory for Large Systems (New York: Macmillan).
SHERALI, H. and CHOI, G., 1996, Recovery of primal solutions when using subgradient optimization methods to solve Lagrangian duals of linear programs. Operations Research Letters, 19, 105-113.
VAN DE PANNE, C., 1991, Decentralization for multidivision enterprises. Operations Research, 39(5), 786-797.
WOLFE, P. and CROWDER, H. D., 1974, Validation of subgradient optimization. Mathematical Programming, 6, 62-88.