Communications in Numerical Analysis, Volume 2015, Issue 2 (2015) 162-177
Available online at www.ispacs.com/cna
Article ID cna-00243, 16 Pages
doi:10.5899/2015/cna-00243

Research Article

Monomial geometric programming with an arbitrary fuzzy relational inequality

E. Shivanian(1)*, F. Sohrabi(2)

(1) Department of Mathematics, Imam Khomeini International University, Qazvin, 34149-16818, Iran
(2) Department of Mathematics, Alborz Institute of Higher Education, Qazvin, Iran

Copyright 2015 © E. Shivanian and F. Sohrabi. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, an optimization model with a geometric objective function is presented. Geometric programming is widely used; many objective functions in optimization problems can be analyzed by geometric programming, which we often encounter in resource allocation, structure optimization, technology management, etc. On the other hand, fuzzy relation equalities and inequalities are also used in many areas. We present a geometric programming model with a monomial objective function subject to fuzzy relation inequality constraints with an arbitrary function. The feasible solution set is determined and compared with some common results in the literature. A necessary and sufficient condition and three other necessary conditions are presented to conceptualize the feasibility of the problem. In general, a lower bound is always attainable for the optimal objective value by removing the components having no effect on the solution process. By treating non-decreasing and non-increasing operators separately when proving optimality of the solution, we simplify the operations and accelerate the resolution of the problem.

Keywords: Monomial geometric function optimization; Fuzzy relations; Fuzzy relational inequalities; Fuzzy compositions and t-norms.

1 Introduction

Fuzzy set theory was first introduced in 1965 by Zadeh [1]. The operations proposed by him are specified as the membership functions of the intersection and union of two fuzzy sets, and of the complement of a normalized fuzzy set. Since the resolution of fuzzy relation equations was proposed by Sanchez [2], many researchers have studied different fuzzy relational equations (FREs) and fuzzy relational inequalities (FRIs) and their connected problems in both theoretical and applied areas [3-16]. Generally, FREs and FRIs have a number of properties that make them suitable for formulating the uncertain information upon which many applied concepts are usually based. FRE theory has been applied in different fields including fuzzy decision making [17], fuzzy control [18], fuzzy modeling [19], medical diagnosis [20,21], fuzzy analysis, compression

* Corresponding Author. Email address: [email protected], Tel: +989126825371


and decompression of images and videos [22-25], and estimation of flow rates in chemical plants, pipe networks and transport systems [14]. Fang and Li studied the optimization problem with respect to FRE constraints by considering the max-min composition [1,6,26]. Recent results in the literature, however, show that the min operator is not always the best choice for the intersection operation. Instead, the max-product composition provided results better than or equivalent to those of the max-min composition in some applications (Alayon et al. [27]). Bourke and Fisher [28] provided theoretical results for determining the complete sets of solutions as well as the conditions for the existence of resolutions. Their results showed that such complete sets of solutions can be characterized by one maximum solution and a number of minimal solutions. An optimization problem with the max-product composition was studied by Loetamonphong and Fang [3]. Also, Guo and Xia [29] presented an algorithm to accelerate the resolution of this problem. Zener, Duffin and Peterson proposed the geometric programming theory in 1961 (Duffin et al. [30]). In 1987, Cao proposed the fuzzy geometric programming problem and solved several problems of power systems (Cao [31]). In view of the importance of geometric programming and fuzzy relation equations in theory and applications, Yang and Cao [6] proposed fuzzy relation geometric programming and discussed optimal solutions with two kinds of objective functions based on the fuzzy max-product operator. Shivanian and Keshtkar generalized the geometric programming of the FRE with the max-product operator by considering fuzzy relation inequalities instead of equations in the constraints [33-36]. The monograph by Di Nola, Sessa, Pedrycz and Sanchez [7] contains a thorough discussion of this class of equations. Fang and Li solved the linear optimization problem subject to FRE constraints using a branch-and-bound method with a jump-tracking technique.
Their method was improved by Wu et al. [37], who presented a procedure for decreasing the search domain. They also simplified the optimization process using three rules resulting from a necessary condition [38]. The optimization problems with max-min and max-product composition can be separated into two subproblems by separating the nonnegative and negative coefficients in the objective function. The subproblem formed by the negative coefficients can be solved easily by obtaining the maximum solution of the feasible solution set. The other subproblem was converted into a 0-1 integer programming problem and was solved using a branch-and-bound method; the associated 0-1 integer programming problem was solved by Fang and Li [26] using the branch-and-bound technique. Lu and Fang [7] proposed a genetic algorithm to solve the nonlinear optimization problem with max-min composition, and Guu and Wu [22] improved Fang and Li's method by providing an upper bound for the branch-and-bound procedure. Guu and Wu [39] provided a necessary condition for an optimal solution in terms of the maximum solution derived from the FREs, in order to derive an efficient procedure for solving the linear optimization problem. Khorram and Mashayekhi [40] strengthened that condition and provided a faster algorithm to solve a similar problem. Wang [41] studied an optimization problem subject to FREs with multiple linear objective functions. Guu et al. [42] considered a multi-objective optimization with a max-t-norm FRE constraint. Loetamonphong et al. [8] investigated a nonlinear multi-objective optimization problem with FRE constraints and proposed a genetic algorithm to find the Pareto optimal solutions. By contrast, Lu and Fang [43] considered a single nonlinear objective function and described an efficient procedure to simplify the problem; they solved that problem, with an FRE constraint and the max-min operator, using a genetic algorithm.
Ghodousian and Khorram [44] studied a similar problem in which the constraints were considered in fuzzy form. They proved the equivalency of the simplification operations in a previous study [45]. In addition to FREs and FRIs defined with various t-norms, many applications involve operators that are not t-norms. Such operators motivated Ghodousian and Khorram to formulate FREs defined with an arbitrary function instead of a t-norm [46]. In this paper, we generalize the monomial optimization with an arbitrary operator by considering the fuzzy relation inequality constraints. This problem can be formulated as follows:


$$\min\ z=c\prod_{j=1}^{n}x_j^{\gamma_j}\quad\text{s.t.}\quad A\circ x\le b^1,\quad D\circ x\ge b^2,\quad x\in[0,1]^n,\qquad(1.1)$$

which, written constraint by constraint, is

$$\min\ z=c\prod_{j=1}^{n}x_j^{\gamma_j}\quad\text{s.t.}\quad \varphi(a_i,x)\le b_i^1,\ i\in I_1,\quad \varphi(d_i,x)\ge b_i^2,\ i\in I_2,\quad x\in[0,1]^n,\qquad(1.2)$$

where $A$ and $D$ are fuzzy matrices such that, with $I_1=\{1,2,\dots,m\}$, $I_2=\{m+1,m+2,\dots,m+l\}$ and $J=\{1,2,\dots,n\}$:

$0\le a_{ij}\le 1$ for $j\in J$, $i\in I_1$, $A=(a_{ij})_{m\times n}$;
$0\le d_{ij}\le 1$ for $j\in J$, $i\in I_2$, $D=(d_{ij})_{l\times n}$;
$b^1=(b_1^1,b_2^1,\dots,b_m^1)\in[0,1]^m$ and $b^2=(b_{m+1}^2,b_{m+2}^2,\dots,b_{m+l}^2)\in[0,1]^l$.

Suppose $a_i$ and $d_i$ are the $i$'th rows of the matrices $A$ and $D$, respectively. Also, $\varphi$ is an arbitrary function used in place of a known t-norm. We define the constraints by the max-$\varphi$ composition:

$$\varphi(a_i,x)=\varphi(a_{i1},x_1)\vee\varphi(a_{i2},x_2)\vee\dots\vee\varphi(a_{in},x_n)\le b_i^1,\quad i\in I_1,\qquad(1.3)$$

$$\varphi(d_i,x)=\varphi(d_{i1},x_1)\vee\varphi(d_{i2},x_2)\vee\dots\vee\varphi(d_{in},x_n)\ge b_i^2,\quad i\in I_2.\qquad(1.4)$$

2 The characteristics of the set of feasible solutions

We first explain the basic definitions and concepts used throughout the paper.

Definition 2.1.
a) For each $i\in I_1$ and each $j\in J$ we define $\Im_{ij}^1=\{x_j\in[0,1]:\varphi(a_{ij},x_j)\le b_i^1\}$.
(i) If $\Im_{ij}^1\ne\emptyset$ then $\sup\Im_{ij}^1=x_{\max}(a_{ij},b_i^1)$ and $\inf\Im_{ij}^1=x_{\min}(a_{ij},b_i^1)$.
(ii) If $\Im_{ij}^1=\emptyset$ then we set $x_{\max}(a_{ij},b_i^1)=x_{\min}(a_{ij},b_i^1)=\emptyset$.
b) For each $i\in I_2$ and each $j\in J$ we define $\Im_{ij}^2=\{x_j\in[0,1]:\varphi(d_{ij},x_j)\ge b_i^2\}$.
(i) If $\Im_{ij}^2\ne\emptyset$ then $\sup\Im_{ij}^2=x_{\max}(d_{ij},b_i^2)$ and $\inf\Im_{ij}^2=x_{\min}(d_{ij},b_i^2)$.
(ii) If $\Im_{ij}^2=\emptyset$ then we set $x_{\max}(d_{ij},b_i^2)=x_{\min}(d_{ij},b_i^2)=\emptyset$.

In addition, we use the following notation throughout the paper:
$J_i^1=\{j\in J:\Im_{ij}^1\ne\emptyset\}$, $i\in I_1$, and $J_i^2=\{j\in J:\Im_{ij}^2\ne\emptyset\}$, $i\in I_2$.
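To make Definition 2.1 concrete, the endpoint values $x_{\min}$ and $x_{\max}$ can be computed in closed form once a specific operator is fixed. The sketch below is my own illustration (not code from the paper) and assumes the max-product t-norm $\varphi(a,x)=a\cdot x$, for which both sets are intervals:

```python
def interval1(a, b):
    """I^1_ij = {x in [0,1] : a*x <= b} = [0, min(1, b/a)].
    Returns (x_min, x_max); the set is all of [0,1] when a == 0."""
    hi = 1.0 if a == 0 else min(1.0, b / a)
    return (0.0, hi)

def interval2(d, b):
    """I^2_ij = {x in [0,1] : d*x >= b}; equals [b/d, 1] when d >= b > 0,
    all of [0,1] when b == 0, and the empty set (None) otherwise."""
    if b == 0:
        return (0.0, 1.0)
    if d >= b:
        return (b / d, 1.0)
    return None

# Entries a_11 = 0.6, b_1^1 = 0.48 and d_11 = 0.5, b_1^2 = 0.4 of the
# numerical example give I^1_11 = [0, 0.8] and I^2_11 = [0.8, 1].
```

With these two helpers, $J_i^1$ and $J_i^2$ are simply the column indices for which the corresponding interval is nonempty.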


Definition 2.2. For each $i\in I_1$ and each $i\in I_2$ let:
a) $S_1(a_i,b_i^1)=\{x\in[0,1]^n:\varphi(a_i,x)\le b_i^1\}$, $i\in I_1$;
b) $S_2(d_i,b_i^2)=\{x\in[0,1]^n:\varphi(d_i,x)\ge b_i^2\}$, $i\in I_2$.
Also, we can rewrite these in the forms
$$S^1(A,b^1)=\{x\in[0,1]^n:A\circ x\le b^1\},\qquad S^2(D,b^2)=\{x\in[0,1]^n:D\circ x\ge b^2\}.$$
Finally, we have $S(A,D,b^1,b^2)=\{x\in[0,1]^n:A\circ x\le b^1,\ D\circ x\ge b^2\}$.

We describe the feasibility of the sets $S_1(a_i,b_i^1)$, $i\in I_1$, and $S_2(d_i,b_i^2)$, $i\in I_2$, by giving a necessary and sufficient condition in the following lemma.

Lemma 2.1. Suppose $i\in I_1$ is fixed; then $S_1(a_i,b_i^1)\ne\emptyset$ iff $\Im_{ij}^1\ne\emptyset$ for all $j\in J$. Also, suppose $i\in I_2$ is fixed; then $S_2(d_i,b_i^2)\ne\emptyset$ iff $\bigcup_{j=1}^{n}\Im_{ij}^2\ne\emptyset$.

Proof. The proof is easily obtained by considering a fixed $i\in I_1$ and $x\in S_1(a_i,b_i^1)$, from (1.3), (1.4) and Definitions 2.1 and 2.2.

Definition 2.3. Suppose $S_1(a_i,b_i^1)\ne\emptyset$, $i\in I_1$, and $S_2(d_i,b_i^2)\ne\emptyset$, $i\in I_2$. The operator $\varphi$ is said to be an operator with closed convex solutions if there exist values $x_{\max}(a_{ij},b_i^1)$, $x_{\min}(a_{ij},b_i^1)$, $x_{\max}(d_{ij},b_i^2)$, $x_{\min}(d_{ij},b_i^2)$ such that $\Im_{ij}^1=[x_{\min}(a_{ij},b_i^1),x_{\max}(a_{ij},b_i^1)]$, $i\in I_1$, $j\in J$, and $\Im_{ij}^2=[x_{\min}(d_{ij},b_i^2),x_{\max}(d_{ij},b_i^2)]$, $i\in I_2$, $j\in J_i^2$. Moreover, if at least one of the above closed intervals is converted into an open or half-open interval, $\varphi$ is said to be an operator with convex solutions.

Lemma 2.2.
1) $S_1(a_i,b_i^1)=\Im_{i1}^1\times\Im_{i2}^1\times\dots\times\Im_{in}^1$, $i\in I_1$;
2) $S_2(d_i,b_i^2)=\bigcup_{j\in J_i^2}\Im_i^2\langle j\rangle$, $i\in I_2$, where $\Im_i^2\langle j\rangle=[0,1]\times\dots\times\Im_{ij}^2\times\dots\times[0,1]$ with $\Im_{ij}^2$ in the $j$'th position.

Proof. See [46].
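Membership in the sets $S_1$, $S_2$ and $S$ of Definition 2.2 is straightforward to test numerically. The following sketch is my own (not from the paper); the operator defaults to max-product, and the small matrices are illustrative assumptions rather than data from the text:

```python
def max_phi(row, x, phi):
    """The composition phi(a_i, x) = phi(a_i1, x_1) v ... v phi(a_in, x_n)."""
    return max(phi(a, xj) for a, xj in zip(row, x))

def in_S(A, b1, D, b2, x, phi=lambda a, t: a * t):
    """True iff x lies in S(A, D, b^1, b^2), i.e. A o x <= b^1 and D o x >= b^2
    (a small tolerance absorbs floating-point error)."""
    ok1 = all(max_phi(ai, x, phi) <= b1i + 1e-9 for ai, b1i in zip(A, b1))
    ok2 = all(max_phi(di, x, phi) >= b2i - 1e-9 for di, b2i in zip(D, b2))
    return ok1 and ok2

# Illustrative 2-variable instance (an assumption, not the paper's example):
A = [[0.6, 0.5], [0.2, 0.6]]; b1 = [0.48, 0.56]
D = [[0.5, 0.8]];             b2 = [0.4]
```

For this instance, the point $(0.8, 0.8)$ satisfies both systems, while $(1, 1)$ violates the first row of $A\circ x\le b^1$.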


Definition 2.4.
1) $x_{\max}(a_i,b_i^1)=\big(x_{\max}(a_i,b_i^1)_1,x_{\max}(a_i,b_i^1)_2,\dots,x_{\max}(a_i,b_i^1)_n\big)$, $i\in I_1$, where $x_{\max}(a_i,b_i^1)_j=x_{\max}(a_{ij},b_i^1)$.
2) $x_{\min}(a_i,b_i^1)=\big(x_{\min}(a_i,b_i^1)_1,x_{\min}(a_i,b_i^1)_2,\dots,x_{\min}(a_i,b_i^1)_n\big)$, $i\in I_1$, where $x_{\min}(a_i,b_i^1)_j=x_{\min}(a_{ij},b_i^1)$.
3) $x_{\max}^j(d_i,b_i^2)=\big(x_{\max}^j(d_i,b_i^2)_1,\dots,x_{\max}^j(d_i,b_i^2)_n\big)$, $i\in I_2$, $j\in J_i^2$, where
$$x_{\max}^j(d_i,b_i^2)_k=\begin{cases}x_{\max}(d_{ij},b_i^2), & k=j,\\ 1, & k\ne j.\end{cases}$$
4) $x_{\min}^j(d_i,b_i^2)=\big(x_{\min}^j(d_i,b_i^2)_1,\dots,x_{\min}^j(d_i,b_i^2)_n\big)$, $i\in I_2$, $j\in J_i^2$, where
$$x_{\min}^j(d_i,b_i^2)_k=\begin{cases}x_{\min}(d_{ij},b_i^2), & k=j,\\ 0, & k\ne j.\end{cases}$$

Definition 2.4 together with Lemma 2.2 yields results that characterize the feasible region of the $i$'th relational inequality in a more familiar way. The next theorem gives general upper and lower bounds for the feasible solutions; more details about the feasible region follow when more properties of the operator $\varphi$ are known.

Theorem 2.1.
a) $S_1(a_i,b_i^1)\subseteq[x_{\min}(a_{i1},b_i^1),x_{\max}(a_{i1},b_i^1)]\times\dots\times[x_{\min}(a_{in},b_i^1),x_{\max}(a_{in},b_i^1)]=[x_{\min}(a_i,b_i^1),x_{\max}(a_i,b_i^1)]$, $i\in I_1$.
b) $S_2(d_i,b_i^2)\subseteq\bigcup_{j\in J_i^2}[0,1]\times\dots\times[x_{\min}(d_{ij},b_i^2),x_{\max}(d_{ij},b_i^2)]\times\dots\times[0,1]=\bigcup_{j\in J_i^2}[x_{\min}^j(d_i,b_i^2),x_{\max}^j(d_i,b_i^2)]$, $i\in I_2$.

Proof. For part (a), note that any set with infimum $x^1$ and supremum $x^2$ is a subset of $[x^1,x^2]$. For each $i\in I_1$, Lemma 2.2 gives $S_1(a_i,b_i^1)=\Im_{i1}^1\times\dots\times\Im_{in}^1$, and $\Im_{ij}^1\subseteq[x_{\min}(a_{ij},b_i^1),x_{\max}(a_{ij},b_i^1)]$, $i\in I_1$, $j\in J$. Substituting into $S_1(a_i,b_i^1)$ we obtain $S_1(a_i,b_i^1)\subseteq[x_{\min}(a_{i1},b_i^1),x_{\max}(a_{i1},b_i^1)]\times\dots\times[x_{\min}(a_{in},b_i^1),x_{\max}(a_{in},b_i^1)]$. Part (b) is easily obtained from Lemma 2.2 and Definition 2.4.

Corollary 2.1. Suppose $\varphi$ is an operator with closed solutions. Then:
1) $S_1(a_i,b_i^1)=[x_{\min}(a_i,b_i^1),x_{\max}(a_i,b_i^1)]$, $i\in I_1$;
2) $S_2(d_i,b_i^2)=\bigcup_{j\in J_i^2}[x_{\min}^j(d_i,b_i^2),x_{\max}^j(d_i,b_i^2)]$, $i\in I_2$.
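For the max-product operator, the bound vectors of Definition 2.4 come directly from the ratios $b_i^1/a_{ij}$ and $b_i^2/d_{ij}$. This sketch is my own illustration under that assumption, using one row of the numerical example:

```python
def x_max_vec(a_row, b):
    """x_max(a_i, b_i^1) for max-product: j-th component min(1, b/a_ij)."""
    return [1.0 if a == 0 else min(1.0, b / a) for a in a_row]

def x_min_j_vec(d_row, b, j):
    """x_min^j(d_i, b_i^2): x_min(d_ij, b_i^2) in position j, 0 elsewhere
    (Definition 2.4, item 4); assumes d_ij >= b so the interval is nonempty."""
    return [b / d_row[k] if k == j else 0.0 for k in range(len(d_row))]

# Row a_3 = (0.5, 0.9, 0.8, 0.4) with b_3^1 = 0.72 of the numerical example:
xm = x_max_vec([0.5, 0.9, 0.8, 0.4], 0.72)         # approx (1, 0.8, 0.9, 1)
# Row d_1 = (0.5, 0.8, 0.35, 0.25) with b_1^2 = 0.4, j = 0 (0-indexed):
x1 = x_min_j_vec([0.5, 0.8, 0.35, 0.25], 0.4, 0)   # approx (0.8, 0, 0, 0)
```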


Lemma 2.3.
1) $S^1(A,b^1)=\bigcap_{i\in I_1}S_1(a_i,b_i^1)=\bigcap_{i\in I_1}\big(\Im_{i1}^1\times\Im_{i2}^1\times\dots\times\Im_{in}^1\big)$;
2) $S^2(D,b^2)=\bigcap_{i\in I_2}S_2(d_i,b_i^2)=\bigcap_{i\in I_2}\bigcup_{j\in J_i^2}\Im_i^2\langle j\rangle$.

Proof. The proof is obtained from Lemma 2.2 and the equalities $S^1(A,b^1)=\bigcap_{i\in I_1}S_1(a_i,b_i^1)$ and $S^2(D,b^2)=\bigcap_{i\in I_2}S_2(d_i,b_i^2)$.

By Lemma 2.3 and Corollary 2.1 we reach another necessary condition for the feasibility of problem (1.1).

Corollary 2.2. If $S(A,D,b^1,b^2)\ne\emptyset$ then $\Im_{ij}^1\ne\emptyset$ for all $i\in I_1$ and $j\in J$.

Definition 2.5. Let $e:I_2\to J$ be such that $e(i)=j\in J_i^2$ for each $i\in I_2$, and let $E$ be the set of all such vectors $e$.

Definition 2.6. Let $I_j(e)=\{i\in I_2:e(i)=j\}$. We define:
a) $\overline{X}^{S_1}=(\overline{X}_1^{S_1},\overline{X}_2^{S_1},\dots,\overline{X}_n^{S_1})$, where $\overline{X}_j^{S_1}=\sup\bigcap_{i\in I_1}\Im_{ij}^1$;
b) $\underline{X}^{S_1}=(\underline{X}_1^{S_1},\underline{X}_2^{S_1},\dots,\underline{X}_n^{S_1})$, where $\underline{X}_j^{S_1}=\inf\bigcap_{i\in I_1}\Im_{ij}^1$;
c) $\overline{X}^{S_2}(e)=(\overline{X}^{S_2}(e)_1,\overline{X}^{S_2}(e)_2,\dots,\overline{X}^{S_2}(e)_n)$, where
$$\overline{X}^{S_2}(e)_j=\begin{cases}1, & I_j(e)=\emptyset,\\ \sup\bigcap_{i\in I_j(e)}\Im_{ij}^2, & I_j(e)\ne\emptyset;\end{cases}$$
d) $\underline{X}^{S_2}(e)=(\underline{X}^{S_2}(e)_1,\underline{X}^{S_2}(e)_2,\dots,\underline{X}^{S_2}(e)_n)$, where
$$\underline{X}^{S_2}(e)_j=\begin{cases}0, & I_j(e)=\emptyset,\\ \inf\bigcap_{i\in I_j(e)}\Im_{ij}^2, & I_j(e)\ne\emptyset.\end{cases}$$

Definition 2.6 reduces to an easy componentwise maximization and minimization process:
$$\overline{X}^{S_1}=\min_{i\in I_1}\{x_{\max}(a_i,b_i^1)\},\qquad \underline{X}^{S_1}=\max_{i\in I_1}\{x_{\min}(a_i,b_i^1)\},$$
$$\overline{X}^{S_2}(e)=\min_{i\in I_2}\{x_{\max}^{e(i)}(d_i,b_i^2)\},\qquad \underline{X}^{S_2}(e)=\max_{i\in I_2}\{x_{\min}^{e(i)}(d_i,b_i^2)\}.$$

Theorem 2.2.
1) $S^1(A,b^1)\subseteq[\underline{X}^{S_1},\overline{X}^{S_1}]$;
2) $S^2(D,b^2)\subseteq\bigcup_{e\in E}[\underline{X}^{S_2}(e),\overline{X}^{S_2}(e)]$.


Proof. See [46].

Corollary 2.3. Suppose $\varphi$ is an operator with closed convex solutions; then:
1) $S^1(A,b^1)=[\underline{X}^{S_1},\overline{X}^{S_1}]$;
2) $S^2(D,b^2)=\bigcup_{e\in E}[\underline{X}^{S_2}(e),\overline{X}^{S_2}(e)]$.

Now we state a necessary condition for the feasibility of the original problem.

Theorem 2.3. Suppose $S(A,D,b^1,b^2)\ne\emptyset$; then there exists $e\in E$ such that $[\underline{X}^{S_1},\overline{X}^{S_2}(e)]\ne\emptyset$ and $[\underline{X}^{S_2}(e),\overline{X}^{S_1}]\ne\emptyset$.

Proof. See [46].

3 Simplification and resolution of the solution set

In this section we bound the feasible solution set of problem (1.1) by Theorem 3.1.

Theorem 3.1. Suppose $S(A,D,b^1,b^2)\ne\emptyset$; then
$$S(A,D,b^1,b^2)\subseteq\bigcup_{e\in E}\Big[\max\{\underline{X}^{S_1},\underline{X}^{S_2}(e)\},\ \min\{\overline{X}^{S_1},\overline{X}^{S_2}(e)\}\Big].$$

Proof. Since $S(A,D,b^1,b^2)=S^1(A,b^1)\cap S^2(D,b^2)$, the statement is established by Theorem 2.2.

If $S(A,D,b^1,b^2)\ne\emptyset$ and $\varphi$ is a continuous t-norm, then the inclusion of Theorem 3.1 reduces to an equality, by considering $S^1(A,b^1)=[\underline{X}^{S_1},\overline{X}^{S_1}]$ and $S^2(D,b^2)=\bigcup_{e\in E}[\underline{X}^{S_2}(e),\overline{X}^{S_2}(e)]$. If $\varphi$ is an operator with closed convex solutions and $S(A,D,b^1,b^2)\ne\emptyset$, then the feasible solution set is determined by finitely many maximal and finitely many minimal solutions:
$$S(A,D,b^1,b^2)=\bigcup_{e\in E}\Big[\max\{\underline{X}^{S_1},\underline{X}^{S_2}(e)\},\ \min\{\overline{X}^{S_1},\overline{X}^{S_2}(e)\}\Big].\qquad(1.5)$$
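The boxes in Theorem 3.1 and (1.5) are built from the vectors of Definition 2.6, which reduce to componentwise minima and maxima. The sketch below is my own (not the authors' code) and assumes each $\Im$ set is an interval, as for an operator with convex solutions; the trailing data come from the numerical example:

```python
def bounds_S1(xmax_rows, xmin_rows):
    """X-bar^{S1} (componentwise min of the x_max(a_i, b_i^1) rows) and
    X-underbar^{S1} (componentwise max of the x_min(a_i, b_i^1) rows)."""
    n = len(xmax_rows[0])
    upper = [min(row[j] for row in xmax_rows) for j in range(n)]
    lower = [max(row[j] for row in xmin_rows) for j in range(n)]
    return upper, lower

def bounds_S2(e, xmin2, xmax2, n):
    """X-bar^{S2}(e) and X-underbar^{S2}(e) of Definition 2.6(c, d);
    xmin2[i][j], xmax2[i][j] are the endpoints of I^2_ij and e[i] = j
    is the chosen column for row i (all indices 0-based)."""
    upper, lower = [1.0] * n, [0.0] * n
    for i, j in enumerate(e):
        upper[j] = min(upper[j], xmax2[i][j])
        lower[j] = max(lower[j], xmin2[i][j])
    return upper, lower

# x_max rows of the numerical example (capped at 1) and zero x_min rows:
xmax1 = [[0.8, 0.96, 1, 1], [1, 0.93, 0.93, 1], [1, 0.8, 0.9, 1]]
xmin1 = [[0, 0, 0, 0]] * 3
```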

Lemma 3.1. Suppose $\varphi$ is an operator with convex solutions; then for every $e\in E$ we have $x_{\min}(d_{i\,e(i)},b_i^2)\le\underline{X}^{S_2}(e)_{e(i)}$ and $\overline{X}^{S_2}(e)_{e(i)}\le x_{\max}(d_{i\,e(i)},b_i^2)$ for each $i\in I_2$.

Proof. The proof is easily obtained from Definitions 2.5 and 2.6.

Now, to accelerate the computation of the solutions $\overline{X}^{S_2}(e)$ and $\underline{X}^{S_2}(e)$, we search a reduced set $E'$ instead of the set $E$. We shrink each set $J_i^2$ by removing every $j\in J_i^2$ such that $x_{\min}(d_{ij},b_i^2)>\overline{X}_j^{S_1}$ or $x_{\max}(d_{ij},b_i^2)<\underline{X}_j^{S_1}$, $i\in I_2$, before selecting the vectors $e$ used to construct the solutions $\overline{X}^{S_2}(e)$ and $\underline{X}^{S_2}(e)$.
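The search-space reduction just described can be sketched as follows; this is my own illustration (0-based indices, `None` marking an empty $\Im_{ij}^2$), not the authors' code:

```python
from itertools import product

def reduce_J2(xmin2, xmax2, X1_up, X1_lo):
    """Drop j from J_i^2 when [x_min(d_ij,b_i^2), x_max(d_ij,b_i^2)] cannot
    meet [X-underbar_j^{S1}, X-bar_j^{S1}]; None marks an empty I^2_ij."""
    n = len(X1_up)
    return [[j for j in range(n)
             if xmin2[i][j] is not None
             and xmin2[i][j] <= X1_up[j] + 1e-9
             and xmax2[i][j] >= X1_lo[j] - 1e-9]
            for i in range(len(xmin2))]

def build_E(J2):
    """All admissible vectors e, with e(i) ranging over the reduced J_i^2."""
    return list(product(*J2))

# With the data of the numerical example, row i = 2 of D keeps only its
# fourth column, so E shrinks from 8 candidate vectors to 2.
```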

Now suppose the operator $\varphi$ is a t-norm; then $\Im_{ij}^2\ne\emptyset$ iff $d_{ij}\ge b_i^2$, $i\in I_2$, $j\in J$, and hence $J_i^2=\{j\in J:d_{ij}\ge b_i^2\}$. Since $e\in E$ iff $e(i)\in J_i^2$ for each $i\in I_2$, we have $e\in E$ iff $d_{i\,e(i)}\ge b_i^2$. By Lemma 3.1 and relation (1.5), the necessary condition for a vector to belong to $E'$ becomes $d_{i\,e(i)}\ge b_i^2$ together with $x_{\min}(d_{i\,e(i)},b_i^2)\le\overline{X}_{e(i)}^{S_1}$ for each $i\in I_2$. Then we can remove $j$ from the set $J_i^2$, for each $i\in I_2$ and $j\in J_i^2$ such that $x_{\min}(d_{ij},b_i^2)>\overline{X}_j^{S_1}$, before selecting the vectors $e$ to find the solutions $\underline{X}^{S_2}(e)$. A necessary condition for the optimal solution in terms of the maximum solution is presented in [29].

Definition 3.1. Suppose $\varphi$ is an operator with convex solutions. Let $M^1=(m_{ij}^1)_{l\times n}$ and $M^2=(m_{ij}^2)_{l\times n}$ be matrices whose components are defined as follows, for $i\in I_2$ and $j\in J$:
$$m_{ij}^1=\begin{cases}x_{\min}(d_{ij},b_i^2), & \Im_{ij}^2\ne\emptyset,\\ \emptyset, & \text{otherwise,}\end{cases}\qquad m_{ij}^2=\begin{cases}x_{\max}(d_{ij},b_i^2), & \Im_{ij}^2\ne\emptyset,\\ \emptyset, & \text{otherwise.}\end{cases}$$
Now we produce the modified matrices $\widetilde{M}^1=(\widetilde{m}_{ij}^1)_{l\times n}$ and $\widetilde{M}^2=(\widetilde{m}_{ij}^2)_{l\times n}$ from the matrices $M^1$ and $M^2$, respectively, as follows:
$$\widetilde{m}_{ij}^1=\begin{cases}\emptyset, & \Im_{ij}^2=\emptyset\ \text{or}\ x_{\min}(d_{ij},b_i^2)>\overline{X}_j^{S_1}\ \text{or}\ x_{\max}(d_{ij},b_i^2)<\underline{X}_j^{S_1},\\ m_{ij}^1, & \text{otherwise,}\end{cases}$$
$$\widetilde{m}_{ij}^2=\begin{cases}\emptyset, & \Im_{ij}^2=\emptyset\ \text{or}\ x_{\min}(d_{ij},b_i^2)>\overline{X}_j^{S_1}\ \text{or}\ x_{\max}(d_{ij},b_i^2)<\underline{X}_j^{S_1},\\ m_{ij}^2, & \text{otherwise.}\end{cases}$$
According to the definition of these matrices, if we set $\widetilde{J}_i^2=\{j\in J:\widetilde{m}_{ij}^1\ne\emptyset\}=\{j\in J:\widetilde{m}_{ij}^2\ne\emptyset\}$ and $E'=\{e\in E:e(i)\in\widetilde{J}_i^2,\ i\in I_2\}$, then $\widetilde{J}_i^2\subseteq J_i^2$ and $E'\subseteq E$. The matrices $\widetilde{M}^1$ and $\widetilde{M}^2$ give us the necessary and sufficient conditions stated in Theorem 3.1.

4 Optimization of the monomial objective function

Problem (1.1) is converted into two subproblems; by finding the optimal solution of each of them, we can find the optimal solution of problem (1.1). We denote these two subproblems by:

International Scientific Publications and Consulting Services

Communications in Numerical Analysis 2015 No. 2 (2015) 162-177 http://www.ispacs.com/journals/cna/2015/cna-00243/ n

min c  x j

170

j

i 1 j R 

A X  b 1 DX  b 2 x   0,1 n

min c  x j

(1.6) j

i 1 j R 

A X  b 1

(1.7)

DX  b 2 x   0,1

R    j :  j  0, j  J  and R   j :  j  0, j  J 





It is easy to prove that min X S1 , X S2 (e) is the optimal solution of (1.7) for some e  E and

max  X S1 , X S2 (e) for some e  E is the optimal solution for (1.6). Theorem 4.1.





Suppose S  A, D, b1 , b2    . In addition, suppose that max X S1 , X S2 (e) is the optimal solution of



(1.6) for some e  E and min X



e  E then c X 



j

S1

S2

,X

(e) is the optimal solution of subproblems (1.7) for some

is the lower bound of the optimal value for the objective function (1.1).

Proof. Let x  S  A, D, b1 , b2  . Then from theorem 3.1 we have

x eE

 max  X S1 , X S2 (e) , min  X S1 , X S2 (e) therefore for each j  J such that  j  0, x j  x j

implies  j j

c j (x )

j

j

c j ( xj )

 c j (x j ) j

 c j (x j )

Corollary 4.1.



for n

therefore

c x j 1

jJ

each j

 j

n

such

that

 j  0, xj  x j

implies

 c xj . j

j 1



Suppose S A, D, b1 , b2   .



a) If φ is a non-decreasing operator with convex solutions, then x  x1 , x2 ,

, xn  as defined below

is the optimal solution of problem (1.1). s1  X j ,  j  0 x  S 2   X (e) j ,  j  0  j

where X

S2

for j  1, 2,

,n

(e) is the optimal solution of subproblem (1.6) for some e  E .


b) If $\varphi$ is a non-increasing operator with convex solutions, then $x^*=(x_1^*,x_2^*,\dots,x_n^*)$ as defined below is an optimal solution of problem (1.1):
$$x_j^*=\begin{cases}\overline{X}^{S_2}(e)_j, & \gamma_j<0,\\ \underline{X}_j^{S_1}, & \gamma_j\ge 0,\end{cases}\qquad j=1,2,\dots,n,$$
where $\overline{X}^{S_2}(e)$ is the optimal solution of subproblem (1.7) for some $e\in E$.

Proof. a) By Theorem 4.1, $c\prod_j(x_j^*)^{\gamma_j}$ is a lower bound of the optimal objective value. According to the definition of the vector $x^*$ we have $\underline{X}^{S_2}(e)_j\le x_j^*\le\overline{X}_j^{S_1}$ for every $j\in J$. Since $\varphi$ is a non-decreasing operator we have $S(A,D,b^1,b^2)=\bigcup_{e\in E}[\underline{X}^{S_2}(e),\overline{X}^{S_1}]$, and since $\varphi$ is an operator with convex solutions, $x^*\in\bigcup_{e\in E}[\underline{X}^{S_2}(e),\overline{X}^{S_1}]$; hence $x^*$ is feasible and attains the lower bound, so it is optimal.
b) This part is similar to part (a).

Now we present the process as an algorithm. It should be mentioned that all t-norms are operators with convex or closed solutions, so the results above hold for them.

Algorithm. Given problem (1.1) and a function $\varphi$:
1. Compute $\Im_{ij}^1$, $i\in I_1$, and $\Im_{ij}^2$, $i\in I_2$, for $j\in J$, by Definition 2.1.
2. Compute $J_i^2$, $i\in I_2$, using Definition 2.1.
3. Find (Lemma 2.2):
1) $S_1(a_i,b_i^1)=\Im_{i1}^1\times\Im_{i2}^1\times\dots\times\Im_{in}^1$, $i\in I_1$;
2) $S_2(d_i,b_i^2)=\bigcup_{j\in J_i^2}\Im_i^2\langle j\rangle$, $i\in I_2$.
4. Using Definition 2.4, obtain $x_{\max}(a_i,b_i^1)$, $x_{\min}(a_i,b_i^1)$, $i\in I_1$, and $x_{\max}^j(d_i,b_i^2)$, $x_{\min}^j(d_i,b_i^2)$, $i\in I_2$, $j\in J_i^2$.
5. By Definition 2.6, for $I_j(e)=\{i\in I_2:e(i)=j\}$, find:
a) $\overline{X}^{S_1}$, where $\overline{X}_j^{S_1}=\sup\bigcap_{i\in I_1}\Im_{ij}^1$;
b) $\underline{X}^{S_1}$, where $\underline{X}_j^{S_1}=\inf\bigcap_{i\in I_1}\Im_{ij}^1$;
c) $\overline{X}^{S_2}(e)$, $e\in E$, where $\overline{X}^{S_2}(e)_j=1$ if $I_j(e)=\emptyset$, and $\overline{X}^{S_2}(e)_j=\sup\bigcap_{i\in I_j(e)}\Im_{ij}^2$ otherwise;
d) $\underline{X}^{S_2}(e)$, $e\in E$, where $\underline{X}^{S_2}(e)_j=0$ if $I_j(e)=\emptyset$, and $\underline{X}^{S_2}(e)_j=\inf\bigcap_{i\in I_j(e)}\Im_{ij}^2$ otherwise.
6. By Definition 3.1, compute $\widetilde{M}^1=(\widetilde{m}_{ij}^1)_{l\times n}$ and $\widetilde{M}^2=(\widetilde{m}_{ij}^2)_{l\times n}$.
7. Applying Theorem 2.3, if the problem is not feasible, stop; otherwise go to step 8.
8. Determine $\tilde{x}=(\tilde{x}_1,\tilde{x}_2,\dots,\tilde{x}_n)$ as follows:
$$\tilde{x}_j=\begin{cases}\min\{\overline{X}^{S_1},\overline{X}^{S_2}(e)\}_j, & \gamma_j<0,\\ \max\{\underline{X}^{S_1},\underline{X}^{S_2}(e)\}_j, & \gamma_j\ge 0,\end{cases}\qquad j=1,2,\dots,n.$$
9. From Corollary 4.1, compute $c\prod_{j=1}^{n}(\tilde{x}_j)^{\gamma_j}$.

5 Numerical example

We solve an example with the max-product composition, $\varphi(a,x)=a\cdot x$. Consider the problem below:
$$\min\ Z=(x_1)^{-2}(x_2)(x_3)^{1/2}(x_4)^{-1/2}$$
$$\text{s.t.}\quad\begin{pmatrix}0.6&0.5&0.1&0.1\\0.2&0.6&0.6&0.5\\0.5&0.9&0.8&0.4\end{pmatrix}\circ\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}\le\begin{pmatrix}0.48\\0.56\\0.72\end{pmatrix},\qquad\begin{pmatrix}0.5&0.8&0.35&0.25\\0.9&0.92&0.9&1\\0.2&1&0.45&0.4\\0.55&0.6&0.8&0.64\end{pmatrix}\circ\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}\ge\begin{pmatrix}0.4\\0.9\\0.8\\0.65\end{pmatrix},$$
$$0\le x_j\le 1,\quad j=1,2,3,4.$$

Here $\Im_{ij}^1=\{x\in[0,1]:a_{ij}x\le b_i^1\}=[0,\min\{1,b_i^1/a_{ij}\}]$, and $\Im_{ij}^2=\{x\in[0,1]:d_{ij}x\ge b_i^2\}$ equals $[b_i^2/d_{ij},1]$ if $d_{ij}\ge b_i^2$ and is empty otherwise. Rounding to two decimal places:

$\Im_{11}^1=[0,0.8]$, $\Im_{12}^1=[0,0.96]$, $\Im_{13}^1=[0,1]$, $\Im_{14}^1=[0,1]$;
$\Im_{21}^1=[0,1]$, $\Im_{22}^1=[0,0.93]$, $\Im_{23}^1=[0,0.93]$, $\Im_{24}^1=[0,1]$;
$\Im_{31}^1=[0,1]$, $\Im_{32}^1=[0,0.8]$, $\Im_{33}^1=[0,0.9]$, $\Im_{34}^1=[0,1]$;

$\Im_{11}^2=[0.8,1]$, $\Im_{12}^2=[0.5,1]$, $\Im_{13}^2=\emptyset$, $\Im_{14}^2=\emptyset$;
$\Im_{21}^2=[1,1]$, $\Im_{22}^2=[0.97,1]$, $\Im_{23}^2=[1,1]$, $\Im_{24}^2=[0.9,1]$;
$\Im_{31}^2=\emptyset$, $\Im_{32}^2=[0.8,1]$, $\Im_{33}^2=\emptyset$, $\Im_{34}^2=\emptyset$;
$\Im_{41}^2=\emptyset$, $\Im_{42}^2=\emptyset$, $\Im_{43}^2=[0.81,1]$, $\Im_{44}^2=\emptyset$;

hence $J_1^2=\{1,2\}$, $J_2^2=\{1,2,3,4\}$, $J_3^2=\{2\}$, $J_4^2=\{3\}$. Consequently $x_{\min}(a_i,b_i^1)=(0,0,0,0)$, $i=1,2,3$, and
$$x_{\max}(a_1,b_1^1)=(0.8,0.96,1,1),\quad x_{\max}(a_2,b_2^1)=(1,0.93,0.93,1),\quad x_{\max}(a_3,b_3^1)=(1,0.8,0.9,1);$$
$$x_{\min}^1(d_1,b_1^2)=(0.8,0,0,0),\quad x_{\min}^2(d_1,b_1^2)=(0,0.5,0,0),$$
$$x_{\min}^1(d_2,b_2^2)=(1,0,0,0),\quad x_{\min}^2(d_2,b_2^2)=(0,0.97,0,0),\quad x_{\min}^3(d_2,b_2^2)=(0,0,1,0),\quad x_{\min}^4(d_2,b_2^2)=(0,0,0,0.9),$$
$$x_{\min}^2(d_3,b_3^2)=(0,0.8,0,0),\quad x_{\min}^3(d_4,b_4^2)=(0,0,0.81,0),$$
while every $x_{\max}^j(d_i,b_i^2)=(1,1,1,1)$.

By Definition 2.6 we have
$$\overline{X}^{S_1}=(0.8,0.8,0.9,1),\qquad \underline{X}^{S_1}=(0,0,0,0),\qquad \overline{X}^{S_2}(e)=(1,1,1,1)\ \text{for every}\ e\in E.$$

Now we obtain the characteristic matrices according to Definition 3.1:
$$M^1=\begin{pmatrix}0.8&0.5&\emptyset&\emptyset\\1&0.97&1&0.9\\\emptyset&0.8&\emptyset&\emptyset\\\emptyset&\emptyset&0.81&\emptyset\end{pmatrix},\qquad M^2=\begin{pmatrix}1&1&\emptyset&\emptyset\\1&1&1&1\\\emptyset&1&\emptyset&\emptyset\\\emptyset&\emptyset&1&\emptyset\end{pmatrix},$$
$$\widetilde{M}^1=\begin{pmatrix}0.8&0.5&\emptyset&\emptyset\\\emptyset&\emptyset&\emptyset&0.9\\\emptyset&0.8&\emptyset&\emptyset\\\emptyset&\emptyset&0.81&\emptyset\end{pmatrix},\qquad \widetilde{M}^2=\begin{pmatrix}1&1&\emptyset&\emptyset\\\emptyset&\emptyset&\emptyset&1\\\emptyset&1&\emptyset&\emptyset\\\emptyset&\emptyset&1&\emptyset\end{pmatrix}.$$
(In the second row the entries with $x_{\min}(d_{2j},b_2^2)>\overline{X}_j^{S_1}$, namely $j=1,2,3$, are removed, so $\widetilde{J}_2^2=\{4\}$.)

By selecting the vectors $e$ from the matrix $\widetilde{M}^1$, only $e_1=(1,4,2,3)$ and $e_2=(2,4,2,3)$ belong to $E'$, and
$$\underline{X}^{S_2}(e_1)=(0.8,0.8,0.81,0.9),\qquad \underline{X}^{S_2}(e_2)=(0,0.8,0.81,0.9).$$
Both vectors lead, through step 8 of the algorithm ($\gamma_j<0$ for $j=1,4$ and $\gamma_j\ge 0$ for $j=2,3$), to the same point. Therefore the optimal solution of the problem is $x^*=(0.8,0.8,0.81,1)$, and the minimum value of the objective function is
$$Z(x^*)=(0.8)^{-2}(0.8)(0.81)^{1/2}(1)^{-1/2}=1.125.$$


6 Conclusion

In this paper, we studied the monomial geometric programming problem with fuzzy relational inequality constraints defined by an arbitrary operator. Since the main difficulty of this problem lies in finding the minimal solutions, i.e. in optimizing the subproblem formed by the positive powers of the objective function, we presented an algorithm together with some simplification operations to accelerate the resolution of the problem. Finally, we gave a numerical example to illustrate the proposed algorithm.

Acknowledgments

We are very grateful to the anonymous referee for the comments and suggestions, which were very helpful in improving the paper.

References

[1] L. A. Zadeh, Fuzzy sets, Inform. Control, 8 (1965) 338-353. http://dx.doi.org/10.1016/S0019-9958(65)90241-X
[2] V. Loia, S. Sessa, Fuzzy relation equations for coding/decoding processes of images and videos, Inf. Sci, 171 (2005) 145-172. http://dx.doi.org/10.1016/j.ins.2004.04.003
[3] J. Loetamonphong, S. C. Fang, Optimization of fuzzy relation equations with max-product composition, Fuzzy Sets and Systems, 118 (2001). http://dx.doi.org/10.1016/s0165-0114(98)00417-5
[4] S. C. Han, H. X. Li, J. Y. Wang, Resolution of finite fuzzy relation equations based on strong pseudo-t-norms, Appl. Math. Lett, 19 (2006) 752-757. http://dx.doi.org/10.1016/j.aml.2005.11.001
[5] M. Higashi, G. J. Klir, Resolution of finite fuzzy relation equations, Fuzzy Sets Syst, 13 (1984) 65-82. http://dx.doi.org/10.1016/0165-0114(84)90026-5
[6] J. H. Yang, B. Y. Cao, Geometric programming with fuzzy relation equation constraints, IEEE International Fuzzy Systems Conference Proceedings, Reno, Nevada, (2005). http://dx.doi.org/10.1109/FUZZY.2005.1452454
[7] J. Lu, S.-C. Fang, Solving nonlinear optimization problems with fuzzy relation equation constraints, Fuzzy Sets and Systems, 119 (2001) 1-20. http://dx.doi.org/10.1016/S0165-0114(98)00471-0
[8] J. Loetamonphong, S.-C. Fang, Optimization of fuzzy relation equations with max-product composition, Fuzzy Sets Syst, 118 (2001) 509-517. http://dx.doi.org/10.1016/S0165-0114(98)00417-5
[9] W. Pedrycz, A. V. Vasilakos, Modularization of fuzzy relational equations, Soft Comput, 6 (2002) 33-37. http://dx.doi.org/10.1007/s005000100125


[10] K. Peeva, Y. Kyosev, Fuzzy Relational Calculus: Theory, Applications and Software, World Scientific, Singapore, (2004). [11] E. Sanchez, Resolution of composite fuzzy relation equations, Inf. Control, 30 (1976) 38-48. http://dx.doi.org/10.1016/S0019-9958(76)90446-0 [12] E. Khorram, E. Shivanian, A. Ghodousian, Optimization of linear objective function subject to fuzzy relation inequalities constraints with max-average composition, Iran. J. Fuzzy Syst, 4 (2) (2007) 15-29. [13] G. B. Stamou, S. G. Tzafestas, Resolution of composite fuzzy relation equations based on Archimedean triangular norms, Fuzzy Sets Syst, 120 (2001) 395-407. http://dx.doi.org/10.1016/S0165-0114(99)00117-7 [14] W. B. Vasantha Kandasamy, F. Smarandache, Some applications of FRE, in: Fuzzy Relational Maps and Neutrosophic Relational Maps, Hexis, Church Rock, 67 (2004) 167-220. [15] A. Jafarian, S. Measoomynia, Utilizing a new feed-back fuzzy neural network for solving a system of fuzzy equations, Communications in Numerical Analysis, 2012 (2012) 1-12. http://dx.doi.org/10.5899/2012/cna-00096 [16] M. Nikuie, M. K. Mirnia, Minimal solution for inconsistent singular fuzzy matrix equations, Communications in Numerical Analysis, 2013 (2013) 1-9. http://dx.doi.org/10.5899/2013/cna-00147 [17] R. E. Bellman, L. A. Zadeh, Decision-making in fuzzy environment, Manage. Sci, 17 (1970) 141-164. http://dx.doi.org/10.1287/mnsc.17.4.B141 [18] D. Dubois, H. Prade, Fuzzy Sets and Systems: Theory and Applications, Academic Press NewYork, (1980). [19] F. Wenstop, Deductive verbal models of organizations, Int. J. Man-Mach. Stud, 8 (1976) 293-311. http://dx.doi.org/10.1016/S0020-7373(76)80002-8 [20] E. Sanchez, Solution in composite fuzzy relation equations: application to medical diagnosis in Brouwerian logic, in:M.M. Gupta, G.N. Saridis, B.R. Games (Eds.), Fuzzy Automata and Decision Processes, North-Holland, New York, (1977) 221-234. [21] H. F. Wang, C. W. Wu, C. H. Ho, M. J. 
Hsieh, Diagnosis of gastric cancer with fuzzy pattern recognition, J. Syst. Eng, 2 (1992) 151-163.
[22] F. Di Martino, S. Sessa, Digital watermarking in coding/decoding processes with fuzzy relation equations, Soft Comput, 10 (2006) 238-243. http://dx.doi.org/10.1007/s00500-005-0477-9
[23] A. Di Nola, C. Russo, Lukasiewicz transform and its application to compression and reconstruction of digital images, Inf. Sci, 177 (2007) 1481-1498. http://dx.doi.org/10.1016/j.ins.2006.09.002


[24] H. Nobuhara, K. Hirota, F. Di Martino, W. Pedrycz, S. Sessa, Fuzzy relation equations for compression/decompression processes of colour images in the RGB and YUV colour spaces, Fuzzy Optim. Decision Making, 4 (2005) 235-246. http://dx.doi.org/10.1007/s10700-005-1892-1
[25] H. Nobuhara, B. Bede, K. Hirota, On various eigen fuzzy sets and their application to image reconstruction, Inf. Sci, 176 (2006) 2988-3010. http://dx.doi.org/10.1016/j.ins.2005.11.008
[26] S.-C. Fang, G. Li, Solving fuzzy relational equations with a linear objective function, Fuzzy Sets Syst, 103 (1999) 107-113. http://dx.doi.org/10.1016/S0165-0114(97)00184-X
[27] S. Alayon, R. Robertson, S. K. Warfield, J. Ruiz-Alzola, A fuzzy system for helping medical diagnosis of malformations of cortical development, Journal of Biomedical Informatics, 40 (3) (2007).
[28] M. M. Bourke, D. G. Fisher, Solution algorithms for fuzzy relation equations with max-product composition, Fuzzy Sets Syst, 94 (1) (1998) 61-69. http://dx.doi.org/10.1016/S0165-0114(96)00246-1
[29] F. Guo, Z. Q. Xia, An algorithm for solving optimization problems with one linear objective function and finitely many constraints of fuzzy relation inequalities, Fuzzy Optim. Decision Making, 5 (2006) 33-47. http://dx.doi.org/10.1007/s10700-005-4914-0
[30] E. Khorram, A. A. Molai, An algorithm for solving fuzzy relation equations with max-T composition operator, Inf. Sci, 178 (2008) 1293-1308. http://dx.doi.org/10.1016/j.ins.2007.10.010
[31] B. Y. Cao, Fuzzy geometric programming, Kluwer Academic Publishers, Boston, (2001) 31-39.
[32] E. Shivanian, M. Keshtkar, Geometric programming subject to system of fuzzy relation inequalities, An International Journal, (2012) 261-282.
[33] E. Shivanian, E. Khorram, Monomial geometric programming with fuzzy relation inequality constraints with max-product composition, Computers & Industrial Engineering, 56 (4) 1386-1392. http://dx.doi.org/10.1016/j.cie.2008.08.015
[34] E. Shivanian, E. Khorram, Optimization of linear objective function subject to fuzzy relation inequalities constraints with max-product composition, Iranian Journal of Fuzzy Systems, 7 (3) 51-71.
[35] E. Shivanian, An algorithm for finding solutions of fuzzy relation equations with max-Lukasiewicz composition, Mathware & Soft Computing, 17 (1) 15-26.
[36] E. Shivanian, Linear optimization of fuzzy relation inequalities with max-Lukasiewicz composition, International Journal of Industrial Mathematics, 7 (2) 129-138.

International Scientific Publications and Consulting Services



[37] Y.-K. Wu, S.-M. Guu, Y. C. Liu, An accelerated approach for solving fuzzy relation equations with a linear objective function, IEEE Trans. Fuzzy Syst, 10 (2002) 552-558. http://dx.doi.org/10.1109/TFUZZ.2002.800657
[38] Y.-K. Wu, S.-M. Guu, Minimizing a linear function under a fuzzy max-min relational equation constraint, Fuzzy Sets Syst, 150 (2005) 147-162. http://dx.doi.org/10.1016/j.fss.2004.09.010
[39] S.-M. Guu, Y.-K. Wu, Minimizing a linear objective function with fuzzy relation equation constraints, Fuzzy Optim. Decision Making, 1 (4) (2002) 347-360. http://dx.doi.org/10.1023/A:1020955112523
[40] E. Khorram, Z. Mashayekhi, On optimizing a linear objective function subjected to fuzzy relation inequalities, Fuzzy Optim. Decision Making, 8 (2009) 103-114. http://dx.doi.org/10.1007/s10700-009-9054-5
[41] H. F. Wang, A multi-objective mathematical programming problem with fuzzy relation constraints, J. Multi-Criteria Decision Anal, 4 (1995) 23-35. http://dx.doi.org/10.1002/mcda.4020040103
[42] S.-M. Guu, Y.-K. Wu, E. S. Lee, Multi-objective optimization with a max-t-norm fuzzy relational equation constraint, Comput. Math. Appl, 61 (2011) 1559-1566. http://dx.doi.org/10.1016/j.camwa.2011.01.023
[43] J. Lu, S.-C. Fang, Solving nonlinear optimization problems with fuzzy relation equations constraints, Fuzzy Sets Syst, 119 (2001) 1-20. http://dx.doi.org/10.1016/S0165-0114(98)00471-0
[44] A. Ghodousian, E. Khorram, Fuzzy linear optimization in the presence of the fuzzy relation inequality constraints with max-min composition, Inf. Sci, 178 (2008) 501-519. http://dx.doi.org/10.1016/j.ins.2007.07.022
[45] A. Ghodousian, E. Khorram, An algorithm for optimizing the linear function with fuzzy relation equation constraints regarding max-prod composition, Appl. Math. Comput, 178 (2006) 502-509. http://dx.doi.org/10.1016/j.amc.2005.11.069
[46] A. Ghodousian, E. Khorram, Linear optimization with an arbitrary fuzzy relational inequality, Fuzzy Sets Syst, (2012) 89-102. http://dx.doi.org/10.1016/j.fss.2012.04.009

