ISSN 0965-5425, Computational Mathematics and Mathematical Physics, 2013, Vol. 53, No. 3, pp. 265–272. © Pleiades Publishing, Ltd., 2013.

Generalized Symmetric Accelerated over Relaxation Method for Solving Absolute Value Complementarity Problems¹

M. A. Noor (a), K. I. Noor (a), and Javed Iqbal (b)

(a) Mathematics Department, COMSATS Institute of Information Technology, Park Road, Chak Shahzad, Islamabad, Pakistan
(b) Mathematics Department, Abdul Wali Khan University, Mardan, KPK, Pakistan

e-mail: [email protected], [email protected], [email protected]

Received February 1, 2012

Abstract—In this paper, we suggest and analyze a symmetric accelerated over-relaxation (SAOR) method for absolute complementarity problems of finding x ∈ Rⁿ such that x ≥ 0, Ax − |x| − b ≥ 0, 〈x, Ax − |x| − b〉 = 0, where A ∈ Rⁿ×ⁿ and b ∈ Rⁿ. We discuss the convergence of the SAOR method when the system matrix A is an L-matrix. Several examples are given to illustrate the implementation and efficiency of the method. The results proved in this paper may stimulate further research in this fascinating and interesting field.

DOI: 10.1134/S0965542513030123

Keywords: variational inequalities, absolute complementarity problems, symmetric AOR method, convergence analysis.

1. INTRODUCTION

The complementarity theory introduced and studied by Lemke (see [1]) and Cottle and Dantzig (see [2]) has enjoyed vigorous growth for the last fifty years. A wide class of problems arising in the pure and applied sciences can be studied via complementarity problems; see the references. These complementarity problems have been extended and generalized in different directions using novel ideas and techniques. Noor et al. in [3] introduced and studied a new class of complementarity problems, called absolute complementarity problems. It has been shown that the absolute complementarity problems include systems of absolute value equations, which have been studied extensively in recent years (see [3–9]). Related to the absolute complementarity problems is the class of absolute value variational inequalities, introduced by Noor et al. in [3]. It has been shown that absolute value complementarity problems and absolute value variational inequalities are equivalent. This equivalent formulation is useful in suggesting and analyzing iterative methods for solving absolute value complementarity problems. Noor et al. in [3] used this alternative formulation to suggest a generalized AOR (GAOR) method for solving absolute value complementarity problems.

Inspired and motivated by the ongoing research, we suggest and analyze an SAOR method for the absolute complementarity problem introduced by Noor et al. (see [3]). This is the main motivation of this paper. Our SAOR method can be viewed as a generalization of the SAOR method of Hadjidimos and Yeyios (see [10]). The convergence analysis of the proposed method is given under suitable conditions. Some examples are given to illustrate the efficiency and implementation of the proposed iterative methods. The results are very encouraging.
The ideas and techniques of this paper may stimulate further research in these areas. Let Rⁿ be the finite-dimensional Euclidean space, whose inner product and norm are denoted by 〈., .〉 and ||.||, respectively. For a given matrix A ∈ Rⁿ×ⁿ and a vector b ∈ Rⁿ, we consider the problem of finding x ∈ K* such that

x ∈ K*,   Ax − |x| − b ∈ K*,   〈Ax − |x| − b, x〉 = 0,   (1.1)

¹ The article is published in the original.


where K* = {x ∈ Rⁿ : 〈x, y〉 ≥ 0, ∀y ∈ K} is the polar cone of a closed convex cone K in Rⁿ, and |x| denotes the vector in Rⁿ whose components are the absolute values of the components of x ∈ Rⁿ. Problem (1.1) is called the absolute value complementarity problem; it was introduced by Noor et al. in [3]. We remark that the absolute value complementarity problem (1.1) can be viewed as an extension of the generalized complementarity problem considered in [11]. Let K be a closed and convex set in the inner product space Rⁿ. We consider the problem of finding x ∈ K such that

〈Ax − |x| − b, y − x〉 ≥ 0,   ∀y ∈ K.   (1.2)

Problem (1.2) is called the absolute value variational inequality (see [3]). We note that problem (1.2) is a special form of the mildly nonlinear variational inequalities (see [12]). If K = Rⁿ, then problem (1.2) is equivalent to finding x ∈ Rⁿ such that

Ax − |x| − b = 0.   (1.3)

Problem (1.3) is known as the absolute value equation and has been studied extensively in recent years (see [3–9, 13, 14]). To propose and analyze algorithms for absolute value complementarity problems, we need the following definitions.

Definition 1.1. B ∈ Rⁿ×ⁿ is called an L-matrix if b_ii > 0 for i = 1, 2, …, n, and b_ij ≤ 0 for i ≠ j, i, j = 1, 2, …, n.

Definition 1.2. If A ∈ Rⁿ×ⁿ is positive definite, then:
(i) there exists a constant γ > 0 such that

〈Ax, x〉 ≥ γ||x||²   for all x ∈ Rⁿ;

(ii) there exists a constant β > 0 such that

||Ax|| ≤ β||x||   for all x ∈ Rⁿ.
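For a symmetric positive definite matrix, the sharpest such constants are the extreme eigenvalues: γ can be taken as the smallest eigenvalue and β as the largest. The following sketch checks both inequalities of Definition 1.2 numerically on random vectors; the 2×2 matrix and its hand-computed eigenvalues 1 and 3 are our own illustrative choices, not from the paper.

```python
import math
import random

def satisfies_definition(A, gamma, beta, trials=1000, seed=0):
    """Empirically check <Ax, x> >= gamma*||x||^2 and ||Ax|| <= beta*||x||
    (Definition 1.2) on random sample vectors."""
    rng = random.Random(seed)
    n = len(A)
    for _ in range(trials):
        x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        nx2 = sum(v * v for v in x)                    # ||x||^2
        inner = sum(Ax[i] * x[i] for i in range(n))    # <Ax, x>
        if inner < gamma * nx2 - 1e-12:
            return False
        if math.sqrt(sum(v * v for v in Ax)) > beta * math.sqrt(nx2) + 1e-12:
            return False
    return True

# A = [[2, 1], [1, 2]] has eigenvalues 1 and 3, so Definition 1.2 holds
# with gamma = 1 (smallest eigenvalue) and beta = 3 (largest eigenvalue).
ok = satisfies_definition([[2.0, 1.0], [1.0, 2.0]], gamma=1.0, beta=3.0)
```

The random sampling cannot prove the inequalities, but for a symmetric matrix it reliably detects a γ chosen larger than the smallest eigenvalue.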

2. ABSOLUTE COMPLEMENTARITY PROBLEMS

To propose and analyze algorithms for absolute complementarity problems, we need the following results.

Lemma 2.1 (see [15]). Let K be a nonempty closed convex set in Rⁿ. For a given z ∈ Rⁿ, u ∈ K satisfies the inequality

〈u − z, v − u〉 ≥ 0,   ∀v ∈ K,

if and only if u = P_K z, where P_K is the projection of Rⁿ onto the closed convex set K.

Using the technique of Noor et al. from [3], one can establish the equivalence between problems (1.1) and (1.2).

Lemma 2.2 (see [3]). If K is the positive cone in Rⁿ, then x ∈ K is a solution of the absolute variational inequality (1.2) if and only if x ∈ K solves the complementarity problem (1.1).

In the next result, we state the equivalence between the variational inequality (1.2) and a fixed point problem, which follows from Lemma 2.1. This result is due to Noor et al. (see [3]).

Lemma 2.3 (see [3]). If K is a closed convex set in Rⁿ, then for a constant ρ > 0, x ∈ K satisfies (1.2) if and only if x ∈ K satisfies the relation

x = P_K(x − ρ[Ax − |x| − b]),   (2.1)

where P_K is the projection of Rⁿ onto the closed convex set K.

Lemma 2.3 implies that the absolute complementarity problem (1.1) is equivalent to the fixed point problem (2.1). This equivalent formulation plays an important and fundamental role in the study of the existence of a solution of the absolute value complementarity problem. It is also used to suggest and analyze iterative methods for solving absolute value complementarity problems.
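The fixed point relation (2.1) directly suggests the simplest iterative scheme, x ← P_K(x − ρ[Ax − |x| − b]). The following sketch implements it under the assumption that K is the nonnegative orthant, so the projection is a componentwise clamp at zero; the 3×3 L-matrix, right-hand side, and step size ρ = 0.1 are illustrative choices of ours, not data from the paper.

```python
# Fixed-point iteration x <- P_K(x - rho*(Ax - |x| - b)) from (2.1),
# sketched for K = nonnegative orthant, where (P_K z)_i = max(0, z_i).

def projection_iteration(A, b, rho=0.1, tol=1e-10, max_iter=10000):
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        r = [Ax[i] - abs(x[i]) - b[i] for i in range(n)]   # Ax - |x| - b
        x_new = [max(0.0, x[i] - rho * r[i]) for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# Illustrative 3x3 L-matrix (positive diagonal, nonpositive off-diagonal).
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [1.0, 1.0, 1.0]
x = projection_iteration(A, b)
# The limit should satisfy x >= 0, Ax - |x| - b >= 0, <x, Ax - |x| - b> = 0.
```

For this data the solution is strictly positive, so Ax − |x| − b = (A − I)x − b = 0 there; solving the tridiagonal system by hand gives x = (4/7, 5/7, 4/7).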



To define the projection operator P_K, we consider the special case when K = [0, c] is a closed convex set in Rⁿ.

Definition 2.1 (see [16]). Let K = [0, c] be a closed convex set in Rⁿ. Then the projection operator P_K x is defined componentwise as

(P_K x)_i = min{max(0, x_i), c_i},   i = 1, 2, …, n.

Lemma 2.4 (see [16]). For any x and y in Rⁿ, the following facts hold:
(i) P_K(x + y) ≤ P_K x + P_K y;
(ii) P_K x − P_K y ≤ P_K(x − y);
(iii) x ≤ y ⇒ P_K x ≤ P_K y;
(iv) P_K x + P_K(−x) ≤ |x|, with equality if and only if −c ≤ x ≤ c.

Now we decompose the matrix A as

A = D − L − U,   (2.2)

where D is the diagonal matrix and L and U are strictly lower and strictly upper triangular matrices, respectively. Let 0 < α ≤ ω ≤ 1. Using (2.2) and the technique of Hadjidimos and Yeyios from [10], we suggest the following SAOR method for solving (1.1).

Algorithm 2.1
Step 1. Choose an initial vector x⁰ ∈ Rⁿ and parameters α and ω; set k = 0.
Step 2. Calculate

x^{k+1} = P_K(x^k − D^{−1}[−αLx^{k+1} + (ω(2 − ω)A + αL)x^k − ω(2 − ω)(|x^k| + b)]).

Step 3. If x^{k+1} = x^k, then stop; else set k = k + 1 and go to Step 2.

Now we define an operator g : Rⁿ → Rⁿ such that g(x) = ξ, where ξ is the fixed point of the system

ξ = P_K(x − D^{−1}[−αLξ + (ω(2 − ω)A + αL)x − ω(2 − ω)(|x| + b)]).   (2.3)

We also assume that the solution set

ϕ = {x ∈ Rⁿ : x ≥ 0, Ax − |x| − b ≥ 0}

of the absolute complementarity problem is nonempty. To prove the convergence of Algorithm 2.1, we need the following result.

Theorem 2.1. Consider the operator g : Rⁿ → Rⁿ defined by (2.3). Assume that A ∈ Rⁿ×ⁿ is an L-matrix and that 0 < ω ≤ 1, 0 ≤ α ≤ ω. Then for any x ∈ ϕ it holds that:
(i) g(x) ≤ x;
(ii) x ≤ y ⇒ g(x) ≤ g(y);
(iii) ξ = g(x) ∈ ϕ.

Proof. To prove (i), we need to verify that

ξ_i ≤ x_i,   i = 1, 2, …, n,

holds with ξ_i satisfying

ξ_i = P_K(x_i − a_ii^{−1}[−α Σ_{j=1}^{i−1} L_ij(ξ_j − x_j) + ω(2 − ω)(Ax − |x| − b)_i]).   (2.4)

To prove the required result, we use mathematical induction. For i = 1,

ξ_1 = P_K(x_1 − a_11^{−1} ω(2 − ω)(Ax − |x| − b)_1).

Since Ax − |x| − b ≥ 0 and 0 < ω ≤ 1, it follows that ξ_1 ≤ x_1. For i = 2, we have

ξ_2 = P_K(x_2 − a_22^{−1}[−αL_21(ξ_1 − x_1) + ω(2 − ω)(Ax − |x| − b)_2]).


Here Ax − |x| − b ≥ 0, 0 < α ≤ ω ≤ 1, L_21 ≥ 0, and ξ_1 − x_1 ≤ 0. This implies that ξ_2 ≤ x_2. Suppose that ξ_i ≤ x_i for i = 1, 2, …, k − 1; we have to prove that the statement is true for i = k, that is, ξ_k ≤ x_k. Consider

ξ_k = P_K(x_k − a_kk^{−1}[−α Σ_{j=1}^{k−1} L_kj(ξ_j − x_j) + ω(2 − ω)(Ax − |x| − b)_k])
    = P_K(x_k − a_kk^{−1}[−α(L_k1(ξ_1 − x_1) + L_k2(ξ_2 − x_2) + … + L_{k,k−1}(ξ_{k−1} − x_{k−1})) + ω(2 − ω)(Ax − |x| − b)_k]).   (2.5)

Since Ax − |x| − b ≥ 0, 0 < α ≤ ω ≤ 1, L_k1, L_k2, …, L_{k,k−1} ≥ 0, and ξ_i ≤ x_i for i = 1, 2, …, k − 1, from (2.5) we conclude that ξ_k ≤ x_k. Hence (i) is proved.

Now we prove (ii). For this, let us suppose that ξ = g(x) and φ = g(y). We will prove that x ≤ y ⇒ ξ ≤ φ. As

ξ = P_K(x − D^{−1}[−αLξ + (ω(2 − ω)A + αL)x − ω(2 − ω)(|x| + b)]),

ξ_i can be written as

ξ_i = P_K(x_i − a_ii^{−1}[−α Σ_{j=1}^{i−1} L_ij ξ_j + ω a_ii x_i + (α − ω(2 − ω)) Σ_{j=1}^{i−1} L_ij x_j − ω(2 − ω) Σ_{j=i+1}^{n} U_ij x_j − ω(2 − ω)|x_i| − ω(2 − ω)b_i])
    = P_K((1 − ω)x_i − a_ii^{−1}[−α Σ_{j=1}^{i−1} L_ij ξ_j + (α − ω(2 − ω)) Σ_{j=1}^{i−1} L_ij x_j − ω(2 − ω) Σ_{j=i+1}^{n} U_ij x_j − ω(2 − ω)|x_i| − ω(2 − ω)b_i]).

Similarly, for φ_i we have

φ_i = P_K((1 − ω)y_i − a_ii^{−1}[−α Σ_{j=1}^{i−1} L_ij φ_j + (α − ω(2 − ω)) Σ_{j=1}^{i−1} L_ij y_j − ω(2 − ω) Σ_{j=i+1}^{n} U_ij y_j − ω(2 − ω)|y_i| − ω(2 − ω)b_i]).

For i = 1,

φ_1 = P_K((1 − ω)y_1 − a_11^{−1} ω(2 − ω)[−Σ_{j=2}^{n} U_1j y_j − |y_1| − b_1])
    ≥ P_K((1 − ω)x_1 − a_11^{−1} ω(2 − ω)[−Σ_{j=2}^{n} U_1j x_j − |x_1| − b_1]) = ξ_1.


Since y_1 ≥ x_1 ≥ 0, we have −|y_1| ≤ −|x_1|; hence the claim is true for i = 1. Suppose it is true for i = 1, 2, …, k − 1; we will prove it for i = k. For this, consider

φ_k = P_K((1 − ω)y_k − a_kk^{−1}[−α Σ_{j=1}^{k−1} L_kj φ_j + (α − ω(2 − ω)) Σ_{j=1}^{k−1} L_kj y_j − ω(2 − ω) Σ_{j=k+1}^{n} U_kj y_j − ω(2 − ω)|y_k| − ω(2 − ω)b_k])
    ≥ P_K((1 − ω)x_k − a_kk^{−1}[−α Σ_{j=1}^{k−1} L_kj ξ_j + (α − ω(2 − ω)) Σ_{j=1}^{k−1} L_kj x_j − ω(2 − ω) Σ_{j=k+1}^{n} U_kj x_j − ω(2 − ω)|x_k| − ω(2 − ω)b_k]) = ξ_k,

since x ≤ y and ξ_i ≤ φ_i for i = 1, 2, …, k − 1. Hence the claim is true for k, and (ii) is verified.

Next we prove (iii), that is,

ξ = g(x) ∈ ϕ.   (2.6)

Let

λ = g(ξ) = P_K(ξ − D^{−1}[−αL(λ − ξ) + ω(2 − ω)(Aξ − |ξ| − b)]).

From (i), λ = g(ξ) ≤ ξ. Also, by the definition of g, ξ = g(x) ≥ 0 and λ = g(ξ) ≥ 0. Now

λ_i = P_K(ξ_i − a_ii^{−1}[−α Σ_{j=1}^{i−1} L_ij(λ_j − ξ_j) + ω(2 − ω)(Aξ − |ξ| − b)_i]).

For i = 1, ξ_1 ≥ 0 by the definition of g. Suppose that (Aξ − |ξ| − b)_1 < 0; then

λ_1 = P_K(ξ_1 − a_11^{−1} ω(2 − ω)(Aξ − |ξ| − b)_1) > P_K(ξ_1) = ξ_1,

which contradicts the fact that λ ≤ ξ. Therefore (Aξ − |ξ| − b)_1 ≥ 0. Now we prove the claim for any k in {1, 2, …, n}. Suppose, on the contrary, that (Aξ − |ξ| − b)_k < 0; then

λ_k = P_K(ξ_k − a_kk^{−1}[−α Σ_{j=1}^{k−1} L_kj(λ_j − ξ_j) + ω(2 − ω)(Aξ − |ξ| − b)_k]).

As this holds for all α ∈ [0, 1], it holds in particular for α = 0, that is,

λ_k = P_K(ξ_k − a_kk^{−1} ω(2 − ω)(Aξ − |ξ| − b)_k) > P_K(ξ_k) = ξ_k.

This contradicts the fact that λ ≤ ξ. So (Aξ − |ξ| − b)_k ≥ 0 for every k in {1, 2, …, n}. Hence ξ = g(x) ∈ ϕ.

Now we prove the convergence of Algorithm 2.1 when the matrix A is an L-matrix, as stated in the next result.

Theorem 2.2. Assume that A ∈ Rⁿ×ⁿ is an L-matrix, and let 0 < ω ≤ 1, 0 ≤ α ≤ ω. Then for any initial vector x⁰ ∈ ϕ, the sequence {x^k}, k = 0, 1, 2, …, defined by Algorithm 2.1 has the following properties:
(i) 0 ≤ x^{k+1} ≤ x^k ≤ x⁰, k = 0, 1, 2, …;
(ii) lim_{k→∞} x^k = x* is the unique solution of the absolute complementarity problem.

Proof. Since x⁰ ∈ ϕ, by (i) and (iii) of Theorem 2.1 we have x¹ ≤ x⁰ and x¹ ∈ ϕ. Recursively applying Theorem 2.1, we obtain

0 ≤ x^{k+1} ≤ x^k ≤ x⁰,   k = 0, 1, 2, … .   (2.7)

[Figure: 2-norm of the residual versus the number of iterations. Comparison between the GAOR method and the SAOR method.]

From (i) we observe that the sequence {x^k} is monotone and bounded; therefore it converges to some x* ∈ Rⁿ₊ satisfying

x* = P_K(x* − D^{−1}[−αLx* + (ω(2 − ω)A + αL)x* − ω(2 − ω)(|x*| + b)])
   = P_K(x* − D^{−1} ω(2 − ω)[Ax* − |x*| − b]).

Hence x* is the solution of the absolute complementarity problem (1.1).

3. NUMERICAL RESULTS

In this section, we consider several examples to show the efficiency of the proposed method. Convergence of the SAOR method is guaranteed for L-matrices only, but it can also solve other types of systems. All experiments were performed on an Intel(R) Core(TM) 2 processor (2.1 GHz) with 1 GB RAM, and the codes were written in MATLAB 7.

Example 3.1 (see [3]). Consider the ordinary differential equation

d 2 x − x = (1 − x 2 ), 2 dt

0 ≤ x ≤ 1,

x(0) = 0,

x(1) = 1.

(3.1)

The exact solution is

⎧⎪0.7378827425 sin(t) − 3 cos(t ) + 3 − t 2, x < 0, x(t ) = ⎨ 2 −t t ⎪− ⎩ 0.7310585786e − 0.2689414214e + 1 + t , x > 0. We take n = 10, the matrix A is given by

a_ij = −242 for j = i;
a_ij = 121 for j = i + 1 (i = 1, 2, …, n − 1) and j = i − 1 (i = 2, 3, …, n);
a_ij = 0 otherwise.
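The discretization matrix above can be assembled directly; checking it against Definition 1.1 confirms the paper's remark that this A is not an L-matrix (its diagonal is negative and its off-diagonal entries are positive), even though the SAOR method still converges for this problem. The sketch below is illustrative code of ours, not the paper's MATLAB implementation.

```python
# Assemble the n = 10 matrix of Example 3.1: diagonal entries -242,
# entries 121 on the sub- and superdiagonal, zeros elsewhere.

def example31_matrix(n=10):
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = -242.0
        if i + 1 < n:
            A[i][i + 1] = 121.0
            A[i + 1][i] = 121.0
    return A

def is_L_matrix(A):
    # Definition 1.1: positive diagonal, nonpositive off-diagonal entries.
    n = len(A)
    return all(A[i][i] > 0 for i in range(n)) and \
           all(A[i][j] <= 0 for i in range(n) for j in range(n) if i != j)

A = example31_matrix()
# This A fails both conditions of Definition 1.1, so it is not an L-matrix.
```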


Table. Comparison of the iterative method from [8] and the SAOR method

Order   Iterative method [8]           SAOR method
        no. of iterations   TOC        no. of iterations   TOC
4       10                  0.0168     9                   0.001
8       11                  0.018      9                   0.001
16      11                  0.143      9                   0.001
32      12                  3.319      10                  0.001
64      12                  7.145      10                  0.018
128     12                  11.342     10                  0.030
256     12                  25.014     10                  1.098
512     12                  98.317     10                  10.230
1024    13                  534.903    10                  137.649

The constant vector b is given by

b = (120/121, 117/121, 112/121, 105/121, 96/121, 85/121, 72/121, 57/121, 40/121, −14620/121)^T.

Here A is not an L-matrix. The comparison between the exact solution and the approximate solutions is given in the figure, where we see that the SAOR method converges to the approximate solution of the absolute complementarity problem (1.1) more rapidly than the GAOR method.

In the next example, we compare the SAOR method with the iterative method of Noor et al. (see [8]).

Example 3.2 (see [8]). Let the matrix A be given by

a_ij = 8 for j = i;
a_ij = −1 for j = i + 1 (i = 1, 2, …, n − 1) and j = i − 1 (i = 2, 3, …, n);
a_ij = 0 otherwise.
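A complete sketch of Algorithm 2.1 on this example can be written as follows, assuming K is the nonnegative orthant (so the projection is a componentwise clamp at zero). The matrix, the right-hand side b = (6, 5, …, 5, 6)^T, and the residual stopping rule follow the example; the problem size n = 8 and the parameters α = 0.5, ω = 0.9 are our own illustrative choices, not the values used in the paper.

```python
import math

# Sketch of Algorithm 2.1 (SAOR). In the splitting A = D - L - U, the
# strictly lower part satisfies L[i][j] = -A[i][j] for j < i (and
# similarly for U), so L appears below as -A[i][j].

def saor_solve(A, b, alpha=0.5, omega=0.9, tol=1e-6, max_iter=1000):
    n = len(b)
    w = omega * (2.0 - omega)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [0.0] * n
        for i in range(n):
            # -alpha * (L x^{k+1})_i uses already-updated entries j < i.
            s = -alpha * sum(-A[i][j] * x_new[j] for j in range(i))
            # (w*A + alpha*L) x^k
            s += sum(w * A[i][j] * x[j] for j in range(n))
            s += alpha * sum(-A[i][j] * x[j] for j in range(i))
            # -w * (|x^k| + b)
            s -= w * (abs(x[i]) + b[i])
            x_new[i] = max(0.0, x[i] - s / A[i][i])   # projection P_K
        x = x_new
        res = [sum(A[i][j] * x[j] for j in range(n)) - abs(x[i]) - b[i]
               for i in range(n)]
        if math.sqrt(sum(r * r for r in res)) < tol:   # ||Ax - |x| - b||
            break
    return x

n = 8
A = [[8.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
      for j in range(n)] for i in range(n)]
b = [6.0] + [5.0] * (n - 2) + [6.0]
x = saor_solve(A, b)
```

For this data the vector of all ones solves Ax − |x| − b = 0 exactly (each row of (A − I)x reproduces the corresponding entry of b), so the iterates should approach (1, 1, …, 1)^T.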

Let b = (6, 5, 5, …, 5, 6)^T, with the problem size n ranging from 4 to 1024. The stopping criterion is ||Ax − |x| − b|| < 10⁻⁶, and we choose the initial guess x⁰ = (0, 0, …, 0)^T. The computational results are shown in the table, where TOC denotes the total time taken by the CPU. The rate of convergence of the SAOR method is better than that of the iterative method from [8].

4. CONCLUSION

In this paper, we have suggested and analyzed a general SAOR method for solving absolute value complementarity problems. Convergence of the proposed method is proved under suitable conditions. Several examples are given to illustrate the efficiency of the new SAOR method. Comparison with other methods is carried out, which shows that the proposed method performs better. Readers are invited to discover novel applications of absolute value problems in different branches of the pure and applied sciences.

REFERENCES

1. C. E. Lemke, "Bimatrix equilibrium points and mathematical programming," Manag. Sci. 11, 681–689 (1965).
2. R. W. Cottle and G. Dantzig, "Complementary pivot theory of mathematical programming," Lin. Alg. Appl. 1, 103–125 (1968).



3. M. A. Noor, J. Iqbal, K. I. Noor, and E. Al-Said, "Generalized AOR method for solving absolute complementarity problems," J. Appl. Math. (2012). DOI: 10.1155/2012/743861
4. O. L. Mangasarian, "Absolute value programming," Comput. Optim. Appl. 36, 43–53 (2007).
5. O. L. Mangasarian, "Absolute value equation solution via concave minimization," Optim. Lett. 1, 3–8 (2007).
6. O. L. Mangasarian and R. R. Meyer, "Absolute value equations," Lin. Alg. Appl. 419, 359–367 (2006).
7. M. A. Noor, J. Iqbal, S. Khattri, and E. Al-Said, "A new iterative method for solving absolute value equations," Int. J. Phys. Sci. 6, 1793–1797 (2011).
8. M. A. Noor, J. Iqbal, K. I. Noor, and E. Al-Said, "On an iterative method for solving absolute value equations," Optim. Lett. 6, 1027–1033 (2012).
9. M. A. Noor, J. Iqbal, and E. Al-Said, "Residual iterative method for solving absolute value equations," Abs. Appl. Anal. (2012). DOI: 10.1155/2012/406232
10. A. Hadjidimos and A. Yeyios, "Symmetric accelerated overrelaxation (SAOR) method," Math. Comput. Simul. 24, 72–76 (1982).
11. S. Karamardian, "Generalized complementarity problem," J. Optim. Theory Appl. 8, 161–168 (1971).
12. M. A. Noor, PhD Thesis (Brunel University, London, UK, 1975).
13. O. L. Mangasarian, "The linear complementarity problem as a separable bilinear program," J. Glob. Optim. 6, 153–161 (1995).
14. J. Rohn, "A theorem of the alternatives for the equation Ax + B|x| = b," Lin. Multilin. Alg. 52, 421–426 (2004).
15. M. A. Noor, "On merit functions for quasivariational inequalities," J. Math. Ineq. 1, 259–268 (2007).
16. B. H. Ahn, "Iterative methods for linear complementarity problems with upper bounds on primary variables," Math. Programming 26, 295–315 (1983).
