2012 IEEE 27th Convention of Electrical and Electronics Engineers in Israel

Implementation of Quasi-Maximum-Likelihood Detection Based on Semidefinite Relaxation and Linear Programming

Lev Rapoport
Huawei Technologies Co., Ltd, Russian R&D Center and Institute of Control Sciences RAS, Moscow, Russia

Zeng Yanxing, Vladimir Ivanov, and Shen Jianqiang
Huawei Technologies Co., Ltd, Russian R&D Center, Moscow, Russia

Abstract—In this paper, a new numerical method is proposed for fast signal detection in large-scale MIMO systems. A semidefinite relaxation (SDR) approach is utilized. The SDR problem is further reduced to sequential linear programming by adding a new form of cutting planes and a column-generation method. Bit error rate (BER) performance results conclude the paper. BER performance is compared with that of other MIMO detection algorithms. Performance of the new scheme is practically identical to that of maximum-likelihood detection, while its complexity is much lower and does not depend on the condition number of the channel matrix.

I. INTRODUCTION

There is a large amount of literature where the SDR approach is used for signal detection in MIMO systems. Despite the differences in the statements of the problems solved in papers [1]-[6], they all address the same mathematical problem. Given the real-valued m × n matrix H, consider the linear transmission channel

y = Hx + ε,  (1)

where y ∈ R^m is the vector of the received signal and x is the vector of transmitted binary symbols taking values −1, 1 (x ∈ {−1, 1}^n). Let ε be Gaussian white noise with covariance matrix

E(ε ε^T) = σ^2 I,  (2)

where the symbol T denotes the matrix transpose, all vectors are assumed to be columns, and the symbol I denotes the identity matrix. Provided that the noise is Gaussian with covariance matrix (2), the maximum-likelihood (ML) method consists in solving the following binary optimization problem:

x* = arg min_{x ∈ {−1,1}^n} ‖y − Hx‖^2,  (3)

where ‖·‖^2 is the square of the Euclidean norm. The complexity of the exhaustive search in (3) is 2^n, which grows exponentially, making problem (3) hard to solve for n ≥ 20. The well-known zero-forcing (ZF) detector is much less computationally demanding:

x_zf = [(H^T H)^{-1} H^T y]_±,  (4)
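To make the exponential cost of (3) concrete, the exhaustive ML search can be sketched in a few lines of numpy; this is only an illustration of the enumeration, not the authors' implementation, and all names are ours:

```python
import itertools

import numpy as np

def ml_detect(H, y):
    """Exhaustive maximum-likelihood search over x in {-1, 1}^n, cf. (3)."""
    n = H.shape[1]
    best_x, best_cost = None, np.inf
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        x = np.array(signs)
        cost = np.sum((y - H @ x) ** 2)  # squared Euclidean norm ||y - Hx||^2
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

# Noise-free sanity check: the transmitted vector is recovered exactly.
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))
x_true = np.array([1.0, -1.0, -1.0, 1.0])
assert np.array_equal(ml_detect(H, H @ x_true), x_true)
```

The loop visits all 2^n candidates, which is exactly why the search becomes impractical around n ≥ 20.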

978-1-4673-4681-8/12/$31.00 ©2012 IEEE

where the component-wise operation [ξ]_± gives −1 if ξ < 0 and 1 otherwise. The complexity of the pseudoinverse is estimated as max{m^2 n, n^3}. The main drawback of the ZF detector is its poor performance in the case of an ill-conditioned matrix H. Another popular low-complexity detector is the minimum mean square error (MMSE) detector, which adds regularization:

x_mmse = [(H^T H + σ^2 I)^{-1} H^T y]_±,  (5)

provided the value of σ is known. The MMSE detector gives better results than ZF owing to the regularization, but still loses much to ML, especially in the case of high SNR (i.e., low values of σ). In what follows we suppose that the rows of the matrix H are normalized so that

E(Σ_{j=1}^{n} h_ij^2) = 1.  (6)
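The two linear detectors (4) and (5) can be sketched directly in numpy; a minimal illustration with names of our choosing (the slicing helper implements the [·]_± operation defined above):

```python
import numpy as np

def slice_pm(xi):
    """Component-wise [xi]_pm: -1 where xi < 0, and 1 otherwise."""
    return np.where(xi < 0, -1.0, 1.0)

def zf_detect(H, y):
    """Zero-forcing detector, cf. (4)."""
    return slice_pm(np.linalg.solve(H.T @ H, H.T @ y))

def mmse_detect(H, y, sigma2):
    """MMSE detector, cf. (5): ZF with Tikhonov-style regularization."""
    n = H.shape[1]
    return slice_pm(np.linalg.solve(H.T @ H + sigma2 * np.eye(n), H.T @ y))

# Noise-free sanity check: ZF inverts a well-conditioned channel exactly.
rng = np.random.default_rng(1)
H = rng.standard_normal((8, 4))
x_true = np.array([1.0, 1.0, -1.0, 1.0])
assert np.array_equal(zf_detect(H, H @ x_true), x_true)
```

Solving the normal equations with `np.linalg.solve` avoids forming the pseudoinverse explicitly; the max{m^2 n, n^3} cost estimate above is unchanged.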

II. SEMIDEFINITE RELAXATION

The attempt to get rid of the exponential complexity of MLD while preserving good BER performance as much as possible gave rise to the semidefinite relaxation approach, which potentially has polynomial complexity. Starting with the paper [8], where it was illustrated for the max-cut problem, the SDR approach has received attention in a huge amount of subsequent literature, combining ideas of convex optimization with cutting-plane iterates. The idea of the so-called "triangle" constraints, leading to C_n^3 cutting planes, was introduced in [8] and used in [1], [4], and other papers. A convex optimization algorithm is applied to the barrier function introduced in [7] and [8]. In the present paper, we follow the SDR approach; however, we do not use barrier functions and thus get rid of the necessity to follow the minimization path as the barrier parameter tends to zero. Instead, exact cutting planes, which are support planes for the cone of positive semidefinite matrices, are iteratively constructed while solving successive linear programming problems [9]. New cutting planes act as newly generated columns in the dual simplex method, see [10], [11]. Also, a new class of C_n^2 cutting planes, looking much like the triangle constraints but having a different nature, is introduced in this paper.


A. SDR formulation

Rewriting the problem (3) in the equivalent form

x*, X* = arg min (1/2) tr(X H^T H) − y^T H x,  (7)

subject to the constraints

x ∈ {−1, 1}^n,  X − x x^T = 0,  (8)

we weaken the conditions (8), replacing them with the relaxed set of conditions

X = X^T,  X − x x^T ≥ 0,
−1 ≤ X_ij ≤ 1,  i, j = 1, ..., n,  i ≠ j,
X_ii = 1,  i = 1, ..., n,
−1 ≤ x_i ≤ 1,  i = 1, ..., n.  (9)

Finally, introducing the notations

Y = [ X     x
      x^T   1 ],
Q = [ H^T H    −H^T y
      −y^T H    0 ],  (10)

and taking into account the Schur complement lemma, stating the equivalence between the linear matrix inequalities (X − x x^T ≥ 0, X ≥ 0) and Y ≥ 0 (see [7]), we arrive at the problem
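The bookkeeping in (10) is easy to verify numerically: at a rank-one feasible point X = x x^T, the quantity (1/2) tr(Y Q) reproduces the objective of (7). A small sketch (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 5, 3
H = rng.standard_normal((m, n))
y = rng.standard_normal(m)
x = rng.choice([-1.0, 1.0], size=n)
X = np.outer(x, x)  # rank-one feasible point, X = x x^T

# Block matrices Y and Q from (10).
Y = np.block([[X, x[:, None]], [x[None, :], np.ones((1, 1))]])
Q = np.block([[H.T @ H, -(H.T @ y)[:, None]],
              [-(y @ H)[None, :], np.zeros((1, 1))]])

# (1/2) tr(Y Q) reproduces the objective (1/2) tr(X H^T H) - y^T H x of (7).
assert np.isclose(0.5 * np.trace(Y @ Q),
                  0.5 * np.trace(X @ H.T @ H) - y @ H @ x)
```

The cross term y^T H x appears twice in tr(Y Q) (once in each off-diagonal block), which is why the single factor 1/2 in front of the trace recovers (7) exactly.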

Y* = arg min (1/2) tr(Y Q),  (11)

subject to the set of relaxed constraints

Y = Y^T,  Y ≥ 0,
−1 ≤ Y_ij ≤ 1,  i, j = 1, ..., n,  i ≠ j,
Y_ii = 1,  i = 1, ..., n.  (12)

Note that the matrix Y is symmetric (n + 1) × (n + 1). Let C be the convex acute cone of symmetric positive semidefinite (n + 1) × (n + 1) matrices and C̄ be its boundary. The set defined by conditions (12) is wider than the set defined by conditions (8). To make constraints (12) more strict, additional constraints are usually formulated. The class of "triangle" constraints was introduced in [8] and used in many papers, see [1], [4] for example. We introduce another set of constraints.

Let us refine the set (12), excluding its undesirable parts. Note that according to (10) we have x_i = Y_(n+1)i. The following combinations of values are possible:

Y_ij = 1,   Y_(n+1)i = 1,   Y_(n+1)j = 1,
Y_ij = 1,   Y_(n+1)i = −1,  Y_(n+1)j = −1,
Y_ij = −1,  Y_(n+1)i = −1,  Y_(n+1)j = 1,
Y_ij = −1,  Y_(n+1)i = 1,   Y_(n+1)j = −1,  (13)

while the other four combinations are undesirable, as they would contradict the initial constraint X = x x^T. Let Y^1, Y^2, Y^3, Y^4 be the symmetric matrices defined by the conditions

Y^1_ij = 1,   Y^1_(n+1)i = 1,   Y^1_(n+1)j = 1,
Y^2_ij = 1,   Y^2_(n+1)i = −1,  Y^2_(n+1)j = −1,
Y^3_ij = −1,  Y^3_(n+1)i = 1,   Y^3_(n+1)j = −1,
Y^4_ij = −1,  Y^4_(n+1)i = −1,  Y^4_(n+1)j = 1,  (14)

with unit diagonal Y^α_ll = 1, l = 1, ..., n + 1, and all other entries equal to zero except for those listed in (14): Y^α_lk = 0, l ≠ k, α = 1, ..., 4. Let Ω_ij be the convex hull of the vertices Y^1, Y^2, Y^3, Y^4 defined for a certain pair of indices i ≠ j. Then the additional conditions

Y ∈ Ω_ij,  i = 2, ..., n + 1,  j = 1, ..., i − 1  (15)

can be added to (12) to improve the quality of approximation of the resulting feasible set in the optimization problem (11), (12). Condition (15) can be equivalently presented by four linear inequality constraints:

−Y_ij + Y_(n+1)i + Y_(n+1)j ≤ 1,
−Y_ij − Y_(n+1)i − Y_(n+1)j ≤ 1,
Y_ij + Y_(n+1)i − Y_(n+1)j ≤ 1,
Y_ij − Y_(n+1)i + Y_(n+1)j ≤ 1.  (16)

The total number of constraints (16) is 2n(n − 1). There are many numerical schemes for efficient solution of the linear optimization problem (11) with convex constraints (12). The first constraint in (12) is Y ∈ C, while the others are just simple linear inequalities and trivial equalities. This notation makes us think of this problem mainly as a linear problem, trying to approximate the cone condition by a set of linear support-plane inequalities.

B. Linear programming formulation

Let us dene the mapping of the set of symmetric (n + 1) × (n + 1) matrices on the space RN , N = n(n − 1)/2. The mapping y = col(Y ) maps the set of lower diagonal entries (the diagonal consisting of units is excluded) on the N -dimensional vector y with entries yk = Yij where i = 2, . . . , n + 1, j = 1, . . . , i − 1, k = (i − 1)(i − 2)/2 + j. Then the following identity holds tr(Y Z) = 2col(Y )T col(Z) +

n+1 

Yii Zii .

(17)

i=1
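The col mapping and the identity (17) are straightforward to check numerically; a sketch (numpy's row-major lower-triangle ordering coincides with the index k(i, j) above):

```python
import numpy as np

def col(Y):
    """Stack the strictly lower-triangular entries of Y, cf. y_k = Y_ij."""
    idx = np.tril_indices(Y.shape[0], k=-1)  # same order as k(i, j)
    return Y[idx]

# Numerical check of identity (17) for symmetric Y, Z.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
Y, Z = A + A.T, B + B.T
lhs = np.trace(Y @ Z)
rhs = 2 * col(Y) @ col(Z) + np.sum(np.diag(Y) * np.diag(Z))
assert np.isclose(lhs, rhs)
```

The identity holds because tr(Y Z) = Σ_ij Y_ij Z_ij for symmetric matrices, and each off-diagonal pair contributes twice.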

Let mat(y) be the inverse mapping of the N-dimensional vector y to the symmetric (n + 1) × (n + 1) matrix Y with unit diagonal. Let us also denote by

c = col(C)  (18)

the image of the cone C, which is also a convex acute cone in R^N. Omitting (temporarily) the constraint Y ∈ C in (12), consider the set of linear constraints (12), (16), rewritten in terms of the vector y = col(Y):

−1 ≤ y_k ≤ 1,  k = 1, ..., N,
−y_k(i,j) + y_k(n+1,i) + y_k(n+1,j) ≤ 1,
−y_k(i,j) − y_k(n+1,i) − y_k(n+1,j) ≤ 1,
y_k(i,j) + y_k(n+1,i) − y_k(n+1,j) ≤ 1,
y_k(i,j) − y_k(n+1,i) + y_k(n+1,j) ≤ 1,  (19)

where k(i, j) = (i − 1)(i − 2)/2 + j. Let q = col(Q). According to (17), the linear function in (11) can be rewritten


as tr(Y Q) = 2 col(Y)^T q + Σ_{i=1}^{n+1} Q_ii. Omitting the last term, which does not depend on y = col(Y), solve the linear programming (LP) problem

q^T y → min  (20)

subject to constraints (19). Let y^(0) be the solution of this LP problem and Y^(0) = mat(y^(0)). This matrix is not necessarily positive semidefinite, as the constraint Y ∈ C is omitted. Let Ȳ ∈ C \ C̄ be an arbitrary matrix belonging to the interior of the cone C, Ȳ = I for example. Let Y^(0) ∉ C. Define

Y(α) = (1 − α)Ȳ + α Y^(0)  (21)
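As a sketch of the initial LP (20) subject to (19), the constraint rows can be assembled on the entries of y = col(Y) and handed to a generic LP solver; here we use `scipy.optimize.linprog` for illustration only (the paper's own scheme is a dual simplex method with column generation, and all helper names below are ours):

```python
import numpy as np
from scipy.optimize import linprog

def k_idx(i, j):
    """0-based position of entry (i, j), i > j, in y = col(Y)."""
    return i * (i - 1) // 2 + j

def solve_initial_lp(Q):
    """Solve (20) subject to (19); Q is the symmetric (n+1) x (n+1) matrix."""
    n = Q.shape[0] - 1
    N = (n + 1) * n // 2
    q = Q[np.tril_indices(n + 1, k=-1)]  # q = col(Q)
    A_ub, b_ub = [], []
    # Four inequalities of (19) per pair (i, j), i > j, among the first n rows.
    for j in range(n):
        for i in range(j + 1, n):
            for s_ij, s_i, s_j in ((-1, 1, 1), (-1, -1, -1),
                                   (1, 1, -1), (1, -1, 1)):
                row = np.zeros(N)
                row[k_idx(i, j)] = s_ij
                row[k_idx(n, i)] = s_i
                row[k_idx(n, j)] = s_j
                A_ub.append(row)
                b_ub.append(1.0)
    res = linprog(q, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(-1.0, 1.0)] * N)  # box part of (19)
    return res.x
```

The problem is always feasible (y = 0 satisfies (19)) and bounded by the box constraints, so the solver returns a y^(0) whose matrix Y^(0) = mat(y^(0)) then enters the cone test described next.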

and nd such a value α(0) that Y (α(0) ) belongs to the boundary of C. In other words α(0) = sup {α : Y (α) ∈ C}.

(22)

α∈[0,1]

If Y (0) ∈ C then the problem is solved. Otherwise we have Y (0) ∈ / C, Y¯ ∈ C \ C¯ and so, 0 < α(0) < 1. Let T ¯L ¯ be the Choletsky factorization of Y¯ , V Λ(0) V T = Y¯ = L −1 (0) ¯ −1 )T be the eigenvalue factorization, V V T = I, ¯ Y (L L (0)

and Λ(0) be a diagonal matrix of eigenvalues λi of the matrix ¯ −1 Y (0) (L ¯ −1 )T . Some of eigenvalues must be negative as we L assume that Y (0) ∈ / C. Then ¯ ((1 − α)I + αΛ(0) )V T L ¯T Y (α) = LV

(23)



(24)

and α(0) = min (0)

i: λi
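The boundary-point computation (21)-(24) is a short numerical routine; a sketch under our own naming, using numpy's Cholesky and symmetric eigenvalue factorizations:

```python
import numpy as np

def boundary_alpha(Y0, Ybar):
    """Largest alpha in [0, 1] with (1-alpha)*Ybar + alpha*Y0 psd, cf. (22)-(24)."""
    L = np.linalg.cholesky(Ybar)          # Ybar = L L^T
    Linv = np.linalg.inv(L)
    M = Linv @ Y0 @ Linv.T                # congruent image of Y^(0)
    lam = np.linalg.eigvalsh(M)           # eigenvalues lambda_i^(0)
    neg = lam[lam < 0]
    if neg.size == 0:
        return 1.0                        # Y^(0) is already in the cone C
    return float(np.min(1.0 / (1.0 - neg)))  # alpha^(0) from (24)

# Check with Ybar = I and an indefinite Y^(0): the blend Y(alpha^(0))
# must sit on the boundary of C, i.e. have smallest eigenvalue zero.
Y0 = np.diag([2.0, -1.0])
a0 = boundary_alpha(Y0, np.eye(2))
Yb = (1 - a0) * np.eye(2) + a0 * Y0
assert np.isclose(np.min(np.linalg.eigvalsh(Yb)), 0.0)
```

In the diagonal example the single negative eigenvalue is −1, so α^(0) = 1/(1 − (−1)) = 0.5, at which point Y(α^(0)) = diag(1.5, 0) touches the boundary of the cone.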
