VARIABLE-P AFFINE SCALING TRANSFORMATION ALGORITHMS FOR IMPROVED COMPRESSIVE SENSING

Sergio D. Cabrera(a), Rufino Dominguez(b), J. Gerardo Rosiles(a), and Javier Vega-Pineda(b)

(a) Dept. of Electrical and Comp. Eng., The University of Texas at El Paso, El Paso, TX 79968 USA. E-mail: [email protected]
(b) Div. de Posgrado en Electronica, Inst. Tec. de Chihuahua, 31310 Chihuahua, Chih., MEXICO

ABSTRACT

In this paper we investigate the use of the p parameter for improved recovery accuracy and speed in the iterative Affine Scaling Transformation (AST) family of algorithms, which solve the compressive sensing problem. The AST family of algorithms applies to the minimization of the ℓp, p-norm-like diversity measure; this includes the numerosity (p=0) and the ℓ1 norm (p=1) as special cases. In our previous work we concluded that any p in [0, 1] can give the sparse solution when exact recovery is possible; however, the best-approximation problem and the behavior of the algorithm are highly dependent on this parameter. We present and evaluate experimentally some simple strategies that vary the value of p as a function of the iteration number in the AST algorithm. These variable-p variants of AST capture most of the benefits of the fixed-p approaches with p=0 and p=1 simultaneously.

Index Terms— Affine Scaling Transformation, diversity measure, compressive sensing, sparse signal recovery.

1. INTRODUCTION AND MOTIVATION

Let M be the number of linear measurements used to recover a signal that has K non-zero components and is N samples long. In [1], we showed that AST for p=1 requires 3-5 times more iterations to converge to its solution than AST for p=0. The minimum M needed for perfect recovery is approximately the same on average for all values of p; however, there is an increasing spread in the minimum M as p is reduced from p=1 to p=0. We use the compressive sensing framework and notation as presented in [2].
We call Φ the measurement matrix and y the resulting measurement vector, which is taken from a signal s that is sparse in some transform domain. Thus, the desired solution x is the sparse transform of the signal s from which the measurements are taken. So, x = Ψs, where Ψ is the unitary N×N transform matrix. The problem to be solved is thus to find x from y via

ΦΨ^H x = y

or

Θx = y.    (1)

In the problem involving a sparse DFT spectrum with measurements equal to arbitrary time samples of a signal, the entries in each matrix take the form [1]:

Φ_{r,c} = { 1, c = m_r + 1
          { 0, else ;          Ψ^H_{r,c} = (1/√N) e^{j(2π/N)(r−1)(c−1)}.    (2)

As mentioned in [2], this is an underdetermined linear inverse problem with an infinite number of solutions for x. Looking for sparse solutions, or the sparsest solution, reduces the set of solutions of interest. The approach used in Compressive Sensing (CS) is to find sparse solutions by minimizing the ℓ1 norm. In the more general CS problem, the measurement matrix Φ can be of a more general form than that in Eq. (2). For example, one can use a matrix of zero-mean Gaussian random values, random binary patterns, etc., as mentioned in [3]; these are among the novel aspects of CS.
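As an illustration, the matrices of Eq. (2) can be constructed as follows. This is a minimal sketch, not code from the paper: the sample-index set m_r is drawn at random here (one of the options for arbitrary time samples), and the function name is our own.

```python
import numpy as np

def build_matrices(N, M, seed=0):
    """Build Phi, Psi^H, and Theta = Phi Psi^H for the sparse-DFT example of Eq. (2)."""
    rng = np.random.default_rng(seed)
    m = rng.choice(N, size=M, replace=False)            # arbitrary time-sample indices m_r
    Phi = np.zeros((M, N))
    Phi[np.arange(M), m] = 1.0                          # Phi_{r,c} = 1 iff c = m_r + 1
    # Psi^H is the unitary inverse-DFT matrix: (1/sqrt(N)) e^{j(2pi/N)(r-1)(c-1)}
    r, c = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    PsiH = np.exp(2j * np.pi * r * c / N) / np.sqrt(N)
    return Phi, PsiH, Phi @ PsiH

Phi, PsiH, Theta = build_matrices(N=64, M=16)
```

Since Ψ^H is unitary, Ψ^H(Ψ^H)^H = I, and each row of Θ is simply the row of the inverse-DFT matrix selected by the corresponding row of Φ.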
2. THE AFFINE SCALING TRANSFORMATION ALGORITHM

At each iteration of the AST [4], see Fig. 1, a pseudoinverse is computed using a modified Θ matrix whose columns are weighted by the magnitude of the previous solution. Thus, columns of Θ that were significant in the previous solution are favored to be present in the next solution. Successive weighting produces a successive de-emphasis of components that are not present in most previous solutions, and in the limit the number of surviving components is a minimum (at most M = rank(Θ)). The procedure is equivalent to solving a series of underdetermined problems Θ W^(k−1) q^(k−1) = y by minimizing a weighted ℓ2 norm per iteration to obtain x^(k) = W^(k−1) q^(k−1) [5]. Noting that the weighted Θ matrix will become increasingly ill-conditioned, it is necessary to regularize the pseudoinverse process by introducing the regularization parameter λ and replacing (Θ^(k) Θ^(k)H)^{−1} with (Θ^(k) Θ^(k)H + λI)^{−1} in step (3) of the algorithm in Fig. 1.

Fig. 1. The AST algorithm:
  Initialize Θ, y, W^(0) = I, and k = 0.
  (1) k = k + 1
  (2) Θ^(k) = Θ W^(k−1)
  (3) x^(k) = W^(k−1) Θ^(k)H (Θ^(k) Θ^(k)H)^{−1} y
  (4) W^(k) = diag{ |x^(k)|^(1 − p/2) }
  If ||x^(k) − x^(k−1)|| has not converged (NO), return to step (1).

of M starting at 5 and increasing by 1 until we find the minimum value of M for which AST achieves perfect recovery (always possible for a large enough M). This is defined to be those cases where the %RMSE is below 0.5, where

%RMSE = 100 ||x − x̂||₂ / ||x||₂ .
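The loop of Fig. 1, including the regularized pseudoinverse of Section 2, can be sketched as below. This is a non-authoritative sketch: the fixed regularization value lam, the stopping tolerance, and the function names are our own choices, and the p_schedule argument is only a hypothetical stand-in for the variable-p strategies evaluated in the paper.

```python
import numpy as np

def ast_recover(Theta, y, p=1.0, lam=1e-9, max_iter=300, tol=1e-9, p_schedule=None):
    """AST iteration of Fig. 1 with a regularized pseudoinverse in step (3)."""
    M, N = Theta.shape
    W = np.ones(N)                                   # W^(0) = I (stored as a diagonal)
    x_prev = np.zeros(N)
    for k in range(1, max_iter + 1):                 # step (1): k = k + 1
        pk = p if p_schedule is None else p_schedule(k)
        Th_k = Theta * W                             # step (2): Theta^(k) = Theta W^(k-1)
        G = Th_k @ Th_k.conj().T + lam * np.eye(M)   # regularized Theta^(k) Theta^(k)H + lam I
        x = W * (Th_k.conj().T @ np.linalg.solve(G, y))   # step (3)
        W = np.abs(x) ** (1.0 - pk / 2.0)            # step (4): W^(k) = diag{|x^(k)|^(1-p/2)}
        if np.linalg.norm(x - x_prev) < tol * max(np.linalg.norm(x), 1.0):
            break                                    # converged: ||x^(k) - x^(k-1)|| small
        x_prev = x
    return x

def pct_rmse(x, x_hat):
    """%RMSE = 100 ||x - x_hat||_2 / ||x||_2, the recovery-error measure above."""
    return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

A variable-p run could then pass, for example, `p_schedule=lambda k: max(0.0, 1.0 - 0.1 * k)` to move from the p=1 regime toward p=0 over the first iterations; the specific schedules studied in the paper are not reproduced here.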
