2012 International Conference on Advances in Computing and Communications
Implementation of ℓ1 Magic and One-Bit Compressed Sensing Based on Linear Programming Using Excel
Indukala P K, Lakshmi K, Sowmya V, Soman K P
Center for Excellence in Computational Engineering and Networking, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore - 641112, Tamil Nadu
[email protected]
Abstract—Compressed sensing enables the reconstruction of sparse or compressible signals from a small number of measurements. Sparse representation is of great importance in modern signal processing. The main objective is to provide a strong understanding of the concept behind the theory of compressed sensing using key ideas from linear algebra. In this paper, the concept of compressed sensing is explained through an experiment formulated as a linear program and solved using the ℓ1 magic and one-bit compressed sensing methods in Excel.

Keywords—compressed sensing; ℓ1 magic; one-bit compressed sensing; linear programming; Excel

978-0-7695-4723-7/12 $26.00 © 2012 IEEE. DOI 10.1109/ICACC.2012.67

I. INTRODUCTION

The main objective of using Excel is to provide clear visualization and a strong understanding of the concept behind the theory of compressed sensing through basic mathematical operations. For lack of a visual interpretation of the mathematical expressions, many students find the concepts behind the theory difficult to digest. As part of the academic curriculum, students should therefore be encouraged to experiment with different problems related to the theory and to analyze the obtained solutions using a simple mathematical tool like Excel. Microsoft Excel is a spreadsheet tool capable of performing calculations, analyzing data and integrating information from different programs. Such a teaching methodology trains students to understand and learn the concepts by solving practical problems. In this paper, the concept of compressed sensing is explained through an experiment formulated as a linear program and solved using the ℓ1 magic and one-bit compressed sensing methods in Excel.

Compressed sensing is a process of acquiring and reconstructing a signal that is sparse or compressible. The data used for reconstruction is a collection of inner products of the signal with some random functions known as compression functions; each such inner product is called a measurement. The number of measurements required for practical signals, which are usually sparse when expressed in a Fourier or wavelet basis, is very small. Therefore, the consumption of time and energy in measuring the data can be reduced to a great extent.

The concept of compressed sensing can be explained by a simple example. Consider 100 coins, each weighing 20 grams except one faulty coin weighing, say, 22 grams. By the divide and conquer method, weigh half of the coins together on a scale and check whether they total 1000 grams; from this, one can tell which half of the set contains the faulty coin. This halving is repeated until only the faulty coin remains [1]. However, this approach does not work well when there are several faulty coins, and trouble also occurs when each faulty coin has an unknown mass, either larger or smaller than the normal weight. The problem can still be solved by compressed sensing, provided the number of faulty coins is small. To keep track of the coins, number them from 1 to 100, and let xi be the amount by which the mass of the ith coin deviates from the nominal 20 grams. If there are only a few faulty coins, then xi = 0 in most cases, which makes the vector x = (x1, x2, ..., x100) sparse. Form a random subset by rolling a die for each of the 100 coins and accepting the coin when the result is even; let the number of coins selected in the subset be m. Measure the mass of this first random subset and denote it by g. Repeat the procedure p (say p = 25) times, each time measuring the mass of a new random subset. Recording the selections as 0's and 1's produces a random matrix A of size p × 100, and the problem results in a system of p linear equations, summarized as

A(25×100) x(100×1) = y(25×1).

Each measurement, i.e. each row of A, is of the form [0 1 0 ... 1] (1 × 100).
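The weighing model just described can be simulated in a few lines. This is a hedged sketch assuming NumPy; the faulty-coin positions and deviation values are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 25                        # 100 coins, 25 random weighings
A = rng.integers(0, 2, size=(p, n))   # 0/1 subset-selection matrix (die roll: even -> include)

x = np.zeros(n)                       # deviations from the nominal 20 g
x[[4, 19, 79]] = [2.0, -1.5, 3.0]     # hypothetical faulty coins 5, 20 and 80

masses = A @ (20.0 + x)               # measured total mass of each random subset
y = masses - 20.0 * A.sum(axis=1)     # subtract 20 g per selected coin
# y equals A @ x: the system the text summarizes as Ax = y
```

Subtracting 20 grams per selected coin cancels the nominal mass, leaving exactly the linear system Ax = y in the deviations.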
The first entry of the measurement vector can be represented as

y1 = [0 1 0 ... 1] · (x1, x2, ..., x100)^T.
Each component yi equals the mass of the ith subset minus 20 times the number of coins in that subset; for the first random subset, y1 = g − 20m. Thus y ∈ R^25 denotes the measurement vector. The unknown vector x ∈ R^100 is assumed to be sparse; suppose the 5th, 20th and 80th coins are faulty, so the entries at those locations carry the deviations from the nominal weight of 20 grams. If the number of faulty coins is small, then xi = 0 in most cases, where xi is the ith element of x. The main aim is to recover every component of x, given the random sensing matrix A and the measurement vector y [1]. With only 25 weighings of randomly selected subsets, we are almost certain to find the 3 faulty coins: applying ℓ1 optimization detects all of them. The illustration converts to a standard problem in linear programming; minimizing ||x||1 subject to the linear constraints produces exactly the same x values.
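This recovery can be sketched numerically. The sketch below assumes SciPy's linprog (the paper itself uses Excel's Solver) and uses the standard split x = x1 − x2 into nonnegative parts so that ||x||1 becomes a linear objective; the faulty-coin values are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, p = 100, 25
A = rng.integers(0, 2, size=(p, n)).astype(float)  # random 0/1 weighing matrix

x_true = np.zeros(n)
x_true[[4, 19, 79]] = [2.0, -1.5, 3.0]  # deviations of the three faulty coins
y = A @ x_true                          # the 25 subset measurements

# min ||x||_1 rewritten as an LP with the split x = x1 - x2, x1, x2 >= 0:
#   min sum(x1) + sum(x2)  s.t.  A(x1 - x2) = y
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_rec = res.x[:n] - res.x[n:]
# for most random draws, x_rec matches x_true at the three faulty positions
```

At the optimum, at most one of x1_i and x2_i is nonzero for each i, so sum(x1) + sum(x2) equals ||x||1.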
The standard linear programming formulation is

min c^T x  subject to  Ax = y,  x ≥ 0.

II. ℓ1 MAGIC AND ONE-BIT COMPRESSED SENSING WITH LINEAR PROGRAMMING

A. ℓ1 MAGIC

A sparse vector x ∈ R^n can be recovered from its linear measurements y = Ax, where A is a random m × n sensing matrix with m ≪ n and y ∈ R^m is the measurement vector. The main aim is to recover x from knowledge of A and y; the sensing matrix A reduces the number of measurements from n to m. The natural formulation minimizes ||x||0 subject to Ax = y, but ℓ0-norm minimization is computationally expensive. However, when x is sparse, ℓ1-norm optimization with the same constraint provides the solution:

min_x ||x||1  subject to  Ax = y,

where the ℓ1 norm of x is defined as ||x||1 = Σ_i |xi|.

To convert this into a standard linear program, split x ∈ R^n into x1 ∈ R^n and x2 ∈ R^n, both restricted to nonnegative values, so that the difference x = x1 − x2 can take both positive and negative values. The problem is then reformulated as

min_{x1,x2} Σ_{i=1}^{n} (x1i + x2i)  subject to  A(x1 − x2) = y,  x1 ≥ 0,  x2 ≥ 0.

B. ONE-BIT COMPRESSED SENSING

In one-bit compressed sensing, each compressive measurement is quantized to one bit. For consistent reconstruction from 1-bit measurements, the measurements are treated as sign constraints [2]: only the sign of each random measurement is preserved, and this information is enough to recover the signal. Let A be a given m × n measurement matrix whose ith row is φi, and let x ∈ R^n be the unknown signal to be recovered. Each measurement is the sign of the inner product of the sparse signal with the measurement vector φi:

yi = sign(⟨φi, x⟩),

from which it follows that yi ⟨φi, x⟩ ≥ 0. In matrix form, y = sign(Ax), where y is the measurement vector, A represents the measurement system, and the 1-bit quantization function sign(·) is applied element-wise to the vector Ax [2]. The reconstructed sparse vector is a scaled version of the original x and is obtained by solving the convex minimization program

min_x ||x||1  subject to  sign(Ax) = y,  yi ⟨φi, x⟩ ≥ 0.

Our problem can be reformulated as a standard linear program:

min_{x1,x2} Σ_{i=1}^{n} (x1i + x2i)  subject to  sign(A(x1 − x2)) = y,  yi ⟨φi, x1 − x2⟩ ≥ 0,  x1 ≥ 0,  x2 ≥ 0.
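The one-bit program can likewise be sketched with SciPy's linprog (an assumption; the paper uses Excel's Solver). Because the sign constraints alone admit the trivial solution x = 0, the sketch adds the normalization Σ_i yi⟨ai, x⟩ ≥ m, the same device the Excel demonstration uses; the signal values are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, m = 100, 25
A = rng.standard_normal((m, n))       # Gaussian sensing matrix, as in the one-bit sheet

x_true = np.zeros(n)
x_true[[4, 19, 79]] = [2.0, -1.5, 3.0]
y = np.sign(A @ x_true)               # keep only the sign of each measurement

# LP over z = [x1; x2] >= 0 with x = x1 - x2:
#   min sum(z)
#   s.t.  y_i <a_i, x1 - x2> >= 0          (sign consistency)
#         sum_i y_i <a_i, x1 - x2> >= m    (normalization; rules out x = 0)
S = y[:, None] * A                    # row i is y_i * a_i
c = np.ones(2 * n)
A_ub = -np.hstack([S, -S])            # encodes -y_i <a_i, x1 - x2> <= 0
A_ub = np.vstack([A_ub, A_ub.sum(axis=0, keepdims=True)])  # adds -sum_i y_i <a_i, x> row
b_ub = np.concatenate([np.zeros(m), [-float(m)]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
x_rec = res.x[:n] - res.x[n:]
# x_rec is (approximately) a scaled version of x_true, as Section II.B states
```

Only the direction of x is identifiable from sign measurements, which is why the recovered vector matches the original up to scale.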
III. DEMONSTRATION IN EXCEL

A. ℓ1 MAGIC

Steps (refer Figure 1):
1. Create the random sensing matrix A of size 25 × 100. In a sheet, write 1 and 2 in cells A1 and B1, select both together, and drag right to 100; this labels the columns of the matrix. In cell A2, write "=RAND()" and fill down to row 26 and across to column 100. Copy the generated matrix, paste it into a fresh sheet in cells A1 to CV25, and name the range A.
2. Create the sparse vector x to be reconstructed; its non-zero elements are the deviations. Fill cells CZ7 to CZ106 with zeros and replace a few locations with non-zero numbers. Name this range x.
3. Compute the constraint y = Ax. Select cells DB7 to DB31, write "=MMULT(A, x)" and press Ctrl+Shift+Enter. This is the measured value y.

Reconstruction of the vector x:
a) To formulate the constraint A(x1 − x2) = y, create x1 and x2: fill cells DE7 to DE106 (representing x1) and DF7 to DF106 (representing x2) with zeros.
b) Let xnew = x1 − x2. In cell DG7 (representing xnew), write "=DE7-DF7" and fill down to DG106. Name it xnew.
c) Compute ynew = A(x1 − x2) = A·xnew. Select cells DI7 to DI31, write "=MMULT(A, xnew)" and press Ctrl+Shift+Enter. Name it ynew.
d) Formulate the objective function min Σ_{i=1}^{100} (x1i + x2i): in cell DH4, write "=SUM(DE7:DF106)".
e) Select Solver from the Data tab and set the Solver parameters for minimization:
Set objective: DH4, for Σ (x1i + x2i).
Variable cells: DE7 to DF106, for x1, x2.
Constraints: DI7:DI31 = DB7:DB31 (ynew = y) and DE7:DF106 ≥ 0 (x1, x2 ≥ 0).
Press Solve.

Figure 1. ℓ1 Magic in spreadsheet

B. ONE-BIT COMPRESSED SENSING

Steps (refer Figure 2):
1. Create the random sensing matrix A of size 25 × 100 as before, but in cell A2 write "=NORMINV(RAND(), 0, 1)" (standard normal entries) and fill down to row 26 and across to column 100. Copy the generated matrix, paste it into a fresh sheet in cells A1 to CV25, and name the range A.
2. Create the sparse vector x as before: fill cells CZ7 to CZ106 with zeros, replace a few locations with non-zero numbers, and name the range x.
3. Compute Ax. Select cells DB7 to DB31, write "=MMULT(A, x)" and press Ctrl+Shift+Enter.
4. To form y = sign(Ax), write "=SIGN(DB7)" in DC7 and fill down to DC31. This is the measured value y.

Reconstruction of the vector x:
a) To formulate the constraint, create x1 and x2: fill cells DE7 to DE106 (representing x1) and DF7 to DF106 (representing x2) with zeros.
b) Let xnew = x1 − x2. In cell DG7 (representing xnew), write "=DE7-DF7" and fill down to DG106. Name it xnew.
c) Compute ynew = A·xnew. Select cells DI7 to DI31, write "=MMULT(A, xnew)" and press Ctrl+Shift+Enter.
d) Compute yi·(A·xnew)i: in cell DJ7, write "=DC7*DI7" and fill down to DJ31.
e) Compute Σ_{i=1}^{m} yi⟨Ai, xnew⟩, which should equal m: in cell DJ32, write "=SUM(DJ7:DJ31)" (it should be 25 after minimization).
f) Formulate the objective function min Σ_{i=1}^{100} (x1i + x2i): in cell DH4, write "=SUM(DE7:DF106)".
g) Make a temporary variable temp: write zeros from DK7 to DK106, used to enforce x1 ≥ 0, x2 ≥ 0.
h) Select Solver from the Data tab and set the Solver parameters for minimization:
Set objective: DH4, for Σ (x1i + x2i).
Variable cells: DE7 to DF106, for x1, x2.
Constraints: DJ7:DJ31 ≥ 0, DJ32 ≥ 25, DE7:DE106 ≥ DK7:DK106, DF7:DF106 ≥ DK7:DK106.
Press Solve.

Figure 2. One-bit compressed sensing in spreadsheet

C. RESULTS

From the random sensing matrix A and the sparse vector x, the measurement vector y is calculated in both cases. Using ℓ1 magic, the reconstructed sparse vector is found to be the same as the original x; with one-bit compressed sensing, the reconstructed sparse vector is a scaled version of the original x. With the compressed sensing methodology, losing a few measurements makes little difference: all the elements of x can still be recovered. Perfect reconstruction of signals is possible with a reasonable amount of computation and a relatively small amount of data. From a computational perspective, compressed sensing methods are more reliable than conventional methods, and they are useful in applications where computation is relatively cheap but measurement is expensive.

IV. CONCLUSION

In this paper, the concept of compressed sensing was explained through an experiment formulated as a linear program and solved using the ℓ1 magic and one-bit compressed sensing methods in Excel. The main aim is to encourage students to experiment with different problems related to the theory and to analyze the results using Excel. The visual interpretation of mathematical expressions creates awareness among students. The concepts of ℓ1 magic and one-bit compressed sensing were demonstrated in a simple way using a spreadsheet, helping students gain a strong understanding and clear visualization of the concept behind the theory of compressed sensing.

REFERENCES

[1] Kurt Bryan and Tanya Leise, "Making Do with Less: An Introduction to Compressed Sensing", SIAM Review, Education Section, June 2011.
[2] P. T. Boufounos and R. G. Baraniuk, "1-Bit Compressive Sensing", Proc. Conf. on Information Sciences and Systems (CISS), Princeton, NJ, March 19-21, 2008.
[3] K. P. Soman and R. Ramanathan, "Digital Signal and Image Processing - The Sparse Way", Elsevier India, 2012.
[4] E. J. Candès, J. Romberg, and T. Tao, "Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information", IEEE Trans. Inform. Theory, vol. 52, no. 2, pp. 489-509, Feb. 2006.
[5] E. J. Candès and M. B. Wakin, "An Introduction to Compressive Sampling", IEEE Signal Processing Magazine, vol. 25, pp. 21-30, 2008.
[6] E. Candès and J. Romberg, ℓ1-MAGIC, http://www.acm.caltech.edu/l1magic/ (accessed 5/21/11).
[7] Yaniv Plan and Roman Vershynin, "One-Bit Compressed Sensing by Linear Programming", v4, 6 October 2011.