A new robust and efficient estimator for ill-conditioned linear inverse problems with outliers

Marta Martinez-Camara¹, Michael Muma², Abdelhak M. Zoubir², Martin Vetterli¹

¹ School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne (EPFL)
² Signal Processing Group, Technische Universität Darmstadt

ICASSP 2015 · 23 April 2015

Outline

1. Motivation
2. Problem formulation
3. Background on robust estimators
4. New regularized τ-estimator
5. Algorithm
6. Results
7. Conclusions

Motivation

[Figure: a map with an unknown source location and known sensor locations. The sensors collect measurements which, combined with weather information, should let us recover the source.]

Problem formulation

Consider the following linear inverse problem:

    y = Ax + e

- y: measurement vector
- A: known deterministic matrix
- e: error term
- x: unknown parameter vector

Typical assumptions:
- A is well conditioned
- the distribution of e is Gaussian

Standard estimator: least squares (LS)

    x̂_LS = arg min_x ‖y − Ax‖₂²
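As a minimal sketch of this baseline (the dimensions match the experiments later in the deck; the noise level is an arbitrary illustrative choice), the LS estimate can be computed with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 120))              # known deterministic matrix
x_true = rng.standard_normal(120)                # unknown parameter vector
y = A @ x_true + 0.1 * rng.standard_normal(300)  # measurements with Gaussian error

# Least-squares estimate: minimizes ||y - Ax||_2^2
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.linalg.norm(x_ls - x_true))             # small when A is well conditioned
```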

Difficulty 1: ill-conditioned problem

A has a large condition number → the LS estimate fails.

[Figure: ground truth vs. LS estimate for A ∈ R^{300×120} with condition number 1000 and Gaussian errors at SNR = 10 dB; the LS reconstruction oscillates far beyond the range of the true signal.]
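Difficulty 1 can be reproduced numerically by constructing A with a prescribed condition number through its SVD; this sketch follows the figure's parameters (condition number 1000, SNR = 10 dB), while everything else is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# A = U S V^T with singular values spanning three decades: condition number 1000.
U, _ = np.linalg.qr(rng.standard_normal((300, 120)))
V, _ = np.linalg.qr(rng.standard_normal((120, 120)))
s = np.logspace(0, -3, 120)
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(120)
y_clean = A @ x_true

# Scale Gaussian noise so that the SNR is 10 dB.
e = rng.standard_normal(300)
e *= np.linalg.norm(y_clean) / (np.linalg.norm(e) * 10 ** (10 / 20))

x_ls, *_ = np.linalg.lstsq(A, y_clean + e, rcond=None)
print(np.linalg.norm(x_ls - x_true) / np.linalg.norm(x_true))  # large relative error
```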

Difficulty 2: impulsive noise and outliers

e contains outliers → the LS estimate breaks down.

[Figure: ground truth vs. LS estimate under outlier contamination; the estimate no longer resembles the true signal.]

Goals of this work

1. Design an estimator that is simultaneously
   - robust against outliers,
   - near optimal with Gaussian errors, and
   - able to handle A with a large condition number.

2. Develop a fast and reliable algorithm to compute the estimates.

Proposed approach: regularized τ-estimator
- a robust and efficient loss function
- a penalty term for regularization

Background: robust estimation

Least squares estimator:

    x̂_LS = arg min_x Σ_{n=1}^{N} (r_n(x))²

- r_n(x) = y_n − A_n x, where A_n is the n-th row of A
- Optimal in the sense that the variance of the estimate (σ²_{x̂_LS}) is minimized under Gaussian noise.

Background: robust estimation – M estimation

M estimation replaces the squared residual with a robust loss function ρ applied to scaled residuals:

    x̂_M = arg min_x Σ_{n=1}^{N} ρ( r_n(x) / σ̂_M(r(x)) )

- r_n(x) = y_n − A_n x, where A_n is the n-th row of A
- σ̂_M(r(x)): M-estimate of the residual scale
- ρ: symmetric, positive, and non-decreasing on [0, ∞)
- [Plot: the squared loss compared with bisquare ρ functions for two clipping constants c₁ and c₂.]
- If σ²_{x̂_LS} / σ²_{x̂_M} is close to 1 with Gaussian errors, the estimator is efficient.
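For concreteness, here is a sketch of the bisquare (Tukey) ρ, a standard choice behind plots like the one described above; the two clipping constants used here are the conventional ones for high robustness and for roughly 95% Gaussian efficiency, and the slides' c₁ and c₂ may differ:

```python
import numpy as np

def rho_bisquare(t, c):
    """Tukey's bisquare loss: quadratic near zero, constant for |t| > c."""
    u = np.clip(t / c, -1.0, 1.0)
    return (c ** 2 / 6.0) * (1.0 - (1.0 - u ** 2) ** 3)

t = np.linspace(-6.0, 6.0, 13)
print(rho_bisquare(t, c=1.547))   # small c: bounds large residuals early (robust)
print(rho_bisquare(t, c=4.685))   # large c: close to squared loss (efficient)
```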

Background: robust estimation – τ estimation

- The τ-estimator chooses ρ adaptively, based on the data.
- It is asymptotically equivalent to an M-estimator with

      ρ(·) = w(·) ρ₁(·) + ρ₂(·)

  where ρ₁ is robust, ρ₂ is efficient, and the weight w(·) adapts to the distribution of the data.
- [Plot: the resulting adaptive ρ function.]
- x̂_τ minimizes a robust and efficient τ-scale estimate of the residuals:

      x̂_τ = arg min_x σ̂_τ(r(x))
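As a rough sketch of what σ̂_τ computes, following the classical Yohai–Zamar construction: a robust M-scale obtained with ρ₁ is rescaled by the average of an efficient ρ₂. The fixed-point iteration and the constants below are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def rho_bisquare(t, c):
    u = np.clip(t / c, -1.0, 1.0)
    return (c ** 2 / 6.0) * (1.0 - (1.0 - u ** 2) ** 3)

def m_scale(r, c1=1.547, b=0.5, n_iter=50):
    """M-scale: s solving mean(rho1(r/s)) = b * rho1(inf), by fixed point."""
    s = np.median(np.abs(r)) / 0.6745          # normalized MAD as starting point
    for _ in range(n_iter):
        s *= np.sqrt(np.mean(rho_bisquare(r / s, c1)) / (b * c1 ** 2 / 6.0))
    return s

def tau_scale(r, c1=1.547, c2=4.685):
    """tau-scale: robust M-scale rescaled by an efficient rho2 average."""
    s = m_scale(r, c1)
    return s * np.sqrt(np.mean(rho_bisquare(r / s, c2)))

r = np.random.default_rng(0).standard_normal(300)
print(tau_scale(r))   # proportional to the residual std, up to a consistency constant
```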

Recap

- M estimator with a robust ρ: robust, but not efficient.
- M estimator with an efficient ρ: efficient, but not robust.
- τ estimator: both robust and efficient.

None of these handles the ill-posed (large condition number) case.

New regularized τ-estimator

Proposed estimator:

    x̂_τ = arg min_x σ̂_τ(r(x)) + λ‖x‖₂

- σ̂_τ(r(x)): τ-estimate of the scale
- λ ≥ 0: regularization parameter

Key difficulty: how do we compute the regularized τ-estimate?
- The objective is non-convex.
- There are no guarantees of finding the global minimum.
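Reusing the tau_scale sketch above, the proposed objective could be evaluated as in the following fragment; the slide notation leaves open whether the penalty is ‖x‖₂ or ‖x‖₂², and ‖x‖₂ is assumed here:

```python
import numpy as np

def reg_tau_objective(x, A, y, lam, scale_fn):
    """Regularized tau objective: scale_fn is a tau-scale estimator,
    e.g. the tau_scale sketch above. Penalty assumed to be lam * ||x||_2."""
    r = y - A @ x
    return scale_fn(r) + lam * np.linalg.norm(x)
```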

Algorithm

Steps:
1. How to find local minima
2. How to find the global one
3. Speeding up the algorithm

Algorithm – Step 1: finding local minima

- Equivalent to iteratively reweighted least squares (IRLS):

      x̂ = arg min_x ‖W(x)(y − Ax)‖₂² + λ²‖x‖₂²

- W(x): a data-adaptive weight term that we derive from the optimality condition

      ∂(σ̂_τ²(r(x)) + λ‖x‖₂²) / ∂x = 0
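A generic regularized IRLS loop is sketched below. The bisquare weights act as a stand-in for the τ-specific W(x) that the paper derives from the optimality condition above, so this shows the structure of the iteration rather than the authors' exact update:

```python
import numpy as np

def bisquare_weights(r, c=4.685):
    """Standard bisquare weights; a stand-in for the tau-specific W(x)."""
    s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust residual scale
    u = r / (c * s)
    return np.where(np.abs(u) < 1.0, (1.0 - u ** 2) ** 2, 0.0)

def irls(A, y, lam, weight_fn, x0, n_iter=20, tol=1e-8):
    """Alternate between reweighting residuals and solving a ridge problem."""
    x = x0.copy()
    for _ in range(n_iter):
        w = weight_fn(y - A @ x)                 # per-residual weights
        AtW = A.T * w                            # A^T W, with W diagonal
        x_new = np.linalg.solve(AtW @ A + lam ** 2 * np.eye(A.shape[1]), AtW @ y)
        if np.linalg.norm(x_new - x) < tol * (1.0 + np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```

Each iteration solves a ridge regression whose weights down-weight large residuals; with all weights equal to one, it reduces to regularized LS.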

Algorithm – Step 2: finding the global minimum

- We run IRLS from many different initial solutions...
- ...and hope that one of them lands in the valley of the global minimum.

[Figure: a non-convex objective; different initial solutions descend to different local minima, one of which is the global minimum.]

Algorithm – Step 3: speeding up the algorithm

- For each initial solution, run only a few IRLS iterations (convergence is fast at first).
- Pick the N best solutions.
- Use them as new initial solutions.
- Iterate IRLS until convergence.
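Putting the three steps together, a multi-start driver might look like the sketch below; it reuses irls and reg_tau_objective from the earlier sketches, and the numbers of starts, survivors, and warm-up iterations are illustrative choices:

```python
import numpy as np

def multistart_irls(A, y, lam, weight_fn, objective,
                    n_starts=30, n_keep=5, warm_iters=3, seed=0):
    """Steps 2-3: many cheap IRLS runs, then refine the best few to convergence."""
    rng = np.random.default_rng(seed)
    p = A.shape[1]
    # Step 2: random initial solutions, each improved by a few IRLS iterations.
    warm = [irls(A, y, lam, weight_fn, rng.standard_normal(p), n_iter=warm_iters)
            for _ in range(n_starts)]
    # Step 3: keep the most promising candidates and iterate them to convergence.
    warm.sort(key=objective)
    finals = [irls(A, y, lam, weight_fn, x0, n_iter=200) for x0 in warm[:n_keep]]
    return min(finals, key=objective)

# Example wiring (names from the earlier sketches):
# obj = lambda x: reg_tau_objective(x, A, y, 0.1, tau_scale)
# x_hat = multistart_irls(A, y, 0.1, bisquare_weights, obj)
```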

Results – experimental setup

    y = Ax + e_G + e_o

- A ∈ R^{300×120}: random i.i.d. Gaussian
- x: piecewise constant
- e_G: Gaussian noise
- e_o: sparse vector whose entries have large variance (outliers)
- λ: determined experimentally
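A sketch of this experimental setup; the noise level, contamination rate, and outlier variance are illustrative choices not specified on the slide:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 300, 120

A = rng.standard_normal((N, p))                 # random iid Gaussian matrix
x = np.repeat(rng.standard_normal(6), p // 6)   # piecewise-constant signal
e_g = 0.1 * rng.standard_normal(N)              # Gaussian noise

e_o = np.zeros(N)                               # sparse outlier vector
hit = rng.choice(N, size=int(0.2 * N), replace=False)   # 20% contamination
e_o[hit] = 10.0 * rng.standard_normal(hit.size)         # large-variance entries

y = A @ x + e_g + e_o
```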

Results – with previous estimators

Non-regularized LS-, M-, and τ-estimators:
- A with a condition number of 50.
- ‖x̂ − x‖: Monte Carlo average.

[Figure: average error vs. percentage of outliers (10–40%) for the LS, M, and τ estimators.]

Results – with new estimator

Regularized LS-, M-, and τ-estimators:
- A with a condition number of 1000.
- ‖x̂ − x‖: Monte Carlo average.

[Figure: average error vs. percentage of outliers (10–40%) for the regularized LS, M, and τ estimators.]

Conclusions

- New regularized robust estimator:
  - highly robust against outliers
  - highly efficient in the presence of Gaussian noise
  - stable when the mixing matrix has a large condition number

- Current research:
  - study the interrelation of robustness and regularization
  - develop a Lasso-type regularized τ-estimator
  - derive the influence function
  - apply the estimator to real data

- Reproducible results:
  - https://github.com/LCAV/RegularizedTauEstimator

Thank you for your attention. Questions?
