A new robust and efficient estimator for ill-conditioned linear inverse problems with outliers

Marta Martinez-Camara¹, Michael Muma², Abdelhak M. Zoubir², Martin Vetterli¹

¹ School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne (EPFL)
² Signal Processing Group, Technische Universität Darmstadt
ICASSP'15, 23 April 2015
Outline

1. Motivation
2. Problem formulation
3. Background on robust estimators
4. New regularized τ-estimator
5. Algorithm
6. Results
7. Conclusions
Motivation

[Figure: estimating an unknown source from sensor data. Shown: the (unknown) source location, the sensor locations, weather information, and the measurements.]
Problem formulation

Consider the following linear inverse problem:

y = Ax + e

- y: measurement vector
- A: known deterministic matrix
- e: error term
- x: unknown parameter vector

Typical assumptions:
- A is well conditioned
- The distribution of e is Gaussian

Standard estimator: least squares (LS)

x̂ = arg min_x ‖y − Ax‖₂²
Difficulty 1: ill-conditioned problem

A has a large condition number → the LS estimate fails.

[Figure: ground truth vs. LS estimate. A ∈ R³⁰⁰ˣ¹²⁰, condition number = 1000, Gaussian errors, SNR = 10 dB.]
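The failure mode on this slide is easy to reproduce. The sketch below, under assumed parameters (geometrically decaying singular values, roughly 10 dB SNR), builds matrices of the stated size with condition numbers 1 and 1000 and compares the relative error of the LS estimate; the exact figure setup from the talk is not available, so this is an illustration, not a reproduction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 300, 120

def make_matrix(cond):
    """Random n x p matrix with a prescribed condition number:
    orthonormal factors with geometrically decaying singular values."""
    U, _ = np.linalg.qr(rng.standard_normal((n, p)))   # n x p, orthonormal columns
    V, _ = np.linalg.qr(rng.standard_normal((p, p)))
    s = np.geomspace(1.0, 1.0 / cond, p)
    return U @ np.diag(s) @ V.T

x_true = rng.standard_normal(p)

def ls_relative_error(cond):
    A = make_matrix(cond)
    y_clean = A @ x_true
    noise = rng.standard_normal(n)
    # Scale the noise for roughly 10 dB SNR, as in the slide's figure.
    noise *= np.linalg.norm(y_clean) / (np.linalg.norm(noise) * np.sqrt(10.0))
    x_hat, *_ = np.linalg.lstsq(A, y_clean + noise, rcond=None)
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

err_well = ls_relative_error(1.0)     # well-conditioned A
err_ill = ls_relative_error(1000.0)   # condition number 1000, as on the slide
```

The small singular values of the ill-conditioned A amplify the noise components along the corresponding directions, which is exactly why the plain LS estimate degrades.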
Difficulty 2: impulsive noise and outliers

e contains outliers → the LS estimate breaks down.

[Figure: ground truth vs. LS estimate in the presence of outliers.]
Goals of this work

1. Design an estimator that is simultaneously
   - robust against outliers,
   - near optimal with Gaussian errors, and
   - able to handle A with a large condition number.
2. Develop a fast and reliable algorithm to compute the estimates.

Proposed approach: regularized τ-estimator
- A robust and efficient loss function.
- A penalty term for regularization.
Background: robust estimation

Least squares estimator:

x̂_LS = arg min_x Σ_{n=1}^{N} (r_n(x))²

- r_n(x) = y_n − A_n x
- A_n is the n-th row of A
- Optimal in the sense that the variance of the estimate (σ²_{x_LS}) is minimised under Gaussian noise.
Background: robust estimation – M-estimation

M-estimator:

x̂_M = arg min_x Σ_{n=1}^{N} ρ( r_n(x) / σ̂_M(r(x)) )

- r_n(x) = y_n − A_n x
- A_n is the n-th row of A
- σ̂_M(r(x)): M-estimate of the residual scale
- ρ(x): symmetric, positive, and non-decreasing on [0, ∞)
- If σ²_{x_LS} / σ²_{x_M} is close to 1 under Gaussian errors, the estimator is efficient.

[Figure: the squared loss compared with bisquare losses using clipping constants c₁ and c₂.]
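An M-estimator of this form is typically computed by iteratively reweighted least squares. The sketch below uses the convex Huber weight function rather than the bisquare shown on the slide (so the iteration cannot get trapped in a bad local minimum), with the residual scale re-estimated each iteration via the normalized MAD; the data and ground truth are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 5
A = rng.standard_normal((n, p))
x_true = np.arange(1.0, p + 1.0)        # hypothetical ground truth
y = A @ x_true + 0.5 * rng.standard_normal(n)
out = rng.choice(n, 40, replace=False)  # 20% gross outliers
y[out] += 50.0

def m_estimate(A, y, c=1.345, n_iter=50):
    """M-estimator via IRLS with Huber weights (a sketch, not the
    bisquare M-estimator from the slide)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # LS starting point
    for _ in range(n_iter):
        r = y - A @ x
        scale = np.median(np.abs(r - np.median(r))) / 0.6745   # normalized MAD
        u = r / scale
        w = np.minimum(1.0, c / np.maximum(np.abs(u), 1e-12))  # Huber weights
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)  # weighted normal equations
    return x

x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
x_m = m_estimate(A, y)
err_ls = np.linalg.norm(x_ls - x_true)
err_m = np.linalg.norm(x_m - x_true)
```

The bounded weights cap the influence of each outlying residual, so the M-estimate stays close to the ground truth while plain LS is pulled away by the contaminated observations.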
Background: robust estimation – τ-estimation

- Choose ρ(x) based on the data.
- Asymptotically equivalent to an M-estimator with

  ρ(·) = w(·) ρ₁(·) + ρ₂(·),

  where ρ₁ is the robust term and ρ₂ is the efficient term.
- w(·) adapts to the distribution of the data.
- x̂_τ minimizes a robust and efficient τ-scale estimate:

  x̂_τ = arg min_x σ̂_τ(r(x))

[Figure: the combined loss ρ compared with its robust and efficient components.]
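The key ingredient is the τ-scale itself: an M-scale computed with the robust ρ₁, corrected by an average of the efficient ρ₂. A minimal sketch, with normalization constants omitted and standard bisquare tuning constants (c₁ = 1.548, c₂ = 6.08) assumed, is enough to see its robustness: gross outliers inflate the sample standard deviation dramatically but move the τ-scale only slightly.

```python
import numpy as np

def rho_bisquare(u, c):
    """Tukey bisquare loss, normalized so that max(rho) = 1."""
    v = np.clip(u / c, -1.0, 1.0)
    return 1.0 - (1.0 - v**2) ** 3

def m_scale(r, c1=1.548, b=0.5, n_iter=100):
    """M-estimate of scale: solves mean(rho1(r/s)) = b by fixed-point iteration."""
    s = np.median(np.abs(r)) / 0.6745
    for _ in range(n_iter):
        s *= np.sqrt(np.mean(rho_bisquare(r / s, c1)) / b)
    return s

def tau_scale(r, c2=6.08):
    """tau-scale: the robust M-scale corrected by the efficient rho2 term.
    Consistency constants are omitted in this sketch."""
    s = m_scale(r)
    return s * np.sqrt(np.mean(rho_bisquare(r / s, c2)))

rng = np.random.default_rng(2)
clean = rng.standard_normal(500)
dirty = clean.copy()
dirty[:50] = 100.0                      # 10% gross outliers

std_ratio = np.std(dirty) / np.std(clean)       # blows up
tau_ratio = tau_scale(dirty) / tau_scale(clean) # stays moderate
```

Because both ρ terms are bounded, each outlier contributes at most a fixed amount to the scale equation, which is what keeps `tau_ratio` small while `std_ratio` explodes.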
Recap

A robust M-estimator is robust but not efficient; an efficient M-estimator is efficient but not robust; the τ-estimator is both robust and efficient. None of them, however, handles ill-posed problems.
New regularized τ-estimator

Proposed estimator:

x̂_τ = arg min_x σ̂_τ(r(x)) + λ‖x‖₂

- σ̂_τ(r(x)): τ-estimate of the scale
- λ ≥ 0: regularization parameter

Key difficulty: how do we compute the regularized τ-estimate?
- The objective is a non-convex function.
- There is no guarantee of finding the global minimum.
Algorithm

Steps:
1. How to find local minima
2. How to find the global one
3. Speeding up the algorithm
Algorithm

Step 1: finding local minima
- Equivalent to iteratively reweighted least squares (IRLS):

  x̂ = arg min_x ‖W(x)(y − Ax)‖₂² + λ²‖x‖₂²

- W(x): a data-adaptive weighting term derived from

  ∂(σ̂_τ²(r(x)) + λ‖x‖₂²) / ∂x = 0
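With W held fixed, each IRLS iteration is a weighted ridge regression with a closed-form solution. The sketch below shows that step and a full loop; Huber-style weights stand in for the data-adaptive W(x) that the paper derives from the τ objective, and the data are made up, so this illustrates the structure of the algorithm rather than the exact estimator.

```python
import numpy as np

def irls_step(A, y, w, lam):
    """One IRLS step: min_x ||W(y - Ax)||_2^2 + lam^2 ||x||_2^2
    with W = diag(w) held fixed, solved in closed form."""
    Aw = A * (w**2)[:, None]                         # W^2 A (row scaling)
    return np.linalg.solve(A.T @ Aw + lam**2 * np.eye(A.shape[1]), Aw.T @ y)

def robust_ridge(A, y, lam, c=1.345, n_iter=30):
    """Sketch of the IRLS loop with Huber-style weights standing in
    for the data-adaptive W(x) of the regularized tau-estimator."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = y - A @ x
        scale = np.median(np.abs(r - np.median(r))) / 0.6745
        u = np.abs(r) / scale
        w = np.minimum(1.0, c / np.maximum(u, 1e-12)) ** 0.5  # ||Wr||^2 uses w^2
        x = irls_step(A, y, w, lam)
    return x

# Demo: regression contaminated with 10% gross outliers.
rng = np.random.default_rng(5)
n, p = 100, 30
A = rng.standard_normal((n, p))
x_true = rng.standard_normal(p)
y = A @ x_true + 0.1 * rng.standard_normal(n)
out = rng.choice(n, 10, replace=False)
y[out] += 30.0

x_plain = irls_step(A, y, np.ones(n), lam=1.0)   # ordinary ridge (unit weights)
x_robust = robust_ridge(A, y, lam=1.0)
err_plain = np.linalg.norm(x_plain - x_true)
err_robust = np.linalg.norm(x_robust - x_true)
```

Re-estimating the weights from the current residuals is what turns a plain ridge solver into a robust one: the outlying rows are progressively downweighted as the fit improves.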
Step 2: finding the global minimum
- We start from many different initial solutions...
- ...and hope that at least one of them lands in the valley of the global minimum.

[Figure: a non-convex objective landscape with several initial solutions, local minima, and the global minimum.]
Step 3: speeding up the algorithm
- For each initial solution, run only a few IRLS iterations (fast initial convergence).
- Pick the N best solutions.
- Use them as new initial solutions.
- Iterate IRLS until convergence.
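The multi-start strategy of Steps 2 and 3 ("many cheap starts, keep the N best, refine those to convergence") can be sketched on a toy 1D non-convex objective; a hypothetical sine-plus-quadratic function and plain gradient descent stand in for the regularized τ cost and IRLS here.

```python
import numpy as np

# Toy 1D non-convex objective standing in for the regularized tau cost.
f = lambda t: np.sin(3.0 * t) + 0.1 * t**2
df = lambda t: 3.0 * np.cos(3.0 * t) + 0.2 * t

def refine(t, steps, lr=0.02):
    """Cheap local descent, playing the role of a few IRLS iterations."""
    for _ in range(steps):
        t = t - lr * df(t)
    return t

rng = np.random.default_rng(3)
starts = rng.uniform(-4.0, 4.0, 100)                       # many initial solutions
coarse = np.array([refine(t, steps=30) for t in starts])   # a few cheap iterations each
best = coarse[np.argsort(f(coarse))[:10]]                  # keep the N best candidates
final = np.array([refine(t, steps=500) for t in best])     # run these to convergence
t_hat = final[np.argmin(f(final))]
```

Most of the compute is spent only on the handful of candidates that already look promising after the cheap pass, which is exactly the speed-up the slide describes.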
Results

Experimental setup:

y = Ax + e_G + e_o

- A ∈ R³⁰⁰ˣ¹²⁰: random i.i.d. Gaussian
- x: piecewise constant
- e_G: Gaussian noise
- e_o: sparse vector whose nonzero entries have large variance (outliers)
- λ: determined experimentally

[Figure: example of the piecewise-constant ground truth x.]
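This setup is straightforward to generate. In the sketch below the segment values, noise level, outlier fraction, and outlier variance are assumptions for illustration; the slide only specifies the model structure and the dimensions of A.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 300, 120

A = rng.standard_normal((n, p))          # random i.i.d. Gaussian matrix

# Piecewise-constant ground truth; the segment values here are made up.
x = np.repeat([0.0, 4.0, -4.0, 1.0], p // 4)

e_g = 0.1 * rng.standard_normal(n)       # Gaussian noise

# Sparse outlier vector: 10% of the entries, drawn with large variance.
e_o = np.zeros(n)
idx = rng.choice(n, n // 10, replace=False)
e_o[idx] = 20.0 * rng.standard_normal(idx.size)

y = A @ x + e_g + e_o
```

Sweeping the number of nonzero entries in `e_o` reproduces the "% outliers" axis of the result plots that follow.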
Results – previous estimators

Non-regularized LS-, M-, and τ-estimators:
- A with a condition number of 50.
- Error measure: Monte Carlo average of ‖x̂ − x‖.

[Figure: estimation error versus percentage of outliers (10% to 40%) for the LS, M, and τ estimators.]
Results – new estimator

Regularized LS-, M-, and τ-estimators:
- A with a condition number of 1000.
- Error measure: Monte Carlo average of ‖x̂ − x‖.

[Figure: estimation error versus percentage of outliers (10% to 40%) for the regularized LS, M, and τ estimators.]
Conclusions

- New regularized robust estimator:
  - highly robust against outliers
  - highly efficient in the presence of Gaussian noise
  - stable when the mixing matrix has a large condition number
- Current research:
  - Study the interrelation of robustness and regularization
  - Develop a Lasso-type regularized τ-estimator
  - Derivation of the influence function
  - Application to real data