Basis pursuit denoising


In applied mathematics and statistics, basis pursuit denoising (BPDN) refers to a mathematical optimization problem of the form

[math]\displaystyle{ \min_x \left(\frac{1}{2} \|y - Ax\|^2_2 + \lambda \|x\|_1\right), }[/math]

where [math]\displaystyle{ \lambda }[/math] is a parameter that controls the trade-off between sparsity and reconstruction fidelity, [math]\displaystyle{ x }[/math] is an [math]\displaystyle{ N \times 1 }[/math] solution vector, [math]\displaystyle{ y }[/math] is an [math]\displaystyle{ M \times 1 }[/math] vector of observations, [math]\displaystyle{ A }[/math] is an [math]\displaystyle{ M \times N }[/math] transform matrix and [math]\displaystyle{ M \lt N }[/math]. This is an instance of convex optimization.
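
For intuition about the trade-off controlled by [math]\displaystyle{ \lambda }[/math]: when [math]\displaystyle{ A }[/math] is the identity matrix, the problem separates coordinate-wise and the minimizer is given in closed form by soft thresholding, which shrinks every observation toward zero by [math]\displaystyle{ \lambda }[/math] and sets small observations exactly to zero. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def soft_threshold(y, lam):
    """Coordinate-wise soft thresholding: the closed-form BPDN
    minimizer of 0.5*||y - x||_2^2 + lam*||x||_1 (i.e. A = I)."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

y = np.array([3.0, -0.5, 1.2])
x = soft_threshold(y, lam=1.0)
# entries with |y_i| <= lam are set exactly to zero;
# the rest are shrunk toward zero by lam
```

Larger [math]\displaystyle{ \lambda }[/math] zeroes out more coordinates (a sparser [math]\displaystyle{ x }[/math]), at the cost of a larger residual.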

Some authors refer to basis pursuit denoising as the following closely related problem:

[math]\displaystyle{ \min_x \|x\|_1 \text{ subject to } \|y - Ax\|^2_2 \le \delta, }[/math]

which, for every [math]\displaystyle{ \lambda \gt 0 }[/math], is equivalent to the unconstrained formulation for some (usually unknown a priori) value of [math]\displaystyle{ \delta }[/math]. The two formulations are thus interchangeable in principle, but the correspondence between [math]\displaystyle{ \lambda }[/math] and [math]\displaystyle{ \delta }[/math] is generally not known in advance. In practice the unconstrained formulation is usually preferred, since most specialized and efficient computational algorithms are developed for it.

Either type of basis pursuit denoising solves a regularization problem with a trade-off between having a small residual (making [math]\displaystyle{ y }[/math] close to [math]\displaystyle{ Ax }[/math] in terms of the squared error) and making [math]\displaystyle{ x }[/math] simple in the [math]\displaystyle{ \ell_1 }[/math]-norm sense. It can be thought of as a mathematical statement of Occam's razor: finding the simplest possible explanation (i.e. the one that minimizes [math]\displaystyle{ \|x\|_1 }[/math]) capable of accounting for the observations [math]\displaystyle{ y }[/math].

Exact solutions to basis pursuit denoising are often the best computationally tractable approximation of an underdetermined system of equations.[citation needed] Basis pursuit denoising has potential applications in statistics (see the LASSO method of regularization), image compression and compressed sensing.

When [math]\displaystyle{ \delta = 0 }[/math], the constrained problem reduces to basis pursuit, which seeks the minimum-[math]\displaystyle{ \ell_1 }[/math]-norm solution of [math]\displaystyle{ y = Ax }[/math].

Basis pursuit denoising was introduced by Chen and Donoho in 1994,[1] in the field of signal processing. In statistics, it is well known under the name LASSO (least absolute shrinkage and selection operator), after being introduced by Tibshirani in 1996.

Solving basis pursuit denoising

The problem is a convex quadratic problem, so it can be solved by many general solvers, such as interior-point methods. For very large problems, many specialized methods that are faster than interior-point methods have been proposed.

Several popular methods for solving basis pursuit denoising include the in-crowd algorithm (a fast solver for large, sparse problems[2]), homotopy continuation, fixed-point continuation (a special case of the forward–backward algorithm[3]) and spectral projected gradient for L1 minimization (which actually solves LASSO, a related problem).
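
The forward–backward approach mentioned above can be sketched in a few lines: each iteration takes a gradient step on the smooth squared-error term, then applies the proximal operator of the [math]\displaystyle{ \ell_1 }[/math] penalty (soft thresholding). The following NumPy implementation of iterative soft thresholding (ISTA) is a minimal illustration, not any of the cited solvers; the step size [math]\displaystyle{ 1/L }[/math] with [math]\displaystyle{ L = \|A\|_2^2 }[/math] and the toy problem are illustrative choices.

```python
import numpy as np

def ista(A, y, lam, n_iter=1000):
    """Iterative soft thresholding (a basic forward-backward scheme)
    for min_x 0.5*||y - Ax||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L    # forward: gradient step on 0.5*||y - Ax||^2
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # backward: prox of lam*||.||_1
    return x

# Toy underdetermined system (M = 2 < N = 3): the sparsest consistent
# solution is (0, 0, 2), and BPDN recovers it up to a small lam-dependent bias.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
y = np.array([2.0, 2.0])
x_hat = ista(A, y, lam=0.01)
```

For this instance the optimality conditions give the exact minimizer [math]\displaystyle{ (0, 0, 2 - \lambda/2) }[/math], so the iterate converges to approximately (0, 0, 1.995). Accelerated variants such as FISTA improve the convergence rate with essentially the same per-iteration cost.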

References

  1. Chen, Shaobing; Donoho, D. (1994). "Basis pursuit". Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers. 1. pp. 41–44. doi:10.1109/ACSSC.1994.471413. ISBN 0-8186-6405-3. 
  2. Gill, Patrick R.; Wang, Albert; Molnar, Alyosha (2011). "The In-Crowd Algorithm for Fast Basis Pursuit Denoising". IEEE Transactions on Signal Processing 59 (10): 4595–4605. doi:10.1109/TSP.2011.2161292. Demo MATLAB code is available from the authors.
  3. "Forward Backward Algorithm". Archived from the original on February 16, 2014. https://web.archive.org/web/20140216231347/http://www.ugcs.caltech.edu/~srbecker/wiki/Forward_Backward_Algorithm. 
