Matching pursuit

Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary $D$. The basic idea is to approximately represent a signal $f$ from a Hilbert space $H$ as a weighted sum of finitely many functions $g_{\gamma_n}$ (called atoms) taken from $D$. An approximation with $N$ atoms has the form

$$f(t) \approx \hat{f}_N(t) := \sum_{n=1}^{N} a_n\, g_{\gamma_n}(t)$$

where $g_{\gamma_n}$ is the $\gamma_n$-th column of the matrix $D$ and $a_n$ is the scalar weighting factor (amplitude) for the atom $g_{\gamma_n}$. Normally, not every atom in $D$ will be used in this sum. Instead, matching pursuit chooses the atoms one at a time in order to maximally (greedily) reduce the approximation error. This is achieved by finding the atom that has the highest inner product with the signal (assuming the atoms are normalized), subtracting from the signal an approximation that uses only that one atom, and repeating the process until the signal is satisfactorily decomposed, i.e., the norm of the residual is small, where the residual after calculating $\gamma_N$ and $a_N$ is denoted by

$$R_{N+1} = f - \hat{f}_N.$$
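As an illustration of a single greedy step, the following Python/NumPy sketch selects the atom with the largest inner product with the current residual and subtracts its contribution; the random dictionary, signal, and variable names are placeholders chosen for this sketch, not part of any reference implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: a length-64 signal and an over-complete dictionary
    # with 256 unit-norm atoms (columns).  Both are random placeholders.
    f = rng.standard_normal(64)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)              # normalize each atom

    residual = f.copy()

    # One greedy step: pick the atom most correlated with the residual ...
    inner_products = D.T @ residual
    gamma = np.argmax(np.abs(inner_products))   # index of the best-matching atom
    a = inner_products[gamma]                   # its amplitude <R, g_gamma>

    # ... and remove that atom's contribution from the residual.
    residual = residual - a * D[:, gamma]

    # The new residual is orthogonal to the selected atom (up to round-off).
    print(gamma, a, np.dot(residual, D[:, gamma]))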

If $R_n$ converges quickly to zero, then only a few atoms are needed to get a good approximation to $f$. Such sparse representations are desirable for signal coding and compression. More precisely, the sparsity problem that matching pursuit is intended to approximately solve is

$$\min_{x} \|f - Dx\|_2^2 \ \text{ subject to } \ \|x\|_0 \leq N,$$

where $\|x\|_0$ is the $L_0$ pseudo-norm (i.e., the number of nonzero elements of $x$). In the previous notation, the nonzero entries of $x$ are $x_{\gamma_n} = a_n$. Solving the sparsity problem exactly is NP-hard, which is why approximation methods like MP are used.
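To make the objective concrete, the small sketch below evaluates both parts of the problem for a given candidate coefficient vector $x$: the squared reconstruction error $\|f - Dx\|_2^2$ and the $L_0$ pseudo-norm $\|x\|_0$. The particular indices and amplitudes are arbitrary values invented for the example.

    import numpy as np

    rng = np.random.default_rng(1)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)
    f = rng.standard_normal(64)

    # A candidate sparse coefficient vector with N = 3 nonzero entries:
    # x[gamma_n] = a_n, all other entries zero.
    x = np.zeros(256)
    x[[10, 42, 200]] = [0.7, -1.3, 0.2]

    error = np.linalg.norm(f - D @ x) ** 2   # ||f - Dx||_2^2
    l0 = np.count_nonzero(x)                 # ||x||_0, the number of nonzeros

    print(error, l0)                         # MP tries to make `error` small with l0 <= N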

For comparison, consider the Fourier transform representation of a signal: this can be described using the terms given above, where the dictionary is built from sinusoidal basis functions (the smallest possible complete dictionary). The main disadvantage of Fourier analysis in signal processing is that it extracts only the global features of the signals and does not adapt to the analysed signal $f$. By taking an extremely redundant dictionary, we can look in it for atoms (functions) that best match the signal $f$.

The algorithm

If $D$ contains a large number of vectors, searching for the sparsest representation of $f$ is computationally intractable for practical applications. In 1993, Mallat and Zhang proposed a greedy solution that they named "Matching Pursuit." For any signal $f$ and any dictionary $D$, the algorithm iteratively generates a sorted list of atom indices and weighting scalars, which form a sub-optimal solution to the problem of sparse signal representation.
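The greedy procedure can be sketched in a few lines of Python/NumPy. This is an illustrative implementation, not Mallat and Zhang's original code: the stopping rule (a fixed number of atoms plus a residual-norm threshold) and all names are choices made for this sketch.

    import numpy as np

    def matching_pursuit(f, D, n_atoms, tol=1e-6):
        """Greedy MP decomposition of signal f over a dictionary D
        with unit-norm columns (atoms).  Returns the selected atom
        indices gamma_n, their amplitudes a_n, and the final residual."""
        residual = f.astype(float)
        indices, amplitudes = [], []
        for _ in range(n_atoms):
            inner_products = D.T @ residual
            gamma = int(np.argmax(np.abs(inner_products)))  # best-matching atom
            a = inner_products[gamma]
            indices.append(gamma)
            amplitudes.append(a)
            residual = residual - a * D[:, gamma]           # greedy update
            if np.linalg.norm(residual) <= tol:             # residual small enough
                break
        return indices, amplitudes, residual

    # Toy usage with a random over-complete dictionary.
    rng = np.random.default_rng(2)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)
    f = rng.standard_normal(64)

    idx, amps, R = matching_pursuit(f, D, n_atoms=10)
    f_hat = D[:, idx] @ np.array(amps)        # \hat{f}_N = sum_n a_n g_{gamma_n}
    print(np.linalg.norm(f - f_hat))          # norm of the residual R_{N+1}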

In signal processing, the concept of matching pursuit is related to statistical projection pursuit, in which "interesting" projections are found; ones that deviate more from a normal distribution are considered to be more interesting.

Properties

  • The algorithm converges (i.e., $R_n \to 0$) for any $f$ that is in the space spanned by the dictionary.
  • The error $\|R_n\|$ decreases monotonically.
  • Since at each step the residual is orthogonal to the selected atom, the energy conservation equation is satisfied for each $N$ (a numerical check of this identity is sketched after this list):
$$\|f\|^2 = \|R_{N+1}\|^2 + \sum_{n=1}^{N} |a_n|^2.$$
  • If the vectors in $D$ are orthonormal, rather than being redundant, then MP is a form of principal component analysis.
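The energy-conservation identity can be checked numerically with a compact restatement of the MP loop on random toy data (again, an illustrative sketch rather than a reference implementation):

    import numpy as np

    rng = np.random.default_rng(3)
    D = rng.standard_normal((32, 128))
    D /= np.linalg.norm(D, axis=0)
    f = rng.standard_normal(32)

    residual, amplitudes = f.copy(), []
    for _ in range(8):                                  # N = 8 greedy steps
        c = D.T @ residual
        g = np.argmax(np.abs(c))
        amplitudes.append(c[g])
        residual -= c[g] * D[:, g]

    # ||f||^2 should equal ||R_{N+1}||^2 + sum_n |a_n|^2 up to round-off.
    lhs = np.linalg.norm(f) ** 2
    rhs = np.linalg.norm(residual) ** 2 + np.sum(np.abs(amplitudes) ** 2)
    print(lhs, rhs)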

Applications

Matching pursuit has been applied to signal, image and video coding, shape representation and recognition, 3D object coding, and interdisciplinary applications such as structural health monitoring. It has been shown to perform better than DCT-based coding at low bit rates, in terms of both coding efficiency and image quality. The main problem with matching pursuit is the computational complexity of the encoder: in the basic version of the algorithm, the large dictionary must be searched at each iteration. Improvements include the use of approximate dictionary representations and suboptimal ways of choosing the best match at each iteration (atom extraction). The matching pursuit algorithm is also used in MP/SOFT, a method of simulating quantum dynamics.

MP is also used in dictionary learning, where the atoms are learned from a database (in general, natural scenes such as ordinary images) rather than chosen from a generic dictionary.

A more recent application of MP is in linear computation coding, where it is used to speed up the computation of matrix-vector products.

Extensions

A popular extension of Matching Pursuit (MP) is its orthogonal version: Orthogonal Matching Pursuit (OMP). The main difference from MP is that after every step, all the coefficients extracted so far are updated by computing the orthogonal projection of the signal onto the subspace spanned by the set of atoms selected so far. This can lead to better results than standard MP, but requires more computation. OMP was shown to have stability and performance guarantees under certain restricted isometry conditions. The incremental multi-parameter algorithm (IMP), published three years before MP, works in the same way as OMP.
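A minimal OMP sketch in the same NumPy style: after each new atom is chosen, all coefficients are re-estimated by a least-squares fit onto the span of the selected atoms. The stopping rule and names are, again, assumptions of this sketch rather than a standard API.

    import numpy as np

    def orthogonal_matching_pursuit(f, D, n_atoms):
        """OMP: like MP, but re-fits all coefficients after each selection."""
        residual = f.astype(float)
        indices = []
        coeffs = np.array([])
        for _ in range(n_atoms):
            gamma = int(np.argmax(np.abs(D.T @ residual)))  # same selection rule as MP
            if gamma in indices:                            # already selected; nothing new to add
                break
            indices.append(gamma)
            # Orthogonal projection of f onto the span of the selected atoms.
            coeffs, *_ = np.linalg.lstsq(D[:, indices], f, rcond=None)
            residual = f - D[:, indices] @ coeffs
        return indices, coeffs, residual

    rng = np.random.default_rng(4)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)
    f = rng.standard_normal(64)

    idx, coeffs, R = orthogonal_matching_pursuit(f, D, n_atoms=10)
    print(np.linalg.norm(R))   # typically smaller than the plain-MP residual for the same N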

Extensions such as Multichannel MP and Multichannel OMP allow one to process multicomponent signals. An obvious extension of Matching Pursuit is over multiple positions and scales, by augmenting the dictionary to be that of a wavelet basis. This can be done efficiently using the convolution operator without changing the core algorithm.
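As a rough illustration of the shift-based idea (not a full multiscale wavelet implementation), the following sketch uses a single short, unit-norm mother atom and computes its inner products with the residual at every valid position via correlation; the best position plays the role of the atom index. The atom shape and number of iterations are arbitrary choices for this sketch.

    import numpy as np

    rng = np.random.default_rng(5)
    signal = rng.standard_normal(256)

    # A single short, unit-norm "mother" atom; the implicit dictionary is
    # the set of all of its integer shifts inside the signal.
    atom = np.hanning(16)
    atom /= np.linalg.norm(atom)

    residual = signal.copy()
    for _ in range(5):
        # Inner products of the residual with every valid shift of the atom,
        # computed at once with correlation instead of an explicit dictionary.
        corr = np.correlate(residual, atom, mode='valid')
        shift = int(np.argmax(np.abs(corr)))
        a = corr[shift]
        residual[shift:shift + len(atom)] -= a * atom

    print(np.linalg.norm(signal), np.linalg.norm(residual))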

Matching pursuit is related to the field of compressed sensing and has been extended by researchers in that community. Notable extensions are Orthogonal Matching Pursuit (OMP), Stagewise OMP (StOMP), compressive sampling matching pursuit (CoSaMP), Generalized OMP (gOMP), and Multipath Matching Pursuit (MMP).

See also

  • CLEAN algorithm
  • Image processing
  • Least-squares spectral analysis
  • Principal component analysis (PCA)
  • Projection pursuit
  • Signal processing
  • Sparse approximation
  • Stepwise regression

Text is available under the CC BY-SA license. Source: Matching pursuit, Wikipedia (historical).