Segmentation-based object categorization


The image segmentation problem is concerned with partitioning an image into multiple regions according to some homogeneity criterion. This article is primarily concerned with graph-theoretic approaches to image segmentation that apply graph partitioning via minimum cut or maximum cut. Segmentation-based object categorization can be viewed as a specific case of spectral clustering applied to image segmentation.

Applications of image segmentation

  • Image compression
    • Segment the image into homogeneous components, and use the most suitable compression algorithm for each component to improve compression.
  • Medical diagnosis
    • Automatic segmentation of MRI images for identification of cancerous regions.
  • Mapping and measurement
    • Automatic analysis of remote sensing data from satellites to identify and measure regions of interest.
  • Transportation
    • Partitioning a transportation network makes it possible to identify regions characterized by homogeneous traffic states.

Segmentation using normalized cuts

Graph theoretic formulation

The set of points in an arbitrary feature space can be represented as a weighted undirected complete graph $G = (V, E)$, where the nodes of the graph are the points in the feature space. The weight $w_{ij}$ of an edge $(i, j) \in E$ is a function of the similarity between the nodes $i$ and $j$. In this context, we can formulate the image segmentation problem as a graph partitioning problem that asks for a partition $V_1, \cdots, V_k$ of the vertex set $V$, where, according to some measure, the vertices in any set $V_i$ have high similarity, and the vertices in two different sets $V_i, V_j$ have low similarity.
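
The following Python sketch illustrates this setup, assuming a Gaussian similarity kernel; the function name `similarity_matrix` and the bandwidth parameter `sigma` are illustrative choices, not part of the formulation.

```
import numpy as np

def similarity_matrix(features, sigma=1.0):
    """Edge weights w_ij = exp(-||f_i - f_j||^2 / (2 sigma^2)) for all node pairs."""
    # Pairwise squared Euclidean distances between feature vectors.
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops
    return W                  # symmetric, so w_ij = w_ji as required
```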

Normalized cuts

Let $G = (V, E, w)$ be a weighted graph. Let $A$ and $B$ be two subsets of vertices.

Let:

$w(A, B) = \sum_{i \in A,\, j \in B} w_{ij}$
$\operatorname{ncut}(A, B) = \frac{w(A, B)}{w(A, V)} + \frac{w(A, B)}{w(B, V)}$
$\operatorname{nassoc}(A, B) = \frac{w(A, A)}{w(A, V)} + \frac{w(B, B)}{w(B, V)}$

In the normalized cuts approach, for any cut $(S, \overline{S})$ in $G$, $\operatorname{ncut}(S, \overline{S})$ measures the similarity between the two parts, and $\operatorname{nassoc}(S, \overline{S})$ measures the total similarity of vertices within the same part.

Since $\operatorname{ncut}(S, \overline{S}) = 2 - \operatorname{nassoc}(S, \overline{S})$, a cut $(S^{*}, \overline{S^{*}})$ that minimizes $\operatorname{ncut}(S, \overline{S})$ also maximizes $\operatorname{nassoc}(S, \overline{S})$.

Computing a cut $(S^{*}, \overline{S^{*}})$ that minimizes $\operatorname{ncut}(S, \overline{S})$ is an NP-hard problem. However, we can find in polynomial time a cut $(S, \overline{S})$ of small normalized weight $\operatorname{ncut}(S, \overline{S})$ using spectral techniques.
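
As a concrete illustration of these definitions, here is a minimal Python sketch (the helper names are assumptions) that evaluates $w$, ncut, and nassoc on a dense weight matrix and checks the identity $\operatorname{ncut} = 2 - \operatorname{nassoc}$:

```
import numpy as np

def w(W, A, B):
    """w(A, B): total edge weight between node sets A and B."""
    return W[np.ix_(A, B)].sum()

def ncut(W, S):
    """Normalized cut value of the cut (S, S_bar)."""
    V = np.arange(W.shape[0])
    S_bar = np.setdiff1d(V, S)
    return w(W, S, S_bar) / w(W, S, V) + w(W, S, S_bar) / w(W, S_bar, V)

def nassoc(W, S):
    """Normalized association of the cut (S, S_bar)."""
    V = np.arange(W.shape[0])
    S_bar = np.setdiff1d(V, S)
    return w(W, S, S) / w(W, S, V) + w(W, S_bar, S_bar) / w(W, S_bar, V)

# Sanity check of ncut = 2 - nassoc on a random symmetric weight matrix:
# it follows from w(S, V) = w(S, S) + w(S, S_bar).
rng = np.random.default_rng(0)
M = rng.random((6, 6))
W_ex = (M + M.T) / 2
np.fill_diagonal(W_ex, 0.0)
S = [0, 1, 2]
assert np.isclose(ncut(W_ex, S), 2 - nassoc(W_ex, S))
```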

The ncut algorithm

Let:

$d(i) = \sum_{j} w_{ij}$

Also, let $D$ be an $n \times n$ diagonal matrix with $d$ on the diagonal, and let $W$ be the $n \times n$ symmetric matrix with entries $w_{ij} = w_{ji}$.

After some algebraic manipulations, we get:

$\min_{(S, \overline{S})} \operatorname{ncut}(S, \overline{S}) = \min_{y} \frac{y^{T}(D - W)y}{y^{T}Dy}$

subject to the constraints:

  • $y_{i} \in \{1, -b\}$, for some constant $b$
  • $y^{T} D \mathbf{1} = 0$, where $\mathbf{1}$ is the vector of all ones

Minimizing $\frac{y^{T}(D - W)y}{y^{T}Dy}$ subject to the constraints above is NP-hard. To make the problem tractable, we relax the constraints on $y$ and allow it to take real values. The relaxed problem can be solved by solving the generalized eigenvalue problem $(D - W)y = \lambda Dy$ for the second smallest generalized eigenvalue.
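
A minimal sketch of the relaxed problem, assuming matrices small and dense enough for a direct solver; SciPy's `eigh` handles the symmetric generalized eigenproblem directly:

```
import numpy as np
from scipy.linalg import eigh

def second_smallest_eigvec(W):
    """Solve (D - W) y = lambda D y and return the eigenvector for the
    second smallest generalized eigenvalue (real-valued relaxation)."""
    d = W.sum(axis=1)
    D = np.diag(d)
    # eigh solves the symmetric generalized problem; eigenvalues ascend.
    _, vecs = eigh(D - W, D)
    return vecs[:, 1]
```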

The partitioning algorithm:

  1. Given a set of features, set up a weighted graph $G = (V, E)$, compute the weight of each edge, and summarize the information in $D$ and $W$.
  2. Solve $(D - W)y = \lambda Dy$ for the eigenvector with the second smallest eigenvalue.
  3. Use the eigenvector with the second smallest eigenvalue to bipartition the graph (e.g. grouping according to sign).
  4. Decide if the current partition should be subdivided.
  5. Recursively partition the segmented parts, if necessary.
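
A compact Python sketch of this recursive procedure, under the same dense small-scale assumptions as above; the `min_size` stopping rule is an illustrative stand-in for the subdivision test in step 4, which in practice is often a threshold on the resulting ncut value:

```
import numpy as np
from scipy.linalg import eigh

def ncut_partition(W, nodes=None, min_size=5):
    """Recursively bipartition by the sign of the second smallest
    generalized eigenvector and return a list of node-index arrays."""
    if nodes is None:
        nodes = np.arange(W.shape[0])
    if len(nodes) <= min_size:
        return [nodes]
    Wsub = W[np.ix_(nodes, nodes)]
    D = np.diag(Wsub.sum(axis=1))
    _, vecs = eigh(D - Wsub, D)               # generalized eigenproblem (step 2)
    y = vecs[:, 1]                            # second smallest eigenvector
    left, right = nodes[y >= 0], nodes[y < 0] # sign-based bipartition (step 3)
    if len(left) == 0 or len(right) == 0:
        return [nodes]                        # no meaningful split; stop
    return ncut_partition(W, left, min_size) + ncut_partition(W, right, min_size)
```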

Computational complexity

Solving a standard eigenvalue problem for all eigenvectors (using the QR algorithm, for instance) takes $O(n^{3})$ time. This is impractical for image segmentation applications where $n$ is the number of pixels in the image.

Since only one eigenvector, corresponding to the second smallest generalized eigenvalue, is used by the ncut algorithm, efficiency can be dramatically improved if the corresponding eigenvalue problem is solved in a matrix-free fashion, i.e., without explicitly manipulating or even computing the matrix $W$, as, e.g., in the Lanczos algorithm. Matrix-free methods require only a function that performs a matrix-vector product for a given vector on every iteration. For image segmentation, the matrix $W$ is typically sparse, with $O(n)$ nonzero entries, so such a matrix-vector product takes $O(n)$ time.

For high-resolution images, the second eigenvalue is often ill-conditioned, leading to slow convergence of iterative eigenvalue solvers such as the Lanczos algorithm. Preconditioning is a key technique for accelerating convergence, e.g., in the matrix-free LOBPCG method. Computing the eigenvector with an optimally preconditioned matrix-free method takes $O(n)$ time, which is the optimal complexity, since the eigenvector has $n$ components.
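
A hedged sketch of such a matrix-free solve using SciPy's LOBPCG; the random initial block and the absence of a preconditioner (`M=None` by default) are simplifying assumptions, and production code would supply, e.g., an algebraic multigrid preconditioner:

```
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lobpcg

def second_eigvec_lobpcg(W):
    """Matrix-free solve of (D - W) y = lambda D y for the second smallest
    generalized eigenvalue; W is a sparse n x n similarity matrix, so each
    internal matrix-vector product costs O(n)."""
    n = W.shape[0]
    d = np.asarray(W.sum(axis=1)).ravel()
    D = sp.diags(d)
    L = D - W                                   # sparse graph Laplacian
    rng = np.random.default_rng(0)
    X = rng.standard_normal((n, 2))             # block of two initial vectors
    vals, vecs = lobpcg(L, X, B=D, largest=False, tol=1e-8, maxiter=500)
    return vecs[:, np.argsort(vals)[1]]         # second smallest eigenvector
```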

Software implementations

scikit-learn uses LOBPCG from SciPy with algebraic multigrid preconditioning to solve the eigenvalue problem for the graph Laplacian, performing image segmentation via spectral graph partitioning.
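
A minimal sketch of this pipeline using scikit-learn's documented API; the synthetic test image, the gradient-to-similarity rescaling, and the choice of two clusters are illustrative assumptions:

```
import numpy as np
from sklearn.feature_extraction.image import img_to_graph
from sklearn.cluster import spectral_clustering

# Synthetic test image: a bright square on a dark, noisy background.
img = np.zeros((50, 50))
img[10:30, 10:30] = 1.0
img += 0.1 * np.random.default_rng(0).standard_normal(img.shape)

graph = img_to_graph(img)  # sparse pixel-adjacency graph of gradient values
# Convert gradients to similarities (larger weight = more similar pixels).
graph.data = np.exp(-graph.data / graph.data.std())

# eigen_solver="lobpcg" uses SciPy's LOBPCG; eigen_solver="amg" additionally
# enables the algebraic multigrid preconditioning (requires pyamg).
labels = spectral_clustering(graph, n_clusters=2, eigen_solver="lobpcg",
                             random_state=0)
segmentation = labels.reshape(img.shape)
```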

OBJ CUT

OBJ CUT is an efficient method that automatically segments an object. The OBJ CUT method is generic, and therefore applicable to any object category model. Given an image D containing an instance of a known object category, e.g. cows, the OBJ CUT algorithm computes a segmentation of the object, that is, it infers a set of labels m.

Let m be a set of binary labels, and let $\Theta$ be a shape parameter ($\Theta$ is a shape prior on the labels from a layered pictorial structure (LPS) model). An energy function $E(m, \Theta)$ is defined as follows.

$E(m, \Theta) = \sum_{x} \left( \phi_{x}(D|m_{x}) + \phi_{x}(m_{x}|\Theta) \right) + \sum_{x,y} \left( \Psi_{xy}(m_{x}, m_{y}) + \phi(D|m_{x}, m_{y}) \right)$ (1)

The term $\phi_{x}(D|m_{x}) + \phi_{x}(m_{x}|\Theta)$ is called a unary term, and the term $\Psi_{xy}(m_{x}, m_{y}) + \phi(D|m_{x}, m_{y})$ is called a pairwise term. A unary term consists of the likelihood $\phi_{x}(D|m_{x})$ based on color, and the unary potential $\phi_{x}(m_{x}|\Theta)$ based on the distance from $\Theta$. A pairwise term consists of a prior $\Psi_{xy}(m_{x}, m_{y})$ and a contrast term $\phi(D|m_{x}, m_{y})$.

The best labeling $m^{*}$ minimizes $\sum_{i} w_{i} E(m, \Theta_{i})$, where $w_{i}$ is the weight of the parameter $\Theta_{i}$.

$m^{*} = \arg\min_{m} \sum_{i} w_{i} E(m, \Theta_{i})$ (2)

Algorithm

  1. Given an image D, an object category is chosen, e.g. cows or horses.
  2. The corresponding LPS model is matched to D to obtain the samples $\Theta_{1}, \cdots, \Theta_{s}$.
  3. The objective function given by equation (2) is determined by computing $E(m, \Theta_{i})$ and using $w_{i} = g(\Theta_{i}|Z)$.
  4. The objective function is minimized using a single MINCUT operation to obtain the segmentation m (a sketch of this construction follows the list).
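
The single MINCUT step can be illustrated with the generic s-t min-cut construction for binary unary-plus-pairwise energies; the 4-connected grid, the Potts pairwise penalty, and the use of networkx's max-flow routine below are illustrative assumptions, not the OBJ CUT authors' implementation, whose actual terms come from color likelihoods and the LPS shape prior.

```
import networkx as nx
import numpy as np

def mincut_segment(unary0, unary1, pairwise_weight):
    """Minimize a binary unary + Potts pairwise energy with one s-t min-cut.
    unary0[y, x] / unary1[y, x]: cost of assigning label 0 / 1 to a pixel;
    pairwise_weight: penalty for 4-connected neighbours with different labels."""
    h, w = unary0.shape
    G = nx.DiGraph()
    s, t = "s", "t"
    for y in range(h):
        for x in range(w):
            p = (y, x)
            # t-links: the edge s->p is cut when p takes label 0 (pays unary0);
            # the edge p->t is cut when p takes label 1 (pays unary1).
            G.add_edge(s, p, capacity=float(unary0[y, x]))
            G.add_edge(p, t, capacity=float(unary1[y, x]))
            # n-links: Potts prior on 4-connected neighbours.
            for q in ((y + 1, x), (y, x + 1)):
                if q[0] < h and q[1] < w:
                    G.add_edge(p, q, capacity=pairwise_weight)
                    G.add_edge(q, p, capacity=pairwise_weight)
    _, (source_side, _) = nx.minimum_cut(G, s, t)
    labels = np.zeros((h, w), dtype=int)
    for p in source_side - {s}:
        labels[p] = 1          # pixels on the source side take label 1
    return labels
```

Because the pairwise term is submodular, this single cut yields the globally optimal binary labeling for the given energy.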

Other approaches

  • Jigsaw approach
  • Image parsing
  • Interleaved segmentation
  • LOCUS
  • LayoutCRF
  • Minimum spanning tree-based segmentation
