Affine shape adaptation

Affine shape adaptation is a methodology for iteratively adapting the shape of the smoothing kernels in an affine group of smoothing kernels to the local image structure in a neighbourhood of a specific image point. Equivalently, affine shape adaptation can be accomplished by iteratively warping a local image patch with affine transformations while applying a rotationally symmetric filter to the warped image patches. Provided that this iterative process converges, the resulting fixed point will be affine invariant. In the area of computer vision, this idea has been used for defining affine invariant interest point operators as well as affine invariant texture analysis methods.

Affine-adapted interest point operators

The interest points obtained from the scale-adapted Laplacian blob detector or the multi-scale Harris corner detector with automatic scale selection are invariant to translations, rotations and uniform rescalings in the spatial domain. The images that constitute the input to a computer vision system are, however, also subject to perspective distortions. To obtain interest points that are more robust to perspective transformations, a natural approach is to devise a feature detector that is invariant to affine transformations.

Affine invariance can be accomplished from measurements of the same multi-scale windowed second moment matrix [math]\displaystyle{ \mu }[/math] as is used in the multi-scale Harris operator provided that we extend the regular scale space concept obtained by convolution with rotationally symmetric Gaussian kernels to an affine Gaussian scale-space obtained by shape-adapted Gaussian kernels (Lindeberg 1994, section 15.3; Lindeberg & Garding 1997). For a two-dimensional image [math]\displaystyle{ I }[/math], let [math]\displaystyle{ \bar{x} = (x, y)^T }[/math] and let [math]\displaystyle{ \Sigma_t }[/math] be a positive definite 2×2 matrix. Then, a non-uniform Gaussian kernel can be defined as

[math]\displaystyle{ g(\bar{x}; \Sigma_t) = \frac{1}{2 \pi \sqrt{\operatorname{det} \Sigma_t}} e^{-\bar{x}^T \Sigma_t^{-1} \bar{x}/2} }[/math]

and, given any input image [math]\displaystyle{ I_L }[/math], the affine Gaussian scale-space is the three-parameter scale-space defined as

[math]\displaystyle{ L(\bar{x}; \Sigma_t) = \int_{\bar{\xi} \in \mathbb{R}^2} I_L(\bar{x}-\bar{\xi}) \, g(\bar{\xi}; \Sigma_t) \, d\bar{\xi}. }[/math]
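
As a concrete illustration, the following sketch (assuming NumPy and SciPy; the helper name affine_gaussian_kernel is ours, not part of any standard library) samples the non-uniform Gaussian kernel on a finite grid and convolves it with an image to obtain one slice [math]\displaystyle{ L(\cdot; \Sigma_t) }[/math] of the affine Gaussian scale-space:

```python
import numpy as np
from scipy.signal import convolve2d

def affine_gaussian_kernel(Sigma_t, radius):
    """Sample the non-uniform Gaussian g(x; Sigma_t) on a (2*radius+1)^2 grid."""
    Sigma_t = np.asarray(Sigma_t, dtype=float)
    inv = np.linalg.inv(Sigma_t)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(Sigma_t)))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    # Quadratic form x^T Sigma_t^{-1} x with x = (x, y)^T.
    quad = inv[0, 0] * xs**2 + 2.0 * inv[0, 1] * xs * ys + inv[1, 1] * ys**2
    return norm * np.exp(-0.5 * quad)

# One slice L(.; Sigma_t) of the affine Gaussian scale-space of a test image.
image = np.random.default_rng(0).random((128, 128))
Sigma_t = np.array([[8.0, 3.0], [3.0, 4.0]])          # positive definite shape matrix
kernel = affine_gaussian_kernel(Sigma_t, radius=12)   # truncated at roughly 4 std devs
L_slice = convolve2d(image, kernel, mode='same', boundary='symm')
```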

Next, introduce an affine transformation [math]\displaystyle{ \bar{\eta} = B \bar{\xi} }[/math] where [math]\displaystyle{ B }[/math] is a 2×2 matrix, and define a transformed image [math]\displaystyle{ I_R }[/math] as

[math]\displaystyle{ I_L(\bar{\xi}) = I_R(\bar{\eta}) }[/math].

Then, the affine scale-space representations [math]\displaystyle{ L }[/math] and [math]\displaystyle{ R }[/math] of [math]\displaystyle{ I_L }[/math] and [math]\displaystyle{ I_R }[/math], respectively, are related according to

[math]\displaystyle{ L(\bar{\xi}; \Sigma_L) = R(\bar{\eta}; \Sigma_R) }[/math]

provided that the affine shape matrices [math]\displaystyle{ \Sigma_L }[/math] and [math]\displaystyle{ \Sigma_R }[/math] are related according to

[math]\displaystyle{ \Sigma_R = B \Sigma_L B^T }[/math].

Disregarding the mathematical details, which become somewhat technical if one aims at a precise description, the important message is that the affine Gaussian scale-space is closed under affine transformations.
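
At the kernel level, this closure rests on the identity [math]\displaystyle{ g(\bar{\xi}; \Sigma_L) = |\operatorname{det} B| \, g(B \bar{\xi}; B \Sigma_L B^T) }[/math], which follows directly from the definition of the non-uniform Gaussian. The following minimal sanity check (assuming NumPy; the matrices and the test point are arbitrary illustrative choices) verifies this identity numerically:

```python
import numpy as np

def gauss2d(x, Sigma):
    """The non-uniform Gaussian kernel g(x; Sigma) from the text."""
    Sigma = np.asarray(Sigma, dtype=float)
    return np.exp(-0.5 * x @ np.linalg.inv(Sigma) @ x) / \
           (2.0 * np.pi * np.sqrt(np.linalg.det(Sigma)))

B = np.array([[1.2, 0.4], [-0.3, 0.9]])           # an invertible 2x2 matrix
Sigma_L = np.array([[2.0, 0.5], [0.5, 1.0]])      # a positive definite shape matrix
Sigma_R = B @ Sigma_L @ B.T                       # transformed shape matrix

xi = np.array([0.7, -1.3])                        # an arbitrary test point
lhs = gauss2d(xi, Sigma_L)
rhs = abs(np.linalg.det(B)) * gauss2d(B @ xi, Sigma_R)
assert np.isclose(lhs, rhs)                       # kernels agree up to the Jacobian factor
```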

If we, given the notation [math]\displaystyle{ \nabla L = (L_x, L_y)^T }[/math] for the image gradient as well as a local shape matrix [math]\displaystyle{ \Sigma_t }[/math] and an integration shape matrix [math]\displaystyle{ \Sigma_s }[/math], introduce an affine-adapted multi-scale second-moment matrix according to

[math]\displaystyle{ \mu_L(\bar{x}; \Sigma_t, \Sigma_s) = \int_{\bar{\xi} \in \mathbb{R}^2} g(\bar{x} - \bar{\xi}; \Sigma_s) \, \left( \nabla L(\bar{\xi}; \Sigma_t) \, \nabla L^T(\bar{\xi}; \Sigma_t) \right) d\bar{\xi} }[/math]

it can be shown that under any affine transformation [math]\displaystyle{ \bar{q} = B \bar{p} }[/math] the affine-adapted multi-scale second-moment matrix transforms according to

[math]\displaystyle{ \mu_L(\bar{p}; \Sigma_t, \Sigma_s) = B^T \mu_R(\bar{q}; B \Sigma_t B^T, B \Sigma_s B^T) B }[/math].

Again, disregarding somewhat messy technical details, the important message here is that given a correspondence between the image points [math]\displaystyle{ \bar{p} }[/math] and [math]\displaystyle{ \bar{q} }[/math], the affine transformation [math]\displaystyle{ B }[/math] can be estimated from measurements of the multi-scale second-moment matrices [math]\displaystyle{ \mu_L }[/math] and [math]\displaystyle{ \mu_R }[/math] in the two domains.
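For the rotationally symmetric special case [math]\displaystyle{ \Sigma_t = t I }[/math] and [math]\displaystyle{ \Sigma_s = s I }[/math], the affine-adapted second-moment matrix reduces to the ordinary multi-scale structure tensor. A minimal sketch of this special case (assuming NumPy/SciPy; the general anisotropic case is obtained by substituting shape-adapted kernels, as in the kernel sketch above, for the isotropic gaussian_filter calls):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def second_moment_matrix(image, t, s):
    """mu(x; t*I, s*I) for every pixel x, returned as an array of shape (H, W, 2, 2)."""
    sigma_t, sigma_s = np.sqrt(t), np.sqrt(s)            # standard deviations from variances
    L = gaussian_filter(image.astype(float), sigma_t)    # local smoothing
    Ly, Lx = np.gradient(L)                              # nabla L = (L_x, L_y)^T
    mu = np.empty(image.shape + (2, 2))
    mu[..., 0, 0] = gaussian_filter(Lx * Lx, sigma_s)    # integration smoothing
    mu[..., 0, 1] = mu[..., 1, 0] = gaussian_filter(Lx * Ly, sigma_s)
    mu[..., 1, 1] = gaussian_filter(Ly * Ly, sigma_s)
    return mu
```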

An important consequence of this study is that if we can find an affine transformation [math]\displaystyle{ B }[/math] such that [math]\displaystyle{ \mu_R }[/math] is a constant times the unit matrix, then we obtain a fixed point that is invariant to affine transformations (Lindeberg 1994, section 15.4; Lindeberg & Garding 1997). For the purpose of practical implementation, this property can often be reached in either of two main ways. The first approach is based on transformations of the smoothing filters and consists of the following steps (a code sketch of this iteration is given after the list):

  • estimating the second-moment matrix [math]\displaystyle{ \mu }[/math] in the image domain,
  • determining a new adapted smoothing kernel with covariance matrix proportional to [math]\displaystyle{ \mu^{-1} }[/math],
  • smoothing the original image by the shape-adapted smoothing kernel, and
  • repeating this operation until the difference between two successive second-moment matrices is sufficiently small.
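
Below is a minimal sketch of this filter-based loop at a single image point (assuming NumPy/SciPy; the helper names, the fixed kernel radius and the determinant normalisation of the adapted covariance are illustrative simplifications, not the exact procedure of the cited papers):

```python
import numpy as np
from scipy.signal import convolve2d

def aniso_gauss(Sigma, radius=12):
    """Shape-adapted Gaussian kernel with covariance Sigma, normalised to unit sum."""
    inv = np.linalg.inv(Sigma)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g = np.exp(-0.5 * (inv[0, 0] * xs**2 + 2.0 * inv[0, 1] * xs * ys + inv[1, 1] * ys**2))
    return g / g.sum()

def second_moment_at(image, point, Sigma_t, s=4.0, radius=12):
    """mu at `point` = (row, col), with local shape matrix Sigma_t and isotropic
    integration variance s (the point must lie at least `radius` pixels from the border)."""
    L = convolve2d(image, aniso_gauss(Sigma_t, radius), mode='same', boundary='symm')
    Ly, Lx = np.gradient(L)
    w = aniso_gauss(s * np.eye(2), radius)
    r, c = point
    sl = (slice(r - radius, r + radius + 1), slice(c - radius, c + radius + 1))
    mxy = np.sum(w * Lx[sl] * Ly[sl])
    return np.array([[np.sum(w * Lx[sl] * Lx[sl]), mxy],
                     [mxy, np.sum(w * Ly[sl] * Ly[sl])]])

def adapt_filter_shape(image, point, t=4.0, n_iter=20, tol=1e-3):
    """Filter-based affine shape adaptation at a single point."""
    Sigma_t = t * np.eye(2)                       # start from a rotationally symmetric kernel
    mu_prev = None
    for _ in range(n_iter):
        mu = second_moment_at(image.astype(float), point, Sigma_t)
        if mu_prev is not None and np.allclose(mu, mu_prev, rtol=tol):
            break                                 # successive estimates agree
        mu_prev = mu
        # New smoothing covariance proportional to mu^{-1}, rescaled so that its
        # determinant (the overall amount of smoothing) stays equal to t^2.
        Sigma_t = np.linalg.inv(mu)
        Sigma_t *= t / np.sqrt(np.linalg.det(Sigma_t))
    return Sigma_t, mu
```

The returned covariance matrix then describes an elliptical, shape-adapted smoothing kernel matched to the local image structure at the point.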

The second approach is based on warpings in the image domain and involves the following steps (a corresponding code sketch is given after the list):

  • estimating [math]\displaystyle{ \mu }[/math] in the image domain,
  • estimating a local affine transformation proportional to [math]\displaystyle{ \hat{B} = \mu^{1/2} }[/math] where [math]\displaystyle{ \mu^{1/2} }[/math] denotes the square root matrix of [math]\displaystyle{ \mu }[/math],
  • warping the input image by the affine transformation [math]\displaystyle{ \hat{B}^{-1} }[/math] and
  • repeating this operation until [math]\displaystyle{ \mu }[/math] is sufficiently close to a constant times the unit matrix.
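
A minimal sketch of this warping-based loop at a single image point (again assuming NumPy/SciPy; the eigenvalue-ratio stopping criterion and the unit-determinant normalisation of the resampling matrix are illustrative choices rather than the exact published procedure):

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.ndimage import affine_transform, gaussian_filter

def second_moment_at(image, point, sigma_t=2.0, sigma_s=4.0):
    """Structure tensor mu at `point` = (row, col), with rotationally symmetric smoothing."""
    r, c = point
    L = gaussian_filter(image.astype(float), sigma_t)
    Ly, Lx = np.gradient(L)
    mxx = gaussian_filter(Lx * Lx, sigma_s)[r, c]
    mxy = gaussian_filter(Lx * Ly, sigma_s)[r, c]
    myy = gaussian_filter(Ly * Ly, sigma_s)[r, c]
    return np.array([[mxx, mxy], [mxy, myy]])

def adapt_by_warping(image, point, n_iter=20, tol=0.05):
    """Warping-based affine shape adaptation at a single point."""
    p = np.asarray(point, dtype=float)
    warped, total = image.astype(float), np.eye(2)
    for _ in range(n_iter):
        mu = second_moment_at(warped, point)
        lam = np.linalg.eigvalsh(mu)
        if lam[1] / lam[0] < 1.0 + tol:           # mu is close to a constant times the identity
            break
        # Resampling matrix proportional to mu^{-1/2}, normalised to unit determinant
        # so that the local image area is preserved.
        M = np.real(sqrtm(np.linalg.inv(mu)))
        M /= np.sqrt(np.linalg.det(M))
        # Resample so that `point` stays fixed: warped'(x) = warped(M x + offset).
        warped = affine_transform(warped, M, offset=p - M @ p, order=1, mode='nearest')
        total = total @ M                         # accumulated resampling matrix (offsets ignored)
    return total, mu
```

In this sketch, applying the accumulated matrix to a circle in the adapted frame gives the corresponding elliptical affine region around the interest point in the original image.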

This overall process is referred to as affine shape adaptation (Lindeberg & Garding 1997; Baumberg 2000; Mikolajczyk & Schmid 2004; Tuytelaars & van Gool 2004; Ravela 2004; Lindeberg 2008). In the ideal continuous case, the two approaches are mathematically equivalent. In practical implementations, however, the first filter-based approach is usually more accurate in the presence of noise while the second warping-based approach is usually faster.

In practice, the affine shape adaptation process described here is often combined with interest point detection and automatic scale selection as described in the articles on blob detection and corner detection, to obtain interest points that are invariant to the full affine group, including scale changes. Besides the commonly used multi-scale Harris operator, this affine shape adaptation can also be applied to other types of interest point operators such as the Laplacian/Difference of Gaussians blob operator and the determinant of the Hessian (Lindeberg 2008). Affine shape adaptation can also be used for affine invariant texture recognition and affine invariant texture segmentation.

Closely related to the notion of affine shape adaptation is the notion of affine normalization, which defines an affine invariant reference frame as further described in Lindeberg (2013a,b, 2021:Appendix I.3), such that any image measurement performed in the affine invariant reference frame is affine invariant.
