Transfer matrix

In applied mathematics, the transfer matrix is a reformulation, in terms of a block-Toeplitz matrix, of the two-scale equation which characterizes refinable functions. Refinable functions play an important role in wavelet theory and finite element theory.

For a mask [math]\displaystyle{ h }[/math], that is, a vector with component indexes from [math]\displaystyle{ a }[/math] to [math]\displaystyle{ b }[/math], the transfer matrix of [math]\displaystyle{ h }[/math], here denoted by [math]\displaystyle{ T_h }[/math], is defined as

[math]\displaystyle{ (T_h)_{j,k} = h_{2\cdot j-k}. }[/math]

More verbosely,

[math]\displaystyle{ T_h = \begin{pmatrix} h_{a } & & & & & \\ h_{a+2} & h_{a+1} & h_{a } & & & \\ h_{a+4} & h_{a+3} & h_{a+2} & h_{a+1} & h_{a } & \\ \ddots & \ddots & \ddots & \ddots & \ddots & \ddots \\ & h_{b } & h_{b-1} & h_{b-2} & h_{b-3} & h_{b-4} \\ & & & h_{b } & h_{b-1} & h_{b-2} \\ & & & & & h_{b } \end{pmatrix}. }[/math]

The effect of [math]\displaystyle{ T_h }[/math] can be expressed in terms of the downsampling operator "[math]\displaystyle{ \downarrow }[/math]":

[math]\displaystyle{ T_h\cdot x = (h*x)\downarrow 2. }[/math]
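
A minimal Python sketch of this definition is given below (using NumPy; the helper name `transfer_matrix` is our choice, not a standard API). In 0-based array form the entry [math]\displaystyle{ h_{2j-k} }[/math] does not depend on the offset [math]\displaystyle{ a }[/math], so the mask can be passed as a plain array. The sketch checks the downsampling identity and the symmetry [math]\displaystyle{ T_h\cdot x = T_x\cdot h }[/math] stated first in the property list below; the later sketches reuse this helper.

```python
import numpy as np

def transfer_matrix(h):
    """Transfer matrix of the mask h: in 0-based array indexes,
    (T_h)[j, k] = h[2*j - k], taken as zero outside the support of h."""
    n = len(h)
    T = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if 0 <= 2 * j - k < n:
                T[j, k] = h[2 * j - k]
    return T

h = np.array([1.0, 3.0, 3.0, 1.0]) / 8            # an example mask
x = np.random.default_rng(0).standard_normal(4)   # arbitrary test vector

# T_h . x equals the convolution h * x downsampled by 2 ...
assert np.allclose(transfer_matrix(h) @ x, np.convolve(h, x)[::2])
# ... and h and x may change roles (the first property below)
assert np.allclose(transfer_matrix(h) @ x, transfer_matrix(x) @ h)
```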

Properties

  • [math]\displaystyle{ T_h\cdot x = T_x\cdot h }[/math].
  • If the first and the last column are dropped and the odd-indexed columns are moved to the left and the even-indexed columns to the right, then one obtains a transposed Sylvester matrix.
  • The determinant of a transfer matrix is essentially a resultant.

    More precisely:

    Let [math]\displaystyle{ h_{\mathrm{e}} }[/math] be the even-indexed coefficients of [math]\displaystyle{ h }[/math] ([math]\displaystyle{ (h_{\mathrm{e}})_k = h_{2k} }[/math]) and let [math]\displaystyle{ h_{\mathrm{o}} }[/math] be the odd-indexed coefficients of [math]\displaystyle{ h }[/math] ([math]\displaystyle{ (h_{\mathrm{o}})_k = h_{2k+1} }[/math]).

    Then [math]\displaystyle{ \det T_h = (-1)^{\lfloor\frac{b-a+1}{4}\rfloor}\cdot h_a\cdot h_b\cdot\mathrm{res}(h_{\mathrm{e}},h_{\mathrm{o}}) }[/math], where [math]\displaystyle{ \mathrm{res} }[/math] is the resultant.

    This connection allows for fast computation of the determinant, since resultants can be evaluated with the Euclidean algorithm. (A numerical check of this identity is sketched after this list.)
  • For the trace of the transfer matrix of convolved masks, it holds that
    [math]\displaystyle{ \mathrm{tr}~T_{g*h} = \mathrm{tr}~T_g \cdot \mathrm{tr}~T_h }[/math]
    Indeed, [math]\displaystyle{ \mathrm{tr}~T_h }[/math] is simply the sum of the coefficients of [math]\displaystyle{ h }[/math], so this reduces to the fact that the coefficient sum of a convolution is the product of the coefficient sums.
  • For the determinant of the transfer matrix of convolved masks, it holds that

    [math]\displaystyle{ \det T_{g*h} = \det T_g \cdot \det T_h \cdot \mathrm{res}(g_-,h) }[/math]

    where [math]\displaystyle{ g_- }[/math] denotes the mask with alternating signs, i.e. [math]\displaystyle{ (g_-)_k = (-1)^k \cdot g_k }[/math]. (Both identities for convolved masks are checked numerically in a sketch after this list.)
  • If [math]\displaystyle{ T_{h}\cdot x = 0 }[/math], then [math]\displaystyle{ T_{g*h}\cdot (g_-*x) = 0 }[/math].
    This makes the determinant property above concrete. From the determinant property one knows that [math]\displaystyle{ T_{g*h} }[/math] is singular whenever [math]\displaystyle{ T_{h} }[/math] is singular. This property also tells how vectors from the null space of [math]\displaystyle{ T_{h} }[/math] can be converted into null space vectors of [math]\displaystyle{ T_{g*h} }[/math].
  • If [math]\displaystyle{ x }[/math] is an eigenvector of [math]\displaystyle{ T_{h} }[/math] with respect to the eigenvalue [math]\displaystyle{ \lambda }[/math], i.e.

    [math]\displaystyle{ T_{h}\cdot x = \lambda\cdot x }[/math],

    then [math]\displaystyle{ x*(1,-1) }[/math] is an eigenvector of [math]\displaystyle{ T_{h*(1,1)} }[/math] with respect to the same eigenvalue, i.e.

    [math]\displaystyle{ T_{h*(1,1)}\cdot (x*(1,-1)) = \lambda\cdot (x*(1,-1)) }[/math]. (A numerical check is sketched after this list.)
  • Let [math]\displaystyle{ \lambda_a,\dots,\lambda_b }[/math] be the eigenvalues of [math]\displaystyle{ T_h }[/math], which implies [math]\displaystyle{ \lambda_a+\dots+\lambda_b = \mathrm{tr}~T_h }[/math] and more generally [math]\displaystyle{ \lambda_a^n+\dots+\lambda_b^n = \mathrm{tr}(T_h^n) }[/math]. This sum is useful for estimating the spectral radius of [math]\displaystyle{ T_h }[/math]. There is an alternative way of computing the sum of eigenvalue powers which is faster for small [math]\displaystyle{ n }[/math].

    Let [math]\displaystyle{ C_n h }[/math] be the periodization of [math]\displaystyle{ h }[/math] with respect to the period [math]\displaystyle{ 2^n-1 }[/math]. That is, [math]\displaystyle{ C_n h }[/math] is a circular filter, which means that the component indexes are residue classes modulo [math]\displaystyle{ 2^n-1 }[/math]. Then with the upsampling operator [math]\displaystyle{ \uparrow }[/math] it holds that

    [math]\displaystyle{ \mathrm{tr}(T_h^n) = \left(C_n h * (C_n h\uparrow 2) * (C_n h\uparrow 2^2) * \cdots * (C_n h\uparrow 2^{n-1})\right)_{[0]_{2^n-1}} }[/math]

    In fact, not [math]\displaystyle{ n-1 }[/math] convolutions are necessary, but only about [math]\displaystyle{ 2\cdot \log_2 n }[/math], when applying the strategy of efficient computation of powers (repeated squaring). The approach can be sped up further using the fast Fourier transform. (A sketch of the direct computation appears after this list.)
  • From the previous statement we can derive an estimate of the spectral radius [math]\displaystyle{ \varrho(T_h) }[/math]. With [math]\displaystyle{ a = \Vert C_2 h \Vert_2 }[/math] (this [math]\displaystyle{ a }[/math] is unrelated to the lower index bound of the mask), it holds that

    [math]\displaystyle{ \varrho(T_h) \ge \frac{a}{\sqrt{\# h}} \ge \frac{1}{\sqrt{3\cdot \# h}} }[/math]

    where [math]\displaystyle{ \# h }[/math] is the size of the filter. If all eigenvalues are real, it is also true that

    [math]\displaystyle{ \varrho(T_h) \le a }[/math].

    (Both bounds are checked numerically in a sketch after this list.)
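
A numerical check of the determinant formula is sketched below, using SymPy's resultant and the `transfer_matrix` helper from the sketch near the top. Since sign conventions for the resultant differ between sources, the comparison is made up to sign only.

```python
import numpy as np
import sympy as sp

h = [1.0, 3.0, 3.0, 1.0]      # mask coefficients h_a, ..., h_b
det_T = np.linalg.det(transfer_matrix(np.array(h)))

z = sp.symbols('z')
h_even = sum(c * z**i for i, c in enumerate(h[0::2]))  # h_e: even-indexed coefficients
h_odd  = sum(c * z**i for i, c in enumerate(h[1::2]))  # h_o: odd-indexed coefficients
res = float(sp.resultant(h_even, h_odd, z))

sign = (-1) ** (len(h) // 4)  # (-1)^floor((b - a + 1)/4)
# compare up to sign, since resultant sign conventions vary between sources
assert np.isclose(abs(det_T), abs(sign * h[0] * h[-1] * res))
```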
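Both identities for convolved masks can be checked in the same spirit (again a sketch, with `transfer_matrix` from above and the resultant comparison made up to sign):

```python
import numpy as np
import sympy as sp

g = np.array([1.0, 2.0, 1.0])
h = np.array([1.0, 3.0, 3.0, 1.0])

# trace: tr T_h is the sum of the mask coefficients, so the identity
# tr T_{g*h} = tr T_g . tr T_h reduces to sum(g*h) = sum(g) * sum(h)
assert np.isclose(np.trace(transfer_matrix(np.convolve(g, h))),
                  np.trace(transfer_matrix(g)) * np.trace(transfer_matrix(h)))

# determinant: det T_{g*h} = det T_g . det T_h . res(g_-, h), up to sign
z = sp.symbols('z')
g_alt = g * (-1.0) ** np.arange(len(g))   # g_-: mask with alternating signs
p = sum(c * z**i for i, c in enumerate(g_alt))
q = sum(c * z**i for i, c in enumerate(h))
res = float(sp.resultant(p, q, z))
lhs = np.linalg.det(transfer_matrix(np.convolve(g, h)))
rhs = np.linalg.det(transfer_matrix(g)) * np.linalg.det(transfer_matrix(h)) * res
assert np.isclose(abs(lhs), abs(rhs))
```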
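The eigenvector property is easy to verify numerically; the following sketch (reusing `transfer_matrix`) convolves a mask with [math]\displaystyle{ (1,1) }[/math] and one of its eigenvectors with [math]\displaystyle{ (1,-1) }[/math] and confirms that the eigenvalue is preserved.

```python
import numpy as np

h = np.array([1.0, 3.0, 3.0, 1.0]) / 8
lam, vecs = np.linalg.eig(transfer_matrix(h))
x = vecs[:, 0]                      # some eigenvector: T_h x = lam[0] x

h2 = np.convolve(h, [1.0, 1.0])     # h * (1, 1)
x2 = np.convolve(x, [1.0, -1.0])    # x * (1, -1)

# x * (1, -1) is an eigenvector of T_{h*(1,1)} with the same eigenvalue
assert np.allclose(transfer_matrix(h2) @ x2, lam[0] * x2)
```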
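The periodization formula for [math]\displaystyle{ \mathrm{tr}(T_h^n) }[/math] can be sketched directly; the helper names below are our choice, and the [math]\displaystyle{ n }[/math] factors are multiplied naively, without the repeated-squaring or FFT speedups mentioned above.

```python
import numpy as np

def periodize(h, p):
    """Periodization: component [r] collects all h[m] with m = r (mod p)."""
    out = np.zeros(p)
    for m, c in enumerate(h):
        out[m % p] += c
    return out

def circular_upsample(c, factor):
    """Circular upsampling: index map r -> factor*r (mod p), which is a
    permutation since gcd(2^j, 2^n - 1) = 1."""
    p = len(c)
    out = np.zeros(p)
    for r in range(p):
        out[(factor * r) % p] = c[r]
    return out

def circular_convolve(u, v):
    """Circular convolution of two filters of equal period."""
    p = len(u)
    out = np.zeros(p)
    for i in range(p):
        for j in range(p):
            out[(i + j) % p] += u[i] * v[j]
    return out

def trace_power(h, n):
    """tr(T_h^n) via the periodization formula, period 2^n - 1."""
    c = periodize(h, 2 ** n - 1)
    acc = c
    for j in range(1, n):
        acc = circular_convolve(acc, circular_upsample(c, 2 ** j))
    return acc[0]

h = np.array([1.0, 3.0, 3.0, 1.0]) / 8
T = transfer_matrix(h)
for n in (1, 2, 3):
    assert np.isclose(trace_power(h, n), np.trace(np.linalg.matrix_power(T, n)))
```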
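Finally, a sketch of the spectral radius bounds, with `periodize` and `transfer_matrix` from the sketches above; for this example mask all eigenvalues of [math]\displaystyle{ T_h }[/math] happen to be real, so the upper bound applies as well.

```python
import numpy as np

h = np.array([1.0, 3.0, 3.0, 1.0]) / 8
rho = np.abs(np.linalg.eigvals(transfer_matrix(h))).max()
a_norm = np.linalg.norm(periodize(h, 3))   # a = ||C_2 h||_2, period 2^2 - 1 = 3

assert rho >= a_norm / np.sqrt(len(h))     # lower bound, #h = 4
assert rho <= a_norm                       # upper bound (all eigenvalues real)
```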

References

  • Strang, Gilbert (1996). "Eigenvalues of [math]\displaystyle{ (\downarrow 2){H} }[/math] and convergence of the cascade algorithm". IEEE Transactions on Signal Processing 44: 233–238. doi:10.1109/78.485920. 
  • Thielemann, Henning (2006). Optimally matched wavelets (PhD thesis). (contains proofs of the above properties)