Rayleigh–Ritz method

The Rayleigh–Ritz method is a direct numerical method for approximating eigenvalues that originated in the context of solving physical boundary value problems; it is named after Lord Rayleigh and Walther Ritz. It is used in all applications that involve approximating eigenvalues and eigenvectors, often under different names. In quantum mechanics, where a system of particles is described using a Hamiltonian, the Ritz method uses trial wave functions to approximate the ground state eigenfunction with the lowest energy. In the finite element method context, mathematically the same algorithm is commonly called the Ritz–Galerkin method. The Rayleigh–Ritz method or Ritz method terminology is typical in mechanical and structural engineering, where it is used to approximate the eigenmodes and resonant frequencies of a structure.

Naming and attribution

The name Rayleigh–Ritz is debated[1][2] against the Ritz method after Walther Ritz, since the numerical procedure was published by Walther Ritz in 1908–1909. According to A. W. Leissa,[1] Lord Rayleigh wrote a paper congratulating Ritz on his work in 1911, but stated that he himself had used Ritz's method in many places in his book and in another publication. This statement, although later disputed, together with the fact that the method in the trivial case of a single vector results in the Rayleigh quotient, makes the arguable misnomer persist. According to S. Ilanko,[2] citing Richard Courant, both Lord Rayleigh and Walther Ritz independently conceived the idea of utilizing the equivalence between boundary value problems of partial differential equations on the one hand and problems of the calculus of variations on the other hand for numerical calculation of the solutions, by substituting for the variational problems simpler approximating extremum problems in which a finite number of parameters need to be determined; see the article Ritz method for details. Ironically for the debate, the modern justification of the algorithm drops the calculus of variations in favor of the simpler and more general approach of orthogonal projection, as in the Galerkin method named after Boris Galerkin, thus also leading to the Ritz–Galerkin naming.

For matrix eigenvalue problems

In numerical linear algebra, the Rayleigh–Ritz method is commonly[3] applied to approximate an eigenvalue problem [math]\displaystyle{ A \mathbf{x} = \lambda \mathbf{x} }[/math] for the matrix [math]\displaystyle{ A \in \mathbb{C}^{N \times N} }[/math] of size [math]\displaystyle{ N }[/math] using a projected matrix of a smaller size [math]\displaystyle{ m \lt N }[/math], generated from a given matrix [math]\displaystyle{ V \in \mathbb{C}^{N \times m} }[/math] with orthonormal columns. The matrix version of the algorithm is the simplest:

  1. Compute the [math]\displaystyle{ m \times m }[/math] matrix [math]\displaystyle{ V^* A V }[/math], where [math]\displaystyle{ V^* }[/math] denotes the complex-conjugate transpose of [math]\displaystyle{ V }[/math].
  2. Solve the eigenvalue problem [math]\displaystyle{ V^* A V \mathbf{y}_i = \mu_i \mathbf{y}_i }[/math].
  3. Compute the Ritz vectors [math]\displaystyle{ \tilde{\mathbf{x}}_i = V \mathbf{y}_i }[/math] and the Ritz values [math]\displaystyle{ \tilde{\lambda}_i=\mu_i }[/math].
  4. Output approximations [math]\displaystyle{ (\tilde{\lambda}_i,\tilde{\mathbf{x}}_i) }[/math], called the Ritz pairs, to eigenvalues and eigenvectors of the original matrix [math]\displaystyle{ A }[/math].
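As an illustration, here is a minimal NumPy sketch of the four steps above, assuming a Hermitian [math]\displaystyle{ A }[/math] and a [math]\displaystyle{ V }[/math] with orthonormal columns; the function and variable names are illustrative, not canonical:

```python
import numpy as np

def rayleigh_ritz(A, V):
    """Ritz pairs of A projected onto the subspace spanned by the columns of V."""
    H = V.conj().T @ A @ V       # step 1: the m-by-m matrix V* A V
    mu, Y = np.linalg.eigh(H)    # step 2: small eigenproblem (Hermitian case;
                                 #         use np.linalg.eig for general A)
    X = V @ Y                    # step 3: Ritz vectors x_i = V y_i in the columns
    return mu, X                 # step 4: Ritz values and Ritz vectors
```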

If the subspace with the orthonormal basis given by the columns of the matrix [math]\displaystyle{ V \in \mathbb{C}^{N \times m} }[/math] contains [math]\displaystyle{ k \leq m }[/math] vectors that are close to eigenvectors of the matrix [math]\displaystyle{ A }[/math], the Rayleigh–Ritz method above finds [math]\displaystyle{ k }[/math] Ritz vectors that well approximate these eigenvectors. The easily computable quantity [math]\displaystyle{ \| A \tilde{\mathbf{x}}_i - \tilde{\lambda}_i \tilde{\mathbf{x}}_i\| }[/math] determines the accuracy of such an approximation for every Ritz pair.
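This accuracy check translates directly into code; a small sketch under the same assumptions as above (the function name is illustrative):

```python
import numpy as np

def ritz_residuals(A, ritz_values, ritz_vectors):
    """Columnwise residual norms ||A x_i - lambda_i x_i|| for the Ritz pairs."""
    R = A @ ritz_vectors - ritz_vectors * ritz_values  # broadcasts over columns
    return np.linalg.norm(R, axis=0)                   # one norm per Ritz pair
```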

In the easiest case [math]\displaystyle{ m = 1 }[/math], the [math]\displaystyle{ N \times m }[/math] matrix [math]\displaystyle{ V }[/math] turns into a unit column-vector [math]\displaystyle{ v }[/math], the [math]\displaystyle{ m \times m }[/math] matrix [math]\displaystyle{ V^* A V }[/math] is a scalar that is equal to the Rayleigh quotient [math]\displaystyle{ \rho(v) = v^*Av/v^*v }[/math], the eigenvalue problem has the single solution [math]\displaystyle{ y_1 = 1 }[/math] and [math]\displaystyle{ \mu_1 = \rho(v) }[/math], and the only Ritz vector is [math]\displaystyle{ v }[/math] itself. Thus, the Rayleigh–Ritz method reduces to computing the Rayleigh quotient when [math]\displaystyle{ m = 1 }[/math].
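A short numerical sanity check of this reduction, as a sketch with an arbitrary symmetric test matrix:

```python
import numpy as np

# For m = 1 the projected matrix is scalar: the single Ritz value equals
# the Rayleigh quotient of the unit vector v.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = (A + A.T) / 2   # symmetric test matrix
v = rng.standard_normal(5); v /= np.linalg.norm(v)   # unit column-vector
V = v.reshape(-1, 1)                                 # N-by-1 "subspace basis"
rho = (v @ A @ v) / (v @ v)                          # Rayleigh quotient
mu = (V.T @ A @ V).item()                            # the 1-by-1 projected matrix
assert np.isclose(mu, rho)
```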

Another useful connection to the Rayleigh quotient is that [math]\displaystyle{ \mu_i = \rho(\tilde{\mathbf{x}}_i) }[/math] for every Ritz pair [math]\displaystyle{ (\tilde{\lambda}_i, \tilde{\mathbf{x}}_i) }[/math], allowing one to derive some properties of the Ritz values [math]\displaystyle{ \mu_i }[/math] from the corresponding theory for the Rayleigh quotient. For example, if [math]\displaystyle{ A }[/math] is a Hermitian matrix, its Rayleigh quotient (and thus every Ritz value) is real and takes values within the closed interval bounded by the smallest and largest eigenvalues of [math]\displaystyle{ A }[/math].
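Both properties are easy to verify numerically; a sketch assuming a real symmetric [math]\displaystyle{ A }[/math] and a random subspace:

```python
import numpy as np

# Check that every Ritz value equals the Rayleigh quotient of its Ritz vector
# and, for symmetric A, lies between the extreme eigenvalues of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)); A = (A + A.T) / 2
V, _ = np.linalg.qr(rng.standard_normal((6, 3)))     # orthonormal basis, m = 3
mu, Y = np.linalg.eigh(V.T @ A @ V)
X = V @ Y                                            # Ritz vectors
for mu_i, x in zip(mu, X.T):
    assert np.isclose(mu_i, (x @ A @ x) / (x @ x))   # mu_i = rho(x_i)
lam = np.linalg.eigvalsh(A)
assert lam[0] <= mu.min() and mu.max() <= lam[-1]    # interval containment
```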

Example

The matrix [math]\displaystyle{ A = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 1 & 2 \end{bmatrix} }[/math] has eigenvalues [math]\displaystyle{ 1, 2, 3 }[/math] and the corresponding eigenvectors [math]\displaystyle{ \mathbf x_{\lambda=1} = \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}, \quad \mathbf x_{\lambda=2} = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \quad \mathbf x_{\lambda=3} = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}. }[/math] Let us take [math]\displaystyle{ V = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}; }[/math] then [math]\displaystyle{ V^* A V = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} }[/math] with eigenvalues [math]\displaystyle{ 1, 3 }[/math] and the corresponding eigenvectors [math]\displaystyle{ \mathbf y_{\mu=1} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \quad \mathbf y_{\mu=3} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, }[/math] so that the Ritz values are [math]\displaystyle{ 1, 3 }[/math] and the Ritz vectors are [math]\displaystyle{ \tilde{\mathbf x}_{\tilde{\lambda}=1} = \begin{bmatrix} 0 \\ 1 \\ -1 \end{bmatrix}, \quad \tilde{\mathbf x}_{\tilde{\lambda}=3} = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}. }[/math] We observe that, for the given [math]\displaystyle{ V }[/math], each Ritz vector is exactly one of the eigenvectors of [math]\displaystyle{ A }[/math], and the Ritz values give exactly two of the three eigenvalues of [math]\displaystyle{ A }[/math]. The mathematical explanation for the exact approximation is that the column space of the matrix [math]\displaystyle{ V }[/math] happens to coincide with the subspace spanned by the two eigenvectors [math]\displaystyle{ \mathbf x_{\lambda=1} }[/math] and [math]\displaystyle{ \mathbf x_{\lambda=3} }[/math] in this example.
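The example can be reproduced in a few lines of NumPy (a sketch; note that np.linalg.eigh returns normalized eigenvectors, so the Ritz vectors come out scaled to unit norm):

```python
import numpy as np

# Reproducing the example above: the Ritz values are 1 and 3, and the Ritz
# vectors match the corresponding eigenvectors of A.
A = np.array([[2., 0., 0.],
              [0., 2., 1.],
              [0., 1., 2.]])
V = np.array([[0., 0.],
              [1., 0.],
              [0., 1.]])
mu, Y = np.linalg.eigh(V.T @ A @ V)   # eigenvalues of the 2-by-2 projection
X = V @ Y                             # Ritz vectors in the columns
print(mu)                             # [1. 3.]
print(X)                              # columns proportional to [0, 1, -1]
                                      # and [0, 1, 1] (up to sign)
```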

For matrix singular value problems

Truncated singular value decomposition (SVD) in numerical linear algebra can also use the Rayleigh–Ritz method to find approximations to the left and right singular vectors of a matrix [math]\displaystyle{ M \in \mathbb{C}^{M \times N} }[/math] of size [math]\displaystyle{ M \times N }[/math] in given subspaces, by turning the singular value problem into an eigenvalue problem.

Using the normal matrix

The definition of the singular value [math]\displaystyle{ \sigma }[/math] and the corresponding left and right singular vectors is [math]\displaystyle{ M v = \sigma u }[/math] and [math]\displaystyle{ M^* u = \sigma v }[/math]. Having found one set (left or right) of approximate singular vectors and singular values by naively applying the Rayleigh–Ritz method to the Hermitian normal matrix [math]\displaystyle{ M^* M \in \mathbb{C}^{N \times N} }[/math] or [math]\displaystyle{ M M^* \in \mathbb{C}^{M \times M} }[/math], whichever is smaller, one could determine the other set of (left or right) singular vectors simply by dividing by the singular values, i.e., [math]\displaystyle{ u = Mv / \sigma }[/math] and [math]\displaystyle{ v = M^* u / \sigma }[/math]. However, the division is unstable or fails for small or zero singular values.
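A hedged NumPy sketch of this naive approach with [math]\displaystyle{ A = M^* M }[/math] (the function name singular_from_normal is illustrative); the final division is the step that is unstable for small [math]\displaystyle{ \sigma }[/math]:

```python
import numpy as np

def singular_from_normal(M, W):
    """Naive Rayleigh-Ritz on the normal matrix M* M; W has orthonormal
    columns spanning a subspace of approximate right singular vectors."""
    H = W.conj().T @ (M.conj().T @ M) @ W   # projected normal matrix
    mu, Y = np.linalg.eigh(H)               # Ritz values approximate sigma**2
    V = W @ Y                               # approximate right singular vectors
    sigma = np.sqrt(np.maximum(mu, 0.0))    # clip tiny negatives from roundoff
    U = (M @ V) / sigma                     # u = M v / sigma: unstable or
    return U, sigma, V                      #   failing for sigma near zero
```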

An alternative approach, e.g., defining the normal matrix as [math]\displaystyle{ A = M^* M \in \mathbb{C}^{N \times N} }[/math] of size [math]\displaystyle{ N \times N }[/math], takes advantage of the fact that for a given [math]\displaystyle{ N \times m }[/math] matrix [math]\displaystyle{ W \in \mathbb{C}^{N \times m} }[/math] with orthonormal columns the eigenvalue problem of the Rayleigh–Ritz method for the [math]\displaystyle{ m \times m }[/math] matrix [math]\displaystyle{ W^* A W = W^* M^* M W = (M W)^* M W }[/math] can be interpreted as a singular value problem for the [math]\displaystyle{ M \times m }[/math] matrix [math]\displaystyle{ M W }[/math]. This interpretation allows simple simultaneous calculation of both left and right approximate singular vectors as follows.

  1. Compute the [math]\displaystyle{ M \times m }[/math] matrix [math]\displaystyle{ M W }[/math].
  2. Compute the thin, or economy-sized, SVD [math]\displaystyle{ M W = \mathbf {U} \Sigma \mathbf V_h, }[/math] with [math]\displaystyle{ M \times m }[/math] matrix [math]\displaystyle{ \mathbf U }[/math], [math]\displaystyle{ m \times m }[/math] diagonal matrix [math]\displaystyle{ \Sigma }[/math], and [math]\displaystyle{ m \times m }[/math] matrix [math]\displaystyle{ \mathbf {V}_h }[/math].
  3. Compute the matrices of the left Ritz singular vectors [math]\displaystyle{ U = \mathbf U }[/math] and of the right Ritz singular vectors [math]\displaystyle{ V_h = \mathbf {V}_h W^* }[/math].
  4. Output approximations [math]\displaystyle{ U, \Sigma, V_h }[/math], called the Ritz singular triplets, to selected singular values and the corresponding left and right singular vectors of the original matrix [math]\displaystyle{ M }[/math], representing an approximate truncated singular value decomposition (SVD) with right singular vectors restricted to the column space of the matrix [math]\displaystyle{ W }[/math].
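A minimal NumPy sketch of steps 1–4 (the function name is illustrative):

```python
import numpy as np

def rayleigh_ritz_svd(M, W):
    """Ritz singular triplets of M, with right singular vectors restricted
    to the column space of W (orthonormal columns assumed)."""
    MW = M @ W                                               # step 1
    U, S, Vh_small = np.linalg.svd(MW, full_matrices=False)  # step 2: thin SVD
    Vh = Vh_small @ W.conj().T                               # step 3
    return U, S, Vh                                          # step 4
```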

The algorithm can be used as a post-processing step where the matrix [math]\displaystyle{ W }[/math] is an output of an eigenvalue solver, such as LOBPCG, approximating numerically selected eigenvectors of the normal matrix [math]\displaystyle{ A = M^* M }[/math].

Example

The matrix [math]\displaystyle{ M = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 \end{bmatrix} }[/math] has the normal matrix [math]\displaystyle{ A = M^* M = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ 0 & 0 & 9 & 0 \\ 0 & 0 & 0 & 16 \\ \end{bmatrix}, }[/math] the singular values [math]\displaystyle{ 1, 2, 3, 4 }[/math], and the corresponding thin SVD [math]\displaystyle{ M = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 4 & 0 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}, }[/math] where the columns of the first multiplier are the left singular vectors of the matrix [math]\displaystyle{ M }[/math], the diagonal entries of the middle term are the singular values, and the columns of the last multiplier transposed (although the transposition does not change it) [math]\displaystyle{ \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}^* \quad = \quad \begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix} }[/math] are the corresponding right singular vectors.

Let us take [math]\displaystyle{ W = \begin{bmatrix} 1 / \sqrt{2} & 1 / \sqrt{2} \\ 1 / \sqrt{2} & -1 / \sqrt{2} \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, }[/math] whose column space is spanned by the two exact right singular vectors [math]\displaystyle{ \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix} }[/math] corresponding to the singular values 1 and 2.

Following step 1 of the algorithm, we compute [math]\displaystyle{ MW = \begin{bmatrix} 1 / \sqrt{2} & 1 / \sqrt{2} \\ \sqrt{2} & -\sqrt{2} \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, }[/math] and on step 2 its thin SVD [math]\displaystyle{ M W = \mathbf {U}{{\Sigma }}\mathbf {V}_h }[/math] with [math]\displaystyle{ \mathbf {U} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, \quad \Sigma = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \quad \mathbf {V}_h = \begin{bmatrix} 1 / \sqrt{2} & -1 / \sqrt{2} \\ 1 / \sqrt{2} & 1 / \sqrt{2} \end{bmatrix}. }[/math] Thus we already obtain the singular values 2 and 1 from [math]\displaystyle{ \Sigma }[/math] and, from [math]\displaystyle{ \mathbf {U} }[/math], the corresponding two left singular vectors [math]\displaystyle{ u }[/math] as [math]\displaystyle{ [0, 1, 0, 0, 0]^* }[/math] and [math]\displaystyle{ [1, 0, 0, 0, 0]^* }[/math]; these approximations are exact for the given [math]\displaystyle{ W }[/math] because the column space of [math]\displaystyle{ W }[/math] is spanned by exact right singular vectors, as chosen above.

Finally, step 3 computes the matrix [math]\displaystyle{ V_h = \mathbf {V}_h W^* }[/math]: [math]\displaystyle{ V_h = \begin{bmatrix} 1 / \sqrt{2} & -1 / \sqrt{2} \\ 1 / \sqrt{2} & 1 / \sqrt{2} \end{bmatrix} \, \begin{bmatrix} 1 / \sqrt{2} & 1 / \sqrt{2} & 0 & 0 \\ 1 / \sqrt{2} & -1 / \sqrt{2} & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{bmatrix}, }[/math] recovering from its rows the two right singular vectors [math]\displaystyle{ v }[/math] as [math]\displaystyle{ [0, 1, 0, 0]^* }[/math] and [math]\displaystyle{ [1, 0, 0, 0]^* }[/math]. We validate the first vector: [math]\displaystyle{ M v = \sigma u }[/math] gives [math]\displaystyle{ \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 3 & 0 \\ 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 \end{bmatrix} \, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} = \, 2 \, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} }[/math] and [math]\displaystyle{ M^* u = \sigma v }[/math] gives [math]\displaystyle{ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 3 & 0 & 0 \\ 0 & 0 & 0 & 4 & 0 \end{bmatrix} \, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix} = \, 2 \, \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}. }[/math] Thus, for the given matrix [math]\displaystyle{ W }[/math], whose column space is spanned by two exact right singular vectors, we determine these right singular vectors, as well as the corresponding left singular vectors and the singular values, all exactly. For an arbitrary matrix [math]\displaystyle{ W }[/math], we obtain approximate singular triplets which are optimal given [math]\displaystyle{ W }[/math] in the sense of optimality of the Rayleigh–Ritz method.
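The whole worked example can be checked with a short NumPy sketch (the signs of the computed singular vectors may differ from the text, since singular vectors are determined only up to sign):

```python
import numpy as np

# Reproducing the worked example: exact triplets are recovered because
# range(W) is spanned by two exact right singular vectors of M.
M = np.vstack([np.diag([1., 2., 3., 4.]), np.zeros(4)])  # the 5-by-4 matrix
s = 1 / np.sqrt(2)
W = np.array([[s,  s],
              [s, -s],
              [0., 0.],
              [0., 0.]])
U, S, Vh_small = np.linalg.svd(M @ W, full_matrices=False)  # steps 1 and 2
Vh = Vh_small @ W.T                                         # step 3
print(S)   # [2. 1.]
print(U)   # columns [0,1,0,0,0] and [1,0,0,0,0], up to sign
print(Vh)  # rows [0, 1, 0, 0] and [1, 0, 0, 0], up to sign
```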

Notes and references

  1. Leissa, A.W. (2005). "The historical bases of the Rayleigh and Ritz methods". Journal of Sound and Vibration 287 (4–5): 961–978. doi:10.1016/j.jsv.2004.12.021. Bibcode: 2005JSV...287..961L. https://www.sciencedirect.com/science/article/abs/pii/S0022460X05000362.
  2. Ilanko, Sinniah (2009). "Comments on the historical bases of the Rayleigh and Ritz methods". Journal of Sound and Vibration 319 (1–2): 731–733. doi:10.1016/j.jsv.2008.06.001. Bibcode: 2009JSV...319..731I.
  3. Trefethen, Lloyd N.; Bau, III, David (1997). Numerical Linear Algebra. SIAM. p. 254. ISBN 978-0-89871-957-4. https://books.google.com/books?id=JaPtxOytY7kC.
