Physics:Grassmann number

Short description: Anticommuting number

In mathematical physics, a Grassmann number, named after Hermann Grassmann (also called an anticommuting number or supernumber), is an element of the exterior algebra over the complex numbers.[1] The special case of a 1-dimensional algebra is known as a dual number. Grassmann numbers saw an early use in physics to express a path integral representation for fermionic fields, although they are now widely used as a foundation for superspace, on which supersymmetry is constructed.

Informal discussion

Grassmann numbers are generated by anti-commuting elements or objects. The idea of anti-commuting objects arises in multiple areas of mathematics: they are typically seen in differential geometry, where the differential forms are anti-commuting. Differential forms are normally defined in terms of derivatives on a manifold; however, one can "forget" or "ignore" the existence of any underlying manifold, and the fact that the forms were defined as derivatives, and instead simply contemplate a situation where one has objects that anti-commute and have no other pre-defined or pre-supposed properties. Such objects form an algebra: specifically, the Grassmann algebra or exterior algebra.

The Grassmann numbers are elements of that algebra. The appellation of "number" is justified by the fact that they behave not unlike "ordinary" numbers: they can be added and multiplied, and those with a non-vanishing constant term (the "body", defined below) can even be inverted: they behave almost like a field. More can be done: one can consider polynomials of Grassmann numbers, leading to the idea of holomorphic functions. One can take derivatives of such functions, and then consider the anti-derivatives as well. Each of these ideas can be carefully defined, and corresponds reasonably well to the equivalent concept from ordinary mathematics. The analogy does not stop there: one has an entire branch of supermathematics, where the analog of Euclidean space is superspace, the analog of a manifold is a supermanifold, the analog of a Lie algebra is a Lie superalgebra and so on. The Grassmann numbers are the underlying construct that makes this all possible.

Of course, one could pursue a similar program for any other field, or even ring, and this is indeed widely and commonly done in mathematics. However, supermathematics takes on a special significance in physics, because the anti-commuting behavior can be strongly identified with the quantum-mechanical behavior of fermions: the anti-commutation is that of the Pauli exclusion principle. Thus, the study of Grassmann numbers, and of supermathematics, in general, is strongly driven by their utility in physics.

Specifically, in quantum field theory, or more narrowly, second quantization, one works with ladder operators that create multi-particle quantum states. The ladder operators for fermions create field quanta that must necessarily have anti-symmetric wave functions, as this is forced by the Pauli exclusion principle. In this situation, a Grassmann number corresponds immediately and directly to a wave function that contains some (typically indeterminate) number of fermions.

When the number of fermions is fixed and finite, an explicit relationship between anticommutation relations and spinors is given by means of the spin group. This group can be defined as the subset of unit-length vectors in the Clifford algebra, and naturally factorizes into anti-commuting Weyl spinors. Both the anti-commutation and the expression as spinors arise in a natural fashion for the spin group. In essence, the Grassmann numbers can be thought of as discarding the relationships arising from spin, and keeping only the relationships due to anti-commutation.

General description and properties

Grassmann numbers are individual elements or points of the exterior algebra generated by a set of n Grassmann variables or Grassmann directions or supercharges [math]\displaystyle{ \{\theta_i\} }[/math], with n possibly being infinite. The usage of the term "Grassmann variables" is historic; they are not variables, per se; they are better understood as the basis elements of a unital algebra. The terminology comes from the fact that a primary use is to define integrals, and that the variable of integration is Grassmann-valued, and thus, by abuse of language, is called a Grassmann variable. Similarly, the notion of direction comes from the notion of superspace, where ordinary Euclidean space is extended with additional Grassmann-valued "directions". The appellation of charge comes from the notion of charges in physics, which correspond to the generators of physical symmetries (via Noether's theorem). The perceived symmetry is that multiplication by a single Grassmann variable swaps the [math]\displaystyle{ \mathbb{Z}_2 }[/math] grading between fermions and bosons; this is discussed in greater detail below.

The Grassmann variables are the basis vectors of a vector space (of dimension n). They form an algebra over a field, with the field usually being taken to be the complex numbers, although one could contemplate other fields, such as the reals. The algebra is a unital algebra, and the generators are anti-commuting:

[math]\displaystyle{ \theta_i \theta_j = -\theta_j \theta_i }[/math]

Since the [math]\displaystyle{ \theta_i }[/math] are elements of a vector space over the complex numbers, they, by definition, commute with complex numbers. That is, for complex x, one has

[math]\displaystyle{ \theta_i x = x \theta_i. }[/math]

The squares of the generators vanish:

[math]\displaystyle{ (\theta_i)^2 = 0, }[/math] since [math]\displaystyle{ \theta_i \theta_i = -\theta_i \theta_i. }[/math]

In other words, a Grassmann variable is a non-zero square-root of zero.
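
These defining relations are easy to verify mechanically. The following Python sketch (purely illustrative, with ad hoc names) encodes a product of generators as a tuple of indices and reduces it to a canonical order, tracking the sign of each anticommutation:

```python
# Purely illustrative sketch: a product of generators theta_{i_1}...theta_{i_k} is
# stored as a tuple of indices; reducing it to sorted order flips the sign once per
# anticommutation, and a repeated index gives zero (theta_i^2 = 0).

def grassmann_product(*indices):
    idx = list(indices)
    if len(set(idx)) != len(idx):          # a repeated generator gives zero
        return 0, ()
    sign = 1
    for i in range(len(idx)):              # bubble sort; each adjacent swap is one
        for j in range(len(idx) - 1 - i):  # anticommutation and flips the sign
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return sign, tuple(idx)

print(grassmann_product(2, 1))   # (-1, (1, 2)):  theta_2 theta_1 = -theta_1 theta_2
print(grassmann_product(1, 1))   # (0, ()):       theta_1 theta_1 = 0
```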

Formal definition

Formally, let V be an n-dimensional complex vector space with basis [math]\displaystyle{ \theta_i, i=1,\ldots,n }[/math]. The Grassmann algebra whose Grassmann variables are [math]\displaystyle{ \theta_i, i=1,\ldots,n }[/math] is defined to be the exterior algebra of V, namely

[math]\displaystyle{ \Lambda(V) = \mathbb{C} \oplus V \oplus \left( V \wedge V \right) \oplus \left( V\wedge V \wedge V \right) \oplus \cdots \oplus \underbrace{\left( V\wedge V \wedge \cdots \wedge V \right)}_n \equiv \mathbb{C} \oplus \Lambda^1 V \oplus \Lambda^2 V \oplus \cdots \oplus \Lambda^n V, }[/math]

where [math]\displaystyle{ \wedge }[/math] is the exterior product and [math]\displaystyle{ \oplus }[/math] is the direct sum. The individual elements of this algebra are then called Grassmann numbers. It is standard to omit the wedge symbol [math]\displaystyle{ \wedge }[/math] when writing a Grassmann number once the definition is established. A general Grassmann number can be written as

[math]\displaystyle{ z=c_0 + \sum_{k=1}^n \sum_{i_1,i_2,\cdots ,i_k} c_{i_1i_2\cdots i_k} \theta_{i_1}\theta_{i_2}\cdots\theta_{i_k} , }[/math]

where [math]\displaystyle{ (i_1, i_2, \ldots, i_k) }[/math] are strictly increasing k-tuples with [math]\displaystyle{ 1 \le i_j \le n, 1 \le j \le k }[/math], and the [math]\displaystyle{ c_{i_1i_2\cdots i_k} }[/math] are complex, completely antisymmetric tensors of rank k. Again, the [math]\displaystyle{ \theta_i }[/math], and the [math]\displaystyle{ \theta_i \wedge \theta_j = \theta_i \theta_j }[/math] (subject to [math]\displaystyle{ i \lt j }[/math]), and larger finite products, can be seen here to be playing the role of basis vectors of subspaces of [math]\displaystyle{ \Lambda }[/math].

The Grassmann algebra generated by n linearly independent Grassmann variables has dimension [math]\displaystyle{ 2^n }[/math]; this follows from the binomial theorem applied to the above sum, and the fact that the (n + 1)-fold product of variables must vanish, by the anti-commutation relations, above. The dimension of [math]\displaystyle{ \Lambda^k V }[/math] is given by the binomial coefficient [math]\displaystyle{ \tbinom{n}{k} }[/math]. The special case of n = 1 is called a dual number, and was introduced by William Clifford in 1873.
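
For example, for n = 3 the algebra is spanned by the [math]\displaystyle{ 2^3 = 8 }[/math] elements

[math]\displaystyle{ \{\,1,\ \theta_1,\ \theta_2,\ \theta_3,\ \theta_1\theta_2,\ \theta_1\theta_3,\ \theta_2\theta_3,\ \theta_1\theta_2\theta_3\,\}, }[/math]

one for each subset of the generators.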

In case V is infinite-dimensional, the above series does not terminate and one defines

[math]\displaystyle{ \Lambda_\infty(V) = \mathbb{C} \oplus \Lambda^1 V \oplus \Lambda^2 V \oplus \cdots. }[/math]

The general element is now

[math]\displaystyle{ z=\sum_{k=0}^\infty \sum_{i_1,i_2,\cdots ,i_k} \frac{1}{k!}c_{i_1i_2\cdots i_k} \theta_{i_1}\theta_{i_2}\cdots\theta_{i_k} \equiv z_B + z_S = z_B + \sum_{k=1}^\infty \sum_{i_1,i_2,\cdots ,i_k} \frac{1}{k!}c_{i_1i_2\cdots i_k} \theta_{i_1}\theta_{i_2}\cdots\theta_{i_k}, }[/math]

where [math]\displaystyle{ z_B }[/math] is sometimes referred to as the body and [math]\displaystyle{ z_S }[/math] as the soul of the supernumber [math]\displaystyle{ z }[/math].

Properties

In the finite-dimensional case (using the same terminology) the soul is nilpotent, i.e.

[math]\displaystyle{ z_S^{n+1} = 0, }[/math]

but this is not necessarily so in the infinite-dimensional case.[2]
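
For example, with three generators the supernumber [math]\displaystyle{ z = 1 + \theta_1 + \theta_2\theta_3 }[/math] has body [math]\displaystyle{ z_B = 1 }[/math] and soul [math]\displaystyle{ z_S = \theta_1 + \theta_2\theta_3 }[/math]; one finds

[math]\displaystyle{ z_S^2 = 2\,\theta_1\theta_2\theta_3, \qquad z_S^3 = 0, }[/math]

consistent with (indeed stronger than) the general bound [math]\displaystyle{ z_S^{n+1} = 0 }[/math] for n = 3.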

If V is finite-dimensional, then

[math]\displaystyle{ \theta_iz = 0, \quad 1 \le i \le n \Rightarrow z = c\theta_1\theta_2\cdots\theta_n, \quad c \in \mathbb C, }[/math]

and if V is infinite-dimensional[3]

[math]\displaystyle{ \theta_az = 0 \quad \forall a \Rightarrow z = 0. }[/math]
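
The first property can be seen directly for n = 2: writing [math]\displaystyle{ z = c_0 + c_1\theta_1 + c_2\theta_2 + c_{12}\theta_1\theta_2 }[/math], one has [math]\displaystyle{ \theta_1 z = c_0\theta_1 + c_2\theta_1\theta_2 }[/math] and [math]\displaystyle{ \theta_2 z = c_0\theta_2 - c_1\theta_1\theta_2 }[/math], so that [math]\displaystyle{ \theta_1 z = \theta_2 z = 0 }[/math] forces [math]\displaystyle{ c_0 = c_1 = c_2 = 0 }[/math], leaving only the top term [math]\displaystyle{ z = c_{12}\theta_1\theta_2 }[/math].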

Finite vs. countable sets of generators

Two distinct kinds of supernumbers commonly appear in the literature: those with a finite number of generators, typically n = 1, 2, 3 or 4, and those with a countably-infinite number of generators. These two situations are not as unrelated as they may seem at first. First, in the definition of a supermanifold, one variant uses a countably-infinite number of generators, but then employs a topology that effectively reduces the dimension to a small finite number.[4][5]

In the other case, one may start with a finite number of generators, but in the course of second quantization, a need for an infinite number of generators arises: one for each possible momentum that a fermion might carry.

Involution, choice of field

The complex numbers are usually chosen as the field for the definition of the Grassmann numbers, as opposed to the real numbers, as this avoids some strange behaviors when a conjugation or involution is introduced. It is common to introduce an operator * on the Grassmann numbers such that:

[math]\displaystyle{ \theta=\theta^* }[/math]

when [math]\displaystyle{ \theta }[/math] is a generator, and such that

[math]\displaystyle{ (\theta_i\theta_j\cdots\theta_k)^* = \theta_k\cdots \theta_j\theta_i. }[/math]

One may then consider Grassmann numbers z for which [math]\displaystyle{ z=z^* }[/math], and term these (super) real, while those that obey [math]\displaystyle{ z^*=-z }[/math] are termed (super) imaginary. These definitions carry through just fine, even if the Grassmann numbers use the real numbers as the base field; however, in such a case, many coefficients are forced to vanish if the number of generators is less than 4. Thus, by convention, the Grassmann numbers are usually defined over the complex numbers.

Other conventions are possible; the above is sometimes referred to as the DeWitt convention; Rogers employs [math]\displaystyle{ \theta^*=i\theta }[/math] for the involution. In this convention, the real supernumbers always have real coefficients, whereas in the DeWitt convention the real supernumbers may have both real and imaginary coefficients. Despite this, it is usually easiest to work with the DeWitt convention.
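
For example, in the DeWitt convention (and assuming, as is usual, that the involution acts as complex conjugation on the coefficients), a supernumber [math]\displaystyle{ z = c_0 + c_{12}\theta_1\theta_2 }[/math] with two generators has

[math]\displaystyle{ z^* = \bar{c}_0 + \bar{c}_{12}\,\theta_2\theta_1 = \bar{c}_0 - \bar{c}_{12}\,\theta_1\theta_2, }[/math]

so [math]\displaystyle{ z = z^* }[/math] requires [math]\displaystyle{ c_0 }[/math] to be real but [math]\displaystyle{ c_{12} }[/math] to be purely imaginary; this illustrates how a real supernumber may carry imaginary coefficients in this convention.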

Analysis

Products of an odd number of Grassmann variables anti-commute with each other; such a product is often called an a-number. Products of an even number of Grassmann variables commute (with all Grassmann numbers); they are often called c-numbers. By abuse of terminology, an a-number is sometimes called an anticommuting c-number. This decomposition into even and odd subspaces provides a [math]\displaystyle{ \mathbb{Z}_2 }[/math] grading on the algebra; thus Grassmann algebras are the prototypical examples of supercommutative algebras. Note that the c-numbers form a subalgebra of [math]\displaystyle{ \Lambda }[/math], but the a-numbers do not (they are a subspace, not a subalgebra).
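
For example, the product [math]\displaystyle{ \theta_1\theta_2 }[/math] is a c-number: moving any generator through it produces two sign flips, e.g.

[math]\displaystyle{ \theta_3(\theta_1\theta_2) = -\theta_1\theta_3\theta_2 = (\theta_1\theta_2)\theta_3, }[/math]

while the a-numbers [math]\displaystyle{ \theta_1 }[/math] and [math]\displaystyle{ \theta_2 }[/math] anti-commute with each other.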

The definition of Grassmann numbers allows mathematical analysis to be performed, in analogy to analysis on complex numbers. That is, one may define superholomorphic functions, define derivatives, as well as defining integrals. Some of the basic concepts are developed in greater detail in the article on dual numbers.

As a general rule, it is usually easier to define the supersymmetric analogs of ordinary mathematical entities by working with Grassmann numbers with an infinite number of generators: most definitions become straightforward, and can be taken over from the corresponding bosonic definitions. For example, a single Grassmann number can be thought of as generating a one-dimensional space. A vector space, the m-dimensional superspace, then appears as the m-fold Cartesian product of these one-dimensional [math]\displaystyle{ \Lambda. }[/math] It can be shown that this is essentially equivalent to an algebra with m generators, but this requires work.[6]

Spinor space

The spinor space is defined as the Grassmann or exterior algebra [math]\displaystyle{ \textstyle{\bigwedge} W }[/math] of the space of Weyl spinors [math]\displaystyle{ W }[/math] (and anti-spinors [math]\displaystyle{ \overline{W} }[/math]), such that the wave functions of n fermions belong to [math]\displaystyle{ \textstyle{\bigwedge}^n W }[/math].

Integration

Integrals over Grassmann numbers are known as Berezin integrals (sometimes called Grassmann integrals). In order to reproduce the path integral for a Fermi field, the definition of Grassmann integration needs to have the following properties:

  • linearity [math]\displaystyle{ \int\,[a f(\theta) + b g(\theta) ]\, d\theta = a \int\,f(\theta)\, d\theta + b \int\,g(\theta)\, d\theta }[/math]
  • partial integration formula [math]\displaystyle{ \int \left[\frac{\partial}{\partial\theta}f(\theta)\right]\, d\theta = 0. }[/math]

Moreover, the Taylor expansion of any function [math]\displaystyle{ f(\theta)=A+B\theta }[/math] terminates after two terms because [math]\displaystyle{ \theta^2=0 }[/math], and quantum field theory additionally requires invariance under the shift of integration variables [math]\displaystyle{ \theta\to\theta+\eta }[/math] such that

[math]\displaystyle{ \int d\theta f(\theta)=\int d\theta (A+B\theta) \equiv \int d\theta((A+B\eta)+B\theta). }[/math]

The only linear function satisfying this condition is a constant (conventionally 1) times B, so Berezin defined[7]

[math]\displaystyle{ \int d\theta (A+B\theta) \equiv B. }[/math]

This results in the following rules for the integration of a Grassmann quantity:

  • [math]\displaystyle{ \int\, 1\, d\theta = 0 }[/math]
  • [math]\displaystyle{ \int\, \theta\, d\theta = 1. }[/math]

Thus we conclude that the operations of integration and differentiation of a Grassmann number are identical.
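
For example, for [math]\displaystyle{ f(\theta) = A + B\theta }[/math] both operations simply pick out the coefficient of [math]\displaystyle{ \theta }[/math]:

[math]\displaystyle{ \frac{\partial}{\partial\theta}(A + B\theta) = B = \int d\theta\,(A + B\theta). }[/math]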

In the path integral formulation of quantum field theory the following Gaussian integral of Grassmann quantities is needed for fermionic anticommuting fields, with A being an N × N matrix:

[math]\displaystyle{ \int \exp\left[-\theta^{\rm T}A\eta\right] \,d\theta\,d\eta = \det A }[/math].

Conventions and complex integration

An ambiguity arises when integrating over multiple Grassmann numbers. The convention that performs the innermost integral first yields

[math]\displaystyle{ \int d\theta \int d\eta\; \eta\theta = +1. }[/math]

Some authors also define complex conjugation similar to Hermitian conjugation of operators,[8]

[math]\displaystyle{ (\theta\eta)^*\equiv \eta^*\theta^* = -\theta^*\eta^*. }[/math]

With the additional convention

[math]\displaystyle{ \theta=\frac{\theta_1+i\theta_2}{\sqrt 2},\quad \theta^*=\frac{\theta_1-i\theta_2}{\sqrt 2}, }[/math]

we can treat θ and θ* as independent Grassmann numbers, and adopt

[math]\displaystyle{ \int d\theta^* d\theta\, (\theta\theta^*)=1. }[/math]

Thus a Gaussian integral evaluates to

[math]\displaystyle{ \int d\theta^* d\theta\, e^{-\theta^* b \theta} = \int d\theta^* d\theta\, (1 -\theta^* b \theta) = \int d\theta^* d\theta\, (1+\theta\theta^* b) = b }[/math]

and an extra factor of θθ* effectively introduces a factor of (1/b), just like an ordinary Gaussian,

[math]\displaystyle{ \int d\theta^* d\theta\, \theta\theta^*\, e^{-\theta^* b \theta} = 1. }[/math]

Diagonalizing B by a unitary transformation (under which the integration measure is invariant), we can evaluate a general Gaussian integral involving a Hermitian matrix B with eigenvalues [math]\displaystyle{ b_i }[/math],[8][9]

[math]\displaystyle{ \left(\prod_i \int d\theta_i^* \,d\theta_i \right) e^{-\theta_i^*B_{ij}\theta_j} = \left(\prod_i \int d\theta_i^* \, d\theta_i \right) e^{-\theta_i^*b_i\theta_i} = \prod_i b_i = \det B. }[/math]
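
This Gaussian formula can be checked directly for small N by expanding the exponential in a finite Grassmann algebra and applying the integration rules above. The following Python sketch is purely illustrative: the helper functions are ad hoc, and the Berezin integral is implemented as a left derivative, consistent with the conventions [math]\displaystyle{ \int d\theta\,\theta = 1 }[/math], [math]\displaystyle{ \int d\theta\,1 = 0 }[/math] and [math]\displaystyle{ \int d\theta^* d\theta\, (\theta\theta^*) = 1 }[/math] used above. It compares the result with the determinant computed by NumPy:

```python
# Sketch: check  (prod_i \int dtheta_i^* dtheta_i) exp(-theta^dagger B theta) = det B
# for a small random Hermitian B.  The Berezin integral is implemented as a left
# derivative, which reproduces the single-pair conventions stated in the article
# (int dtheta theta = 1, int dtheta 1 = 0, innermost integral performed first).
# All helper names are ad hoc; this is not a standard library API.
import math
import numpy as np

def multiply(a, b):
    """Product of two Grassmann elements, each stored as {index tuple: coefficient}."""
    out = {}
    for m1, c1 in a.items():
        for m2, c2 in b.items():
            idx = list(m1) + list(m2)
            if len(set(idx)) != len(idx):          # a repeated generator gives zero
                continue
            sign = 1
            for i in range(len(idx)):              # sort the indices, flipping the
                for j in range(len(idx) - 1 - i):  # sign for every anticommutation
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * c1 * c2
    return out

def add(a, b):
    out = dict(a)
    for m, c in b.items():
        out[m] = out.get(m, 0) + c
    return out

def scale(a, x):
    return {m: x * c for m, c in a.items()}

def berezin(a, g):
    """Berezin integral over generator g, acting as a left derivative."""
    out = {}
    for m, c in a.items():
        if g in m:
            p = m.index(g)                         # anticommute theta_g to the far left
            key = m[:p] + m[p + 1:]
            out[key] = out.get(key, 0) + (-1) ** p * c
    return out

N = 3
rng = np.random.default_rng(1)
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
B = A + A.conj().T                                 # a random Hermitian matrix

# Generators: theta_i -> index i, theta_i^* -> index N + i.
M = {}                                             # M = sum_ij B_ij theta_i^* theta_j
for i in range(N):
    for j in range(N):
        M = add(M, scale(multiply({(N + i,): 1.0}, {(j,): 1.0}), B[i, j]))

expo = {(): 1.0}                                   # exp(-M); (-M)^k vanishes for k > N
power = {(): 1.0}
for k in range(1, N + 1):
    power = multiply(power, scale(M, -1.0))
    expo = add(expo, scale(power, 1.0 / math.factorial(k)))

result = expo
for i in range(N):                                 # innermost integral first:
    result = berezin(result, i)                    # ... dtheta_i, then dtheta_i^*
    result = berezin(result, N + i)

print(np.isclose(result.get((), 0.0), np.linalg.det(B)))   # expected: True
```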

Matrix representations

Grassmann numbers can be represented by matrices. Consider, for example, the Grassmann algebra generated by two Grassmann numbers [math]\displaystyle{ \theta_1 }[/math] and [math]\displaystyle{ \theta_2 }[/math]. These Grassmann numbers can be represented by 4×4 matrices:

[math]\displaystyle{ \theta_1 = \begin{bmatrix} 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 \end{bmatrix}\qquad \theta_2 = \begin{bmatrix} 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0\\ 0&-1&0&0 \end{bmatrix}\qquad \theta_1\theta_2 = -\theta_2\theta_1 = \begin{bmatrix} 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0 \end{bmatrix}. }[/math]

In general, a Grassmann algebra on n generators can be represented by [math]\displaystyle{ 2^n \times 2^n }[/math] square matrices. Physically, these matrices can be thought of as raising operators acting on a Hilbert space of n identical fermions in the occupation number basis. Since the occupation number for each fermion is 0 or 1, there are [math]\displaystyle{ 2^n }[/math] possible basis states. Mathematically, these matrices can be interpreted as the linear operators corresponding to left exterior multiplication on the Grassmann algebra itself.
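
The explicit matrices above can be checked directly; a small NumPy sketch (purely illustrative) verifies that they satisfy [math]\displaystyle{ \theta_i^2 = 0 }[/math] and [math]\displaystyle{ \theta_1\theta_2 = -\theta_2\theta_1 }[/math]:

```python
# Purely illustrative check of the explicit 4x4 representation given above.
import numpy as np

theta1 = np.array([[0, 0, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 1, 0]])
theta2 = np.array([[0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [1, 0, 0, 0],
                   [0, -1, 0, 0]])

print(np.all(theta1 @ theta1 == 0))                   # theta_1^2 = 0
print(np.all(theta2 @ theta2 == 0))                   # theta_2^2 = 0
print(np.all(theta1 @ theta2 == -(theta2 @ theta1)))  # theta_1 theta_2 = -theta_2 theta_1
```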

Generalisations

There are some generalisations of Grassmann numbers. These require multiplication rules in terms of N variables such that:

[math]\displaystyle{ \theta_{i_1} \theta_{i_2}\cdots\theta_{i_N} + \theta_{i_N}\theta_{i_1}\theta_{i_2}\cdots +\cdots = 0 }[/math]

where the indices are summed over all permutations so that as a consequence:

[math]\displaystyle{ (\theta_i)^N = 0\, }[/math]

for some N > 2. These are useful for calculating hyperdeterminants of N-tensors where N > 2 and also for calculating discriminants of polynomials for powers larger than 2. There is also the limiting case as N tends to infinity in which case one can define analytic functions on the numbers. For example, in the case with N = 3 a single Grassmann number can be represented by the matrix:

[math]\displaystyle{ \theta = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}\qquad }[/math]

so that [math]\displaystyle{ \theta^3=0 }[/math]. For two Grassmann numbers the matrix would be of size 10×10.
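
Indeed, squaring and cubing this matrix gives

[math]\displaystyle{ \theta^2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad \theta^3 = 0. }[/math]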

For example, the rules for N = 3 with two Grassmann variables imply:

[math]\displaystyle{ \theta_1 (\theta_2)^2 + \theta_2 \theta_1 \theta_2 + (\theta_2)^2 \theta_1 = 0 }[/math]

so that it can be shown that

[math]\displaystyle{ \theta_1 (\theta_2)^2 = -\frac{1}{2} \theta_2 \theta_1 \theta_2 = (\theta_2)^2 \theta_1 }[/math]

and so

[math]\displaystyle{ (\theta_1)^2(\theta_2)^2 = (\theta_2)^2(\theta_1)^2 = \theta_1(\theta_2)^2 \theta_1 = \theta_2(\theta_1)^2 \theta_2 = -\frac{1}{2} \theta_1 \theta_2 \theta_1 \theta_2 = -\frac{1}{2} \theta_2 \theta_1 \theta_2 \theta_1, }[/math]

which gives a definition for the hyperdeterminant of a 2×2×2 tensor as

[math]\displaystyle{ (A^{abc}\theta_a\eta_b\psi_c)^4 = \det(A)(\theta_1)^2(\theta_2)^2(\eta_1)^2(\eta_2)^2(\psi_1)^2(\psi_2)^2. }[/math]

Notes

  1. DeWitt 1984, Chapter 1, page 1.
  2. DeWitt 1984, pp. 1–2.
  3. DeWitt 1984, p. 2.
  4. Rogers 2007a, Chapter 1 (available online)
  5. Rogers 2007, Chapter 1 and Chapter 8.
  6. Rogers 2007
  7. Berezin, F. A. (1966). The Method of Second Quantization. Pure and Applied Physics. 24. New York. https://www.sciencedirect.com/bookseries/pure-and-applied-physics/vol/24. 
  8. Peskin, Michael E.; Schroeder, Daniel V. (1995). An Introduction to Quantum Field Theory (5th (corrected) printing ed.). Reading, Mass.: Addison-Wesley. ISBN 9780201503975. https://archive.org/details/introductiontoqu0000pesk.
  9. Indices' typo present in source.

References