Multilinear form

From HandWiki

In abstract algebra and multilinear algebra, a multilinear form on a vector space [math]\displaystyle{ V }[/math] over a field [math]\displaystyle{ K }[/math] is a map

[math]\displaystyle{ f\colon V^k \to K }[/math]

that is separately [math]\displaystyle{ K }[/math]-linear in each of its [math]\displaystyle{ k }[/math] arguments.[1] More generally, one can define multilinear forms on a module over a commutative ring. The rest of this article, however, will only consider multilinear forms on finite-dimensional vector spaces.

A multilinear [math]\displaystyle{ k }[/math]-form on [math]\displaystyle{ V }[/math] over [math]\displaystyle{ \R }[/math] is called a (covariant) [math]\displaystyle{ \boldsymbol{k} }[/math]-tensor, and the vector space of such forms is usually denoted [math]\displaystyle{ \mathcal{T}^k(V) }[/math] or [math]\displaystyle{ \mathcal{L}^k(V) }[/math].[2]
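The defining property of separate linearity can be checked numerically. The following Python sketch (the matrix A and the test vectors are arbitrary choices, not taken from the text) verifies it for the bilinear form f(u, v) = uᵀAv on R³:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # arbitrary matrix; any A induces a 2-form

def f(u, v):
    # f(u, v) = u^T A v is linear in u for fixed v, and in v for fixed u.
    return u @ A @ v

u, u2, v, v2 = rng.standard_normal((4, 3))
a, b = 2.0, -0.5
# Separate linearity in the first and in the second argument.
assert np.isclose(f(a*u + b*u2, v), a*f(u, v) + b*f(u2, v))
assert np.isclose(f(u, a*v + b*v2), a*f(u, v) + b*f(u, v2))
```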

Tensor product

Given a [math]\displaystyle{ k }[/math]-tensor [math]\displaystyle{ f\in\mathcal{T}^k(V) }[/math] and an [math]\displaystyle{ \ell }[/math]-tensor [math]\displaystyle{ g\in\mathcal{T}^\ell(V) }[/math], a product [math]\displaystyle{ f\otimes g\in\mathcal{T}^{k+\ell}(V) }[/math], known as the tensor product, can be defined by the property

[math]\displaystyle{ (f\otimes g)(v_1,\ldots,v_k,v_{k+1},\ldots, v_{k+\ell})=f(v_1,\ldots,v_k)g(v_{k+1},\ldots, v_{k+\ell}), }[/math]

for all [math]\displaystyle{ v_1,\ldots,v_{k+\ell}\in V }[/math]. The tensor product of multilinear forms is not commutative; however, it is bilinear and associative:

[math]\displaystyle{ f\otimes(ag_1+bg_2)=a(f\otimes g_1)+b(f\otimes g_2) }[/math], [math]\displaystyle{ (af_1+bf_2)\otimes g=a(f_1\otimes g)+b(f_2\otimes g), }[/math]

and

[math]\displaystyle{ (f\otimes g)\otimes h=f\otimes (g\otimes h). }[/math]

If [math]\displaystyle{ (v_1,\ldots, v_n) }[/math] forms a basis for an [math]\displaystyle{ n }[/math]-dimensional vector space [math]\displaystyle{ V }[/math] and [math]\displaystyle{ (\phi^1,\ldots,\phi^n) }[/math] is the corresponding dual basis for the dual space [math]\displaystyle{ V^*=\mathcal{T}^1(V) }[/math], then the products [math]\displaystyle{ \phi^{i_1}\otimes\cdots\otimes\phi^{i_k} }[/math], with [math]\displaystyle{ 1\le i_1,\ldots,i_k\le n }[/math] form a basis for [math]\displaystyle{ \mathcal{T}^k(V) }[/math]. Consequently, [math]\displaystyle{ \mathcal{T}^k(V) }[/math] has dimension [math]\displaystyle{ n^k }[/math].
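The basis claim can be spot-checked computationally. In this sketch (which represents tensors simply as Python functions, an illustrative encoding not from the text), the n² products φⁱ ⊗ φʲ on R³ evaluate to 1 exactly on the matching pair of standard basis vectors and to 0 elsewhere, as expected of a basis of 𝒯²(V):

```python
import numpy as np
from itertools import product

n = 3
e = np.eye(n)                  # standard basis vectors e_1, ..., e_n as rows

def phi(i):
    # Dual basis covector: phi^i(v) = v^i, the i-th coordinate of v.
    return lambda v: v[i]

def tensor(f, k, g):
    # (f ⊗ g)(v_1,...,v_{k+l}) = f(v_1,...,v_k) * g(v_{k+1},...,v_{k+l}),
    # where f takes k vector arguments.
    return lambda *vs: f(*vs[:k]) * g(*vs[k:])

# The n^2 = 9 products phi^i ⊗ phi^j, indexed by pairs (i, j).
basis = {(i, j): tensor(phi(i), 1, phi(j))
         for i, j in product(range(n), repeat=2)}
assert len(basis) == n**2      # dimension n^k for k = 2

for (i, j), T in basis.items():
    for r, s in product(range(n), repeat=2):
        # T(e_r, e_s) is 1 precisely when (i, j) == (r, s).
        assert T(e[r], e[s]) == (1.0 if (i, j) == (r, s) else 0.0)
```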

Examples

Bilinear forms

Main page: Bilinear form

If [math]\displaystyle{ k=2 }[/math], [math]\displaystyle{ f:V\times V\to K }[/math] is referred to as a bilinear form. A familiar and important example of a (symmetric) bilinear form is the standard inner product (dot product) of vectors.

Alternating multilinear forms

Main page: Alternating multilinear map

An important class of multilinear forms are the alternating multilinear forms, which have the additional property that[3]

[math]\displaystyle{ f(x_{\sigma(1)},\ldots, x_{\sigma(k)}) = \sgn(\sigma)f(x_1,\ldots, x_k), }[/math]

where [math]\displaystyle{ \sigma:\mathbf{N}_k\to\mathbf{N}_k }[/math] is a permutation of [math]\displaystyle{ \mathbf{N}_k=\{1,\ldots,k\} }[/math] and [math]\displaystyle{ \sgn(\sigma) }[/math] denotes its sign (+1 if even, –1 if odd). As a consequence, alternating multilinear forms are antisymmetric with respect to the swapping of any two arguments, i.e., under the transposition [math]\displaystyle{ \sigma }[/math] with [math]\displaystyle{ \sigma(p)=q }[/math], [math]\displaystyle{ \sigma(q)=p }[/math], and [math]\displaystyle{ \sigma(i)=i }[/math] for [math]\displaystyle{ i\neq p,q }[/math]:

[math]\displaystyle{ f(x_1,\ldots, x_p,\ldots, x_q,\ldots, x_k) = -f(x_1,\ldots, x_q,\ldots, x_p,\ldots, x_k). }[/math]

With the additional hypothesis that the characteristic of the field [math]\displaystyle{ K }[/math] is not 2, setting [math]\displaystyle{ x_p=x_q=x }[/math] implies as a corollary that [math]\displaystyle{ f(x_1,\ldots, x,\ldots, x,\ldots, x_k) = 0 }[/math]; that is, the form has a value of 0 whenever two of its arguments are equal. Note, however, that some authors[4] use this last condition as the defining property of alternating forms. This definition implies the property given at the beginning of the section, but as noted above, the converse implication holds only when [math]\displaystyle{ \operatorname{char}(K)\neq 2 }[/math].

An alternating multilinear [math]\displaystyle{ k }[/math]-form on [math]\displaystyle{ V }[/math] over [math]\displaystyle{ \R }[/math] is called a multicovector of degree [math]\displaystyle{ \boldsymbol{k} }[/math] or [math]\displaystyle{ \boldsymbol{k} }[/math]-covector, and the vector space of such alternating forms, a subspace of [math]\displaystyle{ \mathcal{T}^k(V) }[/math], is generally denoted [math]\displaystyle{ \mathcal{A}^k(V) }[/math], or, using the notation for the isomorphic kth exterior power of [math]\displaystyle{ V^* }[/math] (the dual space of [math]\displaystyle{ V }[/math]), [math]\displaystyle{ \bigwedge^k V^* }[/math].[5] Note that linear functionals (multilinear 1-forms over [math]\displaystyle{ \R }[/math]) are trivially alternating, so that [math]\displaystyle{ \mathcal{A}^1(V)=\mathcal{T}^1(V)=V^* }[/math], while, by convention, 0-forms are defined to be scalars: [math]\displaystyle{ \mathcal{A}^0(V)=\mathcal{T}^0(V)=\R }[/math].

The determinant on [math]\displaystyle{ n\times n }[/math] matrices, viewed as an [math]\displaystyle{ n }[/math]-argument function of the column vectors, is an important example of an alternating multilinear form.
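The alternating and multilinear properties of the determinant are easy to confirm numerically. A short sketch (the sample vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal((3, 3))    # three arbitrary vectors in R^3 (as rows)

# The determinant as a function of three column vectors.
det = lambda a, b, c: np.linalg.det(np.column_stack([a, b, c]))

# Swapping two arguments flips the sign (antisymmetry)...
assert np.isclose(det(v[0], v[1], v[2]), -det(v[1], v[0], v[2]))
# ...and a repeated argument forces the value 0.
assert np.isclose(det(v[0], v[0], v[2]), 0.0)
# Multilinearity, checked in the first argument:
a, b = 1.5, -2.0
assert np.isclose(det(a*v[0] + b*v[1], v[1], v[2]),
                  a*det(v[0], v[1], v[2]) + b*det(v[1], v[1], v[2]))
```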

Exterior product

The tensor product of alternating multilinear forms is, in general, no longer alternating. However, by summing over all permutations of the tensor product, taking into account the parity of each term, the exterior product ([math]\displaystyle{ \wedge }[/math], also known as the wedge product) of multicovectors can be defined, so that if [math]\displaystyle{ f\in\mathcal{A}^k(V) }[/math] and [math]\displaystyle{ g\in\mathcal{A}^\ell(V) }[/math], then [math]\displaystyle{ f\wedge g\in\mathcal{A}^{k+\ell}(V) }[/math]:

[math]\displaystyle{ (f\wedge g)(v_1,\ldots, v_{k+\ell})=\frac{1}{k!\ell!}\sum_{\sigma\in S_{k+\ell}} (\sgn(\sigma)) f(v_{\sigma(1)}, \ldots, v_{\sigma(k)})g(v_{\sigma(k+1)} ,\ldots,v_{\sigma(k+\ell)}), }[/math]

where the sum is taken over the set of all permutations over [math]\displaystyle{ k+\ell }[/math] elements, [math]\displaystyle{ S_{k+\ell} }[/math]. The exterior product is bilinear, associative, and graded-alternating: if [math]\displaystyle{ f\in\mathcal{A}^k(V) }[/math] and [math]\displaystyle{ g\in\mathcal{A}^\ell(V) }[/math] then [math]\displaystyle{ f\wedge g=(-1)^{k\ell}g\wedge f }[/math].

Given a basis [math]\displaystyle{ (v_1,\ldots, v_n) }[/math] for [math]\displaystyle{ V }[/math] and dual basis [math]\displaystyle{ (\phi^1,\ldots,\phi^n) }[/math] for [math]\displaystyle{ V^*=\mathcal{A}^1(V) }[/math], the exterior products [math]\displaystyle{ \phi^{i_1}\wedge\cdots\wedge\phi^{i_k} }[/math], with [math]\displaystyle{ 1\leq i_1\lt \cdots\lt i_k\leq n }[/math] form a basis for [math]\displaystyle{ \mathcal{A}^k(V) }[/math]. Hence, the dimension of [math]\displaystyle{ \mathcal{A}^k(V) }[/math] for n-dimensional [math]\displaystyle{ V }[/math] is [math]\displaystyle{ \tbinom{n}{k}=\frac{n!}{(n-k)!\,k!} }[/math].
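The permutation-sum formula for the exterior product can be implemented directly. This Python sketch (an illustrative implementation, with forms encoded as plain functions) evaluates the sum over S_{k+ℓ} with the 1/(k!ℓ!) normalization and checks it against the basic 1-forms on R³:

```python
import math
from itertools import permutations

def sign(p):
    # Parity of a permutation (given as a tuple), via its inversion count.
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

def wedge(f, k, g, l):
    # (f ∧ g)(v_1,...,v_{k+l}) = (1/(k! l!)) Σ_σ sgn(σ) f(v_σ(1),...) g(...).
    def h(*vs):
        total = 0.0
        for p in permutations(range(k + l)):
            w = [vs[i] for i in p]
            total += sign(p) * f(*w[:k]) * g(*w[k:])
        return total / (math.factorial(k) * math.factorial(l))
    return h

# Basic 1-covectors dx^1, dx^2, dx^3 on R^3 and the standard basis.
dx = [lambda v, i=i: v[i] for i in range(3)]
e = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

w12 = wedge(dx[0], 1, dx[1], 1)      # dx^1 ∧ dx^2
assert w12(e[0], e[1]) == 1.0        # pairs with the matching basis vectors
assert w12(e[1], e[0]) == -1.0       # alternating in its arguments
# Graded-alternating rule for two 1-forms (k = l = 1): f ∧ g = -(g ∧ f).
w21 = wedge(dx[1], 1, dx[0], 1)
assert w21(e[0], e[1]) == -w12(e[0], e[1])
```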

Differential forms

Main page: Differential form

Differential forms are mathematical objects constructed via tangent spaces and multilinear forms that behave, in many ways, like differentials in the classical sense. Though conceptually and computationally useful, differentials are founded on ill-defined notions of infinitesimal quantities developed early in the history of calculus. Differential forms provide a mathematically rigorous and precise framework to modernize this long-standing idea. Differential forms are especially useful in multivariable calculus (analysis) and differential geometry because they possess transformation properties that allow them to be integrated on curves, surfaces, and their higher-dimensional analogues (differentiable manifolds). One far-reaching application is the modern statement of Stokes' theorem, a sweeping generalization of the fundamental theorem of calculus to higher dimensions.

The synopsis below is primarily based on Spivak (1965)[6] and Tu (2011).[3]

Definition of differential k-forms and construction of 1-forms

To define differential forms on open subsets [math]\displaystyle{ U\subset\R^n }[/math], we first need the notion of the tangent space of [math]\displaystyle{ \R^n }[/math] at [math]\displaystyle{ p }[/math], usually denoted [math]\displaystyle{ T_p\R^n }[/math] or [math]\displaystyle{ \R^n_p }[/math]. The vector space [math]\displaystyle{ \R^n_p }[/math] can be defined most conveniently as the set of elements [math]\displaystyle{ v_p }[/math] ([math]\displaystyle{ v\in\R^n }[/math], with [math]\displaystyle{ p\in\R^n }[/math] fixed) with vector addition and scalar multiplication defined by [math]\displaystyle{ v_p+w_p:=(v+w)_p }[/math] and [math]\displaystyle{ a\cdot(v_p):=(a\cdot v)_p }[/math], respectively. Moreover, if [math]\displaystyle{ (e_1,\ldots,e_n) }[/math] is the standard basis for [math]\displaystyle{ \R^n }[/math], then [math]\displaystyle{ ((e_1)_p,\ldots,(e_n)_p) }[/math] is the analogous standard basis for [math]\displaystyle{ \R^n_p }[/math]. In other words, each tangent space [math]\displaystyle{ \R^n_p }[/math] can simply be regarded as a copy of [math]\displaystyle{ \R^n }[/math] (a set of tangent vectors) based at the point [math]\displaystyle{ p }[/math]. The collection (disjoint union) of tangent spaces of [math]\displaystyle{ \R^n }[/math] at all [math]\displaystyle{ p\in\R^n }[/math] is known as the tangent bundle of [math]\displaystyle{ \R^n }[/math] and is usually denoted [math]\displaystyle{ T\R^n:=\bigcup_{p\in\R^n}\R^n_p }[/math]. While the definition given here provides a simple description of the tangent space of [math]\displaystyle{ \R^n }[/math], there are other, more sophisticated constructions that are better suited for defining the tangent spaces of smooth manifolds in general (see the article on tangent spaces for details).

A differential [math]\displaystyle{ \boldsymbol{k} }[/math]-form on [math]\displaystyle{ U\subset\R^n }[/math] is defined as a function [math]\displaystyle{ \omega }[/math] that assigns to every [math]\displaystyle{ p\in U }[/math] a [math]\displaystyle{ k }[/math]-covector on the tangent space of [math]\displaystyle{ \R^n }[/math] at [math]\displaystyle{ p }[/math], usually denoted [math]\displaystyle{ \omega_p:=\omega(p)\in\mathcal{A}^k(\R^n_p) }[/math]. In brief, a differential [math]\displaystyle{ k }[/math]-form is a [math]\displaystyle{ k }[/math]-covector field. The space of [math]\displaystyle{ k }[/math]-forms on [math]\displaystyle{ U }[/math] is usually denoted [math]\displaystyle{ \Omega^k(U) }[/math]; thus if [math]\displaystyle{ \omega }[/math] is a differential [math]\displaystyle{ k }[/math]-form, we write [math]\displaystyle{ \omega\in\Omega^k(U) }[/math]. By convention, a continuous function on [math]\displaystyle{ U }[/math] is a differential 0-form: [math]\displaystyle{ f\in C^0(U)=\Omega^0(U) }[/math].

We first construct differential 1-forms from 0-forms and deduce some of their basic properties. To simplify the discussion below, we will only consider smooth differential forms constructed from smooth ([math]\displaystyle{ C^\infty }[/math]) functions. Let [math]\displaystyle{ f:\R^n\to\R }[/math] be a smooth function. We define the 1-form [math]\displaystyle{ df }[/math] on [math]\displaystyle{ U }[/math] for [math]\displaystyle{ p\in U }[/math] and [math]\displaystyle{ v_p\in\R^n_p }[/math] by [math]\displaystyle{ (df)_p(v_p):=Df|_p(v) }[/math], where [math]\displaystyle{ Df|_p:\R^n\to\R }[/math] is the total derivative of [math]\displaystyle{ f }[/math] at [math]\displaystyle{ p }[/math]. (Recall that the total derivative is a linear transformation.) Of particular interest are the projection maps (also known as coordinate functions) [math]\displaystyle{ \pi^i:\R^n\to\R }[/math], defined by [math]\displaystyle{ x\mapsto x^i }[/math], where [math]\displaystyle{ x^i }[/math] is the ith standard coordinate of [math]\displaystyle{ x\in\R^n }[/math]. The 1-forms [math]\displaystyle{ d\pi^i }[/math] are known as the basic 1-forms; they are conventionally denoted [math]\displaystyle{ dx^i }[/math]. If the standard coordinates of [math]\displaystyle{ v_p\in\R^n_p }[/math] are [math]\displaystyle{ (v^1,\ldots, v^n) }[/math], then application of the definition of [math]\displaystyle{ df }[/math] yields [math]\displaystyle{ dx^i_p(v_p)=v^i }[/math], so that [math]\displaystyle{ dx^i_p((e_j)_p)=\delta_j^i }[/math], where [math]\displaystyle{ \delta^i_j }[/math] is the Kronecker delta.[7] Thus, as the dual of the standard basis for [math]\displaystyle{ \R^n_p }[/math], [math]\displaystyle{ (dx^1_p,\ldots,dx^n_p) }[/math] forms a basis for [math]\displaystyle{ \mathcal{A}^1(\R^n_p)=(\R^n_p)^* }[/math]. 
As a consequence, if [math]\displaystyle{ \omega }[/math] is a 1-form on [math]\displaystyle{ U }[/math], then [math]\displaystyle{ \omega }[/math] can be written as [math]\displaystyle{ \sum a_i\,dx^i }[/math] for smooth functions [math]\displaystyle{ a_i:U\to\R }[/math]. Furthermore, we can derive an expression for [math]\displaystyle{ df }[/math] that coincides with the classical expression for a total differential:

[math]\displaystyle{ df=\sum_{i=1}^n D_i f\; dx^i={\partial f\over\partial x^1} \, dx^1+\cdots+{\partial f\over\partial x^n} \, dx^n. }[/math]
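The expression for the total differential lends itself to a symbolic check. A sketch using SymPy (the sample function f = x₁²x₂ + sin x₃ is an arbitrary choice, not from the text):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 * x2 + sp.sin(x3)       # an arbitrary smooth sample function

# The components D_i f of df = D_1 f dx^1 + D_2 f dx^2 + D_3 f dx^3.
df = [sp.diff(f, v) for v in (x1, x2, x3)]
assert df == [2*x1*x2, x1**2, sp.cos(x3)]

# (df)_p(v_p) = Df|_p(v): pair each component with the coordinates of v.
p = {x1: 1, x2: 2, x3: 0}
v = (1, 1, 1)
assert sum(c.subs(p) * vi for c, vi in zip(df, v)) == 6   # 4 + 1 + 1
```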

[Comments on notation: In this article, we follow the convention from tensor calculus and differential geometry in which multivectors and multicovectors are written with lower and upper indices, respectively. Since differential forms are multicovector fields, upper indices are employed to index them.[3] The opposite rule applies to the components of multivectors and multicovectors, which instead are written with upper and lower indices, respectively. For instance, we represent the standard coordinates of vector [math]\displaystyle{ v\in\R^n }[/math] as [math]\displaystyle{ (v^1,\ldots,v^n) }[/math], so that [math]\displaystyle{ v=\sum_{i=1}^n v^ie_i }[/math] in terms of the standard basis [math]\displaystyle{ (e_1,\ldots,e_n) }[/math]. In addition, superscripts appearing in the denominator of an expression (as in [math]\displaystyle{ \frac{\partial f}{\partial x^i} }[/math]) are treated as lower indices in this convention. When indices are applied and interpreted in this manner, the number of upper indices minus the number of lower indices in each term of an expression is conserved, both within the sum and across an equal sign, a feature that serves as a useful mnemonic device and helps pinpoint errors made during manual computation.]

Basic operations on differential k-forms

The exterior product ([math]\displaystyle{ \wedge }[/math]) and exterior derivative ([math]\displaystyle{ d }[/math]) are two fundamental operations on differential forms. The exterior product of a [math]\displaystyle{ k }[/math]-form and an [math]\displaystyle{ \ell }[/math]-form is a [math]\displaystyle{ (k+\ell) }[/math]-form, while the exterior derivative of a [math]\displaystyle{ k }[/math]-form is a [math]\displaystyle{ (k+1) }[/math]-form. Thus, both operations generate differential forms of higher degree from those of lower degree.

The exterior product [math]\displaystyle{ \wedge:\Omega^k(U)\times\Omega^\ell(U)\to\Omega^{k+\ell}(U) }[/math] of differential forms is a special case of the exterior product of multicovectors in general (see above). As is true of the exterior product in general, the exterior product of differential forms is bilinear, associative, and graded-alternating.

More concretely, if [math]\displaystyle{ \omega=a_{i_1\ldots i_k} \, dx^{i_1}\wedge\cdots\wedge dx^{i_k} }[/math] and [math]\displaystyle{ \eta=a_{j_1\ldots j_{\ell}} \, dx^{j_1}\wedge\cdots\wedge dx^{j_{\ell}} }[/math], then

[math]\displaystyle{ \omega\wedge\eta=a_{i_1\ldots i_k}a_{j_1\ldots j_\ell} \, dx^{i_1}\wedge\cdots\wedge dx^{i_k}\wedge dx^{j_1} \wedge \cdots\wedge dx^{j_\ell}. }[/math]

Furthermore, for any set of indices [math]\displaystyle{ \{\alpha_1,\ldots,\alpha_m\} }[/math],

[math]\displaystyle{ dx^{\alpha_1} \wedge\cdots\wedge dx^{\alpha_p} \wedge \cdots \wedge dx^{\alpha_q} \wedge\cdots\wedge dx^{\alpha_m} = -dx^{\alpha_1} \wedge\cdots\wedge dx^{\alpha_q} \wedge \cdots\wedge dx^{\alpha_p}\wedge\cdots\wedge dx^{\alpha_m}. }[/math]

If [math]\displaystyle{ I=\{i_1,\ldots,i_k\} }[/math], [math]\displaystyle{ J=\{j_1,\ldots,j_{\ell}\} }[/math], and [math]\displaystyle{ I\cap J=\varnothing }[/math], then the indices of [math]\displaystyle{ \omega\wedge\eta }[/math] can be arranged in ascending order by a (finite) sequence of such swaps. Since [math]\displaystyle{ dx^\alpha\wedge dx^\alpha=0 }[/math], [math]\displaystyle{ I\cap J\neq\varnothing }[/math] implies that [math]\displaystyle{ \omega\wedge\eta=0 }[/math]. Finally, as a consequence of bilinearity, if [math]\displaystyle{ \omega }[/math] and [math]\displaystyle{ \eta }[/math] are the sums of several terms, their exterior product obeys distributivity with respect to each of these terms.

The collection of the exterior products of basic 1-forms [math]\displaystyle{ \{dx^{i_1}\wedge\cdots\wedge dx^{i_k} \mid 1\leq i_1\lt \cdots\lt i_k\leq n\} }[/math] constitutes a basis for the space of differential k-forms. Thus, any [math]\displaystyle{ \omega\in\Omega^k(U) }[/math] can be written in the form

[math]\displaystyle{ \omega=\sum_{i_1\lt \cdots\lt i_k} a_{i_1\ldots i_k} \, dx^{i_1}\wedge\cdots\wedge dx^{i_k}, \qquad (*) }[/math]

where [math]\displaystyle{ a_{i_1\ldots i_k}:U\to\R }[/math] are smooth functions. With each set of indices [math]\displaystyle{ \{i_1,\ldots,i_k\} }[/math] placed in ascending order, (*) is said to be the standard presentation of [math]\displaystyle{ \omega }[/math].

In the previous section, the 1-form [math]\displaystyle{ df }[/math] was defined by taking the exterior derivative of the 0-form (continuous function) [math]\displaystyle{ f }[/math]. We now extend this by defining the exterior derivative operator [math]\displaystyle{ d:\Omega^k(U)\to\Omega^{k+1}(U) }[/math] for [math]\displaystyle{ k\geq1 }[/math]. If the standard presentation of [math]\displaystyle{ k }[/math]-form [math]\displaystyle{ \omega }[/math] is given by (*), the [math]\displaystyle{ (k+1) }[/math]-form [math]\displaystyle{ d\omega }[/math] is defined by

[math]\displaystyle{ d\omega:=\sum_{i_1\lt \ldots \lt i_k} da_{i_1\ldots i_k}\wedge dx^{i_1}\wedge\cdots\wedge dx^{i_k}. }[/math]

A property of [math]\displaystyle{ d }[/math] that holds for all smooth forms is that the second exterior derivative of any [math]\displaystyle{ \omega }[/math] vanishes identically: [math]\displaystyle{ d^2\omega=d(d\omega)\equiv 0 }[/math]. This can be established directly from the definition of [math]\displaystyle{ d }[/math] and the equality of mixed second-order partial derivatives of [math]\displaystyle{ C^2 }[/math] functions (see the article on closed and exact forms for details).
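The identity d² ≡ 0 can be verified symbolically. The sketch below uses an illustrative encoding not found in the source: a form in standard presentation is stored as a dictionary mapping increasing index tuples to SymPy coefficients, and the exterior derivative follows the definition above, with the sign arising from sorting dx^i into position:

```python
import sympy as sp

x = sp.symbols('x1:4')            # coordinates x1, x2, x3 on R^3

def d(omega):
    # omega: {increasing index tuple I: coefficient a_I}; computes
    # d(a_I dx^I) = Σ_i (∂a_I/∂x^i) dx^i ∧ dx^I in standard presentation.
    out = {}
    for I, a in omega.items():
        for i, xi in enumerate(x):
            if i in I:
                continue          # dx^i ∧ dx^I = 0 when i already occurs in I
            J = tuple(sorted(I + (i,)))
            s = (-1) ** J.index(i)  # sign from moving dx^i into sorted position
            out[J] = out.get(J, 0) + s * sp.diff(a, xi)
    return {J: sp.expand(c) for J, c in out.items()}

f = sp.sin(x[0] * x[1]) + x[2]**3   # an arbitrary smooth 0-form
ddf = d(d({(): f}))                 # apply the exterior derivative twice
# Mixed second partials cancel in pairs, so every coefficient vanishes.
assert all(c == 0 for c in ddf.values())
```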

Integration of differential forms and Stokes' theorem for chains

To integrate a differential form over a parameterized domain, we first need to introduce the notion of the pullback of a differential form. Roughly speaking, when a differential form is integrated, applying the pullback transforms it in a way that correctly accounts for a change of coordinates.

Given a differentiable function [math]\displaystyle{ f:\R^n\to\R^m }[/math] and [math]\displaystyle{ k }[/math]-form [math]\displaystyle{ \eta\in\Omega^k(\R^m) }[/math], we call [math]\displaystyle{ f^*\eta\in\Omega^k(\R^n) }[/math] the pullback of [math]\displaystyle{ \eta }[/math] by [math]\displaystyle{ f }[/math] and define it as the [math]\displaystyle{ k }[/math]-form such that

[math]\displaystyle{ (f^*\eta)_p(v_{1p},\ldots, v_{kp}):=\eta_{f(p)}(f_*(v_{1p}),\ldots,f_*(v_{kp})), }[/math]

for [math]\displaystyle{ v_{1p},\ldots,v_{kp}\in\R^n_p }[/math], where [math]\displaystyle{ f_*:\R^n_p\to\R^m_{f(p)} }[/math] is the map [math]\displaystyle{ v_p\mapsto(Df|_p(v))_{f(p)} }[/math].
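The pullback definition can be tested numerically on a familiar case: pulling back the area form dx ∧ dy under the polar-coordinate map yields r dr ∧ dθ. In this sketch (an illustrative computation; the pushforward f_* is estimated by a finite-difference Jacobian rather than computed exactly):

```python
import numpy as np

def pullback_2form(eta, f, p, v1, v2, h=1e-6):
    # (f* η)_p(v1, v2) = η_{f(p)}(f_* v1, f_* v2), where f_* acts by the
    # Jacobian of f at p (estimated here by central differences).
    J = np.column_stack([(f(p + h * e) - f(p - h * e)) / (2 * h)
                         for e in np.eye(len(p))])
    return eta(J @ v1, J @ v2)

# η = dx ∧ dy on R^2, evaluated as a 2x2 determinant of its arguments.
eta = lambda a, b: a[0] * b[1] - a[1] * b[0]

# Polar-coordinate map (r, θ) ↦ (r cos θ, r sin θ).
polar = lambda q: np.array([q[0] * np.cos(q[1]), q[0] * np.sin(q[1])])

p = np.array([2.0, 0.7])          # the point (r, θ) = (2, 0.7)
e_r, e_th = np.eye(2)
# The pullback is r dr ∧ dθ, so its value on (e_r, e_θ) equals r = 2.
assert abs(pullback_2form(eta, polar, p, e_r, e_th) - 2.0) < 1e-5
```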

If [math]\displaystyle{ \omega=f\, dx^1\wedge\cdots\wedge dx^n }[/math] is an [math]\displaystyle{ n }[/math]-form on [math]\displaystyle{ \R^n }[/math] (i.e., [math]\displaystyle{ \omega\in\Omega^n(\R^n) }[/math]), we define its integral over the unit [math]\displaystyle{ n }[/math]-cell as the iterated Riemann integral of [math]\displaystyle{ f }[/math]:

[math]\displaystyle{ \int_{[0,1]^n} \omega = \int_{[0,1]^n} f\,dx^1\wedge\cdots \wedge dx^n:= \int_0^1\cdots\int_0^1 f\, dx^1\cdots dx^n. }[/math]

Next, we consider a domain of integration parameterized by a differentiable function [math]\displaystyle{ c:[0,1]^n\to A\subset\R^m }[/math], known as an n-cube. To define the integral of [math]\displaystyle{ \omega\in\Omega^n(A) }[/math] over [math]\displaystyle{ c }[/math], we "pull back" from [math]\displaystyle{ A }[/math] to the unit n-cell:

[math]\displaystyle{ \int_c \omega :=\int_{[0,1]^n}c^*\omega. }[/math]

To integrate over more general domains, we define an [math]\displaystyle{ \boldsymbol{n} }[/math]-chain [math]\displaystyle{ C=\sum_i n_ic_i }[/math] as the formal sum of [math]\displaystyle{ n }[/math]-cubes and set

[math]\displaystyle{ \int_C \omega :=\sum_i n_i\int_{c_i} \omega. }[/math]

An appropriate definition of the [math]\displaystyle{ (n-1) }[/math]-chain [math]\displaystyle{ \partial C }[/math], known as the boundary of [math]\displaystyle{ C }[/math],[8] allows us to state the celebrated Stokes' theorem (Stokes–Cartan theorem) for chains in a subset of [math]\displaystyle{ \R^m }[/math]:

If [math]\displaystyle{ \omega }[/math] is a smooth [math]\displaystyle{ (n-1) }[/math]-form on an open set [math]\displaystyle{ A\subset\R^m }[/math] and [math]\displaystyle{ C }[/math] is a smooth [math]\displaystyle{ n }[/math]-chain in [math]\displaystyle{ A }[/math], then [math]\displaystyle{ \int_C d\omega=\int_{\partial C} \omega }[/math].
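A minimal numerical spot-check of the theorem, assuming the 1-form ω = xy dy on the identity 2-cube C = [0,1]² (an arbitrary example, for which dω = y dx ∧ dy): ω vanishes on the horizontal edges (where dy = 0) and on the edge x = 0, so only the edge x = 1 contributes to the boundary integral.

```python
import numpy as np

N = 1000
t = (np.arange(N) + 0.5) / N          # midpoint-rule nodes on [0, 1]

# Left side: ∫_C dω = ∫∫_{[0,1]^2} y dx dy, by a midpoint product rule.
Y = np.meshgrid(t, t)[1]
lhs = np.mean(Y)                       # ≈ 1/2

# Right side: ∫_{∂C} ω; only the edge x = 1, traversed upward, contributes:
# ∫_0^1 (1)(y) dy, again by the midpoint rule.
rhs = np.mean(1.0 * t)                 # ≈ 1/2

assert abs(lhs - rhs) < 1e-9           # the two sides agree, as Stokes asserts
```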

Using more sophisticated machinery (e.g., germs and derivations), the tangent space [math]\displaystyle{ T_p M }[/math] of any smooth manifold [math]\displaystyle{ M }[/math] (not necessarily embedded in [math]\displaystyle{ \R^m }[/math]) can be defined. Analogously, a differential form [math]\displaystyle{ \omega\in\Omega^k(M) }[/math] on a general smooth manifold is a map [math]\displaystyle{ \omega:p\in M\mapsto\omega_p\in \mathcal{A}^k(T_pM) }[/math]. Stokes' theorem can be further generalized to arbitrary smooth manifolds-with-boundary and even certain "rough" domains (see the article on Stokes' theorem for details).

References

  1. Weisstein, Eric W. "Multilinear Form". http://mathworld.wolfram.com/MultilinearForm.html.
  2. Many authors use the opposite convention, writing [math]\displaystyle{ \mathcal{T}^k(V) }[/math] to denote the contravariant k-tensors on [math]\displaystyle{ V }[/math] and [math]\displaystyle{ \mathcal{T}_k(V) }[/math] to denote the covariant k-tensors on [math]\displaystyle{ V }[/math].
  3. Tu, Loring W. (2011). An Introduction to Manifolds (2nd ed.). Springer. pp. 22–23. ISBN 978-1-4419-7399-3. https://archive.org/details/introductiontoma00lwtu_506.
  4. Halmos, Paul R. (1958). Finite-Dimensional Vector Spaces (2nd ed.). Van Nostrand. pp. 50. ISBN 0-387-90093-4. 
  5. Spivak uses [math]\displaystyle{ \Omega^k(V) }[/math] for the space of [math]\displaystyle{ k }[/math]-covectors on [math]\displaystyle{ V }[/math]. However, this notation is more commonly reserved for the space of differential [math]\displaystyle{ k }[/math]-forms on [math]\displaystyle{ V }[/math]. In this article, we use [math]\displaystyle{ \Omega^k(V) }[/math] to mean the latter.
  6. Spivak, Michael (1965). Calculus on Manifolds. W. A. Benjamin, Inc. pp. 75–146. ISBN 0805390219. https://archive.org/details/SpivakM.CalculusOnManifoldsPerseus2006Reprint.
  7. The Kronecker delta is usually denoted by [math]\displaystyle{ \delta_{ij}=\delta(i,j) }[/math] and defined as [math]\displaystyle{ \delta:X\times X\to\{0,1\},\ (i,j)\mapsto \begin{cases} 1, & i=j \\ 0, & i\neq j \end{cases} }[/math]. Here, the notation [math]\displaystyle{ \delta^i_j }[/math] is used to conform to the tensor calculus convention on the use of upper and lower indices.
  8. The formal definition of the boundary of a chain is somewhat involved and is omitted here (see Spivak 1965, pp. 98–99 for a discussion). Intuitively, if [math]\displaystyle{ C }[/math] maps to a square, then [math]\displaystyle{ \partial C }[/math] is a linear combination of functions that maps to its edges in a counterclockwise manner. The boundary of a chain is distinct from the notion of a boundary in point-set topology.