Differential (mathematics)

In mathematics, differential refers to several related notions[1] that originated in the early days of calculus and were later put on a rigorous footing, such as infinitesimal differences and the derivatives of functions.[2]

The term is used in various branches of mathematics such as calculus, differential geometry, algebraic geometry and algebraic topology.

Introduction

The term differential is used nonrigorously in calculus to refer to an infinitesimal ("infinitely small") change in some varying quantity. For example, if x is a variable, then a change in the value of x is often denoted Δx (pronounced delta x). The differential dx represents an infinitely small change in the variable x. The idea of an infinitely small or infinitely slow change is, intuitively, extremely useful, and there are a number of ways to make the notion mathematically precise.

Using calculus, it is possible to relate the infinitely small changes of various variables to each other mathematically using derivatives. If y is a function of x, then the differential dy of y is related to dx by the formula [math]\displaystyle{ dy = \frac{dy}{dx} \,dx, }[/math] where [math]\displaystyle{ \frac{dy}{dx} \, }[/math] denotes the derivative of y with respect to x. This formula summarizes the intuitive idea that the derivative of y with respect to x is the limit of the ratio of differences Δy/Δx as Δx becomes infinitesimal.
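
This can be sketched numerically (the function and step sizes below are illustrative, not from the article): the differential dy = f′(x) dx approximates the actual change Δy, and the error vanishes faster than Δx.

```python
# Compare the actual change Δy with the linear approximation dy = f'(x) dx
# for f(x) = x**3 at x = 2, shrinking Δx by factors of 10.

def f(x):
    return x ** 3

def f_prime(x):
    return 3 * x ** 2  # derivative of x**3, computed by hand

x = 2.0
for dx in (0.1, 0.01, 0.001):
    actual = f(x + dx) - f(x)   # Δy
    linear = f_prime(x) * dx    # dy = f'(x) dx
    print(dx, actual, linear, actual - linear)
```

The last column (the error) shrinks quadratically in Δx, which is exactly the sense in which the differential captures the first-order change.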

Basic notions

  • In calculus, the differential represents a change in the linearization of a function.
    • The total differential is its generalization for functions of multiple variables.
  • In traditional approaches to calculus, the differentials (e.g. dx, dy, dt, etc.) are interpreted as infinitesimals. There are several methods of defining infinitesimals rigorously, but it is sufficient to say that an infinitesimal number is smaller in absolute value than any positive real number, just as an infinitely large number is larger than any real number.
  • The differential is another name for the Jacobian matrix of partial derivatives of a function from Rn to Rm (especially when this matrix is viewed as a linear map).
  • More generally, the differential or pushforward refers to the derivative of a map between smooth manifolds and the pushforward operations it defines. The differential is also used to define the dual concept of pullback.
  • Stochastic calculus provides a notion of stochastic differential and an associated calculus for stochastic processes.
  • The integrator in a Stieltjes integral is represented as the differential of a function. Formally, the differential appearing under the integral behaves exactly as a differential: thus, the integration by substitution and integration by parts formulae for the Stieltjes integral correspond, respectively, to the chain rule and product rule for the differential.

History and usage

Infinitesimal quantities played a significant role in the development of calculus. Archimedes used them, even though he did not believe that arguments involving infinitesimals were rigorous.[3] Isaac Newton referred to them as fluxions. However, it was Gottfried Leibniz who coined the term differentials for infinitesimal quantities and introduced the notation for them which is still used today.

In Leibniz's notation, if x is a variable quantity, then dx denotes an infinitesimal change in the variable x. Thus, if y is a function of x, then the derivative of y with respect to x is often denoted dy/dx, which would otherwise be denoted [math]\displaystyle{ \dot{y} }[/math] (in the notation of Newton) or [math]\displaystyle{ y' }[/math] (in that of Lagrange). The use of differentials in this form attracted much criticism, for instance in the famous pamphlet The Analyst by Bishop Berkeley. Nevertheless, the notation has remained popular because it strongly suggests the idea that the derivative of y at x is its instantaneous rate of change (the slope of the graph's tangent line), which may be obtained by taking the limit of the ratio Δy/Δx as Δx becomes arbitrarily small. Differentials are also compatible with dimensional analysis, where a differential such as dx has the same dimensions as the variable x.

Calculus evolved into a distinct branch of mathematics during the 17th century CE, although there were antecedents going back to antiquity. The presentations of Newton and Leibniz, among others, were marked by non-rigorous definitions of terms like differential, fluent and "infinitely small". While many of the arguments in Bishop Berkeley's 1734 The Analyst are theological in nature, modern mathematicians acknowledge the validity of his argument against "the Ghosts of departed Quantities"; however, the modern approaches do not have the same technical issues. Despite the lack of rigor, immense progress was made in the 17th and 18th centuries. In the 19th century, Cauchy and others gradually developed the epsilon-delta approach to continuity, limits and derivatives, giving a solid conceptual foundation for calculus.

In the 20th century, several new concepts in, e.g., multivariable calculus, differential geometry, seemed to encapsulate the intent of the old terms, especially differential; both differential and infinitesimal are used with new, more rigorous, meanings.

Differentials are also used in the notation for integrals because an integral can be regarded as an infinite sum of infinitesimal quantities: the area under a graph is obtained by subdividing the graph into infinitely thin strips and summing their areas. In an expression such as [math]\displaystyle{ \int f(x) \,dx, }[/math] the integral sign (which is a modified long s) denotes the infinite sum, f(x) denotes the "height" of a thin strip, and the differential dx denotes its infinitely thin width.
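
The "sum of infinitely thin strips" picture can be made finite as a sketch (the integrand and the midpoint rule here are illustrative choices, not from the article): each term is a height f(x) times a small width dx.

```python
# Approximate ∫ f(x) dx over [a, b] as a finite sum of strip areas,
# sampling the height at the midpoint of each strip.

def integral(f, a, b, n):
    dx = (b - a) / n  # width of each thin strip
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

print(integral(lambda x: x ** 2, 0.0, 1.0, 1000))  # close to 1/3
```

As the strip width dx shrinks (n grows), the finite sum approaches the exact area, which is the intuition the integral notation encodes.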

Approaches

There are several approaches for making the notion of differentials mathematically precise.

  1. Differentials as linear maps. This approach underlies the definition of the derivative and the exterior derivative in differential geometry.[4]
  2. Differentials as nilpotent elements of commutative rings. This approach is popular in algebraic geometry.[5]
  3. Differentials in smooth models of set theory. This approach is known as synthetic differential geometry or smooth infinitesimal analysis and is closely related to the algebraic geometric approach, except that ideas from topos theory are used to hide the mechanisms by which nilpotent infinitesimals are introduced.[6]
  4. Differentials as infinitesimals in hyperreal number systems, which are extensions of the real numbers that contain invertible infinitesimals and infinitely large numbers. This is the approach of nonstandard analysis pioneered by Abraham Robinson.[7]

These approaches are very different from each other, but they have in common the idea of being quantitative, i.e., saying not just that a differential is infinitely small, but how small it is.

Differentials as linear maps

There is a simple way to make precise sense of differentials by regarding them as linear maps, an approach first used on the real line. It can be used on [math]\displaystyle{ \mathbb{R} }[/math], [math]\displaystyle{ \mathbb{R}^n }[/math], a Hilbert space, a Banach space, or more generally, a topological vector space. The case of the real line is the easiest to explain. This type of differential is also known as a covariant vector or cotangent vector, depending on context.

Differentials as linear maps on R

Suppose [math]\displaystyle{ f(x) }[/math] is a real-valued function on [math]\displaystyle{ \mathbb{R} }[/math]. We can reinterpret the variable [math]\displaystyle{ x }[/math] in [math]\displaystyle{ f(x) }[/math] as being a function rather than a number, namely the identity map on the real line, which takes a real number [math]\displaystyle{ p }[/math] to itself: [math]\displaystyle{ x(p)=p }[/math]. Then [math]\displaystyle{ f(x) }[/math] is the composite of [math]\displaystyle{ f }[/math] with [math]\displaystyle{ x }[/math], whose value at [math]\displaystyle{ p }[/math] is [math]\displaystyle{ f(x(p))=f(p) }[/math]. The differential [math]\displaystyle{ \operatorname{d}f }[/math] (which of course depends on [math]\displaystyle{ f }[/math]) is then a function whose value at [math]\displaystyle{ p }[/math] (usually denoted [math]\displaystyle{ df_p }[/math]) is not a number, but a linear map from [math]\displaystyle{ \mathbb{R} }[/math] to [math]\displaystyle{ \mathbb{R} }[/math]. Since a linear map from [math]\displaystyle{ \mathbb{R} }[/math] to [math]\displaystyle{ \mathbb{R} }[/math] is given by a [math]\displaystyle{ 1\times 1 }[/math] matrix, it is essentially the same thing as a number, but the change in the point of view allows us to think of [math]\displaystyle{ df_p }[/math] as an infinitesimal and compare it with the standard infinitesimal [math]\displaystyle{ dx_p }[/math], which is again just the identity map from [math]\displaystyle{ \mathbb{R} }[/math] to [math]\displaystyle{ \mathbb{R} }[/math] (a [math]\displaystyle{ 1\times 1 }[/math] matrix with entry [math]\displaystyle{ 1 }[/math]). The identity map has the property that if [math]\displaystyle{ \varepsilon }[/math] is very small, then [math]\displaystyle{ dx_p(\varepsilon) }[/math] is very small, which enables us to regard it as infinitesimal. 
The differential [math]\displaystyle{ df_p }[/math] has the same property, because it is just a multiple of [math]\displaystyle{ dx_p }[/math], and this multiple is the derivative [math]\displaystyle{ f'(p) }[/math] by definition. We therefore obtain that [math]\displaystyle{ df_p=f'(p)\,dx_p }[/math], and hence [math]\displaystyle{ df=f'\,dx }[/math]. Thus we recover the idea that [math]\displaystyle{ f' }[/math] is the ratio of the differentials [math]\displaystyle{ df }[/math] and [math]\displaystyle{ dx }[/math].
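
This point of view can be transcribed almost literally into code. The sketch below (illustrative only) represents df as a function sending a point p to a linear map, with dx_p as the identity map:

```python
import math

def dx(p):
    # The "standard infinitesimal": at every point p, the identity linear map.
    return lambda eps: eps

def d(f, f_prime):
    # The differential of f: at p, the linear map eps -> f'(p) * eps.
    def df(p):
        return lambda eps: f_prime(p) * eps
    return df

df = d(math.sin, math.cos)
p, eps = 0.5, 1e-3
print(df(p)(eps))               # cos(0.5) * 0.001
print(df(p)(eps) / dx(p)(eps))  # the ratio recovers f'(p) = cos(0.5)
```

Dividing df_p by dx_p (applied to the same small ε) recovers the derivative, mirroring the identity df = f′ dx above.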

This would just be a trick were it not for the fact that:

  1. it captures the idea of the derivative of [math]\displaystyle{ f }[/math] at [math]\displaystyle{ p }[/math] as the best linear approximation to [math]\displaystyle{ f }[/math] at [math]\displaystyle{ p }[/math];
  2. it has many generalizations.

Differentials as linear maps on Rn

If [math]\displaystyle{ f }[/math] is a function from [math]\displaystyle{ \mathbb{R}^n }[/math] to [math]\displaystyle{ \mathbb{R} }[/math], then we say that [math]\displaystyle{ f }[/math] is differentiable[8] at [math]\displaystyle{ p\in\mathbb{R}^n }[/math] if there is a linear map [math]\displaystyle{ df_p }[/math] from [math]\displaystyle{ \mathbb{R}^n }[/math] to [math]\displaystyle{ \mathbb{R} }[/math] such that for any [math]\displaystyle{ \varepsilon\gt 0 }[/math], there is a neighbourhood [math]\displaystyle{ N }[/math] of [math]\displaystyle{ p }[/math] such that for [math]\displaystyle{ x\in N }[/math], [math]\displaystyle{ \left|f(x) - f(p) - df_p(x-p)\right| \lt \varepsilon \left|x-p\right| . }[/math]

We can now use the same trick as in the one-dimensional case and think of the expression [math]\displaystyle{ f(x_1, x_2, \ldots, x_n) }[/math] as the composite of [math]\displaystyle{ f }[/math] with the standard coordinates [math]\displaystyle{ x_1, x_2, \ldots, x_n }[/math] on [math]\displaystyle{ \mathbb{R}^n }[/math] (so that [math]\displaystyle{ x_j(p) }[/math] is the [math]\displaystyle{ j }[/math]-th component of [math]\displaystyle{ p\in\mathbb{R}^n }[/math]). Then the differentials [math]\displaystyle{ \left(dx_1\right)_p, \left(dx_2\right)_p, \ldots, \left(dx_n\right)_p }[/math] at a point [math]\displaystyle{ p }[/math] form a basis for the vector space of linear maps from [math]\displaystyle{ \mathbb{R}^n }[/math] to [math]\displaystyle{ \mathbb{R} }[/math] and therefore, if [math]\displaystyle{ f }[/math] is differentiable at [math]\displaystyle{ p }[/math], we can write [math]\displaystyle{ \operatorname{d}f_p }[/math] as a linear combination of these basis elements: [math]\displaystyle{ df_p = \sum_{j=1}^n D_j f(p) \,(dx_j)_p. }[/math]

The coefficients [math]\displaystyle{ D_j f(p) }[/math] are (by definition) the partial derivatives of [math]\displaystyle{ f }[/math] at [math]\displaystyle{ p }[/math] with respect to [math]\displaystyle{ x_1, x_2, \ldots, x_n }[/math]. Hence, if [math]\displaystyle{ f }[/math] is differentiable on all of [math]\displaystyle{ \mathbb{R}^n }[/math], we can write, more concisely: [math]\displaystyle{ \operatorname{d}f = \frac{\partial f}{\partial x_1} \,dx_1 + \frac{\partial f}{\partial x_2} \,dx_2 + \cdots +\frac{\partial f}{\partial x_n} \,dx_n. }[/math]

In the one-dimensional case this becomes [math]\displaystyle{ df = \frac{df}{dx}dx }[/math] as before.
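
A brief numerical check of this formula, for an assumed example function on [math]\displaystyle{ \mathbb{R}^2 }[/math]:

```python
# Verify that df = f_x dx + f_y dy approximates the actual change of
# f(x, y) = x**2 * y to first order in the step (dx, dy).

def f(x, y):
    return x ** 2 * y

def df(x, y, dx, dy):
    # Partial derivatives computed by hand: f_x = 2xy, f_y = x**2.
    return 2 * x * y * dx + x ** 2 * dy

p = (3.0, 4.0)
step = (1e-4, 2e-4)
actual = f(p[0] + step[0], p[1] + step[1]) - f(*p)
print(actual, df(*p, *step))  # agree to first order in the step
```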

This idea generalizes straightforwardly to functions from [math]\displaystyle{ \mathbb{R}^n }[/math] to [math]\displaystyle{ \mathbb{R}^m }[/math]. Furthermore, it has the decisive advantage over other definitions of the derivative that it is invariant under changes of coordinates. This means that the same idea can be used to define the differential of smooth maps between smooth manifolds.

Aside: Note that the existence of all the partial derivatives of [math]\displaystyle{ f(x) }[/math] at [math]\displaystyle{ x }[/math] is a necessary condition for the existence of a differential at [math]\displaystyle{ x }[/math]. However, it is not a sufficient condition. For counterexamples, see Gateaux derivative.

Differentials as linear maps on a vector space

The same procedure works on a vector space with enough additional structure to talk meaningfully about continuity. The most concrete case is a Hilbert space, also known as a complete inner product space, where the inner product and its associated norm define a suitable notion of distance. The same procedure works for a Banach space, also known as a complete normed vector space. However, for a more general topological vector space, some of the details are more abstract because there is no notion of distance.

In the important case of finite dimension, any inner product space is a Hilbert space, any normed vector space is a Banach space, and any topological vector space is complete. As a result, one can define a coordinate system from an arbitrary basis and use the same technique as for [math]\displaystyle{ \mathbb{R}^n }[/math].

Differentials as germs of functions

This approach works on any differentiable manifold. If

  1. U and V are open sets containing p
  2. [math]\displaystyle{ f\colon U\to \mathbb{R} }[/math] is continuous
  3. [math]\displaystyle{ g\colon V\to \mathbb{R} }[/math] is continuous

then f is equivalent to g at p, denoted [math]\displaystyle{ f \sim_p g }[/math], if and only if there is an open [math]\displaystyle{ W \subseteq U \cap V }[/math] containing p such that [math]\displaystyle{ f(x) = g(x) }[/math] for every x in W. The germ of f at p, denoted [math]\displaystyle{ [f]_p }[/math], is the set of all real continuous functions equivalent to f at p; if f is smooth at p then [math]\displaystyle{ [f]_p }[/math] is a smooth germ. If

  1. [math]\displaystyle{ U_1 }[/math], [math]\displaystyle{ U_2 }[/math], [math]\displaystyle{ V_1 }[/math] and [math]\displaystyle{ V_2 }[/math] are open sets containing p
  2. [math]\displaystyle{ f_1\colon U_1\to \mathbb{R} }[/math], [math]\displaystyle{ f_2\colon U_2\to \mathbb{R} }[/math], [math]\displaystyle{ g_1\colon V_1\to \mathbb{R} }[/math] and [math]\displaystyle{ g_2\colon V_2\to \mathbb{R} }[/math] are smooth functions
  3. [math]\displaystyle{ f_1 \sim_p g_1 }[/math]
  4. [math]\displaystyle{ f_2 \sim_p g_2 }[/math]
  5. r is a real number

then

  1. [math]\displaystyle{ r f_1 \sim_p r g_1 }[/math]
  2. [math]\displaystyle{ f_1+f_2 \sim_p g_1+g_2 }[/math], where [math]\displaystyle{ f_1+f_2\colon U_1 \cap U_2\to \mathbb{R} }[/math] and [math]\displaystyle{ g_1+g_2\colon V_1 \cap V_2\to \mathbb{R} }[/math]
  3. [math]\displaystyle{ f_1 f_2 \sim_p g_1 g_2 }[/math], where [math]\displaystyle{ f_1 f_2\colon U_1 \cap U_2\to \mathbb{R} }[/math] and [math]\displaystyle{ g_1 g_2\colon V_1 \cap V_2\to \mathbb{R} }[/math]

This shows that the germs at p form an algebra.

Define [math]\displaystyle{ \mathcal{I}_p }[/math] to be the set of all smooth germs vanishing at p and [math]\displaystyle{ \mathcal{I}_p^2 }[/math] to be the product of ideals [math]\displaystyle{ \mathcal{I}_p \mathcal{I}_p }[/math]. Then a differential at p (cotangent vector at p) is an element of [math]\displaystyle{ \mathcal{I}_p/\mathcal{I}_p^2 }[/math]. The differential of a smooth function f at p, denoted [math]\displaystyle{ \mathrm d f_p }[/math], is the equivalence class of [math]\displaystyle{ [f-f(p)]_p }[/math] in [math]\displaystyle{ \mathcal{I}_p/\mathcal{I}_p^2 }[/math].
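
A worked example (not in the original) makes this concrete. Take [math]\displaystyle{ f(x)=x^2 }[/math] on [math]\displaystyle{ \mathbb{R} }[/math] and [math]\displaystyle{ p=1 }[/math]. Then [math]\displaystyle{ f - f(1) = x^2 - 1 = 2(x-1) + (x-1)^2 }[/math]. Since [math]\displaystyle{ (x-1)^2 \in \mathcal{I}_1^2 }[/math], we get [math]\displaystyle{ \mathrm d f_1 = 2\,[x-1]_1 = 2\,\mathrm d x_1 }[/math] in [math]\displaystyle{ \mathcal{I}_1/\mathcal{I}_1^2 }[/math], recovering the derivative [math]\displaystyle{ f'(1)=2 }[/math].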

A similar approach is to define differential equivalence of first order in terms of derivatives in an arbitrary coordinate patch. Then the differential of f at p is the set of all functions differentially equivalent to [math]\displaystyle{ f-f(p) }[/math] at p.

Algebraic geometry

In algebraic geometry, differentials and other infinitesimal notions are handled in a very explicit way by accepting that the coordinate ring or structure sheaf of a space may contain nilpotent elements. The simplest example is the ring of dual numbers R[ε], where ε² = 0.

This can be motivated by the algebro-geometric point of view on the derivative of a function f from R to R at a point p. For this, note first that f − f(p) belongs to the ideal Ip of functions on R which vanish at p. If the derivative of f vanishes at p, then f − f(p) belongs to the square Ip² of this ideal. Hence the derivative of f at p may be captured by the equivalence class [f − f(p)] in the quotient space Ip/Ip², and the 1-jet of f (which encodes its value and its first derivative) is the equivalence class of f in the space of all functions modulo Ip². Algebraic geometers regard this equivalence class as the restriction of f to a thickened version of the point p whose coordinate ring is not R (which is the quotient space of functions on R modulo Ip) but R[ε], which is the quotient space of functions on R modulo Ip². Such a thickened point is a simple example of a scheme.[5]
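
Arithmetic in the dual numbers is easy to sketch in code (an illustrative implementation, not from the article): since ε² = 0, evaluating a polynomial at x + ε produces f(x) in the constant part and f′(x) in the ε-coefficient.

```python
# Dual numbers a + b*eps with eps**2 = 0. Multiplication drops the
# b*d term: (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps.

class Dual:
    def __init__(self, a, b):
        self.a, self.b = a, b  # represents a + b*eps

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # eps**2 = 0 kills the self.b * other.b term.
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def f(x):
    return x * x * x  # x**3

# Evaluate at 2 + eps: constant part is f(2), eps-coefficient is f'(2).
y = f(Dual(2.0, 1.0))
print(y.a, y.b)  # 8.0 12.0
```

This is the computational face of the thickened point: working modulo Ip² is exactly working with ε² = 0.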

Algebraic geometry notions

Differentials are also important in algebraic geometry, and there are several important notions, including abelian differentials on algebraic curves, quadratic differentials, and Kähler differentials.

Synthetic differential geometry

Another approach to infinitesimals is the method of synthetic differential geometry[9] or smooth infinitesimal analysis.[10] This is closely related to the algebraic-geometric approach, except that the infinitesimals are more implicit and intuitive. The main idea of this approach is to replace the category of sets with another category of smoothly varying sets which is a topos. In this category, one can define the real numbers, smooth functions, and so on, but the real numbers automatically contain nilpotent infinitesimals, so these do not need to be introduced by hand as in the algebraic geometric approach. However, the logic in this new category is not identical to the familiar logic of the category of sets: in particular, the law of the excluded middle does not hold. This means that set-theoretic mathematical arguments only extend to smooth infinitesimal analysis if they are constructive (e.g., do not use proof by contradiction). Some regard this disadvantage as a positive thing, since it forces one to find constructive arguments wherever they are available.

Nonstandard analysis

The final approach to infinitesimals again involves extending the real numbers, but in a less drastic way. In the nonstandard analysis approach there are no nilpotent infinitesimals, only invertible ones, which may be viewed as the reciprocals of infinitely large numbers.[7] Such extensions of the real numbers may be constructed explicitly using equivalence classes of sequences of real numbers, so that, for example, the sequence (1, 1/2, 1/3, ..., 1/n, ...) represents an infinitesimal. The first-order logic of this new set of hyperreal numbers is the same as the logic for the usual real numbers, but the completeness axiom (which involves second-order logic) does not hold. Nevertheless, this suffices to develop an elementary and quite intuitive approach to calculus using infinitesimals; see the transfer principle.
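
As a sketch of how this works in practice (a standard worked example, not in the original), the derivative of [math]\displaystyle{ f(x)=x^2 }[/math] can be computed with a nonzero infinitesimal [math]\displaystyle{ \varepsilon }[/math]: [math]\displaystyle{ \frac{f(x+\varepsilon)-f(x)}{\varepsilon} = \frac{2x\varepsilon + \varepsilon^2}{\varepsilon} = 2x + \varepsilon. }[/math] Taking the standard part, which discards the infinitesimal remainder, gives [math]\displaystyle{ f'(x) = 2x }[/math].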

Differential geometry

The notion of a differential motivates several concepts in differential geometry (and differential topology).

Other meanings

The term differential has also been adopted in homological algebra and algebraic topology, because of the role the exterior derivative plays in de Rham cohomology: in a cochain complex [math]\displaystyle{ (C_\bullet, d_\bullet), }[/math] the maps (or coboundary operators) di are often called differentials. Dually, the boundary operators in a chain complex are sometimes called codifferentials.

The properties of the differential also motivate the algebraic notions of a derivation and a differential algebra.

Citations

  1. "Differential". Wolfram MathWorld. https://mathworld.wolfram.com/Differential.html. "The word differential has several related meaning in mathematics. In the most common context, it means "related to derivatives." So, for example, the portion of calculus dealing with taking derivatives (i.e., differentiation), is known as differential calculus.
    The word "differential" also has a more technical meaning in the theory of differential k-forms as a so-called one-form."
     
  2. "differential - Definition of differential in US English by Oxford Dictionaries". http://www.oxforddictionaries.com/us/definition/american_english/differential. 
  3. Boyer 1991.
  4. Darling 1994.
  5. Eisenbud & Harris 1998.
  6. See Kock 2006 and Moerdijk & Reyes 1991.
  7. See Robinson 1996 and Keisler 1986.
  8. See, for instance, Apostol 1967.
  9. See Kock 2006 and Lawvere 1968.
  10. See Moerdijk & Reyes 1991 and Bell 1998.

References