Norm (mathematics)

In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector. This norm can be defined as the square root of the inner product of a vector with itself.

A seminorm satisfies the first two properties of a norm, but may be zero for vectors other than the origin.[1] A vector space with a specified norm is called a normed vector space. In a similar manner, a vector space with a seminorm is called a seminormed vector space.

The term pseudonorm has been used for several related meanings. It may be a synonym of "seminorm".[1] A pseudonorm may satisfy the same axioms as a norm, with the equality replaced by an inequality "[math]\displaystyle{ \,\leq\, }[/math]" in the homogeneity axiom.[2] It can also refer to a norm that can take infinite values,[3] or to certain functions parametrised by a directed set.[4]

Definition

Given a vector space [math]\displaystyle{ X }[/math] over a subfield [math]\displaystyle{ F }[/math] of the complex numbers [math]\displaystyle{ \Complex, }[/math] a norm on [math]\displaystyle{ X }[/math] is a real-valued function [math]\displaystyle{ p : X \to \Reals }[/math] with the following properties, where [math]\displaystyle{ |s| }[/math] denotes the usual absolute value of a scalar [math]\displaystyle{ s }[/math]:[5]

  1. Subadditivity/Triangle inequality: [math]\displaystyle{ p(x + y) \leq p(x) + p(y) }[/math] for all [math]\displaystyle{ x, y \in X. }[/math]
  2. Absolute homogeneity: [math]\displaystyle{ p(s x) = |s| p(x) }[/math] for all [math]\displaystyle{ x \in X }[/math] and all scalars [math]\displaystyle{ s. }[/math]
  3. Positive definiteness/positiveness[6]/Point-separating: for all [math]\displaystyle{ x \in X, }[/math] if [math]\displaystyle{ p(x) = 0 }[/math] then [math]\displaystyle{ x = 0. }[/math]
    • Because property (2.) implies [math]\displaystyle{ p(0) = 0, }[/math] some authors replace property (3.) with the equivalent condition: for every [math]\displaystyle{ x \in X, }[/math] [math]\displaystyle{ p(x) = 0 }[/math] if and only if [math]\displaystyle{ x = 0. }[/math]

A seminorm on [math]\displaystyle{ X }[/math] is a function [math]\displaystyle{ p : X \to \Reals }[/math] that has properties (1.) and (2.)[7] so that in particular, every norm is also a seminorm (and thus also a sublinear functional). However, there exist seminorms that are not norms. Properties (1.) and (2.) imply that if [math]\displaystyle{ p }[/math] is a norm (or more generally, a seminorm) then [math]\displaystyle{ p(0) = 0 }[/math] and that [math]\displaystyle{ p }[/math] also has the following property:

  • Non-negativity:[6] [math]\displaystyle{ p(x) \geq 0 }[/math] for all [math]\displaystyle{ x \in X. }[/math]

Some authors include non-negativity as part of the definition of "norm", although this is not necessary. Although this article defined "positive" to be a synonym of "positive definite", some authors instead define "positive" to be a synonym of "non-negative";[8] these definitions are not equivalent.

Equivalent norms

Suppose that [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] are two norms (or seminorms) on a vector space [math]\displaystyle{ X. }[/math] Then [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] are called equivalent if there exist two positive real constants [math]\displaystyle{ c }[/math] and [math]\displaystyle{ C }[/math] such that for every vector [math]\displaystyle{ x \in X, }[/math] [math]\displaystyle{ c q(x) \leq p(x) \leq C q(x). }[/math] The relation "[math]\displaystyle{ p }[/math] is equivalent to [math]\displaystyle{ q }[/math]" is reflexive, symmetric ([math]\displaystyle{ c q \leq p \leq C q }[/math] implies [math]\displaystyle{ \tfrac{1}{C} p \leq q \leq \tfrac{1}{c} p }[/math]), and transitive, and thus defines an equivalence relation on the set of all norms on [math]\displaystyle{ X. }[/math] The norms [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math] are equivalent if and only if they induce the same topology on [math]\displaystyle{ X. }[/math][9] Any two norms on a finite-dimensional space are equivalent but this does not extend to infinite-dimensional spaces.[9]

Notation

If a norm [math]\displaystyle{ p : X \to \R }[/math] is given on a vector space [math]\displaystyle{ X, }[/math] then the norm of a vector [math]\displaystyle{ z \in X }[/math] is usually denoted by enclosing it within double vertical lines: [math]\displaystyle{ \|z\| = p(z). }[/math] Such notation is also sometimes used if [math]\displaystyle{ p }[/math] is only a seminorm. For the length of a vector in Euclidean space (which is an example of a norm, as explained below), the notation [math]\displaystyle{ |x| }[/math] with single vertical lines is also widespread.

Examples

Every (real or complex) vector space admits a norm: If [math]\displaystyle{ x_{\bull} = \left(x_i\right)_{i \in I} }[/math] is a Hamel basis for a vector space [math]\displaystyle{ X }[/math] then the real-valued map that sends [math]\displaystyle{ x = \sum_{i \in I} s_i x_i \in X }[/math] (where all but finitely many of the scalars [math]\displaystyle{ s_i }[/math] are [math]\displaystyle{ 0 }[/math]) to [math]\displaystyle{ \sum_{i \in I} \left|s_i\right| }[/math] is a norm on [math]\displaystyle{ X. }[/math][10] There are also a large number of norms that exhibit additional properties that make them useful for specific problems.

Absolute-value norm

The absolute value [math]\displaystyle{ \|x\| = |x| }[/math] is a norm on the one-dimensional vector space formed by the real or complex numbers.

Any norm [math]\displaystyle{ p }[/math] on a one-dimensional vector space [math]\displaystyle{ X }[/math] is equivalent (up to scaling) to the absolute value norm, meaning that there is a norm-preserving isomorphism of vector spaces [math]\displaystyle{ f : \mathbb{F} \to X, }[/math] where [math]\displaystyle{ \mathbb{F} }[/math] is either [math]\displaystyle{ \R }[/math] or [math]\displaystyle{ \Complex, }[/math] and norm-preserving means that [math]\displaystyle{ |x| = p(f(x)). }[/math] This isomorphism is given by sending [math]\displaystyle{ 1 \in \mathbb{F} }[/math] to a vector of norm [math]\displaystyle{ 1, }[/math] which exists since such a vector is obtained by multiplying any non-zero vector by the inverse of its norm.

Euclidean norm

On the [math]\displaystyle{ n }[/math]-dimensional Euclidean space [math]\displaystyle{ \R^n, }[/math] the intuitive notion of length of the vector [math]\displaystyle{ \boldsymbol{x} = \left(x_1, x_2, \ldots, x_n\right) }[/math] is captured by the formula[11] [math]\displaystyle{ \|\boldsymbol{x}\|_2 := \sqrt{x_1^2 + \cdots + x_n^2}. }[/math]

This is the Euclidean norm, which gives the ordinary distance from the origin to the point [math]\displaystyle{ \boldsymbol{x}, }[/math] a consequence of the Pythagorean theorem. This operation may also be referred to as "SRSS", which is an acronym for the square root of the sum of squares.[12]
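
As a quick numerical illustration (a minimal Python/NumPy sketch with an arbitrary example vector, not part of the article's formal content), the square root of the sum of squares agrees with the library routine for the 2-norm:

  import numpy as np

  # Euclidean norm as the square root of the sum of squares ("SRSS").
  x = np.array([3.0, 4.0, 12.0])
  srss = np.sqrt(np.sum(x ** 2))
  print(srss)               # 13.0
  print(np.linalg.norm(x))  # same value; NumPy's default vector norm is the 2-norm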

The Euclidean norm is by far the most commonly used norm on [math]\displaystyle{ \R^n, }[/math][11] but there are other norms on this vector space as will be shown below. However, all these norms are equivalent in the sense that they all define the same topology on finite-dimensional spaces.

The inner product of two vectors of a Euclidean vector space is the dot product of their coordinate vectors over an orthonormal basis. Hence, the Euclidean norm can be written in a coordinate-free way as [math]\displaystyle{ \|\boldsymbol{x}\| := \sqrt{\boldsymbol{x} \cdot \boldsymbol{x}}. }[/math]

The Euclidean norm is also called the quadratic norm, [math]\displaystyle{ L^2 }[/math] norm,[13] [math]\displaystyle{ \ell^2 }[/math] norm, 2-norm, or square norm; see [math]\displaystyle{ L^p }[/math] space. It defines a distance function called the Euclidean length, [math]\displaystyle{ L^2 }[/math] distance, or [math]\displaystyle{ \ell^2 }[/math] distance.

The set of vectors in [math]\displaystyle{ \R^{n+1} }[/math] whose Euclidean norm is a given positive constant forms an [math]\displaystyle{ n }[/math]-sphere.

Euclidean norm of complex numbers

The Euclidean norm of a complex number is its absolute value (also called the modulus), once the complex plane is identified with the Euclidean plane [math]\displaystyle{ \R^2. }[/math] This identification of the complex number [math]\displaystyle{ x + i y }[/math] with the vector [math]\displaystyle{ (x, y) }[/math] in the Euclidean plane makes the quantity [math]\displaystyle{ \sqrt{x^2 + y^2} }[/math] (as first suggested by Euler) the Euclidean norm associated with the complex number. For [math]\displaystyle{ z = x + iy }[/math], the norm can also be written as [math]\displaystyle{ \sqrt{\bar z z} }[/math] where [math]\displaystyle{ \bar z }[/math] is the complex conjugate of [math]\displaystyle{ z. }[/math]
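
For instance (a small Python sketch with an arbitrary sample value), the three expressions above agree:

  import math

  # Modulus of a complex number, computed three equivalent ways.
  z = 3.0 + 4.0j
  print(abs(z))                                 # 5.0, the modulus |z|
  print(math.sqrt(z.real ** 2 + z.imag ** 2))   # 5.0, via the identification with R^2
  print(math.sqrt((z.conjugate() * z).real))    # 5.0, via sqrt(conjugate(z) * z)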

Quaternions and octonions

There are exactly four Euclidean Hurwitz algebras over the real numbers. These are the real numbers [math]\displaystyle{ \R, }[/math] the complex numbers [math]\displaystyle{ \Complex, }[/math] the quaternions [math]\displaystyle{ \mathbb{H}, }[/math] and lastly the octonions [math]\displaystyle{ \mathbb{O}, }[/math] where the dimensions of these spaces over the real numbers are [math]\displaystyle{ 1, 2, 4, \text{ and } 8, }[/math] respectively. The canonical norms on [math]\displaystyle{ \R }[/math] and [math]\displaystyle{ \Complex }[/math] are their absolute value functions, as discussed previously.

The canonical norm on [math]\displaystyle{ \mathbb{H} }[/math] of quaternions is defined by [math]\displaystyle{ \lVert q \rVert = \sqrt{\,qq^*~} = \sqrt{\,q^*q~} = \sqrt{\, a^2 + b^2 + c^2 + d^2 ~} }[/math] for every quaternion [math]\displaystyle{ q = a + b\,\mathbf i + c\,\mathbf j + d\,\mathbf k }[/math] in [math]\displaystyle{ \mathbb{H}. }[/math] This is the same as the Euclidean norm on [math]\displaystyle{ \mathbb{H} }[/math] considered as the vector space [math]\displaystyle{ \R^4. }[/math] Similarly, the canonical norm on the octonions is just the Euclidean norm on [math]\displaystyle{ \R^8. }[/math]
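A minimal sketch (Python/NumPy, with arbitrary sample coefficients) showing that the quaternion norm coincides with the Euclidean norm on [math]\displaystyle{ \R^4 }[/math]:

  import numpy as np

  # Norm of the quaternion q = a + b*i + c*j + d*k.
  a, b, c, d = 1.0, 2.0, 3.0, 4.0
  quaternion_norm = np.sqrt(a ** 2 + b ** 2 + c ** 2 + d ** 2)
  print(quaternion_norm)                # sqrt(30), about 5.477
  print(np.linalg.norm([a, b, c, d]))   # same value, the Euclidean norm on R^4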

Finite-dimensional complex normed spaces

On an [math]\displaystyle{ n }[/math]-dimensional complex space [math]\displaystyle{ \Complex^n, }[/math] the most common norm is [math]\displaystyle{ \|\boldsymbol{z}\| := \sqrt{\left|z_1\right|^2 + \cdots + \left|z_n\right|^2} = \sqrt{z_1 \bar z_1 + \cdots + z_n \bar z_n}. }[/math]

In this case, the norm can be expressed as the square root of the inner product of the vector and itself: [math]\displaystyle{ \|\boldsymbol{x}\| := \sqrt{\boldsymbol{x}^H ~ \boldsymbol{x}}, }[/math] where [math]\displaystyle{ \boldsymbol{x} }[/math] is represented as a column vector [math]\displaystyle{ \begin{bmatrix} x_1 \; x_2 \; \dots \; x_n \end{bmatrix}^{\rm T} }[/math] and [math]\displaystyle{ \boldsymbol{x}^H }[/math] denotes its conjugate transpose.

This formula is valid for any inner product space, including Euclidean and complex spaces. For complex spaces, the inner product is equivalent to the complex dot product. Hence the formula in this case can also be written using the following notation: [math]\displaystyle{ \|\boldsymbol{x}\| := \sqrt{\boldsymbol{x} \cdot \boldsymbol{x}}. }[/math]
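
As an illustration (a minimal NumPy sketch with an arbitrary vector in [math]\displaystyle{ \Complex^2 }[/math]; np.vdot conjugates its first argument, so it computes [math]\displaystyle{ \boldsymbol{x}^H \boldsymbol{x} }[/math]):

  import numpy as np

  x = np.array([1.0 + 2.0j, 3.0 - 1.0j])
  print(np.sqrt(np.vdot(x, x).real))   # sqrt(|1+2i|^2 + |3-i|^2) = sqrt(15)
  print(np.linalg.norm(x))             # same value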

Taxicab norm or Manhattan norm

Main page: Taxicab geometry

[math]\displaystyle{ \|\boldsymbol{x}\|_1 := \sum_{i=1}^n \left|x_i\right|. }[/math] The name relates to the distance a taxi has to drive in a rectangular street grid (like that of the New York City borough of Manhattan) to get from the origin to the point [math]\displaystyle{ x. }[/math]

The set of vectors whose 1-norm is a given constant forms the surface of a cross polytope, which has dimension equal to the dimension of the vector space minus 1. The Taxicab norm is also called the [math]\displaystyle{ \ell^1 }[/math] norm. The distance derived from this norm is called the Manhattan distance or [math]\displaystyle{ \ell^1 }[/math] distance.

The 1-norm is simply the sum of the absolute values of the components of the vector.

In contrast, [math]\displaystyle{ \sum_{i=1}^n x_i }[/math] is not a norm because it may yield negative results.
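
A short sketch (Python/NumPy, with an arbitrary example vector) contrasting the 1-norm with the plain signed sum:

  import numpy as np

  x = np.array([1.0, -2.0, 3.0])
  print(np.sum(np.abs(x)))          # 6.0, the taxicab (1-)norm
  print(np.linalg.norm(x, ord=1))   # same value
  print(np.sum(x))                  # 2.0; the signed sum is not a norm (it can be negative)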

p-norm

Let [math]\displaystyle{ p \geq 1 }[/math] be a real number. The [math]\displaystyle{ p }[/math]-norm (also called [math]\displaystyle{ \ell^p }[/math]-norm) of vector [math]\displaystyle{ \mathbf{x} = (x_1, \ldots, x_n) }[/math] is[11] [math]\displaystyle{ \|\mathbf{x}\|_p := \left(\sum_{i=1}^n \left|x_i\right|^p\right)^{1/p}. }[/math] For [math]\displaystyle{ p = 1, }[/math] we get the taxicab norm, for [math]\displaystyle{ p = 2 }[/math] we get the Euclidean norm, and as [math]\displaystyle{ p }[/math] approaches [math]\displaystyle{ \infty }[/math] the [math]\displaystyle{ p }[/math]-norm approaches the infinity norm or maximum norm: [math]\displaystyle{ \|\mathbf{x}\|_\infty := \max_i \left|x_i\right|. }[/math] The [math]\displaystyle{ p }[/math]-norm is related to the generalized mean or power mean.
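
The following minimal sketch (Python/NumPy; the helper p_norm and the sample vector are illustrative, not standard API) computes the [math]\displaystyle{ p }[/math]-norm directly and shows it approaching the maximum norm as [math]\displaystyle{ p }[/math] grows:

  import numpy as np

  def p_norm(x, p):
      """(sum_i |x_i|^p) ** (1/p), for p >= 1."""
      return np.sum(np.abs(x) ** p) ** (1.0 / p)

  x = np.array([1.0, -2.0, 3.0])
  print(p_norm(x, 1), np.linalg.norm(x, 1))   # taxicab norm: 6.0
  print(p_norm(x, 2), np.linalg.norm(x, 2))   # Euclidean norm: about 3.742
  for p in (4, 16, 64):
      print(p, p_norm(x, p))                  # tends to max_i |x_i| = 3 as p grows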

For [math]\displaystyle{ p = 2, }[/math] the [math]\displaystyle{ \|\,\cdot\,\|_2 }[/math]-norm is even induced by a canonical inner product [math]\displaystyle{ \langle \,\cdot,\,\cdot\rangle, }[/math] meaning that [math]\displaystyle{ \|\mathbf{x}\|_2 = \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle} }[/math] for all vectors [math]\displaystyle{ \mathbf{x}. }[/math] This inner product can be expressed in terms of the norm by using the polarization identity. On [math]\displaystyle{ \ell^2, }[/math] this inner product is the Euclidean inner product defined by [math]\displaystyle{ \langle \left(x_n\right)_{n}, \left(y_n\right)_{n} \rangle_{\ell^2} ~=~ \sum_n \overline{x_n} y_n }[/math] while for the space [math]\displaystyle{ L^2(X, \mu) }[/math] associated with a measure space [math]\displaystyle{ (X, \Sigma, \mu), }[/math] which consists of all square-integrable functions, this inner product is [math]\displaystyle{ \langle f, g \rangle_{L^2} = \int_X \overline{f(x)} g(x)\, \mathrm d \mu(x). }[/math]
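
For real vectors this can be checked numerically; a minimal sketch (NumPy, with arbitrary sample vectors) of the real polarization identity [math]\displaystyle{ \langle x, y \rangle = \tfrac{1}{4}\left(\|x+y\|_2^2 - \|x-y\|_2^2\right) }[/math]:

  import numpy as np

  x = np.array([1.0, 2.0, -1.0])
  y = np.array([0.5, -3.0, 4.0])
  lhs = np.dot(x, y)
  rhs = (np.linalg.norm(x + y) ** 2 - np.linalg.norm(x - y) ** 2) / 4.0
  print(lhs, rhs)   # both equal -9.5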

This definition is still of some interest for [math]\displaystyle{ 0 \lt p \lt 1, }[/math] but the resulting function does not define a norm,[14] because it violates the triangle inequality. What is true for this case of [math]\displaystyle{ 0 \lt p \lt 1, }[/math] even in the measurable analog, is that the corresponding [math]\displaystyle{ L^p }[/math] class is a vector space, and it is also true that the function [math]\displaystyle{ \int_X |f(x) - g(x)|^p ~ \mathrm d \mu }[/math] (without [math]\displaystyle{ p }[/math]th root) defines a distance that makes [math]\displaystyle{ L^p(X) }[/math] into a complete metric topological vector space. These spaces are of great interest in functional analysis, probability theory and harmonic analysis. However, aside from trivial cases, this topological vector space is not locally convex, and has no continuous non-zero linear forms. Thus the topological dual space contains only the zero functional.

The partial derivative of the [math]\displaystyle{ p }[/math]-norm is given by [math]\displaystyle{ \frac{\partial}{\partial x_k} \|\mathbf{x}\|_p = \frac{x_k \left|x_k\right|^{p-2}} { \|\mathbf{x}\|_p^{p-1}}. }[/math]

The derivative with respect to [math]\displaystyle{ \mathbf{x}, }[/math] therefore, is [math]\displaystyle{ \frac{\partial \|\mathbf{x}\|_p}{\partial \mathbf{x}} =\frac{\mathbf{x} \circ |\mathbf{x}|^{p-2}} {\|\mathbf{x}\|^{p-1}_p}, }[/math] where [math]\displaystyle{ \circ }[/math] denotes the Hadamard product and [math]\displaystyle{ |\cdot| }[/math] is used for the absolute value of each component of the vector.

For the special case of [math]\displaystyle{ p = 2, }[/math] this becomes [math]\displaystyle{ \frac{\partial}{\partial x_k} \|\mathbf{x}\|_2 = \frac{x_k}{\|\mathbf{x}\|_2}, }[/math] or [math]\displaystyle{ \frac{\partial}{\partial \mathbf{x}} \|\mathbf{x}\|_2 = \frac{\mathbf{x}}{ \|\mathbf{x}\|_2}. }[/math]
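
As a sanity check (a minimal NumPy sketch with an arbitrary point; central finite differences approximate the partial derivatives), the formula for [math]\displaystyle{ p = 2 }[/math] can be verified numerically:

  import numpy as np

  x = np.array([1.0, -2.0, 3.0])
  analytic = x / np.linalg.norm(x)   # gradient of the 2-norm at x

  eps = 1e-6
  numeric = np.zeros_like(x)
  for k in range(x.size):
      e = np.zeros_like(x)
      e[k] = eps
      numeric[k] = (np.linalg.norm(x + e) - np.linalg.norm(x - e)) / (2 * eps)

  print(analytic)
  print(numeric)   # agrees with the analytic gradient to high precision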

Maximum norm (special case of: infinity norm, uniform norm, or supremum norm)

If [math]\displaystyle{ \mathbf{x} }[/math] is some vector such that [math]\displaystyle{ \mathbf{x} = (x_1, x_2, \ldots ,x_n), }[/math] then: [math]\displaystyle{ \|\mathbf{x}\|_\infty := \max \left(\left|x_1\right| , \ldots , \left|x_n\right|\right). }[/math]

The set of vectors whose infinity norm is a given constant, [math]\displaystyle{ c, }[/math] forms the surface of a hypercube with edge length [math]\displaystyle{ 2 c. }[/math]

Zero norm

In probability and functional analysis, the zero norm induces a complete metric topology for the space of measurable functions and for the F-space of sequences with F–norm [math]\displaystyle{ (x_n) \mapsto \sum_n{2^{-n} |x_n|/(1+|x_n|)}. }[/math][15] Here we mean by F-norm some real-valued function [math]\displaystyle{ \lVert \cdot \rVert }[/math] on an F-space with distance [math]\displaystyle{ d, }[/math] such that [math]\displaystyle{ \lVert x \rVert = d(x,0). }[/math] The F-norm described above is not a norm in the usual sense because it lacks the required homogeneity property.

Hamming distance of a vector from zero

In metric geometry, the discrete metric takes the value one for distinct points and zero otherwise. When applied coordinate-wise to the elements of a vector space, the discrete distance defines the Hamming distance, which is important in coding and information theory. In the field of real or complex numbers, the distance of the discrete metric from zero is not homogeneous in the non-zero point; indeed, the distance from zero remains one as its non-zero argument approaches zero. However, the discrete distance of a number from zero does satisfy the other properties of a norm, namely the triangle inequality and positive definiteness. When applied component-wise to vectors, the discrete distance from zero behaves like a non-homogeneous "norm", which counts the number of non-zero components in its vector argument; again, this non-homogeneous "norm" is discontinuous.

In signal processing and statistics, David Donoho referred to the zero "norm" with quotation marks. Following Donoho's notation, the zero "norm" of [math]\displaystyle{ x }[/math] is simply the number of non-zero coordinates of [math]\displaystyle{ x, }[/math] or the Hamming distance of the vector from zero. When this "norm" is localized to a bounded set, it is the limit of [math]\displaystyle{ p }[/math]-norms as [math]\displaystyle{ p }[/math] approaches 0. Of course, the zero "norm" is not truly a norm, because it is not positive homogeneous. Indeed, it is not even an F-norm in the sense described above, since it is discontinuous, jointly and severally, with respect to the scalar argument in scalar–vector multiplication and with respect to its vector argument. Abusing terminology, some engineers[who?] omit Donoho's quotation marks and inappropriately call the number-of-non-zeros function the [math]\displaystyle{ L^0 }[/math] norm, echoing the notation for the Lebesgue space of measurable functions.
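
A brief sketch (NumPy, with an arbitrary example vector): counting non-zero coordinates, and one common way to see the limiting behaviour, namely that on a bounded set the sum [math]\displaystyle{ \sum_i |x_i|^p }[/math] (without the [math]\displaystyle{ p }[/math]th root) tends to that count as [math]\displaystyle{ p \to 0 }[/math]:

  import numpy as np

  x = np.array([0.0, 3.0, 0.0, -2.0, 1.0])
  print(np.count_nonzero(x))            # 3 non-zero coordinates

  for p in (1.0, 0.5, 0.1, 0.01):
      print(p, np.sum(np.abs(x) ** p))  # approaches 3 as p -> 0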

Infinite dimensions

The generalization of the above norms to an infinite number of components leads to [math]\displaystyle{ \ell^p }[/math] and [math]\displaystyle{ L^p }[/math] spaces for [math]\displaystyle{ p \ge 1\,, }[/math] with norms

[math]\displaystyle{ \|x\|_p = \bigg(\sum_{i \in \N} \left|x_i\right|^p\bigg)^{1/p} \text{ and }\ \|f\|_{p,X} = \bigg(\int_X |f(x)|^p ~ \mathrm d x\bigg)^{1/p} }[/math]

for complex-valued sequences and functions on [math]\displaystyle{ X \subseteq \R^n }[/math] respectively, which can be further generalized (see Haar measure). These norms are also valid in the limit as [math]\displaystyle{ p \rightarrow +\infty }[/math], giving a supremum norm, and are called [math]\displaystyle{ \ell^\infty }[/math] and [math]\displaystyle{ L^\infty. }[/math]
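
As a concrete illustration (a minimal NumPy sketch; the integrand [math]\displaystyle{ f(x) = x }[/math] on [math]\displaystyle{ [0, 1] }[/math] is an arbitrary example), the [math]\displaystyle{ L^p }[/math] norm can be approximated by a Riemann sum and compared with the exact value [math]\displaystyle{ (1/(p+1))^{1/p} }[/math]:

  import numpy as np

  p = 3.0
  t = np.linspace(0.0, 1.0, 200_001)
  f = t                                  # the function f(x) = x on [0, 1]
  dt = t[1] - t[0]
  riemann = (np.sum(np.abs(f) ** p) * dt) ** (1.0 / p)
  exact = (1.0 / (p + 1.0)) ** (1.0 / p)
  print(riemann, exact)                  # both close to 0.6300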

Any inner product induces in a natural way the norm [math]\displaystyle{ \|x\| := \sqrt{\langle x , x\rangle}. }[/math]

Other examples of infinite-dimensional normed vector spaces can be found in the Banach space article.

Generally, these norms do not give the same topologies. For example, an infinite-dimensional [math]\displaystyle{ \ell^p }[/math] space gives a strictly finer topology than an infinite-dimensional [math]\displaystyle{ \ell^q }[/math] space when [math]\displaystyle{ p \lt q\,. }[/math]

Composite norms

Other norms on [math]\displaystyle{ \R^n }[/math] can be constructed by combining the above; for example [math]\displaystyle{ \|x\| := 2 \left|x_1\right| + \sqrt{3 \left|x_2\right|^2 + \max (\left|x_3\right| , 2 \left|x_4\right|)^2} }[/math] is a norm on [math]\displaystyle{ \R^4. }[/math]

For any norm and any injective linear transformation [math]\displaystyle{ A }[/math] we can define a new norm of [math]\displaystyle{ x, }[/math] equal to [math]\displaystyle{ \|A x\|. }[/math] In 2D, with [math]\displaystyle{ A }[/math] a rotation by 45° and a suitable scaling, this changes the taxicab norm into the maximum norm. Each [math]\displaystyle{ A }[/math] applied to the taxicab norm, up to inversion and interchanging of axes, gives a different unit ball: a parallelogram of a particular shape, size, and orientation.

In 3D, this is similar but different for the 1-norm (octahedrons) and the maximum norm (prisms with parallelogram base).
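
A small sketch (NumPy; the 2D rotation-plus-scaling matrix realises the example described above, and the random test points are arbitrary) showing that [math]\displaystyle{ x \mapsto \|A x\|_1 }[/math] with a 45° rotation scaled by [math]\displaystyle{ 1/\sqrt{2} }[/math] reproduces the maximum norm:

  import numpy as np

  theta = np.pi / 4
  A = (1 / np.sqrt(2)) * np.array([[np.cos(theta), -np.sin(theta)],
                                   [np.sin(theta),  np.cos(theta)]])

  rng = np.random.default_rng(0)
  for _ in range(3):
      x = rng.normal(size=2)
      print(np.linalg.norm(A @ x, ord=1), np.linalg.norm(x, ord=np.inf))  # equal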

There are examples of norms that are not defined by "entrywise" formulas. For instance, the Minkowski functional of a centrally-symmetric convex body in [math]\displaystyle{ \R^n }[/math] (centered at zero) defines a norm on [math]\displaystyle{ \R^n }[/math] (see § Classification of seminorms: absolutely convex absorbing sets below).

All the above formulas also yield norms on [math]\displaystyle{ \Complex^n }[/math] without modification.

There are also norms on spaces of matrices (with real or complex entries), the so-called matrix norms.

In abstract algebra

Main page: Field norm

Let [math]\displaystyle{ E }[/math] be a finite extension of a field [math]\displaystyle{ k }[/math] of inseparable degree [math]\displaystyle{ p^{\mu}, }[/math] and let [math]\displaystyle{ k }[/math] have algebraic closure [math]\displaystyle{ K. }[/math] If the distinct embeddings of [math]\displaystyle{ E }[/math] into [math]\displaystyle{ K }[/math] are [math]\displaystyle{ \left\{\sigma_j\right\}_j, }[/math] then the Galois-theoretic norm of an element [math]\displaystyle{ \alpha \in E }[/math] is the value [math]\displaystyle{ \left(\prod_j {\sigma_j(\alpha)}\right)^{p^{\mu}}. }[/math] As that function is homogeneous of degree [math]\displaystyle{ [E : k] }[/math], the Galois-theoretic norm is not a norm in the sense of this article. However, the [math]\displaystyle{ [E : k] }[/math]-th root of the norm (assuming that concept makes sense) is a norm.[16]

Composition algebras

The concept of norm [math]\displaystyle{ N(z) }[/math] in composition algebras does not share the usual properties of a norm since null vectors are allowed. A composition algebra [math]\displaystyle{ (A, {}^*, N) }[/math] consists of an algebra [math]\displaystyle{ A }[/math] over a field, an involution [math]\displaystyle{ {}^*, }[/math] and a quadratic form [math]\displaystyle{ N(z) = z z^* }[/math] called the "norm".

The characteristic feature of composition algebras is the homomorphism property of [math]\displaystyle{ N }[/math]: for the product [math]\displaystyle{ w z }[/math] of two elements [math]\displaystyle{ w }[/math] and [math]\displaystyle{ z }[/math] of the composition algebra, its norm satisfies [math]\displaystyle{ N(wz) = N(w) N(z). }[/math] In the case of division algebras [math]\displaystyle{ \R, }[/math] [math]\displaystyle{ \Complex, }[/math] [math]\displaystyle{ \mathbb{H}, }[/math] and [math]\displaystyle{ \mathbb{O}, }[/math] the composition algebra norm is the square of the norm discussed above. In those cases the norm is a definite quadratic form. In the split algebras the norm is an isotropic quadratic form.

Properties

For any norm [math]\displaystyle{ p : X \to \R }[/math] on a vector space [math]\displaystyle{ X, }[/math] the reverse triangle inequality holds: [math]\displaystyle{ p(x \pm y) \geq |p(x) - p(y)| \text{ for all } x, y \in X. }[/math] If [math]\displaystyle{ u : X \to Y }[/math] is a continuous linear map between normed spaces, then the norm of [math]\displaystyle{ u }[/math] and the norm of the transpose of [math]\displaystyle{ u }[/math] are equal.[17]

For the [math]\displaystyle{ L^p }[/math] norms, we have Hölder's inequality[18] [math]\displaystyle{ |\langle x, y \rangle| \leq \|x\|_p \|y\|_q \qquad \frac{1}{p} + \frac{1}{q} = 1. }[/math] A special case of this is the Cauchy–Schwarz inequality:[18] [math]\displaystyle{ \left|\langle x, y \rangle\right| \leq \|x\|_2 \|y\|_2. }[/math]
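
A quick numerical check (NumPy; the exponents [math]\displaystyle{ p = 3, }[/math] [math]\displaystyle{ q = 3/2 }[/math] and the random vectors are arbitrary choices satisfying [math]\displaystyle{ 1/p + 1/q = 1 }[/math]):

  import numpy as np

  rng = np.random.default_rng(1)
  x = rng.normal(size=5)
  y = rng.normal(size=5)
  p, q = 3.0, 1.5                     # conjugate exponents: 1/3 + 2/3 = 1
  lhs = abs(np.dot(x, y))

  print(lhs <= np.linalg.norm(x, p) * np.linalg.norm(y, q))   # True (Hoelder)
  print(lhs <= np.linalg.norm(x, 2) * np.linalg.norm(y, 2))   # True (Cauchy-Schwarz)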

Illustrations of unit circles in different norms.

Every norm is a seminorm and thus satisfies all properties of the latter. In turn, every seminorm is a sublinear function and thus satisfies all properties of the latter. In particular, every norm is a convex function.

Equivalence

The concept of unit circle (the set of all vectors of norm 1) is different in different norms: for the 1-norm, the unit circle is a square oriented as a diamond; for the 2-norm (Euclidean norm), it is the well-known unit circle; while for the infinity norm, it is an axis-aligned square. For any [math]\displaystyle{ p }[/math]-norm, it is a superellipse with congruent axes (see the accompanying illustration). Due to the definition of the norm, the unit circle must be convex and centrally symmetric (therefore, for example, the unit ball may be a rectangle but cannot be a triangle, and [math]\displaystyle{ p \geq 1 }[/math] for a [math]\displaystyle{ p }[/math]-norm).

In terms of the vector space, the seminorm defines a topology on the space, and this is a Hausdorff topology precisely when the seminorm can distinguish between distinct vectors, which is again equivalent to the seminorm being a norm. The topology thus defined (by either a norm or a seminorm) can be understood either in terms of sequences or open sets. A sequence of vectors [math]\displaystyle{ \{v_n\} }[/math] is said to converge in norm to [math]\displaystyle{ v, }[/math] if [math]\displaystyle{ \left\|v_n - v\right\| \to 0 }[/math] as [math]\displaystyle{ n \to \infty. }[/math] Equivalently, the topology consists of all sets that can be represented as a union of open balls. If [math]\displaystyle{ (X, \|\cdot\|) }[/math] is a normed space then[19] [math]\displaystyle{ \|x - y\| = \|x - z\| + \|z - y\| \text{ for all } x, y \in X \text{ and } z \in [x, y]. }[/math]

Two norms [math]\displaystyle{ \|\cdot\|_\alpha }[/math] and [math]\displaystyle{ \|\cdot\|_\beta }[/math] on a vector space [math]\displaystyle{ X }[/math] are called equivalent if they induce the same topology,[9] which happens if and only if there exist positive real numbers [math]\displaystyle{ C }[/math] and [math]\displaystyle{ D }[/math] such that for all [math]\displaystyle{ x \in X }[/math] [math]\displaystyle{ C \|x\|_\alpha \leq \|x\|_\beta \leq D \|x\|_\alpha. }[/math] For instance, if [math]\displaystyle{ p \gt r \geq 1 }[/math] on [math]\displaystyle{ \Complex^n, }[/math] then[20] [math]\displaystyle{ \|x\|_p \leq \|x\|_r \leq n^{(1/r-1/p)} \|x\|_p. }[/math]

In particular, [math]\displaystyle{ \|x\|_2 \leq \|x\|_1 \leq \sqrt{n} \|x\|_2, }[/math] [math]\displaystyle{ \|x\|_\infty \leq \|x\|_2 \leq \sqrt{n} \|x\|_\infty, }[/math] and [math]\displaystyle{ \|x\|_\infty \leq \|x\|_1 \leq n \|x\|_\infty. }[/math] That is, [math]\displaystyle{ \|x\|_\infty \leq \|x\|_2 \leq \|x\|_1 \leq \sqrt{n} \|x\|_2 \leq n \|x\|_\infty. }[/math] If the vector space is a finite-dimensional real or complex one, all norms are equivalent. On the other hand, in the case of infinite-dimensional vector spaces, not all norms are equivalent.
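
These comparisons are easy to spot-check numerically (a minimal NumPy sketch with a random vector of arbitrary length [math]\displaystyle{ n = 7 }[/math]):

  import numpy as np

  rng = np.random.default_rng(2)
  x = rng.normal(size=7)
  n = x.size
  n1 = np.linalg.norm(x, 1)
  n2 = np.linalg.norm(x, 2)
  ninf = np.linalg.norm(x, np.inf)

  print(ninf <= n2 <= n1 <= np.sqrt(n) * n2 <= n * ninf)   # True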

Equivalent norms define the same notions of continuity and convergence, and for many purposes do not need to be distinguished. To be more precise, equivalent norms on a vector space induce the same uniform structure.

Classification of seminorms: absolutely convex absorbing sets

Main page: Seminorm

All seminorms on a vector space [math]\displaystyle{ X }[/math] can be classified in terms of absolutely convex absorbing subsets [math]\displaystyle{ A }[/math] of [math]\displaystyle{ X. }[/math] To each such subset corresponds a seminorm [math]\displaystyle{ p_A }[/math] called the gauge of [math]\displaystyle{ A, }[/math] defined as [math]\displaystyle{ p_A(x) := \inf \{r \in \R : r \gt 0, x \in r A\} }[/math] where [math]\displaystyle{ \inf }[/math] is the infimum, with the property that [math]\displaystyle{ \left\{x \in X : p_A(x) \lt 1\right\} ~\subseteq~ A ~\subseteq~ \left\{x \in X : p_A(x) \leq 1\right\}. }[/math] Conversely:

Any locally convex topological vector space has a local basis consisting of absolutely convex sets. A common method to construct such a basis is to use a family [math]\displaystyle{ (p) }[/math] of seminorms [math]\displaystyle{ p }[/math] that separates points: the collection of all finite intersections of sets [math]\displaystyle{ \{p \lt 1/n\} }[/math] turns the space into a locally convex topological vector space so that every p is continuous.

Such a method is used to design weak and weak* topologies.

Norm case:

Suppose now that [math]\displaystyle{ (p) }[/math] contains a single [math]\displaystyle{ p: }[/math] since [math]\displaystyle{ (p) }[/math] is separating, [math]\displaystyle{ p }[/math] is a norm, and [math]\displaystyle{ A = \{p \lt 1\} }[/math] is its open unit ball. Then [math]\displaystyle{ A }[/math] is an absolutely convex bounded neighbourhood of 0, and [math]\displaystyle{ p = p_A }[/math] is continuous.
The converse is due to Andrey Kolmogorov: any locally convex and locally bounded topological vector space is normable. Precisely:
If [math]\displaystyle{ B }[/math] is an absolutely convex bounded neighbourhood of [math]\displaystyle{ 0, }[/math] the gauge [math]\displaystyle{ g_B }[/math] (so that [math]\displaystyle{ B = \{g_B \lt 1\} }[/math]) is a norm.
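
To make the gauge construction concrete, here is a minimal numerical sketch (Python/NumPy; the ellipse A and the bisection-based helper gauge are illustrative choices, not part of the theory above). The gauge of an absolutely convex absorbing set recovers the norm whose unit ball is that set:

  import numpy as np

  # Example set: the absolutely convex absorbing set A = {(u, v) : (u/2)^2 + v^2 <= 1}.
  def in_A(x):
      return (x[0] / 2.0) ** 2 + x[1] ** 2 <= 1.0

  def gauge(x, hi=1e6, tol=1e-10):
      """Approximate p_A(x) = inf {r > 0 : x in r*A} by bisection."""
      lo = 0.0
      while hi - lo > tol:
          r = 0.5 * (lo + hi)
          if in_A(np.asarray(x) / r):   # x lies in r*A exactly when x/r lies in A
              hi = r
          else:
              lo = r
      return hi

  x = np.array([2.0, 1.0])
  print(gauge(x))                                 # about sqrt(2), the norm with unit ball A
  print(np.sqrt((x[0] / 2.0) ** 2 + x[1] ** 2))   # same value in closed form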

References

  1. Knapp, A.W. (2005). Basic Real Analysis. Birkhäuser. ISBN 978-0-817-63250-2.
  2. "Pseudo-norm - Encyclopedia of Mathematics". https://encyclopediaofmath.org/wiki/Pseudo-norm.
  3. "Pseudonorm" (in German). https://www.spektrum.de/lexikon/mathematik/pseudonorm/8161.
  4. Hyers, D. H. (1939-09-01). "Pseudo-normed linear spaces and Abelian groups". Duke Mathematical Journal 5 (3). doi:10.1215/s0012-7094-39-00551-x. ISSN 0012-7094.
  5. Pugh, C.C. (2015). Real Mathematical Analysis. Springer. p. 28. ISBN 978-3-319-17770-0. Prugovečki, E. (1981). Quantum Mechanics in Hilbert Space. p. 20.
  6. Kubrusly 2011, p. 200.
  7. Rudin, W. (1991). Functional Analysis. p. 25.
  8. Narici & Beckenstein 2011, pp. 120–121.
  9. Conrad, Keith. "Equivalence of norms". https://kconrad.math.uconn.edu/blurbs/gradnumthy/equivnorms.pdf.
  10. Wilansky 2013, pp. 20–21.
  11. Weisstein, Eric W. "Vector Norm". https://mathworld.wolfram.com/VectorNorm.html.
  12. Chopra, Anil (2012). Dynamics of Structures (4th ed.). Prentice-Hall. ISBN 978-0-13-285803-8.
  13. Weisstein, Eric W. "Norm". https://mathworld.wolfram.com/Norm.html.
  14. Except in [math]\displaystyle{ \R^1, }[/math] where it coincides with the Euclidean norm, and [math]\displaystyle{ \R^0, }[/math] where it is trivial.
  15. Rolewicz, Stefan (1987). Functional Analysis and Control Theory: Linear Systems. Mathematics and its Applications (East European Series), 29 (translated from the Polish by Ewa Bednarczuk). Dordrecht; Warsaw: D. Reidel Publishing Co.; PWN—Polish Scientific Publishers. pp. xvi, 524. doi:10.1007/978-94-015-7758-8. ISBN 90-277-2186-6. OCLC 13064804.
  16. Lang, Serge (2002). Algebra (Revised 3rd ed.). New York: Springer Verlag. p. 284. ISBN 0-387-95385-X.
  17. Trèves 2006, pp. 242–243.
  18. Golub, Gene; Van Loan, Charles F. (1996). Matrix Computations (Third ed.). Baltimore: The Johns Hopkins University Press. p. 53. ISBN 0-8018-5413-X.
  19. Narici & Beckenstein 2011, pp. 107–113.
  20. "Relation between p-norms". https://math.stackexchange.com/q/218046.

Bibliography