Fundamental theorem of linear programming

In mathematical optimization, the fundamental theorem of linear programming states, in a weak formulation, that the maxima and minima of a linear function over a convex polygonal region occur at the region's corners. Further, if an extreme value occurs at two corners, then it must also occur everywhere on the line segment between them.
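
For instance, maximizing [math]\displaystyle{ f(x, y) = x + y }[/math] over the unit square attains its maximum value 2 only at the corner [math]\displaystyle{ (1, 1) }[/math]; maximizing [math]\displaystyle{ f(x, y) = x }[/math] instead attains the maximum value 1 at the two corners [math]\displaystyle{ (1, 0) }[/math] and [math]\displaystyle{ (1, 1) }[/math], and hence at every point of the edge joining them.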

Statement

Consider the optimization problem

[math]\displaystyle{ \min c^T x \text{ subject to } x \in P }[/math]

where [math]\displaystyle{ P = \{x \in \mathbb{R}^n : Ax \leq b\} }[/math]. If [math]\displaystyle{ P }[/math] is a bounded polyhedron (and thus a polytope) and [math]\displaystyle{ x^\ast }[/math] is an optimal solution to the problem, then [math]\displaystyle{ x^\ast }[/math] is either an extreme point (vertex) of [math]\displaystyle{ P }[/math], or lies on a face [math]\displaystyle{ F \subset P }[/math] consisting entirely of optimal solutions.
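
As a quick numerical illustration of the statement (a sketch, not part of the original article), the following Python snippet uses SciPy's linprog on a small hypothetical instance, the unit square written as [math]\displaystyle{ \{x : Ax \leq b\} }[/math], and confirms that the reported minimizer is a vertex.

import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: minimize c^T x over P = {x : A x <= b},
# where P is the unit square [0, 1]^2 written with four inequalities.
c = np.array([-1.0, -2.0])
A = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

# bounds=(None, None) so the feasible region is defined by A x <= b alone
res = linprog(c, A_ub=A, b_ub=b, bounds=(None, None), method="highs")
print(res.x)    # [1. 1.] -- the optimum is attained at a vertex of P
print(res.fun)  # -3.0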

Proof

Suppose first that [math]\displaystyle{ c \neq 0 }[/math]; otherwise every point of [math]\displaystyle{ P }[/math] is optimal and the claim is immediate. Suppose, for the sake of contradiction, that [math]\displaystyle{ x^\ast \in \mathrm{int}(P) }[/math]. Then there exists some [math]\displaystyle{ \epsilon \gt 0 }[/math] such that the ball of radius [math]\displaystyle{ \epsilon }[/math] centered at [math]\displaystyle{ x^\ast }[/math] is contained in [math]\displaystyle{ P }[/math], that is, [math]\displaystyle{ B_{\epsilon}(x^\ast) \subset P }[/math]. Therefore,

[math]\displaystyle{ x^\ast - \frac{\epsilon}{2} \frac{c}{||c||} \in P }[/math] and
[math]\displaystyle{ c^T\left( x^\ast - \frac{\epsilon}{2} \frac{c}{||c||}\right) = c^T x^\ast - \frac{\epsilon}{2} \frac{c^T c}{||c||} = c^T x^\ast - \frac{\epsilon}{2} ||c|| \lt c^T x^\ast. }[/math]

Hence [math]\displaystyle{ x^\ast }[/math] is not an optimal solution, a contradiction. Therefore, [math]\displaystyle{ x^\ast }[/math] must lie on the boundary of [math]\displaystyle{ P }[/math]. If [math]\displaystyle{ x^\ast }[/math] is not itself a vertex, it must be a convex combination of vertices of [math]\displaystyle{ P }[/math], say [math]\displaystyle{ x_1, ..., x_t }[/math]. Then [math]\displaystyle{ x^\ast = \sum_{i=1}^t \lambda_i x_i }[/math] with [math]\displaystyle{ \lambda_i \gt 0 }[/math] (vertices receiving zero weight can simply be omitted) and [math]\displaystyle{ \sum_{i=1}^t \lambda_i = 1 }[/math]. Observe that

[math]\displaystyle{ 0=c^{T}\left(\left(\sum_{i=1}^{t}\lambda_{i}x_{i}\right)-x^{\ast}\right)=c^{T}\left(\sum_{i=1}^{t}\lambda_{i}(x_{i}-x^{\ast})\right)=\sum_{i=1}^{t}\lambda_{i}(c^{T}x_{i}-c^{T}x^{\ast}). }[/math]

Since [math]\displaystyle{ x^{\ast} }[/math] is an optimal solution, all terms in the sum are nonnegative. Since the sum is equal to zero, each individual term must be equal to zero. Because every [math]\displaystyle{ \lambda_i \gt 0 }[/math], this gives [math]\displaystyle{ c^{T}x^{\ast}=c^{T}x_{i} }[/math] for each [math]\displaystyle{ x_i }[/math], so every [math]\displaystyle{ x_i }[/math] is also optimal. Therefore all points on the face whose vertices are [math]\displaystyle{ x_1, ..., x_t }[/math] are optimal solutions.
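
The contradiction step above can also be checked numerically. The sketch below (using the same hypothetical square and cost vector as in the earlier snippet) verifies that moving from an interior point in the direction [math]\displaystyle{ -c/||c|| }[/math] by [math]\displaystyle{ \epsilon/2 }[/math] stays inside [math]\displaystyle{ P }[/math] and strictly decreases the objective.

import numpy as np

# Same hypothetical data as before: P = {x : A x <= b} is the unit square.
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

x = np.array([0.5, 0.5])   # an interior point of P
eps = 0.2                  # B_eps(x) is contained in P for this choice
step = x - (eps / 2) * c / np.linalg.norm(c)

print(np.all(A @ step <= b))  # True: the shifted point is still feasible
print(c @ step < c @ x)       # True: its objective value is strictly smaller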
