Why does the maximum/minimum of a linear programming problem occur at a vertex?

Suppose your feasible region looks as shown below:

[figure: the feasible region]

Let's assume we're maximizing the objective function $C = ax + by$.

Notice that, for different values of $C$, you get different straight lines with varying $y$-intercepts, but they all have the same slope:

[figure: parallel lines $ax + by = C$ for different values of $C$, sweeping across the feasible region]

Only the lines that cut through the feasible region correspond to attainable values of $C$, because for those lines you can find values of $x$ and $y$ that lie in the feasible region and satisfy $ax + by = C$.

From the second picture it is clear that the maximum value of $ax + by = C$ occurs when the $y$-intercept of these feasible lines is largest (assuming $b > 0$). Consequently the vertex $A$ gives the maximum value of the objective function.
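As a quick sanity check of this picture, here is a minimal numerical sketch in Python (the triangle vertices and the coefficients $a, b$ below are made-up values, not taken from any particular problem): because the objective is linear, no point inside the region can beat the best vertex.

```python
import numpy as np

# Hypothetical feasible region: a triangle with made-up vertices,
# and made-up objective coefficients a, b for C = a*x + b*y.
vertices = np.array([[0.0, 0.0], [4.0, 0.0], [1.0, 3.0]])
a, b = 2.0, 1.0

def objective(points):
    return a * points[:, 0] + b * points[:, 1]

# Sample points inside the triangle as random convex combinations of the
# vertices; by convexity every such point is feasible.
rng = np.random.default_rng(0)
weights = rng.dirichlet([1.0, 1.0, 1.0], size=10_000)
samples = weights @ vertices

print("best value at a vertex:      ", objective(vertices).max())
print("best value at sampled points:", objective(samples).max())
# The sampled points never beat the best vertex: the line a*x + b*y = C with
# the largest attainable C passes through a vertex of the region.
```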


Some basic linear algebra is essential for this. Have you already studied that a bit?

Say you have a point $p$ inside your polyhedron of feasible points in $\mathbb{R}^n$, and let's say you want to maximize the objective function. The objective function is linear, given by some $c \in \mathbb{R}^n$. If $v \in \mathbb{R}^n$ and $c \cdot v = 0$, then $p+v$ has the same objective value as $p$. In other words, one does not improve $p$ by moving it in a direction orthogonal to $c$.

If $v$ has some component parallel to $c$, then $p+v$ will be either strictly better or strictly worse than $p$. Therefore, if you are the point $p$ and you want to improve your objective value, you should look for a direction $v$ in which you can move that has a positive component parallel to $c$, that is, $c \cdot v > 0$. You may not be able to walk exactly in the direction of $c$, because a face of the polyhedron may stand in the way, but you may still be able to walk in a direction having a positive $c$ component.

The only way you can get stuck is if every possible improving direction is blocked by some face; in other words, for every $v$ in the open halfspace $\{v : c \cdot v > 0\}$, the point $p+v$ lies outside the feasible polyhedron. This can only happen if either you already stand at a vertex, or you stand on a face of optimal solutions parallel to the hyperplane $\{v : c \cdot v = 0\}$. If you do not already stand at a vertex, then while staying on that face, you can walk to a vertex without changing your objective value.

In the above explanation, I have used the boundedness of the feasible region: Without boundedness, it might be possible to walk in an improving direction forever, or it might be possible to walk along a face of optimal solutions without ever arriving at a vertex. I have also used convexity: If I do not stand at an optimal solution, then there will always exist an improving direction.
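To make this concrete, here is a minimal 2D sketch of the walking argument in Python (the polygon $Ax \le b$, the objective vector $c$, and the starting point are all made-up values): from an interior point, walk toward $c$; when a face blocks you, project $c$ onto that face and keep walking; stop when two constraints are tight, i.e. at a vertex.

```python
import numpy as np

# Made-up bounded 2D polygon {x : A x <= b} and objective direction c.
A = np.array([[1.0, 1.0],
              [-1.0, 0.0],
              [0.0, -1.0],
              [1.0, -1.0]])
b = np.array([4.0, 0.0, 0.0, 2.0])
c = np.array([2.0, 1.0])

def walk_to_vertex(p, tol=1e-9):
    """Walk in non-worsening directions until two constraints are tight (a 2D vertex)."""
    for _ in range(20):
        active = np.abs(A @ p - b) < tol     # faces we are currently standing on
        if active.sum() >= 2:                # two tight constraints in 2D = a vertex
            return p
        if not active.any():
            v = c.copy()                     # interior point: walk straight toward c
        else:
            n = A[active][0]                 # normal of the face we stand on
            v = c - (c @ n) / (n @ n) * n    # project c onto the face
            if np.allclose(v, 0.0):          # face is a level set of c:
                v = np.array([-n[1], n[0]])  # walk along it at constant objective value
        # Ratio test: the largest step t >= 0 keeping A(p + t*v) <= b.
        # (Boundedness guarantees some constraint eventually blocks the walk.)
        Av, slack = A @ v, b - A @ p
        t = min(slack[i] / Av[i] for i in range(len(b)) if Av[i] > tol)
        p = p + t * v
    return p

p = walk_to_vertex(np.array([1.0, 1.0]))     # start strictly inside the polygon
print("reached:", p, "objective value:", c @ p)
# Ends at the vertex (3, 1), which maximizes 2x + y over this polygon.
```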


Suppose you are not at a vertex of the convex region. If you are in the strict interior of the region, then you can move a little bit in any direction you want. In particular, if your objective is $ax + by$, then you can move a little bit in the direction of $(a,b)$ and get a higher objective value. Similarly, if you are on an edge, let $(a_1,b_1)$ be a vector parallel to the edge. Then the dot product of $(a_1,b_1)$ with $(a,b)$ is either nonnegative or nonpositive. In the first case, you can move along the edge in the direction $(a_1,b_1)$ until you hit a vertex and get at least as good a solution; in the second case, you can move along the edge in the direction $(-a_1,-b_1)$ until you hit a vertex and again get at least as good a solution (here boundedness is used, so that walking along an edge really does end at a vertex). Put all the pieces together and you get that some vertex provides an optimal solution.
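Here is a tiny numeric illustration of the edge case in Python (the edge endpoints and the coefficients $a, b$ are made-up values): the sign of the dot product of the edge direction with $(a,b)$ tells you which endpoint to walk toward, and that endpoint is at least as good as where you started.

```python
import numpy as np

# Made-up objective a*x + b*y and made-up endpoints P, Q of one edge.
a, b = 2.0, 1.0
P, Q = np.array([2.0, 0.0]), np.array([3.0, 1.0])

def obj(z):
    return a * z[0] + b * z[1]

p = 0.4 * P + 0.6 * Q          # a point strictly inside the edge
edge_dir = Q - P               # the vector (a1, b1) parallel to the edge

# If (a1, b1) . (a, b) >= 0, walk toward Q; otherwise walk toward P.
endpoint = Q if np.dot(edge_dir, [a, b]) >= 0 else P
print(obj(p), "<=", obj(endpoint))   # the vertex is at least as good
```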