Simplex method for SDP?
I believe that, as of now, simplex methods have not been extended to SDP. The problem is that the feasible region of an SDP has a curved boundary, so for any point on the boundary I can cook up a linear function that is minimized exactly there. This means there is an infinite set of possible candidates for the minimizing point.
For LP the boundary consists of flat faces, so some minimum always occurs at a vertex of the feasible region. Since there are only finitely many vertices, you can search over this finite set to find your minimizing point, which is essentially what the simplex algorithm does.
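To see the "infinitely many candidates" point concretely, here is a small numpy sketch (my own illustration, not from the answer above). On the spectrahedron of 2x2 PSD matrices with unit trace, every rank-one matrix x x^T with ||x|| = 1 is an extreme point, and for each one the linear objective C = x x^T is maximized there, since maximizing trace(C X) over that set picks out the top eigenvector of C:

```python
import numpy as np

# The set {X : X PSD, trace(X) = 1} of 2x2 matrices has a continuum of
# extreme points: every rank-one matrix x x^T with ||x|| = 1.
# For each such point, the linear objective C = x x^T is maximized over
# the set exactly at X = x x^T (maximizing <C, X> = trace(C X) subject to
# trace(X) = 1, X PSD, returns the top-eigenvector outer product of C).
for theta in np.linspace(0.0, np.pi, 7):
    x = np.array([np.cos(theta), np.sin(theta)])
    C = np.outer(x, x)                  # linear objective "pointing at" x x^T
    w, V = np.linalg.eigh(C)            # eigenvalues ascending
    x_star = V[:, np.argmax(w)]         # top eigenvector of C
    X_star = np.outer(x_star, x_star)   # the maximizer of <C, X>
    assert np.allclose(X_star, np.outer(x, x), atol=1e-8)
```

So, unlike the finitely many vertices of an LP polyhedron, there is one optimizer-candidate here for every direction x on the unit circle.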
[the first part of this answer is similar to Dinakar Muthiah's]
When optimizing a linear function over a convex set, it can always be assumed that an optimal solution lies at an extreme point of the feasible region (if there are several optimal solutions, at least one of them is an extreme point).
In the case of linear programming, these extreme points are the vertices of a polyhedron, which have two nice properties:
- there are only finitely many vertices,
- every vertex admits a simple algebraic description (this is the notion of a basis, which is essentially a list of active inequalities).
However, for semidefinite programming, the feasible region, although convex, typically has infinitely many extreme points, for which there is no clear equivalent of the concept of a basis.
Note that simplex-type methods (also called active-set methods) can be generalized to quadratic programming (minimization of a convex quadratic function over a polyhedron). On the other hand, I am not aware of any such generalization for quadratically constrained quadratic programming (QCQP, i.e. quadratic programming with convex quadratic constraints) or second-order cone programming (a slight extension of QCQP), two problem classes whose instances can be posed as semidefinite programming problems.
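To make the "finite basis" contrast concrete, here is a toy brute-force sketch of my own (names and setup are illustrative, not from the answer above). For a convex QP with the simple constraint x >= 0, every candidate solution is indexed by an active set of constraints, and each active set yields a small linear system via the KKT conditions. A real active-set method pivots between active sets instead of enumerating them all, but the finite, algebraically described candidate set is the point:

```python
import numpy as np
from itertools import chain, combinations

def qp_nonneg_by_active_sets(Q, c):
    """Minimize 0.5 x^T Q x + c^T x subject to x >= 0, with Q positive
    definite, by enumerating active sets -- the finite 'basis-like'
    structure that active-set (simplex-type) QP methods exploit.
    (Illustrative brute force; real solvers pivot, they don't enumerate.)"""
    n = len(c)
    idx = range(n)
    for active in chain.from_iterable(combinations(idx, r) for r in range(n + 1)):
        free = [i for i in idx if i not in active]
        x = np.zeros(n)
        if free:
            # KKT stationarity on the free variables: Q_FF x_F = -c_F
            x[free] = np.linalg.solve(Q[np.ix_(free, free)], -np.asarray(c)[free])
        lam = Q @ x + c          # multipliers for the bounds x >= 0
        if np.all(x >= -1e-9) and np.all(lam[list(active)] >= -1e-9):
            return x             # primal feasible + dual feasible = optimal
    return None

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 1.0])
x = qp_nonneg_by_active_sets(Q, c)
# unconstrained minimizer is (1, -0.5); with x >= 0 the answer is (1, 0)
```

For an SDP there is no analogous finite list of candidate "active sets" to walk through, which is the obstruction discussed above.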
Well, here is some work toward a simplex-like approach to SDP:
http://www4.ncsu.edu/~kksivara/publications/rpi-colloquium.pdf
The author says you can email him for the preprint, but the slides from a talk are there.
If I understand it correctly, it's simplex-like in the sense that it's working with matrices that aren't full rank. So instead of variables entering and leaving a basis, as in LP, you would swap columns in and out. It's not simplex-like in the geometric sense of moving from extreme point to extreme point on a polyhedron.
X can be factorized as X = P^T D P, where P isn't of full rank, so the problem can be reduced to finding the right P. The talk goes into how to update the columns of P iteratively until the optimality criterion (having a PSD slack matrix) is met. The columns with negative reduced costs at any iteration are the eigenvectors associated with the negative eigenvalues of the indefinite slack matrix.
If I understand it all correctly.
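On that reading, the column-selection step can be sketched in a few lines of numpy (this is my own hypothetical illustration of the idea from the slides, not code from the preprint; `entering_columns` is a made-up name):

```python
import numpy as np

def entering_columns(S, tol=1e-9):
    """Given a symmetric slack matrix S, return the eigenvectors for its
    negative eigenvalues -- the analogue of 'columns with negative reduced
    cost', i.e. candidate columns to enter the factor P. If none are
    returned, S is PSD and the optimality criterion is met.
    (Hypothetical sketch of the idea described above.)"""
    w, V = np.linalg.eigh(S)       # eigenvalues in ascending order
    neg = w < -tol
    return V[:, neg], w[neg]

S = np.array([[1.0, 2.0],
              [2.0, 1.0]])         # eigenvalues 3 and -1, so not PSD
cols, vals = entering_columns(S)   # one candidate column, eigenvalue -1
```

So the "pricing" step of the usual simplex method would correspond to an eigendecomposition of the slack matrix, and termination corresponds to all eigenvalues being nonnegative.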