Karush-Kuhn-Tucker condition - Lagrange multiplier
I've glanced at the paper you linked to. I have a few remarks:
It looks like the authors of the paper managed to derive explicit closed-form expressions for the stationarity conditions for a special class of functions. This is sometimes possible for very specific problems.
If you remember your calculus, for a univariate function $f(x)$, the minimum or maximum is found when $f'(x) = 0$. You can think of the KKT as a generalization of this idea for an optimization problem with constraints.
The reason I (and the authors) say "stationarity" conditions instead of "optimality" conditions is that the KKT conditions are necessary but not sufficient for optimality. For a true guarantee of optimality, you need to test your solution against SSOCs (Sufficient Second Order Conditions). Again, the second-order conditions are analogous to the Calculus-101 idea of checking whether $f''(x)$ is positive or negative, which tells you whether the solution of $f'(x) = 0$ is a minimum or a maximum.
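To make the calculus analogy concrete, here is a toy illustration (my own example, not from the paper) of the first- and second-order conditions for a univariate function $f(x) = (x-3)^2 + 1$:

```python
# Toy illustration: first- and second-order conditions for
# f(x) = (x - 3)**2 + 1, whose unique minimum is at x = 3.

def f(x):
    return (x - 3) ** 2 + 1

def f_prime(x):
    # f'(x) = 2(x - 3); stationarity means f'(x) = 0.
    return 2 * (x - 3)

def f_double_prime(x):
    # f''(x) = 2 > 0 everywhere, so any stationary point is a minimum.
    return 2.0

# Solve the stationarity condition f'(x) = 0 with Newton's method
# applied to the derivative (converges in one step here, since f' is linear).
x = 0.0
for _ in range(10):
    x = x - f_prime(x) / f_double_prime(x)

stationary_point = x                  # approximately 3.0
is_minimum = f_double_prime(x) > 0    # the second-order check
```

The KKT conditions play the role of `f_prime(x) == 0` for constrained, multivariate problems, and the SSOCs play the role of the `f_double_prime(x) > 0` check.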
Nevertheless, in this case, under the assumptions in the paper and in the absence of saddle points, we may presume the solution to be optimal.
This means you do not have to use any optimization algorithm/library in this particular case. For a given pricing function, all you have to do is evaluate a function based on the results given in the paper. This is a very nice property -- it means the optimization is implicitly done for you. See Example 1 in the paper for a worked example.
In general, the KKT conditions do not provide an algorithm for finding an optimum; they are only a test for stationarity. However, in very specific cases (e.g. when regularity conditions hold, the problem is convex, there are no inequality constraints, etc.), the KKT conditions can yield closed-form expressions for the optimal solution of a problem.
In fact... this is the idea behind the Method of Lagrange Multipliers. The Lagrange Multiplier method is simply a special case of the KKT conditions with no inequality constraints.
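As a hedged toy example of this (my own, not from the paper): consider minimizing $f(x,y) = x^2 + y^2$ subject to $x + y = 1$. The Lagrange stationarity system is linear here, so it can be solved in closed form:

```python
# Minimize f(x, y) = x**2 + y**2 subject to g(x, y) = x + y - 1 = 0.
# Lagrangian: L = x**2 + y**2 + lam * (x + y - 1).
# Stationarity:  dL/dx = 2x + lam = 0,  dL/dy = 2y + lam = 0
# Feasibility:   x + y = 1
# From stationarity, x = y = -lam/2; substituting into x + y = 1
# gives -lam = 1, i.e. lam = -1 and x = y = 1/2.

lam = -1.0
x = -lam / 2
y = -lam / 2

# Verify all three conditions numerically (each should be zero).
stationarity_x = 2 * x + lam
stationarity_y = 2 * y + lam
feasibility = x + y - 1
```

Because the system is linear, "solving the KKT conditions" and "optimizing" coincide; this is the nice property the paper exploits for its class of functions.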
Side Note: one of the reasons it is difficult to use the KKT conditions as a practical algorithm for finding stationary/optimal points is the "complementarity conditions" in the KKT system (see the Wikipedia article). When you have inequality constraints, you must satisfy a set of constraints known as "complementarity constraints". These can contain nasty, nonconvex, degenerate bilinear terms, and special techniques are required to handle them -- enumerative methods are needed to obtain global optimality, and the problem is therefore NP-hard. There is a burgeoning field of research on complementarity problems (and the related problem class of MPECs, mathematical programs with equilibrium constraints)... currently a lot of work is being done on characterizing different types of stationarity conditions for this class of problems. It is outside the scope of the question, but I thought I'd just let you know what's out there.
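To see where the combinatorial difficulty comes from, here is a minimal sketch on a toy problem of mine (not from the paper): minimize $x^2$ subject to $x \ge 1$. The complementarity condition $\mu \, (1 - x) = 0$ is bilinear, so you must enumerate which constraints are active:

```python
# Minimize x**2 subject to x >= 1, written as g(x) = 1 - x <= 0.
# KKT system:  2x - mu = 0   (stationarity)
#              mu * (1 - x) = 0   (complementarity, the bilinear term)
#              mu >= 0,  1 - x <= 0
# The complementarity condition forces a case split:

candidates = []

# Branch 1: mu = 0. Stationarity gives x = 0, but then 1 - x = 1 > 0,
# so the point is infeasible and the branch is discarded.
mu, x = 0.0, 0.0
if 1 - x <= 0:
    candidates.append((x, mu))

# Branch 2: the constraint is active, x = 1. Stationarity gives
# mu = 2x = 2 >= 0, so all KKT conditions hold.
x = 1.0
mu = 2 * x
if mu >= 0:
    candidates.append((x, mu))

# With n inequality constraints there are up to 2**n such branches,
# which is why solving the KKT system directly is combinatorial.
solution = candidates[0]
```

Here only one branch survives, but in general each inequality constraint doubles the number of active-set cases to check.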
The KKT conditions are necessary conditions for a constrained optimization problem to have a local minimum. For complex problems, these conditions usually motivate the development of minimization algorithms rather than attempts to solve the KKT system directly.
There is black-box optimization software available, and it is advisable to take advantage of it. Perhaps SQP (sequential quadratic programming) would be ideal for this problem.
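As a sketch of what using such a solver looks like (assuming SciPy is available; its `SLSQP` method is an SQP-type algorithm), here is the toy problem of minimizing $x^2 + y^2$ subject to $x + y = 1$, whose known solution is $x = y = 1/2$:

```python
# Off-the-shelf SQP-type solve with SciPy's SLSQP method.
# Problem: minimize x**2 + y**2 subject to x + y = 1.
from scipy.optimize import minimize

res = minimize(
    fun=lambda v: v[0] ** 2 + v[1] ** 2,      # objective
    x0=[0.0, 0.0],                            # starting guess
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda v: v[0] + v[1] - 1}],
)

x_opt, y_opt = res.x    # both approximately 0.5
```

The solver handles the KKT machinery internally; you only supply the objective and constraints.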
The Optimization Toolbox in Matlab is really easy to use. The function fmincon is likely to solve your problem. See: http://www.mathworks.com/products/optimization/
There are alternatives, though I personally have never used them. See: http://wwwasdoc.web.cern.ch/wwwasdoc/minuit/minmain.html