What Numerical Methods Are Known to Solve $L_1$ Regularized Quadratic Programming Problems?
Oh, absolutely, there are lots of tools. Do a search for proximal gradient methods in your favorite search engine; these are generalizations of projected gradient methods. For specific tools, search for TFOCS (disclosure below), FISTA, NESTA, SPGL1, GPSR, SpaRSA, L1LS... and the bibliographies for these will lead to even more options. Even better, see this rather extensive list of first-order methods compiled by Stephen Becker as part of his larger sparse- and low-rank approximation wiki.
A basic proximal gradient method considers your function as the sum of a smooth term $f(x)=\tfrac{1}{2}x^TAx+b^Tx$ and a non-smooth but "prox-capable" term $g(x)=\lambda\|x\|_1$. At each iteration, it uses the gradient of the smooth term $v=\nabla f(x)=Ax+b$ and computes the next step using a proximal minimization $$x^{+} = \mathop{\textrm{arg min}}_z ~ \lambda \|z\|_1 + \tfrac{1}{2t} \|x-tv-z\|_2^2$$ where $t$ is a step size. Many presentations of these methods assume a fixed step size determined by a Lipschitz constant of the gradient, but such step sizes are typically quite conservative in practice, so a simple step size adaptation strategy is essential for good performance.
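For concreteness, here is a minimal MATLAB sketch of that basic iteration, using soft-thresholding as the proximal step and a simple backtracking rule for the step size. The names A, b, lambda, x0 and the iteration and tolerance limits below are placeholders for this illustration, not part of any particular package.

soft = @(z,r) sign(z) .* max(abs(z) - r, 0);   % prox of r*||.||_1 (soft-thresholding)
f    = @(x) 0.5*(x'*A*x) + b'*x;               % smooth part
x = x0;  t = 1;
for k = 1:500
    v = A*x + b;                               % gradient of the smooth part
    while true
        z = soft(x - t*v, t*lambda);           % proximal (soft-threshold) step
        % backtrack until the standard sufficient-decrease condition holds
        if f(z) <= f(x) + v'*(z - x) + norm(z - x)^2/(2*t), break; end
        t = t/2;
    end
    if norm(z - x) <= 1e-8*max(1, norm(x)), x = z; break; end
    x = z;
    t = 2*t;                                   % let the step size grow again
end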
More advanced methods use the same gradient and proximal minimization computations, but mix that information together in different ways to achieve a provable improvement in worst-case performance. But it turns out that your function is strongly convex, due to your claim that $A$ is positive definite. So a standard proximal gradient algorithm is likely a very good choice.
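As one example of recombining those same two ingredients, a FISTA-style accelerated variant reuses the gradient and soft-thresholding steps but adds a momentum term. The sketch below assumes the same A, b, lambda as above and, for brevity, a fixed step size $1/L$ with $L=\|A\|_2$ rather than backtracking.

t = 1/norm(A);                                 % fixed step 1/L, L = ||A||_2 (A is PD)
x = zeros(length(b), 1);  y = x;  theta = 1;
for k = 1:500
    xprev = x;
    g = A*y + b;                               % gradient at the extrapolated point y
    x = sign(y - t*g) .* max(abs(y - t*g) - t*lambda, 0);    % soft-threshold step
    theta_new = (1 + sqrt(1 + 4*theta^2))/2;
    y = x + ((theta - 1)/theta_new)*(x - xprev);             % momentum/extrapolation
    theta = theta_new;
    if norm(x - xprev) <= 1e-8*max(1, norm(x)), break; end
end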
I co-wrote TFOCS with Stephen Becker and Emmanuel Candes. TFOCS is a MATLAB-based toolbox that lets you build custom first-order solvers using a variety of algorithms, smooth functions, linear operators, and prox functions. We wrote an accompanying journal article for it. Our goal was to structure the code in such a way that it is easy to try a variety of common first-order methods on your models, and to make the code that implements the iterations themselves easy to read. But really, it's just one of many software packages out there, and of course if you don't want to use MATLAB, you'll have to try something else!
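If I recall the TFOCS interface correctly, assembling this particular problem looks roughly like the sketch below; the names smooth_quad and prox_l1 and the tfocs calling sequence are written from memory and should be checked against the user guide.

smoothF    = smooth_quad(A, b);    % f(x) = 0.5*x'*A*x + b'*x
nonsmoothF = prox_l1(lambda);      % g(x) = lambda*||x||_1
x = tfocs(smoothF, [], nonsmoothF, zeros(n, 1));   % [] = identity affine map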
If you're willing and able to factorize $A=R^TR$, then you can indeed use standard LASSO tools: $$\min_x \tfrac{1}{2} \| R x - \bar{b} \|_2^2 + \lambda \|x\|_1 - \tfrac{1}{2} \bar{b}^T\bar{b}, \qquad \bar{b}\triangleq - R^{-T} b$$ Don't limit yourself to LARS; there are a lot of alternative approaches out there, and many of the packages above specifically support the LASSO. But the nice thing about many (but not all) of the tools above, including TFOCS, is that you do not necessarily need to do the factorization if you prefer not to.
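A sketch of that reduction in MATLAB, assuming $A$ is symmetric positive definite so that chol succeeds:

R = chol(A);            % upper triangular, A = R'*R
bbar = -(R') \ b;       % bbar = -R^{-T} b
% any LASSO solver can now be applied to 0.5*||R*x - bbar||^2 + lambda*||x||_1;
% note that some tools (e.g. MATLAB's lasso) scale the quadratic term by 1/(2N)
% and standardize columns by default, so lambda must be adjusted accordingly.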
Finally, this is of course a convex optimization problem, so any general-purpose convex optimization framework will be able to handle this with aplomb, albeit less efficiently than these more specialized methods. Typically they will split $x$ into positive and negative parts: $$x=x^{+} - x^{-}, \quad x^{+}, x^{-} \succeq 0$$ This enables the substitution $\|x\|_1=\vec{1}^T(x^{+}+x^{-})$, in which case the problem becomes a standard QP.
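For reference, here is a sketch of that splitting fed to MATLAB's quadprog (Optimization Toolbox), where n is the dimension of $x$; the Hessian of the split problem is only positive semidefinite, but quadprog handles that.

e  = ones(n, 1);
H  = [A, -A; -A, A];                    % 0.5*z'*H*z = 0.5*(xp - xm)'*A*(xp - xm)
f  = [b + lambda*e; -b + lambda*e];     % b'*(xp - xm) + lambda*1'*(xp + xm)
lb = zeros(2*n, 1);
z  = quadprog(H, f, [], [], [], [], lb, []);
x  = z(1:n) - z(n+1:end);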
Major edit history:
- Clarified that the OP's function is strongly convex due to the positive definiteness of $A$.
- Added a note about the importance of step size adaptation in practice; thanks littleO.
- Added links to Stephen Becker's sparse approximation wiki; thanks icurays1.
CVX (MATLAB) or CVXPY (Python) would allow you to solve that really easily, as long as your problem isn't too large.
The CVX code would be just:
cvx_begin
    variable x(n)
    minimize( 1/2*quad_form(x,A) + b'*x + lambda*norm(x,1) )
cvx_end