This post is another tour of quadratic programming algorithms and applications in R. First, we look at the quadratic program that lies at the heart of support vector machine (SVM) classification. Then we'll look at a very different quadratic programming demo problem that models the energy of a circus tent. The key difference between these two problems is that the energy minimization problem has a positive definite system matrix, whereas the SVM problem has only a positive semi-definite one. This distinction has important implications when it comes to choosing a quadratic program solver, and we'll do some solver benchmarking to further illustrate the issue.

# QP and SVM

Let’s consider the following very simple version of the SVM problem. Suppose we have observed $(x_i, y_i)$, with $x_i \in \mathbb{R}^p$ and $y_i \in \{-1, +1\}$, for $i = 1, \dots, n$ (perfectly linearly separable) training cases. We let $y$ denote the vector of training labels and $X$ the $n \times p$ matrix of predictor variables. Our task is to find the hyperplane in $\mathbb{R}^p$ which “best separates” our two classes of labels $+1$ and $-1$. Visually:

```r
library("e1071")
library("rgl")
```

The problem of finding this optimal hyperplane is a quadratic program (the primal SVM problem). For computational reasons, however, it is much easier to work with a related quadratic program, the SVM dual problem. With a general kernel function $K$, this quadratic program is:

$$\max_{\alpha \in \mathbb{R}^n} \; \sum_{i=1}^{n} \alpha_i \;-\; \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{subject to} \quad \alpha_i \geq 0, \;\; \sum_{i=1}^{n} \alpha_i y_i = 0.$$

In the simple case of a linear kernel, $K(x_i, x_j) = x_i^T x_j$, and so we can rewrite in matrix notation as

$$\max_{\alpha \in \mathbb{R}^n} \; \mathbf{1}^T \alpha - \frac{1}{2} \alpha^T Q \alpha
\quad \text{subject to} \quad \alpha \geq 0, \;\; y^T \alpha = 0,$$

with

$$Q = (\operatorname{diag} y) \, X X^T \, (\operatorname{diag} y), \qquad \text{i.e.} \qquad Q_{ij} = y_i y_j \, x_i^T x_j,$$

where $x_i$ denotes the $i$-th row of $X$ (the $i$-th column of $X^T$).

It is important to note that the matrix $Q$ is symmetric positive semi-definite, since $Q = M M^T$ with $M = (\operatorname{diag} y) X$, and for any nonzero $v$ in $\mathbb{R}^n$:

$$v^T Q v = v^T M M^T v = \| M^T v \|^2 \geq 0.$$

However, in the context of SVM the matrix $Q$ will usually not be positive definite. In particular, the rank of $Q$ is at most $p$ (the number of features), which is typically much less than the number of observations $n$. By the Rank-Nullity theorem, the nullspace of $Q$ has dimension at least $n - p$, which we expect to be quite large. So, intuitively, quite a few nonzero $v$ map to $0$ under $Q$, and so the above inequality cannot be strengthened to a strict one.
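As a quick numerical check of this rank argument, here is a minimal sketch using simulated data (the dimensions and variable names are illustrative, not from the gist):

```r
set.seed(1)
n <- 100; p <- 3
X <- matrix(rnorm(n * p), n, p)           # n observations, p features
y <- sample(c(-1, 1), n, replace = TRUE)  # arbitrary +/-1 labels

# Q = (diag y) X X' (diag y) is n x n but has rank at most p
Q <- diag(y) %*% X %*% t(X) %*% diag(y)
qr(Q)$rank  # at most p = 3, far below n = 100
```

So even though $Q$ is a large $n \times n$ matrix, almost all of its eigenvalues are zero.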

The fact that the matrix $Q$ is only positive semi-definite and not positive definite is significant, because many quadratic programming algorithms are specialized to solve only positive definite problems. For example, R’s quadprog handles only positive definite problems, whereas solvers like kernlab’s ipop method can also handle semi-definite problems. For the particular semi-definite problems arising from SVM, many highly optimized algorithms exist (for example, the algorithms implemented in libsvm and liblinear). In the following gist, we solve a separable SVM problem for Fisher’s iris data in three ways:

- Using the e1071 wrapper around libsvm.
- Using the semi-definite ipop solver from kernlab. Note that this general interior point solver is implemented in R and it can be quite slow when applied to larger scale problems.
- Using quadprog’s positive definite solver, with a slight perturbation of the SVM data so that the system matrix becomes positive definite. quadprog is a wrapper around the Goldfarb-Idnani dual active-set algorithm implemented in Fortran.
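To make the third approach concrete, here is a minimal sketch (not the gist itself) of solving the perturbed dual with quadprog for two separable iris classes. The choice of features and the size of the ridge term added to $Q$ are illustrative assumptions:

```r
library("quadprog")

# Two perfectly separable classes from iris (setosa vs. versicolor)
d <- iris[iris$Species != "virginica", ]
X <- as.matrix(d[, c("Sepal.Length", "Petal.Length")])
y <- ifelse(d$Species == "setosa", 1, -1)
n <- nrow(X)

# Linear-kernel dual matrix Q_ij = y_i y_j x_i' x_j, plus a tiny
# ridge so that the Cholesky factorization inside solve.QP succeeds
Q    <- (y * X) %*% t(y * X)
Dmat <- Q + 1e-6 * diag(n)

# solve.QP minimizes (1/2) a' D a - d' a  subject to  A' a >= b,
# with the first meq constraints treated as equalities
sol <- solve.QP(Dmat, dvec = rep(1, n),
                Amat = cbind(y, diag(n)),   # y' a = 0, then a >= 0
                bvec = rep(0, n + 1), meq = 1)
alpha <- sol$solution

# Recover the primal hyperplane w'x + b = 0 from the dual solution
w  <- colSums(alpha * y * X)
sv <- which(alpha > 1e-5)                   # support vectors
b  <- mean(y[sv] - X[sv, ] %*% w)
```

Note that the dual maximization becomes a minimization of the negated objective in solve.QP’s convention, which is why `dvec` is simply a vector of ones.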

Note that only the first method is recommended for solving SVM problems in real life. The second and third methods are included only for the sake of demonstrating the mechanics of quadratic programming.

We see that all three methods generate quite similar solutions for this highly stable and quite simple problem.

As another example of the SVM technique, here is a minimal example that uses SVM to classify topics in the Reuters21578 corpus.

# The Circus Tent Revisited

In a previous post, I explained how the solution of a quadratic program can model the shape of a circus tent.
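To recall the structure of that problem: the tent surface minimizes a discretized energy of the form $\frac{1}{2} x^T L x$ over grid-point heights $x$, subject to lying above the tent poles. A minimal sketch of building the grid system matrix as a Kronecker sum of one-dimensional second-difference matrices (one common construction, assuming Dirichlet boundary conditions; the grid size is illustrative):

```r
# 1-D second-difference matrix tridiag(-1, 2, -1) on an n-point grid
n  <- 20
T1 <- diag(2, n)
T1[abs(row(T1) - col(T1)) == 1] <- -1

# 2-D analogue on the n x n grid via a Kronecker sum;
# with Dirichlet boundaries this matrix is symmetric positive definite
L <- kronecker(diag(n), T1) + kronecker(T1, diag(n))

min(eigen(L, symmetric = TRUE, only.values = TRUE)$values)  # strictly > 0
```

The strictly positive smallest eigenvalue is exactly what separates this problem from the SVM dual above.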

The system matrix in the circus tent problem is symmetric positive definite and therefore an ideal candidate for the quadprog solver. In the gist below, we build a series of increasingly larger circus tent problems and use this to profile the solver times of ipop and quadprog.
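A minimal timing harness in the same spirit (a sketch only, assuming quadprog and kernlab are installed; it benchmarks a generic random positive definite QP rather than the tent problem itself):

```r
library("quadprog")
library("kernlab")

time_solvers <- function(n) {
  M <- matrix(rnorm(n * n), n, n)
  D <- crossprod(M) + diag(n)   # symmetric positive definite system matrix
  d <- rnorm(n)

  # quadprog: min (1/2) x' D x - d' x  subject to  x >= 0
  t_qp <- system.time(
    solve.QP(D, d, Amat = diag(n), bvec = rep(0, n))
  )["elapsed"]

  # ipop: min c' x + (1/2) x' H x  subject to  b <= A x <= b + r, l <= x <= u
  t_ip <- system.time(
    ipop(c = -d, H = D, A = diag(n), b = rep(0, n),
         l = rep(0, n), u = rep(100, n), r = rep(200, n))
  )["elapsed"]

  c(quadprog = unname(t_qp), ipop = unname(t_ip))
}

sapply(c(50, 100, 200), time_solvers)
```

Doubling the problem size a few times and inspecting the resulting table is enough to see both the large constant-factor gap and the superlinear growth discussed below.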

From this simple experiment we can make the following observations:

- For this symmetric positive definite problem, quadprog’s solver is significantly faster than ipop. This is not surprising given that quadprog is calling compiled Fortran routines whereas ipop is an R implementation.
- The time complexity for both solvers is superlinear in the problem size. Roughly, both solvers appear to be nearly cubic in problem size.
- Even though both the system and constraint matrices are sparse, neither solver is able to take advantage of sparse matrix representations. In a pinch, it’s worth noting that a little memory can be saved by using quadprog’s solve.QP.compact, which uses a more compact representation of the constraint matrix.

# Comments

Jelmer Ypma, who wrote and maintains the nloptr package interfacing with the NLopt library, also wrote a package called ipoptr which interfaces with an installation of Ipopt written in C++; this should make its speed much closer to that of quadprog. It isn’t simple to install, though (I’ve never done it). See http://www.ucl.ac.uk/~uctpjyy/ipoptr.html.

Thank you very much for this reference; I was not aware of the ipoptr project. After some fiddling around, I was able to build ipoptr. I will post an update with details on building the package if I can get it working on my example problem. Unfortunately, the licensing for the Ipopt backend is rather complex and restrictive, so it may not be an option that works for the whole R community.
