The Moe Nonlinear Optimization Library
Chemical Computing Group Inc.
1010 Sherbrooke Street W, Suite 910; Montreal, Quebec; Canada H3A 2R7
Abstract. The nonlinear optimization methods provided in MOE are described along with performance data. Among the methods supported by the library are Steepest Descent, Conjugate Gradient, and Truncated Newton. In particular, the Truncated Newton method delivers superlinear convergence rates yet requires little memory.
An important SVL program library provided with MOE is the unconstrained nonlinear optimization library. Among the methods contained in the library are Steepest Descent, Conjugate Gradient, and Truncated Newton. Each of the methods is suitable for large-scale problems; that is, each requires memory linear in the number of free variables. In this article we present the methods along with the results of an experiment comparing them.
Each of the optimization methods has the same basic structure. Let x[k] be the estimate of the minimizer of a twice continuously differentiable function f at iteration k. At each iteration, the following steps are performed:

1. Test the termination criteria; if they are satisfied, stop and return x[k].
2. Calculate a descent search direction p[k].
3. Calculate a step size t[k] with a line search along p[k].
4. Set x[k+1] = x[k] + t[k] p[k].
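This basic structure can be sketched in Python (a minimal illustration, not MOE's SVL implementation; the function names, the Armijo constant, and the backtracking factor are illustrative choices):

```python
import math

def minimize(f_and_grad, x0, direction, tol=1e-6, max_iter=1000):
    """Generic descent iteration: `direction` maps the gradient to a
    search direction p[k]; a backtracking (Armijo) line search picks
    the step size t[k]; iteration stops when the gradient is small."""
    x = list(x0)
    fx, g = f_and_grad(x)
    for _ in range(max_iter):
        if math.sqrt(sum(gi * gi for gi in g)) < tol:
            break                                  # termination criterion
        p = direction(g)                           # search direction p[k]
        slope = sum(gi * pi for gi, pi in zip(g, p))
        t = 1.0
        for _ in range(50):                        # backtracking line search
            x_new = [xi + t * pi for xi, pi in zip(x, p)]
            f_new, g_new = f_and_grad(x_new)
            if f_new <= fx + 1e-4 * t * slope:     # Armijo sufficient decrease
                break
            t *= 0.5
        x, fx, g = x_new, f_new, g_new
    return x, fx

def steepest_descent_direction(g):
    """Steepest Descent choice: p[k] = -grad f(x[k])."""
    return [-gi for gi in g]
```

The objective is assumed to return the pair (function value, gradient); only the `direction` argument changes between methods, mirroring the structure described above.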
Termination Criteria. Most optimization methods converge only in the limit (and even methods that would terminate in a finite number of steps do so only with exact arithmetic). Termination criteria are important for a number of reasons:
These conditions matter mostly for ill-conditioned problems; in well-conditioned problems the usual gradient test suffices.
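For illustration, a termination test combining the usual gradient criterion with stagnation tests for ill-conditioned problems might look like the following Python sketch (the thresholds are hypothetical defaults, not MOE's actual tolerances):

```python
import math

def converged(g, f_old, f_new, step, grad_tol=5e-4, f_tol=1e-9, x_tol=1e-9):
    """Typical termination tests: stop on a small RMS gradient or, for
    ill-conditioned problems, on negligible function decrease or a
    negligible step length. Thresholds here are illustrative only."""
    rms_g = math.sqrt(sum(gi * gi for gi in g) / len(g))
    if rms_g < grad_tol:
        return True                   # the usual gradient test
    if abs(f_old - f_new) <= f_tol * (1.0 + abs(f_new)):
        return True                   # function value has stagnated
    if step <= x_tol:
        return True                   # step length is negligible
    return False
```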
Search Direction. Each of the methods uses a different procedure to determine the search direction p[k]. Steepest Descent is the simplest of the methods and is most often run for a few iterations before switching to one of the more powerful methods; it takes p[k] = - grad f(x[k]), which is guaranteed to be a descent direction. The method of Conjugate Gradients is well known and described elsewhere.
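For completeness, one common variant of nonlinear Conjugate Gradients (Fletcher-Reeves, paired here with a backtracking Armijo line search) can be sketched as follows; the variant and line search used in MOE may differ:

```python
import math

def fletcher_reeves(f_and_grad, x0, tol=1e-6, max_iter=2000):
    """Nonlinear conjugate gradient (Fletcher-Reeves variant).
    `f_and_grad` returns the pair (function value, gradient list)."""
    x = list(x0)
    fx, g = f_and_grad(x)
    p = [-gi for gi in g]              # first direction: steepest descent
    gg = sum(gi * gi for gi in g)
    for _ in range(max_iter):
        if math.sqrt(gg) < tol:
            break
        slope = sum(gi * pi for gi, pi in zip(g, p))
        if slope >= 0.0:               # not a descent direction: restart
            p = [-gi for gi in g]
            slope = -gg
        t = 1.0
        for _ in range(60):            # backtracking (Armijo) line search
            x_new = [xi + t * pi for xi, pi in zip(x, p)]
            f_new, g_new = f_and_grad(x_new)
            if f_new <= fx + 1e-4 * t * slope:
                break
            t *= 0.5
        gg_new = sum(gi * gi for gi in g_new)
        beta = gg_new / gg             # Fletcher-Reeves update
        p = [-gi + beta * pi for gi, pi in zip(g_new, p)]
        x, fx, g, gg = x_new, f_new, g_new, gg_new
    return x, fx
```

Note the restart to steepest descent whenever the computed direction fails to be a descent direction, a standard safeguard for nonlinear CG.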
The Truncated Newton method uses an approximate solution to the Newton equations to determine a search direction. If, on iteration k, we expand f with a Taylor series along a direction p and truncate it after the second-order term,

    f(x[k] + t p) ≈ f(x[k]) + t grad f(x[k])^T p + (t^2/2) p^T H(x[k]) p

where H(x[k]) denotes the Hessian of f at x[k], we obtain a local quadratic model. The critical points of the model with respect to t and p occur when

    grad f(x[k])^T p + t p^T H(x[k]) p = 0   and   t grad f(x[k]) + t^2 H(x[k]) p = 0

that is (taking t = 1), when p satisfies the Newton equations H(x[k]) p = - grad f(x[k]).
Solving the Newton equations exactly is impractical in large-scale work: forming and factoring the Hessian demands too much memory and time, and far from a minimizer the Hessian need not be positive definite, so an exact solution need not be a descent direction. The MOE Truncated Newton method uses two techniques to overcome these problems:
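Whatever the precise safeguards, the core of a truncated Newton method is an inner linear conjugate-gradient iteration that approximately solves the Newton equations, with Hessian-vector products obtained by finite differences of the gradient so the Hessian is never formed. A generic Python sketch (not necessarily the exact techniques used in MOE; the tolerances and the finite-difference step are illustrative):

```python
import math

def tn_direction(grad_f, x, g, inner_tol=0.1, max_inner=50, h=1e-6):
    """One truncated-Newton search direction: approximately solve the
    Newton equations H p = -g by linear conjugate gradients, with
    Hessian-vector products from finite differences of the gradient.
    Falls back to steepest descent on immediate negative curvature."""
    n = len(x)
    p = [0.0] * n
    r = [-gi for gi in g]              # residual of H p = -g at p = 0
    d = list(r)
    rr = sum(ri * ri for ri in r)
    stop = inner_tol * math.sqrt(rr)   # truncation: loose relative residual
    for _ in range(max_inner):
        # Hessian-vector product: H d ~ (grad f(x + h d) - grad f(x)) / h
        xh = [xi + h * di for xi, di in zip(x, d)]
        Hd = [(gh - gi) / h for gh, gi in zip(grad_f(xh), g)]
        dHd = sum(di * hi for di, hi in zip(d, Hd))
        if dHd <= 0.0:                 # negative curvature: stop the inner CG
            return d if all(pi == 0.0 for pi in p) else p
        a = rr / dHd
        p = [pi + a * di for pi, di in zip(p, d)]
        r = [ri - a * hi for ri, hi in zip(r, Hd)]
        rr_new = sum(ri * ri for ri in r)
        if math.sqrt(rr_new) < stop:   # truncate: p is accurate enough
            break
        d = [ri + (rr_new / rr) * di for ri, di in zip(r, d)]
        rr = rr_new
    return p
```

Because the inner iteration needs only gradient evaluations and vectors of length n, the memory requirement remains linear in the number of free variables, as stated above.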
As an example, we ran the optimization methods described above on the protein 6RXN from the Brookhaven Protein Data Bank. The AMBER 1989 parameter set was used with a cutoff of 7.5 Å and a distance-dependent dielectric. Each method began with 5 steps of Steepest Descent and was terminated when the RMS gradient fell below 0.0002. The line search methodology was held fixed for all of the tests; that is, only the selection of the descent direction differed.
The following table summarizes the results of the experiments. For each method, we report the total number of iterations required, the total number of function evaluations, and the normalized elapsed time required.
Since the library is written entirely in SVL, it can be used to optimize functions written in SVL. Any objective function that calculates the function value and gradient can be minimized with the Optimization Library. The MOE Optimization Library is a powerful tool that has been used in a number of applications, including energy minimization, Implicit Euler dynamics, partial charge methods, and nonlinear fitting.
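As an illustration of this calling convention, here is a nonlinear-fitting objective written in Python rather than SVL: it returns both the function value and the gradient, the pair an optimizer of this kind consumes. The model y ≈ a · exp(b · t) and all names are hypothetical choices for the example:

```python
import math

def fit_objective(t, y):
    """Builds a least-squares objective for the hypothetical model
    y ~ a * exp(b * t). The returned function maps x = [a, b] to
    (value, gradient), the calling convention assumed in this sketch."""
    def f_and_grad(x):
        a, b = x
        f, ga, gb = 0.0, 0.0, 0.0
        for ti, yi in zip(t, y):
            e = math.exp(b * ti)
            r = a * e - yi              # residual at this data point
            f += r * r
            ga += 2.0 * r * e           # partial derivative w.r.t. a
            gb += 2.0 * r * a * ti * e  # partial derivative w.r.t. b
        return f, [ga, gb]
    return f_and_grad
```

Handing such a function to any of the methods above minimizes the sum of squared residuals over the parameters a and b.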