edu.stanford.rsl.jpop.fortran

Class UncminForJava



  • public class UncminForJava extends Object

    This class contains Java translations of the UNCMIN unconstrained optimization routines. See R.B. Schnabel, J.E. Koontz, and B.E. Weiss, A Modular System of Algorithms for Unconstrained Minimization, Report CU-CS-240-82, Comp. Sci. Dept., University of Colorado at Boulder, 1982.

    IMPORTANT: The "_f77" suffixes indicate that these routines use FORTRAN-style indexing. For example, you will see

       for (i = 1; i <= n; i++)
    rather than
       for (i = 0; i < n; i++)
    To use the "_f77" routines you will have to declare your vectors and matrices to be one element larger (e.g., v[101] rather than v[100], and a[101][101] rather than a[100][100]), and you will have to fill elements 1 through n rather than elements 0 through n - 1. Versions of these programs that use C/Java style indexing will eventually be available. They will end with the suffix "_j".
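    To make the sizing convention concrete, here is a small self-contained sketch; the array contents are arbitrary illustration, not part of UNCMIN:

```java
public class F77IndexingDemo {
    public static void main(String[] args) {
        int n = 3;

        // FORTRAN-style: allocate n + 1 elements and leave index 0 unused.
        double[] v = new double[n + 1];
        double[][] a = new double[n + 1][n + 1];

        // Fill elements 1 through n, as the "_f77" routines expect.
        for (int i = 1; i <= n; i++) {
            v[i] = i;
            for (int j = 1; j <= n; j++) {
                a[i][j] = (i == j) ? 1.0 : 0.0;  // identity matrix over 1..n
            }
        }

        // Element 0 is wasted but must exist, hence "one element larger".
        System.out.println(v.length);      // prints 4, i.e. n + 1
        System.out.println(v[1] + v[n]);   // prints 4.0
    }
}
```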

    This class was translated by a statistician from a FORTRAN version of UNCMIN. It is NOT an official translation. It wastes memory by failing to use the first elements of vectors. When public domain Java optimization routines become available from the people who produced UNCMIN, then THE CODE PRODUCED BY THE NUMERICAL ANALYSTS SHOULD BE USED.

    Meanwhile, if you have suggestions for improving this code, please contact Steve Verrill at steve@ws13.fpl.fs.fed.us.

    • Method Summary

      Methods 
      Modifier and Type    Method and Description
      static void          initialize(int n, double[] x_in_java, double[] typsiz, double[] fscale, int[] method, int[] iexp, int[] msg, int[] ndigit, int[] itnlim, int[] iagflg, int[] iahflg, double[] dlt, double[] gradtl, double[] stepmx, double[] steptl)
                           Deprecated.
      void                 optimizeFunction(int n, double[] x, OptimizableFunction minclass, double[] typsiz, double[] fscale, int[] method, int[] iexp, int[] msg, int[] ndigit, int[] itnlim, int[] iagflg, int[] iahflg, double[] dlt, double[] gradtl, double[] stepmx, double[] steptl, double[] xpls, double[] fpls, double[] gpls, int[] itrmcd, double[][] a, double[] udiag, double[] numericalGradient, double[] p, double[] sx, double[] wrk0, double[] wrk1, double[] wrk2, double[] wrk3)
                           Deprecated.
      void                 optimizeFunction0(int dimension, double[] initialX_in_java, OptimizableFunction function, double[] vectorX, double[] functionValueAtX, double[] gradientAtX, int[] terminationCode, double[][] hessianAtX, double[] diagonalOfHessian)
                           Deprecated.
      void                 optimizeFunction7(int n, double[] x_in_java, OptimizableFunction minclass, double[] typsiz, double[] fscale, int[] method, int[] iexp, int[] msg, int[] ndigit, int[] itnlim, int[] iagflg, int[] iahflg, double[] dlt, double[] gradtl, double[] stepmx, double[] steptl, double[] xpls, double[] fpls, double[] gpls, int[] itrmcd, double[][] a, double[] udiag)
                           Deprecated.
    • Method Detail

      • optimizeFunction0

        @Deprecated
        public void optimizeFunction0(int dimension,
                                      double[] initialX_in_java,
                                      OptimizableFunction function,
                                      double[] vectorX,
                                      double[] functionValueAtX,
                                      double[] gradientAtX,
                                      int[] terminationCode,
                                      double[][] hessianAtX,
                                      double[] diagonalOfHessian)
        Deprecated. 

        The optif0_f77 method minimizes a smooth nonlinear function of n variables. A method that computes the function value at any point must be supplied. (See Uncmin_methods.java and UncminTest.java.) Derivative values are not required. The optif0_f77 method provides the simplest user access to the UNCMIN minimization routines. Without a recompile, the user has no control over options. For details, see the Schnabel et al. reference and the comments in the code. Translated by Steve Verrill, August 4, 1998.

        Parameters:
        dimension - The number of arguments of the function to minimize
        initialX_in_java - The initial estimate of the minimum point
        function - A class that implements the OptimizableFunction interface (see the definition in OptimizableFunction.java). See UncminTest_f77.java for an example of such a class. The class must define:
        1.) a method, evaluate, to minimize. evaluate must have the form public static double evaluate(double x[]) where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
        2.) a method, gradient, that has the form public static double [] gradient(double x[]) where the return value is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
        3.) a method, hessian, that has the form public static double [] [] hessian(double x[]) where the return value is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, then the hessian method should only fill the lower triangle (and diagonal) of the Hessian.
        vectorX - The final estimate of the minimum point
        functionValueAtX - The value of f_to_minimize at the final estimate vectorX
        gradientAtX - The gradient at the local minimum vectorX
        terminationCode - Termination code:
        ITRMCD = 0: Optimal solution found
        ITRMCD = 1: Terminated with gradient small, vectorX is probably optimal
        ITRMCD = 2: Terminated with stepsize small, vectorX is probably optimal
        ITRMCD = 3: Lower point cannot be found, vectorX is probably optimal
        ITRMCD = 4: Iteration limit (150) exceeded
        ITRMCD = 5: Too many large steps, function may be unbounded
        hessianAtX - Workspace for the Hessian (or its estimate) and its Cholesky decomposition
        diagonalOfHessian - Workspace for the diagonal of the Hessian
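        The simplest call can be sketched as follows. The F77Function interface here is a stand-in written for this example that mirrors the documented evaluate/gradient/hessian contract (the real interface is OptimizableFunction, defined in OptimizableFunction.java), and the invocation of optimizeFunction0 is shown only as a comment since it requires the library class:

```java
// Stand-in for the documented OptimizableFunction contract; note the
// 1-based "_f77" indexing: x[1]..x[n] hold the arguments.
interface F77Function {
    double evaluate(double[] x);
    double[] gradient(double[] x);
    double[][] hessian(double[] x);
}

// Rosenbrock's function, a standard unconstrained-minimization test
// problem with minimum f = 0 at (1, 1).
class Rosenbrock implements F77Function {
    public double evaluate(double[] x) {
        double a = x[2] - x[1] * x[1];
        double b = 1.0 - x[1];
        return 100.0 * a * a + b * b;
    }
    // Empty bodies: no analytic derivatives supplied, so the routines
    // would fall back to finite-difference estimates.
    public double[] gradient(double[] x) { return null; }
    public double[][] hessian(double[] x) { return null; }
}

public class Optif0Sketch {
    public static void main(String[] args) {
        int n = 2;
        double[] x0 = new double[n + 1];    // 1-based initial guess
        x0[1] = -1.2;
        x0[2] = 1.0;
        Rosenbrock f = new Rosenbrock();
        System.out.println(f.evaluate(x0)); // approximately 24.2

        // With the real class the call would look roughly like:
        // double[] vectorX = new double[n + 1];
        // double[] functionValueAtX = new double[2];
        // double[] gradientAtX = new double[n + 1];
        // int[] terminationCode = new int[2];
        // double[][] hessianAtX = new double[n + 1][n + 1];
        // double[] diagonalOfHessian = new double[n + 1];
        // new UncminForJava().optimizeFunction0(n, x0, f, vectorX,
        //         functionValueAtX, gradientAtX, terminationCode,
        //         hessianAtX, diagonalOfHessian);
    }
}
```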
      • optimizeFunction7

        @Deprecated
        public void optimizeFunction7(int n,
                                      double[] x_in_java,
                                      OptimizableFunction minclass,
                                      double[] typsiz,
                                      double[] fscale,
                                      int[] method,
                                      int[] iexp,
                                      int[] msg,
                                      int[] ndigit,
                                      int[] itnlim,
                                      int[] iagflg,
                                      int[] iahflg,
                                      double[] dlt,
                                      double[] gradtl,
                                      double[] stepmx,
                                      double[] steptl,
                                      double[] xpls,
                                      double[] fpls,
                                      double[] gpls,
                                      int[] itrmcd,
                                      double[][] a,
                                      double[] udiag)
        Deprecated. 

        The optif9_f77 method minimizes a smooth nonlinear function of n variables. A method that computes the function value at any point must be supplied. (See Uncmin_methods.java and UncminTest.java.) Derivative values are not required. The optif9_f77 method provides complete user access to the UNCMIN minimization routines. The user has full control over options. For details, see the Schnabel et al. reference and the comments in the code. Translated by Steve Verrill, August 4, 1998.

        Parameters:
        n - The number of arguments of the function to minimize
        x_in_java - The initial estimate of the minimum point
        minclass - A class that implements the OptimizableFunction interface (see the definition in GradientOptimizableFunction.java). See UncminTest_f77.java for an example of such a class. The class must define:
        1.) a method, evaluate, to minimize. evaluate must have the form public static double evaluate(double x[]) where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
        2.) a method, gradient, that has the form public static double [] gradient(double x[]) where the return value is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
        3.) a method, hessian, that has the form public static double [] [] hessian(double x[]) where the return value is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, then the hessian method should only fill the lower triangle (and diagonal) of the Hessian.
        typsiz - Typical size for each component of x
        fscale - Estimate of the scale of the objective function
        method - Algorithm to use to solve the minimization problem:
        1 = line search
        2 = double dogleg
        3 = More-Hebdon
        iexp - = 1 if the optimization function f_to_minimize is expensive to evaluate, = 0 otherwise. If iexp = 1, then the Hessian will be evaluated by secant update rather than analytically or by finite differences.
        msg - Message to inhibit certain automatic checks and output
        ndigit - Number of good digits in the minimization function
        itnlim - Maximum number of allowable iterations
        iagflg - = 0 if an analytic gradient is not supplied
        iahflg - = 0 if an analytic Hessian is not supplied
        dlt - Trust region radius
        gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
        stepmx - Maximum allowable step size
        steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
        xpls - The final estimate of the minimum point
        fpls - The value of f_to_minimize at xpls
        gpls - The gradient at the local minimum xpls
        itrmcd - Termination code:
        ITRMCD = 0: Optimal solution found
        ITRMCD = 1: Terminated with gradient small, X is probably optimal
        ITRMCD = 2: Terminated with stepsize small, X is probably optimal
        ITRMCD = 3: Lower point cannot be found, X is probably optimal
        ITRMCD = 4: Iteration limit (150) exceeded
        ITRMCD = 5: Too many large steps, function may be unbounded
        a - Workspace for the Hessian (or its estimate) and its Cholesky decomposition
        udiag - Workspace for the diagonal of the Hessian
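        The itrmcd codes above can be turned into readable diagnostics after a run; the helper below simply restates the documented values and is not part of UncminForJava:

```java
public class TerminationCodes {
    // Restates the documented ITRMCD meanings for logging purposes.
    static String describe(int itrmcd) {
        switch (itrmcd) {
            case 0:  return "Optimal solution found";
            case 1:  return "Gradient small, result is probably optimal";
            case 2:  return "Stepsize small, result is probably optimal";
            case 3:  return "Lower point cannot be found, result is probably optimal";
            case 4:  return "Iteration limit exceeded";
            case 5:  return "Too many large steps, function may be unbounded";
            default: return "Unknown termination code";
        }
    }

    public static void main(String[] args) {
        // In the "_f77" convention the code arrives boxed in an int array;
        // the slot index used here (1) is an assumption of this sketch.
        int[] itrmcd = new int[2];
        itrmcd[1] = 4;
        System.out.println(describe(itrmcd[1])); // prints "Iteration limit exceeded"
    }
}
```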
      • initialize

        @Deprecated
        public static void initialize(int n,
                                      double[] x_in_java,
                                      double[] typsiz,
                                      double[] fscale,
                                      int[] method,
                                      int[] iexp,
                                      int[] msg,
                                      int[] ndigit,
                                      int[] itnlim,
                                      int[] iagflg,
                                      int[] iahflg,
                                      double[] dlt,
                                      double[] gradtl,
                                      double[] stepmx,
                                      double[] steptl)
        Deprecated. 

        The dfault_f77 method sets default values for each input variable to the minimization algorithm. Translated by Steve Verrill, August 4, 1998.

        Parameters:
        n - Dimension of the problem
        x_in_java - Initial estimate of the solution (to compute max step size)
        typsiz - Typical size for each component of x
        fscale - Estimate of the scale of the minimization function
        method - Algorithm to use to solve the minimization problem
        iexp - = 0 if the minimization function is not expensive to evaluate
        msg - Message to inhibit certain automatic checks and output
        ndigit - Number of good digits in the minimization function
        itnlim - Maximum number of allowable iterations
        iagflg - = 0 if an analytic gradient is not supplied
        iahflg - = 0 if an analytic Hessian is not supplied
        dlt - Trust region radius
        gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
        stepmx - Maximum allowable step size (a value of zero trips the default maximum in optchk)
        steptl - Tolerance at which successive iterates are considered close enough to terminate the algorithm
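        A typical expert-mode session calls initialize to obtain defaults, overrides a few settings, and then calls optimizeFunction7. The sketch below shows only the array plumbing under the "_f77" convention; the boxed-scalar sizes and the use of slot 1 are assumptions of this example, and the actual library calls appear as comments:

```java
public class Optif9Sketch {
    public static void main(String[] args) {
        int n = 2;
        double[] x = new double[n + 1];  // 1-based initial estimate
        x[1] = -1.2;
        x[2] = 1.0;

        // Scalar options are boxed in small arrays so the routines can
        // read and modify them in place, FORTRAN pass-by-reference style.
        double[] typsiz = new double[n + 1];
        double[] fscale = new double[2];
        int[] method = new int[2];
        int[] iexp   = new int[2];
        int[] msg    = new int[2];
        int[] ndigit = new int[2];
        int[] itnlim = new int[2];
        int[] iagflg = new int[2];
        int[] iahflg = new int[2];
        double[] dlt    = new double[2];
        double[] gradtl = new double[2];
        double[] stepmx = new double[2];
        double[] steptl = new double[2];

        // With the real class:
        // UncminForJava.initialize(n, x, typsiz, fscale, method, iexp, msg,
        //         ndigit, itnlim, iagflg, iahflg, dlt, gradtl, stepmx, steptl);
        // method[1] = 2;    // e.g. switch to the double dogleg algorithm
        // itnlim[1] = 500;  // raise the iteration cap
        // ...then call new UncminForJava().optimizeFunction7(...) with the
        // same arguments plus output arrays xpls, fpls, gpls, itrmcd, a, udiag.

        System.out.println(x.length); // prints 3, i.e. n + 1
    }
}
```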
      • optimizeFunction

        @Deprecated
        public void optimizeFunction(int n,
                                     double[] x,
                                     OptimizableFunction minclass,
                                     double[] typsiz,
                                     double[] fscale,
                                     int[] method,
                                     int[] iexp,
                                     int[] msg,
                                     int[] ndigit,
                                     int[] itnlim,
                                     int[] iagflg,
                                     int[] iahflg,
                                     double[] dlt,
                                     double[] gradtl,
                                     double[] stepmx,
                                     double[] steptl,
                                     double[] xpls,
                                     double[] fpls,
                                     double[] gpls,
                                     int[] itrmcd,
                                     double[][] a,
                                     double[] udiag,
                                     double[] numericalGradient,
                                     double[] p,
                                     double[] sx,
                                     double[] wrk0,
                                     double[] wrk1,
                                     double[] wrk2,
                                     double[] wrk3)
        Deprecated. 

        The optdrv_f77 method is the driver for the nonlinear optimization problem. Translated by Steve Verrill, May 18, 1998.

        Parameters:
        n - The dimension of the problem
        x - On entry, estimate of the location of a minimum of f_to_minimize
        minclass - A class that implements the OptimizableFunction interface (see the definition in GradientOptimizableFunction.java). See UncminTest_f77.java for an example of such a class. The class must define:
        1.) a method, evaluate, to minimize. evaluate must have the form public static double evaluate(double x[]) where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
        2.) a method, gradient, that has the form public static double [] gradient(double x[]) where the return value is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
        3.) a method, hessian, that has the form public static double [] [] hessian(double x[]) where the return value is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, then the hessian method should only fill the lower triangle (and diagonal) of the Hessian.
        typsiz - Typical size of each component of x
        fscale - Estimate of scale of objective function
        method - Algorithm indicator:
        1 -- line search
        2 -- double dogleg
        3 -- More-Hebdon
        iexp - Expense flag: 1 if the optimization function f_to_minimize is expensive to evaluate, 0 otherwise. If iexp = 1, the Hessian will be evaluated by secant update rather than analytically or by finite differences.
        msg - On input: (> 0) message to inhibit certain automatic checks. On output: (< 0) error code (= 0 means no error).
        ndigit - Number of good digits in the optimization function
        itnlim - Maximum number of allowable iterations
        iagflg - = 1 if an analytic gradient is supplied
        iahflg - = 1 if an analytic Hessian is supplied
        dlt - Trust region radius
        gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
        stepmx - Maximum step size
        steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
        xpls - On exit: xpls is a local minimum
        fpls - On exit: function value at xpls
        gpls - On exit: gradient at xpls
        itrmcd - Termination code
        a - workspace for Hessian (or its approximation) and its Cholesky decomposition
        udiag - workspace (for diagonal of Hessian)
        numericalGradient - workspace (for gradient at current iterate)
        p - workspace for step
        sx - workspace (for scaling vector)
        wrk0 - workspace
        wrk1 - workspace
        wrk2 - workspace
        wrk3 - workspace

SCaVis 2.1 © jWork.ORG