ResilientPropagation
org.encog.neural.networks.training.propagation.resilient

Class ResilientPropagation

  • All Implemented Interfaces:
    MLTrain, Train


    public class ResilientPropagation
    extends Propagation
    One problem with the backpropagation algorithm is that the magnitude of the partial derivative is usually too large or too small. Further, the learning rate is a single value for the entire neural network. The resilient propagation learning algorithm uses a special update value (similar to the learning rate) for every neuron connection. Further, these update values are automatically determined, unlike the learning rate of the backpropagation algorithm. For most training situations, we suggest that the resilient propagation algorithm (this class) be used for training.

    There are a total of three parameters that must be provided to the resilient training algorithm. Defaults are provided for each, and in nearly all cases these defaults are acceptable. This makes the resilient propagation algorithm one of the easiest and most efficient training algorithms available. The optional parameters are:

      • zeroTolerance - How close to zero a number can be to be considered zero. The default is 0.00000000000000001.
      • initialUpdate - The initial update value for each matrix value. The default is 0.1.
      • maxStep - The largest amount that an update value can step. The default is 50.

    Usually you will not need to set these, and you should use the constructor that does not require them.
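    The typical usage pattern can be sketched as follows. This is a minimal example assuming the Encog 3.x API (BasicNetwork, BasicLayer, BasicMLDataSet, and ActivationSigmoid are standard Encog classes); it trains a small network on the XOR truth table using the default RPROP parameters:

    ```java
    import org.encog.engine.network.activation.ActivationSigmoid;
    import org.encog.ml.data.MLDataSet;
    import org.encog.ml.data.basic.BasicMLDataSet;
    import org.encog.neural.networks.BasicNetwork;
    import org.encog.neural.networks.layers.BasicLayer;
    import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

    public class RpropExample {
        public static void main(String[] args) {
            // XOR training data: four input pairs and their expected outputs.
            double[][] input = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
            double[][] ideal = { {0}, {1}, {1}, {0} };
            MLDataSet training = new BasicMLDataSet(input, ideal);

            // A 2-3-1 feedforward network with sigmoid activations.
            BasicNetwork network = new BasicNetwork();
            network.addLayer(new BasicLayer(null, true, 2));
            network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
            network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
            network.getStructure().finalizeStructure();
            network.reset();

            // Default parameters are acceptable for nearly all problems.
            ResilientPropagation train = new ResilientPropagation(network, training);
            do {
                train.iteration();
            } while (train.getError() > 0.01);
            train.finishTraining();
        }
    }
    ```
    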
    • Constructor Detail

      • ResilientPropagation

        public ResilientPropagation(ContainsFlat network, MLDataSet training)
        Construct an RPROP trainer using the defaults for all training parameters. Usually this is the constructor to use, as the resilient training algorithm is designed for the default parameters to be acceptable for nearly all problems.
        Parameters:
        network - The network to train.
        training - The training data to use.
      • ResilientPropagation

        public ResilientPropagation(ContainsFlat network, MLDataSet training, double initialUpdate, double maxStep)
        Construct a resilient training object, allowing the training parameters to be specified. Usually the default parameters are acceptable for the resilient training algorithm, so you should normally use the other constructor, which uses the default values.
        Parameters:
        network - The network to train.
        training - The training set to use.
        initialUpdate - The initial update values; this is the amount to which the deltas are all initially set.
        maxStep - The maximum that a delta can reach.
    • Method Detail

      • canContinue

        public final boolean canContinue()
        Returns:
        True, as RPROP can continue.
      • isValidResume

        public final boolean isValidResume(TrainingContinuation state)
        Determine if the specified continuation object is valid to resume with.
        Parameters:
        state - The continuation object to check.
        Returns:
        True if the specified continuation object is valid for this training method and network.
      • pause

        public final TrainingContinuation pause()
        Pause the training.
        Returns:
        A training continuation object to continue with.
      • resume

        public final void resume(TrainingContinuation state)
        Resume training.
        Parameters:
        state - The training state to return to.
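        The pause/resume pair can be sketched as follows, assuming two trainer instances `train` and `train2` constructed over the same network and training data (the intervening persistence step is application-specific):

        ```java
        import org.encog.neural.networks.training.propagation.TrainingContinuation;

        // Capture the current training state.
        TrainingContinuation state = train.pause();

        // ... later, perhaps after persisting and reloading `state`,
        // validate it against a new trainer before resuming.
        if (train2.isValidResume(state)) {
            train2.resume(state);
        }
        ```
        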
      • setRPROPType

        public void setRPROPType(RPROPType t)
        Set the type of RPROP to use. The default is RPROPp (RPROP+), the classic RPROP variant.
        Parameters:
        t - The type.
      • getRPROPType

        public RPROPType getRPROPType()
        Returns:
        The type of RPROP used.
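        Selecting a variant can be sketched as follows; this assumes RPROPType is the enum in the same package and that it defines an iRPROPp constant (the improved RPROP+ variant), which may differ by Encog version:

        ```java
        import org.encog.neural.networks.training.propagation.resilient.RPROPType;

        // Switch from the default RPROP+ to the (assumed) iRPROP+ variant.
        train.setRPROPType(RPROPType.iRPROPp);
        ```
        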

SCaVis 2.2 © jWork.ORG