public class Backpropagation
extends Propagation
implements Momentum, LearningRate

This class implements a backpropagation training algorithm for feedforward neural networks. It is used in the same manner as any other training class that implements the Train interface.

Backpropagation is a common neural network training algorithm. It works by analyzing the error of the neural network's output. Each output-layer neuron's contribution to this error, according to its weights, is determined, and those weights are then adjusted to minimize the error. This process continues working backwards through the layers of the neural network.

This implementation of the backpropagation algorithm uses both momentum and a learning rate. The learning rate specifies the degree to which the weight matrices are modified on each iteration. The momentum specifies how much the previous learning iteration affects the current one; to use no momentum at all, specify zero.

One primary problem with backpropagation is that the magnitude of the partial derivative is often detrimental to the training of the neural network. The other propagation methods, Manhattan and Resilient, address this issue in different ways. In general, it is suggested that you use the resilient propagation technique rather than backpropagation for most Encog training tasks.
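With momentum, each weight change takes the classic form delta_w(t) = -(learning rate) * (error gradient) + (momentum) * delta_w(t-1), so a momentum of zero reduces to plain gradient descent.

Below is a minimal usage sketch. The XOR data, the 2-3-1 topology, and the 0.7/0.3 learning-rate/momentum values are illustrative choices, not part of this class's API; BasicNetwork is used because it implements ContainsFlat.

    import org.encog.engine.network.activation.ActivationSigmoid;
    import org.encog.ml.data.MLDataSet;
    import org.encog.ml.data.basic.BasicMLDataSet;
    import org.encog.neural.networks.BasicNetwork;
    import org.encog.neural.networks.layers.BasicLayer;
    import org.encog.neural.networks.training.propagation.back.Backpropagation;

    public class BackpropExample {
        public static void main(String[] args) {
            // Illustrative XOR training data.
            double[][] input = { {0, 0}, {1, 0}, {0, 1}, {1, 1} };
            double[][] ideal = { {0}, {1}, {1}, {0} };

            // BasicNetwork implements ContainsFlat, so it can be trained here.
            BasicNetwork network = new BasicNetwork();
            network.addLayer(new BasicLayer(null, true, 2));
            network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
            network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
            network.getStructure().finalizeStructure();
            network.reset();

            MLDataSet training = new BasicMLDataSet(input, ideal);
            Backpropagation train = new Backpropagation(network, training, 0.7, 0.3);

            // Iterate until the error drops below 1%.
            do {
                train.iteration();
            } while (train.getError() > 0.01);
            train.finishTraining();
        }
    }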
Fields
Modifier and Type	Field and Description
static String	LAST_DELTA
	The resume key for backpropagation.
Constructors
Constructor and Description
Backpropagation(ContainsFlat network, MLDataSet training)
	Create a class to train using backpropagation; the learning rate and momentum are determined automatically.
Backpropagation(ContainsFlat network, MLDataSet training, double learnRate, double momentum)
	Create a class to train using backpropagation with the specified learning rate and momentum.
Methods
Modifier and Type	Method and Description
boolean	canContinue()
	Determine if this training method can be paused and resumed.
double[]	getLastDelta()
	Get the last delta values.
double	getLearningRate()
	Get the learning rate.
double	getMomentum()
	Get the momentum.
boolean	isValidResume(TrainingContinuation state)
	Determine if the specified continuation object is valid to resume with.
TrainingContinuation	pause()
	Pause the training.
void	resume(TrainingContinuation state)
	Resume training.
void	setLearningRate(double rate)
	Set the learning rate; this value is essentially a percent.
void	setMomentum(double m)
	Set the momentum for training.
Methods inherited from class org.encog.neural.networks.training.propagation.Propagation
finishTraining, fixFlatSpot, getCurrentFlatNetwork, getFlatTraining, getMethod, getNumThreads, iteration, iteration, setErrorFunction, setFlatTraining, setNumThreads
Methods inherited from class org.encog.ml.train.BasicTraining
addStrategy, getError, getImplementationType, getIteration, getStrategies, getTraining, isTrainingDone, postIteration, preIteration, setError, setIteration, setTraining
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public Backpropagation(ContainsFlat network, MLDataSet training)
Create a class to train using backpropagation. Use an automatic learning rate and momentum. Use the CPU to train.
Parameters:
network - The network that is to be trained.
training - The training data to be used for backpropagation.
public Backpropagation(ContainsFlat network, MLDataSet training, double learnRate, double momentum)
Create a class to train using backpropagation with the specified learning rate and momentum.
Parameters:
network - The network that is to be trained.
training - The training set.
learnRate - The rate at which the weight matrix will be adjusted based on learning.
momentum - The influence that the previous iteration's training deltas will have on the current iteration.
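As a sketch of the two constructors (assuming the network and training set built as in the example above; 0.7 and 0.3 are illustrative values, not defaults):

    // Automatic learning rate and momentum:
    Backpropagation auto = new Backpropagation(network, training);

    // Explicit learning rate and momentum:
    Backpropagation tuned = new Backpropagation(network, training, 0.7, 0.3);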
public final boolean canContinue()
Returns:
True if this training method can be paused and resumed.
public final double[] getLastDelta()
Returns:
The last delta values.
public final double getLearningRate()
Returns:
The learning rate.
public final double getMomentum()
Returns:
The momentum.
public final boolean isValidResume(TrainingContinuation state)
Determine if the specified continuation object is valid to resume with.
Parameters:
state - The continuation object to check.
Returns:
True if the specified continuation object is valid for this training method and network.
public final TrainingContinuation pause()
Pause the training and return a continuation object that can later be used to resume it.

public final void resume(TrainingContinuation state)
Resume training from the given continuation object.
Parameters:
state - The continuation object from which to resume.
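A hedged sketch of a pause/resume cycle, assuming the trainer, network, and training set from the earlier example (TrainingContinuation is org.encog.neural.networks.training.propagation.TrainingContinuation):

    // Run an iteration, then capture the training state.
    train.iteration();
    TrainingContinuation state = train.pause();

    // Later: build a fresh trainer over the same network and data,
    // check that the saved state applies, and pick up where training stopped.
    Backpropagation resumed = new Backpropagation(network, training, 0.7, 0.3);
    if (resumed.isValidResume(state)) {
        resumed.resume(state);
    }
    resumed.iteration();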
public final void setLearningRate(double rate)
Set the learning rate; this value is essentially a percent. It is the degree to which the gradients are applied to the weight matrix to allow learning.
Parameters:
rate - The new learning rate.

public final void setMomentum(double m)
Set the momentum for training. This is the degree to which changes from the previous training iteration will affect this training iteration. It can be useful in overcoming local minima.
Parameters:
m - The new momentum.
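Both setters may be called between iterations. The decay schedule below is purely illustrative, not something this class performs itself:

    Backpropagation train = new Backpropagation(network, training, 0.9, 0.5);
    do {
        train.iteration();
        // Illustrative schedule: shrink the step size each iteration, floored at 0.05.
        train.setLearningRate(Math.max(0.05, train.getLearningRate() * 0.995));
        // Damp momentum once the error is small, to reduce overshooting.
        if (train.getError() < 0.1) {
            train.setMomentum(0.1);
        }
    } while (train.getError() > 0.01);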