BasicTrainSOM
org.encog.neural.som.training.basic

Class BasicTrainSOM

  • All Implemented Interfaces:
    MLTrain, LearningRate


    public class BasicTrainSOM extends BasicTraining implements LearningRate
    This class implements competitive training, which would be used in a winner-take-all neural network, such as the self-organizing map (SOM). This is an unsupervised training method: no ideal data is needed in the training set, and if ideal data is provided, it will be ignored.

    Training is done by looping over all of the training elements and calculating a "best matching unit" (BMU). The BMU output neuron is then adjusted to better "learn" the pattern. This adjustment may also be applied to other "nearby" output neurons; the degree to which neighboring neurons (relative to the winning neuron) are updated in each training iteration is determined by the required neighborhood function.

    Because this is unsupervised training, it is difficult to calculate an error to measure progress by. The error is defined to be the "worst", or longest, Euclidean distance of any of the BMUs. This value should decrease as learning progresses.

    Because only the BMU neuron and its close neighbors are updated, some output neurons may end up learning nothing. By default such neurons are not forced to win patterns that are not represented well. Forcing a winner spreads the workload among all output neurons; this behavior is disabled by default, but can be enabled by setting the "forceWinner" property.
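    The sketch below is not part of the original Javadoc; it shows one way this class might be driven, assuming the Encog 3.x API. The SOM, BasicMLDataSet and NeighborhoodSingle classes, the learning rate of 0.7, and the sample data are illustrative assumptions, not prescriptions.

        // Minimal sketch: unsupervised SOM training with BasicTrainSOM (Encog 3.x assumed).
        import org.encog.ml.data.MLDataSet;
        import org.encog.ml.data.basic.BasicMLDataSet;
        import org.encog.neural.som.SOM;
        import org.encog.neural.som.training.basic.BasicTrainSOM;
        import org.encog.neural.som.training.basic.neighborhood.NeighborhoodSingle;

        public class SomTrainingSketch {
            public static void main(String[] args) {
                // Unsupervised training set: input patterns only, no ideal data.
                double[][] patterns = {
                    { -1.0, -1.0 },
                    { -1.0,  1.0 },
                    {  1.0, -1.0 },
                    {  1.0,  1.0 }
                };
                MLDataSet training = new BasicMLDataSet(patterns, null);

                // A SOM with 2 input neurons and 4 output neurons, randomized weights.
                SOM network = new SOM(2, 4);
                network.reset();

                // Competitive training; NeighborhoodSingle updates only the BMU itself.
                BasicTrainSOM train =
                        new BasicTrainSOM(network, 0.7, training, new NeighborhoodSingle());

                for (int i = 0; i < 100; i++) {
                    train.iteration();
                    System.out.println("Iteration " + i
                            + ", worst BMU distance = " + train.getError());
                }
            }
        }

    Other neighborhood functions update the winner's neighbors as well; NeighborhoodSingle is chosen here only to keep the sketch short.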
    • Constructor Detail

      • BasicTrainSOM

        public BasicTrainSOM(SOM network,
                             double learningRate,
                             MLDataSet training,
                             NeighborhoodFunction neighborhood)
        Create an instance of competitive training.
        Parameters:
        network - The network to train.
        learningRate - The learning rate, how much to apply per iteration.
        training - The training set (unsupervised).
        neighborhood - The neighborhood function to use.
    • Method Detail

      • autoDecay

        public final void autoDecay()
        Should be called each iteration if autodecay is desired.
      • canContinue

        public final boolean canContinue()
        Specified by:
        canContinue in interface MLTrain
        Returns:
        True if the training can be paused and later continued.
      • decay

        public final void decay(double d)
        Called to decay the learning rate and radius by the specified amount.
        Parameters:
        d - The percent to decay by.
      • decay

        public final void decay(double decayRate,
                                double decayRadius)
        Decay the learning rate and radius by the specified amounts, as in the sketch below.
        Parameters:
        decayRate - The percent to decay the learning rate by.
        decayRadius - The percent to decay the radius by.
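        A hypothetical usage, continuing the sketch in the class description ("train" is the BasicTrainSOM built there); the one- and two-percent figures are arbitrary:

            for (int i = 0; i < 100; i++) {
                train.iteration();
                train.decay(0.01);          // shrink learning rate and radius by 1% each
                // or control the two separately:
                // train.decay(0.02, 0.01); // 2% off the learning rate, 1% off the radius
            }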
      • getInputNeuronCount

        public final int getInputNeuronCount()
        Returns:
        The input neuron count.
      • getLearningRate

        public final double getLearningRate()
        Specified by:
        getLearningRate in interface LearningRate
        Returns:
        The learning rate. This was set when the object was created.
      • getMethod

        public final BasicNetwork getMethod()
        Get the current best machine learning method from the training.
        Specified by:
        getMethod in interface MLTrain
        Returns:
        The best machine learning method.
      • getNeighborhood

        public final NeighborhoodFunction getNeighborhood()
        Returns:
        The network neighborhood function.
      • getOutputNeuronCount

        public final int getOutputNeuronCount()
        Returns:
        The output neuron count.
      • isForceWinner

        public final boolean isForceWinner()
        Returns:
        True if a winner is to be forced among output neurons that do not learn. See the class description for more information.
      • iteration

        public final void iteration()
        Perform one training iteration.
        Specified by:
        iteration in interface MLTrain
      • pause

        public final TrainingContinuation pause()
        Pause the training to continue later.
        Specified by:
        pause in interface MLTrain
        Returns:
        A training continuation object.
      • resume

        public void resume(TrainingContinuation state)
        Resume training.
        Specified by:
        resume in interface MLTrain
        Parameters:
        state - The training continuation object to use to continue.
      • setAutoDecay

        public final void setAutoDecay(int plannedIterations,
                                       double startRate,
                                       double endRate,
                                       double startRadius,
                                       double endRadius)
        Set up autodecay. This will decrease the radius and learning rate from the start values to the end values over the planned number of iterations, as in the sketch below.
        Parameters:
        plannedIterations - The number of iterations that are planned. This allows the decay rate to be determined.
        startRate - The starting learning rate.
        endRate - The ending learning rate.
        startRadius - The starting radius.
        endRadius - The ending radius.
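        A sketch with illustrative values, continuing the example from the class description. setAutoDecay plans the schedule, and autoDecay applies one step of it per iteration; here the learning rate slides from 0.8 to 0.003 and the radius from 10 to 1:

            final int plannedIterations = 500;
            train.setAutoDecay(plannedIterations, 0.8, 0.003, 10, 1);
            for (int i = 0; i < plannedIterations; i++) {
                train.iteration();
                train.autoDecay();   // apply this iteration's share of the decay
            }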
      • setForceWinner

        public final void setForceWinner(boolean forceWinner)
        Determine if a winner is to be forced. See class description for more info.
        Parameters:
        forceWinner - True if a winner is to be forced.
      • setLearningRate

        public final void setLearningRate(double rate)
        Set the learning rate. This is the rate at which the weights are changed.
        Specified by:
        setLearningRate in interface LearningRate
        Parameters:
        rate - The learning rate.
      • setParams

        public final void setParams(double rate,             double radius)
        Set the learning rate and radius.
        Parameters:
        rate - The new learning rate.
        radius - The new radius.
      • trainPattern

        public final void trainPattern(MLData pattern)
        Train the specified pattern. Find a winning neuron and adjust all neurons according to the neighborhood function.
        Parameters:
        pattern - The pattern to train.
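        A sketch of training a single pattern directly, continuing the example from the class description; BasicMLData is assumed to come from org.encog.ml.data.basic:

            MLData pattern = new BasicMLData(new double[] { 0.5, -0.5 });
            train.trainPattern(pattern);   // locate the BMU for this pattern and adjust it and its neighbors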
