API documentation of the 'org.encog.neural.neat.NEATNetwork' Java class
org.encog.neural.neat

Class NEATNetwork

  • All Implemented Interfaces:
    Serializable, MLContext, MLError, MLInput, MLInputOutput, MLMethod, MLOutput, MLProperties, MLRegression


    public class NEATNetwork extends BasicML implements MLContext, MLRegression, MLError
    Implements a NEAT network as a synapse between two layers. In Encog, a NEAT network is created by using a NEATSynapse between an input and an output layer.

    NEAT networks have only an input and an output layer; there are no explicit hidden layers. Rather, the synapse evolves many hidden neurons whose connections are not easily described in terms of layers. Connections can be feedforward, recurrent, or self-connected. NEAT networks therefore relieve the programmer of the need to define the hidden-layer structure of the neural network.

    The output of the network can be calculated normally or using a snapshot. Snapshot mode is slower but can be more accurate: it handles recurrent connections better, because it loops through the network multiple times to "flush out" the recurrent links.

    NeuroEvolution of Augmenting Topologies (NEAT) is a genetic algorithm for generating evolving artificial neural networks. It was developed by Ken Stanley while at The University of Texas at Austin. http://www.cs.ucf.edu/~kstanley/
    See Also:
    Serialized Form
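
    The following is a minimal usage sketch based only on the interfaces listed above: output is computed through the MLRegression method compute(MLData), and the MLContext method clearContext() resets any recurrent state between independent queries. It assumes an already evolved NEATNetwork instance (for example, the result of a NEAT training run) and an Encog 3.x-style API; the wrapper class and method names used here are illustrative, not part of Encog.

        import org.encog.ml.data.MLData;
        import org.encog.ml.data.basic.BasicMLData;
        import org.encog.neural.neat.NEATNetwork;

        public class NEATNetworkUsage {

            // Illustrative helper: query an already evolved NEAT network.
            public static void queryNetwork(NEATNetwork network) {
                // NEATNetwork implements MLRegression, so output is obtained
                // through the generic compute(MLData) call.
                MLData input = new BasicMLData(new double[] { 0.0, 1.0 });
                MLData output = network.compute(input);
                System.out.println("Output: " + output.getData(0));

                // Because evolved connections may be recurrent, clearing the
                // context (MLContext) resets internal state between
                // independent queries.
                network.clearContext();
            }
        }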
