DMelt:AI/3 Kohonen Maps

Kohonen SOM

A Kohonen Self-Organizing Map (also called a Self-Organizing Map, or SOM for short) is a type of artificial neural network that is trained using unsupervised learning to produce a two-dimensional discretized representation of the input space of the training samples, called a map.

These maps are useful for classification and for visualizing low-dimensional views of high-dimensional data, akin to multidimensional scaling. The model was first described as an artificial neural network by the Finnish professor Teuvo Kohonen and is sometimes called a Kohonen Map. See the Self-organizing_map article.


The SOM may be described as a nonlinear, ordered, smooth mapping of high-dimensional input data onto the elements of a regular, low-dimensional array. In its basic form, it produces a similarity graph of the input data.

The SOM converts the nonlinear statistical relationships between high-dimensional data points into simple geometric relationships of their image points on a regular two-dimensional grid of nodes. The resulting maps can be used for classification and visualization of high-dimensional data.
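
As a concrete illustration of the classification use: after training, a new input vector is assigned to the grid node whose weight vector is closest to it. The sketch below is a minimal, hypothetical helper in plain Python (the DMelt examples later in this section obtain the trained weights from a Java class instead):

# Hypothetical helper: assign an input vector to its closest map node.
# "weights" is assumed to be a list of weight vectors, one per grid node.
def classify(weights, x):
    best, best_d2 = 0, float('inf')
    for idx, w in enumerate(weights):
        d2 = sum((wi - xi) ** 2 for wi, xi in zip(w, x))  # squared Euclidean distance
        if d2 < best_d2:
            best, best_d2 = idx, d2
    return best   # index of the winning node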

Learning Algorithm

Unlike many other types of neural nets, the SOM does not need a target output to be specified. Instead, in the region of the lattice where the node weights match the input vector, the weights are selectively optimized to more closely resemble the data for the class that the input vector belongs to.

Training

From an initial distribution of random weights, and over many iterations, the SOM eventually settles into a map of stable zones. Each zone is effectively a feature classifier, so you can think of the graphical output as a type of feature map of the input space.

Training occurs in several steps and over many iterations (a minimal code sketch follows the list):

  1. Each node's weights are initialized with random values.
  2. A vector is chosen randomly from the set of training data.
  3. Every node is examined to calculate which one's weights are most like the input vector. The winning node is commonly known as the Best Matching Unit (BMU).
  4. The radius of the neighborhood of the BMU is calculated. Initially, this value is set to the radius of the lattice, but it diminishes at each time step.
  5. For any node found inside the radius of the BMU, the node's weights are adjusted to make them more like the input vector. The closer a node is to the BMU, the more its weights get altered.
  6. Repeat from step 2 for N iterations.
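
A minimal sketch of these steps in plain Python (self-contained and hypothetical: the decay schedule and neighborhood function below are common illustrative choices, not those of the jhpro.nnet class used later):

import random, math

grid = 4                       # 4x4 lattice of nodes
dim = 2                        # dimensionality of the input vectors
# step 1: initialize each node's weights with random values
weights = [[[random.random() for _ in range(dim)]
            for _ in range(grid)] for _ in range(grid)]
data = [[random.random() for _ in range(dim)] for _ in range(100)]

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

iterations = 1000
radius0 = grid / 2.0           # initial neighborhood radius
rate0 = 0.6                    # initial learning rate
for t in range(iterations):
    x = random.choice(data)    # step 2: pick a training vector at random
    # step 3: find the Best Matching Unit (BMU)
    bi, bj = min(((i, j) for i in range(grid) for j in range(grid)),
                 key=lambda ij: dist2(weights[ij[0]][ij[1]], x))
    # step 4: the radius (and the learning rate) shrink with time
    frac = math.exp(-t / float(iterations))
    radius, rate = radius0 * frac, rate0 * frac
    # step 5: pull nodes inside the radius towards the input vector;
    # nodes closer to the BMU are altered more
    for i in range(grid):
        for j in range(grid):
            d2 = (i - bi) ** 2 + (j - bj) ** 2
            if d2 <= radius ** 2:
                h = math.exp(-d2 / (2.0 * radius ** 2))
                for k in range(dim):
                    weights[i][j][k] += rate * h * (x[k] - weights[i][j][k])
    # step 6: the loop repeats from step 2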

Below we will consider a number of examples of how to construct and run Kohonen Self-Organizing Maps. The code is implemented in the Java language but, as before, we will use Python scripting to make the SOM examples shorter. The Kohonen SOM examples are based on the Java class jhpro.nnet.KohonenFeatureMap. We should acknowledge the work of Jochen Fröhlich, who created the first version of this Java class.
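
For orientation, both examples below follow the same sequence of calls to this class; schematically (the argument values are the ones used in the examples and are otherwise illustrative):

from jhpro.nnet import *

kfm = KohonenFeatureMap()
kfm.setMaxLearningCycles(100000000)  # safety limit on the number of learning cycles
kfm.createMapLayer(4, 4)             # lattice of 4x4 neurons
kfm.setStopArea(0.01)                # stop once the activation area shrinks to this value
kfm.setInitActivationArea(1)         # initial activation (neighborhood) area
kfm.setInitLearningRate(0.6)         # initial learning rate
im = InputMatrix(100, 2)             # 100 input vectors of dimension 2
# ... fill "im" with data, e.g. im.setInputXY(p1) for the 2D case ...
kfm.connectLayers(im)
while not kfm.finishedLearning():
    kfm.learn()                      # one learning cycle
weights = kfm.getWeightValues()      # trained weights, indexed as [dimension][node]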

Kohonen SOM in 2D

Now we will consider a Kohonen Self-Organizing Map in 2D space. In this example we generate random numbers in the X-Y plane and then apply the Kohonen Self-Organizing Map algorithm with a 4x4 grid of neurons. We then perform learning and update the plot every 50 iterations, until the activation area shrinks to 0.01.

Note that we show the weights as lines. Therefore, to make a visually appealing plot, we sort the weight array before drawing the lines. If you remove the line option, the sort() call is no longer needed.

# In a Kohonen Feature Map, neurons organize themselves according to certain input values.
# We use 4x4 neurons and plot the results as a line.
# (c) Chekanov

from jhplot  import  *
from jhpro.nnet import *
from java.util import Random
from java.awt import Color
import math


# make empty canvas in some range
c1 = HPlot("Canvas")
c1.visible()
c1.setLegend(0)
c1.setRange(0,100,0,150)
c1.setMarginLeft(70)
c1.setNameX("X")
c1.setNameY("Y")

p1 = P1D("X-Y data")
rand = Random(10)

inputSize=100   # number of random points in 2D
for i in range(inputSize):
      x=i+10*rand.nextGaussian()
      y=50+30*math.cos(0.2*x) + 10*rand.nextGaussian()
      p1.add(x,y)
c1.draw(p1)


kfm=KohonenFeatureMap()
mapSizeX=4   # map size is 4x4 neurons
maxCycle=100000000
im = InputMatrix(inputSize, 2)
kfm.setMaxLearningCycles(maxCycle)
kfm.createMapLayer(mapSizeX, mapSizeX)  # create a mapSizeX x mapSizeX map
kfm.setStopArea(0.01)                  # stop learning here
kfm.setInitActivationArea(1)
kfm.setInitLearningRate(0.6)

im.setInputXY(p1)
kfm.connectLayers(im)
  
i=0
# to represent outputs
p2=P1D("weights")
p2.setColor(Color.red)
p2.setStyle("l")
    
while not kfm.finishedLearning():
    kfm.learn()
    i=i+1
    if (i%50 ==0):
       weights=kfm.getWeightValues()
       p2.clear(); c1.clearData()
       print "print  rate=",kfm.getLearningRate(), "Activation area=",kfm.getActivationArea(), "Elapsed time=",kfm.getElapsedTime()
       for j in range(mapSizeX*mapSizeX):  
                       p2.add(weights[0][j], weights[1][j])
       p2.sort(0) # to draw a line, we should sort the result in X 
       c1.draw(p1)
       c1.draw(p2)
 
 
print "Final activation area=",kfm.getStopArea()
print p2   # show the final weight values

The result is shown below. Note that this image shows the final state after learning stops.

DMelt example: Kohonen Feature Map in 2D (SOM)

Kohonen SOM in 3D

Now we will consider a Kohonen Self-Organizing Map in 3D space. This Python example is very similar to the 2D case, except that now we show the resulting weights as red dots in 3D space:


# In Kohonen Feature Map, neurons are organizing themselves according to certain input values. 
# (c) Chekanov

from jhplot  import  *
from jhpro.nnet import *
from java.awt import *
from java.util import Random
import math

kfm=KohonenFeatureMap()

inputSize=200   # number of random points in 3D 
mapSizeX=4     # map size is 4x4
mapSizeY=4 
maxCycle=1000000
im = InputMatrix(inputSize, 3)
kfm.setMaxLearningCycles(maxCycle)
kfm.createMapLayer(mapSizeX, mapSizeY) # create a mapSizeX x mapSizeY map
kfm.setStopArea(0.02)                   # stop learning here
kfm.setInitActivationArea(1)
kfm.setInitLearningRate(0.6)

c1=HPlot3D("Canvas",600,500)
c1.setRange(-100, 100, -100, 100, -100,100)
c1.visible()
pc=P2D("Input Data")
from jarray import array
inputX=[]
inputY=[]
inputZ=[]
rand = Random()
for i in range(inputSize):
     x=10*rand.nextGaussian() 
     y=15*rand.nextGaussian()
     z=20*rand.nextGaussian()
     if (i>100):        # shift the second half of the points into a second cluster
             x=10+10*rand.nextGaussian() 
             y=-20-30*rand.nextGaussian()
             z=50-10*rand.nextGaussian()
     pc.add(x,y,z)
     inputX.append(int(x))
     inputY.append(int(y))
     inputZ.append(int(z)) 

c1.draw(pc)         # draw data

px=array(inputX, 'i')  
py=array(inputY, 'i') 
pz=array(inputZ, 'i')
im.setInputValues(px,py,pz)
kfm.connectLayers(im)

# the result 
p2=P2D("SOM weights")
p2.setSymbolSize(5)
p2.setSymbolColor(Color.red)
 
i=0
while not kfm.finishedLearning():
    kfm.learn()
    if (i%50 ==0): 
             p2.clear()
             print "print  rate=",kfm.getLearningRate(), "Activation area=",kfm.getActivationArea(), "Elapsed time=",kfm.getElapsedTime()
             weights=kfm.getWeightValues()
             for j in range(mapSizeX*mapSizeY):
                       p2.add(weights[0][j], weights[1][j],weights[2][j])
             c1.draw(p2)
             c1.updateData()

    i=i+1
    
c1.update()

The result of the above example is shown below:

DMelt example: Kohonen Feature Map in 3D (SOM)

Self-Organizing Maps in Python

Another example of Python coding is a Self-Organizing Map implemented in self-contained Python code:

Look at the online examples ("Tools"-"Online examples"): Artificial Intelligence/neural net/neural_net_som.py. Open it and run it by pressing [F8].