This IPython notebook defines and explores the Kohonen SOM (self-organizing map) model of retinotopy described in pages 53-59 of:

Miikkulainen, Bednar, Choe, and Sirosh (2005), Computational Maps in the Visual Cortex, Springer.

If you can double-click on this text and edit it, you are in a live IPython Notebook environment where you can run the code and explore the model. Otherwise, you are viewing a static (e.g. HTML) copy of the notebook, which allows you to see the precomputed results only. To switch to the live notebook, see the notebook installation instructions.

This IPython notebook constructs the definition of the SOM retinotopy model in the Topographica simulator, and shows how it organizes.

A static version of this notebook may be viewed online. To run the live notebook and explore the model interactively, you will need both IPython Notebook and Topographica.

If you prefer the older Tk GUI interface or a command line, you may use the som_retinotopy.ty script distributed with Topographica, without IPython Notebook, as follows:

```
./topographica -g examples/som_retinotopy.ty
```

To run this Notebook version, you can run all cells in order automatically:

- Selecting `Kernel -> Restart` from the menu above.
- Selecting `Cell -> Run All`.

Alternatively, you may run each cell in sequence, starting at the top of the notebook and pressing `Shift + Enter`.

In [1]:

```
%reload_ext topo.misc.ipython
%opts GridSpace [tick_format="%.1f" figure_size=70]
%timer start
```

The next three cells define the SOM model in its entirety (copied from examples/som_retinotopy.ty). First, we import the required libraries and declare various parameters that allow the modeller to control the model's behavior:

In [2]:

```
"""
Basic example of a fully connected SOM retinotopic map with ConnectionFields.
Contains a Retina (2D Gaussian generator) fully connected to a V1
(SOM) sheet, with no initial ordering for topography.
Constructed to match the retinotopic simulation from page 53-59 of
Miikkulainen, Bednar, Choe, and Sirosh (2005), Computational Maps in
the Visual Cortex, Springer. Known differences include:
- The cortex_density and retina_density are smaller for
speed (compared to 40 and 24 in the book).
- The original simulation used a radius_0 of 13.3/40, which does work
for some random seeds, but a much larger radius is used here so that
it converges more reliably.
"""
import topo
import imagen
from math import exp, sqrt
import param
from topo import learningfn,numbergen,transferfn,pattern,projection,responsefn,sheet
import topo.learningfn.projfn
import topo.pattern.random
import topo.responsefn.optimized
import topo.transferfn.misc
# Disable the measurement progress bar for this notebook
from topo.analysis.featureresponses import pattern_response
pattern_response.progress_bar = False
# Parameters that can be passed on the command line using -p
from topo.misc.commandline import global_params as p
p.add(
retina_density=param.Number(default=10.0,bounds=(0,None),
inclusive_bounds=(False,True),doc="""
The nominal_density to use for the retina."""),
cortex_density=param.Number(default=10.0,bounds=(0,None),
inclusive_bounds=(False,True),doc="""
The nominal_density to use for V1."""),
input_seed=param.Number(default=0,bounds=(0,None),doc="""
Seed for the pseudorandom number generator controlling the
input patterns."""),
weight_seed=param.Number(default=0,bounds=(0,None),doc="""
Seed for the pseudorandom number generator controlling the
initial weight values."""),
radius_0=param.Number(default=1.0,bounds=(0,None),doc="""
Starting radius for the neighborhood function."""),
alpha_0=param.Number(default=0.42,bounds=(0,None),doc="""
Starting value for the learning rate."""))
```

The cell below may be used to modify any of the parameters defined above, allowing you to explore the model's parameter space. To illustrate, the cell sets the retina and cortex densities explicitly to their default value of 10, but you may change the value of any parameter in the same way:

In [3]:

```
p.retina_density=10
p.cortex_density=10
```

In [4]:

```
# Using time-dependent random streams.
# Corresponding numbergen objects should be given suitable names.
param.Dynamic.time_dependent = True
numbergen.TimeAwareRandomState.time_dependent = True

# Input pattern
sheet.GeneratorSheet.period = 1.0
sheet.GeneratorSheet.phase = 0.05
sheet.GeneratorSheet.nominal_density = p.retina_density

input_pattern = pattern.Gaussian(
    scale=1.0, size=2*sqrt(2.0*0.1*24.0)/24.0,
    aspect_ratio=1.0, orientation=0,
    x=numbergen.UniformRandom(name='xgen', lbound=-0.5, ubound=0.5, seed=p.input_seed),
    y=numbergen.UniformRandom(name='ygen', lbound=-0.5, ubound=0.5, seed=p.input_seed))

topo.sim['Retina'] = sheet.GeneratorSheet(input_generator=input_pattern)

topo.sim['V1'] = sheet.CFSheet(
    nominal_density=p.cortex_density,
    # Original CMVC simulation used an initial radius of 13.3/40.0 (~0.33)
    output_fns=[transferfn.misc.KernelMax(
        density=p.cortex_density,
        kernel_radius=numbergen.BoundedNumber(
            bounds=(0.5/40, None),
            generator=numbergen.ExponentialDecay(
                starting_value=p.radius_0,
                time_constant=40000/5.0)))])

topo.sim.connect('Retina', 'V1', name='Afferent', delay=0.05,
    seed=p.weight_seed,
    connection_type=projection.CFProjection,
    weights_generator=pattern.random.UniformRandom(name='Weights'),
    nominal_bounds_template=sheet.BoundingBox(radius=1.0),  # Fully connected network
    learning_rate=numbergen.ExponentialDecay(
        starting_value=p.alpha_0,
        time_constant=40000/6.0),
    response_fn=responsefn.optimized.CFPRF_EuclideanDistance_opt(),
    learning_fn=learningfn.projfn.CFPLF_EuclideanHebbian())

'Loaded the self-organizing map model (som_retinotopy)'
```

Out[4]:
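Both the neighborhood radius and the learning rate above are driven by `numbergen.ExponentialDecay` objects. Assuming the standard exponential-decay form v(t) = v0 · e^(−t/τ) (a sketch of the idea, not Topographica's implementation), the schedules used by this model look like:

```python
from math import exp

def exponential_decay(t, starting_value, time_constant):
    """Value decayed exponentially from starting_value with time constant tau."""
    return starting_value * exp(-t / time_constant)

# Neighborhood radius: radius_0=1.0, time_constant=40000/5.0
r0 = exponential_decay(0, 1.0, 40000 / 5.0)        # 1.0 at t=0
r5000 = exponential_decay(5000, 1.0, 40000 / 5.0)  # shrinks to ~0.54 by iteration 5000

# Learning rate: alpha_0=0.42, time_constant=40000/6.0
a5000 = exponential_decay(5000, 0.42, 40000 / 6.0)
```

The slowly shrinking radius is what gradually narrows the SOM neighborhood from global ordering to local refinement.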

In [5]:

```
import numpy as np
from holoviews import Dimension, Image
```

Now that the model has been defined, we can explore it. The structure of the loaded model is shown in this screenshot taken from the Tk GUI's Model Editor:

The large circle indicates that the Retina Sheet is fully connected to the V1 Sheet.

The plot below shows the initial set of weights from a 10x10 subset of the V1 neurons (i.e., every neuron with the reduced cortex_density, or every fourth neuron for cortex_density=40):

In [6]:

```
topo.sim.V1.Afferent.grid()
```

Out[6]:

As we can see, the initial weights are uniform random. Each neuron receives a full set of connections from all input units, and thus each has a 24x24 or 10x10 array of weights (depending on the retina_density).

In [7]:

```
from topo.command.analysis import measure_cog
cog_data = measure_cog()
```

In [8]:

```
cog_data.CoG.Afferent
```

Out[8]:

In [9]:

```
xcog=cog_data.XCoG.Afferent.last
ycog=cog_data.YCoG.Afferent.last
xcog + ycog + xcog*ycog
```

Out[9]:

In [10]:

```
xcog.data.min(),xcog.data.max(),ycog.data.min(),ycog.data.max()
```

Out[10]:
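The `measure_cog` call above reports the center of gravity (CoG) of each V1 neuron's afferent weights, i.e. the retinal position the neuron currently prefers. Conceptually, it is just a weighted mean over retinal coordinates; a minimal NumPy sketch of that idea (the grid and weights here are hypothetical, not Topographica's internal format):

```python
import numpy as np

def center_of_gravity(weights, xs, ys):
    """Weighted mean position of a connection field over retinal coordinates."""
    total = weights.sum()
    return (weights * xs).sum() / total, (weights * ys).sum() / total

# Hypothetical 3x3 connection field on a sheet spanning [-0.5, 0.5]
xs, ys = np.meshgrid(np.linspace(-0.5, 0.5, 3), np.linspace(0.5, -0.5, 3))
weights = np.array([[0., 1., 0.],
                    [1., 4., 1.],
                    [0., 1., 0.]])
cog = center_of_gravity(weights, xs, ys)  # weights are centred, so the CoG is (0, 0)
```

Before training, the uniform random weights give every neuron a CoG near the center of the retina, which is why the initial CoG values span only a narrow range.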

We can have a look at the training patterns that will be used to train the model, without having to run it. In the cell labelled `In [4]` (in the model definition), the training patterns are defined in a variable called `input_pattern`. We see that the circular Gaussian stimulus has `x` and `y` values that are drawn from a random distribution by two independent `numbergen.UniformRandom` objects. We can now view what 30 frames of training patterns will look like:

In [11]:

```
input_pattern.anim(30, offset=0.05)
```

Out[11]:
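Each training frame is simply a circular Gaussian placed at a random (x, y) position in [-0.5, 0.5]². A self-contained NumPy sketch of this pattern generation (the mapping of the model's `size` parameter onto the Gaussian width is a rough assumption here):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_blob(x0, y0, size, density=10, width=1.0):
    """Circular Gaussian centred at (x0, y0), sampled on a width x width sheet."""
    coords = np.linspace(-width / 2, width / 2, int(density * width))
    X, Y = np.meshgrid(coords, coords)
    sigma = size / 2.0  # rough assumption: treat 'size' as the blob diameter
    return np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))

# Like the model's input_pattern: x and y drawn uniformly from [-0.5, 0.5]
frames = [gaussian_blob(rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5),
                        size=2 * np.sqrt(2.0 * 0.1 * 24.0) / 24.0)
          for _ in range(30)]
```

Because the positions are drawn uniformly, over many iterations the blobs cover the whole input square, which is what allows the map to represent the full input space.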

At the end of the notebook, we will generate a set of animations showing how the plots we have already seen evolve over development. We now create a `Collector` object that collects all information needed for plotting and animation. We will collect the information we have just examined and advance the simulation one iteration:

In [12]:

```
from topo.analysis import Collector
from topo.command.analysis import measure_cog
c = Collector()
c.collect(measure_cog)
c.Activity.Retina = c.collect(topo.sim.Retina)
c.Activity.V1 = c.collect(topo.sim.V1)
c.Activity.V1Afferent = c.collect(topo.sim.V1.Afferent)
c.CFs.Afferent = c.collect(topo.sim.V1.Afferent, grid=True, rows=10, cols=10)
```

The initial activities are blank, but after running the model for a single iteration, the sheet activities look as follows:

In [13]:

```
data = c(times=[1])
```

In [14]:

```
data.Activity.Retina + data.Activity.V1
```

Out[14]:

To see what the responses were before SOM's neighborhood function forced them into a Gaussian shape, you can look at the Projection Activity plot, which shows the feedforward activity in V1:

In [15]:

```
data.Activity.V1Afferent
```

Out[15]:

Here these responses are best thought of as Euclidean proximity, not distance. This formulation of the SOM response function actually subtracts the distances from the max distance, to ensure that the response will be larger for smaller Euclidean distances (as one intuitively expects for a neural response). The V1 feedforward activity appears random here because the Euclidean distance from the input vector to the initial random weight vector is random.
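The response computation just described can be sketched as follows: for each unit, compute the Euclidean distance between its weight vector and the input vector, then subtract each distance from the maximum distance so that closer weights yield larger responses. (A plain NumPy illustration of the idea, not Topographica's optimized `CFPRF_EuclideanDistance_opt` implementation.)

```python
import numpy as np

def euclidean_proximity_response(input_vec, weights):
    """Response of each unit: the maximum distance minus its own distance.

    weights: (n_units, n_inputs) array, one weight vector per unit.
    """
    dists = np.linalg.norm(weights - input_vec, axis=1)
    return dists.max() - dists  # closer weight vectors -> larger responses

rng = np.random.default_rng(0)
weights = rng.uniform(0, 1, size=(100, 16))  # random initial weights, as in the model
input_vec = rng.uniform(0, 1, size=16)
response = euclidean_proximity_response(input_vec, weights)
winner = response.argmax()  # the unit whose weights are closest to the input
```

With random initial weights, these distances (and hence the responses) are essentially random, matching the noisy Projection Activity plot above.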

If you look at the weights now that we have run a single training iteration, you'll see that some of the neurons have learned new weight patterns based on this input:

In [16]:

```
data.CFs.Afferent
```

Out[16]:

Some of the weights to each neuron have now changed due to learning. In the SOM algorithm, the unit with the maximum response (i.e., the minimum Euclidean distance between its weight vector and the input pattern) is chosen, and the weights of units within a circular area defined by a Gaussian-shaped neighborhood function around this neuron are updated.

This effect is visible in this plot: a few neurons around the winning unit at the top right have changed their weights. Let us run a few more iterations before having another look:

In [17]:

```
topo.sim.run(4)
topo.sim.V1.Afferent.grid()
```

Out[17]:
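The learning step described above, i.e. pick the winning unit and then pull the weights of nearby units toward the input in proportion to a Gaussian neighborhood function, can be sketched in NumPy. This is the textbook SOM rule, not Topographica's `CFPLF_EuclideanHebbian` code:

```python
import numpy as np

def som_step(weights, input_vec, positions, alpha, radius):
    """One SOM update: move weights toward the input, scaled by a Gaussian
    neighborhood centred on the winning (closest) unit.

    weights:   (n_units, n_inputs) weight vectors
    positions: (n_units, 2) cortical coordinates of each unit
    """
    dists = np.linalg.norm(weights - input_vec, axis=1)
    winner = dists.argmin()
    d2 = ((positions - positions[winner]) ** 2).sum(axis=1)
    neighborhood = np.exp(-d2 / (2 * radius ** 2))
    weights += alpha * neighborhood[:, None] * (input_vec - weights)
    return winner

# Hypothetical 10x10 map of units with 4-dimensional weight vectors
rng = np.random.default_rng(0)
positions = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2)
weights = rng.uniform(0, 1, size=(100, 4))
winner = som_step(weights, rng.uniform(0, 1, size=4), positions,
                  alpha=0.42, radius=3.0)
```

Only units near the winner get an appreciable update, which is why each iteration changes just a local patch of the weight grid.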

In [18]:

```
topo.sim.V1.Afferent.projection_view()
```

Out[18]:

In [19]:

```
topo.sim.run(245)
```

Let us use our `Collector` object `c` to collect more measurements. We will collect measurements every 250 steps until we complete 5000 iterations, which should take a few seconds at the default densities, and may take a minute or two at the higher densities:

In [20]:

```
times = [topo.sim.time()*i for i in range(1,21)]
print("Running %d measurements between iteration %s and iteration %s" %
(len(times), min(times), max(times)))
```

In [21]:

```
data = c(data, times=times)
```

We see that the topographic grid plot evolves over this period as follows:

In [22]:

```
data.CoG.Afferent
```

Out[22]:

In [23]:

```
%%opts Image {-axiswise}
(data.XCoG.Afferent + data.YCoG.Afferent +
data.XCoG.Afferent * data.YCoG.Afferent).select(Time=5000)
```

Out[23]:

In [24]:

```
data.CFs.Afferent.last
```

Out[24]:

(Remove `.last` above to view an animation up to 5000 iterations.)

Additional training up to 10000 iterations (which becomes faster due to the smaller neighborhood radius) leads to a nearly flat, square map:

In [25]:

```
times = [5000+(250*i) for i in range(1,21)]
print("Running %d measurements between iteration %s and iteration %s" %
(len(times), min(times), max(times)))
```

In [26]:

```
data = c(data, times=times)
```

In [27]:

```
data.CoG.Afferent.last
```

Out[27]:

(Remove `.last` above to view an animation up to 10000 iterations.)

In [28]:

```
topo.sim.V1.output_fns[0].kernel_radius
```

Out[28]:

In [29]:

```
data.CFs.Afferent.last
```

Out[29]:

By 30000 iterations the map has good coverage of the available portion of the input space:

In [30]:

```
times = [10000+(500*i) for i in range(1,41)]
print("Running %d measurements between iteration %s and iteration %s\n\n" %
(len(times), min(times), max(times)))
```

In [31]:

```
data = c(data, times=times)
```

In [32]:

```
data.CoG.Afferent.last
```

Out[32]:

In [33]:

```
topo.sim.V1.output_fns[0].kernel_radius
```

Out[33]:

In [34]:

```
data.CFs.Afferent.last
```

Out[34]:

We can watch how the topographic mapping unfolds over all 30000 iterations we have run:

In [35]:

```
%%opts Gravity_Contours [rescale_individually=False]
data.CoG.Afferent
```

Out[35]: