The GCAL model (gain control, adaptation and lateral) is a general-purpose cortical developmental model of topographic map formation in the primary visual cortex (V1), driven by the presentation of visual patterns on a retinal sheet. The model accounts for receptive field formation, the emergence of orientation selectivity across the cortical surface, and contrast-invariant orientation tuning. GCAL is robust to large changes in the contrast of the training patterns, which may fluctuate between widely varying statistical distributions, and the resulting orientation maps develop in a way that is robust and stable yet adaptive, reflecting the statistics of the input.
This notebook replicates the figures in Stevens et al. (2013):
@article{Stevens02102013, author = {Stevens, Jean-Luc R. and Law, Judith S. and Antol\'{i}k, J\'{a}n and Bednar, James A.}, title = {Mechanisms for Stable, Robust, and Adaptive Development of Orientation Maps in the Primary Visual Cortex}, journal = {The Journal of Neuroscience}, volume = {33}, number = {40}, pages = {15747-15766}, year = {2013}, doi = {10.1523/JNEUROSCI.1037-13.2013}, url = {http://www.jneurosci.org/content/33/40/15747.full} }
If you can double-click on this text and edit it, you are in a live IPython Notebook environment where you can run the code and explore the model. Otherwise, you are viewing a static (e.g. HTML) copy of the notebook, which allows you to see the precomputed results only. To switch to the live notebook, see the notebook installation instructions.
This notebook uses the GCAL model definition gcal.ty exported from this notebook to launch the simulations and to collate the data needed to plot all the figures published in the paper mentioned above. Collecting all the statistics required to reproduce the figures at publication quality requires 842 simulations (four models × ten random seeds × 21 contrast values, plus the two single GCAL runs used for Figures 10-12), each taking about 45 minutes (four threads on an Intel Xeon E5620 quad-core processor).
You can complete the launch much more quickly by running only a subset of the simulations (covering most of the figures), selected via the simulation parameters below, and/or by reducing the simulation quality. By default, this notebook uses double the cortical density (size of V1) specified in the model file in order to attain publication-quality maps.
Running all 842 high-quality simulations is computationally very expensive, but running a single simulation at half cortical density is quick: the GCAL model can be run to completion (20000 iterations) in about 20 minutes on an Intel i3 processor. You can also launch the Topographica GUI and explore the model interactively from the notebook defining the model.
Note that the gcal.ty model file exported by the notebook may be run without IPython using Topographica as follows:
./topographica -g ./gcal.ty
import os, sys
sys.path += [os.path.abspath(os.path.join('..','..'))]
Specifying the parameter space
Launching the simulations
Reproducing the published figures
Note that by default, all jobs are launched locally. If you wish to launch all the necessary jobs, it is advisable to use a compute cluster if one is available.
This section defines the global settings as well as the constant definitions shared by the four basic models (L, AL, GCL and GCAL). Running all 842 high-quality simulations is computationally very expensive; you may wish to use a lower simulation quality or run only a subset of the total number of necessary simulations.
import os
import numpy as np
import topo, param, lancet
from jn13_figures.lib import measurement, misc
from topo.misc.commandline import global_params as jn13rc
jn13rc.add(
run_locally = param.Boolean(default=True,
doc="Whether to use Grid Engine or run the simulations locally"),
node_threads = param.Number(default=4,
doc="Number of threads to use per node (if using a cluster)"),
quality = param.Number(default=1.0,
doc="Quality of the simulations used (default of 50% for speed)."),
seed_count = param.Number(default=10,
doc="Number of random training seeds for Figures 6-9."),
Fig05_06 = param.Boolean(default=False,
doc="Launch L simulations, required for Figures 5 and 6."),
Fig07 = param.Boolean(default=False,
doc="Launch AL simulations, required for Figure 7."),
Fig08 = param.Boolean(default=False,
doc="Launch GCL simulations, required for Figure 8."),
Fig09 = param.Boolean(default=False,
doc="Launch GCAL simulations, required for Figures 9."),
Fig10_12 = param.Boolean(default=False,
doc="Launch GCAL with noisy disks->images for Figure 10 and 12."),
Fig11 = param.Boolean(default=False,
doc="Launch GCAL with biased image patches for Figure 11.")
)
# You may enable the generation of more figures here as necessary
jn13rc.quality = 0.5
jn13rc.Fig10_12 = True
jn13rc.Fig11 = True
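For example, to additionally launch the four Gaussian-trained models at full publication quality, the corresponding flags defined above could be enabled instead; a sketch of such a configuration (left commented out here so the quicker defaults above remain in effect):
# Alternative (commented out): full quality plus all Gaussian-trained models,
# as needed to reproduce Figures 5-9 as well.
# jn13rc.quality = 1.0
# jn13rc.Fig05_06 = jn13rc.Fig07 = jn13rc.Fig08 = jn13rc.Fig09 = True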
The appropriate boolean settings for contrast-gain control (LGN) and homeostatic threshold adaptation (V1) that choose between the L, AL, GCL and GCAL models:
# Analysis common to L, AL, GCL and GCAL
gaussian_figures = ['Fig05_06', 'Fig07', 'Fig08', 'Fig09']
# Figures specific to GCAL (retinal wave -> image patches)
GCAL_figures = ['Fig10_12', 'Fig11']
mechanisms = { # (HOMEOSTASIS, GAIN CONTROL)
'Fig05_06':(False, False), # The L Model
'Fig07': (True, False), # The AL Model
'Fig08': (False, True), # The GCL Model
'Fig09': (True, True), # The GCAL Model
'Fig10_12':(True, True), # GCAL (noisy disk -> image patches)
'Fig11': (True, True), # GCAL (biased image patches)
}
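For reference, a small illustrative mapping (not used elsewhere in this notebook) from these (homeostasis, gain control) flag pairs back to the model names used in the paper:
# Illustrative only: recover the model name from the mechanism flags above
model_names = {(False, False): 'L',   (True, False): 'AL',
               (False, True):  'GCL', (True, True):  'GCAL'}
print "Fig09 uses the %s model" % model_names[mechanisms['Fig09']]  # GCAL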
# The first element of the tuple is the dataset, the second is the number of retinal waves
training = {
'Fig05_06':('Gaussian', 0),
'Fig07': ('Gaussian', 0),
'Fig08': ('Gaussian', 0),
'Fig09': ('Gaussian', 0),
'Fig10_12':('Nature', 6000),
'Fig11': ('VGR', 6000)} # Vertical goggle rearing
output_directory = os.path.join(os.getcwd(), 'jn13_figures', 'output')
print "OUTPUT DIRECTORY: %r" % output_directory
Generate the dataset for vertical goggle rearing:
images_dir = param.resolve_path('images', path_to_file=False)
misc.generate_GR(images_dir, name='VGR')
# Use the local Launcher if jobs to be run locally, else use Grid Engine
Launcher = lancet.Launcher if jn13rc.run_locally else lancet.QLauncher
# Topographica supports OpenMP so one process will use all cores
Launcher.max_concurrency = 1 if jn13rc.run_locally else None
# The qsub options for Grid Engine to use OpenMP with N threads.
qsub_flag_options = dict(b='y',
pe=('memory-2G', str(jn13rc.node_threads)),
v='OMP_NUM_THREADS=%s' % str(jn13rc.node_threads),
l='h_rt=05:59:00',
P='inf_ndtc')
launcher_kwargs = {} if jn13rc.run_locally else {'qsub_flag_options':qsub_flag_options}
These constants define the settings common to all the simulations launched.
# Settings that are constant for all simulations launched
model_constants = lancet.Args(retina_density=24.0, lgn_density=24.0, area=1.5,
cortex_density= 98.0 * jn13rc.quality)
# Simulation times at which to perform measurements
measurement_times = lancet.Args(times=[i*1000 for i in range(21)])
# The number of phases and orientations used in the orientation tuning curve measurement
analysis_constants = lancet.Args( num_orientation=20, num_phase=8 )
print "The analysis and measurement contant settings:"; (model_constants * analysis_constants).dframe
constants = (model_constants * analysis_constants * measurement_times)
Metaparameters allow parameterization over sets of runs, enabling different behaviour for different batches. In this paper, the four models are first run with 10 different seeds using Gaussian training patterns (Figures 5-9) before GCAL is tested using different input statistics (Figures 10-12). First, define a pool of random seeds to be used in both cases:
np.random.seed(42)
random_seeds = [np.random.randint(0,1000) for i in range(jn13rc.seed_count)]
Choose the models to analyse (Figures 6-9) and how many runs will be used for each:
# The different models that need to be trained, keyed by published figure number
gauss_figures = lancet.List('figure', [fig for fig in gaussian_figures if getattr(jn13rc, fig)])
# Defining <seed_count> random training seeds
input_seeds = lancet.List('input_seed', random_seeds)
gauss_metaparams = (gauss_figures * input_seeds)
Figures 10 and 11 each require a single GCAL simulation but under a different training condition. Only a single seed is required to reproduce these figures.
gcal_figures = lancet.List('figure', [fig for fig in GCAL_figures if getattr(jn13rc, fig)])
input_seed = lancet.Args(input_seed=random_seeds[0]) # Use the first random seed
gcal_metaparams = gcal_figures * input_seed
metaparameters = gauss_metaparams + gcal_metaparams
print "The first 12 metaparameters:"
metaparameters.dframe.head(12).T
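As a quick sanity check on the size of the parameter space: with all figure flags enabled and the default seed_count of 10, there would be 4×10 + 2 = 42 batches, each of which is expanded over the contrast values inside the launch function defined below. A minimal sketch, using the same dframe property as above:
# Count the batches implied by the current figure flags and seed settings.
# (Each batch is further expanded over contrasts inside jn13_figures below.)
print "Number of batches to launch: %d" % len(metaparameters.dframe)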
from topo.misc.lancext import RunBatchCommand, Analysis, topo_metadata
metadata = topo_metadata()
lancet.review_and_launch.output_directory = output_directory
@lancet.review_and_launch(launch_args=metaparameters)
def jn13_figures(figure=None, input_seed=None):
"""
"""
# Adding the current notebook path to the search path before loading analysis code.
analysis = Analysis(metadata=['contrast', 'input_seed', 'times'], paths=[os.getcwd()])
# Simulation settings
homeostasis, gain_control = mechanisms[figure]
dataset, retinal_waves = training[figure]
common_settings = lancet.Args(dataset=dataset,
homeostasis=homeostasis,
gain_control = gain_control,
retinal_waves=retinal_waves,
input_seed=input_seed,
figure=figure)
# The "Nature" and goggle rearing conditions use GCAL measured close to the 'sweet spot'
# while the rest of the simulations span 0-100% for Figures 6-9 (includes Fig 5)
contrasts = (lancet.Args(contrast=20) if figure in ['Fig10_12', 'Fig11']
else lancet.Range("contrast", 0, 100, 21, fp_precision=2))
if figure =='Fig10_12':
analysis.add_analysis_fn(measurement.measure_shouval)
elif figure == 'Fig11':
analysis.add_analysis_fn(measurement.measure_GR)
else:
analysis.add_analysis_fn(measurement.measure_FF)
analysis.add_analysis_fn(measurement.pinwheel_analysis)
analysis.add_analysis_fn(measurement.stability_analysis)
analysis.add_analysis_fn(measurement.afferent_CFs)
batch_arguments = contrasts * constants * common_settings
runbatch_cmd = RunBatchCommand('./gcal.ty', analysis)
batch_name = '%s_seed%d' % (figure, input_seed)
# Capture version control and platform specific metadata for reproducibility
return Launcher(batch_name, batch_arguments, runbatch_cmd, subdir=[figure],
metadata=metadata(), **launcher_kwargs)
Matching the scientific results of Stevens et al. (2013) is possible with any version of Topographica; this notebook may be run with any recent version of the simulator. The final GCAL model presented is stable and robust, which allows development to proceed consistently even if there are small changes in the numerical accuracy of the simulation. In contrast, the L and AL models are unstable, especially in the high-contrast regime (Figures 7F, 8F), making the results generated by these models sensitive to the implementation details of the Topographica version used.
To match the published example maps in Figures 7F and 8F exactly, a small patch needs to be applied to Topographica as described here, which reverts weight normalization from 64-bit to 32-bit precision. The precise versions of the software used to generate this version of the notebook are shown below; the two Topographica files containing uncommitted changes are the result of applying this patch.
metadata.summary()
# Run all the specified simulations
jn13_figures()
Once all the files have been generated by launching the simulations defined above, the figures may now be generated. To display raster versions of the figures, set show_raster to True:
show_raster = False
build_dir = './jn13_figures/output/build_dir'
template_dir = './jn13_figures/templates'
settings = dict(
build_dir = build_dir, # The directory where the template will be applied
template_dir = template_dir, # The SVG templates directory
size_factor=1.0,
show_raster = show_raster
)
import lancet
import jn13_figures
from jn13_figures import Display
from IPython.display import SVG
Display(template_dir=template_dir, snapshot='fig01')
This figure shows analysis of experimental data only, replotted from Chapman et al 1996. For this reason, a static SVG in the templates directory is shown. A notebook demonstrating the analysis of the experimental data (Kaschube 2010, Chapman 1996) will be included in the near future.
Display(template_dir=template_dir, snapshot='fig02')
jn13_figures.fig03(**settings)
Display(snapshot='fig04', template_dir=template_dir)
jn13_figures.fig05(**settings)
These figures are fairly complex, containing two static schematics (showing the relationship between photoreceptor, LGN and V1 activity) and three static rasters for the inset Gaussian retinal pattern examples. The rest is generated dynamically: nine raster images, four 'stream' plots, three histograms (with arrows and curve fits) and six vector overlays (pinwheel markers and scale bars). The code that collates the data from file, analyses the results, saves the raster and vector components and composites the results for Figures 6-9 is less than 100 lines long. You can examine this code by running:
jn13_figures.fig06_09??
First, it is necessary to load the metadata from all ten seeds across all four models to find the maximum selectivity (averaged across the map). This value is used to normalize the selectivity scale to the highest observed value in figures 6-9:
max_selectivities = []
for fig in [6,7,8,9]:
name = ('Fig0%d' % fig) if fig != 6 else 'Fig05_06'
pattern = 'jn13_figures/output/%s/*%s_*/*/*.npz' % (name, name)
files = lancet.FilePattern('npz_file', pattern)
info = lancet.FileInfo(files, 'npz_file', lancet.NumpyFile())
max_selectivities.append(info.dframe['mean_selectivity'].max())
del files
del info
max_selectivity = max(max_selectivities)
common_kwargs = dict(contrasts=(10,25,100), input_seed=102, selectivity_norm=max_selectivity)
gaussian_settings = dict(settings, **common_kwargs)
jn13_figures.fig06_09(fig=6, **gaussian_settings)
jn13_figures.fig06_09(fig=7, **gaussian_settings)
jn13_figures.fig06_09(fig=8, **gaussian_settings)
jn13_figures.fig06_09(fig=9, **gaussian_settings)
Note that unlike the other figures, which were cropped to a sheet area of 1.0x1.0, this figure uses a sheet area of 1.2x1.2 (i.e. bound_radius=0.6) for better comparison with the experimental data:
jn13_figures.fig10(bound_radius=0.6, **settings)
These maps differ very slightly from the published versions due to an ambiguity in file ordering; see the gcal.ty file for instructions on how to reproduce the published examples precisely.
jn13_figures.fig11(**settings)
jn13_figures.fig12(**settings)
!mogrify -format png ./jn13_figures/figures/*.svg
!mv ./jn13_figures/figures/*.png ./jn13_figures/templates/snapshots/
This section illustrates how to succinctly collate data from many simulations and generate publication-quality figures from that data. It generates a small but representative subset of the published figures: in this instance, everything in Figure 6E except for the inset Gaussian retinal pattern.
# Describe the pattern matching all numpy files containing data from the L model.
L_pattern = 'jn13_figures/output/Fig05_06/*/*/*.npz'
# Find all the matching files and store them under the key 'npz_file'
L_files = lancet.FilePattern('npz_file', L_pattern)
# Load all the metadata (includes analysis)
L_info = lancet.FileInfo(L_files, 'npz_file', lancet.NumpyFile())
# Use the metadata to select a row via a Pandas DataFrame.
L_selection = (L_info.dframe['input_seed']==102) & (L_info.dframe['contrast']==25) & (L_info.dframe['time']==20000)
# Load the raw data for that row:
# includes the OrientationPreference and OrientationSelectivity SheetViews, afferent weights and pinwheel contours.
L_example = L_info.load(L_info.dframe[L_selection])
# Show the transpose of the result
L_example.T
SVG(jn13_figures.OR_analysis(L_example.iloc[0], template_dir, build_dir, selectivity=True, scalebox=False, cfs=False))
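The same pattern can be applied to any of the other models. For instance, a hypothetical variation selecting the corresponding GCAL map (Fig09) at 100% contrast, assuming the Fig09 simulations have been launched and their .npz files exist:
# Hypothetical example: the corresponding GCAL (Fig09) map at 100% contrast.
GCAL_files = lancet.FilePattern('npz_file', 'jn13_figures/output/Fig09/*/*/*.npz')
GCAL_info = lancet.FileInfo(GCAL_files, 'npz_file', lancet.NumpyFile())
GCAL_selection = ((GCAL_info.dframe['input_seed']==102)
                  & (GCAL_info.dframe['contrast']==100)
                  & (GCAL_info.dframe['time']==20000))
GCAL_example = GCAL_info.load(GCAL_info.dframe[GCAL_selection])
SVG(jn13_figures.OR_analysis(GCAL_example.iloc[0], template_dir, build_dir,
                             selectivity=True, scalebox=False, cfs=False))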
import os, shutil
build_dir = './jn13_figures/output/build_dir/latex_build'
pdf_dir = os.path.join(build_dir, 'figures')
latex_file = os.path.join(build_dir, 'stevens_jn13.tex')
if not os.path.isdir(pdf_dir): os.makedirs(pdf_dir)
shutil.copyfile('./jn13_figures/templates/stevens_jn13.tex', latex_file)
%%capture inkscapelog
for i in range(1,13):
basename = 'fig%.02d' % i
svg_file = './jn13_figures/figures/%s.svg' % basename
pdf_file = pdf_dir+'/%s.pdf' % basename
print svg_file, pdf_file
!inkscape --export-pdf=$pdf_file $svg_file > /dev/null
%%capture pdflatexlog
%pushd $build_dir
!pdflatex stevens_jn13
!bibtex -min-crossrefs=99999 stevens_jn13
!pdflatex stevens_jn13
!bibtex -min-crossrefs=99999 stevens_jn13
%popd