Network models of V1

This project will be used to test implementations in PyNN (and eventually NeuroML) of published models of primary visual cortex (V1) based on spiking point neurons.

An initial focus will be on pubmed:14614078, but other models investigated will include pubmed:19477158 and pubmed:22681694.

This project is part of the INCF participation in the Google Summer of Code 2014.

Troyer Model

Here I will briefly describe the implementation of pubmed:9671678.

In order to run this model it is first necessary to install git, PyNN, and the appropriate simulator.

After that, you can clone the repository using:

git clone https://github.com/OpenSourceBrain/V1NetworkModels.git

The model runs in NEST and NEURON with the following versions:

PyNN 0.8beta1
NEST 2.2.2
NEURON 7.3

Overview of the model

As the project stands at this moment, the workflow can be briefly described in two steps. First, two scripts implement the spatio-temporal filter in the retina, produce the spike trains for each cell in the Lateral Geniculate Nucleus (LGN), and store them for further use. Second, another file loads those spike trains and runs the simulation of the cortical network in PyNN. The first task is carried out by the scripts produce_lgn_spikes_on_cells.py and produce_lgn_spikes_off_cells.py, which generate pickled files in the folder './data' containing the spike trains and positions for a given contrast, selected in the parameters of each script. After we have produced the spikes for a given contrast, we can run the main script full_model.py with the same contrast in order to run the complete model.

In order to describe the model in more detail, we will start with full_model.py. That is, we will assume that we already have the spike data from the LGN that is going to be fed into the other layers. We will therefore begin by describing the general structure of the model, which is shown in the following diagram.

Scheme

The model consists of three qualitatively different types of layers: the LGN, with center-surround receptive fields, and the inhibitory and excitatory cortical layers, which are connected to the LGN with a Gabor filter profile and to each other with a correlation-based connectivity. At the beginning of the full_model.py script we have the parameters that control the general structure of the model and the connections between the layers. First come the parameters that control the number of cells in each layer, which were set according to the values given in the Troyer paper. We have also included a constant factor to decrease the overall size of the model, and we give users the ability to choose how many LGN population layers they want to include in the simulation:

factor = 1.0  # Reduction factor
Nside_exc = int(factor * Nside_exc)
Nside_inh = int(factor * Nside_inh)

Ncell_lgn = Nside_lgn * Nside_lgn
Ncell_exc = Nside_exc ** 2
Ncell_inh = Nside_inh ** 2

N_lgn_layers = 1

We also include a series of boolean parameters that let users choose whether to include certain connections and layers in the simulation. This is very useful for testing the effect of a particular connection or layer on the overall behavior of the model.

## Main connections
thalamo_cortical_connections = True # If True create connections from the thalamus to the cortex
feed_forward_inhibition = True # If True add feed-forward inhibition ( i -> e )
cortical_excitatory_feedback = True # If True add cortical excitatory feedback (e -> e) and ( e -> i )
background_noise = True  # If True add cortical noise
correlated_noise = False  # Makes the noise correlated
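As a sketch of how flags like these can gate the construction of the network, the function below returns which projections would be built for a given set of flags. The function and layer names here are hypothetical, chosen for illustration; they are not the ones used in full_model.py.

```python
# Hypothetical sketch: which projections get built for a given set of flags.
# Each tuple names (source layer, target layer); names are illustrative only.
def select_projections(flags):
    projections = []
    if flags.get('thalamo_cortical_connections', False):
        projections += [('lgn', 'exc'), ('lgn', 'inh')]  # Gabor-profile feed-forward
    if flags.get('feed_forward_inhibition', False):
        projections.append(('inh', 'exc'))               # i -> e
    if flags.get('cortical_excitatory_feedback', False):
        projections += [('exc', 'exc'), ('exc', 'inh')]  # e -> e and e -> i
    return projections
```

Structuring the build this way makes it cheap to switch off one pathway at a time and observe its contribution to the model's behavior.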

This is all regarding the general structure of the model. The remaining part of full_model.py is composed of two main sections. The first one determines the parameters of the neurons and the connections, which were set according to the paper. The second part is the building of the model in PyNN; this is detailed in the companion blog of this project. In order to allow users to interact immediately with the model, and to provide a clearer understanding of how different parts of the Troyer model can be reproduced with our code (and of its limitations), we provide a series of scripts that qualitatively reproduce a substantial number of the figures in Troyer's original paper.

Scripts to Reproduce the Figures

Troyer3a

Troyer7a

Troyer8b


Caveats, Missing Features and Further Work

As the code stands, the model is able to reproduce qualitatively most of the behaviours reported in the Troyer paper. There is, however, a need for tuning to achieve behaviour that is also quantitatively consistent with the paper. We believe this boils down to the fact that some features of the original model are missing from ours, which spoils the fine tuning. Among them we find:
- The Troyer paper uses a synaptic conductance that not only falls exponentially but also rises exponentially. The one we have here limits itself to the falling part.
- The Troyer paper uses a variable delay after each synaptic event. Our delays are constant.
- The Troyer paper uses a correlation-based connectivity algorithm for the cortical connections that is based on the correlation between the LGN receptive fields, instead of using the Gabor filters directly as we did.

Further work could consist of adding to PyNN the capabilities to handle these situations, in order to implement a Troyer model that is more faithful to the original intentions of the paper.

Details

The companion blog of this project contains the details of how to construct the different layers and connections that belong to this model. In this section we describe, for each part of the model, which files of the project it involves, together with a reference to the blog post where it is described at more length.

LGN - spikes

In brief, the retina and thalamus part of the model can be represented by a spatio-temporal filter that, when convolved with the stimuli, produces the firing rate of a given LGN cell. After that, we can use a non-homogeneous Poisson process to produce the corresponding spikes for each cell. We describe this in detail below.

Spatio-Temporal Receptive Field (STRF)

The file kernel_functions.py contains the code for creating the STRF. The spatial part of the kernel possesses a center-surround architecture, which is modeled as a difference of Gaussians. The temporal part of the receptive field has a biphasic structure; we use the implementation described in Cai et al. (1998). The details are described at length in the companion blog of this project in the post (Retinal Filter I).
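For illustration, a minimal version of such a kernel might look as follows. The parameter values and names are placeholders, not those of kernel_functions.py.

```python
import numpy as np

def spatial_kernel(x, y, sigma_center=0.25, sigma_surround=1.0, k_surround=0.8):
    # Difference-of-Gaussians center-surround profile (ON-type cell):
    # a narrow excitatory Gaussian minus a broader inhibitory one.
    r2 = x**2 + y**2
    center = np.exp(-r2 / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-r2 / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - k_surround * surround

def temporal_kernel(t, p1=1.0, p2=0.8, tau1=10.0, tau2=20.0):
    # Biphasic profile in the spirit of Cai et al. (1998): a fast positive
    # lobe followed by a slower negative lobe. Illustrative form only.
    return p1 * (t / tau1) * np.exp(-t / tau1) - p2 * (t / tau2) * np.exp(-t / tau2)
```

With these placeholder parameters the spatial kernel is positive at the center and negative in the surround, and the temporal kernel is positive at short latencies and negative later, which is the qualitative shape the model needs.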

We also include a small script center_surround_plot.py that can be used to visualize the spatial component of the STRF and receive immediate feedback on how the overall pattern changes when the parameters and resolutions are changed.

Stimuli

The file stimuli_functions.py contains the code for creating the stimuli. In particular, we used the implementation of a full-field sinusoidal grating with the parameters described in the paper. We also include in this file ternary noise, in case the user wants to sample the receptive field through an STA or any other estimation method based on white noise. Note that if the user wants to subject the system to any other stimulus, this is the place to implement it.

We also include a small script sine_grating_plot.py to visualize how the sine grating looks at a particular point in time.

Convolution

Once we have the stimuli and the STRF, we can use the convolution function defined in the file analysis_functions.py to calculate the response of the LGN neurons. The details of how the convolution is implemented are described in the following entry of the blog (Retinal Filter II). Once we have the sine grating and the functions to perform the convolution, we can calculate the firing rate of an LGN cell equipped with a center-surround receptive field structure. We describe this in more detail in the post (Firing Rate induced by a Sinus Grating).
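The core of this step is a spatio-temporal dot product between the kernel and the recent history of the stimulus, rectified and offset by a baseline rate. A direct (and deliberately slow) sketch, with placeholder names rather than the actual analysis_functions.py implementation:

```python
import numpy as np

def firing_rate(stimulus, kernel, dt, baseline=10.0):
    # stimulus, kernel: 3-D arrays indexed (t, x, y).
    # The rate at time T sums kernel[tau] * stimulus[T - tau] over the
    # kernel's temporal support, scaled by the time step dt, then the
    # result is offset by a baseline rate and rectified at zero.
    nt_kernel = kernel.shape[0]
    rates = np.zeros(stimulus.shape[0])
    for T in range(stimulus.shape[0]):
        acc = 0.0
        for tau in range(min(nt_kernel, T + 1)):
            acc += np.sum(kernel[tau] * stimulus[T - tau])
        rates[T] = max(0.0, baseline + acc * dt)
    return rates
```

In practice one would vectorize the temporal loop (e.g. with FFT-based convolution), but the nested-loop form makes the definition of the operation explicit.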

Producing Spikes

Once we have the firing rate of a neuron, we can use the produce_spikes function in the file analysis_functions.py. This function takes the firing rate and, using a non-homogeneous Poisson process, outputs an array with the spike times. We provide in the repository the script produce_lgn_spikes_one.py for testing variations of parameters and as an example showcase.
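A standard way to sample from a non-homogeneous Poisson process is Lewis-Shedler thinning: draw candidate spikes from a homogeneous process at an upper-bound rate, then accept each with probability rate(t) / rate_max. The sketch below is a generic illustration of that technique, not the code of produce_spikes.

```python
import random
import math

def poisson_spikes_thinning(rate_fn, t_stop, rate_max, seed=None):
    # Spike times on [0, t_stop) in ms, from a time-varying rate rate_fn(t)
    # in Hz that must be bounded above by rate_max (Lewis-Shedler thinning).
    rng = random.Random(seed)
    t = 0.0
    spikes = []
    while True:
        # Candidate spike from a homogeneous process at rate_max (Hz -> 1/ms).
        t += -math.log(rng.random()) / (rate_max / 1000.0)
        if t >= t_stop:
            return spikes
        # Keep the candidate with probability rate(t) / rate_max.
        if rng.random() < rate_fn(t) / rate_max:
            spikes.append(t)
```

For a constant rate the accept step always fires and the method reduces to an ordinary homogeneous Poisson generator.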

Storing Spikes

Now that we have the complete mechanism of spike creation, we can use the files produce_lgn_on_spikes.py and produce_lgn_off_spikes.py to create the spikes for the ON and OFF LGN cells. These files create the grid of positions (which should correspond to the grid of LGN cells that we are going to use in PyNN) and the list of spikes associated with them. The results are pickled and will be used by the next stage of the model.
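The stored data only needs to pair each grid position with its spike train. A minimal round-trip sketch using the standard pickle module is shown below; the file layout (a dictionary with 'positions' and 'spikes' keys) is illustrative, and the actual scripts may pickle a different structure.

```python
import os
import pickle

def store_spike_trains(filename, positions, spike_trains):
    # positions: list of (x, y) grid coordinates, one per LGN cell;
    # spike_trains: list of spike-time lists, aligned with positions.
    os.makedirs(os.path.dirname(filename) or '.', exist_ok=True)
    with open(filename, 'wb') as f:
        pickle.dump({'positions': positions, 'spikes': spike_trains}, f)

def load_spike_trains(filename):
    # Inverse of store_spike_trains: returns (positions, spike_trains).
    with open(filename, 'rb') as f:
        data = pickle.load(f)
    return data['positions'], data['spikes']
```

Keeping positions and spikes in one file guarantees that the PyNN stage reconstructs the same cell-to-grid assignment that the retinal filter used.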

LGN - Network

Now that we have the spikes of the LGN, we can start creating the structure of the model within PyNN. The first thing to do is to use the functions in connector_functions.py to create the LGN population that will handle and represent the spikes stored in the pickled file described above. How to do this in PyNN using the neuron model SpikeSourceArray is described in some detail in the following post:
(Arbitrary Spike-trains in PyNN).
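The essential step is packaging the loaded spikes as one sorted list of spike times (in ms) per cell, which is the structure SpikeSourceArray expects. A minimal sketch follows; the Population call is left as a comment so the snippet runs without a simulator backend installed.

```python
# One sorted list of spike times (ms) per LGN cell; three toy cells here,
# the third of which never fires.
spike_trains = [[1.0, 15.5, 40.2], [3.1, 22.0], []]

# With a simulator backend this would become, roughly:
#   import pyNN.nest as sim
#   sim.setup(timestep=0.1)
#   lgn = sim.Population(len(spike_trains),
#                        sim.SpikeSourceArray(spike_times=spike_trains))

# Each cell's times must be sorted and non-negative before handing them to PyNN.
assert all(t == sorted(t) and all(s >= 0 for s in t) for t in spike_trains)
```

In the real script the nested list comes from the pickled LGN data rather than being written by hand.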

Thalamo-Cortical Connections

Also in the file connector_functions.py we find the corresponding functions to create the connections between the LGN and the cortical layers following a Gabor-profile connectivity. The process is described in more detail in the blog post Thalamo-cortical connections.
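The idea is that the sign of a Gabor function evaluated at an LGN cell's position decides whether that cell's ON or OFF input is favoured. A standard Gabor profile is sketched below; the parameter names follow common usage and are not necessarily those of connector_functions.py.

```python
import numpy as np

def gabor_profile(x, y, sigma=1.0, gamma=0.5, k=2.0, phi=0.0, theta=0.0):
    # Gabor function: an oriented sinusoid under a Gaussian envelope.
    # sigma: envelope width, gamma: aspect ratio, k: spatial frequency,
    # phi: phase, theta: orientation. Positive values favour ON input,
    # negative values OFF input, at position (x, y).
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2 * sigma**2))
    return envelope * np.cos(k * x_r + phi)
```

Sampling this profile at every LGN grid position yields, for each cortical cell, the subfield structure that gives it its orientation preference.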

Cortical Connections

Finally, again in the same file, connector_functions.py, we have the necessary functions to create a connectivity profile based on correlations between the cortical layers. We implemented this rule with two methods that follow qualitatively the same logic. The explanation of how this was carried out, and the comparison between the two approaches, can be found in Cortical Connections.
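The common logic of both methods is to score a pair of cortical cells by the normalised correlation of their receptive-field profiles sampled on a shared grid. The sketch below illustrates the Gabor-based variant used in our code (the original paper correlates the LGN receptive fields instead); names and structure are illustrative.

```python
import numpy as np

def profile_correlation(profile_a, profile_b, grid):
    # Normalised (Pearson-style) correlation between two receptive-field
    # profiles, each a function (x, y) -> float, sampled on a common grid.
    # Values near +1 suggest similarly tuned cells, near -1 anti-phase cells.
    a = np.array([profile_a(x, y) for x, y in grid], dtype=float)
    b = np.array([profile_b(x, y) for x, y in grid], dtype=float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt(np.sum(a**2) * np.sum(b**2))
    return np.sum(a * b) / denom if denom > 0 else 0.0
```

A connection rule can then, for example, create excitatory-style connections where the correlation is positive and inhibitory-style ones where it is negative, with probability or weight scaled by its magnitude.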

Blog Scripts

For the sake of completeness, we have included in the repository all the scripts used in the blog posts.