Archives 17/18

Wednesday October 31 
Location: SV 1717
Time: 10:30
Danilo Jimenez Rezende (homepage)

Google DeepMind

Title:

Probabilistic Deep Learning:
Foundations, applications and open problems.

Abstract
Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for data-efficient learning and for model-based reinforcement learning. In this talk I'll review the foundations of probabilistic reasoning and generative modeling. I will then introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models, and demonstrate a few applications of these models to density estimation, missing data imputation, data compression and planning. Finally, I will discuss some of the open problems in the field.
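As background for the "modern approximations" the abstract mentions, the sketch below shows one canonical example: a single-sample Monte Carlo estimate of the evidence lower bound (ELBO) with the reparameterization trick, the workhorse of amortized variational inference. The toy encoder, decoder and shapes are illustrative assumptions, not material from the talk.

```python
# Minimal sketch (not from the talk): one-sample ELBO estimate with the
# reparameterization trick. All shapes and densities are toy assumptions.
import numpy as np

def elbo_estimate(x, enc_mu, enc_logvar, decode, rng):
    """One-sample ELBO for a diagonal-Gaussian encoder; `decode` is a
    hypothetical callable returning the mean of p(x|z)."""
    # Reparameterize: z = mu + sigma * eps, eps ~ N(0, I)
    eps = rng.standard_normal(enc_mu.shape)
    z = enc_mu + np.exp(0.5 * enc_logvar) * eps
    # Reconstruction term: log p(x|z) under a unit-variance Gaussian decoder
    x_mu = decode(z)
    log_px_z = -0.5 * np.sum((x - x_mu) ** 2 + np.log(2 * np.pi))
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians
    kl = 0.5 * np.sum(np.exp(enc_logvar) + enc_mu ** 2 - 1.0 - enc_logvar)
    return log_px_z - kl

rng = np.random.default_rng(0)
x = rng.standard_normal(10)              # toy observation
decode = lambda z: np.tanh(z).repeat(5)  # toy decoder: 2-d z -> 10-d x
print(elbo_estimate(x, np.zeros(2), np.zeros(2), decode, rng))
```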

Wednesday October 10 
Location: SV 1717
Time: 12:15

BMI Seminar: Sandro Romani (homepage)

Janelia Research Campus

Title:

Learning and memory in hippocampus and cortex: a tale of two theories

Abstract
Learning is thought to be mediated by changes of synaptic weights. Many physiology-based theories posit that synaptic modifications are induced by repeated and almost coincident spiking of pre- and post-synaptic neurons. At the level of behavior, however, learning can occur without repetition and can link events that are separated in time by seconds. The hippocampus has been implicated in these forms of learning. Analysis and modeling of in-vivo recordings from region CA1 of the rodent hippocampus, validated with in-vitro manipulations, reveal a novel learning rule: pre-synaptic spiking and post-synaptic complex spiking can be separated by a time-scale of seconds while still inducing potent (one-shot) changes in synaptic weights. This novel plasticity rule offers an immediate connection between behavior and synaptic changes in the hippocampus.
Cortex, on the other hand, is thought to learn by slowly shaping circuit dynamics to subserve complex behaviors. For instance, certain behaviors require short-term memory: the ability to maintain information, in the absence of cues, for a time scale of seconds. The classic neural correlate of short-term memory is persistent selective activity: elevated neuronal firing that persists during memory maintenance, shows large and systematic ramping, and depends on the particular item kept in memory. Several hypotheses have been proposed to explain how neural circuits can support short-term memory. We use theory-driven optogenetic perturbations of pre-motor cortex in rodents performing a delayed binary-response task. Circuit dynamics following recovery from perturbations reveal the presence of discrete attractors. We further devise a task structure that eliminates ramping activity, obtaining stationary activity patterns as observed in standard attractor network models.
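To make concrete how pre- and post-synaptic events separated by seconds can still drive a one-shot weight change, the sketch below uses a slowly decaying synaptic eligibility trace that a later complex spike converts into a large update. The rule and all constants are assumptions for illustration, not the fitted rule from the recordings.

```python
# Minimal sketch (illustrative, not the rule from the talk): a synaptic
# eligibility trace decaying over seconds lets a postsynaptic complex spike,
# arriving long after presynaptic spiking, trigger a one-shot weight change.
import numpy as np

dt = 0.001                 # 1 ms time step
T = int(8.0 / dt)          # simulate 8 s
tau_e = 2.0                # eligibility-trace time constant: seconds, not ms
eta = 1.0                  # learning rate for the one-shot update

pre_spikes = np.zeros(T); pre_spikes[1000:1100] = 1.0  # pre burst at t = 1 s
plateau    = np.zeros(T); plateau[4000] = 1.0          # complex spike at t = 4 s

w, e = 0.0, 0.0
for t in range(T):
    e += dt * (-e / tau_e) + pre_spikes[t]  # trace jumps with pre spikes, decays slowly
    w += eta * plateau[t] * e               # plateau converts remaining trace into dw

print(f"weight change: {w:.2f} (pre and post events 3 s apart)")
```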

Monday September 10 
Location: AAC 132
Time: 12:05
Jan Peters (homepage)

Technische Universität Darmstadt

Title:

Reinforcement Learning of Robot Skills: From Policy Gradients to Divergence-based Policy Search

Abstract
Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction: learning of accurate control is needed to execute movements, learning of motor primitives is needed to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm. These robot motor skills range from toy examples (e.g., paddling a ball, ball-in-a-cup) to playing robot table tennis against a human being and manipulating various objects.
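As a reference point for the policy-gradient end of the spectrum named in the title, here is a minimal score-function (REINFORCE) gradient ascent over the parameters of a Gaussian "policy", as one might use to search over motor-primitive parameters. The toy reward and all constants are illustrative assumptions, not the algorithms presented in the talk.

```python
# Minimal sketch: a score-function (REINFORCE) policy-gradient step for a
# Gaussian search distribution theta ~ N(mu, sigma^2 I) over primitive
# parameters. The reward function and constants are toy assumptions.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = np.zeros(3), 0.5          # policy parameters over a 3-d primitive
target = np.array([1.0, -0.5, 0.2])   # hypothetical "good" primitive parameters
reward = lambda th: -np.sum((th - target) ** 2)

for step in range(200):
    thetas = mu + sigma * rng.standard_normal((50, 3))   # 50 rollouts
    R = np.array([reward(th) for th in thetas])
    b = R.mean()                                         # baseline reduces variance
    # grad of log N(theta; mu, sigma^2 I) w.r.t. mu is (theta - mu) / sigma^2
    grad_mu = ((R - b)[:, None] * (thetas - mu)).mean(axis=0) / sigma**2
    mu += 0.05 * grad_mu                                 # ascend the expected reward

print("learned mu:", np.round(mu, 2), "target:", target)
```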

Wednesday 4th of July
Location: AAC 114
Time: 10:30
Maurizio Mattia (homepage)

Complex Systems Section

Istituto Superiore di Sanità

Title:

Heterogeneous dynamics of cortical assemblies: evidence and advantages

Abstract

The computational power of neuronal networks relies on the richness of the dynamical repertoire they are capable of expressing. This complexity can be effectively captured by suitable dynamical mean-field theories for the pooled spiking activity of neuronal assemblies. Here, I will briefly summarize how an effective low-dimensional nonlinear dynamics of cortical assemblies can be worked out. This framework will then be used to describe the alternation between high-firing (Up) and almost quiescent (Down) states observed in the intact brain during slow-wave sleep and deep anaesthesia. The footprints of such nonlinear dynamics will be shown in experiments where the activity of a whole cortical column is recorded and hysteresis loops of heterogeneous size arise in layer 5 in response to the input received from layer 6. This evidence of heterogeneous excitability will finally be shown to exist also in the prefrontal cortex of non-human primates performing a goal-to-action transformation task, thus simultaneously implementing both dynamical stability and input susceptibility.
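A minimal sketch of the kind of low-dimensional mean-field dynamics described above: a single rate population with strong recurrent excitation and slow spike-frequency adaptation alternates between Up and Down states. Parameters are assumptions chosen to produce the alternation, not the fitted model from the talk.

```python
# Minimal sketch (assumed parameters): rate-based mean-field population with
# spike-frequency adaptation. Recurrent excitation creates bistability; slow
# adaptation pushes the population between Up (high-rate) and Down states.
import numpy as np

phi = lambda x: 1.0 / (1.0 + np.exp(-x))   # population transfer function
dt, T = 0.001, 10.0
tau_r, tau_a = 0.02, 1.0                   # fast rate, slow adaptation (seconds)
w, b, I = 12.0, 8.0, -2.0                  # recurrence, adaptation gain, drive

r, a = 0.0, 0.0
rates = []
for _ in range(int(T / dt)):
    r += dt / tau_r * (-r + phi(w * r - a + I))
    a += dt / tau_a * (-a + b * r)
    rates.append(r)

up = np.array(rates) > 0.5
print(f"fraction of time in the Up state: {up.mean():.2f}")
```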


Wednesday 27th of June
Location: SV 2 510
Time: 10:30
Simon Weber (homepage)

Institute of Software Engineering and Theoretical Computer Science

Technische Universität Berlin

Title:

Learning place cells, grid cells and invariances with excitatory and inhibitory plasticity.

Abstract


Thursday 14th of June
Location: AAC 120
Time: 14:00
Rui Ponte Costa (homepage)

Department of Physiology

University of Bern

Title:

Learning probabilistic synapses: where and when

Abstract


Tuesday 12th of June
Location: AAC 120
Time: 10:30
Christian Tetzlaff (homepage)

Bernstein Center for Computational Neuroscience – Goettingen

Title:

The formation and organization of memory representations in adaptive, self-organized neural networks

Abstract


Monday February 5th 
Location: AAC 120
Time: 10:30
Jean-Pascal Pfister (homepage)

Institute of Neuroinformatics
University of Zurich and ETH Zurich

Title:

Neural Particle Filter

Abstract


Wednesday January 31st 
Location: AAC 014
Time: 10:00
Ljerka Ostojic (homepage)

Department of Psychology,

University of Cambridge, UK

Title:

Social Cognition in Corvids: Lessons from Eurasian jay caching and food-sharing behaviours

Abstract

To study social cognition in corvids, we use Eurasian jays (Garrulus glandarius) as a model species because we can utilise two natural behaviours, namely caching and food-sharing. Like other corvids, Eurasian jays utilise a range of cache-protection strategies when there is potential that a conspecific could steal their caches. These studies suggest that cachers might be able to take into account what a conspecific can see or hear (perspective taking). In addition, we have developed a novel behavioural paradigm to test whether jays might be sensitive to what a conspecific wants or desires (desire-state attribution). This paradigm allows us to investigate whether male Eurasian jays respond to the changing desire of their female partner when sharing food with her during courtship. Finally, I will present recent collaborative projects that integrate behavioural experiments and computational models to investigate the mechanisms that underlie the jays’ caching and food-sharing behaviours.


Wednesday January 24th 
Location: TBD
Time: TBD
Sander Bohte (homepage)

Neural Computation Lab
Machine Learning Group
CWI Amsterdam

Title:

Efficient Computation in Adaptive Artificial Spiking Neural Networks

Abstract

Artificial Neural Networks (ANNs) are bio-inspired models of neural computation that have proven highly effective. Still, ANNs lack a natural notion of time, and neural units in ANNs exchange analog values in a frame-based manner, a computationally and energetically inefficient form of communication. This contrasts sharply with biological neurons that communicate sparingly and efficiently using binary spikes. While artificial Spiking Neural Networks (SNNs) can be constructed by replacing the units of an ANN with spiking neurons, the performance of current SNNs does not match that of deep ANNs on hard benchmarks, and these SNNs use much higher firing rates than their biological counterparts, limiting their efficiency.
Here we show how spiking neurons that employ an efficient form of neural coding can be used to construct SNNs that match high-performance ANNs and exceed the state of the art in SNNs on important benchmarks, while requiring much lower average firing rates. For this, we use spike-time coding based on the firing-rate-limiting adaptation phenomenon observed in biological spiking neurons. This phenomenon can be captured in fast-adapting spiking neuron models, for which we derive the effective transfer function. Neural units in ANNs trained with this transfer function can be substituted directly with adaptive spiking neurons, and the resulting Adaptive SNNs (AdSNNs) can carry out classification in deep neural networks using up to an order of magnitude fewer spikes compared to previous SNNs.
Adaptive spike-time coding additionally allows for the dynamic control of neural coding precision: we show how a simple model of arousal in AdSNNs further halves the average required firing rate, and this notion naturally extends to other forms of attention. AdSNNs thus hold promise as a novel and efficient model for neural computation that naturally fits temporally continuous and asynchronous applications.
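To make the adaptation mechanism concrete, the sketch below implements a generic leaky integrator with an adaptive threshold: each spike raises the threshold, lengthening subsequent inter-spike intervals and capping the firing rate. It is an illustrative stand-in, not the exact neuron model or transfer function derived in the talk.

```python
# Minimal sketch (illustrative neuron, assumed constants): a spiking unit whose
# threshold jumps after every spike and relaxes back slowly, so a constant
# drive produces progressively longer inter-spike intervals.
import numpy as np

def adaptive_spikes(drive, dt=0.001, tau_v=0.01, tau_th=0.2,
                    th0=1.0, th_jump=1.0):
    """Leaky integrator with an adaptive threshold; returns spike times (s)."""
    v, th, spikes = 0.0, th0, []
    for i, I in enumerate(drive):
        v += dt / tau_v * (-v + I)
        th += dt / tau_th * (th0 - th)   # threshold relaxes back to th0
        if v >= th:
            spikes.append(i * dt)
            v = 0.0                      # reset membrane
            th += th_jump                # one spike makes the next one harder
    return spikes

drive = np.full(2000, 3.0)               # 2 s of constant input
s = adaptive_spikes(drive)
print(f"{len(s)} spikes; first intervals:", np.round(np.diff(s[:5]), 3))
```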


Wednesday December 13 
Location: SV 1717
Time: 12:15
BMI Seminar: Rodrigo Quian Quiroga (homepage)

Centre for Systems Neuroscience, University of Leicester, UK

Title:

Memory formation and long-term coding in the human hippocampus

Abstract
Intracranial recordings in patients suffering from intractable epilepsy allow studying the firing of multiple single neurons in awake and behaving human subjects. These recordings are typically done in the hippocampus and surrounding cortex, an area known to be critical for memory functions. Using the unique opportunity to record directly from such neurons in the human brain, about 10 years ago we found what have been named 'Concept Cells' or 'Jennifer Aniston Neurons': neurons that represent specific concepts, responding to particular persons or objects, such as Jennifer Aniston, Luke Skywalker or the Sydney Opera House. In this talk I will show more recent work on how these neurons are involved in forming and storing declarative, and particularly episodic, memories: the memories we have of our life experiences.


Wednesday November 8 
Location: AAC 120
Time: 10:30
Francesca Mastrogiuseppe (homepage)

Group for Neural Theory, LNC, DEC, ENS, Paris

Title:

Linking connectivity, dynamics and computations in recurrent neural networks

Abstract

Synaptic connectivity determines the dynamics and computations performed by neural circuits. Due to the highly recurrent nature of circuitry in cortical networks, the relationship between connectivity, dynamics and computations is complex, and understanding it requires theoretical models.
Classical models of recurrent networks are based on connectivity that is either fully random or highly structured, e.g. clustered. Experimental measurements, in contrast, show that cortical connectivity lies somewhere between these two extremes. Moreover, a number of functional approaches suggest that a minimal amount of structure in the connectivity is sufficient to implement a large range of computations.
Based on these observations, here we develop a theory of recurrent networks with a connectivity consisting of a combination of a random part and a minimal, low-dimensional structure. We show that in such networks the dynamics are low-dimensional and can be directly inferred from connectivity using a geometrical approach. We exploit this understanding to determine the minimal connectivity structures required to implement specific computations. We find that the dynamical range and computational capacity of a network quickly increase with the dimensionality of the structure in the connectivity. Our simplified theoretical framework captures and connects a number of outstanding experimental observations, in particular the fact that neural representations are high-dimensional and distributed while dynamics are low-dimensional, with a dimensionality that increases with task complexity.
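A minimal sketch of the model class described above, with connectivity given by a random matrix plus a rank-one term: simulating it shows the steady state aligning with the structured direction, which is the sense in which the dynamics are low-dimensional. Parameters and the readout are illustrative assumptions, not the theory from the talk.

```python
# Minimal sketch (assumed parameters): rate network with connectivity
# J = g*chi + rho*m m^T / N, i.e. random part plus rank-one structure.
# The steady state aligns with the structured direction m.
import numpy as np

rng = np.random.default_rng(2)
N, g, rho = 500, 0.8, 2.5
chi = rng.standard_normal((N, N)) / np.sqrt(N)   # unstructured random part
m = rng.standard_normal(N)
J = g * chi + rho * np.outer(m, m) / N           # rank-one structure added

dt, steps = 0.1, 400
x = 0.1 * rng.standard_normal(N)
for _ in range(steps):
    x += dt * (-x + J @ np.tanh(x))              # standard rate dynamics

m_hat = m / np.linalg.norm(m)
r_hat = rng.standard_normal(N); r_hat /= np.linalg.norm(r_hat)
print("overlap of steady state with m:  ", abs(x @ m_hat).round(1))
print("overlap with a random direction: ", abs(x @ r_hat).round(1))
```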


Thursday, March 30th: Swiss Computational Neuroscience Seminars


Time: 14:15
Xiao-Jing WANG (homepage)

New York University and NYU Shanghai

Title:

What does it mean to build a large-scale brain circuit model?

Abstract

Eilif MULLER (homepage)

Blue Brain Project, EPFL, Switzerland

Title:

Knowledge integration in neuroscience: The bridging role of data-driven models

Abstract



Thursday November 2 
Location: SV 2 615
Time: 16:15
Prof. Giancarlo La Camera (homepage)

Department of Neurobiology & Behavior, Stony Brook University, New York

Title:

Spike-based reinforcement learning for temporal stimulus segmentation and decision making


Wednesday May 17
Location: SV 1717
Time: 16:15
Anton V. CHIZHOV (homepage)

Ioffe Institute, Saint-Petersburg, Russia

Title:

Conductance-based refractory density model of a neuronal statistical ensemble

Abstract