Semester Projects

List of Projects – Spring 2020

1. Surprise and Learning in the Human Brain: What can we find in experimental data?

Just as one can expect the occurrence of an event, one can also experience the violation of that expectation. Such a violation is perceived by the brain as surprise, which can be seen as a measure of how much the brain’s current belief differs from reality. Recently, there have been a few works* on the mathematical formulation of surprise and surprise-based learning in the human brain. The goal of this project is to connect the existing theories to experimental data. The student will analyse a recently published dataset to: 1. find biomarkers of surprise in brain signals, 2. compare different computational models in explaining the human perception of surprise, and 3. possibly extend the computational models (depending on the results of the first two steps).
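To give a flavour of the computational models in question, here is a minimal sketch of one common formulation, Shannon surprise (the negative log-probability of the observed event under the current belief). The function name and the toy belief distribution are purely illustrative assumptions, not part of the project material:

```python
import numpy as np

def shannon_surprise(p_observed: float) -> float:
    """Shannon surprise: -log of the probability the current belief
    assigned to the event that actually occurred."""
    return -np.log(p_observed)

# Toy belief over two outcomes: A is strongly expected, B is not.
belief = {"A": 0.9, "B": 0.1}

low = shannon_surprise(belief["A"])   # expected event -> small surprise
high = shannon_surprise(belief["B"])  # unexpected event -> large surprise
```

Other definitions in the literature (e.g. Bayesian surprise, based on how much the belief itself must change) differ from this one; comparing such variants against brain signals is precisely the kind of question the project addresses.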

Requirements:
1. Experience with MATLAB or Python programming (knowledge of Julia is preferable but not necessary)
2. Solid understanding of data science and statistics
3. Familiarity with signal processing

Interested students should send grades and CV to Alireza Modirshanechi.

* e.g. and

2. Implementation of a surprise-based reinforcement learning spiking neural network in a volatile environment.

Surprise is a neurophysiological response to unexpected events. There is growing experimental evidence that surprise is a key process in learning; surprising information is more memorable and allows quick adaptation to a changing environment.
Model-free reinforcement learning algorithms, especially for spiking neural networks (SNNs), are often inefficient at solving volatile tasks, such as the Blocking Maze task (Reinforcement Learning: An Introduction, Sutton & Barto, 2017).
The goal of this Master project is to implement the model-free SNN designed in “A Spiking Neural Network Model of Model-Free Reinforcement Learning with High-Dimensional Sensory Input and Perceptual Ambiguity” (Nakano et al., 2015) and to introduce a neural population computing a surprise signal that allows fast adaptation to a changing environment.
Further work could add a model-based SNN component in order to show the performance gain from combining model-free RL with model-based exploration.
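To illustrate the core idea of surprise-driven adaptation (independently of the spiking implementation in Nakano et al.), here is a hypothetical sketch in which an unsigned prediction error acts as a surprise signal that boosts the learning rate, so a scalar reward estimate re-adapts quickly after the environment changes. The function, parameters (`base_lr`, `gain`) and the toy change-point task are assumptions for illustration only:

```python
import numpy as np

def surprise_modulated_estimate(rewards, base_lr=0.05, gain=0.5):
    """Track a scalar reward estimate with a surprise-boosted learning rate.

    surprise = |prediction error|; a large surprise transiently raises the
    learning rate, speeding up re-learning after an environmental change.
    """
    v = 0.0
    estimates = []
    for r in rewards:
        surprise = abs(r - v)                     # unsigned prediction error
        lr = base_lr + gain * np.tanh(surprise)   # more surprise -> faster learning
        v += lr * (r - v)
        estimates.append(v)
    return estimates

# Volatile environment: the reward switches from 0 to 1 at step 50.
rewards = [0.0] * 50 + [1.0] * 50
est = surprise_modulated_estimate(rewards)
```

With `gain=0` the update reduces to a plain fixed-rate delta rule, which converges noticeably more slowly after the switch; that contrast is the behaviour the surprise-computing neural population is meant to produce in the SNN.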

Requirements: (strong) Python (or Julia) programming skills, good knowledge of SNNs and reinforcement learning.

Interested students should send grades and CV to Martin Barry.