Master Thesis Projects

Master Thesis Projects start once the complete master program is finished and all credits have been obtained.
Projects for SSC and SIN students should last 4 months at EPFL, or 6 months in industry or at another university.
Master Thesis Projects must be done individually.
Master Thesis Projects are worth 30 credits.
Students must have the approval of the Professor in charge of the laboratory before registering for the given project.

Link to the Academic Calendar

List of Projects – Spring 2020:

1. Surprise and Learning in the Human Brain: What can we find in experimental data?

Just as one can expect the occurrence of an event, one can also experience the violation of that expectation. Such a violation is perceived by the brain as surprise, which can be seen as a measure of how much the brain’s current belief differs from reality. Recently, there have been a few works* on the mathematical formulation of surprise and surprise-based learning in the human brain. The goal of this project is to connect the existing theories to experimental data. The student will analyse a recently published dataset to: 1. find the biomarkers of surprise in brain signals, 2. compare different computational models in explaining the human perception of surprise, and 3. possibly extend the computational models (depending on the results of the first two steps).
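As a rough illustration of the kind of surprise measures formalised in the cited works, the following Python sketch (an illustration only, not part of the official project description; all names and parameter choices are assumptions) computes Shannon surprise and Bayesian surprise for a stream of binary observations under a Beta belief.

import numpy as np
from scipy.special import betaln, digamma

def shannon_surprise(x, a, b):
    # -log p(x) under the current Beta(a, b) belief, with x in {0, 1}
    p1 = a / (a + b)                      # predictive probability of x = 1
    return -np.log(p1 if x == 1 else 1.0 - p1)

def bayesian_surprise(x, a, b):
    # KL(posterior || prior) after observing x; both beliefs are Beta distributions
    a2, b2 = a + (x == 1), b + (x == 0)
    return (betaln(a, b) - betaln(a2, b2)
            + (a2 - a) * digamma(a2)
            + (b2 - b) * digamma(b2)
            + (a + b - a2 - b2) * digamma(a2 + b2))

# A run of 0s followed by a switch to 1s produces a clear surprise peak at the switch.
observations = [0] * 50 + [1] * 50
a, b = 1.0, 1.0                           # flat Beta(1, 1) prior
for x in observations:
    s_shannon, s_bayes = shannon_surprise(x, a, b), bayesian_surprise(x, a, b)
    a, b = a + (x == 1), b + (x == 0)     # Bayesian belief update

In the actual project, such model-based surprise traces would be regressed against the brain signals in the published dataset rather than against a synthetic sequence.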

Requirements:
1. Experience with MATLAB or Python programming (knowledge of Julialang is preferable but not necessary)
2. Solid understanding of data science and statistics
3. Familiarity with signal processing

Interested students should send grades and CV to Alireza Modirshanechi.

* e.g. https://arxiv.org/abs/1907.02936 and https://www.mitpressjournals.org/doi/full/10.1162/neco_a_01025

2. Implementation of a surprise-based reinforcement learning spiking neural network in a volatile environment.

Surprise is a neurophysiological response to unexpected events. There is growing experimental evidence that surprise is a key process in learning; surprising information is more memorable and allows quick adaptation to a changing environment.
Model-free reinforcement learning algorithms, especially those implemented as spiking neural networks (SNNs), are often inefficient when solving volatile tasks, such as the Blocking Maze task (Reinforcement Learning: An Introduction – Sutton and Barto, 2017).
The goal of this Master project is to implement the model-free SNN designed in “A Spiking Neural Network Model of Model-Free Reinforcement Learning with High-Dimensional Sensory Input and Perceptual Ambiguity” (Nakano, Takashi, et al., 2015) and to introduce a neural population computing a surprise signal that allows fast adaptation to the changing environment.
Further work could include the addition of a model-based SNN in order to show the increase in performance when combining model-free RL with model-based exploration.
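To make the role of a surprise signal concrete, here is a minimal non-spiking sketch in Python (an illustrative analogue under assumed parameters, not the SNN of Nakano et al.): a tabular model-free agent whose learning rate is gated by a crude surprise signal (the absolute reward-prediction error) in a two-armed bandit whose reward probabilities switch mid-way, a toy stand-in for the volatility of the Blocking Maze task.

import numpy as np

rng = np.random.default_rng(1)
n_steps, switch_at = 400, 200
p_reward = np.array([0.9, 0.1])       # arm 0 is good before the switch
Q = np.zeros(2)                       # model-free action values
eta_min, eta_max, epsilon = 0.02, 0.5, 0.1

for t in range(n_steps):
    if t == switch_at:
        p_reward = p_reward[::-1]     # the environment changes: arm 1 becomes good
    # epsilon-greedy action selection
    a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q))
    r = float(rng.random() < p_reward[a])
    delta = r - Q[a]                  # reward-prediction error
    surprise = abs(delta)             # crude surprise signal in [0, 1]
    eta = eta_min + (eta_max - eta_min) * surprise
    Q[a] += eta * delta               # surprise-gated value update

The project replaces the tabular values by a spiking neural population and the absolute prediction error by a dedicated surprise-computing population, but the intended effect is the same: fast re-learning right after the environment changes.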

Requirements: (Strong) Python (or Julia) programming, good knowledge of SNNs and reinforcement learning.

Interested students should send grades and CV to Martin Barry.

3. Associative memories

Human memory operates with associations. If you visited a famous place with your best friend, any postcard of that famous place will remind you of him or her. Experimental evidence suggests that associative memory is stored by cell assemblies that respond selectively to single concepts. Associations between different concepts are encoded in the overlap between the respective cell assemblies.
Associative memory is traditionally modeled through attractor neural networks (ANNs). Memory engrams are represented by binary or Gaussian patterns.
In this project, the student will consider an ANN with Gaussian patterns and study how the stability of a single memory pattern is affected by its correlation with other patterns. This remains, so far, an open question and is complementary to the more traditional study of ANNs with binary patterns.
The student will first learn about the well-established literature on ANNs.
Secondly, using a mean-field approximation, the student will derive analytic equations for the network dynamics in the case of correlated patterns. The goal is to look for a critical correlation C* (which corresponds to a percentage of shared neurons) above which patterns are no longer distinguishable.
Finally, the student will numerically solve the derived equations and compare the predictions with full network simulations.
The results will be compared with the experimental results on memory association.
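For a first numerical feel of the question, a minimal simulation sketch in Python is given below (the parameter values, the way correlation is introduced, and the tanh rate dynamics are all assumptions for illustration, not the prescribed model): the patterns share a common Gaussian component so that their expected pairwise correlation is c, they are stored with a Hebbian-type rule, and the normalised overlap with each pattern is measured after the dynamics has run.

import numpy as np

rng = np.random.default_rng(2)
N, P, c, beta, n_iter = 2000, 5, 0.3, 4.0, 50

# Correlated Gaussian patterns: a shared component of weight sqrt(c)
common = rng.standard_normal(N)
patterns = np.sqrt(c) * common + np.sqrt(1 - c) * rng.standard_normal((P, N))

J = patterns.T @ patterns / N          # Hebbian-type couplings
np.fill_diagonal(J, 0.0)

s = patterns[0].copy()                 # initialise on the first stored pattern
for _ in range(n_iter):
    s = np.tanh(beta * (J @ s))        # deterministic rate dynamics

# Normalised overlap of the final state with every stored pattern
overlaps = patterns @ s / (np.linalg.norm(patterns, axis=1) * np.linalg.norm(s))

Sweeping c and checking for which values the overlap with the first pattern still clearly dominates the others gives a numerical counterpart to the critical correlation C* targeted by the mean-field analysis.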

Requirements: Python programming (knowledge of Julialang is preferable but not necessary)

Interested students should send grades and CV to Chiara Gastaldi.