

10.3 Spike-Time Dependent Plasticity

In this section we study synaptic plasticity at the level of individual spikes and focus on changes in the synaptic efficacy that are driven by temporal correlations between presynaptic spike arrival and postsynaptic firing.

We have seen in Chapter 1.4 that the neuronal code is still far from being fully understood. In particular, the relevance of precisely timed spikes in neuronal systems is a fundamental, yet unsolved question (Rieke et al., 1996). Nevertheless, there are neuronal systems for which the relevance of temporal information has been clearly shown. Prominent examples are the electro-sensory system of electric fish and the auditory system of barn owls (Konishi, 1993; Carr and Konishi, 1990; Konishi, 1986; Carr, 1993; Heiligenberg, 1991). If the timing of spikes is important, then we have to deal with the following questions: How does the timing of spikes influence weight changes? How do weight changes influence the timing of spikes?

10.3.1 Phenomenological Model

The experiments described in Section 10.1 show that changes of the synaptic efficacy are driven by pre- and postsynaptic activity. The amplitude and even the direction of the change depend on the relative timing of presynaptic spike arrival and postsynaptic triggering of action potentials.

In the following we develop a phenomenological model for spike-time dependent synaptic plasticity. We assume that apart from an activity-independent weight decay all changes are triggered by pre- or postsynaptic action potentials. For the sake of simplicity - and for want of detailed knowledge - we take weight changes to be instantaneous, i.e., the synaptic efficacy is a piece-wise continuous function of time with steps whenever a spike occurs. The amplitude of each step depends on the relative timing of previous spikes; cf. Fig. 10.6.

Let us first concentrate on the effect of presynaptic spikes. Each spike that arrives at the presynaptic terminal can trigger a change in the synaptic efficacy even without additional postsynaptic action potentials. In the case of so-called (non-Hebbian) presynaptic LTP, the amplitude $a_1^{\text{pre}}$ of these changes is positive; cf. Fig. 10.6B. In addition to this non-Hebbian effect there is also a contribution to the change that depends on the time since the last postsynaptic action potential(s). In analogy to the spike response formalism of Chapter 4.2 we use an integral kernel $a_2^{\text{pre,post}}$ to describe the amplitude of the change in the synaptic efficacy. Altogether we have

$$\frac{\text{d}}{\text{d}t} w_{ij}(t) = S_j(t) \left[ a_1^{\text{pre}} + \int_0^\infty a_2^{\text{pre,post}}(s) \, S_i(t-s) \, \text{d}s \right] \,, \qquad (10.13)$$

where $S_j(t) = \sum_f \delta(t - t_j^{(f)})$ and $S_i(t) = \sum_f \delta(t - t_i^{(f)})$ are the pre- and postsynaptic spike trains, respectively. The value of the kernel $a_2^{\text{pre,post}}(s)$ gives the weight change if a postsynaptic action potential is followed by presynaptic spike arrival with delay $s$. In pyramidal cells of the hippocampus, for example, this term is negative; cf. Fig. 10.4.
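The effect of Eq. (10.13) can be sketched in event-based form: a presynaptic spike triggers a step $a_1^{\text{pre}}$ plus a kernel contribution for every earlier postsynaptic spike. The following minimal sketch uses an assumed exponential shape and assumed amplitudes for $a_2^{\text{pre,post}}$; these values are illustrative, not taken from the text.

```python
import numpy as np

# Illustrative parameter choices (assumptions, not values from the text)
A1_PRE = 0.01                 # non-Hebbian presynaptic term a_1^pre
TAU = 20.0                    # assumed decay constant of a_2^{pre,post}, ms

def a2_pre_post(s):
    """Kernel a_2^{pre,post}(s): change when a postsynaptic spike is
    followed by presynaptic arrival with delay s > 0 (negative, as in
    hippocampal pyramidal cells)."""
    return -0.02 * np.exp(-s / TAU)

def dw_on_pre_spike(t_pre, post_spikes):
    """Total step in w_ij triggered by one presynaptic spike at t_pre,
    Eq. (10.13): a_1^pre plus the kernel summed over past post spikes."""
    dw = A1_PRE
    for t_post in post_spikes:
        if t_post < t_pre:            # only earlier postsynaptic spikes
            dw += a2_pre_post(t_pre - t_post)
    return dw

# Two recent postsynaptic spikes push the step negative despite a_1^pre > 0
dw = dw_on_pre_spike(t_pre=30.0, post_spikes=[10.0, 25.0])
```

With no postsynaptic activity at all, the step reduces to the pure non-Hebbian term $a_1^{\text{pre}}$.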

Changes in the synaptic efficacy can also be triggered by postsynaptic action potentials. Similarly to presynaptically triggered changes, the amplitude of the weight change consists of at least two contributions, viz., a non-Hebbian term $a_1^{\text{post}}$ and the correlation term described by an integral kernel $a_2^{\text{post,pre}}$. Together with an activity-independent term $a_0$ the total change in the synaptic efficacy reads

$$\frac{\text{d}}{\text{d}t} w_{ij}(t) = a_0
+ S_j(t) \left[ a_1^{\text{pre}} + \int_0^\infty a_2^{\text{pre,post}}(s) \, S_i(t-s) \, \text{d}s \right]
+ S_i(t) \left[ a_1^{\text{post}} + \int_0^\infty a_2^{\text{post,pre}}(s) \, S_j(t-s) \, \text{d}s \right] \,, \qquad (10.14)$$

cf. Kistler and van Hemmen (2000a). Note that all parameters $a_0$, $a_1^{\text{pre}}$, $a_1^{\text{post}}$ and all kernels $a_2^{\text{pre,post}}$, $a_2^{\text{post,pre}}$ may also depend upon the actual value of the synaptic efficacy. A possible consequence of this dependence, for example, is that it becomes increasingly difficult to strengthen a synapse that has already been potentiated and, vice versa, to weaken a depressed synapse (Ngezahayo et al., 2000). This can be exploited in a model in order to ensure that the weight $w_{ij}$ stays bounded; cf. Section [*]. Here and in the following we will suppress this dependence for the sake of brevity.

Equation (10.14) can easily be extended so as to include more complex dependencies between pre- and postsynaptic spikes or between different consecutive pre- or postsynaptic spikes. Analogously to Eq. (10.2) we can include higher-order terms such as $S_j(t) \int_0^\infty a_2^{\text{pre,pre}}(s)\,S_j(t-s)\,\text{d}s$ and $S_i(t) \int_0^\infty a_2^{\text{post,post}}(s)\,S_i(t-s)\,\text{d}s$ that are quadratic in the pre- or postsynaptic spike train. Nevertheless, the essence of Hebbian synaptic plasticity is captured by the terms that are bilinear in the pre- and postsynaptic spike trains. The terms containing $a_2^{\text{pre,post}}$ and $a_2^{\text{post,pre}}$ describe the form of the `learning window' such as the one shown in Fig. 10.4. The kernel $a_2^{\text{post,pre}}(s)$, which is usually positive, gives the amount of the weight change when a presynaptic spike is followed by a postsynaptic action potential with delay $s$; cf. the left half of the graph shown in Fig. 10.4. The kernel $a_2^{\text{pre,post}}(s)$ describes the right half of the graph, i.e., the amount of change if the timing is the other way round. Since experimental results on spike-time dependent plasticity are usually presented in graphical form such as in Fig. 10.4, we define a `learning window' $W$ as

$$W(s) = \begin{cases} a_2^{\text{post,pre}}(-s) & \text{if } s < 0 \,, \\ a_2^{\text{pre,post}}(s) & \text{if } s > 0 \,, \end{cases} \qquad (10.15)$$

where $s = t_j^{(f)} - t_i^{(f)}$ is the delay between presynaptic spike arrival and postsynaptic firing. Note that $s < 0$ refers to presynaptic spike arrival before postsynaptic firing.

Figure 10.6: The lower part of each graph shows presynaptic spikes (neuron $j$) and postsynaptic spikes (neuron $i$). The upper part shows the evolution of the weight $w_{ij}$. A. A zero-order term $a_0 < 0$ leads to a decrease of the synaptic weight $w_{ij}$. B. Linear-order terms change the weight whenever a presynaptic spike arrives or a postsynaptic spike is fired. For $a_1^{\text{pre}} > 0$, presynaptic spike arrival at time $t_j^{(f)}$ leads to a positive weight change $\Delta w_{ij} = a_1^{\text{pre}}$. For $a_1^{\text{post}} < 0$, each postsynaptic spike leads to a negative weight change. C. We assume Hebbian plasticity with $a_2^{\text{post,pre}}(t_i^{(f)} - t_j^{(f)}) = W(t_j^{(f)} - t_i^{(f)}) > 0$ for $t_j^{(f)} < t_i^{(f)}$. Thus, if a postsynaptic spike $t_i^{(f)}$ is fired shortly after presynaptic spike arrival at $t_j^{(f)}$, the weight change $W(t_j^{(f)} - t_i^{(f)}) + a_1^{\text{post}}$ at the moment of the postsynaptic spike can be positive, even if $a_1^{\text{post}} < 0$.
Example: Exponential learning windows

A simple choice for the learning window - and thus for the kernels $a_2^{\text{post,pre}}$ and $a_2^{\text{pre,post}}$ - inspired by Fig. 10.4 is

$$W(s) = \begin{cases} A_+ \, \exp[s/\tau_1] & \text{for } s < 0 \,, \\ A_- \, \exp[-s/\tau_2] & \text{for } s > 0 \,, \end{cases} \qquad (10.16)$$

with some constants $A_\pm$ and $\tau_{1,2}$. If $A_+ > 0$ and $A_- < 0$, the synaptic efficacy is increased if a presynaptic spike arrives slightly before the postsynaptic firing ($W(s) > 0$ for $s < 0$), and the synapse is weakened if presynaptic spikes arrive a few milliseconds after the output spike ($W(s) < 0$); cf. Fig. 10.7.
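The two-phase window of Eq. (10.16) and its bilinear contribution to Eq. (10.14) can be sketched as follows. The amplitudes here rescale the caption values of Fig. 10.7 by an assumed factor of 0.01; the pair-summation over all spikes is one common reading of the bilinear term.

```python
import numpy as np

# Window parameters: A+ = -A- (as in Fig. 10.7), rescaled by an assumed
# factor; tau1 = 10 ms, tau2 = 20 ms as in the figure caption.
A_plus, A_minus = 0.01, -0.01
tau1, tau2 = 10.0, 20.0          # ms

def W(s):
    """Learning window of Eq. (10.16), s = t_pre - t_post in ms:
    potentiation for s < 0 (pre before post), depression for s > 0."""
    if s < 0:
        return A_plus * np.exp(s / tau1)
    return A_minus * np.exp(-s / tau2)

def hebbian_change(pre_spikes, post_spikes):
    """Bilinear contribution of Eq. (10.14): sum W over all pre/post
    spike pairs (booked at whichever spike of a pair comes second)."""
    return sum(W(tp - ti) for tp in pre_spikes for ti in post_spikes)

dw_causal = hebbian_change([10.0], [15.0])    # pre 5 ms before post
dw_acausal = hebbian_change([15.0], [10.0])   # pre 5 ms after post
```

Reversing the order of a single pre/post pair flips the sign of the change, which is the defining asymmetry of spike-time dependent plasticity.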

Figure 10.7: Two-phase learning window $W$ as a function of the time difference $s = t_j^{(f)} - t_i^{(f)}$ between presynaptic spike arrival and postsynaptic firing; cf. Eq. (10.16) with $A_+ = -A_- = 1$, $\tau_1 = 10\,$ms, and $\tau_2 = 20\,$ms (Zhang et al., 1998).

In order to obtain a realistic description of synaptic plasticity we have to make sure that the synaptic efficacy stays within certain bounds. Excitatory synapses, for example, should have a positive weight and must not exceed a maximum value of, say, $w_{ij} = 1$. We can implement these constraints in Eq. (10.16) by setting $A_- = w_{ij}\, a_-$ and $A_+ = (1 - w_{ij})\, a_+$. The remaining terms in Eq. (10.14) can be treated analogously. For each positive term (leading to a weight increase) we assume a weight dependence $\propto (1 - w_{ij})$, while for each negative term (leading to a weight decrease) we assume a weight dependence $\propto w_{ij}$. The synapse is thus no longer strengthened (or weakened) once the weight reaches its upper (lower) bound (Kistler and van Hemmen, 2000a; van Rossum et al., 2000).
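A minimal sketch of this soft-bound scheme: potentiation steps are scaled by $(1 - w_{ij})$ and depression steps by $w_{ij}$, so repeated updates can never push the weight outside $[0, 1]$. The raw amplitudes and the pairing sequence are illustrative assumptions.

```python
# Soft bounds via weight-dependent amplitudes (illustrative values)
a_plus, a_minus = 0.1, 0.1

def update(w, potentiate):
    """One pairing event: A_+ = (1 - w) a_+ for potentiation,
    A_- = w a_- for depression, as described in the text."""
    if potentiate:
        return w + (1.0 - w) * a_plus
    return w - w * a_minus

w = 0.5
for _ in range(200):                 # 200 consecutive potentiating pairings
    w = update(w, potentiate=True)
# w approaches, but never exceeds, the upper bound 1
```

Each potentiation step shrinks geometrically as the weight saturates, which is exactly the "increasingly difficult to strengthen an already potentiated synapse" behavior mentioned above.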

10.3.2 Consolidation of Synaptic Efficacies

So far we have emphasized that the synaptic coupling strength is a dynamical variable that is subject to rapid changes dependent on pre- and postsynaptic activity. On the other hand, it is generally accepted that long-lasting modifications of the synaptic efficacy form the basis of learning and memory. How can fast dynamical properties be reconciled with long-lasting modifications?

Most learning theories dealing with artificial neural networks concentrate on the induction of weight changes. As soon as the `learning session' is over, synaptic plasticity is `switched off' and the weights are taken as fixed parameters. In biological systems, a similar mechanism can be observed during development. There are critical periods in the early lifetime of an animal where certain synapses show a form of plasticity that is `switched off' after maturation (Crepel, 1982). The majority of synapses, however, especially those involved in higher brain functions, keep their plasticity throughout life. At first glance there is thus the risk that previously stored information is simply overwritten by new input (`palimpsest property'). Grossberg has coined the term `stability-plasticity dilemma' for this problem (Carpenter and Grossberg, 1987; Grossberg, 1987).

To address these questions, Fusi et al. (2000) have studied the problem of the consolidation of synaptic weights. They argue that bistability of synaptic weights can solve the stability-plasticity dilemma. More specifically, the dynamics of synaptic efficacies is characterized by two stable fixed points at $w_{ij} = 0$ and $w_{ij} = 1$. In the absence of stimulation the synaptic weight will thus settle down at either one of these values. Pre- or postsynaptic activity can lead to a transition from one fixed point to the other, but only if the duration of the stimulus presentation or its amplitude exceeds a certain threshold. In other words, synapses can be switched on or off, but this will happen only for those synapses where the learning threshold is exceeded. Learning thus affects only a few synapses, so that previously stored information is mostly preserved.

In the framework of Eq. (10.14), such a dynamics for synaptic weights can be implemented by setting

$$a_0(w_{ij}) = -\gamma\, w_{ij}\, (1 - w_{ij})\, (w_\theta - w_{ij}) \,, \qquad (10.17)$$

where $0 < w_\theta < 1$ and $\gamma > 0$. In the absence of stimulation, small weights ($w_{ij} < w_\theta$) decay to zero whereas large weights ($w_{ij} > w_\theta$) increase towards one. Spike activity thus has to drive the synaptic weight across the threshold $w_\theta$ before long-lasting changes take place. A combination of Eqs. (10.17) and (10.14) can therefore be considered as a model of induction and consolidation of synaptic plasticity.
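The bistable drift of Eq. (10.17) can be checked by simple Euler integration: weights starting below $w_\theta$ relax to 0, those above relax to 1. The values of $\gamma$, $w_\theta$, and the step size are illustrative assumptions.

```python
# Euler integration of dw/dt = a_0(w), Eq. (10.17); parameters assumed
gamma, w_theta = 1.0, 0.5
dt, steps = 0.01, 5000            # integrate over 50 time units

def a0(w):
    """Activity-independent drift with stable fixed points at 0 and 1
    and an unstable fixed point at w_theta."""
    return -gamma * w * (1.0 - w) * (w_theta - w)

def relax(w):
    """Let the weight evolve without stimulation."""
    for _ in range(steps):
        w += dt * a0(w)
    return w

w_low = relax(0.3)     # starts below threshold: decays towards 0
w_high = relax(0.7)    # starts above threshold: grows towards 1
```

Only input strong enough to push the weight across $w_\theta$ before the drift pulls it back can switch the synapse, which is the consolidation mechanism described above.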

10.3.3 General Framework (*)

In Eq. (10.14) weight changes occur instantaneously at the moment of presynaptic spike arrival or postsynaptic firing. In this subsection we will develop a slightly more general equation for the evolution of synaptic weights. The approach taken in this section can be seen as a generalization of the Taylor expansion in the rate model of Section 10.2 to the case of spiking neurons.

We recall that we started our formulation of rate-based Hebbian learning from a general formula

$$\frac{\text{d}}{\text{d}t} w_{ij} = F(w_{ij};\, \nu_i, \nu_j) \,, \qquad (10.18)$$

where weight changes are given as a function of the weight $w_{ij}$ as well as of the pre- and postsynaptic rates $\nu_j$ and $\nu_i$; cf. Eq. (10.1). The essential assumption was that neuronal activity is characterized by firing rates that change slowly enough to be considered as stationary. Hebbian rules then followed from a Taylor expansion of Eq. (10.18). In the following, we keep the idea of an expansion, but drop the simplifications that are inherent to a description in terms of a mean firing rate.

The internal state of spiking neurons (e.g., integrate-and-fire or Spike Response Model neurons) is characterized by the membrane potential u which in turn depends on the input and the last output spike. The generalization of Eq. (10.18) to the case of spiking neurons is therefore

$$\frac{\text{d}}{\text{d}t} w_{ij}(t) = F\left( w_{ij}(t);\, \{u_i^{\text{post}}(t' < t)\},\, \{u_j^{\text{pre}}(t'' < t)\} \right) \,, \qquad (10.19)$$

where $F$ is now a functional of the time course of the pre- and postsynaptic membrane potential at the location of the synapse. Our notation with $t'$ and $t''$ is intended to indicate that the weight changes depend not only on the momentary value of the pre- and postsynaptic potentials, but also on their history, $t' < t$ and $t'' < t$. The weight value $w_{ij}$ and the local values of the pre- and postsynaptic membrane potential are the essential variables that are available at the site of the synapse to control the up- and down-regulation of synaptic weights. In detailed neuron models, $F$ would depend not only on the weight $w_{ij}$ and membrane potentials, but also on all other variables that are locally available at the site of the synapse. In particular, there could be a dependence upon the local calcium concentration; cf. Section 10.4.

In analogy to the approach taken in Section 10.2, we now expand the right-hand side of Eq. (10.19) about the resting state $u_i^{\text{post}} = u_j^{\text{pre}} = u_{\text{rest}}$ in a Volterra series (Palm and Poggio, 1977; van Hemmen et al., 2000; Volterra, 1959). For the sake of simplicity we shift the voltage scale so that $u_{\text{rest}} = 0$. We find

$$\frac{\text{d} w_{ij}}{\text{d}t} = a_0(w_{ij})
+ \int_0^\infty \alpha_1^{\text{pre}}(w_{ij}; s)\, u_j^{\text{pre}}(t-s)\, \text{d}s
+ \int_0^\infty \alpha_1^{\text{post}}(w_{ij}; s')\, u_i^{\text{post}}(t-s')\, \text{d}s'
+ \int_0^\infty \int_0^\infty \alpha_2^{\text{corr}}(w_{ij}; s, s')\, u_j^{\text{pre}}(t-s)\, u_i^{\text{post}}(t-s')\, \text{d}s'\, \text{d}s + \dots \,. \qquad (10.20)$$

The next terms would be quadratic in $u_i^{\text{post}}$ or $u_j^{\text{pre}}$ and have been neglected. Equation (10.20) provides a framework for the formulation of spike-based learning rules and may be seen as the generalization of the rate-based model that we have derived in Section 10.2.
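For given potential traces, the right-hand side of Eq. (10.20) can be evaluated by discretizing the kernel integrals. The exponential kernel shapes and amplitudes below are illustrative assumptions; the point is only the structure (linear terms plus a bilinear correlation term).

```python
import numpy as np

dt = 0.1                              # ms, integration step
s = np.arange(0.0, 100.0, dt)         # grid over the kernel support

# Assumed kernel shapes (not from the text): exponentials
alpha1_pre = 0.001 * np.exp(-s / 10.0)
alpha1_post = -0.001 * np.exp(-s / 10.0)
alpha2_corr = 0.01 * np.outer(np.exp(-s / 10.0), np.exp(-s / 20.0))

def dw_dt(a0, u_pre_hist, u_post_hist):
    """Right-hand side of Eq. (10.20); u_*_hist[k] is the membrane
    potential at time t - k*dt (most recent sample first)."""
    term_pre = np.sum(alpha1_pre * u_pre_hist) * dt
    term_post = np.sum(alpha1_post * u_post_hist) * dt
    term_corr = u_pre_hist @ alpha2_corr @ u_post_hist * dt * dt
    return a0 + term_pre + term_post + term_corr

# Joint depolarization of pre- and postsynaptic potential: the bilinear
# (Hebbian) term dominates and drives the weight upwards
rate = dw_dt(0.0, np.ones(len(s)), np.ones(len(s)))
```

At rest ($u_j^{\text{pre}} = u_i^{\text{post}} = 0$ after the shift of the voltage scale) only the activity-independent term $a_0$ survives, as Eq. (10.20) requires.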

In order to establish a connection with various other formulations of spike-based learning rules, we consider the time course of the pre- and postsynaptic membrane potential in more detail. At the presynaptic terminal, the membrane potential is most of the time at rest, except when an action potential arrives. Since the duration of each action potential is short, the presynaptic membrane potential can be approximated by a train of $ \delta$ functions

$$u_j^{\text{pre}}(t) = \sum_f \delta(t - t_j^{(f)}) \,, \qquad (10.21)$$

where $t_j^{(f)}$ denotes the spike arrival times at the presynaptic terminal.

The situation at the postsynaptic site is somewhat more complicated. For the simple spike response model SRM$_0$, the membrane potential can be written as

$$u_i^{\text{post}}(t) = \eta(t - \hat{t}_i) + h_i(t) \,, \qquad (10.22)$$

where $\hat{t}_i$ is the last postsynaptic firing time. In contrast to the usual interpretation of the terms on the right-hand side of Eq. (10.22), the function $\eta$ is now taken as the time course of the back propagating action potential at the location of the synapse. Similarly, $h_i(t)$ is the local postsynaptic potential at the synapse.

For a further simplification of Eq. (10.20), we need to make some approximations. Specifically, we will explore two different approximation schemes. In the first scheme, we suppose that the dominating term on the right-hand side of Eq. (10.22) is the back propagating action potential, while in the second scheme we neglect $\eta$ and consider $h_i$ as the dominant term. Let us discuss both approximations in turn.

(i) Sharply peaked back propagating action potential

We assume that the back propagating action potential is sharply peaked, i.e., it has a large amplitude and short duration. In this case, the membrane potential of the postsynaptic neuron is dominated by the back propagating action potential, and the term $h_i(t)$ in Eq. (10.22) can be neglected. Furthermore, $\eta$ can be approximated by a $\delta$ function. The membrane potential at the postsynaptic site then reduces to a train of pulses,

$$u_i^{\text{post}}(t) = \sum_f \delta(t - t_i^{(f)}) \,, \qquad (10.23)$$

where $t_i^{(f)}$ denotes the postsynaptic firing times. Equation (10.23) is a sensible approximation for synapses that are located on or close to the soma, so that the full somatic action potential is `felt' by the postsynaptic neuron. For neurons with active processes in the dendrite that keep the back propagating action potential well focused, Eq. (10.23) is also a reasonable approximation for synapses that are further away from the soma. A transmission delay for back propagation of the spike from the soma to the site of the synapse can be incorporated at no extra cost.

If we insert Eqs. (10.21) and (10.23) into Eq. (10.20) we find

$$\frac{\text{d} w_{ij}}{\text{d}t} = a_0
+ \sum_f \alpha_1^{\text{pre}}(t - t_j^{(f)})
+ \sum_f \alpha_1^{\text{post}}(t - t_i^{(f)})
+ \sum_f \sum_{f'} \alpha_2^{\text{corr}}(t - t_i^{(f)}, t - t_j^{(f')})
+ \dots \,,$$

where we have omitted the $w_{ij}$ dependence of the terms on the right-hand side. In contrast to Eq. (10.14), weight changes are now continuous. A single presynaptic spike at time $t_j^{(f)}$, for example, causes a weight change that builds up during some time after $t_j^{(f)}$. An example will be given below in Eq. (10.31).
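This continuous build-up is easy to visualize for a single presynaptic spike: the accumulated weight change follows the integral of the kernel $\alpha_1^{\text{pre}}$ and saturates at $a_1^{\text{pre}} = \int_0^\infty \alpha_1^{\text{pre}}(s)\,\text{d}s$. The exponential kernel below is an illustrative assumption.

```python
import numpy as np

tau = 5.0        # ms, assumed kernel time constant
a1_pre = 0.01    # total integrated change a_1^pre

def alpha1_pre(s):
    """Assumed exponential kernel, normalized so that its time integral
    equals a1_pre (zero for s < 0: no change before the spike)."""
    return np.where(s >= 0, (a1_pre / tau) * np.exp(-s / tau), 0.0)

dt = 0.01
t = np.arange(0.0, 50.0, dt)
t_f = 10.0                            # single presynaptic spike time
dwdt = alpha1_pre(t - t_f)            # momentary weight change rate
w = np.cumsum(dwdt) * dt              # change accumulated up to time t
```

Nothing happens before the spike; afterwards the change ramps up over a few $\tau$ and converges to $a_1^{\text{pre}}$, which is exactly the quantity a slow weight-monitoring experiment would report as an instantaneous step.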

In typical plasticity experiments, the synaptic weight is monitored every few hundred milliseconds, so that the exact time course of the functions $\alpha_1^{\text{pre}}$, $\alpha_1^{\text{post}}$, and $\alpha_2^{\text{corr}}$ is not measured. To establish the connection to Eq. (10.14), we now assume that the weight changes are rapid compared to the time scale of weight monitoring. In other words, we make the replacement

$$\alpha_1^{\text{pre}}(t - t_j^{(f)}) \;\longrightarrow\; a_1^{\text{pre}}\, \delta(t - t_j^{(f)}) \,, \qquad (10.24)$$
$$\alpha_1^{\text{post}}(t - t_i^{(f)}) \;\longrightarrow\; a_1^{\text{post}}\, \delta(t - t_i^{(f)}) \,, \qquad (10.25)$$

where $a_1^{\text{pre}} = \int_0^\infty \alpha_1^{\text{pre}}(s)\,\text{d}s$ and $a_1^{\text{post}} = \int_0^\infty \alpha_1^{\text{post}}(s)\,\text{d}s$. For the correlation term we exploit the invariance with respect to time translation, i.e., the final result depends only on the time difference $t_j^{(f)} - t_i^{(f)}$. The weight update occurs at the moment of the postsynaptic spike if $t_j^{(f)} < t_i^{(f)}$ and at the moment of the presynaptic spike if $t_j^{(f)} > t_i^{(f)}$. Hence, the assumption of instantaneous update yields two terms,

$$\alpha_2^{\text{corr}}(t - t_i^{(f)}, t - t_j^{(f)}) \;\longrightarrow\; \begin{cases} a_2^{\text{pre,post}}(t_j^{(f)} - t_i^{(f)})\, \delta(t - t_j^{(f)}) & \text{if } t_j^{(f)} > t_i^{(f)} \,, \\ a_2^{\text{post,pre}}(t_i^{(f)} - t_j^{(f)})\, \delta(t - t_i^{(f)}) & \text{if } t_i^{(f)} > t_j^{(f)} \,. \end{cases} \qquad (10.26)$$

Thus, for sharply peaked back propagating action potentials and rapid weight changes, the general framework of Eq. (10.20) leads us back to Eq. (10.14).

(ii) No back propagating action potential

In the second approximation scheme, we assume that the membrane potential at the location of the synapse is dominated by the slowly varying potential $h_i(t)$. This is, for example, a valid assumption in voltage-clamp experiments where the postsynaptic neuron is artificially kept at a constant membrane potential $h^{\text{post}}$. It is also a good approximation for synapses that are located far away from the soma on a passive dendrite, so that the back propagation of somatic action potentials is negligible.

Let us consider a voltage-clamp experiment where $h_i(t)$ is kept at a constant level $h^{\text{post}}$. As before, we suppose that weight changes are rapid. If we insert $u_j^{\text{pre}}(t) = \sum_f \delta(t - t_j^{(f)})$ and $u_i^{\text{post}}(t) = h^{\text{post}}$ into Eq. (10.20), we find

$$\frac{\text{d} w_{ij}}{\text{d}t} = a_0
+ \sum_f a_1^{\text{pre}}\, \delta(t - t_j^{(f)})
+ a_1^{\text{post}}\, h^{\text{post}}
+ a_2^{\text{corr}}\, h^{\text{post}} \sum_f \delta(t - t_j^{(f)}) + \dots \,, \qquad (10.27)$$

where $a_1^{\text{pre}} = \int_0^\infty \alpha_1^{\text{pre}}(s)\,\text{d}s$, $a_1^{\text{post}} = \int_0^\infty \alpha_1^{\text{post}}(s)\,\text{d}s$, and $a_2^{\text{corr}} = \int_0^\infty \int_0^\infty \alpha_2^{\text{corr}}(s, s')\,\text{d}s\,\text{d}s'$. Equation (10.27) is the starting point of the theory of spike-based learning of Fusi et al. (2000). Weight changes are triggered by presynaptic spikes. The direction and amplitude of the weight update depend on the postsynaptic membrane potential. In our framework, Eq. (10.27) is a special case of the slightly more general Eq. (10.20).
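Under voltage clamp, Eq. (10.27) says that each presynaptic spike produces a jump $a_1^{\text{pre}} + a_2^{\text{corr}}\, h^{\text{post}}$, so the postsynaptic potential gates the sign of the update. The parameter values in this sketch are assumptions chosen so that depolarization yields potentiation and a potential near rest yields depression.

```python
# Sketch of the per-spike update implied by Eq. (10.27) under voltage
# clamp. All parameter values are illustrative assumptions.
a1_pre = -0.002      # non-Hebbian presynaptic term (assumed negative)
a2_corr = 0.001      # correlation coefficient (assumed positive)

def dw_per_pre_spike(h_post):
    """Jump in w_ij per presynaptic spike at clamped potential h_post:
    a_1^pre + a_2^corr * h_post (the delta-function terms of Eq. 10.27)."""
    return a1_pre + a2_corr * h_post

# Depolarized postsynaptic neuron -> potentiation; near rest -> depression
dw_high = dw_per_pre_spike(h_post=5.0)
dw_low = dw_per_pre_spike(h_post=0.5)
```

The crossover potential $h^{\text{post}} = -a_1^{\text{pre}} / a_2^{\text{corr}}$ separates the two regimes, which is the voltage-gated switch between depression and potentiation exploited by Fusi et al. (2000).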

Gerstner and Kistler
Spiking Neuron Models. Single Neurons, Populations, Plasticity
Cambridge University Press, 2002

© Cambridge University Press
This book is in copyright. No reproduction of any part of it may take place without the written permission of Cambridge University Press.