

8.2 Synchronized Oscillations and Locking

We have seen in the previous section that the state of asynchronous firing can lose stability towards certain oscillatory modes that are solutions of the linearized population equations. We are now going to investigate oscillatory modes in more detail and check whether a large-amplitude oscillation, where all neurons fire in ``lockstep'', can be a stable solution of the population equations.

8.2.1 Locking in Noise-Free Populations

We consider a homogeneous population of SRM0 or integrate-and-fire neurons which is nearly perfectly synchronized and fires almost regularly with period T. In order to analyze the existence and stability of a fully locked synchronous oscillation we approximate the population activity by a sequence of square pulses k, k ∈ {0, ±1, ±2, ...}, centered around t = kT. Each pulse k has a certain half-width δ_k and amplitude (2δ_k)^{-1}, since all neurons are supposed to fire once in each pulse. In order to check whether the fully synchronized state is a stable solution of the population equation (6.75), we assume that the population has already fired a couple of narrow pulses for t < 0 with widths δ_k ≪ T, k ≤ 0, and calculate the amplitude and width of subsequent pulses. If we find that the amplitude of subsequent pulses increases while their width decreases (i.e., lim_{k→∞} δ_k = 0), then we conclude that the fully locked state is stable.

To make the above outline more explicit, we use

A(t) = \sum_{k=-\infty}^{\infty} \frac{1}{2\delta_k} \, \mathcal{H}[t - (kT - \delta_k)] \, \mathcal{H}[(kT + \delta_k) - t]     (8.11)

as a parameterization of the population activity; cf. Fig. 8.4. Here, \mathcal{H} denotes the Heaviside step function with \mathcal{H}(s) = 1 for s > 0 and \mathcal{H}(s) = 0 for s ≤ 0. For stability, we need to show that the amplitude A(0), A(T), A(2T), ... of the rectangular pulses increases while the width δ_k of subsequent pulses decreases.
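As a quick numerical illustration of the parameterization (8.11), the following sketch builds the square-pulse activity for assumed values of T and the half-widths δ_k (illustrative numbers, not taken from the text) and checks that each pulse integrates to one, i.e., that every neuron fires exactly once per pulse.

```python
import numpy as np

def square_pulse_activity(t, T, half_widths):
    """Population activity of Eq. (8.11): a square pulse of half-width
    half_widths[k] and amplitude 1/(2*half_widths[k]) centered at t = k*T."""
    A = np.zeros_like(t)
    for k, d in enumerate(half_widths):
        A[(t > k * T - d) & (t < k * T + d)] = 1.0 / (2.0 * d)
    return A

T = 8.0                          # assumed period (ms)
half_widths = [0.5, 0.25, 0.1]   # shrinking widths, as for stable locking
t = np.linspace(-2.0, 2 * T + 2.0, 400001)
A = square_pulse_activity(t, T, half_widths)

# each pulse carries unit area: all neurons fire exactly once per pulse
dt = t[1] - t[0]
for k, d in enumerate(half_widths):
    area = np.sum(A[(t > k * T - 2 * d) & (t < k * T + 2 * d)]) * dt
    print(k, round(float(area), 3))
```

The unit area of every pulse is what makes the amplitude (2δ_k)^{-1} the correct normalization as the width shrinks.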

Figure 8.4: Sequence of rectangular activity pulses. If the fully synchronized state is stable, the width δ of the pulses decreases while the amplitude increases.

As we will see below, the condition for stable locking of all neurons in the population can be stated as a condition on the slope of the input potential h at the moment of firing. More precisely, if the last population pulse occurred at about t = 0 with amplitude A(0), then the amplitude of the population pulse at t = T increases if h'(T) > 0:

h'(T) > 0 \quad \Longleftrightarrow \quad A(T) > A(0)     (8.12)

If the amplitude of subsequent pulses increases, their width decreases. In other words, we have the following Locking Theorem. In a spatially homogeneous network of SRM0 or integrate-and-fire neurons, a necessary and, in the limit of a large number of presynaptic neurons (N → ∞), also sufficient condition for a coherent oscillation to be asymptotically stable is that firing occurs when the postsynaptic potential arising from all previous spikes is increasing in time (Gerstner et al., 1996b).

The Locking Theorem is applicable to large populations that are already close to the fully synchronized state. A related but global locking argument has been presented by Mirollo and Strogatz (1990). The locking argument can be generalized to heterogeneous networks (Chow, 1998; Gerstner et al., 1993a) and to electrical coupling (Chow and Kopell, 2000). Synchronization in small networks has been discussed in, e.g., Bose et al. (2000), Hansel et al. (1995), Chow (1998), Ernst et al. (1995), van Vreeswijk (1996), and van Vreeswijk et al. (1994). For weak coupling, synchronization and locking can be systematically analyzed in the framework of phase models (Ermentrout and Kopell, 1984; Kopell, 1986; Kuramoto, 1975) or canonical neuron models (Izhikevich, 1999; Hansel et al., 1995; Ermentrout, 1996; Ermentrout et al., 2001; Hoppensteadt and Izhikevich, 1997).

Before we derive the locking condition for spiking neuron models, we illustrate the main idea by two examples.

Example: Perfect synchrony in noiseless SRM0 neurons

In this example we will show that locking in a population of spiking neurons can be understood by simple geometrical arguments; there is no need to invoke the abstract mathematical framework of the population equations. The results are, of course, consistent with those derived from the population equation.

We study a homogeneous network of N identical neurons which are mutually coupled with strength w_ij = J_0/N, where J_0 > 0 is a constant. In other words, the excitatory interaction is scaled as 1/N so that the total input to a neuron i remains of order one even if the number of neurons is large (N → ∞). Since we are interested in synchrony, we suppose that all neurons have fired simultaneously at t̂ = 0. When will the neurons fire again?

Since all neurons are identical we expect that the next firing time will also be synchronous. Let us calculate the period T between one synchronous pulse and the next. We start from the firing condition of SRM0 neurons

\vartheta = u_i(t) = \eta(t - \hat{t}_i) + \sum_j w_{ij} \sum_f \epsilon(t - t_j^{(f)})     (8.13)

where ε(t) is the postsynaptic potential. The axonal transmission delay Δ^ax is included in the definition of ε, i.e., ε(t) = 0 for t < Δ^ax. Since all neurons have fired synchronously at t = 0, we set t̂_i = t_j^(f) = 0. The result is a condition of the form

\vartheta - \eta(t) = J_0 \, \epsilon(t)     (8.14)

since w_ij = J_0/N for j = 1, ..., N. Note that we have neglected the postsynaptic potentials that may have been caused by earlier spikes t_j^(f) < 0 in the past. The graphical solution of Eq. (8.14) is presented in Fig. 8.5. The first crossing point of ϑ - η(t) and J_0 ε(t) defines the time T of the next synchronous pulse.
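The graphical construction is easy to reproduce numerically. The sketch below uses illustrative kernels and parameters (not the book's): an exponential refractory kernel η(t) = -η_0 e^{-t/τ_r} and a delayed α-shaped EPSP. It scans for the first time at which J_0 ε(t) reaches the dynamic threshold ϑ - η(t).

```python
import numpy as np

# Illustrative kernels and parameters (assumed, not from the text).
tau_r, eta0 = 4.0, 1.0       # refractory time constant and strength
tau_s, delay = 2.0, 2.0      # synaptic rise time and axonal delay (ms)
theta, J0 = 0.12, 0.6        # firing threshold and total coupling

def eta(t):
    """Refractory kernel: negative, decaying back to zero."""
    return -eta0 * np.exp(-t / tau_r)

def eps(t):
    """Delayed alpha-function EPSP, normalized to peak value 1."""
    s = t - delay
    return np.where(s > 0, (s / tau_s) * np.exp(1.0 - s / tau_s), 0.0)

# Eq. (8.14): the next synchronous spike occurs at the first crossing of
# the summed EPSP J0*eps(t) with the dynamic threshold theta - eta(t).
t = np.arange(0.001, 50.0, 0.001)
T = t[np.argmax(J0 * eps(t) >= theta - eta(t))]
print("next synchronous firing at T =", T, "ms")
```

With these parameters the crossing falls on the rising flank of the EPSP, which is exactly the situation the Locking Theorem requires for stability.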

Figure 8.5: A. Perfect synchrony. All neurons have fired at t̂ = 0. The next spike occurs when the summed postsynaptic potential J_0 ε(t) reaches the dynamic threshold ϑ - η(t). B. Stability of perfect synchrony. The last neuron is out of tune. The firing time difference at t = 0 is δ_0. One period later the firing time difference is reduced (δ_1 < δ_0), since the threshold is reached at a point where J_0 ε(t) is rising. Adapted from Gerstner et al. (1996b).

What happens if synchrony at t = 0 was not perfect? Let us assume that one of the neurons is slightly late compared to the others; cf. Fig. 8.5B. It receives the input J_0 ε(t) from the others, so the right-hand side of Eq. (8.14) is unchanged. The left-hand side, however, is different since the last firing was at δ_0 instead of zero. The next firing time is t = T + δ_1, where δ_1 is found from

\vartheta - \eta(T + \delta_1 - \delta_0) = J_0 \, \epsilon(T + \delta_1)     (8.15)

Linearization with respect to δ_0 and δ_1 yields:

\delta_1 < \delta_0 \quad \Longleftrightarrow \quad \epsilon'(T) > 0     (8.16)

Thus the neuron that has been late is `pulled back' into the synchronized pulse of the others if the postsynaptic potential ε is rising at the moment of firing at T. Equation (8.16) is a special case of the Locking Theorem.
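The pull-back of a late neuron can be checked numerically. With an illustrative exponential refractory kernel and a delayed α-shaped EPSP (assumed parameters, not the book's), the sketch below solves Eq. (8.15) repeatedly and shows that the lag shrinks from one period to the next when ε'(T) > 0.

```python
import numpy as np

# Illustrative kernels and parameters (assumed, not from the text).
tau_r, eta0 = 4.0, 1.0
tau_s, delay = 2.0, 2.0
theta, J0 = 0.12, 0.6

def eta(t):
    return -eta0 * np.exp(-t / tau_r)

def eps(t):
    s = t - delay
    return np.where(s > 0, (s / tau_s) * np.exp(1.0 - s / tau_s), 0.0)

# Period of the perfectly synchronized population, from Eq. (8.14).
tg = np.arange(0.001, 50.0, 0.001)
T = tg[np.argmax(J0 * eps(tg) >= theta - eta(tg))]

def next_lag(d0):
    """Solve Eq. (8.15) for delta_1: the late neuron's lag one period later."""
    d = np.linspace(-1.0, 1.0, 200001)
    f = theta - eta(T + d - d0) - J0 * eps(T + d)
    return d[np.argmin(np.abs(f))]     # grid search for the root near 0

lag = 0.5                      # the straggler fires 0.5 ms too late
for n in range(5):
    lag = next_lag(lag)
    print(n, round(float(lag), 4))     # the lag shrinks period by period
```

The contraction factor of the linearized map is η'(T)/[η'(T) + J_0 ε'(T)], which is smaller than one precisely when ε'(T) > 0, in agreement with Eq. (8.16).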

We see from Fig. 8.5B that, in the case of excitatory coupling, stable locking works nicely if the transmission delay Δ^ax is in the range of the firing period, but slightly shorter, so that firing occurs during the rise time of the EPSP.

Example: SRM0 neurons with inhibitory coupling

Locking can also occur in networks with purely inhibitory coupling (van Vreeswijk et al., 1994). In order to get any response at all in such a system, we need a constant stimulus I_0 or, equivalently, a negative firing threshold ϑ < 0. The stability criterion, however, is the same as in the previous example.

Figure 8.6 summarizes the stability argument analogously to Fig. 8.5. In Fig. 8.6A all neurons have fired synchronously at t = 0 and do so again at t = T, when the inhibitory postsynaptic potential has decayed sufficiently for the threshold condition,

\vartheta - \eta(T) = J_0 \sum_{k=0}^{\infty} \epsilon(T + kT)     (8.17)

is fulfilled. This state is stable if the synaptic contribution to the potential, Σ_{k≥0} ε(t + kT), has a positive slope at t = T. Figure 8.6B demonstrates that a single neuron firing at t = δ_0 instead of t = 0 is triggered again at t = T + δ_1 with |δ_1| < |δ_0|, for simple geometrical reasons.

Figure 8.6: Similar plot as in Fig. 8.5, but for purely inhibitory coupling. A. All neurons have fired synchronously at t̂ = 0. The next spike occurs when the summed inhibitory postsynaptic potential J_0 ε(t) has decayed back to the dynamic threshold ϑ - η(t). B. Stability of perfect synchrony. The last neuron is out of tune. The firing time difference at t = 0 is δ_0. One period later the firing time difference is reduced (δ_1 < δ_0), since the threshold is reached at a point where J_0 ε(t) is rising.
Derivation of the Locking Theorem (*)

We consider a homogeneous population of SRM neurons that is close to a periodic state of synchronized activity. We assume that the population activity in the past consists of a sequence of rectangular pulses as specified in Eq. (8.11). We determine the period T and the sequence of half-widths δ_k of the rectangular pulses in a self-consistent manner. In order to prove stability, we need to show that the amplitude A(kT) increases while the half-width δ_k decreases as a function of k. To do so we start from the noise-free population equation (7.13), which we recall here for convenience,

A(t) = \left[ 1 + \frac{\partial_t h + \partial_{\hat{t}} h}{\eta' - \partial_{\hat{t}} h} \right] A(t - T_b(t))     (8.18)

where ∂_t h and ∂_t̂ h are the partial derivatives of the total postsynaptic potential h_PSP and T_b(t) is the backward interval; cf. Fig. 7.1.

As a first step, we calculate the potential h_PSP(t|t̂). Given h_PSP we can find the period T from the threshold condition, and also the derivatives ∂_t h and ∂_t̂ h required for Eq. (8.18). In order to obtain h_PSP, we substitute Eq. (8.11) in (6.8), assume δ_k ≪ T, and integrate. To first order in δ_k we obtain

h_{\text{PSP}}(t|\hat{t}) = \sum_{k=0}^{k_{\text{max}}} J_0 \, \epsilon(t - \hat{t}, \, t + kT) + \mathcal{O}(\delta_k^2)     (8.19)

where -δ_0 ≤ t̂ ≤ δ_0 is the last firing time of the neuron under consideration. The sum runs over all pulses back in the past. Since ε(t - t̂, s) as a function of s decays rapidly for s ≫ T, it is usually sufficient to keep only a finite number of terms, e.g., k_max = 1 or 2.

In the second step we determine the period T. To do so, we consider a neuron in the center of the square pulse which has fired its last spike at t̂ = 0. Since we consider noiseless neurons, the relative order of firing of the neurons cannot change. Consistency of the ansatz (8.11) thus requires that the next spike of this neuron occurs at t = T, viz. in the center of the next square pulse. Setting t̂ = 0 in the threshold condition for spike firing yields

T = \min \left\{ t \;\Big|\; \eta(t) + J_0 \sum_{k=0}^{k_{\text{max}}} \epsilon(t, \, t + kT) = \vartheta \right\}     (8.20)

If a synchronized solution exists, Eq. (8.20) defines its period.

In the population equation (8.18) we need the derivative of h_PSP,

\partial_t h + \partial_{\hat{t}} h = J_0 \sum_{k=0}^{k_{\text{max}}} \frac{d}{ds} \epsilon(x, s) \Big|_{x=T, \; s=T+kT}     (8.21)

According to Eq. (8.18), the new value of the activity at time t = T is the old value multiplied by the factor in square brackets. A necessary condition for an increase of the activity from one cycle to the next is that the derivative on the right-hand side of Eq. (8.21) is positive, which is the essence of the Locking Theorem.

We now apply Eq. (8.21) to a population of SRM0 neurons. For SRM0 neurons we have ε(x, s) = ε_0(s), hence ∂_t̂ h = 0 and h_PSP(t|t̂) = h(t) = J_0 Σ_k ε_0(t + kT). For a standard η kernel (e.g., an exponentially decaying function), we have η'(T) > 0 for all T, and thus

h'(T) = J_0 \sum_{k=1}^{k_{\text{max}}+1} \epsilon_0'(kT) > 0 \quad \Longleftrightarrow \quad A(T) > A(0)     (8.22)

which is identical to Eq. (8.12). For integrate-and-fire neurons an analogous argument shows that Eq. (8.12) holds as well. The amplitude of the synchronous pulse thus grows only if h'(T) > 0.

Figure 8.7: A sequence of activity pulses (top) contracts to δ-pulses if firing occurs during the rising phase of the input potential h (dashed line, bottom). Numerical integration of the population equation (6.75) for SRM0 neurons with inhibitory interaction J = -0.1 and kernel (8.10) with delay Δ^ax = 2 ms. There is no noise (σ = 0). The activity was initialized with a square pulse A(t) = 1 kHz for -1 ms < t < 0 and integrated with a step size of 0.05 ms.

The growth of the amplitude corresponds to a compression of the width of the pulse. It can be shown that the `corner neurons', which have fired at time ±δ_0, fire their next spike at T ± δ_1 where δ_1 = δ_0 A(0)/A(T). Thus the square pulse remains normalized, as it should be. By iterating the argument for t = kT with k = 2, 3, 4, ..., we see that the sequence δ_k converges to zero and the square pulses approach a Dirac δ-pulse, provided that h'(T) = J_0 Σ_k ε_0'(kT) > 0. In other words, the T-periodic synchronized solution with T given by Eq. (8.20) is stable if the input potential h is rising at the moment of firing (Gerstner et al., 1996b).

In order for the sequence of square pulses to be an exact solution of the population equation, the factor in square brackets in Eq. (8.18) would have to remain constant over the width of a pulse. The derivatives of Eq. (8.19), however, do depend on t. As a consequence, the form of the pulse changes over time, as is visible in Fig. 8.7. The activity as a function of time was obtained by numerical integration of the population equation with a square pulse as initial condition, for a network of SRM0 neurons coupled via (8.10) with weak inhibitory coupling J = -0.1 and delay Δ^ax = 2 ms. For this set of parameters h' > 0, and locking is possible.

8.2.2 Locking in SRM0 Neurons with Noisy Reset (*)

The framework of the population equation also allows us to extend the locking argument to noisy SRM0 neurons. At each cycle, the pulse of synchronous activity is compressed due to locking if h'(T) > 0; at the same time it is smeared out by the noise. To illustrate this idea we consider SRM0 neurons with Gaussian noise in the reset.

In the case of noisy reset, the interval distribution can be written as P_I(t|t̂) = ∫_{-∞}^{∞} dr δ[t - t̂ - T(t̂, r)] G_σ(r); cf. Eq. (5.68). We insert the interval distribution into the population equation A(t) = ∫_{-∞}^{t} P_I(t|t̂) A(t̂) dt̂ and find

A(t) = \int_{-\infty}^{t} d\hat{t} \int_{-\infty}^{\infty} dr \, \delta[t - \hat{t} - T(\hat{t}, r)] \, \mathcal{G}_\sigma(r) \, A(\hat{t})     (8.23)

The interspike interval of a neuron with reset parameter r is T(t̂, r) = r + T_0(t̂ + r), where T_0(t') is the forward interval of a noiseless neuron that has fired its last spike at t'. The integration over t̂ in Eq. (8.23) can be performed and yields

A(t) = \left[ 1 + \frac{h'}{\eta'} \right] \int_{-\infty}^{\infty} dr \, \mathcal{G}_\sigma(r) \, A[t - T_b(t) - r]     (8.24)

where T_b is the backward interval. The factor [1 + h'/η'] arises from the integration over the δ function, just as in the noiseless case; cf. Eqs. (7.13) and (7.15). The integral over r leads to a broadening of the pulse, the factor [1 + h'/η'] to a compression.

We now search for periodic solutions. As shown below, a limit cycle solution of Eq. (8.24) consisting of a sequence of Gaussian pulses exists if the noise amplitude σ is small and h'/η' > 0. The width d of the activity pulses on the limit cycle is proportional to the noise level σ. A simulation of locking in the presence of noise is shown in Fig. 8.8. The network of SRM0 neurons has inhibitory connections (J_0 = -1) and is coupled via the response kernel (8.10) with a transmission delay of Δ^ax = 2 ms. Doubling the noise level σ leads to activity pulses of twice the width.

Figure 8.8: Synchronous activity in the presence of noise. Simulation of a population of 1000 neurons with inhibitory coupling (J = -1, Δ^ax = 2 ms) and noisy reset. A. Low noise level (σ = 0.25). B. For a higher noise level (σ = 0.5), the periodic pulses become broader.
Pulse width in the presence of noise (*)

In order to calculate the width of the activity pulses in a locked state, we look for periodic pulse-type solutions of Eq. (8.24). We assume that the pulses are Gaussians of width d that repeat with period T, viz., A(t) = Σ_k G_d(t - kT). The pulse width d will be determined self-consistently from Eq. (8.24). The integral over r in Eq. (8.24) can be performed and yields a Gaussian of width σ̃ = [d² + σ²]^{1/2}. Equation (8.24) becomes

\sum_k \mathcal{G}_d(t - kT) = \left[ 1 + \frac{h'(t)}{\eta'(T)} \right] \sum_k \mathcal{G}_{\tilde{\sigma}}[t - T_b(t) - kT]     (8.25)

where T_b(t) = τ ln{η̃_0/[h(t) - ϑ]} is the interspike interval looking backwards in time.

Let us work out the self-consistency condition, focusing on the pulse around t ≈ 0. It corresponds to the k = 0 term on the left-hand side, which must equal the k = -1 term on the right-hand side of Eq. (8.25). We assume that the pulse width is small, d ≪ T, and expand T_b(t) to linear order around T_b(0) = T. This yields

t - T_b(t) = t \left[ 1 + \frac{h'(0)}{\eta'(T)} \right] - T     (8.26)

The expansion is valid if h'(t) varies slowly over the width d of the pulse. We use Eq. (8.26) in the argument of the Gaussian on the right-hand side of Eq. (8.25). Since we have assumed that h' varies slowly, the factor h'(t) in Eq. (8.25) may be replaced by h'(0). In the following we suppress the arguments and simply write h' and η'. The result is

\mathcal{G}_d(t) = \left( 1 + \frac{h'}{\eta'} \right) \mathcal{G}_{\tilde{\sigma}}\!\left[ t \left( 1 + \frac{h'}{\eta'} \right) \right]     (8.27)

The Gaussian on the left-hand side of Eq. (8.27) must have the same width as the Gaussian on the right-hand side. This requires d = σ̃/[1 + h'/η'] with σ̃ = [d² + σ²]^{1/2}. A simple algebraic transformation yields an explicit expression for the pulse width,

d = \sigma \left[ 2 \, (h'/\eta') + (h'/\eta')^2 \right]^{-1/2}     (8.28)

where d is the width of the pulse and σ is the strength of the noise.
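The interplay of broadening and compression can be checked in a few lines. With assumed numbers for the slope ratio r = h'/η' and the noise level σ (illustrative only), the map d ↦ [d² + σ²]^{1/2}/(1 + r) implied by Eqs. (8.24)-(8.27) converges to the fixed-point width of Eq. (8.28).

```python
import math

r = 0.8        # assumed slope ratio h'/eta' at the firing time
sigma = 0.25   # assumed noise amplitude

# Fixed-point pulse width predicted by Eq. (8.28)
d_star = sigma / math.sqrt(2 * r + r ** 2)

# Iterate the broadening/compression map of one period: broaden to
# sqrt(d^2 + sigma^2), then compress by 1/(1 + h'/eta'); cf. Eq. (8.27).
d = 1.0
for _ in range(50):
    d = math.sqrt(d ** 2 + sigma ** 2) / (1 + r)

print(d, d_star)   # the iteration converges to the width of Eq. (8.28)
```

Because the map is a contraction whenever h'/η' > 0, the pulse width settles on the same value regardless of the initial width, which is the limit-cycle statement made above.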

8.2.3 Cluster States

We have seen that, on the one hand, the state of asynchronous firing is typically unstable at low levels of noise. On the other hand, the fully locked state may be unstable as well if the transmission delay and the length of the refractory period do not allow spikes to be triggered during the rising phase of the input potential. The natural question is thus: what does the network activity look like if both the asynchronous and the fully locked state are unstable?

Figure 8.9: Stability of cluster states. A. In an excitatory network with vanishing transmission delay the fully locked solution may be unstable. In this schematic drawing the next set of spikes at t = T is triggered while the synaptic contribution to the potential is decaying. B. A cluster state where the neurons split into two groups that fire alternately can, however, be stable. Here, the first group of neurons, which fired at t = 0, is triggered again at t = T by the second group, which fires with a phase shift of T/2 relative to the first. This state is stable because spikes are triggered during the rising phase of the input potential.

Figure 8.9A shows an example of an excitatory network with vanishing transmission delay and a refractory period that is long compared to the rising phase of the postsynaptic potential. As a consequence, the threshold condition is met only after the postsynaptic potential has passed its maximum, and the fully locked state is unstable. This, however, does not mean that the network switches into the asynchronous mode. Instead, the neurons may split into several subgroups (``clusters'') that fire alternately. Neurons within each group stay synchronized. An example of such a cluster state with two subgroups is illustrated in Fig. 8.9B. Action potentials produced by neurons of group 1 trigger the neurons of group 2 and vice versa. The population activity thus oscillates with twice the frequency of an individual neuron.

In general, there is an infinite number of different cluster states, which can be indexed by the number of subgroups. The length T of the interspike interval of a single neuron and the number n of subgroups in a cluster state are related by the threshold condition for spike triggering (Kistler and van Hemmen, 1999; Chow, 1998),

\vartheta - \eta(T) = \frac{J_0}{n} \sum_{k=0}^{\infty} \epsilon(kT/n)     (8.29)

Stability is determined by the Locking Theorem: a cluster state with n subgroups is stable if spikes are triggered during the rising flank of the input potential, i.e., if

\frac{d}{dt} \left[ \frac{J_0}{n} \sum_{k=0}^{\infty} \epsilon(t + kT/n) \right]_{t=0} > 0     (8.30)
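Equations (8.29) and (8.30) are easy to evaluate for concrete kernels. The sketch below uses illustrative choices, not the book's parameters: exponential refractoriness, an α-shaped PSP, inhibitory coupling J_0 < 0 with a negative threshold as in the inhibitory example above. It determines the single-neuron period T for n = 1, 2, 3 subgroups and evaluates the slope of the input potential at the firing time; the k = 0 self-term, which vanishes in Eq. (8.29), is excluded from the slope because only previous spikes contribute.

```python
import numpy as np

# Illustrative parameters (assumed): inhibitory coupling, negative threshold.
tau_r, eta0 = 4.0, 1.0
tau_s, delay = 2.0, 0.0
theta, J0 = -0.5, -0.6

def eta(t):
    return -eta0 * np.exp(-t / tau_r)

def eps(t):
    s = t - delay
    return np.where(s > 0, (s / tau_s) * np.exp(1.0 - s / tau_s), 0.0)

def h(t, T, n, kmax=200):
    """Input potential (J0/n) * sum_{k>=1} eps(t + k*T/n); cf. Eq. (8.30).
    Only previous spikes (k >= 1) contribute; the sum is truncated at kmax."""
    k = np.arange(1, kmax)
    return (J0 / n) * np.sum(eps(t + k * T / n))

for n in (1, 2, 3):
    # Eq. (8.29): smallest T at which the summed PSP has recovered
    # to the dynamic threshold theta - eta(T).
    Ts = np.arange(1.0, 15.0, 0.005)
    T = Ts[np.argmax([h(0.0, TT, n) >= theta - eta(TT) for TT in Ts])]
    # Eq. (8.30): slope of the input potential at the firing time.
    slope = (h(1e-4, T, n) - h(-1e-4, T, n)) / 2e-4
    print(f"n={n}: period T={T:.2f} ms, slope h'={slope:+.3f}",
          "-> stable" if slope > 0 else "-> unstable")
```

With these assumed inhibitory parameters all three cluster states satisfy the stability condition (8.30); splitting into more subgroups lengthens the single-neuron period, since each group receives only a 1/n share of the coupling.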

In Section 8.1 we have seen that the state of asynchronous firing in an SRM network is always unstable in the absence of noise. We now see that even if the fully locked state is unstable, the network does not fire asynchronously but usually gets stuck in one of the many possible cluster states. Asynchronous firing can only be approached asymptotically by increasing the number of subgroups so as to ``distribute'' the spike activity more evenly in time. Individual neurons, however, will always fire periodically. Nevertheless, increasing the number of subgroups also reduces the amplitude of the oscillations in the input potential, so that the firing times of the neurons become more and more sensitive to noise. The above statement that asynchrony can only be reached asymptotically is therefore only valid in strictly noiseless networks.

A final remark on the stability of the clusters is in order. Depending on the form of the postsynaptic potential, the stability of the locked state may be asymmetric in the sense that neurons that fire too late are pulled back into their cluster, whereas neurons that have fired too early are attracted by the cluster that has fired just before. If the noise level is not too low, there are always some neurons that drop out of their cluster and drift slowly towards an adjacent cluster (Ernst et al., 1995; van Vreeswijk, 1996).

Example: Cluster states and harmonics

To illustrate the relation between the instability of the state of asynchronous firing and cluster states, we return to the network of SRM0 neurons with noisy reset that we studied in Section 8.1. For low noise (σ = 0.04), the asynchronous firing state is unstable whatever the axonal transmission delay; cf. Fig. 8.2. With an axonal delay of 2 ms, asynchronous firing is unstable with respect to an oscillation with frequency ω_3. The population splits into 3 different groups of neurons that fire with a period of about 8 ms. The population activity, however, oscillates with a period of 2.7 ms; cf. Fig. 8.10A. With a delay of 1.2 ms, the asynchronous firing state has an instability with respect to ω_5, so that the population activity oscillates with a period of about 1.6 ms. The population splits into 5 different groups of neurons that fire with a period of about 8 ms; cf. Fig. 8.10B.

Figure 8.10: Cluster states for SRM0 neurons with noisy reset. Population activity (top) and spike trains of 20 neurons (bottom). A. For an axonal delay of Δ^ax = 2 ms, the population splits into three clusters. B. Same as in A, but with an axonal delay of Δ^ax = 1.2 ms. The population splits into five clusters, because asynchronous firing is unstable with respect to an oscillation with frequency ω_5; cf. Fig. 8.2. Very low noise (σ = 0.04 ms); all other parameters as in Fig. 8.2.

Gerstner and Kistler
Spiking Neuron Models. Single Neurons, Populations, Plasticity
Cambridge University Press, 2002

© Cambridge University Press
This book is in copyright. No reproduction of any part of it may take place without the written permission of Cambridge University Press.