

7.1 Linearized Population Equation

We consider a homogeneous population of independent neurons. All neurons receive the same current I(t) fluctuating about the mean I0. More specifically we set

I(t) = I_0 + \Delta I(t) . (7.1)

For small fluctuations, |\Delta I| \ll I_0, we expect that the population activity stays close to the value A_0 that it would have for a constant current I_0, i.e.,

A(t) = A_0 + \Delta A(t) , (7.2)

with |\Delta A| \ll A_0. In that case, we may expand the right-hand side of the population equation A(t) = \int_{-\infty}^{t} P_I(t|\hat{t}) \, A(\hat{t}) \, d\hat{t} into a Taylor series about A_0 to linear order in \Delta A. In this section, we want to show that for spiking neuron models (either integrate-and-fire or SRM0 neurons) the linearized population equation can be written in the form

\Delta A(t) = \int_{-\infty}^{t} P_0(t - \hat{t}) \, \Delta A(\hat{t}) \, d\hat{t} + A_0 \, \frac{d}{dt} \int_{0}^{\infty} \mathcal{L}(x) \, \Delta h(t - x) \, dx , (7.3)

where P_0(t - \hat{t}) is the interval distribution for constant input I_0, \mathcal{L}(x) is a real-valued function that plays the role of an integral kernel, and

\Delta h(t) = \int_{0}^{\infty} \kappa(s) \, \Delta I(t - s) \, ds (7.4)

is the input potential generated by the time-dependent part of the input current. The first term on the right-hand side of Eq. (7.3) takes into account that previous perturbations \Delta A(\hat{t}) with \hat{t} < t have an after-effect one inter-spike interval later. The second term describes the immediate response to a change in the input potential. If we want to understand the response of the population to an input current \Delta I(t), we need to know the characteristics of the kernel \mathcal{L}(x). The main task of this section is therefore the calculation of \mathcal{L}(x).
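The filtering step of Eq. (7.4) can be sketched numerically. The text leaves the kernel \kappa generic; the exponential form and all parameter values below are illustrative assumptions.

```python
import numpy as np

# Discretized version of Eq. (7.4): the input potential Delta h is the input
# fluctuation Delta I filtered by the kernel kappa. The text leaves kappa
# generic; here we assume a simple exponential, kappa(s) = exp(-s/tau_m).
dt = 0.1                      # ms, time step (illustrative)
tau_m = 10.0                  # ms, membrane time constant (assumption)
s = np.arange(0.0, 10 * tau_m, dt)
kappa = np.exp(-s / tau_m)

t = np.arange(0.0, 500.0, dt)
delta_I = 0.2 * np.ones_like(t)              # small step of the input current

# causal convolution: Delta h(t) = sum over s of kappa(s) * Delta I(t - s) * dt
delta_h = np.convolve(delta_I, kappa)[: t.size] * dt

# after many tau_m, Delta h saturates at Delta I * (integral of kappa) = Delta I * tau_m
print(delta_h[-1])            # -> approximately 0.2 * 10.0 = 2.0
```

For a step of the input current, \Delta h relaxes toward \Delta I \int_0^\infty \kappa(s)\,ds on the time scale of the kernel, which is the "input potential" the linearized equation responds to.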

Here we give an overview of the main results that we will obtain in the present chapter; explicit expressions for the kernel $ \mathcal {L}$(x) are presented in Tab. 7.1.

In the low-noise limit, the kernel $ \mathcal {L}$(x) is a Dirac $ \delta$ function. The dynamics of the population activity $ \Delta$A has therefore a term proportional to the derivative of the input potential; cf. Eq. (7.3). We will see that this result implies a fast response $ \Delta$A to any change in the input.

For high noise, the kernel $ \mathcal {L}$(x) depends critically on the noise model. For noise that is slow compared to the intrinsic neuronal dynamics (e.g., noise in the reset or stochastic spike arrival in combination with a slow synaptic time constant) the kernel $ \mathcal {L}$(x) is similar to that in the noise-free case. Thus the dynamics of $ \Delta$A is proportional to the derivative of the input potential and therefore fast.

For a large amount of `fast' noise (e.g., escape noise), the kernel \mathcal{L}(x) is broad, so that the dynamics of the population activity is proportional to the input potential itself rather than to its derivative; cf. Eq. (7.3). As we will see, this implies that the response to a change in the input is slow.

Results for escape noise and reset noise have been derived by Gerstner (2000b) while results for diffusive noise have been presented by Brunel et al. (2001) based on a linearization of the membrane potential density equation (Brunel and Hakim, 1999). The effect of slow noise in parameters has already been discussed in Knight (1972a). Apart from the approach discussed in this section, a fast response of a population of integrate-and-fire neurons with diffusive noise can also be induced if the variance of the diffusive noise is changed (Bethge et al., 2001; Lindner and Schimansky-Geier, 2001).

Before we turn to the general case, we will focus in Section 7.1.1 on a noise-free population. We will see why the dynamics of $ \Delta$A(t) has a contribution proportional to the derivative of the input potential. In Section 7.1.2 we derive the general expression for the kernel $ \mathcal {L}$(x) and apply it to different situations. Readers not interested in the mathematical details may skip the remainder of this section and move directly to Section 7.2.

Table 7.1: The kernel \mathcal{L}(x) for integrate-and-fire and SRM0 neurons (upper index IF and SRM, respectively) in the general case (`Definition'), without noise, as well as for escape and reset noise. S_0(s) is the survivor function in the asynchronous state and \mathcal{G}_\sigma a normalized Gaussian with width \sigma. Primes denote derivatives with respect to the argument.

7.1.1 Noise-free Population Dynamics (*)

We start with a reduction of the population integral equation (6.75) to the noise-free case. In the limit of no noise, the input-dependent interval distribution P_I(t|\hat{t}) reduces to a Dirac \delta function, i.e.,

P_I(t|\hat{t}) = \delta[t - \hat{t} - T(\hat{t})] , (7.5)

where T(\hat{t}) is the inter-spike interval of a neuron that has fired its last spike at time \hat{t}. If we insert Eq. (7.5) in the integral equation of the population activity, A(t) = \int_{-\infty}^{t} P_I(t|\hat{t}) \, A(\hat{t}) \, d\hat{t}, we find

A(t) = \int_{-\infty}^{t} \delta[t - \hat{t} - T(\hat{t})] \, A(\hat{t}) \, d\hat{t} . (7.6)

The interval T($ \hat{{t}}$) of a noise-free neuron is given implicitly by the threshold condition

T(\hat{t}) = \min\{ (t - \hat{t}) \,|\, u(t) = \vartheta ; \ \dot{u} > 0 , \ t > \hat{t} \} . (7.7)

Note that T(\hat{t}) is the interval starting at \hat{t} and looking forward towards the next spike; cf. Fig. 7.1. The integration over the \delta function in Eq. (7.6) can be done, but since T in the argument of the \delta function depends upon \hat{t}, the evaluation of the integral needs some care.
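The content of Eq. (7.6) can be sketched numerically: map a dense set of last firing times through \hat{t} \to \hat{t} + T(\hat{t}) and compare the resulting spike density with the compression factor 1/(1 + T') of Eq. (7.9). The modulated interval T(\hat{t}) and all numbers below are illustrative assumptions.

```python
import numpy as np

# Sketch of the compression effect behind Eqs. (7.6) and (7.9): noise-free
# neurons that fired at t_hat fire next at t_hat + T(t_hat). Where T' < 0,
# firing times are compressed and the spike density (the activity) rises.
A0 = 100.0                                  # density of last firing times
t_hat = np.arange(0.0, 100.0, 1.0 / A0)     # uniformly spread, density A0

T0, eps, omega = 5.0, 0.05, 0.5

def T(x):                                   # modulated forward interval (assumption)
    return T0 + eps * np.sin(omega * x)

def Tprime(x):                              # its derivative T'(t_hat)
    return eps * omega * np.cos(omega * x)

t_next = t_hat + T(t_hat)                   # next firing times of all neurons

# density of next firing times around one probe point, vs. Eq. (7.9)
probe = 50.0
width = 1.0
center = probe + T(probe)
density = np.sum(np.abs(t_next - center) < width / 2) / width
print(density, A0 / (1.0 + Tprime(probe)))  # the two values should nearly agree
```

The measured density of mapped firing times matches A_0/(1 + T'), the compression factor derived below.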

Figure 7.1: A neuron that has fired at time \hat{t} fires its next spike at \hat{t} + T(\hat{t}), where T is the `forward' interval. Looking backwards, we find that a neuron that fires now at time t has fired its last spike at t - T_b(t), where T_b is the backward interval.

We recall from the rules for $ \delta$ functions that

\int_{a}^{b} \delta[f(x)] \, g(x) \, dx = \frac{g(x_0)}{|f'(x_0)|} (7.8)

if f has a single zero-crossing f(x_0) = 0 in the interval a < x_0 < b with f'(x_0) \ne 0. The prime denotes the derivative. If there is no solution f(x_0) = 0 in the interval [a, b], the integral vanishes. In our case, x plays the role of the variable \hat{t} with f(\hat{t}) = t - \hat{t} - T(\hat{t}). Hence f'(\hat{t}) = -1 - T'(\hat{t}) and

A(t) = \frac{1}{1 + T'(\hat{t})} \, A(\hat{t}) , (7.9)

whenever a solution of \hat{t} = t - T_b(t) exists. Here T_b(t) is the backward interval of neurons that reach the threshold at time t. Eq. (7.9) has an intuitive interpretation: the activity at time t is proportional to the number of neurons that have fired one period earlier. The proportionality constant is called the compression factor. If the inter-spike intervals decrease (T' < 0), neuronal firing times are `compressed' and the population activity increases. If inter-spike intervals become larger (T' > 0), the population activity decreases; cf. Fig. 7.2.
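The \delta-function rule of Eq. (7.8), which underlies this evaluation, is easy to verify numerically by replacing the \delta function with a narrow normalized Gaussian. The concrete choices of f and g below are illustrative.

```python
import numpy as np

# Numerical check of the delta-function rule, Eq. (7.8):
#   int delta[f(x)] g(x) dx = g(x0) / |f'(x0)|
# for a single zero x0, with the delta function approximated by a
# narrow normalized Gaussian.
def f(x):
    return 2.0 * x - 1.0      # single zero at x0 = 0.5 with f'(x0) = 2

def g(x):
    return x ** 2

x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
sigma = 1e-3
delta_approx = np.exp(-f(x) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)

lhs = np.sum(delta_approx * g(x)) * dx
print(lhs)                    # -> approx g(0.5)/|f'(0.5)| = 0.25/2 = 0.125
```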

To evaluate T'(\hat{t}) we use the threshold condition (7.7). From \vartheta = u[\hat{t} + T(\hat{t})] = \eta[T(\hat{t})] + h[\hat{t} + T(\hat{t})|\hat{t}] we find, by taking the derivative with respect to \hat{t},

0 = \eta'[T(\hat{t})] \, T'(\hat{t}) + \partial_t h[\hat{t} + T(\hat{t})|\hat{t}] \, [1 + T'(\hat{t})] + \partial_{\hat{t}} h[\hat{t} + T(\hat{t})|\hat{t}] . (7.10)

The prime denotes the derivative with respect to the argument. We have introduced a short-hand notation for the partial derivatives, viz., \partial_t h(t|\hat{t}) = \partial h(t|\hat{t})/\partial t and \partial_{\hat{t}} h(t|\hat{t}) = \partial h(t|\hat{t})/\partial \hat{t}. We solve for T' and find

T' = - \frac{\partial_{\hat{t}} h + \partial_t h}{\eta' + \partial_t h} , (7.11)

where we have suppressed the arguments for brevity. A simple algebraic transformation yields

\frac{1}{1 + T'} = 1 + \frac{\partial_t h + \partial_{\hat{t}} h}{\eta' - \partial_{\hat{t}} h} , (7.12)

which we insert into Eq. (7.9). The result is

A(t) = \left[ 1 + \frac{\partial_t h(t|\hat{t}) + \partial_{\hat{t}} h(t|\hat{t})}{\eta'(t - \hat{t}) - \partial_{\hat{t}} h(t|\hat{t})} \right] A(\hat{t}) , with \hat{t} = t - T_b(t) , (7.13)

where T_b(t) is the backward interval given a spike at time t. A solution T_b(t) exists only if some neurons reach the threshold at time t. If this is not the case, the activity A(t) vanishes. The partial derivatives in Eq. (7.13) are to be evaluated at \hat{t} = t - T_b(t); the derivative \eta' = d\eta(s)/ds is to be evaluated at s = T_b(t). We may summarize Eq. (7.13) by saying that the activity at time t depends on the activity one period earlier, modulated by the factor in square brackets. Note that Eq. (7.13) is still exact.

Linearization

Let us consider a fluctuating input current that generates small perturbations in the population activity \Delta A(t) and the input potential \Delta h(t), as outlined at the beginning of this section. If we substitute A(t) = A_0 + \Delta A(t) and h(t|\hat{t}) = h_0 + \Delta h(t|\hat{t}) into Eq. (7.13) and linearize in \Delta A and \Delta h, we obtain an expression of the form

\Delta A(t) = \Delta A(t - T) + A_0 \, C(t) , (7.14)

where T = 1/A_0 is the interval for constant input I_0 and C is a time-dependent factor, called the compression factor. The activity at time t thus depends on the activity one inter-spike interval earlier and on the instantaneous value of the compression factor.

For SRM0 neurons we have h(t|\hat{t}) = h(t), so that the partial derivative with respect to \hat{t} vanishes. The factor in square brackets in Eq. (7.13) therefore reduces to [1 + (h'/\eta')]. If we linearize Eq. (7.13) we find the compression factor

C^{\rm SRM}(t) = h'(t)/\eta'(T) . (7.15)
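The linearized dynamics of Eq. (7.14) with the compression factor of Eq. (7.15) can be iterated directly. All parameter values in this sketch are illustrative assumptions.

```python
import numpy as np

# Sketch: iterate the linearized dynamics, Eq. (7.14),
#   Delta A(t) = Delta A(t - T) + A0 * C(t),
# with the SRM0 compression factor C(t) = h'(t) / eta'(T) of Eq. (7.15).
dt = 0.001
T = 1.0                        # inter-spike interval at constant input (assumption)
A0 = 1.0 / T                   # stationary activity
eta_prime = 2.0                # eta'(T) > 0 for an exponential refractory kernel

t = np.arange(0.0, 10.0, dt)
h_prime = 0.1 * np.cos(2 * np.pi * t / 5.0)   # derivative of the input potential
C = h_prime / eta_prime

n_T = int(round(T / dt))
dA = np.zeros_like(t)
for i in range(t.size):
    echo = dA[i - n_T] if i >= n_T else 0.0   # activity one interval earlier
    dA[i] = echo + A0 * C[i]

# during the first interval there is no echo yet, so Delta A(t) = A0 * C(t)
print(np.allclose(dA[:n_T], A0 * C[:n_T]))    # -> True
```

The echo term re-injects earlier perturbations one interval later, while the drive follows the derivative of the input potential, as the noise-free analysis predicts.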

For integrate-and-fire neurons we have a similar result. To evaluate the partial derivatives that we need in Eq. (7.13) we write u(t) = \eta(t - \hat{t}) + h(t|\hat{t}) with

\eta(t - \hat{t}) = u_r \, e^{-(t - \hat{t})/\tau_m} ,
h(t|\hat{t}) = h(t) - h(\hat{t}) \, e^{-(t - \hat{t})/\tau_m} ; (7.16)

cf. Eqs. (4.34) and (4.60). Here u_r is the reset potential of the integrate-and-fire neurons and h(t) = \int_{0}^{\infty} \exp(-s/\tau_m) \, I(t - s) \, ds is the input potential generated by the input current I.

Taking the derivative of $ \eta$ and the partial derivatives of h yields

\frac{\partial_t h + \partial_{\hat{t}} h}{\eta' - \partial_{\hat{t}} h} = \frac{h'(t) - h'(t - T_b) \, e^{-T_b/\tau_m}}{h'(t - T_b) \, e^{-T_b/\tau_m} + \tau_m^{-1} \, [h(t - T_b) - u_r] \, e^{-T_b/\tau_m}} , (7.17)

which we now insert in Eq. (7.13). Since we are interested in the linearized activity equation, we replace Tb(t) by the interval T = 1/A0 for constant input and drop the term h' in the denominator. This yields Eq. (7.14) with a compression factor CIF given by

C^{\rm IF}(t) = [h'(t) - h'(t - T) \, \exp(-T/\tau_m)]/u' . (7.18)

Here u' is the derivative of the membrane potential for constant input current I_0, i.e., u' = \tau_m^{-1} \, [h(t - T_b) - u_r] \, e^{-T_b/\tau_m}. The label IF is short for integrate-and-fire neurons.

Example: Compression of firing times for SRM0 neurons

In order to motivate the name `compression factor' and to give an interpretation of Eq. (7.14), we consider SRM0 neurons with an exponential refractory kernel \eta(s) = -\eta_0 \exp(-s/\tau). We want to show graphically that the population activity \Delta A has a contribution that is proportional to the derivative of the input potential.

Figure 7.2: A change in the input potential h with positive slope h' > 0 (dashed line, bottom) shifts neuronal firing times closer together (middle). As a result, the activity A(t) (solid line, top) is higher at t = \hat{t} + T(\hat{t}) than it was at time \hat{t} (schematic diagram); taken from Gerstner (2000b).

We consider Fig. 7.2. A neuron which has fired at \hat{t} will fire again at t = \hat{t} + T(\hat{t}). Another neuron which has fired slightly later, at \hat{t} + \delta\hat{t}, fires its next spike at t + \delta t. If the input potential is constant between t and t + \delta t, then \delta t = \delta\hat{t}. If, however, h increases between t and t + \delta t, as is the case in Fig. 7.2, then the firing time difference is reduced. The compression of firing time differences is directly related to an increase in the activity A. To see this, we note that all neurons which fire between \hat{t} and \hat{t} + \delta\hat{t} must fire again between t and t + \delta t. This is due to the fact that the network is homogeneous and the mapping \hat{t} \to t = \hat{t} + T(\hat{t}) is monotonic. If firing time differences are compressed, the population activity increases.

In order to establish the relation between Fig. 7.2 and Eq. (7.15), we note that the compression factor is equal to h'/\eta'. For an SRM0 neuron with exponential refractory kernel, \eta'(s) > 0 holds for all s > 0. An input with h' > 0 then implies, because of Eq. (7.14), an increase of the activity:

h' > 0 \Longrightarrow A(t) > A(t - T) . (7.19)

7.1.2 Escape noise (*)

In this section we focus on a population of neurons with escape noise. The aim of this section is two-fold. First, we want to show how to derive the linearized population equation (7.3) that has already been stated at the beginning of Section 7.1. Second, we will show that in the case of high noise the population activity follows the input potential h(t), whereas for low noise the activity follows the derivative h'(t). These results will be used in the following three sections for a discussion of signal transmission and coding properties.

In order to derive the linearized response $ \Delta$A of the population activity to a change in the input we start from the conservation law,

1 = \int_{-\infty}^{t} S_I(t|\hat{t}) \, A(\hat{t}) \, d\hat{t} , (7.20)

cf. (6.73). As we have seen in Chapter 6.3 the population equation (6.75) can be obtained by taking the derivative of Eq. (7.20) with respect to t, i.e.,

0 = \frac{d}{dt} \int_{-\infty}^{t} S_I(t|\hat{t}) \, A(\hat{t}) \, d\hat{t} . (7.21)

For constant input I0, the population activity has a constant value A0. We consider a small perturbation of the stationary state, A(t) = A0 + $ \Delta$A(t), that is caused by a small change in the input current, $ \Delta$I(t). The time-dependent input generates a total postsynaptic potential,

h(t|\hat{t}) = h_0(t|\hat{t}) + \Delta h(t|\hat{t}) , (7.22)

where h0(t|$ \hat{{t}}$) is the postsynaptic potential for constant input I0 and

\Delta h(t|\hat{t}) = \int_{0}^{\infty} \kappa(t - \hat{t}, s) \, \Delta I(t - s) \, ds (7.23)

is the change of the postsynaptic potential generated by $ \Delta$I. We expand Eq. (7.21) to linear order in $ \Delta$A and $ \Delta$h and find

0 = \frac{d}{dt} \int_{-\infty}^{t} S_0(t - \hat{t}) \, \Delta A(\hat{t}) \, d\hat{t} + A_0 \, \frac{d}{dt} \left\{ \int_{-\infty}^{t} d\hat{t} \int_{\hat{t}}^{t} ds \, \Delta h(s|\hat{t}) \, \left. \frac{\partial S_I(t|\hat{t})}{\partial \Delta h(s|\hat{t})} \right|_{\Delta h=0} \right\} . (7.24)

We have used the notation S0(t - $ \hat{{t}}$) = SI0(t | $ \hat{{t}}$) for the survivor function of the asynchronous firing state. To take the derivative of the first term in Eq. (7.24) we use dS0(s)/ds = - P0(s) and S0(0) = 1. This yields

\Delta A(t) = \int_{-\infty}^{t} P_0(t - \hat{t}) \, \Delta A(\hat{t}) \, d\hat{t} - A_0 \, \frac{d}{dt} \left\{ \int_{-\infty}^{t} d\hat{t} \int_{\hat{t}}^{t} ds \, \Delta h(s|\hat{t}) \, \left. \frac{\partial S_I(t|\hat{t})}{\partial \Delta h(s|\hat{t})} \right|_{\Delta h=0} \right\} . (7.25)

We note that the first term on the right-hand side of Eq. (7.25) has the same form as the population integral equation (6.75), except that P0 is the interval distribution in the stationary state of asynchronous firing.

To make some progress in the treatment of the second term on the right-hand side of Eq. (7.25), we restrict the choice of neuron model and focus on SRM0 or integrate-and-fire neurons. For SRM0 neurons, we may drop the $ \hat{{t}}$ dependence of the potential and set $ \Delta$h(t|$ \hat{{t}}$) = $ \Delta$h(t) where $ \Delta$h is the input potential caused by the time-dependent current $ \Delta$I; compare Eqs. (7.4) and (7.23). This allows us to pull the variable $ \Delta$h(s) in front of the integral over $ \hat{{t}}$ and write Eq. (7.25) in the form

\Delta A(t) = \int_{-\infty}^{t} P_0(t - \hat{t}) \, \Delta A(\hat{t}) \, d\hat{t} + A_0 \, \frac{d}{dt} \int_{0}^{\infty} \mathcal{L}(x) \, \Delta h(t - x) \, dx (7.26)

with a kernel

\mathcal{L}(x) = - \int_{x}^{\infty} d\xi \, \frac{\partial S(\xi|0)}{\partial \Delta h(\xi - x)} \equiv \mathcal{L}^{\rm SRM}(x) ; (7.27)

cf. Tab. 7.1.

For integrate-and-fire neurons we set \Delta h(t|\hat{t}) = \Delta h(t) - \Delta h(\hat{t}) \, \exp[-(t - \hat{t})/\tau]; cf. Eq. (7.16). After some rearrangements of the terms, Eq. (7.25) becomes identical to Eq. (7.26) with a kernel

\mathcal{L}(x) = - \int_{x}^{\infty} d\xi \, \frac{\partial S(\xi|0)}{\partial \Delta h(\xi - x)} + \int_{0}^{x} d\xi \, e^{-\xi/\tau} \, \frac{\partial S(x|0)}{\partial \Delta h(\xi)} \equiv \mathcal{L}^{\rm IF}(x) ; (7.28)

cf. Tab. 7.1.

Let us discuss Eq. (7.26). The first term on the right-hand side of Eq. (7.26) is of the same form as the dynamic equation (6.75) and describes how perturbations $ \Delta$A($ \hat{{t}}$) in the past influence the present activity $ \Delta$A(t). The second term gives an additional contribution which is proportional to the derivative of a filtered version of the potential $ \Delta$h.

We see from Fig. 7.3 that the width of the kernel $ \mathcal {L}$ depends on the noise level. For low noise, it is significantly sharper than for high noise. For a further discussion of Eq. (7.26) we approximate the kernel by an exponential low-pass filter

\mathcal{L}^{\rm SRM}(x) = a \, \rho \, e^{-\rho x} \, \mathcal{H}(x) , (7.29)

where a is a constant and $ \rho$ is a measure of the noise. It is shown in the examples below that Eq. (7.29) is exact for neurons with step-function escape noise and for neurons with absolute refractoriness.

The noise-free threshold process can be retrieved from Eq. (7.29) for \rho \to \infty. In this limit, \mathcal{L}^{\rm SRM}(x) = a \, \delta(x) and the initial transient is proportional to h', as discussed above. For small \rho, however, the behavior is different. We use Eq. (7.29) and rewrite the last term in Eq. (7.26) in the form

\frac{d}{dt} \int_{0}^{\infty} \mathcal{L}^{\rm SRM}(x) \, \Delta h(t - x) \, dx = a \rho \, [\Delta h(t) - \overline{\Delta h}(t)] , (7.30)

where \overline{\Delta h}(t) = \int_{0}^{\infty} \rho \, \exp(-\rho x) \, \Delta h(t - x) \, dx is a running average. Thus the activity responds to the temporal contrast \Delta h(t) - \overline{\Delta h}(t). At high noise levels, \rho is small, so that \overline{\Delta h} is an average over a long time window; cf. Eq. (7.29). If the fluctuations \Delta I have vanishing mean (\langle \Delta I \rangle = 0), we may set \overline{\Delta h}(t) = 0. Thus, for escape noise in the large-noise limit, we find \Delta A(t) \propto \Delta h(t). This is exactly the result that would be expected for a simple rate model.

The kernel \mathcal{L}(x) for escape noise (*)

Figure 7.3: Interval distribution (A) and the kernel \mathcal{L}^{\rm SRM}(x) (B) for SRM0 neurons with escape noise. The escape rate has been taken as piecewise linear, \rho = \rho_0 \, [u - \vartheta] \, \mathcal{H}(u - \vartheta). For low noise (solid lines in A and B), the interval distribution is sharply peaked and the kernel \mathcal{L}^{\rm SRM} has a small width. For high noise (dashed lines), both the interval distribution and the kernel \mathcal{L}^{\rm SRM} are broad. The value of the bias current I_0 has been adjusted so that the mean interval is always 40 ms. The kernel has been normalized to \int_{0}^{\infty} \mathcal{L}(x) \, dx = 1.

In the escape noise model, the survivor function is given by

S_I(t|\hat{t}) = \exp\left\{ - \int_{\hat{t}}^{t} f[\eta(t' - \hat{t}) + h(t'|\hat{t})] \, dt' \right\} , (7.31)

where f[u] is the instantaneous escape rate across the noisy threshold; cf. Chapter 5. We write h(t|$ \hat{{t}}$) = h0(t - $ \hat{{t}}$) + $ \Delta$h(t|$ \hat{{t}}$). Taking the derivative with respect to $ \Delta$h yields

\left. \frac{\partial S_I(t|\hat{t})}{\partial \Delta h(s|\hat{t})} \right|_{\Delta h=0} = - \mathcal{H}(s - \hat{t}) \, \mathcal{H}(t - s) \, f'[\eta(s - \hat{t}) + h_0(s - \hat{t})] \, S_0(t - \hat{t}) , (7.32)

where S0(t - $ \hat{{t}}$) = Sh0(t | $ \hat{{t}}$) and f' = df (u)/du. For SRM0-neurons, we have h0(t - $ \hat{{t}}$) $ \equiv$ h0 and $ \Delta$h(t|$ \hat{{t}}$) = $ \Delta$h(t), independent of $ \hat{{t}}$. The kernel $ \mathcal {L}$ is therefore

\mathcal{L}^{\rm SRM}(t - s) = \mathcal{H}(t - s) \int_{-\infty}^{s} d\hat{t} \, f'[\eta(s - \hat{t}) + h_0] \, S_0(t - \hat{t}) , (7.33)

as noted in Tab. 7.1.

Example: Step-function escape rate (*)

Figure 7.4: Interval distribution (A) and the kernel \mathcal{L}^{\rm IF}(x) (B) for integrate-and-fire neurons with escape noise. The escape rate has been taken as piecewise linear, \rho = \rho_0 \, [u - \vartheta] \, \mathcal{H}(u - \vartheta). The value of the bias current I_0 has been adjusted so that the mean interval is always 8 ms. The dip in the kernel around x = 8 ms is typical for integrate-and-fire neurons. Low noise: sharply peaked interval distribution and kernel. High noise: broad interval distribution and kernel.

We take f(u) = \rho \, \mathcal{H}(u - \vartheta), i.e., a step-function escape rate. For \rho \to \infty, neurons fire immediately as soon as u(t) > \vartheta and we are back to the noise-free sharp threshold. For finite \rho, neurons respond stochastically with time constant \rho^{-1}. We will show that the kernel \mathcal{L}(x) for neurons with step-function escape rate is an exponential function; cf. Eq. (7.29).
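Before computing the kernel, the interval statistics of this noise model are easy to sample: the hazard is zero before the formal threshold crossing and \rho afterwards, so each interval is the noise-free interval plus an exponential waiting time. All parameter values are illustrative.

```python
import numpy as np

# Monte Carlo sketch for the step-function escape rate f(u) = rho * H(u - theta):
# the hazard is 0 before the formal threshold crossing at T0 and rho afterwards,
# so S0(s) = 1 for s < T0 and S0(s) = exp[-rho (s - T0)] for s > T0.
rng = np.random.default_rng(0)
rho, T0 = 0.5, 4.0              # hazard (1/ms) and noise-free interval (ms)

# an interval is T0 plus an exponentially distributed waiting time
intervals = T0 + rng.exponential(1.0 / rho, size=200000)

print(np.mean(intervals))       # -> approx T0 + 1/rho = 6.0
s = 5.0
print(np.mean(intervals > s))   # -> approx S0(5) = exp(-0.5), about 0.61
```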

Let us denote by T_0 the time between the last firing time \hat{t} and the formal threshold crossing, T_0 = \min\{ s \,|\, \eta(s) + h_0 = \vartheta \}. The derivative of f is a \delta function,

f'[\eta(s) + h_0] = \rho \, \delta[\eta(s) + h_0 - \vartheta] = \frac{\rho}{\eta'} \, \delta(s - T_0) , (7.34)

where \eta' = d\eta(s)/ds, evaluated at s = T_0. The survivor function S_0(s) is unity for s < T_0 and S_0(s) = \exp[-\rho \, (s - T_0)] for s > T_0. Integration of Eq. (7.33) yields

\mathcal{L}(s) = \frac{1}{\eta'} \, \mathcal{H}(s) \, \rho \, e^{-\rho s} , (7.35)

as claimed above.

Example: Absolute refractoriness (*)

We take an arbitrary escape rate f(u) \ge 0 with \lim_{u \to -\infty} f(u) = 0 = \lim_{u \to -\infty} f'(u). Absolute refractoriness is defined by a refractory kernel \eta(s) = -\infty for 0 < s < \delta^{\rm abs} and zero otherwise. This yields f[\eta(t - \hat{t}) + h_0] = f(h_0) \, \mathcal{H}(t - \hat{t} - \delta^{\rm abs}) and hence

f'[\eta(t - \hat{t}) + h_0] = f'(h_0) \, \mathcal{H}(t - \hat{t} - \delta^{\rm abs}) . (7.36)

The survivor function S_0(s) is unity for s < \delta^{\rm abs} and decays as \exp[-f(h_0) \, (s - \delta^{\rm abs})] for s > \delta^{\rm abs}. Integration of Eq. (7.33) yields

\mathcal{L}(x) = \mathcal{H}(x) \, \frac{f'(h_0)}{f(h_0)} \, \exp[-f(h_0) \, x] . (7.37)

Note that for neurons with absolute refractoriness the transition to the noiseless case is not meaningful. We have seen in Chapter 6 that absolute refractoriness leads to the Wilson-Cowan integral equation (6.76). Thus \mathcal{L} defined in (7.37) is the kernel relating to Eq. (6.76); it could have been derived directly from the linearization of the Wilson-Cowan integral equation. We note that it is a low-pass filter with cut-off frequency f(h_0), which depends on the input potential h_0.
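The low-pass character of the kernel in Eq. (7.37) can be made concrete by computing its Fourier amplitude: at \omega = f(h_0) the gain drops to 1/\sqrt{2} of the DC value, which is the usual definition of a cut-off frequency. The values of f(h_0) and f'(h_0) below are illustrative assumptions.

```python
import numpy as np

# Sketch: the kernel L(x) = H(x) (f'(h0)/f(h0)) exp[-f(h0) x] of Eq. (7.37)
# is a first-order low-pass filter with cut-off frequency f(h0).
f0 = 20.0                       # f(h0), stationary escape rate (assumption)
fp0 = 5.0                       # f'(h0) (assumption)

x = np.arange(0.0, 2.0, 1e-5)
dx = x[1] - x[0]
L = (fp0 / f0) * np.exp(-f0 * x)

def L_hat(omega):
    # numerical Fourier transform of the causal kernel
    return np.sum(L * np.exp(-1j * omega * x)) * dx

ratio = abs(L_hat(f0)) / abs(L_hat(0.0))
print(ratio)                    # -> approx 1/sqrt(2), about 0.707
```

Raising the input potential h_0 raises f(h_0) and thus widens the transmission bandwidth of the population.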

7.1.3 Noisy reset (*)

We consider SRM0-neurons with noisy reset as introduced in Chapter 5.4. After each spike the membrane potential is reset to a randomly chosen value parameterized by the reset variable r. This is an example of a `slow' noise model, since a new value of the stochastic variable r is chosen only once per inter-spike interval. The interval distribution of the noisy reset model is

P_I(t|\hat{t}) = \int_{-\infty}^{\infty} dr \, \delta[t - \hat{t} - T(\hat{t}, r)] \, \mathcal{G}_\sigma(r) , (7.38)

where \mathcal{G}_\sigma is a normalized Gaussian with width \sigma; cf. Eq. (5.68). The population equation (6.75) is thus

A(t) = \int_{-\infty}^{t} d\hat{t} \int_{-\infty}^{\infty} dr \, \delta[t - \hat{t} - T(\hat{t}, r)] \, \mathcal{G}_\sigma(r) \, A(\hat{t}) . (7.39)

A neuron that has been reset at time \hat{t} with value r behaves identically to a noise-free neuron that has fired its last spike at \hat{t} + r. In particular, we have the relation T(\hat{t}, r) = r + T_0(\hat{t} + r), where T_0(t') is the forward interval of a noiseless neuron that has fired its last spike at t'. The integration over \hat{t} in Eq. (7.39) can therefore be done and yields

A(t) = \left[ 1 + \frac{h'}{\eta'} \right] \int_{-\infty}^{\infty} dr \, \mathcal{G}_\sigma(r) \, A[t - T_b(t) - r] , (7.40)

where Tb is the backward interval. The factor [1 + (h'/$ \eta{^\prime}$)] arises due to the integration over the $ \delta$-function just as in the noiseless case; cf. Eqs. (7.13) and (7.15).

To simplify the expression, we write A(t) = A0 + $ \Delta$A(t) and expand Eq. (7.40) to first order in $ \Delta$A. The result is

\Delta A(t) = \int_{-\infty}^{\infty} \mathcal{G}_\sigma(r) \, \Delta A(t - T_0 - r) \, dr + \frac{h'(t)}{\eta'(T_0)} \, A_0 . (7.41)

A comparison of Eqs. (7.41) and (7.3) yields the kernel \mathcal{L}(x) = \delta(x)/\eta' for the noisy-reset model. Note that it is identical to that of a population of noise-free neurons; cf. Tab. 7.1. The reason is that the effect of noise is limited to the moment of the reset. The approach of the membrane potential towards the threshold is noise-free.
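The linearized noisy-reset dynamics of Eq. (7.41) can be iterated forward in time, with the echo term smeared by the Gaussian \mathcal{G}_\sigma. All parameter choices in this sketch are illustrative assumptions.

```python
import numpy as np

# Sketch: forward iteration of the linearized noisy-reset dynamics, Eq. (7.41),
#   Delta A(t) = int G_sigma(r) Delta A(t - T0 - r) dr + A0 h'(t) / eta'(T0).
# The Gaussian smearing of the echo damps reverberations at nonzero frequencies.
dt = 0.1
T0, sigma = 8.0, 1.0            # ms: noise-free interval and reset-noise width
A0, eta_p = 1.0 / T0, 2.0

r = np.arange(-4 * sigma, 4 * sigma + dt, dt)
G = np.exp(-r ** 2 / (2 * sigma ** 2))
G /= G.sum()                    # normalized discrete Gaussian

t = np.arange(0.0, 200.0, dt)
h_prime = 0.05 * np.cos(2 * np.pi * t / 25.0)

dA = np.zeros_like(t)
for i in range(t.size):
    echo = 0.0
    for j in range(r.size):
        k = i - int(round((T0 + r[j]) / dt))
        if k >= 0:              # activity one smeared interval earlier
            echo += G[j] * dA[k]
    dA[i] = echo + A0 * h_prime[i] / eta_p

print(np.isfinite(dA).all(), np.abs(dA).max() > 0)   # -> True True
```

Because the drive enters through h'(t), the response to a change in the input is fast, as stated above for this `slow' noise model.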

Gerstner and Kistler
Spiking Neuron Models. Single Neurons, Populations, Plasticity
Cambridge University Press, 2002

© Cambridge University Press
This book is in copyright. No reproduction of any part of it may take place without the written permission of Cambridge University Press.