

7.4 The Significance of a Single Spike

The above results, derived for a population of spiking neurons, have an intimate relation to experimental measurements of the input-output transform of a single neuron, as typically measured by a peri-stimulus time histogram (PSTH) or by reverse correlations. This relation allows us to interpret the population results in the language of neural coding; see Chapter 1.4. In particular, we would like to understand the `meaning' of a spike. In Section 7.4.1 we focus on the typical effect of a single presynaptic spike on the firing probability of a postsynaptic neuron. In Section 7.4.2 we study how much we can learn from a single postsynaptic spike about the presynaptic input.

7.4.1 The Effect of an Input Spike

What is the typical response of a neuron to a single presynaptic spike? An experimental approach to this question is to study the temporal response of a single neuron to current pulses (Fetz and Gustafsson, 1983; Poliakov et al., 1997). More precisely, a neuron is driven by a constant background current $I_0$ plus a noise current $I_{\text{noise}}$. At time $t = 0$ an additional short current pulse is injected that mimics the time course of an excitatory or inhibitory postsynaptic current. In order to test whether this extra input pulse can cause a postsynaptic action potential, the experiment is repeated several times and a peri-stimulus time histogram (PSTH) is compiled. The PSTH can be interpreted as the probability density of firing as a function of the time $t$ since the stimulus, here denoted $f_{\text{PSTH}}(t)$. Experiments show that the shape of the PSTH response to an input pulse is determined by the amount of synaptic noise and by the time course of the postsynaptic potential (PSP) caused by the current pulse (Kirkwood and Sears, 1978; Moore et al., 1970; Knox, 1974; Fetz and Gustafsson, 1983; Poliakov et al., 1997).

Figure 7.11: A. A neuron driven by a noisy background input receives one extra input spike at time $t_0 = 0$. Does this extra input trigger an output spike? B. Two hypothetical scenarios. Top: With noisy input, an output spike becomes more likely the closer the mean membrane potential (thick solid line) is to the threshold (dashed line). The firing probability increases during the postsynaptic potential caused by the input pulse at $t_0 = 0$ (arrow). Bottom: Without noise, the membrane potential can reach the threshold only during the rising phase of the postsynaptic potential.

How can we understand the relation between postsynaptic potential and PSTH? There are two different intuitive pictures; cf. Fig. 7.11. First, consider a neuron driven by stochastic background input. If the input is not too strong, its membrane potential $u$ hovers somewhere below threshold. The shorter the distance $\vartheta - u_0$ between the mean membrane potential $u_0$ and the threshold $\vartheta$, the higher the probability that the fluctuations drive the neuron to firing. Let us suppose that at $t = 0$ an additional excitatory input spike arrives. It causes an excitatory postsynaptic potential with time course $\epsilon_0(t)$ which drives the mean potential closer to threshold. We therefore expect (Moore et al., 1970) that the probability density of firing (and hence the PSTH) has a time course similar to that of the postsynaptic potential, i.e., $f_{\text{PSTH}}(t) \propto \epsilon_0(t)$; cf. Fig. 7.11B (top).

On the other hand, consider a neuron driven by a constant super-threshold current $I_0$ without any noise. If an input spike arrives during the phase where the membrane potential $u_0(t)$ is just below threshold, it may trigger a spike. Since the threshold crossing can only occur during the rising phase of the postsynaptic potential, we may expect (Kirkwood and Sears, 1978) that the PSTH is proportional to the derivative of the postsynaptic potential, i.e., $f_{\text{PSTH}}(t) \propto \frac{\text{d}}{\text{d}t}\epsilon_0(t)$; cf. Fig. 7.11B (bottom).
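The two candidate PSTH shapes are easy to compare numerically. The sketch below assumes an alpha-shaped PSP $\epsilon_0(t) = (t/\tau)\exp(1 - t/\tau)$ with a hypothetical time constant $\tau = 4$ ms (an illustration, not a shape taken from the text) and contrasts the PSP with its derivative, whose peak lies on the rising phase:

```python
import numpy as np

# Alpha-shaped PSP eps0(t) = (t/tau) * exp(1 - t/tau), peak 1 at t = tau.
# (Hypothetical shape and time constant, for illustration only.)
tau = 4.0                        # ms
t = np.arange(0.0, 40.0, 0.01)   # ms
eps0 = (t / tau) * np.exp(1.0 - t / tau)

# High-noise picture:  f_PSTH(t) ~ eps0(t) itself.
# Low-noise picture:   f_PSTH(t) ~ d(eps0)/dt, confined to the rising phase.
deps0 = np.gradient(eps0, t)

peak_psp = t[np.argmax(eps0)]    # the PSP peaks at t = tau
peak_der = t[np.argmax(deps0)]   # the derivative peaks earlier, during the rise
```

The derivative crosses zero exactly at the PSP maximum, which is why a noiseless neuron stops responding once the PSP starts to decay.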

Figure 7.12: Effect of noise on the PSTH. A. An integrate-and-fire neuron is stimulated by a current transient which consists of a deterministic pulse $I(t) = I_0 + \Delta I(t)$ plus a noise current $I_{\text{noise}}$. B. Same as in A, but with reduced noise amplitude. C. The response of the neuron to repeated presentations of the stimulus is measured by the PSTH. For high noise the PSTH is similar to the postsynaptic potential. D. For low noise, the PSTH resembles the derivative of the postsynaptic potential. E. Time course of the postsynaptic potential and F. of its derivative; taken from Herrmann and Gerstner (2001a).

Both regimes can be observed in simulations of integrate-and-fire neurons; cf. Fig. 7.12. An input pulse at $t = 0$ evokes a transient response in the PSTH whose shape depends on the noise level: it resembles either the postsynaptic potential or its derivative. Closely related effects have been reported in the experimental literature cited above. In this section we show that the theory of signal transmission by a population of spiking neurons allows us to analyze these results from a systematic point of view.

In order to understand how the theory of population activity can be applied to single-neuron PSTHs, let us consider a homogeneous population of N unconnected, noisy neurons initialized with random initial conditions, all receiving the same input. Since the neurons are independent, the activity of the population as a whole in response to a given stimulus is equivalent to the PSTH compiled from the response of a single noisy neuron to N repeated presentations of the same stimulus. Hence, we can apply theoretical results for the activity of homogeneous populations to the PSTH of an individual neuron.
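This equivalence can be checked with a toy model. In the sketch below the neurons are taken to be independent Bernoulli (Poisson-like) spikers with an arbitrary assumed rate profile; one sweep over $N$ neurons and $N$ repeated trials of a single neuron yield statistically identical histograms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative rate profile (per-ms spike probability): baseline plus a
# transient bump standing in for the effect of an extra input pulse.
dt = 1.0                                   # ms per bin
t = np.arange(0.0, 100.0, dt)
rate = 0.02 + 0.05 * np.exp(-(t - 50.0) ** 2 / 20.0)

N = 2000   # population size == number of stimulus repetitions

# (a) One sweep over a population of N independent neurons.
pop_spikes = rng.random((N, t.size)) < rate * dt
A = pop_spikes.sum(axis=0) / (N * dt)      # population activity, spikes/ms

# (b) N repeated trials of a single neuron, compiled into a PSTH.
trial_spikes = rng.random((N, t.size)) < rate * dt
f_psth = trial_spikes.sum(axis=0) / (N * dt)
```

Both estimators converge to the same underlying rate profile, which is the formal content of the population-PSTH equivalence stated above.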

Since a presynaptic spike typically causes an input pulse of small amplitude, we may calculate the PSTH from the linearized population activity equation; cf. Eq. (7.3). During the initial phase of the response, the integral over $P_0(s)\, \Delta A(t-s)$ in Eq. (7.3) vanishes and the dominant term is

$$f_{\text{PSTH}}(t) = \frac{\text{d}}{\text{d}t} \int_0^{\infty} \mathcal{L}(x)\, \epsilon_0(t-x)\, \text{d}x\,, \qquad \text{for } 0 < t \ll A_0^{-1} \tag{7.63}$$

where $\epsilon_0(t)$ is the postsynaptic potential generated by the input pulse at $t = 0$. We have seen that for low noise the kernel $\mathcal{L}(x)$ approaches a $\delta$ function. Hence, in the low-noise limit, the PSTH is proportional to the derivative of the postsynaptic potential. For high noise, on the other hand, the kernel $\mathcal{L}(x)$ is rather broad. In this case, the derivative and the integration on the right-hand side of Eq. (7.63) cancel each other, so that the PSTH is proportional to the postsynaptic potential itself. Equation (7.63) can also be applied at intermediate noise levels, where the intuitive pictures outlined above are not sufficient.

Example: The input-output cross-correlation of integrate-and-fire neurons

In this example we study integrate-and-fire neurons with escape noise. A bias current is applied so that the neurons have a constant baseline firing rate of about 30 Hz. At $t = 0$ an excitatory (or inhibitory) current pulse is applied which increases (or decreases) the firing density as measured by the PSTH; cf. Fig. 7.13. At low noise the initial response is followed by a decaying oscillation with a period given by the inverse of the single-neuron firing rate. At high noise the response is proportional to the excitatory (or inhibitory) postsynaptic potential. Note the asymmetry between excitation and inhibition: the response to an inhibitory current pulse is smaller than that to an excitatory one. The linear theory cannot reproduce this asymmetry. It is, however, possible to integrate the full nonlinear population equation (6.75) using the methods discussed in Chapter 6. The numerical integration reproduces nicely the nonlinearities found in the simulated PSTH; cf. Fig. 7.13A.

Figure 7.13: Integrate-and-fire neurons with escape noise. Population activities in response to positive and negative current pulses at two different noise levels. Simulations (thin stepped lines) are compared to theoretical responses: the thick solid line shows the result of the integration of the full nonlinear population equation (6.75), whereas the dashed line gives the approximation by the linear theory; cf. Eq. (7.3). A. High noise. B. Low noise. The bias current $I_0$ was adjusted to compensate for the change in mean activity resulting from the difference in noise levels, so that in both cases $A_0 \approx 30$ Hz. The current pulse $\propto t \exp(-t/\tau_s)$ with $\tau_s = 2$ ms is indicated above the main figure. Input pulse amplitudes were chosen to produce peaks of comparable size, $\Delta A \approx 6$ Hz; taken from Herrmann and Gerstner (2001a).
Example: Input-output measurements in motoneurons

Figure 7.14: Effect of noise on the PSTH response of a rat hypoglossal motoneuron. A Poisson train of excitatory alpha-shaped current pulses of amplitude 0.2 nA was injected into the soma of a rat hypoglossal motoneuron, superimposed on a long 1 nA current step inducing repetitive firing. In the `high' noise condition, this input was combined with an additional noise waveform. A. PSTH in the regime of `high' noise (noise power level 30 nA$^2\mu$s). B. PSTH for `low' noise. C. Motoneuron model in the high-noise and D. in the low-noise condition. Simulations (thin stepped line) are compared to the integration of the full nonlinear population equation (thick solid line) and to the prediction of the linear theory (thick dashed line). Experimental data from Poliakov et al. (1996), courtesy of M. Binder; model from Herrmann and Gerstner (2001b).

In this example we compare theoretical results with experimental input-output measurements in motoneurons (Fetz and Gustafsson, 1983; Poliakov et al., 1996, 1997). In the study of Poliakov et al. (1997), PSTH responses to Poisson-distributed trains of current pulses were recorded. The pulses were injected into the soma of rat hypoglossal motoneurons during repetitive discharge. The time course of the pulses was chosen to mimic postsynaptic currents generated by presynaptic spike arrival. PSTHs of motoneuron discharge occurrences were compiled when the pulse trains were delivered either with or without additional current noise that simulated noisy background input. Fig. 7.14 shows examples of responses from a rat motoneuron taken from the work of Poliakov and colleagues, which continues a line of earlier studies (Moore et al., 1970; Kirkwood and Sears, 1978; Knox, 1974; Fetz and Gustafsson, 1983). The effect of adding noise can be seen clearly: the low-noise peak is followed by a marked trough, whereas the high-noise PSTH has a reduced amplitude and a much smaller trough. Thus, in the low-noise regime (where the type of noise model is irrelevant) the response to a synaptic input current pulse is similar to the derivative of the postsynaptic potential (Fetz and Gustafsson, 1983), as predicted by earlier theories (Knox, 1974), while for high noise it is similar to the postsynaptic potential itself.

Figs. 7.14C and D show PSTHs produced by a Spike Response Model of a motoneuron; cf. Chapter 4.2. The model neuron is stimulated by exactly the same type of stimulus as used in the above experiments on motoneurons. The simulations of the motoneuron model are compared with the PSTH response predicted by the theory. The linear response reproduces the general characteristics seen in the simulations. The full nonlinear theory, derived from the numerical solution of the population equation, fits the simulation nicely. The results are also in qualitative agreement with the experimental data.

7.4.2 Reverse Correlation - the Significance of an Output Spike

In a standard experimental protocol to characterize the coding properties of a single neuron, the neuron is driven by a time-dependent stimulus $I(t) = I_0 + \Delta I(t)$ that fluctuates around a mean value $I_0$. Each time the neuron emits a spike, the time course of the input just before the spike is recorded. Averaging over many spikes yields the typical input that drives the neuron towards firing. This spike-triggered average is called the `reverse correlation' function; cf. Chapter 1.4. Formally, if neuronal firing times are denoted by $t^{(f)}$ and the stimulus before the spike by $I(t^{(f)} - s)$, we define the reverse correlation function as

$$C^{\text{rev}}(s) = \left\langle \Delta I(t^{(f)} - s) \right\rangle_f \tag{7.64}$$

where the average is taken over all firing times $t^{(f)}$. In our definition, the reverse correlation evaluated at a positive time $s > 0$ looks backward in time, i.e., it describes the mean input that precedes a spike. If the set of allowed stimuli is appropriately constrained, it can be shown that amongst all possible stimuli, a stimulus $\Delta I(t) \propto C^{\text{rev}}(-t)$ is in fact the optimal stimulus to trigger a spike; cf. the example at the end of this section.
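A minimal sketch of the measurement, using a hypothetical linear-probability encoder rather than any specific neuron model from the text: the spike probability per bin rises linearly with the input filtered through an assumed exponential kernel, and the spike-triggered average recovers the shape of that kernel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy encoder: spike probability per 1 ms bin depends linearly on the input
# filtered through an assumed exponential kernel (a stand-in for the neuron's
# impulse response; all numbers are illustrative).
T = 200000                      # bins
lag = 50                        # length of stimulus history kept before a spike
dI = rng.standard_normal(T)     # white-noise stimulus around the mean drive

kernel = np.exp(-np.arange(lag) / 8.0)
drive = np.convolve(dI, kernel)[:T]
p = np.clip(0.05 + 0.005 * drive, 0.0, 1.0)    # spike probability per bin
spikes = np.flatnonzero(rng.random(T) < p)
spikes = spikes[spikes >= lag]

# Reverse correlation: average the stimulus in the window preceding each spike.
C_rev = np.zeros(lag)
for tf in spikes:
    C_rev += dI[tf - lag + 1:tf + 1][::-1]     # entry s = input s bins before the spike
C_rev /= spikes.size
```

The resulting `C_rev` decays like the assumed kernel, anticipating the proportionality between reverse correlation and impulse response derived below.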

In this section, we want to relate the reverse correlation function $C^{\text{rev}}(s)$ to the signal transfer properties of a single neuron (Bryant and Segundo, 1976). In Section 7.3 we have seen that, in the linear regime, the signal transmission properties of a population of neurons are described by

$$\hat{A}(\omega) = \hat{G}(\omega)\, \hat{I}(\omega) \tag{7.65}$$

with a frequency-dependent gain $\hat{G}(\omega)$; see Eq. (7.57). We will use the fact that, for independent neurons, the transfer characteristics of a population are identical to those of a single neuron. We therefore interpret $\hat{G}(\omega)$ as the single-neuron transfer function. The inverse Fourier transform of Eq. (7.65) yields

$$A(t) = A_0 + \int_0^{\infty} G(s)\, \Delta I(t-s)\, \text{d}s \tag{7.66}$$

with the kernel $G(s)$ defined in Eq. (7.59); $A_0$ is the mean rate for constant drive $I_0$. We want to show that the reverse correlation function $C^{\text{rev}}(s)$ is proportional to the kernel $G(s)$.
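Discretized, the linear rate equation is just a causal convolution. In the sketch below the kernel, baseline rate, and input pulse are all assumed for illustration; a unit-area current pulse then produces a rate excursion with the time course of the kernel itself:

```python
import numpy as np

# Discretized linear rate prediction A(t) = A0 + sum_s G(s) dI(t-s) dt.
# Kernel shape and all numbers are hypothetical, chosen only to illustrate.
dt = 0.5                                  # ms
s = np.arange(0.0, 50.0, dt)
G = 0.8 * np.exp(-s / 10.0)               # assumed kernel (Hz per unit current per ms)
A0 = 30.0                                 # baseline rate in Hz

t = np.arange(0.0, 200.0, dt)
dI = np.zeros_like(t)
k0 = int(100.0 / dt)
dI[k0] = 1.0 / dt                         # unit-area current pulse at t = 100 ms

# Causal convolution: the rate stays at A0 until the pulse arrives,
# then relaxes back following the time course of G.
A = A0 + np.convolve(dI, G)[:t.size] * dt
```

Before the pulse the rate equals the baseline exactly; at the pulse it jumps by $G(0)$ and then decays like the kernel.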

Eq. (7.66) describes the relation between a known (deterministic) input $\Delta I(t)$ and the population activity. We now adopt a statistical point of view and assume that the input $\Delta I(t)$ is drawn from a statistical ensemble of stimuli with mean $\langle \Delta I(t) \rangle = 0$. Angular brackets denote averaging over the input ensemble or, equivalently, over an infinite input sequence. We are interested in the correlation

$$C_{AI}(s) = \lim_{T\to\infty} \frac{1}{T} \int_0^{T} A(t+s)\, \Delta I(t)\, \text{d}t = \langle A(t+s)\, \Delta I(t) \rangle \tag{7.67}$$

between the input $\Delta I$ and the activity $A$. If the input amplitude is small, so that the linearized population equation (7.66) is applicable, we find

$$C_{AI}(s) = \int_0^{\infty} G(s')\, \langle \Delta I(t+s-s')\, \Delta I(t) \rangle\, \text{d}s' \tag{7.68}$$

where we have used $A_0 \langle \Delta I(t) \rangle = 0$. The correlation function $C_{AI}$ depends on the kernel $G(s)$ as well as on the autocorrelation $\langle \Delta I(t')\, \Delta I(t) \rangle$ of the input ensemble.

For the sake of simplicity, we assume that the input consists of white noise, i.e., that it has the autocorrelation

$$\langle \Delta I(t')\, \Delta I(t) \rangle = \sigma^2\, \delta(t'-t) \tag{7.69}$$

In this case, Eq. (7.68) reduces to

$$C_{AI}(s) = \sigma^2\, G(s) \tag{7.70}$$

Thus the correlation function CAI is proportional to G(s).
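This identity can be verified numerically: filter discrete white noise through an assumed kernel $G$ and cross-correlate the output with the input. The kernel and all parameters below are illustrative, not derived from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Numerical check of C_AI(s) = sigma^2 G(s) for discrete-time white noise.
dt = 1.0
T = 200000
lags = np.arange(40)
G = 0.5 * np.exp(-lags / 6.0)              # assumed kernel

sigma = 1.0
dI = sigma * rng.standard_normal(T)        # white noise: <dI(t) dI(t')> = sigma^2 delta
dA = np.convolve(dI, G)[:T] * dt           # linear response fluctuation

# Cross-correlation <dA(t+s) dI(t)> estimated from one long run.
C_AI = np.array([np.mean(dA[s0:] * dI[:T - s0]) for s0 in lags])

# For white noise with variance sigma^2 per bin, C_AI(s) ~ sigma^2 * G(s) * dt.
err = np.max(np.abs(C_AI - sigma**2 * G * dt))
```

Note that the estimate recovers the full kernel from a single stimulus sequence, which is exactly why white-noise stimulation is so convenient experimentally.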

In order to relate the correlation function CAI to the reverse correlation Crev, we recall the definition of the population activity

$$A(t) = \frac{1}{N} \sum_{i=1}^{N} \sum_f \delta(t - t_i^{(f)}) \tag{7.71}$$

The correlation function (7.67) is therefore

$$C_{AI}(s) = \lim_{T\to\infty} \frac{1}{T} \left[ \frac{1}{N} \sum_{i=1}^{N} \sum_f \Delta I(t_i^{(f)} - s) \right] \tag{7.72}$$

Thus the value of the correlation function $C_{AI}$ at, e.g., $s = 5$ ms can be measured by observing the mean input 5 ms before each spike. The sum in the square brackets runs over all spikes of all neurons. With the neuronal firing rate $\nu = A_0$, the expected number of spikes of $N$ identical and independent neurons is $\nu\, T\, N$, where $T$ is the measurement time window. We now use the definition (7.64) of the reverse correlation function on the right-hand side of Eq. (7.72) and find

$$C_{AI}(s) = \nu\, C^{\text{rev}}(s) \tag{7.73}$$

Since we have focused on a population of independent neurons, the reverse correlation of the population is identical to that of a single neuron. The combination of Eqs. (7.70) and (7.73) yields

$$C^{\text{rev}}(s) = \frac{\sigma^2}{\nu}\, G(s) \tag{7.74}$$
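A numerical sketch of this relation, including the prefactor: a hypothetical Bernoulli spiker (not a model from the text) whose per-bin rate follows the linearized equation is driven by white noise, and the spike-triggered average of the input is compared against $(\sigma^2/\nu)\, G(s)$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Check C_rev(s) = (sigma^2 / nu) G(s) with an assumed kernel and rates.
T = 400000                     # 1 ms bins
lag = 40
s = np.arange(lag)
G = 0.004 * np.exp(-s / 5.0)   # assumed kernel (rate change per unit current)
A0 = 0.05                      # baseline: 0.05 spikes per bin = 50 Hz

sigma = 1.0
dI = sigma * rng.standard_normal(T)
rate = np.clip(A0 + np.convolve(dI, G)[:T], 0.0, 1.0)
spike_bins = np.flatnonzero(rng.random(T) < rate)
spike_bins = spike_bins[spike_bins >= lag]
nu = spike_bins.size / T       # measured mean rate per bin

# Spike-triggered average of the input, s bins before each spike.
C_rev = np.zeros(lag)
for tf in spike_bins:
    C_rev += dI[tf - lag + 1:tf + 1][::-1]
C_rev /= spike_bins.size

err = np.max(np.abs(C_rev - sigma**2 * G / nu))
```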

Equation (7.74) is an important result. For spiking neurons the transfer function $G(s)$ can be calculated from neuronal parameters, while the reverse correlation function $C^{\text{rev}}$ is easily measured in single-neuron experiments. This allows an interpretation of reverse correlation results in terms of neuronal parameters such as the membrane time constant, refractoriness, and the noise level.

Example: Reverse correlation function for SRM0 neurons

Figure 7.15: Reverse correlations. An SRM0 neuron is stimulated by a constant bias current plus a stochastic input current. Each time an output spike is triggered, the time course of the input current is recorded. A. Average input $\langle I(t - t^{(f)}) \rangle_{t^{(f)}}$ as a function of $s = t - t^{(f)}$, averaged over 1000 output spikes $f = 1,\ldots,1000$. B. The same, but averaged over 25000 spikes. The simulation result is compared with the time-reversed impulse response $G(-s)$ predicted by the theory (smooth line).

We consider an SRM0 neuron $u(t) = \eta_0(t - \hat{t}) + \int_0^{\infty} \kappa_0(s)\, I(t-s)\, \text{d}s$ with piecewise linear escape noise. The response kernels are exponentials with a time constant of $\tau = 4$ ms for the kernel $\kappa$ and $\tau_{\text{ref}} = 20$ ms for the refractory kernel $\eta$. The neuron is driven by a current $I(t) = I_0 + \Delta I(t)$. The bias current $I_0$ was adjusted so that the neuron fires at a mean rate of 50 Hz. The noise current was generated by the following procedure: in every time step of 0.1 ms we apply, with probability 0.5, an input pulse whose amplitude is $\pm 1$ with equal probability. To estimate the reverse correlation function, we build up a histogram of the average input $\langle I(t - t^{(f)}) \rangle_{t^{(f)}}$ preceding a spike $t^{(f)}$. We see from Fig. 7.15A that the main characteristics of the reverse correlation function are already visible after 1000 spikes. After averaging over 25000 spikes, the time course is much cleaner and reproduces to a high degree of accuracy the time course of the time-reversed impulse response $G(-s)$ predicted by the theory; cf. Fig. 7.15B. The oscillation with a period of about 20 ms reflects the intrinsic firing period of the neuron.

Example: Reverse correlation as the optimal stimulus (*)

In this example we want to show that the reverse correlation function Crev(s) can be interpreted as the optimal stimulus to trigger a spike. To do so, we assume that the amplitude of the stimulus is small and use the linearized population equation

$$A(t) = A_0 + \int_0^{\infty} G(s)\, \Delta I(t-s)\, \text{d}s \tag{7.75}$$

Suppose that we want to obtain a large response $\Delta A = A(0) - A_0$ at time $t = 0$. More precisely, we ask the following question: amongst all possible stimuli $\Delta I(t)$ for $t < 0$ with the same power

$$\int_{-\infty}^{0} \Delta I^2(t)\, \text{d}t = \text{const}_P \tag{7.76}$$

which one gives the maximal response $\Delta A$? We claim that the optimal stimulus has the same time course as the linear kernel $G$, apart from a reversal in time, i.e.,

$$\Delta I^{\text{opt}}(t) \propto G(-t) \tag{7.77}$$

To prove this assertion, we need to maximize $\Delta A = \int_0^{\infty} G(s)\, \Delta I(-s)\, \text{d}s$ under the constraint (7.76). We incorporate the constraint with a Lagrange multiplier $\lambda$ and arrive at the condition

$$0 = \frac{\partial}{\partial \Delta I(t)} \left\{ \int_0^{\infty} G(s)\, \Delta I(-s)\, \text{d}s + \lambda \left[ \text{const}_P - \int_0^{\infty} \Delta I^2(-s)\, \text{d}s \right] \right\} \tag{7.78}$$

which must hold at any arbitrary time $t < 0$. Taking the derivative of the term in braces with respect to $\Delta I(t)$ yields

$$G(t) = 2\lambda\, \Delta I^{\text{opt}}(-t) \tag{7.79}$$

which proves the assertion (7.77). The exact value of $\lambda$ could be determined from Eq. (7.76) but is not important for our argument. Finally, from Eq. (7.74) we have $G(s) \propto C^{\text{rev}}(s)$. Thus the result of a reverse correlation measurement with white-noise input can be interpreted as the optimal stimulus, as claimed in the text after Eq. (7.64).
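The variational argument can also be checked numerically: for a discretized, arbitrary assumed kernel $G$, no random stimulus of the same power produces a larger linear response than the time-reversed kernel (this is simply the Cauchy-Schwarz inequality in disguise).

```python
import numpy as np

rng = np.random.default_rng(4)

# Arbitrary assumed kernel, discretized on s = 0..49 (illustration only).
s = np.arange(50)
G = np.exp(-s / 8.0) * np.sin(s / 3.0 + 0.3)

def response(dI_rev):
    # dI_rev[s] stands for dI(-s); the linear response is sum_s G(s) dI(-s).
    return float(np.dot(G, dI_rev))

def unit_power(x):
    # Normalize to fixed power (the discrete analogue of the constraint).
    return x / np.linalg.norm(x)

# The claimed optimum: dI(-s) proportional to G(s), i.e. dI(t) ~ G(-t).
opt = unit_power(G)
r_opt = response(opt)          # equals the norm of G by Cauchy-Schwarz

# No random stimulus of the same power should beat it.
r_random = [response(unit_power(rng.standard_normal(s.size)))
            for _ in range(1000)]
```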

Gerstner and Kistler
Spiking Neuron Models. Single Neurons, Populations, Plasticity
Cambridge University Press, 2002
