The figure above shows several examples of voltage recordings from single neurons; those are the black, noisy traces. In each of them there are many "spikes," the large vertical deflections which, at this scale, simply look like line segments. These are Action Potentials. Up close, APs look like this:
They have a highly stereotyped form that reflects the underlying generative mechanism. The work of Edgar Adrian made it clear that Action Potentials are the universal signal I've pronounced them to be: it was he who demonstrated that APs carry both outward-bound signals to muscles and inward-bound signals from the sensory apparatus. In the time since, however, exactly how information is encoded in these Action Potentials has been the source of great debate.
Each neuron has a baseline level of activity, an average firing rate, or number of spikes per second (roughly 5-100 spikes/second is the physiologically relevant range). An elevated frequency of APs sent to a muscle means greater contraction; more action potentials arriving from the appropriate sensor signal warmer temperatures. Both of these are examples of Rate Codes: the relevant part of the code is how often the pulses arrive. However, this is not the whole story.
Many researchers have noticed that ensembles of constitutively excited neurons have a tendency to fire their spikes simultaneously. The degree of coincident firing by a pair of neurons can be quantified by a measure called covariance. This synchrony has been implicated in everything from working memory to attention; from perceptual grouping (our tendency to compartmentalize objects as individual wholes out of the continuity of sensory experiences) to consciousness itself.
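To make the measure concrete, here is a minimal sketch of how one might quantify coincident firing: bin two spike trains, count spikes per window, and take the (normalized) covariance of the counts. The trains, rates, and window size below are invented for illustration, not taken from any experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 10_000  # think of these as 1 ms bins, 10 s of "recording"

def spike_count_correlation(train1, train2, window=50):
    """Pearson correlation of spike counts in non-overlapping windows."""
    n = (len(train1) // window) * window
    c1 = train1[:n].reshape(-1, window).sum(axis=1)
    c2 = train2[:n].reshape(-1, window).sum(axis=1)
    return np.corrcoef(c1, c2)[0, 1]

# Two toy trains that inherit half their spikes from a shared source,
# plus a third with a matched firing rate but no shared drive.
shared = rng.random(n_bins) < 0.02
a = shared | (rng.random(n_bins) < 0.02)
b = shared | (rng.random(n_bins) < 0.02)
c = rng.random(n_bins) < 0.04

r_shared = spike_count_correlation(a, b)  # substantially positive
r_indep = spike_count_correlation(a, c)   # near zero
```

The shared-source pair fires together far more often than chance, and the count correlation picks that up while the independent pair hovers around zero.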
That synchronous firing happens at all is no great surprise, as there is a large degree of correlation in sensory data. If I throw a red ball across your field of vision, for instance, the individual neurons in your brain reporting red are highly likely to fire at the same time, because they are being stimulated at the same time; there is high temporal correlation in the many sensory inputs to the brain. The idea that synchrony might be at work in settings lacking such obvious sources of correlation is intriguing.
These two ideas, rate codes and synchrony, represent major branches on the tree of theoretical efforts to describe information encoding at a basic level in the brain. A recent paper from the lab of Alex Reyes at NYU's Center for Neural Science has given those interested in such endeavors something new to keep their gears turning (ref. 1).
The authors of this letter to Nature describe a series of experiments in which they measure the degree of synchrony in the outputs of a pair of neurons that are not connected to each other. The experimental variable they manipulate is input correlation. Each neuron receives an input signal made partly from a joint source and partly from some other, random signal. In this way, they can control the amount of commonality in the signal that each neuron receives. It is no surprise that as the correlation between the two inputs is increased, so too is the output correlation. What is surprising is that if one holds the correlation between the inputs constant and increases their overall amplitude, the correlation in the output again increases.
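A simple way to build such inputs, sketched below under my own assumptions (Gaussian noise rather than the synaptic-like inputs used in the actual experiments), is to mix a common noise source with a private one, with the mixing fraction setting the input correlation:

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_inputs(n, c, amplitude=1.0):
    """Two noise signals sharing a fraction c of their variance.

    c sets the target input correlation; amplitude scales both
    signals without changing that correlation.
    """
    common = np.sqrt(c) * rng.standard_normal(n)
    private1 = np.sqrt(1 - c) * rng.standard_normal(n)
    private2 = np.sqrt(1 - c) * rng.standard_normal(n)
    return amplitude * (common + private1), amplitude * (common + private2)

x, y = correlated_inputs(100_000, c=0.3)
r = np.corrcoef(x, y)[0, 1]  # empirically close to 0.3
```

The knob `c` plays the role of the experimenters' input correlation, and `amplitude` the overall drive.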
This is a strange state of affairs because, as I said, the signal is made of two parts: one is the bit that is common to both inputs, let's call it A; the other is unique to each neuron, let's call those B and C respectively. This means that neuron 1 receives a signal = A+B, while neuron 2 receives a signal = A+C. If we simply increase the amplitude of both signals by some factor D (S1 = D*(A+B), S2 = D*(A+C)), then both parts of the signal are scaled up: A, which tends to produce correlated outputs, and B and C, which tend to produce random, uncorrelated output. This means that there is something intrinsic to the transformation that neurons perform between their inputs and outputs which somehow enhances input correlations.
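The naive expectation can be checked directly: correlation is scale-invariant, so multiplying both inputs by D leaves the input correlation exactly where it was. A quick numerical sanity check (my own toy numbers, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# A is the input component common to both neurons;
# B and C are private to neuron 1 and neuron 2 respectively.
A, B, C = rng.standard_normal((3, 100_000))

def input_corr(D):
    """Correlation between S1 = D*(A+B) and S2 = D*(A+C)."""
    return np.corrcoef(D * (A + B), D * (A + C))[0, 1]

r_small, r_big = input_corr(0.5), input_corr(4.0)
# Both are about 0.5: scaling the inputs does not change their
# correlation, so a naive reading predicts no change in output
# correlation either -- which is exactly what makes the result odd.
```

Whatever boosts the output correlation must therefore live in the neuron's input-output transformation, not in the inputs themselves.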
The scaling of inputs mentioned above is tantamount to increasing the firing rate of the received signals. This means that the authors have found a link between Rate Coding and Synchrony, two concepts that were once treated as distinct.
The Letter progresses nicely: from modeling work done with artificial neurons, into a more biologically plausible setting combining single neurons with simulation, and finally to a more pared-down mathematical exposition which seems to capture the essence of this phenomenon, namely the "threshold-linear" transformation which neurons perform.
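To see how a threshold-linear transfer can do the work, here is a toy demonstration (my own caricature, not the authors' model): each "neuron" outputs nothing below a fixed threshold and responds linearly above it. With the input correlation pinned at 0.5, stronger drive pushes more of the signal above threshold and the output correlation climbs:

```python
import numpy as np

rng = np.random.default_rng(3)
A, B, C = rng.standard_normal((3, 200_000))
theta = 2.0  # firing threshold (arbitrary units)

def output_corr(D):
    """Correlation of the two outputs after a threshold-linear transfer."""
    y1 = np.maximum(D * (A + B) - theta, 0.0)
    y2 = np.maximum(D * (A + C) - theta, 0.0)
    return np.corrcoef(y1, y2)[0, 1]

# The input correlation is 0.5 at every D, yet stronger drive
# (a higher output rate) yields more correlated outputs.
r_weak, r_strong = output_corr(0.5), output_corr(2.0)
```

At weak drive only rare, mostly private fluctuations clear the threshold; at strong drive the shared component dominates what gets through, so the rectification itself couples output rate to output correlation.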
This is brave work; it makes clear how little we understand of the brain, and how far we have to go. Several theorists have studied synchrony in the setting of artificially constructed networks, in vitro and in vivo (refs. 2-5). None, however, has achieved the kind of generality of this result. Understanding the underlying behavioral rules of single neurons is paramount to building a complete theoretical understanding of the mystery that is the brain. These authors have set an example of how we might move forward if we ask the right question in the right way.
References
1. de la Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. (2007) Correlation between neural spike trains increases with firing rate. Nature 448(7155):802-6. doi:10.1038/nature06028
2. Ritz R, Sejnowski TJ. (1997) Synchronous oscillatory activity in sensory systems: new vistas on mechanisms. Curr Opin Neurobiol. 7(4):536-46.
3. Vogels TP, Abbott LF. (2005) Signal propagation and logic gating in networks of integrate-and-fire neurons. J Neurosci. 25(46):10786-95.
4. Ikegaya Y, Aaron G, Cossart R, Aronov D, Lampl I, Ferster D, Yuste R. (2004) Synfire chains and cortical songs: temporal modules of cortical activity. Science 304(5670):559-64.
5. Mehring C, Hehl U, Kubo M, Diesmann M, Aertsen A. (2003) Activity dynamics and propagation of synchronous spiking in locally connected random networks. Biol Cybern. 88(5):395-408.