Suppose we wanted to determine the bandwidth of a sensor having the response properties depicted above. A standard way to do so is the following: one measures the maximum response of the sensor (in this case, 10) and divides that value by two (thus, 5). One then finds the smallest frequency that produces that half-max response of 5 (about 8Hz) and the largest frequency that produces it (about 12Hz). The difference between the larger and smaller frequencies is termed the bandwidth. Measured this way, the bandwidth is also called the full width at half maximum (FWHM), for somewhat obvious reasons. We would then describe this sensor as having a central frequency of 10Hz, a Gaussian profile, and a bandwidth of 4Hz.
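If you'd like to see that procedure spelled out, here is a minimal sketch in Python. The 10Hz center, peak response of 10, and Gaussian shape are read off the figure above; the grid of probe frequencies is my own choice:

import numpy as np

def gaussian_tuning(freqs, center=10.0, peak=10.0, fwhm=4.0):
    # Gaussian tuning curve like the one in the figure above.
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))  # convert FWHM to standard deviation
    return peak * np.exp(-(freqs - center) ** 2 / (2 * sigma ** 2))

freqs = np.linspace(0, 20, 2001)          # frequencies to probe, in Hz
response = gaussian_tuning(freqs)

half_max = response.max() / 2             # half of the maximum response (5)
above = freqs[response >= half_max]       # frequencies at or above half-max
bandwidth = above.max() - above.min()     # full width at half maximum
print(bandwidth)                          # ~4.0 Hz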
Now, let me reiterate: you are able to distinguish auditory frequencies separated by less than the bandwidth (a measure of the range of frequencies to which a sensor will respond) of the cells that transduce sound from pressure waves into electrical impulses in your brain. This is odd for the following reason: suppose you were relying on the sensor above to tell you what frequencies (pitches) of sound you were hearing. If I played you a sound at 8Hz and another at 12Hz, the response of the sensor (as you can see from the dotted lines in the figure above) would be identical. That sensor is unable to distinguish between sounds at 8Hz and sounds at 12Hz, yet somehow your brain can. The way it achieves this feat is through population coding. What this means is that the brain almost always pools the responses of many sensory neurons in creating the conscious representations of sensory data that we experience.
A brief aside: you may be asking the question, why don't we just have sensors with different response properties? Linear, say, like the figure below:
That would work nicely since the responses at 8Hz and 12Hz (and any other pair of frequencies for that matter) are distinct. However, it's very difficult to build biological sensors that have this kind of response profile, and in the interest of steering clear of unwieldy posts, I'll leave it at that.
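To make the aside concrete, a hypothetical linear sensor would map every frequency to its own response level, so no two tones could be confused (a toy sketch, with an arbitrary slope):

def linear_tuning(freq_hz, slope=1.0):
    # A hypothetical linear sensor: response grows steadily with frequency,
    # so every frequency produces a distinct response.
    return slope * freq_hz

print(linear_tuning(8.0), linear_tuning(12.0))  # 8.0 vs 12.0: distinguishable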
Returning to population coding, I’ve said that the brain pools responses, but what does this mean exactly?
Let us now imagine that we examined the responses of two cells, with central frequencies of 9Hz and 11Hz, respectively. At 8Hz, cell 1’s response is ~8.5 and cell 2’s is ~2.5, while at 12Hz the situation is flipped, with cell 1’s response being 2.5 and cell 2’s being 8.5. This perfect reversal of fortunes is not an inherent feature of such a system; rather, it is an artifact of my simplified illustration. These two cells are able to achieve in concert what a sole actor cannot: tell the difference between two sounds separated by less than their individual bandwidths. All that is needed now is a further cell (in reality, another layer of cells) to read off this code. “Whenever cell 1 says ‘8.5’ and cell 2 says ‘2.5,’ I know that sound is being played at 8Hz,” this further cell says.
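Here is a minimal sketch of that two-cell code, reusing the Gaussian tuning curve from before. The 9Hz and 11Hz centers come from the example above; the nearest-pattern readout is just one simple, hypothetical way a downstream cell might decode the pair of responses:

import numpy as np

def gaussian_tuning(freq, centers, peak=10.0, fwhm=4.0):
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return peak * np.exp(-(freq - centers) ** 2 / (2 * sigma ** 2))

centers = np.array([9.0, 11.0])   # central frequencies of cell 1 and cell 2

def population_response(freq):
    # The pair of responses both cells give to a single pure tone.
    return gaussian_tuning(freq, centers)

def decode(responses, candidates=np.linspace(0, 20, 2001)):
    # Hypothetical readout: pick the frequency whose predicted population
    # pattern best matches the observed pair of responses.
    patterns = np.array([population_response(f) for f in candidates])
    errors = np.sum((patterns - responses) ** 2, axis=1)
    return candidates[np.argmin(errors)]

for tone in (8.0, 12.0):
    r = population_response(tone)
    print(tone, np.round(r, 1), decode(r))
# 8Hz gives ~(8.4, 2.1) and 12Hz gives ~(2.1, 8.4): each pattern is unambiguous.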
This simplified view is not so far off from what we think is happening in the transformation of signals from the sensory periphery (your ear) to central processing areas (primary auditory cortex).
And now on to the mysterious facet mentioned earlier. These intrepid explorers of frequency tuning in primary auditory cortex found cells there with very small bandwidths compared to sensory cells, implying that these cells were performing a computation similar to the one I’ve outlined above. But in order to test this hypothesis, they had to employ a different strategy than the one used for building frequency tuning curves.
In constructing the auditory response profile of a single cell, one generally uses single-frequency sounds: pure tones. However, the brain was built to represent the real world, a place where single-frequency sounds are essentially never encountered. Thus, defining a cell's response in this manner is necessarily incomplete. Though it is possible, there is no reason to expect that one can predict how a cell will respond to the simultaneous playback of 8Hz and 12Hz tones by simply summing the individual responses elicited by 8Hz and 12Hz alone. Crucially, the heuristic version of population coding that I presented does make exactly that prediction, so recording the responses of these single cells to complex sounds allowed these auditory neuroscientists to test their hypothesis concerning the underlying computation and the wiring of the brain.
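To make the logic of the test concrete, here is the linear prediction itself, computed for the toy Gaussian cell from the first figure (the researchers, of course, compared such predictions against recorded responses, not against a model cell):

import numpy as np

def gaussian_tuning(freq, center=10.0, peak=10.0, fwhm=4.0):
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return peak * np.exp(-(freq - center) ** 2 / (2 * sigma ** 2))

# Linear hypothesis: the response to a two-tone mixture equals the sum
# of the responses to each pure tone presented alone.
linear_prediction = gaussian_tuning(8.0) + gaussian_tuning(12.0)
print(linear_prediction)  # 5 + 5 = 10 for this cell
# A measured response to the mixture that departs from this sum means the
# cell combines frequencies nonlinearly.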
Before I conclude, I want to mention that this research is of a rare and important type: it is performed on humans. This is not some sort of needless invasion; it is unfortunately necessary to probe the electrical responses of the brains of epilepsy patients in order to locate and remove the small regions that cause their seizures.
It will probably come as no surprise that the responses the researchers found were quite distinct from those predicted by the linear model I've discussed. This is exciting because it means that the brain has yet again provided a puzzle for us to solve: we know what the brain must be doing, but not how it does it. Exploration of such quandaries can yield results that expand our general knowledge, find application in other fields, and give us insight into the very nature of how we function. Such is the beauty of neuroscience.
References
1. Bitterman, Y., Mukamel, R., Malach, R., Fried, I. & Nelken, I. (2008). Ultra-fine frequency tuning revealed in single neurons of human auditory cortex. Nature 451, 197–201. doi:10.1038/nature06476