Thursday, December 13, 2007

On Making Faces

How do babies learn to make faces? With arm or leg movements, it seems plausible that, as William James suggested, one might gain insight simply by associating observed appendage position with concurrent muscle activation patterns (ref. 1). In contrast to an attempted stirring of limbs, however, the infant cannot see what results from the activation of his facial muscles. Thus, the learning mechanism cannot rely on sensory confirmation that the intended action was successful. That new humans very rapidly learn to express their emotional state through a smile, frown or furrowing of brow hints that there is some implicit path to the acquisition of this skill. It has been suggested by some that the Mirror Neuron (MN) system constitutes this road, or at least a map of it (ref. 2).

In the adult, MNs are cells that respond when an animal performs some act - picking up a piece of fruit, say - and when it observes another individual doing the same thing, even if the observed actor is a member of another species or, stranger still, a robot (ref. 3). Beyond this, their activation potentiates the pathways that would be involved in the execution of such a movement by the observer, producing measurable sub-threshold responses in the muscles involved. The suggestion that this system is involved in motor learning and imitation from birth amounts to the assumption that MNs are prenatally wired to function as they do in the mature brain. This is a very attractive idea, as it removes the need for some external form of reinforcement - like visual confirmation of the completed movement - to inform the motor-learning process. Instead, the responsibility for being both carrot and stick is shifted internally, to the MNs. The proposition is that genetically defined circuitry imbues the MNs with a "knowledge" of the pattern of muscle activity associated with both observed and executed behavior.

This, unsurprisingly, presents a further question: if the MN system is present from birth and possesses such information as described above, why do infants need to learn how to move at all? This is where we must tread a bit into the realm of informed speculation. First, the MN system cannot know how to execute every possible movement: it certainly cannot know at birth the set of motor commands associated with performing some complex gymnastic move, say a double backflip. If the MN system is a product of evolution, then it is likely that there is a continuum of innate interpretability, from simple acts like smiles that are well known to the MN system to rarer or more contemporary behaviors like figure skating or fixing a bicycle. Thus, using the MN system as a template, of sorts, can only be effective for behaviors on the oft-encountered end of the spectrum. Second, since babies sadly do not leap from the womb as masters of muscular control, the wiring of the MN system must itself develop at a pace commensurate with the time-course of an individual's acquisition of behaviors.

How is it, then, that this internal electro-cultivation proceeds in lock-step with the infant's newfound agility? It has been well documented that the developing brain overproduces neurons and synaptic connections and then eliminates a large fraction of them during the first months of life. This pruning is a synaptic refinement process, also termed neural darwinism (ref. 5). It is quite beneficial to the newborn animal, since his neural connectivity is initially far too profuse, too spatially noisy, and must be cleaned up. This happens in all areas of cerebral cortex. In the visual system, for example, molecular cues crudely guide the axons of nearby retinal ganglion cells to adjacent targets in the thalamus while the animal is still in the womb, but it is the activity arising from visual stimulation that pares the connections down to the state we see in the adult: cells in the retina come to be connected to cells in the thalamus with exquisite spatial precision because of visual experience.

It was Hebb who pointed out that cells that "fire together wire together" (ref. 4). This means that if two retinal cells fire at the same time, they will tend to be connected to the same post-synaptic target. That is not to say that any two cells that fire at the same time anywhere in the brain will inevitably be connected to each other, but rather that, in deciding which of the molecularly defined crude connections to keep, a post-synaptic cell will retain those which tend to fire at the same time in response to stimulation. The stimuli that cause the retinal cells to fire are not simply sets of independently fluctuating pixels; rather, they are full of spatial correlations. If a single vertical line passed across your field of vision, a column of neighboring cells in the retina would be stimulated together as the line moved by. It is almost never the case that two abutting photoreceptors see completely uncorrelated (in time) patterns of illumination. In this way, the spatial relationships in the image translate into spatial relationships in the connections in the brain.
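To make the refinement story concrete, here is a minimal sketch of the Hebbian motif in Python. It is purely illustrative: the bump-shaped stimuli, learning rate, and weight normalization are my own assumptions, not a model of any particular experiment. A crude, molecularly seeded weight profile onto a single thalamic cell gets sharpened by spatially correlated input, because patches of cells that co-fire with the cell's existing drive are preferentially retained.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40                 # retinal cells along one axis
eta = 0.05             # learning rate

# Molecular cues supply only a crude topographic bias onto one thalamic
# cell: a broad, noisy bump of initial synaptic weights.
x = np.arange(n)
w = np.exp(-0.5 * ((x - n // 2) / 12.0) ** 2) + 0.2 * rng.random(n)
w /= w.sum()

for _ in range(20_000):
    # A passing edge drives a small patch of neighboring cells at once,
    # so the input is spatially correlated, not independent pixels.
    center = rng.integers(n)
    pre = np.exp(-0.5 * ((x - center) / 2.0) ** 2)
    post = w @ pre                 # response of the thalamic target cell
    w += eta * post * pre          # Hebb: fire together, wire together
    w /= w.sum()                   # hold total weight fixed (crude homeostasis)

print(f"peak at cell {w.argmax()}, half-height width ~{(w > w.max() / 2).sum()} cells")
```

The same rule applied to independently flickering pixels would have nothing to latch onto; it is the spatial correlation in the input that the refinement exploits.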

The goings-on I've outlined above might generally be termed activity-dependent synaptic refinement (ADSR). What I'm hypothesizing is that, as with vision, some form of ADSR is at work in those cortical areas involved in learning how to move: that the very act of generating motor output leads to more stereotyped action through application of the Hebbian "fire together wire together" motif. The commonality between vision and movement, and indeed the unifying principle behind ADSR, is that both systems exploit the presence of underlying statistical structure. While the MN system is biasing motor output towards certain configurations, the motor cortex is learning about the possible relationships between muscle tensions, lengths and contractile velocities. It is thought that the brain might use such a mechanism universally as an attempt at maximizing efficiency. For example: if you always listen to symphonic classical music, you might set your stereo to boost bass & treble, but if you're more the piano concerto type, you'd want a touch more mid-range; then again, if your tastes are more varied, it might make sense to emphasize all three frequency bands equally so as not to aurally marginalize any particular genre. Another instance, more relevant to motor output: when you drive in the city, you rarely get out of 2nd gear, but on the highway, you're almost always in the upper gears. In the same sense, the brain uses the Hebbian rule to tune its (sensory) inputs and (motor) outputs to the statistics of the stimuli impinging on it and the motor programs it generates.

You may have noticed in all this that I've skipped over one important point: how (genetic wiring notwithstanding) do mirror neurons extract the information about a movement being performed by another agent? It is not at all understood how this occurs in the adult, so how it could be happening in babies is even more mysterious, especially in light of the messiness of infant brains that I've spoken of. It simply must be the case that visual stimuli are translated into data concerning the movement of bodies. It is possible that specialized structures for recognizing arms and hands, faces and feet, become sophisticated very early on, but nothing like this has been observed to date. The needed course of research is clear, but developing experiments to elucidate what might be at work is not. We can only wait and watch from the wings while the scientific players act, and perhaps deliver sotto voce direction from time to time.

References

1. James, W. The Principles of Psychology Vol. 1. Henry Holt & Company (1890)
2. Lepage JF, Théoret H. (2007) The mirror neuron system: grasping others' actions from birth? Dev Sci. 10(5):513-23.
3. Rizzolatti, G., & Craighero, L., (2004) The Mirror Neuron System. Annu. Rev. Neurosci. 27, 169-192.
4. Hebb, D.O. The Organization of Behavior. John Wiley & Sons (1949)
5. Edelman, G.M. Neural Darwinism. Oxford Paperbacks (1990)

Thursday, November 22, 2007

On Quorum Sensing and Antibiotics

In your body, cells belonging to other organisms are more numerous than your own (ref. 1). Most of these are not parasitic; indeed, we benefit significantly from some of our inhabitants. This is one of the reasons that traditional antibiotics are potentially harmful. Their action is indiscriminate, targeting both harmful and helpful bacteria. The wholesale killing off of our microbial boarders creates many vacancies, providing an opportunity for more virulent creatures to invade. As if this weren't bad enough, left behind after a course of antibiotics are any bacteria that happen to be resistant to the drugs that put down their brethren. Thus, prescribing such medications also amounts to a selective pressure, an evolutionary nudge towards ever hardier infectors.

Once in your body, harmful bacteria must wait until their colony reaches a certain size for their attacks to be effective. This means that they must possess the ability to detect how many individuals of their species are present. Indeed, this behavior has been the subject of extensive research, and is referred to as Quorum Sensing (QS). The way this works is actually rather simple: each bacterium secretes a small molecule called an autoinducer (AI) at an approximately constant rate (in time and across individuals). Once the concentration of AI is high enough, the colony "knows" its population has risen to a level where the release of virulence factors stands a good chance of successfully inducing pathology.
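The logic can be captured in a toy simulation; every parameter below is invented for illustration, not taken from any real organism. AI accumulates in proportion to colony size and decays away, so its concentration tracks the population, and crossing a fixed threshold signals that a quorum has been reached.

```python
# Toy quorum-sensing model; all parameters are invented for illustration.
k = 1.0            # AI secretion rate per bacterium
delta = 0.5        # AI degradation / diffusion rate
threshold = 100.0  # AI concentration that triggers virulence
growth = 0.2       # colony growth rate
dt = 0.01          # time step

N, A, t = 1.0, 0.0, 0.0            # population, AI concentration, time
while A < threshold:
    N += growth * N * dt           # exponential colony growth
    A += (k * N - delta * A) * dt  # AI concentration tracks the population
    t += dt

print(f"quorum reached at t = {t:.1f}, population ~ {N:.0f}")
```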



(figure from ref. 2)

This example of cell-to-cell communication, in addition to providing a unique system in which to study such information transfer, presents an opportunity to attack unwanted microorganisms in a more species-selective way, avoiding both of the issues with antibiotics mentioned above.

Just such a feat was accomplished several years ago in the laboratory of Hiroaki Suga at SUNY Buffalo. This team of researchers was able to successfully reduce the virulence of Pseudomonas aeruginosa, the main infectious killer of those with weakened immune systems, such as cancer, AIDS, and cystic fibrosis patients (ref. 3). This was a great triumph, but another entry in this category has come along which further bolsters the case for attacking the bacterial telegraph system.

A group led by Kim Janda at Scripps was able to have a similar impact on Staphylococcus aureus. This bacterium is the main cause of infections in hospitals, and thus represents one of the strains most likely to evolve resistance to antibiotics (ref. 4). Beyond this, and in contrast to the earlier work, these authors were able to use antibodies to target the AIs, making the approach potentially generalizable and inexpensive.

It is impossible to overstate the beneficial effects that penicillin and its derivatives have had in western medicine. As we move forward, however, we must find ways to keep pace with our minuscule counterparts. These two examples of top-notch research are exactly the kind of thinking that we need.

References

1. French, K. Randall, D & Burggren, W. (2001) Eckert Animal Physiology. W.H. Freeman

2. Waters, C.M. & Bassler, B.L. (2005) Quorum Sensing: Cell-to-Cell Communication in Bacteria. Annu. Rev. Cell Dev. Biol. 21:319-346

3. Smith, K.M., Bu, Y. & Suga, H. (2003) Induction and Inhibition of Pseudomonas aeruginosa Quorum Sensing by Synthetic Autoinducer Analogs. Chemistry & Biology 10:81-89

4. Park, J., Jagasia, R., Kaufmann, G.F., Mathison, J.C., Ruiz, D.I., Moss, J.A., Meijler, M.M., Ulevitch, R.J. & Janda, K.D. (2007) Infection Control by Antibody Disruption of Bacterial Quorum Sensing Signaling. Chemistry & Biology 14:1119-1127

Tuesday, November 13, 2007

On Working Out The Details



Those interested in figuring out how the brain works have collected data using techniques which probe progressively smaller structures: from observing the outward behaviors of the whole animal, to the voltages generated by areas of the brain using EEG, down to the activity of single cells using electrophysiology. The last is the term for any measurement of the electrical responses (more often voltage, but current as well) of individual neurons, and it is the sine qua non for studying nervous system function in that it reflects the millisecond-by-millisecond goings-on of the brain's most basic units.

Even at this level, however, there remains an essential ambiguity: while the spiking of the neurons in your eye definitely represents sensory information, and the voltage in motor neurons connected to the muscles in your arm reflects motor output, the electrical bustle of units in so-called "association areas" of the cerebral cortex is much more difficult to categorize in such unequivocal fashion. These regions respond, to varying degrees, both to stimulation in many sensory modalities and during motor output.

The posterior parietal cortex (PPC) is one example which has been extensively examined. If a monkey is given a simple task which connects sensation to movement - say, reaching for or directing one's gaze towards a target - the neurons in this area will light up. But it is unclear whether their vigor is a response to the stimulus, or is responsible for the invocation of the movement (ref. 1).

At this point, it may be prudent to ask: couldn't they be doing both? Yes. However, none of the single-cell electrophysiology experiments aimed at the PPC to date have been specific enough to discriminate between the possibilities that it is sensory, motor, or truly a combination of the two.

Enter Richard Andersen, a Caltech researcher who has been working in the field of motor planning for quite some time. His view stands in opposition to that of Columbia's Mickey Goldberg, who holds that the PPC is about attention, not intention.

Dr. Andersen's recent work, published in Neuron and meant to be the last word in their ongoing debate, uses a new twist on old experiments: allowing the monkey, from whose brain the data are being gathered, to freely decide what type of movement he wants to execute (ref. 2). The stimuli are always the same, either a red or green ball, and the monkey can choose whether to reach out and touch it, or simply move his gaze towards it (in the reaching case he must keep his gaze elsewhere). The intriguing finding is that there are cells which are selective for the type of movement but not the type of stimulus. It is in this sense that Dr. Andersen thinks he has demonstrated the motoric nature of the PPC.

The beauty of Dr. Andersen's experiment is that this technique has been around for some 40 years now, and yet we are still able to learn much by savvy application of it. Human ingenuity in experimental design has always been the primary driver of scientific discovery, for what good are tools if one doesn't know how to use them? Don't get me wrong, technological advancements are essential to scientific progress, but it is simply astounding how simple tweaks on old ideas can open up new avenues of research.

References

1. Colby, CL & Goldberg, ME (1999) Space and Attention in Parietal Cortex. Annu. Rev. Neurosci. 22:319–349

2. Cui, H. & Andersen, R.A. (2007) Posterior Parietal Cortex Encodes Autonomously Selected Motor Plans. Neuron 56:552-559

Monday, November 12, 2007

On Believing Yourself

I have always been quite troubled by the fact that I can remember things that never happened. If I am confident that a childhood friend's name was Paul when it was actually Roger, how am I to be certain that I correctly remember how to perform the act of addition, or my distaste for the texture of most mushrooms?

Perhaps even more troubling is the fact that studies devoted to exploring the interplay between confidence and memory have found that, in general, the memories we're most confident in are the most likely to be authentic (ref. 1; see figure below).


The paradox is fairly clear: how can we be confident in a false memory, if confidence correlates with accuracy?

The authors of a recent study suggest, and go a ways towards demonstrating, that two distinct mechanisms are at work: one when we express confidence in veridical memories, and another when we express confidence in false recollections (ref. 2).

Specifically, these authors use fMRI and a categorized word recall task to demonstrate that one set of brain areas is active when we're sure of veracious retrospection and another when we're confident in specious anamnesis. Based on the anatomy of the active sites revealed by the scan, the researchers speculate that the latter reflects the mere familiarity of certain events (see figure, below).


As a final note, the two areas identified in this study are quite far apart in brain terms, once again pointing to the notion that memory is physiologically and anatomically diffuse. So when you can't remember your first pet's name, don't get too worried; your brain is a big place to search.


References

1. Lindsay DS, Read JD, Sharma K (1998) Accuracy and confidence in person identification: the relationship is strong when witnessing conditions vary widely. Psychol Sci 9:215–218.
2. Kim H, & Cabeza R (2007) Trusting Our Memories: Dissociating the Neural Correlates of Confidence in Veridical versus Illusory Memories. J. Neurosci 27(45):12190-12197

Thursday, November 8, 2007

Ode to Sentences

Why are we so averse to long sentences? Is there some inherent property rendering them anathema to our natural mode of communication? There is certainly no grammatical rule excluding their use. In fact, some of the most gorgeous sentences in all of English prose are those which might be labeled run-on! Consider the following lead sentence from William Faulkner's Absalom, Absalom!:

"From a little after two oclock until almost sundown of the long still hot weary dead September afternoon they sat in what Miss Coldfield still called the office because her father had called it that - a dim hot airless room with the blinds all closed and fastened for forty-three summers because when she was a girl someone had believed that light and moving air carried heat and that dark was always cooler, and which (as the sun shone fuller and fuller on that side of the house) became latticed with yellow slashes full of dust motes which Quentin thought of as being flecks of the dead old dried paint itself blown inward from the scaling blinds as wind might have blown them."

Or the following from James Joyce's Finnegans Wake:

(To say nothing of course of the ends of either Wake or Ulysses, which descend into language completely lacking in punctuation.)

"His husband, poor old A'Hara (Okaroff?) crestfallen by things and down at heels at the time, they squeak, accepted the (Zassnoch!) ardree's shilling at the conclusion of the Crimean war and, having flown his wild geese, alohned in crowds to warnder on like Shuley Luney, enlisted in Tyrone's horse, the Irish whites, and soldiered a bit with Wolsey under the assumed name of Blanco Fusilovna Bucklovitch (spurious) after which the cawer and marble halls of Pump Court Columbarium, the home of the old seakings, looked upon each other and queth their haven ever more for it transpires that on the other side of the water it came about that on the field of Vasileff's Cornix inauspiciously with his unit he perished, suying, this papal leafless to old chap give, rawl chawclates for mouther-in-louth."

The former is perhaps a bit more intelligible at first blush than the latter, but both prove a point. Long sentences allow for a different kind of expressive hue.

Beyond their aesthetic appeal, the existence of (semi) meaningful long sentences serves another purpose: they speak to one goal of Noam Chomsky's theory of generative grammar.

In brief, linguistics prior to Chomsky was a taxonomic science, sure in the descriptive quality of its program to catalog the "corpus" of a language: all the phonemes (sounds) and morphemes (meaningful combinations of sounds). Amongst several issues Chomsky raised with this system was the fact that there are an infinite number of possible sentences, making any attempt to index them an impossible task and generally pointing to the inadequacy of such a strategy. Beyond this, however, Chomsky was interested in exposing some sort of mentally internalized grammar, some system at work in each of us when we compose sentences.

The standard example cited to demonstrate that there are unending possibilities for sentence construction is some iterative procedure such as: "The man whose house had a roof that sagged at the point where the ladder had fallen when the repairman lost his balance while looking at the woman who was passing because..." In my opinion, these examples don't really go far towards characterizing such a limitless system for building sentences, because we use nothing like them in speaking or writing. Though we are clearly capable of deducing the meaning of the instance cited above, the fact that we don't employ such constructions also speaks to the nature of whatever subconscious lingual machinery we've got.
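For the curious, recursion is all it takes to get this infinitude. Below is a toy generative grammar in Python; the rule fragment is my own invention for illustration, not a serious grammar of English. Because the relative-clause rule can re-invoke the noun-phrase and verb-phrase rules, the set of producible sentences is unbounded.

```python
import random

# A toy recursive grammar (an illustrative fragment of my own devising).
# NP can contain a RelClause, which re-invokes VP and NP, so sentences
# can nest without bound.
grammar = {
    "S": [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "RelClause"]],
    "RelClause": [["that", "VP"], ["whose", "N", "VP"]],
    "VP": [["fell"], ["saw", "NP"]],
    "N": [["man"], ["house"], ["ladder"], ["repairman"]],
}

def generate(symbol="S", depth=0, max_depth=6):
    if symbol not in grammar:
        return [symbol]                      # a terminal word
    rules = grammar[symbol]
    # Past max_depth, fall back to each symbol's first (non-recursive)
    # rule so that generation always terminates.
    rule = random.choice(rules) if depth < max_depth else rules[0]
    return [word for part in rule for word in generate(part, depth + 1, max_depth)]

random.seed(2)
for _ in range(3):
    print(" ".join(generate()))
```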

I suppose I've not settled the question of sentential length, but what I have tried to do is point to the fact that sentence length is somehow reflective of the possible modes of expression that our personal grammars allow us to achieve. Perhaps we will find that as we evolve, the need for ever more subtle communications will lead to long, dense sentences like those above. Another possibility is that such objects will remain in their traditional home of stylized prose. In any case, none of us should be afraid of the dreaded run-on.

Thursday, November 1, 2007

On Emotion and Memory

(figure from ref. 1)



Why is it that experiences imbued with emotion crystallize into easily recollected memories? Our memory is quite limited, so we need a system for deciding what to remember and what to forget. Emotions may thus act as a filter, marking certain experiences as being of particular importance. In this way, we have templates of states we felt were positive or negative, examples of the consequences of our behaviors, with significantly happy or sad outcomes featuring as the most poignant reminders.

None of this gestalt-psychological explanation is informative as to the neurophysiological mechanisms underlying this phenomenon. However, some recent research does address what mechanisms may be at work on a molecular level. Joseph LeDoux and Roberto Malinow have been working on memory for quite a while, and they are the two most senior authors on a paper published in Cell concerning AMPA receptors, emotion, and memory (ref. 1). AMPA receptors are one of the major glutamatergic receptors in the brain. Glutamate is the neurotransmitter they recognize, and it is the major excitatory neurotransmitter in the brain. So if one neuron wants to send a signal to turn on another, it will almost invariably release glutamate at its axon terminal, and that glutamate will likely be recognized by an AMPA receptor on the post-synaptic cell (the target of the excitation). The major finding of this paper is that norepinephrine (also known as noradrenaline) facilitates the incorporation of AMPA receptors into the membranes of cells during periods of high activity.

It is commonly known that noradrenaline is released during times of emotional distress and happiness; these researchers have found that one specific effect of the noradrenaline is to increase the number of receptors being incorporated into a synapse, again during periods of high activity. Let's imagine a scenario where this might apply. An animal is being chased by a predator. His motor planning and execution areas are blasting away action potentials; they're highly activated. He makes a decision about some route to take during his escape, activating a specific subset of pathways. It is these connections that will be strengthened by the application of noradrenaline. Because more receptors are being integrated into the synapses in these circuits, they will be more likely to be activated the next time he is in the same situation. In this sense, he has formed a memory of the experience which is modulated by the amount of noradrenaline, and by extension the intensity of the emotion experienced.
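As a cartoon of that logic (emphatically not the paper's model), one can write the plasticity rule as a Hebbian weight update whose size is gated by a noradrenaline level; every number below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def train(ne_level, n_trials=200, eta=0.01):
    """Toy 'escape route' synapse: a Hebbian update whose size is gated by
    the noradrenaline (NE) level, standing in for NE-facilitated AMPA
    receptor insertion during periods of high activity."""
    w = 0.1
    for _ in range(n_trials):
        pre = 1.0                                          # pathway strongly driven
        post = max(w + 0.1 * rng.standard_normal(), 0.0)   # noisy co-activity
        w += eta * ne_level * pre * post                   # NE scales the strengthening
    return w

for ne in (0.0, 0.5, 2.0):    # calm, mild arousal, being chased
    print(f"NE level {ne:.1f} -> final synaptic weight {train(ne):.2f}")
```

The same amount of co-activity leaves a much deeper trace when the NE gain is high, which is the qualitative point of the experiment.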

This is essentially what these researchers observed. While it is impossible to directly modulate the emotional state of the animal, they can apply norepinephrine during a learning task. What they found was that animals that received larger doses of applied norepinephrine were more likely to remember the task. The figure at the top of this piece illustrates the finding. The authors compared genetically altered (GA) mice - which lack the effects of noradrenaline on AMPA receptor trafficking - to "normal" or wild-type (WT) mice. The graph on the left displays the responses of the WT mice, with the GA mice on the right. The key is that two data points are significantly different (marked with an asterisk) on the left, but not on the right.

While this work doesn't do much to help understand the biopsychological basis of Proust, it does illuminate one more minuscule thread in the web of conscious experience.


References

1. Hailan Hu, Eleonore Real, Kogo Takamiya, Myoung-Goo Kang, Joseph LeDoux, Richard L. Huganir & Roberto Malinow (2007) Emotion Enhances Learning via Norepinephrine Regulation of AMPA-Receptor Trafficking. Cell 131(1):160-173 [doi:10.1016/j.cell.2007.09.017]

Monday, October 15, 2007

Towards What Are We Evolving?

A group of researchers has found that our choice of diet has had an effect on our genomes (ref. 2). Specifically, they've found that individuals from groups with traditionally high-starch diets have more copies of the gene for salivary amylase, the enzyme which breaks down starches in our mouths and stomachs, than those with low-starch diets. This is fascinating news because it is the beginning of an answer to the question: towards what are we evolving? What are the selective pressures acting on our genes to produce further iterations of our species' existence? In general, this is an extremely difficult question to answer, if not an impossible one.

There are some examples of human intervention into evolution; our efforts at breeding plants and animals have yielded several well known successes (including the development of a transparent frog, see below), and we also use evolution to design molecules with specific properties (ref. 1). These, however, are examples where many generations can be produced rapidly and only those individuals with desired traits are allowed to produce offspring.



Transparent Frog (HO / REUTERS)


Fitness is the term that is generally used to quantify how likely an individual is to be reproductively successful, and it is thus a most relevant concept in determining the direction in which evolution is nudging us. When the environmental variables and set of possible behaviors are simple, it is possible to make predictions concerning fitness. For instance, a salt marsh is an environment in which organisms are subject to varying levels of salinity. It is a fair bet that after continued but not overwhelming exposure to increased levels of salt, an initially non-salt-tolerant plant will become salt tolerant. This is the case because the individuals with some salt tolerance are presumably more fit than the others, and their offspring will retain that advantage.

When one tries to analyze what makes a human being fit, however, there are several obstacles. First, the set of circumstances we're adapting to is quite complex, making it no minor task to pick out which elements might be most important. Second, we define what constitutes fitness through our influence on social structure. Third, even if we're somehow genetically most-fit, we can choose to thwart evolution by not having any children. One might think that the rich constitute a good candidate for the title of most-fit, but they certainly do not reproduce the most. If anything, the group with the highest reproductive success is the poor.

Even if we do not believe that Darwinian evolution is the dominant force in defining how we will change over the coming epochs, we must admit that it plays some role. In attempting to understand the future of our species, and how to act in our own best interests, we must acknowledge the forces at work in shaping our selves. Darwinism is clearly important for analyzing broad trends, such as in the research presented above. However, the ever-faster-acting influence of cultural evolution, which currently touches every aspect of our lives, will undoubtedly be of paramount importance to understanding humanity as well. Our challenge is now to understand how the effects of cultural evolution will play out, feeding back on our biology.

References

1. Farinas ET, Bulter T, Arnold FH. (2001) Directed enzyme evolution. Curr Opin Biotechnol. 12(6):545-51
2. Perry GH, Dominy NJ, Claw KG, Lee AS, Fiegler H, Redon R, Werner J, Villanea FA, Mountain JL, Misra R, Carter NP, Lee C, Stone AC. (2007) Diet and the evolution of human amylase gene copy number variation. Nat Genet. 39(10):1256-60.

Monday, October 1, 2007

On Bacteria & Wiring



All known living things harvest high-energy electrons from carbon compounds for power. Those creatures which reside in oxygen-rich environments pass these waste electrons to oxygen, while those that live near geothermal vents use sulfur as their dustbin.

(Shewanella oneidensis from ref. 1)


The bacteria pictured above, however, have access to neither. They live in minimal-sulfur soil at a depth where oxygen is unavailable. Instead, they pass their spent electrons to metals, readily available conductors in the earth. The idea that a metallic element may be substituted for something as "fundamental" to life as oxygen is food for the imagination. Beyond this, however, lies an even more contentious concept. As you can see from the image, these bacteria produce nanoscale structures resembling wires. Furthermore, it has been demonstrated that these filaments conduct electricity (ref. 1). The researchers who demonstrated this fact believe that the bacteria are using their nano-wires to transport electrons over long distances to the surface oxygen, creating a current source in the dirt.

This claim is by no means proven, but it is intriguing that the suggestion hasn't even been considered until now. Also, the implications of an electrically connected community of bacteria are significant. Microorganisms have presented many examples of behaviors normally thought to be reserved for higher animals, and if what these authors propose turns out to be true, studying the dynamic interactions of these communities has the potential to teach us about systems of electrically coupled cells like our brains. Taking speculation to the extreme, one might ask whether these creatures could constitute a biological battery, yielding electricity for our own use.

References

1. Y. A. Gorby et al. (2006) Electrically conductive bacterial nanowires produced by Shewanella oneidensis strain MR-1 and other microorganisms Proc. Natl Acad. Sci. USA Vol. 103, pp. 11358–11363

2. Jestin JL, Kaminski PA. (2004) Directed enzyme evolution and selections for catalysis based on product formation. J Biotechnol. 113(1-3):85-103

Friday, September 28, 2007

On Corn & Carbon



In the fascinating, free first chapter of Michael Pollan's new book The Omnivore's Dilemma, he makes the point that American industrial agriculture is largely based on corn. Most of our livestock species are fed on corn, and a laundry list of comestibles contain additives derived from it. He goes on to explain that it is possible to quantify how much corn one consumes, directly or indirectly, thanks to the type of photosynthesis that corn engages in.

All forms of photosynthesis involve the fixation (harvesting) of carbon, which is made into simple sugars subsequently consumed for energy. Photosynthetic types can be categorized, according to the specifics of the carbon fixation process, as C3, C4, or CAM. Corn is a C4 grass. This means that it has developed an adaptation to deal with dry, arid climes. Specifically, carbon is first fixed by the enzyme PEP carboxylase into a four-carbon compound, which in effect buffers a store of carbon. Plants absorb carbon in the form of CO2 through holes in their leaves called stomata. However, water can also be lost through these pores, so they must sometimes be closed to prevent dehydration. During these periods, the lack of available CO2 can become a problem, because the costly reverse reaction of photosynthesis, photorespiration, occurs. Not so for corn and its C4 cronies; their buffer of carbon prevents wasteful photorespiring. In addition to this benefit, it turns out that there is another significant consequence of such a strategy: relatively indiscriminate absorption of carbon isotopes.

Most (98.9%) of carbon is C-12: it has 6 protons and 6 neutrons. Almost the entirety of the remaining 1.1% is C-13, with 6 protons and 7 neutrons, a mass difference of 1.6749 × 10^-27 kg. Most plants preferentially absorb C-12, but not C4 plants. Given such a minute difference between these atoms, it is quite amazing that plants have any ability to discriminate between them. It turns out that differences in the rates of (1) diffusion into the stomata, (2) absorption of CO2 by water in the plant, and (3) diffusion of carbon out of the plant are what make the difference (ref. 1). Beyond these passive properties, PEP carboxylase has somehow managed to develop a bias in which type of CO2 it fixes (preferring C-13). It is damned astounding that an enzyme can have such specificity, discriminating such a small mass difference (there is no charge difference, and so probably not much of an electron-shell-structure difference). The end result of all this is that the ratio between the two isotopes in, for example, your flesh is informative about what type of plant supplies most of your carbon.
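Such measurements are conventionally reported in the δ13C notation: the sample's 13C/12C ratio relative to a reference standard, expressed in parts per thousand. Here is a rough illustration in Python; the per-mil values are textbook ballparks for C3 and C4 plants, and the linear two-source mixing is a simplification, so treat the numbers as indicative only.

```python
# delta-13C: the sample's 13C/12C ratio relative to the PDB reference
# standard, in parts per thousand (per mil). Plant values below are
# textbook ballparks, not analytical-grade constants.
R_PDB = 0.0112372                      # 13C/12C of the PDB standard

def delta13C(r_sample):
    return (r_sample / R_PDB - 1.0) * 1000.0

def tissue_delta(delta_c3, delta_c4, frac_c4):
    # Simple two-source linear mixing of dietary carbon.
    return frac_c4 * delta_c4 + (1.0 - frac_c4) * delta_c3

print(f"measured ratio 0.0110 -> {delta13C(0.0110):.1f} per mil")
# Someone drawing 60% of their carbon (directly or via corn-fed livestock)
# from C4 sources (~ -13 per mil) vs. C3 sources (~ -28 per mil):
print(f"60% C4 diet -> tissue delta of {tissue_delta(-28.0, -13.0, 0.6):.1f} per mil")
```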

It is unclear if monitoring this value will ever be relevant to the individual; the potentially deleterious effects of eating large amounts of corn on health and well-being are still up for debate. It is immediately interesting, however, from a socio-cultural standpoint. Mr. Pollan makes this point quite well in an article he wrote for The New York Times (ref. 2). He points out that the consequences of industrial monoculture farming are probably not sustainable in the long term. I firmly believe that questioning the long-term feasibility of our lifestyles is important for continued human survival and happiness. The ability to measure corn consumption on a large scale will allow us to monitor and understand this potential problem in a very direct way.

References

1. Farquhar, G.D., Ehleringer, J.R. & Hubick, K.T. (1989) Carbon Isotope Discrimination and Photosynthesis. Annual Review of Plant Physiology and Plant Molecular Biology 40:503-537
2. Pollan, M. (2007) Unhappy Meals, The New York Times (http://www.nytimes.com/2007/01/28/magazine/28nutritionism.t.html)

Monday, September 24, 2007

On the Difficulty of Understanding Evolved Objects, Namely Biology

(a map of yeast protein interactions1


Imagine a machine designed to slice bread which, through some pathological design concept, possessed the trait that its blade was also somehow its power source. Removing the blade/power supply would clearly render the device inoperable, but understanding how this action had achieved its effect would be quite difficult. This is the essence of one of the main problems confronting anyone interested in teasing apart the complex web of interactions that is molecular biology.

For whatever reason, whether it be a basic feature of human intelligence or simply a sort of paradigmatic immaturity as a species, we tend not to design things the way that evolution does. By that I mean employing multi-purpose parts in the way the fictional device mentioned above does. One human-designed object possessing that property is the bicycle. There, it so happens, the rotation of the wheels actually tends to keep the bike upright. The wheels are like gyroscopes: their rotational inertia tends to keep them in their plane of rotation, in the same way that the linear inertia of an object moving in a straight line tends to keep it on that trajectory. In the sense that the idea of the bicycle has retained this accidental advantage, it resembles an evolution-designed object. However, examples of human engineering that fit this category are few.

In biology, it appears, this type of overlapping, redundant functionality is the norm. For example, insulin is a molecule well known to many laymen as being involved in the metabolism of glucose: the regulation of blood sugar. However, if one simply consults the Wikipedia article on insulin, it immediately becomes clear that this is far too simple a tale. The functions of insulin listed there are:

1. Increased glycogen synthesis – insulin forces storage of glucose in liver (and muscle) cells in the form of glycogen; lowered levels of insulin cause liver cells to convert glycogen to glucose and excrete it into the blood. This is the clinical action of insulin which is directly useful in reducing high blood glucose levels as in diabetes.
2. Increased fatty acid synthesis – insulin forces fat cells to take in blood lipids which are converted to triglycerides; lack of insulin causes the reverse.
3. Increased esterification of fatty acids – forces adipose tissue to make fats (i.e., triglycerides) from fatty acid esters; lack of insulin causes the reverse.
4. Decreased proteinolysis – forces reduction of protein degradation; lack of insulin increases protein degradation.
5. Decreased lipolysis – forces reduction in conversion of fat cell lipid stores into blood fatty acids; lack of insulin causes the reverse.
6. Decreased gluconeogenesis – decreases production of glucose from non-sugar substrates, primarily in the liver (remember, the vast majority of endogenous insulin arriving at the liver never leaves the liver) ; lack of insulin causes glucose production from assorted substrates in the liver and elsewhere.
7. Increased amino acid uptake – forces cells to absorb circulating amino acids; lack of insulin inhibits absorption.
8. Increased potassium uptake – forces cells to absorb serum potassium; lack of insulin inhibits absorption.
9. Arterial muscle tone – forces arterial wall muscle to relax, increasing blood flow, especially in micro arteries; lack of insulin reduces flow by allowing these muscles to contract.


Even as I'm writing this, I have come across an article in Nature about a previously unknown action of insulin in a biochemical pathway involving a protein called TORC2 (ref. 2).

Some would say that I am pointing to an inherent flaw in reductionist thinking: that our tendency to search for the smallest parts in order to build up a description of everything, from the universe itself to the many varied forms of matter we find within it, cannot hope to penetrate these massively interconnected systems. It seems true that our current notions of what the smallest parts are will lead us to descriptions which are simply too large-scale to be intuitively understood. However, this doesn't necessarily point to a flaw in reductionism, especially since the alternative approach of holism doesn't seem to offer any way forward which circumnavigates such a problem. Rather, I would suggest that we need to fundamentally shift the way we think about evolved things in order to make significant progress towards understanding that which falls under the blanket term of "complex systems."

Early in the last century, physics was spurred on by a shift in thinking: quantum theory, often thought of as one of the canonical scientific revolutions. I am hopeful that this century, or some time in mankind's future, will do the same for biology, and complexity in general.

References

1. Durrett, Rick. Random Graph Dynamics New York: Cambridge University Press, 2006.
2. Dentin, R., Liu, Y., Koo, S.H., Hedrick, S., Vargas, T., Heredia, J., Yates, J. III & Montminy, M. (2007) Insulin modulates gluconeogenesis by inhibition of the coactivator TORC2. Nature 449:366-369

Thursday, September 13, 2007

Trees Can Talk to Each Other

How do plants converse with each other? As human beings, we possess probably the most sophisticated communication abilities of any species on the planet. This makes it very easy for us to forget that every form of life has some ability to transmit information between individuals. This is true even at the microscopic level, where bacterial cell-to-cell signaling is a popular research topic (ref. 1).

Plants have been around for a very long time and are unable to move much, so it is no surprise that they have developed many sophisticated adaptations for the purpose of communication. Plants can communicate in a host of ways; see ref. 2 for a brief overview. One of the most fascinating of these is the use of "smoke signals."

Ten years ago, researchers interested in plant biology and forest fires discovered that exposing seeds to smoke, or to certain nitrogenous compounds in smoke, will induce germination (ref. 3). The evolutionary advantage of this behavior is presumed to be that forest fires leave an area ripe for new growth.

The greater significance of this ability is our ongoing opportunity to learn from biological organisms. Although we use our intelligence to guide us in solving problems, we still use trial and error extensively. The greatest expert on trial and error is evolution. The process of evolving progressively more sophisticated life forms has relied on trial and error for the last 3.7 billion years, and we would do well to realize that when it comes to the challenges of existing on earth, we've got a teacher with a valuable store of experience.


References

1. Miller, M.B. & Bassler, B.L. (2001) Quorum Sensing in Bacteria. Annual Review of Microbiology 55:165-199 [doi:10.1146/annurev.micro.55.1.165]

2. Callaway, R.M. & Mahall, B.E. (2007) Plant ecology: Family roots. Nature 448:145-147 [doi:10.1038/448145a]

3. Keeley, J.E. & Fotheringham, C.J. (1997) Trace Gas Emissions and Smoke-Induced Seed Germination. Science 276:1248-1250 [doi:10.1126/science.276.5316.1248]

Monday, August 27, 2007

Rate Codes and Synchrony

Your body is electrical. Each cell has a system of ion pumps that tightly regulates the trans-membrane passage of potassium, sodium, calcium, and chloride, whose concentrations determine the voltage within the cell. The nervous system has evolved the ability to transmit pulses of electrical activity called "action potentials" for rapid communication over distances from millimeters up to meters.


(Brown, S.L., Joseph, J. & Stopfer, M. Nature Neuroscience 8, 1568-1576)



The figure above has several examples of voltage recordings from single neurons; those are the black, noisy traces. In each of them there are many "spikes," the large vertical deflections which simply look like line segments. These are action potentials. Up close, APs look like this:




They have a highly stereotyped form, reflective of the underlying generative mechanism. The work of Edgar Adrian made it clear that action potentials are the universal signal I've pronounced them to be. It was he who demonstrated that APs are used by the brain both for outward-bound signals to muscles and for inward ones from the sensory apparatus. Since that time, however, understanding exactly how information is encoded in these action potentials has been the source of great debate.

Each neuron has a baseline level of activity, an average firing rate, or number of spikes per second (between 5 and 100 spikes/second is the physiologically relevant range). An elevated frequency (more spikes per second) of APs to a muscle means greater contraction; more action potentials coming from the appropriate sensor signal warmer temperatures. Both of these are examples of rate codes. That is to say that the relevant part of the code is how often the pulses arrive. However, this is not the whole story.

Many researchers have noticed that ensembles of constitutively excited neurons have a tendency to fire their spikes simultaneously. The degree of coincident firing by a pair of neurons can be quantified by a measure called covariance. This synchrony has been implicated in everything from working memory to attention; from perceptual grouping (our tendency to compartmentalize objects as individual wholes out of the continuity of sensory experiences) to consciousness itself.
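For concreteness, here is what that measurement looks like on toy data; the spike probabilities are made up for illustration. Bin the two spike trains into small time windows, and any shared drive shows up as positive covariance between the binned counts.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bins = 1000                       # e.g., 10 s of activity in 10 ms bins

# A shared "stimulus" drives some bins of both trains; each train also
# spikes on its own. All probabilities are invented for illustration.
shared = rng.random(n_bins) < 0.05
train1 = (shared | (rng.random(n_bins) < 0.05)).astype(float)
train2 = (shared | (rng.random(n_bins) < 0.05)).astype(float)

print(f"covariance  = {np.cov(train1, train2)[0, 1]:.4f}")   # > 0: coincident firing
print(f"correlation = {np.corrcoef(train1, train2)[0, 1]:.2f}")
```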

That synchronous firing happens at all is no great surprise, as there is a large degree of correlation in sensory data. If I throw a red ball across your field of vision, for instance, the reports of red by individual neurons in your brain will tend to arrive together, simply because those neurons are being stimulated at the same time; there is high temporal correlation in the many sensory inputs to the brain. The idea that synchrony might be at work in settings lacking such obvious sources of correlation is what makes it intriguing.

These two ideas, rate codes and synchrony, represent major branches on the tree of theoretical efforts to describe information encoding at a basic level in the brain. A recent paper from the lab of Alex Reyes at NYU's Center for Neural Science, has given those interested in such endeavors something new to keep their gears turning (ref. 1).

The authors of this letter to Nature describe a series of experiments in which they measure the degree of synchrony in the outputs of a pair of neurons which are not connected to each-other. The experimental variable they manipulate is input correlation. Each neuron receives an input signal which is made partly from a joint source and partly from some other, random signal. In this way, they can control the amount of commonality in the signal that each neuron receives. It is no surprise that as the correlation in the two inputs is increased, so too is the output correlation. What is surprising is that if one leaves the correlation between the inputs constant and increases their overall amplitude, the correlation in the output again increases.

This is a strange state of affairs because, as I said, the signal is made of two parts: one is the bit that is common to both inputs, let's call that A; the other is unique to each neuron, let's call those B & C respectively. This means that neuron 1 receives a signal = A+B, while neuron 2 receives a signal = A+C. If we simply increase the amplitude of both signals by some factor D (S1 = D*(A+B), S2 = D*(A+C)), then both parts of the signal are scaled up: A, which would tend to produce correlated outputs, and B & C, which would tend to produce random, uncorrelated output. This means that there is something intrinsic to the transformation that neurons perform between their inputs and outputs which somehow enhances input correlations.
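The flavor of this result can be reproduced with a back-of-the-envelope simulation; this is my own sketch using threshold-linear units, not the authors' model or code. The input correlation is held fixed while the amplitude D is scaled up, and the correlation between the rectified outputs climbs anyway, because a larger drive lets the shared component clear the threshold more often.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400_000
a = rng.standard_normal(n)                  # A: component shared by both neurons
b = rng.standard_normal(n)                  # B: private to neuron 1
c = rng.standard_normal(n)                  # C: private to neuron 2
theta = 1.0                                 # firing threshold

rho_in = np.corrcoef(a + b, a + c)[0, 1]    # fixed at ~0.5 regardless of D

for D in (0.5, 1.0, 2.0, 4.0, 8.0):
    # Threshold-linear "neurons": output only the suprathreshold part of the drive.
    y1 = np.maximum(D * (a + b) - theta, 0.0)
    y2 = np.maximum(D * (a + c) - theta, 0.0)
    rho_out = np.corrcoef(y1, y2)[0, 1]
    print(f"D={D:3.1f}  input rho={rho_in:.2f}  output rho={rho_out:.2f}")
```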

The scaling of inputs mentioned above is tantamount to increasing the firing rate of the received signals, which means that the authors have found a link between rate coding and synchrony: two concepts once treated as distinct.

The Letter progresses nicely, from modeling work done with artificial neurons, into a more biologically plausible setting combining single neurons with simulation, and finally to a more pared down mathematical exposition which seems to capture the essence of this phenomenon, namely the "threshold linear" transformation which neurons perform.

This is brave work; it makes clear how little we understand of the brain, how far we have to go. Several theorists have studied synchrony in the setting of artificially constructed networks, in vitro and in vivo (refs. 2,3,4,5). None, however, has achieved the kind of generality of this result. Understanding the underlying behavioral rules of single neurons is paramount to building a complete theoretical understanding of the mystery that is the brain. These authors have set an example of how we might move forward if we ask the right question in the right way.

References

1. de la Rocha, J., Doiron, B., Shea-Brown, E., Josić, K. & Reyes, A. (2007) Correlation between neural spike trains increases with firing rate. Nature 448:802-806 [doi:10.1038/nature06028]

2. Ritz R, Sejnowski TJ. (1997) Synchronous oscillatory activity in sensory systems: new vistas on mechanisms. Curr Opin Neurobiol. 7(4):536-46.

3. Vogels TP, Abbott LF. (2005) Signal propagation and logic gating in networks of integrate-and-fire neurons. J Neurosci. 25(46):10786-95.

4. Ikegaya Y, Aaron G, Cossart R, Aronov D, Lampl I, Ferster D, Yuste R. (2004) Synfire chains and cortical songs: temporal modules of cortical activity. Science 304(5670):559-64.

5. Mehring C, Hehl U, Kubo M, Diesmann M, Aertsen A. (2003) Activity dynamics and propagation of synchronous spiking in locally connected random networks. Biol Cybern. 88(5):395-408.

Wednesday, August 22, 2007

Language Acquisition



Apparently, human children start to learn words at an accelerated rate around the two-year mark. That is to say that they seem to abruptly begin to amass and employ a wide variety of words. This phenomenon is referred to as the "vocabulary explosion."

Convincing generalized theories of language acquisition have been around for some 30 years, the most successful of which is probably Noam Chomsky's theory of Universal Grammar. But such theories do not attempt to describe the dynamics of communication mastery introduced above. That is one of the tasks of the ingenious minds at work on the subject today.

A recent submission to Science by the psychologist Bob McMurray (ref. 1) attempts to computationally model the observed acceleration of the word uptake process. Although past explanations of this phenomenon have invoked specialized and well-timed brain mechanisms, Dr. McMurray's work attempts a more parsimonious description.

He concludes:

"Acceleration is guaranteed in any system in which (i) words are acquired in parallel, that is, the system builds representations for multiple words simultaneously, and (ii) the difficulty of learning words is distributed such that there are few words that can be acquired quickly and a greater number that take longer. This distribution of difficulty derives from many factors, including frequency, phonology, syntax, the child's capabilities, and the contexts where words appear."

He goes on to demonstrate that languages seem to display such a distribution of word difficulty, and to show that his model captures the accelerating behavior well.
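His two ingredients are easy to simulate. In the sketch below (my own toy version, with an arbitrary difficulty distribution chosen so that few words are easy and progressively more are hard, which is my assumption, not McMurray's fitted distribution), every word accumulates evidence in parallel at the same rate, and the number of words known accelerates with no special mechanism switching on.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words = 10_000

# Difficulty distribution chosen for illustration: density rises linearly
# with difficulty on [0, 100], so few words are easy and many are hard.
difficulty = 100.0 * np.sqrt(rng.random(n_words))

# Every word accumulates "evidence" in parallel, one unit per time step;
# a word is produced once its evidence passes its difficulty.
for t in (20, 40, 60, 80, 100):
    print(f"t={t:>3}: {(difficulty <= t).sum():>5} words known")
# The counts grow roughly as t^2: acceleration with no special mechanism.
```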

The real beauty of the work however, is the posited inherent parallelism. Such ability in the human brain has long been suggested by a wide variety of scientists and philosophers. Indeed, I scarcely need use the word suggested, as we know that certain things happen in parallel, the processing of visual information, for instance, does not happen one pixel at a time but rather proceeds by working on the entire pattern of light that falls on the retina at once.

Dr. McMurray has thus figured out an elegant way to apply what should be thought of as a basic property of the brain to explain what seemed an exceedingly difficult problem, something everybody who works on complex systems hopes to be able to do.


References

1. McMurray, B. (2007) Defusing the childhood vocabulary explosion. Science 317(5838):631.

Monday, August 13, 2007

The Look of Touch



Consciousness feels whole. That is to say that the various sensory experiences our brains process in parallel feel like one coherent thing: our own individual consciousness. However, the electrical activity generated by different sensory experiences is largely segregated to different parts of the brain, and it is possible to turn those parts off selectively. For instance, form and motion are represented by different parts of the visual cortex. By using a technique such as TMS (transcranial magnetic stimulation), it would be possible to eliminate sensations of motion in an image while retaining static vision. This would no doubt be a very strange state to be in. There are also many pathologies, induced by head injury or otherwise, that produce abnormal combinations of sensory data and qualities of consciousness in general (Dr. Oliver Sacks has written extensively on this topic).

On the other hand, different cortical sensory areas are highly connected to each other; this is at least partly why our sensations feel so unitary. It means that simply hearing something move, or feeling the touch of something moving (ref. 1), can produce measurable responses in the parts of the visual cortex most sensitive to movement. Some recent research has gone farther than this. As the authors of the new work (ref. 2) point out, it is no surprise that the feeling of something moving can elicit such a reaction, because merely imagining motion can have the same effect. Their experiments demonstrate that a highly specialized area of the visual cortex called MST is sensitive even to "vibrotactile" stimuli: those incongruent with motion.

Because consciousness is often thought of as an emergent property of our massively interconnected system of neurons, understanding interactions between parts of the brain at many different scales (from single neurons to large collections or areas as in this case) is integral to understanding how this efflorescence works. The work highlighted here is one step in that direction.

References

  1. Hagen MC, Franzen O, McGlone F, Essick G, Dancer C, Pardo JV (2002) Tactile motion activates the human middle temporal/V5 (MT/V5) complex. Eur. J. Neurosci. 16:957–964.
  2. Beauchamp MS, Yasar NE, Kishan N, Ro T. (2007) Human MST but not MT responds to tactile stimulation. J. Neurosci. 27(31):8261-8267

Thursday, August 9, 2007

Life Span

It is somewhat paradoxical that we cannot perform experiments on the animal we are most interested in studying: ourselves. It is difficult enough to deal with the moral implications of experimenting on non-humans. I frequently remind myself that the study of other creatures mitigates the suffering of my own species, and even this sometimes seems a paltry justification. Humans are investigated non-invasively, or more invasively when such intrusion is necessary for medical purposes, but we are still limited in our understanding by such restrictions, the following science included.



It will come as no surprise that there is quite a bit of research into the possibility of extending the length of time that a living thing spends alive. To date, the only effective means of doing so have been based in some way on the concept of Calorie Restriction (CR); see ref. 1 for a review. This simply means that an animal takes in fewer calories than normal while maintaining adequate levels of nutrients. The results are reasonably unequivocal: from nematodes to mammals, lifespan is increased by this method. As one can see from the graph above, caloric restriction is increasingly effective up to around 65% fewer calories being taken in, at which point its effect plateaus.

Some recent research (ref. 2), however, has found a seemingly non-CR-related mechanism that also has an effect on mammalian lifespan. The authors of this study bred mice which lack the gene coding for adenylyl cyclase 5 (AC5). ACs in general play a key role in beta-adrenergic receptor (β-AR) signaling. In the interest of brevity, I will not delve into the molecular biology of cell-to-cell communication; however, it is important to know that the blockade of this particular signaling pathway has recently been demonstrated to successfully treat mild-to-moderate chronic heart failure (ref. 3). The research into mice which lack the AC5 gene shows that their lifespan is ~30% longer and that they "are protected from reduced bone density and susceptibility to fractures of aging. Old AC5 KO mice are also protected from aging-induced cardiomyopathy, e.g., hypertrophy, apoptosis, fibrosis, and reduced cardiac function." (ref. 2)

With both of these examples of extended lifespan, however, a question arises: what quality of life do these animals have? This is perhaps more relevant for the research on calorie restriction, but the animals studied can never report to us how they are feeling, though the scientists involved always take pains to minimize any outward signs of discomfort. Until such techniques have been tried in human beings, the complete effects of these therapies remain a bit of a question mark in my mind. That is not to say that I'm not amazed and optimistic about this direction of progress; I simply find it incredible that such simple things as eating less or disrupting a single gene could have universally positive effects.

References
  1. D.A. Sinclair (2005) Toward a unified theory of caloric restriction and longevity regulation, Mech. Ageing Dev. 126, 987–1002.
  2. Lin Yan, Dorothy E. Vatner, J. Patrick O'Connor, Andreas Ivessa, Hui Ge, Wei Chen, Shinichi Hirotani, Yoshihiro Ishikawa, Junichi Sadoshima, and Stephen F. Vatner (2007) Type 5 Adenylyl Cyclase Disruption Increases Longevity and Protects Against Stress, Cell 130, 247-258
  3. M.R. Bristow (2000) beta-adrenergic receptor blockade in chronic heart failure, Circulation 101, 558–569.

Friday, July 27, 2007

The Feel of Space

(left:eRiK, right:me)


That's my friend eRiK. My mother emphatically titles him "eRiK the Dane." eRiK and I studied Physics and Math together as undergraduates at The University of California at Berkeley. We share a great love of understanding, and whenever something's puzzling me, from Set Theory to counting cards in BlackJack, I turn to him.

He's been singing the praises of the show RadioLab on NPR lately, and he was particularly struck by a comment made by the well-known Columbia University physicist Brian Greene. Dr. Greene was discussing the expansion of the universe, and (this is hearsay now) he said that there is no center to the expansion. No origin, no point away from which things are expanding. This is, as eRiK said, unsettling. If you were blowing a bubble with chewing gum, the bubble swelling from your mouth, the rate of expansion would be greatest at your lips, where the mass of sticky stuff was being stretched into a sheet. If you were pulling a rubber band with your index fingers, the rate of expansion would be highest near your digits and lower elsewhere. The point I'm trying to get across is that it's difficult to think of examples of isotropic expansion of objects, meaning expansion that is spatially and directionally uniform. A pizza pie made from a lump of dough is expanded into a sheet in a roughly constant spatial manner, but the spread is not directionally uniform; it is expanded radially, out from the center. Dr. Greene's comment means that there is no direction or origin associated with the universe's growth. As astrophysicists and astronomers watch stars getting farther away from each other, it appears to be happening in the same way everywhere. At the very least this means that the universe itself must behave differently from every object in it.



There is one example that is a bit comforting: imagine that you lived on a line, confined to one dimension. Further, let's say this line was connected at the ends: a closed hoop of 1D existence, finite but without ends. If "something" caused this circle to grow radially out from its center (which wouldn't be a part of the line itself, of course), then to those living on the line it would appear as if everything were expanding isotropically. We could extend this idea to a 4-dimensional space-time as a hoop embedded in a higher-dimensional space, but this is pure speculation.
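
A quick numerical sketch of the hoop picture (my own toy illustration; the positions and growth rate are arbitrary): put inhabitants at fixed angles on a circle of growing radius, and the distance between every pair grows by the same factor, with no preferred point anywhere on the line itself.

```python
import numpy as np

# Inhabitants sit at fixed angles on a circle whose radius grows in time.
# Arc-length separation between two inhabitants: d = R(t) * delta_theta.
angles = np.array([0.0, 0.5, 1.0, 2.0])   # positions in radians (arbitrary)
R0, rate = 1.0, 0.1                        # initial radius, growth rate (assumed)

def pairwise_separations(t):
    R = R0 * np.exp(rate * t)              # assumed exponential growth of the radius
    i, j = np.triu_indices(len(angles), k=1)
    return R * np.abs(angles[i] - angles[j])

d0, d1 = pairwise_separations(0.0), pairwise_separations(1.0)
# Every pair recedes by the same factor, no matter where it sits on the hoop:
print(d1 / d0)   # ~1.105 for every pair: isotropic expansion, no center on the line
```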

This brings us to the topic that motivated the title of this post. What would it feel like to come to the edge of a universe? I don't know whether such a boundary exists, but we can certainly conceptualize a space like our own with well-defined boundaries. This is not like being in a room with walls. The repulsive forces that we experience as a result of encountering walls are just that: fields of force. A boundary of space must be very different. There wouldn't necessarily be any repulsive force; I imagine it more like asking a person to reach into the 14th dimension or backward in time. It doesn't even make sense to try to conceptualize it: there is simply nothing to try or do, no direction to move in, no place to point to. That is the closest thing I can imagine to arriving at a boundary of space. Not only would there be nothing there, our perceptual abilities would probably be quite stymied by such a thing. Again I find myself on the slippery slope of speculation, and I invite others to weigh in. I'm not sure that anybody has the personal experience required to comment on this, but I am sure that somebody could, in the great tradition of physics, suggest a thought experiment that would shed some light on the subject.

Wednesday, July 25, 2007

Saccade Gain Adaptation

(left:me right:Jordan)


I'm a graduate student. I study Neuroscience at The City College of New York as a student of the CUNY Graduate Center. I work in the Biology Department of CCNY in the lab of Josh Wallman. I study a process known as Saccade Gain Adaptation.

Saccades (as I've described before in this very blog) are rapid point-to-point displacements of gaze. Unless you have a moving target that you can follow with your gaze, every gaze shift you make is a saccade. This constraint is unique to eye movements; there are no such constraints on arm or leg or any other kind of movements. If you want to move your arm from one location to another, there are a tremendous number of paths to follow and speeds to employ. With eye movements, however, you have little to no control over the path taken or the speed of the action. If you've never heard of this, it's useful to try to trace a line (like a corner where walls meet) slowly with your eyes. You'll rapidly see that this is impossible; the best you can do is make small steps along the line. This is in contrast to, say, holding out your hand and moving it across your field of vision while fixating on one of your finger-tips. In this case, you can make a smooth pursuit movement.

Saccade gain adaptation is a process through which the size of eye movements elicited by an abrupt change in position of a target is either increased or reduced. Below is a little flash movie that I made illustrating this procedure.

Why might the eye want to behave in this way? Let us suppose that over time the muscles in your eye become weak as a result of aging. It makes sense that the commands sent to your eye muscles must also change in order to accurately move your eyes to desired targets in the world. Josh (and I) think that this is not quite the whole story, but accept for the moment that it is possible and sensible for the brain to be able to change its saccadic gain.

Experimentally, we can induce gain adaptation in the following way: we start with "no step back" trials (runs) in which the subject fixates a small target. At some time unpredictable to the subject, the target abruptly changes position, and the subject's instructions are simply to follow the target. After some of these "baseline" trials, we move on to the "step back" trials. In this phase, the target steps to a new position, but when the subject moves their eye to the new target position, we move the target back slightly. Interestingly, if this second step is small, the subject will not even notice it; they will simply follow the target with their eyes. After many hundreds of trials ("Step Back (late)"), instead of making two saccades to reach the final position of the target (after its two steps), the subject will simply make one saccade to the final target location.
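
A minimal simulation of this protocol may help (a toy model of my own; the learning rate, step sizes, and noise level are invented values, not our lab's parameters). The saccade amplitude is the target step scaled by an internal gain, and after every trial the gain is nudged by a fraction of the remaining visual error:

```python
import numpy as np

rng = np.random.default_rng(0)
gain = 1.0                     # internal scaling of saccade size (starts accurate)
lr = 0.02                      # fraction of the error corrected per trial (assumed)
step, step_back = 10.0, -2.5   # target step and intra-saccadic step back (deg)

for trial in range(300):
    noise = rng.normal(0, 0.05)              # small motor noise on each saccade
    saccade = gain * step * (1 + noise)      # primary saccade amplitude
    final_target = step + step_back          # target ends up at 7.5 deg
    error = final_target - saccade           # post-saccadic visual error
    gain += lr * error / step                # nudge the gain toward zero error

print(f"adapted gain: {gain:.2f}")           # ~0.75: one saccade lands on target
```

The gain settles where a single saccade lands directly on the stepped-back target, which is exactly the behavior seen late in adaptation.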

(Click on the black play button to see what a single trial looks like, and the trial type buttons to change the trial type)

In the above animation, there is a graph representing the type of data that we gather. We are only interested in the horizontal (or x) position of the gaze. All of the stimuli we're using in the experiment are presented on a monitor. We use a computer to control the presentation of the stimuli, and we use a camera and a calibration procedure to record the direction of gaze from a subject's right eye. In the graph, the vertical axis represents horizontal displacement from some arbitrary zero point (I've purposefully ignored details such as this). So when the trace representing the target jumps up, that means it has moved to some new position to the right of where it formerly was. The same is true of the trace for the gaze position. When a trace steps down, it means the corresponding thing (target or gaze) has moved to the left. The set of gray dots merely represents the passage of time. I'm not sure if this description is complete enough, but hit the "play" button a bunch of times and puzzle over it if you're still confused and it'll make sense, or email me and I'll explain ad nauseam.
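
Incidentally, pulling the saccades out of a gaze trace like this one is commonly done with a simple velocity threshold. A sketch, with an invented trace and a threshold chosen only for this toy example:

```python
import numpy as np

def detect_saccades(x, fs=1000.0, vel_thresh=100.0):
    """Flag samples where horizontal gaze velocity exceeds a threshold.

    x: gaze position (deg) sampled at fs Hz; vel_thresh in deg/s is an
    arbitrary choice for this toy trace, not a lab-calibrated value.
    """
    velocity = np.gradient(x) * fs               # deg/s
    return np.abs(velocity) > vel_thresh

# Fake trial: fixate at 0 deg, then a 10 deg rightward saccade at t = 0.5 s
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.where(t < 0.5, 0.0, 10.0)                 # idealized instantaneous step
x += np.random.default_rng(1).normal(0, 0.01, t.size)   # measurement noise
mask = detect_saccades(x, fs)
print(f"saccade detected around t = {t[mask].mean():.2f} s")
```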

A final note: most saccades miss their target by a rather large margin (~10% of the size of the eye movement); I've simply idealized the graphics to simplify the presentation.

Tuesday, July 24, 2007

Mirror Neurons & Autism


The Mirror Neuron system (MNS) is thought to underlie imitation in primates and has been implicated in Autism Spectrum Disorder in humans (1, 2). First observed in macaques, mirror neurons are classified as units that selectively increase their firing rate both during the execution of a motor action by an individual and while that individual observes the same action performed by another. Interest in the MNS in relation to autism was sparked by the fact that two of autism's major symptoms are generalized deficits in social interaction and communication, capacities which would seem to rely on something like the MNS. In order to explore how MNS properties might differ in normal vs. autistic individuals, Hugo Théoret has been performing experiments in human subjects. His results suggest that a general deficit of something akin to the mirror neuron system is present in autistic individuals.


(EMG stuff)

Dr. Théoret uses two techniques in his research on the mirror neuron system: electromyography (EMG) and transcranial magnetic stimulation (TMS). EMG measures the voltage difference between ground and the skin near a muscle group. Muscle contraction is accompanied by currents which cause a change in voltage, or potential. The technique is sensitive enough to detect voltage changes when an individual even considers a movement involving the measured muscle group. TMS is a coarse method of selectively activating cortical regions (3). The combined use of these tools has allowed Dr. Théoret to use simple experiments to draw interesting conclusions about individuals with autism.
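
To give a rough sense of how such a sub-threshold response might be quantified, here is a sketch of my own (the window lengths, sampling rate, and z-score criterion are arbitrary choices, not Dr. Théoret's actual analysis):

```python
import numpy as np

def emg_deflection(trace, fs=2000.0, baseline_s=0.5, z_crit=3.0):
    """Test whether rectified EMG after an event rises above baseline.

    trace: voltage samples; the event (video onset or TMS pulse) is
    assumed to occur immediately after the baseline window.
    """
    n_base = int(baseline_s * fs)
    rest = trace[:n_base].mean()                 # resting voltage level
    rectified = np.abs(trace - rest)             # rectify around rest
    base, response = rectified[:n_base], rectified[n_base:]
    z = (response.mean() - base.mean()) / (base.std() + 1e-12)
    return z > z_crit, z

# Fake data: 0.5 s of quiet baseline, then a small evoked deflection
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 1.0, 2000)
trace[1000:] += 3.0                              # hypothetical induced potential
detected, z = emg_deflection(trace)
print(detected, round(z, 1))                     # True: deflection above the noise
```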


(TMS-er)


Dr. Théoret’s main finding can be summarized by describing two experimental outcomes. First, in normal (non-autistic) individuals, there is a reliable deflection of the electromyogram produced by having the subjects watch a video of an action being performed which involves the measured muscle. For instance, if the right bicep is being measured, there will be an observable deflection of the potential in that muscle when the subject watches a video of an arm lifting an apple. There is also a measurable potential-deflection in that muscle when the proper area of motor cortex is stimulated via TMS. Beyond these individual effects, there is a summation effect such that the deflection is even larger when the subject both observes the video and receives the TMS.

Second, in autistic subjects, there is no deflection of the electromyogram upon a subject’s observation of the above described video. There is in these subjects a potential produced by TMS of the appropriate area, implying that there is no defect in the circuitry to produce such sub-threshold muscle activation. Needless to say there is no summation effect in these subjects.

Dr. Théoret feels that this work implies that understanding of others' actions is achieved by an individual mapping those actions onto their own motor cortex (4). This is an intriguing hypothesis, but there are really two possibilities which both fit the data. One is as suggested by Dr. Théoret; the other is that the mirror neuron system alone interprets the intention of the action and (when possible) maps the action onto the motor cortex. The former possibility would require, for instance, that anybody receiving sufficiently strong TMS would necessarily experience the feeling that they were either observing somebody perform an action or performing the action themselves. This is in keeping with the theory well laid out by Daniel M. Wegner in his book The Illusion of Conscious Will. Without getting too far afield, Dr. Wegner believes that we have a general ability to ascribe agency to observed acts, attributing them either to ourselves or to others.


The implication of this work is that a defect in the mirror neuron system is responsible for the social-interaction pathology seen in patients with autism. In fact, some researchers believe that defects in the mirror neuron system could lead to all the deficits associated with autism (5). Of course, others feel that such dysfunction cannot be responsible for all the symptoms of autism (6). It remains to be seen whether any definitive explanation of the role of the mirror neuron system in autism will arise, but it is clear that the system plays some role in the interpretation of actions.
References

1. Rizzolatti, G., & Craighero, L., (2004) The Mirror Neuron System, Annu. Rev. Neurosci. 27, 169-192.
2. Oberman, L.M., & Ramachandran, V.S., (2007) The simulating social mind: the role of the mirror neuron system and simulation in the social and communicative deficits of autism spectrum disorders. Psychol. Bull., 133, 310-327.
3. Fitzgerald, P.B., Fountain, S. & Daskalakis, Z.J. (2006). A comprehensive review of the effects of rTMS on motor cortical excitability and inhibition. Clinical Neurophysiology 117, 2584-2596
4. Théoret, H., Halligan, E., Kobayashi, M., Fregni, F., Tager-Flusberg, H. & Pascual-Leone, A. (2005) Impaired motor facilitation during action observation in individuals with autism spectrum disorder. Curr. Biol. 15, R84-R85.
5. Iacoboni, M., Dapretto, M. (2006) The mirror neuron system and the consequences of its dysfunction. Nat Rev Neurosci. 7, 942-951.
6. Hadjikhani, N., Joseph, R.M., Snyder, J. & Tager-Flusberg, H. (2006) Anatomical differences in the mirror neuron system and social cognition network in autism. Cereb. Cortex. 16, 1276-1282.

Thursday, July 19, 2007

Miniature Eye Movements

Your brain doesn't care about brightness; it likes contrast. In fact, by the time signals generated by light impinging on your retina propagate through its 10 layers of cells, brightness information has largely been discarded in favor of contrast, both spatial and temporal (see paragraph two for a description). A very simple example of this is demonstrated below. Initially the contrast (in time) of the two dots is the same because they are surrounded by the same brightness. When you click on the thin or thick surrounds button, the contrasts of the two dots become inverted, as evidenced by the change in percept. I guarantee that nothing about the dots themselves changes, only the surrounding area.
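
To put a number on "contrast," here is a minimal sketch of a standard Weber-style calculation (the luminance values are invented; the point is only that physically identical dots can carry opposite contrast signals against different surrounds):

```python
import numpy as np

def weber_contrast(L_target, L_surround):
    """Contrast of a target relative to its surround (dimensionless)."""
    return (L_target - L_surround) / L_surround

# Two identical dots whose luminance modulates in time (arbitrary units)
t = np.linspace(0, 1, 100)
dot = 50 + 25 * np.sin(2 * np.pi * t)      # the same physical signal for both dots

light_surround, dark_surround = 80.0, 20.0
c_light = weber_contrast(dot, light_surround)
c_dark = weber_contrast(dot, dark_surround)

# The dots are physically identical, yet their contrast signals have opposite
# sign throughout the cycle -- the surround, not the dot, sets the percept.
print(np.sign(c_light[:5]), np.sign(c_dark[:5]))
```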

(Shapiro, A. G., D’Antona, A. D., Charles, J. P., Belano, L. A., Smith, J. B., & Shear-Heyman, M. (2004). Induced contrast asynchronies. Journal of Vision, 4(6):5, 459-468, http://journalofvision.org/4/6/5/ica.html)

Now let me disambiguate a bit what is meant by temporal and spatial contrast. A painting, say Seurat's A Sunday Afternoon on the Island of La Grande Jatte, has plenty of spatial contrast, but because nothing changes in time, there is no temporal contrast. A movie screen filled with white which fades to black and back to white, oscillating, has plenty of temporal contrast and no spatial contrast. Now, if there is no temporal contrast at all in your visual field, the world will fade away. Your visual system needs temporal contrast. This is one of the purposes of so-called fixational eye movements. These are small involuntary eye movements which you make between the large point-to-point movements called saccades that we use to change the direction of our gaze. So even if you were standing in front of a painting such that it filled your vision completely and only stared at one point, your eyes would move slightly around your fixation target, preventing the image from fading away. If somebody drugged your eye muscles so that there was no way to execute these small movements and filled your vision with an image that had no temporal contrast, the world would fade away.
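
A toy way to see why this jitter matters (my own sketch, with the "retina" deliberately oversimplified into a pure temporal differencer): hold a static image perfectly still and the change signal is zero everywhere, but shift it by a pixel or two each frame and the edges keep generating signal.

```python
import numpy as np

rng = np.random.default_rng(3)
image = np.zeros((64, 64))
image[:, 32:] = 1.0                      # a single vertical edge

def retina_signal(frames):
    """Crude 'retina': respond only to frame-to-frame change."""
    return np.abs(np.diff(frames, axis=0)).sum()

# Fixation without fixational eye movements: identical frames, zero signal
static = np.stack([image] * 20)
# With fixational jitter: shift the image by a random pixel or two each frame
jittered = np.stack([np.roll(image, rng.integers(-2, 3), axis=1) for _ in range(20)])

print(retina_signal(static))    # 0.0 -- the image would fade
print(retina_signal(jittered))  # > 0 -- the edge keeps driving responses
```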

The idea that brains only encode change and not static values of sensory data is pretty ubiquitous, and there is a wealth of examples. What I'd like to continue with, however, is another function of fixational eye movements, one that has long been speculated about but only recently demonstrated. In a recent paper in Nature*, researchers discovered that these small eye movements also enhance our fine-scale spatial resolution. That is to say, without small eye movements we are less able to detect the presence of, and report the properties of, visual stimuli with fine spatial structure.

One very useful analogy is the way we run our fingers over something textured to better comprehend the shape of it. For example, suppose you were blindfolded and I put a piece of wood in your lap. I tell you that this piece of wood has some number of very small adjacent grooves cut into it at some particular position. If I asked you to find them and count them, I suspect that you would run your fingers across the wood until you found them and then rub your index finger over them a couple of times to determine the number. It seems a very natural way to do it, and this is exactly akin to making small eye movements to improve spatial resolution. Not making small eye movements like that would be akin to simply pressing your finger down straight on the grooves in an attempt to count them. Perhaps you could do alright at this if there were only one or two, or if they were very big, but as the task got more and more difficult you'd need to use the sliding technique in order to discriminate. The commonality here is that both your sense of touch and sense of sight are mediated by an array of detectors of fixed size and position, and some stimuli are simply too small and/or finely spaced to be accurately detected by the particular array of detectors you've got.

Here's another example: suppose you were using a number of long, same-diameter, same-length rods to determine the topographical features of a small area of the bottom of a pool of water. One way to do this would be to take many rods in a bundle and push them each down (still in a bundle) until they stopped, recording each of their heights individually. The problem with this method is that the resolution of your image of the bottom is limited by the diameter of the rods. Assuming you can't use ever-thinner rods (we can't make the receptor size in our eyes or hands arbitrarily small), you can get a better-resolution image by running a single rod (or many rods) over the area to be mapped, continuously recording the height. In this way you have more information than if you simply assign each rod to a single point on the bottom, increasing your resolution.
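
Here is that intuition in code (a toy of my own making: a finely structured "pool bottom" sampled by coarse sensors, static bundle versus swept rod):

```python
import numpy as np

# Fine-grained depth profile of the pool bottom (arbitrary bumpy function)
x = np.linspace(0, 10, 1000)
depth = np.sin(3 * x) + 0.3 * np.sin(17 * x)   # coarse + fine structure

sensor_width = 100   # each rod averages over 100 fine samples (~1 unit wide)

# Static bundle: one averaged reading per rod position -> 10 numbers total
static_readings = depth.reshape(10, sensor_width).mean(axis=1)

# Swept rod: the same coarse sensor slides continuously, one reading per step
kernel = np.ones(sensor_width) / sensor_width
swept_readings = np.convolve(depth, kernel, mode="valid")   # ~900 readings

# The swept version blurs fine detail too, but its dense sampling preserves
# far more of the profile's variation than 10 isolated averages do.
print(static_readings.size, swept_readings.size)
```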

*Rucci, M., Iovin, R., Poletti, M. & Santini, F. (2007) Miniature eye movements enhance fine spatial detail. Nature 447, 852-855.