Brain


It’s easy to appreciate the seasonality of winter blues, but web searches show that other disorders may ebb and flow with the weather as well.

Google searches are becoming an intriguing source of health-related information, exposing everything from the first signs of an infectious disease outbreak to previously undocumented side effects of medications. So researchers led by John Ayers of San Diego State University decided to comb through queries about mental illnesses to look for potentially helpful patterns related to these conditions. Given the well-known connection between depression and winter weather, they investigated possible links between mental illnesses and the seasons.

Using all of Google’s search data from 2006 to 2010, they studied searches for terms like “schizophrenia,” “attention deficit/hyperactivity disorder (ADHD),” “bulimia” and “bipolar” in both the United States and Australia. Since winter and summer are reversed in the two countries, finding opposing patterns in the two countries’ data would strongly suggest that season, rather than other factors that vary with the time of year, influenced the prevalence of the disorders.
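
To make the logic concrete, here is a minimal sketch, in Python, of the kind of hemisphere comparison the study describes; the monthly volumes below are invented for illustration, not the study’s data.

```python
# A minimal sketch of the hemisphere comparison: compare winter vs.
# summer query volume for a term in two countries whose seasons are
# offset by six months. All data values here are hypothetical.
import numpy as np

# Hypothetical normalized monthly query volumes, January..December
us_queries = np.array([1.10, 1.08, 1.02, 0.98, 0.94, 0.90,
                       0.88, 0.90, 0.95, 1.00, 1.05, 1.09])
au_queries = np.array([0.89, 0.91, 0.96, 1.01, 1.06, 1.10,
                       1.11, 1.08, 1.01, 0.97, 0.93, 0.90])

WINTER_US, SUMMER_US = [11, 0, 1], [5, 6, 7]   # Dec-Feb vs. Jun-Aug
WINTER_AU, SUMMER_AU = [5, 6, 7], [11, 0, 1]   # seasons reversed

def winter_excess(queries, winter, summer):
    """Percent by which winter query volume exceeds summer volume."""
    return 100 * (queries[winter].mean() / queries[summer].mean() - 1)

print(f"US winter excess: {winter_excess(us_queries, WINTER_US, SUMMER_US):.0f}%")
print(f"AU winter excess: {winter_excess(au_queries, WINTER_AU, SUMMER_AU):.0f}%")
# Matching winter excesses in opposite calendar months point to season,
# not calendar, as the driver.
```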


“All mental health queries followed seasonal patterns with winter peaks and summer troughs,” the researchers write in their study, published in the American Journal of Preventive Medicine. They found that mental health queries in general were 14% higher in the winter in the U.S. and 11% higher in the Australian winter.

The seasonal timing of queries regarding each disorder was also similar in the two countries. In both, for example, searches about eating disorders (including anorexia and bulimia) surged during winter months: Americans were 37% more likely and Australians 42% more likely to seek information about these disorders during colder weather than during the summer. Compared with summer searches, schizophrenia queries were 37% more common in the American winter and 36% more frequent during the Australian winter. ADHD queries were also highly seasonal, with 31% more winter searches in the U.S. and 28% more in Australia.

Searches for depression and bipolar disorder, which might seem to be among the mental illnesses most likely to strike during the cold winter months, showed smaller seasonal swings: there were 19% more winter searches for depression in the U.S. and 22% more in Australia. For bipolar disorder, 16% more American searches occurred in the winter than in the summer, and 18% more during the Australian winter. The least seasonal disorder was anxiety, which varied by just 7% in the U.S. and 15% in Australia between summer and winter months.

Understanding how the prevalence of mental illnesses changes with the seasons could lead to more effective preventive measures that alert people to symptoms and guide them toward treatments, experts say. Previous research suggests that shorter daylight hours and the social isolation that accompanies harsh weather might explain some of these seasonal differences, so improving social interactions during the winter months might be one way to alleviate some symptoms. Drops in vitamin D levels, which rise with exposure to sunlight, may also play a role, so supplementation for some people affected by mood disorders could also be effective.

 

The researchers emphasize that searches for disorders are only queries for more information and don’t necessarily mean the searcher has received a new diagnosis. For example, while the study found that searches for “suicide” were 29% more common in winter in America and 24% more common during the colder season in Australia, other investigations have shown that completed suicides tend to peak in spring and early summer. Whether winter queries have any relationship to spring or summer suicides isn’t clear yet, but the results suggest a new way of analyzing data that could lead to a better understanding of a potential connection.

And that’s the promise of data on web searches, the scientists say. Studies of mental illnesses typically rely on telephone or in-person surveys in which participants are asked about symptoms or any history of psychological disorders, and people may not always answer truthfully in these situations. Searches, on the other hand, reflect people’s desire to learn more about symptoms they may be experiencing or about a condition with which they were recently diagnosed. So such queries could become a useful resource for spotting previously undetected patterns in complex psychiatric disorders. “The current results suggest that monitoring queries can provide insight into national trends on seeking information regarding mental health, such as seasonality… If additional studies can validate the current approach by linking clinical symptoms with patterns of search queries,” the authors conclude, “this method may prove essential in promoting population mental health.”

 

Our brains are better than Google or the best robot from iRobot.

We can instantly search through a vast wealth of experiences and emotions. We can immediately recognize the face of a parent, spouse, friend or pet, whether in daylight, darkness, from above or sideways—a task that the computer vision system built into the most sophisticated robots can accomplish only haltingly. We can also multitask effortlessly when we extract a handkerchief from a pocket and mop our brow while striking up a conversation with an acquaintance. Yet designing an electronic brain that would allow a robot to perform this simple combination of behaviors remains a distant prospect.

How does the brain pull all this off, given that the complexity of the networks inside our skull—trillions of connections among billions of brain cells—rivals that of the Internet? One answer is energy efficiency: when a nerve cell communicates with another, the brain uses just a millionth of the energy that a digital computer expends to perform the equivalent operation. Evolution, in fact, may have played an important role in pushing the three-pound organ toward ever greater energy efficiency.

Parsimonious energy consumption cannot be the full explanation, though, given that the brain also comes with many built-in limitations. One neuron in the cerebral cortex, for instance, can respond to an input from another neuron by firing an impulse, or a “spike,” in thousandths of a second—a snail’s pace compared with the transistors that serve as switches in computers, which take billionths of a second to switch on. The reliability of the neuronal network is also low: a signal traveling from one cortical cell to another typically has only a 20 percent chance of arriving at its destination and much less of a chance of reaching a distant neuron to which it is not directly connected.

Neuroscientists do not fully understand how the brain manages to extract meaningful information from all the signaling that goes on within it. The two of us and others, however, have recently made exciting progress by focusing new attention on how the brain can efficiently use the timing of spikes to encode information and rapidly solve difficult computational problems. This is because a group of spikes that fire almost at the same moment can carry much more information than can a comparably sized group that activates in an unsynchronized fashion.

Beyond offering insight into the most complex known machine in the universe, further advances in this research could lead to entirely new kinds of computers. Already scientists have built “neuromorphic” electronic circuits that mimic aspects of the brain’s signaling network. We can build devices today with a million electronic neurons, and much larger systems are planned. Ultimately investigators should be able to build neuromorphic computers that function much faster than modern computers but require just a fraction of the power [see “Neuromorphic Microchips,” by Kwabena Boahen; Scientific American, May 2005].

Cell Chatter

Like many other neuroscientists, we often use the visual system as our test bed, in part because its basic wiring diagram is well understood. Timing of signals there and elsewhere in the brain has long been suspected of being a key part of the code the brain uses to decide whether information passing through the network is meaningful. Yet for many decades these ideas were neglected, because timing matters only when compared across different parts of the brain, and it was hard to measure the activity of more than one neuron at a time. Recently, however, the development of computer models of the nervous system and new results from experimental and theoretical neuroscience have spurred interest in timing as a way to better understand how neurons talk to one another.

Brain cells receive all kinds of inputs on different timescales. The microsecond-quick signal from the right ear must be reconciled with the slightly out-of-sync input from the left. These rapid responses contrast with the sluggish stream of hormones coursing through the bloodstream. The signals most important for this discussion, though, are the spikes, which are sharp rises in voltage that course through and between neurons. For cell-to-cell communication, spikes lasting a few milliseconds handle immediate needs. A neuron fires a spike after deciding that the inputs urging it to switch on outweigh those telling it to turn off. When the decision is made, a spike travels down the cell’s axon (somewhat akin to a branched electrical wire) to its tips. Then the signal is relayed chemically through junctions, called synapses, that link the axon with recipient neurons.
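
As a rough illustration of that firing decision, here is a toy integrate-and-fire sketch in Python; all of the constants are illustrative rather than measured physiological values.

```python
# A toy sketch of the firing decision described above: a leaky
# integrate-and-fire neuron sums excitatory and inhibitory input and
# emits a spike when the balance drives its voltage past threshold.
import numpy as np

rng = np.random.default_rng(0)
dt, tau, v_thresh, v_reset = 0.1, 10.0, 1.0, 0.0   # ms, ms, a.u., a.u.
v, spikes = 0.0, []

for step in range(10_000):                 # 1 second of simulated time
    excitation = rng.poisson(0.02)         # inputs urging the cell to fire
    inhibition = rng.poisson(0.01)         # inputs telling it to stay quiet
    v += (-v / tau) * dt + 0.3 * (excitation - inhibition)
    if v >= v_thresh:                      # "on" votes outweighed "off" votes
        spikes.append(step * dt)           # record spike time in ms
        v = v_reset

print(f"{len(spikes)} spikes in 1 second of simulated time")
```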

In each eye, 100 million photoreceptors in the retina respond to changing patterns of light. After the incoming light is processed by several layers of neurons, a million ganglion cells at the back of the retina convert these signals into a sequence of spikes that are relayed by axons to other parts of the brain, which in turn send spikes to still other regions that ultimately give rise to a conscious perception. Each axon can carry up to several hundred spikes each second, though more often just a few spikes course along the neural wiring. All that you perceive of the visual world—the shapes, colors and movements of everything around you—is coded into these rivers of spikes with varying time intervals separating them.

Monitoring the activity of many individual neurons at once is critical for making sense of what goes on in the brain but has long been extremely challenging. In 2010, though, E. J. Chichilnisky of the Salk Institute for Biological Studies in La Jolla, Calif., and his colleagues reported in Nature that they had achieved the monumental task of simultaneously recording all the spikes from hundreds of neighboring ganglion cells in monkey retinas. (Scientific American is part of Nature Publishing Group.) This achievement made it possible to trace the specific photoreceptors that fed into each ganglion cell. The growing ability to record spikes from many neurons simultaneously will assist in deciphering meaning from these codelike brain signals.

For years investigators have used several methods to interpret, or decode, the meaning in the stream of spikes coming from the retina. One method counts spikes from each axon separately over some period: the higher the firing rate, the stronger the signal. The information conveyed by a variable firing rate, a rate code, relays features of visual images, such as location in space, regions of differing light contrast, and where motion occurs, with each of these features represented by a given group of neurons.

Information is also transmitted by relative timing—when one neuron fires in close relation to when another cell spikes. Ganglion cells in the retina, for instance, are exquisitely sensitive to light intensity and can respond to a changing visual scene by transmitting spikes to other parts of the brain. When multiple ganglion cells fire at almost the same instant, the brain suspects that they are responding to an aspect of the same physical object. Horace Barlow, a leading neuroscientist at the University of Cambridge, characterized this phenomenon as a set of “suspicious coincidences.” Barlow referred to the observation that each cell in the visual cortex may be activated by a specific physical feature of an object (say, its color or its orientation within a scene). When several of these cells switch on at the same time, their combined activation constitutes a suspicious coincidence because it may only occur at a specific time for a unique object. Apparently the brain takes such synchrony to mean that the signals are worth noting because the odds of such coordination occurring by chance are slim.
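
The difference between the two readouts can be sketched in a few lines of Python; the spike trains below are invented, and the five-millisecond coincidence window is an assumption chosen for illustration.

```python
# Contrasting the two readouts described above, using made-up spike
# trains: a rate code counts spikes per cell over a window, while a
# coincidence detector flags "suspicious" near-simultaneous firing.
import numpy as np

rng = np.random.default_rng(1)
window_ms, coincidence_ms = 1000.0, 5.0

# Hypothetical spike times (ms) for three ganglion cells
trains = [np.sort(rng.uniform(0, window_ms, size=n)) for n in (40, 55, 38)]

# Rate code: spikes per second for each cell
rates = [len(t) / (window_ms / 1000) for t in trains]
print("firing rates (Hz):", rates)

# Timing code: count moments when all three cells fire within 5 ms
reference = trains[0]
coincidences = sum(
    all(np.min(np.abs(other - t)) < coincidence_ms for other in trains[1:])
    for t in reference
)
print("three-way coincidences:", coincidences)
# Chance coincidences are rare, so repeated synchrony is a strong hint
# that the cells are responding to the same object.
```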

 

Electrical engineers are trying to build on this knowledge to create more efficient hardware that incorporates the principles of spike timing when recording visual scenes. One of us (Delbruck) has built a camera that emits spikes in response to changes in a scene’s brightness, which enables the tracking of very fast moving objects with minimal processing by the hardware.
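
The underlying event-generation principle can be sketched as follows; this is an idealized model of a change-driven pixel, assuming a simple log-brightness threshold, not a description of the actual hardware design.

```python
# Idealized change-driven imaging: each pixel emits a signed "spike"
# whenever its log-brightness changes by more than a fixed threshold
# since that pixel's last event.
import numpy as np

THRESHOLD = 0.2  # log-intensity change needed to trigger an event

def events_from_frames(frames):
    """Yield (frame_index, y, x, polarity) events from a frame sequence."""
    log_ref = np.log(frames[0] + 1e-6)          # per-pixel reference level
    for k, frame in enumerate(frames[1:], start=1):
        log_now = np.log(frame + 1e-6)
        diff = log_now - log_ref
        for y, x in zip(*np.where(np.abs(diff) >= THRESHOLD)):
            yield (k, y, x, int(np.sign(diff[y, x])))
            log_ref[y, x] = log_now[y, x]       # reset reference at event

# Tiny demo: a bright spot moving one pixel per frame
frames = np.full((3, 4, 4), 0.1)
for k in range(3):
    frames[k, 1, k + 1] = 1.0
print(list(events_from_frames(frames)))
```

Only the changing pixels produce output, which is why such a sensor can track fast motion without processing full frames.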

Into the Cortex

New evidence adds to the case that the visual cortex attends to temporal cues to make sense of what the eye sees. The ganglion cells in the retina do not project directly to the cortex but relay signals through neurons in the thalamus, deep within the brain’s midsection. This region in turn must activate 100 million cells in the visual cortex in each hemisphere at the back of the brain before the messages are sent to higher brain areas for conscious interpretation.

We can learn something about which spike patterns are most effective in turning on cells in the visual cortex by examining the connections from relay neurons in the thalamus to cells known as spiny stellate neurons in a middle layer of the visual cortex. In 1994 Kevan Martin, now at the Institute of Neuroinformatics at the University of Zurich, and his colleagues reconstructed the thalamic inputs to the cortex and found that they account for only 6 percent of all the synapses on each spiny stellate cell. How, then, everyone wondered, does this relatively weak visual input, a mere trickle, manage to reliably communicate with neurons in all layers of the cortex?

Cortical neurons are exquisitely sensitive to fluctuating inputs and can respond to them by emitting a spike in a matter of a few milliseconds. In 2010 one of us (Sejnowski), along with Hsi-Ping Wang and Donald Spencer of the Salk Institute and Jean-Marc Fellous of the University of Arizona, developed a detailed computer model of a spiny stellate cell and showed that even though a single spike from only one axon cannot cause one of these cells to fire, the same neuron will respond reliably to inputs from as few as four axons projecting from the thalamus if the spikes from all four arrive within a few milliseconds of one another. Once inputs arrive from the thalamus, only a sparse subset of the neurons in the visual cortex needs to fire to represent the outline and texture of an object. Each spiny stellate neuron has a preferred visual stimulus from the eye that produces a high firing rate, such as the edge of an object with a particular angle of orientation.
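
A schematic version of that coincidence requirement might look like the following sketch; the four-input count and the few-millisecond window come from the result described above, while everything else is illustrative.

```python
# Schematic coincidence requirement: the model cell fires reliably only
# when spikes from at least four thalamic axons land within a few
# milliseconds of one another.
import numpy as np

rng = np.random.default_rng(2)
WINDOW_MS, REQUIRED_INPUTS = 3.0, 4

def cell_fires(arrival_times_ms):
    """True if >= REQUIRED_INPUTS spikes fall inside one WINDOW_MS span."""
    t = np.sort(arrival_times_ms)
    for i in range(len(t) - REQUIRED_INPUTS + 1):
        if t[i + REQUIRED_INPUTS - 1] - t[i] <= WINDOW_MS:
            return True
    return False

# Four synchronized inputs: ~0.5 ms of jitter around a common time
synchronized = 50 + rng.normal(0, 0.5, size=4)
# Four unsynchronized inputs: scattered over 100 ms
scattered = rng.uniform(0, 100, size=4)

print("synchronized inputs ->", cell_fires(synchronized))  # almost always True
print("scattered inputs   ->", cell_fires(scattered))      # almost always False
```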

In the 1960s David Hubel of Harvard Medical School and Torsten Wiesel, now at the Rockefeller University, discovered that each neuron in the relevant section of the cortex responds strongly to its preferred stimulus only if activation comes from a specific part of the visual field called the neuron’s receptive field. Neurons responding to stimulation in the fovea, the central region of the retina, have the smallest receptive fields—about the size of the letter e on this page. Think of them as looking at the world through soda straws. In the 1980s John Allman of the California Institute of Technology showed that visual stimulation from outside the receptive field of a neuron can alter its firing rate in reaction to inputs from within its receptive field. This “surround” input puts the feature that a neuron responds to into the context of the broader visual environment.

Stimulating the region surrounding a neuron’s receptive field also has a dramatic effect on the precision of spike timing. David McCormick, James Mazer and their colleagues at Yale University recently recorded the responses of single neurons in the cat visual cortex to a movie that was replayed many times. When they narrowed the movie image so that neurons triggered by inputs from the receptive field fired (no input came from the surrounding area), the timing of the signals from these neurons had a randomly varying and imprecise pattern. When they expanded the movie to cover the surrounding area outside the receptive field, the firing rate of each neuron decreased, but the spikes were precisely timed.

 

The timing of spikes also matters for other neural processes. Some evidence suggests that synchronized timing—with each spike representing one aspect of an object (color or orientation)—functions as a means of assembling an image from component parts. A spike for “pinkish red” fires in synchrony with one for “round contour,” enabling the visual cortex to merge these signals into the recognizable image of a flower pot.

Attention and Memory

Our story so far has tracked visual processing from the photoreceptors to the cortex. But still more goes into forming a perception of a scene. The activity of cortical neurons that receive visual input is influenced not only by those inputs but also by excitatory and inhibitory interactions between cortical neurons. Of particular importance for coordinating the many neurons responsible for forming a visual perception is the spontaneous, rhythmic firing of a large number of widely separated cortical neurons at frequencies below 100 hertz.

Attention—a central facet of cognition—may also have its physical underpinnings in sequences of synchronized spikes. It appears that such synchrony acts to emphasize the importance of a particular perception or memory passing through conscious awareness. Robert Desimone, now at the Massachusetts Institute of Technology, and his colleagues have shown that when monkeys pay attention to a given stimulus, the number of cortical neurons that fire synchronized spikes in the gamma band of frequencies (30 to 80 hertz) increases, and the rate at which they fire rises as well. Pascal Fries of the Ernst Strüngmann Institute for Neuroscience in cooperation with the Max Planck Society in Frankfurt found evidence for gamma-band signaling between distant cortical areas.

Neural activation of the gamma-frequency band has also attracted the attention of researchers who have found that patients with schizophrenia and autism show decreased levels of this type of signaling on electroencephalographic recordings. David Lewis of the University of Pittsburgh, Margarita Behrens of the Salk Institute and others have traced this deficit to a type of cortical neuron called a basket cell, which is involved in synchronizing spikes in nearby circuits. An imbalance of either inhibition or excitation of the basket cells seems to reduce synchronized activity in the gamma band and may thus explain some of the physiological underpinnings of these neurological disorders. Interestingly, patients with schizophrenia do not perceive some visual illusions, such as the tilt illusion, in which a person typically misjudges the tilt of a line because of the tilt of nearby lines. Similar circuit abnormalities in the prefrontal cortex may be responsible for the thought disorders that accompany schizophrenia.

When it comes to laying down memories, the relative timing of spikes seems to be as important as the rate of firing. In particular, the synchronized firing of spikes in the cortex is important for increasing the strengths of synapses—a key process in forming long-term memories. A synapse is said to be strengthened when the firing of a neuron on one side of the synapse leads the neuron on the other side to register a stronger response. In 1997 Henry Markram and Bert Sakmann, then at the Max Planck Institute for Medical Research in Heidelberg, discovered a strengthening process known as spike-timing-dependent plasticity: when an input at a synapse, delivered at a frequency in the gamma range, is consistently followed within 10 milliseconds by a spike from the neuron on the other side of the synapse, the synapse is strengthened and the receiving neuron’s firing is enhanced. Conversely, if the neuron on the other side fires within 10 milliseconds before the first one, the strength of the synapse between the cells decreases.
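
The rule lends itself to a compact sketch; the exponential decay and learning-rate constants below are common modeling conventions, not values from the Markram and Sakmann study.

```python
# A compact sketch of the spike-timing-dependent plasticity rule: a
# synapse strengthens when the presynaptic spike precedes the
# postsynaptic spike by up to ~10 ms and weakens when the order is
# reversed. Constants are illustrative modeling conventions.
import math

WINDOW_MS, LEARNING_RATE, TAU_MS = 10.0, 0.05, 5.0

def stdp_update(weight, t_pre_ms, t_post_ms):
    """Return the new synaptic weight given one pre/post spike pair."""
    dt = t_post_ms - t_pre_ms            # positive: pre fired first
    if 0 < dt <= WINDOW_MS:              # pre -> post: potentiation
        return weight + LEARNING_RATE * math.exp(-dt / TAU_MS)
    if -WINDOW_MS <= dt < 0:             # post -> pre: depression
        return weight - LEARNING_RATE * math.exp(dt / TAU_MS)
    return weight                        # outside the window: no change

w = 0.5
w = stdp_update(w, t_pre_ms=100.0, t_post_ms=103.0)   # strengthened
w = stdp_update(w, t_pre_ms=210.0, t_post_ms=205.0)   # weakened
print(f"final weight: {w:.3f}")
```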

Some of the strongest evidence that synchronous spikes may be important for memory comes from research by György Buzsáki of New York University and others on the hippocampus, a brain area that is important for remembering objects and events. The spiking of neurons in the hippocampus and the cortical areas that it interacts with is strongly influenced by synchronous oscillations of brain waves in a range of frequencies from four to eight hertz (the theta band), the type of neural activity encountered, for instance, when a rat is exploring its cage in a laboratory experiment. These theta-band oscillations can coordinate the timing of spikes and also have a more permanent effect in the synapses, which results in long-term changes in the firing of neurons.

 

A Grand Challenge Ahead

Neuroscience is at a turning point as new methods for simultaneously recording spikes in thousands of neurons help to reveal key patterns in spike timing and produce massive databases for researchers. Also, optogenetics—a technique for turning on genetically engineered neurons using light—can selectively activate or silence neurons in the cortex, an essential step in establishing how neural signals control behavior. Together, these and other techniques will help us eavesdrop on neurons in the brain and learn more and more about the secret code that the brain uses to talk to itself. When we decipher the code, we will not only achieve an understanding of the brain’s communication system, we will also start building machines that emulate the efficiency of this remarkable organ.

 

* By Terry Sejnowski and Tobi Delbruck  

ABOUT THE AUTHOR(S)

Terry Sejnowski is an investigator with the Howard Hughes Medical Institute and is Francis Crick Professor at the Salk Institute for Biological Studies, where he directs the Computational Neurobiology Laboratory.

Tobi Delbruck is co-leader of the sensors group at the Institute of Neuroinformatics at the University of Zurich.

 MORE TO EXPLORE

Terry Sejnowski’s 2008 Wolfgang Pauli Lectures on how neurons compute and communicate: www.podcast.ethz.ch/podcast/episodes/?id=607

Neuromorphic Sensory Systems. Shih-Chii Liu and Tobi Delbruck in Current Opinion in Neurobiology, Vol. 20, No. 3, pages 288–295; June 2010. http://tinyurl.com/bot7ag8

SCIENTIFIC AMERICAN ONLINE
Watch a video about a motion-sensing video camera that uses spikes for imaging at ScientificAmerican.com/oct2012/dvs

We make all sorts of ostensibly conscious and seemingly rational choices when we are aware of a potential risk. We eat organic food, max out on multivitamins and quickly forswear some products (even whole technologies) at the slightest hint of danger. We carry guns and vote for the candidate we think will keep us safe. Yet these choices are far from carefully considered — and, surprisingly often, they contravene reason. What’s more, while our choices about risk invariably feel right when we make them, many of these decisions end up putting us in greater peril.

 

Researchers in neuroscience, psychology, economics and other disciplines have made a range of discoveries about why human beings sometimes fear more than the evidence warrants, and sometimes less than the evidence warns. That science is worth reviewing at length. But one current issue offers a crash course in the most significant of these findings: the fear of vaccines, particularly vaccines for children.

In a 2011 Thomson Reuters/NPR poll, nearly one parent in three with a child under 18 was worried about vaccines, and roughly one American in four was concerned about the value and safety of vaccines in general. In the same poll, roughly one out of every five college-educated respondents worried that childhood vaccination was connected with autism; 7 percent said they feared a link with Type 1 diabetes.

Based on the evidence, these and most other concerns about vaccines are unfounded. A comprehensive report last year from the Institute of Medicine is just one of many studies to report that vaccines do not cause autism, diabetes, asthma or other major afflictions listed by the anti-vaccination movement.

Yet these fears, fierce and visceral, persist. To frustrated doctors and health officials, vaccine-phobia seems an irrational denial of the facts that puts both the unvaccinated child and the community at greater risk (as herd immunity goes down, disease spread rises). But the more we learn about how risk perception works, the more understandable — if still quite dangerous — the fear of vaccines becomes.

Along with many others, the cognitive psychologists Paul Slovic of the University of Oregon and Baruch Fischhoff of Carnegie Mellon University have identified several reasons something might feel more or less scary than mere reason might suppose. Humans subconsciously weigh the risks and benefits of any choice or course of action — and if taking a particular action seems to afford little or no benefit, the risk automatically feels bigger. Vaccinations are a striking example. As the subconscious mind might view it, vaccines protect children from diseases like measles and pertussis, or whooping cough, that are no longer common, so the benefit of vaccination feels small — and smaller still, perhaps, compared with even the minuscule risk of a serious side effect. (In actuality, outbreaks of both of these infections have been more common in recent years, according to the Centers for Disease Control and Prevention.) Contrast this with how people felt in the 1950s, in the frightening days of polio, when parents lined their kids up for vaccines that carried much greater risk than do the modern ones. The risk felt smaller then, because the benefit was abundantly clear.

Professor Slovic and Professor Fischhoff and others have found that a risk imposed upon a person, like mandatory vaccination programs (nearly all of which allow people to opt out), feels scarier than the same risk if taken voluntarily. Risk perception also depends on trust. A risk created by a source you don’t trust will feel scarier. The anti-vaccination movement is thick with mistrust of government and the drug industry. Finally, risks that are human-made, like vaccines, evoke more worry than risks that are natural. Some parents who refuse to have their kids vaccinated say they are willing to accept the risk of the disease, because the disease is “natural.”

Still, shouldn’t our wonderful powers of reason be able to overcome these instinctive impediments to clear thinking? The neuroscience of fear makes clear that such hope is hubris. Work on the neural roots of fear by the neuroscientist Joseph LeDoux of New York University, and others, has found that in the complex interplay of slower, conscious reason and quicker, subconscious emotion and instinct, the basic architecture of the brain ensures that we feel first and think second. The part of the brain where the instinctive “fight or flight” signal is first triggered — the amygdala — is situated such that it receives incoming stimuli before the parts of the brain that think things over. Then, in our ongoing response to potential peril, the way the brain is built and operates assures that we are likely to feel more and think less. As Professor LeDoux puts it in “The Emotional Brain”: “the wiring of the brain at this point in our evolutionary history is such that connections from the emotional systems to the cognitive systems are stronger than connections from the cognitive systems to the emotional systems.”

And so we have excessive fear of vaccines. But just as we are too afraid of some things, this same “feelings and facts” system works the other way too, sometimes leaving us inadequately concerned about bigger risks. A risky behavior you engage in voluntarily and that seems to afford plenty of benefit — think sun-tanning for that “nice, healthy glow” — feels less dangerous. A societal risk, well off in the future, tends not to trigger the same instinctive alarm — in part, because the hazard isn’t singling any one of us out, individually. This helps explain why concern over climate change is broad, but thin.

Though it may be prone to occasional errors, our risk-perception system isn’t all bad. After all, it has gotten us this far through evolution’s gantlet. But a system that relies so heavily on emotion and instinct sometimes produces risk perceptions that don’t match the evidence, a “risk perception gap” that can be a risk in itself. We do have to fear the dangers of fear itself.

In this remarkable era of discovery about how our brains operate, we have discovered a great deal about why the gap occurs, and we can — and should — put our detailed knowledge of risk perception to use in narrowing the risk-perception gap and reducing its dangers. As the Italian philosopher Nicola Abbagnano advised, “Reason itself is fallible, and this fallibility must find a place in our logic.” Accepting that risk perception is not so much a process of pure reason, but rather a subjective combination of the facts and how those facts feel, might be just the step in the human learning curve we need to make. Then, maybe, we’ll start making smarter decisions about vaccines and other health matters.

 

By DAVID ROPEIK, September 28, 2012

David Ropeik is an instructor at the Harvard Extension School and the author of “How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts.”

Research on the link between implicit race preference and brain activity could be used to prevent unintended consequences of race bias


How the brain responds to and processes images of people from different racial groups is an emerging field of investigation that could have major implications for society. Psychologist Elizabeth Phelps of New York University, who in 2000 led one of the first studies in this area, tells Nature what her latest review of the field reveals about the neuroscience of race.

What does psychology tell us about race?
Social psychologists differentiate between the attitudes that people express and their implicit preferences. This can be studied using the implicit association task, which measures initial, evaluative responses. It involves asking people to pair concepts such as black and white with concepts like good and bad. What you find is that most white Americans take longer to make a response that pairs black with good and white with bad than vice versa. This reveals their implicit preferences.
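
A bare-bones sketch of how such latencies yield an implicit-preference score might look like this; the trial data are invented for illustration.

```python
# Deriving an implicit-association effect from response latencies:
# slower responses on "incongruent" pairings than on "congruent" ones
# indicate an implicit preference. Trial data are invented.
import statistics

# (pairing condition, response time in ms) for a hypothetical subject
trials = [
    ("congruent", 620), ("congruent", 655), ("congruent", 600),
    ("incongruent", 750), ("incongruent", 790), ("incongruent", 720),
]

def mean_rt(condition):
    return statistics.mean(rt for cond, rt in trials if cond == condition)

# A positive difference means the incongruent pairing took longer,
# the pattern the interview describes for most white Americans.
iat_effect_ms = mean_rt("incongruent") - mean_rt("congruent")
print(f"IAT effect: {iat_effect_ms:.0f} ms")
```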

What did your review of the neuroscience literature show?
My colleagues and I found that there’s a network of brain regions that is consistently activated in neuroimaging studies of race processing. This network overlaps with the circuits involved in decision-making and emotion regulation, and includes the amygdala, fusiform face area (FFA), anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (DLPFC).

What did your previous work show?
Our 2000 study was the first to link race preference to brain activity. We measured the eye-blink startle, a reflex response that people display when they hear a loud noise, for example. A lot of studies have shown that this reflex is potentiated [enhanced] when people are anxious or in the presence of something they think is negative. We found that implicit preferences were correlated with potentiated startle, and that both were correlated with the amount of amygdala activation.

How does the neuroscience fit with the psychological model?
Activity in the FFA isn’t surprising, because all of these studies use photos of faces. The amygdala is involved in emotions, and might be linked to the automatic evaluations we make when we see people from other racial groups. We think that the ACC and DLPFC are involved in more complex functions. People tend to show unintentional indications of race bias, even when they are motivated to be non-prejudiced, so the ACC may be involved in detecting these conflicts. You can have an implicit bias and choose not to act on it, and the DLPFC may be trying to regulate the emotional responses that conflict with our egalitarian goals and beliefs.

What about people who are overtly prejudiced?
Finding differences in people with extreme views wouldn’t be too surprising, but I’m not sure we’d see anything more than an exaggerated [emotional] response. We’re more interested in ‘normal’ people. Those who are more internally motivated to be non-prejudiced show greater ACC activity, whereas those who hold extreme views obviously have explicit, intentional race bias and don’t care about controlling their emotional responses.

What are the societal implications of this research?
Most white Americans we studied show an implicit preference for their own group. They don’t have bad intentions, but because they’ve associated black people with, say, criminality so many times, their decisions are infused with that association, whether or not they believe it’s accurate. There’s evidence of unintentional race bias at every stage of the legal process. Despite the fact that it aims to be egalitarian, sentencing is vastly different for African Americans. The bias is also there in employment.

How should this research progress?
We need to investigate how our implicit preferences are linked to the choices and decisions we make. We want to use this knowledge to reduce the unintended consequences of race bias — the things we do that aren’t consistent with our beliefs. One problem is the lack of funding for this type of work. It’s very hard to fund this kind of research because it’s not really relevant to health. One way to go would be to apply the sophisticated tools of neuroeconomics to investigate how unintentional bias affects our decision making. The research could also be linked to emerging work on controlling emotions.

 

* This article is reproduced with permission from the magazine Nature. The article was first published on June 26, 2012.