April 2011


WE all enjoy speculating about which Arab regime will be toppled next, but maybe we should be looking closer to home. High unemployment? Check. Out-of-touch elites? Check. Frustrated young people? As a 24-year-old American, I can testify that this rich democracy has plenty of those too.
About one-fourth of Egyptian workers under 25 are unemployed, a statistic that is often cited as a reason for the revolution there. In the United States, the Bureau of Labor Statistics reported in January an official unemployment rate of 21 percent for workers ages 16 to 24.


My generation was taught that all we needed to succeed was an education and hard work. Tell that to my friend from high school who studied Chinese and international relations at a top-tier college. He had the misfortune to graduate in the class of 2009, and could find paid work only as a lifeguard and a personal trainer. Unpaid internships at research institutes led to nothing. After more than a year he moved back in with his parents.



Millions of college graduates in rich nations could tell similar stories. In Italy, Portugal and Spain, about one-fourth of college graduates under the age of 25 are unemployed. In the United States, the official unemployment rate for this group is 11.2 percent, but for college graduates 25 and over it is only 4.5 percent.


The true unemployment rate for young graduates is most likely even higher, because the official figure does not count those who went to graduate school in an attempt to ride out the economic storm or who fled the country to teach English overseas. It would be higher still if it included all of the young graduates who have given up looking for full-time work and are working part time for lack of any alternative.



The cost of youth unemployment is not only financial, but also emotional. Having a job is supposed to be the reward for hours of SAT prep, evenings spent on homework instead of with friends and countless all-nighters writing papers. The millions of young people who cannot get jobs or who take work that does not require a college education are in danger of losing their faith in the future. They are indefinitely postponing the life they wanted and prepared for; all that matters is finding rent money. Even if the job market becomes as robust as it was in 2007 — something economists say could take more than a decade — my generation will have lost years of career-building experience.



It was simple to blame Hosni Mubarak for the frustrations of Egypt’s young people — he had been in power longer than they had been alive. Barack Obama is not such an easy target; besides his democratic legitimacy, he is far from the only one responsible for the weakness of the recovery. In the absence of someone specific to blame, the frustration simply builds.


As governments across the developed world balance their budgets, I fear that the young will bear the brunt of the pain: taxes on workers will be raised and spending on education will be cut while mortgage subsidies and entitlements for the elderly are untouchable. At least the Saudis and Kuwaitis are trying to bribe their younger subjects.

The uprisings in the Middle East and North Africa are a warning for the developed world. Even if an Egyptian-style revolution in a rich democracy is unthinkable, the frustration of a generation that lacks opportunity is easy to recognize. Indeed, the “desperate generation” in Portugal drew tens of thousands of people into nationwide protests on March 12. How much longer until the rest of the rich world follows their lead?


By MATTHEW C. KLEIN (March 20, 2011)
Matthew C. Klein is a research associate at the Council on Foreign Relations.

As worries grow over radiation leaks at Fukushima, is it possible to gauge the immediate and lasting health effects of radiation exposure? Here’s the science behind radiation sickness and other threats facing Japan.


The developing crisis at the Fukushima Daiichi nuclear power plant in the wake of the March 11 earthquake and tsunami has raised concerns over the health effects of radiation exposure: What is a “dangerous” level of radiation? How does radiation damage health? What are the consequences of acute and long-term low-dose radiation?

Though radioactive steam has been released to reduce pressure within the wrecked complex’s reactors and there has been additional radiation leakage from the three explosions there, the resulting spikes in radiation levels have not been sustained. The highest radiation level reported thus far was a pulse of 400 millisieverts per hour at reactor No. 3, measured at 10:22 A.M. local time March 15. (A sievert is a unit of ionizing radiation equal to 100 rems; a rem is a dosage unit of x-ray and gamma-ray radiation exposure.) The level of radiation decreases dramatically as distance from the site increases. Radiation levels in Tokyo, about 220 kilometers to the southwest, have been reported to be only slightly above normal.
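For readers more accustomed to the older unit, the definitions above make the conversion of the peak reading direct:

\[ 400\ \mathrm{mSv/h} = 0.4\ \mathrm{Sv/h} = 40\ \mathrm{rem/h}. \]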

“We are nowhere near levels where people should be worried,” says Susan M. Langhorst, a health physicist and the radiation safety officer at Washington University in Saint Louis.

According to Abel Gonzalez, vice chairman of the International Commission on Radiological Protection, who studied the 1986 Chernobyl disaster, the current information coming from Japan about levels of radiation leakage is incomplete at best, and speculation about “worst-case scenarios” is as yet irrelevant.

The health effects caused by radiation exposure depend on its level, type and duration.


Radiation level:


The average person is exposed to 2 to 3 millisieverts of background radiation per year from a combination of cosmic radiation and emissions from building materials and natural radioactive substances in the environment.

The U.S. Nuclear Regulatory Commission recommends that beyond this background level, the public limit their exposure to less than an additional one millisievert per year. The U.S. limit for radiation workers is 50 millisieverts annually, although few workers are exposed to anything approaching that amount. For patients undergoing medical radiation there is no strict exposure limit—it is the responsibility of medical professionals to weigh the risks and benefits of radiation used in diagnostics and treatment, according to Langhorst. A single CT scan, for example, can expose a patient to more than one millisievert.

Radiation sickness (or acute radiation syndrome) usually sets in after a whole-body dose of three sieverts—3,000 times the recommended public dose limit per year, Langhorst says. The first symptoms of radiation sickness—nausea, vomiting, and diarrhea—can appear within minutes or in days, according to the U.S. Centers for Disease Control and Prevention. A period of serious illness, including appetite loss, fatigue, fever, gastrointestinal problems, and possible seizures or coma, may follow and last from hours to months.
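Putting the peak reading and this threshold together gives a rough sense of scale (a back-of-the-envelope illustration, not a reported figure): at the highest reported rate of 400 millisieverts per hour, accumulating the three-sievert whole-body dose associated with radiation sickness would take

\[ \frac{3\ \mathrm{Sv}}{0.4\ \mathrm{Sv/h}} \approx 7.5\ \text{hours} \]

of continuous, unshielded exposure at the worst reported spot—and the actual dose received falls off sharply with distance, shielding, and the brevity of the spikes.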


Radiation type:


Of concern in the current situation is ionizing radiation, which is produced by spontaneously decaying heavy isotopes, such as iodine 131 and cesium 137. (Isotopes are species of the same element, albeit with different numbers of neutrons and hence different atomic masses.) This type of radiation has sufficient energy to ionize atoms (usually creating a positive charge by knocking out electrons), thereby giving them the chemical potential to react deleteriously with the atoms and molecules of living tissues.

Ionizing radiation takes different forms: In gamma and x-ray radiation, atoms release energetic photons that are powerful enough to penetrate the body. Alpha and beta particle radiation is lower in energy and far less penetrating—alpha particles can be blocked by just a sheet of paper. If radioactive material is ingested or inhaled into the body, however, it is actually the lower-energy alpha and beta radiation that becomes the more dangerous. That’s because a large portion of gamma and x-ray radiation will pass directly through the body without interacting with the tissue (considering that at the atomic level, the body is mostly empty space), whereas alpha and beta radiation, unable to penetrate far into tissue, will expend all their energy by colliding with the atoms in the body and likely cause more damage.

In the Fukushima situation, the radioactive isotopes detected, iodine 131 and cesium 137, emit both gamma and beta radiation. These radioactive elements are by-products of the fission reaction that generates power in the nuclear plants.

The Japanese government has evacuated 180,000 people from within a 20-kilometer radius of the Fukushima Daiichi complex. It is urging people within 30 kilometers of the plant to remain indoors, close all windows, and change clothes and wash exposed skin after going outside. These measures are mainly aimed at reducing the potential for inhaling or ingesting beta-emitting radioactive material.


Exposure time:


A very high single dose of radiation (acquired within minutes) can be more harmful than the same dosage accumulated over time. According to the World Nuclear Association, a single one-sievert dose is likely to cause temporary radiation sickness and a lower white blood cell count, but is not fatal. A single five-sievert dose would likely kill half of those exposed within a month. At 10 sieverts, death occurs within a few weeks.

The effects of long-term, low-dose radiation are much more difficult to gauge. DNA damage from ionizing radiation can cause mutations that lead to cancer, especially in tissues with high rates of cell division, such as the gastrointestinal tract, reproductive cells and bone marrow. But the increase in cancer risk is so small as to be difficult to determine without studying a very large exposed population. As an example, according to Langhorst, exposing 10,000 people to a 0.01-sievert whole-body dose of radiation would potentially increase the total number of cancers in that population by eight. The normal prevalence of cancer, however, would predict 2,000 to 3,300 cancer cases in a population of 10,000, so “how do you see eight excess cancers?” Langhorst asks.
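The arithmetic behind that example follows the standard linear extrapolation: multiply the collective dose by a fixed risk coefficient. A sketch, with the coefficient inferred from Langhorst’s figures rather than stated here:

\[ 10{,}000\ \text{people} \times 0.01\ \mathrm{Sv} = 100\ \text{person-Sv}, \qquad 100\ \text{person-Sv} \times 0.08\ \tfrac{\text{cancers}}{\text{person-Sv}} = 8\ \text{excess cancers}. \]

Against an expected background of 2,000 to 3,300 cancers in the same population, an excess of eight is statistically invisible.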



Chernobyl’s lessons:

According to Gonzalez, some of the emergency workers at Chernobyl received several sieverts of radiation, and many were working “basically naked” due to the heat, allowing contaminated powder to be absorbed through their skin. In comparison, the Japanese workers are most likely well equipped and protected, at least from direct skin doses.

The Tokyo Electric Power Co. (TEPCO), the plant’s owner, has evacuated most of its workers, but 50 remain at the site to pump cooling seawater into the reactors and prevent more explosions. These workers are likely exposing themselves to high levels of radiation and braving significant health risks. “As a matter of precaution, I would limit the workers’ exposure to 0.1 sievert and I would rotate them,” Gonzalez says. The workers should be wearing personal detectors that calculate both the rate and total dose of radiation and that set off alarms when maximum doses are reached. “If the dose of the workers starts to approach one sievert, then the situation is serious,” he says.

The thousands of children who became sick in the aftermath of the Chernobyl disaster were not harmed from direct radiation or even from inhalation of radioactive particles, but from drinking milk contaminated with iodine 131. The isotope, released by the Chernobyl explosion, had contaminated the grass on which cows fed, and the radioactive substance accumulated in cows’ milk. Parents, unaware of the danger, served contaminated milk to their children. “Certainly this will not happen in Japan,” Gonzalez says.

When it comes to radiation exposure, professionals who frequently work with radioactive materials, whether in a hospital or a nuclear power plant, abide by the ALARA principle: “as low as reasonably achievable.” Radiation exposure limits are conservatively set well below the levels known to induce radiation sickness or suspected of causing long-term health effects. Temporary exposure to dosages many times these limits, however, is not necessarily dangerous.

News of the U.S. Navy repositioning its warships upwind of the reactor site, the distribution of potassium iodide pills by the Japanese government, and images of officials in hazmat suits using Geiger counters to measure radiation levels among babies may stoke the public’s fears—but, for now, these measures are ALARA in action, or “good extra precautions,” Gonzalez says. The idea here is to always err on the side of caution.

By Nina Bai | Tuesday, March 15, 2011

JAPAN is still reeling from the earthquake and tsunami that struck its north-east coast on March 11th, with the government struggling to contain a nuclear disaster and around 10,000 people still unaccounted for.


Provisional estimates released today by the World Bank put the economic damage resulting from the disaster at as much as $235 billion, around 4% of GDP. That figure would make this disaster the costliest since comparable records began in 1965.

The Indian Ocean tsunami in 2004, which caused some 250,000 deaths, does not rank among the costliest disasters in these terms. Economic losses there amounted to only $14 billion in today’s prices, partly because of low property and land values in the affected areas.

By The Economist online, March 21, 2011

It’s only a matter of time—in fact, they’ve already started cropping up—before reality-challenged individuals begin pontificating about what God could have possibly been so hot-and-bothered about to trigger last week’s devastating earthquake and tsunami in Japan. (Surely, if we were to ask Westboro Baptist Church members, it must have something to do with the gays.) But from a psychological perspective, what type of mind does it take to see unexpected natural events such as the horrifying scenes still unfolding in Japan as “signs” or “omens” related to human behaviors?


In the summer of 2005, my University of Arkansas colleague Becky Parker and I began the first experimental study to investigate the psychology underlying this strange phenomenon. In this experiment, published the following year in Developmental Psychology, we invited a group of three- to nine-year-old children into our lab and told them they were about to play a fun guessing game. It was a simple game in which each child was tested individually. The child was asked to go to the corner of the room and to cover his or her eyes before coming back and guessing which of two large boxes contained a hidden ball. All the child had to do was place a hand on the box that he or she believed contained the ball. A short time was allowed for the decision but, importantly, during that interval the children could change their minds at any point by moving their hand to the other box. The final answer on each of the four trials was reflected simply by where the child’s hand was when the experimenter said, “Time’s up!” Children who guessed right won a sticker prize.


In reality, the game was a little more complicated than this. There were secretly two balls, one in each box, and we had decided in advance whether the children were going to get it “right” or “wrong” on each of the four guessing trials. At the conclusion of each trial, the child was shown the contents of only one of the boxes. The other box remained closed. For example, for “wrong” guesses, only the unselected box was opened, and the child was told to look inside (“Aw, too bad. The ball was in the other box this time. See?”). Children who had been randomly assigned to the control condition were told that they had been successful on a random two of the four trials. Children assigned to the experimental condition received some additional information before starting the game. These children were told that there was a friendly magic princess in the room, “Princess Alice,” who had made herself invisible. We showed them a picture of Princess Alice hanging against the door inside the room (one that looked remarkably like Barbie), and we gave them the following information: “Princess Alice really likes you, and she’s going to help you play this game. She’s going to tell you, somehow, when you pick the wrong box.” We repeated this information right before each of the four trials, in case the children had forgotten.


For every child in the study, whether assigned to the standard control condition (“No Princess Alice”) or to the experimental condition (“Princess Alice”), we engineered the room such that a spontaneous and unexpected event would occur just as the child placed a hand on one of the boxes. For example, in one case, the picture of Princess Alice came crashing to the floor as soon as the child made a decision, and in another case a table lamp flickered on and off. (We didn’t have to consult with Industrial Light & Magic to rig these surprise events; we just arranged for an undergraduate student to lift a magnet on the other side of the door to make the picture fall, and we hid a remote control for the table lamp surreptitiously in the experimenter’s pocket.) The predictions were clear: if the children in the experimental condition interpreted the picture falling and the light flashing as a sign from Princess Alice that they had chosen the wrong box, they would move their hand to the other box.

What we found was rather surprising, even to us. Only the oldest children, the seven- to nine-year-olds, from the experimental (Princess Alice) condition, moved their hands to the other box in response to the unexpected events. By contrast, their same-aged peers from the control condition failed to move their hands. This finding told us that the explicit concept of a specific supernatural agent—likely acquired from and reinforced by cultural sources—is needed for people to see communicative messages in natural events. In other words, children, at least, don’t automatically infer meaning in natural events without first being primed somehow with the idea of an identifiable supernatural agent such as Princess Alice (or God, one’s dead mother, angels, etc.).

More curious, though, was the fact that the slightly younger children in the study, even those who had been told about Princess Alice, apparently failed to see any communicative message in the light-flashing or picture-falling events. These children kept their hands just where they were. When we asked them later why these things happened, these five- and six-year-olds said that Princess Alice had caused them, but they saw her as simply an eccentric, invisible woman running around the room knocking pictures off the wall and causing the lights to flicker. To them, Princess Alice was like a mischievous poltergeist with attention deficit disorder: she did things because she wanted to, and that’s that. One of these children answered that Princess Alice had knocked the picture off the wall because she thought it looked better on the ground. In other words, they completely failed to see her “behavior” as having any meaningful connection with the decision they had just made on the guessing game; they saw no “signs” there.


The youngest children in the study, the three- and four-year-olds in both conditions, only shrugged their shoulders or gave physical explanations for the events, such as the picture not being sticky enough to stay on the wall or the light being broken. Ironically, these youngest children were actually the most scientific of the bunch, perhaps because they interpreted “invisible” to mean simply “not present in the room” rather than “transparent.” Contrary to the common assumption that superstitious beliefs represent a childish mode of sloppy and undeveloped thinking, therefore, the ability to be superstitious actually demands some mental sophistication. At the very least, it’s an acquired cognitive skill.

Still, the real puzzle to our findings was to be found in the reactions of the five- and six-year-olds from the Princess Alice condition. Clearly they possessed the same understanding of invisibility as did the older children, because they also believed Princess Alice caused these spooky things to happen in the lab. Yet although we reminded these children repeatedly that Princess Alice would tell them, somehow, if they chose the wrong box, they failed to put two and two together. So what is the critical change between the ages of about six and seven that allows older children to perceive natural events as being communicative messages about their own behaviors (in this case, their choice of box) rather than simply the capricious, arbitrary actions of some invisible or otherwise supernatural entity?

The answer probably lies in the maturation of children’s theory-of-mind abilities in this critical period of brain development. Research by University of Salzburg psychologist Josef Perner, for instance, has revealed that it’s not until about the age of seven that children are first able to reason about “multiple orders” of mental states. This is the type of everyday, grown-up social cognition whereby theory of mind becomes effortlessly layered in complex, soap opera–style interactions with other people. Not only do we reason about what’s going on inside someone else’s head, but we also reason about what other people are reasoning is happening inside still other people’s heads! For example, in the everyday (nonsupernatural) social domain, one would need this kind of mature theory of mind to reason in the following manner:

“Jakob thinks that Adrienne doesn’t know I stole the jewels.”
Whereas a basic (“first-order”) theory of mind allows even a young preschooler to understand the first propositional clause in this statement, “Jakob thinks that . . . ,” it takes a somewhat more mature (“second-order”) theory of mind to fully comprehend the entire social scenario: “Jakob thinks that [Adrienne doesn’t know] . . .”

Most people can’t go much beyond four orders of mental-state reasoning (consider the Machiavellian complexities of, say, Leo Tolstoy’s novels), but studies show that the absolute maximum in adults hovers around seven orders of mental state. The important thing to note is that, owing to their still-developing theory-of-mind skills, children younger than seven years of age have great difficulty reasoning about multiple orders of mental states. Knowing this then helps us understand the surprising results from the Princess Alice experiment. To pass the test (move their hand) in response to the picture falling or the light flashing, the children essentially had to be reasoning in the following manner:

“Princess Alice knows that [I don’t know] where the ball is hidden.”

To interpret the events as communicative messages, as being about their choice on the guessing game, demands a sort of third-person perspective of the self’s actions: “What must this other entity, who is watching my behavior, think is happening inside my head?” The Princess Alice findings are important because they tell us that, before the age of seven, children’s minds aren’t quite cognitively ripe enough to allow them to be superstitious thinkers. The inner lives of slightly older children, by contrast, are drenched in symbolic meaning. One second-grader was even convinced that the bell in the nearby university clock tower was Princess Alice “talking” to him.

Princess Alice may not have the je ne sais quoi of Mother Mary or the fiery charisma of the Abrahamic God we’re all familiar with, but she’s arguably a sort of empirically constructed god-by-proxy in her own right. The point is, the same basic cognitive processes—namely, a mature theory of mind—are also involved in the believer’s sense of receiving divine guidance from these other members of the more popular holy family. When people ask God to give them a sign, they’re often at a standstill, a fork in the road, paralyzed in a critical moment of existential ambivalence. In such cases, our ears are pricked, our eyes widened, our thoughts ruminating on a particular problem—often “only God knows” what’s on our minds and the extent to which we’re struggling to make a decision. It’s not questions like whether we should choose a different box, but rather decisions such as these: Should I stay with this person or leave him? Should I risk everything, start all over in a new city, or stay here where I’m stifled and bored? Should I have another baby? Should I continue receiving harsh treatment for my disease, or should I just pack it in and call it a life? Just like the location of the hidden ball inside one of those two boxes, we’re convinced that there’s a right and a wrong answer to such important life questions. And for most of us, it’s God, not Princess Alice, who holds the privileged answers.

God doesn’t tell us the answers directly, of course. There’s no nod to the left, no telling elbow poke in our side or “psst” in our ear. Rather, many envision God, and other entities like Him, as encrypting strategic information in an almost infinite array of natural events: the prognostic stopping of a clock at a certain hour and time; the sudden shrieking of a hawk; an embarrassing blemish on our nose appearing on the eve of an important interview; a choice parking spot opening up at a crowded mall just as we pull around; an interesting stranger sitting next to us on a plane. The possibilities are endless. When the emotional climate is just right, there’s hardly a shape or form that “evidence” cannot assume. Our minds make meaning by disambiguating the meaningless.

This sign-reading tendency has a distinct and clear relationship with morality. When it comes to unexpected heartache and tragedy, our appetite for unraveling the meaning of these ambiguous “messages” can become ravenous. Misfortunes appear cryptic, symbolic; they seem clearly to be about our behaviors. Our minds restlessly gather up bits of the past as if they were important clues to what just happened. And no stone goes unturned. Nothing is too mundane or trivial; anything to keep our peripatetic thoughts from arriving at the unthinkable truth that there is no answer because there is no riddle, that life is life and that is that.

By Jesse Bering | Sunday, March 13, 2011

(Author’s note: Some of the foregoing material is excerpted, with edits, from my new book, The Belief Instinct: The Psychology of Souls, Destiny and the Meaning of Life.)


As we watch the images rolling in from Japan, we are yet again reminded of the sudden destructive potential of Mother Earth. The number of fatalities is currently in the hundreds; the number displaced from their homes is in the tens of thousands.


The tsunami generated by this magnitude 8.9 earthquake sent a wall of water sweeping across Japan, and across the Pacific. It was more than 30 feet high in places and reached six miles inland, carrying cars, homes and everything else with it. Although the earthquake struck 230 miles northeast of Tokyo, it caused the worst shaking that people have felt in a city used to earthquakes. Explosions at the Fukushima Daiichi Nuclear Power Station have leaked radioactive material into the surrounding area, and we will undoubtedly hear of other catastrophic impacts over the next few days.

But it could have been much worse. The 2010 Haiti earthquake was magnitude 7; Japan’s earthquake released almost 1,000 times more energy than the Haiti event. Yet it is estimated that more than 200,000 people were killed in Haiti, compared with the current estimate of hundreds in Japan. The reason for this difference is that Japan is one of the most earthquake-ready countries on Earth; Haiti was not.
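That energy comparison follows from the standard relation between magnitude and radiated seismic energy, in which each additional unit of magnitude corresponds to roughly 32 times more energy. As a rough check on the figure above:

\[ \frac{E_{M8.9}}{E_{M7.0}} = 10^{1.5\,(8.9-7.0)} = 10^{2.85} \approx 700, \]

the same order of magnitude as the nearly thousandfold difference quoted here.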

For decades Japan has steadily pushed the limits of earthquake preparedness. It invests in research and development to understand the earthquake process and to create infrastructure better able to withstand future shocks. Its state-of-the-art buildings shake but do not collapse. Classes about earthquakes in its schools make earthquake preparedness part of everyone’s lifestyle, and regular public earthquake drills reinforce this for a lifetime. Its seismic networks, the best in the world, provide a tsunami warning system and, more recently, an earthquake warning system that provided tens of seconds of warning in this earthquake.

This long-term investment that Japan has made to reduce the impact of earthquakes seems like a very good deal today. It has undoubtedly saved many thousands of lives, and will also reduce the long-term impact of the earthquake on the economy as Japan rapidly bounces back. The investment will pay for itself many times over for this earthquake, and the next.

In the U.S. we also have an earthquake problem. Our West Coast cities are built atop active fault zones that give us occasional jolts reminding us of their presence. The 1989 magnitude 7.0 Loma Prieta earthquake was one such reminder, as was the 1994 magnitude 6.7 Northridge earthquake. Both events were moderate in size, and the strongest shaking was in unpopulated mountainous areas. We have not seen the true power of West Coast earthquakes since 1906, when a magnitude 8 earthquake destroyed San Francisco. Los Angeles, the San Francisco Bay Area or Seattle could be next.

Today, we should not have any illusions about the ability of an earthquake to bring widespread destruction to a modern city. We most recently experienced the might of Mother Earth in the U.S. when Hurricane Katrina hit New Orleans in 2005. In addition to the immediate destruction caused by the widespread flooding, New Orleans also stands as a testament to the long-term effects of these events on our cities. The recent census count shows that the New Orleans population is still down almost one third from the last pre-Katrina count.

So what is our fate on the west coast? Do we follow Japan’s lead, or do we fall back in the direction of Haiti? We must use this terrible event in Japan as a reminder to redouble our efforts to build an earthquake resilient society. We need to invest in the research and the development that brings about better earthquake safety. We must push the limits of our technologies to deliver new earthquake mitigation strategies.

Modern buildings are built to standards that make them unlikely to collapse, but we need to focus on improving older buildings to bring them up to modern standards. We need more education about earthquake preparedness in our schools, and large-scale drills such as the California ShakeOut. And we need a warning system like the one that delivered a warning in Japan. A prototype is operational in California; with only a moderate investment, public warnings could be available statewide. Perhaps this warning from Japan can spur that investment now. We will be very glad we made it when the next earthquake strikes.


By Richard Allen, March 12, 2011

Richard M. Allen is the Associate Director of the University of California, Berkeley, Seismological Laboratory and an associate professor in the university’s Department of Earth & Planetary Science.


Actor Charlie Sheen, known for his heavy cocaine use, has been stating in interviews that he freed himself of his drug habit. How likely is that?


When asked recently on The Today Show how he cured himself of his addiction, Two and a Half Men sitcom star Charlie Sheen replied, “I closed my eyes and made it so with the power of my mind.”

Until last month he was the highest-paid actor on TV, despite his well-known bad-boy lifestyle and persistent problems with alcohol and cocaine. After producers canceled the rest of his season’s shows, Sheen went on an interview tear filled with bizarre statements, including that he is on a “winning” streak. His claim of quitting a serious drug habit on his own, however, is perhaps one of his least eccentric statements.

A prevailing view of substance abuse, supported by both the National Institute on Drug Abuse and Alcoholics Anonymous, is the disease model of addiction. The model attributes addiction largely to changes in brain structure and function. Because these changes make it much harder for the addict to control substance use, health experts recommend professional treatment and complete abstinence.

But some in the field point out that many if not most addicts successfully recover without professional help. A survey by Gene Heyman, a research psychologist at McLean Hospital in Massachusetts, found that between 60 and 80 percent of people who were addicted in their teens and 20s were substance-free by their 30s, and they avoided addiction in subsequent decades. Other studies on Vietnam War veterans suggest that the majority of soldiers who became addicted to narcotics overseas later stopped using them without therapy.

Scientific American spoke with Sally Satel, a resident scholar at the American Enterprise Institute for Public Policy Research and lecturer in psychiatry at the Yale University School of Medicine, about quitting drugs without professional treatment. Satel was formerly a staff psychiatrist at the Oasis Clinic in Washington, D.C., where she worked with substance abuse patients.

[An edited transcript of the interview follows.]

Is it possible to cure yourself of addiction without professional help? How often does that happen?

Of course it’s possible. Most people recover and most people do it on their own. That’s in no way saying that everyone should be expected to quit on their own and in no way denies that quitting is a hard thing to do. This is just an empirical fact. It is even possible that those who quit on their own could have quit earlier if they sought professional help. The implicit message isn’t that treatment isn’t important for many—in fact it should probably be made more accessible—but it is simply a fact that most people cure themselves.

How do addicts stop on their own?

They have to be motivated. It takes the realization that their family, their future, their employment—all these—are becoming severely compromised. The subtext isn’t that they just “walk away” from the addiction. But I’ve had a number of patients in the clinic whose six-year-old says, “Why don’t you ever come to my ball games?” This can prompt a crisis of identity causing the addict to ask himself, “Is this the type of father I want to be?”

If not, there are lots of recovery strategies that users figure out themselves. For example, they change whom they associate with. They can make it harder to access drugs, perhaps by never carrying cash with them. People will put obstacles in front of themselves. True, some people decide they can’t do it on their own and decide to go into treatment—that’s taking matters into one’s own hands, too.


What do professional drug addiction programs offer that is difficult to replicate on one’s own?


If you’re already in treatment, you’ve made a big step. Even for court-ordered treatment, people often internalize the decision as their own. You get a lot of support. You get instruction in formal relapse prevention therapy. You might get methadone for withdrawal and medications for an underlying psychiatric problem.

Most experts regard drug addiction as a brain disease. Do you agree?

I’m critical of the standard view promoted by the National Institute on Drug Abuse that addiction is a brain disease. Naturally, every behavior is mediated by the brain, but the language “brain disease” carries the connotation that the afflicted person is helpless before his own brain chemistry. That is too fatalistic.

It also overlooks the enormously important truth that addicts use drugs to help them cope in some manner—that, as destructive as they are, drugs also serve a purpose. This recognition is very important for designing personalized therapies.


Don’t most studies show that addicts do better with professional help?


People who come to treatment tend to have concurrent psychiatric illness, and they also tend to be less responsive to treatment. Most research is done on people in a treatment program, so by definition you’ve already got a skewed population. This is called the “clinical illusion,” and it applies to all medical conditions. It refers to a tendency to think that the patients you see in a clinical setting fully represent all people with that condition. It’s not true. You’re not seeing the full universe of people.


Based on his public interviews, does it seem likely that Charlie Sheen cured himself?


I doubt it. Of course, I haven’t examined him, but based on what one sees, one would be concerned about ongoing drug use and underlying mental illness.


Is there brain damage from drug use? Is it possible to recover from such damage?


The only drugs that are neurotoxic are alcohol, methamphetamine, probably MDMA [ecstasy], and some inhalants. Cocaine can lead to microstrokes. That’s brain damage. Yes, addiction changes the brain, but this does not doom people to use drugs forever. The most permanent change is memories. Some people have stronger memories and they are more cue-reactive [more reactive to a stimulus that triggers the reward pathway]. Nonaddicts won’t show that level of cue-reactivity.

For some people the addiction and withdrawal will be more intense through genetically mediated problems. Those people have a harder time stopping.


What else might account for Charlie Sheen’s strange behavior in those interviews?


One would want to explore the possibility of underlying psychiatric problems. The grandiosity, the loose associations, the jumbled flow suggest a thought disorder. Heavy, heavy drug use could cause that. Stimulant use can cause temporary thought disorder or intensify an underlying thought disorder or hypomanic state. To try to make a good diagnosis, whatever ongoing drug use there is would have to stop. After the withdrawal phase is resolved clinicians would then need to see if an underlying thought or mood disorder persisted. That would aid in parsing how much of a confusing clinical picture is due to drug use and how much is due to a primary mental disorder.


By Nina Bai, March 4, 2011