Depression


It’s easy to appreciate the seasonality of winter blues, but web searches show that other disorders may ebb and flow with the weather as well.

Google searches are becoming an intriguing source of health-related information, exposing everything from the first signs of an infectious disease outbreak to previously undocumented side effects of medications. So researchers led by John Ayers of the University of Southern California decided to comb through queries about mental illnesses to look for potentially helpful patterns related to these conditions. Given well-known connections between depression and winter weather, they investigated possible links between mental illnesses and the seasons.

Using all of Google’s search data from 2006 to 2010, they studied searches for terms like “schizophrenia,” “attention deficit/hyperactivity disorder (ADHD),” “bulimia” and “bipolar” in both the United States and Australia. Since winter and summer are reversed in the two countries, finding opposing patterns in the data would strongly suggest that season, rather than other factors that might vary with time of year, influenced the prevalence of the disorders.


“All mental health queries followed seasonal patterns with winter peaks and summer troughs,” the researchers write in their study, published in the American Journal of Preventive Medicine. They found that mental health queries in general were 14% higher in the winter in the U.S. and 11% higher in the Australian winter.

The seasonal timing of queries about each disorder was also similar in the two countries. Searches about eating disorders (including anorexia and bulimia), for example, surged during winter months in both countries: Americans were 37% more likely and Australians 42% more likely to seek information about these disorders during colder weather than during the summer. Compared with summer searches, schizophrenia queries were 37% more common in the American winter and 36% more frequent during the Australian winter. ADHD queries were also highly seasonal, with 31% more winter searches in the U.S. and 28% more in Australia.

Searches for depression and bipolar disorder, which might seem the mental illnesses most likely to strike during the cold winter months, showed smaller seasonal swings: winter searches for depression were 19% more common in the U.S. and 22% more common in Australia. For bipolar disorder, 16% more American searches and 18% more Australian searches occurred in winter than in summer. The least seasonal disorder was anxiety, whose searches varied by just 7% in the U.S. and 15% in Australia between summer and winter.
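The seasonal figures above are percent differences between average winter and summer search volumes. A minimal sketch of that calculation, using made-up monthly volumes (the study’s actual data came from Google’s search logs, 2006–2010):

```python
def seasonal_excess(winter, summer):
    """Percent by which mean winter search volume exceeds mean summer volume."""
    w = sum(winter) / len(winter)
    s = sum(summer) / len(summer)
    return 100 * (w - s) / s

# Hypothetical monthly averages for one query term, illustrative only.
winter_volume = [120, 118, 122]   # e.g. Dec, Jan, Feb
summer_volume = [100, 104, 96]    # e.g. Jun, Jul, Aug

print(round(seasonal_excess(winter_volume, summer_volume)))  # → 20
```

With these invented numbers the term would read as “20% more common in winter,” the same form of statistic the researchers report for each disorder.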

Understanding how the prevalence of mental illnesses changes with the seasons could lead to more effective preventive measures that alert people to symptoms and guide them toward treatments that could help, say experts. Previous research suggests that shorter daylight hours and the social isolation that accompanies harsh weather conditions might explain some of these seasonal differences in mental illnesses, for example, so improving social interactions during the winter months might be one way to alleviate some symptoms. Drops in vitamin D levels, which rise with exposure to sunlight, may also play a role, so supplementation for some people affected by mood disorders could also be effective.


The researchers emphasize that searches for disorders are only queries for more information, and don’t necessarily reflect a desire to learn more about a mental illness after a new diagnosis. For example, while the study found that searches for ‘suicide’ were 29% more common in winter in America and 24% more common during the colder season in Australia, other investigations showed that completed suicides tend to peak in spring and early summer. Whether winter queries have any relationship at all to spring or summer suicides isn’t clear yet, but the results suggest a new way of analyzing data that could lead to better understanding of a potential connection.

And that’s the promise of data on web searches, say the scientists. Studies of mental illness typically rely on telephone or in-person surveys in which participants are asked about symptoms or any history of psychological disorders, and people may not always answer truthfully in these situations. Searches, on the other hand, have the advantage of reflecting people’s desire to learn more about symptoms they may be experiencing or about a condition with which they were recently diagnosed. So such queries could become a useful resource for spotting previously undetected patterns in complex psychiatric disorders. “The current results suggest that monitoring queries can provide insight into national trends on seeking information regarding mental health, such as seasonality…If additional studies can validate the current approach by linking clinical symptoms with patterns of search queries,” the authors conclude, “this method may prove essential in promoting population mental health.”


Elizabeth Warren said that a much higher baseline would be appropriate if wages were tied to productivity gains.


What if U.S. workers were paid more as the nation’s productivity increased?
If we had adopted that policy decades ago, the minimum wage would now be about $22 an hour, said Sen. Elizabeth Warren (D-Mass.) last week. Warren was speaking at a hearing held by the Senate’s Committee on Health, Education, Labor and Pensions.

Warren was talking to Arindrajit Dube, a University of Massachusetts Amherst professor who has studied the issue of minimum wage. “With a minimum wage of $7.25 an hour, what happened to the other $14.75?” she asked Dube. “It sure didn’t go to the worker.”
The $22 minimum wage Warren referred to came from a 2012 study by the Center for Economic and Policy Research. It found that the minimum wage would have hit $21.72 an hour last year if it had been tied to the increases in worker productivity since 1968. Even if the minimum wage had captured only one-fourth of those productivity gains, it would now be $12.25 an hour instead of $7.25.
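The arithmetic behind the two figures can be sketched as a simple interpolation: a wage that captures some share of productivity gains sits between the inflation-adjusted 1968 wage and the fully indexed figure. The $9.09 base below is an assumed value implied by the article’s two numbers, not one stated in the CEPR study:

```python
def indexed_wage(real_base, fully_indexed, share):
    """Wage if workers had captured `share` of productivity gains since the base year."""
    return real_base + share * (fully_indexed - real_base)

REAL_1968_WAGE = 9.09   # assumed: 1968 minimum wage in recent dollars (hypothetical)
FULLY_INDEXED  = 21.72  # CEPR's fully productivity-indexed figure

print(round(indexed_wage(REAL_1968_WAGE, FULLY_INDEXED, 1.0), 2))   # → 21.72
print(round(indexed_wage(REAL_1968_WAGE, FULLY_INDEXED, 0.25), 2))  # → 12.25
```

A share of 1.0 reproduces the full $21.72, while a one-fourth share lands on the $12.25 figure cited in the study.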
Some of the news media took this to mean that Warren is calling for a minimum-wage increase to $22 an hour. That doesn’t appear to be the case. She seems to be merely pointing out that the minimum wage has grown more slowly than other facets of the economy.
Warren is taking some hits on Twitter for her comments. One user describes her as “clueless and out of touch” while another calls her “delusional.” But other users are praising her arguments as “compelling,” saying she is “asking the right questions regarding minimum wage.”
By Kim Peterson

We make all sorts of ostensibly conscious and seemingly rational choices when we are aware of a potential risk. We eat organic food, max out on multivitamins and quickly forswear some products (even whole technologies) at the slightest hint of danger. We carry guns and vote for the candidate we think will keep us safe. Yet these choices are far from carefully considered — and, surprisingly often, they contravene reason. What’s more, while our choices about risk invariably feel right when we make them, many of these decisions end up putting us in greater peril.


Researchers in neuroscience, psychology, economics and other disciplines have made a range of discoveries about why human beings sometimes fear more than the evidence warrants, and sometimes less than the evidence warns. That science is worth reviewing at length. But one current issue offers a crash course in the most significant of these findings: the fear of vaccines, particularly vaccines for children.

In a 2011 Thomson Reuters/NPR poll, nearly one parent in three with a child under 18 was worried about vaccines, and roughly one American in four was concerned about the value and safety of vaccines in general. In the same poll, roughly one out of every five college-educated respondents worried that childhood vaccination was connected with autism; 7 percent said they feared a link with Type 1 diabetes.

Based on the evidence, these and most other concerns about vaccines are unfounded. A comprehensive report last year from the Institute of Medicine is just one of many studies to report that vaccines do not cause autism, diabetes, asthma or other major afflictions listed by the anti-vaccination movement.

Yet these fears, fierce and visceral, persist. To frustrated doctors and health officials, vaccine-phobia seems an irrational denial of the facts that puts both the unvaccinated child and the community at greater risk (as herd immunity goes down, disease spread rises). But the more we learn about how risk perception works, the more understandable — if still quite dangerous — the fear of vaccines becomes.

Along with many others, the cognitive psychologists Paul Slovic of the University of Oregon and Baruch Fischhoff of Carnegie Mellon University have identified several reasons something might feel more or less scary than mere reason might suppose. Humans subconsciously weigh the risks and benefits of any choice or course of action — and if taking a particular action seems to afford little or no benefit, the risk automatically feels bigger. Vaccinations are a striking example. As the subconscious mind might view it, vaccines protect children from diseases like measles and pertussis, or whooping cough, that are no longer common, so the benefit to vaccination feels small — and smaller still, perhaps, compared to even the minuscule risk of a serious side effect. (In actuality, outbreaks of both of these infections have been more common in recent years, according to the Centers for Disease Control and Prevention.) Contrast this with how people felt in the 1950s, in the frightening days of polio, when parents lined their kids up for vaccines that carried much greater risk than do the modern ones. The risk felt smaller, because the benefit was abundantly clear.

Professor Slovic and Professor Fischhoff and others have found that a risk imposed upon a person, like mandatory vaccination programs (nearly all of which allow people to opt out), feels scarier than the same risk if taken voluntarily. Risk perception also depends on trust. A risk created by a source you don’t trust will feel scarier. The anti-vaccination movement is thick with mistrust of government and the drug industry. Finally, risks that are human-made, like vaccines, evoke more worry than risks that are natural. Some parents who refuse to have their kids vaccinated say they are willing to accept the risk of the disease, because the disease is “natural.”

Still, shouldn’t our wonderful powers of reason be able to overcome these instinctive impediments to clear thinking? The neuroscience of fear makes clear that such hope is hubris. Work on the neural roots of fear by the neuroscientist Joseph LeDoux of New York University, and others, has found that in the complex interplay of slower, conscious reason and quicker, subconscious emotion and instinct, the basic architecture of the brain ensures that we feel first and think second. The part of the brain where the instinctive “fight or flight” signal is first triggered — the amygdala — is situated such that it receives incoming stimuli before the parts of the brain that think things over. Then, in our ongoing response to potential peril, the way the brain is built and operates assures that we are likely to feel more and think less. As Professor LeDoux puts it in “The Emotional Brain”: “the wiring of the brain at this point in our evolutionary history is such that connections from the emotional systems to the cognitive systems are stronger than connections from the cognitive systems to the emotional systems.”

And so we have excessive fear of vaccines. But just as we are too afraid of some things, this same “feelings and facts” system works the other way too, sometimes leaving us inadequately concerned about bigger risks. A risky behavior you engage in voluntarily and that seems to afford plenty of benefit — think sun-tanning for that “nice, healthy glow” — feels less dangerous. A societal risk, well off in the future, tends not to trigger the same instinctive alarm — in part, because the hazard isn’t singling any one of us out, individually. This helps explain why concern over climate change is broad, but thin.

Though it may be prone to occasional errors, our risk-perception system isn’t all bad. After all, it has gotten us this far through evolution’s gantlet. But a system that relies so heavily on emotion and instinct sometimes produces risk perceptions that don’t match the evidence, a “risk perception gap” that can be a risk in itself. We do have to fear the dangers of fear itself.

In this remarkable era of discovery about how our brains operate, we have discovered a great deal about why the gap occurs, and we can — and should — put our detailed knowledge of risk perception to use in narrowing the risk-perception gap and reducing its dangers. As the Italian philosopher Nicola Abbagnano advised, “Reason itself is fallible, and this fallibility must find a place in our logic.” Accepting that risk perception is not so much a process of pure reason, but rather a subjective combination of the facts and how those facts feel, might be just the step in the human learning curve we need to make. Then, maybe, we’ll start making smarter decisions about vaccines and other health matters.


By DAVID ROPEIK, September 28, 2012

David Ropeik is an instructor at the Harvard Extension School and the author of “How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts.”

For adolescents, Facebook and other social media have created an irresistible forum for online sharing and oversharing, so much so that endless mood-of-the-moment updates have inspired a snickering retort on T-shirts and posters: “Face your problems, don’t Facebook them.”

But specialists in adolescent medicine and mental health experts say that dark postings should not be hastily dismissed because they can serve as signs of depression and an early warning system for timely intervention. Whether therapists should engage with patients over Facebook, however, remains a matter of debate.

And parents have their own conundrum: how to distinguish a teenager’s typically melodramatic mutterings — like the “worst day of my life” rants about their “frenemies,” academics or even cafeteria food — from a true emerging crisis.

(Dr. Megan A. Moreno has studied college students’ Facebook postings for signs of depression. Some showed signs of risk.)

Last year, researchers examined Facebook profiles of 200 students at the University of Washington and the University of Wisconsin-Madison. Some 30 percent posted updates that met the American Psychiatric Association’s criteria for a symptom of depression, reporting feelings of worthlessness or hopelessness, insomnia or sleeping too much, and difficulty concentrating.

Their findings echo research that suggests depression is increasingly common among college students. Some studies have concluded that 30 to 40 percent of college students suffer a debilitating depressive episode each year. Yet scarcely 10 percent seek counseling.

“You can identify adolescents and young adults on Facebook who are showing signs of being at risk, who would benefit from a clinical visit for screening,” said Dr. Megan A. Moreno, a principal investigator in the Facebook studies and an assistant professor of pediatrics at the University of Wisconsin-Madison.

Sometimes the warnings are seen in hindsight. Before 15-year-old Amanda Cummings committed suicide by jumping in front of a bus near her Staten Island home on Dec. 27, her Facebook updates may have revealed her anguish. On Dec. 1, she wrote: “then ill go kill myself, with these pills, this knife, this life has already done half the job.”

Facebook started working with the National Suicide Prevention Lifeline in 2007. A reader who spots a disturbing post can alert Facebook and report the content as “suicidal.” After Facebook verifies the comment, it sends a link for the prevention lifeline to both the person who may need help and the person who alerted Facebook. In December, Facebook also began sending the distressed person a link to an online counselor.

While Facebook’s reporting feature has been criticized by some technology experts as unwieldy, and by some suicide prevention experts as a blunt instrument to address a volatile situation, other therapists have praised it as a positive step.

At some universities, resident advisers are using Facebook to monitor their charges. Last year, when Lilly Cao, then a junior, was a house fellow at Wisconsin-Madison, she decided to accept Facebook “friend” requests from most of the 56 freshmen on her floor.

She spotted posts about homesickness, academic despair and a menacing ex-boyfriend.

“One student clearly had an alcohol problem,” recalled Ms. Cao. “I found her unconscious in front of the dorm and had to call the ambulance. I began paying more attention to her status updates.”

Ms. Cao said she would never reply on Facebook, preferring instead to talk to students in person. The students were grateful for the conversations, she said.

“If they say something alarming on Facebook,” she added, “they know it’s public and they want someone to respond.”

While social media updates can offer clues that someone is overwrought, they also raise difficult questions: Who should intervene? When? How?

“Do you hire someone in the university clinic to look at Facebook all day?” Dr. Moreno said. “That’s not practical and borders on creepy.”

She said a student might be willing to take a concerned call from a parent, or from a professor who could be trained in what to look for.

But ethically, should professors or even therapists “friend” a student or patient? (The students monitored by Dr. Moreno’s team had given their consent.)

Debra Corbett, a therapist in Charlotte, N.C., who treats adolescents and young adults, said some clients do “friend” her. But she limits their access to her Facebook profile. When clients post updates relevant to therapy, she feels chagrined. But she will not respond online, to maintain the confidentiality of the therapeutic relationship.

Instead, Ms. Corbett will address the posts in therapy sessions. One client, for example, is a college student who has low self-esteem. Her Facebook posts are virtual pleas for applause.

Ms. Corbett will say to her: “How did you feel when you posted that? We’re working on you validating yourself. When you put it out there, you have no control about what they’ll say back.”

Susan Kidd, who teaches emotionally vulnerable students at a Kentucky high school, follows their Facebook updates, which she calls a “valuable tool” for intervention with those who “may otherwise not have been forthcoming with serious issues.”

At Cornell University, psychologists do not “friend” students. At weekly meetings, however, counselors, resident advisers and the police discuss students who may be at risk. As one marker among many, they may bring up Facebook comments that have been forwarded to them.

“People do post very distressing things,” said Dr. Gregory T. Eells, director of Cornell’s counseling and psychological services. “Sometimes they’re just letting off steam, using Facebook as something between a diary and an op-ed piece. But sometimes we’ll tell the team, ‘check in on this person.’ ”

They proceed cautiously, because of “false positives,” like a report of a Facebook photo of a student posing with guns. “When you look,” said Dr. Eells, “it’s often benign.”

Dr. Moreno said she thought it made sense for house fellows at the University of Wisconsin to keep an eye on their students who “friend” them. Students’ immediate friends, she said, should not be expected to shoulder responsibility for intervention: “How well they can identify and help each other, I’m not so sure.”

Tolu Taiwo, a junior at the University of Illinois at Urbana-Champaign, agreed. “I know someone who wrote that he wanted to kill himself,” she said. “It turned out he probably just wanted attention. But what if it was real? We wouldn’t know.”

In fact, when adolescents bare their souls on Facebook, they risk derision. Replying to questions posted on Facebook by The New York Times, Daylina Miller, a recent graduate of the University of South Florida, said that when she poured out her sadness online, some readers responded only with the Facebook “like” symbol: a thumbs-up.

“You feel the same way?” said Ms. Miller, puzzled. “Or you like that I’m sad? You’re sadistic?”

Some readers, flummoxed by a friend’s misery, remain silent, which inadvertently may be taken as the most hurtful response.

In comments to The Times, parents who followed their children’s Facebook posts said they did not always know how to distinguish the drama du jour from silent screams. Often their teenagers felt angry and embarrassed when parents responded on Facebook walls or even, after reading a worrisome comment by their child’s friend, alerted the friend’s parents.

Many parents said they felt embarrassed, too. After reading a grim post, they might raise an alarm, only to be curtly told by their offspring that it was a popular song lyric, a tactic teens use to comment in code, in part to confound snooping parents.

Ms. Corbett, the Charlotte therapist, said that when she followed her sons’ Facebook pages, she used caution before responding to occasional downbeat posts. If parents react to every little bad mood, she said, children might be less open on Facebook, assuming that “my parents will freak out.”

Dr. Moreno said that parents should consider whether the posts are typical for their child or whether the child also seems depressed at home. Early intervention can be low-key — a brief text or knock on the bedroom door: “I saw you posted this on Facebook. Is everything O.K.?”

Sometimes a Facebook posting can truly be a last-resort cry for help. One recent afternoon while Jackie Wells, who lives near Dayton, Ohio, was waiting for her phone service to be fixed, she went online to check on her daughter, 18, who lives about an hour away. Just 20 minutes earlier, the girl, unable to reach her mother by phone, used her own Facebook page to post to Mrs. Wells or anyone else who might read it:

“I just did something stupid, mom. Help me.”

Mrs. Wells borrowed a cellphone from her parents and called relatives who lived closer to her daughter. The girl had overdosed on pills. They got her to the hospital in time.

“Facebook might be a pain in the neck to keep up with,” Mrs. Wells said. “But having that extra form of communication saves lives.”

Liz Heron contributed reporting.
