Mystery


Of all the story lines to emerge from l’affaire Petraeus, surely the following three are widest of the mark:

First, the idea that the director of the Central Intelligence Agency is a victim of America’s puritanical mores.

Second, the idea that, whatever the legal fine points, an FBI investigation involving a mistress accessing the emails of a CIA director does not become a de facto investigation of the director.

Third, the idea that a CIA director can have a private email account, wholly personal and separate from his job.

All extramarital affairs are human tragedies. That would be true even if this one involved a factory worker and a secretary, not two high-profile, high-achieving West Point graduates. The difference is that the pain and injustice are even more monstrous for the innocent spouses and children here, because their private humiliation is playing out across our front pages and television screens.

At its core, however, the scandal that felled David Petraeus has public dimensions only tangentially connected with sex. An affair that was truly private might be buried quickly and quietly. Now that the affair has been broadcast to the world—beginning with Mr. Petraeus’s own resignation statement—honor itself requires honest answers to awkward questions that affect the public trust.

These questions start with the most basic: When did this affair begin? Initial reports suggested that it started in Afghanistan, where biographer Paula Broadwell spent much time with the general. Now his friends are telling reporters that the affair began after he joined the CIA in September 2011.

Let us hope this is true. We must hope so, first, because if his affair occurred while he was still in the Army, it is a crime under the Uniform Code of Military Justice. We must hope so, too, because if the affair started before he took over the CIA, the truthfulness of his statements to investigators during the run-up to his Senate confirmation might be called into question.

Ask former Clinton cabinet member Henry Cisneros or Bush Homeland Security pick Bernard Kerik about the consequences of making false statements during confirmation checks. They can be grave. At the public level, the more candid the general is about his affair, the more credible he will be when he speaks about Benghazi.

The same goes for the email. On the face of it, that a man in his position would send as much “private” email as has been reported seems extraordinary, given that every foreign intelligence service on the planet would love to know those emails’ contents. In addition, Ms. Broadwell apparently had classified documents on her computer.

The FBI says it is satisfied that these documents didn’t come from Mr. Petraeus. But they did come from someone—and it would be good to know who. We ought to know more about how exposed that information is, especially if it was traveling the world through Gmail. Quite apart from questions of sexual intimacy, we ought to know, too, whether the privileged access that Mr. Petraeus gave his biographer allowed her to get (or encouraged those around him to give her) information she ought not to have had.

Above all, Mr. Petraeus’s affair raises questions about what the general was telling Congress and the public about the mess in Benghazi that saw four Americans killed. We know Mr. Petraeus can be direct when he wishes. We saw that with the unequivocal CIA statement denying that anyone in the agency ordered anyone not to come to the aid of those under attack in Benghazi.

Less clear, alas, is the CIA involvement in the spin put out by the White House: that the attack on the consulate was the work of an out-of-control mob enraged by a video blaspheming the Prophet Muhammad. News reports in the aftermath of the attack suggest that Mr. Petraeus backed the White House line when he briefed Congress.

Finally, there is America’s “unrealistic” attitude toward sex. David Gergen complained about this view on “Face the Nation” on Sunday: “I think we have to be understanding that, as the saying goes, the best of men are still men—men at their best.”

Depends on what you mean by being “realistic” about sex and human nature. Citizens who accept positions in government that give them access to sensitive information—myself included, when I went to work for the White House in 2005—are asked highly intrusive questions about marriage and adultery. The questions involve less moral judgment than a practical recognition that sexual intimacy is more than a physical act; it leads to emotional entanglements that can take even the most judicious of us to reckless and irresponsible places.

All of which relates to the question of the week: Given what we know now about the consulate attack in Benghazi, did Mr. Petraeus’s personal troubles influence what he said to Congress? In short, America still needs to know what Mr. Petraeus’s unvarnished view of Libya was, and is.

 

*Text by “The Wall Street Journal” (Editorial); November 12, 2012

Our brains are better than Google or the best robot from iRobot.

We can instantly search through a vast wealth of experiences and emotions. We can immediately recognize the face of a parent, spouse, friend or pet, whether in daylight, darkness, from above or sideways—a task that the computer vision system built into the most sophisticated robots can accomplish only haltingly. We can also multitask effortlessly when we extract a handkerchief from a pocket and mop our brow while striking up a conversation with an acquaintance. Yet designing an electronic brain that would allow a robot to perform this simple combination of behaviors remains a distant prospect.

How does the brain pull all this off, given that the complexity of the networks inside our skull—trillions of connections among billions of brain cells—rivals that of the Internet? One answer is energy efficiency: when a nerve cell communicates with another, the brain uses just a millionth of the energy that a digital computer expends to perform the equivalent operation. Evolution, in fact, may have played an important role in pushing the three-pound organ toward ever greater energy efficiencies.

Parsimonious energy consumption cannot be the full explanation, though, given that the brain also comes with many built-in limitations. One neuron in the cerebral cortex, for instance, can respond to an input from another neuron by firing an impulse, or a “spike,” in thousandths of a second—a snail’s pace compared with the transistors that serve as switches in computers, which take billionths of a second to switch on. The reliability of the neuronal network is also low: a signal traveling from one cortical cell to another typically has only a 20 percent chance of arriving at its ultimate destination and much less of a chance of reaching a distant neuron to which it is not directly connected.

Neuroscientists do not fully understand how the brain manages to extract meaningful information from all the signaling that goes on within it. The two of us and others, however, have recently made exciting progress by focusing new attention on how the brain can efficiently use the timing of spikes to encode information and rapidly solve difficult computational problems. The key insight is that a group of spikes that fire almost at the same moment can carry much more information than a comparably sized group that activates in an unsynchronized fashion.

Beyond offering insight into the most complex known machine in the universe, further advances in this research could lead to entirely new kinds of computers. Already scientists have built “neuromorphic” electronic circuits that mimic aspects of the brain’s signaling network. We can build devices today with a million electronic neurons, and much larger systems are planned. Ultimately investigators should be able to build neuromorphic computers that function much faster than modern computers but require just a fraction of the power [see “Neuromorphic Microchips,” by Kwabena Boahen; Scientific American, May 2005].

Cell Chatter

Like many other neuroscientists, we often use the visual system as our test bed, in part because its basic wiring diagram is well understood. Timing of signals there and elsewhere in the brain has long been suspected of being a key part of the code that the brain uses to decide whether information passing through the network is meaningful. Yet for many decades these ideas were neglected, because timing matters only when it can be compared across different parts of the brain, and it was hard to measure the activity of more than one neuron at a time. Recently, however, the development of practical computer models of the nervous system and new results from experimental and theoretical neuroscience have spurred interest in timing as a way to better understand how neurons talk to one another.

Brain cells receive all kinds of inputs on different timescales. The microsecond-quick signal from the right ear must be reconciled with the slightly out-of-sync input from the left. These rapid responses contrast with the sluggish stream of hormones coursing through the bloodstream. The signals most important for this discussion, though, are the spikes, which are sharp rises in voltage that course through and between neurons. For cell-to-cell communication, spikes lasting a few milliseconds handle immediate needs. A neuron fires a spike after deciding that the number of inputs urging it to switch on outweighs the number telling it to turn off. When the decision is made, a spike travels down the cell’s axon (somewhat akin to a branched electrical wire) to its tips. Then the signal is relayed chemically through junctions, called synapses, that link the axon with recipient neurons.

In each eye, 100 million photoreceptors in the retina respond to changing patterns of light. After the incoming light is processed by several layers of neurons, a million ganglion cells at the back of the retina convert these signals into a sequence of spikes that are relayed by axons to other parts of the brain, which in turn send spikes to still other regions that ultimately give rise to a conscious perception. Each axon can carry up to several hundred spikes each second, though more often just a few spikes course along the neural wiring. All that you perceive of the visual world—the shapes, colors and movements of everything around you—is coded into these rivers of spikes with varying time intervals separating them.

Monitoring the activity of many individual neurons at once is critical for making sense of what goes on in the brain but has long been extremely challenging. In 2010, though, E. J. Chichilnisky of the Salk Institute for Biological Studies in La Jolla, Calif., and his colleagues reported in Nature that they had achieved the monumental task of simultaneously recording all the spikes from hundreds of neighboring ganglion cells in monkey retinas. (Scientific American is part of Nature Publishing Group.) This achievement made it possible to trace the specific photoreceptors that fed into each ganglion cell. The growing ability to record spikes from many neurons simultaneously will assist in deciphering meaning from these codelike brain signals.

For years investigators have used several methods to interpret, or decode, the meaning in the stream of spikes coming from the retina. One method counts spikes from each axon separately over some period: the higher the firing rate, the stronger the signal. The information conveyed by a variable firing rate, a rate code, relays features of visual images, such as location in space, regions of differing light contrast, and where motion occurs, with each of these features represented by a given group of neurons.
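To see what decoding a rate code amounts to, here is a minimal sketch in Python; the spike times and the half-second window are invented for illustration. Counting each axon’s spikes over the window converts the stream into a firing rate, and the higher rate marks the stronger signal:

```python
# Invented spike times in milliseconds over a half-second window.
window_ms = 500.0
spikes = {
    "axon_1": [5, 60, 130, 220, 310, 400],  # brisk firing: a strong signal
    "axon_2": [90, 410],                    # sparse firing: a weak signal
}

# A rate code reads each axon separately: spikes counted per unit time.
rates_hz = {axon: len(times) / (window_ms / 1000.0) for axon, times in spikes.items()}
print(rates_hz)  # {'axon_1': 12.0, 'axon_2': 4.0}
```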

Information is also transmitted by relative timing—when one neuron fires in close relation to when another cell spikes. Ganglion cells in the retina, for instance, are exquisitely sensitive to light intensity and can respond to a changing visual scene by transmitting spikes to other parts of the brain. When multiple ganglion cells fire at almost the same instant, the brain suspects that they are responding to an aspect of the same physical object. Horace Barlow, a leading neuroscientist at the University of Cambridge, characterized this phenomenon as a set of “suspicious coincidences.” Barlow referred to the observation that each cell in the visual cortex may be activated by a specific physical feature of an object (say, its color or its orientation within a scene). When several of these cells switch on at the same time, their combined activation constitutes a suspicious coincidence because it may only occur at a specific time for a unique object. Apparently the brain takes such synchrony to mean that the signals are worth noting because the odds of such coordination occurring by chance are slim.
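A toy version of Barlow’s “suspicious coincidence” detector makes the logic concrete. In this sketch the spike times, the two-millisecond window and the three-cell criterion are all invented for illustration; any moment when enough cells fire within the window is flagged as probably signaling a single object:

```python
import numpy as np

# Hypothetical spike times in milliseconds for three retinal ganglion cells;
# the values are invented for illustration, not taken from recorded data.
spike_trains = {
    "cell_a": np.array([12.0, 55.1, 90.4, 130.2]),
    "cell_b": np.array([12.3, 70.8, 130.5]),
    "cell_c": np.array([12.1, 40.0, 130.1]),
}

def suspicious_coincidences(trains, window_ms=2.0, min_cells=3):
    """Flag moments when at least min_cells neurons fire within window_ms.

    Barlow's idea in miniature: near-simultaneous spikes across cells are
    unlikely to happen by chance, so they probably signal a single object.
    """
    events = sorted((t, cell) for cell, times in trains.items() for t in times)
    hits = []
    for i, (t0, _) in enumerate(events):
        group = {cell for t, cell in events[i:] if t - t0 <= window_ms}
        if len(group) >= min_cells:
            hits.append((t0, group))
    return hits

for t0, cells in suspicious_coincidences(spike_trains):
    print(f"~{t0:.1f} ms: {sorted(cells)} fired together")
# Flags the groups near 12 ms and 130 ms; the scattered lone spikes are ignored.
```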

 

Electrical engineers are trying to build on this knowledge to create more efficient hardware that incorporates the principles of spike timing when recording visual scenes. One of us (Delbruck) has built a camera that emits spikes in response to changes in a scene’s brightness, which enables the tracking of very fast moving objects with minimal processing by the hardware to capture images.

Into the Cortex

New evidence confirms that the visual cortex attends to temporal cues to make sense of what the eye sees. The ganglion cells in the retina do not project directly to the cortex but relay signals through neurons in the thalamus, deep within the brain’s midsection. This region in turn must activate 100 million cells in the visual cortex in each hemisphere at the back of the brain before the messages are sent to higher brain areas for conscious interpretation.

We can learn something about which spike patterns are most effective in turning on cells in the visual cortex by examining the connections from relay neurons in the thalamus to cells known as spiny stellate neurons in a middle layer of the visual cortex. In 1994 Kevan Martin, now at the Institute of Neuroinformatics at the University of Zurich, and his colleagues reconstructed the thalamic inputs to the cortex and found that they account for only 6 percent of all the synapses on each spiny stellate cell. How, then, everyone wondered, does this relatively weak visual input, a mere trickle, manage to reliably communicate with neurons in all layers of the cortex?

Cortical neurons are exquisitely sensitive to fluctuating inputs and can respond to them by emitting a spike in a matter of a few milliseconds. In 2010 one of us (Sejnowski), along with Hsi-Ping Wang and Donald Spencer of the Salk Institute and Jean-Marc Fellous of the University of Arizona, developed a detailed computer model of a spiny stellate cell and showed that even though a single spike from only one axon cannot cause one of these cells to fire, the same neuron will respond reliably to inputs from as few as four axons projecting from the thalamus if the spikes from all four arrive within a few milliseconds of one another. Once inputs arrive from the thalamus, only a sparse subset of the neurons in the visual cortex needs to fire to represent the outline and texture of an object. Each spiny stellate neuron has a preferred visual stimulus from the eye that produces a high firing rate, such as the edge of an object with a particular angle of orientation.
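A crude caricature of this arrangement can be written as a leaky integrator: each thalamic spike nudges a voltage up, the voltage decays with time, and the cell fires only when enough inputs arrive close together. The time constant, weight and threshold below are invented for illustration and are not the parameters of the published model:

```python
import numpy as np

# A toy leaky integrator standing in for a spiny stellate cell. The published
# model is far more detailed; the constants here are invented for illustration.
TAU_MS = 5.0      # membrane decay time constant
WEIGHT = 1.0      # contribution of a single thalamic spike
THRESHOLD = 3.2   # roughly four near-coincident inputs are needed to fire

def fires(input_spike_times_ms, dt=0.1, t_max=50.0):
    """Return True if the summed, decaying inputs ever cross threshold."""
    v, t = 0.0, 0.0
    while t < t_max:
        v *= np.exp(-dt / TAU_MS)                              # passive leak
        v += WEIGHT * np.sum(np.abs(input_spike_times_ms - t) < dt / 2)
        if v >= THRESHOLD:
            return True
        t += dt
    return False

# Four spikes within 1.5 ms drive the cell to fire ...
print(fires(np.array([10.0, 10.5, 11.0, 11.5])))   # True
# ... but the same four spikes spread over 30 ms do not.
print(fires(np.array([10.0, 20.0, 30.0, 40.0])))   # False
```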

In the 1960s David Hubel of Harvard Medical School and Torsten Wiesel, now at the Rockefeller University, discovered that each neuron in the relevant section of the cortex responds strongly to its preferred stimulus only if activation comes from a specific part of the visual field called the neuron’s receptive field. Neurons responding to stimulation in the fovea, the central region of the retina, have the smallest receptive fields—about the size of the letter e on this page. Think of them as looking at the world through soda straws. In the 1980s John Allman of the California Institute of Technology showed that visual stimulation from outside the receptive field of a neuron can alter its firing rate in reaction to inputs from within its receptive field. This “surround” input puts the feature that a neuron responds to into the context of the broader visual environment.

Stimulating the region surrounding a neuron’s receptive field also has a dramatic effect on the precision of spike timing. David McCormick, James Mazer and their colleagues at Yale University recently recorded the responses of single neurons in the cat visual cortex to a movie that was replayed many times. When they narrowed the movie image so that neurons triggered by inputs from the receptive field fired (no input came from the surrounding area), the timing of the signals from these neurons had a randomly varying and imprecise pattern. When they expanded the movie to cover the surrounding area outside the receptive field, the firing rate of each neuron decreased, but the spikes were precisely timed.

 

The timing of spikes also matters for other neural processes. Some evidence suggests that synchronized timing—with each spike representing one aspect of an object (color or orientation)—functions as a means of assembling an image from component parts. A spike for “pinkish red” fires in synchrony with one for “round contour,” enabling the visual cortex to merge these signals into the recognizable image of a flower pot.

Attention and Memory

Our story so far has tracked visual processing from the photoreceptors to the cortex. But still more goes into forming a perception of a scene. The activity of cortical neurons that receive visual input is influenced not only by those inputs but also by excitatory and inhibitory interactions between cortical neurons. Of particular importance for coordinating the many neurons responsible for forming a visual perception is the spontaneous, rhythmic firing of a large number of widely separated cortical neurons at frequencies below 100 hertz.

Attention—a central facet of cognition—may also have its physical underpinnings in sequences of synchronized spikes. It appears that such synchrony acts to emphasize the importance of a particular perception or memory passing through conscious awareness. Robert Desimone, now at the Massachusetts Institute of Technology, and his colleagues have shown that when monkeys pay attention to a given stimulus, the number of cortical neurons that fire synchronized spikes in the gamma band of frequencies (30 to 80 hertz) increases, and the rate at which they fire rises as well. Pascal Fries of the Ernst Strüngmann Institute for Neuroscience in cooperation with the Max Planck Society in Frankfurt found evidence for gamma-band signaling between distant cortical areas.

Neural activation of the gamma-frequency band has also attracted the attention of researchers who have found that patients with schizophrenia and autism show decreased levels of this type of signaling on electroencephalographic recordings. David Lewis of the University of Pittsburgh, Margarita Behrens of the Salk Institute and others have traced this deficit to a type of cortical neuron called a basket cell, which is involved in synchronizing spikes in nearby circuits. An imbalance of either inhibition or excitation of the basket cells seems to reduce synchronized activity in the gamma band and may thus explain some of the physiological underpinnings of these neurological disorders. Interestingly, patients with schizophrenia do not perceive some visual illusions, such as the tilt illusion, in which a person typically misjudges the tilt of a line because of the tilt of nearby lines. Similar circuit abnormalities in the prefrontal cortex may be responsible for the thought disorders that accompany schizophrenia.

When it comes to laying down memories, the relative timing of spikes seems to be as important as the rate of firing. In particular, the synchronized firing of spikes in the cortex is important for increasing the strengths of synapses—an important process in forming long-term memories. A synapse is said to be strengthened when the firing of a neuron on one side of a synapse leads the neuron on the other side of the synapse to register a stronger response. In 1997 Henry Markram and Bert Sakmann, then at the Max Planck Institute for Medical Research in Heidelberg, discovered a strengthening process known as spike-timing-dependent plasticity: when an input arrives at a synapse at a frequency in the gamma range and is consistently followed within 10 milliseconds by a spike from the neuron on the other side of the synapse, the neuron receiving the stimulation comes to fire more strongly. Conversely, if the neuron on the other side fires within 10 milliseconds before the first one, the strength of the synapse between the cells decreases.
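The rule itself is easy to state in code. The sketch below follows the description above—pre-before-post within about 10 milliseconds strengthens the synapse, post-before-pre weakens it—with exponential windows and amplitudes that are conventional in modeling work, not taken from Markram and Sakmann’s paper:

```python
import math

# Amplitudes are illustrative; the 10 ms window comes from the text above.
A_PLUS, A_MINUS = 0.01, 0.012
TAU_MS = 10.0

def stdp_weight_change(t_pre_ms, t_post_ms):
    """Weight change for one pre/post spike pairing under the rule above."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:   # presynaptic spike came first: potentiation
        return A_PLUS * math.exp(-dt / TAU_MS)
    else:        # postsynaptic spike came first: depression
        return -A_MINUS * math.exp(dt / TAU_MS)

print(stdp_weight_change(t_pre_ms=0.0, t_post_ms=5.0))   # positive: strengthened
print(stdp_weight_change(t_pre_ms=5.0, t_post_ms=0.0))   # negative: weakened
```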

Some of the strongest evidence that synchronous spikes may be important for memory comes from research by György Buzsáki of New York University and others on the hippocampus, a brain area that is important for remembering objects and events. The spiking of neurons in the hippocampus and the cortical areas that it interacts with is strongly influenced by synchronous oscillations of brain waves in a range of frequencies from four to eight hertz (the theta band), the type of neural activity encountered, for instance, when a rat is exploring its cage in a laboratory experiment. These theta-band oscillations can coordinate the timing of spikes and also have a more permanent effect in the synapses, which results in long-term changes in the firing of neurons.

 

A Grand Challenge Ahead

Neuroscience is at a turning point as new methods for simultaneously recording spikes in thousands of neurons help to reveal key patterns in spike timing and produce massive databases for researchers. Also, optogenetics—a technique for turning on genetically engineered neurons using light—can selectively activate or silence neurons in the cortex, an essential step in establishing how neural signals control behavior. Together, these and other techniques will help us eavesdrop on neurons in the brain and learn more and more about the secret code that the brain uses to talk to itself. When we decipher the code, we will not only achieve an understanding of the brain’s communication system, we will also start building machines that emulate the efficiency of this remarkable organ.

 

* By Terry Sejnowski and Tobi Delbruck  

ABOUT THE AUTHOR(S)

Terry Sejnowski is an investigator with the Howard Hughes Medical Institute and is Francis Crick Professor at the Salk Institute for Biological Studies, where he directs the Computational Neurobiology Laboratory.

Tobi Delbruck is co-leader of the sensors group at the Institute of Neuroinformatics at the University of Zurich.


In the 1993 movie “Groundhog Day,” Bill Murray plays Phil Connors, a reporter who, confronted with living the same day over and over again, matures from an arrogant, self-serving professional climber to someone capable of loving and appreciating others and his world. Murray convincingly portrays the transformation from someone whose self-importance is difficult to abide into a person imbued with kindness.  It seems that the Nietzschean test of eternal return, insofar as it is played out in Punxsutawney, yields not an overman but a man of decency.

But there is another story line at work in the film, one we can see if we examine Murray’s character not in the early arrogant stage, nor in the post-epiphany stage, where the calendar is once again set in motion, but in the film’s middle, where he is knowingly stuck in the repetition of days. In this part of the narrative, Murray’s character has come to terms with his situation. He alone knows what is going to happen, over and over again.  He has no expectations for anything different.  In this period, his period of reconciliation, he becomes a model citizen of Punxsutawney. He radiates warmth and kindness, but also a certain distance.

The early and final moments of “Groundhog Day” offer something that is missing during this period of peace:  passion. Granted, Phil Connors’s early ambitious passion for advancement is a far less attractive thing than the later passion of his love for Rita (played by Andie MacDowell).  But there is passion in both cases.  It seems that the eternal return of the same may bring peace and reconciliation, but at least in this case not intensity.

And here is where a lesson about love may lie.  One would not want to deny that Connors comes to love Rita during the period of the eternal Groundhog Day.  But his love lacks the passion, the abandon, of the love he feels when he is released into a real future with her. There is something different in those final moments of the film.  A future has opened for their relationship, and with it new avenues for the intensity of his feelings for her. Without a future for growth and development, romantic love can extend only so far.  Its distinction from, say, a friendship with benefits begins to become effaced.

There is, of course, in all romantic love the initial infatuation, which rarely lasts.  But if the love is to remain romantic, that infatuation must evolve into a longer-term intensity, even if a quiet one, that nourishes and is nourished by the common engagements and projects undertaken over time.

This might be taken to mean that a limitless future would allow for even more intensity to love than a limited one.  Romantic love among immortals would open itself to an intensity that eludes our mortal race.  After all, immortality opens an infinite future.  And this would seem to be to the benefit of love’s passion.  I think, however, that matters are quite the opposite, and that “Groundhog Day” gives us the clue as to why this is.  What the film displays, if we follow this interpretive thread past the film’s plot, is not merely the necessity of time itself for love’s intensity but the necessity of a specific kind of time:  time for development.  The eternal return of “Groundhog Day” offered plenty of time.  It promised an eternity of it.  But it was the wrong kind of time.  There was no time to develop a coexistence.  There was instead just more of the same.

The intensity we associate with romantic love requires a future that can allow its elaboration.  That intensity is of the moment, to be sure, but is also bound to the unfolding of a trajectory that it sees as its fate.  If we were stuck in the same moment, the same day, day after day, the love might still remain, but its animating passion would begin to diminish.

This is why romantic love requires death.

If our time were endless, then sooner or later the future would resemble an endless Groundhog Day in Punxsutawney.  It is not simply the fact of a future that ensures the intensity of romantic love; it is the future of meaningful coexistence.  It is the future of common projects and the passion that unfolds within them.  One might indeed remain in love with another for all eternity.  But that love would not burn as brightly if the years were to stammer on without number.

Why not, one might ask?  The future is open.  Unlike the future in “Groundhog Day,” it is not already decided.  We do not have our next days framed for us by the day just passed.  We can make something different of our relationships.  There is always more to do and more to create of ourselves with the ones with whom we are in love.

This is not true, however, and romantic love itself shows us why.  Love is between two particular people in their particularity.  We cannot love just anyone, even others with much the same qualities.  If we did, then when we met someone like the beloved but who possessed a little more of a quality to which we were drawn, we would, in the phrase philosophers of love use, “trade up.”  But we don’t trade up, or at least most of us don’t.  This is because we love that particular person in his or her specificity.  And what we create together, our common projects and shared emotions, are grounded in those specificities.  Romantic love is not capable of everything. It is capable only of what the unfolding of a future between two specific people can meaningfully allow.

Sooner or later the paths that can be opened by the specificities of a relationship come to an end.  Not every couple can, with a sense of common meaningfulness, take up skiing or karaoke, political discussion or gardening.  Eventually we must tread the same roads again, wearing them with our days.  This need not kill love, although it might.  But it cannot, over the course of eternity, sustain the intensity that makes romantic love, well, romantic.

One might object here that the intensity of love is a filling of the present, not a projection into the future.  It is now, in a moment that needs no other moments, that I feel the vitality of romantic love.  Why could this not continue, moment after moment?

To this, I can answer only that the human experience does not point this way.  This is why so many sages have asked us to distance ourselves from the world in order to be able to cherish it properly.  Phil Connors, in his reconciled moments, is something like a Buddhist.  But he is not a romantic.

Many readers will probably already have recognized that this lesson about love concerns not only its relationship with death, but also its relationship with life.  It doesn’t take eternity for many of our romantic love’s embers to begin to dim.  We lose the freshness of our shared projects and our passions, and something of our relationships gets lost along with them.  We still love our partner, but we think more about the old days, when love was new and the horizons of the future beckoned us.  In those cases, we needn’t look for Groundhog Day, for it will already have found us.

And how do we live with this?  How do we assimilate the contingency of romance, the waning of the intensity of our loves?  We can reconcile ourselves to our loves as they are, or we can aim to sacrifice our placid comfort for an uncertain future, with or without the one we love.  Just as there is no guarantee that love’s intensity must continue, there is no guarantee that it must diminish.  An old teacher of mine once said that “one has to risk somewhat for his soul.” Perhaps this is true of romantic love as well. The gift of our deaths saves us from the ineluctability of the dimming of our love; perhaps the gift of our lives might, here or there, save us from the dimming itself.

 

* Text By TODD MAY, NYT, FEBRUARY 26, 2012

 Todd May is Class of 1941 Memorial Professor of the Humanities at Clemson University.  His forthcoming book, “Friendship in an Age of Economics,” is based on an earlier column for The Stone.

There is a story about Bertrand Russell giving a public lecture somewhere or other, defending his atheism. A furious woman stood up at the end of the lecture and asked: “And Lord Russell, what will you say when you stand in front of the throne of God on judgment day?” Russell replied: “I will say: ‘I’m terribly sorry, but you didn’t give us enough evidence.’ ”


This is a very natural way for atheists to react to religious claims: to ask for evidence, and reject these claims in the absence of it. Many of the several hundred comments that followed two earlier Stone posts “Philosophy and Faith” and “On Dawkins’s Atheism: A Response,” both by Gary Gutting, took this stance. Certainly this is the way that today’s “new atheists”  tend to approach religion. According to their view, religions — by this they mean basically Christianity, Judaism and Islam and I will follow them in this — are largely in the business of making claims about the universe that are a bit like scientific hypotheses. In other words, they are claims — like the claim that God created the world — that are supported by evidence, that are proved by arguments and tested against our experience of the world. And against the evidence, these hypotheses do not seem to fare well.


But is this the right way to think about religion? Here I want to suggest that it is not, and to try and locate what seem to me some significant differences between science and religion.


To begin with, scientific explanation is a very specific and technical kind of knowledge. It requires patience, pedantry, a narrowing of focus and (in the case of the most profound scientific theories) considerable mathematical knowledge and ability. No one can understand quantum theory — by any account, the most successful physical theory there has ever been — unless they grasp the underlying mathematics. Anyone who says otherwise is fooling themselves.
Religious belief is a very different kind of thing. It is not restricted only to those with a certain education or knowledge, it does not require years of training, it is not specialized and it is not technical. (I’m talking here about the content of what people who regularly attend church, mosque or synagogue take themselves to be thinking; I’m not talking about how theologians interpret this content.)


What is more, while religious belief is widespread, scientific knowledge is not. I would guess that very few people in the world are actually interested in the details of contemporary scientific theories. Why? One obvious reason is that many lack access to this knowledge. Another reason is that even when they have access, these theories require sophisticated knowledge and abilities, which not everyone is capable of getting.


Yet another reason — and the one I am interested in here — is that most people aren’t deeply interested in science, even when they have the opportunity and the basic intellectual capacity to learn about it. Of course, educated people who know about science know roughly what Einstein, Newton and Darwin said. Many educated people accept the modern scientific view of the world and understand its main outlines. But this is not the same as being interested in the details of science, or being immersed in scientific thinking.


This lack of interest in science contrasts sharply with the worldwide interest in religion. It’s hard to say whether religion is in decline or growing, partly because it’s hard to identify only one thing as religion — not a question I can address here. But it’s pretty obvious that whatever it is, religion commands and absorbs the passions and intellects of hundreds of millions of people, many more people than science does. Why is this? Is it because — as the new atheists might argue — they want to explain the world in a scientific kind of way, but since they have not been properly educated they haven’t quite got there yet? Or is it because so many people are incurably irrational and are incapable of scientific thinking? Or is something else going on?


Some philosophers have said that religion is so unlike science that it has its own “grammar” or “logic” and should not be held accountable to the same standards as scientific or ordinary empirical belief. When Christians express their belief that “Christ has risen,” for example, they should not be taken as making a factual claim, but as expressing their commitment to what Wittgenstein called a certain “form of life,” a way of seeing significance in the world, a moral and practical outlook which is worlds away from scientific explanation.


This view has some merits, as we shall see, but it grossly misrepresents some central phenomena of religion. It is absolutely essential to religions that they make certain factual or historical claims. When Saint Paul says “if Christ is not risen, then our preaching is in vain and our faith is in vain” he is saying that the point of his faith depends on a certain historical occurrence.


Theologians will debate exactly what it means to claim that Christ has risen, what exactly the meaning and significance of this occurrence is, and will give more or less sophisticated accounts of it. But all I am saying is that whatever its specific nature, Christians must hold that there was such an occurrence. Christianity does make factual, historical claims. But this is not the same as being a kind of proto-science. This will become clear if we reflect a bit on what science involves.


The essence of science involves making hypotheses about the causes and natures of things, in order to explain the phenomena we observe around us, and to predict their future behavior. Some sciences — medical science, for example — make hypotheses about the causes of diseases and test them by intervening. Others — cosmology, for example — make hypotheses that are more remote from everyday causes, and involve a high level of mathematical abstraction and idealization. Scientific reasoning involves an obligation to hold a hypothesis only to the extent that the evidence requires it. Scientists should not accept hypotheses that are “ad hoc” — that is, tailored to one specific situation and incapable of being generalized to others. Most scientific theories involve some kind of generalization: they don’t just make claims about one thing, but about things of a general kind. And their hypotheses are designed, on the whole, to make predictions; and if these predictions don’t come out true, then this is something for the scientists to worry about.


Religions do not construct hypotheses in this sense. I said above that Christianity rests upon certain historical claims, like the claim of the resurrection. But this is not enough to make scientific hypotheses central to Christianity, any more than it makes such hypotheses central to history. It is true, as I have just said, that Christianity does place certain historical events at the heart of its conception of the world, and to that extent, one cannot be a Christian unless one believes that these events happened. Speaking for myself, it is because I reject the factual basis of the central Christian doctrines that I consider myself an atheist. But I do not reject these claims because I think they are bad hypotheses in the scientific sense. Not all factual claims are scientific hypotheses. So I disagree with Richard Dawkins when he says “religions make existence claims, and this means scientific claims.”


Taken as hypotheses, religious claims do very badly: they are ad hoc, they are arbitrary, they rarely make predictions and when they do they almost never come true. Yet the striking fact is that it does not worry Christians when this happens. In the gospels Jesus predicts the end of the world and the coming of the kingdom of God. It does not worry believers that Jesus was wrong (even if it causes theologians to reinterpret what is meant by ‘the kingdom of God’). If Jesus was framing something like a scientific hypothesis, then it should worry them. Critics of religion might say that this just shows the manifest irrationality of religion. But what it suggests to me is that something else is going on, other than hypothesis formation.


Religious belief tolerates a high degree of mystery and ignorance in its understanding of the world. When the devout pray, and their prayers are not answered, they do not take this as evidence which has to be weighed alongside all the other evidence that prayer is effective. They feel no obligation whatsoever to weigh the evidence. If God does not answer their prayers, well, there must be some explanation of this, even though we may never know it. Why do people suffer if an omnipotent God loves them? Many complex answers have been offered, but in the end they come down to this: it’s a mystery.


Science too has its share of mysteries (or rather: things that must simply be accepted without further explanation). But one aim of science is to minimize such things, to reduce the number of primitive concepts or primitive explanations. The religious attitude is very different. It does not seek to minimize mystery. Mysteries are accepted as a consequence of what, for the religious, makes the world meaningful.


This point gets to the heart of the difference between science and religion. Religion is an attempt to make sense of the world, but it does not try and do this in the way science does. Science makes sense of the world by showing how things conform to its hypotheses. The characteristic mode of scientific explanation is showing how events fit into a general pattern.


Religion, on the other hand, attempts to make sense of the world by seeing a kind of meaning or significance in things. This kind of significance does not need laws or generalizations, but just the sense that the everyday world we experience is not all there is, and that behind it all is the mystery of God’s presence. The believer is already convinced that God is present in everything, even if they cannot explain this or support it with evidence. But it makes sense of their life by suffusing it with meaning. This is the attitude (seeing God in everything) expressed in George Herbert’s poem, “The Elixir.” Equipped with this attitude, even the most miserable tasks can come to have value: “Who sweeps a room as for Thy laws / Makes that and th’ action fine.”


None of these remarks are intended as being for or against religion. Rather, they are part of an attempt (by an atheist, from the outside) to understand what it is. Those who criticize religion should have an accurate understanding of what it is they are criticizing. But to understand a world view, or a philosophy or system of thought, it is not enough just to understand the propositions it contains. You also have to understand what is central and what is peripheral to the view. Religions do make factual and historical claims, and if these claims are false, then the religions fail. But this dependence on fact does not make religious claims anything like hypotheses in the scientific sense. Hypotheses are not central. Rather, what is central is the commitment to the meaningfulness (and therefore the mystery) of the world.


I have suggested that while religious thinking is widespread in the world, scientific thinking is not. I don’t think that this can be accounted for merely in terms of the ignorance or irrationality of human beings. Rather, it is because of the kind of intellectual, emotional and practical appeal that religion has for people, which is a very different appeal from the kind of appeal that science has.


Stephen Jay Gould once argued that religion and science are “non-overlapping magisteria.” If he meant by this that religion makes no factual claims which can be refuted by empirical investigations, then he was wrong. But if he meant that religion and science are very different kinds of attempt to understand the world, then he was certainly right.


By TIM CRANE
Tim Crane is Knightbridge Professor of Philosophy at the University of Cambridge. He is the author of two books, “The Mechanical Mind” (1995) and “Elements of Mind” (2001), and several other publications. He is currently working on two books: one on the representation of the non-existent and another on atheism and humanism.

Souls, spirits, ghosts, gods, demons, angels, aliens, intelligent designers, government conspirators, and all manner of invisible agents with power and intention are believed to haunt our world and control our lives. Why?



The answer has two parts, starting with the concept of “patternicity,” which I defined in my December 2008 column as the human tendency to find meaningful patterns in meaningless noise. Consider the face on Mars, the Virgin Mary on a grilled cheese sandwich, satanic messages in rock music. Of course, some patterns are real. Finding predictive patterns in changing weather, fruiting trees, migrating prey animals and hungry predators was central to the survival of Paleolithic hominids.

The problem is that we did not evolve a baloney-detection device in our brains to discriminate between true and false patterns. So we make two types of errors: a type I error, or false positive, is believing a pattern is real when it is not; a type II error, or false negative, is not believing a pattern is real when it is.

If you believe that the rustle in the grass is a dangerous predator when it is just the wind (a type I error), you are more likely to survive than if you believe that the rustle in the grass is just the wind when it is a dangerous predator (a type II error).

Because the cost of making a type I error is less than the cost of making a type II error and because there is no time for careful deliberation between patternicities in the split-second world of predator-prey interactions, natural selection would have favored those animals most likely to assume that all patterns are real.
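The asymmetry can be put in back-of-the-envelope numbers. In the sketch below, all figures are invented for illustration; the point is only that fleeing every rustle costs far less on average than ignoring them all, which is the selective pressure the argument rests on:

```python
# All numbers are invented for illustration.
P_PREDATOR = 0.05            # chance the rustle really is a predator
COST_FALSE_POSITIVE = 1.0    # type I error: energy wasted fleeing the wind
COST_FALSE_NEGATIVE = 100.0  # type II error: being eaten

cost_if_always_flee = (1 - P_PREDATOR) * COST_FALSE_POSITIVE  # 0.95
cost_if_never_flee = P_PREDATOR * COST_FALSE_NEGATIVE         # 5.00

print(f"always flee: expected cost {cost_if_always_flee:.2f}")
print(f"never flee:  expected cost {cost_if_never_flee:.2f}")
# Treating every rustle as real is ~5x cheaper on average, so selection
# favors brains that assume all patterns are real.
```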

But we do something other animals do not do. As large-brained hominids with a developed cortex and a theory of mind—the capacity to be aware of such mental states as desires and intentions in both ourselves and others—we infer agency behind the patterns we observe in a practice I call “agenticity”: the tendency to believe that the world is controlled by invisible intentional agents.

We believe that these intentional agents control the world, sometimes invisibly from the top down (as opposed to bottom-up causal randomness). Together patternicity and agenticity form the cognitive basis of shamanism, paganism, animism, polytheism, monotheism, and all modes of Old and New Age spiritualisms.

Agenticity carries us far beyond the spirit world. The Intelligent Designer is said to be an invisible agent who created life from the top down. Aliens are often portrayed as powerful beings coming down from on high to warn us of our impending self-destruction.

Conspiracy theories predictably include hidden agents at work behind the scenes, puppet masters pulling political and economic strings as we dance to the tune of the Bilderbergers, the Rothschilds, the Rockefellers or the Illuminati.

Even the belief that government can impose top-down measures to rescue the economy is a form of agenticity, with President Barack Obama being touted as “the one” with almost messianic powers who will save us.

There is now substantial evidence from cognitive neuroscience that humans readily find patterns and impart agency to them, well documented in the new book SuperSense (HarperOne, 2009) by University of Bristol psychologist Bruce Hood. Examples: children believe that the sun can think and follows them around; because of such beliefs, they often add smiley faces on sketched suns.

Adults typically refuse to wear a mass murderer’s sweater, believing that “evil” is a supernatural force that imparts its negative agency to the wearer (and, alternatively, that donning Mr. Rogers’s cardigan will make you a better person). A third of transplant patients believe that the donor’s personality is transplanted with the organ. Genital-shaped foods (bananas, oysters) are often believed to enhance sexual potency. Subjects watching geometric shapes with eye spots interacting on a computer screen conclude that they represent agents with moral intentions.

 

* Text by Michael Shermer from Scientific American Magazine (May 2009)

After a loved one dies, most people see ghosts


Carlos Sluzki’s cat died a while ago now, but he still sometimes visits. Now more of a shadow cat, the former pet seems to lurk at the edges of Sluzki’s vision, as a misinterpreted movement amid the everyday chaos of domestic life. All the same, the shadow cat is beginning to slink away and Sluzki notes that as the grief fades his erstwhile friend is “erasing himself from the world of the present and receding into the bittersweet world of the memories of the loved ones.”


The dead stay with us, that much is clear. They remain in our hearts and minds, of course, but for many people they also linger in our senses—as sights, sounds, smells, touches or presences. Grief hallucinations are a normal reaction to bereavement but are rarely discussed, because people fear they might be considered insane or mentally destabilised by their loss. As a society we tend to associate hallucinations with things like drugs and mental illness, but we now know that hallucinations are common in sober healthy people and that they are more likely during times of stress.

A Common Hallucination
Mourning seems to be a time when hallucinations are particularly common, to the point where feeling the presence of the deceased is the norm rather than the exception. One study, by the researcher Agneta Grimby at the University of Goteborg, found that over 80 percent of elderly people experience hallucinations associated with their dead partner one month after bereavement, as if their perception had yet to catch up with the knowledge of their beloved’s passing. As a marker of how vivid such visions can seem, almost a third of the people reported that they spoke in response to their experiences. In other words, these weren’t just peripheral illusions: they could evoke the very essence of the deceased.


Occasionally, these hallucinations are heart-rending. A 2002 case report by German researchers described how a middle-aged woman, grieving her daughter’s death from a heroin overdose, regularly saw the young girl and sometimes heard her say “Mamma, Mamma!” and “It’s so cold.” Thankfully, these distressing experiences tend to be rare, and most people who experience hallucinations during bereavement find them comforting, as if they were re-connecting with something of the positive from the person’s life. Perhaps this reconnecting is reflected in the fact that the intensity of grief has been found to predict the number of pleasant hallucinations, as has the happiness of the marriage to the person who passed away.

There are hints that the type of grief hallucinations might also differ across cultures. Anthropologists have told us a great deal about how the ceremonies, beliefs and the social rituals of death differ greatly across the world, but we have few clues about how these different approaches affect how people experience the dead after they have gone. Carlos Sluzki, the owner of the shadow cat and a cross-cultural researcher at George Mason University, suggests that in cultures of non-European origin the distinction between “in here” and “out there” experiences is less strictly defined, and so grief hallucinations may not be considered so personally worrying. In a recent article, he discussed the case of an elderly Hispanic lady who was frequently “visited” by two of her children who died in adulthood and were a comforting and valued part of her social network. Other case reports have suggested that such hallucinations may be looked on more favorably among the Hopi Indians, or the Mu Ghayeb people from Oman, but little systematic work has been done.

And there, our knowledge ends. Despite the fact that hallucinations are one of the most common reactions to loss, they have barely been investigated and we know little more about them. Like sorrow itself, the subject makes us a little uncomfortable; we are unwilling to broach it, preferring to dwell on the practicalities—the “call me if I can do anything,” the “let’s take your mind off it,” the “are you looking after yourself?”

Only a minority of people reading this article are likely to experience grief without re-experiencing the dead. We often fall back on the cultural catch all of the “ghost” while the reality is, in many ways, more profound. Our perception is so tuned to their presence that when they are not there to fill that gap, we unconsciously try to mold the world into what we have lived with for so long and so badly long for. Even reality is no match for our love.

 

By Vaughan Bell, Scientific American

When Paul Butler began hunting for planets beyond our solar system, few people took him seriously, and some, he says, questioned his credentials as a scientist.

Researcher Jay Quade looks for signs of microbes in Chile’s Atacama Desert.

That was a decade ago, before Butler helped find some of the first extra-solar planets, and before he and his team identified about half of the 300 discovered since.

Biogeologist Lisa M. Pratt of Indiana University had a similar experience with her early research on “extremophiles,” bizarre microbes found in very harsh Earth environments. She and colleagues explored the depths of South African gold mines and, to their great surprise, found bacteria sustained only by the radioactive decay of nearby rocks.

Indiana University biogeologist Lisa M. Pratt and Edward J. Weiler, chief of NASA’s science division, think life in other solar systems is possible and, perhaps, detectable.

“Until several years ago, absolutely nobody thought this kind of life was possible — it hadn’t even made it into science fiction,” she said. “Now it’s quite possible to imagine a microbe like that living deep beneath the surface of Mars.”

The experiences of these two researchers reflect the scientific explosion taking place in astrobiology, the multidisciplinary search for extreme forms of life on Earth and for possibly similar, or more advanced, life elsewhere in the solar system and in distant galaxies.

The confidence that alien life will ultimately be found is strong enough to have kindled formal discussions among scientists, philosophers, theologians and others about the implications that such a find would have for humanity’s view of itself, and how to prepare the public for the news, should it come.

“There’s been a fundamental shift in the thinking of the scientific community on the question of life-forms beyond Earth,” Pratt said.

Edward J. Weiler, one of the founders of NASA’s astrobiology program and now chief of the agency’s science division, goes even further.

Astrobiology’s most intensive effort at the moment is focused on Mars, where NASA’s robotic lander Phoenix is digging up soil and ice in a search for organic material.

“We now know the number of stars in the universe is something like 1 followed by 23 zeros,” he said. “Given that number, how arrogant to think ours is the only sun with a planet that supports life, and that it’s in the only solar system with intelligent life.”

Although humans have speculated for centuries about the possibility of extraterrestrial life, astrobiology began as a formal NASA program only in the mid-1990s, created in the excitement that followed the discovery of a meteorite from Mars that was initially thought to contain fossils or other evidence of microscopic organisms (a conclusion now generally rejected). The field has nonetheless grown quickly. More than 700 scientists and graduate students — including molecular biologists, chemists, planetary scientists and cosmologists — showed up at a NASA-sponsored astrobiology conference in California this past spring.

Many schools have growing astrobiology programs, and planet-hunter Paul Butler often travels from his base at the Carnegie Institution in the District to Chile, Hawaii and Australia to work with other astronomers at big telescopes. He estimates that 1,000 to 2,000 scientists now work in the field.

Few believe that the discovery of extraterrestrial life is imminent. However, just as scientists long theorized that there were planets orbiting other stars — but could not prove it until new technologies and insights broke the field wide open — many astrobiologists now see their job as to develop new ways to search for the life they are sure is out there.

The most intensive effort at the moment is focused on Mars, where NASA’s robotic lander Phoenix is digging up soil and ice in search of organic material. The automated lab has excited scientists by finding many of the nutrients needed for life, although it has not found anything that was, or is, living. Also, photos and other data from NASA’s Mars Reconnaissance Orbiter produced dramatic new evidence this month that the planet was once home to vast lakes, flowing rivers and a variety of other wet environments that had the potential to support life.

Much more is on the way. NASA will launch the Kepler probe next year, and its central goal will be to identify Earth-like, and possibly habitable, planets around distant stars. Japanese astronomers plan to band together to observe one star in great detail because of hints that it could have an orbiting planet with life. And preliminary work is underway for joint NASA-European Space Agency probes of Europa and Titan, moons of Jupiter and Saturn with conditions that might support life.

The basic roadmap for the United States’ astrobiology effort, and about $40 million in seed money, came from NASA. It funds the NASA Astrobiology Institute in California and teams of researchers in universities nationwide, as well as efforts to develop new technologies for exploring extreme forms of life in Mars- or moon-like environments on Earth. The yearly astrobiology budget was halved after reaching a peak of $60 million in 2005, but pressure from the space science community is pushing that figure back up.

Butler and Pratt are part of Astrobiology Institute-funded teams, as are scientists who are creating virtual planets to model what the atmosphere of a distant inhabited planet might look like, and others studying how very simple organisms evolve into more complex ones. This kind of basic research is often used by NASA, as well as other astronomers and explorers for extraterrestrial life, to design space missions and plan ground-based observations.

John Rummel, director of the NASA astrobiology program, said the program is changing the way people think about life on Earth and beyond.

“The context for life is much broader than just what we see on Earth,” he said. “Organic material is falling from the sky all the time, and we’re learning that what happens out there is very important down here. Who knows: Maybe life on Earth came from Mars billions of years ago, when it had liquid water on its surface.”

Rummel said that the discovery of many varieties of extremophiles on Earth, coupled with a better understanding of some potentially habitable environments on other planets or moons, leads him to believe that life beyond Earth will be found, with ramifications comparable to Copernicus’s 16th-century discovery that Earth is not the center of the universe. “The Copernican revolution continues,” Rummel said.

Tales of canals and green men on Mars, UFOs and “Star Trek” characters have long captured the imagination, but finding microbes or evidence of other life beneath the surface of Mars or on the moons of Jupiter or Saturn is another matter entirely. Even if the first extraterrestrial life to be identified were primitive rather than intelligent, experts said, the discovery would be a major milestone in human history.

“If any extraterrestrial life is found in our solar system and we can determine it has no relation to life on Earth, then the assumption has to be that life of all sorts is quite common throughout the galaxies,” Butler said.

To some, debating the implications of discovering extraterrestrial life is premature at best, because — all UFO “sightings” aside — none has ever been found.

Two Viking missions to Mars in the 1970s searched for organic material but did not identify any — although they were unable to dig below the rugged and parched Martian surface into the ground where scientists now think water, and possibly life, could be found. In addition, the private SETI (Search for Extraterrestrial Intelligence) effort has for years been listening for radio signals from hoped-for intelligent aliens — sometimes with NASA support — but has so far been met with silence. And what some consider the rush to declare that the meteorite from Mars contained fossil remains has become an object lesson in the importance of confirming the science before making any declarations about extraterrestrial life.

What is different now, researchers say, is that they know so much more about extreme life-forms on Earth that could quite comfortably live on other planets. In addition to South Africa’s radioactivity-driven bacteria, extremophiles have also been found living near super-hot sulfurous steam vents at the deep ocean floor, in pools composed almost entirely of acid, and recently two miles below the surface of the Greenland ice sheet. All get little or no energy from the sun, which sustains virtually all other life-forms, and their survival makes it more conceivable that microbes could live in the sub-surface ice or water on Mars and Europa.

Having identified more than 300 planets outside the solar system, researchers are also convinced that planets and solar systems — some probably similar to ours — are present, and perhaps quite common, elsewhere in the universe. The next step is to find extrasolar planets in the “habitable zone” of their solar systems: planets whose size, makeup and distance from their sun might allow life to develop.

In addition, the Hubble Space Telescope and other instruments have given researchers new data about the evolution and structure of the universe — information that makes it increasingly appear to be “fine-tuned” for life.

Lord Martin Rees, England’s Astronomer Royal, made that argument as the keynote speaker at NASA’s spring astrobiology conference, saying that life could not exist on Earth or anywhere else if the basic physical dynamics of the universe were not almost precisely what they are. Slight changes in the strength of the electrical force that holds atoms together, of the pull of gravity, or of the total mass of the universe would have made it difficult for stars to form and create the heavy elements essential for life, and impossible for them to remain active long enough to support the process of evolution.

Many religious thinkers see this fine-tuning as an argument for the existence of a creator, but Rees and other cosmologists offer a different explanation: that our universe is but one among a multitude (perhaps an infinity) of universes. However it came into being, Rees argued, our universe is inherently life-supporting, and there is no reason to believe that that potential has been realized only on Earth.

The excitement now in the field, and its central challenges, were expressed in a report last year by the National Research Council, which assembles experts to study scientific issues and problems.

“The Limits of Organic Life in Planetary Systems” report — also known simply as “Weird Life” — concluded: “The likelihood of encountering some form of life in subsurface Mars and sub-ice Europa appears high. . . . The committee sees no reason to exclude the possibility of life in environments as diverse as the aerosols above Venus and the water-ammonia [mixture] of Titan.”

The report then warned that “nothing would be more tragic to the American exploration of space than to encounter alien life and fail to recognize it, either because of the consequences of contamination or because of the lack of proper tools and scientific preparation.”

Astrobiology’s goal is to make sure that does not happen.

By Marc Kaufman
Washington Post Staff Writer
Sunday, July 20, 2008

For most non-medical people, the term “apnea” is most familiar when coupled with the word “sleep,” and refers to a dangerous condition in which people inadvertently stop breathing while asleep. But the word literally means a temporary cessation of breathing and it is practiced (on purpose) around the world by an international community of extreme athletes — a brotherhood that now includes magician and stuntman David Blaine. On the set of The Oprah Winfrey Show on April 30, Blaine broke the world record by holding his breath for 17 minutes and 4 seconds — proving that just how temporary apnea can be is a question of training, endurance and will.

An average person in good health can hold his breath for about two minutes, but with even small amounts of practice it is possible to increase that time dramatically. “The body can be trained,” explains Dr. Ralph Potkin, a pulmonary specialist who worked with Blaine in the weeks leading up to his recent feat.

When you deprive your body of oxygen, it is only a matter of time before your carbon dioxide levels build, triggering a reflex that will cause your breathing muscles — including the diaphragm and the muscles between the ribs — to spasm. The pain of these spasms is what causes most people to gulp for breath after just a couple of minutes. When holding your breath underwater, however, you have a bit of mammalian evolution on your side. When humans are submerged in cold water, our bodies instinctively prepare to conserve oxygen, much in the way that dolphins’ and whales’ bodies do when they dive. “Heart rate drops, blood pressure goes up and circulation gets redistributed,” Potkin says. The body’s focus becomes getting the oxygenated blood primarily to the vital organs — the brain and the heart — and not the extremities or abdomen.

This reflex can help us conserve the oxygen we do have, but it doesn’t do much for the painful muscle spasms. Overcoming those is a matter of concentration and meditation. “This is one of those Zen sports,” Potkin explains.

Suppressing the powerful pain impulse too successfully can prove deadly: subjects can continue holding their breath up to the point that their brains shut down from lack of oxygen. If you’re 100 feet under water — or even three feet underwater in a pool — it’s not a good time to pass out. In order to break the world record, Blaine had to hold his breath without fainting. (Had he continued until he’d depleted his brain’s oxygen, however, Potkin is convinced he could have gone for another full minute.)

That, of course, is down to months of rigorous training, including practicing a technique called glossopharyngeal insufflation, or lung packing. In order to maximize the amount of air taken into the lungs before apnea, Blaine, like other divers, inhaled until his lungs were filled to their physiological capacity, and then forced additional air into the lungs by swallowing, hard. Using this technique, Blaine was able to cram another quart’s worth of air into his already full lungs, Potkin estimates. (He also fasted before the actual record-breaking attempt, in order to have more room for his lungs to expand without bumping up against a full stomach.) In a study of five elite free divers, who descend to scuba-diving depths without the aid of equipment, Potkin found that lung packing was “associated with deeper dives and longer holding times.”

Of course, another factor associated with longer holding times is the consumption of pure oxygen beforehand. The world record for holding your breath after inhaling pure oxygen is now Blaine’s — 17 minutes and 4 seconds. The record without the pure oxygen, which Blaine failed to break during an attempt last year in Manhattan’s Lincoln Center, is 8 minutes and 58 seconds.

With or without pure oxygen, holding your breath is a difficult and dangerous pastime even for elite athletes. When not done carefully, it can lead to drowning, or to potential tissue damage in the heart, brain or lungs. Preliminary results from Potkin’s research into apnea’s long-term effects show some abnormal brain scans among young, extreme free divers. There’s still much to learn about the phenomenon; as a medical student, Potkin recalls, he was told that no one could hold his breath for more than five minutes without suffering brain damage. Now he wants to see if the technique can be used for medical purposes — and he’s hoping Blaine’s latest stunt provides the impetus for a greater scientific understanding of how to hold one’s breath.

* By Tiffany Sharples (TIME)

Word that convicted “D.C. Madam” Deborah Jean Palfrey committed suicide Thursday morning shocked Capitol Hill, where Palfrey’s most famous known client, Sen. David Vitter (R-La.), was going about his day’s work.

Palfrey, 52, whose prosecutorial saga ballooned into a full-fledged political sex scandal, was found dead around 11 a.m. in a shed behind her mother’s mobile home in Tarpon Springs, Fla., according to police. Detectives say she was the victim of “an apparent suicide by hanging.”

Her demise stunned defense attorneys around town, who described Palfrey’s suicide as every defense lawyer’s worst fear.

Palfrey’s lawyer, Preston Burton, said, “This is tragic news, and my heart goes out to her mother.”

Another D.C. defense attorney, Barry Boss, told the Sleuth, “The whole thing, from start to finish, had a real sense of tragedy for her, her employees and her customers. If this were Shakespeare or the opera, it would seem like a fitting final act.”

Photo caption: Deborah Jeane Palfrey, contending that her company was “a legal, high-end erotic fantasy service” with clients “from the more refined walks of life.” (Jay Mallin — Bloomberg News)

The tragedy of how Palfrey wound up taking her own life in a trailer park was eerily foreshadowed in an interview she gave ABC News last year as she awaited trial on prostitution-related charges of racketeering and money laundering.

She vowed that she would never again return to prison. (She had already served time in the 1990s on other charges involving prostitution.) “I sure as heck am not going to be going to federal prison for one day, let alone, you know, four to eight years,” she said.

Palfrey faced up to 55 years in prison after a federal jury convicted her on April 15 of running a prostitution service. She was free pending her July 24 sentencing.

Palfrey’s was one of the most widely dropped names around Washington for many months last year as the town’s chattering class — namely the media — contemplated a trial that had the potential to humiliate scores of important men at the highest echelons of politics. That never came to pass, though Palfrey did make a name for herself nationally when she provided ABC News with thousands of pages of her phone records last summer.

Though it had titillating potential and most of Washington waited with bated breath for a knockout sex scandal that would implicate high-level politicians, the biggest name on that particular set of phone records was Randall Tobias, the State Department’s administrator of foreign aid programs, who resigned as a result.

But it was a gumshoe reporter working for Hustler publisher Larry Flynt who nabbed the highest-profile name of all on Palfrey’s client list: Senator Vitter, whose telephone number showed up six times on Palfrey’s escort service’s phone log between 1999 and 2001, when he was a House member.

Palfrey later posted her phone records on the Internet.

Vitter famously apologized last July for committing “a very serious sin” in his past, but he has so far avoided any political fallout for his involvement in the escort service. He narrowly avoided having to testify last month at Palfrey’s trial.

Palfrey claimed all along that her service provided “erotic fantasy,” not actual sex. And she planned to rely on the testimony of her clients, including Vitter, to back up that claim, though ultimately her defense rested its case without calling any witnesses.

The senator’s office did not return a phone call and email seeking reaction to Palfrey’s suicide. Around the time the story broke Thursday, Vitter was seen walking from the Capitol back to his office in the Senate Hart office building, where just Wednesday night the Louisiana National Guard had hosted a crawfish boil that Vitter attended.

(Note: While Palfrey’s middle name routinely was spelled as “Jeane” in news accounts throughout her legal ordeal, police in Florida say the name on her California-issued driver’s license was Deborah Jean Palfrey.)

By Mary Ann Akers | May 1, 2008

The presidential candidates may have star qualities — and they also have stars in their families, according to a genealogical study linking Hillary Clinton to Angelina Jolie and Barack Obama to Brad Pitt.

The New England Historic Genealogical Society (NEHGS) in Boston on Wednesday released a study in which it traced the family trees of all three presidential candidates and found that each had famous relatives, both dead and alive.

It found that Illinois Senator Barack Obama, whose mother was from Kansas, can claim at least six U.S. presidents as distant cousins, including George W. Bush and his father, George H.W. Bush, as well as Gerald R. Ford, Lyndon B. Johnson, Harry S. Truman and James Madison.

But other cousins include British Prime Minister Sir Winston Churchill — and Brad Pitt, a ninth cousin linked back to Edwin Hickman, who died in Virginia in 1769.

“Obama’s maternal ancestry includes the mid-Atlantic States and the South,” said Christopher Child, a genealogist with NEHGS, which dates back to 1845 and describes itself as the United States’ oldest and largest nonprofit genealogical organization.

Meanwhile, his Democratic rival, New York Senator Hillary Clinton, shares a common ancestor with Pitt’s partner, actress Angelina Jolie. Clinton and Jolie are ninth cousins twice removed, linked by Jean Cusson of St. Sulpice, Quebec, who died in 1718.

Child said Clinton is also a cousin of a number of famous people with French Canadian ancestry, including Madonna (ninth cousins linked by Pierre Gagne of Quebec, who died in 1656), Celine Dion and Alanis Morissette, as well as author Jack Kerouac. Another cousin is Camilla Parker Bowles, wife of Prince Charles.

“It is common to find people of French Canadian descent to be related to large numbers of other French Canadians, including these notables,” said Child in a statement.

The Republican nominee, Arizona Senator John McCain, is a sixth cousin of Laura Bush, but it was harder to track his other ancestors.

“McCain’s ancestry is almost entirely southern,” said Child, adding this made notable connections harder to trace because of challenges to genealogists in that region.

Child said having famous cousins makes for interesting conversation but it “should not influence voters.”

“But at a time when the race focuses on pointing out differences, the candidates may enjoy learning about famous cousins and their varied family histories,” he said.

LOS ANGELES (Reuters) — Wed Mar 26, 2008

Henri Poincaré (1854-1912) was one of the most eminent French mathematicians of the past two centuries.

One of Poincaré’s best-known problems is what is today called the Poincaré conjecture.

The Poincaré conjecture is considered so important that the Clay Mathematics Institute named it one of its seven Millennium Prize Problems, each of which carries a $1 million award for a solution.

The Poincaré conjecture falls within the realm of topology.

This branch of mathematics focuses, roughly speaking, on the issue of whether one body can be deformed into a different body through pulling, squashing or rotating, without tearing or gluing pieces together.

A ball, an egg, and a flowerpot are, topologically speaking, equivalent bodies, since any one of them can be deformed into any of the others without performing any of the “illegal” actions.

A ball and a coffee cup, on the other hand, are not equivalent, since the cup has a handle, which could not have been formed out of the ball without poking a hole through it.
The ball, the egg, and the flowerpot are said to be “simply connected,” as opposed to the cup, a bagel, or a pretzel.

Poincaré sought to investigate such issues not by geometric means but through algebra, thus becoming the originator of “algebraic topology.”

In 1904 he asked whether all bodies that do not have a handle are equivalent to spheres. In two dimensions this question refers to the surfaces of eggs and flowerpots, and it can be answered yes. (Surfaces like the leather skin of a football or the crust of a bagel are two-dimensional objects floating in three-dimensional space.)

For three-dimensional surfaces in four-dimensional space the answer is not quite clear. While Poincaré was inclined to believe that the answer was yes, he was not able to provide a proof.
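
Stated in modern language (a standard textbook formulation, not Poincaré’s original wording), the conjecture reads:

$$ M \text{ a simply connected, closed 3-manifold} \;\Longrightarrow\; M \cong S^3, $$

that is, every simply connected, closed three-dimensional manifold is homeomorphic to the three-sphere.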

Several mathematicians were able to prove the equivalent of Poincaré’s conjecture for all bodies of dimension greater than four. Higher-dimensional spaces provide more elbowroom, which made it simpler for mathematicians to construct proofs.

But for three-dimensional surfaces in four-dimensional space (remember: the surface of a four-dimensional object is a three-dimensional object), Poincaré’s conjecture remained as elusive as ever. It finally fell to Grigori Perelman, whose proof, posted in 2002 and 2003, was confirmed by the mathematical community in 2006.

See you later
Carlos Tiger without Time


How did the twin primes cause the error in Intel’s Pentium processor?
Within the integers, prime numbers can be thought of as atoms, since every integer can be expressed as a product of primes (for example, 30 = 2 x 3 x 5), just as molecules are made up of individual atoms.
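
To make the “atoms” picture concrete, here is a minimal Python sketch (my illustration, not part of the original text) that splits an integer into its prime factors by trial division:

# Trial-division factorization: break n into its prime "atoms".
def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever is left over is itself prime
        factors.append(n)
    return factors

print(prime_factors(30))       # [2, 3, 5], i.e., 30 = 2 x 3 x 5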

The theory of prime numbers continues to be shrouded in mystery and still holds many secrets.

Taking the first 100 numbers, we count 25 primes; between 1001 and 1100 there are only 16; and between 100,001 and 100,100 there are a mere six.

Prime numbers become increasingly sparse. In other words, the average distance between two consecutive primes becomes increasingly large.

Around the turn of the 19th century, the Frenchman Adrien-Marie Legendre and the German Carl Friedrich Gauss studied the distribution of primes. Based on their investigations, they conjectured that the gap between a prime P and the next larger prime would, on average, be about as large as the natural logarithm of P.
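
The conjecture is easy to probe empirically. The Python sketch below (my illustration; the sampling window near 500,000 is an arbitrary choice) sieves the primes below one million, reproduces the three counts quoted above, and compares the average gap with the natural logarithm:

import math

# Sieve of Eratosthenes: all primes below `limit`.
def primes_below(limit):
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"                  # 0 and 1 are not prime
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(limit) if sieve[i]]

primes = primes_below(1_000_000)

print(sum(1 for p in primes if p <= 100))                  # 25
print(sum(1 for p in primes if 1001 <= p <= 1100))         # 16
print(sum(1 for p in primes if 100_001 <= p <= 100_100))   # 6

# Average gap between consecutive primes near 500,000, versus ln(505,000).
near = [p for p in primes if 500_000 <= p <= 510_000]
gaps = [b - a for a, b in zip(near, near[1:])]
print(sum(gaps) / len(gaps), math.log(505_000))            # both about 13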

Sometimes the gaps are much larger, sometimes much smaller. There are even arbitrarily long intervals in which no primes occur whatsoever. The smallest possible gap, on the other hand, is two, since there is at least one even number between any two odd primes.

Primes that are separated from each other by a gap of only two – for instance, 11 and 13, or 197 and 199 – are called twin primes.

There are also cousin primes, which are primes that differ by four, such as 7 and 11. Primes that differ by six, such as 5 and 11, are called sexy primes.

Much less is known about twin primes than about regular primes. What is certain is that they are fairly rare.

Among the first million integers there are only 8169 twin prime pairs. The largest twin primes so far discovered have over 50,000 digits. But much is unknown.
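
Reusing the primes list from the sieve sketch above, two more lines confirm the 8,169 figure:

# Twin primes below one million: prime pairs differing by exactly 2.
prime_set = set(primes)
twins = [(p, p + 2) for p in primes if p + 2 in prime_set]
print(len(twins), twins[:3])   # 8169 [(3, 5), (5, 7), (11, 13)]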

Nobody knows whether an infinite number of twin prime pairs exist, or whether after one particular twin prime pair there are no larger ones.

In the 1990s, Thomas Nicely of Virginia was working on the theory of twin primes, running through all integers up to 4 quadrillion.

The algorithm required the computation of the banal expression x times (1/x). But to his shock, when he inserted certain numbers into this formula, he received not the value 1 but an incorrect result.

On October 30, 1994, his computer consistently produced erroneous results when calculating the above equation with numbers ranging between 824,633,702,418 and 824,633,702,449. Through his research on twin primes, Thomas Nicely had hit on the notorious Pentium bug.
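
The flavor of the check involved can be reconstructed in a few lines of Python (my sketch, not Nicely’s actual program; on any modern, unaffected processor every line prints True):

# On a correct FPU, x * (1/x) differs from 1.0 by at most a few
# rounding steps (around 1e-16 for doubles); the flawed Pentium FDIV
# unit produced errors vastly larger than that for certain operands.
for x in range(824_633_702_418, 824_633_702_450):
    error = abs(x * (1.0 / x) - 1.0)
    print(x, error < 1e-12)    # all True on a correct processor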

The error in the processor cost Intel, the manufacturer, some $500 million in compensation.

See you later
Carlos Tiger without Time

A new way to examine humanity’s impact on the environment is to consider how the world would fare if all the people disappeared.

If all human beings vanished, for example, Manhattan (New York) would eventually revert to a forested island.
Many skyscrapers would topple within decades, undermined by waterlogged foundations.

Weeds and colonizing trees would take root in the cracked pavement, while raptors nested in the ruins and foxes roamed the streets.

According to Alan Weisman, author of the book “The World Without Us,” large parts of our physical infrastructure would begin to crumble almost immediately. Without street cleaners and road crews, our grand boulevards and superhighways would start to crack and buckle in a matter of months.

Over the following decades many houses and office buildings would collapse, but some ordinary items would resist decay for an extraordinarily long time.

Stainless-steel pots, for example, could last for millennia, especially if they were buried in the weed-covered mounds that used to be our kitchens.

And certain common plastics might remain intact for hundreds of thousands of years; they would not break down until microbes evolved the ability to consume them.

Here is a timeline of what would happen:

* 2 days after the disappearance of humans (New York City’s subway system completely fills with water.)
* 2 to 4 years (Cracked streets become covered with weeds…)
* 20 years (Dozens of streams and marshes form in Manhattan.)

* 100 years (The roofs of nearly all houses have caved in…)
* 300 years (New York City’s suspension bridges have fallen.)
* 500 years (Mature forests cover the New York metropolitan area.)
* 5,000 years (As the casings of nuclear warheads corrode, radioactive plutonium-239 is released into the environment.)
 

* 15,000 years (The last remnants of stone buildings in Manhattan fall to advancing glaciers as a new ice age begins.)
* 10 million years (Bronze sculptures, many of which still retain their original shape, survive as relics of the human age.)
* 1 billion years (The earth heats dramatically, but insects and other animals may adapt.)
* 5 billion years (The earth vaporizes as the dying sun expands and consumes all the inner planets.)
* Trillions of years (Ex-planet Earth, now in a twilight zone, still travels outward through space.)

I’m not suggesting that we have to worry about human beings suddenly disappearing tomorrow, some alien death ray taking us all away.

Think about how long it would take to wipe out some of the things we have created. Some of our more formidable inventions have a longevity that we can’t even predict yet, like some of the persistent organic pollutants that began as pesticides or industrial chemicals. Or some of our plastics, which have an enormous role in our lives and an enormous presence in the environment.

Wouldn’t it be a sad loss if humanity were extirpated from the planet?
Would this world be beautiful without us?
I don’t think it’s necessary for us to all disappear for the earth to come back to a healthier state.

* Summarized and adapted from Scientific American, July 2007

It would be a lot easier to enjoy your life if there weren’t so many things trying to kill you every day. The problems start even before you’re fully awake. There’s the fall out of bed that kills 600 Americans each year.

There’s the early-morning heart attack, which is 40% more common than those that strike later in the day. There’s the fatal plunge down the stairs, the bite of sausage that gets lodged in your throat, the tumble on the slippery sidewalk as you leave the house, the high-speed automotive pinball game that is your daily commute.

Other dangers stalk you all day long. Will a cabbie’s brakes fail when you’re in the crosswalk? Will you have a violent reaction to bad food? And what about the risks you carry with you all your life? The father and grandfather who died of coronaries in their 50s probably passed the same cardiac weakness on to you. The tendency to take chances on the highway that has twice landed you in traffic court could just as easily land you in the morgue.

Shadowed by peril as we are, you would think we’d get pretty good at distinguishing the risks likeliest to do us in from the ones that are statistical long shots. But you would be wrong. We agonize over avian flu, which to date has killed precisely no one in the U.S., but have to be cajoled into getting vaccinated for the common flu, which contributes to the deaths of 36,000 Americans each year. We wring our hands over the mad cow pathogen that might be (but almost certainly isn’t) in our hamburger and worry far less about the cholesterol that contributes to the heart disease that kills 700,000 of us annually.

We pride ourselves on being the only species that understands the concept of risk, yet we have a confounding habit of worrying about mere possibilities while ignoring probabilities, building barricades against perceived dangers while leaving ourselves exposed to real ones.
Sensible calculation of real-world risks is a multidimensional math problem that sometimes seems entirely beyond us. And while it may be true that it’s something we’ll never do exceptionally well, it’s almost certainly something we can learn to do better.

Part of the problem we have with evaluating risk, scientists say, is that we’re moving through the modern world with what is, in many respects, a prehistoric brain. We may think we’ve grown accustomed to living in a predator-free environment in which most of the dangers of the wild have been driven away or fenced off, but our central nervous system–evolving at a glacial pace–hasn’t got the message.

We dread anything that poses a greater risk for cancer more than the things that injure us in a traditional way, like an auto crash.

We similarly misjudge risk if we feel we have some control over it, even if it’s an illusory sense. The decision to drive instead of fly is the most commonly cited example, probably because it’s such a good one.

A diagram accompanying the original article shows the relation between accidents and diseases in the U.S.

Just as important is remembering to pay proper mind to the dangers that, as the risk experts put it, are hiding in plain sight.

Most people no longer doubt that global warming is happening, yet we live and work in air-conditioned buildings and drive gas-guzzling cars.

Most people would be far likelier to participate in a protest at a nuclear power plant than at a tobacco company, but it’s smoking, not nukes, that kills an average of 1,200 Americans every single day.

* Summarized from TIME, Dec. 2006

Dark energy does more than hurry along the expansion of the universe. It also has a stranglehold on the shape and spacing of galaxies.

What took us so long? Only in 1998 did astronomers discover we had been missing nearly three quarters of the contents of the universe, the so-called dark energy–an unknown form of energy that surrounds each of us, tugging at us ever so slightly, holding the fate of the cosmos in its grip, but to which we are almost totally blind.

Some researchers, to be sure, had anticipated that such energy existed, but even they will tell you that its detection ranks among the most revolutionary discoveries in 20th-century cosmology. Not only does dark energy appear to make up the bulk of the universe, but its existence, if it stands the test of time, will probably require the development of new theories of physics.
Scientists are just starting the long process of figuring out what dark energy is and what its implications are. One realization has already sunk in: although dark energy betrayed its existence through its effect on the universe as a whole, it may also shape the evolution of the universe’s inhabitants–stars, galaxies, galaxy clusters. Astronomers may have been staring at its handiwork for decades without realizing it.

Ironically, the very pervasiveness of dark energy is what made it so hard to recognize. Dark energy, unlike matter, does not clump in some places more than others; by its very nature, it is spread smoothly everywhere.

Whatever the location – be it in your kitchen or in intergalactic space – it has the same density, about 10⁻²⁶ kilogram per cubic meter, equivalent to a handful of hydrogen atoms per cubic meter. All the dark energy in our solar system amounts to the mass of a small asteroid, making it an utterly inconsequential player in the dance of the planets. Its effects stand out only when viewed over vast distances and spans of time.
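
The “small asteroid” comparison is easy to verify to order of magnitude. Here is a back-of-envelope Python sketch (my illustration; the 40 AU radius and the asteroid’s size and density are assumptions I chose, not figures from the article):

import math

RHO_DARK = 1e-26              # dark-energy mass density, kg per cubic meter
AU = 1.496e11                 # one astronomical unit in meters

# Treat "the solar system" as a sphere out to roughly Pluto's orbit (~40 AU).
r = 40 * AU
dark_mass = RHO_DARK * (4 / 3) * math.pi * r ** 3
print(f"dark energy within 40 AU: {dark_mass:.1e} kg")      # ~9e12 kg

# Compare with a rocky asteroid 2 km across (density ~2000 kg/m^3).
asteroid_mass = 2000 * (4 / 3) * math.pi * 1000 ** 3
print(f"2-km rocky asteroid:      {asteroid_mass:.1e} kg")  # ~8e12 kg

The two masses agree to within about 10 percent, which is all a comparison this rough can promise.
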
Since the days of American astronomer Edwin Hubble, observers have known that all but the nearest galaxies are moving away from us at a rapid rate.

This rate is proportional to distance: the more distant a galaxy is, the faster its recession. Such a pattern implied that galaxies are not moving through space in the conventional sense but are being carried along as the fabric of space itself stretches [see “Misconceptions about the Big Bang,” by Charles H. Lineweaver and Tamara M. Davis; Scientific American, March 2005].
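
In symbols, this proportionality is Hubble’s law (the standard statement, implicit in the passage above):

$$ v = H_0\, d, $$

where $v$ is a galaxy’s recession velocity, $d$ its distance, and $H_0$ the present-day Hubble constant, roughly 70 kilometers per second per megaparsec.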

For decades, astronomers struggled to answer the obvious follow-up question: How does the expansion rate change over time? They reasoned that it should be slowing down, as the inward gravitational attraction exerted by galaxies on one another should have counteracted the outward expansion.

The first clear observational evidence for changes in the expansion rate involved distant supernovae, massive exploding stars that can be used as markers of cosmic expansion, just as watching driftwood lets you measure the speed of a river.

These observations made clear that the expansion was slower in the past than today and is therefore accelerating. More specifically, it had been slowing down but at some point underwent a transition and began speeding up [see “Surveying Space-time with Supernovae,” by Craig J. Hogan, Robert P. Kirshner and Nicholas B. Suntzeff; Scientific American, January 1999, and “From Slowdown to Speedup,” by Adam G. Riess and Michael S. Turner; Scientific American, February 2004].
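
The slowdown-then-speedup history has a compact textbook expression (standard cosmology; the article does not spell it out). Writing $a(t)$ for the cosmic scale factor, the acceleration equation of the Friedmann model reads

$$ \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right), $$

where $\rho$ and $p$ are the total density and pressure of the universe’s contents. Expansion accelerates ($\ddot{a} > 0$) only when the pressure is negative enough that $p < -\rho c^2/3$. Matter, with $p \approx 0$, always decelerates the expansion; dark energy, with strongly negative pressure, flipped the sign once matter had thinned out sufficiently.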

The striking discovery of acceleration has since been cross-checked by independent studies of the cosmic microwave background radiation by, for example, the Wilkinson Microwave Anisotropy Probe (WMAP).

Dark energy may be the key link among several aspects of galaxy formation that used to appear unrelated.

One possible conclusion is that different laws of gravity apply on supergalactic scales than on lesser ones, so that galaxies’ gravity does not, in fact, resist expansion.

But the more generally accepted hypothesis is that the laws of gravity are universal and that some form of energy, previously unknown to science, opposes and overwhelms galaxies’ mutual attraction, pushing them apart ever faster. Although dark energy is inconsequential within our galaxy (let alone your kitchen), it adds up to the most powerful force in the cosmos.

As astronomers have explored this new phenomenon, they have found that, in addition to determining the overall expansion rate of the universe, dark energy has long-term consequences for smaller scales. As you zoom in from the entire observable universe, the first thing you notice is that matter on cosmic scales is distributed in a cobweblike pattern–a filigree of filaments, several tens of millions of light-years long, interspersed with voids of similar size. Simulations show that both matter and dark energy are needed to explain the pattern.

That finding is not terribly surprising, though. The filaments and voids are not coherent bodies like, say, a planet. They have not detached from the overall cosmic expansion and established their own internal equilibrium of forces. Rather they are features shaped by the competition between cosmic expansion (and any phenomenon affecting it) and their own gravity. In our universe, neither player in this tug-of-war is overwhelmingly dominant. If dark energy were stronger, expansion would have won and matter would be spread out rather than concentrated in filaments. If dark energy were weaker, matter would be even more concentrated than it is.

The situation gets more complicated as you continue to zoom in and reach the scale of galaxies and galaxy clusters. Galaxies, including our own Milky Way, do not expand with time. Their size is controlled by an equilibrium between gravity and the angular momentum of the stars, gas and other material that make them up; they grow only by accreting new material from intergalactic space or by merging with other galaxies. Cosmic expansion has an insignificant effect on them. Thus, it is not at all obvious that dark energy should have had any say whatsoever in how galaxies formed. The same is true of galaxy clusters, the largest coherent bodies in the universe–assemblages of thousands of galaxies embedded in a vast cloud of hot gas and bound together by gravity.
Yet it now appears that dark energy may be the key link among several aspects of galaxy and cluster formation that not long ago appeared unrelated. The reason is that the formation and evolution of these systems is partially driven by interactions and mergers between galaxies, which in turn may have been driven strongly by dark energy.

To understand the influence of dark energy on the formation of galaxies, first consider how astronomers think galaxies form. Current theories are based on the idea that matter comes in two basic kinds. First, there is ordinary matter, whose particles readily interact with one another and, if electrically charged, with electromagnetic radiation. Astronomers call this type of matter “baryonic” in reference to its main constituent, baryons, such as protons and neutrons. Second, there is dark matter (which is distinct from dark energy), which makes up 85 percent of all matter and whose salient property is that its particles do not interact with radiation.

Gravitationally, dark matter behaves just like ordinary matter.
According to models, dark matter began to clump immediately after the big bang, forming spherical blobs that astronomers refer to as “halos.” The baryons, in contrast, were initially kept from clumping by their interactions with one another and with radiation. They remained in a hot, gaseous phase. As the universe expanded, this gas cooled and the baryons were able to pack themselves together.

The first stars and galaxies coalesced out of this cooled gas a few hundred million years after the big bang. They did not materialize in random locations but in the centers of the dark matter halos that had already taken shape.

Since the 1980s a number of theorists have done detailed computer simulations of this process, including groups led by Simon D. M. White of the Max Planck Institute for Astrophysics in Garching, Germany, and Carlos S. Frenk of Durham University in England. They have shown that most of the first structures were small, low-mass dark matter halos.

Because the early universe was so dense, these low-mass halos (and the galaxies they contained) merged with one another to form larger-mass systems. In this way, galaxy construction was a bottom-up process, like building a dollhouse out of Lego bricks. (The alternative would have been a top-down process, in which you start with the dollhouse and smash it to make bricks.)

My colleagues and I have sought to test these models by looking at distant galaxies and how they have merged over cosmic time.

Detailed studies indicate that a galaxy gets bent out of shape when it merges with another galaxy.

The earliest galaxies we can see existed when the universe was about a billion years old, and many of these indeed appear to be merging. As time went on, though, the fusion of massive galaxies became less common. Between two billion and six billion years after the big bang–that is, over the first half of cosmic history–the fraction of massive galaxies undergoing a merger dropped from half to nearly nothing at all.

Since then, the distribution of galaxy shapes has been frozen, an indication that smashups and mergers have become relatively uncommon.

In fact, fully 98 percent of massive galaxies in today’s universe are either elliptical or spiral, with shapes that would be disrupted by a merger. These galaxies are stable and comprise mostly old stars, which tells us that they must have formed early and have remained in a regular morphological form for quite some time. A few galaxies are merging in the present day, but they are typically of low mass.

The virtual cessation of mergers is not the only way the universe has run out of steam since it was half its current age. Star formation, too, has been waning. Most of the stars that exist today were born in the first half of cosmic history, as first convincingly shown by several teams in the 1990s, including ones led by Simon J. Lilly, then at the University of Toronto, Piero Madau, then at the Space Telescope Science Institute, and Charles C. Steidel of the California Institute of Technology.

More recently, researchers have learned how this trend occurred. It turns out that star formation in massive galaxies shut down early. Since the universe was half its current age, only lightweight systems have continued to create stars at a significant rate.

This shift in the venue of star formation is called galaxy downsizing [see “The Midlife Crisis of the Cosmos,” by Amy J. Barger; Scientific American, January 2005]. It seems paradoxical. Galaxy formation theory predicts that small galaxies take shape first and, as they amalgamate, massive ones arise. Yet the history of star formation shows the reverse: massive galaxies are initially the main stellar birthing grounds, then smaller ones take over.

The universe has run out of steam since it was half its current age. Mergers have ceased, and black holes are quiescent.

Another oddity is that the buildup of supermassive black holes, found at the centers of galaxies, seems to have slowed down considerably. Such holes power quasars and other types of active galaxies, which are rare in the modern universe; the black holes in our galaxy and others are quiescent. Are any of these trends in galaxy evolution related? Is it really possible that dark energy is the root cause?

Some astronomers have proposed that internal processes in galaxies, such as energy released by black holes and supernovae, turned off galaxy and star formation. But dark energy has emerged as possibly a more fundamental culprit, the one that can link everything together. The central piece of evidence is the rough coincidence in timing between the end of most galaxy and cluster formation and the onset of the domination of dark energy. Both happened when the universe was about half its present age.

The idea is that up to that point in cosmic history, the density of matter was so high that gravitational forces among galaxies dominated over the effects of dark energy. Galaxies rubbed shoulders, interacted with one another, and frequently merged.

New stars formed as gas clouds within galaxies collided, and black holes grew when gas was driven toward the centers of these systems. As time progressed and space expanded, matter thinned out and its gravity weakened, whereas the strength of dark energy remained constant (or nearly so).

The inexorable shift in the balance between the two eventually caused the expansion rate to switch from deceleration to acceleration.

The structures in which galaxies reside were then pulled apart, with a gradual decrease in the galaxy merger rate as a result. Likewise, intergalactic gas was less able to fall into galaxies. Deprived of fuel, black holes became more quiescent.

This sequence could perhaps account for the downsizing of the galaxy population. The most massive dark matter halos, as well as their embedded galaxies, are also the most clustered; they reside in close proximity to other massive halos. Thus, they are likely to knock into their neighbors earlier than are lower-mass systems. When they do, they experience a burst of star formation.

The newly formed stars light up and then blow up, heating the gas and preventing it from collapsing into new stars. In this way, star formation chokes itself off: stars heat the gas from which they emerged, preventing new ones from forming. The black hole at the center of such a galaxy acts as another damper on star formation.

A galaxy merger feeds gas into the black hole, causing it to fire out jets that heat up gas in the system and prevent it from cooling to form new stars.

Apparently, once star formation in massive galaxies shuts down, it does not start up again–most likely because the gas in these systems becomes depleted or becomes so hot that it cannot cool down quickly enough.

These massive galaxies can still merge with one another, but few new stars emerge for want of cold gas. As the massive galaxies stagnate, smaller galaxies continue to merge and form stars. The result is that massive galaxies take shape before smaller ones, as is observed. Dark energy perhaps modulated this process by determining the degree of galaxy clustering and the rate of merging.

Dark energy would also explain the evolution of galaxy clusters. Ancient clusters, found when the universe was less than half its present age, were already as massive as today’s clusters. That is, galaxy clusters have not grown by a significant amount in the past six billion to eight billion years. This lack of growth is an indication that the infall of galaxies into clusters has been curtailed since the universe was about half its current age–a direct sign that dark energy is influencing the way galaxies are interacting on large scales.

Astronomers knew as early as the mid-1990s that galaxy clusters had not grown much in the past eight billion years, and they attributed this to a lower matter density than theoretical arguments had predicted.

The discovery of dark energy resolved the tension between observation and theory.

An example of how dark energy alters the history of galaxy clusters is the fate of the galaxies in our immediate vicinity, known as the Local Group. Just a few years ago astronomers thought that the Milky Way and Andromeda, its closest large neighbor, along with their retinue of satellites, would fall into the nearby Virgo cluster. But it now appears that we shall escape that fate and never become part of a large cluster of galaxies. Dark energy will cause the distance between us and Virgo to expand faster than the Local Group can cross it.
By throttling cluster development, dark energy also controls the makeup of galaxies within clusters. The cluster environment facilitates the formation of a zoo of galaxies such as the so-called lenticulars, giant ellipticals and dwarf ellipticals. By regulating the ability of galaxies to join clusters, dark energy dictates the relative abundance of these galaxy types.

Space is emptying out, leaving our Milky Way galaxy and its neighbors an increasingly isolated island.

This is a good story, but is it true? Galaxy mergers, black hole activity and star formation all decline with time, and very likely they are related in some way. But astronomers have yet to follow the full sequence of events.

Ongoing surveys with the Hubble Space Telescope, the Chandra X-ray Observatory and sensitive ground-based imaging and spectroscopy will scrutinize these links in coming years. One way to do this is to obtain a good census of distant active galaxies and to determine the time when those galaxies last underwent a merger.

The analysis will require the development of new theoretical tools but should be within our grasp in the next few years.

An accelerating universe dominated by dark energy is a natural way to produce all the observed changes in the galaxy population–namely, the cessation of mergers and its many corollaries, such as loss of vigorous star formation and the end of galactic metamorphosis.

If dark energy did not exist, galaxy mergers would probably have continued for longer than they did, and today the universe would contain many more massive galaxies with old stellar populations. Likewise, it would have fewer lower-mass systems, and spiral galaxies such as our Milky Way would be rare (given that spirals cannot survive the merger process).

Large-scale structures of galaxies would have been more tightly bound, and more mergers of structures and accretion would have occurred.

Conversely, if dark energy were even stronger than it is, the universe would have had fewer mergers and thus fewer massive galaxies and galaxy clusters. Spiral and low-mass dwarf irregular galaxies would be more common, because fewer galaxy mergers would have occurred throughout time, and galaxy clusters would be much less massive or perhaps not exist at all. It is also likely that fewer stars would have formed, and a higher fraction of our universe’s baryonic mass would still be in a gaseous state.
Although these processes may seem distant, the way galaxies form has an influence on our own existence. Stars are needed to produce elements heavier than lithium, which are used to build terrestrial planets and life. If lower star formation rates meant that these elements did not form in great abundance, the universe would not have many planets, and life itself might never have arisen. In this way, dark energy could have had a profound effect on many different and seemingly unrelated aspects of the universe, and perhaps even on the detailed history of our own planet.
Dark energy is by no means finished with its work. It may appear to benefit life: the acceleration will prevent the eventual collapse that was a worry of astronomers not so long ago.

But dark energy brings other risks. At the very least, it pulls apart distant galaxies, making them recede so fast that we lose sight of them for good.

Space is emptying out, leaving our galaxy and its immediate neighbors an increasingly isolated island. Galaxy clusters, galaxies and even stars drifting through intergalactic space will eventually have a limited sphere of gravitational influence not much larger than their own individual sizes.

Worse, dark energy might be evolving. Some models predict that if dark energy becomes ever more dominant over time, it will rip apart gravitationally bound objects, such as galaxy clusters and galaxies.

Ultimately, planet Earth will be stripped from the sun and shredded, along with all objects on it. Even atoms will be destroyed. Dark energy, once cast in the shadows of matter, will have exacted its final revenge.

*Source: Scientific American, January 14, 2007, by Christopher J. Conselice
