125 Questions on Science

Foreword: In June 2005, to mark the 125th anniversary of its founding, Science magazine put together a special issue on the most compelling puzzles and questions facing scientists today. This post is a partial digest of the first 25 of those questions; for the remaining 100, see here.

1. What Is the Universe Made Of

Every once in a while, cosmologists are dragged, kicking and screaming, into a universe much more unsettling than they had any reason to expect. In the 1500s and 1600s, Copernicus, Kepler, and Newton showed that Earth is just one of many planets orbiting one of many stars, destroying the comfortable Medieval notion of a closed and tiny cosmos. In the 1920s, Edwin Hubble showed that our universe is constantly expanding and evolving, a finding that eventually shattered the idea that the universe is unchanging and eternal. And in the past few decades, cosmologists have discovered that the ordinary matter that makes up stars and galaxies and people is less than 5% of everything there is. Grappling with this new understanding of the cosmos, scientists face one overriding question: What is the universe made of?

This question arises from years of progressively stranger observations. In the 1960s, astronomers discovered that galaxies spun around too fast for the collective pull of the stars’ gravity to keep them from flying apart. Something unseen appears to be keeping the stars from flinging themselves away from the center: unilluminated matter that exerts extra gravitational force. This is dark matter.
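
The arithmetic behind this argument is standard Newtonian dynamics, sketched here for context (it is not spelled out in the original article): for a star on a circular orbit, gravity supplies the centripetal force,

```latex
\frac{v^2}{r} = \frac{G\,M(r)}{r^2}
\quad\Longrightarrow\quad
v(r) = \sqrt{\frac{G\,M(r)}{r}}
```

If the luminous stars and gas were all the mass, v(r) should fall off roughly as 1/sqrt(r) beyond the visible disk. Measured rotation curves instead stay flat, which forces M(r) to keep growing with radius: a halo of unseen matter.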

Over the years, scientists have spotted some of this dark matter in space; they have seen ghostly clouds of gas with x-ray telescopes, watched the twinkle of distant stars as invisible clumps of matter pass in front of them, and measured the distortion of space and time caused by invisible mass in galaxies. And thanks to observations of the abundances of elements in primordial gas clouds, physicists have concluded that only 10% of ordinary matter is visible to telescopes.

But even multiplying all the visible “ordinary” matter by 10 doesn’t come close to accounting for how the universe is structured. When astronomers look up in the heavens with powerful telescopes, they see a lumpy cosmos. Galaxies don’t dot the skies uniformly; they cluster together in thin tendrils and filaments that twine among vast voids. Just as there isn’t enough visible matter to keep galaxies spinning at the right speed, there isn’t enough ordinary matter to account for this lumpiness. Cosmologists now conclude that the gravitational forces exerted by another form of dark matter, made of an as-yet-undiscovered type of particle, must be sculpting these vast cosmic structures. They estimate that this exotic dark matter makes up about 25% of the stuff in the universe—five times as much as ordinary matter.

But even this mysterious entity pales by comparison to another mystery: dark energy. In the late 1990s, scientists examining distant supernovae discovered that the universe is expanding faster and faster, instead of slowing down as the laws of physics would imply. Is there some sort of antigravity force blowing the universe up?

All signs point to yes. Independent measurements of a variety of phenomena—cosmic background radiation, element abundances, galaxy clustering, gravitational lensing, gas cloud properties—all converge on a consistent, but bizarre, picture of the cosmos. Ordinary matter and exotic, unknown particles together make up only about 30% of the stuff in the universe; the rest is this mysterious antigravity force known as dark energy.

This means that figuring out what the universe is made of will require answers to three increasingly difficult sets of questions. What is ordinary dark matter made of, and where does it reside? Astrophysical observations, such as those that measure the bending of light by massive objects in space, are already yielding the answer. What is exotic dark matter? Scientists have some ideas, and with luck, a dark-matter trap buried deep underground or a high-energy atom smasher will discover a new type of particle within the next decade. And finally, what is dark energy? This question, which wouldn’t even have been asked a decade ago, seems to transcend known physics more than any other phenomenon yet observed. Ever-better measurements of supernovae and cosmic background radiation as well as planned observations of gravitational lensing will yield information about dark energy’s “equation of state”—essentially a measure of how squishy the substance is. But at the moment, the nature of dark energy is arguably the murkiest question in physics— and the one that, when answered, may shed the most light.
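
For context, the “equation of state” has a standard textbook definition, added here rather than taken from the article: w relates dark energy’s pressure to its density, and the second Friedmann equation shows why it controls acceleration,

```latex
w \equiv \frac{p}{\rho c^2}, \qquad
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right)
```

Any component with w < -1/3 drives accelerated expansion; a pure cosmological constant has w = -1. The supernova and lensing surveys aim to pin down w and whether it drifts over time.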

–CHARLES SEIFE


2. What Is the Biological Basis of Consciousness

For centuries, debating the nature of consciousness was the exclusive purview of philosophers. But if the recent torrent of books on the topic is any indication, a shift has taken place: Scientists are getting into the game.

Has the nature of consciousness finally shifted from a philosophical question to a scientific one that can be solved by doing experiments? The answer, as with anything related to this topic, depends on whom you ask. But scientific interest in this slippery, age-old question seems to be gathering momentum. So far, however, although theories abound, hard data are sparse.

The discourse on consciousness has been hugely influenced by René Descartes, the French philosopher who in the mid–17th century declared that body and mind are made of different stuff entirely. It must be so, Descartes concluded, because the body exists in both time and space, whereas the mind has no spatial dimension.

Recent scientifically oriented accounts of consciousness generally reject Descartes’s solution; most prefer to treat body and mind as different aspects of the same thing. In this view, consciousness emerges from the properties and organization of neurons in the brain. But how? And how can scientists, with their devotion to objective observation and measurement, gain access to the inherently private and subjective realm of consciousness?

Some insights have come from examining neurological patients whose injuries have altered their consciousness. Damage to certain evolutionarily ancient structures in the brainstem robs people of consciousness entirely, leaving them in a coma or a persistent vegetative state. Although these regions may be a master switch for consciousness, they are unlikely to be its sole source. Different aspects of consciousness are probably generated in different brain regions. Damage to visual areas of the cerebral cortex, for example, can produce strange deficits limited to visual awareness. One extensively studied patient, known as D.F., is unable to identify shapes or determine the orientation of a thin slot in a vertical disk. Yet when asked to pick up a card and slide it through the slot, she does so easily.

At some level, D.F. must know the orientation of the slot to be able to do this, but she seems not to know she knows. Cleverly designed experiments can produce similar dissociations of unconscious and conscious knowledge in people without neurological damage. And researchers hope that scanning the brains of subjects engaged in such tasks will reveal clues about the neural activity required for conscious awareness. Work with monkeys also may elucidate some aspects of consciousness, particularly visual awareness. One experimental approach is to present a monkey with an optical illusion that creates a “bistable percept,” looking like one thing one moment and another the next. (The orientation-flipping Necker cube is a well-known example.) Monkeys can be trained to indicate which version they perceive. At the same time, researchers hunt for neurons that track the monkey’s perception, in hopes that these neurons will lead them to the neural systems involved in conscious visual awareness and ultimately to an explanation of how a particular pattern of photons hitting the retina produces the experience of seeing, say, a rose.

Experiments under way at present generally address only pieces of the consciousness puzzle, and very few directly address the most enigmatic aspect of the conscious human mind: the sense of self. Yet the experimental work has begun, and if the results don’t provide a blinding insight into how consciousness arises from tangles of neurons, they should at least refine the next round of questions. Ultimately, scientists would like to understand not just the biological basis of consciousness but also why it exists. What selection pressure led to its development, and how many of our fellow creatures share it? Some researchers suspect that consciousness is not unique to humans, but of course much depends on how the term is defined. Biological markers for consciousness might help settle the matter and shed light on how consciousness develops early in life. Such markers could also inform medical decisions about loved ones who are in an unresponsive state.

Until fairly recently, tackling the subject of consciousness was a dubious career move for any scientist without tenure (and perhaps a Nobel Prize already in the bag). Fortunately, more young researchers are now joining the fray. The unanswered questions should keep them—and the printing presses—busy for many years to come.

–GREG MILLER


3. Why Do Humans Have So Few Genes

4. To What Extent Are Genetic Variation and Personal Health Linked

Forty years ago, doctors learned why some patients who received the anesthetic succinylcholine awoke normally but remained temporarily paralyzed and unable to breathe: They shared an inherited quirk that slowed their metabolism of the drug. Later, scientists traced sluggish succinylcholine metabolism to a particular gene variant. Roughly 1 in 3500 people carry two deleterious copies, putting them at high risk of this distressing side effect.
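
The 1-in-3500 figure can be unpacked with the Hardy-Weinberg relation; here is a minimal sketch of the arithmetic, assuming Hardy-Weinberg equilibrium (an assumption added here, not stated in the article):

```python
import math

# If 1 in 3500 people carry two deleterious copies, that genotype
# frequency is q^2; recover the allele frequency q and the
# heterozygous-carrier frequency 2pq (Hardy-Weinberg assumption).
affected = 1 / 3500
q = math.sqrt(affected)   # deleterious allele frequency
p = 1 - q                 # normal allele frequency
carriers = 2 * p * q      # one-copy carriers, largely spared the side effect

print(f"allele frequency q ~ {q:.4f}")        # ~0.017
print(f"carrier frequency  ~ {carriers:.4f}")  # ~0.033, about 1 in 30
```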

The solution to the succinylcholine mystery was among the first links drawn between genetic variation and an individual’s response to drugs. Since then, a small but growing number of differences in drug metabolism have been linked to genetics, helping explain why some patients benefit from a particular drug, some gain nothing, and others suffer toxic side effects.

The same sort of variation, it is now clear, plays a key role in individual risks of coming down with a variety of diseases. Gene variants have been linked to elevated risks for disorders from Alzheimer’s disease to breast cancer, and they may help explain why, for example, some smokers develop lung cancer whereas many others don’t.

These developments have led to hopes— and some hype—that we are on the verge of an era of personalized medicine, one in which genetic tests will determine disease risks and guide prevention strategies and therapies. But digging up the DNA responsible—if in fact DNA is responsible—and converting that knowledge into gene tests that doctors can use remains a formidable challenge.

Many conditions, including various cancers, heart attacks, lupus, and depression, likely arise when a particular mix of genes collides with something in the environment, such as nicotine or a fatty diet. These multigene interactions are subtler and knottier than the single gene drivers of diseases such as hemophilia and cystic fibrosis; spotting them calls for statistical inspiration and rigorous experiments repeated again and again to guard against introducing unproven gene tests into the clinic. And determining treatment strategies will be no less complex: Last summer, for example, a team of scientists linked 124 different genes to resistance to four leukemia drugs.
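
To make “statistical inspiration” slightly more concrete, here is a hedged sketch of how a gene-by-environment interaction might be tested with logistic regression. All data and column names are invented for illustration; real studies need large cohorts and repeated replication.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "variant": rng.integers(0, 3, n),  # risk-allele count: 0, 1, or 2
    "smoker": rng.integers(0, 2, n),   # environmental exposure
})
# Simulate risk that rises only when the variant and the exposure combine.
logit = -3 + 0.8 * df["variant"] * df["smoker"]
df["disease"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# "variant * smoker" expands to both main effects plus the interaction term.
model = smf.logit("disease ~ variant * smoker", data=df).fit(disp=0)
print(model.summary())  # the variant:smoker coefficient carries the signal
```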

But identifying gene networks like these is only the beginning. One of the toughest tasks is replicating these studies—an especially difficult proposition in diseases that are not overwhelmingly heritable, such as asthma, or ones that affect fairly small patient cohorts, such as certain childhood cancers. Many clinical trials do not routinely collect DNA from volunteers, making it sometimes difficult for scientists to correlate disease or drug response with genes. Gene microarrays, which measure expression of dozens of genes at once, can be fickle and supply inconsistent results. Gene studies can also be prohibitively costly.

Nonetheless, genetic dissection of some diseases—such as cancer, asthma, and heart disease—is galloping ahead.

Progress in other areas, such as psychiatric disorders, is slower. Severely depressed or schizophrenic patients could benefit enormously from tests that reveal which drug and dose will help them the most, but unlike asthma, drug response can be difficult to quantify biologically, making gene-drug relations tougher to pin down. As DNA sequence data become more available and technologies improve, the genetic patterns that govern health will likely come into sharper relief. Genetic tools still under construction, such as a haplotype map that will be used to discern genetic variation behind common diseases, could further accelerate the search for disease genes. The next step will be designing DNA tests to guide clinical decision-making—and using them. If history is any guide, integrating such tests into standard practice will take time. In emergencies—a heart attack, an acute cancer, or an asthma attack—such tests will be valuable only if they rapidly deliver results.

Ultimately, comprehensive personalized medicine will come only if pharmaceutical companies want it to—and it will take enormous investments in research and development. Many companies worry that testing for genetic differences will narrow their market and squelch their profits.

Still, researchers continue to identify new opportunities. In May, the Icelandic company deCODE Genetics reported that an experimental asthma drug that pharmaceutical giant Bayer had abandoned appeared to decrease the risk of heart attack in more than 170 patients who carried particular gene variants. The drug targets the protein produced by one of those genes. The finding is likely to be just a foretaste of the many surprises in store, as the braids binding DNA, drugs, and disease are slowly unwound.

–JENNIFER COUZIN


5. Can the Laws of Physics Be Unified

At its best, physics eliminates complexity by revealing underlying simplicity. Maxwell’s equations, for example, describe all the confusing and diverse phenomena of classical electricity and magnetism by means of four simple rules. These equations are beautiful; they have an eerie symmetry, mirroring one another in an intricate dance of symbols. The four together feel as elegant, as whole, and as complete to a physicist as a Shakespearean sonnet does to a poet.

The Standard Model of particle physics is an unfinished poem. Most of the pieces are there, and even unfinished, it is arguably the most brilliant opus in the literature of physics. With great precision, it describes all known matter—all the subatomic particles such as quarks and leptons—as well as the forces by which those particles interact with one another. These forces are electromagnetism, which describes how charged objects feel each other’s influence; the weak force, which explains how particles can change their identities; and the strong force, which describes how quarks stick together to form protons and other composite particles. But as lovely as the Standard Model’s description is, it is in pieces, and some of those pieces—those that describe gravity—are missing. It is a few shards of beauty that hint at something greater, like a few lines of Sappho on a fragment of papyrus.

The beauty of the Standard Model is in its symmetry; mathematicians describe its symmetries with objects known as Lie groups. And a mere glimpse at the Standard Model’s Lie group betrays its fragmented nature: SU(3) × SU(2) × U(1). Each of those pieces represents one type of symmetry, but the symmetry of the whole is broken. Each of the forces behaves in a slightly different way, so each is described with a slightly different symmetry.

But those differences might be superficial. Electromagnetism and the weak force appear very dissimilar, but in the 1960s physicists showed that at high temperatures, the two forces “unify.” It becomes apparent that electromagnetism and the weak force are really the same thing, just as it becomes obvious that ice and liquid water are the same substance if you warm them up together. This connection led physicists to hope that the strong force could also be unified with the other two forces, yielding one large theory described by a single symmetry such as SU(5).
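
In symbols, the hope is to embed the Standard Model’s fractured product of symmetries inside one larger group; the classic candidate is the Georgi-Glashow SU(5) proposal (standard notation, supplied here for context):

```latex
SU(3)_{\text{strong}} \times SU(2)_{\text{weak}} \times U(1)_{\text{hypercharge}}
\;\subset\; SU(5)
```

One unbroken symmetry at very high energies, fracturing into the three familiar pieces as the universe cools.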

A unified theory should have observable consequences. For example, if the strong force truly is the same as the electroweak force, then protons might not be truly stable; once in a long while, they should decay spontaneously. Despite many searches, nobody has spotted a proton decay, nor has anyone sighted any particles predicted by some symmetry-enhancing modifications to the Standard Model, such as supersymmetry. Worse yet, even such a unified theory can’t be complete—as long as it ignores gravity.

Gravity is a troublesome force. The theory that describes it, general relativity, assumes that space and time are smooth and continuous, whereas the underlying quantum physics that governs subatomic particles and forces is inherently discontinuous and jumpy. Gravity clashes with quantum theory so badly that nobody has come up with a convincing way to build a single theory that includes all the particles, the strong and electroweak forces, and gravity all in one big bundle. But physicists do have some leads. Perhaps the most promising is superstring theory.

Superstring theory has a large following because it provides a way to unify everything into one large theory with a single symmetry—SO(32) for one branch of superstring theory, for example—but it requires a universe with 10 or 11 dimensions, scads of undetected particles, and a lot of intellectual baggage that might never be verifiable. It may be that there are dozens of unified theories, only one of which is correct, but scientists may never have the means to determine which. Or it may be that the struggle to unify all the forces and particles is a fool’s quest.

In the meantime, physicists will continue to look for proton decays, as well as search for supersymmetric particles in underground traps and in the Large Hadron Collider (LHC) in Geneva, Switzerland, when it comes online in 2007. Scientists believe that the LHC will also reveal the existence of the Higgs boson, a particle intimately related to fundamental symmetries in the Standard Model of particle physics. And physicists hope that one day, they will be able to finish the unfinished poem and frame its fearful symmetry.

–CHARLES SEIFE


6. How Much Can Human Life Span Be Extended

7. What Controls Organ Regeneration

8. How Can a Skin Cell Become a Nerve Cell

9. How Does a Single Somatic Cell Become a Whole Plant

10. How Does Earth’s Interior Work

11. Are We Alone in the Universe

12. How and Where Did Life on Earth Arise

13. What Determines Species Diversity

14. What Genetic Changes Made Us Uniquely Human


15. How Are Memories Stored and Retrieved

Packed into the kilogram or so of neural wetware between the ears is everything we know: a compendium of useful and trivial facts about the world, the history of our lives, plus every skill we’ve ever learned, from riding a bike to persuading a loved one to take out the trash. Memories make each of us unique, and they give continuity to our lives. Understanding how memories are stored in the brain is an essential step toward understanding ourselves.

Neuroscientists have already made great strides, identifying key brain regions and potential molecular mechanisms. Still, many important questions remain unanswered, and a chasm gapes between the molecular and whole-brain research.

The birth of the modern era of memory research is often pegged to the publication, in 1957, of an account of the neurological patient H.M. At age 27, H.M. had large chunks of the temporal lobes of his brain surgically removed in a last-ditch effort to relieve chronic epilepsy. The surgery worked, but it left H.M. unable to remember anything that happened—or anyone he met—after his surgery. The case showed that the medial temporal lobes (MTL), which include the hippocampus, are crucial for making new memories. H.M.’s case also revealed, on closer examination, that memory is not a monolith: Given a tricky mirror drawing task, H.M.’s performance improved steadily over 3 days even though he had no memory of his previous practice. Remembering how is not the same as remembering what, as far as the brain is concerned.

Thanks to experiments on animals and the advent of human brain imaging, scientists now have a working knowledge of the various kinds of memory as well as which parts of the brain are involved in each. But persistent gaps remain. Although the MTL has indeed proved critical for declarative memory—the recollection of facts and events—the region remains something of a black box. How its various components interact during memory encoding and retrieval is unresolved. Moreover, the MTL is not the final repository of declarative memories. Such memories are apparently filed to the cerebral cortex for long-term storage, but how this happens, and how memories are represented in the cortex, remains unclear.

More than a century ago, the great Spanish neuroanatomist Santiago Ramón y Cajal proposed that making memories must require neurons to strengthen their connections with one another. Dogma at the time held that no new neurons are born in the adult brain, so Ramón y Cajal made the reasonable assumption that the key changes must occur between existing neurons. Until recently, scientists had few clues about how this might happen.

Since the 1970s, however, work on isolated chunks of nervous-system tissue has identified a host of molecular players in memory formation. Many of the same molecules have been implicated in both declarative and nondeclarative memory and in species as varied as sea slugs, fruit flies, and rodents, suggesting that the molecular machinery for memory has been widely conserved. A key insight from this work has been that short-term memory (lasting minutes) involves chemical modifications that strengthen existing connections, called synapses, between neurons, whereas long-term memory (lasting days or weeks) requires protein synthesis and probably the construction of new synapses.

Tying this work to the whole-brain research is a major challenge. A potential bridge is a process called long-term potentiation (LTP), a type of synaptic strengthening that has been scrutinized in slices of rodent hippocampus and is widely considered a likely physiological basis for memory. A conclusive demonstration that LTP really does underlie memory formation in vivo would be a big breakthrough.
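
As a toy illustration of the Hebbian intuition behind LTP, the sketch below strengthens a synapse whenever pre- and postsynaptic activity coincide. It is a cartoon with invented numbers, not a model drawn from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
w, eta = 0.1, 0.05            # synaptic weight and learning rate
for step in range(200):
    pre = rng.integers(0, 2)                       # presynaptic spike (0 or 1)
    post = 1 if pre and rng.random() < 0.8 else 0  # post usually follows pre
    w = min(w + eta * pre * post, 1.0)             # potentiate on coincidence,
                                                   # saturating at a ceiling
print(f"final weight: {w:.3f}")  # climbs well above its initial 0.1
```
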
Meanwhile, more questions keep popping up. Recent studies have found that patterns of neural activity seen when an animal is learning a new task are replayed later during sleep. Could this play a role in solidifying memories? Other work shows that our memories are not as trustworthy as we generally assume. Why is memory so labile? A hint may come from recent studies that revive the controversial notion that memories are briefly vulnerable to manipulation each time they’re recalled. Finally, the no-new-neurons dogma went down in flames in the 1990s, with the demonstration that the hippocampus, of all places, is a virtual neuron nursery throughout life. The extent to which these newborn cells support learning and memory remains to be seen.

–GREG MILLER


16. How Did Cooperative Behavior Evolve

17. How Will Big Pictures Emerge From a Sea of Biological Data

Biology is rich in descriptive data—and getting richer all the time. Large-scale methods of probing samples, such as DNA sequencing, microarrays, and automated gene-function studies, are filling new databases to the brim. Many subfields from biomechanics to ecology have gone digital, and as a result, observations are more precise and more plentiful. A central question now confronting virtually all fields of biology is whether scientists can deduce from this torrent of molecular data how systems and whole organisms work. All this information needs to be sifted, organized, compiled, and—most importantly—connected in a way that enables researchers to make predictions based on general principles.

Enter systems biology. Loosely defined and still struggling to find its way, this newly emerging approach aims to connect the dots that have emerged from decades of molecular, cellular, organismal, and even environmental observations. Its proponents seek to make biology more quantitative by relying on mathematics, engineering, and computer science to build a more rigid framework for linking disparate findings. They argue that it is the only way the field can move forward. And they suggest that biomedicine, particularly deciphering risk factors for disease, will benefit greatly.

The field got a big boost from the completion of the human genome sequence. The product of a massive, trip-to-the-moon logistical effort, the sequence is now a hard and fast fact. The biochemistry of human inheritance has been defined and measured. And that has inspired researchers to try to make other aspects of life equally knowable.

Molecular geneticists dream of having a similarly comprehensive view of networks that control genes: For example, they would like to identify rules explaining how a single DNA sequence can express different proteins, or varying amounts of protein, in different circumstances. Cell biologists would like to reduce the complex communication patterns traced by molecules that regulate the health of the cell to a set of signaling rules. Developmental biologists would like a comprehensive picture of how the embryo manages to direct a handful of cells into a myriad of specialized functions in bone, blood, and skin tissue. These hard puzzles can only be solved by systems biology, proponents say. The same can be said for neuroscientists trying to work out the emergent properties—higher thought, for example—hidden in complex brain circuits. To understand ecosystem changes, including global warming, ecologists need ways to incorporate physical as well as biological data into their thinking.

Today, systems biologists have only begun to tackle relatively simple networks. They have worked out the metabolic pathway in yeast for breaking down galactose, a carbohydrate. Others have tracked the first few hours of the embryonic development of sea urchins and other organisms with the goal of seeing how various transcription factors alter gene expression over time. Researchers are also developing rudimentary models of signaling networks in cells and simple brain circuits.
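
A minimal sketch of the kind of quantitative model such studies aim for: a single gene induced by a signal through saturating (Hill-type) kinetics and degraded linearly. The parameters are invented for illustration and are not fitted to the galactose pathway or any real network.

```python
import numpy as np
from scipy.integrate import solve_ivp

def gene(t, y, S=1.0, beta=2.0, K=0.5, n=2, gamma=1.0):
    """One protein: Hill-type production driven by signal S, linear decay."""
    production = beta * S**n / (K**n + S**n)
    return [production - gamma * y[0]]

sol = solve_ivp(gene, (0, 10), [0.0], t_eval=np.linspace(0, 10, 50))
print(f"protein level approaches ~ {sol.y[0, -1]:.2f}")  # steady state: (beta/gamma) * Hill(S)
```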

Progress is limited by the difficulty of translating biological patterns into computer models. Network computer programs themselves are relatively simple, and the methods of portraying the results in ways that researchers can understand and interpret need improving. New institutions around the world are gathering interdisciplinary teams of biologists, mathematicians, and computer specialists to help promote systems biology approaches. But it is still in its early days.

No one yet knows whether intensive interdisciplinary work and improved computational power will enable researchers to create a comprehensive, highly structured picture of how life works.

–ELIZABETH PENNISI


18. How Far Can We Push Chemical Self-Assembly

19. What Are the Limits of Conventional Computing

20. Can We Selectively Shut Off Immune Responses

21. Do Deeper Principles Underlie Quantum Uncertainty and Nonlocality

22. Is an Effective HIV Vaccine Feasible

23. How Hot Will the Greenhouse World Be

24. What Can Replace Cheap Oil—and When

25. Will Malthus Continue to Be Wrong

END
