Sunday, January 16, 2011

Inside a Snowstorm: Scientists Obtain Close-Up Look at Old Man Winter

Doppler-on-Wheels Gives New View of Lake-Effect Snows


DOW Doppler radar of cells in a snowband.
DOW Doppler radar of cells in a snowband; the coiled hooks are the most intense snows.


In this winter of heavy snows--with more on the way this week--nature's bull's-eye might be Oswego, N.Y., and the nearby Tug Hill Plateau.
There the proximity of the Great Lakes whips wind and snow into high gear. Old Man Winter then blows across New York state, burying cities and towns in snowdrifts several feet high. This season, however, something is standing in his way.
The Doppler-on-Wheels (DOW), a data-collecting radar dish, is waiting. This month and next, scientists inside the DOW are tracking snowstorms in and around Oswego to learn what drives lake-effect snowstorms that form parallel to the long axis of a Great Lake and produce enormous snowfall rates.
These long lake-axis-parallel (LLAP) bands of snow are more intense than those of other snow squalls and produce some of the highest snowfall rates and amounts in the world, say atmospheric scientists Scott Steiger of the State University of New York (SUNY)-Oswego, Jeffrey Frame of the University of Illinois at Urbana-Champaign, and Alfred Stamm of SUNY-Oswego.
"The mobility of a DOW," Steiger says, "is ideal for following lake-effect storms. The DOW will allow us to witness them as they form and cross lakes, which other weather radars can't do."
The DOW, or more properly "DOWs" as there are three, is a National Science Foundation (NSF) atmospheric science facility.
A DOW looks more like the dish of a radio telescope than a sophisticated weather instrument. It's mounted on the back of a flat-bed truck. With a DOW on board, the truck becomes an odd configuration of generator, equipment and operator cabin.
Ungainly as it may appear, it's ideally suited to provide detailed information on the inner workings of snow and other storms, says Josh Wurman, director of the Center for Severe Weather Research (CSWR) in Boulder, Colo.
Wurman should know. He and colleagues developed the first DOW in 1995.
The DOW uses Doppler radar to produce velocity data about severe storms at a distance.
When a DOW is deployed, it collects fine-scale data from within a snowstorm and displays features that can't be seen with more distant radars.
The DOW radars are dual polarization, says Wurman, which means that they send out both horizontally- and vertically-oriented energy. By looking at differences in the energy bounced back from these horizontal and vertical beams, scientists can learn more about the snowflakes, ice, rain and snow pellets in snowbands.
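One standard quantity derived from those two beams is differential reflectivity (ZDR), the decibel ratio of the power returned from the horizontal beam to that from the vertical beam. The Python sketch below is purely illustrative: the function name and sample values are invented, and this is not the DOW's actual processing chain, but it shows why flattened, oblate particles stand out from dry, tumbling snow.

```python
import math

def differential_reflectivity(z_h, z_v):
    """Differential reflectivity (ZDR) in dB, from horizontal and
    vertical linear reflectivity factors (same units for both)."""
    return 10.0 * math.log10(z_h / z_v)

# Oblate particles (rain drops, wet snow) return more horizontal than
# vertical power, so ZDR is positive; dry snowflakes tumble randomly
# and average out to near-zero ZDR.
print(round(differential_reflectivity(400.0, 200.0), 2))  # oblate: 3.01 dB
print(round(differential_reflectivity(300.0, 300.0), 2))  # tumbling snow: 0.0 dB
```

Values near zero thus suggest dry snowflakes or pellets, while large positive values suggest oblate, water-coated particles, which is one way dual-polarization data helps distinguish snow types.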
"NSF's dual-polarization DOW radars offer an important new avenue toward better understanding this intense winter weather phenomenon affecting the Great Lakes region," says Brad Smull, program director in NSF's Division of Atmospheric and Geospace Sciences, which funds the DOWs and the LLAP project.
The DOWs measure Doppler winds, snow intensity, and properties related to whether snow is dense, made of pellets, or formed from loose collections of traditional six-sided snowflakes. A storm's snow crystal type plays a major role in whether lake-effect snowbands drop a few inches of snow--or more than two feet.
"Understanding snow type is critically important," says Wurman.
The DOWs collect data that will be used to determine how LLAP snowbands intensify, weaken and move across a region. The scientists are right behind.
"Instead of waiting for snowbands to come to us," says Wurman, "we and the DOWs are going to them."
After forecasting likely snowband events, Steiger, Wurman and colleagues drive the DOW to one of more than 30 sites near Lake Ontario, Lake Erie, and the Tug Hill Plateau to monitor LLAP snowbands.
During the past week, scientists deployed the DOW to four locations near Oswego and Rochester to study intense snowbands. The bands dropped snow at up to four inches per hour, with final totals of more than two feet.
Initial findings are that intense storm circulations were observed in the bands, and that the snow type changed during the passage of the storm.
More lake effect snows are forecast for this weekend. DOWs and scientists are again heading to New York hilltops to peer inside.
Old Man Winter may have no choice but to give up his secrets.

Locating the Source of Fear in the Brain

"FEAR FACTORY"

Researchers at the University of Iowa have pinpointed the part of the brain that causes people to experience fear -- a discovery that could improve treatment of post-traumatic stress disorder (PTSD) and other anxiety conditions.

Ever see one of those "No Fear" stickers on a vehicle? It may indicate that the driver is either extremely brave and daring or has an underdeveloped amygdala region of the brain.
Animal studies have shown for a long time that this almond-shaped region of the brain plays an important role in generating fear reactions in rats and monkeys. Now researchers at the University of Iowa have put a human face on what it's like to live with no fear and for the first time, have confirmed that the amygdala is responsible for generating fear in humans.
The patient in their study had a rare condition that destroyed this particular region of the brain. Earlier studies showed that she could not recognize fear in facial expressions, but the new research revealed that she also could not experience fear. Not from any of the usual stimuli -- spiders, snakes, haunted houses, horror films, or even reminders of traumatic events from her past.
The researchers say that, without the amygdala, the alarm in our brain that pushes us to avoid danger is missing.
To the 7.7 million Americans who suffer from post traumatic stress disorder (PTSD), or related anxiety disorders, this research could someday lead to safe, non-invasive treatments to dampen down activity of the fear-inducing part of the brain.
Kinda gives new meaning to the term, "fear factor."

Putting the Dead to Work

Conservation paleobiologists dig deep to solve today's ecological, evolutionary questions


Cover of the January 2011 issue of the journal Trends in Ecology and Evolution.
Conservation paleobiologists are looking at ecology and evolution in new ways.
Conservation paleobiologists--scientists who use the fossil record to understand the evolutionary and ecological responses of present-day species to changes in their environment--are putting the dead to work.
A new review of the research in this emerging field provides examples of how the fossil record can help assess environmental impacts, predict which species will be most vulnerable to environmental changes, and provide guidelines for restoration.
The literature review by conservation paleobiologists Gregory Dietl of the Paleontological Research Institution and Cornell University and Karl Flessa of the University of Arizona is published in the January 2011 issue of the journal Trends in Ecology and Evolution.
"Conservation paleobiologists apply the data and tools of paleontology to today's problems in biodiversity conservation," says Dietl.
The primary sources of data are "geohistorical": the fossils, geochemistry and sediments of the geologic record.
"A conservation paleobiology perspective has the unique advantage of being able to identify phenomena beyond time scales of direct observation," Dietl says.
Such data, says Flessa, "are crucial for documenting the species we have already lost--such as the extinct birds of the Hawaiian islands--and for developing more effective conservation policies in the face of an uncertain future."
Geohistorical records, the authors write, are critical to identifying where--and how--species survived long-ago periods of climate change.
"Historically, paleontologists have focused their efforts on understanding the deep-time geological record of ancient life on Earth, but these authors turn that focus 180 degrees," says H. Richard Lane, program director in NSF's Division of Earth Sciences, which funds Dietl's and Flessa's research.
"In putting the dead to work, they identify the significant impact knowledge of fossil life can have on interpreting modern biodiversity and ecological trends."
Ancient DNA, for example, has been used to show that the arctic fox (Alopex lagopus) was not able to move with shifting climates as its range contracted, eventually becoming extinct in Europe at the end of the Pleistocene.
However, the species persisted in regions of northeastern Siberia where the climate was still suitable for arctic foxes.
In another tale from the beyond, fossil evidence suggests that the birds of the Hawaiian Islands suffered large-scale extinctions around the time of the arrival of the Polynesians.
Studies comparing the ecological characteristics of bird species before and after these extinctions reveal a strong bias against larger-bodied and flightless, ground-nesting species.
The pattern suggests that hunting by humans played a role in the extinction of the flightless species.
By the 18th century, the time of the first Europeans' arrival in the islands, most large-bodied birds had already disappeared. European colonization of the islands led to a second wave of extinctions.
Those birds that survived had traits that helped them weather two onslaughts.
In their review paper, Dietl and Flessa cite a study of the frequency in the fossil record of insect damage to flowering plant leaves in the Bighorn Basin of Wyoming dating from before, during and after the Paleocene-Eocene Thermal Maximum (PETM, some 55.8 million years ago).
The PETM, scientists believe, is one of the best deep-time analogs for current global climate change questions.
Results from the insect research suggest that herbivory intensified during the PETM global warming episode.
"This finding provides insights into how the human-induced rise in atmospheric carbon dioxide is likely to affect insect-plant interactions in the long run," the authors write, "which is difficult to predict from short-term studies that have highly species-specific responses."
The dead can help us even in remote places like the Galapagos Islands.
Scientists have used the fossil pollen and plant record there to show that at least six non-native or "doubtfully native" species were present before the arrival of humans. 
This baseline information, says Dietl, "is crucial to a current conservation priority in the Galapagos: the removal of invasive species."
An important role of geohistorical data is to provide access to a wider range of past environmental conditions--alternative worlds of every imaginable circumstance.
Tales of the past that may lead to better conservation practices, crucial for life, not death, on Earth.
The dead, it turns out, do tell tales.

Earth's Hot Past: Prologue to Future Climate?

Study of Earth's deep past leads to look into the future


Photo of a red sun on the horizon with silhouetted landscape in foreground.
Earth may someday return to a hotter climate like that of a time when the Antarctic ice sheet didn't exist.
The magnitude of climate change during Earth's deep past suggests that future temperatures may eventually rise far more than projected if society continues its pace of emitting greenhouse gases, a new analysis concludes.
The study, by National Center for Atmospheric Research (NCAR) scientist Jeffrey Kiehl, will appear as a "Perspectives" article in this week's issue of the journal Science.
The work was funded by the National Science Foundation (NSF), NCAR's sponsor.
Building on recent research, the study examines the relationship between global temperatures and high levels of carbon dioxide in the atmosphere tens of millions of years ago.
It warns that, if carbon dioxide emissions continue at their current rate through the end of this century, atmospheric concentrations of the greenhouse gas will reach levels that existed about 30 million to 100 million years ago.
Global temperatures then averaged about 29 degrees Fahrenheit (16 degrees Celsius) above pre-industrial levels.
Kiehl said that global temperatures may take centuries or millennia to fully adjust in response to the higher carbon dioxide levels.
According to the study, which draws on recent computer modeling of geochemical processes, elevated levels of carbon dioxide may remain in the atmosphere for tens of thousands of years.
The study also indicates that the planet's climate system, over long periods of time, may be at least twice as sensitive to carbon dioxide as currently projected by computer models, which have generally focused on shorter-term warming trends.
This is largely because even sophisticated computer models have not yet been able to incorporate critical processes, such as the loss of ice sheets, that take place over centuries or millennia and amplify the initial warming effects of carbon dioxide.
"If we don't start seriously working toward a reduction of carbon emissions, we are putting our planet on a trajectory that the human species has never experienced," says Kiehl, a climate scientist who specializes in studying global climate in Earth's geologic past.
"We will have committed human civilization to living in a different world for multiple generations."
The Perspectives article pulls together several recent studies that look at various aspects of the climate system, while adding a mathematical approach by Kiehl to estimate average global temperatures in the distant past.
Its analysis of the climate system's response to elevated levels of carbon dioxide is supported by previous studies that Kiehl cites.
"This research shows that squaring the evidence of environmental change in the geologic record with mathematical models of future climate is crucial," says David Verardo, Director of NSF's Paleoclimate Program. "Perhaps Shakespeare's words that 'what's past is prologue' also apply to climate."
Kiehl focused on a fundamental question: when was the last time Earth's atmosphere contained as much carbon dioxide as it may by the end of this century?
If society continues its current pace of increasing the burning of fossil fuels, atmospheric levels of carbon dioxide are expected to reach about 900 to 1,000 parts per million by the end of this century.
That compares with current levels of about 390 parts per million, and pre-industrial levels of about 280 parts per million.
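As a rough illustration of what those numbers imply (this is not a calculation from the study itself; the year count and the assumption of steady exponential growth are mine), the average annual growth rate needed to move from about 390 ppm in 2011 to 900-1,000 ppm by 2100 works out to roughly one percent per year:

```python
# Implied average annual growth rate r such that
# C_end = C_now * (1 + r) ** years, assuming steady exponential growth.
def implied_growth_rate(c_now, c_end, years):
    return (c_end / c_now) ** (1.0 / years) - 1.0

# From about 390 ppm in 2011 to the article's end-of-century range.
for target in (900.0, 1000.0):
    r = implied_growth_rate(390.0, target, 89)
    print(f"{target:.0f} ppm by 2100 -> about {100 * r:.2f}% per year")
```

Both targets come out close to one percent growth per year, sustained for nearly nine decades.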
Since carbon dioxide is a greenhouse gas that traps heat in Earth's atmosphere, it is critical for regulating Earth's climate.
Without carbon dioxide, the planet would freeze over.
But as atmospheric levels of the gas rise, which has happened at times in the geologic past, global temperatures increase dramatically and additional greenhouse gases, such as water vapor and methane, enter the atmosphere through processes related to evaporation and thawing.
This leads to further heating.
Kiehl drew on recently published research that, by analyzing molecular structures in fossilized organic materials, showed that carbon dioxide levels likely reached 900 to 1,000 parts per million about 35 million years ago.
At that time, temperatures worldwide were substantially warmer than at present, especially in polar regions--even though the Sun's energy output was slightly weaker.
The high levels of carbon dioxide in the ancient atmosphere kept the tropics at about 9-18 F (5-10 C) above present-day temperatures.
The polar regions were some 27-36 F (15-20 C) above present-day temperatures.
Kiehl applied mathematical formulas to calculate that Earth's average annual temperature 30 to 40 million years ago was about 88 F (31 C)--substantially higher than the pre-industrial average temperature of about 59 F (15 C).
The study also found that carbon dioxide may have two times or more an effect on global temperatures than currently projected by computer models of global climate.
The world's leading computer models generally project that a doubling of carbon dioxide in the atmosphere would have a heating impact in the range of 0.5 to 1.0 degrees Celsius per watt per square meter. (The unit is a measure of the sensitivity of Earth's climate to changes in greenhouse gases.)
However, the published data show that the comparable impact of carbon dioxide 35 million years ago amounted to about 2 degrees Celsius per watt per square meter.
Computer models successfully capture the short-term effects of increasing carbon dioxide in the atmosphere.
But the record from Earth's geologic past also encompasses longer-term effects, which accounts for the discrepancy in findings.
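Those sensitivity figures can be turned into a back-of-the-envelope warming estimate. The forcing formula below (about 5.35 times the natural log of the CO2 concentration ratio) is a widely used simplification and is not taken from Kiehl's paper; only the sensitivity values of 0.5-1.0 and about 2 degrees Celsius per watt per square meter come from the article.

```python
import math

def co2_forcing(c_new_ppm, c_ref_ppm):
    """Approximate radiative forcing (W/m^2) for a change in CO2,
    using the common simplified logarithmic expression."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

forcing_doubling = co2_forcing(560.0, 280.0)  # roughly 3.7 W/m^2

# Equilibrium warming = sensitivity (degrees C per W/m^2) * forcing.
for lam in (0.5, 1.0):   # range the article cites for current models
    print(f"model sensitivity {lam}: {lam * forcing_doubling:.1f} C per doubling")
paleo = 2.0              # sensitivity inferred from the 35-million-year-old record
print(f"paleo sensitivity {paleo}: {paleo * forcing_doubling:.1f} C per doubling")
```

With the paleoclimate sensitivity, a doubling of CO2 yields roughly 7.4 degrees Celsius of eventual warming, about twice the upper model-based estimate, which is the gap the article attributes to slow processes such as ice-sheet loss.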
The eventual melting of ice sheets, for example, leads to additional heating because exposed dark surfaces of land or water absorb more heat than ice sheets.
"This analysis shows that on longer time scales, our planet may be much more sensitive to greenhouse gases than we thought," Kiehl says.
Climate scientists are currently adding more sophisticated depictions of ice sheets and other factors to computer models.
As these improvements come on-line, Kiehl believes that the computer models and the paleoclimate record will be in closer agreement, showing that the impacts of carbon dioxide on climate over time will likely be far more substantial than recent research has indicated.
Because carbon dioxide is being pumped into the atmosphere at a rate that has never been experienced, Kiehl could not estimate how long it would take for the planet to fully heat up.
However, a rapid warm-up would make it especially difficult for societies and ecosystems to adapt, he says.
If emissions continue on their current trajectory, "the human species and global ecosystems will be placed in a climate state never before experienced in human history," the paper states.

Writing About Anxiety Helps Students Ace Exams

Research says test performance improves when students write about their worries.

Photo of a teacher sitting at a desk , blackboard with the word EXAM, and students taking a test.
The desire to perform well in academics causes some students to perform below their abilities.
Sian Beilock, lead author of a new study that appears today in the journal Science, says writing about test-related worries for ten minutes immediately before taking an exam is an effective way to improve test scores in classroom settings.
"By writing down one's negative thoughts, students may come to realize that the situation is not as bad as they thought or that they are prepared to take it on," said Beilock, an associate professor of Psychology at the University of Chicago. "As a result, they worry less during the test."
Her study, "Writing about Testing Worries Boosts Exam Performance in the Classroom," was funded by the National Science Foundation's Directorate for Education and Human Resources.
"Understanding performance strategies that help people overcome memory limitations in high-pressure situations is an important goal of this research," said Gregg Solomon, a program director in the directorate's Division of Research on Learning in Formal and Informal Settings.
"Other researchers have shown that expressive writing can help people decrease worries, but Beilock and her co-author wanted to test the benefits for students in the classroom."
University of Chicago Institute of Education Sciences pre-doctoral fellow Gerardo Ramirez conducted the study with Beilock.
"For many students, the desire to perform their best in academics is high," Beilock and Ramirez wrote in the report. "Yet, despite the fact that students are often motivated to perform their best, the pressure-filled situations in which important tests occur can cause students to perform below their ability instead."
But students who get particularly anxious about test taking saw their scores on high stakes exams go up nearly one grade point after they wrote about why they feared the test's outcome.
In fact, highly anxious students who had the opportunity to write down their thoughts before taking a math test administered as part of the research study received an average grade of B+. Highly anxious students who didn't write down their thoughts received an average grade of B-.
The researchers concluded that the writing exercise provided students an opportunity to unload their anxieties before taking the test and thereby freed up brainpower needed to perform successfully--brainpower that is normally occupied by testing worries.
Beilock, who is also the author of a recently published book about mental logjams called "Choke: What the Secrets of the Brain Reveal about Getting It Right When You Have To," said the findings of this new research are somewhat counterintuitive. Writing about anxiety before a test, it seems, would cause the worries to crystallize and manifest themselves, not alleviate them.
"But, once you know some of the science behind test anxiety, expressive writing and test performance, it makes sense," she said. 
It has been suggested that writing about one's thoughts and feelings regarding a particular event reduces the tendency to dwell on negative consequences, because it allows individuals to reexamine the situation and see less need to worry.
"If worries use up important thinking and reasoning resources that could otherwise be devoted to exam performance and writing eliminates these worries, then students' performance should improve," said Beilock.
The researchers say this type of writing may help people perform their best in a variety of pressure-filled situations--whether it is a big presentation to a client, a speech to an audience, or even a job interview.

Water and Oil Everywhere, and Now It's Safe to Drink

Developer demonstrates oil filtration technology tested in 2010 Gulf of Mexico oil spill

Photo of Dr. Stephen Jolly and Doug Martin of AbsMaterials.
Clean water being passed over a vibratory separator after treatment with Osorb®.
Building upon research conducted during the 2010 Deepwater Horizon oil spill in the Gulf of Mexico, engineers have incorporated a swellable nano-structured glass called Osorb® into a system for extracting pollutants like dissolved petroleum from water--and collecting the petroleum for later use.
During a webcast from the National Science Foundation, developer Paul Edmiston of the College of Wooster will demonstrate the new application for the Osorb® technology and discuss how it is being evaluated in the petroleum industry.
As part of the media briefing, Edmiston will conduct demonstrations to show how the material expands to eight times its original volume in the presence of hydrocarbons--expanding with a force that could lift 20,000 times its original weight--and filter a gasoline-tainted sample of drinking water for consumption. 

Surprise: Dwarf Galaxy Harbors Super-massive Black Hole

Henize 2-10
The dwarf galaxy Henize 2-10, seen in visible light by the Hubble Space Telescope. The central, light-pink region shows an area of radio emission, seen with the Very Large Array. This area indicates the presence of a supermassive black hole drawing in material from its surroundings. This also is indicated by strong X-ray emission from this region detected by the Chandra X-Ray Observatory.


The surprising discovery of a supermassive black hole in a small nearby galaxy has given astronomers a tantalizing look at how black holes and galaxies may have grown in the early history of the Universe. Finding a black hole a million times more massive than the Sun in a star-forming dwarf galaxy is a strong indication that supermassive black holes formed before the buildup of galaxies, the astronomers said.
The galaxy, called Henize 2-10, 30 million light-years from Earth, has been studied for years, and is forming stars very rapidly. Irregularly shaped and about 3,000 light-years across (compared to 100,000 for our own Milky Way), it resembles what scientists think were some of the first galaxies to form in the early Universe.
"This galaxy gives us important clues about a very early phase of galaxy evolution that has not been observed before," said Amy Reines, a Ph.D. candidate at the University of Virginia.
Supermassive black holes lie at the cores of all "full-sized" galaxies. In the nearby Universe, there is a direct relationship -- a constant ratio -- between the masses of the black holes and that of the central "bulges" of the galaxies, leading astronomers to conclude that the black holes and bulges affected each other's growth.
Two years ago, an international team of astronomers found that black holes in young galaxies in the early Universe were more massive than this ratio would indicate. This, they said, was strong evidence that black holes developed before their surrounding galaxies.
"Now, we have found a dwarf galaxy with no bulge at all, yet it has a supermassive black hole. This greatly strengthens the case for the black holes developing first, before the galaxy's bulge is formed," Reines said.
Reines, along with Gregory Sivakoff and Kelsey Johnson of the University of Virginia and the National Radio Astronomy Observatory (NRAO), and Crystal Brogan of the NRAO, observed Henize 2-10 with the National Science Foundation's Very Large Array radio telescope and with the Hubble Space Telescope. They found a region near the center of the galaxy that strongly emits radio waves with characteristics of those emitted by super-fast "jets" of material spewed outward from areas close to a black hole.
They then searched images from the Chandra X-Ray Observatory that showed this same, radio-bright region to be strongly emitting energetic X-rays. This combination, they said, indicates an active, black-hole-powered, galactic nucleus.
"Not many dwarf galaxies are known to have massive black holes," Sivakoff said.
While central black holes of roughly the same mass as the one in Henize 2-10 have been found in other galaxies, those galaxies all have much more regular shapes. Henize 2-10 differs not only in its irregular shape and small size but also in its furious star formation, concentrated in numerous, very dense "super star clusters."
"This galaxy probably resembles those in the very young Universe, when galaxies were just starting to form and were colliding frequently. All its properties, including the supermassive black hole, are giving us important new clues about how these black holes and galaxies formed at that time," Johnson said.

Monday, January 10, 2011

Wildflower Colors Tell Butterflies How To Do Their Jobs

Where two species of phlox overlap, one turns red to discourage butterflies. | Robin Hopkins
Where two species of phlox overlap, one turns red to discourage butterflies.

The recipe for making one species into two requires time and some kind of separation, like being on different islands or something else that discourages gene flow between the two budding species.
In the case of common Texas wildflowers that share meadows and roadside ditches, color-coding apparently does the trick.
Duke University graduate student Robin Hopkins has found the first evidence of a specific genetic change that helps two closely related wildflowers avoid creating costly hybrids. It results in one of the normally light blue flowers being tagged with a reddish color to appear less appetizing to pollinating butterflies, which prefer blue.
"There are big questions about evolution that are addressed by flower color," said Hopkins, who successfully defended her doctoral dissertation just weeks before seeing the same work appear in the prestigious journal Nature.
What Hopkins found, with her thesis adviser, Duke biology professor Mark Rausher, is the first clear genetic evidence for something called reinforcement in plants. Reinforcement keeps two similar proto-species moving apart by discouraging hybrid matings. Flower color had been expected to aid reinforcement, but the genes had not been found. 
In animals or insects, reinforcement might be accomplished by a small difference in scent, plumage or mating rituals. But plants don't dance or choose their mates. So they apparently exert some choice by using color to discourage the butterflies from mingling their pollen, Hopkins said.
Where Phlox drummondii lives by itself, it has a periwinkle blue blossom. But where its range overlaps with Phlox cuspidata, which is also light blue, drummondii flowers appear darker and more red. Some individual butterflies prefer light blue blossoms and will go from blue to blue, avoiding the dark reds. Other individual butterflies prefer the reds and will stick with those. This "constancy" prevents hybrid crosses.
Hybrid offspring between drummondii and cuspidata turn out to be nearly sterile, making the next generation a genetic dead-end. The persistent force of natural selection tends to push the plants toward avoiding those less fruitful crosses, and encourages breeding true to type. In this case, selection apparently worked upon floral color.
Hopkins was able to find the genes involved in the color change by crossing a light blue drummondii with the red in greenhouse experiments. She found the offspring occurred in four different colors in the exact 9-to-3-to-3-to-1 ratios of classical Mendelian inheritance. "It was 2 in the morning when I figured this out," she said. "I almost woke up my adviser."
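Hopkins's 9-to-3-to-3-to-1 ratio is exactly what classical genetics predicts for two unlinked loci. The sketch below uses hypothetical locus names ("H" and "I", not the actual phlox genes) to enumerate a dihybrid cross between two double heterozygotes and recover that ratio:

```python
from itertools import product
from collections import Counter

def dihybrid_phenotypes():
    """Enumerate offspring of a cross between two plants heterozygous at
    two unlinked loci (hypothetical 'Hh' and 'Ii') and tally phenotypes."""
    # Each parent makes four equally likely gamete types: HI, Hi, hI, hi.
    gametes = [''.join(g) for g in product("Hh", "Ii")]
    counts = Counter()
    for g1, g2 in product(gametes, repeat=2):
        # A phenotype shows the dominant trait at a locus if either
        # inherited copy is the dominant (uppercase) allele.
        pheno = (
            'H' if 'H' in (g1[0], g2[0]) else 'h',
            'I' if 'I' in (g1[1], g2[1]) else 'i',
        )
        counts[pheno] += 1
    return counts

# Four phenotype classes in the classic Mendelian dihybrid ratio.
print(sorted(dihybrid_phenotypes().values(), reverse=True))  # [9, 3, 3, 1]
```

Seeing those exact proportions in the flower colors is what told Hopkins that two independently inherited genes were at work.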
From there, she did standard genetics to find the exact genes. The change to red is caused by a recessive gene that knocks out the production of the plant's one blue pigment while allowing for the continued production of two red pigments.
Even where the red flowers are present, about 11 percent of each generation will be the nearly-sterile hybrids. But without color-coding, that figure would be more like 28 percent, Hopkins said. Why and how the butterflies make the distinction has yet to be discovered.
Hopkins will be continuing her research as a visiting scientist at the University of Texas, and the clear message from all of her advisers is "follow the butterflies. Everyone wants to know more about the butterflies!" 

Friday, January 7, 2011

How the Sun Gets Its Spots?

To prevent solar damage to communication, navigation and other high tech systems, scientists are determining the temperatures, composition and movement of materials inside the sun

High-resolution image of a sunspot taken at the Sacramento Peak Observatory, New Mexico.
High-resolution image of a sunspot taken at the Sacramento Peak Observatory in New Mexico.



Sunspots are huge, dark, irregularly shaped--and yet, temporary--areas of intense magnetism on the sun that expand and contract as they move.
"The diameters of sunspots are frequently on the order of 50,000 miles," said Frank Hill of the National Science Foundation's (NSF) National Solar Observatory. "By contrast, the Earth's diameter at the equator is about 8,000 miles. The intense magnetism of sunspots usually reaches about 3,000 Gauss. [The more intense a body's magnetic field is, the higher its Gauss number.] By contrast, refrigerator magnets average about 5 Gauss, the sun averages about 1.0 Gauss, and the Earth averages about 0.5 Gauss."
Most of the sun's surface is covered by convection cells--roiling and boiling gases that bring heat up to the sun's surface from the furnace in its core via convection. However, the intense magnetism of sunspots inhibits convection and the associated heat transport to them. Therefore, their temperatures range from about 5,000 to 7,600 degrees Fahrenheit (F), cooler than their surroundings, which hover around 10,000 degrees F.
It is only because of the "coolness" of sunspots that they appear black relative to their surroundings; if sunspots could be separated from their surroundings, they would appear brighter than electric arcs.
Sunspots are cyclic. The number of sunspots increases and decreases over a period of approximately 11 years. During solar maximums, when sunspot activity is high, areas near sunspot clusters experience particularly frequent explosive activity, such as Coronal Mass Ejections (CMEs), massive blasts of highly charged particles and gases hurled from the sun. CMEs can pose serious threats to people because they may damage satellites, increase the radiation exposure of astronauts, disrupt communication and navigation systems, and knock out power grids and other high-tech systems.
During solar minimums, when sunspot activity is low, CMEs occur less frequently than they do during maximums. Nevertheless, solar minimums are not necessarily CME-free periods; large CMEs have occurred during solar minimums.
"During the solar cycle, slow (20 to 30 mile per hour) flows of plasmas, known as jet streams, move from east to west across the sun and slowly south from the solar north pole and slowly north from the south pole to the equator," Hill said.
Jet streams reach depths of about 65,000 miles below the sun's surface. "Sunspots and the jet stream are closely associated with one another in terms of location and behavior," adds Hill. Sunspots initially appear during a solar cycle when the center of the jet stream reaches a latitude of about 25 degrees. Also, sunspots are born above the jet stream and reach deep inside the sun into the stream.
At the beginning of any given sunspot cycle, sunspots are usually born in clusters at high latitudes. But by the end of the cycle, the birthplace of sunspots has--like the jet stream--usually moved to the equator.
During the current sunspot cycle, the jet stream took a year and a half longer to reach a latitude of 25 degrees than during the previous cycle. Likewise, the solar minimum between the previous and current cycle lasted 1.5 years longer than the previous minimum. This observation suggests that "scientists might be able to use the jet stream to predict the timing of sunspot cycles," Hill said. "Nevertheless, we don't know yet whether the jet stream causes sunspots or sunspots cause the jet stream."
How can scientists possibly determine what's happening in the sun's depths from our vantage 93 million miles away? They observe the speed of waves travelling through the sun, which manifest on the sun's surface as observable up-and-down oscillations of gases. From those oscillations, scientists can deduce the temperatures, composition and movement of materials inside the sun.
The technique of "seeing" inside the sun by observing its oscillations--known as helioseismology--is analogous to techniques used in Earth seismology to "see" inside our planet by measuring how long it takes earthquake-generated waves to travel through the interior and reach the Earth's surface.

Visualization of Bacterial Chemical Signal

Visualization of the bacterial chemical signal c-di-GMP

First visualization of the bacterial chemical signal c-di-GMP, which regulates the bacteria's movement. Green bacteria within the aggregate of the bacterium Pseudomonas aeruginosa have higher levels of the chemical.

In the reproduction of some species of bacteria, a single-celled organism splits in two: the daughter cell--the swarmer--inherits a propeller to swim freely, while the mother cell builds a stalk to cling to surfaces. Researchers at the University of Washington (UW), along with a colleague at Stanford University, designed biosensors to observe how a bacterium gets the message to divide into these two functionally and structurally different cells. The biosensors can measure biochemical fluctuations inside a single bacterial cell, which is smaller than an animal or plant cell.

During cell division, a signaling chemical, found only in bacteria, helps determine the fate of the resulting two cells. The signal is a tiny circular molecule called cyclic diguanosine monophosphate or c-di-GMP. By acting as an inside messenger responding to information about the environment outside the bacteria cell, c-di-GMP is implicated in several bacterial survival strategies. In harmless bacteria, some of these tactics keep them alive through harsh conditions. In disease-causing bacteria, c-di-GMP is thought to regulate antibiotic resistance, adhesiveness, biofilm formation and cell motility.

The research was supported by grants from the National Institute of Allergy and Infectious Diseases of the National Institutes of Health, the Swiss National Foundation, the Novartis Foundation, the Cystic Fibrosis Foundation, and a graduate research fellowship from the National Science Foundation.

Longstanding Mystery of Sun's Hot Outer Atmosphere Solved

Answer lies in jets of plasma


Images showing narrow jets of material streaking upward from the Sun's surface at high speeds.
Narrow jets of material, called spicules, streak upward from the Sun's surface at high speeds.
One of the most enduring mysteries in solar physics is why the Sun's outer atmosphere, or corona, is millions of degrees hotter than its surface.
Now scientists believe they have discovered a major source of hot gas that replenishes the corona: jets of plasma shooting up from just above the Sun's surface.
The finding addresses a fundamental question in astrophysics: how energy is moved from the Sun's interior to create its hot outer atmosphere.
"It's always been quite a puzzle to figure out why the Sun's atmosphere is hotter than its surface," says Scott McIntosh, a solar physicist at the High Altitude Observatory of the National Center for Atmospheric Research (NCAR) in Boulder, Colo., who was involved in the study.
"By identifying that these jets insert heated plasma into the Sun's outer atmosphere, we can gain a much greater understanding of that region and possibly improve our knowledge of the Sun's subtle influence on the Earth's upper atmosphere."
The research, results of which are published this week in the journal Science, was conducted by scientists from Lockheed Martin's Solar and Astrophysics Laboratory (LMSAL), NCAR, and the University of Oslo. It was supported by NASA and the National Science Foundation (NSF), NCAR's sponsor.
"These observations are a significant step in understanding observed temperatures in the solar corona."

"They provide new insight about the energy output of the Sun and other stars. The results are also a great example of the power of collaboration among university, private industry and government scientists and organizations."
The research team focused on jets of plasma known as spicules, which are fountains of plasma propelled upward from near the surface of the Sun into the outer atmosphere.
For decades, scientists believed spicules could send heat into the corona. However, observational research in the 1980s found that spicule plasma did not reach coronal temperatures, and the theory largely fell out of favor.
"Heating of spicules to millions of degrees has never been directly observed, so their role in coronal heating had been dismissed as unlikely," says Bart De Pontieu, the lead researcher and a solar physicist at LMSAL.
In 2007, De Pontieu, McIntosh, and their colleagues identified a new class of spicules that moved much faster and were shorter-lived than the traditional spicules.
These "Type II" spicules shoot upward at high speeds, often in excess of 100 kilometers per second, before disappearing.
The rapid disappearance of these jets suggested that the plasma they carried might get very hot, but direct observational evidence of this process was missing.
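To get a rough feel for the speeds just mentioned: assuming a nominal spicule height of about 5,000 kilometers (an assumed, typical chromospheric scale, not a figure from the study), the transit time works out to under a minute:

```python
# Back-of-the-envelope transit time for a Type II spicule.
# The speed comes from the article ("often in excess of 100 kilometers
# per second"); the height is an assumed typical value for illustration.
speed_km_per_s = 100.0
height_km = 5000.0

transit_s = height_km / speed_km_per_s
print(f"Transit time at {speed_km_per_s:.0f} km/s: {transit_s:.0f} seconds")
```

At these scales a jet climbs its full height in under a minute, consistent with the rapid appearance and disappearance the researchers describe.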
The researchers used new observations from the Atmospheric Imaging Assembly on NASA's recently launched Solar Dynamics Observatory and NASA's Focal Plane Package for the Solar Optical Telescope (SOT) on the Japanese Hinode satellite to test their hypothesis.
"The high spatial and temporal resolution of the newer instruments was crucial in revealing this previously hidden coronal mass supply," says McIntosh.
"Our observations reveal, for the first time, the one-to-one connection between plasma that is heated to millions of degrees and the spicules that insert this plasma into the corona."
The findings provide an observational challenge to the existing theories of coronal heating.
During the past few decades, scientists proposed a wide variety of theoretical models, but the lack of detailed observation significantly hampered progress.
"One of our biggest challenges is to understand what drives and heats the material in the spicules," says De Pontieu.
A key step, according to De Pontieu, will be to better understand the interface region between the Sun's visible surface, or photosphere, and its corona.
Another NASA mission, the Interface Region Imaging Spectrograph (IRIS), is scheduled for launch in 2012 to provide high-fidelity data on the complex processes and enormous contrasts of density, temperature and magnetic field between the photosphere and corona. Researchers hope this will reveal more about the spicule heating and launch mechanism.
The LMSAL is part of the Lockheed Martin Space Systems Company, which designs and develops, tests, manufactures and operates a full spectrum of advanced-technology systems for national security and military, civil government and commercial customers.

More Images:
Image showing jets of plasma from just above the Sun's surface.
Jets of plasma from just above the Sun's surface likely replenish its corona.
---------------------------------------------------------


Images showing the Sun's outer atmosphere, or corona, and a jet of hot material.
The Sun's outer atmosphere, or corona, is millions of degrees hotter than its surface.
---------------------------------------------------------
Image showing a solar eclipse showcasing the Sun's corona.
A solar eclipse showcases the Sun's corona.


Thursday, January 6, 2011

Widespread Ancient Ocean "Dead Zones" Challenged Early Life

Persistent lack of oxygen in Earth's oceans affected animal evolution


Photo of Little Horse Canyon near Orr Ridge, Utah.
Little Horse Canyon near Orr Ridge, Utah: many of the study samples were collected nearby.


The oceans became oxygen-rich, as they are today, about 600 million years ago, during Earth's Late Ediacaran Period. Until recently, most scientists believed that the ancient oceans had been relatively oxygen-poor for the preceding four billion years.
Now biogeochemists at the University of California-Riverside (UCR) have found evidence that the oceans went back to being "anoxic," or oxygen-poor, around 499 million years ago, soon after the first appearance of animals on the planet.
They remained anoxic for two to four million years.
The researchers suggest that such anoxic conditions may have been commonplace over a much broader interval of time.
"This work is important at many levels, from the steady growth of atmospheric oxygen in the last 600 million years, to the potential impact of oxygen level fluctuations on early evolution and diversification of life."
The researchers argue that such fluctuations in the oceans' oxygen levels are the most likely explanation for what drove the explosive diversification of life forms and rapid evolutionary turnover that marked the Cambrian Period some 540 to 488 million years ago.
They report in this week's issue of the journal Nature that the transition from a generally oxygen-rich ocean during the Cambrian to the fully oxygenated ocean we have today was not a simple flip of a switch, as has been widely assumed until now.
"Our research shows that the ocean fluctuated between oxygenation states 499 million years ago," said Timothy Lyons, a UCR biogeochemist and co-author of the paper.
"Such fluctuations played a major, perhaps dominant, role in shaping the early evolution of animals on the planet by driving extinction and clearing the way for new organisms to take their place."
Oxygen is necessary for animal survival, but not for the many bacteria that thrive in and even demand life without oxygen.
Understanding how the environment changed over the course of Earth's history can give scientists clues to how life evolved and flourished during the critical, very early stages of animal evolution.
"Life and the environment in which it lives are intimately linked," said Benjamin Gill, the first author of the paper, a biogeochemist at UCR, and currently a postdoctoral researcher at Harvard University.
When the ocean's oxygenation states changed rapidly in Earth's history, some organisms were not able to cope.
Oceanic oxygen affects cycles of other biologically important elements such as iron, phosphorus and nitrogen.
"Disruption of these cycles is another way to drive biological crises," Gill said. "A switch to an oxygen-poor state of the ocean can cause major extinction of species."
The researchers are now working to find an explanation for why the oceans became oxygen-poor about 499 million years ago.
"We have the 'effect,' but not the 'cause,'" said Gill.
"The oxygen-poor state likely persisted until the enhanced burial of organic matter, originally derived from oxygen-producing photosynthesis, resulted in the accumulation of more oxygen in the atmosphere and ocean.
"As a kind of negative feedback, the abundant burial of organic material facilitated by anoxia may have bounced the ocean to a more oxygen-rich state."
Understanding past events in Earth's distant history can help refine our view of changes happening on the planet now, said Gill.
"Today, some sections of the world's oceans are becoming oxygen-poor--the Chesapeake Bay (surrounded by Maryland and Virginia) and the so-called 'dead zone' in the Gulf of Mexico are just two examples," he said.
"We know the Earth went through similar scenarios in the past. Understanding the ancient causes and consequences can provide essential clues to what the future has in store for our oceans."
The team examined the carbon, sulfur and molybdenum contents of rocks they collected from localities in the United States, Sweden, and Australia.
Combined, these analyses allowed the scientists to infer the amount of oxygen present in the ocean at the time the limestones and shales were deposited.
By looking at successive rock layers, they were able to compile the biogeochemical history of the ocean.
Lyons and Gill were joined in the research by Seth Young of Indiana University, Bloomington; Lee Kump of Pennsylvania State University; Andrew Knoll of Harvard University; and Matthew Saltzman of Ohio State University.

Globally Sustainable Fisheries Possible With Co-Management

Community-based management key to sustaining aquatic resources
Photo of fishing boats lining the beach at Punta del Diablo, Uruguay.
Fishing boats line the beach at Punta del Diablo, a seaside fishing community in Uruguay.
The bulk of the world's fisheries--including the kind of small-scale, often non-industrialized fisheries that millions of people depend on for food--could be sustained using community-based co-management. This is the conclusion of a study reported in this week's issue of the journal Nature.
"The majority of the world's fisheries are not--and never will be--managed by strong centralized governments with top-down rules and the means to enforce them," says Nicolas Gutiérrez, a University of Washington fisheries scientist and lead author of the Nature paper.
"Our findings show that many community-based co-managed fisheries around the world are well managed under limited central government structure, provided communities of fishers are proactively engaged," he says.
"Community-based co-management is the only realistic solution for the majority of the world's fisheries, and is an effective way to sustain aquatic resources and the livelihoods of communities depending on them."
Under such a management system, responsibility for resources is shared between the government and users.
"This important research shows that a better understanding of ecological, social and economic interactions--and shared responsibilities for management--can yield sustainable well-being for ecosystems and fishers alike."
On the smallest scale, this might involve mayors and fishers from different villages agreeing to avoid fishing in each other's waters.
Examples on a larger scale include protecting Chile's most valuable fishery. In 1988, local fishers in a single community began cooperating along a 2.5-mile (4-kilometer) stretch of coastline. Today it involves 700 co-managed areas with 20,000 artisanal fishers along 2,500 miles (4,000 kilometers) of coastline.
"It's encouraging to see new models for sustainable fisheries management being proposed, especially those that incorporate the human dimension as a key component in solutions."

While case studies of individual co-managed fisheries exist, this new work used data on 130 fisheries in 44 developed and developing nations, and included marine and freshwater ecosystems as well as diverse fishing gears and targeted species.
Statistical analysis shows that co-management typically fails without: prominent community leadership and social cohesion; clear incentives that, for example, give fishers security over the amount they can catch or the area in which they can fish; and protected areas, especially when combined with regulated harvest inside or outside the area and when the protected area is proposed and monitored by local communities.
"Additional resources should be spent on efforts to identify community leaders and build social capital rather than only imposing management tactics without user involvement," says Gutiérrez.
The study further confirms the theories of Elinor Ostrom, who won the Nobel Prize in Economics in 2009 for challenging the conventional wisdom that common property is always poorly managed, and should be either regulated by central authorities or privatized.
Resource users frequently develop sophisticated mechanisms for decision-making and rule enforcement, Ostrom said, to handle conflicts of interest.
"With community-based co-management, fishers are capable of self-organizing, maintaining their resources and achieving sustainable fisheries," says Omar Defeo, a biologist at the University of Uruguay, scientific coordinator of Uruguay's national fishery management program, and a co-author of the paper.
Gutiérrez and colleagues assembled data from scientific literature, government and non-government reports, and personal interviews for 130 co-managed fisheries.
They scored them on eight outcomes, ranging from community empowerment to sustainable catches to increases in the abundance of fish and the prices of what was caught.
With 40 percent of the fisheries scoring positively on 6, 7 or all 8 outcomes, and another 25 percent scoring positively on 4 or 5 of the outcomes, the scientists found that community-based co-management holds promise for successful and sustainable fisheries worldwide.
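Taking the reported score distribution at face value, the two groups can be combined to see what share of the 130 fisheries scored positively on at least half of the eight outcomes (a sketch; the percentages are exactly as reported above):

```python
# Score distribution reported for the 130 co-managed fisheries.
pct_high = 40  # scored positively on 6, 7, or all 8 outcomes
pct_mid = 25   # scored positively on 4 or 5 outcomes

pct_at_least_half = pct_high + pct_mid
print(f"{pct_at_least_half}% of the fisheries scored positively "
      "on at least half of the eight outcomes")
```

In other words, roughly two-thirds of the fisheries examined met a majority of the success criteria.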
Ray Hilborn, a University of Washington fisheries biologist and a co-author of the paper, says that these findings "further illustrate the world's growing ability to manage fisheries sustainably.
"The tools appropriate for industrial fisheries in countries with strong central governments are quite different from those in small-scale fisheries or countries without strong central governments."


"Gene-Age-Ology"

"New" genes are just as important as "Old" genes:
Researchers at the University of Chicago have discovered that "new" genes which have evolved in species as little as one million years ago can be just as essential for life as ancient genes.





A new study out of the University of Chicago reveals that, when it comes to genes, age doesn't matter.
Their experiments involved shutting down or silencing individual genes in fruit flies through a process called RNA interference. The genes were tested in two groups: ancient ones that have been passed on for a long time through natural selection and relatively newer genes that appeared sometime in the last 35 million years. The old-school genes were traditionally believed to be the most important for overall survival, while the newbies were believed to be nice, but not totally essential. This new study says 'not so'! The scientists discovered that roughly the same percentage of genes in each group were needed to keep the fly alive--giving almost equal significance to new and old genes.
If a new gene comes along and has traits that help reproduction or survival, it's favored by natural selection and stays in the genome. After a while, it becomes an essential part of a species' biology.
That may have big implications for human health. The researchers say that though animals have been useful for learning about human disease, important health information may well reside in genes unique to us.
The study said nothing about stone-washed "jeans."

Tuesday, January 4, 2011

Jellyfish Species Chrysaora fuscescens

Jellyfish species Chrysaora fuscescens.



Ionizing atoms with a nanotube

Launched laser-cooled atoms captured by single, suspended, single-walled carbon nanotube

Launched laser-cooled atoms are captured by a single, suspended, single-walled carbon nanotube charged to hundreds of volts. A captured atom spirals toward the nanotube (white path) and reaches the environs of the tube surface, where its valence electron (yellow) tunnels into the tube. The resulting ion (purple) is ejected and detected, and the dynamics at the nanoscale are sensitively probed.

In a recently published paper, graduate students Anne Goodsell and Trygve Ristroph, with Professors Lene Hau and Jene Golovchenko of Harvard University, report the first experimental realization of a combined cold atom-nanostructure system that represents a new paradigm at the interface of these two disciplines. Atoms are laser cooled to microkelvin temperatures and then launched towards a single, freely suspended carbon nanotube charged to hundreds of volts. The nanotube acts as a 'black hole.' Atoms are attracted to the nanotube from distances of more than a hundred times the tube diameter, and spiral towards the tube under dramatic acceleration, with orbit times reaching just a few picoseconds. Close to the nanotube, an atom's valence electron tunnels into the tube, converting the atom into an ion that is ejected at high energy and easily detected. The system demonstrates sensitive probing of atom, electron and ion dynamics at the nanoscale, and opens the door to a new generation of cold atom experiments and nanoscale devices.

Wing of Emerald-patched Cattleheart Butterfly

The Amazonian butterfly emerald-patched cattleheart
The Amazonian butterfly emerald-patched cattleheart.
Vivid, emerald-green scales adorn the wings of the butterfly emerald-patched cattleheart
Like shingles on a roof, vivid, emerald-green scales adorn the wings of the Amazonian butterfly, emerald-patched cattleheart.

More about this Image
Researchers from Yale University, supported by the National Science Foundation, are studying the properties of the colors of butterfly wings. Using an X-ray scattering technique, they were able to determine the 3-D internal structure of the scales on the wings of several species of butterflies.

The crystal nanostructures that give butterflies their color are called gyroids--strange, 3-D curving structures that selectively scatter light. The gyroids are made of chitin (the same tough material that forms the exoskeletons of insects and crustaceans), which is usually deposited on the outer membranes of cells. Researchers wanted to know how the cells sculpt themselves into these extraordinary forms resembling a network of three-bladed boomerangs. They discovered that the outer membranes of the butterfly wing scale cells grow and fold into the interior of the cells. The membranes then form a double gyroid--two mirror-image networks shaped by the outer and inner cell membranes. Double gyroids are easier to grow but are not as good at scattering light as single gyroids. The chitin is then deposited in the outer gyroid to create a single, solid crystal. After this, the cell dies, leaving behind the crystal nanostructures on the butterfly wing.

Photonic engineers are using gyroid shapes to try and create more efficient solar cells and, by mimicking nature, may be able to produce more efficient optical devices as well.

Erythrocytes (Red Blood Cells)

Erythrocytes (red blood cells)
Erythrocytes (red blood cells). Image was taken with a scanning electron microscope using false color.

This image was taken to study quantitative phase imaging of cells and tissues. The research was performed at the Quantitative Light Imaging Laboratory at the University of Illinois at Urbana-Champaign.

Exotic Discovery Made in Soft Polymer

Novel nanostructure pattern never seen in a plastic material

Photo showing marshmallows representing hairy spheres connected with plastic coffee stirrers.
Marshmallows are used to represent hairy spheres that are connected with plastic coffee stirrers.

Professor Frank S. Bates and his research team at the University of Minnesota in Minneapolis have discovered an unusual type of soft material that was conceived of over 50 years ago, but has never before been found in a plastic--although it has been seen in stainless steel and other metal alloys.
Bates' group, which is funded by the National Science Foundation's Division of Materials Research, specializes in creating a particular class of material, called block copolymers.
"This class of polymers has been studied intensively for nearly half a century, so discovering a new equilibrium structure in a diblock copolymer is unexpected," said Bates. "I have used the analogy that this is a bit like finding a new planet."
Multicolored beaded strings
Polymers are gigantic chain-like molecules which contain hundreds or thousands of repeating units ("poly" means "many"; "mer" in chemistry means a "repeat unit").
To visualize a polymer, think of a very long string of beads, where each bead represents a basic chemical building block, or mer. Different colors stand for different repeat units. Block copolymers are created by connecting two or more of these long sequences.
For example, a diblock copolymer might contain a string of a thousand red beads followed by a sequence containing a thousand blue beads ("di" means "two"). In principle, two, three or any number of bead strings can be linked together, limited only by the strategies cooked up by synthetic chemists, Bates said.
These block copolymers can be created with a number of molecular architectures--linear sequences, branched designs or even star patterns.  Different arrangements give each new material different physical characteristics.
Block copolymers exhibit a consistency somewhere between stiff solids and free-flowing liquids--similar to children's play putty--which gives them the descriptive name "melts."
"Block copolymer melts are used in many practical applications such as pressure-sensitive adhesives, tough clear plastics, elastomers in sneaker soles, asphalt additives, drug-eluting stents for clogged coronary arteries and much more," Bates said.
A happy accident
Bates and his team stumbled upon the completely new material quite by accident.
"My students were exploring the mechanical properties of polymers made from poly(lactide)--a derivative of corn--and poly(isoprene)--a synthetic form of natural rubber," Bates said. "They discovered a new and unexplainable x-ray scattering pattern."
The unusual pattern turned out to be associated with a long-predicted crystal structure, now known as the Frank-Kasper sigma phase.
"This phase was named for Sir F.C. Frank, a famous British physicist known for important theoretical contributions to the field of materials physics," Bates explained. "In the 1950s, Frank and General Electric researcher J.S. Kasper wrote several creative papers discussing how spherical objects such as atoms might arrange into complex crystal structures."
Bates and his team reported their discovery of a Frank-Kasper sigma phase found in two different block copolymers in a recent issue of the journal Science.
Assembling themselves together
The interesting thing about block copolymers is that they can spontaneously self-assemble into tiny nanoscale structures, including spheres, when cooled. The spheres form because the different blocks, though chemically connected, repel each other like oil and vinegar.
To overcome this self-revulsion, the material organizes itself so that a core of one type of block circles in on itself, leaving a "halo" of the second type of block hanging out like a hairy fringe.
These hairy microscopic spheres are called microphases, or sometimes nanophases, and the act of forming the material into the spheres is called microphase separation.
"Imagine that dozens of block copolymers segregate to form spheres, each composed of a red spherical core surrounded by a fuzzy blue halo," Bates said. "In a block copolymer melt this occurs everywhere, leading to a relatively dense ensemble of such hairy spheres."
Packing frustrations
Airline passengers aren't the only ones who experience "packing frustrations." As polymers cool from a liquid state to an ordered state, they, too, experience the frustration of trying to cram into a small amount of space.
"In order to optimize the packing of the hairy spheres together into a solid crystalline material, they find the most efficient way to fill space," Bates explained. "Different arrangements lead to different levels of stretching and compression of the hairy halo."
To illustrate the Frank-Kasper sigma phase in the new block copolymer crystal, Bates, along with graduate student Sangwoo Lee, used large fluffy marshmallows to represent the hairy spheres, connecting them together with plastic coffee stirrers.
"Marshmallows represent the 'squishiness' of the hairy microspheres," Bates explained, "and they are flexible enough to allow the many different angles required to combine them in the complicated sigma-phase configuration."
Crystal construction
In any crystal, a unit cell is the smallest three-dimensional pattern of atoms or molecules that is repeated in any direction as the crystal grows. Most sphere forming block copolymers, and certain atomic crystals such as iron at room temperature, exhibit a cubic structure, known as the body-centered cubic or "bcc arrangement," according to Bates.
A typical bcc crystal formed from iron has only two atoms per unit cell, while diamond, whose structure is based on a face-centered cubic (fcc) lattice, has eight atoms per unit cell. In contrast, the Frank-Kasper sigma phase involves a complex combination of triangular and square groups of 30 block copolymer microspheres fitted together in a single unit cell.
"In this gigantic structure, each microsphere has about 200 diblock copolymers and each diblock copolymer has about 650 atoms, so there are nearly 4 million atoms in each unit cell!" Bates explained.
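The arithmetic behind that figure checks out, as a quick sketch using the approximate counts Bates gives shows:

```python
# Approximate counts quoted by Bates for one sigma-phase unit cell.
microspheres_per_cell = 30    # microspheres fitted into a single unit cell
copolymers_per_sphere = 200   # diblock copolymers per microsphere (approx.)
atoms_per_copolymer = 650     # atoms per diblock copolymer (approx.)

atoms_per_cell = (microspheres_per_cell
                  * copolymers_per_sphere
                  * atoms_per_copolymer)
print(f"{atoms_per_cell:,} atoms per unit cell")  # 3,900,000 -- "nearly 4 million"
```

Multiplying the three rounded counts gives 3.9 million atoms, matching the "nearly 4 million" quoted above.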
"The sigma-phase is a true three-dimensional crystal, albeit one that has a gigantic unit cell," said Bates. "This new arrangement indicates that there are packing forces at play related to those that produce quasicrystals, because a simple misplacement of sigma elements during phase formation could lead to a quasicrystal."
The almost crystal
Quasicrystals are materials with irregularly repeating, or aperiodic, patterns of atoms or molecules that do not contain unit cells that repeat or translate as do true crystals. "There is a close relationship between aperiodic order and periodic crystals with large unit cells," Bates said. "The Frank-Kasper sigma-phase represents the "periodic approximant" to certain dodecagonal (12-sided polygon) quasicrystals."
Now that they've discovered the first block copolymer Frank-Kasper sigma phase, Bates and his team are trying to determine the range of block copolymer molecular parameters over which the sigma-phase occurs. "We are synthesizing new polymers and investigating them by x-ray scattering and electron microscopy," he said.
"Also, we will collaborate with theorists to determine whether the sigma-phase can be accounted for using current statistical mechanical tools," Bates added.
"The large unit cell suggests that we might be able to produce crystal structures with unit cell dimensions that are significantly greater than 100 nanometers (nm)," he said. "As the unit cell dimension approaches the wavelength of visible light (400 nm-700 nm), I anticipate potential uses in photonics."
Photonics is the concept of using photons, or light particles, rather than electrons, to transmit video and voice signals (fiber optics) or carry out computations (optical computing). "The potential for application of the Frank-Kasper block copolymer in photonics would be enhanced if actual quasicrystal phases are discovered," Bates said.
The next phase
"The principles associated with sigma-phase formation in the diblock copolymers might be extrapolated to other materials formed from spherical cores and "hairy" halos," said Bates. "This might be accomplished by chemically attaching polymer chains to the surface of spherical nanoparticles. A large refractive index difference between the spherical core (perhaps made from a metal) and the polymer halo would be conducive to photonics."
According to Bates, this particular research is similar to previous projects that have resulted in commercial products, including a new type of plastic and fracture resistant epoxy.
"However, in my mind, what is most significant is the connection between very disparate fields (e.g., metals, polymers, etc.) represented by this work," he said. "Nature provides certain basic ingredients upon which materials are formed. Whenever a new and unanticipated result such as this one surfaces we are forced to reconsider our presumed understanding of these scientific fields."