NOAA: November Warmer than Average in U.S., January-November Temperature Near Average for U.S.
December 11, 2008
The November 2008 temperature for the contiguous United States was warmer than the long-term average, according to NOAA’s National Climatic Data Center in Asheville, N.C. The January-November 2008 temperature was near average.
The average November temperature of 44.5 degrees F was 2.0 degrees F above the 20th Century average. Precipitation across the contiguous United States in November averaged 1.93 inches, which is 0.20 inch below the 1901-2000 average.
For the January-November period, the average temperature of 54.9 degrees F was 0.3 degree above the 20th Century average. The nation’s January-November temperature has increased at a rate of 0.12 degrees per decade since 1895, and at a faster rate of 0.41 degrees per decade during the last 50 years. All findings are based on a preliminary analysis of records dating back to 1895.
U.S. Temperature Highlights
- November temperatures were cooler than average across the Southeast and Central regions, and much warmer than average in the Southwest, Northwest and West regions.
- The West region had its fourth warmest November on record. This contrasted with the Southeast, which was much below normal.
- Persistent above-average temperatures for the last six months have resulted in a record warm June-November period for the West region. California set a record for its warmest June-November, while both Nevada and Utah had their fifth warmest June-November period.
- Based on NOAA's Residential Energy Demand Temperature Index, the contiguous U.S. temperature-related energy demand was 0.6 percent below average in November.
U.S. Precipitation Highlights
- The United States measured above-normal precipitation across the northern Great Plains from eastern Montana to western Minnesota. However, November was drier than normal across much of the South and Central regions.
- Precipitation across most of the Midwest was only 50-75 percent of normal and some areas from southern Missouri through central Illinois received less than 50 percent of normal precipitation.
- The January-November period has been persistently wet across much of the country from the central Plains to the Northeast. The 11-month period was the wettest on record for New Hampshire and Massachusetts, second wettest for Missouri, third wettest for Vermont and Illinois, and fifth wettest for Maine and Iowa.
- At the end of November, 22 percent of the contiguous United States was in moderate-to-exceptional drought, about the same as October. Meanwhile, extreme-to-exceptional drought conditions continued in the western Carolinas, northeast Georgia, eastern Tennessee, southern Texas, and Hawai’i.
- About 26 percent of the contiguous United States was in moderately-to-extremely wet conditions at the end of November, according to the Palmer Index. This was a decrease of about three percent compared to October.
- It was the wettest November on record in Yuma, Ariz., with 2.2 inches (5.6 cm) of precipitation – all of it falling on November 26. This was more than five times the November average.
- An early November blizzard forced more than 100 businesses and schools, and Interstate 90, to close in western South Dakota on Nov. 5 and 6. The blizzard brought total snow accumulations of 3 to 4 feet and drifts up to 20 feet in places.
- Several periods of strong northwesterly winds during the month resulted in mountain-enhanced snowfalls across the mountains of western Virginia, North Carolina, and extreme northern Georgia. Banner Elk, N.C. recorded 6.2 inches (15.7 cm) of snow during the month making it the snowiest November since 1983.
- Three separate wildfires, which scorched 41,000 acres in Southern California, destroyed 1,000 homes and prompted 15,000 people to evacuate from November 13-17.
NCDC’s preliminary reports, which assess the current state of the climate, are released soon after the end of each month. These analyses are based on preliminary data, which are subject to revision. Additional quality control is applied when late reports arrive several weeks after the end of the month and as improved scientific methods refine NCDC’s processing algorithms.
NOAA understands and predicts changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and conserves and manages our coastal and marine resources.
Global Climate Observing System (GCOS)
The Global Climate Observing System (GCOS) was established in 1992 to ensure that the observations and information needed to address climate-related issues are obtained and made available to all potential users. It is co-sponsored by the World Meteorological Organization (WMO), the Intergovernmental Oceanographic Commission (IOC) of UNESCO, the United Nations Environment Programme (UNEP) and the International Council for Science (ICSU). GCOS is intended to be a long-term, user-driven operational system capable of providing the comprehensive observations required for monitoring the climate system, for detecting and attributing climate change, for assessing the impacts of climate variability and change, and for supporting research toward improved understanding, modelling and prediction of the climate system. It addresses the total climate system including physical, chemical and biological properties, and atmospheric, oceanic, hydrologic, cryospheric and terrestrial processes.
GCOS does not itself directly make observations nor generate data products. It stimulates, encourages, coordinates and otherwise facilitates the taking of the needed observations by national or international organizations in support of their own requirements as well as of common goals. It provides an operational framework for integrating, and enhancing as needed, observational systems of participating countries and organizations into a comprehensive system focussed on the requirements for climate issues. GCOS builds upon, and works in partnership with, other existing and developing observing systems such as the Global Ocean Observing System, the Global Terrestrial Observing System, and the Global Observing System and Global Atmospheric Watch of the World Meteorological Organization.
In the first-ever public test of artificial muscle, in March a high-school girl arm-wrestled three devices powered by the material. See how well she fared
On March 7, 17-year-old high-school student Panna Felsen squared off against three stalwart competitors in the first-ever human-robot arm-wrestling match. Each of the robots was powered by a distinct variety of electroactive polymer, also known as artificial muscle. The contenders varied in size and shape, and their creators’ budgets ranged from $800 to roughly $250,000.
The competition was designed to promote the development of materials that could someday animate prosthetic limbs, shape-shifting airplane wings and a host of other devices.
Introns in promoters: how is it possible?
Posted 15 May 2012 - 09:38 PM
Posted 09 June 2012 - 08:21 AM
Posted 09 August 2012 - 01:53 AM
Eukaryotic promoters are quite complex, and some binding sites for proteins that activate transcription may actually be downstream of the mRNA start position; or it may be that the presence of the first intron/exon increases the promoter's activity and is therefore considered part of the promoter region. Many of the strongest mammalian promoters have introns in them: ubiquitin, elongation factor alpha and chicken beta actin (full length) all have introns in them. In fact, many people add an intron upstream of their gene to prolong expression in vivo. It makes a gene more like a natural gene and decreases its chances of being methylated.
Recently we received a request for setting up a glossary-only search mechanism, or perhaps one web page with a long list of glossary entries with hot links to full explanations. The glossary that we already have is a good start, but we are all busy and it’s hard to find the time for extending this.
But there are also a number of external web pages which provide climate-related glossaries, such as the NOAA (they also have a separate page for paleo-stuff), the Bureau of Meteorology (Australia, and there is even one by the Australian EPA), the Environmental Protection Agency (EPA, the U.S.), and the Western Regional Climate Center (WRCC, the U.S.). Wikipedia also has a glossary for climatological terms.
Furthermore, there are some nice resources available, such as the Encyclopedia of Earth.
Apr. 24, 1998 COLUMBUS, Ohio -- Researchers at Ohio State University have developed a new model of atomic forces that may solve a long-standing problem in particle physics.
The work may aid the understanding of the structure of protons and other particles that contain quarks because it begins to reconcile physicist Richard Feynman’s 1970s model of the proton with modern views of the quark structure of sub-atomic particles.
“We’re hoping our work will make it easier for people who work with the quark model to calculate a lot of experimental information,” said Kenneth Wilson, professor of physics at Ohio State and 1982 Nobel Laureate in Physics. “Right now, the equations that describe proton structure are very complicated.”
Wilson helps to lead the research group for this project, which includes Robert Perry, also professor of physics, and Stan Glazek, a frequent visitor to Ohio State and associate professor of physics from Warsaw University. The researchers discussed their model April 18 at the 1998 American Physical Society meeting in Columbus.
Physicists have a hard time mathematically describing the structure of the proton, because the particle is supposed to be surrounded by a cloud of virtual particles that blink in and out of existence all the time, severely complicating the equations.
In the early 1970s, Feynman, a former physicist at Caltech, devised a way for physicists to separate the proton’s constituents from these virtual particles -- mathematically, at least. He suggested that a proton moving at the speed of light could outrun the slower virtual particles so physicists could observe its constituents on their own. He envisaged the proton’s constituents as being just three fundamental particles called quarks. This greatly simplified the mathematics.
Physicists now hypothesize that protons are made up of quarks and other fundamental particles called gluons, and that the massless and neutrally charged gluons bind quarks together.
The current theory is much more complicated than Feynman’s: The connection between quarks and gluons is supposed to be so strong that smashing a proton in a particle accelerator would release not just three quarks as Feynman predicted, but a shower of quarks, anti-quarks, and gluons.
Still, Feynman’s ideas provided for simple equations that matched experimental results concerning the energy states of protons.
“The question was, once we had this very complex quark theory, why did Feynman’s simple model still work so well? No one has ever been able to figure out why. In fact, the problem became so difficult that people just gave up,” said Wilson.
Wilson and his colleagues have formulated a new picture of quark-gluon interaction. They think that gluons may bind strongly to each other but not so strongly to quarks. That would prevent quarks from escaping easily during experiments, but also allow for Feynman’s simpler mathematical model.
“The coupling of gluons to each other is quite strong, and that coupling confines quarks inside the proton,” explained Wilson.
With this theory, when the bonds between gluons are broken, the reaction emits mostly other gluons. Extra particles such as anti-quarks and virtual particles don’t emerge because no strong bonds exist between quarks. Even gluons should be rarely emitted because, in the new theory, they are expected to have high masses, making them hard to produce.
This work, which was sponsored by a grant from the National Science Foundation, is in the preliminary stages, and the Ohio State researchers will continue to develop it mathematically. But even before then, they hope other physicists will explore the new theory as well.
The above story is reprinted from materials provided by Ohio State University.
Oct. 19, 2005 Residents and seismologists in Northern California focus on the San Andreas Fault, but a Penn State researcher thinks more questions should be asked about the Eastern California Shear Zone, a fault that ends or dissipates without a clear connection.
"We want to know how it formed, why it formed," says Dr. Kevin P. Furlong, professor of geosciences. "We know the San Andreas boundary is getting longer as the Mendocino Triple Junction point moves northward. Right now we believe that the Eastern California Shear Zone is growing along with the San Andreas."
The Eastern California Shear Zone runs roughly parallel to the San Andreas Fault from the Gulf of California and broadens into a wide zone in western Nevada. The problem is that so far, no one has identified the northern end of the zone.
Basic plate tectonics requires that the large plates that make up the Earth's crust move over or under each other, slide past each other, or meet end to end, forming large mountainous plateaus. In the California-Nevada area, most of the plate boundaries behave nicely. The Pacific Plate slides northward while the North American Plate slides southward. The Juan de Fuca Plate in the north slides beneath the North American Plate, and all three of these plates meet at a point near Mendocino, California, called the Mendocino Triple Junction. For the most part, seismologists understand how these three plates move.
However, the Eastern California Shear Zone, sometimes called Walker Lane, does not behave as expected. No obvious connection exists between the northern end of the zone and another plate boundary.
"This would not be a problem if the slip were not significant, but the slip is significant," Furlong told attendees at the 117th annual meeting of the Geological Society of America today (Oct. 17) in Salt Lake City. "The total displacement has been 50 kilometers and we know it has been going on for 5 to 6 million years."
The movement on the Eastern California Shear Zone -- 10 to 12 millimeters per year -- makes up 25 percent of the total movement of the North American Plate. The western side of the shear zone is moving northward, but at a different rate than the San Andreas area. In the middle of these two areas are the Sierra Nevada mountains that do not participate in the slipping except to move along with the piece of plate on which they reside.
"We have this intermediate piece of land from the eastern side of the San Andreas to the Eastern California Shear Zone -- including the Sierra Nevadas -- that is confusing," says Furlong. "We know this shear zone is an active earthquake area because there was a magnitude 8 earthquake there in the 1800s."
The Penn State researcher notes that seismologists do not currently have a way to model this type of transition.
"We are trying to identify the questions," says Furlong. "There may be evidence to explain this transition, but we have not found it yet. We have not been looking for the right type of data."
There are other areas on Earth, such as in Central Asia near Tibet, where fault zones simply end without explanation.
"While the concept of a major fault terminating without an obvious end is not uncommon, we do not understand how this happens," says Furlong. "We need to quantify the questions, so we can find the answers."
Feb. 23, 2010 Regulatory proteins common to all eukaryotic cells can have additional, unique functions in embryonic stem (ES) cells, according to a study in the February 22 issue of the Journal of Cell Biology. If cancer progenitor cells -- which function similarly to stem cells -- are shown to rely on these regulatory proteins in the same way, it may be possible to target them therapeutically without harming healthy neighboring cells.
The new study, by Thomas Fazzio and Barbara Panning (University of California, San Francisco) finds that two chromatin regulatory proteins essential for ES cell survival, Smc2 and Smc4, together form the heart of the condensin complexes that promote chromosome condensation in mitosis and meiosis. Because somatic cells lacking condensins continue to proliferate with relatively minor mitotic defects, Fazzio and Panning wondered why ES cells died in the absence of Smc2 or Smc4.
ES cells lacking the condensin subunits accrued massive amounts of DNA damage that resulted in cell death. It isn't clear why ES cells are so sensitive to the loss of condensins, but it may be connected to two other phenotypes seen in ES, but not somatic, cells. After Smc2 or Smc4 was blocked, mitotic ES cells arrested in metaphase, and interphase ES cell nuclei were enlarged and misshapen.
This suggests that condensins promote mitotic progression and maintain interphase chromatin compaction in ES cells -- functions that they don't have in somatic cells. In fact, many other chromatin regulatory proteins involved in ES cell survival can be depleted in differentiated cells without affecting viability, indicating that the chromatin of ES cells -- and possibly cancer progenitor cells -- is fundamentally different from somatic cell chromatin.
Reference: Fazzio, T.G., and B. Panning. 2010. J. Cell Biol. doi:10.1083/jcb.200908026.
July 23, 2010 Pioneering observations with the National Science Foundation's giant Robert C. Byrd Green Bank Telescope (GBT) have given astronomers a new tool for mapping large cosmic structures. The new tool promises to provide valuable clues about the nature of the mysterious "dark energy" believed to constitute nearly three-fourths of the mass and energy of the Universe.
Dark energy is the label scientists have given to what is causing the Universe to expand at an accelerating rate. While the acceleration was discovered in 1998, its cause remains unknown. Physicists have advanced competing theories to explain the acceleration, and believe the best way to test those theories is to precisely measure large-scale cosmic structures.
Sound waves in the matter-energy soup of the extremely early Universe are thought to have left detectable imprints on the large-scale distribution of galaxies in the Universe. The researchers developed a way to measure such imprints by observing the radio emission of hydrogen gas. Their technique, called intensity mapping, when applied to greater areas of the Universe, could reveal how such large-scale structure has changed over the last few billion years, giving insight into which theory of dark energy is the most accurate.
"Our project mapped hydrogen gas to greater cosmic distances than ever before, and shows that the techniques we developed can be used to map huge volumes of the Universe in three dimensions and to test the competing theories of dark energy," said Tzu-Ching Chang, of the Academia Sinica in Taiwan and the University of Toronto.
To get their results, the researchers used the GBT to study a region of sky that previously had been surveyed in detail in visible light by the Keck II telescope in Hawaii. This optical survey used spectroscopy to map the locations of thousands of galaxies in three dimensions. With the GBT, instead of looking for hydrogen gas in these individual, distant galaxies -- a daunting challenge beyond the technical capabilities of current instruments -- the team used their intensity-mapping technique to accumulate the radio waves emitted by the hydrogen gas in large volumes of space including many galaxies.
"Since the early part of the 20th Century, astronomers have traced the expansion of the Universe by observing galaxies. Our new technique allows us to skip the galaxy-detection step and gather radio emissions from a thousand galaxies at a time, as well as all the dimly-glowing material between them," said Jeffrey Peterson, of Carnegie Mellon University.
The astronomers also developed new techniques that removed both man-made radio interference and radio emission caused by more-nearby astronomical sources, leaving only the extremely faint radio waves coming from the very distant hydrogen gas. The result was a map of part of the "cosmic web" that correlated neatly with the structure shown by the earlier optical study. The team first proposed their intensity-mapping technique in 2008, and their GBT observations were the first test of the idea.
"These observations detected more hydrogen gas than all the previously-detected hydrogen in the Universe, and at distances ten times farther than any radio wave-emitting hydrogen seen before," said Ue-Li Pen of the University of Toronto.
"This is a demonstration of an important technique that has great promise for future studies of the evolution of large-scale structure in the Universe," said National Radio Astronomy Observatory Chief Scientist Chris Carilli, who was not part of the research team.
In addition to Chang, Peterson, and Pen, the research team included Kevin Bandura of Carnegie Mellon University. The scientists reported their work in the July 22 issue of the scientific journal Nature.
The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
- Chris L. Carilli. Astrophysics: Broad-brush cosmos. Nature, DOI: 10.1038/466444a
Jan. 18, 2013 Cheating is a behavior not limited to humans, animals and plants. Even microscopically small, single-celled algae do it, a team of University of Arizona researchers has discovered.
Humans do it, chimpanzees do it, cuckoos do it -- cheating to score a free ride is a well-documented behavior by many animals, even plants. But microscopically small, single-celled algae? Yes, they do it too, biologists with the University of Arizona's department of ecology and evolutionary biology have discovered.
"There are cheaters out there that we didn't know of," said William Driscoll, lead author of a research report on the topic who studied an environmentally devastating toxic alga that is invading U.S. waters as part of his doctoral research in the lab of Jeremiah Hackett, an assistant professor of ecology and evolutionary biology.
Driscoll isolated several strains of the species, Prymnesium parvum, and noticed that some grew more quickly and do not produce any of the toxins that protect the algae against competition from other species of algae.
"When those 'cheaters' are cultured with their toxic counterparts, they can still benefit from the toxins produced by their cooperative neighbors -- they are true 'free riders,'" Driscoll explained.
The study, published in the journal Evolution, adds to the emerging view that microbes often have active social lives. Future research into the social side of toxic algae could open up new approaches to control or counteract toxic algal blooms, which can pose serious threats to human health and wipe out local fisheries, for example.
Prymnesium belong to a group of algae known as golden algae, so named for their accessory pigments, which give the cells a golden sheen. This toxic species lives mostly in oceans and only recently has invaded freshwater environments. Its distant relatives include the equally microscopic diatoms, which make up a large part of phytoplankton, and giant kelp.
The algae produce toxins that are deadly to fish but so far have not been shown to threaten the health of humans or cattle. Many scientists believe the toxin arose as a chemical weapon to wipe out other algae and other organisms competing for the same nutrients and sunlight on which the algae depend. The discovery of cheaters that don't bother to produce toxin, however, throws a wrench into this scenario.
"We are trying to understand the ecological side in these algae," Driscoll said.
"If you're a single cell, regardless of whether you make a toxin or not, you're just drifting through the water, and everything is drifting with you," Driscoll explained. "Producing toxins only makes sense if the entire population does it. Any given individual cell won't get any benefit from the chemicals it makes because they immediately diffuse away. It's a bit like schooling behavior in fish: A single fish can't confuse a predator; you need everyone else do the same thing." For that reason, he explained, the cheaters should have an immediate advantage over their "honest" peers because they can invest the energy and resources they save into making more offspring.
"Theory tells us cooperation should break down in these circumstances. If you are secreting a toxin and it's beneficial to your species, then everybody gets access to that benefit. In a well-mixed population where there is no group structure, natural selection should favor selfishness, and the cheaters should take over." But for some reason, they don't. An alternative explanation for toxicity becomes clear when toxic cells are observed alongside their competition under a microscope.
"They attack other cells," he said. "Using their two flagella, they swim up to the prey and latch on to it. Sometimes a struggle takes place, and more cells swim up to the scene, surround their victim and release more toxin, and then they eat it."
"These toxins might have evolved less as a means to keep competitors away and more like a rattlesnake venom. The algae might use it to stun or immobilize prey." Driscoll and his co-workers isolated the toxic and the non-toxic strains side by side from the same water sample, taken from a late bloom as the bloom started to crash.
"When times are good and there are plenty of nutrients in the water, the algae use photosynthesis to gain energy from sunlight, but when nutrients become sparse, they attack and become toxic," Driscoll said. "That's when they start swimming around looking for prey. They are a little bit like carnivorous plants in that way -- like a Venus fly trap."
The group observed that as soon as nutrients become scarce, the toxic population ceases to grow, but the cheaters keep multiplying.
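That observation is consistent with a simple public-goods growth model. The sketch below is a deliberately minimal batch-culture simulation: both strains draw on a shared nutrient pool with Monod kinetics, producers pay a fixed metabolic cost for making toxin, and the toxin's benefit is omitted because in a well-mixed flask it would be shared by both strains anyway. All parameter values and starting fractions are invented for illustration, not taken from the study.

```python
def step(coop, cheat, nutrients, dt=0.1, r=1.0, cost=0.25, half_sat=0.5):
    """One Euler step: shared nutrient-limited growth, with toxin producers
    paying a constant metabolic cost that non-producers avoid."""
    mu = r * nutrients / (half_sat + nutrients)   # Monod growth rate, shared pool
    new_coop = coop + (mu - cost) * coop * dt     # producers pay the toxin cost
    new_cheat = cheat + mu * cheat * dt           # cheaters grow at full speed
    new_nutr = max(nutrients - mu * (coop + cheat) * dt, 0.0)
    return new_coop, new_cheat, new_nutr

coop, cheat, nutrients = 0.9, 0.1, 10.0           # cheaters start rare (assumed)
for _ in range(600):
    coop, cheat, nutrients = step(coop, cheat, nutrients)
print(f"cheater share of the population: {cheat / (coop + cheat):.0%}")
```

Once the nutrient pool is exhausted, the shared growth rate drops to zero: the producers, still paying their cost, stall and decline, while the cheaters coast -- the same pattern seen in late blooms.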
Driscoll and his team think the cheating behavior could be an adaptation to the algae bloom life style.
"During a bloom you have killed off all the prey or a huge amount of it, so why produce toxins and go looking for something that isn't there? It might be better to just keep growing and not even try to bother to keep looking for prey because it's gone."
Driscoll said the research illustrates how little is known about the ecology of microbes.
"We're just starting to understand what the mechanisms are that maintain cooperation in microbes. The theory is heavily slanted toward multicellular organisms. Only recently have people started to think about microbes cooperating."
To better understand the genes and biochemical pathways that control how the algae make their toxins, the group in Hackett's lab is investigating which genes are active in the toxic compared to the non-toxic strains.
"We are finding a number of stress-related genes are regulated differently in the cheaters," Driscoll said. "A lot of the other genes have not been studied before, especially those most likely involved in toxin production."
"The problem is that nothing close to these algae has had its genome sequenced, so they're pretty mysterious. Many of the genes we have sequenced are novel, so understanding their function is a big part of the challenge."
Unraveling the molecular mechanisms behind all this chemical warfare, cheating behavior and maximizing growth could potentially lead to new applications, the researchers speculate, albeit cautiously.
Driscoll explained the cheating trait might be an Achilles heel that could be exploited to curb algal blooms.
"We are ultimately interested in disrupting the competitive abilities of these bloom-forming populations. While this research is just scratching the surface, understanding how natural selection may work over the course of a bloom can provide a deeper understanding of the traits that are most important to the success of this species."
In addition, the cheaters' tendency to keep growing when their toxic peers no longer can is in some ways reminiscent of cancer cells.
According to Driscoll, one way to think about cancer is that cancerous cells have an immediate advantage over their non-cancerous, well-behaved neighbors. But this advantage, if unchecked, is very shortsighted because it will interfere with the basic functioning of the multicellular organism of which they are all a part.
"What we may be seeing in our algae is a -- far less extreme -- version of a similar story, because a short-term advantage to not producing toxins may interfere with the long-term competitive ability of the population."
- William W. Driscoll, Noelle J. Espinosa, Omar T. Eldakar, Jeremiah D. Hackett. Allelopathy as an emergent, exploitable public good in the bloom-forming microalga Prymnesium parvum. Evolution, 2013; DOI: 10.1111/evo.12030
Mar. 14, 2013 At the end of the last ice age, a population of polar bears was stranded by the receding ice on a few islands in southeastern Alaska. Male brown bears swam across to the islands from the Alaskan mainland and mated with female polar bears, eventually transforming the polar bear population into brown bears.
Evidence for this surprising scenario emerged from a new genetic study of polar bears and brown bears led by researchers at the University of California, Santa Cruz. The findings, published March 14 in PLOS Genetics, upend prevailing ideas about the evolutionary history of the two species, which are closely related and known to produce fertile hybrids.
Previous studies suggested that past hybridization had resulted in all polar bears having genes that came from brown bears. But the new study indicates that episodes of gene flow between the two species occurred only in isolated populations and did not affect the larger polar bear population, which remains free of brown bear genes.
At the center of the confusion is a population of brown bears that live on Alaska's Admiralty, Baranof, and Chichagof Islands, known as the ABC Islands. These bears--clearly brown bears in appearance and behavior--have striking genetic similarities to polar bears.
"This population of brown bears stood out as being really weird genetically, and there's been a long controversy about their relationship to polar bears. We can now explain it, and instead of the convoluted history some have proposed, it's a very simple story," said coauthor Beth Shapiro, associate professor of ecology and evolutionary biology at UC Santa Cruz.
Shapiro and her colleagues analyzed genome-wide DNA sequence data from seven polar bears, an ABC Islands brown bear, a mainland Alaskan brown bear, and a black bear. The study also included genetic data from other bears that was recently published by other researchers. Shapiro's team found that polar bears are a remarkably homogeneous species with no evidence of brown bear ancestry, whereas the ABC Islands brown bears show clear evidence of polar bear ancestry.
A key finding is that the polar bear ancestry of ABC Islands brown bears is conspicuously enriched in the maternally inherited X chromosome. About 6.5 percent of the X chromosomes of the ABC Islands bears came recently from polar bears, compared to about 1 percent of the rest of their genome. This means that the ABC Islands brown bears share more DNA with polar bear females than they do with polar bear males, Shapiro said.
To understand how hybridization could lead to this unexpected result, the team ran simulations of various demographic scenarios. "Of all the models we tested, the best supported was the scenario in which male brown bears wandered onto the islands and gradually transformed the population from polar bears into brown bears," said first author James Cahill, a graduate student in ecology and evolutionary biology at UC Santa Cruz.
This scenario is consistent with the known behavior of brown bears and polar bears, according to coauthor Ian Stirling, a biologist at the University of Alberta in Edmonton, Canada. Mixing of polar bears and brown bears is seen today in the Canadian Beaufort Sea, where adult male brown bears wander onto the remaining sea ice in late spring and sometimes mate with female polar bears, he said. In areas such as western Hudson Bay and the Russian coast, polar bears are spending more time on land in response to climate warming and loss of sea ice, a behavior that could have left polar bears stranded on the ABC Islands at the end of the last ice age.
Young male brown bears tend to leave the area where they were born in search of new territory. They may well have dispersed across the water from the Alaskan mainland to the ABC Islands and hybridized with polar bears stranded there when the sea ice disappeared.
"The combination of genetics and the known behavior of brown and polar bears hybridizing in the wild today tells us how the ABC Islands bears came to be: they are the descendants of many male brown bear immigrants and some female polar bears from long ago," Stirling said.
The findings suggest that continued climate warming and loss of arctic sea ice may lead to the same thing happening more broadly, said coauthor Richard E. (Ed) Green, an assistant professor of biomolecular engineering in UCSC's Baskin School of Engineering. "As the ice melts in the Arctic, what is going to happen to the polar bears? In the ABC Islands, the polar bears are gone. They're brown bears now, but with polar bear genes still present in their genomes," he said.
The first genetic studies of ABC Islands brown bears looked at their mitochondrial DNA, which is separate from the chromosomes and is inherited only through the female lineage. The mitochondrial DNA of ABC Islands brown bears matches that of polar bears more closely than that of other brown bears, which led some scientists to think that the ABC Islands brown bears gave rise to modern polar bears.
The new study looks at the "nuclear DNA" carried on the chromosomes in the cell nucleus. It is the latest in a series of genetic studies of polar bears published in recent years, each of which has prompted new ideas about the relationship between polar bears and brown bears. A 2010 study of fossils and mitochondrial DNA supported the idea that polar bears evolved from the ABC Islands brown bears. But a 2011 study of mitochondrial DNA from extinct Irish brown bears showed an even closer match to polar bears and suggested that polar bears got their mitochondrial DNA from hybridization with Irish bears. Shapiro, a coauthor of that study, said she now thinks the Irish brown bears may be another example of what happened in the ABC Islands, but she can't say for sure until she studies their nuclear DNA.
"In retrospect, I think we were wrong about the directionality of the gene flow between polar bears and Irish brown bears," she said.
Two studies published in 2012 sought to determine when the polar bear lineage diverged from the brown bear lineage using nuclear DNA data. The first, published in April in Science, put the split at 600,000 years ago and concluded that polar bears carry brown bear mitochondrial DNA due to past hybridizations. The second, published in July in Proceedings of the National Academy of Sciences, suggested that brown bears, black bears, and polar bears diverged around 4 to 5 million years ago, followed by repeated episodes of hybridization between polar bears and brown bears.
The new study does not address the question of how long ago polar bears diverged from brown bears, but it may help sort out the conflicting results of recent studies. "It's a good step in the right direction of understanding what really happened," Shapiro said.
The study does indicate that the divergence of polar bears from brown bears was only half as long ago as the split between the brown bear and black bear lineages, said Cahill. "We can tell how long brown bears and polar bears have been separate species as a proportion of how long ago they separated from more distantly related species, but putting a year on it is very difficult," he said.
Green noted that efforts to understand the relationship between polar bears and brown bears has been complicated by the unusual case of the ABC Islands brown bears. "It's as if you were studying the relationship between humans and chimpanzees and your analysis included DNA from some weird population of humans that had hybridized with chimps. You would get very strange results until you figured that out," he said.
In addition to Cahill, Green, Shapiro, and Stirling, the coauthors of the new paper include postdoctoral researchers Tara Fulton and Mathias Stiller, undergraduate Rauf Salamzade, and graduate student John St. John at UC Santa Cruz; Flora Jay and Montgomery Slatkin at UC Berkeley; and Nikita Ovsyanikov at the Wrangel Island State Nature Reserve in Russia. Green and Shapiro direct the UCSC Paleogenomics Lab. This research was funded by the Searle Scholars Program.
- Cahill JA, Green RE, Fulton TL, Stiller M, Jay F, et al. Genomic Evidence for Island Population Conversion Resolves Conflicting Theories of Polar Bear Evolution. PLoS Genetics, 9(3): e1003345, 2013; DOI: 10.1371/journal.pgen.1003345
Web edition: August 27, 2012
Though ancient Egyptians are famous for their mummies, Americans — South Americans — practiced the preservation method first. In a desert coastal region of what is now northern Chile and southern Peru, the Chinchorro people began mummifying their dead about 7,000 years ago. Now scientists have proposed an explanation for how this practice got its start: The Chinchorro were just copying nature.
B. Bower. Good times led to grisly custom. Science News Online, August 13, 2012.
In the late 1990s, a handful of physicists and engineers began to take a greater interest in biology. The Human Genome Project was spitting out more and more gene sequences—blueprints for the protein building blocks of the cell—generating a flood of new information about the molecular machinery of life. Trouble was, there were not enough biologists doing the job of figuring out how all these genes and proteins worked together to create a living, breathing organism.
It was around this time that Boston University bioengineer James Collins saw his chance to inject a little engineering know-how into the study of biology. There were two ways to go about it, he figured—either disassemble cells or build them. “A burgeoning young engineer [is] either the kind of kid who takes stuff apart to try to figure out how it works, or [he’s] the kid who puts stuff together,” Collins says. Though both approaches seemed promising, there simply wasn’t enough known about the structures or functions of the genes and their protein products to infer how all the parts worked together by taking a cell apart, piece by piece.
“Reverse engineering seems to be too challenging,” Collins recalls musing to his then grad student Tim Gardner. “But can we do forward engineering? Can we take parts from cells and put them together in circuits, just as an electrical engineer might?”
As the field works to create new living systems that serve a purpose...a new foundation for biological understanding should emerge.
—Pamela Silver, Jeffrey Way, “Cells by Design,”The Scientist, September 27, 2004
The results were published in 2000, alongside a paper from physicist Stanislas Leibler’s lab at Princeton University, which had undertaken a similar, but independent, project. Much like Collins with Gardner, Leibler teamed up with his graduate student Michael Elowitz to build an oscillator, which, like Collins’s toggle switches, used transcriptional repressors in E. coli. The Princeton team engineered three genes to inhibit each other in a cyclical manner, rock-paper-scissors style, with each gene repressing the next when a threshold concentration of its gene product had been reached. The result was the periodic expression of all three genes—monitored by the periodic glow of green fluorescent protein (GFP), whose expression was linked to another copy of a promoter controlling one of the three repressors.
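The dynamics of such a ring oscillator are easy to reproduce numerically. Below is a minimal sketch of the standard Elowitz-Leibler repressilator equations -- six ODEs, one mRNA and one protein per gene, with Hill-type repression -- written in Python. The parameter values and initial conditions are illustrative choices in rescaled units, not the fitted numbers from the 2000 paper.

```python
import numpy as np
from scipy.integrate import odeint

def repressilator(y, t, alpha=200.0, alpha0=0.2, beta=5.0, n=2.0):
    """Three genes in a ring, each repressing the next (1 -| 2 -| 3 -| 1).
    m_i = mRNA, p_i = protein, in the rescaled units of the original model."""
    m1, m2, m3, p1, p2, p3 = y
    dm1 = -m1 + alpha / (1 + p3**n) + alpha0   # gene 1 repressed by protein 3
    dm2 = -m2 + alpha / (1 + p1**n) + alpha0   # gene 2 repressed by protein 1
    dm3 = -m3 + alpha / (1 + p2**n) + alpha0   # gene 3 repressed by protein 2
    dp1 = -beta * (p1 - m1)                    # each protein tracks its mRNA
    dp2 = -beta * (p2 - m2)
    dp3 = -beta * (p3 - m3)
    return [dm1, dm2, dm3, dp1, dp2, dp3]

t = np.linspace(0, 60, 3000)
y0 = [1.0, 0.0, 0.0, 2.0, 1.0, 3.0]            # asymmetric start breaks the tie
sol = odeint(repressilator, y0, t)
print("protein 1 over time:", sol[::300, 3].round(1))  # rises and falls in turn
```

Each protein level rises, peaks, and falls in sequence around the ring, which is exactly the periodic GFP glow the Princeton team monitored.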
The two publications are now widely cited as the seminal papers of synthetic biology, though neither paper received much publicity at the time. “[We were] kind of a ragtag group of engineers and physicists who were essentially amateurs in molecular biology,” Collins says. But in the last decade, many trained molecular and cell biologists have turned to syn bio, designing synthetic circuits built from biological components and branching out from the transcriptional regulation tools of Leibler, now at Rockefeller University, and Collins to add translation and post-translation components.
The methods for actually manufacturing the circuits have also improved. While the Collins and Leibler teams were essentially cutting and pasting existing genes, J. Craig Venter and his colleagues went for a ground-up approach. They took the blueprint of a known bacterial genome and rebuilt the entire sequence, stitching together genes chemically manufactured by an automated DNA synthesizer. The genome was then inserted into the nucleus of another bacterium, with May 2010 headlines announcing the creation of the first cell to run on a genome synthesized entirely from scratch. (Read Venter’s opinion piece, "Synthesizing Life.")
Many researchers still use the basic cut-and-paste approach, however, employing well-vetted and still advancing genome-editing technologies to select different bits of DNA, called BioBricks, from living organisms and piece them together to form novel circuits. Others, like George Church of Harvard University, fall somewhere in the middle, synthesizing individual genetic components using oligonucleotide chips, then piecing them together. “I think it’s an open question as to whether the core of synthetic biology is going to make things by BioBricks, by total synthesis, or from scratch from chips in a modular way,” says Church.
Regardless of how the circuits are assembled, engineered organisms hold potential in a wide range of fields, including biofuel production, agricultural innovation, and biomedical advances. One of the most successful medical applications has been the engineering of yeast to produce a precursor of the antimalarial drug artemisinin, a natural product of the plant Artemisia annua. The production of the drug is currently limited to small farms in Southeast Asia, where farmers grow the plants and extract the drug using relatively crude techniques, making the drug expensive and often in short supply—a bad combination for the developing nations that need it most.
To address these problems, Jay Keasling of the Lawrence Berkeley National Laboratory and his colleagues decided to rebuild the artemisinin pathway in a more manageable microbial system. After several years of tweaking the molecular components first in E. coli, then in yeast, the researchers succeeded in building a synthetic circuit in yeast cells that generates a healthy supply of artemisinic acid—an artemisinin precursor. “If you were to take something like a 100,000-liter fermenter, and grow up our artemisinin-producing yeast, running that full time you could probably get enough artemisinin for the entire world,” Keasling says. With funding from the Bill & Melinda Gates Foundation and partnerships with California-based biotech Amyris, the Institute for OneWorld Health, and pharmaceutical giant Sanofi to optimize and scale up production and distribute the product to Africa, Keasling and his colleagues expect that the yeast-derived artemisinin will be commercially available by the end of this year, and that drugs containing the product will hit the market in early 2012.
Another synthetic biology inspired malaria project aims to stop transmission of the disease at the level of its vectors by engineering a genetic system to establish itself in a mosquito population. While researchers have successfully engineered mosquitos to be resistant to infection by the malaria parasite, introducing those mosquitos into the wild is not likely to result in sufficient spread of the resistance, as the wild-type genes will vastly outnumber the introduced variety. Something, such as a significant fitness advantage, must help drive the new genes into the population. Geneticist Bruce Hay and his team at Caltech got their inspiration for a solution to this problem from the Medea toxin/antidote genetic element in Tribolium beetles, in which a toxic maternal gene product kills any embryos that do not inherit the element, ensuring its quick spread through the population. Armed with 50+ years of Drosophila genetics knowledge, the researchers created a genetic element, Medeamyd88-1, which caused mother flies to produce eggs that only survived if they received a copy of the element.
In laboratory tests, Medeamyd88-1 quickly spread through the population, such that every individual carried the element by the 12th generation. Hay’s group is now working on developing a similar system in disease-carrying mosquitos. If he succeeds, “then it becomes a question of can we link these two pieces of biology together,” Hay says—the gene that makes the mosquitos disease-resistant and the Medea element that drives it through the population.
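The arithmetic behind a Medea drive is simple enough to sketch in a few lines. The model below assumes random mating in an effectively infinite population, no fitness cost to carrying the element, and a 25 percent starting allele frequency -- all simplifying assumptions chosen for illustration, so it shows the shape of the dynamics rather than reproducing the published fly experiment.

```python
def medea_generation(p):
    """One generation of random mating with a Medea element at allele
    frequency p. Offspring that inherit no copy of the element die if
    their mother carried it (maternal toxin, zygotic antidote)."""
    q = 1.0 - p
    m_eggs_carrier = p * q        # m-bearing eggs from Mm (toxin-loaded) mothers
    m_eggs_clean = q * q          # m-bearing eggs from mm mothers
    MM = p * p                    # M egg (freq p) times M sperm (freq p)
    Mm = p * q + (m_eggs_carrier + m_eggs_clean) * p
    mm = m_eggs_clean * q         # mm offspring of carrier mothers are killed
    total = MM + Mm + mm
    return (MM + 0.5 * Mm) / total, 1.0 - mm / total

p = 0.25                          # assumed introduction frequency
for generation in range(1, 13):
    p, carriers = medea_generation(p)
    print(f"generation {generation:2d}: {carriers:.1%} of flies carry Medea")
```

Even with no fitness advantage, the element ratchets upward every generation because only non-carriers pay the mortality -- the property that makes Medea attractive for driving disease-resistance genes through a wild population.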
As was the intention of some of the field’s founding engineers, synthetic biology also promises to help researchers understand the basic rules of cellular function in ways that traditional biology hasn’t been able to, says Elowitz, now a professor at Caltech. “With the synthetic approach, you can start to think of the cell as a laboratory where you can tinker around and really ask questions about the basic principles of genetic circuit design.”
The growing influence of engineering in biology is, in some sense, “the best of both worlds,” adds Church. (See his opinion, "Evolving Engineering.") The good design principles of engineering and the unique properties of evolving biological systems are “just an incredible combination,” he says.
Jef Akst is a News Editor at The Scientist.
Discovered: A camera that can see around corners, why we don't eat smelly foods, the super-Earth is not so super, noise pollution is also bad for plants and abnormal brain development might determine personality.
- This camera takes pictures of things it never saw. Remember that camera that could capture light in motion we learned all about last December? The brains at the MIT media lab intended to use that technology for a camera that can see around corners. And, they have accomplished just that, creating a contraption that captures images of things around corners. It works like a periscope, but instead of using mirrors it uses walls, doors and floors -- surfaces that scatter light rather than reflecting it cleanly. That sounds like straight-up magic to us. But, this video claims it's just science being science. [MIT Media Lab]
- Why we don't eat smelly foods. We didn't realize that not eating smelly foods was a thing, because we quite like smelly foods. But science says it's a thing and has also discovered the reason behind the aversion. Our brains and taste-buds tell us to take smaller bites of things with strong aromas because the smaller bites make us taste the food less than bigger ones. "Perhaps, in keeping with the idea that smaller bites are associated with lower flavour sensations from the food and that, there is an unconscious feedback loop using bite size to regulate the amount of flavour experienced," explains researcher Rene A de Wijk. All of this, one day, could be used as some sort of dieting scheme. [BioMed Central Limited]
- The super-Earth is not so super. That new "possibly habitable" super-Earth science got all excited about isn't all that after all. (Go, original Earth!) It can maybe still support life, because liquid water could possibly exist there -- it's all very dubious. One thing is for sure, though. The planet could never transfer life to another one of its solar-system neighbors. "Planet d would have a very small chance of transferring material to the other planets in the Gliese system and, thus, is far more isolated, biologically, than the inner planets of our own solar system," explained researcher Laci Brock. "It really shows us how unique our solar system is." So, should we start calling it not-so-super-Earth now? [Lunar and Planetary Science]
- Noise pollution is also bad for plants. It makes sense that human noises would scare and probably harm animals, but plants? Science says yes. The animals react to the noise, and thus have different pollination and eating habits. The whole thing has an unintended consequences domino effect thing going on because once the plants change then the whole eco-system changes, attracting different types of animals. [National Evolutionary Synthesis Center]
- If you have this type of personality, it might show in your brain. Are you both gregarious and anxious? (Yes ... ) Well, it might have something to do with the abnormal brain development seen in a genetic condition called Williams Syndrome, finds research out of NIH. "Scans of the brain's tissue composition, wiring, and activity produced converging evidence of genetically-caused abnormalities in the structure and function of the front part of the insula and in its connectivity to other brain areas in the circuit," explains researcher Karen Berman. [Proceedings of the National Academy of Sciences]
2010 is the International Year of Biodiversity. Why should you care?
Biodiversity is the number of different species that exist in a given area. The healthier an area is, the more biodiversity it has. Different forms of wildlife and plants inhabit areas, and these plants and animals learn to coexist and form ecological relationships with each other.
In unhealthy environments, only a few species of each type of plant or animal exist. This is what is known as a monoculture, and monocultures are unhealthy. Whether it is ten thousand or a hundred thousand acres planted with a single crop, or an entire subdivision with lawns all growing the same five plants, a monoculture is vulnerable to pests and disease. For example, part of the fire ant problem in the United States is exacerbated by homeowners, because fire ants prefer monocultures and will shy away from biodiverse areas.
By contrast, the more kinds of species that inhabit an area, the more likely it is that at least a few strains of plants or animals will be resistant to pests and disease. If you have thirty kinds of plants, and a virus blows in across the ocean from a remote island, your lawn has a better chance of surviving and renewing itself than if you have only five kinds of plants. In the same way, having more species of wild birds will provide more secure insect control than having only a few kinds of wild birds. So, by growing more kinds of plants in your yard, you will attract more animals, and help to increase your neighbourhood’s biodiversity.
Tag Archives: biology
New USGS research shows that rice could become adapted to climate change and some catastrophic events by colonizing its seeds or plants with the spores of tiny naturally occurring fungi. The DNA of the rice plant itself is not changed; instead, researchers are re-creating what normally happens in nature.
USGS science supports management, conservation, and restoration of imperiled, at-risk, and endangered species.
New USGS research shows that certain lichens can break down the infectious proteins responsible for chronic wasting disease, a troubling neurological disease fatal to wild deer and elk and spreading throughout the United States and Canada.
For reliable information about amphibians and the environmental factors that are important to their management and conservation, visit the new USGS Amphibian Monitoring and Research Initiative website.
Efforts are underway to restore the Greater Everglades Ecosystem, which has been profoundly altered by development and water management practices. Join us on December 1st when Dr. Lynn Wingard shares USGS research that is helping restoration management agencies develop realistic and attainable restoration goals for the region.
Two new tools that enable the public to report sick or dead wild animals could also lead to the detection and containment of wildlife disease outbreaks that may pose a health risk to people.
The timing of animal migration and reproduction, and observing when plants send out new leaves and bear fruit, is increasingly important in understanding how climate change affects biological and hydrologic systems. Photo credit Copyright C Brandon Cole.
The USGS has been researching manatees in Florida and the Caribbean for decades, but little is known about Cuban manatees. A USGS biologist recently visited Cuba with a team of international manatee experts working to conserve manatees around the Caribbean.
USGS scientists have discovered a new turtle species, the Pearl River map turtle, found only in the Pearl River in Louisiana and Mississippi. Sea-level changes between glacial and interglacial periods over 10,000 years ago isolated the map turtles, causing them to evolve into unique species.
Applies genetics and genomics to answer questions of endangered species, wildlife populations, captive breeding and reintroductions, sources of nonnative species introductions and hybridization and functional genomics.
Research in fish and wildlife health and disease, geomicrobiology, ecosystem function, climate change, water quality for drinking and in recreation, bioremediation, nanotechnology, energy and geographic patterns of microbial distribution.
An interdisciplinary organization advancing the use of science in natural resource decision making. SDC focuses on research and applications in three science areas: decision science (adaptive management and structured decision making), ecosystem services, and resilience/sustainability.
USGS science can help allow for renewable energy growth while lessening conflicts between renewable energy, ecosystems and wildlife.
Science Feature: Ecosystems, Wildlife, and Homegrown Renewable Energy
Photo: Scientists have found that wind turbines are causing fatalities of certain species of migratory insect-eating bats, although a March 2011 study in Science suggests that solutions to reduce the impacts of wind turbines on bats may be possible. Credit: Paul Cryan, USGS.
Interest is booming in renewable energy sources, especially in the areas of wind, solar, and biofuels. Such energy sources have huge benefits, including diversification of the nation’s energy portfolio, new jobs, and potential reductions in greenhouse gas emissions. Yet these energy sources sometimes have adverse effects on ecosystems and the wildlife that live in them, such as bats, birds, and reptiles and amphibians.
The USGS Ecosystem Mission Area serves our Department of the Interior partners and other resource managers by conducting research and monitoring to understand freshwater, terrestrial and marine ecosystems and the fish and wildlife within them.
Thermoelasticity is the change in the size and shape of a solid object as the temperature of that object fluctuates. More elastic materials expand and contract more than inelastic ones. Scientists use their understanding of thermoelasticity to design materials and objects that can withstand fluctuations in temperature without breaking.
Scientists have understood the equations that describe thermoelasticity for over 100 years but have only recently begun stress testing materials in order to determine how thermoelastic they are. By subjecting materials to rising and falling temperatures, engineers are able to predict how much these materials will expand or contract at different temperatures. This knowledge is important when building machines or weight bearing structures with pieces that need to fit closely together. Understanding the principles of thermoelasticity helps engineers design things that maintain their structural integrity for a range of temperatures.
The principles of thermoelasticity have affected the way engineers design a number of different objects. Knowing that concrete expands when it is heated, for instance, is the reason that sidewalks are designed with small spaces between the slabs. Without these spaces, the concrete would have no room to expand, causing a great deal of stress on the material, and leading to cracks, breaks or holes. Likewise, bridges are designed with expansion joints to allow for the components to expand as they are heated.
All elastic materials expand when heated and contract when cooled. The expansion described by thermoelasticity formulas is caused by an increase in the movement of the atoms in the material. These atoms remain bonded to one another as a solid heats, but the average spacing between them grows, allowing the atoms to move away from each other and causing the material to expand. Conversely, when a material is cooled, the atoms move less and the bonds pull them closer to each other.
The principles of thermoelasticity dictate that the expansion caused by an increase in temperature will cause an object to expand in all directions. Slabs of concrete expand out towards one another, up away from the ground, and down towards the ground when they are heated. Cups or other vessels will expand in all directions as well in such a way that the total volume they can contain increases along with the size of the vessel. Specific formulas are used in the study of thermoelasticity to describe how objects change in shape with changes in temperature.
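One of the simplest of those formulas is the linear-expansion relation, delta-L = alpha x L0 x delta-T, where alpha is the material's coefficient of thermal expansion. The short sketch below applies it to a concrete slab; the coefficient is a rough handbook value for concrete, and the slab length and temperature swing are made-up illustrative numbers rather than figures from any study.

    # Linear thermal expansion: delta_L = alpha * L0 * delta_T
    def linear_expansion(length_m, alpha_per_c, delta_t_c):
        """Change in length (m) of a bar heated by delta_t_c degrees Celsius."""
        return alpha_per_c * length_m * delta_t_c

    slab_length = 4.0   # hypothetical 4 m concrete slab
    alpha = 1.0e-5      # per degree C, a rough handbook value for concrete
    delta_t = 30.0      # assumed seasonal temperature swing in degrees C

    growth = linear_expansion(slab_length, alpha, delta_t)
    print(f"A {slab_length:.0f} m slab grows by about {growth * 1000:.1f} mm")  # ~1.2 mm

This is why the gaps between sidewalk slabs are sized in millimeters: even a modest temperature swing produces measurable growth.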
Washington: Researchers from the King Juan Carlos University (URJC) have found that spiders, like many other animal species, are suffering from habitat loss and human encroachment.
"The abundance and number of spider species is negatively affected by the impact of many human land uses, such as habitat fragmentation, fire and pesticides", said Samuel Prieto-Benítez and Marcos Mendez.
To date, fewer than 20 percent of the 173 scientific papers published on the subject since 1980 have reported any negative effects of human impact on arachnids.
The study demonstrates "evident" damaging effects on spider numbers from human land use in farming and pasture systems. "In woodlands this was not so clear", the study explains.
The study proposes some solutions for spider conservation. A reduction in mechanical alterations to the land, such as harvesting, ploughing and grazing, would increase spider diversity in agricultural and pasture ecosystems. In addition, the use of insecticides should be more tightly controlled, as in organic farming, and habitat fragmentation should be avoided.
According to the authors, although "they do not enjoy an excessive level of public sympathy", spiders are an important animal group for humans, since they free us of a large number of pest insects and are "very important" predators in the functioning of natural systems.
The study has been published in Biological Conservation.
First Published: Sunday, May 22, 2011, 00:39
March 15, 2012
Predator-prey interactions are often viewed as evolutionary arms races; while predators improve their hunting behaviors and their ability to sneak up on their prey, the prey improve upon their abilities to detect and escape from their predators. The problem, of course, is that there is a trade-off between maintaining vigilance – the attention necessary to be consistently aware of others in the environment takes quite a bit of physical and mental energy – and doing all the other things that an animal must do, such as finding its own food. As a result of this trade-off, many social species, especially mammalian and avian species, have developed alarm calls. Alarm calls are specific vocalizations that signal the presence of a danger in the environment to nearby conspecifics, and sometimes contain additional information about the type of threat or predator.
Subsequent to the introduction of predatory birds, howler monkeys on Barro Colorado Island in Panama rapidly developed an alarm call specific for those birds that indicated the presence of an avian predator: something like “danger from above!” They did not merely adapt an already existing alarm call to the new predator; they developed an entirely new one.
In certain cases, some species have developed the additional ability to eavesdrop on the alarm calls of other species, gaining access to an additional source of information relevant to the presence of danger in the environment. This ability could be the consequence of a learned association between the alarm calls of another species and the presence of the predator, or it could be due to certain auditory properties common to the alarm calls of both species. More research is required to tease apart these possibilities. However, until recently, it was thought that the ability to identify and react to the alarm calls of other species was only possible in species that already had vocal communication. Several years ago, however, researchers from Princeton University observed this behavior in an unlikely species – a non-vocal reptile called the Galapagos marine iguana (Amblyrhynchus cristatus).
Prior to this observation, it was thought that non-vocal species, who did not have alarm calls themselves, would not be able to associate complex auditory stimuli with the presence of a predator. The Galapagos marine iguana does not have any vocalizations. Instead, they communicate by using visual and olfactory signals. However, they live among the Galapagos mockingbirds (Mimus parvulus), a species that does have auditory vocalizations and specific alarm calls. Further, since iguanas primarily live on the rocky shoreline, they are often unable to view hawks (their main predator) until it is too late. If they had the ability to eavesdrop on the alarm calls of the mockingbirds, they would possibly be able to engage their anti-predator behaviors significantly earlier and increase their chances of survival.
The researchers recorded two types of vocalizations from the Galapagos mockingbirds: their song calls and their alarm calls, and edited them into soundtracks each containing two or three examples of either type of call. The researchers would find a cluster of juvenile and female iguanas (the hawks mostly ignore the mature males) in three different sites on the coast of Santa Fe Island, and would play back the various mockingbird calls through a portable speaker system.
Previous research indicated that upon detecting a hawk, iguanas who were aggregated in a cluster would scatter in different directions, like many prey animals do, perhaps in an attempt to confuse the predator. Upon playback, the researchers noted the behavior of the iguanas. Their behaviors were coded as “non-response,” “alert,” (head raise, orienting towards the sound), or “escape” (walking or running away).
Forty-five percent of iguanas showed vigilance behavior during the playback of the alarm call, compared with only 28.1% of iguanas during the playback of the mockingbird songs, a statistically significant difference. This suggests that the Galapagos iguanas are able to eavesdrop on the alarm calls of the mockingbirds and respond accordingly. However, in addition to the type of playback, the time of day and the data collection site were also significant factors in predicting the proportion of iguanas responding to the playbacks.
At each site, the iguanas successfully differentiated between the alarm calls and the songs, responding with escape behaviors more often subsequent to the alarm call. This is the first experimental evidence that a non-vocal species associated the alarm call of a different species with the presence of a nearby predator.
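For readers curious how a difference such as 45 percent versus 28.1 percent gets judged statistically significant, the sketch below runs a standard two-proportion z-test. The group sizes of 100 trials per condition are hypothetical placeholders (the article does not report the actual counts), so the output illustrates the method rather than re-analyzing the study.

    from math import sqrt, erf

    def two_proportion_z(hits_a, n_a, hits_b, n_b):
        """Two-sided two-proportion z-test using a pooled standard error."""
        p_a, p_b = hits_a / n_a, hits_b / n_b
        pooled = (hits_a + hits_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal-CDF tail
        return z, p_value

    # 45% responding to alarm calls vs. 28.1% to songs, with assumed n = 100 each
    z, p = two_proportion_z(45, 100, 28, 100)
    print(f"z = {z:.2f}, p = {p:.4f}")  # roughly z = 2.5, p = 0.013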
What might account for the different responses among recording sites? The authors speculate that differences in ambient noise could have contributed, resulting in changes in volume or sound quality. As each recording site was on the rocky coast of the island, the sound quality would be subject to wind and the sounds of the ocean. An alternative explanation, however, is that there are true differences between the sites, perhaps owing to slight variations in predator behavior or mockingbird vocalizations. The fact that the hawks on Santa Fe island begin their hunts each day on the northern side of the island and proceed south throughout the day might indeed result in higher predation rates at site three. If the hawks are successful at site three on a given day, they might not have any reason to continue on to sites one or two.
It was also noted that the iguanas were more responsive to the playbacks earlier in the day. One possibility is that these cold-blooded reptiles are more responsive earlier in the day, when their body temperature is lower, and they therefore require more time to escape due to their reduced agility. As the day continues, the sun passes overhead, and their body heat increases, their ability to escape more quickly might improve.
It is particularly impressive that these iguanas appear able to capitalize on the alarm calls of another species, the Galapagos mockingbird. By considering the environmental constraints placed on these iguanas by their specific location on the coast of Santa Fe Island, the elegance of evolution and natural selection becomes apparent.
On Santa Fe Island, hawks routinely begin their day hunting in the north and proceed south throughout the day. It might make sense, therefore, for the iguanas to orient generally towards the north so that they would be able to see the hawks coming. However, marine iguanas such as these must orient their bodies directly towards the sun, or at 180-degree angles to it, in order to survive. Without this thermoregulation, they would quickly overheat and die. The realities of their environmental requirements are directly at odds: face north to anticipate predation and risk overheating, or face the sun to avoid overheating and risk being eaten by an unseen hawk. The ability to capitalize on the auditory alarm vocalizations of another species – especially a species less constrained by body temperature needs – could therefore provide a significant benefit. The evolution of this ability would allow the iguanas to maintain their body temperature while simultaneously staying aware of potential threats of predation.
In order to confirm this evolutionary explanation, more research would be required to identify whether naive Galapagos iguanas, never exposed to the threat of predation or the calls of the mockingbirds, would recognize and respond to the alarm calls. The other possibility is that this is simply the result of associative learning of complex auditory information. In either case, the eavesdropping ability of this non-vocal reptile species is remarkable. For these iguanas, it truly is a sin to kill a mockingbird.
Vitousek MN, Adelman JS, Gregory NC, & Clair JJ (2007). Heterospecific alarm call recognition in a non-vocal reptile. Biology Letters, 3 (6), 632-634. PMID: 17911047
Archive for DARPA
You are browsing the archives of DARPA.
There is a lot of great stuff going on:
On the absolute temperature scale that is used by physicists, the Kelvin scale, one cannot go below zero, at least not in the sense of getting colder than zero kelvin. Physicists at the Ludwig-Maximilians University Munich and the Max Planck Institute of Quantum Optics in Garching, Germany, have now created an atomic gas in the lab that nonetheless has negative Kelvin values. These negative absolute temperatures lead to several striking consequences: although the atoms in the gas attract each other and give rise to a negative pressure, the gas does not collapse, a behavior that is also postulated for dark energy in cosmology. Supposedly impossible heat engines can also be realized with the help of negative absolute temperatures, such as an engine with a thermodynamic efficiency above 100 percent.

To bring water to the boil, energy must be added to it. As the water heats up, its molecules increase their kinetic energy over time and move faster on average. Yet the individual molecules possess different kinetic energies, from very slow to very fast. In thermal equilibrium, low-energy states are more likely than high-energy states, i.e., only a few particles move really fast. In physics, this distribution is called the Boltzmann distribution.

Physicists led by Ulrich Schneider and Immanuel Bloch have now created a gas in which this distribution is inverted: many particles possess large energies and only a few have small energies. This inversion of the energy distribution means that the particles have assumed a negative absolute temperature. “The inverted Boltzmann distribution is the hallmark of negative absolute temperature; and this is what we have achieved,” says Schneider. “Yet the gas is not colder than zero kelvin, but hotter. It is even hotter than at any positive temperature. The temperature scale simply does not end at infinity, but jumps to negative values instead.”
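A toy calculation makes the inversion concrete. In equilibrium, the occupation probability of a state with energy E is proportional to exp(-E/kT), so flipping the sign of T flips which states dominate. The energy levels below are arbitrary illustrative values with the Boltzmann constant set to 1; they are not parameters from the Munich experiment.

    import math

    def boltzmann_weights(energies, temperature):
        """Normalized occupation probabilities, proportional to exp(-E / T) with k = 1."""
        weights = [math.exp(-e / temperature) for e in energies]
        total = sum(weights)
        return [w / total for w in weights]

    levels = [0.0, 1.0, 2.0, 3.0]  # toy energy levels, arbitrary units

    for t in (1.0, -1.0):
        probs = boltzmann_weights(levels, t)
        print(f"T = {t:+.0f}:", [f"{p:.3f}" for p in probs])
    # At T = +1 the lowest level is the most occupied; at T = -1 the
    # distribution inverts and the highest level dominates.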
(GizMag) A streetscape that includes natural landscaping, bicycle lanes, wind-powered lighting, storm water diversion for irrigation, drought-resistant native plants and innovative concrete has earned Cermak Road in Chicago the title of “Greenest Street in America,” according to the Chicago Department of Transportation (CDOT). The road runs through an industrial zone that links state and US highways. The project will record quantifiable results against a set of equally aggressive sustainability goals charting eight performance areas, such as storm water management, material reuse, energy reduction, and place making. The most anticipated data will be collected from the first commercial use of photocatalytic cement in the inside highway lanes. This “smog-eating” cement contains nanoparticles of titanium dioxide and is designed to clean the surface of the road and remove nitrogen oxide from the surrounding air through a catalytic reaction driven by UV light. In addition, CDOT used 30 percent recycled content in the sidewalk concrete.
Researchers from the University of Pennsylvania have shown a new way to direct the assembly of liquid crystals, generating small features that spontaneously arrange in arrays based on much larger templates. “Liquid crystals naturally produce a pattern of close-packed defects on their surfaces,” says Shu Yang, leader of the study, “but it turns out that this pattern is often not that interesting for device applications. We want to arbitrarily manipulate that pattern on demand.” Electrical fields are often used to change the crystals’ orientation, as is the case with liquid crystal displays, but the Penn research team was interested in manipulating defects by using a physical template. Employing a class of liquid crystals that forms stacks of layers spaced in nanometers—known as “smectic” liquid crystals—the researchers set out to show that, by altering the geometry of the molecules on the bottommost layer, they could produce changes in the patterns of defects on the topmost. “The molecules can feel the geometry of the template, which creates a sort of elastic cue,” says another researcher, Kathleen Stebe. “That cue is transmitted layer by layer, and the whole system responds.” The researchers’ template was a series of microscopic posts arrayed like a bed of nails. By altering the size, shape, symmetry and spacing of these posts, as well as the thickness of the liquid crystal film, the researchers discovered they could make subtle changes in the patterns of the defects.
(Government Executive) DARPA has developed an injectable foam that shows promise in reducing death from internal bleeding, especially in situations involving noncompressible wounds. Two separate liquid compounds are injected into the body. When the two liquids mix, they react to form a foam coagulant that expands within the abdominal cavity, compressing the wound without sticking to vital organs. In tests, the compression was shown to reduce blood loss six-fold and increase the three-hour survival rate to 72 percent, up from just eight percent. DARPA Wound Stasis program manager Brian Holloway says, “If testing bears out, the foam technology could affect up to 50 percent of potentially survivable battlefield wounds.”
NASA’s decision to buy an inflatable new room for the International Space Station may push the module’s builder—commercial spaceflight company Bigelow Aerospace—one step closer to establishing its own private stations in orbit. Last week, NASA announced that it will pay $17.8 million for the Nevada-based company’s Bigelow Expandable Activity Module (BEAM), which will be affixed to the huge orbiting lab as a technology demonstration.
This has little to do with ceramics or glass—but everything to do with the biggest “What in the world…” moment I have had in a long, long time. I will try to keep this brief, but it’s nearly impossible to convey the weird (not meant to be pejorative) materials work of Anna C. Balazs’s team at the University of Pittsburgh.
This all started last week when I breezily was scrolling through a list of new papers published in the recent issue of PNAS. Something in the abstract of “Reconfigurable Assemblies of Active, Autochemotactic Gels” caught my eye. Maybe it was the word “autochemotactic,” which I had to look up. Or, maybe it was these two spooky sentences in the abstract,
“To the best of our knowledge, this is the closest system to the ultimate self-recombining material, which can be divided into separated parts and the parts move autonomously to assemble into a structure resembling the original, uncut sample.… Our findings pave the way for creating reconfigurable materials from self-propelled elements, which autonomously communicate with neighboring units and thereby actively participate in constructing the final structure.”
“Hmm,” I thought. “Wasn’t this the big gimmick in the second Terminator movie?”
Liquid metal or no liquid metal, Balazs had me seriously hooked.
It turns out that Balazs works with Belousov–Zhabotinsky (BZ) gels that are relatively simple and, most importantly, have the fascinating ability to “quiver” for extended periods (but not forever) in predictable patterns by means of a self-regenerating internal redox reaction. Watching how the waves spread through these gels is pretty astounding, but this is no one-trick pony. Balazs and her researchers learned quite a bit about how to manipulate the gels and the oscillations, based on things like shape and composition.
They also learned how to use light on one part of material to stimulate the oscillations to move through the gel from one end towards the other, or use two or more lighted areas to create even more complex oscillations. It turns out that if they made the gel into a cylinder shape, a precise use of the light could actually make the worm slowly move. And, the moves could be complex, with lots of twists and turns in three dimensions, kind of like steering a real worm with sticks, but in this case the sticks are just light beams. But, is this just a good bar trick? Not if you are, say, DARPA, and are looking for a soft, synthetic robot that could climb walls and follow complex routes.
And, Balazs’s group was just warming up. It turns out they also figured out how to make microcapsules of these gels that could emit—in a controllable manner, using light—nanoparticles that create gradients that act in philic or phobic fashions to help propel and steer the capsules, and attract other ones. This is where the self-propulsion, self-recombination and “train” functions start to come into play. Their models (once they understood the chemistry, most of the group’s work was done through modeling, so lots of video snippets are available) indicated that snake-like assemblies of these capsules could selectively “attract” or drop off other capsules as might be needed.
Okay—I know I am not doing this work justice. But please do me (and yourself) a favor and set aside about 15-30 minutes to watch the above video. The first part features a fairly recent lecture by Balazs (with lots of delightful animations and videos) at Harvard/Radcliffe. You can save yourself some time by starting at about the 2:10 mark.
I know from an unfortunate experience with a family member that sepsis is a helluva medical condition that can arise suddenly, cause enormous pain and, if not diagnosed quickly, can bring unexpected death within a day or two. About 20-35 percent of patients with severe sepsis and 40-60 percent of patients with septic shock die within 30 days. Others die within the ensuing six months, often after enduring multiple surgical attempts to identify and treat the source of the infection. According to the Centers for Disease Control, sepsis in the United States is the second-leading cause of death in noncoronary ICU patients, and the tenth-most-common cause of death overall. Sepsis is common and also more dangerous in elderly populations.
Thus, I think it is great that DARPA is attempting to take on one of the grand challenges of the medical establishment. The agency describes sepsis as “an overwhelming blood infection, which when coupled with shock (such as that which may be experienced following a combat injury) has a mortality rate near 50 percent. Current methods to identify and treat sepsis may take 48 hours or longer—resulting in increased recovery time from combat wounds and hundreds of preventable deaths.”
Apparently, DARPA began to tackle sepsis beginning in fall 2011 through its Dialysis-Like Therapeutics program. The agency says the goal of the DLT is to demonstrate a portable device capable of quickly sensing and removing bacteria, viruses, toxins and cytokines from the bloodstream on clinically relevant time scales. It notes, “research to date has focused on advancing the components needed for such a device.”
Now it appears that DARPA is taking the next step and has issued a notice requesting “next step” proposals:
DARPA [is] seeking integration of previously awarded DLT projects to develop sensors, complex fluid manipulation architectures, separation technologies and closed-loop control algorithms. After integration, DARPA hopes for a single device capable of removing at least 90 percent of sepsis-causing material from a patient within 24 hours. The DLT device sought by DARPA would differ from kidney dialysis devices by potentially enabling continuous, early sensing based on the entire blood volume, removing the need for anticoagulants, and facilitating label-free separation of multiple targets within the blood.
DLT is a technology demonstration and human trials will not be funded. However, proposers are encouraged to submit plans for testing that would result in an investigational device exemption approval from the Food and Drug Administration (FDA). The FDA will be engaged with the DLT team throughout the program lifecycle by reviewing proposals, participating in proposers’ day meetings and participating in Government review boards.
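To get a feel for what removing 90 percent of a target from the blood within 24 hours implies, here is a toy first-order clearance estimate. The five-liter blood volume and the single-compartment model are simplifying assumptions for illustration, not specifications from the DLT program.

    import math

    BLOOD_VOLUME_L = 5.0     # assumed adult blood volume
    HOURS = 24.0             # DLT time window
    REMOVED_FRACTION = 0.90  # DLT removal target

    # First-order clearance: remaining = exp(-k * t), so k = -ln(1 - removed) / t
    k_per_hour = -math.log(1.0 - REMOVED_FRACTION) / HOURS
    clearance_ml_per_min = BLOOD_VOLUME_L * 1000.0 * k_per_hour / 60.0
    print(f"rate constant ~{k_per_hour:.3f} per hour, "
          f"effective clearance ~{clearance_ml_per_min:.0f} mL/min")

Even under these generous assumptions the device must, at minimum, continuously process on the order of 8 mL of blood per minute while capturing its targets with high efficiency, which hints at why the fluid handling and separation components are research efforts in their own right.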
If successful, the sepsis technology should prevent the deaths of thousands of people in military service and may open the door to novel detection and treatment approaches for other medical maladies. “DLT represents a revolutionary approach in the treatment of blood-borne illness,” says Tim Broderick, DARPA program manager. “If successful, this technology could be used to treat sepsis faster and more effectively, saving lives and reducing treatment costs. In 2009 alone, more than 1,500 active duty Service members were diagnosed with sepsis. DLT may eliminate the need for expensive culture-based identification methods and extended hospital stays. And, as the technology matures, we believe the device could be adapted to diagnose and treat a variety of illnesses.”
Detailed information about the program and requirements for proposals can be found in the Broad Agency Announcement. The proposal due date is July 13, 2012, and DARPA expects that the awards will be issued in October. The manager for this program is Timothy Broderick.
DARPA has issued a notice saying it is seeking proposals for developing low-temperature processes to deposit thin films whose current minimum processing temperatures exceed the maximum temperatures that substrates of interest to the Department of Defense can withstand. From the notice:
Nontraditional performers outside of the materials research/thin film deposition communities in areas such as surface acoustic wave spectroscopy, plasma physics, photochemistry, etc., are highly encouraged to submit proposals to the LoCo program.
Performance of materials, parts, and assemblies (e.g., tribological, optical, electronic and/or thermal) are dictated by interactions between surfaces and the environment, affecting the cost, capability and readiness of DOD systems. A number of known thin-film materials exist that could mitigate these performance limitations when deposited as coatings on substrates of interest to DOD (e.g., crystalline diamond thin film). However, the high bulk deposition temperatures used in state-of-the-art chemical vapor deposition processes to meet the energetic and chemical requirements for processes, such as reactant flux, surface mobility and reaction energy, are often higher than the maximum temperature that many DOD-relevant substrates can withstand (i.e., due to property changes such as melting, grain growth, etc.). These synthetic constraints not only affect our ability to manipulate surfaces of existing DOD systems, but also restrict development of new technologies on emerging substrates (e.g., diamond on polymers for flexible electronics and on living cells for biotic/abiotic interfaces).
To this end, DARPA is seeking innovative multidisciplinary research proposals that independently develop novel chemical and physical processes to meet the energetic/chemical requirements of thin film deposition without reliance on broadband temperature input used in state-of-the-art chemical vapor deposition. DARPA anticipates early stage efforts that address one piece of the deposition puzzle (i.e., process component) such as reactant flux, surface mobility, reaction energy, nucleation, byproduct removal, etc., that will be integrated later in the program to create a full deposition process. The specific objective of the LoCo program is to develop a deposition process applicable to a large variety of thin-film materials. However, to guide proposal development for process components, initial areas of interest for LoCo thin films include synthesis of covalent thin films with long-range order that require high deposition temperatures (>700°C) for insertion points in tribological, thermal management, optical and electronic applications (e.g., crystalline diamond thin films).
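The difficulty of giving up bulk temperature follows from the Arrhenius-like behavior of thermally activated surface reactions. The sketch below compares relative rates at a conventional deposition temperature and a polymer-safe one; the 3 eV activation energy is an assumed, illustrative value, not a number from the BAA.

    import math

    K_B = 8.617e-5  # Boltzmann constant, eV per kelvin

    def relative_rate(ea_ev, temp_k):
        """Relative thermally activated rate, exp(-Ea / kT); prefactor omitted."""
        return math.exp(-ea_ev / (K_B * temp_k))

    ea = 3.0               # assumed activation energy, eV
    hot = 700.0 + 273.0    # ~700 C, near the quoted diamond CVD floor
    cool = 150.0 + 273.0   # a temperature a polymer might survive

    ratio = relative_rate(ea, hot) / relative_rate(ea, cool)
    print(f"rate at {hot:.0f} K is ~{ratio:.1e} times the rate at {cool:.0f} K")

The roughly twenty-orders-of-magnitude gap is why the program asks for nonthermal ways to supply reaction energy rather than simply running existing chemistries colder.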
The LoCo program is broken into two independent, but intellectually connected thrusts: Thrust 1, the main effort of the LoCo program, is arranged in a progression of discrete tasks that rapidly move from fundamental research to deposition of a thin film on a DOD part for testing and evaluation. Performers will address one or more of the process components for thin film growth (e.g., flux, mobility, reactivity, etc.). This initial effort will focus on validation of the fundamental approach. To facilitate technology transfer, DARPA is also seeking input on DOD systems and parts that could benefit from success in the LoCo program under Thrust 2, where industrial performer team(s) are asked to carry out technical analyses on a proposed DOD part that could benefit from a LoCo coating.
Detailed information about the program and requirements for proposals can be found in the Broad Agency Announcement.
Researchers with unique capabilities looking for teammates or specific expertise should post their information on the teaming site. Proposers are strongly encouraged to submit an abstract in advance of a full proposal. Proposers must submit their abstracts and proposals in response to the DARPA BAA (DARPA-BAA-12-20) through the Grants.gov website or DSO’s BAA website. The proposal abstract due date is June 7, 2012, and the proposal due date is July 26, 2012. The technical contact for this program is: Brian Holloway (DARPA-BAAfirstname.lastname@example.org).
DARPA emphasizes that this notice does not constitute an official solicitation. No information provided here supersedes any of the information in the posted Broad Agency Announcement. This notice does not constitute a specific commitment by DARPA to provide any funds.
Last August DARPA conducted the second test flight of its hypersonic technology vehicle, the Falcon HTV-2. The test ended when the vehicle sent itself into the Pacific Ocean nine minutes into the flight. At the time, the reasons for the abort were unclear and frustrating. The project’s program manager, Maj. Chris Schulz, USAF, said then, “We’ll learn. We’ll try again. That’s what it takes.”
To help figure out what it takes, DARPA enlisted the aid of an independent engineering review board comprised of government and academic experts to evaluate the data collected during the flight. The vehicle was built not only to demonstrate the technology, but also as a data-gathering platform. Thus, the ERB had plenty of data telling the story of what happened.
The goal of the program is to develop a vehicle that can reach any location in the world within an hour, which requires hypersonic speeds. According to a story on the DARPA website, the August 11 test flight successfully achieved stable, aerodynamically-controlled speeds up to Mach 20 for the first three minutes. The vehicle appears to have experienced a series of “shockwave disturbances” that were more than 100 times more intense than it was designed to withstand. The vehicle recovered from these first shockwaves and maintained control, which DARPA’s acting director, Kaigham Gabriel observed, was in itself a successful outcome, “That’s a major validation that we’re advancing our understanding of aerodynamic control for hypersonic flight.”
So, how did the vehicle eventually lose control? The ERB conclusion was that “the most probable cause of the HTV-2 Flight 2 premature flight termination was unexpected aeroshell degradation, creating multiple upsets of increasing severity that ultimately activated the Flight Safety System,” which triggered a controlled descent and ocean ditch of the vehicle.
Vehicle design engineers knew there would be a “gradual wearing away of the vehicle’s skin as it reached stress tolerance limits”; however, more of the vehicle’s skin separated from the vehicle than was expected. The gaps created by the peeling “created strong, impulsive shock waves around the vehicle,” which caused it to roll suddenly. Eventually, the shockwave-induced rolls became more than the vehicle could overcome.
The old maxim that we learn more from our failures than our successes applies here. Schulz said in the DARPA story, “Data collected during the second test flight revealed new knowledge about thermal-protective material properties and uncertainties for Mach 20 flight inside the atmosphere.” That is, the data collected during the flight showed that the assumptions and extrapolations used to design the vehicle were not enough to predict accurately the extreme environment experienced at Mach 20. Schulz says, “The result of these findings is a profound advancement in understanding the areas we need to focus on to advance aerothermal structures for future hypersonic vehicles. Only actual flight data could have revealed this to us.”
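A back-of-the-envelope number hints at how extreme that environment is. For a calorically perfect gas the stagnation temperature is T0 = T x (1 + (gamma - 1)/2 x M^2). At Mach 20 this textbook formula wildly overestimates actual skin temperatures, because dissociation and other real-gas effects absorb much of the energy, but it conveys the scale of the extrapolation problem.

    def stagnation_temperature(static_t_k, mach, gamma=1.4):
        """Ideal-gas stagnation temperature: T0 = T * (1 + (gamma - 1) / 2 * M**2)."""
        return static_t_k * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

    t_ambient = 220.0  # kelvin, a representative upper-atmosphere value
    t0 = stagnation_temperature(t_ambient, 20.0)
    print(f"ideal-gas stagnation temperature at Mach 20: ~{t0:,.0f} K")
    # ~17,800 K -- far hotter than any material survives, which is why models
    # anchored to lower-speed data broke down.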
The DARPA story says the next step for the program is to improve models for “characterizing the thermal uncertainties and heat-stress allowances for the vehicle’s outer shell.”
However, accurately characterizing materials at high temperatures is not easy. Last week we summarized a review article on methods for measuring thermophysical properties above 1,500°C, a laboratory capability that is being driven largely by aerospace and nuclear applications. Even the business of accurately measuring temperature for those tests is as much art as science.
These materials are not easy to work with, either. The materials under investigation are refractory nonoxide composites, like C/SiC, sometimes with refractory borides mixed in. The cover story of the January/February issue of The Bulletin gives an overview of materials, processes and properties of UHTC composite materials under investigation for hypersonic vehicles in the UK. In the US, a multi-university and industry partnership is working on the problem under the umbrella organization, National Hypersonic Science Center for Materials and Structures.
A recent paper (abstract only) by a research team at the University of California, Santa Barbara—one of the partner universities—describes a method for measuring strain at high-temperatures. The authors, Mark Novak and Frank Zok, note that development of materials for extreme environments requires the ability to reproduce conditions in the laboratory, which is not trivial.
In their paper, they use digital image correlation to measure displacement and strain. DIC is an optical, non-contact method that can be used at high temperatures.
Displacements are measured by correlating speckle-pattern images of the specimen surface in the deformed state with images in the undeformed state. Strains are then determined by differentiating the displacement fields. The technique eliminates the need for strain gauges and, they report, is accurate with excellent spatial resolution. It has the further advantage of being useful for specimens subject to thermal gradients or mechanical loads because it can recognize out-of-plane displacements.
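The differentiation step itself is conceptually simple. Here is a minimal one-dimensional sketch with a synthetic displacement field standing in for real DIC output; actual DIC produces two-dimensional fields and uses more robust smoothing and differentiation schemes.

    import numpy as np

    # Synthetic displacement u(x) along a 10 mm gauge length: a uniform 0.1%
    # strain plus a small localized hot spot (values invented for illustration).
    x = np.linspace(0.0, 10.0, 201)                     # position, mm
    u = 0.001 * x + 0.0005 * np.exp(-((x - 5.0) ** 2))  # displacement, mm

    strain = np.gradient(u, x)  # du/dx, the engineering strain field
    print(f"far-field strain ~{strain[5]:.3%}, peak strain ~{strain.max():.3%}")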
The trick is in the imaging, which requires an illumination source that can be distinguished from the glow of thermal radiation. Also, heat haze is a problem when the measurements are made in ambient air. Finally, the speckle pattern itself has to be thermally stable and have enough contrast in the test temperature interval.
The paper describes a technique Novak and Zok devised using a CO2 laser as the illumination source, which they demonstrated on a C/SiC composite and a nickel base superalloy (Inconel 625). Alumina or zirconia paints were used to enhance the speckle contrast on the composite; the superalloy was oxidized to create a dark background.
Heat haze was managed by using an “air knife,” which blows air across the surface of the sample, minimizing turbulence and mixing the air in the sight lines of the imaging instruments. The air knife did not completely eliminate heat haze, but their results show that using it led to sharper images and reduced the standard deviation of strain values by a factor of three.
They were able to demonstrate full-field strain mapping up to 1,500°C, and suggest that the upper-temperature limit for measuring thermomechanical properties could be extended by modifying the illumination and filtering out longer wavelengths.
The module pdb defines an interactive source code debugger for Python programs. It supports setting (conditional) breakpoints and single stepping at the source line level, inspection of stack frames, source code listing, and evaluation of arbitrary Python code in the context of any stack frame. It also supports post-mortem debugging and can be called under program control.
The debugger is extensible -- it is actually defined as the class Pdb. This is currently undocumented but easily understood by reading the source. The extension interface uses the modules bdb (undocumented) and cmd.
The debugger's prompt is "(Pdb) ". Typical usage to run a program under control of the debugger is:
    >>> import pdb
    >>> import mymodule
    >>> pdb.run('mymodule.test()')
    > <string>(0)?()
    (Pdb) continue
    > <string>(1)?()
    (Pdb) continue
    NameError: 'spam'
    > <string>(1)?()
    (Pdb)
pdb.py can also be invoked as a script to debug other scripts. For example:
    python /usr/local/lib/python1.5/pdb.py myscript.py
Typical usage to inspect a crashed program is:
    >>> import pdb
    >>> import mymodule
    >>> mymodule.test()
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "./mymodule.py", line 4, in test
        test2()
      File "./mymodule.py", line 3, in test2
        print spam
    NameError: spam
    >>> pdb.pm()
    > ./mymodule.py(3)test2()
    -> print spam
    (Pdb)
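Another common pattern, not shown in the examples above, is planting a hard-coded breakpoint with pdb.set_trace(). The function and values below are invented purely for illustration:

    import pdb

    def average(values):
        total = sum(values)
        pdb.set_trace()  # execution pauses here; try "p total", then "continue"
        return total / len(values)

    average([2, 4, 6])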
The module defines the following functions; each enters the debugger in a slightly different way:
10 Nov 2011: Report
Military Bases Provide Unlikely Refuge for South’s Longleaf Pine
The expanses of longleaf pine forest that once covered the southeastern United States have been whittled away to just 3 percent of their original range. But as scientists are discovering, this threatened forest ecosystem has found a sanctuary in an unexpected place — U.S. military installations.
Down a narrow dirt road at the Third Infantry Division’s home base of Fort Stewart, Georgia, Joe Veldman pulls into a sand-hill landscape covered with turkey oak, saw palmetto, and, most crucially, a healthy stand of longleaf pine.
At first glance, this hardscrabble ecosystem on one of the U.S. Army’s largest installations appears to have seen better days. But Veldman, a plant ecologist at the University of Wisconsin, politely demurs. “If every place in the Southeast were like this, we wouldn’t be doing any research,” he said. “This area is in good shape.”
From Virginia to Texas, the longleaf pine has seen its historical habitat reduced to a mere 3 percent of its original 92 million-acre range. Centuries of logging and farming have exacted a heavy toll on these ecosystems, and in recent decades the Southeast’s economic boom has led to large tracts of longleaf pine being razed for housing, malls, and office parks.
As it turns out, however, military bases such as Fort Stewart have become a key refuge for longleaf pine. And now the U.S. Department of Defense is funding several independent, long-term studies on how to restore some of the pine’s ecosystems — one of the most biodiverse environments north of the tropics — to their former glory. Data gleaned from these studies will help the broader longleaf research community conduct longleaf restoration on government and private land across the Southeast; conservationists have set a 15-year goal of restoring the longleaf pine from its current 3.4 million acres to 8 million acres, or levels approaching 9 percent of its historical range.
“Longleaf pine habitat usually gets chopped down and paved,” said John Orrock, a conservation biologist at the University of Wisconsin and the Fort Stewart understory study’s onsite project leader. “That the military actually keeps large chunks of intact land is a godsend because the danger for these ecosystems is that they get developed into something like Walmart.”
Orrock and Veldman are a part of a $1.8 million study, involving some 25 researchers at three locations: Fort Stewart; Fort Bragg, North Carolina; and South Carolina’s Savannah River nuclear site. The study’s main goal is to understand how best to restore the longleaf’s diverse understory.
Plant ecologist Joe Veldman in a longleaf pine ecosystem in Fort Stewart, Georgia
For its part, the military has found longleaf pine habitat ideal for troop maneuvers. “The openness of the longleaf pine and the thin stands provides visibility and maneuverability that is very consistent with what a mechanized force like the Third Infantry division likes to fight in,” said Tim Beaty, a U.S. Army wildlife biologist.
And while it may seem counter-intuitive, military bases have proven to be hospitable places for longleaf pine ecosystems. This ancient, fire-resistant species depends on random fires to thin the understory and rid itself of competitors, creating an unexpected synergy between live-ammunition maneuvers — which often lead to small blazes — and a thriving longleaf pine ecosystem.
“When it comes to longleaf pine management, the military is by far the best,” said John Kush, a forest ecologist at Auburn University.
In the early 19th century, vast expanses of longleaf pine stretched inland from the coasts of the Atlantic and Gulf of Mexico for hundreds of miles, creating a sense of unrelenting monotony. Travelers in Georgia reported a feeling of dread and loneliness upon entering seemingly endless stretches of this forest, where row upon row of tall, straight longleaf pines towered over an understory rife with asters, morning glories, rosemary, huckleberries, wax myrtle, winged sumac, and wiregrass. Deer, feral pigs, rattlesnakes, wild turkey, and bobwhite quail were found in abundance.
Today, some 66 percent of longleaf pine forest remains in private hands, primarily in southwest Georgia, the Florida panhandle, and south Alabama. Much of this land is being logged or farmed, although today a younger generation of owners is only too willing to sell their inheritance to developers.
The Alabama-based Longleaf Alliance — the nation’s oldest longleaf conservation group — is working to educate private owners about both the aesthetic and economic benefits of maintaining a longleaf ecosystem. Longleaf pine is still in high demand as timber, primarily for utility poles, and selective logging can preserve these ecosystems.
Rhett Johnson, a forest ecologist and the president of the Longleaf Alliance, said that restoring longleaf forests on former agricultural land is a major challenge, since plant seeds that once made up the ecosystem’s diverse understory have often disappeared. This is where the scientific work on military bases may be helpful, as Orrock and his colleagues are now studying how best to restore former farmland.
Almost half of Fort Stewart’s 279,270 acres lie in longleaf pine ecosystems; but before it became a military base some 70 years ago much of its acreage was either under cultivation or grazed. Orrock and his team have established 36 experimental sites at Fort Stewart alone and are placing an emphasis on how to restore the diverse understory found in pristine longleaf pine ecosystems. The team’s preliminary results indicate that agriculture can spoil longleaf habitat for decades — even a century — after cultivation. Restoration of former farmland, often now covered in loblolly pines, would likely require clear-cutting and starting from scratch.
“Our work shows that sites that were in agriculture many years ago still have much less diverse communities,” said Orrock. “The exciting part for us is whether or not these lasting effects of the past can be undone with our current experiments.”
The military base studies also are underscoring that in longleaf pine ecosystems, fires are as welcome as a healthy rain. Longleaf pine seedlings actually need fire to ensure the ecosystem maintains an open canopy, providing the young, shade-intolerant pines with sunlight. Without fire, broadleaf trees like sweetgum and water oak thrive and eventually cast the understory in permanent shade.
Joe Veldman noted the importance of fire as he drove me to research plots scattered around Fort Stewart. We headed along a two-lane blacktop that threads its way through thousands of acres of mixed woodland and Army live-fire ranges. Veldman stopped at an area known as sector E-22, not far from a Cold War-era mock enemy interrogation camp. Much of E-22 is an area of formerly cultivated farms, and the things growing there now — loblolly, jasmine, and wild vines of the muscadine grape — typically colonize old fields.
Later, walking through a stand of longleaf pine, Veldman pointed to a nearby seedling that could be easily mistaken for an oversized clump of grass. Such young seedlings can remain within two feet of the surface for five years or more, then enter a phase where they can quickly spurt by as much as four to five feet per year. But as long as the seedling’s dominant growth bud is below or near the surface, Veldman explains, fires can burn right over it, leaving it unscathed. Mature longleaf pine, which have potential lifespans of 500 years or more, are relatively immune from the ravages of wildfire, since their thick bark offers them protection.
Orrock, Veldman, and their colleagues are testing the assumption that understory diversity is maintained by a longleaf pine ecosystem that thrives on frequent fire. “Within the span of one square meter,” said Orrock, “it’s exciting to see 30 to 40 different plant species in really pristine longleaf pine understories.”
From ground level, it’s clear why the military would like this type of landscape for training. There's enough territory and tree cover for camouflage maneuvers, and during live-fire exercises there is ample room to maneuver tanks and Humvees in a park-like landscape. And if their ordnance happens to start a fire? So much the better.
Depending on drought conditions, as many as 200 wildfires per year are started by ordnance, smoke grenades, and signal or illumination flares. On average, these training-related fires consume no more than 2,000 acres per year, and most are allowed to burn out on their own, except when they present a significant safety danger due to smoke.
In addition to funding studies of longleaf pine restoration, the Department of Defense is readying itself for an era in which its military bases have to account for their carbon footprint. Longleaf pine ecosystems can help the military do that through carbon sequestration. Another five-year, military-funded study — led by Lisa Samuelson, an eco-physiologist at Auburn University — will focus on measuring carbon storage and ecosystem biomass at Camp Lejeune, North Carolina; Fort Polk, Louisiana; and Fort Benning, Georgia. The project’s ultimate goal is to help the Department of Defense develop a carbon sequestration plan.
The military is also charged with helping retain and restore threatened species on military bases. At Fort Stewart, these include the red-cockaded woodpecker and the gopher tortoise, which relies on the uncluttered understory of longleaf ecosystems to move through the landscape.
Even though aesthetics and endangered wildlife might seem to be the least of the Army’s worries, it’s hard not to be impressed when standing in the midst of such an ancient ecosystem. Near the end of a rutted fire road that disappeared into a longleaf horizon, Veldman and I stood and took it all in. The only sound was the lonely swoosh of wind in the pines. Standing there, it was easy to imagine these pines as they once were, ubiquitous and untouched.
I am all for creative thinking, but this may be the oddest concept I've yet seen to treat one of the symptoms of global warming.
Going far beyond just tracking melting patterns, German researchers want to actually stop, or at least slow down, the melting of glaciers in the Swiss Alps, and they plan to do so by setting up a large screen that would trap cold air over the ice. The experimental screen is nearly 50 feet long by 10 feet high, and was set up in the middle of the Rhone glacier in Switzerland.
Ooooh kay. This may be one of those science-for-the-environment-gone-off-track moments. I have a very hard time imagining that this idea will go very far in keeping glaciers from melting, but at least they're trying. Filed under “Weird.”
Arctic sea ice extent declined quickly in June, setting record daily lows for a brief period in the middle of the month. Strong ice loss in the Kara, Bering, and Beaufort seas, and Hudson and Baffin bays, led the overall retreat. Northern Hemisphere snow extent was unusually low in May and June, continuing a pattern of rapid spring snow melt seen in the past six years.
Overview of conditions
Arctic sea ice extent for June 2012 averaged 10.97 million square kilometers (4.24 million square miles). This was 1.18 million square kilometers (456,000 square miles) below the 1979 to 2000 average extent. The last three Junes (2010-2012) are the three lowest in the satellite record. June 2012 ice extent was 140,000 square kilometers (54,000 square miles) above the 2010 record low. Ice losses were notable in the Kara Sea, and in the Beaufort Sea, where a large polynya has formed. Retreat of ice in the Hudson and Baffin bays also contributed to the low June 2012 extent. The only area of the Arctic where sea ice extent is currently above average is along the eastern Greenland coast.
The ice extent recorded for 30 June 2012 of 9.59 million square kilometers (3.70 million square miles) would not normally be expected until July 21, based on 1979-2000 averages. This puts extent decline three weeks ahead of schedule.
Conditions in context
In June, the Arctic lost a total of 2.86 million square kilometers (1.10 million square miles) of ice. This is the largest June ice loss in the satellite record. Similar to May, the month was characterized by a period of especially rapid ice loss (discussed in the mid-month entry, June 19th) followed by a period of slower loss. Warm conditions prevailed over most of the Arctic; temperatures at the 925 hPa level (about 3000 feet above the ocean surface) were typically 1 to 4 degrees Celsius (1.8 to 7.2 degrees Fahrenheit) above the 1981 to 2010 average, and as much as 7 to 9 degrees Celsius (12.6 to 16.2 degrees Fahrenheit) above average over northern Eurasia and near southern Baffin Bay. Weather patterns over the Arctic Ocean varied substantially through the month.
June 2012 compared to recent years
Arctic sea ice extent for June 2012 was well below average for the month compared to the satellite record from 1979 to 2000. It was the second lowest in the satellite record, behind 2010. Through 2012, the linear rate of decline for June Arctic ice extent over the satellite record is 3.7% per decade.
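A trend such as 3.7% per decade comes from a least-squares fit to the yearly June extents, expressed relative to the 1979-2000 mean. The series below is synthetic and only loosely resembles the real record, so the printed number is illustrative rather than NSIDC's figure.

    import numpy as np

    # Synthetic June mean extents in millions of sq km (invented, not NSIDC data)
    years = np.arange(1979, 2013)
    noise = np.random.default_rng(0).normal(0.0, 0.3, years.size)
    extents = 12.5 - 0.055 * (years - 1979) + noise

    slope, _ = np.polyfit(years, extents, 1)  # million sq km per year
    baseline = extents[years <= 2000].mean()  # 1979-2000 reference mean
    trend = 10.0 * slope / baseline * 100.0   # percent per decade
    print(f"trend: {trend:.1f}% per decade relative to the 1979-2000 mean")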
A report from the field
Dr. Chris Polashenski of the Cold Regions Research Lab (CRREL) recently returned from making sea ice measurements on landfast ice a few kilometers offshore near Barrow, Alaska as part of the National Science Foundation and NASA funded Seasonal Ice Zone Observing Network (SIZONET) project. He and his fellow researchers made some interesting observations. Prior to the onset of melt, the ice was thicker than observed in recent years – around 1.8 meters (5.9 feet) as compared to typical conditions of around 1.4 meters (4.6 feet). Despite this thick ice at the beginning of the season, melt proceeded relatively rapidly. Melt ponds began forming on June 4—a typical timing for recent years, but high temperatures, sunny afternoons, and foggy nights combined to speed the melt of ice thereafter.
On June 17-18, a confluence of weather conditions, including a daytime high of 19 degrees Celsius (66 degrees Fahrenheit), overnight condensing fog, and bright sun in the afternoon combined to produce exceptional surface melt of just under 11 centimeters (4.3 inches) in a 24-hour period, according to preliminary lidar data. By June 18, ice conditions had deteriorated significantly and with strong winds forecast out of the west, safety dictated it was time to get off the ice. Collisions of the pack with the weakened shore fast ice on June 21-23 resulted in substantial deformation and a series of ice pushes onto the beach, an amazing process to watch from the safety of land.
Such field observations may only be representative of the local area. However, they provide context for basin-wide observations and a better understanding of local processes.
Low June snow extent
Snow cover over Northern Hemisphere lands retreated rapidly in May and June, leaving the Arctic Ocean coastline nearly snow free. June 2012 set a record low for snow extent (for a 45-year period of record spanning 1967-2012) by a significant margin. Snow extent for June 2012 was more than 1 million square kilometers (386,000 square miles) below the previous record set in 2010. Snow extent for 2011 was a close third lowest. May 2012 had third lowest snow extent for the period of record. This rapid and early retreat of snow cover exposes large, darker underlying surfaces to the sun early in the season, fostering higher air temperatures and warmer soils.
A note on the daily sea ice data
NSIDC has published the underlying data used for the Daily Sea Ice Extent image and the Daily Sea Ice Extent 5-Month Time Series graph. Please see the links below for documentation for the Sea Ice Index and links to the data:
(PhysOrg.com) -- A team of scientists led by NASA space scientist James Mason has proposed using a mid-powered laser and telescope to nudge pieces of space junk out of the way and slow them down to avoid collisions.
Currently, low Earth orbit (LEO) is filled with over 9,700 pieces of debris and 1,500 old rocket bodies that are tracked by the U.S. military. When these pieces collide in space, still more debris is created. While many of these pieces are small, they travel at roughly 17,000 miles per hour, so they pose a serious threat to space travel and the launching of new satellites.
In 1978, a NASA scientist predicted what is now known as the "Kessler syndrome": the idea that increasing space debris leads to more collisions, which in turn generate more debris, a cascade that could eventually render space exploration and the use of satellites impossible.
Through the years, many proposals have been discussed to remove this space junk, such as rendezvousing with large objects and bringing them back to Earth. However, this approach is complex and comes with a high price tag.
Another study in 1996 suggested using powerful beams to destroy surface material on debris and propel it towards Earth. The concern with this idea is that other countries involved in space exploration could see this as a possible threat to their functional satellites.
Mason and his team at NASA Ames Center and Stanford University have discovered a possible method utilizing much less expensive lasers and providing only enough power to nudge the debris and not cause any damage.
Scientists believe that constantly focusing a laser beam of five to ten kilowatts on a piece of debris would exert enough push to change its orbit. Other countries' concerns about a threat would also be eased, as the beam would not be capable of creating a force strong enough to move large functional satellites.
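The "push" in question is radiation pressure. As a rough sketch (standard physics, assuming the beam is simply absorbed rather than reflected), a laser of power P exerts a force of

$$F \approx \frac{P}{c} = \frac{10^{4}\ \mathrm{W}}{3\times10^{8}\ \mathrm{m/s}} \approx 3\times10^{-5}\ \mathrm{N},$$

tens of micronewtons (up to twice that for a perfectly reflecting target): far too weak to harm a satellite, but enough, applied over many passes, to measurably shift a small object's orbit.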
While this would be done on a case by case basis, the question as to whether this would be able to provide a long term solution still needs to be answered. Scientists have said they need to conduct a population model on the debris system to determine if this could be enough of a solution to stop, or at least slow down, the Kessler syndrome.
Explore further: Communications satellite launched into space
More information: Orbital Debris-Debris Collision Avoidance, arXiv:1103.1690v1 [physics.space-ph] arxiv.org/abs/1103.1690
Nuclear reaction defies expectations
Dec 10, 2010
A novel kind of fission reaction observed at the CERN particle physics laboratory in Geneva has exposed serious weaknesses in our current understanding of the nucleus. The fission of mercury-180 was expected to be a "symmetric" reaction that would result in two equal fragments but instead produced two nuclei with quite different masses, an "asymmetric" reaction that poses a significant challenge to theorists.
Nuclear fission involves the splitting of a heavy nucleus into two lighter nuclei. According to the liquid-drop model, which describes the nucleus in terms of its macroscopic quantities of surface tension and electrostatic repulsion, fission should be symmetric. Some fission reactions are, however, asymmetric, including many of those of uranium and its neighbouring actinide elements. These instead can be understood by also using the shell model, in which unequal fragments can be preferentially created if one or both of these fragments contains a "magic" number of protons and/or neutrons. For example, one of the fragments produced in many of the fission reactions involving actinides is tin-132, which is a "doubly-magic" nucleus, containing 50 protons and 82 neutrons.
The latest work, carried out by a collaboration of physicists using CERN's ISOLDE radioactive beam facility, investigated the interplay between the macroscopic and microscopic components of nuclear fission. It used what is known as beta-delayed fission, a two-step process in which a parent nucleus beta decays and then the daughter nucleus undergoes fission if it is created in a highly excited state. This kind of reaction allows scientists to study fission reactions in relatively exotic nuclei and was first studied at the Flerov Laboratory in Dubna, Russia, about 20 years ago, although the Dubna measurements did not reveal the masses of the fragments produced.
Firing protons at uranium
The experiment at ISOLDE involved firing a proton beam at a uranium target and then using laser beams and a magnetic field to filter out ions of thallium-180 from among the wide variety of nuclei produced in the proton collisions. These ions then became implanted in a carbon foil, where they underwent beta decay and some of the resulting atoms of mercury-180 then fissioned. Silicon detectors placed in front of and behind the foil allowed the energies of the fission products to be measured.
The researchers were expecting the fission reaction to be symmetric, with the mercury-180 splitting into two nuclei of zirconium-90, a result thought to be particularly favoured because these nuclei would contain a magic number of neutrons (50) and a "semi-magic" number of protons (40). What they found, however, was quite different. The energy of the fission products recorded in the silicon detectors did not peak at one particular value, which would be the case if only one kind of nuclei was being produced in the reactions, but instead showed two distinct peaks centred around the nuclei ruthenium-100 and krypton-80.
Collaboration spokesperson Andrei Andreyev of the University of Leuven, Belgium (and currently at the University of the West of Scotland), says that this asymmetric fission was unexpected because the observed fragments do not contain any magic or semi-magic shells. His colleague, theorist Peter Möller of the Los Alamos National Laboratory in the US, had in fact devised a model of the nucleus that predicted that mercury-180 would undergo asymmetric fission. He was not, however, able to explain why: he had plotted a three-dimensional potential-energy surface for the fission of mercury-180 and identified a minimum in that surface, but he could not determine which of the three variables was responsible for that minimum.
'Beautiful experimental achievement'
Phil Walker of the University of Surrey in the UK, who is not a member of the collaboration, describes the research as a "beautiful experimental achievement" that has "an impressive theoretical outcome". He says that the result will be mainly of interest to academics but believes that it might just have practical implications. "Much of our energy generation depends on nuclear fission," he points out, "and if we want to make reactors safer and cheaper we need to be able to trust the basic theory of the fission process. I would say that the theory has been found to be sadly lacking, and it needs to be fixed."
Andreyev agrees. "I hope that as a result of our paper theorists will start to think about this problem and tell us what is happening," he says. "For the moment we don't know."
The research appears in Physical Review Letters.
About the author
Edwin Cartlidge is a science writer based in Rome
Novel observing mode on XMM-Newton opens new perspectives on galaxy clusters
31 May 2010
Surveying the sky, XMM-Newton has discovered two massive galaxy clusters, confirming a previous detection obtained through observations of the Sunyaev-Zel'dovich effect, the 'shadow' they cast on the Cosmic Microwave Background. The discovery, made possible thanks to a novel mosaic observing mode recently introduced on ESA's X-ray observatory, opens a new window to study the Universe's largest bound structures in a multi-wavelength approach.
Galaxy clusters are the largest gravitationally bound objects in the Universe. As such, they are extremely important probes of cosmic properties on very large scales, since they form in the densest knots of the large-scale structure, the cosmic web. Originally discovered as an excess density (or cluster) of galaxies located at the same redshift, hence the name, there is much more to these enormous structures than mere galaxies: in fact, only about one tenth of the entire mass of a galaxy cluster arises from galaxies (up to a thousand in the most massive cases), another tenth consists of hot gas, and the remainder can be attributed to dark matter.
The gas that fills galaxy clusters is hot enough to emit X-rays — with a temperature of more than 10 million Kelvin, the gas is ionised and electrons scattering off ions are decelerated, emitting radiation in the process. From measurements of the X-ray luminosity of galaxy clusters and of the gas temperature, the total mass of these structures can be estimated. This yields clear evidence that clusters are indeed gravitationally bound structures and that their mass is dominated by the elusive and invisible dark matter.
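One standard way to make this estimate (textbook hydrostatic equilibrium, not spelled out in the article) treats the hot gas as settled in the cluster's gravitational well, so that the mass enclosed within radius r follows from the gas temperature and density profiles:

$$M(<r) = -\frac{k_{B}\,T(r)\,r}{G\,\mu m_{p}}\left(\frac{d\ln\rho_{\mathrm{gas}}}{d\ln r} + \frac{d\ln T}{d\ln r}\right),$$

where μm_p is the mean particle mass of the plasma.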
The two massive galaxy clusters discovered with XMM-Newton. Left: SPT-CL J2332-5358; Right: SPT-CL J2342-5411.
"Interestingly, the same hot gas we directly observe in X-rays also affects the photons of the Cosmic Microwave Background (CMB), which are passing through the cluster on their way to us," says Hans Böhringer from the Max-Planck Institute for Extraterrestrial Physics. The CMB photons interact with the extremely energetic electrons in the cluster plasma and in doing so their energy is modified in a very characteristic way, leaving a signature on the CMB — the so-called Sunyaev-Zel'dovich Effect (SZE). "We can then see clusters as 'shadows' cast on the CMB in the millimetre subset of radio wavelengths," Böhringer adds.
A survey of the sky at millimetre wavelengths, currently being carried out with the South Pole Telescope (SPT), has recently achieved its first results, detecting a dozen of previously unknown galaxy clusters by means of their SZE signature. Follow-up observations in the optical and X-rays are, however, needed in order to better characterise the physical properties of these structures and to probe how the observed SZE signal depends on the mass of the clusters.
"Using XMM-Newton, we have independently detected two of the newly discovered clusters found by the SPT," says Róbert Suhada, who led the study. Using the X-ray data, the mass of both clusters could be estimated, leading to values of over 1015 solar masses and about 3x1014 solar masses, respectively. "One of the clusters is exceptionally massive, and it ranks among the hottest clusters ever observed," adds Suhada.
The discovery was possible thanks to a new mode of observations recently implemented by the XMM-Newton Science Operations Centre. "The new mosaic observing mode enables us to survey large areas of the sky in a much more efficient way than previously," explains Maria Santos-Lleo, XMM-Newton Science Support Manager.
ESA's X-ray observatory has been operating for more than ten years, but the demand for observing time is still high and is often driven by new science goals — some of them unexpected during the project phase, over a decade ago. In some cases, the scientific objectives require the observation of sky regions larger than the field of view of the cameras aboard the spacecraft. This pushes the support scientists to implement new operating modes that optimise the performance of the instruments. "It is difficult, and very rare, to develop new modes when the spacecraft is already in orbit and operating. In this particular case, we succeeded in figuring out a novel way to exploit the instruments in order to satisfy new needs of the astronomical community," adds Santos-Lleo. Thanks to the mosaic mode, it was possible to extend the observed patch of the sky to about 14 square degrees, about 70 times the area of the full Moon.
Mosaic mode XMM-Newton image of the entire XMM-BCS survey field.
Besides the SZE detection and X-ray data, optical observations of the galaxies in the two clusters enabled their redshifts to be established: z=0.3 (in the case of the more massive one) and z=1.0, respectively. This is the very first joint discovery of galaxy clusters in a sky survey combining data probing these three different wavebands.
"This survey not only shows that we can efficiently detect galaxy clusters in all these wavelengths, but also that the cluster redshifts reach easily as far as z=1, a necessary condition to follow structure evolution over an interesting cosmological time span," Hans Böhringer comments. The most distant of the two clusters is in fact seen as it was when the Universe was barely 6000 million years old, less than a half of its current age.
This result opens a new window to probe galaxy clusters to very high redshifts, which will be exploited by future missions examining different regions of the electromagnetic spectrum. One of the scientific goals of ESA's Planck mission, which is currently scanning the whole sky in microwaves, is to detect about 1000 galaxy clusters through their SZE signal imprinted on the CMB. The Euclid mission, a candidate Cosmic Vision M-class mission, is expected to detect a large number of clusters in optical and near-infrared wavelengths, thanks to its wide field of view, and to identify their redshifts. This first discovery is thus a preview of future galaxy cluster surveys and of the exciting scientific results they will bring, in the process expanding our knowledge about the evolution of cosmic structure.
Notes for editors
The two galaxy clusters detected in this study are SPT-CL J2332-5358 and SPT-CL J2342-5411, with photometric redshifts of z=0.32 and z=1.08, respectively. The photometric redshifts were obtained from optical data from the Blanco Cosmology Survey. The cluster temperatures are about T=9.3 keV (= 108 million Kelvin) and T=4.5 keV (= 52 million Kelvin), respectively. With a mass of over 10^15 solar masses, SPT-CL J2332-5358 is among the hottest and most massive clusters known; SPT-CL J2342-5411 is less massive (about 3×10^14 solar masses) but is among the most distant known clusters to have been detected both through X-ray emission and SZE.
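For reference, the keV-to-Kelvin equivalence quoted above is just the standard conversion T = E/k_B, with k_B the Boltzmann constant (textbook physics, not a detail of the paper):

$$T = \frac{E}{k_{B}} \approx 1.16\times10^{7}\ \mathrm{K\ per\ keV}, \qquad 9.3\ \mathrm{keV} \approx 1.08\times10^{8}\ \mathrm{K}.$$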
The new mosaic observing mode allows large areas of the sky, larger than the field of view of the cameras aboard the spacecraft, to be surveyed in a very efficient way. This is achieved by means of a single calibration measurement, which is performed at the beginning of the observing series and takes only up to one hour, and is then applied to all adjacent fields that are subsequently observed. This is clearly much cheaper, in terms of observing time, than the previous mode, in which the calibration measurements were performed for each field individually, consuming up to one hour for each estimate.
This new mode can be applied to observations of several astrophysical environments (for example galaxy clusters, supernova remnants, crowded fields) which require large areas of the sky to be surveyed with exposure times from a few minutes to a couple of hours.
The XMM-Newton spacecraft is controlled by the European Space Operations Centre (ESOC, Darmstadt, Germany) using ground stations at Perth (Australia) and Kourou (French Guiana). The XMM-Newton Science Operations Centre situated at ESAC in Villafranca, Spain, manages observation requests and receives XMM-Newton data. The XMM-Newton Survey Science Centre (SSC), at Leicester University, UK, processes and correlates all XMM-Newton observations with existing sky data held elsewhere in the world.
Suhada, R., et al., "XMM-Newton detection of two clusters of galaxies with strong SPT Sunyaev-Zel'dovich effect signatures", Astronomy & Astrophysics, Vol. 514, L3, 2010. DOI: 10.1051/0004-6361/201014434
The paper appeared in the May 2010 issue of Astronomy & Astrophysics.
Max-Planck Institute for Extraterrestrial Physics, Germany
Maria Santos-Lleo, XMM-Newton Science Support Manager
XMM-Newton Science Operations Centre
Directorate of Science and Robotic Exploration
European Space Agency
Norbert Schartel, ESA XMM-Newton Project Scientist
Directorate of Science and Robotic Exploration, European Space Agency
Last Update: 31 May 2010
Science Fair Project Encyclopedia
Alforsite is a mineral, Ba5Cl(PO4)3, composed of barium, phosphorus, chlorine, and oxygen. It was discovered in 1981, and named to honor geologist John T. Alfors of the California Division of Mines and Geology for his work in the area where it was discovered.
Alforsite is a hexagonal colorless crystal in the chemical class phosphates and the group apatite. It is found in certain parts of central California, primarily Fresno, Mariposa, and Tulare Counties. It has also been found in Baja California, Mexico.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
Science Fair Project Encyclopedia
Gilbert N. Lewis
His family moved to Lincoln, Nebraska, when he was 9; until then he had been homeschooled. He attended public school from age 9 to 14, then went to the University of Nebraska, and three years later transferred to Harvard University, where he showed an interest in economics but concentrated in chemistry, earning his B.A. in 1896 and his Ph.D. in 1899. His first published work, a study of the thermochemical and electrochemical properties of amalgams, was based on his doctoral research and was published in 1898.
After earning his Ph.D., he stayed on as an instructor for a year before taking a traveling fellowship, studying under the physical chemist Wilhelm Ostwald at Leipzig and Walther Nernst at Göttingen. He then returned to Harvard as an instructor for three more years, and in 1904 left to become superintendent of weights and measures for the Bureau of Science of the Philippine Islands in Manila. The next year he returned to Cambridge when the Massachusetts Institute of Technology (MIT) appointed him to a faculty position, where he had a chance to join a group of outstanding physical chemists under the direction of Arthur Amos Noyes. He quickly rose in rank, becoming assistant professor in 1907, associate professor in 1908, and full professor in 1911. He left MIT in 1912 to become professor of physical chemistry and dean of the College of Chemistry at the University of California, Berkeley.
In 1916, he formulated the idea that a covalent bond consists of a shared pair of electrons, and defined the term "odd molecule" for the case in which an electron is not shared. His ideas on chemical bonding were expanded upon by Irving Langmuir and became the inspiration for the studies on the nature of the chemical bond by Linus Pauling.
In 1923, he formulated the electron-pair theory of acid-base reactions. In the so-called Lewis theory of acids and bases, a "Lewis acid" is an electron-pair acceptor and a "Lewis base" is an electron-pair donor. For example, in the reaction BF3 + NH3 → F3B-NH3, boron trifluoride accepts the electron pair donated by ammonia, so BF3 is the Lewis acid and NH3 the Lewis base.
Students of chemistry learn about a notation system for the valence electrons which is known as the Lewis dot structure.
Based on work by J. Willard Gibbs, it was known that chemical reactions proceed to an equilibrium determined by the free energies of the substances taking part. Lewis spent 25 years determining the free energies of various substances. In 1923, he and Merle Randall published the results of this study, formalizing the field of chemical thermodynamics.
Lewis was the first to produce a pure sample of deuterium oxide (heavy water) in 1933. By accelerating deuterons (deuterium nuclei) in Ernest O. Lawrence's cyclotron, he was able to study many of the properties of atomic nuclei.
In the last years of his life, he established that phosphorescence of organic molecules involves an excited triplet state (a state in which electrons that would normally be paired with opposite spins are instead excited to have their spin vectors in the same direction) and measured the magnetic properties of this triplet state.
He died at age 70 of a heart attack while working in his laboratory in Berkeley.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
God particle signal is simulated as sound
Scientists have simulated the sounds set to be made by sub-atomic particles such as the Higgs boson when they are produced at the Large Hadron Collider.
Their aim is to develop a means for physicists at Cern to "listen to the data" and pick out the Higgs particle if and when they finally detect it.
Dr Lily Asquith modelled data from the giant Atlas experiment at the LHC.
She worked with sound engineers to convert data expected from collisions at the LHC into sounds.
"If the energy is close to you, you will hear a low pitch and if it's further away you hear a higher pitch," the particle physicist told BBC News.
"If it's lots of energy it will be louder and if it's just a bit of energy it will be quieter."
The £6bn LHC machine on the Swiss-French border is designed to shed light on fundamental questions in physics.
It is housed in a 27km-long circular tunnel, where thousands of magnets steer beams of proton particles around the vast "ring".
At allotted points around the tunnel, the beams cross paths, smashing together near four massive "experiments" that monitor these collisions for interesting events.
Scientists are hoping that new sub-atomic particles will emerge, revealing insights into the nature of the cosmos.
Atlas is one of the experiments at the LHC. An instrument inside Atlas called the calorimeter is used for measuring energy and is made up of seven concentric layers.
Each layer is represented by a note, and its pitch differs depending on the amount of energy that is deposited in that layer.
The process of converting scientific data into sounds is called sonification.
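As an illustration only (the layer energies, note choices, and mapping below are invented, not the Atlas team's actual scheme), a minimal sonification in this spirit could assign each calorimeter layer a note and scale its loudness by the energy deposited:

```python
import numpy as np

# Hypothetical energy deposits (arbitrary units) in the seven layers.
layer_energy = [12.0, 7.5, 3.1, 1.4, 0.6, 0.2, 0.1]
# One note per layer, spaced evenly across an octave above A3 (220 Hz).
layer_freq = [220.0 * 2 ** (i / 7) for i in range(7)]

rate, duration = 44100, 0.5
t = np.linspace(0.0, duration, int(rate * duration), endpoint=False)

# Each layer contributes a sine wave at its note; more energy, louder note.
peak = max(layer_energy)
wave = sum((e / peak) * np.sin(2 * np.pi * f * t)
           for e, f in zip(layer_energy, layer_freq))
wave /= np.abs(wave).max()  # normalize to [-1, 1] before writing to audio
```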
Dr Asquith and her team have so far generated a number of simulations based on predictions of what might happen during collisions inside the LHC.
The team is only now feeding in real results from real experiments.
"When you are hearing what the sonifications do you really are hearing the data. It's true to the data, and it's telling you something about the data that you couldn't know in any other way," said Archer Endrich, a software engineer working on the project.
The aim is to give physicists at the LHC another way to analyse their data. The sonification team believes that ears are better suited than eyes to pick out the subtle changes that might indicate the detection of a new particle.
But Richard Dobson, a composer involved with the project, says he is struck by how musical the products of the collisions sound.
"We can hear clear structures in the sound, almost as if they had been composed. They seem to tell a little story all to themselves. They're so dynamic and shifting all the time, it does sound like a lot of the music that you hear in contemporary composition," he explained.
Although the project's aim is to provide particle physicists with a new analysis tool, Archer Endrich believes that it may also enable us to eavesdrop on the harmonious background sound of the Universe.
He said he hoped the particle collisions at Cern would "reveal something new and something important about the nature of the Universe".
And Mr Endrich says that those who have been involved in the project have felt something akin to a religious experience while listening to the sounds.
"You feel closer to the mystery of Nature which I think a lot of scientists do when they get deep into these matters," he said.
"Its so intriguing and there's so much mystery and so much to learn. The deeper you go, the more of a pattern you find and it's fascinating and it's uplifting." | <urn:uuid:da51132f-3b0e-4617-b17a-41e16c30e02a> | 3.328125 | 755 | News Article | Science & Tech. | 53.517143 | 2,330 |
Earth’s rarest metals ranked in a new 'risk list'
The relative risks to the supply of some of Earth's rarest elements have been detailed in a new list published by the British Geological Survey (BGS).
So-called "technology metals" like indium and niobium are extracted from the Earth and are used in a wide range of modern digital devices and green technologies.
They are therefore increasingly in demand from global industries.
The list highlights 52 elements most at risk from "supply disruption".
Incorporating information about each metal's abundance in the Earth, the distribution of its deposits, and the political stability of the country in which it is found, the list ranks these highly desired elements on a relative scale.
Speaking at the British Science Festival in Bradford, Andrew Bloodworth from the BGS explained that "while we won't run out of these metals any time soon, the risks to supply are mostly human".
Geopolitics, resource nationalism, accidents, and the lengthy delay between the discovery of a resource and its efficient extraction are all factors that could threaten the supply of the metals on which our modern technology has come to rely.
This is an especially important factor, given the notable monopoly that certain countries have on supply.
For example, 97% of all rare earth elements (REEs), including neodymium and scandium, are produced in China.
Pace of demand
Antimony, the element most "at risk", is used extensively for fire-proofing, but is deposited by hot fluids inside the Earth's crust and extracted mostly in China.
In fact, China dominates global production of all the elements on the BGS list, being responsible for extraction of over 50% of them.
Mr Bloodworth said that he hoped this new list would help to inform policy makers of the need to diversify supply sources, as well as making manufacturers and the public aware of where these critical metals come from.
There are many more locations on Earth where these critical metals can be mined, including varied geological deposits from Southern Africa, Australia, Brazil, and the US. Professor Frances Wall of the Camborne School of Mines said that mining these alternative deposits would "take away the monopoly of current suppliers of these metals".
In the move towards a more low-carbon economy, digital and renewable energy technologies rely heavily on metals which, just 10 years ago, would have been of little interest to industry.
Today, these elements are ubiquitous, being used widely in smart mobile devices, flat screens, wind turbines, electric cars, rechargeable batteries and many other technologies.
Mobile phones exemplify the use of these technology metals, with lithium batteries, indium in the screen, and REEs in the circuitry.
With around a billion mobile phones being made every year, the "volume of technology metals required is astonishing and the pace of demand is not letting up", said Alan McLelland of the National Metals Technology Centre.
Recycling of the metals used in phones is currently too expensive and energy-intensive, but Mr McLelland hopes that the risks outlined in the BGS list will alert the manufacturers to the need to make the embedded metals more accessible for recycling.
As the supply and demand of the elements change, the BGS anticipates the list being updated annually.
We often see math applied to the real world through word problems, and the applications of linear equations are seen throughout all our math courses after Algebra. To understand applications of linear equations we need to have an understanding of slope, how to interpret a graph, and how to write an equation. In upper-level Algebra, we apply systems of linear equations to these problems as well.
I'm a math teacher, and one of the things students ask me all the time is: when are we going to use this in the real world? A lot of the time, you guys, you really do use math in the real world, and one of the places you're going to see those kinds of situations in your math class is graphs that describe linear equations or word problems.
When you're looking at a word problem or a graph that describes a word problem, there's a bunch of things to keep in mind. One thing to be really sure you're aware of is the scale on the X and the Y axis. By scale, I mean how much you're counting by: are you counting by 5s or by 500s? That's something that's really important in terms of the real-world context.
Along those same lines, pay attention to the units, meaning: are you looking at how much you're paying in dollars or pounds or yen or cents, or whatever it is? All those kinds of units are really important to keep in mind, especially when you get to the slope.
One of the most common graphs in word problems is the distance-time graph, because the slope is distance per time: it's the rate, how fast you're traveling. So a lot of times you're going to be asked to interpret the slope of a word-problem graph.
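For instance (a made-up illustration, not from the lesson): if a distance-time graph passes through the points (1 hour, 50 miles) and (3 hours, 150 miles), then the slope is

$$m = \frac{150\ \mathrm{miles} - 50\ \mathrm{miles}}{3\ \mathrm{hours} - 1\ \mathrm{hour}} = 50\ \mathrm{miles\ per\ hour},$$

so the units of the slope come straight from the units on the two axes.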
Another thing I want you guys to keep in mind is the intercepts and what the intercepts mean on a graph. Keep in mind that the X intercept means the Y quantity is equal to 0, and the Y intercept means the X quantity is equal to 0.
When you're looking at graphs, all of these things are really important to keep in mind, and it will help you connect this math to the real world.
In Calculus, a limit is used to describe the value that a function approaches as the input of that function approaches a certain value. There are two ways to determine a Calculus limit: a numerical approach or a graphical approach. In the graphical approach, we analyze the graph of the function to determine the points that each of the one-sided limits approach.
I want to talk about one-sided limits. Here's a function g of x, and it's piecewise defined: x+8 for x less than -4, and x squared -1 for x greater than or equal to -4.
Describe its behavior as x approaches -4. Well, -4 is where the two pieces are joined together, and so the function may behave differently depending on what side of -4 we're on. So let's try approaching -4 from the left. -5 is to the left of -4 on the number line, so we're starting from the left and moving to the right: -5, -4.1, -4.01. Look what happens to the values we get: 3, 3.9, 3.99. These values are getting closer and closer to 4.
When x is less than -4, we're using this piece of the function, so we're getting closer and closer to the value 4. Now if we start from the right: -3 is to the right of -4, and as I go to the left I get -3, -3.9, -3.99. The values I get are 8, 14.2, 14.9, and you can see that these values seem to be getting closer and closer to 15.
Okay, so we're approaching 4 from the left and approaching 15 from the right. So here's what we say: we say the limit as x approaches -4 from the left of g of x is 4. This is the left-hand limit, one of the one-sided limits for g of x at -4. And then we say the limit as x approaches -4 from the right of g of x is 15. This is the right-hand limit, and the little superscript tells you which one it is, the left-hand or the right-hand limit.
The superscript negative means that you're approaching -4 from the left, from the more negative direction, and the superscript plus means you're approaching -4 from the right, from the more positive direction. Now notice these two limits are not equal. Whenever the two one-sided limits are not equal, the two-sided limit (as x approaches -4 in this case) does not exist; very important. So in order for a two-sided limit like this to exist, you need both of the one-sided limits to exist and to be equal. And that's what this theorem states right here: the limit as x approaches a of f of x equals L if and only if the two one-sided limits, the limit as x approaches a from the left and the limit as x approaches a from the right of f of x, equal L, the same number.
Only if those two one-sided limits have the same value will the two-sided limit exist.
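In symbols, the worked example above reads:

$$g(x) = \begin{cases} x+8, & x < -4 \\ x^{2}-1, & x \ge -4 \end{cases} \qquad \lim_{x \to -4^{-}} g(x) = -4+8 = 4, \qquad \lim_{x \to -4^{+}} g(x) = (-4)^{2}-1 = 15.$$

Since the one-sided limits 4 and 15 disagree, the two-sided limit of g(x) as x approaches -4 does not exist.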
What makes a supernova? Scientists thought they knew.
New X-ray research questions the origins of a key type of supernova, and may bend one of astronomy's go-to rulers.
One of astronomy's most heavily used tools for measuring distances may be less reliable than researchers have assumed, according to a pair of astrophysicists.
Their conclusion, which will be published in Thursday’s issue of the journal Nature, doesn't appear to cast doubt on decades of discoveries about the structure and evolution of the universe that relied on the tool – a form of exploding star called a Type 1A supernova.
These events "are the most important explosions in cosmology," says Marat Gilfanov, one of the two researchers involved in the Nature study.
In principle the new result "muddies the waters" for such stunning discoveries as the existence of dark energy, says astrophysicist Mario Livio, with the Space Telescope Science Institute in Baltimore.
At the least, the result – if it holds up – represents "an embarrassment for all astrophysicists," he says. "For decades we've been studying these types of explosions, and we still don't know" which of two broad mechanisms are involved in triggering them. That implies that there may be differences in the amount of light they produce, even though they've been lumped into the same class of supernovae, he said during a press briefing Wednesday.
Supernovae briefly outshine the galaxies that contain them. Using observations of nearby supernovae and the types of stars that exploded, astronomers concluded that Type 1A supernovae tended to reach the same intrinsic peak brightness. And they had a telltale signature – their light faded over time in a characteristic pattern different from other types of supernovae.
Because light dims at a predictable rate with distance, researchers use the light from Type 1A supernovae as a kind of standard candle that allows them to calculate the distances to galaxies in which the explosions occur.
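The relation behind the standard-candle method is the inverse-square law (general physics, not specific to this study): a source of intrinsic luminosity L observed with flux F lies at distance

$$d = \sqrt{\frac{L}{4\pi F}},$$

so if every Type 1A supernova peaks at nearly the same L, the measured flux alone fixes the distance to its host galaxy.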
But over time, some researchers suggested that the process for generating a Type 1A supernova could vary.
Type 1As are thought to occur in a binary star system in which one member is a white dwarf – an end-of-life phase for a star like the sun – and the other a star still in its prime. Gravity from the white dwarf, which packs our sun's mass into an Earth-sized volume, draws matter from its companion. When the white dwarf's mass exceeds 1.4 times the sun's mass, the dwarf explodes as a supernova.
Yet researchers also have suggested that a merger of two white dwarfs could trigger a Type 1A supernova. One way to tell the difference: Look at their X-ray emissions; the dwarf-on-dwarf merger should display weaker X-ray emissions than an explosion resulting from accretion, when a white dwarf pirates material from a normal companion.
Dr. Gilfanov and Akos Bogdan, with the Max Planck Institute for Astrophysics in Garching, Germany, looked at the X-ray, visible, and infrared emissions from six elliptical galaxies relatively close to the Milky Way. The X-ray data came from NASA's Chandra X-Ray Observatory.
Based on estimates of the number of stellar explosions one would typically expect to find in a galaxy, the X-ray emissions were 30 to 50 times weaker in the galaxies they studied than one would expect if the Type 1A supernovae were triggered by accretion, rather than by mergers.
Commenting on the work, Dr. Livio cautions that "the conclusion is weakened by the fact we're talking about a nondetection," likening it to the Sherlock Holmes tale in which the sleuth cracked the case based in large part on a dog that didn't bark.
More work needs to be done, the researchers acknowledge, to see if the same relative dearth of X-rays is present in spiral galaxies, which have far higher rates of star formation – and explosive star deaths – than ellipticals.
However, Livio adds, these results and recent observational evidence that Type 1A supernovae display slightly different light output, depending on the type of galaxy they inhabit, "emphasize the need to really finally try to understand what the progenitor [star] systems really are."
Securing Virtual Private Networks (VPN), Page 2
Asymmetric encryption, or public key encryption, depends on a pair of keys called the public key and the private key; hence the name. The keys are selected such that if data is encrypted with key 1, it can only be decrypted with key 2, and vice versa. Of the two keys, we share one with everybody and call it the public key. The other is kept private for decrypting and is called the private key. For example, our e-mail account has a public e-mail address that we give to anyone we want to, but we won't tell anyone the password.
Suppose a person named Linda is a broker, and she gets a request mail from James Anderson asking her to buy some stock shares for his company. She makes all the arrangements and sends a confirmation mail to James. Finally, she sends him a bill for payment; at this point, James flatly denies ever having sent Linda a mail about any stock shares. Now what should Linda do? She is in real trouble, because there is no evidence to prove that James was the actual e-mailer.
The solution is provided by the use of public key encryption. If James encrypts (signs) his order with his own private key, which only he holds, then anyone, including Linda, can decrypt (verify) it with James's public key. Because only James's private key could have produced a message that his public key successfully decrypts, the order must have come from James, and he is caught. This is source authentication.
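A minimal sketch of this sign-and-verify flow, written with the third-party Python cryptography package (the message text is invented for illustration; the article itself does not prescribe a particular algorithm):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# James's key pair: the private key stays with him, the public key is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

order = b"Buy 500 shares for Anderson & Co."

# James signs the order with his private key.
signature = private_key.sign(
    order,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Linda verifies with James's public key; verify() raises
# cryptography.exceptions.InvalidSignature if the order or signature
# was forged or altered, so James cannot deny having sent it.
public_key.verify(
    signature,
    order,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```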
If we use a hashing scheme, such as MD5, on our data and generate a hash value for it at the source computer and send it along with the data to the target, the destination computer will also compute its own hash code for the received data. If the hash generated by the destination is the same as the one received from the source, our data integrity is preserved; in other words, the data has reached its destination without any change or loss. When such a hash code is encrypted with the sender's private key and sent along with e-mail data, it is called a digital signature.
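That integrity check is a few lines in Python's standard library. MD5 is shown because the article discusses it, though it is considered cryptographically broken today and SHA-256 is the usual drop-in replacement:

```python
import hashlib

data = b"transfer 500 shares to account 12345"

# Source computer: hash the data and send both along.
digest_at_source = hashlib.md5(data).hexdigest()

# Destination computer: recompute the hash over what was received.
digest_at_destination = hashlib.md5(data).hexdigest()

# Matching digests mean the data arrived without change or loss.
assert digest_at_destination == digest_at_source
```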
IPSec provides the following security services:
- Data integrity
- Data origin authentication
- Replay prevention
- Limited traffic flow confidentiality
Replay prevention guards against an attacker who captures your legitimate traffic and resends it later. Even without breaking the encryption, someone who recorded the packets of an important business transaction could replay them and have them accepted as genuine, in effect repeating your deals and transacting on your account under your name. IPSec defeats this by numbering packets and rejecting duplicates.
IKE is the mechanism in IPSec by which keys are exchanged. It is a hybrid protocol that implements the Oakley and Skeme key exchanges inside the ISAKMP framework. While IKE can be used with other protocols, its initial implementation is with the IPSec protocol. IKE provides authentication of the IPSec peers, negotiates IPSec keys, and negotiates IPSec security associations. The main features of IKE are as follows:
- Negotiates policy to protect communication
- Authenticated Diffie-Hellman key exchange
- Negotiates (possibly multiple) security associations (SA) for IPSec.
Diffie-Hellman is a public-key cryptography protocol that allows two parties to establish a shared secret over an unsecured communication channel. Diffie-Hellman is used within IKE to establish session keys. 768-bit and 1024-bit Diffie-Hellman groups are supported.
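A toy walk-through of the exchange with the classic textbook parameters p = 23 and g = 5 (illustration only; real IKE groups use the 768-bit and 1024-bit primes mentioned above):

```python
import secrets

p, g = 23, 5                      # tiny public parameters, for illustration only

a = secrets.randbelow(p - 2) + 1  # Alice's secret value
b = secrets.randbelow(p - 2) + 1  # Bob's secret value

A = pow(g, a, p)                  # Alice sends A over the insecure channel
B = pow(g, b, p)                  # Bob sends B over the insecure channel

# Each side raises the other's public value to its own secret exponent.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob  # same shared secret, never transmitted
```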
A Security Association (SA) bundles the agreed-upon parameters for VPN communication; negotiating it is done by IKE. The secret key exchange is the central step that secures the data to be delivered.
"isakmp + oakley" is the IKE policy that we define to start the encryption process. The Internet Security Association and Key Management Protocol (ISAKMP) is a protocol framework that defines payload formats, the mechanics of implementing a key exchange protocol, and the negotiation of a security association. Oakley is a key exchange protocol that defines how to derive authenticated keying material. Skeme is a key exchange protocol that defines how to derive authenticated keying material, with rapid key refreshment.
MD5 (Message Digest 5) is a hash algorithm used to authenticate packet data. HMAC is a variant that provides an additional level of hashing by mixing a secret key into the hash. The Data Encryption Standard (DES) is used to encrypt packet data; IKE implements the 56-bit DES-CBC with Explicit IV standard. The Authentication Header (AH) is used for data integrity and source authentication, whereas the Encapsulating Security Payload (ESP) is used for confidentiality.
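A sketch of the keyed-hash idea behind HMAC, again with Python's standard library (the key and payload are invented for the example):

```python
import hashlib
import hmac

key = b"secret-session-key"   # shared by the two IPSec peers
packet = b"ESP payload bytes"

# HMAC mixes the secret key into the hash, so an attacker who alters
# the packet cannot recompute a valid tag without knowing the key.
tag = hmac.new(key, packet, hashlib.md5).digest()

received_tag = hmac.new(key, packet, hashlib.md5).digest()
assert hmac.compare_digest(tag, received_tag)
```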
Search for the Brush Tailed Phascogale
A group of wildlife enthusiasts recently spent several wet and cold days searching the Wombat Forest for the Brush-tailed Phascogale and their hard work was rewarded when one of the elusive creatures was found.
Biodiversity staff from the Department of Environment and Primary Industries (DEPI) and environmental students from the University of Ballarat joined forces with local volunteers to look for the shy, nocturnal creature in the hope of finding that the small population is stable, healthy and not in decline.
DEPI Senior Biodiversity Officer Andy Arnold, who has been monitoring phascogales for many years, coordinated the five-day survey in Hepburn Regional Park.
The last few days we've been setting up a survey grid to study some trends in Brush-tailed Phascogales, which occur in this area. We've been looking at a range of vegetation types and different habitat areas within a broad area bounded by part of the northern Wombat Forest, part of the Yandoit area, and the Hepburn Regional Park.
Fragmentation of habitat is a major threat. What has happened over the years since settlement in Victoria is that some of the forest habitat areas have become separated, whereas previously they would have been linked together. There are still linkages there, through areas like road reserves and natural patches of bush on both public and private land, but gradually some of those linkages have disappeared or declined. That is a major concern. One of the reasons we are doing the genetic study is to see to what extent fragmentation might be occurring in a genetic sense.
When we get Brush-tailed Phascogales in our Elliott traps, we carefully remove them from the traps (which we only do after establishing a suitable release site), weigh them, check their sex, and then take a DNA sample from the animals. This assists us in the long term to look at what we call the population management units which exist across the state. So all of the procedures we go through are designed to provide useful information in terms of longer-term management.
I have to admit that I'm enamoured by Brush-tailed Phascogales. I've always liked them and they've always appealed to me. I think that they're an under-recognised and under-valued part of our fauna. For that reason I'd like to see much more known about them, and many more people becoming aware of them, their uniqueness and their beauty. They are a unique animal.
Researchers explain Indo-Pacific climate change
A breakthrough in Indo-Pacific climate predictions is now reported in a study spearheaded by Hiroki Tokinaga, associate researcher at the International Pacific Research Center at the University of Hawaiʻi at Mānoa, and published in the November 15, 2012 issue of Nature.
Tokinaga’s study covers the tropical Indo-Pacific climate’s global impact as seen in the floods and droughts spawned by the El Niño-Southern Oscillation. Meteorological observations over the last 60 years show this atmospheric circulation has slowed—the trade winds have weakened and rainfall has shifted eastward toward the central Pacific.
Tokinaga, along with other meteorologists, has been frustrated by this anomaly for years: they could not reproduce it in their atmospheric models, calling into question the ability of climate models to simulate gradual climate change.
At the root of the models’ failure, Tokinaga proposed, was the lack of precise sea surface temperature data used to drive the models. Slight differences in this temperature across the tropical Indo-Pacific Ocean can greatly affect wind and rainfall.
Tokinaga, an expert in working with old, archived data sets and in correcting their biases, studied temperature data taken by ships and found two measures that have been consistent throughout the record: the bucket technique, in which the temperature is taken of sea water scooped up in a bucket lowered from a ship, and night-time marine air temperature.
“Removing observational biases from the measurements was still challenging, but we saw that these quite different ways of measuring sea surface temperature turned out to agree very well over the 60-year span from 1950–2009, and were supported by subsurface ocean temperature observations,” explained Tokinaga. “To our surprise both measures showed that the surface temperature across the Indo-Pacific did not rise evenly with global warming, but that the east-west temperature contrast has actually decreased by 0.3-0.4°C, similar to what happens during an El Niño.”
Using this unbiased, reconstructed surface temperature data set in four widely used atmospheric models, the scientists were able to reproduce quite closely the observed patterns of climate change seen over the 60-year period in the tropical Indo-Pacific and the slowdown of the Walker circulation.
“The Walker circulation affects tropical convection, and the global impacts of a temporary slowdown during an El Niño are well known, resulting in extreme floods or droughts in North America and other regions of the world. How the gradual slowdown observed in this study impacts global climate still needs to be investigated.”
JPL News wrote: “Galileo makes two daring passes less than 620 km above Io on October 11 and November 25, 1999. In November Galileo might even pass through the plume of Pillan Patera, making it the first spacecraft ever to fly through an alien volcano.”
NASA scientists are upholding a long tradition of misinterpreting observations from their space probes. This time they are jeopardising one of their most successful missions. Long ago in 1979, when the so-called volcanoes of Io were first discovered, Professor Thomas Gold of Cornell University wrote that they are actually the site of powerful electric discharges. NASA geologists paid no attention.
Jupiter is still capable of hurling a few thunderbolts!
“The biggest mystery about Io’s volcanoes is why they’re so hot,” says Bill Smythe, a co-investigator on JPL’s NIMS team. “At 1800 K, the vents are about 1/3 the temperature of the surface of the sun!“
The temperature measured by Galileo is an average based on the sharpest resolution of its instruments. If scientists are having difficulty explaining 1800 K, they are in for a shock when they get closer…
I predict that when seen close up the temperature of those hot spots will approach that of the Sun as they are both electric arcs. (Electric arcs create intensely hot spots.)
The plan to fly the Galileo spacecraft through the plume of an Io volcano in November is therefore as foolhardy as flying a kite in an electrical storm. It is to be hoped that NASA will recognise the dangers in time to change their plan for November. That is, if Galileo survives the October flyby.
“Another thing we’ll be going for with these close-up flybys are high resolution pictures of the lava flows”, continued Smythe. “We really want to know what the shapes and edges of the flows look like because that can tell us a lot about the properties of the lava. On Earth lava flows form little side lobes, or extrusions that look like arms, feet and toes.”
On the contrary, most of the dark patterns seen radiating from the crater in this image of the Marduk “volcano” are not lava flows. They have the shape of lightning scars on Earth and are caused by powerful currents streaking across the surface to satisfy the arc’s hunger for electric charge. They rip huge sinuous furrows in the soil and hurl it to either side to form levee banks and side lobes. The stubby side channels will be found to have rounded ends like those seen on Martian “rivers”.
Credit: Closeup of an Io Volcano – NASA, Voyager Project, Copyright Calvin J. Hamilton
The National Weather Service map for Nov. 2, 2012 showed two areas of low pressure over eastern Canada, near Quebec.
That's where the remnants of Sandy are located, and the storm's massive cloud cover continues to linger over a large area; one of the two low pressure areas is associated with Sandy's remnants.
A visible image from NOAA's GOES-13 satellite at 1:31 p.m. EDT on Nov. 2, 2012 showed the remnant clouds from Sandy still linger over the Great Lakes east to New England.
In Canada, Sandy's clouds stretch from Newfoundland and Labrador west over Quebec, Ottawa and Toronto. The GOES image was created by NASA's GOES Project at the NASA Goddard Space Flight Center, Greenbelt, Md.
By Monday, Nov. 6, the National Weather Service map projects that the low pressure area associated with Sandy's remnants will be offshore.
Rob Gutro | Source: EurekAlert!
Further information: www.nasa.gov
Discussion about math, puzzles, games and fun. Useful symbols: ÷ × ½ √ ∞ ≠ ≤ ≥ ≈ ⇒ ± ∈ Δ θ ∴ ∑ ∫ • π ƒ -¹ ² ³ °
4 digit no.
Two 4-digit numbers are to be formed such that the sum of the two numbers is also a 4-digit number and in no column does the addition involve a carry. How many ways are there of forming the numbers under the above conditions?
Re: 4 digit no.
I am taking it that you mean the sum of the two 4 digit numbers must be a 4 digit number and in no column can there be a carry.
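Under that reading the columns can be counted independently, since no carry ever links one column to the next. A quick sketch of the count (assuming ordered pairs, and that both numbers and their sum need a nonzero leading digit):

```python
# Columns are independent when no carrying is allowed: each pair of
# digits (a, b) in a column just needs a + b <= 9.

# Ones, tens, hundreds columns: digits may be 0-9.
inner = sum(1 for a in range(10) for b in range(10) if a + b <= 9)          # 55

# Thousands column: both leading digits must be nonzero (4-digit numbers),
# and a + b <= 9 keeps the sum a 4-digit number with no carry.
leading = sum(1 for a in range(1, 10) for b in range(1, 10) if a + b <= 9)  # 36

print(leading * inner ** 3)  # 36 * 55**3 = 5989500 ordered pairs
```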
In mathematics, you don't understand things. You just get used to them.
Probability is the most important concept in modern science, especially as nobody has the slightest notion what it means.
90% of mathematicians do not understand 90% of currently published mathematics.
Query: keyword: "Verspreiding"
|Authors||R. Felix, H. van Kleef|
|Title||Boomkrekels Oecanthus pellucens bij Lobith het land binnen (Orthoptera: Gryllidae)|
|Journal||Nederlandse Faunistische Mededelingen|
|Keywords||Boomkrekel; Nederland; Verspreiding; Herkenning|
|Abstract||Oecanthus pellucens entering the Netherlands at Lobith (Orthoptera: Gryllidae)|
On August 8, 2004 a male Oecanthus pellucens was heard along the river Waal, just east of the city of Nijmegen (the Netherlands). Additional searches revealed that the species is distributed from the German border towards Nijmegen and Arnhem along the rivers Waal and Rhine respectively. These observations confirm that O. pellucens has successfully colonised the Netherlands in a natural way and has probably been present for several years. Previous records of O. pellucens in the Netherlands were attributed to transport by traffic from southern Europe. All specimens were found along the shore of the river, indicating that dispersal is accomplished by water transport during winter and spring flooding. During this period the eggs of the species are in diapause inside plants. Since O. pellucens is very thermophilous it remains to be seen if it can survive the next cold and wet summer.
|Download paper|| http://www.repository.naturalis.nl/document/94018 |
Sep. 21, 2011
Genes essential to producing the developmental differences displayed by social insects evolve more rapidly than genes governing other aspects of organismal function, a new study has found.
All species of life are able to develop in different ways by varying the genes they express, ultimately becoming different shapes, sizes, colors and sexes. This plasticity permits organisms to operate successfully in their environments. A new study of the genomes of social insects provides insight into the evolution of the genes involved in this developmental plasticity.
The study, which was conducted by researchers at the Georgia Institute of Technology and the University of Lausanne in Switzerland, showed that genes involved in creating different sexes, life stages and castes of fire ants and honeybees evolved more rapidly than genes not involved in these developmental processes. The researchers also found that these fast-evolving genes exhibited elevated rates of evolution even before they were recruited to produce diverse forms of an organism.
"This was a totally unexpected finding because most theory suggested that genes involved in producing diverse forms of an organism would evolve rapidly specifically because they generated developmental differences," said Michael Goodisman, an associate professor in the School of Biology at Georgia Tech. "Instead, this study suggests that fast-evolving genes are actually predisposed to generating new developmental forms."
The results of the study were published in the Sept. 20, 2011 issue of the journal Proceedings of the National Academy of Sciences.
The project was an international collaboration between Goodisman, associate professor Soojin Yi and postdoctoral fellow Brendan Hunt from the Georgia Tech School of Biology, and professor Laurent Keller, research scientist DeWayne Shoemaker, and postdoctoral fellows Lino Ometto and Yannick Wurm from the Department of Ecology and Evolution at the University of Lausanne.
Social insects exhibit a sophisticated social structure in which queens reproduce and workers engage in tasks related to brood-rearing and colony defense. By investigating the evolution of genes associated with castes, sexes and developmental stages of the invasive fire ant Solenopsis invicta, the researchers explored how social insects produce such a diversity of form and function from genetically similar individuals.
"Social insects provided the perfect test subjects because they can develop into such dramatically different forms," said Goodisman.
Microarray analyses revealed that many fire ant genes were regulated differently depending on whether the fire ant was male or female, queen or worker, and pupal or adult. These differentially expressed genes exhibited elevated rates of evolution, as predicted. In addition, genes that were differentially expressed in multiple contexts -- castes, sexes or developmental stages -- tended to evolve more rapidly than genes that were differentially expressed in only a single context.
To examine when the genes with elevated rates of evolution began to evolve rapidly, the researchers compared the rate of evolution of genes associated with the production of castes in the fire ant with the same genes in a wasp that does not have a caste system. They found that the genes were rapidly evolving in the genomes of both species, even though only one produced a caste system. These results were also replicated for the honeybee Apis mellifera.
"This is one of the most comprehensive studies of the evolution of genes involved in producing developmental differences," Goodisman noted.
This study helps explain the fundamental evolutionary processes that allow organisms to develop different adaptive forms. Future research will include determining what these fast-evolving genes do and how they're involved in the production of different sexes, life stages and castes, said Goodisman.
This project is supported by the National Science Foundation (NSF) (Award No. DEB-0640690).
- B. G. Hunt, L. Ometto, Y. Wurm, D. Shoemaker, S. V. Yi, L. Keller, M. A. D. Goodisman. Relaxed selection is a precursor to the evolution of phenotypic plasticity. Proceedings of the National Academy of Sciences, 2011; DOI: 10.1073/pnas.1104825108
by Staff Writers
Washington DC (SPX) Nov 20, 2012
A new approach to invisibility cloaking may one day be used at sea to shield floating objects - such as oil rigs and ships - from rough waves. Unlike most other cloaking techniques that rely on transformation optics, this one is based on the influence of the ocean floor's topography on the various "layers" of ocean water.
At the American Physical Society's (APS) Division of Fluid Dynamics (DFD) meeting, Reza Alam, assistant professor of mechanical engineering at the University of California, Berkeley, will describe how the variation of density in ocean water can be used to cloak floating objects against incident surface waves.
"The density of water in an ocean or sea typically isn't constant, mainly because of variations in temperature and salinity," explains Alam.
"Solar radiation heats the upper layer of the water, and the flow of rivers and the melting of ice lowers the water density near the surface. Over time, these effects add up to form a stable density stratification of two layers - with the lighter fluid layer on top and the more dense fluid layer below it."
Stratified waters, much like regular surface waves, contain "internal waves," which are gravity waves that propagate between the two layers of water. For the same frequency of oscillation, however, internal waves travel at a much shorter wavelength and slower speed than surface waves.
Both wave types "feel" the ocean floor's influence, which generates an energy transfer.
Zeroing in on this energy transfer, Alam used computer simulations to transform a surface wave into an internal wave as it approaches an object - meaning that the wave will pass beneath the object rather than crashing into it. And once the internal wave moves beyond the object, it can be transformed back into a surface wave.
This would be achieved by creating "corrugations" or wavy ripples that are tuned to a specific wavelength on the ocean floor in front of the floating object to be cloaked.
"Cloaking in seas by modifying the floor may play a role in protecting near-shore or offshore structures and in creating shelter for fishermen during storms," says Alam.
"In reverse, it can cause the disappearance and reappearance of surface waves in areas where sandbars or any other appreciable bottom variations exist."
American Institute of Physics
Canadian researchers have developed a liquid mirror that could operate in a future telescope located on the moon, allowing researchers to peer back into the origins of the universe with extraordinary clarity. Telescopes relying on liquid mirrors can be hundreds of times more powerful than those with glass mirrors, for the same cost, and they should be easier to assemble in space.
A liquid-mirror telescope could reveal much fainter objects than the Hubble Telescope can, says Ermanno F. Borra, a physics professor at the Université Laval, in Quebec, who is leading the development of the new mirror. The power of a telescope is proportional to the surface area of its mirror. The James Webb telescope, which is scheduled to launch in 2013 and is far more powerful than the Hubble, has a diameter of about six meters. (See “Giant Mirror for a New Space Telescope.”) A lunar liquid-mirror telescope could be as large as 20 to 100 meters, says Borra.
The liquid mirror, which was funded by NASA, consists of a pool of an ionic liquid coated with a film of silver. Such ionic liquids are carbon-containing salts that freeze only at very low temperatures and have very high viscosity. The salt used in the Laval mirror is liquid down to -150 ºC and does not evaporate below room temperature, even in a vacuum–suggesting that it could withstand the harsh environment of the moon.
There are two limitations on cosmologists’ observations of the early universe: “The objects you want to observe are incredibly distant and incredibly faint,” says Borra. Telescopes in orbit like the Hubble, whose views are unobstructed by Earth’s atmosphere, are limited in size and power; telescopes on Earth can be larger and more powerful but produce fuzzier images because of the atmosphere. Liquid mirrors couldn’t go into orbit, but they could operate on the moon, which has no atmosphere.
Large, powerful liquid-mirror telescopes should be less complicated to take into space than their glass counterparts. “To put a glass mirror into a rocket, you have to break it into segments and then reassemble them,” says Borra. “You can carry a liquid mirror in a jug.” But none developed so far have been space worthy. The University of British Columbia’s Large Zenith Telescope uses a liquid mirror made of mercury to observe the early universe. Mercury solidifies at -40 ºC–much warmer than the temperature on the moon.
Borra searched for a better liquid to make telescope mirrors and found that ionic liquids seemed promising. Unlike mercury, however, these molten salts are not reflective, and they require a metal coating to function as a mirror. “Depositing a layer of silver on liquid is like painting on air,” says Borra. Laval graduate student Omar Seddiki adapted the technique used to coat glass mirrors: in a vacuum chamber, Borra and Seddiki run an electrical current between pieces of silver, which vaporize and form a thin coating over the liquid salt. The Laval researchers have so far made a small mirror, about two inches in diameter, to demonstrate the technology. | <urn:uuid:177b257a-b2b5-4abe-bfad-c2b18da8fce2> | 4.125 | 666 | News Article | Science & Tech. | 37.917527 | 2,344 |
Bipedalism didn’t evolve as a way for ancient humans to keep cool during the heat of the day, according to a new model published today (December 12) in Proceedings of the National Academy of Sciences. But once hominins did start walking on two feet, it ignited another change that allowed them to stay cool—the loss of body hair. The new model explains why similarly sized mammals that walk on all fours and that may tend to overheat have not given up their coats.
“If you are already walking upright for other reasons it actually makes the advantage you get from losing hair bigger than if you were on four legs,” said David Wilkinson of John Moores University in Liverpool, who authored the study along with Graeme Ruxton of the University of Glasgow. “You are moving more of your body up above the ground and sweat evaporates more easily” than it can if you were on all fours, because more air will circulate around you, Wilkinson explained.
Wilkinson and Ruxton came to this conclusion after analyzing a mathematical model of body temperatures during activity at different times of the day for quadrupeds and bipeds with and without fur. The model is an update to a previous theory by Peter Wheeler also of John Moores University, who proposed that both hair loss and bipedalism were driven by our need to cool down. His theory was that switching from four to two feet would reduce the amount of an animal’s body in direct sun and thus increase its ability to stay cool.
But Wheeler left out a critical factor, Wilkinson said—animal movement. Stationary animals could just hang out in the shade during the peak of the day to avoid overheating, he noted, while activities such as foraging likely forced early humans into direct sunlight more often.
Taking movement into account, Wilkinson and Ruxton’s model predicted that modern human ancestors would generate much more body heat metabolically as they traveled and hunted than the sun could cause, suggesting that standing upright to avoid the sun, as Wheeler’s model proposed, would have done little to fight overheating.
“In Peter’s models, he had a nice thermal advantage to standing upright,” said Wilkinson, “but now that vanishes in our version of the model.”
The new model further showed that four-legged creatures do not shed body heat as quickly when they lose their fur, suggesting that the loss of body hair would only have been a significant advantage to ancient humans if they were already walking on two feet. Thus, Wilkinson and Ruxton argue that bipedalism arose first—for some reason other than heat loss, such as improved observation of dangers, appearing larger to predators, or freeing the hands for tool use and carrying—then hair loss began, as a way to combat overheating.
The addition of animal movement to the model was key, said Sarah Elton of the Hull York Medical School, UK, who was not involved in the study. “In any environment you move between parts that are shaded and parts that are in open sun…. sometimes you are sheltered from the wind or not.”
But while Elton is generally in praise of the model, she pointed out that, “at the end of the day, it is just a model and models stand and fall on the type of evidence and also on the sensitivity of the model itself,” or the degree to which it is affected by variations in the assumed parameters, such as the climate, early humans’ movements, availability of shade, and so on. “There are other ideas” about why humans may have dropped their body fur, such as selective pressures imposed by the opposite sex, like a preference for hair-free mates.
Markus Rantala of the University of Turku, Finland, who was not convinced by Ruxton and Wilkinson’s model, offered another theory. “My personal opinion is that only selection caused by ectoparasites is able to explain the origin of human nakedness in a satisfactory way,” he said in an email. As humans began to live in fixed home bases and in close quarters, many parasites such as lice and fleas would have flourished. “As the ectoparasite burden on hominids increased, having fewer parasites may have become more important for survival than a warm fur coat,” Rantala said. Less body hair makes ectoparasites easier to spot and pick off.
Rantala also asks why, if the Ruxton-Wilkinson model is correct, did our ancestors not regain hair when they migrated to northern latitudes with cooler climates, about two million years ago. “Our skin color changed but we did not regain the hair,” he said. “There must have been other selective benefits to being naked than just thermoregulatory reasons.”
But by that time, humans may have gained other ways to keep warm, Wilkinson argued. "One possibility is that by the time humans were migrating they probably had fire and possibly clothing." Although Wilkinson believes in the new model, he's not surprised that there is some disagreement in the field. "Human evolution is an argumentative area of science," he said. "It always has been."
G.D. Ruxton, D.M. Wilkinson, “Avoidance of overheating and selection for both hair loss and bipedality in hominins,” PNAS, doi: 10.1073/pnas.1113915108, 2011. | <urn:uuid:6a0f8e75-3fed-4e68-b964-2e649f7cc810> | 3.75 | 1,159 | News Article | Science & Tech. | 40.810418 | 2,345 |
Physicists announce antimatter discovery
By Steve Koppes
Mother Nature likes matter better than antimatter, a preference physicists technically refer to as charge-parity violation. First observed in 1964 by James Cronin and Val Fitch, the indirect CP violation they studied won them the 1980 Nobel Prize in physics.
Since then, theorists have worked to devise a model of physics that could account for CP violation, but there was no independent evidence to test the models against. Nothing, that is, until Wednesday, Feb. 24, when Chicago graduate student Peter Shawhan announced at a Fermilab seminar the discovery of direct CP violation, an entirely new type of inequality between matter and antimatter.
"It's an uncharted territory," said Bruce Winstein, the Samuel Allison Distinguished Service Professor in Physics, who headed up the 21-year effort that led to the discovery. "For 34 years we've had one measurement of CP violation, just one manifestation of it. This is the first new one since that time."
In 1964 at Brookhaven National Laboratory, Cronin and Fitch observed indirect CP violation, the unbalanced mixing of neutral subatomic kaon particles with their charged antiparticles. The Fermilab team has observed direct CP violation.
To study the process, the Fermilab team produces enormous quantities of kaons with the worlds highest-energy proton beam at Fermilabs TeVatron accelerator. Kaons decay into other types of particles within a tiny fraction of a second after they are produced, so the KTeV detectors must identify and measure their position and energy quickly.
The experiment, called Kaons at the TeVatron at Fermilab, is a collaboration involving 80 physicists from 12 institutions. About 15 of the KTeV physicists are from the University; eight of these Chicago scientists analyzed the data that led to the Feb. 24 announcement.
Winstein began experimenting with CP violation in 1978. Construction on the latest experiment, the third in a successively more accurate series, began in 1992. The experiment began running 24 hours a day in late 1996.
"It's an extremely high-precision experiment," said Shawhan, the senior Chicago graduate student on the project. "First we have to design the experiment well. Then we have to be very certain that we understand our detector and our analysis in greater detail than most other high-energy experiments because we're looking for such a subtle effect. A great deal of work has gone into that effort."
The experiment attempts to measure a quantity called epsilon prime divided by epsilon. If the quantity had turned out to be zero, it would have verified the Superweak Model of CP violation. A nonzero value would favor the Standard Model, to which most physicists subscribe.
The result that Shawhan announced Feb. 24 was 0.00280 with an error of 0.00041. This eliminates the Superweak Model as the sole explanation for CP violation, but a problem remains. The number that we got was larger than most theorists had predicted, said Edward Blucher, Assistant Professor in Physics at Chicago and a member of the Fermilab team for five years. Blucher and his students, Jim Graham and Val Prasad, along with graduate student Colin Bown, postdoctoral scientists Rick Kessler and Sasha Glazov, and former member Aaron Roodman, made up the Chicago team for this analysis.
The European laboratory for particle physics, CERN, in Switzerland, found evidence for direct CP violation before the Fermilab team, but the CERN measurements were less precise. "It wasn't definitive evidence," Winstein said.
The Chicago researchers initially reacted to the latest result with mixed emotions. "There was a mixture of jubilation, shock and a feeling of 'oh my God, did we screw up,' all at once," Winstein said.
But there is no question about the latest Fermilab results, Cronin said: "It's final."
The experiment has doubled scientific knowledge about CP violation independent of any theory or speculation, said Cronin, Professor in Physics and Astronomy & Astrophysics at Chicago.
"This wonderful discovery is a beautiful surprise," Cronin said. "It's just wonderful because I don't think anybody expected it. That's what makes it especially delicious."
“We are liquidating the earth’s natural assets to fuel our consumption.”
In World on the Edge, Lester Brown of the Earth Policy Institute writes:
“The world’s ever-growing herds of cattle, sheep, and goats are converting vast stretches of grassland to desert. Forests are shrinking by 13 million acres per year as we clear land for agriculture and cut trees for lumber and paper. Four fifths of oceanic fisheries are being fished at capacity or overfished and headed for collapse. In system after system, demand is overshooting supply.”
Lester Brown Chapter 1 On the Edge
Human expansion into the environment for shelter, food, and fuel, exacerbated by the sprawl around cities, is squeezing wildlife into the last remaining islands of shrinking habitat.
What is at stake?
In 1979, 1.2 million elephants roamed the African continent. That number currently is 300,000 elephants. We have lost 75% of the elephant herds mainly due to poaching, loss of habitat and human conflict.
Wildlife Direct Elephant Slaughter is Recurring
Kenya’s Lions are on the brink of extinction.
The total population of mountain gorillas worldwide is 786 because of shrinking habitat.
North America inexplicably began losing bee colonies. Chemicals in the water supply cause small amphibians to show mutations. A giant garbage patch in the Pacific floats unnoticed by humans, putting plastics into the food chain of birds.
The list is long and depressing.
But this conflict that threatens the futures of both wildlife and humans has a solution – with abundant cold fusion energy technology.
Growing food locally and sustainably is economically viable using cold fusion energy.
The first commercial device of this revolutionary technology is essentially a steam generator, which could provide affordable clean water, as well as hot water and heat, off-grid in the remotest locations to support agriculture, horticulture, aquaculture, and domestic needs.
Designed to run on hydrogen and a recyclable nickel powder, only one gram of nickel can provide 10 kilowatts of thermal energy from a small portable unit, recharging in about six-months.
Even in cold climates, steam generators can heat greenhouses for organic food all year long.
With plentiful energy, it becomes economically viable to recycle all waste, reducing the need for more virgin resources, and no more plastic into the environment!
With clean and abundant cold fusion energy, we can stop the pollution from fossil fuels and the end resource wars limited supplies engender.
Carbon-burning steam power revolutionized farming in the 19th century. Nickel-Hydrogen Exothermic Reaction steam power can revolutionize world agriculture in the 21st century.
In the ancient Mediterranean villages where philosophy and science began, problems associated with increasing population were taken care of by shipping off a portion of the tribe to a new location, sometimes forcefully, and founding a new colony.
21st-century humans do not have that same opportunity. There is no 'Unknown' area on Earth to expand into. Moon base 2020? Not likely without cold fusion energy.
On Earth, there is only what we choose to program as Nature. 'Nature' can only be a work of Art with the participation of All.
Why should we care about the wildlife of this world?
One of the first philosophers in the Greek world was Empedocles. He lived around 450 BC and came from the island of Sicily. In one of his surviving fragments of writing, he relates to his student Pausanias the importance of listening carefully to the words his teacher Empedocles is saying.
And why is listening to the teacher Empedocles’ words so important?
“If you press them down
underneath your dense-packed diaphragm
and oversee them with good will and with
pure attention to the work, they will all
without the slightest exception stay with you
for as long as you live. And, from them,
you will come to possess many other things.
For they grow, each according to its
own inner disposition, in whatever way their
But if you reach out instead after
other kinds of things – after the ten thousand
worthless things that exist among humans,
blunting their cares – then you can be sure
they will only too gladly leave you with the
circling of time, longing to return to their
own dear kind. For you need to know that
everything has intelligence and a share
Empedocles, c. 450 BC, translated by Peter Kingsley in Reality
Cold Fusion Now!
Reality by Peter Kingsley www.peterkingsley.org
Earth Policy Institute
You can help animals and birds in your neighborhood by providing fresh drinking water, putting bird seed in feeders, and planting indigenous species to feed local wildlife. Refrain from toxic chemical use in and around your home, and recycle even the tiniest of plastic pieces.
To Learn More about Endangered Species | <urn:uuid:b7110989-2aab-42ca-94fa-e17a79662b43> | 3.4375 | 1,043 | Nonfiction Writing | Science & Tech. | 43.747775 | 2,347 |
Convert calendar time to local time
#include <time.h>

struct tm* localtime_r( const time_t* timer,
                        struct tm* result );
- timer: A pointer to a time_t object that contains the calendar time that you want to convert.
- result: A pointer to a tm structure where the function can store the converted time.
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The localtime_r() function converts the calendar time pointed to by timer into local time, storing the information in the struct tm that result points to. Whenever you call localtime_r(), it calls tzset().
You typically get a calendar time by calling time(). That time is Coordinated Universal Time (UTC, formerly known as Greenwich Mean Time or GMT).
You typically use the date command to set the computer's internal clock using Coordinated Universal Time (UTC). Use the TZ environment variable or _CS_TIMEZONE configuration string to establish the local time zone.
Returns: a pointer to result, the struct tm.
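A minimal usage sketch (assuming a POSIX-style environment; error handling kept to a bare check of the return value):

    #include <stdio.h>
    #include <time.h>

    /* Convert the current calendar time to local time with the
     * reentrant localtime_r(), then print it. */
    int main(void)
    {
        time_t now = time(NULL);   /* calendar time (UTC-based) */
        struct tm local;

        if (localtime_r(&now, &local) == NULL)
            return 1;

        printf("%04d-%02d-%02d %02d:%02d:%02d\n",
               local.tm_year + 1900, local.tm_mon + 1, local.tm_mday,
               local.tm_hour, local.tm_min, local.tm_sec);
        return 0;
    }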
6. Built-in Exceptions
Exceptions should be class objects. The exceptions are defined in the module exceptions. This module never needs to be imported explicitly: the exceptions are provided in the built-in namespace as well as the exceptions module.
For class exceptions, in a try statement with an except clause that mentions a particular class, that clause also handles any exception classes derived from that class (but not exception classes from which it is derived). Two exception classes that are not related via subclassing are never equivalent, even if they have the same name.
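A minimal sketch of that rule (LookupError is a base of both IndexError and KeyError, so one clause handles either):

    try:
        {}["missing"]            # raises KeyError
    except LookupError as exc:   # would also catch IndexError
        print(type(exc).__name__, exc)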
The built-in exceptions listed below can be generated by the interpreter or built-in functions. Except where mentioned, they have an "associated value" indicating the detailed cause of the error. This may be a string or a tuple containing several items of information (e.g., an error code and a string explaining the code). The associated value is the second argument to the raise statement. If the exception class is derived from the standard root class BaseException, the associated value is present as the exception instance's args attribute.
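For instance (a small sketch of how the raise arguments land on args):

    try:
        raise ValueError("bad value", 42)
    except ValueError as exc:
        print(exc.args)   # ('bad value', 42)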
User code can raise built-in exceptions. This can be used to test an exception handler or to report an error condition "just like" the situation in which the interpreter raises the same exception; but beware that there is nothing to prevent user code from raising an inappropriate error.
The built-in exception classes can be sub-classed to define new exceptions; programmers are encouraged to at least derive new exceptions from the Exception class and not BaseException. More information on defining exceptions is available in the Python Tutorial under User-defined Exceptions.
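A sketch of such a user-defined exception (the ConfigError name and load_port() helper are made up for illustration):

    class ConfigError(Exception):
        """Raised when a configuration value is missing or invalid."""

    def load_port(settings):
        if "port" not in settings:
            raise ConfigError("no 'port' setting")
        return int(settings["port"])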
The following exceptions are only used as base classes for other exceptions.
- BaseException: The base class for all built-in exceptions. It is not meant to be directly inherited by user-defined classes (for that, use Exception). If bytes() or str() is called on an instance of this class, the representation of the argument(s) to the instance is returned, or the empty string when there were no arguments. All arguments are stored in args as a tuple.
- Exception: All built-in, non-system-exiting exceptions are derived from this class. All user-defined exceptions should also be derived from this class.
- ArithmeticError: The base class for those built-in exceptions that are raised for various arithmetic errors: OverflowError, ZeroDivisionError, FloatingPointError.
- LookupError: The base class for the exceptions that are raised when a key or index used on a mapping or sequence is invalid: IndexError, KeyError. This can be raised directly by codecs.lookup().
- EnvironmentError: The base class for exceptions that can occur outside the Python system: IOError, OSError. When exceptions of this type are created with a 2-tuple, the first item is available on the instance's errno attribute (it is assumed to be an error number), and the second item is available on the strerror attribute (it is usually the associated error message). The tuple itself is also available on the args attribute.

When an EnvironmentError exception is instantiated with a 3-tuple, the first two items are available as above, while the third item is available on the filename attribute. However, for backwards compatibility, the args attribute contains only a 2-tuple of the first two constructor arguments.

The filename attribute is None when this exception is created with other than 3 arguments. The errno and strerror attributes are also None when the instance was created with other than 2 or 3 arguments. In this last case, args contains the verbatim constructor arguments as a tuple.
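A sketch of that attribute mapping:

    # 3-tuple constructor: errno, strerror, filename
    e = EnvironmentError(2, "No such file or directory", "data.txt")
    print(e.errno)     # 2
    print(e.strerror)  # 'No such file or directory'
    print(e.filename)  # 'data.txt'
    print(e.args)      # (2, 'No such file or directory') -- 2-tuple only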
The following exceptions are the exceptions that are actually raised.
- AssertionError: Raised when an assert statement fails.
- AttributeError: Raised when an attribute reference (see Attribute references) or assignment fails. (When an object does not support attribute references or attribute assignments at all, TypeError is raised.)
- EOFError: Raised when one of the built-in functions (input() or raw_input()) hits an end-of-file condition (EOF) without reading any data. (N.B.: the file.read() and file.readline() methods return an empty string when they hit EOF.)
- FloatingPointError: Raised when a floating point operation fails. This exception is always defined, but can only be raised when Python is configured with the --with-fpectl option, or the WANT_SIGFPE_HANDLER symbol is defined in the pyconfig.h file.
- GeneratorExit: Raised when a generator's close() method is called. It directly inherits from BaseException instead of Exception since it is technically not an error.
- IOError: Raised when an I/O operation (such as the built-in print() or open() functions or a method of a file object) fails for an I/O-related reason, e.g., "file not found" or "disk full". This class is derived from EnvironmentError. See the discussion above for more information on exception instance attributes.
- ImportError: Raised when an import statement fails to find the module definition or when a from ... import fails to find a name that is to be imported.
- IndexError: Raised when a sequence subscript is out of range. (Slice indices are silently truncated to fall in the allowed range; if an index is not an integer, TypeError is raised.)
- KeyError: Raised when a mapping (dictionary) key is not found in the set of existing keys.
- KeyboardInterrupt: Raised when the user hits the interrupt key (normally Control-C or Delete). During execution, a check for interrupts is made regularly. The exception inherits from BaseException so as to not be accidentally caught by code that catches Exception and thus prevent the interpreter from exiting.
- MemoryError: Raised when an operation runs out of memory but the situation may still be rescued (by deleting some objects). The associated value is a string indicating what kind of (internal) operation ran out of memory. Note that because of the underlying memory management architecture (C's malloc() function), the interpreter may not always be able to completely recover from this situation; it nevertheless raises an exception so that a stack traceback can be printed, in case a run-away program was the cause.
- NameError: Raised when a local or global name is not found. This applies only to unqualified names. The associated value is an error message that includes the name that could not be found.
- NotImplementedError: This exception is derived from RuntimeError. In user-defined base classes, abstract methods should raise this exception when they require derived classes to override the method.
- OSError: This exception is derived from EnvironmentError. It is raised when a function returns a system-related error (not for illegal argument types or other incidental errors). The errno attribute is a numeric error code from errno, and the strerror attribute is the corresponding string, as would be printed by the C function perror(). See the module errno, which contains names for the error codes defined by the underlying operating system. For exceptions that involve a file system path (such as chdir() or unlink()), the exception instance will contain a third attribute, filename, which is the file name passed to the function.
- OverflowError: Raised when the result of an arithmetic operation is too large to be represented. This cannot occur for integers (which would rather raise MemoryError than give up). Because of the lack of standardization of floating point exception handling in C, most floating point operations also aren't checked.
- ReferenceError: This exception is raised when a weak reference proxy, created by the weakref.proxy() function, is used to access an attribute of the referent after it has been garbage collected. For more information on weak references, see the weakref module.
- RuntimeError: Raised when an error is detected that doesn't fall in any of the other categories. The associated value is a string indicating what precisely went wrong. (This exception is mostly a relic from a previous version of the interpreter; it is not used very much any more.)
- StopIteration: Raised by built-in next() and an iterator's __next__() method to signal that there are no further values.
- SyntaxError: Raised when the parser encounters a syntax error. This may occur in an import statement, in a call to the built-in functions exec() or eval(), or when reading the initial script or standard input (also interactively). Instances of this class have attributes filename, lineno, offset and text for easier access to the details. str() of the exception instance returns only the message.
- SystemError: Raised when the interpreter finds an internal error, but the situation does not look so serious to cause it to abandon all hope. The associated value is a string indicating what went wrong (in low-level terms). You should report this to the author or maintainer of your Python interpreter. Be sure to report the version of the Python interpreter (sys.version; it is also printed at the start of an interactive Python session), the exact error message (the exception's associated value) and if possible the source of the program that triggered the error.
- SystemExit: This exception is raised by the sys.exit() function. When it is not handled, the Python interpreter exits; no stack traceback is printed. If the associated value is an integer, it specifies the system exit status (passed to C's exit() function); if it is None, the exit status is zero; if it has another type (such as a string), the object's value is printed and the exit status is one.

Instances have an attribute code which is set to the proposed exit status or error message (defaulting to None). Also, this exception derives directly from BaseException and not Exception, since it is not technically an error.

A call to sys.exit() is translated into an exception so that clean-up handlers (finally clauses of try statements) can be executed, and so that a debugger can execute a script without running the risk of losing control. The os._exit() function can be used if it is absolutely positively necessary to exit immediately (for example, in the child process after a call to fork()).

The exception inherits from BaseException instead of Exception so that it is not accidentally caught by code that catches Exception. This allows the exception to properly propagate up and cause the interpreter to exit.
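A small sketch of that behavior:

    import sys

    # SystemExit derives from BaseException, so `except Exception`
    # would not swallow it; catching it explicitly exposes the code.
    try:
        sys.exit(3)
    except SystemExit as exc:
        print("proposed exit status:", exc.code)   # 3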
- TypeError: Raised when an operation or function is applied to an object of inappropriate type. The associated value is a string giving details about the type mismatch.
- UnboundLocalError: Raised when a reference is made to a local variable in a function or method, but no value has been bound to that variable. This is a subclass of NameError.
- UnicodeError: Raised when a Unicode-related encoding or decoding error occurs. It is a subclass of ValueError.
- UnicodeEncodeError: Raised when a Unicode-related error occurs during encoding. It is a subclass of UnicodeError.
- UnicodeDecodeError: Raised when a Unicode-related error occurs during decoding. It is a subclass of UnicodeError.
- UnicodeTranslateError: Raised when a Unicode-related error occurs during translating. It is a subclass of UnicodeError.
- ValueError: Raised when a built-in operation or function receives an argument that has the right type but an inappropriate value, and the situation is not described by a more precise exception such as IndexError.
- VMSError: Only available on VMS. Raised when a VMS-specific error occurs.
- WindowsError: Raised when a Windows-specific error occurs or when the error number does not correspond to an errno value. The winerror and strerror values are created from the return values of the GetLastError() and FormatMessage() functions from the Windows Platform API. The errno value maps the winerror value to corresponding errno.h values. This is a subclass of OSError.
- ZeroDivisionError: Raised when the second argument of a division or modulo operation is zero. The associated value is a string indicating the type of the operands and the operation.
The following exceptions are used as warning categories; see the warnings module for more information.
- Warning: Base class for warning categories.
- UserWarning: Base class for warnings generated by user code.
- DeprecationWarning: Base class for warnings about deprecated features.
- PendingDeprecationWarning: Base class for warnings about features which will be deprecated in the future.
- SyntaxWarning: Base class for warnings about dubious syntax.
- RuntimeWarning: Base class for warnings about dubious runtime behavior.
- FutureWarning: Base class for warnings about constructs that will change semantically in the future.
- ImportWarning: Base class for warnings about probable mistakes in module imports.
- UnicodeWarning: Base class for warnings related to Unicode.
- BytesWarning: Base class for warnings related to bytes and buffer.
6.1. Exception hierarchy
The class hierarchy for built-in exceptions is:
BaseException
 +-- SystemExit
 +-- KeyboardInterrupt
 +-- GeneratorExit
 +-- Exception
      +-- StopIteration
      +-- ArithmeticError
      |    +-- FloatingPointError
      |    +-- OverflowError
      |    +-- ZeroDivisionError
      +-- AssertionError
      +-- AttributeError
      +-- EnvironmentError
      |    +-- IOError
      |    +-- OSError
      |         +-- WindowsError (Windows)
      |         +-- VMSError (VMS)
      +-- EOFError
      +-- ImportError
      +-- LookupError
      |    +-- IndexError
      |    +-- KeyError
      +-- MemoryError
      +-- NameError
      |    +-- UnboundLocalError
      +-- ReferenceError
      +-- RuntimeError
      |    +-- NotImplementedError
      +-- SyntaxError
      |    +-- IndentationError
      |         +-- TabError
      +-- SystemError
      +-- TypeError
      +-- ValueError
      |    +-- UnicodeError
      |         +-- UnicodeDecodeError
      |         +-- UnicodeEncodeError
      |         +-- UnicodeTranslateError
      +-- Warning
           +-- DeprecationWarning
           +-- PendingDeprecationWarning
           +-- UserWarning
           +-- SyntaxWarning
           +-- RuntimeWarning
           +-- FutureWarning
           +-- ImportWarning
           +-- UnicodeWarning
           +-- BytesWarning
Overview of energetic particle hazards during prospective manned missions to Mars
McKenna-Lawlor, Susan and Gonçalves, P. and Keating, A. and Reitz, G. and Matthiä, D. (2011) Overview of energetic particle hazards during prospective manned missions to Mars. Planetary and Space Science, 63-64, pp. 123-132. Elsevier. DOI: 10.1016/j.pss.2011.06.017.
Full text not available from this repository.
A scenario for an initial manned mission to Mars involves transits through the Van Allen Radiation Belts, a 30 day 'short surface stay' and a 400 day Cruise Phase (to/from the planet). The contribution to the total dose incurred through transiting the belts is relatively small and manageable. Estimates of the particle radiation hazard incurred during a 30 day stay on the surface (using ESA's Mars Energetic Radiation Environment Models dMEREM and eMEREM) indicate that the dose is not expected to be particularly challenging health-wise due to the shielding effect provided by the Martian atmosphere and the body of the planet. This is in accord with estimations obtained using the Langley HZETRN code. Estimates of GCR exposure in free space during the minimum phase of Solar Cycle 23 determined using the CREME2009 model are in reasonable agreement with published results obtained using HZETRN (which they exceed by about 10%). The Cruise Phase poses a significant radiation problem due to the cumulative effects of isotropic Galactic Cosmic Radiation over 400 days. The occurrence during this period of a large Solar Energetic Particle (SEP) event, especially if it has a hard energy spectrum, could be catastrophic health-wise to the crew. Such particle events are rare but they are not currently predictable. An overview of mitigating strategies currently under development to meet the radiation challenge is provided and it is shown that the health problem posed by energetic particle radiation is presently unresolved.
|Title:||Overview of energetic particle hazards during prospective manned missions to Mars|
|Journal or Publication Title:||Planetary and Space Science|
|In Open Access:||No|
|In ISI Web of Science:||Yes|
|Page Range:||pp. 123-132|
|Keywords:||Mars, Galactic cosmic radiation, Solar energetic particles, Manned missions|
|HGF - Research field:||Aeronautics, Space and Transport|
|HGF - Program:||Raumfahrt|
|HGF - Program Themes:||R FR - Forschung unter Weltraumbedingungen|
|DLR - Research area:||Raumfahrt|
|DLR - Program:||R FR - Forschung unter Weltraumbedingungen|
|DLR - Research theme (Project):||R - Vorhaben Strahlenbiologie|
|Institutes and Institutions:||Institute of Aerospace Medicine > Radiation Biology|
|Deposited By:||Kerstin Kopp|
|Deposited On:||06 Dec 2011 12:40|
|Last Modified:||26 Mar 2013 13:33|
ASHVILLE, Ala. (AP) — Biologists this week waded through creeks in Alabama searching for tiny fish, documenting them as part of a study of aquatic wildlife in the region.
The Anniston Star reports (http://bit.ly/MCLUKI) that the scientists were taking part in a "fish blitz" aimed at identifying and documenting fish that live in the headwaters of Terrapin Creek and in Big Canoe Creek near Ashville and Springville.
This week and next, the scientists will visit about three dozen sites along the two creeks to do biological monitoring. Their findings will be part of a report by the Geological Survey of Alabama.
The biologists expect that report to be used for decades to help officials and private landowners make land management decisions.
The findings could also help restore populations of endangered species and improve water quality across Alabama. | <urn:uuid:646e5a66-4d78-4b21-bfb5-4b4420425e16> | 3.015625 | 180 | News Article | Science & Tech. | 41.301533 | 2,351 |
When disturbed, the spider might first vibrate the web to try to make its body look bigger, but if that fails to deter a predator she will drop to the ground and hide (Faulkner 1999). Adults may be captured by wasps such as the Blue Mud Dauber, Chalybion californicum (Landes et al. 1987). They are also eaten by birds, lizards, and shrews.
Overwintering egg cases protect spiderlings from predation. Suspending the cocoon from the web is particularly effective against ant predation. The vast majority, however, are eventually damaged by birds. Cocoon wall layers provide barriers against burrowing larvae of insect predators and ovipositors of parasitic insects, but Ichneumonidae wasps such as Tromatopia rufopectus and Chloropidae flies such as Pseudogaurax signatus lay their eggs in Argiope aurantia egg cases. In fact, one study found that in addition to Argiope aurantia, nineteen species of insects and eleven species of spiders emerged from Argiope aurantia egg cases. (Hieber 1993, Lockley and Young 1993)
Forests.org News Archive
Report: Large-scale forest biomass energy not sustainable
Large-scale production could sacrifice forest ecosystem integrity and actually lead to higher greenhouse gas emissions
Large-scale use of forest biomass for energy production may be unsustainable and is likely to increase greenhouse gas emissions in the long run, according to a new study.
The research was done by the Max-Planck Institute for Biogeochemistry in Germany, Oregon State University, and other universities in Switzerland, Austria and France. The work was supported by several agencies in Europe and the U.S. Department of Energy.
The results show that a significant shift to forest biomass energy production would create "a substantial risk of sacrificing forest integrity and sustainability with no guarantee that it would mitigate climate change," according to the researchers.
Early assumptions that biomass energy production would be greenhouse-neutral, or even reduce greenhouse emissions “are based on erroneous assumptions,” the researchers said, adding that large-scale biomass energy production would have negative impacts on forest ecosystems, including ...
In studies of breast cancer cells, [senior investigator Dr. Bert] O'Malley and his colleagues showed how the clock works. Using steroid receptor coactivator-3 (SRC-3), they demonstrated that activation requires addition of a phosphate molecule to the protein at one spot and addition of an ubiquitin molecule at another point. Each time the message of the gene is transcribed into a protein, another ubiquitin molecule is chained on. Five ubiquitins in the chain and the protein is automatically destroyed.
A counter on a separate work tape: neat!
Main article at sciencedaily; link via Science and Reason. | <urn:uuid:6e81bfdb-83f0-4a1f-aca7-131d25ae644e> | 2.546875 | 130 | Personal Blog | Science & Tech. | 38.10051 | 2,354 |
The height of a hill (in feet) is given by a function h(x, y), where y is the distance north and x is the distance east of South Hadley (in miles).
(a) Where is the top of the hill located?
So I compute the gradient ∇h. Then what? Set it equal to zero?
(b) How high is this hill?
This is the magnitude of ∇h?
(c) How steep is the slope (in feet per mile) at a point one mile north and one mile east of South Hadley? In what direction is the slope steepest at that point?
Plug the point into the gradient and find its magnitude?
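The expression for h did not survive in the post above; the usual textbook version of this problem (Griffiths) takes h = 10(2xy - 3x^2 - 4y^2 - 18x + 28y + 12). Assuming that form, a sympy sketch of all three parts:

    import sympy as sp

    x, y = sp.symbols("x y")
    # Assumed form of h (the post's expression was lost); x east, y north.
    h = 10*(2*x*y - 3*x**2 - 4*y**2 - 18*x + 28*y + 12)

    grad = [sp.diff(h, v) for v in (x, y)]

    # (a) Top of the hill: set the gradient to zero and solve.
    top = sp.solve(grad, [x, y], dict=True)[0]
    print(top)               # {x: -2, y: 3}: 3 mi north, 2 mi west

    # (b) The height is h at that point (not the gradient's magnitude).
    print(h.subs(top))       # 720 (feet)

    # (c) Slope one mile north and east: |grad h| there; the slope is
    # steepest in the direction of grad h itself.
    g = sp.Matrix(grad).subs({x: 1, y: 1})
    print(g.T, sp.sqrt(g.dot(g)))   # [-220, 220], 220*sqrt(2) ft/mile

So for (a) you do set ∇h = 0; for (b) you evaluate h, not |∇h|, at that point; for (c) |∇h| gives the steepness (here 220√2 ≈ 311 ft/mile) and the slope is steepest in the direction of ∇h, which at that point happens to be northwest.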
Thorfinn.au sends along big physics news: magnetic monopoles have been detected at low temperatures in "Dirac strings" within a single crystal of Dysprosium Titanate. Two papers are being published today in the journal Science and two more on arXiv.org, as yet unpublished, provide further evidence. "Theoretical work had shown that monopoles probably exist, and they have been measured indirectly. But the Science papers are the first direct experiments to record the monopole's effects on the spin-ice material. The papers use neutrons to detect atoms in the crystal aligned into long daisy chains. These daisy chains tie each north and south monopole together. Known as 'Dirac strings,' the chains, as well as the existence of monopoles, were predicted in the 1930s by the British theoretical physicist Paul Dirac. Heat measurements in one paper also support the monopole argument. The two, as yet unpublished, papers on arXiv add to the evidence. The first provides additional observations, and the second uses a new technique to determine the magnetic charge of each monopole to be 4.6×10⁻¹³ joules per tesla metre. All together, the evidence for magnetic monopoles 'is now overwhelming,' says Steve Bramwell, a materials scientist at University College London and author on one of the Science papers and one of the arXiv papers."
Biofuel in the Pipeline
What’s so great about ethanol? Chemical engineer Kristala Jones Prather thinks she can design a better biofuel — one that’s closer to the octane of gasoline and doesn’t absorb quite as much metal-corroding water.
The U.S. Department of Energy reports that the U.S. spends nearly $1 billion a day on imported oil. Fuels derived from biomass hold a great deal of promise as renewable energy sources and may significantly diversify the menu of transportation fuel options available to consumers. To produce a better biofuel, Prather, the Theodore T. Miller Career Development Associate Professor of Chemical Engineering, looks for molecules that have physical and chemical characteristics that — unlike ethanol — will make them compatible with today’s cars, pipelines, and other aspects of the U.S. petroleum infrastructure.
“We live in a world where liquid transportation fuels dominate. So given that reality, we’ve got to come up with alternatives that will work within the existing infrastructure,” she says. Butanol, or butyl alcohol, has been demonstrated to work in vehicles designed to run on gasoline and packs more of an octane punch than ethanol. Prather looks to this and other natural pathways for inspiration to design biosynthetic routes to new molecules.
Prather is enlisting the help of bacteria as micro biofuel factories. A bacterium called Clostridium acetobutylicum can ferment glucose to produce butanol, but it’s not very efficient. Prather has genetically tweaked other bacteria such as E. coli to use sugars in plant material to churn out butanol. In the process, Prather has happened upon new ideas about how to produce even better biofuels. The secret is in the enzymes that drive the fermentation process, so Prather has teamed up with Bruce Tidor, MIT professor of biological engineering and computer science, to engineer new enzymes and to predict which ones would be most effective in which biological systems. “Microbes are promising as chemical factories because of the ease with which these enzymes can be introduced into them from a wide variety of natural sources,” she says. Additionally, there is already a rich and growing body of industrial experience in using microbial systems to produce biofuels and biochemicals at very high volumes.
Prather, whose work is funded by Shell Global Solutions, sees biofuels as one of a diverse array of solutions to the energy crisis. "We can easily run into a situation where the rate at which we consume biomass is greater than the rate at which we can produce it," she warns. But how long it takes to implement a solution is key, and "in the short term, biofuels can be more rapidly brought on line than other alternative energy sources, especially for transportation."
Earth on track to be hottest in human history: study
Earth is on track to becoming the hottest it has been at any time in the past 11.3 millennia, a period spanning the history of human civilisation, a new study says.
Based on fossil samples and other data collected from 73 sites around the world, scientists have been able to reconstruct the history of the planet's temperature from the end of the last Ice Age around 11,000 years ago to the present.
They have determined the past 10 years have been hotter than 80 per cent of the past 11,300 years.
But virtually all the climate models evaluated by the Intergovernmental Panel on Climate Change predict Earth's atmosphere will be hotter in the coming decades than at any time since the end of the Ice Age, no matter what greenhouse gas emission scenario is used, the study found.
"We already knew that on a global scale, Earth is warmer today than it was over much of the past 2,000 years," said Shaun Marcott, the lead author of the study, which was published in Science.
"Now we know that it is warmer than most of the past 11,300 years.
"This is of particular interest because the Holocene spans the entire period of human civilisation."
The data show that temperatures cooled by 0.8 degrees Celsius over the past 5,000 years, but have been rising again in the past 100 years, particularly in the northern hemisphere where land masses and population centres are larger.
The climate models project that average global temperatures will rise by 1.1 to 6.3 degrees Celsius by the end of the century, depending on the level of CO2 emissions resulting from human activities, the researchers found.
"What is most troubling is that this warming will be significantly greater than at any time during the past 11,300 years," said Peter Clark, a paleoclimatologist at Oregon State University.
Earth's position with respect to the Sun is the main natural factor affecting temperatures during that time, the scientists said.
"During the warmest period of the Holocene, the Earth was positioned such that northern hemisphere summers warmed more," Mr Marcott said.
"As the Earth's orientation changed, northern hemisphere summers became cooler, and we should now be near the bottom of this long-term cooling trend - but obviously, we are not."
Other studies have concluded that human activities - not natural causes - have been responsible for the warming experienced over the past 50 years. | <urn:uuid:31643a93-5056-464d-b488-48d0d57521bb> | 3.734375 | 504 | News Article | Science & Tech. | 49.1825 | 2,358 |
Ghosts of Tsunamis Past
Brian Atwater is one of those people whose surname coincidentally fits his or her line of work. As a geologist for the U.S. Geological Survey, he studies earthquakes and tsunamis of the past few thousand years. Doing this often requires him to get wet along shorelines, tidal marshes, and river deltas to investigate the residue of these catastrophes, buried where land meets sea.
For tsunami researchers, witnessing an event as wide-reaching and destructive as last winter's Indian Ocean tsunami is exceedingly rare. This makes Atwater's soggy forays into geologic history quite valuable. By unearthing sediment deposits tsunamis leave behind, scientists can study the waves' origins, extent, and frequency. Such work helps avert surprises from locations that have the geological apparatus to produce a tsunami, but haven't, in written history at least.
History in Layers
Science Bulletins met up with Atwater at the Copalis River estuary on the Pacific coast of Washington State. The estuary is one of dozens that sit above an enormous fault plane that slants beneath the Pacific coast. Called the Cascadia subduction zone, the fault stretches 1,100 km from Vancouver Island, B.C., to northern California. One tectonic plate descends beneath another here. As they abrade, the overriding continental plate sticks and warps atop the subducting one. Strain builds over time. When the plates suddenly slip free, an earthquake occurs.
At this moment, the continental plate springs upward, and can launch a massive volume of water: a tsunami. Thus “unstuck,” the plate settles, now lower in elevation than it was prior to the earthquake.
Despite the fault’s existence, written records from the Pacific Northwest coast, which began about 200 years ago, are silent on the subject of large earthquakes and tsunamis generated from them. “I first went to the Copalis River in the spring of 1986,” says Atwater. “At that time, very few scientists believed that large earthquakes and tsunamis could happen here, and nobody had demonstrated that they had.” But Atwater became one of the first researchers to find geologic proof.
Atwater explains as he hacks into a marshy bank of the Copalis River with a World War II folding shovel. “What I’m unearthing is a record of a catastrophe from 300 years ago,” he says. Atwater points to the lowest of three distinct bands of sediment stacked in the bank. “Around 1700, this salt marsh, represented by the soil here, was up at the level of the present marsh above us. But then the land abruptly dropped a meter or two during an earthquake.” He traces the 10 cm thick layer of sand above the ancient marsh. “Then comes the tsunami, and lays down a sheet of sand. The sea was free to come in because the continental plate had dropped, so the ocean then laid down this top layer of mud.”
In 1986, Atwater surveyed a sand sheet that he suspected a tsunami washed into Willapa Bay, Washington. Sand deposits had been associated with only one tsunami previously, the 1960 event in the southeastern Pacific that affected Chile and Japan. Jody Bourgeois, a sedimentologist at the University of Washington, investigated further. “We started with little to go on,” she recalls. “We had to show that the sand layer was from a surge of a tsunami wave, and not from a high tide or a storm.” A team of colleagues mapped the sand and land-level changes along coastal Washington State and compared the results with deposits and eyewitness accounts of the 1960 tsunami. The picture began to come together.
Atwater and other researchers later found key clues in eerie stands of Western red cedars bordering the Copalis River and three other estuaries in the state. “We call it a ghost forest because the trees have been standing dead for centuries,” Atwater says. It’s a sign not of a tsunami, but of an earthquake capable of causing one. “After the earthquake drops the continental plate, saltwater can come in at high tide and routinely cover the forest floor, killing the trees,” he explains.
A Restless Record
At the start of World War II, a Japanese geographer looking at municipal documents from Japan’s Pacific coast noted a mention of a destructive tsunami on the evening of January 26, 1700. In 1996, Japanese researchers proposed that this event and the one that had affected the Pacific Northwest were one and the same. Tree-ring dating of the Western red cedars corroborated this date: the trees had all died at once, somewhere between August 1699 and May 1700. Recent analysis of flooding and other damage from the Japanese documents show that the tsunami’s parent earthquake was a gargantuan magnitude 9.
The mounting evidence since 1986 has convinced Earth scientists that the Pacific Northwest’s 1700 earthquake was just the most recent tsunami-generating quake of a surprisingly fitful fault. Additional deposit data have disclosed seven great Cascadia earthquakes over the past 3,500 years, with an average interval of 500 years.
As the geologic record unfolds, other researchers such as Ruth Ludwin, a seismologist at the Pacific Northwest Seismograph Network, are digging up oral traditions of Northwest Coast native communities that existed prior to written records. “It turns out there are stories amongst those tribes that are consistent with historical earthquakes and tsunamis on the coast of Cascadia,” says Bourgeois.
As the continental plate sluggishly gathers strain at Cascadia, there is no doubt that another massive earthquake and tsunami will roil the Pacific Northwest. When it will strike is unclear. To find out how science is reducing the risk of surprise and widespread damage, follow the essays about tsunami computer modeling and measurement in real time.
Natural Resources Canada: Giant Megathrust Earthquakes
A stellar introduction to the geology of these tsunami-producing disasters.
The Orphan Tsunami of 1700
Exhaustive historic Japanese maps and writings reveal the mystery of the 1700 tsunami.
Cascadia Megathrust Earthquakes in Pacific Northwest Indian Legend
Ruth Ludwin's research into native stories about the area's large historical earthquakes and tsunamis.
More About This Resource...
Supplement a study of earth science with a classroom activity drawn from this Science Bulletin essay.
- Ask students what they know about earthquakes. What causes them? Have scientists identified all the locations that have the potential for a sizable earthquake?
- Have them read the essay (either online or a printed copy).
- Have them write a brief reaction to the article, focusing on what they learned about the limits of written history when it comes to identifying fault planes.
Why only L amino acids?
Rafael N Szeinfeld
szeinfel at FOX.CCE.USP.BR
Thu Jul 7 11:41:18 EST 1994
On 6 Jul 1994 6566friedman at vmsa.csd.mu.edu wrote:
> Does anyone have a plausible hypothesis to explain why only L amino
> acids are used in proteins? I am teaching an introductory course in
> biochemistry this summer and this question was raised by a student.
> Please send your answers to:
> 6566FRIEDMAN at VMS.CSD.MU.EDU
> Alan Friedman
> Dept Biology
> Marquette University
> Milwaukee, WI
Maybe at the time life began there was an imbalance in the concentrations of L- and D-amino acids, that is [L-amino acids] > [D-amino acids], and so L-amino acids were used at that time; L-amino acids were thereby selected and are the ones used up to now.
to fling the limbs and body, as in making efforts to move; to struggle, as a horse in the mire, or as a fish on land; to roll, toss, and tumble; to flounce. They have floundered on from blunder to blunder. (Sir W. Hamilton)
The common English flounder is Pleuronectes flesus. There are several common American species used as food, such as the smooth flounder (P. glabra); the rough or winter flounder (P. americanus); the summer flounder, or plaice (Paralichthys dentatus), of the Atlantic coast; and the starry flounder (Pleuronectes stellatus).
Results from our forum
... blue crabs, shrimp, and fish swimming from the depths of the bay into the shallow waters of the shoreline. Generally, the bottom fish, such as flounders, catfish, and stingrays, are the most affected. Crabs are almost always a part of the event. The phenomenon in Mobile Bay has been studied ...
The pattern of seasonal temperature odds across northern Australia is a result of recent warm conditions in the Indian Ocean and an increasing level of warmth in the Pacific. The Pacific has had the greater influence on this outlook.
The chance that the average July-September maximum temperature will exceed the
long-term median maximum temperature ranges from 60 to 70% across most of the
southern halves of both the NT and Queensland. In the southeast inland of
Queensland the chance approaches 75%.
This means that for every ten years with ocean patterns like the current, about six
or seven years would be expected to be warmer than average during the September
quarter over this broad zone stretching west-east across northern Australia, with about
three or four years being cooler.
The chance of a higher-than-normal seasonal average is between 45 and 60% in the far north.
Outlook confidence is related to how consistently the Pacific and Indian
Oceans affect Australian temperatures. During the September quarter,
history shows this effect on maximum temperatures to be moderately consistent
in both the NT and Queensland (see background information).
The outlook for mean minimum temperatures over July-September shows the
chance of a seasonal average above the long-term median minimum temperature is between
40 and 60% over northern Australia.
History shows the oceans' effect on minimum temperatures in the July to
September period to be moderately consistent over Queensland and the east of the NT.
Elsewhere the effect is only weakly or very weakly consistent.
I want to implement a FIFO (first in, first out) approach in Java. The requirements are as follows:
1) I receive string data in chunks after some time interval.
2) I want to write it into a FIFO buffer of fixed size, i.e., only a fixed number of items can be stored (say 5).
3) Then I will pick items one by one from the buffer for further processing.
4) After I pick an item from the buffer it should be deleted from the buffer, and the space can be reused for further data.
Can anybody guide/tell me how to achieve this? Thanks in advance.
"JavaRanch, where the deer and the Certified play" - David O'Meara
Thanks Cindy, but can you elaborate on this? My understanding of BufferedInputStream is that it just lets me take input faster. How do I achieve a FIFO approach with a fixed set of values?
Hi, you need an array to put the input into (the size of the array is your fixed size) and two indexes: one for the slot to write to and one for the slot to read from. Then you also have to check some conditions: is there space to write? Is there something to read? Use wait() and notify()/notifyAll() so the threads block instead of spinning, and note that the consumer and producer must be different threads in this scenario to avoid deadlock. Perhaps this is a little more than you were looking for, but this is FIFO as I understand it; a sketch follows below.
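A minimal sketch of the approach described above, assuming string data as in the original question (the class and method names are illustrative, not from the thread):

```java
// BoundedBuffer: fixed-capacity FIFO shared by a producer and a consumer thread.
public class BoundedBuffer {
    private final String[] buffer;   // fixed-size storage
    private int readIndex = 0;       // next slot to read from
    private int writeIndex = 0;      // next slot to write to
    private int count = 0;           // elements currently stored

    public BoundedBuffer(int capacity) {
        buffer = new String[capacity];
    }

    // Producer side: blocks while the buffer is full.
    public synchronized void put(String s) throws InterruptedException {
        while (count == buffer.length) {
            wait();                  // no space to write; wait for a take()
        }
        buffer[writeIndex] = s;
        writeIndex = (writeIndex + 1) % buffer.length; // wrap around
        count++;
        notifyAll();                 // wake any waiting consumer
    }

    // Consumer side: blocks while the buffer is empty.
    public synchronized String take() throws InterruptedException {
        while (count == 0) {
            wait();                  // nothing to read; wait for a put()
        }
        String s = buffer[readIndex];
        buffer[readIndex] = null;    // free the slot for reuse
        readIndex = (readIndex + 1) % buffer.length;   // wrap around
        count--;
        notifyAll();                 // wake any waiting producer
        return s;
    }
}
```

For what it's worth, later Java versions (5.0 and up) ship java.util.concurrent.ArrayBlockingQueue, which provides exactly this fixed-capacity FIFO behavior out of the box, so a hand-rolled buffer like this is mainly useful for understanding the wait()/notify() mechanics.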
docs.oracle.com wrote: public void print(String s)
Print a string. If the argument is null then the string "null" is printed. Otherwise, the string's characters are converted into bytes according to the platform's default character encoding, and these bytes are written in exactly the manner of the write(int) method.
s - The String to be printed
Ritesh raushan wrote: but I didn't understand... this is a has-a relationship, but where is the object of the PrintStream class created?
What is there that you did not understand? It simply says that the System class has an object "out" of the PrintStream class. You have already shown the code there. If you are worried about the initialization of "out", then note that the JVM initializes that field during startup, inside the System class itself.
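A small illustration of the has-a relationship being discussed (a sketch; the class name is mine, not from the thread):

```java
import java.io.PrintStream;

public class HasADemo {
    public static void main(String[] args) {
        // System HAS-A PrintStream: "out" is a public static final field
        // of type PrintStream, already created for you by the JVM.
        PrintStream out = System.out;
        out.print("printed via the PrintStream that System holds");
        out.println();
    }
}
```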
I was using while (true) to control some code that was supposed to run for a designated amount of time. I noticed the processor load spike by 40-60%. As there was no input to slow down execution, it was tearing through the code in the while loop and repeating as quickly as possible. I looked up a way to slow it down and discovered Sleep(<milliseconds>) in Windows.h. Is there either: a) a better way to go about controlling the rate of execution; or b) a cross-platform variant of sleep(time)?
Generally sleep() is not recommended. The alternative is to use events and signals to control the flow of a program. The reason for this is that programs should not be 'hanging' themselves but instead allow the user to do other tasks while the program is waiting for some new event to process.
while(true) is a brutal way to pause a program, as you've seen. Depending on how you've implemented your 'timer', the actual wait time could have depended on processor speed and the actual hardware you use rather than the equivalent seconds you programmed it for.
All this said, there are still applications for a sleep() function, and it is much better than most while variants. Since sleep functions make use of the system timer they are OS-specific, but most OSes do have some sort of sleep() function. It's just a matter of finding the right header.
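A sketch of such a wrapper, assuming Sleep() from <windows.h> on Windows and POSIX nanosleep() elsewhere (the function name sleep_ms is mine):

```c
/* Portable millisecond sleep: Sleep() on Windows, nanosleep() on POSIX. */
#ifdef _WIN32
#include <windows.h>

static void sleep_ms(unsigned long ms)
{
    Sleep(ms);  /* Windows Sleep() takes milliseconds */
}
#else
#include <time.h>

static void sleep_ms(unsigned long ms)
{
    struct timespec ts;
    ts.tv_sec  = ms / 1000;               /* whole seconds */
    ts.tv_nsec = (ms % 1000) * 1000000L;  /* remainder as nanoseconds */
    nanosleep(&ts, NULL);
}
#endif

#include <stdio.h>

int main(void)
{
    int i;
    for (i = 0; i < 5; i++) {
        printf("tick %d\n", i);
        sleep_ms(200);  /* yield the CPU instead of busy-waiting */
    }
    return 0;
}
```

In C++11 and later, std::this_thread::sleep_for offers the same thing portably without the #ifdef.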
Scientists challenge General Relativity. And Mr. Einstein wins again.
Two tests use cosmic laboratories to question if the laws of physics are universal.
Physicists assume that the laws they discover on Earth hold true throughout the universe and throughout all time. Their faith is only as good as the facts that support it. That support now is a little stronger.
New data show that a mathematical constant that's fundamental to our understanding of particle physics remains the same on Earth today as it did halfway across the universe and billions of years ago. And an unusual object 1,700 light-years away has verified Einstein's general relativity theory of gravity with a type of measurement never made before.
Atoms are made up partly of protons and electrons. The ratio of a proton’s mass to that of an electron is one of the bedrock constants underpinning our standard theory of particle physics. If this – or any other fundamental constant – varied from place to place, it would make a hash of that theory. Hence the ongoing quest to test those constants’ universality.
Michael Murphy at Australia’s Swinburne University of Technology and his colleagues found a new opportunity to explore this idea: radio emissions from the energetic galaxy BO218+367 some 6 billion light-years from us. The emissions passed through ammonia clouds in a neighboring galaxy.
They observed how the ammonia absorbed certain wavelengths in the radio emission spectrum to examine the proton-electron mass ratio of the material that emitted the radio waves in the first place. The ratio checks out at approximately 1836.15, just as it does in earthly labs.
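As a rough check, the quoted ratio follows directly from standard textbook values for the particle masses (these figures are not from the article itself):

$$\frac{m_p}{m_e} \approx \frac{1.67262\times10^{-27}\ \text{kg}}{9.10938\times10^{-31}\ \text{kg}} \approx 1836.15$$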
“We have been able to show that the laws of physics are the same in this galaxy [BO218+367] half way across the visible universe as they are here on Earth,” Dr. Murphy said.
Technical details of this research were published last month in Science. Last Friday, the journal carried details of a new test of Einstein’s 93-year-old gravity theory, the other main pillar of modern physics. About 1,700 light-years from Earth, two very dense rotating dead stars provided the test as they orbit each other. They are separated by only about twice the distance between Earth and the Moon.
Einstein’s general relativity theory predicts that two massive objects in such a tight orbit should affect each other’s rotation in a specific way.
Rene Breton, lead author on the research paper, notes that this is the only known case where two stars of this type orbit each other. Moreover, one star repeatedly eclipses the other as seen from Earth.
“Those eclipses are the key to making a measurement that could never be done before,” Mr. Breton says. It’s a perspective that allows the interaction of the two stars to be seen. The measurements verified the prediction of Einstein’s theory.
However, Breton warns, “It’s not quite right to say that we have now proven general relativity.”
For now, the physicists' faith in the universality of their standard particle physics theory and of general relativity is upheld. They will continue to seek such confirmation. Some will seek it in laboratories such as the new high-energy particle accelerator in Geneva. Others will look to what happened long ago in galaxies far away.
Review of existing Red Fox, Feral Cat, Feral Rabbit, Feral Pig and Feral Goat control in Australia. II. Information Gaps
Ben Reddiex, David M. Forsyth.
Department of the Environment and Heritage, 2004
7. Results and Discussion (continued)
The first stage of this review (Reddiex et al. 2004) showed that there was little reliable knowledge about the benefits of feral rabbit control for native species and ecological communities. In contrast, there is some evidence of the impacts of rabbits on native species and ecological communities for rangelands and higher rainfall areas (see Williams et al. 1995). Feral rabbits are believed to impact on native fauna via direct competition for resources and through behavioural interactions such as exclusion of native animals from feeding areas (Williams et al. 1995). However, few studies have experimentally investigated these potential impacts (but see Robley et al. 2002).
There are reliable methods for estimating the relative abundance (i.e., spotlight counts; Caley and Morley 2002) and absolute abundance of feral rabbits (i.e., mark-recapture; Twigg et al. 2000) in most habitat types, and the effectiveness and costs of control are well known (Williams et al. 1995).
There is limited information on the benefits of feral rabbit control for native species and ecological communities. Studies that have investigated the impacts of feral rabbits on pasture composition and biomass have largely focused on modified agricultural landscapes where few threatened native species are present (e.g., Gooding 1955; Myers and Poole 1963; Croft et al. 2002). The impact of feral rabbits on native plant species has largely been inferred from exclosure studies (e.g., Lange and Graham 1983; Leigh et al. 1989; Henzell 1991). The main limitation of such studies when attempting to infer benefits of feral rabbit control is that eradication is not feasible in mainland areas of Australia (i.e., exclosures have feral rabbit densities that are not possible via conventional control).
In rangelands, the current replacement rate of many shrubs and trees is insufficient to prevent their loss in the long-term. Lange and Graham (1983) studied feral rabbit browsing of arid zone acacia (Acacia spp.) seedlings when feral rabbits were at low densities, and found that only seedlings that were protected from feral rabbits and sheep showed good growth. Several other studies have indicated that feral rabbits may prevent regeneration of many shrub and tree species (e.g., Johnson and Baird 1970; Friedel 1985; Auld 1990; Henzell 1991). In the Gammon Range National Park in South Australia, Henzell (1991) reported that feral rabbits were a critical factor in determining mulga regeneration because they killed nearly all of the seedlings, and Foran et al. (1985) found the same response for Acacia kempeana seedlings. In a replicated field experiment, Mutze et al. (1997) reported that feral rabbit control resulted in higher levels of recruitment of the arid zone shrubs of moderate palatability in South Australia. However, it is extremely difficult to undertake field experiments to assess the benefits of feral rabbit control for regeneration in rangelands as germination and establishment of vegetation in rangelands may only occur at time intervals of 5-50 years, mainly as a response to rainfall (Ireland and Andrew 1992; Williams et al. 1995).
In the Coorong National Park in South Australia, Cooke (1987) reported that feral rabbits prevented regeneration of Acacia longifolia and the sheoak Allocasuarina verticilliata. In Kosciusko National Park, where feral rabbits were excluded two new species of forbes were found in seven years, but where feral rabbits were present there was a loss of nine forb species (Leigh et al. 1987). An exclosure study in the mallee in western Victoria found 17 indigenous species of ground layer plants inside feral rabbit exclosures after 2 years that were not present outside (Cochrane and McDonald 1966). However, other herbivores were present in the study area.
The benefits of a reduction in feral rabbit densities resulting from RHD have been monitored at a number of sites (>10) across Australia (Sandell and Start 1999). Despite most of the sites only being monitored for two years post-RHD all but one of the sites found evidence of native vegetation recovery as a result of reduced feral rabbit abundance (Sandell and Start 1999). The structure of vegetation has been reported to have improved due to regeneration of native trees and shrubs, however floristic changes have been variable and dependent on climatic factors (the results for most of these sites are not available). Sandell (2002) found no evidence of widespread germination of woody seedlings, which is not surprising given the episodic nature of such regeneration in many environments.
Feral rabbits are a known or perceived threat for 84 species listed under the EPBC Act (Table 1): 13 mammals, 13 birds, 1 fish, 1 amphibian, 2 reptiles, and 54 plant species. Few of these species were identified in the above overview. The 54 plant species listed under the EPBC Act for which feral rabbits are a known or perceived threat appear to have that status because feral rabbits either have been observed feeding on those species, or because browse on those species has been attributed to feral rabbits, or species have shown a positive response in areas where feral rabbits are excluded. Hence, there is limited reliable information on the benefits of conventional feral rabbit control for nearly all of the species listed in the EPBC Act for which feral rabbits have been identified as a threat.
We consider that the greatest priority is understanding the benefits of feral rabbit control for native plant species/communities. The next priority would be to determine the indirect impact of feral rabbits on native fauna species.
We advocate an experiment that assesses the functional relationship between feral rabbit density and damage to a combination of native species for which feral rabbits are a known key threatening process (Environment Australia 1999b) and other common native species that feral rabbits may impact upon. Our preferred experimental design is a response surface experiment (Mead 1988), and uses large-scale enclosures to assess the impact of feral rabbit density on native plant species diversity and composition, including seedling survival of planted shrub/tree species. We believe that the alternative approach of comparing vegetation response between feral rabbit control programs and paired non-control areas is less desirable due to potential difficulties in maintaining the desired treatments over extended periods of time and over a large scale, and limited control of other herbivores. The proposed enclosures have the advantage of enabling accurate assessment of feral rabbit densities and therefore relationship to damage, but are also large enough to simulate broad acre conditions (note that enclosures could not be used to simulate broad acre conditions for feral goats and feral pigs).
We suspect that the benefits of differing feral rabbit densities on native vegetation will vary between rangelands and high-rainfall areas (Williams et al. 1995). We therefore suggest conducting the following experiment at sites in each of these two ecosystems. However, we encourage the adoption of this design at as many sites as possible throughout the feral rabbit range. Where possible sites should be selected where published information is available on the dynamics of feral rabbit populations, including changes in abundance of feral rabbits following conventional control, and their associated impacts on native vegetation.
The experimental design would be the same for both ecosystems. The experiment should use a randomised design (see Figure 4), with different feral rabbit densities as the treatments at each site. There should be a minimum of four treatments (i.e., enclosures) at each site.
Figure 4. Experimental design for understanding the relationship between feral rabbit density and damage to native plant species in two ecosystems.
Recommended feral rabbit densities should represent typical feral rabbit densities for the regions studied and for the prevailing environmental conditions, but should include a low density and low-medium density representative of sustained conventional control of feral rabbits, and a medium-high and high density which is representative of uncontrolled feral rabbit populations. Each enclosure should include a number of relatively small exclosures that act as experimental controls. The experiment aims to examine the relationship between feral rabbit control and damage, so the densities within treatments should be maintained in a way that reflects management practice. We suggest the following management: low-density treatment, remove 90-95% of feral rabbits once per year (small populations may be prone to extinction in enclosures and may require intensive management/reintroduction); low-medium density, remove 70-80% of feral rabbits once per year; medium-high density, remove 40-50% of feral rabbits once per year; and high density, no removal.
The reliability of the inferences increases with the number of sites and the number of replicates within each site. However, as long as there are at least five sites in each region there can be a minimum of one experiment in each site (i.e., no replication within sites). Sites should be selected so that they include the plant species predicted to respond to feral rabbit control (either in abundance and/or condition) and where possible include EPBC Act listed species for which feral rabbits are a known or perceived threat (Table 1). All treatments within a site would be undertaken on adjacent areas (see Figure 4), and all treatments should have similar soil types and vegetation composition and structure at the commencement of the study.
The size of each treatment enclosure should be at least 4 ha, but if resources permit we encourage the size of each enclosure to be increased (we have costed this experiment based on 4 ha enclosures). Feral rabbits generally do not forage far from their warrens. Wood et al. (1987) reported an inverse relationship between distance from warrens and the intensity of feeding, with 800kg/ha of forage removed <12m from the warren, 220kg/ha 25 m from the warren and 150kg/ha at 100 m from the warren. Feral rabbit home range differs markedly from one environment to another (range 0.05-4.70 ha; Myers et al. 1994). Each enclosure will be fenced in a manner that prevents feral rabbits from moving outside their intended enclosure, and they will be fenced to a height (c. 1.8 m) that prevents entry of other herbivores that are likely to affect the species of interest (e.g., Henzell 1991; Grice and Barchia 1992). Several fence designs achieve these requirements (review in Long and Robley 2004). Predator control will need to be undertaken around all enclosures throughout the duration of the experiment (predator control has been included in the experiment costing).
Prior to the treatments being imposed, the vegetation within each enclosure should be sampled. Monitoring protocols for assessing grassland species composition and biomass are widely available (e.g., dry-weight-rank technique; Mannetje and Haydock 1963; modified step-point sampling technique; Cunningham 1975). Monitoring of plant composition, condition and biomass should be undertaken quarterly as there are pronounced seasonal variation in many grassland systems.
The impact of feral rabbit densities on the survival of shrubs/tree species would be assessed through monitoring the survival of planted seedlings in all enclosure treatments. As mentioned above, germination and establishment of vegetation in rangelands may only occur at time intervals of 5-50 years, mainly as a response to rainfall (Ireland and Andrew 1992; Williams et al. 1995). Therefore, it is unlikely that establishment of shrub/tree species will occur naturally in the enclosures during the timeframe of the study. The shrub/tree species selected will act as a proxy for species that feral rabbits are believed to prevent regeneration of (e.g., Acacia spp.; Williams et al. 1995) and therefore be similar in palatability and structure. However, we have also costed the addition of a simulated rainfall treatment to this experiment that may enable natural regeneration to occur (i.e., doubling the number of enclosures at each site). This would involve replicating the above enclosures at each site and randomly selecting one block to be irrigated.
The abundance of feral rabbits should be monitored throughout the study (quarterly) using mark-recapture methods. Enclosures should also be inspected at least every fortnight to ensure the fence has not been breached by feral rabbits or other herbivores, and to check for potential incursion points (e.g., overhanging branches that may fall on fences, or proximity to creeks that might erode the fence).
Other covariates should be monitored at each site. For example, rainfall is thought to be important for the germination of some seeds. Hence, a response of feral rabbit control might not occur until a threshold soil moisture has been exceeded. Other covariates might include the abundance of small native herbivores that may enter the enclosures, presence of disease (e.g., myxomatosis and RHD), and temperature.
This design will enable benefit-cost analyses to be undertaken as it will provide a relationship between incremental pest density and incremental damage, without which cost-benefit analyses are tenuous (Fleming et al. 2001).
How long should the experiments run for? The answer will depend on the plant species monitored and the environmental conditions that occur during the experiment. And there is always the possibility of 'demonic intrusion' (e.g., destruction of enclosure fences) ruining even the best design. However, we believe that there should be at least 1 sample in all treatments prior to the commencement of the study to gather accurate baseline information on the response variables that are to be assessed and at least four years of monitoring before the experiment is reviewed to enable sampling of different seasonal conditions. This design also enables the treatments to be reversed (i.e., feral rabbit densities changed between enclosures). The key relationship is that between feral rabbit density and damage. We expect that at least five sites are needed to provide a reasonable confidence interval around this relationship in each ecosystem.
Until study sites are identified, the cost of the experiment can only be considered indicative (Table 4). We estimate that the start-up costs of the experiment for one ecosystem with five sites will be (including overheads) $490K (excludes the simulated rainfall treatment). The annual ongoing cost will be $320K.
Table 4. Indicative costs of the experiment ($000): start-up (year 1), ongoing (year 2 and beyond), and final-year costs, reported (a) excluding and (b) including the simulated rainfall treatment.
1 Assumes 100% overheads, but not all organisations charge overheads.
2 Irrigation costs will depend upon the location of sites.
MIAMI – As Hurricane Isaac barreled toward New Orleans, a team led by University of Miami (UM) Professor and Deep-C (Deep Sea to Coast Connectivity in the Eastern Gulf of Mexico) Co-Principal Investigator Nick Shay was planning NOAA's P-3 aircraft missions to fly into the storm. Dr. Benjamin Jaimes and UM graduate students Jodi Brewster and Ryan Shuster prepared and loaded 39 profilers into the plane. Their goal: to drop these profilers into the storm at optimum locations where they could collect measurements of ocean heat content, salinity and currents during the hurricane.
"We wanted to collect data from the DeSoto Canyon area where the Deepwater Horizon incident occurred, so we could capture the upwelling as it was occurring," said Shay, who is an expert on the Loop Current and regularly studies weather in this region. "We used operational products that we developed for NOAA's National Environmental Satellite Data and Information Service (NESDIS) to study the warm and cold core eddy ahead of the storm to establish drop points and deploy three different types of devices that penetrate to depths of 4,500 feet."
Prior to Hurricane Isaac the team flew over the area and deployed 54 devices to collect baseline oceanic and atmospheric data over the shelf and shelf break. After the storm, the team worked with the flight crew at NOAA's Aircraft Operation Center located at MacDill Air Force Base to deploy another 67 probes and get a post-hurricane snapshot of the area, tying together the responses from several research flights. The information from each of the flights is being analyzed by scientists and used as input into both research models being developed for Deep-C and operational models at forecasting centers.
"From previous hurricanes like Ivan and Frederic we knew this area was prone to upwelling, and deep sea responses to the events taking place in the atmosphere. These areas have high humidity and strong surface wind activity, which may lead to tar balls washing, ashore – which may have the same chemical fingerprint as the oil spill. We are interested in this possibility, and the long term impacts it might have on the coastal ecosystem," said Shay.
Hurricane Isaac presented a unique opportunity to investigate that possibility. Since the Deepwater Horizon accident, Deep-C scientists have visited and revisited sites along the Gulf Coast that were affected by the oil spill. A team scoured the beaches as recently as one week before Hurricane Isaac, looking for samples of oil that may have mixed with sand to create what are referred to as "sand patties." Immediately following the storm, those Deep-C teams led by investigators Dr. Chris Reddy of the Woods Hole Oceanographic Institution and Dr. Wade Jeffrey of the University of West Florida, returned to the beaches to collect additional samples.
"Our intent," Reddy said, "is to determine if these post-storm samples contain oil from the Deepwater Horizon spill and, if so, if they are in fact a result of the upwelling and deep sea responses to the recent hurricane."
One of the ways the fate of the oil can be determined is to study an effect called weathering — that is, how oil that is discharged into the environment changes over time. Weathering affects the properties of spilled oil and according to Reddy, oil from the deep bottom will have weathered differently than samples already on the shore prior to the storm that were simply unearthed or exposed by the winds and rain of Hurricane Isaac.
"We are doing a careful and prudent analysis of the samples found to determine if they are, in fact, Deepwater Horizon oil from the deep sea," Reddy added.
The Deep-C consortium is a long-term, interdisciplinary study investigating the environmental consequences of petroleum hydrocarbon release in the deep Gulf of Mexico on living marine resources and ecosystem health. The consortium focuses on the geomorphologic, hydrologic, and biogeochemical settings that influence the distribution and fate of the oil and dispersants released during the Deepwater Horizon accident, and is using the resulting data for model studies that support improved responses to possible future incidents.
The research was made possible by a grant from The Gulf of Mexico Research Initiative (GoMRI). The GoMRI Research Board is an independent body established by BP to administer the company's 10-year, $500 million commitment to independent research into the effects of the Deepwater Horizon incident. Through a series of competitive grant programs, the GRI is investigating the impacts of the oil, dispersed oil, and dispersant on the ecosystems of the Gulf of Mexico and the affected coastal States in a broad context of improving fundamental understanding of the dynamics of such events and their environmental stresses and public health implications.
The University of Miami's mission is to educate and nurture students, to create knowledge, and to provide service to our community and beyond. Committed to excellence and proud of the diversity of our University family, we strive to develop future leaders of our nation and the world. Founded in the 1940s, the Rosenstiel School of Marine & Atmospheric Science has grown into one of the world's premier marine and atmospheric research institutions. Offering dynamic interdisciplinary academics, the Rosenstiel School is dedicated to helping communities to better understand the planet, participating in the establishment of environmental policies, and aiding in the improvement of society and quality of life. For more information, please visit www.rsmas.miami.edu.
Systems of Three Variables
- Solve this system. And here we have three equations with three unknowns.
- And just so you have a way to visualize this,
- each of these equations would actually be a plane in three dimensions.
- And so you're actually trying to figure out where three planes in three dimensions intersect.
- I won't go into the details here, I'll focus more on the mechanics,
- but you can imagine if I were to draw a three dimensional space over here.
- Now all of a sudden we'll have an x, y, and z axes.
- So you can imagine that maybe this first plane
- and I'm not drawing it the way it might actually look.
- It might look something like that. (I'm just drawing part of the plane.)
- And maybe this plane over here.
- It intersects right over there and it comes popping out like this and then it goes behind it like that.
- It keeps going in every direction, I'm just drawing part of the plane.
- And maybe this plane over here, maybe it does something like this.
- Maybe it intersects over here and over here.
- And so it pops out like that and then it goes below it like that
- and then it goes like that. I'm just doing this for visualization purposes.
- And so the intersection of this plane - the x, y and z coordinates that would satisfy
- all three of these constraints the way I drew them - would be right over here.
- So that's what we're looking for. And a lot of times these three equations with three unknown systems
- will be inconsistent. You won't have a solution here, because it's very possible to have three planes
- that all don't intersect in one place. A very simple example of that is
- well, one, they could all be parallel to each other, or they could intersect each other but maybe they
- intersect each other in kind of a triangle, so maybe one plane looks like that, then another plane maybe
- pops out like that, goes underneath. And then maybe the third plane cuts in.
- It does something like this: where it goes into that plane
- and keeps going out like that, but it intersects this plane over here.
- So you see kind of forms a triangle and they don't all intersect in one point
- so in this situation, you would have an inconsistent system. So with that out of the way, let's try to
- actually solve this system. And the trick here is to try to eliminate one variable at a time from all
- of the equations, making sure that you have the information from all three equations here
- so what we're going to do is we could maybe - it looks like the easiest to eliminate
- since we have a positive y and a negative y and then another positive y
- it seems like we can eliminate the Ys.
- We can add these two equations and come up with another equation
- that will only be in terms of x and z. And then we could use these two equations
- to come up with another equation that will only be in terms of x and z.
- But it will have all of the x and z constraint information embedded in it because
- we're using all three equations. So let's do that. So first let's add these two equations right over here.
- So we have x plus y minus three z is equal to negative ten.
- And x minus y plus two z is equal to three. So over here if we want to eliminate y, we can literally
- just add these two equations. So on the left hand side, x plus x is two x. Y plus negative y cancels out
- And then negative three z plus two z - that gives us just a negative z
- and then we have negative ten plus three, which is negative seven.
- So using these two equations we got
- two x minus z is equal to negative seven - just adding these two equations.
- Now let's do these two equatons. And we can reuse this equation as long as
- we're using new information here. Now we're using the extra constraint of this bottom equation.
- So we have x minus y plus two z is equal to three.
- And we have two x plus y minus z is equal to negative six.
- If we want to eliminate the Ys, we can just add these two equations.
- So x plus two x is three x. Negative y plus y cancels out. Two z minus z - well that is just z.
- And that is going to be equal to three plus negative six, which is negative three.
- So if I add these two equations, I get three x plus z is equal to negative three. Now I have a system
- of two equations with two unknowns. This is a little bit more traditional of a problem. So let me write
- them over here. So we have two x minus z is equal to negative seven. And then we have three x plus z
- is equal to negative three and the way this problem is set up, it gets pretty simple pretty fast, because
- if we just add these two equations, the Zs cancel out. Otherwise if it didn't happen so naturally, we'd
- have to multiply one of these equations, or maybe both of them, by some scaling factor.
- But we can just add these two equations up.
- On the left hand side, two x plus three x is five x. Negative z plus z cancels out.
- Negative seven plus negative three - that is equal to negative ten.
- Divide both sides of this equation by five and
- we get x is equal to negative two. Now we can substitute back to find the other variables.
- Maybe we can substitute back into this equation to figure out what z must be equal to.
- So we have two times x. Two times negative two minus z is equal to negative seven.
- Or negative four minus z is equal to negative seven.
- We can add four to both sides of this equation and then we get
- negative z is equal to negative seven plus four, which is negative three.
- Multiply or divide both sides by negative one and you get z is equal to three. And now we can go and
- substitute back into one of these original equations. So we have x. We know x is negative two.
- So we have negative two plus y, minus three times z.
- Well, we know z is three (so minus three times three)
- should all be equal to negative ten. And now we just solve for y.
- So we get negative two plus y minus nine is equal to negative ten. And so negative two minus nine,
- that's negative eleven. So we have
- y minus eleven is equal to negative ten. And then we can add eleven to both
- sides of this equation. And we get y is equal to negative ten plus eleven, which is one.
- So we're done!
- We've got x is equal to negative two. Z is equal to three and y is equal to one.
- Now I can actually go back and check it.
- Verify that this x, y and z works for all three constraints
- that this three dimensional coordinate lies on all three planes.
- So let's try it out. We've got x is negative two, z is three, y is one.
- So if we substituted - let me do it into each of them - so in this first equation
- that means that we have negative two plus one (remember y was equal to one).
- Let me write it over here - y is equal to one, x is equal to negative two, z is equal to three.
- That was the result we got. Yup, that's the result we got.
- So when we test it into this first one, you have negative two plus one minus three times three.
- So minus nine. This should be equal to negative ten. And it is.
- Negative two plus one is negative one, minus nine is negative ten.
- So it works for the first one. Let's try it for the second equation right over here.
- So we have negative two minus y (so, minus one) plus two times z (so, z is three, so two times three)
- So, plus six needs to be equal to three.
- So this is negative three plus six, which is indeed equal to three.
- So this satisfies the second equation. And then we have the last one right over here!
- We have two times x, so two times negative two, which is negative four. Negative four.
- Plus y, so plus one. Minus z, so minus three. Minus three.
- Needs to be equal to negative six. Negative four plus one is negative three,
- and then you subtract three again. It equals negative six.
- So it satisfies all three equations, so we can feel pretty good about our answer.
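For reference, the system from the video and the solution worked out above can be summarized as:

$$\begin{aligned} x + y - 3z &= -10\\ x - y + 2z &= 3\\ 2x + y - z &= -6 \end{aligned} \qquad\Longrightarrow\qquad (x,\,y,\,z) = (-2,\,1,\,3)$$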
An endangered species is a group of organisms that are at risk of becoming extinct. 40% of all organisms are considered to be endangered. Many countries have created preserves where endangered animals live, and many have also created laws to protect them. However, only a few groups make it to the endangered species list and receive legal protection.
World Conservation Union categories of endangered species include:
Extinct: the last remaining member of the species has died, or is presumed to have died. Examples: Passenger Pigeon, Thylacine (Tasmanian Tiger), Dodo (Bird)
Extinct in the wild: animals in captivity survive, but there is no free-living, natural population. Examples: Alagoas Curassow (pheasant-like bird)
Critically endangered: faces an extremely high risk of extinction in the immediate future. Examples: Javan Rhino, Arakan Forest Turtle, Ivory-billed Woodpecker
Endangered: faces a very high risk of extinction in the near future. Examples: African Wild Dog, Cheetah, Blue Whale, Snow Leopard
Vulnerable: faces a high risk of extinction. Examples: Lion, Wolverine, Gaur (large, dark-coated ox)
Least Concern: no immediate threat. Examples: Nootka Cypress (tree), Brown Rat
The earth rotates about 361 degrees relative to a "fixed" star every 24 hours. Its circumference at the equator is about 24,901.5 miles. The horizontal velocity of the earth's surface at the equator is about 1,000 miles per hour and at the poles is zero miles per hour. Between the equator and the poles, the velocity varies.
Gravity at the surface of the earth is approximately 32 feet per second squared. Nether flow into the earth at the surface of the planet has a velocity of approximately 6.94 miles per second. At the equator, the earth's surface is moving horizontally at a velocity of about .29 miles per second.
The earth orbits the sun at distance of approximately 93 million miles. The mass of the sun is about 333,000 times the mass of earth. The earth moves along its orbit at approximately 18 miles per second. At the distance of the earth's orbit from the sun, the solar gravity is approximately .0194 feet per second squared, and the nether moving toward the sun as part of the sun's gravity funnel, has a velocity of approximately 26 miles per second.
According to figures published in 1984 in a Scientific American publication called The Universe of Galaxies compiled by Paul W. Hodge, the sun and its planets move about the galactic center at a velocity that is approximately 144 miles per second. From this, I calculated an incoming galactic nether velocity at the sun's orbital distance of 203.6468 miles/second and a gravity from the galactic center of about 2.03647 × 10⁻¹⁵ miles/second². These calculations were based upon an equation found in Gravity Equations on this website which allows one to compute nether velocity past an orbit from the orbital velocity.
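For what it's worth, the quoted figures are numerically consistent with the inflow speed being √2 times the orbital speed (the familiar escape-velocity relation); this reading is inferred from the numbers and is not stated in the text:

$$v_{\text{in}} = \sqrt{2}\,v_{\text{orb}}:\qquad \sqrt{2}\times 144 \approx 203.65\ \text{mi/s},\qquad \sqrt{2}\times 18 \approx 25.5\ \text{mi/s}\ (\approx 26)$$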
The Milky Way Galaxy (our galaxy) is probably orbiting a common center for a number of other galaxies and the incoming nether velocity for this common center may be even greater than the 203 miles per second moving toward our galactic center. Also, there will be a relative nether velocity between our system of galaxies and the expansion from the center for the Big Bang.
Each large component of a resultant nether velocity are affected by more local components. The relative velocity of a stationary ether would have varied according to where the earth was in its orbit at the time it was measured. The speed of the sun about the galactic center, the speed of the earth in orbit relative to the sun, the horizontal speed of the earth's surface, etc. would have interacted to cause variations in relative ether velocity that would have been measurable with the Michelson-Morley type of experiment. However, the ether is not fixed and it is moving into the earth, the sun, and the galactic center.
Furthermore, the earth's constituent subatomic entities (vorticles) are actually vortices of nether which combine to make a gravity funnel in which nether flows toward the planet. Instead of a ball, the planet is a giant vortex which "sucks" nether into it. And, yes, there is some degree of "entrainment" in the sense that the sucking pulls ether slightly sideways as it flows downward. There is no "shearing" action between the planetary surface and the incoming nether - the two are not separate as would have been the case with a big ball moving through a stationary ether.

Consequently, the relative velocities found in MM type ether detection have been much lower than were expected, and the relative nether velocity vector due to the earth's rotation is exceedingly small at its greatest.
The Effect of Direction
The MM type of experiment uses a horizontal table. One of my brighter friends once asked if the table should not have been vertical. Perhaps he was right. To find the resultant incoming nether velocity and direction, the table should be tilted and rotated until the maximum reading can be found.
The Effect of Compression
Nether flowing into a gravity funnel is compressing in two dimensions which are tangent to the surface of any theoretical sphere (funnel cross-section) with a large mass at its center. When there is relative nether velocity such as that caused by the earth's orbital motion about the sun, any relative tangential velocity is reduced beyond the point at which it can be measured - because of the nether compression.
Think of a car moving along a track that is circular with a radius of 100 yards. The car is moving at a velocity of 100 feet per second (about 68 miles per hour). We are looking at this motion from above in a helicopter. We can make measurements to see that the car is moving at 100 feet per second. Now, suppose that we compress the scene (just as nether is compressed when it moves into a gravity funnel) so that the track has half the radius (50 yards). We now notice that we measure the car's velocity as only 50 feet per second (half of what it was). Now we compress the track a bit more until its radius is only 25 yards long (one-quarter of what it was) and we measure the car's velocity as only 25 feet per second. We continue to compress the track until it has a radius of only one inch. It now has a radius that is 1/3600 times as long as it was at the beginning. The speed of the car has been reduced to 1/3 inch per second (.0189 miles per hour). However, the speed of the car is now lower than we can measure with our apparatus, so we mistakenly conclude that the car is not moving at all.
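In the analogy's own terms, the measured speed scales in direct proportion to the compressed radius; a quick check of the numbers quoted above (this is the analogy's arithmetic, not standard physics):

$$v' = v\,\frac{r'}{r}:\qquad 100\ \tfrac{\text{ft}}{\text{s}}\times\frac{1\ \text{in}}{3600\ \text{in}} \approx 0.028\ \tfrac{\text{ft}}{\text{s}} = \tfrac{1}{3}\ \tfrac{\text{in}}{\text{s}} \approx 0.019\ \text{mph}$$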
Nether compression tends to reduce the velocity of nether in a horizontal direction until it is lower than we can measure with our apparatus.
The relative nether velocity is only one vector of the resultant relative nether movement, and the vector of radial velocity, which may have been quite small at a distance, is actually expanded to become much larger as the nether approaches earth. So at the earth's surface, the inward moving nether has a very high velocity vector and the tangential vector is too small for us to notice.
The Effect of Density Increase
This is another way of explaining the effect of nether compression. Nether remote from a gravity funnel has a relatively low density. As it approaches a gravity funnel, the density of the nether increases. Momentum, the product of Mass and velocity, must remain the same. As Mass increases, the velocity must decrease. Therefore, the relative velocity of the nether, due to the planetary motion within it, is much lower than expected when measured within the planetary gravity funnel.
The Effect of Vorticle Energy Conservation
The inflow of nether causes the subatomic entities which compose the planet to orient themselves in the most energy-saving direction. This means that each entity adjusts its orientation slightly in the direction of relative incoming nether velocity as compared to what it otherwise would be. Nether and the vorticles are one. Although a horizontal component of nether motion may be detectable between two widely separated horizontal planes, at any single horizontal plane, the horizontal vector is almost too small to measure.
These effects are all common to the concept of a gravity funnel. However, they were not expected by Michelson and Morley. Other scientists of their day, expecting a stationary and separate ether, had no idea that it was their own tendency to dictate to the universe which prevented their enlightenment.
As an example, the vector of nether inflow velocity to the sun is about 26 miles per second at the distance of the earth's orbit (93 million miles from the sun). At a distance of many earth diameters from our planet, this inflow will begin to be noticeably affected by the earth's gravity because the nether will be at a slightly greater density than the solar inflow would be normally at 93 million miles. If the density is ten percent more, the inflow velocity will be ten percent less because of conservation of momentum (Mass times velocity in this case). As the nether passes the zone of the distant earth's gravity in its journey to the sun, it will speed up again to its higher velocity because the density will have dropped down.
This interchange between Mass and nether velocity will become more pronounced the closer that the passing nether moves to the earth. At one point, the velocity of the passing nether will be less than that of the nether velocity into the earth's gravity funnel. Here, it will be taken in by the earth mass as will be the case with any that passes more closely to the earth. What we will see on the earth's surface as the passing nether is overpowered by the earth inflow, is only the earth inflow - unless we have very sensitive instruments to detect the slight sideward vector.
This same effect will be felt with the solar gravity funnel versus the galactic gravity funnel, the galactic gravity funnel versus the intergalactic gravity funnel, and so forth. If we were to send a probe to a point well outside our circle of immediate galaxies, and if that probe were able to measure nether velocity, we might get a decent measurement - but this is, of course, an absurdity.
Mysterious Birth of Common Pollutants Revealed
Airborne particles and droplets called aerosols can make for colorful sunsets. Above, a sunset during an aerosol-ejecting Colorado wildfire in June 2012.
CREDIT: Brian Emory
The chaotic steps that give rise to microscopic particles in the atmosphere called aerosols were witnessed for the first time in a verdant forest in Finland, an important step in understanding how the particles affect Earth's climate.
Aerosols are solid and liquid droplets tiny enough to float in the air. They can come from soot, dust and chemicals from cars, factories and farming, or natural sources like deserts, sea spray and plants. The particles are a major pollution source, and can affect human health.
How aerosols form, and their role in climate, remains poorly understood, but scientists would like to know more so they can better understand the implications for future climate change. Aerosols seed cloud formation and can reflect the sun's heat, cooling the Earth, said Markku Kulmala, an aerosol physicist at the University of Helsinki in Finland and lead author of the study of aerosol formation. The study is detailed in today's (Feb. 21) issue of the journal Science.
In the Hyytiälä forest in Finland, set aside decades ago to monitor nuclear fallout from the 1986 Chernobyl disaster, Kulmala and his colleagues built the world's most sensitive aerosol-particle detector. The instrument helped them watch the smallest aerosol precursors in the atmosphere, which had never been seen before.
How aerosols form
The instrument saw that as gas molecules of sulfuric acid smashed together with organic molecules, they formed incredibly small clusters, less than two nanometers in diameter. Lined up side-by-side, about 25,000 of these clusters would still be smaller than the width of a human hair.
The neutrally charged clusters grew slowly at first, until they reached a critical size (about 3 nanometers), the study found. Then, in a burst of activity, the neutral clusters quickly added a heavy coat of organic molecules. "What is most exciting is that the growth of small clusters [is] size-dependent," Kulmala told OurAmazingPlanet in an email interview. "This means that the formation of new aerosol particles is limited by the vapors participating on the growth of 1.5- to 3-nanometer particles."
Understanding the buildup of clusters, and how they grow, is key to predicting aerosol formation and their effect on climate. "The importance of neutral clusters and their growth has significant effect on [the] global aerosol load, and also to global cloud droplet concentrations," Kulmala said.
Impacts on climate
The study site is boreal forest, which covers about 8 percent of the planet's northern latitudes and is expected to expand with global warming. [Top 10 Surprising Results of Global Warming]
The aerosol-forming processes in the planet's tropical forests and urban regions may be different. "We still have to see if the results can be generalized to other places," said atmospheric chemist Meinrat Andrae of the Max Planck Institute for Chemistry in Germany, who was not involved in the study.
Andrae also cautioned that the small particles analyzed in the study must grow bigger before they can affect health or climate. "The particles that are formed at this step (a few nanometers in size) are still a long way away from the size range where they have climate or health relevance," he told OurAmazingPlanet in an email interview.
| <urn:uuid:ef0f05d3-bd38-460d-a4be-e6f776d588d4> | 3.765625 | 751 | News Article | Science & Tech. | 38.979261 | 2,372 |
void *dlopen(const char *file, int mode);
The dlopen() function shall make an executable object file specified by file available to the calling program. The class of files eligible for this operation and the manner of their construction are implementation-defined, though typically such files are executable objects such as shared libraries, relocatable files, or programs. Note that some implementations permit the construction of dependencies between such objects that are embedded within files. In such cases, a dlopen() operation shall load such dependencies in addition to the object referenced by file. Implementations may also impose specific constraints on the construction of programs that can employ dlopen() and its related services.
A successful dlopen() shall return a handle which the caller may use on subsequent calls to dlsym() and dlclose(). The value of this handle should not be interpreted in any way by the caller.
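As an illustration of this handle pattern, Python's ctypes module wraps dlopen() and dlsym() on POSIX systems; a minimal sketch, assuming a Linux system where the C math library is available under the soname "libm.so.6":

import ctypes

libm = ctypes.CDLL("libm.so.6", mode=ctypes.RTLD_GLOBAL)  # dlopen() under the hood
cos = libm.cos                        # resolved via dlsym(handle, "cos")
cos.restype = ctypes.c_double         # declare the C signature for ctypes
cos.argtypes = [ctypes.c_double]
print(cos(0.0))                       # 1.0; the handle is released when libm is collected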
The file argument is used to construct a pathname to the object file. If file contains a slash character, the file argument is used as the pathname for the file. Otherwise, file is used in an implementation-defined manner to yield a pathname.
If the value of file is 0, dlopen() shall provide a handle on a global symbol object. This object shall provide access to the symbols from an ordered set of objects consisting of the original program image file, together with any objects loaded at program start-up as specified by that process image file (for example, shared libraries), and the set of objects loaded using a dlopen() operation together with the RTLD_GLOBAL flag. As the latter set of objects can change during execution, the set identified by handle can also change dynamically.
Only a single copy of an object file is brought into the address space, even if dlopen() is invoked multiple times in reference to the file, and even if different pathnames are used to reference the file.
The mode parameter describes how dlopen() shall operate upon file with respect to the processing of relocations and the scope of visibility of the symbols provided within file. When an object is brought into the address space of a process, it may contain references to symbols whose addresses are not known until the object is loaded. These references shall be relocated before the symbols can be accessed. The mode parameter governs when these relocations take place and may have the following values:
Any object loaded by dlopen() that requires relocations against global symbols can reference the symbols in the original process image file, any objects loaded at program start-up, from the object itself as well as any other object included in the same dlopen() invocation, and any objects that were loaded in any dlopen() invocation and which specified the RTLD_GLOBAL flag. To determine the scope of visibility for the symbols loaded with a dlopen() invocation, the mode parameter should be a bitwise-inclusive OR with one of the following values:
If neither RTLD_GLOBAL nor RTLD_LOCAL are specified, then an implementation-defined default behavior shall be applied.
If a file is specified in multiple dlopen() invocations, mode is interpreted at each invocation. Note, however, that once RTLD_NOW has been specified all relocations shall have been completed rendering further RTLD_NOW operations redundant and any further RTLD_LAZY operations irrelevant. Similarly, note that once RTLD_GLOBAL has been specified the object shall maintain the RTLD_GLOBAL status regardless of any previous or future specification of RTLD_LOCAL, as long as the object remains in the address space (see dlclose() ).
Symbols introduced into a program through calls to dlopen() may be used in relocation activities. Symbols so introduced may duplicate symbols already defined by the program or previous dlopen() operations. To resolve the ambiguities such a situation might present, the resolution of a symbol reference to symbol definition is based on a symbol resolution order. Two such resolution orders are defined: load ordering and dependency ordering. Load order establishes an ordering among symbol definitions, such that the definition first loaded (including definitions from the image file and any dependent objects loaded with it) has priority over objects added later (via dlopen()). Load ordering is used in relocation processing. Dependency ordering uses a breadth-first order starting with a given object, then all of its dependencies, then any dependencies of those, iterating until all dependencies are satisfied. With the exception of the global symbol object obtained via a dlopen() operation on a file of 0, dependency ordering is used by the dlsym() function. Load ordering is used in dlsym() operations upon the global symbol object.
When an object is first made accessible via dlopen() it and its dependent objects are added in dependency order. Once all the objects are added, relocations are performed using load order. Note that if an object or its dependencies had been previously loaded, the load and dependency orders may yield different resolutions.
The symbols introduced by dlopen() operations and available through dlsym() are at a minimum those which are exported as symbols of global scope by the object. Typically such symbols shall be those that were specified in (for example) C source code as having extern linkage. The precise manner in which an implementation constructs the set of exported symbols for a dlopen() object is specified by that implementation.
If file cannot be found, cannot be opened for reading, is not of an appropriate object format for processing by dlopen(), or if an error occurs during the process of loading file or relocating its symbolic references, dlopen() shall return NULL. More detailed diagnostic information shall be available through dlerror() .
No errors are defined.
The following sections are informative.
dlclose(), dlerror(), dlsym(), the Base Definitions volume of IEEE Std 1003.1-2001, <dlfcn.h> | <urn:uuid:ebe02864-3691-4a1d-abf8-3a23d74e1974> | 2.796875 | 1,236 | Documentation | Software Dev. | 26.429557 | 2,373 |
Assessing the ecological importance of clouds has substantial implications for our basic understanding of ecosystems and for predicting how they will respond to a changing climate. This study was conducted in a coastal Bishop pine forest ecosystem that experiences regular cycles of stratus cloud cover and inundation in summer. The study concludes that clouds are important to the ecological functioning of these coastal forests, providing summer shading and cooling that relieve pine and microbial drought stress as well as regular moisture inputs that elevate plant and microbial metabolism.
Mariah S. Carbone, A. Park Williams, Anthony R. Ambrose, Claudia M. Boot, Eliza S. Bradley, Todd E. Dawson, Sean M. Schaeffer, Joshua P. Schimel, Christopher J. Still
Global Change Biology, November 7, 2012 (online)
UCSB press release (includes video)
Featured Summary of this research project
Following is a sample of the media coverage of this study:
Red Orbit: Climate Change Could Affect Entire Forest Ecosystems
More information about this project's research | <urn:uuid:1620e373-382e-443f-9934-f72fad9f2c14> | 3.140625 | 211 | Knowledge Article | Science & Tech. | 34.676667 | 2,374 |
(PHP 4 >= 4.2.0, PHP 5)
pcntl_exec — Executes specified program in current process space
Executes the program with the given arguments.
path must be the path to a binary executable, or a script whose first line is a shebang pointing to a valid executable (for example, #!/usr/local/bin/perl). See your system's execve(2) man page for additional information.
args is an array of argument strings passed to the program.
envs is an array of strings which are passed as environment to the program. The array is in the format of name => value, the key being the name of the environmental variable and the value being the value of that variable.
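A sketch of the same execve(2) semantics in Python terms (an analogy rather than PHP; the program path and environment below are illustrative):

import os

args = ["/bin/ls", "-l", "/tmp"]   # argv[0] is conventionally the program name
env = {"LC_ALL": "C"}              # name => value pairs, as with envs here
os.execve(args[0], args, env)      # replaces the process image; no return on success
print("unreachable: execve raises OSError instead of returning on failure")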
Returns FALSE on error and does not return on success. | <urn:uuid:bc016635-d1b3-438e-84de-0d3ae859dda6> | 2.640625 | 170 | Documentation | Software Dev. | 64.960188 | 2,375 |
(Reuters) - Scientists at Europe's CERN research centre have found a new subatomic particle, a basic building block of the universe, which appears to be the boson imagined and named half a century ago by theoretical physicist Peter Higgs.
"We have reached a milestone in our understanding of nature," CERN director general Rolf Heuer told a gathering of scientists and the world's media near Geneva on Wednesday.
"The discovery of a particle consistent with the Higgs boson opens the way to more detailed studies, requiring larger statistics, which will pin down the new particle's properties, and is likely to shed light on other mysteries of our universe."
Two independent studies of data produced by smashing proton particles together at CERN's Large Hadron Collider produced a convergent near-certainty on the existence of the new particle. It is unclear whether it is exactly the boson Higgs described.
But addressing scientists assembled in the CERN auditorium, Heuer posed them a question: "As a layman, I would say I think we have it. Would you agree?" A roar of applause said they did.
For some, there was no doubt the Higgs boson is found: "It's the Higgs," said Jim Al-Khalili of Surrey University, a British physicist and popular broadcaster. "The announcement from CERN is even more definitive and clear-cut than most of us expected."
Higgs, now 83, from Edinburgh University was among six theorists who in the early 1960s proposed the existence of a mechanism by which matter in the universe gained mass. Higgs himself argued that if there were an invisible field responsible for the process, it must be made up of particles.
He and some of the others were at CERN to welcome news of what, to the embarrassment of many scientists, some commentators have labeled the "God particle", for its role in turning the Big Bang into a living universe. Clearly overwhelmed, his eyes welling up, Higgs told the symposium of fellow researchers: "It is an incredible thing that it has happened in my lifetime."
Also read more and join the discussion in an article by Lawrence Krauss on Slate that was posted earlier today.
| <urn:uuid:e3acee0b-841d-49c4-94d5-fa92cd1bb62d> | 2.640625 | 465 | News Article | Science & Tech. | 41.007504 | 2,376 |
Apr. 9, 2008 The Texas Petawatt laser reached greater than one petawatt of laser power on Monday morning, March 31, making it the highest-powered laser in the world, said Todd Ditmire, a physicist at The University of Texas at Austin. The Texas Petawatt is the only operating petawatt laser in the United States.
Ditmire says that when the laser is turned on, its power output is more than 2,000 times that of all power plants in the United States combined. (A petawatt is one quadrillion watts.) The laser is brighter than sunlight on the surface of the sun, but it lasts only for an instant, a 10th of a trillionth of a second (0.0000000000001 second).
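Those two figures together imply a surprisingly modest energy per shot; a rough check in Python, assuming exactly one petawatt:

power_w = 1.0e15       # one petawatt, the threshold quoted above
duration_s = 1.0e-13   # a tenth of a trillionth of a second
print(power_w * duration_s, "joules per pulse")   # ~100 J: enormous power, modest energy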
Ditmire and his colleagues at the Texas Center for High-Intensity Laser Science will use the laser to create and study matter at some of the most extreme conditions in the universe, including gases at temperatures greater than those in the sun and solids at pressures of many billions of atmospheres.
This will allow them to explore many astronomical phenomena in miniature. They will create mini-supernovas, tabletop stars and very high-density plasmas that mimic exotic stellar objects known as brown dwarfs.
"We can learn about these large astronomical objects from tiny reactions in the lab because of the similarity of the mathematical equations that describe the events," said Ditmire, director of the center.
Such a powerful laser will also allow them to study advanced ideas for creating energy by controlled fusion.
The Texas Petawatt was built with funding provided by the National Nuclear Security Administration, an agency within the U. S. Department of Energy.
| <urn:uuid:05a839af-caa8-4e4e-8582-d01fd6d44523> | 3.40625 | 399 | Truncated | Science & Tech. | 43.220242 | 2,377 |
Florida is the state where the most lightning flashes strike the ground. However, the number of injuries and deaths caused by lightning do not always occur in the same place. For example, Colorado is ranked 24th in the number of lightning strikes but is ranked 10th in the number of deaths caused by lightning. This is because a lot of people hike and camp in the exposed, lightning-prone mountains in the western half of the state.
Remember, if you hear thunder GO INDOORS and remain there at least 30 minutes after the LAST clap of thunder is heard.
To discover more about lightning, go to JetStream - an Online School for Weather. | <urn:uuid:657c2196-7e85-4061-b8c3-cda01e2de3b9> | 2.921875 | 170 | Knowledge Article | Science & Tech. | 64.6025 | 2,378 |
The Pre-historic Edge
Nicole Casal-Moore, November 7, 2012
Image caption: Ancient Bear Dog - Magericyon anceps
The fossilized fangs of saber-toothed cats hold clues to how the extinct mammals shared space and food with other large predators 9 million years ago.
Led by the University of Michigan and the Museo Nacional de Ciencias Naturales in Madrid, a team of paleontologists has analyzed the tooth enamel of two species of saber-toothed cats and a bear dog unearthed in geological pits near Madrid. Bear dogs, also extinct, had dog-like teeth and a bear-like body and gait.
The researchers found that the cat species—a leopard-sized Promegantereon ogygia and a much larger, lion-sized Machairodus aphanistus—lived together in a woodland area. They likely hunted the same prey—horses and wild boar. In this habitat, the small saber-toothed cats could have used tree cover to avoid encountering the larger ones. The bear dog hunted antelope in a more open area that overlapped the cats' territory, but was slightly separated.
"These three animals were sympatric—they inhabited the same geographic area at the same time. What they did to coexist was to avoid each other and partition the resources," said Soledad Domingo, a postdoctoral fellow at the U-M Museum of Paleontology and the first author of a paper on the findings published in the Nov. 7 edition of Proceedings of the Royal Society B.
Millions of years before the first humans, the predators lived during the late Miocene Period in a forested area that had patches of grassland. Large carnivores such as these are rare in the fossil record, primarily because plant-eating animals lower on the food chain have outnumbered meat-eaters throughout history.
Cerro de los Batallones, where Domingo has been excavating for the past eight years, is special. Of its nine sites, two are ancient pits with an abundance of meat-eating mammal bones. Agile predators, the researchers say, likely leapt into the natural traps in search of trapped prey.
"These sites offer a unique window to understand life in the past," Domingo said.
To arrive at their findings, the researchers conducted what's called a stable carbon isotope analysis on the animals' teeth. Using a dentist's drill with a diamond bit, they sampled teeth from 69 specimens, including 27 saber-toothed cats and bear dogs. The rest were plant-eaters. They isolated the carbon from the tooth enamel. Using a mass spectrometer, which you could think of as a type of scale, they measured the ratio of the more massive carbon 13 molecules to the less-massive carbon 12. An isotope is a version of an element that contains a different number of neutrons in its nucleus.
Carbon 12 and 13 are both present in the carbon dioxide that plants take in during photosynthesis. Different plants make use of the isotopes in different ways, and so they retain different amounts of them in their fibers. When an herbivore eats a plant, that plant leaves an isotopic signature in the animal's bones and teeth. The signature travels through the food chain and can be found in carnivores as well.
"This would be the same in your tooth enamel today," Domingo said. "If we sampled them, we could have an idea of what you eat. It's a signature that remains through time."
Because the researchers can tell what the herbivores ate, they can surmise what their habitat was like. They believe the animals in this study lived in a wooded area that contained patches of grassland.
The cats showed no significant difference in their stable carbon isotope ratios. That means they likely fed on the same prey and lived in the same habitat, but the study posits that each species fed on different-sized prey.
The findings demonstrate the timelessness of predator-prey relationships.
"The three largest mammalian predators captured prey in different portions of the habitat, as do coexisting large predators today. So even though none of the species in this 9-million year old ecosystem are still alive today (some of their descendants are), we found evidence for similar ecological interactions as in modern ecosystems," said Catherine Badgley, co-author of the new study and assistant professor of ecology and evolutionary biology. | <urn:uuid:f7de6a27-9105-45ba-91ff-cee4810daab0> | 3.8125 | 939 | News Article | Science & Tech. | 42.814066 | 2,379 |
The fuel in all three Units is thought to have at least partially melted down despite the pumping of sea water and boric acid into the Units
The crisis at the three Fukushima Daiichi nuclear reactors did not come from buildings collapsing in the magnitude-9 earthquake of March 11 but from the power failure that followed it. The tsunami knocked out the generators that produced the power. Lack of power in turn caused the cooling systems of the reactors to fail.
The Fukushima nuclear reactor 1 went critical in March 1971 and is a 460 MW reactor. Unit-2 and Unit-3 are 784 MW each and went critical in July 1974 and March 1976 respectively. All three are Boiling Water Reactors (BWR) and use demineralised water for cooling nuclear fuel.
The fuel, in the form of pellets, is kept inside a casing called cladding. The cladding is made of zirconium alloy, and it completely seals the fuel. Fuel pins in the form of bundles are kept in the reactor core. Heat is generated in the reactor core through a fission process sustained by chain reaction.
The fuel bundles are placed in such a way that the coolant can easily flow around the fuel pins. The coolant never comes in direct contact with the fuel as the fuel is kept sealed inside the zirconium alloy cladding. The coolant changes into steam as it cools the hot fuel. It is this steam that generates electricity by driving the turbines.
All the heat that is produced by nuclear fission is not used for producing electricity. The efficiency of a power plant, including nuclear, is not 100 per cent. In the case of a nuclear power plant the efficiency is 30-35 per cent. “About 3 MW of thermal energy is required to produce 1 MW of electrical energy. Hence for the 460 MW Unit-1, 1,380 MW of thermal energy is produced,” said Dr. K.S. Parthasarathy, former Secretary, Atomic Energy Regulatory Board, Mumbai. “This heat has to be removed continuously.”
In the case of the Fukushima units, demineralised water is used as coolant. Uranium-235 is used as fuel in Unit-1 and Unit-2, and MOX (a mixture of oxides of Uranium-Plutonium-239) is used as fuel in Unit-3.
Since a very high amount of heat is generated, the flow of the coolant should never be disrupted. But on March 11, pumping of the coolant failed as even the diesel generator failed after an hour's operation.
Though the power producing fission process was stopped by using control rods that absorbed the neutrons immediately after the quake, the fuel still contains fission products such as iodine-131 and caesium-137 and activation products such as plutonium-239.
“These radionuclides decay at different timescales, and they continue to produce heat during the decay period,” Dr. Parthasarathy said.
The heat produced by radioactive decay of these radionuclides is called “decay heat.”
“Just prior to the shut down of the reactor the decay heat is 7 per cent. It reduces exponentially, to about 2 per cent in the first hour. After one day, the decay heat is about 1 per cent. Then it reduces very slowly,” he said.
While the uranium fission process can be stopped and heat generation can be halted, there is no way of stopping radioactive decay of the fission products.
Apart from the original heat, the heat produced continuously by the fission products and activation products has to be removed even after the uranium fission process has been stopped.
Inability to remove this heat led to a rise in coolant temperature. According to the Nature journal, when the temperature reached around 1,000 degrees C, the zirconium alloy that encased the fuel (cladding) probably began to melt or split apart. "In the process it reacted with the steam and created hydrogen gas, which is highly volatile," Nature notes.
Though the pressure created by hydrogen gas was reduced by controlled release, the massive build-up of hydrogen led to the explosion that blew the roof of the secondary confinement (outer buildings around the reactor) in all the three units (Unit-1, Unit-2 and Unit-3). The reactor core is present inside the primary containment.
But the real danger arises from fuel melting. This would happen following the rupture of the zirconium casing. “If the heat is not removed, the zirconium cladding along with the fuel would melt and become liquid,” Dr. Parthasarathy explained. The government has said that fuel rods in Unit-3 were likely already damaged.
Effect of melted fuel
Melted fuel is called “corium.” Since melted fuel is at a very high temperature it can even “burn through the concrete containment vessel.”
According to Nature, if enough melted fuel gathers outside the fuel assembly it can “restart the power-producing reactions, and in a completely uncontrolled way.”
What may result is a “full-scale nuclear meltdown.”
Pumping of sea-water is one way to reduce the heat and avoid such catastrophic consequences. The use of boric acid, which is an excellent neutron absorber, would reduce the chances of nuclear reactions restarting even if the fuel is found loose inside the reactor core. Both these measures have been resorted to in all three Units. Despite these measures, the fuel rods were found exposed in Unit-2 on two occasions.
Fate of reactor core
While the use of sea-water can prevent fuel melt, it makes the reactor core completely useless due to corrosion.
The case of Unit-4 is different from the other three units. Unlike Unit-1, 2 and 3, Unit-4 is under maintenance and the core has been taken out, and the spent fuel rods are kept in the cooling pond.
Whatever caused the water level to drop, the storage pond caught fire on March 15, possibly due to a hydrogen explosion. The radioactivity was released directly into the atmosphere.
Spent fuel fate unknown
It is not known if the integrity of the cladding has already been compromised and the fuel exposed. Since the core of a Boiling Water Reactor (BWR) is removed only about once a year, the pond holds a correspondingly large number of spent rods.
If the fuel is indeed exposed, a fuel melt is very likely. Though the fuel will be at a lower temperature than inside a working reactor, there is still a chance of it melting.
Since it does not have any containment, unlike the fuel inside a reactor, the consequences of a fuel melt would be severe: radioactivity would be released directly into the atmosphere. Radioactivity of about 400 milliSv/hour was reported at the site immediately after the fire. | <urn:uuid:6f1b2417-0b03-45dc-b3bf-922974d30ccf> | 3.5625 | 1,446 | Knowledge Article | Science & Tech. | 53.516006 | 2,380 |
Hawaii Experimental Tropical Forest selected as a candidate core site for the National Ecological Observatory
University of Hawaiʻi
Boone Kauffman, (808) 933-8121
Institute of Pacific Islands Forestry
The Laupahoehoe unit of the new Hawaiʻi Experimental Tropical Forest has been selected by the National Ecological Observatory Network to be funded by the National Science Foundation as one of twenty Ecological Observatories to be established across the country. The documents leading to its selection were authored by a group of more than 80 researchers, educators and land managers and led by scientists from the U.S. Forest Service Institute of Pacific Islands Forestry and the Universities of Hawaiʻi at Manoa and Hilo.
The National Ecological Observatory Network (NEON) is a 30-year-long project designed to thoroughly describe and monitor natural landscapes — including the vegetation, animals, streams and climate across a wide range of ecosystems within the United States. The Laupahoehoe forest on Hawaiʻi island was the only tropical rain forest selected as an observatory.
"The Laupahoehoe unit of the HETF — consisting of 12-thousand acres on the windward slope of Mauna Kea — is an ideal location for a National Ecological Observatory," says Boone Kauffman, Director of the Institute of Pacific Islands Forestry. "It encompasses a remarkable diversity of forests, streams, and rare and endangered species that make it the perfect place to learn about Hawaiʻi ecosystems."
"Selection of the Laupahoehoe site is very important for the State of Hawaiʻi because it will allow us to find answers to some of our islands‘ most serious and pressing environmental issues," says Becky Ostertag, a UH Hilo science co-leader of the project. "These include climate change, biodiversity loss, invasive species and much more."
"The Ecological Observatory will greatly facilitate efforts in Hawaiʻi to conduct ecological research and monitoring that could help shape land management decisions and greatly improve educational opportunities for students of all ages," says Lawren Sack, NEON project co-leader from UH Manoa. "The project will link important new ecological research in Hawaiʻi with research at other sites across the U.S., enabling local researchers to contribute to the global effort to track, understand and ameliorate the impacts of global change on ecosystems."
"The NSF first proposed the NEON a number of years ago, and we are pleased that progress is being made," says Gary Ostrander, Vice Chancellor for Research and Graduate Education at UH Manoa. "This will be a unique opportunity for partnerships among the University of Hawaiʻi, other universities and government agencies."
NEON will provide educational opportunities for students and community members to increase their environmental awareness and understanding as research results from Hawaiʻi forest observations become available.
The Pacific NEON coordinating committee is composed of scientists from the University of Hawaiʻi at Manoa, the University of Hawaiʻi at Hilo and the U.S. Department of Agriculture Forest Service Institute of Pacific Islands Forestry.
For more information, visit: http://www.neoninc.org | <urn:uuid:db0654c8-252d-4503-a2ff-5f6058caa50a> | 2.515625 | 663 | News (Org.) | Science & Tech. | 13.249273 | 2,381 |
A massive weather formation has developed that is nearly nine thousand miles long, stretching from the equatorial Pacific into the northern part of the northern hemisphere, driving moist tropical air from the Pacific deep into the United States.
The formation is causing flooding in previously drought-afflicted Texas, and is driving a flow of warm, extremely moist air far to the north. It is unusual for a weather system this large to develop in earth's atmosphere, and its presence could be another sign of climate change.
In The Coming Global Superstorm, Art Bell and Whitley Strieber referred to theories that very large scale atmospheric phenomena like this would develop as the earth warmed, and the storm described in the book starts when a gigantic flow of warm, moist tropical air drives into the far north, a flow that is interrupted when ocean currents cease to support it. There is then a collision between the tropical air mass and cold air pouring down from the arctic in the wake of the collapsed currents.
There has been evidence that surface features of the North Atlantic Current have been weakening since 1999, and the recent severe storms that swept northern Europe may be connected with this process.
At the present time, arctic weather conditions are relatively normal for this time of year. However, if a significant temperature spike should now appear in the arctic, it would be cause for concern. In any case, the penetration of warm, humid tropical air so deeply into the northern hemisphere at this time of year sets the stage for further dramatic weather over the next few weeks, possibly taking the form of strong blizzards across the midwest.
| <urn:uuid:ea4e5e22-e73c-42fc-a56a-a4c6931061d8> | 3.6875 | 343 | News Article | Science & Tech. | 35.314649 | 2,382 |
The molar volume is equal to the atomic weight divided by the density.
The molar volume is also known as the atomic volume.
The standard SI units are m3. Normally, however, molar volume is expressed in units of cm3. To convert quoted values to m3, divide by 1000000.
The molar volume depends upon density, phase, allotrope, and temperature. Values here are given, where possible, for the solid at 298 K.
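A minimal sketch of that definition and the unit conversion, using handbook values for iron purely as an example:

def molar_volume_cm3(atomic_weight_g_mol, density_g_cm3):
    """V_m = M / rho: cm^3/mol when M is in g/mol and rho in g/cm^3."""
    return atomic_weight_g_mol / density_g_cm3

v = molar_volume_cm3(55.85, 7.87)   # iron at 298 K, handbook values
print("%.2f cm^3/mol = %.2e m^3/mol" % (v, v / 1e6))   # divide by 1000000 for m^3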
| <urn:uuid:f34e32d2-f025-42ac-9515-722ab00612c7> | 3.40625 | 132 | Knowledge Article | Science & Tech. | 47.667883 | 2,383 |
11.28.12 - Forecasters could soon be better able to predict how intense tropical cyclones like Hurricane Sandy will be by analyzing relative-humidity levels within their large-scale environments, finds a new NASA-led study.
05.03.12 - With 2,378 spectral eyes measuring our atmosphere, the Atmospheric Infrared Sounder could be called a "monster" of weather and climate research.
12.14.06 - Two new NASA-funded studies of ozone in the tropics using NASA satellite data not previously available are giving scientists a fuller understanding of the processes driving ozone chemistry and its impacts on pollution and climate change.
08.28.06 - The 2005 hurricane season will long be remembered both for the record-breaking number of storms and a devastating hurricane named Katrina.
08.24.05 - NASA and the NOAA today outlined research that has helped to improve the accuracy of medium-range weather forecasts.
05.29.03 - As a scientist and an artist, Graeme Stephens strives to paint an accurate picture of clouds.
04.29.03 - Your weatherperson’s job just got a little easier, thanks to new data available from advanced weather instruments aboard NASA’s Aqua satellite.
04.22.02 - Aqua Spacecraft Launched, Ready To Study Earth's Water Cycle
04.22.02 - Aqua will make measurements of the Earth at the same time, all the time. | <urn:uuid:7ef8caf9-558e-4e71-af6e-85753041430b> | 2.90625 | 295 | Content Listing | Science & Tech. | 61.17606 | 2,384 |
Fermi-LAT Designer Awarded 2012 Panofsky Prize
William Atwood, a leading member of the Fermi Gamma-ray Space Telescope collaboration, will receive the 2012 W. K. H. Panofsky Prize in Experimental Particle Physics from the American Physical Society for his work as co-designer of the Large Area Telescope, the main instrument on Fermi, and for using the LAT to investigate the universe in gamma rays.
"Nobody was more surprised than myself" upon learning of the award, Atwood said. Now with the Santa Cruz Institute for Particle Physics, Atwood was a long-time SLAC National Accelerator Laboratory particle physicist who maintains his lab ties through the Fermi collaboration. "I'd just finished my SLAC cyber-security refresher course when I got this email," he continued. "I thought, 'Oh, jeez, this is just spam.'" But once Atwood confirmed the email's contents, he said, he was "blown away."
In 1970, as a graduate student from Caltech, Atwood joined the team at SLAC that scattered high-energy electrons off protons and neutrons and discovered they were made of something even smaller – quarks. SLAC physicist Richard Taylor, principal investigator, was awarded the 1990 Nobel Prize in Physics for this work. It was obvious in 1970 that Atwood “had tremendous potential," Taylor said, adding that he "was a remarkable physicist, even then."
Stanford physicist Peter Michelson, Atwood's partner in developing the LAT, was not surprised at Atwood's win. "To capture his contributions in a single quote ... I'm struggling," Michelson said. "Bill's deep understanding of particle physics led to the original design of the LAT, and what's flying is essentially that design, which he came up with literally overnight.”
Atwood also adapted the design for use in space and has done substantial scientific work with the LAT, such as contributing "a very efficient algorithm for blind searches for gamma-ray pulsars,” Michelson added.
SLAC managed the development of the LAT, assembled it from parts made at laboratories around the world, and now runs a center that processes LAT data and makes it available to researchers.
Astrophysics and particle physics may seem strange bedfellows, but Atwood has no trouble explaining how a telescope designer could win an award named after the founding director of a linear accelerator, or how a particle physicist could be enticed to work on a satellite.
"Almost all light comes from something hot," he said. In astronomical observing terms, visible light comes from the nuclear fires of the stars. Infrared light comes from hot dust and gas. But gamma rays are an exception to this rule. “Stuff can’t get hot enough” to produce photons of light in the gamma-ray range, with energies measured in the millions and billions of electron volts, Atwood said. What that means is that gamma rays show us the "non-thermal universe" – in other words, the part of the sky that heat cannot reveal.
Only extreme conditions can generate gamma rays, Atwood said – "black holes and neutron stars, pulsars. The gamma-ray sky is full of these exotic objects." And these exotic objects provide the extreme conditions necessary to accelerate particles to high energy.
At first, Atwood said, both the particle physics and the astrophysics communities were skeptical that this telescope was the proper instrument to conduct particle physics research or gamma ray astronomy.
"In the end, a good idea is a good idea and people came around," he said. In fact, "this instrument would not have been possible without the active participation of both communities." | <urn:uuid:2f8b1c46-00d7-42a7-b2b2-914bd33030b5> | 2.578125 | 767 | News (Org.) | Science & Tech. | 43.792587 | 2,385 |
Check out this short video to view the new features of C-ROADS.
The top five improvements:
1. Choose from 14 different reference scenarios, pulled from EMF and SRES
2. Create emissions scenarios by changing Carbon intensity
3. Land use emissions are disaggregated by country (thanks to primary research funded by Heinz Center and TCG)
4. Flexible analysis of historical contribution by country to cumulative emissions, radiative forcing, and temperature (this feature is amazing — watch the video…)
5. Flexible analysis of effects of uncertainty
Dr. Phil Rice of Climate Interactive created this short video describing our “new tricks.” The improvements were completed primarily by Dr. Rice, Dr. Tom Fiddaman of Ventana Systems, Dr. Lori Siegel of Climate Interactive, and Tony Kennedy of Ventana Systems, through a contract with the US DOE and funding from Zennstrom Philanthropies, ClimateWorks Foundation, the Morgan Family Foundation, and others. | <urn:uuid:e0e2ddfe-ffef-423f-95ec-ba22f81faa7b> | 2.515625 | 228 | News (Org.) | Science & Tech. | 40.650958 | 2,386 |
Northern Crayfish Frog
The northern crayfish frog grows to a length of nearly four inches. It has dark round spots surrounded by light borders on its chin, and the back is noticeably humped. This somewhat stubby frog lives in the southern half of Illinois and is closely associated with the hardpan clay soils south of the Shelbyville Moraine. This nocturnal frog spends the daytime hiding in crayfish or other animal burrows or under boards or logs in wet prairies, pastures or golf courses. It lays its eggs from early March to mid-April in flooded fields, farm ponds and small lakes. The call of the crayfish frog carries a considerable distance. It resembles a deep, roaring snore. | <urn:uuid:7ee60440-f04d-4d64-bb07-9679d69a6b08> | 2.859375 | 149 | Knowledge Article | Science & Tech. | 60.439231 | 2,387 |
(PHP 5 >= 5.3.3)
stream_set_read_buffer — Set read file buffering on the given stream
Sets the read buffer. It's the equivalent of stream_set_write_buffer(), but for read operations.
The file pointer.
The number of bytes to buffer. If buffer is 0 then read operations are unbuffered. This ensures that all reads with fread() are completed before other processes are allowed to read from that input stream.
Returns 0 on success, or EOF if the request cannot be honored.
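For comparison, the same unbuffered-read idea expressed in Python rather than PHP (an analogy; the path is illustrative):

with open("/etc/hostname", "rb", buffering=0) as fp:   # illustrative path
    data = fp.read(64)   # each read() goes straight to the operating system
print(data)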
- stream_set_write_buffer() - Sets write file buffering on the given stream | <urn:uuid:9899b93f-0af2-4b43-be39-3955c10da2b8> | 2.53125 | 139 | Documentation | Software Dev. | 54.611111 | 2,388 |
The os module provides dozens of functions for interacting with the operating system:
>>> import os
>>> os.getcwd()      # Return the current working directory
'C:\\Python33'
>>> os.chdir('/server/accesslogs')   # Change current working directory
>>> os.system('mkdir today')   # Run the command mkdir in the system shell
0
>>> import os
>>> dir(os)
<returns a list of all module functions>
>>> help(os)
<returns an extensive manual page created from the module's docstrings>
For daily file and directory management tasks, the shutil module provides a higher level interface that is easier to use:
>>> import shutil
>>> shutil.copyfile('data.db', 'archive.db')
>>> shutil.move('/build/executables', 'installdir')
The glob module provides a function for making file lists from directory wildcard searches:
>>> import glob
>>> glob.glob('*.py')
['primes.py', 'random.py', 'quote.py']
Common utility scripts often need to process command line arguments. These arguments are stored in the sys module’s argv attribute as a list. For instance the following output results from running python demo.py one two three at the command line:
>>> import sys
>>> print(sys.argv)
['demo.py', 'one', 'two', 'three']
The sys module also has attributes for stdin, stdout, and stderr. The latter is useful for emitting warnings and error messages to make them visible even when stdout has been redirected:
>>> sys.stderr.write('Warning, log file not found starting a new one\n')
Warning, log file not found starting a new one
The most direct way to terminate a script is to use sys.exit().
The re module provides regular expression tools for advanced string processing. For complex matching and manipulation, regular expressions offer succinct, optimized solutions:
>>> import re
>>> re.findall(r'\bf[a-z]*', 'which foot or hand fell fastest')
['foot', 'fell', 'fastest']
>>> re.sub(r'(\b[a-z]+) \1', r'\1', 'cat in the the hat')
'cat in the hat'
When only simple capabilities are needed, string methods are preferred because they are easier to read and debug:
>>> 'tea for too'.replace('too', 'two')
'tea for two'
The math module gives access to the underlying C library functions for floating point math:
>>> import math
>>> math.cos(math.pi / 4)
0.70710678118654757
>>> math.log(1024, 2)
10.0
The random module provides tools for making random selections:
>>> import random
>>> random.choice(['apple', 'pear', 'banana'])
'apple'
>>> random.sample(range(100), 10)   # sampling without replacement
[30, 83, 16, 4, 8, 81, 41, 50, 18, 33]
>>> random.random()    # random float
0.17970987693706186
>>> random.randrange(6)    # random integer chosen from range(6)
4
The SciPy project <http://scipy.org> has many other modules for numerical computations.
>>> from urllib.request import urlopen
>>> for line in urlopen('http://tycho.usno.navy.mil/cgi-bin/timer.pl'):
...     line = line.decode('utf-8')  # Decoding the binary data to text.
...     if 'EST' in line or 'EDT' in line:  # look for Eastern Time
...         print(line)

<BR>Nov. 25, 09:43:32 PM EST

>>> import smtplib
>>> server = smtplib.SMTP('localhost')
>>> server.sendmail('firstname.lastname@example.org', 'email@example.com',
... """To: firstname.lastname@example.org
... From: email@example.com
...
... Beware the Ides of March.
... """)
>>> server.quit()
(Note that the second example needs a mailserver running on localhost.)
The datetime module supplies classes for manipulating dates and times in both simple and complex ways. While date and time arithmetic is supported, the focus of the implementation is on efficient member extraction for output formatting and manipulation. The module also supports objects that are timezone aware.
>>> # dates are easily constructed and formatted
>>> from datetime import date
>>> now = date.today()
>>> now
datetime.date(2003, 12, 2)
>>> now.strftime("%m-%d-%y. %d %b %Y is a %A on the %d day of %B.")
'12-02-03. 02 Dec 2003 is a Tuesday on the 02 day of December.'
>>> # dates support calendar arithmetic
>>> birthday = date(1964, 7, 31)
>>> age = now - birthday
>>> age.days
14368
>>> import zlib
>>> s = b'witch which has which witches wrist watch'
>>> len(s)
41
>>> t = zlib.compress(s)
>>> len(t)
37
>>> zlib.decompress(t)
b'witch which has which witches wrist watch'
>>> zlib.crc32(s)
226805979
Some Python users develop a deep interest in knowing the relative performance of different approaches to the same problem. Python provides a measurement tool that answers those questions immediately.
For example, it may be tempting to use the tuple packing and unpacking feature instead of the traditional approach to swapping arguments. The timeit module quickly demonstrates a modest performance advantage:
>>> from timeit import Timer
>>> Timer('t=a; a=b; b=t', 'a=1; b=2').timeit()
0.57535828626024577
>>> Timer('a,b = b,a', 'a=1; b=2').timeit()
0.54962537085770791
One approach for developing high quality software is to write tests for each function as it is developed and to run those tests frequently during the development process.
The doctest module provides a tool for scanning a module and validating tests embedded in a program’s docstrings. Test construction is as simple as cutting-and-pasting a typical call along with its results into the docstring. This improves the documentation by providing the user with an example and it allows the doctest module to make sure the code remains true to the documentation:
def average(values):
    """Computes the arithmetic mean of a list of numbers.

    >>> print(average([20, 30, 70]))
    40.0
    """
    return sum(values) / len(values)

import doctest
doctest.testmod()   # automatically validate the embedded tests
import unittest

class TestStatisticalFunctions(unittest.TestCase):

    def test_average(self):
        self.assertEqual(average([20, 30, 70]), 40.0)
        self.assertEqual(round(average([1, 5, 7]), 1), 4.3)
        with self.assertRaises(ZeroDivisionError):
            average([])   # an empty list triggers the division by len() == 0
        with self.assertRaises(TypeError):
            average(20, 30, 70)

unittest.main()  # Calling from the command line invokes all tests
Python has a “batteries included” philosophy. This is best seen through the sophisticated and robust capabilities of its larger packages. | <urn:uuid:6854a7b0-8438-4166-b261-5bbfeea832e1> | 3.234375 | 1,623 | Documentation | Software Dev. | 63.25983 | 2,389 |
Weakly interacting massive particles
In astrophysics, weakly interacting massive particles or WIMPs, are hypothetical particles serving as one possible solution to the dark matter problem. These particles interact through the weak force and gravity, and possibly through other interactions no stronger than the weak force. Because they do not interact through electromagnetism they cannot be seen directly, and because they do not interact through the strong nuclear force they do not interact strongly with atomic nuclei. This combination of properties gives WIMPs many of the properties of neutrinos, except for being far more massive and therefore slower.
Theoretical framework and properties
WIMP-like particles are predicted by R-parity-conserving supersymmetry, a popular type of extension to the standard model of particle physics, although none of the large number of new particles in supersymmetry have been observed. The main theoretical characteristics of a WIMP are:
- Interactions only through the weak nuclear force and gravity, or possibly other interactions with cross-sections no higher than the weak scale;
- Large mass compared to standard particles (WIMPs with sub-GeV masses may be considered to be light dark matter).
Because of their lack of electromagnetic interaction with normal matter, WIMPs would be dark and invisible through normal electromagnetic observations. Because of their large mass, they would be relatively slow moving and therefore cold. Their relatively low velocities would be insufficient to overcome the mutual gravitational attraction, and as a result WIMPs would tend to clump together. WIMPs are considered one of the main candidates for cold dark matter, the others being massive compact halo objects (MACHOs) and axions. (These names were deliberately chosen for contrast, with MACHOs named later than WIMPs.) Also, in contrast to MACHOs, there are no known stable particles within the standard model of particle physics that have all the properties of WIMPs. The particles that have little interaction with normal matter, such as neutrinos, are all very light, and hence would be fast moving or hot.
WIMPs as dark matter
Although the existence of WIMPs in nature is hypothetical at this point, it would resolve a number of astrophysical and cosmological problems related to dark matter. There is near consensus today among astronomers that most of the mass in the Universe is dark. Simulations of a universe full of cold dark matter produce galaxy distributions that are roughly similar to that which is observed. By contrast hot dark matter would smear out the large-scale structure of galaxies and thus is not considered a viable cosmological model.
The WIMP fits the model of a relic dark matter particle from the early Universe, when all particles were in a state of thermal equilibrium. For sufficiently high temperatures, such as existed in the early Universe, the dark matter particle and its antiparticle would have been both forming from and annihilating into lighter particles. As the Universe expanded and cooled, the average thermal energy of these lighter particles decreased and eventually became insufficient to form a dark matter particle-antiparticle pair. The annihilation of the dark matter particle-antiparticle pairs, however, would have continued, and the number density of dark matter particles would have begun to decrease exponentially. Eventually, however, the number density would become so low that the dark matter particle and antiparticle interaction would cease, and the number of dark matter particles would remain (roughly) constant as the Universe continued to expand. Particles with a larger interaction cross section would continue to annihilate for a longer period of time, and thus would have a smaller number density when the annihilation interaction ceases. Based on the current estimated abundance of dark matter in the Universe, if the dark matter particle is such a relic particle, the interaction cross section governing the particle-antiparticle annihilation can be no larger than the cross section for the weak interaction. If this model is correct, the dark matter particle would have the properties of the WIMP.
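The usual back-of-the-envelope version of this argument, often called the "WIMP miracle", is a textbook freeze-out estimate rather than a result stated in this article: the relic density scales inversely with the thermally averaged annihilation cross section, and a weak-scale cross section lands near the observed dark-matter density.

def omega_h2(sigma_v_cm3_s):
    """Freeze-out estimate: Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>."""
    return 3e-27 / sigma_v_cm3_s

print(omega_h2(3e-26))   # 0.1, close to the observed dark-matter density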
Experimental detection
Because WIMPs may only interact through gravitational and weak forces, they are extremely difficult to detect. However, there are many experiments underway to attempt to detect WIMPs both directly and indirectly. Halo WIMPs may, as they pass through the Sun, interact with solar protons and helium nuclei. Such an interaction would cause a WIMP to lose energy. The resulting slower WIMP would not have enough energy to escape the gravitational pull of the sun and thus would be "captured" by the Sun. As more and more WIMPs thermalize inside the Sun, they begin to annihilate with each other, forming a variety of particles including high-energy neutrinos. These neutrinos may then travel to the Earth to be detected in one of the many neutrino telescopes, such as the Super-Kamiokande detector in Japan. The number of neutrino events detected per day at these detectors depends upon the properties of the WIMP, as well as on the mass of the Higgs boson. Similar experiments are underway to detect neutrinos from WIMP annihilations within the Earth and from within the galactic center.
While most WIMP models indicate that a large enough number of WIMPs must be captured in large celestial bodies for these experiments to succeed, it remains possible that these models are either incorrect or only explain part of the dark matter phenomenon. Thus, even with the multiple experiments dedicated to providing indirect evidence for the existence of cold dark matter, direct detection measurements are also necessary to solidify the theory of WIMPs.
Although most WIMPs encountering the Sun or the Earth are expected to pass through without any effect, it is hoped that a large number of dark matter WIMPs crossing a sufficiently large detector will interact often enough to be seen—at least a few events per year. The general strategy of current attempts to detect WIMPs is to find very sensitive systems that can be scaled up to large volumes. This follows the lessons learned from the history of the discovery and (by now) routine detection of the neutrino.
A technique used by the Cryogenic Dark Matter Search (CDMS) detector at the Soudan Mine relies on multiple very cold germanium and silicon crystals. The crystals (each about the size of a hockey puck) are cooled to about 50 mK. A layer of metal (aluminium and tungsten) at the surfaces is used to detect a WIMP passing through the crystal. This design hopes to detect vibrations in the crystal matrix generated by an atom being "kicked" by a WIMP. The tungsten metal sensors are held at the critical temperature so they are in the superconducting state. Large crystal vibrations will generate heat in the metal and are detectable because of a change in resistance.
In February 2010, researchers at the Soudan Mine CDMS II experiment announced that they had observed two events that may have been caused by WIMP-nucleus collisions. CoGeNT, a smaller detector using a single germanium puck, designed to sense WIMPs with smaller masses, reported hundreds of detection events in 56 days. Juan Collar, who presented the results to a conference at the University of California, was quoted: "If it's real, we're looking at a very beautiful dark-matter signal." (Other explanations, such as an unexplained radioactive decay process in the electronics, might cause a spurious signal.) The experiment estimated the WIMP masses at 7-11 GeV (approximately 10× the mass of a proton), which is at the lower limit of detection of the CDMS II experiment.
The Directional Recoil Identification From Tracks (DRIFT) collaboration is attempting to utilize the predicted directionality of the WIMP signal in order to prove the existence of WIMPs. DRIFT detectors use a 1 m3 volume of low-pressure carbon disulfide gas as a target material. The use of a low-pressure gas means that a WIMP colliding with an atom in the target will cause it to recoil several millimetres, leaving a track of charged particles in the gas. This charged track is drifted to an MWPC readout plane that allows it to be reconstructed in three dimensions, which can then be used to determine the direction the WIMP came from.
Another way of detecting atoms "knocked about" by a WIMP is to use scintillating material, so that light pulses are generated by the moving atom. Experiments such as DEAP at SNOLAB or WARP at the LNGS plan to instrument a very large target mass of liquid argon for sensitive WIMP searches. Another example of this technique is the DAMA/NaI and DAMA/LIBRA detector in Italy. It uses multiple materials to identify false signals from other light-creating processes. This experiment observed an annual change in the rate of signals in the detector. This annual modulation is one of the predicted signatures of a WIMP signal, and on this basis the DAMA collaboration has claimed a positive detection. Other groups, however, have not confirmed this result. The CDMS and EDELWEISS experiments would be expected to observe a significant number of WIMP-nucleus scatters if the DAMA signal were in fact caused by WIMPs. Since the other experiments do not see these events, the interpretation of the DAMA result as a WIMP detection can be excluded for most WIMP models. It is possible to devise models that reconcile a positive DAMA result with the other negative results, but as the sensitivity of other experiments improves, this becomes more difficult. The CDMS data taken in the Soudan Mine and made public in May 2004 exclude the entire DAMA signal region given certain standard assumptions about the properties of the WIMPs and the dark matter halo.
The PICASSO (Project in Canada to Search for Supersymmetric Objects) experiment is a direct dark matter search experiment that is located at SNOLAB in Canada. It uses bubble detectors with Freon as the active mass. PICASSO is predominantly sensitive to spin-dependent interactions of WIMPs with the fluorine atoms in the Freon.
A bubble detector is a radiation sensitive device that uses small droplets of superheated liquid that are suspended in a gel matrix. It uses the principle of a bubble chamber but since only the small droplets can undergo a phase transition at a time the detector can stay active for much longer periods than a classic bubble chamber. When enough energy is deposited in a droplet by ionizing radiation the superheated droplet undergoes a phase transition and becomes a gas bubble. The PICASSO detectors contain Freon droplets with an average diameter of 200 µm. The bubble development in the detector is accompanied by an acoustic shock wave that is picked up by piezo-electric sensors. The main advantage of the bubble detector technique is that the detector is almost insensitive to background radiation. The detector sensitivity can be adjusted by changing the temperature of the droplets. Freon-loaded detectors are typically operated at temperatures between 15°C and 55°C. There is another similar experiment using this technique in Europe called SIMPLE.
PICASSO reports results (November 2009) for spin-dependent WIMP interactions on 19F. No dark matter signal has been found, but for WIMP masses of 24 GeV/c2 new stringent limits have been obtained on the spin-dependent cross section for WIMP scattering on 19F of 13.9 pb (90% CL). This result has been converted into a cross section limit for WIMP interactions on protons of 0.16 pb (90% CL). The obtained limits restrict recent interpretations of the DAMA/LIBRA annual modulation effect in terms of spin-dependent interactions.
See also
- Massive compact halo object (MACHO)
- Higgs boson
- Micro black hole
- Robust associations of massive baryonic objects (RAMBOs)
Theoretical candidates
- H.V. Klapdor-Kleingrothaus, Double Beta Decay and Dark Matter Search - Window to New Physics now, and in future (GENIUS), 4 Feb 1998
- M. Kamionkowski, WIMP and Axion Dark Matter, 24 Oct 1997
- V. Zacek, Dark Matter Proc. of the 2007 Lake Louise Winter Institute, March 2007
- K. Griest, The Search for Dark Matter: WIMPs and MACHOs, 13 Mar 1993
- Griest, Kim (1991). "Galactic Microlensing as a Method of Detecting Massive Compact Halo Objects". The Astrophysical Journal 366: 412–421. Bibcode:1991ApJ...366..412G. doi:10.1086/169575.
- C. Conroy, R. H. Wechsler, A. V. Kravtsov, Modeling Luminosity-Dependent Galaxy Clustering Through Cosmic Time, 21 Feb 2006.
- The Millennium Simulation Project , Introduction: The Millennium Simulation The Millennium Run used more than 10 billion particles to trace the evolution of the matter distribution in a cubic region of the Universe over 2 billion light-years on a side.
- F. Ferrer, L. Krauss, and S. Profumo, Indirect detection of light neutralino dark matter in the NMSSM. Phys.Rev. D74 (2006) 115007
- K. Freese, Can Scalar Neutrinos or Massive Dirac Neutrinos Be the Missing Mass?, Phys.Lett.B167:295 (1986).
- Merritt, D.; Bertone, G. (2005). "Dark Matter Dynamics and Indirect Detection". Modern Physics Letters A 20 (14): 1021–1036. arXiv:astro-ph/0504422. Bibcode:2005MPLA...20.1021B. doi:10.1142/S0217732305017391.
- N. Fornengo, Status and perspectives of indirect and direct dark matter searches. 36th COSPAR Scientific Assembly, Beijing, China, 16–23 July 2006
- "Key to the universe found on the Iron Range?". Retrieved December 18, 2009.
- CDMS Collaboration. "Results from the Final Exposure of the CDMS II Experiment". See also a non-technical summary: CDMS Collaboration. "Latest Results in the Search for Dark Matter"
- The CDMS II Collaboration (2010). "Dark Matter Search Results from the CDMS II Experiment". Science. doi:10.1126/science.1186112.
- Eric Hand (2010-02-26). "A CoGeNT result in the hunt for dark matter". Nature News.
- CoGeNT collaboration (C. E. Aalseth et al.) (2011). "Results from a Search for Light-Mass Dark Matter with a P-type Point Contact Germanium Detector". Physical Review Letters 106 (13). arXiv:1002.4703. Bibcode:2011PhRvL.106m1301A. doi:10.1103/PhysRevLett.106.131301.
- A. Drukier, K. Freese, and D. Spergel, Detecting Cold Dark Matter Candidates (http://prola.aps.org/pdf/PRD/v33/i12/p3495_1), Phys.Rev.D33:3495-3508 (1986). (subscription required)
- K. Freese, J. Frieman, and A. Gould, Signal Modulation in Cold Dark Matter Detection, Phys.Rev.D37:3388 (1988).
- Bubble Technology Industries
- PICASSO Collaboration; Aubin, F.; Auger, M.; Behnke, E.; Beltran, B.; Clark, K.; Dai, X.; Davour, A. et al. (2009). "Dark Matter Spin-Dependent Limits for WIMP Interactions on 19F by PICASSO". Physics Letters B 682 (2): 185. arXiv:0907.0307. Bibcode:2009PhLB..682..185A. doi:10.1016/j.physletb.2009.11.019.
Further reading
- Bertone, Gianfranco (2010). Particle Dark Matter: Observations, Models and Searches. Cambridge University Press. p. 762. ISBN 978-0-521-76368-4. | <urn:uuid:d8a593e1-9d8f-49e9-af0d-8c2f441418c8> | 3.5 | 3,463 | Knowledge Article | Science & Tech. | 48.973457 | 2,390 |
Blackburn, George Alan (1998) Spectral indices for estimating photosynthetic pigment concentrations: a test using senescent tree leaves. International Journal of Remote Sensing, 19 (4). pp. 657-675. Full text not available from this repository.
The possibility of estimating the concentration of individual photosynthetic pigments within vegetation from reflectance spectra offers great promise for the use of remote sensing to assess physiological status, species type and productivity. This study evaluates a number of spectral indices for estimating pigment concentrations at the leaf scale, using samples from deciduous trees at various stages of senescence. Two new indices (PSSR and PSND) are developed, which have advantages over previous techniques. The optimal individual wavebands for pigment estimation are identified empirically as 680 nm for chlorophyll a, 635 nm for chlorophyll b and 470 nm for the carotenoids. These wavebands are justified theoretically and are shown to improve the performance of many of the spectral indices tested. Strong predictive models are demonstrated for chlorophyll a and b, but not for the carotenoids, and the paper explores the reasons for this.
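The two index families are simple-ratio and normalized-difference forms built from a pigment-specific absorption waveband and a near-infrared reference band. In the commonly cited formulation, PSSR = R800/Rλ and PSND = (R800 − Rλ)/(R800 + Rλ), with λ = 680, 635 or 470 nm for chlorophyll a, chlorophyll b and the carotenoids respectively; the 800 nm reference band is the usual convention and is assumed here rather than quoted from the paper. A minimal sketch:

```python
def pssr(r_nir, r_pigment):
    """Pigment Specific Simple Ratio, e.g. PSSRa = R800 / R680."""
    return r_nir / r_pigment

def psnd(r_nir, r_pigment):
    """Pigment Specific Normalised Difference,
    e.g. PSNDa = (R800 - R680) / (R800 + R680)."""
    return (r_nir - r_pigment) / (r_nir + r_pigment)

# Illustrative leaf reflectances (fractions, not measured data):
r800, r680, r635, r470 = 0.48, 0.06, 0.10, 0.05
print(pssr(r800, r680))   # chlorophyll a index, ~8.0
print(psnd(r800, r680))   # ~0.78
```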
Journal or Publication Title: International Journal of Remote Sensing
Subjects: G Geography. Anthropology. Recreation > G Geography (General)
Departments: Faculty of Science and Technology > Lancaster Environment Centre
Deposited By: Dr GA Blackburn
Deposited On: 07 Feb 2007
Last Modified: 26 Jul 2012 18:09
| <urn:uuid:8775a527-f221-40c3-87b9-92c1fc32eb18> | 2.703125 | 332 | Academic Writing | Science & Tech. | 23.825249 | 2,391 |
The Tropical Rainfall Measuring Mission (TRMM) is a joint U.S.-Japan satellite mission to monitor tropical and subtropical precipitation and to estimate its associated latent heating.
The TRMM Precipitation Radar (PR), the first of its kind in space, is an electronically scanning radar operating at 13.8 GHz that measures the 3-D rainfall distribution ... over both land and ocean, and defines the layer depth of the precipitation.
The 1B21 algorithm calculates the received power at the PR receiver input point from the Level-0 count value, which is linearly proportional to the logarithm of the PR receiver output power. To convert the count value to the input power, extensive internal calibrations are applied, based mainly on the system model, the temperature dependence of model parameters, and the many temperature sensors attached at various locations on the PR. Periodically, the input-output characteristics are measured using an internal calibration loop for the IF unit and the later receiver stages. For absolute calibration, an Active Radar Calibrator (ARC) is placed at the Kansai Branch of CRL, and the overall system gain of the PR is measured every two months. Using the transfer function based on these internal and external calibrations, the PR received power is obtained. Note that this value assumes the signal follows Rayleigh fading; if the fading characteristics of a scatterer differ, a small bias error (within 1 or 2 dB) may occur.
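Since the count value is linear in the logarithm of the receiver output power, the conversion is essentially an affine map from counts to power in dB, followed by the calibrated transfer back to the receiver input. A schematic sketch follows; the gain and offset names and values are hypothetical placeholders, not the actual 1B21 calibration coefficients.

```python
def count_to_output_dbm(count, slope=0.094, offset=-120.0):
    """Convert a Level-0 count to receiver OUTPUT power in dBm.

    slope and offset are hypothetical placeholders for the
    coefficients that the internal calibration loop would provide.
    """
    return slope * count + offset

def output_to_input_dbm(p_out_dbm, system_gain_db=62.0):
    """Refer the output power back to the receiver INPUT by removing
    the overall system gain (measured externally with the ARC)."""
    return p_out_dbm - system_gain_db

count = 850
p_in = output_to_input_dbm(count_to_output_dbm(count))
print(p_in)  # received power at the PR input, dBm (illustrative only)
```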
The data are stored in the Hierarchical Data Format (HDF), which includes both core and product specific metadata applicable to the PR measurements. A file contains a single orbit of data with a file size of about 139 MB (uncompressed). The HDF-EOS "swath" structure is used to accommodate the actual geophysical data arrays. There are 16 files each of PR 1B21 and 1C21 data produced per day.
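For readers who want to inspect a granule, the scientific data sets in the HDF swath can be listed with the pyhdf library (which must be installed separately). The file name and the SDS name below are hypothetical placeholders; the real 1B21 dataset names should be taken from the product documentation.

```python
from pyhdf.SD import SD, SDC

# Open one orbit granule (file name is illustrative).
granule = SD("1B21.orbit.HDF", SDC.READ)

# List the scientific data sets stored in the swath structure.
for name, info in granule.datasets().items():
    print(name, info[1])  # dataset name and its shape

# 'receivedPower' is a hypothetical SDS name -- check the product
# documentation for the real one before relying on this.
power = granule.select("receivedPower")[:]
granule.end()
```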
Spatial coverage is between 38 degrees North and 38 degrees South, owing to the 35 degree inclination of the TRMM satellite. This orbit provides extensive coverage in the tropics and allows each location to be covered at a different local time each day, enabling the analysis of the diurnal cycle of precipitation. There are, in general, 9150 scans along the orbit, with each scan consisting of 49 rays. The scan width is about 220 km. | <urn:uuid:dba54dab-7e59-4ca6-9c9e-5a6ec893412c> | 2.59375 | 478 | Knowledge Article | Science & Tech. | 43.486302 | 2,392 |
[Location: Location_Category='CONTINENT', Location_Type='ANTARCTICA', Detailed_Location='SOUTH VICTORIA LAND']
New Zealand International Transantarctic Scientific Expedition (NZ ITASE) - Climate variability measured from ice cores taken along the Victoria Land Coast
Entry ID: K049_1999_2008_NZ_1
Abstract: The climate of the Victoria Land Coast is created by the interacting influences of the Dry Valleys, the East Antarctic Ice Sheet and the Ross Sea. Slight changes can significantly alter local weather patterns, and as such a climate record of the area provides ideal opportunities to study rapid, high-frequency climatic variations. International polar ice coring programmes (e.g. GISP and Vostok) have ... provided powerful new insights into Earth's climate over the past 400,000 years, from the diverse inventory of atmospheric information stored both within the ice and in trapped air bubbles. To understand and predict the local response to the anthropogenically induced global warming seen in these "global" ice cores, the focus of ice core research in Antarctica is moving to the acquisition of high-resolution, annual-scale regional paleoclimatic archives that overlap with and extend the instrumental records of the last 40 years back several thousand years. This has been a key motivation behind the US-led International Transantarctic Scientific Expedition (ITASE), of which New Zealand is a member.
The New Zealand project's objective is to recover a series of ice cores from glaciers along a 14-degree latitudinal transect of the climatically sensitive Victoria Land coastline and thereby directly contribute a critical dataset to ITASE. The NZ ITASE sites (including two sites at Victoria Lower Glacier, Baldwin Glacier, Wilson Piedmont Glacier, Polar Plateau, Evans Piedmont Glacier, Mt Erebus Saddle, Whitehall Glacier, Skinner Saddle and Gawn Ice Piedmont Glacier with future sites planned at Beardmore Glacier, Roosevelt Island, and coastal sites in West Antarctica) have been chosen to capture and quantify the steep climate gradients from the Scott Coast to the Polar Plateau, the local climate system of the McMurdo Dry Valleys, and the effect of altitude within the Transantarctic Mountains. Coastal sites are especially climate sensitive and show potential to archive local, rapid climate change events that are subdued or lost in the 'global' inland ice core records.
Investigations and datasets include: GPR/GPS surveying to map bedrock topography, internal glacial structure and glacier surface topography; firn and ice cores to quantify the variability of the climate record, with analysis of temperature, crystal structure and geometry, snow density, melt, dust/tephra occurrence, gas content, porosity, and gas bubble size and geometry; snow profiles analysed for ion content, isotopic ratios, dust content, beta radioactivity, chemical properties and mineralogy, to develop transfer functions with the meteorological record; borehole temperature and light penetration; submergence velocity measurements to analyse the mass balance of the glaciers; and meteorological data and ablation measurements.
Start Date: 1999-11-24
Paleo Temporal Coverage
Latitude Resolution: 1:1 to 1:3 million
Longitude Resolution: 1:1 to 1:3 million
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS
TERRESTRIAL HYDROSPHERE > SURFACE WATER > LAKES
TERRESTRIAL HYDROSPHERE > SURFACE WATER > WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > LAKES
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > ESTUARINE WETLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > LACUSTRINE WETLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > MARINE
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > MARSHES
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > PALUSTRINE WETLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > PEATLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > RIPARIAN WETLANDS
BIOSPHERE > TERRESTRIAL ECOSYSTEMS > WETLANDS > SWAMPS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > ESTUARINE WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > LACUSTRINE WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > MARINE
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > MARSHES
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > PALUSTRINE WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > PEATLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > RIPARIAN WETLANDS
BIOSPHERE > AQUATIC ECOSYSTEMS > WETLANDS > SWAMPS
Access Constraints: GLWD is available for non-commercial scientific, conservation, and educational purposes. Any modification of the original data by users must be noted. By submitting the download request you agree to the regulations of the data disclaimer (PDF format, 15k). See:
Use Constraints: Credit the following and fill out the short form at:
Lehner, B. and P. Döll (2004): Development and validation of a global database of lakes, reservoirs and wetlands. Journal of Hydrology 296/1-4: 1-22.
Data Set Progress
Distribution Media: online, www
Distribution Size: 46 MB
Distribution Format: shapefiles
Email: bernhard.lehner at wwfus.org
Conservation Science Program
Province or State: DC
Postal Code: 20037
Email: doell at usf.uni-kassel.de
Center for Environmental Systems Research Kurt-Wolters-Strasse 3 Room 2208
Postal Code: 34109
Lehner, B. and P. Döll (2004): Development and validation of a global database of lakes, reservoirs and wetlands. Journal of Hydrology 296/1-4: 1-22.
Birkett, C.M., Mason, I.M. (1995): A new global lakes database for a remote sensing program studying climatically sensitive large lakes. Journal of Great Lakes Research 21(3): 307-318.
ICOLD (International Commission on Large Dams) (1998): World Register of Dams. 1998 book and CD-ROM, ICOLD, Paris.
Loveland, T.R., Reed, B.C., Brown, J.F., Ohlen, D.O., Zhu, J., Yang, L. and Merchant, J.W. (2000): Development of a global land cover characteristics database and IGBP DISCover from 1-km AVHRR data. International Journal of Remote Sensing 21(6/7): 1303-1330. http://edcdaac.usgs.gov/glcc/glcc.html
Vörösmarty, C.J., Sharma, K.P., Fekete, B.M., Copeland, A.H., Holden, J., Lough, J.A. (1997): The storage and aging of continental runoff in large reservoir systems of the world. Ambio 26(4): 210-219.
WCMC (World Conservation Monitoring Centre) (1993): Digital wetlands data set. Cambridge, U.K.
Creation and Review Dates
DIF Creation Date: 2004-09-22
Last DIF Revision Date: 2004-09-23 | <urn:uuid:fd2e1696-bfd5-4b5c-8110-d54a6b6aaf1e> | 2.984375 | 1,783 | Content Listing | Science & Tech. | 31.927909 | 2,393 |
The photosynthetic performance of the fruticose lichen Usnea sphacelata was measured using advanced, climatised photosynthetic cuvettes at Scott Base. Individuals of the Usnea sp. were identified and collected, and their distribution recorded at Cape Royds. Two forms, a pale shade form and a dark sun form, were identified. Specimens were removed and used for gas-exchange studies. Full photosynthetic light-response curves were obtained for the dark and pale forms at 10, 5, 0, -5 and -10°C. A photosynthesis-to-thallus-water-content curve was also produced for both dark and pale forms.
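Light-response curves of this kind are often summarized with a saturating model such as the Jassby-Platt hyperbolic tangent form, P_net = Pmax·tanh(αI/Pmax) − Rd, where α is the initial slope, I the irradiance and Rd dark respiration. This is a generic convention assumed here for illustration, not necessarily the model used in this study.

```python
import math

def net_photosynthesis(irradiance, p_max, alpha, r_dark):
    """Jassby-Platt style light-response curve (a common convention,
    assumed here -- not necessarily the model used in this study).

    irradiance : photon flux density (umol m^-2 s^-1)
    p_max      : light-saturated gross photosynthetic rate
    alpha      : initial slope of the curve
    r_dark     : dark respiration rate
    """
    return p_max * math.tanh(alpha * irradiance / p_max) - r_dark

# Illustrative parameter values only:
for i in (0, 50, 200, 800):
    print(i, round(net_photosynthesis(i, p_max=4.0, alpha=0.02, r_dark=0.5), 2))
```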
All data is held by the investigator. Please contact Professor Allan Green for more information.
The plant collections (~30,000 mosses, liverworts and lichens collected by Professor Rod Seppelt from Antarctica and sub-Antarctic islands) are housed in the Australian Antarctic Division herbarium. This herbarium is being formally transferred to the Tasmanian Herbarium. As specimens are ... fully incorporated into the herbarium (ADT), the data are automatically sent to and held in the Antarctic Database at the Australian Antarctic Division Data Centre. The data are currently in two separate databases, one of which is searchable; the other is in the process of being moved over.
For more information or access to samples, please contact:
Professor Rod Seppelt, Principal Research Scientist, Australian Antarctic Division, Channel Highway, Kingston 7050, Tasmania, Australia. ph: +61 (03) 6232 3438; fax: +61 (03) 6232 3449; e-mail: firstname.lastname@example.org | <urn:uuid:65b7bb51-0e10-47c7-9a08-9936be51269e> | 2.515625 | 344 | Knowledge Article | Science & Tech. | 38.354412 | 2,394 |
Better get used to it. More frequent and intense storms are what studies and New York City’s own panel on climate change have predicted for the city as average temperatures and sea levels rise over the next decades.
By midcentury, city officials say, New York City’s average temperature is projected to increase three to five degrees Fahrenheit and sea levels are expected to rise by more than two feet. By the end of the century, they say, New York City may feel more like North Carolina.
Hurricane Irene is a reminder of the city’s vulnerabilities, but some environmental groups say the good news is that the city is taking steps to prepare.
“We consider New York City to be one of the leaders nationally,” said Ben Chou, a water policy analyst with the Natural Resources Defense Council in Washington. “They are already looking at how climate change is going to impact the city.”
The N.R.D.C. this month released a report summarizing water-related threats to a dozen cities around the country. Most face increased flooding and problems like shoreline erosion and saltwater intrusion into sources of drinking water. The report recommends that cities undertake full assessments of the risks now so they can start protecting their water resources and taking other necessary measures to prepare.
New York City has already convened a panel on climate change and an adaptation task force. It has also begun investing in environmental techniques to capture and retain storm water, and it is moving critical equipment in city buildings to higher elevations, like pump motors and circuit breakers at the Rockaway Wastewater Treatment Plant in Queens. | <urn:uuid:b48a9d8d-9b16-4cc4-8552-a6b622f2fff9> | 3.015625 | 330 | News Article | Science & Tech. | 39.130598 | 2,395 |
The State of Earth’s Terrestrial Biosphere: How is it Responding to Rising Atmospheric CO2 and Warmer Temperatures?
One of the potential consequences of the historical and ongoing rise in the air's CO2 content is global warming, a phenomenon that has further been postulated to produce all sorts of other undesirable consequences. The United Nations' Intergovernmental Panel on Climate Change, for example, contends that current temperatures and changing precipitation patterns (which they believe are mostly driven by the modern rise in atmospheric CO2) are already beginning to stress Earth's natural and agro-ecosystems by reducing plant growth and development.
And looking to the future, they claim that unless drastic steps are taken to reduce the ongoing rise in the air’s CO2 content (e.g., scaling back on the use of fossil fuels that, when consumed, produce CO2), the situation will only get worse – that crops will fail, food shortages will become commonplace, and many species of plants (and the animals that depend on them for food) will be driven to extinction.
Such concerns, however, are not justified. In the ensuing report we present a meta-analysis of the peer-reviewed scientific literature, examining how the productivities of Earth’s plants have responded to the 20th and now 21st century rise in global temperature and atmospheric CO2, a rise that climate alarmists claim is unprecedented over thousands of years (temperature) to millions of years (CO2 concentration). | <urn:uuid:0c72590a-5319-4677-b2ce-d06b87305194> | 3.71875 | 311 | Academic Writing | Science & Tech. | 25.01062 | 2,396 |
Fri May 11, 2012
Saving California's Native Oyster
About halfway between Santa Cruz and Monterey is Elkhorn Slough. This estuary, with its meandering waterways and abundant wildlife, is a destination for kayakers, bird watchers and researchers. About a third of the slough is a National Estuarine Research Reserve. “So you’re seeing a little sample of the classic estuarine habitat types here. We are standing on a salt marsh beneath our feet. You see channels with permanently standing water and creeks. And then the third major habitat type of estuaries is mud flats. And that’s what we are about to venture out on is the mud,” said Kerstin Wasson, Research Coordinator for the Elkhorn Slough National Estuarine Research Reserve.
Wearing wader overalls, Wasson and UC Davis Biologist Dr. Chela Zabin walk into the thick, sticky mud. They’re headed toward a cluster of five different man-made oyster reefs that rest on top of the mud. “What you see are a number of different reef designs that we’ve tried over the years, biodegradable mesh tubes filled with clam shells. These are shell necklaces where we’ve strung clamshells onto strings,” said Wasson as she points out the different reefs. The reefs are a pilot project in search of the best habitat for California’s only native oyster, the Olympia oyster. Until Wasson rediscovered it in Elkhorn Slough about five years ago, the population was thought to be locally extinct. Now local extinction is a real possibility. “We don’t have many adults left, in the whole complex here, we think there’s maybe 500 adults alive,” said Wasson. That’s 500 in the reserve and maybe 5000 in the whole Elkhorn Slough. At birth oyster larvae can swim around, but to grow into a full size adult it must attach to something hard. When oyster populations are abundant, the oysters make their own reefs by attaching to each other and old oyster shells. But when the populations are as low as they are at Elkhorn, the oysters need some help. The three foot long clam shell necklace reef suspended above the mud is proving to be most effective. “After two years they’re adult size oysters growing on clam shells,” said Wasson.
Beyond Elkhorn Slough the nearest Olympia oyster populations are to the south near Malibu at Mugu Lagoon and to the north in San Francisco Bay. That's where Dr. Chela Zabin does much of her oyster restoration work. She says the point of local restoration is not to create oysters that could be harvested; California's Olympia oyster population isn't big enough. "So that's a huge challenge. You really need to bring numbers up so people can harvest, but there's still enough oyster shell around for new oysters to settle on. That's not an issue for us here. Maybe that will be a problem that we'd love to have in the future, but that's not really the point of oyster restoration in California," said Zabin. The point is in part to regain the ecosystem benefits provided by oysters, like improved water quality through their filter feeding. Those are benefits not yet seen at Elkhorn. "In my mind you don't even need to invoke those kinds of what-will-they-do-for-us arguments. I mean, I think we care about there being condors and polar bears and native oysters, just for the sake of preserving the legacy of the native biodiversity that existed in our habitats on earth," said Wasson. A recent grant from the California Department of Fish and Game will help preserve that legacy. With the addition of 180 oyster reefs in the Reserve, Wasson hopes to double the Olympia oyster population.
Much of the work at the Elkhorn Slough Reserve is done by volunteer citizen scientists. For more details, visit elkhornslough.org. | <urn:uuid:c2cdf6c8-8e76-44c6-8324-98458fad007f> | 3.125 | 879 | News (Org.) | Science & Tech. | 57.717607 | 2,397 |
Cause of El Nino
The Southern Oscillation
This natural marvel, El Nino, could be related to a shift in the air movement over the tropical Pacific Ocean. Changes in wind direction alter the circulation and temperature of the ocean, which in turn further disrupt air movement and ocean currents. This episode is the largest irregularity in the year-to-year fluctuation of the oceanic and atmospheric systems, and it is probably caused by the interaction of the two systems. It is most likely related to the Southern Oscillation, an irregular oscillation of atmospheric mass between the Indonesian low-pressure system and the Easter Island high-pressure system. The oscillation's period is several years.
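The state of the Southern Oscillation is conventionally tracked with the Southern Oscillation Index (SOI), a standardized sea-level pressure difference between Tahiti and Darwin; sustained negative values accompany El Nino episodes. A minimal sketch of that standard construction follows, with invented sample pressures for illustration.

```python
from statistics import mean, stdev

def soi(tahiti_slp, darwin_slp):
    """Southern Oscillation Index: standardized Tahiti-minus-Darwin
    sea-level pressure difference (simplified -- operational centres
    standardize each month against a long-term climatology)."""
    diffs = [t - d for t, d in zip(tahiti_slp, darwin_slp)]
    m, s = mean(diffs), stdev(diffs)
    return [(x - m) / s for x in diffs]

# Invented illustrative pressures (hPa); the run of negative index
# values at the end would be the El Nino-like signature:
tahiti = [1013.2, 1012.8, 1013.5, 1011.0, 1010.6]
darwin = [1009.1, 1009.4, 1008.8, 1010.2, 1010.9]
print([round(v, 2) for v in soi(tahiti, darwin)])
```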
Figure (a) shows the normal conditions, and figure (b) shows the abnormal conditions during El Nino
| <urn:uuid:296b9ba8-8dac-4383-88a4-6b4153b44ae0> | 3.578125 | 193 | Knowledge Article | Science & Tech. | 25.887308 | 2,398 |
See also the
Browse High School Functions
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Composition of functions.
Domain and range.
Inverse of a function.
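As a quick worked sketch of these three ideas (added for illustration; it is not one of the archived answers, though it borrows the f(x) = 2x, g(x) = 3x^2 + 1 pair that appears in the Composing Functions entry below):

```python
def f(x):
    return 2 * x           # double

def g(x):
    return 3 * x ** 2 + 1  # quadratic

# Composition: f(g(x)) and g(f(x)) are generally different functions.
print(f(g(2)))   # f(13) = 26
print(g(f(2)))   # g(4)  = 49

# Inverse: f is one-to-one, so it has an inverse f_inv(y) = y/2.
def f_inv(y):
    return y / 2

print(f_inv(f(7)))   # 7.0 -- f_inv undoes f on its whole domain
```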
- All About Functions [11/06/1996]
Could you please explain functions?
- Defining 'Undefined' [09/15/2003]
If a function is 'undefined at x', does this refer only to vertical
asymptotes, or to other discontinuities as well?
- Definition of the Signum Function [05/31/2000]
Can you give me a simple definition of the signum function, and any
practical examples of its usage?
- Exp, Log, and Ln Functions Explained [7/31/1996]
What is the exp function? When is it needed? Also, how do I calculate Log
and Ln functions with basic arithmetic and logic?
- Function Machine [10/26/1996]
How do you find the domain and range of a function?
- Function Tests [02/19/1997]
What is the reasoning behind the vertical and horizontal line tests?
- Interval Notation [4/1/1996]
I need to learn about interval notation in terms of domain and ranges.
- Mapping Functions in the Real World [3/20/1995]
What is the purpose of learning to map a function? What is it used for in
the real world?
- Rational Inequality [10/09/2001]
Solve this rational inequality and give an answer in interval notation: -5/(3h+2) greater than or equal to 5/h.
- Sometimes, Always, or Never True? [02/12/2002]
Is this statement always, sometimes, or never true: f(g(x))=g(f(x)) ?
- What Are Quadratic Functions? [02/27/2003]
What is the difference between a quadratic function and a quadratic equation?
- What is a Function? [06/14/2001]
I've read many definitions and I've asked many teachers, but I still
don't completely understand.
- Why is Zero the Limit? [02/25/2002]
Why is zero called the limit of the terms in the sequence 1/n? Why does the limit of 1/n, as n approaches infinity, equal zero?
- x Factorial and the Gamma Function [05/29/1998]
What is x! when x is 0, negative, or not a whole number?
- 2^4 = 16 AND 4^2 = 16 [10/29/2001]
Can you think of any other pair of unequal numbers that share the same
relation as 2 and 4 in the above example? What was your strategy?
- 2^x = x^2 [02/13/2002]
Find the real value without graphing.
- Absolute Value and Continuity of Functions [09/15/2004]
I know that the absolute value of a continuous function is also
continuous. Is the opposite true? That is, if the absolute value of
a function is continuous, is the function continuous?
- Algebraically Equivalent Functions [06/27/2002]
If a function can be manipulated so that it can't have a denominator
equal to zero (and thus be undefined for that value), why is the
original function still considered undefined at that value?
- Approaching Zero and Losing the Plot [11/11/2010]
Looking near the origin at plots of y = x^n for ever tinier n, a student wonders why y
= x^0 does not equal zero. By emphasizing two different limits, Doctor Ali gets the
student back into line -- specifically, y = 1.
- Are All Functions Equations? [07/16/2001]
When my x's are not continuous, would I still have a function since the
vertical line test might in fact not touch a point at all?
- Assigning Random Numbers [05/16/2000]
I am using a programming language and have a random number generator that
can generate a random number of 0, 1, or 2. How can I assign those three
values to 4, 12, and 14?
- Asymptote of a Function [06/02/2002]
Determine the value of A so that y = (Ax+5)/(3-6x) has a horizontal
asymptote at y = -2/3.
- Big O Notation and Polynomials [04/12/2001]
Given the function f(x) = (x^3 - (4x^2) + 12)/(x^2 + 2), how can I find a polynomial function g(x) such that f(x) = O(g(x)) and g(x) = O(f(x))?
- Big O, Omega, and Sigma [09/19/2001]
I cannot understand how something can be both Big O and Omega (aka Big
Theta). A general explanation of O/Omega/Theta would be helpful.
- Brackets or Parentheses? [01/07/1997]
When using interval notation to describe when a function is increasing
and decreasing, how do I know whether to use brackets or parentheses?
- Calculus of Piecewise Functions [06/07/2003]
Can I take the integral or derivative of a piecewise function like the
floor function [u] or the absolute value function |u| and still notate
it in concise form, |U| or [U]?
- Can f'(-1) Equal Zero and f''(-1) Not Equal Zero? [03/23/2004]
Is it possible to have a derivative of zero and then have a double
derivative that is not zero at that same x value? How?
- Cases Where the Newton-Raphson Method Fails [06/30/2005]
Why does the Newton-Raphson method work for some functions but not for others?
- Catenary Curve [03/30/1999]
Find the vertex of a catenary curve.
- Chaotic Functions [10/30/2000]
Can you give some mathematical examples of chaos theory?
- Circular Functions [01/27/2001]
How do you define circular functions? Can you give me an example?
- Closed Form Solutions [09/16/1997]
What is the exact mathematical definition of a closed form solution?
- Coconuts, Forwards and Backwards [02/02/2010]
Doctor Greenie answers a chestnut about repeated division and
remainders, first working the question forwards before using the
inverse of a function to solve the same problem backwards much more quickly.
- Composing Functions [12/02/1998]
I'm trying to find f-of-g where f(x) = 2x and g(x) = 3x^2 + 1. What
happens when you compose two functions?
- Composite Functions [4/5/1996]
1) fog(x) = 7x + 3; gof(x) = 7x - 3; f(0) = 1; g(0) = .....
- Composite Functions [01/11/1998]
My students can't understand composite functions.
- Composite Functions Using Logarithms [3/10/1996]
Suppose f and g are functions defined by f(x) = x+2 and g(x) = x. Find all x > -2 for which: 3^[g(x)*logbase3 f(x)] = f(x).
- Composition Functions with Added x Value [05/13/2001]
If x = 1, evaluate g(f(f(x))). I'm confused by this added value of x = 1.
- Composition of Functions [07/23/1999]
How do I find f(g(x)) if f(x) = x+2 and g(x) = 3x-1?
- Connecting the Dots [02/02/1998]
How do you know whether or not to connect the dots when graphing a real- | <urn:uuid:18fc8150-175c-488f-963d-c24d85ed509a> | 3.875 | 1,778 | Q&A Forum | Science & Tech. | 77.647692 | 2,399 |