Dataset columns:
- text: string, 174 to 655k characters
- id: string, 47 characters
- score: float64, 2.52 to 5.25
- tokens: int64, 39 to 148k
- format: string, 24 classes
- topic: string, 2 classes
- fr_ease: float64, -483.68 to 157
- __index__: int64, 0 to 1.48M
Here is an excerpt as related to that last point: Focusing on the past six decades, we observe no sustained upward trends in wind speed distributions (Figs. 1 and 3), the mean wind speed at landfall or the annual frequency of occurrence of landfalling segments (Fig. 8). (Note that this annual frequency is specific to landfalling segments and different from the annual frequency of landfalling events since some events have multiple landfalling segments, e.g. in 2005 Hurricane Katrina made landfall in both South Florida and Louisiana.) This being the case, the dramatic increases in total economic and insured losses from TCs, which have been manifest over the past six decades, indicate that the increasing losses must be attributed to factors other than wind speed alone. This is in accord with recent studies (Pielke, 2005; Pielke et al., 2008; Crompton and McAneney, 2008), which demonstrate the importance of demographic changes in driving the increasing economic cost of hurricane losses. The paper concludes as follows: The quality of observational data is central to the ongoing debate over a warming climate and its consequences for TC frequency and intensities. Our analyses show clear, anomalous differences in the wind speed distributions between the early historical period and the very recent six decades. While these differences cannot unequivocally exclude a possible Global Climate Change cause, we suggest that data quality issues are more plausible. Find the paper here in PDF. An enormous challenge lies ahead for recovering reliable wind estimates in the early historical record, especially for highly dynamic and short-lived extreme TCs. The counting of events by Saffir-Simpson Hurricane categories is determined by threshold wind speeds, and if the wind estimates are themselves unreliable, how can derivative statistics be trusted sufficiently for long-term trend analysis? It is timely to recognise that using the early historical record will inevitably involve some irreducible uncertainties, that "fixing" these may not be possible, and that more physically-based models are needed to help resolve the data impasse. Conclusions drawn from scientific and insurance applications using the inherently lower-quality components of the record should be treated with caution.
<urn:uuid:731c2f49-ebd6-413a-97e7-00ce77aa3acb>
2.671875
432
Academic Writing
Science & Tech.
25.341491
1,700
The principles of validation are almost the same as I have given on the previous page, so I won’t repeat them. The one difference is that it is not helpful to try to validate checkboxes at the moment they are checked; it is best left until submission time. The validateCheckbox routine in the code assumes that multiple checks are required. This is because if you want the reader to select just one option, radio buttons are a much easier way of achieving the same result. The code makes it possible to enforce exactly N checks, or a maximum of N, or a minimum of N. One curious case is when you want the reader to check a single box. This might at first glance seem pointless, but apparently the lawyers of the world have decided this is a good way of enforcing terms and conditions. This requires slightly different code, as a single checkbox is not treated as an array of one element. (Actually, a better solution would probably be to use two radio buttons: “I do not accept” and “I accept”, with the former set as the default. Then no script is needed.) A small example follows.
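Along the lines described above, a minimal sketch of such a script might look like the following (this is my illustration, not the original validateCheckbox code; the function, form and field names are assumptions):

function validateCheckboxGroup(form, groupName, minChecks, maxChecks) {
    // Count how many boxes in the named group are checked at submission time.
    var boxes = form.querySelectorAll('input[type="checkbox"][name="' + groupName + '"]');
    var checked = 0;
    for (var i = 0; i < boxes.length; i++) {
        if (boxes[i].checked) {
            checked++;
        }
    }
    // Enforce "exactly N" by passing the same value for minChecks and maxChecks.
    if (checked < minChecks || checked > maxChecks) {
        alert('Please select between ' + minChecks + ' and ' + maxChecks + ' options.');
        return false;   // returning false from onsubmit cancels the submission
    }
    return true;
}

// A lone "I accept the terms" box is not a group, so it is tested directly.
function validateSingleCheckbox(box) {
    if (!box.checked) {
        alert('You must accept the terms and conditions before submitting.');
        return false;
    }
    return true;
}

It would be wired up from the form itself, for example <form onsubmit="return validateCheckboxGroup(this, 'options', 2, 2);"> to require exactly two checks.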
<urn:uuid:9c765a18-0fac-42d2-a290-7297db14ebb2>
2.546875
236
Documentation
Software Dev.
54.558154
1,701
Twin Rovers Headed for Mars
NASA today announced plans to launch two large scientific rovers to the red planet in 2003.
August 10, 2000 -- The traffic on Mars is expected to double in the near future. NASA today announced plans to launch two large scientific rovers to the red planet in 2003, rather than the original plan for just one, said Dr. Ed Weiler, Associate Administrator for Space Science, NASA Headquarters. Both Mars rovers, to be built, managed and operated by NASA's Jet Propulsion Laboratory, Pasadena, Calif., currently are planned for launch on Delta II rockets from Cape Canaveral Air Force Station, Fla. The first mission is targeted for May 22, with the second launch slated for June 4. After a seven-and-a-half-month cruise, the first rover should enter Mars' atmosphere January 2, 2004, with the second rover bouncing to a stop on the Martian surface January 20. Above: This image is a single frame from a striking video of the planned Mars 2003 rover mission. [more information from NASA headquarters] The rovers will be exact duplicates, but that's where the similarities end. Relatives of the highly successful 1997 Sojourner rover, these 150-kilogram (300-pound) mobile laboratories may look and act alike, but they're going to decidedly different locations. Scott Hubbard, Mars program director at NASA Headquarters, said, "For the past few weeks NASA has been undertaking an extensive study of a two-lander option." Hubbard added, "The scientific appeal of using the excellent launch opportunity in 2003 for two missions was weighed carefully against the resource requirements and schedule constraints." "Our teams concluded that we can successfully develop and launch these identical packages to the red planet," continued Hubbard. "We also determined that, in addition to the prospect of doubling our scientific return, this two-pronged approach adds resiliency and robustness to our exploration program." "Mars is a beguiling place, and conducting a real mobile field-geology mission is always better when there are multiple perspectives," said Dr. Jim Garvin, Mars program scientist at NASA Headquarters. However, the landing sites have yet to be selected. "We are thinking about localities where there is evidence of surface processes involving what we might call 'past' water on Mars," Garvin said. "This includes sites where we have today mineralogical evidence that water may have produced unique chemical fingerprints, as well as places where it seems likely water 'ponded' in closed depressions for enough time to modify the regional geology," Garvin added. During the next two to three years, engineers and scientists will conduct an intensive search for potential touchdown sites. Using the flood of data still coming in from Mars Global Surveyor, and that expected starting in 2002 from the Mars 2001 Orbiter, scientists will search for compelling landing zones with the fewest hazards and select the best candidates. Above: This artist's rendering shows a view of NASA's Mars 2003 Rover as it sets off to roam the surface of the red planet. The rover is scheduled for launch in June 2003 and will arrive in January 2004, shielded in its landing by an airbag shell. The airbag/lander structure, which has no scientific instruments of its own, is shown to the right in this image, behind the rover.
"The goal of both rovers will be to learn about ancient water and climate on Mars," said Professor Steven Squyres, Cornell University, Ithaca, N.Y., and principal investigator for the rovers' Athena science package. "You can think of each rover as a robotic field geologist, equipped to read the geologic record at its landing site and to learn what the conditions were like back when the rocks and soils there were formed." Given the high priority NASA and the administration assign to the space science program overall, and to the timely exploration of Mars, the agency proposes that space science cover any additional costs of the first rover mission, and that the bulk of the cost for the second lander be reallocated from programs outside Space Science. The Mars 2003 Rover project will be managed at JPL for the Office of Space Science. Dr. Firouz Naderi is the Mars Program Manager at JPL, which is a division of the California Institute of Technology in Pasadena. Mars Exploration Program - from the NASA Jet Propulsion Laboratory. Science@NASA stories about Mars: Back to the Future on Mars -- July 28, 2000. NASA announces plans for a Mars rover in 2003 with a second rover under consideration. Making a Splash on Mars -- June 29, 2000. Scientists ponder how to keep water in its liquid form on super-dry and cold Mars. Mars Surprise -- June 22, 2000. New pictures from NASA's Mars Global Surveyor spacecraft reveal gullies on Mars, possibly created by recent flash floods. Martian Swiss Cheese -- March 9, 2000. New pictures from NASA's Mars Global Surveyor spacecraft show exotic terrain made of dry ice near the Red Planet's south pole. Unearthing Clues to Martian Fossils -- June 11, 1999. The hunt for signs of ancient life on Mars is leading scientists to an otherworldly lake on Earth. The Red Planet in 3D -- May 27, 1999. New data from Mars Global Surveyor reveal the topography of Mars better than many continental regions on Earth. Search for Life on Mars will Start in Siberia -- May 27, 1999. NASA funds permafrost study to support astrobiology research. HQ Press Release 00-124. Production Editor: Dr. Tony Phillips. Curator: Bryan Walls. Media Relations: Steve Roy. Responsible NASA official: John M. Horack
<urn:uuid:79958e87-a8a1-4ff2-90d7-2b630f1b2c09>
2.921875
1,285
News Article
Science & Tech.
47.5989
1,702
Flying a spacecraft to a far planet, millions of miles away, takes many talented people working with very special equipment. In planning the mission, engineers and scientists decide what kinds of instruments ride on board the spacecraft and what kind of information is gathered. They plan the spacecraft's journey by making precise calculations of its path through space. The spacecraft itself is a complex machine that must operate perfectly in alien environments -- under conditions of intense radiation, and extreme cold and heat. The spacecraft receives commands from mission controllers and sends scientific data back to Earth. A computer on board the spacecraft manages the two-way communications equipment and controls the scientific instruments and the other activities of the spacecraft. NASA tracks missions using a world-wide communications system called the Deep Space Network. Huge antennas --- some nearly as big across as a football field --- capture the faint signals from spacecraft. The signals carry the science data, which must be decoded into information or images. The Deep Space Network also has powerful transmitters to send commands to distant spacecraft.
<urn:uuid:b13de43d-8e9f-4df2-8414-7592864b12be>
4.46875
224
Knowledge Article
Science & Tech.
33.378675
1,703
One of the outstanding achievements of 20th Century science was the realisation that the great diversity of nature is based on a handful of elementary particles acting under the influence of only a few fundamental forces. This talk aims to give an overview of the natural forces that shape everything around us and outlines current research and what we believe remains to be discovered. Prof. Peter Kalmus is Emeritus Professor at Queen Mary, University of London. At various times he has been President of the Physics Section of the British Association, Vice President of the Institute of Physics, and Vice President of the Royal Institution. A distinguished career has seen him awarded the Rutherford Medal for his outstanding role in the discovery of the W and Z particles, an OBE for his contributions to physics and only last week the Kelvin medal for his role in the public understanding of physics. Dramatic advances in the study of stem cells - the precursor cells of blood, skin, bone and nerve cells - could be used one day to help sufferers from Parkinson's disease, hepatitis, leukaemia, diabetes and rheumatoid arthritis. Stem cells hold the key to the ability to grow a patient's own tissue for repair, and are central to the cloning debate. Potentially they could be used to create unlimited supplies of replacement tissue, including nerve, bone, skin and heart muscle, for repairing injuries and for treating disease - potentially saving millions of lives. Cloning offers a way to grow a patient's own stem cells but, by perfecting such technology, scientists could accelerate efforts to conduct so-called reproductive cloning. Professor Richard Gardner, who is chairing the Royal Society's working group on stem cells and therapeutic cloning, provides a rich overview of the how and why of cloning. Professor Wolpert's thesis is that science is not common sense. Common sense is misleading - it can make you accept that a seashell on the top of a mountain is proof of a global flood. In this talk Professor Wolpert gives one scientist's view of the culture of science and why the public's understanding of that culture is so much in error. His thoughtful analysis concludes that scientific thought is unnatural. As well as a CBE, Professor Wolpert is a fellow of the Royal Society and former chairman of the Committee for the Public Understanding of Science. In May he was awarded the Royal Institution's Michael Faraday award for services to the public understanding of science. The common flu of 1918 spread faster than any disease in history, before or since, and killed more people in less time than all of the great plagues of history, doing so in the presence of relatively 'modern' medical science. Even last year, approximately 20,000 people in the UK died from what were thought to be flu-related illnesses. Yet even this was not an epidemic - the Spanish flu pandemic at the close of the First World War is believed to have accounted for the deaths of well over 20m people worldwide, including 280,000 in the UK. The last official flu epidemic was 11 years ago, but the fact that there is no way of guessing when the next one might be is a very serious concern. Dr Elspeth Garman will go through what we know of the different strains of flu virus and outline the progress made in the development of a cure. Ref: 'The Origin and Control of Flu Pandemics', Laver and Garman, Science, Sep 7, 2001, 1776-1777.
Cryptography, the science of encrypting and decrypting information, dates as far back as 1900 BC, when a scribe in Egypt first used a derivation of the standard hieroglyphics of the day to communicate. Today cryptography provides the locks and keys to the Information Age. It is the technology that enables private emails to be sent and secure business transactions to take place over the Internet. Simon Singh, author of Fermat's Last Theorem and The Code Book, will give a brief history of cryptography and then discuss its impact in the 21st century. He will also bring with him a genuine Enigma cipher machine. Simon Singh completed his PhD in particle physics at Cambridge. In 1991 he joined the BBC Science Department and worked as a producer and director with 'Tomorrow's World' and 'Horizon'. He is also the author of 'Fermat's Last Theorem' and 'The Code Book', the latter forming the basis for the popular Channel 4 series 'The Science of Secrecy'. This talk is run in conjunction with Blackwells Café Scientifique.
<urn:uuid:5c6e8595-d959-401d-8022-7f7225a700c5>
3.03125
917
Content Listing
Science & Tech.
42.27272
1,704
Diablo Canyon Power Plant / AP This photo provided by the Diablo Canyon Power Plant on Friday shows salp, a gelatinous sea creature, at a nuclear reactor intake structure. In Japan, it was a monstrous earthquake and tsunami that brought down the Fukushima nuclear plant. In California, it’s a tiny, jellyfish-like sea creature called salp that’s causing problems at the Diablo Canyon atomic plant. An invasion of salp has prompted Pacific Gas and Electric Co. to temporarily shut down a nuclear reactor at Diablo Canyon, in Avila Beach, San Luis Obispo County, on the central California coast. A giant swarm of the translucent barrel-shaped organisms this week clogged intake screens that are used to keep marine life out of the seawater that is used as a coolant for the nuclear plant. On Wednesday, PG&E officials reduced power output at the Unit 2 reactor, then decided to shut it down altogether “until conditions improve at the intake structure.” The plant’s other reactor, Unit 1, had already been shut down earlier in the week for a planned refueling and maintenance outage. “Safety being the number one priority, there was such an influx of salp and you need ocean water to cool the reactors,” PG&E spokesman Tom Cuddy told msnbc.com on Friday. “At that point we made a conservative decision to safely shut down the unit.” PG&E owns and operates the Diablo Canyon Power Plant, whose two reactors together produce approximately 2,300 net megawatts of electricity – enough to serve nearly 3 million northern and central California homes. Cuddy said he wasn’t sure when the Unit 1 reactor would come back online. “We’ll turn the unit on to full power when it’s safe to do so – when the salps leave,” he said. “The bottom line is we’re taking a methodical and conservative approach.” Lara Uselding, a spokeswoman for the Nuclear Regulatory Commission, the federal agency that oversees reactor safety and security, said the plant is not in any danger. “It’s not a normal operation condition, but the plant is safe and all the systems operated as designed,” she said. Salps are tiny, gelatinous organisms that move by contracting, thus pumping water throughout their bodies. They can reproduce and multiply quickly. Though salps look a bit like jellyfish, they are actually more closely related to organisms that have backbones. They typically grow to 1 or 2 inches long and usually do not appear at the coast, says Larry Madin, a salp expert and research director at Woods Hole Oceanographic Institution in Massachusetts. “They’re typically more of an offshore living organism,” Madin says. He surmises that the swarm at Diablo may have been carried in on currents blown by wind. But Steve Haddock, a scientist with the Monterey Bay Aquarium Research Institute in Moss Landing, Calif., said salps have been blooming in high numbers along the California coast since at least December. Several sightings have been reported to JellyWatch, a website Haddock runs to track sightings of jellyfish and other marine organisms. Other than clogging the cooling system filters of a nuclear plant, the organisms pose no danger, says Bruce Robison, senior scientist at the Monterey Bay Aquarium Research Institute. They don’t sting, they don’t have teeth and they’re not poisonous. Salps passively feed off tiny organic particles in the water and can reproduce sexually or asexually. “They can have their population size expand tremendously within a short period of time, which makes them very abundant.
In a small space, they can take up all the space,” Robison says. Madin said the slimy swarm at Diablo would probably go away in a few days, carried off by currents. Or, says Robison, they’ll quickly die off when their food supply runs out. So the best bet, experts say, is for nuclear officials to just wait it out in the short term. "Long term, perhaps if their intakes were a bit deeper, it would not be a problem," Haddock said. Despite the reactor outage, California is not expected to experience any electricity shortages because it has ample reserves, said Stephanie McCorkle, a spokeswoman for the California ISO, which operates the state's power grid and wholesale markets. It’s not the first time that sea creatures have interfered with nuclear plant activity. In 2008, a swarm of jellyfish led to a sharp decrease in power generation at Diablo Canyon, according to the Los Angeles Times. Similar jellyfish problems have cropped up at nuclear plants in the U.S., Japan, Israel and Scotland over the years, the newspaper said. “It happens. It’s something you would expect along the coast,” Uselding said. Madin said this is the first time he’s heard of salps interfering with the operation of a nuclear plant.
<urn:uuid:e383f7e8-c1e3-4cee-b29d-af727c6130bc>
3.140625
1,102
News Article
Science & Tech.
50.127728
1,705
Lunar surface still active: study
Active Moon: A new study indicates the Moon may not be as geologically dead as previously thought, showing signs that it is simultaneously stretching and shrinking. New high-resolution images from NASA's Lunar Reconnaissance Orbiter show parts of the Moon's surface are being pulled apart by expansion, forming small, narrow trenches or rift valleys in the mare basalts and the highlands of the Lunar far side. The finding, which appears in the online edition of Nature Geoscience, contradicts the notion that the Moon is a cold, dead world with a surface that's slowly shrinking as its interior cools. These linear valleys, called graben, form when the crust stretches, breaks and drops down along two bounding faults. Scientists estimate the graben formed less than 50 million years ago, which is recent by geologic time scales. Study lead author Dr Thomas Watters from the Centre for Earth and Planetary Studies at the Smithsonian's National Air and Space Museum in Washington DC says, "The graben tell us forces acting to shrink the Moon were overcome in places by forces acting to pull it apart". Earlier images taken by the Lunar Reconnaissance Orbiter in 2010 show rounded cliffs called lobate scarps distributed across the Moon's surface, evidence the Moon shrank in recent geologic time and might still be shrinking today. The researchers believe tectonic movements associated with both the graben and lobate scarps may be a possible source of some of the shallow moonquakes still being recorded. Planetary scientist Dr Michael Brown from Monash University in Melbourne says the findings will force scientists to rethink lunar geology. "It's certainly come as a surprise," says Brown. "It means the geologic behaviour of the Moon is more complex than originally thought." The Moon's lack of a global magnetic field had led scientists to conclude the lunar interior was mostly solid. "Now there's a suggestion the Lunar outer core is fluid and surrounded by a partially molten layer," says Brown. As to whether it's time to go back to the Moon for a closer hands-on look, Brown says, "We want to understand the Moon in greater detail, but it's a difficult political question". "The Moon has a lot of variation and was barely explored by the Apollo landings. Certainly it would be fascinating to send a robot probe to do some of the work," says Brown. "The only time they sent an actual geologist to the Moon was on the very last mission."
<urn:uuid:ca84e223-82e4-41ff-9382-2e4512d58fc0>
3.90625
516
News Article
Science & Tech.
46.319212
1,706
The Western Diamondback Rattlesnake
The western diamondback rattlesnake lives in the badlands and semi-desert areas of North America, where its tough skin prevents it from losing too much moisture. It conserves water by excreting its urine as a thick paste. It adjusts its daily behavior to regulate body heat, alternately basking in the sun and retreating to the shade. Water is scarce in the diamondback’s arid habitat, but the snake has a tough skin that conserves moisture and a behavior pattern that helps it avoid the worst of the heat. The rattlesnake’s heat-sensing pit organs guide the snake toward its prey, allowing it to strike with deadly accuracy even in total darkness. The western diamondback rattlesnake lives in arid, scrubby semi-deserts of the southwest U.S. from California to Arkansas. Usually found in dry, sandy, or rocky terrain, the rattlesnake sometimes ventures onto cultivated land. The diamondback is adapted to surviving in a barren landscape where less than 1” of rain falls a year. Lack of water in its dusty range poses no problem for the diamondback. It can go for months without drinking, obtaining all the moisture it needs from its prey. The rattlesnake recycles as much of its body fluids as possible and, when it does urinate, it excretes the waste as concentrated uric acid crystals rather than as a fluid. This takes the form of a white paste and is passed with the feces. In hotter areas the diamondback is most active at night; moving around in the open sunshine all day would cause it to overheat. Like other snakes, the rattlesnake cannot generate enough body heat to operate its organs. The diamondback often spends the hottest daylight hours dozing beneath a rock. When hunting, the snake investigates every cranny, its forked tongue flicking in and out to taste the air for the scent of prey. It preys mostly on rodents, but may eat small birds, lizards and larger animals, such as prairie dogs, ground squirrels and rabbits. As the snake strikes, long, hollow fangs swing down to stab into the prey and inject a lethal dose of venom. The jaws dislocate and the skin stretches as the mouth engulfs the victim. The snake does not chew its prey; it walks its jaws over it to swallow it whole. The female diamondback can breed only once every two years, so there is intense competition among males. In spring, males are drawn to receptive females by scent. Several males may arrive in the same area at the same time, and this signals the need for a contest to decide which of them will win the female. Mating may last 24 hours and eggs are fertilized internally. The eggs develop and incubate inside the female’s body for about 165 days, after which the young are born fully developed. The dozen or so young snakes may be up to 12” long and are independent immediately. They soon move off to catch their own prey, already equipped to deliver a killing bite. The western diamondback is not endangered yet, but its numbers are declining, along with those of other rattlesnake species, due to the ‘rattlesnake roundups’ that take place in some states.
<urn:uuid:ac0d9b25-8172-4a3e-8e20-47e04ad202af>
3.75
673
Knowledge Article
Science & Tech.
50.466313
1,707
International Business Times: Natural climatic changes have led to the total ecosystem collapse of coral reefs, suggests a new report. According to a study by the Florida Institute of Technology, climate shifts have stalled reef growth in the eastern Pacific for 2,500 years. The Daily Telegraph: Reef growth in the Pacific shut down thousands of years ago, scientists have said. And human-induced pollution could worsen the trend in the future, they warned today... The reef shutdown began 4,000 years ago and lasted about 2,500 years, said the research led by the...
<urn:uuid:41da48c3-3502-49fe-928c-8930884ce95e>
2.6875
107
Content Listing
Science & Tech.
55.262332
1,708
We are remembered by our exaggerations. So at a recent reunion of my former research-group members, several recalled my saying "When you see a standard deviation in an x-ray crystal structure, multiply it by pi [π, 3.141...], or if the structure is done by friends, by e [2.718...]." I was talking about structures of molecules—details of their geometry, also of a particularly fruitful way to gain knowledge of these structures. And of the error estimates in such studies. There is no more basic enterprise in chemistry than the determination of the geometrical structure of a molecule. Such a determination, when it is well done, ends speculation and provides us with the starting point for understanding every physical, chemical and biological property of the molecule. Indeed, the chemical sciences (only modestly imperialistic, I take them to range from materials science through molecular biology) are what they are today largely as the result of careful structure determination. We'd be still waiting in ignorance if we believed the hype of various microscopies. A few very accurate structures have come to us through ingenious use of electron diffraction and various spectroscopies. But the vast majority of what we know about shapes and metric detail of molecules and extended materials derives from studies of the diffraction of x rays by single crystals of molecules, a technique popularly called "x-ray crystallography."
<urn:uuid:40d9e532-38da-4823-a06a-47ce1c0effc9>
3.171875
288
Comment Section
Science & Tech.
39.263077
1,709
[antlr-interest] What do . (period) and Tokens mean in tree grammars?
Harald M. Müller harald_m_mueller at gmx.de
Sat Dec 29 14:16:13 PST 2007
Sorry that I ask - but I did not find it on the Wiki and not in the ANTLR book: What do . and Tokens mean in tree grammars? AFAIK, . means "any complete subtree" - although this seems not to work in some current builds, if I understood some email of yesterday correctly? Question number two: Does a single Token also match a complete tree with this token at the root, or only a "childless tree" (i.e. the token as a tree node alone)? Question number three: What sort of lookahead is used over . ? For example, would the following work - assume here that the subtrees can be arbitrarily large subconditions (as is usual in expression trees):
condition : ^(AND . ^(NOT .)) -> ...rewrite1...
          | ^(AND . .)        -> ...rewrite2...
The intention of this is to rewrite an AND tree which has as second child a tree with a NOT root to rewrite1; whereas all other trees are supposed to be rewritten as rewrite2. If ANTLR tree parsing works the way I assume it - namely the whole tree is flattened to a node sequence, on which "one-dimensional" parsing techniques (even LL(*)) are applied - then the NOT will be "too far" away even for an LL(*) analysis, because there will be recursively nested expressions on the way between the AND and the NOT. However, if ANTLR goes for real "two-dimensional" parsing, or does some lookahead over arbitrarily large subtrees (to the readily available "later" children!) - which I would call "1.5-dimensional lookahead computation/parsing" - then the above two patterns could be disambiguated.
<urn:uuid:fbf907f8-5cfd-471c-ad04-918cb363fe58>
2.6875
468
Comment Section
Software Dev.
67.344652
1,710
Comet ISON will light up the sky This interplanetary visitor may be the brightest comet ever. September 25, 2012 About a year from now, Comet C/2012 S1 (ISON) probably will become the brightest comet anyone alive has ever seen. How bright it could get is currently the subject of vigorous discussion among planetary scientists and everyday comet-watchers. Comet C/2012 S1 (ISON) appears as a faint blob in this image taken at the Remote Astronomical Society Observatory near Mayhill, New Mexico. // Credit: E. Guido, G. Sostero, and N. Howes Two astronomers, Vitali Nevski from Vitebsk, Belarus, and Artyom Novichonok from Kondopoga, Russia, discovered the comet on images they obtained September 21. They used the 16-inch (0.4-meter) Santel reflector of the International Scientific Optical Network, whose abbreviation — ISON — is now the Comet C/2012 S1’s common name. When the two scientists found the comet, it glowed weakly at magnitude 18.8. As a comparison, it would take the light from more than 100,000 such comets to equal the faintest star visible to the naked eye from a dark site. According to predictions, the comet will approach to within 0.012 astronomical units (1.1 million miles [1.8 million kilometers]) of the Sun at the end of November 2013. One astronomical unit (AU) equals the average distance between the Sun and Earth, about 93 million miles (149.7 million km). Then, in January 2014, the comet will approach to within 0.4 AU (37.2 million miles [59.9 million km]) of Earth. Regarding visibility, Comet ISON — currently 6.5° due east of the 1st-magnitude star Pollux in Gemini the Twins — is now bright enough for amateur astronomers with large telescopes to image. That said, the comet itself will not show much in the way of detail for several months. By late summer 2013, observers at dark locations should be able to spot the comet through small telescopes or possibly even binoculars. And sometime in late October or early November, C/2012 S1 should cross the naked-eye visibility threshold. From there, it may reach — or even exceed — the brightness of the Full Moon. Currently, Comet ISON glows around 18th magnitude in front of the stars of Cancer the Crab. In the second week of December, it will enter Gemini the Twins. Astronomy: Richard Talcott and Roen Kelly When the comet is closest to the Sun (a moment astronomers call perihelion), it may shine a dozen times as brightly as Venus, normally the brightest “starlike” object in the sky. Unfortunately, on that date it will lie only 4.4° north of our daytime star, and the Sun’s glare may hide it from the view of casual observers. Immediately after reaching perihelion, Comet ISON heads north. And while the comet fades as its distance from the Sun increases, it still should be as bright as Venus, but with a spectacular tail. Its position will allow observers all over Earth to see it, but those in the Northern Hemisphere will get the better views as Christmas approaches. In fact, on January 8, 2014, the comet will lie only 2° from Polaris — the North Star. Astronomy will cover Earth’s encounter with Comet C/2012 S1 (ISON) in great detail in the coming year. Stay tuned!
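As a rough check of that "100,000 comets" comparison (my arithmetic, not the article's; it assumes a naked-eye limiting magnitude of about 6), the magnitude scale is logarithmic, so the flux ratio between a magnitude 6.0 star and the magnitude 18.8 comet is

\[
\frac{F_{m=6.0}}{F_{m=18.8}} = 10^{\,0.4\,(18.8 - 6.0)} = 10^{5.12} \approx 1.3 \times 10^{5},
\]

i.e. it would indeed take on the order of 100,000 comets at the discovery brightness to match the faintest star visible to the naked eye.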
<urn:uuid:fa70bfcc-a13f-4826-843a-ff6b62d7357a>
3.3125
751
News Article
Science & Tech.
61.731052
1,711
When the scientists combined the light from two 8-meter telescopes with MIDI, they could simulate the resolving power of a telescope with a diameter of about 100 meters. These observations gave a "visibility function," which measures how resolved a source is: a visibility of 1 occurs when a source is completely unresolved, while lower visibilities indicate that the source is increasingly resolved. For HD 69830, the scientists did not resolve the star itself, but did resolve the dust emission, as the visibility clearly does not match the pattern of an unresolved source (dashed blue line). The levels of dust emission vary in the wavelength range covered in the observation (8-13 microns, a region of the mid-infrared spectral range), and this variation can also be seen in the visibility function. These results show that the dust lies between 4.7 and 224 million miles (7.5 and 360 million km) from the star (0.05 to 3 times the Earth-Sun distance).
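For context (my addition, not stated in the excerpt): an interferometer's angular resolution is set by the separation, or baseline, between the telescopes rather than by the size of the individual mirrors. Taking an illustrative mid-infrared wavelength of 10 microns and an effective baseline of order 100 m,

\[
\theta \approx \frac{\lambda}{B} = \frac{10\ \mu\mathrm{m}}{100\ \mathrm{m}} = 10^{-7}\ \mathrm{rad} \approx 0.02\ \mathrm{arcsec},
\]

which is why pairing two 8-meter telescopes over such a baseline can resolve dust structure that neither telescope could resolve on its own.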
<urn:uuid:a7858873-054d-46c0-8286-5073936c357e>
3.734375
193
Knowledge Article
Science & Tech.
49.06
1,712
Severe ozone air pollution in the Persian Gulf region
1 Energy, Environment and Water Research Centre, The Cyprus Institute, 20 Kavafi Street, 1645 Nicosia, Cyprus
2 Max Planck Institute for Chemistry, Becherweg 27, 55128 Mainz, Germany
3 Observatoire Midi-Pyrénées, CNRS – Laboratoire d'Aérologie, 14 Avenue E. Belin, 31400 Toulouse, France
Abstract. Recently it was discovered that over the Middle East during summer, ozone mixing ratios can reach a pronounced maximum in the middle troposphere. Here we extend the analysis to the surface and show that, especially in the Persian Gulf region, conditions are highly favorable for ozone air pollution. We apply the EMAC atmospheric chemistry-climate model to investigate long-distance transport and the regional formation of ozone. Further, we make use of available in situ and satellite measurements and compare these with model output. The results indicate that the region is a hot spot of photochemical smog where European Union air quality standards are violated throughout the year. Long-distance transport of air pollution from Europe and the Middle East, natural emissions and stratospheric ozone conspire to bring about relatively high background ozone mixing ratios. This provides a hotbed for strong and growing indigenous air pollution in the dry local climate, and these conditions are likely to get worse in the future.
<urn:uuid:ad1143d6-0e52-463b-8843-4f58014840d8>
2.515625
286
Academic Writing
Science & Tech.
17.848368
1,713
Man-made pollution is helping to push the tropics northwards, research suggests. The effect could impact weather and climate, making sub-tropical regions drier and creating wetter and stormier conditions further north. Scientists already knew that the tropics have been widening by around 0.7 degrees of latitude per decade. Ozone depletion in the stratosphere is thought to be the main driver of this expansion in the southern hemisphere. But the new findings indicate that tropic widening in the northern hemisphere is mainly due to black carbon and ozone lower in the atmosphere. While stratospheric ozone provides vital protection against harmful solar radiation, the same gas in the lower troposphere is a man-made pollutant and harmful to health. Professor Robert Allen, from the University of California at Riverside, who led the climate modelling study, said: "Both black carbon and tropospheric ozone warm the tropics by absorbing solar radiation. "Because they are short-lived pollutants, with lifetimes of one-two weeks, their concentrations remain highest near the sources: the northern hemisphere low-to-mid-latitudes. It's the heating of the mid-latitudes that pushes the boundaries of the tropics poleward." Tropical expansion northwards could affect large-scale atmospheric circulation, especially in the subtropics and mid-latitudes, said the researchers, whose findings are reported in the journal Nature. "If the tropics are moving poleward, then the sub-tropics will become even drier," said Dr Allen. "If a poleward displacement of the mid-latitude storm tracks also occurs, this will shift mid-latitude precipitation poleward, impacting regional agriculture, economy, and society."
<urn:uuid:a1cc9610-e191-4409-8354-868c1f25b2c3>
3.8125
355
News Article
Science & Tech.
27.421923
1,714
A hailstone begins as a frozen raindrop or ice crystal. Strong updrafts of warm air and downdrafts of cool air move the frozen particle up and down through different levels of the storm cloud. The hailstone encounters different forms of moisture as it moves, and layers of frozen ice particles accumulate on its surface. The resulting hailstone has a layered structure.
<urn:uuid:07b2ab16-32ce-4e5d-86ec-d855b1af6ffa>
2.828125
85
Truncated
Science & Tech.
52.258137
1,715
How has the sea ice that surrounds Antarctica varied over the period for which there exist comprehensive satellite data? In what follows, we review what has been learned about the subject -- in the order in which it was learned -- starting with the very first year of the current millennium. Noting that "Antarctic sea ice may show high sensitivity to any anthropogenic increase in temperature" -- as per the canary-in-the-coal-mine concept of high-latitude amplification of CO2-induced global warming -- while further noting that most climate models suggest that an increase in surface temperature "would result in a decrease in sea ice coverage," Watkins and Simmonds (2000) analyzed temporal trends in different measures of the sea ice that surrounds Antarctica, using Special Sensor Microwave Imager data obtained from the Defense Meteorological Satellite Program for the nine-year period December 1987-December 1996, in search of the suspected signal. But contrary to what one would expect on the basis of the model simulations, and especially in light of what climate alarmists call the unprecedented warming of the past quarter-century, the two scientists observed statistically significant increases in both sea ice area and extent; and when they combined their results with those of the preceding nine-year period (1978-1987), both parameters continued to show increases over that expanded time period. In addition, they found that the 1990s also experienced increases in the length of the sea-ice season. In a contemporary assessment of Antarctic sea ice behavior, Yuan and Martinson (2000) also utilized Special Sensor Microwave Imager data, but they additionally analyzed brightness temperatures obtained by the Nimbus-7 Scanning Multichannel Microwave Radiometer; determining that the mean trend in the latitudinal location of the Antarctic sea ice edge over the prior 18 years was an equatorward extension of 0.011 degree latitude per year, in harmony with the findings of Comiso (2000), who analyzed Antarctic temperature data obtained from 21 surface stations, as well as from infrared satellites operating from 1979 to 1998, and discovered a 20-year cooling trend of 0.042°C per year in the satellite data and 0.008°C per year in the station data. That Antarctic sea ice had indeed increased in area, extent and season length since at least 1978 was also supported by several subsequent studies. The very next year, for example, Hanna (2001) published an updated analysis of Antarctic sea ice cover -- also based on Special Sensor Microwave Imager data, but for the extended period of October 1987-September 1999 -- finding "an ongoing slight but significant hemispheric increase of 3.7(±0.3)% in extent and 6.6(±1.5)% in area." And one year later, Parkinson (2002) utilized satellite passive-microwave data to calculate and map the length of the sea-ice season throughout the Southern Ocean for each year of the period 1979-1999, finding that although there were opposing regional trends, a "much larger area of the Southern Ocean experienced an overall lengthening of the sea-ice season ... than experienced a shortening." Concurrently, Zwally et al. (2002) also utilized passive-microwave satellite data to study Antarctic sea ice trends. 
Over the 20-year period 1979-1998, they report that the sea ice extent of the entire Southern Ocean increased by 11,181 ± 4,190 square km per year, or by 0.98 ± 0.37 percent per decade, while sea ice area increased by nearly the same amount: 10,860 ± 3,720 square km per year, or by 1.26 ± 0.43 percent per decade. And in contradiction of the ancillary climate-alarmist claim that various aspects of earth's climate should exhibit greater variability when it is warmer than when it is colder, they observed that the variability of monthly sea ice extent declined from 4.0% over the first ten years of the record to 2.7% over the last ten years (which were supposedly the warmest of the prior millennium, according to the world's climate alarmists). One year later, Vyas et al. (2003) analyzed data from the multi-channel scanning microwave radiometer carried aboard India's OCEANSAT-1 satellite for the period June 1999-May 2001, which they combined with data for the period 1978-1987 that had been derived from space-based passive microwave radiometers carried aboard earlier Nimbus-5, Nimbus-7 and DMSP satellites, in order to study secular trends in sea ice extent about Antarctica over the period 1978-2001. This work revealed that the mean rate of change of sea ice extent for the entire Antarctic region over this period was an increase of 0.043 million km² per year. In addition, the six researchers concluded that "the increasing trend in the sea ice extent over the Antarctic region may be slowly accelerating in time, particularly over the last decade," which finding they described as "paradoxical in the global warming scenario resulting from increasing greenhouse gases in the atmosphere." In a somewhat similar study, Cavalieri et al. (2003) extended prior satellite-derived Antarctic sea ice records several years by bridging the gap between Nimbus 7 and earlier Nimbus 5 satellite data sets with National Ice Center digital sea ice data, finding that sea ice extent about Antarctica rose at a mean rate of 0.10 ± 0.05 × 10^6 km² per decade between 1977 and 2002. Likewise, Liu et al. (2004) employed sea ice concentration data retrieved from the Scanning Multichannel Microwave Radiometer on the Nimbus 7 satellite, plus the Special Sensor Microwave Imager on several defense meteorological satellites, to develop a quality-controlled history of Antarctic sea ice variability over the period 1979-2002 (which included different states of the Antarctic Oscillation and several ENSO events), after which they evaluated total sea ice extent and area trends by means of linear least-squares regression. This work revealed, in their words, that "overall, the total Antarctic sea ice extent has shown an increasing trend (~4,801 km²/yr)," and that "the total Antarctic sea ice area has increased significantly by ~13,295 km²/yr, exceeding the 95% confidence level." Shortly thereafter, Parkinson (2004) reviewed the history of satellite observations of sea ice extent in the Southern Ocean about Antarctica, concentrating on data obtained from the Scanning Multichannel Microwave Radiometer aboard the Nimbus 7 satellite and subsequent satellite-based Special Sensor Microwave Imagers, because these platforms provided, in her words, "the best long-term record of changes in the full Southern Ocean ice cover."
The resulting plot of 12-month running-means of Southern Ocean sea ice extent, which extended from November 1978 through December 2002, revealed significant multi-year variability in the data, which began at the top of a peak and ended at the bottom of a trough. But in spite of the high beginning point and low end point of the data, which would militate against a long-term upward trend, the data exhibited just such a feature, the least-squares-fit slope of which revealed a 12,380 ± 1,730 km² upward trend in sea ice extent per year. In considering this result, it is interesting to note that over the period of time that climate alarmists claim has experienced the most extreme global warming of the past millennium or more, and in spite of the fact they have historically claimed such warming should be most evident in earth's polar regions, and that it should lead to a decrease in polar sea ice extent, just the opposite had occurred to this point in time in the Southern Ocean that surrounds Antarctica. But what is doubly damning to their dogma is the fact that the Southern Ocean's sea ice extent is extremely sensitive to warming, decreasing from a 24-year-average maximum monthly value of 18.23 × 10^6 km² in September to a similarly-calculated minimum monthly value of 2.98 × 10^6 km² in February. This decrease represents the disappearance of nearly 84% of each year's maximum sea ice cover; and, therefore, it can be appreciated that given just a little extra seasonal warmth, it would disappear altogether each February. But it hasn't. In fact, it continues to slowly, but ever so surely, grow in the mean. Focusing on the spring-summer period of November/December/January (1981-2000) some four years later, Laine (2008) determined trends in Antarctic ice-sheet and sea-ice surface albedo and temperature, as well as sea-ice concentration and extent, based on Advanced Very High Resolution Polar Pathfinder data in the case of ice-sheet surface albedo and temperature, and the Scanning Multichannel Microwave Radiometer and Special Sensor Microwave Imagers in the case of sea-ice concentration and extent. These analyses were carried out for the continent as a whole, as well as for five longitudinal sectors emanating from the south pole: 20°E-90°E, 90°E-160°E, 160°E-130°W, 130°W-60°W, and 60°W-20°E. This work revealed, in Laine's words, that "all the regions show negative spring-summer surface temperature trends for the study period." In addition, the Finnish researcher found that "sea ice concentration shows slight increasing trends in most sectors, where the sea ice extent trends seem to be near zero." Laine also found that "the Antarctic region as a whole and all the sectors separately show slightly positive spring-summer albedo trends." Consequently, over the last two decades of the 20th century, Antarctica successfully bucked the world's supposedly unprecedented global warming trend by (1) cooling a bit, (2) acquiring slightly more sea ice, and (3) becoming a tad more reflective of incoming solar radiation. Several other studies of the subject were also conducted in 2008.
Noting that earth's polar regions "are expected to provide early signals of a climate change primarily because of the 'ice-albedo feedback' which is associated with changes in absorption of solar energy due to changes in the area covered by the highly reflective sea ice," for example, Comiso and Nishio (2008) set about to provide updated and improved estimates of trends in Antarctic sea ice cover for the period extending from November 1978 to December 2006, based on data obtained from the Advanced Microwave Scanning Radiometer, the Special Sensor Microwave Imager and the Scanning Multichannel Microwave Radiometer. And in doing so, they found that the 28-year trends in Antarctic sea ice extent and area were +0.9 ± 0.2 and +1.7 ± 0.3% per decade, which is definitely not a "signal" of global warming. In another study employing satellite-borne passive microwave radiometer data that extended the analyses of the sea ice time series reported by Zwally et al. (2002) from 20 years (1979-1998) to 28 years (1979-2006), Cavalieri and Parkinson (2008) found that "the total Antarctic sea ice extent trend increased slightly, from 0.96 ± 0.61% per decade to 1.0 ± 0.4% per decade, from the 20- to 28-year period." The Antarctic sea ice area trend, however, remained constant at 1.2 ± 0.7% per decade. Its variability, like that of sea ice extent, declined (from ± 0.7% to ± 0.5% per decade), so that both sets of results indicated a "tightening up" of the two relationships. And why were these things so? The two researchers state that "what is driving the observed changes remains unanswered, and the physical mechanisms explaining these changes remain to be determined." Most recently, Turner et al. (2009) reviewed the history of Antarctic sea ice extent derived from satellite observations, after which they attempted to derive an explanation for the empirical data being what they are, based on climate model simulations. Citing the work of Zwally et al. (2002), they first noted that over the period 1979-1998, sea ice extent surrounding Antarctica increased at a mean rate of 0.98% per decade, and that Comiso and Nishio (2008) derived a value of 0.9% per decade for the period 1978-2006. This sea ice extent increase, according to their modeling work, was largely driven by an autumn increase in the Ross Sea sector that they suggest "is primarily a result of stronger cyclonic atmospheric flow over the Amundsen Sea." And they say that "the trend towards stronger cyclonic circulation is mainly a result of stratospheric ozone depletion, which has strengthened autumn wind speeds around the continent, deepening the Amundsen Sea Low through flow separation around the high coastal orography." On the other hand, and much more simply, the nine researchers report that "statistics derived from a climate model control run suggest that the observed sea ice increase might still be within the range of natural climate variability." In light of these contrasting possibilities, it is clear that the true cause of the near-three-decade-long increase in Antarctic sea ice extent cannot be stated with any confidence. The only thing we can conclude at this point in time, therefore, is that for some still-unproven reason, and in spite of the supposedly unprecedented increases in mean global air temperature and atmospheric CO2 concentration that the planet has experienced since the late 1970s, Antarctic sea ice extent has stubbornly refused to do what climate models say it should be doing, as it just keeps on growing.
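As a quick cross-check of the units used above (my arithmetic, not from the review): a trend of 11,181 km² per year amounts to about 111,810 km² over a decade, and for that to equal 0.98% per decade the implied mean Antarctic sea ice extent is

\[
\frac{11{,}181\ \mathrm{km^2/yr} \times 10\ \mathrm{yr}}{0.0098} \approx 1.1 \times 10^{7}\ \mathrm{km^2},
\]

a value that sits between the September maximum (about 18 × 10^6 km²) and the February minimum (about 3 × 10^6 km²) quoted earlier, so the per-year and per-decade figures are mutually consistent.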
Cavalieri, D.J. and Parkinson, C.L. 2008. Antarctic sea ice variability and trends, 1979-2006. Journal of Geophysical Research 113: 10.1029/2007JC004564.
Cavalieri, D.J., Parkinson, C.L. and Vinnikov, K.Y. 2003. 30-Year satellite record reveals contrasting Arctic and Antarctic decadal sea ice variability. Geophysical Research Letters 30: 10.1029/2003GL018031.
Comiso, J.C. 2000. Variability and trends in Antarctic surface temperatures from in situ and satellite infrared measurements. Journal of Climate 13: 1674-1696.
Comiso, J.C. and Nishio, F. 2008. Trends in the sea ice cover using enhanced and compatible AMSR-E, SSM/I, and SMMR data. Journal of Geophysical Research 113: 10.1029/2007JC004257.
Elderfield, H. and Rickaby, R.E.M. 2000. Oceanic Cd/P ratio and nutrient utilization in the glacial Southern Ocean. Nature 405: 305-310.
Hanna, E. 2001. Anomalous peak in Antarctic sea-ice area, winter 1998, coincident with ENSO. Geophysical Research Letters 28: 1595-1598.
Laine, V. 2008. Antarctic ice sheet and sea ice regional albedo and temperature change, 1981-2000, from AVHRR Polar Pathfinder data. Remote Sensing of Environment 112: 646-667.
Parkinson, C.L. 2002. Trends in the length of the Southern Ocean sea-ice season, 1979-99. Annals of Glaciology 34: 435-440.
Parkinson, C.L. 2004. Southern Ocean sea ice and its wider linkages: insights revealed from models and observations. Antarctic Science 16: 387-400.
Turner, J., Comiso, J.C., Marshall, G.J., Lachlan-Cope, T.A., Bracegirdle, T., Maksym, T., Meredith, M.P., Wang, Z. and Orr, A. 2009. Non-annular atmospheric circulation change induced by stratospheric ozone depletion and its role in the recent increase of Antarctic sea ice extent. Geophysical Research Letters 36: 10.1029/2009GL037524.
Vyas, N.K., Dash, M.K., Bhandari, S.M., Khare, N., Mitra, A. and Pandey, P.C. 2003. On the secular trends in sea ice extent over the Antarctic region based on OCEANSAT-1 MSMR observations. International Journal of Remote Sensing 24: 2277-2287.
Watkins, A.B. and Simmonds, I. 2000. Current trends in Antarctic sea ice: The 1990s impact on a short climatology. Journal of Climate 13: 4441-4451.
Yuan, X. and Martinson, D.G. 2000. Antarctic sea ice extent variability and its global connectivity. Journal of Climate 13: 1697-1717.
Zwally, H.J., Comiso, J.C., Parkinson, C.L., Cavalieri, D.J. and Gloersen, P. 2002. Variability of Antarctic sea ice 1979-1998. Journal of Geophysical Research 107: 10.1029/2000JC000733.
Last updated 30 December 2009
<urn:uuid:cd085c5e-c932-4186-a7bd-51a71a37e27c>
3.078125
3,570
Academic Writing
Science & Tech.
55.11015
1,716
Under what circumstances might you use the yield method of the Thread class?
A. To call from the currently running thread to allow another thread of the same priority to run
B. To call on a waiting thread to allow it to run
C. To allow a thread of higher priority to run
D. To call from the currently running thread with a parameter designating which thread should be allowed to run
The answer given was A. But I think the answer should be A and C, since in the API doc yield is described as "Causes the currently executing thread object to temporarily pause and allow other threads to execute". Other threads could mean threads of the same priority and higher priority. At least, that's what I think. Thanks in advance....
Just For A Moment
Joined: Apr 01, 2000
I think the yield method is to allow another thread of the same or lesser priority to run. The thread with the higher priority will anyway be taken care of by the JVM.
Joined: Feb 08, 2000
Thanx for your reply Just For A Moment... But in Roberts and Heller p. 208, it says "Note that most schedulers do not stop yielding thread from running in favor of a thread of lower priority." So generally speaking, yield does not allow a lower priority thread to run. In my opinion, a yielding thread would give threads of the same priority and higher priority the chance to run. I know what you mean by the thread with the higher priority being taken care of by the JVM, since the JVM will EVENTUALLY (but not immediately) allow the thread of higher priority to run. In my opinion, the yielding thread will make the higher priority thread run much earlier than it is scheduled. Thinking in Java defines yield() as giving control to other threads, and some other books just define it as allowing other runnable threads to run. Now, I'm left with this question in my mind: should I include the answers "To allow a thread of higher priority to run" and "To allow a thread of lower priority to run" if ever this question is asked? Need confirmation. I'm going to take the test soon.
Joined: Jan 31, 2000
Jerson, this topic came up long back. The static yield() method in the Thread class is for giving a chance to other threads of the SAME PRIORITY. As for "To allow a thread of higher priority to run": allowing a higher priority thread to run is not the goal/job of the yield method; it is the JVM's. Similarly for lower priority threads. When a thread calls the yield() method it just gives a chance to its SAME PRIORITY thread-mates. If none exists, this thread again continues to run. Only answer A is correct. Refer to these 2 discussions here and here. regds maha anna [This message has been edited by maha anna (edited April 01, 2000).]
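To make the behaviour concrete, here is a small self-contained illustration (my sketch, not code from the thread; the class and thread names are arbitrary). Thread.yield() is only a hint to the scheduler, typically honoured in favour of runnable threads of the same priority, and it may be ignored entirely:

public class YieldDemo {
    public static void main(String[] args) {
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 5; i++) {
                    System.out.println(Thread.currentThread().getName() + " step " + i);
                    // Hint that another runnable thread of the same priority may run now.
                    // The scheduler is free to ignore this and keep running this thread.
                    Thread.yield();
                }
            }
        };
        // Both threads are created with the same (default) priority.
        Thread a = new Thread(task, "worker-A");
        Thread b = new Thread(task, "worker-B");
        a.start();
        b.start();
    }
}

Typical output interleaves the worker-A and worker-B lines, but the exact ordering is scheduler-dependent, which is why yield() is described as giving same-priority threads a chance to run rather than as a way to schedule higher- or lower-priority threads.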
<urn:uuid:14a8e7c1-ab5b-4c89-904f-79ceb1baec0f>
2.8125
597
Comment Section
Software Dev.
72.534681
1,717
Recent upgrades to Boulder's wastewater treatment plant may have dramatically reduced the amount of chemicals in Boulder Creek that cause male fish to develop female characteristics, according to scientists at the University of Colorado. Researchers first discovered a problem in the fish living below the wastewater treatment plant's outflow pipe in Boulder almost a decade ago. David Norris, a professor of integrative physiology at CU, found that half of white suckers living above the pipe were male. But only one in six fish living below the pipe -- where effluent from the plant containing estrogen-related chemicals is dumped into the stream -- were male. The others were female or "intersex," with both male and female organs. Norris followed up on his discovery with a study in 2006 that used a mobile fish exposure lab. The research trailer, which was set up near the wastewater treatment plant on 75th Street, allowed him to expose fathead minnows to various mixtures of water from upstream of the plant and effluent collected directly from the plant's pipe. Norris and his colleagues, including CU researcher Alan Vajda, found that minnows exposed to a mixture of 50 percent upstream water and 50 percent effluent became "feminized" in only a week. "When we set up the experiment, we set it up to run for 28 days because we had no idea how long it would take us to see an effect," Norris said. "Initially, we were quite surprised at the effect we had within seven days." Now Norris has again tested the effects of the effluent on fathead minnows, but in the years between the 2006 research and the new study, the city of Boulder upgraded its treatment plant. "Basically, the city set up an experiment for us," Norris said. "They upgraded their processing system. We had earlier data, and now we had a before-and-after to make a comparison." In the new study, Norris found that fathead minnows exposed to 100 percent effluent took 28 days to show signs of feminization. "It appears so far -- we have a lot of data yet to analyze -- that the levels of chemicals are down quite a bit," Norris said. Even before Norris' 2006 experiment, the city had plans to update its wastewater treatment plant to use an "activated sludge" process in order to meet a state requirement to reduce the amount of nitrates and ammonia in the effluent. The apparent reduction in estrogen-related chemicals -- which are found in a wide range of products from shampoo to birth-control pills -- is a "pleasant side effect," according to Ned Williams, Boulder's director of public works for utilities. Fish feminization is a global issue, according to Norris, and though the results of the study are encouraging for Boulder Creek, they do not address the widespread problem of estrogen-related chemicals ending up in waterways. "It's a lot less of a problem to not put them in than to try and get them out after they're in," Norris said. "This is a fairly recent phenomenon -- a combination of too many people concentrated in too small an area and dumping all of their waste in one spot." Contact Camera Staff Writer Laura Snider at 303-473-1327 or firstname.lastname@example.org.
<urn:uuid:273544bd-80fa-43cc-804b-f6c08e8b3d67>
2.890625
672
News Article
Science & Tech.
44.390078
1,718
http://phys.org/news/2012-05-lemons-lem ... oxide.html Making carbon-based products from CO2 is nothing new, but carbon dioxide molecules are so stable that those reactions usually take up a lot of energy. If that energy were to come from fossil fuels, over time the chemical reactions would ultimately result in more carbon dioxide entering the atmosphere—defeating the purpose of a process that could otherwise help mitigate climate change. Professor Yun Hang Hu’s research team developed a heat-releasing reaction between carbon dioxide and Li3N that forms two chemicals: amorphous carbon nitride (C3N4), a semiconductor; and lithium cyanamide (Li2CN2), a precursor to fertilizers. “The reaction converts CO2 to a solid material,” said Hu. “That would be good even if it weren’t useful, but it is.”
<urn:uuid:a890fe39-22cf-4902-a48a-6b2bd82f7809>
4.125
192
Comment Section
Science & Tech.
46.941553
1,719
Skip to comments.NASA Rocket to Create Clouds Tuesday Posted on 09/15/2009 6:25:43 PM PDT by Free ThinkerNY A rocket experiment set to launch Tuesday aims to create artificial clouds at the outermost layers of Earth's atmosphere. The project, called the Charged Aerosol Release Experiment (CARE), plans to trigger cloud formation around the rocket's exhaust particles. The clouds are intended to simulate naturally-occurring phenomena called noctilucent clouds, which are the highest clouds in the atmosphere. "This is really essentially at the boundary of space," said Wayne Scales, a scientist at Virginia Tech who will use computer models to study the physics of the artificial dust cloud as it's released. "Nothing like this has been done before and that's why everybody's really excited about it." The experiment is the first attempt to create artificial noctilucent clouds. A previous spacecraft, called Aeronomy of Ice in the Mesosphere (AIM), launched in 2007 to observe the natural clouds from space. CARE is slated to launch Tuesday between 7:30 and 7:57 p.m. EDT (2330 and 2357 GMT) from NASA's Wallops Flight Facility in Virginia. Noctilucent means "night shining" in Latin. Although difficult to spot with the naked eye, the clouds are best visible when Earth's surface is in darkness and sunlight from below the horizon illuminates the high-altitude clouds. These clouds, also known as polar mesospheric clouds, are made of ice crystals. The natural ones tend to hover around 50 to 55 miles (80 to 90 km) above the Earth. CARE will release its dust particles a bit higher than that, then let them settle back down to a lower altitude. "What the CARE experiment hopes to do is to create an artificial dust layer," Scales told SPACE.com. "Hopefully it's a creation in a controlled sense, which will allow scientists to study different aspects of it, the turbulence generated on the inside, the distribution of dust particles and such." (Excerpt) Read more at livescience.com ... If it works, Obambi’s science advisor can continue to push his demented scheme (releasing sulfur-rich aerosols to combat the “greenhouse effect”). Wonder if these will be visible. Normally I'd say "WOW, this is kinda a cool experiment by NASA" ... but given their recent penchant for bowing towards the Global Warming™ crowd, I would put reservations on the purpose of this near-space experiment. Color MM as 'skeptical' as to the motive for this It doesnt pay to screw around with Mother Nature. Yep. I'm all wee-weed up. If the dust were made of shredded HR3200 I might be more interested. I am going to make an assumption that the real intent of this experiment is to buy ourselves more time before the threat of man made global warming kills us all. It’s just a pile of dust from the vacuum cleaners of a few NASA offices. There is the scary operative phrase --- "Hopefully ...." Wilhelm Reich, please call your office. I remember that ad in the 70’s. It would be followed up by and Irish Spring and then maybe you can call me Ray or you can call me Jay or maybe plop plop fiz fiz oh what a relief it is. Didn’t they used to do something like this at Wallops Island back in the early days of NASA? Oh, wonderful, now they are experimenting with climate control, just what we need. As in a nuclear scientist saying,"This experiment could trigger a chain reaction and destroy the world, hopefully my calculations are correct and it won't". A few months ago a Shuttle launch made one of these clouds. 
<urn:uuid:4764a0db-a1f4-4608-bb23-d0a63c6db88e>
3.46875
856
Comment Section
Science & Tech.
59.388515
1,720
NASA and other international space agencies seem to be pretty good at putting things into Earth’s orbit, but they are not so good when it comes to bringing things down. As a result, Earth’s orbit has become cluttered with space junk, which has put other spacecraft, including those that are manned, in danger of a possible catastrophic collision. The solution may come in the form of a technology called a solar sail. A solar sail is a form of spacecraft propulsion that uses radiation pressure, which can come from a star like our own Sun or from an artificial source such as a laser. It can also act as a parking brake, slowing a spacecraft for de-orbit by skimming the top of our atmosphere. Such technology was successfully tested by NASA this month on board a small spacecraft called NanoSail-D. At first, NASA believed the mission was a failure before it even began. That’s because the small spacecraft was stuck inside its mothership, the Fast, Affordable, Science and Technology SATellite (FASTSAT), which was launched in November. Luckily, and for an unknown reason, the NanoSail-D spacecraft ejected from its mothership on January 17. A few days later, on January 20, NanoSail-D unfurled its solar sail. The sail consists of a thin polymer sheet of reflective material covering a 10-square-meter area. It will provide enough aerodynamic drag to allow NanoSail-D to de-orbit within 70 to 120 days. This will test the possibility of including solar sails on future NASA satellites so they can return to Earth and harmlessly disintegrate in the atmosphere, preventing the build-up of future space junk. Of course, a solar sail has an accelerator pedal in addition to a brake, which is why NASA engineers will also be measuring the pressure of sunlight on the sail. Read more at NASA
<urn:uuid:2da540aa-c7c5-4423-b159-5eeae56a31e0>
4.1875
403
News Article
Science & Tech.
47.094091
1,721
Fission Tracks in Zircons: Evidence for Abundant Nuclear Decay by Andrew A. Snelling, Ph.D. RATE II: Radioisotopes and the Age of The Earth: Results of a Young-Earth Creationist Research Initiative, (Volume II), L. Vardiman et al., eds. (San Diego, CA: Institute for Creation Research and the Creation Research Society, 2005) Fission tracks are a physical record of in situ nuclear decay, their density being directly proportional to the amount of nuclear decay that has occurred. The aim of this study was to investigate whether the amounts of fission tracks in zircon grains in targeted rock units (that is, their fission track “ages”) matched the radioisotope “ages” of those rocks. Stratigraphically well-constrained volcanic ash or tuff beds located in the Grand Canyon-Colorado Plateau “type section” of the Flood strata record were chosen—the Cambrian Muav and Tapeats tuffs from the western Grand Canyon (early Flood), Jurassic Morrison Formation tuff beds, southeastern Utah (middle Flood), and the Miocene Peach Springs Tuff, southeastern California and western Arizona (late Flood or post-Flood). The fission track “ages” of zircon grains separated from samples of these tuff units were determined by a specialized professional laboratory using the external detector method and a zeta (ζ) calibration factor. The observed fission track densities measured in all the zircons (and thus the fission track “ages”) from the samples of the Jurassic and Miocene tuffs, and in some of the zircons from the Muav and Tapeats tuffs, were found to exactly equate to the quantities of nuclear decay measured by radioisotope determinations of the same rock units. Though thermal annealing of fission tracks had occurred in some zircon grains in the two Cambrian Grand Canyon tuffs, the U-Pb radioisotope system had also been thermally reset, the resulting reset ages in both instances coinciding with the onset of the Laramide uplift of the Colorado Plateau. The fact that the thermal annealing of the fission tracks and the thermal resetting of the U-Pb radioisotope system in those zircon grains were exactly parallel is unequivocal confirmation that the radioisotope ratios are a product of radioactive decay, in just the same way as the fission tracks are physical evidence of nuclear decay. Furthermore, because the resetting of the U-Pb radioisotope system in zircons will only occur at elevated temperatures, the fact that it has been reset in these zircons could therefore be due to them having been heated by accelerated nuclear decay. Even so, in spite of this thermal annealing and resetting, there remains sufficient strong evidence to conclude that both the fission tracks and radioisotope ratios in the zircons in the Cambrian Grand Canyon tuff beds record more than 500 million years worth (at today’s rates) of nuclear and radioisotope decay during deposition of the Phanerozoic strata sequence of the Grand Canyon-Colorado Plateau region. Given the independent evidence that most of this strata sequence was deposited catastrophically during the year-long global Flood about 4500 years ago, then 500 million or more years worth (at today’s rates) of nuclear and radioisotope decay had to have occurred during the Flood year about 4500 years ago. Thus, the fission tracks in the zircons in these tuffs are physical evidence of accelerated nuclear decay. fission tracks, nuclear decay, thermal annealing, zircons, RATE II For Full Text Please see the Download PDF link above for the entire article.
<urn:uuid:22ca927c-1c33-4ed7-9774-2658e5b50280>
3.46875
804
Academic Writing
Science & Tech.
26.266509
1,722
Simple Equations Introduction to basic algebraic equations of the form Ax=B ⇐ Use this menu to view and help create subtitles for this video in many different languages. You'll probably want to hide YouTube's captions if using these subtitles. - Let's say we have the equation seven times x is equal to fourteen. - Now before even trying to solve this equation, - what I want to do is think a little bit about what this actually means. - Seven x equals fourteen, - this is the exact same thing as saying seven times x, let me write it this way, seven times x, x in orange again. Seven times x is equal to fourteen. - Now you might be able to do this in your head. - You could literally go through the 7 times table. - You say well 7 times 1 is equal to 7, so that won't work. - 7 times 2 is equal to 14, so 2 works here. - So you would immediately be able to solve it. - You would immediately, just by trying different numbers - out, say hey, that's going to be a 2. - But what we're going to do in this video is to think about - how to solve this systematically. - Because what we're going to find is as these equations get - more and more complicated, you're not going to be able to - just think about it and do it in your head. - So it's really important that one, you understand how to - manipulate these equations, but even more important to - understand what they actually represent. - This literally just says 7 times x is equal to 14. - In algebra we don't write the times there. - When you write two numbers next to each other or a number next - to a variable like this, it just means that you - are multiplying. - It's just a shorthand, a shorthand notation. - And in general we don't use the multiplication sign because - it's confusing, because x is the most common variable - used in algebra. - And if I were to write 7 times x is equal to 14, if I write my - times sign or my x a little bit strange, it might look - like xx or times times. - So in general when you're dealing with equations, - especially when one of the variables is an x, you - wouldn't use the traditional multiplication sign. - You might use something like this -- you might use dot to - represent multiplication. - So you might have 7 times x is equal to 14. - But this is still a little unusual. - If you have something multiplying by a variable - you'll just write 7x. - That literally means 7 times x. - Now, to understand how you can manipulate this equation to - solve it, let's visualize this. - So 7 times x, what is that? - That's the same thing -- so I'm just going to re-write this - equation, but I'm going to re-write it in visual form. - So 7 times x. - So that literally means x added to itself 7 times. - That's the definition of multiplication. - So it's literally x plus x plus x plus x plus x -- let's see, - that's 5 x's -- plus x plus x. - So that right there is literally 7 x's. - This is 7x right there. - Let me re-write it down. - This right here is 7x. - Now this equation tells us that 7x is equal to 14. - So just saying that this is equal to 14. - Let me draw 14 objects here. - So let's say I have 1, 2, 3, 4, 5, 6, 7, 8, - 9, 10, 11, 12, 13, 14. - So literally we're saying 7x is equal to 14 things. - These are equivalent statements. - Now the reason why I drew it out this way is so that - you really understand what we're going to do when we - divide both sides by 7. - So let me erase this right here. - So the standard step whenever -- I didn't want to do that, - let me do this, let me draw that last circle. 
- So in general, whenever you simplify an equation down to a - -- a coefficient is just the number multiplying - the variable. - So some number multiplying the variable or we could call that - the coefficient times a variable equal to - something else. - What you want to do is just divide both sides by 7 in - this case, or divide both sides by the coefficient. - So if you divide both sides by 7, what do you get? - 7 times something divided by 7 is just going to be - that original something. - 7's cancel out and 14 divided by 7 is 2. - So your solution is going to be x is equal to 2. - But just to make it very tangible in your head, what's - going on here is when we're dividing both sides of the - equation by 7, we're literally dividing both sides by 7. - This is an equation. - It's saying that this is equal to that. - Anything I do to the left hand side I have to do to the right. - If they start off being equal, I can't just do an operation - to one side and have it still be equal. - They were the same thing. - So if I divide the left hand side by 7, so let me divide - it into seven groups. - So there are seven x's here, so that's one, two, three, - four, five, six, seven. - So it's one, two, three, four, five, six, seven groups. - Now if I divide that into seven groups, I'll also want - to divide the right hand side into seven groups. - One, two, three, four, five, six, seven. - So if this whole thing is equal to this whole thing, then each - of these little chunks that we broke into, these seven chunks, - are going to be equivalent. - So this chunk you could say is equal to that chunk. - This chunk is equal to this chunk -- they're - all equivalent chunks. - There are seven chunks here, seven chunks here. - So each x must be equal to two of these objects. - So we get x is equal to, in this case -- in this case - we had the objects drawn out where there's two of - them. x is equal to 2. - Now, let's just do a couple more examples here just so it - really gets in your mind that we're dealing with an equation, - and any operation that you do on one side of the equation - you should do to the other. - So let me scroll down a little bit. - So let's say I have I say I have 3x is equal to 15. - Now once again, you might be able to do is in your head. - You're saying this is saying 3 times some - number is equal to 15. - You could go through your 3 times tables and figure it out. - But if you just wanted to do this systematically, and it - is good to understand it systematically, say OK, this - thing on the left is equal to this thing on the right. - What do I have to do to this thing on the left - to have just an x there? - Well to have just an x there, I want to divide it by 3. - And my whole motivation for doing that is that 3 times - something divided by 3, the 3's will cancel out and I'm just - going to be left with an x. - Now, 3x was equal to 15. - If I'm dividing the left side by 3, in order for the equality - to still hold, I also have to divide the right side by 3. - Now what does that give us? - Well the left hand side, we're just going to be left with - an x, so it's just going to be an x. - And then the right hand side, what is 15 divided by 3? - Well it is just 5. - Now you could also done this equation in a slightly - different way, although they are really equivalent. - If I start with 3x is equal to 15, you might say hey, Sal, - instead of dividing by 3, I could also get rid of this 3, I - could just be left with an x if I multiply both sides of - this equation by 1/3. 
- So if I multiply both sides of this equation by 1/3 - that should also work. - You say look, 1/3 of 3 is 1. - When you just multiply this part right here, 1/3 times - 3, that is just 1, 1x. - 1x is equal to 15 times 1/3 third is equal to 5. - And 1 times x is the same thing as just x, so this is the same - thing as x is equal to 5. - And these are actually equivalent ways of doing it. - If you divide both sides by 3, that is equivalent to - multiplying both sides of the equation by 1/3. - Now let's do one more and I'm going to make it a little - bit more complicated. - And I'm going to change the variable a little bit. - So let's say I have 2y plus 4y is equal to 18. - Now all of a sudden it's a little harder to - do it in your head. - We're saying 2 times something plus 4 times that same - something is going to be equal to 18. - So it's harder to think about what number that is. - You could try them. - Say if y was 1, it'd be 2 times 1 plus 4 times 1, - well that doesn't work. - But let's think about how to do it systematically. - You could keep guessing and you might eventually get - the answer, but how do you do this systematically. - Let's visualize it. - So if I have two y's, what does that mean? - It literally means I have two y's added to each other. - So it's literally y plus y. - And then to that I'm adding four y's. - To that I'm adding four y's, which are literally four - y's added to each other. - So it's y plus y plus y plus y. - And that has got to be equal to 18. - So that is equal to 18. - Now, how many y's do I have here on the left hand side? - How many y's do I have? - I have one, two, three, four, five, six y's. - So you could simplify this as 6y is equal to 18. - And if you think about it it makes complete sense. - So this thing right here, the 2y plus the 4y is 6y. - So 2y plus 4y is 6y, which makes sense. - If I have 2 apples plus 4 apples, I'm going - to have 6 apples. - If I have 2 y's plus 4 y's I'm going to have 6 y's. - Now that's going to be equal to 18. - And now, hopefully, we understand how to do this. - If I have 6 times something is equal to 18, if I divide both - sides of this equation by 6, I'll solve for the something. - So divide the left hand side by 6, and divide the - right hand side by 6. - And we are left with y is equal to 3. - And you could try it out. - That's what's cool about an equation. - You can always check to see if you got the right answer. - Let's see if that works. - 2 times 3 plus 4 times 3 is equal to what? - 2 times 3, this right here is 6. - And then 4 times 3 is 12. - 6 plus 12 is, indeed, equal to 18. Be specific, and indicate a time in the video: At 5:31, how is the moon large enough to block the sun? Isn't the sun way larger? Have something that's not a question about this content? This discussion area is not meant for answering homework questions. Share a tip When naming a variable, it is okay to use most letters, but some are reserved, like 'e', which represents the value 2.7831... Have something that's not a tip or feedback about this content? This discussion area is not meant for answering homework questions. Discuss the site For general discussions about Khan Academy, visit our Reddit discussion page. Flag inappropriate posts Here are posts to avoid making. If you do encounter them, flag them for attention from our Guardians. 
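As an editorial footnote to the transcript above, the method it teaches can be stated compactly: to undo multiplication by a coefficient, divide both sides of the equation by that coefficient (assuming it is nonzero), after first combining any like terms.

    7x = 14 \;\Rightarrow\; \frac{7x}{7} = \frac{14}{7} \;\Rightarrow\; x = 2
    ax = b \;\Rightarrow\; x = \frac{b}{a} \quad (a \neq 0)
    2y + 4y = 18 \;\Rightarrow\; 6y = 18 \;\Rightarrow\; y = 3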
<urn:uuid:0fd4521c-d6c5-4976-a39e-6b54d6f49c8b>
4.8125
2,935
Truncated
Science & Tech.
84.041077
1,723
In this module we need to be a little bit more precise about temperature and heat energy than we have been so far. Heat energy is usually measured in terms of calories. The calorie was originally defined as the amount of energy required to raise one gram of water one degree Celsius at a pressure of one atmosphere. This definition is not complete because the amount of energy required to raise one gram of water one degree Celsius varies with the original temperature of the water by as much as one percent. Since 1925 the calorie has been defined as 4.184 joules, the amount of energy required to raise the temperature of one gram of water from 14.5 degrees Celsius to 15.5 degrees Celsius. For our purposes here we can ignore the fact that the effect of one calorie of energy varies depending on the temperature of the water. Newton's model of cooling can be thought of, more precisely, as involving two steps. The picture above shows a brick whose length is four centimeters. We mentally divide the brick into two unequal pieces. The lefthand piece has a length of one centimeter and the righthand piece has a length of three centimeters. Heat is flowing across the mental boundary between the two pieces from left to right at the rate of A calories per hour. As a result the average temperature of the lefthand piece is changing at the rate of -kA degrees Celsius per hour. The constant k depends on the composition of the brick and its cross-sectional area. The average temperature of the righthand piece is changing at the rate of kA / 3 degrees Celsius per hour. The three in the denominator comes from the fact that since the righthand piece is three times the length of the lefthand piece, its mass is three times as big.
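Stated as equations (an editorial restatement of the paragraph above, with T_L and T_R denoting the average temperatures of the one-centimeter and three-centimeter pieces and A the heat flow in calories per hour):

    \frac{dT_L}{dt} = -kA, \qquad \frac{dT_R}{dt} = +\frac{kA}{3}

Because the right-hand piece has three times the mass of the left-hand piece, the same flow of calories changes its temperature only one third as fast, so the heat leaving one piece exactly matches the heat entering the other.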
<urn:uuid:a815ae2f-316e-4506-a921-7c7a166c371c>
4.15625
363
Tutorial
Science & Tech.
50.643539
1,724
Four hundred years ago, Galileo Galilei changed the world by peering through a 3-foot-long telescope and spying the moons of Jupiter. Today, the world — or, more accurately, our collection of worlds — is on the brink of a change that could be just as dramatic. Over the next five years, giant telescopes and far-seeing space probes will revolutionize the way we think of stars, planets and the other denizens of planetary systems, including our own. One of the planetary pioneers is NASA's Wide-Field Infrared Survey Explorer, launching next week from Vandenberg Air Force Base in California. WISE is designed to look for objects too dim and distant to be noticed by past probes. By mid-2011, the polar-orbiting satellite should detect around 100,000 previously unseen asteroids. Even more intriguingly, the satellite has a chance of spotting a brown-dwarf star or a new planet on the outskirts of the solar system. "There is still the possibility of a large planet in the outer solar system, according to some experts," the mission's project scientist, Peter Eisenhardt of NASA's Jet Propulsion Laboratory, told msnbc.com. "My point of view is, as long as there's a reasonable chance of finding something, you ought to go and look." Looking beyond our solar system, astronomers are gearing up to reveal the initial findings from NASA's Kepler mission next month. Kepler is aimed at determining how many stars in a patch of sky have planets circling around them. Within three years, scientists hope to be able to detect Earth-size planets in the "habitable zones" around alien stars. After only a few months of observations, leaders of the Kepler team say they've already come across some potentially mind-bending findings. "We have some discoveries that someday, after they're verified, will knock your socks off," the mission's principal investigator, William Borucki of NASA's Ames Research Center, told msnbc.com. Another planet-hunting probe, the European Space Agency's COROT satellite, has already detected a rocky world only five times as massive as Earth, orbiting so close to its parent star that temperatures on the sun-facing side rise beyond 2,000 degrees Fahrenheit. Still more planets are sure to be revealed in the months and perhaps years to come. Ground-based telescopes are joining the search for new worlds as well. Some of the world's biggest eyes on the sky are already supporting the Kepler and COROT missions, and still more are in the design or construction phase: - Pan-STARRS: The $100 million Panoramic Survey Telescope and Rapid Response System, or Pan-STARRS for short, is an array of four telescopes being set up in Hawaii primarily to track fast-moving asteroids, some of which might threaten Earth. However, Pan-STARRS is expected to spot about 20,000 objects in the Kuiper Belt, a distant ring of icy material where Pluto was found nearly 80 years ago. Pan-STARRS should be able to find objects as small as Pluto well beyond the Kuiper Belt. - Giant Magellan Telescope: The $700 million GMT, due to be built in Chile by 2018, will combine the power of seven 27.6-foot-wide mirrors to produce images sharper than those of the Hubble Space Telescope. The instrument should be able to see the disks of worlds far beyond Pluto, piecing together the evidence for or against the existence of a giant Planet X. - Large Synoptic Survey Telescope: The $400 million LSST is expected to become fully operational in Chile in 2016 and revolutionize astronomy. 
“In the first week, we will see more data from this telescope than all the telescopes in humanity up to that point,” billionaire backer Charles Simonyi has said. The LSST is expected to spot up to 100,000 orbiting objects beyond Neptune, including ice dwarfs as big as Pluto that are more than six times farther out. - Discovery Channel Telescope: Arizona's Lowell Observatory has partnered with the Discovery Channel to build a $40 million telescope in Arizona that will extend the search for Kuiper Belt objects, as well as extrasolar planets and near-Earth asteroids. Lowell Observatory was the place where Pluto was discovered back in 1930, and observatory director Eileen Friel said it's fitting that the one of the first campaigns taken on by the new telescope will be to look for other objects like Pluto in the Kuiper Belt. "It has direct relevance to the observatory's legacy," she told msnbc.com. That legacy has undergone substantial revision in the past three years: Pluto is no longer considered the ninth planet, but rather the first of many ice dwarfs that could be identified in the Kuiper Belt. The dwarf-planet population already includes Eris, a world even bigger than Pluto that was found almost five years ago. Eris' discovery led the International Astronomical Union to come up with the definition for a dwarf planet in 2006, and move Pluto from the list of planets proper to the new classification. The planet boom Some astronomers continue to debate Pluto's demotion, but however the planetary pigeonholes are reshuffled, Pluto is due for a fresh look in 2015 when NASA's New Horizons probe sails by. Past studies have already determined that Pluto has a thin atmosphere, as well as clouds and weather patterns. Pluto's biggest moon, Charon, shows hints of having ice volcanoes, and Pluto may exhibit similar geological activity. New Horizons' observations could flesh out all those hints and present the public with a new, more planetlike image of Pluto and its kin. Another probe, NASA's Dawn spacecraft, may do the same for the solar system's smallest known dwarf planet, the asteroid Ceres, when it begins its reconnaissance mission in 2015. Vanderbilt University astronomer David Weintraub said Pluto and Ceres "will take on a new life when we see them as real objects." The new wave of world hunters could spark a similar transformation in the concepts of stars and planets, inside and outside our solar system. Is there a big Planet X out there? WISE, for example, is designed to scour areas that have not been looked at closely by past probes, far above and below the ecliptic plane, where the solar system's eight largest planets make their orbits. Because the probe searches in mid-infrared wavelengths, it could pick up objects that are too dim to be detected by their visible light. "We're sensitive to objects that are as cold as 50 Kelvin," or 370 degrees below zero Fahrenheit, Eisenhardt said. "That means objects that are way, way out there in the farther reaches of the solar system." He said WISE isn't the best instrument for detecting dwarf planets such as Pluto or Eris, but it should be able to pick up larger planets or brown dwarfs that couldn't be seen before. A world the size of Neptune could be spotted in an orbit 50 times farther away than Pluto's distance from the sun. Something as big as Jupiter could be seen even if it were a light-year away — more than 1,000 times farther than Pluto. 
In some quarters, such a discovery could lead to hand-wringing over a Planet X that's out to get us, but the researchers who have hypothesized the existence of such a planet emphasize that orbital mechanics would rule out any threat. And if WISE fails to detect a large planet, that should deal a mortal blow to the Planet X hypothesis, said John Matese, an astrophysicist at the University of Louisiana at Lafayette who has studied the issue for years. "If it doesn't discover it, the whole discussion should be concluded," he said. Looking for ‘failed stars’ WISE's science team is more confident about detecting brown dwarfs, which are sometimes known as "failed stars." Brown dwarfs are celestial bodies big enough to get a deuterium fusion process started — with masses higher than 13 times Jupiter's mass — but too small to sustain the hydrogen fusion reaction seen in stars. Brown dwarfs glow so dimly that they're difficult to detect, but when astronomers study distant star clusters closely, they usually spot at least one brown dwarf for every regular star. Astronomers suspect that our own celestial neighborhood contains brown dwarfs that have not yet been spotted. Some could be wandering closer to the solar system than our nearest known stellar neighbor, Proxima Centauri, which is 4.2 light-years away. "We should find several hundred brown dwarfs that are currently unknown," Ned Wright, an astrophysicist at the University of California at Los Angeles who serves as WISE's principal investigator, said in a news release. Some of those dark stars could even have planets, Wright said, and the yet-to-be-launched James Webb Space Telescope could focus on targets identified by WISE to look for them. Over the course of its planned nine-month science mission, WISE is expected to boost the solar system's inventory of asteroids by 100,000, Eisenhardt said. A system is being put in place to report hundreds of new asteroids per day, and to follow up on WISE's avalanche of data with ground-based telescope observations. Fresh discoveries, from WISE as well as other observational campaigns in space and on Earth, have the potential to blur the lines even further between stars, planets (from giants to dwarfs) and even smaller objects such as asteroids and comets. When astronomers want to study the entire planetary spectrum, they currently have only one sample to look at: our own solar system. And even there, scientists still haven't seen the full sample. Now that's about to change, and the reshaping of planetary science — a process that started in earnest five years ago with the discovery of Eris and the reclassification of Pluto — could continue for a long, long time. "We expect that the legacy of the WISE survey will last for decades," Eisenhardt said. © 2013 msnbc.com Reprints
<urn:uuid:f0289409-9c06-41cf-9d2d-0397741816ec>
3.53125
2,080
News Article
Science & Tech.
46.21081
1,725
Using a giant magnetic field, scientists at the University of Nottingham and the University of Nijmegen in the Netherlands have made a frog float in mid-air. The levitation trick works because giant magnetic fields slightly distort the orbits of electrons in the frog's atoms. The resulting electric current generates a magnetic field in the opposite direction to that of the magnet. A field of 16 teslas created a repulsive force strong enough to make the frog float until it made its escape. The team has also levitated plants, grasshoppers and fish. "If you have a magnet that is big enough, you could levitate a human," says Peter Main, one of the researchers. He adds that the frog did not seem to suffer any ill effects: "It went back to its fellow frogs looking perfectly happy."
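A rough, editorial sketch of why a field of this size is needed: levitating a diamagnetic object of density \rho and (negative) volume susceptibility \chi requires the upward magnetic force per unit volume to balance gravity,

    \frac{|\chi|}{\mu_0}\, B \frac{dB}{dz} \approx \rho g

For water-rich tissue, with |\chi| on the order of 10^{-5}, this works out to a field-gradient product B\,dB/dz of roughly 1400 T^2/m, a figure reachable near the end of the bore of a magnet in the 16-tesla class. The numbers here are illustrative estimates, not taken from the article.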
<urn:uuid:00e22c8c-93ca-4955-a9a2-776a0dfdcf69>
3.484375
194
Truncated
Science & Tech.
49.86859
1,726
Having survived for over 200 million years, some turtle species are now threatened with extinction because they get caught in fishing nets. Martin Lenhardt at the Virginia Institute of Marine Science has worked with turtles accidentally netted in Chesapeake Bay, and found a way to save them (WO 01/17119). Turtles have an air-filled middle ear that resonates at a frequency that changes with age and the depth of water. Lenhardt discovered that if boats emit a 100-kilohertz ultrasound signal, the shells of nearby turtles convert it to a lower frequency, between 200 hertz and 15 kilohertz. This makes the beasts' middle ears resonate, scaring them into swimming away or diving to safety.
<urn:uuid:3400eb8f-5dd1-497e-a938-42527a362dd0>
4.15625
171
Truncated
Science & Tech.
46.26
1,727
These functions can be used to interrogate the status of an existing database connection object.

Tip: libpq application programmers should be careful to maintain the PGconn abstraction. Use the accessor functions described below to get at the contents of PGconn. Reference to internal PGconn fields using libpq-int.h is not recommended because they are subject to change in the future.

The following functions return parameter values established at connection. These values are fixed for the life of the PGconn object.

Returns the database name of the connection.
char *PQdb(const PGconn *conn);

Returns the user name of the connection.
char *PQuser(const PGconn *conn);

Returns the password of the connection.
char *PQpass(const PGconn *conn);

Returns the server host name of the connection.
char *PQhost(const PGconn *conn);

Returns the port of the connection.
char *PQport(const PGconn *conn);

Returns the debug TTY of the connection. (This is obsolete, since the server no longer pays attention to the TTY setting, but the function remains for backward compatibility.)
char *PQtty(const PGconn *conn);

Returns the command-line options passed in the connection request.
char *PQoptions(const PGconn *conn);

The following functions return status data that can change as operations are executed on the PGconn object.

Returns the status of the connection.
ConnStatusType PQstatus(const PGconn *conn);
The status can be one of a number of values. However, only two of these are seen outside of an asynchronous connection procedure: CONNECTION_OK and CONNECTION_BAD. A good connection to the database has the status CONNECTION_OK. A failed connection attempt is signaled by status CONNECTION_BAD. Ordinarily, an OK status will remain so until PQfinish, but a communications failure might result in the status changing to CONNECTION_BAD prematurely. In that case the application could try to recover by calling PQreset. See the entry for PQconnectPoll with regards to other status codes that might be returned.

Returns the current in-transaction status of the server.
PGTransactionStatusType PQtransactionStatus(const PGconn *conn);
The status can be PQTRANS_IDLE (currently idle), PQTRANS_ACTIVE (a command is in progress), PQTRANS_INTRANS (idle, in a valid transaction block), or PQTRANS_INERROR (idle, in a failed transaction block). PQTRANS_UNKNOWN is reported if the connection is bad. PQTRANS_ACTIVE is reported only when a query has been sent to the server and not yet completed.

Looks up a current parameter setting of the server.
const char *PQparameterStatus(const PGconn *conn, const char *paramName);
Certain parameter values are reported by the server automatically at connection startup or whenever their values change. PQparameterStatus can be used to interrogate these settings. It returns the current value of a parameter if known, or NULL if the parameter is not known. Parameters reported as of the current release include server_version, server_encoding, client_encoding, application_name, is_superuser, session_authorization, DateStyle, IntervalStyle, TimeZone, integer_datetimes, and standard_conforming_strings. (server_encoding, TimeZone, and integer_datetimes were not reported by releases before 8.0; standard_conforming_strings was not reported by releases before 8.1; IntervalStyle was not reported by releases before 8.4; application_name was not reported by releases before 9.0.) Note that server_version, server_encoding and integer_datetimes cannot change after startup.
Pre-3.0-protocol servers do not report parameter settings, but libpq includes logic to obtain values for server_version and client_encoding anyway. Applications are encouraged to use PQparameterStatus rather than ad hoc code to determine these values. (Beware however that on a pre-3.0 connection, changing client_encoding via SET after connection startup will not be reflected by PQparameterStatus.) For server_version, see also PQserverVersion, which returns the information in a numeric form that is much easier to compare against.

If no value for standard_conforming_strings is reported, applications can assume it is off, that is, backslashes are treated as escapes in string literals. Also, the presence of this parameter can be taken as an indication that the escape string syntax (E'...') is accepted.

Although the returned pointer is declared const, it in fact points to mutable storage associated with the PGconn structure. It is unwise to assume the pointer will remain valid across queries.

Interrogates the frontend/backend protocol being used.
int PQprotocolVersion(const PGconn *conn);
Applications might wish to use this function to determine whether certain features are supported. Currently, the possible values are 2 (2.0 protocol), 3 (3.0 protocol), or zero (connection bad). The protocol version will not change after connection startup is complete, but it could theoretically change during a connection reset. The 3.0 protocol will normally be used when communicating with PostgreSQL 7.4 or later servers; pre-7.4 servers support only protocol 2.0. (Protocol 1.0 is obsolete and not supported by libpq.)

Returns an integer representing the backend version.
int PQserverVersion(const PGconn *conn);
Applications might use this function to determine the version of the database server they are connected to. The number is formed by converting the major, minor, and revision numbers into two-decimal-digit numbers and appending them together. For example, version 8.1.5 will be returned as 80105, and version 8.2 will be returned as 80200 (leading zeroes are not shown). Zero is returned if the connection is bad.

Returns the error message most recently generated by an operation on the connection.
char *PQerrorMessage(const PGconn *conn);
Nearly all libpq functions will set a message for PQerrorMessage if they fail. Note that by libpq convention, a nonempty PQerrorMessage result can consist of multiple lines, and will include a trailing newline. The caller should not free the result directly. It will be freed when the associated PGconn handle is passed to PQfinish. The result string should not be expected to remain the same across operations on the PGconn structure.

Obtains the file descriptor number of the connection socket to the server. A valid descriptor will be greater than or equal to 0; a result of -1 indicates that no server connection is currently open. (This will not change during normal operation, but could change during connection setup or reset.)
int PQsocket(const PGconn *conn);

Returns the process ID (PID) of the backend process handling this connection.
int PQbackendPID(const PGconn *conn);
The backend PID is useful for debugging purposes and for comparison to NOTIFY messages (which include the PID of the notifying backend process). Note that the PID belongs to a process executing on the database server host, not the local host!

Returns true (1) if the connection authentication method required a password, but none was available. Returns false (0) if not.
int PQconnectionNeedsPassword(const PGconn *conn);
This function can be applied after a failed connection attempt to decide whether to prompt the user for a password.

Returns true (1) if the connection authentication method used a password. Returns false (0) if not.
int PQconnectionUsedPassword(const PGconn *conn);
This function can be applied after either a failed or successful connection attempt to detect whether the server demanded a password.

Returns the SSL structure used in the connection, or null if SSL is not in use.
SSL *PQgetssl(const PGconn *conn);
This structure can be used to verify encryption levels, check server certificates, and more. Refer to the OpenSSL documentation for information about this structure. You must define USE_SSL in order to get the correct prototype for this function. Doing so will also automatically include ssl.h from OpenSSL.
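A minimal usage sketch tying several of these accessor functions together (illustrative only: the connection string is a placeholder and error handling is reduced to a single check; compile and link with -lpq):

    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        /* Hypothetical connection parameters, for illustration only. */
        PGconn *conn = PQconnectdb("host=localhost dbname=testdb user=tester");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            /* By libpq convention the message already ends with a newline. */
            fprintf(stderr, "Connection failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        /* Parameters fixed for the life of the PGconn object. */
        printf("database=%s user=%s host=%s port=%s\n",
               PQdb(conn), PQuser(conn), PQhost(conn), PQport(conn));

        /* Status data that can change as the session proceeds. */
        printf("server version=%d protocol=%d backend pid=%d\n",
               PQserverVersion(conn), PQprotocolVersion(conn), PQbackendPID(conn));

        const char *enc = PQparameterStatus(conn, "server_encoding");
        printf("server_encoding=%s\n", enc ? enc : "(unknown)");

        PQfinish(conn);
        return 0;
    }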
<urn:uuid:16a93ea5-75a9-48b7-8762-475dbc980077>
2.578125
1,825
Documentation
Software Dev.
41.450206
1,728
Mar. 19, 1998 WEST LAFAYETTE, Ind. -- Purdue University researchers have developed a new class of materials that has a wide variety of potential applications, from a coating to repel liquids to a membrane that could be used in wastewater treatment and drug delivery. The materials are called co-polymer networks, which are "built" from intersecting chains of small molecules linked together to form a larger, mesh-like structure. The two molecular "building blocks," or monomers, used in the new materials are acrylic acid and a derivative of oligoethylene glycol. The properties of an individual material in the class can be varied depending on the relative amounts of the monomers used to prepare it. "Because these materials are co-polymers, we can control their properties more precisely and over a wider range than we could if they were made of a single type of monomer," says Robert Scott, a Ph.D. candidate at Purdue who helped develop the material. "This level of versatility and control allows for a number of applications." The new class of materials is unique in that it is the first time materials with such a wide variety of properties have been derived from a combination of these two monomers, says Scott's adviser Nicholas Peppas, the Showalter Distinguished Professor at Purdue. Peppas has conducted research in polymers for more than 26 years and has developed new materials and polymers for applications that include biomedical applications. "The most exciting thing about this research is that we've not only developed a class of materials with diverse properties, but we've also come to understand fundamentally, on a molecular level, the basis for those properties," Scott says. Scott will present information on the new materials in two talks March 16 at the annual meeting of the American Physical Society in Los Angeles. His research has been funded by the National Science Foundation and the National Institutes of Health. The new materials, which were developed over the past four years with the help of lab assistant Atsmon Shahar, are particularly suited for separations applications, such as filtering mechanisms used in wastewater treatment, where only certain substances are allowed to pass through the mesh created by the interlacing polymers. "As we increase the acrylic acid content of the materials, the oligoethylene glycol chains that make up the network move further apart, increasing the mesh size, which in turn determines what substances can pass through," Scott explains. "By varying the acrylic acid content, as well as other parameters, we can precisely control the size of the molecules we allow through." Another application Scott has investigated in his lab is the controlled release of substances. "We've made systems that contain a model drug, and we're studying how the rate of diffusion of that drug out of this polymer varies as we vary the polymer structure," Scott says. "Using the material as a membrane for drug delivery is a particularly appealing application because we have very fine control over what drugs could be released through it and under what conditions." In addition, the acrylic acid in the materials makes them sensitive to the acidity, or pH, of their environment. The mesh size and diffusive properties vary depending on the pH of the environment -- an important consideration for drug delivery applications, since different parts of the body exhibit different pH levels. 
For example, a capsule incorporating this material and containing a particular drug might remain "closed" in the mouth and "open" in the stomach to release a drug. One type of the material also can be made to have a very dense network of molecular chains, which would make it very resistant to liquids, Scott says. "In that case it might make an ideal coating for applications that require a very low permeability to moisture," he says. "We can also modify the properties so that it will absorb various amounts of liquid. We're looking very closely at how we can control its affinity for water." Depending on its affinity for water, the material might also find applications in the cosmetics industry in moisturizers, Peppas says. Peppas says further studies of the materials will be needed before they are ready for industrial or medical use. Other social bookmarking and sharing tools: The above story is reprinted from materials provided by Purdue University. Note: Materials may be edited for content and length. For further information, please contact the source cited above. Note: If no author is given, the source is cited instead.
<urn:uuid:491a5b02-dc92-47fd-9ab7-74fe8390cf80>
3.046875
907
News Article
Science & Tech.
31.704733
1,729
Feb. 7, 2002 WEST LAFAYETTE, Ind. — A detailed look at a syringe-like structure designed to inject viral DNA into a host cell reveals a unique and complex entry scheme for viruses. The study may provide clues to how similar viruses infect cells and suggest ways for developing a new class of antibiotics and other drugs to prevent illnesses caused by viral pathogens. Scientists at Purdue University have solved the three-dimensional structure of the bacteriophage T4 virus, a virus that resembles a lunar lander in both its looks and intricate workings. The study, published in the Jan. 31 issue of the journal Nature, reveals for the first time how the virus binds to the surface of the host, punctures the cell wall with a syringe-like tube and injects its own genetic blueprint into the cell. This genetic information then sets the cell's machinery to work creating replicas of the virus. "Though the T4 virus has been studied extensively in the past, this study provides the first detailed information on the virus structure and how it works," says Michael Rossmann, Hanley Distinguished Professor of Biological Sciences at Purdue who directed the study. Bacteriophage T4 is a virus that infects only bacteria, in this case E. coli, a bacteria used extensively in molecular biology research. The study of bacterial viruses such as T4 is useful in understanding many basic functions in biology, Rossmann says. "This particular study tells us a great deal about how a virus infects a cell," he says. "These processes tend to be quite general, so mechanisms used by one virus often are similar to mechanisms used by other viruses, including those that infect humans." Bacteriophages may play a future role in controlling disease-causing bacteria, says Kamal Shukla, the National Science Foundation project officer for this research. "Knowing the exact mechanism of T4 bacteriophage infectivity is a significant breakthrough," Shukla says. "This information could eventually help in creating designer viruses that could be the next class of antibiotics." Analysis of the cell-puncturing device also reveals a structure that may hold potential for applications in nanotechnology, such as microscopic probes, Rossmann says. "This a very stable structure that looks like a small stylus," he says. "It might be useful as a probe in an atomic force microscope, which employs a probe of molecular dimension." The T4 virus consists of an elongated head, which carries the virus' genetic material, and a tail made up of a hexagonal baseplate and six leg-type structures, called long-tail and short-tail fibers. In the study, the Purdue group analyzed atom-by-atom the structure of the virus' baseplate. The baseplate is the key component of the virus, Rossmann says, serving as a "nerve center" and sending signals to and from the virus' head and tail fibers. While transmitting its messages, the baseplate also prepares the virus machinery to eject its DNA into the host cell. "A whole series of events are required to recognize, attach and confirm the attachment, and then contract so that the viral DNA can be ejected into the host," Rossmann says. "It's a very complicated system for infecting a cell." The viral machine works as follows: The virus uses its long-tail fibers to recognize its host and to send a signal back to the baseplate. Once the signal is received, the short-tail fibers help anchor the baseplate into the cell surface receptors. 
As the virus sinks down onto the surface, the baseplate undergoes a change — shifting from a hexagon to a star-shaped structure. At this time, the whole tail structure shrinks and widens, bringing the internal pin-like tube in contact with the outer membrane of the E. coli cell. As the tail tube punctures the outer and inner membranes of the E. coli cell, the virus' DNA is injected through the tail tube into the host cell. The DNA then instructs the bacterium to produce new viruses. So many are produced, in fact, that the E. coli eventually bursts, setting masses of new virus free to infect other cells. The new detailed images provided by Rossmann's group also reveal a structure slightly different than what scientists had envisioned. "We found that the baseplate is shaped like a cup or small dome," Rossmann says. "Previously it was believed that the baseplate was a rather flat structure." The study also is the first to show how the syringe-like tube is situated in the center of the baseplate, positioned in line with the DNA contained in the virus head. The studies were done at Purdue using X-ray crystallography, a technique often used to study structures such as proteins and viruses, in atomic detail. But the process works only if the substances can be made to form crystals. Crystals are used because the diffraction pattern from one single molecule could be insignificant, but the many individual, identical molecules in a crystal amplify the pattern. Diffraction patterns are created when an X-ray beam hits a crystal, causing the electrons surrounding each atom to bend the beam. Computers can then be used to interpret this pattern and reconstruct the positions of the atoms. "Because the structure is so complex, we could not crystallize the entire virus structure at once," Rossmann says. "Instead, we crystallized the various components and gradually pieced together a picture of the structure." The research was funded by the National Science Foundation. Rossmann and his research team at Purdue collaborated with Shuji Kanamaru and Fumio Arisaka of the Tokyo Institute of Technology, and Vadim Mesyanzhinov of the Shemyakin-Ovchnnikov Institute of Bioorganic Chemistry in Moscow, Russia. Other social bookmarking and sharing tools: Note: Materials may be edited for content and length. For further information, please contact the source cited above. Note: If no author is given, the source is cited instead.
<urn:uuid:9bc5958e-0f03-439a-bbe4-ffc43310902d>
3.8125
1,253
News Article
Science & Tech.
39.944947
1,730
Apr. 17, 2012 Researchers from the Complutense University of Madrid (UCM, Spain) have mathematically shown that particles charged in a magnetic field can escape into infinity without ever stopping. One of the conditions is that the field is generated by current loops situated on the same plane. At the moment this is a theoretical mathematical study, but two researchers from UCM have recently proved that, in certain conditions, magnetic fields can send particles to infinity, according to the study published in the journal Quarterly of Applied Mathematics. "If a particle 'escapes' to infinity it means two things: that it will never stop, and "something else," Antonio Diaz-Cano, one of the authors, explained. Regarding the first, the particle can never stop, but it can be trapped, doing circles forever around a point, never leaving an enclosed space. However, the "something else" goes beyond the established limits. "If we imagine a spherical surface with a large radius, the particle will cross the surface going away from it, however big the radius may be" the researcher declares. Scientists have confirmed through equations that some particles can escape infinity. One condition is that the charges move below the activity of a magnetic field created by current loops on the same plane. Other requirements should also be met: the particle should be on some point on this plane, with its initial speed being parallel to it and far away enough from the loops. "We are not saying that these are the only conditions to escape infinity, there could be others, but in this case, we have confirmed that the phenomenon occurs," Diaz-Cano states. "We would have liked to have been able to try something more general, but the equations are a lot more complex." In any case, the researchers recognise that the ideal conditions for this study are "with a magnetic field and nothing else." Reality always has other variables to be considered, such as friction and there is a distant possibility of going towards infinity. Nonetheless, the movement of particles in magnetic fields is a "very significant" problem in fields such as applied and plasma physics. For example, one of the challenges that the scientists that study nuclear energy face is the confinement of particles to magnetic fields. Accelerators such as Large Hadron Collider (LHC) of the European Organisation for Nuclear Research (CERN) also used magnetic fields to accelerate particles. In these conditions they do not escape to infinity, but they remain doing circles until they acquire the speed that the experiments need. Other social bookmarking and sharing tools: - A. Díaz-Cano and F. González-Gascón. Escape to infinity in the presence of magnetic fields. Quart. Appl. Math., August 26, 2011; 70 (2012), 45-51 [link] Note: If no author is given, the source is cited instead.
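For reference (an editorial note, not part of the press release), the motion being analysed is governed by the Newton-Lorentz equation for a particle of mass m and charge q in a magnetic field B(x):

    m\,\ddot{\mathbf{x}} = q\,\dot{\mathbf{x}} \times \mathbf{B}(\mathbf{x})

Escaping to infinity means |x(t)| grows without bound as t increases. Because the magnetic force is always perpendicular to the velocity it does no work, so the particle's speed never changes; the question the authors address is purely geometric, namely whether the trajectory stays bounded or eventually leaves every sphere, which matches the "never stops" remark quoted above.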
<urn:uuid:04d12ec3-49ea-401c-965b-839439819452>
3.53125
588
News Article
Science & Tech.
41.440896
1,731
Oct. 25, 2012 Japan's "triple disaster," as it has become known, began on March 11, 2011, and remains unprecedented in its scope and complexity. To understand the lingering effects and potential public health implications of that chain of events, scientists are turning to a diverse and widespread sentinel in the world's ocean: fish. Events on March 11 began with a magnitude 9.0 earthquake, the fourth largest ever recorded. The earthquake in turn spawned a massive 40-foot tsunami that inundated the northeast Japanese coast and resulted in an estimated 20,000 missing or dead. Finally, the wave caused catastrophic damage to the Fukushima Dai-ichi nuclear power plant, resulting in the largest accidental release of radiation to the ocean in history, 80 percent of which ended up in the Northwest Pacific Ocean. In a Perspectives article appearing in the October 26, 2012, issue of the journal Science, WHOI marine chemist Ken Buesseler analyzed data made publicly available by the Japanese Ministry of Agriculture, Forestry and Fisheries (MAFF) on radiation levels in fish, shellfish and seaweed collected at ports and inland sites in and around Fukushima Prefecture. The picture he draws from the nearly 9,000 samples describes the complex interplay between radionuclides released from Fukushima and the marine environment. In it, Buesseler shows that the vast majority of fish caught off the northeast coast of Japan remain below limits for seafood consumption, even though the Japanese government tightened those limits in April 2012. Nevertheless, he also finds that the most highly contaminated fish continue to be caught off the coast of Fukushima Prefecture, as could be expected, and that demersal, or bottom-dwelling, fish consistently show the highest level of contamination by a radioactive isotope of cesium from the damaged nuclear power plant. He also points out that levels of contamination in almost all classifications of fish are not declining, although not all types of fish are showing the same levels, and some are not showing any appreciable contamination. As a result, Buesseler concludes that there may be a continuing source of radionuclides into the ocean, either in the form of low-level leaks from the reactor site itself or contaminated sediment on the seafloor. In addition, the varying levels of contamination across fish types point to complex methods of uptake and release by different species, making the task of regulation, and of communicating the reasons behind decision-making to the fish-hungry Japanese public, all the more difficult. "To predict how patterns of contamination will change over time will take more than just studies of fish," said Buesseler, who led an international research cruise in 2011 to study the spread of radionuclides from Fukushima. "What we really need is a better understanding of the sources and sinks of cesium and other radionuclides that continue to drive what we're seeing in the ocean off Fukushima." - K. O. Buesseler. Fishing for Answers off Fukushima. Science, 2012; 338 (6106): 480. DOI: 10.1126/science.1228250
<urn:uuid:07ac73c1-a9d5-4499-b883-960463833692>
3.875
656
Truncated
Science & Tech.
35.690478
1,732
Nov. 19, 2012 Sound waves are commonly used in applications ranging from ultrasound imaging to hyperthermia therapy, in which high temperatures are induced, for example, in tumors to destroy them. In 2010, researchers at Caltech led by Chiara Daraio, a professor of aeronautics and applied physics, developed a nonlinear acoustic lens that can focus high-amplitude pressure pulses into compact "sound bullets." In that initial work, the scientists demonstrated how sound bullets form in solids. Now, they have done themselves one better, creating a device that can form and control those bullets in water. The nonlinear acoustic lens is constructed from chains strung with stainless-steel spheres that are oriented parallel to one another -- and squeezed together -- to form an array. The gadget was inspired by Newton's cradle, a popular toy that consists of a line of identical balls suspended by wires from a frame. When an end ball is pulled back and released, it slams into the next ball, causing the last ball in the line to fly outward. Similarly, in the acoustic lens, striking one end of the array generates compact nonlinear pulses of sound -- solitary waves that propagate through the lens and can be tightly focused on a target area; when they coalesce at this focal point, they produce a significantly amplified version: the sound bullet. These intense pressure waves may be used to obliterate tumors or kidney stones -- leaving surrounding tissues unharmed -- or probe objects like ship hulls or bridges for unseen defects. In the new work, the lens has been made more accurate, and a waterproof interface, which efficiently transmitted the pulses, was inserted between the chains and water. "We use water as a target medium with the idea that the acoustic lens could be used for underwater imaging and/or biomedical applications," says postdoc Carly Donahue, who helped refine the device. "Currently, our work is fundamental in nature. We are focused on demonstrating proof of principle and establishing the technical strengths and weaknesses, which will inform the future design of engineering devices for specific applications," she adds. "For example, using these systems in biomedical applications requires reducing their dimensions and learning about the related scaling effects. Creating commercially viable devices will require the involvement of industrial partners." Donahue discusses the technology and its potential applications in a talk at the APS Division of Fluid Dynamics Meeting, which will take place November 18-20, 2012 at the San Diego Convention Center, located near the historic Gaslamp District on the waterfront, in San Diego, California. The talk is entitled, "An Experimental Study of a Nonlinear Acoustic Lens Interfaced with Water." Other social bookmarking and sharing tools: The above story is reprinted from materials provided by American Physical Society's Division of Fluid Dynamics. Note: If no author is given, the source is cited instead.
<urn:uuid:55ec35e0-c3ef-42e1-883e-568dd27c08ef>
3.546875
582
Truncated
Science & Tech.
28.985489
1,733
Nov. 21, 2012 For the first time, researchers tracking the behavior of emperor penguins near the sea have identified the importance of sea ice for the penguins' feeding habits. The research, published November 21 in the open access journal PLOS ONE by Shinichi Watanabe of Fukuyama University, Japan, and colleagues, describes emperor penguin foraging behavior through the birds' chick-rearing season. Unlike other species such as Adelie penguins, emperor penguins spent much more time diving for food, and used only about 30% of their time at sea to take short breaks to rest on sea ice. The birds did not travel long distances on the ice, or use it for other activities. The study also suggests that these short rest periods on sea ice may help the penguins avoid predators such as leopard seals. Though sea ice conditions are known to affect penguin populations, the relationship between ice levels and penguins' foraging has been unclear because of the difficulties of tracking the birds at sea. Watanabe says, "The monitoring technique developed in this study will help to understand the relationship." The above story is reprinted from materials provided by Public Library of Science. - Shinichi Watanabe, Katsufumi Sato, Paul J. Ponganis. Activity Time Budget during Foraging Trips of Emperor Penguins. PLoS ONE, 2012; 7 (11): e50357. DOI: 10.1371/journal.pone.0050357
<urn:uuid:06a324b2-9b3d-4bd5-ab58-a18398c0a311>
3.765625
329
News Article
Science & Tech.
45.127831
1,734
In a previous article, we laid out the framework for using OS X in Web development. With more than 60% of the world’s Web servers running Apache, having the same local, database-driven development environment can accelerate Web development, design and testing. In this piece, we will tackle getting an OS X systems configured for localhost Web development. A default OS X install includes the majority of what you will need to get a basic Web server up and running. Since we previously discussed the LAMP, or Linux, Apache, MySQL and Perl/PHP, platform, this is the base configuration we will cover here. I’ll also include links at the end of this article to information on installing Python, as this is a growing and popular language for Web scripting, along with some notes on using CVS. Your OS X system is somewhat pre-configured to run a static Website with Apache pre-installed and pre-configured. The primary Website document root, which would be accessed at http://127.0.0.1 (or http://localhost; which you use is up to you) is found in the /Library/WebServer/Documents/. You will find a number of index files in here, due to Apache’s ability to negotiate content for localization, i.e. serving up the same page in different languages. You may remove these files if you will be focusing on one language. Additionally, each user on OS X can, by default, serve Web pages from their home directory using their short name, i.e. http://127.0.0.1/~shortname (i.e. Blane Warrene uses the shortname bwarrene). Content for these local sites should go in the local user’s existing Sites folder within their home directory (/Users/shortname/Sites/). To enable browsing of these sites, you will need to start the Apache Web server, which is simply done in the System Preferences menu by selecting Sharing and turning on Personal Web Sharing. If you prefer to FTP your content locally, rather than copying the files into directories, you should also turn on FTP Access within this preference pane. Apache on OS X Tip As with Apache on Linux, additional modules or services can be added in, or Apache can be recompiled. These actions normally require that you restart Apache. You can return to the System Preferences menu, select the Sharing pane and simply stop and start Personal Web Sharing to restart Apache. If you require multiple local sites under development at once, you can easily establish a folder per project in the Sites directory in your Home folder. You can then access these for testing via browser using http://localhost/~shortname/project1, http://localhost/~shortname/project2, and so on. You can custom configure Apache to use custom urls, such as http://project1, or http://project2, with a combination of edits to the httpd.conf (found in /etc/httpd/httpd.conf) and Hosts files on your system, which can be found in /etc/hosts/. The rudimentary example below shows two sites. Using Vi or your favorite text editor, edit the httpd.conf file as follows: - Uncomment the NameVirtualHost line and replace the * with 127.0.0.1 - Add two virtual host containers: Save and close the httpd.conf file. This will entail an Apache restart, which is done in the System Preferences menu under the Sharing pane. Finally, to ensure that we’re resolving locally, we will edit your local hosts file. Using the Terminal utility, issue the following command: sudo vi /etc/hosts After entering your admin password, press i for insert mode, move the cursor to the end of the file, and add these lines: Then press the esc key, type :wq, and press return. 
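The exact contents of those two containers and hosts entries depend on your own setup; as a rough illustration (reusing the example shortname and project folders mentioned above, which you should replace with your own paths), the httpd.conf additions might look like this:

    <VirtualHost 127.0.0.1>
        ServerName project1
        DocumentRoot /Users/bwarrene/Sites/project1
    </VirtualHost>

    <VirtualHost 127.0.0.1>
        ServerName project2
        DocumentRoot /Users/bwarrene/Sites/project2
    </VirtualHost>

and the corresponding /etc/hosts additions like this:

    127.0.0.1    project1
    127.0.0.1    project2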
Your configuring is complete! You should be able to access these separate sites in your Web browser at http://project1 and http://project2. Remember to put some content in those directories when you first browse to them! Some Final Notes on Apache There are numerous features you can utilize to create a mirror test environment to your production Apache Web servers. Take some time to peruse the httpd.conf file, which contains substantial documentation that can assist you in enabling features like server side includes. In addition, while we’re specifically discussing setting up OS X as a localhost Web server in this piece, you should note that it only takes a few steps to serve a Website directly from your workstation to the Internet for collaboration or testing activities. There are 2 pieces to this puzzle, assuming you have access to a static IP address and a broadband connection. If you prefer to use a fully qualified domain name or sub-domain (i.e. test.domain.com), you will need to insure DNS service is pointing to your static IP address on your local workstation. You will need to go into the System Preferences, select the Sharing pane, and ensure the “Network Address” section of this screen is correctly filled in with a publicly accessible IP address. Then stop and start Personal Web Sharing to restart Apache.
<urn:uuid:ddd82b05-0ae1-49d1-b80a-59dd58e0eb8e>
2.78125
1,091
Tutorial
Software Dev.
53.580932
1,735
VOICE: The oceans are the Earth’s single largest absorbers of carbon dioxide. But they are being overloaded by humans’ CO2 emissions. The result: acidification, with catastrophic consequences. Yves Paccalet - Renowned French writer, philosopher and environmentalist (M): If the acidification of the ocean continues as it is now, then these so-called major carbon sinks will no longer be able to play their role, and therefore, everything goes faster. Dr. Wendy Watson-Wright – Assistant Director-General, UNESCO Intergovernmental Oceanographic Commission (F): What’s happening with this absorption of carbon dioxide is #1, the ocean is becoming saturated, and #2, the ocean is becoming more acid. This has an enormous impact. Professor Jean-Pierre Gattuso – Oceanographer, National Centre for Scientific Research, France (M): And those changes which occurred in the past were very slow, so there was a lot of time and scope for the organisms to adapt, to evolve to those changing conditions. And now we are changing in almost an instant. VOICE: As the oceans reach a CO2 saturation point, their waters acidify faster, which in turn threatens all marine life. Dr. Wendy Watson-Wright – Assistant Director-General, UNESCO Intergovernmental Oceanographic Commission (F): Ocean acidification is having impacts on many, many different species. Dr. James Barry - Senior Scientist, Monterey Bay Aquarium Research Institute (M): In areas where corals are living where it is now more acidic, they have more fragile skeletons, which allows more rapid coastal erosion, etc. VOICE: In the Pacific northwest of the USA, coastal areas have already become so acidic that baby oysters are dying as their shells corrode before they are fully formed. Fish, previously thought to be unaffected, may also perish in the vulnerable egg and larvae stages of life. This is just a sign of what is to come if carbon emissions continue unchecked. Dr. Carol Turley – Senior Scientist, Plymouth Marine Laboratory, UK (F): Sixty-five million years ago, there was a big carbon perturbation then, and the oceans became more acidic, they became warmer, they had less oxygen. Many, many species on Earth became extinct, including the dinosaurs. And we’re kind of going through something that’s even more rapid now. Dr. James Barry (M): It will change more in the lifetime of our children than we’ve seen for the last 20-30 million years. Yves Paccalet (M): It’s a problem of the intersection of two curves -- a curve for the destruction and a curve showing the ability of the people to react. At what point will the curves meet? Will it intersect when the destruction will be so strong to make everything possibly collapse? Save our Oceans. Save our Planet. Be Veg. Go Green.
<urn:uuid:46bddbb0-3eb0-456f-a7a4-28cdce5d724a>
3.25
610
Content Listing
Science & Tech.
42.841746
1,736
June 18, 2012 Wind socks, bow shocks, shockwaves and collisions are often used to describe the phenomena that create high-frequency electromagnetic radiation in the cosmos. From gamma rays down through X-rays and extreme ultraviolet, conventional theories have relied upon gravity and acceleration as the only way for them to be produced in space. Compression of hydrogen gas and dust is supposed to create enough transfer of momentum that it reaches temperatures greater than the cores of some stars. In other words, it is the high temperature of the gas that makes it glow so brightly. The CHANDRA satellite has detected streams of charged material pouring out of the Crab Nebula, emitting X-rays as they go. It was long thought that nebular clouds or the expanding gases of supernova explosions could not be sources of those frequencies, since the bubbles were supposed to be areas where gases were losing kinetic energy and cooling off. However, several “mysterious” observations have called into question the underlying principles of standard theory. Astronomers also note that the two giant stars in Eta Carinae are blowing off “intense winds” of such velocity that the collision of the wave fronts is said to generate X-rays where the shells intersect. This is supposed to take place through kinetic shock, even though it is acknowledged that the “wind” is ionized particles. According to researchers, as electrons bounce back and forth in the magnetic fields they are accelerated until they collide with low-frequency photons and give them an energy boost, creating the X-ray emissions. In previous Picture of the Day articles, it was noted that many structures in the galaxy are active energy sources. Some of them eject charged matter out from their poles, or leave long braided tails extending for light-years, or have hourglass shapes composed of tightly bunched filaments. A more detailed image of Eta Carinae reveals the distinctive hourglass shape that results from intense plasma discharges. The Eta Carinae binary system appears to have a mass 150-times that of the Sun and to be shining with four-million-times the brilliance, which indicates the high current density of the stellar z-pinch. It is well known that one shouldn’t look directly at an electric arc without eye-protection since the brilliant blue-white light is also a source of intense ultraviolet that can damage the retina. In the same way, the arc light from Eta Carinae is so bright that it is generating X-rays powerful enough to be detected on Earth, 7500 light-years away. Eta Carinae also erupted with a flash of visible light, brighter than the Moon, in the 1800s. It then faded from visibility until 1941 when it slowly began to brighten to a naked-eye object, and it remains so today. The variability of the binary stars’ behavior can be attributed to changes in the circuit caused by the motions of the two giant stars at the heart of the system. Eta Carinae, rather than being an example of “billiard ball physics” and “wind socks” in space is a remarkable confirmation of the Electric Star hypothesis.
<urn:uuid:b870ee67-e472-453c-bd62-0a0f3653ee54>
3.703125
654
Truncated
Science & Tech.
38.268276
1,737
Prof. Michael Mauel, Columbia University "Bringing the Stars to Earth: The Path to Fusion Power" Abstract: A grand challenge of applied physics is to use our scientific know-how of plasma physics to achieve one of the world’s most important technical goals: a source of energy that is clean, safe, and available for thousands of years. Fusion energy is the most promising source of energy meeting these requirements. Fusion uses the heavy isotope of hydrogen, called deuterium, to form helium and release huge amounts of energy. Every bottle of water contains enough deuterium to generate the equivalent of a barrel of oil when used in a fusion power source. But a major challenge remains: deuterium must first be heated to the temperature of the stars before fusion energy can be released. Professor Mike Mauel will describe experiments that test whether or not the magnetic fields used to confine high temperature plasma at the surfaces of stars or in planetary magnetospheres can produce the conditions that will make fusion energy work. The largest of these is the ITER experiment, now under construction in France and lead by an international organization that includes the U.S., Europe, Russia, China, India, South Korea, and Japan. Biography: Michael Mauel was educated at MIT receiving his B.S. (1978) and his Sc.D. (1983) with a research specialty in plasma physics. While at MIT, he was awarded the Fortesque Fellowship from the IEEE and the Guillemin Prize. Following post-doctoral research at MIT he joined the faculty of Columbia University in 1985 where he is currently Professor of Applied Physics and was Chair of the Department of Applied Physics and Applied Mathematics from 2000-06. At Columbia, his research focused on high temperature plasma physics, and he was awarded a Certificate of Appreciation by the U.S. Department of Energy in 1989 for his work in fusion energy. Dr. Mauel collaborated extensively with the TFTR research team at the Princeton Plasma Physics Laboratory and participated in advanced tokamak experiments and in the world’s first high-power D-T fusion experiments. He was a visiting scientist at DIII-D fusion experiment at General Atomics in 1994, investigating high-pressure "wall mode" instabilities and co-discovered techniques to generate internal transport barriers. At Columbia University, he built experimental programs in plasma processing in collaboration with IBM and in laboratory space physics with the support of NASA, NSF, and the AFOSR. He also co-directed the Levitated Dipole Experiment, a joint research project of Columbia University and MIT that uses high-field superconducting magnets to explore the application of magnetospheric physics to the confinement of high-pressure plasma in the laboratory. In 1994, Mauel was named Teacher of the Year at Columbia's Fu Foundation School of Engineering and Applied Science, and, in 2000, he received the Rose Prize for Excellence in Fusion Engineering from the Fusion Power Association. During the 2006-2007 academic year, Mauel served in the Office of International Energy and Commodity Policy at U.S. Department of State as a Jefferson Science Fellow, and he was awarded a Certificate of Appreciation by the Assistant Secretary of State. Dr. Mauel is a fellow of the APS. He served as Chair of the APS Division of Plasma Physics and as member and chair of numerous physics and policy advisory committees addressing issues concerning fusion energy science, plasma physics research and education.
<urn:uuid:f033c5df-3f5e-4a46-9243-320ccd09263c>
2.765625
703
About (Org.)
Science & Tech.
36.36158
1,738
A curve of radius 138 m is banked at an angle of 11°. A 762-kg car negotiates the curve at 82 km/h without skidding. Neglect the effects of air drag and rolling friction. Find the following. (a) the normal force exerted by the pavement on the tires (b) the frictional force exerted by the pavement on the tires (c) the minimum coefficient of static friction between the pavement and the tires
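One way to check the numbers: the normal force and friction (taken as acting down the bank, since 82 km/h is above the no-friction speed for this bank angle) must balance gravity vertically and supply the centripetal acceleration v²/r horizontally; solving those two equations gives N and f. A short script along those lines (a sketch, not an official solution; the variable names are mine):

```python
import math

# Given quantities
r = 138.0                  # curve radius, m
theta = math.radians(11)   # bank angle
m = 762.0                  # car mass, kg
v = 82 / 3.6               # 82 km/h converted to m/s
g = 9.81                   # gravitational acceleration, m/s^2

a_c = v**2 / r             # required centripetal acceleration (horizontal)

# Vertical balance:   N*cos(theta) - f*sin(theta) = m*g
# Horizontal balance: N*sin(theta) + f*cos(theta) = m*a_c
N = m * (g * math.cos(theta) + a_c * math.sin(theta))
f = m * (a_c * math.cos(theta) - g * math.sin(theta))
mu_min = f / N             # smallest static coefficient that can supply f

print(f"(a) normal force    N    = {N:.0f} N")
print(f"(b) friction force  f    = {f:.0f} N")
print(f"(c) minimum mu_s         = {mu_min:.2f}")
```

With these inputs the normal force should come out near 7.9 kN, the friction force near 1.4 kN, and the minimum coefficient of static friction around 0.18.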
<urn:uuid:b19e6075-eff6-45c4-8e41-d7c621e69ae2>
3.25
95
Tutorial
Science & Tech.
67.785223
1,739
I could have hoped that humanity would have learned the lesson that we know very little about the marine environment. Over the last twelve months we have realized that a species of river dolphins is in fact two (see our section on river dolphins). When politicians tell us that there is 'X amount of cetaceans', often latter evidence is that we actually had 'B', 'C' and 'D' populations making up 'X-Y' of a total for the species. The whaling debate is littered with these issues. The BBC now reports that a species of skate could become the first marine fish driven to extinction by commercial fishing. The BBC goes onto say 'A study reveals that an error in the classification of the species has meant researchers have failed to see just how close to the brink it is'. 'The research team, led by Samuel Iglesias from the Marine Biology Station in Concarneau on the west coast of France, paints a very bleak picture for the future of the flapper skate.. Dr Iglesias said: "The threat of extinction for European Dipturus together with mislabelling in fishery statistics highlight the need for a huge reassessment of population for the different Dipturus species in European waters. "Without revision and recognition of its distinct status the world's largest skate, D. intermedia, could soon be rendered extinct."'
<urn:uuid:a3092ee6-909b-467c-8d1d-ff6cf9b3fe72>
3.4375
286
News (Org.)
Science & Tech.
49.446266
1,740
Last week's announcement of the discovery of a new particle seemed to answer one of the great outstanding questions in physics. But for those who haven't been immersed in all things LHC, the results were likely to raise all sorts of new questions (along with "what was all the fuss about again?"). So, to help navigate the post-Higgs world, we put together a short Q&A, based on questions that some of the Ars staff had. I know we detected it in the Large Hadron Collider, but how did they actually make Higgs bosons? There are two ways to answer that question. The first is that we're simply converting energy into matter. The protons in the collider carry a tremendous amount of energy, and it has to go somewhere. Given Einstein's E = mc2, we know that some of that energy can be converted into matter. That's why things that are much heavier than two protons at rest can pop out of the collisions. But Einstein's equations aren't magic, in that particles don't just poof into existence—there are actual processes that create them. In the LHC, the most common process that ends in a Higgs boson is gluon fusion. Gluons are the (apparently massless) carriers of the strong force that holds quarks together to form things like protons and neutrons. If two of them merge, then one possible outcome is a single Higgs particle. Everyone says that this particle was predicted by the Standard Model, but how exactly? What was missing that made people theorize the Higgs? The Standard Model describes the properties of fundamental particles and the forces that mediate their interactions. Some of these, like the photon, are massless; others, like the W and Z bosons that mediate the weak force, weigh as much as entire atoms (including some that the weak force causes to decay). Although its possible to just say "this is what these things weigh," physicists find this sort of approach dissatisfying. So, they developed a theoretical mechanism that could supply some particles with mass. Several papers, appearing about the same time, suggested that there's a pervasive field that all particles can interact with. Some, like the photon, don't, and remain massless. Others, like the W and Z bosons, undergo large interactions with the field, picking up a large mass in the process. Peter Higgs published the first paper that indicated that this field should have a corresponding particle, which eventually led to it picking up his name: the Higgs boson. (Physicist Matt Strassler has written much more about the particle's history and role in the Standard Model.) With the discovery of the W, Z, and top quark, the Higgs remained the last particle predicted by the Standard Model that remained undiscovered. Finding it became a key test as to whether the Model provided a complete picture of the basic particles and forces. Many scientists are being careful about saying that we've only found a boson that looks like the Higgs. What's that supposed to mean? If you've read our coverage of the Higgs, you know that the Standard Model predicts that it will decay along a variety of specific pathways: two photons, four leptons, etc. The fact that we're seeing something that's a boson, and clearly decays through at least some of these pathways, tells us that we've seen something very much like the Standard Model Higgs. But it may not be precisely the Standard Model version. So far, we don't have enough collisions to tell the Standard Model apart from some related theories. For example, one rare decay pathway should produce two tau particles. 
(Taus are part of the lepton family, which includes the electron and its heavier cousin the muon. Think of the tau as the electron's morbidly obese uncle.) So far, the CMS detector has seen none of these decays (the ATLAS team hasn't performed this analysis yet), but their absence isn't yet statistically significant. If that continues as more Higgs are produced, then it will suggest that we're looking at a non-standard Higgs. What could that be? There are a number of variations on the theory that predict it may take some of those pathways more or less often than the vanilla version of the theory. And there's a major extension to the Standard Model, called supersymmetry, that suggests that the Standard Model's particles are all parts of larger families, meaning that there would be multiple Higgs bosons, and we've only found one. Matt Strassler told Ars that a few more exotic theories suggest there will be Higgs-like particles that do very different things, some involving extra dimensions. It's only by making more of these bosons that we can start to tell these possibilities apart. Which brings us to our next question. The key thing here is that, if we haven't found the Standard Model Higgs, then we don't get to keep the Standard Model as it is. We could end up with a mildly tweaked version, we could have a Standard Model plus extensions, or we could be seeing hints of something much more significant. Until we have a better understanding of the particle we're seeing, we can't tell any of these apart. If the Large Hadron Collider was made to find the Higgs, what's it going to do now? Make more Higgs, so we can answer the previous question, for starters. CERN's director announced that it will run for a few extra months specifically to get a better statistical handle on whether this is the Standard Model Higgs. Beyond that, many other theoretical particles, including some of those predicted by things like supersymmetry, are already within reach of the energies at the LHC. Once it restarts in a couple of years, it will be running at much higher energies, opening up a greater range for discovery. Even if you don't think it's worth chasing down theoretical particles, the Universe keeps telling us that dark matter is likely to be comprised of a heavy fundamental particle. The LHC should be able to spot these if they're really out there. Does this eliminate the need to build another collider? Actually, it will certainly inform, and possibly motivate, the construction of anything that comes next. The LHC may have been a great Higgs discovery machine, but it's actually not so hot if we want to look at the Higgs in detail (and wanting the answers to the above should suggest we do). The problem is that proton collisions are messy, since you're actually colliding what's essentially a bag of quarks, gluons, and virtual particles, all of which may end up carrying some fraction of the total energy. All sorts of things spill out of the resulting collisions, making it difficult to separate out the Higgs decay. Some of the decay channels are so noisy that they actually made the discovery statistics worse in the recent announcements. A much cleaner way of going about looking at the Higgs would be to collide fundamental particles, ideally with their antiparticles. We could then tune the energy to make producing our 126GeV Higgs much more likely. That was what motivated the construction of SLAC, which smashed electrons together to produce lots of the W and Z bosons. 
Unfortunately, building one will be a real challenge. Electrons don't like to go around in circles (they lose energy quickly), so we'd have to build a linear collider, one that is longer than anything we've built previously. That gets expensive. The alternative is to build a muon collider, but this would involve the development of lots of new and unproven technology. In the age of tight science budgets, the prospects for a major construction project look bleak. That reminds me—the LHC cost a lot of money. Couldn't that have been put to better use? It's really difficult to guess what scientific advances are going to pay dividends. Logic gates were first considered around 1900; quantum mechanics was developed in the 1930s. It took until the 1970s for them to be married in the form that all of us now use. Restriction enzymes were discovered in the 1960s when people were trying to figure out why only some viruses could infect some bacteria. They ended up being an essential foundation for the biotech industry. I could go on with examples for ages. If anyone tells you which areas of basic research will have the largest economic impact 30 years from now, I'd bet money they're wrong. Might the money have done more good in applied research? Possibly, but even there, there are no guarantees. The technology we actually get is often radically different from what we'd want or expect based on the state of scientific knowledge. In other words, we may want and expect flying cars, but we end up with always-online smartphones. And I'd trade them both for fusion power, the basic physics of which we nailed down decades ago.
<urn:uuid:774c0b9f-f635-4705-8028-03beced576be>
3.3125
1,850
Q&A Forum
Science & Tech.
55.928429
1,741
Michigan State University Libraries Home News about environmental studies resources or events provided by the MSU Libraries. For more information visit the Environmental Studies Resources web page or contact Jon Harrison. The GreenFILE database by EBSCO is the latest addition to the Environmental Studies suite of databases offered by the MSU Libraries. GreenFILE offers well-researched information covering all aspects of human impact on the environment. Its collection of scholarly, government and general-interest titles includes content on global warming, green building, pollution, sustainable agriculture, renewable energy, recycling, and more. The database provides indexing and abstracts for approximately 295,000 records, as well as Open Access full text for more than 4,600 records. GreenFILE is a resource designed to help individuals and organizations interested in reducing the negative impact, and increasing the positive impact, they have on the environment. The database includes information for individuals, such as installing solar panels and recycling; for corporations needing information on green agriculture, hybrid cars or waste management; as well as environmental laws, regulations and studies. The goal is for GreenFILE to be a practical tool for everyday information and a resource for academic study and classroom activities. GreenFILE covers content going back more than 35 years. Journal articles unique to GreenFILE include: Bioscience, Journal of Environmental Planning & Management, Journal of Ecology, and Conservation Biology. The database also contains bibliographic information for key non-scholarly titles such as E - The Environmental Magazine, Natural Life, and Mother Earth News. To access GreenFILE, go to http://www.greeninfoonline.com.
<urn:uuid:cef19b3f-0e30-43f7-9218-65bc4891124d>
2.515625
380
News (Org.)
Science & Tech.
10.900358
1,742
February 22, 2013 | 7 It’s often said that we know less about the bottom of our own ocean than we do about the surface of Mars. The governments of the world, and our government in particular, seem presently much less than enthusiastic about exploring the oceans of our own planet than in exploring other planets (ocean research seems to have taken a particular hit in the last decade of Congressional budget cuts, although admittedly, all agencies have seen cutbacks). So film director and explorer James Cameron decided to build his own extreme deep sea sub and explore the deepest ocean trenches in the world himself. In the last few years, he descended into several trenches — including the Mariana last March, which at 36,000 feet, is the deepest in the world — but remained pretty mum about what he found. Here’s a PBS News Hour report on that descent: There was also this raw and rather uninformative video of his dive released to the Associated Press: He dropped a few more public hints about what he saw at the American Geophysical Union meeting in December, but as far as I know that is all. The only other manned mission to the the Mariana, in which Jacques Piccard and Don Walsh dropped to the bottom in the Bathyscaphe Trieste, took place in 1960, and the sediment stirred up by their vessel meant they were able to observe little about the life found there. Apparently, at a rather obscure meeting in New Orleans this morning (the 2013 Aquatic Sciences Meeting of the Association for the Sciences of Limnology and Oceanography), Natalya Gallo, a grad student in charge of analyzing the 25 hours of footage Cameron collected while on the bottom of these trenches, presented some “preliminary findings”. To me, this is a bit like televizing the images of the moon landing at an obscure planetary science conference. I don’t know if any science journalists were there, but the description of what was found in the press release announcing the talk was exciting enough I wanted to share it with you here. Early results of Gallo’s analysis reveal a vibrant mix of organisms, different in each trench site. The Challenger Deep featured fields of giant single-cell amoebas called “xenophyophores,” sea cucumbers, and enormous shrimp-like crustaceans called amphipods. The New Britain Trench featured hundreds of stunning stalked anemones growing on pillow lavas at the bottom of the trench, as well as a shallower seafloor community dominated by spoon worms, burrowing animals that create a rosette around them by licking organic matter off the surrounding sediment with a tongue-like proboscis. In contrast, Ulithi’s seafloor ecosystem in the Pacific atolls featured high sponge and coral biodiversity. Wait … what? The spoon worms “create a rosette around them by licking organic matter off the surrounding sediment with a tongue-like proboscis”? Pictures or, even better, video please? Spoon worms, which don’t get nearly the press coverage they deserve, appear to be annelids (like earthworms) that have lost their segmentation but otherwise preserve an internal annelid-like body plan. The proboscis is just weird, though. Known species seem to use it chiefly for filter feeding, so this “licking” behavior, whatever it is, seems to be something out of the ordinary. Here are some spoon worms photographed in a South Korean market: Xenophyophores are even weirder. 
They appear to be somewhat slime mold-like organisms that consist of a giant bag or cytoplasmic network of cell nuclei that comb the seafloor ingesting food by engulfment (also called phagocytosis). Unlike terrestrial slime molds, their excretions and feces attract particles that are eventually cemented into odd-looking shells, or tests, that surround the organisms. They seem to have a thing for trenches. We know little about them because their tests are prone to crumbling when collected. Here’s the one fuzzy photo I could find: Here’s another interesting finding mentioned in the press release: Proximity to land also played a role in the makeup of the deep-sea environment. Deep in the New Britain Trench, located near Papua New Guinea, Gallo identified palm fronds, leaves, sticks, and coconuts-terrestrial materials known to influence seafloor ecosystems. The Challenger Deep and Ulithi, both more removed from terrestrial influence, were absent of such evidence. Gallo also spotted a dive weight in the Challenger Deep footage, likely used as ballast on another deep-submergence vehicle. No where on Earth, it seems, can escape our footprint. I hope someone was at this meeting to report on the talk. If not, I hope we hear more details about what Cameron found very soon!
<urn:uuid:c0c7d76f-eec5-4902-9424-f8648a65119f>
3.546875
1,025
Personal Blog
Science & Tech.
38.088091
1,743
CHIPS Questions and Answers Why is it important to study diffuse emission in the CHIPS wavelength band? The CHIPS band contains the significant majority of the radiated power from diffuse hot interstellar plasma in its most probable temperature range. Although X-ray and UV studies can detect plasma at greater distances and map it with better angular resolution, extrapolation of such measurements to a total plasma luminosity will be fraught with significant uncertainty until spectroscopic observations of diffuse emission are carried out in the CHIPS band. Can observations with CHIPS disentangle all of the cooling mechanisms that might be taking place in the local interstellar medium? The various cooling mechanisms make observationally distinct predictions for the emission in the CHIPS band. In combination with observations of emission and absorption features at other wavelengths, CHIPS data will be extremely helpful in disentangling these processes. CHIPS will do an excellent job of constraining the electron temperature through measurement of collisionally excited line emission. Determination of the electron temperature is a crucial step in understanding the cooling process. Do current observational limits in the CHIPS band set interesting constraints on the physical properties of plasma in the local bubble? No. Current observational limits were set using instruments with limited spectral resolution and many are well above the expected emission-line fluxes. The marginal detection in 18 million seconds of EUVE observations in regions where the soft X-rays are bright does indicate that diffuse emission is present in our band. What if the foreground absorption column is higher than you have estimated? With nominal assumption, the bright iron emission lines will be detected at 20 to 50 in each sky resel. A higher absorption would reduce the line flux, but, barring some wholly unexpected distribution of the local neutral material, absorption will reduce the EUV emission to undetectable levels in only a limited number of viewing directions. The CHIANTI CIE plasma model predicts somewhat brighter peak iron line fluxes than the Raymond & Smith code used in our analysis. Select one of the following for more information: MISSION - EDUCATION CHIPS Bibliography - CHIPS Q&A - For more information about CHIPS please send an e-mail to Dr. Mark Hurwitz. If you have questions about or problems with this web page, please send an e-mail to the webmaster. University of California, Space Sciences Laboratory 7 Gauss Way, Berkeley, CA 94720-7450, USA CHIPS Project Manager: (510) 486-6340
<urn:uuid:0254edb6-e188-47cc-bb24-882783208225>
2.859375
555
FAQ
Science & Tech.
27.652221
1,744
Family: Dipluridae, Funnelweb Mygalomorphs view all from this family Description 1/8" (3-5 mm). Tan to yellowish-brown to reddish-brown, hairy. Large body. Belongs to the group of primitive spiders commonly called tarantulas. Constructs tube-shaped webs. Habitat Mature coniferous (spruce-fir) forest. Range Southern Appalachian Mountains, western North Carolina and eastern Tennessee. Endangered Status The Spruce-fir Moss Spider is on the U.S. Endangered Species List. It is classified as endangered in North Carolina and Tennessee. This species has strict habitat requirements: it lives in damp mats of moss growing on rocks in the deep shade of mature Fraser Fir and Red Spruce trees. These old-growth forests have been under attack for centuries, felled by timber interests and suffering from the effects of human dominance of the landscape. Recently, an introduced insect, the Balsam Woolly Adelgid, has been killing off fir and spruce trees, and the spider has declined as well. Only two populations of Spruce-fir Moss Spider are known to survive, one on Grandfather Mountain, North Carolina, the other on Mount LeConte, Tennessee.
<urn:uuid:d7ea8be6-59d4-4342-bb2e-f04a598588b4>
2.734375
265
Knowledge Article
Science & Tech.
44.725
1,745
A New Kind of Neutrino Transformation ESnet Helps Scientists Discover How Neutrinos Flavor-shift March 8, 2012 Neutrinos, the wispy particles that flooded the universe in the earliest moments after the Big Bang, are continually produced in the hearts of stars and other nuclear reactions. Untouched by electromagnetism, they respond only to the weak nuclear force and even weaker gravity, passing mostly unhindered through everything from planets to people. Years ago scientists also discovered another hidden talent of neutrinos. Although they come in three basic “flavors”—electron, muon and tau—neutrinos and their corresponding antineutrinos can transform from one flavor to another while they are traveling close to the speed of light. How they do this has been a long-standing mystery. But some new, and unprecedentedly precise, measurements from the multinational Daya Bay Neutrino Experiment are revealing how electron antineutrinos “oscillate” into different flavors as they travel. This new finding from Daya Bay opens a gateway to a new understanding of fundamental physics and may eventually solve the riddle of why there is far more ordinary matter than antimatter in the universe today. The international collaboration of researchers is made possible by advanced networking and computing facilities. In the U.S., the Department of Energy’s high-speed science network, ESnet, speeds data to the National Energy Research Scientific Computing Center (NERSC) where it is analyzed, stored and made available to researchers via the Web. Both facilities are located at the DOE’s Lawrence Berkeley National Laboratory (Berkeley Lab). Nuclear reactors of the China Guangdong Nuclear Power Group at Daya Bay and nearby Ling Ao produce millions of quadrillions of elusive electron antineutrinos every second. The six massive detectors buried in the mountains adjacent to the powerful reactors, make up the Daya Bay Experiment. Researchers in the collaboration count the number of electron antineutrinos detected in the halls nearest the Daya Bay and Ling Ao reactors and calculate how many would reach the detectors in the Far Hall if there were no oscillation. The number that apparently vanishes on the way (oscillating into other flavors, in fact) gives the value of theta one-three, written θ13. Shortly after experimental data is collected, it travels across the Pacific Ocean via the National Science Foundation’s GLORIAD network, which connects to ESnet backbone in Seattle, Washington. From Seattle, ESnet carries the data to the NERSC in Oakland, California. At NERSC the data is processed in real-time on the PDSF cluster, archived in the High Performance Storage System (HPSS) and shared with collaborators around the world via the Daya Bay Offline Data Monitor, a web-based “Science Gateway” hosted by NERSC. The first Daya Bay results show that θ13, once feared to be near zero, instead is “comparatively huge,” remarks Luk Kam-Biu Luk of the Berkeley Lab and the University of California at Berkeley. Luk is co-spokesperson of the Daya Bay Experiment and heads U.S. participation. “What we didn’t expect was the sizable disappearance, equal to about six percent. Although disappearance has been observed in another reactor experiment over large distances, this is a new kind of disappearance for the reactor electron antineutrino,” he explained. 
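For reference, the quantity behind that percentage can be written down compactly. In the usual two-flavor approximation (an illustration added here, not part of the original release), the probability that a reactor electron antineutrino of energy E is still an electron antineutrino after travelling a distance L is

\[ P(\bar{\nu}_e \to \bar{\nu}_e) \;\approx\; 1 - \sin^2(2\theta_{13})\,\sin^2\!\left(\frac{1.267\,\Delta m^2_{31}\,[\mathrm{eV}^2]\; L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right), \]

so a deficit of roughly six percent at the far detectors translates directly into a nonzero value of sin²(2θ13).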
"This is a new type of neutrino oscillation, and it is surprisingly large," says Yifang Wang of China's Institute of High Energy Physics (IHEP), co-spokesperson and Chinese project manager of the Daya Bay experiment. "Our precise measurement will complete the understanding of the neutrino oscillation and pave the way for the future understanding of matter-antimatter asymmetry in the universe." Computing’s Crucial Role As a United States Tier-1 facility for the Daya Bay Neutrino Experiment, NERSC is the only site where all of the raw, simulated and derived data are analyzed and archived. From NERSC, ESnet carries Dayabay data to Brookhaven National Laboratory (BNL), which serves as a Tier-2 US facility for the experiment. BNL is responsible for secondary data processing of Daya Bay Data, as well as some archiving. “This experiment could not have been done without NERSC and ESnet,” says Craig Tull, of the Berkeley Lab’s Computational Research Division. “Over the last four to five years we’ve been doing simulations and analysis on PDSF, transferring and archiving data on HPSS, and accessing results on the Science Gateway. The experiment could not have been done with out these resources and the very capable and conscientious staff helping us.” As the US manager of software and computing for the Daya Bay Neutrino Experiment, Tull led the overall computing effort including development of a software framework, called NuWa, which allows researchers all over the world to collaborate in analyzing this experimental data, and of the Spade-driven data management and processing pipeline. He also coordinated with ESnet and NERSC staff to ensure that Daya Bay data could be processed, analyzed and archived in real time, providing scientists immediate insight into the quality of physics data recorded and the performance of detectors. “The software and computing components of the Daya Bay Neutrino Experiment have been extremely crucial to science,” says Tull. “Thanks to the computing expertise at Berkeley Lab, we were able to see antineutrinos with NuWa in the first filled detectors within 24 hours. We were able to see an anti-neutrino deficit in the far hall within days. And finally, we have been able to extract a high-quality θ13 value within only 75 days of start of far-hall running.” "This is really remarkable," says Wenlong Zhan, vice president of the Chinese Academy of Sciences and president of the Chinese Physical Society. "We hoped for a positive result when we decided to fund the project, but we never imagined it could come so quickly." "Exemplary teamwork among the partners has led to this outstanding performance," says James Siegrist, DOE Associate Director of Science for High Energy Physics. "These notable first results are just the beginning for the world's foremost reactor neutrino experiment." The Daya Bay collaboration consists of scientists from the following countries and regions: China, the United States, Russia, the Czech Republic, Hong Kong, and Taiwan. The Chinese effort is led by co-spokesperson, chief scientist, and project manager Yifang Wang of the Institute of High Energy Physics, and the U.S. effort is led by co-spokesperson Kam-Biu Luk and project and operations manager William Edwards, both of Berkeley Lab and UC Berkeley, and by chief scientist Steve Kettell of Brookhaven. 
ESnet provides the high-bandwidth, reliable connections that link scientists at national laboratories, universities and other research institutions, enabling them to collaborate on some of the world's most important scientific challenges including energy, climate science, and the origins of the universe. Funded by the U.S. Department of Energy's (DOE) Office of Science and located within the Scientific Networking Division at Lawrence Berkeley National Laboratory, ESnet provides scientists with access to unique DOE research facilities and computing resources.
<urn:uuid:538dfd2d-5377-4533-b50b-269bf5c070ee>
3.265625
1,577
News (Org.)
Science & Tech.
28.693091
1,746
1 Streams and lazy evaluation (40 points)

We know that comparison sorting requires at least O(n log n) comparisons when we are sorting n elements. Let’s say we only need the first f(n) elements from the sorted list, for some function f. If we know f(n) is asymptotically less than log n, then it would be wasteful to sort the entire list. We can implement a lazy sort that returns a stream representing the sorted list. Each time the stream is accessed to get the head of the sorted list, the smallest element is found in the list. This takes linear time. Removing the first f(n) elements from the list will then take O(n·f(n)). For this question we use the following datatype definitions. There are also some helper functions defined.

(* Suspended computation *)
datatype 'a stream' = Susp of unit -> 'a stream
(* Lazy stream construction *)
and 'a stream = Empty | Cons of 'a * 'a stream'

Note that these streams are not necessarily infinite, but they can be.

Q1.1 (20 points) Implement the function lazysort: int list -> int stream'. It takes a list of integers and returns an int stream' representing the sorted list. This should be done in constant time. Each time the stream' is forced, it gives either Empty or a Cons(v, s'). In the case of the cons, v is the smallest element of the sorted list and s' is a stream' representing the remaining sorted list. The force should take linear time. For example:

- val s = lazysort( [9, 8, 7, 6, 5, 4] );
val s = Susp fn : int stream'
- val Cons(n1, s1) = force(s);
val n1 = 4 : int
val s1 = Susp fn : int stream'
- val Cons(n2, s2) = force(s1);
val n2 = 5 : int
val s2 = Susp fn : int stream'
- val Cons(n3, s3) = force(s2);
val n3 = 6 : int
val s3 = Susp fn : int stream'

Here is what is given as code:

(* Suspended computation *)
datatype 'a stream' = Susp of unit -> 'a stream
(* Lazy stream construction *)
and 'a stream = Empty | Cons of 'a * 'a stream'

(* Lazy stream construction and exposure *)
fun delay (d) = Susp (d)
fun force (Susp (d)) = d ()

(* Eager stream construction *)
val empty = Susp (fn () => Empty)
fun cons (x, s) = Susp (fn () => Cons (x, s))

(* Inspect a stream up to n elements
   take : int -> 'a stream' -> 'a list
   take': int -> 'a stream  -> 'a list *)
fun take 0 s = []
  | take n (s) = take' n (force s)
and take' 0 s = []
  | take' n (Cons (x, xs)) = x::(take (n-1) xs)

My attempt at a solution

I tried the following, which takes the int list and transforms it into an int stream':

(* lazysort: int list -> int stream' *)
fun lazysort ([]:int list) = empty
  | lazysort (h::t) = cons (h, lazysort(t));

But when calling force it does not return the minimum element. I have to search for the minimum, but I do not know how... I thought of doing an insertion sort like the following:

fun insertsort [] = []
  | insertsort (x::xs) =
    let
      fun insert (x:real, []) = [x]
        | insert (x:real, y::ys) = if x<=y then x::y::ys else y::insert(x, ys)
    in
      insert(x, insertsort xs)
    end;

But that sorts the whole list up front; what I need is to find the minimum lazily, not sort the list and then wrap the result in a stream... Any help would be appreciated.
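Not SML, but the shape of the computation the exercise wants may be easier to see in a quick Python sketch (my own illustration, with invented names): the stream corresponds to a generator whose construction is O(1), and every demand for the next element does one linear scan for the minimum, which is exactly the cost profile the assignment asks for.

```python
def lazysort(xs):
    """Yield the elements of xs in ascending order, one linear scan per element."""
    rest = list(xs)            # copy so the caller's list is left untouched
    while rest:
        smallest = min(rest)   # the linear search that each force should perform
        rest.remove(smallest)  # drop one occurrence of the minimum
        yield smallest

s = lazysort([9, 8, 7, 6, 5, 4])
print(next(s), next(s), next(s))   # prints 4 5 6 without ever sorting the rest
```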
<urn:uuid:67b304b1-c964-4636-a74b-4a0ce25fa1ba>
3.53125
876
Q&A Forum
Software Dev.
78.829554
1,747
August 10, 2009 - First Black Holes Born Starving -- return to Press Releases -- Date Issued: August 10, 2009 Relevant Web URLs: Menlo Park, Calif.—The first black holes in the universe had dramatic effects on their surroundings despite the fact that they were small and grew very slowly, according to recent supercomputer simulations carried out by astrophysicists Marcelo Alvarez and Tom Abel of the Kavli Institute for Particle Astrophysics and Cosmology, jointly located at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University, and John Wise, formerly of KIPAC and now of NASA Goddard Space Flight Center. Several popular theories posit that the first black holes gorged themselves on gas clouds and dust in the early universe, growing into the supersized black holes that lurk in the centers of galaxies today. However, the new results, published today in The Astrophysical Journal Letters, point to a much more complex role for the first black holes. "I'm thrilled that we now can do calculations that start to capture the most relevant physics, and we can show which ideas work and which don't," said Abel. "In the next decade, using calculations like this one, we will settle some of the most important issues related to the role of black holes in the universe." To make their discovery, the researchers created the most detailed simulations to date of the first black holes in the universe that formed from the collapse of stars. The simulations started with data taken from observations of the cosmic background radiation—the earliest view of the structure of the universe. The researchers then applied the basic laws that govern the interaction of matter, allowing the early universe in their simulation to evolve as it did in reality. In the simulation, clouds of gas left over from the Big Bang slowly coalesced under the force of gravity, and eventually formed the first stars. These massive, hot stars burned bright for a short time, emitting so much energy in the form of starlight that they pushed nearby gas clouds far away. Yet these stars could not sustain such a fiery existence for long, and they soon exhausted their internal fuel. This caused one of the stars in the simulation to collapse under its own weight, forming a black hole located in a pocket of emptiness. With very little matter in the near vicinity, this black hole was essentially "starved" of food on which to grow. "Quasars [extremely strong sources of radiation] powered by black holes a billion times more massive than our sun have been observed in the early universe, and we have to explain how these behemoths could have grown so big so fast,” said Alvarez. "Their origin remains among the most fundamental unanswered questions in astrophysics." One explanation for the existence of supermassive black holes in the early universe postulates that the first black holes were "seeds" that grew into much larger black holes by gravitationally attracting and then swallowing matter. But in their simulation, Alvarez, Abel and Wise found that such growth was negligible, with the black hole in the simulation growing by less than one percent of its original mass over the course of a hundred million years. Although the simulations do not yet completely rule out the theory, this makes it less likely that these first black holes could have grown directly into the supermassive black holes observed to have existed less than a billion years later, Alvarez said. 
An Alternative Theory Although the early stars pushed away nearby clouds of gas, delaying significant growth of the black holes the stars later became, wisps of gas sometimes found their way to the black holes. As this matter was sucked into the black hole in the researchers’ simulation, it accelerated and released enough X-ray radiation to heat gas as much as a hundred light years away to several thousand degrees. The additional heat from the X-rays caused the gas to expand away from the black hole, helping to keep the snack from turning into a feast. Heating due to the X-rays was also enough to effectively prevent nearby gas from collapsing to form stars for tens and maybe even hundreds of millions of years. As a result, the researchers hypothesize, significantly larger than usual gas clouds may have had the opportunity to form without creating stars. Such enormous gas clouds may have eventually collapsed under their own weight, creating a supermassive black hole. "While X-rays from matter falling onto the first black holes hindered their further growth, that very same radiation may have later cleared the way for direct formation of supermassive black holes by suppressing star formation," said Alvarez. "However, a lot of work remains to be done to test whether this idea will actually pan out; this is really just the tip of the iceberg in terms of realistic simulations of black holes in the early universe." "This work will likely make people rethink how the radiation from these black holes affected the surrounding environment," added Wise. "Black holes are not just dead pieces of matter; they actually affect other parts of the galaxy." The Kavli Institute for Particle Astrophysics and Cosmology, initiated by a grant from Fred Kavli and the Kavli Foundation, is a joint institute of Stanford University and SLAC National Accelerator Laboratory. SLAC is a multi-program laboratory exploring frontier questions in astrophysics, photon science, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford for the U.S. Department of Energy Office of Science. by Kelen Tuttle
<urn:uuid:27ced233-6745-4967-8a8e-31e5f258e20a>
3.890625
1,124
News (Org.)
Science & Tech.
33.451729
1,748
Pub. date: 2008 | Online Pub. Date: April 25, 2008 | DOI: 10.4135/9781412963893 | Print ISBN: 9781412958783 | Online ISBN: 9781412963893 | Publisher: SAGE Publications, Inc.
Impacts of Global Warming
Impacts from the phenomenon known as global warming include environmental, social, and economic effects. Environmental impacts include sea-level rise, melting of the polar ice caps, and an average increase in temperature. These impacts are documented in the reports of the Intergovernmental Panel on Climate Change (IPCC), which commissions reports by scientists worldwide on the issue of climate change. The IPCC report of 2007 is the first to reflect scientific consensus that global warming is underway and that it is primarily human induced: human activities such as fossil fuel burning, land-use changes, agricultural activity, and the production and use of halocarbons are among the factors causing climate change. The economic report by Nicholas Stern in 2007 highlights that climate change has potentially disastrous consequences for humanity. Perhaps best known is that temperature variability, specifically temperature increase, will be one of the effects of climate change. While the range ...
<urn:uuid:e9b9663e-a503-4a1d-b284-16f1447bf5b3>
3.9375
252
Content Listing
Science & Tech.
30.482857
1,749
Data and Information - Data Initiatives - Satellite Sensing Systems Data Systems for LCLUC Research The unprecedented large volumes of data for land use research have necessitated the development of innovative data processing, delivery and analysis systems. The evolving EOS Data and Information System and a number of competed research opportunities such as REASON and ACCESS, have provided support for data systems research and development. The MODIS Advanced Data Processing System (MODAPS) at the Goddard Space Flight Center (GSFC) is generating land-cover related products from the daily MODIS instruments on board the Terra and Aqua platforms. Data products at 250m -1km are being reprocessed as the algorithms are improved to provide consistent data records. This system is currently being enhanced to provide MODIS land product distribution capabilities to augment the services provided by the NASA Distributed Active Archive Center at the Eros Data Center to meet the needs of the MODIS science community. The Landsat Ecosystem Disturbance Adaptive processing system is developing procedures for automated atmospheric correction and mosaicing of Landsat data and the generation of high resolution disturbance time series. The Global Land Cover Facility (GLCF) at the University of Maryland has developed a low cost system for processing and distribution of large volumes of land-cover data and enhanced data sets. Similarly, the Landsat.org project developed at Michigan State University (MSU) has developed a platform independent user interface and search engine for on-line purchasing, ordering and sharing of Landsat data worldwide. The Tropical Rain Forest Information Center at MSU provides Landsat derived data sets associated with monitoring tropical deforestation. In partnership with the private sector, NASA purchased a global data set of cloud-free Landsat imagery for 1990 and 2000. These data were orthorectified and are easily accessible and freely available. They have greatly increased the use of Landsat data for LCLUC studies worldwide. In May 2003 the Landsat 7 scan line corrector failed and although the instrument continues to receive data, the imagery are of limited use. With no Landsat instrument ready to replace Landsat 7, there is an increasing data gap, posing a critical impediment to LCLUC science. The LCLUC program, working with the USGS is developing a mid-decadal (2004-2006) high resolution global cloud-free data set to extend the previous global data sets. The data set will include data from Landsat 5, ASTER, EO1 and Landsat 7 temporal composites. This data set will include data provided by foreign ground stations and possibly foreign high resolution satellites. It is hoped that international cooperation concerning this data set could provide a prototype for future international efforts to coordinate high resolution global data acquisition from the increasing number of high resolution assets in the framework of GEOSS. Land use and land cover change studies at regional to global scales require large numbers of field sites for algorithm development and accuracy evaluation. Rapid development in integration of digital camera, hand-held GPS device, computer and internet make it possible for both scientific communities and citizens to collect and share geo-referenced field photos. 
The Global Geo-Referenced Field Photo Library, developed at the Earth Observation and Modeling Facility of University of Oklahoma, offers the capacity for users to upload, query (by themes and geographically), and download geo-referenced field photos in the library. It offers interactive capacity for users to interpret and classify field photos into relevant land cover types and builds photo-based land cover database. The users can use both photos and associated databases to carry out land use and land cover analysis in a geographical information system. The users who provide field photos can decide whether individual photos are to be shared or not. This tool and the resultant photo library will enable our NASA LCLUC communities to share their field photos, and promote the NASA LCLUC effort in remote sensing. Satellite Sensing Systems Satellite data provide an important source of information for characterizing and monitoring land-cover and land-use change. In some regions it is the only feasible way to provide timely and reliable land-cover assessments and identify areas of rapid change. Recent land-cover history also provides a point of departure for modeling land-cover change. NASA Current Missions for LCLUC Research NASA currently has sensing systems at high, medium and low resolution, which meet the LCLUC program observation needs. NASA satellite systems supplement operational satellites providing systematic measurements to study long-term trends. For example, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on the EOS Terra (AM) and Aqua (PM) platforms have significantly improved on the capabilities of the operational NOAA Advanced Very High Resolution Radiometer (AVHRR). These moderate resolution data are used to classify and characterize land-cover at the global scale and to detect land-cover change at the regional scale. They also provide daily monitoring of fire activity which is often an indicator of land-cover change. The operational Defense Meteorological Satellite Program provides a capability to map the extent of night time lights and has been used by LCLUC scientists to document the extent and growth of urban areas. Landsat 7 has provided the systematic high resolution observations necessary to map and quantify land-cover changes at the local to regional scale. The Landsat class observations are a critical underpinning for LCLUC research. The Landsat 7 global acquisition strategy providing multiple cloud-free scenes each year, has facilitated land-cover studies around the world. The LCLUC program has been pioneering methods for regional analysis of Landsat class observations setting the stage for periodic continental and global assessments of land-cover change. In this regard, the combination of systematic moderate and high resolution satellite remote sensing provides the opportunity for global scale studies and forms the basis for a global land observing system. Similarly, the NASA science programs are moving from Missions to Measurements with the aim of utilizing data from different instruments to address science questions. Experimental measurements of limited duration are needed to better understand processes and to test new sensor technologies. For example, the Earth Observer 1 (EO1) system has provided a test-bed for new sensor technology and spaceborne hyper-spectral remote sensing. 
Similarly the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) sensor has provided new high resolution thermal data collocated with MODIS data for improved surface characterization and validation of coarser MODIS thermal products. In the past experimental microwave satellite sensors operated by Europe and Japan have been used for mapping the extent of wetland areas. NASA LCLUC research in this part of the spectrum has been limited by the absence of a current US microwave sensing system. NASA has also been exploring partnerships with industry for the commercial provision of data to meet the needs of its science community. In particular, hyperspatial data with 1-3m resolution from sensors such as IKONOS and Quickbird have been used to provide detailed validation of high resolution products. NASA Future Missions for LCLUC Research NASA as part of the Integrated Program Office is contributing to the NPOESS Preparatory Project (NPP). The Visible Infrared Imaging Radiometer Suite (VIIRS) instrument is planned to transition MODIS class observations into the operational domain. The VIIRS instrument for launch in 2009 will continue the long term data records of vegetation indices, land-cover and fire. The NASA NPP Science Team is contributing to and evaluating the operational algorithms which will provide VIIRS Environmental Data Records (EDR's). The science team is determining whether these operational products will meet the needs of the science community and which additional or enhanced products will be needed. NASA has been charged with providing a continuity mission for the Landsat class of observations. As the major science driver for this mission, the LCLUC program is concerned that the mission be launched as soon as possible to minimize the Landsat 7 data gap. There are a number of possible solutions to securing rapid high resolution data continuity including a micro satellite imaging constellation. Clearly, as part of this mission, it will be important to establish the pathway for operational provision of Landsat class observations. Additional mission requirements for LCLUC research are being developed through a NRC Decadal Survey for NASA.
<urn:uuid:0832943e-4a93-4733-a5ed-ae1b0838fa4b>
2.953125
1,672
Knowledge Article
Science & Tech.
16.175007
1,750
[APIA, Samoa] Pacific Island researchers will be trained in skills such as coastal hazard mapping as part of a programme to use science to make coastal communities safer and more resilient. The US$1.3-million programme is part of an expansion of the US National Oceanic and Atmospheric Administration (NOAA) Coastal Storms Program (CSP) into US-affiliated Pacific Islands, beginning next month (October) and led by the University of Hawaii Sea Grant College Program. Dolan Eversole, regional coordinator of the CSP, told SciDev.Net that the programme is intended to support the development of tools, services and products to improve coastal hazard preparedness. An intensive two-week training course in the Marshall Islands has already taught local researchers to map hazards facing their communities, such as inundation from storm surges and tsunamis. One of the core missions is to provide a scientific basis for more effective decision-making on disaster management, Eversole said. "We hope to give decision-makers access to scientific information that will assist them in making decisions — for example, whether the decision-makers understand the current science behind sea level rise projections and the uncertainty associated with that. What is the science telling us, what does it mean, and what are the implications locally?" One example is research on coastal erosion indicating that rates and trends in erosion are directly related to sea level rise. This will help the authorities decide how to respond to the erosion, he said. "There is currently work going on in Guam on storm surge modelling and tsunami inundation modelling in American Samoa to understand what the vulnerabilities are to inundation events; from that they develop evacuation plans. That's another example of how research can help inform emergency managers and decision-makers." The CSP will be implemented in the next 2–3 years, with those involved setting up partnerships with other agencies to ensure the programme's long-term sustainability. Paul Anderson, marine conservation analyst for the Secretariat of the Pacific Regional Environment Programme, said the initiative should include Pacific islands that are not associated with the United States. "Perhaps as a part of existing NOAA initiatives in the region, NOAA could look to provide this type of training to those other countries, since they all have similar coastal vulnerabilities," said Anderson. "Underserved" village communities should also be included, rather than focusing on cities such as Fiji's capital Suva, Port Vila in Vanuatu and Pago Pago in American Samoa, he added.
<urn:uuid:ac87d2c8-0111-42c9-b4a1-7033e8a67f62>
2.703125
512
News Article
Science & Tech.
23.919186
1,751
Books, Software, Resources, DVDs to Learn Math Steps to Doing Well in Math Here are some quick steps to help you get better at doing mathematics. Regardless of age, the tips here will help you learn and understand math concepts from primary school right on through to university math. Everyone can do math, be positive and follow the steps here and you'll be on your way to seeing success in math. Self Help Calculus Guide: Calculus for Cats Calculus for Cats resource review. Algebra: Self-teaching Pre-Algebra Books Here, you'll find my recommendations for self-teaching books to assist you to learn pre-algebra at your pace. Teach Yourself Algebra with My Book Recommendations Can you learn algebra on your own? YES! Here are my highly recommended top 5 books to learn algebra on your own. Calculus Picks for the New to Calculus Student Here's a list of my favorite self-teaching books and supplemental texts to learn Calculus. All The Math You'll Ever Need! What's really needed in Math? Courses of study in math.
<urn:uuid:84cb15a4-32c1-4e38-885b-48395a6f11f6>
2.671875
235
Content Listing
Science & Tech.
65.793925
1,752
OK, but then if I try, say, x=1 and y=2, I find f(x+y) is not equal to f(x)+f(y), which makes 10 not equal to 14. So this is not linear either? Can someone give me an example of a linear function that works? ... Wait, so these properties cannot be satisfied if there is a constant term? But I thought a linear function could be plotted as y = mx + b. I'm confused.
But Linear Algebra uses a more stringent definition of "linear transformation": we must have f(x+y) = f(x) + f(y) and f(ax) = af(x). As I said above, the only "linear" functions in that sense from R to R are of the form f(x) = ax for some number a.
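The numbers 10 and 14 quoted above are consistent with a function like f(x) = 2x + 4 (so f(1+2) = 10 while f(1) + f(2) = 14), but the thread never states the function explicitly, so treat it as an assumed example. A minimal check of the two linearity properties, sketched in Java:

public class LinearCheck {
    // Assumed example of an affine ("y = mx + b") function; the constant term
    // is what breaks linearity in the linear-algebra sense.
    static double f(double x) { return 2 * x + 4; }

    // A genuinely linear function f(x) = ax, here with a = 3.
    static double g(double x) { return 3 * x; }

    public static void main(String[] args) {
        // Additivity: does f(x + y) equal f(x) + f(y) for x = 1, y = 2?
        System.out.println(f(1 + 2) + " vs " + (f(1) + f(2))); // 10.0 vs 14.0 -> fails
        System.out.println(g(1 + 2) + " vs " + (g(1) + g(2))); // 9.0 vs 9.0   -> passes

        // Homogeneity: does f(a * x) equal a * f(x) for a = 5, x = 2?
        System.out.println(f(5 * 2) + " vs " + (5 * f(2)));    // 24.0 vs 40.0 -> fails
        System.out.println(g(5 * 2) + " vs " + (5 * g(2)));    // 30.0 vs 30.0 -> passes
    }
}

Only the functions of the form f(x) = ax pass both checks, which is the answer's point: y = mx + b with b nonzero is "linear" in the graphing sense but not a linear transformation.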
<urn:uuid:54ca889c-bbbd-4038-93d3-5cd0cb2dc9f0>
2.984375
167
Q&A Forum
Science & Tech.
96.373957
1,753
VN_LOCK(9) BSD Kernel Manual VN_LOCK(9) vn_lock - acquire the vnode lock #include <sys/types.h> #include <sys/vnode.h> int vn_lock(struct vnode *vp, int flags, struct proc *p); The vn_lock() function is used to acquire the vnode lock. Certain file system operations require that the vnode lock be held when they are called. See sys/kern/vnode_if.src for more details. The vn_lock() function must not be called when the vnode's reference count is zero. Instead, the vget() function should be used. The flags argument may contain the following flags: LK_RETRY Return the vnode even if it has been reclaimed. LK_INTERLOCK Must be set if the caller owns the vnode interlock. LK_NOWAIT Don't wait if the vnode lock is held by someone else (may still wait on reclamation lock on or in- terlock). Must not be used with LK_RETRY. LK_EXCLUSIVE Acquire an exclusive lock. LK_SHARED Acquire a shared lock. The vn_lock() function can sleep. The vn_lock() releases the vnode inter- lock before exit. Upon successful completion, a value of 0 is returned. Otherwise, one of the following errors is returned. [ENOENT] The vnode has been reclaimed and is dead. This error is only returned if the LK_RETRY flag is not passed. [EBUSY] The LK_NOWAIT flag was set and vn_lock() would have slept. The locking discipline is bizarre. Many vnode operations are passed locked vnodes on entry but release the lock before they exit. Discussions with Kirk McKusick indicate that locking discipline evolved out of the pre-VFS way of doing inode locking. In addition, the current locking dis- cipline may actually save lines of code, esp. if the number of file sys- tems is fewer than the number of call sites. However, the VFS interface would require less wizardry if the locking discipline were simpler. The locking discipline is used in some places to attempt to make a series of operations atomic (e.g., permissions check + operation). This does not work for non-local file systems that do not support locking (e.g., NFS). Are vnode locks even necessary? The security checks can be moved into the individual file systems. Each file system can have the responsibility of ensuring that vnode operations are suitably atomic. The LK_NOWAIT flag does prevent the caller from sleeping. The locking discipline as it relates to shared locks has yet to be de- fined. MirOS BSD #10-current March 9, 2001 1 Generated on 2013-04-27 00:20:00 by $MirOS: src/scripts/roff2htm,v 1.77 2013/01/01 20:49:09 tg Exp $ These manual pages and other documentation are copyrighted by their respective writers; their source is available at our CVSweb, AnonCVS, and other mirrors. The rest is Copyright © 2002‒2013 The MirOS Project, Germany. This product includes material provided by Thorsten Glaser. This manual page’s HTML representation is supposed to be valid XHTML/1.1; if not, please send a bug report – diffs preferred.
<urn:uuid:772be54a-171a-44da-b292-d90bc16813fe>
2.78125
753
Documentation
Software Dev.
63.89699
1,754
Researchers at the research center QUANTOP at the Niels Bohr Institute at the University of Copenhagen (Denmark) have constructed an atomic magnetometer, which has achieved the highest sensitivity allowed by quantum mechanics. Sensitive magnetometers could be used to measure electrical activity in the human brain and heart. The results have been published in Physical Review Letters. The ultimate sensitivity of any measurement is determined by the laws of quantum mechanics. These laws, normally most noticeable at the atomic level, become relevant for larger objects as the sensitivity of measurements increase with the development of new technologies. Atoms as magnetic sensors Atoms have a fundamental property called spin, which makes the atoms act like small magnets that are sensitive to external magnetic fields and can be used as magnetic sensors. But each of the atomic spins has a quantum uncertainty, which sets the fundamental limit on the smallest external magnetic fields that the atom can sense. Conventional atomic magnetometers are usually built with a very large number of atoms, because the overall sensitivity of billions of atoms is much greater than that of a single atom. But on the other hand, it is much more difficult to reach the limit of sensitivity given by quantum mechanics. However, researchers at the QUANTOP Center have constructed an atomic magnetometer with the ultimate sensitivity allowed by quantum mechanics. “Moving towards the goal we had to ensure that our method made it possible to suppress not only sources of technical errors, such as fluctuations in the magnetic field due to public transportation, radio waves and so on, but also to eliminate a number of errors of pure quantum mechanical origin”, explains professor Eugene Polzik, Director of the QUANTOP Center at the Niels Bohr Institute. From brains to explosives As a result, the magnetometer can measure in a second a field, which is a hundred billion times weaker than the Earth’s magnetic field. The magnetometer has a wide range of possible uses, because where there is an electric current, there is also a magnetic field. Measurements of magnetic fields can reveal information about the electrical activity in the human brain and heart, the chemical identity of certain atoms, for example, explosives, or simply indicate the presence or absence of metal. The new quantum magnetometer functions at room temperature, which makes it a good alternative to the expensive commercial superconducting magnetometers (the so-called ‘Squids’). “Our quantum magnetometer functions at room temperature which makes it a good alternative to the expensive commercial superconducting magnometers (the so-called ‘Squids’). It has the same sensitivity with a cheaper and simpler instrument”, explains Eugene Polzik. Explore further: The better to see you with: Scientists build record-setting metamaterial flat lens More information: Paper: prl.aps.org/abstract/PRL/v104/i13/e133601
<urn:uuid:072c2532-5096-4207-949b-33b92ba97dc8>
3.265625
596
News Article
Science & Tech.
18.44798
1,755
This project’s official title is "Ocean-Ice Interaction in the Amundsen Sea: the Keystone to Ice-Sheet Stability". A real mouthful, but it captures the essence of what we intend to do, where we will do it and why we feel it is important to do it. Various other measurements have captured the West Antarctic ice sheet changing very rapidly in the region where it flows into the Amundsen Sea, one of the sectors of the Southern Ocean. The spatial pattern strongly suggests that the cause of this change is weaker ice shelves, the floating apron of ice that fringe the perimeter of the ice sheet. Our hypothesis is that warm water is melting the undersides of these ice shelves decreasing the "back pressure" from the ice shelves to help hold the ice sheet. Less backpressure means the ice sheet can flow faster. Faster flow-smaller ice sheet-higher sea levels-slow motion coastal flooding worldwide. Satellite observations have been tremendously valuable in identifying these changes, but can’t tell us what’s going on beneath the ice. Direct observations are required. That’s where our field work comes in. We need to drill through the ice to deploy instruments that will measure what’s going on. This may sound easy, but there are multiple hurdles that must be overcome. Getting there is the first one. These ice shelves are heavily crevassed—landing even a small plane may not be possible. Finding an area large enough to work could be difficult. We know how to use hot water drills to make a hole in the ice, but the hole is narrow forcing us to design new skinny instruments that are smart and sturdy enough to measure the water’s temperature, movement and saltiness. Our instruments will be smart enough to "phone home" and understand new commands we might send. Finally, to be able to use what we learn in one small area, we need to know the shape of not just the floating ice sheet, but also the water cavity beneath the ice and create new computer models so we can simulate what is going on and compare it with our measurements. And remember, this is Antarctica. We can only be there during the middle of the summer, it is cold, it is probably very windy much of the time and even the simplest tasks can be very difficult. Such a monumental scientific undertaking requires a host of talents. Our team includes a variety of highly skilled polar scientists and engineers. This web site introduces them, along with our research plan. Updates will allow visitors to this site to follow our progress. Hopefully, this "over our shoulder" view of an important science project will be informative, educational, exciting and inspiring.
<urn:uuid:900665d2-8d56-4e20-9d75-1e65e9cf1ef9>
3.546875
549
About (Org.)
Science & Tech.
49.198226
1,756
August 20, 2010 Today's exercise is about a relatively new sorting algorithm. We start with the article Optimizing Your Wife by Kevin Brown, which proposes that the best way for a man to find a wife is to decide how many women he is willing to date before he chooses a wife (call that N), determine which of the first √N women is "best" according to whatever matters to him, and then choose the next woman after the first √N who is better than any of the first √N women. For instance, to find a marriageable woman in a batch of a hundred, date ten of them, then marry the next one who is better than any of those ten. You may not find the optimal woman, but you'll be close. Eric Burnett turned Brown's idea into a sorting algorithm. First, sample the first √N values at the beginning of the array; then swap any of the remaining values that are better than the greatest value of the sample to the end of the array; then swap the greatest value of the sample to just before those values at the end; then recur on the smaller array before those greatest values. Finish the sort by performing insertion sort on the entire array; that will be quick, since most values are near their final positions. Burnett's algorithm requires three pointers: the current location of the end of the sample, the current location of the end of the array still under consideration, and a pointer that sweeps through the array. The time complexity is O(n^1.5), which is similar to other sorting methods like shell sort and comb sort that have a first stage that nearly sorts the input followed by insertion sort to clean up the rest. Your task is to write a function that sorts an array using marriage sort. When you are finished, you are welcome to read or run a suggested solution, or to post your own solution or discuss the exercise in the comments below.
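The post describes the algorithm only in prose, so here is one possible sketch in Java that follows Burnett's description: take the maximum of the first √N values, sweep every later value larger than it to the end, place that maximum just before the swept block, repeat on the shrinking prefix, and finish with insertion sort. The class and helper names, the iterative (rather than recursive) loop, and the example array are my own choices, not taken from the article.

import java.util.Arrays;

public class MarriageSort {
    public static void sort(int[] a) {
        int end = a.length;                        // end of the region still under consideration
        while (end > 1) {
            int skip = (int) Math.sqrt(end);       // sample size, roughly sqrt of what remains
            // Find the index of the largest value in the sample a[0..skip-1].
            int maxIdx = 0;
            for (int i = 1; i < skip; i++)
                if (a[i] > a[maxIdx]) maxIdx = i;
            // Sweep: move every later value larger than the sample maximum to the end.
            int i = skip;
            while (i < end) {
                if (a[i] > a[maxIdx]) {
                    end--;
                    int t = a[i]; a[i] = a[end]; a[end] = t;   // don't advance i: a[i] is unexamined
                } else {
                    i++;
                }
            }
            // Place the sample maximum just before the block of larger values.
            end--;
            int t = a[maxIdx]; a[maxIdx] = a[end]; a[end] = t;
        }
        insertionSort(a);                          // clean up the nearly sorted array
    }

    private static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int v = a[i], j = i - 1;
            while (j >= 0 && a[j] > v) { a[j + 1] = a[j]; j--; }
            a[j + 1] = v;
        }
    }

    public static void main(String[] args) {
        int[] xs = {5, 2, 9, 1, 7, 3, 8, 6, 4, 0};
        sort(xs);
        System.out.println(Arrays.toString(xs));   // [0, 1, 2, ..., 9]
    }
}

After the main loop the array is only nearly sorted (each swept block exceeds the pivot placed before it but is unsorted internally), which is exactly why the final insertion sort pass is cheap.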
<urn:uuid:685b7a69-20a9-497c-bd58-d99147971095>
2.8125
413
Personal Blog
Software Dev.
52.838098
1,757
Joint Dark Energy Mission Program(s):Physics of the Cosmos Take all the stars, all the planets and everything we can see and detect with telescopes and add them all up. That total will represent only four percent of the universe. If the universe came with a list of ingredients, the ordinary atoms that make up stars, trees and animals would be at the bottom of the label, like some exotic spice. Nearly 25 percent of the universe takes the form of dark matter, a mysterious substance that seems to be intrinsically different from ordinary atoms. And the rest, a whopping 70 percent of the universe, is a mysterious quantity we call dark energy. As a comparison, remember that about 70 percent of the Earth's surface is covered with water. Imagine not knowing what water is! That's the situation we're in with dark energy. The observations that first led to this confusing situation were the observations of the velocities of distant supernovae. The key is that, as we look at supernovae far away, we are seeing them as they were long ago. The fact that they were moving away from us more slowly than expected showed that the expansion of the universe, instead of slowing due to gravity, was actually accelerating. No known component of the Universe could have caused this acceleration. Suggestions for unknown components have included a new kind of fluid, called "quintessence," an unexpected property of the vacuum of empty space, or a fundamental modification of Einstein's theory of gravity. Whatever the final explanation may be, it will revolutionize our understanding of the physics of the Universe. The National Academy of Sciences has stated that the nature of dark energy is probably the most important question in astronomy today. It has been called the deepest mystery in physics, and its resolution is likely to greatly advance our understanding of matter, space, and time. In response to recommendations by the National Academy of Sciences, NASA and the DOE spent several years working together to evaluate a possible future mission which would study the expansion history of the universe. However, a result of the 2010 Decadal report was that JDEM was not selected as one of the ranked recommendations. The new WFIRST Observatory was instead recommended to settle the essential questions in both dark energy and exoplanet research. Last updated: June 6, 2012
<urn:uuid:10f4ebc8-b117-4520-97c1-1fb72c8a00ff>
3.984375
469
Knowledge Article
Science & Tech.
36.779451
1,758
In your interface, you can formally declare an instance variable between the braces, or via @property outside the braces, or both. Either way, they become attributes of the class. The difference is that if you declare @property, then you can implement using @synthesize, which auto-codes your getter/setter for you. The auto-coder setter initializes integers and floats to zero, for example. IF you declare an instance variable, and DO NOT specify a corresponding @property, then you cannot use @synthesize and MUST write your own getter/setter. You can always override the auto-coded getter/setter by specifying your own. This is commonly done with the managedObjectContext property which is lazily loaded. Thus, you declare your managedObjectContext as a property, but then also write a -(NSManagedObjectContext *)managedObjectContext method. Recall that a method, which has the same name as an instance variable/property is the "getter" method. The @property declaration method also allows you other options, such as retain and readonly, which the instance variable declaration method does not. Basically, ivar is the old way, and @property extends it and makes it fancier/easier. You can refer to either using the self. prefix, or not, it doesn't matter as long as the name is unique to that class. Otherwise, if your superclass has the same name of a property as you, then you have to say either like self.name or super.name in order to specify which name you are talking about. Thus, you will see fewer and fewer people declare ivars between the braces, and instead shift toward just specifying @property, and then doing @synthesize. You cannot do @synthesize in your implementation without a corresponding @property. The Synthesizer only knows what type of attribute it is from the @property specification. The synthesize statement also allows you to rename properties, so that you can refer to a property by one name (shorthand) inside your code, but outside in the .h file use the full name. However, with the really cool autocomplete that XCode now has, this is less of an advantage, but is still there. Hope this helps clear up all the confusion and misinformation that is floating around out there.
<urn:uuid:24048d95-d148-4d17-ba10-aeaa0134da31>
2.734375
492
Q&A Forum
Software Dev.
39.561107
1,759
I am going through Effective Java and some of the things I consider standard are not what the book suggests. For instance, object creation: I was under the impression that constructors are the best way of doing it, but the book says we should make use of static factory methods. I am not able to follow a few of the advantages and disadvantages, and so am asking this question. Here are the benefits of using them.
- One advantage of static factory methods is that, unlike constructors, they have names.
- A second advantage of static factory methods is that, unlike constructors, they are not required to create a new object each time they're invoked.
- A third advantage of static factory methods is that, unlike constructors, they can return an object of any subtype of their return type.
- A fourth advantage of static factory methods is that they reduce the verbosity of creating parameterized type instances.
- The main disadvantage of providing only static factory methods is that classes without public or protected constructors cannot be subclassed.
- A second disadvantage of static factory methods is that they are not readily distinguishable from other static methods.
Reference: Effective Java, Joshua Bloch, Edition 2, pg: 5-10
I am not able to understand the fourth advantage and the second disadvantage, and would appreciate it if someone could explain those points. I would also like to understand how to decide whether to go for a constructor or a static factory method for object creation.
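The fourth advantage is about type inference for generics in pre-Java-7 code (the second edition predates the diamond operator): a generic static factory lets the compiler infer the type parameters once, so call sites do not repeat them. The sketch below is modeled on the kind of example Bloch gives; the Maps utility class and newHashMap name are my own illustrative choices (Guava ships a similar helper), not a standard-library API.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public final class Maps {
    private Maps() {}  // not instantiable; this class only hosts factories

    // Advantage 4: <K, V> is inferred from the call site, so pre-Java-7 code
    // does not have to spell out the type parameters twice.
    public static <K, V> HashMap<K, V> newHashMap() {
        return new HashMap<K, V>();
    }
}

class Demo {
    public static void main(String[] args) {
        // With the constructor (verbose before the Java 7 diamond operator):
        Map<String, List<String>> m1 = new HashMap<String, List<String>>();

        // With the static factory: the type parameters are inferred.
        Map<String, List<String>> m2 = Maps.newHashMap();

        System.out.println(m1.equals(m2)); // true: both are empty maps
    }
}

The second disadvantage simply means that a factory is just another static method: it does not stand out in the API documentation the way constructors do, which is why conventional names such as valueOf, of, getInstance, and newInstance are recommended. As a rough rule of thumb, prefer a static factory when you want a descriptive name, instance control (caching, singletons), or the ability to return a subtype; you need an accessible constructor when the class must be subclassable.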
<urn:uuid:ec7eb7c4-ee94-46d6-b3a6-a8c0f1b8f37d>
2.953125
301
Q&A Forum
Software Dev.
27.404821
1,760
A "domain specific language" is one in which a class of problems (or solutions to problems) can be expressed succinctly, usually because the vocabulary aligns with that of the problem domain, and the notation is similar (where possible) to that used by experts who work in the domain. What this really means is a grammar representing what you can say, and a set of semantics that defines what those said things mean. This makes DSLs just like other conventional programming languages (e.g., Java) in terms of how they are implemented. In fact, you can think of such conventional languages as "DSLs" that are good at describing procedural solutions to problems (but not necessarily good at describing them). The implication is that you need the same set of machinery to process DSLs as you do to process conventional languages, and that's essentially compiler machinery. Groovy has some of this machinery (by design), which is why it can "support" DSLs. See Domain Specific Languages for a discussion about DSLs in general, and a particular kind of metaprogramming machinery that is very helpful for implementing them.
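To illustrate the "vocabulary aligned with the problem domain" point, here is a tiny internal DSL sketched in Java using method chaining (a fluent builder). Groovy's metaprogramming allows far more natural syntax than this, so treat it only as a rough analogy; every name in the sketch is invented for the example.

// A tiny internal DSL for describing a meeting; all names are invented.
class Meeting {
    private String topic;
    private int minutes;
    private final java.util.List<String> attendees = new java.util.ArrayList<>();

    static Meeting about(String topic) {          // entry point reads like domain prose
        Meeting m = new Meeting();
        m.topic = topic;
        return m;
    }
    Meeting lasting(int minutes) { this.minutes = minutes; return this; }
    Meeting with(String person)  { attendees.add(person);  return this; }

    @Override public String toString() {
        return topic + " (" + minutes + " min) with " + attendees;
    }

    public static void main(String[] args) {
        // The call site uses domain vocabulary instead of setter boilerplate.
        Meeting m = Meeting.about("quarterly planning").lasting(45).with("Ana").with("Raj");
        System.out.println(m);
    }
}

The grammar here is just whatever method chains the compiler accepts, and the semantics are the method bodies, which is the answer's point: a DSL still needs the same "what can be said" and "what it means" machinery as any other language.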
<urn:uuid:2d9b3ae0-6feb-41db-8372-2afec1a10b90>
2.796875
235
Q&A Forum
Software Dev.
44.52
1,761
Today, a global warming progress report. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them. When I face questions about global warming, it's usually a struggle to point out that the problem must be kept in the scientific domain and out of politics. Any of us can fall into the trap of naming whatever political figure we like least, and taking the position opposite to his or hers. With that in mind, let's look a a helpful summary article in this week's Science magazine. It's by a group of climate experts from NASA, the Scripps Institute, and institutes in Germany, Australia, and France. What they've done is straightforward. First, they graph the increase of CO2 concentration, temperature, and sea level, since 1975. Each increases a bit more strongly than a simple linear rise. Maybe they're rising exponentially, maybe not. The changes might not seem extreme. In thirty years, CO2 concentrations are up fifteen percent, Earth's temperature has risen just under a degree Fahrenheit, and sea level has risen three inches. The authors also display the most important predictions made back in 1990. It turns out that CO2 concentration has risen pretty much exactly as it was predicted. Global temperature has risen in accordance with the worst case predictions. And sea level is up 25 percent beyond the worst case predicted. While some other doomsday predictions were far too high, the climate ones were not. So climatologists in 1990 were not Chicken Little, telling us the sky was falling. None of them overestimated what was happening. In fact, it'd be easy to look at this and let ourselves become Chicken Little. One could curve-fit an exponential extrapolation to the data. But extrapolation is no more trustworthy than blindly opposing the We need good analytical predictions. They, in turn, must be built upon a thorough knowledge of weather, chemistry, fluid mechanics, and global economics. The 1990 predictions were pretty good, although somewhat conservative. Predictions are better now. To gain just an inkling of the complexity, let's look again at rising sea levels. The overall rise reflects the ice-cap melting that we're all seeing (although part of the rise comes from thermal expansion of warming oceans). But that net value is an average of larger local sea level variations. The tectonic plates, upon which we live, rise and fall relative to one another. Since Louisiana and Texas are dropping, we see the sea level rising sharply. But Alaska is rising, so Alaskans see their sea level dropping. New Orleans might go under while Anchorage remains dry, or rising sea levels might catch up with tectonic subsidence and flood both. In any case, we are faced with climate change and it's hard to doubt that we play a significant role in that change. Nor can one reasonably doubt the importance of reducing consumption, waste, and emissions, while we look for better information -- while we focus, not on the people we like or dislike, but on the data. I'm John Lienhard, at the University of Houston, where we're interested in the way inventive minds S. Rhamstorf, A. Cazenave, J. A. Church, J. E. Hansen, R. F. Keeling, D. E. Parker, R. C. J. Somerville, Recent Climate Observations Compared to Projections. Science, Vol. 316, 4 May, 2007, pg. 709. To see the data, For more on sea level variation, see: And I find this a useful statement from J. E. Hansen at NASA: Is this relevant? Only hard data can tell us. 
(photo by JHL) The Engines of Our Ingenuity is Copyright © 1988-2006 by John H. Lienhard.
<urn:uuid:20e81113-3097-40a1-b921-c35da330d771>
3.125
840
Nonfiction Writing
Science & Tech.
57.711664
1,762
How horrid? July on pace to be hottest month on record The USA is closing in on its hottest month in recorded history. With five days to go in July, preliminary data show that the heat could top records set decades ago: "The warmest July for the contiguous U.S. was in 1936, when the nationally averaged temperature was 77.43 degrees, 3.14 degrees above the 20th-century average," said climate scientist Jake Crouch of the National Climatic Data Center. Preliminary data from the center show the national temperature for the first three weeks of July was 3.63 degrees above the 1971-2000 average. If the heat continues, and after the data are more closely analyzed and more final numbers come in, that would top July 1936. Five cities — St. Louis, Indianapolis Chicago, Detroit and Denver — are all on pace to shatter their all-time monthly heat records. "It's hotter here than it is in Arizona," Mary Dominis complained earlier this month while visiting Chicago from Tempe. St. Louis is seeing some unbelievable heat this summer: On Wednesday, the city hit 108 degrees, Weather Underground meteorologist Jeff Masters said. "This marked the 11th day this summer in St. Louis with temperatures of at least 105 degrees," he said, "beating the old record of 10 such days in 1934." There was some relief in St. Louis on Thursday: the temperature didn't break 100. Twenty-four people have died from the heat in St. Louis so far this summer. Through Monday, there have been 3,740 record daily high temperatures set across the nation this month, compared with only 211 record lows, Weather Channel meteorologist Guy Walton said. It's been unusually hot even in torrid Death Valley, Calif. On July 12 the low temperature at Death Valley dropped to just 107 degrees after hitting a high of 128 degrees the previous day, Masters said. Not only did the morning low temperature tie a record for the world's warmest low temperature ever recorded, the average temperature of 117.5 degrees was the world's warmest 24-hour temperature on record. The Climate Prediction Center is forecasting more "excessive heat" in Nebraska, Kansas, Oklahoma, Missouri and Arkansas over the weekend and into early next week. Contributing: The Associated Press
<urn:uuid:2fab5114-c8fa-4f7c-a9a5-96bdb94a26ab>
3.09375
478
News Article
Science & Tech.
57.063403
1,763
Welcome to the discussion forum. In this forum, you may ask questions, start new discussions, and view existing posts.
What are the types of convergent boundaries?
How does a convergent boundary form a volcanic island?
What are the three types of convergent boundaries?
Were the Rocky Mountains created by convergent plate tectonics?
Convergent Plate Boundary vs. Collisional Plate Boundary
What is a convergent boundary?
What are the dominant types of igneous rocks and processes that would be produced at a convergent subduction zone?
<urn:uuid:f65b929d-914c-418d-bb71-33bd71432b88>
2.5625
146
Comment Section
Science & Tech.
43.519048
1,764
In quantum mechanics Quantum mechanics, also known as quantum physics or quantum theory, is a branch of physics providing a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter. It departs from classical mechanics primarily at the atomic and subatomic... , the particle in a box model (also known as the infinite potential well or the infinite square well ) describes a particle free to move in a small space surrounded by impenetrable barriers. The model is mainly used as a hypothetical example to illustrate the differences between classical What "classical physics" refers to depends on the context. When discussing special relativity, it refers to the Newtonian physics which preceded relativity, i.e. the branches of physics based on principles developed before the rise of relativity and quantum mechanics... and quantum systems. In classical systems, for example a ball trapped inside a heavy box, the particle can move at any speed within the box and it is no more likely to be found at one position than another. However, when the well becomes very narrow (on the scale of a few nanometers), quantum effects become important. The particle may only occupy certain positive energy levels. Likewise, it can never have zero energy, meaning that the particle can never "sit still". Additionally, it is more likely to be found at certain positions than at others, depending on its energy level. The particle may never be detected at certain positions, known as spatial nodes. The particle in a box model provides one of the very few problems in quantum mechanics which can be solved analytically, without approximations. This means that the observable properties of the particle (such as its energy and position) are related to the mass of the particle and the width of the well by simple mathematical expressions. Due to its simplicity, the model allows insight into quantum effects without the need for complicated mathematics. It is one of the first quantum mechanics problems taught in undergraduate physics courses, and it is commonly used as an approximation for more complicated quantum systems. See also: the history of quantum mechanics The history of quantum mechanics, as it interlaces with the history of quantum chemistry, began essentially with a number of different scientific discoveries: the 1838 discovery of cathode rays by Michael Faraday; the 1859-1860 winter statement of the black body radiation problem by Gustav... The simplest form of the particle in a box model considers a one-dimensional system. Here, the particle may only move backwards and forwards along a straight line with impenetrable barriers at either end. The walls of a one-dimensional box may be visualised as regions of space with an infinitely large potential energy In physics, potential energy is the energy stored in a body or in a system due to its position in a force field or due to its configuration. The SI unit of measure for energy and work is the Joule... . Conversely, the interior of the box has a constant, zero potential energy. This means that no forces act upon the particle inside the box and it can move freely in that region. However, infinitely large force In physics, a force is any influence that causes an object to undergo a change in speed, a change in direction, or a change in shape. In other words, a force is that which can cause an object with mass to change its velocity , i.e., to accelerate, or which can cause a flexible object to deform... 
s repel the particle if it touches the walls of the box, preventing it from escaping. The potential energy in this model is given as is the length of the box and is the position of the particle within the box. In quantum mechanics, the wavefunction Not to be confused with the related concept of the Wave equationA wave function or wavefunction is a probability amplitude in quantum mechanics describing the quantum state of a particle and how it behaves. Typically, its values are complex numbers and, for a single particle, it is a function of... gives the most fundamental description of the behavior of a particle; the measurable properties of the particle (such as its position, momentum and energy) may all be derived from the wavefunction. can be found by solving the Schrödinger equation The Schrödinger equation was formulated in 1926 by Austrian physicist Erwin Schrödinger. Used in physics , it is an equation that describes how the quantum state of a physical system changes in time.... for the system is the reduced Planck constant, is the mass Mass can be defined as a quantitive measure of the resistance an object has to change in its velocity.In physics, mass commonly refers to any of the following three properties of matter, which have been shown experimentally to be equivalent:... of the particle, is the imaginary unit In mathematics, the imaginary unit allows the real number system ℝ to be extended to the complex number system ℂ, which in turn provides at least one root for every polynomial . The imaginary unit is denoted by , , or the Greek... Inside the box, no forces act upon the particle, which means that the part of the wavefunction inside the box oscillates through space and time with the same form as a free particle In physics, a free particle is a particle that, in some sense, is not bound. In classical physics, this means the particle is present in a "field-free" space.-Classical Free Particle:The classical free particle is characterized simply by a fixed velocity... are arbitrary complex number A complex number is a number consisting of a real part and an imaginary part. Complex numbers extend the idea of the one-dimensional number line to the two-dimensional complex plane by using the number line for the real part and adding a vertical axis to plot the imaginary part... s. The frequency of the oscillations through space and time are given by the wavenumber In the physical sciences, the wavenumber is a property of a wave, its spatial frequency, that is proportional to the reciprocal of the wavelength. It is also the magnitude of the wave vector... and the angular frequency In physics, angular frequency ω is a scalar measure of rotation rate. Angular frequency is the magnitude of the vector quantity angular velocity... respectively. These are both related to the total energy of the particle by the expression which is known as the dispersion relation In physics and electrical engineering, dispersion most often refers to frequency-dependent effects in wave propagation. Note, however, that there are several other uses of the word "dispersion" in the physical sciences.... for a free particle. The size (or amplitude Amplitude is the magnitude of change in the oscillating variable with each oscillation within an oscillating system. For example, sound waves in air are oscillations in atmospheric pressure and their amplitudes are proportional to the change in pressure during one oscillation... 
) of the wavefunction at a given position is related to the probability of finding a particle there by . The wavefunction must therefore vanish everywhere beyond the edges of the box. Also, the amplitude of the wavefunction may not "jump" abruptly from one point to the next. These two conditions are only satisfied by wavefunctions with the form is a positive, whole number. The wavenumber is restricted to certain, specific values given by is the size of the box. Negative values of are neglected, since they give wavefunctions identical to the positive solutions except for a physically unimportant sign change. Finally, the unknown constant may be found by normalizing the wavefunction so that the total probability density of finding the particle in the system is 1. It follows that may be any complex number with absolute value In mathematics, the absolute value |a| of a real number a is the numerical value of a without regard to its sign. So, for example, the absolute value of 3 is 3, and the absolute value of -3 is also 3... √(2/L); these different values of A yield the same physical state, so A = √(2/L) can be selected to simplify. The energies which correspond with each of the permitted wavenumbers may be written as The energy levels increase with , meaning that high energy levels are separated from each other by a greater amount than low energy levels are. The lowest possible energy for the particle (its zero-point energy Zero-point energy is the lowest possible energy that a quantum mechanical physical system may have; it is the energy of its ground state. All quantum mechanical systems undergo fluctuations even in their ground state and have an associated zero-point energy, a consequence of their wave-like nature... ) is found in state 1, which is given by The particle, therefore, always has a positive energy. This contrasts with classical systems, where the particle can have zero energy by resting motionless at the bottom of the box. This can be explained in terms of the uncertainty principle In quantum mechanics, the Heisenberg uncertainty principle states a fundamental limit on the accuracy with which certain pairs of physical properties of a particle, such as position and momentum, can be simultaneously known... , which states that the product of the uncertainties in the position and momentum of a particle is limited by It can be shown that the uncertainty in the position of the particle is proportional to the width of the box. Thus, the uncertainty in momentum is roughly inversely proportional to the width of the box. The kinetic energy of a particle is given by , and hence the minimum kinetic energy of the particle in a box is inversely proportional to the mass and the square of the well width, in qualitative agreement with the calculation above. In classical physics, the particle can be detected anywhere in the box with equal probability. In quantum mechanics, however, the probability density for finding a particle at a given position is derived from the wavefunction as For the particle in a box, the probability density for finding the particle at a given position depends upon its state, and is given by Thus, for any value of n greater than one, there are regions within the box for which , indicating that spatial nodes exist at which the particle cannot be found. 
In quantum mechanics, the average, or expectation value of the position of a particle is given by For the steady state particle in a box, it can be shown that the average position is always , regardless of the state of the particle. For a superposition of states, the expectation value of the position will change based on the cross term which is proportional to If a particle is trapped in a two-dimensional box, it may freely move in the -directions, between barriers separated by lengths respectively. Using a similar approach to that of the one-dimensional box, it can be shown that the wavefunctions and energies are given respectively by where the two-dimensional wavevector is given by For a three dimensional box, the solutions are where the three-dimensional wavevector is given by An interesting feature of the above solutions is that when two or more of the lengths are the same (e.g. ), there are multiple wavefunctions corresponding to the same total energy. For example the wavefunction with has the same energy as the wavefunction with . This situation is called degeneracy In physics, two or more different quantum states are said to be degenerate if they are all at the same energy level. Statistically this means that they are all equally probable of being filled, and in Quantum Mechanics it is represented mathematically by the Hamiltonian for the system having more... and for the case where exactly two degenerate wavefunctions have the same energy that energy level is said to be doubly degenerate . Degeneracy results from symmetry in the system. For the above case two of the lengths are equal so the system is symmetric with respect to a 90° rotation. Because of its mathematical simplicity, the particle in a box model is used to find approximate solutions for more complex physical systems in which a particle is trapped in a narrow region of low electric potential In classical electromagnetism, the electric potential at a point within a defined space is equal to the electric potential energy at that location divided by the charge there... between two high potential barriers. These quantum well A quantum well is a potential well with only discrete energy values.One technology to create quantization is to confine particles, which were originally free to move in three dimensions, to two dimensions, forcing them to occupy a planar region... systems are particularly important in optoelectronics Optoelectronics is the study and application of electronic devices that source, detect and control light, usually considered a sub-field of photonics. In this context, light often includes invisible forms of radiation such as gamma rays, X-rays, ultraviolet and infrared, in addition to visible light... , and are used in devices such as the quantum well laser A quantum well laser is a laser diode in which the active region of the device is so narrow that quantum confinement occurs. The wavelength of the light emitted by a quantum well laser is determined by the width of the active region rather than just the bandgap of the material from which it is... , the quantum well infrared photodetector A quantum well infrared photodetector , is an infrared photodetector made from semiconductor materials which contain one or more quantum wells. These can be integrated together with electronics and optics to make infrared cameras for thermography. A very common well material is gallium arsenide,... 
and the quantum-confined Stark effect The quantum-confined Stark effect describes the effect of an external electric field upon the light absorption spectrum or emission spectrum of a quantum well . In the absence of an external electric field, electrons and holes within the quantum well may only occupy states within a discrete set... The probability density does not go to zero at the nodes if relativistic effects are taken into account. - Finite potential well The finite potential well is a concept from quantum mechanics. It is an extension of the infinite potential well, in which a particle is confined to a box, but one which has finite potential walls. Unlike the infinite potential well, there is a probability associated with the particle being found... - Delta function potential - Gas in a box In quantum mechanics, the results of the quantum particle in a box can be used to look at the equilibrium situation for a quantum ideal gas in a box which is a box containing a large number of molecules which do not interact with each other except for instantaneous thermalizing collisions... - Particle in a ring In quantum mechanics, the case of a particle in a one-dimensional ring is similar to the particle in a box. The Schrödinger equation for a free particle which is restricted to a ring is... - Particle in a spherically symmetric potential - Quantum harmonic oscillator The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary potential can be approximated as a harmonic potential at the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics... - Delta potential well (QM) The delta potential is a potential that gives rise to many interesting results in quantum mechanics. It consists of a time-independent Schrödinger equation for a particle in a potential well defined by a Dirac delta function in one dimension.... - Semicircle potential well - Configuration integral (statistical mechanics)
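The inline equations in the article above were evidently images that did not survive extraction (several sentences read "is given as is the length of the box", and similar). For reference, the standard textbook results for the infinite square well, which those missing formulas would have expressed, are listed below; they are the conventional treatment rather than anything recovered from the page itself.

V(x) = \begin{cases} 0, & 0 < x < L \\ \infty, & \text{otherwise} \end{cases}

i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2\psi}{\partial x^2} \quad (0 < x < L,\ \text{where } V = 0)

E = \frac{\hbar^2 k^2}{2m} = \hbar\omega \quad \text{(free-particle dispersion relation)}

\psi_n(x) = \sqrt{\tfrac{2}{L}}\,\sin\!\left(\tfrac{n\pi x}{L}\right), \qquad k_n = \frac{n\pi}{L}, \qquad n = 1, 2, 3, \ldots

E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}, \qquad E_1 = \frac{\pi^2\hbar^2}{2mL^2} \quad \text{(zero-point energy)}

\Delta x\,\Delta p \ge \frac{\hbar}{2}, \qquad P_n(x) = |\psi_n(x)|^2 = \frac{2}{L}\sin^2\!\left(\frac{n\pi x}{L}\right), \qquad \langle x \rangle = \frac{L}{2}

E_{n_x n_y n_z} = \frac{\hbar^2\pi^2}{2m}\left(\frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2} + \frac{n_z^2}{L_z^2}\right) \quad \text{(3-D box; drop a term for the 2-D case)}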
<urn:uuid:516275b6-3f1a-40a3-a982-aa6176296f9e>
3.390625
3,228
Knowledge Article
Science & Tech.
37.941987
1,765
ABOUT SNOWFLAKES: Snow is a form of precipitation. Rising warm air carries water vapor high into the sky, where it cools and condenses into water droplets. Some vapor freezes into tiny ice crystals, which can attract cooled water drops to form snowflakes. As snowflakes fall, they may meet warmer air and melt into raindrops, unless temperatures are below freezing close to the ground: then we get snow. A snow crystal is a single crystal of ice. It usually forms the shape of a hexagonal prism, but as the crystals grow, branches sprout from the corners, creating more complex shapes. Conditions such as temperature and humidity in the atmosphere can influence a snowflake's shape.

WHAT'S THE FORECAST: Weather forecasting is the application of science and technology to predict the state of the atmosphere for a future time and a given location. Humankind has attempted to predict the weather since ancient times. In 650 BC, the Babylonians predicted the weather from cloud patterns. In about 340 BC, Aristotle described weather patterns in Meteorologica. Chinese weather prediction lore extends at least as far back as 300 BC. Ancient weather forecasting methods usually relied on observed patterns of events. For example, it might be observed that if the sunset was particularly red, the following day often brought fair weather. This experience accumulated over the generations to produce weather lore. Today, weather forecasts are made by collecting data about the current state of the atmosphere and using computer models of the atmospheric processes to project how the atmosphere will evolve.

The American Meteorological Society, the American Mathematical Society, the Mathematical Association of America, the American Statistical Association and the Society for Industrial and Applied Mathematics contributed to the information contained in the TV portion of this report.
<urn:uuid:a0e37488-f43a-4545-b743-364b9bb57970>
3.890625
366
Knowledge Article
Science & Tech.
31.530721
1,766
Distances in the Solar System are very large! To compare the average distances between the Sun and the planets, it's convenient to do it in terms of the average Earth-Sun separation. A very useful approximate definition of the astronomical unit (AU) is: 1 AU = average distance between Sun and Earth = 1.496 × 10⁸ km. More accurately, the IAU has defined the AU as: "equal to the distance from the centre of the Sun at which a particle of negligible mass, in an unperturbed circular orbit, would have an orbital period of 365.2568983 days." This is slightly less than the mean Sun-Earth distance. See also: parsec.
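To make the scale concrete, here is a small Python sketch that converts a few mean Sun-planet distances from kilometres to AU using the approximate definition above. The planet distances are rounded textbook values supplied purely for illustration; they are assumptions, not figures quoted in this article.

# Convert mean Sun-planet distances to astronomical units (AU),
# using the approximate definition 1 AU = 1.496e8 km from the text.
AU_KM = 1.496e8

# Rounded illustrative mean distances in km (assumed values).
mean_distance_km = {
    "Mercury": 5.79e7,
    "Venus": 1.082e8,
    "Earth": 1.496e8,
    "Mars": 2.279e8,
    "Jupiter": 7.785e8,
    "Neptune": 4.495e9,
}

for planet, d_km in mean_distance_km.items():
    print(f"{planet:8s} {d_km / AU_KM:6.2f} AU")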
<urn:uuid:7b513601-de99-4e61-a84c-da82145da4ff>
3.59375
145
Knowledge Article
Science & Tech.
52.525
1,767
Here’s what it looks like:

// Example One
function triangle(a,b):
    function sqroot(x):
        return Math.pow(x,.5)
    return sqroot( a*a + b*b )

// Example Two
for var i=0; i<5; i++:
    var el = document.getElementById("el"+i)
    if i % 2 == 0:
        el.innerHTML = "Hello"
    else:
        el.innerHTML = "World"

// Pyscript
function triangle(a,b):
    if a > 0 && b > 0:
        function sqroot(x):
            if x > 0:
                return Math.pow(x,.5)
            else:
                return 0
        return sqroot( a*a + b*b )
    else:
        return 0

How Do I Get It?
- Add <script src="pyscript.js"/> to your
- Put your pyscript code in a

Pyscript is really just a proof of concept. It has a set of unit tests, and as you can see not all functionality is completed yet. Anonymous inline functions do not work, there is no robust handling of indentation, and a number of features (like switch) do not work. But, the whole thing is open source so you can get involved.
<urn:uuid:370d635e-183f-4543-af68-b4c2b8b98c70>
2.515625
271
Tutorial
Software Dev.
88.747725
1,768
Research Highlights from LEAF:
Ionic Liquids: Designer Solvents for a Cleaner World (PDF)
Storing Energy in Dendrimer Trees: Stabilizing Charge Separation in Dendrimers (PDF)
They Bend Before They Break: Fast Scission of Chemical Bonds (PDF)

The Center for Radiation Chemistry Research (CRCR) exploits pulse radiolysis techniques to study chemical reactions (and other phenomena) by subjecting samples to pulses of high-energy electrons. The reactions are followed by various methods of time-resolved spectroscopy and other detection techniques. The CRCR includes the new picosecond Laser-Electron Accelerator Facility (LEAF), a 2 MeV Van de Graaff, and a cobalt-60 source. User access to CRCR facilities is encouraged, either through collaboration with BNL staff or via the BNL Center for Functional Nanomaterials (CFN) User Program. Please contact one of our Principal Investigators or the general facility address (email@example.com) for more information.

The design of the LEAF accelerator is innovative: the electron pulse is produced by laser light impinging on a photocathode inside a resonant-cavity radio frequency (RF) gun about 30 cm long. The emitted electrons are accelerated to 9.2 MeV within the length of the gun by a ~15 megawatt pulse of RF power from a SLAC-type 2.856 GHz klystron. The laser pulse is synchronized with the RF power to produce the electron pulse near the peak field gradient (about 1 MeV/cm). Thus the pulse length and intensity are a function of the laser pulse properties, and electron pulse lengths as short as 5 picoseconds are attainable. RF photocathode electron guns of this type have been built at Brookhaven's Accelerator Test Facility and Source Development Laboratory as well as at several other laboratories and universities, but almost all of these installations are dedicated to accelerator physics or free electron laser development. The BNL Chemistry Department's LEAF accelerator was the first photocathode gun accelerator in the world to be dedicated to pulse radiolysis studies. Today, there are seven other pulse radiolysis facilities similar to LEAF operating or under construction around the world.

The International Symposium on Ultrafast Accelerators for Pulse Radiolysis was held at the BNL Chemistry Department in June 2004 to discuss the state of the art in fast pulse radiolysis systems and to find solutions to technical issues faced by these specialized applications. Click on the above link for information about the meeting and links to other facilities (in the Agenda).

The LEAF accelerator system is located in the Augustine O. Allen Laboratories, Rooms 20 - 23 of the Chemistry Building. "Gus" Allen was a pioneer of modern radiation chemistry and the founder of the radiation chemistry program at Brookhaven. For more information, contact Jim Wishart (firstname.lastname@example.org).
<urn:uuid:86cf4627-65a2-4aae-a305-5eb078ad91e4>
2.859375
653
About (Org.)
Science & Tech.
28.870647
1,769
Japan’s Nuclear Crisis Explained: A Sampler The crippled Fukushima nuclear plant as of March 15, 2011. Credit: daveeza/flickr Difficult though it is to believe, the massive earthquake and tsunami that devastated northeast Japan happened less than a week ago. It would have been even harder to believe, when the first damage reports began rolling in, that the quake and tsunami would have taken a backseat in the headlines to the worst nuclear accident since Chernobyl, whose 25th anniversary is coming next month. In fact, there's still a distinct possibility that the Fukushima disaster will surpass Chernobyl and Three Mile Island to become the worst nuclear disaster in history (if you discount Hiroshima and Nagasaki, of course). Getting a handle on exactly what's happening at Fukushima is difficult, for a couple of reasons. First, nobody can actually get inside the plants to inspect the damage close-up. It's simply too dangerous. Second, Japanese officials haven't been very good at providing what information they do have. And third, the situation keeps changing, day-to-day and even hour-to-hour. Nevertheless, a number of news and other organizations have quickly put together some useful resources that can help us understand something of what's happening at Fukushima, and what could happen next. For an up-to-date timeline of the disaster, for example, check out this Wikipedia page. The New York Times has, as always, compiled a wealth of information. They've posted a graphic showing how a plume of radioactivity released from the plant is likely to spread (this graphic does NOT include information about actual radiation levels; only about wind patterns). The Times also has an excellent page showing the current status of each of the six reactors at Fukushima, which is updated daily. And there's an interactive graphic showing how a reactor shuts down and — if things spin out of control, as they may now be doing — how it melts down. There's also an excellent multipart series of graphics at The Washington Post that lays out almost anything you'd care to know about the disaster, from information about the reactors themselves to evacuation plans and more. The U.K.'s Guardian website has a terrific multimedia explainer starring reporter Ian Sample on video, with all sorts of supplementary material. And finally, Slate comes at the story from its usual different angle by providing a worldwide interactive map of where, when and how powerfully tsunamis have struck the world since 1975. If you've seen other examples of great explainers, whether in graphical or other form, please let us know in comments.
<urn:uuid:3da25c8d-04b8-4a2c-90a0-e40a58717678>
3.234375
534
Personal Blog
Science & Tech.
42.601633
1,770
SANTA CRUZ — Hopping on a bumper car and setting the GPS destination to outer space, "Toy Story" character Buzz Lightyear would have been thrilled by a scene at the Santa Cruz Beach Boardwalk. A Science Channel TV crew and UCSC Astronomy Professor Greg Laughlin were equally excited Friday morning, using the Speed Bump ride as a visual analogy to illustrate the concept of gravity assist in outer space. The crew approached Laughlin and the Santa Cruz Boardwalk to shoot an episode of "Through the Wormhole." Hosted by Morgan Freeman, the series explores scientific questions. The episode "Can we outlive the sun?" is expected to air in the show's fourth season in 2013.

"In billions of years, the sun will get so bright and luminous that life on Earth will be in big trouble," Laughlin said. To preserve life forms, humans would have to find a way for Earth to move away from the sun by expanding the orbit, Laughlin said. That's where asteroids, gravity and the Boardwalk's bumper cars come in. "You could cause our planet to slowly expand its orbit by having asteroids flying by the Earth hundreds of thousands of times," Laughlin said. "This is what I'm showing when I'm driving around our little Earth with my magnet." Using a metal reproduction of the Earth, a yellow beach ball for the sun and a red magnet, Laughlin and camera operators rode bumper cars, circling their miniature solar system to mimic asteroids.

In real space, this merry-go-round action produces a double effect, Laughlin explained. Because of gravity, the mass of an asteroid entering our orbit produces a force that slightly pulls the Earth forward, causing our planet to expand the trajectory of its orbit. Simultaneously, gravity forces the asteroid to deviate from its trajectory while gaining speed. This latter phenomenon, used by scientists as a "gravity assist" to speed up spacecraft, is ultimately the most interesting part, Laughlin said. "Whether we can outlive the sun is the last thing we need to worry about. We don't have the technology to control asteroids and this won't be an issue before a billion years," Laughlin said. "It's just a dramatic way of showing how gravity assist works." NASA sent its New Horizons spacecraft past Jupiter, and the gravity assist gave it a huge boost on its trip to Pluto, Laughlin said. "That will enable us to discover what Pluto looks like years ahead of schedule."

The Boardwalk provided the shooting location to tackle the complex matter, Georgalis said. The director said he is comfortable working with Laughlin, who often pushes TV crews to shoot in Santa Cruz. In 2009, the scientist brought the crew to the Boardwalk for a History Channel piece on Jupiter's chemical activity. "Santa Cruz is a great town for filming opportunities: we have the ocean, the Boardwalk, the redwood forests," Laughlin said. "If people see the show and want to come visit Santa Cruz as a result, that means I've done my job."

Airs Wednesdays: "Through the Wormhole" airs at 10 p.m. Wednesdays through Aug. 8 on the Science Channel. For information, see http://science.discovery.com/tv/through-the-wormhole.
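The mechanics behind the bumper-car demonstration can be sketched numerically. The toy Python example below uses the standard textbook picture of a gravity assist: in the planet's rest frame a flyby only turns the spacecraft's velocity, so back in the Sun's frame the craft can gain up to twice the planet's orbital speed. It is not a description of the actual New Horizons trajectory, and all numbers are illustrative assumptions.

# Toy head-on gravity assist: in the planet's rest frame the speed is unchanged,
# only the direction flips; transforming back to the Sun's frame adds up to 2*U.
# Illustrative values only (roughly Jupiter-like orbital speed).

U = 13.1          # km/s, planet's orbital speed around the Sun (assumed)
v_in_sun = 10.0   # km/s, spacecraft speed in the Sun's frame before the flyby (assumed)

# Head-on limit: the spacecraft approaches against the planet's motion.
v_in_planet = v_in_sun + U      # speed relative to the planet before the encounter
v_out_planet = v_in_planet      # gravity only turns the velocity; speed is conserved
v_out_sun = v_out_planet + U    # after a 180-degree turn, back in the Sun's frame

print(f"speed before flyby: {v_in_sun:.1f} km/s")
print(f"speed after  flyby: {v_out_sun:.1f} km/s "
      f"(gain of {v_out_sun - v_in_sun:.1f} km/s, i.e. 2*U)")

The same momentum bookkeeping, run in reverse and repeated over hundreds of thousands of asteroid passes, is what would slowly enlarge Earth's orbit in the scenario Laughlin describes: whatever the planet gains, the passing body loses.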
<urn:uuid:0cb4af77-6504-4419-b9df-d063482f445c>
2.75
695
News Article
Science & Tech.
58.682028
1,771
Threats to Frogs

One of the most pressing threats to frogs today is the chytrid fungus, a deadly skin fungus that has moved across the globe causing amphibian declines in Australia, South America, North America, Central America, New Zealand, Europe, and Africa, killing frogs by the millions. The chytrid fungus is responsible for over 100 frog and other amphibian species extinctions since the 1970s. Chytrid fungus has been detected on at least 285 species of amphibians (including frogs) from 36 countries.

Climate change is also having an impact on frogs that live on mountain tops. They are being hit hard since they are dependent on moist leaf litter found in cloud forests as a suitable place to lay their eggs. As temperatures increase further up mountain sides, clouds are being pushed further away and leaves are drying out, leaving less suitable habitat for frogs to lay their eggs. As frogs migrate further up the mountain they are faced with the inevitable problem that once they reach the top, unlike birds, they can go no further.

Frogs are also facing threats from many different environmental factors: pollution, infectious diseases, habitat loss, invasive species, climate change, and over-harvesting for the pet and food trades are all contributing to the rapid rise of frog extinctions since 1980.

Reasons for Hope

Chytrid fungus has been recognized as one of the largest threats to amphibian populations around the world. In 2009 a group of organizations came together to respond to the crisis. Defenders of Wildlife (Washington DC), Africam Safari Park (Mexico), Cheyenne Mountain Zoo (Colorado), the Smithsonian National Zoological Park (Washington DC), the Smithsonian Tropical Research Institute (Panama), Zoo New England (Massachusetts) and Houston Zoo (Texas) have launched the Panama Amphibian Rescue and Conservation Project.

There are yet undiscovered species of frogs in the world. A new species of flying frog was discovered in the Himalayan Mountains in 2008.
<urn:uuid:41cc0e52-1089-47c4-b367-00aac24e3935>
4
408
Knowledge Article
Science & Tech.
35.836035
1,772
Properties of magnetic materials change with temperature. In physics and materials science, the Curie temperature (Tc), or Curie point, is the temperature where a material's permanent magnetism changes to induced magnetism, or vice versa. The force of magnetism is determined by magnetic moments, and the Curie temperature is the critical point at which the intrinsic magnetic moments change direction. Magnetic moments are permanent dipole moments within the atom which are made up from the electrons' angular momentum and spin. Materials have different structures of intrinsic magnetic moments that depend on temperature; it is at a material's specific Curie temperature that they change direction. Permanent magnetism arises from aligned magnetic moments, whereas induced magnetism occurs when disordered magnetic moments are forced to align in an applied magnetic field. For example, the ordered magnetic moments (ferromagnetic, figure 1) change and become disordered (paramagnetic, figure 2) at the Curie temperature, and vice versa. Higher temperatures make magnets weaker, as spontaneous magnetism only occurs below the Curie temperature. Above the Curie temperature the magnetic susceptibility can be calculated from the Curie-Weiss law, which is derived from Curie's law. In analogy to ferromagnetic and paramagnetic materials, the Curie temperature can also be used to describe the temperature where a material's spontaneous electric polarisation changes to induced electric polarisation, or vice versa.

Material and Curie temperature (K):
- Iron(III) oxide (Fe2O3): 948
- Iron(II,III) oxide (FeO·Fe2O3): 858

Magnetic moments

Electrons inside atoms contribute magnetic moments from their own angular momentum and from their orbital momentum around the nucleus. Magnetic moments from the nucleus are insignificant in contrast to the magnetic moments from electrons. Thermal contributions result in higher-energy electrons, disrupting their order and destroying the alignment between dipoles. Ferromagnetic, paramagnetic, ferrimagnetic and antiferromagnetic materials have different structures of intrinsic magnetic moments, and it is at a material's specific Curie temperature that they change properties. The transition from antiferromagnetic to paramagnetic (or vice versa) occurs at the Néel temperature, which is analogous to the Curie temperature.

The arrangements of moments below Tc and above Tc can be summarised as follows:
- Ferromagnetism: the magnetic moments are ordered and of the same magnitude in the absence of an applied magnetic field.
- Paramagnetism: the magnetic moments are disordered in the absence of an applied magnetic field and ordered in the presence of an applied magnetic field.
- Ferrimagnetism: the magnetic moments are aligned oppositely and have different magnitudes, due to being made up of two different ions; this is in the absence of an applied magnetic field.
- Antiferromagnetism: the magnetic moments are aligned oppositely and have the same magnitudes; this is in the absence of an applied magnetic field.

Materials with magnetic moments that change properties at the Curie temperature

Ferromagnetic, paramagnetic, ferrimagnetic and antiferromagnetic structures are made up of intrinsic magnetic moments.
If all electrons within the structure are paired, these moments cancel out because the paired electrons have opposite spins and angular momenta; such materials therefore have different properties and no Curie temperature, even in an applied magnetic field.

A material is paramagnetic only above its Curie temperature. Paramagnetic materials are non-magnetic when a magnetic field is absent and magnetic when a magnetic field is applied. When the magnetic field is absent the material has disordered magnetic moments; that is, the atoms are unsymmetrical and not aligned. When the magnetic field is present the magnetic moments are temporarily realigned parallel to the applied field; the atoms are symmetrical and aligned. The magnetic moments pointing in the same direction are what cause an induced magnetic field. For paramagnetism this response to an applied magnetic field is positive and is known as the magnetic susceptibility. The magnetic susceptibility only applies above the Curie temperature, for disordered states.

Sources of paramagnetism (materials which have Curie temperatures) include:
- all atoms which have unpaired electrons;
- atoms whose inner shells are incomplete in electrons;
- free radicals.

Above the Curie temperature the atoms are excited and the spin orientations become randomised, but they can be realigned in an applied field and the material is paramagnetic. Below the Curie temperature the intrinsic structure has undergone a phase transition, the atoms are ordered and the material is ferromagnetic. The induced magnetic fields of paramagnetic materials are very weak in comparison with the magnetic fields of ferromagnetic materials.

Materials are only ferromagnetic below their corresponding Curie temperatures. Ferromagnetic materials are magnetic in the absence of an applied magnetic field. When a magnetic field is absent the material has a spontaneous magnetization which is a result of the ordered magnetic moments; that is, for ferromagnetism, the atoms are symmetrical and aligned in the same direction, creating a permanent magnetic field. The magnetic interactions are held together by exchange interactions; otherwise thermal disorder would overcome the weak interactions of magnetic moments. The exchange interaction gives a zero probability of parallel electrons occupying the same point at the same time, implying a preferred parallel alignment in the material. The Boltzmann factor contributes heavily, as it prefers interacting particles to be aligned in the same direction. This is what causes ferromagnets to have strong magnetic fields and high Curie temperatures of around 1000 K. Below the Curie temperature the atoms are aligned and parallel, causing spontaneous magnetism; the material is ferromagnetic. Above the Curie temperature the material is paramagnetic, as the atoms lose their ordered magnetic moments when the material undergoes a phase transition.

(Not to be confused with ferromagnetic.) Materials are only ferrimagnetic below their corresponding Curie temperature. Ferrimagnetic materials are magnetic in the absence of an applied magnetic field and are made up of two different ions. When a magnetic field is absent the material has a spontaneous magnetism which is the result of ordered magnetic moments; that is, for ferrimagnetism one ion's magnetic moments are aligned facing in one direction with a certain magnitude and the other ion's magnetic moments are aligned facing in the opposite direction with a different magnitude.
As the magnetic moments are of different magnitudes in opposite directions there is still a spontaneous magnetism and a magnetic field is present. Similar to ferromagnetic materials, the magnetic interactions are held together by exchange interactions; the orientations of the moments, however, are anti-parallel, which results in a net moment obtained by subtracting one moment from the other. Below the Curie temperature the atoms of each ion are aligned anti-parallel with different moments, causing a spontaneous magnetism; the material is ferrimagnetic. Above the Curie temperature the material is paramagnetic, as the atoms lose their ordered magnetic moments when the material undergoes a phase transition.

Antiferromagnetism and the Néel temperature

Materials are only antiferromagnetic below their corresponding Néel temperature. This is similar to the Curie temperature, as above the Néel temperature the material undergoes a phase transition and becomes paramagnetic. The material has equal magnetic moments aligned in opposite directions, resulting in a zero magnetic moment and a net magnetism of zero at all temperatures below the Néel temperature. Antiferromagnetic materials are weakly magnetic in the absence or presence of an applied magnetic field. Similar to ferromagnetic materials, the magnetic interactions are held together by exchange interactions, preventing thermal disorder from overcoming the weak interactions of magnetic moments. When disorder occurs, it is at the Néel temperature.

Curie-Weiss law

The Curie-Weiss law is a simple model derived from a mean-field approximation. This means it works well when the material's temperature T is much greater than the corresponding Curie temperature Tc (T >> Tc), but it fails to describe the magnetic susceptibility χ in the immediate vicinity of the Curie point, because of local fluctuations between atoms. Neither Curie's law nor the Curie-Weiss law holds for T < Tc.

Curie's law for a paramagnetic material states that the susceptibility is inversely proportional to temperature,

χ = M/H = C/T,   with Curie constant C = µ0 µB² g² J(J+1) N / (3 kB),

where:
- χ is the magnetic susceptibility, the influence of an applied magnetic field on a material
- M is the magnetic moment per unit volume
- H is the macroscopic magnetic field
- B is the magnetic field
- C is the material-specific Curie constant
- µ0 is the permeability of free space (note: in CGS units it is taken to equal one)
- g is the Landé g-factor
- J(J+1) is the eigenvalue for eigenstate J² for the stationary states within the incomplete atomic shells (unpaired electrons)
- µB is the Bohr magneton
- kB is Boltzmann's constant
- the total magnetism is N magnetic moments per unit volume

The Curie-Weiss law is then derived from Curie's law to be

χ = C / (T - Tc).

For the full derivation see Curie-Weiss law.

Physics of the Curie temperature

Approaching the Curie temperature from above: as the Curie-Weiss law is an approximation, a more accurate model is needed when the temperature T approaches the material's Curie temperature Tc. Magnetic susceptibility occurs above the Curie temperature. An accurate model of critical behaviour for the magnetic susceptibility, with critical exponent γ, is

χ ∝ 1 / (T - Tc)^γ.

As T approaches Tc the denominator tends to zero and the magnetic susceptibility approaches infinity, allowing magnetism to occur. This is a spontaneous magnetism, which is a property of ferromagnetic and ferrimagnetic materials.

Approaching the Curie temperature from below: magnetism depends on temperature, and spontaneous magnetism occurs below the Curie temperature.
An accurate model of critical behaviour for the spontaneous magnetism, with critical exponent β, is

M ∝ (Tc - T)^β.

The critical exponent differs between materials, and for the mean-field model it is taken as β = 0.5 for T << Tc. The spontaneous magnetism, occurring in ferromagnetic, ferrimagnetic and antiferromagnetic materials, approaches zero as the temperature increases towards the material's Curie temperature. Spontaneous magnetism is at its maximum as the temperature approaches 0 K; that is, the magnetic moments are completely aligned and at their strongest magnitude of magnetism, due to the absence of thermal disturbance. In paramagnetic materials the temperature is sufficient to overcome the ordered alignments. As the temperature approaches 0 K the entropy decreases to zero; that is, the disorder decreases and the material becomes ordered. This occurs without the presence of an applied magnetic field and obeys the third law of thermodynamics. Both Curie's law and the Curie-Weiss law fail as the temperature approaches 0 K, because they depend on the magnetic susceptibility, which only applies when the state is disordered. Gadolinium sulphate continues to satisfy Curie's law at 1 K. Between 0 and 1 K the law fails to hold, and a sudden change in the intrinsic structure occurs at the Curie temperature.

Ising model of phase transitions

The Ising model is mathematically based and can analyse the critical points of phase transitions in ferromagnetic order, due to the spins of electrons having magnitudes of either +1/2 or -1/2. The spins interact with their neighbouring dipole electrons in the structure, and here the Ising model can predict their behaviour with each other. This model is important for solving and understanding the concepts of phase transitions and hence for solving for the Curie temperature. As a result, many different dependencies that affect the Curie temperature can be analysed.

Weiss domains and surface and bulk Curie temperatures

Material structures consist of intrinsic magnetic moments which are separated into domains called Weiss domains. This can result in ferromagnetic materials having no spontaneous magnetism, as the domains could potentially balance each other out. The position of particles can therefore have different orientations around the surface than in the main part (bulk) of the material. This property directly affects the Curie temperature, as there can be a bulk Curie temperature TB and a different surface Curie temperature TS for a material. This allows the surface to remain ferromagnetic above the bulk Curie temperature when the main state is disordered, i.e. ordered and disordered states occur simultaneously. The surface and bulk properties can be predicted by the Ising model, and electron capture spectroscopy can be used to detect the electron spins and hence the magnetic moments on the surface of the material. An average total magnetism is taken from the bulk and surface temperatures to calculate the Curie temperature of the material, noting that the bulk contributes more.

The angular momentum of an electron is either +ħ/2 or -ħ/2 due to it having a spin of 1/2, which gives a specific size of magnetic moment to the electron: the Bohr magneton.
Electrons orbiting around the nucleus in a current loop create a magnetic field which depends on the Bohr magneton and the magnetic quantum number. The magnetic moments from spin and from orbital motion are therefore related and affect each other; spin angular momentum contributes twice as much to the magnetic moment as orbital angular momentum. For terbium, which is a rare earth metal with a high orbital angular momentum, the magnetic moment is strong enough to affect the order above its bulk transition temperatures. It is said to have a high anisotropy at the surface, that is, it is highly directed in one orientation. It remains ferromagnetic on its surface above its Curie temperature while its bulk becomes ferrimagnetic; at higher temperatures its surface remains ferrimagnetic above its bulk Néel temperature before becoming completely disordered and paramagnetic with increasing temperature. The anisotropy in the bulk is different from the surface anisotropy just above these phase changes, as the magnetic moments will be ordered differently, or ordered as in paramagnetic materials.

Changing a material's Curie temperature

Composite materials: Composite materials, that is, materials composed from other materials with different properties, can change the Curie temperature. For example, a composite which contains silver can create spaces for oxygen molecules in bonding, which decreases the Curie temperature, as the crystal lattice will not be as compact. The alignment of magnetic moments in the composite material affects the Curie temperature. If the materials' moments are parallel with each other the Curie temperature will increase, and if perpendicular the Curie temperature will decrease, as either more or less thermal energy will be needed to destroy the alignments. Preparing composite materials at different temperatures can result in different final compositions which will have different Curie temperatures. Doping a material can also affect its Curie temperature.

The density of nanocomposite materials changes the Curie temperature. Nanocomposites are compact structures on a nano-scale. The structure is built up of regions with high and low bulk Curie temperatures, but it will only have one mean-field Curie temperature. A higher density of lower bulk temperatures results in a lower mean-field Curie temperature, and a higher density of higher bulk temperatures significantly increases the mean-field Curie temperature. In more than one dimension the Curie temperature begins to increase, as the magnetic moments will need more thermal energy to overcome the ordered structure.

Particle size: The size of particles in a material's crystal lattice changes the Curie temperature. Due to the small size of particles (nanoparticles), the fluctuations of electron spins become more prominent; this results in the Curie temperature decreasing drastically as the particle size decreases, because the fluctuations cause disorder. The size of a particle also affects the anisotropy, causing the alignment to become less stable and thus leading to disorder in the magnetic moments. The extreme of this is superparamagnetism, which only occurs in small ferromagnetic particles and is where fluctuations are very influential, causing magnetic moments to change direction randomly and thus create disorder. The Curie temperature of nanoparticles is also affected by the crystal lattice structure: body-centred cubic (bcc), face-centred cubic (fcc) and hexagonal (hcp) structures all have different Curie temperatures due to magnetic moments reacting to their neighbouring electron spins.
fcc and hcp have tighter structures and as a result have higher Curie temperatures than bcc, as the magnetic moments have stronger effects when closer together. This is described by the coordination number, which is the number of nearest neighbouring particles in a structure. There is a lower coordination number at the surface of a material than in the bulk, which leads to the surface becoming less significant when the temperature is approaching the Curie temperature. In smaller systems the coordination number for the surface is more significant and the magnetic moments have a stronger effect on the system. Although fluctuations in particles can be minuscule, they are heavily dependent on the structure of the crystal lattice, as the particles react with their nearest neighbours. Fluctuations are also affected by the exchange interaction, as parallel-facing magnetic moments are favoured and therefore experience less disturbance and disorder; a tighter structure thus favours stronger magnetism and therefore a higher Curie temperature.

Pressure: Pressure changes a material's Curie temperature. Increasing pressure on the crystal lattice decreases the volume of the system. Pressure directly affects the kinetic energy of the particles, as movement increases, causing the vibrations to disrupt the order of the magnetic moments. This is similar to temperature, which also increases the kinetic energy of particles and destroys the order of magnetic moments and magnetism. Pressure also affects the density of states (DOS). Here the DOS decreases, causing the number of electrons available to the system to decrease. This leads to the number of magnetic moments decreasing, as they depend on electron spins. It might be expected, because of this, that the Curie temperature would decrease; however, it increases. This is the result of the exchange interaction. The exchange interaction favours aligned parallel magnetic moments, due to electrons being unable to occupy the same space at the same time, and as this effect is strengthened by the decreasing volume, the Curie temperature increases with pressure. The Curie temperature is made up of a combination of dependencies on the kinetic energy and the DOS. It is interesting to note that the concentration of particles also affects the Curie temperature when pressure is being applied, and can result in a decrease in Curie temperature when the concentration is above a certain percentage.

Orbital ordering: Orbital ordering changes the Curie temperature of a material. Orbital ordering can be controlled through applied strains. This is a function that determines the wavefunction of a single electron or of paired electrons inside the material. Having control over the probability of where the electron will be allows the Curie temperature to be altered. For example, the delocalised electrons can be moved onto the same plane by applied strains within the crystal lattice. The Curie temperature is seen to increase greatly due to the electrons being packed together in the same plane: they are forced to align by the exchange interaction, which increases the strength of the magnetic moments and prevents thermal disorder at lower temperatures.

Curie temperature in ferroelectric and piezoelectric materials

In analogy to ferromagnetic and paramagnetic materials, the Curie temperature can also be used to describe the temperature where a material's spontaneous electric polarisation changes to induced electric polarisation, or vice versa. Electric polarisation is a result of aligned electric dipoles.
Aligned electric dipoles are composites of positive and negative charges in which all the dipoles face in one direction. The charges are displaced from their stable positions in the particles; this can occur spontaneously, from pressure, or from an applied electric field. Ferroelectric, dielectric (paraelectric) and piezoelectric materials have electric polarisation. In ferroelectric materials there is a spontaneous electric polarisation in the absence of an applied electric field. In dielectric materials the electric polarisation is aligned only when an electric field is applied. Piezoelectric materials have electric polarisation due to applied mechanical stress distorting the structure under pressure. T0 is the temperature at which ferroelectric materials lose their spontaneous polarisation as a first- or second-order phase change occurs, that is, the internal structure or the internal symmetry changes. In certain cases T0 is equal to the Curie temperature; however, the Curie temperature can be 10 kelvin lower than T0.

All ferroelectric materials are piezoelectric. An external force applies pressure on particles inside the material, which affects the structure of the crystal lattice. Particles in a unit cell become unsymmetrical, which allows a net polarisation from each particle. Symmetry would cancel the opposing charges out and there would be no net polarisation. Below the transition temperature T0 displacement of electric charges causes polarisation. Above the transition temperature T0 the structure is cubic and symmetric, causing the material to become dielectric. Electric charges are also agitated and disordered, causing the material to have no electric polarisation in the absence of an applied electric field.

Ferroelectric and dielectric: Materials are only ferroelectric below their corresponding transition temperature T0. Ferroelectric materials are all piezoelectric and therefore have a spontaneous electric polarisation, as the structures are unsymmetrical. Materials are only dielectric above their corresponding transition temperature T0. Dielectric materials have no electric polarisation in the absence of an applied electric field; the electric dipoles are unaligned and have no net polarisation. In analogy to magnetic susceptibility, electric susceptibility only occurs above T0. Ferroelectric materials, when polarised, exhibit hysteresis (Figure 4); that is, they are dependent on their past state as well as their current state. As an electric field is applied the dipoles are forced to align and polarisation is created; when the electric field is removed, polarisation remains. The hysteresis loop depends on temperature, and as a result, as the temperature is increased and reaches T0, the two curves become one curve, as shown in the dielectric polarisation (Figure 5).

A heat-induced ferromagnetic-paramagnetic transition is used in magneto-optical storage media, for erasing and writing of new data. Famous examples include the Sony Minidisc format, as well as the now-obsolete CD-MO format. Other uses include temperature control in soldering irons, and stabilizing the magnetic field of tachometer generators against temperature variation.
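As a small numerical illustration of the mean-field expressions quoted earlier in this article (the Curie-Weiss susceptibility χ = C/(T - Tc) above the Curie point, and the spontaneous magnetisation M ∝ (Tc - T)^β with β = 0.5 below it), here is a hedged Python sketch. The Curie temperature, Curie constant and magnetisation scale used are illustrative stand-ins, not values for any particular material mentioned above.

# Mean-field sketch of behaviour around the Curie temperature:
#   above Tc: Curie-Weiss susceptibility  chi(T) = C / (T - Tc)
#   below Tc: spontaneous magnetisation   M(T) = M0 * (1 - T/Tc)**beta, beta = 0.5
# Tc, C and M0 are arbitrary illustrative values, not data from the article.

TC = 1000.0   # K, assumed Curie temperature
C = 1.0       # Curie constant (arbitrary units)
M0 = 1.0      # zero-temperature magnetisation (arbitrary units)
BETA = 0.5    # mean-field critical exponent

def susceptibility(t_kelvin):
    """Curie-Weiss susceptibility; only meaningful above the Curie temperature."""
    if t_kelvin <= TC:
        raise ValueError("Curie-Weiss form applies above the Curie temperature only")
    return C / (t_kelvin - TC)

def magnetisation(t_kelvin):
    """Mean-field spontaneous magnetisation; zero at and above Tc."""
    if t_kelvin >= TC:
        return 0.0
    return M0 * (1.0 - t_kelvin / TC) ** BETA

for T in (200, 600, 900, 990, 1010, 1100, 2000):
    if T < TC:
        print(f"T = {T:5d} K  M = {magnetisation(T):.3f} (ferromagnetic side)")
    else:
        print(f"T = {T:5d} K  chi = {susceptibility(T):.4f} (paramagnetic side)")

Running the sketch shows the two signatures discussed in the text: the magnetisation falling smoothly to zero as T approaches Tc from below, and the susceptibility blowing up as T approaches Tc from above.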
<urn:uuid:f97fa1c0-16d4-40c6-8a1f-0f8a38f52922>
4.21875
6,772
Knowledge Article
Science & Tech.
37.061098
1,773
Creature Spotlight of the Week - Orca
BY EARTHFIRST

The killer whale, commonly referred to as the orca, is a toothed whale belonging to the oceanic dolphin family. Killer whales are found in all oceans, from the freezing Arctic and Antarctic regions to tropical seas. Killer whales as a species have a diverse diet: they feed on fish and also on marine mammals such as sea lions, seals, walruses and even large whales. Killer whales are highly social sea mammals that have powerful hunting skills, which enable them to kill sea creatures that are much larger than they are. This is partly where their name originated.

Many people believe that orcas should not be held in captivity. In aquariums, orcas tend to live for about 13 years, compared to those in the wild, which live 40 to 60 years. The IUCN has assessed the orca's conservation status: it is not currently listed as a threatened species; however, numbers have declined over the years for a variety of reasons, including declining food stocks, habitat loss, pollution (by PCBs), capture for marine mammal parks, and conflicts with fisheries.

For more info visit the WWF
<urn:uuid:2c91a7e2-624f-4f8f-a586-14d6382e9f2f>
3.546875
241
Knowledge Article
Science & Tech.
42.82976
1,774
Temperature change in the Alps and their sub-regions according to different emission scenarios
: Nov 12, 2009 : Oct 02, 2009 : Nov 29, 2012 11:37 AM
Regional statistics: G = Greater Alpine Region, A = Alps, NW = north-western Alps, NE = north-eastern Alps, SW = south-western Alps, SE = south-eastern Alps, H = higher than 1 500 m.
This is the latest published version.
- Climate change
- Atmospheric concentration of Nitrous Oxide (ppb)
- Projections for combined changes in temperature and precipitation
- Socio-economic projections for the European Union
<urn:uuid:5d4224fb-e395-40ef-b54b-d998b5cec6ba>
2.625
129
Content Listing
Science & Tech.
26.97122
1,775
A Tale of Two Lakes: One Gives Early Warning Signal for Ecosystem Collapse First experimental evidence that radical ecosystem change can be detected in advance Researchers eavesdropping on complex signals from a remote Wisconsin lake have detected what they say is an unmistakable warning--a death knell--of the impending collapse of the lake's aquatic ecosystem. The finding, reported in the journal Science by a team of researchers led by Stephen Carpenter, an ecologist at the University of Wisconsin-Madison (UW-Madison), is the first experimental evidence that radical change in an ecosystem can be detected in advance, possibly in time to prevent ecological catastrophe. "For a long time, ecologists thought these changes couldn't be predicted," says Carpenter. "But we've now shown that they can be foreseen. The early warning is clear. It is a strong signal." "This research shows that, with careful monitoring, we can foresee shifts in the structure of ecosystems despite their complexity," agrees Alan Tessier, program director in NSF's Division of Environmental Biology. "The results point the way for ecosystem management to become a predictive science." The findings suggest that, with the right kind of monitoring, it may be possible to track the vital signs of any ecosystem and intervene in time to prevent what is often irreversible damage to the environment. "With more work, this could revolutionize ecosystem management," Carpenter says. "The concept has now been validated in a field experiment and the fact that it worked in this lake opens the door to testing it in rangelands, forests and marine ecosystems." "Networks for long-term ecological observation, such as the [NSF] Long-Term Ecological Research network, increase the possibility of detecting early warnings through comparisons across sites and among regions," the scientists write in their paper. Ecosystems often change in radical ways. Lakes, forests, rangelands, coral reefs and many other ecosystems are often transformed by overfishing, insect pests, chemical changes in the environment, overgrazing and shifting climate. For humans, ecosystem change can impact economies and livelihoods such as when forests succumb to an insect pest, rangelands to overgrazing, or fisheries to overexploitation. A vivid example of a collapsed resource is the Atlantic cod fishery. Once the most abundant and sought-after fish in the North Atlantic, cod stocks collapsed in the 1990s due to overfishing, causing widespread economic hardship in New England and Canada. Now, the ability to detect when an ecosystem is approaching the tipping point could help prevent such calamities. In the new study, the Wisconsin researchers, collaborating with scientists at the Cary Institute for Ecosystem Studies in Millbrook, N.Y., the University of Virginia in Charlottesville and St. Norbert College in De Pere, Wis., focused their attention on Peter and Paul Lakes, two isolated and undeveloped lakes in northern Wisconsin. Peter is a six-acre lake whose biota were manipulated for the study and nearby Paul served as a control. An explosion of largemouth bass young-of-year accelerated the manipulated lake's changes. Credit: Tim Cline The group led by Carpenter experimentally manipulated Peter Lake over a three-year period by gradually adding predatory largemouth bass to the lake, which was previously dominated by small fish that consumed water fleas, a type of zooplankton. 
The purpose, Carpenter notes, was to destabilize the lake's food web to the point where it would become an ecosystem dominated by large predators. In the process, the researchers expected to see a relatively rapid cascading change in the lake's biological community, one that would affect all its plants and animals in significant ways. "We start adding these big ferocious fish and almost immediately this instills fear in the other fish," Carpenter says. "The small fish begin to sense there is trouble and they stop going into the open water and instead hang around the shore and structures, things like sunken logs. They become risk-averse."

The biological upshot, says Carpenter, is that the lake became "water flea heaven." The system becomes one where the phytoplankton, the preferred food of the lake's water fleas, is highly variable. "The phytoplankton get hammered and at some point the system snaps into a new mode," says Carpenter. Throughout the lake's three-year manipulation, all its chemical, biological and physical vital signs were continuously monitored to track even the smallest changes that would announce what ecologists call a "regime shift," where an ecosystem undergoes radical and rapid change from one type to another. It was in these massive sets of data that Carpenter and his colleagues were able to detect the signals of the ecosystem's impending collapse.

Ecologists first discovered similar signals in computer simulations of spruce budworm outbreaks. Every few decades the insect's populations explode, causing widespread deforestation in boreal forests in Canada. Computer models of a virtual outbreak, however, seemed to undergo odd blips just before the outbreak. The problem was solved by William "Buz" Brock, a UW-Madison economist who for decades has worked on the mathematical connections of economics and ecology.

Automated equipment monitored lake conditions every five minutes. Credit: James Coloso

Brock utilized a branch of applied mathematics known as bifurcation theory to show that the odd behavior was in fact an early warning of catastrophic change. In short, he devised a way to sense the transformation of an ecosystem by detecting subtle changes in the system's natural patterns of variability. The upshot of the Peter Lake field experiment, says Carpenter, is a validated statistical early warning system for ecosystem collapse. The catch, however, is that for the early warning system to work, intense and continuous monitoring of an ecosystem's chemistry, physical properties and biota is required. Such an approach may not be practical for every threatened ecosystem, says Carpenter, but he also cites the price of doing nothing. "These regime shifts tend to be hard to reverse. It is like a runaway train once it gets going and the costs--both ecological and economic--are high."

In addition to Carpenter and Brock, authors of the Science paper include Jonathan Cole of the Cary Institute of Ecosystem Studies; Michael Pace, James Coloso and David Seekell of the University of Virginia at Charlottesville; James Hodgson of St. Norbert College; and Ryan Batt, Tim Cline, James Kitchell, Laura Smith and Brian Weidel of UW-Madison.

April 28, 2011
- Cheryl Dybas, NSF (703) 292-7734 email@example.com
- Terry Devitt, University of Wisconsin-Madison (608) 262-8282 firstname.lastname@example.org
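The "subtle changes in the system's natural patterns of variability" that served as the warning signal can be illustrated with a generic sketch. The Python example below computes two standard early-warning indicators, rolling variance and lag-1 autocorrelation, on a synthetic time series whose stability is slowly degraded. It is not the statistical procedure used by Carpenter's group, and every parameter is invented purely for illustration.

# Generic early-warning-indicator sketch: rolling variance and lag-1 autocorrelation
# tend to rise as a system loses stability ("critical slowing down").
# Synthetic data only; this is not the analysis from the Peter Lake study.
import random

random.seed(1)

def simulate(n=2000):
    """AR(1)-like series whose damping weakens over time (stability slowly lost)."""
    x, series = 0.0, []
    for t in range(n):
        phi = 0.3 + 0.65 * t / n          # autocorrelation drifts toward 1.0
        x = phi * x + random.gauss(0.0, 1.0)
        series.append(x)
    return series

def rolling_indicators(series, window=250):
    """Return (variance, lag-1 autocorrelation) for each full window."""
    out = []
    for start in range(0, len(series) - window, window):
        w = series[start:start + window]
        mean = sum(w) / window
        var = sum((v - mean) ** 2 for v in w) / window
        cov1 = sum((w[i] - mean) * (w[i + 1] - mean) for i in range(window - 1)) / (window - 1)
        out.append((var, cov1 / var if var > 0 else 0.0))
    return out

for i, (var, ac1) in enumerate(rolling_indicators(simulate())):
    print(f"window {i}: variance = {var:6.2f}, lag-1 autocorrelation = {ac1:5.2f}")

In this toy setup both indicators climb from window to window, which is the kind of statistical "blip" that, in the real study, announced the approaching regime shift before it happened.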
<urn:uuid:675a3561-119c-42cd-ba4c-f6d6aefaa02f>
3.21875
1,387
News (Org.)
Science & Tech.
28.616294
1,776
The prime objective of the EUMETSAT Polar System (EPS) Metop mission series is to provide continuous, long-term datasets, in support of operational meteorological and environmental forecasting and global climate monitoring. The EPS programme consists of a series of three polar-orbiting Metop satellites, to be flown successively for more than 14 years, from 2006, together with the relevant ground facilities. Metop-A was launched on 19 October 2006 and Metop-B was launched on 17 September 2012. Metop carries a set of 'heritage' instruments provided by the United States and a new generation of European instruments that offer improved remote sensing capabilities to both meteorologists and climatologists. The new instruments will augment the accuracy of temperature and humidity measurements, readings of wind speed and direction, and atmospheric ozone profiles.

Numerical weather prediction (NWP) is the basis of all modern global and regional weather forecasting. The data generated by the instruments carried by Metop can be assimilated directly into NWP models to compute forecasts ranging from a few hours up to 10 days ahead. Measurements from infrared and microwave radiometers and sounders on board Metop provide NWP models with crucial information on the global atmospheric temperature and humidity structure, with a high vertical and horizontal resolution. EPS also ensures continuity in the long-term monitoring of factors known to play an important role in climate change, for example changing patterns in the distribution of global cloud, snow and ice cover, and ocean surface temperatures and winds. In particular, the Infrared Atmospheric Sounding Interferometer (IASI) instrument has the ability to detect and accurately measure the levels and circulation patterns of gases that are known to influence the climate, such as carbon dioxide. This heralds a breakthrough in the global monitoring of the climate. The data collected by IASI feeds into the models, for the first time showing the variable global distribution of carbon dioxide as a function of seasons and circulation anomalies such as the Southern Oscillation (also known as El Niño) and the North Atlantic Oscillation (NAO).

EPS Programme Background

EPS is the European contribution to a joint European-US satellite system, called the Initial Joint Polar-Orbiting Operational Satellite System (IJPS). This is an agreement between EUMETSAT and the National Oceanic and Atmospheric Administration (NOAA). The terms of this partnership were first cemented through an agreement concluded in 1998. To develop EPS there are also cooperative agreements with the European Space Agency (ESA) and the Centre National d'Etudes Spatiales (CNES). The Initial Joint Polar-orbiting System comprises a Metop satellite from Europe and a NOAA satellite from the USA. The satellites fly in complementary orbits designed to ensure global data coverage at intervals of no more than six hours. EUMETSAT is responsible for coordinating all elements of the development, launch and operation of EPS satellites. This includes developing and procuring the ground segment, procuring the launcher and launch site, and operating the systems. Under the IJPS and Joint Transition Activities (JTA) agreement, EUMETSAT and NOAA have agreed to provide instruments for each other's satellites, exchange all data in real time, and assist each other with backup services. Other partners are the European Space Agency (ESA) and the Centre National d'Etudes Spatiales (CNES) of France.
The IJPS is a cooperative effort between EUMETSAT and the National Oceanic and Atmospheric Administration (NOAA). It is comprised of two polar-orbiting satellite systems and their respective ground segments. The IJPS programme contributes to and supports the World Meteorological Organization (WMO) Global Observing System, the Global Climate Observing System, the United Nations Environmental Programme (UNEP), the Intergovernmental Oceanographic Commission (IOC), and other related programmes. Each satellite has a nominal lifetime in orbit of five years, with a six month overlap between the consecutive satellites (i.e. between Metop-A and Metop-B, and between Metop-B and Metop-C), providing more than 14 years of service. The European and American satellites carry a set of identical sensors: AVHRR/3 and the ATOVS suite consisting of AMSU-A, HIRS/4 and MHS. NOAA provides most of the joint instruments on board the satellites and EUMETSAT has developed and provides NOAA with the Microwave Humidity Sounder (MHS). In addition, the Metop satellites carry a set of European sensors, IASI, ASCAT, GOME-2 and GRAS, aimed at improving atmospheric soundings, as well as measuring atmospheric ozone and near-surface wind vectors over the ocean. Metop flies in a polar (Low Earth) orbit corresponding to local 'morning' while the US is responsible for 'afternoon' coverage. The series will provide data for both operational meteorology and climate studies. The combination of instruments on board Metop has remote sensing capabilities to observe the Earth by day and night, as well as under cloudy conditions. The EPS Space Segment includes three successive Metop satellites and is being developed and procured on the basis of cooperation between EUMETSAT and the European Space Agency (ESA). A cooperative agreement with CNES covers the development of the Infrared Atmospheric Sounding Interferometer (IASI). EUMETSAT is responsible for the definition of the overall EPS system, the development and operations of the ground segment, and for the operation of the whole system.
<urn:uuid:dbeaa8e1-28c8-4344-ba39-ae853c6d913a>
3.1875
1,132
About (Org.)
Science & Tech.
18.763561
1,777
C14 - The Double Cluster Algol - Eclipsing Binary Star Perseus lies along the plane of the Milky Way, but as we view it, we are looking away from the galactic centre, so that there are fewer clusters within its boundaries than when looking towards Sagittarius or Scorpius. However, the two clusters that form the Perseus Double Cluster provide one of the best binocular sights in the heavens and Perseus also harbours a very interesting star system, Algol. Whilst, for northern observers, Perseus comes high overhead in the latter months of the year, for southern observers it only just rises above the northern horizon so that, whilst they should be able to observe Algol, sadly, the Double Cluster will lie below their horizon. C14 - Twin Open Clusters B M Visible to the unaided eye as a hazy patch in the Milky Way, binoculars or a small telescope at low power can show both these two beautiful clusters in the same field of view. They are most easily found by sweeping with your eyes, binoculars or finder scope to the east and a little south of Cassiopeia, following the line set by its bright stars Gamma and Delta. The bright cores of the two clusters are separated by just less than one Moon diameter, 25 arc minutes, and together they cover over a degree of sky. Given their separation and individual visual brightness of between 4th and 5th magnitude, one should be able to see them as separate entities. But this is not usually the case. Surprisingly perhaps, the best chance to do so is by observing them just as twilight ends; when they first appear to the eye but the background stars of the Milky Way are still invisible. (In a similar fashion, the brighter stars of a constellation - those that form the patterns that we learn - show up far more clearly under twilight or light-polluted conditions than when seen in really dark conditions. This is why you are advised to learn the shapes and locations of the constellations when the sky conditions are not too good!) The two clusters, also known as h and Chi Persei, are a beautiful sight in 10x50 binoculars; each cluster having a bright centre and many individually resolved stars. With a low-power eyepiece both can be seen in the same field and then, moving up to medium power, each can be observed in detail. They lie in the Perseus spiral arm of the Milky Way some 7300 light years away, and were both formed about 3 million years ago. Position: 2h 20.5m +57deg 08min Algol - Eclipsing Binary Star E B Algol is one of the most remarkable and most famous individual stars in the sky. Its Arabic name is Al Ghul, which means the 'demon' star (Ghul is related to 'ghoul', a ghost). Why a demon? Because it winks! Every 2.87 days its brightness quickly drops from magnitude 2.1 to 3.4 and then rises again to 2.1 over a period of 10 hours. John Goodricke of York was one of the first astronomers who discovered its regular brightness variations in 1782–3. Much later, in 1881, astronomers realised that the effect could be caused by a binary system in which the orbital plane of the two stars was almost in line with the Earth, so that every 2.87 days there is a partial eclipse! This is when the fainter star of the two comes in front of the brighter. In between each major drop in brightness, there is a much smaller drop as the brighter star comes in front of the fainter. The primary star is a blue B-type star with a surface temperature of 12,000K. The secondary is a much larger but dimmer K-type orange giant star.
Interestingly, the two stars do not seem to be following the normal rules of stellar evolution. More massive stars evolve faster than less massive ones, so the orange giant - which has evolved away from the main sequence - should be more massive than the blue primary star. But it has less mass! It appears that material may be flowing from the giant star (so reducing its mass) onto the normal star whose mass is thus increasing. Position: 3h 8.2m +40deg 57min
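Because Algol's dips repeat so regularly, you can work out when the next 'winks' will occur once you know the time of any one minimum (published in astronomy almanacs and variable-star handbooks). The short Python sketch below illustrates the arithmetic only; the reference minimum in it is a placeholder date, not a real observed minimum, and the 2.87-day period is the rounded figure quoted above.

    from datetime import datetime, timedelta

    PERIOD_DAYS = 2.87    # orbital period quoted above (rounded)

    # Placeholder epoch: replace with a recent time of primary minimum
    # taken from an almanac (this date is purely illustrative).
    reference_minimum = datetime(2024, 1, 1, 0, 0)

    def next_minima(after, count=5):
        """List the next `count` predicted primary eclipses after `after`."""
        # Whole periods elapsed since the reference minimum, plus one.
        cycles = int((after - reference_minimum) / timedelta(days=PERIOD_DAYS)) + 1
        return [reference_minimum + timedelta(days=PERIOD_DAYS * (cycles + i))
                for i in range(count)]

    # Times come out in the same timescale as the reference minimum you supply.
    for t in next_minima(datetime.now()):
        print(t.strftime("%Y-%m-%d %H:%M"))

Each predicted time marks mid-eclipse; the fade and recovery together span about 10 hours, so plan to start watching a few hours earlier.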
<urn:uuid:d4face32-c368-4fdf-a8be-4dc11dd9121c>
3
890
Knowledge Article
Science & Tech.
60.366302
1,778
LOS ANGELES (December 3, 2012)--NASA's well-traveled Voyager 1 spacecraft is in a new region at the edge of the solar system and is close to exiting it forever. Scientists call the region the "magnetic highway" and say it's the last stop before interstellar space, or the space between stars. The findings were presented Monday at a meeting of the American Geophysical Union meeting in San Francisco. Voyager 1 and its twin Voyager 2 launched 35 years ago on a tour of the outer planets. Afterward, both spacecraft continued to hurtle toward the fringes of the solar system. Mission chief scientist Ed Stone said it's unknown when Voyager 1 will finally break through to interstellar space, but once that happens, it'll be the first man-made object to leave the solar system.
<urn:uuid:85ae5109-16c4-4a23-a3cc-9c6f25722d19>
3.390625
168
News Article
Science & Tech.
54.597853
1,779
Lesser Long-nosed Bats: Nectar-powered Bat Math! Counting the Calories in Cactus Contributed by Ginny Dalton, Bat Biologist Merlin D. Tuttle, BCI Ever wonder: How much energy does it take to operate a bat? How many flowers must a bat visit to stay alive? Get ready to count the calories in a cactus flower! After reading this page, you'll know everything necessary to answer these questions: - How many saguaro flowers does a bat have to visit to sustain it for one day? - For how many minutes does a bat have to forage to get the nectar it needs? - How many saguaro plants do you think would be needed to feed a maternity colony of 10,000 Leptos? - What is the minimum number of hectares of land required to feed a bat colony containing 10,000 Leptos? How Much Energy to Operate a Bat? Let's look at how much energy is required by an adult Lepto. Here's an animal with a big energy demand for its size! (Now, nobody has ever measured this exactly for Leptos, but we can use information about other bats and extend the results logically.) It takes roughly 20.2 kcal to maintain one of these bats for a day. A Lepto uses about 100 times less energy in a day than a human does, as you can see on the chart below. But it weighs 2,000 times less. Obviously it takes more energy to operate an ounce of bat than an ounce of human! Why? Well, for one thing, bats fly--and the energetic cost of flight is high. *NOTE ABOUT CALORIES: There is confusion in the layman's literature regarding calories. The calorie in everyday use is actually a kilocalorie (kcal, also designated Calorie, with a capital "C"), 1000 times larger than a calorie (with a lower case "c"). In fact, the makers of labels on food boxes and cans in the grocery store are careless in their representation of this energy unit. The labels correctly use the upper case when stating total Calories of the food within, but then many of them say "based on a 2,000 calorie diet." Note the lower case "c." If literally interpreted, it is based on a 2 Cal (or kcal) diet, which doesn't make any sense. You and I can't live on 2 Calories a day!! They actually meant to write "Calorie;" instead they wrote "calorie." Calories in Cactus Flowers How much energy is available in the nectar of the flowers Leptos visit? (Lepto is short for Leptonycteris curasoae, or lesser long-nosed bats.) It has been calculated that the nectar in saguaro flowers is about 24% sugar. This nectar is very sweet: For comparison, Classic Coke is 10% sugar! Each flower holds about 1.0 ml (milliliter) nectar. A single bat only takes about 0.1 ml with each visit to a flower. A bat's stomach can hold about 4 ml of fluid when full. Of those stomachs measured, 3 ml are sugar water and the remaining 1 ml is pollen. There are about 4 calories (= 0.004 kcal) in a mg (milligram) of sugar. There is 1.0 mg sugar in each microliter (.001 ml) of nectar. So how many calories in 1.0 ml, the amount a flower holds? If a bat drains an entire flower, how many visits would the bat have to make to the flower? And how many total calories would it get from that single flower? (Don't forget to multiply your answer in cal/ml by 0.24 since the nectar contains only 24% sugar.) I got 960 cal (0.960 kcal) in a single flower that contains 1.0 ml nectar. So, since a bat takes about 0.1 ml for each visit, a bat would have to visit about 10 flowers to get those 960 cal. Q. How many flowers total does a bat have to visit to sustain it for one day (20,200 cal = 20.2 kcal)? _______ Q. If a bat visits 8 flowers per saguaro per night, how many saguaro plants would have to be visited? 
_______ It takes about 30 seconds per visit to a flower (that includes transit time to the flower) for one sip (0.1 ml) of nectar. That makes us wonder: Q. For how many minutes does a bat have to be flying to get the nectar it needs each day (24 hours)? _____________ More Bat Math: Mama and Baby Bats Each female gives birth to a single young each year. Just think how much energy will be needed by the pregnant bats after they give birth and start nursing! Not enough information is available for calculating the exact requirements for pregnant females, but a near-term fetus of a 22-gram female bat can weigh as much as 8 grams. A pregnant bat carrying that heavy a load can require about 40% more power for flight. Of all the calculations conducted on females during the various reproductive stages, lactation (when the young are nursing) is the most energetically demanding on the female. Assume that, for the two months a female is pregnant, she requires an average of 27 flowers per night, since pregnant females require more energy. Q. During pregnancy, how many flowers will one female require? ______ How many flowers will be required by 10,000 females? _______ From studies conducted around Tucson, Arizona, the average saguaro produces 295 flowers per plant per growing season. (A single plant blooms from 27 to 61 days and each plant can produce from 82 to 980 blossoms!) Q. How many saguaro plants do you think would be needed to feed a maternity colony of 10,000 Leptos? _________ Q. There are 6 saguaro plants per hectare on average. Using the figure of 295 flowers per plant per nearly 2-month season, what is the minimum number of hectares of land required to feed a bat colony containing 10,000 pregnant bats? _______________
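The arithmetic above is easy to tangle, so here is a small Python sketch that strings the given figures together (20.2 kcal per day, 24% sugar nectar, 1.0 ml per flower, 0.1 ml per sip, 4 cal per mg of sugar, 30 seconds per visit, 8 flowers per saguaro per night). It is offered only as a way to check your own answers, not as part of the original worksheet.

    # All inputs are the figures given in the text above.
    DAILY_NEED_CAL = 20_200            # 20.2 kcal per bat per day
    NECTAR_PER_FLOWER_ML = 1.0
    SIP_ML = 0.1
    SUGAR_FRACTION = 0.24
    CAL_PER_MG_SUGAR = 4
    MG_PER_MICROLITER = 1.0
    SECONDS_PER_VISIT = 30
    FLOWERS_PER_SAGUARO_NIGHT = 8

    # Calories available in one full flower (1 ml = 1000 microliters).
    cal_per_flower = (NECTAR_PER_FLOWER_ML * 1000 * MG_PER_MICROLITER
                      * CAL_PER_MG_SUGAR * SUGAR_FRACTION)            # -> 960 cal
    cal_per_sip = cal_per_flower * SIP_ML / NECTAR_PER_FLOWER_ML      # -> 96 cal

    flowers_per_day = DAILY_NEED_CAL / cal_per_flower                 # ~21 flowers
    sips_per_day = DAILY_NEED_CAL / cal_per_sip                       # ~210 sips
    foraging_minutes = sips_per_day * SECONDS_PER_VISIT / 60          # ~105 minutes
    saguaros_per_night = flowers_per_day / FLOWERS_PER_SAGUARO_NIGHT  # ~2.6 plants

    print(round(cal_per_flower), round(flowers_per_day),
          round(foraging_minutes), round(saguaros_per_night, 1))

The maternity-colony questions follow the same pattern: multiply the per-bat figure by 10,000 bats (and by 27 flowers per night for pregnant females over roughly two months), then divide by 295 flowers per plant and 6 plants per hectare.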
<urn:uuid:21b711b3-73d7-4ee0-8113-2f9006242494>
3.109375
1,359
Tutorial
Science & Tech.
69.242251
1,780
For a transaction to take place between a client and a server, a conversation must be established. A conversation is established when a client makes a request by broadcasting a service name and topic name, and a server responds. Transactions can then take place across the conversation. When no more transactions are to be made, the conversation is terminated. The following list identifies the elements involved with client/server activity: A conversation is established when a server responds to a client. A service name is a string broadcast by a client hoping to establish a conversation with a server that recognizes the service name. The service name is usually clearly related to the server application name. The topic name identifies what the conversation between client and server is to be about. For example, it could be the name of a file that is open in the server application. Each topic is attached to one particular server. A server can have many topics. The item usually identifies an element of the file identified by the topic which should be read (in the case of a request) or written to (in the case of a poke). For example, it might refer to a cell in a spreadsheet document. LispWorks User Guide and Reference Manual - 21 Dec 2011
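The four pieces of vocabulary above — service, topic, item and the conversation that ties them together — are easier to keep straight with a concrete picture. The Python sketch below is only a toy model of those roles, written for illustration; it is not the LispWorks DDE interface, and all of the names in it are invented.

    from dataclasses import dataclass, field

    @dataclass
    class Conversation:
        """Toy model of a DDE conversation (illustrative only)."""
        service: str                      # e.g. the server application's name
        topic: str                        # e.g. a file open in that server
        data: dict = field(default_factory=dict)

        def request(self, item):
            # Read the value of one item (e.g. a spreadsheet cell) of the topic.
            return self.data.get(item)

        def poke(self, item, value):
            # Write a value into one item of the topic.
            self.data[item] = value

    # A client broadcasts a service name and a topic; the server that
    # recognises them responds, establishing the conversation.
    conv = Conversation(service="EXCEL", topic="budget.xls")
    conv.poke("R1C1", 42)           # transaction: write an item
    print(conv.request("R1C1"))     # transaction: read it back
    # When no more transactions are needed, the conversation is terminated.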
<urn:uuid:ab27fc5f-2d86-45bb-8493-e416cd46d33e>
3.46875
247
Documentation
Software Dev.
42.490322
1,781
Moon Crater Shapes Date: 1999 - 2000 The craters on the moon are obviously round but impacts with a grazing incidence should leave a V-shaped crater. I would think that some percentage of the impacts would be at shallow angles of attack but this does not appear to be the case in photos. Have they discovered such craters and if not why? It doesn't work that way. A crater is like a frozen ripple, and ripples are round. There is an asymmetric distribution of matter in the ripple, and in the blanket of stuff ejected from the surface, that does record the angle of incidence. These are superimposed on the ripple, but it's the ripple that is most noticeable because of its sharp rim. Actually, impacts at a grazing angle will also make a round hole. This was a controversial point regarding Meteor Crater in Arizona, which was basically the first site recognized on Earth as a meteor impact site. The fact that no meteorite fragments were found in the earth below the center of the crater was initially taken as evidence against the hypothesis that the structure was an impact crater, but further research showed that if the impact is very energetic, such as a rifle bullet fired into mud, the resulting hole is round, even if the impact angle is quite flat. Richard Barrans Jr., Ph.D. Update: June 2012
<urn:uuid:b485f2c8-49c9-4268-91c3-d88b5c45c43b>
4.15625
299
Comment Section
Science & Tech.
53.441538
1,782
Northern Research Station 11 Campus Blvd., Suite 200 Newtown Square, PA 19073 (610) 557-4132 TTY/TDD Title: Carbon stocks in urban forest remnants: Atlanta and Baltimore as case studies. Chapter 5. Author: Yesilonis, Ian D.; Pouyat, Richard V. Publication: Dordrecht, Netherlands: Springer: 103-120. Key Words: urban, forest, carbon, land use, management Abstract: Urban environments influence carbon (C) and nitrogen (N) cycles of forest ecosystems by altering plant biomass, litter mass and chemistry, passive and active pools of C and N, and the occurrence and activity of decomposer organisms. It is difficult to determine the net effect of C storage due to the number of environmental factors exerting stress on urban forests. Using a conceptual model to synthesize results from gradient studies of forest patches in metropolitan areas, we attempt to explain the mechanisms affecting C cycling. We also assess the relative importance of C accumulation in urban remnant forests with respect to other land uses previously disturbed or managed. The cities of Baltimore and Atlanta are used as case studies. The C density of forest above-ground biomass for Baltimore City, 8 kg m-3, and Atlanta, 10. 6 kg m-3, is significantly higher for both medium- and high-density residential areas. Baltimore City has a forest-soil C density of 10.6 kg m-3, a below-to-above ground ratio of 1.3. Urban forest remnants in these two cities store a high amount of C on a per-unit basis both above- and below ground relative to other land uses, but total C storage is lower due to the lower acreage of urban forest in these cities relative to other land uses. This document is in PDF format. You can obtain a free PDF reader from Adobe. Forest Service Home | USDA.gov | recreation.gov
<urn:uuid:3f7c389f-7d96-446d-960d-7c8cbbc49b80>
2.53125
400
Academic Writing
Science & Tech.
53.873
1,783
The Law of Momentum Conservation Momentum Conservation in Explosions As discussed in a previous part of Lesson 2, total system momentum is conserved for collisions between objects in an isolated system. For collisions occurring in isolated systems, there are no exceptions to this law. This same principle of momentum conservation can be applied to explosions. In an explosion, an internal impulse acts in order to propel the parts of a system (often a single object) into a variety of directions. After the explosion, the individual parts of the system (that is often a collection of fragments from the original object) have momentum. If the vector sum of all individual parts of the system could be added together to determine the total momentum after the explosion, then it should be the same as the total momentum before the explosion. Just like in collisions, total system momentum is conserved. Momentum conservation is often demonstrated in a Physics class with a homemade cannon demonstration. A homemade cannon is placed upon a cart and loaded with a tennis ball. The cannon is equipped with a reaction chamber into which a small amount of fuel is inserted. The fuel is ignited, setting off an explosion that propels the tennis ball through the muzzle of the cannon. The impulse of the explosion changes the momentum of the tennis ball as it exits the muzzle at high speed. The cannon experienced the same impulse, changing its momentum from zero to a final value as it recoils backwards. Due to the relatively larger mass of the cannon, its backwards recoil speed is considerably less than the forward speed of the tennis ball. In the exploding cannon demonstration, total system momentum is conserved. The system consists of two objects - a cannon and a tennis ball. Before the explosion, the total momentum of the system is zero since the cannon and the tennis ball located inside of it are both at rest. After the explosion, the total momentum of the system must still be zero. If the ball acquires 50 units of forward momentum, then the cannon acquires 50 units of backwards momentum. The vector sum of the individual momenta of the two objects is 0. Total system momentum is conserved. As another demonstration of momentum conservation, consider two low-friction carts at rest on a track. The system consists of the two individual carts initially at rest. The total momentum of the system is zero before the explosion. One of the carts is equipped with a spring-loaded plunger that can be released by tapping on a small pin. The spring is compressed and the carts are placed next to each other. The pin is tapped, the plunger is released, and an explosion-like impulse sets both carts in motion along the track in opposite directions. One cart acquires a rightward momentum while the other cart acquires a leftward momentum. 
If 20 units of forward momentum are acquired by the rightward-moving cart, then 20 units of backwards momentum is acquired by the leftward-moving cart. The vector sum of the momentum of the individual carts is 0 units. Total system momentum is conserved. Just like in collisions, the two objects involved encounter the same force for the same amount of time directed in opposite directions. This results in impulses that are equal in magnitude and opposite in direction. And since an impulse causes and is equal to a change in momentum, both carts encounter momentum changes that are equal in magnitude and opposite in direction. If the exploding system includes two objects or two parts, this principle can be stated in the form of an equation as: m1 · Δv1 = - m2 · Δv2. If the masses of the two objects are equal, then their post-explosion velocity will be equal in magnitude (assuming the system is initially at rest). If the masses of the two objects are unequal, then they will be set in motion by the explosion with different speeds. Yet even if the masses of the two objects are different, the momentum change of the two objects (mass · velocity change) will be equal in magnitude. The diagram below depicts a variety of situations involving explosion-like impulses acting between two carts on a low-friction track. The mass of the carts is different in each situation. In each situation, total system momentum is conserved as the momentum change of one cart is equal and opposite the momentum change of the other cart. In each of the above situations, the impulse on the carts is the same - a value of 20 kg·cm/s (or cN·s). Since the same spring is used, the same impulse is delivered. Thus, each cart encounters the same momentum change in every situation - a value of 20 kg·cm/s. For the same momentum change, an object with twice the mass will encounter one-half the velocity change. And an object with four times the mass will encounter one-fourth the velocity change. Since total system momentum is conserved in an explosion occurring in an isolated system, momentum principles can be used to make predictions about the resulting velocity of an object. Problem solving for explosion situations is a common part of most high school physics experiences. Consider for instance the following problem: A 56.2-gram tennis ball is loaded into a 1.27-kg homemade cannon. The cannon is at rest when it is ignited. Immediately after the impulse of the explosion, a photogate timer measures the cannon to recoil backwards a distance of 6.1 cm in 0.0218 seconds. Determine the post-explosion speed of the cannon and of the tennis ball. Like any problem in physics, this one is best approached by listing the known information. Cannon: m = 1.27 kg; d = 6.1 cm; t = 0.0218 s. Tennis ball: m = 56.2 g = 0.0562 kg. The strategy for solving for the speed of the cannon is to recognize that the cannon travels 6.1 cm at a constant speed in the 0.0218 seconds. The speed can be assumed constant since the problem states that it was measured after the impulse of the explosion when the acceleration had ceased. Since the cannon was moving at constant speed during this time, the distance/time ratio will provide a post-explosion speed value: vcannon = d/t = (6.1 cm)/(0.0218 s) = 280 cm/s, directed backwards. The strategy for solving for the post-explosion speed of the tennis ball involves using momentum conservation principles. If momentum is to be conserved, then the after-explosion momentum of the system must be zero (since the pre-explosion momentum was zero). 
For this to be true, then the post-explosion momentum of the tennis ball must be equal in magnitude (and opposite in direction) to that of the cannon. That is, mball · vball = - mcannon · vcannon. The negative sign in the above equation serves the purpose of making the momenta of the two objects opposite in direction. Now values of mass and velocity can be substituted into the above equation to determine the post-explosion velocity of the tennis ball. (Note that a negative velocity has been inserted for the cannon's velocity.) vball = - (1.27 kg) (-280 cm/s) / (0.0562 kg) vball = 6323.26 cm/s vball = 63.2 m/s Using momentum conservation, the ball is propelled forward with a speed of 63.2 m/s - that's 141 miles/hour! It's worth noting that another method of solving for the ball's velocity would be to use a momentum table similar to the one used previously in Lesson 2 for collision problems. In the table, the pre- and post-explosion momentum of the cannon and the tennis ball are listed. This is illustrated below. Cannon: momentum before = 0; momentum after = (1.27 kg) · (-280 cm/s) = -355 kg·cm/s. Tennis ball: momentum before = 0; momentum after = (0.0562 kg) · v. Total: momentum before = 0; momentum after = -355 kg·cm/s + (0.0562 kg) · v. The variable v is used for the post-explosion velocity of the tennis ball. Using the table, one would state that the sum of the cannon and the tennis ball's momentum after the explosion must sum to the total system momentum of 0 as listed in the last row of the table. Thus, -355 kg·cm/s + (0.0562 kg) · v = 0. Solving for v yields 6323 cm/s or 63.2 m/s - consistent with the previous solution method. Using the table means that you can use the same problem solving strategy for both collisions and explosions. After all, it is the same momentum conservation principle that governs both situations. Whether it is a collision or an explosion, if it occurs in an isolated system, then each object involved encounters the same impulse to cause the same momentum change. The impulse and momentum change on each object are equal in magnitude and opposite in direction. Thus, the total system momentum is conserved. 1. Two pop cans are at rest on a stand. A firecracker is placed between the cans and lit. The firecracker explodes and exerts equal and opposite forces on the two cans. Assuming the system of two cans to be isolated, the post-explosion momentum of the system ____. a. is dependent upon the mass and velocities of the two cans b. is dependent upon the velocities of the two cans (but not their mass) c. is typically a very large value d. can be a positive, negative or zero value e. is definitely zero 2. Students of varying mass are placed on large carts and deliver impulses to each other's carts, thus changing their momenta. In some cases, the carts are loaded with equal mass; in other cases they are unequal. In some cases, the students push off each other; in other cases, only one team does the pushing. For each situation, list the letter of the team that ends up with the greatest momentum. If they have the same momentum, then do not list a letter for that situation. Enter the four letters (or three or two or ...) in alphabetical order. 3. Two ice dancers are at rest on the ice, facing each other with their hands together. They push off on each other in order to set each other in motion. The subsequent momentum change (magnitude only) of the two skaters will be ____. a. greatest for the skater who is pushed upon with the greatest force b. greatest for the skater who pushes with the greatest force c. the same for each skater d. greatest for the skater with the most mass e. greatest for the skater with the least mass 4. A 62.1-kg male ice skater is facing a 42.8-kg female ice skater. They are at rest on the ice. 
They push off each other and move in opposite directions. The female skater moves backwards with a speed of 3.11 m/s. Determine the post-impulse speed of the male skater. 5. A 1.5-kg cannon is mounted on top of a 2.0-kg cart and loaded with a 52.7-gram ball. The cannon, cart, and ball are moving forward with a speed of 1.27 m/s. The cannon is ignited and launches a 52.7-gram ball forward with a speed of 75 m/s. Determine the post-explosion velocity of the cannon and cart.
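For readers who like to check the arithmetic, here is a small Python sketch of the same momentum-conservation bookkeeping used in the worked cannon example above. It is only a verification aid written for this page, not part of the original lesson, and the same two lines of algebra can be reused for practice problem 4.

    # Worked example: cannon (1.27 kg) recoils 6.1 cm in 0.0218 s;
    # tennis ball mass 56.2 g; the system starts at rest.
    m_cannon, m_ball = 1.27, 0.0562          # kg
    v_cannon = -(6.1 / 0.0218)               # cm/s, negative = backwards
    # Total momentum stays zero: m_ball*v_ball + m_cannon*v_cannon = 0
    v_ball = -m_cannon * v_cannon / m_ball   # cm/s
    print(round(v_ball / 100, 1), "m/s")     # -> 63.2 m/s

    # Problem 4: skaters push off from rest, so momenta are equal and opposite.
    m_male, m_female, v_female = 62.1, 42.8, 3.11   # kg, kg, m/s
    v_male = m_female * v_female / m_male
    print(round(v_male, 2), "m/s")           # male skater's post-impulse speed

Problem 5 works the same way, except that the initial momentum of the loaded cannon-and-cart system is not zero, so the before and after momenta must be set equal rather than set to zero.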
<urn:uuid:9b183fd2-00c9-493b-8c99-e1c8a1a5edc9>
3.9375
2,390
Tutorial
Science & Tech.
60.538752
1,784
|Jul29-06, 08:44 AM||#1| Question on light intensity at the focus of a lens. From using a magnifying lens under the Sun, I gather it can focus all or most of the light impinging on its surface area to a small spot and that is how it is able to create a greater intensity light at its focus (disregarding absorption in the lens.) Correct? What portion of the total light falling on the lens would be delivered to the focal spot ignoring absorption? This would be true no matter how far away the focal point. So the normal diminution of intensity by the square of distance would not apply. So for example if you had a lens of focal distance 1 AU and put this lens right next to the Sun and directed the lens to shine toward the Earth, the full intensity of the Sun at its surface could be delivered to the Earth. True? In a more realistic scenario if you put the lens some ten to hundreds of thousands of kilometers away from the Sun's surface so it could survive the heating then the intensity delivered to the Earth would still be many times the Sun's normal intensity at the Earth. So for example taking 1 AU as about 150,000,000 km, if we made the focus of the lens be 1 AU and put it 150,000 km from the Sun. Then the intensity of the light at the surface of the lens would be 1000^2 = 1,000,000 (one million) times greater than that normally at the Earth. The lens would deliver all or a large portion of the light falling on it to the focal spot on the Earth. If the area of this spot was 1/1000th that of the area of the lens, then the intensity at the focal spot would then be 1,000,000,000 (one billion) times the intensity normally at the Earth. The intensification of the light at the focus is familiar with a convergent lens, such as with a magnifying lens. But if you had a diverging lens so the focal spot was larger than the lens then the intensity would be less than at the surface of the lens. But the total amount of light delivered to the focal spot would still be all or a large portion of that falling on the surface of the lens. And this would still be true no matter how far is the focal length. Correct? |Jul29-06, 10:21 AM||#2| |Jul30-06, 07:44 PM||#3| You can do a sort of sanity check type of experiment by testing this with an ordinary light bulb as a light source. There is no question that, by moving the lens closer to the source, you can capture more light, BUT you can't focus it as tightly with a longer focal length lens. These are the types of interplays you can test. Finally I will remark that achieving intensities 1 billion times greater than normal sunlight at the Earth's surface is actually not that significant when you consider that high powered lasers can achieve peak intensities at and above this level. Nonetheless an interesting thought experiment.
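The scaling described in the first post is easy to put into numbers. The sketch below is a rough back-of-the-envelope check written for this page (it is not from the original thread): it reproduces the two ratios the poster uses — the gain from sitting 1,000 times closer to the Sun than the Earth does, and the further gain from focusing onto a spot with 1/1000th the area of the lens — and ignores absorption and every practical limit on how small the focal spot can really be.

    AU_KM = 150_000_000          # 1 AU, as rounded in the post
    lens_dist_km = 150_000       # lens placed this far from the Sun
    spot_to_lens_area = 1 / 1000 # focal-spot area relative to lens area

    # Sunlight intensity falls off as 1/r^2, so moving 1000x closer
    # raises the intensity at the lens by 1000^2.
    gain_at_lens = (AU_KM / lens_dist_km) ** 2        # 1,000,000x

    # Squeezing the collected light onto a smaller spot multiplies
    # the intensity by the ratio of the areas.
    gain_at_spot = gain_at_lens / spot_to_lens_area   # 1,000,000,000x

    print(f"{gain_at_lens:.0e}x at the lens, {gain_at_spot:.0e}x at the focus")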
<urn:uuid:32929577-cba4-4fdd-b572-8d2fe897b09f>
3.546875
753
Comment Section
Science & Tech.
62.022674
1,785
It is possible to create a database in a location other than the default location for the installation. Remember that all database access actually occurs through the database backend, so that any location specified must be accessible by the backend. Alternate database locations are created and referenced by an environment variable which gives the absolute path to the intended storage location. This environment variable must have been defined before the backend was started and must be writable by the postgres administrator account. Any valid environment variable name may be used to reference an alternate location, although using a variable name with a prefix of PGDATA is recommended to avoid confusion and conflict with other variables. Note: In previous versions of Postgres, it was also permissible to use an absolute path name to specify an alternate storage location. The environment variable style of specification is to be preferred since it allows the site administrator more flexibility in managing disk storage. If you prefer using absolute paths, you may do so by defining "ALLOW_ABSOLUTE_DBPATHS" and recompiling Postgres. To do this, either add the line #define ALLOW_ABSOLUTE_DBPATHS 1 to the file src/include/config.h, or specify CFLAGS+= -DALLOW_ABSOLUTE_DBPATHS in your Makefile.custom. Remember that database creation is actually performed by the database backend. Therefore, any environment variable specifying an alternate location must have been defined before the backend was started. To define an alternate location PGDATA2 pointing to /home/postgres/data, first type % setenv PGDATA2 /home/postgres/data to define the environment variable to be used with subsequent commands. Usually, you will want to define this variable in the Postgres superuser's .profile or .cshrc initialization file to ensure that it is defined upon system startup. Any environment variable can be used to reference an alternate location, although it is preferred that the variables be prefixed with "PGDATA" to eliminate confusion and the possibility of conflicting with or overwriting other variables. To create a data storage area in PGDATA2, ensure that /home/postgres already exists and is writable by the postgres administrator. Then from the command line, type % setenv PGDATA2 /home/postgres/data % initlocation $PGDATA2 Creating Postgres database system directory /home/postgres/data Creating Postgres database system directory /home/postgres/data/base To test the new location, create a database test by typing % createdb -D PGDATA2 test % dropdb test
<urn:uuid:4191b499-f693-4227-86f5-5a43cc1a32f4>
2.828125
541
Documentation
Software Dev.
10.114863
1,786
Scientists study the elusive snow leopard in Northern Afghanistan (The New York Times) - Snow leopards have the advantage of living in “one of the most remote and isolated mountain landscapes in the world,” away from the human threats other large cats face. - Increasingly, snow leopards have been eating livestock, and these encounters with humans might threaten this already endangered species as tensions arise. - Scientists estimate there are between 4,500 and 7,500 snow leopards in the wild, but Dr. Schaller said, “those figures are just wild guesses.” Q&A with Will Potter about his new book: Green Is the New Red: An Insider’s Account of a Social Movement under Siege (Grist) - Will Potter discusses how corporations and lobbyists are attacking the environmental movement with accusations of terrorism, much like the Red Scare. - “As the scale of the ecological crisis we are facing becomes more apparent, and as the backlash against social movements that are challenging our self-destructive culture intensifies, it is difficult to not feel dark, to feel helpless.” Puma commits to eliminating all hazardous waste releases into Chinese waterways as part of Greenpeace’s detox challenge (Greenpeace) - Puma, the third largest sportswear producer in the world, just committed to stop releasing toxic chemicals into Chinese rivers by 2020, “beating” both Nike and Adidas in the detox challenge. The National Petroleum Reserve, bustling with biodiversity of arctic life, is soon to be tapped (Yale Environment 360) - “This wetland is home to the most spectacular gathering of migratory birds from all over the world, numbering in the millions.” - The 23 million acre reserve will have its fate decided in a year, and scientists are hurrying to study “special spots,” which the US will protect within the reserve. And lastly a video to inspire, relish and even lament conservation efforts (TreeHugger) If the above embedded video does not display, click here to view it.
<urn:uuid:5db0eaf8-dc27-4e80-b8f1-1935edb19e7d>
3.125
437
Content Listing
Science & Tech.
27.048713
1,787
Feb. 24, 2011 Early in the formation of Earth, some forms of the element chromium separated and disappeared deep into the planet's core, a new study by UC Davis geologists shows. The finding, to be published online by the journal Science Feb. 24, will help scientists understand the early stages of planet formation, said Qing-Zhu Yin, professor of geology at UC Davis and coauthor on the paper. Yin, former postdoctoral scholar Frederic Moynier and Edwin Schauble of the Department of Earth and Space Sciences at UCLA used specialized equipment at UC Davis to make very exact measurements of chromium isotopes in meteorites, compared to rocks from Earth's crust, and used modern high-performance computers to simulate the early Earth environment. They studied a class of meteorites called chondrites, which are leftovers from the formation of the solar system over four and a half billion years ago. As well as adding shiny, rust-proof surfaces to metalwork, chromium adds color to emeralds and rubies. It exists as four stable (non-radioactive) isotopes with atomic masses of 50, 52, 53 and 54. It has been known for decades that chromium isotopes are relatively underrepresented in Earth's mantle and crust, Yin said. That could either be because they were volatile and evaporated into space, or got sucked into Earth's deep core at some point. By making very accurate measurements of chromium isotopes in the meteorites compared to Earth rocks and comparing them to theoretical predictions, the researchers were able to show for the first time that the lighter isotopes preferentially go into the core. From this the team inferred that some 65 percent of the missing chromium is most likely in Earth's core. Furthermore, the separation must have happened early in the planet building process, probably in the multiple smaller bodies that assembled into Earth or when Earth was still molten but smaller than today. Moynier is now assistant professor at the Department of Earth and Planetary Sciences and McDonnell Center for the Space Sciences, Washington University in St Louis. The work was funded by grants from NASA and the National Science Foundation. Note: Materials may be edited for content and length. For further information, please contact the source cited above. - Frederic Moynier, Qing-Zhu Yin and Edwin Schauble. Isotopic Evidence of Cr Partitioning into Earth’s Core. Science, 24 February 2011. DOI: 10.1126/science.1199597 Note: If no author is given, the source is cited instead.
<urn:uuid:2a280e92-62de-4bc5-a2cd-327af6dbcf52>
3.640625
540
News Article
Science & Tech.
43.358771
1,788
This Chandra image shows two vast cavities - each 600,000 light years in diameter - in the hot, X-ray emitting gas that pervades the galaxy cluster MS 0735.6+7421 (MS 0735 for short). Click here for a high resolution photograph. Astronomers have found the most powerful eruption in the universe using NASA's Chandra X-ray Observatory. A super massive black hole generated this eruption by growing at a remarkable rate. This discovery shows the enormous appetite of large black holes, and the profound impact they have on their surroundings. The huge eruption was seen in a Chandra image of the hot, X-ray emitting gas of a galaxy cluster called MS 0735.6+7421. Two vast cavities extend away from the super massive black hole in the cluster's central galaxy. The eruption, which has lasted for more than 100 million years, has generated the energy equivalent to hundreds of millions of gamma-ray bursts. This event was caused by gravitational energy release, as enormous amounts of matter fell toward a black hole. Most of the matter was swallowed, but some of it was violently ejected before being captured by the black hole. "I was stunned to find that a mass of about 300 million suns was swallowed," said Brian McNamara of Ohio University in Athens. "This is as large as another super massive black hole." He is lead author of the study about the discovery, which is in the January 6, 2005, issue of Nature. Astronomers are not sure where such large amounts of matter came from. One theory is gas from the host galaxy catastrophically cooled and was swallowed by the black hole. The energy released shows the black hole in MS 0735 has grown dramatically during this eruption. Previous studies suggest other large black holes have grown very little in the recent past, and that only smaller black holes are still growing quickly. "This new result is as surprising as it is exciting," said co-author Paul Nulsen of the Harvard-Smithsonian Center for Astrophysics, Cambridge, Mass. "This black hole is feasting, when it should be fasting." Radio emission within the cavities shows jets from the black hole erupted to create the cavities. Gas is being pushed away from the black hole at supersonic speeds over a distance of about a million light years. The mass of the displaced gas equals about a trillion suns, more than the mass of all the stars in the Milky Way. The rapid growth of super massive black holes is usually detected by observing very bright radiation from the centers of galaxies in the optical and X-ray wavebands, or luminous radio jets. In MS 0735 no bright central radiation is found, and the radio jets are faint. The true nature of MS 0735 is only revealed through X-ray observations of the hot cluster gas. "Until now we had no idea this black hole was gorging itself," said co-author Michael Wise of the Massachusetts Institute of Technology in Cambridge, Mass. "The discovery of this eruption shows X-ray telescopes are necessary to understand some of the most violent events in the universe." The astronomers estimated how much energy was needed to create the cavities by calculating the density, temperature and pressure of the hot gas. By making a standard assumption that 10 percent of the gravitational energy goes into launching the jets, they estimated how much material the black hole swallowed. Besides generating the cavities, some of the energy from this eruption should keep the hot gas around the black hole from cooling, and some of it may also generate large-scale magnetic fields in the galaxy cluster. 
Chandra observers have discovered other cavities in galaxy clusters, but this one is easily the largest and the most powerful. NASA's Marshall Space Flight Center, Huntsville, Ala., manages the Chandra program for NASA's Space Mission Directorate, Washington. Northrop Grumman of Redondo Beach, Calif., was the prime development contractor for the observatory. The Smithsonian Astrophysical Observatory controls science and flight operations from the Chandra X-ray Center in Cambridge, Mass.
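The "standard assumption" mentioned in the article can be turned into a quick order-of-magnitude check. The Python sketch below is not the Chandra team's analysis; it simply inverts their 10 percent efficiency figure to ask how much energy swallowing roughly 300 million solar masses would release, and compares that to an assumed typical gamma-ray-burst energy of about 10^46 joules (a round number chosen here purely for illustration).

    M_SUN = 1.989e30          # kg
    C = 2.998e8               # speed of light, m/s
    EFFICIENCY = 0.10         # fraction of rest-mass energy powering the jets
    GRB_ENERGY_J = 1e46       # assumed energy of one gamma-ray burst (illustrative)

    accreted_mass = 3.0e8 * M_SUN                      # ~300 million suns
    jet_energy = EFFICIENCY * accreted_mass * C**2     # joules

    print(f"jet energy ~ {jet_energy:.1e} J")
    print(f"~ {jet_energy / GRB_ENERGY_J:.0e} gamma-ray bursts")

The result, a few times 10^54 joules — equivalent, under that assumed burst energy, to hundreds of millions of gamma-ray bursts — is consistent with the figures quoted in the article.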
<urn:uuid:b46d216e-f498-4b36-b9c9-1b53fc9ec451>
3.5
831
News Article
Science & Tech.
47.701119
1,789
Composition of Solutions Problems and Solutions Calculate the volume of a 0.753 M sodium hydroxide solution that is neutralized by the addition of 100 mL of a 1.203 M sulfuric acid solution. If the volume of a 100 L solution of 1.1 moles of hydrogen in 6.0 moles argon is suddenly doubled, what happens to the mole fraction of hydrogen in that solution? What is the concentration of HCl stock solution that can create 250 mL of a 1.0 M solution that is prepared by the dilution of 50 mL of the HCl stock solution?
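All three problems reduce to mole bookkeeping (n = M × V) plus one stoichiometric or dilution ratio. The Python sketch below works through them with the numbers given; it is offered only as a way to check your own work, and the 1:2 acid-to-base ratio in the first problem reflects the usual assumption that both protons of sulfuric acid are neutralized.

    # 1) NaOH volume to neutralize 100 mL of 1.203 M H2SO4 (2 OH- per H2SO4)
    mol_h2so4 = 1.203 * 0.100
    v_naoh_L = (2 * mol_h2so4) / 0.753
    print(f"{v_naoh_L * 1000:.0f} mL NaOH")        # ~320 mL

    # 2) Mole fraction of H2 in 1.1 mol H2 + 6.0 mol Ar: doubling the volume
    #    changes neither amount of substance, so the mole fraction is unchanged.
    x_h2 = 1.1 / (1.1 + 6.0)
    print(f"x(H2) = {x_h2:.3f} before and after")  # ~0.155

    # 3) Dilution: C1*V1 = C2*V2  ->  stock concentration
    c_stock = (1.0 * 250) / 50
    print(f"{c_stock:.1f} M HCl stock")            # 5.0 M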
<urn:uuid:8bcee731-1a98-4d6f-a6e6-b9a1474d0e87>
2.703125
127
Tutorial
Science & Tech.
76.5535
1,790
Jeanna Bryner LiveScience Staff Writer LiveScience.com Wed Mar 21, 4:30 PM ET The fossil of an ancient amphibious reptile with a crocodile's body and a fish's tail has been unearthed in Oregon. Scientists believe the creature's remains were transported by geologic processes nearly 5,000 miles away from where it originally died more than 100 million years ago. The new fossil is the oldest crocodilian ever unearthed in Oregon and one of the few to be unearthed on this side of the Pacific. The “hybrid” animal is thought to be a new species within the genus Thalattosuchia, a group of crocodilians living during the age of dinosaurs. The reptile roamed a tropical environment in Asia about 142 to 208 million years ago. Called a Thalattosuchian, the amphibious creature represents an early milestone in evolutionary history, marking a transition during which these reptiles moved from being semi-aquatic to wholly ocean species.
<urn:uuid:8ac6ce54-16e9-4a3b-a93c-d6805e297bb6>
3.46875
206
Comment Section
Science & Tech.
34.538077
1,791
The ground in the Puget Sound region didn’t just shake during the magnitude 6.8 Nisqually earthquake, it moved — literally. In fact, measurements using global positioning system (GPS) data indicate that in most areas the ground shifted more during the Feb. 28 quake than it normally does in a year. “Not only that, but it moved in completely the opposite direction of what we’ve observed from year to year,” said Anthony Qamar, a University of Washington research associate professor in Earth and space sciences and the state seismologist. Qamar works on a project called PANGA, or Pacific Northwest Geodetic Array, that uses global positioning information to measure how much the ground in western Washington and Oregon move each year relative to a fixed point farther east. PANGA partners include the UW, Central Washington University, Rensselaer Polytechnic Institute in Troy, N.Y., Oregon State University, the U.S. Geological Survey, the U.S. Coast Guard and the Geological Survey of Canada. PANGA’s measurements have shown that typically the central Puget Sound region moves to the east-northeast at about 3 to 5 millimeters per year. By contrast, at Neah Bay on the state’s northwest coast the movement is about 10 millimeters, or a half-inch, per year. That’s because the coast is much closer to the zone where the Juan de Fuca plate dives beneath the North American Plate, and the pressure moving the land surface is much greater than farther inland. In the Nisqually earthquake, GPS sensors showed a Coast Guard station at Point Robinson on the east edge of Maury Island moved 8 millimeters to the south-southwest and the UW campus moved 5 millimeters — about two-tenths of an inch — south-southwest. The data showed that Satsop, which is about midway between the epicenter and the Washington coast, moved west about 6 millimeters and Pacific Beach, on the coast, moved northwest about 4 millimeters. Though currently there are no measurements, Qamar also expects that data eventually will show that areas west of the earthquake’s focus deep beneath the Nisqually River delta north of Olympia rose as much as a half-inch in the quake. He expects that areas to the east will have dropped about one-third of an inch. (An earthquake’s epicenter is the area on the surface that lies directly above the hypocenter, or focus.) The actual movement of the fault at the focus of the earthquake was probably about 1 meter, more than 3 feet, Qamar said. But the fact that the focus was some 34 miles deep in the Earth means the displacement at the surface is far less. PANGA has about 20 permanent global positioning stations running in western Washington and Oregon. There also are 70 National Geodetic Survey sites permanently marked with metal plates that are in the process of being measured with portable GPS equipment to provide a more complete picture of what happened in the Nisqually quake. Those sites, a number of them lined up through the heart of the epicenter region, typically are measured every two years or so. The purpose of PANGA is to allow scientists to see geographic positions changing over time. That happens as pressure is applied from the west by the interaction of the Juan de Fuca and North American plates, pushing this region east, and from the south by the movement of a large chunk of California against Oregon and Washington, pushing the region north. Eventually, those forces will counteract what happened in the Nisqually quake, Qamar said. 
“I would expect that if we go back and measure the Satsop station in a year or two, we’ll see that it’s right back where it was before the earthquake,” he said. For more information, contact Qamar at (206) 685-7563, (206) 543-7010 or http://www.geophys.washington.edu
<urn:uuid:d2a62db1-8700-4b57-bbc9-b01694e1d91d>
3.109375
840
News Article
Science & Tech.
49.829574
1,792
WebRef Update: Featured Article: X(ML) Marks the Spot X(ML) Marks the Spot Never before has an Internet development witnessed such rave reviews before thorough implementation and testing as the eXtensible Markup Language (XML). XML is definitely the latest buzzword in the developer community. However, you must crawl before you can walk. The idea of creating an XML based e-commerce solution is exciting, but what about a simple XML page which has standard Web media elements such as graphics or Flash files? XML promises to give us intelligent document structures, object oriented document manipulations, synchronized media and a whole lot more. But what is XML exactly, and why has it created such a stir? This article is for those developers who are looking for a hands-on explanation of XML basics. What is XML? The eXtensible Markup Language and HTML are both subsets of Standard Generalized Markup Language (SGML). SGML is a very powerful technology that can be viewed as the parent of many markup languages, which include HTML and XML. With XML, it is possible to create new variations such as the Wireless Application Protocol Markup Language (WAPML or WML), which makes communication and transactions between a mobile phone and a Web server possible. Of all the aspects of XML, the following is probably the most important: XML only recently became an official W3C recommendation. This means that the consortium still hasn't made a decision about standard XML. Many XML elements used in Explorer 5.0 are based on the W3C draft and they will probably be included in the official XML specs. Netscape has probably made the wise decision to wait with releasing their XML compliant version 5 browser until the official specs have been determined. Enough background information. Let's get into the real deal. The big difference between XML and HTML is the following: an HTML document has three different elements: The first element is the text (e.g. "Welcome to my homepage"). The second element is the document structure such as tables and linebreaks. The third element is the visual markup such as bold text, italic text, graphics and other visual elements. An XML document, however, can actually consist of two or three different pages. Because seeing is believing, I've included a short example below. 1. The first page is the actual XML information you wish to display. In first generation XML sites, this information will probably be text contained in the page called "whatever.xml". This page doesn't have any structure such as a table or visual markup (bold, italic or color). Whatever.xml looks like:
<?xml version="1.0" ?>
<?xml-stylesheet type="text/xsl" href="whatever.xsl"?>
<people>
  <friend>
    <name>Lee</name>
    <address>25 Malvern street</address>
    <telephone>123 456 789</telephone>
  </friend>
  <friend>
    <name>Susanna</name>
    <address>11 Durban road</address>
    <telephone>987 654 231</telephone>
  </friend>
</people>
2. The second page has the Extensible Stylesheet Language (whatever.xsl). This page has HTML and "tags" which take the data out of whatever.xml and put it into "whatever.xsl". The xsl document has the mark-up such as <body>, <table> and <font>. 
Whatever.xsl looks like:
<?xml version="1.0"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/TR/WD-xsl">
  <xsl:template match="/">
    <HTML>
      <head><title>XML Developer</title></head>
      <body>
        <table border="1" cellpadding="3" cellspacing="3">
          <xsl:for-each select="people/friend">
            <tr>
              <td><b>Name:</b><br/></td>
              <td><xsl:value-of select="name"/></td>
            </tr>
            <tr>
              <td>Address:<br/></td>
              <td><i><xsl:value-of select="address"/></i></td>
            </tr>
            <tr>
              <td>Telephone:<br/></td>
              <td><i><xsl:value-of select="telephone"/></i><br/></td>
            </tr>
          </xsl:for-each>
        </table>
      </body>
    </HTML>
  </xsl:template>
</xsl:stylesheet>
3. The third page is the Document Type Definition. The good news is that a DTD is not always necessary, especially in a simple XML document. The bad news is that a DTD is pretty darn difficult. It contains elements such as attributes and data types. For more information on DTDs, take a look at: The XML version of linebreak is <br/> instead of the HTML <br>. Herein lies the secret in getting around the most common and frustrating markup language bugs [or features, depending on your point of view -eds.], which go by the name of validity or "well-formed code." In the good old Internet days, developers were very meticulous when it came to their coding. If you opened a <font> tag, you'd have to close it with </font>. When browsers got smarter, coders became lazier. As HTML evolved, people also decided that it wasn't necessary to include certain quotes in their code. So what was once <font color="white"> became <font color=white>. And then XML hit the scene. Next: XML Needs Clean Code Revised: May 16, 2000
<urn:uuid:f2be1028-3e4e-4f08-8e06-dbf40b98a018>
3.140625
1,209
Truncated
Software Dev.
67.677158
1,793
Forecast an El Nino or La Nina! |You can see if an El Nino or a La Nina is coming!| |Who hasn't heard of El Nino and La Nina? Almost no one! But who can tell if one of these events that change our weather and climate is coming? Scientists are working to forecast El Nino and La Nina events but as yet our best way to tell what is coming is to look at satellite ocean data and watch for these features as they travel across the ocean. And you can do that!| First let's think about what an El Nino or La Nina is and how it affects us. The simple explanation is that they are hot and cold ocean events that start in the tropical Pacific Ocean. These 'hot and cold' events cover large areas of the tropics, move from west to east, and can spread north and south along the coasts of the Pacific Ocean. The events are so large that they affect the local weather and change the jet stream which affects global weather. For more information read further! |Let's take a look at the ocean!!| This picture was made from data taken from a satellite that measures the height of the ocean. By measuring the height of the ocean surface we can make a map that gives us information about the amount of heat in the ocean. The 'bottom line' is that hot water expands and is higher and cold water takes up less space so it is lower. Think about designing an experiment to prove this. (Hint!) Check out this image taken from our TOPEX/Poseidon satellite - Why are there stripes on this image? Hint! - What color shows a 'normal' sea surface height and temperature? Hint! - Where in the ocean are the high areas indicating warm water? - What areas of the world (countries) are near high (warm) or low (cool) water? Now for another view ..... - Let's look at a recent image that uses 10 days of data which gives a more global coverage. (Why's that? Hint!) Check out where TOPEX/Poseidon is now! - In this image green shows areas that are a normal height, blue and purple are lower (cooler) than average and yellow, red and white are areas that are higher (warmer) than average. - How much is 14 cm? - Where are the high areas indicating warm water? - What areas of the world are near high (warm) or low (cool) water? |Let's compare and forecast!| ||The image to the left is taken at the height of the '97-'98 El Nino. Note the area of higher and warmer than average water (white) in the east.|| ||The image to the left is the '98-'99 La Nina. Note the area of lower and cooler water in the tropical Pacific; this later moved to the east.| |How does the ocean make a difference in the weather?| The ocean affects the temperature and the amount of moisture in the air. How could you do an experiment to test this? Hint! With more moisture in the air, it is more likely to rain if the air is cooled. How could you do an experiment to test this? Hint! With less moisture in the air, even as the air cools going over mountains there will be little rain. Check out these graphics and write your own captions! |So why do we need to know what's coming next?| The El Nino and La Nina conditions are not necessarily bad, it's just that we and the landscape adapt to average or 'normal' conditions, so when the weather is not normal, it often causes problems. Map showing some of the impacts from the 1997-98 El Nino Drought is when there is not enough rainfall to support activities that usually occur on a piece of land. These activities include growth of natural vegetation, use of the land for grazing or support of a city. 
In the case of the latter, the effects of the drought can often be lessened, but in natural areas the effects often result in dramatic changes to natural populations.
- What is your average rainfall?
- What would happen to the area that you live in if you had half your annual rainfall?
- If you knew that you would be getting half the average rainfall, what could people do so it would matter less?
- When are fires 'good' and when are they 'bad'?
Satellite view of Hurricane Mitch (Courtesy: Same old Someone Else)
- What would be affected in your life if you were without power for a …? How would a farmer be affected? How would a city be affected?
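The "hot water stands higher" idea earlier on this page can be checked with a rough back-of-the-envelope calculation. The sketch below is illustrative only: the thermal expansion coefficient, warm-layer depth and temperature anomaly are assumed round numbers, not values taken from TOPEX/Poseidon data.

```python
# Rough sketch: how much higher does warm water stand?
# Assumptions (illustrative, not measured values):
#   - thermal expansion coefficient of seawater ~ 2e-4 per degree C
#   - the warming is confined to the upper ~150 m of ocean
#   - the warm patch is ~3 degrees C above normal (a strong El Nino-like anomaly)

expansion_coeff = 2e-4   # per degree C (approximate; varies with temperature and salinity)
layer_depth_m = 150.0    # depth of the warmed layer, metres (assumed)
warming_deg_c = 3.0      # temperature anomaly, degrees C (assumed)

height_change_m = expansion_coeff * warming_deg_c * layer_depth_m
print(f"Sea surface rises by roughly {height_change_m * 100:.0f} cm")
# With these numbers the answer is on the order of 10 cm -- the same scale as
# the ~14 cm anomalies shown on the satellite height maps.
```

The point of the exercise is only that centimetre-to-decimetre height anomalies are exactly what a few degrees of warming in the upper ocean should produce, which is why height maps can stand in for heat maps.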
<urn:uuid:64c95389-69c4-4e78-a832-cba6209b2e46>
3.734375
996
Tutorial
Science & Tech.
68.224935
1,794
Fixing Potholes with Non-Newtonian Fluid
Potholes? Forget waiting around for 5 road workers to stand around and watch while one guy fills the pothole with asphalt! Just grab a bit of non-Newtonian fluid and, there – you fixed it:
The students, undergraduates at Case Western Reserve University in Cleveland, devised the idea as part of an engineering contest sponsored by the French materials company Saint-Gobain—and took first prize last week. The objective was to use simple materials to create a novel ….
"So we were putzing around with different ideas and things we wanted to work with—and we were like, what’s a common, everyday problem all around the world that everybody hates?" explains 21-year-old team member Curtis Obert. "And we landed on potholes."
He and four other students decided on a non-Newtonian fluid as a solution because of its unusual physical properties. "When there’s no force being applied to it, it flows like a liquid does and fills in the holes," says Obert, "but when it gets run over, it acts like a solid."
What? Don’t believe us? Check out this video clip of people walking on water in a pool filled with non-Newtonian liquid.
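The "flows like a liquid until you hit it" behaviour Obert describes is what rheologists call shear thickening. A very rough way to picture it is a power-law fluid whose apparent viscosity grows with shear rate. The consistency index K and exponent n below are made-up illustrative values, not measurements of the students' material, and a real cornstarch-style suspension thickens far more abruptly than this simple model suggests.

```python
# Toy power-law (dilatant) fluid: apparent viscosity rises with shear rate.
# Shear stress tau = K * gamma_dot**n, so apparent viscosity eta = K * gamma_dot**(n - 1).
# K and n are illustrative guesses, not measured properties of any real filler.

K = 5.0   # consistency index, Pa*s**n (assumed)
n = 1.8   # flow-behaviour index; n > 1 means shear-thickening (assumed)

def apparent_viscosity(shear_rate):
    """Apparent viscosity (Pa*s) of a power-law fluid at a given shear rate (1/s)."""
    return K * shear_rate ** (n - 1)

for shear_rate in (0.01, 1.0, 100.0):   # slow creep vs. a tyre slamming into it
    print(f"shear rate {shear_rate:8.2f} 1/s -> viscosity {apparent_viscosity(shear_rate):10.2f} Pa*s")

# Low shear: runny enough to flow into and fill the hole.
# High shear: roughly a thousand times stiffer in this toy model, so it
# behaves more like a solid under a passing wheel.
```

That qualitative split, pourable at rest but stiff under impact, is the whole trick behind using such a fluid as a temporary pothole filler.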
<urn:uuid:5af6453b-641b-45b8-a8d0-06568ad68016>
3.015625
289
Truncated
Science & Tech.
54.622506
1,795
Today’s the big day, when Asteroid 2005 YU55 will pass within about 200,000 miles of our fair planet (and come slightly closer to the moon tomorrow). Just how close the approach is can be seen in the short movie below (click it to activate). The very dark asteroid is estimated to measure about 400 meters across.
Here’s a photo of the rock as it sped through space in our direction yesterday. It’s a pretty good shot considering the rock was still more than 600,000 miles away at the time.
We’re all familiar with asteroids presenting end-of-the-world scenarios because of movies like Armageddon, in which a “Texas”-sized asteroid threatened Earth. This, of course, is laughable because Texas is about 1,400 kilometers across and the largest known asteroid in the solar system, Ceres, measures just 900 km in diameter. For a deeper dissection of Armageddon’s scientific flaws, see here. (Side note: The final scene of Armageddon offers an interesting take on the Russian approach to fixing mechanical problems with spaceflight equipment, especially in light of Sunday night’s launch of a Soyuz spacecraft. But I digress.)
Anyway, it wouldn’t take an asteroid the size of Texas to cause a global catastrophe. According to NASA, an asteroid would need to have a diameter in excess of 2 km to cause planet-wide environmental consequences. And 2005 YU55 is much smaller than that. Which is not to say it would not have an impact.
So what would happen if YU55, traveling relative to Earth at a velocity of 30,000 mph, struck the planet? Bad things, but not catastrophic things unless you’re living underneath the impact. Just for fun, let’s say it hit land about 100 miles west of Houston (it was nice knowing you, Schulenburg). This particular asteroid would probably produce a crater about 4 miles across. If it hit 100 miles from Houston it would produce a wind moving through the city at about 35 mph, and make a noise something like very loud traffic. We would also experience seismic shaking equivalent to about a 6.8 magnitude earthquake. There would be some dust.
If you’re planning ahead, for those living in Katy, be sure to evacuate toward the east. You can model your own asteroid impact effects at the delightful Impact: Earth! website.
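For a sense of scale, the kinetic energy carried by a rock like 2005 YU55 can be estimated from the figures quoted above (roughly 400 meters across, about 30,000 mph relative to Earth). The density below is an assumed value for a dark, rocky body, so the result is only an order-of-magnitude sketch; the Impact: Earth! site linked above does the job far more carefully.

```python
import math

# Order-of-magnitude impact energy for a 2005 YU55-like asteroid.
# Diameter and speed come from the figures in the post; the density is assumed.
diameter_m = 400.0                 # ~400 m across (from the post)
speed_m_s = 30_000 * 0.44704       # 30,000 mph converted to m/s (~13.4 km/s)
density_kg_m3 = 2_600.0            # assumed density for a dark, rocky asteroid

radius_m = diameter_m / 2
mass_kg = density_kg_m3 * (4 / 3) * math.pi * radius_m ** 3
kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2

megatons_tnt = kinetic_energy_j / 4.184e15   # 1 megaton of TNT ~ 4.184e15 J
print(f"Mass: {mass_kg:.2e} kg")
print(f"Kinetic energy: {kinetic_energy_j:.2e} J (~{megatons_tnt:,.0f} megatons of TNT)")
# Roughly a couple of thousand megatons with these assumptions: devastating near
# the impact point, but well short of the >2 km body NASA cites for planet-wide
# environmental consequences.
```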
<urn:uuid:b34240de-d4be-4df7-8f7e-a3dc08b94b51>
3.4375
505
Personal Blog
Science & Tech.
58.808991
1,796
Solar Power May Be Getting More Flexible
A solar panel and a yoga mat do not often appear in the same sentence. But future solar panels have been compared to mats, as a new technology promises to deliver amazing flexibility and efficiency within five years.
According to a report on Boston.com, MicroContinuum, a company based in Cambridge, is developing in partnership with three universities what it calls 'nantennas', which are designed to collect more solar power than existing solar cells. What the company envisages is a thin, flexible sheet that is cheap to produce. "Imagine a roll-up sheet, like a yoga mat, that you can toss over any structure, or roof tiles whose outermost layers are laced with nantennas", the website wrote.
According to the University of Missouri, one of the partners, the nantennas would be able to collect 90 percent of available light. "Our overall goal is to collect and utilize as much solar energy as is theoretically possible and bring it to the commercial market in an inexpensive package that is accessible to everyone," Professor Patrick Pinhero, an associate professor in the MU Chemical Engineering Department, said in a statement.
Article by Antonio Pasolini, a Brazilian writer and video art curator based in London, UK. He holds a BA in journalism and an MA in film and television.
<urn:uuid:2bb1812a-6e0f-4577-a26d-42ab63c8d1fd>
2.90625
301
News Article
Science & Tech.
24.297345
1,797
The world is watching Japan, which, in the wake of a devastating earthquake, is trying to prevent the total meltdown of multiple nuclear reactors. Escaping radiation from the plants has created an ever-widening evacuation zone. Will radioactivity from the plants reach the United States? Yes, it appears so. It may even reach Atlanta, but the amount of radiation will be so tiny it won’t affect human health, according to U.S. officials. “Basic physics and basic science tells us there really can’t be any harm to anyone here in the United States or Hawaii or any territories,” said Gregory Jaczko, chairman of the Nuclear Regulatory Commission. Trace amounts will likely waft over the West Coast Friday, according to this New York Times animation. WSB radio’s weather guru Kirk Mellish provides info on his blog that indicates the remnants of the radioactive plume could travel the 7,600 miles to Atlanta in 15.9 days. That means it would be here March 28. Despite expert opinion that the radiation will have no effect on public health, Americans have bought every Nukepill available. “People are terrified,” said Alan Morris, president of Anbex Inc., of Williamsburg, Va., in The Washington Post. “We’re getting calls from people who are crying and saying things like, ‘Please. Can’t you help me? Can’t you send me anything?’ ”
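The 15.9-day travel time quoted earlier from Kirk Mellish implies a surprisingly leisurely drift speed, which is easy to check. Only the distance and travel time come from the article; the start date below is an assumption (the reactor troubles began around 11 March 2011), not something the article states.

```python
from datetime import datetime, timedelta

# Implied drift speed of the plume remnants, from the figures in the article.
distance_miles = 7_600      # Japan to Atlanta (from the article)
travel_days = 15.9          # quoted travel time (from the article)

speed_mph = distance_miles / (travel_days * 24)
print(f"Implied average speed: about {speed_mph:.0f} mph")   # roughly 20 mph

# Assumed start date: the quake struck 11 March 2011. The article itself only
# says the remnants "would be here March 28", so this is a consistency check,
# not a reconstruction of Mellish's exact calculation.
start = datetime(2011, 3, 11)
arrival = start + timedelta(days=travel_days)
print(f"Starting {start.date()}, {travel_days} days of travel lands on about {arrival.date()}")
# Lands in late March, broadly consistent with the March 28 estimate,
# give or take the exact starting date of the plume.
```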
<urn:uuid:4b26685f-58dc-4a6b-874a-72037c7ef273>
2.53125
313
News Article
Science & Tech.
58.677584
1,798
In some views SOA is represented as a series of 4 layers: Presentation Layer (SOA 1), Business Process Layer (SOA 2), Business Service Layer (SOA 3) and Technical Layer (SOA 4). Typically each layer higher up in the hierarchy consumes services exposed by the layer under it. So the Presentation Layer would consume services provided by the Business Process or Business Service Layers. Service interfaces are described using Web Services Description Language (WSDL), sheltering service consumers from details of service implementation. Web Services are seen as the technical means to implement the decoupled functional layers in a SOA development. Decoupling allows implementations of business functionality at different layers to be swapped in and out without disturbing other layers in the stack.
The business idea is that patients are looked after in various healthcare facilities. Frequently applications need to allow selection of a facility and to access facility details for display to human operators. A relational database is used to hold the details of facilities which are a part of the healthcare enterprise. To shelter application developers from the details of the data store, the facility list and facility details are made available as a multi-operation web service. This web service will be used to construct the web application that provides a user view into the facilities and facility details.
The previous document in this series, "GlassFish ESB v 2.1 Creating a Healthcare Facility Web Service Provider", walked the reader through the process of implementing a GlassFish ESB v2.1-based, multi-operation web service which returns the facility list and facility details. In this document I will walk through the process of developing a Visual Web Application which will use the Web Service as a data provider. We will use the NetBeans 6.5.1 IDE, which comes as part of the GlassFish ESB v2.1 installation. The application will be implemented as a Visual Web JavaServer Faces Application using JSF components provided by Project Woodstock. This application will introduce the technology in a practical manner and show how a multi-operation web service can be used as a data provider, decoupling the web application from the data stores and the specifics of data provision.
Note that this document is not a tutorial on JavaServer Faces, Visual Web JSF, Project Woodstock components or Web Application development. Note also that all the components and technologies used are either distributed as part of NetBeans 6.5, as part of GlassFish ESB v2.1, or are readily pluggable into the NetBeans IDE. All are free and open source.
It is assumed that a GlassFish ESB v2.1-based infrastructure, supplemented by the Sun WebSpace Server 10 Portal functionality and a MySQL RDBMS instance, is available for development and deployment of the web application discussed in this paper. It is further assumed that the web service, developed using instructions in "GlassFish ESB v 2.1 – Creating a Healthcare Facility Web Service Provider", is available and deployed to the infrastructure. The instructions necessary to install this infrastructure are discussed in the blog entry "Adding Sun WebSpace Server 10 Portal Server functionality to the GlassFish ESB v2.1 Installation", supplemented by the material in the blog entry "Making Web Space Server And Web Services Play Nicely In A Single Instance Of The Glassfish Application Server".
Here is the document – 01_FacilityService_WebApplication.pdf
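The application described in the PDF is a Java Visual Web JSF project, but the decoupling argument is easiest to see from the consumer side: anything that can read the WSDL can call the facility service without knowing whether the data lives in MySQL or anywhere else. The sketch below is not taken from the paper; it is a minimal, hypothetical Python client using the third-party zeep library, and the endpoint URL and operation names (getFacilityList, getFacilityDetails) are assumptions standing in for whatever the deployed FacilityService actually exposes in its WSDL.

```python
# Minimal sketch of a WSDL-driven consumer. This is NOT the JSF application from
# the paper -- just an illustration that the WSDL contract is all a consumer needs.
# Requires the third-party 'zeep' package (pip install zeep).
from zeep import Client

# Assumed endpoint and operation names; substitute the real ones exposed by the
# deployed FacilityService's WSDL.
WSDL_URL = "http://localhost:8080/FacilityService/FacilityServicePort?wsdl"

client = Client(WSDL_URL)

# Operation 1 (assumed name): list the facilities in the healthcare enterprise.
facilities = client.service.getFacilityList()
for facility in facilities:
    print(facility)

# Operation 2 (assumed name): fetch the details of one facility by its code.
details = client.service.getFacilityDetails("FAC001")
print(details)
```

The same shielding applies to the Visual Web JSF application in the paper: the page binds to the generated web service client, not to the database, so the data store can change without touching the presentation layer.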
<urn:uuid:50869c38-7c78-4fb3-b91b-907a7e2b0ca0>
2.53125
705
Documentation
Software Dev.
34.949816
1,799