Emacs Lisp uses two kinds of storage for user-created Lisp objects: normal storage and pure storage. Normal storage is where all the new data created during an Emacs session are kept (see Garbage Collection). Pure storage is used for certain data in the preloaded standard Lisp files—data that should never change during actual use of Emacs.
Pure storage is allocated only while temacs is loading the
standard preloaded Lisp libraries. In the file emacs, it is
marked as read-only (on operating systems that permit this), so that
the memory space can be shared by all the Emacs jobs running on the
machine at once. Pure storage is not expandable; a fixed amount is
allocated when Emacs is compiled, and if that is not sufficient for
the preloaded libraries, temacs allocates dynamic memory for
the part that didn't fit. The resulting image will work, but garbage
collection (see Garbage Collection) is disabled in this situation,
causing a memory leak. Such an overflow normally won't happen unless
you try to preload additional libraries or add features to the
standard ones. Emacs will display a warning about the overflow when
it starts. If this happens, you should increase the compilation parameter
SYSTEM_PURESIZE_EXTRA in the file src/puresize.h and rebuild Emacs.
The function purecopy makes a copy in pure storage of its argument object, and returns it. It copies a string by simply making a new string with the same characters, but without text properties, in pure storage. It recursively copies the contents of vectors and cons cells. It does not make copies of other objects such as symbols, but just returns them unchanged. It signals an error if asked to copy markers.
This function is a no-op except while Emacs is being built and dumped; it is usually called only in preloaded Lisp files.
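As a rough illustration (the variable name below is invented for this example and is not part of Emacs), a preloaded Lisp file might use purecopy like this:

(defvar my-shared-names
  (purecopy '("fundamental" "lisp" "text"))
  "Example list copied into pure storage while Emacs is being dumped.")

Because the list is pure, it can be shared read-only by every Emacs process started from the dumped image.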
The value of the variable pure-bytes-used is the number of bytes of pure storage allocated so far. Typically, in a dumped Emacs, this number is very close to the total amount of pure storage available—if it were not, we would preallocate less.
The variable purify-flag determines whether defun should make a copy of the function definition in pure storage. If it is non-nil, then the function definition is copied into pure storage.
This flag is t while loading all of the basic functions for building Emacs initially (allowing those functions to be shareable and non-collectible). Dumping Emacs as an executable always writes nil in this variable, regardless of the value it actually has before and after dumping.
You should not change this flag in a running Emacs.
When a symbol is evaluated, it is treated as a variable. The result is the variable's value, if it has one. If the symbol has no value as a variable, the Lisp interpreter signals an error. For more information on the use of variables, see Variables.
In the following example, we set the value of a symbol with setq. Then we evaluate the symbol, and get back the value that setq stored.

(setq a 123)
     ⇒ 123
(eval 'a)
     ⇒ 123
a
     ⇒ 123
The symbols nil and t are treated specially, so that the value of nil is always nil, and the value of t is always t; you cannot set or bind them to any other values. Thus, these two symbols act like self-evaluating forms, even though eval treats them like any other symbol. A symbol whose name starts with ‘:’ also self-evaluates in the same way; likewise, its value ordinarily cannot be changed. See Constant Variables.
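For illustration, an assumed interactive session (following the same ⇒ convention as the example above):

nil
     ⇒ nil
t
     ⇒ t
:my-keyword
     ⇒ :my-keyword
(setq t 'something-else)   ; signals an error, because t is a constant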
Now we come to a sideline in the language paradigm story. There is a lot of talk about dynamic languages at the moment. This is partly because until quite recently the dominant languages - Java, C++ and C# - were static languages, so much of the talk is a reaction against the old guard. The static/dynamic distinction is a very difficult one to pin down precisely. In the old days it could be summed up as the split into compiled and interpreted languages, but today it is more about a split in the approach to how object oriented programming should be done.
The current meaning of Dynamic when applied to languages usually refers to typing. When you create an object oriented language you have to decide if it is going to be strongly or weakly typed. Every object defines a type, and in a strongly typed language you specify exactly what type can be used in any given place. If a function needs parameters of type Apple, you can't call it using parameters that are Oranges. The alternative approach is to allow objects of any type to be used anywhere and just let the language try to do the best job it can with what it is presented - this is weak typing.
Now in a strongly typed language you can choose to enforce the typing when the language is compiled or at run time. This is the main distinction between static and dynamic typing. In a statically typed language you can look at a variable and see that it is being assigned an Apple just by reading the code. In a dynamically typed language you can't tell what is being assigned to a variable until the assignment is done at run time.
Clearly there is an interaction between strong and weak typing and static and dynamic. A weakly typed language really doesn't have much choice but to use what looks like dynamic typing.
Weak typing or dynamic typing can make programs easier to write, but the loss of the discipline of controlling type makes it more likely that a runtime error will occur and arguably makes it harder to find bugs.
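To make the trade-off concrete, here is a small sketch in Python, a dynamically typed language (the function and data are invented for this example). A statically typed language would reject the second call when the code is compiled; here the mistake only shows up when the line actually runs:

def describe(fruit):
    # Assumes fruit behaves like a dict with a "weight" entry.
    return "An apple weighing " + str(fruit["weight"]) + " grams"

print(describe({"weight": 150}))  # fine: the argument really is a dict
print(describe("orange"))         # accepted when the file is loaded, but
                                  # raises TypeError here, at run time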
Dynamic languages may have something to offer the future, but there is a sense in which we are simply returning to the wild, primitive expression of programming that existed in a time before we learned better. Perhaps every so many generations programmers need to experience programming in the raw.
The final paradigm is the graphic language - if they can be called languages. This is a strange mix of the object-oriented approach and the declarative, with a little procedural thrown in. However, to classify the approach in this way is to miss the bigger picture - not just the picture. The idea is that if code objects are to mimic real world objects, let's give them a physical appearance. In the world of the user interface we are very well accustomed to this approach - a button that you drag and drop onto a page is a physical representation of the button code object. You get to work with the button as if it were a real button - you can click it, drag it, size it, change its color and so on. Graphical objects in the UI led to the component revolution which we are still developing - from ActiveX to WPF, Widgets and so on.
Now consider using the same approach to building programs in general. You could have a loop component, a conditional component, a module component and so on. These could be assembled just like a user interface by a drag-and-drop designer, and "writing the code" would be a matter of connecting them together in a flow-of-control graph. Some components would need you to write a few lines of procedural code to specify their actions more precisely, but mostly components fit together naturally without extra code; you just specify a few properties.
This approach to building programs has to date mostly been used in languages such as Scratch and the Lego Mindstorms robots to get children interested in programming. However, what is easy for children should be very easy for us and the method could well translate to more ambitious projects. Only recently Google announced a graphical programming environment for the Android - but it's still in the early stages of testing.
Of all the techniques described so far it is graphical programming that I'd bet was the way of the future - but how far in the future is another matter.
There are a large number of other approaches to programming that we haven't considered but they are mostly side issues and special environments. For example, there is the whole issue of synchronous v asynchronous or event driven programming. Then there is the big question of sequential v parallel programming and so on. There is also the convergence of AI and programming. For example, using genetic algorithms you can evolve a program rather than writing it.
This article was published originally on 8/25/2010
Alert readers across Iowa and in neighboring states are asking, "Why are there so many dragonflies this summer?" I'm not sure what explains this larger-than-normal number of dragonflies, but callers are reporting anywhere from "dozens" to "hundreds" of dragonflies flying in swarms during late afternoon to early evening.
Excessive rainfall this year does not explain the abundance. Dragonflies develop as nymphs in rivers, streams and lakes. Most take at least one year to develop from the egg to the adult stage, and some take 2 or 3 years. So the swarmers you see now are at least one year old and probably two. These are the offspring of last year’s adults (if not the adults that were flying back in 2008 or 2009). To me, that means this year’s abundance is related to what happened 1 to 3 years ago, and not what happened 1 to 3 months ago. In fact, I predict that dragonfly numbers will be down in the next 1 to 3 years, as the flooding of 2010 may have been detrimental to nymphs in flooded streams. More water in the stream, and especially flooding, would seem to work against the dragonflies, not for them.
What we do know is that dragonflies are more numerous in high-quality water, so abundance is an indicator of healthy aquatic ecosystems, and that's a good thing.
Dragonflies are often observed long distances from the nearest water. It appears they travel long distances and then congregate (“swarm”) in areas where there is a plentiful flying food source such as emerging winged ants, mosquitoes, etc. Yes, dragonflies eat mosquitoes, but it’s apparent they are not keeping up with this year’s bumper crop.
More information about dragonflies
Dragonfly photographed near Hungry Jack Lake MN. By Richard Minnick.
Historic Caribbean Earthquake Was Felt in NYC
Caribbean seismic hazard map, illustrating the region's complex geologic setting.
CREDIT: U.S. Geological Survey
SAN FRANCISCO — More than 150 years ago, a fault ringing the Caribbean shook half the Atlantic, including New York City, with a mega-earthquake. The quake rivaled those that have struck Indonesia in recent years, geologists reported last week at the annual meeting of the American Geophysical Union.
The Caribbean's beautiful tropical islands and coral reefs rise above a complex junction of four major tectonic plates. Many of the islands sit above a subduction zone, where two plates meet and one slides haltingly under the other, down into the Earth's mantle. The Dec. 26, 2004, Sumatra, Indonesia, earthquake, a subduction zone earthquake that generated deadly tsunamis, has galvanized scientific interest in potential quake hazards from the Caribbean's similar earthquake-producing faults.
The Feb. 8, 1843, Lesser Antilles earthquake was in many ways remarkably similar to the magnitude-8.7 earthquake that struck Sumatra just one year later, in 2005, researchers reported at the meeting. [The 10 Biggest Earthquakes in History]
Historical sleuthing by Francois Beauducel and Nathalie Feuillet of the Paris Institute of Earth Physics revised the temblor's estimated magnitude upward to 8.5 (from a previous estimate of 7.8).
Maps from the French national marine service revealed that many palm-covered islets in the bay at Pointe-à-Pitre, the biggest city on the island of Guadeloupe, disappeared between 1820 and 1869, Feuillet told OurAmazingPlanet. The islets likely subsided, or dropped below sea level. The stress of the two plates stuck together makes the earth's crust flex and warp. After the earthquake, the deformed crust rebounds in some areas and drops down in others.
Portions of Antigua subsided up to 10 feet (3 meters), Feuillet said. Wharfs at the shore of Pointe-à-Pitre sunk one foot (33 centimeters), she found. Cliffs along the island collapsed, and historic accounts describe 5-foot-high (1.5 m) mud fountains.
Combining the 19th-century records of such effects with modern earthquake models helped Beauducel and Feuillet pin down both the quake's magnitude and the location of the fault rupture, the spot where the subduction zone tore apart.
"The only way to explain the subsidence of the islands is to have a rupture … in the very deep part of the subduction zone, between 40 and 60 km (25 to 40 miles) depth," Feuillet said. Such as depth is comparable to that of the 2005 Sumatra quake, the researchers said.
The quake was felt up and down the East Coast, including in New York City, Washington, D.C., Raleigh, N.C., and Charleston, S.C., said Susan Hough of the U.S. Geological Survey. Hough also unearthed reports of shaking at three locations in South America, she said.
But, just as with the 2005 Sumatra quake, there was no giant wave on Feb. 8, 1843. Reports describe a 4-foot (1.2 m) wave in Antigua, but no significant tsunami arose, Feuillet said. Even so, several thousand people died in Pointe-à-Pitre from fires and damage caused by the severe shaking.
April Showers Bring Midwest Floods
CREDIT: Jesse Allen/NASA
With rivers in the Midwestern United States already full from thawing winter snow cover, severe rainfall in late April added to the troubles for the region. The National Weather Service predicted in February that the region was primed for flooding, and so far it has lived up to the advanced billing, according to a NASA statement. On the afternoon of April 26, 2011, the Advanced Hydrological Prediction Service (AHPS) was reporting major flooding at 48 river gauges and moderate flooding at 86 gauges along central U.S. rivers.
This map depicts rainfall for the Midwestern U.S. from April 19 to 25, 2011. The estimates were made from the Multi-satellite Precipitation Analysis, based on data from the Tropical Rainfall Measuring Mission (TRMM). Shown in shades of green and blue, rainfall estimates range from 150 millimeters (5.9 inches) to greater than 525 millimeters (20.7 inches).
Ground monitors for the AHPS reported 13.70 inches (348 mm) of rainfall in the southeastern Missouri town of Poplar Bluff between April 22-26. The nearby Black River was pouring over its levee in at least 30 places, and people were evacuated from more than 1,000 homes.
Westville, Oklahoma, received 14.96 inches (380 mm) of rain. In Arkansas, the town of Springdale was deluged with 19.70 inches (500 mm), while nearby Fayetteville collected 13.85 inches (352 mm). The governor of Arkansas declared a state of emergency.
In Carbondale, Illinois, and Paducah, Kentucky, both near the confluence of the Mississippi and Ohio rivers, roughly 9 inches (230 mm) fell, leading the governor of Kentucky to declare a state of emergency in advance of a significant flood when the two swollen rivers converge. The U.S. Army Corps of Engineers was planning on April 26 to take the extraordinary step of intentionally breaching the Birds Point levee in southeast Missouri, just downriver of the confluence, in a bid to reduce the amount of water moving down the Mississippi, the Associated Press reported. Breaching the levee was expected to flood up to 130,000 acres of farmland.
NOAA's Hydrometeorological Prediction Center was predicting more heavy rain and severe weather through mid-week in the Mississippi and Ohio river valleys; flood warnings and watches were posted for both basins, as well as the Tennessee Valley. On April 25, the rains were accompanied by at least 38 tornadoes, according to the National Weather Service, and conditions were ripe for more in the coming days.
A major focus in tissue engineering is to create materials that improve and direct cellular interaction. This interaction can be probed by measuring the relative number of cells adhered to a surface, which is thought to be an important step in the cascade of cellular fate processes such as stem cell renewal or differentiation into a specialized cell type.
Now, in new work, a group of Australian researchers have utilized the industrially-relevant plasma polymerization technique to chemically modify the surface of a biologically inert substrate for the purpose of enhancing cell adhesion. Specifically, they examined the effect of two plasma polymerization parameters, discharge power and deposition time, on properties such as film thickness, chemical composition, and cellular attachment. In all cases, the plasma polymers deposited onto the substrates as thin, 8 to 40 nm, films. The chemical composition of the films was found to be more dependent on discharge power than deposition time. They showed that this outcome could be related to way in which the precursor (monomer) fragments and the plasma polymer film forms (e.g. by crosslinking).
Of the two plasma polymers examined, i.e. those formed from either an amine or aldehyde precursor, it was found that the stem cells adhered best to plasma polymers that closely resembled the aldehyde monomer, i.e. plasma polymers formed under lower power and shorter times. These results have practical implications for the fast, efficient, and inexpensive surface functionalization of materials for tissue engineering.
Can Solar Panels Replace Nuclear Energy?
A traditional view is that solar power is cool and hip but doesn’t have nearly the production muscle that nuclear has. Solar panels are certainly safer than nuclear energy, but will they ever be able to replace nuclear power plants? One blogger, Dan Hahn, seems to think that solar can replace nuclear.
Dan Hahn writes that in 2010, enough solar panels were shipped and installed to match the output of seventeen nuclear power plants. Each nuclear power plant produces roughly one gigawatt of power. In 2010 alone, newly installed solar panel systems added seventeen gigawatts of generating capacity, the equivalent of seventeen nuclear power plants. Not only is solar power safer than nuclear power, but nuclear plants can take years, even decades, to build. Solar panel installation helps get people the power they need without having to wait decades. The relative ease of solar panel installation is another reason solar panels might one day replace nuclear power plants.
Another argument for solar versus nuclear is that residential solar panels have been decreasing in cost. In the past, solar panel installation has been too expensive for most homeowners to afford. However, this might already be starting to change. Dan Hahn writes that the price of a residential solar panel system is decreasing. He writes that in many areas of the country, Baltimore for example, residential solar panels can pay for themselves "in just six years," as one Baltimore residential solar installer (http://www.solargaines.com/residential.html) notes. Solar panels are not just good for the environment, but they are also good for consumers' wallets as well.
The decreasing costs of residential solar panel systems and ever-growing environmental awareness bodes well for the future of solar power - especially to the tune of 17 gigawatts of power generated last year! There is hope that this green technology will one day be able to power many of our homes and businesses, safely and affordably. A solar-powered future is a bright, safe future.
Big sunspot unleashes intense solar flare
A sunspot known as AR1654 produced the M1-class flare, according to officials with NASA's Solar Dynamics Observatory.
Fri, Jan 11 2013 at 6:40 PM
This view of the flare on Jan. 11, 2013, was recorded by NASA's Solar Dynamics Observatory. (Photo: NASA)
The surface of the sun erupted in a solar flare early today (Jan. 11), unleashing a blast of super-heated plasma into space.
A huge sunspot known as AR1654 produced the M1-class flare at 4:11 a.m. EST (0911 GMT), officials with NASA's Solar Dynamics Observatory said in a description of the event. The SDO spacecraft is one of several sun-watching space telescopes keeping tabs on solar flares and other sun weather events.
According to Spaceweather.com, sunspot AR1654 is growing more active and is now "crackling with M-class solar flares" like the one that erupted today.
"AR1654 is getting bigger as it turns toward Earth," the website reported. "Not only is the chance of flares increasing, but also the chance of an Earth-directed eruption.This could be the sunspot that breaks the recent lengthy spell of calm space weather around our planet."
The sun is in an active phase of its current 11-year weather cycle, which scientists call Solar Cycle 24. The sun's activity cycle is expected to reach its peak (or "solar maximum") in 2013, astronomers have said.
The most powerful solar flares, X-class flares, have the most significant effect on Earth. They can cause long-lasting radiation storms in our planet's upper atmosphere and trigger radio blackouts.
Medium-size M-class flares can cause brief radio blackouts in the polar regions and occasional minor radiation storms. C-class flares, the weakest in scientists' three-tiered classification system, have few noticeable consequences.
Distant starlight has given astronomers the best look yet at a distant icy sibling of Pluto, a dwarf planet called Makemake that appears to be missing its atmosphere, researchers say.
Although this icy world currently lacks an atmosphere, there is still a chance it could form one like a comet when it approaches the point in its orbit that is closest to the sun, scientists added.
In the past decade, astronomers have discovered a slew of "dwarf planets" that dwell with Pluto beyond the orbit of Neptune. Makemake was a world nicknamed "Easterbunny" by its discoverers before officially getting named after the Polynesian creator of humanity and the god of fertility.
The dwarf planet's red-tinged surface is apparently covered by a layer of frozen methane, and is bright enough to be seen by a high-end amateur telescope, despite its current distance of nearly 53 times the distance between the Earth and the sun. [Makemake's Missing Atmosphere (Video)]
Makemake: A plutoid revealed
Makemake is a type of icy dwarf planet known as a plutoid, as are Pluto and the newfound trans-Neptunian worlds Eris and Haumea.
Whereas Pluto has a tenuous atmosphere surrounding it, its near-twin Eris does not, most likely due to Eris' greater distance from the sun and colder surface temperature. Makemake orbits at an intermediate distance from the sun between Pluto and Eris, raising the question of whether it might possess an atmosphere.
In 2011, Makemake passed directly in front of the distant star NOMAD 1181-0235723. This eclipse or occultation helped backlight the icy world, and researchers now reveal data from seven telescopes of this eclipse has helped them pin down Makemake's size, shape and surface properties better than ever. [Dwarf Planets of the Solar System (Infographic)]
"For me it is extremely remarkable that we can get an accurate knowledge of important properties of these mysterious dwarf planets even though they are so far away from the Earth," said lead study author Jose Ortiz, a planetary scientist at the Institute of Astrophysics of Andalucía in Granada, Spain. "Only three years ago we had never observed a single occultation by a trans-Neptunian object, and now we have managed to observe 12 such events, nine of them by our international team."
Such occultations are extremely difficult to predict and observe. For comparison, these worlds are so distant they appear about the same size "as that of a coin seen at a distance of 30 miles (50 kilometers) or smaller," Ortiz told Space.com. "But thanks to our hard work and to an important international collaboration, we were able to beat all the difficulties."
Makemake is about 890 miles (1,430 km) wide, making it about two-thirds the diameter of Pluto. Light from this distant star appeared and disappeared quickly as Makemake passed in front of it. This suggests there was no significant atmosphere around it to smear out the star's light.
At most, Makemake's atmosphere is 80 million to 250 million times thinner than Earth's at sea level, the researchers calculate. Still, there might be patches of atmosphere overlying warmer regions on its surface, such as dark patches that absorb more sunlight.
"We suspect that these dark patches might be concentrated near the latitude of the subsolar point of the planet — the subsolar point is the point of the planet where the solar rays reach the surface perpendicularly, and therefore cause the maximum heating possible," Ortiz said. "These dark patches might form sort of a dark band in the planet."
Wispy atmosphere still possible
Other bodies with patchy atmospheres include Jupiter's moon Io and Saturn's moon Enceladus, which arise "mostly from gas released by volcanoes or the so-called cryovolcanoes, 'volcanoes' which instead of releasing magma release liquid water or a liquid mix," Ortiz said. "Even Mars has areas with a locally denser atmosphere, which in this case arises from sublimation of carbon dioxide ice."
Makemake might very well behave like a comet and grow an atmosphere during the parts of its year when it approaches the sun.
"We suspect that this is the case," Ortiz said. "But comets are usually so small and have so little mass that their gravity does not allow them to retain the atmospheres, which escape to space giving rise to the comets' tails. In the case of Makemake, its gravity is much higher and therefore the escape of the gases is not as dramatic as that of the comets."
Future research can focus on looking for other stellar occultations by large trans-Neptunian objects.
"We can now investigate trans-Neptunian objects with far more in depth than we could a few years ago, thanks to the stellar occultation technique," Ortiz said. "This will not only shed light on atmospheric phenomena, but also on important physics of these bodies. We would also like to explain and understand the similarities and differences in composition of the trans-Neptunian objects in general, which requires theoretical developments, models of different physical phenomena and plenty of work in many fields."
The scientists are to detail their findings Thursday in the journal Nature.
From Ohio History Central
An image of the Viceroy Butterfly
Viceroy butterflies (Limenitis archippus) look almost identical to the monarch butterfly. The identifying difference is that viceroys have a black line across the hindwing and white dots in the black band along the edge. Their wingspan reaches two and a half to three and three-eighths inches. Because they resemble the foul-tasting monarch it has few, if any, predators.
Viceroys are found in habitats that include moist open or shrubby areas such as willow thickets, wet meadows, and lake and swamp edges. Males perch or patrol for females around caterpillar host plants, including willow, poplar and cottonwood trees. Females will lay eggs on the tip of the leaves. They will lay only two or three eggs per plant. When the caterpillars emerge, they will eat the eggshell and then begin at night to feed on catkins and leaves of the host trees.
Young caterpillars construct a ball made of leaf bits, animal waste and silk, hanging off the leaf on which they are feeding. Scientists believe this hanging ball may distract predators because they look like bird droppings. Older caterpillars will roll a leaf tip in order to make a shelter for the winter.
After the completion of metamorphosis, adult viceroys emerge and begin to feed. In the spring, before flowers are available, their food consists of aphid honeydew, carrion, animal waste and rotting fungus. Later, asters, goldenrod, joe-pye weed, and thistle make up their diet.
Viceroy butterflies can be found throughout most of Ohio. However, in other areas of the United States, it is threatened because of a loss of habitat.
Special Report - Cultivating Future Technologies
As the year 2000 arrives, many people are looking back in time to determine how the world will change in the next millennium. Science has always played a significant role in society, but according to Dr. Gerry Stokes, who leads Pacific Northwest National Laboratory's Environmental Science and Health Division, that role will change in the coming century and in the next millennium. He notes that science has evolved over time and will continue to evolve as it plays an increasingly important role in our daily lives. We asked Dr. Stokes about how science has changed and how it will tackle the tough problems in the future.
How has science evolved in the last century?
Prior to this century, two kinds of science evolved. First, we had Galileo. He asked, `Why should people just think about something when they could go out and measure it?' This way of thinking led to modern experimental science. Then we had Newton and modern mathematically based theoretical science. He brought rigor to the process of creating a self-consistent explanation of existing facts. In the twentieth century, driven by Von Neumann, we began computational science, in which we use computer models to examine the consequences of what we think we already know. While this is related to Newton's theoretical approach, it is very different.
Do you see computer models as the wave of the future?
Yes, but computers aren't large enough to hold everything we know. We have to decide what to put in them, and that is the heart of computational science. Science has traditionally focused on the process of reductionism—taking things apart and forming specialties to look at every little piece. We have to reassemble knowledge to attack the big complicated problems. For example, we don't know how the human body operates as a whole. We study cells, or systems, or some smaller piece of the puzzle that can be brought into the lab or entered into a computer. Computational science will help make the transition from science of the lab to science in the real world.
How will this transition from science in the lab to science in the world take place?
As I look to the future, I see science as being necessarily multidisciplinary and perhaps inter-disciplinary. Teams of people from different disciplines will have to come together to tackle a problem. This will be challenging because we're used to dealing with things in small pieces. There are some technologies in today's world, like automobiles and aircraft, that no one person knows everything there is to know about them. Instead we have specialized experts that understand specific parts and work together to create the product. As we look at the real world, if we're not looking at the whole problem, I don't think we know how to ask the right questions to guide these teams on a path to the solution.
Can you explain what you mean about the "right" questions?
We have a difficult time articulating the big questions. It's not obvious to me that the breakthroughs we need will come from looking through the small windows of traditional science. In studies of global warming the questions being addressed deal with how much the climate is changing, how fast it is changing and what will happen as a result. I'm not convinced that those are the questions we need to be answering. Maybe the question should be more like `How can we characterize the planet in a way to understand how it changes and how we are affected by those changes?'
Has the obligation of science changed in the last century?
The biggest change is that science is far more central to civilization than it was at the start of the century. The advancement of civilization depends on it and reaps the benefits from it. I think society expects more of us.
What does society expect from science?
The world wants more than technology. The public wants science to help make sense of the world around us—to put things into perspective. In that regard, science has a lot to offer. I think that the environment and health are the two biggest challenges the public wants addressed.
How can Pacific Northwest help address those issues?
There are three strands in our environmental mission here at the lab. Environmental science helps us understand the legacy of past practices. Society has created situations that are causing difficulty now, and we need science to help `unfoul the footpath.'
Then there's the stewardship issue. What kind of legacy are we leaving behind? For every gallon of gas we use we're putting five pounds of carbon into the atmosphere. We want to know if some seemingly unconnected act, such as driving cars, is causing the extinction of a species or the elimination of a small island nation.
The focus now is moving to the question of how the environment impacts human health. How is what we're putting into the environment affecting people? The science we use to answer this question is 20 years old. As a society we've based our conclusions on experiments where animals are exposed to high doses and then inferences are made on how lower doses would affect people. Finding a better way is a new and challenging area for science. It's significant because these results are the basis for environmental legislation and regulation.
Why did it take so long for the need to understand how the environment affects human health to rise to the surface?
It comes back to whether we're asking the right questions. Health issues can be very personal. Medicine is very diagnostic. People feel bad and they want to be healed. Outside of epidemiology, there haven't been many attempts to deal with populations as a whole. We've had computer models of climate systems for about 10 years and yet there are no models of the public health. We need to ask questions like how would changing the smoking habits of every person affect society's health? How many people would still get lung disease from other causes? We need to understand the compounding factors to truly determine the risk elements of disease.
Besides computer modeling, what kinds of research are becoming increasingly important?
We're learning what drives biotechnology. We're building an understanding of the human genome, which is the code for life. We will then be able to determine what proteins are being made in cells, but that's only part of the picture. Some are made and destroyed, others combine to form something else.
Now we're beginning to determine what proteins are actually present and what they do. This will create a new class of diagnostics to show how humans react to the environment. In the final analysis, computation will be critical here as well. What's the point of knowing something if you don't know the consequences? With computer modeling you can decrease the amount of experimentation it takes to make the world approachable.
pg_dumpall is a utility for writing out ("dumping") all PostgreSQL databases of a cluster into one script file. The script file contains SQL commands that can be used as input to psql to restore the databases. It does this by calling pg_dump for each database in a cluster. pg_dumpall also dumps global objects that are common to all databases. (pg_dump does not save these objects.) This currently includes information about database users and groups, tablespaces, and properties such as access permissions that apply to databases as a whole.
Since pg_dumpall reads tables from all databases you will most likely have to connect as a database superuser in order to produce a complete dump. Also you will need superuser privileges to execute the saved script in order to be allowed to add users and groups, and to create databases.
The SQL script will be written to the standard output. Use the -f/--file option or shell operators to redirect it into a file.
pg_dumpall needs to connect several times to the PostgreSQL server (once per database). If you use password authentication it will ask for a password each time. It is convenient to have a ~/.pgpass file in such cases. See Section 31.15 for more information.
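Each line of ~/.pgpass has the form hostname:port:database:username:password, where * can stand for any field. For example (the values below are placeholders, not recommendations), a single entry covering all databases on a local server might look like:

localhost:5432:*:postgres:mysecretpassword

The file must not be readable by group or others (for example, chmod 0600 ~/.pgpass), or it will be ignored.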
The following command-line options control the content and format of the output.
-a, --data-only
Dump only the data, not the schema (data definitions).
-c, --clean
Include SQL commands to clean (drop) databases before recreating them. DROP commands for roles and tablespaces are added as well.
-f filename, --file=filename
Send output to the specified file. If this is omitted, the standard output is used.
-g, --globals-only
Dump only global objects (roles and tablespaces), no databases.
-i, --ignore-version
A deprecated option that is now ignored.
-o, --oids
Dump object identifiers (OIDs) as part of the data for every table. Use this option if your application references the OID columns in some way (e.g., in a foreign key constraint). Otherwise, this option should not be used.
-O, --no-owner
Do not output commands to set ownership of objects to match the original database. By default, pg_dumpall issues ALTER OWNER or SET SESSION AUTHORIZATION statements to set ownership of created schema elements. These statements will fail when the script is run unless it is started by a superuser (or the same user that owns all of the objects in the script). To make a script that can be restored by any user, but will give that user ownership of all the objects, specify -O.
-r, --roles-only
Dump only roles, no databases or tablespaces.
-s, --schema-only
Dump only the object definitions (schema), not data.
-S username, --superuser=username
Specify the superuser user name to use when disabling triggers. This is only relevant if --disable-triggers is used. (Usually, it's better to leave this out, and instead start the resulting script as superuser.)
-t, --tablespaces-only
Dump only tablespaces, no databases or roles.
-v, --verbose
Specifies verbose mode. This will cause pg_dumpall to output start/stop times to the dump file, and progress messages to standard error. It will also enable verbose output in pg_dump.
-V, --version
Print the pg_dumpall version and exit.
-x, --no-privileges, --no-acl
Prevent dumping of access privileges (grant/revoke commands).
--binary-upgrade
This option is for use by in-place upgrade utilities. Its use for other purposes is not recommended or supported. The behavior of the option may change in future releases without notice.
--column-inserts, --attribute-inserts
Dump data as INSERT commands with explicit column names (INSERT INTO table (column, ...) VALUES ...). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases.
--disable-dollar-quoting
This option disables the use of dollar quoting for function bodies, and forces them to be quoted using SQL standard string syntax.
--disable-triggers
This option is only relevant when creating a data-only dump. It instructs pg_dumpall to include commands to temporarily disable triggers on the target tables while the data is reloaded. Use this if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data reload.
Presently, the commands emitted for --disable-triggers must be done as superuser. So, you should also specify a superuser name with -S, or preferably be careful to start the resulting script as a superuser.
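For example (the superuser name postgres is only a placeholder), a data-only dump that emits the trigger-disabling commands might be produced with:

$ pg_dumpall -a --disable-triggers -S postgres > data.sql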
--inserts
Dump data as INSERT commands (rather than COPY). This will make restoration very slow; it is mainly useful for making dumps that can be loaded into non-PostgreSQL databases. Note that the restore might fail altogether if you have rearranged column order. The --column-inserts option is safer, though even slower.
--lock-wait-timeout=timeout
Do not wait forever to acquire shared table locks at the beginning of the dump. Instead, fail if unable to lock a table within the specified timeout. The timeout may be specified in any of the formats accepted by SET statement_timeout. Allowed values vary depending on the server version you are dumping from, but an integer number of milliseconds is accepted by all versions since 7.3. This option is ignored when dumping from a pre-7.3 server.
--no-security-labels
Do not dump security labels.
--no-tablespaces
Do not output commands to create tablespaces nor select tablespaces for objects. With this option, all objects will be created in whichever tablespace is the default during restore.
--no-unlogged-table-data
Do not dump the contents of unlogged tables. This option has no effect on whether or not the table definitions (schema) are dumped; it only suppresses dumping the table data.
--quote-all-identifiers
Force quoting of all identifiers. This may be useful when dumping a database for migration to a future version that may have introduced additional keywords.
--use-set-session-authorization
Output SQL-standard SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to determine object ownership. This makes the dump more standards compatible, but depending on the history of the objects in the dump, might not restore properly.
-?, --help
Show help about pg_dumpall command line arguments, and exit.
The following command-line options control the database connection parameters.
-h host, --host=host
Specifies the host name of the machine on which the database server is running. If the value begins with a slash, it is used as the directory for the Unix domain socket. The default is taken from the PGHOST environment variable, if set, else a Unix domain socket connection is attempted.
-l dbname, --database=dbname
Specifies the name of the database to connect to to dump global objects and discover what other databases should be dumped. If not specified, the postgres database will be used, and if that does not exist, template1 will be used.
-p port, --port=port
Specifies the TCP port or local Unix domain socket file extension on which the server is listening for connections. Defaults to the PGPORT environment variable, if set, or a compiled-in default.
-U username, --username=username
User name to connect as.
-w, --no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means such as a .pgpass file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
-W, --password
Force pg_dumpall to prompt for a password before connecting to a database.
This option is never essential, since pg_dumpall will automatically prompt for a password if the server demands password authentication. However, pg_dumpall will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
Note that the password prompt will occur again for each database to be dumped. Usually, it's better to set up a ~/.pgpass file than to rely on manual password entry.
--role=rolename
Specifies a role name to be used to create the dump. This option causes pg_dumpall to issue a SET ROLE rolename command after connecting to the database. It is useful when the authenticated user (specified by -U) lacks privileges needed by pg_dumpall, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows dumps to be made without violating the policy.
Environment variables such as PGHOST, PGPORT, and PGUSER supply default connection parameters.
This utility, like most other PostgreSQL utilities, also uses the environment variables supported by libpq (see Section 31.14).
Since pg_dumpall calls pg_dump internally, some diagnostic messages will refer to pg_dump.
Once restored, it is wise to run ANALYZE on each database so the optimizer has useful statistics. You can also run vacuumdb -a -z to analyze all databases.
pg_dumpall requires all needed tablespace directories to exist before the restore; otherwise, database creation will fail for databases in non-default locations.
To dump all databases:
$ pg_dumpall > db.out
To reload database(s) from this file, you can use:
$ psql -f db.out postgres
(It is not important to which database you connect here since the script file created by pg_dumpall will contain the appropriate commands to create and connect to the saved databases.)
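A common variation, shown here only as a sketch (mydb is a placeholder database name), is to dump the global objects and each database separately so they can be restored individually:

$ pg_dumpall -g > globals.sql
$ pg_dump mydb > mydb.sql
$ psql -f globals.sql postgres
$ createdb mydb
$ psql -d mydb -f mydb.sql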
Check pg_dump for details on possible error conditions.
Streaks of condensed water vapor created in the air by jet airplanes at high altitudes. (Merriam-Websters)
Streamer of cloud sometimes observed behind an airplane flying in clear, cold, humid air. (Encyclopaedia Britannica)
A visible cloud streak, usually brilliantly white in color, which trails behind a missile or other vehicle in flight under certain conditions. (DOD Dictionary of Military Terms)
Contrails can exist in two forms: water droplet and ice crystal.
Under what conditions do Contrails form?
The primary factors in contrail formation are air temperature and moisture content. They are usually seen at higher, colder altitudes, but will even occur at ground level in Antarctica, sometimes causing a visibility problem for jets that take too long to take off. Contrails started becoming a common sight during World War II, when bombers started flying at altitudes above 30,000 feet. They can exist in two forms: water and ice. A water droplet contrail occurs when an airplane flies through cold and supersaturated moist air and the warm water vapour produced by the engine condenses into tiny droplets. Under colder temperatures the water will freeze, creating suspended ice crystals.
[Graph: contrail data collected for Houston, TX over several months, plotted against temperature and humidity (degrees of separation between dew point and temperature); red points indicate longer relative contrail persistence, blue shorter persistence. See the Trail Research Report for full details.]
A Contrail forms upon condensation of water vapour produced by the combustion of fuel in the airplane engines. When the ambient relative humidity is high, the resulting water-droplet and/or ice-crystal plume may last for several hours. The trail may be distorted by the winds, etc. (Encyclopaedia Britannica)
What are Chemtrails?
Streaks of chemicals created in the air by spray systems on airplanes at any altitude. Chemicals are sprayed via planes for many purposes including crop dusting and mosquito control. Also fuel is sometimes dumped to reduce weight before landing. But within the Chemtrail observer community Chemtrails are the product of an active large scale operation.
Chemtrails are said to vary from contrails in their length of persistence.
What the hell is really going on?
Some people are reporting what they describe to be unusual activity in the sky, including jets leaving trails at low altitudes, spray lines creating X’s, S’s and parallel lines, lines that slowly spread to create a canopy of haze, and reports of unusual smells, tastes, and even illness related to the trails.
Also, a reddish-brown gel, dropped from low-flying aircraft, has been observed by people in the past and was even documented on Unsolved Mysteries. Samples of this substance have allegedly been analyzed by Margareta-Erminia Cassani and found to be teeming with biological organisms.
What would be the purpose of releasing these chemicals or biological agents?
This must be decided by the reader for themselves.
There are currently three main hypotheses:
Humans have had the ability to physically affect the weather since learning how to seed clouds in 1946, or possibly 1880. The popular conception of weather manipulation is limited to cloud seeding, but the possibility that the extents of our abilities may have progressed in the meantime is definitely plausible. The fact that the military is very interested in weather control is no secret and many propose that the Chemtrail Phenomena is a part of this. If true, what is the goal of the weather modification and what negative effects could it have on the environment? NASA is currently conducting several programs that are studying the effects of contrails on weather and the effects do not appear to be beneficial.
The use of chemical and biological agents by a government against its own people is, unfortunately, a historical fact. Even unintentional accidents can occur. But, some people suggest that Chemtrails could actually be part of a program to reduce the population, and many feel Chemtrails have caused them to become ill and perhaps they are right. If the Chemtrails contain biological agents then people already weakened by other factors may have even died as a result of the additional strain on their systems, but could such a diabolical purpose be the ultimate goal? History has taught that even the most unconscionable schemes can be made into reality by men filled with fear and hate, and with such weapons in the hands of government we must remain vigilant until answers are forthcoming.
Chemical and biological weapons have been used for centuries but have recently entered the world stage as a primary threat. Biological agents have the ability to spread and multiply in casualties. These bioweapons are easy to produce and difficult, but possible, to defend against. The recent actions of the military to require anthrax vaccines for all service personnel show that this matter is of high importance. Some propose that the government may be quietly releasing bioagents to vaccinate citizens via the air. This could account for reported illnesses since a vaccine sometimes makes a person sick. Municipal water supplies might not be universal enough and could be easily sampled and tested, but everyone breathes the air. And the federal government rules the air.
Why would Chemtrails be created in the daytime where anyone could see?
Since spraying is being reported day and night, it may be a necessity of the magnitude of the operation. Also, if people noticed trails that only occurred under the cover of darkness they might be more inclined to become suspicious. But when unusual activity occurs in the sky during broad daylight it goes unnoticed because the human mind attempts to interpret things based on past experience, thus a Chemtrail would be just a Contrail to most. If they even bothered to look up and notice.
Why would Chemtrails be created in such noticeable patterns?
If jet planes are used to provide aerial spray coverage of a particular area, and the planes are leaving a trail, there are going to be unusual patterns in the sky. Again, the human oblivious factor comes into play to lower the impact of the lines.
When did all this start?
Reports of Chemtrails began slowly gaining momentum in 1999, and are increasing rapidly in 2000. There are reports and photographic evidence to suggest that some spraying was occurring as early as 1990.
What evidence exists to support a Chemtrail operation?
No concrete proof exists and no governmental admissions have been issued for the Chemtrail operation. It has been openly admitted by the Pentagon that the U.S. military has performed many biological warfare tests on unknowing servicemen in the past; additionally, the Wall Street Journal and the Washington Post have even reported that civilians may have died as a result of exposure to live agents sprayed by the Army and Navy during biological warfare tests.
There are thousands of reports and hundreds of photos. You must decide for yourself what they amount to. Some people think it is just contrails, but many are saying that there is more, that something unusual is happening.
New research indicates that there may be a unique type of trail. Using atmospheric soundings and Flight Explorer, the trails over Houston, TX were observed, measured and analyzed. See the Trail Research Report for full details.
[Image: example Flight Explorer display for Houston, TX, 1/20/01, 5:57-6:07pm. Red indicates aircraft above 27,000 ft, orange lower altitudes, and blue aircraft that have landed.]
What countries have reports of Chemtrails?
Who are the responsible parties?
The largest number of reports seems to be from the US, followed by Canada. Several other countries have some reports, often noted by western visitors. Since reports are coming from multiple countries, it seems to transcend individual governments. But with no obvious controlling body the answer remains to be seen.
[Table: reporting countries by United Nations, NATO member, and NATO partner status.]
What is a Sundog, Chemdog?
Why did I see two suns?
These effects are all caused by the refraction of light.
In meteorology, Halo is the name given to a ring of light surrounding the sun or moon. This effect is produced by light as it passes through ice crystals suspended in the air. A Sundog is an even more elusive natural phenomenon, which utilizes the hexagonal shape of the ice crystals along with a “preferred” horizontal orientation of the crystals' flat faces. The Sundog appears as a bright spot of light, usually to the left or right of the Halo, while the sun is lower in the sky. A Sunring, Chemring, or Chembow is based on the same principle but created in the haze of heavy daytime spraying instead of ice crystals. These chemical Sunrings are quite large, probably larger than the water-based 22 degree rings. The ring can appear as a 360 degree muted rainbow under ideal conditions. Along the ring, a chemical Sundog, or Chemdog, may occur. This has been described by some as “two suns”. Also a Chemdog may be formed in the precipitation of particles from below a Chemtrail. In this case the full Sunring is not created, but the brighter Chemdog is seen.
What is an Iridescent Chembow?
Why did I see a rainbow in a cloud or Chemtrail?
This effect is caused by the diffraction of light.
This is caused by cloud iridescence, which occurs when sunlight is diffracted by water droplets to create an irregularly shaped rainbow. This effect can also be seen lighting up Chemtrails with a bright blob of spectrum colors.
The term Chembow is popularly used to apply to both the “bow” of the Sunring and the rainbow-colored patch of the Iridescence effect. Perhaps more correctly applied to the former, as it actually is a bow.
What is the deal with the black lines?
Dark lines or trails in the sky can be caused by a variety of things. Sometimes even normal contrails can appear dark due to lighting effects or excessive pollution. Many black lines are the result of a trail casting a shadow onto a canopy of haze below. The shadow can appear to precede the airplane if the sun is shining from behind. Cloud cutting, the process in which a plane flies through a layer of cirrus aviaticus, can sometimes create channels which also appear dark.
What about the silver orbs?
A rare but documented phenomenon, silver orbs, which are also described as white or nickel in color, have been observed in conjunction with spraying. They are reported to hover in the general area of Chemtrails and then leave. These may be advanced military drones involved with testing the components of the trails. Sighting locations include California, Texas and Alabama.
This information is from http://www.chemtrailcentral.com/chemfaq.shtml | <urn:uuid:f1ed41ad-c43b-4e43-8fbe-37fd6c0ed1dc> | 4 | 2,255 | FAQ | Science & Tech. | 45.030325 | 1,513 |
John Graham, an astronomer at the Carnegie Institution of Washington, explains.
The length of a star's life depends on how fast it uses up its nuclear fuel. Our sun, in many ways an average sort of star, has been around for nearly five billion years and has enough fuel to keep going for another five billion years. Almost all stars shine as a result of the nuclear fusion of hydrogen into helium. This takes place within their hot, dense cores where temperatures are as high as 20 million degrees. The rate of energy generation for a star is very sensitive to both temperature and the gravitational compression from its outer layers. These parameters are higher for heavier stars, and the rate of energy generation--and in turn the observed luminosity--goes roughly as the cube of the stellar mass. Heavier stars thus burn their fuel much faster than less massive ones do and are disproportionately brighter. Some will exhaust their available hydrogen within a few million years. On the other hand, the least massive stars that we know are so parsimonious in their fuel consumption that they can live to ages older than that of the universe itself--about 15 billion years. But because they have such low energy output, they are very faint.
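As a rough illustration of the scaling described above (luminosity going roughly as the cube of the mass, and the available fuel going roughly as the mass), one can sketch an order-of-magnitude lifetime estimate. The 10-billion-year anchor for a Sun-like star is taken from the text; everything else here is a simplifying assumption, not a statement from the original answer.

```python
def main_sequence_lifetime_gyr(mass_solar):
    """Crude lifetime estimate: fuel ~ M, luminosity ~ M**3, anchored to
    roughly 10 billion years for a 1-solar-mass star."""
    luminosity_solar = mass_solar ** 3
    return 10.0 * mass_solar / luminosity_solar  # i.e. 10 / M**2, in Gyr

for m in (0.3, 1.0, 3.0, 15.0):
    print(f"{m:4.1f} Msun -> roughly {main_sequence_lifetime_gyr(m):.3g} Gyr")
```

With these crude assumptions, a 0.3-solar-mass star outlives the present age of the universe, while a 15-solar-mass star lasts only a few tens of millions of years, consistent with the qualitative picture above.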
When we look up at the stars at night, almost all of the ones we can see are intrinsically more massive and brighter than our sun. Most longer-lasting stars that are fainter than the sun are just too dim to view without telescopic aid. At the end of a star's life, when the supply of available hydrogen is nearly exhausted, it swells up and brightens. Many stars that are visible to the naked eye are in this stage of their life cycles because this bias brings them preferentially to our attention. They are, on average, a few hundred million years old and slowly coming to the end of their lives. A massive star such as the red Betelgeuse in Orion, in contrast, approaches its demise much more quickly. It has been spending its fuel so extravagantly that it cannot be older than about 10 million years. Within a million years, it is expected to go into complete collapse before probably exploding as a supernova.
Stars are still being born at the present time from dense clouds of dust and gas, but they remain deeply embedded in their placental material and cannot be seen in visible light. The enveloping dust is transparent to infrared radiation, however, so scientists using modern detecting devices can easily locate and study them. In so doing, we hope to learn how planetary systems like our own come together.
Answer originally published on February 24, 2003. | <urn:uuid:04791131-342c-45f0-a292-58e57f41a2d5> | 3.828125 | 529 | Knowledge Article | Science & Tech. | 48.679943 | 1,514 |
by Staff Writers
Washington DC (SPX) Jun 15, 2012
Two of our Milky Way's neighbor galaxies may have had a close encounter billions of years ago, recent studies with the National Science Foundation's Green Bank Telescope (GBT) indicate. The new observations confirm a disputed 2004 discovery of hydrogen gas streaming between the giant Andromeda Galaxy, also known as M31, and the Triangulum Galaxy, or M33.
"The properties of this gas indicate that these two galaxies may have passed close together in the distant past," said Jay Lockman, of the National Radio Astronomy Observatory (NRAO). "Studying what may be a gaseous link between the two can give us a new key to understanding the evolution of both galaxies," he added.
The two galaxies, about 2.6 and 3 million light-years, respectively, from Earth, are members of the Local Group of galaxies that includes our own Milky Way and about 30 others.
The hydrogen "bridge" between the galaxies was discovered in 2004 by astronomers using the Westerbork Synthesis Radio Telescope in the Netherlands, but other scientists questioned the discovery on technical grounds. Detailed studies with the highly-sensitive GBT confirmed the existence of the bridge, and showed six dense clumps of gas in the stream.
Observations of these clumps showed that they share roughly the same relative velocity with respect to Earth as the two galaxies, strengthening the argument that they are part of a bridge between the two.
When galaxies pass close to each other, one result is "tidal tails" of gas pulled into intergalactic space from the galaxies as lengthy streams.
"We think it's very likely that the hydrogen gas we see between M31 and M33 is the remnant of a tidal tail that originated during a close encounter, probably billions of years ago," said Spencer Wolfe, of West Virginia University. "The encounter had to be long ago, because neither galaxy shows evidence of disruption today," he added.
"The gas we studied is very tenuous and its radio emission is extremely faint - so faint that it is beyond the reach of most radio telescopes," Lockman said. "We plan to use the advanced capabilities of the GBT to continue this work and learn more about both the gas and, hopefully, the orbital histories of the two galaxies," he added.
Lockman and Wolfe worked with D.J. Pisano, of West Virginia University, and Stacy McGaugh and Edward Shaya of the University of Maryland. The scientists presented their findings at the American Astronomical Society's meeting in Anchorage, Alaska.
National Radio Astronomy Observatory
Stellar Chemistry, The Universe And All Within It
WISE Finds Few Brown Dwarfs Close to Home
Pasadena CA (JPL) Jun 15, 2012
Astronomers are getting to know the neighbors better. Our sun resides within a spiral arm of our Milky Way galaxy about two-thirds of the way out from the center. It lives in a fairly calm, suburb-like area with an average number of stellar residents. Recently, NASA's Wide-field Infrared Survey Explorer, or WISE, has been turning up a new crowd of stars close to home: the coldest of the brown dw ... read more
|The content herein, unless otherwise known to be public domain, are Copyright 1995-2012 - Space Media Network. AFP, UPI and IANS news wire stories are copyright Agence France-Presse, United Press International and Indo-Asia News Service. ESA Portal Reports are copyright European Space Agency. All NASA sourced material is public domain. Additional copyrights may apply in whole or part to other bona fide parties. Advertising does not imply endorsement,agreement or approval of any opinions, statements or information provided by Space Media Network on any Web page published or hosted by Space Media Network. Privacy Statement| | <urn:uuid:7a86fc5e-2416-4f57-970d-e4d60a1a52e9> | 2.6875 | 805 | Truncated | Science & Tech. | 41.044724 | 1,515 |
Nasa scientists have no doubt the world will still be in one piece next week, beyond the Mayan-predicted demise on December 21.
They have already released a video, called 'The World Didn't End Yesterday', which pulls the prophecy to bits.
At the outset of the video, meant to be watched by viewers the day after the supposed end of the world, it says: "If you are watching this video, it means one thing. The world didn't end yesterday".
The video points to the theories of Dr John Carlson, the director of the Centre for Archeoastronomy.
He is backed up by Nasa scientists quoted on the space agency's website. "The world will not end in 2012," the scientists said.
"Our planet has been getting along just fine for more than 4 billion years, and credible scientists worldwide know of no threat associated with 2012."
They write the predictions started with the idea a planet called Nibiru was headed toward Earth.
"This catastrophe was initially predicted for May 2003, but when nothing happened the doomsday date was moved forward to December 2012 and linked to the end of one of the cycles in the ancient Mayan calendar at the winter solstice in 2012 - hence the predicted doomsday date of December 21, 2012."
They conclude by asking "where is the science?" of the claims, and say there is no credible evidence for any of the assertions made about weird events happening in December 2012.
- © Fairfax NZ News
Is our atmosphere heating up too fast?Related story: (See story) | <urn:uuid:b7f5762a-b4aa-4791-ad9f-d5479123c3af> | 3.109375 | 322 | News Article | Science & Tech. | 59.801747 | 1,516 |
Lesson 2: Primary Production and Upwelling in the Ocean
Colorful Convection Currents
Materials / Preparation
Review the instructions and video at Easy Science Experiments: Colorful Convection Currents
Each group of students will need:
Groups of two to four
Easy Science Experiments: Colorful Convection Currents includes a video demonstration. If you have never seen this activity done, you may want to review the video before trying the activity.
This hot and cold water activity can be rather messy. We encourage you to try the activity yourself before doing it in the classroom. Providing students with buckets can help to avoid major water spills during this activity. Make sure that the bottles are only stacked inside the buckets. You may want to do the activity outside. If you feel that your students will not be able to do the activity, you can show the online video instead, but it is more powerful to have students do the experiment themselves.
Keep an ample supply of hot and cold colored water handy.
For most students, it is intuitive that hot and cold water would mix. To see the cold water staying at the bottom may challenge their assumption. This is a good thing! But it’s important that the students first set up the experiment with the hot water at the bottom. Make sure to have a thorough discussion about what is happening. For students who haven’t studied density, you may want to include some basic density concepts at this point.
OPTIONAL: If time permits, as an extension students can also experiment with salt water of various concentrations—this will deepen the students’ understanding of layers in the water column. | <urn:uuid:4af460aa-762f-4683-bf6f-41f71039e2c4> | 4.09375 | 341 | Tutorial | Science & Tech. | 47.115169 | 1,517 |
Hard shell purple bug at the coast of Puerto Rico
Thu, Feb 26, 2009 at 7:12 PM
I was staying at a hotel on the east coast of the island of Puerto Rico and went to the shore to look at the ocean at around midday. This thing was purple, had a hard shell, did not move at all, and was about 5 inches long and 3 inches wide. It was within the rocks. This was in summer 2006.
East coast of Puerto Rico
The creature in your photograph is a Chiton. Chitons are primitive marine molluscs that have shells composed of 8 plates. The shells provide protection against waves which enable Chitons to survive on stormy rocky coasts. Chitons are sometimes called Sea Cradles.
Sat, Feb 28, 2009 at 5:58 AM
Hi Daniel, Ah, another mollusk! This is Acanthopleura granulata (Gmelin, 1791), the West Indian fuzzy chiton. The shell plates of this chiton are actually brownish and are usually very eroded. The pink/purple color on this one is due to a layer of encrusting calcareous red algae. For more info see the Wikipedia article (which I put together.) Best wishes to you,
Susan J. Hewitt
Sun, Mar 1, 2009 at 4:43 AM
I wanted to add:
1. That these chitons do move around, but only at night, grazing on microscopic algae which grows on the rock surface. Each one returns to its same spot on the rock at the end of the night.
2. That the maximum size of this species is about 3 inches in length.
3. There is a really excellent book on the chitons of P.R. called “Los Quitones de Puerto Rico” by Cedar I. Garcia Rios. | <urn:uuid:b118e623-a1a2-48f2-9400-c526bbaad919> | 2.875 | 386 | Comment Section | Science & Tech. | 76.265012 | 1,518 |
Whale of a discovery makes scientific splash
Spade-toothed beaked whale found ashore
It is the world's rarest whale and one of its rarest mammals.
Almost a thing of legend, scientists have never seen a spade-toothed beaked whale alive and, until recently, only had limited skeletal evidence they existed.
So rare is the species that when a pair of dead whales washed up on a New Zealand beach in late 2010, scientists didn't even know what they had.
But now they do ... and they're a bit giddy.
"It was a bit like finding the holy grail," said Anton van Helden, the collection manager of marine mammals at the Museum of New Zealand Te Papa Tongarewa. He's one of the co-author's of a paper published this week in the journal "Current Biology" documenting the discovery.
Another co-author, Rochelle Constantine with the School of Biological Sciences at the University of Auckland, made the discovery in 2011 while testing tissue samples from the whales with colleague Kirsten Thompson.
"When she showed me the result we were both a bit stunned," Constantine said. "We re-ran the sample again to make sure, even though the DNA clearly showed it was a spade-toothed beaked whale."
The results were stunning because only three partial specimens of the species were known to exist -- two collected in New Zealand in 1872 and in the 1950s and a third found on Robinson Crusoe Island off the coast of Chile in 1986.
The spade-toothed beaked whale looks similar to a large black, white and gray dolphin with its long pointed snout. Scientists believe they grow to be about 17 feet long. Adult males have large exposed teeth, as the name suggests.
Mistaken for a Gray's beaked whale after their discovery, the adult female and juvenile male were buried near where they were found on Opape Beach. Scientists were anxious to recover the remains and did in early 2012.
"We now have collected the only complete specimen in the world of this rarest of whales," van Helden said. "Sadly the head of the adult female had washed away through beach erosion."
Still, the recovery of the remains is a bonanza for scientists who study cetaceans.
"Yes, the discovery is indeed most exciting," said geology professor Ewan Fordyce, who studies the evolution of whales and dolphins at University of Otago, but who was not involved in the research. "Now we have an idea of the appearance and size of the species."
Its lifestyle and habits are another story -- not surprising for a species researchers have never seen alive.
Scientists can only guess based on what they know about other beaked whales, which are often boat-shy, spend little time at the surface and dive to depths of 6,600 feet (about 2,000 meters), according to van Helden. They probably dine on squid.
Yet, scientists are excited by the challenge.
"This is the species of whale that we know the least about in the world," Constantine said. "It has never been seen before, as far as we know, and for the first time we have an idea of what it looks like."
Copyright 2012 by CNN NewSource. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. | <urn:uuid:505252b5-9b8c-4e9c-87b0-9a200cac8db4> | 3.703125 | 700 | News Article | Science & Tech. | 55.318404 | 1,519 |
Tracking Shuttle Exhaust Reveals More Information About Atmospheric Winds
After the Space Shuttle Atlantis launched for the final time at 11:29 AM (EDT) on July 8, 2011, scientists tracked water vapor in its exhaust on its travels throughout the upper atmosphere. Credit: NASA Photo/Houston Chronicle, Smiley N. Pool
On July 8, 2011 the Space Shuttle Atlantis launched for the very last time. On that historic day, as the world watched its last ascent up into orbit and commentators discussed the program's contributions to space flight and scientific research over 20 years, the shuttle helped spawn one last experiment. As the shuttle reached a height of about 70 miles over the east coast of the U.S., it released – as it always did shortly after launch – 350 tons of water vapor exhaust.
As the plume of vapor spread and floated on air currents high in Earth's atmosphere, it crossed through the observation paths of seven separate sets of instruments. A group of scientists, reporting online in the Journal of Geophysical Research on August 27, 2012, tracked the plume to learn more about the airflow in the Mesosphere and Lower Thermosphere (MLT) -- a region that is typically quite hard to study. The team found the water vapor spread much faster than expected and that within 21 hours much of it collected near the arctic where it formed unusually bright high altitude clouds of a kind known as polar mesospheric clouds (PMCs). Such information will help improve global circulation models of air movement in the upper atmosphere, and also help with ongoing studies of PMCs.
NASA's Aeronomy In the Mesosphere (AIM) mission captures images like this of shining noctilucent clouds, also known as polar mesospheric clouds (PMCs), which hover over Earth's poles in summertime. Credit: NASA/AIM
"Polar mesospheric clouds are the highest clouds on Earth," says space scientist Michael Stevens at the Naval Research Laboratory, Washington, who is first author on the paper. "They shine brightly when the sun is just below the horizon and typically occur over polar regions in the summer. There is some evidence that they are increasing in number and people want to know if this is indicative of climate change or something else that we don't understand."
Since they shine at night, PMCs are also known as noctilucent clouds, and they can serve as an indicator not just of temperature changes, but also of how currents and waves move high in Earth's atmosphere. A visible cloud of water vapor from something like the shuttle also offers a serendipitous way to observe such motions in the upper winds.
"The plume from the shuttle becomes a ready-made experiment to observe the movement in the atmosphere," says Charles Jackman, a scientist at NASA's Goddard Space Flight Center in Greenbelt, Md. who is the project scientist for a NASA mission called Aeronomy Ice in the Mesosphere (AIM) that specifically observes PMCs. "What this team found is interesting since the plume moved so quickly to the pole, indicating that the winds appear much stronger at those latitudes than was thought."
To track the plume across the sky, the scientists collated seven sets of observations, including data from AIM. The first two sets of instruments to see the plume were on a NASA spacecraft called TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics). Next the plume was viewed through the Sub-Millimeter Radiometer on the Swedish Odin satellite. When the plume reached higher latitudes, it was picked up by the ground-based Microwave Spectrometer at the Institute of Atmospheric Physics in Kühlungsborn, Germany as well as an identical ground-based water vapor instrument called cWASPAM1 at the Arctic Lidar Observatory for Middle Atmospheric Research (ALOMAR) in Andenes, Norway. The plume coalesced into its final shape over the arctic as a new, extremely bright PMC on July 9, 2011, and there it could be observed from above by the AIM satellite flying overhead, and from below by another instrument at ALOMAR called the RMR lidar.
Over the course of the plume's travels, these observations showed it spreading horizontally over a distance of some 2000 to 2500 miles. Those parts that drifted into the high latitudes near the North Pole formed ice particles which settled into layers of PMCs down at about 55 miles above Earth’s surface. The speed with which the plume arrived at the arctic was a surprise.
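A back-of-the-envelope calculation using only the figures quoted above (roughly 2,000 to 2,500 miles covered in about 21 hours) gives a sense of the implied horizontal transport speed; this is an illustrative estimate, not a figure from the paper itself.

```python
MILES_TO_KM = 1.609
HOURS = 21.0  # approximate time from launch to the arctic observations

for distance_miles in (2000, 2500):
    speed_mph = distance_miles / HOURS
    speed_ms = distance_miles * MILES_TO_KM * 1000.0 / (HOURS * 3600.0)
    print(f"{distance_miles} miles in {HOURS:.0f} h -> "
          f"~{speed_mph:.0f} mph (~{speed_ms:.0f} m/s)")
```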
"The speed of the movement in the upper atmosphere gives us new information for our models," says Stevens. "As you get higher up in the atmosphere, we just don't have as many measurements of wind speeds or temperatures. The take-away message here is that we need to improve the models of that region."
Noctilucent clouds – also known as polar mesospheric clouds (PMCs) over Wismar, Germany on July 9, 2011. These clouds shine brightly even during the night. Shuttle exhaust made of water vapor formed particularly bright PMC.
Credit: Leibniz-Institute of Atmospheric Physics
Since observations of PMCs may be connected to global climate, it's important to subtract out sporadic effects such as shuttle exhaust from other consistent, long-term effects.
"One of AIM's big goals is to find out how much of the cloud's behavior is naturally induced versus man-made," says Jackman. "This last shuttle launch will help researchers separate the shuttle exhaust from the rest of the observations."
Indeed, the AIM observations showed a clear difference between typical PMCs and this shuttle-made one. Normally smaller particles exist at the top, with larger ones at the bottom. The shuttle plume PMC showed a reversed configuration, with larger particles at the top, and smaller at the bottom – offering a way to separate out such clouds in the historical record.
For more information about NASA’s AIM mission, visit:
For more information about NASA’s TIMED mission, visit:
Karen C. Fox
NASA Goddard Space Flight Center, Greenbelt, MD. | <urn:uuid:9df45a47-164d-41a0-b86f-ca8ddc144cdb> | 3.53125 | 1,291 | News (Org.) | Science & Tech. | 41.84994 | 1,520 |
Date: October 30, 2012
Department: SMAST / Administration
The proposed research will employ a combination of models and observations from ships and satellites to examine linkages between land and ocean and how these interactions affect the ocean's ability to take up CO2 from the atmosphere.
The project was one of 62 proposals considered by NASA as part of its Carbon Monitoring System Science Team, which includes members from across the country.
SMAST Dean Dr. Steven E. Lohrenz will personally lead the research team, which includes partners from universities in Alabama, Georgia, Mississippi and North Carolina.
"The oceans absorb about one third of all the fossil fuel CO2 emitted into the atmosphere, but the contribution of coastal waters to this is still very uncertain. Carbon dioxide in the atmosphere influences climate and can alter the chemistry of the oceans in ways that may negatively impact marine organisms, but we simply don't know enough about how CO2 is being absorbed by the oceans and particularly by coastal waters," said Dr. Lohrenz. "This project is about gathering the scientific data that will help us to fill that gap in our understanding."
SMAST is a leader in this kind of research. SMAST researcher Dr. Jefferson Turner's 25-year study of coastal waters in southeastern Massachusetts has shown the temperature of Buzzards Bay rising by an average of five degrees -- an extraordinarily rapid change with strong potential effects on ocean acidification and CO2 retention.
The project's official name is "Development of Observational Products and Coupled Models of Land-Ocean-Atmospheric Fluxes in the Mississippi River Watershed and Gulf of Mexico in Support of Carbon Monitoring."
Regionally, it will focus primarily on the Mississippi River watershed and northern Gulf of Mexico, as well the southeastern US coast. | <urn:uuid:2c9dfd15-7a93-4787-8b14-94d847c56452> | 3.171875 | 361 | News (Org.) | Science & Tech. | 33.873061 | 1,521 |
w3schools is a good place to start learning web technologies before reading any other manual or book. Here I am summarizing the relations among most of the XML flavors; I'd recommend reading this summary before starting with w3schools.
In addition to the above diagram,
- DTD & XSD both do the same job. But XSD is newer than DTD, in form of XML, and has more datatypes & other options.
- You need not launch an XSLT processor explicitly to see the final output of your XSL. An XSLT engine comes with your Internet browser, so you can simply open your XML in the browser; it will read the linked XSL and display the output accordingly.
- XSL-FO can be considered the next version of XSL.
*Editor: I used Eclipse for developing all XML flavours.
HTML vs XHTML
XHTML is nothing but XML-formatted HTML. You need to follow the XML rules, nothing else, such as the following (a minimal well-formedness check appears after the list):
- proper nesting of tags
- all tags must be closed
- there must be a root tag
- the value of an attribute must be enclosed in double quotes
- tag names, attribute names and their values must be in lower case
- etc. . . | <urn:uuid:ce674d14-2b01-4381-9f44-dee8ed4fd00e> | 3.0625 | 252 | Personal Blog | Software Dev. | 77.2535 | 1,522 |
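In practice, the rules above amount to saying that an XHTML page must parse as well-formed XML. Here is a minimal, illustrative check using Python's standard library; the markup strings are made up for demonstration and are not from the original post.

```python
import xml.etree.ElementTree as ET

# Obeys the rules above: single root, every tag closed, lower-case names,
# attribute values quoted.
xhtml_ok = '<html><body><p class="note">Hello <br/> world</p></body></html>'

# Loose HTML habits: unclosed <br>, unquoted attribute value.
html_loose = '<html><body><p class=note>Hello <br> world</p></body></html>'

for name, markup in (("xhtml_ok", xhtml_ok), ("html_loose", html_loose)):
    try:
        ET.fromstring(markup)
        print(f"{name}: well-formed XML")
    except ET.ParseError as err:
        print(f"{name}: not well-formed ({err})")
```

Running this, the first string parses cleanly while the second raises a parse error, which is exactly the difference between XHTML and loosely written HTML.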
1) cell growth
You should look into chemotherapy and cancer medicine in general. Chemo is effective largely because it kills fast-dividing cells, so this has been worked out reasonably well. The 7-10 year number is not really correct; some cells are replaced a lot more slowly.
This is why hair often falls out in cancer treatment, because the follicle cells are growing quickly. Neurons divide very slowly - if at all - and often are never replaced. Fat cells are in between - probably replaced in the 7-10 year range. Heart cells are replaced albeit quite slowly - less than 1% per year, which implies that many cells are with you your entire lifetime.
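A rough compound-turnover calculation shows why slow replacement still leaves many original cells in place. The constant 1%-per-year rate below is a simplifying assumption taken from the figure above; real turnover varies with age.

```python
ANNUAL_TURNOVER = 0.01  # assumed constant 1% of heart cells replaced per year

for years in (25, 50, 80):
    remaining = (1 - ANNUAL_TURNOVER) ** years
    print(f"after {years} years: ~{remaining:.0%} of the original cells remain")
```

Under this assumption, roughly 45% of the original cells would still be present after 80 years, consistent with the claim that many heart cells stay with you for life.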
2) atoms/molecules change
The cell itself is in a continuous state of flux, but different parts of the cell, like cells in the body, change at different rates. Some proteins which make up the cell matrix or the DNA in the nucleus are replaced very rarely (through repair or rearrangement of the chromosome for instance) and most of the chromosome DNA is with the cell for the entire life of the cell.
Most proteins are labelled for degradation and are recycled after a few hours of function. Metabolic compounds such as sugars or salt might drift in and out of the cell continuously, maybe turning over in an hour or so. Fats can be incorporated into the cell and last for years I think. | <urn:uuid:d8e27ed7-8a34-4b64-84aa-17d61e576988> | 2.796875 | 285 | Q&A Forum | Science & Tech. | 52.339461 | 1,523 |
Issue Date: July 27, 2009
Making Graphene In A Flash
No time to make graphene via conventional routes? Then make it "in a flash."
Northwestern University scientists have just demonstrated that graphite oxide can be converted instantly to graphene via photothermal deoxygenation by exposing the material to a pulse of light from an ordinary camera flash (J. Am. Chem. Soc., DOI: 10.1021/ja902348k).
Because of its low cost and wide availability, graphite oxide is a promising precursor for making graphene-based materials, which are being studied for use in polymer composites and electronics. The oxide is typically treated at high temperature or with potent reducing agents such as hydrazine to yield graphene.
Now, Laura J. Cote, Rodolfo Cruz-Silva, and Jiaxing Huang of Northwestern have shown in a video that the flash method is an instantaneous, chemical-free way to transform graphite oxide, an electrical insulator, into graphene, a conductor, at room temperature.
The team has also shown that by applying masking and photolithography methods, the flash technique can be used to fabricate complex patterns, a key step in developing electronic components.
- Chemical & Engineering News
- ISSN 0009-2347
- Copyright © American Chemical Society | <urn:uuid:494ad5b5-3e68-43f9-89b3-28a60cf127d9> | 3.5 | 275 | News Article | Science & Tech. | 31.771335 | 1,524 |
Interesting case. From the wikipedia article, an average white dwarf has a mass of ~0.6 Msun and a radius of ~0.015 Rsun. If we want it to have the same effective temperature as the Sun, and Earth to end up with the same insolation, then the size a of the orbit is determined by the scaling relation
Rsun^2 / (1 AU)^2 ~ (0.015 Rsun)^2 / a^2 ===> a ~ 0.015 AU ~ 2*10^9 m
which is only five times the distance to the Moon. Tidal forces depend on mass and the inverse cube of distance, which gives us in terms of the lunar tidals:
Ftidal ~ ((1.2*10^30 kg)/(7*10^22 kg)) / 5^3 Ftidal|lunar ~ 1.5*10^5 Ftidal|lunar
Instead of a tidal bulge on the order of a metre, it would be on the order of 100 km. Meaning that until tidal lock is achieved, the only permanent bodies of water would be lakes with sufficiently steep sides. Anything with shallow sides, including the oceans, would lose its water to two big blobs which would do their best to remain stationary with respect to the new sun while the planet rotates along underneath them. I guess life better stays in those sealed bunkers for the duration. The ordinary scaling law for time to tidal lock is something like
Tlock ~ (10^10 years) / ((Trot in days) (Ftidal/Ftidal|lunar)^2) ~ (1 year) / (Trot in days)
where Trot is the original (unlocked) day-length with respect to the tide-inducing body. Unless Earth spins a lot faster upon capture than it does now, the result is on the order of 1 year. If, on the other hand, Earth spins a lot slower, which seems like the more likely scenario to me, then the day-length would be mainly determined by the year-length, which works out on the order of 1 day (as in 24 hours), funnily enough.
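A quick numerical restatement of the order-of-magnitude estimates above, using the same rounded inputs as the post (these values are assumptions carried over from the text, not precise data):

```python
M_DWARF = 1.2e30   # kg, ~0.6 solar masses (value used above)
M_MOON = 7e22      # kg
DIST_LUNAR = 5.0   # orbit radius in Earth-Moon distances (estimated above)
T_ROT_DAYS = 1.0   # assumed spin period relative to the dwarf, in days

# Tidal force scales with mass and the inverse cube of distance.
f_tidal = (M_DWARF / M_MOON) / DIST_LUNAR ** 3
print(f"tidal force ~ {f_tidal:.1e} x lunar")            # ~1.4e5

# Scaling relation quoted above for the time to reach tidal lock.
t_lock_years = 1e10 / (T_ROT_DAYS * f_tidal ** 2)
print(f"time to tidal lock ~ {t_lock_years:.1f} years")  # order 1 year
```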
Mind you, I'm not sure if this situation might not be too extraordinary for the scaling relation to apply in that form. It assumes that the combination of atmospheric and ocean tides and planetary deformation is sufficient to actually dissipate rotational energy into heat at the maximal rate, which seems questionable. Assuming that it does, the power output would be vast:
Ptidal ~ Erot / Tlock
Ptidal ~ (1/2 I w^2) / ((1 year) / (Trot in days))
Ptidal ~ (1/2 (2/5 M R^2) ((2 pi)/Trot)^2) / ((1 year) * (1 day) / Trot)
Ptidal ~ (7 * (6*10^24 kg) * (6*10^6 m)^2) / ((3*10^7 s) * (10^5 s) * Trot)
Ptidal ~ (5*10^26 kg m^2 / s^2) / Trot
Ptidal ~ Lsun / (Trot in seconds)
I'm not sure where all these numerical coincidences come from, but that aside, it does give one quite a good sense of scale: The Earth has about one ten-thousandth the surface of the Sun, and Trot should be on the order of one hundred thousand, so for the Earth to dissipate that amount of power, it would have to output only one order of magnitude less power per unit surface than the Sun does. Since black-body power output scales with T^4, that means more than half the temperature of the sun. And as rock begins to melt by as little as 1,000 K, this would directly liquify the upper portions of the planet. Assuming the interior has been frozen solid in the interim, one ends up with a sort of inverted planet whose surface is hotter than its core. In the long run, what is of more practical import is that this might just be hot enough to boil away (away as in all the way into space) all of our water, even in as little as that one year we're talking about.
As I said, the planet might well not actually have the capacity to dissipate power at that rate, so things might not get quite that bad. The price for that, though, would be a longer time to lock, so the not-quite-that-bad conditions would last a lot longer. Conclusion: Unless the planet is already as close as no matter to tidal lock when it assumes a habitable-zone orbit, which seems highly unlikely, the process of locking it would pretty much make it uninhabitable... Catch-22. The only upside I see is that this might indeed just about do what you were hoping for, in the long run, and kick-start the planet's dormant geological activity again. | <urn:uuid:d3b3b87d-83e9-49cb-b468-71ed859f82a4> | 3.328125 | 1,063 | Comment Section | Science & Tech. | 64.952609 | 1,525 |
Sci-tech: The wondrous world of science
Understanding ‘chicken talk’
Having contented birds is the desire of every poultry farmer, as that translates directly to higher productivity. The degree of contentment of chickens can be judged by the sounds they make. Modern computer technologies are now being used to decipher the various sounds and gauge the extent of contentment from them. Scientists at the Georgia Institute of Technology and the University of Georgia have teamed up to examine various sounds and scientifically determine the level of stress in an experimental chicken barn.
Different levels of stress were first created by increasing the temperature in the barn or spraying various levels of ammonia and recording the various sounds produced, thereby developing correlations between stress levels and the nature of sounds. The volume and pitch of the sounds as well as the speed at which they are repeated are then analysed by computers after they have been recorded.
The work is aimed at developing an automated software that will continuously monitor and determine stress levels within chicken barns through a real time audio-feed. Specific problems would be automatically detected and the situation rectified through a control system in a timely manner without the need of human intervention. This should result in increased productivity and profitability for the farmers.
Hover cars — and cars of tomorrow
The German automobile manufacturer Volkswagen has built a prototype of the car of the future that will travel while hovering above the road on a cushion of air, never touching it. The “Hover Car”, as it is called, is the result of an initiative launched by the German company in China, known as the “People’s Car Project” (PCP). Ideas about novel cars were invited and some 33 million persons visited the website. As a result 119,000 new and novel ideas were submitted. From them, the Hover Car is one of the three ideas that were selected by Volkswagen to actually build prototypes.
Another related development in cars was to use a compressed air cylinder to power a car instead of a combustion engine. India's auto giant Tata Motors acquired the license to manufacture this car from Motor Development International (MDI) of Luxembourg in 2007, and the project has now entered its second and final phase. The Kevlar cylinder in the car will need to be filled with compressed air, which will carry it for 200-300 kilometres before a refill is again needed. The cost of running it will be a fraction of that for running normal combustion engines.
Meanwhile, work on improving engine efficiencies continues. The husband and wife team of John and Helen Taylor are known for being “the world’s most fuel efficient couple” because they have already won 40 world records for fuel efficient cars. Now they have another record — the longest distance travelled of 2,616 kilometres (fuel efficiency of 84.1 miles per gallon). The record was set while driving a stock 2012 Volkswagen Passat.
With growing global water shortages and decreasing availability of cultivable land caused by the huge increases in the world population (that has now crossed seven billion), scientists are constantly striving to come up with more new efficient ways of growing food plants. These plants should have higher productivity but need lower amounts of water, fertilizer, nutrients and pesticides to grow.
An interesting solution to the problem has been found by the Purdue University researcher Burkhard Schulz. He has discovered that a certain chemical can be used to reduce the size of the plant without reducing the yield. Schulz found that propiconazole, a common fungicide, can be used to create smaller and sturdier corn plants that produce more kernels but consume less water, fertilizer and nutrients to grow. The fungicide is claimed to be harmless to humans as it is commonly sprayed on golf courses to treat fungal dollar spot disease. The chemical works by disrupting steroid production in the plants, responsible for their growth.
Buildings that clean the environment
The Museum of Modern Art (MoMA) is in the process of setting up an outdoor architectural project at Queens in New York that will pluck pollutants from the air while providing shade, shelter and water. The technology has been developed by the US architectural firm HWKN. The project, known as Wendy, employs a fascinating architecture with spikes protruding at different angles with an external fabric skin treated with nano-particles of titanium dioxide that capture and neutralise pollutants. It has been claimed that each such installation would be equivalent to removing the pollution caused by 260 cars on the roads. Such “environmentally friendly shelters” installed along the roads may be tomorrow’s answers to reducing road pollution. | <urn:uuid:10c6c942-5533-4bdb-b466-1653c35ea3b1> | 2.953125 | 931 | News Article | Science & Tech. | 37.128082 | 1,526 |
Metabolic theory of ecology
The metabolic theory of ecology (MTE) is an extension of Kleiber's law and posits that the metabolic rate of organisms is the fundamental biological rate that governs most observed patterns in ecology.
MTE is based on an interpretation of the relationships between body size, body temperature, and metabolic rate across all organisms. Small-bodied organisms tend to have higher mass-specific metabolic rates than larger-bodied organisms. Furthermore, organisms that operate at warm temperatures through endothermy or by living in warm environments tend towards higher metabolic rates than organisms that operate at colder temperatures. This pattern is consistent from the unicellular level up to the level of the largest animals on the planet.
In MTE, this relationship is considered to be the single constraint that defines biological processes at all levels of organization (from the individual up to the ecosystem level), making MTE a macroecological theory that aims to be universal in scope and application.
Theoretical background
Metabolic rate scales with the mass of an organism of a given species according to Kleiber's law, where B is whole-organism metabolic rate (in watts or another unit of power), M is organism mass (in kg), and Bo is a mass-independent normalization constant (given in a unit of power divided by a unit of mass; in this case, watts per kilogram):

B = Bo M^(3/4)
At increased temperatures, chemical reactions proceed faster. This relationship is described by the Boltzmann factor, where E is activation energy in electronvolts or joules, t is absolute temperature in kelvins, and k is the Boltzmann constant in eV/K or J/K:

e^(-E / (k t))
While Bo in the previous equation is mass-independent, it is not explicitly independent of temperature. To explain the relationship between body mass and temperature, these two equations are combined to produce the primary equation of the MTE, where bo is a normalization constant that is independent of body size or temperature:

B = bo M^(3/4) e^(-E / (k t))
According to this relationship, metabolic rate is a function of an organism's body mass and body temperature. By this equation, large organisms have higher whole-organism metabolic rates (in Watts) than small organisms, and organisms at high body temperatures have higher metabolic rates than those that exist at low body temperatures. However, specific metabolic rate (SMR, in Watts/kg) is given by

SMR = B/M = bo M^(-1/4) e^(-E / (k t))

Hence the SMR of large organisms is lower than that of small organisms.
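A small numerical sketch of the equations above may help. The activation energy of about 0.65 eV is a value commonly quoted in the MTE literature, and the normalization constant bo is left arbitrary here, so only relative comparisons are meaningful; the exponent is a parameter so that the 3/4 and 2/3 cases discussed below can be compared.

```python
import math

K_BOLTZ_EV = 8.617e-5   # Boltzmann constant in eV/K
E_ACT = 0.65            # assumed activation energy in eV
B0 = 1.0                # arbitrary normalization; only ratios are meaningful

def metabolic_rate(mass_kg, temp_k, exponent=0.75):
    """Whole-organism rate B = bo * M**exponent * exp(-E/(k*t))."""
    return B0 * mass_kg ** exponent * math.exp(-E_ACT / (K_BOLTZ_EV * temp_k))

def specific_metabolic_rate(mass_kg, temp_k, exponent=0.75):
    """Mass-specific rate B/M, which falls off as M**(exponent - 1)."""
    return metabolic_rate(mass_kg, temp_k, exponent) / mass_kg

# Mouse-sized versus elephant-sized organism at the same body temperature.
for mass in (0.03, 3000.0):
    b = metabolic_rate(mass, 310.0)
    smr = specific_metabolic_rate(mass, 310.0)
    print(f"M = {mass:7.2f} kg  B = {b:.3e}  SMR = {smr:.3e}  (relative units)")
```

Changing the exponent argument to 2/3 reproduces the alternative surface-area scaling discussed in the next subsection, which makes the practical difference between the two proposals easy to inspect.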
Controversy over exponent
There is disagreement amongst researchers about the most accurate value for use in the power function, and whether the factor is indeed universal. The main disagreement is whether metabolic rate scales to the power of 3/4 or 2/3. The majority view is currently that 3/4 is the correct exponent, but a large minority believe that 2/3 is the more accurate value. Although a rigorous exploration of the controversy over choice of scaling factor is beyond the scope of this article, it is informative to understand the biological justification for the use of either value.
The argument that 2/3 should be the correct scaling factor is based on the assumption that energy dissipation across the surface area of three dimensional organisms is the key factor driving the relationship between metabolic rate and body size. Smaller organisms tend to have higher surface area to volume ratios, causing them to lose heat energy at a faster rate than large organisms. As a consequence, small organisms must have higher specific metabolic rates to combat this loss of energy over their large surface area to volume ratio.
In contrast, the argument for a 3/4 scaling factor is based on a hydraulic model of energy distribution in organisms, where the primary source of energy dissipation is across the membranes of internal distribution networks. This model is based on the idea that metabolism is essentially the rate at which an organism’s distribution networks (such as circulatory systems in animals or xylem and phloem in plants) deliver nutrients and energy to body tissues. It therefore takes longer for large organisms to distribute nutrients throughout the body and thus they have a slower metabolic rate. The 3/4 factor is then derived from the observation that selection favors a fractal or near-fractal distribution network for space-filling circulatory systems. All fractal networks terminate in identical units (such as capillary beds), and the number of such units in organisms is proportional to a 3/4 power relationship with body size.
Kolokotrones et al. (2010) showed that the relationship between mass and metabolic rate has a convex curvature on a logarithmic scale. The curvature explains the variations in the power-law exponent.
Despite the controversy over the value of the exponent, the implications of this theory might remain true regardless of its precise numerical value.
Implications of the theory
The metabolic theory of ecology’s main implication is that metabolic rate, and the influence of body size and temperature on metabolic rate, provide the fundamental constraints by which ecological processes are governed. If this holds true from the level of the individual up to ecosystem level processes, then life history attributes, population dynamics, and ecosystem processes could be explained by the relationship between metabolic rate, body size, and body temperature.
Organism level
Small animals tend to grow fast, breed early, and die young. According to MTE, these patterns in life history traits are constrained by metabolism. An organism's metabolic rate determines its rate of food consumption, which in turn determines its rate of growth. This increased growth rate produces trade-offs that accelerate senescence. For example, metabolic processes produce free radicals as a by-product of energy production. These in turn cause damage at the cellular level, which promotes senescence and ultimately death. Selection favors organisms which best propagate given these constraints. As a result, smaller, shorter lived organisms tend to reproduce earlier in their life histories.
Population and community level
MTE has profound implications for the interpretation of population growth and community diversity. Classically, species are thought of as being either r selected (where population size is limited by the exponential rate of population growth) or K selected (where population size is limited by carrying capacity). MTE explains this diversity of reproductive strategies as a consequence of the metabolic constraints of organisms. Small organisms and organisms that exist at high body temperatures tend to be r selected, which fits with the prediction that r selection is a consequence of metabolic rate. Conversely, larger and cooler bodied animals tend to be K selected. The relationship between body size and rate of population growth has been demonstrated empirically, and in fact has been shown to scale as M^(-1/4) across taxonomic groups. The optimal population growth rate for a species is therefore thought to be determined by the allometric constraints outlined by the MTE, rather than strictly as a life history trait that is selected for based on environmental conditions.
Observed patterns of diversity can be similarly explained by MTE. It has long been observed that there are more small species than large species. In addition, there are more species in the tropics than at higher latitudes. Classically, the latitudinal gradient in species diversity has been explained by factors such as higher productivity or reduced seasonality. In contrast, MTE explains this pattern as being driven by the kinetic constraints imposed by temperature on metabolism. The rate of molecular evolution scales with metabolic rate, such that organisms with higher metabolic rates show a higher rate of change at the molecular level. If a higher rate of molecular evolution causes increased speciation rates, then adaptation and ultimately speciation may occur more quickly in warm environments and in small bodied species, ultimately explaining observed patterns of diversity across body size and latitude.
MTE’s ability to explain patterns of diversity remains controversial. For example, researchers analyzed patterns of diversity of New World coral snakes to see whether the geographical distribution of species fit within the predictions of MTE (i.e. more species in warmer areas). They found that the observed pattern of diversity could not be explained by temperature alone, and that other spatial factors such as primary productivity, topographic heterogeneity, and habitat factors better predicted the observed pattern.
Ecosystem processes
At the ecosystem level, MTE explains the relationship between temperature and production of biomass. The average production to biomass ratio of organisms is higher in small organisms than large ones. This relationship is further regulated by temperature, and the rate of production increases with temperature. As production consistently scales with body mass, MTE predicts that the primary factor that causes differing rates of production between ecosystems is temperature and not the mass of organisms within the ecosystem. This suggests that regions with similar climatic factors would sustain the same primary production, even if standing biomass is different.
See also
- Constructal theory
- Dynamic energy budget
- Evolutionary physiology
- Occupancy-abundance relationship
- Brown, J. H., Gillooly, J. F., Allen, A. P., Savage, V. M., & G. B. West (2004). "Toward a metabolic theory of ecology". Ecology 85 (7): 1771–89. doi:10.1890/03-9000.
- Agutter, P.S., Wheatley, D.N. (2004). "Metabolic scaling: consensus or controversy?". Theoretical biology and medical modelling 1: 13. doi:10.1186/1742-4682-1-13. PMC 539293. PMID 15546492.
- West, G.B., Brown, J.H., & Enquist, B.J. (1999). "The fourth dimension of life: Fractal geometry and allometric scaling of organisms". Science 284 (5420): 1677–9. doi:10.1126/science.284.5420.167. PMID 10356399.
- Kolokotrones, T., Van Savage, Deeds, E. J. (2010). "Curvature in metabolic scaling". Nature 464 (7289): 753–756. doi:10.1038/nature08920.
- Savage V.M., Gillooly J.F., Brown J.H., West G.B. & Charnov E.L. (2004). "Effects of body size and temperature on population growth". American Naturalist 163 (3): 429–441. doi:10.1086/381872. PMID 15026978.
- Enrique Cadenas, Lester Packer, ed. (1999). Understanding the process of ages : the roles of mitochondria, free radicals, and antioxidants. New York: Marcel Dekker. ISBN 0-8247-1723-6.
- Denney N.H., Jennings S. & Reynolds J.D. (2002). "Life history correlates of maximum population growth rates in marine fishes". Proceedings of the Royal Society of London B 269 (1506): 2229–37. doi:10.1098/rspb.2002.2138.
- Hutchinson, G., MacArthur, R. (1959). "A theoretical ecological model of size distributions among species of animals". Am. Nat. 93 (869): 117–125. doi:10.1086/282063.
- Rohde, K. (1992). "Latitudinal gradients in species-diversity: the search for the primary cause". Oikos 65 (3): 514–527. doi:10.2307/3545569. JSTOR 3545569.
- Allen A.P., Brown J.H. & Gillooly J.F. (2002). "Global biodiversity, biochemical kinetics, and the energetic-equivalence rule". Science 297 (5586): 1545–8. doi:10.1126/science.1072380. PMID 12202828.
- Gillooly, J.F., Allen, A.P., West, G.B., & Brown, J.H. (2005). "The rate of DNA evolution: Effects of body size and temperature on the molecular clock". Proc Natl Acad Sci U S A. 102 (1): 140–5. doi:10.1073/pnas.0407735101. PMC 544068. PMID 15618408.
- Terribile, L.C., & Diniz-Filho, J.A.F. (2009). "Spatial patterns of species richness in New World coral snakes and the metabolic theory of ecology". Acta oecologica 35 (2): 163–173. doi:10.1016/j.actao.2008.09.006.
- Banse K. & Mosher S. (1980). "Adult body mass and annual production/biomass relationships of field populations". Ecol. Monog. 50 (3): 355–379. doi:10.2307/2937256. JSTOR 2937256.
- Ernest S.K.M., Enquist B.J., Brown J.H., Charnov E.L., Gillooly J.F., Savage V.M., White E.P., Smith F.A., Hadly E.A., Haskell J.P., Lyons S.K., Maurer B.A., Niklas K.J. & Tiffney B. (2003). "Thermodynamic and metabolic effects on the scaling of production and population energy use". Ecology Letters 6 (11): 990–5. doi:10.1046/j.1461-0248.2003.00526.x. | <urn:uuid:e327ea2e-76c2-4e42-8b55-8cbed599e974> | 3.359375 | 2,788 | Knowledge Article | Science & Tech. | 50.906667 | 1,527 |
A report on a Joint Cold Spring Harbor Laboratory/Wellcome Trust Conference on 'Prion Biology', Hinxton, UK, 7-11 September 2005.
While most recent prion meetings have focused on either mammals or fungi, the conference on prion biology held near Cambridge this September stood out as an attempt to represent research on mammalian and fungal prions equally, in order to provoke discussion on fundamental questions of prion structure, biogenesis, variability and biological role.
Prions of lower eukaryotes
Over the past decade several infective proteins, or prions, have been discovered in genetically tractable lower eukaryotes, where they act like cytoplasmically inherited genetic determinants. The opening talk of the meeting was delivered by Reed Wickner (National Institutes of Health, Bethesda, USA), who was the first to suggest 11 years ago that the non-chromosomal genetic determinants known as [URE3] and [PSI+] in the yeast Saccharomyces cerevisiae were in fact prion proteins (enclosure in square brackets is the conventional nomenclature for cytoplasmically inherited genetic determinants in fungi). The proteins that correspond to [URE3] and [PSI+], regulator of nitrogen metabolism Ure2 and translation termination factor Sup35, respectively, have carboxy-terminal domains that carry out a cellular function and auxiliary amino-terminal prion domains, which can adopt an abnormal 'prion' conformation. The prion domains of both these proteins are rich in glutamine (Q) and asparagine (N), but only that of Sup35 contains oligopeptide repeats, which are presumably required for [PSI+] replication. Previously, Wickner's group had shown that random shuffling of amino acids in the Ure2 prion domain, a procedure named scrambling, usually does not impair the prion-forming capacity of the protein. At this meeting, Wickner described how randomization of the Sup35 prion domain, including the repeat region, also does not block prion formation, and concluded that unusual amino-acid composition, rather than specific sequences, determines prion-forming ability. According to Wickner, these experiments argue for an in-register parallel β-sheet structure for the prion fibrils, as scrambling would disrupt the correspondence of amino acids in any other β-strand structure.
Susan Lindquist (Whitehead Institute, Cambridge, USA) described elegant Sup35 cross-linking experiments that revealed that Sup35 monomers in amyloid fibrils are arranged in a 'head-to-head, tail-to-tail' fashion. Amyloid is the general name given to the fibrillar protein aggregate formed by prions and some other proteins. Such amyloid structure also implies parallel in-register arrangement of β strands in the prion fibrils. Lindquist proposed that these considerations, combined with the β-helical nanotube structure of the Sup35 fibrils, suggested a new structural model for prions, which may have broad implications for amyloids.
Prions come in different variants or 'strains'. In mammals, whose prions are infectious agents causing a set of fatal neurodegenerative diseases, different prion strains are defined by specific incubation times, distribution of vacuolar lesions in the brain, and patterns of accumulation. For yeast [PSI+], strain differences can be revealed by differences in phenotypic manifestation (nonsense suppression caused by the aggregation-dependent inactivation of the translation termination factor Sup35) and stability of maintenance. Generally, 'weak' [PSI+] manifest less stable inheritance and worse phenotypic manifestation than 'strong' [PSI+]. From her results, Lindquist suggested a structural basis for [PSI+] variants: in 'weak' [PSI+] variants a longer Sup35 fragment is incorporated into the amyloid core. The physical basis of prion strain differences was also considered by Jonathan Weissman (University of California, San Francisco, USA). His group had previously shown that Sup35 fibrils obtained in vitro at 4°C and 37°C transform yeast cells to strong [PSI+] variants, and weak [PSI+], respectively. Atomic force microscopy revealed two distinctions between the 4°C (Sc4) and 37°C (Sc37) fibrils. Sc4 fibrils polymerized more slowly than Sc37, but were more fragile and therefore smaller and more numerous, which ensured their efficient polymerization. Correlated with the strong phenotype of Sc4 fibrils is the fact that they are more susceptible to fragmentation in vivo than Sc37, presumably as the result of the activity of chaperone proteins. Thus, the efficiency of fibril severing by chaperones correlates with the mechanical strength of the fibril.
The search for novel prion proteins goes on, as their discovery might enable new prion-related processes in nature to be uncovered and the importance of prions to be estimated. Pascale Beauregard (Université de Montreal, Canada) reported convincing genetic and biochemical data for the existence of a prion in Schizosaccharomyces pombe, the first to be found in a yeast other than S. cerevisiae. This prion, [cif1], allows cell survival in the absence of the essential chaperone, calnexin. Jessica Brown (Massachusetts Institute of Technology, Cambridge, USA) described in her poster a novel prion-like determinant of S. cerevisiae, named [GAR+], which determines resistance to the non-hydrolyzable glucose analog D-(+)-glucosamine. In contrast to known yeast prions, [GAR+] is not cured by deletion of the heat-shock protein gene HSP104, but is cured by simultaneous deletion of the SSA1 and SSA2 genes encoding Hsp70 heat-shock proteins. Ludmila Mironova (St Petersburg University, Russia) described a search for proteins underlying [ISP+], another Hsp104-independent prion-like determinant causing anti-suppression, a phenotype opposite to that of [PSI+]. A likely candidate for the [ISP+] prion protein is the transcriptional factor Sfp1. One of us (I.D.) presented biochemical and microscopic evidence for the prion nature of the Lsm4 protein, one of several candidate prions which were identified previously in a genetic screen for the [PIN+] protein.
Although relatively few investigators study the [Het-s] prion of the filamentous fungus Podospora anserina, their results make a significant contribution to the prion field. Indeed, [Het-s] is the only prion with a confirmed biological function: fusion of a [Het-s] mycelium with one expressing the non-prionizable het-S allele triggers the heterokaryon incompatibility reaction, which leads to the death of the hybrid mycelium. Recent progress in understanding the molecular basis of this incompatibility reaction was reported by Sven Saupe (Institute de Biochemie et de Génétique Cellulaire, Bordeaux, France), who has shown that the carboxy-terminal domain of HET-S is prionizable, but prion formation is blocked by the functional amino-terminal domain. Presumably, HET-S can co-polymerize with the HET-s protein, and their oligomers trigger the incompatibility reaction. Ronald Riek (The Salk Institute, La Jolla, USA), Cristiane Ritter (The Salk Institute) and Ansgar Siemer (ETH Zurich, Switzerland) consecutively presented their excellent collaborative structural studies of [Het-s], which have particularly broad significance. The normally flexible carboxy-terminal tail of the HET-s protein can undergo a spontaneous conformational transition into amyloid fibrils. The fold of these fibrils comprises four β strands made up of two pseudo-repeat sequences, each forming a β-strand-turn-β-strand motif. Structure-based mutagenesis revealed that this conformation is the functional and infectious entity of the HET-s prion.
Several speakers focused on the mechanisms underlying the de novo appearance of yeast prions. It is known that the prion form of the Rnq1 protein, [PIN+], promotes the de novo appearance of [PSI+] and [URE3], apparently by directly seeding QN-rich prion aggregates. Susan Liebman (University of Illinois, Chicago, USA) presented further studies on the interaction between [PIN+], [PSI+] and an artificial prion, [CHI+]. [PIN+] efficiently seeded [CHI+], while [PSI+] stimulated the appearance of [PIN+]. While it appears overall that all QN-rich prions can stimulate each other's appearance, evidence suggesting that similar interactions may occur with non-QN-rich prions was also presented. Mick Tuite and colleagues (University of Kent, Canterbury, UK) have studied the appearance of [PSI+] at natural Sup35 levels. The appearances of [PSI+] were not related to any alterations in the gene SUP35, and they were not affected by chemical agents that cause protein misfolding. The study of proteins associated with Sup35 revealed the presence in [PIN+] [psi-] cells ([psi-] denotes the absence of [PSI+]) of a small oligomeric complex insoluble in the detergent SDS, and containing both Sup35 and Rnq1 proteins. This finding is important because hybrid particles may represent an intermediate step leading to the appearance of [PSI+].
The biological importance of prions was discussed by Kim Allen (Columbia University, New York, USA). Earlier studies suggested that the prion-like behavior of the translational regulator protein CPEB may underlie memory formation in the mollusc Aplysia. Allen showed that several mouse CPEB homologs also form prion-like aggregates in yeast, and that aggregate size, number and distribution are affected by the expression of chaperones. Aggregate formation by mouse full-length CPEB-3 and CPEB-4 proteins was also shown in neuroblastoma cells. The amino-terminal domain of mouse CPEB-3 is rich in glutamine, similar to yeast prions, whereas the amino-terminal domain of CPEB-4 is rich in proline and harbors sequence motifs similar to those implicated in amyloid formation by the mammalian prion protein PrP. While this study does not directly prove the prion-related nature of memory in higher eukaryotes, it represents a significant step towards this.
Claudio Soto (University of Texas, Galveston, USA) presented impressive results on in vitro amplification of PrPSc, the infectious form of PrP, in the protein misfolding cyclic amplification system (PMCA). He demonstrated that PMCA is capable of amplifying prion infectivity with indefinite dilutions of minuscule amounts of initial PrPSc seeds. Soto emphasized the potential application of PMCA for detection of ultra-low levels of infectivity in blood. Surachai Supattapone (Dartmouth Medical School, Hanover, USA) presented the results of experiments in which PMCA was used to generate the protease-resistant conformer of the prion protein using PrPSc purified from scrapie brains and PrPC (the normal conformer of PrP) purified from normal brains. Ongoing bioassay experiments with these in vitro-generated PrPSc produced in the presence of additional synthetic cofactors may eventually reveal all the molecular components required for the efficient replication of prions. While amplification of PrPSc using components extracted from normal and scrapie brains seems completely successful, reconstitution of prion infectivity de novo from synthetic components still remains puzzling.
In his presentation, Bruce Chesebro (Rocky Mountain Laboratories, Hamilton, USA) clearly demonstrated that prion toxicity could be separated from prion infectivity. He showed that the onset of typical clinical scrapie was substantially delayed in mice that expressed PrP without a glycosylphosphatidylinositol anchor. Remarkably, these mice were able to replicate prion infectivity and produced the protease-resistant conformer of PrP in the form of amyloid plaques, but failed to develop clinical symptoms of prion disease for a prolonged time. Byron Caughey (Rocky Mountain Laboratories, Hamilton, USA), on the other hand, took a biochemical approach to identifying the most infectious prion particles. Fractionation of PrP by size revealed that the highest level of infectivity per unit of mass belongs to particles with approximate molecular weights of only 300-600 kDa. A question of great interest is whether these highly infectious prion particles originate from fibril fragmentation or from distinct non-fibrillar species.
Neil Mabbott (Institute for Animal Health, Edinburgh, UK) discussed routes of prion migration between potential sites of exposure and the lymphoid tissues. He emphasized the possibility of acquiring infectious prions through the skin and the role of Langerhans cells (dendritic cells) in transporting prions to the lymphoid tissues. Adriano Aguzzi (University Hospital, Zurich, Switzerland) presented results that suggest a relatively high likelihood of prion transmission through urine, which could be one of the possible means of horizontal spread of prions in brain-wasting disease of elk and deer. Roger Morris (Wolfson Centre for Age-Related Disease, King's College London, London, UK) described his work on identifying the neuronal transmembrane receptor that is involved in the rapid recycling of PrPC and the cellular uptake of PrPSc. He found that PrPSc bound to the surface of primary neurons was rapidly endocytosed. Internalization of PrPSc was in direct competition with internalization of PrPC, implying that the same receptor was involved in both processes.
Edward Malaga-Trillo (University of Konstanz, Germany) presented a new evolutionary perspective on the possible function of PrP and the molecular mechanisms driving the diversification of PrP domains from fish to mammals. He reported the establishment of a novel genetic model for prion research, the zebrafish. Most notably, using the zebrafish model, Malaga-Trillo presented the first clear PrP loss-of-function phenotypes, which might be used to delineate a conserved function of vertebrate PrPs during early development.
In the closing lecture, Christopher Dobson (University of Cambridge, UK) considered general questions of amyloid formation. He presented evidence in support of the concept that the ability of proteins to form amyloid is generic. Many normally non-amyloidogenic proteins can form amyloid in vitro under conditions that destabilize their structure. The fact that very few proteins do form amyloid in vivo may be explained as a result of billions of years of protein evolution. This point of view predicts that, in general, proteins prone to convert to the prion state are not likely to carry a specific prion consensus sequence and are not likely to be identified by sequence analysis.
Probably the most significant achievements reported at the conference related to prion structure, both in the sense of spatial structure and in the role of the primary structure. Important questions for the future relate to the mechanisms of prion propagation, including the role of chaperones and possible curing mechanisms. The number and variety of known prion-like phenomena grow, but only the future will show the full picture of their occurrence and importance for living organisms.
Posted at: 02/15/2013 5:28 PM
Updated at: 02/15/2013 5:52 PM
By: Adam Camp, KOB Eyewitness News 4
A once-in-a-lifetime experience is how one University of New Mexico researcher described Friday's astronomical events.
A meteor estimated at roughly 17 to 20 meters across entered Earth's atmosphere and exploded over Chelyabinsk, Russia. Just hours later, an asteroid passed just 17,000 miles from Earth, and NASA tracked it.
Karen Ziegler is a meteorite researcher for UNM.
“It's something very, very special. I think this is a once in a lifetime experience to actually be able to observe and watch something like that happen,” Ziegler said.
The power of the meteor going into the atmosphere is also something to behold.
“It's like a bomb going off. This particular meteorite, they estimated the energy that was released when the meteor entered the earth's atmosphere was the size of a nuclear bomb going off,” Ziegler said.
The blast injured over 1,000 people as the shock wave shattered glass and sent debris flying. Ziegler said that debris from the meteorite will send meteorite researchers flocking to Russia.
“Probably all the meteorite handlers in the world are buying their tickets right now to go out there and try to find some pieces so that they can analyze them,” Ziegler said.
As for tracking the meteor before it entered Earth's atmosphere, Ziegler said it was too small for telescopes to pick up until it began blazing a trail toward the ground.
The University of Technology in Sydney recently unveiled a new type of graphene nano paper that is ten times stronger than a sheet of steel. Composed of processed and pressed graphite, the material is as thin as a sheet of paper yet incredibly durable. This strength and thinness give it remarkable applications in many industries, and it is completely recyclable to boot.
Photo by Wikimedia Commons
To make graphene paper, raw graphite is milled and purified using a chemical bath, which reshapes its structure and allows it to be pressed into thin sheets. These graphene sheets boast excellent thermal, electrical and mechanical properties, including hardness and flexibility.
Graphene offers many advantages over steel – it’s two times as hard, six times lighter and ten times higher in tensile strength. This translates into a next-gen material that could immensely benefit the automotive and aviation industries. Lighter planes and cars use less fuel and create less pollution. Companies such as Boeing have already begun using carbon-based materials, so graphene paper would be the next logical step.
Raw graphite is a relatively plentiful material in Australia, where the research is being conducted. The researchers welcome the industry boost that increased demand for raw graphite for graphene paper would provide.
Lead photo © Lisa Aliosio | <urn:uuid:5b1250f5-2334-40d8-bc98-756852d68b45> | 3.4375 | 260 | News Article | Science & Tech. | 25.295302 | 1,530 |
Bosma (1978, 1981b) and Carignan et al. (1990) found a trend for the gas distribution to have the same shape as the DM distribution. This correlation between gas and DM is puzzling and, if real, has no easy explanation in the light of present CDM models. Not only is there a general trend, but several individual features found in the rotation curve seem to correspond to features in the gas circular velocity. This can be observed in Fig. 2 for NGC 1560 and Fig. 8 for NGC 2460.
This fact has inspired a theory (discussed in section 3.1) identifying the dark matter with an as yet undetected dark gas. The magnetic hypothesis would provide another explanation, as the rotation curve would then be due in part to magnetic fields, which are generated by the gas. A direct relation is not obvious when very extended curves are obtained (Corbelli and Salucci, 1999), and Bosma (1998) himself states that this relation may not be correct.
Author Sandy Andelman says "Conservation agencies are spending tens of millions of dollars on systematic planning, but it doesn't translate to saving wildlife". "We need to reallocate dollars spent on 'perfect world' planning scenarios to aggressively pursue opportunities to safeguard habitat for species that are most in need."
Creating networks of parks and protected areas is a cornerstone of global conservation strategies. Yet 40% of highly threatened vertebrates (mammals, birds, amphibians and reptiles) do not occur in a single protected area around the globe.
Wanting to reverse the rapid decline of species, both public and private conservation groups from the Park Service to The Nature Conservancy face a constant dilemma of when, where, and how to invest limited funds to maximize conservation benefits. In attempts to have a scientific foundation for these decisions, policy makers have invested in complex processes to design blueprints for the optimal configurations of protected area networks.
Ironically, the authors of the new study, leading mathematicians and conservation planners, are the very people who have been at the forefront of these modeling efforts. Frustrated with continued species loss, they took a step back to figure out how to improve the system. Surprisingly, they found that an opportunistic approach, informed by basic scientific information about the abundance and distribution of plants and animals but heavily focused on how landowners make decisions, will have a better shot at protecting biodiversity over time.
"If it is possible to conserve exactly the sites you want and do it immediately - a conservation bluep
Contact: Kate Stinchcombe
Blackwell Publishing Ltd. | <urn:uuid:37d4efd7-26a4-4b08-bf1f-517d71c094ae> | 3.546875 | 313 | News Article | Science & Tech. | 18.078077 | 1,532 |
Find the vertices of a pentagon given the midpoints of its sides.
You are only given the three midpoints of the sides of a triangle.
How can you construct the original triangle?
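One way to attack both midpoint problems above (a sketch of one possible approach, not the only construction): write each midpoint as the average of two consecutive vertices and solve the resulting linear system. For an odd number of sides the solution is unique; for the pentagon with midpoints $m_1,\dots,m_5$,

$$ m_k = \tfrac{1}{2}(v_k + v_{k+1}) \;\Longrightarrow\; v_1 = m_1 - m_2 + m_3 - m_4 + m_5, $$

and the remaining vertices follow from $v_{k+1} = 2m_k - v_k$. The triangle works the same way with the alternating sum of its three midpoints.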
Prove that, given any three parallel lines, an equilateral triangle always exists with one vertex on each of the three lines.
- How availability and quality of nectar and honeydew shape an Australian rainforest ant community (2003)
- Ant communities visiting nectar and honeydew sources were studied in a tropical lowland rainforest in North Queensland, Australia. The study focused on the hypothesis whether the distribution and composition of nectar and honeydew diets influence resource partitioning and competition in the ant community, and thus regulate community composition. Ants were the most common consumers on all extrafloral nectaries, while they constituted only a minority of floral visitors. In total, 43 ant species were observed to consume nectar from extrafloral nectaries (34 plant species) or from flowers (14 plant species), and wound sap exudates (three plant species). Six nectar-foraging ant species attended trophobionts (including at least 12 species of homopterans and two species of lycaenid caterpillars) for honeydew. Ant species showed a significant compartmentalisation of nectar use across plant species, although most ant species visited a broad spectrum of plants that strongly overlapped between different ants. Trophobioses were much more specialised at the study site, and some ant species attended certain trophobionts exclusively. On each plant individual, only a single ant colony was observed attending trophobionts. In contrast, simultaneous co-occurrences between different ant species foraging for nectar on the same plant individuals were common (observed in 23% of the surveys), although these proportions varied strongly across plant and ant species. The two most dominant ant species (Oecophylla smaragdina and Anonychomyrma gilberti) had mutually exclusive territories, and they were each associated with a significantly different assemblage of other ant species on nectar plants. This community pattern corresponds with the concept of ant mosaics that is based on dominance hierarchies. Honeydew and nectar sources varied substantially in carbohydrate and amino acid concentration and composition (HPLC analyses). There was a strong relationship between the composition of these resources and their use by ants, in particular by the dominant O. smaragdina. Among all 32 nectar and honeydew sources analysed, resources actually consumed by this ant were characterised by relatively similar amino acid profiles and higher total sugar concentration. The most common diets of O. smaragdina included two honeydew sources (Sextius ‘kurandae’ membracids on Entada phaseoloides and Caesalpinia traceyi legume lianas) and two extrafloral nectars (Flagellaria indica and Smilax cf. australis) that had the broadest spectrum of amino acids. Furthermore, these trophobioses on lianas showed a significantly higher per capita recruitment of this ant species (number of workers per individual homopteran) compared to trees. F. indica and S. cf. australis extrafloral nectaries were also commonly monopolised by O. smaragdina in a similar way as trophobioses; co-occurrences were significantly rarer than at other nectar sources. Field experiments on nectar preferences were performed using artificial sugar and amino acid solutions in pairwise comparisons. Preferences among sugars were largely concordant between ant species. For most ant species, sucrose was more attractive than any other sugar, and attractiveness increased with sugar concentration. Most ant species also preferred sugar solutions containing mixtures of amino acids over pure sugar solutions. However, choices between different single amino acids in sugar solutions varied substantially and significantly between species. 
Preferences between solutions were significantly reduced in the presence of competing ant species. Thus the experiments show that both variability in gustatory preferences, especially for amino acids, and conditional effects of competition may be important for resource selection and partitioning in nectar feeding ant communities. Stable carbon and nitrogen isotope composition was analysed for 50 ant species, and additionally for associated plants, homopterans and other arthropods from the study site. Nitrogen isotope ratios (d15N) of ants were not correlated with those of plant foliage from which the ants were collected. Instead, d15N may represent a powerful indicator of trophic position of omnivorous ants like in other foodweb studies, suggesting that members of the ant community spread out in a continuum between largely herbivorous species, feeding on nectar or honeydew, and predatory taxa. Variability between colonies of the same species was also pronounced. d15N values of O. smaragdina colonies from mature forests, where most of their nectar and honeydew sources are found, indicate lower trophic levels than isotope signatures of colonies from open secondary vegetation. This study demonstrates that the distribution and quality of honeydew and nectar sources have a strong structuring impact in diverse tropical ant communities. Amino acids were found to play a key role for ant species preferences and competition, and for nitrogen fluxes to colonies of the arboreal ant fauna. | <urn:uuid:42dd3319-7a1b-4b87-b562-e263b9b65661> | 3.703125 | 1,055 | Academic Writing | Science & Tech. | 11.762268 | 1,534 |
The Dip Needle is a compass pivoted to move in the plane containing the magnetic field vector of the earth. It will then show the angle which the magnetic field makes with the horizontal.
The needle must be accurately balanced so that only magnetic torques are exerted on it. Some texts suggest that the dip angle be measured twice, with the poles of the needle reversed by remagnetization between trials, and the results averaged. Some instruments allow the needle and circle to be rotated for use as a compass.
The Miami apparatus was made by W. & J. George of London and Birmingham.
This dip needle was made by Ferdinand Ernicke of Berlin, and was on display at the University of Colorado physics department in 1975 when this picture was taken.
The dip needle (or inclination compass) at the left was purchased from Ruhmkorff of Paris, probably in 1875, for Vanderbilt University. It is now on display in the Garland Collection of Classical Physics Apparatus at Vanderbilt.
"In carrying out a measurement one sets the needle in the magnetic meridian by turning the support until the needle is vertical, in which case the needle is in a magnetic East-West plane, and then turns the support exactly 90°, at which point the vertical scale circling the needle is in the magnetic meridian. Thereupon the angle the needle makes with the horizontal is the angle of inclination. ... The horizontal circular scale is marked off in half degrees. The associated vernier allows readings to one minute. The vertical scale is marked off in ten-minute intervals." (From Robert A. Lagemann, The Garland Collection of Classical Physics Apparatus at Vanderbilt University (Folio Publishers, Nashville, TN, 1983) pg 152)
The instrument at the left appears to be exactly the same as the one above it. However, it is marked "Gambey à Paris".
It is in the apparatus collection of Case Western Reserve University in Cleveland, Ohio.
This Phelps and Gurley (Troy, New York) dip needle was bought by Dartmouth College in 1862. With its case and extra needle it was valued at $20.00.
Attached to this apparatus when I looked at it in June 2001 was the following information: "Provided it is well removed from local influences such as iron, magnetite and other ferromagnetic materials, a compass needle that is free to rotate in a vertical plane will point downward in the northern hemisphere at an angle from the horizontal along the line of the Earth's magnetic field. The instrument for measuring this angle is called an inclinometer, dip needle or, most frequently, dip circle."
The dip needle at the left is on display at the University Museum at the University of Mississippi in Oxford. The mechanism pivots so that it can be used either as a dip needle, or, in the horizontal orientation, as a compass.
The accompanying placard identifies it as being made by Lerebours et Secretan of Paris, but it is not in the 1853 L&S catalogue where so much of the apparatus purchased by Frederick A.P. Barnard in the second half of the 1850s can be found.
The dip needle at the right is at the department of physics at the University of Texas at Austin.
The 1888 Queen catalogue lists it as "Inclination Compass. Vertical circle, ten inches in diameter, horizontal circle, five inches; brass posts, base and leveling screws, all delicately finished ..."
This dip needle is at Westminster College in New Wilmington, Pennsylvania. It is about 30 cm high and has no maker's name.
It can be flipped horizontally for use as a compass.
Or in other words, are there differences in average Lyapunov timescale between orbits interior to Jupiter and orbits exterior to Jupiter? I'm trying to answer a question at http://www.quora.com/Why-does-Pluto-have-so-many-satellites/answer/Alex-K-Chen but I'm not totally sure if the last part of my answer is right. I'll quote the last part of it:
If the 2nd theory is true, then it's harder to answer this. One thing for sure though: Jupiter is much farther away, so its tug on the system is a much smaller factor than it is for the inner planets (where it can be a major source of instability over the solar system's lifetime)
In fact - I suspect that another factor is that from Pluto's perspective, Jupiter is practically in the center of the solar system anyways, so you're unlikely to see periods of time where Jupiter is in such a position where its constant gravitational tugs (over several Jupiter orbits) can accumulate and tug a satellite into an unstable orbit (which is what can happen with planets that orbit the Sun at distances interior to Jupiter ). Anyways I don't fully know the physics on this (yet) so some of my details could be wrong - what I do know is that it could happen to both asteroid belt objects and to Mercury's orbit - http://en.wikipedia.org/wiki/Stability_of_the_Solar_System#Mercury.E2.80.93Jupiter_1:1_resonance - perhaps because there are positions where Jupiter's pull on interior bodies is in a direction opposite to that of the Sun's pull (we never hear about the Sun creating any orbital instabilities). And Mercury is much closer to Jupiter than Pluto is.
Of course, Pluto is vulnerable to Neptune's influence, but Pluto and Neptune have a 3:2 orbital resonance so it's relatively safe from collisions with Neptune (although the resonance may not be constant over the solar system's lifetime)
Anyways, we might finally know more once New Horizons reaches Pluto in a few years
http://www.alpheratz.net/murison/papers/Lyapunov/LFM.pdf says that there is something special about asteroids with orbits interior to Jupiter - I'll try to find more information on this. | <urn:uuid:14d2270a-42ab-4f12-bf74-9d46cd9b6424> | 2.65625 | 488 | Q&A Forum | Science & Tech. | 45.951891 | 1,536 |
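For what it's worth, a rough way to put a number on a "Lyapunov timescale" is to integrate two nearby test-particle orbits and fit the exponential growth of their separation. The Python sketch below only shows that bookkeeping; the initial conditions are made up, and only the Sun's gravity is included, so Jupiter (the whole point of the question) would still have to be added as a perturber before comparing interior and exterior orbits.

    # Minimal sketch: estimate a Lyapunov time from the divergence of two nearby orbits.
    import numpy as np
    from scipy.integrate import solve_ivp

    GM_SUN = 4 * np.pi**2  # AU^3 / yr^2, so a 1 AU circular orbit has a 1 yr period

    def deriv(t, y):
        # y = [x, y, vx, vy]; Sun fixed at the origin (test-particle approximation)
        r = y[:2]
        a = -GM_SUN * r / np.linalg.norm(r) ** 3
        return [y[2], y[3], a[0], a[1]]

    def lyapunov_time_estimate(y0, delta=1e-8, t_end=1.0e4, n_samples=2000):
        t_eval = np.linspace(0.0, t_end, n_samples)
        ref = solve_ivp(deriv, (0.0, t_end), y0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
        shadow = solve_ivp(deriv, (0.0, t_end), y0 + delta, t_eval=t_eval, rtol=1e-10, atol=1e-12)
        sep = np.linalg.norm(ref.y - shadow.y, axis=0)
        slope = np.polyfit(t_eval, np.log(sep), 1)[0]   # growth rate of log-separation
        return 1.0 / slope if slope > 0 else np.inf     # no measurable divergence -> effectively infinite

    # Placeholder: a circular 1 AU orbit, which is not chaotic, so expect a huge value.
    print(lyapunov_time_estimate(np.array([1.0, 0.0, 0.0, 2 * np.pi])), "years")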
Vortex2: world's largest tornado research project ever, is underway
Tornado season is in full swing, and researchers are now poised in America's Great Plains with the largest armada of storm chasing vehicles and equipment ever assembled, in order to learn more about these enigmatic and violent storms. The massive Vortex2 field study began Sunday, and for the next seven weeks over 100 scientists in up to 40 science and support vehicles will be roaming through Tornado Alley, seeking to catch tornadoes on the rampage. The three basic questions the $10 million study will attempt to answer are:
- How, when, and why do tornadoes form? Why are some violent and long-lasting while others are weak and short-lived?
- What is the structure of tornadoes? How strong are the winds near the ground? How exactly do they do damage?
- How can we learn to forecast tornadoes better? Current warnings have an average lead time of only 13 minutes and a 70% false alarm rate. Can we make warnings more accurate? Can we warn 30, 45, or 60 minutes ahead?
Figure 1. Tornado over Matador, Texas on April 29, 2009. Photo taken by Texas Tech meteorology graduate student Danielle Turner.
Major tornado outbreak possible Wednesday
The Vortex2 project will have its first good chance to help answer these questions on Wednesday, when a strong cold front is expected to pass through an unstable air mass over Missouri and Illinois, triggering severe thunderstorms with tornadoes. The Storm Prediction Center has given these states a "Moderate" chance of severe weather, the second highest alert level. Today, the Vortex2 armada is stationed in western Oklahoma. The cold front that is expected to trigger Wednesday's severe weather outbreak will be moving through Oklahoma today, bringing a slight chance of severe weather to that state. You can follow the progress of the Vortex2 field project this Spring through our new featured Vortex2 blog. This blog is being written by a team of six University of Michigan students that will help deploy the Texas Tech "Sticknet" sensors during a tornado.
Figure 2. Severe weather outlook from NOAA's Storm Prediction Center for Wednesday, May 13.
An average tornado season so far over the U.S.
Through April, U.S. tornado activity was very close to the mean observed during the past five years, according to NOAA's Storm Prediction Center. However, there were just 15 tornado deaths through April, compared to 70 deaths through April of 2008, and the 3-year average of 60 deaths. According to the unofficial seasonal stats at Wikipedia, we've had 57 strong EF2 and EF3 tornadoes so far this year, and two violent EF4 tornadoes. These are fairly typical numbers of strong and violent tornadoes for this point in the season. The season's first EF4 hit Lone Grove, Oklahoma on February 10, killing eight, injuring 46, and destroying 114 homes, and was the strongest February tornado to hit Oklahoma since 1950. The season's second EF4 hit Murfreesboro, Tennessee on April 10, killing two.
Wunderground launches high-definition radar product
In case you missed my post on this in December, wunderground is now providing imagery from a network of 45 Terminal Doppler Weather Radar (TDWR) units located at airports across the U.S. The radars were developed and deployed by the Federal Aviation Administration (FAA) beginning in 1994, as a response to several disastrous jetliner crashes in the 1970s and 1980s caused by strong thunderstorm winds. The crashes occurred because of wind shear--a sudden change in wind speed and direction. Wind shear is common in thunderstorms, due to a downward rush of air called a microburst or downburst. The TDWRs can detect such dangerous wind shear conditions, and have been instrumental in enhancing aviation safety in the U.S. over the past 15 years. The TDWRs also measure the same quantities as our familiar network of 148 NEXRAD WSR-88D Doppler radars--precipitation intensity, winds, rainfall rate, echo tops, etc. However, the newer Terminal Doppler Weather Radars are higher resolution and can resolve much finer detail close to the radar. This high-resolution data has generally not been available to the public until now. Thanks to a collaboration between the National Weather Service (NWS) and the FAA, the data for 44 of the 45 TDWRs is now available in real time. We're calling them "High-Def" stations on our NEXRAD radar page, and they are denoted by a yellow "+" symbol. Only one TDWR radar (Las Vegas) remains to be added; this will happen in June. For more info on how to interpret the new TDWR images, see our radar FAQ page.
Engineers Map Volcanic Lightning
Lightning sensors could lead to better eruption warnings
Photo: Carlos Gutierrez/UPI/Landov
FLASH, CRACKLE, POP! Lightning might warn of imminent eruptions.
This story was updated on 31 March 2009.
Volcanic eruptions are often accompanied by spectacular bursts of lightning—Krakatoa, Mount St. Helens, and Vesuvius have provided some relatively recent examples—and yet these breathtaking bolts are not well understood. Obtaining insight into volcanic lightning, besides being of considerable scientific interest, could make it possible to get earlier warnings of eruptions and might even yield clues to the origins of life. With those ends in mind, electrical engineers at the New Mexico Institute of Mining and Technology, in Socorro, have installed compact sensing stations of their own design at Mount Redoubt in Alaska.
Ronald Thomas, professor of electrical engineering at the institute, and his colleagues plan to map the lightning from that volcano’s eruption in three dimensions, hoping to illuminate what causes electrification during some eruptions and how volcanic lightning compares with thunderstorm lightning, which itself is not fully understood.
The sensors, boxed in modified picnic coolers, record the time and magnitude of the radio-frequency impulses that lightning creates. Correlating the time that the waves hit each receiver, the researchers triangulate the position of the radiation source in the sky to within 12 meters. They can then reconstruct the charge structure inside storm clouds, helping them understand what causes lightning and when and how it touches the ground.
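The triangulation step can be sketched as a time-difference-of-arrival fit. Everything numeric below is invented for illustration (station layout, source position); real lightning mapping arrays use more stations and careful timing corrections, but the principle is the same.

    # Toy time-difference-of-arrival (TDOA) location of an impulsive radio source.
    import numpy as np
    from scipy.optimize import least_squares

    C = 3.0e8  # propagation speed of the radio impulses, m/s

    stations = np.array([          # hypothetical sensor coordinates (x, y, z) in metres
        [0.0, 0.0, 0.0],
        [8000.0, 0.0, 50.0],
        [0.0, 9000.0, 120.0],
        [7000.0, 8000.0, 80.0],
    ])

    def arrival_times(src):
        return np.linalg.norm(stations - src, axis=1) / C

    true_source = np.array([3500.0, 4200.0, 6000.0])   # pretend point on a lightning channel
    t_obs = arrival_times(true_source)                 # "measured" arrival times

    def residuals(src):
        # Differences relative to station 0 cancel the unknown emission time.
        t = arrival_times(src)
        return (t - t[0]) - (t_obs - t_obs[0])

    fit = least_squares(residuals, x0=np.array([1000.0, 1000.0, 1000.0]))
    print("recovered source position (m):", np.round(fit.x, 1))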
To study volcanic lightning, the researchers pack the sensors, along with 160 gigabytes of memory (worth three months of recording time), into 20-kilogram boxes. Then the researchers must get the sensors to the right place at the right time. On the first two occasions they tried this, they didn't quite make it in time to get all the data they wanted.
During the January 2006 eruption of Alaska’s Mount Augustine, the team arrived after the eruption had started and were able to set up only two sensors. The data was not enough to generate three-dimensional images, but it revealed a new type of lightning. Until then, lightning in volcano plumes was known to resemble thunderstorm lightning—highly branched flashes that last about half a second. But at Mount Augustine the researchers also found continuous, explosive sparks that lasted only a few milliseconds, which appeared at the mouth of the volcano just when it started erupting. This indicated that the eruption itself, not just the ejected ash and rock, had created a large amount of charge. Thomas is not sure how the charge is generated, something he hopes the Redoubt experiment will reveal.
When the Chaitén volcano erupted in Chile in May 2008, Thomas’s team also arrived later than was ideal, but this time they were able to get four sensors in place, giving them their first 3-D maps. Preliminary analysis showed horizontal lightning up to 8 kilometers long.
At Redoubt, which began erupting on 26 March, they had a head start and for the first time recorded data right at the first eruption. Thomas’s hopes are high: ”We’ll get a lot better estimate of what’s going on inside the volcanic cloud.”
In the kind of storm clouds that generate conventional lightning, ice particles and soft hail collide, building up positive and negative charges, respectively. They separate into layers, and the charge builds up until the electric field is high enough to trigger lightning. The conventional wisdom has been that in volcanic eruptions, charged ash and rock debris produce lightning by analogous processes. From what Thomas and his team have already learned, volcanic lightning might be more complex than that.
If they are successful in developing a mapping system, it could provide useful warning that an eruption has actually begun. ”Just because a volcano is rumbling and making lots of seismic noise, you can’t tell whether it erupted,” Thomas says. Tamsin Mather, a volcano researcher at the University of Oxford, in England, adds that the sensors could be a handy warning system especially for ”remote volcanoes in Alaska or Kamchatka that don’t have people watching them all the time but have plenty of planes that fly in the vicinity.” Airplanes have unknowingly flown into ash, which has sometimes choked their engines.
Volcanic lightning could also yield clues about Earth’s geological past, Mather says. And it could answer questions about the beginning of life on our planet. Scientists suspect that volcanoes on a primeval, sweltering Earth could have been the cradle of life. They had the right ingredients: water, hydrogen, ammonia, and methane. Lightning would have been the essential spark that converted these molecules into amino acids, the building blocks of protein. | <urn:uuid:eab72cb4-a43b-44cb-a4ea-c2ac18aa3c90> | 4.09375 | 1,002 | News Article | Science & Tech. | 39.26499 | 1,538 |
Southern Leopard Frog (Rana sphenocephala)
- The southern leopard frog grows to a length of 2 to 3.5 inches (about 5 to 9 cm). Its color varies from tan to several shades of brown to green. The dorsum (back) is usually covered with irregular dark brown spots between distinct light colored areas. Large dark spots on its legs may create the effect of bands. Other distinguishing characteristics include a light line along its upper jaw, light spot on its tympanum (ear), and long hind legs and toes. It is slender, with a narrow, pointed head. Males are smaller than females, but with enlarged forearms and thumbs and paired vocal sacs that look like balloons when inflated.
- Life History
Southern leopard frogs are very adaptable and are comfortable in many habitats - they just need cover and moisture. These frogs are great jumpers, traveling high and far in just a few jumps. They consume insects and small invertebrates. Predators such as fish, raccoons, skunks and aquatic snakes feed on the leopard frog. It reaches sexual maturity in the first spring after hatching. In Texas, breeding takes place year round depending on temperature and moisture. Several hundred eggs are laid in a cluster just below the water's surface. Tadpoles hatch in about seven to ten days. Newly hatched tadpoles are only about 20 to 25 mm long. They grow to 65 to 70 mm before metamorphosing into frogs, generally between 60 to 90 days. Southern leopard frogs have a lifespan of 3 years.
Southern leopard frogs elude predators by jumping into nearby water and swimming underwater for some distance, while the predator continues looking near the point of entry into the water. They are primarily nocturnal, hiding during the day in vegetation at the water's edge. During wet months, a leopard frog may wander some distance from water, but stays in moist vegetation. They will sometimes wander to colonize.
The mating call is a series of abrupt, deep croaks, creating a guttural trill. The trill rate may be as high as 13 per second. Males call from shore or while floating in shallow water. A leopard frog's mottled coloration helps camouflage it. Southern leopard frogs are often used for teaching dissection in science classes.
- Shallow freshwater areas are preferred habitat for the southern leopard frog, but they may be seen some distance from water if there is enough vegetation and moisture to provide protection. Southern leopard frogs are also able to live in brackish marshes along the coast.
- Southern leopard frogs range throughout the eastern United States, from New Jersey east as far as Nebraska and Oklahoma and south into the eastern third of Texas.
- The name of the genus comes from the Latin rana (frog). The species name combines the Greek words sphenos (wedge-shaped) and kephale (head) to describe its triangular head. The mating calls of southern leopard frogs are a familiar background sound to many Texans living near ponds, streams and wetlands. To obtain a tape of the calls of frogs and toads of Texas, contact Texas Parks and Wildlife Department, Wildlife Diversity Branch, 512-912-7011.
The objective of this research is to derive land surface temperature from GOES data at hourly intervals for atmospheric model assimilation to improve short range weather forecasts and nowcasting applications.
The rate of change of LST is sensitive to the characteristics of the land surface such as soil moisture, land use and vegetation. Regions of high soil moisture content, or dense vegetation with access to a source of moisture, exhibit cooler LST than dry soil or vegetation that is stressed because of a lack of available soil moisture. This is illustrated in the figures to the right.
The retrieval of land surface temperature (LST) from GOES measurements is accomplished with a physical split window algorithm and the 11 and 12 micrometer channels of either the imager or the sounder. The technique is derived from a perturbation form of the radiative transfer equation that is simplified through parameterization to retrieve the surface parameter corrected for atmospheric water vapor effects. The physical approach requires a priori information, which includes estimates of temperature and mixing ratio profiles, precipitable water, and skin temperature. The guess information is used with forward radiative transfer code and GOES spectral response information to calculate channel transmittances and brightness temperatures required for the solution equations. LST retrievals are only weakly dependent on the guess profile information. The quality of the LST degrades slightly under inversion conditions (either in the first guess or retrieval environment). Under optimal observing conditions (known surface thermal emissivity), LST retrieval errors are as small as 0.2 K. Variations in surface thermal emissivity unaccounted for in the retrieval process will increase the magnitude of the errors. However, in this particular application the time rate of change of the LST is used rather than its absolute value. As a result, the effects of varying thermal emissivity are negligible. The Geophysical Parameter Retrieval page provides more details on the retrieval process.
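For readers unfamiliar with split-window retrievals, the correction usually takes a generalized form along these lines (the coefficients shown are schematic; the operational values come from the water-vapor and emissivity parameterization described above):

$$ T_s \;\approx\; T_{11} + A_1\,(T_{11} - T_{12}) + A_2\,(T_{11} - T_{12})^2 + A_0, $$

where $T_{11}$ and $T_{12}$ are the 11 and 12 micrometer brightness temperatures and the coefficients $A_i$ absorb the atmospheric attenuation and surface emissivity terms derived from the first-guess information.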
A technique has been developed for assimilating GOES-IR skin temperature tendencies into the surface energy budget equation of a mesoscale model so that the simulated rate of temperature change closely agrees with the satellite observations. The simulated latent heat flux, which is a function of surface moisture availability, is adjusted based upon differences between the modeled and satellite-observed skin temperature tendencies. For more information on the satellite data assimilation see the MM5 modeling page or the attached chart.
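A minimal sketch of that adjustment logic, with an invented gain and variable names (the actual MM5 formulation differs in detail):

    # Hypothetical nudging of surface moisture availability M (0..1) toward agreement
    # between modeled and GOES-observed skin-temperature tendencies (K per hour).
    def adjust_moisture_availability(M, dT_dt_model, dT_dt_obs, gain=0.05):
        # If the model warms faster than the satellite observes, the surface is likely
        # wetter than assumed: raise M so more energy goes into latent heat flux.
        M_new = M + gain * (dT_dt_model - dT_dt_obs)
        return min(1.0, max(0.0, M_new))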
Several forms of validation are now under way. First, data from the ARM/CART network is being used to assess the accuracy of the LST retrievals. Secondly, the MM5 forecast of temperature and moisture is being compared to local ground truth data and regional surface observations as in the attached comparison. Obvious improvements in low level temperature and mixing ratio fields (not shown) are evident in these limited examples.
Last updated on: November 2, 1999 | <urn:uuid:2f21f48a-e54c-4de9-9d0c-ae8025a36d58> | 2.734375 | 581 | Academic Writing | Science & Tech. | 17.613696 | 1,540 |
The experimental evidence collected during the last few years has strongly supported the view that the α particle is a charged helium atom, but it has been found exceedingly difficult to give a decisive proof of the relation. In recent papers, Rutherford and Geiger have supplied still further evidence of the correctness of this point of view. The number of α particles from one gram of radium have been counted, and the charge carried by each determined. The values of several radioactive quantities, calculated on the assumption that the α particle is a helium atom carrying two unit charges, have been shown to be in good agreement with the experimental numbers. In particular, the good agreement between the calculated rate of production of helium by radium and the rate experimentally determined by Sir James Dewar, is strong evidence in favour of the identity of the α particle with the helium atom.
The methods of attack on this problem have been largely indirect, involving considerations of the charge carried by the helium atom and the value of e/m of the α particle. The proof of the identity of the α particle with the helium atom is incomplete until it can be shown that the α particles, accumulated quite independently of the matter from which they are expelled, consist of helium. For example, it might be argued that the appearance of helium in the radium emanation was a result of the expulsion of the α particle, in the same way that the appearance of radium A is a consequence of the expulsion of an α particle from the emanation. If one atom of helium appeared for each α particle expelled, calculation and experiment might still agree, and yet the α particle itself might be an atom of hydrogen or of some other substance.
We have recently made experiments to test whether helium appears in a vessel into which the α particles have been fired, the active matter itself being enclosed in a vessel sufficiently thin to allow the α particles to escape, but impervious to the passage of helium or other radioactive products.
The experimental arrangement is clearly seen in the figure. The equilibrium quantity of emanation from about 140 milligrams of radium was purified and compressed by means of a mercury-column into a fine glass tube A about 1.5 cms. long. This fine tube, which was sealed on a larger capillary tube B, was sufficiently thin to allow the α particles from the emanation and its products to escape, but sufficiently strong to withstand atmospheric pressure. After some trials, Mr. Baumbach succeeded in blowing such fine tubes very uniform in thickness. The thickness of the wall of the tube employed in most of the experiments was less than 1/100 mm., and was equivalent in stopping power of the α particle to about 2 cms. of air. Since the ranges of the α particles from the emanation and its products radium A and radium C are 4.3, 4.8, and 7 cms. respectively, it is seen that the great majority of the α particles expelled by the active matter escape through the walls of the tube. The ranges of the α particles after passing through the glass were determined with the aid of a zinc-sulphide screen. Immediately after the introduction of the emanation the phosphorescence showed brilliantly when the screen was close to the tube, but practically disappeared at a distance of 5 cms. Such a result is to be expected. The phosphorescence initially observed was due mainly to the α particles of the emanation and its product radium A (period 3 mins.). In the course of time the amount of radium C, initially zero, gradually increased, and the α radiations from it of range 7 cms. were able to cause phosphorescence at a greater distance.
The glass tube A was surrounded by a cylindrical glass tube T, 7.5 cms. long and 1.5 cms. diameter, by means of a ground-glass joint C. A small vacuum-tube V was attached to the upper end of T. The outer glass tube T was exhausted by a pump through the stopcock D, and the exhaustion completed with the aid of the charcoal tube F cooled by liquid air. By means of a mercury column H attached to a reservoir, mercury was forced into the tube T until it reached the bottom of the tube A.
Part of the α particles which escaped through the walls of the fine tube were stopped by the outer glass tube and part by the mercury surface. If the α particle is a helium atom, helium should gradually diffuse from the glass and mercury into the exhausted space, and its presence could then be detected spectroscopically by raising the mercury and compressing the gases into the vacuum-tube.
In order to avoid any possible contamination of the apparatus with helium, freshly distilled mercury and entirely new glass apparatus were used. Before introducing the emanation into A, the absence of helium was confirmed experimentally. At intervals after the introduction of the emanation the mercury was raised, and the gases in the outer tube spectroscopically examined. After 24 hours no trace of the helium yellow line was seen; after 2 days the helium yellow was faintly visible; after 4 days the helium yellow and green lines were bright; and after 6 days all the stronger lines of the helium spectrum were observed. The absence of the neon spectrum shows that the helium present was not due to a leakage of air into the apparatus.
There is, however, one possible source of error in this experiment. The helium may not be due to the α particles themselves, but may have diffused from the emanation through the thin walls of the glass tube. In order to test this point the emanation was completely pumped out of A, and after some hours a quantity of helium, about 10 times the previous volume of the emanation, was compressed into the same tube A.
The outer tube T and the vacuum-tube were removed and a fresh apparatus substituted. Observations to detect helium in the tube T were made at intervals, in the same way as before, but no trace of the helium spectrum was observed over a period of eight days.
The helium in the tube A was then pumped out and a fresh supply of emanation substituted. Results similar to the first experiment were observed. The helium yellow and green lines showed brightly after four days.
These experiments thus show conclusively that the helium could not have diffused through the glass walls, but must have been derived from the α particles which were fired through them. In other words, the experiments give a decisive proof that the α particle after losing its charge is an atom of helium.
We have seen that in the experiments above described helium was not observed in the outer tube in sufficient quantity to show the characteristic yellow line until two days had elapsed. Now the equilibrium amount of emanation from 100 milligrams of radium should produce helium at the rate of about .03 c.mm. per day. The amount produced in one day, if present in the outer tube, should produce a bright spectrum of helium under the experimental conditions. It thus appeared probable that the helium fired into the glass must escape very slowly into the exhausted space, for if the helium escaped at once, the presence of helium should have been detected a few hours after the introduction of the emanation.
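As a modern back-of-envelope check (using present-day constants rather than figures available to the authors), the quoted rate of about .03 c.mm. per day is consistent with three alpha emitters -- the emanation, radium A and radium C -- in equilibrium with 100 milligrams of radium:

    # Rough check of the ~0.03 cubic-millimetre-per-day helium production figure.
    ALPHAS_PER_S_PER_G_RA = 3.7e10   # alpha decays per second from radium itself (1 curie per gram)
    N_AVOGADRO = 6.022e23
    MOLAR_VOLUME_CM3 = 22414.0       # ideal gas at standard conditions

    mass_ra_g = 0.100                # the "100 milligrams of radium" in the text
    alpha_emitters = 3               # emanation, radium A and radium C, each in equilibrium

    atoms_per_day = ALPHAS_PER_S_PER_G_RA * mass_ra_g * alpha_emitters * 86400
    cmm_per_day = atoms_per_day / N_AVOGADRO * MOLAR_VOLUME_CM3 * 1000  # cm^3 -> c.mm
    print(round(cmm_per_day, 3))     # ~0.036 c.mm per day, close to the quoted figure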
In order to examine this point more closely the experiments were repeated, with the addition that a cylinder of thin sheet lead of sufficient thickness to stop the α particles was placed over the fine emanation-tube. Preliminary experiments, in the manner described later, showed that the lead-foil did not initially contain a detectable amount of helium. Twenty-four hours after the introduction into the tube A of about the same amount of emanation as before, the yellow and green lines of helium were observed; the spectrum in this case after one day was of about the same intensity as that after the fourth day in the experiments without the lead screen. It was thus clear that the lead-foil gave up the helium fired into it far more readily than the glass.
In order to form an idea of the rapidity of escape of the helium from the lead some further experiments were made. The outer cylinder T was removed and a small cylinder of lead-foil placed round the thin emanation-tube, surrounded by air at atmospheric pressure. After exposure for a definite time to the emanation, the lead screen was removed and tested for helium as follows. The lead-foil was placed in a glass tube between two stopcocks. In order to avoid a possible release of the helium present in the lead by pumping out the air, the air was displaced by a current of pure electrolytic oxygen. The stopcocks were closed and the tube attached to a subsidiary apparatus similar to that employed for testing for the presence of neon and helium in the gases produced by the action of the radium emanation on water (Phil. Mag. Nov. 1908). The oxygen was absorbed by charcoal and the tube then heated beyond the melting-point of lead to allow the helium to escape. The presence of helium was then spectroscopically looked for in the usual way. Using this method, it was found possible to detect the presence of helium in the lead which had been exposed for only four hours to the α rays from the emanation. After an exposure of 24 hours the helium yellow and green lines came out brightly. These experiments were repeated several times with similar results.
A number of blank experiments were made, using samples of the lead-foil which had not been exposed to the α rays, but in no case was any helium detected. In a similar way, the presence of helium was detected in a cylinder of tinfoil exposed for a few hours over the emanation-tube.
These experiments show that the helium does not escape at once from the lead, but there is on the average a period of retardation of several hours and possible longer.
The detection of helium in the lead and tin foil, as well as in the glass, removes a possible objection that the helium might have been in some way present in the glass initially, and was liberated as a consequence of its bombardment by the α particles.
The use of such thin glass tubes containing emanation affords a simple and convenient method of examining the effect on substances of an intense α radiation quite independently of the radioactive material contained in the tube.
We can conclude with certainty from these experiments that the α particle after losing its charge is a helium atom. Other evidence indicates that the charge is twice the unit charge carried by the hydrogen atom set free in the electrolysis of water.
University of Manchester,
Nov. 13, 1908
Proc. Roy. Soc. A. lxxxi, pp. 141-173 (1908).
Proc. Roy. Soc. A. lxxxi. p. 280 (1908).
The α particles fired at a very oblique angle to the tube would be stopped in the glass. The fraction stopped in this way would be small under the experimental conditions.
That the air was completely displaced was shown by the absence of neon in the final spectrum. | <urn:uuid:6a2988d4-b120-4574-856d-1ccb64753461> | 3.28125 | 2,242 | Academic Writing | Science & Tech. | 49.773656 | 1,541 |
How did spiral galaxy ESO 510-13 get bent out of shape? The disks of many spirals are thin and flat, but not solid. Spiral disks are loose conglomerations of billions of stars and diffuse gas all orbiting a galaxy center. A flat disk is thought to be created by sticky collisions of large gas clouds early in the galaxy's formation. Warped disks are not uncommon, though, and even our own Milky Way Galaxy is thought to have a small warp. The causes of spiral warps are still being investigated, but some warps are thought to result from interactions or even collisions between galaxies. ESO 510-13, pictured above digitally sharpened, is about 150 million light years away and about 100,000 light years across.
Credit: Hubble Heritage Team, C. Conselice (U. Wisconsin/STScI) et al.
December's lunar eclipse graced early morning skies over the Rocky Mountains in Colorado, USA. There, this wintry scene finds the Moon in a cold blue twilight sky near the western horizon, above the snowy North American Continental Divide. About 22 minutes before the sunrise, the reddened lunar disk is almost completely immersed in Earth's dark shadow. This dramatic Rocky Mountain moonset came during the eclipse's total phase. But all parts of the geocentric celestial event were seen from Pacific regions, Asia, and Australia, including the entire 51 minutes of totality, and parts of the final eclipse of 2011 were shared in skies around much of planet Earth.
Credit: Roger N. Clark
Invertebrates in your Backyard
Use these techniques to discover what invertebrates call your backyard home.
There are many techniques to survey the invertebrates in your backyard. Use a combination to determine the diversity of invertebrates in your local area.
- Pitfall trap sampling involves placing a small container buried to ground level so that it can collect anything that falls into it. This is a commonly used technique that catches a large amount of material for very little effort. Common species found with this method include ants, spiders and beetles. This method is also very easy to standardise.
- Leaf Litter sorting involves collecting leaf litter then sifting through the material to find the invertebrates. Protective clothing should be worn, sample sizes should be the same and equal lengths of time should be spent sifting the leaf litter samples.
- Beat sampling is probably the most widely used technique for collecting invertebrates from vegetation. This is a good technique for collecting beetles, ants, bugs and spiders. A sturdy stick is used to beat the vegetation, stunning the invertebrates, which can then be collected in a light-coloured shallow bag or off a drop sheet.
- Dip Netting is the simplest and most effective technique for collecting aquatic invertebrates. Nets can be used to collect from shallow and reedy areas. Organisms are put in a bucket or container along with water from the creek or pond to be identified.
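Once counts are in hand from any of these methods, a simple way to summarise diversity is the Shannon index; the sketch below uses made-up pitfall-trap counts.

    # Shannon diversity H' = -sum(p_i * ln p_i) over the taxa found in a sample.
    import math

    def shannon_index(counts):
        total = sum(counts.values())
        return -sum((n / total) * math.log(n / total) for n in counts.values() if n > 0)

    pitfall_sample = {"ants": 42, "spiders": 11, "beetles": 7, "springtails": 23}  # hypothetical
    print(f"Shannon H' = {shannon_index(pitfall_sample):.2f}")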
Karen Player , Manager Museum Outreach | <urn:uuid:bf8ce52c-b8eb-4ec5-bf57-aa1d2128cd64> | 3.609375 | 300 | Tutorial | Science & Tech. | 42.087295 | 1,544 |
Gas Secretion and Absorption
One clear advantage of having a swimbladder is that little to no extra energy is necessary in order to remain stationary at a constant depth. Only slight control by use of the pectoral fins is required to balance out the propulsive force of water exiting the gills. Fish with no swimbladder, on the other hand, such as mackerels, sharks, and rays, must expend energy by constantly swimming in order to keep from sinking.
Another advantage of swimbladders is oxygen storage. Physoclists and physostomes alike may occasionally use the oxygen present within their bladder as an emergency backup in times of urgent need, although this emergency store can only be of aid for a few minutes (Jones 1957).
Finally, swimbladders in some fish are known to enhance hearing. With the presence of inner ear-swimbladder connections, these fish have exhibited greater sensitivity to sound; however, it is not yet clear whether there is also an increase in frequency selectivity (Coombs & Popper 1982a).
scintillation counter
scintillation counter, radiation detector that is triggered by a flash of light (or scintillation) produced when ionizing radiation traverses certain solid or liquid substances (phosphors), among which are thallium-activated sodium iodide, zinc sulfide, and organic compounds such as anthracene incorporated into solid plastics or liquid solvents. The light flashes are converted into electric pulses by a photoelectric alloy of cesium and antimony, amplified about a million times by a photomultiplier tube, and finally counted. Sensitive to X rays, gamma rays, and charged particles, scintillation counters permit high-speed counting of particles and measurement of the energy of incident radiation.
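To make the amplification figures concrete, here is a rough illustrative calculation from deposited energy to output pulse charge; the light yield and collection efficiency are typical textbook values, not the specification of any particular counter.

    # Illustrative signal chain for a thallium-activated sodium iodide scintillation counter.
    E_DEPOSITED_MEV = 0.662        # e.g. a Cs-137 gamma ray fully absorbed (assumed)
    PHOTONS_PER_MEV = 38_000       # approximate NaI(Tl) light yield
    COLLECTION_TIMES_QE = 0.20     # assumed fraction of photons yielding photoelectrons
    PMT_GAIN = 1e6                 # the roughly million-fold amplification quoted above
    ELECTRON_CHARGE_C = 1.602e-19

    photoelectrons = E_DEPOSITED_MEV * PHOTONS_PER_MEV * COLLECTION_TIMES_QE
    pulse_charge_c = photoelectrons * PMT_GAIN * ELECTRON_CHARGE_C
    print(f"{photoelectrons:.0f} photoelectrons -> ~{pulse_charge_c * 1e9:.1f} nC per pulse")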
Strong Name (further referred to as "SN") is a
technology introduced with the .NET platform, and it brings many possibilities to
.NET applications. Many .NET developers still see Strong Names as security
enablers (which is very wrong!) rather than as a technology for uniquely identifying
assemblies. There is a lot of misunderstanding about SNs (as we could see
in the article "Building Security Awareness in .NET Assemblies: Part 3 - Learn to break Strong Name .NET Assemblies");
this article attempts to clear those misunderstandings up. Now let's see what SNs are, what we
can use them for and how they work.
Strong Name is a technology based on cryptographic
principles, primarily digital signatures; the basic idea is presented in the figure below.
At the heart of digital
signatures is asymmetric cryptography (RSA, ElGamal), together with hashing
functions (MD5, SHA). So what happens when we want to sign some data? I'll try to
explain what happens in the figure above.
First we must get a public/private key pair (from our administrator, a certification
authority, a bank, an application, etc.) that we will use for encryption/decryption.
Then the DATA (the term DATA represents whatever general data we want to sign) is run
through a hashing algorithm (such as MD5 or SHA; MD5, however, is not recommended) and a
hash of the DATA is produced. The hash is encrypted with user A's private key and attached
to the plaintext data. The DATA with the attached signature is sent to user B, who takes
user A's public key and decrypts the attached signature, recovering the hash of the DATA.
Finally user B runs the DATA through the same hashing algorithm as user A; if both hashes
are the same, user B can be reasonably sure that the DATA has not been tampered with, and
the identity of user A is proven. But this is a naive scenario, because it is hard to
deliver public keys securely over insecure communication channels like the Internet. That
is why certificates were introduced, but I will not cover them here, because certificates
aren't used in SNs and delivery of the public key is a matter of the publisher's policy
(maybe I can cover distribution of public keys, certificates and certification authorities
in another article). Now let's assume that the public key was delivered to user B.
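To make the hash-then-sign flow described above concrete, here is a minimal sketch in Python using the "cryptography" package. It only illustrates the general principle that Strong Names build on; it is not how sn.exe or the CLR actually implement signing, and the data value and variable names are made up for the example.

```python
# Illustrative sketch of the sign/verify flow (not the .NET implementation).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# User A: obtain a public/private key pair and sign the DATA.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

data = b"DATA: the bytes we want to protect"
signature = private_key.sign(data, padding.PKCS1v15(), hashes.SHA256())

# User B: re-hash the received DATA and check it against the signature
# using user A's public key.
try:
    public_key.verify(signature, data, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: DATA unchanged and origin proven")
except InvalidSignature:
    print("signature invalid: DATA tampered with or wrong key")
```

If the DATA or the signature is altered in transit, the verification step fails, which is exactly the property the email analogy below relies on.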
This process is used in the creation of SNs for .NET applications. You can substitute
assemblies for the term DATA and apply the same steps to them when SNs are used. But what
is the purpose and usage of this SN technology? Simple - there is only one reason:
to uniquely identify each assembly. See the section of the CLI ECMA specification where
SNs are defined:
This header entry points to the strong name hash for an image that can be used to
deterministically identify a module from a referencing point.
SNs are not a security enhancement; they enable unique identification and side-by-side code execution.
Now we know that SNs are not security enablers. Where should we use them, then? There are two scenarios where SNs can be used:
Versioning solves the well-known problem called "DLL hell". Signed assemblies are unique,
and SNs solve the problem of name collisions (developers can distribute their assemblies
even with the same file names, as shown in the figure below). Assemblies signed with SNs
are uniquely identified and are protected and stored in separate spaces.
In addition to collision protection, SNs help developers to uniquely identify versions of
their assemblies.
That is why, when developers want to use the GAC (Global Assembly Cache), assemblies must
be signed: signing separates each publisher's namespace and each version.
The second important feature of Strong Names is authentication: a process by which we
assure ourselves of the code's origin. This can be used in many situations, such as
assigning higher permissions to chosen publishers (as will be shown later) or ensuring
that code is provided by a specific supplier.
It has been shown that signatures and public keys can easily be removed from assemblies.
Yes, that is right, but it is correct behavior; the same is true when we use digital
signatures in emails or anywhere else! Let's see how it works.
We can use an analogy from real life. Let's assume you are the boss of your company and
you are sending an email to your employees in which new prices for your products are
proposed. The email is plain text and you use a non-trusted, outsourced mailing service.
Your communication can easily be monitored, and your email can easily be accessed by
unauthorized persons who can change its content, for instance the prices proposed in the
email.
How to solve that? The answer is cryptography, again digital signatures, which you can use
to authenticate yourself to your employees and to let them verify the content of your
email. You simply add a digital signature to your email and require that your employees
trust only verified emails carrying your valid digital signature. Let's assume that all
the PKI infrastructure is set up and working correctly. Now, when an intruder removes the
digital signature from your email, your employees will not trust it, because it can't be
verified, and the application will alert them about this insecure email.
The same situation arises when SNs are used. You can remove SNs from assemblies, but this
makes no sense, because just as in the case of emails, assemblies without SNs can't be
trusted when the environment is set up to require those digital signatures or SNs.
This is also related to another very important point in .NET: Code Groups and Policy
Levels. As in the case of emails, a company with PKI set up can define a security policy
under which employees do not trust emails that are unsigned or whose signed hash differs
from the hash of the plaintext content. The same can be done in the .NET Framework using
the .NET Configuration tool on each machine, or by group policy for large networks.
This tool provides configuration options for the .NET Framework, including Runtime
Security, where policy levels and code groups can be set. Policy levels work on an
intersection principle, as shown in the figure below.
Code groups (inside those policy levels) provide permission sets for the applications that
belong to them according to their evidence (origin, publisher, strong name, etc.). An
assembly gets its permissions from the intersection of the code groups applicable to it in
each policy level. This is a very important improvement in security architecture over the
traditional Windows security model, which is process-centric (see the figure below).
.NET introduces Code Access Security (CAS), which identifies the origin of code and
assigns specific restrictions to it, making security policy more granular and protecting
against attacks such as luring attacks.
However, my intention isn't to describe CAS or Windows security internals (I may write
about them in other articles) but to show SN principles. Let's get back to them!
Now we can move to the second use for SNs: administrators and developers can use SNs
together with code groups to grant assemblies higher permissions (not the default ones an
assembly would acquire under the default .NET Framework settings). Let's see an example!
I must point out that this is just a simplified example of how an SN can identify a
publisher; it is NOT a way to bypass CLR security, nor a guide to using it in an
enterprise environment. Please understand the example as a general principle available
with SNs, NOT as a design pattern! Using SNs for authentication is a more complex problem
and there are many non-trivial issues involved, but that is out of the scope of this
article, so now back to the sample!
Take my sample Windows Forms project, rebuild it and put the .exe file on any share on
your LAN. Then start the application from that share and click the button. What happens?
A security exception is raised, because the application doesn't have enough privileges.
Now go to the .NET Configuration tool and add a new code group: create a code group called
Test, in the second dialog choose Strong name, click the Import button and locate the .exe
file in the project's Debug folder, and finally assign full trust to this application.
Now you have created a new code group containing just your sample application. Go to your
network share and try to start the sample application again. It works! Why? Because it
belongs to our new code group Test, which has full trust permissions.
Now remove the SN from the sample application (as described in the article cited above, or
simply remove the attribute [assembly: AssemblyKeyFile("KeyFile.snk")] from the
AssemblyInfo.cs file), recompile it and publish it on the share. Try to run it and what
happens? It's not working! Why? Because the assembly can't present the strong name
evidence, so it falls into the default code group (with limited permissions).
It's not surprising, nothing special, no magic: just correct usage of Strong Name
technology. SNs are easy to use and powerful, but we have to understand how and where to
use them. That is why I want to outline some issues connected with SNs that show what
capabilities we can realistically expect from them.
So what are the weaknesses of SNs? First we have to realize that SNs are a lightweight
version of Authenticode: they provide a fast, easily used technology for enterprise
features like versioning and authentication. But this ease of use has a price; here is a
list of the main drawbacks:
- It can be very hard to securely associate a publisher with his public key when
certification authorities are not involved. The publisher must ship his public key himself
and must ensure that the key is not tampered with. Without certification authorities it is
impossible to do this securely when products are distributed over insecure channels and
there is no other way to verify the publisher's public key.
- There is no way to revoke a public key when the private key has been compromised. While
this is easily done with certificates (just publish the revoked certificate on a CRL, a
Certificate Revocation List), with SNs revocation is a nightmare. Just imagine that you,
as a junior security engineer, have lost the USB key holding the private key used to sign
your assemblies. You will have to call and email your clients with newly signed
assemblies, give them your new public key and set up all their environments again. There
is no automatic mechanism like a CRL; everything must be done by hand.
Authenticode can be considered more powerful from an enterprise and architectural
perspective. So why not use Authenticode instead of SNs? Here are the reasons:
- SNs don't require any third party (such as Verisign) to create signatures and manage
public keys. Any developer can easily create and manage his keys (see the chapter
"Generate key pair with sn.exe tool" in my free book ".NET in Samples") without paying any
third party.
- SNs avoid network connections and PKI involvement, so applications can run and be
verified even when network connections are not available.
- Authenticode certificates are not part of assembly names, and that is why they can't
separate publishers' namespaces the way SNs do. Do you remember the statement from the
ECMA specification at the beginning? SNs should "deterministically identify" modules, and
this is the most important reason for them. Unique identification, not security, is the
primary purpose of SNs, and Authenticode is not designed for this purpose.
I hope this helps you understand Strong Name technology in the .NET Framework and see that
it is very powerful, but within defined limits. It is a technology that should be used
appropriately.
With SNs we can uniquely identify an assembly and run our assemblies side by side. Using
Strong Names for security scenarios is not recommended (even though the .NET Framework
supports it) unless you are experienced in security and in working with certificates and
key management. There are many design patterns for using Strong Names, and the right one
depends on application architecture, client requirements and infrastructure settings
(Active Directory, PKI, etc.).
Much more could be written about this (such as the usage of SNs in large companies,
problems with key distribution, etc.), but that was not the intent here; this article is a
reaction to some misinterpretations of the technology and is intended to put them right.
There are several factors contributing to the decline of the U.S. space
agency, and immediate fixes are not evident. Even though NASA has a
long string of successes, the unfortunate shuttle Columbia disaster in 2003,
budget issues, and the looming 2010 retirement of the current generation of
space shuttles are all complicating matters.
"We spent many tens of billions of dollars during the Apollo era to
purchase a commanding lead in space over all nations on Earth," NASA
Administrator Michael Griffin said. "We've been living off the fruit
of that purchase for 40 years and have not ... chosen
to invest at a level that would preserve that commanding lead."
Although Russia has been a long-time competitor to NASA, the Chinese space
agency and the Japan Aerospace Exploration Agency (JAXA) have also continued to make
steady progress toward their intended goals.
Along with multiple missions to Mars, China is preparing for stage two of a
three-part mission to the moon. The first step in the plan, which is
ongoing, included sending a satellite to orbit the moon. The second step
proposes launching a lunar lander before 2010, and the third step involves
collecting soil samples from the moon in the next 12 years.
The Chinese space program also has its first spacewalk scheduled for October. Griffin
admits China will likely beat the U.S. and other nations back to the moon.
India also has a developing space program that may not have the budget
of the larger space programs, but the country has still had success launching
smaller missions with good results. Its most recent success
was a launch in which 10 satellites were carried into orbit aboard one rocket.
The U.S. space agency does have its own mission outline for the next 12 years,
but may struggle to meet its goals if the Orion crew vehicle is not completed
on time in 2015.
NASA used to be responsible for sending other nations' satellites into orbit,
but now Russia, India, and China are the three main nations responsible for
helping Israel, Brazil, Singapore and the ESA launch satellites into space. | <urn:uuid:be22bbcf-4a22-4eb5-bdb8-52fd32208c88> | 3.046875 | 444 | News Article | Science & Tech. | 51.175609 | 1,548 |
A team of NASA and international university researchers claims that humans have thrown off the
balance between the Earth's rotation, surface air temperatures and
movements in its molten core through our contribution of greenhouse gases.
Researchers included in the study were Jean Dickey and Steven Marcus from NASA's Jet
Propulsion Laboratory, along with Olivier de Viron of the Universite Paris Diderot and the
Institut de Physique du Globe de Paris in France.
It is well
known that an Earth day consists of 24 hours, which is the time it takes for
the Earth to make one full rotation. Over a year's time, seasonal changes occur
due to energy exchanges between fluid motions of the Earth's atmosphere, the
oceans and the solid Earth itself, which change the length of a day by about 1
millisecond. In addition, the length of a day on Earth can vary over longer
timescales such as interannual timescales (two to 10 years) or decadal
timescales (10 years).
Motions of the Earth's oceans or of its atmosphere cannot explain the variations in the length
of day over longer timescales. Instead, longer fluctuations are explained by
the flow of liquid iron within Earth's outer core, which interacts with the
mantle to determine Earth's rotation. This is also where the Earth's magnetic
field originates, and because researchers cannot observe the flows of liquid
iron directly, the magnetic field is observed at the surface.
Studies have shown that this liquid iron "oscillates in waves of motion that last
for decades," and have timescales that resemble long fluctuations in
Earth's day length. At the same time, other studies have shown that long
variations in Earth's day length are closely related to fluctuations in Earth's
average surface air temperature.
In the new study, the NASA/university team of researchers has linked Earth's rotation,
surface air temperatures and the movement in its molten core. They did this by
mapping existing data on yearly length-of-day observations and fluid movements
within Earth's core against "two time series of annual global average
surface temperature." One dated back to 1880 from NASA's Goddard Institute
of Space Studies in New York, and the other dated back to 1860 from the United
Kingdom's Met Office.
According to the study, temperature changes not only occur naturally but are also affected by human activities.
So researchers used computer climate models of Earth's oceans and atmosphere to
generate changes made by humans. Then, these temperature changes caused by
human activities were removed from the overall total observed temperature
records. What they found was that the old temperature data correlated with data on
Earth's day length and movements of its core until 1930, but after that,
surface air temperatures increased without corresponding changes in movements
of the core or day length. According to the study, this deviation after 1930 is
linked to increased levels of the human contribution of greenhouse gases.
But the new temperature data that
the researchers generated (which subtracted human activity from the equation)
had a temperature record that correlated with Earth's core movements and day
length, showing how human activity has thrown the Earth's climate off balance.
"The solid Earth plays a role, but the ultimate solution to addressing climate
change remains in our hands," said Dickey.
The researchers are unsure why these three variables correlate, but hypothesized that Earth's
core movements might interfere with the magnetic shielding of charged-particle
fluxes, which may affect cloud formation. This affects how much sunlight the
Earth absorbs and how much is reflected back into space.
"Our research demonstrates that, for the past 160 years, decadal and longer-period
changes in atmospheric
temperature correspond to changes in Earth's length of day if
we remove the very significant effect of atmospheric warming attributed to the
buildup of greenhouse gases due to mankind's enterprise," said Dickey.
"Our study implies that human influences on climate during the past 80
years mask the natural balance that exists among Earth's rotation, the core's
angular momentum and the temperature at Earth's surface."
This study was
published in the Journal of Climate. | <urn:uuid:84080301-03dd-4db5-9f9b-6687a3503e2d> | 3.90625 | 855 | News Article | Science & Tech. | 30.750369 | 1,549 |
What are Cookies?
Interview question and answer by: Hariinakoti
| Posted on: 4/7/2012 | Category: ASP.NET Interview questions
A cookie is a small amount of data that the web server stores on the client system.
* The cookie variables are accessible across the different web pages of a website and are sent back to the server with each client request.
* The importance of cookies lies in storing personal information on the client system to reduce the memory burden on the server, or in identifying the client across different requests.
Cookies are of two types:
1. In Memory Cookie
2. Persistent Cookie
In Memory Cookie: a cookie variable kept within the browser process is called an "In Memory Cookie".
Persistent Cookie: a cookie written to hard disk on the client system is called a "Persistent Cookie".
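As a generic illustration of the two types (using Python's standard library rather than the ASP.NET HttpCookie API, and with made-up values), an in-memory cookie simply has no expiry, while a persistent cookie carries an expiration date that tells the browser to keep it on disk:

```python
from http.cookies import SimpleCookie

cookies = SimpleCookie()

# In-memory (session) cookie: no expiry, kept only inside the browser process.
cookies["session_id"] = "abc123"

# Persistent cookie: an expiry date tells the browser to write it to disk.
cookies["user_pref"] = "dark-theme"
cookies["user_pref"]["expires"] = "Fri, 31 Dec 2027 23:59:59 GMT"
cookies["user_pref"]["path"] = "/"

# These Set-Cookie headers would be sent to the client with the HTTP response.
print(cookies.output())
```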
Inorganic chemistry is a subdiscipline of chemistry involving the scientific study of the properties and chemical reactions of all chemical elements and chemical compounds other than the vast number of organic compounds (compounds containing at least one carbon-hydrogen covalent bond).
There are a number of subdivisions of inorganic chemistry, such as the five subdivisions of the American Chemical Society's Division of Inorganic Chemistry (ACS DIC), namely organometallic chemistry, bioinorganic chemistry, solid-state and materials chemistry, coordination chemistry and nanoscience.
Inorganic chemistry is closely related to other disciplines such as materials science, earth science, mineralogy, geology and crystallography.
Distinctions between inorganic and organic chemistry
The distinction or boundary between inorganic chemistry and organic chemistry is not very well defined. In general, the above definition of inorganic chemistry seemingly excludes carbon compounds but it does not exclude elemental carbon itself. Hence, carbon oxides, carbon sulfides, cyanides and cyanates, metallic carbides and carbonates are included as inorganic compounds.
As another example of the ill-defined distinction between inorganic and organic chemistry, oxalic acid (H2C2O4) is commonly considered to be an organic compound even though it does not contain a carbon-hydrogen bond.
Classification of inorganic compounds
Inorganic chemistry encompasses a very complicated variety of substances which the distinguished American chemist, F. Albert Cotton (1930 − 2007), grouped into these four classes:
The chemical elements: These have a variety of structure and properties and include:
- Atomic gases such as argon (Ar) and krypton (Kr), as well as molecular gases such as hydrogen (H2) and oxygen (O2).
- Molecular solids such as the phosphorus allotrope (P4), the sulfur allotrope (S8), and the carbon allotrope (C60).
- Network solids such as diamonds and graphite.
- Metals, either solid such as copper (Cu) and tungsten (W) or liquid such as mercury (Hg) and gallium (Ga).
- Simple ionic compounds such as sodium chloride (NaCl), which are soluble in water or other polar solvents.
- Ionic oxides that are insoluble in water, such as zirconium oxide (ZrO2) and mixed oxides such as the mineral "spinel" (MgAl2O4), the mineral "diopside" (CaMg(SiO3)2) and various silicates.
- Other binary halides, carbides, arsenides, nitrides and similar materials. A few examples are silver chloride (AgCl), silicon carbide (SiC), gallium arsenide (GaAs), and boron nitride (BN), some of which could also be considered to be network solids.
- Compounds containing polyatomic ions (also called "complex ions") such as the silicon hexafluoride anion [SiF6]2–, the cobalt hexammine cation [Co(NH3)6]3+ and the ferricyanide anion [Fe(CN)6]3–.
Molecular compounds: These may be solids, liquids or gases and include:
- Simple binary compounds such as phosphorus trifluoride (PF3), sulfur dioxide (SO2) and osmium tetroxide (OsO4).
- Organometallic compounds that characteristically have metal−to−carbon bonds such as nickel carbonyl (Ni(CO)4) and tetra-benzyl-zirconium (Zr(CH2C6H5)4).
- Complex metal-containing compounds.
Inorganic polymers and superconductors: These include various inorganic polymers and superconductors. One example is the polymer named yttrium barium copper oxide (YBa2Cu3O7) which is commonly abbreviated as YBCO. It is a crystalline chemical compound and was the first material to achieve superconductivity above the boiling point of liquid nitrogen (77 K).
Typical inorganic chemical reactions
There is no universally accepted list of the typical, important inorganic reactions. Although there are numerous available sources (books, journal and Internet websites) that include such lists, they all differ to some extent from each other. The inorganic reaction types listed and explained below were drawn from many of the available sources:
Synthesis reaction: (also referred to as combination or composition reaction)
This is a reaction in which two or more reactants combine to form a single product, where each reactant is a chemical element or compound and the reaction product consist of the two reactants. Examples include:
• sodium + chlorine ⇒ sodium chloride
2Na + Cl2 ⇒ 2NaCl
• carbon dioxide + water ⇒ carbonic acid
CO2 + H2O ⇒ H2CO3
• hydrogen + sulfur ⇒ hydrogen sulfide
H2 + S ⇒ H2S
Decomposition reaction: (may be thermal, electrolytic or catalytic decomposition reaction)
This is a reaction in which a chemical compound is separated into elements or simpler compounds. It is often defined as being the opposite of a synthesis reaction. Examples include:
• hydrogen peroxide ⇒ water + oxygen (Hydrogen peroxide spontaneously decomposes into water and oxygen)
2H2O2 ⇒ 2H2O + O2
• calcium carbonate + heat ⇒ calcium oxide + carbon dioxide (Heated calcium carbonate decomposes into calcium oxide
and gaseous carbon dioxide)
CaCO3 + heat ⇒ CaO + CO2
Single displacement reaction: (also referred to as substitution or single replacement reaction)
This is a reaction characterized by one element being displaced from a compound by another element. Examples include:
• magnesium + hydrochloric acid ⇒ magnesium chloride + hydrogen
Mg + 2HCl ⇒ MgCl2 + H2
• zinc + cupric sulfate ⇒ copper + zinc sulfate
Zn + CuSO4 ⇒ Cu + ZnSO4
Metathesis reaction: (also referred to as exchange or double displacement or double replacement reaction)
This is a reaction in which two compounds exchange bonds or ions to form new, different compounds. Examples include:
• sodium sulfate + barium chloride ⇒ barium sulfate + sodium chloride
Na2SO4 + BaCl2 ⇒ BaSO4 + 2NaCl
• silver nitrate + hydrochloric acid ⇒ nitric acid + silver chloride
AgNO3 + HCl ⇒ HNO3 + AgCl
Precipitation reaction: (a specific type of metathesis referred to as aqueous metathesis)
This is a reaction that occurs when two inorganic salt solutions, as in the example below, react to form a solution containing a soluble product and another product that is insoluble and precipitates out of the solution:
• calcium chloride + silver nitrate ⇒ calcium nitrate + silver chloride (insoluble silver chloride precipitates out of the aqueous solution)
CaCl2 (aq) + 2AgNO3 (aq) ⇒ Ca(NO3)2 (aq) + 2AgCl (s)
Neutralization reaction: (another specific type of metathesis that is sometimes referred to as an acid-base reaction)
This is a reaction in which an acid and a base react to form a salt. Water is also produced in neutralizations involving Arrhenius acids, which dissociate in aqueous solution to form hydrogen ions (H+), and Arrhenius bases, which form hydroxide ions (OH–). However, water is not produced in all neutralizations, as can be seen below in the neutralization of ammonia. Examples include:
• nitric acid + sodium hydroxide ⇒ sodium nitrate + water
HNO3 + NaOH ⇒ NaNO3 + H2O
• hydrochloric acid + ammonia ⇒ ammonium chloride
HCl + NH3 ⇒ NH4Cl
Redox reaction: (also referred to as oxidation-reduction reaction)
This is a reaction in which the oxidation numbers of atoms are changed. Examples include:
• hydrogen + fluorine ⇒ hydrogen fluoride
H2 + F2 ⇒ 2HF
Hydrogen is oxidized by its oxidation number increasing from zero to +1. Fluorine is reduced by its oxidation number
decreasing from zero to -1.
• iron + cupric sulfate ⇒ ferrous sulfate + copper
Fe + CuSO4 ⇒ FeSO4 + Cu
Iron is oxidized by its oxidation number increasing from zero to +2. Copper is reduced by its oxidation number
decreasing from +2 to zero.
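As a quick arithmetic check that equations like the ones above are balanced, here is a short, illustrative Python sketch that tallies atoms per element on each side of two of the listed reactions (the formulas and coefficients are written out by hand rather than parsed):

```python
from collections import Counter

def side(*terms):
    """Each term is (coefficient, {element: count}); returns total atom counts."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

# 2H2O2 -> 2H2O + O2
left  = side((2, {"H": 2, "O": 2}))
right = side((2, {"H": 2, "O": 1}), (1, {"O": 2}))
print(left == right)   # True: the decomposition is balanced

# Na2SO4 + BaCl2 -> BaSO4 + 2NaCl
left  = side((1, {"Na": 2, "S": 1, "O": 4}), (1, {"Ba": 1, "Cl": 2}))
right = side((1, {"Ba": 1, "S": 1, "O": 4}), (2, {"Na": 1, "Cl": 1}))
print(left == right)   # True: the metathesis is balanced
```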
Analysis and characterization of inorganic compounds
The number of known chemical elements that occur naturally on Earth is 94 and the number of diverse inorganic chemical compounds derived by combinations of those elements is virtually innumerable. The characterization of those compounds includes the measurement of chemical and physical properties such as boiling points, melting points, density, solubility, refractive index and the electrical conductivity of solutions.
The techniques of qualitative and quantitative analytical chemistry can provide the composition of a chemical compound in terms of its constituent chemical elements and can thus determine the chemical formula of a compound.
Modern laboratory equipment and techniques can provide many more details for characterizing chemical compounds. Some of the more commonly used modern techniques are:
- Chromatography: A process for separating mixtures of chemicals into their component constituents.
- X-ray diffraction or X-ray crystallography: A technique that determines the three-dimensional arrangement of atoms within a molecule.
- Spectrometry or qualitative Spectroscopy: A technique for the identification of substances through the electromagnetic spectrum emitted from or absorbed by them.
- Voltammetry: An electrochemical method for studying a chemical substance by measuring the electrical potential and/or electric current in an electrochemical cell containing the substance.
- Inorganic Chemistry: A Study Guide, From the website of the University of Waterloo, Canada
- Christopher G. Morris (Editor) (1992), Academic Press Dictionary of Science and Technology, 1st Edition, Academic Press, ISBN 0-12-200400-0.
- Welcome to the ACS DIC Webpage!, From the website of the American Chemical Society Division of Inorganic Chemistry.
- Note: For example, carbon monoxide (CO), carbon dioxide (CO2), carbon disulfide (CS2), sodium cyanide (NaCN), potassium cyanate (KOCN), silicon carbide (SiC) and calcium carbonate (CaCO3)
- F. Albert Cotton, Geoffrey Wilkinson and Paul L. Gaus (1995), Basic Inorganic Chemistry, 3rd Edition, John Wiley, ISBN 0-471-50532-3.First published in 1976 with Professor F. Albert Cotton of Texas A&M University as the main author.
- Note: Allotropes are molecules having different molecular structures. This differs from isotopes which are elements having different atomic structures (i.e., the same number of protons but different numbers of neutrons in the atomic nucleus). The carbon allotrope (C60) is also known as Buckminsterfullerine.
- Note: Network solids are chemical compounds with the atoms being bonded by covalent bonds in a continuous network. Thus, there are no individual molecules in a network solid and the entire solid may be considered to be a macromolecule. Diamond is an example of a network solid with a continuous network of carbon atoms. Another example is graphite, which consists of continuous two dimensional layers of carbon atoms covalently bonded within each layer and with other bond types holding the layers together.
- Yttrium Barium Copper Oxide – YBCO, From the wiki of the Chemistry Department at Imperial College, London, England.
- P.A. Cox (2004), Inorganic Chemistry, 2nd Edition, Taylor & Francis, ISBN 1-85996-289-0.
- Types of Equations, From the website of the Virginia Polytechnic Institute and State University.
- Types of Inorganic Chemical Reactions: Four General Categories, Dr. Anne Marie Helmenstine on the website of About.com: Chemistry.
- Types of Chemical Reactions: List of Common Reactions and Examples, Dr. Anne Marie Helmenstine on the website of About.com: Chemistry.
- Note: An Arrhenius acid is defined as dissociating in aqueous solution to form hydrogen ions and Arrhenius bases, which form hydroxide ions. There are a number of other theories and definitions of acids, namely Brønsted–Lowry acid–base theory, Lewis acids and bases,Usanovitch definition, and various others. | <urn:uuid:f933f0b0-610c-449f-a556-9bf8472284ce> | 3.46875 | 2,710 | Knowledge Article | Science & Tech. | 16.615599 | 1,551 |
A smaller version of an instrument now flying on NASA's Van Allen Probes has won a coveted spot aboard an upcoming NASA-sponsored Cubesat mission — the perfect platform for this pint-size, solid-state telescope.
Weighing just 3.3 pounds, the Compact Relativistic Electron and Proton Telescope (CREPT) will "augment the science of a major flagship mission" and demonstrate the effectiveness of two new technologies that make the instrument four times faster than its 30-pound sibling at gathering and processing data, says CREPT Principal Investigator Shri Kanekal at NASA's Goddard Space Flight Center in Greenbelt, Md.
The small solid-state telescope, which Kanekal and his team are developing under NASA's Low-Cost Access to Space (LCAS) program, will measure energetic electrons and protons in Earth's Van Allen Belts, which are large doughnuts of radiation that surround Earth. CREPT measurements will give scientists a better understanding of the physics of how the radiation belts lose electrons by a process known as electron microbursts.
Discovered in 1958 with instruments aboard NASA's Explorer 1 spacecraft, the Van Allen radiation belts have long intrigued scientists. The inner belt, stretching from about 1,000 to 8,000 miles above Earth's surface, is fairly stable. However, the outer ring, spanning 12,000 to 25,000 miles, can swell up to 100 times its usual size during solar storms, engulfing communications and research satellites, bathing them in harmful radiation.
Further complicating matters, the outer belt does not always respond in the same way to solar storms. Sometimes it swells; sometimes it shrinks — an event caused when electrons in the outer loop either drop into the atmosphere or escape into space.
Microbursts, CREPT's primary object of interest, are one mechanism by which the outer belt loses electrons.
"We don't know when a solar storm hits the Earth what the net effect will be," Kanekal says. "The Van Allen Belts can swell, shrink, or in some cases remain unchanged. To understand what will happen when a solar storm impacts Earth, we need to know not only why the number of particles increases but also how they decrease or get lost. This is why studying microbursts is important. They tell us how particles are lost."
Kanekal, who also is the lead scientist on the Relativistic Electron and Proton Telescope (REPT) now flying on the Van Allen Probes, decided to develop a more compact version of the instrument in 2012 — an effort initially funded by Goddard's Internal Research and Development (IRAD) program.
"To our delight, NASA selected our proposal," Kanekal said, adding that the unit Kanekal created under his IRAD to demonstrate the telescope's flight heritage is expected to fly on a Spanish Cubesat.
Under his $1.5-million LCAS award, Kanekal and his team will spend the next two years building CREPT. In year three, he plans to fly the telescope on a three-unit Cubesat, which more than likely will be launched by an Air Force Falcon 9 rocket. From its polar orbit, CREPT will be able to study electron growth and decay from a low-altitude polar orbit — an observing location that augments the science now being performed by REPT, which is flying in an equatorial orbit at high altitudes.
Although not quite as robust as the larger REPT, the new instrument offers enhanced processing capabilities. It will carry a processor called the SpaceCube-Mini, one of three in a family of IRAD-funded processors developed by Goddard technologist Tom Flatley. This new processor is 25 times faster than the current state-of-the-art processor, the RAD750.
Another CREPT technology is an application-specific integrated circuit developed by Goddard scientist Nick Paschalidis. This analog-to-digital circuit helps analyze data, which then are directly fed into SpaceCube-Mini. Combined, the package provides a factor-of-four improvement in time resolution, meaning that the telescope can take measurements every five milliseconds.
"We made this instrument more compact and we improved how fast we can measure particles," Kanekal said. "Everything came together. We leveraged our technologies from the IRAD program, which really was crucial."
The U.S. Global Change
Research Program (USGCRP) stands at the threshold of a major transition.
Over the next several years, in addition to continuing to improve our understanding
of the Earth's environment and how it is changing, the program will greatly
advance our knowledge about the implications of such change for society.
The research successes of the last decade have laid the foundation for
a global environmental change information service that will allow global
change research results to be applied more effectively to national needs.
Since its establishment a decade ago, the USGCRP has supported a comprehensive
program of scientific research on the multiple issues presented by climatic
and other changes in the Earth system. USGCRP-supported research has produced
substantial increases in knowledge, predictive understanding, and documented
evidence of global environmental change, including major scientific advances
in the understanding of stratospheric ozone depletion, the El Niño-Southern
Oscillation phenomenon, global climate change, tropical deforestation,
and other issues.
These interlinked problems of global environmental change present long-term
challenges at local and regional scales as well. Science has much to contribute
to the management of these challenges. In the next decade, the USGCRP will
focus on understanding the Earth system as a whole, on the dynamics of
environmental change, and on connecting that knowledge to societal needs,
including the provision of information on regional implications of change.
A series of five broad objectives is guiding the program as it pursues its goals:
Determine the origins, rates, and likely future course of
natural and anthropogenic changes.
Increase understanding of the combined effects of multiple
stresses on ecosystems.
Understand and model global environmental change and its
processes on finer spatial scales and across a wide range of timescales.
Address the potential for surprises and abrupt changes in
the global environment.
Understand and assess the impacts of global environmental
change and their consequences for the United States.
of the Program
A recent National Research
Council report, Global Environmental Change: Research Pathways for the
Next Decade, which was commissioned by the USGCRP, has influenced the definition
of the near-term research challenges identified in this report, and is
important input to developing a new long-term research strategy for the
USGCRP, which will be completed in FY 2000.
To respond to the scientific challenges described in the Pathways report,
the USGCRP will be organized and managed as a series of closely-linked
Program Elements. This FY 2000 Implementation Plan and Budget Overview
contains detailed descriptions of a series of research challenges and FY
2000 objectives for each Program Element.
USGCRP Program Elements
Carbon Cycle Science is receiving heightened emphasis within the USGCRP.
The need to understand how carbon cycles through the Earth system is critically
important to the ability to predict future climate change. The USGCRP is
establishing a Carbon Cycle Science Initiative, with significant new investments
proposed in the FY 2000 budget. This effort will provide critical scientific
information on the fate of carbon dioxide in the environment, the sources
and sinks of carbon dioxide on continental and regional scales, and how
sinks might change naturally over time or be enhanced by agricultural or
forestry practices. A new level of interagency coordination is being put
in place to pursue this important objective. The program will be guided
in this task by a science plan that has been drafted with extensive participation
by many of the leading scientists in this field.
Understanding the Earthís Climate System, with a focus on
improving our understanding of the climate system as a whole, rather than
focusing on its individual components, and thus improving our ability to
predict climate change and variability.
Biology and Biogeochemistry of Ecosystems, with a focus on
improving understanding of the relationship between a changing biosphere
and a changing climate and the impacts of global change on managed and
Composition and Chemistry of the Atmosphere, with a focus
on improving our understanding of the global-scale impacts of natural and
human processes on the chemical composition of the atmosphere and determining
the effect of such changes on air quality and human health.
Paleoenvironment and Paleoclimate, with a focus on providing
a quantitative understanding of the envelope of natural environmental variability,
on timescales from centuries to millennia, within which the effects of
human activities on the planet's biosphere, geosphere, and atmosphere can be assessed.
Human Dimensions of Global Change, with a focus on explaining
how humans intervene in the Earth system and are themselves affected by
the interactions between natural and social processes.
The Global Water Cycle, with a focus on improving our understanding
of the movement of water through the land, atmosphere, and ocean, and on
how global change may increase or decrease regional water availability. | <urn:uuid:7890996e-52d9-4ce6-8c91-b278dcc7e655> | 3.21875 | 1,018 | About (Org.) | Science & Tech. | 13.780409 | 1,553 |
We said earlier that a variable name in a Scheme program is associated with a location in which any kind of Scheme value may be stored. (Incidentally, the term "vcell" is often used in Lisp and Scheme circles as an alternative to "location".) Thus part of what we mean when we talk about "creating a variable" is in fact establishing an association between a name, or identifier, that is used by the Scheme program code, and the variable location to which that name refers. Although the value that is stored in that location may change, the location to which a given name refers is always the same.
We can illustrate this by breaking down the operation of the
define syntax into three parts:
- create a new location
- establish an association between that location and the name given in the define expression
- store in that location the value obtained by evaluating the second argument of the define expression
A collection of associations between names and locations is called an
environment. When you create a top level variable in a program using
define, the name-location association for that variable is
added to the "top level" environment. The "top level" environment
also includes name-location associations for all the procedures that are
supplied by standard Scheme.
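One rough way to picture this is the following minimal sketch (in Python, not Scheme; all names are made up for the example), which models an environment as a mapping from names to one-slot "locations": define adds a binding, while assignment changes only the value stored in the location, never the binding itself.

```python
# Model an environment as a dict from names to one-slot locations (vcells).
top_level_env = {}

def define(env, name, value):
    location = [value]      # create a new location holding the value
    env[name] = location    # associate the name with that location

def lookup(env, name):
    return env[name][0]     # evaluating a symbol reads its location

def set_bang(env, name, value):
    env[name][0] = value    # the name-location association is unchanged

define(top_level_env, "a", 123)
print(lookup(top_level_env, "a"))   # 123
set_bang(top_level_env, "a", 456)
print(lookup(top_level_env, "a"))   # 456
```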
It is also possible to create environments other than the top level one, and to create variable bindings, or name-location associations, in those environments. This ability is a key ingredient in the concept of closure; the next subsection shows how it is done. | <urn:uuid:0e09e307-3238-43ac-a509-667b6a95dfa0> | 3.421875 | 288 | Documentation | Software Dev. | 31.52659 | 1,554 |
Sphere Packing and Kissing Numbers
Problems of arranging balls densely arise in many
situations, particularly in coding theory (the balls are formed by the
sets of inputs that the error-correction would map into a single codeword).
The most important question in this area is Kepler's problem:
what is the most dense packing of spheres in space?
The answer is obvious to anyone who has seen grapefruit stacked in a
grocery store, but a proof remains elusive.
(It is known, however, that the usual grapefruit packing is
the densest packing in which the sphere centers form a lattice.)
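For reference, the densities of the familiar packings mentioned here are easy to evaluate; the short, illustrative Python sketch below computes the density of the hexagonal packing of circles in the plane and of the face-centered-cubic "grapefruit stack" packing of spheres whose optimality is the subject of Kepler's problem.

```python
import math

# Fraction of the plane covered by the hexagonal circle packing.
hexagonal_circle_density = math.pi / (2 * math.sqrt(3))   # ~0.9069

# Fraction of space filled by the face-centered-cubic sphere packing.
fcc_sphere_density = math.pi / (3 * math.sqrt(2))          # ~0.7405

print(f"hexagonal circles: {hexagonal_circle_density:.4f}")
print(f"fcc spheres:       {fcc_sphere_density:.4f}")
```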
The colorfully named "kissing number problem" refers
to the local density of packings: how many balls can touch another ball?
This can itself be viewed as a version of Kepler's problem
for spherical rather than Euclidean geometry.
- 1st and 2nd Ajima-Malfatti points. How to pack three circles in a
triangle so they each touch the other two and two triangle sides. This
problem has a curious history, described in Wells' Penguin Dictionary
of Curious and Interesting Geometry: Malfatti's original (1803)
question was to carve three columns out of a prism-shaped block of
marble with as little wasted stone as possible, but it wasn't until 1967
that it was shown that these three mutually tangent circles are never
the right answer.
See also this Cabri geometry page, the
Malfatti circles page, and the Wikipedia
Malfatti circles page.
- Algorithmic packings
compared. Anton Sherwood looks at deterministic rules for
disk-packing on spheres.
- Apollonian Gasket,
a fractal circle packing formed by packing smaller circles into each
triangular gap formed by three larger circles.
- Basic crystallography diagrams, B. C. Taverner, Witwatersrand.
- The charged particle
model: polytopes and optimal packing of p points in n dimensional spheres.
- Circle packing and discrete complex analysis. Research by
Ken Stephenson, including pictures, a bibliography, and downloadable circle packing software.
- Circle packings.
Gareth McCaughan describes the connection between collections
of tangent circles and conformal mapping. Includes some pretty PostScript pictures.
- Circles in ellipses.
James Buddenhagen asks for the smallest ellipse that contains two
disjoint unit circles.
Discussion continued in a thread on
circles in an ellipse.
- Dense sphere-packings in hyperbolic space.
- Densest packings of equal spheres in a cube, Hugo Pfoertner.
With nice ray-traced images of each packing.
See also Martin
Erren's applet for visualizing the sphere packings.
dream about sphere kissing numbers.
- Edge-tangent polytope illustrating Koebe's
theorem that any planar graph can be realized as the set of tangencies
between circles on a sphere. Placing vertices at points having those
circles as horizons forms a polytope with all edges tangent to the sphere.
Rendered by POVray.
- Erich's Packing Page. Erich Friedman enjoys packing geometric shapes into
other geometric shapes.
- Figure eight knot / horoball diagram.
Research of A. Edmonds into the symmetries of knots,
relating them to something that looks
like a packing of spheres.
The MSRI Computing Group uses this diagram as its logo.
- The fractal art of
Wolter Schraa. Includes some nice rep-tiles and sphere packings.
- Hermite's constants.
Are certain values associated with dense lattice packings of spheres rational?
Part of Mathsoft's collection of mathematical constants.
- Improving dense packing of equal disks in a square, D. Boll et al., Elect. J. Combinatorics.
- The Kepler Conjecture on dense packing of spheres.
- Kissing numbers. Eric Weisstein lists known bounds on the kissing numbers
of spheres in dimensions up to 24.
- Maximizing the
minimum distance of N points on a sphere, ray-traced by Hugo Pfoertner.
sample. Ed Dickey advocates teaching about sphere packings and
kissing numbers to high school students as part of a
strategy involving manipulative devices.
- Min-energy configurations of electrons on a sphere, K. S. Brown.
- Maximum volume
arrangements of points on a sphere, Hugo Pfoertner.
illumination of a sphere. An interesting variation on the problem of
equally spacing points, by Hugo Pfoertner.
- Packing circles in circles and circles on a sphere.
Mostly about optimal packing, but also includes some nonoptimal spiral
and pinwheel packings.
- Packing circles in the hyperbolic plane, a Java animation by
Kevin Pilgrim illustrating the effects of changing radii in the packing.
- Packing pennies in the plane, an illustrated proof of Kepler's conjecture in
2D by Bill Casselman.
- Packing results, D. Boll. C code for finding dense packings of circles in
circles, circles in squares, and spheres in spheres.
- Pennies in
a tray, Ivars Peterson.
packing on a circle and on a sphere,
- Points on
a sphere. Paul Bourke describes a simple random-start hill-climbing
heuristic for spreading points evenly on a sphere, with pretty pictures
and C source.
- Satellite constellations. Sort of a dynamic version of a sphere packing
problem: how to arrange a bunch of satellites so each point of the
planet can always see one of them?
- Oded Schramm's mathematical picture gallery, primarily concentrating on
square tilings and circle packings, many forming fractal patterns.
- N. J. A. Sloane's netlib directory includes many references and programs for
sphere packing and clustering in various models. See also his
list of sphere-packing and lattice theory publications.
- Soddy's Hexlet, six spheres in a ring tangent to three others, and
Soddy's Bowl of Integers, a sphere packing combining infinitely many hexlets.
- Sphere distribution problems.
Page of links to other pages, collected by Anton Sherwood.
- Spheres and lattices. Razvan Surdulescu computes sphere volumes and
describes some lattice packings of spheres.
- Spheres with
colorful chickenpox. Digana Swapar describes an algorithm for
spreading points on a sphere to minimize the electrostatic potential,
via a combination of simulated annealing and conjugate gradient optimization.
- Spontaneous patterns in disk packings, Lubachevsky, Graham, and Stillinger,
Visual Mathematics. A procedure for packing unit disks into square
containers produces large grains of hexagonally packed disks
with sporadic rattlers along the grain boundaries.
- Waterman polyhedra,
formed from the convex hulls of the centers of points near the origin in an fcc (cubic close-packed) sphere packing.
See also Paul
Bourke's Waterman Polyhedron page.
- What is an arbelos, you ask?
Author: This chapter originally appeared as a part of Simkovics, 1998, Stefan Simkovics' Master's Thesis prepared at Vienna University of Technology under the direction of O.Univ.Prof.Dr. Georg Gottlob and Univ.Ass. Mag. Katrin Seyr.
This chapter gives an overview of the internal structure of the backend of Postgres. After having read the following sections you should have an idea of how a query is processed. Don't expect a detailed description here (I think such a description dealing with all data structures and functions used within Postgres would exceed 1000 pages!). This chapter is intended to help understanding the general control and data flow within the backend from receiving a query to sending the results.
Here we give a short overview of the stages a query has to pass in order to obtain a result.
A connection from an application program to the Postgres server has to be established. The application program transmits a query to the server and receives the results sent back by the server.
The parser stage checks the query transmitted by the application program (client) for correct syntax and creates a query tree.
The rewrite system takes the query tree created by the parser stage and looks for any rules (stored in the system catalogs) to apply to the querytree and performs the transformations given in the rule bodies. One application of the rewrite system is given in the realization of views.
Whenever a query against a view (i.e. a virtual table) is made, the rewrite system rewrites the user's query to a query that accesses the base tables given in the view definition instead.
The planner/optimizer takes the (rewritten) querytree and creates a queryplan that will be the input to the executor.
It does so by first creating all possible paths leading to the same result. For example if there is an index on a relation to be scanned, there are two paths for the scan. One possibility is a simple sequential scan and the other possibility is to use the index. Next the cost for the execution of each plan is estimated and the cheapest plan is chosen and handed back.
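As a toy illustration of that last step only (this is not Postgres source code, and the plan names and cost figures below are invented), the planner's final decision amounts to keeping the candidate path with the lowest estimated cost:

```python
# Each candidate path carries an estimated execution cost; the cheapest wins.
candidate_paths = [
    {"plan": "SeqScan on orders",           "estimated_cost": 4520.0},
    {"plan": "IndexScan using orders_pkey", "estimated_cost": 8.3},
]

cheapest = min(candidate_paths, key=lambda path: path["estimated_cost"])
print("chosen plan:", cheapest["plan"])
```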
The executor recursively steps through the plan tree and retrieves tuples in the way represented by the plan. The executor makes use of the storage system while scanning relations, performs sorts and joins, evaluates qualifications and finally hands back the tuples derived.
In the following sections we will cover every of the above listed items in more detail to give a better understanding on Postgres's internal control and data structures. | <urn:uuid:3aa46c79-c265-4a30-a6aa-fe4ce255093a> | 2.796875 | 525 | Documentation | Software Dev. | 49.172672 | 1,556 |
Astronomers Avi Loeb and Edwin Turner recently published a paper proposing a technique for detecting extraterrestrials: use telescopes to look for light pollution from alien cities. From the paper's abstract:
This method opens a new window in the search for extraterrestrial civilizations. The search can be extended beyond the Solar System with next generation telescopes on the ground and in space, which would be capable of detecting phase modulation due to very strong artificial illumination on the night-side of planets as they orbit their parent stars.
I was thinking the same thing when I wrote Containment:
The telescope assembled on the far side of the Moon succeeded in capturing some stunning images, including a few faint pixels of possible light pollution originating from a small rocky planet in the habitable zone of a nearby solar system...
The SETI Institute (Search for Extraterrestrial Intelligence) is already using arrays of Earth-based radio telescopes to search for evidence of alien technology (as dramatized in Carl Sagan's excellent novel, Contact). Since we're already detecting exoplanets, it seems reasonable that within the foreseeable future, the technology could exist to measure light pollution on extrasolar planets, providing the first hard evidence of extraterrestrial intelligence. Perhaps alien civilizations have already detected us. | <urn:uuid:71056b40-3231-48d4-8c6c-da6e6dd93ba8> | 2.984375 | 254 | Personal Blog | Science & Tech. | 18.383095 | 1,557 |
HERE's the latest on an issue to which not nearly enough attention is paid by government and industry:
A new study looking at 11,000 years of climate temperatures shows the world in the middle of a dramatic U-turn, lurching from near-record cooling to a heat spike.
Research released Thursday in the journal Science uses fossils of tiny marine organisms to reconstruct global temperatures back to the end of the last ice age. It shows how the globe for several thousands of years was cooling until an unprecedented reversal in the 20th century.
Scientists say it is further evidence that ... | <urn:uuid:0e577c18-ba39-47c9-99b1-c7b9a7130af7> | 2.84375 | 121 | Truncated | Science & Tech. | 53.065202 | 1,558 |
Our oceans are in trouble and so are we. That's the message from Sylvia Earle, keynote speaker Wednesday morning at the Blue Ocean Film Festival and Conservation Event in Monterey.
The good news, she said, is that new technology can raise our awareness, enhance our exploration and improve our ability to act — without ever getting our feet wet.
"The actions we take in the next 10 years will affect the planet for the next 10,000 years," Earle told a packed room of scientists, filmmakers, engineers, educators, divers and artists gathered at the Portola Hotel & Spa for the weeklong meeting dedicated to creating ocean awareness through media, science and research.
As terrestrials, she said, our roots are deep, but not as deep as the creatures of the sea, and this is the time to turn things around.
"It's taken billions of years to make this a hospitable planet," said Earle, an oceanographer and conservationist. "It's taken a frighteningly short time to move things in the other direction."
Perhaps no one knows that better than Earle. Dubbed "Her Deepness" by the New York Times, her lifelong ocean conservation work includes leading the first team of women argonauts, holding the chief scientist position at the National Oceanic and Atmospheric Administration and, currently, being a National Geographic explorer-in-residence.
The U.S. has put millions of dollars into the exploration of a red planet, Mars, but has neglected its own "blue backyard," she said.
The undersea world of Earle's childhood was quite different from the ocean of today. Global warming, acidification, pollution from plastic and toxins have chronically harmed the health of the water.
But Earle called ignorance the biggest threat to the ocean's well-being, and said she found hope for the future with innovative technologies that transform scientists' ability to see what's under the water. For the last three years, she has worked with Google Earth to create maps that show "there's more than just rocks and water down there."
So far, it's only been scientists talking to scientists, said professor Ove Hoegh-Guldberg, director of the Global Change Institute at The University of Queensland, Australia. The institute is a partner in the Catlin Seaview Survey for the Google Oceans Project, which plans to create a comprehensive map of the territory beneath the sea.
Hoegh-Guldberg, who said exploring the ocean is like discovering the Amazon's rainforest for the first time, said solving the ocean's problems requires public awareness. With a camera that uses three wide-angle lenses designed to take thousands of continuous high-resolution images during each dive, the mapping project is making it so that the push of a computer button at home can make anyone's armchair the start of a diving adventure.
The images can also act as a reference library or help far-flung researchers collaborate without leaving their labs, he said.
Most important, Hoegh-Guldberg believes, is that the technology will get more people to take positive action toward the ocean.
"Now we're beginning to understand that a living ocean keeps us alive and we have to return the favor," Earle said.
Above all, Earle asked the scientists, storytellers and artists at the meeting to use their talents to make sure the next generation could look back and say "thank you" for their ocean legacy.
HSH Prince Albert II of Monaco was a special guest at a Wednesday evening panel that discussed progress to sustain a healthy ocean. Other panelists included Earle; Greg Stone, senior vice president for marine conservation and chief scientist for oceans with Conservation International; Celine Cousteau; and Jane Lubchenco, undersecretary of commerce for oceans and atmosphere and administrator of NOAA.
Elizabeth Devitt can be reached at 684-1188 or email@example.com. | <urn:uuid:4075308c-9699-4856-a29c-c224c378edd2> | 2.9375 | 813 | News Article | Science & Tech. | 40.301448 | 1,559 |
Pieces of the Moon and Mars have been found on Earth before, as well as chunks of Vesta and other asteroids — but what about the innermost planet, Mercury? That’s where some researchers think this greenish meteorite may have originated, based on its curious composition and the most recent data from NASA’s Messenger spacecraft.
NWA 7325 is the name for a meteorite fall that was spotted in southern Morocco in 2012, comprising 35 fragments totaling about 345 grams. The dark green stones were purchased by meteorite dealer Stefan Ralew, who operates the retail site SR Meteorites. Ralew immediately made note of the rocks' deep colors and lustrous, glassy exteriors.
Ralew sent samples of NWA 7325 to researcher Anthony Irving of the University of Washington, a specialist in meteorites of planetary origin. Irving found that the fragments contained surprisingly little iron but considerable amounts of magnesium, aluminum and calcium silicates — in line with what’s been observed by Messenger in the surface crust of Mercury.
Even though the ratio of calcium silicates is higher than what’s found on Mercury today, Irving speculates that the fragments of NWA 7325 could have come from a deeper part of Mercury’s crust, excavated by a powerful impact event and launched into space, eventually finding their way to Earth.
In addition, exposure to solar radiation for an unknown period of time and shock from its formation could have altered the meteorite’s composition somewhat, making it not exactly match up with measurements from Messenger. If this is indeed a piece of our solar system’s innermost planet, it will be the first Mercury meteorite ever confirmed.
But the only way to know for sure, according to a research paper written by Irving and his colleagues, is to conduct further studies on the fragments and, ultimately, samples that are returned from Mercury.
Irving’s team’s findings on NWA 7325 will be presented at the 44th Lunar and Planetary Science Conference, to be held in Houston from March 18 to 22. Read more in this Sky & Telescope article by Kelly Beatty.
More about meteorites:
- Meteorite from California fireball reveals secrets
- Meteorite may be a link to Mars' warm, wet past
- Booming meteorite market creates dilemma
Jason Major is a graphic designer living in Providence, R.I. He writes about astronomy and space exploration on his blog Lights in the Dark, for Discovery News and for Universe Today. This report originally appeared on the Universe Today website on Feb. 4, with the headline "Is This Meteorite a Piece of Mercury?"
Copyright © 2013 Universe Today. Republished with permission. | <urn:uuid:e2782b1a-0e7c-4a39-9c4d-b913e292f4f0> | 3.109375 | 620 | News Article | Science & Tech. | 38.346038 | 1,560 |
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics are as of January 1895.
Please note, Degree Days are not available for Agricultural Belts
Contiguous U.S. Temperature Rankings, September 1901
More information on Climatological Rankings
(out of 119 years)
|Apr - Sep 1901
|60th Coldest||1907||Coldest since: 1899|
|58th Warmest||2012||Warmest since: 1900| | <urn:uuid:44c6c220-b0b7-41c6-bc3d-957d70d5f2f0> | 2.796875 | 144 | Structured Data | Science & Tech. | 60.488056 | 1,561 |
The ozone hole
The discovery by the British Antarctic Survey of the Antarctic ozone hole provided an early warning of the dangerous thinning of the ozone layer worldwide, and spurred international efforts to curb the production of CFCs. If the provisions of the Montreal Protocol on Substances that Deplete the Ozone Layer of 1987 are revised, strengthened and followed, there is a reasonable prospect that the Antarctic ozone hole will permanently repair itself, but not before the next appearance of Halley's comet (in the year 2061)!
Earth's past climate
The ice sheet preserves not only the traces of heavy metals and organic toxins carried into the Antarctic from the inhabited parts of the world but also - frozen into bubbles - samples of previous atmospheres over the past 500,000 years. The bubbles carry information about the climate of the past.
The delicate ecosystem
In the Southern Ocean around the continent increasing levels of fishing threaten the stability of the marine ecosystem. Rising tourist numbers increase the risks of environmental damage at coastal sites. Understanding these risks is essential for sustainability, and to ensure that our management is based on sound scientific data.
Antarctica's contribution to sea level rise
Global climate model predictions of how the Antarctic climate may change over the next 100 years differ in detail from model to model. Most models indicate relatively modest temperature rises around Antarctica over the next 50 years; over this time period, snowfall is likely to increase over the continent, and this effect may partially offset the rise in sea level. However, there are parts of the continent, the Antarctic Peninsula and the West Antarctic Ice Sheet,
where recent observations have indicated an ongoing loss of ice. The mechanisms responsible for those losses are the focus of ongoing research, but there is a significant possibility that they could accelerate over the next 100 years and mean that the Antarctic as a whole becomes a significant contributor to sea level rise, adding to the other sources; thermal expansion of the oceans and melting of icecaps and glaciers elsewhere in the world. | <urn:uuid:78370fd3-6a2c-43e2-b150-55e15fa377b4> | 4.03125 | 396 | Knowledge Article | Science & Tech. | 18.894781 | 1,562 |
By Hans Christian von Baeyer
I pick up a stone and playfully fling it into Lake Matoaka. The stone rises gracefully through the morning air, tips over, and ends its symmetrical trajectory with a plop. Like countless baseballs, footballs, and basketballs, like lumps of lava hurled out of bubbling volcanoes when Earth was young and drops of water splashed up by oceans till the end of time, my missile traces a mathematical curve through space. We physicists find beauty in the timeless perfection of that motion. It represents a rare glimpse of the absolute in this chaotic world of ours.
Galileo first derived the shape of the trajectory and found it to be the figure that Appolonius of Perga over two millennia ago called a parabola. "Ignoring air resistance, cannon balls move along parabolas," we learn in school. But the truth is more intriguing.
Imagine the stone as a point and the Earth all shriveled up and shrunk down to another point four thousand miles below your feet. This is how Newton, who was born the year Galileo died, imagined it. The relation of the stone to the Earth is exactly the same as that of a comet to the sun, and we know the shape of the comet's path: it is an ellipse.
The true figure of the path of the stone is a skinny ellipse, an oval that is about four thousand miles long and only a few miles wide at its widest, with the Earth's center just inside the lower tip and the stone on the edge of the upper end of the oval. Of course, the stone cannot follow the entire trip, because after just a few seconds it falls into the water -- but that, to the physicist, is an inessential detail. The shape of its path through the air, before it sinks into the lake, is the shape of the upper end of that almost unimaginably skinny ellipse.
The stone traces out before my eyes the trajectory of Comet Halley -- but not the part we see when Halley races in hot fury around the sun -- no, it imitates the other part that we never see, when Halley almost coasts to a stop thirty-eight years later and hundreds of millions of miles from the Sun and starts on its return journey, a thirty-eight year fall toward the Sun. I have always wanted to be there when the great comet, far out in the dark cold of outer space, moving almost imperceptibly slowly, comes to the apex of its odyssey and begins its long haul back home to the warmth of the Sun. But I don't need to go that far away. A stone tossed over Lake Matoaka mimics precisely what I would see out there beyond the orbit of Pluto. It is the business of physics to find unity in the diversity of natural phenomena --and to discover analogies between the inaccessible realms of the universe and the immediate world of human experience.
Hans Christian von Baeyer is Chancellor Professor of Physics College of William and Mary, Williamsburg, VA. | <urn:uuid:dab52129-0da8-4a29-a678-a9edf98c4eee> | 3.296875 | 630 | Nonfiction Writing | Science & Tech. | 48.910207 | 1,563 |
Atmospheric Sciences & Global Change Division
Pollution + Storm Clouds = Warmer Atmosphere
Computer modeling reveals new insights on interactions between pollution particles and storms
An anvil cloud looms over the Southern Great Plains site location of the U.S. Department of Energy’s Atmospheric Radiation Measurements (ARM) Climate Research Facility.
Results: For the first time, researchers at Pacific Northwest National Laboratory have shown that pollution increases warming in the atmosphere by enlarging thunderstorm clouds. The scientists conducted a computational study with resolutions high enough to allow the team to see the clouds develop. They found that for warm summer thunderstorms, pollution particles lead to stronger storms with larger, anvil-shaped clouds, which also last longer. The warming effect dominated because these larger clouds trapped more heat, especially at night, even though they also reflected more daytime sunlight back into space.
Why it Matters: Clouds are one of the most poorly understood components of the Earth's climate system. Getting a better understanding of clouds, and how atmospheric particles affect them, is important to better predict the future of climate change.
"Global climate models don't see this effect because thunderstorm clouds simulated in those models do not include enough detail," said Dr. Jiwen Fan, lead author and a scientist at PNNL. "The large amount of heat trapped by the pollution-enhanced clouds could potentially impact regional circulation and modify weather systems."
For more information, see the PNNL News Center, "Pollution teams with thunderclouds to warm atmosphere."
Acknowledgments: This study was supported by the U.S. Department of Energy's (DOE) Office of Science and Biological and Environmental Research (BER) Regional & Global Climate Modeling (RGCM) Program as part of a bilateral agreement with the China Ministry of Sciences and Technology on regional climate research and the U.S. DOE BER's Atmospheric System Research (ASR) Program . The work was performed by Drs. Jiwen Fan and L. Ruby Leung of PNNL; Dr. Daniel Rosenfeld of The Hebrew University of Jerusalem; Dr. Zhanqing Li and Yanni Ding of the University of Maryland.
Reference: Fan J, D Rosenfeld, Y Ding, LR Leung, and Z Li. 2012. "Potential Aerosol Indirect Effects on Atmospheric Circulation and Radiative Forcing through Deep Convection." Geophysical Research Letters 39:L09806. DOI:10.1029/2012GL051851. | <urn:uuid:e0b6a78d-f2ec-4cff-b194-d185bfc422eb> | 3.3125 | 520 | Knowledge Article | Science & Tech. | 42.86746 | 1,564 |
New Discovery Affirms RTB Model Predictions
Even though I’m a budget-hotel kinda guy, occasionally I splurge and stay in a really nice place. It’s fun to get a chance to experience firsthand how the “other half” lives.
A recent study of some of the microbes found in Lake Matano (Indonesia), the world’s eighth deepest lake, provides biologists and geologists a first-hand look at how the earliest life on Earth lived. This new insight provides more evidence for RTB’s origin-of-life model.
RTB and Evolutionary Origin-of-Life Models
One of the key points of difference between the RTB and evolutionary models centers on the timing of life’s first appearance on Earth. The RTB scientific creation model, based on Genesis 1:2 and Deuteronomy 32:9-12, predicts that life should appear early in Earth’s history and that the first life-forms should be inherently complex.
Evolutionary origin-of-life models, on the other hand, require a long percolation time, perhaps up to one billion years, before life can emerge from a primordial soup. These naturalistic scenarios also predict that the first life-forms should be relatively simple.
The Scientific Evidence
As described in Origins of Life, geochemical evidence already indicates that life was present remarkably early in Earth’s history, possibly as far back as 3.8+ billion years ago. (Prior to this time, life would have been impossible on Earth, since the planet’s conditions were “hellish” and unsuitable for life.)
Some origin-of-life researchers, however, question the authenticity of these geochemical finds. They maintain that these markers for early life are actually artifacts produced by inorganic processes.
Banded Iron Formation
One potential biomarker under question is banded iron formations (BIFs). These unusual iron ore deposits are found in sedimentary rocks dated older than 1.8 billion years in age. BIFs are most abundant between 1.8 and 2.5 billion years ago, but also exist in rock formations as old as about 3.8 billion years in age.
BIFs consist of alternating layers of chert (silica) and the minerals hematite (Fe2O3) and magnetite (Fe3O4). Deposits of this type don’t form today. Geologists believe that BIFs formed at a time in Earth’s history when high levels of dissolved iron (Fe2+) and silica existed in the oceans. The silica deposited in ocean sediments to form the chert layers. Geologists maintain that the iron ore “bands” formed when the dissolved Fe2+ became oxidized to form hematite (Fe2O3) and magnetite (Fe3O4).
Most geologists think that BIFs dated between 1.8 and 2.5 billion years ago resulted from biological oxidation, when the oxygen generated by cyanobacteria converted Fe2+ to Fe3+.
Banded Iron Formations on Early Earth
In other words, BIFs stand as a marker for biological activity. But what about the BIFs deposited in the geological record before that time? Does their presence mean that life existed on Earth as far back as 3.8 billion years ago? Not necessarily, according to some scientists. It’s possible that these BIFs were generated by inorganic oxidation processes or by a UV radiation-driven reaction.
Other researchers have pointed out that the low levels of oxygen on the early Earth make it unlikely that inorganic oxidation could have produced the ancient BIFs. In a similar vein, while scientists have successfully generated BIF-like materials in the lab using UV radiation, it doesn’t seem probable that this process would operate under the complex chemical conditions of the early Earth.
These problems indirectly suggest that biological oxidation accounts for the production of the earliest BIFs on Earth. Still, this explanation comes with challenges. Many origin-of-life researchers tend to doubt if cyanobacteria were present on Earth at 3.8 billion years ago. It’s possible that another group of photosynthetic bacteria (anoxygenic phototrophs) could have produced the BIFs. These bacteria can oxidize Fe2+ to Fe3+ as part of their photosynthetic activity. The issue with this scenario is that these microbes live in highly specialized environments that consist of iron-rich, shallow ephemeral water. These environs are not good analogs to the oceans of the early Earth.
The work of the biologists and geologists on Lake Matano weighs in here. These scientists have just discovered anoxygenic photosynthetic bacteria in Lake Matano that can oxidize Fe2+. This lake closely compares to the most likely conditions for the oceans on early Earth. If photosynthetic bacteria can convert Fe2+ to Fe3+ in Lake Matano, it makes it even more likely that BIFs that date to 3.8 billion years in age are biogenic products generated by bacteria that engage in anoxygenic photosynthesis.
BIFs, along with other biomarkers, collectively indicate that life originated early in Earth’s history, as soon as our planet could sustain life. The microbes that generated BIFs must have been metabolically complex, given what we know about the anoxygenic microbes that are capable of phototrophically oxidizing Fe2+ in Lake Matano.
This new insight adds further support for the RTB origins-of-life model and, at the same time, makes little sense within an evolutionary framework. The sudden appearance of metabolically complex life on Earth comports well with the notion that a Creator intervened to bring about the creation of the first life-forms on Earth.
The accommodations in the Archean oceans for the earliest life on Earth may not meet the four-star quality that many people expect when they stay in a high-end hotel, but it appears to have suited these organisms just fine. | <urn:uuid:0c69b601-c37f-4007-8051-b594a6e92bf0> | 3.25 | 1,262 | News Article | Science & Tech. | 38.198549 | 1,565 |
Latitude And Rain Dictated Where Species Lived
More than 200 million years ago, mammals and reptiles lived in their own separate worlds on the supercontinent Pangaea, despite little geographical incentive to do so. Mammals lived in areas of twice-yearly seasonal rainfall; reptiles stayed in areas where rains came just once a year. Mammals lose more water when they excrete, and thus need water-rich environments to survive. Results are published in the Proceedings of the National Academy of Sciences.
Aggregating nearly the entire landmass of Earth, Pangaea was a continent the likes our planet has not seen for the last 200 million years. Its size meant there was a lot of space for animals to roam, for there were few geographical barriers, such as mountains or ice caps, to contain them.
Yet, strangely, animals confined themselves. Studying a transect of Pangaea stretching from about three degrees south to 26 degrees north (a long swath in the center of the continent covering tropical and semiarid temperate zones), a team of scientists led by Jessica Whiteside at Brown University has determined that reptiles, represented by a species called procolophonids, lived in one area, while mammals, represented by a precursor species called traversodont cynodonts, lived in another. Though similar in many ways, their paths evidently did not cross.
“We’re answering a question that goes back to Darwin’s time,” said Whiteside, assistant professor of geological sciences at Brown, who studies ancient climates. “What controls where organisms live? The two main constraints are geography and climate.”
Turning to climate, the frequency of rainfall along lines of latitude directly influenced where animals lived, the scientists write in a paper published this week in the online early edition of the Proceedings of the National Academy of Sciences. In the tropical zone where the mammal-relative traversodont cynodonts lived, monsoon-like rains fell twice a year. But farther north on Pangaea, in the temperate regions where the procolophonids predominated, major rains occurred only once a year. It was the difference in the precipitation, the researchers conclude, that sorted the mammals’ range from that of the reptiles.
The scientists focused on an important physiological difference between the two: how they excrete. Mammals lose water when they excrete and need to replenish what they lose. Reptiles (and birds) get rid of bodily waste in the form of uric acid in a solid or semisolid form that contains very little water.
On Pangaea, the mammals needed a water-rich area, so the availability of water played a decisive role in determining where they lived. “It’s interesting that something as basic as how the body deals with waste can restrict the movement of an entire group,” Whiteside said.
In water-limited areas, “the reptiles had a competitive advantage over mammals,” Whiteside said. She thinks the reptiles didn’t migrate into the equatorial regions because they already had found their niche.
The researchers compiled a climate record for Pangaea during the late Triassic period, from 234 million years ago to 209 million years ago, using samples collected from lakes and ancient rift basins stretching from modern-day Georgia to Nova Scotia. Pangaea was a hothouse then: Temperatures were about 20 degrees Celsius hotter in the summer, and atmospheric carbon dioxide was five to 20 times greater than today. Yet there were regional differences, including rainfall amounts.
The researchers base the rainfall gap on variations in the Earth’s precession, or the wobble on its axis, coupled with the eccentricity cycle, based on the Earth’s orbital position to the sun. Together, these Milankovitch cycles influence how much sunlight, or energy, reaches different areas of the planet. During the late Triassic, the equatorial regions received more sunlight, thus more energy to generate more frequent rainfall. The higher latitudes, with less total sunlight, experienced less rain.
The research is important because climate change projections shows areas that would receive less precipitation, which could put mammals there under stress.
“There is evidence that climate change over the last 100 years has already changed the distribution of mammal species,” said Danielle Grogan, a graduate student in Whiteside’s research group. “Our study can help us predict negative climate effects on mammals in the future.”
Contributing authors include Grogan, Paul Olsen from Columbia University, and Dennis Kent from Rutgers. The National Science Foundation and the Richard Salomon Foundation funded the research.
Image 1 Caption: More than 200 million years ago, nearly all the land on Earth was part of Pangaea. Animals could roam freely, yet they appear to have sorted themselves into regions. Researchers at Brown are figuring out why. (Credit: Brown University)
Image 2 Caption: The skull of the procolophonid Hypsognathus was found in Fundy basin, Nova Scotia, which was hotter and drier when it was part of Pangaea. Mammals, needing more water, chose to live elsewhere. (Credit: Brown University)
On the Net: | <urn:uuid:f3c83f07-5d87-43fc-8d42-a95031cc8ee9> | 3.796875 | 1,083 | News Article | Science & Tech. | 33.06125 | 1,566 |
Java Swing tutorials - Here you will find many Java Swing examples with running source code. The source code provided here is fully tested and you can use it in your programs. The Java Swing tutorials first give you a brief description of Swing and then provide many examples. Swing is mostly used for the development of desktop applications.
After learning AWT, let's now see what Swing is. Swing is important for developing Java programs with a graphical user interface (GUI).
Java 2D API
Programming has become more interactive with Java 2D API. You can add images, figures, animation to your GUI and even pass visual information with the help of Java 2D API.
Swing supports data transfer through drag and drop, copy, paste, cut etc. Data transfer works between Swing components within an application and between Java and native applications
Swing in Java also supports the feature of Internationalization. The developers can build applications by which the users can interact worldwide in different languages.
To translate text into a particular language is known as Localization. It is a process by which we can change text to a different language and also add some locale-specific components.
What is java swing?
Here, you will learn about Java Swing. Java Swing provides multiple platform-independent APIs and interfaces for interaction between the user and the GUI components.
JTable: The JTable component is one of the more flexible Java Swing components; it allows the user to store, show and edit data in tabular format. It is a user-interface component that presents data in a two-dimensional tabular format. Java Swing implements tables using the JTable class, a subclass of JComponent.
Tool Tips on Cells in a JTable
This section tells you, how to set the tool tips in the cells in a JTable component. So, you will be able to know about the tool tips. The tool tips are most common graphical user interface
Row, Column and Cell Selections in a JTable
In this section, we are going to describe how to enable the row, column and cell selections in a JTable component. But, what is the term 'enable'?
Creating a Scrollable JTable
In this Java programming section, you will learn how to create a scrollable JTable component. When any table has large volume of data, the use of scrollbar is applied in the JTable.
Packing a JTable Component
In this section you will learn about the packing of a JTable by adjusting it in the center.
Grid Line in JTable
In the earlier section you have learnt for creating a simple JTable that contains predefined grid line with black color. But in this Java programming tutorial, you will learn how to set the colored grid line in JTable component.
Margin Between Cells in a JTable
In this section, you will learn how to set the margin (Gap) between cells in a JTable component. Here we are providing you an example with code that arranges the column margin (Horizontal space) and row margin (Vertical space).
User Edits in a JTable Component
Till now you have got the edit facilities in all JTable in every previous sections but now you will learn a JTable program in which editing facility is not available.
Sharing a Table Model between JTable Components
In this section, you will learn how to share a table model between JTable components. Whenever, you want to do for sharing the resources between the JTable components, a table model will essential.
Printing Using Java Swing
In this section, you will learn how to print in java swing. The printable that is passed to setPrintable must have a print method that describes how to send drawing to the printer.
Add Area of Two Figures
This section illustrates how to add the area of two specified figures in Graphics.
Subtract Area between two Figures
This section illustrates how to subtract the area between two figures in Graphics.
Show Intersection between the Area of two Shapes
Intersection means 'the common part'. Classes Rectangle2D and Ellipse2D are provided by the package java.awt.geom. These classes provide the shapes rectangle and oval respectively.
Show the Exclusive OR between the Area of two Shapes
In this section, we are going to implement exclusive OR in Graphics. Exclusive OR is a Boolean operator, also known as XOR, which here shows the uncommon part between the two areas.
Another Example of Gradient Paint
A gradient is like a colored strip. It is created by specifying a color at one point and another color at another point. The colors then change gradually from one to the other along a straight line between the two points.
Writing Calculator program in Swing
In this tutorial we provide an example which illustrates how to create a calculator in Swing, with the source code and a screen shot.
JTree: A tree is a special type of graph designed for displaying data with hierarchical properties by adding nodes to nodes, keeping the concept of parent and child nodes.
Creating a JTree Component
In this section, you will learn about the JTree and its components as well as how to create a JTree component. Here, first of all, we are going to describe the JTree and its components.
Adding a Node to the JTree Component
In this section, you will learn how to add or insert a new node into the JTree component. A tree has a root node and children of the root node. Whenever you need to insert a node, you must add it to the JTree component.
Removing a Node from the JTree Component
In this section, you will learn how to remove a node from the JTree component. Removing a node from a JTree means deleting an individual node from the JTree component, or deleting the root node directly.
Enable and Disable Multiple Selections in a JTree Component
In this section, you will learn how to enable and disable multiple selections in a JTree component. Multiple selection in a tree component means the user is allowed (or not allowed) to select more than one tree node at a time.
Displaying Hierarchical data in JTree
In this section, you will learn to display hierarchical data in a JTree. When you select the hierarchical data, it is also displayed on the command prompt.
Displaying System Files in JTree
In this section, you will learn to create a JTree that displays system files. The java.util.Properties class represents a persistent set of properties and is used here for displaying the system files in a tree.
JTree ActionListener Example
In this section, you will learn about JTree Action Listener and its implementations.
In this section, you will learn to make JTree nodes editable. For example, if you want to edit the name of a tree node, then the following program will help you a lot.
Adding Horizontal lines to Group
In this section, you will learn to create a horizontal tree in Java.
Adding Line to JTree
In this section, you will learn how to create a JTree with lines, which means the tree is divided into two parts separated by a line. Both trees have the same root and nodes. A JScrollPane provides a scrollable view for components.
Removing Horizontal Lines to Node Groups
In this section, you will learn to create a 'none' style tree in Java. A 'none' style tree means that child nodes are not visually connected to their parent node.
Create JTree using an Object
In this section you will learn to create a JTree using an object that works with a Hashtable.
JTree Open Icon
In this section, you will learn to open an icon in a JTree. That means when you click any node of the tree, an icon will be displayed on the frame.
In this section, you will read about traversal of a tree and its nodes. It teaches displaying the node and its path on the command prompt. The javax.swing.JTree class is a powerful Swing component for displaying data in a tree structure.
Hiding Root Node in JTree
In this section, you will learn to hide the root node of a JTree.
Retrieving JTree structure from database
JTree is used for viewing data in a list. Lists are good for displaying simple lists of information from which the user can make single or multiple selections. In a JTree you can hide different levels of data in the tree, including the root, allowing the display to collapse and expand various parts of the tree.
To convert the temperature, we have created two text fields, one for the Fahrenheit value and one for the Celsius value. A button is created to perform the action.
How to handle the text using Key Listener Interface
In this section, you will learn how to handle the text using the key events on the Java Awt component. All the key events are handled through the KeyListener Interface.
Create Multiple Buttons using Java Swing
In this section, you will learn how to create multiple buttons labeled with the letters from A to Z respectively.
Add Edit and Delete Employee Information Using Swing
In this section, you will learn how to add, edit and delete the Employee's information from the database using java swing.
Get JTextField value from another class
In this section, you will learn how to get the textfield value from other class.
Set Color in JOptionPane
In this section, you will learn how to set color in JOptionPane.
Set delay time in JOptionPane
In this section, you will learn how to set the time after which the message should be displayed using JOptionpane.
Create Sine Wave
In this section, you will learn how to create a Sine Wave using Java Swing.
| <urn:uuid:0018ddc9-7121-4808-9052-700dce018c20> | 3.3125 | 2,060 | Tutorial | Software Dev. | 53.849763 | 1,567 |
Apr. 1, 2010 A hitherto unknown reproductive system in a species closely related to the olive tree, Phillyrea angustifolia L., has been discovered by researchers in France.
This system explains the high concentration of male individuals co-occurring with hermaphrodites in this species. The hermaphrodites, whose blossoms bear both male and female organs, are divided into two morphologically indistinguishable groups. The plants of each group are sterile among themselves but fully compatible with those of the other group. Under these conditions, the hermaphrodites can fertilize only half of the pollen recipients, whereas the males can pollinate all the hermaphrodites. The disadvantage weighing upon the males is thus neatly counterbalanced. This discovery proves for the first time the possibility of an evolutionary transition from hermaphroditism to dioecy.
A report has been published in Science.
Researchers at the Laboratoire de Génétique et Évolution des Populations Végétales (CNRS/Université de Lille 1) and the Centre d'Écologie Fonctionnelle et Évolutive (CNRS/Université de Montpellier 1, 2 and 3/ENSA Montpellier/CIRAD/Ecole Pratique des Hautes Études) have discovered in Phillyrea angustifolia L., a species closely related to the olive tree, a hitherto unknown reproductive system characterized by incompatibility between hermaphrodite plants.
This new reproductive mode explains the mystery of the high frequencies (up to 50%) of male individuals co-occurring with hermaphrodite individuals in this species. The hermaphrodite individuals, whose blossoms bear both male and female organs, are divided into two morphologically indistinguishable groups. The plants of each group are self-incompatible (they cannot fertilize each other) but fully compatible with plants of the other group. In such a system, a given hermaphrodite plant can pollinate only half of the other hermaphrodites, while a male can pollinate all the hermaphrodites in the population. These conditions neatly offset the reproductive disadvantage affecting the males, which have no female function (and are also referred to as "female-sterile" for this reason) and can thus transmit their genes only by male gametes, and not by both male and female gametes like the hermaphrodites.
In addition, this self-incompatibility within two morphologically identical groups of hermaphrodites could be a key reproductive mode, the origin of plant species with separate genders that evolve through "intermediary" reproductive systems. In the overall context of the evolution of reproductive systems from hermaphroditism toward dioecy (system in which individuals are exclusively either male or female), mixed systems involving the presence in the same species of both females and hermaphrodites (gynodioecy) or both males and hermaphrodites (androdioecy) are considered intermediaries derived from hermaphroditism. However, all previous empirical examples have shown that androdioecy had evolved from dioecious systems through the females' acquisition of a male function, and not from hermaphroditic systems through the loss of the female function by certain hermaphrodites. This new study shows for the first time that a transition from hermaphroditism to androdioecy (presence of hermaphrodite and male individuals within the populations of a single species) is possible.
This discovery of a self-incompatibility system involving only two morphologically indistinguishable groups of hermaphrodite plants comes as a totally unexpected development. One of the researchers' next challenges will be to explain, from a functional point of view, how the number of self-incompatibility groups has been maintained at two.
- P. Saumitou-Laprade, P. Vernet, C. Vassiliadis, Y. Hoareau, G. Magny (de), B. Dommee, J. Lepart. A Self-Incompatibility System Explains High Male Frequencies in an Androdioecious Plant. Science, 26 March 2010 DOI: 10.1126/science.1186687
| <urn:uuid:96c5cc19-1110-42dc-8c08-f3d28cfe3ca5> | 3.140625 | 958 | News Article | Science & Tech. | 19.122737 | 1,568 |
The equipment for demonstrating the Aharonov-Bohm effect consists of a source of uniform energy electrons, a screen with two slits in it, a screen to capture the interference pattern and a solenoid. The solenoid is an electromagnet encased in an iron tube. The iron tube captures all of the magnetic field created by the electromagnet. Outside of the tube the magnetic field is zero.
With the electromagnet off, the equipment generates the standard interference pattern for the two-slit experiment. A schematic diagram for the experiment is shown below.
When the current in the solenoid is increased there is a shift in the interference pattern on the screen. This is quite surprising because the iron shielding of the electromagnet confines the magnetic field entirely to the solenoid itself. The magnetic field in the paths of the electrons is zero. In any real experiment the magnetic field would be zero except for background levels.
Paul A.M. Dirac proved in 1931 that in such an arrangement there would be a phase shift for the wave function of the electron, based not upon the level of the magnetic field in the region through which it passes but upon the level of the vector potential function.
If B(X) is the magnetic field function then the vector potential function A(X) is such that
B(X) = ∇×A(X)
For any A(X) equal to the gradient of a scalar function ∇G the curl is zero,
∇×(∇G) = 0
Thus the vector potential function in a region can be nonzero even though the magnetic field is zero.
The wave function ψ(X) is a complex-valued function of the point in space X. The magnitude squared of the wave function |ψ(X)|2 is the probability density for the electron at point X. A wave function can be multiplied by a function of the form exp(-iφ) without affecting the magnitude and thus without affecting the probabilities of the electron being found in any region of space. The quantity φ is called the phase angle.
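As a quick check of that statement, multiplying the wave function by a unit-modulus phase factor leaves the probability density unchanged:

\[
\left|\psi(X)\,e^{-i\varphi}\right|^{2}
= \psi(X)e^{-i\varphi}\,\overline{\psi(X)e^{-i\varphi}}
= \psi(X)\overline{\psi(X)}\,e^{-i\varphi}e^{+i\varphi}
= \left|\psi(X)\right|^{2}.
\]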
It was generally thought before the Aharonov-Bohm experiment that changes in phase angle do not affect the behavior of electrons. The experiment showed that the phase angle of electrons could be modified even though the magnetic field through which they pass is zero, and that the modification can be detected.
What Dirac showed is that the change in phase angle of an electron passing through a path S is
Δφ = (e/h)J
where e is the charge of the electron, h is Planck's constant divided by 2π and J is the line integral
J = ∫S A·dX
where A is the vector potential for the magnetic field. The phase difference between electrons traveling on path 1 compared to path 2 is thus based upon the difference in the line integrals. This is equivalent to computing the line integral forward on path 1 and then backward on path 2, which in turn is equivalent to computing the line integral around the closed path created by following path 1 and then path 2 in the reverse direction. By Stokes' Theorem the line integral around a closed path is equal to the integral of the curl of the vector quantity over the surface enclosed by the path. In this case the curl of the vector potential is the magnetic field, and this is nonzero over the cross-section of the solenoid.
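Written out in the same notation (with h again standing for Planck's constant divided by 2π, and ΦB denoting the magnetic flux confined to the solenoid), the argument can be summarized as

\[
\Delta\varphi_{1}-\Delta\varphi_{2}
= \frac{e}{h}\left(\int_{S_{1}}\mathbf{A}\cdot d\mathbf{X}
                 -\int_{S_{2}}\mathbf{A}\cdot d\mathbf{X}\right)
= \frac{e}{h}\oint_{C}\mathbf{A}\cdot d\mathbf{X}
= \frac{e}{h}\iint_{\Sigma}\left(\nabla\times\mathbf{A}\right)\cdot d\mathbf{S}
= \frac{e}{h}\,\Phi_{B},
\]

where C is the closed curve formed by path 1 followed by path 2 traversed in reverse, and Σ is any surface bounded by C. Since the flux through the solenoid is nonzero, the two paths acquire different phases even though the electrons themselves never pass through a region of nonzero magnetic field.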
What the Aharonov-Bohm experiment established is that it is not only the electric and magnetic field that can have observable effects. The vector potential can also produce observable effects. Originally the vector potential function was only a mathematical artifact, a convenience. What the Aharonov-Bohm effect shows is that the vector potential function has a primacy, an existence in its own right.
| <urn:uuid:8bb68c4a-91f9-40f5-919f-3a7a122fd716> | 3.921875 | 767 | Academic Writing | Science & Tech. | 47.624724 | 1,569 |
How does GPS work?
The science behind GPS
Now that you know how GPS has been developed, it's time to dive deeper into the subject and find out exactly what happens before your navigation device tells you exactly where you are.
The location of the satellites
To work out where you are, your navigation device needs to know two things: a. The location of at least four satellites above you, b. Your distance from each of those satellites.
The distance of the satellites
A navigation device works out its distance from a GPS satellite from the time it takes the signal to travel to the receiver from the satellite.
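As a rough illustration (the travel time used here is only a representative figure, not a measured value), a signal moving at the speed of light that takes about 0.067 seconds to arrive corresponds to a range of

\[
d = c\,\Delta t \approx \left(3\times10^{5}\ \mathrm{km/s}\right)\times\left(0.067\ \mathrm{s}\right)\approx 2\times10^{4}\ \mathrm{km},
\]

which is on the order of the roughly 20,000 km altitude at which GPS satellites orbit.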
The fourth satellite
In satellite navigation perfect timing is everything; the fourth satellite checks the time measurement of the other three satellites to make sure the information about your location is as accurate as possible.
Atmosphere-induced error & Multipath error
So we’ve got perfect timing and we know the satellite's exact position. But up to now our calculations have been based on the speed of light in a vacuum, the only place where the speed of light is a known constant. | <urn:uuid:96cfd2f1-74d0-4653-8f07-d0d884012d4b> | 3.84375 | 226 | Knowledge Article | Science & Tech. | 43.046091 | 1,570 |
GREENBELT, Md., Aug. 16 (UPI) -- An instrument aboard a NASA orbiter has detected helium in the moon's tenuous atmosphere, the space agency said.
These remote-sensing observations by the Lunar Reconnaissance Orbiter confirm measurements taken in 1972 by an experiment deployed on the moon's surface by Apollo 17, a NASA release reported Wednesday.
The instrument aboard the LRO examined ultraviolet emissions in the tenuous atmosphere above the lunar surface, detecting helium in measurements spanning more than 50 orbits.
"The question now becomes, does the helium originate from inside the moon, for example, due to radioactive decay in rocks, or from an exterior source, such as the solar wind?" said Alan Stern of the Space Science and Engineering Division at Southwest Research Institute, Boulder, Colo.
"If we find the solar wind is responsible, that will teach us a lot about how the same process works in other airless bodies," Stern said.
If additional observations rule out the solar wind, then radioactive decay or other internal lunar processes could be producing helium that diffuses from the moon's interior or is released during lunar quakes, scientists said.
"These ground-breaking measurements were enabled by our flexible operations of LRO as a Science Mission, so that we can now understand the moon in ways that were not expected when LRO was launched in 2009," Richard Vondrak, LRO project scientist at NASA's Goddard Space Flight Center, Greenbelt, Md., said.
| <urn:uuid:bb3b5a6b-302c-47ec-be49-e731b57d2c87> | 3.34375 | 442 | News Article | Science & Tech. | 45.698887 | 1,571 |
Links to ornithology programs at Patuxent Wildlife Research Center, Laurel, MD, including large scale survey analysis of bird populations, research tools, datasets and analyses, bird identification, and seasonal bird lists.
Describes how social scientists and natural resource managers work together to develop cooperation and public help in solving complex natural resource issues. Links to the use of public surveys in a project for black-tailed prairie dog management.
Hydrologic data web page for the Water Resources Inventory Area 1 (WRIA 1) Watershed Management Project studying surface and ground water in the Nooksack watershed in northwest Washington. Links to environmental data and maps.
We removed non-native fish from a section of the river and the endangered native species humpback chub increased in abundance. But it is not yet clear that decreased competition explains the rebound in population. | <urn:uuid:e11c4302-137c-4820-82c6-0e06d5e21ad5> | 2.640625 | 172 | Content Listing | Science & Tech. | 23.327664 | 1,572 |
Phenomena described by geophysicist in non-technical terms
As Alaska’s billion lakes become colder and harder, some of them will sport mysterious, spidery cracks extending from small holes in the ice. This phenomenon inspired a geophysicist to figure out what he calls “lake stars.”
“I thought something so pretty and relatively commonly observed should be understandable, so I pursued it,” said Victor Tsai, who wrote perhaps the only paper in existence on lake stars.
Tsai, a geophysicist with the Seismological Laboratory at the California Institute of Technology, developed a mathematical model to explain how the stars form. He recently gave a less technical description of the conditions needed for lake stars to blossom.
“You need relatively thin ice, and a thick snow cover,” he said. “The lake needs to have just frozen over and then had a heavy enough snow to weigh the ice down enough that the snow can become wet from lake water.”
Tsai became interested in lake stars when he spent a summer at Woods Hole Oceanographic Institution in Massachusetts. There, he found that, while many people had guessed at what caused lake stars, there was no established theory to explain them. He set up a lab experiment in which he created the stars indoors, using a plate cooled below freezing. Through a dish of slush, he fed a steady drip of water one degree above freezing. Narrow channels formed in all of his attempts, and he wrote a 13-page paper on “the formation of radial fingers emanating from a central source.”
He provides here a non-technical version on how the stars form: From a hole in the ice, relatively warm lake water flows upward and infiltrates the slushy snow on top. Water then flows outwards through the slush. Some areas of slush melt more due to variations in water flow, allowing an arm of the star to grow faster. As the arms grow, cold robs the water of heat, slowing the growth of some arms and allowing others to sprout.
Lake stars are somewhat reminiscent of a feature familiar to most Alaskans, windshield cracks. Michael Marder, a physicist at the University of Texas in Austin, once explained to me how they happen.
A windshield, he said, isn't one solid piece of glass, it’s two layers pressed around a layer of plastic, which keeps glass from flying during an accident. The entire windshield is about as thick as a pile of five dimes.
During windshield manufacture, a machine presses glass onto the plastic with a pressure of about 800 atmospheres, which is about 800 times the force Earth's atmosphere exerts on us. Auto glass formed under such stress shuns blows that would shatter house window glass. Car windshield glass would be as rock-resistant as granite if it were not for invisible flaws, Marder said.
Flying gravel sometimes finds a weak spot in the glass, leaving behind a pitted, round indentation. Water vapor in the air, even in tiny amounts, helps cracks expand across a windshield. Water molecules act like scissors with edges no thicker than an atom, travelling to the tip of a crack and snipping glass apart.
Temperature differences enhance the growth of cracks. If a windshield's inner surface is 70 degrees Fahrenheit on a 40-below-zero day, a war is being waged within the glass. The cold outside surface of the glass contracts as the hot surface expands. At the interface, cracks expand.
Both the water vapor and temperature scenarios need another element to lengthen lines on a dimpled windshield — bumpy motion, which pulls the glass apart. Because most Alaska road crews spread gravel for winter traction, we have, in good quantity, all the ingredients for cracked windshields.
(Since the late 1970s, the University of Alaska Fairbanks’ Geophysical Institute has provided this column free in cooperation with the UAF research community. Ned Rozell is a science writer for the Geophysical Institute.) | <urn:uuid:703f01c6-a707-4a42-9d1a-30047283dc45> | 3.9375 | 827 | News Article | Science & Tech. | 47.659033 | 1,573 |
Blending of an eclipse image (from the High Altitude Observatory) with a Yohkoh X-ray image (from the Yohkoh Science Team).
Image courtesy of the High Altitude Observatory, National Center for Atmospheric Research (NCAR), Boulder, Colorado,
The Solar Corona
Rising above the Sun's chromosphere, the temperature jumps sharply from a few tens of thousands of kelvins to as much as a few million kelvins in the Sun's outer atmosphere, the solar corona. Understanding the reason the Sun's corona is so hot is one of the many challenges facing solar physicists today.
Because of the very high temperatures, the corona emits high energy radiation and can be observed in X-rays. The Earth's atmosphere absorbs X-rays, but satellites above the atmosphere, such as the Yohkoh spacecraft, can observe the Sun in these wavelengths. Shown on the left is a blending of a Yohkoh X-ray image (reddish colors) with an eclipse image taken by the High Altitude Observatory (gray-white colors) on November 3, 1994.
Near the poles of the Sun, the corona is dark for both X-rays and white light. These regions are coronal holes and are the source of the solar wind that extends out into interplanetary space. The scattered white light shows the density of plasma in the corona. The large white regions extending out far from the Sun are helmet streamers, where the solar plasma has been trapped by the Sun's magnetic field.
| <urn:uuid:5dc7c933-0da8-4a29-a678-a9edf98c4eee> | 3.75 | 736 | Content Listing | Science & Tech. | 55.508054 | 1,574 |
Satellite data suggests that March, 2011 was the coolest in more than a decade. The average worldwide temperature in March was .18 degrees below the 30-year average for the month. It was the coldest March since 1999. February was also cold with temperatures running .03 degrees below the long-term average. Satellites began measuring temperature in 1978. The instruments measure the temperature of the atmosphere from the surface up to an altitude of about five miles above sea level. Satellites allow meteorologists to get accurate temperature readings for almost all regions of the Earth. This includes remote desert, ocean and rain forest areas where reliable climate data are not otherwise available. The cooling was largely driven by La Nina, which is a cooling of the equatorial Pacific Ocean. | <urn:uuid:cac9379d-97c9-4cd8-8915-863f64fb6294> | 3.875 | 153 | Knowledge Article | Science & Tech. | 34.913214 | 1,575 |
In mid-July this year, a roar echoed around one of the most remote inlets of northern Greenland -- and an island was born. No ordinary island, but a huge chunk of ice, roughly twice the size of Manhattan, that had broken from the Petermann Glacier.
Scientists gave it the romantic name of PII-2012 and watched it begin to drift slowly into the Nares Strait, which separates Greenland from Canada. Then it began to break up, spawning several smaller ice islands.
The birth of PII-2012 was no isolated event. The Petermann Glacier had lost a much larger chunk in 2010. It also broke into fragments, though that may not be the right word. One of them alone was estimated to weigh 3.5 billion tonnes, or metric tons (3.86 billion short tons), according to E. Julie Halliday, a researcher at Memorial University in Canada.
Canada's ice shelves are also retreating fast. As the Arctic warms, both glaciers and ice-shelves are launching floating islands into the sea that may threaten shipping, the fishing industry and off-shore oil and gas platforms.
The air around northern Greenland and Ellesmere Island has warmed by about 2.5 degrees Celsius in the past 25 years. Ocean temperatures in the Arctic are also thought to have risen, though there is less data on them.
Halliday noted in a paper presented at the Arctic Technology Conference in Houston last week that while "management of a 3.5 billion-tonne ice island away from offshore structures may theoretically be possible, putting it into practice would be logistically very challenging."
One option, she said, would be to cover the surface of the ice island with carbon, which would accelerate its melting, but "the challenge then would become dealing with numerous smaller ice fragments as opposed to one large one." And even a small one could be the size of a football stadium.
Scientists are only now beginning to research these ice islands and the rate at which they melt and divide, especially as the Arctic waters warm and the restraining effect of sea ice disappears. They have been using Autonomous Underwater Vehicles -- the undersea equivalent of surveillance drones -- to map the underside of ice islands.
After the 2010 "calving" from the Petermann, several fragments between them containing billions of tons of ice drifted south along the Labrador coast, interfering with shipping in the Strait of Belle Isle. One traveled 150 miles (240 kilometers) in just one week.
Derek Mueller, a researcher at Carleton University in Ontario, has been following one 12 million-tonne fragment that was one of the progeny of the 2010 calving of Petermann Glacier. Nicknamed Berghaus, it was still wandering around a year later near Bylot Island in Baffin Bay before finally disintegrating in the fall of 2011. | <urn:uuid:5b932368-dc0c-4f38-a00c-adc7721b212a> | 3.28125 | 581 | News Article | Science & Tech. | 47.84558 | 1,576 |
Shale Shocked: ‘Remarkable Increase’ In U.S. Earthquakes ‘Almost Certainly Manmade,’ USGS Scientists Report
A U.S. Geological Survey (USGS) team has found that a sharp jump in earthquakes in America’s heartland appears to be linked to oil and natural gas drilling operations.
As hydraulic fracturing has exploded onto the scene, it has increasingly been connected to earthquakes. Some quakes may be caused by the original fracking — that is, by injecting a fluid mixture into the earth to release natural gas (or oil). More appear to be caused by reinjecting the resulting brine deep underground.
Last August, a USGS report examined a cluster of earthquakes in Oklahoma and reported:
Our analysis showed that shortly after hydraulic fracturing began small earthquakes started occurring, and more than 50 were identified, of which 43 were large enough to be located. Most of these earthquakes occurred within a 24 hour period after hydraulic fracturing operations had ceased.
In November, a British shale gas developer found it was “highly probable” its fracturing operations caused minor quakes.
Then last month, Ohio oil and gas regulators said “A dozen earthquakes in northeastern Ohio were almost certainly induced by injection of gas-drilling wastewater into the earth.”
Now, in a paper to be deliver at the annual meeting of the Seismological Society of America, the USGS notes that “a remarkable increase in the rate of [magnitude 3.0] and greater earthquakes is currently in progress” in the U.S. midcontinent. The abstract is online.EnergyWire reports (subs. req’d) some of the findings:
The study found that the frequency of earthquakes started rising in 2001 across a broad swath of the country between Alabama and Montana. In 2009, there were 50 earthquakes greater than magnitude-3.0, the abstract states, then 87 quakes in 2010. The 134 earthquakes in the zone last year is a sixfold increase over 20th century levels.
The surge in the last few years corresponds to a nationwide surge in shale drilling, which requires disposal of millions of gallons of wastewater for each well. According to the federal Energy Information Administration, shale gas production grew, on average, nearly 50 percent a year from 2006 to 2010.
I foresee a study in the near future, paid for by the oil industry, that concludes the increase in little earthquakes is actually serving to reduce the frequencies and magnitudes of large ones. | <urn:uuid:55dc7eab-76e0-411f-90d6-503ba50de262> | 3.109375 | 515 | Personal Blog | Science & Tech. | 43.561945 | 1,577 |
AROS is a multitasking operating system. This essentially means that multiple
programs may be run at the same time. Every program running is called a task.
But there are also tasks that are not user-programs. There are, for example,
tasks handling the file-system and tasks watching the input devices. Every
task gets a certain amount of time, in which it is running. After this time
it's the next task's turn; the system reschedules the tasks.
Plain tasks are very limited in their capabilities. Plain tasks must not call
a function of dos.library or a function that could call a function of
dos.library (this includes OpenLibrary() for most cases!). Processes
don't have this limitation.
A task is described by a struct Task as defined in exec/tasks.h.
This structure contains information about the task like the its stack, its
signals and some management data. To get the address of a task structure,
struct Task *FindTask( STRPTR name );
The name is a pointer to the name of the task to find. Note that this
name is case-sensitive! If the named task is not found, NULL is
returned, otherwise a pointer to a struct Task is returned .
To get a pointer to the current task, supply NULL as name. This can
The task structure contains a field called tc_UserData. You can use this
for your own purposes. It's ignored by AROS.
A task must be in one of following states (as set in the field
tc_State of the task structure):
- This state should never be set!
- The task is currently running. On single processor architectures, only
one task can be in that state.
- The task is waiting for its activation.
- The task is waiting on some
.. FIXME: signal.
As long as this does not occur, the program doesn't become active; it is
ignored on rescheduling. Most interactive programs are in this state
most of the time, as they wait for user input.
- The task is in an exception.
Do not set these states yourself, unless you know exactly what you are
The field tc_Node.ln_Pri of the struct Node embedded in the task
structure (see exec/nodes.h and the
.. FIXME:: section about exec lists
specifies the priority of the task. Possible priorities reach from -128
to 127. The higher the priority the more processor time the task gets
from the system. To set a task's priority use the function:
BYTE SetTaskPri( struct Task *task, BYTE newpri );
The old priority is returned.
Every task has a stack. A stack is a piece of memory in which a tasks stores
its temporary data. Compilers, for example, use the stack to store variables,
you use in your programs. On many architectures, the stack is also used to
supply library functions with parameters.
The size of the stack is limited. Therefore only a certain amount of data
can be stored in the stack. The stack-size of a task is chosen by its caller
and must be at least 4096 bytes. Tasks should generally not assume that their
stack-size is bigger. So, if a task needs more stack, the stack can be
exchanged by using the function:
void StackSwap( struct StackSwapStruct *sss );
The only argument, sss, is a pointer to a struct StackSwapStruct as
defined in exec/tasks.h.
struct StackSwapStack must contain a pointer to the beginning of the new
stack (strk_Lower), to the end of the new stack (stk_Upper) and a new
stack-pointer (stk_Pointer). This stack-pointer is normally set either to
the same address as stk_Lower or to the same address as stk_Upper,
depending on the kind of CPU used.
When calling StackSwap(), the StackSwapStruct structure supplied as
sss will be filled with information about the current stack.
After finishing using the new stack, the old stack must be restored by
calling StackSwap() a second time with the same StackSwapStruct.
Normally, only compilers need this function. Handle it with great care as
different architectures use the stack in different ways!
A process is an expanded task. Different from a task, it can use functions of
dos.library, because a process structure contains some special fields,
concerning files and directories. But of course, all functions that can be
used on tasks can also be used on processes.
A process is described by a struct Process as defined in
dos/dosextens.h. The first field in struct Process is an embedded
struct Task. The extra fields include information about the file-system,
the console, the process is connected to, and miscellaneous other stuff.
Most functions of dos.library set the secondary error-code of the process
structure on error. This way the caller can determine, why a certain
system-call failed. Imagine, the function Open(), which opens a named
file, fails. There can be multiple reasons for this: maybe the file named
doesn't exist, maybe it is read protected. To find this out, you can query
the secondary error-code set by the last function by using:
DOS-functions return one of the ERROR_ definitions from dos/dos.h.
Applications can, of course, process these error-codes as well (which is
useful in many cases), but often we just want to inform the user what went
wrong. (Applications normally need not care if a file could not be
opened because it did not exist or because it was read protected.) To output
human-readable error messages, dos.library provides two functions:
LONG Fault( LONG code, STRPTR header, STRPTR buffer, LONG length );
BOOL PrintFault( LONG code, STRPTR header );
While PrintFault() simply prints an error message to the standard output,
Fault() fills a supplied buffer with the message. Both functions take
a code argument. This is the code to be converted into a string. You can
also supply a header string, which will prefix the error message.
The header may be NULL, in which case nothing is prefixed.
Fault() also required a pointer to a buffer, which is to be filled with
the converted string. The length of this buffer (in bytes) is to be
passed in as the last argument. The total number of characters put into the
buffer is returned. You are on the safe side, if your buffer has a size of
83 character plus the size of the header.
Examples for the use of these functions can be found in later chapters,
especially in the chapter about
.. FIXME:: Files and Directories.
Secondary error-codes from a program are handed back to the caller. If this
is a shell, the secondary error-code will be put into the field
cli_Result2 of the shell structure (struct CommandLineInterface as
defined in dos/dosextens.h and
.. FIXME:: discussed later.
You can also set the secondary error-code yourself. This way, you can either
to pass it back to another function in your program or to your caller. To
set the secondary error, use:
LONG SetIoErr( LONG code );
code is the new secondary error-code and the old secondary error-code is | <urn:uuid:7673a73f-8801-4880-9065-a4adbabd2564> | 2.890625 | 1,625 | Documentation | Software Dev. | 61.993126 | 1,578 |
Biologists at San Francisco State University are tagging radio trackers onto zombie-like bees infected with a fly parasite to find out more about species population decline.
Bees that are infected with the Apocephalus borealis fly abandon their hives and congregate near outside lights, moving in erratic circles on the ground before dying. This parasitic infection was discovered last year by SF State biology professor John Hafernik and described in a PLoS One paper.
Hafernik and his colleagues are trying to find out how much of a threat the emerging fly parasite might be to the health of honey bee colonies, or if the parasite is linked to the colony collapse disorder that has devastated bee populations in the United States and Europe.
The team is tagging infected bees' thoraxes with transmitters the size of "a fleck of glitter" and then monitoring their movements in and out of a hive on the biology building. Laser readers at the entrance to the hives interact with individual trackers. They are also monitoring other hives nearby to check for signs of the parasite. They are inviting members of the public to get involved through the ZomBeeWatch website. Visitors can upload photos of suspected infected bees to help track the spread of the parasite.
It's important to monitor the comings and goings of bees to understand the progression of the parasitic infection, particularly how long it takes for affected bees to abandon the hive. The original paper found bees disoriented and dying at night, but researchers are keen to find out whether the infected bees only leave the hives to fly in the dark.
Christopher Quock, an San Francisco State graduate biology student, said: "Hopefully in the long run this information might help us understand how much of a health concern these flies are for the bees, and if they truly do impede their foraging behavior. We also want to know whether there are any weak links in the chain of interactions between these flies and honey bees that we could exploit to control the spread of this parasite."
The team also wants to study how the infected bees are treated by uninfected bees. Are they expelled from the hive? Or treated with aggression by other workers?
Biology professor Andrew Zink explained: "If enough of the parasitized bees do the wrong 'waggle' dances to send unparasitized foragers off in the wrong directions for food, or distract unparasitized foragers through antagonistic interactions, the hive's productivity could falter." | <urn:uuid:3d14124d-1e98-41eb-8e7e-cea022d82c89> | 2.734375 | 504 | News Article | Science & Tech. | 34.168939 | 1,579 |
Welcome to the May 2007 episode of Blueshift, from NASA Goddard Space Flight Center. We’ll discuss our search for Earth-like planets outside of our own Solar System. We’ll also look into gamma ray bursts, and how the Swift satellite team is working to solve their mysteries. This episode includes a brain teaser and mailbag question.
- Introduction (0:00 – 1:20)
- Brain Teaser (1:21 – 2:12)
- Interview: Jennifer Wiseman and the Search for Other Worlds (2:13 – 8:11)
We’re finding new planets almost every day – find out what’s out there and how we’re finding them.
- Featured Story: Solving the Puzzles of Gamma-Ray Bursts (8:12 – 14:26)
These mysterious events have had scientists asking questions for years, but now we have some answers.
- Mailbag: What kind of rays are cosmic rays? (14:27 – 17:40)
Get the facts on these fast-moving particles… and old movies.
- James Webb Space Telescope story update (17:41 – 18:17)
New information about the story featured in Episode 1.
- Brain Teaser – Answer (18:18 – 19:06)
- Closing (19:07 – 20:00)
The Search for Other Worlds
In our interview with Jennifer Wiseman, we heard about the technology and methods behind the discovery of other planets outside of our solar system. For more information about these discoveries, visit:
- PlanetQuest: the Search for Another Earth
- California & Carnegie Planet Search Project
- Press Release: Astronomers Find First Earth-like Planet in Habitable Zone (April 25, 2007)
Solving the Puzzles of Gamma-Ray Bursts
The Swift satellite is regularly detecting gamma-ray bursts all over the Universe, powerful events of great interest to astronomers. To find out more about Swift and gamma-ray bursts, take a look at these sites:
- NASA’s Swift Mission
- Press Release: Gamma-Ray Burst Challenges Theory (March 10, 2007)
- Press Release: Gamma-Ray Bursts Active Longer Than Thought (May 22, 2007)
|Trivia Master||Louis Barbier|
|Interview with Jennifer Wiseman||Anita Krishnamurthi|
|Featured Story||Ilana Harrus|
|Theme Music||Naked Singularity|
|Other Music||Outta Scope|
|Executive Producer||Anita Krishnamurthi|
|Responsible NASA Official||Kim Weaver|
No comments yet. | <urn:uuid:bad56675-aba6-4202-b500-2e69c5345eca> | 2.546875 | 551 | Truncated | Science & Tech. | 43.264414 | 1,580 |
One of the world’s most innovative new ideas looks a lot like a stack of Tupperware containers filled with dirt.
And technically, it is.
But it’s also a dirt-powered battery dreamed up by Harvard’s Erez Lieberman-Aiden. And the Bill & Melinda Gates Foundation thinks it just might be a game changer for the developing world.
The Seattle-based foundation announced the winners of its Grand Challenges Exploration grants Thursday, an attempt to spur creative –if not downright unusual — approaches to solving problems in poor countries.
There were 88 grants for $100,000 awarded. The Associated Press has more on the grantees, many of whom proposed solutions to sanitation problems in poverty-stricken nations.
The dirt-powered battery features a microbial fuel cell that recharges using free electrons that are abundant in soil bacteria. The foundation believes it could be used by rural health clinics and for charging mobile technology. | <urn:uuid:ee48ec8e-dba7-4545-a046-9efa09cde3bb> | 2.546875 | 195 | News Article | Science & Tech. | 45.262353 | 1,581 |
Copyright © 1999 by Akimasa Nakamura (Kuma Kogen Astronomical Observatory, Japan)
The CCD image was taken on 1999 October 9.65 UT, using a 0.60-m f/6 Ritchey-Chretien telescope.
This comet was found during 1987 January by Jennifer Wiseman on two photographic plates exposed on 1986 December 28.29 and 28.34 by Brian Skiff at Lowell Observatory's Anderson Mesa Station. The magnitude was estimated as 14. Skiff and Wiseman were able to confirm the comet on 1987 January 19.11. The comet had faded to magnitude 14.5. On both occasions the comet appeared diffuse with a strong condensation.
The first orbit was computed and published by Brian G. Marsden on January 21. He used 6 positions obtained during the period of December 28 to January 21 and indicated the comet was moving in a short-period orbit. He determined the perihelion date as 1986 November 22.76, the perihelion distance as 1.506 AU, and the orbital period as 6.53 years. Marsden said the orbit indicated the comet passed about 0.25 AU from Jupiter during 1984. With nearly a month of observations, this orbit was little different from later orbits computed with several months of observations.
The comet was kept under observation until 1987 May 25.17 when T. Gehrels and J. V. Scotti (Steward Observatory, Arizona, USA) obtained an image of magnitude 19.4 with the 0.91-m Spacewatch telescope.
S. Nakano predicted the comet would next arrive at perihelion on 1993 June 4.39. B. Schmidt obtained three CCD images with the Multiple-Mirror Telescope on Mt. Hopkins on 1993 February 2. These revealed "suspected" weak images. Unfortunately, the images were not certain enough to establish a recovery had been made. No other successful attempts were made until James V. Scotti announced he had recovered the comet on Spacewatch images exposed on December 16. The magnitude was then 20.8 and Scotti said the coma was 13 arc seconds in diameter. In addition, the nuclear condensation had a magnitude of 22.6 and a faint tail extended 0.34 arc minute toward PA 286°. Scotti subsequently found the comet on a single CCD image obtained with Spacewatch. The positions confirmed that the faint object detected by Schmidt in February was the comet. The positions also indicated the prediction by Nakano required a correction of only -0.08 day. No additional observations were obtained during this apparition.
The comet was next predicted to pass perihelion on 2000 January 11.73. The comet was recovered by astronomers at Kitt Peak on 1999 September 13.48. The comet's total magnitude exceeded 13 during November of 1999. The final observation was also obtained at Kitt Peak on 2000 May 1.16.
| cometography.com |
| Comet Information
If you have any questions, please | <urn:uuid:43600f47-0efc-43f9-9491-5c6155cd41a8> | 2.640625 | 616 | Knowledge Article | Science & Tech. | 61.659008 | 1,582 |
Oil spill planning and response remains the primary use of these maps, however they are finding ever-widening use in such areas as coastal resource inventories and assessments, coastal planning, and recreational planning.
The Time Period section in this metadata record represents the dates when the data and information were collected to prepare the GIS products and atlases. Hence, the actual observation of the resource status was completed on, or most likely before, this date. See the atlas-specific metadata for actual survey and data publication dates.
SHORELINE CLASSIFICATION - ESI maps include a shoreline ranking, based on a scale relating sensitivity, natural persistence of oil, and ease of cleanup. The shoreline classification scheme combines an understanding of the physical and biological character of the shoreline environment, as well as the substrate type and grain size. Relationships among physical processes, substrate type, and associated biota produce specific geomorphic/ecological shoreline types, sediment transport patterns, and predictable oil behaviors and biological impacts. The sensitivity ranking (Rank 1 - Rank 10) is dictated by the following factors: relative exposure to wave and tidal energy, shoreline slope, substrate type (grain size, mobility, penetration and/or burial, and trafficability), and biological productivity and sensitivity.
Methods for classifying shorelines include review of existing maps, literature, and remote imagery, incorporated with observations from low-altitude aerial surveys and ground observations.
Base maps, shoreline, wetland boundaries, and aerial photographs are gathered prior to a survey. Using this information, along with any previous studies of the area, the geologist completes a preliminary shoreline classification. This classification is modified during the fieldwork process.
Fieldwork consists of two parts: aerial surveys and ground verifications. During the overflight phase, the geologist annotates the shoreline base map with ESI Rankings, carefully noting transitions in habitats. Shorelines with more than one ESI type in the intertidal zone are annotated on the map in order from landward to seaward ESI classification. A segment of coastline may be assigned up to three ESI shoreline types. In areas where the coastline has changed significantly from the base map (either through natural or artificial processes), the geologist modifies the base map by hand. In addition to classifying the shoreline, the observer takes representative low-altitude, oblique photographs for each ESI habitat.
Ground verification consists of spot-checking to confirm aerial observations. Ideally, an example of each habitat is visited and photographed from the ground. At a minimum, ground verification concentrates on confirming grain-size classification for sedimentary substrates, since this can be difficult to recognize from the air. If a portion of the coast is identified during the overflights as problematic or difficult to classify, that segment is ground checked and maps are updated according to the ground observations.
Once the field component of the project is complete, the maps are scanned and the digital shoreline arcs are updated with the ESI attributes noted in the field. The shape and position of the digital shoreline may also be modified at this time to reflect field observations. After the information from the field map has been incorporated into the digital database, the ESI shoreline is color-coded and replotted at the same scale as the original base maps. The geologist then compares the classified shoreline plots to the original field-annotated base maps and any errors in shoreline attributes, as recorded in the GIS database, are corrected.
SENSITIVE BIOLOGICAL RESOURCES - ESI maps depict oil-sensitive animals and rare plants, as well as habitats that are used by oil-sensitive species. Some habitats, such as submersed aquatic vegetation and coral reefs, that are themselves sensitive to oil spills may also be depicted.
Biological resource information is gathered from local officials who provide expert knowledge and suggest relevant source materials for biological resources in the study area. When the data have been collected and reviewed, the biologist plans how each resource will be mapped throughout the entire study area. During this process, it may be necessary to prioritize the species to be mapped in order to avoid excess clutter, which makes the final product difficult to read or interpret. Considerations may include species that are rare or listed as protected or endangered, or those species that have a particular commercial, recreational, or cultural value in the area. It may also be appropriate to limit some species-mapping to particularly critical life stages, such as nesting or spawning.
Biological features are mapped as points, polygons, and lines, and are given unique numbers corresponding to associated data tables, for easy identification and editing.
HUMAN-USE RESOURCES - ESI maps also include human-use areas that could be impacted by an oil spill, or that could provide access for spill response operations. They include areas that have added sensitivity and value because of their use, such as beaches, parks, and marine sanctuaries; water intakes; and archaeological sites. Human-use resources are divided into four major components: high-use recreational and shoreline access locations, management areas, resource extraction locations, and archaeological and historical cultural resource locations. Each human-use resource is assigned a feature type and feature code. Management areas are typically mapped as polygons, while the remaining socioeconomic resources are generally depicted as points.
For more information about the data sources and process for a particular resource, refer to the metadata record for the desired resource in the ESI atlas of interest.
Animals, plants, and habitats potentially at risk from oil spills are segmented into seven elements based on major taxonomic and functional groupings. Each element is further divided into groups of species or sub-elements with similar taxonomy, morphology, life history, and/or behavior relative to oil spill vulnerability and sensitivity. Attribute data include: species names (common and scientific), the legal status of each species (state and/or federal threatened, endangered, and special concern listings), concentration/abundance, seasonal presence by month, and special life-history time-periods (e.g. spawning, nesting).
Human-use resources can be subdivided into four major components: high-use recreational and shoreline access locations, management areas, resource extraction locations, and archaeological and historical cultural resource locations. Each of these elements is further subdivided based upon types of use.
The files included at <http://data.nodc.noaa.gov/coris/data/NOAA/nos/EnvironmentalSensitivityIndices/VirginIslands/> include the individual map PDFs, as well as files containing ancillary ESI map information: GUIDE.PDF, INTRO.PDF, INDEX.PDF, LEGEND.PDF, SEASON.PDF (for some atlases), and METADATA.PDF. In order for the links between the various documents to work properly, users must maintain the same directory structure and file names on their personal hard drive.
To view a map of a particular area, the user should open the INDEX file and click a region of interest on the map. The map file will open, displaying the ESI for that region. To ensure that the user has seasonality information for each region, the appropriate seasonality table has been packaged as part of each map file. (To view the seasonality information for that map, click the title, óóóEnvironmental Sensitivity Index Map,óóó or simply scroll down to the next page. To return to the ESI map, click the seasonality page title, or simply scroll up until the map is in view.)
The PDFs can be used online or can be printed as individual atlas pages. | <urn:uuid:ad996d49-989d-4d96-a2d2-ab8ba958e3b3> | 3.078125 | 1,589 | Structured Data | Science & Tech. | 17.090912 | 1,583 |
hi, this is my first post here. Am a little nervous thinking how my post will be. but i will try my best to take you through the topic and make you understand the topic. the topic i will discuss is called "Decorator pattern". i will just discuss when to use the decorator pattern and just the basic concept of decorator pattern. What is Decorator pattern? decorator pattern is design pattern which is used to add more functionalities to an existing class dynamically ( at runtime). how does the concept of decroator pattern work? the concept is usually regarded as confusing. but it is simple if you just remember the word "Wrap". we have a class "A" which is implementing and interface "Iinterface"."A" has some specific behaviours "A1","A2" etc. now we want to add a new behaviour "B1" to A. (this should be done without altering Class "A") we can do it in a simple way Create a class "B" implementing "IInterface"."B" will have the behaviour "B1". create a class which will help any "IInterface" implementor to wrap around any other "IInterface" implementor. This is our decorator. now at runtme create an instance of "A".lets call it "objA". now create an instance of "B",lets call it "objB". use the use the decorator to help objB so that it can wrap around objA. now the decorator is nothing but objB which will have objA inside it. so the decorator now has the funtionality of both objB and objA so it will emit the behaviour "A1","A2","B1". note that we did not have to change the class A at all. now if we want to add another functionality "C1" to this decorator object(having objB and objA),how will we do it? we can do it, if we make the Decorator class implement "IInterface". recollect that the purpose of the Decorator is to help any "IInterface" implementor to wrap around any other "IInterface" implementor. now we can have any decorator wrapped in any other decorator. hope i was clear. i will give an example in my next post. | <urn:uuid:800cc3b0-0ac4-4945-b6f6-d994be68074d> | 3.171875 | 482 | Q&A Forum | Software Dev. | 60.226167 | 1,584 |
Common Lisp/Advanced topics/Numbers
Common Lisp has much more support for performing number-crunching tasks than most programming languages. This is achieved by having support for large integers, rational numbers, and complex numbers, as well as many functions to work on them.
Types of numbers
The hierarchy of the number type is as follows:
Fixnums and Bignums
Fixnums are integers which are not too large and can be manipulated very efficiently. Which numbers are considered fixnums is implementation-dependant, but all integers in [-215,215-1] are guaranteed to be such.
Bignums are integers which are not fixnums. Their size is limited by the amount of memory allocated for Lisp, and as such they can be really large. Operations on them are significantly slower than on fixnums. Of course, that doesn't make them less useful.
Ratios represent the ratio of two integers. They have the form numerator/denominator. The function / which performs division always produces ratios when its arguments are integers or ratios. For example
(/ 1 2) will result in 1/2, not 0.5. Other arithmetic operations also work fine with ratios.
Float is short for floating-point number, a datatype used to represent non-integer numbers in most programming languages. There are four kinds of floats in Common Lisp, which provide increasing precision (implementation-dependent). By default, implementations assume short floats, which have limited precision. To input a more precise float, other textual notations must be used, e.g., "1.0d0" for a double-float.
Complex is a datatype for representing complex numbers. The notation for complexes is #C(real imaginary). Real and imaginary parts are both either rational or floating-point. The operations that can be performed on complexes include all arithmetic operations and also many other functions which can be extended to complex numbers (such as exponentiation and logarithm).
The following functions are defined for all kinds of numbers:
- The arithmetical operations +,-,*,/ are quite obvious (note, though, that they can have more than two parameters).
- sin, cos, tan, acos, asin, atan provide trigonometric functions.
- The same, with h at the end (like asinh) provide corresponding hyperbolic functions.
- exp and expt perform exponentiation. exp accepts one parameter and calculates ex, while expt accepts two parameters (base and power).
- sqrt calculates the square root of a number.
- log calculates logarithms. If one parameter is supplied, the natural logarithm is calculated. If there are two parameters, the second parameter is used as the base.
- conjugate returns the complex conjugate of a number. For real numbers the result is the number itself.
- abs returns the absolute value (or magnitude) of a number.
- phase returns the complex argument (angular component) of a number.
- signum returns a number with the same phase as its argument, but with unit magnitude.
The following functions are defined for specific kinds of numbers:
Comparison of numbers
The following functions can be used for comparison of numbers. Each of these functions accepts any number of arguments.
- = returns t if all arguments are numbers of the same value and nil otherwise. Due to imprecise nature of floating-point numbers it is not advised to use = on them.
- /= returns t if all arguments are numbers of different value. Note that
(/= a b c)is not always the same as
(not (= a b c)).
- <, <=, >, >= check if their arguments are in the appropriate monotonous order. These functions can't be applied to complex numbers for obvious reasons.
- max and min return the largest and the least of their arguments, respectively.
Numeric type manipulation
These functions are used to convert numbers from one type to another.
- floor, ceiling, truncate, round take two arguments: number and divisor and return quotient (an integer) and reminder=number-quotient*divisor. The method for choosing the quotient depends on the function. floor chooses the largest integer that is not greater than ratio=number/divisor, ceiling chooses the smaller integer that is larger than ratio, truncate chooses the integer of the same sign as ratio with the largest absolute value that is less than absolute value of ratio, and round chooses an integer that is closest to ratio (if there are two such numbers, an even integer is chosen). Note: these functions return two values (see Multiple values).
- ffloor, fceiling, ftruncate, fround are the same as above but the quotient is converted to the same float type as number.
- (mod a b) returns the second value of (floor a b).
- (rem a b) returns the second value of (truncate a b).
- float converts its first argument (a real) to a float. It may be useful to avoid slow operations with rational numbers (see example 1). The second optional argument may be supplied, which must be float - it will be used as a prototype. The result would be of the same floating-point type as a prototype.
- rational and rationalize convert a real number to rational. When this number is a float rational returns a rational number that is mathematically equivalent to float. rationalize approximates the floating-point number. The former function usually produces ratios with a huge denominator so it's not as useful as you may think.
- numerator and denominator return the corresponding parts of a rational number.
- complex creates a complex number from its real part and imaginary part. Functions realpart and imagpart return real and imaginary part of a number.
Predicate returns a non-nil result if it's true and nil if it is false.
- zerop - the number is zero (there may be several zeros in Lisp - integer zero, real zero, complex zero, there may be negative zeros too).
- plusp, minusp - the real number is positive/negative.
- evenp, oddp - the integer is odd/even.
- integerp - the number is integer (of the type integer - see type tree above).
- floatp - the number is float.
- rationalp - the number is rational.
- realp - the number is real.
- complexp - the number is complex. | <urn:uuid:b4ceb744-f5db-4e95-9895-5dbe65dc5e7a> | 3.8125 | 1,381 | Documentation | Software Dev. | 46.045618 | 1,585 |
It was good to hear from Fed Duay, a two-year veteran of my AP Summer Institute at Manhattan College*
* Which, for the uninitiated, is in the Bronx. No, I don't get it, either.
I have two questions. We are doing the "B field of a straight wire lab", where we can use a compass aligned to the earth's magnetic field and trigonometry to find B's value; then we graph "B vs. 1/r" and use the slope to find the "vacuum permeability" value [or should we find the current as compare to the ammeter reading?]. However I have come across two values for the earth's field: 2x10^-5 T and 5x10^-5 T (or 20 microT and 50 microT respectively). What do you use for the earth's field value?
I actually do the experiment the other way -- I use the ammeter reading and mu naught to find the magnetic field. I like either of the two ways you suggested. That's one of the beautiful aspects of the graphical approach to laboratory... Depending on what you measure or what you look up, a single experiment can be done in a wide variety of ways.
As for the value of Bearth: You're only finding the HORIZONTAL COMPONENT of the earth's magnetic field. Along the east coast, the magnetic field points more down than north, at a "dip angle" that can be close to 70 degrees off of horiontal.
The site http://www.ngdc.noaa.gov/geomag/magfield.shtml will tell you the local magnetic field, including all components. I find at Woodberry Forest the northward magnetic field component is 2.0 x 10^-5 T.
(Fed continues with his second question:)
Also, I see that last year you covered part of waves before finishing magnetism; I am guessing that you needed to do the "standing wave lab on a string" before finishing EM. Is this the reason or is there something else I should be aware of?
Nothing other than personal preference is in play here. The intricacies of electromagnetic waves aren't included on the AP physics B exam. Certainly students are expected to know the EM spectrum, the visible wavelengths, which colors of light have higher frequencies, and so on; but the fact that electric and magnetic fields oscillate in accordance with Maxwell's equations is irrelevant at the physics B level. Standing waves are in no way a prerequisite to magnetism.
I tried sticking in the wave section before magnetism because that breaks up the toughest parts of the AP course. Electricity and magnetism kick my students' butts, especially coming in the dead of winter when they're busy and in bad moods, anyway. I put waves in between, because waves have some cool demonstrations, are easily vizualizable, and are (comparatively) easy .
(You got a question you want answered in Mail Time? Either post a comment, or email me at firstname.lastname@example.org. Those who include an astute and witty criticism of the Cincinnati Bengals or Reds impending disastrous seasons are most likely to see their questions answered.) | <urn:uuid:22b4e8f5-47bd-4302-b161-36fc729adaa3> | 2.53125 | 672 | Q&A Forum | Science & Tech. | 62.518955 | 1,586 |
Using static and non static synchronized method for protecting shared resource is another Java mistake we are going to discuss in this part of our series “learning from mistakes in Java”. In last article we have seen why double and float should not be used for monetary calculation , In this tutorial we will find out why using static and non static synchronized method together for protecting same shared resource is not advisable.
I have seen some times Java programmer mix static synchronized method and instance synchronized method to protect same shared resource. They either don't know or failed to realize that static synchronized and non static synchronized method lock on two different object which breaks purpose of synchronizing shared resource as two thread can concurrently execute these two method breaking mutual exclusive access, which can corrupt status of mutable object or even cause subtle race condition in Java or even more horrible deadlock in java.
Static and non static synchronized method Java
For those who are not familiar static synchronized method locked on class object e.g. for string class its String.class while instance synchronized method locks on current instance of Object denoted by “this” keyword in Java. Since both of these object are different they have different lock so while one thread is executing static synchronized method , other thread in java doesn’t need to wait for that thread to return instead it will acquire separate lock denoted byte .class literal and enter into static synchronized method. This is even a popular multi-threading interview questions where interviewer asked on which lock a particular method gets locked, some time also appear in Java test papers.
Bottom line is that never mix static and non static synchronized method for protecting same resource.
Example of Mixing instance and static synchronized methods
Here is an example of multithreading code which is using static and non static synchronized method to protect same shared resource:
here shared count is not accessed in mutual exclusive fashion which may result in passing incorrect count to caller of getCount() while another thread is incrementing count using static increment() method.
That’s all on this part of learning from mistakes in Java. Now we know that static and non static synchronized method are locked on different locks and should not be used to protect same shared object.
Other Java thread tutorials you may like: | <urn:uuid:23db3bb2-76f3-4261-89ab-37b4430f0042> | 3.25 | 454 | Personal Blog | Software Dev. | 27.496129 | 1,587 |
Manual Section... (3) - page: __freadable
NAME__fbufsize, __flbf, __fpending, __fpurge, __freadable, __freading, __fsetlocking, __fwritable, __fwriting, _flushlbf - interfaces to stdio FILE structure
size_t __fbufsize(FILE *stream);
size_t __fpending(FILE *stream);
int __flbf(FILE *stream);
int __freadable(FILE *stream);
int __fwritable(FILE *stream);
int __freading(FILE *stream);
int __fwriting(FILE *stream);
int __fsetlocking(FILE *stream, int type);
void __fpurge(FILE *stream);
DESCRIPTIONSolaris introduced routines to allow portable access to the internals of the FILE structure, and glibc also implemented these.
The __fbufsize() function returns the size of the buffer currently used by the given stream.
The __fpending() function returns the number of bytes in the output buffer. For wide-oriented streams the unit is wide characters. This function is undefined on buffers in reading mode, or opened read-only.
The __flbf() function returns a nonzero value if the stream is line-buffered, and zero otherwise.
The __freadable() function returns a nonzero value if the stream allows reading, and zero otherwise.
The __fwritable() function returns a nonzero value if the stream allows writing, and zero otherwise.
The __freading() function returns a nonzero value if the stream is read-only, or if the last operation on the stream was a read operation, and zero otherwise.
The __fwriting() function returns a nonzero value if the stream is write-only (or append-only), or if the last operation on the stream was a write operation, and zero otherwise.
The __fsetlocking() function can be used to select the desired type of locking on the stream. It returns the current type. The type argument can take the following three values:
- Perform implicit locking around every operation on the given stream (except for the *_unlocked ones). This is the default.
- The caller will take care of the locking (possibly using flockfile(3) in case there is more than one thread), and the stdio routines will not do locking until the state is reset to FSETLOCKING_INTERNAL.
- Don't change the type of locking. (Only return it.)
The _flushlbf() function flushes all line-buffered streams. (Presumably so that output to a terminal is forced out, say before reading keyboard input.)
SEE ALSOflockfile(3), fpurge(3)
COLOPHONThis page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man-pages/.
This document was created by man2html, using the manual pages.
Time: 15:26:43 GMT, June 11, 2010 | <urn:uuid:e77222d6-a18c-4015-8e61-20614c40f79d> | 3.046875 | 674 | Documentation | Software Dev. | 49.381047 | 1,588 |
Assembly: System.Xml (in system.xml.dll)
XmlResolver is used to resolve external XML resources, such as entities, document type definitions (DTDs), or schemas. It is also used to process include and import elements found in Extensible StyleSheet Language (XSL) style sheets or XML Schema definition language (XSD) schemas.
XmlUrlResolver is a concrete implementation of XmlResolver and is the default resolver for all classes in the System.Xml namespace. You can also create your own resolver.
You should consider the following items when working with the XmlResolver class.
XmlResolver objects can contain sensitive information such as user credentials. You should be careful when caching XmlResolver objects and should not pass the XmlResolver object to an untrusted component.
If you are designing a class property that uses the XmlResolver class, the property should be defined as a write-only property. The property can be used to specify the XmlResolver to use, but it cannot be used to return an XmlResolver object.
If your application accepts XmlResolver objects from untrusted code, you cannot assume that the URI passed into the GetEntity method will be the same as that returned by the ResolveUri method. Classes derived from the XmlResolver class can override the GetEntity method and return data that is different than what was contained in the original URI.
Your application can mitigate memory Denial of Service threats to the GetEntity method by implementing a wrapping implemented IStream that limits the number of bytes read. This helps to guard against situations where malicious code attempts to pass an infinite stream of bytes to the GetEntity method.
The following example creates an XmlReader that uses an XmlUrlResolver with default credentials.
// Create an XmlUrlResolver with default credentials. XmlUrlResolver resolver = new XmlUrlResolver(); resolver.Credentials = CredentialCache.DefaultCredentials; // Create the reader. XmlReaderSettings settings = new XmlReaderSettings(); settings.XmlResolver = resolver; XmlReader reader = XmlReader.Create("http://serverName/data/books.xml");
Windows 98, Windows 2000 SP4, Windows CE, Windows Millennium Edition, Windows Mobile for Pocket PC, Windows Mobile for Smartphone, Windows Server 2003, Windows XP Media Center Edition, Windows XP Professional x64 Edition, Windows XP SP2, Windows XP Starter Edition
The .NET Framework does not support all versions of every platform. For a list of the supported versions, see System Requirements. | <urn:uuid:e62b0074-7250-4434-8314-3fccf4f71da8> | 2.890625 | 561 | Documentation | Software Dev. | 37.327043 | 1,589 |
4.3 Physical Basis
So far, the motivation for the GCLF as a standard candle is almost totally empirical rather than theoretical. The astrophysical basis for its similarity from one galaxy to another is a challenging problem, and is probably less well understood than for any other standard candle currently in use. Because globular clusters are old-halo objects that probably predate the formation of most of the other stellar populations in galaxies (e.g. Harris 1986, 1988b, 1991; Fall and Rees 1988), to first order it is not surprising that they look far more similar from place to place than their parent galaxies do. Methods for allowing clusters to form with average masses that are nearly independent of galaxy size or type have been put forward by Fall and Rees (1985, 1988), Larson (1988, 1990), Rosenblatt et al. (1988), and Ashman and Zepf (1992) under various initial assumptions. Other constraints arising from cluster metallicity distributions and the early chemical evolution of the galaxies are discussed by Lin and Murray (1991) Brown et al. (1991). None of these yet serve as more than general guidelines for understanding why the early cluster formation process should be so nearly invariant in the early universe.
After the initial formation epoch, dynamical effects on the clusters including tidal shocking and dynamical friction, and evaporation of stars driven by internal relaxation and the surrounding tidal field, must also affect the GCLF within a galaxy over many Gyr, and these mechanisms might well behave rather similarly in large galaxies of many different types. Recent models incorporating these effects (e.g. Aguilar et al. 1988; Lee and Ostriker 1987; Chernoff and Shapiro 1987; Allen and Richstone 1988) show that their importance decreases dramatically for distances 2-3 kpc from the galaxy nucleus, and for the more massive, compact clusters like present-day globulars. In addition, recent photometry (Grillmair et al. 1986; Lauer and Kormendy 1986; Harris et al. 1991) extending in close to the centers of the Virgo ellipticals has shown no detectable GCLF differences with radius. The implication is therefore that today's GCLF resembles the original mass formation spectrum of at least the brighter clusters, perhaps only slightly modified by dynamical processes. Many qualitative arguments can be constructed as to why the GCLFs should, or should not, resemble each other in different galaxies, but at the present time these must take a distant second place to the actual data. | <urn:uuid:3af92f83-cf0f-481c-b4c8-c107f2a3f871> | 2.9375 | 520 | Academic Writing | Science & Tech. | 40.514769 | 1,590 |
You are currently browsing the category archive for the ‘Combinatorics’ category.
Sometimes sequences of numbers are defined recursively, so that given the previous terms of the sequence we can find the next term. A classic example of this is the sequence of Fibonacci numbers, where every two consecutive terms determine the next term, and so if we are given the first two terms, we can calculate the whole sequence. However, if we wanted to, say, calculated the 1000th Fibonacci number, we’d have to start out with the first two, add them to compute the 3rd, add the second and third to compute the 4th, and so on, to slowly build up every single Fibonacci number before the 1000th in order to get there. It’d be much nicer if we had a formula for the Fibonacci sequence. That way, if we wanted the 1000th Fibonacci number, all we’d have to do is plug 1000 into the formula to compute it. Of course, the formula is only useful if it takes less time to crunch it out than it does to do it the brute-force way. A method of finding closed formulas for sequences defined by recurrences is the use of generating functions. Generating functions are functions which “encapsulate” all the information about a sequence, except you can define it without knowing the actual terms of the sequence. The power of generating functions comes from the fact that you can do things like add and multiply them together to create generating functions of other sequences, or write them in terms of themselves to find an explicit formula. Once you have an explicit form for a generating function, you can use some algebra to “extract” the information from the function, which usually means you can find a formula for the sequence in question. | <urn:uuid:6cd7ff11-4326-4fa1-9a67-4b6c632dcf70> | 2.9375 | 379 | Personal Blog | Science & Tech. | 38.804486 | 1,591 |
Diamond anvil cells can apply millions of atmospheres of pressure to a solid or liquid, while allowing it to be observed through the diamond “windows.” For the first time, researchers have introduced optical tweezers into one of these cells in order to trap sample particles. The experiment, described in Physical Review Letters, directly measured the viscosity of the water surrounding the particles. Further development of this technique could permit investigations of the mechanical changes in biological cells and other soft materials placed under high pressure.
A diamond anvil cell (DAC) is a sealed volume sandwiched between the flat, millimeter-wide tips of two diamonds. When squeezed, the pressure in the cell can reach levels found in the core of the Earth. Diamonds are not only strong enough to handle these pressures, but they are also transparent to optical and x-ray probes. However, studying certain mechanical properties requires the controlled application of localized forces, which has been difficult to realize in a DAC.
For their force “handle,” Richard Bowman of the University of Glasgow, in the UK, and his colleagues chose optical tweezers, which are highly focused lasers that trap particles. To overcome the spatial constraints of a DAC, the team used part of their laser to create a second beam that reflected back on the cell. The combined beams trapped micron-sized silica beads suspended in a water sample. Because the optical forces were known, the random vibrations of trapped beads provided a direct measure of the water viscosity. The team recorded a threefold increase in viscosity for a pressure rise of atmospheres—a result that agrees well with previous measurements and builds confidence in the new technique. – Michael Schirber | <urn:uuid:8238d2d8-6334-43a0-a3d0-3e6f29acb91c> | 4.15625 | 349 | Academic Writing | Science & Tech. | 31.953036 | 1,592 |
Quantum system can emit only photons with energy equal (within the uncertainty) to the difference between two energy states.
Even if the atom is in a superposition of energy states
\left|\Psi\right> = C_0 \left|0\right> + C_1 \left|1\right> + C_2 \left|2\right> + \ldots \qquad (1)
with average energy somewhere between the levels, it can emit only certain set of photons: $E_1 - E_0$, $E_2 - E_0$, $E_2 - E_1$ etc.
Emission of a photon is an act of measurement since the energy of the emitted particle contains information about the atom. If the energy of the photon is $E_2 - E_1$ then the energy of the electron in the atom is $E_1$ - the energy of the final state of the transition. The next photon emitted by this atom will have energy equal to $E_1 - E_0$ for sure.
If one observe photons emitted by an ensemble of atoms in state (1) he will see $E_1 - E_0$ photons with probability $\left|C_1\right|^2$, any of $E_2 - E_0$ and $E_2 - E_1$ with probability $\left|C_2\right|^2$ and so on.
The total energy emitted by the system while it is coming to ground state is equal to average energy of state (1) multiplied by the number of atoms in the ensemble.
Energy conservation is not violated.
The same is true for mixed states for which the probability of certain photon is determined by the density matrix of the system. | <urn:uuid:b33feb16-7bb6-42d7-8b99-6b9496ca2c1b> | 3.125 | 375 | Q&A Forum | Science & Tech. | 46.866902 | 1,593 |
The story of the Minotaur
Mazes are very ancient and appear many times in history. According to ancient legend, Daedalus constructed the so-called "Cretan Labyrinth" in Knossos to house the legendary Minotaur. The Minotaur was a fearsome creature, half man and half bull, killed by Theseus in the famous legend in which he escaped using a ball of string provided by Ariadne.
Although we don't have direct evidence in the form of buried walls for the shape of the Cretan Labyrinth, there is a traditional idea about its shape, and a very nice geometrical construction for drawing one. This gives us our first link between mathematics and mazes. You can draw this on paper, or if you are on a beach it looks very good drawn into the sand with the help of a stick. To draw a traditional Cretan Labyrinth, start with the cross and dots on the right.
The picture below shows you how to complete the Cretan Labyrinth. Notice that when you connect the lines you alternate left and right round the square. Now you can complete the picture.
Building the Cretan Labyrinth
Here is a Java Applet showing how to complete the Cretan Labyrinth. Notice how you connect the lines.
Try following the route from the entrance to the centre. This path is surprisingly long and in a full size labyrinth it would have taken some time to get to the centre. However, there is a further surprise in store. Although the path to the centre is very long there is only one way in and one way out! Theseus had to make no difficult decisions at all on his way to kill the Minotaur. Indeed, it was easy with this design to get to the centre and just as easy to get out again. In short there was no need for threads, Ariadne, broken hearts, suicide or any of the other features of the story.
Exactly the same geometric pattern as the Cretan Labyrinth appears in many different cultures and it is quite a common artistic image. Examples of similar mazes have been found scratched into caves in Cornwall (possibly by visiting Phoenician seafarers or by visiting mathematicians in a moment of boredom), on Roman coins and in pictures drawn by Native Americans. The pattern is of interest to mathematicians because it packs a very long path into a small space.
Using other seeds
An alternative seed
Using different seeds (or starting shapes) when drawing the maze leads to different labyrinths. An important feature of the Cretan Labyrinth is that there is only one entrance and only one route toward the centre. We can ask the (mathematical) question of whether all seeds lead to labyrinths with a single entrance and a single route.
Although the Cretan maze is the most common, and probably the oldest, there are lots of others that we can draw. Let's try a different seed such as the one above on the right.
The alternative labyrinth
To draw the complete picture remember to alternate drawing the lines from left to right. The resulting maze in this case is shown on the left.
The Rise of the Maze
The term "Labyrinth" is now generally means a construction that leads you from a starting point to a goal by taking you on a tortuous path, but requires no actual decisions. Your whole path is predetermined by the builder of the Labyrinth. Sacred sites were sometimes constructed as labyrinths by people who believed in the action of fate giving you an ultimate destiny which was entirely beyond control. However, following the Christianisation of the Roman Empire, and the belief in the action of free will, a different form of construction came into being. This was the maze.
In a maze intrepid travellers had to make a series of decisions, and their ultimate fates (in particular whether they reached the centre) relied upon the results of those decisions. Mazes were often built into the floors of churches and you were supposed to pray as you found your way towards the centre. The idea of the puzzle maze was developed during the Middle Ages and later into the celebrated hedge maze, often found in the grounds of stately homes.
The modern use of the hedge maze is now purely recreational. The puzzle is usually to find your way to the centre (and out again) starting from the entrance. Many mazes around the world are open to the public and make a great day out. Examples include the Jubilee Maze Centre at Symonds Yat (which has a fine museum of mazes at its centre), and the maze at Longleat, which has bridges and changes its pathways during the day. If you get lost in this maze then clues are available.
Perhaps the most famous public maze in England is the hedge maze at Hampton Court near London, which was constructed in 1690 AD and is still open to the public. If you get the chance, visit the maze at Hampton Court and indeed Hampton Court itself.
We hope that you are convinced that mazes are both fun and important. Now let's think how, as mathematicians, we could try to solve the puzzle of how to get to the centre of a maze (and out again) quickly and reliably.
Mazes and Networks
Next we will explain the link between mazes and networks. In fact we will transform a maze into a network. The result will look very different but the transformation will help us solve the problem of getting to the centre of a maze and then back out again. By doing this we will employ a very useful technique in mathematics:
Transform a hard problem into a simpler one which we can solve more easily.
How to walk round mazes and networks
The point we are trying to reach in a maze is called the centre of the maze. Some mazes can be solved by putting one hand on the hedge and following it round. Unfortunately this doesn't always work. In this section we will show how to transform a maze into a network and give a method for walking round networks which is guaranteed to find the centre. Once we have transformed a maze into a network, this solves our problem of finding the way to the centre of the maze. In our example maze the centre is the point marked "M". This is where network topology comes in: we will use these ideas to simplify the maze to its essential components.
A sample maze
When you go round a maze it doesn't matter how much you twist and turn, all that does matter are points where you have to make decisions. In a Cretan Labyrinth you don't have to decide anything at all, as once you enter the labyrinth you simply keep walking until you reach the centre. The centre point, the place we are trying to get to, is labeled M. We've put a decision point at the start as you can always decide not to enter the maze.
To simplify the maze we draw a network. In this network we write down all of the decision points as points on a piece of paper. We now draw paths from each of these points to the others, but only if you can go from one to another in the maze without having to make a decision in between. This gives you a network. Here is the maze with all the decision points marked.
A sample maze - with decision points
Here is the completed network for the maze:
A sample maze - as a network
In contrast, here is the network of the Cretan Labyrinth. The only decision points are at the beginning and the end:
The Cretan labyrinth - as a network
Using the network it is MUCH easier to see how to solve the maze. Indeed, we can label the solution just in terms of the decision points that you go through. For the Cretan Labyrinth this gives A -> B. For the more complicated maze one route to the centre is A -> B -> D -> K -> I -> M.
You should be able to find many more routes to the centre using this diagram. How many do you think there are?
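If you want to check your count by machine, a short depth-first search will enumerate the routes for you. The adjacency list below is a stand-in invented for illustration, not the network in the figure, and Python is used only as an example language.

    def count_routes(network, start, goal, visited=frozenset()):
        """Count the routes from start to goal that never revisit a decision point."""
        if start == goal:
            return 1
        visited = visited | {start}
        return sum(count_routes(network, n, goal, visited)
                   for n in network.get(start, []) if n not in visited)

    # A stand-in network, NOT the maze in the figure above.
    example = {
        "A": ["B"],
        "B": ["A", "C", "D"],
        "C": ["B"],                # a dead end
        "D": ["B", "K", "I"],
        "K": ["D", "I"],
        "I": ["D", "K", "M"],
        "M": ["I"],
    }
    print(count_routes(example, "A", "M"))   # 2 routes in this stand-in network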
The trick of reducing the maze to its bare essentials by finding a diagram which contains all of the information in the maze is widely used in mathematics. A good example of this is the map of an underground railway system or metro. Often these maps only show the railway lines, stations and interconnections. Distances between stations on the ground do not always correspond to distances on a map. But do travellers on the train care?
Perhaps not, as they often only need to know how to get between stations. In fact, stripping away the unnecessary information might actually help them navigate successfully.
When we think in terms of networks, the problem of solving a maze becomes the following:
Can you find a route in the network which takes you from the beginning to the centre and then back again?
It is worth saying that there are really two different problems here. One is to find the route when you don't have a map of the maze to hand. This is the case in most recreational mazes. The second is to find the route when you do have a map. This case would arise if you were (for example) trying to find your way around a road network or a telephone exchange (or indeed the Underground).
We will consider the case when we don't have a map available.
In a network the decision points are called nodes and the lines connecting nodes are called edges or paths. Given a map of a network, the spaces left between the edges and the space outside are called the faces. If an odd number of paths meet at a node then it is called an odd node and if an even number of paths meet then it is called an even node. A dead end (such as the decision point at C) is an odd node as only one path leads into it.
The parts of a network
Networks were first studied by the great Swiss mathematician Leonhard Euler. Euler was one of the most productive mathematicians who ever lived and he created a lot of modern mathematics. In 1736 Euler became interested in networks through trying to solve the problem of the Bridges of Königsberg. Königsberg, now called Kaliningrad, is a town in Russia on the Baltic Sea which has the river Pregel running through it with the island of Kneiphof in the middle of the river. The mainland and the island were connected by bridges in the arrangement shown below:
The bridges of Königsberg
The citizens of Königsberg had noticed that there seemed to be no way of going for a walk in which each bridge was crossed once and once only, but wondered whether they were being stupid and that there might be a route if only they looked hard enough. Euler took up this challenge and started by reducing the problem to a network. In this network, the nodes were the four land masses A, B, C, D and the edges were the bridges. Here is the resulting network:
The network of the bridges of Königsberg
The problem of the bridges of Königsberg can now be stated as follows:
Can you start from any node and construct a route around the network which will bring you back to the node and go down each path once and only once?
It is possible to ask this question for any network, not just for the one above, and Euler came up with a brilliant solution to the general problem. Here it is:
- If you have any network which has only even nodes then you can start at any node and find a route which returns you to that node which goes down each path once and once only.
- If the network has exactly two odd nodes then you can construct a route which starts at one odd node and ends up at the other and goes through every path once and once only.
- If the network has more than two odd nodes then there is no route that goes through every path once and once only.
It is worth pointing out that no network can have only one (or indeed any odd number) of odd nodes. The network for the bridges of Königsberg has four odd nodes so no route is possible which crosses over every bridge once and once only. To make this possible the simplest solution is to demolish the bridge between A and C; then at least you can walk from B to D going over each bridge once and only once (although you can't do this if you want to start and finish in the same place). This is a neat solution mathematically, but not a great idea if you happen to live in Königsberg.
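Euler's three results boil down to a mechanical test: count the odd nodes. Here is a minimal sketch of that test in Python (used only for illustration), applied to the bridges of Königsberg. The lettering of the land masses is an assumption chosen to be consistent with the text above (A is taken to be the island); the figure may letter them differently.

    from collections import Counter

    def odd_nodes(edges):
        """Return the nodes at which an odd number of paths meet."""
        degree = Counter()
        for a, b in edges:
            degree[a] += 1
            degree[b] += 1
        return [node for node, d in degree.items() if d % 2 == 1]

    def euler_verdict(edges):
        """Apply Euler's three results to a connected network."""
        odd = odd_nodes(edges)
        if len(odd) == 0:
            return "A closed route using every path exactly once exists from any node."
        if len(odd) == 2:
            return f"A route using every path exactly once runs from {odd[0]} to {odd[1]}."
        return "No route can use every path exactly once."

    # The seven bridges, with A taken to be the island of Kneiphof, B and C the
    # two banks joined to it by two bridges each, and D the land to the east.
    bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
               ("A", "D"), ("B", "D"), ("C", "D")]

    print(odd_nodes(bridges))                        # ['A', 'B', 'C', 'D']: four odd nodes
    print(euler_verdict(bridges))                    # no route is possible
    print(euler_verdict(bridges[:3] + bridges[4:]))  # demolish one A-C bridge: B to D works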
We seem to have come a long way from solving a maze, but in fact we have nearly finished. The proof of Euler's theorem actually gives us a way of solving the maze. What we do is use the methods described in the proof to construct a route into the centre of the maze and back out again which goes down each path at most twice. To start we first take the network for the maze. Now, these networks have a collection of odd and even nodes and this makes it awkward to use any of the results of the above theorem. Our first step is to convert the maze into one with only even nodes. This we do by the simple process of drawing each path between two nodes twice. What this means on the ground is that in following our way around the maze we are allowed to take each path twice but no more. Think about this - this is very necessary. If we could only go down each path once then there would be no way out of a dead end! For our example maze this gives the following network:
The network of our maze
Doubling up the number of paths in the network corresponding to our maze has converted it into a network with only even nodes. Euler's first result states that in such a network we can construct a path from any node which will return us to that node and which goes down each path once and once only. Now, suppose that we start at the entrance to the maze at point A and find this route. As it goes down every path, sooner or later it will go down a path which leads to the centre of the maze. This is a splendid start. Now continuing along our route we will eventually get back to the start of the maze again. It looks as though we have found a foolproof way of cracking the maze. What is the catch? Well there isn't one really, except that the route we construct may not be optimal (ie. it may be much longer than the shortest route into the centre and back). This makes the method inefficient for solving problems concerned with traffic flow (for which there are much better methods around) but it doesn't matter too much for the networks corresponding to mazes.
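The recipe in the last paragraph can also be sketched in code: double every path, then trace a closed route that uses each copy exactly once. The construction below is Hierholzer's method, a standard way of producing the route that Euler's first result promises. The little network is a stand-in assumed for illustration, not the maze network in the figure, and Python is used only as an example language.

    def euler_circuit(edges, start):
        """Return a closed route through a connected network that uses every
        path exactly once (every node must be even for this to succeed)."""
        remaining = {}
        for a, b in edges:
            remaining.setdefault(a, []).append(b)
            remaining.setdefault(b, []).append(a)
        stack, circuit = [start], []
        while stack:
            node = stack[-1]
            if remaining[node]:                  # an unused path leaves this node
                nxt = remaining[node].pop()
                remaining[nxt].remove(node)      # use up the matching copy
                stack.append(nxt)
            else:                                # nothing left here: record and back up
                circuit.append(stack.pop())
        return circuit[::-1]

    # A stand-in network (not the maze in the figure), with every path doubled
    # so that all of its nodes become even.
    paths = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "M")]
    route = euler_circuit(paths + paths, "A")
    print(" -> ".join(route))    # starts and ends at A and passes through M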
As we have not given the proof of Euler's theorem, we can't immediately jump to the solution. Fortunately this has already been done for us and we will use the method of M. Trémaux, which is described in the book "Mathematical Recreations and Essays" by Rouse Ball. What is nice about the method we are going to describe is that you don't need to use a map of the maze, but you do need to use a packet of peanuts and a bag of crisps.
How to solve a maze using a packet of peanuts and a bag of crisps
You enter the maze, which we will assume has high hedges which you can't see over, and in which all paths and nodes (where you make decisions) look very much the same. The peanuts and crisps are used as markers in the maze. Trail the peanuts (here and there) as you go and leave a peanut at all decision points. This will tell you whether you have been to a decision point before or whether you have gone down a path before (without being too ecologically unfriendly). If you go down a path a second time, then trail a path of crisps. If you have a rule that you never go down a path with crisps, this will stop you going down that path again. If you reach a decision point which does not have a peanut there we call this a new node. Leave a peanut there; it now becomes an old node. Similarly, a path without peanuts, or a path which you are currently on and dropping peanuts along for the first time, is a new path. A path with peanuts already on it and on which you are now dropping crisps is called an old path. Here, then, is how to solve any maze; a short code sketch of the same rules follows the list.
- Start at the entrance and take any path.
- If at any point you come to a new node then take any new path.
- If you come to an old node, or the end of a blind alley, and you are on a new path then turn back along this path.
- If you come to an old node and you are on an old path then take a new path (if such exists) or an old path otherwise.
- Never go down a path more than twice.
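Here is one way the rules above might look in code. It is only a sketch: Python is used purely for illustration, the network is supplied as an adjacency list, and the passages dictionary plays the role of the peanut and crisp trails (a count of 1 on a path means peanuts only, a count of 2 means crisps as well).

    def tremaux_walk(network, entrance):
        """Walk a connected network by the rules above.  Returns the nodes
        visited in order; the walk ends back at the entrance and uses no
        path more than twice.  (Assumes at most one direct path between
        any two decision points.)"""
        passages = {}                    # times each path has been walked
        seen = {entrance}                # decision points that have a peanut
        route = [entrance]

        def walked(a, b):
            return passages.get(frozenset((a, b)), 0)

        def go(a, b):
            passages[frozenset((a, b))] = walked(a, b) + 1
            route.append(b)
            return b

        # Rule 1: take any path out of the entrance.
        arrived_from, here = entrance, go(entrance, network[entrance][0])

        while not (here == entrance and
                   all(walked(here, n) == 2 for n in network[here])):
            fresh = [n for n in network[here] if walked(here, n) == 0]
            once = [n for n in network[here] if walked(here, n) == 1]
            if here not in seen:                    # Rule 2: a new node
                seen.add(here)
                nxt = fresh[0] if fresh else arrived_from   # blind alley: turn back
            elif walked(here, arrived_from) == 1:   # Rule 3: old node, new path
                nxt = arrived_from                  # turn back the way we came
            else:                                   # Rule 4: old node, old path
                nxt = fresh[0] if fresh else once[0]
            arrived_from, here = here, go(here, nxt)
        return route

    # A hypothetical network, not the one in the figure: entrance "A",
    # centre "M", and a blind alley at "C".
    maze = {"A": ["B"], "B": ["A", "M", "C"], "M": ["B"], "C": ["B"]}
    print(tremaux_walk(maze, "A"))   # ['A', 'B', 'M', 'B', 'C', 'B', 'A']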
If you follow this procedure then you will eventually reach the centre and then get back out again. This of course only happens if no one eats the peanuts, and here we have to hope for the best! Try it out on the examples in the exercises, for which a pencil mark will substitute for the peanuts. As an example, on the network of our maze the method gives the following as one possible route:
Interestingly enough, if you read the account of Harris' adventures in the Hampton Court maze from the book Three Men in a Boat, you will find that he also used a marker. Instead of a peanut, they used a baby's bun which showed them when they had come back to the same point. Unfortunately as there was only one bun available it didn't help much with solving the maze itself and left the baby hungry. Try this method out and see if you can discover why and how it works.
We have seen how to transform a maze into a network. As we mentioned before, you can think of an underground rail system as a network of connected stations. The internet is a network of computers. In fact, there are many other examples of real systems that we can think of as networks.
The aMazing thing about mathematics is its power to Connect them all!
More information on mazes, networks and many other areas of mathematics can be found in our forthcoming book, Mathematics Galore!, Masterclasses, discovery workshops, and team projects in mathematics and its applications by C J Budd and C J Sangwin, to be published soon by OUP.
This article is adapted from one originally published on the website of the Newton Institute, as part of the Posters in the London Underground series.
- Radio Controlled? by Robert Leese (Plus, issue 8).
- Robert Leese explains how the mathematics of colouring graphs (networks) can help avoid interference on your mobile phone.
- Call routing in telephone networks by Richard Gibbens and Stephen Turner (Plus, issue 2).
- Find out how modern telephone networks use mathematics to make it possible for a person to dial a friend in another country just as easily as if they were in the same street, or to read web pages that are on a computer in another continent.
- Theseus and the minotaur
- Try your hand at these fantastic web-based maze puzzles.
About the authors
Chris Sangwin is a Research Fellow at the School of Mathematics and Statistics, University of Birmingham.
Chris Budd is Professor of Applied Mathematics at the University of Bath, and Professor of Mathematics for the Royal Institution. He is particularly interested in applying mathematics to the real world and promoting the public understanding of mathematics. | <urn:uuid:05c0c1fe-24ae-4ddf-8924-8b37af8dcb6a> | 3.5 | 4,013 | Knowledge Article | Science & Tech. | 58.711724 | 1,594 |
Doomsday is still a long way away, but this is what might happen.
Have you ever wondered where we and our planet originally came from or what might happen to our galaxy billions of years from now? These aren’t just philosophical questions — scientists have been looking for clues to our origins and our fate for the last few decades. However, five Chinese scientists say that we need to understand the nature of dark energy to truly foresee the destiny of the Universe.
Scientists believe that dark energy makes up 70 per cent of the Universe’s current content, thanks to calculations about how it affected expansion after the Big Bang. But its properties, which have not been completely defined, may decide the fate of the Universe. One scenario is that everything will end in a big rip, when dark energy density grows to infinity in finite time, and its gravitational repulsion will tear apart all the objects in the Universe.
Scientists from the University of Science and Technology of China, Northeastern University, and Peking University have examined the possibility of this cosmic doomsday in a study published in Sci China-Phys Mech Astron. “We want to infer from the current data what the worst fate would be for the Universe,” the authors said in the study.
To explore this scenario, they needed to find a parametrisation that would cover the overall expansion history of the Universe. They eventually settled on a divergence-free parametrisation for dark energy, called the Ma-Zhang parametrisation, to predict the evolution of the Universe and how far away we are from doomsday.
Using the current cosmological observations, the authors found that in the worst-case scenario, our Universe can still exist at least 16.7 billion years before it ends in a big rip. However, this is the worst-case scenario — the best-fit result suggested that the Universe would last another 103.5 billion years. But the researchers wanted to see what would happen in the worst-case scenario. “The question of ‘where are we going’ is an eternal theme for human beings, so we should have courage to explore this theme.”
They then focused on this scenario and considered the fate of stars and galaxies. In the event that dark energy increases until it can overcome the forces holding objects together, the Milky Way will be torn apart 32.9 million years before the big rip. The Earth will be ripped away from the Sun two months before doomsday, and the Moon away from the Earth five days before. The Sun will be destroyed 28 minutes before the end of time, with the Earth exploding only 16 minutes from the end.
“Even microscopic objects cannot escape from the rip,” the authors state. “For example, the hydrogen atom will be torn apart 310-17 seconds before the ultimate singularity.” But even this violent, worst-case scenario is still billions of years in our future. | <urn:uuid:83c61231-5ead-4d97-81e9-099628de7813> | 3.5625 | 596 | News Article | Science & Tech. | 44.674878 | 1,595 |
When Precipitation Patterns Change
Part A: What is Drought?
In Lab 3, you learned to interpret climographs to understand a location's normal climate. Another way that climographs can be used is to plot current conditions over a background of the average conditions; this provides a graphic way to see how the current year compares to the long-term average. These dynamic graphs indicate if current conditions are abnormally hot, cold, wet, or dry.
- Click the thumbnail image at right to see a larger view of a climograph for San Antonio, Texas. The graph shows conditions for January through mid-July of 2008.
- Examine the graph to interpret the conditions in San Antonio. The background colors (pale red for temperatures and light green for accumulated precipitation) show the average conditions compiled from many years of data. The brighter red and green lines show daily temperatures and accumulated rainfall through July of 2008.
- What does the graph indicate about San Antonio's temperature? The temperature was above average during January but has been in the normal range since then.
- What does the cumulative rainfall graph indicate? Rainfall has been below normal all year. The cumulative total for 2008 is roughly one third of the normal total for the end of July.
- Explore current dynamic weather and climate conditions for stations in the United States via data located at NOAA's Southern Regional Climate Center (SRCC). Once on this page, choose 'Select A Station' from the link under the auto-generated graphic.
- To generate temperature and accumulated precipitation maps for any region of the country, start at the Select a Station link above, and type in the name of your station. On the map that appears, click the map icon and then click 'more.' You can switch between the tabs to see the station information, annual summaries, and climate normals. Use the pull down menu to change to another year of interest.
- The National Weather Service (NWS) provides local climate records and summaries. Use the following instructions to locate a climograph for your area of interest. Note: instructions will vary by climate office; not all climate offices offer these types of graphs. A few that do include: Cleveland, Ohio, and Burlington, VT.
- Go to the NWS home page Weather.gov, enter your city and state and click Go.
- On the page that opens, click the forecast office title on the upper-left of the page. This will take you to the local climate office page.
- Scroll down the menu list on the left-hand side of the page and click the link for 'Local' under the 'Climate' header.
- On the page that opens choose the Local Data/Records tab. Look at the list of choices and locate the Climate graphs.
Stop and Think 1. List 5 cities or locations for which you examined dynamic climographs or accumulated precipitation maps. Include the date range that you observed. Tell whether each location is wetter than normal, about normal, or drier than normal. Explain your reasoning.
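If you want to turn the judgement of wetter or drier than normal into a number, accumulated precipitation is simply a running total of the daily or monthly amounts, compared against the long-term normal for the same point in the year. The sketch below uses invented figures rather than real station data, and assumes Python purely as an illustration.

    # Compare this year's accumulated precipitation with the long-term normal.
    # The monthly figures below are invented for illustration only; real values
    # would come from the SRCC or NWS pages described above.

    normal_by_month = [1.7, 1.8, 2.0, 2.6, 4.7, 4.3, 2.0]   # inches, Jan-Jul normals
    actual_by_month = [0.4, 0.9, 0.3, 1.1, 1.6, 0.8, 0.5]   # inches, observed so far

    normal_total = sum(normal_by_month)
    actual_total = sum(actual_by_month)
    percent_of_normal = 100 * actual_total / normal_total

    print(f"Normal Jan-Jul accumulation: {normal_total:.1f} in")
    print(f"Observed so far:             {actual_total:.1f} in ({percent_of_normal:.0f}% of normal)")

    # The cut-offs below are arbitrary, chosen only for this example.
    if percent_of_normal < 75:
        print("Much drier than normal for this point in the year.")
    elif percent_of_normal > 125:
        print("Much wetter than normal.")
    else:
        print("Close to normal.")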
The word "drought" means different things to different people. What visions does the term bring to your mind?
Parched land, dried crops, dust storms, and starving livestock are some of the scenes that people associate with the term drought. Unlike most hazardous weather conditions, drought is not always obvious. Drought can be years in the making, as moisture in the soil evaporates and surface water sources disappear due to the lack of rain.
- Read the information at What is drought? to come up with your own meaningful definition of drought. Discuss your definition with a lab partner to see if it can be improved.
Stop and Think 2. Write a definition for drought, in your own words.
- Learn all about drought at the UNL Drought for Kids page.
- Find out how drought is studied by reading the links on the How Do People Study Drought? page.
- Learn about the physical processes that cause or contribute to drought in Earth Observatory's North American Drought Article. Read the information about each contributing factor and view the animations about soil moisture (on the second page of the article). The animations will help you to visualize the feedback loop that exists among rainfall, soil moisture, and temperature.
- What are some of the indicators that drought is present? Indicators of drought include soil moisture that is below normal, lower-than-normal rainfall or snowpack, and decreased water levels in streams and reservoirs.
- The 3 main contributors to drought are high temperatures, low soil moisture content, and atmospheric circulation patterns that keep rain away from an area. Tell how each of these factors promotes drought. Higher surface temperatures result in an increase in evaporation of water. This leads to less moisture being available on the surface. If soils are wet, then much of the heat from incoming sunlight is used to evaporate the water they contain, so temperatures are kept cooler. If soil is dry, then there is little or no water available to evaporate and the land surface gets hotter and drier. Air circulation patterns are strongly affected by sea surface temperatures: air rises over areas of warm ocean water, pulling dry air across land.
Is it a drought today?
- Examine the diagram to see the signs of meteorological, agricultural, and hydrological droughts.
- Interpret the chart to answer the following questions.
- What are the causes of soil water deficiency?
- If an area is experiencing reduced streamflow, which stage of drought is occurring? | <urn:uuid:a0c307c8-239d-46b9-a835-41bb2a32f766> | 3.765625 | 1,148 | Tutorial | Science & Tech. | 49.347593 | 1,596 |
The H2 Double-Slit Experiment: Where Quantum and Classical Physics Meet
For the first time, an international research team carried out a double-slit experiment in H2, the smallest and simplest molecule. Thomas Young's original experiment in 1803 passed light through two slits cut in a solid thin plate. In the groundbreaking experiment performed at ALS Beamlines 4.0 and 11.0.1, the researchers used electrons instead of light and the nuclei of the hydrogen molecule as the slits. The experiment revealed that only one "observing" electron suffices to induce the emergence of classical properties such as loss of coherence.
Present-day single photoionization experiments demonstrate double-slit self-interference for a single particle fully isolated from the classical environment. But if quantum particles were put in contact with the classical world in a controlled manner, at what scale would quantum interference begin to diminish and particles start to behave classically? The team decided to study the double photoionization (complete fragmentation) of H2, creating two repelling protons acting as a double slit, a fast interfering electron, and a second electron behaving as an active or inactive observer.
Experiments were performed at two different photon energies: Eγ = 240 and 160 eV, leaving about 190 and 110 eV to be shared between the two electrons, respectively. At these high photon energies, double photoionization of H2 led in most cases to one fast and one slow electron. The fast electron's energies were 185 to 190 eV; the slow electron's were 5 eV or less (corresponding to an inactive observer). The interference pattern of the fast electron was conditioned by the presence and velocity of the other: the greater the difference in their speeds, the less their interaction and the more visible the interference patterns. Both electrons were isolated from their surroundings, and quantum coherence prevailed, revealed by the fast electron's wavelike interference pattern at the two protons.
However, at high photon-energy levels, the fast electron absorbed almost all the energy of the incident single photon, leaving the system too rapidly for interaction with the slow electron. Yet the slow electron was also ejected from the molecule through the mysterious process of electron–electron correlation. This "secret entanglement" allows two electrons to remain connected even though far apart. The researchers now had what they needed to build their classical/quantum interface.
They chose ionization events where the slow electron had a bit more energy (5–25 eV), allowing it to act as the classical environment (an active observer). The quantum system of the fast electron now interacted with the slow electron and began to decohere, its interference pattern disappearing. However, the overall coherence was still hidden in the two electrons' entanglement.
The dielectron's wavelength was short enough to still interfere (the sum energy of the two electrons was high enough), and there was no environment to disturb the interference as the two electrons were now combined into one quasiparticle. Thus, interference between the entangled electrons could be reconstructed by graphing their correlated momenta from the angles at which they were ejected. Two waveforms appeared in the graph, either of which could be projected to show an interference pattern. Because the two waveforms were out of phase with each other, when viewed simultaneously, the interference vanished.
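A toy calculation can make the last point concrete. In a simple two-centre ("double-slit") picture, one of the two waveforms gives fringes proportional to cos^2 of a phase set by the proton separation, and the other gives the complementary sin^2 pattern; either one on its own shows interference, but their sum is featureless. The snippet below, in Python with invented units, only illustrates that statement and is not a reconstruction of the experiment's actual analysis.

    import math

    d = 1.4     # "slit" separation, arbitrary units (roughly an H2 bond length)
    k = 10.0    # electron wavenumber, same arbitrary units

    for angle_deg in range(0, 91, 15):
        phase = k * d * math.sin(math.radians(angle_deg))
        fringe_a = math.cos(phase / 2) ** 2   # one of the two waveforms
        fringe_b = math.sin(phase / 2) ** 2   # its out-of-phase partner
        print(f"{angle_deg:3d} deg  {fringe_a:5.2f}  {fringe_b:5.2f}  "
              f"sum = {fringe_a + fringe_b:4.2f}")   # the sum is always 1.00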
If the two-electron system is split into its subsystems and one is thought of as the environment of the other, it becomes evident that classical properties such as loss of coherence can emerge even when only four particles are involved. Yet because the two electrons' subsystems are entangled in a tractable way, their quantum coherence can be reconstructed. In solid-state–based quantum computing devices, such electron–electron interaction represents a key challenge as decoherence and loss of information occur on the tiny scale of a single hydrogen molecule. The good news, however, is that, in theory, the information is not completely lost.
Research conducted by D. Akoury, Th. Weber (University Frankfurt, Germany, and Berkeley Lab, U.S.); K. Kreidi, t. Jahnke, A. Staudte, M. Schöffler, N. Neumann, J. Titze, L. Ph. H. Schmidt, A. Czasch, O. Jagutzki, R. A. Costa Fraga, R. E. Grisenti, H. Schmidt-Böcking, R. Dörner (University Frankfurt, Germany); T. Osipov, H. Adaniya, M. H. Prior, A. Belkacem, (Berkeley Lab, CA, U.S.); R. Díez Muiño (Centro de Física de Materiales and Donostia International Physics Center, San Sebastian, Spain); N. A. Cherepkov, S. K. Semenov (State University of Aerospace Instrumentation, St. Petersburg, Russia); P. Ranitovic, C. L. Cocke (Kansas State University, U.S.); J. C. Thompson, A. L. Landers (Auburn University, U.S.)
Research funding: Deutsche Forschungsgemeinschaft and by the U. S. Department of Energy, Office of Basic Energy Sciences (BES). Operation of the ALS is supported by BES. | <urn:uuid:d5a97766-4890-4e47-b8ad-668a033e2f72> | 3.46875 | 1,125 | Academic Writing | Science & Tech. | 43.942948 | 1,597 |
Sometimes the average is anything but average
Facts About Factorials
It all begins with the factorial function, a familiar item of furniture in several areas of mathematics, including combinatorics and probability theory. The factorial of a positive whole number n is the product of all the integers from 1 through n inclusive. For example, the factorial of 6 is 1×2×3×4×5×6=720.
The standard notation for the factorial of n is "n!". This use of the exclamation point was introduced in 1808 by Christian Kramp, a mathematician from Strasbourg. Not everyone is enthusiastic about it. Augustus De Morgan, an eminent British mathematician and logician, complained in 1842 that the exclamation points give "the appearance of expressing surprise and admiration that 2, 3, 4, &c. should be found in mathematical results."
One common application of the factorial function is in counting permutations, or rearrangements of things. If six people are sitting down to dinner, the number of ways they can arrange themselves at the table is 6!. It's easy to see why: The first person can choose any of the six chairs, the next person has five places available, and so on until the sixth diner is forced to take whatever seat remains.
The factorial function is notorious for its rapid rate of growth: 10! is already in the millions, and 100! is a number with 158 decimal digits. As n increases, n! grows faster than any polynomial function of n, such as n^2 or n^3, or any simple exponential function, such as 2^n or e^n. Indeed you can choose any constant k, and make it as large as you please, and there will still be some value of n beyond which n! exceeds both n^k and k^n. (On the other hand, n! grows slower than n^n.)
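To make that concrete, you can ask a computer for the first n at which n! overtakes both n^k and k^n for a particular k, say k = 100. A small sketch, assuming Python purely as an illustration:

    import math

    k = 100
    n = 1
    while math.factorial(n) <= max(n**k, k**n):
        n += 1
    print(n)   # the first n at which n! exceeds both n^100 and 100^n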
The steep increase in the magnitude of n! becomes an awkward annoyance when you want to explore factorials computationally. A programming language that packs integers into 32 binary digits cannot reach beyond 12!, and even 64-bit arithmetic runs out of room at 20!. To go further requires a language or a program library capable of handling arbitrarily large integers.
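In a language with built-in arbitrary-precision integers the overflow problem disappears. The snippet below, which assumes Python as one such language, checks the figures quoted above: 12! is the last factorial that fits in 32 bits, 20! the last that fits in 64 bits, and 100! runs to 158 decimal digits.

    import math

    print(math.factorial(12) < 2**32, math.factorial(13) < 2**32)   # True False
    print(math.factorial(20) < 2**64, math.factorial(21) < 2**64)   # True False
    print(len(str(math.factorial(100))))                            # 158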
In spite of this inconvenience, the factorial function is an old favorite in computer science as well as in mathematics. Often it is the first example mentioned when introducing the concept of recursion, as in this procedure definition:
define f!(n)
  if n = 1
    then return 1
    else return n*f!(n-1)
One way to understand this definition is to put yourself in the place of the procedure: You are the factorial oracle, and when someone gives you an n, you must respond with n!. Your task is easy if n happens to be 1, since calculating 1! doesn't take much effort. If n is greater than 1, you may not know the answer directly, but you do know how to find it: just get the factorial of n–1 and then multiply the result by n. Where do you find the factorial of n–1? Simple: Ask yourself—you're the oracle!
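If you would like to run the recursive definition, here is a direct transcription into Python (used here purely as an illustration language), with the oracle's reasoning recorded in the comments.

    def factorial(n):
        # The oracle's reasoning: 1! needs no work at all ...
        if n == 1:
            return 1
        # ... otherwise ask yourself for (n-1)! and multiply the answer by n.
        return n * factorial(n - 1)

    print(factorial(6))   # 720, as in the dinner-party example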
This self-referential style of thinking is something of an acquired taste. For those who prefer looping to recursions, here is another definition of the factorial:
define f!(n)
  product := 1
  for x in n downto 1
    product := product * x
  return product
In this case it's made explicit that we are counting down from n to 1, multiplying as we go. Of course we could just as easily count up from 1 to n; the commutative law guarantees that the result will be the same. Indeed, we could arrange the n numbers in any of n! permutations. All the arrangements are mathematically equivalent, although some ways of organizing the computation are more efficient than others.
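A quick computational check of that claim, again using Python only for illustration: whatever order the factors are taken in, the product comes out the same.

    import math, random

    numbers = list(range(1, 7))
    random.shuffle(numbers)          # one of the 6! possible orderings
    product = 1
    for x in numbers:
        product *= x
    print(numbers, product == math.factorial(6))   # the comparison is always True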
» Post Comment | <urn:uuid:0f69be89-9ec2-48a2-a75c-2c25280ff747> | 3.453125 | 787 | Personal Blog | Science & Tech. | 62.551725 | 1,598 |
Did you know that some frogs talk with ultra-sound? In ultrasound, the pitch or frequency of the sound is too high for the human ear to hear. Fish and homing pigeons can see electromagnetic fields, ants can see polarised light, insects and rodents can smell pheromones, so why can't some frogs, somewhere on the planet, hear ultrasound?
According to ABC Science News recently they can, if they are the concave-eared torrent frog that lives in the Huangshan Hot Springs, west of Shanghai, in China. There, a continuous torrent of water and sound fills the mountainous environment of the concave-eared torrent frog. Let’s hope that the frogs are actually alive in the Hot Springs. After all, we’ve all heard the fable about frogs and boiling water.
Anyhoo it was recently discovered that these frogs can generate and hear sounds that are way up in the ultrasonic. They can generate and hear frequencies over 128 kHz. That's more than six-times better than a human can hear.
So next time you want to get your message across in the “mountainous environment of the concave-eared torrent frog” just pick one up and use him as a................................froghorn.
And, apparently, there is a Frog magazine in Dutch. On a recent edition, readers were commenting on the quality of the photographs therein. Presumably frogs-porn?
It’s the way I am them telling. | <urn:uuid:ec1cf832-4b4c-4628-85cc-09d3ea032a4f> | 2.953125 | 312 | Personal Blog | Science & Tech. | 70.702906 | 1,599 |