text stringlengths 174 655k | id stringlengths 47 47 | score float64 2.52 5.25 | tokens int64 39 148k | format stringclasses 24 values | topic stringclasses 2 values | fr_ease float64 -483.68 157 | __index__ int64 0 1.48M |
|---|---|---|---|---|---|---|---|
Radiation & fractionation
I have been reading up on radiation, but have some trouble fully understanding fractionation. Is there any simple approximation of the relation of fractionation and recovery, or some sort of adjusted effective dose?
A single continuous exposure of 1 Gy over 1 hour, compared to 20 acute doses of 50 mGy (which is still an absorbed dose of 1 Gy over 1 hour), what exactly is the difference/relation? And can it be modelled/approximated? Would the fractionation effectively be less (either expressed as an effective dose [Sv] or an arbitrary constant) than the continuous exposure?
So the continuous exposure would be (assuming photons) 1 Sv, but the fractionation actually somewhat less than 1 Sv (due to the tissue/cells being able to partially recover). Or is it not possible to express it such a manner?
I guess it's not a very well understood field? I've had trouble finding more practical or straight-forward material. Everything is pretty vague, "fractionation is less damaging as it spreads out the radiation," but no hard figures or formulas. No real explanation. | <urn:uuid:fc9ecbc3-dc48-43b3-bb27-c79c4c9ad985> | 2.734375 | 231 | Comment Section | Science & Tech. | 42.223781 | 1,400 |
Dec. 19, 2012 An international team of astronomers led by the University of Hertfordshire has discovered that Tau Ceti, one of the closest and most Sun-like stars, may host five planets -- with one in the star's habitable zone.
At a distance of twelve light years and visible with the naked eye in the evening sky, Tau Ceti is the closest single star that has the same spectral classification as our Sun. Its five planets are estimated to have masses between two and six times the mass of Earth -- making it the lowest-mass planetary system yet detected. One of the planets lies in the habitable zone of the star and has a mass around five times that of Earth, making it the smallest planet found to be orbiting in the habitable zone of any Sun-like star.
The international team of astronomers, from the UK, Chile, the USA, and Australia, combined more than six-thousand observations from three different instruments and intensively modelled the data. Using new techniques, the team has found a method to detect signals half the size previously thought possible. This greatly improves the sensitivity of searches for small planets and suggests that Tau Ceti is not a lone star but has a planetary system.
Mikko Tuomi, from the University of Hertfordshire and the first author of the paper, said: "We pioneered new data modelling techniques by adding artificial signals to the data and testing our recovery of the signals with a variety of different approaches. This significantly improved our noise modelling techniques and increased our ability to find low mass planets."
"We chose Tau Ceti for this noise modelling study because we had thought it contained no signals. And as it is so bright and similar to our Sun it is an ideal benchmark system to test out our methods for the detection of small planets," commented Hugh Jones from the University of Hertfordshire.
James Jenkins, Universidad de Chile and Visiting Fellow at the University of Hertfordshire, explained: "Tau Ceti is one of our nearest cosmic neighbours and so bright that we may be able to study the atmospheres of these planets in the not too distant future. Planetary systems found around nearby stars close to our Sun indicate that these systems are common in our Milky Way galaxy."
Over 800 planets have been discovered orbiting other worlds, but planets in orbit around the nearest Sun-like stars are particularly valuable. Steve Vogt from University of California Santa Cruz said: "This discovery is in keeping with our emerging view that virtually every star has planets, and that the galaxy must have many such potentially habitable Earth-sized planets. They are everywhere, even right next door! We are now beginning to understand that Nature seems to overwhelmingly prefer systems that have a multiple planets with orbits of less than one hundred days. This is quite unlike our own solar system where there is nothing with an orbit inside that of Mercury. So our solar system is, in some sense, a bit of a freak and not the most typical kind of system that Nature cooks up."
"As we stare the night sky, it is worth contemplating that there may well be more planets out there than there are stars … some fraction of which may well be habitable," remarked Chris Tinney from the University of New South Wales.
Other social bookmarking and sharing tools:
Note: Materials may be edited for content and length. For further information, please contact the source cited above.
- M. Tuomi, H. R. A. Jones, J. S. Jenkins, C. G. Tinney, R. P. Butler, S. S. Vogt, J. R. Barnes, R. A. Wittenmyer, S. O'Toole, J. Horner, J. Bailey, B. D. Carter, D. J. Wright, G. S. Salter, D. Pinfield. Signals embedded in the radial velocity noise. Periodic variations in the tau Ceti velocities. Astronomy & Astrophysics, 2012; DOI: 10.1051/0004-6361/201220509
Note: If no author is given, the source is cited instead. | <urn:uuid:df06c3f5-ca01-43c1-8ca4-d3dbdc1cbea0> | 3.296875 | 851 | News (Org.) | Science & Tech. | 55.479163 | 1,401 |
Feb. 19, 2013 Species facing widespread and rapid environmental changes can sometimes evolve quickly enough to dodge the extinction bullet. Populations of disease-causing bacteria evolve, for example, as doctors flood their "environment," the human body, with antibiotics. Insects, animals and plants can make evolutionary adaptations in response to pesticides, heavy metals and overfishing.
Previous studies have shown that the more gradual the change, the better the chances for "evolutionary rescue" -- the process of mutations occurring fast enough to allow a population to avoid extinction in changing environments. One obvious reason is that more individuals remain alive when change is gradual or moderate, meaning there are more opportunities for a winning mutation to emerge.
Now University of Washington biologists using populations of microorganisms have shed light for the first time on a second reason. They found that the mutation that wins the race in the harshest environment is often dependent on a "relay team" of other mutations that came before, mutations that emerge only as conditions worsen at gradual and moderate rates.
Without the winners from those first "legs" of the survival race, it's unlikely there will even be a runner in the anchor position when conditions become extreme.
"That's a problem given the number of factors on the planet being changed with unprecedented rapidity under the banner of climate change and other human-caused changes," said Benjamin Kerr, UW assistant professor of biology.
Kerr is corresponding author of a paper in the advance online edition of Nature the week of Feb. 9.
Unless a species can relocate or its members already have a bit of flexibility to alter their behavior or physiology, the only option is to evolve or die in the face of challenging environmental conditions, said lead author Haley Lindsey of Seattle, a former lab member. Other co-authors are Jenna Gallie, now with ETH Zurich, the Swiss Federal Institute of Technology, and Susan Taylor of Seattle.
The species studied was Escherichia coli, or E. coli, a bacterium commonly found in the lower intestine and harmless except for certain strains that cause food-poisoning sickness and death in humans. The UW researchers evolved hundreds of populations of E.coli under environments made ever more stressful by the addition of an antibiotic that cripples and kills the bacterium. The antibiotic was ramped up at gradual, moderate and rapid rates.
Mutations at known genes confer protection to the drug. Researchers examined these genes in surviving populations from gradual- and moderate-rate environments, and found multiple mutations.
Using genetic engineering, the scientists pulled out each mutation to see what protectiveness it provided on its own. They found some were only advantageous at the lower concentration of the drug and unable to save the population at the highest concentrations. But those mutations "predispose the lineage to gain other mutations that allow it to escape extinction at high stress," the authors wrote.
"That two-step path leading to the double mutant is not available if a population is immersed abruptly into the high-concentration environment," Kerr said. For populations in that situation, there were only single mutations that gave protection against the antibiotic.
"The rate of environmental deterioration can qualitatively affect evolutionary trajectories," the authors wrote. "In our system, we find that rapid environmental change closes off paths that are accessible under gradual change."
The work was funded by the National Science Foundation, including money through the consortium known as the Beacon Center for the Study of Evolution in Action, and UW Royalty Research Funds.
The findings have implications for those concerned about antibiotic-resistant organisms as well as those considering the effects of climate and global change, Kerr said. For instance, antibiotics found at very low concentrations in industrial and agricultural waste run-off might be evolutionarily priming bacterial populations to become drug resistant even at high doses.
As for populations threatened by human-caused climate change, "our study does suggest that there is genuine reason to worry about unusually high rates of environmental change," the authors wrote. "As the rate of environmental deterioration increases, there can be pronounced increases in the rate of extinction."
Other social bookmarking and sharing tools:
- Haley A. Lindsey, Jenna Gallie, Susan Taylor, Benjamin Kerr. Evolutionary rescue from extinction is contingent on a lower rate of environmental change. Nature, 2013; DOI: 10.1038/nature11879
Note: If no author is given, the source is cited instead. | <urn:uuid:612602ab-d4d1-400b-96b5-d233d34a7c6e> | 3.984375 | 902 | News Article | Science & Tech. | 25.16925 | 1,402 |
View Full Version : What's the difference between model and store?
1 Sep 2012, 7:06 PM
I'm new to ExtJS. I wonder what the difference between model and store is.
When to use model, and when to use store?
I thought the model is the schema, and the store is the data itself.
But it seems not a right concept...
Using SQL terminology:
A model is like a schema.
An instance of a model (record) is like a row.
A store is like a table.
2 Sep 2012, 11:26 PM
You have to see the store as a collection of data resulting from a specific query, while the model will be the definition of your data fields types
3 Sep 2012, 12:39 AM
A model contains:
associations - allowing you to associate other data stores with this model
proxy - to get your data to-from localstorage or server
Can be used without a store for instance to load and save a form with a single record.
A store contains:
sorters - to sort your data
filters - to sort your data
group-ers - to group your data
Used for grids, XTemplate generated views, charts and other visualisations.
** note you can also put fields: and data: and proxy into a store and have it configured inline so no need for a model, but this is usually not used...
why not go to the fast track course and get up to speed in a week! ;-)
Powered by vBulletin® Version 4.1.5 Copyright © 2013 vBulletin Solutions, Inc. All rights reserved. | <urn:uuid:2993e3b4-c890-4e90-96f4-b74aea2a658e> | 2.671875 | 348 | Comment Section | Software Dev. | 72.806242 | 1,403 |
People got very excited in 2004 when NASA’s rover Opportunity discovered evidence that Mars had once been wet. Where there is water, there may be life. After more than 40 years of human exploration, culminating in the ongoing Mars Exploration Rover mission, scientists are planning still more missions to study the planet. The Phoenix, an interagency scientific probe led by the Lunar and Planetary Laboratory at the University of Arizona, is scheduled to land in late May on Mars’s frigid northern arctic, where it will search for soils and ice that might be suitable for microbial life (see “Mission to Mars,” November/December 2007). The next decade might see a Mars Sample Return mission, which would use robotic systems to collect samples of Martian rocks, soils, and atmosphere and return them to Earth. We could then analyze the samples to see if they contain any traces of life, whether extinct or still active.
Such a discovery would be of tremendous scientific significance. What could be more fascinating than discovering life that had evolved entirely independently of life here on Earth? Many people would also find it heartening to learn that we are not entirely alone in this vast, cold cosmos.
But I hope that our Mars probes discover nothing. It would be good news if we find Mars to be sterile. Dead rocks and lifeless sands would lift my spirit.
Conversely, if we discovered traces of some simple, extinct life-form–some bacteria, some algae–it would be bad news. If we found fossils of something more advanced, perhaps something that looked like the remnants of a trilobite or even the skeleton of a small mammal, it would be very bad news. The more complex the life-form we found, the more depressing the news would be. I would find it interesting, certainly–but a bad omen for the future of the human race.
How do I arrive at this conclusion? I begin by reflecting on a well-known fact. UFO spotters, Raëlian cultists, and self-certified alien abductees notwithstanding, humans have, to date, seen no sign of any extraterrestrial civilization. We have not received any visitors from space, nor have our radio telescopes detected any signals transmitted by any extraterrestrial civilization. The Search for Extra-Terrestrial Intelligence (SETI) has been going for nearly half a century, employing increasingly powerful telescopes and data-mining techniques; so far, it has consistently corroborated the null hypothesis. As best we have been able to determine, the night sky is empty and silent. The question “Where are they?” is thus at least as pertinent today as it was when the physicist Enrico Fermi first posed it during a lunch discussion with some of his colleagues at the Los Alamos National Laboratory back in 1950.
Here is another fact: the observable universe contains on the order of 100 billion galaxies, and there are on the order of 100 billion stars in our galaxy alone. In the last couple of decades, we have learned that many of these stars have planets circling them; several hundred such “exoplanets” have been discovered to date. Most of these are gigantic, since it is very difficult to detect smaller exoplanets using current methods. (In most cases, the planets cannot be directly observed. Their existence is inferred from their gravitational influence on their parent suns, which wobble slightly when pulled toward large orbiting planets, or from slight fluctuations in luminosity when the planets partially eclipse their suns.) We have every reason to believe that the observable universe contains vast numbers of solar systems, including many with planets that are Earth-like, at least in the sense of having masses and temperatures similar to those of our own orb. We also know that many of these solar systems are older than ours. | <urn:uuid:981101dc-8a3c-41d3-acb1-f805ca121c91> | 3.59375 | 772 | Nonfiction Writing | Science & Tech. | 38.62754 | 1,404 |
On September 6, 2009, the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument on NASA’s Terra satellite captured this simulated natural color image of the Station fire, burning in the San Gabriel Mountains north of Los Angeles. The fire started on August 26 in La Canada/Flintridge near NASA’s Jet Propulsion Laboratory in Pasadena (seen at the bottom of the image), and soon grew to become the largest fire in Los Angeles County’s history.
Ten days after its start, the fire had consumed more than 160,000 acres (251 square miles) of forest, leaving behind a charred, blackened landscape, as it spread eastward. Smoke from the actively burning area can be seen on the right side of the image; the large dark gray area dominating the image is the evidence of forest and chaparral destruction.
With its 14 spectral bands from the visible to the thermal infrared wavelength region and its high spatial resolution of 15 to 90 meters (about 50 to 300 feet), ASTER images Earth to map and monitor the changing surface of our planet. ASTER is one of five Earth-observing instruments launched December 18, 1999, on NASA’s Terra satellite.
The instrument was built by Japan’s Ministry of Economy, Trade and Industry. A joint U.S./Japan science team is responsible for validation and calibration of the instrument and the data products.
The broad spectral coverage and high spectral resolution of ASTER provides scientists in numerous disciplines with critical information for surface mapping and monitoring of dynamic conditions and temporal change.
Example applications are: monitoring glacial advances and retreats; monitoring potentially active volcanoes; identifying crop stress; determining cloud morphology and physical properties; wetlands evaluation; thermal pollution monitoring; coral reef degradation; surface temperature mapping of soils and geology; and measuring surface heat balance.
The U.S. science team is located at NASA’s Jet Propulsion Laboratory, Pasadena, Calif. The Terra mission is part of NASA’s Science Mission Directorate, Washington, D.C. | <urn:uuid:2f10e655-e9c5-4df4-9ab6-1a886886d0b9> | 3.9375 | 424 | Truncated | Science & Tech. | 31.775695 | 1,405 |
UCL’s Institute of Origins was created to promote world-leading research into the Origins and Evolution of the Universe, the basis of life and how we came to exist.
The Institute aims to bring together under one umbrella the rich and highly respected multi-disciplinary expertise that UCL has built up over several decades to research topics spreading in scale from the microscopic to the cosmological.
Institute provides a forum for workers in the departments of Mathematics, Physics & Astronomy, Space & Climate Physics and Earth Sciences to develop new and innovative collaborative research projects, in which fundamental issues connected to the origins and evolution of universe can be tackled in a consistent theoretical and physical way.
- What is the origin of the highest-energy cosmic rays?
- How does gas and dust evolve during the star formation cycle?
- Do Einstein’s equations really describe our Universe?
- Can extra-terrestrial environments support putative alien life?
These and other questions Institute aims to answer through its research. For more - check Research themes. | <urn:uuid:fb026668-1919-46b8-b9a7-5c9a7648f14e> | 2.71875 | 212 | About (Org.) | Science & Tech. | 24.582656 | 1,406 |
DEERFIELD, Ill., Jan. 7 (UPI) -- An early ancestor of today's birds had teeth -- and not just any teeth, but ones evolved for a special diet, U.S. paleontologists say.
Writing in the Journal of Vertebrate Paleontology, researchers say a study of a species of early bird, Sulcavis geeorum, suggests it had a durophagous diet, meaning the bird's teeth were capable of eating prey with hard exoskeletons like insects or crabs.
The new specimen, a fossil from the the Early Cretaceous period of 121 million to 125 million years ago, greatly increases the known diversity of tooth shape in early birds and hints at previously unrecognized ecological diversity, they said.
The fossil from an early group of birds known as enantiornithines was found in China and has robust teeth with grooves on the inside surface that likely made them stronger to deal with harder food items, researchers said.
No previous bird species have been found with such grooves, ridges, striations, serrated edges or any other form of dental ornamentation, researchers said.
"While other birds were losing their teeth, enantiornithines were evolving new morphologies and dental specializations," lead study author Jingmai O'Connor said.
"We still don't understand why enantiornithines were so successful in the Cretaceous but then died out -- maybe differences in diet played a part."
|Additional Science News Stories| | <urn:uuid:c410bdf5-db8b-4807-b795-0c7ca3023339> | 3.84375 | 315 | News Article | Science & Tech. | 42.301806 | 1,407 |
More About Ellipses
Steven Dutch, Natural and Applied Sciences,
University of Wisconsin - Green Bay
First-time Visitors: Please visit Site Map and Disclaimer. Use
"Back" to return here.
Find the Center of an Ellipse
Sometimes you have an ellipse but don't know the center. Finding the
center is easy.
- Draw two arbitrary parallel lines cutting chords across the ellipse (in
- Bisect the chords and draw a line through the midpoints of the chords
- Bisect the resulting line. The biscting point is the center.
proof is to imagine doing this construction on a circle, the shearing the circle
out of shape into an ellipse.
Find the Axes of an Ellipse
Since you can easily find the center of an ellipse, finding the axes is just
- Given an ellipse with unknown axes and center, find the center as above.
- Construct a circle with center at the center of the ellipse and
intersecting the ellipse at four points.
- Bisect the arcs of the circle (not shown), or
- Construct the rectangle joining the points where the ellipse and
- Construct the perpendicular bisectors of the sides of the rectangle, or connect opposing pairs of arc bisectors.
Find the Foci of an Ellipse
||Given the major and minor axes of an ellipse, you can always find
the foci. You need the foci for some construction methods. Just draw radii of length a from the ends of the minor axis.
Given the foci, however, you can't uniquely determine the axes. You need
additional information such as the length of one axis. However, the
major axis is always along the line through the foci and the minor axis
always perpendicularly bisects the line between the foci.
Access Crustal Movements Notes Index
Return to Professor Dutch's Home Page
Created 28 December 1998, Last Update
30 January 2012
Not an official UW Green Bay site | <urn:uuid:f07108cc-2222-4284-a944-4269d648096b> | 3.859375 | 439 | Tutorial | Science & Tech. | 48.585211 | 1,408 |
Image of the solar corona in white light (outer circle, blue and white) and X-Rays (inner circle, red, yellow, and black) on April 22, 1994, courtesy of the High Altitude Observatory and the Yohkoh Science team. The dashed circle is the solar radius.
Click on image for full size
Image courtesy of the High Altitude Observatory, National Center for Atmospheric Research (NCAR), Boulder, Colorado, USA. NCAR is sponsored by the National Science Foundation.The solar X-ray image is from the Yohkoh mission of ISAS, Japan.
The Solar Atmosphere
The visible solar atmosphere consists of three regions: the
the chromosphere, and the solar corona. Most of the visible (white) light
comes from the photosphere,
this is the part of the Sun we actually see.
The chromosphere and
corona also emit white light, and can
be seen when the light from the photosphere is blocked out, as occurs in
a solar eclipse.
The sun emits electromagnetic radiation at many other wavelengths as well.
Different types of radiation (such as radio, ultraviolet, X-rays, and gamma
rays) originate from different parts of the sun. Scientists use
to detect this radiation and study different parts of the solar atmosphere.
The solar atmosphere is so hot that the gas is primarily in a plasma state:
electrons are no longer bound to atomic nuclei, and the gas is made up
of charged particles (mostly protons and electrons). In this charged
state, the solar atmosphere is greatly influenced by the
strong solar magnetic fields that thread through it.
These magnetic fields, and the outer solar atmosphere (the corona) extend out into
interplanetary space as part of the solar wind.
Shop Windows to the Universe Science Store!Cool It!
is the new card game from the Union of Concerned Scientists that teaches kids about the choices we have when it comes to climate change—and how policy and technology decisions made today will matter. Cool It! is available in our online store
You might also be interested in:
Most of the energy we receive from the Sun is the visible (white) light emitted from the photosphere. The photosphere is one of the coolest regions of the Sun (6000 K), so only a small fraction (0.1%)...more
Rising above the Sun's chromosphere , the temperature jumps sharply from a few tens of thousands of kelvins to as much as a few million kelvins in the Sun's outer atmosphere, the solar corona. Understanding...more
An eclipse of the Sun occurs when the Earth passes through the Moon's shadow. A total eclipse of the Sun takes place only during a new moon, when the Moon is directly between the Sun and the Earth. When...more
There is a giant magnetic "bubble" in space around the Sun. That "bubble" is called the heliosphere. In a sense, we Earthlings live within the outer atmosphere of our Sun. The solar...more
Space weather is a very complex scientific field. Scientists who study space weather use computer models a lot. Space weather is a bit like weather on Earth in this way because weather forecasters on our...more
Large impressive loop-like structures on the edge of the solar disk sometimes stand out brightly against the dark background of space. Though these structures, called "prominences", appear to be very bright...more
The Sun has a very large and very complex magnetic field. The magnetic field at an average place on the Sun is around 1 Gauss, about twice as strong as the average field on the surface of Earth (around...more | <urn:uuid:e37ed691-3761-4a9c-a88b-41b6a93f18f1> | 3.71875 | 765 | Knowledge Article | Science & Tech. | 54.788677 | 1,409 |
A baseball speeds from the hands of a pitcher, a slave to Newton’s laws. But in the brain of the batter who is watching it, something odd happens. Time seems to dawdle. The ball moves in slow motion, and becomes clearer. Players of baseball, tennis and other ball sports have described this dilation of time. But why does it happen? Does the brain merely remember time passing more slowly after the fact? Or do experienced players develop Matrix-style abilities, where time genuinely seems to move more slowly?
According to five experiments from Nobuhiro Hagura at University College London, it’s the latter. When we prepare to make a movement – say, the swing of a bat – our ability to process visual information speeds up. The result: the world seems to move slower.
At first glance, this might seem to contradict a now-classic experiment by David Eagleman. He threw volunteers off a tall fairground ride and asked them to stare at a special watch, to see if their perception of time would slow. It didn’t. They merely remembered the experience as being long and drawn out afterwards. (See my earlier post for the details.)
But there’s a critical difference between the two studies. Eagleman studied time perception while people were actually undergoing a crisis—in this case, falling to their possible doom. But Hagura showed that time appears more leisurely before an event, rather than during it—when we’re preparing to move, rather than moving.
Hagura first asked volunteers to press a key for as long as a white disc appeared on a screen. The disc would then be replaced by a hollow target. In some trials, the volunteers had to release their key and touch the target. In others, they were told to keep pressing the key. In every case, they had to say how long the white disc stayed up for, compared to all the previous trials in the experiment. Hagura found that the volunteers deemed the durations to be longer if they were preparing to move, than if they were planning to keep still.
Perhaps the volunteers who were about to reach out were just more excited or attentive? Not so. When Hagura changed the task from pressing (or not pressing) the target, to naming (or ignoring) a letter, the time-slowing effect vanished. Preparing to move makes the difference, rather than just preparing for any old task.
In a third variation, the white disc was replaced by two possible targets instead of just one. In some trials, the disc had a line that told the volunteers which of the two targets was correct, allowing them to prepare the right movement. In other trials, there was no line, and the volunteers had to make their move when the two targets appeared. As you might have guessed by now, they thought the white disc stayed up longer if they were preparing to move their arm in a specific direction, but not if they were simply waiting.
These three sets of results support the idea that time moves more slowly when we prepare an action. But they could also be explained in the same way that Eagleman’s results were: Time only seemed to pass more slowly because the volunteers remembered it doing so. But two final experiments suggest that, instead, preparing to move actually slows “the flow of visual experience”.
First, Hagura replaced the solid white target with one that flickered at different frequencies. The volunteers had to say whether it was flickering faster or slower than usual, compared to previous trials. If they were preparing to hit the screen, they said that the high-frequency flickers were slower than they actually were.
Second, Hagura showed his volunteers a stream of rapidly flashing letters, while they held a key. Each letter appeared for just 35 milliseconds, and the whole series went by in less than a second. Somewhere in the stream, there was a C or a G, but never both. Once the sequence had stopped, as before, the volunteers either kept holding their key, or touched the screen. Their task was to say whether they had seen a C or a G.
If the volunteers were preparing to reach out, they got the right answer about 66 percent of the time. If they kept still, their success rate was just 59 percent. By readying their arms to touch the screen, they were better able to spot their target amid the zooming letters. This difference was particularly marked if the C or G appeared towards the end of the flashing sequence – the longer the volunteers spent preparing to move, the slower time seemed to pass.
How does the slowing effect actually work? We don’t know. Hagura notes that there are certainly connections between the parts of the brain that encode the passage of time, and those that prepare sequences of movement. The details, however, are still unknown.
Why does the effect happen? Hagura argues that speeding up our powers of perception allows us to change, tweak and halt our course of action on the fly. He writes: “As expert ballgame players assert, being maximally prepared may allow ‘more time’ to perfect the hit.”
That would be a clear benefit, but Andrew Welchman, who studies perception at the University of Birmingham, wonders if there are any drawbacks. “You never get anything in the brain for free, so if you get better at one moment in time, you should get worse at another,” he says. “Take someone who moves a lot versus someone how moves little. They should both be calibrated to the same external time, so the one who moves a lot needs to have more ‘downtime’ to keep in step.” A bout of Neo-like bullet-time should be followed by a burst of perceptual sluggishness.
For example, Welchman says that when we move our eyes around, our visual sensitivity plummets immediately before, during and after the movement. This is called saccadic suppression. The standard interpretation is that we’re “filtering out the junk” – the “smeary visual signals” that we get when our eyes move too quickly. “But framed in light of this paper, it might be a way of resetting the clock so that the person stays calibrated to the visual world around them,” says Welchman.
Reference: Hagura, Kanai, Orgs & Haggard. 2012. Ready steady slow: action preparation slows the subjective passage of time. Biology Letters http://dx.doi.org/10.1098/rspb.2012.1339 | <urn:uuid:5db5cf11-115e-49eb-aa80-55ef0dca96ad> | 3.3125 | 1,364 | Nonfiction Writing | Science & Tech. | 58.192157 | 1,410 |
Because is only 0.7 percent of naturally occurring uranium, its supply is fairly limited and could well only last for about 50 years of full-scale use. The other 99 percent of the uranium can also be utilized if it is first converted into plutonium by neutron bombardment:
The production of plutonium can be carried out in a breeder reactorA nuclear reactor designed to produce nuclear fuel as it produces energy. which not only produces energyA system's capacity to do work. like other reactors but is designed to allow some of the fast neutrons to bombard the , producing plutonium at the same time. More fuel is then produced than is consumed.
Breeder reactors present additional safety hazards to those already outlined. They operate at higher temperatures and use very reactive liquidA state of matter in which the atomic-scale particles remain close together but are able to change their positions so that the matter takes the shape of its container metals such as sodium in their cooling systems, and so the possibility of a serious accident is higher. In addition the large quantities of plutonium which would be produced in a breeder economy would have to be carefully safeguarded. Plutonium is an α emitter and is very dangerous if taken internally. Its half-lifeIn chemical kinetics, the time it takes for one half of the limiting reactant to be consumed. In nuclear chemistry, the time for half of a sample to undergo radioactive decay. is 24 000 years, and so it will remain in the environment for a long time if dispersed. Moreover, can be separated chemically (not by the much more expensive gaseous diffusionThe spreading of one substance into another (usually involves gases or liquids). used to concentrate ) from fission products and used to make bombs. Such a material will obviously be attractive to terrorist groups, as well as to countries which are not currently capable of producing their own atomic weapons. | <urn:uuid:81fa5386-755b-45d4-a2c0-8d9005faa120> | 3.40625 | 375 | Knowledge Article | Science & Tech. | 35.239351 | 1,411 |
Our knowledge concerning the surface of Venus comes from a limited amount of information obtained by the series of Russian Venera landers, and primarily from extensive radar imaging of the planet. The radar imaging of the planet has been performed both from Earth-based facilities and from space probes. The most extensive radar imaging was obtained from the Magellan orbiter in a 4-year period in the early 1990s. As a consequence, we now have a detailed radar picture of the surface of Venus. The adjacent animation shows the topography of the surface as determined using the Magellan synthetic aperture radar (black areas are regions not examined by Magellan). An MPEG movie (303 kB) of this animation is also available.
Much of the surface of Venus appears to be rather young. The global data set from radar imaging reveals a number of craters consistent with an average Venus surface age of 300 million to 500 million years.
There are two "continents", which are large regions several kilometers above the average elevation. These are called Istar Terra and Aphrodite Terra. They can be seen in the preceding animation as the large green, yellow, and red regions indicating higher elevation near the equator (Aphrodite Terra) and near the top (Ishtar Terra).
|Hemispheres of Venus (Ref)|
The center image (a) is centered at the North Pole. The other four images are centered around the equator of Venus at (b) 0 degrees longitude, (c) 90 degrees east longitude, (d) 180 degrees and (d) 270 degrees east longitude. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. (Here is a more extensive discussion of these hemispheric views.)
|A Volcano (Ref)||Apparent Lava Flows (Ref)|
In all of
these radar images you should bear in mind that bright spots correspond to
regions that reflect more radar waves than other regions. Thus, if you could
actually see these regions with your eyes the patterns of brightness and
darkness would probably not be the same as in these images. However, the basic
features would still be the same.
There are rift valleys as large as the East African Rift (the largest on Earth). The image shown below illustrates a rift valley in the West Eistla Region, near Gula Mons and Sif Mons.
|Rift valley on Venus|
The perspective in cases like this is synthesized from radar data taken from different positions in orbit.
African Rift on Earth is a consequence of tectonic motion between the African
and Eurasian plates (the Dead Sea in Israel is also a consequence of this same
plate motion). Large rift valleys on Venus appear to be a consequence of more
local tectonic activity, since the surface of Venus still appears to be a
|A Field of Craters||The Largest Crater (Ref)|
|The surface of Venus from Venera 14 (Ref)| | <urn:uuid:98f84e8a-c73e-4cf2-9c53-4b613f94b23a> | 4.34375 | 622 | Knowledge Article | Science & Tech. | 38.611933 | 1,412 |
A link from one of readers (thanks Ashley!) pointed us to a story on MSNBC about a very large Lion’s Mane jellyfish (Cyanea capillata) that broke apart and stung up to 100 people on a New Hampshire beach last Wednesday. Lion’s Manes can get very big, their bell can be over 3 feet. Their tentacles though are another story and quite intimidating! A small Lion’s Mane can have a tentacle trail 10 feet long. A much larger one may have over 150 tentacles trailing over 30 feet behind it!
So how can jellyfish sting if they break apart or are dead and washed up on the beach? The tiny stinging cells, called nematocysts, can be thought of like a mouse trap. One you set the mouse trap it only needs a trigger to do its damage. It doesn’t need any outside help to be maintain. It just has purpose, to sit and wait for an unfortunate victim to trigger the hard-wired response that millions of years of evolution have refined into a potent venom delivery system. Much the like the mouse trap, once set it does not let go easily.
From MSNBC/LiveScience writer Jeanna Bryner:
Though not a common occurrence, marine biologist Sean Colin says with such a large jellyfish, and so many trailing tentacles (not to mention those that break off in the water), the occurrence is feasible.
“It’s certainly not common, but it’s certainly in the realm of possibility, because they do have so many tentacles if they’re that large. If they’re broken up they could be all over the place,” said Colin who is at Roger Williams University in Bristol, R.I.
Profile of a giant
This species is typically found in the cooler regions of the Pacific Ocean, Atlantic Ocean, North Sea and Baltic Sea. And they rarely show up on this beach. “I haven’t seen anything like this in my life, said Brian Warburton, who has been with the New Hampshire State Parks department for six years.
All the action transpired in about 20 minutes, when Warburton and his colleagues administered first aid (vinegar treatment). “There wasn’t time to sit and measure this thing. We just got rid of it,” Warburton told LiveScience. “Think about a glob of Jell-O you’re trying to pick up with two hands,” he said, explaining the need for a pitchfork to pick it up.
Nematocysts are proteinaceous substances and are not living cells or organelles. They discharge extremely rapidly and work by building an immense amount of pressure inside the cell (up to 15 MPa or 2176 lbs/in2) by storing oodles of calcium ions. When discharged (see above), the ions are rapidly ejected into the surrounding cytoplasm, setting off the chain of events resulting in a painful sting. Research by Nüchter and colleagues measured the escape velocity and kinetics of nematocyst discharge in the freshwater hydrozoan, Hydra. The steps above took place during 700 nanoseconds, creating an acceleration of 5,410,000 g!
Not all nematocysts are filled with venom though, but it is not a chance you should be willing to take. Since the stinging cells matter not whether its creator is alive (at least over the shorter term, the protein does degrade rapidly), it still does a lot of damage on its own its always a good idea to approach a beach jellyfish with caution and your flippy-floppies on. Very important to make sure your kids know never to touch a jellyfish unless with a stick from a safe distance. As I tell my 3 children, jellyfish are pretty from a distance!
Nüchter, T., Benoit, M., Engel, U., Özbek, S., & Holstein, T. (2006). Nanosecond-scale kinetics of nematocyst discharge Current Biology, 16 (9) DOI: 10.1016/j.cub.2006.03.089 | <urn:uuid:f918762f-3b67-4ad1-afde-567aa6fdb2e8> | 2.921875 | 863 | Comment Section | Science & Tech. | 63.064304 | 1,413 |
Inheritance describes a relationship between two (or more) types, or classes, of objects in which one is said to be a "subtype" or "child" of the other, as result the "child" object is said to inherit features of the parent, allowing for shared functionality, this lets programmers re-use or reduce code and simplifies the development and maintenance of software.
Inheritance is also commonly held to include subtyping, whereby one type of object is defined to be a more specialized version of another type (see Liskov substitution principle), though non sub-typing inheritance is also possible.
Inheritance is typically expressed by describing classes of objects arranged in an inheritance hierarchy (also referred to as inheritance chain), a tree-like structure created by their inheritance relationships.
For example, one might create a variable class "Mammal" with features such as eating, reproducing, etc.; then define a subtype "Cat" that inherits those features without having to explicitly program them, while adding new features like "chasing mice". This allows commonalities among different kinds of objects to be expressed once and reused multiple times.
In C++ we can then have classes that are related to other classes (a class can be defined by means of an older, pre-existing,
class ). This leads to a situation in which a new class has all the functionality of the older class, and additionally introduces its own specific functionality. Instead of composition, where a given class contains another class, we mean here derivation, where a given class is another class.
This OOP property will be explained further when we talk about Classes (and Structures) inheritance in the Classes Inheritance Section of the book.
If one wants to use more than one totally orthogonal hierarchy simultaneously, such as allowing "Cat" to inherit from "Cartoon character" and "Pet" as well as "Mammal" we are using multiple inheritance.Last modified on 13 November 2012, at 02:25 | <urn:uuid:9a20ccee-39d2-4e2e-b39d-d7f4cb702252> | 4.25 | 414 | Knowledge Article | Software Dev. | 19.043377 | 1,414 |
The standard 'Scientific
' explanation is that the carbon-carbon bond
s in diamond are too stable, no enzyme
would be able to overcome the energy barrier
necessary to disassemble diamond.
However, diamond is something of a special case for carbon compounds (oh, fullerenes are probably pretty inedible). There are organisms that 'eat' rocks, reduce gold salts to elemental gold and other improbable diets (from the point of veiw of sugar-eaters). There are bacteria that derive energy from sodium and some that produce hydrogen or eat methane. Many organisms can synthesise silica polymers - and I don't in principle see why you couldn't engineer bacteria to make silicon chips.
The message is that it's more surprising what Life can do than what it can't. | <urn:uuid:38c25a29-31f8-41e3-857d-ec29ed46d1b5> | 2.9375 | 165 | Personal Blog | Science & Tech. | 36.135085 | 1,415 |
Cnidarians are water animals that have a simple, usually symmetrical, body with a mouth opening. Stinging cells on tentacles around the mouth catch prey. Cnidarians are either bell-shaped and mobile, like the jellyfish, or tubes anchored to one spot, like coral and sea anemones.
All cnidarians have stinging cells. Many are able to reproduce asexually (without mating) and sexually. There are 9,000 species.
(corals, sea fans, sea pens, sea anemones)
Features: anchored polyp (tube-like) form, carnivorous (eat flesh), often in groups
Features: some free-living, others anchored, most in colonies (large groups), mostly carnivorous | <urn:uuid:0caaa280-9d5d-4b44-89b2-45bd84ce8936> | 3.3125 | 157 | Knowledge Article | Science & Tech. | 30.819648 | 1,416 |
If you guys remember all of the big genome sequencing projects of the 90s and the early aughts, they’ve been continuing and the amount of raw data they have been giving back to us has exponentially accelerated. However, those of us trying to understand the biological realities of what all of those sequences actually mean were very quickly left behind and have been falling further and further behind as the advance of sequencing technology accelerates faster than we could ever hope to keep up with. The central problem is that while it turns out that we can get computers to do our pipetting for us if we pay engineers enough – we can’t get computers to do our thinking for us. Like mathematicians with some of the fanciest calculators imaginable, we can get the tools NCBI gives us to show us amazing things in amazing ways, but they can’t tell us what it all means. For the genomes we get to make any kind of sense a human being has to abstract meaning from it and communicate that meaning in understandable language – and there is no way around that limitation – there will only ever be ways to optimize it. This is really what synthetic biology is trying to do from its own weird and attractive but easily dangerously simplistic perspective.
E Andrianantoandro, S Basu, et al. Published 2006 in Molecular Systems Biology. doi:10.1038/msb4100073
Credit: Chuck Wadey, www.ChuckWadey.com
Synthetic biologists engineer complex artificial biological systems to investigate natural biological phenomena and for a variety of applications. We outline the basic features of synthetic biology as a new engineering discipline, covering examples from the latest literature and reflecting on the features that make it unique among all other existing engineering fields. We discuss methods for designing and constructing engineered cells with novel functions in a framework of an abstract hierarchy of biological devices, modules, cells, and multicellular systems. The classical engineering strategies of standardization, decoupling, and abstraction will have to be extended to take into account the inherent characteristics of biological devices and modules. To achieve predictability and reliability, strategies for engineering biology must include the notion of cellular context in the functional definition of devices and modules, use rational redesign and directed evolution for system optimization, and focus on accomplishing tasks using cell populations rather than individual cells. The discussion brings to light issues at the heart of designing complex living systems and provides a trajectory for future development.
If there is a God of creation that went around designing the genomes of all of the living things on Earth, they are the sloppiest, most frustrating, terrible programmer you could possibly imagine. The Intelligent Design proponents are particularly frustrating to me as a biologist having seen how fundamentally unintelligent the design of living critters actually is when you get down to the real moving parts. At least it is designed according to a sort of logic so fundamentally alien to our own that by any human standard we couldn’t help but call it stupid. Looking at life through the lens of Max Delbrück’s slowly fulfilled dream of a science of molecular genetics to replace the stamp collecting of Drosophila genetics1, the organization of information, regulation, and function in genomes makes precious little intuitive sense in terms of human logic. When you think about it; silly things like fundamentally unrelated systems being piled on top of each other such that one can’t be manipulated without messing up the other – necessitating otherwise functionless patches to the paired system whenever the other is modified, or Rube Goldberg-esque fragile systems of regulation that respond to all kinds of wrong stimuli, or systems of global regulation that are pretty analogous to reading the same giant program in either Python or C++ to produce one of two desired global results, or the kinds of systems that you can just tell are 99.9% amateur patch jobs are really what you would expect from systems designed exclusively by the entropic trial and error of evolution.
The end goal of the folks behind synthetic biology is pretty simple on the face of it. They want to turn biological systems into abstractions that can be manipulated by people who don’t understand the lower parts. While this might seem like a trivial goal, when you really understand what it means, it becomes clear that it has the potential to change the world in intensely profound ways and the very nature of life itself – that is if they can actually make that work in a functional way. At the moment genomes can only really be meaningfully understood or manipulated by folks like me with expensive and rare educations. This is because in order to create de novo anything like a solid grasp of how anything so beautiful as the lac operon works in E. coli one needs to have a pretty good understanding of how things like DNA-binding proteins work, how the structure of DNA relates to its function, how ligand binding works, how transcription initiation works, and how enzymes do their thing. Similarly, in order to have any hope of understanding how one would manipulate systems like that, you’d need to have a good understanding of how cell competency works and can be created, how to manipulate plasmid vectors, the anti-parallel nature of DNA , how to use antibiotics and resistance cassettes to select for desired strains, what TATA boxes do, how Shine-Delgarno sequences work, how RNA polymerases tend to like to bind, how to choose which regulation mechanism to use, and that doesn’t even include the technical skills necessary to actually do it yourself. Their idea is to turn genes, gene cassettes, and genetic systems into ‘BioBricks’ that their manipulators don’t need to understand to be useful (in a way analogous to how Perl programmers and Sys Admins don’t need to understand Assembly language to be useful) and can pay to have manipulated in industrially mechanized ways. At the moment the iGEM folks are using the levels of abstraction they can already create to harness to creativity of undergrads with their competition, but what may lie ahead is much much cooler.
Until this summer I did nothing but make fun of the nascent science of Synthetic Biology, having only been exposed to its many nuttier proponents. Maryr of Metafilter was absolutely right when she went all Mol Bio hipster and declared: “I heard of iGEM before it was cool. BioBricks is for people who can’t handle real cloning,” in this thread about what is still solidly iGEMs neatest project. BioBrick really is just a new name for gene cassette, things that have been actively studied and manipulated since the 60s. What convinced me that this could actually be really amazingly cool was a talk Drew Endy gave at the most recent Bacteriophage conference in Brussels about the research that is going on in his lab, the parts he needed from us, and why. (37:38) [Don’t be intimidated by the technical nature of the talk – even if you zone out during the technical bits you can totally still get the point]. In it, he describes his lab’s quest to create what amounts to a living computer – programmable systems architecture within E. coli. The current project involves using the architecture he is building to create a trivially readable clock that reads out in binary that would track the number of generations that a culture of bacteria has gone through – which would itself be amazingly useful. However, if created, these kinds of systems architecture combined with sensor proteins, enzymes, and regulator molecules understood as BioBricks could make life understandable by people who are to us as programmers are to hardware engineers. Here is another detailed talk focused more towards computer folks than biologists and here is another shorter talk he has given that is more geared towards laymen at a higher level of abstraction.
While I was sitting in that talk, knowing that the phage community does indeed have all of the parts he wants and then some, I couldn’t help but get goose bumps recalling one of my favorite stories from Science Fiction: The Nine Billion Names of God (part 2 ) by Arthur C. Clarke. Where suddenly I was, by way of analogy, a monk in his Lamasery slowly going about the task of annotating out the 10,000,000,000,000, 000,000,000,000,000,000 (1031) names of creation. If we really can systematize the genome of a living organism into neat little boxes like a well designed program according to the sensibilities and biases of human logic that would, in a very real and profound way, give us the ability to remake life in our own image in a way that very much evokes the line in Genesis that phrase comes from.
How cool would that be?
It is still however worth being very cautious about what promise synthetic biology may hold. There seems to be a whole cottage industry, particularly around the singularity movement, that has been set up to help people pretend they understand biology, and molecular genetics in particular, by calling it synthetic biology and making fanciful claims that people have different interests in being true. It preys on the scientific illiteracy of its audience counting on there being few enough people with the education to call them out on the sizable amount of fundamentally false false stuff they are communicating for them to get away with it. There are indeed huge limitations to this kind of thinking ever producing anything of meaningful value, which it has yet to do, that have nothing to do with a need for bigger computers or most anything else that singularity folks tend to point at as growing exponentially. The Singularity University is indeed an elaborate fraud run by folks with precious little understanding of biology.
1From the 1920s to the 1930s there was a mass movement of out of work physicists, having suddenly run out of things to do when we figured out to much of physics, to biology. They brought with them a mechanistic view of how the universe works that they used to cause massive transformations in how we understand and interact with biology. One of the most influential of these scientific interlopers was Max Delbrück who quickly reasoned that, if we were ever going to understand how life works, we would need to start with the simplest organism possible and work our way up. He isolated seven bacteriophages against E. coli B, originally just his lab strain, and named them in a series T1 through T7. The central idea was that he and his growing number of colleagues* would focus on truly understanding how these phages worked and use that knowledge to generalize to Escherichia coli, then the mouse, and then the elephant and us. An essential component of this was the “Phage Treaty” among researchers in the field, which Delbrück organized in order to limit the number of model phage and hosts so that folks could meaningfully compare results. What came out of their original focus, in many respects encapsulated in Erwin Schrödinger’s What is life?, has shed light on so much as to truly redefine our self-understanding, much less medicine
The Luria–Delbrück experiment elegantly demonstrated that in bacteria, genetic mutations arise in the absence of selection, rather than being a response to selection, that is in all of life.
The Hershey–Chase experiment showed once and for all that nucleic acids were in fact the heritable molecule in not just T2 phage and E. coli, but indeed all of life.
Easily the snarkiest, most badass, and likely most important published scientific paper ever, written as an accessible single page, about the double helix structure of DNA. Jim Watson changed majors from ornithology to genetics after reading What is Life and became Luria’s graduate student, while Crick was an older former physicist who also claimed inspiration from Schrödinger. The structure of DNA, and its relationship to function that they discovered, is true for all of life.
Soon afterwards the adapter hypothesis and central dogma, both of which are (at least simplistically) true for all of life. | <urn:uuid:3ef7c364-d1af-4794-b165-540dfb009e63> | 2.796875 | 2,487 | Personal Blog | Science & Tech. | 30.832799 | 1,417 |
I'm having some trouble with this proof problem. Anyone have any ideas?
"Let p be an integer other than 0, +/- 1 with this property: Whenever b and c are integers such that p | bc, then p | b or p | c. Prove p is prime. [Hint: If d is a divisor of p, say p = dt, then p | d or p | t. Show that this implies d = +/- p or d = +/- 1.]
I know how to prove p *can be* a prime, however I've been unable to prove p *must* be a prime. I was thinking of using gcd(p, b) = p if p | b (and vice versa). Then p = pn + bm. But I'm unable to come up with anything. In the hint it claims to show this is true if d is a divisor of p, but I'm not sure how to even get there. Any help would be appreciated! | <urn:uuid:4114dfd8-6d08-4852-bdb4-110a13ddac77> | 2.515625 | 210 | Q&A Forum | Science & Tech. | 103.764338 | 1,418 |
Does anybody know if there exists a mathematical explanation of Mendeleev table in quantum mechanics? In some textbooks (for example in "F.A.Berezin, M.A.Shubin. The Schrödinger Equation") the authors present quantum mechanics as an axiomatic system, so one could expect that there is a deduction from the axioms to the main results of the discipline. I wonder if there is a mathematical proof of the Mendeleev table?
P.S. I hope the following will not be offensive for physicists: by a mathematical proof I mean a chain of logical implications from axioms of the theory to its theorem. :) This is the standard approach everywhere in mathematics. For instance, in Griffiths' book I do not see axioms at all, therefore I can't treat the reasonings at pages 186-193 as a proof of Mendeleev table. By the way, that is why I did not want to ask this question at a physical forum: I do not think that people there will even understand my question. However, after Bill Cook's suggestion I made an experiment - and you can look at the results here: http://theoreticalphysics.stackexchange.com/questions/473/is-the-mendeleev-table-explained-in-quantum-mechanics
So I ask my colleagues-mathematicians to be tolerant.
P.P.S. After closing this topic and reopening it again I received a lot of suggestions to reformulate my question, since in its original form it might seem too vague for mathematicians. So I suppose it will be useful to add here, that by the Mendeleev table I mean (not just a picture, as one can think, but) a system of propositions about the structure of atoms. For example, as I wrote here in comments, the Mendeleev table claims that the first electronic orbit (shell) can have only 2 electrons, the second - 8, the third - again 8, the fourth - 18, and so on. Another regularity is the structure of subshells, etc. So my question is whether it is proved by now that these regularities (perhaps not all but some of them) are corollaries of the system of axioms like those from Berezin-Shubin book. Of course, this assumes that the notions like atoms, shells, etc. must be properly defined, otherwise the corresponding statements could not be formulated. I consider this as a part of my question -- if experts will explain that the reasonable definitions are not found by now, this automatically will mean that the answer is 'no'.
The following reformulation of my question was suggested by Scott Carnahan at http://meta.mathoverflow.net/discussion/1202/should-a-mathematician-be-a-robot/#Item_0 : "Do we have the mathematical means to give a sufficiently precise description of the chemical properties of elements from quantum-mechanical first principles, such that the Mendeleev table becomes a natural organizational scheme?"
I hope, this makes the question more clear. | <urn:uuid:1230a7ba-990c-4eb8-93aa-d197d538f189> | 2.59375 | 657 | Q&A Forum | Science & Tech. | 50.086388 | 1,419 |
Claessens has discovered that the kauri trees in New Zealand prevent landslides. When these enormous conifers reached a certain age, they stabilise areas prone to landslides. This maximises the benefit the trees gain by living far longer than other tree species.
At present the slopes are drained and large concrete structures are placed to prevent the landslides and the associated mud flows. According to Claessens planting kauri trees is a natural and in the longer-term possibly better solution for this problem.
During his doctoral research, the Belgian researcher developed a dynamic landscape model to simulate the distribution of soil due to landslides. For this he studied the landscape, soil and vegetation dynamics in the Waitakere Ranges Regional Park in New Zealand. The model can be used to predict the locations where landslides will occur and researchers can also use it to calculate how rainfall affects the soil.
Waitakere Ranges Regional Park is situated on the North Island of New Zealand. About 1000 years ago this entire island was covered with kauri trees, which can reach a height of 50 metres and grow in the most inhospitable places. The largest kauri tree in New Zeeland is the Tane Mahuta ('king of the forest'). This tree has reached the honourable age of 1500 years, is more than 51 metres high and has a girth of 13.7 metres.
Some of the remaining kauri forests of the island are still inhabited by the original islanders, the Maori's. They use the trees to build canoes and houses. From the mid-19th century onwards, many kauri trees were chopped down by Europeans for the timber trade. This led to the disappearance of most of these colossal conife
Contact: Dr Lieven Claessens
Netherlands Organization for Scientific Research | <urn:uuid:64d141aa-b3bb-4700-98f9-eb42b659c1b1> | 4.09375 | 368 | News Article | Science & Tech. | 45.611635 | 1,420 |
CORVALLIS, Ore. – The ebb and flow of the ocean tides, generally thought to be one of the most predictable forces on Earth, are actually quite variable over long time periods, in ways that have not been adequately accounted for in most evaluations of prehistoric sea level changes.
Due to phenomena such as ice ages, plate tectonics, land uplift, erosion and sedimentation, tides have changed dramatically over thousands of years and may change again in the future, a new study concludes.
Some tides on the East Coast of the United States, for instance, may at times in the past have been enormously higher than they are today – a difference between low and high tide of 10-20 feet, instead of the current 3-6 foot range.
And tides in the Bay of Fundy, which today are among the most extreme in the world and have a range up to 55 feet, didn’t amount to much at all about 5,000 years ago. But around that same time, tides on the southern U.S. Atlantic coast, from North Carolina to Florida, were about 75 percent higher.
The findings were just published in the Journal of Geophysical Research. The work was done with computer simulations at a high resolution, and supported by the National Science Foundation and other agencies.
“Scientists study past sea levels for a range of things, to learn about climate changes, geology, marine biology,” said David Hill, an associate professor in the School of Civil and Construction Engineering at Oregon State University. “In most of this research it was assumed that prehistoric tidal patterns were about the same as they are today. But they weren’t, and we need to do a better job of accounting for this.”
One of the most interesting findings of the study, Hill said, was that around 9,000 years ago, as the Earth was emerging from its most recent ice age, there was a huge amplification in tides of the western Atlantic Ocean. The tidal ranges were up to three times more extreme than those that exist today, and water would have surged up and down on the East Coast.
One of the major variables in ancient tides, of course, was sea level changes that were caused by previous ice ages. When massive amounts of ice piled miles thick in the Northern Hemisphere 15,000 to 20,000 years ago, for instance, sea levels were more than 300 feet lower.
But it’s not that simple, Hill said.
“Part of what we found was that there are certain places on Earth where tidal energy gets dissipated at a disproportionately high rate, real hot spots of tidal action,” Hill said. “One of these today is Hudson Bay, and it’s helping to reduce tidal energies all over the rest of the Atlantic Ocean. But during the last ice age Hudson Bay was closed down and buried in ice, and that caused more extreme tides elsewhere.”
Many other factors can also affect tides, the researchers said, and understanding these factors and their tidal impacts is essential to gaining a better understanding of past sea levels and ocean dynamics.
Some of this variability was suspected from previous analyses, Hill said, but the current work is far more resolved than previous studies. The research was done by scientists from OSU, the University of Leeds, University of Pennsylvania, University of Toronto, and Tulane University.
“Understanding the past will help us better predict tidal changes in the future,” he said. “And there will be changes, even with modest sea level changes like one meter. In shallow waters like the Chesapeake Bay, that could cause significant shifts in tides, currents, salinity and even temperature.” | <urn:uuid:92cfa7a8-c4ac-4f98-b4e6-fd5945c40921> | 4.03125 | 760 | News Article | Science & Tech. | 43.625787 | 1,421 |
In the face of a changing climate many species must adapt or perish. Ecologists studying evolutionary responses to climate change forecast that cold-blooded tropical species are not as vulnerable to extinction as previously thought. The study, published in the British Ecological Society's Functional Ecology, considers how fast species can evolve and adapt to compensate for a rise in temperature.
The research, carried out at the University of Zurich, was led by Dr Richard Walters, now at Reading University, alongside David Berger now at Uppsala University and Wolf Blanckenhorn, Professor of Evolutionary Ecology at Zurich.
"Forecasting the fate of any species is difficult, but it is essential for conserving biodiversity and managing natural resources," said lead author Dr Walters. "It is believed that climate change poses a greater risk to tropical cold-blooded organisms (ectotherms), than temperate or polar species. However, as potential adaptation to climate change has not been considered in previous extinction models we tested this theory with a model forecasting evolutionary responses."
Ectotherms, such as lizards and insects, have evolved a specialist physiology to flourish in a stable tropical environment. Unlike species which live in varied habitats tropical species operate within a narrow range of temperatures, leading to increased dangers if those temperatures change.
"When its environment changes an organism can respond by moving away, adapting its physiology over time or, over generations, evolving," said Walters. "The first two responses are easy to identify, but a species' ability to adapt quick enough to respond to climate change is an important and unresolved question for ecologists."
The team explored the idea that there are also evolutionary advantages for species adapted to warmer environments. The 'hotter is better' theory suggests that species which live in high temperatures will have higher fitness, resulting from a shorter generation time. This may allow them to evolve relatively quicker than species in temperate environments.
The team sought to directly compare the increased risk of extinction associated with lower genetic variance, owing to temperature specialisation, with the lowered risk of extinction associated with a shorter generation time.
"Our model shows that the evolutionary advantage of a shorter generation time should compensate species which are adapted to narrow temperature ranges," said Walters. "We forecast that the relative risk of extinction is likely to be lower for tropical species than temperate ones."
"The tropics are home to the greatest biodiversity on earth, so it imperative that the risk of extinction caused by climate change is understood," concluded Walters. "While many questions remain, our theoretical predictions suggest tropical species may not be as vulnerable to climate warming as previously thought."
Explore further: Why we need to put the fish back into fisheries
More information: Walters.R, Blanckenhorn. W, Berger. D, Forecasting extinction risk of ectotherms under climate warming: an evolutionary perspective, Functional Ecology, August 2012, DOI: 10.1111/j.1365-2435.2012.02045.x | <urn:uuid:c69fa6c9-99d5-4af0-93a9-fbcd9c36f1c7> | 3.875 | 601 | News Article | Science & Tech. | 19.311766 | 1,422 |
Orthogonal Complements and the Lattice of Subspaces
We know that the poset of subspaces of a vector space is a lattice. Now we can define complementary subspaces in a way that doesn’t depend on any choice of basis at all. So what does this look like in terms of the lattice?
First off, remember that the “meet” of two subspaces is their intersection, which is again a subspace. On the other hand their “join” is their sum as subspaces. But now we have a new operation called the “complement”. In general lattice-theory terms, a complement of an element in a bounded lattice (one that has a top element and a bottom element ) is an element so that and .
In particular, since the top subspace is itself, and the bottom subspace is we can see that the orthogonal complement satisfies these properties. The intersection is trivial, since the inner product is positive-definite as a bilinear form, and the sum is all of , as we’ve seen.
Even more is true. The orthogonal complement is involutive (when is finite-dimensional), and order-reversing, which makes it an “orthocomplement”. In lattice-theory terms, this means that , and that if then .
First, let’s say we’ve got two subspaces of . I say that . Indeed, if is a vector in then it for all . But since any is also a vector in , we can see that , and so as well. Thus orthogonal complementation is
Now let’s take a single subspace of , and let be a vector in . If is any vector in , then by the (conjugate) symmetry of the inner product and the definition of . Thus is a vector in , and so . Note that this much holds whether is finite-dimensional or not.
On the other hand, if is finite-dimensional we can take an orthonormal basis of and expand it into an orthonormal basis of all of . Then the new vectors form a basis of , so that . A vector in is orthogonal to every vector in exactly when it can be written using only the first basis vectors, and thus lies in . That is, when is finite-dimensional. | <urn:uuid:b195ff4f-68c6-4b12-b7cc-e9f02fce6c71> | 2.875 | 507 | Personal Blog | Science & Tech. | 58.036136 | 1,423 |
A Guest Post by Basil Copeland
Like many of Anthony’s readers here on WUWT, I’ve been riveted by all the revelations and ongoing discussion and analysis of the CRUtape Letters™ (with appropriate props to WUWT’s “ctm”). It might be hard to imagine that anyone could add to what has already been said, but I am going to try. It might also come as a surprise, to those who reckon me for a skeptic, that I do not think that anything was revealed that suggests that the global temperature data set maintained by CRU was irreparably damaged by these revelations. We’ve known all along that the data may be biased by poor siting issues, handling of station dropout, or inadequate treatment of UHI effects. But nothing was revealed that suggests that the global temperature data sets are completely bogus, or unreliable.
I will return to the figure at the top of this post below, but I want to introduce another figure to illustrate the previous assertion:
This figure plots smoothed seasonal differences (year to year differences in monthly anomalies) for the four major global temperature data sets: HadCRUT, GISS, UAH and RSS. With the exception of the starting months of the satellite era (UAH and RSS), and to a lesser degree the starting months of GISS, there is remarkable agreement between the four data sets – where they overlap – especially with respect to the cyclical pattern of natural climate variation. This coherence gives me confidence that while there may be problems with the land-sea data sets, they accurately reflect the general course of natural climate variation over the period for which we have instrumental data. While we need to continue to insist upon open access to the data and methods used to chronicle global and regional climate variation, and refine the process to remove the biases which may be present from trying to make the data fit the narrative of CO2 induced global warming, it would be wrong to conclude that the “CRUtape Letters” prove that global warming does not exist. That has never really been the issue. The issue has been the extent of warming (have the data been distorted in a way that would overstate the degree of warming?), the extent to which it is the result of natural climate variation (as opposed to human influences), and the extent to which it owes to human influences other than the burning of fossil fuels (such as land use/land cover changes, urban heat islands, etc.). And flowing from this, the issue has been whether we really know enough to justify the kind of massive government programs said to be necessary to forestall climate catastrophe.
Figure 2 plots the composite smooth against the backdrop of the monthly seasonal differences of the four global temperature data sets:
Many readers may recognize the familiar episodes of warming and cooling associated with ENSO and volcanic activity in the preceding figure. With a little more smoothing, we get a pattern like that depicted in Figure 3, which other readers may notice looks a lot like the cycles that Anthony and I have attributed to lunar and solar influences (they are the same):
In either case, the thing to note is that over time climate goes through repetitive episodes of warming and cooling. You have to look closely on Figures 2 and 3 – it is much clearer in Figure 1 – but episodes of warming exist when the smooth is above zero, and cooling episodes exist when the smooth is below zero. Remember, by design, the smooth is not a plot of the temperature itself, but of the trend in the temperature, i.e. the year to year change in monthly temperatures. The intent is to demonstrate and delineate the range of natural climate variation in global temperatures. It shows, in effect, the trend in the trend – up and down over time, with natural regularity, while perhaps also trending generally upward over time.
Which brings us to Figure 1. Here we are focusing in on the last 30 years, and a forecast to 2050 derived by a simple linear regression through the (composite) smooth of Figure 3. (Standard errors have been adjusted for serial correlation.) There has been an upward trend in the global temperature trend, and when this is projected out to 2050, the average is 0.114°C per decade ± 0.440°C per decade. Yes, you read that right: ± 0.440°C per decade. Broad enough to include both the worst imaginations of the IPCC and the CRU crowd, as well as negative growth rates, i.e. global cooling. Because if the truth be told, natural climate variation is so – well, variable – that no one can say with any kind of certainty what the future holds with respect to climate change. Be skeptical of any statistical claims to the contrary.
I think we can say, however, with reasonable certainty, that earth’s climate will remain variable, and that this will frustrate the effort to blame climate change on CO2 induced AGW. Noted on the image at the top of this post is a quote from Kevin Trenberth from the CRUtape Letters™: “The fact is that we cannot account for the lack of warmth at the moment, and it is a travesty that we can’t.” Trenberth betrays a subtle bias here – he cannot acknowledge the recent period of global cooling. It is, rather, “a lack of warmth.” But he is right that it is a “travesty” that we cannot fully account for the ebb and flow of earth’s energy balance, and ultimately, climate change. I think Trenberth just sees it as a lack of monitoring methods or devices. But I think there still remains a considerable lack of knowledge, or understanding, about the mechanics of natural climate variation. If you look carefully at Figure 1, you will notice that there seem to be upper and lower limits to the range of natural climate variability. On the scale depicted in Figure 1 (the scale is different with other degrees of smoothing), when warming reaches a limit of approximately 0.08-0.10°C per year, the warming slows down, and eventually a period of cooling takes place, always with the space of just a few years. Homeostasis, anyone? While phenomenon like ENSO are the effect of this regularity in natural climate variation, they are not the cause of it.
In my opinion, what is the real travesty of the global warming ideology is the hijacking of climate science in the service of a research agenda that has prevented science from investigating the full range of natural climate variation, because that would be an inconvenient truth. We see this, quite clearly, in the CRUtape Letters™ where the Medieval Warm Period is just “putative,” and a rather inconvenient truth that needs to be suppressed. Or the “1940′s blip” that implies that global temperatures increased just as rapidly in the early part of the 20th Century, as they did at the end of the 20th Century, an inconvenient truth at odds with the narrative preferred by the IPCC.
It is a truism that “climate varies on all time scales.” With respect to the variability demonstrated here, I’m convinced that someday it will be acknowledged that variability on this scale is dominated by lunar and solar influences. On longer scales, such as the ebb and flow from the Medieval Warm Period, through the Little Ice Age, and now into the “Modern Warm Period,” I do not think climate science yet has any real understanding of the underlying causes of such climate change. If we are, as seems possible, on the verge of a Dalton or Maunder type minimum in solar activity, we may eventually have an answer to whether solar activity can account for centennial scale changes in earth’s climate. And I do think it is reasonable to conclude, at the margin, that human activity has had some influence. It is hard to imagine population growing from one to six billion over the past one and a half centuries without some effect. Most likely, the effect is on local and regional scales, but this might add up to a discernible impact on global temperature. But until all of the forces that determine the full range of natural climate variability are understood better than they are now, there is no scientific justification for the massive overhaul of economic and government structures being promoted under the guise of climate change, or global warming. | <urn:uuid:24d44c19-90b4-46bb-b61f-344b51a9bdc3> | 2.5625 | 1,740 | Personal Blog | Science & Tech. | 41.56506 | 1,424 |
Scientific American published an article summarizing what I’ve written about for a couple of years: the IPCC’s projections aren’t 100% correct. Gasp – the horror! But, contrary to what skeptics think, the direction the IPCC’s reports were wrong are opposite of what they claim. The projections time and again underestimated future changes. I think a valid complaint, and one I’ve made many times myself, is that the IPCC process is too conservative – it takes too long to get the kind of consensus they’re looking for. Rapidly changing conditions are not well handled by the IPCC process. When there is conflicting evidence of something, the IPCC has tended to say nothing in an effort not to upset anybody. The good news is there are indications this is changing. The list:
This is the biggest one. Too many studies focused on moderate emission pathways, when yearly updates showed our actual emissions were on the high range of those considered by the IPCC. I actually posted on this two days ago: CO2 Emissions Continue to Track At Top of IPCC Range. This has implications for every other process that follows.
More accurately, energy in the climate system is the variable of interest. It is easy to point out that temperatures since 2000 haven’t increased as much as projected. It is also easy to compare observed trends since 1980 and claim AR4 models over-predicted temperature rise. This conflates a couple of issues: the AR4 wasn’t used to project since 1980. More importantly, the difference between observed trends since 1980 and projected temperatures from half of the AR4 models was less than 0.04°C (0.072°F). That’s pretty darned small. With respect to the trend since 2000, the real issue is energy gain. The vast majority of energy has accumulated in the oceans:
More specifically, if the heat is transported quickly to the deep ocean (>2000ft), the sea surface temperature doesn’t increase rapidly. Nor does atmosphere or land temperatures change. This is true at least in the short-term. When the ocean transports this heat from the deep back to the surface, we should be able to more easily measure that heat. Put simply, the temporary hiatus of temperature rise is just that: temporary. Are we prepared for when that hiatus ends?
The relatively small increase in near-surface air and land temperatures is thus explained. The IPCC never claimed the 4.3° to 11.5°F temperature rise (AR4 projection) would happen by 2020 – it is likely to happen by 2100. Expect more synergy between projected temperatures and observed temperatures in the coming years. Also remember that climate is made up of long-term weather observations.
Additionally, aerosols emitted by developing nations have been observed to reflect some of the incoming solar radiation back to space. Once these aerosols precipitate out of the atmosphere or are not emitted at some point in the future, the absorption of longwave radiation by the remaining greenhouse gases will be more prominent. The higher the concentration of gases, the more radiation will be absorbed and the faster the future temperature rise is likely to be. These aerosols are thus masking the signal that would otherwise be measured if they weren’t present.
3. Arctic Meltdown
This is the big story of 2012. The Arctic sea ice melted in summer 2012 to a new record low: an area the size of the United States melted this year! Even as late as 2007 (prior to the previous record-low melt), the IPCC projected that Arctic ice wouldn’t decrease much until at least 2050. Instead, we’re decades ahead of this projection – despite only a relatively small global temperature increase in the past 25 years (0.15°C or so). What will happen when temperatures increase by multiple degrees Centigrade?
4. Ice sheets
These are the land-based sheets, which are melting up to 100 years faster than the IPCC’s first three reports. 2007′s report was the first to identify more rapid ice sheet melt. The problem is complex cryospheric dynamics. Understandably, the most remote and inhospitable regions on Earth are the least studied. Duh. That’s changing, with efforts like the fourth International Polar Year, the results of which are still being studied and published. Needless to say, modern instrumentation and larger field campaigns have resulted in advances in polar knowledge.
5. Sea Level Rise
It’s nice being relevant. I just posted something new on this yesterday: NOAA Sea-Level Rise Report Issued – Dec 2012. The 3.3mm of sea-level rise per year is higher than the 2001 report’s projection of 2mm per year. Integrated over 100 years, that 1mm difference results in 4″ more SLR. But again, with emission and energy underestimates, the 3.3mm rate of SLR is expected to increase in future decades, according to the latest research. Again, another mm per year results in another 4″ 100 years from now. Factors affecting SLR that the IPCC didn’t address in 2007 includes global ocean warming (warmer water takes up more volume), faster ice sheet melt, and faster glacial melt. Additionally, feedback mechanisms are still poorly understood and therefore not well represented in today’s state-of-the-art models.
6. Ocean Acidification
The first 3 IPCC reports didn’t even mention this effect. In the past 250 years, ocean acidity has increased by 30% – not a trivial amount! As the article points out, research on this didn’t even start until after 2000.
7. Thawing Tundra
Another area that is not well-studied and therefore not well understood. The mechanics and processes need to be observed so they can be modeled more effectively. 1.5 trillion tons of carbon are locked away in the currently frozen tundra. If these regions thaw, as is likely since the Arctic has observed the most warming to date, methane could be released to the atmosphere. Since methane acts as a more efficient GHG over short time frames, this could accelerate short-term warming much more quickly than projected (See temperatures above). The SciAm article points out the AR5, to be released next year, will once again not include projections on this topic.
8. Tipping Points
This is probably the most controversial aspect of this list. Put simply, no one knows where potential tipping points exist, if they do at all. The only way we’re likely to find out about tipping points is by looking in the past some day in the future. By then, of course, moving back to other side of the tipping point will be all but impossible on any time-frame relevant to people alive then.
There are plenty of problems with the UNFCCC’s IPCC process. Underestimation of critical variables is but one problem plaguing it. Blame it on scientists who, by training, are very conservative in their projections and language. They also didn’t think policymakers would fail to curtail greenhouse gas emissions. Do policymakers relying on the IPCC projections know of and/or understand this nuance? If not, how robust will their decisions be? The IPCC process needs to be more transparent, including allowing more viewpoints to be expressed, say in an Appendix compendium. The risks associated with underestimating future change are higher than the opposite. | <urn:uuid:05b24e17-d7d4-4e56-baaf-4644f0253fbf> | 2.8125 | 1,535 | Personal Blog | Science & Tech. | 54.107649 | 1,425 |
Marc Buie of the Lowell Observatory describes observations of Pluto in 1996 and their comparison with the 1994 observations that were reported in a 1996 press release. They showed structure on Pluto's surface. He has also compiled a list of "good Pluto WWW Pages," assigning grades of A+ to C for them.
JPL continues to plan a Pluto Express mission to arrive at Pluto approximately AD 2010 to make close-up observations and to measure Pluto's atmosphere before it freezes out and settles onto the surface.
The discovery of over a dozen objects orbiting the Sun beyond Neptune's orbit makes Pluto less special. The observational status of Pluto is discussed in the text, as well as the question "Is Pluto a Planet?" Over a dozen published sources refer to Pluto as something less than a planet, but Clyde Tombaugh is quoted in a summary article in USA Today (March 4, 1996, pp. 1-2) as saying "Pluto is far bigger than any asteroid.... The kids want Pluto to be a planet. I get hundreds of letters. [Talk of demoting Pluto] makes them mad."
My own position? I have an interest in history and historical astronomy, and that sways me to the side of still saying that we have nine planets, with Pluto as one of them.
Kaare Aksnes, president of the International Astronomical Union's panel on nomenclature, is quoted in the same USA Today article as saying, "I'm pretty sure all the members would be against demoting Pluto in this way." Even though the latest data minimize the importance of Pluto on a planetary scale, Aksnes continues, "we would do Pluto and Tombaugh an injustice and create confusion if we were to reclassify Pluto now. I believe that most people, be they astronomers or not," would agree." Though the Aksnes committee does not actually have authority to decide the issue, it is perhaps the nearest to the topic of IAU committees.
The Hubble Space Telescope has imaged Pluto for the first time at sufficiently high resolution that we can see surface features. The resolution on Pluto is about 100 km, so there are two dozen pixels across the image. The two views show opposite hemispheres. We cannot know exactly what the dark and light areas are. They may be basins or impact craters. Probably, most of the light regions on the surface are regions of frost. These regions would change with Pluto's seasons.
A movie is also available showing Pluto's rotation.
Credit: Alan Stern (Southwest Research Institute), Marc Buie (Lowell Observatory), NASA and ESA
Fran Bagenal at the University of Colorado has assembled a World Wide Web homepage for Pluto, giving both history and current science. Links are also provided to other Pluto homepages, including the Jet Propulsion Laboratory's Pluto Express, the Pluto subsection of the Los Alamos National Laboratory's set of planet homepages, and maps of Pluto and Charon computed by Marc Buie of the Lowell Observatory. | <urn:uuid:50e9f3b6-a817-42e1-b7c1-88d70e105e66> | 3.109375 | 609 | Knowledge Article | Science & Tech. | 46.01259 | 1,426 |
Search the Archives
Latest News in Science: August 1998
- No stories published
- Genes - the 'book' of life?
- Space policy small step for Australia
- Hunting for the real 'Planet X'
- Earthquake scientists' trial splits experts
- Australia pushes for Antarctic protection
Wednesday, 15 May 2013
Record-setting gamma-ray burst shocks astronomers. Also; study suggests water on Earth and Moon came from the same source, and Earth's inner-core out of sync with the rest of the planet. | <urn:uuid:9f02bbf4-72c2-4f60-93b3-476b8a84bf6e> | 2.578125 | 109 | Content Listing | Science & Tech. | 51.13658 | 1,427 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Thursday, 16 December 2010
StarStuff Podcast After a 33-year odyssey, NASA's Voyager 1 spacecraft reaches the very edge of our solar system. Plus: new theory explains rings and ice moons of Saturn; mysterious carbon-rich planet raises questions about how planets form; and test-flight success for Falcon-9 rocket.
Tuesday, 23 March 2010 25
Great Moments in Science Dr Karl talks a lot because talking about stuff is his job. Even so, he was very surprised when he heard that women have more to say than men.
Thursday, 1 October 2009
Dr Karl on triple j Why do we get an urge to wee when we hear the sound of running water? How did water come to be on Earth? And do you lose weight when you pass wind?
Thursday, 24 September 2009
Dr Karl on triple j Why did the sun appear blue in the dust storm yesterday? What causes the white marks you can get on your fingernails? How can women who haven't given birth lactate? And can you smell danger?
Thursday, 17 September 2009
Dr Karl on triple j How does stainless steel soap work? Can animals get sunburn and skin cancer? Who invented the concept of time? And why do you see different colours and shapes when you close your eyes?
Thursday, 10 September 2009
Dr Karl on triple j Could my child love eating dirt because of an iron deficiency? Why do spacecraft re-enter the atmosphere so fast? And why do the mushrooms in my paddocks grow in big circles?
Thursday, 3 September 2009
Dr Karl on triple j Why do tattoos become lumpy when the weather changes? If you were allergic to cats would you also be allergic to lions and tigers? Can the image resolution of a digital camera beat the human eye?
Thursday, 27 August 2009
Dr Karl on triple j Why do you vomit when you overexercise? Can we create AC electricity from sunlight? Do you actually see red when you're angry? And why do people feel heavier when they're asleep?
Thursday, 20 August 2009
Dr Karl on triple j What causes bags under your eyes? Does using sunscreen reduce your body's ability to produce Vitamin D? And what is a shooting star and why does it shoot?
Thursday, 13 August 2009
Dr Karl on triple j Dr Karl debunks some persistent internet hoaxes. Plus: How are scientists able to reconstruct what a person looked like from only their skeleton? And how does hermaphroditism occur? | <urn:uuid:e4fac495-a61c-4e9f-95b0-b3f65fb09b40> | 2.875 | 527 | Content Listing | Science & Tech. | 74.886751 | 1,428 |
C++ is an object-oriented enhancement of the C programming language and is becoming the language of choice for serious software development.
C++ has crossed the Single Book Complexity Barrier. The individual features are not all that complex, but when put together in a program they interact in highly non-intuitive ways. Many books discuss each of the features separately, giving readers the illusion that they understand the language. But when they try to program, they're in for a painful surprise (even people who already know C).
C++: The Core Language is for C programmers transitioning to C++. It's designed to get readers up to speed quickly by covering an essential subset of the language.
The subset consists of features without which it's just not C++, and a handful of others that make it a reasonably useful language. You can actually use this subset (using any compiler) to get familiar with the basics of the language.
Once you really understand that much, it's time to do some programming and learn more from other books. After reading this book, you'll be far better equipped to get something useful out of a reference manual, a graphical user interface programming book, and maybe a book on the specific libraries you'll be using. (Take a look at our companion book, Practical C++ Programming.)
C++: The Core Language includes sidebars that give overviews of all the advanced features not covered, so that readers know they exist and how they fit in. It covers features common to all C++ compilers, including those on UNIX, Windows NT, Windows, DOS, and Macintosh.
Comparison: C++: The Core Language vs. Practical C++ Programming
O'Reilly's policy is not to publish two books on the same topic for the same audience. We'd rather spend twice the time on making one book the industry's best. So why do we have two C++ tutorials? Which one should you get?
The answer is they're very different. Steve Oualline, author of the successful book Practical C Programming, came to us with the idea of doing a C++ edition. Thus was born Practical C++ Programming. It's a comprehensive tutorial to C++, starting from the ground up. It also covers the programming process, style, and other important real-world issues. By providing exercises and problems with answers, the book helps you make sure you understand before you move on.
While that book was under development, we received the proposal for C++: The Core Language. Its innovative approach is to cover only a subset of the language -- the part that's most important to learn first -- and to assume readers already know C. The idea is that C++ is just too complicated to learn all at once. So, you learn the basics solidly from this short book, which prepares you to understand some of the 200+ other C++ books and to start programming.
These two books are based on different philosophies and are for different audiences. But there is one way in which they work together. If you are a C programmer, we recommend you start with C++: The Core Language, then read about advanced topics and real-world problems in Practical C++ Programming. | <urn:uuid:4db01cfa-2e93-426c-92cc-863dea62f75a> | 3.203125 | 655 | Product Page | Software Dev. | 55.733875 | 1,429 |
History and Acknowledgements
Smart pointers are objects which store pointers to dynamically allocated (heap) objects. They behave much like built-in C++ pointers except that they automatically delete the object pointed to at the appropriate time. Smart pointers are particularly useful in the face of exceptions as they ensure proper destruction of dynamically allocated objects. They can also be used to keep track of dynamically allocated objects shared by multiple owners.
Conceptually, smart pointers are seen as owning the object pointed to, and thus responsible for deletion of the object when it is no longer needed.
The smart pointer library provides five smart pointer class templates:
|scoped_ptr||<boost/scoped_ptr.hpp>||Simple sole ownership of single objects. Noncopyable.|
|scoped_array||<boost/scoped_array.hpp>||Simple sole ownership of arrays. Noncopyable.|
|shared_ptr||<boost/shared_ptr.hpp>||Object ownership shared among multiple pointers|
|shared_array||<boost/shared_array.hpp>||Array ownership shared among multiple pointers.|
|weak_ptr||<boost/weak_ptr.hpp>||Non-owning observers of an object owned by shared_ptr.|
|intrusive_ptr||<boost/intrusive_ptr.hpp>||Shared ownership of objects with an embedded reference count.|
These templates are designed to complement the std::auto_ptr template.
They are examples of the "resource acquisition is initialization" idiom described in Bjarne Stroustrup's "The C++ Programming Language", 3rd edition, Section 14.4, Resource Management.
A test program, smart_ptr_test.cpp, is provided to verify correct operation.
A page on compatibility with older versions of the Boost smart pointer library describes some of the changes since earlier versions of the smart pointer implementation.
A page on smart pointer timings will be of interest to those curious about performance issues.
A page on smart pointer programming techniques lists
some advanced applications of
These smart pointer class templates have a template parameter, T, which specifies the type of the object pointed to by the smart pointer. The behavior of the smart pointer templates is undefined if the destructor or operator delete for objects of type T throw exceptions.
T may be an incomplete type at the point of smart pointer declaration. Unless otherwise specified, it is required that T be a complete type at points of smart pointer instantiation. Implementations are required to diagnose (treat as an error) all violations of this requirement, including deletion of an incomplete type. See the description of the checked_delete function template.
Note that shared_ptr does not have this restriction, as most of its member functions do not require T to be a complete type.
The requirements on T are carefully crafted to maximize safety yet allow handle-body (also called pimpl) and similar idioms. In these idioms a smart pointer may appear in translation units where T is an incomplete type. This separates interface from implementation and hides implementation from translation units which merely use the interface. Examples described in the documentation for specific smart pointers illustrate use of smart pointers in these idioms.
Note that scoped_ptr requires that T be a complete type at destruction time, but shared_ptr does not.
Several functions in these smart pointer classes are specified as having "no effect" or "no effect except such-and-such" if an exception is thrown. This means that when an exception is thrown by an object of one of these classes, the entire program state remains the same as it was prior to the function call which resulted in the exception being thrown. This amounts to a guarantee that there are no detectable side effects. Other functions never throw exceptions. The only exception ever thrown by functions which do throw (assuming T meets the common requirements) is std::bad_alloc, and that is thrown only by functions which are explicitly documented as possibly throwing std::bad_alloc.
Exception-specifications are not used; see exception-specification rationale.
All the smart pointer templates contain member functions which can never throw
exceptions, because they neither throw exceptions themselves nor call other
functions which may throw exceptions. These members are indicated by a comment:
// never throws.
Functions which destroy objects of the pointed to type are prohibited from throwing exceptions by the common requirements.
January 2002. Peter Dimov reworked all four classes, adding features, fixing bugs, and splitting them into four separate headers, and added weak_ptr. See the compatibility page for a summary of the changes.
May 2001. Vladimir Prus suggested requiring a complete type on destruction. Refinement evolved in discussions including Dave Abrahams, Greg Colvin, Beman Dawes, Rainer Deyke, Peter Dimov, John Maddock, Vladimir Prus, Shankar Sai, and others.
November 1999. Darin Adler provided operator ==, operator !=, and std::swap and std::less specializations for shared types.
September 1999. Luis Coelho provided shared_ptr::swap and shared_array::swap
May 1999. In April and May, 1999, Valentin Bonnard and David Abrahams made a number of suggestions resulting in numerous improvements.
October 1998. Beman Dawes proposed reviving the original semantics under the names safe_ptr and counted_ptr, meeting of Per Andersson, Matt Austern, Greg Colvin, Sean Corfield, Pete Becker, Nico Josuttis, Dietmar Kühl, Nathan Myers, Chichiang Wan and Judy Ward. During the discussion, the four new class names were finalized, it was decided that there was no need to exactly follow the std::auto_ptr interface, and various function signatures and semantics were finalized.
Over the next three months, several implementations were considered for shared_ptr, and discussed on the boost.org mailing list. The implementation questions revolved around the reference count which must be kept, either attached to the pointed to object, or detached elsewhere. Each of those variants have themselves two major variants:
Each implementation technique has advantages and disadvantages. We went so far as to run various timings of the direct and indirect approaches, and found that at least on Intel Pentium chips there was very little measurable difference. Kevlin Henney provided a paper he wrote on "Counted Body Techniques." Dietmar Kühl suggested an elegant partial template specialization technique to allow users to choose which implementation they preferred, and that was also experimented with.
But Greg Colvin and Jerry Schwarz argued that "parameterization will discourage users", and in the end we choose to supply only the direct implementation.
Summer, 1994. Greg Colvin proposed to the C++ Standards Committee classes named auto_ptr and counted_ptr which were very similar to what we now call scoped_ptr and shared_ptr. [Col-94] In one of the very few cases where the Library Working Group's recommendations were not followed by the full committee, counted_ptr was rejected and surprising transfer-of-ownership semantics were added to auto_ptr.
[Col-94] Gregory Colvin, Exception Safe Smart Pointers, C++ committee document 94-168/N0555, July, 1994.
[E&D-94] John R. Ellis & David L. Detlefs, Safe, Efficient Garbage Collection for C++, Usenix Proceedings, February, 1994. This paper includes an extensive discussion of weak pointers and an extensive bibliography.
$Date: 2004/10/05 15:45:50 $
Copyright 1999 Greg Colvin and Beman Dawes. Copyright 2002 Darin Adler. Permission to copy, use, modify, sell and distribute this document is granted provided this copyright notice appears in all copies. This document is provided "as is" without express or implied warranty, and with no claim as to its suitability for any purpose. | <urn:uuid:9c21b6f0-26b8-4faf-92b6-5f9ffbfec117> | 3.125 | 1,656 | Documentation | Software Dev. | 37.630115 | 1,430 |
Cities Across U.S. Bore Brunt of Record-Setting July Heat
Preliminary climate data for July shows that many cities across the U.S. experienced record-setting months, with temperatures propelled upwards by a massive area of High Pressure, more popularly known as a Heat Dome, that kept cooling rains at bay.
For example, in St. Louis, Mo., where the year-to-date has been the warmest such period on record, the city has already exceeded its all-time record for the greatest number of days with high temperatures of 105°F or above, beating the 10 such days that occurred during the Dust Bowl in 1934.
In Wichita, Kan., July was the fourth warmest month on record, tied with 1934. Wichita recorded 21 100-degree days during the month, which was the second greatest such tally ever recorded there during the month of July.
July was also Denver’s warmest month on record, with an average temperature of 78.9°F, beating the previous record that was set in 1934 by more than a full degree. The "Mile High City" had seven 100-degree days, and there were 27 days with a high temperature of 90°F or higher.
And for many in the lower 48 states, August is coming in the same way July ended — dangerously hot, with only sporadic rain showers. In Oklahoma City, high temperatures on Wednesday were forecast to “flirt with” the city’s all-time high temperature record of 113°F, according to the National Weather Service (NWS). Tulsa, Okla., is likely to come close to its all-time record of 115°F as well.
Of all the areas that have been hammered by the intense heat, the Mississippi Ozarks and southeast Kansas may take the prize for enduring the worst of it, at least during the past two weeks, when a brutal combination of high heat and a pronounced lack of rainfall took hold. In Joplin, Mo., the average high temperature during July was 99.7°F, and the city only received a trace of rainfall. Joplin’s average temperature during July was 6.4°F above average, and the month ranked as the city’s fourth-warmest on record.
July's heat also extended northward to Chicago, where the "Windy City" experienced its third-warmest July, with an average temperature of 81.1°F, which was 7.1°F above normal. In Rockford, Ill., July was the warmest such month on record, with an average temperature that was 7.0°F above average.
In Oklahoma City, high temperatures in 2012 (red line) have been well above average. Large grey background is the all-time temperature range. Dashed lines are 15 day running mean temperature, and the small grey box shows daily norms. Click on the image for a larger version. Credit: Patrick Marsh.
On the East Coast, Washington D.C. had its second-hottest July, exceeded only by July 2011, which was 0.5°F warmer. This July, however, set new standards for the greatest number of 100-degree days, with seven, and tied 1930 for the longest string of consecutive 100-degree days, with four. According to the Weather Service, one-third of July days now have record high-minimum temperatures that were set in either 2010, 2011, or 2012, indicating that each of the past three Julys have featured unusually warm overnight low temperatures.
The record warm July comes in a year that has been extraordinarily warm, particularly in the U.S., where record daily high temperatures have been outnumbering record daily low temperatures by a nearly 9-to-1 ratio. As of June, the latest month for which figures are available, we've experienced the 328th month in a row — that's more than 27 years — with global temperatures above the 20th century average. The last month to come in below average was February of 1985.
For the U.S., the warm July follows a warmer-than-average June, which came on the heels of the warmest spring on record, which in turn was the culmination of the warmest March, third-warmest April, and second-warmest May. This year marked the first time that all three months during the spring season ranked among the 10 warmest, since records began in 1895. The heat has been accompanied by, and is also feeding off of, an expanding and intense drought, which is now covering a majority of the country. | <urn:uuid:966425a8-c469-403f-aef4-d2194614a667> | 3.046875 | 946 | News Article | Science & Tech. | 62.266865 | 1,431 |
NSMC carries out activities of theoretical and experimental researches on radiation transmission in the atmosphere, the algorithm for meteorological satellite data processing as well as the application of meteorological satellite data. These researches have in turn supported the development of NSMC, promoted the meteorological satellite data application in China and also the international cooperation. The main functions of NSMC include:
NSMC has facilities of satellite meteorology research, system design, computer system, satellite, operation control, and operational and service system support. The three ground stations of Beijing, Guangzhou, Urumqi are also under the supervision of NSMC. The ground system consisted of three receiving stations and a data processing center has fulfilled the tasks of data receiving and processing. By using the ground system, NSMC receives and processes meteorological satellite data from both Chinese and foreign satellites. These data have played an important role in the fields of weather forecasting, natural disaster monitoring and national economy. To help weather stations and various users to receive and use the meteorological satellite data directly, the NSMC is engaged in the researches of satellite data receiving and communication techniques and manufacture of both hardware and software for various meteorological satellite data applications systems.
Meteorological satellite operators coordinate their activities on a global scale through participation in the Coordination Group for Meteorological Satellites (CGMS) which meets once a year. The China Meteorological Administration has participated in the CGMS since 1989. Other members include Japan, Russia, the USA and EUMETSAT. The World Meteorological Organization is also a member, representing the key user community. | <urn:uuid:30215c53-89c1-4d2e-b659-786f3a8b1197> | 2.546875 | 321 | About (Org.) | Science & Tech. | -0.211302 | 1,432 |
Virtual file system Part 1
Virtual File System is an interface providing a clearly defined link between the operating system kernel and the different File Systems. The VFS supplies the applications with the system calls for file management (like “open”, “read”, “write” etc.), maintains internal data structures (the administrative data for maintaining the integrity of the File System), and passes tasks onto the appropriate actual File System. Another important job of the VFS is, performing standard actions. For example, as a rule, no File System implementation will actually provide an lseek() function, as the functions of lseek() are provided by a standard action of the VFS.
Kernel’s representation of the File Systems
The representation or layout of data on a floppy disk, hard disk or any other storage media may differ considerably from one implementation of File System to another. But the actual representation of this data in Linux kernel’s memory is the same for all File System implementations. The Linux management structures for the File Systems are similar to the logical structure of a Unix File System.
The VFS calls the file-system-specific functions for various implementations to fill up these structures. These functions are provided by every File System implementation and are made known to the VFS via the function register_filesystem(). This function sets up the file_system_type structure it has passed, in a singly linked list headed by the pointer “file_systems”. The file_system_type structure gives information about a specific File System implementation. The structure is as follows:
struct super_block *(*read_super)(struct super_block *, void *, int);
struct file_system_type *next;
· The function “read_super(..)” forms the mount interface, i.e. it is only via this function that further functions of the File System implementation will be made known to the VFS. It takes three parameters:
* A super_block structure in which the data relevant to this instance of File System implementation is filled up.
* A character string (in this case void *), which contains further mount options for the file system.
* A flag, which is used to indicate whether unsuccessful mounting should be reported. This flag is used only by the kernel function mount_root(), as this calls all the read_super() functions present in the various File System implementations.
* The “name” field contains the name of the actual File System. | <urn:uuid:cdb2f748-eadb-46e4-a8f9-7ce30a54c3bf> | 3.640625 | 521 | Documentation | Software Dev. | 34.966883 | 1,433 |
Mission: To observe, understand and model the hydrological cycle and energy fluxes in the Earth's atmosphere and at the surface.
The Global Energy and Water Cycle Exchanges Project (GEWEX) is an integrated
program of research, observations, and science activities that focuses on the atmospheric, terestrial, radiative, hydrological, coupled processes, and interactions that determine the global and regional hydrological cycle, radiation and energy transitions, and their involvement in climate change. The International
GEWEX Project Office (IGPO) is the focal point for the planning and implementation of
all GEWEX activities.
The goal of GEWEX is to reproduce and
predict, by means of suitable models, the variations of the global hydrological regime,
its impact on atmospheric and surface dynamics, and variations in regional hydrological
processes and water resources and their response to changes in the environment, such as
the increase in greenhouse gases. GEWEX will provide an order of magnitude improvement in
the ability to model global precipitation and evaporation, as well as accurate assessment
of the sensitivity of atmospheric radiation and clouds to climate change.
GEWEX is the core project in the World Climate Research Programme (WCRP) concerned
with studying the dynamics and thermodynamics of the atmosphere and interactions with the
By virtue of this central role, GEWEX has links with all other WCRP
projects, in particular, the Climate Variability and
Predictablity (CLIVAR) Project, the Stratospheric
Processes and their Role in Climate (SPARC) Project, and the Climate and Cryosphere (CIiC) Project.
GEWEX plays a central role in the interaction
of WCRP with many international organizations and programs dealing with climate
observations. As part of WCRP's input to the Group on Earth Observations (GEO) Global Earth Observation System of Systems (GEOSS), GEWEX brings its unique expertise in
two specific societal benefit areas, climate and water. GEWEX is leading in the
development of plans for the global data reprocessing effort and a observation strategy,
and serves as a demonstration project for future climate observational networks in GEOSS. GEWEX supports the Integrated Global
Water Cycle Observations (IGWCO) Theme developed under the Integrated Global Observing Strategy Partnership
(IGOS-P) and currently part of GEO (activities were merged in 2008).
GEWEX Research Foci
GEWEX also maintains close links to the International Land Ecosystem-Atmospheric Processes
Study (iLEAPs) of the International
Geosphere-Biosphere Program (IGBP).
GEWEX is composed of several components designed to address
the elements of the scientific focus, the global energy and water cycle.
- Data and Assessment
- Determine atmospheric and surface radiation fluxes and heating with the precision needed
to predict transient climate variations and decadal-to-centennial climate trends.
- Demonstrate - in particular at the regional scale - skill in predicting changes in water resources and soil moisture on time
scales up to seasonal and annual as an integral part of the climate system.
and Prediction - Develop accurate global model formulation of the energy and water
budget and demonstrate predictability of their variability and response to climate
forcing. See the Global Atmospheric System Studies (GASS) Panel and the Global Land/Atmosphere System Study (GLASS) Panel.
GEWEX Cross-Cutting Themes:
In the implementation of GEWEX, priority continues to be
given to three main cross-cutting themes:
- Assembly of global climatological data sets based on merging
in situ measurement and satellite observations in order to determine the atmospheric and
surface fluxes that drive the climate system, to provide benchmark values for the present
climate, to document interannual variability and climate change, and to validate models.
- Atmospheric and land surface process studies to improve
understanding of the main thermodynamic forces driving the climate system of energy
exchanges in the atmosphere, characterizing the regional global and water energy budgets,
to evaluate the role of evaporation and precipitation processes in regional rainfall
anomalies, to examine changes in soil moisture and ground water balance, and to improve
paramertization of these processes in models.
- Application of GEWEX data and process studies in models as a
basis for developing extended range precipitation forecasts, studying water resource
variability, improving the realism of simulations of the climate response to anthropogenic
forcing and global warming assessments, and for providing input to other WCRP activities. | <urn:uuid:9129ea87-1caf-48d4-8149-fa90e6c2d22a> | 2.703125 | 972 | About (Org.) | Science & Tech. | -3.076509 | 1,434 |
Every user who can log in on the system is identified by a unique number called the user ID. Each process has an effective user ID which says which user's access permissions it has.
Users are classified into groups for access control purposes. Each process has one or more group ID values which say which groups the process can use for access to files.
The effective user and group IDs of a process collectively form its persona. This determines which files the process can access. Normally, a process inherits its persona from the parent process, but under special circumstances a process can change its persona and thus change its access permissions.
Each file in the system also has a user ID and a group ID. Access control works by comparing the user and group IDs of the file with those of the running process.
The system keeps a database of all the registered users, and another database of all the defined groups. There are library functions you can use to examine these databases. | <urn:uuid:0212885e-c820-4537-b9f2-f8f7408a1bff> | 2.734375 | 193 | Documentation | Software Dev. | 49.293339 | 1,435 |
Windhunter is a project for hydrogen mass production based on electrolysis of the sea water. It consists of a platform that sustains some wind mills. They produce electricity following that electricity produces hydrogen and oxygen through the well-known electrolysis process. The wind turbines have a power of 2 MegaWatt. From the experiments presented in the “hydrogen power” category on cars-and-trees.com, it is required only a small amount of current (abt 10Amps) to make enough hydrogen to run a car. Imagine the possibilities of 2MW power in continuous run. Anyway, some calculations have to be made, but it’s obvious it can generate in time great quantities of energy. For free.
The platform can be put in the sea or oceans and doesn’t interfere with any human eyesight (except maybe ships travelling across the sea).
The bad part is that its construction is not cheap at all.
You can see the whole project on http://www.windhunter.org or if your want to see specific details directly, you can download a document presenting all the facts of this project here: http://www.windhunter.org/Windhunter_Integrated_Info.doc
More like this article
Not what you were looking for? Search The Green Optimistic!
Join the Discussion4033 total comments so far. What's your opinion ?
Electrolysis of seawater produces hydrogen, for sure, but it also produces chlorine in aboundance. It might produce some oxyten as well. But what is to be done with the chlorine?The reason is that in electrolysis the most reactive elements are the most likely to be separated to the two electrodes. Since seaware contains a lot of sodium chloride, NaCl, these elements will react preferentially to hydrogen and oxygen. Consequently, the sodium will be attracted to the negative electrode, where it reacts with the H2O to produce hydrogen and sodium hydroxide. The chlorine is attracted to the positive electrode where it is released.Try it. | <urn:uuid:39ff2758-2756-4c33-9f37-2d10bc44b36a> | 3.234375 | 422 | Personal Blog | Science & Tech. | 47.978125 | 1,436 |
Grassland in Mabi County destroyed by glacial lake outburst flood (GLOF). This area used to be farmland, now it's covered with black glacial deposits after the glacial lake burst. Global warming causes Himalayan glaciers to melt at an unprecedented rate, making GLOF more frequent. Latest research (2009) indicates that in the Chinese Himalayas region, there are currently 143 glacial lakes and 44 of them are very high risk of bursting.
© © Greenpeace / Du Jiang | <urn:uuid:925035c8-4b10-42e8-90d0-bcef592b3b78> | 2.515625 | 100 | Knowledge Article | Science & Tech. | 47.438581 | 1,437 |
Despite La Nina, US sets 4 Major Heat Records in First Half of 2012
Warmest U.S. Spring On Record: NOAA
Planetark.org, June 8, 2012
So far, 2012 has been the warmest year the United States has ever seen, with the warmest spring and the second-warmest May since record-keeping began in 1895, the U.S. National Oceanic and Atmospheric Administration reported on Thursday.
Temperatures for the past 12 months and the year-to-date have been the warmest on record for the contiguous United States, NOAA said.
The average temperature for the contiguous 48 states for meteorological spring, which runs from March through May, was 57.1 degrees F (13.9 C), 5.2 degrees (2.9 C) above the 20th century long-term average and 2 degrees F (1.1 C) warmer than the previous warmest spring in 1910.
Record warmth and near-record warmth blanketed the eastern two-thirds of the country from this spring, with 31 states reporting record warmth for the season and 11 more with spring temperatures among their 10 warmest.
"The Midwest and the upper Midwest were the epicenters for this vast warmth," Deke Arndt of NOAA's Climatic Data Center said in an online video. That meant farming started earlier in the year, and so did pests and weeds, bringing higher costs earlier in the growing season, Arndt said.
"This warmth is an example of what we would expect to see more often in a warming world," Arndt said.
More long-lasting heat waves, record-high daytime temperatures and record-high overnight low temperatures are to be expected in a warming world, said Jake Crouch of NOAA's National Climatic Data Center.
CARBON DIOXIDE MILESTONE
"And that's what we're seeing," Crouch said by telephone. "We've seen it quite a bit over the last 12 months."
Alaska's spring months were 2.7 degrees F (1.5 C) cooler than average and 10.5 percent wetter and snowier, while drought spread over Hawaii, though exceptional drought was eliminated across the island state.
Warmth was evident in parts of the Arctic in May, where sea ice declined rapidly at first and then more slowly through the month, ending at below average levels for 1979-2000, according to the National Snow and Ice Data Center.
However, there was more ice cover in the Arctic in May 2012 than in May 2011, the center said on Wednesday on its website nsidc.org/arcticseaicenews/ . There was heavy ice in the Bering Sea, but unusually low ice extent in the Barents and Kara Seas.
Another Arctic measurement related to climate reached a milestone this spring, NOAA reported: the concentration of atmospheric carbon dioxide at Barrow, Alaska, reached 400 parts per million, the first time a monthly average for this greenhouse gas passed that level at a remote location.
The level of 450 ppm is regarded by many scientists and environmental activists as the upper limit the planet can afford if global temperature rise is to be kept to within 3.6 degrees F (2 C) this century. Some advocates suggest 350 ppm is a more appropriate target.
The 400 ppm mark for carbon dioxide in less remote locations, such as Cape May, New Jersey, has been reached for several years in the springtime, NOAA said in a statement.
But measurements of carbon dioxide over 400 ppm at remote sites like Barrow - and at six other remote Arctic sites - reflect long-term human emissions of the climate-warming gas, rather than direct emissions from a nearby population center.
The global monthly mean level of atmospheric carbon dioxide was about 394 ppm in April, compared to 336 ppm in 1979, pre-industrial levels of about 278 ppm and ice age levels of about 185 ppm.
Four Major Heat Records Fall in Stunning NOAA Report
Climatecentral.org, June 8, 2012
Four major heat records fell in a stunning new climate report from the National Oceanic and Atmospheric Administration (NOAA) on Thursday. The lower 48 states set temperature records for the warmest spring, largest seasonal departure from average, warmest year-to-date, and warmest 12-month period, all new marks since records began in 1895. While the globe has been tracking slightly cooler than recent years — thanks in part to the influence of now dissipated La Nina conditions in the tropical Pacific — the U.S. has been sizzling.
The average springtime temperature in the lower 48 was so far above the 1901-2000 average — 5.2°F, to be exact — that the country set a record for the largest temperature departure for any season on record since 1895.
Spring 2012 beat 1910, which had held the title for record warm spring, by a healthy margin of 2°F. No doubt much of this was driven by the massive heat wave that gripped the country during March, but unusual warmth continued during April and May, albeit not as intense. Such warming trends are consistent with both the influence of manmade global warming, particularly the prevalence of record warm nighttime temperatures, and natural variability has also favored warmer-than-average conditions so far this year. Studies show that as greenhouse gases continue to increase in the atmosphere, the odds of heat extremes are growing as well.
According to NOAA’s National Climatic Data Center, the spring of 2012 “was the culmination of the warmest March, third warmest April and second warmest May. This marks the first time that all three months during the spring season ranked among the 10 warmest, since records began in 1895.”
Des Moines, Iowa offers a case study of just how warm it’s been. The year-to-date there has averaged a whopping 8 degrees F above average, with many other cities across the country tracking close to that figure as well.
Most of the states that experienced record or near-record warmth this spring were located east of the Rocky Mountains, with 31 states setting records for warmest spring temperatures. Remarkably, not a single state in the lower 48 was cooler than average this spring, and only Oregon and Washington had spring temperatures that were close to average. Although there were exceptions, much of the country had a drier-than-average spring with Colorado, Delaware, Indiana, Utah, and Wyoming coming in with a top 10 driest spring.
The record warmth helped propel the U.S. Climate Extremes Index, which tracks the highest and lowest 10 percent of extremes in temperature, precipitation, drought and tropical storms and hurricanes across the contiguous U.S., to a record-large 44 percent during the March-May period, which was more than twice the average value. “Extremes in warm daytime temperatures (81 percent) and warm nighttime temperatures (72 percent) covered large areas of the nation” were mainly responsible for this record.
Spring was unusual for the pre-season tropical weather, as two tropical storms developed before the official start of the Atlantic Hurricane Season on June 1. Tropical Storm Beryl made landfall near Jacksonville, Fla., on May 28, and brought heavy rainfall to parts of the Southeast that were in the grips of a severe drought. This year marked the third time on record that two tropical storms occurred during May in the North Atlantic Basin.
Major drought has remained elsewhere, though, and drought plus high winds led to ideal conditions for wildfires in the West. The White-Water Baldy Fire Complex in New Mexico, which was the result of two separate fires that combined into a massive conflagration, broke the record set just last year for the largest wildfire in New Mexico history. | <urn:uuid:ac9f73cf-af29-40e8-9096-f9ea9a0587b5> | 2.65625 | 1,593 | News Article | Science & Tech. | 54.518061 | 1,438 |
Get those emergency supplies ready. Caltech scientists working with the United States Geological Survey have modeled the next big quake based on last week’s temblor in China. Here’s their scenario for a 7.8 magnitude event along the San Andreas fault in Southern California:
_10 a.m.: The San Andreas Fault ruptures, sending shock waves racing at 2 miles per second.
_30 seconds later: The agricultural Coachella Valley shakes first. Older buildings crumble. Fires start. Sections of Interstate 10, one of the nation’s major east-west corridors, break apart.
_1 minute later: Interstate 15, a key north-south route, is severed in places. Rail lines break; a train derails. Tremors hit burgeoning Riverside and San Bernardino counties east of Los Angeles.
_1 minute, 30 seconds later: Shock waves advance toward the Los Angeles Basin, shaking it violently for 55 seconds.
_2 minutes later: The rupture stops near Palmdale, but waves march north toward coastal Santa Barbara and into the Central Valley city of Bakersfield.
_30 minutes later: Emergency responders begin to fan across the region. A magnitude-7 aftershock hits, but sends its energy south into Mexico. Several more big aftershocks will hit in following days and months.
Major fires following the quake would cause the most damage, said Keith Porter, of the University of Colorado.
Here’s the latest quake map depicting western China | <urn:uuid:c22918ac-1344-4a8e-9c6e-fbf251f665d0> | 3.203125 | 305 | News Article | Science & Tech. | 51.428684 | 1,439 |
August 01, 2011
When it comes to magnetic fields, Jupiter is the ultimate muscle car. It's endowed with the biggest, brawniest magnetic field of any planet in the solar system, powered by a monster engine under the hood.
Figuring out how this mighty engine, or dynamo, works is one goal of NASA's Juno mission, which is scheduled to begin its five-year, 400-million-mile (643,737,600- kilometer-mile) voyage to Jupiter this month. Juno will orbit the planet for about a year, investigating its origin and evolution. Juno has eight instruments to probe its internal structure and gravity field, measure water and ammonia in its atmosphere, map its powerful magnetic field and observe its intense auroras.
The magnetic field studies will be the job of Juno's twin magnetometers, designed and built at NASA's Goddard Space Flight Center in Greenbelt, Md. They will measure the field's magnitude and direction with greater accuracy than any previous instrument, revealing it for the first time in high-definition.
Read the full story at http://www.nasa.gov/mission_pages/juno/news/juno20110801.html .
DC Agle 818-393-9011
Jet Propulsion Laboratory, Pasadena, Calif. | <urn:uuid:9f80eb67-ffb9-4c25-920b-1d915a2060c6> | 3.375 | 267 | News (Org.) | Science & Tech. | 51.076635 | 1,440 |
plant or animal
person, or thing adopted by a group as a representative symbol
terrestrial - living
on or in the ground; not aquatic
- any fresh, marine, or terrestrial crustacean of the order isopoda,
having seven pairs of legs adapted for crawling and has a flattened body
- any marine
or fresh water crustaceans of the subclass copepoda that has an elongated body
and a forked tail
Troglobites, Troglophiles, Trogloxenes
A copepod in
trogloxenes , troglophiles!! This is not gibberish. These are names of
different categories of terrestrial cave animals and bacteria. Then we
have stygobites, stygophilies, and stygloxenes. These animals and
bacteria are a bit different, they live in water. They are
aquatic. There are a lot more,
like extremophiles. These little organisms live in harsh environments
like glaciers, swamps, and volcanoes. They can not be seen with the
naked eye so you need a microscope. You might think that living in a
cave habitat can be rather difficult, and indeed it is.
Trogloxenes that you might know are: bats, bears, foxes, and
raccoons. Bats are the most common trogloxenes, and have become a cave
mascot. Trogloxenes can live above ground, or below
ground. They are cave visitors.
Troglophiles are animals that can go either way, they can live in a
cave or outside of a cave. Many of these will be insects like: crickets,
centipedes, and some salamanders. These can be called cave lovers.
Troglobites are true cave dwellers. Most troglobites have special
adaptations that help them adjust to life in complete darkness. Some
troglobites have poor eye sight or have no eyes at all. They can sense
vibrations or moving objects with their very long and sensitive
antennas. They are also able to hear, smell, and feel as well. Troglobites are pale, white, or transparent.
Because of this, troglobites can not come in contact with sunlight
because the results
can prove to be fatal. Some examples of troglobites are blind flatworms,
eyeless shrimp, isopods, and copepods.
A cave can be a habitat for many interesting life forms. These life
forms have adapted to their lives below or above the surface. These
living creatures make the world a more interesting place to be. | <urn:uuid:a5306186-9187-4558-b76c-34322653b694> | 3.65625 | 575 | Knowledge Article | Science & Tech. | 40.389309 | 1,441 |
Hi....to my understanding.... nanoscale Au can be produced by various techniques...to name a few VCLDI, EEW, Chemical route, phyiscal methods etc.... all these techniques doesnt yield nanoparticles of same size, shape, purity etc.... generally depending on the applications the synthesizing techniques is chosen...and each technique has its own temp requirements.... and finally one technique which most widely used for nano colloid preparation is thru reduction of Au(CN)3.. and this process is room temperature....on the other hand plasma process will require higher end temperature.... do correct me if i am wrong...
sorry for the delayed reply... i was suffering from chicken pox and was nt in the lab for couple of weeks... by the way... as for as mechnical grinding is concern... we have nt tried in the my lab..i mean gold mech. grinding.. bt then.. as such there is no much important is given to temperature... furthermore since gold is very very ductile metal...mech grinding will and should be done in cryo condition in order have significant size reduction...
I am working on interaction of gold nanoparticles with microtubules which are one of the cytoskeletal protein that have important role in some disease. we purified this protein from the brain and hope to find a way to improve memory. if you are interested contact me
Welcome! Nanopaprika was cooked up by Hungarian chemistry PhD student in 2007. The main idea was to create something more personal than the other nano networks already on the Internet. Community is open to everyone from post-doctorial researchers and professors to students everywhere.
There is only one important assumption: you have to be interested in nano!
The invertion of 10G optical transceiver has greatly increase the networking speed, for the science behind the transceiver, please check website and learn about CWDM SFP transceiver, www.fiberoptictransceiver.net
The XFP (10 Gigabit Small Form Factor Pluggable) is a standard for transceivers for high-speed computer network and telecommunication links that use optical fiber.please visit www.xfptransceiver.com for more info | <urn:uuid:2722132e-b1f0-443b-8f13-44c243d6f029> | 2.5625 | 451 | Comment Section | Science & Tech. | 54.68641 | 1,442 |
An ultra-fast U.S. military drone that streaked across the sky at 13,000 mph and met its demise in the Pacific was doomed by the excessive heat of hypersonic travel, which literally peeled away the drone's metal skin, military officials have revealed.
A seven-month study by the military's Defense Advanced Research Projects Agency, or DARPA, has found that the so-calledHypersonic Technology Vehicle 2 (HTV-2) amazingly recovered from shockwaves that forced it to roll while traveling at Mach 20 (about 20 times the speed of sound) in an August 2011 test. But the unmanned aircraft was unable to cope with damage to its exterior caused by its extreme speed, DARPA officials said.
According to DARPA, "a gradual wearing away of the vehicle's skin as it reached stress tolerance limits was expected. However, larger than anticipated portions of the vehicle’s skin peeled from the aerostructure." [Photos: DARPA Hypersonic Mach 20 Test]
The entire HTV-2 test flight lasted nine minutes, with HTV-2 actually flying in a controlled manner for three minutes, DARPA officials said.
DARPA launched the arrowhead-shaped HTV-2 flight on Aug. 11 in the second of two tests of a prototype for a hypersonic glider as part of the advanced Conventional Prompt Global Strike weapons program, which is aimed at developing a bomber capable of reaching any target on Earth within an hour. The first test occurred in 2010.
Space news from NBCNews.com
Teen's space mission fueled by social media
Science editor Alan Boyle's blog: "Astronaut Abby" is at the controls of a social-media machine that is launching the 15-year-old from Minnesota to Kazakhstan this month for the liftoff of the International Space Station's next crew.
- Buzz Aldrin's vision for journey to Mars
- Giant black hole may be cooking up meals
- Watch a 'ring of fire' solar eclipse online
- Teen's space mission fueled by social media
"The initial shockwave disturbances experienced during second flight, from which the vehicle was able to recover and continue controlled flight, exceeded by more than 100 times what the vehicle was designed to withstand," DARPA Acting Director Kaigham J. Gabrielsaid in a statement. "That's a major validation that we’re advancing our understanding of aerodynamic control for hypersonic flight."
The HTV-2 launched atop a rocket from California's Vandenberg Air Force Base, then came streaking back to Earth at hypersonic speeds. Hypersonic flight is typically defined as any flight that surpasses the speed of Mach 5.
When HTV-2 reached Mach 20, it experienced temperatures of nearly 3,500 degrees Fahrenheit. NASA's space shuttles, for comparison, flew at speeds of up to Mach 25 when they re-entered Earth's atmosphere.
A DARPA engineering review board found"most probable cause of the HTV-2 Flight 2 premature flight termination was unexpected aeroshell degradation, creating multiple upsets of increasing severity that ultimately activated the Flight Safety System."
That safety system, once it realized the HTV-2 was in an unrecoverable situation, destroyed the vehicle by pitching it into the ocean.
"The result of these findings is a profound advancement in understanding the areas we need to focus on to advance aerothermal structures for future hypersonic vehicles. Only actual flight data could have revealed this to us," said Air Force Maj. Chris Schulz, DARPA program manager.
DARPA officials said that more analysis on the HTV-2 test flight will continue via ground tests and will be used as a resource for future Conventional Prompt Global Strike weapons technology efforts.
7 Sci-Fi Weapons of Tomorrow Here TodayHow DARPA's HTV-2 Hypersonic Bomber Test WorksPhotos: Breaking the Sound BarrierCopyright 2012 SPACE.com, a TechMediaNetwork company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
© 2013 Space.com. All rights reserved. More from Space.com. | <urn:uuid:649e0599-ed8a-4cdb-9278-05ba2dc596cd> | 2.65625 | 851 | News Article | Science & Tech. | 46.41459 | 1,443 |
LONG BEACH, Calif. — NASA's 23-year-old Hubble Space Telescope is still going strong, and agency officials said Tuesday they plan to operate it until its instruments finally give out, potentially for another six years at least.
After its final overhaul in 2009, the Hubble telescope was expected to last until at least 2015. Now, NASA officials say they are committed to keeping the iconic space observatory going as long as possible.
"Hubble will continue to operate as long as its systems are running well," Paul Hertz, director of the Astrophysics Division in NASA's Science Mission Directorate, said here at the 221st meeting of the American Astronomical Society. Hubble, like other long-running NASA missions such as the Spitzer Space Telescope, will be reviewed every two years to ensure that the mission is continuing to provide science worth the cost of operating it, Hertz added.
In fact, Hubble supporters hope it will continue to run even after its successor, the James Webb Space Telescope (JWST), is launched — an event planned for 2018.
"We are not planning to arbitrarily end the operation of Hubble when JWST is launched," Hertz said during a NASA Town Hall Meeting at the AAS conference. "It may be great if we get at least one year of overlap between JWST and Hubble." [Building the James Webb Space Telescope (Photos)]
The Hubble Space Telescope was launched in April 1990, and has since been upgraded five times by astronauts in orbit. Its last space shuttle servicing mission in May 2009 left the scope with two new instruments, including a wide-field camera and a high-precision spectrograph to spread out light into its constituent wavelengths.
The space telescope is named after the late astronomer Edwin Hubble (1889-1953), who proved that the universe is expanding.
"It's working better than ever, 23 years in," Dan Coe, an astronomer working with Hubble at the Space Telescope Science Institute in Baltimore, Md., told Space.com. "We're still pushing the frontier."
Coe agreed that overlap time with both Hubble and James Webb operating simultaneously would be ideal. Such a plan would allow the observatories to work on complementary projects and provide crosschecks between the two telescopes' measurements.
How long Hubble can run also depends on NASA's budget, which, like funding for all federal agencies, is uncertain given the economic challenges in the United States.
"It all comes down to money," Coe said.
Funding the development of the James Webb Space Telescope is currently taking up almost half of NASA's total budget of $1.3 billion for astrophysics in 2013, Hertz said.
Space news from NBCNews.com
Teen's space mission fueled by social media
- Buzz Aldrin's vision for journey to Mars
- Giant black hole may be cooking up meals
- Watch a 'ring of fire' solar eclipse online
- Teen's space mission fueled by social media
The observatory has an estimated price tag of $8.7 billion, and will cost about $628 million in 2013 alone.
In contrast, Hubble will cost about $98 million in 2013.
- Celestial Photos: Hubble Space Telescope's Latest Cosmic Views
- Hubble Telescope’s Hidden Treasures Revealed | Video
- Most Amazing Hubble Discoveries
© 2013 Space.com. All rights reserved. More from Space.com. | <urn:uuid:5ef8910d-72cc-4fea-aa0a-5e7603c5a50c> | 2.609375 | 699 | News Article | Science & Tech. | 49.407688 | 1,444 |
RNA ligation - (Dec/03/2001 )
I'm not sure if you can do single stranded RNA ligation, but I have another idea. What if you did first strand synthesis on the RNA. Then, do a second strand synthesis (protocol in maniatis or current protocols). Next, blunt the ends of the double stranded cDNA you produced (just to be sure). Next, make double stranded linker DNA that has a specific primer site in it and do a ligation with your double stranded DNA. Finally, do a PCR reaction with this ligation as a template and the primer you designed into the linker DNA, and a specific primer you choose from sequence you already know. You will probably get several bands, but hopefully you can estimate the size of the band you want. Gel purify that band and sequence it directly or subclone it. There are protocols in maniatis and current protocols on how to do each of these steps. In theory, I think this should work- any thoughts from anyone else?
Is there such thing as RNA ligation? I have a RNA virus genome to sequence but being not circular, I have problem getting the ends of the genome. I was thinking if there is a possible way of circularizing the RNA genome then I will be able to do RT using random primers followed by outward PCR using the primers from the known region.
Thanks for your suggestion. Looks very feasible. Just to clarify something. Do I do 1st strand synthesis using random primer or oligodT primer ? BTW the genome size of the viral RNA is about 7kb, is it possible to stretch RT to that length?
I think optimally, oligo dT could go that far, but I would stick with random primers. Whenever I am cloning things from rtPCR, I consistently have had more success with random primed cDNA compared to oligodT. Good luck! | <urn:uuid:d5641108-618f-4931-af04-15377d81bf44> | 2.515625 | 394 | Comment Section | Science & Tech. | 62.964286 | 1,445 |
The Reptiles of Nauru1
By Buden, Donald W
Abstract: Eleven species of reptiles are reported from Nauru in the first systematic treatment of the herpetofauna. Four of the species are marine; the seven others include six lizards (four geckos, two skinks) and one snake. Gehyra mutilata (Wiegman), G. oceanica (Lesson), Pelamis platura (Linnaeus), and Ramphotyphlops braminus (Daudin) are recorded on Nauru for the first time. With the exception of Emoia arnoensis Brown & Marshall, which is endemic to eastern Micronesia, the herpetofauna consists of species that range widely among the west-central Pacific Ocean islands. The only known record of E. arnoensis from Chuuk possibly is based on a misassigned locality, in which case the range of the species would be limited to the Marshall Islands, Nauru, and Kosrae. There is no evidence to suggest that habitat modification on Nauru stemming largely from more than a century of phosphate mining has reduced the number of reptile species. LITTLE IS KNOWN of the fauna of the island republic of Nauru. The island’s small size, remote location, and ecological impoverishment all doubtlessly have contributed to the paucity of zoological investigations, leaving a gap in our knowledge of the biodiversity of this area of the Pacific. Information on the reptiles is especially scanty; the herpetofauna has never been reviewed systematically. Waite’s (1903) list of reptiles from Nauru included only two species (both skinks): Lygosoma cyanurum [=Emoia cyanura (Lesson)] and L. atrocostatum (=E. arnoensis Brown & Marshall). Much later, Brown (1991) described E. arnoensis nauru based largely on material collected by H. Cogger in 1983. In addition, Bauer and Henle (1994) included Nauru in a list of locality records for the geckos Hemidactylus frenatus Dumeril & Bibron and Lepidodactylus lugubris (Dumeril & Bibron), and Webb (1994) reported on the unusual occurrence of a crocodile. Reports on sea turtles from Nauru consist only of a few passing remarks largely lacking in substantive detail. In this study I include an annotated list of all the species of reptiles recorded on Nauru, and it is based on personal observations, specimens I recently collected, data from museum specimens and catalogs, gleanings from the literature, and information provided by local residents.
Nauru (0[degrees] 30′ S, 166[degrees] 56′ E) is a small (21 km^sup 2^) raised atoll island in the west-central Pacific Ocean (Figure 1). It is approximately 2,100 km northeast of New Guinea; the nearest island is Banaba (=Ocean Island) 300 km to the east. The climate is equatorial; the average monthly temperature ranges from 27 to 29[degrees]C, and the average annual rainfall is 2,098 mm, with the wettest months being December to April. A narrow, coastal belt roughly 100 to 300 m wide abuts a scarp that rises to approximately 30-40 m in most areas to form the edge of a central plateau; the maximum elevation is 72 m at Command Ridge. Approximately 10,000 islanders reside mainly along the coast and in a small settlement centered about a brackish lake (Buada Lagoon) in a low area of the plateau in the southwestern part of the island. The coastal vegetation consists largely of strand, scrub, scattered coconut trees, and a variety of ornamentals and fruit trees. Much of the original vegetation of the central plateau was stripped away during a century of phosphate mining, leaving behind a skeletal landscape of limestone pinnacles about 4-8 m high from around which the topsoil and phosphate deposits were removed. Many areas have since regenerated to a karstic scrubland with small pockets of residual forest dominated by tomano trees, Calophyllum inophyllum L., and strangler fig, Ficus prolixa G. Forst. The most extensive remnant forest areas are on the gentler slopes of the scarp and at its base. In describing the impact of human activities on the environment of Nauru, Thaman (1992:153) stated ”long habitation; almost a century of open-cast phosphate mining; continuous bombing, destruction, and displacement of the people during World War II; rapid urbanization; and the abandonment of agriculture and subsistence activities on Nauru have arguably produced one of the most severely modified natural and cultural floras on earth.” Further descriptions of the physiognomy and vegetation of Nauru are provided by Manner et al. (1984), Thaman et al. (1994), and Morrison and Manner (2005).
MATERIALS AND METHODS
I visited Nauru during 12-25 December 2006 and 29 March-5 April 2007 to conduct surveys of birds, reptiles, butterflies, and dragonflies; the 81 specimens of reptiles collected (by hand) were fixed in 10% formalin, washed, transferred to 35% isopropanol, and deposited in the Bishop Museum, Honolulu (BPBM); the Museum of Comparative Zoology, Harvard University (MCZ); the National Museum of Natural History, Smithsonian Institution (USNM); and the Natural History Museum, London (NHM). Terms of abundance used to appraise overall status are based largely on my visual surveys: common (at least 30, but often many more, sightings per day under optimum conditions), fairly common (approximately 10-30 encounters per day), uncommon (up to 10 per day, and unrecorded on some days), scarce (usually no more than five per day and may be unrecorded on many days), vagrant (unexpected on geographic grounds and known only from one or two records). Surveys were conducted during different times of the day under a variety of sunny and cloudy conditions but not during rain. Snout-vent length in Emoia arnoensis was measured with a millimeter rule to the nearest whole millimeter. Values in Table 1 are rounded to the nearest tenth resulting in some totals greater than 100%.
Crocodylus cf. porosus Schneider
A ”small” and presumably young crocodile first observed by swimmers at a beach on Nauru on 18 September 1994 was captured and brought to the local police station where it was observed by many people (Webb 1994:13); the specimen was not saved. Webb (1994) indicated that the animal was photographed, but no photograph was examined by him nor by any other members of the Crocodile Specialist Group in Australia (C. Manolis, pers. comm.). The nearest population of crocodiles, and the most likely source of the Nauru record, is C. porosus in the Solomon Islands, over 1,000 km to the southwest.
Chelonia mydas Linnaeus and Eretmochelys imbricata Linnaeus
There are no well-documented specimen records of turtles on Nauru. Several reports that mention turtles in passing lack substantive detail, but some refer to at least two species, the green turtle, Chelonia mydas, and the hawksbill turtle, Eretmochelys imbricata. Hambruch (1915:197), for example, included ”gru en schildkro te” (green turtle) and ”echte schildkro te” (true turtle? =hawksbill?) in a list of animals recorded on Nauru; his accompanying illustrations (Hambruch 1915: figs. 281 and 282) are unidentifiable as to species. Ernest Stephen was marooned on Nauru sometime during the 1870s at the age of 14 and spent most of his life on the island. In his recollections of Naruan customs and beliefs (written around 1902 or 1903: Wedgwood in Stephen 1936), he remarked that ”turtles rarely visit the island; [and that] at first the natives would not eat them, for they thought that they were spirits of their departed” (Stephen 1936:57). Thaman and Hassall (1998:24) stated that ”both the hawksbill and green turtles . . . are occasionally present . . . [and that] some beaches were reportedly once nesting areas although this is no longer the case.” In addition, Fiji Customs reported the importation of a small amount of worked tortoiseshell [presumably from E. imbricata] from Nauru in 1978 (Groombridge and Luxmoore 1989). I saw no turtles on Nauru, but several residents told me that turtles occasionally visited the island, and that one had been captured not too long before as it was crossing the circumferential road; the species was unidentified.
Gehyra mutilata (Weigmann)
The stump-toed or mutilating gecko occurs naturally from India and Sri Lanka through Southeast Asia to China, Papua New Guinea, and the Indo-Australian archipelago (Lever 2003). It is widespread in the Pacific (McCoy 1980, Zug 1991). The absence of allozyme protein variation between the Pacific Basin populations and those in the ancestral home of the species in southern Asia supports a hypothesis of a relatively recent and probable human-assisted dispersal into Oceania (Fisher 1997). Gehyra mutilata is scarce on Nauru; one collected under flaking bark of a Calophyllum tree in a small patch of forest on 17 December and another on a tree trunk at night in the Buada Lagoon settlement on 18 December 2006 are the only records. It was the only gecko not encountered in edificarian habitats, including buildings, walls, and other such structures constructed by humans (Table 1), but is usually found in such habits elsewhere in Micronesia (pers. obs.).
Gehyra oceanica (Lesson)
The oceanic gecko is widespread in the Pacific and is common on Nauru. It was observed in edificarian and ruderal habitats as well as in remnant forest and often in small colonies occupying a building or a single tree. Seven were collected from the outside walls of a house and adjacent buildings in Nibok District, all within a 15-min span shortly after sunset on 13 December. Hemidactylus frenatus Dumeril & Bibron
Native to Asia, the house gecko has colonized much of Oceania since World War II, often outcompeting or otherwise displacing other species (Hunsaker 1966, Petren et al. 1993, Case et al. 1994). It is common on Nauru, especially on the cement walls of buildings, where it was regularly observed feeding on insects drawn to lights. It was frequently encountered also on tree trunks in the settlements as well as in more remote areas of the island. The time of its introduction to Nauru is unknown. The earliest specimen record I found is usnm 200470, collected by R. V. Wood on 18 April 1976 and accompanied by the annotation that the species was ”common everywhere in forest,” which indicates that it was already well established. The record mentioned in Bauer and Henle (1994) is based on this specimen (A. Bauer, pers. comm.).
Lepidodactylus lugubris (Dumeril & Bibron)
The mourning gecko is widespread in the Pacific (Gibbons 1985). It is common on Nauru, being especially numerous in edificarian habitats and less frequently encountered in forest patches on tree trunks at night and under flaking bark during the day. Bauer and Henle (1994) and Bauer (pers. comm.) recorded it first on Nauru based on a specimen in the Australian Museum (ams R-7109). The collector and collection date are unknown, but the specimen was presented by A. H. S. Lucas and registered into the ams collection in 1919 and could have been collected any time before that date (R. Sadlier, pers. comm.).
Emoia arnoensis Brown & Marshall
The Arno Atoll skink is endemic to eastern Micronesia: the nominate form in the Marshall Islands and eastern Caroline Islands, and E. a. nauru on Nauru. Cogger (in Brown 1991) found E. a. nauru only in a small forest of Ficus trees and in the dense surrounding shrub growth, and mainly on the aerial roots of trees. I saw no more than 20 during a total of 3 weeks on Nauru and no more than six in one day. They were most frequently encountered on cement and stone walls that were bordered by dense thickets of shrubs, vines, and weeds alongside a road in a semiresidential area on the southwestern rim of the plateau. Others were seen among limestone pinnacles, on aerial roots of Ficus trees, and on the trunks of fallen trees throughout the island. The majority of those I encountered were extremely wary and typically sought refuge in abundantly available holes in the ground, or rock faces, which were always close by. In contrast, N. and B. Vander Velde (pers. comm.) stated that examples of the nominate subspecies they encountered in the Marshall Islands were readily approached and could be easily captured by hand.
In snout-vent length, the 23 adults of the nominate subspecies from the Caroline and Marshall islands that Brown (1991) examined ranged from 73.0 to 85.5 mm, and the 13 E. a. nauru ranged from 69.8 to 91.0 mm. The six specimens I collected on Nauru are larger than any reported by Brown (1991) and ranged from 92 to 101 mm (ave. 95.2 mm). Most of the others I saw were of similar size, with only two or three that might have been considered juveniles.
Emoia cyanura (Lesson)
This is the most common lizard on Nauru, being especially numerous in the coastal belt, along stone walls, and in leaf litter under shady forest trees. On two separate occasions, individuals I observed foraging along the waterline at the beach ran into tide pools at my approach and swam several meters to the opposite side.
Pelamis platura (Linnaeus)
The yellow-bellied sea snake is the most widely distributed of all sea snakes, ranging from the east coast of Africa through the Indian and Pacific oceans to the west coast of the Americas (Pickwell and Culotta 1980, Heatwole 1999). It is pelagic and seldom encountered along shorelines. Collection data for the only two (and previously unreported) records for Nauru are incomplete. One fluid- preserved specimen in the Nauru Hospital laboratory was said by current hospital staff to have been found in driftwood that was washing ashore sometime during the early to mid-1990s. Several islanders, including hospital staff, told me of another sea snake (presumably another P. platura) that was found on or near the shore approximately 2-3 yr before but was not saved.
Ramphotyphlops braminus (Daudin)
The Brahminy blind snake, native to Southeast Asia, is considered ”the most successful disperser in the snake world . . . [and] the most probable [dispersal] mechanism is in the root balls of ornamental (more recently) or food (historically) plants transported by humans” (Crombie and Pregill 1999:66). It is established in tropical and subtropical regions worldwide, including various Pacific islands (Gibbons 1985). The flattened, mummified remains of a Brahminy blind snake I found approximately 150 m east of the Odn Aiwo Hotel, on the road to Buada Lagoon, 31 March 2007, is the only record for Nauru. The specimen (MCZ R-185647) is in very poor condition but identifiable on the basis of size, coloration, and scutellation (20 scale rows and shape of rostral, with ca. 330 middorsal scales); identification was confirmed by Van Wallach (Museum of Comparative Zoology, Harvard University). Two resident islanders told me of seeing what are almost certainly ( based on their descriptions) additional examples of this species, referring to small, shiny black, wormlike animals, with a pointed or spine- tipped tail.
With the exception of the occasional yellowbellied sea snake (Pelamis platura), at least two species of sea turtles, and a vagrant crocodile (Crocodylus cf. porosus), all of which are marine, the herpetofauna of Nauru consists of six species of lizards (four geckos, two skinks) and one blind snake, Ramphotyphlops braminus (Table 2). The crocodile represents an unusual extralimital record. The only other extralimital records, and presumed examples of long- distance dispersal of salt water crocodiles in Micronesia, include one C. porosus adult captured in Pohnpei on 21 March 1971 (Allen 1974) and another near Ailinglaplap Atoll in the Marshall Islands in October 2004 (Manolis 2005; N. Vander Velde, pers. comm.). The nearest population of crocodiles is roughly 1,500 km and 2,000 km to the south and southwest (in the Papua New Guinea/Solomon Islands region) of Pohnpei and the Marshall Islands, respectively.
Amphibians do not occur on Nauru, although the hospital laboratory has two fluidpreserved cane toads, Bufo marinus (Linnaeus). These are without accompanying data but were said by hospital staff to have been found at the airport in cargo arriving on a flight from Saipan or Kosrae sometime around the mid-1990s.
The blind snake is known definitely from only one salvaged road- killed specimen but is probably more numerous than the single record indicates; its cryptic habits make assessment difficult. Five of the six species of lizards on Nauru are widespread in Oceania. The four geckos live to different degrees commensally with humans, and all may have reached Nauru with human assistance. Fisher (1997) presented molecular evidence supporting a hypothesis of natural dispersal of Gehyra oceanica in the southern Pacific but not to the exclusion of human-assisted transport. Three of the geckos are common, but Gehyra mutilata is scarce; its low numbers are possibly due to a negative impact of the presence of Hemidactylus frenatus (see, for example, Buden and references cited therein). Of the two species of skinks on Nauru, Emoia cyanura has a very broad distribution in the Pacific and has been recorded on more different island groups in the Pacific Basin than any other skink (Adler et al. 1995). Emoia arnoensis, on the other hand, is the only reptile on Nauru that has a relatively limited distribution, being endemic to eastern Micronesia and with an endemic subspecies on Nauru. Both E. cyanura and E. arnoensis are the only species mentioned in the first report of reptiles on Nauru (Waite 1903).
Emoia arnoensis has a limited distribution in eastern Micronesia, where it is possibly confined to the Marshall Islands, Nauru, and Kosrae; I consider the single record from Chuuk as questionable. Brown (1991) recorded the nominate subspecies in the Marshall Islands only on Arno Atoll, whence he examined 16 specimens collected mainly by Ross Kiester, who recorded it on 15 of the 33 islands that he surveyed in 1968 (Kiester 1983). Elsewhere in the Marshalls, Gressitt (1961) recorded it on Jaluit Atoll, and a specimen that Brown reported as from Lae Atoll in the Caroline Islands (USNM 132258) was collected on Lae Islet, Lae Atoll, in the Marshall Islands by R. Fosberg in 1952 (G. Zug, pers. comm.). In addition, a juvenile E. cf. arnoensis collected on Maloelap Atoll by Nancy Vander Velde on 11 April 2006 is in the Bishop Museum (BPBM 23974).
Emoia arnoensis has been recorded in the Caroline Islands only on Kosrae and Rug Island (Brown 1991). I saw it on Kosrae occasionally, usually on the forest floor and in rocky areas near streams, during June and July 2002. The Rug Island record is based on one specimen (cas-su 7541) collected by A. P. Lundin, undated but cataloged (in Stanford University collections) in 1938. Rug [=also Ruc or Ruk] is an old and disused name for Chuuk Islands (formerly Truk) and, in some usage, may refer specifically to Fefan (=Fefen) Island. Kiester (1983) remarked that he did not encounter E. arnoensis in Chuuk, and I did not observe it during several hours on Fefan in June 2003 and for about an hour in July 2007 nor on any of the other Chuuk Lagoon islands that I visited occasionally over the past several years. Inasmuch as the specimen was examined by Brown, it is unlikely to be a misidentified dark (melanistic) form of Lamprolepis smaragdina. Black or nearly black L. smaragdina have been recorded on several of the low coralline islands of Chuuk (Kepler 1994), and I have observed several also on Fefan and other high islands in the lagoon. The cas herpetological collection contains no other specimens collected by Lundin, but the cas fish collection has specimens that Lundin collected from both Chuuk (Rug I.) and Kosrae (D. Catania, pers. comm.). Possibly Lundin’s specimen of E. arnoensis may be mislabeled as to locality and may have originated from Kosrae, not Rug. Emoia arnoensis ranges from the Marshall Islands and Nauru westward to Kosrae, then apparently skips Pohnpei, and is known from Chuuk only from the Lundin record. A search of Stanford University archives produced no additional information on the specimen or on A. P. Lundin (P. White, pers. comm.). The status of E. arnoensis on Chuuk is somewhat equivocal, and the record is in need of confirmation. To what extent the more than 100 yr of continuing habitat degradation, largely by mining operations, has affected the number of reptile species present on Nauru is uncertain because adequate baseline studies are lacking. However, there is no evidence to indicate that the herpetofauna was any richer in premining times than it is now. The earliest report on the reptiles, which dates back to the very early stages of mining, includes only two species of skinks (Waite 1903). However, among the species of lizards that are widely distributed in Micronesia (including small, low-lying atolls of the Caroline Islands and Marshall Islands) and that are unknown from Nauru are Lepidodactylus moestus (Peters), Nactus pelagicus (Girard), Perochirus ateles (Dumeril), Emoia boettgeri (Sternfeld), E. caeruleocauda (De Vis), E. impar (Werner), E. jakati (Kopstein), Eugongylus albofasciolatus (Gu nther), Lamprolepis smaragdina (Lesson), and Lipinia noctua (Lesson). Most, if not all, of these can be found in habitats considerably altered by human activity (pers. obs.). Additional surveys may reveal the presence of some of these species on Nauru. Alternatively, their absence may be real and the especially meager herpetofauna likely a combined attribute of small island size and distance from potential source areas.
For providing information on Nauru specimens in their respective institutions, I thank Carla Kishinami and Fred Kraus (Bishop Museum), Jens Vindum (California Academy of Sciences), Traci Hartsell, Ken Tighe, and George Zug (National Museum of Natural History), and Ross Sadlier (Australian Museum). For additional pertinent information relating to Nauru, including assistance with the literature, I thank George Balazs, Aaron Bauer, Lui Bell, Dave Catania, Charlie Manolis, Mike McCoy, Randy Thaman, Nancy and Brian Vander Velde, and Pam White. I thank Van Wallach for confirming identi- fication of Ramphotyphlops braminus and for reviewing a preliminary draft of the manuscript. I am especially grateful to Alamanda Lauti, director of the Nauru campus of the University of the South Pacific, who greatly assisted in overcoming many of the obstacles associated with issues of transportation, entry documents, and collecting permits. I also thank Dale Deireragea for accompanying me in the field on numerous occasions and for sharing his knowledge of the island.
1 Manuscript accepted 30 October 2007.
Adler, G. H., C. C. Austin, and R. Dudley. 1995. Dispersal and speciation of skinks among archipelagos in the tropical Pacific Ocean. Evol. Ecol. 9:529-541.
Allen, G. R. 1974. The marine crocodile, Crocodylus porosus, from Ponape, eastern Caroline Islands, with notes on food habits of crocodiles from the Palau Archipelago. Copeia 1974:553.
Bauer, A. M., and K. Henle. 1994. Family Gekkonidae (Reptilia, Sauria). I. Australia and Oceania. Das Tierreich, Teilband 109. Walter de Gruyter, Berlin and New York.
Brown, W. C. 1991. Lizards of the genus Emoia (Scincidae) with observations on their evolution and biogeography. Calif. Acad. Sci. Mem. 15:1-94.
Buden, D. W. 2007. Reptiles of Satawan Atoll and the Mortlock Islands, Chuuk State, Federated States of Micronesia. Pac. Sci. 61:415-428.
Case, T. J., D. T. Bolger, and K. Petren. 1994. Invasions and competitive displacement among house geckos in the tropical Pacific. Ecology 75:464-477.
Crombie, R. I., and G. K. Pregill. 1999. A checklist of the herpetofauna of the Palau Islands (Republic of Belau), Oceania. Herpetol. Monogr. 13:29-80.
Fisher, R. N. 1997. Dispersal and evolution of the Pacific Basin gekkonid lizards Gehyra oceanica and Gehyra mutilata. Evolution 51:906-921.
Gibbons, J. R. H. 1985. The biogeography and evolution of Pacific island reptiles and amphibians. Pages 125-142 in G. Grigg, R. Shine, and H. Ehmann, eds. Biology of Australasian frogs and reptiles. Royal Zoological Society of New South Wales, Sydney.
Gressitt, J. L. 1961. Terrestrial fauna. Pages 69-74 in D. I. Blumenstock, ed. A report on typhoon effects upon Jaluit Atoll. Atoll Res. Bull. 75.
Groombridge, B., and R. Luxmoore. 1989. The green turtle and hawksbill (Reptilia: Cheloniidae): World status, exploitation and trade. CITES Secretariat, Lausanne, Switzerland.
Hambruch, P. 1915. Nauru. Ergebnisse der Su dsee-Expedition 1908- 1910. II. Ethnographie: B. Mikronesien, Band 1. L. Friederichsen and Co., Hamburg.
Hunsaker, D. 1966. Notes on the population expansion of the house gecko, Hemidactylus frenatus. Philipp. J. Sci. 95:121-122.
Heatwole, H. 1999. Sea snakes. University of New South Wales Press, Ltd., Australia.
Kepler, A. K. 1994. Report: Chuuk coastal resource inventory, terrestrial surveys, August 4-14, 1993. Administrative report to CORIAL (Coastal, Ocean, Reef, and Island Advisors, Ltd.), Federated States of Micronesia Government, The Nature Conservancy Hawai’i, and East-West Center, University of Hawai’i.
Kiester, A. R. 1983. Zoogeography of the skinks (Sauria: Scincidae) of Arno Atoll, Marshall Islands. Pages 359-364 in A. G. J. Rhodin and K. Myiata, eds. Advances in herpetology and evolutionary biology: Essays in honor of Ernest E. Williams. Museum of Comparative Zoology, Harvard University, Cambridge, Massachusetts.
Lever, C. 2003. Naturalized reptiles and amphibians of the world. Oxford University Press, Oxford.
McCoy, M. 1980. Reptiles of the Solomon Islands. Wau Ecology Institute Handbook 7. Wau, Papua New Guinea.
Manner, H. I., R. R. Thaman, and D. C. Hassall. 1984. Phosphate mining induced vegetation changes on Nauru Island. Ecology 65:1454- 1465.
Manolis, C. 2005. Long-distance movement by a saltwater crocodile. Crocodile Specialist Group Newsletter 24 (4): 18.
Morrison, R. J., and H. I. Manner. 2005. Premining pattern of soils on Nauru, central Pacific. Pac. Sci. 59:523-540.
Petren, K., D. T. Bolger, and T. J. Case. 1993. Mechanisms in the competitive success of an invading sexual gecko over an asexual native. Science (Washington, D.C.) 259:354-358.
Pickwell, G. V., and W. A. Culotta. 1980. Pelamis, P. platurus. Cat. Am. Amphib. Reptiles 255:1-4.
Stephen, E. 1936. Notes on Nauru. Oceania 7:34-63.
Thaman, R. R. 1992. Vegetation of Nauru and Gilbert Islands: Case studies in poverty, degradation, disturbance, and displacement. Pac. Sci. 46:128-158.
Thaman, R. R., F. R. Fosberg, H. I. Manner, and D. C. Hassall. 1994. The flora of Nauru. Atoll Res. Bull. 392:1-232.
Thaman, R. R., and D. C. Hassall. 1998. Republic of Nauru: National environmental management strategy and national environmental action plan. South Pacific Regional Environment Program (SPREP), Apia, Samoa.
Waite, E. R. 1903. The reptiles. Page 2 in A. J. North, E. R. Waite, C. Hedley, W. J. Rainbow, T. Whitelegge, and C. Anderson, contributors. Notes on the zoology of Paanopa or Ocean Island and Nauru or Pleasant Island, Gilbert Group. Rec. Aust. Mus. 5:1-15.
Webb, G. 1994. Nauru: Vagrant crocodile. Crocodile Specialist Group Newsletter 13 (4): 13.
Zug, G. R. 1991. The lizards of Fiji: Natural history and systematics. Bishop Mus. Bull. Zool. 2:1-136.
Donald W. Buden2
2 Division of Natural Sciences and Mathematics, College of Micronesia-FSM, P.O. Box 159, Kolonia, Pohnpei, Federated States of Micronesia 96941 (e-mail: email@example.com).
Copyright University Press of Hawaii Oct 2008
(c) 2008 Pacific Science. Provided by ProQuest LLC. All rights Reserved. | <urn:uuid:29df1b53-2764-4eba-875c-2201e5a5ee38> | 2.921875 | 6,887 | Academic Writing | Science & Tech. | 53.139692 | 1,446 |
Ponder this ‘strange telescope’ buried deep at the South PoleFindings from extraordinary scientific research in the heart of Antarctica -- the IceCube Neutrino Observatory -- that includes a UW-River Falls connection will be presented on campus and at a Main Street café the week of Nov. 26-Dec. 1.
By: Phil Pfuehler, River Falls Journal
Findings from extraordinary scientific research in the heart of Antarctica -- the IceCube Neutrino Observatory -- that includes a UW-River Falls connection will be presented on campus and at a Main Street café the week of Nov. 26-Dec. 1.
“The IceCube is the biggest and strangest telescope in the word, and the largest science project ever funded in Wisconsin,” says Jim Madsen, UWRF Physics Department chairman and professor who’s been to the South Pole to help set up the IceCube Observatory.
“It involves more than 250 scientists around the world in 38 institutions. Despite trying technological challenges associated with new ideas that need to work in the extreme Antarctic environment, construction of the project was completed in six years -- on time, under budget, and exceeding specifications.”
As described in a February 2007 Journal story, the IceCube telescope, with its network of cylindrical light-detecting cables, was being assembled and gradually submerged into borings more than a mile deep under crystal-clear ice.
The intent was to chart ghostly subatomic particles called neutrinos that come from decaying radioactive elements.
Neutrinos are born by spatial disruptions, like the collision of two stars. Billions pass unseen through the Earth.
“The neutrinos are produced far out in space but collide near the South Pole with atoms,” Madsen said. “Before they collide, they are invisible.”
Madsen said scientists are already translating early IceCube data.
“The big things we have seen are, we confirmed that neutrinos don't travel faster than the speed of light,” he said. “We also have shown that at higher energies, neutrinos continue to change from one type to another as they travel.
“It is like you buy a Buick, but as you drive, it continually changes to a Ford then to a Honda then back to a Buick. Only when you stop does it stop changing.
“So your neighbor only sees one type of car in the driveway when you get home. Each day though it could be either a Buick, a Ford, or a Honda.”
For those curious to learn more about the work at IceCube Neutrino Observatory, here are the three local presentations:
--Noon-1 p.m. Tuesday, Nov. 27, “The Reinvented Planetarium: Digital Projection System,” open house, at 201 Agricultural Science building on the UWRF campus.
--3:30-5:30pm, Tuesday, Nov. 27, “Meet a Scientist,” at Dish and the Spoon Café, 208 N. Main St., downtown River Falls.
--7-9 p.m. Tuesday, Nov. 27, “IceCube: A New View of the Universe from the South Pole,” at the UWRF University Center Ballroom. (Those who attend can try out cold-weather gear, use a computer to control lights and see how scientists try to find nearly invisible particles from deep space. | <urn:uuid:c510ed8c-5b39-4e00-b015-651825a370c8> | 3.3125 | 729 | News Article | Science & Tech. | 55.044339 | 1,447 |
May 1, 2012 On 5 and 6 June this year, millions of people around the world will be able to see Venus pass across the face of the Sun in what will be a once-in-a-lifetime experience.
It will take Venus about six hours to complete its transit, appearing as a small black dot on the Sun's surface, in an event that will not happen again until 2117.
In this month's Physics World, Jay M Pasachoff, an astronomer at Williams College, Massachusetts, explores the science behind Venus's transit and gives an account of its fascinating history.
Transits of Venus occur only on the very rare occasions when Venus and Earth are in a line with the Sun. At other times Venus passes below or above the Sun because the two orbits are at a slight angle to each other. Transits occur in pairs separated by eight years, with the gap between pairs of transits alternating between 105.5 and 121.5 years -- the last transit was in 2004.
Building on the original theories of Nicolaus Copernicus from 1543, scientists were able to predict and record the transits of both Mercury and Venus in the centuries that followed.
Johannes Kepler successfully predicted that both planets would transit the Sun in 1631, part of which was verified with Mercury's transit of that year. But the first transit of Venus to actually be viewed was in 1639 -- an event that had been predicted by the English astronomer Jeremiah Horrocks. He observed the transit in the village of Much Hoole in Lancashire -- the only other person to see it being his correspondent, William Crabtree, in Manchester.
Later, in 1716, Edmond Halley proposed using a transit of Venus to predict the precise distance between Earth and the Sun, known as the astronomical unit. As a result, hundreds of expeditions were sent all over the world to observe the 1761 and 1769 transits. A young James Cook took the Endeavour to the island of Tahiti, where he successfully observed the transit at a site that is still called Point Venus.
Pasachoff expects the transit to confirm his team's theory about the phenomenon called "the black-drop effect" -- a strange, dark band linking Venus's silhouette with the sky outside the Sun that appears for about a minute starting just as Venus first enters the solar disk.
Pasachoff and his colleagues will concentrate on observing Venus's atmosphere as it appears when Venus is only half onto the solar disk. He also believes that observations of the transit will help astronomers who are looking for extrasolar planets orbiting stars other than the Sun.
"Doing so verifies that the techniques for studying events on and around other stars hold true in our own backyard.. In other words, by looking up close at transits in our solar system, we may be able to see subtle effects that can help exoplanet hunters explain what they are seeing when they view distant suns," Pasachoff writes.
Not content with viewing this year's transit from Earth, scientists in France will be using the Hubble Space Telescope to observe the effect of Venus's transit very slightly darkening the Moon. Pasachoff and colleagues even hope to use Hubble to watch Venus passing in front of the Sun as seen from Jupiter -- an event that will take place on 20 September this year -- and will be using NASA's Cassini spacecraft, which is orbiting Saturn, to see a transit of Venus from Saturn on 21 December.
"We are fortunate in that we are truly living in a golden period of planetary transits and it is one of which I hope astronomers can take full advantage," he writes.
Editors note: Looking directly at the sun can cause severe and permanent eye damage. Do not look directly at Venus' transit of the sun.
For more information see Wikipedia article.
Other social bookmarking and sharing tools:
Note: Materials may be edited for content and length. For further information, please contact the source cited above.
- Jay M Pasachoff. Venus: it's now or never. Physics World, Volume 25, Issue 5, May 2012 [link]
Note: If no author is given, the source is cited instead. | <urn:uuid:8ed69503-90e8-44b7-84f0-994f63ec80c4> | 3.859375 | 862 | Truncated | Science & Tech. | 51.800161 | 1,448 |
How are chromosomes ‘painted’?
Your 23 pairs of chromosomes contain around 24,000 pairs of genes. FISH – fluorescent in situ hybridisation – can be used to ‘paint’ chromosomes. Scientists prepare chromosomes on a microscope slide, tag a copy of the gene they want to find with a fluorescing dye and add it to the slide. Under a special microscope, the same gene shows up as a brightly coloured dot on the chromosome. | <urn:uuid:b546ae24-1f75-4d79-95b6-a723c2028e5a> | 3.390625 | 95 | Knowledge Article | Science & Tech. | 51.819183 | 1,449 |
Optical image of the central HH1-2 region (colors) with a superposition (contours) of the IR emission detected with the LW2 filter of ISOCAM. The positions of the VLA1 & 2 &4 sources are indicated by white filled circles. The position towards which we have discovered the three infrared windows is indicated by a black filled square and coincides, within the astrometric errors, with that of VLA1+VLA2 objects; the red circle indicates the PFOV the infrared observations (6 arcsec). More images here
The European Space Agency's infrared space telescope, ISO, has measured the size of a proto-planetary system, surrounding a newly-born star, a Spanish team of astronomers report in tomorrow's issue of the journal Science.
ISO sees a very young 'baby-star' surrounded by a disk of the same diameter as Jupiter's orbit, in which planets are likely to form in the future.
Stars are born within thick 'cocoons' of dust very difficult to penetrate, and for this reason current models describing the process are very incomplete.
Astronomers know, in broad terms, that the future star begins to form within the dust cloud by accreting material which forms a disk, the same disk out of which planets, comets and all the components of a planetary system will probably form in the future -- the disk is actually called a 'protoplanetary disk'.
Once the star-to-be has gathered enough material, the high pressures and temperatures in its centre trigger the first nuclear reactions and the star 'lights up' -- it starts the 'ignition'. During this process the very young star or 'protostar' emits jets of material that can be detected with different techniques. Astronomers use these detectable signs to classify the evolutionary stages of the new-born stars.
The system observed by ISO was previously thought to be at the earliest evolutionary stage, in fact, so young that the protostar had not yet had time to ignite. However, ISO results contradict this belief.
"We are seeing the earliest stages of formation of a planetary system. There is already a central object hot enough to work as a star and to heat up its surrounding protoplanetary disk.
The star is already 'lit up'", says Spanish astronomer Jos=E9 Cernicharo, from the Instituto de Estructura de la Materia (CSIC), in Madrid, main author of the article being published in Science.
The system observed by ISO's infrared camera, ISOCAM, is 1200 light years away in a star-forming region in the Orion nebula. It's called VLA1/2. Cernicharo and his group estimate that the central star and its surrounding matter might be at an average temperature of at least 500 degrees Kelvin.
It is surrounded by a protoplanetary disk whose diameter is four times the distance from the Earth to the Sun, the same as Jupiter's orbit.
"This is the first time we can determine the size of the regions where where a low mass star and its planets are being formed", Cernicharo says.
ISO was also able to analyse the chemical composition of the large cocoon of material enshrouding both the star and its protoplanetary disk, a structure called by the researchers the 'placental' envelope.
It is much colder, and made up of grains of dust covered by ices of water, carbon dioxide, methane and probably methanol. This chemical information, another 'first' of the work, will contribute substantially to understanding the star-birth processes, say the researchers.
ISO results also indicate -- as highlighted by the team in Science -- that these systems will be observable with the new generation of large (8 metre class) ground-based infrared telescopes.
Current knowledge so far suggested that these very dusty objects could only be detected at far-infrared wavelengths not accessible from the ground, but ISO has shown that they can also be seen at certain very precise infrared wavelengths which do indeed cross the Earth's atmosphere -- the so-called 'infrared windows' at which ground-based infrared telescope work.
Astronomers Spy Baby Gas Giants 100 Light Years Away
Washington - March 29, 2000 - Planet-hunting astronomers have crossed an important threshold in planet detection, with the discovery of two planets that may be smaller in mass than Saturn. | <urn:uuid:608205c8-1ba4-4119-9720-b4a3604be342> | 2.96875 | 907 | News Article | Science & Tech. | 34.990452 | 1,450 |
The people in south Asia had no warning of the next disaster rushing toward them the morning of December 26, 2004. One of the strongest earthquakes in the past 100 years had just destroyed villages on the island of Sumatra in the Indian Ocean, leaving many people injured. But the worst was yet to come—and very soon. For the earthquake had occurred beneath the ocean, thrusting the ocean floor upward nearly 60 feet. The sudden release of energy into the ocean created a tsunami (pronounced su-NAM-ee) event—a series of huge waves. The waves rushed outward from the center of the earthquake, traveling around 400 miles per hour. Anything in the path of these giant surges of water, such as islands or coastlines, would soon be under water.
The people had already felt the earthquake, so why didn't they know the water was coming?
As the ocean floor rises near a landmass, it pushes the wave higher. But much depends on how sharply the ocean bottom changes and from which direction the wave approaches.
Energy from earthquakes travels through the Earth very quickly, so scientists thousands of miles away knew there had been a severe earthquake in the Indian Ocean. Why didn't they know it would create a tsunami? Why didn't they warn people close to the coastlines to get to higher ground as quickly as possible?
In Sumatra, near the center of the earthquake, people would not have had time to get out of the way even if they had been warned. But the tsunami took over two hours to reach the island of Sri Lanka 1000 miles away, and still it killed 30,000 people!
It is important, though, to understand just how the tsunami will behave when it gets near the coastline. As the ocean floor rises near a landmass, it pushes the wave higher. But much depends on how sharply the ocean bottom changes and from which direction the wave approaches. Scientists would like to know more about how actual waves react.
MISR has nine cameras all pointed at different angles. So the exact same spot is photographed from nine different angles as the satellite passes overhead. The image at the top of this page was taken with the camera that points forward at 46°. The image caught the sunlight reflecting off the pattern of ripples as the waves bent around the southern tip of the island. These ripples are not seen in satellite images looking straight down at the surface. Scientists do not yet understand what causes this pattern of ripples. They will use computers to help them find out how the depth of the ocean floor affects the wave patterns on the surface of the ocean. Images such as this one from MISR will help.
Images such as these from MISR will help scientists understand how tsunamis interact with islands and coastlines. This information will help in developing the computer programs, called models, that will help predict where, when, and how severely a tsunami will hit. That way, scientists and government officials can warn people in time to save many lives. | <urn:uuid:db2613b9-457b-405c-a9e8-cf6b3053cdc7> | 4.6875 | 607 | Knowledge Article | Science & Tech. | 59.732901 | 1,451 |
Now poor moose are being blamed for global warming.
Norwegian newspapers, citing research from Norway's technical university, said a motorist would have to drive 13,000 kilometers in a car to emit as much CO2 as a moose does in a year.
Bacteria in a moose's stomach create methane gas which is considered even more destructive to the environment than carbon dioxide gas. Cows pose the same problem.
Norway has some 120,000 moose but an estimated 35,000 are expected to be killed in this year's moose hunting season, which starts on September 25, Norwegian newspaper VG reported.
Stay informed with our free news services:
|All news from SPIEGEL International||Twitter | RSS|
|All news from Zeitgeist section||RSS|
© SPIEGEL ONLINE 2007
All Rights Reserved
Reproduction only allowed with the permission of SPIEGELnet GmbH | <urn:uuid:9d981d31-c6bb-4952-be20-a74847ac04c7> | 2.96875 | 192 | News Article | Science & Tech. | 42.552467 | 1,452 |
Data reported by the weather station: 552790
Latitude: 31.36 | Longitude: 90.01 | Altitude: 4701
|Main||Year 1996 climate||Select a month|
To calculate annual averages, we analyzed data of 363 days (99.18% of year).
If in the average or annual total of some data is missing information of 10 or more days, this is not displayed.
The total rainfall value 0 (zero) may indicate that there has been no such measurement and / or the weather station does not broadcast.
|Annual average temperature:||0.2°C||363|
|Annual average maximum temperature:||5.5°C||363|
|Annual average minimum temperature:||-6.2°C||363|
|Annual average humidity:||45.3%||358|
|Annual total precipitation:||360.46 mm||363|
|Annual average visibility:||29.5 Km||363|
|Annual average wind speed:||14.7 km/h||363|
Number of days with extraordinary phenomena.
|Total days with rain:||92|
|Total days with snow:||53|
|Total days with thunderstorm:||43|
|Total days with fog:||7|
|Total days with tornado or funnel cloud:||0|
|Total days with hail:||14|
Days of extreme historical values in 1996
The highest temperature recorded was 19°C on August 17.
The lowest temperature recorded was -29.2°C on January 2.
The maximum wind speed recorded was 122.2 km/h on January 29. | <urn:uuid:c3fcc140-bd36-4673-9094-086cde8921c2> | 2.5625 | 356 | Structured Data | Science & Tech. | 72.329555 | 1,453 |
Want to stay on top of all the space news? Follow @universetoday on Twitter
The average Earth surface temperature is 14° C. That’s 287 kelvin, or 57.2° F.
As you probably realize, that number is just an average. The Earth’s temperature can be much higher or lower than this temperature. In the hottest places of the planet, in the deserts near the equator, the temperature on Earth can get as high as 57.7° C. And then in the coldest place, at the south pole in Antarctica, the temperature can dip down to -89° C.
The reason the average temperature on Earth is so high is because of the atmosphere. This acts like a blanket, trapping infrared radiation close to the planet and warming it up. Without the atmosphere, the temperature on Earth would be more like the Moon, which rises to 116° C in the day, and then dips down to -173° C at night. | <urn:uuid:86d90c21-1f3c-4921-a8db-cbd4b6a112ed> | 3.625 | 204 | Knowledge Article | Science & Tech. | 69.944741 | 1,454 |
Science Fair Project Encyclopedia
- Green algae
- land plants (embryophytes)
- non-vascular embryophytes
- vascular plants (tracheophytes)
- seedless vascular plants
- seed plants (spermatophytes)
Plants are a major group of living things (about 300,000 species), including familiar organisms such as trees, flowers, herbs, and ferns. Aristotle divided all living things between plants, which generally do not move or have sensory organs, and animals. In Linnaeus' system, these became the Kingdoms Vegetabilia (later Plantae) and Animalia. Since then, it has become clear that the Plantae as originally defined included several unrelated groups, and the fungi and several groups of algae were removed to new kingdoms. However, these are still often considered plants in many contexts. Indeed, any attempt to match "plant" with a single taxon is doomed to fail, because plant is a vaguely defined concept unrelated to the presumed phylogenic concepts on which modern taxonomy is based.
- See main article at Embryophytes
Most familiar are the multicellular land plants, called embryophytes. They include the vascular plants, plants with full systems of leaves, stems, and roots. They also include a few of their close relatives, often called bryophytes, of which mosses are the most common.
All of these plants have eukaryotic cells with cell walls composed of cellulose, and most obtain their energy through photosynthesis, using light and carbon dioxide to synthesize food. About 300 plant species do not photosynthesize but are parasites on other species of photosynthetic plants. Plants are distinguished from green algae, from which they evolved, by having specialized reproductive organs protected by non-reproductive tissues.
Bryophytes first appeared during the early Palaeozoic. They can only survive in moist environments, and remain small throughout their life-cycle. This involves an alternation between two generations: a haploid stage, called the gametophyte, and a diploid stage, called the sporophyte. The sporophyte is short-lived and remains dependent on its parent.
Vascular plants first appeared during the Silurian period, and by the Devonian had diversified and spread into many different land environments. They have a number of adaptations that allowed them to overcome the limitations of the bryophytes. These include a cuticle resistant to desiccation, and vascular tissues which transport water throughout the organism. In many the sporophyte acts as a separate individual, while the gametophyte remains small.
The first primitive seed plants, Pteridosperms (seed ferns) and Cordaites, both groups now extinct, appeared in the late Devonian and diversified through the Carboniferous, with further evolution through the Permian and Triassic periods. In these the gametophyte stage is completely reduced, and the sporophyte begins life inside an enclosure called a seed, which develops while on the parent plant, and with fertilisation by means of pollen grains. Whereas other vascular plants, such as ferns, reproduce by means of spores and so need moisture to develop, some seed plants can survive and reproduce in extremely arid conditions.
Early seed plants are referred to as gymnosperms (naked seeds), as the seed embryo is not enclosed in a protective structure at pollination, with the pollen landing directly on the embryo. Four surviving groups remain widespread now, particularly the conifers, which are dominant trees in several biomes. The angiosperms, comprising the flowering plants, were the last major group of plants to appear, emerging from within the gymnosperms during the Jurassic and diversifying rapidly during the Cretaceous. These differ in that the seed embryo is enclosed, so the pollen has to grow a tube to penetrate the protective seed coat; they are the predominant group of flora in most biomes today.
Algae and Fungi
The algae comprise several different groups of organisms that produce energy through photosynthesis. The most conspicuous are the seaweeds, multicellular algae that often closely resemble terrestrial plants, found among the green, red, and brown algae. These and other algal groups also include various single-celled creatures and forms that are simple collections of cells, without differentiated tissues. Many can move about, and some have even lost their ability to photosynthesize; when first discovered, these were considered as both plants and animals.
The embryophytes developed from green algae; the two are collectively referred to as the green plants or Viridaeplantae. The kingdom Plantae is now usually taken to mean this monophyletic group, as shown above. With a few exceptions among the green algae, all such forms have cell walls containing cellulose and chloroplasts containing chlorophylls a and b, and store food in the form of starch. They undergo closed mitosis without centrioles, and typically have mitochondria with flat cristae.
The chloroplasts of green plants are surrounded by two membranes, suggesting they originated directly from endosymbiotic cyanobacteria. The same is true of the red algae, and the two groups are generally believed to have a common origin. In contrast, most other algae have chloroplasts with three or four membranes. They are not in general close relatives of the green plants, acquiring chloroplasts separately from ingested or symbiotic green and red algae.
Unlike embryophytes and algae, fungi are not photosynthetic, but are saprophytes: they obtain their food by breaking down and absorbing surrounding materials. Most fungi are formed by microscopic tubes called hyphae, which may or may not be divided into cells but contain eukaryotic nuclei. Fruiting bodies, of which mushrooms are the most familiar, are actually only the reproductive structures of fungi. They are not related to any of the photosynthetic groups, but are close relatives of animals.
The photosynthesis and carbon fixation conducted by land plants and algae are the ultimate source of energy and organic material in nearly all habitats. These processes also radically changed the composition of the Earth's atmosphere, which as a result contains a large proportion of oxygen. Animals and most other organisms are aerobic, relying on oxygen; those that do not are confined to relatively few, anaerobic environments.
Much of human nutrition depends on cereals. Other plants that are eaten include fruits, vegetables, herbs, and spices. Some vascular plants, referred to as trees and shrubs, produce woody stems and are an important source of building material. A number of plants are used decoratively, including a variety of flowers.
Simple plants like algae may have short life spans as individuals, but their populations are commonly seasonal. Other plants may be organized according to their seasonal growth pattern:
- Annual: live and reproduce within one growing season.
- Biennial: live for two growing seasons; usually reproduce in second year.
- Perennial: live for many growing seasons; continue to reproduce once mature.
Among the vascular plants, perennials include both evergreens that keep their leaves the entire year, and deciduous plants which lose their leaves for some part. In temperate and boreal climates, they generally lose their leaves during the winter; many tropical plants lose their leaves during the dry season.
The growth rate of plants is extremely variable. Some mosses grow less than 1 μm/h, while most trees grow 25-250 μm/h. Some climbing species, such as kudzu, which do not need to produce thick supportive tissue, may grow up to 12500 μm/h.
Plant fossils include roots, wood, leaves, seeds, fruit, pollen, spores and amber (the fossilized resin produced by some plants). Fossil land plants are recorded in terrestrial, lacustrine, fluvial and nearshore marine sediments. Pollen, spores and algae (dinoflagellates and acritarchs) are used for dating sedimentary rock sequences. The remains of fossil plants are not as common as fossil animals, although plant fossils are locally abundant in many regions worldwide.
Early fossil plants are well known from the Devonian period, including the chert of Rhynie in Aberdeenshire, Scotland. The best preserved examples, from which their cellular construction has been described, have been found at this locality. The preservation is so perfect that sections of these ancient plants show the individual cells within the plant tissue. The Devonian period also saw the evolution of what many believe to be the first modern tree, Archaeopteris. This fern-like tree combined a woody trunk with the fronds of a fern, but produced no seeds.
The Coal Measures are a major source of Palaeozoic plant fossils, with many groups of plants in existence at this time. The spoil heaps of coal mines are the best places to collect; coal itself is the remains of fossilised plants, though structural detail of the plant fossils is rarely visible in coal. In the Fossil Forest at Victoria Park in Glasgow, Scotland, the stumps of Lepidodendron trees are found in their original growth positions.
The fossilized remains of conifer and angiosperm roots, stems and branches may be locally abundant in lake and inshore sedimentary rocks from the Mesozoic and Caenozoic eras. Sequoia and its allies, magnolia, oak, and palms are often found.
Petrified wood is common in some parts of the world, and is most frequently found in arid or desert areas were it is more readily exposed by erosion. Petrified wood is often heavily silicified (the organic material replaced by silicon dioxide), and the impregnated tissue is often preserved in fine detail. Such specimens may be cut and polished using lapidary equipment. Fossil forests of petrified wood have been found in all continents.
Fossils of seed ferns such as Glossopteris are widely distributed thoughout several continents of the southern hemisphere, a fact that gave support to Alfred Wegener's early ideas regarding Continental drift theory.
References and further reading
- Thomas N Taylor and Edith L Taylor. The Biology and Evolution of Fossil Plants. Prentice Hall, 1993.
- Tree of Life
- Chaw, S.-M. et al. Molecular Phylogeny of Extant Gymnosperms and Seed Plant Evolution: Analysis of Nuclear 18s rRNA Sequences (pdf file) Molec. Biol. Evol. 14 (1): 56-68. 1997.
Botanical and vegetation databases
- e-Floras (Flora of China, Flora of North America and others)
- United States of America
- Flora Europaea
- 'Dave's Garden' horticultural plant database
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:b8724c44-5980-4235-935c-5f06d06d99e8> | 3.90625 | 2,280 | Knowledge Article | Science & Tech. | 32.690165 | 1,455 |
An international team of researchers led by Masamune Oguri at Kavli IPMU and Naohisa Inada at Nara National College of Technology conduced an unprecedented survey of gravitationally lensed quasars, and used it to measure the expansion history of the universe. The result provides strong evidence that the expansion of the universe is accelerating. There were several observations that suggested the accelerated cosmic expansion, including distant supernovae for which the 2011 Nobel Prize in Physics was awarded. The team’s result confirms the accelerated cosmic expansion using a completely different approach, which strengthens the case for dark energy. This result will be published in The Astronomical Journal.
Full Story: http://www.ipmu.jp/node/1281
Fang Lizhi, a major voice for human rights and democracy and a pioneering scientist in his native China, continued to advance the field of astrophysics at the UA for more than 20 years before he died last week.
Human rights activist Fang Lizhi, who died last week at age 76, had been a professor in the University of Arizona department of physics and an adjunct professor with the UA’s Steward Observatory for more than 20 years, where he made highly regarded contributions to astrophysics.
Fang was world renowned for his outspoken and active role in promoting human rights in his native China.
Considered an “undesirable element” by the Chinese government, Fang was dismissed from the Chinese nuclear program and reassigned in 1958 to the University of Science and Technology of China, or USTC, which is regarded as China’s equivalent of the Massachusetts Institute of Technology.
Full Story: http://uanews.org/node/46176
NASA’s 747 Shuttle Carrier Aircraft (SCA) with space shuttle Discovery mounted atop will fly approximately 1,500 feet above various parts of the Washington, D.C. metropolitan area on Tuesday, April 17.
The flight, in cooperation with the Federal Aviation Administration, is scheduled to occur between 10 and 11 a.m. EDT. NASA Television and the agency’s web site will provide live coverage.
The exact route and timing of the flight depend on weather and operational constraints. However, the aircraft is expected to fly near a variety of landmarks in the metropolitan area, including the National Mall, Reagan National Airport, National Harbor and the Smithsonian’s Udvar-Hazy Center. When the flyover is complete, the SCA will land at Dulles International Airport.
One day in the fall of 2011, Neil Sheeley, a solar scientist at the Naval Research Laboratory in Washington, D.C., did what he always does – look through the daily images of the sun from NASA’s Solar Dynamics Observatory (SDO).
But on this day he saw something he’d never noticed before: a pattern of cells with bright centers and dark boundaries occurring in the sun’s atmosphere, the corona. These cells looked somewhat like a cell pattern that occurs on the sun’s surface — similar to the bubbles that rise to the top of boiling water — but it was a surprise to find this pattern higher up in the corona, which is normally dominated by bright loops and dark coronal holes.
Sheeley discussed the images with his Naval Research Laboratory colleague Harry Warren, and together they set out to learn more about the cells. Their search included observations from a fleet of NASA spacecraft called the Heliophysics System Observatory that provided separate viewpoints from different places around the sun. They describe the properties of these previously unreported solar features, dubbed “coronal cells,” in a paper published online in The Astrophysical Journal on March 20, 2012 that will appear in print on April 10.
I just got this Tweet.
ISS will be visible passing at your location -weather permitting- on
April 11, 2012, 05:56:04 MUT
Is it a good one?
This time, the International Space Station will be flying over at 27 degrees. It will look like a very bright star (magnitude -2.0).
Where to look?
ISS will come up in the north and will be heading for southeast.
“We’re thrilled,” said LCOGT Scientific Director Tim Brown, “to have our first telescope in such a well-supported site, with superbly dark skies.”
The 1-meter (40-inch) telescope will be used for both research and outreach to K-12 schools. It is part of a large planned network of LCOGT telescopes to be installed around the world, and the first of five (two 1-meter and three 0.4-meter) and possibly more LCOGT telescopes to be installed at McDonald Observatory over the next few years.
The most recent spacecraft tracking and telemetry data were collected on April 4 using the Deep Space Network’s 34 meterStation 15 at Goldstone, California. Aside from the issues in work with the Ultrastable Oscillator (see the Jan. 5, 2012 Significant Events) and the Cosmic Dust Analyzer, the Cassini spacecraft is in an excellent state of health and its subsystems are operating normally. Information on the present position of the Cassini spacecraft may be found on the “Present Position” page at: http://saturn.jpl.nasa.gov/mission/presentposition/.
Telemetry data from the targeted Enceladus encounter E-17 on March 27 were transmitted 1.3 billion kilometers to Earth on Wednesday; every bit was captured successfully by the Deep Space Network. The Ion and Neutral Mass Spectrometer (INMS) was able to discern variations in CO2 density among the individual gas jets as the spacecraft dove through the Enceladus south polar plume. The Cassini Plasma Spectrometer (CAPS), which was recently powered back on, acquired excellent data in and near the plume, along with the Cosmic Dust Analyzer (CDA) and INMS. A spectacular image of the south polar plume may be seen here, along with images of the icy moons Janus, taken March 27, and Dione, taken March 28:http://saturn.jpl.nasa.gov/news/cassinifeatures/feature20120328/
Engineers and astronomers are celebrating the much anticipated first light of the MOSFIRE instrument, now installed on the Keck I telescope at W. M. Keck Observatory. MOSFIRE (Multi-Object Spectrometer For Infra-Red Exploration) will vastly increase the data gathering power of what is already the world’s most productive ground-based observatory.
“This is a near-infrared multi-object spectrograph, similar to our popular LRIS and DEIMOS instruments, only at longer wavelengths,” explained Keck Observatory Observing Support Manager Bob Goodrich. “The MOSFIRE project team members at Keck Observatory, Caltech, UCLA, and UC Santa Cruz are to be congratulated, as are the observatory operations staff who worked hard to get MOSFIRE integrated into the Keck I telescope and infrastructure. A lot of people have put in long hours getting ready for this momentous First Light.”
Three of the six crew members living aboard the International Space Station will take questions from reporters during a news conference on Wednesday, April 11, at 9:15 a.m. CDT. The conference will air live on NASA Television and will be streamed on the agency’s website.
The news conference will link up reporters with NASA Expedition 30 Commander Dan Burbank and Flight Engineers Don Pettit and European Space Agency astronaut Andre Kuipers.
The crew members will discuss research they are conducting, the myriad of cargo delivery vehicles visiting the station — including SpaceX Dragon, the first American commercial vehicle — and the return of Burbank and cosmonauts Anton Shkaplerov and Anatoly Ivanishin in their Soyuz spacecraft later this month.
One of the world’s largest astronomy archives, containing a treasure trove of information about myriad stars, planets, and galaxies, has been named in honor of the United States Senator from Maryland Barbara Mikulski.
Called MAST, for the Barbara A. Mikulski Archive for Space Telescopes, the huge database contains astronomical observations from 16 NASA space astronomy missions, including the Hubble Space Telescope.
“In celebration of Sen. Mikulski’s career-long achievements, and particularly this year, becoming the longest-serving woman in U.S. Congressional history, we sought NASA’s permission to establish the Senator’s permanent legacy to science by naming the optical and ultraviolet data archive housed here at the Institute in her honor,” said Matt Mountain, director of the Space Telescope Science Institute (STScI) in Baltimore, Md.
STScI is the science operations center for Hubble and its upcoming successor, the James Webb Space Telescope. | <urn:uuid:56de895f-8137-4398-a92a-f6af711947c1> | 2.625 | 1,853 | Content Listing | Science & Tech. | 40.172503 | 1,456 |
Biodiversity Heritage Library
Browse Our Collection by:
Subject "Burrowing animals"
The importance of catfish burrows in maintaining fish populations of tropical freshwater streams in western Ecuador / Garrett S. Glodek.
By: Glodek, Garrett S.
Publication info: [Chicago] :Field Museum of Natural History,1978.
Contributed by: University of Illinois Urbana Champaign
Subjects: Burrowing animals Catfishes Ecology Ecuador Effect of water levels on Fishes Habitations Palenque River | <urn:uuid:75794740-80ea-4434-8ce1-b3d2ef563163> | 3.1875 | 111 | Content Listing | Science & Tech. | 0.245 | 1,457 |
Google is now serving up more than a hundred years of photographs from Life Magazine. The pictures of the early days of astronomy are just spectacular. The archives contain images of many astronomers who were critical figures in the development of the field, but who have yet to have telescopes named after them. A large fraction of them also seemed to smoke pipes.
A huge hero of mine is Walter Baade. Baade was the guy who essentially took over observations at Mt Wilson during the blackouts of WWII. With the lights of Los Angeles snuffed out, and unable to serve in the military himself, he pushed the telescopes on Mt Wilson to their limits, and established the study of stellar populations in nearby galaxies.
There are some terrific pictures of Walter Adams working at Mt Wilson. In the picture below, he’s holding the telescope controls used for guiding. During an astronomical observation, you have to move the telescope to compensate for the earth’s rotation. Nowadays, your computer can take care of it by adjusting the position to keep a bright star at a fixed position on a CCD camera. Back then, you looked through a little spotting scope, and manually adjusted the telescope position to keep it pointed at the right part of the sky. If you let it drift, your image would be blurry. No pee breaks for you, Dr. Adams!
The guy kneeling in the figure below is Gerard Kuiper, working on a telescope at McDonald Observatory. He was a planetary astronomer, and the guy for whom the “Kuiper Belt” in the outer solar system was named, although Edgeworth probably deserved more credit for it. (Kuiper actually does have an airborne observatory named after him).
And you have to love this picture of Frank Drake, working at the National Radio Astronomy Observatory in Greenbank West Virginia. You really can never have enough toggle switches. FYI, Drake is the guy behind the “Drake Equation”, used to estimate the likelihood of contact with extraterrestrial civilizations.
And finally, a wonderful overhead shot of the 100″ telescope at Mt. Wilson
The pictures above are a tiny fraction of the available pictures of working scientists. Cancel your afternoon appointments and dive in. | <urn:uuid:0146518b-5d1e-4ea6-8a34-c54f34f6df6f> | 3.234375 | 454 | Personal Blog | Science & Tech. | 42.923767 | 1,458 |
June 11, 2010
Dead birds smothered in icky, gooey brown oil are the iconic images of most any oil spill, including the ongoing one in the Gulf. Even a small amount of oil can kill a bird. Oil sticks to feathers, destroying their waterproofing ability and exposing the bird to extremes of temperature. And ingested oil can harm internal organs.
The birds that survive long enough to be rescued can often be cleaned. The International Bird Rescue Research Center has treated birds from more than 150 spills over the last four decades, and it has teamed up with Tri-State Bird Rescue to wash birds rescued from the Gulf spill.
Cleaning the birds is a multi-step process, and it can be a stressful one for the bird. Beforehand, the bird is examined and its health stabilized. It may be suffering from exhaustion, dehydration, hypothermia or the toxic effects of ingested oil. Once the bird is healthy enough to handle the ordeal of washing, trained staff and volunteers clean it in a tub of warm water mixed with one percent Dawn dishwashing detergent. (IBRRC discovered in the late 1970s not only that Dawn was great at removing oil, but also that it didn’t irritate birds’ skin or eyes and could even be ingested—accidentally, of course—without harm.) When the water is dirty, the bird is moved to a second tub, and so on, until the water remains clean. Then the bird is thoroughly rinsed. Once it is dry, the bird will preen and restore the overlapping, weatherproof pattern of its feathers. After it is deemed healthy, the bird is released to an oil-free area.
Cleaning one bird can take hours and up to 300 gallons of water. Survival rates are about 50 to 80 percent on average, the IBRRC says, though this depends on the species. (As of earlier this week, the center had rescued 442 live birds, 40 of which had been cleaned were healthy enough to be released back into the wild.)
Some scientists, however, have questioned the value of putting so much effort into saving birds when the benefits are unclear. “It might make us feel better to clean them up and send them back out,” University of California, Davis ornithologist Daniel Anderson told Newsweek. “But there’s a real question of how much it actually does for the birds, aside from prolong their suffering.”
There is no long-term data on survival after the birds have been released. But there is concern that many birds may simply return to their oil-soaked homes to die. And there is evidence that the survivors have shorter life spans and fewer surviving chicks.
But it’s hard to just leave these creatures to die, especially as they have been harmed by a man-made disaster. To me, at least, it seems irresponsible to not even try. As we begin to measure the damage from this spill, leaving these innocent victims on their own shouldn’t be an option.
Sign up for our free email newsletter and receive the best stories from Smithsonian.com each week. | <urn:uuid:2d3b0c34-2324-48d2-bef4-a03870a0a5ee> | 3.1875 | 643 | News Article | Science & Tech. | 60.058439 | 1,459 |
Two stories appeared this week with more bad news on climate change. First, a review of 866 papers found that animal and plant species are shifting their ranges northward, while polar species are dying out.
The linked article from the Post included several paragraphs on the economic implications, in this case ski resorts and power companies. I imagine that they are not the only industries facing challenges.
"Wild species don't care who is in the White House," Parmesan said. "It is very obvious they are desperately trying to move to respond to the changing climate. Some are succeeding. But for the ones that are already at the mountaintop or at the poles, there is no place for them to go. They are the ones that are going extinct."
Among the most affected species, Parmesan said, are highland amphibians in the tropics. She said more than two-thirds of 110 species of harlequin frogs, which occupy mountain cloud forests in Central America, have become extinct in the past 35 years.
Meanwhile, many pest species -- including roaches, fleas, ticks and tree-killing beetles -- are surviving warming winters in increasing numbers. "We are seeing throughout the Northern Hemisphere that pests are able to have more generations per year, which allows them to increase their numbers without being killed off by cold winter temperatures," said Parmesan.
Meanwhile, the rate of increase of carbon emissions has risen since 2000. Around that year , the annual rate of increase rose from about 1% to 2.5%. The Global Carbon Project, which established those figures, identified two causes.
"There has been a change in the trend regarding fossil fuel intensity, which is basically the amount of carbon you need to burn for a given unit of wealth," explained Corinne Le Quere, a Global Carbon Project member who holds posts at the University of East Anglia and the British Antarctic Survey.
"From about 1970 the intensity decreased - we became more efficient at using energy - but we've been getting slightly worse since the year 2000," she told the BBC News website.
"The other trend is that as oil becomes more expensive, we're seeing a switch from oil burning to charcoal which is more polluting in terms of carbon." | <urn:uuid:577bcf3c-8dec-4793-b0aa-101996281e00> | 2.6875 | 454 | Personal Blog | Science & Tech. | 48.678987 | 1,460 |
NASA began observing a dust storm on the planet Mars on November 10, 2012. Martian dust storms are the largest such storms in our solar system. Over the century that astronomers have monitored them through telescopes – and now via spacecraft – these periodic storms have been know to rage for months and grow to cover the entire planet Mars. This one, however, appeared to be dissipating by early December, 2012.
Dust storms on Mars sometimes start in the months before Mars is closest to the sun, as it soon will be. Mars will reach perihelion – its closest point to the sun – in January 2013. Each Martian year lasts about two Earth years. Regional dust storms expanded and affected vast areas of Mars in 2001 and 2007, but not between those years and not since 2007.
The image above is a mosaic taken by a spacecraft in orbit around Mars, the wonderful Mars Reconnaissance Orbiter, on November 18, 2012. Small white arrows outline the area in Mars’ southern hemisphere where the 2012 Martian dust storm was building. The storm was not far from two Mars rovers, Opportunity and Curiosity.
At that time, Rich Zurek, chief Mars scientist at NASA’s Jet Propulsion Laboratory, Pasadena, California said:
This is now a regional dust storm. It has covered a fairly extensive region with its dust haze, and it is in a part of the planet where some regional storms in the past have grown into global dust hazes. For the first time since the Viking missions of the 1970s, we are studying a regional dust storm both from orbit and with a weather station on the surface.
That weather station on Mars comes from the Mars rover Curiosity, which landed on Mars on August 5, 2012. NASA says Curiosity’s weather station detected atmospheric changes related to the storm. For example, its sensors measured decreased air pressure and a slight rise in overnight low temperature. In fact, dust storms on Mars are known to raise the air temperature of the planet, sometimes globally.
The Opportunity rover on Mars – that stalwart vehicle that has been tooling around on the Red Planet since 2004 and is now near the Endeavour crater on Mars – does not have a weather station. Opporunity was within 837 miles (1,347 kilometers) of the storm on November 21, NASA said, and did observe a slight drop in atmospheric clarity from its location. If the storm had taken over the entire planet and clouded over the sky, it would have impacted Opportunity most heavily, because that rover relies on the sun for energy. The rover’s energy supply would be disrupted if dust from the air fell on its solar panels.
Meanwhile, the car-sized Curiosity rover would fare better since it is powered by plutonium instead of solar cells.
Curiosity and the Mars Reconnaissance Orbiter are working together to provide a weekly Mars weather report from the orbiter’s Mars Color Imager, which you can see here.
Bottom line: As Mars nears its perihelion or closest point to the sun in January 2013, a major dust storm broke out in the planet’s southern hemisphere, where summer is coming. NASA is tracking the storm with both the Curiosity and Opportunity rovers on the Martian surface, and from above with the Mars Reconnaissance Orbiter. These dust storms on Mars sometimes rage for months and cover the entire planet. This one seems to have died down suddenly. | <urn:uuid:79cd84c3-9aea-4123-a05f-43931a24850e> | 4.15625 | 697 | Knowledge Article | Science & Tech. | 50.284294 | 1,461 |
Photograph by Donna Eaton
A hippo peers from a plant-covered pool in Kenya’s Masai Mara Game Reserve. These massive mammals keep cool by submerging their massive bodies in African ponds, rivers, and lakes for up to 16 hours a day. Though they can hold their breath for perhaps half an hour if necessary, hippos typically leave the tops of their heads above the surface. At night hippos leave the water and roam overland to graze. If caught on land too long during the heat of the day the animals can dehydrate quickly.
Photograph by Craig Arnold
A Zambian hippo sends an aggressive message by displaying sharp canine teeth that can reach 20 inches (51 centimeters) in length. Bulls use this open-mouthed “gaping” display while standing face to face with one another in order to determine which animal is dominant. Sometimes a show of strength is not enough and the behavior leads to potentially fatal battles. Hippos are dangerous to humans as well.
Photograph by Amanda Cotton
Manatees cruise slowly in shallow, warm coastal waters and rivers—like Florida’s gin-clear Crystal River, pictured here. The massive mammals (up to 1,300 pounds or 600 kilograms) are born underwater and never leave the water as long as they live—though they surface to breathe every few minutes. Also known as sea cows, they are insatiable grazers, browsing on a variety of aquatic grasses, weeds, and algae.
Several different manatee species live along the Atlantic coast of the Americas, Africa’s west coast, and the Amazon River.
Photograph by Danny Brown
The muskrat is a common denizen of wetlands, swamps, and ponds, where it dens by tunneling into muddy banks. This large rodent has a body a foot (30 centimeters) long and a flat tail that nearly doubles its length. Muskrats are well adapted for the water and begin swimming at only ten days old. Perhaps best known for their communication skills, muskrats exchange information with one another and warn off predators with their distinctive odor, or musk.
Photograph by Sergey Gabdurakhmanov
The world is home to many seals but only one truly freshwater species—the Baikal seal. This seal inhabits the Russian lake of the same name, which is the world’s deepest. Though new generations of Baikal seals are born each year at rookeries like this one, the species does face serious threats. Illegal hunting is an issue, as is widespread pollution from paper and pulp mills and other industry around the lakeshore.
Amazon River Dolphin
Photograph by Kevin Schafer
The charismatic Amazon river dolphin uses echolocation to track down fish and crustaceans in murky river waters. During annual floods the dolphins actually swim through flooded forests to hunt among the trees. Often pink or very pale, the dolphins are relatively easy to spot. The bright hue and the boto's natural curiosity around boats have made the dolphins easy prey for fishermen who target them (illegally) for use as catfish bait. Populations have experienced serious declines in recent years; among traditional Amazonian peoples the boto was long considered a supernatural being that was able to take human form.
Photograph by Mark Godfrey
The world’s biggest rodent, the capybara, grows to more than 4 feet (130 centimeters) long and tips the scales at up to 145 pounds (66 kilograms). These water-loving mammals reach such size by grazing on grasses and aquatic plants.
Capybaras are physically well adjusted to their watery environs. They have webbed toes to help them swim well and can dive underwater for five minutes or more. Capybaras are found in Central and South America, populating lakes, rivers, and wetlands from Panama south to Brazil and northern Argentina.
The Nature Conservancy is working with partners to protect habitat for the capybara, including the watery Llanos grasslands. The group is working with local landowners to create private reserves in critical habitat areas and helping bring more resources to a 63,000-acre (25,500-hectare) public protected area in the province of Casanare in northeastern Colombia.
Photograph by Juan Alvarez
The capybara’s eyes, ears, and nostrils are situated high on its head so that it can remain above the surface while the animal swims. The social mammals travel and live in groups dominated by an alpha male and defend their feeding and wallowing territories. Humans hunt (and raise) capybaras for their leather and their meat—which is especially popular during Lent because some South American Catholics consider the animal, like fish, an acceptable alternative to beef or pork.
Photograph by Mike Paterson
Beavers are environmental engineers second only to humans in their ability to dramatically reshape the landscape to their liking. Using their powerful jaws and teeth, they fell trees by the dozens to create wood and mud dams, 2 to 10 feet (1 to 3 meters) high and more than 100 feet (30 meters) long. Beaver dams block brooks and streams to flood fields and forest alike. The resulting ponds, which can be enormous, are then graced with a branch-and-mud lodge, which the beavers enter via secure underwater passages.
Photograph by Derek Dafoe
Though they are clumsy on land, beavers glide in the water with finlike webbed feet and rudderlike tails, which help them swim along at some 5 miles (8 kilometers) an hour. The mammals also boast a sort of natural wet suit in the form of their oily and water-resistant fur.
Beavers eat aquatic plants, roots, leaves, bark, and twigs. Their teeth grow throughout their lives so wood gnawing is actually necessary to keep them from growing too large and curved. A single beaver gnaws down hundreds of trees each year—typically dropping a 6-inch (15-centimeter) diameter tree in just 15 minutes.
Photograph by Lee Streitz
This sleepy river otter also has a playful side. These water-loving mammals seem to take pleasure in sliding and diving and can swim gracefully with their webbed feet and paddlelike tails. Otters have specialized nostrils and ears that close in the water, as well as water-repellent fur. Young otters begin to swim when they are only about two months old. River otters live in burrows by the edge of rivers or lakes in close proximity to the fish they feed on.
Photograph by Stephen Babka
The platypus is an improbable mishmash of an animal: It has a furry, otterlike body, a ducklike bill and webbed feet, and a beaverlike paddle tail. Like those other animals platypuses swim well and spend much of their time in the water. Unlike otters or beavers, they lay eggs—one of only two mammals known to do so. Male platypuses also have venomous stingers on their rear feet. These animals burrow near the water’s edge and feed by digging underwater for worms, shellfish, and insects.
Help Save the Colorado River
You can help restore freshwater ecosystems by pledging to cut your water footprint. For every pledge, Change the Course will restore 1,000 gallons back to the Colorado River.
Sandra is a leading authority on international freshwater issues and is spearheading our global freshwater efforts.
He's paddled the Colorado River from its headwaters to the delta, in an effort to bring awareness to this mighty river at risk.
For more than 15 years, Osvel Hinojosa Huerta has been resurrecting Mexico's Colorado River Delta wetlands.
Water Currents, by Sandra Postel and Others
A year in the making, this video highlights nature's splendor.
A wetland flourishes in Mexico thanks to a treatment plant.
Scientists investigate the impacts of "micro plastics" on lake ecosystems.
Special Ad Section
The World's Water
NG's new Change the Course campaign launches. When individuals pledge to use less water in their own lives, our partners carry out restoration work in the Colorado River Basin.
A special series on how grabbing water from poor people and future generations threatens global food security, environmental sustainability, and local cultures. | <urn:uuid:a6832002-e376-4f5b-9983-6d46b680d43d> | 3.3125 | 1,726 | Content Listing | Science & Tech. | 44.465734 | 1,462 |
The idea of the hypernova was first proposed by Dr. Bohdan Paczynski of Princeton University
. He wanted to explain the gamma ray burst
s that typically last a few second
s at a time, come from seemingly random
directions in space, and have the potential to produce more energy than anything else in the rest of the universe
for a few seconds. The first remnants of such an explosion were identified by Q. Daniel Wang of Northwestern University
, using work done by Dr. You-Hua Chu of the University of Illinois at Urbana-Champaigne
as a base. The two remnants he identified reside in the galaxy M101
in the Ursa Major Constellation
. They have been given the easily memorable names NGC5471B
. The former is a nebula
that is expanding at at least a hundred miles a second while the latter is one of the largest pieces of supernova
wreckage known at 850 light year
s across. Both are about ten times brighter than any known supernova remnants in our galaxy
Little is known for sure about these powerful explosions, although it is suspected that they are the product of the collapse of extremely massive stars or their collisions with superdense objects such as neutron stars. Both relationships also imply that hypernovae probably have something to do with the formation of black holes. Aside from the physics that I'm not going to butcher by trying to explain, these relationships have been deduced primarily from the locations of gamma ray bursts' points of origin as well as the locations of various remnants of hypernovae, which both tend to be in areas of intensive star formation. Said areas are also hotspots for the formation of neutron stars, black holes, and other associated objects.
Another hypernova that may come to play an important role in our lives in the near future is Eta Carainae. While it is not yet a hypernova, it is suspected that it will probably become one relatively soon due to its unstable patterns of brightness and dimness over the past 150 years that has culminated recently in an intensive brightening spell. It now radiates around 400 million times as much light and energy as the sun and is brightening in a way that astronomers do not understand. On the bright side, though, if it does explode again, it will probably be too far away (7500 light years) to hurt those of us who are protected from gamma ray bursts by an atmosphere. If, however, you are an orbital satellite or have any friends who are orbital satellites, I'd be very frightened indeed.
A really cool picture of Eta Carainae can be found at http://earthfiles.com/earth040.html, where I also got much of the information on it. Most of the rest of the information in this writeup comes from http://www.space.com/scienceastronomy/astronomy/astrobizarre_000928.html.
It should also be noted that I am the layest of lay persons when it comes to this stuff, so if I've gotten anything wrong, please /msg sludgeel so that I can correct it. | <urn:uuid:db36eb8c-30a7-4887-a53e-acb1c1b6fe92> | 3.765625 | 646 | Personal Blog | Science & Tech. | 48.413299 | 1,463 |
Seeing red, or green - chemistry of color vision
Dr. Ali Zand is researching the chemistry underlying our ability to perceive colors and its implications on color blindness.
At first glance, Dr. Ali Zand's research on the chemistry of color vision defies the very foundation of the art world, where red, blue and yellow are the "primary colors" from whence all other colors are made. In the chemistry world, the colors are determined by red, blue and green proteins- heresy to those schooled in the universally accepted theory of the color wheel.
According to Zand, associate professor of Chemistry at Kettering University, the human eye perceives all colors based on three colored proteins (rhodopsins): red, blue and green. Before all the artists and art majors start rioting in the streets, take a moment to consider the chemistry that backs up Zand's claim.
"Why humans see different colors is based on how these proteins react in the cone cells of the eye," he said. "There is only one chromophore (one molecule) that is responsible for a chemical reaction that takes place in the eye allowing humans to see," Zand said, "that molecule is Retinal, a form of vitamin A."
Retinal combines with a protein called Opsin to form Rhodopsin, the chemical entity (protein) responsible for vision. Opsins are found in the rod and cone cells of the eye. Zand's research is concerned with the question 'If there is only one molecule involved in vision, how can that one molecule allow us to see so many different colors?"
To find out, Zand, and Dr. Babak Borhan, associate professor of Chemistry at Michigan State University and his research group, collaborated to engineer a protein that could mimic Rhodopsin's response to chemical and physical changes. They had to engineer a surrogate protein because Rhodopsin is a membrane-bound protein that could not easily be separated and purified for research purposes.
"Rhodopsin has to stay within the membrane to maintain its conformation (shape)," said Zand. "We engineered a protein called Cellular Retinoic Acid-binding Protein II (CRAP II - who said scientists don't have a sense of humor?) to use in research."
To understand the importance of this CRAP II protein, it's necessary to explain the process of vision. When the light hits the eye it passes through the cornea, the lens and the vitreous fluid. Those three objects focus the light on the tissue lining the inner part of the eye, which is the retina. The retina is made up of thousands of rod and cone cells that are activated through the absorption of light by Rhodopsin.
Inside of the active site of Opsin is 11-cis-Retinal. The chemical structure of cis-Retinal has a bend in its carbon structure that straightens out (becomes more planar) when light is absorbed. This causes a cascade of events. Rhodopsin is a G-protein coupled receptor; it binds itself to and activates the G-protein called transducin. When Rhodopsin absorbs light, transducin releases a subunit of its molecular structure called the alpha segment. The alpha segment attaches itself to another protein leading to cascade of events resulting in closure of calcium ion channels within the cell. This causes a gradient in ion concentrations inside the cell versus outside the cell, initiating an electrical impulse that leads to a neurotransmitter release into a synapse. The neurotransmitter is picked up by another neuron and the neuron transmits the signal to the brain resulting in a visual image.
'The rod cells are very sensitive to different shades of light and darkness; and cone cells are actually the cells that allow humans to see color', said Zand.
The rod cell has an outer segment that has various membranous discs containing a seven-alpha-helical protein (seven alpha helixes that are bound together like a barrel) called Opsin. Cone cells possess the same discs, but they are part of the actual cell membrane of the cone cell, notdisconnected from the plasma membrane as in the rod cells.
In the cone cell there are three different types of Opsins: Red Opsin, Blue Opsin and Green Opsin. The wavelength where Retinal absorbs light is different for each. The Blue Rhodopsin absorbs at 420 nanometers; Green Rhodopsin absorbs at 530 nanometers and Red Rhodopsin absorbs at 570 nanometers.
There may be only one chromophore (Retinal), but because the proteins are a little different, the way the Retinal is attached to a protein has a great deal of affect on its wavelength of absorption.
"Color blindness results when the absorption wavelengths of Red Rhodopsin and Green Rhodopsin are closer together. Red and Green Rhodopsin are 96 percent sequence identical," said Zand. All the rhodopsins have about 348 amino acids and two oligosaccharide (sugar) chains. Human red and green cone proteins differ by only 15 amino acids.
Hence, the color blindness arises from either mutations or lack of one or more of the opsin proteins. Although color vision deficiencies are termed color blindness, in reality it is simply a reduced ability to distinguish between colors. Most people who are color blind cannot distinguish between red and green because mutations in Red Rhodopsin can lead to shift in absorbance from 570 nm to 530 nm and vice versa. These individuals can, however, distinguish between blue and red or blue and green because blue absorbs at a much lower wavelength. However, they cannot distinguish between red and green.
"There are people who lack cone pigments or within whim the Opsins within the cone cells are non-functional, and they can only see shades of black, white and gray," Zand said, "but it's very rare."
Scientists have proposed many different theories to explain why this one molecule absorbs light at three different wavelengths and allows us to see different colors.
One theory is related to how the protein arranges itself around the Retinal and the bend inthe carbon structure of Retinal. When the structure is bent, the carbons are not on the same plane. The more bent the structure the more blue-shifted the light absorption is going to be.
Another hypothesis relates to the point at which Retinal attaches to Opsin. At the attachment point there is a nitrogen molecule with a positive charge. Because the positive charge requires a counter ion- a negative charge, the location of that negative charge is very crucial. The Retinal becomes more planar in the active site as the distance between the positive and negative charges increases. Therefore, it is hypothesized that the negative charge and positive charge are very close in Blue Rhodopsin whereas they are far apart in the Red Rhodopsin.
Zand and the researchers at Michigan State University have engineered the CRAP II protein to bind to Retinal instead of Retinoic Acid, and they are currently looking at whether they can prove these hypotheses by measuring how twisted the molecule is within the active site as well as trying to mimic the Rhodopsin in changing the positions of the negative charges. Their tests will attempt to show if a shift in the absorption from red to green to blue can be accomplished by merely changing the position of the counter-ion (negative charge) within the protein active-site, or by causing different twists in the Retinal.
Could unraveling the chemistry underlying our color vision enable science to chemically alter someone's eye so they could see color if they were colorblind? "The only way that could be done is through stem cell research or gene therapy, by inoculating the eye with stem cells that would eventually differentiate into rod and cone cells possessing healthy proteins enabling the person to see color," said Zand.
For all those artists who still insist the primary colors are red, blue and YELLOW, Zand sympathizes, but said yellow is merely "green" with envy.
Written by Dawn Hibbard | <urn:uuid:1749083a-ee86-46b5-88d3-d8b21f899fc9> | 3.828125 | 1,642 | News Article | Science & Tech. | 40.125325 | 1,464 |
10 Truly Eccentric Organisms
Most species of organisms are unrecognized for their unique, versatile abilities, appearance and existence. When we think of organisms we often think of the typical dog, cat and other household pets. When we think of wild organisms, we think of zebras, lions, monkeys and other animals seen in habitats that are known for housing wild animals, like zoos. When thinking of aquatic organisms we think of jellyfish, goldfish, and sharks. We have become accustomed to a stereotype of the organisms that form each label of organisms; people are becoming ignorant to the variety and eccentricity of organisms we have among us today. This following list is of 10 eccentric organisms, showing the plethora of organisms within each species.
Sea pigs are sea creatures closely related to sea cucumbers, belonging to the kingdom Animalia. These aquatic organisms are about four inches in length, and have 10 tentacles with tub like feet that are used not for swimming, but for marching along the ocean floor. Sea pigs are bottom feeders and detect food by scent; they remove organic particles from the mud with their deflating and inflating tentacles and eat the particles trapped in their tentacles. These sea creatures obtained their name from their pink-tint and chubby body, do not fall short of idiosyncrasy.
The yeti crab was only discovered in 2005 by Marine biologists in the Pacific Ocean. This organism resembles the mythical organism yeti with its hair-like sinuous, and resides in the hydrothermal vents of the pacific ocean that protrude a toxic liquid that to the average organism would be deadly. This organism is not well-researched yet, but its albino-like eyes suggest that the organism may be blind and it is suspected to feed off of the toxic minerals from the hydrothermal vents.
The viperfish, which can easily be recognized from the Disney film “Finding Nemo” with its hinged lower-jaw containing long stringy, pointy teeth, is a deepwater fish that lives in tropical and temperate waters. The viperfish varies from 12 to 24 inches in length and swims in depths from 250 to 5,000 feet. Although the viperfish is frightening in appearance, it is preyed on by sharks and even dolphins! This fish can live up to 40 years and holds the Guinness world record for largest teeth in comparison to head size in a fish.
This marine crab is the biggest arthropod in the word in overall size containing eight legs. The Japanese spider crab is found about 150-800 meters deep off the southern coasts of the Japanese island Honshu and has a leg span of 3.6 meters. This crab can live up to a miraculous 100 years and are known to be calm animals. The Japanese spider crab feeds off animal carcasses, plants and shellfish; they could be considered the vulture of the sea.
The giant isopod, a crustacean living in the Atlantic Ocean, is an alien-looking sea creature that exists in the pitch darkness of the bathypelagic zone in depths as deep as 7,020 feet. This interesting organism has stayed relatively the same for the past 130 million years! The giant isopod can be up to 14 inches in length and up to 30 inches in height and have four sets of jaws. The giant isopod is a scavenger that feeds with its four sets of jaws, on dead whales, fish, and squid. This organism has the ability to survive without food for more than eight weeks!
The Chinese giant salamander has stayed almost exactly the same in resemblance as its ancestors of 30 million years. This organism is the largest known salamander in existence and its habitats include mountain streams and lakes of China. This salamander can grow up to 73 inches in length and have lived up to 80 years at a time. The Giant Salamander does not have eyelids, and therefore has poor vision and rely on sensory nodes to detect possible vibrations made by predators. This amphibian has the ability to breathe through the pores and wrinkles in the skin and is mainly a nocturnal animal that hunts during the night feeding on crabs, crayfish, fish, frogs, insects, shrimps, snails and worms.
This proteus, aquatic, snake-like amphibian is a blind organism that lives in the caves of the subterranean waters. The Olm is about 8-12 inches and the small fragile superior limbs contain three fingers and the inferior limbs containing two. The olm’s skin resembles the color and texture of humans and is sometimes recognized as the “human fish” for its skin. The olm has not only external gills but also has lungs that are rarely actually used during the respiratory process. Due to their eyes that lay deep beneath the dermis and only somewhat detect light, the olm is dependent on their acute sense of smell and hearing abilities to survive.
The giant grenadier is the only member of the Albatrossia genus that is found along the north Pacific part of Japan to the Okhotsk and Bering seas. This fish can reach up to seven feet in length and has been proven to live up to at least 56 years-old. The Giant Grenadier feeds mainly on different species of squids, crabs, worms, shrimps and echinoderms and is well known for its frightening resemblance to snakes with its long pointy tail and its large eyes.
This fish, also known as the oarfish is the longest bony fish in existence. It is found in depths from 300-1000 meters in any of the world’s oceans. The king of Herrings remains in deep waters and does not surface often; typically if it surfaces, it dies when doing so. This 16 foot long fish was first discovered washed up dead on the shore in Bermuda in 1860. It is believed by scientists that the great myth of a sea serpent could have branched off of a king of herrings sighting. Despite that, this organism is a fish, it does not have any scales and although frightening in size, is not a threat to the human race based on its small teeth and one dorsal fin.
This cuddly creature is a domestic rabbit bred for long and soft wool. They originate from Angora, Turkey, are affable companions for those looking for a pet, and live up to seven years when well taken care of. There are five types of Angora rabbit breeds including English, German, Giant, French and Satin. The Angora rabbit can be up to 12 pounds and in spite of their large fluffy appearance, are very active creatures. | <urn:uuid:cf9df807-7844-4705-8359-13855377443c> | 3.21875 | 1,363 | Listicle | Science & Tech. | 52.344073 | 1,465 |
A thermally insulated piston cylinder device initially contains 0.2m^3, 0.8kg of air at 20 degrees Celcius, and the piston is free to move. Now air at 600kPa and 80 degrees Celcius is slowly applied to the device through a supply line until the volume increases by 50 percent. Using constant specific heats and assume air as ideal gas, determine the entropy generation.
I've been thinking about this for a long time but could not figure it out. I would appreciate any help. | <urn:uuid:ce6c3cc4-87f1-4a5f-8818-9f9cab8a03be> | 2.90625 | 105 | Q&A Forum | Science & Tech. | 62.546395 | 1,466 |
[Numpy-discussion] "Nyquist frequency" in numpy.fft docstring
Sun Jul 11 18:13:44 CDT 2010
Hi! I'm a little confused: in the docstring for numpy.fft we find the
"For an even number of input points, A[n/2] represents both positive and
negative Nyquist frequency..."
but according to http://en.wikipedia.org/wiki/Nyquist_frequency (I know, I
know, I've bad mouthed Wikipedia in the past, but that's in a different
"The *Nyquist frequency*...is half the sampling
signal <http://en.wikipedia.org/wiki/Discrete_signal> processing
system...The Nyquist frequency should not be confused with the
*, which is the lower bound of the sampling frequency that satisfies the
Nyquist sampling criterion for a given signal or family of signals...*Nyquist
rate*, as commonly used with respect to sampling, is a property of a
signal <http://en.wikipedia.org/wiki/Continuous-time_signal>, not of a
system, whereas *Nyquist frequency* is a property of a discrete-time system,
not of a signal."
Yet earlier in numpy.fft's docstring we find:
"...the discretized input to the transform is customarily referred to as a *
Should we be using "Nyquist rate" instead of "Nyquist frequency," and if
not, why not?
-------------- next part --------------
An HTML attachment was scrubbed...
More information about the NumPy-Discussion | <urn:uuid:c6e88302-6165-4cb7-bee3-07f7c0a9cf58> | 2.875 | 356 | Comment Section | Software Dev. | 65.777796 | 1,467 |
Date: 9/2/96 at 22:52:58 From: Mohd Nasir Mahmud Subject: Radians Dear Sir, Could you tell me why pi radians = 180 degrees?
Date: 10/24/96 at 16:5:50 From: Doctor Jaime Subject: Radians This is an immediate consequence of the definition of radians and degrees. In fact 1 radian is defined as the central angle subtended by the arc of length 1 on a unit circle. 1 degree is the central angle subtended by (1/360)th of the circumference of a unit circle. This measure dates back to the Babylonians, who used a base 60 number system. Since the circumference of the unit circle is 2 Pi, it follows that there are 2 Pi radians in 360 degrees, or Pi radians = 180 degrees. -Doctor Jaime, The Math Forum Check out our web site! http://mathforum.org/dr.math/
Search the Dr. Math Library:
Ask Dr. MathTM
© 1994-2013 The Math Forum | <urn:uuid:c7c74107-6861-40a6-9c3c-93a2761bcd3c> | 3.59375 | 218 | Comment Section | Science & Tech. | 82.960921 | 1,468 |
A Villager David has a plot of land of the shape of a quadrilateral. The Village head decided to take over some portion of his plot from one of the corners toconstruct a health center. Suzie Agrees to the above proposal with the condition that he should be given equal amount of land in lieu of his land adjoining his plot so as to form a triangular plot. Explain how this plan can be implemented.
Please solve this | <urn:uuid:11dedc48-f881-412e-8b75-2d141c022009> | 3.296875 | 89 | Q&A Forum | Science & Tech. | 56.888654 | 1,469 |
AB and AC are equal chords of a circle. AM and BN are parallel chords through A and B respectively. Prove that AN is parallel to CM.
I've tried to prove angles NAM and AMC are equal but failed miserably. Could someone please help me with this question? | <urn:uuid:570c0734-cb33-4251-ba58-2b155f18c6cd> | 2.53125 | 59 | Q&A Forum | Science & Tech. | 74.304327 | 1,470 |
Application Configuration Files
Application configuration files contain settings specific to an application. This file contains configuration settings that the common language runtime reads (such as assembly binding policy, remoting objects, and so on), and settings that the application can read.
The name and location of the application configuration file depend on the application's host, which can be one of the following:
The configuration file for an application hosted by the executable host is in the same directory as the application. The name of the configuration file is the name of the application with a .config extension. For example, an application called myApp.exe can be associated with a configuration file called myApp.exe.config.
In Visual Studio projects, place the .config file in the project directory and set its Copy To Output Directory property to Copy always or Copy if newer. Visual Studio automatically copies the file to the directory where it compiles the assembly.
For more information about ASP.NET configuration files, see ASP.NET Configuration Settings
Internet Explorer-hosted application.
If an application hosted in Internet Explorer has a configuration file, the location of this file is specified in a <link> tag with the following syntax:
<link rel="ConfigurationFileName" href="location">
In this tag, location is a URL to the configuration file. This sets the application base. The configuration file must be located on the same Web site as the application. | <urn:uuid:fd003215-f710-461c-8456-94260ff34ad8> | 2.875 | 288 | Documentation | Software Dev. | 25.886154 | 1,471 |
4.2. Protoplanetary disks
Contrary perhaps to the expectation that protoplanetary disks would be deeply embedded within the clouds from which they form, and they would therefore be inaccessible to optical observations, HST revealed many dozens of protoplanetary disks ("proplyds"; e.g., Bally, O'Dell and McCaughrean 2000; O'Dell 2001; O'Dell et al. 1993; O'Dell and Wong 1996, following the initial correct identification by Churchwell et al. 1987 and Meaburn 1988). Many of these disks are seen silhouetted against the background nebular light (when they are shielded from photoionization), with some possessing ionized skins and tails (e.g., Bally, O'Dell and McCaughrean 2000; Henney and O'Dell 1999, Fig. 12).
Figure 12. Protoplanetary disks (Proplyds) in the Orion Nebula (M42), HST/WFPC2. Credit: NASA, C. R. O'Dell (Vanderbilt University), and M. McCaughrean (Max-Planck-Institute for Astronomy). http://hubblesite.org/newscenter/archive/1995/45/
The ubiquity of the protoplanetary dust disks (they are seen in 55%-97% of stars; Hillebrand et al. 1998, Lada et al. 2000) demonstrates that at least the raw materials for planet formation are in place around many young stars. Indeed, in a few cases, like the dust ring and disk in HR 4796A and the nearly edge-on disk surrounding Beta Pictoris, the detailed HST images reveal gaps and warping (respectively) that could represent the effects of orbiting planets (Schneider et al. 1999, Kalas et al. 2000).
Another aspect of the protoplanetary disks that is significant for planet formation is the discovery of evaporating disks in the Orion Nebula. As was noted in Section IIIA, some of the Orion proplyds were shown to be evaporating (due to photo-ablation by UV radiation from young, nearby stars) at rates of ~ 10-7 to 10-6 M yr-1 (e.g., Henney and O'Dell 1999). Given that the masses of these disks are typically of order 10-2 M (if normal interstellar grains are assumed, so that the observed dust emission can be scaled to the total mass), this implies lifetimes for these disks of 105 years or less. There exists, however, some evidence that the grain sizes in Orion's disks may, in fact, be relatively large - perhaps of the order of millimeters (Throop 2000). The latter conclusion is based on the fact that the outer portions of the disks appear to be gray (they do not redden background light), and on the failure to detect the disks at radio wavelengths in spite of the implied large extinction in the infrared (hiding the central star in some cases). The observations are thus consistent with grain sizes in excess of the radio wavelength used, of 1.3 mm. When we think about the potential implications of these two findings (about disk lifetimes and grain sizes), we realize that they may have interesting consequences for the demographics of planets in Orion. The relatively short disk lifetimes but relative large grain sizes may mean that while rocky (terrestrial) planets can form in these strongly irradiated environments, giant planets (that require the accretion of hydrogen and helium from the protoplanetary disk) cannot (unless their formation process is extremely fast; Boss 2000, Mayer et al. 2002). It is nevertheless clear from the many observations of "hot Jupiters" (giant planets with orbital radii 0.05 AU) that less extreme environments do exist, in which giant planets not only form, but also have sufficient time to gravitationally interact with their parent disk and migrate inward, to produce the distribution in orbital separations we observe today (see, for example, Lin, Bodenheimer, and Richardson 1996, Armitage et al. 2002).
While disks around young stars produce jets and form planets, similar structures around old stars help perhaps to shape incredible "sculptures" around dying stars. | <urn:uuid:fb27e39a-1e34-46c6-b680-951f862a36ce> | 3.90625 | 882 | Academic Writing | Science & Tech. | 54.068308 | 1,472 |
From my research in Hungary, one of the key things that has struck me is the way in which multiple representations and images are used to support the development of very abstract and complex mathematical ideas. This month's website will seek to illustrate this and offer some problems and resources that help students to develop notions of abstract mathematical ideas through exposure to a range
of representations of them. This all sounds rather obscure and abstract in itself so let me illustrate this with two examples one from a class of 7 year old students and the other from a class of 15 year olds.
For the younger students the concept under consideration was the idea of the number six. During the course of one lesson focused on this number, the children were offered a range of iconic and symbolic representations of six and asked to identify collections of six objects. This range comprised:
Pattern on a die, finger pattern, collections of objects, collections of actions, the Cuisenaire rods, the 'number picture', (you can find them here doc .pdf ), dominoes with six spots, Roman numerals, the symbol 6, number line, 6
o'clock on an analogue clock face, coins.
I would argue that this rich range of representations of 6 enabled the children to abstract a notion of the 'sixness' of six that transcended the different representations. All the different representations have their value and potential applications: some stress the notion of a number representing a collection and so build from counting such as a collection of objects of actions; others stress
aspects of the structure of six such as the finger pattern which draws attention to six being one more than five; others emphasise the wholeness of six as an entity that supports students away from a counting notion of number such as the symbol 6, the 6 Cuisenaire rod or even potentially the die pattern that can be recognized without counting the spots; some stress the place of six in the
sequence of counting or Natural numbers such as the number line.
For the 15 year olds, the lesson that I observed was focused on the solution of simultaneous equations involving trigonometric functions. In this lesson the students were able to identify solutions to complicated pairs of equations through their knowledge of the meanings of the functions that were being considered. They were able to sketch the relevant graphs of the functions, consider their
ranges and domains and use these ideas to produce solutions or to identify when solutions could not be found.
This seemed to me to be linked to my observations of lessons with the younger children. In both cases the students had access to a range of representations, images and mathematical models and were able to bring appropriate images to the problems with which they were presented.
In some of our problems this month we offer a range of representations of mathematical ideas to work with, in others we offer a problem that lends itself to solution with one representation in mind. In all of the problems we are seeking to explore the power of a variety of representations, images and models with a view to supporting students in enriching their understanding of various abstract
mathematical concepts through exposure to this variety. The aim of the exercise is to deepen students' understanding of the abstract mathematical concepts involved in the process of generalizing from a variety of models and representations.
Matching Numbers is an interactive game in which the task is to match different representations of numbers in pairs. In fact the set of cards has four possible representations of each of the numbers so children can discuss how each of those representations shows the number. When they have played the game themselves, they can make their own sets of
number cards showing different representations of numbers using this blank set.
How do you see it? is an activity with a difference in that as the children work on these they will have the opportunity to exhibit their own individual ways of thinking about simple calculations. The article referred to in the teachers' notes will enable you to explore some of the findings about those calculations that children find more difficult
because of the order of the information.
Let's divide up offers a story scenario in which three different conceptions of division are presented. We hope that children will be able to explore the different conceptions to deepen theri understanding of the mathematical operation of division. They may be encouraged to make up their own stories about that involve division conceived of as sharing,
grouping, successive subtraction or the inverse of multiplication.
Matching Fractions is another interactive game of memory but this time there are four representations of a number of fractions to match in pairs. We have a tendency to use pizzas as our main representation of fractions for young children and this can cause them problems with developing their conceptual understanding of fractions so this is a useful
activity to tackle this as it offers a range of different representations including fractions of quantities bigger than one. Once again children can create their own fraction representations and their own game using the blank card set.
In What Numbers Can We Make?, students are invited to work with numbers chosen from a linear sequence. In order to explain the patterns they find, they need to explore ideas from modular arithmetic, which can be represented geometrically, numerically, and algebraically. The same representations can be adapted in the follow-up problems Take Three From
Five, and What Numbers Can We Make Now?. An understanding of algebraic and graphical representations is also required when playing Diamond Collector.
Factorising with Multilink and Pair Products both make the links between number and algebra through geometric representations. Students are encouraged to generalise by going beyond simple pattern-spotting and engaging instead with the underlying structure.
Polar Bearings brings attention to the relationship between cartesian and polar coordinates. By striving to represent certain curve shapes using different systems, students will realise that choice of coordinates is actually arbitrary and can lead to useful algebraic simplification (or unnecessary complication!). The problem Trig Reps looks at different representations of 'trigonometry' and encourages students to derive many familiar properties of sine and cosine using each representation. Through engagement with this activity students will hopefully gain a deeper understanding of trigonometry and realise that certain representations are more useful for solving certain types of
problem. Both problems reinforce the important mathematical notion that the 'underlying' mathematics can be dressed up in different ways as required. It reminds me of the popular saying that 'There is no such thing as bad weather, only the wrong choice of clothes....' | <urn:uuid:5a006efa-e28c-4029-afb8-4d9be87d37ca> | 3.640625 | 1,294 | Academic Writing | Science & Tech. | 31.801774 | 1,473 |
- Resurvey of a GLORIA Target Region in the Swiss National Park (2011)
- There is no doubt that recent global climate change is in process and affects life on earth. Especially mountain ecosystems are supposed to be highly sensitive to climate change due to the vertical compression of life zones, rough abiotic environment and limiting ecological factors. Therefore, the European Alps is one of the best observed ecosystems where many studies figured out how climate change is affecting biodiversity. Probably the biggest and most well-known project is the GLORIA-Europe initiative established by Prof. Dr. Georg Grabherr from University of Vienna. The aim of this project is to establish a world-wide long-term monitoring network in alpine ecosystems to detect effects of climate change on the vegetation of mountain summits using standardised methods. This study is involved in the GLORIA initiative to resurvey four calcareous and four siliceous summits at Swiss National Park in summer 2009/10. The aim of this study is to answer the questions if there are changes between the first (2002/03) and second survey in plant species number, species frequency and in heterogeneity between plots. Furthermore, is altitude, cardinal direction and bedrock influencing changes or are there species groups reacting different and what are the reasons behind it? In total 226 species were found in 2009 and 2010 with almost 80% more species on the siliceous summits. Species turnover rate between the two surveys is relatively high (15-30%) and also frequency is increasing for several species. But, there are no effects of bedrock or exposition and no differences for species groups. This study shows that fluctuation of species turnover is due to fluctuation of phenological development. Furthermore, differences in plot heterogeneity can be explained by phenological fluctuation. However, there are hints for initiating effects of climate change. The occurrence of L. decidua on three lower summits and the high content of new found species with a lower distribution limit at the montane belt on PMU as well as general increase in plant frequency could be caused by climate change. Hence, these hints of climate change should be focused on in future investigations as long-term effects of climate change are expected. | <urn:uuid:ba3afc57-f7cc-411a-b349-2be760717ef5> | 3.140625 | 447 | Academic Writing | Science & Tech. | 33.37877 | 1,474 |
WASHINGTON (AP) - This probably comes as no surprise: Federal scientists say July was the hottest month ever recorded in the contiguous United States.
The average temperature for the Lower 48 last month was 77.6 degrees. That breaks the old record from July 1936, during the Dust Bowl, by two-tenths of a degree. Records go back to 1895.
Last month also was 3.3 degrees warmer than the 20th century average for July.
The first seven months of 2012 were the warmest on record for the nation. And August 2011 through July this year was the warmest 12-month period on record.
Climate scientist Jake Crouch of the National Climatic Data Center in Asheville, N.C., said the U.S. is getting a double whammy of both localized heat and drought along with effects of global warming. | <urn:uuid:4556227f-044a-4714-8aba-aab2213b977c> | 3.109375 | 173 | News Article | Science & Tech. | 74.221304 | 1,475 |
Schwenzer, S. P.; Abramov, O.; Allan, C. C.; Clifford, S. M.; Cockell, C. S.; Filiberto, J. ; Kring, D. A.; Lasue, J.; McGovern, P. J.; Newsom, H. E.; Treiman, A. H.; Vaniman, D. T. and Wiens, R. C.
Puncturing Mars: How impact craters interact with the Martian cryosphere.
Earth and Planetary Science Letters, 335
(Click here to request a copy from the OU Author.
Geologic evidence suggests that the Martian surface and atmospheric conditions underwent major changes in the late Noachian, with a decline in observable water-related surface features, suggestive of a transition to a dryer and colder climate. Based on that assumption, we have modeled the consequences of impacts into a ~2–6 km-thick cryosphere. We calculate that medium-sized( few 10s of km diameter impact craters can physically and/or thermally penetrate through this cryosphere, creating liquid water through the melting of subsurface ice in an otherwise dry and frozen environment. The interaction of liquid water with the target rock produces alteration phases that thermochemical modeling predicts will include hydrous silicates (e.g.,nontronite, chlorite, serpentine). Thus, even small impact craters are environments that combine liquid water and the presence of alteration minerals that make them potential sites for life to proliferate. Expanding on the well-known effects of large impact craters on target sites, we conclude that craters as small as ~5–20 km (depending on latitude)excavate large volumes of material from the subsurface while delivering sufficient heat to create liquid water(through the melting of ground ice) and drive hydrothermal activity. This connection between the surface and subsurface made by the formation of these small, and thus more frequent, impact craters may also represent the most favorable sites to test the hypothesis of life on early Mars.
|| 2012 Elsevier B.V.
||NASA Mars Fundamental Research Programme, NASA MDAP, NASA PGG, NASA Mars Science Laboratory Project, NASA Astrobiology Institute Director's discretionary fund
||Early Mars; cryosphere; impact crater; impact-generated hydrothermal; astrobiology; search for extraterrestrial life; Mars surface; impact processes
||Science > Physical Sciences
|Interdisciplinary Research Centre:
||Centre for Earth, Planetary, Space and Astronomical Research (CEPSAR)
||18 Jun 2012 14:25
||16 Nov 2012 12:27
Actions (login may be required) | <urn:uuid:e7df3a64-d3e8-48ea-94db-184524d80c0d> | 3.015625 | 553 | Academic Writing | Science & Tech. | 43.336904 | 1,476 |
Formation. Location: The Kimberley, NW Australia. Age: Upper Devonian, Frasnian. 350 million years.
Fig.1. 3D skull of the placoderm Mcnamaraspis
kaprios. Courtesy of Dr.J.Long.
The placoderm fish Mcnamaraspis was approximately
25 cm long and, like other placoderms, had a bony head shield
which was joined to the 'shark'-like body (Fig. 1). Placoderms
were the first jawed fish. The Mcnamaraspis skull exhibits
annular cartilage preserved in the snout, which has never been
observed in other placoderm specimens. This facilitated the entrance
of water over its olfactory organs and hence its sense of smell
was acute. This, together with the sharp teeth, probably made
the fish a highly successful predator (Fig 2).
Fig 2. Reconstruction of the placoderm Mcnamaraspis
kaprios. Courtesy of Dr.J.Long
Fig 3. Fig
Fig 3. Front view of an arthrodire placoderm fish.
Note the hard, bony teeth used for grabbing shrimp-like crustaceans.
Fig 4. Head plates of the long-snouted placoderm Fallocosteus
turnerae. Courtesy of Dr.J.Long.
Fig 5. The skull and lower jaw of a Gogo lungfish,
Griphognathus whitei. Fig 6. The 3D skull morphology of
another Gogo lungfish, Chirodipterus australis.
Fig 7. Reconstruction of the Gogo reef fauna.
Courtesy of Dr.J.Long. | <urn:uuid:52b23c36-4572-4676-8d64-5939c3bfa069> | 3.328125 | 369 | Knowledge Article | Science & Tech. | 64.118406 | 1,477 |
Swiss scientists say Europe's recent rapid temperature increase is likely due to an unexpected greenhouse gas: water vapor.
Researchers at the World Radiation Center in Davos, Switzerland, say elevated surface temperatures caused by other greenhouse gases have enhanced water evaporation and contributed to a cycle that stimulates further surface temperature increases.
The scientists say their findings might help answer a long-debated Earth science question about whether the water cycle could strongly enhance greenhouse warming.
The Swiss researchers examined surface radiation measurements from 1995 to 2002 over the Alps in Central Europe and found strongly increasing total surface absorbed radiation, concurrent with rapidly increasing temperatures.
The authors, led by Rolf Philipona of the World Radiation Center, show experimentally that 70 percent of the rapid temperature increase is very likely caused by water vapor feedback. They indicate the remaining 30 percent is likely due to increasing manmade greenhouse gases.
They suggest their observations indicate Europe is experiencing an increasing greenhouse effect and the dominant part of the rising heat emitted from the Earth's atmosphere (longwave radiation) is due to water vapor increase.
The report appears in the journal Geophysical Research Letters.
Copyright 2005 by United Press International
Explore further: Collisions of coronal mass ejections can be super-elastic | <urn:uuid:e91cbd20-af99-4f5a-977b-543c709f8f58> | 4 | 251 | News Article | Science & Tech. | 8.689417 | 1,478 |
Common Lisp functions are partial; they are not defined for all possible inputs. But ACL2 functions are total. Roughly speaking, the logical function of a given name in ACL2 is a completion of the Common Lisp function of the same name obtained by adding some arbitrary but ``natural'' values on arguments outside the ``intended domain'' of the Common Lisp function.
ACL2 requires that every ACL2 function symbol have a ``guard,''
which may be thought of as a predicate on the formals of the
function describing the intended domain. The guard on the primitive
car , for example, is
(or (consp x) (equal x nil)),
which requires the argument to be either an ordered pair or
nil. We will discuss later how to specify a guard for a
defined function; when one is not specified, the guard is
is just to say all arguments are allowed.
But guards are entirely extra-logical: they are not involved in the axioms defining functions. If you put a guard on a defined function, the defining axiom added to the logic defines the function on all arguments, not just on the guarded domain.
So what is the purpose of guards?
The key to the utility of guards is that we provide a mechanism, called ``guard verification,'' for checking that all the guards in a formula are true. See verify-guards. This mechanism will attempt to prove that all the guards encountered in the evaluation of a guarded function are true every time they are encountered.
For a thorough discussion of guards, see the paper [km97] in the | <urn:uuid:4e783bcb-b0d6-4ed1-a695-9ba57331844b> | 2.90625 | 328 | Documentation | Software Dev. | 48.296681 | 1,479 |
Article: Python Resources
If I might suggest an addition to the list, I found this tutorial a good synthetic reference:
Heres another great source from MIT open course ware website to check out a whole semester intro to programming video lectures with python. you may want to post above
I want to throw in a few more code editors, for people who don't want a full blown IDE. The first two I think are great for beginners, and the third (Vim) is for people who love the modal interface in Rhino and want a scriptable command line interface for text editing. I would NOT recommend Vim to a beginner.
- Notepad++ - A simple code editor for Windows
- E TextEditor - Comparable to TextMate, but for Windows
- Vim - A simple, powerful cross-platform text editor with a steep learning curve. I recommend gVim for windows or MacVim for Mac.
Here are two more similar to Notepad++, I'm sure the list can go on and on but I thought they deserved to be listed.
- PS Pad - Have used on many of my standard installs for years.
- Context - Used this for a while also, but stopped using when I found PS Pad.
I recommend you to add an API to the editor. Or do like processing, thus making any exterior editor workable for python in rhino. | <urn:uuid:0d3996d9-da99-42b3-9d3b-8d6a948f93b2> | 2.515625 | 285 | Comment Section | Software Dev. | 52.719168 | 1,480 |
astroengine writes "A microscopic worm used in experiments on the space station not only seems to enjoy living in a microgravity environment, it also appears to get a lifespan boost. This intriguing discovery was made by University of Nottingham scientists who have flown experiments carrying thousands of tiny Caenorhabditis elegans (C. elegans) to low-Earth orbit over the years. It turns out that this little worm has genes that resemble human genes and of particular interest are the ones that govern muscle aging. Seven C. elegans genes usually associated with muscle aging were suppressed when the worms were exposed to a microgravity environment. Also, it appears spaceflight suppresses the accumulation of toxic proteins that normally gets stored inside aging muscle. Could this have implications for understanding how human physiology adapts to space?" | <urn:uuid:478b13dd-7d3a-406d-aeb3-2d5723db179d> | 3.296875 | 161 | Comment Section | Science & Tech. | 24.092045 | 1,481 |
Researchers have developed a more reliable approach to synthetic biology, the assembly of genetic 'standard parts' to create an organism with desired traits. They've been able to combine a library of parts with computer models that help predict the behavior of those parts when they're combined in a living system. The approach takes some of the trial and error out of the process, moving 'tweaking' of the system earlier in the process.
The team used their improved method to build a genetic timer for brewer's yeast, capable of causing the yeast to clump together within a fermentation vat at a specific time. We'll talk with a member of the team about the research, and what improved synthetic biology might be used for.
Produced by Charles Bergquist, Director and Contributing Producer | <urn:uuid:8df856e8-dda1-417d-9150-71d6625b3871> | 3.25 | 157 | Truncated | Science & Tech. | 38.843448 | 1,482 |
A lot of ‘performance tests’ are posted online lately. Many times these performance tests are implemented and executed in a way that completely ignores the inner workings of the Java VM. In this post you can find some basic knowledge to improve your performance testing. Remember, I am not a professional performance tester, so put your tips in the comments!
For example, some days ago a ‘performance test’ on while loops, iterators and for loops was posted. This test is wrong and inaccurate. I will use this test as an example, but there are many other tests that suffer from the same problems.
So, let’s execute this test for the first time. It tests the relative performance on some loop constructs on the Java VM. The first results:
Iterator – Elapsed time in milliseconds: 78
For – Elapsed time in milliseconds: 28
While – Elapsed time in milliseconds: 30
Allright, looks interesting. Let’s change the test a bit. When I reshuffle the code, putting the Iterator test at the end, I get:
For – Elapsed time in milliseconds: 37
While – Elapsed time in milliseconds: 28
Iterator – Elapsed time in milliseconds: 30
Hey, suddenly the For loop is the slowest! That’s weird!
So, when I run the test again, the results should be the same, right?
For – Elapsed time in milliseconds: 37
While – Elapsed time in milliseconds: 32
Iterator – Elapsed time in milliseconds: 33
And now the While loop is a lot slower! Why is that?
Getting valid test results is not that easy!
The example above shows that obtaining valid test results can be hard. You have to know something about the Java VM to get more accurate numbers, and you have to prepare a good test environment.
Some tips and tricks
- Quit all other applications. It is a no-brainer, but many people are testing with their systems loaded with music players, RSS-feed readers and word processors still active. Background processes can reduce the amount of resources available to your program in an unpredictable way. For example, when you have a limited amount of memory available, your system may start swapping memory content to disk. This will have not only a negative effect on your test results, it also makes these results non-reproducible.
- Use a dedicated system. Even better than testing on your developer system is to use a dedicated testing system. Do a clean install of the operating system and the minimum amount of tools needed. Make sure the system stays as clean as possible. If you make an image of the system you can restore it in a previous known state.
- Repeat your tests. A single test result is worthless without knowing if it is accurate (as you have seen in the example above). Therefore, to draw any conclusions from a test, repeat it and use the average result. When the numbers of the test vary too much from run to run, your test is wrong. Something in your test is not predictable or consistent. Try to fix your test first.
- Investigate memory usage. If your code under test is memory intensive, the amount of available memory will have a large impact on your test results. Increase the amount of memory available. Buy new memory, fix your program under test.
- Investigate CPU usage. If your code under test is CPU intensive, try to determine which part of your test uses the most CPU time. If the CPU graphs are fluctuating much, try to determine the root cause. For example Garbage Collection, thread-locking or dependencies on external systems can have a big impact.
- Investigate dependencies on external systems. If your application does not seem to be CPU-bound or memory intensive, try looking into thread-locking or dependencies on external systems (network connections, database servers, etcetera)
- Thread-locking can have a big impact, to the extent that running your test on multiple cores will decrease performance. Threads that are waiting on each other are really bad for performance.
The Java HotSpot compiler
The Java HotSpot compiler kicks in when it sees a ‘hot spot’ in your code. It is therefore quite common that your code will run faster over time! So, you should adapt your testing methods.
The HotSpot compiler compiles in the background, eating away CPU cycles. So when the compiler is busy, your program is temporarily slower. But after compiling some hot spots, your program will suddenly run faster!
When you make a graph of the througput of your application over time, you can see when the HotSpot compiler is active:
Througput of a running application over time
The warm up period shows the time the HotSpot compiler needs to get your application up to speed.
Do not draw conclusions from the performance statistics during the warm up time!
- Execute your test, measure the throughput until it stabilizes. The statistics you get during the warm up time should be discarded.
- Make sure you know how long the warm up time is for your test scenario. We use a warm up time of 10-15 minutes, which is enough for our needs. But test this yourself! It takes time for the JVM to detect the hot spots and compile the running code.
From Dries Buytaert I received a link to a paper called Statistically rigorous Java performance evaluation. I highly recommend reading it when you want to know more about measuring Java performance.
Remember, I am not a professional performance tester, so put your tips in the comments! | <urn:uuid:2e405b66-8473-4b72-8190-81c6e599d87f> | 2.515625 | 1,157 | Personal Blog | Software Dev. | 50.538699 | 1,483 |
Moore’s Law has been around for 46 years. It’s a descriptor for the trend we’ve seen in the development of computer hardware for decades, with no sign of slowing down, where the number of transistors that can be placed on an integrated circuit doubles every two years.
The law is named after Gordon Moore, who described this pattern in 1965. He would know a thing or two about integrated circuits. He co-founded Intel in 1968.
Moore has said in recent years that there’s about 10 or 20 years left in this trend, because “we’re approaching the size of atoms which is a fundamental barrier.” But then, he said, we’ll just make bigger chips.
Ray Kurzweil, who we mentioned in last weekend’s piece on transhumanism, is known for his thoughts on another subject even more than he is known for his thoughts on transhumanism. That subject is the technological Singularity.
The singularity comes after the time when our technological creations exceed the computing power of human brains, and Kurzweil predicts that based on Moore’s Law and the general trend of exponential growth in technology, that time will come before the mid-21st century.
We’ll see artificial intelligence that exceeds human intelligence around the same time, he says. But there’s more to it than just having created smarter intelligences. There are profound ramifications, but we’ll get to those soon.
Technological singularity was a term coined by Vernor Vinge, the science fiction author, in 1983. “We will soon create intelligences greater than our own,” he wrote. “When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding.”
He was unifying the thoughts of many of his predecessors, Alan Turing and I. J. Good among them.
The idea is that when we become capable of creating beings more intelligent than us, it stands to reason that they — or their near-descendants — will be able to create intelligences more intelligent than themselves. This exponential growth of intelligences would work much like Moore’s Law — perhaps we can call it Kurzweil’s Law — but have more profound significance. When there are intelligences capable of creating more intelligent beings in rapid succession, we enter an age where technological advances move at a rate we can’t even dream of right now.
And that’s saying something: thanks to the nature of exponential growth, technological advance is already making headway at the fastest pace we’ve ever seen.
The singularity doesn’t refer so much to the development of superhuman artificial intelligence — although that is foundational to the concept — as it does to the point when our ability to predict what happens next in technological advance breaks down.
What Will the Singularity Look Like?
Singularitarians say that we simply can’t imagine what such a future would be like. It’s hard to flaw that logic. Imagine, in a world where human intelligence is near the bottom of the ladder, what the world would look like even a short decade later. The short answer is: you can’t! The point is that as more intelligent beings they’ll be capable of not just imagining, but creating things we can’t even dream about.
We can speculate as to the changes the Singularity would bring that would enable that exponential growth to continue. Once we build computers with processing power greater than the human brain and with self-aware software that is more intelligent than a human, we will see improvements to the speed with which these artificial minds can be run. Consider that with faster processing speeds, these AIs could do the thinking of a human in shorter amounts of time: a year’s worth of human processing would become eight months, then eventually weeks, days, minutes and at the far end of the spectrum, even seconds.
There is some debate about whether there’s a ceiling to the processing speed of intelligence, though scientists agree that there is certainly room for improvement before hitting that limit. As with speculation in general, nobody can really speculate as to where that limit may sit, but it’s still fascinating to imagine an intelligence doing the thinking that a human does in one year in one minute.
With that superhuman intelligence and incredibly fast, powerful processing power, it’s not a stretch to imagine that software re-writing its own source code as it arrives at new conclusions and attempts to progressively improve itself.
The Age of the Posthuman
What’s interesting is that there is potential for such post-Singularity improvements to machine speed and intelligence to crossover to human minds. Futurists speculate that such advanced technology would enable us to improve the processing power, intelligence and accessible memory limits of our own minds through changing the structure of the brain, or ‘porting’ our minds on to the same hardware that these intelligences will run on.
In last week’s piece I asked whether we’d be able to tell when we crossed the line from transhuman to posthuman, or whether that line would be ever-moving as we found new ways to augment ourselves.
But here’s another, contrary question: could the Singularity, should it arrive, bring the age of the posthuman? If we are able to create superhuman intelligence and then upgrade our own intelligence by changing the fundamental structure of our minds, is that posthuman enough?
Augmentation is one thing, and upgrading human blood to vasculoid and allowing us to switch off emotions when we need to avoid an impulse purchase are merely augmentations. Increasing our baseline intelligence and processing speed seems to me to be much more significant: an upgrade over an augment.
There is, of course, no reason to think that our creations would have any interest in us or improving the hardware on which we currently run. Many science fiction authors have postulated that superhuman artificial intelligence would in fact want us extinct, given that our species’ behavior doesn’t lend itself to sustainability.
Is the Singularity Near?
The real question, of course, is whether such a technological singularity will ever happen. Just because it has been predicted by some doesn’t mean it will, and there’s plenty of debate on both sides of the argument. Ever the technological optimist, I’m going to avoid the question in this piece — though that’s not to say I don’t think it’s an important one. You can have a look at David Brin’s fantastic article, Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future, for more discussion of that question. I’m fond of this quote from Brin’s piece:
“How can models, created within an earlier, cruder system, properly simulate and predict the behavior of a later and vastly more complex system?”
Of course, if you accept that quote as the basis for any argument, it’s just as hard to map the progress of and towards the singularity as it is to deny that it will happen.
According to Kurzweil’s predictions, we will see computer systems as powerful as the human brain in 2020. We won’t have created artificial intelligence until after 2029, the year in which Kurzweil predicts we will have reverse-engineered the brain. It’s that breakthrough that will allow us to create artificial intelligence, and begin to explore other ideas like that of mind uploading.
Current trends certainly don’t oppose such a timeline, and in 2009, Dr Anthony Berglas wrote in a paper entitled “Artificial Intelligence Will Kill Our Grandchildren” that:
“A computer that was ten thousand times faster than a desktop computer would probably be at least as computationally powerful as the human brain. With specialized hardware it would not be difficult to build such a machine in the very near future.”
Important to consider is that if Kurzweil’s predictions come true, in 2029 when we’ve reverse engineered the brain we would have already had nine years of improvement on those computer systems with brain-like power and capacity. In this timeline, as soon as we create artificial intelligence it will already be able to think faster and with faster access to more varied input than humans thanks to the hardware it runs on.
By 2045, Kurzweil says, we will have expanded the capacity for intelligence of our civilization — comprised by that stage of both software and people — one billion fold.
One only needs to look at history to see our capacity for rapid improvement in retrospect. One of my favorite metrics is life expectancy. In 1800, the average life expectancy was 30, mostly due to high infant mortality rates — though the kind of old age we see as common today was a rare event then. In 2000, the life expectancy of developed countries was 75. If we can more than double the average life expectancy in our society in the space of a historical blip, there’s much more to be excited about ahead. | <urn:uuid:734812bf-9f5f-499a-bdf1-37151f9fa6d0> | 3.171875 | 1,939 | Personal Blog | Science & Tech. | 41.568671 | 1,484 |
- Author: Katherine E. Kerlin
How wood is used after it is cleared from a forest and where that forest is located largely affects the amount of greenhouse gas emissions released into the atmosphere, according to a new study by UC Davis.
The study, published this week in the advance online edition of the journal Nature Climate Change, provides a deeper understanding of the complex global impacts of deforestation on carbon storage and greenhouse gas emissions.
When trees are felled to create solid wood products, such as lumber for housing, that wood retains much of its carbon for decades, the researchers found. In contrast, when wood is used for bioenergy or turned into pulp for paper, nearly all of its carbon is released into the atmosphere. Carbon is a major contributor to greenhouse gases.
“We found that 30 years after a forest clearing, between 0 percent and 62 percent of carbon from that forest might remain in storage,” said lead author J. Mason Earles, a doctoral student with the UC Davis Institute of Transportation Studies. “Previous models generally assumed that it was all released immediately.”
The researchers analyzed how 169 countries use harvested forests. They learned that the temperate forests found in the United States, Canada and parts of Europe are cleared primarily for use in solid wood products, while the tropical forests of the Southern Hemisphere are more often cleared for use in energy and paper production.
“Carbon stored in forests outside Europe, the USA and Canada, for example, in tropical climates such as Brazil and Indonesia, will be almost entirely lost shortly after clearance,” the study states.
The study’s findings have potential implications for biofuel incentives based on greenhouse gas emissions. For instance, if the United States decides to incentivize corn-based ethanol, less profitable crops, such as soybeans, may shift to other countries. And those countries might clear more forests to make way for the new crops. Where those countries are located and how the wood from those forests is used would affect how much carbon would be released into the atmosphere.
Earles said the study provides new information that could help inform climate models of the Intergovernmental Panel on Climate Change, the leading international body for the assessment of climate change.
“This is just one of the pieces that fit into this land-use issue,” said Earles. Land use is a driving factor of climate change. “We hope it will give climate models some concrete data on emissions factors they can use.”
In addition to Earles, the study, “Timing of carbon emissions from global forest clearance,” was co-authored by Sonia Yeh, a research scientist with the UC Davis Institute of Transportation Studies, and Kenneth E. Skog of the U.S. Department of Agriculture Forest Service.
The study was funded by the California Air Resources Board and the David and Lucile Packard Foundation. | <urn:uuid:61c5ff5f-ee63-4de7-9168-15540a293bf3> | 3.765625 | 591 | News (Org.) | Science & Tech. | 36.536725 | 1,485 |
Authors: H. Ron Harrison
Galileo studied bodies falling under gravity and Tycho Brahe made extensive astronomical observations which led Kepler to formulate his three famous laws of planetary motion. All these observations were of relative motion. This led Newton to propose his theory of gravity which could just as well have been expressed in a form that does not involve the concept of force. The approach in this paper extends the Newtonian theory and the Special Theory of Relativity by including relative velocity by comparison with electromagnetic effects and also from the form of measured data. This enables the non-Newtonian effect of gravity to be calculated in a simpler manner than by use of the General Theory of Relativity (GR). Application to the precession of the perihelion of Mercury and the gravitational deflection of light gives results which agree with observations and are identical to those of GR. This approach could be used to determine the non-Newtonian variations in the trajectories of satellites.
Comments: 8 Pages. Appendix added
Unique-IP document downloads: 123 times
Add your own feedback and questions here: | <urn:uuid:d820d5af-039b-44b9-a401-27f9ab572390> | 3.484375 | 222 | Knowledge Article | Science & Tech. | 28.091749 | 1,486 |
Science Fair Project Encyclopedia
Frequency response is the measure of any system's response to frequency, but is usually used in connection with electronic amplifiers and similar systems, particularly in relation to audio signals. Because the human ear is generally not sensitive to phase, the frequency response is typically characterized by the magnitude of the system's response, measured in dB, versus frequency. The frequency response of a system is typically measured by applying an impulse to the system and measuring its response (see impulse response), sweeping a pure tone through the bandwidth of interest, or by applying a maximum length sequence.
Once a frequency response has been measured (e.g., as an impulse response), providing the system is linear time invariant, its characteristic can be approximated with arbitrary accuracy by a digital filter. Similarly, if a system is demonstrated to have a poor frequency response, a digital filter can be applied to the signals prior to their reproduction to compensate for the problem.
Frequency responses are often used to indicate the accuracy of amplifiers and speakers for reproducing audio. As an example, a high fidelity amplifier may be said to have a frequency response of 20Hz - 20,000Hz ±1dB, which tells you that the system amplifies equally all frequencies within that range and within the limits quoted. Such a measure does not include any other indicators of quality (e.g., non-linear distortions of the signal, signal-to-noise ratio, etc...). Frequency response therefore does not guarantee a given quality of audio reproduction, but only indicates that a piece of equipment meets the basic requirements needed for it.
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:5be33dd9-8d72-447b-9917-95699895403d> | 4.46875 | 354 | Knowledge Article | Science & Tech. | 30.682504 | 1,487 |
Image: Cunjevoi Illustration
Cunjevoi can form large colonies on rock platforms. They have a hard outer coat that is often covered in green and brown algae. Cunjevoi have a cylinder-shaped body with two openings at the top. They can grow up to 30 cm in height.
- Andrew Howells
- © Australian Museum
- Common name:
- Scientific name:
- Pyura stolonifera
Cunjevoi live on rocky shores in Australia. They are found on reefs and rock platforms and wharf pylons. Cunjevoi are found in waters up to 12 m deep.
Cunjevoi eat plankton. Cunjevoi take sea water in through one of the openings and remove all plankton from it. The plankton is moved to the stomach. Unwanted water is passed out through a second opening.
People fishing use Cunjevoi for bait. Orange Tritons eat Cunjevoi.
Cunjevoi are sometimes called sea squirts because they can spray a jet of seawater out of their body when squeezed. Cunjevoi belong to a group of animals called Tunicates. | <urn:uuid:16ebf0e7-2953-45c8-993d-d870e5043971> | 3.9375 | 257 | Knowledge Article | Science & Tech. | 54.011849 | 1,488 |
By Cameron Chai
Scientists at the Brookhaven National Laboratory have discovered the nanostructure of a new carbon form that could explain its behavior as a high-absorbent sponge when it receives electric charge.
The material recently developed at The University of Texas, Austin, could be integrated into “supercapacitor” devices for high capacity energy storage while maintaining other properties including quick recharge time, rapid release of energy, and long lifetime cycles of 10,000 charge/discharge cycles.
Dong Su and Eric Stach use a powerful electron microscope to analyze samples of activated graphene at Brookhaven’s Center for Functional Nanomaterials. Says Stach: “The CFN provides access to scientists around the world to solve cutting-edge problems in nanoscience and nanotechnology. This work is exactly what this facility was established to do.”
According to Eric Stach, Brookhaven materials scientist, this makes it suited to store electrical energy needing rapid energy release in electric cars or to regularize power derived from alternate sources like wind and solar power. Stach has also co-authored the research paper released recently in Science.
The Supercapacitors resemble batteries due to their capability to store energy. Batteries store a lot of energy through chemical reactions and release it over long periods. But supercapacitors store charge in ion form on the electrodes’ surface like static electricity without depending on chemical reactions. Charging the electrodes make the ions separate or polarize allowing the charge to get stored at the junction of the electrodes and electrolyte. Electrode pores enhance the surface area where the electrolyte can travel and communicate. This enhances the quantity of energy being stored. The limited charge in the capacitors allows them to be applied in mobile electronic systems, which need limited energy and can operate over long periods.
The team used potassium hydroxide to remodel chemically adapted graphene platelets by creating a more porous form of carbon that needed to be characterized at the nanoscale. Investigations revealed that the material's three-dimensional nanostructure comprised a network of curved, nano-thick walls that formed nano pores measuring 1 to 5nm wide.
Stach said that along with computational studies they are trying to comprehend the formation of this three-dimensional network to customize the pore sizes to be most advantageous for particular applications including capacitive storage, catalysis, and fuel cells.
The team carried out its research at the Lab's Center for Functional Nanomaterials, the National Synchrotron Light Source and the National Center for Electron Microscopy at Lawrence Berkeley National Laboratory. The facilities are supported by the DOE Office of Science.
The research program was funded by DOE's Office of Science. The work at UT - Austin was funded by the Office of Science, the National Science Foundation, and the Advanced Technology Institute. | <urn:uuid:1a59f4b0-ce1e-472e-a3b0-74f4b45d6cec> | 3.40625 | 582 | News Article | Science & Tech. | 14.527729 | 1,489 |
Geodetic Reference Systems
Definition of the various Geodetic Reference Systems and their realizations is not only important with regard to scientific work, but will also be of major importance for the practical applications in the fields of geodesy and navigation if seen against the background of an ever-increasing use of space geodetic techniques, as e.g. GPS and the future Galileo System.
With the definition of the reference systems the speed of light, the radius and the flattening of the earth, a Cartesian coordinate system etc. are exemplarily defined.
The individual reference systems are realized at the earth’s surface as reference frames through marked points (survey points) with their coordinates.
Global terrestrial reference systems are Cartesian coordinates having their origin in the earth’s center of mass and their orientation aligned with the rotational axis of the earth. The national coordinate and height systems may constitute local reference systems with their own reference ellipsoids, geoid models and map projections, and also densifications of the global reference frame. | <urn:uuid:572a17b6-f220-4097-8a87-980eb73f29f7> | 3.6875 | 214 | Documentation | Science & Tech. | 23.702001 | 1,490 |
Double the PressureScience brain teasers require understanding of the physical or biological world and the laws that govern it.
In general, if you have a gas in a container and you double the amount of gas, the new pressure will be double the old pressure. I say "in general" because this isn't exactly true, but it is close enough for the purposes of this teaser.
So my question is this: If you have a tire filled with the standard 32 psi and you double the amount of air molecules in the tire, what pressure will your tire gauge now read? Assume that the tire does not expand, and that the first sentence of this teaser is exactly true.
The answer is NOT 64 psi.
HintAtmospheric pressure is about 15 psi. How does this affect the answer?
Why? Because pressure gauges are set to read "0 psi" when the pressure being read is the same as atmospheric pressure. This is very convenient because you can easily tell from the gauge if the container is under pressure or vacuum. However, this also means that "0 psi" does not really mean that there is no pressure in the container - true zero psi occurs at full vacuum.
So to solve this problem, you have to recognize the fact that at 32 psi there are enough air molecules in the tire to increase the pressure from full vacuum (no molecules) to 32 psi. Since atmospheric pressure is about 15 psi, then the real pressure is 32 + 15 = 47 psi. Since this is the real pressure in the tire, you can now double it to get 94 psi. If the real pressure is 94 psi, the gauge will read 94 - 15 = 79 psi.
See another brain teaser just like this one...
Or, just get a random brain teaser
If you become a registered user you can vote on this brain teaser, keep track of
which ones you have seen, and even make your own.
Back to Top | <urn:uuid:b8ee9860-c7ed-43ec-8f6d-79259c847ee2> | 3.53125 | 396 | Personal Blog | Science & Tech. | 66.143831 | 1,491 |
A new report warns that climate change is causing shifts in species composition faster than expected. Co-author and Cary scientist Peter Groffman comments, "cold temperatures are a critical regulator of species outbreaks and also of species distributions".
A new report says the effects of climate change are already being felt in bug-infested forests of the Intermountain West, in reduced flows of the Colorado River basin and in the amount of snow that falls in the Rocky Mountains.
Nearly every day, we read about problems caused by invaders like the emerald ash borer killing trees across New York, West Nile virus killing people across the United State (1,499 so far), zebra mussels clogging water intakes and changing the Great Lakes and Hudson River ecosystems and Burmese pythons eating everything in the Everglades.
Everybody seems to know about food webs these days — how primary producers capture the energy of the sun, and pass it along to consumers and then on to predators — but I’m not sure that most people really understand what food webs are about.
Specific trails and roads on our 2,000 acre research campus have been designated for public access, and our grounds provide visitors with a unique opportunity to connect with nature and view local wildlife. | <urn:uuid:f13243f4-9aae-4b84-aee9-6131e99a276c> | 3.140625 | 255 | News (Org.) | Science & Tech. | 25.715063 | 1,492 |
A QuadTree is a spatial partitioning strategy used to make queries on relationships between 2D spatial data such as coordinates in a Geographic Information System (GIS), or the location of objects in a video game. For instance, you may need to know all of the objects within a region on a map, test whether objects are visible by a camera, or optimize a collision detection algorithm.
The QuadTree is so named because it recursively partitions regions into four parts, with leaf nodes containing references to the spatial objects. Querying the QuadTree is a function of traversing the tree nodes that intersect the query area.
The OctTree is the analogous structure used for 3 dimensional problems.
For a masterful collection of demos and variations on the QuadTree and other spatial indexing methods, see Frantisek Brabec and Hanan Samet's site, or use the references at the end of this article.
There are many spatial partitioning methods, each with the goal of providing an efficient way of determining the position of an item in a spatial domain. For example, a database query can be considered as a graphical problem. Consider a query on a database containing date of birth and income: a query against all people between 35 and 50 years of age and incomes between 30,000 and 60,000 per year is the same as a query for all restaurants in the city of Vancouver: they are 2 dimensional spatial queries.
Several spatial indexing methods are more efficient in time and space, and are easily generalizable to higher dimensions. However, the QuadTree is specialized to the 2D domain, and it is easy to implement.
The general strategy of the QuadTree is to build a tree structure that partitions a region recursively into four parts, or Quads. Each Quad can further partition itself as necessary. A pre-requisite is that you must know the bounds of the area to be encompassed; the basic algorithm does not lend itself to the addition or removal of areas under consideration without rebuilding the index.
When an item is inserted into the tree, it is inserted into a Quad that encompasses the item's position (or spatial index). Each Quad has a maximum capacity. When that capacity is exceeded, the Quad splits into four sub-quads that become child nodes of the parent Quad, and the items are redistributed into the new leaves of the QuadTree. Some variations set the maximum capacity to one, and subdivide until each leaf contains at most a single item (Adaptive QuadTree).
To query a QuadTree for items that are inside a particular rectangle, the tree is traversed. Each Quad is tested for intersection with the query area.
- Quads that do not intersect are not traversed, allowing large regions of the spatial index to be rejected rapidly.
- Quads that are wholly contained by the query region have their sub-trees added to the result set without further spatial tests: this allows large regions to be covered without further expensive operations.
- Quads that intersect are traversed, with each sub-Quad tested for intersection recursively.
- When a Quad is found with no sub-Quads, its contents are individually tested for intersection with the query rectangle.
Other operations on the QuadTree could include:
- Deletion: An object is removed from the QuadTree, empty quads are removed
- Merge: Two quadtrees are merged, indexes are rebuilt
- Nearest Neighbour: Common to more advanced spatial indexes, a Query could ask for the nearest neighbours to a given object. A simple implementation would be to take the object's bounding rect and inflate it by an amount based on the neighbor proximity. Objects in the result set would be sorted by increasing distance.
These operations are not demonstrated in this code.
This implementation of the QuadTree has the following variations:
The QuadTree has been changed to index items with rectangular bounds rather than points. This allows it to be used with lines, and polygons.
- On insertion, new quads are created until there are no Quads able to contain an item's rectangle. IE: the item is inserted into the smallest quad that will contain it.
- There is no maximum number of items in a Quad, there is a minimum Quad size (necessary to avoid massive tree growth if an item happens to have a very small area).
- Because the Quad an item is stored in is related to the size of the item, both leaf nodes and parent nodes store items.
- The QuadTree's performance will be severely impacted if there are many large items.
- The Quadtree's performance will be best when the size of most items are close to the minimum quad size.
After writing this code, I find that this particular variation bears a striking resemblance to the "MX-CIF QuadTree".
Note: There are other operations on QuadTrees such as deleting a node, or find the nearest neighbour. These are not supported in this implementation.
The following two diagrams show the spatial relationship of the QuadTree with the tree structure. The coloured regions represent objects in the spatial domain. Those that are entirely within a quad are shown in the tree structure in their smallest enclosing quad. You can see that the green shape, since it intersects two of the highest level Quads and does not fit into either is placed in the root quad. The red and purple shapes are placed in child nodes at level one since they are the largest enclosing Quads. The blue shape is at level three along with the orange shape. The Yellow shape is at level four. This tree is adaptive in that it does not create quads until insertion is requested.
Using the Code
QuadTree class is a generic class. The generic parameter has a restriction that it must inherit from the
IHasRect interface which defines a property
Rectangle. Creating a
QuadTree requires an area, the demo application uses the main form's
QuadTree<Item> m_quadTree = new QuadTree<Item>(this.ClientRectangle);
Inserting items into the
QuadTree is done on a left mouse click, querying items in a
QuadTree is done with a right mouse drag:
private void MainForm_MouseUp(object sender, MouseEventArgs e)
if (m_dragging && e.Button== MouseButtons.Right)
m_selectedItems = m_quadTree.Query(m_selectionRect);
m_dragging = false;
Random rand = new Random(DateTime.Now.Millisecond);
m_quadTree.Add(new Item(e.Location, rand.Next(25) + 4)); } Invalidate();
Run the demo application, and left click anywhere in the client rectangle: an object is inserted at the click point with a random size. Right-click and drag: a selection rectangle is created. Release the mouse button: the
QuadTree is queried with the selection rectangle. The
QuadTree renderer draws the
QuadTree nodes and the objects in the
QuadTree in random colours. It also draws the selection region and highlights the selected nodes.
There are two components of
QuadTree performance: insertion and query. Insertion can be very expensive because it involves several intersection tests per item to be inserted. The number of tests depends on the size of the region (the root of the
QuadTree) and on the minimum Quad size configured. These two numbers have to be tuned per application. Loading many items into the
QuadTree (bulk load, or indexing) tends to be very CPU intensive. This overhead may not be acceptable; consider storing the
QuadTree structure on disk (not covered in this article).
QuadTree is designed to be faster at querying the spatial domain than iteration, but the performance of the index depends on the distribution of objects in the domain. If items are clustered together, the tree tends to have many items in one branch which defeats the strategy of being able to cull large regions, and reduce the number of intersection tests. The worst case performance happens when all objects are in one small cluster that is the same size as the smallest Quad; in this case the performance of the
QuadTree will be slightly worse than just iterating through all objects.
If items are uniformly distributed across the spatial domain, performance is approximately O(n*log n).
Points of Interest
- Generic implementation; allows you to use it with any class that implements
- Colour used to draw the node is stored in an hashtable; allows the colour of the Quad on screen to be constant over the life of the
- In the
QuadTreeRenderer class, note the anonymous delegate used to draw the
QuadTreeNodes; allows the
QuadTree to be tested, and visualized, without adding specific code to the class to do so.
- H. Samet, The Design and Analysis of Spatial Data Structures, Addison-Wesley, Reading, MA, 1990. ISBN 0-201-50255-0
- H. Samet, Applications of Spatial Data Structures: Computer Graphics, Image Processing, and GIS, Addison-Wesley, Reading, MA, 1990. ISBN 0-201-50300-0.
- Mark de Berg, Marc van Kreveld, Mark Overmars, Otfried Schwarzkopf, Computational Geometry: Algorithms and Applications, 2nd Edition, Springer-Verlag 2000 ISBN: 3-540-65620-0
- Initial version with regions and simple Insert and Query operations, demo application
I've been lead architect in several software companies. I've worked in the Justice and Public Safety area for the last 7 years, writing facial recognition, arrest and booking software and emergency management/GIS software. Prior to that I worked in the games industry with 3D animation.
Currently I'm working on some GIS/mapping software for outdoor enthusiasts. I intend to spin off portions of this into the open source community as time permits. | <urn:uuid:92423e76-6503-4fb6-84f8-b7edcde431ee> | 3.65625 | 2,088 | Documentation | Software Dev. | 43.348559 | 1,493 |
Hi, Im new to coding im after learnig so that i can build websites for myself and my friends ...Im after some help in coding IE: php,html etc to build basic websites that involve images and text. Thanks
I remember when I first started building websites I had trouble understanding what languages were used for what and how to use them. So I'll try to clarify that up for you:
HTML - the base language of the web. No matter what language you use on the internet, HTML will be a part of your website. HTML is what puts all the different languages together into one page.
CSS - The design language. CSS will allow you to position elements, change element colors, font styles and many other things can be done with CSS. I highly suggest mastering CSS as you can do MANY MANY things with it. Even Animation now with CSS3.
PHP - The server-side scripting language. PHP is personally my favorite language just because you can do SO MUCH with it. PHP is what will handle anything like contact forms, form submissions, logins, registrations, database management and everything like that. This is the language you will use if you need to store info in a database.
Mysql is also very easy to learn, and really, there's not a lot that you actually have to learn to get it to do what you want. Experience mostly just helps in getting the info faster and more efficiently.
Of the two, I'd begin with PHP. It's a little easier to understand. However, it's a server-side language, so you'll require a host to run the programs. You can set up a local-host on your computer, but it's not super simple. (If you have a little cash, just buy a cheap shared-hosting site and work on there. That's what I did).
All that said, my recommendation is to learn Python. From all the research I did it's by far the easiest to learn and use. You can even use it for server-side programming instead of PHP. (Although I didn't try it, it looks far more complicated to get it running than PHP. -Python 3 that is-)
Even if you don't end up using Python, the lessons you learn from it will allow you to quickly pick up any programming language you choose to learn. Programming is a major challenge, Python makes it as easy as possible.
I'd recommend going right to the syntax of the language you want to learn, and studying that until you understand it. I remember struggling through tutorials, feeling like I wasn't really getting it because I wouldn't understand all the examples or terminology. After that I have always got my head around the concept of the language first, be it a simple markup language or programming, after which I can expand my knowledge of that language very quickly with relative ease.
Oh, and did I mention everything you ever need to know about web development is a quick Google search away? I've heard that up to 9 out of 10 web developers are self-taught, all you need is the motivation and time. | <urn:uuid:1deb286a-abc5-4cf5-9bd4-9574fb3a5455> | 2.75 | 637 | Comment Section | Software Dev. | 67.537724 | 1,494 |
Did NASA have a dirty little secret about the Apollo 12 mission? A team of researchers have located and reviewed NASA's archived Apollo-era 16 millimeter film -- and have come up with a definitive answer to the persistent claim in both the press and on the Web that a microbe survived 2.5 years on the moon.
Apollo 12 was launched at 11:22:00 a.m. EST on November 14, 1969. The mission plan called for a landing in the Oceanus Procellarum-Ocean of Storms-area. This site was near the Surveyor III and other earlier unmanned missions to the moon. It landed there almost five days after their launch. They collected rock samples, mostly basalt and igneous rocks.
The Surveyor III camera-team thought they had detected a microbe that had lived on the moon for all those years, "but they only detected their own contamination," Rummel added.
Rummel, along with colleagues Judith Allton of NASA’s Johnson Space Center and Don Morrison, a former space agency lunar receiving laboratory scientist, recently presented their co-authored paper: "A Microbe on the Moon? Surveyor III and Lessons Learned for Future Sample Return Missions."
Elsewhere, while the Apollo moon "microbes" were being debunked, research by a team of scientists at the University of London reinforced a theory that evidence of life on the early Earth might be found in rocks on the moon that were ejected during the Late Heavy Bombard period -- about four billion years ago when the Earth was subjected to a rain of asteroids and comets.
Given that material from early Mars has been found in meteorites on Earth, it certainly seems reasonable that tens of thousands of tons of terrestrial meteorites may have arrived there during the Late Heavy Bombardment.
Research by a team under Ian Crawford and Emily Baldwin of the Birkbeck College School of Earth Sciences at the University of London in 2008 used sophisticated technology to simulate the pressures any such terrestrial meteorites might have experienced during their arrival on the lunar surface. In many cases, the pressures could be low enough to permit the survival of biological markers, making the lunar surface a productive place to look for evidence of early terrestrial life.
Any such markers are unlikely to remain on Earth, where they would have been erased long ago by more than three billion years of volcanic activity, later meteor impacts, or simple erosion by wind and rain.
However, meteorites arriving on Earth are decelerated by passing through our atmosphere. As a result, while the surface of the meteorite may melt, the interior is often preserved intact.
Could a meteorite from Earth survive a high-velocity impact on the lunar surface? Crawford and Baldwin used finite element analysis to simulate the behavior of two different types of meteors impacting the lunar surface.
Crawford and Baldwin's group simulated their meteors as cubes, and calculated pressures at 500 points on the surface of the cube as it impacted the lunar surface at a wide range of impact angles and velocities. In the most extreme case they tested (vertical impact at a speed of some 11,180 mph, or 5 kilometers per second), Crawford reports that "some portions" of the simulated meteorite would have melted, but "the bulk of the projectile, and especially the trailing half, was subjected to much lower pressures."
At impact velocities of 2.5 kilometers per second or less, "no part of the projectile even approached a peak pressure at which melting would be expected."
Crawford concluded that biomarkers ranging from the presence of organic carbon to "actual microfossils" could have survived the relatively low pressures experienced by the trailing edge of a large meteorite impacting the moon.
Crawford suggests that the key to finding terrestrial material is to look for water locked inside, these hydrates, can be detected using infrared (IR) spectroscopy. Many minerals on Earth are formed in processes involving water, volcanic activity, or both. By contrast, the moon lacks both water and volcanoes.
Crawford and his co-authors believe that a high-resolution IR sensor in lunar orbit could be used to detect any large (over one meter) hydrate meteorites on the lunar surface, while a lunar rover with such a sensor "could search for smaller meteorites exposed at the surface."
Crawford suggests that it might be necessary to dig below the surface to find terrestrial meteorites. He adds that collecting samples, observing them on the lunar surface, and picking those that warrant a return to Earth for detailed analysis "would be greatly facilitated by a human presence on the moon."
The last U.S. astronaut to set foot on the moon, Dr. Harrison Schmitt, was a geologist. With NASA's plans for a return to the moon later in this century shelved, it looks like it will be up to China to search for hydrated rocks, and solve the mystery of how life began on the Earth. | <urn:uuid:c97602ef-faa0-4bb6-a20a-e324a6c544e7> | 3.90625 | 1,019 | Nonfiction Writing | Science & Tech. | 39.706186 | 1,495 |
There hasn’t been a ton of news coming our of the Phoenix Mars Mission, which landed in late May and is still struggling with soil delivery to on-board labs. Scientists worked with engineers last weekend, examining how the icy soil on Mars interacts with the scoop on the Lander’s robotic arm. They are experimenting with various techniques to deliver a sample to one of the instruments.
“It has really been a science experiment just learning how to interact with the icy soil on Mars — how it reacts with the scoop, its stickiness, whether it’s better to have it in the shade or the sunlight,” said Phoenix Principal Investigator Peter Smith of the University of Arizona.
A month ago, it was announced that initial chemistry experiments had yielded useful information. “We are awash in chemistry data,” said Michael Hecht of NASA’s Jet Propulsion Laboratory, lead scientist for the Microscopy, Electrochemistry and Conductivity Analyzer, or MECA, instrument on Phoenix. “We’re trying to understand what is the chemistry of wet soil on Mars, what’s dissolved in it, how acidic or alkaline it is.” Three more wet-chemistry cells are still available for use later in the mission.
The Martian soil appears to be an analog to soils found in the upper dry valleys in Antarctica. The soil just below the surface on the landing site is described as very basic, with a pH of between eight and nine. Compounds of salts found there include magnesium, sodium, potassium and chloride.
Another analytical instrument, the Thermal and Evolved-Gas Analyzer (TEGA), has baked its first soil sample to 1,000 degrees Celsius (1,800 degrees Fahrenheit). TEGA scientists have begun analyzing the gases released at a range of temperatures to identify the chemical make-up of soil and ice. Analysis is a weeks-long process.
Would the conditions present support life? Well, nothing has been discovered yet that would rule that out. | <urn:uuid:73b9662d-c439-438d-baa6-8638fa1fb1e5> | 3.46875 | 417 | News Article | Science & Tech. | 44.759028 | 1,496 |
The Coriolis Effect
The Coriolis force comes from the rotation of Earth. Earth spins on its axis at a rate of one rotation per 24 hours. At the equator, this is equivalent to approximately 1,600 km per hour—this is the speed a person standing at the equator experiences. But at the North and South Poles, the speed is zero. This differential in speed causes eddies (swirling patterns) in the atmosphere. These in turn affect weather patterns.
Put a few drops of food coloring on a tennis ball, gently lower it into a tub of water, and give it a spin with your fingers. Note the patterns of motion that the food coloring makes in the water.
Hurricanes spin counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere because of the Coriolis force.
We don't notice the spinning of Earth directly, because we move at constant velocity (speed and direction).
A popular myth holds that the water in toilets and sinks demonstrates the Coriolis effect (the observed effect of the Coriolis force) by draining counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. However, this actually has to do with the design of the toilet or sink rather than where it is located on Earth.
NASA scientists must take Coriolis effects into consideration when they launch rockets. In fact, space launching facilities, including the Johnson Space Center in Houston and Cape Canaveral in Florida, are located in the south to take advantage of the greater speed of Earth's surface at those latitudes (distances north or south of the equator, measured by imaginary lines running east to west parallel to the equator). This activity shows you several ways to demonstrate the Coriolis force and its effects.
- 2-L soda bottle
- food coloring
- metric ruler
- merry-go-round or a swivel chair and weights
- safety goggles
- steel washer, nut, or other small weight
- I-m nylon fishing string or line
These effects were first described by GaspardGustave de Coriolis (1792–1843), a French engineer and mathematician.
- Fill a 2-L soda bottle with water. Turn it upside down and let the bottle begin to pour out. Swirl the bottle clockwise until a miniature cyclone starts. Study the cyclone as the water pours out. Notice that the swirl will remain powered by gravity even if you hold the bottle still. For a more dramatic effect, first release a drop of food coloring from a height of 10 cm and allow it to settle into the water. As an extension, you can vary bottle sizes and mouth openings to find out what conditions work best to support this motion.
- Get on a small merry-go-round and give it a good spin. Move toward the center. Notice what happens to the rate of rotation. You spin faster because of a principle called the conservation of angular momentum.
- Put on your safety goggles, and swing a small weight in a circular orbit at the end of a I-m string. Let the string wind around your finger as shown. The result is always the same—as the length of the string decreases, the speed of the weight increases. The string may be compared to a nearly massless merry-go-round and the weight to a heavy person.
Angular momentum is a quantity that is based upon an object's mass and rate of rotation.
Move back to the edge and the spinning slows down. You can demonstrate the same effect in a swivel chair by holding weights in your arms, spinning, and then moving your arms toward and away from your body, or by observing figure skaters as they change their rate of rotation using their arms.
In physics terms, the weight has a radial velocity (speed of rotation in respect to angle) toward your finger because of the shortening of the string. The radial velocity interacts with the rotational velocity (the speed and direction the weight turns) to produce an acceleration that is tangential (touching but not intersecting) to the path of the weight and acts to speed up the weight.I
As artificial satellites fall toward Earth out of their orbit, the radius of their orbit (the distance to Earth's center) decreases and their speed increases, until friction becomes so great that they burn up in the atmosphere.
Rocket Science: 50 Flying, Floating, Flipping, Spinning Gadgets Kids Create Themselves by Jim Wiese (New York: John Wiley & Sons, 1995).
The Spinning Blackboard and Other Dynamic Experiments on Force and Motion (Exploratorium Science Snackbook series) by Paul Doherty (New York: John Wiley & Sons, 1996).
Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state’s handbook of Science Safety. | <urn:uuid:9c20f42c-5b5f-4e59-bccf-7f2b13d70696> | 4.03125 | 1,053 | Tutorial | Science & Tech. | 49.474717 | 1,497 |
The Speed of a Moth
Which is faster? A small moth or a songbird? The answer does surprise. A study published in March 2011 in Proceedings of the Royal Society B by researchers at Rothamsted Research, and the universities of Lund (Sweden), Greenwich and York, reports the surprising finding that night-flying moths are able to match their songbird counterparts for travel speed and direction during their annual migrations but they use quite different strategies to do so - information that adds to our understanding of the lifestyle of such insects, which are important for maintaining biodiversity and food security. This new international study of moth migration over the UK, and songbird migration over Sweden, funded by the Biotechnology and Biological Sciences Research Council (BBSRC) and the Swedish Research Council, shows that songbirds (mainly Willow Warblers) and moths (Silver Y moths) have very similar migration speeds — between 30 kilometers and 65 kilometers per hour — and both travel approximately northwards in the spring and southwards in the autumn.
A moth is an insect closely related to the butterfly, both being of the order Lepidoptera. Moths form the majority of this order. Most species of moth are nocturnal
A songbird is a bird belonging to the suborder Passeri of the perching birds. This group contains some 4000 species, in which the vocal organ typically is developed in such a way as to produce a diverse and elaborate bird song.
Dr Jason Chapman, Rothamsted Research, one of the lead authors on the paper said "Songbirds such as warblers and thrushes are able to fly unassisted about four times faster than migratory moths, which might appear to be largely at the mercy of the winds. So we had assumed that songbirds would travel much faster over the same distance. It was a great surprise when we found out the degree of overlap between the travel speeds - the mean values are almost identical, which is really remarkable."
The discovery gives fresh insight into exactly how moths are able to travel in their billions from summer breeding grounds in the UK and elsewhere in northern Europe to their winter quarters in the Mediterranean region and sub-Saharan Africa, thousands of miles away. This is important information in the context of declining moth populations and a critical need for pollinating insects to ensure maximum yields of food crops in the face of a potential food crisis — the more we understand about the life cycle and lifestyle of these insects, the better we can understand and mitigate the challenges they face for survival.
The team used specially-designed radars to track the travel speeds and directions of many thousands of individual Silver Y moths and songbirds on their night-time spring and autumn migrations.
The similarity in speed results from differing strategies: moths fly only when tailwinds are favorable, so gaining the maximum degree of wind assistance; whereas birds fly on winds from a variety of directions, and consequently receive less assistance.
Moths are therefore more efficient in their flight as opposed to the birds who fly when they want to.
The findings therefore demonstrate that moths and songbirds have evolved very different behavioral solutions to the challenge of moving great distances in a seasonally-beneficial direction in a short period of time.
Moths, and particularly their caterpillars, are a major agricultural pest in many parts of the world. Examples include corn borers and bollworms. The caterpillar of the gypsy moth causes severe damage to forests in the northeast United States, where it is an invasive species. In temperate climates, the codling moth causes extensive damage, especially to fruit farms.
Some moths are farmed. The most notable of these is the silkworm, the larva of the domesticated moth Bombyx mori. It is farmed for the silk with which it builds its cocoon.
Knowing how the moth moves about will lead to a better understanding of their world wide ecological role.
For further information: http://www.eurekalert.org/pub_releases/2011-03/babs-mma030811.php | <urn:uuid:8bf5c9c4-5b82-4313-8bda-f844f514fc6e> | 3.5625 | 838 | Knowledge Article | Science & Tech. | 37.838373 | 1,498 |
A directory is a kind of file that contains other files entered under various names. Directories are a feature of the file system.
Emacs can list the names of the files in a directory as a Lisp list,
or display the names in a buffer using the
ls shell command. In
the latter case, it can optionally display information about each file,
depending on the options passed to the
This function returns a list of the names of the files in the directory directory. By default, the list is in alphabetical order.
If full-name is non-
nil, the function returns the files' absolute file names. Otherwise, it returns the names relative to the specified directory.
If match-regexp is non-
nil, this function returns only those file names that contain a match for that regular expression—the other file names are excluded from the list. On case-insensitive filesystems, the regular expression matching is case-insensitive.
If nosort is non-
directory-filesdoes not sort the list, so you get the file names in no particular order. Use this if you want the utmost possible speed and don't care what order the files are processed in. If the order of processing is visible to the user, then the user will probably be happier if you do sort the names.(directory-files "~lewis") ⇒ ("#foo#" "#foo.el#" "." ".." "dired-mods.el" "files.texi" "files.texi.~1~")
An error is signaled if directory is not the name of a directory that can be read.
This is similar to
directory-filesin deciding which files to report on and how to report their names. However, instead of returning a list of file names, it returns for each file a list
), where attributes is what
file-attributeswould return for that file. The optional argument id-format has the same meaning as the corresponding argument to
file-attributes(see Definition of file-attributes).
This function expands the wildcard pattern pattern, returning a list of file names that match it.
If pattern is written as an absolute file name, the values are absolute also.
If pattern is written as a relative file name, it is interpreted relative to the current default directory. The file names returned are normally also relative to the current default directory. However, if full is non-
nil, they are absolute.
This function inserts (in the current buffer) a directory listing for directory file, formatted with
lsaccording to switches. It leaves point after the inserted text. switches may be a string of options, or a list of strings representing individual options.
The argument file may be either a directory name or a file specification including wildcard characters. If wildcard is non-
nil, that means treat file as a file specification with wildcards.
If full-directory-p is non-
nil, that means the directory listing is expected to show the full contents of a directory. You should specify
twhen file is a directory and switches do not contain ‘-d’. (The ‘-d’ option to
lssays to describe a directory itself as a file, rather than showing its contents.)
On most systems, this function works by running a directory listing program whose name is in the variable
insert-directory-program. If wildcard is non-
nil, it also runs the shell specified by
shell-file-name, to expand the wildcards.
MS-DOS and MS-Windows systems usually lack the standard Unix program
ls, so this function emulates the standard Unix program
lswith Lisp code.
As a technical detail, when switches contains the long ‘--dired’ option,
insert-directorytreats it specially, for the sake of dired. However, the normally equivalent short ‘-D’ option is just passed on to
insert-directory-program, as any other option. | <urn:uuid:db58271f-d93c-4c47-bbd4-c966b7a25c6d> | 3.15625 | 844 | Documentation | Software Dev. | 46.495689 | 1,499 |