Time and Frequency from A to Z: D to Do
Date
A number or series of numbers used to identify a given day with the least possible ambiguity. The date is usually expressed as the month, day of month, and year. However, integer numbers such as the Julian Date are also used to express the date.
Daylight Saving Time
The part of the year when clocks are advanced by one hour, effectively moving an hour of daylight from the morning to the evening. In 2007, the rules for Daylight Saving Time (DST) changed for the first time since 1986. The new rules were enacted by the Energy Policy Act of 2005, which extended the length of DST by about one month in the interest of reducing energy consumption. DST will now be in effect for 238 days, or about 65% of the year, although Congress retained the right to revert to the prior law should the change prove unpopular or if energy savings are not significant. Under the current rules, DST in the U.S. begins at 2:00 a.m. on the second Sunday of March and ends at 2:00 a.m. on the first Sunday of November.
Daylight Saving Time is not observed in Hawaii, American Samoa, Guam, Puerto Rico, the Virgin Islands, and the state of Arizona (not including the Navajo Indian Reservation, which does observe it).
Dead Time
The time that elapses between the end of one measurement and the start of the next measurement. This time interval is generally called dead time only if information is lost. For example, when making measurements with a time interval counter, the minimum amount of dead time is the elapsed time from when a stop pulse is received to the arrival of the next start pulse. If a counter is fast enough to measure every pulse (if it can sample at a rate of 1 kHz, for instance, and the input signals are at 100 Hz), we can say there is no dead time between measurements.
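Below is a minimal sketch (not part of the original glossary) of the 1 kHz / 100 Hz example above; the function name and the numbers are illustrative assumptions, not a real instrument's API.

```python
def pulses_missed_per_second(signal_hz: float, max_sample_hz: float) -> float:
    """If the counter can re-arm faster than pulses arrive, nothing is lost;
    otherwise it can only log max_sample_hz of the signal_hz pulses."""
    return max(signal_hz - min(signal_hz, max_sample_hz), 0.0)

print(pulses_missed_per_second(100, 1000))   # 0.0    -> every pulse is caught, so no dead time
print(pulses_missed_per_second(5000, 1000))  # 4000.0 -> information is lost between measurements
```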
Disciplined Oscillator (DO)
An oscillator whose output frequency is continuously steered (often through the use of a phase locked loop) to agree with an external reference. For example, a GPS disciplined oscillator (GPSDO) usually consists of a quartz or rubidium oscillator whose output frequency is continuously steered to agree with signals broadcast by the GPS satellites.
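As a hedged illustration of the steering idea, here is a toy proportional-integral loop; it is not any real GPSDO firmware, and the gains and the measure_phase_error() stub are assumptions.

```python
kp, ki = 0.1, 0.01      # loop gains (illustrative values)
integral = 0.0
freq_correction = 0.0   # fractional frequency offset applied to the local oscillator

def measure_phase_error() -> float:
    """Stub: phase difference between the local oscillator and the external
    reference (for example, a GPS-derived pulse-per-second), in seconds."""
    return 0.0  # replace with a real measurement

for _ in range(10):     # one iteration per comparison interval
    err = measure_phase_error()
    integral += err
    # Steer the output frequency so the phase error is pulled toward zero.
    freq_correction = kp * err + ki * integral
```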
Doppler Shift
The apparent change of frequency caused by the motion of the frequency source (transmitter) relative to the destination (receiver). If the distance between the transmitter and receiver is increasing, the frequency apparently decreases. If the distance between the transmitter and receiver is decreasing, the frequency apparently increases. To illustrate this, listen to the sound of a train whistle as a train comes closer to you (the pitch gets higher), or as it moves further away (the pitch gets lower). As you do so, keep in mind that the frequency of the sound produced at the source has not changed.
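For reference, a standard textbook form of the acoustic Doppler shift (not part of the original glossary entry), where $f$ is the emitted frequency, $v$ the speed of sound, and $v_r$, $v_s$ the receiver and source speeds along the line joining them:

$$ f_{\text{observed}} = f \, \frac{v \pm v_r}{v \mp v_s} $$

The upper signs apply when the two are approaching (pitch rises) and the lower signs when they are receding (pitch falls), matching the train-whistle description above.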
NOAA: December Global Ocean Temperature Second Warmest on Record
For the year, 2009 Annual Temperature Tied for Fifth-Warmest
January 21, 2010
The global ocean surface temperature was the second warmest on record for December, according to scientists at NOAA’s National Climatic Data Center in Asheville, N.C. Based on records going back to 1880, the monthly NCDC analysis is part of the suite of climate services NOAA provides. Scientists also reported the combined global land and ocean surface temperature was the eighth warmest on record for December.
For 2009, global temperatures tied with 2006 as the fifth-warmest on record. Also, the earth’s land surface for 2009 was seventh-warmest (tied with 2003) and the ocean surface was fourth-warmest (tied with 2002 and 2004).
Highlights for December 2009
- The global ocean temperature was the second warmest on record, behind 1997. The temperature anomaly was 0.97 degree F above the 20th century average of 60.4 degrees F.
- The combined global land and ocean surface temperature was the eighth warmest on record, at 0.88 degree F above the 20th century average of 54.0 degrees F.
- The global land surface temperature was 0.63 degree F above the 20th century average of 38.7 degrees F - the coolest December anomaly since 2002.
Global Temperature Highlights for 2009
- For the calendar year 2009, the global combined land and ocean surface temperature of 58.0 degrees F tied with 2006 as the fifth-warmest on record. This value is 1.01 degree F above the 20th century average.
- NCDC scientists also noted the average temperature for the decade (2000-09), 57.9 degrees F, was the warmest on record, surpassing the 1990-99 average of 57.7 degrees F.
- Arctic sea ice covered an average of 4.8 million square miles during December. This is 6.6 percent below the 1979-2000 average extent and the fourth lowest December extent since records began in 1979.
- Antarctic sea ice extent in December was 2.1 percent above the 1979-2000 average, resulting in the 14th largest December extent on record. December Arctic sea ice extent has decreased by 3.3 percent per decade since 1979, while December Antarctic sea ice extent has increased by 0.6 percent per decade over the same period.
- Northern Hemisphere snow cover during December 2009 was the second largest extent, behind 1985, on record. North American snow cover for December 2009 was the largest extent since satellite records began in 1967.
NCDC’s preliminary reports, which assess the current state of the climate, are released soon after the end of each month. These analyses are based on preliminary data, which are subject to revision. Additional quality control is applied to the data when late reports are received several weeks after the end of the month and as increased scientific methods improve NCDC’s processing algorithms.
Scientists, researchers and leaders in government and industry use NCDC’s monthly reports to help track trends and other changes in the world’s climate. The data have a wide range of practical uses, from helping farmers know what to plant, to guiding resource managers with critical decisions about water, energy and other vital assets.
NOAA understands and predicts changes in the Earth's environment, from the depths of the ocean to the surface of the sun, and conserves and manages our coastal and marine resources. | <urn:uuid:5d709afc-d370-4072-996f-c8a3234be552> | 2.9375 | 718 | News (Org.) | Science & Tech. | 57.596935 | 301 |
During the next two weeks, you can help build a map of global light pollution, assisting scientists and astronomers as they monitor the loss of virgin night skies. You just have to look at the stars and write down what you see — or, more likely, what you don’t see.
Imagine if every time you needed to officially identify yourself you had to be sedated and knocked out cold. This might sound only slightly less stressful than checking through security at the airport, but for animals being tracked by wildlife authorities and researchers it’s a regularity that is not only stressful, but potentially harmful.
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more.
Evolution in the light of intelligent design - New entries
Appendix (human appendix) - despite its name, no longer considered superfluous or rudimentary (Tyler)
Acritarchs - oldest known protists (Tyler)
The picture emerging of the Late Archaean is one that includes prokaryotes and eukaryotes, photosynthesis, an oxygenated atmosphere and lots of biological activity. This is a big contrast from the picture even 10 years ago. The significance for our thinking about origins is that the eons of time demanded by Darwinian processes are not available.
Archaea - horizontal gene transfer - review of The Archaea's Tale (Tyler)
He presents evidence that Darwinian evolution does not go back to the beginning of life. When we compare genomes of ancient lineages of living creatures, we find evidence of numerous transfers of genetic information from one lineage to another. In early times, horizontal gene transfer, the sharing of genes between unrelated species, was prevalent. It becomes more prevalent the further back you go in time. - Freeman Dyson
Butterfly sex ratios in Samoa - and natural selection (Tyler)
Sex ratios are distorted by the presence of a maternally inherited bacterium which has the effect of selectively killing male embryos. The authors report ratios of >99% female to nearly 1:1. These were different on different islands and at different times. The genetics of this shift of sex ratios is summarised in one paragraph with some supporting online data. There is not enough information here for anyone to either confirm or challenge their conclusions.
Cell - molecular recognition - advantages of cellular key-lock not being an exact fit. (Tyler)
So, something that could have been interpreted as evidence for tinkering evolution is discovered to have advantages after all. Furthermore, it has potential for the design of human systems operating in noisy environments. By invoking "evolutionary selection", the authors suggest an evolutionary context for their work. However, there is no evidence that evolutionary selection was involved, and the link with evolutionary theory is gratuitous.
Central dogma (Tyler)
Casual observers might say they find chaos in the emerging picture of the genome, but systems biology is tracking down extraordinary sophistication at the molecular biology level, indicating that theories (like Darwinism) that are undirected and stochastic have little to offer 21st Century biology.
Exoplanets - atmospheres (Tyler)
Gecko - feet a standard for adhesion (Tyler)
... the gecko does not demonstrate just a single trait with enhanced performance. There are issues of adhesion and delamination, self-cleaning, and achieving a sustained adhesive performance. What we have in the gecko is exquisite design and, for that, biomimetics needs a methodology that can relate well to intelligent engineering design concepts.
Molecular recognition in the cell (Tyler)
Protists - oldest known protists (Tyler)
Sensory perception - advanced perception in Permian amniotes (Tyler)
"The discovery of a highly-evolved auditory apparatus in Middle Permian parareptiles even further emphasizes that the entire groundplan for the impressive evolutionary history of amniotes was already largely in place by the end of the Paleozoic; what followed was in fact only a subsequent tinkering of earlier inventions." Darwinism needs time, but the fossil record no longer provides it.
Stasis - trilobites (Tyler)
Trilobites - variation and stasis as a pattern
The research documented both rapid morphological variation and subsequent stasis. ... One hypothesis is that radiations occur because organisms are designed to vary, but the process results in genetic impoverishment that leads to stasis.
Variation - trilobites (Tyler)
The first thing to note about the psqlODBC driver (or any ODBC driver) is that there must exist a driver manager on the system where the ODBC driver is to be used. There exists a free ODBC driver for Unix called iODBC which can be obtained via http://www.iodbc.org. Instructions for installing iODBC are contained in the iODBC distribution. Having said that, any driver manager that you can find for your platform should support the psqlODBC driver, or any other ODBC driver for that matter.
To install psqlODBC you simply need to supply the --enable-odbc option to the configure script when you are building the entire PostgreSQL distribution. The library and header files will then automatically be built and installed with the rest of the programs. If you forget that option or want to build the ODBC driver later you can change into the directory src/interfaces/odbc and do make and make install there.
The installation-wide configuration file odbcinst.ini will be installed into the directory /usr/local/pgsql/etc/, or equivalent, depending on what --prefix and/or --sysconfdir options you supplied to configure. Since this file can also be shared between different ODBC drivers you can also install it in a shared location. To do that, override the location of this file with the --with-odbcinst option.
Additionally, you should install the ODBC catalog extensions. That will provide a number of functions mandated by the ODBC standard that are not supplied by PostgreSQL by default. The file /usr/local/pgsql/share/odbc.sql (in the default installation layout) contains the appropriate definitions, which you can install as follows:
psql -d template1 -f LOCATION/odbc.sql
where specifying template1 as the target database will ensure that all subsequent new databases will have these same definitions.
psqlODBC has been built and tested on Linux. There have been reports of success with FreeBSD and with Solaris. There are no known restrictions on the basic code for other platforms which already support Postgres.
Pascal is an influential imperative and procedural programming language, intended to encourage good programming practices using so called structured programming and data structuring.
: What is the reason for this problem? If I leave Pascal doing anything
: in a loop, after ~30 secs the loop is running slower than before, but the loop returns to normal speed when I move the mouse or press a...
: Hi There
: I'm using Turbo Pascal for Windows 1.5, and using WinCrt in my program.
: 1. How can I use color text or color background? I try to use
: Textcolor(1); Textbackground(4);, but it...
: when using this code:
: procedure TForm1.ApplicationEvents1Message(var Msg: tagMSG;
: var Handled: Boolean);
: if (Msg.message = wm_KeyUp) or (Msg.message=wm_KeyDown) then
There are a lot of applications where speed is a critical factor -- such as real-time programs. MS Windows and Unix are not real-time operating systems, so speed is not all that...
Is there any way in Pascal (or asm) to shut down the computer (you know, as in Windows - you press shut down and it turns the power off).
I'd be grateful.
Hansen is threatening humanity again with climate dice
Perceptions of Climate Change: The New Climate Dice
We conclude that extreme heat waves, such as that in Texas and Oklahoma in 2011 and Moscow in 2010, were “caused” by global warming
Hansen has been holding the dice since he had a full head of hair, and has been coming up snake-eyes for 25 years.
July, 1936 was the hottest month in US history. Did global warming cause the heat wave?
Antimatter came about as a solution to the fact that the equation describing a free particle in motion (the relativistic relation between energy, momentum and mass) has not only positive energy solutions, but negative ones as well! If this were true, nothing would stop a particle from falling down to infinite negative energy states, emitting an infinite amount of energy in the process--something which does not happen. In 1928, Paul Dirac postulated the existence of positively charged electrons. The result was an equation describing both matter and antimatter in terms of quantum fields. This work was a truly historic triumph, because it was experimentally confirmed and it inaugurated a new way of thinking about particles and fields.
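The "relativistic relation between energy, momentum and mass" referred to here is the standard energy-momentum relation; writing it out makes the sign problem explicit:

$$ E^2 = (pc)^2 + (mc^2)^2 \quad\Longrightarrow\quad E = \pm\sqrt{(pc)^2 + (mc^2)^2} $$

It is the negative branch of the square root that raised the puzzle Dirac resolved.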
In 1932, Carl Anderson discovered the positron while measuring cosmic rays in a Wilson chamber experiment. In 1955 at the Berkeley Bevatron, Emilio Segre, Owen Chamberlain, Clyde Wiegand and Thomas Ypsilantis discovered the antiproton. And in 1995 at CERN, scientists synthesized anti-hydrogen atoms for the first time.
When a particle and its anti-particle collide, they annihilate into energy, which is carried by "force messenger" particles that can subsequently decay into other particles. For example, when a proton and anti-proton annihilate at high energies, a top-anti-top quark pair can be created!
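A small illustrative calculation (not from the original article): the rest-mass energy released when a proton and antiproton annihilate, compared with the mass of a top-antitop pair. The particle masses are standard values; the comparison shows why the article specifies annihilation "at high energies", since the extra energy for heavy products must come from the collision itself.

```python
PROTON_MASS_MEV = 938.272   # proton (and antiproton) rest mass, MeV/c^2
TOP_MASS_GEV = 173.0        # approximate top quark mass, GeV/c^2

# E = 2 m c^2 for the annihilating pair, converted to GeV
rest_energy_gev = 2 * PROTON_MASS_MEV / 1000.0
print(f"proton-antiproton rest energy: {rest_energy_gev:.2f} GeV")           # about 1.88 GeV

# A top-antitop pair needs at least twice the top mass in energy,
# so the difference has to be supplied by the beams' kinetic energy.
print(f"minimum energy for a top-antitop pair: {2 * TOP_MASS_GEV:.0f} GeV")  # about 346 GeV
```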
An intriguing puzzle arises when we consider that the laws of physics treat matter and antimatter almost symmetrically. Why then don't we have encounters with anti-people made of anti-atoms? Why is it that the stars, dust and everything else we observe is made of matter? If the cosmos began with equal amounts of matter and antimatter, where is the antimatter?
Experimentally, the absence of annihilation radiation from the Virgo cluster shows that little antimatter can be found within ~20 Megaparsecs (Mpc), the typical size of galactic clusters. Even so, a rich program of searches for antimatter in cosmic radiation exists. Among others, results from the High-Energy Antimatter Telescope, a balloon cosmic ray experiment, as well as those from 100 hours' worth of data from the Alpha Magnetic Spectrometer aboard NASA's Space Shuttle, support the matter dominance in our Universe. Results from NASA's orbiting Compton Gamma Ray Observatory, however, are uncovering what might be clouds and fountains of antimatter in the Galactic Center.
We stated that there is an approximate symmetry between matter and antimatter. The small asymmetry is thought to be at least partly responsible for the fact that matter outlives antimatter in our universe. Recently both the NA48 experiment at CERN and the KTeV experiment at Fermilab have directly measured this asymmetry with enough precision to establish it. And a number of experiments, including the BaBar experiment at the Stanford Linear Accelerator Center and Belle at KEK in Japan, will confront the same question in different particle systems.
Antimatter at lower energies is used in Positron Emission Tomography (see this PET image of the brain).
But antimatter has captured public interest mainly as fuel for the fictional starship Enterprise on Star Trek.
In fact, NASA is paying attention to antimatter as a possible fuel for interstellar propulsion. At Penn State University, the Antimatter Space Propulsion group is addressing the challenge of using antimatter annihilation as a source of energy for propulsion. See you on Mars?
Answer originally posted October 18, 1999
Exceptional Beginner Chemistry Set
Can you make dazzling colors in flame tests? Create your own mini fire extinguisher? With these hands-on lab sets, you will perform highly rewarding experiments while building a strong foundation in chemistry. The 80-page, full-color experiment manual guides aspiring young chemists through each of the 125 experiments. Kit includes safety glasses, professional-quality equipment and enough chemicals for repeated experiments. Uses a 9-volt battery (not included).
Learn about indicators with litmus solution and write a secret message in invisible ink. Test the inks from your colored markers on the chromatography racetrack to reveal their different color components. Experiment with air pressure, surface tension, and the physical properties of fluids.
Experiment with two well-known metals, iron and copper. Investigate carbon dioxide. Dissolve metals with electrochemical reactions. Explore water and its elements, saturated and unsaturated solutions, and crystals. Split water into hydrogen and oxygen with electrolysis, and form oxygen from hydrogen peroxide.
Experiment with soaps, detergents, and emulsions of water and oil. Investigate chemistry in the kitchen by experimenting with sugar, honey, starch, eggs and proteins, fatty acids, and calcium.
Begin to build a strong foundation in chemistry with exposure to a broad range of chemical phenomena and hands-on laboratory experiences. This kit provides clear instructions for preparing and performing the experiments, offers safety advice, offers explanations for the observed occurrences, and asks and answers questions about the results.
Ages 10 and up.
- Protective goggles
- Two dropper pipettes
- Clip for 9-volt battery
- Safety cap with dropper insert for litmus bottle
- Copper wire
- Two large graduated beakers
- Two lids for large graduated beakers
- Four test tubes
- Test tube brush
- Rubber stopper with hole
- Rubber stopper without hole
- Sodium carbonate
- Potassium hexacyanoferrate(II)
- Calcium hydroxide
- Ammonium iron(III) sulfate
- Copper(II) sulfate
- Citric acid
- Litmus powder
- Small bottle for litmus solution
- Lid opener
- Double-headed measuring spoon
- Angled tube
- Experiment station (part of the polystyrene insert)
WARNING! — This set contains chemicals that may be harmful if misused. Read cautions on individual containers carefully. Not to be used by children except under adult supervision.
Posted Sunday, Feb. 24, 2013, at 8:00 AM
Years ago, in 1999, some odd pictures were returned from The Mars Global Surveyor space probe orbiting the red planet. They showed what looked for all the world(s) like trees, banyan trees, dotting the Martian landscape. They made quite a splash on the internet, and you can see why; here’s a section of one of the pictures:
Image credit: NASA/JPL/Malin Space Science Systems
No fooling, they really do look like trees. The usual pseudoscience website went nuts—well, more nuts—claiming they were life on Mars. More rational heads knew they were formed from some sort of natural non-biological process, but what?
Over time, more and better pictures were taken, and eventually the story became clear. Hints were found when these features were detected at extreme latitudes, and only in the spring. That meant they must be related to the change in seasons, specifically to the weather warming. That, plus some high-resolution images, made it possible to eventually figure out what they are.
Mars has a thin atmosphere that’s mostly carbon dioxide. In the winter at the poles it gets cold enough that this CO2 freezes out, becoming frost or snow on the Martian surface—what we on Earth call dry ice. It gets this name because when you warm it up, it doesn’t melt: It turns directly from a solid into a gas, a process called sublimation.
Image credit: Arizona State University/Ron Miller
In the Martian spring sunlight warms the ground, which warms the layers of dry ice. They sublimate slowly, and—here’s the cool part—from the bottom up. Dry ice is very white and reflective, so sunlight doesn’t warm it efficiently. The ground is darker, and absorbs the solar warmth. This tends to heat the pile of dry ice from the sides and underneath at the edges.
The newly released gaseous carbon dioxide needs somewhere to go. It might just leak away from the side, but some will find its way deeper into the dry ice pack, toward the center. If the gas finds a weak spot in the ice it’ll burst through, creating a hole. Other trickles of CO2 under the ice will flow that way as well, and eventually find that hole. What you get, then, is dry ice on the surface laden with cracks, converging on a single spot where the gas can then leak out into the Martian atmosphere like dry geysers. The plumes of CO2 will carry with them dust from the ground under the dry ice pack, depositing the darker dust on the brighter surface ice, discoloring it.
And when you look at them from above, you see what look like trees! After a while, the carbon dioxide frost sublimates away entirely, and all you’re left with are weird looking spidery channels in the ground, up to a couple of meters deep, created by erosion as the carbon dioxide gas wended its way under the dry ice pack. These are even called araneiform features, meaning spider-like. They also kinda look like the cell bodies of neurons. Unsettling. But probably a better situation than an infestation of giant alien tree spiders.
How cool is that? While reading about this, I found various other features that have a similar origin, created from carbon dioxide gas flow. One aspect really got to me, a simple but terrifically strange observation: In some of these features on Mars, the tracks get wider as they go uphill. That’s the opposite of what you’d expect from the flow of an actual liquid; channels created by, say, water on Earth get wider as they flow downhill. This means whatever formed those channels must be flowing uphill. So the culprit must be gas, not liquid.
That is so flippin’ weird! It’s bizarre enough that a major component of a planet’s air might freeze out at all, but then to have some of it flow uphill in the spring, and also to create those creepy spidery things?
Mars is a damn odd place.
Hyenas cooperate, problem-solve better than primates
(NC&T/DU) Captive pairs of spotted hyenas (Crocuta crocuta) that needed to tug two ropes in unison to earn a food reward cooperated successfully and learned the maneuvers quickly with no training. Experienced hyenas even helped inexperienced partners do the trick.
When confronted with a similar task, chimpanzees and other primates often require extensive training and cooperation between individuals may not be easy, said Christine Drea, an evolutionary anthropologist at Duke University.
Drea's research, published online in the October issue of Animal Behavior, shows that social carnivores like spotted hyenas that hunt in packs may be good models for investigating cooperative problem solving and the evolution of social intelligence. She performed these experiments in the mid-1990s but struggled to find a journal that was interested in non-primate social cognition.
"No one wanted anything but primate cognition studies back then," Drea said. "But what this study shows is that spotted hyenas are more adept at these sorts of cooperation and problem-solving studies in the lab than chimps are. There is a natural parallel of working together for food in the laboratory and group hunting in the wild."
Drea and co-author Allisa N. Carter of the Univ. of California at Berkeley, designed a series of food-reward tasks that modeled group hunting strategies in order to single out the cognitive aspects of cooperative problem solving. They selected spotted hyenas to see whether a species' performance in the tests might be linked to their feeding ecology in the wild.
Image: A pair of captive hyenas cooperatively solving a task to get some food. (Photo: Christine Drea)
The first experiment sought to determine if three pairs of captive hyenas could solve the task without training. "The first pair walked in to the pen and figured it out in less than two minutes," Drea said. "My jaw literally dropped."
Drea and Carter studied the actions of 13 combinations of hyena pairs and found that they synchronized their timing on the ropes, revealing that the animals understood the ropes must be tugged in unison. They also showed that they understood both ropes had to be on the same platform. After an animal was experienced, the number of times it pulled on a rope without its partner present dropped sharply, indicating the animal understood its partner's role.
"One thing that was different about the captive hyena's behavior was that these problems were solved largely in silence," Drea said. Their non-verbal communication included matching gazes and following one another. "In the wild, they use a vocalization called a whoop when they are hunting together."
In the second and third experiments, Drea found that social factors affected the hyenas' performance in both positive and negative ways. When an audience of extra hyenas was present, experienced animals solved the task faster. But when dominant animals were paired, they performed poorly, even if they had been successful in previous trials with a subordinate partner.
"When the dominant females were paired, they didn't play nicely together," Drea said. "Their aggression toward each other led to a failure to cooperate."
When a naïve animal unfamiliar with the feeding platforms was paired with a dominant, experienced animal, the dominant animals switched social roles and submissively followed the lower-ranking, naïve animal. Once the naïve animal became experienced, they switched back.
Both the audience and the role-switching trials revealed that spotted hyenas self-adjust their behavior based upon social context.
It was not a big surprise that the animals were strongly inclined to help each other obtain food, said Kay Holekamp, a professor of zoology at Michigan State University who studies the behavioral ecology of spotted hyenas.
"But I did find it somewhat surprising that the hyenas' performance was socially modulated by both party size and pair membership," Holekamp said. "And I found it particularly intriguing that the animals were sensitive to the na´vetÚ of their potential collaborators."
Researchers have focused on primates for decades with an assumption that higher cognitive functioning in large-brained animals should enable organized teamwork. But Drea's study demonstrates that social carnivores, including dogs, may be very good at cooperative problem solving, even though their brains are comparatively smaller.
"I'm not saying that spotted hyenas are smarter than chimps," Drea said. "I'm saying that these experiments show that they are more hard-wired for social cooperation than chimpanzees."
Researchers with the U.S. Geological Survey and the U.S. Fish and Wildlife Service have used unmanned aircraft in three trials to count the number of sandhill cranes that visit the Monte Vista National Wildlife Refuge and found them to be a safe alternative for both birds and scientists.
“What these systems do is they help to more quickly fly over the cranes,” said Leanne Hanson, a USGS biologist who is overseeing the use of the aircraft. “They don’t flush the birds so there’s no mid-air collision potential.”
More than 20,000 cranes typically make a stopover in the valley from late February to April.
Traditionally, wildlife biologists have used fixed-wing aircraft to count animals but those planes pose a threat to the birds and also use more fuel than their smaller, electrically-charged counterparts. The Raven small unmanned aerial vehicle, as it’s properly called, is 3 feet long and has a 55-inch wingspan. It can fly between 150 feet and 1,000 feet above ground.
The geological survey secured five of the decommissioned craft from the U.S. Army in 2009. While the army maintains ownership, an agreement allows the geological survey to use planes at no cost. Since then, the geological survey had to secure permission from the Federal Aviation Administration to fly over the refuge below 2,000 feet.
They also had to settle on the best time to fly. While the craft can fly with an attached video camera and film during the day, the cranes spend much of those hours dispersed across the valley.
But at night the birds cram into the Monte Vista refuge, which has some of the few wetlands in the valley that aren’t covered with ice in late February and March thanks to the pumping of groundwater.
“It looks like from what we did last night we got fairly good coverage of all the cranes that are on the refuge,” said Jim Dubovsky, a migratory bird specialist with the U.S. Fish and Wildlife Service.
At night a thermal, infrared sensor is attached to the craft and researchers can download the files and count what they see from their own computer screens.
Dubovsky said the cranes have a recognizable heat signature in comparison to the other birds that use the refuge, such as ducks and geese.
Meanwhile the geological survey is developing other uses for the planes.
Hanson said the agency is looking at hot springs within lake and river systems that provide unique habitat, checking historical mating areas for sage grouse and monitoring mountain pine beetle damage in northern Colorado, among other uses.
Source: Pueblo Chieftain
Elemental analysis is a process where a sample of some material (e.g., soil, waste or drinking water, bodily fluids, minerals, chemical compounds) is analyzed for its elemental and sometimes isotopic composition. Elemental analysis can be qualitative (determining what elements are present), and it can be quantitative (determining how much of each are present). Elemental analysis falls within the ambit of analytical chemistry, the set of instruments involved in deciphering the chemical nature of our world.
For synthetic chemists, elemental analysis or "EA" almost always refers to CHNX analysis — the determination of the percentage weights of carbon, hydrogen, nitrogen, and heteroatoms (X) (halogens, sulfur) of a sample. This information is important to help determine the structure of an unknown compound, as well as to help prove the structure and purity of a synthesized compound.
The most common form of elemental analysis, CHN analysis, is accomplished by combustion analysis. In this technique, a sample is burned in an excess of oxygen, and various traps collect the combustion products — carbon dioxide, water, and nitric oxide. The weights of these combustion products can be used to calculate the composition of the unknown sample.
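A hedged sketch of the bookkeeping behind that calculation follows; the sample and product masses are invented purely for illustration, and real analyzers apply additional corrections.

```python
# Mass fraction of carbon in CO2 and of hydrogen in H2O
C_IN_CO2 = 12.011 / 44.009
H_IN_H2O = (2 * 1.008) / 18.015

def percent_composition(sample_mg: float, co2_mg: float, h2o_mg: float):
    """Percent C and H in the sample, from the masses of trapped CO2 and H2O."""
    pct_c = 100 * co2_mg * C_IN_CO2 / sample_mg
    pct_h = 100 * h2o_mg * H_IN_H2O / sample_mg
    return pct_c, pct_h

# Hypothetical run: 3.00 mg of sample yields 8.80 mg CO2 and 3.60 mg H2O
print(percent_composition(3.00, 8.80, 3.60))   # roughly (80.1, 13.4) percent
```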
Other quantitative methods include:
- Gravimetry, where the sample is dissolved and then the element of interest is precipitated and its mass measured or the element of interest is volatilized and the mass loss is measured.
- Optical atomic spectroscopy, such as flame atomic absorption, graphite furnace atomic absorption, and inductively coupled plasma atomic emission, which probe the outer electronic structure of atoms.
To qualitatively determine which elements exist in a sample, methods include:
- Mass spectrometric atomic spectroscopy, such as inductively coupled mass spectrometry, which probes the mass of atoms.
- Other spectroscopy which probes the inner electronic structure of atoms such as X-ray fluorescence, particle induced x-ray emission, x-ray photoelectron spectroscopy, and Auger electron spectroscopy.
- Electrochemical methods
ExploraTour - How to Build a Star
"But wait a minute," you say. "We've tried this nuclear fusion stuff on Earth to produce energy and so far it hasn't worked very well. How does the sun succeed where we have failed?"
You are right. Operational nuclear power plants on Earth use fission reactions to produce power. They work by splitting apart heavy nuclei like Uranium-235 or Plutonium-239. The combined mass of the resulting lighter nuclei is less than the original heavy nucleus. The missing mass is converted to energy, mostly in the form of heat.
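The "missing mass" statement is just mass-energy equivalence. As a rough, hedged illustration (the 0.2 u mass defect is a typical order of magnitude for a heavy-nucleus fission, not a value quoted in the text):

$$ E = \Delta m\, c^{2}, \qquad \Delta m \approx 0.2\ \mathrm{u} \;\Rightarrow\; E \approx 0.2 \times 931.5\ \mathrm{MeV} \approx 186\ \mathrm{MeV}\ \text{per fission event}. $$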
Uranium-235 and Plutonium-239 are very rare elements that are difficult to extract. The Earth's reserves will be used up in a relatively short time. In addition, the products left over from the fission reaction are radioactive and thus dangerous to humans. This radioactive waste has to be disposed of very carefully.
We have high hopes that nuclear power plants using fusion reactions to liberate energy will soon be developed.
If you read this blog regularly, you know I have a fondness for the so-called “missing eruptions” — that is, volcanic events found in ice core or sediment records but not yet identified in the geologic/volcanic record. The most glaring right now is the eruption of 1258 A.D., supposedly 1.8 times as large as the 1815 eruption of Tambora, but no candidate volcano has been conclusively identified as the source. Another enigmatic climate event that has a little more potential to be matched with a volcano happened during the mid-1450s, a period that saw cold winters in China, dry fogs in Constantinople and stunted tree ring growth around the world. It also saw one of the biggest cases of sulfur loading in the atmosphere in the last few thousand years, rivaling that of the famous 1783 Laki eruption in Iceland. All these climatic effects have been attributed to an eruption in the New Hebrides arc, specifically the Kuwae caldera in Vanuatu. However, the relationship between this eruption and the climate signatures — and the existence of the eruption itself — is still hotly debated.
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
October 24, 1998
Explanation: Sunrise seen from low Earth orbit by the shuttle astronauts can be very dramatic indeed (and the authors apologize to Hemingway for using his title!). In this breathtaking view, the Sun is just visible peeking over towering anvil-shaped storm clouds whose silhouetted tops mark the upper boundary of the troposphere, the lowest layer of planet Earth's atmosphere. Sunlight filtering through suspended dust causes this dense layer of air to appear red. In contrast, the blue stripe marks the stratosphere, the tenuous upper atmosphere, which preferentially scatters blue light.
Authors & editors:
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA/ GSFC
&: Michigan Tech. U.
Diversity - biological as well as social, linguistic and cultural diversity - is the lifeblood of sustainable development and human welfare. It is key to resilience - the ability of natural and social systems to adapt to change - and is essential for nearly every aspect of our lives.
That’s why, in the run-up to the IUCN World Conservation Congress in Barcelona, with its theme, A Diverse and Sustainable World, the latest issue of World Conservation is going ‘back to basics’.
It asks the question: How can we expect to tackle poverty and climate change if we don’t look after the natural wealth of animals, plants, microorganisms and ecosystems that make our planet inhabitable?
The articles look at the scientific, social, economic and cultural case for keeping diversity, showing how biodiversity supports our health and physical security, food production, medical research, livelihoods, tourism, artistic expression and cultural life.
I believe that, from the perspective of the C++ Standard, there is no difference between #include "xyz.h" and #include "xyz.cpp" if they both contain the same thing. In practice, an IDE might create the makefile (or other build script) such that "xyz.cpp" is compiled even when it should not be, possibly leading to redefinition errors.
SQLite's amalgamation is an example of an optimisation where the final source code is generated from the various header files and source files such that it becomes one big source file. This might be a better way than trying to develop by including (non-header) source files all over the place.
But it can.
Quote:
It would be truly amazing of the Microsoft compiler if it can do that.
I cannot say for sure, but usually when compiling a Release, it is a pretty long process and many files are typically re-compiled, and during a Debug build, you do not use optimizations.
Quote:
But is that to say every time a cpp file is changed, the whole project needs to be recompiled? Since that's the only way cross-file inlining can be done?
But I think the optimization is done at the linker stage, so perhaps only the linking stage needs to be redone.
For one thing, it is considered bad practice to include source files. Not that it is really such a bad thing if used like this, but anyway.
Quote:
If that is the case... then what's the difference between that and including cpp files? (and keeping dummy header files for human reference, or include all headers before all cpp's?)
Secondly, the entire code base is completely re-compiled every time, even if nothing has changed in those source files.
Thirdly, I guess there will be complications, such as global variables with internal linkage, and such. Probably much more.
Not sure what you are hinting at?
Quote:
I thought one of the main advantages of using headers is that the project can be incrementally compiled.
I think we should not call it "linking" when we're talking of "inlining from all of the source code", because what really happens is that the compiler is doing the work in two or three steps. The first step involves reading and "understanding" the source code. The second step involves generating the actual binary code. In the case of "whole program optimization", you'd only spend a little bit of time parsing the code and making some intermediate form that can be used for producing the final binary. But certainly, some of the steps in the actual code generation step will involve quite a bit of "hard work" for the processor, compared to just linking together already compiled object files. But for a total build from scratch, I'd expect that it's not much difference. And as Elysia says, most development is done in debug builds, where very little time is spent on optimization.
| Property | Value |
|---|---|
| Name, Symbol, Number | krypton, Kr, 36 |
| Chemical series | noble gases |
| Group, Period, Block | 18, 4, p |
| Appearance | colorless |
| Atomic mass | 83.798(2) g/mol |
| Electron configuration | [Ar] 3d10 4s2 4p6 |
| Electrons per shell | 2, 8, 18, 8 |
| Density (0 °C, 101.325 kPa) | |
| Melting point | 115.79 K (-157.36 °C, -251.25 °F) |
| Boiling point | 119.93 K (-153.22 °C, -243.8 °F) |
| Critical point | 209.41 K, 5.50 MPa |
| Heat of fusion | 1.64 kJ·mol−1 |
| Heat of vaporization | 9.08 kJ·mol−1 |
| Heat capacity | (25 °C) 20.786 J·mol−1·K−1 |
| Crystal structure | cubic face centered |
| Electronegativity | 3.00 (Pauling scale) |
| Ionization energies | 1st: 1350.8 kJ·mol−1; 2nd: 2350.4 kJ·mol−1; 3rd: 3565 kJ·mol−1 |
| Atomic radius (calc.) | 88 pm |
| Covalent radius | 110 pm |
| Van der Waals radius | 202 pm |
| Thermal conductivity | (300 K) 9.43 mW·m−1·K−1 |
| Speed of sound (gas, 23 °C) | 220 m/s |
| Speed of sound (liquid) | 1120 m/s |
| CAS registry number | 7439-90-9 |
Krypton (IPA: /ˈkrɪptən/ or /ˈkrɪptan/) is a chemical element with the symbol Kr and atomic number 36. A colorless, odorless, tasteless noble gas, krypton occurs in trace amounts in the atmosphere, is isolated by fractionating liquefied air, and is often used with other rare gases in fluorescent lamps. Krypton is inert for most practical purposes but it is known to form compounds with fluorine. Krypton can also form clathrates with water when atoms of it are trapped in a lattice of the water molecules.
Notable characteristics
Krypton, a noble gas due to its very low chemical reactivity, is characterized by a brilliant green and orange spectral signature. It is one of the products of uranium fission. Solidified krypton is white and crystalline with a face-centered cubic crystal structure which is a common property of all "rare gases".
In 1960 an international agreement defined the metre in terms of light emitted from a krypton isotope. This agreement replaced the longstanding standard metre located in Paris which was a metal bar made of a platinum-iridium alloy (the bar was originally estimated to be one ten millionth of a quadrant of the earth's polar circumference). But only 23 years later, the Krypton-based standard was replaced itself by the speed of light—the most reliable constant in the universe. In October 1983 the Bureau International des Poids et Mesures (International Bureau of Weights and Measures) defined the metre as the distance that light travels in a vacuum during 1/299,792,458 s.
Like the other noble gases, krypton is widely considered to be chemically inert. Following the first successful synthesis of xenon compounds in 1962, synthesis of krypton difluoride was reported in 1963. Other fluorides and a salt of a krypton oxoacid have also been found. ArKr+ and KrH+ molecule-ions have been investigated and there is evidence for KrXe or KrXe+.
There are 32 known isotopes of krypton. Naturally occurring krypton is made of five stable and one slightly radioactive isotope. Krypton's spectral signature is easily produced with some very sharp lines. 81Kr is the product of atmospheric reactions with the other naturally occurring isotopes of krypton. It is radioactive with a half-life of 250,000 years. Like xenon, krypton is highly volatile when it is near surface waters and 81Kr has therefore been used for dating old (50,000 - 800,000 year) groundwater. 85Kr is an inert radioactive noble gas with a half-life of 10.76 years, that is produced by fission of uranium and plutonium. Sources have included nuclear bomb testing, nuclear reactors, and the release of 85Kr during the reprocessing of fuel rods from nuclear reactors. A strong gradient exists between the northern and southern hemispheres where concentrations at the North Pole are approximately 30% higher than the South Pole due to the fact that most 85Kr is produced in the northern hemisphere, and north-south atmospheric mixing is relatively slow.
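A minimal sketch (not from the article) of how the 250,000-year half-life quoted above turns a measured isotope ratio into a groundwater age; the remaining fractions used below are made-up examples.

```python
import math

HALF_LIFE_KR81_YEARS = 250_000  # half-life quoted in the article

def age_from_fraction(remaining_fraction: float) -> float:
    """Age implied by the fraction of 81Kr remaining relative to modern air."""
    decay_constant = math.log(2) / HALF_LIFE_KR81_YEARS
    return -math.log(remaining_fraction) / decay_constant

print(f"{age_from_fraction(0.5):,.0f} years")   # 250,000 -> one half-life
print(f"{age_from_fraction(0.25):,.0f} years")  # 500,000 -> two half-lives
```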
Krypton fluoride laser
- For more details on this topic, see Krypton fluoride laser.
The compound will decompose once the energy supply stops. During the decomposition process, the excess energy stored in the excited state complex will be emitted in the form of strong ultraviolet laser radiation.
- Los Alamos National Laboratory - Krypton
- USGS Periodic Table - Krypton
- "Chemical Elements: From Carbon to Krypton" By: David Newton & Lawrence W. Baker
- "Krypton 85: a Review of the Literature and an Analysis of Radiation Hazards" By: William P. Kirk
This page uses content from Wikipedia. The original article was at Krypton. The list of authors can be seen in the page history. As with Chemistry, the text of Wikipedia is available under the GNU Free Documentation License.
You should always externalize resources such as images and strings from your application
code, so that you can maintain them independently. Externalizing your
resources also allows you to provide alternative resources that support specific device
configurations such as different languages or screen sizes, which becomes increasingly
important as more Android-powered devices become available with different configurations. In order
to provide compatibility with different configurations, you must organize resources in your
res/ directory, using various sub-directories that group resources by type and configuration.
For any type of resource, you can specify default and multiple alternative resources for your application:
- Default resources are those that should be used regardless of the device configuration or when there are no alternative resources that match the current configuration.
- Alternative resources are those that you've designed for use with a specific configuration. To specify that a group of resources are for a specific configuration, append an appropriate configuration qualifier to the directory name.
For example, while your default UI
layout is saved in the
res/layout/ directory, you might specify a different layout to
be used when the screen is in landscape orientation, by saving it in the res/layout-land/
directory. Android automatically applies the appropriate resources by matching the
device's current configuration to your resource directory names.
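For instance, a resource tree that follows this scheme might look like the sketch below. The file names are placeholders; -land and -es are the standard qualifiers for landscape orientation and Spanish-language locales, used here only as examples.

```
res/
    layout/main.xml          # default layout
    layout-land/main.xml     # alternative layout for landscape orientation
    values/strings.xml       # default strings
    values-es/strings.xml    # alternative strings for Spanish-locale devices
```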
Figure 1 illustrates how the system applies the same layout for two different devices when there are no alternative resources available. Figure 2 shows the same application when it adds an alternative layout resource for larger screens.
The following documents provide a complete guide to how you can organize your application resources, specify alternative resources, access them in your application, and more:
- Providing Resources
- What kinds of resources you can provide in your app, where to save them, and how to create alternative resources for specific device configurations.
- Accessing Resources
- How to use the resources you've provided, either by referencing them from your application code or from other XML resources.
- Handling Runtime Changes
- How to manage configuration changes that occur while your Activity is running.
- Localization
- A bottom-up guide to localizing your application using alternative resources. While this is just one specific use of alternative resources, it is very important in order to reach more users.
- Resource Types
- A reference of various resource types you can provide, describing their XML elements, attributes, and syntax. For example, this reference shows you how to create a resource for application menus, drawables, animations, and more.
Re: The Permo-Triassic extinction killed the dinosaurs, according to Fox News
What an excellent example of sloppy journalism. The sad thing is that the paper
could have made a very interesting news story in the right hands. The authors
found exceptionally high concentrations of nano-scale silica dust in the
end-Permian coals, which they suggest combines with organic pollutants from the
coal to cause elevated rates of lung cancer. They think the acidic fallout from
the Siberian Traps eruptions accelerated erosion to produce the silica dust,
which washed into the peat bogs.
By the way, read that first sentence carefully, and you discover that plants
were walking the earth along with the dinosaurs 250 million years ago. Maybe
give it to your students and say "spot the errors."
At 9:32 AM -0500 1/8/10, Thomas R. Holtz, Jr. wrote:
>"The tremendous volcanic eruption thought to be responsible for Earth's
>largest mass extinction - which killed more than 70 percent of plants and
>dinosaurs walking the planet 250 million years ago - is still taking lives
>Well, to be fair, 0 of the dinosaur individuals present in the latest
>Permian survived into the earliest Triassic. Of course, 0 of the human
>individuals present in the latest Permian survived into the earliest
>But it gets... Er... Better (?):
>"Scientists investigating the high incidence of lung cancer in China's Xuan
>Wei County in Yunnan Province conclude that the problem lies with the coal
>residents use to heat their homes. That coal was formed by the same
>250-million-year-old giant volcanic eruption - termed a supervolcano - that
>was responsible for the extinction of the dinosaurs. The high silica content
>of that coal is interacting with volatile organic matter in the soil to
>cause the unusually high rates of lung cancer."
>Coal. Formed by a basaltic eruption.
>(Okay, in the paper itself, it states that the coal formed at the P/Tr
>boundary, but not that it was produced by volcanic action as such. The
>particular coal seam seems to be comparable to the Z Coal in the Hell
>Thomas R. Holtz, Jr.
>Email: firstname.lastname@example.org Phone: 301-405-4084
>Office: Centreville 1216
>Senior Lecturer, Vertebrate Paleontology
>Dept. of Geology, University of Maryland
>Faculty Director, Earth, Life & Time Program, College Park Scholars
>Faculty Director, Science & Global Change Program, College Park Scholars
>Mailing Address: Thomas R. Holtz, Jr.
> Department of Geology
> Building 237, Room 1117
> University of Maryland
> College Park, MD 20742 USA
Jeff Hecht, science & technology writer
email@example.com or firstname.lastname@example.org
Boston Correspondent: New Scientist magazine
Contributing Editor: Laser Focus World
525 Auburn St., Auburndale, MA 02466 USA
tel. 617-965-3834 http://www.jeffhecht.com
Economic growth in China has led to significant increases in fossil fuel consumption © stock.xchng (frédéric dupont, patator)
Per capita CO2 emissions in China reach EU levels
Global emissions of carbon dioxide (CO2) – the main cause of global warming – increased by 3% last year. In China, the world’s most populous country, average emissions of CO2 increased by 9% to 7.2 tonnes per capita, bringing China within the range of 6 to 19 tonnes per capita emissions of the major industrialised countries.
In the European Union, CO2 emissions dropped by 3% to 7.5 tonnes per capita. The United States remain one of the largest emitters of CO2, with 17.3 tonnes per capita, despite a decline due to the recession in 2008-2009, high oil prices and an increased share of natural gas.
According to the annual report ‘Trends in global CO2 emissions’, released today by the JRC and the Netherlands Environmental Assessment Agency (PBL), the top emitters contributing to the global 34 billion tonnes of CO2 in 2011 are: China (29%), the United States (16%), the European Union (11%), India (6%), the Russian Federation (5%) and Japan (4%).
With 3%, the 2011 increase in global CO2 emissions is above the past decade's average annual increase of 2.7%.
An estimated cumulative global total of 420 billion tonnes of CO2 has been emitted between 2000 and 2011 due to human activities, including deforestation. Scientific literature suggests that limiting the rise in average global temperature to 2°C above pre-industrial levels – the target internationally adopted in UN climate negotiations – is possible only if cumulative CO2 emissions in the period 2000–2050 do not exceed 1 000 to 1 500 billion tonnes. If the current global trend of increasing CO2 emissions continues, cumulative emissions will surpass this limit within the next two decades.
Plants have evolved a number of cold-response genes encoding proteins that induce tolerance to freezing, alter water absorption and initiate many other low temperature induced processes. In the 1 April Genes and Development, Jian-Kang Zhu and colleagues of the Department of Plant Sciences, University of Arizona, shed light on how these genes are regulated.
Lee et al. report that the protein HOS1 negatively regulates cold-response genes in Arabidopsis. At low temperatures, HOS1 relocalizes from the cytoplasm to the nucleus where it regulates gene expression; hos1 mutants show an excessive induction of cold-response genes. The HOS1 gene was mapped to chromosome II of Arabidopsis and cloned. It encodes a protein of 915 amino acids with a nuclear localization signal and a RING finger. Proteins with this motif have been implicated in the breakdown of other proteins by a process that involves ubiquitination.
Lee et al. speculate that HOS1 might regulate the function of cold-response genes by targeting the gene products for degradation.
Lee H, Xiong L, Gong Z, Ishitani M, Stevenson B, Zhu JK: The Arabidopsis HOS1 gene negatively regulates cold signal transduction and encodes a RING finger protein that displays cold-regulated nucleo-cytoplasmic partitioning. Genes Dev 2001, 15.
Department of Plant Sciences, University of Arizona
- If the Earth rotated in the opposite sense (clockwise rather than counterclockwise), how long would the solar day be?
- Suppose that the Earth’s pole was perpendicular to its orbit. How would the azimuth of sunrise vary throughout the year? How would the length of day and night vary throughout the year at the equator? at the North and South Poles? where you live?
- You are an astronaut on the moon. You look up, and see the Earth in its full phase and on the meridian. What lunar phase do people on Earth observe? What if you saw a first quarter Earth? new Earth? third quarter Earth? Draw a picture showing the geometry.
- If a planet always keeps the same side towards the Sun, how many sidereal days are in a year on that planet?
- If on a given day, the night is 24 hours long at the North Pole, how long is the night at the South Pole?
- On what day of the year are the nights longest at the equator?
- From the fact that the Moon takes 29.5 days to complete a full cycle of phases, show that it rises an average of 48 minutes later each night.
- What is the ratio of the flux hitting the Moon during the first quarter phase to the flux hitting the Moon near the full phase?
- Titan and the Moon have similar escape velocities. Why does Titan have an atmosphere, but the Moon does not?
Friday, October 30, 2009
Astronomers have confirmed that an exploding star spotted by Nasa's Swift satellite is the most distant cosmic object to be detected by telescopes.
In the journal Nature, two teams of astronomers report their observations of a gamma-ray burst from a star that died 13.1 billion light-years away.
The massive star died about 630 million years after the Big Bang.
UK astronomer Nial Tanvir described the observation as "a step back in cosmic time".
Professor Tanvir led an international team studying the afterglow of the explosion, using the United Kingdom Infrared Telescope (UKIRT) in Hawaii.
Swift detects around 100 gamma ray bursts every year
He told BBC News that his team was able to observe the afterglow for 10 days, while the gamma ray burst itself lasted around 12 seconds.
The event, dubbed GRB 090423, is an example of one of the most violent explosions in the Universe.
It is thought to have been associated with the cataclysmic death of a massive star - triggered by the centre of the star collapsing to form a "stellar-sized" black hole.
"Swift detects something like 100 gamma ray bursts per year," said Professor Tanvir. "And we follow up on lots of them in the hope that eventually we will get one like this one - something really very distant."
Another team, led by Italian astronomer Ruben Salvaterra studied the afterglow independently with the National Galileo Telescope in La Palma.
Little red dot
He told BBC News: "This kind of observation is quite difficult, so having two groups have the same result with two different instruments makes this much more robust."
"It is not surprising - we expected to see an event this distant eventually," said Professor Salvaterra.
"But to be there when it happens is quite amazing - definitely something to tell the grandchildren."
A GAMMA-RAY BURST RECIPE
Models assume GRBs arise when giant stars burn out and collapse
During collapse, super-fast jets of matter burst out from the stars
Collisions occur with gas already shed by the dying behemoths
The interaction generates the energetic signals detected by Swift
Remnants of the huge stars end their days as black holes
The astronomers were able to calculate the vast distance using a phenomenon known as "red shift".
Most of the light from the explosion was absorbed by intergalactic hydrogen gas. As that light travelled towards Earth, the expansion of the Universe "stretches" its wavelength, causing it to become redder.
"The greater that amount of movement [or stretching], the greater the distance." he said.
The image of this gamma ray burst was produced by combining several infrared images.
"So in this case, it's the redness of the dot that indicates that it is very distant," Professor Tanvir explained.
Before this record-breaking event, the furthest object observed from Earth was a gamma ray burst 12.9 billion light-years away.
"This is quite a big step back to the era when the first stars formed in the Universe," said Professor Tanvir.
"Not too long ago we had no idea where the first galaxies came from, so astronomers think this is a profound moment.
"This is... the last blank bit of the map of the Universe - the time between the Big Bang and the formation of these early galaxies."
Data from two powerful telescopes confirmed the result
And this is not the end of the story.
Bing Zhang, an astronomer from the University of Nevada, who was not involved in this study, wrote an article in Nature, explaining its significance.
The discovery, he said, opened up the exciting possibility of studying the "dark ages" of the Universe with gamma ray bursts.
And Professor Tanvir is already planning follow-up studies "looking for the galaxy this exploding star occurred in."
Next year, he and his team will be using the Hubble Space Telescope to try to locate that distant, very early galaxy.
Source: BBC News | <urn:uuid:dbe88d6f-99d3-40e4-ae1c-e659a8cace09> | 3.625 | 1,143 | Content Listing | Science & Tech. | 56.8545 | 323 |
The time variability of X-ray emission can be studied with the HRI since each detected photon has its time of arrival recorded by the detector. The accuracy of this time is limited by the electronic resolution of the HRI processor, which is 61 microseconds relative to the ROSAT spacecraft clock. The relative arrival times of photons during a single observation are accurate to this value. The absolute accuracy of the ROSAT spacecraft clock, and its conversion to UT, is expected to be a few milliseconds.
The HRI has a processing dead time, during which events may not be counted, which varies between 0.36 and 1.35 msec per event. The variation depends on the fine position of the event and is discussed in the HRI calibration documentation. Thus there is a dead time correction that needs to be made when calculating the true event rate from a source. A mean dead time of τ = 0.81 msec can be used for this purpose, and the true rate is then given by r = n / (1 - nτ), where n is the observed rate.
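A minimal sketch of applying that correction (this assumes the standard non-paralyzable dead-time form given above; the observed rate is just an illustrative value):

```python
# Dead-time correction for the HRI: true rate r = n / (1 - n * tau),
# where n is the observed rate and tau is the mean dead time per event.
tau = 0.81e-3          # mean dead time, seconds (0.81 msec)
n = 100.0              # observed count rate, counts per second (example value)

r = n / (1.0 - n * tau)
print(f"observed {n:.1f} c/s -> dead-time corrected {r:.1f} c/s")   # ~108.8 c/s
```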
Due to the telescope wobble and the small variations in the QE of the HRI on spatial scales of a few arc minutes (see Fig. 5.13), the count rate of a source can vary by 5% between the extremities of a wobble. This can produce a low amplitude source variability on a time scale of approximately 100-400 seconds in some sources.
As for the PSPC, the HRI observations are typically interrupted once per orbit, and sometimes as much as three times per orbit. Typical continuous viewing times for a source will be about 2000 seconds, with some cases lasting up to 4000 seconds. Long term monitoring of sources on time scales of weeks or more will be limited by the solar view constraints of the satellite. This limits source accessibility to about one month every six months for a source in the ecliptic plane, with greater access time for sources closer to the ecliptic poles. | <urn:uuid:32e7c4e9-4f66-4a50-825f-0e4d7779c26b> | 2.5625 | 403 | Knowledge Article | Science & Tech. | 57.032754 | 324 |
New research shows that dolphins can stay awake for at least 15 days in a row without experiencing fatigue, or other negative side effects.
To put this in perspective, human research subjects have been able to stay awake for only eight to 10 days, and all experienced progressive deterioration in concentration, motivation, perception and other mental processes as the period of sleep deprivation increased.
Dolphins can stay awake for this long because they sleep with only half of their brain at a time. This process, called unihemispheric sleep, was thought to have evolved as a way to allow dolphins to continue breathing at the surface while resting.
Brian Branstetter from the National Marine Mammal Foundation and colleagues studied two bottlenose dolphins (Tursiops truncatus), one male and one female, and found that they could use echolocation with “near-perfect accuracy” for up to 15 days. Both dolphins showed no signs of fatigue for five days, and the female continued other tasks for 10 additional days.
“These majestic beasts are true unwavering sentinels of the sea. The demands of ocean life on air breathing dolphins have led to incredible capabilities, one of which is the ability to continuously, perhaps indefinitely, maintain vigilant behavior through echolocation” said Branstetter.
The research was published in the journal PLoS ONE on October 17: Dolphins Can Maintain Vigilant Behavior through Echolocation for 15 Days without Interruption or Cognitive Impairment.
Copyright © 2012 by Marine Science Today, a publication of Marine Science Today LLC. | <urn:uuid:a0f556e0-26a3-4dcc-a0ed-23efa46bb223> | 3.78125 | 320 | News Article | Science & Tech. | 26.925051 | 325 |
The Reader may here observe the Force of Numbers, which can be successfully applied, even to those things, which one would imagine are subject to no Rules. There are very few things which we know, which are not capable of being reduc'd to a Mathematical Reasoning; and when they cannot it's a sign our knowledge of them is very small and confus'd; and when a Mathematical Reasoning can be had it's as great a folly to make use of any other, as to grope for a thing in the dark, when you have a Candle standing by you.
Of the Laws of Chance (1692)
Georg Cantor at the Dawn of Point-Set Topology
A first course in point-set topology can be challenging for the student because of the abstract level of the material. In an attempt to mitigate this problem, we use the history of point-set topology to obtain natural motivation for the study of some key concepts. In this article, we study an 1872 paper by Georg Cantor. We will look at the problem Cantor was attempting to solve and see how the now familiar concepts of a point-set and derived set are natural answers to his question. We emphasize ways to utilize Cantor's methods in order to introduce point-set topology to students.
In his introduction to his book Introduction to Phenomenology , Msgr. Robert Sokolowski writes
As a philosopher, Msgr. Sokolowski is accustomed to the traditional methods of teaching philosophy to undergraduates – start with Plato, Aristotle and the other ancients, continue with developments through the Scholastic and Enlightenment eras, and then show how modern philosophy builds upon all that has gone before. He must be puzzled, then, by the lack of attention to the historical development of ideas that generally attends to the teaching of mathematics. He perceives that something important is missing, and he is correct.
In recent years, interest has grown considerably in developing an historical approach to the teaching of mathematics. Victor Katz has edited an anthology of articles giving different perspectives on the development of mathematics in general from an historical point of view. Some authors, such as Klyve, Stemkoski, and Tou, focus on one particular historical figure – in their case, Euler – important to the development of mathematics. There is also interest in the historical development of certain areas of mathematics commonly included in the undergraduate curriculum. Brian Hopkins has written a textbook introducing discrete mathematics from an historical point of view; David Bressoud has written two textbooks that present analysis from an historical perspective; and Adam Parker has compiled an original sources bibliography for ordinary differential equations instructors that contains many of the original papers in ODEs.
This is the first paper in a planned series that will outline ways to introduce point-set topology concepts motivated by their place in history. To borrow a phrase from David Bressoud, it is an "attempt to let history inform pedagogy" [2, p. vii]. A growing collection of the historic papers that are important to the development of point-set topology may be found on the author's web site.
This paper focuses on the seminal work of Georg Cantor (1845-1918), a German mathematician well-known for his contributions to the foundations of set theory, but whose contributions to point-set topology are not very well known. Cantor’s works are available in his collected papers; for complete biographical information, see Dauben’s definitive work.
Scoville, Nicholas, "Georg Cantor at the Dawn of Point-Set Topology," Loci (March 2012), DOI: 10.4169/loci003861 | <urn:uuid:1133c1bd-455a-4f42-be03-ecffa85e1482> | 2.75 | 763 | Academic Writing | Science & Tech. | 34.359673 | 326 |
Unitary Method Problem
Date: 02/01/99 at 19:13:36
From: Tamara
Subject: A Unitary Method Problem

Runts come in a carton. There are 8 packages in one carton. There are 3 boxes in each package. If there are 170 runts in one box, how many runts are there in 6 cartons?
Date: 03/01/99 at 17:48:36
From: Doctor Swiss
Subject: Re: A Unitary Method Problem

This can be a difficult question to take in all at once. The best way to approach it is to break it down into steps. The first step is to notice how runts, boxes, packages, and cartons are related. It might help to draw a diagram, like this one: 170 runts/box --> 3 boxes/package --> 8 packages/carton --> 6 cartons. Just so you know, by runts/box, we mean runts in a box. Now you can see how everything is related, and you can also see the steps that you have to take.

Now to find out the number of runts in 6 cartons, it would help us to see how many packages are in 6 cartons. Why? Well, once we know the number of packages in 6 cartons, we can find out the number of boxes in 6 cartons, and finally the number of runts in 6 cartons.

So now we are trying to find the number of packages in 6 cartons. We know the number of packages in 1 carton is 8. What if we had 2 cartons? Then we would have 8 packages from the first carton and 8 from the second, for a total of 8 + 8 = 16. Note that this is also 8 * 2 = 16 packages. Try this for 3 cartons. I hope you can see the jump and figure out that there are 8 * 6 packages in 6 cartons. You can repeat this process to find that there are 3 * 8 * 6 boxes in 6 cartons. You need one more step to find the number of runts in 6 cartons. It turns out that this problem is just one big multiplication problem.

Please write back if you need more help or have any more questions. Good luck,
- Doctors Swiss, Teeple, and Stacey, The Math Forum http://mathforum.org/dr.math/
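A quick sketch of the same arithmetic, just restating the steps from the answer above in code (the variable names are illustrative):

```python
# The carton problem above, computed step by step.
runts_per_box = 170
boxes_per_package = 3
packages_per_carton = 8
cartons = 6

packages = packages_per_carton * cartons   # 8 * 6  = 48 packages in 6 cartons
boxes = boxes_per_package * packages       # 3 * 48 = 144 boxes in 6 cartons
runts = runts_per_box * boxes              # 170 * 144 = 24,480 runts in 6 cartons
print(runts)
```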
© 1994-2013 The Math Forum | <urn:uuid:62fdfc79-523b-416e-907b-66a7613df15b> | 2.65625 | 514 | Q&A Forum | Science & Tech. | 84.993487 | 327 |
A few months ago I read about a very simple but fun probability puzzle. Someone tells you:

“I have two children. At least one is a boy born on a Tuesday. [And if it were not the case, I would have told you.] What is the probability I have two boys?”

Try to solve it yourself. John Baez mentions that you would think - or he would think - that the information about Tuesday is irrelevant, because the days of the week are independent of the sex and we only care about the latter.
So you would think that there are 4 equally represented groups of 2-kid families, namely boy-boy, boy-girl, girl-boy, and girl-girl families where the two hyphenated words refer to the younger and older kid, respectively. Only the girl-girl families are eliminated, and 1 of the remaining 3 groups is a two-boy family, so the conditional probability is 1/3.
However, that's a wrong result. The information about the Tuesday actually does matter. Here's why:
In all families with exactly 2 children, one may label the children as the "younger" and "older" one, even if the difference is just in seconds.
Each kid may be born on any day and have any sex, so there are 14 equally likely possibilities for each child. The two children are independent (forget that the phenomenon of twins tends to increase the same-day pairs), so there are 14 x 14 possibilities for two kids. Each of these 14 x 14 possibilities is equally likely. So 1/196 of the world's families with exactly 2 kids fits each condition.
Among the 196 types of the families, how many of them contain at least one Tuesday son? Well, in 14 of them, the younger kid is a Tuesday son (the older one may be anything chosen from the 14 possibilities). In 14 other of them (the younger can be anything), the older one is a Tuesday son. However, I have counted the families with two Tuesday sons twice. So there are 14+14-1 = 27 possibilities among the 196 for which the condition "at least one kid is a Tuesday son" is satisfied.
This is the assumption which is a part of the calculation of the conditional probability. We need the other part, too. Among these 27/196 of the families, 13/196 of all families have two boys, by pure counting, so the result is
P = 13/27as the fraction of the families that satisfied the condition. Note that it is just slightly less than 1/2 = 13.5/27 i.e. much more than 1/3. I had to highlight the result because almost no one reads the full article and almost no one notices that the right results is neither 1/3 nor 1/2.
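If you prefer to check the counting by brute force, a short enumeration of the 196 equally likely (sex, weekday) combinations reproduces the same fraction (a sketch; the weekday labels are arbitrary):

```python
# Enumerate all 14 x 14 equally likely (sex, weekday) states for two children
# and count how many "at least one Tuesday boy" families have two boys.
from itertools import product

days = range(7)                                        # 0 stands for Tuesday
kids = [(sex, day) for sex in "BG" for day in days]    # 14 one-child states

families = list(product(kids, kids))                   # 196 two-child states
tuesday_boy = [f for f in families if ("B", 0) in f]   # at least one Tuesday boy
two_boys = [f for f in tuesday_boy if f[0][0] == "B" and f[1][0] == "B"]

print(len(two_boys), "/", len(tuesday_boy))            # prints: 13 / 27
```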
Indeed, the large difference of the right result from 1/3 appears because one de facto identifies one of the sons by mentioning that it is the kid from Tuesday. If you assumed there were infinitely many days in a week and you would take any family with at least one Tuesday kid, the "Tuesday" information would identify this kid completely (two Tuesday kids would be infinitesimally unlikely), and the question what is the probability of 2 sons would be reduced to the question what is the probability that the other, equally specific kid - the non-Tuesday kid - is male - which is of course 1/2.
I will discuss this "identification" and reasons why the result is close to 1/2 at the very end.
Indistinguishable kids' bound states
With kids that would satisfy the Bose or Fermi statistics, the counting would be different but equally straightforward. Instead of 14 x 14 = 196 possibilities, one has 14 x 15 / 2 = 105 for bosons (the symmetric triangle) and 14 x 13 / 2 = 91 (the antisymmetric triangle) for fermions. Among the 105 or 91 options, how many of them contain at least one Tuesday son? Well, in these two cases, we can't say which of them is older and younger: they're identical.
So if there is at least 1 Tuesday son, the number of states with at least 1 Tuesday son is 14 for the bosons - we can just create the other particle into the 1-particle state - or 13 for the fermions - we can also add the second creation operator, but with another Tuesday son, the state will vanish because of Pauli's exclusion principle.
Among these 14 or 13 states respectively, for bosons and fermions, 7 or 6 are two-son states, respectively. So the odds are 7/14 = 1/2 for the bosons and 6/13 for the fermions. Note that the bosons literally saturate the 1/2 bound while the fermions are just slightly below it.
Why not one third?
Finally, I want to comment on "why the information about Tuesday matters". If we sum up the probabilities for the problems where the son is born on Sunday, Monday... and up to Saturday, shouldn't we get the same result? And by symmetry, the result must be equal for all 7 days, so doesn't each term have to be 1/3?
The answer is that we can't add the probabilities in this way because the "at least one Monday son" etc. are assumptions, not propositions conditioned by these assumptions, and they're not disjoint. At any rate, the calculation is nonlinear because the conditional probabilities have the probability of the assumption in the denominator rather than the numerator, so you can't simply add the possibilities in any way.
The word term in the previous paragraph is therefore incorrect.
How and why 1/3 gets enhanced to nearly 1/2
If you were only told that "one of the kids is a boy", the mixed families would be overrepresented over the two-boy families by the 2-to-1 ratio because boy-girl and girl-boy families are as likely as boy-boy families; again, the kids notation is younger-older.
However, if you're told that "one of the kids is a Tuesday boy", this overrepresentation almost disappears. Why? Because 1/7 of the boy-girl and girl-boy families have a Tuesday boy. But (approximately) 2/7 of the boy-boy families have at least one Tuesday boy because each of these two boys has a chance to be born on Tuesday.
In this way, the boy-boy families (nearly) compensate the factor of two by which they were underrepresented relatively to the mixed families.
Bonus: this puzzle and crackpot Sean Carroll's misunderstanding of logic
This logical puzzle is actually a very precise pedagogical example showing what's wrong with the thinking of various people about the arrow of time. Some people - those who say that the information about Tuesday doesn't matter and who typically end up with the result 1/3 - think that
Prob(cond, any_day) = Prob(cond, Monday) + ... + Prob(cond, Sunday)

where "cond" is an extra condition. So if we make a statement about a specific object and if this statement doesn't prefer any day of the week, then adding the information about "its" day of the week doesn't matter. It only reduces the probability by a factor of 7 if the probability is day-blind.
That's right for "conclusions" or "outcomes". However, the error that these people are making is that they think that this "additive" counting of the probabilities also holds for the probabilities of assumptions, i.e. probabilities of conditions in the conditional probability. But no such a linearity exists over there. Conditions (and initial states) don't follow the same maths as the outcomes (and final states)!
There is no condition-outcome or past-future symmetry in mathematical logic! That's why it matters for the probabilities whether the information about Tuesday is specified even though there is nothing special about Tuesday. | <urn:uuid:f41a2762-3903-4c42-a292-d7c0d70bff3d> | 2.578125 | 1,661 | Comment Section | Science & Tech. | 59.928435 | 328 |
By Mark Kinver
Science and environment reporter, BBC News
The populations of the world's common birds are declining as a result of continued habitat loss, a global assessment has warned.
The survey by BirdLife International found that 45% of Europe's common birds had seen numbers fall, as had more than 80% of Australia's wading species.
The study's authors said governments were failing to fund their promises to halt biodiversity loss by 2010.
The findings will be presented at the group's World Conference in Argentina.
The State of the World's Birds 2008 report, the first update since 2004, found that common species - ones considered to be familiar in people's everyday lives - were declining in all parts of the world.
In Europe, an analysis of 124 species over a 26-year period revealed that 56 species had declined in 20 countries.
Farmland birds were worst affected, with the number of European turtle-doves (Streptopelia turtur) falling by 79%.
In Africa, birds of prey were experiencing "widespread decline" outside of protected areas. While in Asia, 62% of the continent's migratory water bird species were "declining or already extinct".
"For decades, people have been focusing their efforts on threatened birds," explained lead editor Ali Stattersfield, BirdLife International's head of science.
"But alongside this, we have been working to try to get a better understanding of what is going on in the countryside as a whole."
By consolidating data from various surveys, the team of researchers were able to identify trends affecting species around the world.
"It tells us that environmental degradation is having a huge impact - not just for birds, but for biodiversity as well," she told BBC News.
While well-known reasons, such as land-use changes and intensive farming, were causes, Ms Stattersfield said that it was difficult to point the finger of blame at just one activity.
"The reasons are very complex," she explained. "For example, there have been reported declines of migratory species - particularly those on long-distance migrations between Europe and Africa.
"It is not just about understanding what is happening at breeding grounds, but also what is happening at the birds' wintering sites."
She said the findings highlighted the need to tackle conservation in a number of different ways.
"It is not enough to be looking at individual species or individual sites; we need to be looking at some of the policies and practices that affect our wider landscapes."
The global assessment also showed that rare birds were also continuing to be at risk.
One-in-eight of the world's birds - 1,226 species - was listed as being Threatened. Of these, 190 faced an imminent risk of extinction.
The white-rumped vulture, a once common sight in India, has seen its population crash by 99.9% in recent years.
An anti-inflammatory drug for cattle, called diclofenac, has been blamed for poisoning the birds, which eat the carcasses of the dead livestock.
"That has been a really shocking story," Ms Stattersfield said.
"Four years ago, we were not even sure what was responsible for the dramatic declines. It happened so suddenly, people were not prepared for it.
"Since then, the basis for the decline is well understood and measures are being taken to remove diclofenac from veterinary use in India.
"However, it is still available for sale and there still needs to be a lot more work to communicate the problem at a local level.
"But it demonstrates that we can get to the bottom of the reasons behind declines."
The plight of albatrosses becoming entangled in long-line fishing tackle has also been the subject of sustained campaigning, attracting high-profile supporters such as Prince Charles and yachtswoman Dame Ellen MacArthur.
About 100,000 of the slow-breeding birds are estimated to drown each year as a result of being caught on the lines' fish hooks.
But fisheries in a growing number of regions are now introducing measures to minimise the risk to albatrosses.
Ms Stattersfield said these examples showed that concerted effort could investigate and identify what was adversely affecting bird populations.
But she quickly added that prevention was always better than finding a cure.
"We don't want to have to react to problems that come about from bad practice.
"What we are trying to do with this report is to be as clear as possible about what are the underlying causes, and then present a range of conservation measures that can preserve birds and biodiversity."
BirdLife International will use the report, which is being published at its week-long World Conference in Buenos Aires and on the group's website, to call for governments to make more funds available for global conservation.
"Effective biodiversity conservation is easily affordable, requiring relatively trivial sums at the scale of the global economy," said Dr Mike Rands, BirdLife's chief executive.
He estimated that safeguarding 90% of Africa's biodiversity would cost less than US $1bn (£500m) a year.
"The world is failing in its 2010 pledge to achieve a significant reduction in the current rate of loss of biodiversity," he warned.
"The challenge is to harness international biodiversity commitments and that concrete actions are taken now." | <urn:uuid:73f989c0-8fdf-4b9b-88c7-3308c8965da8> | 3.125 | 1,136 | News Article | Science & Tech. | 42.722433 | 329 |
Despite their collective efficiency and order, beehives are often plagued by scourges that would rival a medieval city. Varroa mites, deformed wing virus, and intestinal fungi are just a few of the worst. Now researchers have identified a new enemy that ought to strike fear in the hearts of honey bees: A tiny fly that lays its eggs in the bee abdomen, giving rise to maggots that wiggle out near the victim's head. So far, the infection rate does not appear to be high enough to cause problems for hives, but experts are casting a wary eye on the fly. "It's certainly worth a lot more attention," says Dennis vanEngelsdorp of the University of Maryland, College Park.
The parasitism was discovered by accident. In 2008, John Hafernik, an entomologist at San Francisco State University in California, was looking for insects to feed to praying mantises he had collected for a class. He scooped up some dead honey bees that were lying under a light outside his building on campus and left several of the corpses in a vial on his desk. About a week later, Hafernik noticed maggots in the vial. "I knew there was something strange going on," he recalls. After the maggots matured into flies, entomologist Brian Brown from the Natural History Museum of Los Angeles County in California identified the insects as Apocephalus borealis, a kind of scuttle fly. The flies are native to North America and were known to parasitize bumble bees, but they had not been seen afflicting honey bees.
When Hafernik and his students collected more dead bees under the light outside the building, they found that the vast majority had been parasitized by the scuttle fly. In a clear plastic box in the lab, they observed the flies chasing live honey bees and laying eggs in them. After a week, up to a dozen larvae squirmed out near the bee's head. In the wild, as the larvae grow inside them, infected bees abandon the hive at night, head for bright lights, and then die stumbling on the ground.
The problem was not unique to the campus; the researchers found fly-parasitized bees in three out of four honey bee hives sampled in the San Francisco Bay Area, they report online today in PLoS ONE. The good news is that when Hafernik's group examined a hive that had been set up near the entomology building a few years ago, only about 5% to 15% of the forager bees were infected—not a level that would threaten the hive. For individual bees, of course, being parasitized is bad news. "It's a death sentence," Hafernik says. "We don't find bees that are surviving." In addition, the flies appear to be able to transmit deformed wing virus, which is fatal, and the deadly fungus Nosema ceranae, which causes bee diarrhea.
It's not clear when or how the fly might have jumped from bumble bees to honey bees. Because the fly is present across the continent, the next step is to figure out where it is parasitizing honey bees. DNA analysis of commercial hive samples suggests that the flies are present in South Dakota and the Central Valley of California. (Honey bees are trucked between these two locations.) The distribution of the flies in Europe or Asia is unknown. "Extensive surveys are now needed on the distribution of the flies in the global honey bee population," says bee pathologist Elke Genersch of the Institute for Bee Research in Hohen Neuendorf, Germany, who was not involved in the study.
The parasites conceivably might play a role in colony collapse disorder (CCD), the sudden abandonment that has been resulting in the loss of 7% of hives a year in the United States. "Anything that further stresses the bee population and increases bee losses can contribute to CCD," says Eric Mussen of the University of California, Davis, who was not involved in the study. But given the infection rate observed in the San Francisco State University hive, the parasite "does not appear to be a dominant factor," he says. The situation could change if the flies are able to reproduce within bee hives and thus easily parasitize many bees, Genersch says. "Such a high host density might allow the fly population to explode." | <urn:uuid:b3b1b38c-5c83-4b55-ba48-24971aa87ed5> | 3.234375 | 906 | News Article | Science & Tech. | 47.171763 | 330 |
Will The Earth Stop Rotating?
Date: 1999 - 2000
Will the earth stop rotating?
Yes, but not for a long, long, long time. (If I remember correctly, it is currently slowing down by about half a second per century.) As the earth rotates it gets stretched and squeezed by tidal forces. The energy required to do this work comes from the earth's rotation.

The simple answer to this is No. It is believed that the Earth's day will be twice as long as it is now, in about 5 thousand million years' time, but there is too much momentum in the Earth to stop it from rotating. By the way, at the moment the Earth is rotating its fastest since the late 1920s, having lost approximately 0.63 milliseconds per day in the last 12 months (to June 28, 2001) against atomic time, based on preliminary International Earth Rotation Service data; compared with 3.13 milliseconds per day in 1972, and 3.89 milliseconds per day in 1912. The Earth GAINED on atomic time in 1929 by 0.35 ms/day.

Because of tidal friction... yes it will. In fact, it is slowing as we ride on it now. Actually, it will not stop, but rather the period of rotation will equal its period of revolution. I do not have the number at hand, but I seem to recall that each (solar) year is 0.00024 seconds slower than the year one century earlier. The number may not be correct, but the concept is. In the same way that the moon rotates around the earth, the earth will eventually rotate around the sun... if the sun does not supernova first!

There is a small tidal drag on the earth caused by the gravitational forces of the moon and sun which have a small effect on the earth's rotation, but the effect, while measurable, is exceedingly small. On the other hand, the reason the moon always presents the same face to the earth, it is believed, was caused by tidal drag of the earth on the moon, which is much greater because the mass of the moon is so much smaller than that of the earth.
Update: June 2012 | <urn:uuid:04155edf-d0d0-4ea6-b914-a10ba3c95a22> | 3.15625 | 476 | Knowledge Article | Science & Tech. | 74.65 | 331 |
Using OpenMP - The Book and Examples
Use this forum to discuss the book: Using OpenMP- Portable Shared Memory Parallel Programming
by Barbara Chapman, Gabriele Jost and Ruud van der Pas: http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11387
The sources are available as a free download under the BSD license. Each source comes with a copy of the license. Please do not remove this.
You are encouraged to try out these examples and perhaps use them as a starting point to better understand and perhaps further explore OpenMP.
Each source file constitutes a full working program. Other than a compiler and run time environment to support OpenMP, nothing else is needed.
With the exception of one example, there are no source code comments. Not only are these examples very straightforward, they are also discussed in the above mentioned book.
As a courtesy, each source directory has a make file called "Makefile". This file can be used to build and run the examples in the specific directory.
Before you do so, you need to activate the appropriate include line in the file Makefile. There are include files for several compilers and Unix based Operating Systems (Linux, Solaris and Mac OS, to be precise). These files have been put together on a best effort basis.
The User's Guide that is bundled with the examples explains this in more detail.
Please post your feedback about the book and/or these examples to this forum. | <urn:uuid:cdc62d61-188b-4a3a-9317-ec4f7188eca6> | 2.890625 | 312 | Comment Section | Software Dev. | 45.194425 | 332 |
An Analysis of the Classic Arctic Outbreak Event of Late December 2008-Early January 2009
By Christian M. Cassell
The 2008-2009 winter was characterized by colder than normal temperatures and above normal snowfall for each month from October through March. While there was no one significant snow event that overshadowed any other this past winter, a bitterly cold Arctic outbreak that persisted for more than two weeks brought the coldest temperatures in a decade to the Anchorage area, and grabbed headlines around the world for extreme cold in interior parts of the state. This analysis will show how the outbreak developed and how it was able to persist for a prolonged period of time.
1. Summary of temperatures and records from the outbreak

The following chart is a breakdown of temperatures and extremes at Anchorage during the two week Arctic outbreak.

*-Indicates a record low value for that particular date.
**-Indicates tying or setting of the lowest temperature of this decade (2000-2009).
Though it is arbitrary as to when the outbreak began and ended based on the numbers, the temperature at Anchorage dropped below zero degrees during the evening hours of December 29th, and remained below zero until January 8th, except for a one-hour period during the mid-afternoon of January 5th when the temperature briefly made it to 0.4 degrees. This represented the longest streak of sub-zero days since 30 January - 5 February 1999.
Additionally, the eleven-day streak (29 Dec - 8 Jan) with the minimum temperature falling to -10 degrees or lower from the official reporting station at the National Weather Service office on Sand Lake Road was the longest such streak since 17-29 December 1961. Therefore, while there were no record low minimum temperature values set at the official temperature station in Anchorage, the duration of the cold in terms of minimum temperatures at or below -10 degrees was the longest such stretch in 47 years.
| <urn:uuid:9aae393d-86f4-4331-b269-7344c5e77b24> | 2.78125 | 406 | Truncated | Science & Tech. | 38.90308 | 333 |
Closing in on the Planck constant
Sep 25, 1998
Physicists in the US have made the most accurate measurements ever of the Planck constant, h. Edwin Williams and colleagues at the National Institute of Standards and Technology in Gaithersburg, Maryland, measured h by comparing the voltage needed to control the velocity of a coil moving vertically in a magnetic field with the current that has to be passed through the coil to balance gravity in the same magnetic field. The measurement could lead to a new reference standard for the kilogram (Phys. Rev. Lett. 81 2404).
The kilogram is currently defined by a platinum-iridium alloy maintained at the Bureau International des Poids et Mesures (BIPM) in Paris and six official copies. However, the official mass of the standard kilogram has been known to vary with time, hence the interest in defining the kilogram in terms of fundamental constants like h.
The highly stable magnetic field needed for the experiment is generated by a superconducting magnet that has been cooled to 4 kelvin. The experiment also uses two induction coils: the lower coil is fixed to the support structure of the experiment, while the upper coil can move. This upper coil is also attached to a wheel balance above the experiment. In the first stage of the measurement, the mass balance is empty and a small force is applied to the upper coil, forcing it to move at 2 mm/s. The researchers found that this generated a voltage of 1.018 ± 0.001 V across the moving coil. In the second stage of the experiment, a 500 g countermass is balanced by a -10.18 mA current in the induction coil. Both stages were repeated over many months to obtain a value of 6.62606891(58) x 10⁻³⁴ joule seconds for the Planck constant.
This result - which corresponds to an accuracy of 9 parts in 10⁸ - is a factor of 15 better than previous measurements. The team hope to improve on this result by another factor of 10 by modifying their experiment. | <urn:uuid:cd89c0d8-ef60-4435-b4e8-e79d00dd783c> | 3.84375 | 420 | News Article | Science & Tech. | 57.31381 | 334 |
Ultrafast electron microscope makes movies
Dec 8, 2006
Physicists have created a new form of electron microscopy that can make "movies" of atoms as they undergo ultra-rapid chemical or structural transitions. Ahmed Zewail and colleagues at the California Institute of Technology in the US have used coincident electron and laser pulses to follow vanadium and oxygen atoms as they rearranged themselves on a vanadium oxide surface over the course of several picoseconds. The researchers say that the technique could also be used to study a wide range of ultrafast biological and physical phenomena (Proc. Natl. Acad. Sci. 103 18427).
Electron microscopes have better resolution than optical microscopes because high-energy electrons have a much shorter wavelength than light. The resolution can be further improved by using coherent electron wavepackets, which can contain as few as one electron. The wavelengths of these packets are much smaller than the space between individual atoms and can be brought to a very sharp focus, allowing objects to be imaged with atomic-scale resolution. The packets are of extremely short duration and this can be exploited to take “snapshots” of atoms as they undergo structural or chemical transitions.
In 2005, Zewail and colleagues used coherent electron packets to take single snapshots of a number of materials and biological samples. Now the researchers have further refined their technique to take a time sequence of images that allowed them to watch vanadium and oxygen atoms rearrange themselves in a process that can take as little as 100 femtoseconds (10⁻¹³ seconds).
The timing sequence is generated by femtosecond laser pulses as illustrated in the figure "Ultrafast microscope". Each pulse is split into two pulses – one is used by the microscope to create the electron pulse and the other is used to heat the sample. According to Zewail, the crucial and most difficult part of the technique is coordinating the arrivals of the laser and electron pulses at the sample with an accuracy of just a few femtoseconds. This is particularly difficult because the laser pulse travels at the speed of light, while the electron pulse lags behind at about two thirds the speed of light.
The coincident laser pulse is used to heat the sample and drive a transition from a low-temperature crystal structure to a high-temperature structure. By changing the delay between the laser and electron pulses in regular time steps, the researchers were able to take snapshots of the atoms at different sample temperatures.
Zewail and colleagues found that vanadium oxide undergoes a “first-order” phase transition from a low-temperature “monoclinic” phase to a high-temperature tetragonal “rutile” phase at around 67°C. This result is a breakthrough in itself because the precise nature of this transition has been a mystery since the material was discovered almost a century ago.
The team now plans to try their technique on other materials; “the scope is very wide from semiconductors and metals to organics and biological assemblies,” says Zewail. | <urn:uuid:2f1970f4-49fa-40b8-b5c8-2587723cbf65> | 3.625 | 638 | News Article | Science & Tech. | 28.197514 | 335 |
The coral is at the mercy of natural circumstances like water quality and starfish population, but can be aided by human intervention. The first step to take will be decreasing population and CO2 emissions.
Coral cover in the Great Barrier Reef has dropped by more than half over the last 27 years, according to scientists, a result of increased storms, bleaching and predation by population explosions of a starfish which sucks away the coral’s nutrients.
At present rates of decline, the coral cover will halve again within a decade, though scientists said the reef could recover if the crown-of-thorns starfish can be brought under control and, longer term, global carbon dioxide emissions are reduced.
“This latest study provides compelling evidence that the cumulative impacts of storms, crown-of-thorns starfish (Cots) and two bleaching events have had a devastating effect on the reef over the last three decades,” said John Gunn, chief executive of the Australian Institute of Marine Science.
Coral reefs are an important part of the marine ecosystem as sources of food and as protection for young fish. They are under threat around the world from the effects of bleaching, due to rising ocean temperatures, and increasing acidification of the oceans, which reduces the corals’ ability to build their calcium carbonate structures.
The Great Barrier Reef is the most iconic coral reef in the world, listed as a Unesco world heritage site and the source of $A5bn (£3.2bn) a year to the Australian economy through tourism. The observations of its decline are based on more than 2,000 surveys of 214 reefs between 1985 and 2012. The results showed a decline in coral cover from 28% to 13.8% – an average of 0.53% a year and a total loss of 50.7% over the 27-year period. The study was published on Monday in the Proceedings of the National Academy of Sciences journal (subscription).
Two-thirds of the coral loss has occurred since 1998 and the rate of decline has increased in recent years, averaging around 1.45% a year since 2006. “If the trend continued, coral cover could halve again by 2022,” said Peter Doherty, a research fellow at the institute.
Tropical cyclones, predation by Cots, and bleaching accounted for 48%, 42%,and 10% of the respective estimated losses. In the past seven years the reef has been affected by six major cyclones. Cyclone Hamish, for example, ran along the reef, parallel to the coast for almost 930 miles (1,500km), leaving a trail of destruction much greater than the average cyclone, which usually crosses the reef on a path perpendicular to the coast.
The starfish problem was first recorded in 1962 at Green Island off Cairns. “When we say outbreaks, we mean explosions of Cots populations to a level where the numbers are so large that they end up eating upwards of 90% of a reef’s coral,” Gunn said. “Since 1962 there have been major outbreaks every 13-14 years.”
The evidence suggests that outbreaks of Cots start two or three years after major floods in northern rivers.
In September, scientists at the International Union for Conservation of Nature announced that Caribbean coral reefs are on the verge of collapse, with less than 10% of the reef area showing live coral cover. The collapse was due to environmental issues, including over-exploitation, pollution and climate change.
David Curnick, marine and freshwater programme co-ordinator at the Zoological Society of London, said many of the most endangered coral species around the world were also under severe pressure from the aquarium trade.
“Corals are notoriously hard to propagate in captivity and therefore the trade is still heavily dependent on harvesting from the wild.”
He said the results of the Great Barrier Reef survey were not surprising and the challenge for conservationists was to limit the localised threats to give reefs a chance to recover and develop resilience against the effects of climate change. “This is challenging but entirely achievable and there are many community-led projects around the world demonstrating this.”
Corals can recover if given the chance. But this is slow – in the absence of cyclones, Cots and bleaching, the Great Barrier Reef can regrow at a rate of 2.85% a year, the scientists wrote. Removing the Cots problem alone would allow coral cover to increase at 0.89% a year.
Reducing Cots means improving water quality around the rivers at the northern end of the reef to reduce agricultural run-off – high levels of nutrients flowing off the land feed and allow high survival of Cots larvae. Another option is some form of biological control of populations – Gunn said there were promising results from research on naturally occurring pathogens that could keep Cots in check, but it was not ready to be applied in the field.
He said the future of the Reef lay partly in human hands. “We can achieve better water quality, we can tackle the challenge of crown-of-thorns, and we can continue to work to ensure the resilience of the reef to climate change is enhanced. However, its future also lies with the global response to reducing carbon dioxide emissions. The coral decline revealed by this study – shocking as it is – has happened before the most severe impacts of ocean warming and acidification associated with climate change have kicked in, so we undoubtedly have more challenges ahead.” | <urn:uuid:abe92949-5b38-4785-8ba0-6474ec1a9938> | 3.90625 | 1,145 | News Article | Science & Tech. | 45.925316 | 336 |
IML-1: International Microgravity Laboratory
- Title IML-1: International Microgravity Laboratory
- Released 01/01/1992
- Length 00:16:05
- Language English
- Footage Type Documentary
A 16:07 minute description of Space Lab, the world's first reusable space laboratory built in Europe and launched in December 1983 and the international collaborative project IML, the International Microgravity Laboratory. Through interviews with ESA astronaut Ulf Merbold and ESA Project Scientist Claude Brillouet the video explains what microgravity is and why it is important for research in many different scientific areas including astronomy, Earth observation, biology and human physiology. Research was also carried out in the IML on the "critical point", the transitional state of certain materials such as the moment when ice changes to water. | <urn:uuid:d5ac6cf9-7c7b-4cd5-b879-aede3d52f6c2> | 2.53125 | 174 | Truncated | Science & Tech. | -0.670341 | 337 |
Part of twisted.python
These are methods on which you can register pre-call and post-call external functions to augment their functionality. People familiar with more esoteric languages may think of these as "method combinations".
This could be used to add optional preconditions, user-extensible callbacks (a-la emacs) or a thread-safety mechanism.
The four exported calls are:
All have the signature (class, methodName, callable), and the callable they take must always have the signature (instance, *args, **kw) unless the particular signature of the method they hook is known.
Hooks should typically not throw exceptions; however, no effort will be made by this module to prevent them from doing so. Pre-hooks will always be called, but post-hooks will only be called if the pre-hooks do not raise any exceptions (they will still be called if the main method raises an exception). The return values and exception status of the main method will be propagated (assuming none of the hooks raise an exception). Hooks will be executed in the order in which they are added.
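A minimal usage sketch based only on the signatures documented here (the Counter class and the two hook functions are illustrative, and the import path assumes this module is importable as twisted.python.hook):

```python
from twisted.python import hook

class Counter:
    """An ordinary class whose method we want to wrap with hooks."""
    value = 0
    def increment(self, amount):
        self.value += amount
        return self.value

def before(instance, *args, **kw):
    # Pre-hook: called before Counter.increment with the same arguments.
    print("about to increment by", args)

def after(instance, *args, **kw):
    # Post-hook: called after Counter.increment (if no pre-hook raised).
    print("new value is", instance.value)

hook.addPre(Counter, "increment", before)
hook.addPost(Counter, "increment", after)

c = Counter()
c.increment(3)      # both hook messages are printed around the real call

# The matching calls undo the wrapping:
hook.removePre(Counter, "increment", before)
hook.removePost(Counter, "increment", after)
```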
Class HookError: An error which will fire when an invariant is violated.
Function addPre: hook.addPre(klass, name, func) -> None
Function addPost: hook.addPost(klass, name, func) -> None
Function removePre: hook.removePre(klass, name, func) -> None
Function removePost: hook.removePost(klass, name, func) -> None
Function PRE: (private) munging to turn a method name into a pre-hook-method-name
Function POST: (private) munging to turn a method name into a post-hook-method-name
Function ORIG: (private) munging to turn a method name into an `original' identifier
Function _XXX: String manipulation garbage.
Function _addHook: (private) adds a hook to a method on a class
Function _removeHook: (private) removes a hook from a method on a class
Function _enhook: (private) causes a certain method name to be hooked on a class
Function _dehook: (private) causes a certain method name no longer to be hooked on a class
addPre(klass, name, func): Add a function to be called before the method klass.name is invoked.

addPost(klass, name, func): Add a function to be called after the method klass.name is invoked.

removePre(klass, name, func): Remove a function (previously registered with addPre) so that it is no longer executed before klass.name.

removePost(klass, name, func): Remove a function (previously registered with addPost) so that it is no longer executed after klass.name. | <urn:uuid:baa0a3d3-1319-453e-8d47-16c383b535f2> | 2.6875 | 613 | Documentation | Software Dev. | 45.147802 | 338 |
posted last year in Dev Platform category by Dongsun Choi
There are various techniques to improve the performance of your Java application. In this article I will talk about Statement Pooling Configuration and its effect on Garbage Collection process.
Statement Pooling improves the performance of an application by caching SQL statements that are used repeatedly. Such a caching mechanism allows frequently used statements to be prepared only once and reused multiple times, thus reducing the overall number of times the database server has to parse, plan, and optimize these queries. A well-configured number of statements (maxStatements) to be cached can be as good as tuning the Garbage Collection. Now let's see how Statement Pooling can affect the Garbage Collection.
Why Check the Number of Statements in the Pool?
Often the size of the JDBC statement pool is set to the default value. Using the default value, of course, does not usually lead to any special issue. But a well-configured maxStatements value can be as effective as GC tuning. If you are using the default maxStatements value and would like to optimize the use of memory, let's think about the correct statement pool value before attempting GC tuning.
As was discussed in Understanding Java Garbage Collection, the weak generational hypothesis (most objects quickly become unreachable, and references from old objects to new objects are rare) was used as a precondition when designing the garbage collectors in Java. For the majority of NHN web services there should be a response within 300 ms at the latest, unless it is a special case. Therefore, NHN web services fit this hypothesis even better than general stand-alone applications.
The GC Process between HTTP Request and Response
When developing a web service using web containers like Tomcat and other frameworks, the lifespan of objects created by a developer tends to be either very short or very long. Web developers usually write code like Interceptors, Actions, BOs, or DAOs (BOs and DAOs are generated and used as singletons from applicationContext in Spring, and are not the target of GC). The objects generated from this code stay alive only for the brief interval between the HTTP request and its response. For this reason, such objects are usually collected during Young GC.
There are also objects, such as singleton objects, that stay alive long enough to exist for the lifecycle of Tomcat. Such objects will be promoted to the old area soon after Tomcat starts running. Yet, when continuously monitoring web applications through jstat and the like, there are always some objects promoted to the old area during Young GC. These objects are usually used after being stored in the cache used for improving the performance of frameworks in most of the containers and projects. Whether the cached objects become the target of GC or not is determined by their cache hit ratio, not their age, so unless the hit ratio is 100%, they cannot avoid being promoted to the old area, even when the Young GC cycle is set to be long.
Among these caches, statement pooling affects the memory usage the most. If you are using iBatis, you will be using statement pooling, since iBatis processes all SQLs as PreparedStatement objects. If the statement pool is smaller than the number of SQLs being used, the cache hit ratio will decrease and result in cache maintenance cost: objects evicted from the pool become unreachable in the old area and are eventually collected by GC, then are regenerated during the next HTTP request, only to be cached and promoted to the old area again. The full GC cycles are affected by this process.
Size of the Statement Objects
It would be safe to say that the size of a single statement object is proportional to the length of the SQL code processed by the same statement. Even for a long and complex SQL, the size of the object should be around 500 bytes. The object's small size would seem to have little effect on the full GC cycles, but such an assumption would be incorrect.
When you look at the JDBC specifications, each connection has its own statement pool (maxStatementsPerConnection), as described in Figure 1 below. So, although a statement object is as small as 500 bytes, if there are many connections, the statement caches may occupy a proportional amount of the heap.
Figure 1: Relationship between the Connection and the Statement.
(Though the statement has a ResultSet, it should be clarified that the ResultSet is not an object for caching. The ResultSet reference is set to null when rs.close() is called by iBatis, and the object is then collected from the young area during young GC.)
The Effect of Statement Pool's Cache Hit Ratio on the Full GC
A simple test program was created to assess the effect of cache hit ratio on the full GC. One cache hit ratio was set to 100% while the other was set to 50%. When the same amount of load was applied, the results presented in Table 1 and 2 were obtained.
In both cases, the occurrences of young GC were very similar but the results for the full GC was different. When the cache hit ratio was 100%, full GC occurred only once, because the number of objects promoted to the old area during young GC was small. When the ratio was 50%, full GC occurred 4 times because the number of statement objects promoted to the old area during young GC was high, as the objects were cached in the statement pool, then removed from the pool in LRU way, then cached again at the next request.
Table 1. Cache hit ratio = 100%.
Table 2. Cache hit ratio = 50%.
I would like to add one more thing. When the cache hit ratio is 50%, it violates the 2nd category of weak generational hypothesis I introduced previously. When low cache hit ratio causes frequent pool registration and subsequent removal, it means the statement object generated in the young area is being referenced in the pool from the old area, which leads to additional strain during GC because the card marking technique is used to manage the references separately.
In Lucy (NHN's internal Java Framework), the maxStatements value for statement pooling in Oracle and MySQL is 500. In most cases, 500 should be enough. However, when more SQL is being used, increasing the default value to meet such demand would be a way to improve the system efficiency (when using $(String replacement) for query on iBatis for the reason of table partitioning and the like, the number of queries must be multiplied by the number of partitioned tables).
However, when the default value is higher than necessary, this leads to a different problem. A higher value means more memory usage and higher likelihood of an Out Of Memory (OOME) occurrence.
In a situation where the number of SQLs are 10,000 and the number of connections are 50, then the total size of statement objects is about 250 MB. (500 byte * 50 * 10,000 = 250 MB). It should be easy to determine the likelihood of OOME occurrence by checking the Xmx configuration for the service in use.
What strategy do you follow to determine the correct number of statements to be pooled? Share your experience in the comments below.
By Dongsun Choi, Senior Engineer at Game Service Solution Team, NHN Corporation. | <urn:uuid:58e286fd-915a-44e4-a97d-e39ff2786f38> | 2.515625 | 1,492 | Documentation | Software Dev. | 47.591944 | 339 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Monday, 18 February 2013
Heavy metal music fans in a mosh pit act like atoms in a gas - a finding that could advance emergency evacuation design and planning.
Wednesday, 21 September 2011
An Australian seismologist says this week's trial of Italian scientists for failing to warn of a devastating earthquake could muzzle experts from sharing their knowledge in the future.
Friday, 27 August 2010
Australia's leading body responsible for monitoring space weather has dismissed claims that a massive solar storm could "wipe out the Earth's entire power grid".
Monday, 12 July 2010
Australian researchers develop software to let mobile phones communicate with each other where there is no reception.
Thursday, 18 February 2010
Society needs to learn from resilient ecosystems if it is to better cope with unanticipated shocks in the future, say experts.
Tuesday, 10 February 2009
An Australian fire-behaviour specialist who helped authorities track the infernos, says the golden rule of surviving a bushfire - evacuate early or fight to the bitter end - still stands, despite the weekend's high death toll.
Monday, 9 February 2009
Australians remain unprepared to deal with bushfires despite a long history of loss and devastation from natural disasters, according to some of the country's leading bushfire researchers.
Monday, 10 March 2008
We can expect an average three catastrophic, magnitude 9 or greater earthquakes around the world each century, according to a new study. | <urn:uuid:3ab4852f-3f65-46af-a429-f89b21ced198> | 2.703125 | 305 | Content Listing | Science & Tech. | 29.245053 | 340 |
Climate Change May Lead to Fewer -- But More Violent -- Thunderstorms Tuesday, July 10, 2012
Number of flash floods and forest fires could increase with temperature, says TAU researcher
Researchers are working to identify exactly how a changing climate will impact specific elements of weather, such as clouds, rainfall, and lightning. A Tel Aviv University researcher has predicted that for every one degree Celsius of warming, there will be approximately a 10 percent increase in lightning activity.
This could have negative consequences in the form of flash floods, wild fires, or damage to power lines and other infrastructure, says Prof. Colin Price, Head of the Department of Geophysics, Atmospheric and Planetary Sciences at Tel Aviv University. In an ongoing project to determine the impact of climate change on the world's lightning and thunderstorm patterns, he and his colleagues have run computer climate models and studied real-life examples of climate change, such as the El Nino cycle in Indonesia and Southeast Asia, to determine how changing weather conditions impact storms.
An increase in lightning activity will have particular impact in areas that become warmer and drier as global warming progresses, including the Mediterranean and the Southern United States, according to the 2007 United Nations report on climate change. This research has been reported in the Journal of Geophysical Research and Atmospheric Research, and has been presented at the International Conference on Lightning Protection.
From the computer screen to the real world
When running their state-of-the-art computer models, Prof. Price and his fellow researchers assess climate conditions in a variety of real environments. First, the models are run with current atmospheric conditions to see how accurately they are able to depict the frequency and severity of thunderstorms and lightning in today's environment. Then, the researchers input changes to the model atmosphere, including the amount of carbon dioxide in the atmosphere (a major cause of global warming) to see how storms are impacted.
To test the lightning activity findings, Prof. Price compared their results with vastly differing real-world climates, such as dry Africa and the wet Amazon, and regions where climate change occurs naturally, such as Indonesia and Southeast Asia, where El Nino causes the air to become warmer and drier. The El Nino phenomenon is an optimal tool for measuring the impact of climate change on storms because the climate oscillates radically between years, while everything else in the environment remains constant.
"During El Nino years, which occur in the Pacific Ocean or Basin, Southeast Asia gets warmer and drier. There are fewer thunderstorms, but we found fifty percent more lightning activity," says Prof. Price. Typically, he says,we would expect drier conditions to produce less lightning. However, researchers also found that while there were fewer thunderstorms, the ones that did occur were more intense.
Fire and flood warning
An increase in lightning and intense thunderstorms can have severe implications for the environment, says Prof. Price. More frequent and intense wildfires could result in parts of the US, such as the Rockies, in which many fires are started by lightning. A drier environment could also lead fires to spread more widely and quickly, making them more devastating than ever before. These fires would also release far more smoke into the air than before.
Researchers predict fewer but more intense rainstorms in other regions, a change that could result in flash-flooding, says Prof. Price. In Italy and Spain, heavier storms are already causing increased run-off to rivers and the sea, and a lack of water being retained in groundwater and lakes. The same is true in the Middle East, where small periods of intense rain are threatening already scarce water resources.
For more environment and ecology news from Tel Aviv University, click here. | <urn:uuid:b7e1cd3d-cfd1-4c46-9cf1-aa54d40e8c89> | 3.3125 | 751 | News (Org.) | Science & Tech. | 31.281752 | 341 |
News - Ocean Observations
Photo credit: Peter Rejcek
Posted: July 7, 2010
Courtesy: Antarctic Sun
By Peter Rejcek
Underwater robots and marine animals outfitted with scientific sensors are part of a proposed strategy for monitoring polar oceans into the 21st century, particularly a stretch of sea along the western Antarctic Peninsula, which is undergoing rapid climate changes.
The proposal comes in the June 18, 2010 issue of the journal Science by a group of scientists who conduct research in Antarctica, most of whom currently work on the Palmer Long Term Ecological Research (PAL LTER) program.
Since 1993, the PAL LTER has monitored the region near the U.S. Antarctic Program ’s Palmer Station , close to the northern end of the peninsula, mainly on an annual ship-based survey each January. The scientists suggest profound changes to the environment necessitate new ways to make measurements of the ocean and atmosphere.
For example, midwinter surface temperatures have increased by about 6 degrees centigrade in the past 50 years. Eighty-seven percent of the western peninsula glaciers are in retreat, and the sea ice season has shortened by nearly 90 days.
In their report, the scientists describe a multi-faceted approach to ocean observation, using glider robots that measure ocean characteristics continuously for weeks at a time and tourist vessels, ferries, and other “ships of opportunity†outfitted with chemical and biological sensors.
In the last few years, the PAL LTER program added autonomous underwater vehicles called Slocum gliders through a group from Rutgers University led by Oscar Schofield , who is the lead author on the review paper in Science.
“In just the first few weeks that we had the glider out last year, we collected as much data as the cruises had collected since 1993,†said Hugh Ducklow , a co-author of the Science paper and lead principal investigator for the PAL LTER, in an earlier interview with the Sun.
The authors also suggest outfitting oceanographic instruments on animals such as elephant seals and penguins to provide information on animal behavior and oceanographic conditions. Recent tagging of Adélie penguins nesting near Palmer Station has helped scientists understand the link between nutrient upwelling in underwater canyons and where penguins forage.
“We’re looking for ways to use our existing capabilities to obtain data,†said Ducklow, director of the Ecosystems Centerat the Marine Biological Laboratory (MBL), in a recent press release from MBL. “Our goal is to make things cheaper and get a lot of them out there. This will help to narrow down uncertainty about the effects of warming on the polar oceans in the coming decades to century.â€
The authors concede that deployment of the observational systems will “require international cooperation given the scale of effort required; however, because many of the technologies have been demonstrated to be effective it is not unreasonable to believe that these networks could be deployed in five to 10 years.
“The benefits of better understanding the marine ecosystem, and being better able to predict, protect, and make use of its resources, are strong drivers to make this a reality.†| <urn:uuid:5d47cba1-4b20-45d1-ab44-d623e4d5f02d> | 2.90625 | 681 | News Article | Science & Tech. | 29.755172 | 342 |
Solar storms active, but normal
Illustration showing blasts of particles and magnetic field from the Sun that impact the magnetosphere, the magnetic bubble around the Earth (courtesy NASA).
Outbursts observed on the sun last week do not portend new problems for GPS reception or other systems as solar flares and eruptive events known as coronal mass ejections fire up during an increasingly active phase, said a National Oceanic and Atmospheric Administration space weather expert.
Widespread reports of last week’s solar activity, following a very tranquil period, may have created an impression that solar storms were unusually powerful, the expert said. Magnetic fields on the sun’s surface have intensified, showing up as increased sunspots and generating eruptive activity as a quiet portion of a well-known 11-year sunspot cycle ends.
“We have come out of such a quiet period that it’s pretty interesting from that point of view,” said Joseph Kunches, a space weather scientist for NOAA’s Space Weather Prediction Center. “The last outbreak was back in 2006. The sun has been pretty dormant.”
A ball of hot gas, the sun does not rotate as a rigid body. Turbulent effects of that uneven rotation can produce explosive results on the surface, Kunches said.
The recently noted outbursts featured a coronal mass ejection measured at Level 3 on a 1 to 5 scale for solar storms. Putting that in context, Level 3 events occur approximately 200 times during the 11-year cycle, with most outbursts clustered near the cycle’s peak, Kunches said.
Although a solar flare’s “lightning-bolt-like quick indication” can be the earliest evidence of a sudden release of energy on the sun, Kunches said, a more subtle development scientists monitor is whether a portion of the sun’s outer mass--its corona--has been blown off into space.
Such coronal mass ejections are “very directional,” sending a cloud of charged particles hurtling away from the sun. When a coronal mass ejection is observed “right in the middle of the sun,” watch for a plasma cloud to head for the earth, taking between 30 and 72 hours to arrive.
“Last week we had three eruptions from the center,” he said. “Some time later we felt the effects of those plasma fields that disturbed and energized the earth’s magnetic field.” Spaceweather.com reported that an Aug. 9 flare emanating from sunspot 1263 was followed by brief disruption of communications “at some VLF and HF radio frequencies.”
The potential for GPS interference exists because atmospheric noise generated by a coronal mass ejection can drown out the GPS signal, which may be unable to “punch through this mush of electrons.” Or, a GPS unit may seem to be working properly, but its indications are off by up to 50 meters, Kunches said. He consults this real-time map to monitor electron activity in the atmosphere.
Perhaps it was a combination of the solar cycle and the news cycle that generated the interest in solar activity, which, he said, “just wasn’t that big a deal last week.”
On Aug. 10, the Space Weather Prediction Center’s three-day report of solar and geophysical activity reported high solar activity for Aug. 8 and 9, but predicted low to moderate activity for the following three days as an active area rotated around the sun’s west limb.
August 10, 2011 | <urn:uuid:a44388a2-6fcf-47b4-937d-fc996ba7b47c> | 3.15625 | 753 | News Article | Science & Tech. | 47.963629 | 343 |
They are both nest-building social insects, but paper wasps and honey bees organize their colonies in very different ways. In a new study, researchers report that despite their differences, these insects rely on the same network of genes to guide their social behavior.
The study appears in the Proceedings of the Royal Society B: Biological Sciences.
Honey bees and paper wasps are separated by more than 100 million years of evolution, and there are striking differences in how they divvy up the work of maintaining a colony, said University of Illinois entomology professor Gene Robinson, who led the study with postdoctoral researcher Amy Toth.
"Honey bees have a sharp division of labor between queens, which reproduce, and workers, which care for the brood and forage for food, while among paper wasps social roles are much more fluid," he said. "And yet the same genes can be used by these different organisms to do similar kinds of things. This is the genetic toolkit idea: The same genetic elements are used for different types of division of labor."
A genetic toolkit already has been found for physical traits, such as the development of eyes, said Robinson, who is also a professor in the Institute for Genomic Biology. For example, the same gene, called PAX-6, is involved in eye development in mammals and insects, even though it is virtually certain that these structures did not evolve from a similar structure in a common ancestor.
For the new study, the researchers compared the activation of genes in the brains of four groups of female paper wasps (Polistes metricus) that have different roles in the nest, with some more active in reproduction and others more active in provisioning the brood.
The purpose of the study was to determine if differences in brain gene activity between the wasps rely on the same networks of genes that in the honey bee (Apis mellifera) drive their division of labor.
A previous study of paper wasps by Robinson, Toth and their colleagues obtained a partial sequence of the wasp genome and looked at the expression of 32 genes. That analysis, published in Science in 2007, showed that – as in honey bees – most of the targeted genes are activated differently in different groups of paper wasps. But those genes were hand-picked because they were important to honey bees, Robinson said. For this reason, the team wanted to take a second look at the broad array of genes in the wasp – to be sure that the pattern they had identified was indeed special to wasps as well as bees.
Crop sciences professor Matt Hudson, the team's bioinformatics expert, used a computer algorithm to mine the sequencing data from the previous study to design a microarray. The microarray allowed the researchers to simultaneously measure those genes that were most active in the paper wasp brain.
"We expect that Polistes has got somewhere in the range of 10,000 genes, and we expect that at least half of them, but not all of them, would be expressed in the brain," said Hudson, who also is a professor in the Institute for Genomic Biology. The effort identified more than 4,900 genes that were active in the wasp brain.
The new analysis confirmed that the same genes and gene regulators that are important to the division of labor within a honey bee hive also are used by the wasps as they take on different roles in the nest. | <urn:uuid:c515e6d2-19b4-4614-92dc-90e2a7529cd6> | 3.65625 | 698 | News Article | Science & Tech. | 34.793888 | 344 |
Observations on zugunruhe in spring migrating Eared Grebes.
|Abstract:||About 200 North American Eared Grebes (Podiceps nigricollis californicus) at Tule Lake Refuge in northern California were observed engaging in successive waves of mass pattering and pattering flights on 25 May 2011. Most grebes present in a part of a canal were involved in this activity. Counts of grebes on the morning of 26 May suggest an important portion of the Eared Grebes seen in pattering could have left the area over night. The behavior was characterized as zugunruhe. Directed mass pattering of Eared Grebes may contribute to synchronization of the onward migration of the birds involved.|
Migratory birds (Research)
Animal flight (Research)
|Publication:||Name: The Wilson Journal of Ornithology Publisher: Wilson Ornithological Society Audience: Academic Format: Magazine/Journal Subject: Biological sciences Copyright: COPYRIGHT 2012 Wilson Ornithological Society ISSN: 1559-4491|
|Issue:||Date: March, 2012 Source Volume: 124 Source Issue: 1|
|Topic:||Event Code: 310 Science & research|
|Geographic:||Geographic Scope: United States Geographic Code: 1USA United States|
North American Eared Grebes (Podiceps nigricollis californicus) are
seldom seen in flight, except when they migrate (Bent 1919, Gaunt et al.
1990). The migration of the species has been well studied (Storer and
Jehl 1985, Gaunt et al. 1990, Jehl 1997, Cullen et al. 1999, Jehl and
McKernan 2002, Jehl and Henry 2010). Cullen et al. (1999) indicate
migration flights begin around dusk and end before dawn. Jehl and Henry
(2010) note strict correspondence of departure with near-total darkness.
Grebes tend to gather as the time for departure nears (Jehl and McKernan
2002). Predeparture activities include group diving, and submerging and
surfacing in near unison. A unique call is given as grebes prepare to
depart and immediately before actual take-off (Jehl and Henry 2010).
Daytime flights are possibly observed only when grebes rebuild their flight muscles prior to migration when they may perform one or two short practice flights (Jehl and Henry 2010) or race across the surface in short practice flights, often in small groups (Jehl and McKernan 2002). I was surprised to observe a mix of pattering and flight by larger groups of Eared Grebes in Northern California during daylight conditions. I describe these common pattering flight maneuvers and discuss their possible meaning.
A study of courtship of Eared Grebes was undertaken at Upper Klamath Lake, Oregon, and Lower Klamath Refuge and Tule Lake Refuge, both in northern California, from 14 to 27 May 2011. This region is known to support thousands of Eared Grebes each year for nesting, water levels permitting. The California refuges hosted 7,397 and 3,700 nests, respectively, in 2003 and 2004 (Shuford et al. 2006). Fieldwork was from 0700 to 1700 hrs each day using a car as a blind. The car was parked at suitable places along roads near bodies of water and remained immobile for up to 3 hrs. The behavior and displays of grebes were documented either by photograph, video film or immediate voice recording. All observations of pattering flights are from Tule Lake Refuge, part of the Klamath Basin National Wildlife Refuges, an artificial water impoundment of mostly open water covering ~5,200 ha at an altitude of 1,200 m and surrounded by croplands. The observations were in an area called the English Channel (41[degrees] 51' 202 N, 121[degrees] 29' 727 W) in the central part of the wildlife tour into the refuge. This is an L-shaped canal, <50 m in width. It opens at its northern end into large sump lA, an open and shallow area of the lake. It takes a left turn after ~1.6 km in a straight line from north to south (NS canal or NS part of the English Channel) and continues east for another 0.5 km (EW canal or EW part of the English Channel) until ending at a dam-levee that separates it from the adjacent larger sump 1B (Fig. 1). The entire canal is devoid of emerging vegetation.
I differentiate between pattering (a grebe with flapping wings runs with paddling feet or even partially glides over the water surface, but remains in constant contact with the water), pattering flight (after an initial pattering, a grebe is airborne for a distance limited to a few meters during which it does not touch the water surface), and real flight (the distance covered while airborne exceeds 10 m). It is well established that Eared Grebes use pattering in the retreat display and during escape/pursuit or more generally during aggression (McAllister 1958, Cullen et al. 1999); these occurrences are not included. My objectives in this paper are to provide a full description of pattering and pattering flights by larger numbers of grebes, and to discuss possible reasons for their occurrences.
Observations in the southern English Channel on 25 May started at 0900 hrs. Over 200 Eared Grebes were scattered partially in loose groups all over the EW part of the English Channel around midday when about three quarters of them engaged in pattering. The grebes did so in consecutive waves, all into a western direction towards the connection to the NS canal. The sudden take-off by one or two grebes seemed to cause others in their immediate vicinity and on their way to move in the same direction. Groups of 10-30 birds pattered over a short distance (20-30 m), some briefly loosing contact with the water surface in a pattering flight. Grebes getting briefly airborne possibly did so to avoid collision with conspecifics that remained stationary on the water surface. Grebes landed ahead of others that started similar maneuvers in their wake, perhaps carrying along some of those that had just stopped pattering. A few additional waves of pattering were launched. Some birds dived after landing; others elevated their necks, remained alert, and looked around without changing their westward orientation. Most of the population, including subgroups closer to the NS canal which were not observed to patter, was swimming in the direction of the NS canal. The eastern and central parts of the EW canal were rather empty of Eared Grebes after some 2-3 rain, leaving only a few Westem (Aechmophorus occidentalis) and Clark's grebes (A. clarkii) and a few ducks remaining. Fewer than 100 Eared Grebes were still swimming in the western part of the EW canal towards the connection with the NS canal when they encountered about 40 birds swimming in a group to reenter the EW canal. A rough count less than 10 min later indicated that >200 Eared Grebes had again spread over this canal.
Pattering and pattering flights started anew only ~20 min after the start of the first general movement by the Eared Grebes. Take-off by one or two Eared Grebes incited others in their surroundings to join as before. The birds moved westward in several waves and continued swimming into the same direction after landing. More grebes left the EW canal where only about 30 remained, all towards its western end. A first group of swimming grebes returned ~1 min later. It was followed by other loose groups. I counted 130 grebes 5 rain later and soon >240 birds were again present inside the EW canal.
A longer period without group pattering, but with continuous calling, occasional displays and much surface feeding on phantom midges (Chaoborus crystillinus) followed until ~1400 hrs. Individual grebes performed feeding dives, but no group diving, or submerging and surfacing in near unison was observed. The general pattering in waves and westward swimming towards the connection with the NS canal started again and most Eared Grebes finally left the EW canal. The first grebes had turned and swam to return to the EW canal when a sudden simultaneous eastward pattering of >50 re-entering grebes occurred. Two or three more waves by other groups followed immediately. Five minutes later, 232 grebes were counted inside the EW canal.
Only the continuous and contiguous calls of the birds were heard for ~20 min. Ten birds then initiated a fourth round of pattering in waves. This time, the grebes had no common general direction. The grebes more in the central part of the observed area moved towards the dam, those already closer to the eastern end pattered into a more southwestward to westward direction. The population present divided into two groups. About 100 grebes were clustered near the dam and another 100 were scattered over the upper western third of the EW canal. The space in between both groups remained mostly empty. The western group started immediately to swim eastward while the eastern group slowly dispersed. The groups soon melted and spread over the empty space that had separated them.
Perhaps five additional pattering flights of up to 4-5 grebes were observed in between the different mass pattering and pattering flights. It was not known whether these were premature attempts to initiate a wave or whether they were unrelated to the mass movements.
The observations ended at ~1700 hrs and 257 grebes were counted in the EW canal (26 in the connecting corner square to the NS canal), 65 were present in the lower half of the NS canal and 347 in the upper half. Only five additional Eared Grebes were detected at the mouth to sump 1A. Other parts of the sump close to the English Channel were empty of Eared Grebes. A count of the birds at 0700 hrs on the following day totaled exactly 400 individuals, 269 less than the previous count. Only 77 grebes were observed inside the EW canal (28 in the connecting corner square) while the NS canal had 323 grebes. Three hours later, 126 Eared Grebes were recorded in the EW canal and 337 in the NS canal. The two counts on 26 May revealed quite differing numbers of grebes. The EW canal held 131 to 180 grebes less and the entire English Channel held 206 to 269 grebes less than on the afternoon of 25 May.
Eared Grebes had arrived at Tule Lake Refuge in the course of the previous 2-3 weeks. I assume that shortly after arrival, their wing muscles were still in good flight condition on 25 May and intense practicing could not have explained the mass pattering. Most birds were actively courting, but the group pattering did not appear to be related to pair bonding. There is also no reason to believe the grebes tried to divert an aerial predator with common flight activity as several instances of Bald Eagles (Haliaeetus leucocephalus) appearing in flight over the grebes or even trying a catch in the canal did not trigger much reaction. Birds pattering to escape a pursuing conspecific or to flee possible danger incited alarm at the most to a handful of other Eared Grebes in their immediate vicinity. The generalized pattering by larger groups of Eared Grebes observed appeared unrelated to courtship, aggression, fear or predator presence. A similar or comparable behavior by Black-necked Grebes (P. n. nigricollis) in Europe has not been reported.
There is comparable agitation in Silvery Grebes (P. occipitalis) during migration towards breeding areas. Fjeldsa (1982) noted that Silvery Grebes show high restlessness and form long lines that move back and forth on a lake from where, in the subsequent night, at least part of the population departed. He termed this pre-migratory restlessness. Movements of a group of 70 Silvery Grebes at Laguna Las Encadenadas, Argentina, in December 2006, were not limited to swimming, but included sudden quasi-simultaneous take-offs of individuals more at the rear end of the line. Some flew up, reaching a height of ~2 m, possibly to avoid collision with the birds preceding them. They landed again in front of the group that was moving in one direction. The grebes at the rear end acted similarly. The group changed direction as it approached the shore, but continued swimming in a line, and pattering and flying from the back to the front (Konter 2009).
Eared Grebes at Tule Lake Refuge all swam actively into the same direction, although they did not form one line. They showed pattering and pattering flights in waves and repeated the directed group movements. A priori the comparison of total counts of grebes inside the English Channel on the following day strongly suggests at least a major portion of the population had left the area. Zugunruhe seems an appropriate characterization for the Eared Grebes' behavior. Additional pre-departure activities at Tule Lake Refuge including group diving, submerging and surfacing in near unison as reported by Jehl and Henry (2010) were not obvious. The grebes' diving and swimming seemed to be predominantly related to feeding, except the dives after mass pattering involved only a minority of a group. Active vocalization may have helped group cohesion, but it could not be distinguished from advertising by solitary birds or from contact calling by partners momentarily separated.
It is not known to where the departing grebes flew and whether they targeted breeding areas in the region or flew a long distance. Eared Grebes can move to other sites used for breeding, even after arrival in a breeding area, or emigrate from the region (Cullen 1998). It is also unknown whether the grebes departed in flocks from the English Channel and whether they headed in one or different directions. I assume they were migrants and the extent of their pattering flight maneuvers suggests an eagerness to move on.
The counts of 25 and 26 May show that not all Eared Grebes had left the English Channel over night. Grebes present in the NS part were not observed on 25 May and they may not have engaged in group pattering. The first count on 26 May showed that low numbers of grebes were present inside the EW canal and the higher later count suggests that new grebes were continuously settling there. Over 200 Eared Grebes left the English Channel during the night and this number corresponds as an order of magnitude to the numbers involved in the group pattering. Thus, most pattering grebes could have left over night and it is likely their zugunruhe contributed to a simultaneous departure. They were gradually replaced by conspecifics moving into the EW canal on the following day.
Eared Grebes often do not arrive within a short lapse of time inside a breeding region where numbers generally build up over several weeks. They synchronize, however, nest establishment (McAllister 1956, Boe 1994). In this context, it is of interest to further investigate how a conspicuous pre-migratory group pattering as observed at Tule Lake Refuge may contribute to a coordinated onward flight inside a breeding region that would facilitate simultaneous colony establishment by large numbers of pairs. Unfortunately, the data from Tule Lake Refuge do not permit any conclusion to be drawn.
I am grateful to Michele Nuss from the Tule Lake Refuge Headquarters who was of great help in the preparation of my fieldwork. I thank J. R. Jehl Jr and C. E. Braun for critical review and constructive comments on the first draft.
Received 13 July 2011. Accepted 19 September 2011.
BENT, A. C. 1919. Life histories of North American diving birds, Order Pygopodes. U.S. National Museum Bulletin 107:1-47.
BOE, J. S. 1994. Nest site selection by Eared Grebes in Minnesota. Condor 96:19-35.
CULLEN, S. A. 1998. Population biology of Eared Grebes in naturally fragmented habitat. Thesis. Simon Fraser University, Burnaby, British Columbia, Canada.
CULLEN, S. A., J. R. JEHL, AND G. L. NUECHTERLEIN. 1999. Eared Grebe. The birds of North America. Number 433.
FJELDSA, J. 1982. Some behaviour patterns of four closely related grebes, Podiceps nigricollis, P. gallardoi, P. occipitalis, and P. taczanowskii, with reflections on phylogeny and adaptive aspects of the evolution of displays. Dansk Ornithologisk Forenings Tidsskrift 76:37-68.
GAUNT, A. S., R. S. HIKIDA, J. R. JEHL JR., AND L. FENBERT. 1990. Rapid atrophy and hypertrophy of an avian flight muscle. Auk 107:649-659.
JEHL JR., J. R. 1997. Cyclical changes in body composition in the annual cycle and migration of the Eared Grebe Podiceps nigricollis. Journal of Avian Biology 28:132-142.
JEHL JR., J. R. AND A. E. HENRY. 2010. The postbreeding migration of Eared Grebes. Wilson Journal of Ornithology 122:217-227.
JEHL JR., J. R. AND R. L. MCKERNAN. 2002. Biology and migration of Eared Grebes at Salton Sea. Hydrobiologia 473:245-253.
KONTER, A. 2009. Observations on diving times, on pre-migratory restlessness and on some displays of Silvery Grebes Podiceps occipitalis. Regulus Wissenschaftliche Berichte 24:67-71.
MCALLISTER, N. 1958. Courtship, hostile behavior, nest-establishment and egg laying in the Eared Grebe (Podiceps caspicus). Auk 75:290-311.
SHUFORD, W. D., D. L. THOMSON, D. M. MAUSER, AND J. BECKSTRAND. 2006. Abundance and distribution of nongame waterbirds in the Klamath Basin of Oregon and California from comprehensive surveys in 2003 and 2004. PRBO Conservation Science, Petaluma, California, USA.
STORER, R. W. AND J. R. JEHL JR. 1985. Moult patterns and moult migration in the Black-necked Grebe Podiceps nigricollis. Ornis Scandinavica 16:253-260.
Andre Konter (1)
(1) Museum of Natural History, 25, rue Munster, Luxembourg L-2150, Luxembourg; e-mail: email@example.com
|Gale Copyright:||Copyright 2012 Gale, Cengage Learning. All rights reserved.| | <urn:uuid:2aeeaf9e-3230-4652-98f2-c49e22d6bc2b> | 3.15625 | 3,989 | Academic Writing | Science & Tech. | 55.555742 | 345 |
stressArticle Free Pass
stress, in physical sciences and engineering, force per unit area within materials that arises from externally applied forces, uneven heating, or permanent deformation and that permits an accurate description and prediction of elastic, plastic, and fluid behaviour. A stress is expressed as a quotient of a force divided by an area.
There are many kinds of stress. Normal stress arises from forces that are perpendicular to a cross-sectional area of the material, whereas shear stress arises from forces that are parallel to, and lie in, the plane of the cross-sectional area. If a bar having a cross-sectional area of 4 square inches (26 square cm) is pulled lengthwise by a force of 40,000 pounds (180,000 newtons) at each end, the normal stress within the bar is equal to 40,000 pounds divided by 4 square inches, or 10,000 pounds per square inch (psi; 7,000 newtons per square cm). This specific normal stress that results from tension is called tensile stress. If the two forces are reversed, so as to compress the bar along its length, the normal stress is called compressive stress. If the forces are everywhere perpendicular to all surfaces of a material, as in the case of an object immersed in a fluid that may be compressed itself, the normal stress is called hydrostatic pressure, or simply pressure. The stress beneath the Earth’s surface that compresses rock bodies to great densities is called lithostatic pressure.
Shear stress in solids results from actions such as twisting a metal bar about a longitudinal axis as in tightening a screw. Shear stress in fluids results from actions such as the flow of liquids and gases through pipes, the sliding of a metal surface over a liquid lubricant, and the passage of an airplane through air. Shear stresses, however small, applied to true fluids produce continuous deformation or flow as layers of the fluid move over each other at different velocities like individual cards in a deck of cards that is spread. For shear stress, see also shear modulus.
Reaction to stresses within elastic solids causes them to return to their original shape when the applied forces are removed. Yield stress, marking the transition from elastic to plastic behaviour, is the minimum stress at which a solid will undergo permanent deformation or plastic flow without a significant increase in the load or external force. The Earth shows an elastic response to the stresses caused by earthquakes in the way it propagates seismic waves, whereas it undergoes plastic deformation beneath the surface under great lithostatic pressure.
What made you want to look up "stress"? Please share what surprised you most... | <urn:uuid:79e0dfc6-44f0-433d-bdf7-1f27c991027e> | 4.21875 | 547 | Knowledge Article | Science & Tech. | 46.057482 | 346 |
evidence suggests that life originated in extreme environments,
for example, at high temperatures. The National Science
Foundation (NSF) has initiated a program called Life in
the Extreme Environment (LExEn) that is dedicated to finding
new and exciting organisms that live in harsh environments.
The Extreme 2000 research expedition, at hydrothermal vent
sites in the Sea of Cortés, is led by marine scientists
George Luther and Craig Cary from the University of Delaware
and Anna-Louise Reysenbach from Portland State University.
Their chief objective is to make real-time chemical measurements
at the vents using microsensors developed by Dr. Luthers
group, which will guide the microbiologists and molecular
biologists in Dr. Carys and Dr. Reysenbachs
groups in finding organisms that are descendants of early
Chemical Detective Work at the Bottom
of the Sea
hydrothermal vents home to the closest relatives of the
oldest life on Earth? Using special tools housed in a wand
on the sub Alvin, researchers will be testing the
chemistry of vent water in search of microscopic organisms.
The wand houses a thermometer, an apparatus called the
Sipper to collect small water samples, and a super-sensitive
The analyzer is like a sophisticated underwater snooper.
It can be used near the vents and, from its chemical readings,
tell scientists what kind of microbes might live there.
While our food chain is based on energy from the sun, the
suns rays never reach the deep sea. There, organisms
must rely on a different energy source: the chemicals that
rocket out of the vents.
During a previous expedition, the Extreme 2000 scientific
team found that the presence of two compounds hydrogen
sulfide (H2S) and iron monosulfide (FeS) may be an
important indicator of the oldest microscopic vent life.
These compounds react to form the mineral pyrite (fools
gold) and hydrogen gas. The hydrogen provides the
energy that these microbes need to grow.
With the analyzers help, marine scientists may be
able to track down the nearest descendants of the first
life on Earth, and perhaps on other planets.
Europa, one of the moons of Jupiter, is covered in ice.
However, recent findings suggest that portions of the ice
move, which is strong evidence that liquid water lies beneath
the ice. The water may be maintained in its liquid state
by hydrothermal vents. If hydrothermal vents exist on Europa,
theres a possibility that ancient microbes could live | <urn:uuid:34eb878c-35eb-443f-b0b8-8f0942552023> | 3.984375 | 546 | Knowledge Article | Science & Tech. | 35.707536 | 347 |
The success of a concurrent system depends on well designed hardware, flexible software that controls the hardware, and a clear marketing vision. To adapt to changing marketing requirements, hardware and software need to have flexible architectures. This article focuses on software development issues, and discusses concurrency design at the application layer as opposed to concurrency inside the Operating System.
Any multi-threaded system can be considered a concurrent system; for example, a lengthy task can be implemented as a background thread so that it will not block the graphical user interface. Here, we are discussing concurrent systems which can be characterized by the following traits:
- System input and output can be clearly identified.
- The system internals consist of system resources, such as hardware modules, which are used to process system input and generate system output.
- One or more execution steps are needed for a system input to be processed by system resources and to become a system output.
- The system resources and their relationship are identified by system analysis. The concurrency properties of system resources determine the constraints between execution steps, which ultimately define the system concurrency behavior.
The above description is illustrated in figure 1. The system consists of three resources, two inputs, and two outputs. Each input needs to go through two steps to become an output. Resources 1 and 2 are independent and can run in parallel. The outputs of resources 1 and 2 are the inputs of resource 3, which is independent of 1 and 2.
Figure 1. A Sample Concurrent System
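To make the dataflow in figure 1 concrete, the minimal sketch below models it directly with threads: resources 1 and 2 each run on their own thread and hand their intermediate results to resource 3 through a shared queue. This is only an illustration in standard C++; the names (WorkQueue, resourceWorker) are invented here and do not belong to any particular product or API.

```cpp
// Minimal sketch of the figure 1 dataflow: resources 1 and 2 run in
// parallel and feed their results to resource 3 through a shared queue.
// All names are illustrative and not part of any specific framework.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class WorkQueue {                       // hand-off point between resources 1/2 and resource 3
    std::queue<std::string> items_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(std::string item) {
        { std::lock_guard<std::mutex> lock(m_); items_.push(std::move(item)); }
        cv_.notify_one();
    }
    std::string pop() {                 // blocks until an item is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !items_.empty(); });
        std::string item = std::move(items_.front());
        items_.pop();
        return item;
    }
};

int main() {
    WorkQueue queue;

    // Resources 1 and 2 are independent, so each gets its own thread (execution step 1).
    auto resourceWorker = [&queue](int id, const std::string& input) {
        std::string partial = "resource" + std::to_string(id) + "(" + input + ")";
        queue.push(partial);            // hand the intermediate result to resource 3
    };
    std::thread r1(resourceWorker, 1, "input1");
    std::thread r2(resourceWorker, 2, "input2");

    // Resource 3 consumes both intermediate results and produces the system outputs (step 2).
    std::thread r3([&queue] {
        for (int i = 0; i < 2; ++i)
            std::cout << "output: resource3(" << queue.pop() << ")\n";
    });

    r1.join(); r2.join(); r3.join();
}
```

The point of the sketch is only that the concurrency constraint of figure 1 (resources 1 and 2 in parallel, resource 3 downstream) maps directly onto threads and a hand-off queue; the rest of the article is concerned with keeping that mapping out of the application code.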
Most concurrent systems have these design goals:
- Have an easy to understand software architecture so that the desired concurrency can be implemented and verified quickly.
- Have a solid system concurrency kernel that adapts to environmental changes, such as inconsistent hardware responses, and still achieves high system reliability.
- Have a scalable architecture that adapts to new requirements.
- System concurrency and throughput are well understood by all teams involved in the system specification and design, not just by a few key software engineers. Therefore, the concurrent software should expose how the system works internally at minimum cost, so that team communication can be conducted effectively.
- Different system concurrencies can be achieved with different execution configurations without major interruption to the system reliability.
For a complex concurrent system, the design cost to achieve such goals could be very high for inexperienced engineers. Most systems end up with only a few engineers who can understand and maintain the fragile concurrent kernels.
What is needed to meet the concurrent system design goal from a management perspective?
- To have the capability to quickly understand the marketing or hardware concurrency requirement, and to provide a clear road map on how to achieve the desired software system concurrency at an early stage of development, not when delivering the alpha or beta product. This requires the software engineer to have a clear understanding of the system resource concurrency at the very beginning.
- To shorten the cycle of turning the desired concurrency into a real functioning software system.
- To communicate the achievable concurrency goal to other teams frequently, and to adjust the concurrency accordingly based on new marketing input or on improvements or limitations of resource constraints, such as hardware changes.
The following are common issues found in concurrent software design:
- The software team members are not very experienced in concurrency design. Most teams have engineers who know threads, critical sections, semaphores, and events, but this usually does not guarantee achieving the design goals listed above.
- Understanding of the system concurrency develops very slowly. The software engineer cannot present a full picture of how the system concurrency design is going to work until the alpha or beta stage. Therefore, nobody questions how the system concurrency is designed, since there is no good method to communicate the software design. The engineer usually gives his/her understanding of the system concurrency in small pieces, which makes it hard to convince the software team manager or other teams that the software team has fully understood the system and will be able to deliver on schedule.
- The marketing group makes a wrong system throughput assumption and commitment at the beginning of a project, based on a false understanding of the system resource constraints, or of how complex it is for the available engineers to achieve the desired high throughput without sacrificing software system reliability. The marketing group might assume that the software engineers can simply achieve it, but has no way to verify it during the process until it is too late.
- Almost all designs lack a clear distinction between the code controlling the system resource operation and the code performing the system resource concurrency. Such an architecture makes it very hard to accommodate new concurrency requirements. By simply using synchronization objects from the Operating System, such as critical sections, events, or semaphores, it is almost impossible to make such a partition without a major investment in the system architecture design. Unfortunately, most applications do not separate the two domains and let one engineer, who is already overwhelmed by the concurrency choreography, handle all of them. The software manager usually does not understand the importance of such a design, or does not have the time to spend on infrastructure building, and just wants to see something beginning to work. The result is that more time is wasted during debugging and the feature enhancement period.
- The fragile concurrency architecture is hard to understand. It is almost impossible for new engineers to take over the design, except to abandon the old one and propose a "better architecture", which usually goes through the same design cycle and delays the schedule. The software manager usually has no choice but to accept the engineer's redesign approach, since neither the engineer nor the manager has a way to improve the old architecture.
- The manager and software engineer mistakenly think that object oriented analysis of the concurrent hardware modules will guarantee a good concurrent software design that delivers a flexible concurrent software architecture. Most OOA methods just help engineers identify objects in a system without concurrency analysis, and engineers still have to use synchronization objects from the Operating System to address concurrency. This approach will not help achieve the concurrent system design goals listed above. Unfortunately, most systems are designed this way.
- Engineers begin to experience unexplained hangs and put a sleep function somewhere to work around weird timing problems, simply because the understanding of system resource concurrency was not complete at the beginning and the design cannot adapt to a different running environment. When switching to a different platform, such as a faster machine, the software needs major retesting, or possibly an overhaul. The engineer and the manager begin to hide facts from upper management. The development cost goes up, and the software always needs major "improvements" to adapt to new hardware with newly tuned concurrency, which should not happen if the system is well designed at the beginning.
- Typically, the system concurrency design is architected by a senior person on the team, and it is very hard for other people to challenge the delicate design. System maintenance and enhancement of the concurrency part are a major issue with such a design approach.
How to Address Those Issues
The cost of making a complex concurrent system flexible and reliable is extremely high for average engineers who simply use the Operating System's critical sections, semaphores, events, and threads. To address the above problems, we need a platform that helps engineers model a concurrent system with an easily understood object model, communicate the design through a user friendly graphical user interface, and verify the internal concurrency of the design quickly by simulation.
- A simple concurrent object model is needed. An object oriented analysis method based on the model should be easy to perform.
- An inter-task communication mechanism is provided based on the object model to allow task synchronization.
- A design development toolkit is needed to support the object oriented analysis and to help software engineers spend more time on understanding the system concurrency and controlling the system resources during implementation, instead of struggling with multithreaded code implemented with Operating System synchronization objects such as critical sections, semaphores, and events.
- The design platform provides a graphic presentation of the system concurrent execution status that helps software engineers to present and to validate a design effectively. Eventually, it will help the whole team, even different groups, to understand the system concurrency internals.
- The object model and its development toolkit allow separation of the code performing system resource concurrency and the code performing system resource control. In figure 1, the code controlling Resource 1 is a resource control domain. The code controlling independent Resources 1 and 2 to operate in parallel is a system resource concurrency domain. This architecture helps the manager to partition the concurrent system design work into two domains so that it can be assigned to different engineers to improve team productivity and product reliability.
Usually, a manager will not offer resources to implement such an environment to help the design in the long term, since it is very time consuming and no immediate results can be seen.
The JEK Platform is designed to address these issues and the above concurrent system design goals, and makes concurrent problems easier for software engineers to model. The JEK SDK automatically turns a modeled application job into a concurrent execution engine. The object model also separates the resource synchronization code from the resource control code, so that the engineer can spend more time understanding system concurrency instead of dealing with the Operating System synchronization objects used by most software engineers. It also helps the engineer spend more time communicating their understanding of the system concurrency within a team, or with other teams.
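One way to picture this separation, independent of any particular SDK, is to keep the resource control code behind a plain task interface that contains no locking, while a separate engine owns all threads and synchronization objects. The sketch below is only an assumption about what such a split could look like; ITask, PourMilkTask, and Engine are invented names and are not the JEK API.

```cpp
// Hypothetical split between the resource control domain and the concurrency domain.
// A task only knows how to drive one resource; the engine owns all threads and locks.
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

struct ITask {                       // resource control domain: no threads, no locks
    virtual ~ITask() = default;
    virtual void execute() = 0;
};

struct PourMilkTask : ITask {        // example resource controller
    void execute() override { std::cout << "pouring milk\n"; }
};

class Engine {                       // concurrency domain: owns threads and mutexes
    std::mutex platformMutex_;       // models a shared synchronization resource
    std::vector<std::thread> workers_;
public:
    void run(ITask& task) {
        workers_.emplace_back([this, &task] {
            std::lock_guard<std::mutex> lock(platformMutex_);  // the engine decides the locking
            task.execute();                                    // the task only controls its resource
        });
    }
    ~Engine() { for (auto& w : workers_) w.join(); }
};

int main() {
    PourMilkTask milk;
    Engine engine;
    engine.run(milk);   // the task author never touches a synchronization object
}
```

With this kind of split, the engineer writing PourMilkTask never touches a mutex, and the engine's scheduling policy can change without modifying any resource control code.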
Here, two samples are presented (please go to www.jekplatform.com/CodeProjectSamples.htm to get the source code) to demonstrate how the JEK Platform works:
- Sample 1: Philosophers dining problem.
- Sample 2: Automated coffee machine.
Sample 1. Philosophers Dining Problem
The philosophers dining problem is five philosophers sitting around a table doing what they do best: thinking and eating. In the middle of the table is a plate of food, and in between each pair of philosophers is a fork. The philosophers spend most of their time thinking, but when they get hungry, they reach for the two forks next to them and start eating. A philosopher cannot begin eating until he has both forks. When he is done eating, he puts the forks down and continues thinking.
To solve the problem with the JEK Platform, five routes are defined to represent the actions of the five philosophers. Each route has two tasks: eat and think. Obviously, the eat tasks of neighboring philosophers' routes cannot be active at the same time because of the shared forks. A Mutex synchronization resource is used to restrict the eat task of each philosopher. The resource allocation scheduling algorithm in the JEK Kernel has an important feature that avoids deadlock in this sample: if one philosopher gets a fork and finds that the other is already taken, it releases the one it holds and notifies tasks in other routes so that another philosopher can continue to eat.
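The release-and-retry behavior described above can also be sketched with standard C++ primitives. The code below is not the JEK Kernel's scheduler; it only illustrates, under that assumption, the take-one-fork, try-the-other, back-off-on-failure strategy that avoids deadlock.

```cpp
// Sketch of the "release and retry" fork strategy with std::mutex.
// Illustrative only; the real JEK Kernel schedules the resources internally.
#include <array>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>

constexpr int kPhilosophers = 5;
std::array<std::mutex, kPhilosophers> forks;

void philosopher(int id) {
    const int left = id;
    const int right = (id + 1) % kPhilosophers;
    for (int meal = 0; meal < 3; ++meal) {
        for (;;) {
            forks[left].lock();                       // take the first fork
            if (forks[right].try_lock()) break;       // second fork free: start eating
            forks[left].unlock();                     // otherwise back off so a neighbor can eat
            std::this_thread::yield();
        }
        std::printf("philosopher %d eating\n", id);
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        forks[right].unlock();
        forks[left].unlock();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));  // thinking
    }
}

int main() {
    std::array<std::thread, kPhilosophers> diners;
    for (int i = 0; i < kPhilosophers; ++i) diners[i] = std::thread(philosopher, i);
    for (auto& t : diners) t.join();
}
```

Note that this back-off strategy prevents deadlock but does not guarantee fairness; nothing stops a philosopher from repeatedly losing the race for its second fork, which matches the starvation observation later in this section.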
Figure 2 shows the timing diagram of the philosophers dining job execution engine. The application code is pretty simple, since threads and thread synchronization are handled by the JEK SDK; the code simply describes the resources (forks) and tasks (the philosophers' actions).
Figure 2. JEK Studio monitors the philosophers dining job execution
In figure 2, the components of the JEK Studio GUI are indicated by yellow bubbles:
- The task matrix presents the application engine internal structure and the real-time execution activity status.
- The task timing diagram presents a more detailed real-time execution status for tasks, which helps developers to understand and to validate concurrent system behavior quickly.
- The activity resource matrix presents real-time task activity resource status.
- The synchronization resource matrix presents the real-time task occupancy status.
- The task trace window displays log status, which is also saved in a log file.
In JEK Studio, the five routes are shown in the task matrix. Their execution is shown in the task execution timing diagram. The four blue bubbles are explained as follows:
- Blue bubble 1. The job execution engine starts to execute the job. Philosophers 1 and 4 start to eat.
- Blue bubble 2. Philosophers 3 and 5 start to eat at the same time, and philosophers 1 and 4 start to rest at the same time. Philosophers 3 and 5 can start eating at the same time because the application code is configured so that the eat times for all philosophers are the same. The rest times are also the same for all philosophers.
- Blue bubble 3. It is interesting to observe the job execution status after a few loops. Philosophers 1, 2, and 4 are resting; philosophers 3 and 5 are eating. Observed carefully, philosophers 3 and 5 do not start to eat at exactly the same time, and philosophers 1, 2, and 4 do not start resting at the same time.
- Blue bubble 4. This is another interesting job execution status. Only one philosopher, #2, is eating at this moment; philosophers 1, 3, 4, and 5 are all resting. The reason is that the resting times of all philosophers are longer than their eating times.
- Another observation is that fairness is not guaranteed for each route. With the scheduler used, it is unpredictable which philosopher will get a chance to eat next.
- Route starvation is possible. In other words, some philosophers might never get a chance to eat. This is not demonstrated in the graph since the result is random. You can try to start the engine a few times and the results could be different each time.
Without simulation, it is very hard for a software engineer to answer whether the scenarios marked by blue bubbles 3 and 4 are possible.
Sample 2. Automated Coffee Machine
An automated coffee machine mixes milk, sugar, and coffee into a cup, and serves the cup to the customer when it is done.
Figure 3. Coffee machine model analysis
The coffee machine has five robots:
- Platform robot. It holds the coffee cup so that milk, sugar, and coffee can be poured into it and mixed. After the coffee is mixed, it moves the cup with the mixed coffee to the customer.
- Cup robot. It puts an empty cup onto the platform robot.
- Milk robot. It pours milk into the coffee cup on the platform robot.
- Sugar robot. It pours sugar into the coffee cup on the platform robot.
- Coffee robot. It pours coffee into the coffee cup on the platform robot.
Coffee machine operating procedure:
- All robots are in initial positions.
- Cup robot puts an empty cup onto the platform robot.
- Milk robot pours milk into cup.
- Sugar robot pours sugar into cup.
- Step 3 (pouring milk) and step 4 (pouring sugar) can be performed in parallel.
- Coffee robot pours coffee into cup.
- Platform robot moves mixed coffee to customer.
Important operating requirements of the coffee machine are as follows:
- The operating procedure above has to be followed. Otherwise, the robots might end up in the wrong positions, resulting in damage to the robots.
- Milk and sugar need to be poured into the cup before the coffee, so that the coffee mixes properly without adding a stirring robot, which would increase the complexity of the machine.
To solve the problem with the JEK Platform, two routes are defined. One route is designed to control the cup robot and the platform robot. Another route is designed to control the milk, sugar, and coffee robots. The reason for defining these routes is that the task steps inside each route are sequential. The synchronization resource between the routes is the platform robot. For detailed analysis and code, please go to http://www.jekplatform.com/CodeProjectSamples.htm to download the complete JEK Platform and look for sample section 7: Machine Control.
Figure 4 is the coffee machine execution timing diagram, implemented with the JEK SDK and presented in JEK Studio. The x-axis is time; the y-axis is tasks. A bar is the execution time of a task. Tasks within one route are displayed in one color, and different tasks in one route are displayed at different y-axis values. Multiple bars at the same y-axis value represent the same task executed at different times.
Route 1 controlling milk, sugar, and coffee robot (orange color) has three tasks from bottom to top:
- Task1_1: control milk robot to pour milk.
- Task1_2: control sugar robot to pour sugar.
- Task1_3: control coffee robot to pour coffee.
Route 2 controlling cup robot (blue) has two tasks from bottom to top:
- Task2_1: control cup robot to put cup onto platform robot.
- Task2_2: control platform robot to serve mixed coffee to customer.
The six blue bubbles in figure 4 are explained as follows:
- Blue bubble 1. Task2_1 puts a cup onto the platform robot.
- Blue bubble 2. Task1_1 and Task1_2 pour milk and sugar into an empty cup at the same time.
- Blue bubble 3. Task1_2 finishes pouring sugar and Task1_1 is still pouring milk.
- Blue bubble 4. Task1_3 starts pouring coffee.
- Blue bubble 5. Task2_2 controls the platform robot to serve mixed coffee.
- Blue bubble 6. Repeat the same process.
Figure 4. Coffee machine execution status of solution 1
This machine is not very efficient: route 1 is idle after blue bubble 5. To increase throughput, another independent platform robot is added to keep route 1 as busy as possible. The position of platform robot 1 is different from that of platform robot 2; therefore, the control code for pouring milk, sugar, and coffee differs in context but has the same structure. Figure 5 shows the new robot diagram.
Figure 5. Platform 2 robot is added
Route 3 (burgundy red) is added to serve the second cup; it is shown in red in figure 6. It has the same tasks as route 2. Since a new platform robot has been added, routes 2 and 3 are redefined as follows:
Route 2 controlling cup robot (blue) has two tasks:
- Task2_1: control cup robot to put cup onto platform robot 1.
- Task2_2: control platform robot 1 to serve mixed coffee to customer 1.
Route 3 controlling cup robot (burgundy red) has two tasks:
- Task3_1: control cup robot to put cup onto platform robot 2.
- Task3_2: control platform robot 2 to serve mixed coffee to customer 2.
Figure 6. Solution 2 coffee machine has two platform robots
The five blue bubbles in figure 6 are explained as follows.
- Blue bubble 1. Task3_1 puts the cup onto platform robot 2. Both routes 2 and 3 are started at the same time, but only one route can use the cup robot at a time; which route gets it first is random.
- Blue bubble 2. Task1_1 and Task1_2 pour milk and sugar into the empty cup at the same time after a cup is put on platform robot 2. Note: here the execution context of route 1 is platform robot 2, which is at a different location than platform robot 1; in other words, the control code is different when the context is different.
- Blue bubble 3. Task2_1 starts to put a cup on platform robot 1. The coffee robot uses Task1_3 to pour coffee into the cup on platform robot 2. Both tasks are started at the same time.
- Blue bubble 4. Platform robot 2 uses task3_2 to serve mixed coffee to customer 2. Task1_1 and Task1_2 pour milk and sugar into an empty cup on platform robot 1. The route 1 execution context (pouring milk, sugar, and coffee into which platform robot) is not visible from the timing diagram. It is only visible from the trace window or log file.
- Blue bubble 5. Task1_3 pours coffee into the cup on platform robot 1.
Comparing figures 4 and 6, route 1 is busy almost all of the time; therefore, the throughput of the coffee machine with two platform robots is increased.
Looking more carefully, though, the machine with two platform robots could be even faster: the coffee robot could serve one platform robot while the milk and sugar robots are serving the other, provided the time to put a cup on a platform is shorter than the time the coffee robot needs to pour coffee.
To increase the speed of the coffee machine, an engine design with a different route configuration is used. Route 1 represents the actions of serving sugar, milk, and coffee to platform robot 1. Route 2 represents the actions of serving sugar, milk, and coffee to platform robot 2. The design of routes 3 and 4 is the same as that of the previous routes 2 and 3. Figure 7 is the timing diagram of the new coffee machine.
Figure 7. New concurrency of higher system throughput for the coffee machine with 2 platform robots
The five blue bubbles in figure 7 are explained as follows:
- Blue bubble 1. Task4_1 puts a cup onto platform robot 1.
- Blue bubble 2. Task1_1 and Task1_2 pour milk and sugar at the same time into the empty cup on platform robot 1. The cup robot starts to put a cup on platform robot 2, since it is free at this time.
- Blue bubble 3. Task2_2 starts to add sugar to cup 2 on platform robot 2. This action can happen because the sugar robot has just finished adding sugar to cup 1 on platform robot 1. It is also clear that the milk robot is still busy pouring milk into cup 1; therefore, cup 2 only has sugar for now.
- Blue bubble 4. Task2_1 starts to pour milk into cup 2, since the milk robot has just finished pouring milk for cup 1 on platform robot 1. The coffee robot begins to add coffee to cup 1 on platform robot 1.
- Blue bubble 5. Platform robot 1 begins to serve cup 1 to the customer, since the coffee for cup 1 is done.
It is obvious that the choreography of this new coffee machine is different from that of the previous one. To a user, its robots appear smarter and work more intelligently, since they start the next job more promptly.
The above samples demonstrate the following:
- It is easy to model and to analyze the concurrency of machine control with the JEK Platform.
- To adapt to a new hardware configuration, the JEK SDK helps achieve new system concurrency with minimum code changes.
- JEK Studio can visually identify system throughput potential quickly so that better throughput can be achieved.
- The samples demonstrated in these solutions have a pretty simple architecture (see downloaded code).
Without the JEK Platform, could we solve the problem as quickly? A few questions are raised here.
- If several teams, such as marketing, hardware, and software, are working on the product, they might stop when solution 2 is working. Do they know that solution 3 is the best solution? Can they figure it out quickly?
- Team members might be satisfied when they see the robots working in parallel. If it is found that improvements can be made, how much code change would be needed to move from solution 2 to solution 3 if the machine were implemented with operating-system synchronization objects?
- How much competitiveness would a company gain by having a more efficient and reliable machine?
How to Get the Two Samples
Please go to http://www.jekplatform.com/CodeProjectSamples.htm to download the complete JEK Platform which includes the two samples. | <urn:uuid:e284b760-a01f-49e4-8cf1-71388aedbea7> | 3.21875 | 4,871 | Academic Writing | Software Dev. | 45.019693 | 348 |
COLUMBUS, Ohio (AP) -- A group of international algae experts say there are no quick or easy solutions to clear algae from Lake Erie and Grand Lake St. Marys in Ohio.
In the case of Grand Lake St. Marys, it could even take decades.
The Columbus Dispatch (http://bit.ly/WtnmLC ) reports that experts spoke about the algae problem in Ohio's lakes at the EcoSummit 2012 conference this week in Columbus.
Harry Gibbons of Seattle-based Tetra Tech lumped Grand Lake St. Marys in with other lakes around the world that suffer from summertime blooms of toxic blue-green algae.
The algae are common in most lakes but grow thick feeding on phosphorus from manure, fertilizers and sewage that rains wash into nearby streams.
Information from: The Columbus Dispatch, http://www.dispatch.com | <urn:uuid:cd0c7a86-fae8-4a5b-8b0e-892849c5c38c> | 2.53125 | 182 | Truncated | Science & Tech. | 60.98378 | 349 |
Modern programming languages have little support for writing secure software,
making it all too easy to write programs with exploitable vulnerabilities.
In these lectures, we explore a general technique based on type qualifiers
that allows programmers to write down, in their source code, their intentions
with respect to security. We will describe how to mechanically verify that
annotated code adheres to the policy.
We will discuss the theoretical foundations and practical implementation issues.
As a particular example, we show how to use type qualifiers to find format-string
vulnerabilities in widely-deployed C programs and to find other security
vulnerabilities in the Linux kernel. We will also look at alias analysis, another
important program analysis problem, and show how a must-alias analysis system
corresponds to a system for statically checking access control.
This series of lectures will discuss the requirements, protocols, and
components of network security software on the Internet. Topics will
include secure tunnels, security for web services, privacy constraints,
design features that create or address DoS threats, and the use of
programmable security tokens in network protocols. The primary emphasis
will be the relationship between models and design, including topics like
the quantification of DoS threats, models for code security in programmable
tokens, strategies for composition and interoperation, and practical
strategies for formal analysis of network protocol designs and software.
In these lectures, we will analyze the security infrastructure in
current, main-stream programming systems and platforms such as
the Java Virtual Machine and Common Language Runtime. We will explain
how byte code verification collaborates with the class loader and
security manager to provide a secure run-time environment.
We will also use theoretical tools to determine what properties
current security systems based on stack inspection have
and provide concrete proposals for improving the infrastructure
for next-generation programming languages and systems. | <urn:uuid:c22fb70a-8efe-41d8-8a5e-de429096e99f> | 2.625 | 396 | Content Listing | Software Dev. | 5.734788 | 350 |
A study led by researchers at the University of Colorado has determined that the pace of planet warming in the first decade of this century was slowed by volcanic activity, and not by industrial activity in Asia, as was previously believed.
Previous research in 2009 had suggested that an increase in stratospheric aerosols tied to a 60 percent increase in sulfur dioxide emissions over China and India had negated about 25 percent of the global warming that scientists attribute to greenhouse gas emissions.
That cooling effect occurs when sulfur dioxide emissions rise 12 to 20 miles to the stratospheric aerosol layer of the atmosphere. There, chemical reactions create sulfuric acid and water particles that reflect sunlight back into space.
Now, research led by study author Ryan Neely, conducted as part of his doctoral thesis at CU, has shown that global warming from 2000 to 2010 was tamped down by sulfur dioxide from volcanic eruptions, not industrial emissions in China and India, where such activity has greatly increased in recent years.
"It's good to know this is coming from volcanoes; its a natural thing, and it's not something we're doing as a planet," said Neely, who is now a post-doctoral fellow in the advanced study program at the National Center for Atmospheric Research.
The 10-year window addressed by the study did not see massive activity on the scale of Mount Pinatubo in the Philippines, which erupted in 1991 in the second-largest volcanic event of the 20th century. Still, there was sufficient volcanic activity in the 2000s from the tropics to Alaska that made an impact in the stratosphere.
The new research piggybacks on a 2011 study led by Susan Solomon, a former scientist at the National Oceanic and Atmospheric Administration who is now at the Massachusetts Institute of Technology, which showed that stratospheric aerosols -- without isolating their source -- offset about one-quarter of the greenhouse-effect warming of Earth in the past 10 years.
To determine what was contributing to that, Neely said he realized, "You couldn't do it from observations alone. It's all intermingled, and you can't separate the two sources easily. It was going to take a very specialized model that no one has done before."
CU's Janus supercomputer was pressed into service to conduct seven computer runs, each of them simulating 10 years of atmospheric activity linked to both coal-burning activities in Asia and to volcanic emissions around the world.
Each run required about a week of computer time, utilizing 192 processors, enabling the team to isolate coal-burning pollution in Asia from aerosol contributions tied to volcanic eruptions.
Neely said the work would have taken a single computer about 25 years to complete.
The fact that emissions from industrial activity on the other side of the planet are not affecting temperature fluctuations tied to particulates in the stratosphere, Neely said, does not mean that all those human-caused emissions are good for the environment.
"A lot of people would take it that way," he said, "but it's bad for other reasons -- for acid rain reasons, and just for putting pollution into the atmosphere, and not to mention all the carbon dioxide they're emitting, when they burn all the coal," he said.
In a news release, study co-author and CU professor Brian Toon said, "The biggest implication here is that scientists need to pay more attention to small and moderate volcanic eruptions, when trying to understand changes in Earth's climate.
"But overall, these eruptions are not going to counter the greenhouse effect. Emissions of volcanic gases go up and down, helping to cool or heat the planet, while greenhouse gas emissions from human activity just continue to go up."
Contact Camera Staff Writer Charlie Brennan at 303-473-1327 or email@example.com. | <urn:uuid:8694d936-0085-4edd-a6e6-c8493988c285> | 3.328125 | 784 | News Article | Science & Tech. | 33.929922 | 351 |
Unprecedented sea-level rise over 20th century pins down future of rising oceans
Sea-level rises from global warming remain one of the big unknowns in the jostling crowd of climate threats. We know that warming water expands - and so coastal communities will be threatened, as the sea swells in response to rising temperatures. But what about the melting of the land-based ice-sheets of Greenland and Antarctica? And the disappearing mountain glaciers of the Rockies and the Andes? How will a warming world spill these frozen waters into the sea, and so back onto our low-lying cities? Those questions are something that climate scientists try to model, but which they still find difficult to predict.
One way to help knock down that uncertainty is to turn to past records of sea-level fluctuations, to see if they can point to the future. Tie these records tightly enough to detailed temperature records, and scientists may get a better grasp on what their models should be showing. That's what a research team, including Penn State's Michael Mann, have done - looking to North Carolina's salt marshes for a fresh insight into the sea-level conundrum. The results are published online in today's Proceedings of the National Academy of Sciences.
The international team of scientists turned to North Carolina, in part because this area of the world has been geologically quiet for many millennia. Previous sea-level reconstructions come from parts of the world that are still on the move - due to tectonic shifts, or where the land is rebounding from the burden of the last Ice Age. That makes deciphering which sea-level changes are down to temperature fluctuations more difficult.
But the marshes off of the US east coast have had a much simpler history - so taking account of the slow resettling of the earth's foundations is much easier here. In order to work out the level of the sea over the last two-thousand years, the team looked at thick slices of muddy sediment in the sheltered waters of the Pamlico Sound. These contain foraminifera, microscopic fossils which can tell scientists the depth of the overlying waters. They were dated accurately using a combination of radiocarbon and pollen sequences.
The results tallied well with other reconstructions, and with local and global tidal gauge records. 'The temperature and sea level reconstructions were determined independently from each other, and yet each shows what we would expect based on the other," said Mann. "Higher temperatures correspond with higher rates of sea level change and vice versa."
The story of sea level changes fitted a reasonably well-known narrative - and emphasized the suddenness of the most recent changes ascribed to man's tinkering with the climate.
From 100 BC until AD 950, sea levels held fairly constant, but with the regional warming of the Northern Hemisphere known as the Medieval Warm Period, sea levels rose by 2 inches per century. That rise halted with the onset of the 'Little Ice Age', when sea levels fell slightly. But once the industrialization of the 19th century was well underway, the tide turned firmly back to rising sea levels - and dramatically so.
Over the last century, sea levels rose at the equivalent of 8 inches per century - a rate unprecedented in the 2,000-year record. The paper concludes: "...in North Carolina the mean rate of rise was 2.1 mm/y in response to 20th century warming. This historical rate of rise was greater than any other persistent, century-scale trend during the past 2,100 years," the researchers report. That can be seen as an ominous pointer to the future for low-lying communities.
We consider a simple pure substance under hydrostatic conditions described by the following fundamental equation in the entropy representation:

$$dS = \frac{1}{T}\,dU + \frac{p}{T}\,dV - \frac{\mu}{T}\,dN, \qquad (1)$$

where the extensive variables U, V and N are the internal energy, the volume, and the number of particles respectively, and the intensive variables T, p and μ are the temperature, the pressure and the chemical potential respectively.
Equation (1) corresponds to the choice of the variables U, V and N as independent variables of the entropy S(U,V,N). These variables are precisely those which are fixed and determine the macrostate of the members of the Microcanonical Ensemble and consequently S is the relevant potential in this statistical ensemble.
It is useful to define the quantities β ≡ 1/(k_B T), γ ≡ p/(k_B T), ν ≡ −μ/(k_B T) and σ ≡ S/k_B, so that Eq. (1) can then be written in the dimensionless form:

$$d\sigma = \beta\,dU + \gamma\,dV + \nu\,dN. \qquad (2)$$

In general, for other thermodynamic systems with r degrees of freedom, one will have:

$$d\sigma = \sum_{i=1}^{r} X_i\,dY_i, \qquad (3)$$

where the Y_i are extensive variables, and the X_i the corresponding entropic conjugate variables. Massieu-Planck functions are entropic thermodynamic potentials defined as Legendre transformations of the entropy. In the case of a pure substance, the following (dimensionless) potentials can be formally defined:

$$\Phi_1 \equiv \sigma - \beta U, \qquad \Phi_2 \equiv \sigma - \beta U - \gamma V, \qquad \Phi_3 \equiv \sigma - \beta U - \gamma V - \nu N. \qquad (4)$$

The function Φ_1 was first introduced by Massieu, and it is called Massieu's potential. The function Φ_2 was introduced by Planck and is called Planck's potential.
Given the extensivity of the entropy, and using Euler's theorem for homogeneous functions, it is easy to see that

$$\sigma = \beta U + \gamma V + \nu N. \qquad (5)$$

Therefore the Legendre transformation of all the variables vanishes identically, Φ_3 = 0.

Substituting Eq. (1) into the differentials of the potentials defined above one gets:

$$d\Phi_1 = -U\,d\beta + \gamma\,dV + \nu\,dN, \qquad (6)$$
$$d\Phi_2 = -U\,d\beta - V\,d\gamma + \nu\,dN, \qquad (7)$$
$$d\Phi_3 = -U\,d\beta - V\,d\gamma - N\,d\nu. \qquad (8)$$

From Eq. (5) one obtains:

$$U\,d\beta + V\,d\gamma + N\,d\nu = 0. \qquad (9)$$

The above equations allow a re-derivation of all the standard thermodynamic equations in terms of β, γ and ν. For instance, Maxwell relations can be deduced by imposing that the equations (6)-(8) are exact differentials (equality of crossed derivatives). Moreover, Eq. (9) is the Gibbs-Duhem equation, which states that the complete set of intensive variables of the system are not all independent. On the other hand, the extremal condition of the entropy leads us to deduce that β, γ and ν are homogeneous at equilibrium.
Apr 4, 2012 / ENERGY GLOBE Award
Project presentation - "One Child - One Solarlight"
Electricity is a scarce commodity in Africa. The whole continent uses no more electricity than New York. At night the villages are dark. The only source of light for many children – while they do their homework – is kerosene light, which is expensive, unhealthy and smelly. The sun, however, is clean and free!
The nonprofit organization Solux started the model project "One child, one solar light" in Ghana to bring light to people who live in darkness. The goal was to supply solar lights to children and their families in off-grid regions.
The company Solar4Ghana Ltd. was founded to inform children, parents and teachers about the advantages of solar lights. Solux also provides micro credits, so that solar light is affordable to anyone. After one year of the project:
- More than six persons profit from a single solar light.
- The quality of life of 35,000 people has been significantly improved via solar lamps.
- 100% of the users recommend Solux solar lights.
- This model project is transferable to and reproducible in other countries.
More information: www.solar4ghana.com | <urn:uuid:6f850524-efdf-4980-8376-340a7f059486> | 2.90625 | 264 | News (Org.) | Science & Tech. | 50.322626 | 354 |
At various points along the path toward productive nanosystems for molecular manufacturing it would be useful to be able to calculate the properties and reactions of assemblies of atoms of various sizes. Within the domain of non-relativistic quantum mechanics, such information is supplied by the Schrödinger equation, but this can only be solved analytically for the hydrogen atom and ions with only one electron. For larger atoms and molecules, numerical solutions require compromises between computational feasibility and accuracy. Recent work from researchers at Argonne National Laboratory suggests that machine learning can be an efficient alternative to numerical computations. A hat tip to KurzweilAI.net for pointing to this New Scientist article by Lisa Grossman “Molecules from scratch without the fiendish physics“:
A SUITE of artificial intelligence algorithms may become the ultimate chemistry set. Software can now quickly predict a property of molecules from their theoretical structure. Similar advances should allow chemists to design new molecules on computers instead of by lengthy trial-and-error.
Our physical understanding of the macroscopic world is so good that everything from bridges to aircraft can be designed and tested on a computer. There’s no need to make every possible design to figure out which ones work. Microscopic molecules are a different story. “Basically, we are still doing chemistry like Thomas Edison,” says Anatole von Lilienfeld of Argonne National Laboratory in Lemont, Illinois.
The chief enemy of computer-aided chemical design is the Schrödinger equation. In theory, this mathematical beast can be solved to give the probability that electrons in an atom or molecule will be in certain positions, giving rise to chemical and physical properties.
But because the equation increases in complexity as more electrons and protons are introduced, exact solutions only exist for the simplest systems: the hydrogen atom, composed of one electron and one proton, and the hydrogen molecule, which has two electrons and two protons. …
The researchers developed a machine learning model to calculate the atomization energy (the energy of all the bonds holding a molecule together) and applied it to a database of 7165 small organic molecules of known structure and atomization energy, each containing up to seven atoms of carbon, nitrogen, oxygen, or sulfur, plus the number of hydrogen atoms necessary to saturate the bonds. These molecules had atomization energies ranging from 800 to 2000 kcal/mol. The model was trained on a subset of 1000 compounds and then used to calculate the energies of the remaining molecules in the database. The results showed a mean error of only 9.9 kcal/mol, comparable to the accuracy of methods based upon the Schrödinger equation, but the computations were done in milliseconds rather than hours. The authors suggest that extensions of their approach might permit rational molecule design or molecular dynamics calculations of systems of atoms undergoing chemical reactions.
The dot product: Introduction to the vector dot product.
- Let's learn a little bit about the dot product.
- The dot product, frankly, out of the two ways of multiplying
- vectors, I think is the easier one.
- So what does the dot product do?
- Why don't I give you the definition, and then I'll give
- you an intuition.
- So if I have two vectors; vector a dot vector b-- that's
- how I draw my arrows.
- I can draw my arrows like that.
- That is equal to the magnitude of vector a times the
- magnitude of vector b times cosine of the
- angle between them.
- Now where does this come from?
- This might seem a little arbitrary, but I think with a
- visual explanation, it will make a little bit more sense.
- So let me draw, arbitrarily, these two vectors.
- So that is my vector a-- nice big and fat vector.
- It's good for showing the point.
- And let me draw vector b like that.
- Vector b.
- And then let me draw the cosine, or let me, at least,
- draw the angle between them.
- This is theta.
- So there's two ways of view this.
- Let me label them.
- This is vector a.
- I'm trying to be color consistent.
- This is vector b.
- So there's two ways of viewing this product.
- You could view it as vector a-- because multiplication is
- commutative, you could switch the order.
- So this could also be written as, the magnitude of vector a
- times cosine of theta, times-- and I'll do it in color
- appropriate-- vector b.
- And this times, this is the dot product.
- I almost don't have to write it.
- This is just regular multiplication, because these
- are all scalar quantities.
- When you see the dot between vectors, you're talking about
- the vector dot product.
- So if we were to just rearrange this expression this
- way, what does it mean?
- What is a cosine of theta?
- Let me ask you a question.
- If I were to drop a right angle, right here,
- perpendicular to b-- so let's just drop a right angle
- there-- cosine of theta-- soh-cah-toa-- cah: cosine
- is equal to adjacent over hypotenuse, right?
- Well, what's the adjacent?
- It's equal to this.
- And the hypotenuse is equal to the magnitude of a, right?
- Let me re-write that.
- So cosine of theta-- and this applies to the a vector.
- Cosine of theta of this angle is equal to adjacent, which
- is-- I don't know what you could call this-- let's call
- this the projection of a onto b.
- It's like if you were to shine a light perpendicular to b--
- if there was a light source here and the light was
- straight down, it would be the shadow of a onto b.
- Or you could almost think of it as the part of a that goes
- in the same direction of b.
- So this projection, they call it-- at least the way I get
- the intuition of what a projection is, I kind of view
- it as a shadow.
- If you had a light source that came up perpendicular, what
- would be the shadow of that vector on to this one?
- So if you think about it, this shadow right here-- you could
- call that, the projection of a onto b.
- Or, I don't know.
- Let's just call it, a sub b.
- And it's the magnitude of it, right?
- It's how much of vector a goes on vector b over-- that's the
- adjacent side-- over the hypotenuse.
- The hypotenuse is just the magnitude of vector a.
- It's just our basic trigonometry.
- Or another way you could view it, just multiply both sides
- by the magnitude of vector a.
- You get the projection of a onto b, which is just a fancy
- way of saying, this side; the part of a that goes in the
- same direction as b-- is another way to say it-- is
- equal to just multiplying both sides times the magnitude of a
- is equal to the magnitude of a, cosine of theta.
- Which is exactly what we have up here.
- And the definition of the dot product.
- So another way of visualizing the dot product is, you could
- replace this term with the magnitude of the projection of
- a onto b-- which is just this-- times the
- magnitude of b.
- That's interesting.
- All the dot product of two vectors is-- let's just take
- one vector.
- Let's figure out how much of that vector-- what component
- of it's magnitude-- goes in the same direction as the
- other vector, and let's just multiply them.
- And where is that useful?
- Well, think about it.
- What about work?
- When we learned work in physics?
- Work is force times distance.
- But it's not just the total force
- times the total distance.
- It's the force going in the same
- direction as the distance.
- You should review the physics playlist if you're watching
- this within the calculus playlist. Let's say I have a
- 10 newton object.
- It's sitting on ice, so there's no friction.
- We don't want to worry about friction right now.
- And let's say I pull on it.
- Let's say my force vector-- This is my force vector.
- Let's say my force vector is 100 newtons.
- I'm making the numbers up.
- 100 newtons.
- And Let's say I slide it to the right, so my distance
- vector is 10 meters parallel to the ground.
- And the angle between them is equal to 60 degrees, which is
- the same thing as pi over 3.
- We'll stick to degrees.
- It's a little bit more intuitive.
- It's 60 degrees.
- This distance right here is 10 meters.
- So my question is, by pulling on this rope, or whatever, at
- the 60 degree angle, with a force of 100 newtons, and
- pulling this block to the right for 10 meters, how much
- work am I doing?
- Well, work is force times the distance, but not just the
- total force.
- The magnitude of the force in the direction of the distance.
- So what's the magnitude of the force in the
- direction of the distance?
- It would be the horizontal component of this force
- vector, right?
- So it would be 100 newtons times the
- cosine of 60 degrees.
- It will tell you how much of that 100
- newtons goes to the right.
- Or another way you could view it if this
- is the force vector.
- And this down here is the distance vector.
- You could say that the total work you performed is equal to
- the force vector dot the distance vector, using the dot
- product-- taking the dot product, to the force and the
- distance factor.
- And we know that the definition is the magnitude of
- the force vector, which is 100 newtons, times the magnitude
- of the distance vector, which is 10 meters, times the cosine
- of the angle between them.
- Cosine of the angle is 60 degrees.
- So that's equal to 1,000 newton meters
- times cosine of 60.
- Cosine of 60 is what?
- It's 1/2.
- So 1,000 newton meters times 1/2.
- So it becomes 500 joules.
- But the important thing to realize is that the dot
- product is useful.
- It applies to work.
- It actually calculates what component of what vector goes
- in the other direction.
- Now you could interpret it the other way.
- You could say this is the magnitude of a
- times b cosine of theta.
- And that's completely valid.
- And what's b cosine of theta?
- Well, if you took b cosine of theta, and you could work this
- out as an exercise for yourself, that's the amount of
- the magnitude of the b vector that's
- going in the a direction.
- So it doesn't matter what order you go.
- So when you take the cross product, it matters whether
- you do a cross b, or b cross a.
- But when you're doing the dot product, it doesn't matter
- what order.
- So b cosine theta would be the magnitude of vector b that
- goes in the direction of a.
- So if you were to draw a perpendicular line here, b
- cosine theta would be this vector.
- That would be b cosine theta.
- The magnitude of b cosine theta.
- So you could say how much of vector b goes in the same
- direction as a?
- Then multiply the two magnitudes.
- Or you could say how much of vector a goes in the same
- direction is vector b?
- And then multiply the two magnitudes.
- And now, this is, I think, a good time to just make sure
- you understand the difference between the dot product and
- the cross product.
- The dot product ends up with just a number.
- You multiply two vectors and all you have is a number.
- You end up with just a scalar quantity.
- And why is that interesting?
- Well, it tells you how much do these-- you could almost say--
- these vectors reinforce each other.
- Because you're taking the parts of their magnitudes that
- go in the same direction and multiplying them.
- The cross product is actually almost the opposite.
- You're taking their orthogonal components, right?
- The difference was, that was a times b times sine of theta.
- I don't want to mess you up this picture too much.
- But you should review the cross product videos.
- And I'll do another video where I actually compare and
- contrast them.
- But the cross product is, you're saying, let's multiply
- the magnitudes of the vectors that are perpendicular to each
- other, that aren't going in the same direction, that are
- actually orthogonal to each other.
- And then, you have to pick a direction since you're not
- saying, well, the same direction that
- they're both going in.
- So you're picking the direction that's orthogonal to
- both vectors.
- And then, that's why the orientation matters and you
- have to take the right hand rule, because there's actually
- two vectors that are perpendicular to any other two
- vectors in three dimensions.
- Anyway, I'm all out of time.
- I'll continue this, hopefully not too confusing, discussion
- in the next video.
- I'll compare and contrast the cross
- product and the dot product.
- See you in the next video.
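To summarize the two computations worked through above (a compact restatement, using the same numbers as in the example):

a · b = |a| |b| cos(θ)
W = F · d = (100 N)(10 m)(cos 60°) = (1,000 N·m)(1/2) = 500 joules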
Hacking Quantum Cryptography Just Got Harder
With quantum encryption, in which a message gets encoded in bits represented by particles in different states, a secret message can remain secure even if the system is compromised by a malicious hacker.
VANCOUVER, British Columbia — No matter how complex they are, most secret codes turn out to be breakable. Producing the ultimate secure code may require encoding a secret message inside the quantum relationship between atoms, scientists say.
Artur Ekert, director of the Center for Quantum Technologies at the National University of Singapore, presented the new findings here at the annual meeting of the American Association for the Advancement of Science.
Ekert, speaking Saturday (Feb. 18), described how decoders can adjust for a compromised encryption device, as long as they know the degree of compromise.
The subject of subatomic particles is a large step away from the use of papyrus, the ancient writing material employed in the first known cryptographic device. That device, called a scytale, was used in 400 B.C. by Spartan military commanders to send coded messages to one another. The commanders would wrap strips of papyrus around a wooden baton and write the message across the strips so that it could be read only when the strips were wrapped around a baton of matching size. [The Coolest Quantum Particles Explained]
Later, the technique of substitution was developed, in which the entire alphabet would be shifted, say, three characters to the right, so that an "a" would be replaced by "d," and "b" replaced by "e," and so on. Only someone who knew the substitution rule could read the message. Julius Caesar employed such a cipher scheme in the first century B.C.
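The shift-by-three rule is easy to express in code. A quick PHP illustration (ours, not from the article; the sample plaintext is made up):

$alphabet = 'abcdefghijklmnopqrstuvwxyz';
$shifted = substr($alphabet, 3) . substr($alphabet, 0, 3); // 'defg...abc'
echo strtr('attack at dawn', $alphabet, $shifted); // prints "dwwdfn dw gdzq"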
Over time, ciphers became more and more complicated, so that they were harder and harder to crack. Harder, but not impossible.
"When you look at the history of cryptography, you come up with a system, and sooner or later someone else comes up with a way of breaking the system," Ekert said. "You may ask yourself: Is it going to be like this forever? Is there such a thing as the perfect cipher?"
The perfect cipher
The closest thing to a perfect cipher involves what's called a one-time pad.
"You just write your message as a sequence of bits and you then add those bits to a key and obtain a cryptogram," Ekert said."If you take the cryptogram and add it to the key, you get plain text. In fact, one can prove that if the keys are random and as long as the messages, then the system offers perfect security."
In theory, it's a great solution, but in practice, it has been hard to achieve. [10 Best Encryption Software Products]
"If the keys are as long as the message, then you need a secure way to distribute the key," Ekert said.
The nature of physics known as quantum mechanics seems to offer the best hope of knowing whether a key is secure.
Quantum mechanics says that certain properties of subatomic particles can't be measured without disturbing the particles and changing the outcome. In essence, a particle exists in a state of indecision until a measurement is made, forcing it to choose one state or another. Thus, if someone made a measurement of the particle, it would irrevocably change the particle.
If an encryption key were encoded in bits represented by particles in different states, it would be immediately obvious when a key was not secure because the measurement made to hack the key would have changed the key.
This, of course, still depends on the ability of the two parties sending and receiving the message to be able to independently choose what to measure, using a truly random number generator — in other words, exercising free will — and using devices they trust.
But what if a hacker were controlling one of the parties, or tampering with the encryption device?
Ekert and his colleagues showed that even in this case, if the messaging parties still have some free will, their code could remain secure as long as they know to what degree they are compromised.
In other words, a random number generator that is not truly random can still be used to send an undecipherable secret message, as long as the sender knows how random it is and adjusts for that fact.
"Even if they are manipulated, as long as they are not stupid and have a little bit of free will, they can still do it," Ekert said.
Zoologger is our weekly column highlighting extraordinary animals – and occasionally other organisms – from around the world
Step from a sunlit hillside into the darkness of a cave, and you immediately have a problem: you can't see. It's best to stand still for a few minutes until your eyes adjust to the dimness, otherwise you might blunder into a hibernating bear that doesn't appreciate your presence.
The same thing will happen when you leave again: the brightness of the sun will dazzle you at first. That's because your eyes have two types of receptor: one set works in bright light and the other in dim light. Barring a few minutes around sunset, only one set of receptors is ever working at any given time.
Peters' elephantnose fish has no such limitations. Its peculiar eyes allow it to use the two types of receptor at the same time. That could help it to spot predators as they approach through the murky water it calls home.
Peters' elephantnose fish belongs to a large family called the elephantfish, all of which live in Africa. They get their name from the trunk-like protrusions on the front of their heads. But whereas the trunks of elephants are extensions of their noses, the trunks of elephantfish are extensions of their mouths.
To find a Peters' elephantnose fish, you must lurk in muddy, slow-moving water. Look closely, because the fish is brown and so is the background.
It finds its way through the murk using its trunk, which generates a weak electrical field that helps it sense its surroundings and even discriminate between different objects. The fish's electric sense allows it to hunt insect larvae in pitch darkness.
The fish has paid a price for its electrical sensitivity. Processing the signals takes brainpower, so it has an exceptionally large brain. As a result, 60 per cent of the oxygen taken in by the fish goes to its brain. Even humans, with our whopping brains, only devote 20 per cent of our oxygen to them.
Now for its eyes. Most vertebrates, including humans, have two types of light receptors on their retinas: rods and cones. Rods can sense dim light, but become bleached in bright light and stop working. Cones can't see in dim light, but given enough light they can see fine details and colours.
Most animals' eyes are specialised for one or the other. Animals that are active during the day tend to have more cones than nocturnal animals such as foxes. In the human eye, the cones are clustered in a central region called the fovea, where the light is sharply focused, and the rods are outside it. As a result, we have excellent daytime vision and rather poor night vision.
The retina of the Peters' elephantnose fish looks completely different. It is covered with cup-shaped depressions. Around 30 cones sit inside each cup, and a few hundred rods are buried underneath.
Because of the peculiar design of the fish's retina, it was thought to be blind until about 10 years ago, says Andreas Reichenbach of the Paul Flechsig Institute for Brain Research in Leipzig, Germany. Reichenbach has now worked out what the cups are for.
Each cup has a layer of massive cells that are full of guanine crystals. These form a mirrored surface that amplifies the light intensity within the cups, ensuring that the cones have enough light to work with.
At the same time, because the cups are eating up so much of the light, only a small amount reaches the cones. As a result, both sets of receptors are supplied with the right amount of light.
Yet when Reichenbach tested the fishes' vision, they didn't seem to do very well. For instance, they could only see objects that covered a big swathe of their visual field. If humans had vision that bad, we would miss any object whose width was less than one sixth of a full moon.
However, the Peters' elephantnose fish were very good at spotting large moving objects against a cluttered background – essential for fish that live in dirty water. Presented with a monitor displaying a black stimulus on a white background, they took as long to spot it as goldfish. But when a grey noise pattern – like an untuned TV – was superimposed, the elephantnose fish spotted the stimulus faster than the goldfish.
The fish's ability to see the wood for the trees probably helps it spot incoming predators like catfish. So Reichenbach thinks its oddball visual system isn't a mistake. "It's the right type for this fish," he says.
Journal reference: Science, DOI: 10.1126/science.1218072
Have your say
Thu Jun 28 22:15:54 BST 2012 by Freederick
If I understand correctly, each cup-shaped depression serves as a single aggregate receptor, combining the output of all the individual light-sensitive cells comprising it.
In effect, the fish is trading resolution for sensitivity. This is the same sort of effect as used to be employed in high-ISO photographic film, where the larger, flattened grains of photosensitive chemicals resulted in high sensitivity, at the cost of a coarse-grained image.
The fish employs an even more effective method, effectively combining many smaller "grains" into one huge hypersensitive receptor.
- 1 hammer
- something hard and resonant (and inanimate) to bang it on
- 1 clock with a second hand
- 1 measuring tape
- 1 helper
- 1 pair of binoculars
Sound travels at 344 metres per second in air at 20 °C. This is slow enough for noises to be noticeably delayed when heard from even quite a short distance. You can use this effect to measure the speed of sound.
Ask your helper to a hit a wall or a piece of metal repeatedly with the hammer, about twice every second. The exact frequency of the beat doesn't matter; it can be measured later. But the beat should be regular.
Now start walking away, looking back from time to time to watch your helper pounding away as you listen to the sound of their hammering. As the distance increases, the delay after each beat before the sound arrives will become longer and longer.
Eventually the delay ...
National Weather Service
Contents: About, Graph, Status Maps, History Button, Credits
To use this website, click on the appropriate REGION. This will update the list of STATIONS and show a "Status Map" for that region. Click on your desired station, either on the map or in the list of STATIONS. This will bring up a graph of the total water level, as well as a text file that contains the numbers used in the graph.
The graph combines several sources of data to produce a total water level prediction. To do so, it graphs the observed water levels in comparison to the predicted tide and predicted surge before the current time. This allows it to compute the "Anomaly". The "Anomaly" is the amount of water that was not predicted by either the tide or the storm surge model. This "Anomaly" is averaged over 5 days, and is then added to the future predictions of the tide and storm surge to predict the Total Water Level. Example:
The first thing one notices is that there are two magenta vertical lines. The earlier one is when the storm surge model was run. It is run at 0Z and 12Z every day and the text form is available at: http://www.nws.noaa.gov/mdl/marine/etsurge.htm. The later magenta line is when the graph was generated. It is currently being generated 15 minutes after the top of every hour. (This is also the date that follows the label.)
The next thing one notices are the horizontal lines labeled MLLW, MSL, MHHW, and MAT. These stand for the Mean Lower Low Water, Mean Sea Level, Mean Higher High Water, and Maximum Astronomical Tide. MAT was computed using our tide model, by computing the maximum of the predicted value for every hour (on the hour) for 19 years. The thought is that there is probably flooding if the total water level crosses MAT. The other datums came from http://www.co-ops.nos.noaa.gov/data_res.html.
One might next notice the red observation line. This is based on data obtained from Tides Online. Please see their Disclaimer for information as to the quality of these observations. If there is no red line, then either Tides Online does not have data for that station, or there has been a communications breakdown. In this case, the graph computes an anomaly based on what data it has, or sets it to 0. Then it predicts the total water level for all hours, or after the last of any observations it does have.
The next thing of interest is the blue Tide line. This is the astronomical tide at every hour. The Harmonic Constants used were obtained from http://www.co-ops.nos.noaa.gov/data_res.html.
We then note the gold storm surge curve, which is created by "pasting" one 48 hour prediction to the next 48 hour prediction. That is, using 12 hours from each prediction until the last prediction where we use 48 hours. The result is that we may generate kinks in the curve every 12 hours, where the model adjusted its prediction based on new data from the GFS wind model.
Next we note the green curve, which is the "Anomaly" referred to above. This is simply the observation - (tide + storm surge). Preferably it is constant. The amount of deviation from a constant is an approximation of our error. Since we add the 5 day average of this value to our prediction, the perfect forecast does not have to have a zero Anomaly.
Finally we see the black forecast curve. This is what we are really interested in, which is the total water level created by adding the 5 day average anomaly to the predicted tide, and the predicted storm surge.
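In short, the quantities described above are related as follows (a restatement in our own shorthand, not the product's notation):

Anomaly(t) = Observation(t) - [ Tide(t) + Surge(t) ]
Total Water Level forecast(t) = Tide(t) + Surge(t) + (average of the Anomaly over the previous 5 days)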
The history button allows one to see how the model has done over the last day or so. It displays 3 graphs. The first one is the current graph based on the current model run, and the current observations. The second graph is the last graph generated using the last model run. The third graph is the last graph generated using the next to last model run. This gives a view of the model over the last 24 to 36 hours depending on when the current time is.
To print this page out (Netscape instructions) it is recommended that you right click on the history frame and choose "Open Frame in New Window". Then choose page setup, and set the top and bottom margins to 0. Then choose print, and preferably send it to a color printer, (although a black and white does work). The result should be 3 graphs on the same page.
We would like to thank the following people/organizations: | <urn:uuid:2ef9c003-396e-4752-b9a6-dcc3dae7c434> | 3.171875 | 985 | Documentation | Science & Tech. | 64.47905 | 360 |
The associative array -- an indispensable data type used to describe a collection of unique keys and associated values -- is a mainstay of all programming languages, PHP included. In fact, associative arrays are so central to the task of Web development that PHP supports dozens of functions and other features capable of manipulating array data in every conceivable manner. Such extensive support can be a bit overwhelming to developers seeking the most effective way to manipulate arrays within their applications. In this article, I'll offer 10 tips that can help you shred, slice and dice your data in countless ways.
1. Adding Array Elements
PHP is a weakly typed language, meaning you're not required to explicitly declare an array nor its size. Instead you can both declare and populate the array simultaneously:
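For example, a small $capitals array like the one used in the examples below can be created in a single step (only the entries that appear later in this article are shown):

$capitals = array(
'Alabama' => 'Montgomery',
'Alaska' => 'Juneau',
'Arizona' => 'Phoenix'
);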
Additional array elements can be appended like this:
$capitals['Arkansas'] = 'Little Rock';
If you're dealing with numerically indexed arrays and would rather prepend and append elements using an explicitly-named function, check out the array_push() and array_unshift() functions (these functions don't work with associative arrays).
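For instance, with a hypothetical numerically indexed array used only for illustration:

$stack = array('b', 'c');
array_push($stack, 'd'); // $stack is now array('b', 'c', 'd')
array_unshift($stack, 'a'); // $stack is now array('a', 'b', 'c', 'd')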
2. Removing Array Elements
To remove an element from an array, use the unset() function:
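For example, to drop the element that was appended above:

unset($capitals['Arkansas']);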
When using numerically indexed arrays you have a bit more flexibility in terms of removing array elements in that you can use the array_shift() and array_pop() functions to remove an element from the beginning and end of the array, respectively.
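Continuing with the illustrative numerically indexed $stack array from above:

$first = array_shift($stack); // removes and returns 'a'
$last = array_pop($stack); // removes and returns 'd'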
3. Swapping Keys and Values
Suppose you wanted to create a new array called $states, which would use state capitals as the index and state names as the associated value. This task is easily accomplished using the array_flip() function:
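For example:

$states = array_flip($capitals);
// $states['Phoenix'] now contains 'Arizona', and so on.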
4. Merging Arrays

Suppose the previous arrays were used in conjunction with a Web-based "flash card" service, and you wanted to provide students with a way to test their knowledge of worldwide capitals, U.S. states included. You can merge arrays containing both state and country capitals using the array_merge() function:
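For example, assuming a second associative array named $countryCapitals (not shown in this excerpt) built along the same lines:

$worldCapitals = array_merge($capitals, $countryCapitals);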
5. Applying a Callback to Every Element

Suppose the data found in an array potentially contains capitalization errors, and you want to correct these errors before inserting the data into the database. You can use the array_map() function to apply a callback to every array element:
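For example, lowercasing every value and then capitalizing each word should repair most entries (strtolower and ucwords are used here simply as illustrative callbacks):

$capitals = array_map('strtolower', $capitals);
$capitals = array_map('ucwords', $capitals);
// 'PHOENIX' or 'phoenix' both become 'Phoenix'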
6. Iterating with the SPL's ArrayObject

The Standard PHP Library (SPL) offers developers quite a few data structures, iterators, interfaces, exceptions and other features not previously available within the PHP language. Among these features is the ability to iterate over an array using a convenient object-oriented syntax:
$capitals = array(
'Arizona' => 'Phoenix',
'Alaska' => 'Juneau',
'Alabama' => 'Montgomery'
$arrayObject = new ArrayObject($capitals);
foreach ($arrayObject as $state => $capital)
printf("The capital of %s is %s<br />", $state, $capital);
// The capital of Arizona is Phoenix
// The capital of Alaska is Juneau
// The capital of Alabama is Montgomery
This is just one of countless great features bundled into the SPL; be sure to consult the PHP documentation for more information.
About the Author
Jason Gilmore is the founder of the publishing and consulting firm WJGilmore.com. He also is the author of several popular books, including "Easy PHP Websites with the Zend Framework", "Easy PayPal with PHP", and "Beginning PHP and MySQL, Fourth Edition". Follow him on Twitter at @wjgilmore. | <urn:uuid:8d9d41d5-5f23-454d-95a8-8abc57ab3eae> | 3.046875 | 722 | Tutorial | Software Dev. | 24.560172 | 361 |
Researchers at UCLA have built a cheap, optics-free holographic microscope capable of detecting bacteria like E. coli in things like water, food, and blood. And by cheap, we mean really cheap. The researchers say it costs less than $100 to build.
The microscope has two ways of analyzing samples: a transmission mode and a reflection mode. The transmission mode is good for transparent media, like thin slices of a sample or clear liquids. In this case, the microscope’s laser can easily penetrate and analyze microscopic objects. For denser, more solid samples the microscope uses holography to generate a 3-D image of the sample that can be beamed to remote computers for further analysis if necessary. In reflection mode, the microscope basically splits the laser beam using a mirror. It then uses one half of the beam to illuminate the sample. On the other side the sample beam and the control beam are recombined. Some “clever mathematics” can then use the resulting changes in the beam to generate a 3-D image of the object sampled.
But while that may sound fairly high-tech, there are no expensive optics or other pricey components required. The photo sensors are of the variety often found in smartphones, and small lasers like the one used in the device are really inexpensive these days as well. That all means that these holographic microscopes could be widely deployed at little cost.
And that’s the idea. Places that don’t have access to high-tech diagnostic equipment could use these devices to sample food and water--or even human blood--for harmful bugs and beam the images to more powerful computing devices elsewhere for analysis or diagnosis. That could help contain contaminations and outbreaks faster, saving lives while keeping costs down.
Five amazing, clean technologies that will set us free, in this month's energy-focused issue. Also: how to build a better bomb detector, the robotic toys that are raising your children, a human catapult, the world's smallest arcade, and much more. | <urn:uuid:bbb7aa65-c191-461e-a3b7-5d90d73beae4> | 3.125 | 413 | News Article | Science & Tech. | 44.394353 | 362 |
CSIRO Marine and Atmospheric Research, Barrie Hunt, says 'Despite 2010 being a very warm year globally, the severity of the 2009-2010 northern winter and a wetter and cooler Australia in 2010 relative to the past few years have been misinterpreted by some to imply that climate change is not occurring.'
'Recent wet conditions in eastern Australia mainly reflect short-term climate variability and weather events, not longer-term climate change trends. Conclusions that climate is not changing are based on a misunderstanding of the roles of climatic change caused by increasing greenhouse gases and climatic variability due to natural processes in the climatic system.
'These two components of the climate system interact continuously, sometimes enhancing and sometimes counteracting one another to either exacerbate or moderate climate extremes.'
Mr Hunt says his climatic model simulations support what is clear from recent observations – that in addition to the role of climate change linked to human activity, natural variability produces periods where the global climate can be either cooler or warmer than usual. Mr Hunt’s results were published in the latest edition of the international journal Climate Dynamics.
He says some such natural temperature variations can last for 10 to 15 years, with persistent variations of about 0.2°C.
'Such natural variability could explain the above average temperatures observed globally in the 1940s, and the warm but relatively constant global temperatures of the last decade.'
Mr Hunt also found that seasonal cold spells will still be expected under enhanced greenhouse conditions. For example, monthly mean temperatures up to 10°C below present values were found to occur over North America as late as 2060 in model simulations, with similar cold spells over Asia. Variations of up to 15°C below current temperatures were found to occur on individual days, even in 2060, despite a long-term trend of warming on average.
'These results suggest that a few severe winters in the Northern hemisphere are not sufficient to indicate that climatic change has ceased. The long-term trends that characterise climate change can be interpreted only by analysing many years of observations.'
'Future changes in global temperature as the concentration of greenhouse gases increases will not show a simple year-on-year increase but will vary around a background of long-term warming. Winters as cold as that recently experienced in the Northern Hemisphere, however, will become progressively less frequent as the greenhouse effect eventually dominates,' Mr Hunt said.
This underlying warming trend, reflected in the projections of future climate and the observation that the past decade has been the warmest in the instrumental record, underline the need to both adapt to what is now inevitable change and mitigate even greater changes.
Photographs are copyright by law. If you wish to use or buy a photograph you must contact the photographer directly (there is a hyperlink in most cases to their website, or do a Google search.) with your request.
Please do not contact
as we cannot give permission for use of other photographer’s images. | <urn:uuid:df0d3c64-9bd5-4d02-98f8-7a7c217e5a58> | 3.40625 | 601 | Knowledge Article | Science & Tech. | 26.796172 | 363 |
June 14, 2007 At one time Cyclone Gonu was a powerful Category 5 storm packing sustained winds of 160 mph (139 knots), according to the Joint Typhoon Warning Center, making it the most powerful cyclone ever to threaten the Arabian Peninsula since record keeping began back in 1945. Fortunately the storm weakened significantly by the time it brushed the far eastern tip of Oman, but it still threatened petroleum shipping lanes in the northern part of the Arabian Sea that are unprepared for such an intense cyclone.
While tropical cyclones occasionally form in the Arabian Sea, they rarely exceed tropical storm intensity. In 2006, Tropical Storm Mukda was the only tropical system to form in the region and it remained well out to sea before dissipating.
Gonu became a tropical storm on the morning (local time) of Sat., Jun. 2, in the east-central Arabian Sea. After some initial fluctuations in direction, it settled on a northwesterly track and began to intensify. Gonu strengthened from tropical storm intensity on the morning of June 3 to Category 2 that night. By daybreak on June 4, Gonu had intensified to Category 4 with winds estimated at 132 mph (115 knots).
NASA's Tropical Rainfall Measuring Mission (TRMM) satellite captured an image of Gonu as it was moving northwest through the central Arabian Sea. Taken on Mon., Jun. 4 at 0323 UTC (11:23 p.m. EDT on Sun., Jun. 3), it shows the horizontal distribution of rain intensity looking down on the storm. TRMM reveals the tell-tale signs of a potent storm. Not only does Gonu have a complete, well-formed symmetrical eye surrounded by an intense eyewall (innermost red ring), this inner eyewall is surrounded by a concentric outer eyewall (outermost red and green ring). This double eyewall structure only occurs in very intense storms. Eventually the outer eyewall will contract and replace the inner eyewall.
Another image provides a unique 3-D perspective of Gonu using data collected from the TRMM Precipitation Radar from the same overpass as the previous image. Higher radar echo tops are indicated in red. The areas of intense rain in the previous image are associated with deep convective towers both in the innermost eyewall and in parts of outer eyewall. The inner ring has the higher tops at this time. Deep convective towers near the storm's center can be a precursor to future strengthening as they indicate that large amounts of heat are being released into the storm's core. At the time of these images, Gonu was a Category 4 cyclone. Several hours later, Gonu reached Category 5 intensity.
The system finally began to weaken during the night of June 4 and was downgraded to a Category 3 storm at 1200 UTC (8:00 a.m. EDT) on June 5.
NASA's Quikscat spacecraft also observed Gonu. Its SeaWinds scatterometer, a specialized microwave radar, measured near-surface wind speed and direction within the storm.
Gonu continued to weaken as it neared the coast of Oman. The center remained just offshore Oman's northeast coast as a Category 1 storm before turning northward towards Iran, where it is expected to make landfall as a tropical storm.
TRMM is a joint mission between NASA and the Japanese space agency JAXA. QuikScat is managed by NASA's Jet Propulsion Laboratory. Images produced by Hal Pierce (SSAI / NASA GSFC). Caption by Steve Lang (SSAI / NASA GSFC), Mike Bettwy (RSIS / NASA GSFC), and NASA/JPL/QuikScat Science Team.
Other social bookmarking and sharing tools:
Note: Materials may be edited for content and length. For further information, please contact the source cited above.
Note: If no author is given, the source is cited instead. | <urn:uuid:304f6c93-31f0-49d8-9017-4c0c2dd36ed6> | 3.140625 | 806 | News Article | Science & Tech. | 51.248756 | 364 |
The chemical traces of water have been found in this moon rock, called the Genesis Rock. The moon rock was collected by astronauts during the Apollo 15 mission in 1971 and is thought to be a piece of the moon's primordial crust. Image: NASA/Johnson Space Center
The discovery of "significant amounts" of water in moon rock samples collected by NASA's Apollo astronauts is challenging a longstanding theory about how the moon formed, scientists say.
Since the Apollo era, scientists have thought the moon came to be after a Mars-size object smashed into Earth early in the planet's history, generating a ring of debris that slowly coalesced over millions of years.
That process, scientists have said, should have flung away the water-forming element hydrogen into space.
But a new study suggests the accepted scenario is not possible given the amount of water found in moon rocks collected from the lunar surface in the early 1970s during the Apollo 15, 16 and 17 missions. By "water," the researchers don't mean liquid water, but hydroxyl, a chemical that includes the hydrogen and oxygen ingredients of water.
Those water-forming elements would have been on the moon all along, the scientist said. [Water on the Moon: The Search in Photos]
"I still think the impact scenario is the best formation scenario for the moon, but we need to reconcile the theory of hydrogen," study leader Hejiu Hui, an engineering researcher at the University of Notre Dame, told SPACE.com.
The results were published in Nature Geoscienceon Sunday (Feb. 17).
Water in moon's 'Genesis Rock'
Past studies have suggested water-forming elements came to the moon from outside sources long after the moon's crust cooled. The solar wind — a stream of particles emanating from the sun — as well as meteorites and comets were pegged as possible sources ofwater depositson the moon in recent studies.
But that explanation does not account for the amount of water found in the Apollo samples, the researchers stated in the new study.
Because they found hydroxyl deep inside each sampled rock, the scientists say they have eliminated the solar wind moon water explanation, because those particles can penetrate the surface only slightly. An impact from an asteroid or comet could push the hydrogen in further, but it would not be as pristine as the samples the researchers observed, because it would have melted from the heat of the asteroid collision.
Researchers probed samples from the late Apollo missions, including the famous "Genesis Rock" that was named for its advanced age of 4.5 billion years, about the same time the moon is thought to have formed.
Using an infrared spectrometer, the researchers found water embedded in the Genesis Rock, as well as all the Apollo samples they studied. This implies that the various landing sites of Apollo 15, 16 and 17 each had water present.
Hui's research flies in the face of past analyses of Apollo rocks that found they were very dry, except for a small bit of water attributed to the rock containers leaking when they were returned to Earth.
Past instruments that analyzed these samples, however, were not very sensitive. Hui said those older spectrometers had a sensitivity of around 50 parts per million (ppm), while his instruments were able to detect water at concentrations of about 6 ppm in anorthosites and 2.7 ppm in troctolites, which are both igneous rocks found in the moon's crust.
Troctolites form in the highlands as part of the moon's highland upper crust, and anorthosites are believed to be a part of the moon's "primary" crust, which solidified around the same time as other bodies in the solar system. | <urn:uuid:6f64d0fb-6a9f-4041-a215-454df9c1625b> | 3.84375 | 762 | News Article | Science & Tech. | 45.452112 | 365 |
The magnitude system works quite well for quantifying the brightness of stars. We know that a 6th magnitude star will be barely visible to the unaided eye from rural areas, yet easily seen in even the smallest of telescopes.
The magnitude system doesn’t work as well for deep-sky objects. Consider the spiral galaxy M33 in Triangulum. Listed as a 6th magnitude object, it’s notoriously difficult to view in telescopes. M33 is elusive because its light is spread over an area four times that of the full moon. Defocus a 6th magnitude star until it’s that large and you’ll get the idea.
Another reason why M33 is such a demanding target is its location in a star-poor region of the late autumn sky. I usually find it by training my telescope on an area roughly 4 ½ degrees west and slightly north of alpha (a) Trianguli. You can also trace an imaginary line from the Andromeda Galaxy (M31) to the star beta (b) Andromedae, then extend an equal distance beyond (refer to the accompanying finder chart). In either case, begin a low power sweep of the area until you encounter a large, faint glow.
The key to observing M33 is to use an eyepiece that affords a field of view of at least 1½ to 2 degrees. One of the best views I’ve had of M33 was with a 4-inch f/4 RFT (the Edmund Astroscan) and a magnifying power of 16X. I’ve spotted it with 7X50 binoculars, and some observers even report seeing it with the unaided eye. The key, of course, is to conduct a search for M33 from a dark-sky site on a clear, moonless evening.
Numerous sources credit the discovery of M33 to Messier himself (in 1764); however evidence exists that the true discoverer may have been the Italian astronomer Giovanni Battista Hodiema over a century earlier.
M33 is part of the Local Group of galaxies that includes our Milky Way and the Andromeda Galaxy. It’s approximately half the size of the Milky Way and lies about 2.9 million light-years away. | <urn:uuid:a964755c-840c-4fb5-bf10-6a64094c1257> | 3.71875 | 470 | Knowledge Article | Science & Tech. | 60.508222 | 366 |
It came closer ... closer ... and then it started heading away. But you may not have noticed at all.
An asteroid passed relatively close to Earth around 2:24 p.m. ET Friday. As scientists had been predicting all week, it did not hit.
A different and unrelated small asteroid entered the atmosphere over Russia on Friday, hours before the much larger asteroid's fly-by, injuring about 1,000 people. Scientists say that incident was a pure coincidence.
The larger asteroid, called 2012 DA14, never got closer than 17,100 miles to our planet's surface.
Stargazers in Australia, Asia and Eastern Europe could see the asteroid with the aid of a telescope or binoculars. At the Gingin Observatory in Australia, the asteroid appeared as a bright white streak as viewers watched a live NASA video feed.
Scientists are studying this asteroid so extensively that they can already predict its path for most of the 21st century, said Paul Chodas of NASA's Near Earth Object team.
But it is only one of thousands of objects that are destined to one day enter our neighborhood in space.
"There are lots of asteroids that we're watching that we haven't yet ruled out an Earth impact (for), but all of them have an impact probability that is very, very low," Don Yeomans, manager of the Near-Earth Object Program Office at NASA's Jet Propulsion Laboratory, said at a press briefing.
The long and short of it
The asteroid is thought to be 45 meters -- about half a football field -- long. Current estimates suggest that the Russian meteor -- which was a tiny asteroid before it hit the Earth's atmosphere -- was only 15 meters wide, making it much harder to detect. | <urn:uuid:6b71fad7-574b-4701-a7ca-2b35c09beeb0> | 3.4375 | 356 | News Article | Science & Tech. | 61.130968 | 367 |
A new world record wind gust: 253 mph in Australia's Tropical Cyclone Olivia
The 6,288-foot peak of New Hampshire's Mount Washington is a forbidding landscape of wind-swept barren rock, home to some of planet Earth's fiercest winds. As a 5-year old boy, I remember being blown over by a terrific gust of wind on the summit, and rolling out of control towards a dangerous drop-off before a fortuitously-placed rock saved me. Perusing the Guinness Book of World Records as a kid, three iconic world weather records always held a particular mystique and fascination for me: the incredible 136°F (57.8°C) at El Azizia, Libya in 1922, the -128.5°F (-89.2°C) at the "Pole of Cold" in Vostok, Antarctica in 1983, and the amazing 231 mph wind gust (103.3 m/s) recorded in 1934 on the summit of Mount Washington, New Hampshire. Well, the legendary winds of Mount Washington have to take second place now, next to the tropical waters of northwest Australia. The World Meteorological Organization (WMO) has announced that the new world wind speed record at the surface is a 253 mph (113.2 m/s) wind gust measured on Barrow Island, Australia. The gust occurred on April 10, 1996, during passage of the eyewall of Category 4 Tropical Cyclone Olivia.
Figure 1. Instruments coated with rime ice on the summit of Mt. Washington, New Hampshire. Image credit: Mike Theiss.
Tropical Cyclone Olivia
Tropical Cyclone Olivia was a Category 4 storm on the U.S. Saffir-Simpson scale, and generated sustained winds of 145 mph (1-minute average) as it crossed over Barrow Island off the northwest coast of Australia on April 10, 1996. Olivia had a central pressure of 927 mb and an eye 45 miles in diameter at the time, and generated waves 21 meters (69 feet) high offshore. According to Black et al. (1999), the eyewall likely had a tornado-scale mesovortex embedded in it that caused the extreme wind gust of 253 mph. The gust was measured at the standard measuring height of 10 meters above ground, on ground at an elevation of 64 meters (210 feet). A similar mesovortex was encountered by a Hurricane Hunter aircraft in Hurricane Hugo of 1989, and a mesovortex was also believed to be responsible for the 239 mph wind gust measured at 1400 meters by a dropsonde in Hurricane Isabel in 2003. For reference, 200 mph is the threshold for the strongest category of tornado, the EF-5, and any gusts of this strength are capable of causing catastrophic damage.
Figure 2. Visible satellite image of Tropical Cyclone Olivia a few hours before it crossed Barrow Island, Australia, setting a new world-record wind gust of 253 mph. Image credit: Japan Meteorological Agency.
Figure 3. Wind trace taken at Barrow Island, Australia during Tropical Cyclone Olivia. Image credit: Buchan, S.J., P.G. Black, and R.L. Cohen, 1999, "The Impact of Tropical Cyclone Olivia on Australia's Northwest Shelf", paper presented at the 1999 Offshore Technology Conference in Houston, Texas, 3-6 May, 1999.
Why did it take so long for the new record to be announced?
The instrument used to take the world record wind gust was funded by a private company, Chevron, and Chevron's data was not made available to forecasters at Australia's Bureau of Meteorology (BOM) during the storm. After the storm, the tropical cyclone experts at BOM were made aware of the data, but it was viewed as suspect, since the gusts were so extreme and the data was taken with equipment of unknown accuracy. Hence, the observations were not included in the post-storm report. Steve Buchan from RPS MetOcean believed in the accuracy of the observations, and coauthored a paper on the record gust, presented at the 1999 Offshore Technology Conference in Houston (Buchan et al., 1999). The data lay dormant until 2009, when Joe Courtney of the Australian Bureau of Meteorology was made aware of it. Courtney wrote up a report, coauthored with Steve Buchan, and presented this to the WMO extremes committee for ratification. The report has not been made public yet, and is awaiting approval by Chevron. The verified data will be released next month at a World Meteorological Organization meeting in Turkey, when the new world wind record will become official.
New Hampshire residents are not happy
Residents of New Hampshire are understandably not too happy about losing their cherished claim to fame. The current home page of the Mount Washington Observatory reads, "For once, the big news on Mount Washington isn't our extreme weather. Sadly, it's about how our extreme weather--our world record wind speed, to be exact--was outdone by that of a warm, tropical island".
Comparison with other wind records
Top wind in an Atlantic hurricane: 239 mph (107 m/s) at an altitude of 1400 meters, measured by dropsonde in Hurricane Isabel (2003).
Top surface wind in an Atlantic hurricane: 211 mph (94.4 m/s), Hurricane Gustav, Paso Real de San Diego meteorological station in the western Cuban province of Pinar del Rio, Cuba, on the afternoon of August 30, 2008.
Top wind in a tornado: 302 mph (135 m/s), measured via Doppler radar at an altitude of 100 meters (330 feet), in the Bridge Creek, Oklahoma tornado of May 3, 1999.
Top surface wind not associated with a tropical cyclone or tornado: 231 mph (103.3 m/s), April 12, 1934 on the summit of Mount Washington, New Hampshire.
Top wind in a typhoon: 191 mph (85.4 m/s) on Taiwanese Island of Lanya, Super Typhoon Ryan, Sep 22, 1995; also on island of Miyakojima, Super Typhoon Cora, Sep 5, 1966.
Top surface wind not measured on a mountain or in a tropical cyclone: 207 mph (92.5 m/s) measured in Greenland at Thule Air Force Base on March 6, 1972.
Top wind measured in a U.S. hurricane: 186 mph (83.1 m/s) measured at Blue Hill Observatory, Massachusetts, during the 1938 New England Hurricane.
Buchan, S.J., P.G. Black, and R.L. Cohen, 1999, "The Impact of Tropical Cyclone Olivia on Australia's Northwest Shelf", paper presented at the 1999 Offshore Technology Conference in Houston, Texas, 3-6 May, 1999.
Black, P.G., Buchan, S.J., and R.L. Cohen, 1999, "The Tropical Cyclone Eyewall Mesovortex: A Physical Mechanism Explaining Extreme Peak Gust Occurrence in TC Olivia, 4 April 1996 on Barrow Island, Australia", paper presented at the 1999 Offshore Technology Conference in Houston, Texas, 3-6 May, 1999. | <urn:uuid:3cf8391c-7628-4b73-b23d-af8d16292401> | 2.984375 | 1,482 | Personal Blog | Science & Tech. | 62.804295 | 368 |
Microformats in Context
There has been a lot of discussion in XML circles as to how far the extensibility revolution promised by XML can take (or has taken) us. Is XML really a tool for creating specialized languages so that information can be expressed in the most natural formats practical? Or is it just a way to reduce the burden on those who write code to consume web content (be strict in what you accept so that you can be liberal with your time spent fly-fishing). Are schema technologies a way to manage the flexibility that XML brings to the table, or just another weapon to put down users ("You don't validate. Go away")? Of course, the way I've posed these questions reveals my bias. I think that XML should be a tool for expressiveness and controlled diversity on the Web. I disagree strongly with the notion, recently expressed in a few quarters, that there are only a few viable XML formats, and that people should stop creating more. At the center of this controversy is the new Web 2.0 hotness: microformats. If you're not already familiar with this phenomenon, first read "What Are Microformats".
It's a DIV's World
Microformats enshrine the idea that rather than creating whole new vocabularies, developers should piggy-back off existing, widely supported and deployed formats such as XHTML. (In this article I'll focus mostly on microformats with XHTML as a host language.) The problem is that XHTML, at its best, does is good for basic document structure but, at its worst, tends to be used for the presentation of documents. Microformats are a lightweight way to express more specialized information within the structure of XHTML without changing its syntax. The idea is that the success of this approach rests on modest (hence "micro") constructs in modules that are mutually independent and focused on very specific domains. Through such simplicity and modularity microformats minimize the strain on the host languages, as well as the implementation effort and overall conceptual load.
Unfortunately, the strain is rarely avoided in practice. Many of the XHTML-based microformats
I've seen abuse the semantics of XHTML.
a/@rel tends to come in
for special abuse. The HTML 4.01 recommendation, whose semantics are adopted
by XHTML, says:
This attribute describes the relationship from the current document to the anchor specified by the href attribute. The value of this attribute is a space-separated list of link types.
A microformat, such as Google's
rel='nofollow', stretches this
definition to breaking. "Don't follow this link" is an instruction to the
user agent (more likely an automated agent such as a search index robot). This
is related to what was known as "actuation" in the XLink specification and
a very different matter from the conceptual relationship between the two documents.
I'll hasten to add that these problems are to some extent understood in the
microformats camp, and that there are some quite reasonable uses of
rel-tag. Then again there is
which is still designated a draft but does perpetuate
without any apology in the spec. The abuse of
a/@rev in the vote-links
microformats is an even more heinous example. Before you write off my complaints
about abuse of existing XHTML constructs as too rarefied and academic, consider
that it leads to a very real problem when microformats collide.
Will the Real rel Please Stand Up
There are only so many XHTML attributes to hitch a ride on, and if you can
stretch the semantics of each attribute pretty much to suit yourself, it's
inevitable that you will need to use clashing microformats. Imagine you have
a weblog that automatically asserts
rel='nofollow' on comment
links to discourage comment spam. An example comment looks as follows.
<p>Nice blog. Buy your medz <a href='http://medz.com' rel='nofollow'>here</a></p>
But you have another tool that looks for personnel links within your organization and marks them using a colleague designation in the XFN microformat.
<p>I just want to be sure your readers know we're aware of the stability problems with the latest release. I've posted some workarounds on <a href='http://mf-wizards.com/~jdoe/' rel='colleague'>my own blog</a>.</p>
You now have some sorting out to do. Of course you cannot have two
on the same element. You could set a priority that XFN annotation overrides
rel-'nofollow' (this is probably what you'd want in practice), but this means
that suddenly your microformats are no longer really independent, and they're
certainly not modular. Microformat tools have to be aware of the different
specs that might clash, and you introduce a bit of a negative network effect.
You could use the NMTOKENS escape hatch, which would mean that after both tools
have done their work the comment would look as follows:
<p>I just want to be sure your readers know we're aware of the stability problems with the latest release. I've posted some workarounds on <a href='http://mf-wizards.com/employees/jdoe/' rel='colleague nofollow'>my own blog</a>.</p>
One problem with this is that when you have a microformat such as XFN, which
already allows multiple tokens within
a/@rel, you're still inviting
clashes because it's not clear which tokens are part of XFN, and which come
from other conventions. It also becomes a land grab for terms across microformats.
rel='date' as a statement that you have a romantic
involvement with the person represented by the resource indicated by the
This could make for some stickiness in a microformat for references to calendar
rel='date' would have a markedly different meaning.
U. G. L. Y. You Ain't Got No Alibi...!
Another problem that stems from being restricted to a host language is that you often end up with very contorted and ugly constructs to force the fit. XOXO is an eminent example of this problem. I once did an exploration of XOXO as a language for exchanging weblog lists, rather than the more established, but quite awful, OPML. I ended up with something like Listing 1. | <urn:uuid:de843d34-aa1b-47ea-b9be-46a6fab4ff18> | 2.5625 | 1,380 | Personal Blog | Software Dev. | 52.931013 | 369 |
Note the cold spots are not along the geographic line from the N. Pole to the S. Pole but where the globe, tilted and tipped, is getting the least amount of sunshine! Depending on the tipping and what part of the globe is getting sunlight, the coldest spot is not the geographic N. Pole, but that part of the globe that gets less sunlight. On the weather maps below, the Italy Face makes Russia, not Sweden, receive less sunlight as Sweden is tilted toward the Sun. The Americas Face makes the areas NW of Hudson Bay, not the N. Pole, receive less sunlight as this part receives a shorter day as it is pushed into an early sunset during the tilt swing. The New Zealand face shows an uneven distribution of cold along latitude 60° as the globe is pushed up and away from the Sun over New Zeland. This is then pulled forward over the India Face for warmer temperatures North of Mongolia. The weather maps, and the verbal descriptions, match.
Dec 3: No precise sunset data because of the clouds, but but somewhere between 280° and 320°. [Assume 300°. Compass reading, subtract 30° for deviation. Skymap expects Azi 238°. Sunset NORTH by 32°]
Dec 19: I think we have passed the actual Solstice already a week ago at least, as the sun is already now again higher in my view in Europe than it was 2 weeks ago. I feel we have rolled by at least 12-30 degrees.
Dec 21: Sun set SSW rather than SW at Azi 225°.
Dec 2: Sunset early by 30 minutes, SOUTH.
Dec 3: Sunset SOUTH by 21°
Dec 5: Sunrise NORTH by 11°
Dec 11: Sunset SOUTH by 12°
Dec 11: Sunset SOUTH by 14°
Dec 13: Sunset SOUTH by 22°
Dec 14: Sunrise NORTH by 7°
Dec 26: Sunrise NORTH by 12°
Dec 6: Sunrise high by 19° NORTH, early.
Dec 7: Sunrise 50 minutes, NORTH.
Dec 8: dark in the 2nd week of Nov at 5 PM, normal for in the first week of Dec.
Dec 21: Sunset SOUTH by 14°
Dec 23: Sunset SOUTH by 16°
Dec 24: Sunset SOUTH by 18°
Dec 8: SOUTH by 38°!
Dec 10: Sunset SOUTH by 6°
Dec 18: Sunset SOUTH by 3°
Dec 22: Sunset SOUTH by 8°
Dec 18: Sunset 5° SOUTH.
Sunset Dec 3: SOUTH by 9°.
Sunrise Dec 11: SOUTH by 8°
Sunset Dec 11: SOUTH by 11°
Midday Dec 12: SOUTH by 25° and too HIGH by 15-20° deg!
Sunset Dec 12: SOUTH by 11°
Sunrise Dec 7: SOUTH, late by 47 minutes.
Sunset Dec 6: SOUTH, late by 28 minutes. | <urn:uuid:c22f6858-5b30-4acd-b9da-15d49797c5ba> | 2.8125 | 627 | Comment Section | Science & Tech. | 86.902402 | 370 |
Bob Henson | July 2, 2012 • With a ferocity to match the record heat it displaced, a thunderstorm complex raced from Illinois to the Delaware coast in a mere 12 hours on Friday evening, June 29. It knocked down countless trees and power lines, with wind gusts topping 80 miles per hour in many spots. It threw millions of people into turmoil, with air conditioners, computers, and phones out for days. And it brought to light a weather word du jour with an obscure but intriguing history.
This storm complex was a derecho (pronounced deh-REY-cho). It’s a phenomenon too infrequent to be familiar, but too dangerous to be ignored. A derecho is akin to the gust fronts we commonly experience when a thunderstorm arrives, except it plays out in far more spectacular fashion.
While the high winds of a tornado or hurricane spin around powerful, circular updrafts, a derecho’s wind consists of rain-cooled air that descends and plows into very warm, unstable air. Most such downbursts only span a few miles and last a few minutes, but sometimes the atmosphere is primed for this process to intensify in a repetitive fashion. In that case, the winds generate new thunderstorm updrafts as they push forward, and in turn, this creates more rain-fueled downdrafts. If there’s a brisk jet stream adding momentum to the successive downdrafts, then a derecho can race forward at interstate speeds along a track that’s almost bullet-straight, often traversing several states in a single day or night.
Most parts of the United States east of the Rockies experience a derecho about every year or two. They’re typically strongest and most common in two areas: the Corn Belt of the Midwest (where the June 29 event began) and the Ozark Mountain region, centered on southwest Missouri and northwest Arkansas.
Derechos favor the months of May, June, and July, when rain-cooled downdrafts can slam into extremely warm, moist air. That was certainly the case this time, as much of the Ohio Valley and mid-Atlantic were experiencing one of the hottest early-summer days in their weather history. The derecho lashed the Washington area just hours after Reagan National Airport hit 104°F, which was two degrees above the city’s previous June record.
While they don’t hold a candle to the worst tornadoes, derechos can easily inflict damage comparable to an EF1 twister on the enhanced Fujita scale, and their havoc is wreaked over a much larger area. In many parts of the central or eastern U.S., a wind gust of 100 mph (161 kph) is more likely to come from a derecho than a tornado. Virginia governor Bob McDonnell said the June 29 damage was his state’s most extensive for any single weather event outside of a hurricane.
The derecho’s name helps illuminate the meteorology behind it. In Spanish, derecho has several meanings, including “straight.” The word was plucked by Iowa scientist Gustavo Hinrichs in 1883 to describe a type of thunderstorm-related wind he dubbed “the straight blow of the prairies.” He may well have intended a direct contrast to tornadoes, whose Spanish root tornar means “to turn.”
Hinrichs discussed derechos in an 1888 article for the now-defunct American Meteorological Journal, accurately describing several aspects of the phenomenon based on an Iowa example. But after shifts in U.S. meteorology put a damper on severe weather research, the term languished in the meteorological dustbin for nearly a century, until it was revived in a 1987 paper by Robert Johns and William Hirt. (Here’s an essay (PDF) by Johns on the origin and use of the term.) After that, the term quickly caught on among meteorologists, although it’s only now entering more general use—a trend that might accelerate with coverage of the June 29 event.
By modern forecasting standards, the derecho in D.C. came as a relative surprise. Residents did get several hours of notice that wild weather was possible, thanks to a severe thunderstorm watch. And the arrival of the derecho itself was well warned. But less than 24 hours earlier, it wasn’t obvious that such a destructive event was in the cards for the Washington area.
On Friday morning, the New York Times’ national forecast called for thunderstorms from South Dakota and Nebraska to Maine and Massachusetts, with most producing little rain; the mid-Atlantic outlook focused on the heat risk. At NOAA’s Storm Prediction Center (SPC), the initial severe weather outlook for June 29, issued around 2 a.m. EDT, did not include Maryland or Virginia in its primary risk area for the day, and the odds of high wind in those states were pegged at less than 5%.
By morning, though, the signals were starting to come together in data from radiosondes (weather balloons) and forecasts from weather models, which increasingly pointed toward a storm complex moving from the Midwest toward the Appalachians. Derechos seldom cross the Appalachians intact, which keeps D.C.-area forecasters cautious about forecasting such a leap. Indeed, a storm complex that produced 80 to 90 mph winds in Chicago on Sunday, 1 July, fizzled en route. But on June 29, the extreme warmth and depth of the air mass, plus energy from the jet stream, kept the derecho powerful all the way to the Atlantic Ocean.
By 2:30 p.m. EDT, the fast-moving storms had already produced a 91-mph (147-kph) wind gust in Indiana. SPC raised the risk of damaging winds in the D.C. area and noted that “the system may continue to the coast.”
Over the last few weeks, NCAR’s advanced research version of the Weather Research and Forecasting model (dubbed ARW) has been producing detailed forecasts twice daily, in part to support a study of thunderstorms and air chemistry in Colorado, Oklahoma, and Arizona called DC3. These forecasts track the atmosphere at horizontal points separated by only 3 kilometers (1.9 miles), which is a sharp enough resolution to capture many aspects of a given day’s thunderstorm action.
The ARW outlook produced with data from 8:00 a.m. EDT captured the genesis of the derecho across Illinois and its rampage toward D.C. later that day. The model indicated a few pockets of surface wind of at least 35 meters per second (78 mph) along the derecho's path. (See map at left.)
Given its toll of damage and disruption—possibly the largest from a derecho in U.S. history—it’s tempting to call this event a super derecho. In fact, a group of scientists did just that for another powerful storm not long ago.
Clark Evans, a former NCAR postdoctoral researcher now at the University of Wisconsin–Milwaukee, teamed with Morris Weisman (NCAR) and Lance Bosart (University at Albany, State University of New York) to study a stupendous event from May 8, 2009, that they dubbed a “super derecho.” The storm complex moved from Kansas to Kentucky, spinning off tornadoes and developing a vortex that resembled a hurricane’s warm central core. Weisman and Evans analyzed the derecho in companion talks at a 2010 meeting of the American Meteorological Society, and the team now has a paper in the works on the event and how it was depicted by the ARW model. Some aspects were predicted 24 hours in advance.
How similar were the 2009 and 2012 derechos? While the 2009 event showed up in models a full day ahead, the details of the 2012 derecho didn’t begin crystallizing until a few hours before it formed. There were a number of other differences as well, including the time of day (nighttime vs. daytime), geographic location (central Plains vs. Ohio Valley and mid-Atlantic), tornadic activity (there was little to none with the 2012 event), and the presence of an intense vortex with the 2009 derecho.
“Though they both were high-end events responsible for significant damage and are part of the same archetype of meteorological phenomena, the two cases had many aspects that were quite different,” says Evans. | <urn:uuid:72904d0e-c03d-4a42-8d98-faa5ff0c59a1> | 2.9375 | 1,769 | Nonfiction Writing | Science & Tech. | 53.439911 | 371 |
A History of Innovation, a Path to the Future
March 1994 Visual C++ Strategy
The Microsoft Visual C++ development system family of products meets the need of developers to develop sophisticated Windows-based applications quickly. It also protects existing investments in the Microsoft Foundation Class Library (MFC) and the Windows API and establishes a clear path to developing 32-bit applications for Windows that run on multiple software and hardware platforms.
With the Visual C++ development system family of products, Microsoft’s strategy is to provide the shortest path for developers to access the full power of Windows, the smallest and fastest executables, and the safest investment for the future.
These fundamental principles have guided the development of Visual C++ and will continue to do so in the future. In order to deliver on these promises, we have chosen to focus our development efforts as follows:
C++ is the language we will focus on for the long term. We will implement new features in the language as standards emerge. However, in order to reduce the complexity that C++ brings with its power, we will implement a subset of the language in our class library and minimise the work involved in creating Windows-based applications in C++.
We believe that the greatest gains in productivity can be achieved through the use of a well-written class library. A key element of Visual C++ is the MFC architecture, which not only provides the fundamentals for developing Windows-based applications but also provides significant amounts of reusable code that developers can use in their applications. In addition, we have designed the Visual C++ development environment so that developers can easily exploit the full power of MFC.
Improvements in the development environment and build throughput provide the next level of productivity gains. Visual programming tools, integration and task orientation are the key areas of focus for achieving this objective with Visual C++.
Microsoft continues to lead the industry in C and C++ optimisations to help create the smallest and fastest executables possible. We have tuned MFC so that developers incur very little overhead in developing applications using C++ and MFC compared to using C and the Windows SDK.
Tools are provided for all the platforms supported by the Windows family. MFC and the tool set support all the latest features of the Windows operating system so that developers can take advantage of these features in their applications.
It is essential to preserve customers’ knowledge base and their investments in code into the future. For that reason, the MFC architecture is built around the Windows API to utilise existing knowledge. Existing applications based on C and the Windows API can be migrated to C++ and MFC. Once developers invest in MFC, their investment is completely protected from version to version and platform to platform.
Visual C++ 1.0: The Move to C++
Research into developer needs defined the major design goals of Microsoft Visual C++ 1.0. Research showed that while 70% of developers wanted to move to C++, only 15% were actually using it, because of the difficulty of learning and using the language. Furthermore, developers didn’t want to learn an entirely new API or throw away their existing source code. Thus Visual C++ was created to provide the shortest path to C++ programs for Windows.
To ease migration to C++, Microsoft introduced the innovative wizard technology along with a new, more powerful version of MFC. MFC version 2.0 included an application architecture and features that further simplified application development, while providing full backward compatibility with version 1.0. Wizards were added to make it easier to exploit the full power of MFC. Wizards make it possible to generate fully featured applications for Windows without writing thousands of lines of tedious implementation code. An integrated editor, compiler, debugger and source browser make the edit-build-debug cycle as short as possible. An integrated visual resource editor makes it possible to develop user interfaces rapidly. In addition, Visual C++ 1.0 continued to focus on providing the smallest and fastest executables of any commercially available compiler.
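To make the scale of that saving concrete, the sketch below shows roughly the shape of a minimal MFC application. The class names (CMyApp, CMainFrame) are illustrative rather than the literal wizard output, and a real AppWizard project layers document templates, menus and toolbars on top of this skeleton.

// A minimal MFC application -- illustrative names, not literal AppWizard output.
#include <afxwin.h>

// The main window: CFrameWnd supplies default sizing, painting and
// command routing; the derived class only has to create the window.
class CMainFrame : public CFrameWnd
{
public:
    CMainFrame()
    {
        Create(NULL, _T("My MFC Application"));
    }
};

// The application object: MFC supplies WinMain and the message loop,
// so the derived class only decides what happens at start-up.
class CMyApp : public CWinApp
{
public:
    virtual BOOL InitInstance()
    {
        m_pMainWnd = new CMainFrame;
        m_pMainWnd->ShowWindow(m_nCmdShow);
        m_pMainWnd->UpdateWindow();
        return TRUE;
    }
};

CMyApp theApp;   // the single global application object

The division of labour is the point: the framework owns the entry point and the message pump, while the application supplies only the pieces that differ from program to program.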
Because it combined industry-leading performance with breakthroughs in ease of use, Visual C++ 1.0 became the shortest path to developing C++ applications for Windows.
Microsoft Foundation Classes
MFC 1.0 was first introduced in 1992 with Microsoft C/C++ 7.0. MFC 2.0 in Visual C++ 1.0 followed in February 1993. MFC 2.0 built on the Windows-specific foundation established in 1992 by adding sophisticated architectural elements such as document/view architecture, high-level application-specific features such as MDI support, tool bars, status bars, true device-independent output including print preview, and Object Linking and Embedding (OLE) support. In short, designing MFC to handle the standard interface chores and platform issues eliminated much of the tedious detail facing C++ application developers. Most importantly, MFC 1.0 code was upward compatible with MFC 2.0: MFC 1.0 programs merely had to be recompiled to gain all of the new benefits of MFC 2.0. Investments in code were preserved without sacrificing innovation.
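The document/view split can be illustrated with a short, hedged sketch: a CDocument-derived class owns the data and its serialisation, a CView-derived class renders it, and the framework connects the two. The class names below are invented for the example.

// Document/view sketch -- hypothetical names, not generated code.
#include <afxwin.h>

// The document owns the application data and its serialisation.
class CTextDoc : public CDocument
{
public:
    CString m_strText;

    virtual void Serialize(CArchive& ar)
    {
        if (ar.IsStoring())
            ar << m_strText;      // File Save / Save As
        else
            ar >> m_strText;      // File Open
    }
    DECLARE_DYNCREATE(CTextDoc)
};
IMPLEMENT_DYNCREATE(CTextDoc, CDocument)

// The view paints the document on whatever device context it is given.
class CTextView : public CView
{
public:
    virtual void OnDraw(CDC* pDC)
    {
        CTextDoc* pDoc = (CTextDoc*)GetDocument();
        pDC->TextOut(10, 10, pDoc->m_strText);
    }
    DECLARE_DYNCREATE(CTextView)
};
IMPLEMENT_DYNCREATE(CTextView, CView)

Because printing and print preview are routed through the same OnDraw, device-independent output largely comes along for free once the view is written.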
To ensure compatibility, we implemented MFC 1.0 and MFC 2.0 using a practical subset of C++ that exploited the language’s strengths without relying on unusual or compiler-specific features. As a result, the framework is portable between different vendors’ tools and compilers. MFC was written to solve real application problems of professional developers. It is tuned for both speed and size, to produce professional, industrial-strength C++ code for Windows with copious diagnostic support. MFC has been licensed to other leading C++ tools vendors including Blue Sky, Metaware, Symantec and WATCOM, affirming it as the premier C++ application framework for Windows.
Now, with version 2.5, MFC demonstrates its leadership by adding support for OLE version 2.0 and Open Database Connectivity (ODBC) while maintaining source-code compatibility with MFC 2.0. The MFC library has been available for the Win32 API since August 1993 and later versions will soon support other platforms, further demonstrating its portability.
A year after Visual C++ 1.0 was released, 81% of developers are using C++, with 30% already having completed several C++ projects. Visual C++ and MFC have helped make the move to C++ a reality.
Microsoft Visual C++ 1.0, 32-bit Edition: The Move to 32-Bits
Because developers needed tools to create 32-bit applications for Windows, in August 1993 Microsoft introduced Visual C++, 32-bit edition. This was the first integrated 32-bit development environment for professional C++ programmers for Windows that was hosted on the Windows NT operating system and that targeted both Windows (via the Win32s API) and Windows NT (via Win32). To ease developers’ learning curves, this product has the same tool set and class library as Visual C++ 1.0, but on 32 bits. It is the safest investment in C++ development tools because it provides an easy migration path from previous 16-bit Microsoft C++ products. MFC applications need only be recompiled to become fully functional 32-bit Windows-based applications. Because developers can write applications from a single source-code base that will run on Windows 3.1 and Windows NT, the 32-bit edition increases their productivity and allows them to take advantage of the performance benefits of 32-bit programming on both platforms.
Windows NT is the host for the 32-bit compiler because it is the most robust development platform choice. In addition, the true 32-bit platform enables the development of powerful 32-bit applications. We exploited its pre-emptive multitasking and multithreading capabilities to enable seamless background compilation and simultaneous multiple project builds. To aid debugging, the 32-bit Edition added a memory window on the current memory contents, support for structured exception handling to help in locating errors, and support for threads, allowing developers to analyse and view window messages, the relationships between windows, processes and threads, and their details. Books Online technology revolutionised the presentation and usefulness of product documentation.
New optimisations for the i486 and Pentium processors helped ensure the smallest and fastest 32-bit executables. Visual C++, 32-bit Edition is proven technology: Microsoft used it to build Windows NT itself.
Visual C++ 1.5: The Move to OLE 2.0
In April 1993, Microsoft released OLE 2.0 – a breakthrough for component-based software development. OLE 2.0 provides a standard means of defining what an object is and how objects can interact. With OLE 2.0, developers from different companies can write objects that interact with one another without relying on the specifics of how those objects work. With OLE 2.0 developers can use entire applications as components, which makes application integration and combining information from multiple applications easier. Hundreds of companies have announced support for OLE 2.0, and corporations and independent software vendors (ISVS) are already using it to develop sophisticated applications for their end users.
Visual C++ 1.5 Makes OLE 2.0 Easy
Microsoft research shows that although more than 75% of developers would like to develop applications for OLE 2.0, less than 10% of them had started development before Visual C++ 1.5 was available. This was because native OLE 2.0 development can be difficult. However, Visual C++ 1.5 and MFC 2.5 make OLE 2.0 development easy. Using AppWizard, developers can create an OLE 2.0 client, server or container (even with OLE Automation) in seconds. Because AppWizard creates default code to deal with all OLE messages and requests, developers need only write code for the functionality that they want to support, thus significantly reducing the amount of time it takes to get started with OLE 2.0 applications.
AppWizard, ClassWizard and MFC 2.5 support full OLE 2.0 functionality, including toolbar and menu negotiation, visual editing, drag and drop, in-place activation, structured storage, incremental read/write, the component object model, and synchronous function completion. More than 19,000 lines of C++ code are provided in the MFC 2.5 library to support OLE 2.0 development - code that developers don’t have to write.
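To give a feel for the flavour of that generated code, here is a heavily abridged, hypothetical sketch of a container's core classes; the names are invented, and the real generated files also wire up menus, accelerators and item tracking.

// OLE 2.0 container sketch -- hypothetical names, heavily abridged.
#include <afxwin.h>
#include <afxole.h>

// Each embedded or linked object in the document is one client item.
class CCntrItem : public COleClientItem
{
public:
    CCntrItem(COleDocument* pContainer = NULL)
        : COleClientItem(pContainer) {}
    DECLARE_SERIAL(CCntrItem)
};
IMPLEMENT_SERIAL(CCntrItem, COleClientItem, 0)

// COleDocument keeps the list of client items and serialises them.
class CCntrDoc : public COleDocument
{
public:
    CCntrDoc() {}
    DECLARE_DYNCREATE(CCntrDoc)
};
IMPLEMENT_DYNCREATE(CCntrDoc, COleDocument)

// The OLE libraries must be initialised once, in InitInstance, before
// any other OLE call is made:
//
//     if (!AfxOleInit())
//         return FALSE;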
Enabling Database Development
Incorporating support for database access in Windows-based applications can be difficult, yet it is increasingly important. MFC makes it easy. ODBC classes and wizards allow full ODBC support for accessing local or remote databases to be built into Windows-based applications with just a mouse click. ODBC drivers are available today for a diverse set of database formats and implementations.
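As a hedged illustration of the programming model - the data source name, table and columns below are invented - a wizard-generated recordset class looks roughly like this:

// ODBC sketch -- the data source ("CustomerDB"), table and columns are
// invented for illustration.
#include <afxdb.h>

class CCustomerSet : public CRecordset
{
public:
    CCustomerSet(CDatabase* pDatabase = NULL) : CRecordset(pDatabase)
    {
        m_nFields = 2;                    // number of bound columns below
    }

    // Columns bound to C++ members by record field exchange (RFX).
    CString m_Name;
    CString m_City;

    virtual CString GetDefaultConnect()
        { return _T("ODBC;DSN=CustomerDB"); }
    virtual CString GetDefaultSQL()
        { return _T("[Customers]"); }

    virtual void DoFieldExchange(CFieldExchange* pFX)
    {
        pFX->SetFieldType(CFieldExchange::outputColumn);
        RFX_Text(pFX, _T("[Name]"), m_Name);
        RFX_Text(pFX, _T("[City]"), m_City);
    }
};

// Typical use: open the recordset and walk the rows.
//
//     CCustomerSet rs;
//     rs.Open();
//     while (!rs.IsEOF()) { /* use rs.m_Name, rs.m_City */ rs.MoveNext(); }
//     rs.Close();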
Plans of Leading Edge Developers for Windows
In order to better understand developers’ needs for the future, we surveyed more than 5,000 developers who attended the Win32 Professional Developer’s Conference in December 1993. We asked developers specifically what their plans were for 1994. The key results are shown in figure 1:
More than 80% are using MFC
More than 50% are targeting Win32-based applications
More than 50% are using Win32 as a host platform
More than 20% are targeting Win32-based RISC and Macintosh platforms
More than 60% are targeting OLE 2.0
More than 90% are using high-performance hardware (Intel 486, 20 MB RAM, CD-ROM)
Figure 1 - developer plans for 1994
New Issues for Developers in 1994
The survey indicated a very strong interest in the development community in targeting Win32 and OLE 2.0-based applications using MFC. Developers also want the flexibility to target other software and hardware platforms as they become more popular. They would like to do this through a single source-code base that is developed using a single tools set and class library. Finally, developers have the necessary hardware to host a Win32-based platform such as Windows NT and exploit the advanced capabilities that Windows NT provides.
Key Challenges for Visual C++ Tools
From the perspective of development tools, these goals presented us with four challenges:
Provide tools that developers can use to create leading-edge applications for Windows that exploit the latest systems features, such as OLE 2.0, while enabling them to use their code investment in the future.
Provide a clear and easy migration path to the 32-bit future of Windows through guaranteed upward compatibility.
Provide a consistent tool set and class library that enable developers to easily target multiple platforms through a single code base.
Provide tools that allow developers to exploit advances in host operating system platforms to increase productivity. For example, support background builds on a multitasking operating system such as Windows NT.
Microsoft’s Answer: Visual C++ Everywhere
The Visual C++ 2.0 family of products provides a consistent tool set and class library (MFC) that enables the development of Win32-based applications that target a wide range of platforms including Windows, Windows NT (Inte1- and RISC-based), Microsoft Windows “Chicago” (the next major release of Microsoft Windows), and the Macintosh. The Visual C++ 2.0 family utilises Microsoft’s systems strategy to provide the Win32 API along with key systems features such as OLE 2.0 and ODBC on all these platforms. Developers need only learn one tool set and API (MFC and Win32) to develop and support Win32-based applications that are portable and compatible across a range of leading platforms. Developers will be able to continue maintaining their 16-bit applications for Windows using Visual C++ 1.5. In addition, we will work with developers licensed to provide the Windows API on Unix platforms to make sure that developers can easily port their MFC applications using these platforms as well.
The Visual C++ 2.0 Family
The Visual C++ 2.0 family consists of the following products:
Visual C++ 2.0 Intel Edition.
This product hosts on a Win32- and Intel-based platform (Windows NT or Windows “Chicago”) and enables developers to target Win32- and Intel-based applications that run on Windows (through Win32s), Windows NT, and Windows “Chicago”.
Visual C++ 2.0 RISC Editions.
These products host on a Win32- and RISC-based platform and target the same platform. The tool set is identical to the Intel-based tool set described above. The first version will be available for the MIPS platform followed by the Digital Alpha AXP and other RISC platforms supported by Windows NT.
Visual C++ 2.0, Cross-development edition for Macintosh.
This product is an add-on to the Visual C++ 2.0, Intel Edition and enables developers to target the Macintosh through a core Win32 code base. When this product is installed on top of the Visual C++ 2.0, Intel Edition, developers will gain new options to target the Macintosh. All development, compilation and linking are performed on the Windows NT-based machine. Debugging is performed across a network or serial connection to a Macintosh using the integrated debugger in the Visual C++ development environment.
The table in figure 2 illustrates the Visual C++ 2.0 product family:
Visual C++ 2.0: Continue the Innovation
In addition to the multiplatform support, Visual C++ will continue to provide the shortest path to developing applications that exploit the full power of the Windows family. Visual C++ 2.0 continues the leadership established by Visual C++ 1.5 by making it easy to develop 32-bit OLE 2.0 and ODBC applications through innovative wizards technology and a 32-bit compatible version of MFC. In addition, all versions of Visual C++ 2.0 exploit the power of Windows NT as the host development environment.
Visual C++ 2.0 provides a completely redesigned and integrated environment (IDE) that makes application development even easier than with previous versions of Visual C++. Everything is truly integrated, so there is no longer a separate App Studio application for resource editing, for example. The IDE includes all the latest innovations found in the latest Microsoft applications such as customisable toolbars and dockable windows.
Powerful project-management capabilities make it possible to manage complex development projects. Developers will also gain a dramatic boost in productivity through a new incremental linker that is aimed at producing 10-second links so developers can get from edit to EXE very quickly. The Browser can now be updated on demand rather than on every build, and it supports navigation of both definitions and references. Finally, C++ templates and exception handling have been added to the C++ compiler, enabling developers to benefit from the latest C++ language features.
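Neither templates nor exception handling is specific to the product, but a short, plain C++ example - written here in present-day standard form - shows the two features working together: a class template instantiated per element type, with errors reported as exceptions and caught by the caller.

// C++ templates and exception handling -- a small illustrative example.
#include <iostream>

// A class template: one definition, instantiated once per element type.
template <class T>
class Stack
{
    T   m_data[32];
    int m_top;
public:
    Stack() : m_top(0) {}

    void Push(const T& item)
    {
        if (m_top == 32)
            throw "stack overflow";      // report errors as exceptions...
        m_data[m_top++] = item;
    }

    T Pop()
    {
        if (m_top == 0)
            throw "stack underflow";
        return m_data[--m_top];
    }
};

int main()
{
    Stack<int> s;                        // instantiated for int
    try
    {
        s.Push(42);
        std::cout << s.Pop() << "\n";
        s.Pop();                         // ...and catch them in the caller
    }
    catch (const char* msg)
    {
        std::cout << "error: " << msg << "\n";
    }
    return 0;
}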
We have continued our leadership in C optimisations and are now turning our focus to new C++ optimisations. Our goal with the new optimisation technology, called Opt++, is to eliminate most, if not all, of the additional overhead that comes with using C++ rather than C. This technology, along with highly tuned versions of MFC, will enable developers to deliver applications built using C++ and MFC that have very little performance overhead compared to that of comparable programs built using C and the SDK.
Maximum Compatibility and Portability
Visual C++ 2.0 once again demonstrates our commitment to preserving developers’ investments in MFC. Visual C++ does this in a number of ways:
Version to version. MFC 3.0 is completely upward compatible with previous versions of MFC. New features provided with MFC 3.0 can easily be incorporated with little additional work.
16- to 32-bit conversion. 16-bit Windows-based applications written using MFC 2.5 included in Visual C++ 1.5 need only be recompiled with MFC 3.0 to convert them to 32-bit applications.
Intel to RISC. Applications written to MFC 3.0 are 100% source-code compatible across all Win32 platforms. They only need to be recompiled with the processor-specific Visual C++ compilers to produce executable files for different target platforms.
Win32 to Macintosh applications. As above, most MFC 3.0 applications can simply be recompiled with the Visual C++ 68K cross-compiler to produce executable files for the Macintosh. Applications written directly to the Win32 API can achieve 80% or greater compatibility, depending on the APIs used by the application.
Visual C++ Portability: RISC-based Multiplatform Editions
RISC editions of Visual C++ are identical to the Intel-based Visual C++ product. They also exploit the fact that Windows NT is the same on all hardware platforms.
This means that developers who use Visual C++ can use the same tools, regardless of which platform they are targeting. Also, features of the Visual C++ language - for example, C++ templates or exception handling - are implemented in the same manner on all platforms.
The same APIs, Win32 and MFC, are implemented identically across all supported platforms. A developer’s knowledge and code base can be used across all RISC platforms.
To provide true targeting for Windows NT, a development system must support all Windows NT-compatible hardware platforms.
Microsoft is working closely with all licensees of Windows NT and is already shipping versions of the Microsoft C/C++ development tools for MIPS and Alpha AXP through the Win32 SDK program.
Microsoft has licensed the compiler to several licensees of Windows NT and is actively integrating the compiler with each chip maker's back-end optimising code generators.
Visual C++ 2.0 has also been designed to be completely portable across various Win32-based hardware platforms. Microsoft is working with licensees of Windows NT to port the entire Visual C++ development environment, including MFC, to all Windows NT-compatible hardware platforms.
The result is a compatible language implementation, integrated tool set, system API and C++ application framework across the entire family of Windows-compatible hardware. Developers will be able to use their existing source code, makefiles and development experience with Visual C++. They will benefit from the ease of use of the Visual C++ development system and from the high performance of the chip makers' optimised native-code generators. The application architecture is depicted in figure 3.
Figure 3 - The application architecture
Visual C++ Portability: Macintosh Cross-development Edition
The cross-development tool set is an add-on to the Intel-based Visual C++ 2.0 development environment tailored for targeting the 68K-based family of Macintosh computers, and in the future, Power Macintosh versions of the same tools.
The product provides all the tools necessary for developing and debugging native applications for the Macintosh. These include an optimising C/C++ cross-compiler, linker, resource compiler, remote debugger and profiler. The tool set also includes a Macintosh version of MFC 3.0 and a Win32 portability library. These tools are all integrated into the Visual C++ 2.0 development environment.
The architecture of Macintosh applications generated by Visual C++ is shown in figure 4.
Figure 4 - Macintosh application architecture
The application source code can be written in C or C++ and can be from a single source-code base that supports both Windows and the Macintosh. The Win32 portability libraries for the Macintosh simplify the porting of an application written to the Win32 API and/or to the Microsoft Foundation Class Library. These libraries generate appropriate Macintosh System 7 instructions from calls to the Win32 API. As depicted in the figure, developers can also program directly to the System 7 API to take advantage of platform-specific features. The resulting application, however, incorporates the native look and feel of a Macintosh application. Rigorous performance-tuning of the Win32 portability library ensures there will be very little performance penalty for the ported application. In fact, Microsoft uses this technology for its Macintosh application developments, such as Excel and FoxPro for the Macintosh.
OLE Custom Controls
Microsoft recently announced the OLE custom control architecture, which merges the popular Microsoft Visual Basic custom control architecture with the open, standard architecture of OLE. OLE custom controls make component-based development a reality by allowing developers to easily utilise existing bodies of functional code encapsulated as OLE controls. An OLE control is a custom control, implemented as an OLE 2.0 compound document object with visual editing support. An OLE control has additional capabilities beyond those of ordinary OLE objects, such as the ability to fire events. OLE controls will be supported in the future by all Microsoft development tools, including Visual Basic, Applications Edition.
Most OLE 2.0 objects require a substantial amount of implementation effort. Fortunately, an OLE Control Developer's Kit (CDK) - along with Visual C++ - provides most of the required implementation, so developers only have to fill in details that are specific to the OLE control. The CDK includes both 16- and 32-bit MFC extensions to support controls, a ControlWizard for creating an initial custom control project, ClassWizard extensions, a Test Container to verify the OLE control, full online documentation, and sample code.
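As a rough illustration only (not taken from the CDK documentation, and with invented class and member names), the skeleton that ControlWizard produces looks broadly like the following; the COM registration macros, message maps and event maps it also generates are omitted here.

    // Rough sketch of an MFC-based OLE control (illustrative; names are invented).
    // Registration, message-map and event-map boilerplate generated by
    // ControlWizard is omitted for brevity.
    #include <afxctl.h>          // MFC support for OLE custom controls

    class CSampleCtrl : public COleControl
    {
    protected:
        // Paint the control, both at run time and during visual editing
        // inside a container.
        virtual void OnDraw(CDC* pdc, const CRect& rcBounds, const CRect& rcInvalid)
        {
            pdc->FillRect(rcBounds,
                          CBrush::FromHandle((HBRUSH)GetStockObject(WHITE_BRUSH)));
            pdc->Ellipse(rcBounds);
        }

        // Load or save the control's properties.
        virtual void DoPropExchange(CPropExchange* pPX)
        {
            COleControl::DoPropExchange(pPX);
            // PX_ calls for any custom properties would go here.
        }

        // Mouse handler (normally wired up through the message map) that
        // fires the stock Click event to the container.
        afx_msg void OnLButtonDown(UINT nFlags, CPoint point)
        {
            FireClick();
            COleControl::OnLButtonDown(nFlags, point);
        }
    };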
Microsoft Visual C++ and the OLE CDK make component-based software development a reality.
Visual C++ 2.0 for Intel and MIPS and the cross-development system for the Macintosh 68K are scheduled to be made available in the first half of 1994. A future version of this tool set will target other Windows NT- and RISC-based platforms and the Power Macintosh platform. The OLE CDK will be available in the same time frame as Visual C++ 2.0 for Intel.
The Visual C++ development environment and the Microsoft Foundation Class Library will continue to evolve to support Microsoft Windows operating systems in the years ahead.
Overload Journal #5 - Sep 1994
July 18, 2012
Since the Industrial Revolution, ocean acidity has risen by 30 percent as a direct result of fossil-fuel burning and deforestation. And within the last 50 years, human industry has caused the world’s oceans to experience a sharp increase in acidity that rivals levels seen when ancient carbon cycles triggered mass extinctions, which took out more than 90 percent of the oceans’ species and more than 75 percent of terrestrial species.
Rising ocean acidity is now considered to be just as much of a formidable threat to the health of Earth’s environment as the atmospheric climate changes brought on by pumping out greenhouse gases. Scientists are now trying to understand what that means for the future survival of marine and terrestrial organisms.
In June, ScienceNOW reported that out of the 35 billion metric tons of carbon dioxide released annually through fossil fuel use, one-third of those emissions diffuse into the surface layer of the ocean. The effects those emissions will have on the biosphere is sobering, as rising ocean acidity will completely upset the balance of marine life in the world’s oceans and will subsequently affect humans and animals who benefit from the oceans’ food resources.
The damage to marine life is due in large part to the fact that higher acidity dissolves naturally-occurring calcium carbonate that many marine species–including plankton, sea urchins, shellfish and coral–use to construct their shells and external skeletons. Studies conducted off Arctic regions have shown that the combination of melting sea ice, atmospheric carbon dioxide and subsequently hotter, CO2-saturated surface waters has led to the undersaturation of calcium carbonate in ocean waters. The reduction in the amount of calcium carbonate in the ocean spells out disaster for the organisms that rely on those nutrients to build their protective shells and body structures.
Ocean acidity and calcium carbonate saturation are inversely related, which allows scientists to use the oceans’ calcium carbonate saturation levels to gauge just how acidic the waters are. In a study by the University of Hawaii at Manoa published earlier this year, researchers calculated that the level of calcium carbonate saturation in the world’s oceans has fallen faster in the last 200 years than at any time in the previous 21,000 years, signaling an extraordinary rise in ocean acidity to levels higher than would ever occur naturally.
The authors of the study continued on to say that currently only 50 percent of the world’s ocean waters are saturated with enough calcium carbonate to support coral reef growth and maintenance, but by 2100, that proportion is expected to drop to a mere five percent, putting most of the world’s beautiful and diverse coral reef habitats in danger.
In the face of so much mounting and discouraging evidence that the oceans are on a trajectory toward irreparable marine life damage, a new study offers hope that certain species may be able to adapt quick enough to keep pace with the changing make-up of Earth’s waters.
In a study published last week in the journal Nature Climate Change, researchers from the ARC Center of Excellence for Coral Reef Studies found that baby clownfish (Amphiprion melanopus) are able to cope with increased acidity if their parents also lived in higher acidic water, a remarkable finding after a study conducted last year on another clownfish species (Amphiprion percula) suggested acidic waters reduced the fish’s sense of smell, making it likely for the fish to mistakenly swim toward predators.
But the new study will require further research to determine whether or not the adaptive abilities of the clownfish are also present in more environmentally-sensitive marine species.
While the news that at least some baby fish may be able to adapt to changes provides optimism, there is still much to learn about the process. It is unclear through what mechanism clownfish are able to pass along this trait to their offspring so quickly, evolutionarily speaking. Organisms capable of generation-to-generation adaptations could have an advantage in the coming decades, as anthropogenic emissions push Earth to non-natural extremes and place new stresses on the biosphere.
Tornadoes and Climate Change: Huge Stakes, Huge Unknowns
Posted: 12:05 PM EDT on May 23, 2013
We currently do not know how tornadoes and severe thunderstorms may be changing due to climate change, nor is there hope that we will be able to do so in the foreseeable future. It does not appear that there has been an increase in U.S. tornadoes stronger than EF-0 in recent decades, but climate change appears to be causing more extreme years--both high and low--of late. We may see an increase in the number of severe thunderstorms over the U.S. by late this century.
Did you know that...
Large golf ball-sized hail was produced from thunderstorms over the eastern United States on this date in 1988. Also, in 1990, a cloudburst washed topsoil and large rocks into the town of Culdesac, Idaho.
Exceptions are a means of breaking out of the normal flow of control of a code block in order to handle errors or other exceptional conditions. An exception is raised at the point where the error is detected; it may be handled by the surrounding code block or by any code block that directly or indirectly invoked the code block where the error occurred.
The Python interpreter raises an exception when it detects a run-time error (such as division by zero). A Python program can also explicitly raise an exception with the raise statement. Exception handlers are specified with the try ... except statement. The try ... finally statement specifies cleanup code which does not handle the exception, but is executed whether an exception occurred or not in the preceding code.
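As a small illustration (using present-day Python syntax; the function and file names are arbitrary), the following shows an explicit raise, a try ... except handler, and a try ... finally cleanup clause:

    def reciprocal(x):
        if x == 0:
            raise ValueError("x must be non-zero")   # explicitly raise an exception
        return 1.0 / x

    try:
        print(reciprocal(0))
    except ValueError as err:        # handler chosen because the raised exception matches
        print("handled:", err)

    f = open("scratch.txt", "w")
    try:
        f.write("42\n")              # the finally clause runs whether or not this raises
    finally:
        f.close()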
Python uses the ``termination'' model of error handling: an error handler can find out what happened and continue execution at an outer level, but it cannot repair the cause of the error and retry the failing operation (except by re-entering the offending piece of code from the top).

When an exception is not handled at all, the interpreter terminates execution of the program, or returns to its interactive main loop. In either case, it prints a stack backtrace, except when the exception is SystemExit.
Exceptions are identified by string objects or class instances. Selection of a matching except clause is based on object identity (i.e., two different string objects with the same value represent different exceptions!) For string exceptions, the except clause must reference the same string object. For class exceptions, the except clause must reference the same class or a base class of it.
When an exception is raised, an object (maybe None) is passed as the exception's ``parameter'' or ``value''; this object does not affect the selection of an exception handler, but is passed to the selected exception handler as additional information. For class exceptions, this object must be an instance of the exception class being raised.
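A brief sketch of class-based exceptions (the exception and attribute names here are invented for illustration): the except clause matches the raised class or any base class of it, and the raised instance is passed to the handler as its value.

    class ProtocolError(Exception):
        def __init__(self, code):
            Exception.__init__(self, code)
            self.code = code

    try:
        raise ProtocolError(404)
    except Exception as exc:          # matches: Exception is a base class of ProtocolError
        print("caught", type(exc).__name__, "with value", exc.code)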
See also the description of the try statement in section 7.4 and raise statement in section 6.8. | <urn:uuid:369dd57f-25d9-44e6-832c-29ed8d0645d2> | 4.09375 | 331 | Documentation | Software Dev. | 47.998813 | 375 |
Reading this tutorial has probably reinforced your interest in using Python -- you should be eager to apply Python to solve your real-world problems. Now what should you do?
You should read, or at least page through, the Python Library Reference, which gives complete (though terse) reference material about types, functions, and modules that can save you a lot of time when writing Python programs. The standard Python distribution includes a lot of code in both C and Python; there are modules to read Unix mailboxes, retrieve documents via HTTP, generate random numbers, parse command-line options, write CGI programs, compress data, and a lot more; skimming through the Library Reference will give you an idea of what's available.
The major Python Web site is http://www.python.org/; it contains code, documentation, and pointers to Python-related pages around the Web. This Web site is mirrored in various places around the world, such as Europe, Japan, and Australia; a mirror may be faster than the main site, depending on your geographical location. A more informal site is http://starship.python.net/, which contains a bunch of Python-related personal home pages; many people have downloadable software there. Many more user-created Python modules can be found in the Python Package Index (PyPI).
For Python-related questions and problem reports, you can post to the newsgroup comp.lang.python, or send them to the mailing list at firstname.lastname@example.org. The newsgroup and mailing list are gatewayed, so messages posted to one will automatically be forwarded to the other. There are around 120 postings a day (with peaks up to several hundred), asking (and answering) questions, suggesting new features, and announcing new modules. Before posting, be sure to check the list of Frequently Asked Questions (also called the FAQ), or look for it in the Misc/ directory of the Python source distribution. Mailing list archives are available at http://www.python.org/pipermail/. The FAQ answers many of the questions that come up again and again, and may already contain the solution for your problem.
The thick durable sea ice that routinely cloaked much of the Arctic Ocean in colder decades in the 20th century is increasingly relegated to a few clotted places along northern Canada and Greenland, according to the latest satellite analysis of the warming region.
The following video gives you a fascinating view of one patch of sea ice through 90 days, provided by a webcam left behind by researchers who annually set up camp near the North Pole to check ocean and ice conditions up close.
The new analysis, published in the Journal of Geophysical Research on Tuesday, is the latest of many findings supporting the idea that the region has shifted to a new state in which seasonal ice, which forms in winter and melts in the summer, dominates. This is the main reason biologists have concerns for the long-term welfare of polar bears, which have a harder time sustaining their weight and reproducing when summertime ice is thin. At the same time, the shift bodes well for shippers, like the German company Beluga, that have plans to start sending goods from Asia to northern Europe through the fabled, but long impassable, Northern Sea Route over Russia.
The study, conducted by scientists from NASA, the University of Washington and the California Institute of Technology estimated changes from 2003 to 2008 in the total volume and thickness of what’s called multi-year ice, the yards-thick floes that can persist through a summer (here’s some video I shot while standing on a mix of old and thinner ice in 2003), and seasonal ice, which can grow to 6 feet in thickness in winter but vanishes in summer.
For a look at how this summer’s Arctic sea-ice season may unfold, visit Sea Ice Outlook 2009, where more than a dozen groups of ice researchers are posting experimental forecasts of how the ice is likely to fare. There’s a strong consensus that the season will see much less sea ice than the average for the period monitored by satellites (from 1979 onward), but is unlikely to see the extent of open water measured in 2007.
To get a sense of how the views of Arctic experts have coalesced around a rising human influence on the region’s climate, you can scan previous stories from 2001, 2005, and 2007 on ice trends and possible causes. | <urn:uuid:4e953074-259f-4363-a1d9-ec3bbe891cc4> | 3.359375 | 465 | News Article | Science & Tech. | 27.597235 | 377 |
Nuclear meltdown is an informal term for a severe nuclear reactor accident that results in core damage from overheating. The term is not officially defined by the International Atomic Energy Agency or by the U.S. Nuclear Regulatory Commission. However, it has been defined to mean the accidental melting of the core of a nuclear reactor, and is in common usage a reference to the core's either complete or partial collapse. "Core melt accident" and "partial core melt" are the analogous technical terms for a meltdown.
A core melt accident occurs when the heat generated by a nuclear reactor exceeds the heat removed by the cooling systems to the point where at least one nuclear fuel element exceeds its melting point. This differs from a fuel element failure, which is not caused by high temperatures. A meltdown may be caused by a loss of coolant, loss of coolant pressure, or low coolant flow rate or be the result of a criticality excursion in which the reactor is operated at a power level that exceeds its design limits. Alternately, in a reactor plant such as the RBMK-1000, an external fire may endanger the core, leading to a meltdown.
Once the fuel elements of a reactor begin to melt, the fuel cladding has been breached, and the nuclear fuel (such as uranium, plutonium, or thorium) and fission products (such as cesium-137, krypton-88, or iodine-131) within the fuel elements can leach out into the coolant. Subsequent failures can permit these radioisotopes to breach further layers of containment. Superheated steam and hot metal inside the core can lead to fuel-coolant interactions, hydrogen explosions, or water hammer, any of which could destroy parts of the containment. A meltdown is considered very serious because of the potential, however remote, that radioactive materials could breach all containment and escape (or be released) into the environment, resulting in radioactive contamination and fallout, and potentially leading to radiation poisoning of people and animals nearby.
Nuclear power plants generate electricity by heating fluid via a nuclear reaction to run a generator. If the heat from that reaction is not removed adequately, the fuel assemblies in a reactor core can melt. A core damage incident can occur even after a reactor is shut down because the fuel continues to produce decay heat.
A core damage accident is caused by the loss of sufficient cooling for the nuclear fuel within the reactor core. The reason may be one of several factors, including a loss-of-pressure-control accident, a loss-of-coolant accident (LOCA), an uncontrolled power excursion or, in reactors without a pressure vessel, a fire within the reactor core. Failures in control systems may cause a series of events resulting in loss of cooling. Contemporary safety principles of defense in depth ensure that multiple layers of safety systems are always present to make such accidents unlikely.
The containment building is the last of several safeguards that prevent the release of radioactivity to the environment. Many commercial reactors are contained within a 1.2-to-2.4-metre (3.9 to 7.9 ft) thick pre-stressed, steel-reinforced, air-tight concrete structure that can withstand hurricane-force winds and severe earthquakes.
- In a loss-of-coolant accident, either the physical loss of coolant (which is typically deionized water, an inert gas, NaK, or liquid sodium) or the loss of a method to ensure a sufficient flow rate of the coolant occurs. A loss-of-coolant accident and a loss-of-pressure-control accident are closely related in some reactors. In a pressurized water reactor, a LOCA can also cause a "steam bubble" to form in the core due to excessive heating of stalled coolant or by the subsequent loss-of-pressure-control accident caused by a rapid loss of coolant. In a loss-of-forced-circulation accident, a gas cooled reactor's circulators (generally motor or steam driven turbines) fail to circulate the gas coolant within the core, and heat transfer is impeded by this loss of forced circulation, though natural circulation through convection will keep the fuel cool as long as the reactor is not depressurized.
- In a loss-of-pressure-control accident, the pressure of the confined coolant falls below specification without the means to restore it. In some cases this may reduce the heat transfer efficiency (when using an inert gas as a coolant) and in others may form an insulating "bubble" of steam surrounding the fuel assemblies (for pressurized water reactors). In the latter case, due to localized heating of the "steam bubble" due to decay heat, the pressure required to collapse the "steam bubble" may exceed reactor design specifications until the reactor has had time to cool down. (This event is less likely to occur in boiling water reactors, where the core may be deliberately depressurized so that the Emergency Core Cooling System may be turned on). In a depressurization fault, a gas-cooled reactor loses gas pressure within the core, reducing heat transfer efficiency and posing a challenge to the cooling of fuel; however, as long as at least one gas circulator is available, the fuel will be kept cool.
- In an uncontrolled power excursion accident, a sudden power spike in the reactor exceeds reactor design specifications due to a sudden increase in reactor reactivity. An uncontrolled power excursion occurs due to significantly altering a parameter that affects the neutron multiplication rate of a chain reaction (examples include ejecting a control rod or significantly altering the nuclear characteristics of the moderator, such as by rapid cooling). In extreme cases the reactor may proceed to a condition known as prompt critical. This is especially a problem in reactors that have a positive void coefficient of reactivity, a positive temperature coefficient, are overmoderated, or can trap excess quantities of deleterious fission products within their fuel or moderators. Many of these characteristics are present in the RBMK design, and the Chernobyl disaster was caused by such deficiencies as well as by severe operator negligence. Western light water reactors are not subject to very large uncontrolled power excursions because loss of coolant decreases, rather than increases, core reactivity (a negative void coefficient of reactivity); "transients," as the minor power fluctuations within Western light water reactors are called, are limited to momentary increases in reactivity that will rapidly decrease with time (approximately 200% - 250% of maximum neutronic power for a few seconds in the event of a complete rapid shutdown failure combined with a transient).
- Core-based fires endanger the core and can cause the fuel assemblies to melt. A fire may be caused by air entering a graphite moderated reactor, or a liquid-sodium cooled reactor. Graphite is also subject to accumulation of Wigner energy, which can overheat the graphite (as happened at the Windscale fire). Light water reactors do not have flammable cores or moderators and are not subject to core fires. Gas-cooled civilian reactors, such as the Magnox, UNGG, and AGCR type reactors, keep their cores blanketed with non reactive carbon dioxide gas, which cannot support a fire. Modern gas-cooled civilian reactors use helium, which cannot burn, and have fuel that can withstand high temperatures without melting (such as the High Temperature Gas Cooled Reactor and the Pebble Bed Modular Reactor).
- Byzantine faults and cascading failures within instrumentation and control systems may cause severe problems in reactor operation, potentially leading to core damage if not mitigated. For example, the Browns Ferry fire damaged control cables and required the plant operators to manually activate cooling systems. The Three Mile Island accident was caused by a stuck-open pilot-operated pressure relief valve combined with a deceptive water level gauge that misled reactor operators, which resulted in core damage.
Light water reactors (LWRs)
Before the core of a light water nuclear reactor can be damaged, two precursor events must have already occurred:
- A limiting fault (or a set of compounded emergency conditions) that leads to the failure of heat removal within the core (the loss of cooling). Low water level uncovers the core, allowing it to heat up.
- Failure of the Emergency Core Cooling System (ECCS). The ECCS is designed to rapidly cool the core and make it safe in the event of the maximum fault (the design basis accident) that nuclear regulators and plant engineers could imagine. There are at least two copies of the ECCS built for every reactor. Each division (copy) of the ECCS is capable, by itself, of responding to the design basis accident. The latest reactors have as many as four divisions of the ECCS. This is the principle of redundancy, or duplication. As long as at least one ECCS division functions, no core damage can occur. Each of the several divisions of the ECCS has several internal "trains" of components. Thus the ECCS divisions themselves have internal redundancy – and can withstand failures of components within them.
The Three Mile Island accident was a compounded group of emergencies that led to core damage. What led to this was an erroneous decision by operators to shut down the ECCS during an emergency condition due to gauge readings that were either incorrect or misinterpreted; this caused another emergency condition that, several hours after the fact, led to core exposure and a core damage incident. If the ECCS had been allowed to function, it would have prevented both exposure and core damage. During the Fukushima incident the emergency cooling system had also been manually shut down several minutes after it started.
If such a limiting fault were to occur, and a complete failure of all ECCS divisions were to occur, both Kuan, et al and Haskin, et al describe six stages between the start of the limiting fault (the loss of cooling) and the potential escape of molten corium into the containment (a so-called "full meltdown"):
- Uncovering of the Core – In the event of a transient, upset, emergency, or limiting fault, LWRs are designed to automatically SCRAM (a SCRAM being the immediate and full insertion of all control rods) and spin up the ECCS. This greatly reduces reactor thermal power (but does not remove it completely); this delays core becoming uncovered, which is defined as the point when the fuel rods are no longer covered by coolant and can begin to heat up. As Kuan states: "In a small-break LOCA with no emergency core coolant injection, core uncovery [sic] generally begins approximately an hour after the initiation of the break. If the reactor coolant pumps are not running, the upper part of the core will be exposed to a steam environment and heatup of the core will begin. However, if the coolant pumps are running, the core will be cooled by a two-phase mixture of steam and water, and heatup of the fuel rods will be delayed until almost all of the water in the two-phase mixture is vaporized. The TMI-2 accident showed that operation of reactor coolant pumps may be sustained for up to approximately two hours to deliver a two phase mixture that can prevent core heatup."
- Pre-damage heat up – "In the absence of a two-phase mixture going through the core or of water addition to the core to compensate water boiloff, the fuel rods in a steam environment will heat up at a rate between 0.3 °C/s (0.5 °F/s) and 1 °C/s (1.8 °F/s) (3)."
- Fuel ballooning and bursting – "In less than half an hour, the peak core temperature would reach 1,100 K (1,520 °F). At this temperature the zircaloy cladding of the fuel rods may balloon and burst. This is the first stage of core damage. Cladding ballooning may block a substantial portion of the flow area of the core and restrict the flow of coolant. However complete blockage of the core is unlikely because not all fuel rods balloon at the same axial location. In this case, sufficient water addition can cool the core and stop core damage progression." (A rough timing check based on the quoted heat-up rates and this threshold appears after this list.)
- Rapid oxidation – "The next stage of core damage, beginning at approximately 1,500 K (2,240 °F), is the rapid oxidation of the Zircaloy by steam. In the oxidation process, hydrogen is produced and a large amount of heat is released. Above 1,500 K (2,240 °F), the power from oxidation exceeds that from decay heat (4,5) unless the oxidation rate is limited by the supply of either zircaloy or steam."
- Debris bed formation – "When the temperature in the core reaches about 1,700 K (2,600 °F), molten control materials [1,6] will flow to and solidify in the space between the lower parts of the fuel rods where the temperature is comparatively low. Above 1,700 K (2,600 °F), the core temperature may escalate in a few minutes to the melting point of zircaloy [2,150 K (3,410 °F)] due to increased oxidation rate. When the oxidized cladding breaks, the molten zircaloy, along with dissolved UO2 [1,7] would flow downward and freeze in the cooler, lower region of the core. Together with solidified control materials from earlier down-flows, the relocated zircaloy and UO2 would form the lower crust of a developing cohesive debris bed."
- (Corium) Relocation to the lower plenum – "In scenarios of small-break LOCAs, there is generally a pool of water in the lower plenum of the vessel at the time of core relocation. Release of molten core materials into water always generates large amounts of steam. If the molten stream of core materials breaks up rapidly in water, there is also a possibility of a steam explosion. During relocation, any unoxidized zirconium in the molten material may also be oxidized by steam, and in the process hydrogen is produced. Recriticality also may be a concern if the control materials are left behind in the core and the relocated material breaks up in unborated water in the lower plenum."
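As a rough consistency check of the figures quoted in stages 2 and 3 above (a back-of-the-envelope sketch, not a calculation from the cited studies; the 600 K starting temperature is an assumption based on the coolant temperatures mentioned later in this article):

    START_TEMP_K = 600.0       # assumed initial fuel-rod temperature (illustrative)
    BALLOON_TEMP_K = 1100.0    # ballooning/bursting threshold quoted in stage 3

    for rate_k_per_s in (0.3, 1.0):            # heat-up rates quoted in stage 2
        minutes = (BALLOON_TEMP_K - START_TEMP_K) / rate_k_per_s / 60.0
        print("at %.1f K/s: about %.0f minutes to reach ballooning" % (rate_k_per_s, minutes))

    # At 0.3 K/s this gives roughly 28 minutes, consistent with the "less than half
    # an hour" figure above; at 1.0 K/s it is roughly 8 minutes.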
At the point at which the corium relocates to the lower plenum, Haskin, et al relate that the possibility exists for an incident called a fuel-coolant interaction (FCI) to substantially stress or breach the primary pressure boundary when the corium relocates to the lower plenum of the reactor pressure vessel ("RPV"). This is because the lower plenum of the RPV may have a substantial quantity of water - the reactor coolant - in it, and, assuming the primary system has not been depressurized, the water will likely be in the liquid phase, and consequently dense, and at a vastly lower temperature than the corium. Since corium is a liquid metal-ceramic eutectic at temperatures of 2,200 to 3,200 K (3,500 to 5,300 °F), its fall into liquid water at 550 to 600 K (530 to 620 °F) may cause an extremely rapid evolution of steam that could cause a sudden extreme overpressure and consequent gross structural failure of the primary system or RPV. Though most modern studies hold that it is physically infeasible, or at least extraordinarily unlikely, Haskin, et al state that there exists a remote possibility of an extremely violent FCI leading to something referred to as an alpha-mode failure, or the gross failure of the RPV itself, and subsequent ejection of the upper plenum of the RPV as a missile against the inside of the containment, which would likely lead to the failure of the containment and release of the fission products of the core to the outside environment without any substantial decay having taken place.
Breach of the Primary Pressure Boundary
There are several possibilities as to how the primary pressure boundary could be breached by corium.
- Steam Explosion
As previously described, an FCI could lead to an overpressure event causing failure of the RPV, and thus of the primary pressure boundary. Haskin, et al. report that in the event of a steam explosion, failure of the lower plenum is far more likely than ejection of the upper plenum in the alpha-mode. In the event of lower plenum failure, debris at varied temperatures can be expected to be projected into the cavity below the core. The containment may be subject to overpressure, though this is not likely to fail the containment. The alpha-mode failure will lead to the consequences previously discussed.
- Pressurized Melt Ejection (PME)
It is quite possible, especially in pressurized water reactors, that the primary loop will remain pressurized following corium relocation to the lower plenum. As such, pressure stresses on the RPV will be present in addition to the weight stress that the molten corium places on the lower plenum of the RPV; when the metal of the RPV weakens sufficiently due to the heat of the molten corium, it is likely that the liquid corium will be discharged under pressure out of the bottom of the RPV in a pressurized stream, together with entrained gases. This mode of corium ejection may lead to direct containment heating (DCH).
Severe Accident Ex-Vessel Interactions and Challenges to Containment
Haskin, et al identify six modes by which the containment could be credibly challenged; some of these modes are not applicable to core melt accidents.
- Dynamic pressure (shockwaves)
- Internal missiles
- External missiles (not applicable to core melt accidents)
Standard failure modes
If the melted core penetrates the pressure vessel, there are theories and speculations as to what may then occur.
In modern Russian plants, there is a "core catching device" in the bottom of the containment building: the melted core is supposed to hit a thick layer of a "sacrificial metal" that would melt, dilute the core and increase the heat conductivity, so that the diluted core can finally be cooled by water circulating in the floor. However, there has never been any full-scale testing of this device.
In Western plants there is an airtight containment building. Though radiation would be at a high level within the containment, doses outside of it would be lower. Containment buildings are designed for the orderly release of pressure without releasing radionuclides, through a pressure release valve and filters. Hydrogen/oxygen recombiners also are installed within the containment to prevent gas explosions.
In a melting event, one spot or area on the RPV will become hotter than other areas, and will eventually melt. When it melts, corium will pour into the cavity under the reactor. Though the cavity is designed to remain dry, several NUREG-class documents advise operators to flood the cavity in the event of a fuel melt incident. This water will become steam and pressurize the containment. Automatic water sprays will pump large quantities of water into the steamy environment to keep the pressure down. Catalytic recombiners will rapidly convert the hydrogen and oxygen back into water. One positive effect of the corium falling into water is that it is cooled and returns to a solid state.
Extensive water spray systems within the containment along with the ECCS, when it is reactivated, will allow operators to spray water within the containment to cool the core on the floor and reduce it to a low temperature.
These procedures are intended to prevent release of radiation. In the Three Mile Island event in 1979, a theoretical person standing at the plant property line during the entire event would have received a dose of approximately 2 millisieverts (200 millirem), between a chest X-ray's and a CT scan's worth of radiation. This was due to outgassing by an uncontrolled system that, today, would have been backfitted with activated carbon and HEPA filters to prevent radionuclide release.
However, in the case of the Fukushima incident this design also at least partially failed: large amounts of highly radioactive water were produced, and the nuclear fuel may have melted through the base of the pressure vessels.
Cooling will take quite a while, until the natural decay heat of the corium reduces to the point where natural convection and conduction of heat to the containment walls and re-radiation of heat from the containment allows for water spray systems to be shut down and the reactor put into safe storage. The containment can be sealed with release of extremely limited offsite radioactivity and release of pressure within the containment. After a number of years for fission products to decay - probably around a decade - the containment can be reopened for decontamination and demolition.
Unexpected failure modes
Another scenario sees a buildup of hydrogen, which may lead to a detonation event, as happened for three reactors during the Fukushima incident. Catalytic hydrogen recombiners located within containment are designed to prevent this from occurring; however, prior to the installation of these recombiners in the 1980s, the Three Mile Island containment (in 1979) suffered a massive hydrogen explosion event in the accident there. The containment withstood the pressure and no radioactivity was released. However, at Fukushima the recombiners did not work due to the absence of power, and hydrogen detonations breached the containment.
Speculative failure modes
One scenario consists of the reactor pressure vessel failing all at once, with the entire mass of corium dropping into a pool of water (for example, coolant or moderator) and causing extremely rapid generation of steam. The pressure rise within the containment could threaten integrity if rupture disks could not relieve the stress. Exposed flammable substances could burn, but there are few, if any, flammable substances within the containment.
Another theory called an 'alpha mode' failure by the 1975 Rasmussen (WASH-1400) study asserted steam could produce enough pressure to blow the head off the reactor pressure vessel (RPV). The containment could be threatened if the RPV head collided with it. (The WASH-1400 report was replaced by better-based newer studies, and now the Nuclear Regulatory Commission has disavowed them all and is preparing the overarching State-of-the-Art Reactor Consequence Analyses [SOARCA] study - see the Disclaimer in NUREG-1150.)
It has not been determined to what extent a molten mass can melt through a structure (although that was tested in the Loss-of-Fluid-Test Reactor described in Test Area North's fact sheet). The Three Mile Island accident provided some real-life experience, with an actual molten core within an actual structure; the molten corium failed to melt through the Reactor Pressure Vessel after over six hours of exposure, due to dilution of the melt by the control rods and other reactor internals, validating the emphasis on defense in depth against core damage incidents. Some believe a molten reactor core could actually penetrate the reactor pressure vessel and containment structure and burn downwards into the earth beneath, to the level of the groundwater.
By 1970, there were doubts about the ability of the emergency cooling systems of a nuclear reactor to prevent a loss of coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and the popular presses. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss of coolant accident, of the nuclear fuel rods and core components melting the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen.
The geographic, planet-piercing concept of the China syndrome derives from the misperception that China is the antipode of the United States; to many Americans, it is “the other side of the world”. Moreover, the hypothetical transit of a meltdown product to the other side of the Earth (i.e. China) ignores the fact that the Earth's gravity tends to pull all masses towards its center. Even assuming a meltdown product could persist in a mobile molten form long enough to reach the center of the Earth, gravity would prevent it from continuing to the other side.
Other reactor types
Other types of reactors have different capabilities and safety profiles than the LWR does. Advanced varieties of several of these reactors have the potential to be inherently safe.
CANDU reactors
CANDU reactors, a Canadian-invented deuterium-uranium design, are designed with at least one, and generally two, large low-temperature and low-pressure water reservoirs around their fuel/coolant channels. The first is the bulk heavy-water moderator (a separate system from the coolant), and the second is the light-water-filled shield tank. These backup heat sinks are sufficient to prevent either the fuel meltdown in the first place (using the moderator heat sink), or the breaching of the core vessel should the moderator eventually boil off (using the shield tank heat sink). Other failure modes aside from fuel melt will probably occur in a CANDU rather than a meltdown, such as deformation of the calandria into a non-critical configuration. All CANDU reactors are located within standard Western containments as well.
Gas-cooled reactors
One type of Western reactor, known as the advanced gas-cooled reactor (or AGCR), built by the United Kingdom, is not very vulnerable to loss-of-cooling accidents or to core damage except in the most extreme of circumstances. By virtue of the relatively inert coolant (carbon dioxide), the large volume and high pressure of the coolant, and the relatively high heat transfer efficiency of the reactor, the time frame for core damage in the event of a limiting fault is measured in days. Restoration of some means of coolant flow will prevent core damage from occurring.
Other types of highly advanced gas cooled reactors, generally known as high-temperature gas-cooled reactors (HTGRs) such as the Japanese High Temperature Test Reactor and the United States' Very High Temperature Reactor, are inherently safe, meaning that meltdown or other forms of core damage are physically impossible, due to the structure of the core, which consists of hexagonal prismatic blocks of silicon carbide reinforced graphite infused with TRISO or QUADRISO pellets of uranium, thorium, or mixed oxide buried underground in a helium-filled steel pressure vessel within a concrete containment. Though this type of reactor is not susceptible to meltdown, additional capabilities of heat removal are provided by using regular atmospheric airflow as a means of backup heat removal, by having it pass through a heat exchanger and rising into the atmosphere due to convection, achieving full residual heat removal. The VHTR is scheduled to be prototyped and tested at Idaho National Laboratory within the next decade (as of 2009) as the design selected for the Next Generation Nuclear Plant by the US Department of Energy. This reactor will use a gas as a coolant, which can then be used for process heat (such as in hydrogen production) or for the driving of gas turbines and the generation of electricity.
A similar highly advanced gas cooled reactor originally designed by West Germany (the AVR reactor) and now developed by South Africa is known as the Pebble Bed Modular Reactor. It is an inherently safe design, meaning that core damage is physically impossible, due to the design of the fuel (spherical graphite "pebbles" arranged in a bed within a metal RPV and filled with TRISO (or QUADRISO) pellets of uranium, thorium, or mixed oxide within). A prototype of a very similar type of reactor has been built by the Chinese, HTR-10, and has worked beyond researchers' expectations, leading the Chinese to announce plans to build a pair of follow-on, full-scale 250 MWe, inherently safe, power production reactors based on the same concept. (See Nuclear power in the People's Republic of China for more information.)
Experimental or conceptual designs
Some design concepts for nuclear reactors emphasize resistance to meltdown and operating safety.
The PIUS (process inherent ultimate safety) designs, originally engineered by the Swedes in the late 1970s and early 1980s, are LWRs that by virtue of their design are resistant to core damage. No units have ever been built.
Power reactors, including the Deployable Electrical Energy Reactor, a larger-scale mobile version of the TRIGA for power generation in disaster areas and on military missions, and the TRIGA Power System, a small power plant and heat source for small and remote community use, have been put forward by interested engineers, and share the safety characteristics of the TRIGA due to the uranium zirconium hydride fuel used.
The Hydrogen Moderated Self-regulating Nuclear Power Module, a reactor that uses uranium hydride as a moderator and fuel, similar in chemistry and safety to the TRIGA, also possesses these extreme safety and stability characteristics, and has attracted a good deal of interest in recent times.
The liquid fluoride thermal reactor is designed to naturally have its core in a molten state, as a eutectic mix of thorium and fluorine salts. As such, a molten core is reflective of the normal and safe state of operation of this reactor type. In the event the core overheats, a metal plug will melt, and the molten salt core will drain into tanks where it will cool in a non-critical configuration. Since the core is liquid, and already melted, it cannot be damaged.
Advanced liquid metal reactors, such as the U.S. Integral Fast Reactor and the Russian BN-350, BN-600, and BN-800, all have a coolant with very high heat capacity, sodium metal. As such, they can withstand a loss of cooling without SCRAM and a loss of heat sink without SCRAM, qualifying them as inherently safe.
Soviet Union-designed reactors
Soviet-designed RBMKs, found only in Russia and the CIS and now shut down everywhere except Russia, do not have containment buildings, are naturally unstable (tending to dangerous power fluctuations), and also have ECCS systems that are considered grossly inadequate by Western safety standards. The reactor involved in the Chernobyl disaster was an RBMK reactor.
RBMK ECCS systems only have one division and have less than sufficient redundancy within that division. Though the large core size of the RBMK makes it less energy-dense than the Western LWR core, it makes it harder to cool. The RBMK is moderated by graphite. In the presence of both steam and oxygen, at high temperatures, graphite forms synthesis gas and with the water gas shift reaction the resultant hydrogen burns explosively. If oxygen contacts hot graphite, it will burn. The RBMK tends towards dangerous power fluctuations. Control rods used to be tipped with graphite, a material that slows neutrons and thus speeds up the chain reaction. Water is used as a coolant, but not a moderator. If the water boils away, cooling is lost, but moderation continues. This is termed a positive void coefficient of reactivity.
Control rods can become stuck if the reactor suddenly heats up and they are moving. Xenon-135, a neutron absorbent fission product, has a tendency to build up in the core and burn off unpredictably in the event of low power operation. This can lead to inaccurate neutronic and thermal power ratings.
The RBMK does not have any containment above the core. The only substantial solid barrier above the fuel is the upper part of the core, called the upper biological shield, which is a piece of concrete interpenetrated with control rods and with access holes for refueling while online. Other parts of the RBMK were shielded better than the core itself. Rapid shutdown (SCRAM) takes 10 to 15 seconds. Western reactors take 1 - 2.5 seconds.
Western aid has been given to provide certain real-time safety monitoring capacities to the human staff. Whether this extends to automatic initiation of emergency cooling is not known. Training has been provided in safety assessment from Western sources, and Russian reactor designs have evolved in response to the weaknesses of the RBMK. However, numerous RBMKs still operate.
It may be possible to stop a loss-of-coolant event before core damage occurs, but any core damage incident will probably result in a massive release of radioactive materials. Further, dangerous power fluctuations are natural to the design.
Upon joining the EU, Lithuania was required to shut down the two RBMKs at Ignalina NPP, as such reactors are totally incompatible with the nuclear safety standards of Europe. It plans to replace them with a safer form of reactor.
The MKER is a modern Russian-engineered channel type reactor that is a distant descendant of the RBMK. It approaches the concept from a different and superior direction, optimizing the benefits, and fixing the flaws of the original RBMK design.
There are several unique features of the MKER's design that make it a credible and interesting option. One benefit is that in the event of a challenge to cooling within the core - such as a pipe break of a channel - the channel can be isolated from the plenums supplying water, decreasing the potential for common-mode failures.
The lower power density of the core greatly enhances thermal regulation. Graphite moderation enhances neutronic characteristics beyond light water ranges. The passive emergency cooling system provides a high level of protection by using natural phenomena to cool the core rather than depending on motor-driven pumps. The containment structure is modern and designed to withstand a very high level of punishment.
Refueling is accomplished while online, ensuring that outages are for maintenance only and are very few and far between. 97-99% uptime is a definite possibility. Lower enrichment fuels can be used, and high burnup can be achieved due to the moderator design. Neutronics characteristics have been revamped to optimize for purely civilian fuel fertilization and recycling.
Due to the enhanced quality control of parts, advanced computer controls, comprehensive passive emergency core cooling system, and very strong containment structure, along with a negative void coefficient and a fast acting rapid shutdown system, the MKER's safety can generally be regarded as being in the range of the Western Generation III reactors, and the unique benefits of the design may enhance its competitiveness in countries considering full fuel-cycle options for nuclear development.
The VVER is a pressurized light water reactor that is far more stable and safe than the RBMK. This is because it uses light water as a moderator (rather than graphite), has well understood operating characteristics, and has a negative void coefficient of reactivity. In addition, some have been built with more than marginal containments, some have quality ECCS systems, and some have been upgraded to international standards of control and instrumentation. Present generations of VVERs (the VVER-1000) are built to Western-equivalent levels of instrumentation, control, and containment systems.
However, even with these positive developments, certain older VVER models raise a high level of concern, especially the VVER-440 V230.
The VVER-440 V230 has no containment building, but only has a structure capable of confining steam surrounding the RPV. This is a volume of thin steel, perhaps an inch or two in thickness, grossly insufficient by Western standards.
- Has no ECCS. Can survive at most one 4 inch pipe break (there are many pipes greater than 4 inches within the design).
- Has six steam generator loops, adding unnecessary complexity.
- However, apparently steam generator loops can be isolated, in the event that a break occurs in one of these loops. The plant can remain operating with one isolated loop - a feature found in few Western reactors.
The interior of the pressure vessel is plain alloy steel that is exposed to water, which can lead to rust. One point of distinction in which the VVER surpasses the West is the reactor water cleanup facility - built, no doubt, to deal with the enormous volume of rust within the primary coolant loop - the product of the slow corrosion of the RPV. This model is viewed as having inadequate process control systems.
Bulgaria had a number of VVER-440 V230 models, but they opted to shut them down upon joining the EU rather than backfit them, and are instead building new VVER-1000 models. Many non-EU states maintain V230 models, including Russia and the CIS. Many of these states - rather than abandoning the reactors entirely - have opted to install an ECCS, develop standard procedures, and install proper instrumentation and control systems. Though confinements cannot be transformed into containments, the risk of a limiting fault resulting in core damage can be greatly reduced.
The VVER-440 V213 model was built to the first set of Soviet nuclear safety standards. It possesses a modest containment building, and the ECCS systems, though not completely to Western standards, are reasonably comprehensive. Many VVER-440 V213 models possessed by former Soviet bloc countries have been upgraded to fully automated Western-style instrumentation and control systems, improving safety to Western levels for accident prevention - but not for accident containment, which is of a modest level compared to Western plants. These reactors are regarded as "safe enough" by Western standards to continue operation without major modifications, though most owners have performed major modifications to bring them up to generally equivalent levels of nuclear safety.
During the 1970s, Finland built two VVER-440 V213 models to Western standards with a large-volume full containment and world-class instrumentation, control standards and an ECCS with multiply redundant and diversified components. In addition, passive safety features such as 900-tonne ice condensers have been installed, making these two units safety-wise the most advanced VVER-440's in the world.
The VVER-1000 type has a definitely adequate Western-style containment, the ECCS is sufficient by Western standards, and instrumentation and control has been markedly improved to Western 1970s-era levels.
Chernobyl disaster
In the Chernobyl disaster the fuel became non-critical when it melted and flowed away from the graphite moderator - however, it took considerable time to cool. The molten core of Chernobyl (that part that did not vaporize in the fire) flowed in a channel created by the structure of its reactor building and froze in place before a core-concrete interaction could happen. In the basement of the reactor at Chernobyl, a large "elephant's foot" of congealed core material was found. Time delay, and prevention of direct emission to the atmosphere, would have reduced the radiological release. If the basement of the reactor building had been penetrated, the groundwater would be severely contaminated, and its flow could carry the contamination far afield.
The Chernobyl reactor was an RBMK type. The disaster was caused by a power excursion that led to a meltdown and extensive offsite consequences. Operator error and a faulty shutdown system led to a sudden, massive spike in the neutron multiplication rate, a sudden decrease in the neutron period, and a consequent increase in neutron population; thus, core heat flux very rapidly increased to unsafe levels. This caused the water coolant to flash to steam, causing a sudden overpressure within the reactor pressure vessel (RPV), leading to granulation of the upper portion of the core and the ejection of the upper plenum of said pressure vessel along with core debris from the reactor building in a widely dispersed pattern. The lower portion of the reactor remained somewhat intact; the graphite neutron moderator was exposed to oxygen containing air; heat from the power excursion in addition to residual heat flux from the remaining fuel rods left without coolant induced oxidation in the moderator; this in turn evolved more heat and contributed to the melting of the fuel rods and the outgassing of the fission products contained therein. The liquefied remains of the fuel rods flowed through a drainage pipe into the basement of the reactor building and solidified in a mass later dubbed corium, though the primary threat to the public safety was the dispersed core ejecta and the gasses evolved from the oxidation of the moderator.
Although the Chernobyl accident had dire off-site effects, much of the radioactivity remained within the building. If the building were to fail and dust were to be released into the environment, then the release of a given mass of fission products which have aged for twenty years would have a smaller effect than the release of the same mass of fission products (in the same chemical and physical form) which had only undergone a short cooling time (such as one hour) after the nuclear reaction had been terminated. However, if a nuclear reaction were to occur again within the Chernobyl plant (for instance if rainwater were to collect and act as a moderator), then the new fission products would have a higher specific activity and thus pose a greater threat if they were released. To prevent a post-accident nuclear reaction, steps have been taken, such as adding neutron poisons to key parts of the basement.
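The aged-versus-fresh point follows from simple half-life arithmetic. A small sketch comparing two representative fission products (the half-lives are textbook values; the one-hour and twenty-year scenarios mirror the text):

half_lives_years = {"I-131": 8.02 / 365.25, "Cs-137": 30.1}

for age_years in (1.0 / (24 * 365.25), 20.0):   # ~1 hour, then 20 years
    for isotope, t_half in half_lives_years.items():
        remaining = 0.5 ** (age_years / t_half)
        print("%-7s after %10.6f years: %.3g of original activity"
              % (isotope, age_years, remaining))

Short-lived isotopes such as iodine-131 dominate the hazard of fresh fission products but are essentially gone after twenty years, while cesium-137 persists; this is why the aged inventory poses a smaller threat per unit mass, and why a renewed fission reaction would reset the clock.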
The effects of a nuclear meltdown depend on the safety features designed into a reactor. A modern reactor is designed both to make a meltdown unlikely, and to contain one should it occur.
In a modern reactor, a nuclear meltdown, whether partial or total, should be contained inside the reactor's containment structure. Thus (assuming that no other major disasters occur) while the meltdown will severely damage the reactor itself, possibly contaminating the whole structure with highly radioactive material, a meltdown alone should not lead to significant radiation release or danger to the public.
In practice, however, a nuclear meltdown is often part of a larger chain of disasters (although there have been so few meltdowns in the history of nuclear power that there is not a large pool of statistical information from which to draw a credible conclusion as to what "often" happens in such circumstances). For example, in the Chernobyl accident, by the time the core melted there had already been a large steam explosion, a graphite fire, and a major release of radioactive contamination (as with almost all Soviet reactors of the era, there was no containment structure at Chernobyl). Also, before a possible meltdown occurs, pressure may already be rising in the reactor; to restore cooling of the core and so prevent a meltdown, operators are allowed to reduce the pressure in the reactor by releasing (radioactive) steam into the environment. This enables them to inject additional cooling water into the reactor.
Reactor design
Although pressurized water reactors are more susceptible to nuclear meltdown in the absence of active safety measures, this is not a universal feature of civilian nuclear reactors. Much of the research in civilian nuclear reactors is for designs with passive nuclear safety features that may be less susceptible to meltdown, even if all emergency systems failed. For example, pebble bed reactors are designed so that complete loss of coolant for an indefinite period does not result in the reactor overheating. The General Electric ESBWR and Westinghouse AP1000 have passively activated safety systems. The CANDU reactor has two low-temperature and low-pressure water systems surrounding the fuel (i.e. moderator and shield tank) that act as back-up heat sinks and preclude meltdowns and core-breaching scenarios.
Fast breeder reactors are more susceptible to meltdown than other reactor types, due to the larger quantity of fissile material and the higher neutron flux inside the reactor core, which makes it more difficult to control the reaction.
Accidental fires are widely acknowledged to be risk factors that can contribute to a nuclear meltdown.
United States
There have been at least eight meltdowns in the history of the United States. All are widely called "partial meltdowns."
- BORAX-I was a test reactor designed to explore criticality excursions and to observe whether a reactor would self-limit. In the final test it was deliberately destroyed, revealing that the reactor reached much higher temperatures than had been predicted at the time.
- The reactor at EBR-I suffered a partial meltdown during a coolant flow test on November 29, 1955.
- The Sodium Reactor Experiment in Santa Susana Field Laboratory was an experimental nuclear reactor which operated from 1957 to 1964 and was the first commercial power plant in the world to experience a core meltdown in July 1959.
- Stationary Low-Power Reactor Number One (SL-1) was a United States Army experimental nuclear power reactor which underwent a criticality excursion, a steam explosion, and a meltdown on January 3, 1961, killing three operators.
- The SNAP8ER reactor at the Santa Susana Field Laboratory experienced damage to 80% of its fuel in an accident in 1964.
- The partial meltdown at the Fermi 1 experimental fast breeder reactor, in 1966, required the reactor to be repaired, though it never achieved full operation afterward.
- The SNAP8DR reactor at the Santa Susana Field Laboratory experienced damage to approximately a third of its fuel in an accident in 1969.
- The Three Mile Island accident, in 1979, referred to in the press as a "partial core melt," led to the permanent shutdown of that reactor.
Soviet Union
In the most serious example, the Chernobyl disaster, design flaws and operator negligence led to a power excursion that subsequently caused a meltdown. According to a report released by the Chernobyl Forum (consisting of numerous United Nations agencies, including the International Atomic Energy Agency and the World Health Organization; the World Bank; and the Governments of Ukraine, Belarus, and Russia) the disaster killed twenty-eight people due to acute radiation syndrome, could possibly result in up to four thousand fatal cancers at an unknown time in the future and required the permanent evacuation of an exclusion zone around the reactor.
During the Fukushima I nuclear accidents, three of the power plant's six reactors reportedly suffered meltdowns. Most of the fuel in reactor No. 1 melted, and TEPCO believes reactors No. 2 and No. 3 were similarly affected. On May 24, 2011, TEPCO reported that all three reactors had melted down.
Meltdown incidents
- There was also a fatal core meltdown at SL-1, an experimental U.S. military reactor in Idaho.
Large-scale nuclear meltdowns at civilian nuclear power plants include:
- the Lucens reactor, Switzerland, in 1969.
- the Three Mile Island accident in Pennsylvania, U.S.A., in 1979.
- the Chernobyl disaster at Chernobyl Nuclear Power Plant, Ukraine, USSR, in 1986.
- the Fukushima I nuclear accidents following the earthquake and tsunami in Japan, March 2011.
Other core meltdowns have occurred at:
- NRX (military), Ontario, Canada, in 1952
- BORAX-I (experimental), Idaho, U.S.A., in 1954
- EBR-I (military), Idaho, U.S.A., in 1955
- Windscale (military), Sellafield, England, in 1957 (see Windscale fire)
- Sodium Reactor Experiment, (civilian), California, U.S.A., in 1959
- Fermi 1 (civilian), Michigan, U.S.A., in 1966
- Chapelcross nuclear power station (civilian), Scotland, in 1967
- Saint-Laurent Nuclear Power Plant (civilian), France, in 1969
- A1 plant, (civilian) at Jaslovské Bohunice, Czechoslovakia, in 1977
- Saint-Laurent Nuclear Power Plant (civilian), France, in 1980
China Syndrome
The China syndrome is a hypothetical nuclear reactor accident, following a loss of coolant, characterized by a severe meltdown of the core components of the reactor, which then burn through the containment vessel and the housing building, and then notionally through the crust and body of the Earth until reaching the other side - jokingly said, in the United States, to be China.
The system design of the nuclear power plants built in the late 1960s raised questions of operational safety, and raised the concern that a severe reactor accident could release large quantities of radioactive materials into the atmosphere and environment. By 1970, there were doubts about the ability of the emergency cooling systems of a nuclear reactor to prevent a loss of coolant accident and the consequent meltdown of the fuel core; the subject proved popular in the technical and the popular presses. In 1971, in the article Thoughts on Nuclear Plumbing, former Manhattan Project (1942–1946) nuclear physicist Ralph Lapp used the term "China syndrome" to describe a possible burn-through, after a loss of coolant accident, of the nuclear fuel rods and core components melting the containment structures, and the subsequent escape of radioactive material(s) into the atmosphere and environment; the hypothesis derived from a 1967 report by a group of nuclear physicists, headed by W. K. Ergen. In the event, Lapp’s hypothetical nuclear accident was cinematically adapted as The China Syndrome (1979).
The geographic, planet-piercing concept of the China syndrome derives from the misperception that China is the antipode of the United States; to many Americans, it is "the other side of the world". Moreover, the hypothetical transit of a meltdown product to the other side of the Earth (i.e. China) ignores the fact that the Earth's gravity pulls all masses toward its center. Even assuming a meltdown product could persist in a mobile molten form long enough to reach the center of the Earth, momentum loss due to friction (fluid viscosity) would prevent it from continuing to the other side.
See also
- Behavior of nuclear fuel during a reactor accident
- Chernobyl compared to other radioactivity releases
- Chernobyl disaster effects
- High-level radioactive waste management
- International Nuclear Event Scale
- List of civilian nuclear accidents
- Lists of nuclear disasters and radioactive incidents
- Nuclear fuel response to reactor accidents
- Nuclear safety
- Nuclear power
- Nuclear power debate
- Martin Fackler (June 1, 2011). "Report Finds Japan Underestimated Tsunami Danger". New York Times.
- International Atomic Energy Agency (IAEA) (2007). IAEA Safety Glossary: Terminology Used in Nuclear Safety and Radiation Protection (2007 ed.). Vienna, Austria: International Atomic Energy Agency. ISBN 92-0-100707-8. Retrieved 2009-08-17.
- United States Nuclear Regulatory Commission (NRC) (2009-09-14). "Glossary". Website. Rockville, Maryland, USA: Federal Government of the United States. pp. See Entries for Letter M and Entries for Letter N. Retrieved 2009-10-03.
- Reactor safety study: an assessment of accident risks in U.S. commercial nuclear power plants, Volume 1
- Hewitt, Geoffrey Frederick; Collier, John Gordon (2000). "4.6.1 Design Basis Accident for the AGR: Depressurization Fault". Introduction to nuclear power (in Technical English). London, UK: Taylor & Francis. p. 133. ISBN 978-1-56032-454-6. Retrieved 2010-06-05.
- "Earthquake Report No. 91". JAIF. May 25, 2011. Retrieved May 25, 2011.
- Kuan, P.; Hanson, D. J., Odar, F. (1991). Managing water addition to a degraded core. Retrieved 2010-11-22.
- Haskin, F.E.; Camp, A.L. (1994). Perspectives on Reactor Safety (NUREG/CR-6042) (Reactor Safety Course R-800), 1st Edition. Beltsville, MD: U.S. Nuclear Regulatory Commission. p. 3.1–5. Retrieved 2010-11-23.
- Haskin, F.E.; Camp, A.L. (1994). Perspectives on Reactor Safety (NUREG/CR-6042) (Reactor Safety Course R-800), 1st Edition. Beltsville, MD: U.S. Nuclear Regulatory Commission. pp. 3.5–1 to 3.5–4. Retrieved 2010-12-24.
- Haskin, F.E.; Camp, A.L. (1994). Perspectives on Reactor Safety (NUREG/CR-6042) (Reactor Safety Course R-800), 1st Edition. Beltsville, MD: U.S. Nuclear Regulatory Commission. pp. 3.5–4 to 3.5–5. Retrieved 2010-12-24.
- ANS : Public Information : Resources : Special Topics : History at Three Mile Island : What Happened and What Didn't in the TMI-2 Accident
- Nuclear Industry in Russia Sells Safety, Taught by Chernobyl
- "'Melt-through' at Fukushima? / Govt. suggests situation worse than meltdown". The Yomiuri Shimbun. http://www.yomiuri.co.jp/dy/national/T110607005367.htm
- Test Area North
- Walker, J. Samuel (2004). Three Mile Island: A Nuclear Crisis in Historical Perspective (Berkeley: University of California Press), p. 11.
- Lapp, Ralph E. "Thoughts on nuclear plumbing." The New York Times, 12 December 1971, pg. E11.
- "China Syndrome". Merriam-Webster. Retrieved December 11, 2012.
- Presenter: Martha Raddatz (15 March 2011). "ABC World News". ABC.
- Allen, P.J.; J.Q. Howieson, H.S. Shapiro, J.T. Rogers, P. Mostert and R.W. van Otterloo (April–June 1990). "Summary of CANDU 6 Probabilistic Safety Assessment Study Results". Nuclear Safety 31 (2): 202–214.
- INL VVER Sourcebook. http://www.insc.anl.gov/neisb/neisb4/NEISB_1.1.html
- Partial Fuel Meltdown Events
- ANL-W Reactor History: BORAX I
- Wald, Matthew L. (2011-03-11). "Japan Expands Evacuation Around Nuclear Plant". The New York Times.
- The Chernobyl Forum: 2003-2005 (April 2006). "Chernobyl's Legacy: Health, Environmental and Socio-Economic Impacts". International Atomic Energy Agency. p. 14. Retrieved 2011-01-26.
- The Chernobyl Forum: 2003-2005 (April 2006). "Chernobyl's Legacy: Health, Environmental and Socio-Economic Impacts". International Atomic Energy Agency. p. 16. Retrieved 2011-01-26.
- Hiroko Tabuchi (May 24, 2011). "Company Believes 3 Reactors Melted Down in Japan". The New York Times. Retrieved 2011-05-25. | <urn:uuid:593ff668-f2a3-43a3-a234-69537b1789d6> | 4.1875 | 11,510 | Knowledge Article | Science & Tech. | 41.095975 | 378 |
July 24, 2011
The photo above shows a lovely group of mushrooms nestled against the trunk of a eucalyptus tree. The association between the fungi and the tree, however, is no accident. This is a mutualistic relationship, in which the two species assist each other and would probably be poorer without each other. Mutualism is any relationship between two species of organisms that benefits both species. Up to a quarter of the mushrooms you see while walking through the woods actually make their living through a mutualistic relationship with the trees of the forest. Remember, of course, that the mushroom is just the reproductive structure of a far more extensive organism consisting of a highly intertwined mass of fine white threads called a mycelium.
The word mycorrhiza is derived from the Classical Greek words for "mushroom" and "root." In a mycorrhizal association, the fungal hyphae of an underground mycelium are in contact with plant roots, but without the fungus parasitizing the plant. While it's clear that the majority of plants form mycorrhizas, the exact percentage is uncertain; it likely lies somewhere between 80 and 90 percent. When the fungus's mycelium envelops the roots of the tree, the effect is to greatly increase the soil area covered by the tree's root system. This essentially extends the plant's reach to water and nutrients, allowing it to utilize more of the soil's resources. The mutualistic association provides the fungus with relatively constant and direct access to carbohydrates, such as glucose and sucrose, supplied by the plant. In return the plant gains the benefits of the mycelium's higher absorptive capacity for water and mineral nutrients (due to the comparatively large surface area of the mycelium relative to the roots), thus improving the plant's mineral absorption capabilities. Photo taken on May 7, 2011.
Photo details: Camera Maker: Canon; Camera Model: Canon EOS 50D; Focal Length: 70.0mm; Aperture: f/10.0; Exposure Time: 0.013 s (1/80); ISO equiv: 1250; Exposure Bias: -1.00 EV; Metering Mode: Matrix; Exposure: aperture priority (semi-auto); White Balance: Auto; Flash Fired: No (enforced); Orientation: Normal; Color Space: sRGB. | <urn:uuid:8821cc54-1a17-46f7-9c23-b857acf0dd8d> | 3.40625 | 492 | Personal Blog | Science & Tech. | 43.200575 | 379 |
Two wave pulses of equal size but opposite displacement travel toward each other along a stretched string. Which of the following statements are true?

A. There is an instant at which the string is completely straight.
B. When the two pulses interfere, the energy of the pulses is momentarily zero.
C. There is a point on the string that does not move up or down.
D. There are several points on the string that do not move up or down.
E. A and C are both true.
F. B and D are both true. | <urn:uuid:abc13551-d525-435b-9b24-7edce26b05a0> | 2.890625 | 88 | Q&A Forum | Science & Tech. | 93.35131 | 380 |
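One way to reason about the choices is to superpose the two pulses numerically. A minimal Python sketch, assuming one upright and one inverted pulse of the same (Gaussian) shape approaching each other at unit speed:

import math

def pulse(u):
    # Assumed localized pulse shape; any shape gives the same conclusions.
    return math.exp(-u * u)

def y(x, t):
    # Superposition: upright pulse moving right, inverted pulse moving left.
    return pulse(x - t) - pulse(x + t)

# At t = 0 the pulses overlap and cancel everywhere: the string is
# momentarily straight (but still moving, so its energy is kinetic).
print(max(abs(y(x / 10.0, 0.0)) for x in range(-100, 101)))

# The midpoint x = 0 never moves up or down: a permanent node.
print(max(abs(y(0.0, t / 10.0)) for t in range(-100, 101)))

Run as written, both maxima print 0.0. On this reading, A and C hold, while B fails: the energy has not vanished, it is momentarily all kinetic.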
A Java Runtime Environment (JRE), version 1.6 or greater, is required to run Jython and GeoScript. Chances are your system already has a JRE installed on it. A quick way to test is to execute the following from the command line:
% java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03-384-10M3425)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02-384, mixed mode)
If the command is not found or the Java version is less than 1.6 you must install a new JRE. Otherwise you can continue to the next step.
A JRE can be downloaded from Sun Microsystems.
It is possible to run GeoScript with a different non Sun JRE. However the Sun JRE is recommended as it has been thoroughly tested.
Jython version 2.5.1 or greater is required for GeoScript. The current version can be downloaded from http://www.jython.org/. After installing, ensure that the Jython bin directory is on the path:

export PATH=$PATH:<JYTHON_DIR>/bin

Where <JYTHON_DIR> is the root Jython installation directory.
Run ez_setup.py with Jython:

jython ez_setup.py
Some newer features are only available in the latest 1.3 build, which is still considered experimental.
Unpack the GeoScript archive:
Change directory into the root of the unpacked archive and execute setup.py:
cd geoscript-1.2 jython setup.py install
Depending on your setup the install may require root privileges.
That’s it. GeoScript should now be installed on the system. To verify the install execute the geoscript command:
% geoscript
Jython 2.5.1 (Release_2_5_1:6813, Sep 26 2009, 13:47:54)
[Java HotSpot(TM) Client VM (Apple Inc.)] on java1.5.0_20
Type "help", "copyright", "credits" or "license" for more information.
>>> import geoscript
>>>
If you do not get an import error congratulations! GeoScript is installed on the system. | <urn:uuid:ed373a01-f120-466b-bb33-95b45314fad3> | 2.5625 | 482 | Tutorial | Software Dev. | 74.319605 | 381 |
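As a first step beyond the import, here is a quick smoke test; this assumes the geoscript.geom module layout, so adjust the names to your installed version:

from geoscript import geom

p = geom.Point(10, 10)      # a simple point geometry
buffered = p.buffer(2)      # polygon approximating a 2-unit disc
print(buffered.area)        # roughly pi * 2**2, computed by JTS

If this prints a number close to 12.57, the GeoTools/JTS backend is wired up correctly as well.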
Hot Weather Gets Scientists' Attention
Originally published on Wed July 11, 2012 5:30 am
RENEE MONTAGNE, HOST:
Across America people are sweltering through extreme heat this year, continuing a long-term trend of rising temperatures. Inevitably, many are wondering if the scorching heat is due to global warming. Scientists are expected to dig into the data and grapple with that in the months to come. They've already taken a stab at a possible connection with last year's extreme weather events, like the blistering drought in Texas. NPR's Richard Harris reports.
RICHARD HARRIS, BYLINE: Weather researchers from around the world are now taking stock of what happened in 2011. It was not the hottest year on record, but it was still in the top 15. Jessica Blunden from the National Climatic Data Center says 2011 had its own memorable characteristics.
JESSICA BLUNDEN: People may very well remember this year as a year of extreme weather and climate.
HARRIS: There were devastating droughts in Africa, Mexico, and Texas. In Thailand, massive flooding kept people's houses underwater for two months.
BLUNDEN: Here in the United States, we had one of our busiest and most destructive seasons on record in 2011. There were seven different tornado and severe weather outbreaks that each caused more than a billion dollars in damages.
HARRIS: So what's going on here? Federal climate scientist Tom Karl said one major feature of the global weather last year was a La Nina event. That's a period of cooler Pacific Ocean temperatures, and it has effects around the globe, primarily producing floods in some parts of the world and droughts in others.
TOM KARL: By no means did it explain all of the activity in 2011, but it certainly influenced a considerable part of the climate and weather.
HARRIS: Karl and Blunden are part of a huge multinational effort to sum up last year's weather and say what it all means. They provided an update by conference call. Clearly, long-term temperature trends are climbing as you'd expect as a result of global warming. Tom Peterson from the Federal Climate Data Center says the effort now is to look more closely at individual events.
TOM PETERSON: You've probably all heard the term you can't attribute any single event to global warming, and while that's true, the focus of the science now is evolving and moving on to how the probability of such events is changing.
HARRIS: And there researchers report some progress. For example, last year's record-breaking drought in Texas wasn't simply the result of La Nina. Peter Stott from the British Meteorology Office says today's much warmer planet played a huge role as well, according to the study the group released on Tuesday.
PETER STOTT: The result that they find is really quite striking, in that they find that such a heat wave is now about 20 times more likely during a La Nina year than it was during the 1960s.
HARRIS: A second study found that an extraordinary warm spell in London last November was 60 times more likely to occur on our warming planet than it would have been over the last 350 years. But that's not to say everything is related to climate change. There's no clear link between the spate of tornadoes and global warming, and the devastating floods in Thailand last year turned out to be the result of poor land-use practices.
Even so, Kate Willett of the British Weather Service says there is a global trend consistent with what scientists expect climate change to bring.
KATE WILLETT: So, in simple terms, we can say that the dry regions are getting drier and the wet regions are getting wetter.
HARRIS: This year's extreme events are different from last year's, but they all fit into a coherent picture of global change. Richard Harris, NPR News. Transcript provided by NPR, Copyright NPR. | <urn:uuid:e8e46237-1e26-4326-b62c-a25477bd0d59> | 3.15625 | 822 | Truncated | Science & Tech. | 53.82621 | 382 |
kottke.org posts about pimovie
Pi, God, and apartment supercomputers Jul 18 2005
The New Yorker recently ran a feature on how a couple of mathematicians helped The Met photograph a part of The Hunt of the Unicorn tapestries. That same week, they ran from their extensive archives a 1992 profile of the same mathematicians, brothers David and Gregory Chudnovsky. The Chudnovskys were then engaged in calculating as many digits of pi as they could using a homemade supercomputer housed in their Manhattan apartment. There's some speculation that director Darren Aronofsky based his 1998 film, Pi, on the Chudnovskys, and after reading the above article, there's little doubt that's exactly what he did:
They wonder whether the digits contain a hidden rule, an as yet unseen architecture, close to the mind of God. A subtle and fantastic order may appear in the digits of pi way out there somewhere; no one knows. No one has ever proved, for example, that pi does not turn into nothing but nines and zeros, spattered to infinity in some peculiar arrangement. If we were to explore the digits of pi far enough, they might resolve into a breathtaking numerical pattern, as knotty as "The Book of Kells," and it might mean something. It might be a small but interesting message from God, hidden in the crypt of the circle, awaiting notice by a mathematician.
The Chudnovsky article also reminds me of Contact by Carl Sagan in which pi is prominently featured as well.
According to Wolfram Research's Mathworld, the current world record for the calculation of digits of pi is 1,241,100,000,000 digits (about 1.24 trillion), held by Japanese computer scientists Kanada, Ushio and Kuroda. Kanada is named in the article as the Chudnovskys' main competitor at the time.
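For the curious, the series the Chudnovskys used is easy to sketch, though nothing like their record-setting implementation. A toy Python version of the Chudnovsky algorithm using the decimal module; each term contributes roughly 14 digits:

from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    """Toy Chudnovsky-series pi; fine for hundreds of digits, not trillions."""
    getcontext().prec = digits + 10          # guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):     # ~14.18 digits per term
        M = M * (K**3 - 16 * K) // i**3      # exact integer recurrence
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    return C / S

print(chudnovsky_pi(50))

Record-scale runs replace this straightforward loop with binary splitting and FFT-based big-integer arithmetic, but the underlying series is the same.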
(Oh, and as for patterns hidden in pi, we've already found one. It's called the circle. Just because humans discovered circles first and pi later shouldn't mean that the latter is derived from the former.) | <urn:uuid:d34fa334-0767-4289-b1bc-0392e180cdaf> | 2.609375 | 425 | Personal Blog | Science & Tech. | 43.302 | 383 |
[Tutor] adding together strings
Fri, 7 Apr 2000 08:02:02 -0700
> Hello python tutors,
> I hope you can help me with a little problem. (I am a beginner).
> I am working on a program which will access some files in a folder on the
> desktop without me having to type in the whole address every time.
> Here is what I would like to do.
> filename = raw_input("type name of file ")
> filename="C:\windows\desktop\targetfolder\" + filename
> When I try this kind of thing at the command line it works fine, but when I
> put it into a module it tells me that "filename" is an "invalid token"
I'm surprised it works at the command line - you have a problem here with
Python's rules for escaping special characters in strings. The primary
problem is that the \ before the final " on the second line escapes the ",
so that Python thinks of it as being a quote within the string rather than
the quote that ends the string.
Similarly, although you may not have noticed it, the \t in \targetfolder
gets turned into a tab.
You can fix this immediately by something like:
filename = raw_input("type name of file ")
filename="C:\\windows\\desktop\\targetfolder\\" + filename
In other words, in regular strings in Python, whenever you want a "\" you
should type a "\\".
A better solution to your problem, because it will be breeding good habits
for the future, is to use the os.path module's functions, particularly
os.path.join. This will help if you ever have to work with unix or mac
systems in python; and means you don't have to work with all those double
backslashes on your system.
Hope this helps. | <urn:uuid:6b44b8a1-5afb-4888-b8f0-8c82c30d88d9> | 2.546875 | 409 | Comment Section | Software Dev. | 57.046184 | 384 |
Visualizing 1 + 1/x
Date: 10/10/2003 at 21:25:38 From: Mary Subject: logic How can I show that the sum of a positive number and its reciprocal is at least two?
Date: 10/11/2003 at 06:06:27
From: Doctor Luis
Subject: Re: logic

Hi Mary,

Adding a positive number x > 0 and its reciprocal 1/x gives you the function

  f(x) = x + 1/x

If you're familiar with calculus, you can see that solving for the extrema points gives you

  f'(x) = 1 - 1/x^2 = 0
  1 = 1/x^2
  x^2 = 1
  x = 1   (reject the negative root since x > 0)

Since f"(x) = 2/x^3 is positive for x > 0, we know that f(x) is concave upward. This means that the critical point x = 1 gives you a minimum. This minimum value is f(1) = 1 + 1/1 = 2. In the following diagram, I've graphed the two functions y = x + 1/x and y = 2.

Even if you are not familiar with calculus, maybe you can follow this chain of reasoning. The square of any nonzero real number is positive. As an inspired guess, pick x - 1 as the real number to be squared. Then,

  (x - 1)^2 >= 0     (True for all x. Equality holds only for x = 1.)
  x^2 - 2x + 1 >= 0
  x^2 + 1 >= 2x

Now, let x > 0, since we are only interested in positive numbers. This means that 1/x > 0 too. So, we can multiply by 1/x without reversing the sign of our inequality:

  (1/x)*(x^2 + 1) >= (1/x)*(2x)
  x + 1/x >= 2

This proves that the sum of x > 0 and its reciprocal 1/x adds up to at least 2. I hope this helped! Let us know if you have any more questions.

- Doctor Luis, The Math Forum
http://mathforum.org/dr.math/
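As a quick numerical sanity check of the result (not a proof, just sampling f on a grid of positive x):

# Sample f(x) = x + 1/x on a grid of positive x; the smallest sampled
# value should be 2, attained at x = 1.
xs = [i / 1000.0 for i in range(1, 10001)]
best = min(xs, key=lambda x: x + 1.0 / x)
print(best, best + 1.0 / best)   # -> 1.0  2.0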
Work continues on readying Curiosity for surface operations on Mars, with characterisation phase well underway.
The week has seen the rover’s Chemistry and Camera system – ChemCam – undergoing its calibration tests using a target system located towards the back of the rover, while scientists have been looking for candidates for the first full test firing of the system at a suitable surface target.
ChemCam is a complex system split between Curiosity’s mast and body. The mast unit is the large box-like unit at the top of the mast. It contains a laser unit, a remote micro-imager (RMI) and a telescope for focusing both.
The body unit carries three spectrographs for chemical analysis and has its own power supply and an electronic interface to the rover’s central computer system.
ChemCam has two main functions, split between the laser system (the Laser-induced Breakdown Spectroscopy (LIBS), to give it its proper name) and the Remote Micro-Imager (RMI).
LIBS is designed to fire a series of laser pulses at a target spot smaller than 1 millimetre across on the surface of rocks and soils, vaporizing it. Light from the resultant plasma is captured by the telescope and sent via fibre optics to the on-board spectrographs for analysis, which should provide information in unprecedented detail about minerals and microstructures in Martian rocks. Additionally, the laser can be used to remove dust from the surfaces of rocks, allowing the drill on Curiosity's hand to obtain samples of the rock free from surface contaminants.
The RMI provides black-and-white images at 1024×1024 resolution in a 0.02 radian (1.1 degree) field of view – approximately equivalent to a 1500mm lens on a 35mm camera. RMI has two functions. In the first, it will be used in conjunction with LIBS to identify suitable targets and target locations (targets can be selected autonomously or via Earth-based selection and command). Working independently of LIBS, it will be used to obtain close-up images in support of robot arm-mounted experiments or provide images of very distant objects.
This week, ChemCam was calibrated using a target system mounted on the rear section of the rover, below the UHF antenna. As a result, ChemCam was confirmed ready for operations and is expected to make its first test firing on an actual Martian rock on Saturday, August 18th. The sample is provisionally designated N165 and sits a short distance from the rover.
ChemCam is a joint US / French experiment, with the US Los Alamos National Laboratory providing the body unit, the French national space agency (CNES) providing the mast unit (RMI, laser, etc.), and JPL the fibre-optic link between the two.
Plants modified with protectant genes designed to kill resistant insects can extend the usefulness of currently used pest-control methods and delay the development of pesticide-resistant bugs, according to Purdue University scientists and their collaborators from the University of Wisconsin-Madison, Monsanto Co., the University of Illinois and the University of California, Davis. The researchers' findings appear in this month's issue of the Journal of Theoretical Biology.
"We always thought that it would take a Michael Jordan of toxins - a superstar of toxins to effectively halt insect resistance to the current generation of insecticides," said Barry Pittendrigh, a Purdue associate professor of entomology and lead author of the study. "We found that moderately effective genetically engineered protectants used in plants in the buffer zone around the main crops can play a major role in insect control, and they should be easier to identify than highly effective protectants.
"You don't find a superstar very often, but it may not be difficult to find good players, or worthwhile insect-control agents."
Farmers who use bioengineered crop protectants also use a buffer, or refuge, around the outside of fields that contains plants lacking the high-toxicity genetic modification in the main field that kills most insects. The refuge, usually about 20 percent of the acreage planted, delays development of insects resistant to the main-field, high-toxicity protectants, but some individuals in the destructive insect group have genes that allow them to survive.
Using a computer model, the scientists determined that within a refuge one could add a moderate plant protectant, or journeyman player, that kills 30 percent of the insects.
Images courtesy Peter Vrsansky, Slovak Academy of Sciences
Published August 30, 2012
A glowing green cockroach would seem much easier to kill than our more familiar kitchen pests, but this particular insect evolved its own set of lights to avoid exactly such predatory attention, according to a new study.
Luchihormetica luckae glows to mimic the bioluminescent click beetle, whose glow warns predators of its toxicity.
For one thing, while many life-forms have evolved their own flashiness, most are found in the deep-sea—making bioluminescence a relatively rare trait on land. But L. luckae is particularly rare, in that it glows to mimic another insect.
Other uses of bioluminescence in the insect world, as in the case of the common firefly, are more attuned to attracting mates—lighting up to find love in the dark simply saves time.
Unfortunately, it also makes one much more visible to predators.
"Bioluminescence is like any evolutionary tool—there is no single use for it. It can attract, deter, or even be used as an invisibility cloak of sorts," said Olivia Judson, an evolutionary biologist and author of Dr. Tatiana's Sex Advice to All Creation.
Land Animals Glowed Later Than Thought?
The scientists studied an L. luckae cockroach collected in 1939 and housed at the National Museum of Natural History in Washington, D.C. The team employed new technology to scan and analyze the biological mechanism responsible for the luminescence.
They determined that the wavelengths of light released from both the click beetle and L. luckae—though developed via distinct evolutionary processes—are precisely the same.
The new research may also provide evidence for a much later evolution of land-based bioluminescence, according to the study authors.
That's because click beetles evolved their predator-deterring glow only 65 million years ago—recently compared with the 400-million-year-old development of underwater bioluminescence. (See a prehistoric time line.)
Glowing Roach a Flash in the Pan?
L. luckae could prove to be a flash in the evolutionary-science pan.
The one specimen analyzed in the study had been collected from a very specific region recently decimated by volcanic eruption. Scientists now consider the creature so rare that collecting further specimens could cause its extinction.
So chances are you won't be finding these little glowing pests raiding your cabinets.
The glowing-cockroach study appeared recently in the journal Naturwissenschaften.
>>Look for Olivia Judson's writing on bioluminescence in a future issue of National Geographic magazine.
Our main goal here is to give a quick visual summary that is at once convincing and data rich. These displays employ some of the most basic tools of visual data analysis and should probably become part of the basic vocabulary of an experimental mathematician. Note that traditionally one would run a test such as the Anderson-Darling test (which we have done) for the continuous uniform distribution and associate a particular probability with each of our sets of probabilities, but unless the probability values are extremely high or low it is difficult to interpret these statistics.
Experimentally, we want to test graphically the hypothesis of normality and randomness (or non-periodicity) for our numbers. Because the statistics themselves do not fall into the nicest of distributions, we have chosen to plot only the associated probabilities. We include two different types of graphs here. A quantile-quantile plot is used to examine the distribution of our data and scatter plots are used to check for correlations between statistics.
The first is a quantile-quantile plot of the chi-square base 10 probability values versus a discrete uniform distribution. For this graph we have placed the probabilities obtained from our square roots and plotted them against a perfectly uniform distribution. Finding nothing here is equivalent to seeing that the graph is a straight line with slope 1. This is a crude but effective way of seeing the data. The disadvantage is that the data are really plotted along a one-dimensional curve, and as such it may be impossible to see more subtle patterns.
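The tests themselves are easy to prototype. A small Python sketch of the chi-square digit test and the quantile-quantile comparison described above (assumed tooling: mpmath for the high-precision roots, scipy for the test; 2500 digits per number, as in the text):

from mpmath import mp, sqrt, nstr
from scipy import stats

def digit_chi2_probability(n, digits=2500):
    # Chi-square uniformity probability for the decimal digits of sqrt(n).
    mp.dps = digits + 20
    s = nstr(sqrt(n), digits + 10)
    ds = [int(c) for c in s if c.isdigit()][:digits]
    counts = [ds.count(d) for d in range(10)]
    return stats.chisquare(counts).pvalue

# Probabilities for the square roots of the non-squares up to 102; under
# the null hypothesis these behave like uniform draws on [0, 1].
probs = sorted(digit_chi2_probability(n)
               for n in range(2, 102) if int(n ** 0.5) ** 2 != n)

# Quantile-quantile check: sorted probabilities against uniform quantiles;
# a straight line of slope 1 means "nothing interesting is occurring".
uniform_q = [(k + 0.5) / len(probs) for k in range(len(probs))]
for p, q in list(zip(probs, uniform_q))[:5]:
    print(p, q)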
The other graphs are examples of scatter plots. The first scatter plot shows that nothing interesting is occurring. We are again looking at probability values, this time derived from the discrete Cramer-von Mises (CVM) test base 10,000. For each cube root we have plotted the point (p_i, q_i), where p_i is the CVM base 10,000 probability associated with the first 2500 digits of the cube root of i and q_i is the probability associated with the next 2500 digits. A look at the graph reveals that we have now plotted our data on a two dimensional surface and there is a lot more `structure' to be seen. Still, it is not hard to convince oneself that there is little or no relationship between the probabilities of the first 2500 digits and the second 2500 digits.
The last graph is similar to the second. Here we have plotted the probabilities associated with the Anderson-Stephens statistic of the first 10,000 digits versus the first 20,000 digits. We expect to find a correlation between these tests since there is a 10,000 digit overlap. In fact, although the effect is slight, one can definitely see the thinning out of points from the upper left hand corner and lower right hand corner.
Figure 1: Graphs 1-3 | <urn:uuid:6697aede-f5b6-4d7b-b653-9cc6d6586fb4> | 3.5625 | 554 | Academic Writing | Science & Tech. | 42.204993 | 389 |
No for "dry", yes for "wet".
For "dry friction", such as a box on a floor, it is relatively constant. Why is this? Most objects are microscopically rough with "peaks" that move against each-other. As more pressing force is applied, the peaks deform more and the true contact area is increases proportionally. The surfaces adhere forming a bond that will take a certain amount of shear force to break. Since the molecules are moving much faster ~300m/s than the box they have plenty of time to adhere (so velocity is not an issue). However, static friction is sometimes be higher, in one explanation because the peaks have time to settle and interlock with each-other. Neglecting static friction, force is constant.
The simplest case in wet friction is two objects separated by a film of water. In this case there is zero static friction, as the thermal energy is sufficient to disrupt any static, shear-bearing structure of water molecules. However, water molecules still push and pull on each other, transferring momentum from the top surface to the bottom. The rate of momentum transfer, i.e. the "friction", grows in proportion to how much momentum is available, which in turn grows with velocity. Thus, force is linear in velocity.
However, interesting things happen when the bulk mass of the water becomes important. In this case, bumps and other features on one surface push on the water, creating currents that can ram into bumps on the other surface. If you double the velocity, your bumps push twice as much water twice as fast, for 4 times the force; force is quadratic in velocity. You can plug in formulas for the linear case (which depends on viscosity) and the quadratic case (which depends on density) to see which one "wins" (their ratio is roughly the Reynolds number); if there is no clear winner the answer is complex (see the Moody diagram).
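To see the crossover concretely, compare the two force scales directly. A rough order-of-magnitude Python sketch (the length scale and speeds are assumed, purely illustrative values for water):

# Dimensional-analysis force scales: viscous ~ mu * L * v,
# inertial ~ rho * L**2 * v**2; their ratio is the Reynolds number.
mu = 1.0e-3     # viscosity of water, Pa*s
rho = 1.0e3     # density of water, kg/m^3
L = 0.01        # assumed characteristic gap/bump size, m

for v in (1e-3, 1e-1, 1e1):                 # sliding speeds, m/s
    viscous = mu * L * v
    inertial = rho * L ** 2 * v ** 2
    reynolds = rho * v * L / mu
    print("v=%6g m/s  Re=%8g  viscous~%8.2g N  inertial~%8.2g N"
          % (v, reynolds, viscous, inertial))

At low speed the viscous (linear) term dominates; at high speed the inertial (quadratic) term takes over, exactly the two regimes described above.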
Nevertheless these are approximations and the real answer could fail to follow these "rules". | <urn:uuid:55986c1f-03be-4b53-892f-8b50cf3b888c> | 3.203125 | 417 | Q&A Forum | Science & Tech. | 52.590476 | 390 |
Major Section: HISTORY
Example Forms:
ACL2 !>:puff* :max
ACL2 !>:puff* :x
ACL2 !>:puff* 15
ACL2 !>:puff* "book"

General Form:
:puff* cd

where cd is a command descriptor (see command-descriptor) for a ``puffable'' command. See puff for the definition of ``puffable'' and for a description of the basic act of ``puffing'' a command.
Puff* is just the recursive application of puff.
Puff* prints the region puffed, using pcs.
To puff a command is to replace it by its immediate subevents, each of which is executed as a command. To puff* a command is to replace the command by each of its immediate subevents and then to puff* each of the puffable commands among the newly introduced ones.
For example, suppose "ab" is a book containing the following:

(in-package "ACL2")
(include-book "a")
(include-book "b")

Suppose that book "a" contains the defuns for the functions a1 and a2, and that book "b" contains the defuns for the functions b1 and b2.
Now consider an ACL2 state in which only two commands have been executed, the first being (include-book "ab") and the second being (include-book "c"). Thus, the relevant part of the display produced by :pbt 1 would be:

1 (INCLUDE-BOOK "ab")
2 (INCLUDE-BOOK "c")

Call this state the ``starting state'' in this example, because we will refer to it several times.
Suppose :puff 1 is executed in the starting state. Then the first command is replaced by its immediate subevents, and :pbt 1 would show:

1 (INCLUDE-BOOK "a")
2 (INCLUDE-BOOK "b")
3 (INCLUDE-BOOK "c")

Contrast this with the execution of :puff* 1 in the starting state. Puff* would first puff (include-book "ab") to get the state shown above. But then it would recursively puff* the puffable commands introduced by the first puff. This continues recursively as long as any puff introduced a puffable command. The end result of :puff* 1 in the starting state is:
1 (DEFUN A1 ...)
2 (DEFUN A2 ...)
3 (DEFUN B1 ...)
4 (DEFUN B2 ...)
5 (INCLUDE-BOOK "c")

Observe that when puff* is done, the originally indicated command, (include-book "ab"), has been replaced by the corresponding sequence of primitive events. Observe also that puffable commands elsewhere in the history, for example command 2 in the starting state, are not affected (except that their command numbers grow as a result of the splicing in of earlier commands).
C++ is the canonical example of a language that combines low-level and high-level features [1]. It doesn't simulate anything: it provides native support for almost every high-level construct you'll usually find in a common high-level language and almost every low-level construct you'll find in C.
But of course the terms are highly relative; there was a point in time (not that long ago [2]) when C was considered a very high-level language. And there are quite a few other languages that offer considerable low-level functionality while still being commonly regarded as high-level, and vice versa; the lines are kind of fuzzy.
As for the syntax, that's something that naturally affected by the language's level of abstraction. Low-level generally means:
In computer science, a low-level programming language is a programming language that provides little or no abstraction from a computer's instruction set architecture. Generally this refers to either machine code or assembly language. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being "close to the hardware."
So naturally a low-level language adopts a syntax that's closer to machine code, which is inherently non human friendly. Quite a few languages, like C++, have adopted a wide variety of syntactic sugar, as a mechanism to make things easier to read or to express. But syntactic sugar is something that almost every high level language has opted for, C++'s sugar alone doesn't make it a low-level language.
As for the complexity of a low- and high-level language, it's also natural: it's a tool with multiple goals, and every single goal adds to its complexity. That's unavoidable regardless of the goal. High-level languages are not "better" than low-level ones, they are just more concentrated on one goal. Languages that are designed with ease of use as a primary goal tend to be high-level, but that's only important if the necessary trade-offs to achieve the goal don't affect your applications.
Low or high level doesn't really matter, languages are primarily tools. You should choose the one that best fits whatever you're building in combination with what skills you have. Most popular languages are multi-purpose and Turing complete, in theory they are valid choices for building almost anything. There are no absolutes, of course, you may win in some areas if you opt for a high-level language and in others if you opt for a lower-level one, even within the same application.
Most large scale applications mix and match, following the "right tool for the job" mentality, and that's a more efficient approach, imho, than trying to have your cake and eat it too.
[1] But please note that there isn't a definitive answer on what's considered a strictly high-level feature and what a low-level one.

[2] In human years; in software years it was long ago...
Caecilians are amphibians that lack limbs. They look a bit like earthworms or snakes and can grow up to 1.5 m (5 ft) in length. As they generally live underground, they are the most under-studied group of amphibians.
Not all amphibians have a tadpole stage. Some caecilians give birth to live young, and some salamanders have larvae that essentially resemble the adult stage, but with external gills. There are many terrestrial frog species that emerge as froglets directly from the egg, bypassing the tadpole stage altogether. This adaptation allows them to live far from water bodies (on mountain tops, for instance), and provides the parents with an increased ability to guard their eggs, which are laid on land. It also removes a serious risk that aquatic larvae must face: predation by fish or dragonfly larvae. Many terrestrial salamanders employ this strategy as well. (Photo credit: Fogden).
Amphibians are the oldest land vertebrates. Ichthyostega was an amphibian species that lived in Greenland 362 million years ago.
The Northern & Southern Gastric Brooding frogs Rheobatrachus vitellinus and R. silus lived in eastern Australia. These amazing frogs could actually shut down their gastric juices while rearing their young inside their stomachs! They therefore held great promise for advances in human medicine, as research on these frogs may have resulted in a cure for ulcers.
Unfortunately, the gastric-brooding frogs vanished within a few years of being discovered by scientists--the health of humans and frogs is clearly intertwined. On the right you can see a tiny R. silus froglet emerging from its mother's mouth. (Photo by D. Sarille; top photo of R. vitellinus is by M. Davies)
The smallest frogs are Paedophryne dekot and Paedophryne verrucosa from Papua New Guinea, sizing in at only 9 mm in length. Next up is the critically endangered Cuban frog Eleutherodactylus iberia. These frogs measure only 10 mm (0.4 in) when fully grown. They are threatened by pesticides, and by large-scale mining operations that destroy their habitat. (Photo of E. iberia by M. Lammertink)
Izecksohn's Toad Brachycephalus didactylus from southeastern Brazil reaches full size at only 10mm (0.4 in). It is known in Brazil as "sapo-pulga" -- the Flea Toad.
The world's largest frog is the Goliath Frog Conraua goliath, which lives in western Africa. They can grow to be over 30 cm (1 ft) long, and weigh over 3 kg (6.6 lbs). This species is endangered, due to conversion of rainforests into farmland, and due to their being used as a local food source.
The strawberry poison dart frog Dendrobates pumilio has an extraordinary reproductive strategy. Females lay their eggs in the leaf-litter or on plants. When the tadpoles hatch, they climb onto the mother's back. She then transports them to small pockets of water in bromeliads or other vegetation, often high in the trees. She returns intermittently through their development to lay unfertilized eggs in the water. These eggs serve as the tadpoles' primary food source. Dendrobates pumilio occurs throughout the Caribbean coast of Central America. Other poison-dart frog species carry their tadpoles around as well. Note the tadpoles in the photo to the right. (Top photo of D. pumilio taken at Red Frog Beach, Bocas del Toro, Panama. Bottom photo is the Golfo Dulce Poison Dart Frog Phyllobates vittatus on Costa Rica's Osa Peninsula)
Assa darlingtoni, commonly called the marsupial frog, lives in the rainforests of eastern Australia, where it lays its eggs in moist leaf-litter. Both parents guard the nest of about 30 eggs, and when the froglets emerge, they crawl into the father's two hip-pockets, where they hang out for several weeks. The adult in the picture is about the size of a thumbnail, imagine how small the froglets are!
The word amphibian is derived from Greek and means 'two lives', referring to the fact that most amphibians spend their larval stage as aquatic, herbivorous tadpole, and their adult stage as terrestrial carnivore. However, some amphibians spend virtually their entire lives in the water (i.e. African clawed frogs Xenopus laevis, and mudpuppies Necturus). Others, like the Puerto Rican coqui Eleutherodactylus coqui or Dunn's salamander Plethodon dunni from Oregon, spend their entire lives on land: they lay their eggs in moist leaf-litter, bypass the tadpole stage and may never enter a water body. (Photo is of Whistling Treefrog Litoria verreauxii)
Tadpoles have gills like fish, and most adult frogs have lungs like yours. However, amphibians have permeable skin that allows them to absorb both water and oxygen directly from the environment, right through their skin. Plethodontid salamanders have no lungs: they breathe solely through their skin and through the tissues lining their mouths. The world's first known lungless frog, Barbourula kalimantanensis, was recently found in the jungles of Borneo. The largest lungless amphibian is an 80 cm (2.5 ft) caecilian Atretochoana eiselti from Brazil. (Photo by D. Bickford of the Evolutionary Ecology and Conservation Lab).
The Australian stony creek frogs Litoria wilcoxii and Litoria jungguy occasionally build a sand nest for their eggs. In the photo at right, eggs are in the center of the nest, which is immediately beside a stream. Thus the eggs are kept in a moist environment, safe from fish for the time being. The next large rain will wash them into the stream and they will emerge as tadpoles.
Not much separates frogs from toads. True toads (bufonids) tend to have short legs and dry, "warty" skin, though there are plenty of frog species that fit this description as well. Toads tend to have toxic secretions, but so do poison dart frogs. However, toads do have significantly higher chances of resembling that alien who lives down the street from you. (Photo of American toad Bufo americanus, our national toad).
The lumps behind a toad's eyes are the parotoid glands, which hold a cocktail of toxic secretions. Since toads are pretty slow, they need a way to defend themselves from predators. The cane toad Bufo marinus has 20 bufotoxins, some of which are potent enough to kill a snake many times its size. Contrary to urban legend, if you lick one, you'll probably just throw up. The Sonoran Desert Toad Bufo alvarius has secretions that can cause hallucinations.
Most toxic amphibians (like cane toads or poison dart frogs) accumulate their toxins from the insects they eat. But Australia's critically endangered Corroboree frogs Pseudophryne corroboree and P. pengilleyi manufacture their own toxins. They may be the only vertebrates capable of such a feat. (Photo credit unknown)
A batrachologist is a person who studies amphibians. While "batracho" has been used in science for over 150 years to denote amphibians, the term batrachologist has only come into recent usage. Formerly, the term herpetologist was used, but this name encompassed those who studied amphibians and/or reptiles.
Frog deformities have caused alarm since the early 1990's, when high numbers of frogs in the Midwest were found with missing limbs, extra limbs or other developmental abnormalities. Many of these deformations are caused by a trematode parasite Ribeiroia ondatrae that burrows into tadpoles' hind limbs. Why did the malformation rate increase so dramatically in the last two decades? This is unknown, but it may be due to increased levels of eutrophication (an un-natural state caused by excessive amounts of fertilizer entering a water body), which allowed snails that are used by the trematode as an intermediary host to increase in numbers, thus providing optimal breeding conditions for the trematode. Furthermore, pesticides have been shown to weaken frogs' immune systems and make them more vulnerable to trematode infections. The photo on the right is a 6-legged Spotted Grass Frog Limnodynastes tasmaniensis. Kind of cool, but in a not-so-cool kind of way. (Photo credits unknown).
Some frogs breed in ephemeral pools that form after heavy rains. To ensure that their tadpoles do not die when their puddle dries, the tadpoles are often adapted to metamorphose quickly, perhaps within a week or two. Other frogs, however, like the Tailed Frog Ascaphus truei from the Pacific Northwest or Australia's Barred Frog Mixophyes, live in permanent ponds or streams and can remain in the tadpole stage for 2 or 3 years.
Speaking of Barred Frogs, the eyes of Fleay's Barred Frog Mixophyes fleayi actually change color as they get older. Juveniles have partially red eyes, but in adults, the red has changed to blue.
Wood frogs Rana sylvatica are the only North American frog that lives above the Arctic Circle. Frogs are ectotherms (cold-blooded), meaning they cannot internally control their body temperatures. Wood frogs are adapted to cold winters and are able to survive a deep freeze: their breathing, blood flow, and heartbeat stop, and ice crystals form beneath their skin. While ice crystals in human skin would result in serious problems (frostbite), wood frogs are safe because high glucose levels in their cells (mobilized from glycogen stores) act like antifreeze, restricting the frozen areas to the extracellular fluid, where no tissue damage will occur. Cool frogs!
Some species only live a few years, but many live 6 or 7 years. The African Clawed Frog Xenopus laevis and the Green Tree Frog Litoria caerulea can live about 30 years in captivity. Determining their life span in the wild is difficult, but if anybody wants to follow some frogs around for a couple decades, please let us know.
Frogs inhabit some of the driest regions on Earth. As frogs need to remain moist to survive, some frogs burrow underground to avoid the hot dry weather up above. They have specialized shovel-like pads on their arms or legs that let them to go up to 1.5 m (5 ft) down. If no rains come, that's fine. These frogs slow down their metabolism and enter a state called aestivation, which is similar to hibernation. And they shed layers of skin that surround them like a protective cocoon to retain moisture. Some frogs remain underground for 10 months. When the rains come, these frogs appear en masse on the surface for the biggest party of the year. (Photo: Ornate Burrowing frog Limnodynastes ornatus in New South Wales)
Check out this excellent video about burrowing frogs in Africa:
Skin secretions from at least three species of Australian frogs (the Green Treefrog Litoria caerulea, the Southern Orange-eyed Treefrog Litoria chloris, and the Green-Eyed Treefrog Litoria genimaculata) can completely inhibit HIV, the virus that causes AIDS.
OK, so now that we know Southern Orange-eyed Treefrogs probably aren't going to get AIDS, this seems like the appropriate time to show the following video. This one's for the people who've gotten this far down the web page but still aren't sure if they think frogs are cool or not. Make sure your speakers are on...
Please embed this video on your web page if you like it. Note that frogs only party when the temperature and recent rainfall are just right. Climate change therefore would act a bit like the cops did when you were in high school and held that party at your parents' place.
Most frogs and toads have external fertilization (eggs are laid outside of the female's body and then fertilized by the male), but the Tailed Frog Ascaphus truei, which lives in the US Pacific Northwest, has internal fertilization. Many salamanders have internal fertilization as well. Males drop a spermatophore (a gelatinous mass of sperm, more or less) in their favorite location. The lucky female then comes along and pick up this spermatophore with her cloaca to fertilize the eggs inside her body. Caecilians are the only group of amphibians in which all species utilize internal fertilization.
Frogs have both a common name and a scientific name, which is in Latin. Thus the African Clawed Frog is also known as Xenopus laevis. The scientific name consists of a frog's genus followed by its species (this is called binomial nomenclature). Carl Linnaeus devised this system in the 18th century so that scientists could be certain they were always referring to the correct species. For instance, there is a 'Green Treefrog' in Europe, America and Australia, but they are all different species: Hyla arborea, Hyla cinerea and Litoria caerulea.
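One way to see why the Latin name matters: the same common name can point at completely different species. Here is a tiny illustrative lookup in Python (the table simply restates the three "Green Treefrogs" mentioned above; it is not drawn from any real database):

    # Three unrelated species that all share the common name "Green Treefrog".
    green_treefrogs = {
        "Europe": "Hyla arborea",
        "America": "Hyla cinerea",
        "Australia": "Litoria caerulea",
    }

    # The common name is ambiguous; the binomial (genus + species) is not.
    for region, binomial in green_treefrogs.items():
        print(f"{region}: 'Green Treefrog' = {binomial}")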
Frogs do have eyelids, and they also have a clear nictitating membrane, which allows them to protect their eyes without obstructing their vision. You can see the nictitating membrane on this partially submerged Gray Treefrog Hyla versicolor from northern Virginia.
Australia's Striped Rocket Frog Litoria nasuta can jump a distance equivalent to 55 times its body length! That would be like you jumping a football field! How do they do that? Their legs are twice as long as the rest of their body, and their leg muscles are 1/3 of their overall weight. These frogs are so cool we had to put a picture of one on our Frogs of Australia poster! (Photo taken at the Booyal Crossing, west of Bundaberg, Queensland)
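A quick sanity check on that football-field comparison, using rough assumed body sizes (a ~5 cm rocket frog and a ~1.8 m person; neither figure comes from the text above):

    # Scale the 55-body-length jump of Litoria nasuta to a person.
    JUMP_RATIO = 55                       # jump distance measured in body lengths

    frog_length_m = 0.05                  # assumed ~5 cm rocket frog
    person_height_m = 1.8                 # assumed ~1.8 m person

    print(frog_length_m * JUMP_RATIO)     # about 2.8 m for the frog
    print(person_height_m * JUMP_RATIO)   # about 99 m, roughly a football field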
The cave-dwelling salamander Proteus anguinus (known as the Olm) has a body mass of just 15-20 g, but a predicted maximum lifespan of over a century. A new paper by Voituron et al. (2010) has analyzed years' worth of weekly records from a 400-animal captive breeding colony in the French Pyrenees. The average adult Olm lifespan was 68.5 years; sexual maturity was attained at 15.6 years, on average. In contrast, the next longest-lived amphibian is the Japanese giant salamander (Andrias japonicus), weighing over 30 kg but with a maximum lifespan of only 52 years in captivity. This Cool Frog Fact is courtesy of AmphibiaWeb.
Darwin's frogs are characterized by a nasal prolongation and their unique brooding system, called neomelia, in which males brood their offspring in their vocal pouch. In Rhinoderma darwinii the offspring leave the mouth as metamorphosed froglets; R. rufum, on the other hand, broods its tadpoles for only about two weeks, after which they are released into water at a relatively early tadpole stage. Unfortunately, Rhinoderma populations have declined, and R. rufum is no longer found in the wild. Contributed by Johara Bourke.
Technically, yes! Amphibians are ectotherms, which means they rely on the environment to regulate their own body heat. However, the term "cold-blooded" has a negative connotation, and amphibians are sometimes assumed to show no concern for other members of their own species. Yet it should be known that there are some incredibly dedicated "cold-blooded" mothers and fathers in the Wild World Of Frogs!
In ephemeral marshes and ponds in Panama, the neo-tropical frog Leptodactylus insularum actively defends her eggs and tadpoles from predators. Here she is seen guarding her recently hatched tadpoles. There are about 3,000 of them! She will stay with them until the tadpoles metamorphose into little froglets. What a good mom!
Frogs in trees, Frogs in ponds.
Frogs on the ground, frogs all around.
Little precious creatures helping nature in so many ways.
Just want to sit back and enjoy warm sunny days.
SAVE THE FROGS!
--Frog Poetry by Haley Summer Ford
If you have some amphibian expertise, feel free to submit a Cool Frog Fact below!
Be sure to hit the submit button! | <urn:uuid:138bc259-3bb8-4a53-9f8c-c3fe9db6130a> | 3.734375 | 3,558 | Knowledge Article | Science & Tech. | 50.889806 | 393 |
I always hear the “nuclear power is very dangerous” argument, and I realize that whenever people hear the scary word “nuclear”, they think of big bad explosions.
Did you know that the biggest nuclear meltdown in American history (there have only been two), the Three Mile Island incident, happened very close to a crowded city? Number of deaths…….ZERO. Number of injuries…….ONE.
Now consider how many thousands of people die annually from coal mining and drilling; hell, there are even fatalities from people setting up solar panels and falling off the roof.
Nuclear power plants are safe, pollute much less (and not at all if the waste is properly disposed of), and can power the hell out of our country (about 20% of U.S. electricity already comes from nuclear power).
So why do most people have such an ignorant stigma against this method of energy production?
| <urn:uuid:100f35bd-3cfa-4f62-9144-c9a8900bf625> | 2.65625 | 304 | Personal Blog | Science & Tech. | 55.819242 | 394 |
Solar MURI is a collaborative project studying magnetic eruptions
on the Sun and their effects on the Earth's space environment.
("MURI" stands for Multidisciplinary University Research Initiative,
a research program funded by DoD.) The aim of the project is to improve our
ability to predict space weather from solar observations.
The project will construct a series of physically connected,
observationally tested models of the Sun and its interplanetary
environment. These models will allow us to use observations
of the Sun's atmosphere and magnetic configuration to determine:
- When a magnetic eruption is imminent
- If that magnetic eruption will impact the Earth's space environment
- Whether this will result in a Solar Energetic Particle (SEP) bombardment and/or a geomagnetic storm

(Image: A Solar Prominence Observed by the EIT Instrument on SOHO)

Ultimately, our goal is to provide several extra days of notice prior to an SEP event or geomagnetic storm.
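To get a feel for why even a few days of notice is possible, here is a rough back-of-the-envelope sketch in Python; it is not part of the MURI models, and it assumes a constant CME speed, which real events do not maintain:

    # Rough CME travel-time estimate: time = distance / speed.
    AU_KM = 1.496e8                            # 1 astronomical unit in kilometers

    def travel_time_days(speed_km_s):
        """Days for a CME at constant speed to cover 1.0 AU."""
        return AU_KM / speed_km_s / 86400.0    # 86,400 seconds per day

    for speed in (500, 1000, 2000):            # slow, typical, and fast CMEs (km/s)
        print(f"{speed:>5} km/s -> {travel_time_days(speed):.1f} days to 1 AU")
    # Roughly 3.5, 1.7, and 0.9 days, respectively.

Any warning earlier than these transit times has to come from recognizing, in solar observations, that an eruption is imminent before it leaves the Sun, which is exactly what the models described here aim to provide.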
A number of intermediate goals must be achieved to complete the Solar MURI
project. These are summarized below:
- Measure the solar magnetic field with sufficient accuracy and coverage to
discern which magnetic properties are the key to determining whether
eruptions will occur
- Understand the physics governing magnetic eruptions on the Sun sufficiently
well to construct realistic numerical simulations
- Simulate the interplanetary propagation of Coronal Mass Ejections (CMEs) out
to 1.0 AU with sufficient accuracy to construct accurate models of conditions
upstream of the Earth
- Couple models of the Sun's magnetic lower atmosphere, lower corona,
upper corona, and solar wind in such a way that a model of an unstable
magnetic configuration on the Sun can be propagated out to the Earth
- Verify the performance of these coupled models with test
cases based on observed magnetic eruptions, their interplanetary disturbances
(Interplanetary Coronal Mass Ejections - ICMEs), the SEP events, and the general
levels of geomagnetic response
- Years 1 - 3: Collect the necessary observations. Develop the numerical
modelling codes and the interfaces between these codes.
- Years 4 - 5: Apply the coupled simulation codes to a set of observed
CMEs. Evaluate their performance in determining the consequences
of solar observations. | <urn:uuid:a87cdbfd-3685-4fbd-bb92-5060dd564e48> | 3.53125 | 497 | About (Org.) | Science & Tech. | 5.97035 | 395 |
Question by Alexis: chemistry reaction problem?? about mass? please help! thanks!?
An experiment that led to the formation of the new field of organic chemistry involved the synthesis of urea, CN2H4O, by the controlled reaction of ammonia and carbon dioxide.
2 NH3(g) + CO2(g) → CN2H4O(s) + H2O(l)
What mass of urea is produced when ammonia is reacted with 100. g of carbon dioxide?
Answer by jreut
Use dimensional analysis and stoichiometry:
100 g CO2 × (1 mol CO2 / 44 g CO2) × (1 mol urea / 1 mol CO2) × (60 g urea / 1 mol urea)
= (100 / 44) × 60 ≈ 136 grams of urea produced.
The first term, 100 g CO2, is your starting amount.
The second fraction, 1 mol CO2 / 44 g CO2, is a conversion factor that equals 1, since there are 44 g CO2 in a mole of CO2.
The third fraction is the stoichiometric ratio in the chemical equation: for every one mole of CO2 consumed, 1 mol of urea is formed.
The fourth fraction is the conversion factor back to grams.
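For anyone who wants to check the arithmetic with a script, here is a minimal Python sketch of the same dimensional analysis (molar masses rounded to whole numbers, exactly as in the answer above):

    # Stoichiometry check for: 2 NH3(g) + CO2(g) -> CN2H4O(s) + H2O(l)
    M_CO2 = 44.0     # g/mol, rounded
    M_UREA = 60.0    # g/mol, rounded

    grams_co2 = 100.0
    moles_co2 = grams_co2 / M_CO2        # grams of CO2 -> moles of CO2
    moles_urea = moles_co2 * 1.0         # 1 mol urea per 1 mol CO2, from the equation
    grams_urea = moles_urea * M_UREA     # moles of urea -> grams of urea

    print(round(grams_urea, 1))          # 136.4, i.e. about 136 g of urea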
Add your own answer in the comments!
| <urn:uuid:1f0694c7-c307-49e9-9302-c31b7f0251dc> | 2.96875 | 335 | Q&A Forum | Science & Tech. | 78.739294 | 396 |
The Surprising Appearance of Nanotubular Fullerene D5h(1)-C90
The previously undetected fullerene D5h(1)-C90, with a distinct nanotubular shape, has been isolated as the major C90 isomer produced from Sm2O3-doped graphite rods and structurally identified by single-crystal x-ray diffraction. Fullerenes are well-defined molecules that consist of closed cages of carbon atoms with distinct inside and outside surfaces. They tend to form very small crystals; consequently, high-resolution data was collected using small-molecule crystallography at ALS Beamline 11.3.1. The discovery of nanotubular D5h(1)-C90, which is a fullerene with 90 carbon atoms and D5h symmetry, opens a bridge between molecular fullerenes and carbon nanotubes.
In recent years, the well-known solid allotropes diamond and graphite have been joined by new allotropes: fullerenes, carbon nanotubes, and graphene. Diamond consists of four-coordinate carbon atoms with tetrahedral geometry, while the other allotropes involve three-coordinate carbon atoms. In graphite, these carbon atoms are arranged in hexagonal sheets that are stacked upon one another. Graphene is simply a single hexagonal graphitic sheet with a thickness of only one atom.
Carbon nanotubes can be conceived as hexagonal graphene sheets rolled into cylindrical shapes. These tubes may consist of a single wall of carbon atoms (single-walled carbon nanotubes) or may consist of multiple layers of tubes nested inside one another (multi-wall carbon nanotubes). Carbon nanotubes are produced as mixtures in which the individual tubes can vary in length, width, precise alignment of the component hexagons, and the chemical nature of the unique carbon atoms at the two ends of the tube. Graphene is likewise produced as sheets of varying size with generally less well-defined structures for those carbon atoms at the outer edges.
Fullerenes of varying sizes (from 60 to more than 500 carbon atoms) have also been observed, and individual molecules such as C60 and C70 have been isolated in pure form. Each fullerene is constructed of 12 pentagonal rings of carbon atoms and a number of hexagonal rings. For example, the prototypical C60, the most readily prepared fullerene, has 20 hexagonal rings in addition to the 12 pentagons.
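As a small aside (a textbook result, not part of the ALS study itself): because a classical fullerene is a closed cage of three-coordinate carbons whose faces are only pentagons and hexagons, Euler's polyhedron formula forces exactly 12 pentagons and fixes the hexagon count from the number of atoms alone. A short sketch:

    # Ring counts for a classical fullerene C_n.
    # With V = n vertices, E = 3n/2 edges, and faces restricted to pentagons and
    # hexagons, Euler's formula V - E + F = 2 gives 12 pentagons and n/2 - 10 hexagons.
    def ring_counts(n_atoms):
        pentagons = 12
        hexagons = n_atoms // 2 - 10
        return pentagons, hexagons

    for n in (60, 70, 90):
        p, h = ring_counts(n)
        print(f"C{n}: {p} pentagons, {h} hexagons")
    # C60: 12 and 20; C70: 12 and 25; C90: 12 and 35.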
Isolating higher fullerenes in isomerically pure form is challenging, especially since the number of isomers allowed by the isolated pentagon rule (IPR) grows as the fullerene cage expands. The IPR requires that each pentagon be surrounded by five hexagons to avoid strain-inducing pentagon–pentagon contact. There are 46 isomers of C90 that obey the IPR, but none of them had previously been obtained in pure form. Indeed, in the absence of Sm2O3, no D5h(1)-C90 has ever been detected.
The oblong fullerene D5h(1)-C90 belongs to a set of nanotube-like fullerenes with the formula C60+10n, which have alternating D5h symmetry (when n is odd and the end caps are eclipsed) or D5d symmetry (when n is even and the end caps are staggered). The structure of D5h(1)-C90 (n = 3) is thus closely related to that of C70 (n = 1). However, within this family only C60, C70, and D5h(1)-C90 have been isolated in pure form and characterized crystallographically.
The isolation of D5h(1)-C90 provides a unique molecular model for carbon nanotubes that will allow scientists to explore the chemical and physical properties of a distinctly cylindrical fullerene. The armchair-style belts that are found at the waist of D5h(1)-C90 are a unique feature of this particular fullerene, but are the fundamental building block of carbon nanotubes.
Research conducted by H. Yang, A. Jiang, Z. Wang, and Z. Liu (Zhejiang University, P.R. China); H. Jin (Jiliang University, P.R. China); B.Q. Mercado, M.M. Olmstead, and A.L. Balch (University of California, Davis); and C.M. Beavers (Berkeley Lab).
Research funding: National Science Foundation and the Natural Science Foundation of China. Operation of the ALS is supported by the U.S. Department of Energy, Office of Basic Energy Sciences.
Publication about this research: H. Yang, C.M. Beavers, Z. Wang, A. Jiang, Z. Liu, H. Jin, B.Q. Mercado, M.M. Olmstead, and A.L. Balch, "Isolation of a small carbon nanotube: The surprising appearance of D5h(1)-C90," Angew. Chem. Int. Ed. 49, 886 (2010).
ALS Science Highlight #217 | <urn:uuid:2f9c3991-1432-4449-97f6-f09dea37f2f9> | 3.4375 | 1,108 | Academic Writing | Science & Tech. | 49.6411 | 397 |
Science Fair Project Encyclopedia
Chalcedony is one of the cryptocrystalline varieties of the mineral quartz, having a waxy luster. It may be semitransparent or translucent and is usually white to gray, grayish-blue or some shade of brown, sometimes nearly black. Other shades have been given different names. A clear red chalcedony is known as carnelian or sard; a green variety colored by nickel oxide is called chrysoprase. Prase is a dull green. Plasma is a bright to emerald-green chalcedony that is sometimes found with small spots of jasper resembling blood drops; it has been referred to as blood stone or heliotrope. Chalcedony is one of the few minerals other than quartz that is found in geodes.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:e588db7b-ec20-4a2f-a0ec-63d57c6f1119> | 3.78125 | 197 | Knowledge Article | Science & Tech. | 44.505086 | 398 |
Joined: Oct. 2006
Ever hear of the Bacterial Flagellum? Upon electron microscope examination, it looks very much like a machine!
In fact, it bears a strong resemblance to an outboard motor, you know, the kind you see on the back of those small aluminum fishing boats.
Anyway, the Bacterial Flagellum is what scientists call an “irreducibly complex system”. An irreducibly complex system is built in such a way that if you take away any one part of the system, the whole system ceases to function. A good example of this is the common mousetrap. Take away any one part, and it ceases to function. In the case of irreducibly complex organisms, taking away any one part causes the organism to die.
Question #6: The theory of evolution says that less complex organisms evolved into more complex life forms. How could the Bacterial Flagellum have evolved from a lower life form? What is its transitional fossil? | <urn:uuid:4bd9bc64-1499-49d0-b513-8236057bf840> | 3.796875 | 212 | Q&A Forum | Science & Tech. | 50.977998 | 399 |