To create a table, you start and end a line using the table marker "||". Between those start and end markers, you can create any number of cells by separating them with "||". To get a centered cell that spans several columns, you start that cell with more than one cell marker. Adjacent lines of the same indent level containing table markup are combined into one table.
For more information on the possible markup, see HelpOnEditing.
Apart from repeating cell markers to create cells that span several columns, you can directly set many HTML table attributes. Any attributes have to be placed between angle brackets <...> directly after the cell marker.
The wiki-like markup has the following options:
<50%>: cell width
<(>: left aligned
<)>: right aligned
<:>: centered
<^>: aligned to top
<v>: aligned to bottom
<#XXXXXX>: background color
If you use several conflicting options like <(:)>, the last option wins. There is no explicit option for vertical centering (middle), since that is always the default.
In addition to these, you can add some of the traditional, more long-winded HTML attributes (note that only certain HTML attributes are allowed). By specifying attributes this way, it is also possible to set properties of the table rows and of the table itself. In particular, you can set the table width using ||<tablewidth="100%">...|| in the very first row of your table, and the background color of a full row using ||<rowbgcolor="#FFFFE0">...|| in the first cell of that row. As you can see, you have to prefix the name of the HTML attribute with table or row.
General table layout and HTML-like options::
 ||||||<tablewidth="80%">'''Heading'''||
 ||cell 1||cell2||cell 3||
 ||<rowspan=2> spanning rows||||<bgcolor='#E0E0FF'> spanning 2 columns||
 ||<rowbgcolor="#FFFFE0">cell2||cell 3||
Cell width::
 || narrow ||<:99%> wide ||
Spanning rows and columns::
 ||<|2> 2 rows || row 1 ||
 || row 2 ||
 ||<-2> row 3 over 2 columns ||
Alignment::
 ||<(> left ||<^|3> top ||<v|3> bottom ||
 ||<:> centered ||
 ||<)> right ||
Colors::
 ||<#FF8080> red ||<#80FF80> green ||<#8080FF> blue ||
Thu September 27, 2012
Streams Of Water Once Flowed On Mars; NASA Says Photos Prove It
Originally published on Thu September 27, 2012 8:31 pm
NASA's Curiosity rover has found definitive proof that water once ran across the surface of Mars, the agency announced today. NASA scientists say new photos from the rover show rocks that were smoothed and rounded by water. The rocks are in a large canyon and nearby channels that were cut by flowing water, making up an alluvial fan.
"You had water transporting these gravels to the downslope of the fan," NASA researchers say. The gravel then formed into a conglomerate rock, which was in turn likely covered before being exposed again.
The agency's scientists presented their findings of the former streambed on Mars at a news conference today.
"A River Ran Through It," Curiosity's operators tweeted Thursday. "I found evidence of an ancient streambed on Mars, similar to some on Earth."
"From the size of gravels it carried, we can interpret the water was moving about 3 feet per second," said Curiosity science co-investigator William Dietrich, "with a depth somewhere between ankle and hip deep."
The rocks have not undergone scientific analysis. But the NASA team says that taken with geographic data from Mars orbiters, the photographs tell a story all their own.
The images show rocks with round, smooth surfaces; many of them have been broken down into sizes smaller than one inch in diameter.
"The shapes tell you they were transported and the sizes tell you they couldn't be transported by wind," co-investigator Rebecca Williams said. "They were transported by water flow."
"There is earlier evidence for the presence of water on Mars," the agency said in a press release, "but this evidence — images of rocks containing ancient streambed gravels — is the first of its kind."
NASA's team has named the rock outcrop that reveals the former streambed "Hottah," after Canada's Hottah Lake.
Scientists have not yet estimated the age of the rocks, which may have been buried beneath the surface. Their age could be several billion years.
The next step will be to find a good spot to drill into the rock, NASA says. And they'll be looking for possible carbon deposits to determine whether the water on Mars once supported life.
The photographs released Thursday are among more than 13,000 raw images Curiosity has captured. The rover took the photos during its mission to Mars' Gale Crater. The rocks in question lie between the crater's north rim and Mount Sharp, a mountain inside the crater.
NASA investigators presented the results of their analysis at NASA's Jet Propulsion Laboratory in Pasadena, Calif. You can read other posts about Curiosity in our archive.
In the 1920s, examining photographic plates from the Mt. Wilson Observatory's 100 inch telescope, Edwin Hubble determined the distance to the Andromeda Nebula, decisively demonstrating the existence of other galaxies far beyond the Milky Way. His notations are evident on the historic plate image inset at the lower right, shown in context with ground based and Hubble Space Telescope images of the region made nearly 90 years later. By intercomparing different plates, Hubble searched for novae, stars which underwent a sudden increase in brightness. He found several on this plate and marked them with an "N". Later, discovering that the one near the upper right corner (marked by lines) was actually a type of variable star known as a Cepheid, he crossed out the "N" and wrote "VAR!".
Thanks to the work of Harvard astronomer Henrietta Leavitt, Cepheids, regularly varying pulsating stars, could be used as standard candles. Identifying such a star allowed Hubble to show that Andromeda was not a small cluster of stars and gas within our own galaxy, but a large galaxy in its own right at a substantial distance from the Milky Way. Hubble's discovery is responsible for establishing our modern concept of a Universe filled with galaxies.
Image credit: R. Gendler, Z. Levay and the Hubble Heritage Team
Effects of immunostimulation on social behavior, chemical communication and genome-wide gene expression in honey bee workers (Apis mellifera)
1 Laboratoire Ecologie Evolution Symbiose, UMR CNRS 6556, University of Poitiers, 40 avenue du Recteur Pineau, Cedex, F-86022, POITIERS, France
2 Department of Entomology, Center for Pollinator Research, Center for Chemical Ecology, Huck Institutes of the Life Sciences, Pennsylvania State University, University Park, PA, 16802, USA
3 Previous address: Department of Entomology, North Carolina State University, Raleigh, NC, 27695, USA
BMC Genomics 2012, 13:558. doi:10.1186/1471-2164-13-558. Published: 16 October 2012
Social insects, such as honey bees, use molecular, physiological and behavioral responses to combat pathogens and parasites. The honey bee genome contains all of the canonical insect immune response pathways, and several studies have demonstrated that pathogens can activate expression of immune effectors. Honey bees also use behavioral responses, termed social immunity, to collectively defend their hives from pathogens and parasites. These responses include hygienic behavior (where workers remove diseased brood) and allo-grooming (where workers remove ectoparasites from nestmates). We have previously demonstrated that immunostimulation causes changes in the cuticular hydrocarbon profiles of workers, which results in altered worker-worker social interactions. Thus, cuticular hydrocarbons may enable workers to identify sick nestmates, and adjust their behavior in response. Here, we test the specificity of behavioral, chemical and genomic responses to immunostimulation by challenging workers with a panel of different immune stimulants (saline, Sephadex beads and Gram-negative bacteria E. coli).
While only bacteria-injected bees elicited altered behavioral responses from healthy nestmates compared to controls, all treatments resulted in significant changes in cuticular hydrocarbon profiles. Immunostimulation caused significant changes in expression of hundreds of genes, the majority of which have not been identified as members of the canonical immune response pathways. Furthermore, several new candidate genes that may play a role in cuticular hydrocarbon biosynthesis were identified. Effects of immune challenge on the expression of several genes involved in immune response, cuticular hydrocarbon biosynthesis, and the Notch signaling pathway were confirmed using quantitative real-time PCR. Finally, we identified common genes regulated by pathogen challenge in honey bees and other insects.
These results demonstrate that honey bee genomic responses to immunostimulation are substantially broader than the previously identified canonical immune response pathways, and may mediate the behavioral changes associated with social immunity by orchestrating changes in chemical signaling. These studies lay the groundwork for future research into the genomic responses of honey bees to native honey bee parasites and pathogens.
Because of these obstacles, the number of physicists working on the theory had dropped to two—Schwarz and Michael Green, of Queen Mary College, London—by the mid-1980s. But in 1984 these two die-hard string theorists achieved a major breakthrough. Through a remarkable calculation, they proved that the equations of string theory were consistent after all. By the time word of this...
The incorporation of supersymmetry with string theory is known as superstring theory, and its importance was recognized in the mid-1980s when an English theorist, Michael Green, and an American theoretical physicist, John Schwarz, showed that in certain cases superstring theory is entirely self-consistent. All potential problems cancel out, despite the fact that the theory requires a massless...
Seriously? Yes, and not just on a plane. Jake Socha, a Virginia Tech biologist, and his team recently completed a study that sheds some light on how some of these creatures fly. They reported their findings last week at the American Physical Society Division of Fluid Dynamics meeting.
Socha and his colleagues used high-speed video cameras to record Chrysopelea paradisi, one of five species of Asian tree-dwelling snakes, as they launched off a 15 m (49 ft) tower! You can see the amazing footage here.
This “flying” is an important technique for these snakes. According to Live Science:
When these snakes leap, it's not to nosedive; it's to glide from tree to tree, a feat they can accomplish at distances of at least 79 feet (24 m).
Four cameras recorded the curious snakes as they glided. This allowed the scientists to create and analyze 3-D reconstructions of the animals' body positions during flight.
The researchers found that the snakes never actually achieved a proper “glide” (where the forces generated by their bodies exactly counteract gravity). They didn't exactly fall straight to the ground either. Instead, Socha says, “the snake is pushed upward—even though it is moving downward—because the upward component of the aerodynamic force is greater than the snake's weight.
“Hypothetically, this means that if the snake continued on like this, it would eventually be moving upward in the air—quite an impressive feat for a snake. But our modeling suggests that the effect is only temporary, and eventually the snake hits the ground to end the glide.”
In other words, as Buzz Lightyear would say, the snakes are simply “falling with style.”
Do the curves of the snake, or body position, mid-air affect this flying-falling? The researchers intend to find out. Stay tuned and look out!
Image: Jake Socha
Published on Friday, February 16, 2007 by Reuters
Greenhouse Gases Hit New High, Rise Accelerates
by Alister Doyle
OSLO - Greenhouse gases widely blamed for causing global warming have jumped to record highs in the atmosphere, apparently stoked by rising emissions from Asian industry, a researcher said on Friday.
"Levels are at a new high," said Kim Holmen, research director of the Norwegian Polar Institute which oversees the Zeppelin measuring station on the Arctic archipelago of Svalbard about 1,200 km (750 miles) from the North Pole.
He told Reuters that concentrations of carbon dioxide, the main greenhouse gas emitted largely by burning fossil fuels in power plants, factories and cars, had risen to 390 parts per million (ppm) from 388 a year ago.
Levels have hit peaks almost every year in recent decades, bolstering theories of warming, and are far above the 270 ppm seen before the Industrial Revolution of the 18th century. Climate scientists say the heat-trapping gas is blanketing the planet.
Holmen said the increase of 2 ppm from 2006 reflected an accelerating rise in recent years. "When I was young, scientists were talking about 1 ppm rise" every year, he said. "Since 2000 it has been a very rapid rate."
"The large increases in release rates are definitely in the Asian economies," led by China, he said. China is opening coal-fired power plants at the rate of almost one a week.
Carbon dioxide concentrations peak just before the northern hemisphere spring, when plants start soaking up the gas as they grow. Southern hemisphere seasons have less effect since there are fewer land masses -- and plants -- south of the equator.
The Zeppelin station is run in cooperation with Stockholm University and is one of the main measuring points along with a station in Hawaii. Remoteness from industrial centers helps.
Scientists say the concentration of carbon dioxide, according to the modern records, is at its highest in the atmosphere in at least 650,000 years.
The world's top climate scientists said in a report on February 2 they were more than 90 percent certain that human activities, led by burning fossil fuels, were to blame for warming. That was up from 66 percent certainty in a previous report in 2001.
The U.N.'s Intergovernmental Panel on Climate Change said that temperature rises were set to accelerate and could gain by between 1.1 and 6.4 Celsius (2.0-11.5 Fahrenheit) by 2100, bringing more floods, droughts and rising sea levels.
Apart from human emissions from burning fossil fuels, he said there were other factors that could affect carbon dioxide levels in future.
On the one hand, plants may grow more in a warmer world, soaking up more carbon dioxide. But if the soil gets warmer, dead plants and leaves may rot more in winter, releasing more carbon.
Any heating of the oceans may mean less absorption of carbon dioxide, partly because the greater buoyancy of warmer water inhibits mixing with deeper levels.
© Reuters Ltd 2007
Isentropic Lift: Lifting of air that is traveling along an upward-sloping isentropic surface.
Isentropic lift often is referred to erroneously as overrunning, but more accurately describes the physical process by which the lifting occurs. Situations involving isentropic lift often are characterized by widespread stratiform clouds and precipitation, but may include elevated convection in the form of embedded thunderstorms.
Why Citizen Science?
No matter where you live—along the coast, in the heartland, or somewhere in between—your life is affected by water quality. Water quality issues raise many questions that are important to us all at some point or another. Is the tap water safe to drink? Will I get sick if I swim in this lake? Why are so many fish dying in the bay? Why does this water look unusual—murky, discolored, or even remarkably clear?
Perhaps one of the most important questions to ask is: How do we know when our water is healthy? To put it simply, healthy water is water that can support and sustain life. “Water quality” is a blanket term for how the physical, chemical and biological characteristics of a water sample measure up to a set of standards. Water quality can be evaluated through a number of different tests such as color, odor, temperature, acidity, bacteria content, biological diversity, and many others.
In places like Chesapeake Bay, a major water quality concern stems from nutrient loading (increased levels of dissolved nitrogen and phosphorous). As water makes its way through a local watershed (region of land that drains into a body of water) and eventually to the ocean, it is inevitably affected by how people use the land. Runoff from fertilizers applied to agricultural fields, golf courses, and suburban lawns; deposition of nitrogen from the atmosphere; soil erosion; and discharge from aquaculture facilities and sewage treatment plants all contribute to increasing nutrient content in coastal waters.
More than 150 rivers and streams feed the Chesapeake Bay watershed. Over the last several decades, more and more land in the watershed has been converted from forest to farms, cities, and residential suburbs. This development has brought with it an increase in the land area covered by hard surfaces like roads, sidewalks, parking lots, and rooftops in places where nutrient-rich water was once absorbed and filtered by soil and plants. Higher-yield agricultural practices have also played a role in contributing increased amounts of nutrients to rivers and streams in the watershed during this time. “Dirty” water now flows, largely unfettered, through the watershed, which has led to an overabundance of nitrogen and phosphorous in the Bay. Algae (also known as phytoplankton) in the water feed off these nutrients and can bloom in excess when nutrient concentrations get too high. These algae are the base of the marine food chain and are essential to the health and productivity of the oceans. However, when they are overfed, they can do more harm than good. Algal blooms can prevent sunlight from reaching the bottom of the bay, resulting in a loss of aquatic vegetation and disruption of animal habitats. Decomposing algae can deplete dissolved oxygen in the water to dangerously low levels. This process, known as eutrophication, can result in fish kills and suffocation of other marine life. If enough oxygen is removed from the water, the area becomes a “dead zone,” where no aquatic life can survive.
In the 1970s, Chesapeake Bay developed one of these dead zones. Fortunately, dead zones are reversible. By monitoring local water quality, key sources of nutrients can be identified and actions can be taken to ensure that harmful nutrient loading is reduced. Today, Chesapeake Bay continues to struggle to recapture its health, and remains on the Environmental Protection Agency’s “dirty waters” list. Read Drought and Deluge Change Chesapeake Bay Biology to find out how Chesapeake Bay resident and NASA scientist Jim Acker was able to identify links between nutrients and water quality in Chesapeake Bay.
How Citizen Scientists Can Meaningfully Contribute Using Their Own Observations
The benefits of water quality monitoring have been likened to those of visiting your doctor for periodic checkups. Observing trends over time can help avoid major health problems or readily identify them when they do occur. Bodies of water in the United States are monitored not just by state, federal, and local agencies, but also by universities and volunteers.
University or government-funded scientists typically monitor water quality at the mouth of major river systems, but they do not have enough resources to track water quality through every river and stream in a watershed. This makes volunteers and citizen scientists vitally important in the effort to monitor and maintain water quality standards across the nation.
Citizen scientists can meaningfully contribute by monitoring water quality (including nitrogen concentration) in a nearby river, lake, or stream, and identifying potential sources of pollution in their local and regional watersheds. This can significantly increase the amount of water quality data available to government agencies for bodies of water that may otherwise go unassessed. A network of citizen scientists can potentially monitor an entire watershed. This type of grass-roots science has great potential for identifying local sources of contamination, which could ultimately be used to reduce a community’s impact on the ocean.
How Citizen Scientists Can Meaningfully Contribute Using Satellite Data
With the help of watershed maps, the paths of streams and rivers can be traced to the ocean. Citizen scientists can then tie their local water quality observations to marine phytoplankton blooms observed by NASA satellites, and identify potential dead zone locations. Although dead zones cannot be directly identified from space, algal blooms can be tracked by monitoring chlorophyll concentrations in surface waters. By looking at variations in chlorophyll levels over time, citizen scientists can determine if relationships exist between local events in their own watershed and phytoplankton blooms in the ocean. For example, a citizen scientist in Missouri or Iowa might monitor the Gulf of Mexico near the mouth of the Mississippi River and note that ocean chlorophyll levels increase about two weeks after nitrogen levels spike in their local river. By correlating satellite observations with a unique network of ground observations, a new understanding of the dynamics that drive phytoplankton blooms may emerge in the scientific community.
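The short C sketch below is not from the original text; it simply illustrates, with invented weekly values and a pearson() helper written only for this example, one simple way such a lagged relationship could be checked once local nitrate readings and satellite chlorophyll values have been collected.

    /* Hypothetical sketch only: the weekly values below are invented for
       illustration, and pearson() is a helper written for this example. */
    #include <stdio.h>
    #include <math.h>

    #define N 12  /* number of weekly samples */

    /* Pearson correlation of x[0..n-1] with y[0..n-1]. */
    static double pearson(const double *x, const double *y, int n) {
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n;
        double vy = syy - sy * sy / n;
        return cov / sqrt(vx * vy);
    }

    int main(void) {
        /* Weekly nitrate in a local river (mg/L) and satellite-derived
           chlorophyll near the river mouth (ug/L); purely illustrative. */
        double nitrate[N]     = {1.2, 1.3, 3.8, 3.5, 1.4, 1.2,
                                 1.1, 1.3, 1.2, 1.1, 1.0, 1.2};
        double chlorophyll[N] = {1.5, 1.6, 1.5, 1.7, 3.9, 3.6,
                                 1.8, 1.6, 1.5, 1.4, 1.5, 1.4};

        for (int lag = 0; lag <= 4; lag++) {
            /* Compare nitrate in week i with chlorophyll in week i + lag. */
            double r = pearson(nitrate, chlorophyll + lag, N - lag);
            printf("lag = %d week(s): r = %.2f\n", lag, r);
        }
        return 0;
    }

In this made-up series the strongest correlation shows up at a lag of about two weeks, mirroring the example above; a real analysis would of course need longer records and proper statistical care.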
Physics and Star Wars
The interstellar trilogy Star Wars uses science and technology in its settings and storylines, though it was not considered "hard" science fiction. Star Wars concentrates mainly on the epic drama and not on the "technobabble". It has borrowed freely from the scientific world. The series has showcased many interesting technological concepts, both in the movies and in an extensive line of novels and comics. These vivid imaginings, and the discussions they have started amongst fans, have inspired many people to enter the world of science.
The Star Wars movies are a vehicle for entertainment and their primary aim is to deliver drama, not scientific knowledge. Many of the on-screen technologies created or borrowed for the Star Wars universe were used mainly as plot devices, and not as elements of the story in their own right.
The iconic status that Star Wars has gained in popular culture allows it to be used as an accessible introduction to real scientific concepts. Many of the technologies used in the Star Wars universe are impossible, according to current theory. However, the process of understanding why they are considered impossible can educate people while simultaneously entertaining them.
Compare with: Physics and Star Trek
Rockies Research: Mountaintop lakes may be environmental sentinels
Stretched across the border between Wyoming and Montana lie the Beartooth Mountains, part of the central U.S. Rockies. Among these mountains are more than 3,000 lakes at elevations ranging from 5,000 feet to over 11,000 feet.
These lakes remain frozen for as many as 10 months of the year due to the extreme cold and deep snowpack that develops. Because of the severe climate, these lakes respond strongly to outside pressure.
They are sensitive indicators — sentinels — of both local and larger scale environmental changes. For example, while visitors to the region may remark about the pristine beauty of the lakes, research has demonstrated that nitrogen deposition from cities throughout the western U.S. has been falling on the mountains in snow and rain, slowly enriching the lakes with nitrogen.
This gradual nitrogen increase has stimulated greater algae growth and altered the underwater ecosystems in many ways. In nearby Red Lodge, Mont., records of snowfall date back many decades, highlighting dramatic changes the region has experienced. Since 1970, average annual snowfall has decreased from about 250 inches to about 100 inches per year. Such dramatic changes, common throughout the western U.S., affect everything from the prevalence of wildfires to the transparency of the lakes.
Scientists from Miami University of Ohio and the University of Maine are conducting research on a series of alpine and subalpine lakes throughout the Beartooth Mountains. Alpine lakes, lying above the tree line, are often cold and clear.
In order to understand these sensitive ecosystems better, researchers from Miami deployed a data buoy this summer in Heart Lake, a remote alpine lake with an elevation of 10,350 feet located just inside Montana. Heart Lake lies in a small granitic watershed; steep walls shelter water that is deeper than 100 feet.
Despite its beauty, Heart Lake is unusual by alpine lake standards. While most lakes in the region are low in both dissolved organic carbon (DOC) and chlorophyll (an indicator of algal biomass), Heart Lake has unusually high chlorophyll, often as high as 15-20 µg/L. In contrast, the average chlorophyll concentration of many lakes in the region is about 1.5 µg/L. The high concentration in Heart Lake is more commonly found in Midwest agricultural reservoirs than Montana cold mountain lakes.
Challenging terrain, lack of roads, and thin air meant researchers had to think creatively in order to study Heart Lake. The remoteness of the lake meant all equipment had to be carried in backpacks. Additionally, since the deployment was in a National Forest, the buoy needed to have a low profile.
Working with Fondriest Environmental, graduate students at Miami designed a mobile buoy that was modular and lightweight. For example, anchors were fashioned from sleeping bag stuff sacks filled with shoreline rocks, instead of the more traditional 70-pound pyramid weights. The final buoy weight was 35 pounds (not including sensors). When scientists first visited Heart Lake in early July, several inches of ice remained. Warm summer temperatures, however, quickly melted the ice, and the buoy was deployed. Scientists visited the lake weekly throughout the rest of the summer, collecting manual samples, changing data logger batteries, and calibrating sensors.
Connected to the buoy was a YSI sonde with temperature, conductivity, dissolved oxygen, chlorophyll, and turbidity probes. Also suspended beneath the buoy were a Turner CDOM sensor (with a Zebra-Tech wiper), Biospherical radiometer sensors measuring transparency to both UV and PAR at two depths, and a temperature string to help understand the lake’s thermal structure. Finally, a topside-mounted Vaisala weather station measured air temperature, wind speed, wind direction, relative humidity, barometric pressure, and rainfall. Sensors were powered and run by a NexSens SDL500 submersible data logger.
The data show that Heart Lake changes rapidly once the ice cover melts. For example, the chlorophyll concentration, as estimated from the YSI probe, climbed rapidly from less than 5 µg/L to more than 11 µg/L within a week. It then quickly settled back down to about 2 µg/L, where it stayed until the buoy was removed in late August.
This suggests that the high chlorophyll concentration may be stimulated by nutrients in the watershed that enter the lake during snowmelt. Other data analyses are still being conducted, but these initial results suggest that alpine lakes exhibit clear signals of broader landscape phenomenon. This sentinel quality makes alpine lakes an ideal (and beautiful) place to study environmental processes and changes.
— Kevin Rose is a PhD candidate in the Department of Zoology at Miami University working with Dr. Craig Williamson. Kevin’s research focuses on understanding optical indicators of allochthony and carbon cycling in aquatic ecosystems.
Larry Bell, Contributor
I write about climate, energy, environmental and space policy issues.
A 2010 survey of media broadcast meteorologists conducted by the George Mason University Center for Climate Change Communication found that 63% of 571 who responded believe global warming is mostly caused by natural, not human, causes. Those polled included members of the American Meteorological Society (AMS) and the National Weather Association.
A more recent 2012 survey published by the AMS found that only one in four respondents agreed with UN Intergovernmental Panel on Climate Change claims that humans are primarily responsible for recent warming. And while 89% believe that global warming is occurring, only 30% said they were very worried.
A March 2008 canvass of 51,000 Canadian scientists with the Association of Professional Engineers, Geologists and Geophysicists of Alberta (APEGGA) found that although 99% of the 1,077 who replied believe climate is changing, 68% disagreed with the statement that “…the debate on the scientific causes of recent climate change is settled.” Only 26% of them attributed global warming to “human activity like burning fossil fuels.” Regarding these results, APEGGA’s executive director, Neil Windsor, commented, “We’re not surprised at all. There is no clear consensus of scientists that we know of.”
A 2009 report issued by the Polish Academy of Sciences PAN Committee of Geological Sciences, a major scientific institution in the European Union, agrees that the purported climate consensus argument is becoming increasingly untenable. It says, in part, that: “Over the past 400 thousand years – even without human intervention – the level of CO2 in the air, based on the Antarctic ice cores, has already been similar four times, and even higher than the current value. At the end of the last ice age, within a time [interval] of a few hundred years, the average annual temperature changed over the globe several times. In total, it has gone up by almost 10 °C in the northern hemisphere, [and] therefore the changes mentioned above were incomparably more dramatic than the changes reported today.”
The report concludes: “The PAN Committee of Geological Sciences believes it necessary to start an interdisciplinary research based on comprehensive monitoring and modeling of the impact of other factors – not just the level of CO2 – on the climate. Only this kind of approach will bring us closer to identifying the causes of climate change.”
Finally, although any 98% climate consensus is 100% baloney, this is something all reasonable scientists should really agree about.
2012 was a scorcher, but was it the warmest year ever?
A report released this week by the National Oceanic and Atmospheric Administration (NOAA) called it "the warmest year ever for the nation." Experts agree that 2012 was a hot year for the planet. But it’s that report -- and the agency itself -- that’s drawing the most heat today.
"2012 [wasn't] necessarily warmer than it was back in the 1930s ... NOAA has made so many adjustments to the data it's ridiculous," Roy Spencer, a climatologist at the University of Alabama in Huntsville, told FoxNews.com.
A brutal combination of a widespread drought and a mostly absent winter pushed the average annual U.S. temperatures up last year, to 55.32 degrees Fahrenheit according to the government. That's a full degree warmer than the old record set in 1998 -- and breaking such records by a full degree is unprecedented, scientists say.
But NOAA has adjusted the historical climate data many times, skeptics point out, most recently last October. The result, says popular climate blogger Steve Goddard: The U.S. now appears to have warmed slightly more than it did before the adjustment.
"The adjusted data is meaningless garbage. It bears no resemblance to the thermometer data it starts out as," Goddard told FoxNews.com. He's not the only one to question NOAA's efforts.
"Every time NOAA makes adjustments, they make recent years [relatively] warmer. I am very suspicious, especially for how warm they have made 2012," Spencer said.
The newly adjusted data set is known as "version 2.5," while the less adjusted data is called "version 2.0."
NOAA defended its adjustments to FoxNews.com.
Government climate scientist Peter Thorne, speaking in his personal capacity, said that there was consensus for the adjustments.
"These have been shown through at least three papers that have appeared in the past 12 months to be an improvement,” he said.
NOAA spokesman Scott Smullen agreed.
"These kinds of improvements get us even closer to the true climate signal, and help our nation even more accurately understand its climate history," he said.
One problem in weather monitoring occurs when there is a "break point" -- an instance where a thermometer is moved, or something producing heat is built near the thermometer, making temperature readings before and after the move no longer comparable.
"Version 2.5 improved the efficiency of the algorithm.... more of the previously undetected break points are now accounted for," Smullen explained.
He added that the report also recalculated "the baseline temperatures [that] were first computed nearly 20 years ago in an era with less available data and less computer power."
Spencer says that the data do need to be adjusted -- but not the way NOAA did it. For instance, Spencer says that urban weather stations have reported higher temperatures partly because, as a city grows, it becomes a bit hotter. But instead of adjusting directly for that, he says that to make the urban and rural weather readings match, NOAA “warmed the rural stations’ [temperature readings] to match the urban stations” -- which would make it seem as if all areas were getting a bit warmer.
Aaron Huertas, a spokesman for the Union of Concerned Scientists, argued that the debate over the adjustments misses the bigger picture.
"Since we broke the [temperature] record by a full degree Fahrenheit this year, the adjustments are relatively minor in comparison,"
"I think climate contrarians are doing what Johnny Cochran did for O.J. Simpson -- finding anything to object to, even if it obscures the big picture. It's like they keep finding new ways to say the 'glove doesn't fit' while ignoring the DNA evidence."
Climate change skeptics such as blogger and meteorologist Anthony Watts are unconvinced.
"Is history malleable? Can temperature data of the past be molded to fit a purpose? It certainly seems to be the case here, where the temperature for July 1936 reported ... changes with the moment," Watts told FoxNews.com.
"In the business and trading world, people go to jail for such manipulations of data." | <urn:uuid:36190303-5d10-4ff0-96ff-c4a2120414f2> | 2.625 | 1,081 | News Article | Science & Tech. | 56.17544 | 2,012 |
The functions described in this section (printf and related functions) provide a convenient way to perform formatted output. You call printf with a format string or template string that specifies how to format the values of the remaining arguments.
Unless your program is a filter that specifically performs line- or character-oriented processing, using printf or one of the other related functions described in this section is usually the easiest and most concise way to perform output. These functions are especially useful for printing error messages, tables of data, and the like.
Doomsday and the Big Bang Experiment
It is expected that the nuclear research known as the CERN experiment will reveal how the universe was created. Everything depends on the success of the experiment. This is the world's largest experiment, and it may have positive or negative outcomes. It may cost human lives or disclose the secrets of this universe.
But apart from the debate on the success of the experiment, it is also important to ask whether, in a world where millions of people live below the poverty line, millions are dying from natural calamities, and global warming and terrorism are growing problems worldwide, it was right to do this experiment. The money invested in this experiment might instead have helped tackle these other problems and their solutions.
There is another piece of CERN LHC news that says a group of people hacked it and left it at a critical point, simply to show that there are loopholes in the security of the project. The Large Hadron Collider (LHC) experiment could be helpful for the entire universe or it may destroy the universe, or it could have some positive and negative impacts too. The entire thing will be disclosed by the final stage of the experiment.
So now when the experiment has already started and there is so much to look forward to, let's hope for the best. The final decision will be what is destined.
This work is licensed under a Creative Commons Attribution 3.0 License.
Biosphere 2, privately funded ecological research project in which eight people lived sealed in a 3.15-acre (1.28-hectare) structure for two years (Sept. 26, 1991–Sept. 26, 1993). Located in Oracle, Ariz., about 35 mi (56 km) north of Tucson, and designed to depend on the outside only for electricity and sunlight, Biosphere 2 was intended to test the feasibility of a self-sustaining space colony. It contained over 3,500 plant and animal species and attempted to reproduce five ecosystems (see ecology)—desert, grassland, marsh, ocean, and rain forest. The human inhabitants (four men and four women) were to grow all their food and recycle their wastes, but used some seed stocks as food. The project's validity was questioned by scientists who criticized the plan to use outside electricity, the presence of stores of food and animal feed, and other aspects. A decline in the oxygen level led to the pumping of oxygen into the complex in 1993. A second crew entered Biosphere in Mar., 1994, but various disagreements and allegations of mismanagement made by the chief financial backer, Edward Bass, finally led to the abandonment of attempts at self-sufficient living. From 1995 to 2003 the management of the project was taken over by Columbia Univ., which used the facility for education and scientific research on environmental issues.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
‘Extinct’ Whale Found: Rare Pygmy Right Whale Is The Last Survivor Of An Ancient Species
The pygmy right whale has fascinated scientists since its discovery in the 1800s. It’s an uncommon sight, and its peculiar arched, frown-like snout is unlike any other whale alive today.
According to new research, published in the Proceedings of the Royal Society B, the reason that the pygmy right whale stands out so much might be that it’s the last survivor in a group of whales that are believed to have gone extinct roughly 2 million years ago.
“The living pygmy right whale is, if you like, a remnant, almost like a living fossil,” explains Felix Marx, a paleontologist at the University of Otago in New Zealand. “It’s the last survivor of quite an ancient lineage that until now no one thought was around.”
Pygmy right whales are known to roam the oceans of the Southern Hemisphere, but very little data has been collected on them. It’s believed that they grow no longer than 21 feet long, which makes it the smallest of all the baleen whales.
Researchers say that, upon looking at the creature’s skull, it became clear that it more closely resembled a group of whales called cetotheres, which first appeared 15 million years ago before going extinct 13 million years later.
The findings help explain how pygmy whales evolved and may also help shed light on how these ancient “lost” whales lived. The new information is also a first step in reconstructing the ancient lineage all the way back to the point when all members of this group first diverged, he said, according to Live Science.
The news comes a day after researchers from the University of Georgia and Gray’s Reef National Marine Sanctuary announced the discovery of a five-foot-long bone belonging to an Atlantic gray whale, which is believed to have gone extinct at some point after the 1700s.
SINGAPORE — A team from the Chinese Academy of Sciences trekked across frigid highlands in Tibet to confirm a significant recent discovery about climate change. They drilled and analyzed five ice cores from various locations on the Tibetan Plateau to find that the concentration of black carbon, or soot, in the ice has increased by two to three times since 1975.
At Zuoqiupu Glacier, on the southern edge of the plateau downwind from the Indian subcontinent, black carbon deposition rose by 30 percent between 1990 and 2003.
What are the implications of these and other related findings in recent years by numerous researchers from different countries and scientific agencies?
International efforts to combat global warming focus on cutting emissions of six greenhouse gases from human activity. A panel of scientists advising the United Nations has concluded that these gases — chiefly carbon dioxide, methane and nitrous oxide — are most likely to be responsible for warming the planet to potentially dangerous levels.
Yet a growing body of research in the past few years points to another potent source of human-induced warming: airborne aerosol particles, especially black carbon — a key component of soot.
When coal, oil and other fossil fuels are burned without enough oxygen to complete combustion, one of the by-products is black carbon. A similar process takes place with the burning of biomass, including wood, cow dung and crop residues, although the by-product is mainly organic carbon, which scientists say has a lesser warming effect than black carbon.
Asia, with its rapid economic growth and agricultural expansion, is now the leading global source of the tiny particles of soot from this incomplete burning. They rise into the atmosphere and mix with different emissions, including nitrates and sulphates, to form aerosols.
Black carbon absorbs sunlight, as do other greenhouse gases. This blanket-effect warms the atmosphere. But particles of sulphate or nitrate alone reflect solar radiation, thereby cooling the planet. Indeed, advocates of geo-engineering to combat global warming have proposed pumping sulphate aerosols into the atmosphere to slow climate change.
However, Kimberly Prather, professor in the Department of Chemistry and Biochemistry at the University of California, San Diego, and a colleague published research last year showing that sulphate and nitrate play a different role when they mix with black carbon.
The two researchers checked atmospheric aerosols over Riverside, Calif., and Mexico City, using an instrument that measures the size, chemical composition and optical properties of aerosols in real time. Their study showed that jagged bits of fresh soot quickly become coated with a spherical shell of other chemicals, particularly sulphate, nitrate and organic carbon, through light-driven chemical reactions.
“The coating acts like a lens and focuses the light into the center of the particle, enhancing warming,” professor Prather says.
The measurements showed that in the atmosphere the aerosol combination increased the warming effect of the coated black carbon particles to 1.6 times that of pure black carbon particles.
North America and Europe have reduced aerosol levels by enacting clean-air regulations and transport fuel standards. Major developing Asian economies have been slower to follow and the extensive soot-laden pollution now shows up in satellite photos as a huge brownish haze stretching over large parts of South and Southeast Asia and China.
This haze is a health hazard as well as an extra source of a global warming. The U.N. Environment Program (UNEP) said in a 2008 report on atmospheric brown clouds that they were also a major threat to water and food security in Asia.
Black carbon does not only warm the atmosphere. As the pollution is carried by prevailing winds in the Northern Hemisphere, it affects other countries that have cleaner air standards such as Japan, and even the U.S. on the other side of the Pacific Ocean.
Aerosols have another warming effect. The coated black carbon particles do not waft around forever. As winds drop, they are deposited on land and sea surfaces, including snow and ice, creating a smudging effect.
The dirty snow and ice absorb more sunlight, thus warming faster than pure snow and ice, which reflect light. The impact of aerosols has been observed as far away as the Arctic, where sea ice is in rapid retreat, and the Greenland ice sheet, which is also melting.
By some measures, black carbon accounts for roughly half the global warming potential of carbon dioxide, the main greenhouse gas. Moreover, while carbon dioxide can stay in the atmosphere for over a century, aerosols only remain for a few weeks at most.
Could tighter international controls on soot emissions provide a quick fix for climate change? A growing body of evidence suggests that black carbon can be controlled more easily and cost-effectively than greenhouse gases like carbon dioxide.
An Indian-Swedish research team has used carbon analysis to conclude that about two-thirds of the soot-laden brown cloud pollution in Asia comes from biomass burning (mainly household cooking and slash-and-burn agriculture) and one-third from fossil fuel combustion (mainly coal burning for industry and power, and use of sulphur-laden diesel fuel in trucks and ships).
They and other scientists have called for a rapid scaling-up of programs to discourage open air burning and spread the use of low-cost but efficient household stoves and biogas, which could also aid poverty reduction. These scientists also propose tighter transport fuel standards.
While reducing carbon dioxide concentrations is important, changes we make today will not be felt for quite a while, whereas changes we make today on soot and sulphate could affect our planet on timescales of months, says Prather. “This could buy time while we grapple with the problems of reducing carbon dioxide and other greenhouse gases.”
Michael Richardson is a visiting senior research fellow at the Institute of South East Asian Studies in Singapore.
RED-WINGED BLACKBIRD (Photo by Ted Schroder)
Ithaca, NY—As oil washes ashore along the Gulf Coast, the Cornell Lab of Ornithology is asking birders to keep an eye on nesting birds—not just near water, but hundreds of miles inland.
“Wildlife biologists are monitoring species such as pelicans and plovers in the immediate path of the oil,” said Laura Burkholder at the Cornell Lab of Ornithology. “But we need bird watchers across the country to help us find out if birds that pass through or winter in the Gulf region carry contamination with them, possibly creating an ‘oil shadow’ of declines in bird reproduction hundreds of miles from the coast.”
To help, Burkholder said that anyone with an interest in birds can learn how to find and monitor nests as part of the Cornell Lab’s NestWatch project. It involves visiting a nest for a few minutes, twice per week, and recording information such as how many eggs it contains, how many chicks hatch, and how many leave the nest.
“Many birds that nest in backyards all across North America, such as Red-winged Blackbirds and Tree Swallows, spend part of the year along the Gulf of Mexico, where they could be affected by the oil spill,” Burkholder said. “Toxins often have profound effects on reproduction, and it’s possible that toxins encountered in one environment can affect the birds in another environment, after they arrive on their breeding grounds.”
When participants across large regions contribute information, Burkholder said, scientists can assess changes in nesting success in relation to environmental factors such as habitat loss, climate change, and pollution.
Citizen-science participants have helped the Cornell Lab monitor the success rates of nesting birds for 45 years. Now, Burkholder said, it’s especially critical to capture data on nesting birds to reveal the health of birds before they encounter the oil spill—as well as in the years ahead, to detect possible long-term effects.
To participate, sign in here.
Development of keeled flowers
A study using scanning electron microscopy has revealed that the keeled petals of Leguminosae and Polygalaceae are fundamentally different.
25 Mar 2011
Scanning electron micrographs of dissected floral buds of Polygala violacea (left) and P. gomesiana (right) (Image: M. Angélica Bello Gutierrez).
In keeled flowers, one of the petals (or two fused petals) forms a complex hooded structure that encloses the reproductive organs. The keel can facilitate pollen presentation in cases where pollen is deposited on it.
Keeled flowers are highly characteristic of some Leguminosae and Polygalaceae, two of the four families that comprise the rosid eudicot order Fabales. Research on floral development at Kew and Reading University, conducted by former Kew PhD student Angélica Bello, shows that the characteristic crest that occurs on the keel of some Polygalaceae develops relatively late in floral ontogeny.
Despite some ontogenetic similarities, the morphologies of the two types of keeled flowers are fundamentally different, suggesting a functional convergence between these two closely related families.
Item from Dr Paula Rudall (Head of Micromorphology, RBG Kew)
Originally published in Kew Scientist, issue 38
Bello M.A., Hawkins J.A. & Rudall, P.J. (2010). Floral ontogeny in Polygalaceae and its bearing on the origin of keeled flowers in Fabales. International Journal of Plant Sciences 171: 482–498.
The National Ecological Observatory Network (NEON) is a continental-scale observatory designed to gather and provide 30 years of ecological data on the impacts of climate change, land use change and invasive species on natural resources and biodiversity. NEON is a project of the National Science Foundation, with many other U.S. agencies and NGOs cooperating.
All NEON data and information products will be freely available via the Web. NEON’s open-access approach to its data and information products will enable scientists, educators, planners, decision makers and the public to map, understand and predict the effects of human activities on ecology and effectively address critical ecological questions and issues.
NEON: A Community-Driven Resource
The NEON concept was born in the ecological community and the design honed by thousands of dedicated people participating in workshops and reviews. Read more about the community working to bring NEON to fruition.
Science is basically the combination of good logical reasoning with good practical knowledge of actual natural phenomena. All humans do some logical reasoning and have some practical knowledge of some actual natural phenomena, but most have to busy themselves with feeding themselves and their families as best they can. Few have been able to devote much of their time to reasoning and/or gaining better knowledge of nature, and only some of these have made small or big contributions to science.
In considering science theory, this site concentrates on physics theories from the now entirely untaught ideas of William Gilbert, Rene Descartes and Isaac Newton through to Albert Einstein and beyond - and we also have good related sections on Galileo Galilei, on Johannes Kepler, on Gravity phenomena, on Light, on String Theory, on physics now (The Standard Model), on Probability Science and on Science Philosophy.
PHYSICS NEWS. The most powerful electromagnetic charged-particle accelerator built to date was switched on in 2008 at CERN, and this £5 billion Large Hadron Collider (LHC) has been smashing electrically charged protons and charged heavy atomic nuclei into each other at energies much greater than any achieved before. The LHC, housed in an underground 27 kilometre (17 mile) tunnel, carries on the 'atom-smashing' experiments that since the 1950s have somehow attracted the majority of modern funding for experimental physics. Charged particle beams are electromagnetically accelerated in opposite directions through the ring-shaped machine, whose superconducting magnets are cooled to just 1.9 degrees above absolute zero (minus 271 C), reaching velocities of perhaps 99.99% of the speed of light. The charged particle beams 'collide' in four detectors, designed like giant microscopes but still not capable of observing any actual collision contact. Supporters of a variety of physics theories hope that its experiments may support their theory. Initial analyses of LHC experiments to 2012 seem to be ruling out some multi-dimension theories, some sub-quark particle theories, some string theories and some supersymmetry-sparticles theories. (see http://physicsworld.com/cws/article/indepth/44805)
And the near-light speeds have maybe not shown expected Einsteinian effects.
On 4 July 2012 CERN reported that LHC teams 'have discovered a new particle consistent with the Standard Model predicted Higgs boson'. More research is needed to prove whether the new 125 GeV particle actually is the Higgs boson, probably not before 2015, but many physicists including Peter Higgs have rushed to claim it proved. More experiments and analyses are to come, and no doubt more generally useless 'physics theories'. Many of these modern physics theories are ill-defined, and some do not really cover electromagnetism and are varieties of push-physics only, though it remains unproved whether gravity, electromagnetism or 'collision' involve any push-contact. The CERN LHC has already produced mini big-bangs and some hope that it may also produce mini black-holes; some see its present 7 TeV as its maximum safe power, though by 2015 it will be run at 14 TeV to prevent it being outdone by newer machines such as the T2K neutrino experiment. And you have to wonder if modern physics has been seriously dumbed down, as 2009 saw two physicists claiming that 'the LHC was disabled by a bird from the future' - discussed in our Science History section.
November 2012 sees some modern 'mainstream physicists' now pushing to abolish the teaching of classical experimental physics in schools as 'obsolete'. They want the experiments by Newton on light, by Galileo on gravity and by Gilbert on magnetics/electrics to be deleted from human history. It seems that only modern thought-experiment mathematical physics, or conjectural-physics, should be taught. This is being pushed at the US President through YouTube in a video "Open Letter to the President : Physics Education", which seems to be from the 'Perimeter Institute for Theoretical Physics' of Canada. The physicist or physicists concerned were obviously taught classical physics at school in the awful way it is always taught now, with no study of the works of Newton, Galileo and Gilbert. This attempt at killing real experimental physics and its associated theories can be seen on YouTube at http://www.youtube.com/watch?v=BGL22PTIOAM
Those who have specialised only in logical reasoning have often been called philosophers, and some of the best of these first emerged in Ancient Greece. The most rigorous logical reasoning, as with Euclid, has often been in the field of mathematics. Those who have specialised only in gaining better knowledge of nature have often been artisans or nature lovers, and their studies have often been concerned with their work or their leisure; metallurgy and astronomy were two fairly significant fields of study here, among many others. The chief scientific advance in gaining better knowledge of nature came with the realisation that it chiefly needed precise measurement of natural phenomena, so that the rigour of number could replace vagueness and observations became more amenable to logical reasoning, letting the two chief elements of science combine more effectively.
Early ideas on the natural world generally took some vague magical or religious form of theorising, such as the notion that natural bodies had life forces or that god caused everything. In line with this, the widely accepted though entirely unproven explanation of gravity by the philosopher Aristotle was that all bodies had 'a natural tendency' to move to their 'natural place'. Such unproven opinion was to be challenged by the emerging experimental science method, chiefly in getting rigorous factual descriptions of more natural phenomena and then in developing all kinds of theories to try to explain the known facts. The many science theories came in two basic types: black-box theories of laws of universe behaviour, like gravity laws, that describe what happens without trying to explain why, and full-explanation theories that did seek to explain why things happen.
Human knowledge of natural phenomena has undoubtedly always been increasing to some extent since our species began, though often in accidental or ad hoc ways and some discoveries have been lost and re-discovered again later. Yet on average human history has involved progress in factual knowledge of nature and in technology deriving from that knowledge as in producing first farming and then industry. But theories of nature showed little or no progress in our early history, and indeed have struggled to show progress in modern times also.
It was maybe not until the 1500s that real planned science first emerged in Europe, with the chief requirement that both good logical reasoning and good practical knowledge of actual natural phenomena must be combined to try to produce valid descriptions of natural phenomena and valid science theories. Though there were earlier neo-science developments in different parts of the world, the real emergence of science was driven first by Europe wanting to explore and exploit the wider world, and then by Europe's developing industrial revolution. World exploration required use of the astronomer's stars and of the magnetic compass. Nicolaus Copernicus's improved description of the heavenly bodies, in which the Earth correctly orbited the Sun, was published in 1543, the year of his death, and a basic compass was in some use from the 1200s. In 1600, shortly before his death, William Gilbert published his many science experiments and his physics, chiefly concerning magnetism and improved compass use, from which he derived a rarely understood full-explanation effluvia signal theory of physics relating to the Earth and to bodies generally.
Like many other early scientists of the time, Galileo Galilei (1564-1642), experimenting chiefly in mechanics and astronomy with a little push theory, had a lot of trouble from the catholic church and governments for that work and for backing Copernicus. William Gilbert (1544-1603), working mainly on magnetism in protestant England, openly dismissed Aristotle and all philosophising or theorising that was not directly substantiated by scientific experiment, and practised what he preached: his one early publication concentrated on his many experiments and only a little on an attraction theory, which Galileo dismissed. And Johannes Kepler (1571-1630), working in mathematics, optics and astronomy, developed a 'forcefield push' version of Gilbert's physics and also backed Copernicus.
But then the philosopher Rene Descartes (1596-1650) produced his mechanical push physics theory, which impressed many as fitting with much of the emerging science, and which was later falsely lumped together with that of the mathematician and physicist Isaac Newton (1643-1727), even though Newton himself favoured Gilbert's attraction theory but settled for a black-box physics theory, like a few other physicists of the time. While advances continued in other sciences, physics theory had to wait about 200 years before Albert Einstein produced his new partial-explanation forcefield spacetime theory. One basic advance in physics by then had been the discovery that the supposedly elementary 'atoms' seemed to be basically mini solar systems, with smaller particles and mini action-at-a-distance. Strong evidence that solids are far from solid supported the conclusion that at least some 'pushes' may not be contact pushes, and so perhaps at least partly supports either a field type physics or a signal type physics in which signals establish contact but do no pushing.
After Newton, physics theory seems to have somewhat sidelined experimental study in favour of mathematical study, so that increasingly universities located theoretical physics in their mathematics departments rather than in physics departments. And certainly new physics theory since Einstein, such as 'string' and 'loop' theory, seems to largely have been on the mathematics and structure of fields and/or of 'elementary' particles as possibly explaining everything somehow though it perhaps is muddy water - and 'fields' may yet be shown to not exist and/or the 'elementary particles' may yet be shown to be mini-mini-solar-systems themselves. In physics the big may be as reasonable a model of the small as vice versa, or not, and a signal physics may yet prove of some use also.
Many have been involved in the development of science, and many more in supporting or opposing it, covering all countries. But the key science theory ideas around physics can perhaps best be seen by going backwards from Einstein. Einstein considered that the theory that he chiefly had to face up to was Newton's, and Newton considered that the theories that he chiefly had to face up to were Descartes' and Gilbert's. Few understood Newton's evaluation of Gilbert, but it seems the key physics theories were indeed those of Gilbert, Descartes, Newton and Einstein which this site examines further on other pages in an interrelated way rather than entirely separately. On this site you can start with William Gilbert and somewhat simpler early physics theories and journey on to rather more complex modern physics theories.
While Newton considered various possible explanations of gravity and other 'forces', he ended up supporting none and insisting that physics should support none. He concluded that black-box mathematical behaviour laws were enough for science, and that any explanation must involve untestable unseens and so lie 'outside science'. This basic conclusion of Newton can certainly be challenged, but the way Einstein and others ignored it and claimed Newton's theory was a simple billiard-ball push theory was one of the worst mistakes in physics theory history. It meant that no physicist has worked from or built on Newton's actual physics position - only on a simplified, false 'Newton position'.
And although Gilbert, Descartes and Newton took science as not allowing contradictions, Einstein and others later adopted 'duality physics' for light and for particles requiring them both to be 'wave' and be 'not-wave' and so allowing contradiction in their science. Not just allowing contrary interpretations and contrary mathematics, but allowing actual contradiction in experiments and in actual nature. This became possible by rejecting earlier strict definitions of 'wave' and 'particle' and basically using no strict definitions.
The interest of Gilbert and Newton in signal physics theory was perhaps before its time and has really been developed by nobody since. And they were less interested in the physical nature of any signal emissions, be they particle emissions or energy emissions or wave emissions, than in how bodies experimentally responded to natural signals. Some modern physicists are now talking of a 'quantum-information' physics, a 'quantum computation' physics or a 'digital' physics involving maybe a 'cellular automaton universe' - including among others Pablo Arrighi and Jonathan Grattage affiliated with the University of Grenoble and ENS de Lyon, France (see http://membres-lig.imag.fr/arrighi/). And the possible relevance in physics still of Gilbert-Newton 'attraction physics' is maybe also suggested by a recent quote of Google on them letting application developers for their Android phones use C or C++ code "as in signal processing, intensive physics simulations, and some kinds of data processing".
It is maybe of some small interest that Einstein was the only one of these four major scientists to marry, suggesting that having a family to feed or other major activities can hinder the development of substantial new science !? But more positive is the fact they all seem to have retained their mental capacities well in old age - maybe an old-age IQ fall from 100 to 95 gives poor mental functioning but an IQ fall from 165 to 160 still leaves excellent mental functioning when older ?
The ideas presented on this site are based on extensive studies of William Gilbert and of much of Descartes, Newton and Einstein and others relating to their theories. Currently the internet offers little of these four to read online, and much of their work has still not been translated, so this site will be trying to help with that over time. Science histories often have serious weaknesses, and for basic physics history this website's interpretations are the best and should be studied first, but you may also like a look at a mostly not too unreasonable summary of science history at http://faculty.kirkwood.edu/ryost/chapter1.htm
Physics experiments and physics theories have at times come from very different types of sources, some good and some not. Early good physicists, like Galileo or William Gilbert, often had no physics training and some were hobby physicists or anti-establishment physicists.
Today some insist that every good physicist must have a physics degree, and that everybody with a physics degree is a good physicist (but we certainly do not have 900,000 Isaac Newtons today). It may seem more accurate to say that today a good physicist should probably have a physics degree, and that some with a physics degree are probably good physicists.
1. But this issue maybe needs clarifying somewhat to account for the fact that physics involves basically two different aspects - experiment and theory - and useful physics experiment seems to have somewhat less need of formal training than physics theory. Hence most technology advance has been independent of theory, so a computer engineer working for Google may produce some good physics experiments.
2. A further issue concerns the nature of formal physics theory training, in earlier times including substantial philosophy and history of science - but today seeming entirely confined to post-Einstein physics theory. This may suggest that most of today's formally trained physicists may have too narrow a focus to their physics theory ideas, so a philosopher or historian might be a better source.
We should of course still expect most good physics today to come from those with a physics degree, but should not be entirely surprised if some good physics ideas comes from a philosopher or engineer. A modern William Gilbert is possible.
All great scientists do need to have some great skill or skills, but all great scientists do not need to have every possible great skill. Highly skilled people perhaps tend to be one of three skill types:
1. Mathematicians and rule followers
Some great scientists like Isaac Newton have had great mathematical skill, and have been great at mathematical rule following reasoning. Of course some of them, maybe also including Isaac Newton, have also had some great artist-artisans rule breaking experimenting skills.
2. Artist-artisans and rule breakers
Some great scientists like Galileo Galilei have had great artist-artisan skill, and have been great at rule breaking experimenting. Of course some of them, maybe also including Galileo Galilei, have also had great mathematical rule following reasoning skill.
3. All-rounders or multi-skilled
Some great scientists may have had great mathematical skill and great artist-artisan skill, but some of these may have employed one strength more than the other. These may have been great at rule following reasoning and great at rule breaking experimenting, but some of these employed one more than the other. This might depend on their own view of science and of its priorities at the time, and some great scientists have had different views on that.
Most of the big leaps in science have been the work of great individuals working alone, while many of the smaller advances have come from team collaboration, maybe partly because teams are mostly composed of too narrow a range of skill types. But honest science has always been the more useful, as in not putting up a false simplified-Newton to knock down. Newton certainly never claimed that a light ray would not bend towards the sun, nor that a gyroscope some miles above Earth would hold a perfectly stable spin. And Newtonian physics does not imply either of these claims. Some modern scientists can seem to show a perhaps low regard for truth at times.
While artist-artisan based skills often show culture differences - as in Egyptian, Roman and other art/science/technology - mathematics has generally developed as one mathematics involving the following of one set of rules. And while science does seem to require that there can be only one actual truth of anything, it can reasonably be claimed that science does not also require that there can be only one valid description of one truth. So modern physics dependence on mathematics only may be inadequate. Art often describes the same thing in different ways successfully, and a science with one mathematics may still validly allow of different image-theory explanations. But a one-truth science does not seem to really allow of contradictory explanations such as Duality Theory in current physics ?
PS. Some might say that the last 50 years has maybe seen no significant new physics theory published, and maybe generally business and government hijack any new science to their own ends anyway, leaving little real value to any new science ? But I have been sitting on a new general science theory for the last 40 years developed after the first BSc degree I took. Then for a second BSc degree when I took year 1 Philosophy, I part ran it past the Professor of Philosophy who had been a Physicist, in a 1985 essay for him on the history of physics. He gave that top marks and promptly made several attempts to get me to switch to majoring in philosophy under him (which I would have done but at that time I could not see it as a practical career option for feeding my new wife and baby). But being satisfied that the basics of my new general science theory may possibly be worth at least a temporary publishing rather than just all dying with me, I have now put the basics of it on this website - in the hope that you may find it interesting (and this website is all interrelated so studying all of it should help you understand it). Additionally, this site simply tries to clarify some of the basics of science theory history to date as I see it - though many do interpret science history differently and often very wrongly. Some of the problems involved in the history of science are discussed in our Science History, or you can check our Site Map.
ISAAC ASIMOV would probably have been horrified at the experiments under way in a robotics lab in Slovenia. There, a powerful robot has been hitting people over and over again in a bid to induce anything from mild to unbearable pain - in apparent defiance of the late sci-fi sage's famed first law of robotics, which states that "a robot may not injure a human being".
But the robo-battering is all in a good cause, insists Borut Povše, who has ethical approval for the work from the University of Ljubljana, where he conducted the research. He has persuaded six male colleagues to let a powerful industrial robot repeatedly strike them on the arm, to assess human-robot pain thresholds.
It's not because he thinks the first law of robotics is too constraining to be of any practical use, but rather to help future robots adhere to the rule. "Even robots designed to Asimov's laws can collide with people. We are trying to make sure that when they do, the collision is not too powerful," Povše says. "We are taking the first steps to defining the limits of the speed and acceleration of robots, and the ideal size and shape of the tools they use, so they can safely interact with humans."
Povše and his colleagues borrowed a small production-line robot made by Japanese technology firm Epson and normally used for assembling systems such as coffee vending machines. They programmed the robot arm to move towards a point in mid-air already occupied by a volunteer's outstretched forearm, so the robot would push the human out of the way. Each volunteer was struck 18 times at different impact energies, with the robot arm fitted with one of two tools - one blunt and round, and one sharper.
The volunteers were then asked to judge, for each tool type, whether the collision was painless, or engendered mild, moderate, horrible or unbearable pain. Povše, who tried the system before his volunteers, says most judged the pain was in the mild to moderate range.
The team will continue their tests using an artificial human arm to model the physical effects of far more severe collisions. Ultimately, the idea is to cap the speed a robot should move at when it senses a nearby human, to avoid hurting them. Povše presented his work at the IEEE's Systems, Man and Cybernetics conference in Istanbul, Turkey, this week.
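As a rough illustration of that kind of rule (a sketch only: the class, parameter names and thresholds below are hypothetical and are not taken from the published work), a controller might simply clamp its commanded speed whenever a person is detected within a safety radius:

public class SpeedLimiter {
    private final double normalSpeed;  // m/s, full production speed
    private final double safeSpeed;    // m/s, ceiling applied when a human is nearby
    private final double safetyRadius; // m, distance at which the robot slows down

    public SpeedLimiter(double normalSpeed, double safeSpeed, double safetyRadius) {
        this.normalSpeed = normalSpeed;
        this.safeSpeed = safeSpeed;
        this.safetyRadius = safetyRadius;
    }

    // Speed the controller should command, given the nearest detected human's distance.
    public double commandedSpeed(double nearestHumanDistanceMetres) {
        if (nearestHumanDistanceMetres < safetyRadius) {
            return Math.min(normalSpeed, safeSpeed);
        }
        return normalSpeed;
    }
}

In a real system the safe speed would be derived from measured pain or injury thresholds of the kind the Ljubljana experiments aim to establish.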
"Determining the limits of pain during robot-human impacts this way will allow the design of robot motions that cannot exceed these limits," says Sami Haddadin of DLR, the German Aerospace Centre in Wessling, who also works on human-robot safety. Such work is crucial, he says, if robots are ever to work closely with people. Earlier this year, in a nerve-jangling demonstration, Haddadin put his own arm on the line to show how smart sensors could enable a knife-wielding kitchen robot to stop short of cutting him.
"It makes sense to study this. However, I would question using pain as an outcome measure," says Michael Liebschner, a biomechanics specialist at Baylor College of Medicine in Houston, Texas. "Pain is very subjective. Nobody cares if you have a stinging pain when a robot hits you - what you want to prevent is injury, because that's when litigation starts."
A Very Complex Problem
Wed Oct 13 22:40:07 BST 2010 by Henry Harris
I'm not sure where this gets us since this is actually a very complex problem that has many dimensions. For example is the subject an adult or a child? Is the subject male or female? Is the subject a friend? Is the subject a boss? Is the subject a hostile carrying a weapon? Human robot interactions are a pandora's box of complications.
A Very Complex Problem
Thu Oct 14 10:07:17 BST 2010 by Freederick
Actually, they are trying to address a very specific situation: there is an industrial robot working on an assembly line. A human is detected entering the work area. How much should the robot slow down its movements? Too slow, and work stalls. Too fast, and you risk injury to the human.
This is a narrowly defined industrial safety problem, and such factors as weapon possession, age, or friendship are totally irrelevant here. We are not talking about adapting robots for social interaction in human society, at least not yet.
Sun Feb 24 07:17:00 GMT 2013 by Liza
Eric, bacteria convert organic matter to CO2 *after* the organism has died. As long as the plants are alive and growing, they won't get the chance. So: arctic summer - air temperature's warm enough to have plants growing - bacteria don't touch live plants - plants die (partly) off during arctic winter - the dead parts fall under the live plants, or are the lowest, oldest plant parts anyway, so they're sandwiched between live plants and permafrost - the dead parts are shaded and don't thaw in summer - bacteria can't eat them because they're frozen - permafrost accumulates. There may be a small amount of dead material at the surface sufficiently exposed to thaw in summer to get decomposed, but the total balance will be towards accumulation of frozen organic matter.
The CANDU reactor is a Canadian-invented, pressurized heavy water reactor developed initially in the late 1950s and 1960s by a partnership between Atomic Energy of Canada Limited (AECL), the Hydro-Electric Power Commission of Ontario (now known as Ontario Power Generation), Canadian General Electric (now known as GE Canada), as well as several private industry participants. The acronym "CANDU", a registered trademark of Atomic Energy of Canada Limited, stands for "CANada Deuterium Uranium". This is a reference to its deuterium-oxide (heavy water) moderator and its use of uranium fuel (originally, natural uranium). All current power reactors in Canada are of the CANDU type. Canada markets this power reactor abroad.
The CANDU reactor is conceptually similar to most light water reactors, although it differs in the details.
Fission reactions in the nuclear reactor core heat a fluid, in this case heavy water (see below). This coolant is kept under high pressure to raise its boiling point and avoid significant steam formation in the core. The hot heavy water generated in this primary cooling loop is passed into a heat exchanger, heating light water in the less-pressurized secondary cooling loop. This water turns to steam and powers a conventional turbine with an electrical generator attached to it. Any excess heat energy in the steam after flowing through the turbine is rejected into the environment in a variety of ways, most typically into a large body of cool water, such as a lake, river or ocean. Heat can also be disposed of using a cooling tower, but these are avoided whenever possible because they reduce the plant's efficiency. More recently-built CANDU plants, such as the Darlington Nuclear Generating Station near Toronto, Ontario, use a discharge-diffuser system that limits the thermal effects in the environment to within natural variations.
At the time of its design, Canada lacked the heavy industry to cast and machine the large, heavy steel pressure vessel used in most light water reactors. Instead, the pressure is contained in much smaller tubes, 10 cm diameter, that contain the fuel bundles. These smaller tubes are easier to fabricate than a large pressure vessel. In order to allow the neutrons to flow freely between the bundles, the tubes are made of zircaloy, which is highly transparent to neutrons. The zircaloy tubes are surrounded by a much larger low-pressure tank known as a calandria, which contains the majority of the moderator.
Canada also lacked access to uranium enrichment facilities, which were then extremely expensive to construct and operate. The CANDU was therefore designed to use natural uranium as its fuel, like the ZEEP reactor, the first Canadian reactor. Traditional designs using light water as a moderator will absorb too many neutrons to allow a chain reaction to occur in natural uranium due to the low density of active nuclei. Heavy water absorbs fewer neutrons than light water, allowing a high neutron economy that can sustain a chain reaction even in unenriched fuel. Also, the low temperature of the moderator (below the boiling point of water) reduces changes in the neutrons' speeds from collisions with the moving particles of the moderator ("neutron scattering"). The neutrons therefore are easier to keep near the optimum speed to cause fissioning; they have good spectral purity. At the same time, they are still somewhat scattered, giving an efficient range of neutron energies.
The large thermal mass of the moderator provides a significant heat sink that acts as an additional safety feature. If a fuel assembly were to overheat and deform within its fuel channel, the resulting change of geometry permits high heat transfer to the cool moderator, thus preventing the breach of the fuel channel, and the possibility of a meltdown. Furthermore, because of the use of natural uranium as fuel, this reactor cannot sustain a chain reaction if its original fuel channel geometry is altered in any significant manner.
In a traditional light water reactor (LWR) design, the entire reactor core is a single large pressure vessel containing the light water, which acts as moderator and coolant, and the fuel arranged in a series of long bundles running the length of the core. To refuel such a reactor, it must be shut down, the pressure dropped, the lid removed, and a significant fraction of the core inventory, such as one-third, replaced in a batch procedure. The CANDU's calandria-based design allows individual fuel bundles to be removed without taking the reactor off-line, improving overall duty cycle or capacity factor. A pair of remotely-controlled fueling machines visit each end of an individual fuel string. One machine inserts new fuel while the other receives discharged fuel.
A lower U-235 density also generally implies that less of the fuel will be consumed before the fission rate drops too low to sustain criticality (due primarily to the relative depletion of U-235 compared with the build-up of parasitic fission products). However, by avoiding the uranium enrichment process, the overall consumption of mined uranium per unit of energy in CANDU reactors is significantly lower than in light-water reactors, about 30-40% less, using current designs.
A CANDU fuel assembly consists of a number of zircaloy tubes containing ceramic pellets of fuel arranged into a cylinder that fits within the fuel channel in the reactor. In older designs the assembly had 28 or 37 half-meter long fuel tubes with 12 such assemblies lying end to end in a fuel channel. The relatively new CANFLEX bundle has 43 tubes, with two pellet sizes. It is about 10 cm (four inches) in diameter, 0.5 m (20 inches) long and weighs about 20 kg (44 lb) and replaces the 37-tube bundle. It has been designed specifically to increase fuel performance by utilizing two different pellet diameters.
A number of distributed light-water compartments called liquid zone controllers help control the rate of fission. The liquid zone controllers absorb excess neutrons and slow the fission reaction in their regions of the reactor core.
CANDU reactors employ two independent, fast-acting safety shutdown systems. Shutoff rods penetrate the calandria vertically and lower into the core in the case of a safety-system trip. A secondary shutdown system involves injecting high-pressure gadolinium nitrate solution directly into the low-pressure moderator.
Compared with light water reactors, a heavy water design is "neutron rich". This makes the CANDU design suitable for "burning" a number of alternative nuclear fuels. To date, the fuel to gain the most attention is mixed oxide fuel (MOX). MOX is a mixture of natural uranium and plutonium, such as that extracted from former nuclear weapons. Currently, there is a worldwide surplus of plutonium due to the various agreements between the United States and the former Soviet Union to dismantle many of their warheads. However, the security of these supplies is a cause for concern. One way to address this security issue is by converting the warhead into fuel and burning the plutonium in a CANDU reactor.
Plutonium can also be extracted from spent nuclear fuel reprocessing. While this consists usually of a mixture of isotopes that is not attractive for use in weapons, it can be used in a MOX formulation reducing the net amount of nuclear waste that has to be disposed of.
Plutonium isn't the only fissile material in spent nuclear fuel that CANDU reactors can utilize. Because the CANDU reactor was designed to work with natural uranium, CANDU fuel can be manufactured from the used (depleted) uranium found in light water reactor (LWR) spent fuel. Typically this "Recovered Uranium" (RU) has a U-235 enrichment of around 0.9%, which makes it unusable to an LWR, but a rich source of fuel to a CANDU (natural uranium has a U-235 abundance of roughly 0.7%). It is estimated that a CANDU reactor can extract a further 30-40% energy from LWR fuel by recycling it in a CANDU reactor.
Recycling of LWR fuel does not necessarily need to involve a reprocessing step. Fuel cycle tests have also included the DUPIC fuel cycle, or direct use of spent PWR fuel in CANDU, where used fuel from a pressurized water reactor is packaged into a CANDU fuel bundle with only physical reprocessing (cut into pieces) but no chemical reprocessing. Again, where light-water reactors require the reactivity associated with enriched fuel, the DUPIC fuel cycle is possible in a CANDU reactor due to the neutron economy which allows for the low reactivity of natural uranium and used enriched fuel.
Several Inert-Matrix Fuels have been proposed for the CANDU design, which have the ability to "burn" plutonium and other actinides from spent nuclear fuel, much more efficiently than in MOX fuel. This is due to the "inert" nature of the fuel, so-called because it lacks uranium and thus does not create plutonium at the same time as it is being consumed.
CANDU reactors can also breed fuel from natural thorium, if uranium is unavailable.
After the small Nuclear Power Demonstration (NPD) prototype, the second CANDU was the Douglas Point reactor, a more powerful version rated at roughly 200 MWe and located near Kincardine, Ontario. Douglas Point went into service in 1968, and ran until 1984. Uniquely among CANDU stations, Douglas Point incorporated an oil-filled window which offered a view of the east reactor face, even when the reactor was operating. The Douglas Point type was exported to India, and was the basis for India's fleet of domestically-designed and built 'CANDU-derivatives'. Douglas Point was originally planned to be a two-unit station, but the second unit was cancelled because of the success of the larger 515 MWe units at Pickering.
In parallel to the development of the classic CANDU heavy-water design, experimental CANDU variants were developed. WR-1, located at the AECL's Whiteshell Laboratories in Pinawa, Manitoba, used vertical pressure tubes and organic oil as the primary coolant. The oil used has a higher boiling point than water, allowing the reactor to operate at higher temperatures and lower pressures than a conventional reactor. This reactor operated successfully for many years, and promised a significantly higher thermal efficiency than water-cooled versions. Gentilly-1, near Trois-Rivières, Québec, was also an experimental version of CANDU, using a boiling light-water coolant and vertical pressure tubes, but was not considered successful and was closed after 7 years of fitful operation.
The successes at NPD and Douglas Point led to the decision to construct the first multi-unit station in Pickering, Ontario. Pickering A, consisting of units 1 to 4, went into service in 1971. Pickering B, consisting of units 5 to 8, went into service in 1983, giving a full-station capacity of 4,120 MWe. The station is placed very close to the city of Toronto, in order to reduce transmission costs.
Pickering A was placed into voluntary lay-up in 1997, as a part of Ontario Hydro's Nuclear Improvement plan. Units 1 and 4 have since been returned to service, although not without considerable controversy regarding significant cost-overruns, especially on Unit 4. (The refurbishment of Unit 1 was essentially on-time and on-budget, accounting for delays in project startup imposed by the Ontario provincial government.)
In 2005, Ontario Power Generation announced that refurbishment of Units 2 and 3 at Pickering A would not be pursued, contrary to expectations. The reason for this change in plan was economic: the material condition of these units was much poorer than had existed for Units 1 and 4, particularly the condition of the steam generators, and thus the refurbishment costs would be much higher. This rendered a return-to-service of Units 2 and 3 uneconomical. A project to decommission these units is currently in the early stages of planning.
However, some of the potential fuel-cost savings from using unenriched fuel is offset by the initial, one-time cost of the heavy water. The heavy water required must be more than 99.75% pure, and tonnes of it are required to fill the calandria and the heat transfer system. The next-generation reactor (the Advanced CANDU Reactor, also called the "ACR") mitigates this disadvantage by having a smaller moderator volume and by using light water as a coolant.
Since heavy water is less efficient at transferring energy from neutrons, the moderator volume (relative to fuel volume) is larger in CANDU reactors compared with light-water designs, making a CANDU reactor core generally larger than a light water reactor of the same power output. In turn, this implies higher building costs for standard features like the containment building. This is offset to some degree by the calandria-based construction, but even considering this, the CANDU tends to have higher capital costs compared with other designs. In fact, CANDU plant costs are dominated by construction costs, the price of fuel representing perhaps 10% of the cost of the power it delivers. This is true in general of nuclear plants, where the plant cost and cost of operations represent about 65% of overall lifetime cost. Due to the lower fuelling costs compared to light water reactor designs, the levelized lifetime cost on a "per-kWh" basis tends to be comparable to these other designs.
When first being offered, CANDUs offered much better "running" time statistics, the capacity factor, than light-water reactors of a similar generation. At the time, light-water (LWR) designs spent, on average, about half of their time in maintenance or refueling outages. However, since the 1980s dramatic improvements in LWR outage management have narrowed the gap between LWR and CANDU, with several LWR units achieving capacity factors in the 90% and higher range, with an overall fleet performance of 89.5% in 2005. The latest-generation CANDU 6 reactors have demonstrated an 88-90% capacity factor, but overall fleet performance is dominated by the older Canadian units which generally report capacity factors on the order of 80%.
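For readers unfamiliar with the term, capacity factor is simply the energy a unit actually generates over a period divided by what it would have generated running continuously at full rated power over that same period (the numbers below are purely illustrative, not figures for any particular station):

\[ \text{capacity factor} = \frac{E_{\text{generated}}}{P_{\text{rated}} \times t} \]

For example, a 700 MWe unit delivering 5,500 GWh over a year (8,760 hours) would have a capacity factor of 5,500 / (0.7 x 8,760), roughly 0.90, i.e. about 90%.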
Some CANDU plants suffered from cost overruns during construction, primarily due to external factors. For instance, a number of imposed construction delays led to roughly a doubling of the projected cost of the Darlington Nuclear Generating Station near Toronto, Ontario. Technical problems and redesigns added about another billion to the resulting $14.4 billion price. In contrast, the two CANDU 6 reactors more recently installed in China at the Qinshan site were completed on-schedule and on-budget, an achievement attributed to tight control over scope and schedule.
Another concern is tritium production. Although heavy water is relatively immune to neutron capture, a small amount of the deuterium turns into tritium via this process. Tritium, when mixed with deuterium, undergoes nuclear fusion more easily than any other elemental mixture. Small amounts of tritium can be used in both the "trigger" of an A-bomb and the "fusion boost" of a boosted fission weapon. Tritium can also be used in the main fusion process of an H-bomb, but in this application it is typically generated in situ by neutron irradiation of lithium-6.
Tritium is extracted from some CANDU plants in operation in Canada, primarily to improve safety in case of heavy-water leakage. The gas is stockpiled and used in a variety of commercial products, notably "powerless" lighting systems and medical devices. In 1985 what was then Ontario Hydro sparked controversy in Ontario due to its plans to sell tritium to the US. The plan, by law, involved sales to non-military applications only, but some speculated that even this minor penetration of the market would aid the U.S. nuclear weapon program. Demands for this supply in the future appear to outstrip production; in particular the needs of future generations of experimental fusion reactors like ITER will use up a significant amount of any potential stockpile. Currently between 1.5 and 2.1 kg of tritium are recovered yearly at the Darlington separation facility, of which a minor fraction is sold.
The 1998 Operation Shakti test series in India included one bomb of about 45 kT yield that India has publicly claimed was a hydrogen bomb. An offhand comment in the BARC publication Heavy Water - Properties, Production and Analysis appears to suggest that the tritium was extracted from the heavy water in the CANDU and PHWR reactors in commercial operation. Janes Intelligence Review quotes the Chairman of the Indian Atomic Energy Commission as admitting to the tritium extraction plant, but refusing to comment on its use. It is known, however, that India has developed the technology to create tritium from the neutron-irradiation of lithium-6 in reactors, a process that is several orders of magnitude more efficient than the extraction of tritium from irradiated heavy water.
CANDU reactors have been proposed as the main vehicle for planned supply replacement and growth in Ontario, Canada, a province that currently generates over 50% of its electricity from CANDU reactors, with Canadian government help with financing. Interest has also been expressed in Western Canada, where CANDU reactors are being considered as heat and electricity sources for the energy-intensive oil sands extraction process, which currently uses natural gas. Energy Alberta Corporation, headquartered in Calgary, announced on August 27, 2007 that it had filed an application for a license to build a new nuclear plant at Lac Cardinal (30 km west of the town of Peace River, Alberta). The application would see an initial twin AECL ACR-1000 plant go online in 2017, producing 2.2 gigawatts (electric).
Romania is in discussions for the completion of its multi-unit nuclear plant at Cernavoda, now consisting of two operating CANDU reactors. Three more partially-completed CANDU reactors exist on the same site, part of a project discontinued at the close of the Nicolae Ceauşescu regime.
Turkey has repeatedly shown interest in the CANDU reactor, but so far has chosen not to pursue nuclear energy. In the summer of 2006, Turks protested against plans for building nuclear reactors.
The units are designed with a planned operating life of over fifty years, which will be achieved with a mid-life program to replace some of the key components, such as the fuel channels. The plants have a projected average annual capacity factor of more than ninety per cent.
Enhancements of the CANDU 6 design to achieve higher plant output include: the installation of an Ultrasonic Flow Meter (UFM) to improve the accuracy of feedwater flow measurements, improvements in the turbine design itself, and a change in the condenser vacuum system design for operation at lower condenser pressures.
AECL continues to develop other features to further improve the plant’s performance while maintaining the basic features of the CANDU 6 design, which over time have proven to be extremely reliable with an excellent production record since the early 1980s. The additional enhancements include:
At the same time the basic and defining design features of CANDU are all maintained:
It is expected that the capital cost of constructing these plants will be reduced by up to 40% compared to current CANDU 6 plants.
Tritium is generated in all nuclear power designs; however, CANDU reactors generate more tritium in their coolant and moderator than light-water designs, due to neutron capture in heavy hydrogen. Some of this tritium escapes into containment and is generally recovered; however a small percentage (about 1%) escapes containment and constitutes a routine radioactive emission from CANDU plants (also higher than from an LWR of comparable size). Operation of a CANDU plant therefore includes monitoring of this effluent in the surrounding biota (and publishing the results), in order to ensure that emissions are maintained below regulatory limits.
In some CANDU reactors the tritium concentration in the moderator is periodically reduced by an extraction process, in order to further reduce this risk. Typical tritium emissions from CANDU plants in Canada are less than 1% of the national regulatory limit, which is based upon the guidelines of the International Commission on Radiological Protection (ICRP) (for example, the maximum permitted drinking water concentration for tritium in Canada, 7000 Bq/L, corresponds to 1/10 of the ICRP's public dose limit). Tritium emissions from other CANDU plants are similarly low.
In general there is significant public controversy associated with radioactive emissions from nuclear power plants, and for CANDU plants one of the main concerns is tritium. In 2007 Greenpeace published a critique of tritium emissions from Canadian nuclear power plants by Dr. Ian Fairlie. This report was reviewed by Dr. Richard Osborne and found to be in significant error.
In Java, the thread scheduler uses thread priorities - integer values assigned to each thread - to help determine the execution schedule of threads. Threads enter the ready-to-run state and are scheduled according to their priorities.
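As a minimal sketch (class and thread names here are arbitrary), priorities are set through the standard Thread API; note that they are only hints to the scheduler, not guarantees of ordering:

public class PriorityDemo {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println(
                Thread.currentThread().getName() + " running at priority "
                + Thread.currentThread().getPriority());

        Thread low = new Thread(task, "low");
        Thread high = new Thread(task, "high");

        low.setPriority(Thread.MIN_PRIORITY);   // 1
        high.setPriority(Thread.MAX_PRIORITY);  // 10

        high.start();
        low.start();
        // A higher priority makes a thread more likely, not certain, to run first.
    }
}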
This page discusses - Synchronized Threads.
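For example (a small self-contained sketch, names arbitrary), declaring methods synchronized ensures that only one thread at a time can execute them on a given object, so concurrent updates are not lost:

public class Counter {
    private int count = 0;

    // Only one thread at a time may run this on a given Counter instance.
    public synchronized void increment() {
        count++;
    }

    public synchronized int value() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable work = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.value()); // always 20000 because increments are synchronized
    }
}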
Multithreading in Java
This page discusses - Multithreading in Java.
This page discusses - Interthread Communication.
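Interthread communication is normally built on wait() and notify()/notifyAll() called on a shared monitor; here is a minimal sketch (names arbitrary) in which one thread hands a message to another:

public class MessageBox {
    private String message;
    private boolean ready = false;

    public synchronized void put(String msg) {
        message = msg;
        ready = true;
        notifyAll();               // wake any thread waiting for a message
    }

    public synchronized String take() throws InterruptedException {
        while (!ready) {
            wait();                // releases the lock until notified
        }
        ready = false;
        return message;
    }

    public static void main(String[] args) {
        MessageBox box = new MessageBox();
        new Thread(() -> {
            try {
                System.out.println("received: " + box.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
        box.put("hello");
    }
}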
Creation of MultiThreads
As with the creation of a single thread, you can create more than one thread (multiple threads) in a program either by extending the Thread class or by implementing the Runnable interface, as in the sketch below.
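A short sketch showing both approaches side by side (class and thread names arbitrary):

public class MultiThreadDemo {
    public static void main(String[] args) {
        // Several threads created from the same Runnable...
        Runnable greeter = () ->
                System.out.println("hello from " + Thread.currentThread().getName());
        for (int i = 0; i < 3; i++) {
            new Thread(greeter, "runnable-" + i).start();
        }

        // ...and one created by subclassing Thread directly.
        Thread worker = new Thread("subclassed") {
            @Override
            public void run() {
                System.out.println("hello from " + getName());
            }
        };
        worker.start();
    }
}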
Glue beats gecko's sticking power
10 October 2008
The prospect of a 'Spider Man' suit which would allow the wearer to scale sheer walls has moved a step closer: a carbon nanotube-based material has smashed records for sticking power to a vertical surface, and it can be easily peeled away too.
A gecko's ability to climb walls and hang from ceilings relies on the millions of sub-microscopic branched hairs called setae on the animal's feet - resulting in powerful van der Waals forces being generated between the setae and the surface. Over the last six years, researchers have created a range of synthetic mimics of this system.
Top: Gecko's foot and setae; Bottom: Carbon nanotube glue
Now, Liangti Qu from the University of Dayton, Ohio, US, and colleagues have synthesised a new material, based on multi-walled carbon nanotubes, that sticks onto a smooth vertical surface with adhesive forces of around 100 N/cm² - almost 10 times more than a gecko's foot, and nearly three times greater than the best artificial 'gecko' material. 'A three centimetre square of the material could hold 150 kg,' says Liming Dai, one of the leaders of the research group. Yet the material can also easily be pulled away from a surface.
The key to the new material's properties lies in the structure of its nanotubes. They are grown under low pressure chemical vapour deposition in such a way that the exposed ends of the tubes are not aligned; rather they are floppy and entangled, like so many strands of spaghetti.
A bottle of cola suspended from a glass surface using the nanotube glue
When a patch of the material is pressed on to a vertical surface and a load attached to pull it downwards, the strands of 'spaghetti' are stretched so that they are aligned against the surface, significantly increasing the area of contact between the hairs and the substrate. But without the shear force downwards, the nanotubes have far less sticking power. When tugged in the normal plane (perpendicular to the vertical surface), they can be easily detached from the wall.
The material showed reduced, but still significant, adhesion on rough surfaces such as sandpaper. Possible applications, says Dai, include pads for a robot's feet to enable it to climb walls, or as a way of joining electronic components without the need for soldering. He says the modified nanotubes are no more expensive to produce than standard nanotubes, and that the price of these is falling as more industrial-scale production facilities emerge.
Stanislav Gorb of the Max-Planck Institute for Metals Research in Stuttgart, Germany, is an expert on biomaterials. He says that while he likes the study because of the way that carbon nanotubes have been modified to achieve novel properties, the system is not truly analogous to that of the gecko because a relatively large 'preload' is needed to obtain the adhesion: in other words, the patch of material must first be pressed with some force to the surface. 'The trick of the gecko and other animals is minimal load with strong adhesion,' Gorb points out.
L Qu et al, Science, 2008, 322, 238 (DOI: 10.1126/science.1159503)
Oct. 22, 2010 Astronomers in Japan, using an X-ray detector on the International Space Station, and at Penn State University, using NASA's Swift space observatory, are announcing the discovery of an object newly emitting X-rays, which previously had been hidden inside our Milky Way galaxy in the constellation Centaurus.
The object -- a binary system -- was revealed recently when an instrument on the International Space Station named MAXI (Monitor of All-Sky X-ray Image) on the Exposed Facility of the Japanese Experiment Module "Kibo" caught it in the act of erupting with a massive blast of X-rays known as an X-ray nova. The MAXI mission team quickly alerted astronomers worldwide to the discovery of the new X-ray source at 2:00 a.m. EDT on Wednesday, 20 October, and NASA's Swift Observatory quickly conducted an urgent "target-of-opportunity" observation nine hours later, which allowed for the location of the X-ray nova to be measured accurately.
"The collaboration between the MAXI and Swift teams allowed us to quickly and accurately identify this new object," said Jamie Kennea, the Swift X-ray Telescope instrument scientist at Penn State University who is leading the Swift analysis. "MAXI and Swift's abilities are uniquely complementary, and in this case have provided a discovery that would not have been possible without combining the knowledge obtained from both."
The Swift detection confirmed the presence of the previously unknown bright X-ray source, which was named MAXI J1409-619. "The Swift observation suggests that this source is probably a neutron star or a black hole with a massive companion star located at a distance of a few tens of thousands of light years from Earth in the Milky Way," said David Burrows, professor of astronomy and astrophysics at Penn State and the lead scientist for Swift's X-ray Telescope. "The contribution of Swift's X-ray Telescope to this discovery is that it can swing into position rapidly to focus on a particular point in the sky and it can image the sky with high sensitivity and high spatial resolution."
"MAXI has demonstrated its capability to discover X-ray novae at great distances," said Kazutaka Yamaoka, assistant professor at Aoyama Gakuin University and a member of the MAXI team. "The MAXI team is planning further coordinated observations with NASA satellites to reveal the identity of this source."
Habitat Affects Escape Behavior of Birds
It seems birds of the same species raised in diverse settings behave differently in the face of a threat. Birds raised in an urban environment react differently than country-bred birds when faced with a predator.
A study undertaken by Diego Ibanez-Alamo, researcher at the University of Granada (UGR) and Anders Pape Moller from Paris-Sud University, highlights the fact that urbanization plays an influential role in a bird's survival strategies.
They analyzed the escape techniques of 1,132 birds belonging to 15 species in different rural and urban areas.
The study showed that city birds have changed their behavior to adapt to new threats like cats, which are their main enemies in an urban habitat, instead of their more traditional enemies in the countryside, such as the sparrow hawk.
"When they are captured, city birds are less aggressive, they produce alarm calls more frequently, they remain more paralyzed when attacked by their predator and they lose more feathers than their countryside counterparts," explained by Juan Diego Ibanez-Alamo.
They were surprised to see that urbanization was directly linked with these differences. This finding gives rise to the concept that escape strategies evolve alongside the expansion of cities.
"It is crucial to discover how birds adapt to transformations in their habitat so that we can decrease their effects," said Ibanez-Alamo..
"Predation change caused by city growth is serious," outlined Ibanez-Alamo.
As the scientist indicates, tactics against their hunters are "crucial" so that birds can adapt to their new environment: "Birds should modify their behavior to be able to survive in cities because if not, they will become extinct at the mercy of urban growth."
These results appear in the journal Animal Behaviour.
Scientists have long known of dissimilarities in anatomy and activity between the brains of women and men—now a rodent study shows that even individual neurons behave differently depending on sex.
Robert Clark of the University of Pittsburgh School of Medicine and his colleagues found that cultured neurons from female rats and mice survived longer than did neurons from their male counterparts when facing starvation. Such sex differences had been evident for decades in other body tissues, but so far no one had looked at brain cells, Clark says. When he and his team deprived the cells of nutrients, female neurons consumed mainly fat resources to stay alive, whereas a large fraction of the male cells started to eat up their own protein-based building blocks—and subsequently died.
The findings suggest that tailoring nutrition to a patient’s gender during critical care—for example, after illnesses that temporarily cut off the brain’s nutrient supply, such as stroke—might help prevent brain cell death, Clark posits. Men’s neurons might fare better on a high-protein diet, for instance, whereas high fat content would probably nourish women’s brain cells best, he adds.
Self-cannibalism makes sense for body tissues other than the brain, but why male neurons engaged in it to such a large extent is a mystery, Clark says. “You can understand why during famine, you would want to break down muscle to preserve the rest of your body, but it’s harder to understand why you would want to break down proteins within your brain.”
Note: This article was originally published with the title "Neuron Cannibalism."
Astronomical Eyepiece and Solar Filters
Astronomical filters can enhance the appearance of the moon, planets, and nebulae. The moon, and in some cases the planets, are too bright when viewed through a telescope. Lunar or neutral-density filters reduce this brightness, making it easier to discern fine details. In other cases one color dominates the image, and more detail can be seen if that color is partially filtered. Nebulae benefit from narrow bandpass filters that selectively pass the very specific wavelengths of light emitted by ionized hydrogen and oxygen. Solar filters are absolutely essential for safely observing the sun.
Nebular filters are designed for observing emission nebulae, dramatically increasing both the contrast and detail that is visible.
Lunar and planetary filters enhance the view by reducing glare or subduing a predominant color, allowing more subtle details to become visible.
Many astronomical filters are designed for visual observing, but some are specifically designed to be used exclusively for imaging applications using CCD or Digital SLR cameras.
Solar filters permit the safe observation of the sun with a telescope. White-light filters are useful for observing sunspots. Hydrogen-alpha filters reveal details both on the surface and in the corona.
Climate Change Information Project - February 2013 Digest Vol. 8
WASHINGTON (Reuters) - Thousands of protesters gathered on the Washington's National Mall on Sunday calling on President Barack Obama to reject the controversial Keystone XL oil pipeline proposal and honor his inaugural pledge to act on climate change.
Organizers of the Forward on Climate event estimated that 35,000 people from 30 states turned out in cold, blustery conditions for what they said was the biggest climate rally in U.S. history.
For the full story visit http://www.reuters.com/assets/print?aid=USBRE91G0GZ20130217
Below are articles from the Climate Change Information Project. The focus of the project is to get scientifically-based articles relating to climate change disseminated to the broader public. Currently, most of this information tends to stay within the scientific community.
2012 Sustained Long-Term Climate Warming Trend, NASA Finds
Jan. 15, 2013 — NASA scientists say 2012 was the ninth warmest of any year since 1880, continuing a long-term trend of rising global temperatures. With the exception of 1998, the nine warmest years in the 132-year record all have occurred since 2000, with 2010 and 2005 ranking as the hottest years on record. For complete story http://www.sciencedaily.com/releases/2013/01/130115190218.htm
Increases in Extreme Rainfall Linked to Global Warming
Feb. 1, 2013 A worldwide review of global rainfall data led by the University of Adelaide has found that the intensity of the most extreme rainfall events is increasing across the globe as temperatures rise. For complete story go to http://www.sciencedaily.com/releases/2013/02/130201100036.htm
Unprecedented Glacier Melting in the Andes Blamed On Climate Change
Jan. 22, 2013 Glaciers in the tropical Andes have been retreating at an increasing rate since the 1970s, scientists write in the most comprehensive review to date of Andean glacier observations. The researchers blame the melting on rising temperatures as the region has warmed... For complete article: http://www.sciencedaily.com/releases/2013/01/130122101907.htm
Loss of Arctic Sea Ice Speeds Domino Effect of Warming Temperatures at High Latitudes
Jan. 23, 2013 Melting Arctic sea ice is no longer just evidence of a rapidly warming planet -- it's also part of the problem.
Alan Werner, professor of geology at Mount Holyoke College, said that decreasing amounts of Arctic snow and ice in summer will lead to a greater degree of heat absorption at the North Pole. For complete story:
Feb. 11, 2013 Ancient carbon trapped in Arctic permafrost is extremely sensitive to sunlight and, if exposed to the surface when long-frozen soils melt and collapse, can release climate-warming carbon dioxide gas into the atmosphere much faster than previously thought. For complete article http://www.sciencedaily.com/releases/2013/02/130211162116.htm
Heat Waves, Storms, Flooding: Climate Change to Profoundly Affect U.S. Midwest in Coming Decades
Jan. 18, 2013 In the coming decades, climate change will lead to more frequent and more intense Midwest heat waves while degrading air and water quality and threatening public health. Intense rainstorms and floods will become more common, and existing risks to the Great Lakes will be exacerbated. For complete article http://www.sciencedaily.com/releases/2013/01/130118104121.htm
Predictions of the Human Cost of Climate Change
Feb. 8, 2013 A new book, Overheated: The Human Cost of Climate Change, predicts a grim future for billions of people in this century. It is a factual account of a staggering human toll, based on hard data. For complete article:
The Atacama Desert in Chile is on the wrong side of the Andes. The trade winds that might bring moisture from the east get caught in the mountains instead, making this the driest place on Earth. But this turns out to be perfect for the Atacama Large Millimeter Array—the most powerful telescope in history. At 5,000 meters—about as high as Everest Base Camp—the atmosphere is thin, and the heavens seem near.
The Chajnantor Plateau of the Atacama dwarfs the radio telescopes of ALMA, a project that began construction nine years ago and is still in progress. ALMA, a collaboration among scientists and governments from Europe, Asia, and North and South America, aims to gather a fuller view of the universe than has ever before been possible.
Scientists and engineers work in the control room at ALMA.
Radio frequencies received by the antennas are translated into data and carried by fiber-optic cables to the operations support facility.
Because the site’s altitude is so inhospitable, the project’s roughly 600 workers—scientists, technicians, office staff, and construction workers—sleep in the living quarters at a more tolerable 3,000 meters. Only a few guards and those working overnight stay at 5,000 meters.
Between 12-hour shifts, workers get some rest.
In their off-time, workers dine with colleagues from the United States, Canada, Japan, Taiwan, Europe, and Chile.
Workers stand inside the “receiver room” of one of the radio telescopes that make up the project.
Massive warehouses provide shelter for both the equipment and the workers.
Inside a warehouse, technicians work to assemble an antenna before it’s deployed.
Workers test a radio telescope being readied for operation.
A scientist performs some final tests in the receiver room before taking this antenna online.
A radio telescope stands in the thin air of Chile’s Atacama Desert, ready to receive data from the stars. | <urn:uuid:fdb0c8cb-1199-491e-9158-350b7e408730> | 3.5625 | 421 | Truncated | Science & Tech. | 45.242956 | 2,032 |
by Elizabeth K. Gardner
West Lafayette IN (SPX) Dec 02, 2011
A drop in carbon dioxide appears to be the driving force that led to the Antarctic ice sheet's formation, according to a recent study of molecules from ancient algae found in deep-sea core samples, led by scientists at Yale and Purdue universities.
The key role of the greenhouse gas in one of the biggest climate events in Earth's history supports carbon dioxide's importance in past climate change and implicates it as a significant force in present and future climate.
The team pinpointed a threshold: below a certain level of atmospheric carbon dioxide, an ice sheet forms at the South Pole. How much the greenhouse gas must increase before the ice sheet melts - which is the relevant question for the future - remains a mystery.
Matthew Huber, a professor of earth and atmospheric sciences at Purdue, said roughly a 40 percent decrease in carbon dioxide occurred prior to and during the rapid formation of a mile-thick ice sheet over the Antarctic approximately 34 million years ago.
A paper detailing the results was published Thursday (Dec. 1) in the journal Science.
"The evidence falls in line with what we would expect if carbon dioxide is the main dial that governs global climate; if we crank it up or down there are dramatic changes," Huber said. "We went from a warm world without ice to a cooler world with an ice sheet overnight, in geologic terms, because of fluctuations in carbon dioxide levels."
For 100 million years prior to the cooling, which occurred at the end of the Eocene epoch, Earth was warm and wet. Mammals and even reptiles and amphibians inhabited the North and South poles, which then had subtropical climates.
Then, over a span of about 100,000 years, temperatures fell dramatically, many species of animals became extinct, ice covered Antarctica and sea levels fell as the Oligocene epoch began.
Mark Pagani, the Yale geochemist who led the study, said polar ice sheets and sea ice exert a strong control on modern climate, influencing the global circulation of warm and cold air masses, precipitation patterns and wind strengths, and regulating global and regional temperature variability.
"The onset of Antarctic ice is the mother of all climate 'tipping points,'" he said. "Recognizing the primary role carbon dioxide change played in altering global climate is a fundamentally important observation."
There has been much scientific discussion about this sudden cooling, but until now there has not been much evidence and solid data to tell what happened, Huber said.
The team found the tipping point in atmospheric carbon dioxide levels for cooling that initiates ice sheet formation is about 600 parts per million. Prior to the levels dropping this low, it was too warm for the ice sheet to form. At the Earth's current level of around 390 parts per million, the environment is such that an ice sheet remains, but carbon dioxide levels and temperatures are increasing.
The world will likely reach levels between 550 and 1,000 parts per million by 2100. Melting an ice sheet is a different process than its initiation, and it is not known what level would cause the ice sheet to melt away completely, Huber said.
"The system is not linear and there may be a different threshold for melting the ice sheet, but if we continue on our current path of warming we will eventually reach that tipping point," he said. "Of course after we cross that threshold it will still take many thousands of years to melt an ice sheet."
What drove the rise and fall in carbon dioxide levels during the Eocene and Oligocene is not known.
The team studied geochemical remnants of ancient algae from seabed cores collected by drilling in deep-ocean sediments and crusts as part of the National Science Foundation's Integrated Ocean Drilling program. The biochemical molecules present in algae vary depending on the temperature, nutrients and amount of dissolved carbon dioxide present in the ocean water.
These molecules are well preserved even after many millions of years and can be used to reconstruct the key environmental variables at the time, including carbon dioxide levels in the atmosphere, Pagani said.
Samples from two sites in the tropical Atlantic Ocean were the main focus of this study because this area was stable at that point in Earth's history and had little upwelling, which brings carbon dioxide from the ocean floor to the surface and could skew measurements of atmospheric carbon dioxide, Huber said.
In re-evaluating previous estimates of atmospheric carbon dioxide levels using deep-sea core samples, the team found that continuous data from a stable area of the ocean is necessary for accurate results.
Data generated from a mix of sites throughout the world's oceans caused inaccuracies due to variations in the nutrients present in different locations.
This explained conflicting results from earlier papers based on the deep-sea samples that suggested carbon dioxide increased during the formation of the ice sheet, he said.
Constraints on temperature and nutrient concentrations were achieved through modeling of past circulation, temperature and nutrient distributions performed by Huber and Willem Sijp at the University of New South Wales in Australia.
The collaboration built on Huber's previous work using the National Center for Atmospheric Research Community Climate System Model 3, one of the same models used to predict future climates, and used the UVic Earth System Climate Model developed at the University of Victoria, British Columbia.
"The models got it just about right and provided results that matched the information obtained from the core samples," he said.
"This was an important validation of the models. If they are able to produce results that match the past, then we can have more confidence in their ability to predict future scenarios."
In addition to Huber, Pagani and Sijp, paper co-authors include Zhonghui Liu of the University of Hong Kong, Steven Bohaty of the University of Southampton in England, Jorijntje Henderiks of Uppsala University in Sweden, Srinath Krishnan of Yale, and Robert DeConto of the University of Massachusetts-Amherst.
The National Science Foundation, Natural Environment Research Council, Royal Swedish Academy and Yale Department of Geology funded this work.
In 2004 the team used evidence from deep-sea core samples to challenge the longstanding theory that the ice sheet developed because of a shift from warm to cool ocean currents millions of years ago.
The team found that a cold current, not the warm one that had been theorized, was flowing past the Antarctic coast for millions of years before the ice sheet developed. Huber next plans to investigate the impact of an ice sheet on climate.
"It seems that the polar ice sheet shaped our modern climate, but we don't have much hard data on the specifics of how," he said. "It is important to know by how much it cools the planet and how much warmer the planet would get without an ice sheet."
Carbon cycling was much smaller during last ice age than in today's climate
Bristol UK (SPX) Nov 23, 2011
Atmospheric carbon dioxide (CO2) is one of the most important greenhouse gases and the increase of its abundance in the atmosphere by fossil fuel burning is the main cause of future global warming. In past times, during the transition between an ice age and a warm period, atmospheric CO2 concentrations changed by some 100 parts per million (ppm) - from an ice age value of 180 ppm to about ... read more
Data reported by the weather station: 107630 (EDDN)
Latitude: 49.5° | Longitude: 11.08° | Altitude: 318 m
To calculate annual averages, we analyzed data from 366 days (100% of the year).
If an average or annual total is missing data for 10 or more days, it is not displayed.
A total rainfall value of 0 (zero) may indicate that no such measurement was taken and/or that the weather station does not report it.
| Measurement | Value | Days of data |
|---|---|---|
| Annual average temperature | 8.5°C | 366 |
| Annual average maximum temperature | 13.0°C | 365 |
| Annual average minimum temperature | 4.2°C | 364 |
| Annual average humidity | 77.6% | 362 |
| Annual total precipitation | - | - |
| Annual average visibility | 13.8 km | 366 |
| Annual average wind speed | 9.2 km/h | 366 |
Number of days with extraordinary phenomena.
- Total days with rain: 221
- Total days with snow: 55
- Total days with thunderstorm: 45
- Total days with fog: 254
- Total days with tornado or funnel cloud: 0
- Total days with hail: 3
Days of extreme historical values in 1968
The highest temperature recorded was 31.1°C on July 1.
The lowest temperature recorded was -22.8°C on January 13.
The maximum wind speed recorded was 51.9 km/h on January 6. | <urn:uuid:2ed29869-20aa-4d27-a744-c824a3b48e1b> | 2.8125 | 354 | Structured Data | Science & Tech. | 69.704762 | 2,034 |
Globular clusters are gravitationally bound, dense concentrations of stars. There can be hundreds of thousands of stars in a cluster, and they are so close together that it's hard to distinguish globular clusters outside our galaxy from stars within our own galaxy using only ground-based telescopes: in other words, these big bunches of faraway stars can look like a single, nearby star. But astronomers recently used the Hubble Space Telescope's sharp eyes to identify, incredibly, over 11,000 globular clusters in the Virgo cluster of galaxies. In doing so, they noticed something interesting about where the globulars are located. Globular clusters don't seem to form uniformly from galaxy to galaxy; instead they like to be where the action is, near the center of galaxy clusters. The globulars are also more prevalent in dwarf galaxies near the center of the cluster of galaxies.
Hubble’s Advanced Camera for Surveys resolved the star clusters in 100 galaxies of various sizes, shapes, and brightnesses, even in faint, dwarf galaxies. Comprised of over 2,000 galaxies, the Virgo cluster is the nearest large galaxy cluster to Earth, located about 54 million light-years away.
Astronomers have long known that the giant elliptical galaxy at the cluster’s center, M87, hosts a larger-than-predicted population of globular star clusters. The origin of so many globulars has been a long-standing mystery.
“Our study shows that the efficiency of star cluster formation depends on the environment,” said Patrick Cote of the Herzberg Institute of Astrophysics in Victoria, British Columbia. “Dwarf galaxies closest to Virgo’s crowded center contained more globular clusters than those farther away.”
The team found a bounty of globular clusters in most dwarf galaxies within 3 million light-years of the cluster's center, where the giant elliptical galaxy M87 resides. The number of globulars in these dwarfs ranged from a few dozen to several dozen, but these numbers were surprisingly high for the low masses of the galaxies they inhabited. By contrast, dwarfs in the outskirts of the cluster had fewer globulars. Many of M87's star clusters may have been snatched from smaller galaxies that ventured too close to it.
“We found few or no globular clusters in galaxies within 130,000 light-years from M87, suggesting the giant galaxy stripped the smaller ones of their star clusters,” explained Eric Peng of Peking University in Beijing, China, and lead author of the Hubble study. “These smaller galaxies are contributing to the buildup of M87.”
Hubble’s “eye” is so sharp that it was able to pick out the fuzzy globular clusters from stars in our galaxy and from faraway galaxies in the background. “With Hubble we were able to identify and study about 90 percent of the globular clusters in all our observed fields,” Peng said. “This was crucial for dwarf galaxies that have only a handful of star clusters.”
Evidence of M87's galactic cannibalism comes from an analysis of the globular clusters' composition. "In M87 there are three times as many globulars deficient in heavy elements, such as iron, than globulars rich in those elements," Peng said. "This suggests that many of these 'metal-poor' star clusters may have been stolen from nearby dwarf galaxies, which also contain globulars deficient in heavy elements."
Studying globular star clusters is critical to understanding the early, intense star-forming episodes that mark galaxy formation. They are known to reside in all but the faintest of galaxies.
“Star formation near the core of Virgo is very intense and occurs in a small volume over a short amount of time,” Peng noted. “It may be more rapid and more efficient than star formation in the outskirts. The high star-formation rate may be driven by the gravitational collapse of dark matter, an invisible form of matter, which is denser and collapses sooner near the cluster’s center. M87 sits at the center of a large concentration of dark matter, and all of these globulars near the center probably formed early in the history of the Virgo cluster.”
The fewer number of globular clusters in dwarf galaxies farther away from the center may be due to the masses of the star clusters that formed, Peng said. “Star formation farther away from the central region was not as robust, which may have produced only less massive star clusters that dissipated over time,” he explained.
Original News Source: HubbleSite Press Release | <urn:uuid:d1ac49a9-5ba6-414e-ade6-f040425a1e88> | 3.890625 | 1,008 | News Article | Science & Tech. | 42.178992 | 2,035 |
Re-topology, or sometimes just retopology, is the process of adjusting the topology of an existing 3D model. In mesh-based 3D models, topology is the way the individual polygons are meshed together into a 3D grid, with each polygon sharing a dividing line with each of its neighbours, until the mesh encloses a hollow 3D shape.
Below, we offer a selection of links from our resource databases which may match this term.
Related Dictionary Entries for Re-topology:
Resources in our database matching the Term Re-topology:
By merging the traditionally distinct fields of topology and geometry through the discovery of a new type of mathematics called persistent homology, math researchers have created a set of equations which simply describe the pattern and placement of complex fractals as part of a polygonal model - such as the unique froth on a wave - bringing them within the reach of real-time rendering.
Industry News containing the Term Re-topology:
BBC weather forcasting is about to get a makeover in the style of VR gameworlds. Weatherscape XT, a 3D meterological software suite developed by New Zealand firm Metra, and with help from the BBC itself, is set to give a realistic-looking,...
The Internet, networks of connections between Hollywood actors, etc., are examples of complex networks, whose properties have been intensively studied in recent times. The small-world property (that everyone has a few-step connection to celebriti...
The freely available global mapping software proliferating the Internet ? Google Earth in the main ? is being eagerly adopted by researchers from a variety of fields as the most effective way to compare geographic data.
Researchers are edging toward the creation of new optical technologies using "nanostructured metamaterials" capable of ultra-efficient transmission of light, with potential applications including advanced solar cells and quantum computing...
Combining hospital MRIs with the mathematical tool known as network analysis, a group of researchers at UC San Francisco and UC Berkeley have mapped the three-dimensional global connections within the brains of seven adults who have genetic... | <urn:uuid:236148de-6181-482f-9d0f-b5fc706f88a3> | 2.96875 | 453 | Content Listing | Science & Tech. | 34.042636 | 2,036 |
Workshop Resources: Can a Good Climate Go Bad? Past, Present, and Future Climate
Welcome to the online resources for the 2006 educators workshop, Can a Good Climate Go Bad? Past, Present, and Future Climate. This workshop, presented at the University of Texas by Teri Eastburn of UCAR Education and Outreach, is intended to expose educators to scientific information about climate and share a collection of favorite classroom activities. This web portal is intended to provide the web links and additional information to those who attended the workshop and share resources to others who could not attend.
The workshop is divided into four parts, each with classroom activities and a corresponding Powerpoint presentation that is provided here. Topics covered include:
- Part 1 (PDF): Introductory Activities, Earth as a System, the Sun-Earth Connection, and Energy
- Part 2 (PDF): Climate vs. Weather, Earth's Past Climate, Climate Models
- Part 3 (PDF): Can a Good Climate Go Bad? Our Changing Climate
- Part 4: Climate Care: What We Need to Do and What's Being Done Locally, Nationally, Globally
Activities Corresponding to Part 1:
- Climate Change Survey
- Climate Lingo Bingo
- The Dynamic Systems Game
- Albedo: Some Like It Hot
- Taking Light Apart; Putting Light Back Together
- Electromagnetic Standing Wave Demonstration
Activities Corresponding to Part 2:
- Paleoclimates and Pollen
- Adaptation Investigation
- Climate and Weather: Different, but Together
- Article by M. Glantz: What Makes Good Climates Go Bad?
Activities Corresponding to Part 3:
- The Arctic Climate Impact Assessment Report The ACIA is the first comprehensive, integrated assessment of climate change and ultraviolet radiation across the entire Arctic region.
- Listening Critically to a Politicized Topic: The Voices of Climate
- The Nitrogen Cycle Game
- Carbon Cycle Pursuit Game
- Carbon Dioxide Sources and Sinks
- Energy: Drains, Gains, and Budget Strains
- Thermal Expansion and Sea Level Rise
- Mosquito Vectors and Disease Transport
Activities Corresponding to Part 4:
- CO2: How Much Do You Spew?
- Global Connections Activity from Facing the Future.org
- One Planet, Many People PowerPoint Presentation -- Developed by the UNEP to illustrate how humans have altered their environment and continue to make observable and measurable changes to the natural world.
U.S. Drought Monitor
The national drought footprint shrank slightly this week, as heavy rains fell across the South, Southeast, Midwest and parts of the Mid-Atlantic states, and major snowfall blanketed parts of the Rocky Mountains and Northern Cascades, bringing relief to those regions. However, the hardest-hit drought region — the Great Plains — continued to experience drier-than-average conditions, with the drought continuing to hold on.
A new federal drought outlook issued on Thursday projects that the drought conditions are likely to remain entrenched through April, and that the drought may even worsen from the Plains to the Rockies and into the Southwest, along with another area of persistent and expanding drought in the Southeast, including southern Georgia and the Florida Panhandle.
14-Day Observed Precipitation
Rainfall during the past two weeks, showing the rains that fell from Texas through the Lower Mississippi River Valley, but lack of precipitation across the Plains. Credit: NOAA.
“Unfortunately it looks like most of the central and southern Plains . . . is going to continue to have significant drought problems,” said Anthony Artusa, a seasonal forecaster with the National Oceanic and Atmospheric Administration (NOAA).
The economic impacts of this drought have been staggering. The drought of 2011-12, which is still ongoing, is comparable in size to severe droughts that occurred in the 1950s, and is already being blamed for more than $35 billion in crop losses alone, according to the reinsurance company Aon Benfield. Others estimate that the total cost could exceed $100 billion, making this event rival Hurricane Sandy for the most expensive natural disaster of 2012.
In Texas, which has been struggling with severe drought conditions since 2011, the areas of the state that recently emerged from drought are expected to slip right back into it during the latter half of the winter season and into spring, NOAA said.
As of Jan. 15, 58.87 percent of the land area in the lower 48 states was experiencing some form of drought conditions, according to the U.S. Drought Monitor. That marks a slight improvement from last week, when the number was 60.26 percent. More than half of the continental U.S. has been under at least moderate drought conditions since June. The drought peaked in July, when nearly 62 percent of the lower 48 states were classified as being in moderate drought or worse conditions.
According to climate scientists, the drought was most likely initially set into motion by the pattern of water temperatures in the Pacific and Atlantic Oceans, which can alter weather patterns, but manmade global warming may then have amplified the drought event by leading to multiple extreme heat events during the spring and summer of 2012. These heat events accelerated the development and intensification of the drought.
A recently released draft federal climate assessment shows that as the climate continues to warm in the next few decades, drought events are likely to become more frequent and severe, with more significant water supply and agricultural impacts in parts of the U.S.
U.S. Seasonal Drought Outlook
NOAA's Seasonal Drought Outlook for Jan. 17 through April 30, 2013. Credit: NOAA.
Kentucky saw the greatest turnaround this week, as the percentage of the state under no drought grew to 91.55 percent — up from 73.29 percent — returning the state to drought levels last seen in March 2012. Improvements were also pronounced in the South, where rain fell in Arkansas, Louisiana, and Texas.
Areas of the High Plains, though, saw conditions deteriorate yet again, as soil moisture remains extremely low heading into the second half of the winter and then the spring growing season. This could result in below-average precipitation and above-average temperatures across the Plains during the spring as the atmosphere responds to the dry surface conditions, a self-perpetuating drought feedback, Artusa said.
The percentage of land area under severe drought or worse in Colorado, Kansas, Nebraska, Wyoming and the Dakotas grew slightly this week, from 86.20 percent to 87.25 percent. A similar expansion was observed in Georgia, where the area of the state under moderate drought conditions or worse expanded from 87.21 percent to 91.24 percent.
A long period of below-average precipitation has led to exceptionally low water levels on the Mississippi River, which may force authorities to close the river to shipping traffic, something that would have major economic consequences. On Thursday morning, the water level on the river near St. Louis was 1.95 feet below average, and is forecasted to come within striking distance of the record low — 6.2 feet below average — set in 1940. The Army Corps of Engineers has been dredging sediment and removing large rocks from the riverbed to ensure that the commercial waterway stays open to barge traffic, but it’s not clear if that will succeed if precipitation remains below average in areas upstream.
NOAA’s drought outlook does offer some hope that beneficial precipitation will fall in the Upper Mississippi River Valley and the Upper Midwest, areas that are currently along the outskirts of the drought region, including states such as Minnesota, Illinois, Iowa and Wisconsin. In the Southeast, some drought improvement is projected for northeastern Georgia through Southern Virginia.
Every day an 18-wheel tanker truck pulls up alongside a lush forest near Duke University in North Carolina. Within a short time, the truck’s cargo of dreaded carbon dioxide gas begins flowing through a series of pipes and onto a forest rich with loblolly pines and small hardwood trees.
For four years now, scientists at Duke have inundated the forest with carbon dioxide, the principal greenhouse gas that is expected to wreak havoc on the planet in the decades ahead by elevating temperatures, causing sea level to rise, and severely altering vegetation around the globe.
Why, one might ask, would these good people deliberately subject the forest to such harsh treatment?
The goal of the project is to replace theory and conjecture with hard facts about the impact of increased levels of carbon dioxide, produced primarily by the burning of fossil fuels. Those facts are hard to come by, because the effect will be decades long, and it’s not easy to nail down evidence in such a complex arena. So the Duke researchers are addressing one fundamental question: What effect will elevated levels of carbon dioxide have on plant life?
The preliminary answer seems to be that at least some of the trees in the forests will love it, growing more rapidly, reproducing more robustly, thriving at a time when some parts of the globe will slip perilously into a rising sea.
“It’s really dramatic,” says Shannon LaDeau, a doctoral candidate at Duke who is running part of the long-term experiment.
The pines are growing about 25 percent faster than pines just outside the experiment, and they are twice as likely to be reproductively active. “They are making three times as many cones” which carry and incubate their seeds, she adds.
So if the trees there are doing so well, why is the world in an uproar over global warming? Because the Duke experiment addresses only one part of a problem that is extremely complicated.
What’s good for the loblollies is devastating to other living organisms, including coral reefs. Researchers at Columbia University’s Biosphere 2 in the Arizona desert have found that the atmospheric level of carbon dioxide that we might expect in a few decades will dissolve the reefs like an ice cube in boiling water.
So we can expect some good and some bad effects from global warming. Arid regions that lack water for agriculture may get a lot more rain, but low-lying regions will most likely slip below sea level.
And the forests, while robust in some areas, will almost surely change.
“You’re certainly going to change the competitive dynamics between different species,” LaDeau says.
“We could have a change in forest composition, dominated by those species that can use carbon dioxide efficiently at the expense of others,” adds William H. Schlesinger, professor of botany and the principal investigator on the project.
Localized Greenhouse Effect
The Duke experiment is an interesting marriage of technology and science. The carbon dioxide is pumped into a series of pipes surrounding a plot of land about 90 feet in diameter.
“These are big pipes that extend above the canopy of the pine forest,” LaDeau says. The level of carbon dioxide is continuously monitored. When the level drops, the system delivers more gas, and if it rises too high, it simply shuts down.
“If the wind comes out of the west, it turns on the pipes on the west side,” she adds, keeping the level precisely the same, even on a windy day. | <urn:uuid:d31a4fb4-c36f-41de-8594-4d3d4d30a335> | 3.75 | 739 | News Article | Science & Tech. | 39.831642 | 2,039 |
September 10, 2009
The illegal trade of bushmeat—meat and products made from wildlife—has grown dramatically in the past several years, thanks to high demand, enormous profits, a lack of law enforcement and minimal sentencing for criminals caught trafficking in bushmeat. The worldwide market for these illegal products reached an estimated $5 billion to $8 billion in 2008.
One of the major challenges in combating the bushmeat trade is identifying the source species for the meat and products. Once an animal has been carved up, meat looks like meat and leather looks like leather. How is anyone to know if it came from a species that is protected under national or international law?
A technique called DNA bar coding could be the answer. According to a paper published in the September 1 online edition of the journal Conservation Genetics, DNA bar codes can be used to quickly and clearly distinguish the source species of meat or leather goods for many rare and threatened species.
How would it be employed? Rather than work up a complete genetic profile of organic matter, the authors used DNA bar coding to look at a short region of the mitochondrial cytochrome c oxidase subunit 1 (COX1) gene. The DNA would then be identified in a lab at a low cost, since only the COX1 gene would need to be processed.
The researchers didn’t actually examine any endangered species, but they did sequence the bar code region of 25 commonly traded mammals and reptiles, many of which are embargoed from international trade by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).
"The species in our study are among the most commercially harvested species in South America and Africa," lead author Mitchell Eaton said in a prepared statement. "They are often partially prepared by the time they get to urban markets, which can make the species identification impossible." Eaton led the research as part of his doctoral research at the University of Colorado at Boulder.
The species examined came from South America and Africa, and included duikers, spiral-horned antelope, red river hogs, old world monkeys, alligators and crocodiles. The DNA sequences generated from this study will be added to the Barcode of Life Data Systems, an online, open-access database of bar codes.
Even though many of the samples they tested had degraded through the leather-making process or due to age, the researchers found they were still able to extract the COX1 sequence in most cases. In their paper, the authors conclude that with minimal effort and simple refinements to existing DNA extraction and polymerase chain reaction (PCR) protocols, "accurate bar code sequence data can be obtained from most wildlife products encountered in bushmeat monitoring programs and wildlife investigations."
"There is consensus on using the same fragment of DNA, COX1, to construct a library of life," said co-author George Amato, director of the Sackler Institute for Comparative Genomics at the American Museum of Natural History, in a prepared statement. "This is an example of where new genetic technology can be transformative to society, by using bar codes to catalog the diversity of ecosystems, to monitor invasive species, to search for pathogens in the food supply, and to observe wildlife trafficking for the pet trade and other commercial markets."
This isn’t the first time that DNA has been used to help identify wildlife products seized from illegal traders. Last year, Samuel Wasser of the Center for Conservation Biology introduced a genetic method to trace the origin of poached ivory. Earlier this year, research published in the Proceedings of the National Academy of Sciences recommended a set of standards for the DNA bar coding of plants.
Image: leather products on display in a craft market in Brazzaville, Congo. (Credit: Mitchell Eaton) | <urn:uuid:f82aaac1-9edb-4974-a305-d3a5924155ca> | 3.421875 | 773 | News Article | Science & Tech. | 29.698217 | 2,040 |
Scorpions--to fear or to revere?
The Bohart Museum of Entomology's open house last Sunday drew visitors of all ages who marveled at the scorpions glowing under ultraviolet light.
UC Davis entomology major Alexander Nguyen flashed a UV light on the critters as his audience watched in amazement.
Most--but not all--of the world's scorpions glow under ultraviolet light, says Lynn Kimsey, director of the Bohart Museum, which houses more than seven million insect specimens.
Scorpions are not insects, but arachnids, the same as spiders. Ranging in size from 9 mm to about 21 cm, scorpions have eight legs (arachnid alert!) and grasping claws that help conquer their prey. But it's their venom that kills. And all scorpions possess venom.
UC Davis entomologist Bruce Hammock and his lab made the news back in 2003 when they published a study that showed that scorpions produce two venoms: a pre-venom to deter predators and immobilize small prey, and then the good stuff, the powerful venom that's meant to kill.
It's like saving the best for last or waiting for the venom glands to pump and reload, so to speak.
So, why do they glow?
Scientists believe it's because of the fluorescent material found in the scorpion's hard outer covering.
"The fact that they glow serves no physiological function," said Bohart senior museum scientist Steve Heydon. "It's probably a quirk of chemical makeup."
Great quote..."a quirk of chemical makeup."
Scorpion glowing under ultraviolet light at the Bohart Museum of Entomology. (Photo by Kathy Keatley Garvey)
UC Davis entomology undergraduate student Alexander Nguyen flashes a UV light on a scorpion, as Professor Demosthenes Pappagianis, M.D., Ph.D., of Medical Microbiology and Immunology, watches. (Photo by Kathy Keatley Garvey)
Most scorpions glow under an ultraviolet light, but now a discovery on Alcatraz Island reveals that a certain species of millipedes will, too.
Forensic entomologist Robert Kimsey of the UC Davis Department of Entomology, who does fly research on Alcatraz, said that bait laced with a non-toxic fluorescent dye to estimate the rat population in February yielded the expected result: the glow of rat urine and feces.
But something else was glowing nearby: millipedes.
Had they consumed some of the rat bait?
No. An experiment at the Bohart Museum of Entomology on the UC Davis campus showed that these millipedes (Xystocheir dissecta (Wood)) glow under black lights, just like scorpions.
Lynn Kimsey, director of the Bohart Museum and a professor of entomology at UC Davis, said the species is a relatively abundant species in the Bay Area. “This particular species of millipedes glowed all along, but nobody was paying any attention to it,” she said.
She suspects that the millipedes on Alcatraz Island originated from soil transported over from the nearby Angel Island when “The Rock” was just that—rock with little or no soil.
Meanwhile, if you attend the Bohart Museum's open house from 1 to 4 p.m. on Sunday, June 3 at 1124 Academic Surge, California Drive, you'll see scorpions and millipedes glowing.
And there's something else to draw you in: a special live display of the California dogface butterfly by naturalist/photographer Greg Kareofelas of Davis. If all goes as planned, an adult will emerge from its chrysalis. If this doesn't happen (well, you can't tell a butterfly when to emerge!) you can watch the life cycle on his PowerPoint presentation, to run continuously throughout the open house.
And, you can ask Kareofelas all about the California dogface butterfly (Zerene eurydice), which, by the way, is close to royalty--it's California's designated state insect.
This millipede (Xystocheir dissecta) glows under ultraviolet light. Alexander Nguyen of the UC Davis Entomology Club captured this image on Alcatraz, during one of UC Davis forensic entomologist Robert Kimsey's field trips. | <urn:uuid:0c1bb768-743e-4ed1-a3fe-557e93155e36> | 2.5625 | 925 | Personal Blog | Science & Tech. | 49.450707 | 2,041 |
Grass. Scientists have imitated natural photosynthesis and created a record-fast molecular catalyzer. (Credit: © Nejron Photo / Fotolia)
Artificial Photosynthesis Breakthrough: Fast Molecular Catalyzer -- Science Daily
ScienceDaily (Apr. 12, 2012) — Researchers from the Department of Chemistry at the Royal Institute of Technology (KTH) in Stockholm, Sweden, have managed to construct a molecular catalyzer that can oxidize water to oxygen very rapidly. In fact, these KTH scientists are the first to reach speeds approximating those of nature's own photosynthesis. The research findings play a critical role for the future use of solar energy and other renewable energy sources.
Read more ....
My Comment: This has the potential of being big. | <urn:uuid:06198aa9-b3f7-43ec-bec2-b343f51124f4> | 3.03125 | 158 | Truncated | Science & Tech. | 28.452879 | 2,042 |
Molecular Biology and Evolution, doi:10.1093/molbev/msm077
Grinding up Wheat: a Massive Loss of Nucleotide Diversity Since Domestication
A Haudry et al.
Several demographic and selective events occurred during the domestication of wheat from the allotetraploid wild emmer (Triticum turgidum ssp. dicoccoides). Cultivated wheat has since been affected by other historical events. We analysed nucleotide diversity at 21 loci in a sample of 101 individuals representing four taxa corresponding to representative steps in the recent evolution of wheat (wild, domesticated, cultivated durum and bread wheats), to unravel the evolutionary history of cultivated wheats and to quantify its impact on genetic diversity. Sequence relationships are consistent with a single domestication event and identify two genetically different groups of bread wheat. The wild group is not highly polymorphic, with only 212 polymorphic sites among the 21,720 bp sequenced, and, during domestication, diversity was further reduced in cultivated forms — by 69% in bread wheat and 84% in durum wheat — with considerable differences between loci, some retaining no polymorphism at all. Coalescent simulations were performed and compared with our data, to estimate the intensity of the bottlenecks associated with domestication and subsequent selection. Based on our 21-locus analysis, the average intensity of domestication bottleneck was estimated at about 3 — giving a population size for the domesticated form about one third that of wild dicoccoides. The most severe bottleneck, with an intensity of about 6, occurred in the evolution of durum wheat. We investigated whether some of the genes departed from the empirical distribution of most loci, suggesting that they might have been selected during domestication or breeding. We detected a departure from the null model of demographic bottleneck for the hypothetical gene HgA. However, the atypical pattern of polymorphism at this locus might reveal selection on the linked locus Gsp1A, which may affect grain softness — an important trait for end-use quality in wheat. | <urn:uuid:ada14b95-41c1-4a28-9284-568a6f0e0943> | 2.828125 | 437 | Academic Writing | Science & Tech. | 14.600573 | 2,043 |
Bash: some useful "set"
When using Bash, you can change the default behavior of the script/shell with the `set` command. Examples:

`set -e`: The script/shell will exit immediately if a simple command exits with a non-zero status, unless the command that fails is part of an `until` or `while` loop, part of an `if` statement, part of a `&&` or `||` list, or if the command's return status is being inverted using `!`.

`set -C`: Prevent output redirection using `>` from overwriting existing files. You need to use, e.g., `>|` or `tee -a`, if you really want to write to an existing file.

`set -u`: Treat unset variables as an error when performing parameter expansion. An error message will be written to the standard error, and a non-interactive shell will exit.

`set -x`: Print a trace of simple commands and their arguments after they are expanded and before they are executed. This is very useful when debugging your script, though it is not really helpful in many cases.
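As a quick illustration of `set -C` and `set -x` together, here is a minimal sketch (not from the original post; the log file name is made up):

```bash
#!/usr/bin/env bash
set -x    # trace each command before it runs
set -C    # "noclobber": refuse to overwrite existing files with '>'

echo "first run"  > /tmp/demo.log    # works only if the file does not exist yet
echo "second run" > /tmp/demo.log    # fails: noclobber blocks the overwrite
echo "second run" >| /tmp/demo.log   # '>|' explicitly overrides noclobber
echo "more text"  >> /tmp/demo.log   # appending is still allowed
```

Running the script a second time makes even the first redirection fail, which is exactly the protection `set -C` is meant to provide.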
`set -u` is very useful and it's highly recommended when your script/shell is running as `root`. For example, if you invoke `rm -rfv /$_MY_DIR/etc/` and the variable `$_MY_DIR` isn't defined (unset), your script is simply executed as `rm -rfv /etc/` -- this is dangerous, as it means that you delete all configuration on your server. Please note that an unset variable isn't the same as an empty variable. (A defined variable may hold an empty value, but it is still defined.)
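Here is a minimal sketch of that guard in action (the variable and paths are the hypothetical ones from the example above, with a harmless `echo` standing in for the destructive `rm`):

```bash
#!/usr/bin/env bash
set -u   # abort on any reference to an unset variable
set -e   # abort as soon as a simple command fails

# _MY_DIR is intentionally never assigned.  Without 'set -u' the next line
# would print "cleaning //etc/ ..."; with it, bash stops with an
# "_MY_DIR: unbound variable" error before anything is printed (or deleted).
echo "cleaning /$_MY_DIR/etc/ ..."
```

Comment out the `set -u` line and the script happily expands the unset variable to nothing, which is exactly how the accidental `rm -rfv /etc/` happens.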
Electrodynamics/Dipoles and Multipoles
In the previous sections of the book, charges and charge distributions have been treated in a general fashion. In this section of the book, we will allow for the ordered charge configurations of which molecular matter is actually composed. For the most part, ordinary matter is electrically neutral, but it is highly saturated with pairs of charges called dipoles. For us, dipoles will be the building blocks of dielectric and magnetic materials.
An electric monopole is a single charge, and a dipole is two opposite charges closely spaced to each other, or something which looks like that electrically. Dipoles are actually very abundant in nature. For example, a water molecule has a large permanent electric dipole moment. Its positive and negative charges are not centered at the same point; it behaves like two equal and opposite charges separated by a small distance. Another example occurs with uncharged pith balls. In the presence of a charged object, an uncharged pith ball will be attracted to it because the ball's little dipoles have responded to the electric field of the charged object.
The Electric Potential due to a Dipole
As it turns out, it is better mathematically not to think of a dipole as a collection of positive and negative charges but as a separate object altogether. Take a moment and imagine two opposite charges of magnitude Q separated by a small distance s. The question to ask then is: what is the potential at some distance r away from this configuration?
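The excerpt breaks off before giving the answer, so for reference here is the standard far-field result (not part of the original text, valid when r is much larger than s):

```latex
V(\mathbf{r}) \;=\; \frac{1}{4\pi\varepsilon_0}\,\frac{\mathbf{p}\cdot\hat{\mathbf{r}}}{r^{2}}
           \;=\; \frac{1}{4\pi\varepsilon_0}\,\frac{Q\,s\,\cos\theta}{r^{2}},
\qquad r \gg s,
```

where p = Qs is the dipole moment (directed from the negative to the positive charge) and θ is the angle between the dipole axis and the line to the observation point. Note that the potential falls off as 1/r², faster than the 1/r of a single point charge, because the two opposite charges nearly cancel at large distances.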
Cumulus clouds are a type of low-level cloud that can have noticeable vertical development and clearly defined edges. Cumulo- means "heap" or "pile" in Latin. They are often described as "puffy" or "cotton-like" in appearance, and generally have flat bases. Cumulus clouds, being low-stage clouds, are generally less than 6,500 feet (2,000 m) in altitude unless they are the more vertical cumulus congestus form. Cumulus clouds may appear by themselves, in lines, or in clusters.
Cumulus clouds are often precursors of other types of cloud, such as cumulonimbus, when influenced by weather factors such as instability, moisture, and temperature gradient. Normally, cumulus clouds produce little or no precipitation, but they can grow into the precipitation-bearing congestus or cumulonimbus clouds. Cumulus clouds can be formed from water vapor, supercooled water droplets, or ice crystals, depending upon the ambient temperature. They come in many distinct subforms, and generally cool the earth by reflecting the incoming solar radiation. Cumulus clouds are part of the larger category of cumuliform clouds, which include stratocumulus clouds, cumulonimbus clouds, cirrocumulus clouds, and altocumulus clouds.
Cumulus clouds form via atmospheric convection as air warmed by the surface begins to rise. As the air rises, the temperature drops (following the lapse rate), causing the relative humidity (RH) to rise. If convection reaches a certain level the RH reaches one hundred percent, and the "wet-adiabatic" phase begins. At this point a positive feedback ensues: since the RH is above 100%, water vapour condenses, releasing latent heat, warming the air and spurring further convection.
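For reference, the cooling rate mentioned above can be made concrete (this figure is not from the article itself): before condensation begins, a rising parcel cools at the dry adiabatic lapse rate, set by gravity g and the specific heat of air c_p,

```latex
\Gamma_{d} \;=\; \frac{g}{c_{p}}
        \;\approx\; \frac{9.81\ \mathrm{m\,s^{-2}}}{1004\ \mathrm{J\,kg^{-1}\,K^{-1}}}
        \;\approx\; 9.8\ \mathrm{K\,km^{-1}}.
```

Once condensation starts, the released latent heat lowers the effective cooling rate to roughly 4 to 7 K per kilometre (depending on temperature), which is what sustains the positive feedback described above.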
In this wet-adiabatic phase, water vapor condenses on various nuclei present in the air, forming the cloud. This creates the characteristic flat-bottomed puffy shape associated with cumulus clouds. The size of the cloud depends on the temperature profile of the atmosphere and the presence of any inversions. During the convection, surrounding air is entrained (mixed) with the thermal and the total mass of the ascending air increases.
Rain forms in a cumulus cloud via a process involving two non-discrete stages. The first stage occurs after the droplets coalesce onto the various nuclei. Langmuir writes that surface tension in the water droplets provides a slightly higher pressure on the droplet, raising the vapor pressure by a small amount. The increased pressure results in those droplets evaporating and the resulting water vapor condensing on the larger droplets. Due to the extremely small size of the evaporating water droplets, this process becomes largely meaningless after the larger droplets have grown to around 20 to 30 micrometers, and the second stage takes over. In the accretion phase, the raindrop begins to fall, and other droplets collide and combine with it to increase the size of the raindrop. Langmuir was able to develop a formula[note 1] which predicted that the droplet radius would grow unboundedly within a discrete time period.
The liquid water density within a cumulus cloud has been found to change with height above the cloud base rather than being approximately constant throughout the cloud. At the cloud base, the concentration was 0 grams of liquid water per kilogram of air. As altitude increased, the concentration rapidly increased to the maximum concentration near the middle of the cloud. The maximum concentration was found to be anything up to 1.25 grams of water per kilogram of air. The concentration slowly dropped off as altitude increased to the height of the top of the cloud, where it immediately dropped to zero again.
Cumulus clouds can form in lines stretching over 300 miles (480 km) long called cloud streets. These cloud streets cover vast areas and may be broken or continuous. They form when wind shear causes horizontal circulation in the atmosphere, producing the long, tubular cloud streets. They generally form during high-pressure systems, such as after a cold front.
The height at which the cloud forms depends on the amount of moisture in the thermal that forms the cloud. Humid air will generally result in a lower cloud base. In temperate areas, the base of the cumulus clouds is usually below 6,000 feet (1,800 m) above ground level, but it can range up to 8,000 feet (2,400 m) in altitude. In arid and mountainous areas, the cloud base can be in excess of 20,000 feet (6,100 m).
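A common rule of thumb (an approximation, not taken from the article) ties the cloud-base height to the moisture of the rising air through the spread between the surface temperature T and the dew point T_d:

```latex
h_{\mathrm{LCL}} \;\approx\; 125\ \mathrm{m} \,\times\, \bigl( T - T_{d} \bigr),
\qquad T - T_{d} \ \text{in}\ ^{\circ}\mathrm{C},
```

so air with a 10 °C dew-point spread would be expected to condense roughly 1,250 m above the ground, consistent with the temperate-region figures quoted above, while the very dry air of arid and mountainous regions pushes the cloud base several kilometres up.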
Cumulus clouds can be composed of ice crystals, water droplets, supercooled water droplets, or a mixture of them. The water droplets form when water vapor condenses on the nuclei, and they may then coalesce into larger and larger droplets. In temperate regions, the cloud bases studied ranged from 500 to 1,500 metres (1,600 to 4,900 ft) above ground level. These clouds were normally above 25 °C (77 °F), and the concentration of droplets ranged from 23 to 1300 droplets per cubic centimeter (380 to 21,300 droplets per cubic inch). This data was taken from growing isolated cumulus clouds that were not precipitating. The droplets were very small, ranging down to around 5 micrometers in diameter. Although smaller droplets may have been present, the measurements were not sensitive enough to detect them. The smallest droplets were found in the lower portions of the clouds, with the percentage of large droplets (around 20 to 30 micrometers) rising dramatically in the upper regions of the cloud. The droplet size distribution was slightly bimodal in nature, with peaks at the small and large droplet sizes and a slight trough in the intermediate size range. The skew was roughly neutral. Furthermore, large droplet size is roughly inversely proportional to the droplet concentration per unit volume of air. In places, cumulus clouds can have "holes" where there are no water droplets. These can occur when winds tear the cloud and incorporate the environmental air or when strong downdrafts evaporate the water.
Cumulus clouds come in four distinct species, cumulus humilis, mediocris, congestus, and fractus; one variety, cumulus radiatus; and with seven supplementary features, cumulus pileus, velum, virga, praecipitatio, arcus, pannus, and tuba. Cumulus fractus (also called scud) and cumulus pannus are shredded forms of cumulus clouds that normally appear beneath the parent cloud during precipitation. Cumulus humilis clouds look like puffy, flattened shapes. Cumulus mediocris clouds look similar, except that they have some vertical development. Cumulus congestus clouds have a cauliflower-like structure and tower high into the atmosphere, hence their alternate name "towering cumulus". Cumulus radiatus clouds form in radial bands called cloud streets. Cumulus virga clouds are cumulus clouds producing virga, and cumulus praecipitatio clouds produce precipitation. Cumulus arcus clouds have a gust front, and cumulus tuba clouds have funnel clouds or tornadoes. Cumulus pileus clouds refer to cumulus clouds that have grown so rapidly as to force the formation of pileus over the top of the cloud. Cumulus velum clouds have an ice crystal veil over the growing top of the cloud.
Cumulus humilis clouds usually indicate fair weather. Cumulus mediocris clouds are similar, except that they have some vertical development, which implies that they can grow into cumulus congestus or even cumulonimbus clouds, which can produce heavy rain, lightning, severe winds, hail, and even tornadoes. Cumulus congestus clouds, which appear as towers, will often grow into cumulonimbus storm clouds. They can produce precipitation. Glider pilots often pay close attention to cumulus clouds, as they can be indicators of rising air drafts or thermals underneath that can suck the plane high into the sky—a phenomenon known as cloud suck.
Cumulus clouds can also produce acid rain. The acidity is largely formed by the oxidation of sulfur dioxide, the most plentiful acidifying gas, into sulfate ions. The main oxidizing compounds are hydrogen peroxide and ozone. Various nitrogen oxides can also react with hydroxide ions to form acids.
Effects on climate
Due to reflectivity, clouds cool the earth by around 12 °C (22 °F), an effect largely caused by stratocumulus clouds. However, at the same time, they warm the earth by around 7 °C (13 °F) by trapping outgoing thermal radiation, an effect largely caused by cirrus clouds. This averages out to a net cooling of about 5 °C (9 °F). Cumulus clouds, on the other hand, have a variable effect on heating the earth's surface. The more vertical cumulus congestus species and cumulonimbus genus of clouds grow high into the atmosphere, carrying moisture with them, which can lead to the formation of cirrus clouds. Researchers have speculated that this might even produce a positive feedback, where the increasing upper atmospheric moisture further warms the earth, resulting in an increasing number of cumulus congestus clouds carrying more moisture into the upper atmosphere.
Relation to other clouds
Cumulus clouds are a form of low-étage cloud along with the related cumuliform cloud stratocumulus. These clouds form from ground level to 6,500 feet (2,000 m) at all latitudes. Stratus clouds are also low-étage. In the middle étage are the alto clouds, which consist of the cumuliform cloud altocumulus and the stratiform cloud altostratus. Middle-étage clouds form from 6,500 feet (2,000 m) to 13,000 feet (4,000 m) in polar areas, 23,000 feet (7,000 m) in temperate areas, and 25,000 feet (7,600 m) in tropical areas. The high-étage clouds are all cirriform, one of which, cirrocumulus, is also cumuliform. The other clouds in this étage are cirrus and cirrostratus. High-étage clouds form from 10,000 to 25,000 feet (3,000 to 7,600 m) in high latitudes, 16,500 to 40,000 feet (5,000 to 12,000 m) in temperate latitudes, and 20,000 to 60,000 feet (6,100 to 18,000 m) in low, tropical latitudes. Cumulonimbus clouds, the other cumuliform cloud, extend vertically rather than remaining confined to one étage.
Cirrocumulus clouds
Cirrocumulus clouds form in patches and cannot cast shadows. They commonly appear in regular, rippling patterns or in rows of clouds with clear areas between. Cirrocumulus are, like other members of the cumuliform category, formed via convective processes. Significant growth of these patches indicates high-altitude instability and can signal the approach of poorer weather. The ice crystals in the bottoms of cirrocumulus clouds tend to be in the form of hexagonal cylinders. They are not solid, but instead tend to have stepped funnels coming in from the ends. Towards the top of the cloud, these crystals have a tendency to clump together. These clouds do not last long, and they tend to change into cirrus because as the water vapor continues to deposit on the ice crystals, they eventually begin to fall, destroying the upward convection. The cloud then dissipates into cirrus. Cirrocumulus clouds come in four species: stratiformis, lenticularis, castellanus, and floccus. They are iridescent when the constituent supercooled water droplets are all about the same size.
Altocumulus clouds
Altocumulus clouds are middle-étage clouds that form from 6,500 feet (2,000 m) to 13,000 feet (4,000 m) in polar areas, 23,000 feet (7,000 m) in temperate areas, and 25,000 feet (7,600 m) in tropical areas. They can produce precipitation and are commonly composed of a mixture of ice crystals, supercooled water droplets, and water droplets in temperate latitudes. However, the liquid water concentration was almost always significantly greater than the concentration of ice crystals, and the maximum concentration of liquid water tended to be at the top of the cloud while the ice concentrated itself at the bottom. The ice crystals in the base of the altocumulus clouds and in the virga were found to be dendrites or conglomerations of dendrites, while needles and plates resided more towards the top. Altocumulus clouds can form via convection or via the forced uplift caused by a warm front.
Stratocumulus clouds
A stratocumulus cloud is another type of a cumuliform cloud. Like cumulus clouds, they form at low levels and via convection. However, unlike cumulus clouds, their growth is almost completely retarded by a strong inversion. As a result, they flatten out like stratus clouds, giving them a layered appearance. These clouds are extremely common, covering on average around twenty-three percent of the earth's oceans and twelve percent of the earth's continents. They are less common in tropical areas and commonly form after cold fronts. Additionally, stratocumulus clouds reflect a large amount of the incoming sunlight, producing a net cooling effect. Stratocumulus clouds can produce drizzle, which stabilizes the cloud by warming it and reducing turbulent mixing.
Cumulonimbus clouds
Cumulonimbus clouds are the final form of growing cumulus clouds. They form when cumulus congestus clouds develop a strong updraft that propels their tops higher and higher into the atmosphere until they reach the tropopause at 60,000 feet (18,000 m) in altitude. Cumulonimbus clouds, commonly called thunderheads, can produce high winds, torrential rain, lightning, gust fronts, waterspouts, funnel clouds, and tornadoes. They commonly have anvil clouds.
Cumuliform clouds have been discovered on most other planets in the solar system. On Mars, the Viking Orbiter detected cirrocumulus and stratocumulus clouds forming via convection primarily near the polar icecaps. The Galileo space probe detected massive cumulonimbus clouds near the Great Red Spot on Jupiter. Cumuliform clouds have also been detected on Saturn. In 2008, the Cassini spacecraft determined that cumulus clouds near Saturn's south pole were part of a cyclone over 2,500 miles (4,000 km) in diameter. The Keck Observatory detected whitish cumulus clouds on Uranus. Like Uranus, Neptune has methane cumulus clouds. Venus, however, does not appear to have cumulus clouds.
- The formula expressed the time to infinite radius in terms of the viscosity of air, the fractional percentage of water droplets accreted per unit volume of air that the drop falls through, the concentration of water in the cloud in grams per cubic meter, and the initial radius of the droplet.
- "Cloud Classification and Characteristics". National Oceanic and Atmospheric Administration. Retrieved 18 October 2012.
- Geerts, B (April 2000). "Cumuliform Clouds: Some Examples". Resources in Atmospheric Sciences. University of Wyoming College of Atmospheric Sciences. Retrieved 11 February 2013.
- "Cumulus clouds". Weather. USA Today. Retrieved 16 October 2012.
- Stommel 1947, p. 91
- Mossop & Hallett 1974, pp. 632–634
- Langmuir 1948, p. 175
- Langmuir 1948, p. 177
- Stommel 1947, p. 94
- Weston 1980, p. 433
- Weston 1980, pp. 437–438
- "Cloud Classifications". JetStream. National Weather Service. Retrieved 23 October 2012.
- Warner 1969, p. 1049
- Warner 1969, p. 1051
- Warner 1969, p. 1052
- Warner 1969, p. 1054
- Warner 1969, p. 1056
- Warner 1969, p. 1058
- "WMO classification of clouds" (PDF). World Meteorological Organization. Retrieved 18 October 2012.
- Pretor-Pinney 2007, p. 17
- "L7 Clouds: Stratus fractus (StFra) and/or Cumulus fractus (CuFra) bad weather". JetStream - Online School for Weather: Cloud Classifications. National Weather Service. Retrieved 11 February 2013.
- Allaby, Michael, ed. (2010). "Pannus". A Dictionary of Ecology (4 ed.). Oxford University Press. ISBN 9780199567669. Retrieved 11 February 2013.
- "Weather Glossary". The Weather Channel. Retrieved 18 October 2012.
- Pretor-Pinney 2007, p. 20
- Dunlop 2003, pp. 77–78
- Ludlum 2000, p. 473
- Dunlop 2003, p. 79
- Garret, et al. 2006, p. i
- Thompson, Philip; Robert O'Brien (1965). Weather. New York: Time Inc. pp. 86–87.
- Pagen 2001, pp. 105–108
- Junge 1960, p. 227
- Cho, Iribarne & Niewiadomski 1989, p. 12907
- "Cloud Climatology". International Satellite Cloud Climatology Program. National Aeronautics and Space Administration. Retrieved 12 July 2011.
- "Will Clouds Speed or Slow Global Warming?". National Science Foundation. Retrieved 23 October 2012.
- Del Genfo, Lacis & Ruedy 1991, p. 384
- "Cumulonimbus Incus". Universities Space Research Association. 5 August 2009. Retrieved 23 October 2012.
- Miyazaki et al. 2001, p. 364
- Hubbard & Hubbard 2000, p. 340
- Funk, Ted. "Cloud Classifications and Characteristics" (PDF). The Science Corner. National Oceanic and Atmospheric Administration. p. 1. Retrieved 19 October 2012.
- Parungo 1995, p. 251
- "Common Cloud Names, Shapes, and Altitudes" (PDF). Georgia Institute of Technology. pp. 2, 10–13. Retrieved 12 February 2011.
- Ludlum 2000, p. 448
- Parungo 1995, p. 252
- Parungo 1995, p. 254
- Carey et al 2008, p. 2490
- Carey et al 2008, p. 2491
- Carey et al 2008, p. 2494
- Wood 2012, p. 2374
- Wood 2012, p. 2398
- Ludlum 2000, p. 471
- "NASA SP-441: Viking Orbiter Views of Mars". National Aeronautics and Space Administration. Retrieved 26 January 2013.
- "Thunderheads on Jupiter". Jet Propulsion Laboratory. National Aeronautics and Space Administration. Retrieved 26 January 2013.
- Minard, Anne (14 October 2008). "Mysterious Cyclones Seen at Both of Saturn's Poles". National Geographic News (National Geographic). Retrieved 26 January 2013.
- Boyle, Rebecca (18 October 2012). "Check Out The Most Richly Detailed Image Ever Taken Of Uranus". Popular Science. Retrieved 26 January 2013.
- Irwin 2003, p. 115
- Bougher & Phillips 1997, pp. 127–129
- Bougher, Stephen Wesley; Phillips, Roger (December 1997). Venus II: Geology, Geophysics, Atmosphere, and Solar Wind Environment. University of Arizona Press. ISBN 978-0-8165-1830-2. Retrieved 26 January 2013.
- Carey, Lawrence D.; Niu, Jianguo; Yang, Ping; Kankiewicz, J. Adam; Larson, Vincent E.; Haar, Thomas H. Vonder (September 2008). "The Vertical Profile of Liquid and Ice Water Content in Midlatitude Mixed-Phase Altocumulus Clouds". Journal of Applied Meteorology and Climatology 47 (9): 2487–2495. doi:10.1175/2008JAMC1885.1.
- Cho, H. R.; Iribarne, J. V.; Niewiadomski, M. (20 September 1989). "A Model of the Effect of Cumulus Clouds on the Redistribution and Transformation of Pollutants" (PDF). Journal of Geophysical Research 94 (D10): 12,895–12,910. Retrieved 28 November 2012.
- Del Genfo, Anthony D.; Lacis, Andrew A.; Ruedy, Reto A. (30 May 1991). "Simulations of the effect of a warmer climate on atmospheric humidity". Nature (Nature Publishing Group) 351: 382–385. doi:10.1038/351382a0.
- Dunlop, Storm (June 2003). The Weather Identification Handbook. Lyons Press. ISBN 978-1585748570. Retrieved 15 February 2013.
- Garrett, T. J.; Dean-Day, J.; Liu, C.; Barnett, B.; Mace, G.; Baumgardner, D.; Webster, C.; Bui, T.; Read, W.; and Minnis, P. (19 April 2006). "Convective formation of pileus cloud near the tropopause". Atmospheric Chemistry and Physics 6 (5): 1185–1200. doi:10.5194/acp-6-1185-2006. ISSN 1680-7316. Retrieved 18 October 2012.
- Hubbard, Richard; Hubbard, Richard Keith (5 May 2000). "Glossary". Boater's Bowditch: The Small Craft American Practical Navigator (2 ed.). International Marine/Ragged Mountain Press. ISBN 978-0-07-136136-1.
- Irwin, Patrick (July 2003). Giant Planets of Our Solar System: Atmospheres, Composition, and Structure (1 ed.). Springer. p. 115. ISBN 978-3-540-00681-7. Retrieved 26 January 2013.
- Junge, C. E. (1960). "Sulfur in the Atmosphere". Journal of Geophysical Research 65 (1): 227. doi:10.1029/JZ065i001p00227. Retrieved 28 November 2012.
- Langmuir, Irving (October 1948). "The Production of Rain by a Chain Reaction in Cumulus Clouds at Temperatures Above Freezing". Journal of Meteorology 5 (5): 175–192. doi:10.1175/1520-0469(1948)005<0175:TPORBA>2.0.CO;2. ISSN 0095-9634. Retrieved 19 October 2012.
- Ludlum, David McWilliams (2000). National Audubon Society Field Guide to Weather. Alfred A. Knopf. ISBN 0-679-40851-7. OCLC 56559729.
- Miyazaki, Ryo; Yoshida, Satoru; Dobashit, Yoshinori; Nishita, Tomoyula (2001). "A method for modeling clouds based on atmospheric fluid dynamics". Proceedings Ninth Pacific Conference on Computer Graphics and Applications. Pacific Graphics 2001. p. 363. doi:10.1109/PCCGA.2001.962893. ISBN 0-7695-1227-5.
- Mossop, S. C.; Hallett, J. (November 1974). Ice Crystal Concentration in Cumulus Clouds: Influence of the Drop Spectrum 186 (4164). Science Magazine. pp. 632–634. doi:10.1126/science.186.4164.632. Retrieved 11 February 2013.
- Pagen, Dennis (2001). The Art of Paragliding. Black Mountain Books. pp. 105–108. ISBN 0-936310-14-6.
- Parungo, F. (May 1995). "Ice Crystals in High Clouds and Contrails". Atmospheric Research 38: 249. doi:10.1016/0169-8095(94)00096-V. OCLC 90987092.
- Pretor-Pinney, Gavin (June 2007). The Cloudspotter's Guide: The Science, History, and Culture of Clouds. Penguin Group. ISBN 9781101203316. Retrieved 15 February 2013.
- Stommel, Harry (June 1947). "Entrainment of Air Into a Cumulus Cloud". Journal of Meteorology (American Meteorological Society) 4 (3): 91–94. doi:10.1175/1520-0469(1947)004<0091:EOAIAC>2.0.CO;2. Retrieved 17 October 2012.
- Warner, J. (September 1969). "The Microstructure of Cumulus Cloud. Part I. General Features of the Droplet Spectrum". Journal of the Atmospheric Sciences 26 (5): 1049 1059. doi:10.1175/1520-0469(1969)026<1049:TMOCCP>2.0.CO;2. ISSN 1520-0469. Retrieved 17 October 2012.
- Weston, K. J. (October 1980). "An Observational Study of Convective Cloud Streets". Tellus 32 (35): 433–438. doi:10.1111/j.2153-3490.1980.tb00970.x. Retrieved 14 February 2013.
- Wood, Robert (August 2012). Stratocumulus Clouds 140 (8). doi:10.1175/MWR-D-11-00121.1. ISSN 1520-0493. Retrieved 19 October 2012.
|Wikimedia Commons has media related to: Cumulus clouds| | <urn:uuid:d511ef72-4314-4e90-8209-24549d78b297> | 4.40625 | 5,511 | Knowledge Article | Science & Tech. | 63.372334 | 2,046 |
I recently blogged that you can now play Angry Birds in your web browser. This opens up all sorts of video analysis possibilities for physics lessons and assessment. Students can easily make their own videos or you can pre-record your own. Videos can be recorded using Jing, Screencast-O-Matic, or Camtasia Studio. Analysis can be done in Logger Pro or Tracker.
Here are some possible investigations to carry out (shared by Michael Magnuson on the WNYPTA email list):
1. Make a reasonable estimate for the size of an angry bird, and determine the value of g in Angry Bird World. Why would the game designer want to have g be different than 9.8 m/s²? Download Angry Birds video.
2. Does the blue angry bird conserve momentum during its split into three? Download Red and Blue Birds video.
3. Does the white bird conserve momentum when it drops its bomb? Why would the game designer want the white bird to drop its bomb the way that it does? Download White Bird video.
4. Describe in detail how the yellow bird changes velocity. You will need to analyze more than one flight path to answer this question. Download Yellow Birds video.
5. Shoot an angry bird so that it bounces off one of the blocks. Determine the coefficient of restitution and the mass of the angry bird. Download Red Birds and Falling Block video.
You can download each video using the links above or get them all here.
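For investigation 1, once a table of (t, y) positions has been exported from Logger Pro or Tracker, the in-game value of g drops out of a quadratic fit, since y(t) = y0 + v0·t - (1/2)·g·t². The sketch below uses invented sample points (scaled with an assumed bird size) purely to show the workflow; it is not data from the videos above.

```python
# Estimate "Angry Birds world" gravity from tracked (t, y) data.
# The data points here are invented for illustration; real values would come
# from Tracker/Logger Pro after scaling the video with an assumed bird size.
import numpy as np

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])          # s
y = np.array([0.0, 0.95, 1.80, 2.55, 3.20, 3.75])      # m (scaled positions)

a, b, c = np.polyfit(t, y, 2)     # fit y = a*t^2 + b*t + c
g_game = -2.0 * a                 # since a = -g/2
print(f"fitted g ≈ {g_game:.1f} m/s², launch speed ≈ {b:.1f} m/s")
```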
Other posts with ideas about how to use Angry Birds in physics class:
- Rhett Allain’s analysis of The Physics of Angry Birds.
- John Burk’s post Introducing projectile motion using Angry Birds
- Peter Kupfer’s post Angry Birds and Physics
How have you used (or will use) Angry Birds in the classroom?
UPDATE 12-28-2011: Our class has been featured on CUNY-TV’s “Science and U!” Jump to 10:25 in the video below: | <urn:uuid:cbae3902-5532-4ea9-b7b7-919234ad0234> | 3.296875 | 422 | Personal Blog | Science & Tech. | 70.689083 | 2,047 |
NASA has moved into extra-solar planetary weather forecasting. Well, mapping, but one has to start somewhere. Researchers using the agency's Spitzer infrared space telescope have mapped the weather patterns of two extremely hot, distant planets. The May 9th edition of Nature carries a description of the winds on the surface of …
Sounds like they found HELL
Well, if a hot, dark planet is out there as "Spicy" is described, I'd say this is a good candidate for the proverbial HELL we've all been told we're going to.
Now, if only they can find a cool, bright place with lots of harps...
Windy & Spicy
Sounds like a curry.
Well done on some creative renaming - it really worked! :)
SI Units please
Centigrade or kelvin are much better to use - I find it hard to understand a scale that is related to the temperature of a horse's anus.
It's NASA, not SI
NASA did the research, hence the ridiculous units.
They're not too good at converting between imperial and SI units... remember the mixup with the Mars satellite that put it in the planet?
Dumbing down much?
Thanks for clarifying that the light from Windy takes 60 years to reach us and that it's 60 light-years away. Could someone clarify for me how long it takes the light to get from Spicy at 279 light-years away?
| <urn:uuid:481c4c3d-0b46-48b3-b837-7a78f0423840> | 2.734375 | 384 | Comment Section | Science & Tech. | 62.984274 | 2,048 |
EMIPLIB is a library to facilitate the development of programs that need to stream several kinds of media over IP. The library consists of several kinds of components that can be linked together in various ways, thereby providing a flexible framework. It also provides some ready-to-use classes for the transmission of audio and video over IP. Streams originating from the same participant can be synchronized.
GRALE is a set of tools - a library and a number of accompanying applications - to study gravitational lenses. Gravitational lenses are astronomical objects so massive that their gravitational pull even deflects light rays. This can cause multiple copies of the same background object to be visible, like a cosmic mirage. The locations and shapes of these copies can provide information about the mass distribution of the gravitational lens, which GRALE can help recover using a genetic algorithm-based method. Apart from these so-called lens inversions, it's also possible to simulate gravitational lenses.
The JThread package contains classes that represent a thread and a mutex. On a Unix-like platform, the pthread library is used as the underlying thread mechanism. On an MS Windows platform, Win32 threads are used. By using these wrapper classes, you can easily create applications that use threads without having to worry about which platform the program will be running on. | <urn:uuid:4ee2d316-1d47-436f-a385-5e7561477f18> | 2.515625 | 265 | Content Listing | Software Dev. | 33.881245 | 2,049 |
How did the universe begin? Or did it begin? Sir Fred Hoyle didn't think so. He derisively coined the term the Big Bang. Now Fred Hoyle is dead and the theory is still alive.
14 billion years ago, give or take a couple billion years, the universe as we know it wasn't here. Today we believe we live in an expanding universe. The event which started it all is known as the Big Bang. Why do we say 14 billion years? Well, that is an estimate based on 20th Century research. In the early 20th Century, an astronomer called Edwin Hubble made the discovery that distant stars emitted the same light as close-up stars (and the Sun), except they appeared more red. What's more, the more distant the star, the redder the star appeared. He recalled that another astronomer, Vesto M Slipher, had found the same thing in 1912 while researching the spectra of Nebulae.
Hubble reasoned that since the wavelengths of red are longer, something must be stretching them. In 1929 he came up with his formula v = Hd to account for the redshift. As things recede, their wavelengths become longer. As things approach, their wavelengths are compressed. In sound this is called the Doppler shift. The train whistle of a receding train is lower in pitch because the wavelength of the sound is being stretched out.
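To see how the redshift-distance relation leads to the age estimate quoted above, note that if each galaxy has always receded at its present speed v = Hd, then the time since everything was together is d/v = 1/H. A quick numerical check, using a present-day Hubble constant of about 70 km/s per megaparsec (an assumed round number, not a figure from this entry):

```python
# Rough age of the universe from Hubble's law, t ≈ 1/H0.
# H0 = 70 km/s per megaparsec is an assumed round number, not from the article.
KM_PER_MPC = 3.086e19          # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0 = 70.0                      # km/s/Mpc
H0_per_second = H0 / KM_PER_MPC
age_years = 1.0 / H0_per_second / SECONDS_PER_YEAR
print(f"1/H0 ≈ {age_years / 1e9:.1f} billion years")   # ≈ 14 billion years
```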
Since space is expanding, there must have been a time in the past when the universe was small. Perhaps infinitely small. Here is what some think happened back then: there was a singularity1. In this singularity was all the matter needed to make a universe. This singularity exploded. Maybe because of natural causes, or maybe because of divine intercession.
This explosion created temperatures on the order of 10³³ degrees. According to one model2, at 10⁻³⁵ seconds the universe made a large increase in size as the temperature cooled. This lasted until about 10⁻³⁰ seconds. Here the rate of expansion slowed. By the end of the first second it was cool enough for the first elements to appear. The Universe continued to cool and to grow.
As it cooled down, there was a clumping effect which gave birth to galaxies. Within those galaxies, stars began to form. Many say that stars are still being born today, billions of years later. Microwave background radiation from the original state lingered and was used as evidence of the Big Bang. The horizon line continued to move outward as the universe expanded. Some wonder if it will continue expanding forever.
What does this mean for us? Will the universe just keep expanding? Well, there are at least two forces at work here. One of these is the momentum from the original explosion. It (obviously) makes the universe expand. The opposing force is gravity. It will pull the universe back to whence it came.
If the momentum and the gravitational effect are equal, then the universe will continue to expand but at a decreasing rate. If there are other forces, outweighing the gravitational attraction, then the rate of expansion may actually increase. But if there's more gravity, then eventually the universe will collapse back into a singularity. Some call this the Big Crunch. This may be followed by another Big Bang. We don't know.
Scientists like to think that the forces are balanced. It creates a nice flat universe and makes their calculations easier. How can we know if we have enough gravity to continue growing and never stop? That is a question being researched by cosmologists. Gravity is proportional to the mass of an object. We have methods of estimating the mass of objects in deep space. So we look for all the stuff in the universe, and from that calculate its mass, and therefore its gravity. Unfortunately, it's hard to see objects in the universe, since we can only see bright things like stars, and they only make up about 10 percent of the critical mass we need. The other 90 percent is either dark matter or dark energy (of which little will be said here).
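The "critical mass" mentioned here corresponds to the critical density, ρc = 3H²/(8πG), the density at which gravity exactly balances the expansion. Evaluating it with the same assumed Hubble constant as above gives only a handful of hydrogen atoms per cubic metre:

```python
# Critical density of the universe: rho_c = 3 H0^2 / (8 pi G).
import math

G = 6.674e-11                     # m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.086e22       # 70 km/s/Mpc converted to 1/s (assumed value)
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ≈ {rho_c:.1e} kg/m^3")   # ~9e-27 kg/m^3
# That is roughly five hydrogen atoms per cubic metre; visible stars supply
# only a small fraction of it, which is why dark matter and dark energy enter
# the story.
```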
Another evidence for the Big Bang is the Cosmic Microwave Background. Two scientists, by the names of Penzias and Wilson, picked up some microwave signals in their antenna receiver. This background gave a uniform temperature of 3 K in all directions. This is consistent with the Big Bang theory.
This brings us back to the late Fred Hoyle. He didn't like this theory. Instead, he proposed the Steady State Theory, which provided an alternate version of cosmology. The discovery of the Cosmic Background Radiation and the confirmation of large amounts of helium in the Universe 1000 seconds after creation were said to 'spell the death knell' of the Steady State Theory3.
There are heaps of questions still unsolved, such as, where did all the Big Bang material come from? What happens before the beginning of time if anything? Is time something that also contracted to a point at the Big Bang?
What happens when you get a universe of zero size and infinite density? Is it the same as a black hole? We don't know. Perhaps the efforts of cosmologists will bring some answers in our lifetime. Perhaps not. But when you are dealing in billions of years, what is man that thou art mindful of him? Perhaps the answers are too big to fit in our small minds. Was there a Big Bang? Many people think that there was; but evidence is still coming in, and researchers will continue to wrestle with this issue for many years to come. | <urn:uuid:8d5933ad-4ae4-4a7a-bdf1-02cb6ad1ad5b> | 3.8125 | 1,133 | Personal Blog | Science & Tech. | 58.575672 | 2,050 |
|Maintainer|firstname.lastname@example.org, email@example.com, firstname.lastname@example.org|
Support for using Text data with native code via the Haskell foreign function interface.
Interoperability with native code
The Text type is implemented using arrays that are not guaranteed to have a fixed address in the Haskell heap. All communication with native code must thus occur by copying data back and forth.
The Text type's internal representation is UTF-16, using the platform's native endianness. This makes copied data suitable for use with native libraries that use a similar representation, such as ICU. To interoperate with native libraries that use different internal representations, such as UTF-8 or UTF-32, consider using the functions in the Data.Text.Encoding module.
A type representing a number of UTF-16 code units.
Safe conversion functions
O(n) Perform an action on a temporary, mutable copy of a Text. The copy is freed as soon as the action returns.
Unsafe conversion code
O(1) Return the length of a Text in units of Word16. This is useful for sizing a target array appropriately before using unsafeCopyToPtr.
Foreign functions that use UTF-16 internally may return indices in Word16 instead of characters. These functions may safely be used with such indices, as they will adjust offsets if necessary to preserve the validity of a Unicode string.
In 1985 Klaus von Klitzing won the Nobel Prize for discovery of the quantised Hall effect. The previous Nobel prize awarded in the area of semiconductor physics was to Bardeen, Shockley and Brattain for invention of the transistor. Everyone knows how important transistors are in all walks of life, but why is a quantised Hall effect significant?
100 years ago E.H. Hall discovered that when a magnetic field is applied perpendicular to the direction of a current flowing through a metal, a voltage is developed in the third perpendicular direction. This is well understood and is due to the charge carriers within the current being deflected towards the edge of the sample by the magnetic field. Equilibrium is achieved when the magnetic force is balanced by the electrostatic force from the build-up of charge at the edge. This happens when E_y = v_x B_z. The Hall coefficient is defined as R_H = E_y / (j_x B_z), and since the current density is j_x = v_x N q, R_H = 1/(Nq) in the case of a single species of charge carrier. R_H can thus be measured to find N, the density of carriers in the material. Often this transverse voltage is measured at fixed current and the Hall resistance recorded. It can easily be seen that this Hall resistance increases linearly with magnetic field.
In a two-dimensional metal or semiconductor the Hall effect is also observed, but at low temperatures a series of steps appear in the Hall resistance as a function of magnetic field instead of the monotonic increase. What is more, these steps occur at incredibly precise values of resistance which are the same no matter what sample is investigated. The resistance is quantised in units of h/e² divided by an integer. This is the QUANTUM HALL EFFECT.
The figure shows the integer quantum Hall effect in a GaAs-GaAlAs heterojunction, recorded at 30mK. The QHE can be seen at liquid helium temperatures, but in the millikelvin regime the plateaux are much wider. Also included is the diagonal component of resistivity, which shows regions of zero resistance corresponding to each QHE plateau. In this figure the plateau index is, from top right, 1, 2, 3, 4, 6, 8.... Odd integers correspond to the Fermi energy being in a spin gap and even integers to an orbital LL gap. As the spin splitting is small compared to LL gaps, the odd integer plateaux are only seen at the highest magnetic fields. Important points to note are:
The zeros and plateaux in the two components of the resistivity tensor are intimately connected and both can be understood in terms of the Landau levels (LLs) formed in a magnetic field.
In the absence of magnetic field the density of states in 2D is constant as a function of energy, but in field the available states clump into Landau levels separated by the cyclotron energy, with regions of energy between the LLs where there are no allowed states. As the magnetic field is swept the LLs move relative to the Fermi energy.
When the Fermi energy lies in a gap between LLs electrons can not move to new states and so there is no scattering. Thus the transport is dissipationless and the resistance falls to zero.
The classical Hall resistance was just given by B/Ne. However, the number of current carrying states in each LL is eB/h, so when there are i LLs at energies below the Fermi energy completely filled with ieB/h electrons, the Hall resistance is h/ie². At integer filling factor this is exactly the same as the classical case.
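The plateau values are easy to tabulate: each is the von Klitzing constant h/e² (about 25.8 kΩ) divided by the plateau index i. A quick check with the standard values of h and e (constants not quoted in the article):

```python
# Quantum Hall plateau resistances R = h / (i * e^2).
h = 6.626e-34     # Planck constant, J s
e = 1.602e-19     # elementary charge, C

R_K = h / e**2    # von Klitzing constant, about 25.8 kilo-ohms
for i in (1, 2, 3, 4, 6, 8):          # the plateau indices visible in the figure
    print(f"i = {i}:  R_H = {R_K / i:8.1f} ohms")
```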
The difference in the QHE is that the Hall resistance can not change from the quantised value for the whole time the Fermi energy is in a gap, i.e. between the fields (a) and (b) in the diagram, and so a plateau results. Only when case (c) is reached, with the Fermi energy in the Landau level, can the Hall voltage change and a finite value of resistance appear.
This picture has assumed a fixed Fermi energy, i.e fixed carrier density, and a changing magnetic field. The QHE can also be observed by fixing the magnetic field and varying the carrier density, for instance by sweeping a surface gate.
Although it might be thought that a perfect crystal would give the strongest effect, the QHE actually relies on the presence of dirt in the samples. The effect of dirt and disorder can best be though of as creating a background potential landscape, with hills and valleys, in which the electrons move. At low temperature each electron trajectory can be drawn as a contour in the landscape. Most of these contours encircle hills or valleys so do not transfer an electron from one side of the sample to another, they are localised states. A few states (just one at T=0) in the middle of each LL will be extented across the sample and carry the current. At higher temperatures the electrons have more energy so more states become delocalised and the width of extended states increases.
The gap in the density of states that gives rise to QHE plateaux is the gap between extended states. Thus at lower temperatures and in dirtier samples the plateaus are wider. In the highest mobility semiconductor heterojunctions the plateaux are much narrower.
In very high mobility samples extra plateaux appear between the regular quantum Hall plateaux, at resistances given by h/e² divided by a rational fraction p/q instead of an integer. This is the fractional quantum Hall effect (FQHE). Early observations found that q was always an odd number and that certain fractions gave rise to much stronger features than others. The FQHE is much more difficult to explain since it originates from many electron correlations, but for this reason has been of great interest to theoreticians and experimentalists alike.
In some materials there are more than one species of charge carrier. These may be elecrons in different conduction band minima, different spatially confined subbands or electrons and holes simultaneously present. The numbers and mobilities of all the species have to be considered to find the transport coefficients.
If there are electrons and holes the total filling factor is the difference between the filling factors for electrons and holes. At certain fields this can be zero, at which point the Hall resistance itself becomes zero!
Last updated 05/02/97 by David R Leadley.
All rights reserved. Text and diagrams from this page may only be used for non-profit making academic exercises and then only when credited to D.R. Leadley, Warwick University 1997.
Vincent Van Gogh painted his most turbulent images when insane. The Labrador Current resembles Van Gogh's paintings when it becomes unstable. There is no reason that mental and geophysical instability should relate to each other. And yet they do. Russian physicist Andrey Kolmogorov developed theories of turbulence 70 years ago that Mexican physicists applied to some of Van Gogh's paintings, such as "Starry Sky":
The whirls and curls evoke motion. The colors vibrate and oscillate like waves that come and go. There are rounded curves and borders in the tiny trees, the big mountains, and the blinking stars. Oceanographers call these rounded curves eddies when they close on themselves much as is done by a smooth wave that is breaking when it hits the beach in violent turmoil.
Waves come in many sizes at many periods. The wave on the beach has a period of 5 seconds maybe and measures 50 meters from crest to crest. Tides are waves, too, but their period is half a day with a distance of more than 1000 km from crest to crest. These are scales of time and space. There exist powerful mathematical statements to tell us that we can describe all motions as the sum of many waves at different scales. Our cell phone and computer communications depend on it, as do whales, dolphins, and submarines navigating under water, but I digress.
The Labrador Shelf Current off Canada moves ice, icebergs, and ice islands from the Arctic down the coast into the Atlantic Ocean. To the naked eye the ice is white while the ocean is blue. Our eyes in the sky on NASA satellites sense the amount of light and color that ice and ocean reflect back to space when hit by sun or moon light. An image from last friday gives a sense of the violence and motion when this icy south-eastward flowing current off Labrador is opposed by a short wind-burst in the opposite direction:
Flying from London to Chicago on April 6, 2008, Daniel Schwen photographed the icy surface of the Labrador Current a little farther south:
The swirls and eddies trap small pieces of ice and arrange them into wavy bands and filaments. The ice visualizes turbulent motions at the ocean surface. Also notice the wide range in scales, as some circular vortices are quite small and some rather large. If the fluid is turbulent in the mathematical sense, then the color contrast or the intensity of the colors and their change in space varies according to an equation valid for almost all motions at almost all scales. It is this scaling law of turbulent motions that three Mexican physicists tested with regard to Van Gogh's paintings. They "pretended" that the painting represents the image of a flow that follows the physics of turbulent motions. And their work finds that Van Gogh indeed painted intuitively in ways that mimic nature's turbulent motions when the physical laws were not yet known.
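The scaling law referred to here is presumably Kolmogorov's 1941 theory of fully developed turbulence; Aragón and colleagues test its analogue for luminance differences in the paintings. In its most quoted form the theory states (standard results, not taken verbatim from this post):

```latex
% Kolmogorov (1941) scaling for homogeneous, isotropic turbulence,
% with \varepsilon the mean energy-dissipation rate per unit mass:
E(k) \propto \varepsilon^{2/3} k^{-5/3}
\qquad \text{and} \qquad
\langle |\delta v(\ell)|^{2} \rangle \propto (\varepsilon \ell)^{2/3}
% Aragón et al. examine luminance increments \delta L(\ell) = L(x+\ell) - L(x)
% in place of the velocity increments \delta v(\ell).
```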
There are two take-home messages for me: First, fine art and physics often converge in unexpected ways. Second, I now want to know if nature's painting of the Labrador Shelf Current follows the same rules. There is a crucial wrinkle in motions impacted by the earth's rotation: While the turbulence of Van Gogh or Kolmogorov cascades energy from large to smaller scales, that is, the larger eddies break up into several smaller eddies, for planetary-scale motions influenced by the Coriolis force due to earth's rotation, the energy moves in the opposite direction, that is, the large eddies get larger as they feed on the smaller eddies. There is always more to discover, alas, but that's the fun of physics, art, and oceanography.
Aragón, J., Naumis, G., Bai, M., Torres, M., & Maini, P. (2008). Turbulent Luminance in Impassioned van Gogh Paintings Journal of Mathematical Imaging and Vision, 30 (3), 275-283 DOI: 10.1007/s10851-007-0055-0
Ball, P. (2006). Van Gogh painted perfect turbulence news@nature DOI: 10.1038/news060703-17
Wu, Y., Tang, C., & Hannah, C. (2012). The circulation of eastern Canadian seas Progress in Oceanography, 106, 28-48 DOI: 10.1016/j.pocean.2012.06.005 | <urn:uuid:b6c2e1db-62c3-45fa-b7e0-9bd8a5dd987b> | 3.234375 | 927 | Personal Blog | Science & Tech. | 58.843243 | 2,053 |
Digit Patterns of the Powers of 5
Date: 09/14/98 at 21:04:53 From: Tim Peterson Subject: Multiplication digit patterns I was playing with powers of 5, and I saw a pattern in the digits of the results: 5^1: 5 5^2: 25 5^3: 125 5^4: 625 5^5: 3125 5^6: 15625 5^7: 78125 5^8: 390625 5^9: 1953125 5^10: 9765625 5^11: 48828125 5^12: 244140625 5^13: 1220703125 5^14: 6103515625 5^15: 30517578125 The 1's digit is always 5, and the 10's is always 2. The third digit alternates between 1 and 6; The fourth digit repeats 4 digits: 3, 5, 8, and 0; The fifth digit repeats 8 digits: 1, 7, 9, 5, 6, 2, 4, and 0; And the sixth (I think) repeats 16 digits. What I want to know is why does each digit repeat an increasing power of 2 number of digits?
Date: 09/16/98 at 12:57:53 From: Doctor Rick Subject: Re: Multiplication digit patterns Hi, Tim. Interesting observation! The short answer to your question, "Why?" is, because 10 = 5 * 2. Of course, that calls for some further explanation. I won't try to say everything that could be said on the subject, but I'll point you to some areas for further investigation. In case you want to look up more information related to your observations, they come under the branch of math called number theory, and the topic in number theory called "residues of powers." The first thing to notice is that, once a power of 5 has the same last 3 digits (for example) as a lower power of 5, the last 3 digits will repeat forever. Can you see that from the way you multiply? 625 15625 * 5 * 5 ---- ----- 3125 78125 You start from the right, and by the time you have written the last 3 digits of the product, you haven't yet looked beyond the last 3 digits of the multiplicand. This means that, if 5^n and 5^(n+p) have the same last 3 digits, then 5^(n+1) and 5^(n+p+1) will have the same last 3 digits also, and so on. The pattern repeats forever. Now, there are "only" 1000 different numbers that you could have in those last 3 digits, so sooner or later, as you keep computing powers, you're bound to get a number that you already got for a lower power. That means that if you raise any number to powers, you will find a repeating pattern - only the repeat period is usually much longer. The interesting question is, why does 5 give short repeat periods? One thing you can do to expand your investigation is to look at raising other numbers to powers. What are the repeat periods for the last 1, 2, and 3 places? They are longer than for 5, so this will take some work. The next thing to think about is how this relates to modular arithmetic. If two numbers (5^n and 5^(n+4), for instance) have the same last 3 digits, this is the same as saying: 5^n mod 1000 = 5^(n+4) mod 1000 (Remember "mod", or "%" in the C language, means the remainder after you divide.) In number theory, this is written as: 5^n is-congruent-to 5^(n+4) (mod 1000) where is-congruent-to is an equal sign but with 3 lines instead of 2. Another avenue for exploration is changing from 1000 to something else. Try mod 2, mod 3, etc. Here is something to think about that may help you explain the patterns you see. This relates to my comment about 10 = 5 * 2 being important. 5^n 5^n 5^(n-3) ---- = ---------- = ------- 1000 (2^3)(5^3) 2^3 Think about how the 3-digit pattern relates to mod 8, the 4-digit pattern to mod 16, etc. Can the same sort of thing be done if you use another number instead of 5? I have given you a lot more questions than answers. Have fun with it, and be sure to let me know what you find. I have some ideas, but there is a lot more that I don't know about this subject. - Doctor Rick, The Math Forum http://mathforum.org/dr.math/
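Dr. Rick's suggestion to experiment is easy to automate. The short script below (not part of the original exchange) finds, for each number k of trailing digits, the period with which the last k digits of 5^n repeat; the doubling that Tim noticed appears immediately.

```python
# For each k, find the period with which the last k digits of 5^n repeat.
def period_of_last_digits(k, base=5):
    mod = 10 ** k
    seen = {}
    value, n = base % mod, 1
    while value not in seen:
        seen[value] = n
        value = (value * base) % mod
        n += 1
    return n - seen[value]          # length of the repeating cycle

for k in range(1, 8):
    print(f"last {k} digit(s) repeat with period {period_of_last_digits(k)}")
# Prints periods 1, 1, 2, 4, 8, 16, 32 -- each extra digit doubles the period,
# matching the pattern observed from the 3rd digit onward.
```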
© 1994-2013 The Math Forum | <urn:uuid:2d4327d8-6b3e-4475-8e12-c3779d96f9f0> | 3.40625 | 1,012 | Comment Section | Science & Tech. | 89.555529 | 2,054 |
Hello all, i need to rearrange this formula to make E the subject (E=)
M= 0.67 log (0.37 E) +1.46
While i can rearrange most formula, the ones that contain log seem to confuse me.
Thanks in advance!
To get rid of a log(x) term to get an x, you take the exponential of it. So as an example, if I have y = log(x) and I want to get x, then I take the exponential of both sides, which gives exp(y) = exp(log(x)) = x, so exp(y) = x.
Sorry to bother you again, Chiro, but if you were to enter this on a graphical calculator to find E how would you do this? Is there an "exp" button that i am missing? Just to compare answers, what value for E do you get if M = 9.5?
There are usually two types of logarithms: base 10 and base e. In terms of exponential, there is a button (usually its called e^x or exp(x)) and that will take the exponential in base e. If it's base ten then you simply calculate 10 to the power of the number to get the exponential in base 10.
Using a common computer package called R, I get the answer:
E = 439877.81464595574653
If I use base 10 I get: E ≈ 2.7 × 10^12 (that is, 10^12 / 0.37).
The reason I include base 10 is that unfortunately, sometimes people use log to mean log to the base 10 rather than to the natural base and it's a real pain in the neck when people say log(x) is log base 10 instead of log base e.
One thing though is that if you see ln(x) this is always base e no matter what. | <urn:uuid:69f1cff9-e9f9-4d6c-a71d-36716e50753b> | 2.625 | 383 | Q&A Forum | Science & Tech. | 79.909777 | 2,055 |
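Putting that advice together for this particular formula: isolate the logarithm, exponentiate in the chosen base, then divide by 0.37. The following check of both readings of "log" is an illustration added here, not part of the original thread:

```python
# Rearranging M = 0.67*log(0.37*E) + 1.46 for E and checking M = 9.5.
import math

M = 9.5
x = (M - 1.46) / 0.67            # = log(0.37*E) = 12.0 here

E_natural = math.exp(x) / 0.37   # if "log" means natural log (base e)
E_base10  = 10 ** x / 0.37       # if "log" means log base 10

print(f"base e : E ≈ {E_natural:,.1f}")   # ≈ 439,877.8  (matches chiro's value)
print(f"base 10: E ≈ {E_base10:.3e}")     # ≈ 2.703e+12
```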
Writes only the starting XML element to an XML document or stream.
This member is overloaded. For complete information about this member, including syntax, usage, and examples, click a name in the overload list.
|WriteStartObject(XmlDictionaryWriter, Object)||Writes the start of the object's data as an opening XML element using the specified XmlDictionaryWriter.|
|WriteStartObject(XmlWriter, Object)||Writes the start of the object's data as an opening XML element using the specified XmlWriter.|
The WriteStartObject, WriteObjectContent, and WriteEndObject methods must be implemented. The three methods are used in succession to write the complete serialization using the pattern: write start, write content, and write end. If the implementation writes using XML elements, attributes can be inserted before writing the contents of the object. The three methods are also called by the virtual implementation of the WriteObject method.
Proposed fleet of probes to determine if life really exists on Mars
Washington, Apr 24 (ANI): Astrobiologists are calling for a mission to Mars with "a strong and comprehensive life detection component."
At the heart of their proposal is a small fleet of sensor packages that can punch into the Martian soil and run a range of tests for signs of ancient or existing life.
A Washington State University astrobiologist, who is leading a group of 20 scientists, calls the mission BOLD. It's both an acronym for Biological Oxidant and Life Detection and a nod to the proposal's chutzpah. The proposal comes as NASA is reevaluating its Mars exploration program.
"We really want to address the big questions on Mars and not fiddle around," says Dirk Schulze-Makuch, whose earlier proposals have included an economical one-way trip to the red planet.
"With the money for space exploration drying up, we finally have to get some exciting results that not only the experts and scientists in the field are interested in but that the public is interested too."
The BOLD mission would feature six 130-pound probes that could be dropped to various locations.
Shaped like inverted pyramids, they would parachute to the surface and thrust a soil sampler nearly a foot into the ground upon landing. On-board instrumentation would then conduct half a dozen experiments, transmitting data to an orbiter overhead.
The soil analyzer would moisten a sample and measure inorganic ions, pH and light characteristics that might get at the sample's concentration of hydrogen peroxide.
Schulze-Makuch has hypothesized that microbial organisms on Mars could be using a mixture of water and hydrogen peroxide as their internal fluid. The compound might also account for several of the findings of the Viking Mars landers in the late 1970s.
The probe's microscopic imager would look for shapes similar to known terrestrial microfossils.
Another instrument would look for single long molecules similar to the long nucleic acids created by life on earth.
Some experiments would repeat work done by the Viking landers but with a greater precision that could detect previously overlooked organic material.
Each probe would have about a 50-50 chance of landing successfully. But with the redundancy of six probes, the chance of one succeeding is better than 98 percent.
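The 98 percent figure is straightforward probability: if each of the six probes independently has a 50-50 chance, the mission fails only if all six fail, with probability 0.5⁶. (The independence assumption is mine, not the article's.)

```python
# Chance that at least one of six independent probes lands successfully,
# assuming a 50-50 chance for each probe.
p_single = 0.5
p_at_least_one = 1 - (1 - p_single) ** 6
print(f"{p_at_least_one:.1%}")   # 98.4%
```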
The study appears in the journal Planetary and Space Science. (ANI)
| <urn:uuid:041c4782-4479-41e3-b2ad-b35aa3791af9> | 3 | 710 | News Article | Science & Tech. | 40.255539 | 2,057 |
Depth Charge: Using Atomic Force Microscopy to Study Subsurface Structures
For Immediate Release: June 23, 2010
Contact: Michael Baum
Over the past couple of decades, atomic force microscopy (AFM) has emerged as a powerful tool for imaging surfaces at astonishing resolutions—fractions of a nanometer in some cases. But suppose you're more concerned with what lies below the surface? Researchers at the National Institute of Standards and Technology (NIST) have shown that under the right circumstances, surface science instruments such as the AFM can deliver valuable data about sub-surface conditions.
Their recently published* work with colleagues from the National Aeronautics and Space Administration (NASA), National Institute of Aerospace, University of Virginia and University of Missouri could be particularly useful in the design and manufacture of nanostructured composite materials. Engineers are studying advanced materials that mix carbon nanotubes in a polymer base for a wide variety of high-performance applications because of the unique properties, such as superior strength and electrical conductance, added by the nanotubes. The material chosen by the research team as their test case, for example, is being studied by NASA for use in spacecraft actuators because it may outperform the heavier ceramics now used.
But, says NIST materials scientist Minhua Zhao, "one of the critical issues to study is how the carbon nanotubes are distributed within the composite without actually breaking the part. There are very few techniques available for this kind of non-destructive study." Zhao and his colleagues decided to try an unusual application of atomic force microscopy.
The AFM is actually a family of instruments working on the same basic principle: a delicate needle-like point hovers just above the surface to be profiled and responds to weak, atomic-level forces. A typical AFM senses so-called "van der Waals forces," very short-range forces exerted by molecules or atoms. This restricts the instrument to the surface of samples.
Instead, the team used an AFM designed to use the stronger, longer-range electrostatic force (technically an EFM), measuring the interaction between the probe tip and a charged plate beneath the composite sample. What makes it work, says Zhao, is that the nanotubes are electrical conductors with high dielectric constant (a measure of how the material affects an electric field), but the polymer is a low dielectric constant material. Such huge dielectric constant differences between the nanotubes and the polymer are the key to the success of this technique, and with properly chosen voltages the nanotubes show up as finely detailed fibers dispersed below the composite's surface.
The goal, according to Zhao, is to control the process well enough to allow quantitative measurements. At present the group can discriminate different concentrations of carbon nanotubes in the polymer, determine conductive networks of the nanotubes, and map the electric potential distribution of the nanotubes below the surface. But the measurement is quite tricky: many factors, including probe shape and even humidity, affect the electrostatic force.
The team used a specially designed probe tip and a patented, NIST-designed AFM humidity chamber.** An interesting, not yet fully understood effect, says Zhao, is that increasing the voltage between the probe and the sample at some point causes the image contrast to invert, dark regions becoming light and vice versa. The team is studying the mechanism of such contrast inversion.
"We are still optimizing this EFM technique for subsurface imaging," says Zhao. "If the depth of nanostructures located from the film surface can be determined quantitatively, this technique will be a powerful tool for nondestructive subsurface imaging of high dielectric nanostructures in a low dielectric matrix, with a broad range of applications in nanotechnology."
* M.H. Zhao, X.H. Gu, S.E. Lowther, C. Park, Y.C. Jean and T. Nguyen. Subsurface characterization of carbon nanotubes in polymer composites via quantitative electric force microscopy. Nanotechnology 21 (2010) 225702 doi:10.1088/0957-4484/21/22/225702.
** J.W. Martin, E. Embree and M.R. VanLandingham. Humidity Chamber For Stylus Atomic Force With Cantilever. U.S. Patent No. 6,490,913 B1, Dec. 10, 2002. | <urn:uuid:775781f9-4a94-4738-abad-681ee88a68ae> | 3.421875 | 925 | News (Org.) | Science & Tech. | 33.052857 | 2,058 |
Copyright © University of Cambridge. All rights reserved.
'Avalanche!' printed from http://nrich.maths.org/
Mathematicians and scientists use experiments to model what happens in avalanches, so they can understand them better. In this investigation, students will model an avalanche, collect data and display their findings with graphs.
Equipment required (per small group of students):
- at least two (preferably more) of sand, gravel, couscous, small seeds (eg. coriander), dried peas, raisins, lentils, rice, or similar - you need enough of each to give a good heap on an A5 size piece of paper
- a funnel and a clamp-stand to hold it vertically over a sheet of squared paper (A4 would be OK, A3 would be better)
- a set of measuring spoons or weighing scales (the kind you cook with would be fine)
The basic experiment:
- Put the funnel in the clamp so the funnel points vertically downwards.
- Put the piece of squared paper on the table so the funnel is pointing down at the middle of the paper.
- Pour 1 tablespoon or 20g of the chosen avalanche substance through the funnel.
- Mark the area covered on the piece of paper, and record it as the 1st tablespoon or 20g of the substance.
- Measure the height of the heap as accurately as possible, ensuring the ruler is vertical.
- Measure the angle between the piece of paper and the slope of the heap as accurately as possible.
- Repeat steps 3-6 until an avalanche occurs, recording the number of tablespoons or 20g portions at each stage. Describe the avalanche (see below).
- Continue with the experiment until the avalanche substance runs out.
- Make a note of any problems in the experiment, or anything which may have made it less accurate than would have have been desirable.
Describing the avalanche:
- Record that an avalanche has occurred on the area, height and angle results.
- Describe the avalanche. Questions to help:
- was it just a small trickle or a large fall or somewhere in between?
- how far down the slope did the avalanche go?
- is there anything else you observed?
Graphing the data:
- Draw a bar graph to show the height of the heap at each stage, with the number of tablespoonfuls or 20g measured amounts of the substance on the horizontal axis and the height of the heap on the vertical axis.
- Repeat for the angle between the horizontal and the slope of the heap.
- Estimate the area covered by the heap at each stage, by counting squares. Then draw a bar graph of area (vertical axis) against the number of amounts of the substance (horizontal axis).
- On your graphs, label the points at which avalanches occurred.
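If the class keeps its results in a spreadsheet or text file, the three bar graphs can also be produced in software. The sketch below uses invented numbers purely to show the shape of the plots; substitute the measured heights, angles and areas, and the spoonfuls at which avalanches occurred.

```python
# Bar graphs of heap height, slope angle and area against number of spoonfuls.
# The numbers below are invented examples -- replace them with your own data.
import matplotlib.pyplot as plt

spoonfuls = [1, 2, 3, 4, 5, 6, 7, 8]
height_cm = [1.5, 2.3, 2.9, 3.1, 3.6, 3.8, 4.2, 4.3]
angle_deg = [30, 32, 33, 31, 33, 32, 34, 33]
area_cm2  = [12, 22, 30, 44, 52, 63, 70, 82]
avalanches = [4, 7]                      # spoonfuls at which avalanches occurred

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, data, label in zip(axes, (height_cm, angle_deg, area_cm2),
                           ("height (cm)", "slope angle (°)", "area (cm²)")):
    ax.bar(spoonfuls, data)
    ax.set_xlabel("spoonfuls added")
    ax.set_ylabel(label)
    for s in avalanches:                 # mark the avalanche events
        ax.axvline(s, linestyle="--")
plt.tight_layout()
plt.show()
```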
Questions to consider:
- Do the three graphs have much the same shape, or not? How are they similar, how are they different?
- Can you see any patterns in when avalanches occurred, or does it seem to be quite random?
Now look at the descriptions of the avalanches, and try to classify them as small, medium or large events.
- Are there any patterns in the graphs for the severity of the avalanches?
At this stage, it would be good if groups of students make reports on what they have observed about avalanches so far, using their graphs, their observations about when avalanches occur and about how severe they are.
There are several ways in which the basic experiment might be extended. Students should start by deciding what they want to investigate, then plan their experiments, making notes on what they will do. Planning should include quantitative information where appropriate.
Students could decide on their own question, or use these sets of questions as a guide:
- What difference does the size of particle make? What happens if the particle size is smaller/larger? How do the results compare with previous ones?
- What difference does it make if two or more substances are mixed? How does the mixture compare with what happens with the substances on their own? Does it make a difference if the particle sizes are similar or different? How does varying the proportions of each affect the results?
- What difference does it make if water is added to a substance? Water should be added in measured amounts, seeing how this affects the results.
Using your graphs to think about avalanches
Collect all your results and graphs together. What do they tell you about avalanches? Try answering these questions to help you:
- Which substances had the most frequent avalanches?
- Which substances had the least frequent avalanches?
- Which substances had the most severe avalanches?
- Which substances had the least severe avalanches?
- Is there a maximum area covered, or height, or angle, for heaps of particular substances?
- Does the frequency or severity of an avalanche relate to the area of the existing heap, or its height, or the angle it makes with the paper?
- What difference do you think particle size makes?
- What difference does the amount of water make?
- How does it affect things if you mix two or more substances?
- Do the proportions in which they are mixed matter?
Now apply what you've discovered to snow avalanches:
- What sort of snow is likely to have the most dangerous avalanches - snow with small or large particles?
- Is wet snow likely to be more dangerous than dry snow, or vice versa?
- Avalanches happen on snowy slopes. Are slopes with slight (say up to 20°), moderate (say 20° to 50°) or steep (more than 50°, say) gradients likely to be most dangerous?
- If snow melts then refreezes, making bigger icy particles, and then new, smaller snow crystals fall, what would we predict about the likelihood of an avalanche?
Which of these suggestions do you think would be sensible advice to people trying to prevent or at least control avalanches? Which wouldn't be sensible?
- Chop down all the trees in case they are damaged by avalanches.
- Trigger small, controlled avalanches very early in the morning to clear away accumulated snow.
- Put fences, posts, windbreaks or dams on slopes to divert the avalanches.
- Grow new trees on slopes to break up avalanches.
- Build houses on snowy slopes to divert the avalanches.
- Fire guns in the late afternoon to start an avalanche and clear away accumulated snow.
Can you suggest any ideas of your own? | <urn:uuid:0eac12b5-dc0f-4af5-8642-c70acc60efa9> | 4.4375 | 1,371 | Tutorial | Science & Tech. | 53.842226 | 2,059 |
CGI::Pretty - module to produce nicely formatted HTML code
CGI::Pretty is a module that derives from CGI. Its sole function is to allow users of CGI to output nicely formatted HTML code.
When using the CGI module, the following code: print table( TR( td( "foo" ) ) );
produces the following output: <TABLE><TR><TD>foo</TD></TR></TABLE>
If a user were to create a table consisting of many rows and many columns, the resultant HTML code would be quite difficult to read since it has no carriage returns or indentation.
CGI::Pretty fixes this problem. What it does is add a carriage return and indentation to the HTML code so that one can easily read it.
- print table( TR( td( "foo" ) ) );
now produces the following output:
    <TABLE>
       <TR>
          <TD>foo</TD>
       </TR>
    </TABLE>
CGI::Pretty is far slower than using CGI.pm directly; a benchmark showed that it could be about 10 times slower. Adding newlines and spaces may alter the rendered appearance of HTML, and the extra newlines and spaces make the file size larger, so the files take longer to download.
With all those considerations, it is recommended that CGI::Pretty be used primarily for debugging.
The following tags are not formatted: <a>, <pre>, <code>, <script>, <textarea>, and <td>.
If these tags were formatted, the user would see the extra indentation on the web browser, causing the page to look different than what would be expected. If you wish to add more tags to the list of tags that are not to be touched, push them onto the @AS_IS array:
- push @CGI::Pretty::AS_IS, qw( XMP );
If you wish to have your own personal style of indenting, you can change the $INDENT variable:
- $CGI::Pretty::INDENT = "\t\t";
would cause the indents to be two tabs.
Similarly, if you wish to have more space between lines, you may change the $LINEBREAK variable:
- $CGI::Pretty::LINEBREAK = "\n\n";
would create two carriage returns between lines.
If you decide you want to use the regular CGI indenting, you can easily do the following:
- $CGI::Pretty::INDENT = $CGI::Pretty::LINEBREAK = "";
Brian Paulsen <Brian@ThePaulsens.com>, with minor modifications by Lincoln Stein <email@example.com> for incorporation into the CGI.pm distribution.
Copyright 1999, Brian Paulsen. All rights reserved.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
Bug reports and comments to Brian@ThePaulsens.com. You can also write to firstname.lastname@example.org, but this code looks pretty hairy to me and I'm not sure I understand it! | <urn:uuid:4963064f-39fd-4aa7-aada-6bc68702a355> | 2.65625 | 638 | Documentation | Software Dev. | 59.948725 | 2,060 |
(Phys.org)—Robert F. Service has published a News & Analysis piece in the journal Science describing the progress being made in nanowire photovoltaics. One of those innovations is described in another paper published in the same journal by a team working on indium phosphide nanowire technology. In their paper, they describe how, by creating micrometer-sized wires, they have managed to build a non-silicon-based solar cell that is capable of converting almost 14 percent of incoming sunlight into electric current.
Researchers the world over are on a quest to create a cheaper alternative to silicon-based solar cells; some of them have focused on indium phosphide because it is more efficient at turning sunlight into electricity – unfortunately, it's not very good at absorbing sunlight. In this new research, the team turned to nanowire technology to help it do a better job.
The idea is to create a small forest of wires standing on end atop a platform, with each wire just 1.5 micrometers tall and with a diameter of 180 nanometers. The bottom part of each wire is doped to cause an excess positive charge, the top doped to give it an excess negative charge with the middle remaining neutral – all standing on a bed of silicon dioxide. The team caused such a setup to come about by dropping gold flakes on a silicon bed and adding silicon phosphate to cause wires to grow which were kept clean and straight via etching using hydrochloric acid. The result is a photovoltaic cell capable of converting 13.8 percent of incoming sunlight into electricity while absorbing 71 percent of the light above the band gap.
In addition to being nearly as efficient as traditional silicon based solar cells, this new type of cell can also be bent to allow for shaping into flexible panels that allow for more options when mounting. It also allows for a smaller overall surface area. The team suggests that solar cells made using this approach might be best used in concentrated systems using lenses, though it's not clear as yet whether they would stand up to the intense heat. There's also the problem of creating the cells on a scale large enough for them to be sold commercially at a reasonable cost.
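A quick sanity check of the headline numbers (editorial arithmetic, not from the paper): the abstract below reports that the nanowires cover only about 12 percent of the surface, which in a simple ray-optics picture would also cap the fraction of sunlight they could intercept, yet 71 percent of the above-band-gap light ends up as photocurrent.

    # Rough check of the quoted "six times the ray optics limit" figure.
    coverage = 0.12            # fraction of the surface covered by nanowires
    photocurrent_share = 0.71  # share of above-band-gap sunlight converted to photocurrent
    print(photocurrent_share / coverage)   # ~5.9, i.e. roughly a six-fold enhancement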
Explore further: A new world record for solar cell efficiency
More information: 1. InP Nanowire Array Solar Cells Achieving 13.8% Efficiency by Exceeding the Ray Optics Limit, Science, DOI: 10.1126/science.1230969
Photovoltaics based on nanowire arrays could reduce cost and materials consumption compared to planar devices, but have exhibited low efficiency of light absorption and carrier collection. We fabricated a variety of millimeter-sized arrays of p-i-n doped InP nanowires and found that the nanowire diameter and the length of the top n-segment were critical for cell performance. Efficiencies up to 13.8% (comparable to the record planar InP cell) were achieved using resonant light trapping in 180-nanometer-diameter nanowires that only covered 12% of the surface. The share of sunlight converted into photocurrent (71%) was six times the limit in a simple ray optics description. Furthermore, the highest open circuit voltage of 0.906 volt exceeds that of its planar counterpart, despite about 30 times higher surface-to-volume ratio of the nanowire cell.
2. Performance of Nanowire Solar Cells on the Rise, DOI: 10.1126/science.339.6117.263 . www.sciencemag.org/content/339/6117/263.summary | <urn:uuid:ba56d2e2-bc8a-469d-85ed-0d307efe5e21> | 3.703125 | 741 | News Article | Science & Tech. | 47.906026 | 2,061 |
(PhysOrg.com) -- While orbiting Saturn for the last six years, NASA's Cassini spacecraft has kept a close eye on the collisions and disturbances in the gas giant's rings. They provide the only nearby natural laboratory for scientists to see the processes that must have occurred in our early solar system, as planets and moons coalesced out of disks of debris.
New images from Cassini show icy particles in Saturn's F ring clumping into giant snowballs as the moon Prometheus makes multiple swings by the ring. The gravitational pull of the moon sloshes ring material around, creating wake channels that trigger the formation of objects as large as 20 kilometers (12 miles) in diameter.
"Scientists have never seen objects actually form before," said Carl Murray, a Cassini imaging team member based at Queen Mary, University of London. "We now have direct evidence of that process and the rowdy dance between the moons and bits of space debris."
Murray discussed the findings today (July 20, 2010) at the Committee on Space Research meeting in Bremen, Germany, and they are published online by the journal Astrophysical Journal Letters on July 14, 2010. A new animation based on imaging data shows how one of the moons interacts with the F ring and creates dense, sticky areas of ring material.
Saturn's thin, kinky F ring was discovered by NASA's Pioneer 11 spacecraft in 1979. Prometheus and Pandora, the small "shepherding" moons on either side of the F ring, were discovered a year later by NASA's Voyager 1. In the years since, the F ring has rarely looked the same twice, and scientists have been watching the impish behavior of the two shepherding moons for clues.
Prometheus, the larger and closer to Saturn of the two moons, appears to be the primary source of the disturbances. At its longest, the potato-shaped moon is 148 kilometers (92 miles) across. It cruises around Saturn at a speed slightly greater than the speed of the much smaller F ring particles, but in an orbit that is just offset. As a result of its faster motion, Prometheus laps the F ring particles and stirs up particles in the same segment about once every 68 days.
"Some of these objects will get ripped apart the next time Prometheus whips around," Murray said. "But some escape. Every time they survive an encounter, they can grow and become more and more stable."
Cassini scientists using the ultraviolet imaging spectrograph previously detected thickened blobs near the F ring by noting when starlight was partially blocked. These objects may be related to the clumps seen by Murray and colleagues.
The newly-found F ring objects appear dense enough to have what scientists call "self-gravity." That means they can attract more particles to themselves and snowball in size as ring particles bounce around in Prometheus's wake, Murray said. The objects could be about as dense as Prometheus, though only about one-fourteenth as dense as Earth.
What gives the F ring snowballs a particularly good chance of survival is their special location in the Saturn system. The F ring resides at a balancing point between the tidal force of Saturn trying to break objects apart and self-gravity pulling objects together. One current theory suggests that the F ring may be only a million years old, but gets replenished every few million years by moonlets drifting outward from the main rings. However, the giant snowballs that form and break up probably have lifetimes of only a few months.
The new findings could also help explain the origin of a mysterious object about 5 to 10 kilometers (3 to 6 miles) in diameter that Cassini scientists spotted in 2004 and have provisionally dubbed S/2004 S 6. This object occasionally bumps into the F ring and produces jets of debris.
This movie features a simulation showing the changes to a portion of Saturn's F ring as the shepherding moon Prometheus swings by it. Image Credit: NASA/JPL/SSI
"The new analysis fills in some blanks in our solar system's history, giving us clues about how it transformed from floating bits of dust to dense bodies," said Linda Spilker, Cassini project scientist at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "The F ring peels back some of the mystery and continues to surprise us."
The late Kevin Beurle was made the honorary first author on this paper because of his contributions in developing software and designing observation sequences for this research. He died in 2009.
Explore further: Communications satellite launched into space | <urn:uuid:1d765964-88a4-49b6-a853-a001116408f2> | 3.734375 | 961 | News Article | Science & Tech. | 49.166832 | 2,062 |
Our planet is shaped by the oceans, the dynamic geology and the changing climate. It teems with life and we, in particular, have a massive impact as we build homes, grow food, travel and feed our ever-hungry need for energy. Mathematics is vital in understanding all of these, which is why 2013 has been declared as the year for the Mathematics of Planet Earth.
Why are we so clever? In evolutionary terms this isn't obvious: evolution tends to favour cheap solutions and the human brain is expensive. It consumes about 20% of our body's energy budget yet it only makes up 2% of our body mass. So why did it make evolutionary sense for us humans to develop powerful brains? Game theory provides a possible answer.
This month 70 teenage girls from nineteen countries including Bulgaria, Saudi Arabia and Finland came to the University of Cambridge to participate in the inaugural European Girls' Mathematical Olympiad (EGMO).
The Plus office has opened in Barcelona! The weather is fine, the architecture is spectacular and everyone has been very friendly. And, importantly, the food is delicious! From the welcoming dinner with the conference organisers (and a delicious glass of port), to the focaccia de xocolata from the cafe round the corner to the pigs skin tapas we tried last night!
Today sees the launch of The Aperiodical, a new maths magazine/blog aimed at people interested in mathematics who want to read stuff. Aperiodical will post news stories related to maths, opinion pieces, videos, feature articles, as well as blog posts. It will also publish accounts of monthly MathsJams and host the Carnival of Mathematics, a monthly blogging carnival.
In our Science fiction, science fact project we asked you which question from the frontier of physics you'd most like to see answered on Plus. We have just closed the poll and with nearly 20% of your vote the winning question is Does infinity exist?.
If, like us, you like fractals, then you will love the work of Frank Milordi, aka FAVIO. Milordi is a former Director of Engineering and Technology who creates mind challenging computer images based on the mathematics of chaos and fractals. You may be familiar with his work already, as one of his beautiful fractal images adorns one of the latest Plus postcards. | <urn:uuid:b0d7b250-6462-48b1-b199-fdd912714695> | 2.609375 | 487 | News (Org.) | Science & Tech. | 45.575576 | 2,063 |
One day old, and already a mathematician
In the last few years, researchers have become accomplished at finding out what goes on in the minds of tiny children, even new-born babies. This is done either by watching their gaze (looking away indicates familiarity or boredom, staring intently indicates surprise or interest) or by giving them a dummy (the more they suck, the more interested they are). This means that we can tell what expectations babies have in different situations, and when those expectations are violated. What we have learnt is that, amazingly, we all come into this world ready-supplied with basic mathematical understanding.
"We are born with a core sense of cardinal number", says neuropsychologist Brian Butterworth, author of The mathematical brain, reviewed in this issue of Plus. "We understand that sets have a cardinality, that is, that collections have a number associated with them and it doesn't really matter what the members of that set are. Infants, even in the first week of life, notice when the number of things that they're looking at changes.
Was that a 2 or a 3?
Impressive as this ability is, newborn babies are even more mathematically accomplished. They have arithmetical expectations, says Butterworth. "If you show a baby that you're hiding one thing behind a screen, and then you show the baby that you're hiding another thing behind the screen, the baby will expect there to be two things behind the screen, and will be surprised if this expectation is violated." So even before babies can focus their eyes, they are surprised to see a sum with the wrong answer!
Numbers on the brain
These core abilities, which Butterworth calls the "number module", may be the foundation of everything we learn about mathematics later in our lives. He speculates on this in The mathematical brain - "but I have to stress that it is speculation, because what we need to know is whether babies use the same bit of brain as adults. Adults use the left parietal lobe for this ability to recognise small cardinalities. If babies use the same bit of brain, then the course of learning more advanced mathematics builds on this core. If it's a different bit of brain, it's back to the drawing board."
The notion that children have no mathematical abilities whatsoever until they are old enough to have elements of logical reasoning (four or five years old) is very influential, and was held by the famous educationalist Piaget. Clearly this view isn't correct, but according to Butterworth, some of the mathematical abilities Piaget studied may have deeper aspects that children don't achieve until they're four or five. However, he thinks that "these abilities, such as one-to-one correspondence, are built on a basis which is innately specified. Manipulating sets really does need the achievement of some kind of logical abilities that babies don't have. So maybe Piaget was right in a way, but if he was working today he would see that the child has more going for it when it gets to four or five than simply transitive reasoning, class inclusion, these very general logical ideas, it's also got an primitive idea of cardinality."
As a neuropsychologist, Butterworth has seen many patients with bizarre deficits caused by brain damage. In fact, some of the earliest clues to the existence of the number module came from such patients. "I came across patients who seemed to be perfectly alright in every other respect except their mathematical ability", he says. "Something happened to their brain, as the result of a stroke usually, and afterwards they seemed to be unable to do mathematics. This is a condition known as acalculia. A lot of people thought that mathematics was just language and we thought that if this was so then how could it be that Mrs G. speaks perfectly well, reasons okay, but can't count above 4? So we started to investigate in a bit more detail, and kept our eyes open for patients with other similar kinds of problems and that's really how we got started.
"Recently I've been seeing patients who have terribly disordered language but whose maths is still perfectly good, for example, one guy who has an incredibly striking dissociation. He is unable to understand the simplest words. If you ask him 'what is this?' he can't say 'watch', and if you ask him to point to a watch, he can't do that either. But he is still able to do long multiplication and long division, and to understand the principles behind these operations."
Adults thinking about mathematics tend to think about it as something logical, which of course it is, it has its own structure, but it doesn't develop according to that structure in our minds. You might think that you would have to have the concept of zero before developing thinking about sets and cardinalities, but what neuropsychology shows is that this isn't so. The number module isn't something we develop according to some logically consistent scheme, instead it's inbuilt - instinctive, in fact. "The child's acquisition of mathematical ideas actually seems to recapitulate the history of mathematics", says Butterworth. "But it doesn't recapitulate the logic of mathematics. For example, in the history of mathematics, the concept of zero is rather late. In the Frege-Russell construction of numbers it's rather early! So I would say that we can reinterpret the history of mathematics in the light of the child's development. We could say that some ideas are very easy, rather straightforward extensions of what the individual was born with, and some ideas are rather more complicated, because they're not so natural. Ideas like probability for example, are not very natural. We're very bad at probability, which of course is why insurance companies and banks are rich! You don't really get a mathematical theory of probability until the seventeenth century. That just reflects that ideas of probability are very difficult."
Using my hands teaches me maths
Interestingly, and suggestively, there is evidence that early mathematical development is related to certain physical skills. We all start to count on our fingers, and only later do most (but by no means all!) of us abandon our fingers in favour of mental calculation. Butterworth and his colleagues have just started a project looking at people with dyspraxia. "This means they have difficulty in controlling their bodily movements", he explains. "There are degrees of it, mostly dyspraxics are just a bit clumsy. They tend to have particularly poor finger dexterity, and we want to know, what's their maths like? We have anecdotal evidence that these people are worse at maths than the average, both as children and as adults. But we don't know why that is. It might have to do with their manual dexterity or lack of it, or it might have to do with something else. There might be a common cause for a whole range of different difficulties. We want to know if the kinds of difficulties they have are the sorts you would expect them to have if they had problems counting on their fingers when they were little."
One particularly interesting case, Butterworth says, concerns a woman with a very rare genetic disorder, who was born with neither hands nor feet. She reportedly says that, when doing mental arithmetic, she puts her "imaginary hands" on an imaginary table in front of her and uses them to do the calculation. So it seems that the connection between our hands and our number ability is deeper than we might think at first glance. It's interesting to speculate that hands might be a crucial part of what raises human mathematical ability so far above that of other animals, many of whom are also able to distinguish small cardinalities, but who never develop anything further based on that ability.
Putting in the hours
So far we've only talked about the most basic mathematics - arithmetic and an inbuilt notion of cardinal number. What about more advanced, or adult, mathematical ability? The evidence seems to explain how things can go very wrong - via brain damage or physical problems with dexterity - but what about when things go very right? How come some people are so good at mathematics, and so creative?
In Western culture, the most prevalent theory about talent is that it is innate. When someone is outstandingly good at something, we describe them as "gifted", and say they are "naturals". This idea is not so common in other societies, where hard work is seen as the primary reason why some people excel.
According to Butterworth, all the evidence supports the hard work theory. He goes so far as to say that the only "statistically significant" indicator of mathematical excellence is the number of hours put in. This seems to suggest that anyone could be a superb mathematician if they are willing to put in the hours - but the truth is slightly more nuanced. The crucial word here is "willing". Butterworth says that "anybody who is a good mathematician is slightly obsessed with maths - or more slightly obsessed - and they put a lot of hours into thinking about it. So they are unusual in that respect. But they may be no more unusual than anybody who is very good at what they do, because they have to have a certain obsessiveness or otherwise they're not going to be able to put in the hours to get to this level of expertise. This is true of musicians, it's probably true of waiters. Now, if you start putting in the hours when you are very young, how are we going to tell whether your adult state has got to do with what your brain was like before you started to put in the hours, or what it was like because you put in the hours?"
Which came first?
Butterworth is slightly impatient with this chicken and egg question - which comes first, zeal or hard work? He says that "if, for whatever reason, you start working hard at mathematics when all your classmates don't, then the teacher is going to favour you, so you're going to get external rewards, and you're going to get the internal rewards of being able to do something rather well that your mates aren't so good at, and so you'll start off a virtuous circle of external rewards, internal rewards, you work a bit harder, you get even farther ahead of your classmates, who aren't actually putting in the time. So it wouldn't be surprising that if random people who for some reason select to pursue maths on the whole get rewarded because they are going to be better than their peers."
There are particular cases which give great weight to what we might call the "zeal theory of excellence". Butterworth describes the recent case of Rüdiger Gamm, a German who started to teach himself to become a prodigious calculator in his twenties, because he wanted to win a prize on a TV game show. He won the prize, and became very famous in Germany as a calculator. "He can do wonderful things, because he spent four hours a day since he was twenty working on it, learning new tricks, learning the table of cubes and cube roots, and to the power of four and fourth roots and so on. He learned all the tricks he could find, and worked out tricks for himself."
All that maths has tired me out
About this article
Helen Joyce is editor of Plus.
For this article Helen interviewed Brian Butterworth, Professor of Cognitive Neuropsychology at University College, London and founding editor of the academic journal "Mathematical Cognition". He has taught at Cambridge and held visiting appointments at the universities of Melbourne, Padua and Trieste, MIT and the Max Planck Institute at Nijmegen. He is currently working with colleagues on the neuropsychology and the genetics of mathematical abilities.
You can find out more at his website www.mathematicalbrain.com. | <urn:uuid:67276069-5391-466a-a29c-d0aef002d5a8> | 3.390625 | 2,414 | Audio Transcript | Science & Tech. | 46.203252 | 2,064 |
2.3.3 A Penguin Foraging Simulation Game
Display materials for this Activity and tell students that they will be simulating the foraging behavior of penguins. Have them review the Adelie fact sheet, and discuss items which seem of most interest to your class, setting the foraging simulation in its real-world context. Explain that the washers, toothpicks, M&Ms, and marbles represent penguin food items. Then demonstrate the use of the clothespin to represent a penguin's bill! The object of the game is to capture as much "prey" (in the paper cup) as you can within a time limit. The goal is to accumulate 500 points, expending the least energy in the shortest period of time.

Sidebar: Foraging Facts
Many factors contribute to the chick-raising and foraging success of penguins in Antarctica, including:
Adapted with permission from the Los Marineros Curriculum Guide, a marine science curriculum available from the Santa Barbara Museum of Natural History at 805-682-4711, ext. 311.
Use a globe to show that all 17 species of penguins live south of the equator. One species, the Galapagos penguin, lives on the equator in the path of the cold Peru Current. Seven kinds of penguins visit Antarctica, but only two species, the Adelie and Emperor penguins, breed exclusively on the Antarctic continent.
How are the adult Adelie penguins able to survive while sitting on the nest? (Blubber or body fat is a primary food source.)
Penguins are the only birds that migrate by swimming. Students can research and map their migration routes, up the west coast of South America to Tetal Point in northern Chile, or up to the east coast of South America past Argentina as far north as Rio de Janeiro in Brazil. Estimate the distances they travel. Using satellite images located on-line, students can match the migratory routes of penguins with the location of currents. What assumptions can they make about migration routes by looking at infrared imagery? (penguins follow cold water currents)
Research North America's own "penguins," the flightless Great Auks. Learn how Great Auks were similar to penguins. Find out why they were slaughtered (for food, their feathers, and for stuffed specimens). These birds became extinct in 1844 when two museum collectors landed on a remote island off Iceland, strangled the last surviving pair for their collection and then smashed the last egg.
Information on Flightless Birds, Behavior, Breeding, Locomotion, Colonies, Adelies, Emperors, Gentoos, Chinstraps and Crested penguins
Sounds and sights from wildlife sound recordist and NSF Artist-in-Residence, Doug Quin, including penguins, leopard and Weddell seals, and the sounds
The Adelie Penguin Monitoring Program of the Australian Antarctic Division
| <urn:uuid:49aab467-ec31-4b5f-954c-fbb23a424cb2> | 4.1875 | 629 | Tutorial | Science & Tech. | 44.535538 | 2,065 |
Since 1949, the California Cooperative Oceanic Fisheries Investigations (CalCOFI), a unique partnership among NOAA Fisheries Service, Scripps Institution of Oceanography and the California Department of Fish and Game, has been monitoring the California Current. The information is used to better understand the marine environment off the coast of California, relevant to the management of its living resources in a changing climate. CalCOFI conducts quarterly cruises off southern & south-central California, collecting a suite of hydrographic and biological data on station and underway. This season's survey is conducted aboard NOAA Ship Bell M. Shimada and R/V Ocean Starr (formerly NOAA Ship David Starr Jordan, now operated by Ocean Services, Inc.).
Follow the progress of the first part of this season's survey on the CalCOFI Twitter feed:
CalCOFI Twitter site
The researchers post near-real-time maps of the relative number of fish eggs for coastal pelagic species (sardine, jack mackerel, anchovy), collected using the continuous underway fish egg sampler and superimposed on satellite sea surface temperature maps.
- SWFSC Ship Operations and CalCOFI
- SWFSC Fisheries Research Division | <urn:uuid:bac6082c-1668-48d8-9211-5100e9e75d67> | 2.921875 | 240 | News (Org.) | Science & Tech. | 7.91476 | 2,066 |
Mar 22, 2010
Guest Blog - Electricity used when pulling weight
We don't typically post items related to science fairs (I get a lot of them - they're always enjoyable but most aren't related to NXT but just use an NXT robot to facilitate something else), but I thought this was an interesting project worth sharing, especially because it's using an NXT to obtain the results - I've posted Keizo's results in the accompanying image.
BTW - Keizo is in 4th grade.
Thanks for sending this in Keizo! - Jim
How Much Electricity Is Used When A Robot Pulls A Certain Weight
By Keizo M.
Hypothesis: I think that the robot will use twice as much electricity when the weight of the car is doubled.
Why I Chose This Project: Last year I got a LEGO Mindstorms NXT 2.0 for my birthday. I really enjoyed it, but while using it, I noticed that the batteries run out quickly. I decided to measure how much electricity would be used when a robot pulls a weight.
Materials:
- LEGO Mindstorms® NXT 2.0
- Digital voltmeter
- 6 rechargeable batteries
- Weights
1) Make a robotic car out of Lego Mindstorms NXT 2.0
2) Program the car to go forward and backward for 10 minutes
3) Charge the batteries
4) Measure volts in batteries
5) Put batteries in the car
6) Put weight on the car
7) Run the program
8) After 10 minutes stop the car
9) Take out batteries, and collect data
10) Repeat steps 3 through 7 with different weights
Conclusion: When weight was increased, more electricity was used but it didn’t double. For example, when I doubled the weight from 1 pound to 2 pounds there was only a 0.066-volt difference.
Discussion: There wasn’t a huge difference when I increased the weight. Next time, I should make the robot go for 30 minutes instead of 10 minutes, or put a heavier weight on the car.
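One way to lay out and compare the readings (an editorial sketch, not part of Keizo's write-up; the voltage numbers below are made-up placeholders for real before/after voltmeter readings):

    # Hypothetical example readings: weight in pounds -> (volts before, volts after)
    readings = {
        1: (8.10, 7.95),
        2: (8.10, 7.88),
        4: (8.10, 7.80),
    }

    for weight, (before, after) in sorted(readings.items()):
        drop = before - after
        print(f"{weight} lb: battery voltage dropped by {drop:.3f} V")

    # The hypothesis would be supported only if doubling the weight roughly doubled the drop.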
Posted by: James Floyd Kelly (Jim) at 7:26 AM | <urn:uuid:5342e887-c066-476e-ab35-241727f892d9> | 3.25 | 441 | Personal Blog | Science & Tech. | 65.792164 | 2,067 |
It would seem that a beast covered in fur would be opposed to the planet staying warm but I have to tell you guys, our fur ain’t made of Gore-Tex and Thinsulate – so I’m pretty stoked that a report out of London indicates that, like the Medieval Warm Period, our current climate situation is likely to be preventing a mini-Ice Age:
Without human carbon dioxide emissions the next ice age would be imminent, according to a Nature Geoscience study led by a UCL scientist.
In the paper, scientists led by Professor Chronis Tzedakis (UCL Geography) have been able to ‘fingerprint’ the timing of past ice age activations, or ‘glacial inceptions’ by identifying the onset of abrupt temperature changes in Greenland and Antarctica.
By applying this ‘fingerprint’ method to a nearly identical interglacial period with similar levels of summer solar radiation to our own current period, some 780 thousand years ago, the researchers have been able to determine that glacial inception would indeed be expected to occur sometime within the next 1500 years, a blink of an eye in the context of the Earth’s lifespan. But due to high CO2 levels, and associated radiative forcing of global temperatures, it is expected to be delayed.
In a related announcement, the sun is hot.
Not Kells hot but pretty hot nonetheless. | <urn:uuid:19ab1842-449a-47dd-b9e7-40b539f8313a> | 3.203125 | 291 | Personal Blog | Science & Tech. | 26.087515 | 2,068 |
European astronomers have discovered a planet with about the mass of the Earth orbiting a star in the Alpha Centauri system — the nearest to Earth. It is also the lightest exoplanet ever discovered around a star like the Sun. The planet was detected using the HARPS instrument on the 3.6-metre telescope at ESO’s La Silla Observatory in Chile. The results will appear online in the journal Nature on 17 October 2012.
read more: http://www.eso.org/public/news/eso1241/
2012 DA14 is an approximately 40-meter-diameter asteroid that will make a close approach to Earth in early 2013. Contrary to some reports on the web, there is no danger of it hitting us during this encounter. This visualization shows the trajectory of the asteroid as computed by JPL’s HORIZONS system.
( #Astronomy )
Pretty picture: Enceladus, in lovely color
Feb. 6, 2012 | 13:38 PST | 21:38 UTC
Here’s an awesome picture to start off the week. The data came from Cassini’s flyby of Enceladus on January 31, 2011; it was part of Cassini’s January 2012 data release. Most of the visible globe is lit by yellowish light reflected first from Saturn; only a thin crescent receives sunlight. At bottom center, Enceladus’ south polar plumes erupt into space. They are back-lit by the Sun. As usual for awesome Cassini color photos posted here, this one was processed by Gordan Ugarkovic.
The Pillars of Creation no longer exist. In 2007, the astronomers announced that they were destroyed about 6,000 years ago by the shock wave from a supernova.Because of the limited speed of light, the shock wave’s approach to the pillars can currently be seen from Earth, but their actual destruction will not be visible for another millennium. | <urn:uuid:298a2fc1-af6f-4a36-9a41-284615b59f8b> | 3.703125 | 404 | Content Listing | Science & Tech. | 55.665917 | 2,069 |
It is generally well known that partial differential equations that model fluid motion can exhibit “shock waves”. In fact, the subject I will write about today is generally presented as the canonical example for such behaviour in a first course in partial differential equations (while also introducing the method of characteristics). The focus here, however, will not be so much on the formation of shocks, but on the profile of the shock boundary. This discussion tends to be omitted from introductory texts.
Solving Burgers’ equation
First we recall the inviscid Burgers’ equation, a fundamental partial differential equation in the study of fluids. The equation is written

$$ \partial_t u + u\,\partial_x u = 0 $$

Equation 1. Inviscid Burgers’ equation

where $u = u(t,x)$ is the “local fluid velocity” at time $t$ and at spatial coordinate $x$. The solution of the equation is closely related to its derivation: notice that we can re-write the equation as

$$ (\partial_t + u\,\partial_x)\, u = 0. $$

The question we consider is the initial value problem for the PDE: given some initial velocity configuration $u_0(x)$, we want to find a solution $u(t,x)$ to Burgers’ equation such that $u(0,x) = u_0(x)$.
The traditional way of obtaining a solution is via the method of characteristics. We first observe (1) the alternate form of the equation above means that if $\gamma(s) = (t(s), x(s))$ is a curve tangent to the vector field $\partial_t + u\,\partial_x$, we must have $u \circ \gamma$ be a constant valued function of the parameter $s$. (2) Plugging this back in implies that along such a curve $\gamma$, the vector field $\partial_t + u\,\partial_x$ is constant. (3) A curve whose tangent vector is constant is a straight line. So we have that a solution of the Burgers’ equation must verify

$$ u(t,\, x_0 + t\,u_0(x_0)) = u_0(x_0) \quad \text{for every } x_0. $$

And we call the family of curves given by $t \mapsto (t,\, x_0 + t\,u_0(x_0))$ the characteristic curves of the solution.
To extract more qualitative information about Burgers’ equation, let us take another spatial derivative of the equation, and call the function $w = \partial_x u$. Then we have

$$ \partial_t w + u\,\partial_x w + w^2 = 0. $$

So letting $\gamma$ be a characteristic curve, and writing $w_\gamma(t) = w(t, \gamma(t))$, we have that along the characteristic curve

$$ \frac{d}{dt} w_\gamma = -\,w_\gamma^2, \qquad \text{so} \qquad w_\gamma(t) = \frac{w_\gamma(0)}{1 + t\,w_\gamma(0)}. $$

So in particular, we see that if $w_\gamma(0) < 0$, $w_\gamma$ must blow up in time $-1/w_\gamma(0)$.
So what does this mean? We’ve seen that along characteristic lines, the value of $u$ stays constant. But we’ve also seen that along those lines, the value of its spatial derivative $w$ can blow up if the initial slope is negative. Perhaps the best thing to do is to illustrate it with two pictures. In the pictures the thick, red curve is the initial velocity distribution $u_0$, shown with the black line representing the $x$-axis: so when the curve is above the axis, initially the local fluid velocity is positive, and the fluid is moving to the right. The blue curves are the characteristic lines. In the first image to the right, we see that the initial velocity distribution is such that the velocity is increasing to the right. And so $\partial_x u_0$ is always positive. We see that in this situation the flow is divergent, the flow lines getting further and further apart, corresponding to the solution where $w$ gets smaller and smaller along a flow line. For the second image here on our left, the situation is different. The initial velocity distribution starts out increasing, then hits a maximum, dips down to a minimum, and finally increases again. In the regions where the velocity distribution is increasing, we see the same “spreading out” behaviour as before, with the flow lines getting further and further apart (especially in the upper left region). But for flowlines originating in the region where the velocity distribution is decreasing, those characteristic curves get bunched together as time goes on, eventually intersecting! This intersection is what is known as a shock. From the picture, it becomes clear what the blow-up of $w$ means: Suppose the initial velocity distribution is such that $u_0(x_1) > u_0(x_2)$ for two points $x_1 < x_2$. Since the flow line originating from $x_1$ is moving faster, it will eventually catch up to the flow line originating from $x_2$. When the two flow lines intersect, we have a problem: if we follow the flow line from $x_1$, the function $u$ must take the value $u_0(x_1)$ at the point; but if we follow the flow line from $x_2$, the function $u$ must take the value $u_0(x_2)$ at the point. So we cannot consistently assign a value to the function $u$ at the points of intersection for flow-lines in a way that satisfies Burgers’ equation.
Another way of thinking about this difficulty is in terms of particle dynamics. Imagine the line being a highway, and points on it being cars. The dynamics of the traffic flow described by Burgers’ equation is one in which each driver starts at one speed (which can be in reverse), and maintains that speed completely without regard for the cars in front of or behind it. If we start out with a distribution where the leading cars always drive faster than the trailing ones, then the cars will spread further apart as time goes on. But if we start out with a distribution where a car in front is driving slower than a car behind, the second car will eventually catch up and crash into the one in front. And this is the formation of the shock wave.
(Now technically, in this view, once the two cars crash, their flow-lines should end, and so cars that are in front of the collision and moving forward should not be affected by the collision at all. But if we imagine that instead of real cars, we are driving bumper cars, so after a collision, the car in front maintains speed at the velocity of the car that hit it, while the car in back drives at the velocity of the car it hit [so they swap speeds in an elastic collision], then we have something like the picture plotted above.)
Having established that shocks can form, we move on to the main discussion of this post: the geometry of the set of shock singularities. We will consider the purely local effects of the shocks; by which we mean that we will ignore the chain reactions as described in the parenthetical remark above. Therefore we will assume that at the formation of the shock, the flow-lines terminate and the particles they represent disappear. In other words, we will consider only shocks coming from nearest neighbor collisions. In this scenario, the time of existence of a characteristic line is precisely governed by the equation on $w$ we derived before: that is, given $\partial_x u_0(x_0) < 0$, the characteristic line emanating from $x_0$ will run into the shock precisely at the time $t^*(x_0) = -1/\partial_x u_0(x_0)$. (It will continue indefinitely in the future if the derivative is positive.)
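For readers who want to experiment, here is a minimal numerical sketch of the blow-up-time formula above (my addition, not from the original post; the sample profile, the grid resolution, and the use of NumPy are all arbitrary choices):

    import numpy as np

    # First shock time t* = -1 / min u0'(x), attained on the characteristic
    # launched from the point where u0' is most negative.
    def first_shock(u0, x):
        du0 = np.gradient(u0(x), x[1] - x[0])    # numerical approximation of u0'
        i = np.argmin(du0)
        if du0[i] >= 0:
            return None                           # u0 non-decreasing: no shock forms
        t_star = -1.0 / du0[i]
        x_star = x[i] + u0(x)[i] * t_star         # where that characteristic ends
        return t_star, x_star

    x = np.linspace(-5, 5, 2001)
    print(first_shock(lambda s: -np.arctan(s), x))   # roughly (1.0, 0.0), since u0'(0) = -1

The decreasing profile $-\arctan x$ is steepest at the origin, so the first shock appears at time $1$ at $x = 0$, matching the formula.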
The most well-known image of a shock formation is the image on the right, where we see the classic fan/wedge type shock. (Due to the simplicity in sketching thie diagram by hand, this is probably how most people are introduced to this type of diagrams, either on a homework set or in class.) What we see here is an illustration of the fact that
If for $x$ in some interval we have $\partial_x u_0(x) = -c$, and $c > 0$ is a constant (so that $u_0$ is linear with negative slope there), then the shock boundary is degenerate: it consists of a single focal point.

To see this analytically: observe that because the blow-up time depends on the first derivative of the initial velocity distribution, for such a set-up the blow-up time is constant, $t^* = 1/c$, for the various points. Then we see that the spatial coordinate of the blow-up will be $x^* = x_0 + u_0(x_0)\,t^* = x_0 + u_0(x_0)/c$. But since $u_0$ is linear in $x_0$, we have

$$ x^* = x_0 + \frac{u_0(\bar{x}) - c\,(x_0 - \bar{x})}{c} = \bar{x} + \frac{u_0(\bar{x})}{c} $$

is constant (here $\bar{x}$ is any fixed reference point in the interval). And therefore the shock boundary is degenerate.
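As a concrete illustration (a worked example added here, not in the original post), take the linear profile $u_0(x) = 1 - x$ on the interval $[0,1]$. Then $\partial_x u_0 \equiv -1$, so every characteristic blows up at $t^* = 1$, and

$$ x^* = x_0 + u_0(x_0)\,t^* = x_0 + (1 - x_0) = 1, $$

so all of the characteristics launched from $[0,1]$ arrive at the single focal point $(t,x) = (1,1)$.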
Next we consider the case where $\partial_x^2 u_0$ vanishes at some point $x_0$, but $\partial_x^3 u_0(x_0) \neq 0$ (and, as before, $\partial_x u_0(x_0) < 0$). The two pictures to the right of this paragraph illustrate the typical shock boundary behaviour. On the far right we have the slightly aphysical situation: notice that for a particle coming in from the left, before it hits its shock boundary, it first crosses the shock boundary formed by the particles coming in from the right. This is the situation where the third derivative is positive, and the cusp point which corresponds to the shock boundary for characteristics starting near $x_0$ opens to the future. The nearer picture is the situation where the third derivative is negative, with the cusp point opening downwards. Notice that since we are in a neighborhood of a point where the second derivative vanishes, the initial velocity distributions both look almost straight, and it is hard to distinguish from this image the sign of the third derivative. The picture on the far right is based on an arctan type initial distribution, whereas the nearer picture is based on a cubic type initial distribution. Let us again analyse the situation more deeply. Near the point $x_0$, we shall assume that $u_0(x) \approx u_0(x_0) + a\,(x - x_0) + \frac{b}{6}(x - x_0)^3$ for some constants $a < 0$ and $b \neq 0$. And we will assume, using Galilean transformations, that $x_0 = 0$ and $u_0(x_0) = 0$. Then letting $y$ denote the starting point of a characteristic, we have

$$ \partial_x u_0(y) \approx a + \frac{b}{2}\,y^2. $$

Thus as a function of $y$, the blow-up times of flow lines are given by

$$ t^*(y) = \frac{-1}{a + \frac{b}{2} y^2}. $$

Solving for their blow-up profile then gives (after quite a bit of algebraic manipulation)

$$ x^*(y) = y + u_0(y)\,t^*(y) = \frac{\frac{b}{3}\,y^3}{a + \frac{b}{2}\,y^2} = -\frac{b}{3}\,y^3\,t^*(y), $$

which can be easily seen to be a cusp: eliminating $y$ between $t^*$ and $x^*$ shows that $x^*$ behaves like a constant multiple of $(t - t_0)^{3/2}$ near the point $(t,x) = (t_0, 0)$, where $t_0 = -1/a$. And it is clear that the side the cusp opens is dependent on the sign of the third derivative, $b$.
The last bit of computation we will do is for the case $\partial_x^2 u_0(x_0) \neq 0$ (still with $\partial_x u_0(x_0) < 0$). In this case we can take

$$ u_0(y) \approx a\,y + \frac{b}{2}\,y^2, \qquad a = \partial_x u_0(x_0) < 0, \quad b = \partial_x^2 u_0(x_0) \neq 0, $$

as an approximation. Then the blowup times will be

$$ t^*(y) = \frac{-1}{a + b\,y}, $$

which leads to the blowup profile being

$$ x^*(y) = y + u_0(y)\,t^*(y) = \frac{\frac{b}{2}\,y^2}{a + b\,y} $$

[Thanks to Huy for the correction.]
and a direct computation will then lead to the conclusion that in this generic scenario, the shock boundary will be everywhere tangent to the flow-line that ends there. | <urn:uuid:4209c4ad-5e9c-46f7-a01a-fa522e4bc89a> | 3.46875 | 1,853 | Academic Writing | Science & Tech. | 44.357468 | 2,070 |
TAU Researcher Says Plants Can See, Smell, Feel, and Taste
Monday, July 30, 2012
Unlocking the secrets of plant genetics could lead to breakthroughs in cancer research and food security
Increasingly, scientists are uncovering surprising biological connections between humans and other forms of life. Now a Tel Aviv University researcher has revealed that plant and human biology is much closer than has ever been understood — and the study of these similarities could uncover the biological basis of diseases like cancer as well as other "animal" behaviors.
In his new book What a Plant Knows (Farrar, Straus and Giroux) and his articles in Scientific American,Prof. Daniel Chamovitz, Director of TAU's Manna Center for Plant Biosciences, says that the discovery of similarities between plants and humans is making an impact in the scientific community. Like humans, Prof. Chamovitz says, plants also have "senses" such as sight, smell, touch, and taste.
Ultimately, he adds, if we share so much of our genetic makeup with plants, we have to reconsider what characterizes us as human.
These findings could prompt scientists to rethink what they know about biology, says Prof. Chamovitz, pointing out that plants serve as an excellent model for experiments on a cellular level. This research is also crucial to food security, he adds, noting that knowledge about plant genetics and how plants sense and respond to their environment is central to ensuring a sufficient food supply for the growing population — one of the main goals of the Manna Center.
Seeing the light
Prof. Daniel Chamovitz
One of the most intriguing discoveries of recent years is that a group of plant genes used to regulate responses to light is also part of the human DNA. These affect responses like the circadian rhythm, the immune system, and cell division.
A plant geneticist, Prof. Chamovitz was researching the way plants react to light when he discovered a group of genes that were responsible for a plant "knowing" whether it was in the light or in the dark. He first believed that these genes were specific to plant life, but was surprised to later identify the same group of genes in humans and animals.
"The same group of proteins that plants use to decide if they are in the light or dark is also used by animals and humans," Prof. Chamovitz says. "For example, these proteins control two seemingly separate processes. First, they control the circadian rhythm, the biological clock that helps our bodies keep a 24 hour schedule. Second, they control the cell cycle — which means we can learn more about mutations in these genes that lead to cancer." In experiments with fruit flies who had a mutated version of one of these genes, Prof. Chamovitz and his fellow researchers observed that the flies not only developed a fly form of leukemia, but also that their circadian rhythm was disrupted, leading to a condition somewhat like permanent jet-lag.
Plants use light as a behavioral signal, letting them know when to open their leaves to gather necessary nutrients. This response to light can be viewed as a rudimentary form of sight, contends Prof. Chamovitz, noting that the plants "see" light signals, including color, direction, and intensity, then integrate this information and decide on a response. And plants do all this without the benefit of a nervous system.
And that's not the limit of plant "senses." Plants also demonstrate smell — a ripe fruit releases a "ripening pheromone" in the air, which is detected by unripe fruit and signals them to follow suit — as well as the ability to feel and taste. To some degree, plants also have different forms of "memory," allowing them to encode, store, and retrieve information.
Just like us
Beyond the genes that regulate responses to light, plants and humans share a bevy of other proteins and genes — for example, the genes that cause cystic fibrosis and breast cancer. Plants might not come down with these diseases, but the biological basis is the same, says Prof. Chamovitz. Because of this, plants are an excellent first stop when looking for a biological model, and could replace or at least enhance animal models for human disease in some types of research.
He is working alongside Prof. Yossi Shiloh, Israel Prize winner and incumbent of the David and Inez Myers Chair of Cancer Genetics at Tel Aviv University's Sackler Faculty of Medicine, to understand how the genes Chamovitz discovered function in protecting human cells from radiation.
For more biology and evolution news from Tel Aviv University, click here. | <urn:uuid:208b6dae-f468-4414-ad36-7127da6ab8e6> | 3.234375 | 947 | News (Org.) | Science & Tech. | 43.41751 | 2,071 |
Honeycomb coral (Favites abdita)
Honeycomb coral fact file
Honeycomb coral description
Favites abdita is part of the Faviidae family, a common group of reef-building, ‘stony’ corals, characterised by a hard, calcareous skeleton, called a ‘corallite’. Favites abdita forms what are known as ‘massive’ colonies, meaning that the coral grows in a characteristic mound or dome shape which has roughly similar dimensions (typically close to a metre) in all directions (3). Favites abdita is usually pale brown, darker coloured in more turbid (cloudy) environments, with brown or green oral discs (the soft tissue between the mouth and the surrounding tentacles of the anemone-like polyp), and thick, rounded corallite walls (4).
Honeycomb coral biology
A honeycomb coral colony is composed of numerous individual polyps, which can reproduce asexually by a process called ‘budding’ (where each polyp divides itself into two or more daughter polyps). Favites abdita is also a hermaphrodite and can reproduce sexually by producing small, pink eggs and white sperm packets, which are released into the water during a short spawning period, usually around mid-November (4) (5) (6).
Like other reef-building corals, Favites abdita has many microscopic, photosynthetic algae, called zooxanthellae, living within the polyp tissues. The coral and the algae have a mutually beneficial relationship; the coral provides protection for the algae, which in return provide energy and nutrients for the coral through photosynthesis. Both Favites abdita and its zooxanthellae are very sensitive to changes in water temperature and acidity, and any increase in the water temperature greater than one or two degrees above the normal average can stress the coral and cause ‘bleaching’, a phenomenon in which the coral expels its zooxanthellae and turns white (4) (7).
Honeycomb coral range
Honeycomb coral habitat
Favites abdita is found in most reef environments, although it is most common on subtidal and rocky reefs, on reef slopes and in lagoons. It usually inhabits depths between 1 and 15 metres, although it does occur down to 40 metres on rubble substrate that separates different reefs (1).
Honeycomb coral status
Honeycomb coral threats
The proportion of corals threatened with extinction has increased dramatically in recent decades, with current estimates suggesting that a third of all coral species have an ‘elevated risk’ of extinction (8). Detailed studies have found that around 20 percent of the world’s coral reefs have already been destroyed, while at least 24 percent of remaining reefs face a high risk of collapse (9).
Threats to Favites abdita include damage caused by fisheries, pollution from agriculture and industry, human developments, recreation and tourism. Favites abdita is also targeted for the aquarium trade. Corals are particularly affected by the changing global climate, with rising sea temperatures, ocean acidification and mass coral bleaching events all contributing to significant declines in corals. In addition, these varying conditions have greatly increased the susceptibility of corals to disease, a factor which has recently emerged as a major cause of reef deterioration (1) (8) (9).
Honeycomb coral conservation
Favites abdita is listed on Appendix II of the Convention on International Trade in Endangered Species (CITES), which means that all trade in the species should be carefully monitored. It is also known from several Marine Protected Areas. Further research into aspects of Favites abdita’s ecology, abundance, population trends, habitat status and taxonomy is required in order to find out more about how the species is likely to respond to the increasing number of threats throughout its range. The identification and establishment of new protected areas may prove crucial for the conservation of Favites abdita and many other corals, while further research into disease, pathogen and parasite management in corals is also needed (1).
Find out more
For further information on the conservation of coral reefs see:
This information is awaiting authentication by a species expert, and will be updated as soon as possible. If you are able to help please contact:
- Algae: Simple plants that lack roots, stems and leaves but contain the green pigment chlorophyll. Most occur in marine and freshwater habitats.
- Calcareous: Containing free calcium carbonate, chalky.
- Colony: A group of organisms living together. Individuals in the group are not physiologically connected and may not be related, such as a colony of birds. Another meaning refers to organisms, such as bryozoans, which are composed of numerous genetically identical modules (also referred to as zooids or ‘individuals’), which are produced by budding and remain physiologically connected.
- Hermaphrodite: Possessing both male and female sex organs.
- Photosynthesis: Metabolic process characteristic of plants in which carbon dioxide is broken down, using energy from sunlight absorbed by the green pigment chlorophyll. Organic compounds are made and oxygen is given off as a by-product.
- Polyp: Typically sedentary soft-bodied component of cnidaria, a group of simple aquatic animals including the sea anemones, corals and jellyfish. A polyp comprises a trunk that is fixed at the base, and a mouth that is placed at the opposite end of the trunk and is surrounded by tentacles.
- Spawning: The production or depositing of eggs in water.
IUCN Red List (September, 2010)
CITES (September, 2010)
Coral Hub (October, 2010)
- Veron, J.E.N. (2000) Corals of the World. Australian Institute of Marine Science, Townsville, Australia.
- Kojis, B.L. and Quinn, N.J. (1982) Reproductive ecology of two Faviid corals (Coelenterata: Scleractinia). Marine Ecology Progress Series, 8: 251-255.
- Richmond, R.H. and Hunter, C.L. (1990) Reproduction and recruitment of corals: comparisons among the Caribbean, the Tropical Pacific and the Red Sea. Marine Ecology Progress Series, 60: 185-203.
- Veron, J.E.N. (1993) Corals of Australia and the Indo-Pacific. University of Hawaii Press, Honolulu, Hawaii.
- Carpenter, K.E. et al. (2008) One-third of reef-building corals face elevated extinction risk from climate change and local impacts. Science, 321(5888): 560-563.
Miththapala, S. (2008) Coral Reefs. Coastal Ecosystem Series (Volume 1). Ecosystems and Livelihoods Group Asia, IUCN, Colombo, Sri Lanka. Available at:
New C-band scanning ARM precipitation radar on Manus Island
The ARM Climate Research Facility is a U.S. Department of Energy national user facility for the study of global climate change by the national and international research community. It consists of a network of highly instrumented ground stations; mobile and aerial facilities; and a data archive. ARM collaborates extensively with other laboratories, agencies, universities, and private firms in gathering and sharing data. Some of ARM’s collaborators include DOE laboratories, NOAA, NASA, the Climate Change Prediction program, and many U.S. and international universities.
Data gathered by ARM equipment and facilities are available to any user at no cost. Scientists can also request virtual or in-person access to one of ARM’s research sites. Proposals for conducting field campaigns and scientific research using the ARM Climate Research Facility are welcome from all members of the global scientific community. | <urn:uuid:772ab524-2ea0-48b4-9ffc-035de85d9d24> | 2.546875 | 184 | About (Org.) | Science & Tech. | 23.794091 | 2,073 |
What is a black hole?
It is not really a hole, but a region in space with extremely strong gravity, so strong that not even light can escape. It is called black, because it gives off no light. It is impossible to observe or detect a black hole directly. Nobody has ever done this. But most astronomers believe they exist and continue to look for signs of indirect evidence that they exist. They do this by looking for the influence of black holes on other objects in space.
What is a comet?
It is an icy object found in our solar system, sometimes called a ‘dirty snowball’. As the comet passes near the Sun, the outer ice turns into gas and leaves a trail – the comet’s tail. The comet’s tail always points away from the Sun, even when it is moving away from the Sun. At those times the comet is flying tail first.
When is Halley’s comet due?
Halley’s comet is visible every 76 years. It was last seen in 1986. It will next be visible in 2061. | <urn:uuid:54a21c68-ad48-4c50-bfe1-56869bc7acdd> | 3.796875 | 226 | Knowledge Article | Science & Tech. | 73.180036 | 2,074 |
From Stone Age tools to the Large Hadron Collider, there’s always the potential for catastrophe when it comes to science. And what better way to get your middle schoolers psyched about scientific leaps than challenging them to avoid disaster? Each chapter of the book describes a scientific breakthrough and the consequent catastrophes associated with it. Experiments are rated from Low (no risk of catastrophe) to High (involves the use of fire, hot liquids, or hazardous substances, making adult supervision required). Kids will love the hands-on experiments and researcher parents will revel in preventing potential calamity.
The Book of Potentially Catastrophic Science: 50 experiments for daring young scientists by Sean Connolly
$13.99 from ThinkGeek.com | <urn:uuid:84cb6556-2116-4f61-9b09-b3ee7a157825> | 3.53125 | 153 | Product Page | Science & Tech. | 39.47 | 2,075 |
Bialowieza primeval forest, Poland
The Bialowieza Forest (150 000 ha) is the last remaining fragment of primary deciduous and mixed forest in the European lowlands. It is the place where giant trees, not found anywhere else in Europe, exist. Many species which disappeared from other regions or are endangered are still present there, including large mammals (European bison, wolf and lynx) and rare species dependent on dead wood or large trees: woodpeckers, saproxylic coleopterans and fungi. Therefore, Bialowieza Forest is also a key refuge for relict, critically endangered species, because it is the only place where ecological and evolutionary processes, typical for the temperate forest biome, can still be observed.
Unfortunately, only 38% of the Polish part of Bialowieza Forest (the rest is located in Belarus) is currently protected as the national park and nature reserves. Despite being a Natura 2000 site, most of the forest has undergone commercial logging and artificial reforestation, even in the remaining natural stands. Forestry exploitation has been devastating pristine and unique habitats holding the most precious flora and fauna species. Key ecological processes are being disrupted and, unless the current unsustainable exploitation is stopped, the few remaining fragments of natural old-growth forest will disappear in a matter of years. This loss of biodiversity will be irreparable.
Scientists warn that the protected part of the forest is too small to guarantee suitable conditions for most populations of endangered species and to safeguard natural processes in the long term. For instance, in the last two decades, the population of the White-backed Woodpecker has decreased by about 28% for the forest as a whole, but by 36% in stands managed commercially. This tremendous loss of biodiversity is a violation of the national law and it stands in opposition to the European nature conservation Directives. For decades, scientists and public opinion have demanded to stop timber harvesting except for small quotas covering the needs of local people.
OTOP (BirdLife Partner in Poland) recently entered administrative procedures aiming to change the forest management plan for the Bialowieza Forest and the related Natura 2000 management plan. OTOP is also involved in a proposal – now considered in the parliament – to change existing regulations, to enable easier extensions of existing and establishment of new national parks in Poland.
Wesołowski T. (2005): Virtual conservation: how the European Union is turning blind eye to its vanishing primaeval forests. Conservation Biology 19: 1349-1358.
OTOP, (BirdLife partner in Poland)
Dr Przemyslaw Chylarecki, pch(at)miiz.waw.pl | <urn:uuid:1c3fb4f3-73b9-4cdb-8171-0e0b4f5823c5> | 3.5625 | 557 | Knowledge Article | Science & Tech. | 24.116401 | 2,076 |
If you have not yet read the radiation primer, you are invited to do so.

David Groves, PhD, has shown that the x-ray environment of space would quickly render any photographs unusable. [Bennett and Percy, Dark Moon]
Dr. Groves' study contains a number of serious errors.

Although Dr. Groves gives figures for the x-ray dosage to which he submitted his test films, he does not in any way show that this is the expected amount of x-ray energy that exists anywhere in cislunar space or on the lunar surface. This key omission makes Dr. Groves' study of questionable relevance to the Apollo photographs.
Dr. Groves used a Bronica ETRSi 120 roll film camera in his tests. He does not explain why he did not use a Hasselblad EL/500 or EL/700 camera, the type of camera supplied to NASA for use in the Apollo missions. It is still manufactured by Hasselblad, and suitable period examples can be obtained easily from second-hand dealers. Use of a dissimilar camera limits the extent to which Dr. Groves' results can be applied to Apollo photographs.
Further, Hasselblad claims they added additional protection to the film magazines. Dr. Groves does not document any similar changes he may have made to the film magazine of his test camera. Nor does he comment upon the possible effect of any of those modifications. Dr. Groves' inattention to the specifics of the Apollo camera design calls into question his ability to accurately simulate the effects of x-rays on the Apollo films.
Dr. Groves first took pictures of a standard color chart, then
bombarded that film with x-rays. Then he used standard procedures to
develop the film and observe the results. He found that the images
were significantly fogged in some cases, and completely obliterated in
more extreme cases.
He provided absolutely no shielding around the film during its exposure to the x-rays. It is unclear whether he left the film inside its magazine as the Apollo astronauts would have done. Since the Hasselblad magazines were modified to provide thicker material for the casing, and the film was kept in the magazines during the entire mission, it is not clear whether Dr. Groves' procedure constitutes an accurate simulation of the conditions the Apollo film actually experienced.
What is clear, however, is that Dr. Groves exposed the film to
x-rays thousands of times more intense than what occurs in space. He
used a linear accelerator to bombard the film with an 8 MeV (million
electron-volts) beam of x-rays. X-ray astronomers say the x-rays from
celestial sources radiate at energy levels of less than 5 keV
(thousand electron-volts). The measurement of x-ray energy is similar
to the rating of light bulbs by wattage. The difference between five
thousand electron volts -- ambient x-rays in space -- and eight
million electron-volts -- Dr. Groves' experiment -- is obviously
very large. This factor alone invalidates Dr. Groves' study as an
accurate depiction of the ambient x-ray conditions in space.
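To put numbers on that comparison, using only the figures already quoted above, the per-photon energy ratio between the test beam and the strongest celestial x-rays is

\[ \frac{8\ \mathrm{MeV}}{5\ \mathrm{keV}} = \frac{8 \times 10^{6}\ \mathrm{eV}}{5 \times 10^{3}\ \mathrm{eV}} = 1600 \]

that is, roughly 1,600 times more energetic per photon, consistent with the "more than a thousand times" characterization used here. (This compares photon energies only, not total absorbed dose.)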
Energy level is quite important. Not only do more energetic
x-rays fog film to a greater extent, they also penetrate various
substances to a greater extent. This makes the question of shielding
very acute. 3 keV x-rays, for example, will not even penetrate air
for more than a dozen centimeters.
Dr. Groves exposed his film to x-rays more than a thousand times more energetic than those that occur in space.
The experiment subjected the film to three levels of exposure, all
at the absurdly intense 8 MeV energy level. The levels are given in
the study as "25 rem", "50 rem" and "100 rem". Those who have read
the primer and studied the nomenclature
of radiation will immediately realize that this is the wrong unit.
"Rem" applies only to absorbed radiation in human tissue. It
is completely inapplicable to radiation absorbed by photographic
film. The appropriate unit of measure for this study would be either
"rads" or "Grays". It so happens that for x-rays 1 rad is equivalent
to 1 rem, but Dr. Groves' apparent misunderstanding of the concepts of
absorbed dose is very much out of place in a study purporting to give
an expert opinion on radiation exposure.
If we graciously correct Dr. Groves' error of nomenclature and assume he means exposures in rads, we are still faced with two further questions. First, how was absorbed dose computed? It is notoriously difficult to measure the amount of radiation actually absorbed by any given object.
Second, the 25-100 rads to which Dr. Groves exposed the films is quite excessive. It would take nearly six years in a spacecraft in cislunar space -- barring any serious solar events -- to absorb 25 rads of dosage from all sources combined, not just from x-rays.
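Taking that six-year figure at face value, the implied ambient dose rate works out to

\[ \frac{25\ \mathrm{rad}}{6\ \mathrm{yr}} \approx 4\ \mathrm{rad/yr} \approx 0.01\ \mathrm{rad/day} \]

so even the lowest exposure used in the experiment corresponds to years of accumulated dose rather than a mission lasting days. (This is a back-of-envelope consistency check on the figures given here, not an independent dose estimate.)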
Dr. Groves' study contains far too many egregious errors to be
considered predictive in any way of the behavior of Ektachrome film
under the conditions experienced during Apollo space flights. He has
employed levels of radiation far in excess of what can be defensibly
claimed for ambient x-ray radiation in cislunar space. | <urn:uuid:f59728a1-5317-40a2-ac76-b19d6db190ca> | 3.53125 | 1,146 | Knowledge Article | Science & Tech. | 50.820189 | 2,077 |
Scientists at the Norwegian Academy of Science and Letters have designed a new fuel cell inspired by the bronchial structure of the lungs. The new design requires less of the expensive platinum catalyst while also boosting efficiency.
The fuel cell features channels modeled after bronchial tubes that supply hydrogen and oxygen to their respective electrodes. This system spreads the gases more uniformly across the electrodes, which boosts the cell's efficiency and creates greater surface area so that less platinum is required.
Hydrogen fuel cells are still too cost prohibitive (among other things) for mass production and a lot of that has to do with the platinum catalyst. This design is pretty exciting because it would lower the cost of the fuel cell while also boosting output. That's a win-win.
written by Yael, October 02, 2010
Solar time

Solar time is based on the idea that, when the sun reaches its highest point in the sky, it is noon. Apparent solar time is based on the apparent solar day, which is the interval between two successive returns of the sun to the local meridian. Solar time can be measured by a sundial.
The length of a solar day varies throughout the year. This is because the Earth's orbit is an ellipse, not a circle, and the Earth moves faster when it is nearest the Sun and slower when it is farthest from the Sun (see Kepler's laws of planetary motion). Because of this, apparent solar days are shorter in March and September than they are in June or December. The amount of daylight also varies because of the 23.5º tilt of the Earth's axis (see Tropical year).
Mean solar time is based on a fictional mean sun which travels at a constant rate throughout the year. The length of a mean solar day is a constant 24 hours throughout the year although, as noted above, the amount of daylight varies.
The difference between apparent solar time and mean solar time, which is sometimes as great as 15 minutes, is called the equation of time. | <urn:uuid:d988863f-31ba-466a-a988-82750186cf62> | 4.34375 | 250 | Knowledge Article | Science & Tech. | 61.171295 | 2,079 |
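Written as a simple formula (sign conventions vary between sources; here a positive value means the sundial runs ahead of the clock):

\[ \mathrm{equation\ of\ time} = \text{apparent solar time} - \text{mean solar time} \]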
Study looks to nuclear energy as micro-scale fuel
A trio of UW-Madison engineers have a new scale in mind for nuclear energy: Rather than huge plants powering entire cities, they envision tiny batteries turning a single microscopic gear.
Extremely small amounts of radioactive material already perform functions in smoke detectors, photocopiers, pacemakers and other devices. But the researchers are studying whether even tinier amounts might provide a power source for the tiny experimental machines being built in laboratories worldwide.
Nuclear engineering professors James P. Blanchard and Douglass L. Henderson, along with electrical engineering professor Amit Lal, are leading the three-year, $450,000 project. The U.S. Department of Energy, which looks at potential new uses for nuclear energy, supports the research.
Known as micro-electromechanical structures, or MEMS devices, they tend to be smaller than the width of a human hair, about 60 to 70 microns, Blanchard said. Because of their small size, they can perform extremely precise functions in applications such as medical equipment, environmental management and automobiles.
The most common MEMS device is the tiny sensor that tells auto air bags when to deploy. But MEMS technology is limited by the lack of a reasonable power source that is lightweight and intense enough to run such a small device.
"There's nothing being studied like this on this scale," Blanchard said. "The key issue for us is the size of the radiation source."
Using extremely small amounts of radioactive material in products is not new, Blanchard said. For example, smoke detectors function with the aid of a radioactive substance that electrically charges the air. Some photocopiers use a small amount of radioactive material to eliminate the static charge in sheets of paper.
Exposure to radioactive materials is by nature hazardous and their use is closely regulated. But the amount of material required for these applications is so minuscule it would not pose safety risks or require regulation, he said. The traces of radioactive material are encapsulated so no one is exposed to the radiation.
The idea is to harness the natural decay of radioactive material and convert it into a power source, without use of a reaction such as fission or fusion, Blanchard said. By keeping the material securely contained within the device, he said, the majority would decay before it leaves the product.
"We would capture the energy from that decay, and convert it into a power source that could run a sensor or tiny moving part," Blanchard said.
The energy could either be in the form of heat or charged particles, both of which could be generated in sufficient amounts to be useful on a tiny scale. Blanchard said alpha and beta particles generate high voltages.
Lal, who builds MEMS devices, said micro-machines could see many more applications once the power issue is solved. The technology has been stalled by a near-absence of power sources that can efficiently fuel something so tiny. "You would be creating a set of applications that have never been possible," he said.
For example, MEMS batteries could be used as power sources for small hand-held computing devices or micro-laboratories. Sensors could be developed to recognize "signatures" of gases from chemical plants or oil pipelines, to give early warning of leaks. It could also be used in a network of self-powered sensors to guard against chemical warfare. The tiny sensors could also be mixed right into the grease of heavy machinery to detect when maintenance is required.
"The biggest impact could be in making everyday systems more reliable, safer and smarter by integrating these sensor systems," Lal said.
The project is part of Nuclear Engineering Education Research (NEER) at the Energy Department, which primarily funds university-based projects in fission and other applications of radiation. This is part of a small category of funding that encourages potential alternative uses for nuclear energy. | <urn:uuid:16cc7273-8757-4e7c-9a84-41a73b28d140> | 3.453125 | 801 | News Article | Science & Tech. | 34.926793 | 2,080 |
The Future is Sustainability
The U.S. Environmental Protection Agency has commissioned a new National Research Council sustainability study.
On the second day in December the United States Environmental Protection Agency (EPA) commemorates four decades of work on behalf of the environment. On the final day of November the agency made a special announcement at the Marian Koshland Science Museum of the National Academy of Science (NAS) on the future of the EPA. EPA Administrator Lisa P. Jackson, along with NAS President Ralph Cicerone, revealed the launch of a new National Research Council (NRC) study to provide an operational framework for the EPA to incorporate sustainability into all its programs. The study will help the agency build upon its expertise in protecting human health and the environment. The speech coincided with the Aspen Institute’s unveiling of a list of the ten ways EPA has strengthened America.
The NRC will produce a Green Book that will focus on energy, water, material and land issues. In summer 2011 the report will be completed and the EPA will review the recommendations. This effort signifies a shift in the environmental protection movement to confront challenges through a sustainability lens.
Dr. Paul Anastas, Assistant Administrator for EPA’s Office of Research and Development, stated, “there is widespread recognition across the scientific community that sustainability, holistic thinking, and a systems approach to environmental protection are the only way forward. The study launched yesterday is the critical step that so many sustainability scientists have been waiting for.”
Published: Wednesday, December 01, 2010 | <urn:uuid:d7ad3123-5423-48a5-8709-7bb2fef49e5f> | 2.515625 | 314 | News (Org.) | Science & Tech. | 25.930302 | 2,081 |
Professor Robert Diaz of the Virginia Institute of Marine Science, College of William and Mary, is a co-editor of "Valuing the Ocean" a major new study by an international team of scientists and economists that attempts to measure the ocean's monetary value and to tally the costs and savings associated with human decisions affecting ocean health.
The study estimates that if human impacts on the ocean continue unabated, declines in ocean health and services will cost the global economy $428 billion per year by 2050, and $1.979 trillion per year by 2100. Alternatively, steps to reduce these impacts could save more than a trillion dollars per year by 2100, reducing the cost of human impacts to $612 billion.
Diaz says the study report "describes the state of the science for six threats to the global ocean, what can happen if all these threats act together, and the economic consequences of taking or not taking action." He notes that the study is unique in stressing the interactions between and among multiple threats, which include acidification, low-oxygen "dead zones," overfishing, pollution, sea-level rise, and warming.
In addition to co-editing the 300-page study, Diaz is a lead author on the chapters that monetize the impacts of dead zones and the combined effects of multiple stressors. Research by Diaz and colleagues shows that over-fertilization of ocean waters has led to a sharp increase in the number, size, and duration of low-oxygen dead zones around the world over the last 50 years, which could lead to major impacts on fisheries. The study estimates an annual decrease in global fisheries value of $88 billion by 2050, and $343 billion by 2100, unless steps are taken to reduce nutrient inputs and global warming. Warmer water holds less oxygen, thereby intensifying dead zones.
A release from the Stockholm Environment Institute—the agency that coordinated the international study—states "The ocean is the victim of a massive market failure. The true worth of its ecosystems, services, and functions is persistently ignored by policy makers and largely excluded from wider economic and development strategies… This collaborative book presents an unequivocal argument in favor of placing the ocean at the centre of plans to build a sustainable future, while for the first time calculating the actual monetary value of the critical ocean services that we stand to lose."
The study's positive message is that local actions can make a global difference. "Thanks to close links between globally and locally acting stressors," says SEI, "coordinated small-scale interventions can aggregate upwards to have major significance."
Diaz and other report editors and authors will present the findings of their study during the "Planet Under Pressure" conference in London on Monday, March 26th. Convened by a number of international scientific bodies—the International Council for Science, DIVERSITAS, Earth System Science Partnership, International Geosphere-Biosphere Programme, International Human Dimensions Programme, and the World Climate Research Programme—the "PUP" conference is a key platform for the international science community to inform delegates to the United Nations' upcoming Conference on Sustainable Development.
The U.N. Conference—which will take place in Brazil on 20-22 June 2012—will mark the 20th anniversary of the 1992 U.N. Conference on Environment and Development in Rio de Janeiro, and the 10th anniversary of the 2002 World Summit on Sustainable Development in Johannesburg. During the June "URio+20 summit," heads of state and government from around the world will join together to "secure renewed political commitment for sustainable development, assess the progress to date and the remaining gaps in the implementation of the outcomes of the major summits on sustainable development, and address new and emerging challenges."
Astronomers Discover the Nearest Young Planet-Forming Star
University of Hawaiʻi Institute for Astronomy
Karen Rehbock, (808) 956-6829
A team of scientists including University of Hawaii astronomers have found the nearest example of a young planet-forming star.
Using the telescopes on Mauna Kea, Hawaii, they have discovered a spectacular circumstellar dust disk with indirect evidence for newly formed planets around the star AU Microscopium (AU Mic). At a distance of only 33 light years, this is the nearest star with a visible disk, enabling astronomers to directly observe the primordial material for making planets.
"We know that extrasolar planets are common, but understanding how they form is an outstanding question. Because AU Mic is so near to Earth, it provides us a special opportunity to examine planet formation in great detail," said team member Dr. Michael Liu, an astronomer at the University of Hawaii Institute for Astronomy.
The results will be published in this week's online Science Express
AU Mic is a dim red star, with only half the mass and one-tenth the energy output of the Sun. Previous studies had shown that AU Mic is about 12 million years old. In comparison, our Sun is about 4.6 billion years old.
"Unfortunately, we can't go back in time and observe our own Solar System. But by studying these very young stars, we can examine how planets are forming around them, and thus indirectly learn about the origin of our own Solar System," said Liu.
The team used the James Clerk Maxwell Telescope on Mauna Kea, Hawaii to study the sub-millimeter radiation from AU Mic. This radiation, which is invisible to the naked eye, comes from very cold dust grains orbiting the star in the form of a disk. Such disks are believed to be the nurseries for forming planets.
By analyzing the radiation, the team deduced that dust particles only existed at large distances from the star, and were missing inside a radius of about 17 AU. This would be slightly inside the orbit of Uranus in our own Solar System.
"The dust missing from the inner regions of AU Mic is the telltale sign of an orbiting planet. The planet sweeps away any dust in the inner regions, keeping the dust in the outer region at bay," said Liu.
The team's visible light images of AU Mic reveal a spectacular edge-on disk, extending out to at least 210 Astronomical Units (AU), or about 20 billion miles. In comparison, the known edge of our Solar System is about 50 AU, or four times smaller. AU Mic's disk is visible due to small dust particles which reflect the light of the central star.
The images were obtained with an instrument known as a coronagraph deployed on the University of Hawaii's 2.2-meter telescope. The coronagraph blocked out the bright glare of the central star, allowing detection of the faint edge-on disk.
"This fascinating system shows how the exceptional clarity, darkness, and transparency of the Mauna Kea skies allows Hawaii astronomers to make frontier discoveries," said Dr. Rolf Kudritzki, Director of the Institute for Astronomy.
"AU Mic is a common red dwarf star, which comprise 85% for all stars. By studying this nearby system, we might learn about how the majority of planetary systems can form," said team member Dr. Paul Kalas, an astronomer at the University of California, Berkeley and a Ph.D. recipient from the University of Hawaii.
Images of disks around nearby stars are very rare, and AU Mic is the closest dust disk found since the discovery 20 years ago of a disk around beta Pictoris, a star about 2.5 times the mass of the sun and 65 light years away. Though the two stars are far apart on the sky, they appear to have been formed at the same time and are traveling together through the galaxy.
AU Mic is close enough that future imaging with the Hubble Space Telescope or ground-based telescopes using adaptive optics can study the detailed structure of the disk and perhaps directly image the light from any planets.
"We're waiting for the next observing season to go back and study the physical properties of the disk. But we expect other teams to do the same thing - there will be lots of follow-up," said Kalas.
"Astronomers will be studying AU Mic for many years to come," said Liu.
This work was supported by the National Science Foundation and the NASA Astronomical Search for Origins program.
The Institute for Astronomy at the University of Hawaii conducts research into galaxies, cosmology, stars, planets, and the sun. Its faculty and staff are also involved in astronomy education, deep space missions, and in the development and management of the observatories on Haleakala and Mauna Kea. Refer to http://www.ifa.hawaii.edu/for more information about the Institute.
For more information, visit: http://www.ifa.hawaii.edu | <urn:uuid:c3072169-0dbd-4a91-bb53-e6cf34a8b288> | 3.34375 | 1,038 | News (Org.) | Science & Tech. | 50.45392 | 2,083 |
A teenager in Ukraine has unexpectedly helped scientists in Victoria, B.C., make a rare discovery.
Earlier this month, Kirill Dudko, 14, was watching a live stream of cameras on the ocean floor when he saw something unusual — a female elephant seal eating a hagfish.
Researcher Kim Juniper says an email from the teen in Donetsk, Ukraine, caused a flurry of excitement.
“Monday morning we had an email from him saying, ‘I saw something strange and weird. Some monster just ate a fish in front of me. What was it?’ And that sent all of us into a bit of a flurry to back this up."
Juniper says it's the first time a seal has ever been recorded eating a hagfish, a creature so slimy other predators spit them out.
Dudko doesn't speak much English, and he was up past his bedtime on a school night to explain the story with help from his mother Svetlana.
"I'm very proud of my son,” she told CBC News.
The researchers at Neptune Canada — a research project that links an 800-kilometre network of instruments on the ocean floor off Vancouver Island directly to the internet — are also proud of Dudko and are grateful for the boost he's given to citizen science.
“Really this is going to be a life-changing experience for this young man, who only in the last few months has developed this interest in marine biology and now he's off and running,” Juniper said.
Also on HuffPost:
Bonobos, relatives of the common chimpanzee, have won a reputation for promiscuity. Bonobos do not form long-term, sexual partnerships. Rather, they engage in sexual activity with single or multiple partners. They will participate in both hetero and homosexual encounters. In Bonobo society, sex is used for reproduction, but it's also a means of greeting and conflict resolution.
The antechinus, a mouse-like marsupial, is polygamous. Each antechinus female will mate with several males in a breeding season, with the result that a single antechinus litter has several fathers. The antechinus mating ritual is long and exhausting with copulation lasting up to twelve hours. In fact, following the breeding season, there is complete die-off of the physiologically exhausted males of the group.
Dolphins are known for their playful nature and happy dispositions. It's no wonder they're so cheerful; they mate several times a day. Although the reproductive act is short, dolphins also engage in a variety of sexual behaviors simply for pleasure. Dolphins have hetero and homosexual partners and will sometimes behave sexually towards other whale and dolphin sub-species, resulting in fertile hybrids like the Wolphin. Occasionally, dolphins behave sexually towards other animals, including human beings.
Queen Honey Bee
The queen bee in a honey bee hive is encouraged to be as promiscuous as possible. During a single mating flight, a queen bee can mate with up to forty drones. The more sexual partners a queen has, the more attractive she is to the worker bees that keep her hive running.
Don't let those innocent faces fool you, rabbits are notorious for their vigorous and indiscriminate breeding. Female rabbits often breed with several males during one mating season. A rabbit's gestation period is only 30 days, so they may breed several times in one year, copulating with different partners each time.
Northern Elephant Seal
Male elephant seals are extremely aggressive towards one another, fighting to become "beach masters." A beach master protects a harem of 30-100 female elephant seals and, in turn, mates with as many of the females as possible. A successful male can impregnate up to 50 females in single mating season and sire over 500 pups in a lifetime.
Eastern Garter Snake
Unlike most snakes, the female Eastern Garter Snake does not lay eggs, but rather gives birth to live young. Breeding is competitive. Sometimes, if several males find a female at the same time, the entire company forms a "breeding ball," the snake equivalent of an orgy. The snakes wrap around one another in an attempt to mate.
Male warthogs make use of a reproductive system known as "overlap promiscuity." Unlike lions or elephants seals, male warthogs don't defend or provide for a group of females in exchange for mating rights. Instead, male warthogs simply roam around to different territories, mating with a female from each population before moving on to new territories. Like an old-fashioned sailor, the male warthog has a lady in every port.
The Topi Antelope is a fascinating animal in that it displays the reverse sexual behavior of most mammals. Females are the aggressive, promiscuous pursuers, while males are stand-offish and choosy. Females are fertile for only one day each year. During the month-long mating period surrounding this day, they will copulate with as many as twelve partners, mating several times with each one. It is not uncommon for male Topis to collapse with exhaustion or fight off possible female partners. | <urn:uuid:7ca74a6e-fe74-4fdb-a562-96b1296621f4> | 2.84375 | 1,063 | News Article | Science & Tech. | 45.203302 | 2,084 |
A recent study conducted for The IUCN Red List of Threatened Species™ has determined that 20% of hagfish species are at an elevated risk of extinction*. Scientists warn that this figure could be much higher.
The results of this research, carried out in association with Conservation International (CI), indicate that the primary causes of hagfish declines are the direct and indirect effects of fisheries.
Hagfish represent an ancient and unique evolutionary lineage; as bottom feeders they play an important role by cleaning the ocean floor and recycling nutrients into the food web which maintains the overall health of the ecosystems they inhabit.
“By consuming the dead and decaying carcasses that have fallen to the ocean floor, hagfish clean the floor creating a rich environment for other species including commercial fish such as cod, haddock and flounder,” says Landon Knapp, research assistant for the IUCN Marine Biodiversity Unit at Old Dominion University and lead author of the study. “The presence of hagfish in areas of intense fishing is extremely important as large amounts of bycatch are discarded."
Particular areas of concern highlighted in the study include southern Australia, where the only hagfish species present is threatened, and the coast of southern Brazil. Also of concern are the species found in the East China Sea, the Pacific coast of Japan, and coastal Taiwan; in these areas, four of the 13 hagfish species occurring are threatened with extinction.
“In many geographic regions, only one or two hagfish species are present, and therefore the loss or decline of even a single species in these areas will have detrimental effects on ecosystems as a whole, as well as the fisheries that depend on them,” says Dr Michael Mincarone, Professor of Zoology at Universidade Federal do Rio de Janeiro, an author of the study.
Fisheries worldwide directly profit from the harvesting of hagfish, such as Myxine garmani (Vulnerable) and Eptatretus burgeri (Near Threatened) for leather and food. Hagfish are also an important part of the food chain, being prey for fishes, seabirds and even marine mammals, including seals. When fishing pressure was focused on hagfish in certain locations in the north-western Atlantic, the stock of other commercial species, such as flounder, plummeted.
Overexploitation and destructive fishing practices are major threats to several hagfish species, including Myxine paucidens and Paramyxine taiwanae, both listed as Endangered. No current conservation measures or legislation exist to protect hagfish populations.
“Additional data is required and controls for the regulation and management of hagfish fisheries and other threats to hagfish populations are urgently needed to ensure the survival of these important species,” says Dr Kent Carpenter, Professor at Old Dominion University, manager of IUCN’s Marine Biodiversity Unit and an author of the paper.
“Hagfish are a great example of one of those ‘not-so-cute’ species that play a vital role in ecosystem health,” says Cristiane Elfes, Programme Officer for the CI-IUCN Biodiversity Assessment Unit. “This study highlights the impact we have on hagfish and the importance of protecting them to maintain the stability of ocean ecosystems.”
*For those groups that have been comprehensively assessed on the IUCN Red List, the percentage of threatened species can be calculated, but the actual number of threatened species is often uncertain because it is not known whether Data Deficient (DD) species are actually threatened or not. Therefore, the percentage presented above provides the best estimate of extinction risk for this group (excluding Extinct species), based on the assumption that Data Deficient (DD) species are equally threatened as data sufficient species. In other words, this is a mid-point figure within a range from x% threatened species (if all DD species are not threatened) to y% threatened species (if all DD species are threatened). Available evidence indicates that this is a best estimate.
For example, for hagfishes, 20% of species (excluding DD species) are threatened, although the precise figure is uncertain and could lie between 12% (if all DD species are not threatened) and 51% (if all DD species are threatened).
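As a sanity check, the arithmetic behind those percentages follows directly from the category counts listed further below (76 species assessed, 30 Data Deficient, and 1 + 2 + 6 = 9 species in the three threatened categories):

\[ \frac{1+2+6}{76-30} = \frac{9}{46} \approx 20\%, \qquad \frac{9}{76} \approx 12\%, \qquad \frac{9+30}{76} = \frac{39}{76} \approx 51\% \]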
For more information, please contact:
Borjana Pervan, IUCN Media Relations, t +41 22 999 0115, m +41 79 857 4072, e firstname.lastname@example.org;
Kathryn Pintus, Communications, IUCN Species Programme, t +41 22 999 0154, e email@example.com;
Kevin Connor, Media Manager, Conservation International, t + 1 703 341 2405, e firstname.lastname@example.org
Spokespeople available for interviews:
Landon Knapp, Research Assistant, Old Dominion University, Lknap003@odu.edu
Heather Harwell, Post-Doctoral Research Associate, Global Marine Species Assessment, IUCN Global Species Programme, email@example.com
TOTAL HAGFISH SPECIES ASSESSED = 76
Extinct = 0
Extinct in the Wild = 0
Critically Endangered = 1
Endangered = 2
Vulnerable = 6
Near Threatened = 2
Data Deficient = 30
Least Concern = 35
The hagfish assessments
The hagfish assessments are a part of the Global Marine Species Assessment’s mission to complete more than 20,000 marine species assessments for inclusion on the IUCN Red List of Threatened Species. The Global Marine Species Assessment Unit (GMSA), or Marine Biodiversity Unit, is a joint initiative of IUCN and Conservation International. The GMSA is headquartered in the Department of Biology at Old Dominion University in Norfolk, Virginia, and is largely enabled by the generous support of the New Hampshire Charitable Foundation and Tom Haas.
- Full media release
- Full report
- List of Hagfish species on the IUCN Red List (click on the names in the list to open the Species Fact Sheets) | <urn:uuid:5d31ee9c-44f7-4ff7-98fc-471b819d4712> | 3.6875 | 1,312 | News (Org.) | Science & Tech. | 25.117619 | 2,085 |
Rainfall, and the lack thereof, has been a center of interest of late in Portland, with our 51-day rain-free streak coming to an end Monday. But there have been days when it's sprinkled or rained and yet we did not get the elusive 0.01" needed to end our dry streak.
How does the rain gauge determine if the rain was enough to measure or not? It depends on the weight of the accumulated water.
The Automated Surface Observation Stations (ASOS) used at Portland Int'l Airport and many of the larger NOAA/FAA sites all use rather new equipment called "AWPAG" -- Automated Weather Precipitation Accumulation Gauge.
They just in the past few years replaced Heated Tipping Buckets (HTB -- this is the government we're talking about and they never met an acronym they didn't like.) Portland was upgraded on May 1, 2006.
AWPAG is newer technology that collects rain, then uses a precise electrical sensor that can exactly determine the amount of rain based on the weight of the water it has collected. It's coated in an anti-freeze to prevent precipitation from freezing in cold weather and in general, has shown to be much more accurate in snow and freezing rain than the older HTBs.
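In principle, turning a weight into a rainfall depth is simple geometry. As a generic illustration (not the actual AWPAG calibration or firmware):

\[ d = \frac{m}{\rho A} \]

where m is the mass of collected water, ρ is the density of water (about 1 gram per cubic centimeter), A is the area of the collector opening, and d is the equivalent rainfall depth.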
How does it measure snow? The rain gauge is also heated so that snow melts into water and is then measured as rainfall equivalent. That's how the climate book might note we had 0.12" of "rain" on a date where it snowed all day. (Incidentally, a rough rule of thumb is 10 inches of snow = 1 inch of rain, but that ratio can vary depending on the water content of the snow.)
If rain is observed or sensed by the equipment, but not enough collects to measure 0.01", it counts as a "Trace" and does not count as measurable rain. That is how it can drizzle for hours and but not count as a rainy day, yet some days a few minutes of rain is enough water to get the elusive 0.01 inches. (There have been days when the morning mist or drizzle has been heavy enough to register 0.01". And yes, those situations count officially as a rainy day too.)
Most other rain gauges, including the vast majority of home weather stations, use the old tipping bucket method.
The way those work is there is a tiny little "see-saw" rain collector inside an electronic rain gauge. The top of the gauge is a funnel that has a certain radius at the top and bottom. The water then gets funneled down to a lever that is equally divided, with walls on three sides and the ends left open.
The lever sits on a fulcrum point that tips up and down like a see-saw. Enough rain has to collect on one side of the lever to provide enough weight to tip that side down -- that dumps the water out the end, and the mechanism is calibrated so that each tip registers 0.01 inch of rain. The rain then collects anew on the "high side" of the lever until there is enough water to tip the scale down the other way, registers another 0.01", then collects on the other side.
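To make the bookkeeping concrete, here's a minimal sketch in Python of how tip counts become a rainfall total and a measurable-versus-trace call. The 0.01-inch-per-tip calibration comes from the description above; the function names and structure are purely illustrative, not actual gauge or ASOS firmware.

TIP_DEPTH_IN = 0.01  # each tip of the see-saw lever is calibrated to 0.01 inch of rain

def rainfall_from_tips(tip_count):
    # Total rainfall is simply the number of tips times the per-tip calibration.
    return tip_count * TIP_DEPTH_IN

def classify_day(total_inches, rain_sensed):
    # Mirrors the logic described above: rain that is sensed but never
    # accumulates to 0.01 inch is logged as a "trace", not a rainy day.
    if total_inches >= TIP_DEPTH_IN:
        return "measurable rain"
    return "trace" if rain_sensed else "dry"

print(classify_day(rainfall_from_tips(3), True))   # a few quick tips: measurable rain
print(classify_day(rainfall_from_tips(0), True))   # hours of drizzle, no tips: trace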
Now, imagine thunderstorms and microbursts in the Midwest - that little rain level would be flipping back and forth on overtime! Indeed, it was found these rain gauges tended to under-report heavy rainfall events and also wasn't perfect at processing snow and freezing rain events, thus the NOAA switch to the new AWPAG technology.
If you're a glutton for punishment, here are the specs of the ASOS in all their glorious government-acronymed detail.
"Invasive Plants are non-indigenous species or strains that become established in natural communities and wild areas, replacing native vegetation" (Invasive Plant Association of Wisconsin Web Site).
Why Should We Care about Invasive Plants?
Students, researchers, and the public come to the Lakeshore Nature Preserve to learn, study, and enjoy nature. Invasive non-native plant species threaten natural areas and restoration efforts. They invade natural areas, killing existing native plants and creating a simplified ecosystem that will not support a diverse set of native animals. They also invade restorations, preventing the establishment of native plants. Many of these invasive plants increase erosion by killing native ground level plants that normally hold soil.
Many invasive plants have become established in the Lakeshore Nature Preserve including:
Garlic Mustard, a ground layer plant, kills native woodland wildflowers by shading them.
Buckthorn and Honeysuckle, shrubs that form brushy thickets, shade out understory plants, create a good home for Garlic Mustard, and increase erosion.
Burdock, a plant with burs, catches human clothing and sometimes traps and kills bats and birds.
Canada Thistle, a particularly persistent and aggressive plant, invades open areas.
Recognize and Manage Invasive Plants
Many organizations have web sites to help people learn about invasive plants including:
Wisconsin Department of Natural Resources (DNR) Invasive Species: Plants - www.dnr.wi.gov/invasives/plants.htm
Invasive Plant Association of Wisconsin -www.ipaw.org
DNR Invasive Species of the Future - www.dnr.wi.gov/invasives/futureplants/
How You Can Help Control Invasive Plants
Volunteer to help remove invasive plants in the Preserve (see hints for Garlic Mustard below). Give money to help control invasive plants. Remove invasive plants in your own yard. Clean your shoes before you enter the Preserve so that you will not introduce seeds from other areas. Educate yourself about the emerging invasive plants and help control them in your neighborhood (see the last web site above).
Advantages of Controlling Invasive Plants
Controlling invasive non-native plant species in the Lakeshore Nature Preserve will help preserve natural plant and animal diversity and make the area more useful for research and environmental education. Controlling these plants will support efforts to establish native plants on the shorelines, decreasing erosion and stormwater runoff and improving Lake Mendota water quality. Controlling invasive plants will preserve the beauty of the Lakeshore Nature Preserve for future generations.
Garlic Mustard Control
When Garlic Mustard is pulled, be sure to get the root – broken off plants resprout and bloom later.
Bag and landfill all of your Garlic Mustard because second year plants will bloom and produce seeds even if they are pulled.
Volunteers pull Garlic Mustard frequently in the Preserve from mid-April thru mid-June. Contact Cathie Bruner (firstname.lastname@example.org or 265-9275) to participate. | <urn:uuid:2484279f-b174-4ec4-966a-b93e84afb60a> | 3.734375 | 656 | Knowledge Article | Science & Tech. | 29.730937 | 2,087 |
Getting Started: More on Xcode
Volume Number: 19 (2003)
Issue Number: 10
Column Tag: Programming
More on Xcode
by Dave Mark
Let's take the debugger for a spin. As we've done for the past few months, we'll work with the Sketch sample project. As a reminder, the Sketch files live in /Developer/Examples/AppKit/Sketch/. Launch Xcode and open the project Sketch.pbxproj.
Working with the Debugger
When the project window appears, click on the Project Symbols smart group (left side of the window in the Groups & Files list). When your symbols list appears, click in the search field (right side of toolbar) and search for the string init. As you learned last month, this will winnow the list of project symbols down to those containing the string init.
Figure 1 shows this list with the init in the file SKTGraphic.m selected. This is the file we're going to edit and debug. Double-click this line so an editing window appears, listing the contents of SKTGraphic.m.
Figure 1. Search the Project Symbols for files containing init.
Setting a Breakpoint
Scroll about one quarter of the way down the source file and click on line 178. As a reminder, the navigation bar is the strip immediately above the main editing pane. The line number follows the file name, which is immediately to the right of the navigation arrows (see Figure 2). Once you locate line 178, set a breakpoint by clicking in the left column, to the left of the line in the editing window. Take a look at Figure 2. Note that the breakpoint arrow appears to the left of line 178. This means that the debugger, once started, will stop execution just before it executes the if statement on line 178.
Figure 2. Line 178 is selected. Note the line number in the navigation bar and the breakpoint set on the left.
To start the debugger, click the Debug icon in the SKTGraphic.m editing window's toolbar. Depending on things like the speed of your machine, how much of the program is already compiled, etc., this may take a bit. Be patient.
Once your code is built, the Sketch app is launched. Click on the rectangle tool and drag out a rectangle. As you release the mouse button, the debugger will hit that breakpoint and bring the Debug window to the front. Figure 3 shows the Debug window when you hit the breakpoint.
Figure 3. TheDebug window, showing the program paused at a breakpoint. Note the display of Locals in the upper-right pane.
Take a look at the upper-right pane in Figure 3. Notice the display of variables. You can manipulate the column widths by dragging on the splits between the Variable, Value, and Summary column headers. I maximized the width of the Value column in Figure 3. You can click on the disclosure triangles to reveal fields within structs. For example, origin has an x and a y component. The origin line lists the values of all the origin fields. Open the triangle and each field gets its own line.
If you double-click on the x value, the value of x turns into a text editing field allowing you to modify the value of x. Just as you'd expect.
But if you double-click on the origin value (the one that lists x and y values separated by a comma), the origin formatter will appear in a text editing field. In this case, the formatting string is:
x=%x%, y=%y%%origin%, %size%
Double-click the size value column and the size formatter will appear:
Let's change the formatter. Change it to read:
When you finish the edit (hit enter), the change you made will propagate up through both the display of size and any other displays that reference size. To see this for yourself, click the Arguments triangle, then the self triangle, then the _bounds triangle to reveal origin and size within _bounds. On my display (see Figure 4), the _bounds value field is:
x=50, y=63, w=107, h=169
When we changed one size formatter, the change propagated to all fields that displayed a value of type size. Data formatters are tied to type definitions, which are global.
Figure 4. In this shot, notice that self._bounds.size uses the modified size formatter.
If you are unfamiliar with formatters, grab your favorite C text and do a bit of digging. Just like a formatter in a printf(), Xcode formatters allow you to specify how the debugger displays your variables. There's even an API so you can write your own formatters. See the Xcode release notes for more info.
Figure 5 shows the debugger controls you'll find at the top of your Debug window. Chances are, if you've ever used a debugger before, you'll recognize most of them. Terminate terminates the process being debugged, Restart terminates the process and restarts it from the beginning. Pause pauses and Continue continues execution from the current stopping point. Step Over executes the next line of code without stepping into any functions, while Step Into steps to the next line of code, even if it means stepping into a function. Step Out completes the current function and stops immediately after the function's return in the calling function.
Figure 5. The debugger controls at the top of the Debug window.
The (in my opinion) coolest icon of the bunch, the Fix scotch tape dispenser is the equivalent of selecting Fix from the Debug menu. As I mentioned in last month's column, there appears to be a glitch with this icon in the Jaguar build of Xcode which I believe is fixed in the Panther build. Once Panther is officially released, I'll definitely be working exclusively with the Panther tools, so we'll get a sense of what's what in the current release of the tools.
The Breakpoints icon brings up the Breakpoints window (see Figure 6), which lists all your current breakpoints, organized by source file. You can turn breakpoints on and off using the checkbox associated with each breakpoint.
Figure 6. The Breakpoints window.
The Console Drawer icon opens a console drawer below the Debug window, allowing you to directly enter gdb commands. This is extremely cool, especially if you come from the Unix world and learned debugging using gdb, adb, or the equivalent.
You've got the gdb doc on your hard drive:
Once More Into the Breach
Let's take one more spin through the debugger. If Sketch is currently debugging, click the Terminate icon. Now restart Sketch by clicking the Debug icon. When Sketch starts running, do not do anything in Sketch, just use the dock to return to Xcode. Back in Xcode, select Debug - ... from the Window menu to bring the Debug window to the front. Now, click on the Pause button to stop Sketch at whatever line was executing when you clicked the Pause button.
Figure 7 shows the event trace pane which is a stack showing the call sequence with main at the bottom and the current function at the top (in this case, the trap mach_msg_trap). Basically, this is what your app looks like when it is in an idle state, waiting for an event to happen.
Figure 7. The event trace pane shows call sequence stack.
Next, click the Continue icon to get Sketch back up and running. Back in Sketch, click on the rectangle tool and drag out a rectangle. When you let go, you should pop back into the debugger and your familiar breakpoint.
Click Step Into a few times until this line is highlighted:
temp = _bounds;
Remember, since this line is highlighted it has not yet executed. Click Step Into one more time. As you can see in Figure 8, when you execute this line and assign a value to temp, the temp fields displayed in the Value column are displayed in red, showing that they have changed.
If you click Step Into one more time, you'll change bounds. To see this change, you'll need to open the self triangle. self._bounds will turn red and temp will return to black.
Figure 8. When we change the value in temp, the temp fields turn red.
For our final trick, click on the Console Drawer icon. Scroll to the bottom of the window till you see this prompt:
At the prompt, type the command continue and hit return. This is exactly as if you had clicked the Continue icon in the toolbar. You are now back in Sketch and you can drag out some more shapes, as you like.
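The next time you stop at a breakpoint, a few other standard gdb commands are worth trying in the console drawer (this list is mine, not part of the original walkthrough, and your output will differ):

print temp          -- show the current value of temp
backtrace           -- list the call stack, like the event trace pane
info breakpoints    -- list your breakpoints, like the Breakpoints window
next                -- step over the next line of code
continue            -- resume execution, same as the Continue icon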
Till Next Month...
Whelp, I'm out of space again. <sigh>. There's so much more I want to talk about. Next month, we're going to take the debugger through its paces and we'll also play around with a feature called code completion, one of my very favorite parts of Xcode. You know, there are way more things I want to write about than there is space in the magazine. Note to self - see about adding more pages to mag. Ah, well. I've got a cool column idea for next month. Not sure I can pull it off. We'll see. Look for you then...
Dave Mark is a long-time Mac developer and MacTech contributor. Author of more than a dozen books on various Mac-development topics, Dave is all about Xcode these days. Last month's column focused on the editor interface as well as fix-and-continue. This month's installment will take the debugger through a few of its paces, and explore Xcode's code completion feature. | <urn:uuid:ab25de27-c789-4098-9273-92ae491be8a2> | 2.546875 | 2,036 | Personal Blog | Software Dev. | 68.015764 | 2,088 |
This article is freely available.
Efficacy of Treatments against Garlic Mustard (Alliaria petiolata) and Effects on Forest Understory Plant Diversity
School of Forest Resources and Environmental Science, Michigan Technological University, 1400 Townsend Drive, Houghton, MI 49931, USA
* Author to whom correspondence should be addressed.
Received: 1 June 2012; in revised form: 17 July 2012 / Accepted: 19 July 2012 / Published: 3 August 2012
Abstract: Garlic mustard, an invasive exotic biennial herb, has been identified in the Upper Peninsula of Michigan, but is not yet widely distributed. We tested the effectiveness and impact of management tools for garlic mustard in northern hardwood forests. Six treatment types (no treatment control, hand-pull, herbicide, hand-pull/herbicide, scorch, and hand-pull/scorch) were applied within a northern hardwood forest invaded by garlic mustard. We sampled understory vegetation within plots to compare garlic mustard abundance (distinguishing first and second year plants) and native plant diversity before and after treatment. Results immediately following treatment indicated that garlic mustard seedling abundance was significantly reduced by herbicide, hand-pull/herbicide, scorch, and hand-pull/scorch treatments, and that adult abundance was reduced by all treatments. However, sampling of treatment sites one year later showed an increase in seedling abundance in herbicide and hand-pull/herbicide plots. Adult garlic mustard abundance after one year was lower than the control with the exception of the hand-pull plots where adult abundance did not differ. After one year, understory species richness and Shannon’s Diversity were lower in the herbicide and pull/herbicide treatments. Based on these results, we conclude that single-year treatment of garlic mustard with hand-pulling, herbicide, and/or scorching is ineffective in reducing garlic mustard abundance and may inadvertently increase the success of garlic mustard, while negatively impacting native understory species.
Keywords: invasive plants; northern hardwoods; Upper Michigan
Cite This Article
MDPI and ACS Style
Shartell, L.M.; Nagel, L.M.; Storer, A.J. Efficacy of Treatments against Garlic Mustard (Alliaria petiolata) and Effects on Forest Understory Plant Diversity. Forests 2012, 3, 605-613.
Shartell LM, Nagel LM, Storer AJ. Efficacy of Treatments against Garlic Mustard (Alliaria petiolata) and Effects on Forest Understory Plant Diversity. Forests. 2012; 3(3):605-613.
Shartell, Lindsey M.; Nagel, Linda M.; Storer, Andrew J. 2012. "Efficacy of Treatments against Garlic Mustard (Alliaria petiolata) and Effects on Forest Understory Plant Diversity." Forests 3, no. 3: 605-613. | <urn:uuid:1e3b3f4f-b12d-43bb-847a-5b64afe72c11> | 2.8125 | 651 | Truncated | Science & Tech. | 34.617322 | 2,089 |
Optical Projection Tomography (OPT)
Optical Projection Tomography (OPT) is an optical imaging tool that creates high resolution 3D images of samples that are over ten times larger than samples imaged with other optical microscopic techniques (1). OPT images also have excellent contrast between a fluorescent region of interest and the rest of the specimen.
In OPT, a digital camera is used to capture an image of a transparent specimen. A specialized lens ensures that the image is formed only by the rays of light that are approximately parallel. This image is a projection through the specimen. A series of these projections obtained at various positions about the sample is used to create a 3-dimensional image of the sample. OPT is limited to transparent specimens as it is necessary to minimize light scattering through the sample. Mouse embryos are ideal specimens for OPT as they are naturally transparent.
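Combining the projections is essentially the filtered back-projection used in computed tomography. As a rough illustration only (this is not the Mouse Imaging Centre's pipeline, and a synthetic phantom stands in for real projection data), one 2-D slice could be reconstructed with scikit-image like this:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Stand-in for one transverse slice of a transparent specimen.
slice_2d = rescale(shepp_logan_phantom(), 0.5)

# Simulate projections captured at many angles around the sample;
# in OPT each projection is a camera image through the specimen.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(slice_2d, theta=angles)

# Filtered back-projection combines the projections into a slice;
# stacking reconstructed slices gives the 3-D volume.
reconstruction = iradon(sinogram, theta=angles)
print(reconstruction.shape)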
One of the great strengths of OPT is the ability to acquire fluorescent images. This allows a researcher to tie fluorescent molecules to a particular type of molecule, such as a type of protein or gene. OPT can be used to map the distribution of the fluorescent molecules, which is also a map of the distribution of molecule under study.
At the Mouse Imaging Centre, we have begun research into altering the data processing methods in order to obtain higher resolution images of larger specimens. This will allow researchers to use OPT to study larger embryos and even complete organs.
Above Left: Three orthogonal planes out of an autofluorescence OPT reconstruction of an E10.5 wildtype mouse embryo. The resolution of this image is approximately 20 microns. Above Right: Surface rendering of an E9.5 wildtype mouse embryo. Red: cardiac, white transparent: embryo.
Above: Surface renderings of the cardiac region of E12.5 wildtype (left) and E12.5 Baf60c knockdown (right) mouse embryos. The outflow tract of the wildtype heart undergoes constriction and septation, whereas the outflow tract of the mutant has no constriction and little septation. Red: myocardium, yellow: interior vessels of great arteries. The myocardium is rendered transparently in the second set of images to easily see the interior of the great vessels. This work is described in "Baf60c is essential for function of BAF chromatin remodelling complexes in heart development" by H. Lickert et. al.,
Nature Volume 432, Nov 4 2004. | <urn:uuid:ebb70018-f9d5-4b58-a58e-63702d693038> | 3.0625 | 523 | Knowledge Article | Science & Tech. | 38.59051 | 2,090 |
Scientists led by University of Toronto’s Dr. Eugenia Kumacheva report they have discovered a way to predict the organization of nanoparticles. The approach is to treat them much the same as ensembles of molecules formed from standard chemical reactions. Observations found self-organization is an efficient strategy for producing nanostructures with complex, hierarchical architectures. "Our work paves the way for the prediction of the properties of nanoparticle ensembles,” Kumacheva said. The findings were published in July 9, 2010 issue of Science. | <urn:uuid:85055f5d-6743-46ee-88e6-9bc1b04f88b3> | 2.875 | 115 | News Article | Science & Tech. | 25.826748 | 2,091 |
MTH209A Fundamentals of Mathematics I
Lead Faculty: Dr. Igor Ya Subbotin
A study of the real number system and its subsystems, ancient and modern numeration systems, problem-solving and simple number theory. Includes teaching materials and discussion of today's professional organizations. This is a content course, not a methods course.
- Describe historically significant contributions to the development of mathematics.
- Differentiate between and use inductive and deductive reasoning.
- Develop a list of problem-solving strategies.
- Find sums, terms, understand notation for arithmetic, geometric, binary, Fibonacci and power sequences.
- Understand the concept of function and what is meant by variable, domain, range, rate of change for linear, quadratic, logarithmic, exponential and other functions.
- Use interpolation and extrapolation.
- Use scientific notation.
- Recognize the Platonic solids and the semiregular polyhedra, and perform elementary constructions and measurements.
- Use manipulatives to explain operations.
- Know the properties of the real number system. | <urn:uuid:145bab8e-b8c8-4236-94c2-419042ce2477> | 3.046875 | 235 | Product Page | Science & Tech. | 11.3325 | 2,092 |
The INSTR function takes as input the literal or column value you want to search, followed by the substring pattern to search for. In Listing 11, the INSTR function finds the "ton" pattern in only two column data values—both of them Newton—and returns 4 as their position. Because it did not find the search string in any other values, the output for those values is 0.

Two additional parameters—starting position and occurrence—are optional. The starting position specifies the character in the string from which to begin your search. The default behavior is for the search to begin at the first character—otherwise known as character position 1. The occurrence parameter lets you specify which occurrence of the substring you'd like to find. For example, the word Mississippi includes two occurrences of the "issi" substring. To search for the starting-position location of the second occurrence of this pattern, you must provide the INSTR function with an occurrence parameter of 2:

SQL> select INSTR('Mississippi', 'issi', 1, 2)
  2  from dual;

INSTR('MISSISSIPPI','ISSI', 1, 2)

1 row selected.

EXTRACTING STRINGS FROM STRINGS
Sometimes you need to extract a portion of a string for your desired output. The SUBSTR (for substring) character function can assist you with this task. Listing 12 shows a query that uses the SUBSTR function to extract the first three characters of every LAST_NAME value from the EMPLOYEE table. The SUBSTR function takes two required parameters and one optional input parameter. The first parameter is the literal or column value on which you want the SUBSTR function to operate. The second parameter is the position of the starting character for the substring, and the optional third parameter is the number of characters to be included in the substring. If the third parameter is not specified, the SUBSTR function will return the remainder of the string.

Listing 13 demonstrates the SUBSTR and INSTR functions working together to display the portion of every LAST_NAME value from the EMPLOYEE table that contains the "ton" substring. In this example, the output from the INSTR function provides the value for the input parameter that specifies the position for the SUBSTR function's starting character. In the LAST_NAME values in which the substring "ton" is not found, the entire LAST_NAME value is returned, for two reasons: SUBSTR treats a starting position of 0 the same as a starting position of 1 (that is, as the first position in the string), and because the query omits the optional length parameter, the full remainder of the string is returned.
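Listings 12 and 13 themselves are not reproduced in this excerpt. Purely as a sketch of the pattern being described (assuming the same EMPLOYEE table with a LAST_NAME column; the published listings may differ), such queries could look like:

select last_name, SUBSTR(last_name, 1, 3)
  from employee;

select last_name, SUBSTR(last_name, INSTR(last_name, 'ton'))
  from employee;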
WHEN SIZE MATTERS
Occasionally you need to determine a string's length—for example, to determine the maximum number of characters a form entry field should permit. Listing 14 shows a query that uses the LENGTH function to display the length of all FIRST_NAME values from the EMPLOYEE table. The online version of this article at bit.ly/JAQPk3 includes examples of LENGTH and other character functions in WHERE and ORDER BY clauses.

Code Listing 14: Query that demonstrates the LENGTH function

SQL> select first_name, LENGTH(first_name) length
  2  from employee
  3  order by length desc, first_name;

8 rows selected.

This article has shown you how character functions can be used in SELECT statements to manipulate the ways data is displayed. You've seen how to convert data values to uppercase, lowercase, and mixed cases and how to search for strings within strings. You've also seen how to pad and trim data and how to specify a string's total length. By no means does this article provide an exhaustive list of the Oracle character functions. Review the documentation for more information.

The next installment of SQL 101 will discuss number functions and other miscellaneous functions.

Melanie Caffrey is a senior development manager at Oracle. She is a coauthor of Expert PL/SQL Practices for Oracle Developers and DBAs (Apress, 2011) and Expert Oracle Practices: Oracle Database Administration from the Oak Table (Apress, 2010).

online-only article content
SQL 101, Parts 1–5
READ more about relational database design and concepts
Oracle Database Concepts 11g Release 2 (11.2)
Oracle Database SQL Language Reference 11g Release 2 (11.2)
Oracle SQL Developer User's Guide Release 3.1
DOWNLOAD the sample script for | <urn:uuid:8348f91f-5885-444b-96c5-03a146fbd09e> | 3.140625 | 1,047 | Tutorial | Software Dev. | 50.289344 | 2,093 |
And it's easier to teach one thing early on and expand that knowledge later. If you teach to much, they will not retain the information.
For example, on day one you show them comments.
"Comments begin with the hash (or octothorpe) character."
# I am a comment!
And leave the POD for a later day...
"Today we will learn about POD. POD is short for Plain Old Documentation and it is a fairly simple way of documenting your code. We'll begin by adding a heading to your existing project and then cut back to the code."
# may or may not be lots of code here

=head1 NAME

Beginner::Project - a project worthy of a beginner Perl programmer

=cut

# your code continues here...
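Once the documentation is in place, students can read it back with the standard perldoc tool. For example (assuming the module lives in lib/Beginner/Project.pm; the path is made up for illustration):

perldoc lib/Beginner/Project.pm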
| ] || ] || | <urn:uuid:8b33e483-1700-4d71-85fa-27c1525a4eab> | 2.796875 | 429 | Comment Section | Software Dev. | 79.674353 | 2,094 |
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 15 results on physics.org and 143 results in our database of sites (142 are Websites, 0 are Videos, and 1 is an Experiment).
Search results on physics.org
Search results from our links database
Lord Patrick Maynard Stuart Blackett's (1897 - 1974) work focused on cosmic rays, and he helped design a counter-controlled cloud chamber, a brilliant invention which managed to make cosmic rays ...
Martin Rees (1942 - ) has made important contributions to the theories of galaxy formation, galaxy clustering, and the origin of the cosmic background radiation.
Information on the high energy radiation which strikes the Earth from space.
This site gives a brief but detailed description of how a microwave oven cooks food.
A demonstration of how a sieve can stop microwaves, preventing butter from melting inside a microwave oven.
This is a fantastic site that covers more than just physics and tracks the history of our universe right from its beginnings. It has lots of information but also movies to watch and teacher resources.
A blog written by a group of physicists and astrophysicists on the stuff that interests them: science but also arts, politics, culture, technology, academia, and miscellaneous trivia
A wealth of info about the sun, the earth's magnetosphere, space weather, cosmic rays, solar wind etc
Some questions and answers relating to superheating in a microwave oven.
Info on microwaves. While there are some radar bands from 1,300 to 1,600 MHz, most microwave applications fall in the range 3,000 to 30,000 MHz (3-30 GHz).
Showing 21 - 30 of 143 | <urn:uuid:7991912c-26de-4d1a-a97f-35f6f395b6cc> | 2.625 | 388 | Content Listing | Science & Tech. | 55.180645 | 2,095 |
X-rays from peeling tape
This project is an exploration of X-ray production from pulling scotch tape. You may not believe this, but hear me out. If you peel tape off a reel in vacuum at a rate of a few cm per second, 10-40 keV X-rays are produced at megahertz count rates. X-ray imaging is easy.
For inspiration see http://www.nature.com/nature/videoarchive/x-rays/ and "Correlation between nanosecond X-ray flashes and stick–slip friction in peeling tape," Carlos G. Camara, Juan V. Escobar, Jonathan R. Hird & Seth J. Putterman, Nature 455, 1089-1092 (23 October 2008) | doi:10.1038/nature07378; Received 30 December 2007; Accepted 27 August 2008
This project requires vacuum and attention to X-ray radiation hazards.
Add your name here to participate:
- Duncan Carlsmith | <urn:uuid:83ea2476-9bec-4995-96eb-8e5118d70589> | 2.78125 | 213 | Product Page | Science & Tech. | 76.176953 | 2,096 |
Before we begin, you should understand the basic PostgreSQL system architecture. Understanding how the parts of PostgreSQL interact will make the next chapter somewhat clearer. In database jargon, PostgreSQL uses a simple "process per-user" client/server model. A PostgreSQL session consists of the following cooperating Unix processes (programs):
A supervisory daemon process (the postmaster),
the user's frontend application (e.g., the psql program), and
one or more backend database servers (the postgres process itself).
A single postmaster manages a given collection of databases on a single host. Such a collection of databases is called a cluster (of databases). A frontend application that wishes to access a given database within a cluster makes calls to an interface library (e.g., libpq) that is linked into the application. The library sends user requests over the network to the postmaster (Figure 10-1(a)), which in turn starts a new backend server process (Figure 10-1(b)) and connects the frontend process to the new server (Figure 10-1(c)). From that point on, the frontend process and the backend server communicate without intervention by the postmaster. Hence, the postmaster is always running, waiting for connection requests, whereas frontend and backend processes come and go. The libpq library allows a single frontend to make multiple connections to backend processes. However, each backend process is a single-threaded process that can only execute one query at a time; so the communication over any one frontend-to-backend connection is single-threaded.
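To make the roles concrete, here is a minimal libpq client sketch (my own illustration, not from this documentation; it assumes the libpq headers are installed and that a database named mydb exists):

/* Compile with something like: cc client.c -lpq -I$(pg_config --includedir) */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int main(void)
{
    /* The connection request goes to the postmaster, which starts a
       dedicated backend (postgres) process for this session. */
    PGconn *conn = PQconnectdb("dbname=mydb host=localhost");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return EXIT_FAILURE;
    }

    /* From here on, the frontend talks directly to its backend process. */
    PGresult *res = PQexec(conn, "SELECT version()");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("%s\n", PQgetvalue(res, 0, 0));
    PQclear(res);
    PQfinish(conn);
    return EXIT_SUCCESS;
}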
One implication of this architecture is that the postmaster and the backend always run on the same machine (the database server), while the frontend application may run anywhere. You should keep this in mind, because the files that can be accessed on a client machine may not be accessible (or may only be accessed using a different path name) on the database server machine.
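For example, with the interactive psql frontend running on a different machine than the server (the table and file names here are made up), a client-side \copy reads the file on the client machine, while a server-side COPY resolves the path on the database server:

psql -h db.example.com mydb
mydb=> \copy items from 'data.csv'
mydb=> COPY items FROM '/srv/data.csv';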
You should also be aware that the postmaster and postgres servers run with the user ID of the PostgreSQL "superuser". Note that the PostgreSQL superuser does not have to be any particular user (e.g., a user named postgres), although many systems are installed that way. Furthermore, the PostgreSQL superuser should definitely not be the Unix superuser, root! It is safest if the PostgreSQL superuser is an ordinary, unprivileged user so far as the surrounding Unix system is concerned. In any case, all files relating to a database should belong to this Postgres superuser. | <urn:uuid:06489ac7-42d6-446a-ae22-14007cd3ac4a> | 3.09375 | 521 | Documentation | Software Dev. | 38.693788 | 2,097 |
Understanding the Southern Ocean that rings Antarctica is critical to understanding the climate of the whole world, according to Jorge Sarmiento, Princeton's George J. Magee Professor of Geoscience and Geological Engineering and associated faculty member of the Princeton Environmental Institute. Sarmiento wants to deploy more research instruments into the Southern Ocean to get better data on what's happening in the waters near the bottom of the globe.
Photo by Denise Applewhite
Video feature: Study of Southern Ocean critical to understanding of climate change
Posted January 7, 2013; 12:00 p.m.
Whether it's the economics of clean energy, the politics of Washington or claims over the severity of the problem itself, the debate over climate change is loud and crowded. One aspect that often goes overlooked is the Southern Ocean ringing Antarctica at the bottom of the globe. But that, says Jorge Sarmiento, is about to change.
Play the "Unlocking the Mysteries of the Southern Ocean" video.
"In terms of bio-production in the ocean and the impact of climate change, it turns out the Southern Ocean is super critical," said Sarmiento, Princeton's George J. Magee Professor of Geoscience and Geological Engineering and an associated faculty member of the Princeton Environmental Institute (PEI). "It may hold many keys to more fully understanding the science behind all of this. I'd say it's a revolutionary time for this type of study."
Since joining Princeton's faculty in 1980, Sarmiento has studied the vital role Earth's oceans play in the complex biochemical process through which carbon is exchanged among water, soil and atmosphere.
"When I first got interested in the carbon cycle around 1984, I figured I'd spend a few years on it and move on to something else, because that's the way my career had been up to that point," he said. "But the more research I did, the more I found the Earth's carbon system to be a font of eternal stimulation. Every time I thought I had solved one problem, something new would pop up. My curiosity really couldn't be satisfied."
Sarmiento, who is also director of the University's Program in Atmospheric and Oceanic Sciences, has studied how the ocean absorbs and recycles carbon dioxide produced by deforestation and fossil fuels. His work has had implications for a range of concerns, from the future sustainability of fisheries to improving predictive models that allow us to anticipate how nutrient cycles and biological life respond to climate change.
He has been involved with the Carbon Mitigation Initiative (CMI), a collaboration begun in 2000 between PEI and BP that brings together scientists, engineers and policy experts to design carbon mitigation strategies that are safe, effective and affordable.
"I think there's a real deep appreciation for the value of the scientific backbone behind understanding these things and the way that understanding can lead to finding and advocating for solutions," Sarmiento said. "That's a real strength of CMI."
Last spring, Sarmiento and his research team of graduate and postdoctoral students played a vital role in authoring CMI's 11th annual report and participated in the collective's annual two-day meeting. CMI members presented research advances made in climate science, low-carbon energy technology, carbon capture and storage, and climate policy.
The group focused in part on the Southern Ocean, and its role in the Earth's carbon cycle cannot be overstated, according to Sarmiento.
For one thing, Southern Ocean winds and buoyancy fluxes — which determine the density of ocean surfaces — are the principal source of energy for driving the critical large-scale, deep overturning circulation that occurs throughout the ocean. Vertical exchanges in the Southern Ocean are responsible for supplying nutrients that fertilize three-quarters of the biological production in the global ocean north of 30°S.
"About 75 percent of biological production north of 30°S is due to this up-flow of nutrients," said Sarmiento. "That's really exceptional."
In other words, most of the organic matter filtered through the global ocean would be lost forever if the Southern Ocean wasn't pumping it back up north. Without this force, Sarmiento says the Earth's oceans would essentially die in 30 years.
The Southern Ocean also absorbs about 60 percent of the Earth's anthropogenic (or man-made) heat and 50 percent of anthropogenic carbon — even though the body of water only occupies about a quarter of the planet's surface ocean area. This so-called biological pump helps to transfer carbon from the surface to the deep, thus mitigating climate change.
"The Earth isn't as warm as it should be because a large amount of heat caused by greenhouse gases is trapped in the ocean," Sarmiento said. "And a great deal of that trapping process takes place in the Southern Ocean."
Little is known about how the Southern Ocean's pump system works. Even less is known about how to use it for predicting climate change.
The Southern Ocean is the least observed and least understood region of the world's oceans. According to Sarmiento, out of the 615,734 nitrate profiles available globally (which measure nutrient levels in the ocean), only 3,647 are for south of 30°S. Moreover, just 560 of these profiles are for south of 45°S. This leaves vast swaths of the Southern Ocean that have never been studied. To close the gap, Sarmiento has been working to obtain funding from the National Science Foundation that would allow him to drop dozens of robotic floats throughout the Southern Ocean to study it more closely.
About 3,500 of these autonomous floats are positioned around the world, and every 10 days they surface from a depth of one mile to measure conductivity and temperature. Sarmiento's floats would go one step further and be outfitted with sensors to measure nitrate, oxygen and pH. Sarmiento would be able to study multidimensional profiles of the ocean once impossible to obtain. He won't know until March if funding is approved, but the possibility of conducting these types of studies excites him.
"We have a chance to obtain vast amounts of data, more than any other time in history," he said. "It's extraordinary. A revolution is taking place."
Second-year postdoctoral research associate Thomas Frölicher has been working with Sarmiento on the relationship between climate change and the Southern Ocean. He says the professor's work will inevitably have a far-reaching impact on the study of climate change.
"Jorge's research leads us to a better understanding of Southern Ocean biogeochemical processes, which helps pin down one of the greatest sources of uncertainties in predicting the fate of anthropogenic carbon and the climate as a whole," said Frölicher. "He combines scientific curiosity, creativity and willingness to pursue ideas that at first seem difficult to solve, but yield extraordinary results in the long run."
It's a sentiment echoed by many of Sarmiento's colleagues, including Daniel Sigman, a co-investigator on the Southern Ocean project and the Dusenbury Professor of Geological and Geophysical Sciences.
"Jorge insists on clear, uncluttered logic, in both his research and his instruction," said Sigman. "More than any other researcher I know, Jorge pursues the Socratic method; Jorge will often introduce an idea that he thinks is particularly interesting and then use others' responses to fashion a map of its possible significance. In his interactions with his research group, gentle but pointed questioning is followed up with calculations, model simulations and investigations of data. This simple interactive approach has yielded transformative insights into ocean processes and has trained generations of high-impact researchers to identify the weak links in accepted ideas."
Sarmiento's undergraduate students value his teaching methods, too. Senior Elizabeth Shoenfelt says her experience in Sarmiento's "Ocean, Atmosphere and Climate" course led her to concentrate in geosciences.
"It was just such an incredibly positive experience I had in that course," said Shoenfelt. "Professor Sarmiento was incredibly encouraging, and made sure the problem sets were appropriately challenging. The need to be creative and synthesize information developed our problem-solving skills, but the solution was very attainable and augmented our understanding of the lecture material. He always makes time to answer our questions and help develop our understanding as we work through problem sets, and is always very kind and encouraging."
In the complex and nuanced world of climate change, Sarmiento isn't interested in the politics, the economics or even the specific design of possible solutions. Rather, he's intent on continuing scientific study that could ultimately provide solutions.
"I see myself as being part of the truth squad," said Sarmiento. "What I and my research team do is provide truthful information that enables people to make decisions about mitigation based on a real scientific understanding of the systems at play. It's nice to be able to say you want to save the world, but it also comes down to an innate curiosity for me. That's what drives me forward every day, and I'm beyond fortunate that I get to do that here at Princeton." | <urn:uuid:ced2e5b7-05af-4fe0-b722-8832e81c8f19> | 3.0625 | 1,891 | Truncated | Science & Tech. | 38.915063 | 2,098 |
It’s again time for one of those puzzling results that if they turn out to be true, would have some very important implications and upset a lot of relatively established science. The big issue of course is the “if”. The case in question relates to some results published this week in Nature by Joanna Haigh and colleagues. They took some ‘hot off the presses’ satellite data from the SORCE mission (which has been in operation since 2003) and ran it through a relatively complex chemistry/radiation model. These data are measurements of how the solar output varies as a function of wavelength from an instrument called “SIM” (the Spectral Irradiance Monitor).
It has been known for some time that over a solar cycle, different wavelengths vary with different amplitudes. For instance, Lean (2000) showed that the UV component varied by about 10 times as much as the total solar irradiance (TSI) did over a cycle. This information (and subsequent analyses) have lent a lot of support to the idea that solar variability changes have an important amplification via changes in stratospheric ozone (Shindell et al (2001), for instance). So it is not a novel finding that the SIM results in the UV don’t look exactly like the TSI. What is a surprise is that for the visible wavelengths, SIM seems to suggest that the irradiance changes are opposite in sign to the changes in the TSI. To be clear, while the TSI has decreased since 2003 (as part of the descent into the current solar minimum), SIM seems to indicate that the UV decreases are much larger than expected, while irradiance in visible bands has actually increased! This is counter to any current understanding of what controls irradiance on solar cycle timescales.
What are the implications of such a phenomena? Well, since the UV portion of the solar input is mostly absorbed in stratosphere, it is the visible and near-IR portions of the irradiance change that directly influence the lower atmosphere. Bigger changes in the UV also imply bigger changes in stratospheric ozone and temperature, and this influences the tropospheric radiative forcing too. Indeed, according to Haigh’s calculations, the combination of the two effects means that the net radiative forcing at the tropopause is opposite in sign to the TSI change. So during a solar minimum you would expect a warmer surface!
Much of the longer term variance in solar output has been hypothesised to follow what happens over the solar cycle and so if verified, this result would imply that all current attributions to solar variability of temperature changes in the lower atmosphere and surface ocean would be of the wrong sign. Mechanisms elucidated in multiple models from multiple groups would no longer have any validity. It would be shocking stuff indeed.
Conceivably, there might be another missing element (such as a cosmic-ray/cloud connection) that would counteract this physics and restore the expected sign of the change, but no-one has succeeded in finding any mechanism that would quantitatively give anything close the size of effect that would now be required (see our previous posts on the subject).
So is this result likely to be true? In my opinion, no. The reason why has nothing to do with problems related to the consequences, but rather from considerations of what the SIM data are actually showing. This figure gives a flavour of the issues:
(courtesy Judith Lean). Estimates of irradiance in three bands are given in each panel, along with the raw measurements from various satellite instruments over the last 30 years. The SIM data are the purple dots in the third panel. While it does seem clear that the overall trend from 2003 to 2009 is an increase, closer inspection suggests that this anti-phase behaviour only lasts for the first few years, and that subsequently the trends are much closer to expectation. It is conceivable, for instance, that there was some undetected or unexpected instrument drift in the first few years. The proof of the pudding will come in the next couple of years. If the SIM data show a decrease while the TSI increases towards the solar maximum, then the Haigh et al results will be more plausible. If instead, the SIM data increase, that would imply there is an unidentified problem with the instrument.
In the meantime, this is one of those pesky uncertainties we scientists love so much… | <urn:uuid:4ff8cb2e-a2a5-47c8-81fe-97a9dc1a1edb> | 3.203125 | 895 | Personal Blog | Science & Tech. | 39.723666 | 2,099 |