Dataset columns:
- text: string, 174 to 655k characters
- id: string, 47 characters
- score: float64, 2.52 to 5.25
- tokens: int64, 39 to 148k
- format: string, 24 classes
- topic: string, 2 classes
- fr_ease: float64, -483.68 to 157
- __index__: int64, 0 to 1.48M
April 19, 1996

This document contains a high-level proposal for embedding fonts in HTML documents on the World Wide Web. Clients interact with platform-specific services (called "embedding services" in this document) that provide much of the embedding functionality. The embedding services used by the clients perform the following functions: the embedding services create an embedded font structure from a specified font or fonts. The font structure has a known length and stream identifier, which the clients use to package the structure appropriately.

Rather than actually embed a font structure in an HTML document, we propose that clients create a separate file to contain the embedded font or fonts, perhaps called a .FONT file, with its own URL. This file would have the following MIME specification: An HTML document would contain a reference to the associated font file, similar to the way graphics or other objects are referenced within a web document. We propose the tag <FONT FILE> to associate fonts with a web document. For example, <FONT FILE = Name.FONT>

The following scenario outlines the process an authoring client might follow when embedding a font in an HTML document: Display clients will use a procedure similar to the following to load and display embedded fonts:

OPEN ISSUE regarding HTML FORMS: Fonts with read-only embedding privileges (preview and print embedding) have previously only been allowed to be loaded for use in read-only documents. A web author may unknowingly embed a read-only font for use with an HTML form, which allows a user to modify and enter text. Rather than create a new embedding level for this purpose or modify the existing read-only level to permit this use of the font, the client should substitute a local font for the read-only font used in the form.

Authoring clients need to determine which fonts are actually used in a document before embedding the fonts. Fonts that are associated with a document but not actually displayed should not be embedded. Authoring clients also need to determine which of the fonts used in a document should actually be embedded. Fonts that will exist on the remote system, such as the Windows core fonts, should not be embedded. Users may also notify the authoring client of fonts they do not want to embed. Authoring clients are responsible for maintaining a shared typeface exclusion list that lists fonts that should not be embedded. If an authoring client requests that the font be subsetted, the client must supply the list of characters used in the document. Authoring and display clients are responsible for defining functions that the embedding services can use to write the font structure to the .FONT file.

The embedding services report the embedding privileges the font creator has applied to the font, and clients must respect those privileges. After loading and displaying a document with a font intended for temporary use, a client must uninstall the font. When loading a document with embedded fonts that the creator has labeled fully-installable, the display client should ask the user whether to permanently install the font or use it only temporarily. Otherwise, users may unwittingly load numerous fonts on their computer that they never regularly use.

Microsoft has worked with the font industry to develop standards for identifying embeddability within font files. The embeddability of a TrueType font is determined by the creator of the font.
Information about the level of embedding permitted for the font is contained in the fsType bit field of the OS/2 table, as described in the TrueType 1.0 Font File Specification.

OS/2 table fsType bit settings:
- 1: Restricted License Embedding. The font must not be modified, embedded, or exchanged in any manner without first obtaining permission of the legal owner.
- 2: Preview and Print Embedding. The font may be embedded within documents, but must only be installed temporarily on the remote system. Documents containing the font can only be opened as "read-only."
- 3: Editable Embedding. The font may be embedded within documents, but must only be installed temporarily on the remote system. Documents containing the font can be opened for reading and writing.
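As an aside, a client written today could inspect these fsType bits before deciding how to treat an embedded font. The sketch below is not part of the original proposal; it assumes the Python fontTools library and a hypothetical font file name, and the bit masks are the ones defined for the OS/2 table in the TrueType/OpenType specification.

```python
# Sketch: reading the OS/2 fsType embedding bits with fontTools (assumed installed).
# The font file name is hypothetical; bit values follow the TrueType/OpenType spec.
from fontTools.ttLib import TTFont

RESTRICTED = 0x0002      # bit 1: Restricted License Embedding
PREVIEW_PRINT = 0x0004   # bit 2: Preview and Print Embedding
EDITABLE = 0x0008        # bit 3: Editable Embedding

def embedding_privileges(path):
    """Return a human-readable embedding level for a TrueType font file."""
    fs_type = TTFont(path)["OS/2"].fsType
    if fs_type == 0:
        return "installable (no restrictions set)"
    # If several bits are set, this sketch simply reports the least restrictive one.
    if fs_type & EDITABLE:
        return "editable embedding"
    if fs_type & PREVIEW_PRINT:
        return "preview and print embedding (read-only documents)"
    if fs_type & RESTRICTED:
        return "restricted license embedding (do not embed)"
    return "unknown"

if __name__ == "__main__":
    print(embedding_privileges("Name.ttf"))  # hypothetical font file
```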
<urn:uuid:c7b76406-7a9c-42c8-9d25-ffd8b8f77740>
2.90625
857
Documentation
Software Dev.
41.88008
1,300
Well inside the Arctic Circle, scientists have found black smoker vents farther north than anyone has ever seen before. The cluster of five vents — one towering nearly four stories in height — are venting water as hot as 570 F. Dissolved sulfide minerals that solidify when vent water hits the icy cold of the deep sea have, over the years, accumulated around the vent field in what is one of the most massive hydrothermal sulfide deposits ever found on the seafloor, according to Marvin Lilley, a UW oceanographer. He’s a member of an expedition led by Rolf Pedersen, a geologist with the University of Bergen’s Centre for Geobiology, aboard the research vessel G.O. Sars. The vents are located at 73 degrees north on the Mid-Atlantic Ridge between Greenland and Norway. That’s more than 120 miles from the previous northernmost vents found during a 2005 expedition, also led by Pedersen. Other scientists have detected plumes of water from hydrothermal vents even farther north but have been unable to find the vent fields on the seafloor to image and sample them. In recent years scientists have been interested in knowing how far north vigorous venting extends. That’s because the ridges where such fields form are so stable up north, usually subject only to what scientists term “ultra-slow” spreading. That’s where tectonic forces are pulling the seafloor apart at a rate as little as three-fifths of an inch in a year. This compares to lower latitudes where spreading can be up to eight times that amount, and fields of hydrothermal vents are much more common. “We hadn’t expected a lot of active venting on ultra-slow spreading ridges,” Lilley said. The active chimneys in the new field are mostly black and covered with white mats of bacteria feasting on the minerals emitted by the vents. Older chimneys are mottled red as a result of iron oxidization. All are the result of seawater seeping into the seafloor, coming near fiery magma and picking up heat and minerals until the water vents back into the ocean. The same process created the huge mound of sulfide minerals on which the vents sit. That deposit is about 825 feet in diameter at its base and about 300 feet across on the top and might turn out to be the largest such deposit seen on the seafloor, Lilley said. Additional mapping is needed. “Given the massive sulfide deposit, the vent field must surely have been active for many thousands of years,” he said. The field has been named Loki’s Castle partly because the small chimneys at the site looked like a fantasy castle to the scientists. The Loki part refers to a Norwegian god renowned for trickery. A University of Bergen press release about the discovery said Loki “was an appropriate name for a field that was so difficult to locate.” Indeed this summer’s expedition and the pinpointing of the location of the vents last month follows nearly a decade of research. Finding the actual field involved extensive mapping. It also meant sampling to detect warm water and using optical sensors lowered in the ocean to determine the chemistry, both parts that involved Lilley. He said a key sensor was one developed by Ko-ichi Nakamura of the National Institute for Industrial Science and Technology, Japan, that detects reduced chemicals that are in the water as a result of having been processed through a hydrothermal vent. A remotely operated vehicle was used to finally find the vents. 
The difficulties of the task are described in an expedition Web diary, see “Day 17: And then there were vents” at http://www.geobio.uib.no/View.aspx?mid=1062&itemid=90&pageid=1093&moduledefid=71. The area around the vents was alive with microorganisms and animals. Preliminary observations suggest that the ecosystem around these Arctic vents is diverse and appears to be unique, unlike the vent communities observed elsewhere, the University of Bergen press release said. The expedition included 25 participants from five countries.
<urn:uuid:51748c22-01b5-40ce-a9d0-0a9811fd8a59>
3.625
875
News Article
Science & Tech.
48.682728
1,301
Well, the Tri-State weather may take an unusual turn again in the seasons ahead. The latest from the National Oceanic and Atmospheric Administration indicates we could be set for an El Nino winter. An El Nino may form in the Pacific Ocean within six months, perhaps altering the number of Atlantic hurricanes while bringing rain to the drought-stricken southeastern United States. For the Tri-State, most studies show that an El Nino winter often means less precipitation and warmer than usual temperatures during the season. The Climate Prediction Center has issued an El Nino Watch because “there is a 50 percent chance” that the central Pacific will warm before the end of the year. The formation of an El Nino, a warming of the Pacific, can, and usually does, have a major impact on the nation's weather and on energy and agriculture markets. The most immediate impact could be on this year's hurricane season. Plus, this weather oddity can produce threats to the orange crops in Florida. El Nino enhances Atlantic Ocean wind shear, which is a change in speed or direction of winds at different levels in the atmosphere. The winds tear at the structure of growing tropical systems, preventing them from organizing or strengthening. Across the nation, the southern United States can sometimes experience dangerous flooding because of the re-adjustment of the Pacific Jet Stream. This storm track lines up across the south, allowing for storms to repeatedly bring torrential rain to that area. As for our area, while many El Nino years do produce warmer and wetter weather, the intensity of the El Nino outbreak can have a lot to do with the outcome. If you would like to learn more, you can go here.
<urn:uuid:7226b947-88e8-4667-9ec4-544a539c3e28>
3.328125
349
News (Org.)
Science & Tech.
44.858151
1,302
This is a drawing of the Galileo probe exploring the environment of Jupiter. Image from: The Jet Propulsion Laboratory

Can there be Life in the Environment of Titan?

Titan's atmosphere is a lot like the Earth's, except that it is very cold, from -330 degrees to -290 degrees! Like the Earth, there is a lot of nitrogen and other complex molecules. There also may be an ocean of methane, or perhaps a liquid water layer inside the moon. Except for the cold, these signs would be favorable for some sort of life. Some creatures on Earth are known to live in an environment of very cold water. In the atmosphere there are layers of clouds composed of complex molecules such as methane. Moreover there is energy from ultraviolet light, and the charged particles of the magnetosphere. This type of environment, aside from the cold, is the kind of environment in which scientists think life began. Overall, the environment sounds unfriendly to life as we know it on Earth, because of the cold. Since not much is known about the moon Titan, up-close exploration of this moon, with a probe, as shown in this drawing, would help scientists better understand if life could survive there.
<urn:uuid:47cea5e8-6159-45d7-aa63-7cabcf2b7030>
3.375
722
Knowledge Article
Science & Tech.
61.393187
1,303
February 20, 2013 People who live by large, inland bodies of water have a phrase in their lexicon that describes the blizzards that hit them throughout the winter: “lake-effect snow.” When wintry winds blow over wide swaths of warmer lake water, they thirstily suck up water vapor that later freezes and drops as snow downwind, blanketing cities near lake shores. These storms are no joke: a severe one dumped nearly 11 feet of snow over the course of a week in Montague, N.Y. before New Year’s Day, 2002; another week-long storm around Veterans Day in 1996 dropped around 70 inches of snow and left more than 160,000 residents of Cleveland without power. Other lake-effect snowstorms, such as those that skim the surface of Utah’s Great Salt Lake, are more of a boon, bringing fresh, deep powder to ski slopes on the leeward side of nearby mountains. But new research shows that mountains don’t just force the moisture-laden winds to dump snow. Mountains upwind can actually help guide the cold air patterns over lakes, helping to produce severely intense snowstorms. Mountains far afield can also deflect cold wind away from water, reducing a lake’s ability to fuel large storms. If these forces work with smaller topographic features, they may help illuminate whether gently rolling hills near the Great Lakes contribute to the creation and intensity of lake-effect snow. The research, published yesterday in the American Meteorological Society’s journal, Monthly Weather Review, focused on wind patterns that swirl around the Great Salt Lake. “What we’re showing here is a situation where the terrain is complicated–there are multiple mountain barriers, not just one, and they affect the air flow in a way that influences the development of the lake-effect storm over the lake and lowlands,” said the study’s senior author Jim Steenburgh, in a statement. Steenburgh, a professor of atmospheric sciences at the University of Utah, and lead author Trevor Alcott, a recent doctoral graduate from the university and now a researcher at the National Weather Service in Salt Lake City, became interested in studying Utah’s winter weather after they noticed that current weather forecast models struggle to anticipate the intensity of the dozen or so lake-effect storms that strike their state’s major cities each winter. These models don’t include the effects of topography, such as the Wasatch Range (which forms the eastern border of the valley that encloses the Great Salt Lake), the Oquirrh Mountains (which form the western border of the valley) or the mountains along the north and northwest borders of Utah some 150 miles away from the population centers of Salt Lake City and Provo. So Alcott and Steenburgh ran a computer simulation that incorporated mountains close to the lake as well as those closer to the Idaho and Nevada borders to mimic the creation of a moderate lake-effect storm that occurred over the Great Salt Lake from Oct. 26-27, 2010, which brought up to 11 inches of snow to the Wasatch. After their first simulation–their “control”–was complete, they ran several more simulations that plucked out geographic features. Using this method, “We can see what happens if the upstream terrain wasn’t there, if the lake wasn’t there, if the Wasatch Range wasn’t there,” Steenburgh explained. When they removed the lake and all mountains from their simulation, the model didn’t produce any snowfall. When they kept all the mountains but removed the lake, only 10 percent of the snow in the simulation of the real storm fell.
Keeping the lake but flattening all the mountains resulted in only 6 percent of the snow falling. Resurrecting the Wasatch Range but removing the other mountains yielded 73 percent of the snow compared to the simulation of the real storm. But the real surprise is what happened when both the Wasatch and Oquirrh ranges were retained, but the ranges in northern Utah at the Idaho and Nevada borders were removed. The result? 61 percent more snowfall than simulated in the real storm. The Wasatch and Oquirrh ranges form a funnel, guiding wind over the lake and enhancing snowfall in the downwind cities of Salt Lake City and Provo. Further, without the barrier of the northern mountains, which range from 7,600 feet to 10,000 feet in peak elevation–considerably less than the Wasatch’s peak elevation of nearly 12,000 feet–waves of cold air can reach the Great Salt Lake without deflection. In effect, Utah’s major cities are shielded by moderately sized mountains that together cast a long snow shadow!
<urn:uuid:fc1ef12e-f46d-4b48-8245-a6ec339636a6>
3.453125
1,016
News Article
Science & Tech.
46.521252
1,304
Sequential compression and decompression is done using the classes BZ2Compressor and BZ2Decompressor.

class BZ2Compressor([compresslevel])
Create a new compressor object. This object may be used to compress data sequentially. If you want to compress data in one shot, use the compress() function instead. The compresslevel parameter, if given, must be a number between 1 and 9; the default is 9.

compress(data)
Provide more data to the compressor object. It will return chunks of compressed data whenever possible. When you've finished providing data to compress, call the flush() method to finish the compression process, and return what is left in internal buffers.

flush()
Finish the compression process and return what is left in internal buffers. You must not use the compressor object after calling this method.

class BZ2Decompressor()
Create a new decompressor object. This object may be used to decompress data sequentially. If you want to decompress data in one shot, use the decompress() function instead.

decompress(data)
Provide more data to the decompressor object. It will return chunks of decompressed data whenever possible. If you try to decompress data after the end of stream is found, EOFError will be raised. If any data was found after the end of stream, it'll be ignored and saved in the unused_data attribute.
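A minimal usage sketch of the sequential interface described above (the chunked byte strings are made-up sample data):

```python
# Incremental compression and decompression with bz2.BZ2Compressor / BZ2Decompressor.
import bz2

chunks = [b"first chunk of data, ", b"second chunk of data, ", b"third chunk"]

compressor = bz2.BZ2Compressor(9)               # compresslevel between 1 and 9
compressed = b"".join(compressor.compress(c) for c in chunks)
compressed += compressor.flush()                # drain internal buffers; object is now unusable

decompressor = bz2.BZ2Decompressor()
restored = decompressor.decompress(compressed)
assert restored == b"".join(chunks)
```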
<urn:uuid:d7610c1a-d921-4d75-9cab-62610b94415e>
2.859375
281
Documentation
Software Dev.
50.375341
1,305
Editor's note: Colin Stuart is an astronomy and science writer, who also works as a Freelance Astronomer for the Royal Observatory Greenwich in London. His first book is due to be published by Carlton Books in September 2013. Follow @skyponderer on Twitter. London (CNN) -- Reports coming from Russia suggest that hundreds of people have been injured by a meteor falling from space. The force of the fireball, which seems to have crashed into a lake near the town of Chebarkul in the Ural Mountains, roared through the sky early on Friday morning local time, blowing out windows and damaging buildings. This comes on the same day that astronomers and news reporters alike were turning their attention to a 40 meter asteroid -- known as 2012 DA14 -- which is due for a close approach with Earth on Friday evening. The asteroid will skirt around our planet, however, missing by some 27,000 kilometers (16,777 miles). Based on early reports, there is no reason to believe the two events are connected. And yet it just goes to show how much space debris exists up there above our heads. It is easy to think of a serene solar system, with the eight planets quietly orbiting around the Sun and only a few moons for company. The reality is that we also share our cosmic neighborhood with millions of other, much smaller bodies: asteroids. Made of rock and metal, they range in size from a few meters across, up to the largest -- Ceres -- which is 1000 kilometers wide. They are left over rubble from the chaotic birth of our solar system around 5000 million years ago and, for the most part, are found in a "belt" between the orbits of Mars and Jupiter. But some are known to move away from this region, either due to collisions with other asteroids or the gravitational pull of a planet. And that can bring them into close proximity to the Earth. Once a piece of space-rock enters our atmosphere, it becomes known as a meteor. Traveling through the sky at a few kilometers per second, friction with the air can cause the meteor to break up into several pieces. Eyewitnesses have described seeing a burst of light and hearing loud, thunderous noises. This, too, is due to the object tearing through the gases above our heads. If any of the fragments make it to the ground, only then are they called meteorites. Such events are rare, but not unprecedented. An object entered Earth's atmosphere in 1908 before breaking up over Siberia. The force of the explosion laid waste to a dense area of forest covering more than 2000 square kilometers. It is not hard to imagine the devastation of such an event over a more highly populated region. The Earth is sprinkled with around 170 craters also caused by debris falling from space. The largest is found near the town of Vredefort in South Africa. The impact of a much larger asteroid -- perhaps as big as 15 kilometers across -- is famously thought to have finished off the dinosaurs 65 million years ago. It is easy to see why, then, that astronomers are keen to discover the position and trajectory of as many asteroids as possible. That way they can work out where they are heading and when, if at all, they might pose a threat to us on Earth. It is precisely this sort of work that led to the discovery of asteroid 2012 DA14 last February by a team of Spanish astronomers. However, today's meteor strike shows that it is not currently possible to pick up everything. 
A non-profit foundation, led by former NASA astronaut Ed Lu, wants to send a dedicated asteroid-hunting telescope into space that can scan the solar system for any potential threats. For now, astronomers will use Friday's fly-by to bounce radar beams off 2012 DA14's surface, hoping to learn more about its motion and structure. One day this information could be used to help move an asteroid out of an Earth-impacting orbit. This latest meteor over Russia just goes to show how important such work is and how crucial it is that we keep our eye on the sky. The opinions expressed in this commentary are solely those of Colin Stuart.
<urn:uuid:0c9e9eed-c93a-4331-8fc6-375d8c613978>
3.3125
840
Truncated
Science & Tech.
54.1125
1,306
The Technical Details: Determining Delta Values

When we talk about the isotopic ratio in a sample, we talk about the delta value. Let's look at how a delta value is actually calculated:
- The first step in figuring out the δ13C for a sample is to find the ratio of 13C to 12C within the sample. Next compare (by dividing) this ratio to the ratio of 13C to 12C in a standard.
- There is a specific standard, with a known, unchanging ratio of 13C to 12C that all laboratories use in their comparison. For the stable carbon isotopes, this standard is a limestone (called Pee Dee Belemnite—or PDB) from South Carolina. Although PDB is no longer run as the standard, other carbonates (with a known, unchanging 13C to 12C ratio) are used and compared on the PDB scale. The carbonate standard is reacted with an acid to create gaseous CO2, so that the sample and standard are both in the same phase.
- Often the sample and standard may have very similar ratios of the two stable isotopes, which will give you a value very close to (but not exactly) 1. (Two sample bellows are used so that the sample is compared to a standard.) Many samples that actually have different ratios of 13C to 12C will give what seems like similar values (say, for example, 0.99 and 0.98). These samples do have different isotopic ratios, but this is hard to see when they only differ after the decimal point. To make this difference easier to see, 1 is subtracted from this value, and then this new calculation is multiplied by 1,000 to give the actual δ13C of the sample.
- This makes it much easier to see the difference between two samples. For the ratios of 0.99 and 0.98, the delta values are -10‰ and -20‰ respectively. The equation for this is:

δ13C (‰) = (Rsample / Rstandard - 1) × 1,000, where R is the ratio of 13C to 12C.

Making the Values More “Friendly”

Even when comparing samples with ratios of 13C to 12C of 0.99 and 0.98, the delta notation is much easier. Well, when we look at ratios that atmospheric scientists actually study, it becomes infinitely easier to compare using delta notation–in fact it would be too difficult without!

| Carbon Pool | δ13C | Actual ratio of 13C to 12C |
| --- | --- | --- |
| Ocean & Atmosphere | -8‰ | 0.011142 |

Why Go Through This Much Work?

This seems like an awful lot of calculations when you can just look at differences among samples in their 13C to 12C ratios and ignore all of the calculation steps. The reason that it is conventional to compare to a standard (and then continue on to the next steps in order to get a more ‘friendly’ value) is so that it is easier to compare results both among isotope laboratories and within a single laboratory over a long time period. It is impossible to have an isotope ratio mass spectrometer that perfectly finds the ratio of 13C to 12C in a sample. Isotope ratio mass spectrometers measure relative isotopic ratios much better than actual ratios. By comparing to a standard, the precision of the data values is much, much better since all values are relative to a given standard. For example, if the ratio for both the sample and standard are overestimated (or underestimated) by the same relative amount, then dividing the two values will account for this, making it possible to compare δ13C among laboratories all across the world.

The formula for determining the Δ14C of a sample is similar to δ13C:

Δ14C (‰) = (FN[x] - 1) × 1,000

The difference is in the term FN[x], which is still a comparison of the sample to a standard. However, after this comparison, several other calculations occur to find FN[x].
- The ratio is corrected for “background” 14C counts, where atoms or molecules that were accidentally and incorrectly identified as 14C are no longer included. - The ratio is additionally corrected for the small amount of radioactive decay between the time the sample was collected and the time it was measured, so that the Δ14C at the time of collection rather than the time of analysis is reported. - The final difference is that Δ14C is normalized, where the effect of fractionation is removed. That is, we know from the 13C measurements that, for example, when carbon dioxide is photosynthesized by plants, it fractionates, resulting in proportionately less 13C in the plant. The same thing happens to 14C, so plants have proportionately less 14C than the atmosphere does. If we know how much 13C fractionation occurs, we can calculate precisely how much 14C fractionation there is. We then calculate how much 14C would have been in the sample if it had not fractionated. This is the Δ14C. Why go to all this trouble? The main reason is that for radiocarbon dating, scientists want to study how much 14C has decayed, not how much has fractionated, and this normalization allows them to do just that. The second reason is that it makes it easier to understand the 14C in the atmosphere – now when plants photosynthesize CO2, the Δ14C value in the atmosphere does not change. Of course, we can always reverse the calculations to discover the amount of 14C without applying this normalization, and this is written as δ14C. For even more gory details, see: http://www.radiocarbon.org/Pubs/Stuiver/index.html
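As a quick numeric illustration of the δ13C bookkeeping above (not from the original page; the PDB ratio below is the commonly quoted value and the sample ratios are invented):

```python
# delta_13c = (R_sample / R_standard - 1) * 1000, reported in per mil (‰)
PDB_RATIO = 0.0112372  # commonly quoted 13C/12C ratio of the PDB standard (assumed here)

def delta_13c(sample_ratio, standard_ratio=PDB_RATIO):
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Two samples whose raw ratios look almost identical...
r1, r2 = 0.99 * PDB_RATIO, 0.98 * PDB_RATIO
print(delta_13c(r1), delta_13c(r2))  # -> -10.0 and -20.0 per mil, far easier to compare
```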
<urn:uuid:70641808-af24-4ed4-9507-38f917936cfa>
4.28125
1,181
Documentation
Science & Tech.
54.24152
1,307
Did Hurricane Wilma have 209 mph sustained winds? At last week's 30th Conference on Hurricanes and Tropical Meteorology of the American Meteorological Society, Dr. Eric Uhlhorn of NOAA's Hurricane Research Division presented a poster that looked at the relationship between surface winds measured by the SFMR instrument and flight-level winds in two Category 5 storms. Hurricane Hunter flights into Category 5 Supertyphoon Megi (17 October 2010) and Category 5 Hurricane Felix (03 September 2007) found that the surface winds measured by SFMR were greater than those measured at flight level (10,000 feet). Usually, surface winds in a hurricane are 10 - 15% less than at 10,000 feet, but he showed that in super-intense Category 5 storms with small eyes, the dynamics of these situations may generate surface winds that are as strong or stronger than those found at 10,000 feet. He extrapolated this statistical relationship (using the inertial stability measured at flight level) to Hurricane Wilma of 2005, which was the strongest Atlantic hurricane on record (882 mb), but was not observed by the SFMR. He estimated that the maximum wind averaged around the eyewall in Wilma at peak intensity could have been 209 mph, plus or minus 20 mph--so conceivably as high as 229 mph, with gusts to 270 mph. Yowza. That's well in excess of the 200 mph minimum wind speed a top end EF-5 tornado has. The Joplin, Missouri EF-5 tornado of May 22, 2011 had winds estimated at 225 - 250 mph. That tornado ripped pavement from the ground, leveled buildings to the concrete slabs they were built on, and killed 161 people. It's not a pretty thought to consider what Wilma would have done to Cancun, Key West, or Fort Myers had the hurricane hit with sustained winds of what the Joplin tornado had.

Figure 1. Hurricane Wilma's pinhole eye as seen at 8:22 a.m. CDT Wednesday, Oct. 19, 2005, by the crew aboard NASA's international space station as the complex flew 222 miles above the storm. At the time, Wilma was the strongest Atlantic hurricane in history, with a central pressure of 882 mb and sustained surface winds estimated at 185 mph. The storm was located in the Caribbean Sea, 340 miles southeast of Cozumel, Mexico. Image source: NASA's Space Photo Gallery.

Figure 2. Damage in Joplin, Missouri after the EF-5 tornado of May 22, 2011. Image credit: wunderphotographer thebige.

Official all-time strongest winds in an Atlantic hurricane: 190 mph

The official record for strongest winds in an Atlantic hurricane is 190 mph, for Hurricane Allen of 1980 as it was entering the Gulf of Mexico, and for Hurricane Camille of 1969, as it was making landfall in Pass Christian, Mississippi. In Dr. Bob Sheets' and Jack Williams' book, Hurricane Watch, they recount the Hurricane Hunters flight into Camille as the hurricane reached peak intensity: On Sunday afternoon, August 17, an Air Force C-130 piloted by Marvin Little penetrated Camille's eye and measured a pressure of 26.62 inches of mercury. "Just as we were nearing the eyewall cloud we suddenly broke into a clear area and could see the sea surface below," the copilot, Robert Lee Clark, wrote in 1982. "What a sight! Although everyone on the crew was experienced except me, no one had seen the wind whip the sea like that before...Instead of the green and white splotches normally found in a storm, the sea surface was in deep furrows running along the wind direction....The velocity was beyond the descriptions used in our training and far beyond anything we had ever seen."
So, the 190 mph winds of Camille were an estimate that was off the scale from anything that had ever been observed in the past. The books that the Hurricane Hunters carried, filled with photos of the sea state at various wind speeds, only go up to 150 mph (Figure 3). I still used this book to estimate surface winds when I flew with the Hurricane Hunters in the late 1980s, and the books are still carried on the planes today. In the two Category 5 hurricanes I flew into, Hugo and Gilbert, I never observed the furrowing effect referred to above. Gilbert had surface winds estimated at 175 mph based on what we measured at flight level, so I believe the 190 mph wind estimate in Camille may be reasonable.

Figure 3. Appearance of the sea surface in winds of 130 knots (150 mph). Image credit: Wind Estimations from Aerial Observations of Sea Conditions (1954), by Charlie Neumann.

Figure 4. Radar image of Hurricane Camille taken at 22:15 UTC August 17, 1969, a few hours before landfall in Mississippi. At the time, Camille had the highest sustained winds of any Atlantic hurricane in history--190 mph.

The infamous hurricane hunter flight into Wilma during its rapid intensification

While I was at last week's conference, I had a conversation with Rich Henning, a flight meteorologist for NOAA's Hurricane Hunters, who served for many years as an Air Reconnaissance Weather Officer (ARWO) for the Air Force Hurricane Hunters. Rich told me the story of the Air Force Hurricane Hunter mission into Hurricane Wilma in the early morning hours of October 19, 2005, as Wilma entered its explosive deepening phase. The previous airplane, which had departed Category 1 Wilma six hours previously, flew through Wilma at an altitude of 5,000 feet. They measured a central pressure of 954 mb when they departed the eye at 23:10 UTC. The crew of the new plane assumed that the hurricane, though intensifying, was probably not a major hurricane, and decided that they would also go in at 5,000 feet. Winds outside the eyewall were less than hurricane force, so this seemed like a reasonable assumption. Once the airplane hit the eyewall, they realized their mistake. Flight level winds quickly rose to 186 mph, far in excess of Category 5 strength, and severe turbulence rocked the aircraft. The aircraft was keeping a constant pressure altitude to maintain their height above the ocean during the penetration, but the area of low pressure at Wilma's center was so intense that the airplane descended at over 1,000 feet per minute during the penetration in order to maintain a constant pressure altitude. By the time they punched into the incredibly tiny 4-mile wide eye, which had a central pressure of just 901 mb at 04:32 UTC, the plane was at a dangerously low altitude of 1,500 feet--not a good idea in a Category 5 hurricane. The pilot ordered an immediate climb, and the plane exited the other side of Wilma's eyewall at an altitude of 10,000 feet. They maintained this altitude for the remainder of the flight. During their next pass through the eye at 06:11 UTC, the diameter of the eye had shrunk to an incredibly tiny two miles--the smallest hurricane eye ever measured. During their third and final pass through the eye at 08:01 UTC, a dropsonde found a central pressure of 882 mb--the lowest pressure ever observed in an Atlantic hurricane. In the span of just 24 hours, Wilma had intensified from a 70 mph tropical storm to a 175 mph Category 5 hurricane--an unprecedented event for an Atlantic hurricane.
Since the pressure was still falling, it is likely that Wilma became even stronger after the mission departed. I'll have a new post by Tuesday at the latest.
<urn:uuid:75e1a01b-18fc-4daf-9e77-e843d26d94e3>
3.03125
1,554
Personal Blog
Science & Tech.
54.932505
1,308
October 4, 2005: Intricate wisps of glowing gas float amid a myriad of stars in this image of the supernova remnant, N132D. The ejected material shows that roughly 3,000 years have passed since the supernova blast. As this titanic explosion took place in the Large Magellanic Cloud, a nearby neighbor galaxy some 160,000 light-years away, the light from the supernova remnant is dated as being 163,000 years old from clocks on Earth. This composite image of N132D comprises visible-light data taken in January 2004 with Hubble's Advanced Camera for Surveys, and X-ray images obtained in July 2000 by Chandra's Advanced CCD Imaging Spectrometer. The complex structure of N132D is due to the expanding supersonic shock wave from the explosion impacting the interstellar gas of the LMC. A supernova remnant like N132D provides information on stellar evolution and the creation of chemical elements such as oxygen through nuclear reactions in their cores. When viewing objects in space, one must realize that the speed of light is a finite quantity, and that many objects that we are observing with high-powered telescopes, like Hubble, are extremely far away. If we refer to the speed of light as an unchanging value, and state that nothing can go faster than this speed, we can then use the term "light-second," "light-minute," "light-hour", and so on up to "light-year" as finite quantities of distance that are equal to the distance that light travels in that amount of time. Based on the speed of light and the distance from Earth to the Sun, we can say that the Sun is 8 light-minutes away from the Earth and vice-versa. If the Sun showed a flare, it would be visible on Earth 8 minutes later. If an object is seen in the Large Magellanic Cloud (LMC), it takes 160,000 years for the light from the LMC to reach us. If some event occurs in the LMC, like a supernova, astronomers on Earth viewing the supernova going off today know that the supernova actually exploded 160,000 years ago. If our telescopes show that 3,000 years have passed since the time of the supernova, based on the presence of ejection material in the remnant, the actual clock-time of when that event occurred based on our Earth calendars was 3,000 + 160,000 years ago, or 163,000 years ago. Since similar objects are at various distances from Earth, astronomers usually remove the light-travel time to the object when talking about the age or when an event occurred.
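The date arithmetic in the last paragraph amounts to one addition; a toy version with the numbers quoted for N132D:

```python
# Age of the event on Earth clocks = light-travel time to the LMC + age seen in the remnant
light_travel_time_years = 160_000   # distance to the Large Magellanic Cloud in light-years
remnant_age_years = 3_000           # time since the blast, inferred from the ejecta
print(light_travel_time_years + remnant_age_years)  # 163,000 years ago
```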
<urn:uuid:eb362c51-222d-4ae3-8b43-9b171d658de2>
4.25
542
Knowledge Article
Science & Tech.
46.355431
1,309
(Submitted August 15, 1998) I'm a middle school geography teacher with no formal expertise in, but a lifelong fascination with, astronomy and space in general. I seem to remember from a long ago college astronomy course a discussion of Olbers' Paradox that explains why we don't have perpetual daylight despite the billions of bright stars that presumably send their light to all parts of the earth. In trying to explain this concept to my eight-year-old daughter, I get tongue-tied by all the technical jargon involved. Can you help me put my explanation in layman's terms?

In an infinite universe, which has existed forever, we shouldn't have night. Imagine a universe divided into shells, with stars of a single brightness distributed evenly --- if you look at a shell twice as far, each star is only a quarter as bright, but there are four times as many stars, so each shell is equally bright. If you have an infinite number of shells, you end up with infinite brightness! The big bang cosmology solves this, mainly by the implied age of the universe. We only see light emitted within the last 12 billion years (or whatever the age of the universe might be). This is a long time, but certainly not infinite, and not enough to make the night sky bright.

Koji Mukai & Maggie Masetti for Ask an Astrophysicist
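A tiny numerical version of the shell argument (arbitrary units, not part of the original answer), showing that every shell contributes the same flux, so an infinite number of shells would give an infinite total:

```python
# Each spherical shell of thickness dr at radius r contributes
#   flux ~ (stars in shell) * L / (4*pi*r**2)
#        ~ (n * 4*pi*r**2 * dr) * L / (4*pi*r**2) = n * L * dr   (independent of r)
import math

n, L, dr = 1.0, 1.0, 1.0   # star density, star luminosity, shell thickness (arbitrary units)

def shell_flux(r):
    stars = n * 4 * math.pi * r**2 * dr
    return stars * L / (4 * math.pi * r**2)

print([round(shell_flux(r), 3) for r in (1, 2, 10, 100)])  # same value for every shell
# Total sky brightness = flux per shell * number of shells, which grows without bound
# in an infinitely old, infinite universe; a finite age cuts the sum off.
```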
<urn:uuid:50288173-7962-4a33-bd98-92118d6da69e>
3.578125
297
Q&A Forum
Science & Tech.
46.477452
1,310
not like the gas carbon dioxide for which only 'crazies' consider a pollutant. I suppose it could be said astronauts pollute their environment. They do not need to use the larger biosphere("to clean the air") which should tell you how easy it is. In addition to ground transport, air transport is a consideration. (We can leave sea for another day... a large place to hide or dump trash for a time) I did some editing and put in bold a comment about a or the rate-determining step involving a layer of atmosphere. The rds is a chemical term i searched for a few days ago and just got around to reading. You also will note that they do not use the word saturation but instead speak of a new equilibrium. Table 4 (CONCAWE (1997), EC (1996)) shows how the emissions of CO, hydrocarbons, NOx and particulate matter have been reduced in Europe, reflecting the ability of technology to deliver reductions in emissions. The data show how the largest reductions in emissions have already taken place, with projections that further reductions will be possible by the introduction of on-board diagnostic systems, in-service emissions testing, recall programmes and fuel quality improvements (CONCAWE, 1997). These reductions in petrol and diesel engined vehicle emissions are sufficient to leave little room for improvement by switching to alternative hydrocarbon fuels such as natural gas or vegetable oil. The only cleaner option, as far as local emissions are concerned, is for a zero-emissions vehicle powered by electricity or hydrogen fuel cells. For such vehicles, it is important to consider, however, the total environmental impact of their use, as the air pollution emissions from remote generation of electricity or production of hydrogen fuel could possibly exceed the exhaust emissions that a conventional vehicle would produce. The main advantage of zero-emission vehicles is that the emissions can be relocated to where they are further from human receptors, so benefits to human health can be obtained while other environmental impacts are not reduced (see Fig. 1). Many decades, they say. You can probably take that with a grain of salt. When comparing different impacts of aircraft upon the global atmosphere with each other, and with the effect of emissions from other transport sectors and non transport related activity, the most challenging aspect of CO2 is perhaps the time scale over which it has an effect. CO2 is chemically sufficiently unreactive for its dominant removal process to be physical. Solution in the water of the upper ocean and exchange of carbon between the atmosphere and terrestrial biomass are relatively rapid, with the combined annual flux amounting to 20% of the atmospheric carbon reservoir mass of 750 GT (Houghton et al., 1996), but these fluxes are bi-directional. The rate determining step for net removal of carbon is mixing from the surface and intermediate ocean to the much larger carbon reservoir of the deep oceans. At the turn of the 21st Century, anthropogenic carbon emissions of 7 to 8 GT per year (including deforestation) are greater than the equilibrium rate of removal at current atmospheric and surface ocean concentrations, such that an amount of carbon equal to around half the emissions each year are removed and the imbalance results in a steady increase in atmospheric carbon dioxide levels. Were emissions to remain constant at today’s rate, the atmospheric concentration would reach an equilibrium level about one third higher than today’s value towards the end of the 21st Century. 
The global total emissions of CO2 from aviation in 1990 were about 450 million tonnes of carbon (Barrett, 1991), which was less than 20% of global road transport emissions and about 3% of total anthropogenic emissions. Furthermore, historical emissions of CO2 from aviation are almost zero going back just a few decades into the mid 20th Century, while around half the carbon dioxide from all anthropogenic sources currently in the atmosphere was emitted before 1980, so the overwhelming majority of the total is from non-aviation sources. The small contribution of aviation is, however, increasing, and the small amounts of CO2 being emitted by aircraft now will remain in the air for many decades. Finally, water vapour from jet engines can also form line-shaped clouds in the free troposphere. The temperature of these clouds is lower than that of Earth’s surface, so their black body radiation is less than what would be emitted from Earth’s surface were the clouds not there, resulting in net warming. This is more significant than the amount of incoming solar radiation reflected, so that overall the contrails have a warming effect on climate at the surface. Usually, contrails evaporate again within minutes or even seconds such that their impact is negligible, but under certain meteorological conditions they can be sufficiently persistent [and] a large part of the sky can become obscured continually along a major flight path until weather conditions change many hours or days later. In the stratosphere, contrails are never persistent because of the low ambient relative humidity there, although the water vapour from aircraft is not removed rapidly by precipitation as it is in the troposphere so has a small warming effect on climate because of its greenhouse gas properties.

-Current ability to quantify impact and major sources of uncertainty-

In theory, the impact of aircraft emissions on upper troposphere and lower stratosphere chemistry can be quantified using global models of circulation and chemistry (such as Johnson et al., 1999). However, despite the fact that the reaction mechanisms are now qualitatively understood, quantifying the impact of aircraft emissions remains elusive. There are two main reasons for this: Firstly, the chemical reaction cycles are complex, as different gas-phase and heterogeneous pathways become more important at different temperatures. Small errors in the predicted mix of different pollutants can propagate via resulting errors in the relative rates of two or more competing reactions to end up with quite unrealistic simulated O3 concentrations. Not only must the chemical composition of the upper troposphere and stratosphere be simulated accurately, but rates of mixing between layers as well as chemistry determines the composition, the temperature needs to be known to determine where heterogeneous processes occur, and the temperature has a large influence on the mixing. The whole process of stratospheric O3 destruction in particular is a highly non-linear catastrophic process. Secondly, emissions of aircraft in the upper troposphere and stratosphere occur along highly localised flight paths that vary in time and space. The physical size of these is much less than the resolution of the global-scale models that are required to simulate chemistry in the upper troposphere and stratosphere. This problem of scale is added to the fact that the total emissions from aircraft are at least as difficult to quantify as emissions for road traffic are on the ground.
It is exacerbated by the fact that other sources of the same pollutants in the upper troposphere and lower stratosphere, such as lightning and mixing from the lower troposphere, are also very difficult to quantify accurately. Any one of these difficulties would make calculations of the total atmospheric impact of aircraft emissions liable to error. Combined, they present a very formidable challenge indeed for the science of atmospheric chemistry modelling. The most recent calculations indicate that the effect of aircraft NOx emissions on producing O3 in the upper troposphere / lower stratosphere is greater than the effect of sulphur and soot emissions on destroying O3, except at high latitudes (Colvile et al., 2000).
<urn:uuid:b0c003c3-8094-4242-b055-387bd1f23831>
2.84375
1,511
Comment Section
Science & Tech.
26.738492
1,311
Date: Dec 20, 2012 7:43 AM
Author: Brigham Andrew White
Subject: Help with inequality

Can someone help me with the steps involved to solve the following inequality?

2/(x-1) >= -1

The method I attempted was to solve it the same as though it were an equation but it doesn't seem to give the correct answer. Thanks.
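No answer appears in the archived post; as an illustration only, here is one way to check the solution set with sympy. The catch with treating it like an equation is that multiplying both sides by x - 1 flips the inequality when x - 1 is negative, so the cases x > 1 and x < 1 must be handled separately.

```python
# Sketch: verifying the solution of 2/(x - 1) >= -1 with sympy (assumed available).
from sympy import Symbol, solve_univariate_inequality

x = Symbol("x", real=True)
solution = solve_univariate_inequality(2/(x - 1) >= -1, x, relational=False)
print(solution)   # expected: Union(Interval(-oo, -1), Interval.open(1, oo))
```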
<urn:uuid:189af6d0-b3f5-419b-858a-0be82173744f>
2.546875
74
Q&A Forum
Science & Tech.
69.293672
1,312
Body Surface Area Calculator

Calculate the surface area of your body (in square meters) from height and mass.

Levels: Middle School (6-8), High School (9-12)
Resource Types: Web Interactive/Java
Math Topics: Terms/Units of Measure, Human Biology
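The listing does not say which formula the applet uses; as a rough illustration only, the Du Bois formula is one common way to estimate body surface area from height and mass.

```python
# Du Bois formula (an assumption; the linked calculator may use a different one):
#   BSA (m^2) = 0.007184 * height_cm**0.725 * weight_kg**0.425
def body_surface_area(height_cm, weight_kg):
    return 0.007184 * height_cm**0.725 * weight_kg**0.425

print(round(body_surface_area(170, 70), 2))  # roughly 1.8 m^2 for a 170 cm, 70 kg person
```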
<urn:uuid:0e2cb730-b30b-4987-a912-1611a2a808b3>
2.625
123
Content Listing
Science & Tech.
36.006149
1,313
Geoengineering schemes need ranking system to avoid wasting money, destroying the planet October 26, 2008 With so-called geoengineering proposals proliferating as concerns over climate change mount, Philip Boyd of New Zealand's NIWA warns that "no geo-engineering proposal has been tested or even subjected to preliminary trials". He says that despite widespread media attention, scientists have yet to even come up with a way to rank geoengineering schemes for their efficacy, cost, associated risk, and timeframe. Thus it is unclear whether ideas like carbon burial, geochemical carbon capture, atmospheric carbon capture, ocean fertilization, cloud manipulation, "space sunshades", or strategically-placed pollution can be effective on a time-scale relevant to humankind, economical, or even safe. "The rationale for any geo-engineering scheme must be based on its efficacy," he writes, then noting that existing proposals have often started out with "overoptimistic claims on efficacy", "oversimplistic cost estimates", and failure to recognize "unwanted" and "potentially expensive" side-effects. To better evaluate human solutions to a human-created problem, Boyd writes that scientists "must apply metrics that incorporate efficacy, cost, risk and time in order to rank where future research effort is best focused." He proposes a transparent ranking system based on objective criteria to determine what projects are most promising and therefore worthy of limited government research funds. "Such an assessment of all of the well-established proposals is urgently needed but so far entirely lacking," he writes. "Funding research into only a few promising schemes, according to such metrics, may lead to one or two relatively reliable mitigation options that can be placed in a 'climate-change toolbox'. In the near future, we must decide the relative importance of time, cost, risk and efficacy in tackling climate change if it is decided to press ahead with a geo-engineering approach. Of course, it could transpire after such an analysis that climate mitigation strategies with a very low risk but apparently higher costs, such as direct carbon capture and storage, are the best approach." "As the costs of inaction and of delaying the mitigation of climate change are rising, an initial high investment — matched with a very low risk — may seem more and more reasonable," he concludes.

Philip W. Boyd. Ranking geo-engineering schemes. Nature Geoscience, Vol. 1, November 2008. www.nature.com/naturegeoscience

Shell Oil funds "open source" geoengineering project to fight global warming (7/21/2008) Shell Oil is funding a project that seeks to test the potential of adding lime to seawater as a cost-effective way to fight global warming by sequestering large amounts of carbon dioxide in the world's oceans, reports Chemistry & Industry magazine.

Geoengineering solution to global warming could destroy the ozone layer (4/24/2008) A proposed plan to fight global warming by injecting sulfate particles into Earth's upper atmosphere could damage the ozone layer over the Arctic and Antarctic, report researchers writing in the journal Science.

Planktos kills iron fertilization project due to environmental opposition (2/19/2008) Planktos, a California-based firm that planned a controversial iron-fertilization scheme in an attempt to qualify carbon offsets, announced that it failed to find sufficient funding for its efforts and would postpone its project indefinitely.
Too early to say if iron seeding will slow global warming - scientists (1/10/2008) Schemes to feed the ocean with iron as a way to enhance carbon sequestration from the atmosphere are premature and could be damaging to sea life and marine ecosystems, warns a letter published in the journal Science by an international group of scientists.

New research discredits a $100 billion geoengineering fix to global warming (11/29/2007) Scientists have revealed an important discovery that raises doubts concerning the viability of plans to fertilize the ocean to solve global warming, a projected $100 billion venture.
<urn:uuid:2dfeeb51-c307-4fcd-b8ef-69d4949b7409>
2.65625
829
Content Listing
Science & Tech.
19.33357
1,314
Finding affordable ways to make technology available to everyone is a common challenge. Now, a researcher at NASA's Goddard Space Flight Center, Greenbelt, Md. has done that with the process that creates "nanotubes." A nanotube is a tiny, hollow, long, thin and strong tube with an outside diameter of a nanometer that is formed from atoms such as carbon. Nanotubes are really important in technology, because when they are made a certain way, a nanotube can conduct (allow movement of) electricity as well as copper does. When they are made a slightly different way, nanotubes are electrical semiconductors, which means they can be switched between insulating and conducting electricity. Semiconductors make it possible to miniaturize electronic components. Nanotubes can be either semiconductors or conductors depending on how they are made. Nanotubes are also stronger than steel, so long filaments can be used to create super-tough lightweight materials. To understand how strong a nanotube is, think of a hair holding up a barbell. Although carbon nanotubes were discovered 15 years ago, their use has been limited due to the complex, dangerous, and expensive methods for their production. However, Goddard researchers Drs. Jeannette Benavides and Henning Leidecker developed a simpler, safer, and much less costly process to make these carbon nanotubes. The key was that they figured out how to produce bundles of these nanotubes without using metal, which reduced the costs tremendously and made a better quality product. Earlier this year, NASA Goddard licensed its patented technique for manufacturing these high-quality "single-walled carbon nanotubes" to Idaho Space Materials (ISM) in Boise, Idaho. Now the carbon nanotubes based on this creation process are being used by researchers and companies that are working on things that will impact almost every facet of life, such as new materials with ceramics and polymers. Polymers are tiny molecules strung in long repeating chains, like DNA in our bodies. Polymers are also in proteins and starches in foods we eat, or in plastics, for example. "ISM believes that carbon nanotubes will be a building block for a better world, making people's lives better through a wide range of uses, including medical advances, fuel cells, video displays, solar cells, and a host of other applications," explained ISM vice president Roger Smith. "I'm very excited to see that this agreement is now making carbon nanotubes more readily available, particularly for academic and other research programs," said Dr. Benavides, who demonstrated the technology to ISM and provided expertise during the process to make the technology come to market. "The fact that they now have access to lower cost carbon nanotubes [means great things] for the future of nanotechnology." Source: Goddard Space Flight Center
<urn:uuid:63aef173-4132-4a14-9974-19aea95afd40>
4.28125
617
News Article
Science & Tech.
26.52749
1,315
Aliens under the rainbow The geometry that gives rise to rainbows may help scientists to find out whether other planets contain water, which is necessary to sustain life. Rainbows are formed because light rays are bent, or refracted, and scattered as they enter droplets of liquid that hang in the atmosphere. The refraction occurs because light waves are slowed as they enter the droplet — think of a shopping trolley slowing down as you push it onto a lawn at an angle, and changing its direction as a result. The amount by which the light rays are slowed, and hence bent, depends on the liquid's consistency and is measured by its refractive index. Thus, different liquids give rise to rainbows at different angles, a fact that enabled researchers to determine that the clouds of Venus are droplets of concentrated sulfuric acid. Researchers now suggest that the same approach could be used to detect clouds made of liquid water in a planet's atmosphere. posted by Plus @ 4:12 PM
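As a sketch of the geometry being described (not from the article itself): for the primary bow, the rainbow angle follows from Snell's law plus the minimum-deviation condition, so different refractive indices give different bow angles. The refractive index used below for concentrated sulfuric acid is approximate.

```python
# Primary-rainbow angle from the refractive index n (one internal reflection).
# At minimum deviation: cos(i) = sqrt((n**2 - 1) / 3), sin(i) = n * sin(r),
# and the bow appears at an angle of 4*r - 2*i from the antisolar point.
import math

def rainbow_angle_deg(n):
    i = math.acos(math.sqrt((n**2 - 1) / 3))
    r = math.asin(math.sin(i) / n)
    return math.degrees(4 * r - 2 * i)

print(round(rainbow_angle_deg(1.333), 1))  # water droplets: about 42 degrees
print(round(rainbow_angle_deg(1.44), 1))   # concentrated sulfuric acid (approximate n): a smaller bow
```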
<urn:uuid:6817f16d-d0d9-436b-a805-995fb3a45f64>
3.703125
200
News Article
Science & Tech.
41.584295
1,316
The computer (or more accurately the compiler) doesn't really care at all what number base you use in your source code. Most commonly used programming languages support bases 8 (octal), 10 (decimal) and 16 (hexadecimal) directly. Some also sport direct support for base 2 (binary) numbers. Specialized languages may support other number bases as well. (By "directly support", I mean that they allow entry of numerals in that base without resorting to mathematical tricks such as bitshifting, multiplication, division etc. in the source code itself. For example, C directly supports base-16 with its 0x number prefix and the regular hexadecimal digit set of 0123456789ABCDEF. Now, such tricks may be useful to make the number easier to understand in context, but as long as you can express the same number without them, doing so - or not - is only a convenience.) In the end, however, that is inconsequential. Let's say you have a statement like the following: int n = 10; The intent is to create an integer variable and initialize it with the decimal number 10. What does the computer see? i n t n = 1 0 ; 69 6e 74 20 6e 20 3d 20 31 30 3b (ASCII, hex) The compiler will tokenize this, and realize that you are declaring a variable of type int with the name n, and assigning it some initial value. But what is that value? To the computer, and ignoring byte ordering and alignment issues, the input for the variable's initial value is 0x31 0x30. Does this mean that the initial value is 0x3130 (12592 in base 10)? Of course not. The language parser keeps reading the file in the character encoding used, so it reads the digit 1, then the digit 0, followed by a statement terminator. Since in this language base 10 is assumed, this reads (backwards) as "0 ones, 1 tens, end". That is, a value of 10 decimal. If we specify a value in hexadecimal, and our language uses 0x to indicate that the following value is in hexadecimal, then we get the following: i n t n = 0 x 1 0 ; 69 6e 74 20 6e 20 3d 20 30 78 31 30 3b (ASCII, hex) The compiler sees 0x (0x30 0x78) and recognizes that as the base-16 prefix, so it looks for a valid base-16 number following it. Up until the statement terminator, it reads 10. This translates to 0 "ones", 1 "sixteens", which works out to 16 in base 10. Or 00010000 in base 2. Or however else you like to represent it. In either case, and ignoring optimizations for simplicity's sake, the compiler allots enough storage to hold the value of an int type variable, and places the value it read from the source code into some sort of temporary holding variable. It then (likely much later) writes the resulting binary values to the object code file. As you see, the way you write numerical values in the source code is completely inconsequential. It may have a very slight effect on compile times, but I would imagine that (again, ignoring optimizations such as disk caching by the operating system) things like random turbulence around the rotating platters of the disk, disk access times, data bus collisions, etc., have a much greater effect. Bottom line: don't worry about it. Write numbers in a base that your programming language of choice supports and which makes sense for how the number will be used and/or read. You spent far more time reading this answer than you will ever recover in compilation times by being clever about which number base to use in source code. ;)
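If it helps to see the idea in executable form, here is a rough Python sketch of the digit-by-digit accumulation a literal parser performs. It is not the actual C tokenizer, just the same logic: pick a base from the prefix, then fold each digit character into the value.

```python
def parse_int_literal(text):
    """Rough sketch of how a tokenizer might turn the characters of an
    integer literal into a value; real compilers do the same thing with
    more error handling and more bases."""
    digits = "0123456789abcdef"
    if text.lower().startswith("0x"):        # hexadecimal prefix, as in C
        base, text = 16, text[2:]
    elif text.startswith("0") and len(text) > 1:
        base, text = 8, text[1:]             # leading zero means octal, as in C
    else:
        base = 10
    value = 0
    for ch in text.lower():
        value = value * base + digits.index(ch)  # accumulate digit by digit
    return value

print(parse_int_literal("10"))    # 10  -- "0 ones, 1 tens"
print(parse_int_literal("0x10"))  # 16  -- "0 ones, 1 sixteens"
print(parse_int_literal("010"))   # 8   -- octal, for completeness
```

Whatever base the characters were written in, the value that ends up in storage is the same binary quantity, which is exactly why the choice is only a convenience.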
<urn:uuid:0e08e071-3740-495f-909d-38d7f5336f7d>
3.859375
807
Q&A Forum
Software Dev.
58.284539
1,317
Prog. Theor. Phys. Vol. 82 No. 3 (1989) pp. 555-562 Source Abundance of Cosmic Rays Randall Laboratory of Physics, University of Michigan, Ann Arbor Research Institute for Fundamental Physics, Kyoto University, Kyoto 606 (Received March 4, 1989) The source abundance of primary cosmic rays is computed and compared with the solar abundance. Then a model, proposed by one of the authors, which is based on quantum effects on gravity, is discussed and shown to yield a prediction for the source abundance. The prediction of the model is compared with the calculated source abundance. DOI: 10.1143/PTP.82.555 - John R. Letaw, R. Silberberg and C. H. Tsao, Ap. J. Suppl. Series 56 (1984), 369. - Y. Tomozawa, "Cosmic Rays, Quantum Effects on Gravity, and Gravitational Collapse", lectures given at the Second Workshop on Fundamental Physics, University of Puerto Rico, Humacao, ed. E. Esteban (1986), p. 144. - M. S. Longair, High Energy Astrophysics (Cambridge University Press, 1981), p. 312. - R. Silberberg and C. H. Tsao, Ap. J. Suppl. Series No. 220(I) 25 (1973), 315; No. 220(II) 25 (1973), 335. R. Silberberg, C. H. Tsao and J. R. Letaw, Ap. J. Suppl. Series 58 (1985), 873. - A. G. W. Camerons, "Elemental and Nuclidic Abundances in the Solar System", Essays in Nuclear Astrophysics, ed. C. A. Barkes, D. D. Clayton and D. N. Schramm (1982). - W. R. Binns, R. K. Fickle, T. L. Garrard, M. H. Israel, J. Klarmann, E. C. Stone and C. J. Waddington, Ap. J. 247 (1981), L115. - M. Cassé and P. Gorel, Ap. J. 221 (1978), 703 (for Z ≤28). N. R. Brewster, P. S. Frier and C. J. Waddington, Ap. J. 264 (1983), 329. Also see Ref. 6) for Z ≤40. - S. E. Woosley and T. A. Weaver, Annu. Rev. Astron. Astrophys. 24 (1986), 205. - Y. Tomozawa, Quantum Field Theory, ed. F. Mancini (1985), p. 241; The INS International Symposium on Composite Model of Quarks and Leptons, ed. H. Terazawa and M. Yasue (1985), p. 386; "Quantum Corrections to Gravitational Potential and Gravitational Collapse", The 26th International Astrophysical Colloquium, Liege (1986), p. 137; "Mass and Length Scale of Black Holes in Quasars and Active Galactic Nuclei", Conference on Active Galactic Nuclei, Atlanta, ed. H. R. Miller and P. Wiita (Springer Verlag, 1987), p. 236.
<urn:uuid:5ca78bc2-6b81-44ef-9a40-96e157cda963>
2.625
724
Academic Writing
Science & Tech.
91.627852
1,318
Active regions on the solar surface are generally thought to originate from a strong toroidal magnetic field generated by a deep seated solar dynamo mechanism operating at the base of the solar convection zone. Thus the magnetic fields need to traverse the entire convection zone before they reach the photosphere to form the observed solar active regions. Understanding this process of active region flux emergence is therefore a crucial component for the study of the solar cycle dynamo. This article reviews studies with regard to the formation and rise of active region scale magnetic flux tubes in the solar convection zone and their emergence into the solar atmosphere as active regions. Coronal holes are the darkest and least active regions of the Sun, as observed both on the solar disk and above the solar limb. Coronal holes are associated with rapidly expanding open magnetic fields and the acceleration of the high-speed solar wind. This paper reviews measurements of the plasma properties in coronal holes and how these measurements are used to reveal details about the physical processes that heat the solar corona and accelerate the solar wind. It is still unknown to what extent the solar wind is fed by flux tubes that remain open (and are energized by footpoint-driven wave-like fluctuations), and to what extent much of the mass and energy is input intermittently from closed loops into the open-field regions. Evidence for both paradigms is summarized in this paper. Special emphasis is also given to spectroscopic and coronagraphic measurements that allow the highly dynamic non-equilibrium evolution of the plasma to be followed as the asymptotic conditions in interplanetary space are established in the extended corona. For example, the importance of kinetic plasma physics and turbulence in coronal holes has been affirmed by surprising measurements from the UVCS instrument on SOHO that heavy ions are heated to hundreds of times the temperatures of protons and electrons. These observations point to specific kinds of collisionless Alfvén wave damping (i.e., ion cyclotron resonance), but complete theoretical models do not yet exist. Despite our incomplete knowledge of the complex multi-scale plasma physics, however, much progress has been made toward the goal of understanding the mechanisms ultimately responsible for producing the observed properties of coronal holes. We review the properties of solar convection that are directly observable at the solar surface, and discuss the relevant underlying physics, concentrating mostly on a range of depths from the temperature minimum down to about 20 Mm below the visible solar surface. The properties of convection at the main energy carrying (granular) scales are tightly constrained by observations, in particular by the detailed shapes of photospheric spectral lines and the topology (time- and length-scales, flow velocities, etc.) of the up- and downflows. Current supercomputer models match these constraints very closely, which lends credence to the models, and allows robust conclusions to be drawn from analysis of the model properties. At larger scales the properties of the convective velocity field at the solar surface are strongly influenced by constraints from mass conservation, with amplitudes of larger scale horizontal motions decreasing roughly in inverse proportion to the scale of the motion. 
To a large extent, the apparent presence of distinct (meso- and supergranulation) scales is a result of the folding of this spectrum with the effective "filters" corresponding to various observational techniques. Convective motions on successively larger scales advect patterns created by convection on smaller scales; this includes patterns of magnetic field, which thus have an approximately self-similar structure at scales larger than granulation. Radiative-hydrodynamical simulations of solar surface convection can be used as 2D/3D time-dependent models of the solar atmosphere to predict the emergent spectrum. In general, the resulting detailed spectral line profiles agree spectacularly well with observations without invoking any micro- and macroturbulence parameters due to the presence of convective velocities and atmosphere inhomogeneities. One of the most noteworthy results has been a significant reduction in recent years in the derived solar C, N, and O abundances with far-reaching consequences, not the least for helioseismology. Convection in the solar surface layers is also of great importance for helioseismology in other ways; excitation of the wave spectrum occurs primarily in these layers, and convection influences the size of global wave cavity and, hence, the mode frequencies. On local scales convection modulates wave propagation, and supercomputer convection simulations may thus be used to test and calibrate local helioseismic methods. We also discuss the importance of near solar surface convection for the structure and evolution of magnetic patterns: faculae, pores, and sunspots, and briefly address the question of the importance or not of local dynamo action near the solar surface. Finally, we discuss the importance of near solar surface convection as a driver for chromospheric and coronal heating. This article surveys the development of observational understanding of the interior rotation of the Sun and its temporal variation over approximately forty years, starting with the 1960s attempts to determine the solar core rotation from oblateness and proceeding through the development of helioseismology to the detailed modern picture of the internal rotation deduced from continuous helioseismic observations during solar cycle 23. After introducing some basic helioseismic concepts, it covers, in turn, the rotation of the core and radiative interior, the “tachocline” shear layer at the base of the convection zone, the differential rotation in the convection zone, the near-surface shear, the pattern of migrating zonal flows known as the torsional oscillation, and the possible temporal variations at the bottom of the convection zone. For each area, the article also briefly explores the relationship between observations and models.
<urn:uuid:02ff8782-f914-4296-8743-e76730e0d47e>
3.078125
1,195
Content Listing
Science & Tech.
9.923298
1,319
Source: Climate Change Reconsidered Bali, R., Agarwal, K.K., Ali, S.N. and Srivastava, P. 2011. Is the recessional pattern of Himalayan glaciers suggestive of anthropogenically induced global warming? Arabian Journal of Geosciences 4: 1087-1093. Bali et al. (2011) introduce their review of what is known about Himalayan glaciers by noting that a “glacial inventory carried out by the Geological Survey of India reveals the existence of over 9,000 valley glaciers in India and at least about 2,000 glaciers in Nepal and Bhutan,” citing Raina (2006). And they say that “following the alarmist approach of the Intergovernmental Panel on Climate Change (IPCC),” a number of subsequent reports related to the bleak future of Himalayan glaciers have been issued, mainly through the media. These reports, as they describe them, have suggested that “almost all Indian glaciers including the Gangotri glacier will vanish from the Earth in the next few decades.” More particularly, they say the reports suggest that “initially, there would be flooding followed by the drying of glacial fed rivers of the Indian subcontinent, desertification, rise of sea level, submergence of the coastal areas, spread of diseases, drop in the production of food grains, etc.,” all due, of course, to “anthropogenically induced global warming.” So what’s the real story? (more…)
<urn:uuid:7bf40d0b-507e-4788-a92e-2f3a60370fd6>
2.671875
322
Truncated
Science & Tech.
48.174328
1,320
Milky Way's Birkeland Current Falsifies "Black Hole" Assumption Anatomy of the Milky Way core At present, we find ourselves in the unsatisfying position of having remarkable new observational insight into the nature of the galactic center but lacking a sturdy interpretive framework. - Robert L. Brown and Harvey S. Listz, "Sagittarius A and its Environment", Annual Review of Astronomy and Astrophysics Written in 1984, it appears that since then things have only... According to some estimates, the bright radio source known as Sagittarius A* (pronounced A-star), residing at the heart of our Milky Way galaxy, is more than 50 light-years wide. Its radio glow stems from synchrotron radiation, the result of charged particles spiraling around magnetic field lines at relativistic speeds. Assumptions have been made that the Sag A* complex is the result of a massive black hole. Yet, save for mathematical computer models based on gravitational conjecture, not a single black hole has ever been found. Might the well-known plasma dynamics of an electric universe reveal the true nature of the Milky Way's galactic center? The centralmost region of our galaxy is filled with a variety of dense molecular clouds (plasma). A variety of factors such as temperature, chemical composition, bulk velocity, etc., are known to separate plasma of differing characteristics into separate molecular clouds. This self-organization is a fundamental aspect of plasma and is the 'life-like' quality that prompted Irving Langmuir to name plasmas after blood plasma. The Sagittarius A complex is divided into "Sag A East" and "Sag A West". Running perpendicular to the galactic plane is a series of filaments, known to be magnetic, called the Arc. They collectively form a long, linearly polarized filament that appears to interact with other molecular clouds, but the filaments do not appear to be deflected by that interaction. That characteristic and their perpendicular relationship also imply that the "threads" composing the Arc are tracing the path of magnetic field lines. One of these molecular clouds is known as "M-0.02-0.07", or simply the "50 km s-1 cloud". This particular dense cloud of plasma, often mistakenly called a "gas", is considered "unique" due to high levels of energetic activity. This energetic activity is the result of an interaction which produces the central "plasma-focused plasmoid" and non-thermal filamentation, as explained in Wheel within a Wheel. Within the brightest region of Sagittarius A*, yet another dynamic feature resides. It is known as the "circumnuclear disk" and has been described as an orbiting oval-shaped ring of "molecular gas". However, that description belittles the true nature of the Milky Way galaxy's plasma torus. The plasma torus appears to orbit at an estimated 100 kilometers per second, and is said to "feed" the central 7 light-year wide feature known as the "Mini-Spiral", the "wheel within a wheel" of our Milky Way galaxy. The Mini-Spiral is composed of the "Eastern Arm", the "Western Arc", and the "Northern Arm", and all three appear to be joined at a relatively small central "bar" such as those which distinguish barred spiral galaxies from spiral galaxies. Astrophysicists are generally at great pains to determine the cause of such highly energetic activity, but not one of their number has, or can, explain how their gravity-only universe can account for the wide variety of compelling features collectively embodied in the Milky Way's galactic nucleus.
It's "wheel within a wheel", or the "Mini-Spiral", has sent them scrambling for any number of assumptive gravitational scenarios such as 'tidally stretched and disrupted clouds', "gravitational potential due to the point-mass", "accretion disk", explosive "blast waves" from supernova - although the magnitude of Sag A* refutes that notion, 'molecular cloud collisions', "shock models", and of course a theoretical black hole "with over times the mass of the Sun." Or is it 3.7 million solar masses? But no one has explained how so many supposedly "young stars" and star clusters, such as The Arches Cluster, and The Quintuplet Cluster can exist in a region so close to an alleged black hole. In it's attempts to wrestle with the cause of anomalous gravitational behavior modern astrophysics inadvertently misconstrues the known plasma dynamic of self-organization and reinterprets the observed behavior as "self-consistent". Atop this interpretation any number of gravitational scenarios and inferences are then placed. The conventional theory of stellar formation via gravitational collapse fails at galactic center. For example: the standard model for star formation, gas clouds from which stars form should have been ripped apart by tidal forces from the supermassive black hole. Evidently, the gravity of a dense disk of gas around Sagittarius A* offsets the tidal forces and allows stars to form. The tug-of-war between the black hole's tidal forces and the gravity of the disk has also favored the formation of a much higher proportion of massive stars than normal. - "Stars Surprisingly Form in Extreme Environment Around Milky Way's Black Hole" [Emphasis added] Finding such big star clusters so near the gravitational pull of the galactic center is surprising; tidal forces should rip them apart. - Angelle Tanner, Sky & Telescope When observation contradicts the gravity only cosmology the result is to immediately 'morph' the supposed gravitational characteristics of the ever pliable theoretical black hole. It is habitually done on Not only do we see the 'scavenging' of plasma via well known Marklund Convection which " inwards, with the normal E x B/B2 velocity, towards the center of a cylindrical flux tube" it has been a lack of familiarity with plasma dynamics that has gravitationally interpreted the E x B drift of plasma towards the Mini-Spiral as " material... falling inward". The Serpent in the Sky When we assign culpability for radio structures many hundreds of kilo parsecs in extent to "nuclear activity" and then ascribe that activity to a massive nuclear black hole, we appear to basing our conclusions in large measure on informed, or perhaps inspired speculation. We may be correct, but we also may be simply engaged in clever legerdemain. - Robert L. Brown and Harvey S. Listz "Sagittarius A and it's Environment": Annual Review of Astronomy and Astrophysics [Emphasis added] The double helix nebula in infrared - Credit: M. Morris UCLA In June 2006 NASA's Spitzer Space Telescope and UCLA announced the "unprecedented" discovery of Double Helix Nebula. The customary photo released on the occasion merely revealed the approximately 80 light-year long tip of a proverbial iceberg. 
It appears to have been Mark Morris of UCLA who made the connection and described the Double Helix Nebula in a manner appropriate for the active plasma dynamics of an electric universe: The direct connection between the circumnuclear disk and the double helix is ambiguous, but the images show a possible meandering channel that warrants further investigation - M. Morris, "A magnetic torsional wave near the Galactic Centre traced by a 'double helix' nebula", Nature Letters, vol. 440 [Emphasis added] The Double Helix Nebula is not sitting still. At a distance of perhaps some 300 light-years from the Sag A* complex, the Double Helix Nebula exhibits unusually high dust temperatures for a galactic feature so far above the galactic plane and unaccompanied by nearby star formation. Morris also points out that the axis of the Double Helix Nebula points "roughly" towards galactic center and is oriented along the galaxy's axis of rotation. Morris and his colleagues say the cause of the twist may be a huge disc of gas, known as the circumnuclear disc, which orbits just a few light-years outside the black hole at our galaxy's center. Morris told New Scientist the magnetic lines should be anchored in the circumnuclear disk. Again, to accredit the existence of such fully formed electromagnetic structures within such close proximity to a theoretical black hole should refute the existence of the latter. Morris then searched for a "meandering channel" through which a possible "torsional Alfven wave" could travel from the bright circumnuclear disk of the Sag A* complex. Although heavily obscured by dust, as can be seen from comparative photos, it appears that Morris successfully traced the 'dust-infused' portion of a "meandering" Birkeland current at least 300 light-years in length towards its point of intersection with the 50 km s-1 cloud and circumnuclear disk. In addition, the unusually hot dust within the 80 light-year-long tip is directly related to the scavenging of dust and plasma via the plasma-related process of Marklund convection, misconstrued in the abovementioned Sky & Telescope article as being " falling inward". The plasma flow is usually inwards as matter is accumulated in the filaments, revealing helically twisted densities greater than the surrounding... When coupled with the work of Anthony Peratt, wherein: Plasmas in relative motion are coupled by the currents they drive in each other and nonequilibrium plasma often consists of current-conducting filaments. In the laboratory and in the Solar System, filamentary and cellular morphology is a well-known property of plasma. As the properties of the plasma state of matter are believed not to change beyond the range of our space probes, plasma at astrophysical dimensions must also be filamentary. - A. L. Peratt, "...and the universe: large scale dynamics, filamentation, and..." Consider the structural formations: the plasma torus (circumnuclear disk), the "Mini-Spiral" enclosed within it, dust undergoing inwardly directed radial convection apparently up and out along the massive Birkeland current filament away from the galaxy center. The very existence of such structural integrity stares in complete defiance of said black hole theory. During particle-in-cell simulations with up to 12 filaments, Peratt also noted that multiple Birkeland currents can "neck off", leaving fewer (2-3) in number to account for the majority of "cosmic plasma phenomena". Through the decades-long work of plasma physics, the Electric Universe is not found "lacking a sturdy interpretive framework".
The Double Helix Nebula fully demonstrates the nature of galactic-dimensioned Birkeland currents.
- "The Double Helix Nebula: a magnetic torsional wave propagating out of the Galactic centre": Mark Morris (UCLA), Keven Uchida (Cornell), Tuan Do (UCLA) (see pages 11 & 14 for graphical presentation)
- "A trip to Galactic Center": Sky & Telescope
- "The Origin of the High-Energy Activity at the Galactic Center": F. Yusef-Zadeh, W. Purcell, E. Gotthelf
<urn:uuid:4857f76d-4562-4a42-8b6b-c4bebd17c240>
2.953125
2,507
Comment Section
Science & Tech.
30.433822
1,321
Twisted Web is: - an HTTP server that can be used as a library or run as a stand-alone server - an HTML templating engine - an HTTP client library Twisted Web supports numerous standards; for example, it can serve as a WSGI and CGI container, or an XMLRPC server. It can also serve static content. Twisted Web provides built-in support for name-based virtual hosts, reverse proxying, XML parsing, and more. Twisted Web is very simple to set up as a stand-alone server. For example, to run a server that serves static content out of the current directory, you can just run this short command line: twistd web --path . --port 8080 To run a WSGI application, it's just as simple: twistd web --wsgi my.application.name --port 8080 Because Twisted Web is also a Python library with a documented API, you can configure your server entirely using Python. For example, let's say you have a bunch of directories with names corresponding to each domain you want to serve from your web server. Here's the configuration file which creates a virtual host configuration serving static content for each domain out of the directory matching its name: # virtual.rpy from twisted.python.filepath import FilePath from twisted.web.static import File from twisted.web.vhost import NameVirtualHost resource = NameVirtualHost() for p in FilePath(".").children(): resource.addHost(p.basename(), File(p.path)) This configuration can be run with: twistd web --resource-script=virtual.rpy --port 8080 Unlike some other simple-to-run Python web servers, Twisted Web is a production-grade server that can be used to deploy real applications. Among other sites, this web site (twistedmatrix.com) is run entirely via Twisted Web. Because it's programmable, you can customize your deployment as much or as little as you like, including having your web server run periodic tasks. Because it's self-contained and requires no configuration, it's ideal for developing web applications because your development environment can mirror your deployment environment very closely with little effort. What It's Not: Twisted Web is a web server, and a framework for doing things with the web - although it shares some components in common with frameworks like Django, it's not a "web framework" in the same sense. The Twisted ecosystem includes many different web-related tools. Learn which one is right for you. See the Downloads page. Twisted Web is available under the MIT Free Software licence. Subscribe to the twisted-web mailing list or visit the #twisted.web channel on irc.freenode.net to ask questions.
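For completeness, here is a minimal sketch of driving Twisted Web directly from Python instead of the twistd command line, using the documented resource API. It assumes a recent Twisted release (where child path segments and response bodies are bytes); the directory name and port are illustrative.

```python
# serve.py -- a minimal sketch, not the only way to deploy Twisted Web.
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import File
from twisted.web.resource import Resource

class Hello(Resource):
    """A tiny dynamic resource served alongside the static files."""
    isLeaf = True
    def render_GET(self, request):
        return b"Hello from Twisted Web\n"   # response body must be bytes

root = File("./static")             # serve files from ./static (illustrative path)
root.putChild(b"hello", Hello())    # and a dynamic page at /hello

reactor.listenTCP(8080, Site(root))
reactor.run()
```

Running `python serve.py` gives roughly the same result as the `twistd web --path` example, but with full programmatic control over the resource tree.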
<urn:uuid:2bc465ee-0a3e-40dd-93ee-3b2f8769c664>
2.703125
585
Knowledge Article
Software Dev.
55.908814
1,322
Greenland ice a benchmark for warming Core data Greenland was about eight degrees warmer 130,000 years ago than it is today, an analysis of an almost three-kilometre-long ice core in Greenland has revealed. The finding by an international team of 38 institutions from 14 nations provides an important benchmark for climate change modelling and gives an insight into how the natural world will respond to global warming in the future. The study, which involves CSIRO researchers, also suggests Antarctica's ice sheets may be more vulnerable to warming than previously thought. Published in today's Nature journal, the results flow out of a four-year expedition known as the North Greenland Eemian Ice Drilling operation (NEEM). Dr David Etheridge, principal research scientist with CSIRO Marine and Atmospheric Research who has worked on the project, says the NEEM program is the first to successfully reach down into Greenland's ice core into the Eemian period, which stretched from 130,000 years to 115,000 years ago. "It has been something of a holy grail for Greenland work to achieve this … we are getting to ice close to the bedrock where you get melting and mixing of the ice layers." Etheridge says in a process similar to assembling a jigsaw puzzle, scientists used comparisons with gas elements in Antarctica's deep ice core records to re-assemble the layers in their original sequence. Deep ice drilling in the Antarctic has reached as far back as 800,000 years. Past and future It is important to understand what happened in Greenland during the Eemian period because the temperatures experienced then are "within the realms of where we are heading", says Etheridge. However, he says the previous warming was due to the Earth receiving more of the Sun's radiation due to its orbit at the time, while today's warming is being driven by increases in greenhouse gases in the atmosphere. Nature paper co-author Dr Mauro Rubino, of CSIRO Marine and Atmospheric Research, says it had been previously estimated that Greenland's temperature was about 4°C warmer during the Eemian than now. But this latest work used analysis of water-stable isotopes to estimate "the temperature 130,000 years ago was up to 8°C warmer [in Greenland] than what it is today", says Rubino. It also shows sea levels were on average 6 metres higher. The results provide "important benchmarks for future climate change projections" in temperature and the contribution of the two main ice sheets to sea level rises, Rubino says. He says the study also reveals the Greenland ice sheet did not melt as much as previously thought so was not the major contributor to sea level at that time. "It shows the major contribution to sea level rises was not coming from the Greenland ice shelf," he says. "It was previously believed that Greenland melted entirely [during the Eemian], but in fact the ice sheet was not that much different from what it is now. "Most of the contribution to sea level rise comes from these two big ice reserves [in Greenland and the Antarctica] so one of the possible interpretations is Antarctica is more susceptible to climate change than we thought." Etheridge agrees. He says the work shows the Greenland ice sheet survived during the Eemian - although it was about 400 metres thinner. "From that figure you can deduce how much it contributed to the sea level rise and it is not as much as was thought. "That throws things back to Antarctica ... previously the thought was Antarctica was too cold and too stable to be impacted." 
Etheridge says CSIRO was invited by lead institution, the University of Copenhagen, to be involved in NEEM at its formation because of its expertise in analysing air composition in air bubbles trapped in deep ice. Rubino says their team began analysis of gas bubbles from the first 80 to 100 metres of ice core down to the final 2540 metre depth. This helped track changes in climate and temperature on a year-by-year basis. He says the concentration of greenhouse gases such as carbon dioxide, methane and nitrous oxide in the air bubbles from the Eemian was much lower than what it is today.
<urn:uuid:96cf8d51-9a89-4975-ae4f-fecfebf943e1>
3.84375
860
News Article
Science & Tech.
43.843977
1,323
Science Fair Project Encyclopedia A ballistic missile is a missile, usually with no wings or fins, with a prescribed course that cannot be altered after the missile has burned its fuel, whereafter its course is governed by the laws of ballistics. In order to cover large distances, ballistic missiles must be launched very high into the air or into space, in a sub-orbital spaceflight; for intercontinental missiles the altitude halfway is ca. 1200 km. Once in space, with no more thrust provided, the missile is in free fall. Long- and medium-range ballistic missiles are generally designed to deliver nuclear warheads because their payload is too limited for conventional explosives to be efficient, and because the extreme heat of re-entry would damage chemical or biological payloads. Many advanced ballistic missiles have several rocket stages and their course can be slightly adjusted from one stage to the next. Ballistic missiles can vary widely in range and use, and are often divided into categories based on range. The US distinguishes: - Intercontinental ballistic missile (ICBM): range greater than 5500 km - Intermediate-range ballistic missile (IRBM): range between 3000 and 5500 km - Medium-range ballistic missile (MRBM): range between 1000 and 3000 km - Short-Range Ballistic Missile (SRBM): range less than 1000 km. An example is the Scud. Medium- to short-range missiles are often called theatre ballistic missiles (TBM). Using a missile with a considerably longer range than the distance from launch site to target can make sense: it can reach a higher altitude and come down with a higher speed, making defense more difficult. For example, a missile with a range of 3000 km fired at a target that is only 500 km away could arrive at its target after having reached an altitude of about 1200 km - roughly the height reached by ICBMs. Like them, it would arrive at a speed of typically more than 6 km/s. The first ballistic missile was the V-2 rocket, developed by Nazi Germany in the 1940s, which was successfully launched for the first time on October 3, 1942 and used for the first time in operation on September 8, 1944. Specific types of ballistic missiles include: - Agni missile - Blue Steel missile - Blue Streak missile - Minuteman missile - SS-24 missile - SS-18 missile - Peacekeeper missile - Polaris missile - Poseidon missile - Prithvi missile - CSS-2 missile - Condor missile - Jericho missile - Skybolt ALBM - Surya ICBM Ballistic missiles can be launched from fixed sites, mobile launchers and submarines. Specific types of ballistic missile submarines include: - Benjamin Franklin class submarine - Ohio class submarine - Resolution class submarine - Triomphant class - Redoutable class - additional ballistic missile submarines - http://www.fas.org/nuke/intro/missile/index.html - an introduction to ballistic missiles The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
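The US range categories listed above translate directly into a simple classification rule. The following Python sketch is only an illustration of those boundaries (the exact handling of the boundary values is an assumption, since the article does not specify it), not an official definition.

```python
def classify_ballistic_missile(range_km):
    """Classify a missile by maximum range using the US categories above."""
    if range_km > 5500:
        return "ICBM (intercontinental)"
    elif range_km >= 3000:
        return "IRBM (intermediate-range)"
    elif range_km >= 1000:
        return "MRBM (medium-range)"
    else:
        return "SRBM (short-range)"

# The Scud (a few hundred km) falls in the SRBM bucket; 3000 km sits on the
# MRBM/IRBM boundary, assigned here to IRBM by assumption.
for r in (300, 1500, 3000, 9000):
    print(r, "km ->", classify_ballistic_missile(r))
```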
<urn:uuid:730f7132-a857-46c4-90e7-d4a007bfd253>
3.921875
649
Knowledge Article
Science & Tech.
34.549926
1,324
The scientists, who are affiliated with the University of Washington (Seattle, WA), Case Western Reserve University (Cleveland, OH), the University of Bari (Bari, Italy), Washington University (St. Louis, MO), Washington State University (Pullman, WA), and Duke University (Durham, NC), report their findings online today in the journal Genome Research. Dr. Evan E. Eichler, Associate Professor of Genome Sciences at the University of Washington, heads the team. "Primate genomic sequence comparisons are becoming useful for elucidating the evolutionary history and organization of our own genome," he explains. "Such studies are particularly informative within human pericentromeric regions, areas of rapid change in genomic structure." Pericentromeric regions are sequences of DNA that lie in close proximity to the centromere, which plays a critical role in chromosomal separation during cell division. Pericentromeric regions contain an abundance of segmental duplications, which are large DNA sequences that exhibit strong similarity to the euchromatic ancestral loci from which they were copied. According to Eichler, the limited number of comparisons of pericentromeric regions among closely related primates suggests extraordinary dynamism, where duplication, deletion, and rearrangement of large segments of DNA occur at an unprecedented scale. Eichler's group performed a comprehensive structural analysis. Source: Cold Spring Harbor Laboratory
<urn:uuid:0aa51ce3-034b-4199-a6a7-f9eea7102942>
3.1875
288
News Article
Science & Tech.
-1.969545
1,325
Phase diagram of ESA complexes at 100 mM NaCl salt concentration as a function of pentanol and dodecane content relative to the surfactant weight [wt %]. The natural phenomenon of self-assembly — how molecules or other entities gather to become ordered objects or arrays — occurs in many areas of science, from nanomaterials to biology. Resulting from basic, well understood forces, such as electrostatics, self-assembly may allow scientists to manipulate materials at tiny scales, ultimately yielding great advances in fields such as data storage, pharmaceuticals, and catalysis. In efforts to better understand self-assembly, many researchers are studying self-assembling systems in solution that consist of an electrolyte (any molecule that dissociates into ions in solution) attached to, or "in complex with," a surfactant, a substance that lowers the surface tension between two liquids (or between a liquid and a solid). These liquid systems, called polyelectrolyte-surfactant complexes (PSCs), are good platforms for scientists to study how polymers and small molecules bind and also how they behave in solution. Looking ahead, scientists are targeting PSCs as a route toward creating functional materials with specific jobs, such as nanostructures for drug delivery. But investigating PSC behavior can be difficult because the complexes self-assemble into phases of many different structures, including cubic assemblies of various sizes, hexagonal assemblies of cylinders, and stacks. Scientists want to find a method that will allow them to "dial in" a preferred phase. In a significant step toward this ability, researchers working at NSLS recently mapped out the full phase diagrams of a new, enhanced type of PSC, determining all the possible phases given different solution concentrations. The group, which consists of researchers from Stony Brook University, NSLS, and the University of Massachusetts Amherst, call their system an electrostatically self-assembled amphiphilic complex, or ESA. "My lab is interested in creating ordered nanoporous materials for applications in filters, catalysis and fuel-cell membranes," said the study's corresponding scientist, Helmut Strey from Stony Brook University. "This published work is the first step towards this goal by employing self-assembly to create porous materials with tunable pore size." In PSC systems, the different phases are dictated mainly by spherical clusters of surfactant molecules, called micelles, which form when the surfactant is in solution. The phases are governed by the micelles' size and curvature, and how "bendy" they are, and thus these are the characteristics the researchers need to control. At NSLS, the group discovered that adding a cosurfactant called pentanol to the mix allows the surfactant micelles to loosen up. By then incorporating large amounts of an oily liquid called dodecane, the researchers caused each phase's unit cell (the building block of the structure) to nearly double in size. This degree of manipulation is a big step forward in the drive to understand self-assembly and use it to create new materials and technologies. Said Strey, "The advantage of self assembly is that to make a material you just have to throw all the ingredients together and the material forms by itself.
In the future, we will develop techniques to solidify our structures to create usable nanoporous materials." The scientists used x-rays to study the system in its various concentrations, employing an advanced technique that allows them to study many samples at once. This high throughput is achieved using a well plate scanner, a sample analysis system that combines a well plate — a tool that can hold many samples at once via an array of small wells — with a motorized stage that can hold up to three well plates at a time and pass them through the x-ray beam. Using this method, scientists can study up to 1,000 samples per day. "This development is exciting because of the possibilities beyond this research," said co-author Elaine DiMasi, NSLS physicist and group leader for the Soft Matter Interface beamline at the future National Synchrotron Light Source II. "As beams become smaller and data acquisition becomes faster, potentially every dataset becomes an 'image,' whether we map out a sample in real space, in 'chemical composition' space, like this work, or in processing space, such as by changing temperature or applying mechanical stress. "Once we can render such data in automatic ways, we have a way to 'light up' regions of interest even in very large parameter spaces, and quickly hone in on the properties of interest to our applications." At NSLS-II, the third-generation light source that will succeed NSLS, the extremely bright x-ray beams produced will enable researchers to increase throughput, approaching a rate of one sample per second! This research is described in the August 25, 2011, online edition of Macromolecules.
<urn:uuid:15319dc3-c0a6-4dff-986f-1b4974afcbd0>
3.421875
1,046
News (Org.)
Science & Tech.
25.509133
1,326
A magnetic field is a vector field responsible for generating forces on electrically charged objects and magnets. Magnetic fields are generated by magnetic dipoles, moving electric charges, or changing electric fields. Magnetic fields are linked to electric fields; light, for instance, is a propagating electric and magnetic wave. Relativistically, a magnetic force in one inertial frame corresponds to an electric force in another. When trying to determine magnetic forces and fields, the right-hand rule often proves useful. Earth's Magnetic Field The magnetic field of Earth is aligned nearly north-to-south, although slightly askew, meaning that "magnetic north" is not the same as "true north." A person who is orienteering must take this declination into account, although it is only marginally relevant unless you are close to either pole. This field has been decaying at a rapid rate of about 5% per century, which casts doubt on the theory that the Earth is billions of years old. This decay suggests that, at some point, the poles will invert. Scientists have speculated about the history of Earth's magnetic field. One group that makes use of the Bible as a resource for science suggests that the history of the Earth's magnetic field is as depicted to the right. - ↑ Boy Scout Handbook. - ↑ K.L. McDonald and R.H. Gunst, 'An analysis of the earth's magnetic field from 1835 to 1965,' ESSA Technical Report, IER 46-IES 1, U.S. Govt. Printing Office, Washington, 1967. - ↑ http://www.answersingenesis.org/creation/v20/i2/magnetic.asp
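As a purely arithmetic illustration of how the decay rate cited above compounds over time, the short Python sketch below assumes a steady 5%-per-century decline (the figure is taken from the article, not independently verified here) and reports the fraction of today's field strength remaining after a given number of centuries.

```python
# Compound decay at the article's cited ~5% per century (an assumed, steady rate).
RATE_PER_CENTURY = 0.05

def field_fraction_remaining(centuries, rate=RATE_PER_CENTURY):
    """Fraction of the present-day field strength left after `centuries`
    of steady percentage decay."""
    return (1 - rate) ** centuries

for c in (1, 5, 10, 20):
    print(f"after {c} centuries: {field_fraction_remaining(c):.2f} of today's strength")
```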
<urn:uuid:fa2edbfe-2074-40c6-8dc6-b8a83277e6ab>
3.671875
360
Knowledge Article
Science & Tech.
56.71065
1,327
Wildfires across Russia. Devastating floods in Pakistan. Deadly landslides and flash floods in India and China. Heat wave across the United States. Severe drought in Niger. Taken together, scientists warn the events match predictions for extreme climate events caused by global warming. This year is on track to be the warmest since reliable temperature records began over a century ago, mainly due to a buildup of greenhouse gases from fossil fuels. We speak to Jeff Masters, co-founder and director of meteorology for Weather Underground, a weather information website. [includes rush transcript] This is a rush transcript. Copy may not be in its final form. AMY GOODMAN: Wildfires across Russia. Devastating floods in Pakistan. Deadly landslides and flash floods in India and China. Heat waves across the United States. Severe drought in Niger. Taken together, scientists warn they could match predictions for extreme climate events caused by global warming. This year is on track to be the warmest since reliable temperature records began over a century ago, mainly due to a buildup of greenhouse gases from fossil fuels. That’s according to the UN World Meteorological Organization. In Moscow, the heat has doubled the daily death rate. This is Andrei Seltsovsky, the head of Moscow’s health department. ANDREI SELTSOVSKY: [translated] The average death rate in the city during normal times is between 360 to 380 people a day. Today, we have around 700. This is no secret. Everyone thinks we are trying to keep it secret. Look, it is 40 degrees Celsius on the street. AMY GOODMAN: Over 1,600 have been killed by the floods in Pakistan, and landslides have killed over 300 in China. The estimated damage to homes, infrastructure and crops caused by the wildfires and the floods amounts to billions of dollars. And in Russia, among the world’s largest exporters of wheat, all future grain exports have been banned for the rest of the year. Well, we begin today with an overview of the extreme weather events in countries around the world. Dr. Jeff Masters is co-founder and director of meteorology for Weather Underground, a weather information website — his latest post is about the heat wave in Russia — joining us now from Ann Arbor, Michigan. Welcome to Democracy Now! Dr. Masters, talk about, first, what is happening in Russia. DR. JEFF MASTERS: Well, in Russia, they’re getting a heat wave unlike anything that’s ever been recorded in that country. Certainly going back the last 130 years, when we had good records, and then probably going back as much as a thousand years, if you look at the historical records, there has never been heat like this in Moscow and over this huge area of Russia. To give you some idea of what we’re talking about, back in 1920, Russia recorded its highest temperature in Moscow on record, 99 degrees Fahrenheit. That record has been broken five times just in the past two weeks. So this is unprecedented heat, not only for Moscow, but for a huge region of Russia and some neighboring countries, as well. AMY GOODMAN: And talk about the effects in Russia from wildfires to what we’re seeing, what, more than 300 more deaths in Moscow alone a day as a result of the heat, that also leads to terrible problems of pollution. DR. JEFF MASTERS: Yeah, the combined effects of heat and then air pollution and then smoke from fires is a terrible killer. We saw in 2003, when we had a similar heat wave over France and most of Europe, the death toll reached over 40,000. 
And I think in Russia we’re going to be seeing death tolls certainly in the tens of thousands from this heat wave, as well. The smoke, in particular, is causing a tremendous hardship on the people, and the elderly, in particular, in Russia. AMY GOODMAN: Talk about the rest of the world, Dr. Masters. DR. JEFF MASTERS: OK. Well, the entire world, if you look at the past six months, has experienced its warmest year on record, going back to the late 1800s when we first started making measurements. And so, it’s not a surprise that we might be seeing record heat waves and record high temperatures being set. In fact, there are seventeen countries in the world that have set their extreme all-time heat record this year. And that’s the most we’ve ever seen. The previous time was back in 2007, when fifteen countries set their all-time heat record. And those heat records this year include a 128-degree Fahrenheit reading in Pakistan, which is the highest temperature ever reliably recorded in the entire continent of Asia. So there’s been heat all over the globe. The ocean temperatures have been at record warm levels this year, and including in the Tropical Atlantic, where we’re expecting a severe hurricane season. So, it’s heat, heat, heat, is the name of the game this year on planet earth. AMY GOODMAN: July 2010 was the sixth-straight record warm month in the Tropical Atlantic and had the third-warmest anomaly of any month in history. What does that mean? DR. JEFF MASTERS: Well, when you get those kind of sea surface temperatures, it provides extra energy for hurricanes, which is what we’re most worried about, because hurricanes are heat engines. They suck heat out of the ocean, and they convert that heat to the energy of their winds. So we’re expecting at least double the usual number of intense hurricanes this year, the ones that are most likely to do high levels of damage. And the other thing all that heat does, if it’s down in the Caribbean, where a lot of it is, is it causes coral mortality. You get bleaching episodes where you’ll kill a large amount of coral. And back in 2005, the last year we had temperatures this warm in the Tropical Atlantic, we had a massive die-off of coral, particularly in the Virgin Islands and some of the neighboring islands. AMY GOODMAN: Jeff Masters, talk about the United States. DR. JEFF MASTERS: Well, in the US, we’ve had some heat waves here, as well. It’s particularly in the South that we’ve had some daily records set. And as a rule, though, we haven’t gotten the extreme heat that has been seen in other parts of the world, back in — over in Asia and Africa and Europe. So the US has kind of lucked out this year. I mean, it’s been hot, but we haven’t had severe drought or extreme heat waves like have been experienced in other parts of the world. Certainly, if we would have had the kind of weather that Russia is experiencing, it would have been an all-time record heat wave for this country, and probably been the worst natural disaster in American history. AMY GOODMAN: The Asian southwest monsoon, exceptionally deadly this year, what countries, what areas, regions of the earth, did it affect? DR. JEFF MASTERS: Well, primarily Pakistan, where they had the worst flooding in that country’s history, but also neighboring areas of India, along Kashmir, and then China, where over 700 people are dead today in a landslide that was triggered by heavy monsoon rains. Afghanistan, as well, has seen heavy monsoon rains. So it’s been an exceptional year for the monsoon. 
And I think part of that is due to the fact that the temperatures have been so warm in that area. I mean, we’ve seen record heat in Asia this year. And when you have a very warm atmosphere, you can evaporate more water vapor into it, which potentially can cause more flooding. And, in fact, if you look at the past few decades, the amount of heavy precipitation events in the Indian monsoon has increased substantially. We’ve seen almost a doubling in some of these very heavy rainfall events. Although the total amount of water falling down has not changed much, it’s these extreme events that have increased, and that’s in line with what climate prediction models are showing will continue to happen this century as the climate warms. AMY GOODMAN: The ice island that broke off of Greenland that’s four times the size of Manhattan, can you talk about the significance of this? DR. JEFF MASTERS: Well, that particular ice island broke off the northern coast of Greenland, where we haven’t really seen this kind of effect before. And what’s going on is that the temperatures in Greenland over the past decade have been very, very warm. And particularly over North Greenland this year, we’ve seen some very warm temperatures that have caused substantial melting of some of these glaciers and made them more vulnerable to large icebergs calving off of them. So, it’s just kind of a continuation of the warm pattern we’ve seen up there in the Arctic over the past few decades. AMY GOODMAN: What is the connection between all of this extreme weather and climate change, global warming, Jeff Masters? DR. JEFF MASTERS: Well, what we’re seeing this year is a preview of things to come. As the earth continues to warm, we’re going to see more extreme precipitation events, we’re going to see more heat waves, and this year is kind of a foretaste of that. Now, not every year is going to be like this. For instance, if you look at last year, it was a relatively quiet year as far as natural disasters go. The amount of dollars paid by the insurance companies was below average. But we’re going to start seeing more and more years like this year when you get these amazing events that cause tremendous death and destruction. And the concern I have is that as this extreme weather continues to increase in coming decades and the population increases, the ability of the international community to respond to these disasters and provide aid to victims is going to be stretched to the limit, and we’re not going to be able to respond. And we’re going to get in a situation where there’s going to be global emergencies that continue year after year all across the countries, and we’re just going to be struggling to cope with that. AMY GOODMAN: Finally, Jeff Masters, explain what Weather Underground is and what you feel about — I mean, there is tremendous amount of weather reported in the corporate media, in the mainstream media, all the time. People tune in constantly for it. It’s more and more frequent, because it affects people’s lives. And yet, you rarely hear the words "global warming" put together with extreme weather, so there’s no sense anyone can do anything about it. DR. JEFF MASTERS: Well, you can put those words together with extreme weather. You have to talk a little bit carefully about it, because no single event can be blamed on climate change or global warming. But what I like to say sometimes is that we load the dice in favor of more extreme events. And perhaps it’s a better analogy to say we’re putting more spots on the dice. 
It used to be, when you roll the dice, you’d get snake eyes or you’d get double sixes. But I think now, particularly when we’re talking about extreme temperature events, where there’s more spots on the dice, you can’t roll snake eyes anymore, but you can roll a thirteen. And that’s what happened to Russia this year; they rolled a thirteen. I don’t think that kind of event was possible until recent decades, because global warming increased the baseline temperature of the world. So, I talk all the time about global warming and say, you know, we have to be cognizant of the fact that global warming is making heavy precipitation events and extreme heat waves more likely. Can’t say this particular heat wave is to blame for it, but, boy, it sure is going to be more and more the case, as the decades go on, we’ll see these type of events. AMY GOODMAN: Would you call on your fellow weathermen and women — I mean, there are conferences, I presume, that you go to where all of you are talking — to start making this link in the mainstream media when they do their news reports? DR. JEFF MASTERS: Well, I do talk to some of these people, but most of them don’t go to the scientific conferences. They tend to stay in their TV studios and look at their own data. I think there’s a disconnect between the research community and TV meteorologists. And a lot of TV meteorologists are very skeptical that human-caused global climate change is real. They’ve been seduced by the view pushed by the fossil fuel industry that humans really aren’t responsible. And you can come up with all kinds of excuses — I’m sure you’ve heard them all — that, you know, climate scientists are doing it for — you know, to get attention and research money, that the temperature record is flawed, because, you know, we’ve got the heat island effect in cities and so on. But all of that is just propaganda that’s been put out by the PR industry of the fossil fuel industry, and it’s convinced a lot of TV meteorologists that that’s the case. So, it’s a tough road here, because we’re fighting a battle against an enemy that’s very well funded, that’s intent on providing disinformation about what the real science says. So, I do my best, but it’s a tough, uphill battle. AMY GOODMAN: Who is the enemy? DR. JEFF MASTERS: Well, it’s the PR industry in favor of the fossil fuel interests trying to convince us that global climate change is not real. AMY GOODMAN: Dr. Jeff Masters, I want to thank you very much for being with us, co-founder and director of meteorology for Weather Underground, an internet weather information service. This is Democracy Now! When we come back, we’re joined by Pablo Solón. He’s Bolivia’s ambassador to the United Nations, just back from climate talk negotiations in Bonn. Stay with us.
<urn:uuid:66a995ef-7394-499e-9fa0-6a1702d670df>
2.953125
3,057
Audio Transcript
Science & Tech.
60.110169
1,328
The Encyclopedia of Earth is dedicated to expanding and enhancing opportunities for education on environmental topics. Please take a moment to explore the many resources below, and provide feedback on using online resources for education. Initiatives of the Environmental Information Coalition - EoE in the Classroom See how educators are using the Encyclopedia of Earth in classrooms for course preparation and teaching. - Student Science Communication Project Browse articles written by students and published on the Encyclopedia of Earth with guidance from faculty and the EoE expert community. Readers and Online Courses - NCSE-NASA Interdisciplinary Climate Change Education The NNICCE team is developing a robust curricular package for a general education course on climate change that universities across the country can readily adopt and adapt. - Ecology for Teachers Reader Explore a graduate-level course reader developed by Mark McGinley for an interdisciplinary program to teach high school teachers to teach ecology. - Environmental Contaminants and Toxicology Reader Delve into a broad introductory reader developed by Emily Monosson covering contaminants and toxicology. - AP Environmental Science Online Course Utilize this thorough online course put together by the University of California College Prep to prepare students for the College Board's Advanced Placement Environmental Science test. - CAMEL: Climate Adaptation and Mitigation eLearning, an online learning community - Collections of articles and resources around a topic or region - E-books published on the Encyclopedia of Earth - Environmental Classics that are often used in environmental science and studies programs
<urn:uuid:d2f4e387-4027-4a17-8fa3-de16b96eb208>
3.671875
317
Content Listing
Science & Tech.
-10.84478
1,329
Weasel (Short Tailed) or (Ermine)
The short-tailed weasel, also called an ermine or stoat, is found all over Canada, the northern United States, Europe, and Asia. They live in similar habitats to the long-tailed weasel, including the taigas and tundra of Siberia. They are very similar in all ways to the long-tailed weasel, with one distinct difference: they have a much shorter tail, just half their body length and with a black tip. They always turn white in winter (they only range in places that have snow in winter). They are active and alert, good swimmers and climbers. They are territorial and will attack trespassers, and though they are active mostly at night (nocturnal), they can be out at any time during the day. They are fierce hunters, feeding mostly on small rodents but also taking rabbits, birds, reptiles and even fruit in summer. They kill their prey by biting down hard on the base of the skull. Predators include foxes, coyotes, badgers, falcons and hawks. Short-tailed weasels live alone except to mate. Though it can be more than 9 months between the time they mate and the time the female has her 6 young, weasels are not truly pregnant until March, and they have the young about 6 weeks later. This is called delayed implantation, and many wild mammals have this same adaptation. It allows the animals to mate in the fall, when they are more active, rather than try to find each other in late winter. The young are born in late April to early May. Within 2 months they can kill their own prey.
Lifespan and/or Conservation Status
If a short-tailed weasel survives to become an adult, it may live for several years. They are listed as Lower Risk - least concern.
Species: Mustela erminea
Citing This Reference
When you research information you must cite the reference. Citing for websites is different from citing from books, magazines and periodicals. The style of citing shown here is from the MLA Style Citations (Modern Language Association). When citing a WEBSITE the general format is as follows. Author Last Name, First Name(s). "Title: Subtitle of Part of Web Page, if appropriate." Title: Subtitle: Section of Page if appropriate. Sponsoring/Publishing Agency, If Given. Additional significant descriptive information. Date of Electronic Publication or other Date, such as Last Updated. Day Month Year of access <URL>. All text on Exploring Nature was written by author Sheri Amsel. Here is an example of citing this page:
Amsel, Sheri. “Mammalia.” Weasel (Short Tailed) or (Ermine). Exploring Nature Educational Resource. © 2005 - 2013. May 25, 2013. <http://exploringnature.org/db/detail.php?dbID=43&detID=997>
<urn:uuid:7f539f81-34d1-4408-a126-c8c8ff2ec717>
3.78125
683
Knowledge Article
Science & Tech.
53.618257
1,330
C6n SEA practice and biodiversity Jo Treweek, SES, UK Helen Byron, Imperial College London, UK Dave le Maitre, CSIR Environmentek, South Africa Key issues to be addressed Biodiversity supports many livelihoods and provides essential goods and services to millions of people. However, its values are often under-emphasised in development planning. The first World Summit on Environment and Development in Rio de Janeiro (1992) emphasised the importance of biodiversity as the basis of our very existence, to be used wisely and sustainably and conserved for current and future generations. The main threats to biodiversity globally are associated with human activities causing habitat loss or damage. These threats need to be carefully managed if significant, irreversible losses of biodiversity are to be avoided. The Convention on Biological Diversity (CBD) requires Parties to integrate as far as possible and as appropriate the conservation and sustainable use of biological diversity into relevant sectoral or cross-sectoral plans and programmes. The CBD also, along with the Ramsar Convention and the Convention on Migratory Species, recognises SEA as an important mechanism for building biodiversity into development planning to promote its conservation and sustainable use. SEA can help to: • Build biodiversity objectives into plans • Identify biodiversity-friendly alternatives • Identify and manage cumulative threats • Plan effective mitigation strategies to ensure no net loss of biodiversity • Put in place monitoring programmes to provide necessary biodiversity data • Strengthen biodiversity partnerships and information networks This session will draw on experience and examples of SEA practice from different countries to review the extent to which biodiversity issues are addressed. The session will explore important biodiversity considerations and principles that should be adopted to ensure good practice in SEA. Emerging international guidance on SEA and biodiversity will be discussed and experiences presented at the conference will contribute to its further development.
<urn:uuid:ac76738c-2f98-4049-a90e-40a3cdf096b1>
3.125
375
News (Org.)
Science & Tech.
-6.577714
1,331
Forests losing the ability to absorb man-made carbon
The sprawling forests of the northern hemisphere, which extend from China and Siberia to Canada and Alaska, are in danger of becoming a gigantic source of carbon dioxide rather than being a major "sink" that helps to offset man-made emissions of the greenhouse gas. Studies show the risk of fires in the boreal forests of the north has increased in recent years because of climate change. The studies also show that the world's temperate woodlands are beginning to lose their ability to be an overall absorber of carbon dioxide. Scientists fear there may soon come a point when the amount of carbon dioxide released from the northern forests as a result of forest fires and the drying out of the soil will exceed the amount that is absorbed during the annual growth of the trees. Such a prospect would make it more difficult to control global warming because northern forests are seen as a key element in the overall equations to mitigate the effect of man-made CO2 emissions. Two studies published today show that the increase in forest fires in the boreal forests – the second largest forests after tropical rainforests – has weakened one of the earth's greatest terrestrial sinks of carbon dioxide. One of the studies showed that in some years, forest fires in the US result in more carbon dioxide being pumped into the atmosphere over the space of a couple of months than the entire annual emissions coming from cars and energy production of a typical US state. A second study found that, over a 60-year period, the risk of forest fires in 1 million sq kms of Canadian wilderness had increased significantly, largely as a result of drier conditions caused by global warming and climate change. Tom Gower, professor of forest ecology at the University of Wisconsin-Madison, said his study showed that fires had a greater impact on overall carbon emissions from boreal forests during the 60-year period than other factors such as rainfall, yet climate was at the heart of the issue. The intensity and frequency of forest fires are influenced by climate change because heatwaves and drier undergrowth trigger the fires. "Climate change is what's causing the fire changes. They're very tightly coupled systems," Professor Gower said. "All it takes is a low snowpack year and a dry summer. With a few lightning strikes, it's a tinderbox," he said. Historically, the boreal forests have been a powerful carbon sink, with more carbon dioxide being absorbed by the forests than being released. However, the latest study, published in the journal Nature, suggests the sink has become smaller in recent decades, and it may actually be shifting towards becoming a carbon source, Professor Gower said. "The soil is the major source, the plants are the major sink, and how those two interplay over the life of a stand [of trees] really determines whether the boreal forest is a sink or a source of carbon," he said. "Based on our current understanding, fire was a more important driver of the carbon balance than climate was in the past 50 years. But if carbon dioxide concentration really doubles in the next 50 years and the temperature increases 4C to 8C, all bets may be off." The second study, published in Carbon Balance and Management, found carbon dioxide emissions from some forest fires exceeded the annual car and energy emissions from individual US states.
Christine Wiedinmyer of the US National Centre for Atmospheric Research in Boulder, Colorado, used satellite imaging data to estimate CO2 output based on the degree of forest cover in a particular area. In some years, the amount of CO2 released from forest fires was equivalent to about 5 per cent of the man-made total. But in other years, more widespread and intense forest fires resulted in massively increased emissions. "There is a significant potential for additional net release of carbon from forests of the United States due to changing fire dynamics in the coming decades," Dr Wiedinmyer said.
<urn:uuid:12b97da4-0dc3-412d-bf88-6b446af586a6>
3.34375
1,086
Truncated
Science & Tech.
44.760553
1,332
Researchers at Notre Dame have developed a paste of semiconducting nanoparticles called solar paint (or "Sunbelievable") that could lead to easier-to-produce solar cells. First, they mix t-butanol, water, cadmium sulfide and titanium dioxide for 30 minutes. Next, they mask off a clear electrode with office tape. Once the tape is in place, they spread the mixture onto the electrode and then anneal it with a heat gun. Finally, they sandwich an electrolyte solution between the new electrode and a graphene composite electrode. And then, it's time for testing under a beam of artificial light. The best-performing paint cell's light-to-electricity efficiency is 1% so far. The efficiencies of commercial silicon solar cells are usually between 10 and 15%. The paints’ efficiencies, although low, are “quite decent” for a first-generation material. Read more: cen.acs.org
<urn:uuid:b0739254-86c0-4b94-ad92-303e3a01e3ba>
3.765625
203
News Article
Science & Tech.
44.554685
1,333
Walking, but not running, with the real 'Hobbits' By Stephanie Guzik, (Volunteer Science Writer) Scientists are a big step closer to understanding the evolution of walking in humans. Bipedalism—or walking upright on two feet—has long been considered one of the hallmarks of human evolution, signaling the transition from an ape-like reliance on arms and hands for locomotion to an upright gait using the legs and feet. In the cover story of the May 7, 2009 issue of the scientific journal Nature, a global team of scientific collaborators, led by William L. Jungers of Stony Brook University Medical Center and co-authored by Smithsonian’s Matt Tocheri of the Department of Anthropology’s Human Origins Program, investigates the foot anatomy and bipedalism in a primitive human species called Homo floresiensis—more commonly referred to as the “hobbit.” Since the discovery of the first hobbit skeleton in 2003 on the Indonesian island Flores, researchers have marveled over the scientifically intriguing hominin (human-like) species. The first and most complete hobbit skeleton is exceedingly small for an adult human—only 3 ½ feet tall with a brain size similar to that of a chimpanzee. Similarly, the anatomy of the hobbit skull, shoulder, and wrist is highly reminiscent of other earlier hominin species that predate the origin of modern humans roughly 200 thousand years ago. The current research conducted by Jungers and colleagues focused on the hobbit’s relatively complete feet—a rare find in hominin fossil discoveries—and their analyses uncover an intriguing mix of both human- and ape-like features. Jungers notes, "A foot like this one has never been seen before in the human fossil record," and it "offers the most complete glimpse to date of how a primitive bipedal foot was designed and differed from that of later hominins and modern humans." The hobbit's big toe--or hallux-- illustrates this dichotomy: it is adducted, meaning that like in modern humans, the big toe points in the same direction as the other (lateral) toes; but it is considerably shorter than the lateral toes which is a characteristic reminiscent of chimpanzees and other apes. Meanwhile, the hobbit's lateral toes (the forefoot) are proportionally long compared with the ankle bones (the hindfoot) in a manner that, again, is similar to apes. Furthermore, the relative proportions of the hobbit’s foot in comparison to its leg bones intriguingly also share similarities with chimpanzees. The length of the hobbit foot from the heel to the longest toe is exceedingly long compared to the rest of the leg— almost 70% of the femur (thigh bone) length—in contrast to modern human feet which are approximately half the length of the femur, regardless of whether you are short or tall. Combining an overall flat-footed bone structure in the hobbit’s foot with a relatively short big toe and long lateral toes, as well as a relatively long foot and short legs, hobbits were definitely bipeds (walked upright) but they would not have moved with the same gait that we have today. Hobbits would have required a different leg motion to compensate for these large differences in leg and foot proportions and they likely would not have been able to run as efficiently as modern humans do. Since the hobbit was first unearthed, its skeleton has been the focus of both scientific intrigue and controversy. 
Some scientists have provided data that suggest the hobbits are a different species of human that are closely related to, but distinct from, our own species, Homo sapiens, and other hominins like Neandertals (Homo neanderthalensis). Others believe the hobbits are simply modern humans with a disease resulting in small stature. The results reported by Jungers and colleagues add more evidence that the hobbits are a distinct human species, Homo floresiensis, noting that there are no known diseases that cause alterations in limb proportions as seen in the hobbit. Moreover, the authors argue that the many primitive features seen in the hobbit skeleton suggest this novel species may not owe its recent ancestry to Homo erectus, the hominin species credited with the first migration out of Africa roughly 1.8 million years ago, but rather to another more primitive species of the genus Homo. Although more fossil discoveries and analyses are needed to fully elucidate the interwoven evolutionary stories of Homo floresiensis and bipedalism in hominins, the striking combination of human- and ape-like traits of the hobbit foot adds important pieces to both puzzles. This report by Jungers and colleagues presents conclusive data that the primitive human nicknamed the hobbit—named after the diminutive human-like creatures of JRR Tolkien’s fictional writings—has officially walked out of fictional literature and into the real story of human evolution. Co-author Matt Tocheri remarks, “These particular 'hobbit' feet may never have walked into Mordor, but they certainly remind us how little we know about which other hominin species walked out of Africa and the many possible places their feet helped take them."
<urn:uuid:e11e4b27-19de-4690-8d46-e472f7d711c1>
3.75
1,088
News Article
Science & Tech.
20.153315
1,334
Fig 19-17. Section through a young leaf of F. chiloensis. A) stomata, B) air space, C) thick cuticle of upper leaf surface, D) upper epidermal cell, E) palisade cell, F) mesophyll cell. The interior cell surface exposed to air space is from 2.2 to 4.4 times greater than the exposed outer surface; in F. chiloensis it is about four times greater. Oxygen in the air enters through the stomata, comes in contact with the cell walls and enters the cells. Carbon dioxide and water are given off and go out through the stomata.
<urn:uuid:991a7d05-5364-4b8c-8798-05739ff3adbc>
3
138
Knowledge Article
Science & Tech.
74.59725
1,335
Color and Vision
Related resources:
- Visit The Physics Classroom's Flickr Galleries and enjoy a photo overview of the topic of light and color.
- Color Television: Explore how a television uses R, G, and B pixels to produce ... millions of colors.
- PhET Simulation: Color Vision: Mix R, G and B light with varying intensities using this Java applet from PhET.
- Mixing Colors: Mix light colors at the Ontario Science Center and learn about the principles of color addition.
- Looking for a lab that coordinates with this page? Try the Color Addition Lab from The Laboratory.
- Curriculum Corner: Learning requires action. Give your students this sense-making activity from The Curriculum Corner.
- Color Addition: The red-green-blue color swatches on this page provide a great opportunity to demonstrate addition of R, G, and B in varying amounts.
- Treasures from TPF: Need ideas? Need help? Explore The Physics Front's treasure box of catalogued resources on visible light and color.
- General Atomics Sciences: Chromatics - The Science of Color: This downloadable, 100-plus page book discusses various aspects of light production in the visible spectrum and color addition and subtraction.
- General Atomics Sciences: It's a Colorful Life: Deepen your understanding of color with this free, downloadable book on color; contains theory and ideas for labs.
Color perception, like sound perception, is a complex subject involving the disciplines of psychology, physiology, biology, chemistry and physics. When you look at an object and perceive a distinct color, you are not necessarily seeing a single frequency of light. Consider for instance that you are looking at a shirt and it appears purple to your eye. In such an instance, there may be several frequencies of light striking your eye with varying degrees of intensity. Yet your eye-brain system interprets the frequencies that strike your eye and the shirt is decoded by your brain as being purple. The subject of color perception can be simplified if we think in terms of primary colors of light. We have already learned that white is not a color at all, but rather the presence of all the frequencies of visible light. When we speak of white light, we are referring to ROYGBIV - the presence of the entire spectrum of visible light. But combining the range of frequencies in the visible light spectrum is not the only means of producing white light. White light can also be produced by combining only three distinct frequencies of light, provided that they are widely separated on the visible light spectrum. Any three colors (or frequencies) of light that produce white light when combined with the correct intensity are called primary colors of light. There are a variety of sets of primary colors. The most common set of primary colors is red (R), green (G) and blue (B). When red, green and blue light are mixed or added together with the proper intensity, white (W) light is obtained. This is often represented by the equation below:
R + G + B = W
In fact, the mixing together (or addition) of two or three of these three primary colors of light with varying degrees of intensity can produce a wide range of other colors. For this reason, many television sets and computer monitors produce the range of colors on the monitor by the use of red, green and blue light-emitting phosphors. The addition of the primary colors of light can be demonstrated using a light box. The light box illuminates a screen with the three primary colors - red (R), green (G) and blue (B). The lights are often the shape of circles.
The result of adding two primary colors of light is easily seen by viewing the overlap of the two or more circles of primary light. The different combinations of colors produced by red, green and blue are shown in the graphic below. (CAUTION: Because of the way that different monitors and different web browsers render the colors on the computer monitor, there may be slight variations from the intended colors.) These demonstrations with the color box illustrate that red light and green light add together to produce yellow (Y) light. Red light and blue light add together to produce magenta (M) light. Green light and blue light add together to produce cyan (C) light. And finally, red light and green light and blue light add together to produce white light. This is sometimes demonstrated by the following color equations and graphic:
R + G = Y
R + B = M
G + B = C
R + G + B = W
Yellow (Y), magenta (M) and cyan (C) are sometimes referred to as secondary colors of light since they are produced by the addition of equal intensities of two primary colors of light. The addition of these three primary colors of light with varying degrees of intensity will result in the countless other colors that we are familiar (or unfamiliar) with. Any two colors of light that when mixed together in equal intensities produce white are said to be complementary colors of each other. The complementary color of red light is cyan light. This is reasonable since cyan light is the combination of blue and green light; and blue and green light when added to red light will produce white light. Thus, red light and cyan light (blue + green) represent a pair of complementary colors; they add together to produce white light. This is illustrated in the equation below:
R + C = R + (B + G) = W
Each primary color of light has a secondary color of light as its complement. The three pairs of complementary colors are listed below.
- Red and Cyan
- Green and Magenta
- Blue and Yellow
The graphic at the right is extremely helpful in identifying complementary colors. Complementary colors are always located directly across from each other on the graphic. Note that cyan is located across from red, magenta across from green, and yellow across from blue. The production of various colors of light by the mixing of the three primary colors of light is known as color addition. The color addition principles discussed on this page can be used to make predictions of the colors that would result when different colored lights are mixed. In the next part of Lesson 2, we will learn how to use the principles of color addition to determine why different objects look specific colors when illuminated with various colors of light.
1. Two lights are arranged above a white sheet of paper. When the lights are turned on they illuminate the entire sheet of paper (as seen in the diagram below). Each light bulb emits a primary color of light - red (R), green (G), and blue (B). Depending on which primary color of light is used, the paper will appear a different color. Express your understanding of color addition by determining the color that the sheet of paper will appear in the diagrams below.
2. If magenta light and yellow light are added together, will white light be produced? Explain.
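As a quick supplement to the color addition rules above (this sketch is not part of the original lesson), the short Java program below adds red, green and blue intensities channel by channel and clamps each channel to an assumed 0-255 range; adding full-intensity red and green gives yellow, and adding a primary color to its complement gives white.

public class ColorAddition {
    // Add two colors channel by channel, clamping each channel to the 0-255 range.
    static int[] add(int[] a, int[] b) {
        int[] sum = new int[3];
        for (int i = 0; i < 3; i++) {
            sum[i] = Math.min(255, a[i] + b[i]);
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] red   = {255, 0, 0};
        int[] green = {0, 255, 0};
        int[] blue  = {0, 0, 255};
        System.out.println(java.util.Arrays.toString(add(red, green)));            // [255, 255, 0]   -> yellow
        System.out.println(java.util.Arrays.toString(add(red, blue)));             // [255, 0, 255]   -> magenta
        System.out.println(java.util.Arrays.toString(add(green, blue)));           // [0, 255, 255]   -> cyan
        System.out.println(java.util.Arrays.toString(add(add(red, green), blue))); // [255, 255, 255] -> white
        // Complementary pair: red plus cyan (green + blue) also gives white.
        System.out.println(java.util.Arrays.toString(add(red, add(green, blue)))); // [255, 255, 255] -> white
    }
}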
<urn:uuid:5801114e-0305-4468-91c9-85706beda87e>
3.96875
1,365
Tutorial
Science & Tech.
50.931962
1,336
World heading for warmest year yet - UK Met Office "Globally 2002 is likely to be warmer than 2001, and may even break the record set in 1998," said Briony Horton, the Meteorological Office's climate research scientist. The Intergovernmental Panel on Climate Change, the body that advises governments on long-term climatic variations, blames global warming, caused by rising emissions of greenhouse gases which trap heat in the atmosphere, for the rise in temperatures, a Met Office spokesman said. "We agree with them," he told Reuters. "Since 1970 there has been a marked trend in the rise of global temperatures. "The actual rise prior to 1970 was partly man-made and partly due to natural effects. But since 1970 scientists are in fairly general agreement that warming can be attributed to man's polluting activities." The Met Office said global temperatures were 0.57 degrees Celsius (1.03 Fahrenheit) higher than the long term average of about 15 degrees (59F) in the period from January to June. In the nearly 150 years since recording began, only in 1998 has the difference been higher, 0.6 degrees (1.08F), and that was caused by the influence of the El Nino weather phenomenon. The figures also showed that the northern hemisphere had enjoyed its warmest ever half year, with temperatures 0.73 degrees (1.31F) above the long term average. The Met Office spokesman said scientists predicted that, depending on the level of pollution, global temperatures would rise between 1.4 (2.52F) and as much as 5.9 degrees (10.62F) in the next 100 years. "That's the worst case scenario and it would cause major problems of melting icecaps and tremendous flooding," he said. The Met Office compiles its figures from data collected from observatories round the world, as well as from ships at sea.
<urn:uuid:6f80aa08-1ce8-4f08-bf96-8acfcad0ad0c>
3.546875
393
News Article
Science & Tech.
60.287861
1,337
Eutrophication is the biological response of water to overenrichment by plant nutrients, particularly nitrogen and phosphorus. Public concern began to rise in the 1960s (although the term "eutrophication" is older), when nutrient enrichment was rapidly making many bodies of water increasingly fertile. This eutrophication was mainly caused by the addition of plant nutrients from human activities, called, in this context, artificial or anthropogenic eutrophication. The phenomenon is a consequence of society's municipal, industrial, and agricultural use of plant nutrients and their subsequent disposal. Lakes and reservoirs have a finite life span. They may pass through periods in their existence when they become more or less fertile, according to different factors--principally their geographical position or the climatic conditions (Moss, 1988). The process of eutrophication has been used deliberately as a way to fertilize and thus to increase phytoplankton production and, indirectly, the population of fish within a lake or reservoir. What is new in the past few decades, however, is the extent of enrichment of lakes and rivers throughout the world as a result of the growing human population, more intensive agricultural and industrial activities, and the development of large sewage systems associated with large metropolitan areas. Until recently, a relative lack of control over the sources of the nutrients or over their effect upon the aquatic ecosystems has resulted in changes occurring within decades rather than over the centuries--or longer--in which such changes would appear naturally. Many studies of lakes around the world have provided evidence of human-induced changes. Good examples of such studies are those carried out on the Great Lakes (Beeton & Edmondson, 1972; Sly, 1991). In the United Kingdom, eutrophication has been identified as an extremely widespread problem and has been blamed for damaging many aquatic sites in England known as Sites of Special Scientific Interest, despite government claims that only a few surface waters have been affected (Carvalho & Moss, 1995). In a study commissioned by English Nature, a statutory conservation agency in England, it was found that 79 Sites of Special Scientific Interest showed signs of eutrophication. As a result, English Nature has called for a large-scale investment program to deal with the eutrophication problem in aquatic wildlife sites (English Nature, 1997). Anthropogenic eutrophication appears to be the main problem. Excessive fertility in lakes and reservoirs results in heavy growth of phytoplankton, particularly of blue-green algae (cyanobacteria), that may form thick mats at the water surface and thus spoil the appearance of the lake. Some species of cyanobacteria may produce substances that are highly toxic to fish, birds, or mammals. In some cases, dense blooms of algae have resulted in fish kills by causing the hypolimnion to become anaerobic. Increased crops of phytoplankton often clog the filters of water treatment plants and make the treatment of water more costly. Furthermore, some unwanted organic substances produced by the algae can pass through the filters at water treatment plants and cause unpleasant tastes and odors, or may even be toxic to human consumers. Eutrophication thus can not only impair aesthetic qualities of the water, but also affect the use of water for water supply, fisheries, and recreation.
The essential elements required by living cells to sustain growth and reproduction are carbon, oxygen, hydrogen, other macronutrients, and trace elements. Of these, carbon is the most important, the main reservoir being atmospheric carbon dioxide. Carbon is easily soluble in water and is thus unlikely to be a limiting factor for algae growth, except during intense blooms. Oxygen and hydrogen are freely available in the water in most circumstances. The most important macronutrients are calcium, magnesium, potassium, phosphorus, nitrogen, sulfur, iron, and silicon. Phosphorus is important because it is the only nutrient whose proportional abundance is lower in the lithosphere than in plant tissue. It is thus a prime candidate to become a limiting factor in algae growth. The main reservoir of nitrogen is atmospheric dinitrogen, which is not available to plants directly; consequently, nitrogen might be a limiting factor as well. Trace elements, including boron, chlorine, cobalt, copper, manganese, molybdenum, zinc, and, in some cases, vitamin complexes, are required in very small quantities. The "law of the minimum," which was first formulated by Justus von Liebig, states that growth is limited by whatever is in shortest supply (Gibson, 1971; Welch, 1980). For the reasons stated above, phosphorus and nitrogen are said to be "key nutrients"; in some circumstances, they may become limiting. Therefore, they are in most cases the nutrients that control algae growth, though some diatom species may be limited by silica. Other factors, such as light, may also limit algal productivity.
Supply of Phosphorus and Nitrogen to Lakes
Phosphorus is the 11th most abundant element in the earth's crust, and it is geochemically classed as a trace element. In nature, phosphorus exists almost exclusively as phosphate, a great part of which is sorbed to soil particles or incorporated into soil organic matter. Phosphate deposits occur in the earth's crust principally as the mineral apatite: Ca5(F,Cl,OH,1/2CO3)(PO4)3. The initial natural source of phosphorus is weathering of such rocks. Weathering liberates phosphate from the mineral, and the phosphate can then enter the biosphere through uptake by plants. The initial source of nitrogen is the atmospheric reservoir of gaseous dinitrogen. Nitrogen gas is chemically very stable. It must be converted by nitrogen fixation, by microorganisms living principally in the soil but also in aquatic environments, before it is available to most living organisms. In natural water, nitrogen is present as dissolved dinitrogen, ammonia, and salts of the nitrate and nitrite ions; in addition, there are nitrogen-containing organic compounds primarily attributable to the presence of life. In a natural, undisturbed environment, nutrient sources are the drainage of the catchment, the direct atmospheric deposition (rainfall and dry depositions) onto the water surface, and the internal recycling from lake sediments. Ahl estimates the background phosphorus input to be in the range of 3 to 10 kilograms (kg) of phosphorus per square kilometer per year, depending on the size and the characteristics of the basin (Ahl, 1988). He also estimates the …
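To make Liebig's "law of the minimum" concrete, the short sketch below compares a hypothetical supply of each key nutrient in a lake with the amount needed per unit of algal growth and reports the nutrient in shortest relative supply as the limiting factor. The nutrient names, the numbers, and the Java form are illustrative assumptions, not data from the article.

import java.util.Map;

public class LawOfMinimum {
    public static void main(String[] args) {
        // Hypothetical amounts available in the water column (relative units).
        Map<String, Double> supply = Map.of("phosphorus", 2.0, "nitrogen", 30.0, "silicon", 80.0);
        // Hypothetical amount of each nutrient needed per unit of algal biomass.
        Map<String, Double> neededPerUnit = Map.of("phosphorus", 1.0, "nitrogen", 7.0, "silicon", 10.0);

        String limiting = null;
        double maxGrowth = Double.MAX_VALUE;
        for (String nutrient : supply.keySet()) {
            double possibleGrowth = supply.get(nutrient) / neededPerUnit.get(nutrient);
            if (possibleGrowth < maxGrowth) {
                maxGrowth = possibleGrowth;
                limiting = nutrient;
            }
        }
        // Growth is capped by whichever nutrient runs out first (the law of the minimum).
        System.out.println("Limiting nutrient: " + limiting + "; supportable growth: " + maxGrowth + " units");
    }
}

With these assumed numbers, phosphorus runs out first (2.0 / 1.0 = 2 units of growth), so it is the limiting nutrient even though nitrogen and silicon are far more abundant in absolute terms.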
<urn:uuid:459653bf-4a69-4a13-a131-f6816c893a85>
3.671875
1,369
Knowledge Article
Science & Tech.
26.41734
1,338
This section illustrates how to read from and write to a serialized file through a hash table in Java, and provides an example with the complete code of the program. The program checks whether the specified serialized file exists: if it does not, the program creates the serialized file and writes the hash table to it; otherwise, it reads back all the contents by de-serializing the file.
FileOutputStream fileOut = new FileOutputStream("HTExample.ser");
The code above creates an object "fileOut" of the FileOutputStream class using its constructor, which takes a file name with the ".ser" extension; this is the serialized file that will be created or written with the values held in the program's hash table. Here is the code of the program:
If you are facing any programming issue, such as compilation errors, or you are not able to find the code you are looking for, ask your questions and our development team will try to give answers to them.
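The original program listing does not appear in this copy of the page. Purely as a rough sketch of what such a program might look like (the file name HTExample.ser comes from the text above, but the class name, sample keys and values, and the exact create-versus-read behaviour are illustrative assumptions), consider:

import java.io.*;
import java.util.Hashtable;

public class HTExample {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) throws IOException, ClassNotFoundException {
        File file = new File("HTExample.ser");
        if (!file.exists()) {
            // The serialized file does not exist yet: build a hash table and write it out.
            Hashtable<String, String> table = new Hashtable<>();
            table.put("one", "1");
            table.put("two", "2");
            ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file));
            out.writeObject(table);
            out.close();
            System.out.println("Created " + file + " with " + table.size() + " entries.");
        } else {
            // The serialized file already exists: read it back by de-serializing it.
            ObjectInputStream in = new ObjectInputStream(new FileInputStream(file));
            Hashtable<String, String> table = (Hashtable<String, String>) in.readObject();
            in.close();
            System.out.println("Read back from " + file + ": " + table);
        }
    }
}

Running such a program twice exercises both paths: the first run creates HTExample.ser, and the second run de-serializes it and prints the stored entries.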
<urn:uuid:8bf2a0d8-e6d7-4ad2-951d-82b0850b5ff7>
3.203125
211
Documentation
Software Dev.
41.85219
1,339
RICHLAND, Wash. -- Soot from pollution causes winter snowpacks to warm, shrink and warm some more. This continuous cycle sends snowmelt streaming down mountains as much as a month early, a new study finds. How pollution affects a mountain range's natural water reservoirs is important for water resource managers in the western United States and Canada who plan for hydroelectricity generation, fisheries and farming. Scientists at the Department of Energy's Pacific Northwest National Laboratory conducted the first-ever study of soot on snow in the western states at a scale that predicted impacts along mountain ranges. They found that soot warms up the snow and the air above it by up to 1.2 degrees Fahrenheit, causing snow to melt. "If we can project the future -- how much water we'll be getting from the rivers and when -- then we can better plan for its many uses," said atmospheric scientist Yun Qian. "Snowmelt can be up to 75 percent of the water supply, in some regions. These changes can affect the water supply, as well as aggravate winter flooding and summer droughts." The soot-snow cycle starts when soot, a byproduct of burning fossil fuels, darkens snow it lands upon, which then absorbs more of the sun's energy than clean white snow. The resulting thinner snowpack reflects less sunlight back into the atmosphere and further warms the area, continuing the snowmelt cycle. This study revealed regional changes to the snowpack caused by soot, whereas other studies looked at the uniform changes brought by higher air temperatures due to greenhouse gases. Previous studies have examined the effect of airborne or snowbound soot on global climate and temperatures. Qian and his colleagues at PNNL used a climate computer model to zoom in on the Rocky Mountain, Cascade, and other western United States mountain ranges. They modeled how soot from diesel engines, power plants and other sources affected snowpacks it landed on. They found that changes to snow's brightness results in its melting weeks earlier in spring than with pristine snow. In addition, less mountain snow going into late spring means reduced runoff in late spring and summer. They will report their findings in an upcoming issue of the Journal of Geophysical Research -- Atmospheres. Making Snowhills from Mountains Researchers know that soot settles on snow. And like an asphalt street compared to a concrete sidewalk, dirty snow retains more heat from the sun than bright white snow. Qian and colleagues wanted to determine to what degree dark snow contributes to the declining snowpack. To get the kind of detail from their computer model that they needed, the PNNL team used a regional model called the Weather Research and Forecasting model -- or WRF, developed in part at the National Center for Atmospheric Research in Boulder, Colo. Compared to planet-scale models that can distinguish land features 200 kilometers apart, this computer model zooms in on the landscape, increasing resolution to 15 kilometers. At 15 kilometers, features such as mountain ranges and soot deposition are better defined. Recently, PNNL researchers added a software component to WRF that models the chemistry of tiny atmospheric particles called aerosols and their interaction with clouds and sunlight. Using the WRF-chem model, the team first examined how much soot in the form of so-called black carbon would land on snow in the Sierra Nevada, Cascade and Rocky Mountains. Then the team simulated how that soot would affect the snow's brightness throughout the year. 
Finally, they translated the brightness into snow accumulation and melting over time. "Earlier studies didn't talk about snowpack changes due to soot for two reasons," said atmospheric scientist and co-author William Gustafson. "Soot hasn't been widely measured in snowpack, and it's hard to accurately simulate snowpack in global models. The Cascades have lost 60 percent of their snowpack since the 1950s, most of that due to rising temperatures. We wanted to see if we could quantify the impact of soot." Their simulations compared well to data collected on snowpack distribution and water runoff. But their first experiment did not include all sources of soot, so they modeled what would happen if enough soot landed on snow to double the loss of brightness. In this computer simulation, the regional climate and snowpack changed significantly, and not in a simply predictable way. Overall, doubling the dimming of the snow did not lead to twice as high temperature changes -- it led to an approximate 50 percent increase in the snow surface temperature. The drop in snow accumulation, however, more than doubled in some areas. Snowpack over the central Rockies and southern Alberta, for example, dropped two to 50 millimeters over the mountains during late spring and early winter. The most drastic changes occurred in March, the model showed. The team also found that soot decreased snow's brightness in two ways. About half of soot's effect came from its dark color. The other half came indirectly from reducing the size of the snowpack, exposing the underlying darker earth. Studies like this one start to unmask pollution's role in the changing climate. While greenhouse gases work unseen, soot bares its dark nature, with a cloak that slowly steals summertime's snow.
<urn:uuid:8807b2f0-5a67-4317-b23b-2484c6f603ab>
3.875
1,070
News Article
Science & Tech.
48.042244
1,340
Mar. 18, 2003 The idea that even small asteroids can create hazardous tsunamis may at last be pretty well washed up. Small asteroids do not make great ocean waves that will devastate coastal areas for miles inland, according to both a recently released 1968 U.S. Naval Research report on explosion-generated tsunamis and terrestrial evidence. University of Arizona planetary scientist H. Jay Melosh is talking about it today (March 17) at the 34th annual Lunar and Planetary Science Conference in League City, Texas. His talk, "Impact-Generated Tsunamis: an Over-Rated Hazard," is part of the session, "Poking Holes: Terrestrial Impacts." Given all life's worries, new evidence that asteroids smaller than a kilometer in diameter won't generate catastrophic tsunamis is welcome news, and not only for coast dwellers. It will save taxpayers the cost of financing searches for small Earth-approaching asteroids, a savings of billions of dollars, Melosh said. (The current NASA-funded effort to search and map truly hazardous Earth-approaching asteroids -- those one kilometer or larger in diameter -- is now half done and on track to be finished by the end of the decade, Melosh noted. NASA funds NEAT, LINEAR and the UA Spacewatch programs in this effort.) The idea that asteroids as small as 100 meters across pose a serious threat to humanity because they create great, destructive ocean waves, or tsunamis, every few hundred years was suggested in 1993 at a UA-hosted asteroids hazards meeting in Tucson. At that meeting, a distinguished Leiden Observatory astrophysicist named J. Mayo Greenberg, who since has died, countered that people living below sea level in the Netherlands for the past millennium had not experienced such tsunamis every 250 years as the theory predicted, Melosh noted. But scientists at the time either didn't follow up or they didn't listen, Melosh added. While on sabbatical in Amsterdam in 1996, Melosh checked with Dutch geologists who had drilled to basement rock in the Rhine River delta, a geologic record of the past 10,000 years. That record shows only one large tsunami at 7,000 years ago, the Dutch scientists said, but it coincides perfectly in time to a giant landslide off the coast of Norway and is not the result of an asteroid-ocean impact. In addition, Melosh was highly skeptical of estimates that project small asteroids will generate waves that grow to a thousand meters or higher in a 4,000-meter deep ocean. Concerned that such doubtful information was -- and is -- being used to justify proposed science projects, Melosh has argued that the hazard of small asteroid-ocean impacts is greatly exaggerated. Melosh mentioned it at a seminar he gave at the Scripps Institution of Oceanography a few years ago, which is where he met tsunami expert William Van Dorn. Van Dorn, who lives in San Diego, had been commissioned in 1968 by the U.S. Office of Naval Research to summarize several decades of research into the hazard posed by waves generated by nuclear explosions. The research included 1965-66 experiments that measured wave run-up from blasts of up to 10,000 pounds of TNT in Mono Lake, Calif. The experiments indeed proved that wave run-up from explosion waves produced either by bombs or bolides (meteors) is much smaller relative to run-up of tsunami waves, Van Dorn said in the report. "As most of the energy is dissipated before the waves reach the shoreline, it is evident that no catastrophe of damage by flooding can result from explosion waves as initially feared," he concluded. 
The discovery that explosion waves or large impact-generated waves will break on the outer continental shelf and produce little onshore damage is a phenomenon known in the defense community as the "Van Dorn effect." But Van Dorn was not authorized to release his 173-page report when he and Melosh met in 1995. Melosh, UA planetary sciences alumnus Bill Bottke of the Southwest Research Institute and others agreed at a science conference last September that they needed to find the report. Bottke found the title - "Handbook of Explosion-Generated Water Waves" - in a Google search. Given a title, UA science librarian Lori Critz then discovered that the report had been published and added to the University of California San Diego library collection in March 2002. Bottke also tracked it down, and had the report by the time Melosh requested it by interlibrary loan. Both made several photocopies. Melosh said, "I since found out it was actually read into the Congressional Record as part of the MX Missile controversy."
BIOSKETCH: H. JAY MELOSH
Melosh, a professor in the UA planetary sciences department and Lunar and Planetary Laboratory, is well known for his work in theoretical geophysics and planetary surfaces. His principal research interests are impact cratering, planetary tectonics, and the physics of earthquakes and landslides. His recent research has focused on studies of the giant impact origin of the moon, the K/T boundary impact that extinguished the dinosaurs, the ejection of rocks from their parent bodies, and the breakup and collision of comet Shoemaker-Levy 9 with Jupiter. Melosh also is active in astrobiological studies that relate chiefly to the exchange of microorganisms between the terrestrial planets. Melosh earned his doctorate from Caltech in 1973 and joined the UA faculty in 1982. He is on the 12-member science team for Deep Impact, a $279 million robotic mission that will become the first to penetrate the surface of a comet when it smashes its camera-carrying copper probe into Comet Tempel 1 on July 4, 2005.
<urn:uuid:9a196416-72dd-4fce-848f-d2f2e22fc121>
3.53125
1,244
News Article
Science & Tech.
41.243849
1,341
Understanding the Black Hole in the Heart of Our Galaxy Wednesday, March 01, 2006 Almost 100 years ago, Albert Einstein wondered about the most captivating prediction of general relativity — that isolated pockets of space-time exist in the universe. Now, Prof. Fulvio Melia, one of the world’s leading astrophysicists, presents a wealth of recent evidence that just such an entity, with a mass of about three million suns, is indeed lurking at the center of our galaxy, the Milky Way, in the form of a super massive “black hole”. In celebration of the World Year of Physics, the University of Tulsa, the American Physical Society and the TU Department of Physics will present a lecture by Melia, who is a professor of physics and astronomy at the University of Arizona and author of “The Edge of Infinity: Super massive Black Holes in the Universe.” Prof. Melia will discuss “The Black Hole at the Center of Our Galaxy,” on Thursday, March 9, 2006 in the Business Administration Hall on the University of Tulsa campus. A reception will be held at 6:15 p.m. preceding the lecture, which begins at 7:00 p.m. Both events are free and open to the public. Both “The Black Hole” and “The Edge of Infinity” are written for a general audience. Each book is a nontechnical account that conveys the excitement generated by the quest to expose what these giant distortions in the fabric of space and time have to say about our origin and ultimate destiny for the general reader. For more information on the lecture, please contact TU Physics Prof. Parameswar Hari, (918) 631-3128, or e-mail email@example.com. For more information about Fulvio Melia, please visit his website at http://www.physics.arizona.edu/~melia/
<urn:uuid:9a71ac3f-64e1-4834-9404-3c58fe4d37c4>
2.703125
412
News (Org.)
Science & Tech.
50.253796
1,342
global - Define global variable
Ordinarily, each Scilab function has its own local variables and can "read" all variables created in the base workspace or by the calling functions. The global keyword allows variables to be read and written across functions. Any assignment to such a variable, in any function, is available to all the other functions declaring it global. If the global variable does not exist the first time you issue the global statement, it is initialized to the empty matrix.
//first: calling environment and a function share a variable
global a
a=1
deff('y=f1(x)','global a,a=x^2,y=a^2')
f1(2)
a
//second: three functions share variables
deff('initdata()','global A C ;A=10,C=30')
deff('letsgo()','global A C ;disp(A) ;C=70')
deff('letsgo1()','global C ;disp(C)')
initdata()
letsgo()
letsgo1()
See also: who, isglobal, clearglobal, gstacksize, resume
<urn:uuid:a188654e-4c2d-433f-bd66-5251cc50bb99>
2.734375
243
Documentation
Software Dev.
38.765388
1,343
The fputcsv() function formats a line as CSV and writes it to an open file. This function returns the length of the written string, or FALSE on failure.
Parameters:
- file: Required. Specifies the open file to write to
- fields: Required. Specifies which array to get the data from
- separator: Optional. A character that specifies the field separator. Default is comma ( , )
- enclosure: Optional. A character that specifies the field enclosure character. Default is "
Tip: Also see the fgetcsv() function.
The CSV file will look like this after the code above has been executed:
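The PHP example and the resulting CSV file referred to above are not included in this copy of the page. Purely to illustrate what the separator and enclosure parameters do, here is a sketch in Java rather than PHP (the escaping rules shown are simplified assumptions, not the exact fputcsv implementation): a field is wrapped in the enclosure character when it contains the separator, the enclosure or a newline, and any embedded enclosure characters are doubled.

public class CsvLine {
    // Format one record as a single CSV line, roughly analogous to PHP's fputcsv().
    static String formatCsvLine(String[] fields, char separator, char enclosure) {
        StringBuilder line = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            String field = fields[i];
            boolean needsEnclosure = field.indexOf(separator) >= 0
                    || field.indexOf(enclosure) >= 0
                    || field.contains("\n");
            if (needsEnclosure) {
                // Double any embedded enclosure characters, then wrap the whole field.
                String escaped = field.replace(String.valueOf(enclosure), "" + enclosure + enclosure);
                line.append(enclosure).append(escaped).append(enclosure);
            } else {
                line.append(field);
            }
            if (i < fields.length - 1) {
                line.append(separator);
            }
        }
        return line.toString();
    }

    public static void main(String[] args) {
        String[] row = {"George", "John", "Thomas, Jr.", "USA"};
        // Prints: George,John,"Thomas, Jr.",USA
        System.out.println(formatCsvLine(row, ',', '"'));
    }
}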
<urn:uuid:5d43d039-7c25-4be0-b2cf-9d5c06f7d319>
3.34375
149
Documentation
Software Dev.
55.31249
1,344
Key Facts
- Length: Up to 3.8 metres
- Range: Widely distributed in all major oceans, absent in polar regions
- Threats: Marine litter, pollution, acoustic disturbance
- Diet: Mainly squid, some octopus and cuttlefish
- Latin: Grampus griseus
The Risso's dolphin has a robust, stocky body and a tall, falcate (curved) dorsal fin. The melon (forehead) is blunt and bulbous with a unique V-shaped crease running from the upper lip to the blowhole. This species has no prominent beak and just two to seven pairs of teeth in the lower jaw. Adult Risso's dolphins measure between 2.6 and 3.8 metres in length and can live for more than 30 years. The colour pattern varies greatly between individuals, and with age. Calves are born grey, but turn darker grey to dark brown as they become juveniles. As they age, the skin tone lightens to silvery-grey in some cases and the body is increasingly covered with scratches and scars inflicted by other Risso's dolphins and prey species such as squid.
Habitat and Distribution
Risso's dolphins are widely distributed throughout most oceans and seas between 60° North and 55° South. The north of Scotland represents the northern limit for this species. In the Hebrides, Risso's dolphins tend to inhabit deeper water, which is home to their preferred prey of squid, octopus and cuttlefish. They can occasionally be seen in coastal areas.
In the Hebrides, Risso's dolphins are usually seen singly or in groups of up to 20 animals, although in other areas they are reported in large groups of several hundred individuals. Social behaviour is gregarious and sometimes rough, possibly accounting for some of the scars and tooth rake marks seen in adult animals; observed behaviours include breaching, tail slapping, spy-hopping, splashing and sometimes striking one another. Risso's dolphins are commonly seen travelling and surfacing slowly and will rarely approach vessels or bow-ride.
Food and Foraging
The diet of the Risso's dolphin consists mainly of squid, with some octopus and cuttlefish, and it has been suggested that they feed at night-time when their preferred prey migrate towards the surface. They are able to dive for about 30 minutes to depths in excess of 1000 metres, and sometimes forage cooperatively. Their soft-bodied prey is caught with teeth in the lower jaw and swallowed whole. Scars from such encounters are visible on the skin surface.
Status and Conservation
Many squid-eating marine animals, including turtles and sea birds, swallow plastic bags that they mistake for their prey. Once ingested, plastic may accumulate in the stomach of the animal causing starvation and eventual death. It is likely that Risso's dolphins commonly encounter plastic bags in the ocean and may be affected by this. Risso's dolphins are also subject to incidental capture in fishing nets causing drowning, may be disturbed by noise produced by offshore oil and gas exploration, and are exposed to marine pollutants including organochlorines (pesticides). Risso's dolphins are protected under UK and EU law, principally under Schedule 5 of the Wildlife and Countryside Act 1981, the Nature Conservation (Scotland) Act 2004 and by the 1992 EU Habitats and Species Directive.
<urn:uuid:e4f04983-806c-4797-b4d7-884e285ff216>
3.734375
729
Knowledge Article
Science & Tech.
44.97615
1,345
Changing Planet: Fading Corals
The delicate balance of life and environment which sustains coral reefs globally is under threat. The dramatic increase in atmospheric CO2 in the past few decades has produced an increase in ocean temperature and acidity. Coral diseases have also been on the increase, due to changes in their environment as well as pollution. Click on the video at the left to watch the NBC Learn video - Changing Planet: Fading Corals.
Lesson plan: Changing Planet: Fading Corals
<urn:uuid:e6b09bbb-6c74-4ab8-96fd-04edf5ee6680>
3.65625
516
Tutorial
Science & Tech.
56.299395
1,346
(rebroadcast of live show) Target: Grades 3-5 Length: 60 minutes Guide: Online, see Internet site Internet: http://scifiles.larc.nasa.gov ⇒ Floating tennis shoes and oil globs wash up on the beach to set the tree house detectives in motion to investigate a unique world under the sea. Join them as they dive into learning about ocean floor topography, ocean currents, oil clean-up, and more.To order a copy of this video, please visit the Central Operation of Resources for Educators Web site. This program was first broadcast on NASA TV Education File Schedule November 22, 2004.
<urn:uuid:787072b5-3502-4831-964e-0e0de0d635d8>
2.6875
140
Truncated
Science & Tech.
54.898182
1,347
A ring of radiation previously unknown to science fleetingly surrounded Earth last year before being virtually annihilated by a powerful interplanetary shock wave, scientists say. NASA's twin Van Allen space probes, which are studying the Earth's radiation belts, made the cosmic find. The surprising discovery — a new, albeit temporary, radiation belt around Earth — reveals how much remains unknown about outer space, even those regions closest to the planet, researchers added. After humanity began exploring space, the first major find made there was the Van Allen radiation belts, zones of magnetically trapped, highly energetic charged particles first discovered in 1958. "They were something we thought we mostly understood by now, the first discovery of the Space Age," said lead study author Daniel Baker, a space scientist at the University of Colorado. These belts were believed to consist of two rings: an inner zone made up of both high-energy electrons and very energetic positive ions that remains stable in intensity over the course of years to decades; and an outer zone composed mostly of high-energy electrons whose intensity swings over the course of hours to days, depending primarily on the influence of the solar wind, the stream of charged particles flowing from the sun. [How NASA's Twin Radiation Probes Work (Infographic)] The discovery of a temporary new radiation belt now has scientists reviewing the Van Allen radiation belt models to understand how it occurred.
Radiation rings around Earth
The intense radiation trapped in the Van Allen belts can pose serious risks for satellites. To learn more about them, NASA launched twin spacecraft, the Van Allen probes, in the summer of 2012. The satellites were armed with a host of sensors to thoroughly analyze the plasma, energetic particles, magnetic fields and plasma waves in these belts with unprecedented sensitivity and resolution. Unexpectedly, the probes revealed a new radiation belt surrounding Earth, a third one made of super-high-energy electrons embedded in the outer Van Allen belt about 11,900 to 13,900 miles (19,100 to 22,300 kilometers) above the planet's surface. This stable ring of space radiation apparently formed on Sept. 2 and lasted for more than four weeks. "The feature was so surprising, I initially foolishly thought the instruments on the probes weren't working properly, but I soon realized the lab had built such wonderful instruments that there wasn't anything wrong with them, so what we saw must be true," Baker said. This newfound radiation belt then abruptly and almost completely disappeared on Oct. 1. It was apparently disrupted by an interplanetary shock wave caused by a spike in solar wind speeds. "More than five decades after the original discovery of these radiation belts, you can still find new unexpected things there," Baker said. "It's a delight to be able to find new things in an old domain. We now need to re-evaluate them thoroughly both theoretically and observationally."
A radiation mystery
It remains uncertain how this temporary radiation belt arose. Van Allen mission scientists suspect it was likely created by the solar wind tearing away the outer Van Allen belt. "It looks like its existence may have been bookended by solar disturbances," Baker said. Future study of the Van Allen belts can reveal whether such temporary rings of radiation are common or rare. "Do these occur frequently, or did we get lucky and see a very rare circumstance that happens only once in a while?" Baker said. "And what other unusual revelations might come now that we are really looking at these radiation belts with new, modern tools?" The scientists detailed their findings online Feb. 28 in the journal Science.
<urn:uuid:9c72cc2e-a017-4988-969f-d6534daff827>
4.03125
769
News Article
Science & Tech.
38.787331
1,348
A large part of the Division for Planetary Sciences conference's second day (back on October 11th) was devoted to studies of the largest satellite of Saturn, Titan. For this report, I am going to focus on the studies of the features that have been interpreted as hydrocarbon lakes. They were originally discovered through the Cassini orbiter's synthetic aperture radar observations. The features reflected the radar signals extremely poorly and, as a result, appeared as dark patches in those radar images. This sort of signature indicates that the surface being observed is extremely flat, consistent with a calm liquid surface. The outlines of these areas are also very reminiscent of shorelines—some are even accompanied by what appear to be drainage channels—and the climatic conditions at their sites allow the presence of liquid methane and ethane. Thus, the radar-dark features near the north pole have been interpreted as liquid-filled depressions, similar to what we call "lakes" on Earth. However, a calm liquid surface is not the only possible scenario that fits the radar image of the features. For example, depressions filled with fine loose particles (which we would call sand on Earth) could appear dark in radar signature. Although the climatology of the polar region makes this scenario unlikely, there has been no direct confirmation that these radar-dark patches are indeed filled with liquid, and researchers continue to study the properties of the features we have been calling the "lakes" to see what they really are. Specifically, Titan researchers have been searching for signs of specular reflections of sunlight off of the lakes' surface, which would constitute indisputable evidence that those features are indeed filled with liquid. So far, however, no such reflections have been observed. Even if the lakes are actually filled with liquid, the absence of specular reflections is not particularly surprising because Titan's atmosphere contains a very thick layer of haze, which blocks most direct sunlight. Scientists have been looking for reflections in the spectral windows of Titan's atmosphere, where some wavelengths of light penetrate down to the surface, but they still have not spotted anything. This could be either because we have not observed the lakes from the right angle to see the reflections, or perhaps because the lakes are not filled with liquid. Specular Reflection on Earth, seen on my way home from the conference. Specular reflection is a very powerful effect—notice that the bright reflected sun dominates the image even though water covers a very small portion of it. The image also illustrates the importance of the viewing angle; the river spans the entire width of the image but only a small section shows specular reflection. The latest results presented at the conference drew a very mixed picture. First, Roger Clark of the US Geological Survey presented an analysis indicating that the lakes must be, at most, several millimeters deep to be consistent with the spectrum obtained using Cassini's Visual and Infrared Mapping Spectrometer (VIMS) instrument. He used the spectral characteristics of liquid hydrocarbons to show that the VIMS spectral coverage can measure lake depths of up to tens of centimeters; the liquid hydrocarbon at the sites of the "lakes" seems to be much shallower than that.
While Clark hinted that this result makes the feature more similar to mudflats or playas on Earth, he also pointed out there could be something floating on the surface of the lakes that could mask any deep layer of liquid underneath. As the density of liquid hydrocarbons is very low, it is unclear what could float on a liquid body of this sort. Next, Jason Barnes of the University of Idaho pointed out that the lakes appear the same regardless of the direction of sunlight or the angle of observation. This is a bit of a puzzle, since the spectrum of light scattered by any material usually appears different depending on the geometry of the illumination and the observation. Barnes explained this by proposing that the uniform appearance of the lakes is the product of Titan's atmosphere. Since Titan is completely covered by a thick layer of haze that scatters sunlight, the surface is illuminated evenly from all angles. We experience the same effect when it is cloudy on Earth; even though there is plenty of light during the day, there are no shadows cast on the ground because the clouds scatter sunlight and create a uniform illumination. Thus, if the light given off from the surface of Titan's lakes is a reflection of the hazy sky, it will appear the same regardless of the observation geometry. Barnes also presented many images showing geological features around the lakes that suggest the depth of the lakes has varied in the recent past, perhaps indicating seasonal changes in their levels. Another presentation, by Jason Soderblom of the University of Arizona, showed that the lakes' surfaces are extremely poor reflectors of light when observed in the 5-micron spectral window using the VIMS instrument. His observations indicate that the lakes' surfaces reflect no detectable light—he places an upper limit of one percent reflectivity based on the sensitivity of the data. His observations also detect no scattering or specular reflection of sunlight. Soderblom argues that this is consistent with an extremely smooth surface, like that of a quiescent liquid, in which sunlight reflects in only one direction and the specular reflection can be seen only from a very narrow range of observation angles. In short, one analysis says there is almost no liquid on the surface of those lakes, a second shows that the depth of the lakes has varied recently and suggests that the hazy sky is reflected on the lake surfaces, while a third says there is no light reflecting from the lakes' surfaces at all. It is unclear whether there is a scenario that consistently explains all existing observational data, including the results presented at the meeting. This situation nicely illustrates a real scientific process, directed towards understanding what makes things appear and behave the way they do. As scientists, we focus on things we cannot explain and try to draw a picture that's consistent with everything we know—in fact, even though the three studies presented above may seem to contradict each other, the three presenters are co-authors on each other's studies. To be sure, the prevailing interpretation in the field remains that those "lakes" are indeed liquid-filled depressions on the ground; some things simply still need to be explained before we draw any final conclusions. The scientific process is still very much ongoing, so stay tuned for future updates!
<urn:uuid:d0f51a14-ab37-4940-b681-946d39d502fe>
3.546875
1,300
Personal Blog
Science & Tech.
28.302532
1,349
DR EMILY BALDWIN Posted: November 09, 2009 By looking far back into the depths of the Universe, astronomers have found a galaxy seen as it was just 787 million years after the big bang, along with 22 other early galaxies. The big bang blasted the Universe into existence 13.7 billion years ago, and 400,000 years later, temperatures had cooled enough for electrons and protons to join and form neutral hydrogen. Within one billion years the first stars and galaxies were born. These new residents of the Universe radiated energy and ionized the hydrogen, initiating what is known as the reionization period. Astronomers know that this era ended about one billion years after the Big Bang, but when it began has remained something of a mystery.
This is a composite of false colour images of the galaxies found at the early epoch around 800 million years after the big bang. The upper left panel presents the galaxy confirmed in the 787 million year old universe. These galaxies are in the Subaru Deep Field. Image: M. Ouchi et al.
Now, researcher Masami Ouchi of the Carnegie Observatories and colleagues have implemented a technique to shed light on the Universe's first brood of galaxies. “We look for ‘dropout’ galaxies,” says Ouchi. “We use progressively redder filters that reveal increasing wavelengths of light and watch which galaxies disappear from or ‘dropout’ of images made using those filters.” Older, more distant galaxies ‘drop out’ of progressively redder filters, and the specific wavelengths reveal the galaxies’ distance and age. “What makes this study different is that we surveyed an area that is over 100 times larger than previous ones and, as a result, had a larger sample of early galaxies (22) than past surveys. Plus, we were able to confirm one galaxy’s age.” The hydrogen signature of one galaxy reveals its age as 787 million years post big bang, the first age-confirmation of a so-called dropout galaxy. Ouchi confirms that since all the galaxies were found using the same dropout technique, they are likely to be the same age. Ouchi's team made the observations using a custom-made, super-red filter on the wide-field camera of the 8.3-metre Subaru Telescope between 2006 and 2009 as part of the Subaru Deep Field and Great Observatories Origins Deep Survey North field. Using this data and that from other studies, they determined that star formation rates were considerably lower from 800 million years to one billion years after the big bang than after this time frame. Accordingly, the rate of ionization would be very slow during this early time, due to the low star-formation rate. “We were really surprised that the rate of ionization seems so low, which would constitute a contradiction with the claim of NASA’s WMAP satellite. It concluded that reionization started no later than 600 million years after the Big Bang,” says Ouchi. “We think this riddle might be explained by more efficient ionizing photon production rates in early galaxies. The formation of massive stars may have been much more vigorous then than in today’s galaxies. Fewer, massive stars produce more ionizing photons than many smaller stars.” The research will feature in a December issue of the Astrophysical Journal.
<urn:uuid:58d5da0b-0dab-4cf6-ab86-25008d97912f>
3.1875
856
News Article
Science & Tech.
43.810254
1,350
Using a simple algorithm, Belokurov et al. discovered this almost perfect Einstein ring around a luminous red galaxy in the SDSS database: They called it the Cosmic Horseshoe. The ring has a diameter of 10 arcseconds, which counts as large. The lensing galaxy has a mass of about 5 × 10^12 solar masses - about ten times the mass of the Milky Way! For comparison, here's another lens, also from the SDSS, discovered serendipitously, the "8 O'clock Arc": This is less ring-shaped; it has 3 images of the same background galaxy on the top and a fourth on the bottom. It shows a similar size, but the lensing galaxy is estimated to have a mass of about 1/5 of that in the center of the Cosmic Horseshoe. A news article. The scientific article.
<urn:uuid:4812e8bb-6c3e-4b20-9dea-fdf47f61876d>
2.890625
187
Personal Blog
Science & Tech.
61.626848
1,351
In the Netherlands, Belgium, Germany and Spain, Astronet, Kennislink, Astroforum, Sterrenkids, the Dutch Copernicus Public Observatory, the Belgian Mira Public Observatory and the Spanish Serviastro organized live webcasts of the event, as did the Asociación Argentina "Amigos de la Astronomía" in Argentina and the Hochschule Offenburg in Germany. Most webcasts were clouded out or had only a partial view of the eclipse. Only in Argentina did Luis Manzerola and his team of the Asociación Argentina "Amigos de la Astronomía" view a complete eclipse: In the Netherlands Bas and Brechje van Beek of Skyglory captured the eclipse through a 101mm F5.4 Genesis Tele Vue telescope using a Canon 350DH camera, sensitized for H-alpha, at 200 ASA. Norbert Schmidt (pictured below left behind the computer) webcast the eclipse from Copernicus Public Observatory. He took this shot of totality at 03:04 UT. In the Netherlands many observers gathered at Copernicus Public Observatory. At times they experienced fog, but in general the eclipse could be followed until mid-totality. Carl Koppeschaar observing at Copernicus Public Observatory. Copyright: Asociación Argentina "Amigos de la Astronomía" A lunar eclipse occurs when the Full Moon passes through the Earth's shadow. The Moon encounters the penumbra, the Earth's outermost shadow zone, at 00:35 Universal Time (UT). About thirty minutes later a slight dusky shading can be noticed on the leading edge of the Moon. At 01:43 UT the Moon begins its entry into the innermost shadow zone, or umbra. For more than an hour a circular shadow creeps across the Moon's face. At 03:01 UT, the Moon is completely within Earth's dark shadow. It then takes on an eerie coppery tint that can be compared with the colour of blood. During a total eclipse the Moon shines with an orange-red glow. Photograph: Robert Smallegange (Leeuwarden, The Netherlands). Without Earth's atmosphere, the Moon would disappear completely once immersed in the umbra. Longer wavelengths of light penetrate Earth's atmosphere better than shorter wavelengths, which is why the rising or setting sun looks reddish. In essence, the ruddy tint of a totally eclipsed moon comes from the ring of atmosphere around Earth's limb that scatters a sunset-like glow into the umbra. During totality a ring of reddish sunlight surrounds the Earth. The hue actually changes from one eclipse to another, ranging from a bright coppery orange to brownish. The Moon may darken so much that it becomes all but invisible to the unaided eye. These very dark lunar eclipses often occur after exceptional volcanic eruptions. Totality ends at 03:51 UT, when the moon's leading edge exits the umbra. The moon leaves the umbra completely at 05:09 UT, and the eclipse ends at 06:17 UT when the moon makes its last contact with the penumbra. Path of the Moon through Earth's umbral and penumbral shadows during the Total Lunar Eclipse of February 20-21, 2008. Times in UT (= GMT). Courtesy: Science@NASA/Larry Koehn. More animations at Shadow & Substance.
Request for observations
Dr. Richard Keen (Program for Atmospheric and Oceanic Sciences (PAOS), University of Colorado) is interested in brightness estimates (total visual magnitude and Danjon L values) of the moon during totality. His plan is to summarize the results from the 2003-2004 series of lunar eclipses in the Smithsonian Volcano Bulletin after this eclipse of October. Richard Keen communicates: This is a mass mailing to all of you who have observed lunar eclipses in the past and/or have an interest in these events.
Last week I gave a presentation at a climate conference which included some results of your observations, and a copy of the presentation is attached to this message. Once again, I would be interested in hearing of your observations of this week's eclipse (Wednesday evening, February 20, on the west side of the Atlantic, and Thursday morning for those on the east side of the Atlantic). Some of you may make "reverse binocular" magnitude estimates, or Danjon "L" estimates, or both. Below are some web links with information about the eclipse and about observing methods, along with links to presentations at the climate conference at which your observations were presented. In summary, the recent eclipses show that the atmosphere has been clear of volcanic aerosols since about 1995, and that this has contributed about 0.2 degrees to the recent warming. Enjoy clear skies for the eclipse! Dr. Richard Keen Here's a brief description of one way to measure the brightness of a lunar eclipse: The totally eclipsed moon is usually brighter than most comparison stars (expect about magnitude -3 at second and third contacts, and -1.4 at mid-totality, assuming no volcanic dust present), and its brightness needs to be reduced before a direct comparison can be made. An easy way to do this is to view the moon through reversed binoculars with one eye, comparing the reduced lunar image with stars seen directly with the other eye. The estimated magnitude of the reduced moon can be adjusted by a factor depending on the magnification of the binoculars, yielding the actual magnitude of the moon. For example, reversed 10x50 binoculars will reduce the apparent diameter of the moon by a factor of 10, or its brightness by a factor of 100, or 5 magnitudes. If the reduced moon appears like a magnitude 3 star, the actual moon is 5 magnitudes brighter, or -2. The corrections for 8x, 7x, and 6x binoculars are 4.5, 4.2, and 3.9 magnitudes, respectively. These correction factors assume the stated magnification of the binoculars is correct, and neglect light loss in the optics. More accurate correction factors can be empirically derived from observations of Venus, Jupiter, or Sirius. Observations made from the beginning to end of totality will reveal the darkening of the moon as it slips deeper into the umbra, but the most useful observations (for measuring volcanic dust) are those taken near mid-totality. Reports should include time(s) of observation, size of binoculars (or other method) used, and identity of comparison stars or planets. (A short worked example of the binocular correction appears after the crater timings below.) Articles about how volcanoes can affect the brightness of a lunar eclipse: Here are predictions for the immersions and emersions of craters and mountains on the Moon:
Immersion and Emersion Times (UT) for the Total Lunar Eclipse of February 21, 2008
Immersion   Crater/mountain   Emersion
01.46   Riccioli      04.06
01.48   Grimaldi      04.05
01.49   Aristarchus   04.22
01.53   Kepler        04.19
01.55   Billy         04.06
01.56   Harpalus      04.35
01.57   Bianchini     04.36
02.00   Pytheas       04.30
02.01   Copernicus    04.27
02.03   Timocharis    04.36
02.05   Pico          04.43
02.05   Plato         04.44
02.08   Piton         04.44
02.10   Autolycus     04.43
02.11   Campanus      04.08
02.14   Aristoteles   04.52
02.15   Eudoxus       04.51
02.16   Manilius      04.42
02.19   Menelaus      04.46
02.23   Dionysius     04.42
02.24   Plinius       04.49
02.24   Endymion      05.02
02.27   Vitruvius     04.53
02.27   Tycho         04.07
02.33   Censorinus    04.47
02.34   Proclus       04.59
02.37   Taruntius     04.55
02.40   Messier       04.52
02.42   Goclenius     04.47
02.47   Langrenus     04.53
Eduard Masana (left), Salvador Ribas (right)
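As promised above, here is a small worked example of the reversed-binocular correction. It is an illustration added to this text rather than part of Dr. Keen's instructions, and the function names are made up; it simply applies the rule that shrinking the moon's apparent size by a factor m dims it by 5*log10(m) magnitudes, which reproduces the quoted corrections of 5, 4.5, 4.2 and 3.9 magnitudes for 10x, 8x, 7x and 6x binoculars:

    import math

    def magnitude_correction(magnification):
        # Shrinking the image by a factor m cuts its brightness by m**2,
        # i.e. by 5 * log10(m) magnitudes.
        return 5 * math.log10(magnification)

    def actual_magnitude(estimated_magnitude, magnification):
        # The real moon is brighter (more negative) than the reduced image.
        return estimated_magnitude - magnitude_correction(magnification)

    # Example from the text: through reversed 10x50 binoculars the eclipsed
    # moon looks like a magnitude 3 star, so its actual magnitude is about -2.
    print(actual_magnitude(3.0, 10))                    # -2.0
    for m in (10, 8, 7, 6):
        print(m, round(magnitude_correction(m), 1))     # 5.0, 4.5, 4.2, 3.9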
<urn:uuid:55c466a9-b479-4127-a96b-cbe361950314>
2.546875
1,746
Knowledge Article
Science & Tech.
61.626288
1,352
Buried in Albert Einstein’s mail one spring day in 1953 lay a letter from an ordinary mortal, a 20-year-old high school dropout named John Moffat. Two more disparate correspondents would be hard to imagine. Moffat was an impoverished artist and self-taught physicist. Einstein was a mythic figure—the world’s most famous scientist. Moffat was living with his British father and Danish mother in Copenhagen. Einstein was at the Institute for Advanced Study in Princeton, New Jersey. Yet both men were outsiders. In his later years, Einstein had become increasingly isolated from the physics community, refusing to embrace the strange but powerful theory of quantum mechanics—with its particles that are also waves and that exist in no specific place until they’re observed. Nature, he argued, couldn’t be so perverse. So for nearly 30 years he had pursued a quixotic goal: the creation of a unified field theory to describe all the forces of nature and to demystify the quantum world. That was the occasion for Moffat’s letter. He thought he could offer Einstein some constructive criticism. “I wrote him to say that I wasn’t happy about what he was doing,” Moffat recalls. There was nothing unusual about this. Plenty of people sent letters to Einstein, not all of them rational. But in Moffat’s case something unexpected happened: Einstein wrote back. “Dear Mr. Moffat,” the reply began. “Our situation is the following. We are standing in front of a closed box which we cannot open, and we try hard to discover about what is and is not in it.” That closed box is the universe, of course, and no one had done more to pry off the lid than Einstein. Yet in the eyes of nearly all his colleagues he had contributed almost nothing of importance to physics for almost 20 years. Were they right? Did he squander his genius by chasing vainly after an ultimate theory? That is the conventional view. But at least a few physicists now argue that Einstein was far ahead of his time, raising questions that will challenge researchers for decades. “It’s often said that Einstein wasted his time later in life,” says Moffat, who went on to become a theoretical physicist. “This, of course, is erroneous. Einstein never wasted his time.” Einstein’s split with mainstream physics came at the very height of his career. In 1927, when he was 48, the world’s leading physicists gathered at a conference in Brussels to debate an issue that remains contentious to this day: What does quantum mechanics have to say about reality? Einstein had won the Nobel Prize in physics for research that showed that light consists of particles of energy—research that laid the groundwork for quantum mechanics. Yet he dismissed the new theory out of hand. At the conference, he clashed with the great Danish physicist Niels Bohr, launching a feud that would last until Einstein’s death in 1955. Bohr championed the strange new insights emerging from quantum mechanics. He believed that any single particle—be it an electron, proton, or photon—never occupies a definite position unless someone measures it. Until you observe a particle, Bohr argued, it makes no sense to ask where it is: It has no concrete position and exists only as a blur of probability. Einstein scoffed at this. He believed, emphatically, in a universe that exists completely independent of human observation. All the strange properties of quantum theory are proof that the theory is flawed, he said. A better, more fundamental theory would eliminate such absurdities. 
“Do you really believe that the moon is not there unless we are looking at it?” he asked. “He saw in a way more clearly than anyone else what quantum mechanics was really like,” British physicist Julian Barbour says. “And he said, ‘I don’t like it.’” In the years after the conference in Brussels, Einstein leveled one attack after another at Bohr and his followers. But for each attack Bohr had a ready riposte. Then in 1935 Einstein devised what he thought would be the fatal blow. Together with two colleagues in Princeton, Nathan Rosen and Boris Podolsky, he found what appeared to be a serious inconsistency in one of the cornerstones of quantum theory, the uncertainty principle. Formulated in 1927 by the German physicist Werner Heisenberg, the uncertainty principle puts strict limits on how accurately one can measure the position, velocity, energy, and other properties of a particle. The very act of observing a particle also disturbs it, Heisenberg argued. If a physicist measures a particle’s position, for example, he will also lose information about its velocity in the process. Einstein, Podolsky, and Rosen disagreed, and they suggested a simple thought experiment to explain why: Imagine that a particle decays into two smaller particles of equal mass and that these two daughter particles fly apart in opposite directions. To conserve momentum, both particles must have identical speeds. If you measure the velocity or position of one particle, you will know the velocity or position of the other—and you will know it without disturbing the second particle in any way. The second particle, in other words, can be precisely measured at all times. Einstein and his collaborators published their thought experiment in 1935, with the title “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?” The paper was in many ways Einstein’s swan song: Nothing he wrote for the rest of his life would match its impact. If his critique was right, quantum mechanics was inherently flawed. Bohr argued that Einstein’s thought experiment was meaningless: If the second particle was never directly measured, it was pointless to talk about its properties before or after the first particle was measured. But although quantum physics eventually carried the day, it wasn’t until 1982, when the French physicist Alain Aspect constructed a working experiment based on Einstein’s ideas, that Bohr’s argument was vindicated. In 1935 Einstein was convinced that he had refuted quantum mechanics. And from then until his death 20 years later, he devoted nearly all his efforts to the search for a unified field theory.
<urn:uuid:85029add-616a-4ad7-b03c-89f243f87996>
2.609375
1,315
Nonfiction Writing
Science & Tech.
49.164923
1,353
Use the following tips and techniques when you design a UML 2.0 Activity Diagram. Usually you create Activity Diagrams after State Machine Diagrams. To design a UML 2.0 Activity Diagram, follow this general procedure:
- Create one or more activities. You can place several activities on a single diagram, or create a separate diagram for each. Warning: You cannot create nested activities.
- Usually activities are linked to states or transitions on State Machine Diagrams. Switch to your State Machine Diagrams and associate the activities you just created with states and transitions. Tip: After doing this, you may find that more activities need to be created, or that the same activity can be used in several places.
- Switch back to the Activity Diagram. Think about the flows in your activities. You can have an object flow (for transferring data), a control flow, both, or even several flows in each activity.
- Create starting and finishing points for every flow. Each flow can have the following starting points:
  - Initial node
  - Activity parameter (for object flow)
  - Accept event action
  - Accept time event action
  Each flow finishes with an Activity Final or Flow Final node. If your activity has several starting points, they can be used simultaneously.
- Create object nodes. You do not link object nodes to classes on your Class Diagrams. However, you can use hyperlinks for better understanding of your diagrams.
- Create action nodes for your flows. Flows can share actions. Warning: You cannot create nested actions.
- For object flows, add pins to actions. Connect actions and pins by flow links.
- Add pre- and postconditions. You can create plain text or OCL conditions.
- You can optionally create shortcuts to related elements of other diagrams.
To add an activity parameter to an activity:
- In the Tool Palette, press the Activity Parameter button.
- Click the target activity.
Or: Choose Add > Activity Parameter on the activity context menu.
Result: An Activity Parameter node is added to the activity as a rectangle. Note that the activity parameter node is attached to its activity. You can only move the node along the activity borders. Note: Activity parameters cannot be connected by control flow links.
<urn:uuid:fbb7db04-6d16-42b2-a184-8155e07ac796>
3.328125
472
Tutorial
Software Dev.
41.476515
1,354
When two declarations in the same scope describe the same object or function, the two declarations must specify compatible types. These two types are then combined into a single composite type that is compatible with the first two. More about composite types later. The compatible types are defined recursively. At the bottom are type specifier keywords. These are the rules that say that unsigned short is the same as unsigned short int, and that a type without type specifiers is the same as one with int. All other types are compatible only if the types from which they are derived are compatible. For example, two qualified types are compatible if the qualifiers, const and volatile, are identical, and the unqualified base types are compatible.
<urn:uuid:9c2223f5-6614-4c73-9814-2eafabe7e1d3>
3.09375
144
Documentation
Software Dev.
28.725081
1,355
Source code: Lib/bdb.py The bdb module handles basic debugger functions, like setting breakpoints or managing execution via the debugger. The following exception is defined: The bdb module also defines two classes: This class implements temporary breakpoints, ignore counts, disabling and (re-)enabling, and conditionals. Breakpoints are indexed by number through a list called bpbynumber and by (file, line) pairs through bplist. The former points to a single instance of class Breakpoint. The latter points to a list of such instances since there may be more than one breakpoint per line. When creating a breakpoint, its associated filename should be in canonical form. If a funcname is defined, a breakpoint hit will be counted when the first line of that function is executed. A conditional breakpoint always counts a hit. Breakpoint instances have the following methods: Delete the breakpoint from the list associated to a file/line. If it is the last breakpoint in that position, it also deletes the entry for the file/line. Mark the breakpoint as enabled. Mark the breakpoint as disabled. Print all the information about the breakpoint: The Bdb class acts as a generic Python debugger base class. This class takes care of the details of the trace facility; a derived class should implement user interaction. The standard debugger class (pdb.Pdb) is an example. The skip argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. Whether a frame is considered to originate in a certain module is determined by the __name__ in the frame globals. New in version 2.7: The skip argument. The following methods of Bdb normally don’t need to be overridden. Auxiliary method for getting a filename in a canonical form, that is, as a case-normalized (on case-insensitive filesystems) absolute path, stripped of surrounding angle brackets. Set the botframe, stopframe, returnframe and quitting attributes with values ready to start debugging. This function is installed as the trace function of debugged frames. Its return value is the new trace function (in most cases, that is, itself). The default implementation decides how to dispatch a frame, depending on the type of event (passed as a string) that is about to be executed. event can be one of the following: For the Python events, specialized functions (see below) are called. For the C events, no action is taken. The arg parameter depends on the previous event. If the debugger should stop on the current line, invoke the user_line() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_line()). Return a reference to the trace_dispatch() method for further tracing in that scope. If the debugger should stop on this function call, invoke the user_call() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_call()). Return a reference to the trace_dispatch() method for further tracing in that scope. If the debugger should stop on this function return, invoke the user_return() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_return()). Return a reference to the trace_dispatch() method for further tracing in that scope. 
If the debugger should stop at this exception, invokes the user_exception() method (which should be overridden in subclasses). Raise a BdbQuit exception if the Bdb.quitting flag is set (which can be set from user_exception()). Return a reference to the trace_dispatch() method for further tracing in that scope. Normally derived classes don’t override the following methods, but they may if they want to redefine the definition of stopping and breakpoints. This method checks if the frame is somewhere below botframe in the call stack. botframe is the frame in which debugging started. This method checks if there is a breakpoint in the filename and line belonging to frame or, at least, in the current function. If the breakpoint is a temporary one, this method deletes it. This method checks if there is a breakpoint in the filename of the current frame. Derived classes should override these methods to gain control over debugger operation. This method is called from dispatch_call() when there is the possibility that a break might be necessary anywhere inside the called function. Handle how a breakpoint must be removed when it is a temporary one. This method must be implemented by derived classes. Derived classes and clients can call the following methods to affect the stepping state. Stop after one line of code. Stop on the next line in or below the given frame. Stop when returning from the given frame. Stop when the line with the line no greater than the current one is reached or when returning from current frame Start debugging from frame. If frame is not specified, debugging starts from caller’s frame. Stop only at breakpoints or when finished. If there are no breakpoints, set the system trace function to None. Set the quitting attribute to True. This raises BdbQuit in the next call to one of the dispatch_*() methods. Derived classes and clients can call the following methods to manipulate breakpoints. These methods return a string containing an error message if something went wrong, or None if all is well. Set a new breakpoint. If the lineno line doesn’t exist for the filename passed as argument, return an error message. The filename should be in canonical form, as described in the canonic() method. Delete the breakpoints in filename and lineno. If none were set, an error message is returned. Delete the breakpoint which has the index arg in the Breakpoint.bpbynumber. If arg is not numeric or out of range, return an error message. Delete all breakpoints in filename. If none were set, an error message is returned. Delete all existing breakpoints. Check if there is a breakpoint for lineno of filename. Return all breakpoints for lineno in filename, or an empty list if none are set. Return all breakpoints in filename, or an empty list if none are set. Return all breakpoints that are set. Derived classes and clients can call the following methods to get a data structure representing a stack trace. Get a list of records for a frame and all higher (calling) and lower frames, and the size of the higher part. Return a string with information about a stack entry, identified by a (frame, lineno) tuple: The following two methods can be called by clients to use a debugger to debug a statement, given as a string. Debug a statement executed via the exec statement. globals defaults to __main__.__dict__, locals defaults to globals. Debug a single function call, and return its result. 
Finally, the module defines the following functions: Check whether we should break here, depending on the way the breakpoint b was set. If it was set via line number, it checks if b.line is the same as the one in the frame also passed as argument. If the breakpoint was set via function name, we have to check we are in the right frame (the right function) and if we are in its first executable line. Determine if there is an effective (active) breakpoint at this line of code. Return a tuple of the breakpoint and a boolean that indicates if it is ok to delete a temporary breakpoint. Return (None, None) if there is no matching breakpoint.
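As a rough usage sketch of the class described above (added for illustration and not part of the reference text; the tracer class and the demo function are invented names), a client typically subclasses Bdb, overrides one or more of the user_* hooks, and then starts execution through run(), runeval() or runcall():

    import bdb

    class Tracer(bdb.Bdb):
        # Minimal debugger: report every line that is about to be executed.
        def user_line(self, frame):
            print("about to run %s:%d" % (frame.f_code.co_filename,
                                          frame.f_lineno))
            self.set_step()          # keep single-stepping into everything

    def demo():
        total = 0
        for i in range(3):
            total += i
        return total

    print(Tracer().runcall(demo))    # prints each traced line, then 3

A real debugger would also override user_call(), user_return() and user_exception(), and would typically call set_break() with a canonical filename before using set_continue() so that execution only stops at breakpoints.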
<urn:uuid:1a41f7b6-1940-4386-ab5b-dc92e6419769>
2.875
1,701
Documentation
Software Dev.
52.352028
1,356
Expression statements are used (mostly interactively) to compute and write a value, or (usually) to call a procedure (a function that returns no meaningful result; in Python, procedures return the value None). Other uses of expression statements are allowed and occasionally useful. The syntax for an expression statement is:
    expression_stmt ::= expression_list
An expression statement evaluates the expression list (which may be a single expression). In interactive mode, if the value is not None, it is converted to a string using the built-in repr() function and the resulting string is written to standard output (see section 6.6) on a line by itself. (Expression statements yielding None are not written, so that procedure calls do not cause any output.)
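A short interactive session (added here as an illustration, not part of the reference) shows the difference between an expression statement whose non-None value is echoed and a procedure call that returns None:

    >>> 2 + 3          # value is not None, so repr(5) is written
    5
    >>> x = [1, 2]
    >>> x.append(3)    # append() returns None, so nothing is written
    >>> x              # the echoed form is the repr() of the value
    [1, 2, 3]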
<urn:uuid:a966e3c7-a27a-4567-bbdc-1763d4e67c8a>
3.265625
168
Documentation
Software Dev.
39.1603
1,357
The Wilson's phalarope is a shorebird that breeds in the Northern Prairie Pothole Region of North America. It is a migratory species that winters in South America. During the breeding season they are found in clear, shallow wetlands. It is the female phalarope that is larger and more brightly colored. She is the one in charge. She will choose her mate and territory and will defend that territory. Shortly after she lays her eggs she will begin to migrate south, leaving the male to incubate the eggs and care for the young. While migrating back to South America they will often stop at bodies of salt water, where they forage for food by swimming in a tight circle. This forms a small whirlpool which helps bring aquatic insects and crustaceans to the surface.
<urn:uuid:78bf768b-153b-4488-a158-7ece93499a1c>
3.015625
168
Personal Blog
Science & Tech.
57.733333
1,358
Object pool pattern
The object pool pattern is a software creational design pattern that uses a set of initialized objects kept ready to use, rather than allocating and destroying them on demand. A client of the pool will request an object from the pool and perform operations on the returned object. When the client has finished, it returns the object to the pool rather than destroying it; the pool itself acts as a specialized kind of factory object. Object pooling can offer a significant performance boost in situations where the cost of initializing a class instance is high, the rate of instantiation of a class is high, and the number of instances in use at any one time is low. A pooled object is obtained in predictable time, whereas creation of a new object (especially over a network) may take variable time. However, these benefits are mostly true for objects that are expensive with respect to time, such as database connections, socket connections, threads and large graphic objects like fonts or bitmaps. In certain situations, simple object pooling (of objects that hold no external resources, but only occupy memory) may not be efficient and could decrease performance.
Handling of empty pools
Object pools employ one of three strategies to handle a request when there are no spare objects in the pool.
- Fail to provide an object (and return an error to the client).
- Allocate a new object, thus increasing the size of the pool. Pools that do this usually allow you to set the high water mark (the maximum number of objects ever used).
- In a multithreaded environment, a pool may block the client until another thread returns an object to the pool.
When writing an object pool, the programmer has to be careful to make sure the state of the objects returned to the pool is reset back to a sensible state for the next use of the object. If this is not observed, the object will often be in some state that was unexpected by the client program and may cause the client program to fail. The pool is responsible for resetting the objects, not the clients. Object pools full of objects with dangerously stale state are sometimes called object cesspools and regarded as an anti-pattern. The presence of stale state is not always an issue; it becomes dangerous when it causes the object to behave differently. For example, an object that represents authentication details may break if the "successfully authenticated" flag is not reset before it is passed out, since it will indicate that a user is correctly authenticated (possibly as someone else) when they haven't yet attempted to authenticate. However, it will work just fine if you fail to reset some value only used for debugging, such as the identity of the last authentication server used. Inadequate resetting of objects may also cause an information leak. If an object contains confidential data (e.g. a user's credit card numbers) that isn't cleared before the object is passed to a new client, a malicious or buggy client may disclose the data to an unauthorized party. If the pool is used by multiple threads, it may need the means to prevent parallel threads from grabbing and trying to reuse the same object in parallel. This is not necessary if the pooled objects are immutable or otherwise thread-safe. Some publications do not recommend using object pooling with certain languages, such as Java, especially for objects that only use memory and hold no external resources.
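Before turning to those criticisms, a minimal sketch of the pattern itself may help. This example is added for illustration and is not from the original article; the class and function names are invented, and locking for multithreaded use is deliberately omitted:

    class ObjectPool(object):
        def __init__(self, factory, reset, size=4):
            self._factory = factory          # creates a new pooled object
            self._reset = reset              # clears an object's state before reuse
            self._free = [factory() for _ in range(size)]

        def acquire(self):
            if self._free:
                return self._free.pop()
            # Empty-pool strategy: grow the pool by allocating a new object.
            return self._factory()

        def release(self, obj):
            # The pool, not the client, resets state, so stale data cannot
            # leak to the next client (avoiding an "object cesspool").
            self._reset(obj)
            self._free.append(obj)

    # Hypothetical usage, with plain dicts standing in for expensive connections:
    pool = ObjectPool(factory=dict, reset=dict.clear, size=2)
    conn = pool.acquire()
    conn["user"] = "alice"
    pool.release(conn)    # cleared before it can be handed out again

Here release() performs the reset discussed above, and acquire() uses the grow-on-demand strategy for an empty pool.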
Opponents usually say that object allocation is relatively fast in modern languages with garbage collectors; while the operator new needs only ten instructions, the classic new/delete pair found in pooling designs requires hundreds of them, as it does more complex work. Also, most garbage collectors scan "live" object references, and not the memory that these objects use for their content. This means that any number of "dead" objects without references can be discarded with little cost. In contrast, keeping a large number of "live" but unused objects increases the duration of garbage collection. In some cases, programs that use garbage collection instead of directly managing memory may run faster. In the .NET Base Class Library there are a few objects that implement this pattern. System.Threading.ThreadPool is configured to have a predefined number of threads to allocate. When the threads are returned, they are available for another computation. Thus, one can use threads without paying the cost of creation and disposal of threads. Java supports thread pooling via java.util.concurrent.ExecutorService and other related classes. The executor service has a certain number of "basic" threads that are never discarded. If all threads are busy, the service allocates the allowed number of extra threads that are later discarded if they are not used within a certain expiration time. If no more threads are allowed, tasks can be placed in a queue. Finally, if this queue gets too long, it can be configured to suspend the requesting thread.
See also
- Goetz, Brian (2005-09-27). "Java theory and practice: Urban performance legends, revisited". IBM developerWorks. Archived from the original on 2005-09-27. Retrieved 2012-08-28.
- Goetz, Brian (2005-09-27). "Java theory and practice: Garbage collection in the HotSpot JVM". IBM developerWorks. Archived from the original on 2003-11-25. Retrieved 2012-08-28.
- Kircher, Michael; Jain, Prashant (2002-07-04). "Pooling Pattern". EuroPLoP 2002. Germany. Retrieved 2007-06-09.
- OODesign article
- Improving Performance with Object Pooling (Microsoft Developer Network)
- Developer.com article
- Portland Pattern Repository entry
- Apache Commons Pool: A mini-framework to correctly implement object pooling in Java
- Game Programming Patterns: Object Pool
<urn:uuid:7c22c959-2307-44d3-912d-911561f6c208>
3.15625
1,216
Knowledge Article
Software Dev.
45.853679
1,359
Nov 8, 2011 Project CLAMER finds 'disturbing' evidence of changes to Europe's seas Project CLAMER, an 18-month initiative involving 17 European marine institutes, has amassed some 'convincing' and 'disturbing' evidence of changes in the European marine environment, according to its organisers. The project, which has now come to an end, synthesized an extensive collection of academic papers published since 1998 on climate change and Europe's marine environments and combined this with a groundbreaking opinion poll. The online survey of 10,000 residents in 10 European countries reveals widespread concern about climate change, led by worries about sea-level rise and coastal erosion. The poll also found that those worried by climate change largely blame the phenomenon on other groups of people or nations, and assign governments and industry responsibility for mitigating the problem (though they perceive government and industry as ineffective on the issue). Some 86% of respondents said climate change is caused entirely, mainly or in part by human activities. Only 8% thought it was mainly or entirely caused by natural processes; in the US, around 32–36% hold this view. Co-ordinated by the Marine Board of the European Science Foundation, with contributions from more than 20 scientists, the CLAMER synthesis and related book examine the environments of the North Sea, Baltic Sea, Arctic Ocean, North East Atlantic Ocean, Mediterranean Sea and the Black Sea. The study found that Europeans face greater risk of illness, property damage and job losses because of the impacts of climate change on the seas around them. "Millions of euros in health costs may result from human consumption of contaminated seafood, ingestion of water-borne pathogens, and, to a lesser degree, through direct occupational or recreational exposure to marine diseases," states the CLAMER synthesis. "Climatic conditions are playing an increasingly important role in the transmission of these diseases." The researchers found that sea-level rise, combined with higher waves being recorded in the North Atlantic and more frequent and severe storms, threaten up to 1 trillion euros' worth of Europe's physical assets within 500 m of the shore. Some 35% of Europe's GDP is generated within 50 km of the coast, the synthesis notes. "Sea-level rise of 80 to 200 cm could wipe out entire countries…causing sea floods, massive economic damage, large movements of populations from inundated areas, salinity intrusion and loss of wetlands including the ecosystem services that they provide." The CLAMER synthesis also suggests the need for Europe's commercial fisheries to reduce catch in some places and make adjustments in others due to warming water, ocean acidification, and altered salinity and oxygen content. "Some of the biggest [changes] will be required in Europe's seas, where temperatures are rising faster than the open North Atlantic," according to one research paper in the CLAMER collection.
<urn:uuid:a3fadfeb-3bb5-472b-a5c8-7b6ec2d6d2a4>
2.921875
584
News Article
Science & Tech.
21.211134
1,360
Depth range (m): 0 - 4.8
Molecular Biology and Genetics - Statistics of barcoding coverage for Elminius modestus: Public Records: 0; Specimens with Barcodes: 3; Species With Barcodes: 1
Elminius modestus is a species of barnacle in the family Balanidae, native to Australia, Tasmania and New Zealand, but now spread to Britain and the north-west coasts of Europe. It reaches a maximum size of about 10 millimetres (0.39 in) in diameter.
Distribution and habitat
E. modestus originated in Australia and was first seen in British waters, in Chichester Harbour, during the Second World War. It was believed to have arrived on the hulls of ships, or possibly the larval stages travelled in bilge water. It has become very common in southern England and Wales and is spreading northwards, but the spread may be limited by the temperature of the sea. It is found on the upper middle shore and is tolerant of low salinity levels where fresh water enters the sea. It avoids exposed positions. It had reached the Scottish Borders by 1960 and Shetland by 1978. It is found on the Atlantic coasts of Europe from Gibraltar to Germany. E. modestus is a suspension feeder. It has feathery appendages which beat rhythmically to draw plankton and other organic particles into the shell for consumption. Eggs are laid and develop into nauplius larvae which are released into the plankton. These then develop into cyprid larvae which later settle and cement themselves onto a rocky substrate. In the British Isles, E. modestus competes with Semibalanus balanoides, whereas in southern Europe it also competes with Chthamalus spp. It is particularly successful because it grows fast, tolerates reduced salinity, has a lower temperature tolerance than Chthamalus spp and a higher tolerance than Balanus spp. It is also a threat to native species because it reaches maturity in its first season and can produce several broods of larvae per year. It has an extended habitat as it grows both high up the shore and in the neritic zone.
- WoRMS (2011). "Elminius modestus". World Register of Marine Species. http://www.marinespecies.org/aphia.php?p=taxdetails&id=106209. Retrieved August 17, 2011.
- John Barrett & C. M. Young (1958). Collins Pocket Guide to the Sea Shore. p. 91.
- D. J. Crisp (1958). "The spread of Elminius modestus Darwin in north-west Europe" (PDF). Journal of the Marine Biological Association of the United Kingdom 37 (2): 483–520. doi:10.1017/S0025315400023833. http://sabella.mba.ac.uk/1942/01/The_spread_of_Elminius_modestus_Darwin_in_north-west_Europe.pdf.
- K. Hiscock, S. Hiscock & J. M. Baker (1978). "The occurrence of the barnacle Elminius modestus in Shetland". Journal of the Marine Biological Association of the United Kingdom 58 (3): 627–629. doi:10.1017/S0025315400041278.
- H. Barnes & M. Barnes (1966). "Ecological and zoogeographical observations on some of the common intertidal cirripedes of the coasts of the western European mainland in June-September, 1963". In Harold Barnes. Some Contemporary Studies in Marine Science. Allen & Unwin. pp. 83–105.
- "Shore life". Encarta Encyclopedia 2005 DVD.
- E. Bourget (1987). "Barnacle shells: composition, structure, and growth". In Alan J. Southward. Crustacean Issues 5: Barnacle Biology. pp. 267–285. ISBN 90-6191-628-3.
- H. Barnes & Margaret Barnes (1960). "Recent spread and present distribution of the barnacle Elminius modestus Darwin in north-west Europe".
Proceedings of the Zoological Society of London 135 (1): 137–145. doi:10.1111/j.1469-7998.1960.tb05836.x.
- "Elminius modestus". Marine Advice. Joint Nature Conservation Committee. http://jncc.defra.gov.uk/page-1704. Retrieved August 17, 2011.
<urn:uuid:4bc996f4-e495-46dc-9472-2043a541ac2e>
2.84375
1,045
Knowledge Article
Science & Tech.
62.541838
1,361
No-take areas, herbivory and coral reef resilience
Hughes, Terry P., Bellwood, David R., Folke, Carl S., McCook, Laurence J., and Pandolfi, John M. (2007) No-take areas, herbivory and coral reef resilience. Trends in Ecology & Evolution, 22 (1). pp. 1-3.
View at Publisher Website: http://dx.doi.org/10.1016/j.tree.2006.10...
Coral reefs worldwide are under threat from various anthropogenic factors, including overfishing and pollution. A new study by Mumby et al. highlights the trophic relationships between humans, carnivorous and herbivorous fishes, and the potential role of no-take areas in maintaining vulnerable coral reef ecosystems. No-take areas, where fishing is prohibited, are vital tools for managing food webs, ecosystem function and the resilience of reefs, in a seascape setting that extends far beyond the boundaries of the reefs themselves.
Item Type: Article (Refereed Research - C1)
Keywords: coral reef; no-take areas; herbivory
FoR Codes: 06 BIOLOGICAL SCIENCES > 0602 Ecology > 060205 Marine and Estuarine Ecology (incl Marine Ichthyology) @ 100%
SEO Codes: 96 ENVIRONMENT > 9605 Ecosystem Assessment and Management > 960508 Ecosystem Assessment and Management of Mining Environments @ 100%
Deposited On: 20 Jul 2009 14:11
Last Modified: 14 Jun 2013 00:27
Citation Counts with External Providers: Web of Science: 23
<urn:uuid:5927ee9d-bca9-4f23-9aa8-731678dca126>
2.75
415
Academic Writing
Science & Tech.
51.785245
1,362
Even when viewing the subject in the most objective way possible, it is clear that software, as a product, generally suffers from low quality. Take for example a house built from scratch. Usually, the house will function as it is supposed to. It will stand for many years to come, the roof will support heavy weather conditions, the doors and the windows will do their job, and the foundations will not collapse even when the house is fully populated. Sure, minor problems do occur, like a leaking faucet or a bad paint job, but these are not critical. Software, on the other hand, is much more susceptible to bad quality: unexpected crashes, erroneous behavior, miscellaneous bugs, etc. Sure, there are many software projects and products which show high quality and are very reliable. But lots of software products do not fall into this category. Consider paradigms like TDD, whose popularity has been on the rise in the past few years. Why is this? Why do people have to fear that their software will not work or will crash? (Do you walk into a house fearing its foundations will collapse?) Why is software - subjectively - so full of bugs?
- Modern software engineering has existed for only a few decades, a small time period compared to other forms of engineering/production.
- Software is very complicated, with layers upon layers of complexity; integrating them all is not trivial.
- Software development is relatively easy to start with: anyone can write a simple program on his PC, which leads to amateur software leaking into the market.
- Tight budgets and timeframes do not allow complete, high-quality development and extensive testing.
How do you explain this issue, and do you see software quality advancing in the near future?
<urn:uuid:51739559-6a50-494b-95c1-877e1adebfd3>
2.59375
357
Q&A Forum
Software Dev.
45.078243
1,363
One of the writers at "Chemistry Blog", Azmanam, in his recent blog article entitled "How to Succeed in Organic Chemistry", has listed some interesting points and called them the 6 truths of organic chemistry. Every organic chemistry student should remember them; they are as follows.
1) Approach unknown reactions just like you should approach all reactions – Identify nucleophile(s) – Identify electrophile(s) – Nucleophiles attack electrophiles
2) Weaker Acid Wins – In an acid/base equilibrium, the equilibrium favors the side of the arrow with the weaker acid (the compound with the higher pKa)
3) Mind your charges – Make sure the net charge of all compounds is consistent throughout a mechanism
4) The 2nd Best Rule – The 2nd best resonance structure usually defines a functional group's reactivity
5) When in doubt: Number Your Carbons! – When coupling 2 molecules, if it is not readily obvious where the various atoms go in the product, number the carbon atoms in the starting material and map those numbers onto the product.
6) Carbonyls: THE CODE – There are only 3 elementary steps in a carbonyl addition mechanism. 1) Proton Transfer (always reversible) 2) Nucleophilic Addition to a Carbonyl (electrons go up onto oxygen) 3) Electrons Collapse Down from Oxygen (and kick out a good leaving group) The steps can be in any order and repeated, but those are the only 3 steps needed for addition to acid chlorides, acid anhydrides, aldehydes, ketones, amides, esters, and carboxylic acids (including aldol and Claisen reactions).
<urn:uuid:ad411600-c53e-4437-99fc-fdbc45a35bf0>
2.84375
372
Listicle
Science & Tech.
8.044607
1,364
The use of Newton's second law for rotation involves the assumption that the axis about which the rotation is taking place is a principal axis. Since most common rotational problems involve the rotation of an object about a symmetry axis, the use of this equation is usually straightforward, because axes of symmetry are examples of principal axes. A principal axis may be simply defined as one about which no net torque is needed to maintain rotation at a constant angular velocity. The issue is raised here because there are some commonly occurring physical situations where the axis of rotation is not a principal axis. For example, if your automobile has a tire which is out of balance, the axle about which it is rotating is not a principal axis. Consequently, the tire will tend to wobble, and a periodic torque must be exerted by the axle of the car to keep it rolling straight. At certain speeds, this periodic torque may excite a resonant wobbling frequency, and the tire may begin to wobble much more violently, vibrating the entire automobile.
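A compact way to state this (a standard rigid-body relation, added here as a sketch rather than taken from the original text) uses the inertia tensor I. The angular momentum is L = I ω, and for rotation at a constant angular velocity the torque that must be supplied is

    \boldsymbol{\tau} = \frac{d\mathbf{L}}{dt} = \boldsymbol{\omega} \times \mathbf{L}

which vanishes only when L is parallel to ω, that is, only when ω lies along a principal axis of the inertia tensor. For the out-of-balance tire, L and ω are not parallel, so the axle must keep supplying exactly the periodic torque described above.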
<urn:uuid:c09810ae-0f5d-4a9c-9435-dbc05b338343>
4.15625
210
Knowledge Article
Science & Tech.
27.272703
1,365
Manual Section (3) - page: asn1_read_value

NAME
asn1_read_value - Returns the value of one element inside a structure
- ASN1_TYPE root - pointer to a structure.
- const char * name - the name of the element inside a structure that you want to read.
- void * ivalue - vector that will contain the element's content; must be a pointer to memory cells already allocated.
- int * len - number of bytes of *value: value[0]..value[len-1]. Initially holds the sizeof value.

DESCRIPTION
Returns the value of one element inside a structure. If an element is OPTIONAL and the function "read_value" returns ASN1_ELEMENT_NOT_FOUND, it means that this element wasn't present in the DER encoding that created the structure. The first element of a SEQUENCE_OF or SET_OF is named "?1", the second one "?2", and so on.

Value format by element type:
- INTEGER: VALUE will contain a two's complement form integer.
- ENUMERATED: As INTEGER (but only with non-negative numbers).
- BOOLEAN: VALUE will be the null-terminated string "TRUE" or "FALSE" and LEN=5 or LEN=6.
- OBJECT IDENTIFIER: VALUE will be a null-terminated string with each number separated by a dot (i.e. "126.96.36.1993.1").
- UTCTIME: VALUE will be a null-terminated string in one of these formats: "YYMMDDhhmmss+hh'mm'" or "YYMMDDhhmmss-hh'mm'". LEN=strlen(VALUE)+1.
- GENERALIZEDTIME: VALUE will be a null-terminated string in the same format used to set the value.
- OCTET STRING: VALUE will contain the octet string and LEN will be the number of octets.
- GENERALSTRING: VALUE will contain the generalstring and LEN will be the number of octets.
- BIT STRING: VALUE will contain the bit string organized by bytes and LEN will be the number of bits.
- CHOICE: If NAME indicates a choice type, VALUE will specify the alternative selected.
- ANY: If NAME indicates an any type, VALUE will indicate the DER encoding of the structure actually used.

Return values:
- ASN1_SUCCESS: Set value OK.
- ASN1_ELEMENT_NOT_FOUND: NAME is not a valid element.
- ASN1_VALUE_NOT_FOUND: There isn't any value for the element selected.
- ASN1_MEM_ERROR: The value vector isn't big enough to store the result. In this case LEN will contain the number of bytes needed.

COPYRIGHT
Copyright © 2006, 2007, 2008, 2009 Free Software Foundation, Inc. Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved.

SEE ALSO
The full documentation for libtasn1 is maintained as a Texinfo manual. If the info and libtasn1 programs are properly installed at your site, the command "info libtasn1" should give you access to the complete manual.
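For readers who want to see the calling convention in code, here is a minimal usage sketch in C/C++ (the helper function, buffer sizes, element name handling and error handling are illustrative assumptions, not taken from the manual; only the asn1_read_value() call, the ASN1_TYPE handle and the result codes come from the API described above):

#include <stdio.h>
#include <stdlib.h>
#include <libtasn1.h>   /* declares ASN1_TYPE, asn1_read_value() and the ASN1_* result codes */

/* Read one element of an already-populated structure, growing the
   buffer if the first attempt reports ASN1_MEM_ERROR. */
int read_element(ASN1_TYPE root, const char *name)
{
    char small[64];
    int len = sizeof(small);          /* LEN initially holds sizeof(value) */
    int rc = asn1_read_value(root, name, small, &len);

    if (rc == ASN1_SUCCESS) {
        printf("%s: %d byte(s) read\n", name, len);
        return rc;
    }
    if (rc == ASN1_MEM_ERROR) {       /* len now holds the number of bytes needed */
        char *big = (char *)malloc(len);
        if (big == NULL)
            return ASN1_MEM_ERROR;
        rc = asn1_read_value(root, name, big, &len);
        if (rc == ASN1_SUCCESS)
            printf("%s: %d byte(s) read (after resize)\n", name, len);
        free(big);
        return rc;
    }
    /* ASN1_ELEMENT_NOT_FOUND may simply mean an OPTIONAL element was absent. */
    return rc;
}

The retry-on-ASN1_MEM_ERROR pattern follows directly from the return-value description above: on that error the function writes the required size back into LEN, so a second call with a large enough buffer can succeed.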
<urn:uuid:d98c9343-35d4-4bf8-b519-c693b1c78e3d>
2.75
758
Documentation
Software Dev.
50.8035
1,366
Date: Jan 27, 2013 7:06 PM Author: Jerry P. Becker Subject: SAYINGS XLII Taken from many sources ... some of them identified. "Hope is a good thing, maybe the best thing. And a good thing never dies." (Andy Dufresne, from The Shawshank Redemption/movie) "If a child can't learn the way we teach, then maybe we should teach the way they learn!" (Ignacio Estrada) [Peggy McKee] "A man is like a fraction whose numerator is what he is and whose denominator is what he thinks of himself. The larger the denominator, the smaller the fraction." (Leo Tolstoy) [From signature of Michael de Villiers] "Modern cynics and skeptics see no harm in paying those to whom they entrust the minds of their children a smaller wage than is paid to those to whom they entrust the care of their plumbing." (President John F. Kennedy (1917-1963)) [Valerie Strauss] "True terror is to wake up one morning and discover that your high-school class is running the country." (Kurt Vonnegut) [From Mike Contino] "I asked God for a bike, but I know God doesn't work that way. So I stole a bike and asked for forgiveness." (Unknown) [From Sandy Lemberg] "The problem with a lot of educational reform activity is it's a lot of 'ready, fire, aim.'" "Life's greatest gift is the opportunity to work hard at work worth doing." "Who dares to teach must never cease to learn." (John Cotton Dana) [from Carol Brown] "Enjoy life's journey, but leave no tracks." (Native American Commandment) "I don't feel old. I don't feel anything until noon. Then it's time for my nap." "Researchers usually find that students flourish where there is stability in the school, with an experienced staff, clear expectations, small classes, and a rich curriculum." "Do good but don't expect to be remembered or celebrated after you (Interpretation of Last Native American Commandment above) [From a note from Loh Kok Khuan] "My mechanic told me, "I couldn't repair your brakes, so I made your "Do not regret growing older. It is a privilege denied to many." "If you are really thankful, what do you do? You share." (W. Clement Stone) "Statistical significance and educational significance are often two completely different things. One child out of a thousand who does something uniquely different from other children has no statistical significance, but it may have huge educational significance. We need only look at the history of mathematics and science to note the tremendous impact that some 'statistically insignificant' individuals have had by thinking vastly differently, and daring to deviate from the norm of their times." (Michael de Villiers) "You never get a second chance to make a good first impression." (Head and Shoulders TV Commercial) "Middle age is when your classmates are so gray and wrinkled and bald they don't recognize you." "The frogs tend to forget that once they were tadpoles, too." (Korean Proverb) [From mathe 2000 selected papers book] "Real peace is liberty in place of tyranny, health instead of disease, hope instead of fear. It comes when people have the freedom to voice their views, choose their own leaders, feed their families, and raise healthy children." (Jimmy Carter, 39th President of the U.S.) [From literature from the Carter Center] "It is important to remember, in all efforts at improving the teaching of mathematics, that we are teaching human beings, and that what we are teaching them is a human activity with uses and with beauty and with surprises." (E.J. McShane, 1964) [Sent by Ginger Warfield, daughter] "Never wrestle with a pig. You'll just get dirty.
And the pig loves it!" "Hospitality: making your guests feel like they're at home, even if you wish they were." (Unknown) [From Sandy Lemberg] "My people are destroyed for lack of knowledge." (Hosea 4:6) [From the Chronicle of Higher Education, December 7, 2012] "A stone in its place is like a mountain... ... but a mountain in the wrong place is just like a stone." (Turkish proverb) [Seen on EDDRA2 listserve, from Sue Ramlo] "There is no smallest among the small, and no largest among the large, but always something still smaller and something still larger." (Anaxagoras - ca. 500 BC - 428 BC) [Spelling correction from CH Candy to earlier posting] "The mathematician's patterns, like the painter's or the poet's must be beautiful; the ideas, like the colours or the words must fit together in a harmonious way. Beauty is the first test: there is no permanent place in this world for ugly mathematics." ("A Mathematician's Apology" (London 1941)). [From Bill Richardson; also from Steve Sugden and Melanie Parker] "Pythagoras walks into an airport, the TSA asks, "Hey buddy, got an Identity?" (From John Nord) "What sort of education will teach the young to hate war?" (Virginia Woolf, Three Guineas) [From Brian Greer] "Teachers are the only professionals who have to respond to bells every forty-five minutes and come out fighting." (Frank McCourt (1930-2009), teacher and author) "Teaching is not a lost art, but the regard for it is a lost tradition." (Jacques Martin Barzun (born 1907), historian) [From Valerie Strauss] "I spend time on window ledges because I am scared of widths." "Helping people in need is a matter of fundamental principle, responsibility, righteousness and justice, not an act of charity." (Source Unknown) [From Yvelyne Germain-McCarthy] "A little health tip for you: I heard a banana-a-day is a good thing to help keep your colon clean ... it turns out you are supposed to (Dwight York) [From a friend on a greeting card] "Beauty is the first test: there is no permanent place in the world for ugly mathematics." (From Z-MNU Universitat Bayreuth calendar. The "Beauty..." quote is from G.H. Hardy's "A Mathematician's Apology." Hardy's "apology" is not an excuse or an "I am sorry" statement. In the ancient Greek sense it is about defending a position on something. In this case, Hardy was "defending" his life's devotion to research mathematics. Every budding mathematician reads it. It is totally inspirational. That was my former life before I met and began to understand Also, from Matt Wyneken: BTW1 - I just noticed another one of Hardy's quotes at this website: "No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity, and it seems unlikely that anyone will do so for many years." He wrote that in 1941. Yet only a few decades later, RSA encryption was invented and now rules everything secret in the world, military, economic, etc. BTW2 - You also included Neil Armstrong's everlasting statement among your quotes. Armstrong and Buzz Aldrin landed on the Moon in 1969, with Michael Collins in support above, only some 60+ years after the invention of human flight (as noted from television's #1 comedy, The Big Bang Theory). BTW3 - by JPB ... If you are ever tooling down Interstate 65 south by Huntsville, Alabama, make time to visit the Rocket and Space
<urn:uuid:ddb7f18d-54de-4e1d-aeb8-435d55053d6f>
2.6875
1,767
Comment Section
Science & Tech.
60.217948
1,367
[Image caption: Official NOAA climate monitoring station with warm air conditioning exhaust blowing on temperature sensor. Courtesy: Dr. Roger Pielke, Sr.]
Also in the news the last two days is that the IPCC (the folks that won the Nobel Prize) have been wrong about increasing malaria due to global warming. A recent example is the case of malaria and climate. In the early days of global-warming research, scientists argued that warming would worsen malaria by increasing the range of mosquitoes. "Malaria and dengue fever are two of the mosquito-borne diseases most likely to spread dramatically as global temperatures head upward," said the Harvard Medical School's Paul Epstein in Scientific American in 2000, in a warning typical of many. Carried away by confirmation bias, scientists modeled the future worsening of malaria, and the Intergovernmental Panel on Climate Change accepted this as a given. When Paul Reiter, an expert on insect-borne diseases at the Pasteur Institute, begged to differ—pointing out that malaria's range was shrinking and was limited by factors other than temperature—he had an uphill struggle. "After much effort and many fruitless discussions," he said, "I…resigned from the IPCC project [but] found that my name was still listed. I requested its removal, but was told it would remain because 'I had contributed.' It was only after strong insistence that I succeeded in having it removed." Yet Dr. Reiter has now been vindicated. In a recent paper, Peter Gething of Oxford University and his colleagues concluded that widespread claims that rising mean temperatures had already worsened malaria mortality were "largely at odds with observed decreasing global trends" and that proposed future effects of rising temperatures are "up to two orders of magnitude smaller than those that can be achieved by the effective scale-up of key control measures." Entire story here. So, while many of us sweat, the threat of catastrophic global warming continues to cool.
<urn:uuid:61e1ae79-2715-4427-865f-dcd7043d2300>
2.875
397
Personal Blog
Science & Tech.
37.299743
1,368
Earth Absorbs More of Our CO2 Emissions: Science Even as Man's output of Earth-warming CO2 has risen, so has the capacity of plants and the oceans to absorb it, scientists said Wednesday, but warned this may not last forever. Carbon storage by land and sea, known as carbon sinks, has more than doubled in the past 50 years from about 2.4 billion tonnes in 1960 to some five billion tonnes in 2010, said a study in Nature. At the same time, fossil-fuel CO2 emissions rose almost four-fold. "The growth rate of atmospheric CO2 continues to rise because fossil fuel emissions are accelerating not because sinks are diminishing," researcher Ashley Ballantyne of the University of Colorado's geology department told AFP. The finding was contrary to widespread expectations that carbon sinks were slowing their CO2 uptake. "We were somewhat surprised by this result because several recent studies have been published showing that the land and oceans have been taking up less CO2," said Ballantyne. "We discovered that the Earth continues to take up more CO2 every year and there is no indication that this uptake has weakened." Ballantyne and colleagues used reported annual changes in atmospheric CO2 levels, from which they subtracted annual total man-made emissions to quantify Earth's uptake. About half of man-made CO2 emissions caused by burning fossil fuels and land-use changes such as deforestation, are taken up by plants and the oceans. CO2 can be stored away deep in the oceans for centuries. Plants and trees also use CO2 but later return it to the atmosphere through respiration or the burning of forests, for example. "We don't expect this uptake to continue to increase indefinitely because increased temperature as a result of rising CO2 may limit the net uptake of CO2 by land and oceans," said Ballantyne. In fact, carbon sinks may become new sources of CO2 within the next century. "Obviously if the Earth suddenly stopped taking up as much CO2 this would have potentially catastrophic consequences for Earth's climate system." Better understanding of these processes is crucial for climate change planning. "It makes a big difference whether the extra carbon emitted is stored in reservoirs such as the deep oceans, where it could stay for hundreds or thousands of years, or whether it is taken up by the growth of new forests where it would stay for only a few years or decades," German scientist Ingeborg Levin said in a comment that accompanied the paper. By AFP, 06/08/2012
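The bookkeeping behind that estimate can be written as a simple carbon mass balance (this equation and its symbols are added here for clarity and are not from the article; the sign convention makes the net sink positive):
\[ S_{\rm sinks} \;=\; E_{\rm fossil} + E_{\rm land\,use} \;-\; \frac{dC_{\rm atm}}{dt}, \]
where the E terms are the annual man-made emissions and dC_atm/dt is the reported annual rise in atmospheric CO2. On the article's own figures, S grew from roughly 2.4 billion tonnes in 1960 to about five billion tonnes in 2010, even as the emission terms roughly quadrupled.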
<urn:uuid:29fec4fb-5ca1-4bb0-8491-422c0542d37e>
3.546875
523
News Article
Science & Tech.
40.725412
1,369
Three sets of test procedures are used: the first only inserts n random integers into the tree / hash table. The second test first inserts n random integers, then performs n lookups for those integers and finally erases all n integers. The last test only performs n lookups on a tree pre-filled with n integers. All lookups are successful. These three test sequences are performed for n from 125 to 4,096,000, where n is doubled after each test run. For each n the test cycles are run until in total 8,192,000 items have been inserted or looked up. This way the measured speed for small n is averaged over up to 65,536 sample runs.

Lastly, it is the purpose of the test to determine a good node size for the B+ tree. Therefore the test runs are performed with different slot sizes; both inner and leaf nodes hold the same number of items. The number of slots tested ranges from 4 to 256 and therefore yields node sizes from about 50 to 2,048 bytes. This requires that the B+ tree template be instantiated for each of the probed node sizes! The speed test source code is compiled with g++ 4.1.2 -O3 -fomit-frame-pointer. The results are displayed below using gnuplot. All tests were run on a Pentium 4 3.2 GHz with 2 GB RAM. A high-resolution PDF plot of the following images can be found in the package at speedtest/speedtest.pdf.

The first two plots above show the absolute time measured for inserting n items into seven different tree variants. For small n (the first plot) the speed of the red-black tree and the B+ tree are very similar. For large n the red-black tree slows down, and for n > 1,024,000 items the red-black tree requires almost twice as much time as a B+ tree with 32 slots. The STL hash table performs better than the STL map but not as well as the B+ tree implementations with higher slot counts.

The next plot shows the insertion time per item, which is calculated by dividing the absolute time by the number of inserted items. Notice that insertion time is now in microseconds. The plot shows that the red-black tree reaches some limitation at about n = 16,000 items. Beyond this item count the B+ tree (with 32 slots) performs much better than the STL multiset. The STL hash table resizes itself at defined intervals, which leads to non-linearly increasing insert times.

The last plot's goal is to find the best node size for the B+ tree. It displays the total measured time of the insertion test depending on the number of slots in inner and leaf nodes. Only runs with more than 1 million inserted items are plotted. One can see that the minimum is around 65 slots for each of the curves. However, to reduce unused memory in the nodes the most practical slot size is around 35. This amounts to total node sizes of about 280 bytes. Thus in the implementation a target size of 256 bytes was chosen.

The following two plots show the same aspects as above, except that not only insertion time was measured. Instead, in the first plot a whole insert/find/delete cycle was performed and measured. The second plot is restricted to the lookup / find part. The results for the trees are in general accordance with those for insertion only. However, the hash table implementation performs much faster in both tests. This is expected, because hash table lookup (and deletion) requires fewer memory accesses than tree traversal. Thus a hash table implementation will always be faster than trees. But of course hash tables do not store items in sorted order.
Interestingly, the hash table's performance is not linear in the number of items: its peak performance is not at small item counts, but at around 10,000 items. For item counts larger than 100,000 the hash table slows down: lookup time more than doubles. However, after doubling, the lookup time does not change much: lookup on tables with 1 million items takes approximately the same time as with 4 million items.
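For readers who want to reproduce the measurements, the insert/find/erase cycle described above can be sketched roughly as follows in C++ (this is an illustrative outline only; std::multiset stands in for the package's B+ tree container, and the exact timing harness, random seeding and slot-size instantiation of the real speedtest are not reproduced here):

#include <chrono>
#include <cstdlib>
#include <iostream>
#include <set>
#include <vector>

// Rough sketch of one test run: insert n random integers, look each one up,
// then erase them all, and report the elapsed time per item.
int main()
{
    const std::size_t n = 125000;              // one of the probed sizes
    std::vector<int> keys(n);
    for (std::size_t i = 0; i < n; ++i)
        keys[i] = std::rand();

    std::multiset<int> container;              // swap in the B+ tree multiset here
    auto start = std::chrono::steady_clock::now();

    for (int k : keys) container.insert(k);                  // phase 1: insert
    std::size_t found = 0;
    for (int k : keys) found += container.count(k) > 0;      // phase 2: lookup
    for (int k : keys) container.erase(k);                   // phase 3: erase

    auto stop = std::chrono::steady_clock::now();
    double secs = std::chrono::duration<double>(stop - start).count();
    std::cout << found << " lookups succeeded, "
              << (secs / n) * 1e6 << " microseconds per item\n";
}

Repeating such runs until the 8,192,000-item total described above, and averaging, gives the per-item figures shown in the plots.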
<urn:uuid:ab19f71f-12f8-44ea-a83f-9fb3639e2e28>
2.890625
844
Documentation
Software Dev.
67.187473
1,370
Animal fossils are usually the remains of hard structures – bones and shells that have been petrified through enormous pressures acting over millions of years. But not all of them had such hard beginnings. Some Chinese fossils were once the embryos of animals that lived in the early Cambrian period, some 550 million years ago. Despite having the consistency and strength of jelly, the embryos have been exceptionally well preserved and the structure of their individual cells, and even the compartments within them, have been conserved in all their beautiful, minute detail. They are a boon to biologists. Ever since the work of Ernst Haeckel in the 19th century, comparing the development of animal embryos has been an important part of evolutionary biology. Usually, scientists have to piece together the development of ancient animals by comparing their living descendants. But the preserved embryos give the field of embryology its very own fossil record, allowing scientists to peer back in time at the earliest days of some of the earliest living things. But how did these delicate structures survive the pressures of the ages? Elizabeth Raff from Indiana University has a plausible answer. Her experiments suggest that fossil embryos are the work of colonies of ancient bacteria, which grew over the dead clumps of cells and eventually replaced their organic matter with minerals. They are mere casts of the original embryos. Under a range of different conditions, Raff watched decaying sea urchin embryos (which are roughly similar to the fossil ones in both size and shape). Under normal conditions, she saw that dead embryonic cells destroy themselves within a matter of hours, through the actions of their own enzymes. To produce fossil embryos, this self-destruction is the first hurdle to clear and Raff found that it can be done quite simply by placing the embryos in oxygen-less environments. When the fossilised embryos first died, around 550 million years ago, they must have sunk into oxygen-deprived mud, which staved off the embryos' destruction long enough for bacteria living on their surface to take hold. Raff found that dead sea urchin embryos are rapidly colonised by bacteria that form three-dimensional communities called biofilms. They construct these communities using the embryo's own structures as scaffolding and the bacteria replicate the structures of the cells they consume, right down to the smallest feature. Indeed, the fossil embryos still bear traces of these ancient bacteria. They have long, thread-like imprints that strongly resemble the shapes made by bacterial groups growing over sea urchin embryos in oxygen-less water. These biofilms stimulated the growth of minerals and Raff saw that needle-shaped crystals start forming around the bacteria within a week after the embryo's death. The crystals are mainly made of aragonite, a type of calcium carbonate typically found in the shells of molluscs and hard enough to withstand the pressures of time. By decaying the embryos, the bacteria lower the pH of the surrounding water, which creates the right conditions for the growth of these crystals. Raff's experiments with living embryos don't by any means give certain answers about the origins of the fossil embryos, but they do at least provide a plausible origin story.
Indeed, the degree of degradation in her sea urchin embryos mirrors that seen in the fossils – in some, little but the outer layer was preserved but in others, even the inner workings were sealed in glorious detail (even though the multiple steps put a question mark over the accuracy of the final structures). The study suggests that the beautiful fossil embryos are not in fact preserved versions of the original cells, but uncanny facsimiles created by bacteria. They may have been created through a two-step process, where each layer acted as a base for sculpting the next one – animal to bacterial, and bacterial to mineral. Reference: E. C. Raff, K. L. Schollaert, D. E. Nelson, P. C. J. Donoghue, C.-W. Thomas, F. R. Turner, B. D. Stein, X. Dong, S. Bengtson, T. Huldtgren, M. Stampanoni, Y. Chongyu, R. A. Raff (2008). Embryo fossilization is a biological process mediated by microbial biofilms Proceedings of the National Academy of Sciences, 105 (49), 19360-19365 DOI: 10.1073/pnas.0810106105
<urn:uuid:0ddbe824-57dd-4013-b303-9b4e35c11948>
4.375
906
News Article
Science & Tech.
52.073942
1,371
(PhysOrg.com) -- Were dinosaurs slow and lumbering, or quick and agile? It depends largely on whether they were cold or warm blooded. When dinosaurs were first discovered in the mid-19th century, paleontologists thought they were plodding beasts that had to rely on their environments to keep warm, like modern-day reptiles. But research during the last few decades suggests that they were faster creatures, nimble like the velociraptors or T. rex depicted in the movie Jurassic Park, requiring warmer, regulated body temperatures like in mammals. Now, a team of researchers led by the California Institute of Technology (Caltech) has developed a new approach to take body temperatures of dinosaurs for the first time, providing new insights into whether dinosaurs were cold or warm blooded. By analyzing isotopic concentrations in teeth of sauropods, the long-tailed, long-necked dinosaurs that were the biggest land animals to have ever lived - think Apatosaurus (also known as Brontosaurus) - the team found that the dinosaurs were about as warm as most modern mammals. "This is like being able to stick a thermometer in an animal that has been extinct for 150 million years," says Robert Eagle, a postdoctoral scholar at Caltech and lead author on the paper to be published online in the June 23 issue of Science Express. "The consensus was that no one would ever measure dinosaur body temperatures, that it's impossible to do," says John Eiler, a coauthor and the Robert P. Sharp Professor of Geology and professor of geochemistry. And yet, using a technique pioneered in Eiler's lab, the team did just that. The researchers analyzed 11 teeth, dug up in Tanzania, Wyoming, and Oklahoma, that belonged to Brachiosaurus brancai and Camarasaurus. They found that the Brachiosaurus had a temperature of about 38.2 degrees Celsius (100.8 degrees Fahrenheit) and the Camarasaurus had one of about 35.7 degrees Celsius (96.3 degrees Fahrenheit), warmer than modern and extinct crocodiles and alligators but cooler than birds. The measurements are accurate to within one or two degrees Celsius. "Nobody has used this approach to look at dinosaur body temperatures before, so our study provides a completely different angle on the longstanding debate about dinosaur physiology," Eagle says. The fact that the temperatures were similar to those of most modern mammals might seem to imply that dinosaurs had a warm-blooded metabolism. But, the researchers say, the issue is more complex. Because large sauropod dinosaurs were so huge, they could retain their body heat much more efficiently than smaller mammals like humans. "If you're an animal that you can approximate as a sphere of meat the size of a room, you can't be cold unless you're dead," Eiler explains. So even if dinosaurs were "cold blooded" in the sense that they depended on their environments for heat, they would still have warm body temperatures. "The body temperatures we've estimated now provide a key piece of data that any model of dinosaur physiology has to be able to explain," says Aradhna Tripati, a coauthor who's an assistant professor at UCLA and visiting researcher in geochemistry at Caltech. "As a result, the data can help scientists test physiological models to explain how these organisms lived." The measured temperatures are lower than what's predicted by some models of body temperatures, suggesting there is something missing in scientists' understanding of dinosaur physiology.
These models imply dinosaurs were so-called gigantotherms, that they maintained warm temperatures by their sheer size. To explain the lower temperatures, the researchers suggest that the dinosaurs could have had some physiological or behavioral adaptations that allowed them to avoid getting too hot. The dinosaurs could have had lower metabolic rates to reduce the amount of internal heat, particularly as large adults. They could also have had something like an air-sac system to dissipate heat. Alternatively, they could have dispelled heat through their long necks and tails. Previously, researchers have only been able to use indirect ways to gauge dinosaur metabolism or body temperatures. For example, they infer dinosaur behavior and physiology by figuring out how fast they ran based on the spacing of dinosaur tracks, studying the ratio of predators to prey in the fossil record, or measuring the growth rates of bone. But these various lines of evidence were often in conflict. "For any position you take, you can easily find counterexamples," Eiler says. "How an organism budgets the energy supply that it gets from food and creates and stores the energy in its muscles - there are no fossil remains for that," he says. "So you just sort of have to make your best guess based on indirect arguments." But Eagle, Eiler, and their colleagues have developed a so-called clumped-isotope technique that shows that it is possible to take body temperatures of dinosaurs - and there's no guessing involved. "We're getting at body temperature through a line of reasoning that I think is relatively bullet proof, provided you can find well-preserved samples," Eiler says. In this method, the researchers measure the concentrations of the rare isotopes carbon-13 and oxygen-18 in bioapatite, a mineral found in teeth and bone. How often these isotopes bond with each other - or "clump" - depends on temperature. The lower the temperature, the more carbon-13 and oxygen-18 tend to bond in bioapatite. So measuring the clumping of these isotopes is a direct way to determine the temperature of the environment in which the mineral formed - in this case, inside the dinosaur. "What we're doing is special in that it's thermodynamically based," Eiler explains. "Thermodynamics, like the laws of gravity, is independent of setting, time, and context." Because thermodynamics worked the same way 150 million years ago as it does today, measuring isotope clumping is a robust technique. Identifying the most well-preserved samples of dinosaur teeth was one of the major challenges of the analysis, the researchers say, and they used several ways to find the best samples. For example, they compared the isotopic compositions of resistant parts of teeth - the enamel - with easily altered materials - dentin and fossil bones of related animals. Well-preserved enamel would preserve both physiologically possible temperatures and be isotopically distinct from dentin and bone. The next step is to take temperatures of more dinosaur samples and extend the study to other species of extinct vertebrates, the researchers say. In particular, taking the temperature of unusually small and young dinosaurs would help test whether dinosaurs were indeed gigantotherms. Knowing the body temperatures of more dinosaurs and other extinct animals would also allow scientists to learn more about how the physiology of modern mammals and birds evolved.
<urn:uuid:fcf1e659-7fc8-4497-9012-fd14fad8fc56>
4.0625
1,413
News Article
Science & Tech.
32.208934
1,372
Research by USDA Forest Service Southern Research Station biometrician Bernie Parresol takes center stage in a special issue of the journal Forest Ecology and Management due out in June. Parresol is lead author of two of the five articles - and co-author of two more - in an issue that focuses on methods that incorporate fine-scale data into the tools Southeastern forest managers use to assess wildfire potential and plan mitigation treatments. Most fire behavior analyses rely on sparse plot inventories and data from satellites, and often do not address the complexity found at the ground level where managers operate. Parresol and fellow researchers demonstrated a statistical approach that can incorporate hundreds to thousands of fuel observations into models that managers can easily use to prioritize areas to treat to reduce the wildfire hazard. Fire is an important part of forest ecosystems in the southeastern United States, especially in the Coastal Plain. European settlers cleared most of the native longleaf pine forests of the region; industry later planted many of the same acres in loblolly pine plantations. Meanwhile, fire suppression policies broke the cycle of frequent low-intensity fires in the remaining natural forests, causing the buildup of fuels that leads to wildfires. Over the last decades, southeastern land managers added prescribed fire to other forest treatments to reduce wildland fires, promote forest restoration, and improve wildlife habitat. Because of budget constraints and public concerns about fire and smoke, managers need to prioritize the areas where they will use prescribed fire. To do this, they use wildfire hazard assessments such as LANDFIRE and the Southern Wildfire Risk Assessment (SWRA), both of which use satellite images and other supporting data to represent fuels across a landscape. Although these tools work well enough at the state and regional levels, they don't offer enough detail to land managers trying to decide which of their hundreds or thousands of acres should be burned first. The special issue of Forest Ecology and Management focuses on a study conducted on the 200,000-acre Savannah River Site as representative of an actively managed forest landscape in the Southeast. Researchers used studies on the site to assess wildland fuels, potential fire behavior and treatments to reduce fire hazard. In his first article, Parresol and fellow researchers develop equations to describe fuel loads for both dead and live materials on the site based on vegetation type, stand age, recent fire history and other aspects. These equations were then used to create custom landscape models based on the actual data from the site, then compared with results from LANDFIRE and SWRA to assess the effectiveness of those tools. "Taken together, the research reported in these articles shows that fine scale measurements repeated over time can be put into a manageable framework and reduced to create dynamic fire behavior models useful to managers," says Parresol. "They can also be used to help address scientific questions and to evaluate the effect of management conditions." More information: Access the special issue of Forest Ecology and Management: www.sciencedirect.com/science/journal/03781127/273
<urn:uuid:feb756b2-1c7c-4301-b1f3-203ad6399b54>
2.984375
618
News Article
Science & Tech.
18.246394
1,373
But electrons in two dimensions can also behave as classical particles that interact only through the mutual repulsion of their negative charges. This occurs when they are spread much farther apart and has been difficult to achieve in the lab, so researchers are still seeing new phenomena. David Rees of RIKEN, a Japanese research institute, in Wako, Japan, and his colleagues, studied this regime using electrons floating above a liquid helium surface. At low temperatures, the electrons glide rapidly far above the surface--about 11 nanometers--and barely interact with it. At temperatures somewhat below 1 Kelvin, the repulsion between electrons generates a two-dimensional solid state known as a Wigner crystal. At higher temperatures the electrons act like a liquid. Of course, this is significantly different than when QM effects kick in, whereby we get the fractional charge/quantum hall effect. It is interesting to note that we always think that to get quantum behavior, it usually requires difficult conditions. Here, it seems that it is difficult to see classical behavior clearly when the system has such a tendency to behave quantum mechanically. D.G. Rees et al., PRL v.106, p.026803 (2011).
<urn:uuid:8056347b-afb8-4458-a201-2e8f6403ea78>
3.25
245
Personal Blog
Science & Tech.
43.095385
1,374
The Earth has one Moon, but it's not the only rocky thing orbiting us… Posted: December 21, 2011
I spend far too much time at pub quizzes. Perhaps it's because I'm an irritating know-it-all or I just like a vaguely intellectual pretense for going to the pub. One of the more geeky parts of it is correcting the quiz-master when they are wrong (Reykjavik is north of Helsinki and Blazin Squad did not do the original of Crossroads etc.). One such wrong answer was a week or two back when it was claimed the Earth has four moons. Additional moons of the Earth have long been claimed and were popularised a few years back when QI claimed that a co-orbital body called Cruithne was a second moon. As far as the definition of stable, natural bodies orbiting the Earth goes there is only one, although it would be entertaining if schoolchildren were taught about the wonderfully named Wahrhafter Wetter-und Magnet Mond (or veritable weather and magnetic moon). However there are sometimes other bodies that briefly orbit the Earth. The Solar System is a crowded place. Besides the eight planets and numerous dwarf planets there are millions of asteroids. Some of these have orbits that bring them close to the Earth. While most of these whizz by us, some are in orbits which mean that they can gravitationally interact with the Earth and the Moon and go into orbit around it. These orbits are not stable and the objects will eventually be kicked out of the Earth-Moon system. To date only one known object has been discovered to have undergone such a process. Known as 2006_RH120 it is a small body, only 3-5m across. In 2007-2008 it undertook four orbits of the Earth at a distance more than twice as far away as the Moon. But how often do objects like this perform their temporary dance with the Earth? Well, a new paper has been looking into the rate of capture and when such events happen. The authors use a simulation of how asteroids pass through the Earth-Moon system. They select a series of objects with orbital elements in the range where they could possibly be captured and then examine how they would be affected by coming close to the Earth and Moon. Previously it was thought that a close encounter with the Moon gave objects a gravitational tug allowing them to be captured by the Earth. However the new model finds that while the Moon does play a role in the capture, none of their simulated near-Earth objects came close enough to the Moon to get a sufficient tug for capture. The model also found that capture is most likely at aphelion and perihelion (when the Earth is furthest from and closest to the Sun during its orbit). The same capture probability peaks were previously noted for temporary satellites of Jupiter. It's also possible that the Moon itself could capture asteroids and get its own temporary satellites. However no objects in the simulation managed to complete an orbit of the Moon. Objects in unstable orbits around the Earth will of course have the possibility of entering the atmosphere and becoming meteors. About 1% of objects in the simulation impacted on the Earth, none on the Moon. This means that a temporarily captured object is 3.5 times more likely to strike the Earth than a near-Earth object in a similar orbit. In total the authors estimate that a tenth of one percent of objects striking the Earth were in temporary orbit around us.
In all, the authors estimate, based on their model and on the fact that there isn't a large population of observable temporary satellites, that at any one time there is one object of approximately one metre in size temporarily orbiting the Earth, along with potentially other smaller bodies. So the Earth only has one Moon, but it's not the only natural object orbiting us. Granvik, M., Vaubaillon, J., & Jedicke, R. (2011). The population of natural Earth satellites. Icarus. DOI: 10.1016/j.icarus.2011.12.003
<urn:uuid:e129c807-068b-4949-a081-bfc5b3ed05f5>
3.265625
833
Personal Blog
Science & Tech.
55.906262
1,375
Nanotech raises 'toxic sock' alert
The smell from your socks might not be the only toxic thing around, a US chemistry conference has heard. Silver nanoparticles, used for years to kill bacteria and eliminate odours in socks, food containers, medical dressings and even teddy bears, might be a threat to the environment, according to new research. "People might not even be aware they are buying these things," says Troy Benn, an environmental engineer at Arizona State University. Benn and his colleagues presented their findings this week at a meeting of the American Chemical Society. Among an estimated 600 consumer products that contain nanomaterials, at least 20% contain silver nanoparticles, the researchers say. The treated products can be effective as silver has odour-fighting and antibacterial properties. But little is known about how these particles affect waterways once they pass through laundry drains. To find out, the researchers bought six pairs of commercially available silver nanoparticle-treated socks. They soaked them in water and put them in a washing machine. After as little as one washing, virtually all the nanoparticles from two brands of the socks washed out. After four washings, two other brands lost just 1% of the silver nanoparticles. That suggests to researchers that it is the manufacturing process of the socks, not the nanoparticles themselves, that causes the silver to disappear down the drain. The researchers tested waterways for two types of silver: nanoparticle silver, whose safety profile is unknown, and harmful ionic silver. They found both. Ionic silver in waterways kills fish and other aquatic creatures when it enters their gills, but is harmless to humans unless at high concentrations. The scientists raised concern about the overall level of silver, noting that the sludge and wastewater from the manufacturing plants is often sold to farms as fertiliser or dumped into waterways. Increased silver concentrations could render bacteria used in water treatment plants less effective. Elevated silver concentrations could also pose risks of their own. "With increased silver in waste water it could become so concentrated with silver that it could be classified as a hazardous waste," says Benn. A hazardous level would be as much as 20 times what their results suggest. The findings will be published in the journal Environmental Science and Technology. Like any other chemical? Nanoparticles, silver or otherwise, should be treated like any other chemical, according to Dr Challa Kumar, a nanotechnology researcher at Louisiana State University. "They could be dangerous, you never know. But we don't want to blame the nanoparticles for being toxic when it's not the nanoparticles but the manufacturing process that is the problem," says Kumar. "But if the silver is getting leached out then it is a matter of concern."
<urn:uuid:5dbcd1ed-e118-4528-a9e7-a2e0419896d3>
3.015625
570
News Article
Science & Tech.
40.390943
1,376
Welcome to Plant2pollinator
Plant2pollinator is a practical science resource for understanding pollinator partnerships and building biodiversity stewardship for students from stages 1 to 4. Based on the premise that the types of flowering plants in your garden can indicate a diversity of insect visitors, Plant2pollinator provides ideas, investigations and information on invertebrates and their crucial role in pollinating flowering plants. You can make a major contribution to many mysteries surrounding insect pollination, invertebrate diversity and sustainability by surveying and understanding insect visitors to flowering plants. Scientists still have much to find out about the fascinating partnership between plants and their pollinators. Plant2pollinator relies on the Bugwise guides and tools for identifying beetles, wasps, flies, bees, moths and butterflies developed by the Australian Museum's research scientists for use in field and laboratory work. These tools and guides were developed to assist enthusiastic observers who are not scientifically trained in identifying invertebrates.
Focus of Plant2pollinator resources
Many native, food and fibre plants rely on insects to reproduce. This process is known as pollination, and Plant2pollinator explores this concept. Pollination is the process of transferring pollen from the anthers (male reproductive organ) to the stigma (female reproductive organ) of flowers to ensure the successful production of seed. It is essential for the genetic diversity of most flowering plants and one of biodiversity's vital services. In some remarkable cases only a single insect species can successfully pollinate a particular flower. Consequently, if an insect species is removed from an area, the vegetation will also suffer, highlighting the importance of insect biodiversity and our role in maintaining it. The investigation of specific and generalist insect pollinators and their plant hosts aims to reinforce the idea that pollination is a fundamental process of ecological interdependence. Plant2pollinator (P2p) is sponsored by The Environmental Trust. P2p acknowledges the significant contributions of Phoebe Hill, Project Officer, and Geoff Gardner, Intern, to this resource pack. The project team wishes to acknowledge the following Australian Museum scientists for their scientific advice, guidance and recommendations:
- Dr Dave Britton, Entomology Collections Manager
- Martyn Robinson, Naturalist
- Dr Chris Reid, Research Scientist
- Dr Shane McEvey, Entomologist
- Dr John Gollan, Research Officer.
P2p could not have been as comprehensive without the generous advice and instruction of Dr Michael Batley. Michael's bee videos have also contributed to P2P's on-line resources. P2p wishes also to acknowledge Brian Walters, of the Australian Native Plants Society of Australia, for the use of his photographs for the website gallery. The majority of the illustrations have been taken from the original Bugwise Invertebrate Guide, illustrator Andrew Howells. The keys, guides and diagrams have been researched and designed by the Plant2pollinator project team, Phoebe Hill, Geoff Gardner and Sue Lewis. The following references were used to produce the Plant2Pollinator resources:
- Dollin, A., M. Batley, M. Robinson & B. Faulkner. 2000. Native Bees of the Sydney Region: A Field Guide. Australian Native Bee Research Centre.
- Proctor, M., Lack, A. and P. Yeo. 1997. The natural history of pollination. Timber Press.
Sue Lewis, Education Officer, Bugwise for Schools
<urn:uuid:f3e2e50d-ff47-4785-a1be-be31dad05fb5>
3.15625
725
About (Org.)
Science & Tech.
23.340417
1,377
| NEWS RELEASE: for immediate release Friday, January 3, 2003. Bush administration denies legal protection of flat-tailed horned lizard WASHINGTON DC -- Today, the Bush administration denied endangered species act protection for the imperiled flat-tailed horned lizard (Phrynosoma mcallii), an attractive Sonoran desert native that looks like a mini-dinosaur. "This unjustified denial of desert wildlife protection continues the president's anti-environmental policies and ensures more litigation." said Daniel R. Patterson, Desert Ecologist with the Center for Biological Diversity. "This political decision is a favor to industry that flies in the face of biological facts and the compelling national interest for wildlife conservation." The flat-tailed horned lizard is a small desert reptile that inhabits portions of the Sonoran Desert in southern California, Arizona, and northern Mexico. The main cause for the decline of the flat-tailed horned lizard is conversion of habitat to urban and agricultural uses. The various uses include crops, cities, off-road vehicle use, geothermal leases, military maneuvers, gravel pits, highways, etc. Other factors responsible for the decline of this species include the use of pesticides on crops. Pesticide drift is thought to affect ant populations in adjacent habitat. A typical flat-tailed horned lizard measures approximately 3.3 inches from snout to vent, and has two rows of fringed scales on either side of the body with a dark stripe along its backbone. Flat-tailed horned lizards feed primarily on native harvester ants, consuming 150-200 ants per day. A proposed rule to list the species as threatened was published in the Federal Register on November 29, 1993. On July 15, 1997, the US Fish and Wildlife Service withdrew its proposal to list the flat-tailed horned lizard as threatened. The decision to withdraw the proposed listing was challenged in court by conservationists. On October 24, 2001, the District Court ordered the Service to reinstate the 1993 proposed rule to list the lizard as threatened and to make a new final listing determination for the species. Today, the Service withdrew that rule, denying legal protection for the lizard. Species ecology information: http://uts.cc.utexas.edu/~iffp475/phrynos_html/mcallii.html
<urn:uuid:3f744d79-9145-4320-b5fd-3dd1cd378fb4>
2.890625
477
News (Org.)
Science & Tech.
35.23243
1,378
An important concept that comes from sequences is that of series and summation. Series and summation describes the addition of the terms of a sequence. There are different types of series, including arithmetic and geometric series. Series and summation follows its own set of notation that is important to memorize in order to understand homework problems. So a series is just the summation of a sequence. A sequence is just a bunch of numbers in a row; a series is what happens when we add up all those numbers together. Okay? So before me I have a general term for a sequence: a sub n is equal to n squared minus 1. And first we're asked to find the first four terms. Okay? So in order to find the first term, we would find a sub 1, which happens when we plug in 1. 1 squared minus 1, that's just 0. So our first term is going to be 0. To find the second term we plug in 2. a sub 2 is equal to 2 squared, 4, minus 1, which is going to give us 3. Third term, and repeat: a sub 3 is 3 squared, 9-1 is 8. And the fourth term, a sub 4, plug in 4: 4 squared, 16-1 is 15. So this right here is a sequence. It's 4 numbers written in order with commas in between. It's just a collection of numbers. Find the sum of those first 4 terms. So basically we already found the 4 terms, all we have to do is add them together. 0+3 is 3, plus 8 is 11, plus 15 is 26. So 26 is then the series, okay? The way I remember it is: series is a shorter word, therefore your answer should be shorter - one number. A sequence is a longer word; it's going to be a collection of data, a collection of numbers, okay? So basically all a series is is a summation of the sequence.
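In summation notation (added here for reference, and matching the worked example above):
\[ a_n = n^2 - 1, \qquad \sum_{n=1}^{4} a_n = (1^2-1) + (2^2-1) + (3^2-1) + (4^2-1) = 0 + 3 + 8 + 15 = 26. \]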
<urn:uuid:812b542f-d4de-4cf3-9bff-52518cbabffb>
4.40625
402
Tutorial
Science & Tech.
86.384637
1,379
The rare prolonged snowstorms and low temperatures that have caused havoc in many parts of China are mainly related to La Nina and abnormal atmospheric circulation, Chinese meteorologists said. The unbalanced precipitation in China this winter greatly resembles the aftermath of La Nina event in history, which indicates that the latest development of La Nina is a main cause of China's abnormal snowy weather, meteorologists with Jiangxi Provincial Meteorological Bureau said. The experts predicted that the La Nina event, which formed in August last year, will continue to the end of the spring season. They also pointed out that the abnormal atmospheric circulation in some regions of Europe and Asia, which has lasted for nearly 20 days since mid-January, is responsible for the rampant chilly weather, rain and snowstorms. (Xinhua News Agency February 2, 2008)
<urn:uuid:bc93c7cd-4c43-4226-b456-f02f74ff4edd>
2.9375
166
News Article
Science & Tech.
16.456651
1,380
Surveyor soars toward red planet Satellite to map Martian surface November 7, 1996 Web posted at: 1:30 p.m. EST CAPE CANAVERAL, Florida (CNN) -- NASA launched a 10-month, unmanned mission to Mars Thursday, the first step in a multi-spacecraft bid to determine if there is -- or ever was -- life on the fourth rock from the sun. Global Surveyor, the first of 10 NASA probes bound for Mars the next decade, replaces one that mysteriously disappeared three years ago. The spacecraft soared aloft at noon EST atop a Delta 2 rocket launched from Cape Canaveral Air Station in Florida. The launch, originally scheduled for Wednesday, was postponed 24 hours because of Surveyor will take 10 months to make the 470-million-mile trip and another six months to ease into a mapping orbit. Later, it will dip into the Red Planet's thin atmosphere, using its wing-like solar panels as brakes. Surveyor will study the Martian surface and atmosphere, but will not land. More to come It is the first of three spacecraft, two U.S. and one Russian, destined for Mars this year. The next launch is Mars Pathfinder, equipped with a robotic ground vehicle, that is scheduled for liftoff December 2 and will land July 4, 1997, two months ahead of Surveyor. NASA plans to send pairs of spacecraft every 26 months through 2005 but has no firm plans for a manned mission to Thursday's launch comes amid controversial revelations by scientists of possible ancient life on Mars. "One of our goals is ultimately to return a sample of the surface of the planet itself," NASA's Wes Huntress told CNN in a live interview. Scientists hope the sample has "evidence on whether or not there was early life on the planet," Huntress said. Observer: lost in space Surveyor was designed and built in record time to replace NASA's $1 billion Mars Observer probe, which spun out of control -- for reasons unknown -- just days before it was due to enter the planet's orbit in 1993. Surveyor carries copies of five of the seven scientific instruments on its ill-fated predecessor, but at $215 million is much less expensive. It was made mostly from leftover parts from Observer. The problem: where to look? From an altitude of 230 miles (365 km), its telephoto camera will see objects on the surface as small as a compact car. By the end of one Martian year -- 687 Earth days -- 99 percent of the planet will have been mapped by Surveyor's The probe does not carry any instruments that could directly detect evidence of life, but it will scout out sites for a future robotic mission to recover samples of rock. Scientists must decide the best places to look "before we decide which of those interesting rocks to bring back," "We'll be able to identify areas that might have been conducive to past life," said Surveyor Mission Manager Glenn Cunningham. Correspondents John Zarrella, John Holliman and Reuters contributed to this report.
<urn:uuid:88e90af8-1d5c-43bb-99f9-5e5d90cac14b>
3.0625
757
Truncated
Science & Tech.
57.151778
1,381
Atomic Number: 33
Atomic Symbol: As
Atomic Weight: 74.9216
Electron Configuration: 2-8-18-5
Melting Point: 817 °C @ 28 atm
Boiling Point: sublimes @ 613 °C
Uses: LEDs, deadly poison, semiconductors
History (L. arsenicum, Gr. arsenikon, yellow orpiment, identified with arenikos, male, from the belief that metals were different sexes; Arabic, Az-zernikh, the orpiment from Persian zerni-zar, gold)
Sources
Elemental arsenic occurs in two solid modifications: yellow, and gray or metallic, with specific gravities of 1.97 and 5.73, respectively. It is believed that Albertus Magnus obtained the element in 1250 A.D. Schroeder published two methods of preparing the element. Arsenopyrite (FeSAs) is the most common mineral, from which, on heating, the arsenic sublimes, leaving ferrous sulfide.
Properties
The element is a steel gray, very brittle, crystalline, semimetallic solid; it tarnishes in air, and when heated is rapidly oxidized to arsenous oxide with the odor of garlic.
Handling
Arsenic and its compounds are poisonous. Arsenic is used in bronzing, pyrotechny, and for hardening and improving the sphericity of shot.
Compounds
The most important compounds are white arsenic, the sulfide, Paris green, calcium arsenate, and lead arsenate; the last three have been used as agricultural insecticides and poisons. Marsh's test makes use of the formation and ready decomposition of arsine. Arsenic is finding increasing uses as a doping agent in solid-state devices such as transistors. Gallium arsenide is used as a laser material to convert electricity directly into coherent light.
<urn:uuid:a283feed-f052-4dc2-bd51-f6d4be44bf24>
3.140625
442
Knowledge Article
Science & Tech.
29.915592
1,382
Technology Transfer
Most consumer attention to oysters and mussels has centered on their taste, beautiful by-products or aphrodisiac effects; however, their adhesive properties are what caught the attention of Jonathan Wilker, PhD, associate professor of chemistry, and his research team at Purdue University. Wilker's team has been studying marine biological adhesives for years and has found that the two mollusks produce adhesives that form a non-toxic, strong bond in wet environments. Although Wilker mainly has worked to develop synthetic versions of the adhesives for medical use, he is investigating other applications, which may include personal care. Wilker has studied the adhesives produced by various marine entities, including Mytilus edulis (the blue mussel) and Crassostrea virginica (Eastern oyster)—an oyster popular in the human diet. He notes one similarity. "Mussels, oysters and barnacles all use cross-linked proteins (long biological polymers) to make their adhesive," said Wilker. The difference, however, is in the composition of the adhesive. To study the adhesive, Wilker and his team cut open the shells of oysters and observed the interface where they were attached; he compared this with separate, unattached portions of shell as a control. "[Since] the oyster's adhesive is comprised of materials similar to the shell, we speculate the cement comes from the same place, system or organ as the shell," he furthered. Both the oyster shell and adhesive consist of calcium carbonate and protein as starting materials, but the shell is mostly calcium carbonate with a small amount of protein, whereas there is more protein and less calcium carbonate in the adhesive. "In the cement, the extra reactivity is added to the proteins so they crosslink together," Wilker explained. The adhesive produced by oysters is 10-15% protein and 85-90% calcium carbonate (chalk), which according to Wilker results in a hard inorganic, cement-like material. Unlike oysters, Wilker notes that mussels separately produce their adhesive and shell. "If you crack open a mussel, a separate organ [is present that] produces the adhesive," said Wilker. He added that the adhesive produced by mussels is about 99% proteins and more like soft organic glue.
<urn:uuid:19ffe944-e7e9-4824-81bd-a8e9654374da>
3.71875
506
Knowledge Article
Science & Tech.
25.190049
1,383
Physicists say they have found a Higgs boson GENEVA – The search is all but over for a subatomic particle that is a crucial building block of the universe. Physicists announced Thursday they believe they have discovered the subatomic particle predicted nearly a half-century ago, which will go a long way toward explaining what gives electrons and all matter in the universe size and shape. The elusive particle, called a Higgs boson, was predicted in 1964 to help fill in our understanding of the creation of the universe, which many theorize occurred in a massive explosion known as the Big Bang. The particle was named for Peter Higgs, one of the physicists who proposed its existence, but it later became popularly known as the "God particle."
<urn:uuid:3f708c17-600e-4e1e-b39c-4272e1e232cf>
2.890625
192
News Article
Science & Tech.
28.140706
1,384
Methylation
In chemistry, methylation is the addition of a methyl group to a substrate. In epigenetics, methylation can refer to the addition of a methyl group to a cytosine residue of DNA, converting it to 5-methylcytosine, or to the addition of one or more methyl groups to arginine or lysine amino acids in a protein. Methylation of DNA occurs at CpG sites, which are sequences of DNA where cytosine lies next to guanine. The process of methylation is mediated by an enzyme known as DNA methyltransferase. CpG sites are quite rare in a eukaryotic genome except in regions near the promoter of a eukaryotic gene. These regions are known as CpG islands, and the state of methylation of these CpG sites is critical for gene activity/expression. In early development (fertilisation to the 8-cell stage), the eukaryotic genome is demethylated. From the 8-cell stage to the morula, de novo methylation of the genome occurs, modifying and adding epigenetic information to the genome. By the blastula stage, methylation is complete. This process is referred to as "epigenetic reprogramming". The importance of methylation was shown in knockout mutants lacking DNA methyltransferase: all the resulting embryos died at the morula stage. The pattern of methylation has recently become an important topic for research. Studies have found that in normal tissue, methylation of a gene is mainly localised to the coding region, which is CpG-poor. In contrast, the promoter region of the gene is unmethylated, despite a high density of CpG sites in the region. Interestingly, in cancer cells, methylation is very high even in the promoter region, raising interest in the role of methylation in the induction of cancerous properties. Furthermore, the pattern of methylation has been shown to be a reliable marker of cancerous tissue, with a heavily methylated gene found in 90% or more of patients with prostate cancer. Additionally, adenine methylation is part of the restriction-modification system of many bacteria. Bacterial DNA is methylated periodically throughout the genome, and foreign DNA (which is not methylated in this manner) introduced into the cell is degraded by restriction enzymes. Bacteria protect themselves from infection by bacterial viruses, called bacteriophages or phages, through this system. Methylation can also occur on the arginine and lysine residues of proteins. Protein methylation has been studied most thoroughly in the histones. The transfer of methyl groups from S-adenosyl-methionine to histones is catalyzed by enzymes known as histone methyltransferases. Histones that are methylated on certain residues can act epigenetically to repress or activate gene expression.
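To make the CpG terminology above concrete, here is a minimal Python sketch that counts CpG dinucleotides and computes the observed/expected CpG ratio often used to flag CpG islands. The example sequences are invented, and the commonly cited island criteria (roughly: length of at least 200 bp, GC content above 50%, observed/expected CpG above 0.6) are an outside rule of thumb, not something stated in the article.

    # Count CpG dinucleotides and estimate the CpG observed/expected ratio
    # for a DNA string. Sequences and thresholds here are illustrative only.

    def cpg_stats(seq):
        seq = seq.upper()
        n = len(seq)
        c, g = seq.count("C"), seq.count("G")
        cpg = sum(1 for i in range(n - 1) if seq[i:i + 2] == "CG")  # a CpG site: C followed by G
        gc_fraction = (c + g) / n if n else 0.0
        expected_cpg = (c * g) / n if n else 0.0   # expected count if C and G were placed at random
        obs_over_exp = cpg / expected_cpg if expected_cpg else 0.0
        return cpg, gc_fraction, obs_over_exp

    if __name__ == "__main__":
        island_like = "GCGCGGCGCTCGCGAGGCGCGCCGCGCGATCGCGGCGCGCG"    # CpG-dense, promoter-like
        body_like   = "ATGGCTTACCATTGGAATTCCAGGTTAACCATGGTACCTTAA"   # CpG-poor, coding-region-like
        for name, s in (("island-like", island_like), ("body-like", body_like)):
            cpg, gc, ratio = cpg_stats(s)
            print(f"{name}: {cpg} CpG sites, GC fraction {gc:.2f}, obs/exp CpG {ratio:.2f}")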
<urn:uuid:035df6a6-30ef-4be3-8552-967176c6e35a>
3.5
609
Knowledge Article
Science & Tech.
24.040336
1,385
"In cosmology, it turns out that 'a galaxy a long time ago' and 'far, far away' really do go together," says Associate Professor Roger Romani, who with graduate student David Sowards-Emmerd and Professor Peter Michelson of Stanford, and radio astronomer Lincoln Greenhill of the Harvard-Smithsonian Center for Astrophysics, spotted one of the oldest supermassive black holes yet found. The scientists collaborate at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford. "In this case, we're looking at [a black hole] far enough away that it's within a billion years of the origin of it all, the Big Bang." The supermassive black hole sits in the center of a galaxy. A disk of stars and gas swirl around the black hole and eventually get sucked in. "That generates enormous amounts of power, enormous amounts of energy," Romani says. "It's far more efficient even than nuclear fusion. These gravity-powered sources are the most powerful sources in the universe." As black holes go, this one is a messy eater. It's Jabba the Hutt, in fact, gobbling up its galaxy so quickly that not everything is making it down its throat past the point of no return - that place, called the "event horizon," where not even light can escape gravity's strongest pull. The matter that doesn't make it past the event horizon is spewing back up in the form of accelerated high-energy particles. If a black hole amid a galaxy shoots out high-energy particles in narrow jets that just happen to be aimed at Earth, astrophysicists give the whole thing a special name - "blazar." Amazingly, these blazars can be detected at nearly all energies, even at the high energy of gamma rays. In fact, distant blazars seem to dominate the gamma-ray sky and can obscure other objects of interest. Pulsars, spinning neutron stars nearby in our own galaxy, can also emit gamma rays, but far fewer of them are known. Romani, whose main interest is pulsars, wanted to identify and discard blazars so he could concentrate on the neutron stars. "I got started working on the blazars as a way of culling the wheat from the chaff," Romani says. "But then the chaff proved just as interesting." In preparation for a mission that is scheduled to launch in 2007, the co-authors have surveyed 200 blazars; eventually they hope to survey 2,000. The mission, led by Michelson, will use the Gamma Ray Large Area Space Telescope (GLAST) to study high-energy sources of radiation in the universe, such as supermassive black holes, merging neutron stars and hot streams of gas moving at nearly the speed of light. It is funded by NASA, the U.S. Department of Energy and government agencies in France, Italy, Japan and Sweden. "Something really new is waiting to be found in the gamma-ray sky," Romani says. "If we could identify all the blazars, tag the pulsars - the things that are left over, that's where the really new discoveries will be." In photographs, blazars look just like stars. So how do scientists spot them? The co-authors first identified gamma rays seen by the Energetic Gamma Ray Experiment Telescope (EGRET), a GLAST precursor initiated by Stanford physics Professor Robert Hofstadter in the 1970s and subsequently directed by Michelson. Greenhill led the effort to obtain radio images of the blazar jet using the Very Long Baseline Array (VLBA). Funded by the National Science Foundation and operated by the National Radio Astronomy Observatory, the VLBA is essentially a radio camera. 
It consists of 10 dish antennas - 25 meters wide and distributed from Hawaii across the United States to St. Croix - slaved together with computers to create a composite image with a resolution Greenhill calls "comparable to what they would get with a single antenna about as large as a continent." To find out how far away the blazar was, Romani and Sowards-Emmerd used the Hobby-Eberly Telescope (HET), an optical instrument in a remote part of Texas, to obtain spectral patterns of visible and infrared light. HET is a joint project of the University of Texas at Austin, Pennsylvania State University, Stanford, Ludwig-Maximilians-Universität München and Georg-August-Universität Göttingen. Spectroscopy reveals signatures of elements in a galaxy's gases. Elements such as hydrogen, nitrogen, carbon and oxygen radiate at specific energies, or equivalently at specific wavelengths. A consequence of cosmic expansion is that those wavelengths get shifted to the red part of the spectrum, or "red-shifted," if an object is extremely far away. The red shift corresponds to age. "The higher that number, the smaller the universe was when the light was emitted - hence, the earlier you're talking about," Romani explains. The Hobby-Eberly Telescope told the researchers that the red shift of their blazar was 5.5. This high number told them this was not just some star in our backyard; it was an enormous source of energy shining from way across the universe. "It's amazing to find something so interesting and unique in a relatively small survey," says Sowards-Emmerd, who re-analyzed EGRET data to select the targets examined by HET and analyzed the optical data. "We immediately realized that a high-redshift blazar and gamma-ray source would allow us to test our understanding of relativistic radio jets and their interaction with the cosmic microwave background leftover from the Big Bang," Greenhill says. "It's a searchlight that's set so far away that it illuminates matter and radiation all the way between us, between time one billion years after the Big Bang and now," Romani says. "If you can detect it with a gamma-ray telescope, you have a handle on the birth of stars and galaxies between then and now that you never had before." Scientists are currently stymied about how a black hole could have gotten so big so fast. How do you take something big enough to hold 1,000 solar systems and as heavy as all of the stars in our Milky Way galaxy put together, and quickly crunch-collapse it? Scientists think the universe formed 13.7 billion years ago with the Big Bang. The distance of the blazar indicates it formed a billion years after that. "What's interesting about a billion years after the Big Bang is that this marks the end of the 'Dark Age,"' Romani says. "The universe first formed with an enormous flash of light and heat - that's the Big Bang - and then cooled off. And everything's dark for about a billion years. And toward the end of that period, the first stars and black holes and galaxies start collapsing and forming and turning on. We talk about that as the end of the Dark Age. So it's very interesting, and this is one of the big pushes in cosmology, to find objects back in the tail end of the Dark Age, when things are first lighting up, and then to use those to figure out how everything we have in the universe formed." In the next year, the scientists hope to use the VLBA to take a better picture of the jet detected with radio waves and then observe its X-ray spectrum. 
This will help illuminate the matter between the supermassive black hole and Earth, clarify the black hole's size and characterize the jet's material as it moves away from the black hole at nearly the speed of light. "Studying these things gives us a window into the sort of physical processes that we can't yet control here on Earth," Romani says. "They're the extremes of physics." Those extremes fascinate Romani. "Pulsars are, I think, the most extreme objects in our universe," he says. These cores of dead stars have collapsed, but not far enough to form an event horizon, so they are just short of turning into black holes. They are the densest things in the measurable universe. They have the strongest magnetic fields. Their surfaces have extremely high temperatures. They are cosmic accelerators that speed particles to the highest energies known. So far, scientists have found only a handful of gamma-ray pulsars, and Romani is particularly excited about GLAST as a means of hunting down more in the Milky Way. "I'm particularly interested in ways in which you could find extreme physics out there in the cosmos and get a handle on physics of the 22nd or 23rd century by seeing what's going on in the sky." By Dawn Levy COMMENT: Roger Romani, Physics: (650) 725-7595, firstname.lastname@example.org EDITORS: A photo of the researchers is available at http://newsphotos.stanford.edu. Relevant Web URLs: Roger Romani's web page: http://astro.stanford.edu/home/rwr/home.html The Gamma Ray Large Area Space Telescope: http://www-glast.stanford.edu/ National Radio Astronomy Observatory: http://www.nrao.edu Hobby-Eberly Telescope: http://www.as.utexas.edu/mcdonald/het/het.html News Service website: http://www.stanford.edu/news/ Stanford Report (university newspaper): http://news.stanford.edu Most recent news releases from Stanford: http://www.stanford.edu/dept/news/html/releases.html
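To put the quoted red shift of 5.5 in concrete terms, an emitted wavelength is observed stretched by a factor of (1 + z). The short sketch below applies this to the hydrogen Lyman-alpha line; the 121.6 nm rest wavelength is a standard reference value assumed here, not a number from the article.

    # Where a familiar ultraviolet line ends up for a source at redshift z:
    # lam_observed = (1 + z) * lam_rest.

    LYMAN_ALPHA_NM = 121.6   # rest wavelength of hydrogen Lyman-alpha (standard value, assumed)

    def observed_wavelength_nm(rest_nm, z):
        return (1.0 + z) * rest_nm

    for z in (0.1, 1.0, 5.5):
        print(f"z = {z:>4}: Lyman-alpha observed near {observed_wavelength_nm(LYMAN_ALPHA_NM, z):6.1f} nm")

    # At z = 5.5 the line lands near 790 nm, at the far red edge of the visible
    # band -- consistent with the article's note that the Hobby-Eberly Telescope
    # recorded spectra in visible and infrared light to pin down the distance.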
<urn:uuid:cc9baeee-ef99-4148-95d0-9701b37df9c7>
3.34375
2,075
News (Org.)
Science & Tech.
52.192561
1,386
Dewar flask [for Sir James Dewar], container after which the common thermos bottle is patterned. It consists of two flasks, one placed inside the other, with a vacuum between. The vacuum prevents the conduction of heat from one flask to the other. For greater efficiency the flasks are silvered to reflect heat. The substance to be kept hot or cold, e.g., liquid air, is contained in the inner flask. See low-temperature physics. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
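The entry notes that the vacuum stops conduction and the silvering reflects heat; the only significant remaining path is thermal radiation across the gap. As a rough, hedged illustration, the sketch below estimates that radiative leak with the Stefan-Boltzmann law for two closely spaced gray surfaces. The flask area, temperatures and emissivity values are invented for illustration and are not taken from the encyclopedia entry.

    # Rough estimate of radiative heat flow across the evacuated gap of a
    # Dewar flask. Geometry, temperatures and emissivities are assumptions.

    SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

    def radiative_loss_watts(area_m2, t_hot_k, t_cold_k, emissivity):
        # Net exchange between two close parallel gray surfaces of equal emissivity e:
        # q = sigma * A * (Th^4 - Tc^4) * e / (2 - e)
        return SIGMA * area_m2 * (t_hot_k**4 - t_cold_k**4) * emissivity / (2.0 - emissivity)

    area = 0.05                # ~0.05 m^2 of inner wall for a 1-litre flask (assumed)
    hot, cold = 368.0, 293.0   # hot drink (~95 C) inside, room temperature outside

    for emissivity, label in ((0.9, "plain glass walls"), (0.02, "silvered walls")):
        q = radiative_loss_watts(area, hot, cold, emissivity)
        print(f"{label:>17}: roughly {q:5.2f} W radiated across the gap")

    # Silvering cuts the radiative leak by nearly two orders of magnitude,
    # which is the point of the "silvered to reflect heat" remark above.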
<urn:uuid:15865bbc-b557-4f4b-9456-d71459f377db>
3.3125
139
Knowledge Article
Science & Tech.
53.444167
1,387
The push for green technology has recently produced a number of advances in solar energy. Much work is being done to make solar cells more efficient, more robust, less resource-intensive, and cheaper. Researchers at the California Institute of Technology in Pasadena have found a way to make flexible solar cells with silicon wires. These cells would use a mere 1 percent of the silicon per unit of cell area compared with conventional solar wafers. The technology may have applications in solar fabrics and the like, but its most important effect would be making solar cells cheaper and less fragile. Silicon is by far the most commonly used, and currently the most efficient, material for mass-producing solar cells. Silicon wafers, however, are fragile; the new approach sidesteps this problem by combining organic films with silicon, making the material less fragile, more dependable, and more cost-efficient. The efficiency of such cells is expected to be in the range of 15 to 20 percent, similar to contemporary cells used in domestic installations.
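As a back-of-the-envelope illustration of what the quoted 15 to 20 percent efficiency means, the snippet below converts efficiency into electrical output under the standard test irradiance of 1000 W per square metre. The irradiance figure and the panel area are generic assumptions, not numbers from the article.

    # Electrical output implied by a 15-20% efficient cell in full sun.
    # Irradiance and panel area are assumed, illustrative values.

    IRRADIANCE_W_PER_M2 = 1000.0   # standard test-condition sunlight

    def output_watts(area_m2, efficiency):
        return IRRADIANCE_W_PER_M2 * area_m2 * efficiency

    panel_area_m2 = 1.6            # a typical rooftop panel, assumed
    for eff in (0.15, 0.20):
        print(f"{eff:.0%} efficiency, {panel_area_m2} m^2 panel: ~{output_watts(panel_area_m2, eff):.0f} W")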
<urn:uuid:764642e9-b793-4e33-b827-31148682eca9>
3.96875
213
News Article
Science & Tech.
35.238016
1,388
Stars aren't shy about sending out birth announcements. They fire off energetic jets of glowing gas traveling at supersonic speeds in opposite directions through space. Although astronomers for decades have looked at still pictures of stellar jets, they now can watch movies of them, thanks to NASA's Hubble Space Telescope. A diverse team of scientists led by astronomer Patrick Hartigan of Rice University in Houston, Texas, has collected enough high-resolution Hubble images over a 14-year period to stitch together time-lapse movies of young jets ejected from three stars. The moving pictures offer a unique view of stellar phenomena that move and change over just a few years. Most astronomical processes change over timescales that are much longer than a human lifetime. The movies reveal the motion of the speedy outflows as they tear through their interstellar environments. Never-before-seen details in the jets' structure include knots of gas brightening and dimming over time and collisions between fast-moving and slow-moving material, creating glowing arrowhead features. These phenomena are providing clues about the final stages of a star's birth, offering a peek at how our Sun behaved 4.5 billion years ago. "For the first time we can actually observe how these jets interact with their surroundings by watching these time-lapse movies," said Hartigan. "Those interactions tell us how young stars influence the environments out of which they form. With movies like these, we can now compare observations of jets with those produced by computer simulations and laboratory experiments to see what aspects of the interactions we understand and what parts we don't understand." Hartigan's team's results appeared in the July 20, 2011, issue of The Astrophysical Journal. Jets are an active, short-lived phase of star formation, lasting only about 100,000 years. They are called Herbig-Haro (HH) objects, named in honor of George Herbig and Guillermo Haro, who studied the outflows in the 1950s. Astronomers don't know what role jets play in the star-formation process or exactly how the star unleashes them. A star forms from a collapsing cloud of cold hydrogen gas. As the star grows, it gravitationally attracts more matter, creating a large spinning disk of gas and dust around it. Eventually, planets may arise within the disk as dust clumps together. The disk material gradually spirals onto the star and escapes as high-velocity jets along the star's spin axis. The speedy jets may initially be confined to narrow beams by the star's powerful magnetic field. The jet phase stops when the disk runs out of material, usually a few million years after the star's birth. Hartigan and his colleagues used the Wide Field Planetary Camera 2 to study jets HH 1, HH 2, HH 34, HH 46, and HH 47. HH 1-HH 2 and HH 46-HH 47 are pairs of jets emanating in opposite directions from single stars. Hubble followed the jets over three epochs: HH 1 and HH 2 in 1994, 1997, and 2007; HH 34 in 1994, 1998, and 2007; and HH 46 and HH 47 in 1994, 1999, and 2008. The jets are roughly 10 times the width of our solar system and zip along at more than 440,000 miles an hour (700,000 kilometers an hour). All of the outflows are roughly 1,350 light-years from Earth. HH 34, HH 1, and HH 2 reside near the Orion Nebula, in the northern sky. HH 46 and HH 47 are in the southern constellation Vela. Computer software wove together the years' worth of observations, generating movies that show continuous motion. 
The movies support previous observations revealing that the twin jets are not ejected in a steady stream, like water flowing from a garden hose. Instead, they are launched sporadically in clumps. The beaded-jet structure might be like a "ticker tape," recording how material episodically fell onto the star. The movies show that the clumpy gas in the jets is moving at different speeds like traffic on a freeway. When fast-moving blobs "rear-end" slower gas, bow shocks arise as the material heats up. Bow shocks are glowing waves of material similar to waves produced by the bow of a ship plowing through water. In HH 2, for example, several bow shocks can be seen where several fast-moving clumps bunch up like cars in a traffic jam. In another jet, HH 34, a grouping of merged bow shocks reveals regions that brighten and fade over time as the heated material cools where the shocks intersect. In other areas of the jets, bow shocks form from encounters with the surrounding dense gas cloud. In HH 1 a bow shock appears at the top of the jet as it grazes the edge of a dense gas cloud. New glowing knots of material also appear. These knots may represent gas from the cloud being swept up by the jet, just as a swift-flowing river pulls along mud from the shoreline. The movies also provide evidence that the inherent clumpy nature of the jets begins near the newborn stars. In HH 34 Hartigan traced a glowing knot to within about 9 billion miles of the star. "Taken together, our results paint a picture of jets as remarkably diverse objects that undergo highly structured interactions between material within the outflow and between the jet and the surrounding gas," Hartigan explained. "This contrasts with the bulk of the existing simulations, which depict jets as smooth systems." The details revealed by Hubble were so complex that Hartigan consulted with experts in fluid dynamics from Los Alamos National Laboratory in New Mexico, the Atomic Weapons Establishment in England, and General Atomics in San Diego, Calif., as well as computer specialists from the University of Rochester in New York. Motivated by the Hubble results, Hartigan's team is now conducting laboratory experiments at the Omega Laser facility in New York to understand how supersonic jets interact with their environment. "The fluid dynamicists immediately picked up on an aspect of the physics that astronomers typically overlook, and that led to a different interpretation for some of the features we were seeing," Hartigan explained. "The scientists from each discipline bring their own unique perspectives to the project, and having that range of expertise has proved invaluable for learning about this critical phase of stellar evolution." Hartigan's research team consists of Adam Frank of the University of Rochester in New York; John Foster and Paula Rosen of the Atomic Weapons Establishment in Aldermaston, England; Bernie Wilde, Rob Coker, and Melissa Douglas of Los Alamos National Laboratory in New Mexico; and Brent Blue and Freddy Hansen of General Atomics in San Diego, Calif. Rice University, Houston, Texas
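A quick sketch of why a 14-year baseline is enough to see these jets move: at the speed and distance quoted above, a knot crosses an angle much larger than Hubble can resolve. The arithmetic below uses only the article's figures (roughly 700,000 kilometres an hour and 1,350 light-years); the comparison value of about 0.05-0.1 arcsecond for Hubble's resolution is an outside, commonly quoted number.

    # Apparent angular motion of a jet knot over the 14-year span of the movies.

    SPEED_KM_PER_H  = 700_000.0     # jet speed quoted in the article
    DISTANCE_LY     = 1_350.0       # distance quoted in the article
    YEARS           = 14.0          # span of the Hubble observations
    KM_PER_LY       = 9.4607e12     # kilometres in one light-year
    ARCSEC_PER_RAD  = 206_265.0

    travel_km    = SPEED_KM_PER_H * 24.0 * 365.25 * YEARS
    distance_km  = DISTANCE_LY * KM_PER_LY
    angle_arcsec = (travel_km / distance_km) * ARCSEC_PER_RAD

    print(f"Knot travels ~{travel_km:.2e} km in {YEARS:.0f} years")
    print(f"Apparent motion ~{angle_arcsec:.2f} arcseconds at {DISTANCE_LY:.0f} light-years")
    # This comes out to roughly 1.4 arcseconds -- far above Hubble's ~0.05-0.1
    # arcsecond resolution, so knots visibly shift between the imaging epochs.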
<urn:uuid:3ac86bb4-458b-4f4f-9f5e-e6207db1a8b0>
3.359375
1,386
Knowledge Article
Science & Tech.
49.661609
1,389
For several years, global warming has been discussed in terms of its various influences on human society, and a wide range of countermeasures have been actively promoted. Recently, interest in impact evaluation has shifted from the entire global domain to particular countries and regional societies. Regional influences of global warming often manifest themselves as noticeable modulations of natural climate variability. In this respect, APL's activities relate closely and directly to real-life events, not in a distant future but in the present or a few months ahead. APL's activities are expected to contribute to the various fields of human activity indicated here that may be influenced by ongoing global warming. According to the World Meteorological Organization (WMO), trillions of dollars worth of assets and thousands of human lives are saved globally every year by daily weather forecasting. Many of the extreme weather phenomena responsible for disasters are closely connected to climate variations; the variation in wheat production in Australia is a typical case. The progression of global warming tends to influence climate patterns. Thus, forecasting changing patterns of climate variability has a substantial social benefit for protecting human lives and assets. El Nino, often mentioned in association with abnormal weather patterns, is a well-known and significant phenomenon in the tropical Pacific Ocean. Although the Indian Ocean had been considered an inactive ocean, an El Nino-like phenomenon called the "Indian Ocean Dipole" (IOD) was discovered there about a decade ago. The frequency with which IOD events occur seems to have increased recently, probably due to ongoing global warming. El Nino itself also tends to emerge with a slightly different pattern, called "El Nino Modoki," which may likewise be a result of global warming. Both of these phenomena were discovered through research conducted in APL's parent organization, and advanced forecasting research on them continues under APL. Although El Nino and the IOD occur over the oceans, both strongly influence the global atmospheric flow pattern through their associated anomalous sea surface temperature fields. The oceanic influence exerted on the lower boundary of the atmosphere propagates horizontally and vertically in the form of characteristic atmospheric wave patterns and eventually reaches regions far from the original location of the phenomena. This mechanism is referred to as "teleconnection," and it provides important information for people living across the globe; IOD and El Nino each produce their own distinctive teleconnection patterns. The Application Laboratory has been developing an advanced coupled model, the Multi-Scale Simulator for the Geoenvironment (MSSG), with the Earth Simulator Center. MSSG is composed of a non-hydrostatic atmospheric general circulation model coupled to a land model, a non-hydrostatic/hydrostatic ocean model, and an ocean wave model, built on high-performance computing architectures on the Earth Simulator. MSSG has been designed to simulate the multi-scale, multi-physics phenomena of the Earth system in a seamless way, serving as a key tool for understanding complex, multi-scale interactions with the aim of advancing prediction of weather and climate variability.
In particular, MSSG will help us predict and understand how our immediate environment is influenced by El Nino and the Indian Ocean Dipole, and how extremes may change in response to global warming.
<urn:uuid:f33bad08-2fc7-4847-869a-78b3eb501b08>
3.4375
698
Academic Writing
Science & Tech.
17.7744
1,390
A British seismologist explains earthquakes. The rumbling and shaking of earthquakes puzzled people for centuries, writes Musson, chief spokesman at the British Geological Survey. Aristotle blamed the noise on roaring winds forced through subterranean caverns. The people of Lisbon, Portugal, racked by a massive quake in 1755, felt certain God was punishing the wicked. Shortly thereafter, working with limited data, scientists began to develop an understanding: British geologist John Michell posited that earthquakes travel as elastic waves; his colleague Charles Lyell found evidence of moving faults. Based on observations of the archetypal San Francisco quake of 1906, Johns Hopkins geologist Harry Fielding Reid accurately defined an earthquake as a violent movement of rocks that releases energy in the form of waves that spread outward at high velocity. Musson describes the evolving science of seismology, including the development of today’s global seismological networks. Analyzing the most significant earthquakes of all time—Lisbon, San Francisco and Sumatra (2004)—he explains what we know about these “strange and uncanny things” and scientists’ “persistent failure” at predicting them. Given the growing population of urban areas, especially in developing nations where buildings are not designed to withstand violent shaking, scientists predict that a massive future quake will eventually result in 1 million deaths. In villages in seismically active areas, builders generally use available materials and follow traditional practices, which can lead to high death tolls. In earthquake-savvy cities, builders prevent collapses through reinforcement and other techniques. Musson urges national governments to mandate earthquake safety programs. In the meantime, he writes, the safest place to be during a quake is under a solid piece of furniture. An authoritative and accessible investigation of one of nature’s most destructive forces.
<urn:uuid:6e1ec658-51d5-4ed3-b918-fe3523c1929f>
4
370
User Review
Science & Tech.
20.195303
1,391
This short introductory text focuses mainly on integration and differentiation of functions of a single variable, although iterated integrals are discussed. Infinitesimals are used when appropriate, and are treated more rigorously than in old books like Thompson's Calculus Made Easy, but in less detail than in Keisler's Elementary Calculus: An Approach Using Infinitesimals. Numerical examples are given using the open-source computer algebra system Yacas, and Yacas is also used sometimes to cut down on the drudgery of symbolic techniques such as partial fractions. Proofs are given for all important results, but are often relegated to the back of the book, and the emphasis is on teaching the techniques of calculus rather than on abstract results. The book is designed more for self-study than for classroom use; full solutions are given for nearly all the end-of-chapter problems.
- download in Adobe Acrobat format
- View the HTML version (good for casual browsing, but not printer-friendly).
- epub, kindle. (These have imperfect formatting because of the present limitations of readers in handling math.)
- epub 3 with mathml. (This won't work properly on handheld ebook readers that don't support epub 3 and mathml. Such readers will probably not come on the market until 2013.)
- Buy a printed copy.
- LaTeX source code
- Inf, a calculator that can handle infinite and infinitesimal numbers
<urn:uuid:5ba2cb02-3062-4561-96cd-4834d3a39746>
3.03125
308
Product Page
Science & Tech.
35.738636
1,392
Granny Says Life Evolved Between the Mica Sheets This Behind the Scenes article was provided to LiveScience in partnership with the National Science Foundation. I have a passion for mica. This passion led me, in my 62nd year and almost a grandmother, to develop a hypothesis for the origins of life. “Develop a hypothesis” is what I’ve been doing in the last many months, but the original inspiration came when I had not a scientific thought in my head. I was bent over the dissecting microscope in my apartment in Virginia, near the National Science Foundation, splitting mica into thin sheets to arrange around some crystals grown from a Smithsonian crystal-growing kit. As I looked at the bits of green algae and brown crud at the edges of the mica sheets, I thought, “This would be a good place for life to originate!” My hypothesis is that life originated between thin sheets of mica rocks, which provided many separate spaces for prebiotic molecules to evolve, sometimes in isolation from each other and sometimes in association with each other, as they oozed around within and between sheets. The energy needed for life to evolve from non-living molecules might have come simply from the sun and the waves. The mica hypothesis says that life developed as a ‘sandwich filling’ in mica ‘sandwiches’ in the prebiotic ‘soup,’ or, as: ‘life between the sheets.’ This contrasts with the ‘pizza’, clay, and vesicle hypotheses, in which life originated on the surfaces of earth’s mineral crust, in clay particles, or in lipid vesicles. There are also ‘RNA World’ and ‘Metabolism First’ hypotheses. My hypothesis says that RNA and proteins and metabolic chemistries could all have evolved between the mica sheets and then combined and emerged, coated with lipid membranes, as primitive cells. My passion for mica came from my research in biological Atomic Force Microscopy, for almost 20 years now, starting soon after the Atomic Force Microscope (AFM) was invented in 1986. The AFM feels a surface by raster-scanning a tiny tip across the surface, with a sensitivity so fine that it can feel even bare DNA molecules on a flat surface. The flat surface we use is mica, a layered mineral with atomically flat sheets that can be peeled off with adhesive tape to expose a clean surface. Maybe you are now asking, “How can you see bare DNA molecules on the mica when you said there was algae and crud on it?” The mica we use for AFM samples is high grade mica, free of bubbles and other defects. The mica that inspired my hypothesis for the origins of life came from an abandoned mica mine in a Connecticut state park, where my brother Jim had taken some of us for a hike the previous summer. It had lots of bubbles and defects. Theories and Hypotheses Why do I call my idea a ‘Hypothesis’? People use words in many ways, but one of the strengths of science is that it tries to use words in precisely defined ways. Theories are much stronger than Hypotheses. A Hypothesis is a starting point in the scientific method, while a Theory is the result of much research and testing. Once there were also scientific Laws, but now we know that even Newton’s Laws are not totally correct. Therefore, newer scientists such as Charles Darwin call their well-tested ideas ‘Theories’ instead of ‘Laws’. My idea is only a Hypothesis, ready for testing, by me and hopefully by many others in the scientific community. How Discoveries are Made Dan Koshland, a famous biochemist, wrote that there are three ways discoveries are made: Charge, Challenge, or Chance. 
He calls this the ‘Cha-Cha-Cha Theory of Scientific Discovery’. Louis Pasteur said that Chance favors the prepared mind. I think mine was a ‘Chance’ discovery, by a mind prepared by decades of diverse education and research in biochemistry, chemistry, cell biology, biophysics, nanoscience and materials science. Koshland, and Einstein before him, said that the process of discovery seems to be the same in science and in other areas. Therefore we are all making discoveries in the same ways, whatever our areas of knowledge. Discoveries range from small to earth-shaking. I wonder which kind the mica hypothesis will be: a big one that gets into textbooks some day or a small one that falls into oblivion. I’ll get some clues about this when I attend the Origin-of-Life Gordon Research Conference next week and share my hypothesis with people who have worked in the field for years or decades. Editor's Note: This research was presented at the American Society for Cell Biology’s 47th annual meeting in December. It was supported by the National Science Foundation (NSF), the federal agency charged with funding basic research and education across all fields of science and engineering.
<urn:uuid:ac3decaf-315d-4cc0-95b8-4449a1fddc73>
2.59375
1,092
Nonfiction Writing
Science & Tech.
42.383529
1,393
A couple is a system of forces whose resultant force is zero but whose moment sum is not. Geometrically, a couple is composed of two equal forces that are parallel to each other and act in opposite directions. The magnitude of the couple is given by
C = Fd
where F is the common magnitude of the two forces and d is the moment arm, or the perpendicular distance between the forces. A couple is independent of the moment center; thus, its effect is unchanged under the following conditions:
- The couple is rotated through any angle in its plane.
- The couple is shifted to any other position in its plane.
- The couple is shifted to a parallel plane.
When a system is composed entirely of couples in the same plane or in parallel planes, the resultant is a couple whose magnitude is the algebraic sum of the original couples.
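A small numerical sketch of the two statements above, that the couple's magnitude is C = Fd and that its moment is the same about any point, is given below; the force value, spacing, and test points are made-up numbers used only to illustrate the claim.

    # A couple: two equal, opposite forces F separated by perpendicular distance d.
    # Its moment about ANY point equals F*d, illustrating independence of the
    # moment center. All numbers are illustrative.

    def moment_about(point, force_system):
        """Sum of 2-D moments M = rx*Fy - ry*Fx about `point`."""
        px, py = point
        total = 0.0
        for (x, y), (fx, fy) in force_system:
            rx, ry = x - px, y - py
            total += rx * fy - ry * fx
        return total

    F, d = 50.0, 0.4                       # 50 N forces, 0.4 m apart -> C = F*d = 20 N*m
    couple = [((0.0, 0.0), (0.0, +F)),     # upward force at x = 0
              ((d,   0.0), (0.0, -F))]     # downward force at x = d

    for origin in ((0.0, 0.0), (3.0, -2.0), (-7.5, 11.0)):
        print(f"moment about {origin}: {moment_about(origin, couple):+.1f} N*m")

    # Every line prints -20.0 N*m: the magnitude is F*d = 20 N*m regardless of
    # the moment center, and the sign just records the sense of rotation.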
<urn:uuid:ff5ee6c4-71e3-4c99-9025-e5f95a169681>
3.78125
169
Knowledge Article
Science & Tech.
41.345625
1,394
The latest news from academia, regulators research labs and other things of interest Posted: March 5, 2009 Titania nanotubes and sunlight turn carbon dioxide into methane (Nanowerk News) Dual catalysts may be the key to efficiently turning carbon dioxide and water vapor into methane and other hydrocarbons using titania nanotubes and solar power, according to Penn State researchers. Burning fossil fuels like oil, gas and coal release large amounts of carbon dioxide, a greenhouse gas, into the atmosphere. Rather than contribute to global climate change, producers could convert carbon dioxide to a wide variety of hydrocarbons, but this makes sense to do only when using solar energy. "Recycling of carbon dioxide via conversion into a high energy-content fuel, suitable for use in the existing hydrocarbon-based energy infrastructure, is an attractive option, however the process is energy intense and useful only if a renewable energy source can be used for the purpose," the researchers note in a recent issue of Nano Letters. Craig A. Grimes, professor of electrical engineering and his team used titanium dioxide nanotubes doped with nitrogen and coated with a thin layer of both copper and platinum to convert a mixture of carbon dioxide and water vapor to methane. Using outdoor, visible light, they reported a 20-times higher yield of methane than previously published attempts conducted in laboratory conditions using intense ultraviolet exposures. The chemical conversion of water and carbon dioxide to methane is simple on paper -- one carbon dioxide molecule and two water molecules become one methane molecule and two oxygen molecules. However, for the reaction to occur, at least eight photons are required for each molecule. "Converting carbon dioxide and water to methane using photocatalysis is an appealing idea, but historically, attempts have had very low conversion rates," said Grimes who is also a member of Penn State's Materials Research Institute. "To get significant hydrocarbon reaction yields requires an efficient photocatalyst that uses the maximum energy available in sunlight." The team, which also included Oomman K. Varghese and Maggie Paulose, Materials Research Institute research scientists and Thomas J. LaTempa, graduate student in electrical engineering, used natural sunlight to test their nanotubes in a chamber containing a mix of water vapor and carbon dioxide. They exposed the co-catalyst sensitized nanotubes to sunlight for 2.5 to 3.5 hours when the sun produced between 102 and 75 milliwatts for each square centimeter exposed. The researchers found that nanotubes annealed at 600 degrees Celsius and coated with copper yielded the highest amounts of hydrocarbons and that the same nanotubes coated with platinum actually yielded more hydrogen, while the copper coated nanotubes produced more carbon monoxide. Both hydrogen and carbon monoxide are normal intermediate steps in the process and as the building blocks of syngas, can be used to make liquid hydrocarbon fuels. When the team used a nanotube array with about half the surface coated in copper and the other half in platinum, they enhanced the hydrocarbon production and eliminated carbon monoxide. The yield for these dual catalyst nanotubes was 163 parts per million hydrocarbons an hour for each square centimeter. The yield from titania nanotubes without either copper or platinum catalysts is only about 10 parts per million. 
"If we uniformly coated the surface of the nanotube arrays with copper oxide, I think we could greatly improve the yield," said Grimes. Grimes also found that lengthening the titanium dioxide tubes, which for other applications increases yield, does not improve results. "We think that distribution of the sputtered catalyst nanoparticles is at the top surface of the nanotubes and not inside and that is why increased length does not improve the reaction," says Grimes. Although all these experiments were done with nitrogen-doped titanium dioxide nanotubes, the researchers conclude that the nitrogen did not enhance the conversion of carbon dioxide to hydrocarbons. The catalysts, however, did shift the reaction from one that used only the energy in ultraviolet light to one that used other wavelengths of visible light and therefore more of the sun's energy. The researchers are now working on converting their batch reactor into a continuous flow-through design that they believe will significantly increase yields. The researchers have filed a provisional patent on this work.
<urn:uuid:f5a6b603-f078-48e0-9bd0-69074855c973>
2.921875
889
News Article
Science & Tech.
20.114802
1,395
Hurricane Philippe and tropical storm Rita formed in the North Atlantic on Sunday, making them the eighth hurricane and seventeenth tropical storm of the season. The US National Hurricane Center (NHC) expects Rita to become a full-fledged hurricane this week, amid fears that it could affect areas hard-hit by deadly Hurricane Katrina. Now in the southern Bahamas, Rita is expected to continue moving west, with its most likely course taking it between Cuba and southern Florida into the Gulf of Mexico. Hurricane warnings were issued late on Sunday afternoon for parts of Cuba and the Florida Keys. Current projections by the NHC indicate the storm will intensify over the Gulf and eventually hit southern Texas Saturday morning, but the paths of hurricanes are notoriously hard to predict days in advance. An obvious concern is if its path should bend north, toward hurricane-shattered New Orleans and surrounding areas of Louisiana, Mississippi and Alabama. Philippe formed further to the east of Rita and is following a north-northwest course in the Atlantic. It is not expected to approach land before nearing Bermuda on Saturday. At the start of August, the NHC predicted an exceptionally busy North Atlantic hurricane season, with 18 to 21 tropical storms, 9 to 11 of which would become hurricanes. So far that forecast has been borne out, and it could yet prove an underestimate. The record tropical storm season saw 21 cyclones, and occurred in 1933. If that number is exceeded, the NHC will use up its list of names and turn to Greek-letter designations. The record hurricane season was 12 in 1969. Katrina's devastation of New Orleans and the Gulf Coast has already made this the most expensive hurricane season ever for the US, in terms of life lost and damaged infrastructure.
<urn:uuid:9014c37e-3560-4227-a303-34e756270f12>
3
438
Truncated
Science & Tech.
41.756367
1,396
Northern Prairie Wildlife Research Center The little-wing pearly mussel faces water quality degradation resulting from industrial and sewage effluents and the runoff of silt and other water pollutants from poorly designed construction, development, mining, agricultural, and forestry activities. Further, the spread of the exotic zebra mussel (Dreissena polymorpha) represents a potential threat to the survival of this species. Zebra mussels outcompete native mussel fauna, and infestations in the water column can physically disrupt normal breeding and feeding behavior. Although little measurable progress has been made in establishing new mussel populations or in stabilizing existing populations, substantial recovery efforts for all the State's federally listed mussels are under way. Research is continuing on maintaining captive mussel populations, mussel cryopreservation, and potential impacts of the exotic zebra mussel on native mussels. Recovery of the littlewing pearly mussel will require additional research to develop new propagation techniques, reintroduction into unoccupied historical habitat, and determination of the factors that are causing declines in the wild. Also, technology is needed for cryopreservation of freshwater mussel genetic material. In FY 1992, the North Carolina Wildlife Resources Commission was provided $5,000 to conduct surveys and monitor the little-wing pearly mussel, as well as the State's other federally listed mussels. Forest Service: This Federal agency has funded a study of mussel distribution in the Little Tennessee River, which is inhabited by the little-wing pearly mussel. North Carolina Wildlife Resources Commission: This State agency is responsible for managing the State's mussel populations and helps protect listed species through an environmental review process and site surveys. The agency published proceedings of a symposium on North Carolina's endangered mollusks, including the little-wing pearly mussel. North Carolina Department of Environment, Health and Natural Resources: Through maintenance of a natural history data base, this State agency provides valuable information on the distribution of federally listed mussels. Plan approved 9/22/89.
<urn:uuid:e4b87b92-69b2-4ef6-8b44-c948713bf951>
3.53125
424
Knowledge Article
Science & Tech.
7.547076
1,397
If you look closely, you can find fossilized material on the banks of the Norris Lake shoreline in Anderson County, Tennessee, when the Tennessee Valley Authority (TVA) lowers the water level. If you are really lucky, you will find traces of sea creatures or beautiful flora or fauna impressions encased between the freshly exposed layers of rock. These are ancient treasures from our country's rich geological history. Paleogeographic reconstructions of the Earth have been made for the Eocene geologic period, some 56 to 34 million years ago. Eleanor Frierson, who passed away in April 2013, was the grande dame of partnerships to improve public access to federal and international science information. For 10 years, she helped spearhead U.S. interagency efforts to make federal science information more accessible to Americans, playing an absolutely crucial leadership role on the Science.gov Alliance. She took Science.gov all the way from a nascent concept through to its maturation. Ms. Frierson also made similar contributions to the international science portal, WorldWideScience.org. During the past year, Dr. William N. Watson, a physicist on DOE/OSTI's staff, has posted quite a few very interesting white papers in OSTI's monthly Science Showcase on OSTI's Home Page. This quiet, unassuming man crafts prolific papers on popular science topics of interest to the Department of Energy (DOE). He investigates and assimilates this information from OSTI's extensive R&D Collections and takes us on a layman's journey through the technical details and scientific research that make it all possible. William's papers have helped us to understand key technologies developed at DOE Laboratories for the Mars Science Laboratory's Curiosity and how chemical analysis of rocks and soil is carried out millions of miles away. We know what is happening with new heat pump technology and how DOE researchers are working to improve designs and efficiency. With the release of SciTech Connect, OSTI is expanding its deployment of semantic search, an innovative technology to improve the quality and relevance of search results across the majority of its DOE content. Semantic search is a way to enhance search accuracy contextually. Rather than relying on search algorithms that identify a specific query term, semantic search uses more complex contextual relationships among people, places and things. It is an especially effective search approach when a person truly is researching a topic (rather than trying to navigate to a particular destination). Did you ever stop to think what makes it possible for you to have immediate, free access to Department of Energy (DOE) scientific findings from billions of dollars of annual research? A lot of behind-the-scenes work and dedication of an entire community make it all possible. The heart and soul of this endeavor is the DOE Scientific and Technical Information Program (STIP), a collaboration to ensure your access to DOE research and development results.
<urn:uuid:bc7d35d1-e6b7-4f39-9bdc-9fba5dd660a3>
2.515625
600
About (Org.)
Science & Tech.
35.676505
1,398
Assume 14 < n < 30. Make boxes labelled 16-n to 30-n. For i = 1, ..., 16, let Si be the set of the first i elements and let si be the sum of the elements of Si. If si is less than or equal to 30-n, put Si in the box labelled si; otherwise, put it in the box labelled si-n. There are 15 boxes and 16 subsets, so at least one box holds two subsets. Since the elements are positive, the prefix sums s1 < s2 < ... < s16 are strictly increasing, so two subsets cannot land in the same box for the same reason (both filed by their sum, or both filed by their sum minus n). A box with two subsets must therefore hold one subset Si filed under si = the box label and one subset Sj filed under sj - n = the box label; since sj = si + n > si and the prefix sums increase, j > i and Sj contains Si. So Sj - Si is a subset whose elements have sum n, and its complement with respect to the full set of 16 elements has sum 30-n. You can probably express the above in 4 lines if you're especially terse.
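The answer above responds to a problem statement that is not quoted; reading between the lines, the claim appears to be that any collection of 16 positive integers with total 30 contains a subset summing to n for the stated range of n (smaller targets then follow by taking complements). Under that assumed reading, the brute-force check below enumerates every such collection and confirms the claim, which is a useful sanity check on the boxing argument; the reconstructed problem statement is an inference, not something given in the thread.

    # Brute-force check of the (inferred) claim: every multiset of 16 positive
    # integers summing to 30 has a subset with sum n, for each n with 14 < n < 30.

    def partitions(total, parts, max_part):
        """Yield non-increasing tuples of `parts` positive integers summing to `total`."""
        if parts == 1:
            if 1 <= total <= max_part:
                yield (total,)
            return
        for first in range(min(max_part, total - (parts - 1)), 0, -1):
            for rest in partitions(total - first, parts - 1, first):
                yield (first,) + rest

    def has_subset_sum(items, target):
        reachable = {0}
        for x in items:
            reachable |= {s + x for s in reachable if s + x <= target}
        return target in reachable

    failures = 0
    for multiset in partitions(30, 16, 30):
        for n in range(15, 30):
            if not has_subset_sum(multiset, n):
                failures += 1
                print("fails:", multiset, "n =", n)
    print("counterexamples found:", failures)   # prints 0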
<urn:uuid:1294ddef-6dd8-4747-94c9-62093f81d332>
3
189
Q&A Forum
Science & Tech.
89.103562
1,399