SUSE LINUX is available for several 64-bit platforms. This does not necessarily mean that all the applications included have already been ported to 64-bit platforms. SUSE LINUX supports the use of 32-bit applications in a 64-bit system environment. This chapter offers a brief overview of how this support is implemented on 64-bit SUSE LINUX platforms. It explains how 32-bit applications are executed (runtime support) and how 32-bit applications should be compiled to enable them to run both in 32-bit and 64-bit system environments. Additionally, find information about the Kernel API and an explanation of how 32-bit applications can run under a 64-bit kernel.
SUSE LINUX for the 64-bit platforms AMD64 and EM64T is designed so that existing 32-bit applications run in the 64-bit environment “out-of-the-box.” This support means that you can continue to use your preferred 32-bit applications without waiting for a corresponding 64-bit port to become available.
Conflicts between Application Versions

If an application is available both for 32-bit and 64-bit environments, installing both versions in parallel is bound to lead to problems. In such cases, decide on one of the two versions and install and use only that one.
To be executed correctly, every application requires a range of libraries. Unfortunately, the names for the 32-bit and 64-bit versions of these libraries are identical. They must be differentiated from each other in another way.
To retain compatibility with the 32-bit version, the libraries are stored at the same place in the system as in the 32-bit environment. The 32-bit version of libc.so.6 is located under /lib/libc.so.6 in both the 32-bit and 64-bit environments.
All 64-bit libraries and object files are located in directories called lib64. The 64-bit object files you would normally expect to find under /lib, /usr/lib, and /usr/X11R6/lib are now found under /lib64, /usr/lib64, and /usr/X11R6/lib64. This means that there is space for the 32-bit libraries under /lib, /usr/lib and /usr/X11R6/lib, so the file name for both versions can remain unchanged.
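Which of these directories applies to a given process depends on whether that process runs as a 32-bit or 64-bit binary. As a quick illustration (not taken from the SUSE documentation), a program can inspect its own word size by checking the native pointer width — a minimal Python sketch:

```python
import struct


def word_size_bits():
    """Return the word size (32 or 64) of the running process."""
    # struct.calcsize("P") is the size of a native pointer in bytes
    return struct.calcsize("P") * 8


if __name__ == "__main__":
    bits = word_size_bits()
    # A 32-bit binary on a 64-bit kernel still reports 32 here,
    # which is why it resolves its libraries from /lib rather than /lib64.
    print(f"Running as a {bits}-bit process")
```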
Subdirectories of the object directories whose contents do not depend on the word size are not moved. For example, the X11 fonts are still found in the usual location under /usr/X11R6/lib/X11/fonts. This scheme conforms to the LSB (Linux Standards Base) and the FHS (Filesystem Hierarchy Standard).
The most common cause of flooding is when the volume of water exceeds the capacity of the river or stream
channel. Rivers are natural drainage channels for surface waters. Surface waters comprise two components:
runoff and base flow. Runoff is that part of precipitation that flows toward the rivers
or streams on the ground surface or within the soil (subsurface runoff or interflow). Base flow is the
part of stream flow that enters the stream channel from groundwater.
Stream flow is affected by a number of factors. (The Corps' Hydrologic Engineering Center (HEC) offers
a hydrology course, Hydrologic Engineering for Planning, for non-hydrologists
interested in more details than are provided here.)
The most important of these for the purposes of this manual are the amount and type of precipitation, the
nature and condition of the drainage basin and climate. During a rainstorm, the amount, intensity and
duration of the rain as well as the area of the storm and its path, all determine the surface water
runoff that reaches a stream.
The amount, intensity and duration of rain affect the ability of the land to absorb the precipitation,
which further affects the rate of runoff. The area and path of the storm in relation to the size of the
watershed determine the area contributing runoff. The runoff rate and the area affected together
determine the volume of water that will pass a given point downstream. The volume of water moving
through the channel and the channel's dimensions and conditions determine the nature and extent of the flooding.
The shape, size, soil type and topography of the drainage basin are other factors that can affect the
quantity of water reaching the stream and the timing with which it arrives. Although some of these
factors are constant, some (like the absorptive or shedding properties of the soil) vary with vegetation
cover, season and previous rainfall.
Climate can also influence the relationship between precipitation and runoff. Frost makes most soil
impenetrable if the soil contains moisture. Parched soil can also influence runoff rates. A large part of
the year's precipitation may be stored in the form of snow in the Northern U.S. during winter. Heavy ice
formation on rivers can also influence flooding.
Floods may result from one or more of the following causes:
- Snowmelt runoff
- Urban stormwater runoff
- Coastal storms, tsunamis, cyclones, hurricanes
- Ice jams and other obstructions
- Dam failure or the failure of some other hydraulic structure
- Catastrophic outbursts
As noted above, rainfall is the most common cause of flooding in the U.S. The volume of water in
the stream or river's channel simply exceeds its capacity to convey the water. As a result water
begins to spill out of the channel onto the adjoining lands of the natural floodplain, which may
have been significantly altered by human activity.
Floods can rise slowly or quickly. In many areas they may develop over a period of days. Flash floods
can be extremely dangerous. Unanticipated, they usually happen on small watersheds as a result of a
torrential downpour, often caused by heavy thunderstorm activity. In a flash flood, stream flow peaks
within hours of the rainfall. Estimating damages due to rainfall floods is now a straightforward process.
During winter in some parts of the U.S., most of the precipitation may be stored as snow or ice
on the ground. As temperatures rise huge quantities of water are released. These floods are most
common in spring but can occur as a result of sudden winter thaws. Heavy runoff can result from
the rapid melting of the snow under the combined effect of sunlight, winds and warmer temperatures.
If the ground is frozen, the water produced by the melting snow is unable to penetrate and runs
off into streams and lakes. Flooding becomes even more severe if the snowmelt runoff is compounded
by runoff from concurrent heavy rainfall. The later the spring thaw, the greater the risk of
this compound flood problem. Snowmelt explains the prevalence of heavy spring runoff and
flooding in some parts of the country.
Urban Drainage (Stormwater Runoff) Flooding
Urbanization drastically alters the drainage characteristics of the land. Slanted roofs,
downspouts, storm gutters and stormwater conveyance systems increase the volume and rate of
surface runoff. The urban runoff from intense rainfall can exceed the carrying capacity of the
sewer system, creating a backup in the system. This backup often causes flooding of basements
and low lying roads. Urban stormwater runoff can also cause local rivers to flood as well as the
urban area itself. Although the impact on a major river may be minimal, the carrying capacity of
small streams can be quickly exceeded, causing localized flooding and erosion problems.
Coastal Storm Flooding
High winds and wave action have created flood conditions on the seashores as well as on the shores of
the Great Lakes and other large water bodies throughout the U.S. A related cause of flooding is the
interaction between high estuarine flows and tides. Storm surge or seiches occurring simultaneously with
high waves can cause shoreline flooding. Every body of water has a set of natural periods of oscillation
at which it is easy to set up motions called seiches. Surges are caused by sudden changes in atmospheric
pressure and by the wind stress accompanying moving storm systems.
Storm systems occur frequently and some have the potential to cause abnormal water levels at
coastlines. Determining water elevations during storms is a complex problem. It involves
interactions between wind and water and differences in atmospheric pressure. Erosion damage
can be a significant category of losses in these kinds of floods. For examples of erosion
damage on the Great Lakes, see Great Lakes Issues.
This makes estimating damages for such events complex and difficult.
Lake flooding can be complicated by the fact that it is often a weir flow that can last for
extended periods of time in areas afflicted by high lake levels.
Tsunami is a Japanese term for "harbor wave." A tsunami, also known as a tidal wave, is the most
spectacular coastal flooding event. A tsunami actually has nothing to do with the tides. An undersea
movement such as an earthquake or a landslide causes a disturbance that gives a vertical motion to the
water column resulting in a tsunami.
An earthquake of 7.0 on the Richter scale can generate a series of waves. In the Pacific Basin these
waves have been known to travel at almost 570 mph over long distances with little loss of energy.
Crests can be several hundred miles apart. As the wave approaches the coast it grows as it slows down.
The mass of water that hits the shore can have both tremendous velocity as well as force behind it.
Estimating damages from these kinds of floods is very difficult because tsunamis are unique with
respect to location, amplitude of waves and time between troughs. Because the source of the wave is
always unknown, modeling these events remains a crude approximation. For an overview of recent
tsunami events, see Recent Tsunami Events. The December 2004 tsunami in the Indian Ocean is well
documented; see NOAA and the Indian Ocean Tsunami for a starting point. Informative publications
include After the Tsunami: Human Rights of Vulnerable Populations and Hope for Renewal: Photographs
from Indonesia After the Tsunami. Several informative animations are also available online,
including the Savage Earth Animation.
The following materials were taken from the FEMA Hurricanes site.
A hurricane is a tropical storm with winds that have reached a constant speed of 74 mph or more.
Hurricane winds blow in a large spiral around a relative calm center known as the "eye." The "eye" is
generally 20 to 30 miles wide, and the storm may extend outward 400 miles. As a hurricane nears land,
it can bring torrential rains, high winds and storm surges. A single hurricane can last for more than
two weeks over open waters and can run a path across the entire length of the eastern seaboard. August
and September are peak months during the hurricane season that lasts from June 1 through November 30.
Hurricanes are called "typhoons" in the western Pacific Ocean, while similar storms in the Indian Ocean
are called "cyclones."
Moving ashore, they sweep the ocean inward while spawning tornadoes and producing torrential rains and
floods. Even more dangerous than the high winds of a hurricane is the storm surge, a dome of ocean
water that can be 20 feet at its peak and 50 to 100 miles wide. The surge can devastate coastal
communities as it sweeps ashore. Nine out of 10 hurricane fatalities are attributable to the storm surge.
Heavy rains and ocean waters brought ashore by strong winds can cause flooding. The runoff
systems in many cities are unable to handle such an increase in water because of the gentle
topography in many of the coastal areas where hurricanes occur. Hurricanes are capable of
producing copious amounts of flash-flooding rainfall. During landfall, a hurricane rainfall of
10 to 15 inches or more is common. If the storm is large and moving slowly, less than 10 mph,
the rainfall amounts from a well-organized storm may be even greater. To get a generic estimate
of the rainfall amount (in inches) that can be expected, divide 100 by the storm's forward
speed in mph, i.e., 100/Forward Speed = estimated inches of rain. Tropical Storm Claudette (1979) brought
45 inches of rain to an area near Alvin, Texas, contributing to more than $600 million in damage.
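That rule of thumb is simple enough to put in code. A small Python sketch (the function name and the input check are ours, not FEMA's); note the rule is normally stated as 100 divided by the forward speed, so slower storms drop more rain over a given point:

```python
def estimated_rainfall_inches(forward_speed_mph):
    """Hurricane rainfall rule of thumb: 100 / forward speed (mph)."""
    if forward_speed_mph <= 0:
        raise ValueError("forward speed must be positive")
    return 100.0 / forward_speed_mph


# A slow-moving storm drops far more rain at any one location:
# 5 mph -> 20 inches, 20 mph -> 5 inches
```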
Estimating damages for hurricane floods is more difficult than for fluvial floods. Estimating wave
damages, for example, is one problem; separating out wind damage from water damage is another challenge.
Nonetheless, hurricane flood damages are estimated routinely. To see some of the latest advances in
this area see the Corps'
Storm Damage Reduction Model.
Ice Jam Flooding
Ice jams are a major concern in some cold region parts of the country. Jams form during both the
freeze-up and breakup periods of ice formation. They result from the accumulation of ice fragments
that build up in a logjam fashion to restrict the flow of water. The jams act as a temporary
obstruction to stream flow. The mechanics of ice jam flooding can be quite complex; for more
information, see the Ice Jam and Ice Flooding Clearinghouse.
A brief overview is provided below.
Ice floes left behind by floodwaters
During freeze-up ice jams usually form where floating ice slush or blocks, formed by frazil
ice, encounter a stable ice cover. The beginning of the ice jam is the toe and the upstream
end is the head. The stable ice is usually frozen to the banks or is restricted from moving by
the channel configuration. Generally, incoming ice fragments either submerge and deposit under
the stable ice cover, pile up behind it, or both. Bridge piers, islands, bends, shallows,
slope reductions and constrictions can increase the likelihood of a jam forming. Ice jams in
the spring result from accumulated ice from the breakup of the upstream ice cover.
Ice jams cause flooding for two reasons. First, ice jams can be very thick, many feet thick in
some cases. Second, the underside of the ice cover is usually very rough. In an open stream
the streambed is the only source of friction retarding the flow of water. The rougher the
streambed, the greater the depth required to pass a given stream discharge. With an ice jam in
place frictional resistance is greatly increased and the flow depth has to be much greater than
for open water. Add the depth of water needed to float the ice jam to the depth required to
maintain the discharge and extremely high water levels can occur, even at relatively small stream discharges.
When an ice jam is suddenly released, it produces a surge of flow that can move at very rapid
speeds. This surge can carry and deposit chunks of ice as large as automobiles, presenting a
significant increase in damage potential for these kinds of floods. Estimating damages for ice
jam floods is difficult because both the frequency of jam occurrence and the significance of
damages caused by floating ice floes are hard to predict.
Dam Failure Flooding
Flooding can result from the failure of dams or other hydraulic structures. These
failures can result in a wall of water being released in a surge down the river channel.
The suddenness and magnitude of such an event can have disastrous results.
Catastrophic Outburst Flooding
Outburst floods are more common in western Canada and other parts of the world than they are
in the U.S. An outburst flood occurs when lakes dammed by glaciers or moraines suddenly drain
and tons of water, mud and debris are released. The resulting floodwaters can pick up large
quantities of sediments and transform into destructive debris flows. The random and often
unpredictable nature of these kinds of events makes the estimation of damages resulting from them
as difficult as estimating damages from dam failures.
Glossary of Terms
Glossary of Lake and Water Words
is available from the North American Lake Management Society. Also see the
from the National Flood Insurance Program.
Mars Science Lab Project Manager and professor of geology at California Institute of Technology John Grotzinger presented new evidence of ancient habitability on Mars, based on the findings from the Curiosity rover, in a lecture on Thursday evening. On Tuesday, the Jet Propulsion Laboratory in Pasadena, Calif. announced that Curiosity’s current location in Gale Crater very likely could have hosted microbial life.
Grotzinger explained that this particular location was chosen because it promised to have relevance to multiple interests in the search for habitability on Mars. The rover's eventual destination is Mount Sharp.
Researchers receiving Curiosity’s findings back on Earth were first struck by the rock’s surprising color — on the famous Red Planet, the rock in Gale Crater was gray.
“Red Mars turned gray at Gale Crater,” Grotzinger said.
The rock found in Gale Crater has been notable to scientists because it suggests a long history of interaction with neutral pH water. This water, which would likely have had a low salinity concentration, would have been far more inviting to microbial life than any other location.
Researchers found that the magnetite found in the rock was not fully oxidized. The discovery of both oxidized and reduced substances in these samples suggests that microorganisms that subsist simply on the chemical energy potential present within a rock could have lived within the Gale Crater rock.
Back on Earth, scientists like Princeton’s own Tullis C. Onstott have touted the importance and vitality of prokaryotes that live in extreme habitats like these in recent years. These so-called “extremophiles” were probably the first organisms on Earth, Grotzinger explained.
“This is the most complex spacecraft ever to be sent to the surface of another planet,” Grotzinger said as he explained an image of Curiosity’s insides. Curiosity is equipped with tools with names like CheMin, Curiosity’s X-ray diffraction instrument, and Dust Removal Tool.
One of the challenges of the Mars mission has been the need for vigilant communication with the rover, Grotzinger explained. Furthermore, the scientists must take meticulous precautions in every action. Not only must every movement be simulated on Earth before it can happen on Mars, but every sample must be taken several times in order to prevent contamination.
This time commitment can become a problem when Mars passes behind the Sun. For that period of about a month, Earth will not be able to communicate with Curiosity by radio.
Grotzinger expressed his anticipation for the coming Mars Sample Return Mission scheduled for launch in 2020.
In the olden days, not only did we have to walk a mile in the chilling winds of a snowstorm to get to school (hey our grandparents had it rougher; they had to do it to get to day care), we also had to make programs without buttons and scrollbars. Now, of course, we have object-oriented programming. This article will introduce you to the most important concepts as they relate to Java.
Simply put, when you change the state of an object, you ask it to perform a behavior. An object stores its state in fields (commonly referred to as variables, which will be discussed in further depth later) and demonstrates its behaviors through methods (known as functions, also covered later).
When you press the power button on your computer, it places it in the On state. Because we have changed the state of the computer (from Off to On), we have also initiated certain behaviors from the computer. In this case, the power comes on, the fans within the computer turn on, and the system BIOS, followed by the hard drive, etc., all become active.
Note that an object can only have the states and behaviors you place upon it. I cannot press the on button of my computer and receive a bologna sandwich unless the programmer coded that in. (Note to self: add bologna sandwich dispenser to computer).
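To make the state/behavior idea concrete, here is a minimal Java class modeling the computer example. The class and its method names are purely illustrative — they aren't part of any standard API:

```java
public class Computer {
    // field (variable): stores the object's state
    private boolean poweredOn = false;

    // method (function): a behavior that changes the state
    public void pressPowerButton() {
        poweredOn = !poweredOn;
    }

    // method that reports the current state without changing it
    public boolean isPoweredOn() {
        return poweredOn;
    }

    public static void main(String[] args) {
        Computer pc = new Computer();
        pc.pressPowerButton();  // Off -> On
        System.out.println(pc.isPoweredOn() ? "On" : "Off");  // prints "On"
    }
}
```

Pressing the button triggers the behavior, and the behavior updates the state — exactly the pairing described above. There is no bologna-sandwich method, because nobody coded one in.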
Benefits of Objects
Aside from being shiny and fun to look at, objects within a program offer several benefits.
Modularity: You can write the code for the object within the object itself, keeping it separate from other coding, while still being able to call upon it at any time.
Hiding Information: Encapsulating your code within an object keeps its implementation details hidden from the rest of the program.
Reusability: Once created, an object can be copied and used again and again without having to go through the process of remaking it.
Debugging Made Simple: Objects make debugging easier in some instances. If you track the problem to a button within your application, you can simply remove the button and remake it, instead of having to delve into your main code.
Jupiter has a total of 63 known satellites. The four great moons of Jupiter were discovered by Galileo Galilei (1564–1642) in Jan. 1610, and are called the Galilean satellites after their discoverer. Their names are Io, Europa, Ganymede, and Callisto. Like our Moon, the satellites always keep the same face turned toward the planet they circle. Jupiter's four largest moons all have thin atmospheres. A carbon dioxide atmosphere envelops Callisto; Europa and Ganymede each have thin oxygen atmospheres; and Io's contains sulfur dioxide.
Information Please® Database, © 2007 Pearson Education, Inc. All rights reserved.
MadSci Network: Engineering
Most thermometers depend on the differential thermal expansion rates of materials to measure temperatures. That is, we all know that materials expand in heat and contract in the cold; what's less obvious is that different materials do so at different rates. The rate is called the "coefficient of thermal expansion". I describe three different designs below that reflect this principle and classic thermometer designs. My recommendation is to go with design #3 as being the simplest in concept and construction.

Thermometer #1
--------------
The difference in expansion rates of different metals is the principle used in mechanical dial thermometers. Consider two long strips of dissimilar metals welded together along their lengths. As the temperature increases, one of the metal strips wants to expand more than the other, so the assembly bends towards the side that isn't expanding as fast. In a dial thermometer, the bimetal strip is coiled into a spring that expands as the temperature rises and contracts as the temperature falls. One end of the spring is fixed to the thermometer's case, and the other to the pointer.

If you were to find two long, thin strips of dissimilar metal (say, brass and aluminum), these could be fastened together along their lengths using screws and nuts every few inches, or perhaps by gluing them with a strong glue such as epoxy. One would take note of the amount of curvature in the strips to detect temperature changes.

Thermometer #2
--------------
Another approach is to take advantage of the different expansion rates of a liquid and a solid. Galileo used this idea and Archimedes' principle to invent his thermometer, the aptly named "Galileo's thermometer". Liquids such as water often expand much more than solids such as glass or metal. Objects float in water if the weight of water they displace is more than their own weight; if the weight of water displaced is less than the object's weight, it will sink.
The weight of an object does not change with temperature. The density (weight per unit volume) of a liquid changes with temperature because although the total weight is constant, the total volume changes. For water above 4 deg C, but below boiling, water expands at the rate of about 0.2 cc per kg per deg C. That is, 1.000 liter of water at 25 deg C will occupy about 1.002 liters (that is, one liter plus 2 cc more) at 35 deg C. An object that just *barely* floated at 25 deg C will sink at 35 deg C, if it doesn't expand very much (its temperature is increasing, too). But glass doesn't expand very much compared to water.

What Galileo did is have a glass blower make some sealed, air-filled glass balls. Galileo then carefully added weights to these balls so that each had a specific temperature at which it could no longer float. That is, balls with a lot of weight would sink when the temperature was low, while balls with less weight would float until the temperature got higher. The balls were marked with the target temperature at which they would sink. Galileo's thermometers can often be found in scientific gift shops (search the Web for "Galileo +thermometer" to see illustrations). I have one at home and it works quite nicely.

You could try reproducing this invention using water and any kind of rigid floating object. For example, small bottles in which you put a little sand (but mostly air) might work. If you have access to a chemistry lab (or better yet, the chemist!) with accurate scales and volume-measuring equipment (you need to measure the displacement volume of the bottles accurately), you could assemble the weighted bottles in fairly straightforward fashion. One would need to reference a table of water density versus temperature, with the objective of weighting the different bottles to reach the same average densities at different water temperatures. The scales would be used to measure the mass of each bottle and to determine how much sand to add.
Alternatively, one could get there experimentally (as did Galileo) by floating the bottles in tubs of water at different temperatures until just the right amount of sand is in each (and checking that there's not too much or too little by seeing if they float or sink in tubs colder or hotter than the target temperature of each bottle). This design (#2) is probably the fussiest of the three to get working right.

Thermometer #3
--------------
The simplest approach of all (sorry I made you wait 'til the end) is to reproduce the bulb thermometer on a grand scale. The bulb thermometer has a reservoir of liquid (alcohol or mercury, typically) that feeds a very narrow channel inside a glass column. As the liquid expands with temperature, the level inside the column rises; as the temperature decreases, the level sinks.

Why not do this with a glass jug, a stopper, some clear, thin plastic tubing, a funnel, a long thin stick, a small stick to plug a hole in the stopper (see below), and some duct tape? (No good science experiment can be done without duct tape; however, any kind of reasonably waterproof adhesive tape will do for your purposes.)

Make two holes in the stopper that snugly fit the plastic tubing. Fill the jug with colored water, and insert the stopper. Use some of the plastic tubing to connect the funnel's outlet to the jug (via the hole in the stopper). Insert a different piece of plastic tubing through the other hole in the stopper part way into the jug. Use the tape to affix the stick to the jug so that the stick stands vertically, then tape the second piece of tubing to the stick. The glass jug is the reservoir of your thermometer and the tubing taped to the stick is the "glass column". All that's left to do is to use the funnel to force a bit more liquid into the jug to force out all air and to get some liquid partway up the other tube (the one taped to the stick). Remember that "water seeks its own level".
Once you've done that, pull out the tubing connected to the funnel and plug its hole through the stopper (this is where the "plug stick" comes in). This will take some practice and deft application of fingers, as the water partway up the column will want to come spurting out of the hole formerly occupied by the filling tube.

You need to use a glass jug, as opposed to plastic, to ensure rigidity, so that as the water expands and contracts it must rise and fall up and down the thermometer's column. The other trick to making this work is to force out all the air bubbles. Again, you want to make sure that the water has no place to go when it expands except up the column.

A gallon jug (about 4 liters) will work nicely if thin tubing (say, 1/8" to 1/4" diameter, that is, 3 to 6 mm diameter) can be procured. I'm thinking of the clear plastic tubing that's often used by hobbyists who keep aquariums to supply air to their water filters from an external air pump. A pet shop or well-stocked hardware store should also be able to fix you up with some of this tubing (it's quite inexpensive). With a 4 liter reservoir and 5 mm diameter tubing, you should get a rise of about 2 cm of liquid per deg C temperature rise. But remember, it will take time for this amount of water to change temperature, so don't expect rapid response times. You should easily see temperature changes through the course of a day, though.

This apparatus will be a bit messy to set up, but probably the easiest and most accessible to your students in terms of concept and materials. Have fun!

Steve Czarnecki

P.S. A simple "dinner table" experiment to demonstrate the thermal expansion of gases: take an empty glass soda bottle ("a Coke bottle" or a wine bottle), wet the top of the rim with saliva, and place a small coin over the opening. A U.S. dime works well with one of the old 16 oz. returnable Coke bottles. The saliva acts to help seal the air inside the bottle.
Carefully wrap your hands around the bottle as it stands on a table top. Continue holding the bottle, being careful not to shake or move it. Watch the coin over the opening very carefully. After five minutes or so, you should see it pop up a bit and come back to rest, with the cycle repeating a few times over the next few minutes.

What's happening is that your hands are warming the air inside the bottle, which expands, forcing the coin to flip up a bit and allow the excess pressure to escape. It takes time to get this going because you have to warm up the glass bottle and then the air inside. Note that a glass bottle is essential to making this work (the air must have nowhere to go except to dislodge the coin). Use the smallest (i.e., lightest) coin that will just cover the bottle opening.
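Returning to Thermometer #3: the quoted column rise can be checked with a quick calculation using the expansion rate given earlier (about 0.2 cc per liter per deg C). This arithmetic sketch is ours, not part of the original answer; with the stated numbers it gives roughly 4 cm per deg C for 5 mm tubing and about 2.8 cm for 6 mm (1/4") tubing, in the same range as the "about 2 cm" quoted, with the exact figure depending on the tubing bore:

```python
import math


def rise_per_deg_c(reservoir_liters, tube_diameter_mm,
                   expansion_cc_per_liter_per_c=0.2):
    """Column rise (cm) per 1 deg C for a liquid-in-tube thermometer."""
    # volume of water pushed into the tube by a 1 deg C warming
    expanded_cc = reservoir_liters * expansion_cc_per_liter_per_c
    radius_cm = (tube_diameter_mm / 10.0) / 2.0
    tube_area_cm2 = math.pi * radius_cm ** 2
    return expanded_cc / tube_area_cm2  # 1 cc == 1 cm^3


# 4 L jug, 5 mm tubing -> about 4.1 cm per deg C
# 4 L jug, 6 mm tubing -> about 2.8 cm per deg C
```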
Try the links in the MadSci Library for more information on Engineering.
RAND_add, RAND_seed, RAND_status, RAND_event, RAND_screen - add entropy to
void RAND_seed(const void *buf, int num);
void RAND_add(const void *buf, int num, double entropy);
int RAND_event(UINT iMsg, WPARAM wParam, LPARAM lParam);
RAND_add() mixes the num bytes at buf into the PRNG state. Thus, if the data at buf are unpredictable to an adversary, this increases the uncertainty about the
state and makes the PRNG output less predictable. Suitable input comes from
user interaction (random key presses, mouse movements) and certain hardware
entropy argument is (the lower bound of) an estimate of how much randomness is
contained in buf, measured in bytes. Details about sources of randomness and how to
estimate their entropy can be found in the literature, e.g. RFC 1750.
RAND_add() may be called with sensitive data such as user
entered passwords. The seed values cannot be recovered from the PRNG
OpenSSL makes sure that the PRNG state is unique for each thread. On
systems that provide
/dev/urandom, the randomness device is used to seed the PRNG transparently. However, on
all other systems, the application is responsible for seeding the PRNG by
RAND_seed() is equivalent to
RAND_add() when num == entropy.
RAND_event() collects the entropy from Windows events such as
mouse movements and other user interaction. It should be called with the
iMsg, wParam and lParam arguments of all messages sent to the window procedure. It will estimate the entropy
contained in the event message (if any), and add it to the PRNG. The
program can then process the messages as usual.
The RAND_screen() function is available for the convenience of
Windows programmers. It adds the current contents of the screen to the
PRNG. For applications that can catch Windows events, seeding the PRNG by
RAND_event() is a significantly better source of
randomness. It should be noted that both methods cannot be used on servers
that run without user interaction.
RAND_status() and RAND_event() return 1 if the
PRNG has been seeded with enough data, 0 otherwise.
The other functions do not return values.
RAND_seed() and RAND_screen() are available in
all versions of SSLeay and OpenSSL.
RAND_add() and RAND_status() have been added in OpenSSL 0.9.5,
RAND_event() in OpenSSL 0.9.5a.
The University of Oregon chemistry professor, plastics expert and public speaker isn't really a crusader, though, so much as a numbers guy who likes to help people make sustainable choices based on facts rather than trends or clever marketing. Tyler is well-versed in the technique known as "life cycle assessment," a way of measuring a product's environmental impact throughout its entire existence -- from birth to death (or resurrection via recycling).
Tyler's talks can yield counterintuitive ways of thinking about everyday consumer choices. Here he sounds off about his way of looking at sustainability and previews his OMSI Science Pub talk tonight in Hillsboro.
What first started you on the path of studying sustainability?
The sustainability thing got started a number of years ago because I went across the street to the student union and they were giving, I think it was 50 cents off, if you brought your own ceramic mug instead of using one of their disposable cups.
I remember asking the guy, what's going on? He told me it's for environmental reasons. It's more sustainable to bring your cup.
I'd never heard of that before, and it turned out to be -- without giving out all of the data -- it turned out to be basically an urban legend.
So what's the fallacy there?
(A ceramic mug) has to be fired in a kiln at a high temperature for a certain period of time and you use a lot of energy to fire that ceramic mug. It turns out you might as well take the energy -- the natural gas or petroleum -- and convert it into plastic. No matter how much you reuse that ceramic mug, at least within the assumptions of a life cycle assessment, you never recover that energy you used in the manufacturing process.
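The mug argument is a break-even calculation: the mug "pays back" its firing energy only if each wash costs less energy than making one disposable cup. The figures below are made-up placeholders to show the structure of the comparison, not Tyler's data:

```python
# Break-even sketch of the ceramic-mug vs. disposable-cup comparison.
# All figures are illustrative assumptions, not life cycle assessment data.
mug_manufacture = 14.0   # MJ to fire one ceramic mug (assumed)
cup_manufacture = 0.2    # MJ embodied in one disposable cup (assumed)
wash_per_use = 0.1       # MJ of hot water to wash the mug once (assumed)

if wash_per_use >= cup_manufacture:
    # Tyler's scenario: per-use costs swamp the comparison, so the
    # manufacturing energy is never recovered.
    print("the mug never breaks even")
else:
    n = mug_manufacture / (cup_manufacture - wash_per_use)
    print(f"mug breaks even after {n:.0f} uses")
```

With these placeholder numbers the mug breaks even after many uses; raise the washing energy to the cup's embodied energy and, as Tyler argues, it never breaks even at all.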
But what if you're looking at it from the standpoint of litter?
Right, so that's the classic case of a plastic bag. A plastic bag is really good in terms of global warming impact, chemical use, water use, just all those environmental impacts we think about -- a plastic bag is really good compared to other types of bags.
But where it's really bad is it doesn't degrade, so it's easier for it to end up as litter. So if you see we have plastic bags blowing all around, or they're getting in our waterways and choking fish and that's the most important thing for you, then you ban them. But if you're more interested in global warming, you might come to a different conclusion.
So when you go to the grocery store do you get a plastic bag, a paper bag or do you use a cotton tote bag?
The most sustainable option is a reusable tote bag made out of recycled plastic.
That's interesting because the cloth bags are so vogue right now.
Well, cotton requires a huge amount of water to grow. So in terms of water use, cotton is really low on the list. It's bad for the environment in that sense. Also something like 25 percent of all pesticides used in this country are used on cotton.
So yeah, there are issues with cotton.
So another big choice for consumers right now is what kind of car you drive. The emphasis seems to be on gas mileage. Are there some other considerations people should be thinking about?
No. When they did the original life cycle assessment it showed that the Prius wasn't actually that good compared to a regular internal combustion engine car. But that life cycle assessment was redone with better data and better knowledge of how long the Prius is going to last. It turns out the Prius is actually quite good in terms of its environmental impact. So a Prius, or hybrid vehicles actually look pretty good.
What's another kind of product people might want to think about more before buying it?
Bio-plastics made from corn starch. They're not as green as we thought they were. It takes a lot of energy to grow the corn and make the starch. Conventional plastics look pretty good compared to a lot of those bio-plastics.
So what kinds of things will you be dealing with in your OMSI Science Pub talk?
So I don't want to give away my talk, but since this is a Science Pub and there will be people sitting around drinking beer, I've got a lot of life cycle assessment data on the manufacturing of beer. A lot of people think that's a fairly harmless activity but some of the data is absolutely amazing, the amount of greenhouse gas brewing beer produces.
I also talk about the impact of animals -- pets.
So you're coming up to the Portland area to challenge people on their beer and their dogs? It'll be pretty interesting to see how that turns out.
(Laughing) I don't come across as challenging them. I'm giving them something to think about.
-- Joe Hansen is a freelance writer
A laboratory experiment at NASA's Jet Propulsion Laboratory, Pasadena, Calif., simulating the atmosphere of Saturn's moon Titan suggests complex organic chemistry that could eventually lead to the building blocks of life extends lower in the atmosphere than previously thought. The results now point out another region on the moon that could brew up prebiotic materials.
Following up on Wednesday’s surprise announcement that a cosmic ray detector on board the International Space Station had possibly made the first instrumented detection of dark matter, an article from the U.S. Department of Energy describes the methodology behind the discovery and what lies ahead for researchers.
As the shapes of galaxies go, the spiral disk—with its characteristic pinwheel profile—is by far the most pedestrian. But despite their common morphology, how galaxies like ours get and maintain their characteristic arms has proved to be an enduring puzzle in astrophysics. How do the arms of spiral galaxies arise? Do they change or come and go over time? The answers to these and other questions are now coming into focus as researchers capitalize on powerful new computer simulations to follow the motions of as many as 100 million “stellar particles” as gravity and other astrophysical forces sculpt them into familiar galactic shapes.
A new look at conditions after a Manhattan-sized asteroid slammed into a region of Mexico in the dinosaur days indicates the event could have triggered a global firestorm that would have burned every twig, bush, and tree on Earth and led to the extinction of 80% of all Earth’s species, says a new University of Colorado Boulder study.
The SpaceX Dragon capsule returned to Earth on Tuesday with a full science load from the International Space Station—and a bunch of well-used children's Legos. The privately owned cargo ship splashed down in the Pacific right on target, 250 miles off the coast of Mexico's Baja Peninsula, five hours after leaving the orbiting lab. The California-based SpaceX confirmed the Dragon's safe arrival via Twitter.
A Boeing 787 with a redesigned battery system made a 2-hour test flight on Monday, and the company said the event "went according to plan." The test flight was an important step in Boeing Co.'s plan to convince safety regulators to let airlines resume using the plane, which the company calls the Dreamliner.
Boeing's comments about the smoldering batteries on its 787 have annoyed the National Transportation Safety Board. Boeing gave its own account of two battery incidents, which included a fire, at a detailed press briefing in Tokyo last week. The problem is that the NTSB is still investigating the incidents. Boeing is a party to the investigation, meaning it provides technical experts and, in effect, gets a seat at the table as investigators try to sort out what happened.
The Big Bang theory says the visible portion of the universe was smaller than an atom when, in a split second, it exploded, cooled and expanded rapidly, much faster than the speed of light. The European Space Agency's Planck space probe has looked back at the afterglow of the Big Bang, and results released today have now added about 80 million years to the universe's age, putting it 13.81 billion years old.
Rusted pieces of two Apollo-era rocket engines that helped boost astronauts to the moon have been fished out of the murky depths of the Atlantic by Amazon.com CEO Jeff Bezos. A privately funded expedition led by Bezos raised the main engine parts during three weeks at sea, about 360 miles from Cape Canaveral. The engine parts were resting nearly 3 miles deep in the Atlantic.
After taking measurements of sudden, drastic changes in radiation levels, researchers have reported that NASA’s Voyager 1 spacecraft, now more than 11 billion miles from the Sun, left the heliosphere dominated by the Sun and has passed outside our solar system. Anomalous cosmic rays, which are cosmic rays trapped in the outer heliosphere, all but vanished, dropping to less than 1% of previous amounts.
NASA's twin GRAIL (Gravity Recovery and Interior Laboratory) spacecraft went out in a blaze of glory Dec. 17, 2012, when they were intentionally crashed into a mountain near the moon's north pole. GRAIL had company—NASA's Lunar Reconnaissance Orbiter (LRO) mapping satellite was orbiting the moon as well. With just three weeks' notice, the LRO team scrambled to get LRO in the right place at the right time to witness GRAIL's fiery finale.
A team of international scientists, including a Lawrence Livermore National Laboratory astrophysicist, has made the most detailed examination yet of the atmosphere of a Jupiter-like planet beyond our solar system. The finding provides astrophysicists with additional insight into how planets are formed.
Shining in the infrared with the energy of a trillion suns and producing a thousand new suns per year, newly discovered “starburst galaxies” represent what the most massive galaxies in our cosmic neighborhood looked like in their star-making youth. The discovery of these “abnormal” galaxies was recently made by the new Atacama Large Millimeter Array in Chile, which was formally dedicated this week.
Life as we know it is based upon the elements of carbon and oxygen. Now a team of physicists, including one from North Carolina State University, is looking at the conditions necessary to the formation of those two elements in the universe. They’ve found that when it comes to supporting life, the universe leaves very little margin for error.
Drilling into a rock near its landing spot, the Curiosity rover has answered a key question about Mars: The red planet long ago harbored some of the ingredients needed for primitive life to thrive. Topping the list is evidence of water and basic elements that teeny organisms could feed on, scientists said Tuesday.
The Mars rover Curiosity drilled into its first rock a month ago. Now scientists will reveal what's inside. Gathering at NASA headquarters Tuesday, the rover team will detail the minerals and chemicals found in a pinch of ground-up rock. The results come seven months after Curiosity made a dramatic landing in an ancient crater near the equator.
A pair of newly discovered stars is the third-closest star system to the Sun, according to a recent paper published by a Penn State University astrophysicist. At 6.5 light years, the duo is the closest star system discovered since 1916, and is expected to attract considerable attention from planet hunters.
NASA’s Martian rover hunkered down Wednesday after the sun unleashed a blast that raced toward Mars. While Curiosity was designed to withstand punishing space weather, its handlers decided to power it down as a precaution since it suffered a recent computer problem. While the hardy rover slept, the Opportunity rover and two NASA spacecraft circling overhead carried on with normal activities.
The Hubble constant is a fundamental quantity that measures the current rate at which our universe is expanding; it is critical for gauging the age and size of our universe. One of the largest uncertainties plaguing past measurements of the Hubble constant has involved the distance to the Large Magellanic Cloud, our nearest neighboring galaxy. A team of astronomers have now managed to improve the measurement of the distance to our nearest neighbor galaxy and, in the process, refine the calculation that helps measure the expansion of the universe.
Chemists have recently shown that conditions in space are capable of creating complex dipeptides—linked pairs of amino acids—that are essential building blocks shared by all living things. The discovery opens the door to the possibility that these molecules were brought to Earth aboard a comet or possibly meteorites, catalyzing the formation of proteins (polypeptides), enzymes and even more complex molecules, such as sugars, that are necessary for life.
A private Earth-to-orbit delivery service made good on its latest shipment to the International Space Station on Sunday, overcoming mechanical difficulty and delivering a ton of supplies with high-flying finesse. The Dragon's arrival couldn't have been sweeter—and not because of the fresh fruit on board for the six-man station crew. Coming a full day late, the 250-mile-high linkup above Ukraine culminated a two-day chase that got off to a shaky, almost dead-ending start.
A commercial cargo ship rocketed toward the International Space Station on Friday under a billion-dollar contract with NASA that could lead to astronaut rides in just a few years. Launch controllers applauded and gave high-fives to one another once the spacecraft safely reached orbit. The rocket successfully separated from the white Dragon capsule, which contains more than a ton of food, tools, computer hardware, and science experiments.
NASA's Fermi Gamma-ray Space Telescope orbits our planet every 95 minutes, building up increasingly deeper views of the universe with every circuit. Its wide-eyed Large Area Telescope (LAT) sweeps across the entire sky every three hours, capturing gamma rays from sources across the universe. A Fermi scientist has transformed LAT data of a famous pulsar into a mesmerizing movie that visually encapsulates the spacecraft's complex motion.
Boeing CEO Ray Conner has met with Japan's transport minister and other officials in Tokyo to explain his company's proposal for resolving problems with the 787 Dreamliner's lithium-ion batteries that have kept the aircraft grounded for over a month.
Abstract: "Plastic made from milk" — that certainly sounds like something made-up. If you agree, you may be surprised to learn that in the early 20th century, milk was used to make many different plastic ornaments — including jewelry for Queen Mary of England! In this chemistry science project, you can figure out the best recipe to make your own milk plastic (usually called casein plastic) and use it to make beads, ornaments, or other items.
In this chemistry science project, you will investigate which is the best recipe for making plastic out of milk.
What can you make out of milk? Cheese, butter, whipped cream, sour cream, yogurt, ice cream, and...plastic! Are you surprised by plastic? It is true. In fact, from the early 1900s until about 1945, plastic made from milk was quite common. This plastic, known as casein plastic or by the trade names Galalith and Erinoid, was used to manufacture buttons, decorative buckles, beads, and other jewelry, as well as fountain pens and hand-held mirrors and fancy comb-and-brush sets. Figure 1 below shows examples of belt buckles made from casein plastic in the 1930s and '40s; more examples can be found in the references in the Bibliography.
But how can milk be changed into plastic? To answer that we need to think first about what plastic is. The word plastic is used to describe a material that can be molded into many shapes. Plastics do not all look or feel the same. Think of a plastic grocery bag, a plastic doll or action figure, a plastic lunch box, and a disposable plastic water bottle. They are all made of plastic, but they look and feel different. Why? Their similarities and differences come from the molecules that they, like everything else, are made of. Molecules are the smallest units (way too small to see with your eye!) of any given thing. Plastics are similar because they are all made up of molecules that are repeated over and over again in a chain. These are called polymers, and all plastics are polymers. Sometimes polymers are chains of just one type of molecule, as in the top half of Figure 2 below. In other cases polymers are chains of different types of molecules, as in the bottom half of Figure 2, that link together in a regular pattern. A single repeat of the pattern of molecules in a polymer (even if the polymer uses only one type of molecule) is called a monomer.
Milk contains many molecules of a protein called casein. When you heat milk and add an acid (in our case vinegar), the casein molecules unfold and reorganize into a long chain. Each casein molecule is a monomer and the polymer you make is made up of many of those casein monomers hooked together in a repeating pattern like the top (all pink) example in Figure 2. The polymer can be scooped up and molded, which is why it is a plastic.
In this chemistry science project, you will investigate what is the best recipe for making casein plastic by making batches of heated milk with different amounts of vinegar. How much vinegar is needed to give you the most plastic? Without enough vinegar the casein molecules do not unfold well, making it difficult for them to link together into a polymer. Of course, if you were manufacturing you would be thinking about both the amount of plastic you can make and the cost. The more of any ingredient you use the more expensive the end product is. The "best" recipe will have the highest yield (make the most plastic) for the smallest amount of vinegar.
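Once you have weighed the plastic from each batch, picking the "best" recipe is a yield-per-vinegar comparison. The yields below are hypothetical placeholders to show the calculation; substitute your own measurements:

```python
# Hypothetical yields: teaspoons of vinegar per batch of heated milk
# mapped to grams of casein plastic recovered (made-up numbers).
results = {1: 3.5, 2: 6.0, 4: 7.0, 8: 7.2}

# Yield per teaspoon of vinegar: more vinegar keeps adding a little
# plastic, but with diminishing returns.
per_tsp = {tsp: grams / tsp for tsp, grams in results.items()}
best = max(per_tsp, key=per_tsp.get)
print(best, per_tsp[best])
```

With these placeholder numbers the 1-teaspoon batch wins on efficiency even though the 8-teaspoon batch produces the most plastic, which mirrors the cost-versus-yield trade-off a manufacturer would weigh.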
The plastic you make will be a bit more crumbly and fragile than Galalith or Erinoid. That is because the companies that made those casein plastics included a second step. They washed the plastic in a harsh chemical called formaldehyde. The formaldehyde helped harden the plastic. Although you will not use formaldehyde because it is too dangerous to work with at home, you will still be able to mold the unwashed casein plastic you make. Once you have a recipe, with the best ratio of vinegar to milk, for your casein plastic, you can have fun with it. Try shaping it, molding it, or dyeing it to make beads, figures, or ornaments, such as those shown in Figure 3 below.
Terms, Concepts, and Questions to Start Background Research
Materials and Equipment
The materials listed below are for doing the experimental procedure exactly as written. However, you can make changes to the experimental procedure in order to use a different size measuring cup and/or a stovetop rather than a microwave.
Making Casein Plastic
This experiment uses hot liquids, so an adult's help will be needed throughout.
Analyzing Your Data
Ideas for Fun with Your Casein Plastic
Try making beads, ornaments, or figurines out of your casein plastic. You should do the molding and coloring steps (except for paint and/or marker) within the first hour of making the plastic or it will start drying out.
Sandra Slutz, PhD, Science Buddies
If you like this project, you might enjoy exploring related careers.
Everything in the environment, whether naturally occurring or of human design, is composed of chemicals. Chemists search for and use new knowledge about chemicals to develop new processes or products.
Chemical Engineer |
Chemical engineers solve the problems that affect our everyday lives by applying the principles of chemistry. If you enjoy working in a chemistry laboratory and are interested in developing useful products for people, then a career as a chemical engineer might be in your future.
Industrial Engineer |
You’ve probably heard the expression “build a better mousetrap.” Industrial engineers are the people who figure out how to do things better. They find ways that are smarter, faster, safer, and easier, so that companies become more efficient, productive, and profitable, and employees have work environments that are safer and more rewarding. You might think from their name that industrial engineers just work for big manufacturing companies, but they are employed in a wide range of industries, including the service, entertainment, shipping, and healthcare fields. For example, nobody likes to wait in a long line to get on a roller coaster ride, or to get admitted to the hospital. Industrial engineers tell companies how to shorten these processes. They try to make life and products better—finding ways to do more with less is their motto.
Materials Scientist and Engineer |
What makes it possible to create high-technology objects like computers and sports gear? It's the materials inside those products. Materials scientists and engineers develop materials, like metals, ceramics, polymers, and composites, that other engineers need for their designs. Materials scientists and engineers think atomically (meaning they understand things at the nanoscale level), but they design microscopically (at the level of a microscope), and their materials are used macroscopically (at the level the eye can see). From heat shields in space, prosthetic limbs, semiconductors, and sunscreens to snowboards, race cars, hard drives, and baking dishes, materials scientists and engineers make the materials that make life better. | <urn:uuid:43bac83e-67d5-48c5-b8b4-2106aeacc10d> | 3.109375 | 1,628 | Tutorial | Science & Tech. | 45.752565 |
Dec. 6, 2002 KINGSTON, R.I. – November 25, 2002 – When a property is suspected of having contaminated soil or groundwater, it is usually a lengthy and costly process to confirm the presence of pollutants and to delineate the extent of the contamination. Soon that process may be simplified considerably.
University of Rhode Island geophysicist Reinhard Frohlich, an associate professor of geosciences, has devised a cost-effective, new method for finding underground contaminants that will reduce drilling and digging beneath the surface. By inserting two metal spikes in the ground at various distances and connecting them to an electric current, Frohlich can measure the voltage between the spikes and determine the resistivity of the soil, which tells him if the soil is polluted.
"My initial objective was to do an experiment at the surface that would explain what was going on beneath the surface," said Frohlich, whose research was funded by a $55,000 grant from the U.S. Environmental Protection Agency.
Resistivity measurements, which calculate a material's opposition to the flow of electric current, are widely used to track contaminated salts dissolved in groundwater because they are good conductors of electricity. But Frohlich's experiments focused on finding organic compounds like toluene, benzene, xylene, ethylbenzene, phenol and other cancer-causing substances that do not conduct electricity.
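The underlying calculation is Ohm's law with a geometric factor. For the standard four-electrode Wenner array, apparent resistivity is ρ = 2πa·V/I; the two-spike setup described in the article shares the same principle with a different geometry. The readings below are assumed values for illustration:

```python
import math

# Apparent resistivity for a four-electrode Wenner sounding:
#   rho = 2 * pi * a * V / I
# where a is the electrode spacing, V the measured voltage, and
# I the injected current. (Illustrative only; not Frohlich's exact setup.)
def wenner_resistivity(spacing_m, voltage_v, current_a):
    return 2 * math.pi * spacing_m * voltage_v / current_a

# Assumed readings: 2 m spacing, 0.5 V measured, 10 mA injected.
rho = wenner_resistivity(2.0, 0.5, 0.01)
print(round(rho, 1), "ohm-m")
```

Comparing such readings between clean and suspect areas is the key step: non-conductive organic contaminants displace conductive groundwater, so polluted zones show up as anomalously high resistivity.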
"Our system seems to work very well on all organic compounds. Resistivity increases significantly in areas where the aquifer is polluted compared to clean areas," he said. "We should be able to use this as the first step in the remediation process because it's quicker and allows us to drill fewer borings into the aquifer."

Frohlich tested his system at the Picillo Pig Farm in West Coventry, a Superfund site where illegal dumping of chemical waste was discovered following an explosion in 1978. The R.I. Department of Environmental Management and the EPA have been monitoring and cleaning the site for more than 20 years.
"The Picillo Farm is a suitable site for our experiments because the results can be compared with the many monitoring wells and other analyses that have been conducted there over the years," Frohlich said. In addition to field tests at the Picillo Farm, Frohlich conducted controlled laboratory tests comparing clean soil with contaminated soil of known composition.
His study will next attempt to quantify the amount of contaminants at a given location. "It's one thing to identify a clean or contaminated site, but we want to also get a quantitative value for the contaminants," said Frohlich. "That's something that the EPA would really like to be able to do."
The above story is reprinted from materials provided by University Of Rhode Island.
A closeup view of a typical pair of sunspots, with Earth superimposed to show scale.
Original Windows to the Universe artwork by Randy Russell using images from the Royal Swedish Academy of Sciences (sunspot image) and NASA (Earth image).
Sizes of Sunspots
Sunspots are very large structures. Although they look small against the backdrop of the Sun, which has a diameter of 1.4 million km (870 thousand miles), most sunspots could swallow a planet. Many sunspots, like the ones shown in the image on this page, are as large as Earth! Most spots range in size from about 1,500 km (932 miles) to around 50,000 km (31,068 miles) in diameter. Occasionally gigantic sunspots the size of Jupiter appear on the Sun's "surface".
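To put those sizes in perspective, here is a quick diameter comparison. Earth's diameter (about 12,742 km) is not stated in the article and is added here as an assumption for scale:

```python
# Diameters in km; Earth's value is supplied for comparison.
earth = 12_742
sun = 1_400_000
small_spot, large_spot = 1_500, 50_000

print(round(sun / earth))            # Earth diameters across the Sun
print(round(large_spot / earth, 1))  # Earths spanning a giant sunspot
print(round(small_spot / earth, 2))  # fraction of Earth for a small spot
```

So a 50,000 km sunspot spans roughly four Earths side by side, yet still looks like a speck on a Sun that is about 110 Earth diameters across.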
Astronomers believe some other stars also have spots. Young stars seem especially likely to have large numbers of starspots, and some of those may be immense.
Published: 3:38 PM GMT on November 03, 2011
Last month, a team of scientists from Berkeley called the Berkeley Earth Surface Temperature (BEST) group released results from research they did on the Earth surface temperature record. Though there have been numerous studies and time series created on surface temperature, they wanted to take an independent look at the data and create a new temperature record. What they found was surprising to some in the "skeptic" community, though not surprising to most climate scientists.
Dr. Richard Muller is the founder and scientific director of the BEST group, which is made up of physicists, statisticians, and climatologists. Though Dr. Muller has been described as a climate change "skeptic" and "denialist," he has an impressive and extensive curriculum vitae in physics, including work as a consultant for the U.S. Department of Defense, a MacArthur Foundation Fellowship, and the National Science Foundation's Alan T. Waterman Award. His skepticism is evidenced most frequently in the press by his funding from the Koch brothers, who have made billions of dollars in the oil industry. The BEST project also accepted funding from Koch, among many other organizations, though the funders had no influence over methodology or results, which is almost always the case in peer-reviewed science. The BEST group also includes Dr. Judith Curry, the chair of the School of Earth and Atmospheric Sciences at Georgia Tech, who has recently been vocal about the need for a more transparent scientific process, and more eyes on the data, especially when it comes to research on man-made global warming and the temperature record.
The BEST team was open with their hypothesis: they expected to find that, when using temperature stations that other organizations failed to include, the warming trend wouldn't be present, or at least not as dramatic. Their objectives are listed on their website (which also includes access to data and submitted papers), which include:
-- Merging land surface data into a raw dataset that's in a common format and easy to use
-- Developing new and potentially better ways of processing, averaging, and merging the data
-- Creating a new global temperature record
-- To provide not only the raw data and the resulting record, but also the code and tools used to get there, making the process as transparent as possible
Figure 1. Locations of the 39,028 temperature stations in the Berkeley Earth data set (blue). Stations classified as rural are plotted on top in black.
The BEST project collaborators combined data from 15 sources that, wherever possible, did not include the tried-and-true data that the "big three" (NASA, NOAA, or HadCRU) used in their analyses, mainly the GHCN Monthly dataset, which is widely used because of its requirements that each station in the data set have plenty of observations, no gaps, and no erroneous data. However, the BEST project was born to create a new global surface temperature record, and to "see what you get" if you use observations that other institutions have weeded out. BEST looked at data from 39,028 different temperature measurement stations from around the globe (Figure 1), and developed an averaging process to merge the stations into one record, which you see below in comparison to previous records that have been constructed.
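A toy version of the merging step: convert each station to anomalies about its own mean, then average the anomalies for each year. This is only a conceptual illustration with made-up temperatures; BEST's actual averaging method is far more sophisticated:

```python
# Toy illustration of merging stations into one temperature record.
# Temperatures are made-up values in degrees C; two stations with very
# different absolute levels still combine cleanly once each is expressed
# as anomalies relative to its own mean.
stations = {
    "A": {1950: 10.1, 1951: 10.3, 1952: 10.6},
    "B": {1950: 15.0, 1951: 15.1, 1952: 15.5},
}

def anomalies(series):
    base = sum(series.values()) / len(series)
    return {year: t - base for year, t in series.items()}

merged = {}
for series in stations.values():
    for year, anom in anomalies(series).items():
        merged.setdefault(year, []).append(anom)

record = {year: sum(v) / len(v) for year, v in sorted(merged.items())}
print({year: round(a, 2) for year, a in record.items()})
```

Working in anomalies rather than raw temperatures is what lets records from mountaintops and valleys contribute to a single trend estimate, one of the core issues any merging procedure has to handle.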
Figure 2. Temperature time series from the big three: NASA Goddard Institute for Space Science (NASA GISS, blue), NOAA (green), and the Hadley Centre and Climate Research Unit of East Anglia (HadCRU, red) along with the results from the BEST project (black).
The result was a new land surface temperature series to be added to the well-cited records of NOAA, NASA, and HadCRU, in addition to some truly independent, amateur compilations. The new temperature record agrees with the records from "the big three," showing the same warming of 1°C since 1950. BEST also addressed concerns raised by the skeptic community about station bias and the urban heat island effect. They conclude that the urban heat island effect does not contribute significantly to the land temperature rise, given that urban area is only 1% of the land area in the record. They also looked at the stations that Anthony Watts has reported as "poor" quality, and found that these showed the same warming as the stations reported as "OK." This helps to show that temperature stations in previous studies were not "cherry-picked" for warming trends, but selected for honest station quality.
The addition of another (eventually) peer-reviewed temperature series is good, and more eyes looking at the data is good, but the result is not surprising. However, it might have changed the minds of some skeptics who have been wanting to see an analysis from scientists that they find trustworthy. I think Dr. Muller sums their results up nicely in his Wall Street Journal opinion article:
When we began our study, we felt that skeptics had raised legitimate issues, and we didn't know what we'd find. Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that.
The BEST project has four papers out for review in various journals. Having released the results to the public eye before undergoing the scrutiny of peer review, they've also made some updates to the analysis since these papers were submitted, thanks to a peer review process of its own: the internet.
Links and references:
Berkeley Earth Surface Temperature
BEST Press Release
The movies have brainwashed us into thinking that robots should look like people, but the revolution isn’t turning out that way. How are the machines changing, and how will they change us? DISCOVER, with the National Science Foundation and Carnegie Mellon University, posed these questions to four experts in a panel discussion and in video interviews with each scientist individually. Below are the video interviews and the transcript of the panel discussion.
Robin Murphy of Texas A&M is an expert in rescue robots; Red Whittaker of Carnegie Mellon designs robots that work in difficult environments; Javier Movellan of U.C. San Diego studies how robots interact with children; and Rodney Brooks of MIT founded iRobot, maker of the Roomba. Editor in chief Corey S. Powell moderated the panel.
Powell: Let’s start with the basics. What exactly is a robot?
Brooks: A robot is something that senses the world, does some sort of computation, and decides to take an action outside of its physical extremity. That action might be moving around, or it might be grabbing something and moving it. I say “outside its extremity” because I don’t like to let dishwashers be defined as robots.
Whittaker: I’m a little more liberal about that. I worked with robots that cleaned up the Three Mile Island nuclear accident [from 1984 to 1986]. Those were remote controlled, and one of the knocks against them was that they weren’t real robots. Those machines that are tour guides to the Titanic or our eyes on Mars don’t do a lot of thinking either, but they’re good enough in my book.
Movellan: The idea that a system has to operate in space and time, with real-time constraints, is critical. It’s also critical to understand the intelligence of these things—it’s not just intelligence in general but an intelligence situated in a particular world.
Murphy: And I would add that sometimes this intelligence is shared. Is the robot just the physical entity at Three Mile Island or at a disaster site or on Mars? Or is it also right here with us? More and more robots are part of a shared cognition system.
Rodney, you’ve talked about four goals that robot researchers should be aiming for. What are they?
Brooks: First, the object-recognition capabilities of a 2-year-old child. You can show a 2-year-old a chair that he’s never seen before, and he’ll be able to say, “That’s a chair.” Our computer vision systems are not that good. But if our robots did have that capability, we’d be able to do a whole lot more.
Second, the language capabilities of a 4-year-old child. When you talk to a 4-year-old, you hardly have to dumb down your grammar at all. That is much better than our current speech systems can do.
Third, the manual dexterity of a 6-year-old child. A 6-year-old can tie his shoelaces. A 6-year-old can do every operation that a Chinese worker does in a factory. That level of dexterity, which would require a combination of new sorts of sensors, new sorts of actuators, and new algorithms, will let our robots do a whole lot more in the world.
Fourth, the social understanding of an 8- or 9-year-old child. Eight- or 9-year-olds understand the difference between their knowledge of the world and the knowledge of someone they are interacting with. When showing a robot how to do a task, they know to look at where the eyes of the robot are looking. They also know how to take social cues from the robot.
If we make progress in any of those four directions our robots will get a lot better than they are now.
We’ve already had a lot of success with robots in space. The Obama administration recently announced plans to cancel NASA’s Ares rockets and Orion capsule [updated in Obama's April speech], which were intended to take humans back to the moon—and Red, you don’t look at all sad about that. Why not?
Whittaker: This is actually very good news for robotics. Robot missions don’t require immense launch payloads. You don’t have to keep humans warm, keep them fed and watered and breathing.
In fact, you’re working now on a robotic mission that will be done without government funding, right?
Whittaker: Google is offering a $20 million prize for the first robot that sends television signals from the moon, and I intend to win that. There are bonuses for traveling a certain distance and for navigating to a place where humans have sent things before [for instance to the Apollo sites]. That is more deliberative than robotic wandering. It’s nonfederal, but that’s how all of the great technological incentives work. When Lindbergh flew to Paris for his $25,000, it wasn’t a federal program. Great prizes completely transform people’s belief, catapult an industry, and drive technology. And they’re rather fun.
What about all the money NASA has already invested in these rockets? Is there a way to merge the manned program with a robotics program?
Whittaker: Robotic precursors could vastly improve the prospects for human exploration. For example, an orbiting spacecraft recently discovered the opening to a lunar cave. There are extensive caves on the moon, and they’re important because humans don’t do well in the extreme heat and the extreme cold on the surface. Those caves are waiting to be explored, but clearly no human would be the first one in. If you lose 10 robots exploring those caves, you don’t worry about it, but if you lose one person, it could shut down your entire space program.
There are also places near the poles of the moon where it stays light for months at a time. That is arguably the most valuable real estate in the solar system [because solar energy would be available so much of the time]. What a gift it would be for robots to confirm, survey, and establish these areas.
Brooks: In addition to precursor missions, the most astounding thing for us as humans will be if we discover life somewhere else. We’re looking at extrasolar planets [ones around other stars], but there are also a bunch of places in our solar system that look promising—the moons of Jupiter and Saturn, and back again to Mars. For the cost of two shuttle launches, we could have an extensive unmanned mission to each of those places. If Obama’s policy frees up money for robotic probes, it increases our chances of detecting life. That would open up whole new vistas in our understanding, and it would change us philosophically.
ANSI Common Lisp 15 Arrays 15.2 Dictionary of Arrays
The type of a vector that is not displaced to another array, has no fill pointer, is not expressly adjustable, and is able to hold elements of any type is a subtype of type simple-vector.
The type simple-vector is a subtype of type vector, and is a subtype of type (vector t).
- Compound Type Specifier Kind: Specializing.
- Compound Type Specifier Syntax: simple-vector [size]
- Compound Type Specifier Arguments: size - a non-negative fixnum, or the symbol *. The default is the symbol *.
- Compound Type Specifier Description: This is the same as (simple-array t (size)).
- Allegro CL Implementation Details:
ANSI Common Lisp 2 Syntax 2.4 Standard Macro Characters
A semicolon introduces characters that are to be ignored,
such as comments. The semicolon and all characters up to
and including the next newline or end of file are ignored.
2.4.4.1 Examples of Semicolon
2.4.4.2 Notes about Style for Semicolon
Seattle Weekly: News: The Super Flood by Frank Parchman:
Some 5,600 years ago, the body of water we call Puget Sound had an arm that extended 30 miles inland from present-day Elliott Bay in Seattle to a point halfway between Auburn and Sumner. Today, of course, that is the Green River Valley—the narrow, flat suburban land of Kent and Renton and the industrial lowlands of South Seattle. It would be reasonable to think that this change happened gradually, but scientists have determined that most of the long-gone stretch of inland sea was transformed by a single event that created 200 square miles of land in a matter of hours, with waves of mud 20 feet to 600 feet high. Imagine a wall the consistency of wet concrete traveling up to 60 mph. This mudflow destroyed everything in its path, uprooting entire old-growth forests. It hit Puget Sound with such force and with so much material that it flowed underwater for 15 miles, maybe farther. An area of hundreds of square miles was covered with mud and debris up to 350 feet deep.
scanf, fscanf, sscanf, vscanf, vsscanf, vfscanf - input format conversion
#include <stdio.h>

int scanf(const char *format, ...);
int fscanf(FILE *stream, const char *format, ...);
int sscanf(const char *str, const char *format, ...);

#include <stdarg.h>

int vscanf(const char *format, va_list ap);
int vsscanf(const char *str, const char *format, va_list ap);
int vfscanf(FILE *stream, const char *format, va_list ap);
The scanf() family of functions read input according to the format string, as described below. This format may contain ``conversion specifications''; the results of such conversions, if any, are stored through a set of pointer arguments.
The scanf() function reads input from the standard input stream stdin, fscanf() reads input from the supplied stream pointer stream, and sscanf() reads its input from the character string pointed to by str.
The vfscanf() function is analogous to vfprintf(3) and reads input from the stream pointer stream using a variable argument list of pointers (see stdarg(3)). The vscanf() function scans a variable argument list from the standard input and the vsscanf() function scans it from a string; these are analogous to the vprintf() and vsprintf() functions, respectively.
Each successive pointer argument must correspond properly with each successive conversion specifier (but see ``suppression'' below). All conversions are introduced by the % (percent sign) character. The format string may also contain other characters. Whitespace (such as blanks, tabs, or newlines) in the format string matches any amount of whitespace, including none, in the input. Everything else matches only itself. Scanning stops when an input character does not match such a format character. Scanning also stops when an input conversion cannot be made (see below).

Following the % character introducing a conversion there may be a number of flag characters, as follows:
* Suppresses assignment. The conversion that follows occurs as usual, but no pointer is used; the result of the conversion is simply discarded.

h Indicates that the conversion will be one of dioux or n and the next pointer is a pointer to a short int (rather than int).

l Indicates either that the conversion will be one of dioux or n and the next pointer is a pointer to a long int (rather than int), or that the conversion will be one of efg and the next pointer is a pointer to double (rather than float).

q Indicates that the conversion will be one of dioux or n and the next pointer is a pointer to a quad_t (rather than int).

L Indicates that the conversion will be efg and the next pointer is a pointer to long double.
In addition to these flags, there may be an optional maximum field width, expressed as a decimal integer, between the % and the conversion. If no width is given, a default of ``infinity'' is used (with one exception, below); otherwise at most this many characters are scanned in processing the conversion. Before conversion begins, most conversions skip whitespace; this whitespace is not counted against the field width.
The following conversions are available:

% Matches a literal `%'. That is, `%%' in the format string matches a single input `%' character. No conversion is done, and assignment does not occur.

d Matches an optionally signed decimal integer; the next pointer must be a pointer to int.

D Equivalent to ld; this exists only for backwards compatibility.

i Matches an optionally signed integer; the next pointer must be a pointer to int. The integer is read in base 16 if it begins with `0x' or `0X', in base 8 if it begins with `0', and in base 10 otherwise. Only characters that correspond to the base are used.

o Matches an octal integer; the next pointer must be a pointer to unsigned int.

O Equivalent to lo; this exists for backwards compatibility.

u Matches an optionally signed decimal integer; the next pointer must be a pointer to unsigned int.

x Matches an optionally signed hexadecimal integer; the next pointer must be a pointer to unsigned int.

X Equivalent to x.

f Matches an optionally signed floating-point number; the next pointer must be a pointer to float.

e Equivalent to f.

g Equivalent to f.

E Equivalent to f.

G Equivalent to f.

s Matches a sequence of non-whitespace characters; the next pointer must be a pointer to char, and the provided array must be large enough to accept and store all the sequence and the terminating NUL character. The input string stops at whitespace or at the maximum field width, whichever occurs first. If specified, the field length refers to the sequence being scanned rather than the storage space, hence the provided array must be 1 character larger for the terminating NUL character.

c Matches a sequence of width count characters (default 1); the next pointer must be a pointer to char, and there must be enough room for all the characters (no terminating NUL is added). The usual skip of leading whitespace is suppressed. To skip whitespace first, use an explicit space in the format.

[ Matches a nonempty sequence of characters from the specified set of accepted characters; the next pointer must be a pointer to char, and there must be enough room for all the characters in the string, plus a terminating NUL character. The usual skip of leading whitespace is suppressed. The string is to be made up of characters in (or not in) a particular set; the set is defined by the characters between the open bracket [ character and a close bracket ] character. The set excludes those characters if the first character after the open bracket is a circumflex ^. To include a close bracket in the set, make it the first character after the open bracket or the circumflex; any other position will end the set. The hyphen character - is also special; when placed between two other characters, it adds all intervening characters to the set. To include a hyphen, make it the last character before the final close bracket. For instance, `[^]0-9-]' means the set ``everything except close bracket, zero through nine, and hyphen''. The string ends with the appearance of a character not in the (or, with a circumflex, in) set or when the field width runs out.

p Matches a pointer value (as printed by `%p' in printf(3)); the next pointer must be a pointer to void.

n Nothing is expected; instead, the number of characters consumed thus far from the input is stored through the next pointer, which must be a pointer to int. This is not a conversion, although it can be suppressed with the * flag.

For backwards compatibility, other conversion characters (except ` ') are taken as if they were `%d' or, if uppercase, `%ld', and a conversion of `% ' causes an immediate return of EOF.
These functions return the number of input items assigned, which can be fewer than provided for, or even zero, in the event of a matching failure. Zero indicates that, while there was input available, no conversions were assigned; typically this is due to an invalid input character, such as an alphabetic character for a `%d' conversion. The value EOF is returned if an input failure, such as an end-of-file, occurs before any conversion. If an error or end-of-file occurs after conversion has begun, the number of conversions which were successfully completed is returned.
getc(3), printf(3), strtod(3), strtol(3), strtoul(3)
The functions fscanf(), scanf(), and sscanf() conform to ANSI X3.159-1989 (``ANSI C'').
The functions vscanf(), vsscanf(), and vfscanf() first appeared in 4.4BSD.

All of the backwards compatibility formats will be removed in the future.
Numerical strings are truncated to 512 characters; for example, %f and %d are implicitly %512f and %512d.
OpenBSD 3.6 January 31, 1995
Sun & Volcanoes Control Climate
It is accepted that volcanic eruptions can have a major impact on short term climate. A new study in Nature Geoscience uses instrument records, proxy data and climate modeling to show that multidecadal variability is a dominant feature of North Atlantic sea-surface temperature (SST), which, in turn, impacts regional climate. It turns out that the timing of multidecadal SST fluctuations in the North Atlantic over the past 600 years has, to a large degree, been governed by changes in external solar and volcanic forcings. Solar influence is not surprising but the fact that volcanoes cause climate change lasting decades has some significant implications for those trying to model climate over the next century.
When a volcano erupts it spews large quantities of ash, water vapor, sulfur dioxide and even some carbon dioxide into the atmosphere. Sulfur dioxide reacts with water to form sulfuric acid droplets (aerosol particles), which are highly reflective and reduce the amount of incoming sunlight, leading to what some refer to as a “volcanic winter.” An eruption large enough to depress global temperatures by 1°C (1.8°F) and trigger widespread crop failures for several years afterwards should occur about once every 200-300 years.
Volcanic aerosols are also injected directly into the stratosphere, where they modify both short-wave and long-wave radiation transfer. This can cause strong heating of the lower tropical stratosphere by absorption of terrestrial and solar near-infrared radiation. The strengthened polar vortex that follows traps the wave energy of the tropospheric circulation, and the North Atlantic oscillation (NAO) dominates winter circulation, producing winter warming over large parts of the Northern Hemisphere. Evidently volcanoes first cause cooling and then longer-term warming.
A volcano modifying the climate.
In “External forcing as a metronome for Atlantic multidecadal variability,” Odd Helge Otterå et al. examine the driving forces behind the Atlantic multidecadal oscillation (AMO), a basin-wide variation marked by alternation of warm and cold sea surface temperature (SST) anomalies in the North Atlantic with a period of about 60–80 years. Their analysis of multiple proxies indicates that AMO variability has existed for several centuries.
It has been suggested, on the basis of climate model simulations, that these variations are internally driven and related to multidecadal fluctuations in the Atlantic meridional overturning circulation (AMOC). Otterå et al. find that the AMO is not solely driven by changes in the AMOC. Instead, external forcings such as total solar irradiance (TSI) variations and volcanic eruptions are important drivers. As stated in the article abstract:
We find that volcanoes play a particularly important part in the phasing of the multidecadal variability through their direct influence on tropical sea-surface temperatures, on the leading mode of northern-hemisphere atmosphere circulation and on the Atlantic thermohaline circulation. We suggest that the implications of our findings for decadal climate prediction are twofold: because volcanic eruptions cannot be predicted a decade in advance, longer-term climate predictability may prove challenging, whereas the systematic post-eruption changes in ocean and atmosphere may hold promise for shorter-term climate prediction.
That longer-term climate predictions “may prove challenging” is science speak for “probably never work.” The researchers used a fully coupled climate model, the Bergen Climate Model (BCM), to demonstrate that external forcing has been instrumental in pacing multidecadal variability in the Atlantic region over the past 600 years. A total of seven simulations were carried out, the results of which are shown in the figures below.
a, Simulated standardized indices of AMO (black), AMOC (purple), global SST PC1 (grey) and PC3 (pink) together with reconstructed standardized AMO indices based on multiple proxies (dark green) and tree-ring data (light green). Correlations (α<0.1) and root mean square errors between EXT600 and reconstructions are also shown. b, Regression of global SST in EXT600 on PC1. c, The same as b, but for PC3. d, Cross-correlations of the simulated AMO, PC1, PC3 and AMOC indices with the TSI forcing in EXT600. Positive lags mean that the forcing is leading. e, The same as d, but for correlations with the total (TSI+volcano) forcing. f, Cross-correlations of the simulated AMO, PC1 and PC3 indices with the AMOC index. Positive lags mean that the AMOC is leading. In d–f significance levels (α<0.05) are shown in grey shading.
Several earlier studies have suggested lagged relationships between low-frequency TSI variations and the NAO. The proposed mechanisms include atmospheric teleconnections from the Pacific Ocean as well as stratosphere–troposphere coupling. Otterå et al. reinforce the findings previously reported on this blog (see “Pacific Warming, Atlantic Hurricanes & Global Climate Non-Disruption”). “In EXT600, we find no significant correlation between the simulated NAO and the applied TSI forcing,” they state. “However, there is a significant negative correlation between the NAO and the total external forcing, suggesting a potential role for volcanoes.”
Positive or increasing NAO is typically associated with large tropical volcanic eruptions. It is known from both observations and other modeling studies that large tropical eruptions have a tendency to induce a positive NAO response, causing the well-known posteruption winter warming phenomenon over Northern Hemisphere land masses. However, climate models have only shown limited ability in simulating this robust, observation-based feature, possibly linked to inadequate treatment of stratosphere–troposphere dynamical interactions.
Simulated responses to volcanic forcing.
The authors present a number of possible caveats to the findings. “For example, it could be argued that the BCM underestimates the internal variability of the AMOC on multidecadal timescales,” they state. “This question is, however, difficult to adequately address in the absence of instrumental observations of the AMOC.” As usual, the models are untrustworthy and there is a lack of good hard empirical data. Nonetheless, Otterå et al. conclude:
Although the external forcing is clearly important for the AMO characteristics in the BCM, it cannot explain all of the simulated variability. In the model, and also probably in nature, there is an interplay between the intrinsic climate variability and the external forcing. Rather, we conclude that the external forcing acts as a metronome for the Atlantic multidecadal variability. In view of this, the frequency and intensity of external forcing need to be better understood and quantified to produce reliable near-term climate forecasts.
Volcanoes are among Earth's most destructive natural phenomena. Is it any wonder, then, that to predict climate variation over multidecadal timescales you need to be able to predict volcanic eruptions? The impact of a volcanic eruption is not a smooth, continuously varying function over time, like the waxing and waning of solar intensity or the slow buildup of gases in the atmosphere. A volcanic eruption sends a sudden shock throughout the global environment, a pulse of change, a perturbation of the system.
Volcanoes cause decadal scale climate change.
Climate models have no way to model such phenomena since future eruptions cannot be predicted. Most models use a constant value to represent aerosol inputs averaged over time, which means they are always wrong: they overestimate levels when there have been no recent eruptions and underestimate levels when an eruption occurs. Since volcanoes are unpredictable in their timing, location and intensity, this is pretty much a show stopper for climate prediction models.
Albert Einstein once said that God does not play dice with the Universe. Here is proof that Einstein was wrong, at least about our little corner of the Universe, because it looks like nature does play dice with climate change. This, of course, has not stopped climate scientists from trying to play god.
Be safe, enjoy the interglacial and stay skeptical.
The precision, down to a tenth of a meter, calculated by the University of Granada in their press release below is simply stunning. I wonder what the error bars are on 2.7 meters over the seven years of the study? And how does one filter out seasonal weather effects over such a short time span? Inquiring minds want to know.
Here’s the main points:
- Vascular plants have moved 2.7 m upwards, which might lead to the extinction of high-mountain species.
- While species diversity in summits of temperate-boreal regions has increased, it has declined in Mediterranean regions.
- Such are the results obtained from a study published in Science, where University of Granada researchers participated.
Researchers at the University of Granada Department of Botany have participated in an international study that has confirmed that global warming is causing plants to migrate to higher altitudes. The study –recently published in Science– analyzed species diversity shifts in 66 summits of 17 European ranges between 2001 and 2008.
In the Iberian Peninsula, two target regions were selected in the Pyrenees (Ordesa) and Sierra Nevada (Granada). Researchers found that the species under study had migrated an average of 2.7m upwards. “This finding confirms the hypothesis that a rise in temperatures drives Alpine flora to migrate upwards. As a result, rival species are threatened by competitors, which are migrating to higher altitudes. These changes pose a threat to high-mountain ecosystems in the long and medium term” the authors state.
Boreal-Temperate and Mediterranean Summits
The study also reveals an average increase of 8% in the number of species growing in summits of European mountains. However, such increase is not general, as of the 66 peaks in boreal and temperate areas, the majority revealed an increase in species diversity, while 8 out of the 14 summits in the Mediterranean area revealed a decline in the number of species represented.
Furthermore, the study revealed that species diversity has changed more significantly at low elevation sites –at the upper limit of the forest or an equivalent altitude– in the Mediterranean region than in other regions.
In Mediterranean mountains (Sierra Nevada, Corsica, Central Apennines and Crete), the rise in temperatures is causing a decline in annual average rainfall, which results in longer summer droughts. Consequently, temperature rise and droughts pose a threat to unique endemic species.
The mountains that present the most significant shifts in species diversity are Mediterranean mountains –located in Southern Europe– where the climate is different from that of the rest of Europe. In general, moist-soil species are more vulnerable to climate change, though high-mountain endemic species are also affected. “For example, in Sierra Nevada, the observation plots revealed a decrease in the number of emblematic species such as Androsace vitaliana subsp. nevadensis, Plantago nivalis, and Artemisia granatensis”, the University of Granada professor, Joaquín Molero Mesa, explains.
Another Sampling Site
Sierra Nevada has very special characteristics, as it is the only mountain range in the Iberian Peninsula that has a Mediterranean climate from the summit to the foothills. Consequently, the research group coordinated by professor Molero Mesa –with the special collaboration of Mª Rosa Fernández Calzado– placed another sampling site (four summits located above 2,500 m) in 2005. The purpose was to increase the sample size and obtain more reliable results. In two years, a comparative study of the results obtained in the first and second studies will be conducted.
Thus, Sierra Nevada is the only mountain range with two target regions under observation. The research group is coordinated with the Observatorio de Cambio Global de Sierra Nevada, and has established –in collaboration with a research group from Morocco– another target region in the high Western Atlas, where observation plots and thermometers will be installed next summer. The purpose of this action is to better understand climate and species variations in the most vulnerable environment: the Mediterranean region.
This study is part of the Project GLORIA (The Global Observation Research Initiative in Alpine Environments) initiated in Europe in 2000 and which has spread worldwide.
Recent Plant Diversity Changes on Europe’s Mountain Peaks. Science. DOI: 10.1126/science.1219033
I often wonder if the act of studying these plants doesn’t account for some of the changes, such as tracking seeds around in the mud on your shoes, etc.
The plant in the photo with the press release, Androsace vitaliana, turns out to be easy to grow in your garden. So it follows that I’m not too worried about this news.
kaufDA, a team in Germany, has started an initiative called “Make it green” whose goal is to reduce carbon emissions worldwide. One aspect of the program is to offset the carbon footprint resulting from the use of the Internet by both raising public awareness about protecting the environment and by planting more trees. Working with the Arbor Day Foundation, kaufDA will plant one tree in the Plumas National Forest in Northern California for every participating blog.
If you have a blog, you can have a tree planted in your honor by spreading the word about the program. See the kaufDA website for more details.
About Carbon Emissions and Global Warming
The climate change phenomenon known as global warming is a result of increased carbon emissions from driving cars, home energy use, and the energy used to produce all of the products and services we consume. The steady upward trend in temperatures has had, and will continue to have, a drastic effect on the planet and its inhabitants. For example, the United States Geological Survey (USGS) has predicted a loss of two-thirds of the world’s polar bears by 2050 due to declines in ice habitats. According to a World Wildlife Fund report in the journal Climatic Change, by 2070, the sea levels near Bangladesh will rise 11 inches, submerging 96% of the Bengal tiger habitat. Many animals will be threatened by the change and loss of habitat due to global warming.
About Planting Trees
The United Nations Framework Convention on Climate Change (UNFCCC) assumes that one tree absorbs approximately 10 kg (22 lb.) of carbon dioxide emissions per year. The Arbor Day Foundation is working to plant more trees in the Plumas National Forest in Northern California, which lost 88,000 acres of forest due to fires in 2007.
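Taking the UNFCCC figure at face value, the offset arithmetic is a one-liner. This is a back-of-the-envelope sketch: the per-kilometer car emission factor below is an illustrative assumption, not a number from this article.

```python
import math

# The article's UNFCCC figure: ~10 kg of CO2 absorbed per tree per year.
KG_CO2_PER_TREE_PER_YEAR = 10.0
# ASSUMED illustrative emission factor (~200 g CO2/km for a typical car);
# this number is not from the article.
KG_CO2_PER_KM_DRIVEN = 0.2

def trees_to_offset(km_driven_per_year: float) -> int:
    """Whole trees needed to absorb one year's driving emissions."""
    emissions_kg = km_driven_per_year * KG_CO2_PER_KM_DRIVEN
    return math.ceil(emissions_kg / KG_CO2_PER_TREE_PER_YEAR)

print(trees_to_offset(10_000))  # 10,000 km/year -> 200 trees
```

By this rough measure, a typical year of driving takes a couple hundred trees to offset, which puts the one-tree-per-blog program in perspective.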
Aside from planting trees, you can help curb global warming by reducing your carbon emissions. This includes walking or taking public transportation instead of driving, using energy saver appliances and light bulbs, buying locally grown produce, recycling, and more. | <urn:uuid:b0ed90db-9b7a-4f9b-83ff-d73a693b373a> | 3.28125 | 414 | Personal Blog | Science & Tech. | 39.793252 |
Astronomy Day is a grass roots movement to share the joy of astronomy with the general population - "Bringing Astronomy to the People." On Astronomy Day, thousands of people who have never looked through a telescope will have an opportunity to see first hand what has so many amateur and professional astronomers all excited. Astronomy clubs, science museums, observatories, universities, planetariums, laboratories, libraries, and nature centers host special events and activities to acquaint their population with local astronomical resources and facilities. It is an astronomical PR event that helps highlight ways the general public can get involved with astronomy - or at least get some of their questions about astronomy answered. Astronomy Week is the same concept as Astronomy Day except seven times longer.
Astronomy Day occurs sometime between mid April and mid May on a Saturday near or before the 1st quarter Moon. Astronomy Week starts the Monday preceding Astronomy Day and ends the following Sunday. Astronomy Week was created to give sponsoring organizations a longer period of time to host special events. Some local Astronomy Week celebrations have actually been longer than just one week.
| Year | Astronomy Week   | Astronomy Day |
|------|------------------|---------------|
| 2003 | April 27 - May 3 | May 3         |
| 2004 | April 18 - 24    | April 24      |
| 2005 | April 18 - 24    | April 16      |
| 2006 | April 30 - May 7 | May 3         |
| 2007 | April 16 - 22    | April 21      |
| 2008 | May 5 - 11       | May 10        |
| 2009 | April 29 - May 3 | May 2         |
| 2010 | April 19 - 25    | April 24      |
| 2011 | May 2 - 8        | May 7         |
| 2012 | April 14 - 20    | April 28      |
| 2013 | April 15 - 21    | April 20      |
| 2014 | May 5 - 11       | May 10        |
Astronomy Day events take place at hundreds of sites across the United States. Internationally England, Canada, New Zealand, Finland, Sweden, the Philippines, Argentina, Malaysia, New Guinea plus many other countries have hosted Astronomy Day activities. Each location plans and executes events that work best for their local area.
Activities have included talks by astronauts, astronomers and NASA personnel, Moon rocks, a Moon gravity simulator, games, prizes, astronomical food, scale models of the solar system, space hardware, space ballets and poetry and, of course, actual outdoor observing (daytime and nighttime) with a telescope. Daytime observations include SAFE ways to observe the Sun. Many organizations host elaborate exhibits at shopping malls, museums, nature centers, libraries, etc. Teachers have used Astronomy Day to promote the study of astronomy with their classes.
Astronomy Day was born in California in 1973. Doug Berger, then president of the Astronomical Association of Northern California, decided that rather than try to entice people to travel long distances to visit observatory open houses, they would set up telescopes closer to where the people were - busy locations - urban locations like street corners, shopping malls, parks, etc.
His strategy paid off. Not only did Astronomy Day go over with a bang, not only did the public find out about the astronomy club, they found out about future observatory open houses. Since the public got a chance to look through a portable telescope, they were hooked. They wanted to see what went on at the bigger telescopes, so they turned out in droves at the next observatory open house.
© 2007 Ronald A. Leeseberg. This feature was updated ©2012 Dawn Jenkins | <urn:uuid:ac23a443-da23-4e58-8827-f78cc8bd77ca> | 3.484375 | 734 | Knowledge Article | Science & Tech. | 37.632378 |
Large spiral galaxy NGC 4945 is seen near the center of this telescopic field of view. In fact, NGC 4945 is almost the size of our own Milky Way Galaxy. Its own dusty disk, young blue star clusters, and pink star forming regions stand out in the sharp, colorful telescopic image. About 13 million light-years distant toward the southern constellation Centaurus, NGC 4945 is only about six times farther away than Andromeda, the nearest large spiral galaxy to the Milky Way. Though the galaxy's central region is largely hidden from view for optical telescopes, X-ray and infrared observations indicate high energy emission and star formation in the core of NGC 4945. Its obscured but active nucleus qualifies the gorgeous island universe as a Seyfert galaxy and likely home to a central supermassive black hole.
Credit & Copyright:
J. Harvey, S. Mazlin, D. Verschatse, J. Joaquin Perez, | <urn:uuid:b474e7d4-d209-4f91-b984-e4044c1cd38d> | 2.90625 | 200 | Knowledge Article | Science & Tech. | 55.081825 |
Did Ancient Humans Have Knowledge of the Electromagnetic (EM) Spectrum?
By Glenn Kreisberg, Radio Frequency (RF) Spectrum Engineer
It’s been suggested, at various times, that ancient humans had knowledge and use of unseen powers, forces and energy fields. Could these unseen forces and fields consist of electromagnetic frequency waves and particle fields that make up the EM Spectrum? This is not a simple question to answer.
What evidence exists, and what kind of evidence may come to light, to support such a claim?
There is no question that the EM Spectrum, as it has always existed, is a naturally occurring part of our environment: a continuous sequence of electromagnetic energy arranged according to wavelength or frequency, generated by particle motion (vibrations) and pulses created from many sources.
There is also no doubt that many ancient cultures had a connection with nature and natural forces that was fundamental and could only be described as intimate and profound in ways we moderns can merely attempt to comprehend.
This article will examine some of what evidence and possible evidence exists, that suggests ancient knowledge of the EM spectrum, examine its scientific foundation and whether it can be used to form a hypothesis and hopefully, be applied to solving this mystery.
From ancient times to today, humans have demonstrated an inherent curiosity and the desire to understand mysterious and odd phenomena, signs and images. For the vanished civilizations and cultures of Egypt, Sumer, and other early civilizations, and actually for the entire lapsed time of humankind, there remain many unsolved and unsettled images, messages, texts, tablets, artefacts, inscriptions, engravings, schemes, and phenomena that suggest a connection to unseen forces.
As modern society explores the mysterious meanings of certain universal cultural myths and symbols, so too may have humans from earlier civilizations, who repeated and venerated various motifs throughout time and traditions. The origin and meaning of these mysterious symbols may have, in fact, remained unknown even to the ancient cultures that utilized them, the ancients knowing only that certain signs and symbols were important clues to even more ancient lost knowledge and powers.
It has been noted by many that the designs and motifs of ancient architecture often reflect and in many ways try to mimic the patterns, signs and signals found to occur naturally in our environment. Most significant, for the purposes of this report, are the many variations of the basic waveforms, be it sine wave, saw-tooth wave, box wave or the endless variety of spirals and wave forms that adorn ancient cave walls, temples, and structures and appear in architecture, scrolls, tablets and inscriptions throughout the ancient world.
Below are examples of waveform symbols appearing in ancient designs and motifs that existed in ancient cultures from around the world.
Fig. 1 The dragon or serpent is an ancient Chinese symbol for an unseen force. This one appears to take the shape of a box wave.
Southeastern Native American cultures dating back 20 to 25 thousand years extensively used waveform symbols for ornamentation on nearly all handmade items and wares such as pottery and textiles. The variety is nearly endless. Some typical examples follow:
As I have mentioned, the EM Spectrum exists naturally, occurring as a part of our natural environment. And again, acknowledging the ancients’ intimacy and interdependency with nature, it would not be surprising if they possessed some knowledge of this naturally occurring “tool.”
The ancient cultures of this world are known to have identified and utilized the forces of nature to their benefit, including water, fire, wind, and sound. Are we to believe that mankind is only now, in the past century, exploiting the waves and frequencies of the EM spectrum for the first time? I'm not so sure. And perhaps more importantly, if ancient culture did possess this knowledge, where did it come from and how was it processed?
Mayan Pyramid of Kukulkan at Chichen Itza has a saw tooth wave form built into its architecture.
A thorough examination of this subject must begin first with tracking the breakthroughs and discoveries that have occurred throughout history and that have led to the concepts and principles that make up modern electromagnetic theory. (See Appendix 1)
Electromagnetic radiation has been around since the birth of the universe; light is its most familiar form. Electric and magnetic fields are part of the spectrum of electromagnetic radiation, which extends from static electric and magnetic fields, through radiofrequency and infrared radiation, to X-rays.
From written history it appears that many of the concepts now familiar in EM theory were explored and developed during a time when many modern high-tech investigative and detection tools and methods did not exist.
But is it possible that the ability to manipulate the particles and waves of the EM spectrum was discovered and developed even earlier than written history suggests? Could it be that many of the symbols, images, architectures, and myths of ancient cultures are representations reflecting the possession of such knowledge? | <urn:uuid:3f01d6fb-b344-41d4-88d7-b047c182546a> | 3.28125 | 1,009 | Personal Blog | Science & Tech. | 25.585879 |
THE Hubble Space Telescope may soon be able to see other Earths - or at least those on their last legs - by detecting their evaporating oceans.
Almost all the extrasolar planets we know about are Jupiter-sized gas giants, Earth-sized ones being too small for existing methods to detect. But Michael Jura of the University of California in Los Angeles points out that any planet like Earth will eventually warm up as its star gets older and brighter. The oceans will start to evaporate, sending water vapour into the upper atmosphere, where ultraviolet light will split it into hydrogen and oxygen. Jura says that over billions of years, the hydrogen should form a tenuous cloud perhaps 5 million kilometres across (
Jura wants to use Hubble to monitor perhaps five nearby stars for this absorption. It's a gamble, ...
| <urn:uuid:493eaad7-18a4-463e-8d9c-043376272111> | 3.734375 | 192 | Truncated | Science & Tech. | 47.834709 |
The Liquids Reflectometer enables scientists to look within and between layers of film and determine how they are structured.
Svetlana Sukhishvili is a professor of chemistry and co-director of the Nanotechnology Graduate Program at Stevens Institute of Technology in Hoboken, New Jersey. As part of the Spallation Neutron Source's (SNS) user program, she and her students are using the facility's Liquids Reflectometer to study layered polymer films. These thin films are composed of polymer layers of varying composition, built up one layer at a time. "These layered films currently have many potential applications in controlled delivery, as well as in optics as antifog coatings and antireflective coatings," she says. "However, very little is known about the internal structure of such films." The Liquids Reflectometer provides a way for scientists to look within and between the layers of film and determine how they are structured and how those structural features are related to the film's properties. "We asked several questions," Sukhishvili says. "When we are depositing polymer layers on surfaces to make films, will these films remain layered when used in wet environments? Also, if the internal structure changes, under what conditions does it change and how does this affect the internal structure of the film?"
One of the technology's applications is controlled delivery, or "time-release" delivery of functional small molecules. This behavior could be utilized in a drug delivery system or as a coating on the surface of a biomedical device that would release different compounds at predetermined times. The polymer layers can trap small molecules and then release them on demand or in response to changes in environmental conditions, such as temperature or acidity. "For these films to function properly, it is important that the layers remain well structured," Sukhishvili explains. "This depends on the dynamics of the polymer molecules at the layer boundaries, which are not well understood."
Sukhishvili's group created their films by depositing layers of material containing deuterium, a heavy isotope of hydrogen that is easily "seen" by neutrons, with hydrogen-containing layers in between. The purpose of having alternating layers containing deuterium and hydrogen is to provide contrast between the layers at their interfaces. "We wanted to look at how the structure would spread when we changed the environmental conditions. We found interesting trends between the strength of intermolecular interactions and intertwining of polymer layers and the structure of the films." Sukhishvili points out that using alternating layers of deuterated and hydrogenated materials to enhance contrast is a unique technique and is well suited to the capabilities of the SNS. "We are one of the few groups in the country that decided to take advantage of the SNS to look at this fundamentally important, yet practically relevant, question to shed some light on the structure-property relationships of such films," she says.
Sukhishvili views the SNS as a new kind of user facility that offers broad opportunities for networking and collaboration. "The SNS is a growing and forward-thinking user facility," she says. "I also think the potential for collaboration with ORNL's Center for Nanophase Materials Sciences and the combination of these capabilities with the ability to do new and exciting neutron reflectivity experiments provide much broader options." The result, she believes, will be an attraction to users for years to come.
Web site provided by Oak Ridge National Laboratory's Communications and External Relations | <urn:uuid:1c4fd98f-7073-4ee6-976c-ace731296310> | 3.0625 | 713 | Knowledge Article | Science & Tech. | 27.210877 |
Science Agenda: To the Moons, NASA; July 2012; Scientific American Magazine; by The Editors; 1 Page(s)
Last year, after a lengthy, circuitous journey through the solar system, a NASA probe known as MESSENGER entered into orbit around Mercury. No spacecraft had visited the innermost planet in more than three decades, and none has paid an extended visit. With MESSENGER's arrival, NASA and its international counterparts now have spacecraft stationed at Mercury, Venus, Mars and Saturn—not to mention Earth and the moon. Two more NASA craft are en route to Jupiter and Pluto; yet another ought to reach the dwarf planet Ceres in 2015. Humankind's presence has never stretched so far.
It could stretch farther still, with robots spying down on bizarre moons that might harbor alien life or on the little-understood outermost planets. An even more novel campaign would ferry Martian rocks back to Earth for analysis. NASA had been on track to begin such an ambitious project, but alas, political maneuvering recently forced the space agency to scrap its plans. | <urn:uuid:e6f7da40-1341-4bbe-8246-7658abc26ec1> | 3.484375 | 219 | Truncated | Science & Tech. | 42.223182 |
Discover the cosmos! Each day a different image or photograph of our fascinating universe is featured, along with a brief explanation written by a professional astronomer.
2003 February 20
Explanation: A cold wind blows from the central star of the Boomerang Nebula. Seen here in a detailed false-color image recorded in 1998 by the Hubble Space Telescope, the nebula lies about 5,000 light-years away towards the grand southern constellation of Centaurus. The symmetric cloud appears to have been created by a high-speed wind of gas and dust blowing from an aging central star at speeds of nearly 600,000 kilometers per hour. This rapid expansion has cooled molecules in the nebular gas to about one degree above absolute zero - colder than even the cosmic background radiation - making it the coldest region observed in the distant Universe. Shining with light from the central star reflected by dust, the frigid Boomerang Nebula is believed to be a star or stellar system evolving toward the planetary nebula phase.
Authors & editors:
Jerry Bonnell (USRA)
NASA Technical Rep.: Jay Norris. Specific rights apply.
A service of: LHEA at NASA / GSFC
& NASA SEU Edu. Forum
& Michigan Tech. U. | <urn:uuid:58b7b78b-e3b5-402d-b111-fe0f7b96a23e> | 3.75 | 256 | Knowledge Article | Science & Tech. | 45.208454 |
Tuckett, R. P. (2008) CF3SF5 : a ‘super’ greenhouse gas. Education in Chemistry, 45. pp. 17-21. ISSN 0013-1350
URL of Published Version: http://www.rsc.org/Education/EiC/issues/2008Jan/CF3SF5SuperGreenhouseGas.asp
One molecule of the anthropogenic pollutant trifluoromethyl sulphur pentafluoride (CF\(_3\)SF\(_5\)), an adduct of the CF\(_3\) and SF\(_5\) free radicals, causes more global warming than one molecule of any other greenhouse gas yet detected in the Earth’s atmosphere. That is, it has the highest per-molecule radiative forcing of any greenhouse pollutant, and the value of its global warming potential is only exceeded by that of SF\(_6\). First, the greenhouse effect is described, along with the properties of a molecule that cause it to be a significant greenhouse gas, and therefore the contributions that physical chemistry can make to an improved understanding of the effect. Second, the chemistry of CF\(_3\)SF\(_5\), first discovered in the atmosphere in 2000, is taken as a case study. Experiments using tunable vacuum-UV radiation, electrons and small cations have determined some of the relevant physical properties of this molecule, including the strength of the CF\(_3\)-SF\(_5\) covalent bond. The main sink route to remove CF\(_3\)SF\(_5\) from the earth’s atmosphere is low-energy electron attachment in the mesosphere. Third, it is shown how such data are important inputs to determine the lifetime of this pollutant, ca. 1000 years, in the atmosphere. Finally, the generic lessons that can be learnt from the study of such long-lived greenhouse gases are described.
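The quoted lifetime of ca. 1000 years gives a feel for how persistent the pollutant is. The sketch below uses a single-exponential (first-order) decay model, which is a standard textbook simplification and not a calculation taken from the paper:

```python
import math

LIFETIME_YEARS = 1000.0  # atmospheric lifetime of CF3SF5 quoted in the abstract

def fraction_remaining(years: float, lifetime: float = LIFETIME_YEARS) -> float:
    """First-order (single-exponential) decay: N(t)/N0 = exp(-t/lifetime)."""
    return math.exp(-years / lifetime)

# After one human lifetime, almost all of today's CF3SF5 is still airborne.
print(round(fraction_remaining(80), 3))    # 0.923
print(round(fraction_remaining(1000), 3))  # 0.368 (1/e after one lifetime)
```

The point of the model: emissions made today are effectively permanent on any policy-relevant timescale.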
Repository Staff Only: item control page | <urn:uuid:8de6526e-491e-4801-b4be-2cd01a7f7231> | 3.203125 | 412 | Academic Writing | Science & Tech. | 53.485931 |
Global Warming Effects Being Felt Around the World
North America is already feeling the effects of warmer temperatures in a variety of ways.
In New England, maple syrup production is declining because of warmer than usual winters. Farmers who used to begin tapping their trees in the beginning of March now must begin tapping in February. Last year some began tapping in mid-February and still missed much of the sap.
The National Oceanic and Atmospheric Administration reports that temperatures have risen by 2.8 degrees in the Northeast since 1971. Tim Perkins, director of the Proctor Maple Research Center, calls the situation "dire." His data shows that over the last 40 years the maple sugaring season has moved steadily earlier and become steadily shorter.
In Northern Canada, warming temperatures are threatening the boreal forests, the "lungs of the world." Increasing drought and insect infestation are taking their toll. As the trees dry, forest fires increase, sending more carbon into the atmosphere. The number of forest fires doubled in the 1980s and 90s from the previous decade and are expected to double again this century.
Nearly half of the carbon that exists on land is contained in the boreal forests that stretch across the northern latitudes of North America, Europe and Asia. Steven Kallick, an expert on the boreal forests comments that; "We are taking risks with a system we don't understand that is absolutely loaded with carbon. The impact could be enormous."
In the U.S. Southwest, rapidly growing population and a seven year drought is stressing water supplies. The situation is only expected to get worse as temperatures warm. One study predicted as much as a 20 percent decline in water supply, greater than water saving measures could compensate for.
The Colorado River basin has seen faster temperature growth than other parts of the U.S. and are now 1.5 degrees warmer than in the 1950s. While local officials have taken measures to develop more local sources of water, climate change will inevitably collide with population growth.
Finally, a study on children's health has shown that warmer temperatures increase the number of sick children. A two year study at a major children's hospital showed that for every five degree rise in temperature, two more children under six were admitted with fever. The study showed that children are less able to regulate their bodies against temperature changes than adults, increasing their risk of fever and gastric diseases.
While the more profound effects of global warming may be decades or centuries away, climate change is already making itself felt in many ways around the globe. | <urn:uuid:4a50f9ee-378a-498c-8724-7a1679f2ed8c> | 3.703125 | 518 | Personal Blog | Science & Tech. | 46.166128 |
SPARQL is an RDF query language, that is, a query language for databases, able to retrieve and manipulate data stored in Resource Description Framework format. It was made a standard by the RDF Data Access Working Group of the World Wide Web Consortium, and is considered one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 became an official W3C Recommendation. SPARQL allows for a query to consist of triple patterns, conjunctions, disjunctions, and optional patterns. Implementations for multiple programming languages exist. "SPARQL will make a huge difference" according to Sir Tim Berners-Lee in a May 2006 interview. There exist tools that allow one to connect and semi-automatically construct a SPARQL query for a SPARQL endpoint, for example ViziQuer. In addition, there exist tools that translate SPARQL queries to other query languages, for example to SQL and to XQuery. | <urn:uuid:545597b4-14d1-4bc5-9d3b-d00f49afe18a> | 2.953125 | 202 | Knowledge Article | Software Dev. | 43.808754 |
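The core idea, queries built from triple patterns containing variables, can be shown without a SPARQL engine. The toy matcher below runs one pattern against an in-memory list of (subject, predicate, object) triples; the sample data and the `?var` naming mimic SPARQL style but everything here is an illustrative sketch, not the W3C-specified evaluation algorithm:

```python
# Toy illustration of SPARQL-style triple patterns: terms prefixed "?"
# are variables that bind against matching triples; other terms are
# constants that must match exactly. Real applications would instead
# send a SPARQL query string to an endpoint or a library such as rdflib.

TRIPLES = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "age", "30"),
]

def match(pattern, triples=TRIPLES):
    """Yield one variable-binding dict per triple matching the pattern."""
    for triple in triples:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):   # variable: record the binding
                binding[pat] = val
            elif pat != val:          # constant: mismatch, reject triple
                break
        else:
            yield binding

# Roughly analogous to: SELECT ?who WHERE { alice knows ?who }
print([b["?who"] for b in match(("alice", "knows", "?who"))])  # ['bob']
```

A full engine joins the bindings of several such patterns (the conjunctions, disjunctions, and OPTIONAL blocks mentioned above), but each pattern is matched in essentially this way.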
Light Dimmers and Resonance
Name: Edward M.
We have several incandescent bulbs that are controlled by
dimmers. If the dimmers are slowly moved, there are several positions
where the bulb makes a noise. I assume these noises represent some sort
of resonant frequency, but what is vibrating? and what property of the
changing electrical energy is altering the vibration to set up the
resonance? Am I correct in assuming that to maximize bulb life the
dimmer should be set to avoid these noises?
Interesting question. Possible sources might be: 1. A "hum" from the
dimmer -- presumably a transformer -- due to the 60 hertz AC current. This
is the very familiar hum you pick up around unshielded AC circuits. 2. The
AC current might set up a vibration in the filament, or its wire leads. This
would be a much higher frequency pitch. If it is the latter, certainly you
would want to avoid those settings.
I do not know, but I would guess the dimmer decreases the portion of the
normal sine wave which will be let through to the bulbs in order to dim
the bulbs. Different portions of a sine wave will have different harmonic
components. When a large harmonic component is at the frequency of a
resonance of the filament, the filament could be made to vibrate more
vigorously and even emit noises. I would say that you are absolutely
right in that those settings should be avoided for maximum bulb life.
Richard J. Plano
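The second answer's claim -- that passing only part of the sine wave changes its harmonic content -- can be checked numerically. This sketch is not part of the original exchange; it models a phase-controlled dimmer that blanks the start of each half-cycle (an assumed, common triac model) and estimates harmonic amplitudes with a direct Fourier sum:

```python
import math

def harmonic_amplitude(n, firing_angle_deg, samples=20000):
    """Amplitude of the n-th harmonic of a phase-cut sine wave.

    Models a triac-style dimmer that conducts only after
    `firing_angle_deg` into each half-cycle.
    """
    a = b = 0.0
    cut = math.radians(firing_angle_deg)
    for k in range(samples):
        t = 2 * math.pi * k / samples       # one full cycle, 0 .. 2*pi
        v = math.sin(t) if (t % math.pi) >= cut else 0.0
        a += v * math.cos(n * t)            # cosine Fourier component
        b += v * math.sin(n * t)            # sine Fourier component
    return 2 * math.hypot(a, b) / samples

# The 3rd harmonic grows relative to the fundamental as the dimmer
# cuts deeper -- extra high-frequency energy that can excite a
# mechanical resonance of the filament.
for angle in (0, 90, 150):
    ratio = harmonic_amplitude(3, angle) / harmonic_amplitude(1, angle)
    print(angle, round(ratio, 2))
```

An uncut sine has essentially no 3rd harmonic; at deep dimming settings the 3rd harmonic approaches the size of the fundamental, consistent with the answer's advice to avoid settings where the bulb sings.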
Click here to return to the Physics Archives
Update: June 2012 | <urn:uuid:6fe3812c-ef5f-484e-8d31-3d76faa324c5> | 2.6875 | 343 | Q&A Forum | Science & Tech. | 54.503194 |
On July 16, 2009, Wal-Mart announced that it will develop a sustainable product rating system that can be used to evaluate the sustainability of the products they sell in their stores. As a reminder, Wal-Mart sells a lot of products to a lot of people. According to its website, Wal-Mart “serves customers and members more than 200 million times per week at more than 7,900 retail units under 62 different banners in 15 countries.” Wal-Mart’s sustainability initiatives are diverse and plentiful (a curious dichotomy given the stigma created by their propensity for union busting and low wage employment).
The goal of the rating system is to convey information to the consumer about the materials used to make the product (are they safe?), the quality of the products (is it well-made?), and the manner in which the products were produced (was it made responsibly?). The sustainable product rating system will be developed in three phases. Ultimately, data will be used to inform the creation of a rating system for consumers. To get there, they must first survey the 100,000 plus Wal-Mart suppliers and then use the results of the survey to develop a global database of information about the life cycles of their products.
The survey (download a PDF) comprises 15 questions about the following four topics:
1) Energy and Climate: Reducing Energy Costs and Greenhouse Gas Emissions
2) Material Efficiency: Reducing Waste and Enhancing Quality
3) Natural Resources: Producing High Quality, Responsibly Sourced Raw Materials
4) People and Community: Ensuring Responsible and Ethical Production
The environmental and occupational health community should be joyous to see Wal-Mart’s attention to subject areas that flirt with public health issues. However, will this flirtation be understood by all 100,000 plus surveyed suppliers? In reviewing the 15 questions, the following terms stood out because they appear to be important criteria for a sustainable product rating system, they appear to be important environmental and occupational health considerations, and they are terms for which globally-recognized definitions do not exist.
The terms are:
- corporate greenhouse gas emissions
- solid waste
- water use reduction targets
- social compliance evaluations
I wager that I can define all four of these terms and that my definitions will not match yours. Game on.
I postulate that without further explanation and clarity regarding the intent and meaning of the questions asked in the survey the results will be so varied and diverse that they will stifle development of an effective sustainable product rating system.
Kas is an industrial hygienist studying public health in the DC metro area. | <urn:uuid:12b8edbd-3e51-4382-a842-999243dd5295> | 3.21875 | 541 | Personal Blog | Science & Tech. | 35.082385 |
Gravity is the powerful force that glues our universe together. Gravity helped form our solar system, the planets, and the stars. It holds the planets in orbit around the Sun, and moons in orbit around the planets. The gravitational pull of the Sun and Moon creates the tides on Earth. Far beyond our solar system, the irresistible force of gravity is collapsing stellar cores into amazing - and bizarre - objects in our universe--neutron stars and black holes.
|Pushing the Moon Away|
Our Moon is moving farther away from Earth! The energy transferred from gravitational tides is causing the Moon to move away at almost 4 cm a year, and slowing the Earth's rotation!
This fundamental force of gravity will help scientists model the interior of our Moon, Earth, and other planets, and measure the masses of distant stars and galaxies. NASA's GRAIL mission is studying the Moon's gravitational field, which will give scientists a better picture of the lunar interior.
The GRACE mission is still making detailed measurements of our home planet's gravity field while providing a unique tool for monitoring the Earth's natural systems. Gravity Probe B, a mission that has now ended, measured predictions of Einstein's general theory of relativity to verify the fundamental effect of gravity on time and space. Observatories on the ground are trying to observe the gravitational waves rippling through our universe to study distant black holes and the early evolution of the universe.
Check out this topic's activities in Classrooms and Organizations and Clubs to explore tides, your weight on other worlds, and orbits of moons and comets. It's a weighty topic; join us to tackle it together! | <urn:uuid:d2fa3344-48a9-4ad5-a55e-4b557028cc14> | 4.3125 | 338 | Knowledge Article | Science & Tech. | 44.619973 |
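The "your weight on other worlds" activity mentioned above comes down to one ratio: weight scales with local surface gravity. A minimal sketch, where the gravity values are standard approximate figures rather than numbers from this article:

```python
# Approximate surface gravities in m/s^2 (standard reference values,
# not taken from the article).
SURFACE_GRAVITY = {
    "Earth": 9.81,
    "Moon": 1.62,
    "Mars": 3.71,
    "Jupiter": 24.79,
}

def weight_on(world: str, earth_weight: float) -> float:
    """Scale an Earth weight by the ratio of surface gravities.

    Any weight unit works (pounds, newtons, kgf) since only the
    ratio of gravities matters.
    """
    return earth_weight * SURFACE_GRAVITY[world] / SURFACE_GRAVITY["Earth"]

print(round(weight_on("Moon", 60.0), 1))  # a 60-unit Earth weight -> 9.9
```

The same one-line scaling explains why Apollo astronauts could bound across the lunar surface: everything weighs about one-sixth as much on the Moon.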
By Charles Q. Choi, SPACE.com
The moon — linked in myth with goddesses of witchcraft and the hunt, with gods of magic and wisdom — is nearly as old as Earth itself, with enigmas of its own. As close as the moon is to Earth, we are still far from solving all its mysteries — from how the moon was born to whether life on Earth has its past and future there.
How was the moon made?
Most scientists think the moon was born from a gargantuan collision — when a young, 30-million-year-old Earth was sideswiped by an embryonic planet the size of Mars some 4.5 billion years ago, with debris from our planet and this impactor eventually coalescing into a molten, red-hot moon.
Curiously, while the latest computer models suggest most of the moon came from the impactor, lunar samples from the Apollo and other missions suggest the moon is very chemically similar to Earth's mantle.
"Perhaps that means the impactor, this embryonic planet, was similar to Earth, drawn from the same materials our planet was," said Bernard Foing, principal scientist on SMART-1, a European Space Agency satellite that orbited the moon from 2004 to 2006. Japan's lunar orbiter Kaguya, which launched Sept. 13, and India's 2008 lunar craft Chandrayaan-1 should return more details about the moon's composition, evolution and, ultimately, its mysterious origin.
Water on the moon?
The relentless bombardment of the moon by comets and water-rich asteroids over billions of years could have left water behind on the lunar surface, possibly hidden in permanent shadows in craters at the moon's poles.
In 1999, the Lunar Prospector orbiter discovered unusually high levels of hydrogen. This could be linked with water — which is, after all made from hydrogen and oxygen — "although hydrogen in the solar wind could have been trapped at the poles as well," Foing said.
Although ground-based telescopes suggest ice may not exist in thick deposits at lunar polar craters, ice could still exist in grains mixed in with the dirt. NASA's 2008 Lunar Reconnaissance Orbiter will carry along two probes that will crash onto the moon to search for water ice at its south pole.
The Lunar Cataclysm
The moon was rocked by a chain of devastating cosmic impacts known as the Lunar Cataclysm or the Late Heavy Bombardment about 4.2 billion to 3.8 billion years ago, which gouged out 50 or so giant basins still visible on the lunar surface. Astronomers suspect it occurred when the orbits of Jupiter and Saturn shifted, with the gravitational pull of these giant planets hurling more asteroids and comets around.
All the inner planets were likely hit during the same epoch as well — Foing estimated Earth suffered 25 or 30 times more impacts than the moon. Scientists aren't quite certain when the Late Heavy Bombardment occurred and how long it lasted, but it apparently took place around when life arose on Earth.
Pinning down when these impacts occurred could help shed light on whether they scoured primitive life that had just developed on Earth — or whether they planted chemical ingredients that helped life emerge. "It will be necessary to go to many impact basins on the moon to measure samples to try and figure out when they were created," Foing said.
Clues of life's origins on the moon?
Millions of tons of rocks blasted off Earth by cosmic impacts during the planet's earliest days could have landed on the moon, stones that could hold secrets concerning the origins of life — including the remote possibility of microbial fossils.
"As much as 200 kilograms from the early Earth could have fallen on every square kilometer of the moon," Foing said. "These rocks could be a very interesting scientific goal for robot and human expeditions to dig and look for."
Future of the moon?
When it comes to the future of life, "are we able to bring Earth's life to the moon? Can we expand life outside Earth's cradle? That's a question yet to be answered," Foing told SPACE.com.
The moon holds intriguing resources in its minerals, including metals and oxygen, "but it doesn't contain much carbon," Foing said. "If you want to grow plants there, you'll need to enrich the dirt, bring in carbon, nitrogen and phosphorus."
Lunar settlers could use any water available on the moon for survival, but that water could hold billions of years worth of secrets regarding comets that collided with the moon, "so I'd rather study it than drink it," Foing said. "We could just use the hydrogen and oxygen available on the moon to produce artificial water."
Copyright 2007, SPACE.com Inc. ALL RIGHTS RESERVED.
| <urn:uuid:bb9631a2-2875-4bc3-9a14-0c62efc17d87> | 3.53125 | 1,026 | Truncated | Science & Tech. | 55.998484
Why is the sky near Antares and Rho Ophiuchi so colorful? The colors result from a mixture of objects and processes. Fine dust illuminated from the front by starlight produces blue reflection nebulae. Gaseous clouds whose atoms are excited by ultraviolet starlight produce reddish emission nebulae. Backlit dust clouds block starlight and so appear dark. Antares, a red supergiant and one of the brighter stars in the night sky, lights up the yellow-red clouds on the lower center. Rho Ophiuchi lies at the center of the blue nebula near the top. The distant globular cluster M4 is visible just to the right of Antares and to the lower left of the red cloud engulfing Sigma Scorpii. These star clouds are even more colorful than humans can see, emitting light across the electromagnetic spectrum.
Credit & Copyright: | <urn:uuid:9b710fc8-5471-42f7-a68b-d22cf1e194a8> | 3.640625 | 179 | Knowledge Article | Science & Tech. | 51.460938 |
The average global temperature for July 2012 was more than 1° Fahrenheit above the 20th-century average, making it the fourth warmest July since record keeping began in 1880. July 2012 also marks the 36th consecutive July and the 329th consecutive month with a global temperature above the 20th-century average. The last July with below-average temperature was July 1976, and the last month with below-average temperature was February 1985.
The map above shows July temperatures relative to average across the globe. Red indicates temperatures up to 11° Fahrenheit warmer than the 1981–2010 average, and blue indicates temperatures up to 11° Fahrenheit cooler than the average. Most areas of the world experienced higher-than-average monthly temperatures. The most unusually warm spots were the central U.S. and Canada, Greenland, and southeastern Europe. However, Australia, northern and western Europe, eastern Russia, Alaska, and southern South America were notably cooler than average.
Considering only land areas, global temperature tied with 2002 as the third warmest July on record, with temperatures more than 1.5° Fahrenheit above the 20th-century average. For the oceans, the July global sea surface temperature was close to 1° Fahrenheit above the 20th-century average, tying with 2006 as the seventh warmest July on record. This was also the highest monthly global ocean temperature departure from average for any month since July 2010.
The map reveals that temperatures in the eastern equatorial Pacific Ocean were slightly above average, but as of the end of July, the basin was still officially “neutral” with respect to the El Niño-Southern Oscillation climate pattern. However, according to NOAA’s Climate Prediction Center, El Niño conditions—the warm phase—will likely develop between now and September. In addition to influencing seasonal climate in the United States, El Niño is often, but not always, associated with global temperatures that are higher than normal.
Map by Dan Pisut, NOAA Environmental Visualization Lab, based on temperature anomaly data from the National Climatic Data Center. Caption adapted by Susan Osborne from the NCDC’s July 2012 Global Climate Report. Reviewed by Jessica Blunden.
Global Climate Summary—July 2012
Earth’s fourth warmest June on record
Global land temperature in May 2012 is warmest on record
May…oh My! Unusual Heat for the U.S.
Climate Prediction Center’s El Niño-Southern Oscillation Page
Climate Variability: Oceanic Niño Index
Climate Variability: Southern Oscillation Index | <urn:uuid:e63a06e0-dc37-4109-968c-ac0e4ccb98fd> | 3.546875 | 529 | Knowledge Article | Science & Tech. | 27.955653 |
The idea is to make sure the environment the sections are in before
adding the antibodies is as similar to the environment the antibodies
themselves are in as possible. This increases the avidity and affinity of
the antibodies for the antigens.
Think of it like a lock and key. If your hand is shaking you won't be
able to get the key into the lock as well. The molecules of the proteins are
doing the same thing when they are of a different pH (or temperature for
that matter). When the molecules aren't moving around or twisting (like what
happens in antigen retrieval) they can line up and the reactive parts can
get close enough to bond properly.
Histonet mailing list | <urn:uuid:8e804302-f518-4a92-af20-4747ee292cd8> | 2.734375 | 149 | Knowledge Article | Science & Tech. | 47.77872 |
You say that "...the level of carbon-14 in a living cell's DNA is directly proportional to the level in the atmosphere at the time it was born, minus a tiny amount lost to radioactive decay" (17 June, p 50). Isn't this only applicable if the cell was made using carbon that had just been absorbed from the atmosphere?
If so, Frisén is actually measuring the time elapsed since the carbon in the DNA left the atmosphere. For vegetarians eating fresh vegetables, the extra time would be quite short, but for meat-eaters the carbon has gone from the atmosphere into plants, which were eaten by animals which in turn lived for some time before being eaten.
Hmm, time for lunch. If I have a salad rather than a steak, will I feel younger?
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content. | <urn:uuid:06ecd248-2db8-4362-bef9-a1da88e1a834> | 3.15625 | 188 | Truncated | Science & Tech. | 61.201993 |
French physicist, who was awarded the 1991 Nobel Prize for Physics
for his discoveries about the ordering of molecules in liquid crystals
De Gennes investigated how extremely complex forms of matter behave during
the transition from order to disorder. He showed how electrically or
mechanically induced phase changes transform liquid crystals from a
transparent to an opaque state, the phenomenon exploited in liquid-crystal
displays. His research on polymers contributed to understanding how
the long molecular chains in molten polymers move, making it possible
for scientists to better determine and control polymer properties. A
few of the judges on the Nobel committee described de Gennes as "the
Isaac Newton of our time" in having successfully applied mathematics
to generalized explanations of several different physical phenomena.
All text is available under the terms of the GNU Free Documentation License. Timeline of Nobel Prize Winners is not affiliated with The Nobel Foundation. External sites are not endorsed or supported by http://www.nobel-winners.com/ Copyright © 2003 All Rights Reserved. | <urn:uuid:eb66ffad-0946-45b9-ba56-014493d480db> | 3.109375 | 216 | Knowledge Article | Science & Tech. | 22.443954
The Purplemath Forums
Bases: Octal (Base 8) and Hexadecimal (Base 16)
An older computer base system is "octal", or base eight. The digits in octal math are 0, 1, 2, 3, 4, 5, 6, and 7. The value "eight" is written as "1 eight and 0 ones", or 10₈.
I will do the usual repeated division, this time dividing by 8 at each step:
Then the corresponding octal number is 545₈.
I will follow the usual procedure, counting off the digits from the RIGHT, starting at zero:
Then I'll do the addition and multiplication:
5×8² + 4×8¹ + 5×8⁰ = 320 + 32 + 5 = 357
Then the corresponding decimal number is 357₁₀.
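The two procedures above — repeated division to go from decimal to octal, and positional expansion to come back — can be sketched in code. Python is assumed here purely for illustration; the lesson itself is language-independent.

```python
def to_octal(n):
    """Decimal -> octal string by repeated division by 8."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, 8)        # quotient carries on; remainder is the next digit
        digits.append(str(r))
    return "".join(reversed(digits))  # remainders are read off bottom-to-top

def from_octal(s):
    """Octal string -> decimal by positional expansion."""
    total = 0
    for ch in s:                     # leftmost digit first
        total = total * 8 + int(ch)  # equivalent to summing digit × 8**position
    return total

print(to_octal(357))      # "545", matching the worked example
print(from_octal("545"))  # 357
```

Python's built-ins `oct(357)` and `int("545", 8)` give the same results.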
If you work with computer programming or computer engineering (or computer graphics, about which more later), you will encounter base-sixteen, or hexadecimal, math.
As mentioned before, decimal math does not have one single solitary digit that represents the value of "ten". Instead, we use two digits, a 1 and a 0: "10". But in hexadecimal math, the columns stand for multiples of sixteen! That is, the first column stands for how many units you have, the second column stands for how many sixteens, the third column stands for how many two hundred fifty-sixes (sixteen-times-sixteens), and so forth.
In base ten, we had digits 0 through 9. In base eight, we had digits 0 through 7. In base 4, we had digits 0 through 3. In any base system, you will have digits 0 through one less than your base. This means that, in hexadecimal, we need to have "digits" 0 through 15. To do this, we would need single solitary digits that stand for the values of "ten", "eleven", "twelve", "thirteen", "fourteen", and "fifteen". But we don't. So, instead, we use letters. That is, counting in hexadecimal, the sixteen "numerals" are:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F
In other words, A is "ten" in "regular" numbers, B is "eleven", C is "twelve", D is "thirteen", E is "fourteen", and "F" is fifteen. It is this use of letters for digits that makes hexadecimal numbers look so odd at first. But the conversions work in the usual manner.
Here, I will divide repeatedly by 16, keeping track of the remainders as I go. (You might want to use some scratch paper for this.)
Reading off the digits, starting from the top and wrapping around the right-hand side, I see that 357₁₀ = 165₁₆.
List the digits, and count them off from the RIGHT, starting with zero:
Remember that each digit in the hexadecimal number represents how many copies you need of that power of sixteen, and convert the number to decimal:
1×16² + 6×16¹ + 5×16⁰ = 256 + 96 + 5 = 357
Then 165₁₆ = 357₁₀.
I will divide repeatedly by 16, keeping track of my remainders:
From the long division, I can see that the hexadecimal number will have a "fifteen" in the sixteen-cubeds column, a "nine" in the sixteen-squareds column, an "eleven" in the sixteens column, and a "thirteen" in the ones column. But I cannot write the hexadecimal number as "1591113", because this would be confusing and imprecise. So I will use the letters for the "digits" that are otherwise too large, letting "F" stand in for "fifteen", "B" stand in for "eleven", and "D" stand in for "thirteen".
Copyright © Elizabeth Stapel 1999-2011 All Rights Reserved
Then 63933₁₀ = F9BD₁₆.
I will list out the digits, and count them off from the RIGHT, starting at zero:
Actually, it will probably be helpful to redo this, converting the alphabetic hexadecimal "digits" to their corresponding "regular" decimal values:
Now I'll do the multiplication and addition:
15×16³ + 9×16² + 11×16¹ + 13×16⁰ = 61440 + 2304 + 176 + 13 = 63933
As expected, F9BD₁₆ = 63933₁₀.
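The same two procedures work in base sixteen once the letters A through F are treated as the digits ten through fifteen. A sketch, again in Python as an illustrative choice:

```python
DIGITS = "0123456789ABCDEF"

def to_hex(n):
    """Decimal -> hexadecimal string by repeated division by 16."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, 16)
        out.append(DIGITS[r])      # remainders 10-15 become the letters A-F
    return "".join(reversed(out))

def from_hex(s):
    """Hexadecimal string -> decimal by positional expansion."""
    total = 0
    for ch in s.upper():
        total = total * 16 + DIGITS.index(ch)
    return total

print(to_hex(63933))     # "F9BD", matching the worked example
print(from_hex("F9BD"))  # 63933
```

The built-ins `hex(63933)` and `int("F9BD", 16)` agree with these results.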
If you work on web pages and graphics programs, you may find it helpful to convert between the RGB values (for an image in your graphics program) and the hexadecimal values (for a matching background color on the web page).
Graphics programs deal with the RGB (red-green-blue) values for colors. Each of these components of a given color has a value somewhere between 0 and 255. These values may be converted to hexadecimal values between 00 and FF. If you list the RGB components of a color as a string of three numbers, you might get, say, R:204, G:51, B:255, which translates into a light-purplish #CC33FF in HTML coding. Note that 204₁₀ = CC₁₆, 51₁₀ = 33₁₆, and 255₁₀ = FF₁₆.
On the other hand, if you have some coding for #990033, this would translate into a dark-reddish R:153, G:0, B:51 in your graphics program. That is, to convert between your graphics program and your web-page coding, deal with the hexadecimal number not as one six-digit number, but as three two-digit numbers, and convert these pairs of digits into the corresponding RGB values.
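The pairs-of-digits conversion described above can be sketched directly. The function names here are made up for illustration:

```python
def rgb_to_html(r, g, b):
    """Format each 0-255 component as a two-digit uppercase hex pair."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def html_to_rgb(code):
    """Split a #RRGGBB code into its three decimal components."""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_html(204, 51, 255))  # "#CC33FF", the light-purplish example
print(html_to_rgb("#990033"))     # (153, 0, 51), the dark-reddish example
```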
Historically, "web safe" colors involved only the hexadecimal equivalents of 0, 51, 102, 153, 204, and 255. | <urn:uuid:bec9c197-df51-40e5-a7b3-4df05ed8b8f6> | 4.125 | 1,327 | Tutorial | Science & Tech. | 67.502747
Faults and Earthquakes in San Diego County
Like the rest of southern California, San Diego County has a number of active earthquake faults. These faults generally run in a northwest-southeast direction and are the product of crustal stresses associated with movement of the Pacific and North American lithospheric plates.
From east to west the major active faults consist of the San Jacinto, Elsinore, La Nacion, and Rose Canyon faults onshore and the Coronado Bank, San Diego Trough, and San Clemente faults offshore. Often the traces of these faults are marked by river valleys and canyons such as in the Lake Henshaw area where the Elsinore Fault passes along the northeast shore of the lake, or in Balboa Park where the small Florida Canyon Fault passes along the western slope of the canyon and beneath the parking lot of the Naval Hospital.
Since 1984 earthquake activity in San Diego County has doubled over that of the preceding 50 years. In modern times the strongest recorded quake (the Richter magnitude scale was not developed until 1935) in coastal San Diego County was the M5.3 temblor that occurred on 13 July 1986 on the Coronado Bank Fault, 25 miles offshore of Solana Beach.
Historic documents record that a very strong earthquake struck San Diego on 27 May 1862, damaging buildings in Old Town and opening up cracks in the earth near the San Diego River mouth. This destructive temblor was centered on either the Rose Canyon or Coronado Bank faults and descriptions of damage suggest that it had a magnitude of about 6.0.
In recent years there have been several earthquakes recorded within the Rose Canyon Fault Zone as it passes beneath the city. Three temblors shook the city on 17 June 1985 (M3.9, 4.0, 3.9) and a stronger quake occurred on 28 October 1986 (M4.7).
Ongoing field and laboratory studies suggest the following maximum likely magnitudes for local faults: San Jacinto (M6.4 to 7.3), Elsinore (M6.5 to 7.3), Rose Canyon (M6.2 to 7.0), La Nacion (M6.2 to 6.6), Coronado Bank (M6.0 to 7.7), San Diego Trough (M6.1 to 7.7), San Clemente (M6.6 to 7.7).
Individual earthquakes differ in strength. The Richter Scale was devised as a means of rating earthquake strength and is an indirect measure of seismic energy released. The scale is logarithmic with each one point increase corresponding to a 10 fold increase in the amplitude of the seismic shock waves generated by the earthquake. In terms of actual energy released, however, each one point increase on the Richter Scale corresponds to about a 32 fold increase in energy released. So a magnitude (M) 7 earthquake is 100 times (10 X 10) more powerful than a M5 earthquake and releases 1,024 times (32 X 32) the energy.
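The scale arithmetic described above is easy to check numerically. A minimal sketch (Python assumed for illustration; the 32× energy factor is the article's round figure, and a more precise value is about 31.6):

```python
def amplitude_ratio(m1, m2):
    # Each whole-number step multiplies seismic-wave amplitude by 10.
    return 10 ** (m1 - m2)

def energy_ratio(m1, m2):
    # Each whole-number step releases roughly 32 times more energy.
    return 32 ** (m1 - m2)

print(amplitude_ratio(7, 5))  # 100  (10 × 10)
print(energy_ratio(7, 5))     # 1024 (32 × 32)
```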
Seismologists (scientists who study earthquakes) use seismographs to determine the magnitude of earthquakes. The seismograph itself is a complex device that electronically amplifies seismic shock waves arriving from the earthquake event and records them on a seismogram.
An earthquake generates different types of seismic shock waves that travel outward from the focus, or point of rupture on a fault. Since the focus is actually deep within the crust, seismologists more often refer to the epicenter, which is the point on the earth's surface directly above the focus. Seismic waves that travel through the earth's crust are called body waves and are divided into primary (P) and secondary (S) waves. Because P waves move faster (1.7 times) than S waves, they arrive at the seismograph first. By measuring the time delay between the arrival of the P and S waves, seismologists can compute the distance to the epicenter; combined with the amplitude recorded on the seismogram, this yields the Richter Scale magnitude for the earthquake.
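Because the S-minus-P delay grows with distance, it can be turned into a rough epicentral-distance estimate. The sketch below assumes straight-line travel at a typical crustal P-wave speed of 6 km/s (an illustrative value, not from the article), with Vs = Vp / 1.7 per the ratio quoted in the text:

```python
def epicentral_distance_km(sp_delay_s, vp_km_s=6.0):
    """Estimate distance to the epicenter from the S-minus-P arrival delay.

    Assumes Vp = 6 km/s (typical crustal value, chosen for illustration)
    and Vs = Vp / 1.7, the speed ratio quoted in the text.
    """
    vs = vp_km_s / 1.7
    # delay = d/vs - d/vp  =>  d = delay * vp * vs / (vp - vs)
    return sp_delay_s * (vp_km_s * vs) / (vp_km_s - vs)

print(round(epicentral_distance_km(10.0), 1))  # about 85.7 km for a 10 s delay
```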
The Modified Mercalli Scale is another means for rating earthquakes, but one that attempts to quantify intensity of ground shaking. Intensity under this scale is a function of distance from the epicenter (the closer to the epicenter the greater the intensity), ground acceleration, duration of ground shaking, and degree of structural damage.
The earth's crust is divided into seven major lithospheric plates. Powered by forces operating in the earth's molten interior, these plates are in constant slow motion. As they move they carry with them the continents and ocean basins. The edges or boundaries of the plates are where most tectonic action occurs.
New crust is generated along spreading boundaries such as the East Pacific Rise and the Mid-Atlantic Ridge. Crust is consumed at convergent boundaries such as occur along the west coast of South America or along the Pacific margin of the Aleutian Islands. The final type of plate boundary is called a transform boundary and occurs here in California as the San Andreas Fault. At transform boundaries the plates slide past one another.
Because of the huge scale and awesome forces of plate tectonics it is not surprising that most earthquake activity is concentrated along plate boundaries. This is especially true for convergent and transform boundaries.
Large or small, most earthquakes are caused by the slippage of masses of crustal rock along earth fractures called faults. Faults that have horizontal movement are called Strike-slip faults. Faults that primarily have vertical movement are called Dip-slip faults and come in two primary types. A normal dip-slip fault is caused by extensional (pull apart) tectonic forces and a reverse dip-slip fault is caused by compressional (pushing) tectonic forces.
The Northridge quake in January of 1994 was caused by a reverse dip-slip fault. Strike-slip faults are caused by horizontal shearing tectonic forces and result in the rocks on either side of the fault moving in opposite directions.
The infamous San Andreas Fault is a very large scale strike-slip fault. Most north-south faults in southern California are of this type, including the most active faults in San Diego County -- the San Jacinto, Elsinore, Rose Canyon, and Coronado Banks faults.
Ultimately, all earthquakes are caused by movement of the earth's lithospheric plates and the plutonic forces that drive it. These movements cause seismic stress and strain to build up in the crust. Frictional forces constrain this stress and strain until a critical failure point is reached, beyond which the rocks rupture. This rupture releases the stress and strain as seismic energy, which we feel as an earthquake.
Earthquakes can also be caused by volcanic eruptions, as evidenced by the seismic activity that surrounded the Mount St. Helens eruption of 1980.
We humans can also cause earthquakes by underground detonation of nuclear bombs, injection of fluids into deep water wells, and over-extraction of water from aquifers. | <urn:uuid:6314efda-cd7c-4f3a-b95d-cc05a0c4f43c> | 3.484375 | 1,424 | Knowledge Article | Science & Tech. | 50.861268 |
Water Basics, Challenge 1:
How Fair Are the Water Properties?
Your teacher will divide you into teams comprising four to six students to answer questions about the special properties of water and its ability to sustain life on Earth. Use the resources listed under "Multiple Perspectives" to help you find the answers. When you have finished, your team will share your answers and ideas with the other students in the class. This exercise will help you find out what you already know and identify critical gaps in your knowledge on the topic.
- How did water originate on Earth?
- What is the shape and structure of a water molecule?
- In many ways, water is a miracle substance. The ways in which individual water molecules interact with each other is responsible for many unique properties. What are some of these unique properties?
- The density of water is dependent on its temperature. What is the relationship between density and temperature?
- How does the relationship between density and temperature make it possible for all three states of water to coexist on Earth at the same time?
- How are water and life connected?
- Water is essential for all living things on Earth. Why is water’s ability to hold and transport heat important for life on Earth?
- Environmental water quality determines the suitability for human use (for example, drinking water, swimming, boating, and so on). Water quality also affects the health of aquatic organisms that live in the water and has an impact on wildlife, which use the water for drinking or as a habitat. What are some of the measurements made to assess water quality?
- What is the potential impact of toxic substances in water and high populations of certain microorganisms on the health of humans and ecosystems?
- What are the different water quality standards for environmental conditions, ecosystems, and intended human uses? | <urn:uuid:ac9403f6-e5d9-4b4a-a32c-f6b833bb1f3a> | 4.9375 | 373 | Tutorial | Science & Tech. | 40.769903 |
Einstein's theory of general relativity is widely accepted as the correct description of relativistic gravity. Consequences of the theory like black holes are routinely used to account for the properties of astrophysical phenomena such as quasars and AGNs. Many high-precision tests of general relativity have been carried out to confirm the theory. However, all of these tests probe only the weak-field, slow-velocity limit of the theory. These are the lowest order corrections to Newtonian gravitation theory. There are no experiments as yet that test the strong field character of the theory. Processes such as the formation of black holes or gravitational radiation from colliding black holes are required to test the strong field regime. Large-scale computations on supercomputers provide the only way to probe strong field phenomena at present. The field is still in its infancy but numerical relativity has already provided useful insights into the nature of relativistic gravitation. Already, the collapse of fluid stars and collisionless star clusters to black holes can be followed in spherical and axisymmetry. The collision of relativistic stars and clusters, as well as black holes, can also be studied by numerical means, although only the head-on case has been treated so far. Simulations have even been performed which suggest the formation of naked singularities and the possible violation of cosmic censorship. A computer-generated color video highlighting some of these findings will be presented.
| <urn:uuid:f3c98260-a123-4bf6-8726-7579639aee41> | 3.3125 | 300 | Academic Writing | Science & Tech. | 25.518657
Chemical Concepts Demonstrated
- Relative activity of metals
- Metal classification based on reactivity
- Drop samples of sodium, magnesium, aluminum, and iron metal into water.
- Drop samples of magnesium, aluminum, and iron metal into 6M HCl.
| Metal | Reaction in water | Reaction in 6M HCl |
| --- | --- | --- |
| sodium | Highly reactive. | Highly reactive. |
| magnesium | No reaction at room temperature. | Reacts rapidly. |
| aluminum | No reaction at room temperature. | No reaction. |
| iron | No reaction. | No reaction. |
Notice the metals' locations in the periodic table. The most reactive metals have the greatest tendency to lose electrons to form positively charged ions. Metals, therefore, become more reactive as they are located further to the left on the periodic table.
Based on the activities of the metals, the four metals can be separated into three different categories:
- Reactive in both acid and water (i.e. high reactivity): sodium (and, by extension, other alkali metals)
- Reactive in acid, but not water (i.e. moderate reactivity): magnesium (and, by extension, other alkaline-earth metals)
- Unreactive in both acid and water (i.e. low reactivity): aluminum and iron (and, by extension, other transition and Group IIIA metals) | <urn:uuid:cfd123c8-29a7-4373-acbd-5dab36c6fb9e> | 3.8125 | 291 | Knowledge Article | Science & Tech. | 29.482594 |
Invasive snakes threaten biodiversity in Florida
The invasive Burmese python has been linked to mammal declines in Florida's Everglades National Park. Researchers fear that some of the endangered species of the region may be in danger.
The Burmese python is a constrictor from Southeast Asia that has found its way from pet stores into Florida’s Everglades National Park. Officials first observed it in Florida’s wilderness in the 1980s and it was considered officially established in the year 2000. The pythons prey upon a diverse set of animals that includes birds and mammals and even alligators. They have proliferated since their establishment, expanding their range to the point that running across a python has become a common occurrence.
A group of researchers has examined the effects that the pythons are having on local mammals, and their results appear in the Proceedings of the National Academy of Sciences. The team, led by a biologist from Davidson College, conducted surveys of Everglades National Park from 2003 to 2011. They compared their findings to data from surveys conducted in the 1990s, before the snakes began proliferating. The scientists found that raccoons, opossums, and bobcats were markedly less abundant, with the frequency of their sightings having decreased by 99.3 percent, 98.9 percent, and 87.5 percent, respectively. Overall, the frequency of medium-sized mammal sightings had decreased sharply, with a slight increase in the number of small rodent sightings. The losses were most pronounced in regions where the pythons had been established longest.
The pythons seem to be having an effect on the local food web of the Everglades. Because the pythons have found conditions where they can thrive, their pressure on the ecosystem could change its fundamental food-web dynamics. This is bad for conservation because it means that the populations of prey species can be reduced and even competitor species, such as the panther, could become less abundant. In other cases, however, the pythons might increase the numbers of small rodents because the snakes eat their larger predators—so an understanding of the effects is still unfolding.
The effects could have legal consequences, as well. The Everglades contain threatened and endangered species that are listed under the Endangered Species Act. Now that the delayed effects of this invasive snake are coming to light, managers can use the knowledge to better protect the biodiversity of this unique national park.
Michael E. Dorcas, John D. Willson, Robert N. Reed, Ray W. Snow, Michael R. Rochford, Melissa A. Miller, Walter E. Meshaka, Jr., Paul T. Andreadis, Frank J. Mazzotti, Christina M. Romagosa, and Kristen M. Hart, "Severe mammal declines coincide with proliferation of invasive Burmese pythons in Everglades National Park," Proceedings of the National Academy of Sciences, published ahead of print January 30, 2012, 1–5. DOI: 10.1073/pnas.1115226109
| <urn:uuid:df6239bf-eaca-4877-8e66-0d29511f5d9a> | 3.328125 | 669 | Academic Writing | Science & Tech. | 44.116246
What do bees make honey from (pollen or nectar)? What do they use the other for?
I have heard there are bees that make honey from decaying flesh. Is this true?
Sorry for the delayed reply; I hope this is still useful. Bees use nectar to make honey, and gather pollen for brood production, since pollen has more protein. They feed developing queens on a mixture of the two. I don't know of any way honey could be made from decaying flesh; there would be no substantial amount of sugar.
Update: June 2012 | <urn:uuid:a8176526-3e86-44ef-8934-19cad344196d> | 3.296875 | 130 | Q&A Forum | Science & Tech. | 65.435 |
Find out why water is one of the most amazing compounds in the
universe and why it is essential for life. - UNDER DEVELOPMENT
Get some practice using big and small numbers in chemistry.
Many physical constants are only known to a certain accuracy. Explore the numerical error bounds in the mass of water and its constituents.
This is the area of the advanced stemNRICH site devoted to the core applied mathematics underlying the sciences.
PhysNRICH is the area of the StemNRICH site devoted to the mathematics underlying the study of physics
An introduction to a useful tool to check the validity of an equation.
Work out the numerical values for these physical quantities.
Advanced problems in the mathematical sciences.
Explore displacement/time and velocity/time graphs with this mouse
This is the technology section of stemNRICH - Core.
Make an accurate diagram of the solar system and explore the concept of a grand conjunction.
chemNRICH is the area of the stemNRICH site devoted to the
mathematics underlying the study of chemistry, designed to help
develop the mathematics required to get the most from your study. . . .
Some explanations of basic terms and some phenomena discovered by
Estimate these curious quantities sufficiently accurately that you can rank them in order of size
Can you suggest a curve to fit some experimental data? Can you work out where the data might have come from?
Which units would you choose best to fit these situations?
When you change the units, do the numbers get bigger or smaller?
Use your skill and knowledge to place various scientific lengths in order of size. Can you judge the length of objects with sizes ranging from 1 Angstrom to 1 million km with no wrong attempts?
Use trigonometry to determine whether solar eclipses on earth can be perfect.
In which Olympic event does a human travel fastest? Decide which events to include in your Alternative Record Book. | <urn:uuid:72cb7780-8e46-4146-a347-09ab252b7d3d> | 3.875 | 394 | Content Listing | Science & Tech. | 44.086427 |
Seawater Chemistry from the North Pole Environmental Observatory, 2000-2005
This data set includes diverse measurements of seawater chemistry along with supplementary conductivity, temperature, depth (CTD); pressure; salinity; potential temperature; and density data from the Arctic Ocean near the North Pole. Measurements were taken from sea ice platforms each April or May from 2000-2005. Investigators used a CTD-O2 system to measure seawater conductivity, temperature, depth, and dissolved oxygen content, and collected Niskin bottle samples for measurements of salinity, oxygen isotope composition, and concentrations of phosphate, silicic acid, nitrate, nitrite, ammonium, and barium. The available in situ dissolved oxygen measurements were collected beginning in 2002.
The North Pole Environmental Observatory (NPEO) is a year-round, automated scientific observatory, deploying various instruments each April in order to learn how the world's northernmost sea helps regulate global climate. It consists of a set of unmanned scientific platforms that record oceanographic, cryospheric, and atmospheric data throughout the year. More information about the project is available at the project Web site, North Pole Environmental Observatory (http://psc.apl.washington.edu/northpole/).
These data are available via FTP.
The following example shows how to cite the use of this data set in a publication. For more information, see our Use and Copyright Web page.
Kelly K. Falkner. 2002, updated 2005 and 2006. Seawater Chemistry from the North Pole Environmental Observatory, 2000-2005. [indicate subset used]. Boulder, Colorado USA: National Snow and Ice Data Center. | <urn:uuid:3a465c48-2dc5-4204-b374-8bb0e44380aa> | 2.75 | 340 | Knowledge Article | Science & Tech. | 21.60119 |
Ozone Hole Meteorology: 2011 Temperature
The depth and area of the Antarctic ozone hole are governed by the temperature of the stratosphere and the amount of sunlight reaching the south polar region. Temperatures that are cold enough can form polar stratospheric clouds (PSCs). PSCs are an important component in the destruction of ozone molecules. PSCs can be formed when temperatures fall below a given threshold for each type of PSC. The formation temperature is dependent on concentrations of nitric acid and water vapor, and the potential temperature of the air. PSCs can be formed from sulfate aerosols, nitric acid trihydrate (NAT), or ice.
Comparison to all years
The following figures show the daily progression through the ozone hole season, comparing the current year to the climatology of all other years.
| <urn:uuid:50a43935-9fa9-48f2-a7a6-b959d0ac0b96> | 3.90625 | 181 | Knowledge Article | Science & Tech. | 33.56413
Astronomical seeing is the blurring and twinkling of celestial objects, such as stars and planets, caused by turbulence in the Earth's atmosphere.
The worse the seeing conditions, the poorer the quality of the image being observed by a telescope. It's a bit like looking into a clear sea on a windy day. The more disturbed the sea is, the poorer the image of objects beneath the surface. Good seeing would be the equivalent of a flat calm sea.
Although a clear sky may appear calm, there are various updrafts and turbulent layers that can still disturb the light passing through. The result can be rapid changes in the quality of images.
All ground-based observatories are affected by seeing, but the effect can be reduced by putting observatories on top of high mountains, so that telescopes look through less atmosphere. Astronomers have also developed adaptive optics technology to try to correct for seeing effects, but the best option is to launch your telescope into orbit above the Earth's atmosphere.
If you fancy learning more about astronomical seeing, and seeing the effects for yourself, why not try out our Seeing Workshop. | <urn:uuid:b9293ba0-addd-416f-ad7f-2b35c975d0a3> | 3.5625 | 237 | Knowledge Article | Science & Tech. | 49.06 |
A close-up of the world’s smallest orchid, at just over 2mm from petal tip to petal tip.
Image: Lou Jost.
The world’s smallest orchid was discovered recently in a mountainous nature reserve in Ecuador by American botanist Lou Jost. Dr. Jost, a former physicist, now works as a mathematical ecologist, plant biogeographer and conservation scientist, and is one of the world’s most expert orchid hunters. In the previous decade, Dr. Jost discovered 60 new species of orchids and 10 other new plant species. He discovered this diminutive plant whilst examining another species of small orchid that he was cultivating.
“I found it among the roots of another plant that I had collected, another small orchid which I took back to grow in my greenhouse to get it to flower,” Dr. Jost stated. “A few months later I saw that down among the roots was a tiny little plant that I realized was more interesting than the bigger orchid.”
“Looking at the flower is often the best way to be able to identify which species of orchid you’ve got hold of — and can tell you whether you’re looking at an unknown species or not,” explained Dr. Jost.
The tiny flower is just 2.1 millimeters across (less than a tenth of an inch), and the petals are only one cell thick: the flower is transparent. This discovery has been tentatively classified as a new species of Platystele, a genus composed primarily of miniature plants.
Previously, another orchid, Platystele jungermannioides, discovered in 1912, was recognized as the tiniest species known in the world.
Dr. Jost, an expert orchid hunter, recently discovered another tiny orchid that is new to science while searching the Rio Anzu Reserve in central Ecuador.
“It was so small, it looked like a piece of dirt at first,” said Dr. Jost of that plant.
“I was going through the moss on a fallen tree branch — they’re good places for orchids to grow — when I spotted it. The flower was 3mm across.”
That previously unknown small orchid is another Platystele species.
This newest plant species was collected in the Cerro Candelaria reserve in the eastern Andes Mountains. This 2,113-hectare reserve, comprising mainly cloud forest and páramo (tropical alpine grassland), was created by the EcoMinga Foundation, based in Ecuador, in partnership with the World Land Trust in Great Britain. Dr. Jost is a cofounder of the EcoMinga Foundation.
The Cerro Candelaria reserve is a rich biological transition zone that stretches from the Sangay National Park in the Andes Mountains towards the Los Llanganates National Park in the Amazon River Basin. The Cerro Candelaria region of Ecuador is known for its many tiny orchid species, and it is home to a number of rare and poorly known orchids, including an orchid genus found nowhere else. Already 16 orchid species new to science have been discovered in this reserve, as well as a new species of frog and a new species of tree that will be named in honor of Sir David Attenborough. This nature reserve is also home to threatened animal and bird species such as the White-rimmed Brush-Finch, Mountain Tapir, Spectacled Bear and Ocelot.
“It’s an exciting feeling to find a new species. People think everything has been discovered but there’s much more,” Dr. Jost pointed out.
Dr. Jost’s most dramatic discovery so far was 28 new orchid species in the genus Teagueia, which had previously included just six species. Teagueia orchids are a spectacular plant radiation that evolved in an area smaller than the island of Manhattan. The radiation of these 28 closely related orchids in such a small area is celebrated as a botanical version of Darwin’s finches.
Road construction through the most remote and pristine regions in Ecuador has led to the discovery of more than 1,000 orchid species in the past century. These new-to-science species are eagerly pursued by orchid collectors, greenhouses and breeders, as well as botanists and other scientists throughout the world.
Sources and Background Reading:
Lou Jost’s Monograph of the Genus Teagueia. Fascinating reading about all these new orchid species. Includes comments about unresolved taxonomic issues in this genus and images of each flower.
Read more about the Cerro Candelaria nature reserve.
Read more about the EcoMinga Foundation.
Read more about the World Land Trust.
Doomed To Die? Sports Illustrated Magazine. | <urn:uuid:78655857-d63a-4334-882f-2400b78d213c> | 3.546875 | 1,038 | Personal Blog | Science & Tech. | 53.317488 |
The acid/base titration with NaOH and acetic acid is the oldest experiment in the book. I did it when I was in college (a long time ago) and both schools for which I work do the lab. I know they repeat it in higher level classes with a few more steps to make it more challenging. So it is to your advantage to try to master it at this level so you can handle more detail when they throw it at you.
Part I: Standardization of a base (NaOH)
Why would you standardize a solution? Generally it is because you need to precisely know the concentration of it.
What variables do you know in the first part? You know how many moles of acid (KHP) you started with; you can calculate moles of acid in your starting material. You also know from the balanced chemical equation that the neutralization of KHP by NaOH proceeds in a 1:1 mole ratio between acid and base.
KHP(aq) + NaOH(aq) → KNaP(aq) + H2O(l)
With your buret, you measure the volume of NaOH required to neutralize your KHP acid. You know the titration is complete when you see your indicator turn a faint pink color. This indicates that the pH of the solution has risen above 7 (just past the end point, the pH can jump up sharply). The true equivalence between NaOH and KHP comes a few drops before you see pink in the solution, but we approximate equivalence at the point the solution turns pink.
How do you calculate the concentration of the standard solution? You take moles of acid and set it equal to moles of NaOH. (You know this is true based on the balanced equation).
You can read the volume of NaOH required to complete the titration on your buret. This is the total mL (or Liters) you used to complete the titration.
Simply divide moles of base by liters of solution to get the concentration of the standard solution.
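As a quick numeric sketch of the Part I arithmetic (the KHP mass and buret volume below are invented example values, not data from the actual lab):

```python
# Hypothetical data for one standardization trial.
khp_mass = 0.5125          # g of KHP weighed out (made-up value)
khp_molar_mass = 204.22    # g/mol for KHC8H4O4
naoh_volume = 0.02465      # L of NaOH delivered from the buret (made-up value)

moles_khp = khp_mass / khp_molar_mass
# The balanced equation gives a 1:1 mole ratio, so:
moles_naoh = moles_khp
naoh_molarity = moles_naoh / naoh_volume
print(f"{naoh_molarity:.4f} M")  # 0.1018 M
```

Whatever your own mass and volume readings are, the structure of the calculation is the same: moles of KHP, set equal to moles of NaOH, divided by liters of titrant.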
In part II of this experiment you are working backwards. What are your known and unknown variables? Let's think about this carefully:
Generally you are told to pipette a certain amount of acetic acid in a flask. Let’s say you have a 5.00 mL calibrated pipette. You have 5.00 mL or 0.00500 L of acetic acid in your flask. (Notice I used three sig figs by adding two zeroes to the end of my liter value)
In this titration you now have a known concentration of NaOH from part A. (No, it is not 1:1 as I often see in lab reports for this lab; it is the value you determined in part A.) At the end of the titration, your acetic acid solution turns pink when moles of acetic acid are essentially equal to moles of base. How do you use this data to derive the concentration of your acid?
Let’s say you have a 0.2540 M concentration of NaOH from part A and you used 26.80 mL of your NaOH to titrate your acetic acid. You can determine moles of NaOH used for the titration by multiplying your concentration of NaOH by your volume of NaOH.
0.2540 mol/L NaOH × 0.02680 L NaOH = moles of NaOH (mol/L × L is always mol)
We know this is equal to moles of acetic acid by the balanced chemical equation.
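Putting the example numbers from this section together (0.2540 M NaOH, 26.80 mL of titrant, 5.00 mL of acid), the full Part II calculation can be sketched as:

```python
naoh_molarity = 0.2540   # mol/L, from the Part I standardization
naoh_volume = 0.02680    # L of NaOH used in the titration
acid_volume = 0.00500    # L of acetic acid pipetted into the flask

moles_naoh = naoh_molarity * naoh_volume   # mol/L x L = mol
moles_acid = moles_naoh                    # 1:1 ratio at the endpoint
acid_molarity = moles_acid / acid_volume
print(f"{acid_molarity:.3f} M")  # 1.361 M
```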
What volume is this divided by to get the concentration? The original volume of acid you pipetted into your flask: 5.00 mL or 0.00500 L. | <urn:uuid:a75760a4-929b-4ac2-a2a5-5086a77a14a4> | 3.703125 | 786 | Tutorial | Science & Tech. | 70.613104
(96) ... By the electromagnetic experiments of MM. Weber and Kohlrausch [ref], v=310,740,000 metres per second is the number of electrostatic units in one electromagnetic unit of electricity, and this, according to our result, should be equal to the velocity of light in air or vacuum. The velocity of light in air ... according to the more accurate experiments of M. Foucault [ref], V=298,000,000. The velocity of light in the space surrounding the earth, deduced from the coefficient of aberration and the received value of the radius of the earth's orbit, is V = 308,000,000. (97) Hence the velocity of light deduced from experiment agrees sufficiently well with the value of v deduced from the only set of experiments we as yet possess. The value of v was determined by measuring the electromotive force with which a condenser of known capacity was charged, and then discharging the condenser through a galvanometer, so as to measure the quantity of electricity in it in electromagnetic measure. The only use made of light in the experiment was to see the instruments. The value of V found by M. Foucault was obtained by determining the angle through which a revolving mirror turned, while the light reflected from it went and returned along a measured course. No use what ever was made of electricity or magnetism. The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws.
Published before 1923 | <urn:uuid:8aa4c827-231e-4fd2-8790-0442c2209fcc> | 3.1875 | 326 | Knowledge Article | Science & Tech. | 49.825255 |
Trawling 'ploughs' the deep sea floor
Habitat damage: Bottom trawling by fishermen may be even more damaging than previously thought, affecting the seabed as seriously as intensive ploughing of farmland erodes the soil, say Spanish scientists.
Bottom trawling - dragging nets across the sea floor to scoop up fish - stirs up the sediment lying on the seabed, displaces or harms some marine species, causes pollutants to mix into plankton and move into the food chain and creates harmful algae blooms or oxygen-deficient dead zones.
"Bottom trawling has been compared to forest clear-cutting, although our results suggest that a better comparison might be intensive agricultural activities," they write in a study published on the journal Nature.
During the 20th century, more intensive farming techniques and changes in land use reduced the diversity of landscapes almost everywhere, say the researchers.
Ploughing up land exposes the top soil to erosion by wind and water, destroying or weakening nutrients in the soil which are essential for many plant species to survive.
As with soil, the seabed is composed of layers of sediment, holding nutrients that are vital for marine life.
While farmers usually plough their land a few times a year, sea trawling can occur on a near daily basis, the scientists say.
Fishing has also become increasingly industrial. As technology has improved and traditional fish stocks have been depleted, trawling fleets have gone into ever deeper waters in search of fish.
The scientists measured the movement of sediments on the sea floor caused by fishing activities in a submarine canyon in the northwest Mediterranean Sea.
Deep-sea trawling became fully industrialised in the region in the 1960s and 180 large bottom trawlers currently operate to depths of 800 metres or more.
The scientists found heavy fishing equipment moves sediments on upper continental slopes - the transitions between shallow continental shelves and deep basins - modifying the submarine landscape over large areas.
They linked daily sediment movement to the passage of the trawling fleet, and found some of the movement was similar in size to the sediment transport caused by winter storms in nearby submarine canyons.
Using satellite navigation tracks from bottom trawlers operating in the area, the scientists discovered the tracks coincided with smoothed parts of the canyon at depths shallower than 800 metres. Untrawled parts of the canyon, by contrast, were dominated by a network of valleys.
"The frequent repeated trawling (ploughing) over the same ground, involving displacement of sediments owing to mechanical redistribution, ultimately causes the levelling of the surface and produces morphological effects similar to those of a [ploughed] farmer's field," say the scientists.
The ecological impact of trawling and its influence on changes to the submarine landscape should be considered a danger to the ocean ecosystem alongside global warming, rising sea levels, acidification and changes in ocean circulation, they add. | <urn:uuid:7272384d-d7be-4738-8ed2-1dd5701798e2> | 3.734375 | 606 | Truncated | Science & Tech. | 24.601428 |
Articles, documents and multimedia from ABC Science
Friday, 15 March 2013
Mass strandings of pilot whales do not result from extended family members rushing to the aid of kin, new research suggests.
Monday, 19 November 2012
The largest animal on Earth is changing its ecology, say researchers, and it could be due to climate change or the effects of whaling.
Monday, 17 September 2012
Whales successfully released after beaching themselves, have a good chance of survival, according to new satellite tracking data.
Friday, 14 September 2012
Killer whale mothers often live long past menopause and now research suggests it is to care for offspring - especially sons.
Tuesday, 3 July 2012
Meet a scientist: Marine scientist Maryrose Gulesserian's research sheds some light on how our fascination with whale watching affects these magnificent animals.
Wednesday, 8 February 2012
The steady drone of commercial shipping lanes not only alters whale behaviour but can affect the giant sea mammals physically.
Wednesday, 26 October 2011
Some killer whales wander nearly 10,000 kilometres from Antarctica, but not to feed or breed, a new study has found.
Wednesday, 17 August 2011
An Australian palaeontologist has figured out a missing step in the evolution of giant filter-feeding mouths characteristic of blue whales.
Wednesday, 20 April 2011
Humpback whales can swim thousands of kilometres in a straight line, suggesting they may use a unique compass mechanism, according to researchers.
Friday, 15 April 2011
Scientists say mating songs sung by Australian humpbacks go on to spread to whale populations all over the South Pacific.
Monday, 6 December 2010
Whales living near the Galapagos Islands appear to have been exposed to higher levels of pollutants than those in other areas of the Pacific, say an international team of researchers.
Wednesday, 10 November 2010
Whales are showing signs of acute sun damage that researchers believe is due to rising levels of ultra violet radiation.
Wednesday, 13 October 2010
A humpback whale has broken the world record for travel by any mammal, swimming from the Atlantic to the Indian Ocean in search of a mate, marine biologists report.
Thursday, 1 July 2010
Palaeontologists unearth a prehistoric monster whale with teeth so huge it probably hunted other whales not less than half its size.
Monday, 21 June 2010
As the future of whales once more comes under global debate, some scientists say the marine mammals are not only smarter than thought but also share several attributes once claimed as exclusively human. | <urn:uuid:9833cd58-f980-4e74-93db-8131a17d20e8> | 2.890625 | 520 | Content Listing | Science & Tech. | 34.015984 |
Science Fair Project Encyclopedia
By definition, interplanetary travel is travel between bodies in a given star system.
Current achievements in interplanetary travel
NASA's Apollo program landed twelve people on the Moon and returned them to Earth: Apollo 11-17, except 13, that is, six missions, each carrying three astronauts, two of whom landed on the Moon. Robot probes have been sent to fly past most of the major planets of the Solar system. The most distant probes, Pioneer 10, Pioneer 11, Voyager 1 and Voyager 2, are on course to leave the Solar system, but will cease to function long before reaching the Oort cloud.
Robot landers such as Viking and Pathfinder have already landed on the surface of Mars and several Venera and Vega spacecraft have landed on the surface of Venus. The NEAR Shoemaker orbiter successfully landed on the asteroid 433 Eros, even though it was not designed with this maneuver in mind.
Orbital mechanics of interplanetary travel
To date, the only form of spacecraft propulsion used for interplanetary missions is the chemical rocket engine. The limitations of this engine dictate the trajectories and travel times required for interplanetary travel.
All objects in a star system are in orbit around the star; if they were not, they would have "left" the system or fallen into the star long ago. This implies that one cannot simply point oneself at another planet and fly in that direction, because upon arrival the planet will be moving at an inappropriate relative velocity or may have moved altogether. For instance, if a spacecraft were to start from the Earth and fly to Mars, its final velocity will be close to Earth's orbital velocity which is much higher than that of Mars. This is because any spacecraft starting on a planet is also in orbit around the Sun, and a brief glance at the planetary speeds and distances demonstrates that the power of a chemical rocket pales in comparison to the relative speeds of the planets. In order to make interplanetary travel possible, a reduction in the total amount of energy needed to do so is required.
For many years this meant using the Hohmann transfer orbit. Hohmann demonstrated that the lowest-energy transfer between any two orbits is to elongate the orbit so that its apoapsis lies on the orbit in question. Once the spacecraft arrives, a second application of thrust will re-circularize the orbit at the new location. In the case of planetary transfers this means adjusting the spacecraft, originally in an orbit almost identical to Earth's, so that its aphelion is on the far side of the Sun near the orbit of the other planet. A spacecraft traveling from Earth to Mars via this method arrives near Mars' orbit in approximately eight and a half months, but because orbital velocity is greater when closer to the center of mass (i.e. the Sun) and slower when farther from it, the spacecraft will be travelling quite slowly at aphelion and a small application of thrust is all that is needed. If the maneuver is timed properly, Mars will be "arriving" under the spacecraft when this happens.
The Hohmann transfer applies to any two orbits, not just those with planets involved. For instance, it is the most common way to transfer satellites into geostationary orbit, after first being "parked" in low Earth orbit. However, a Hohmann transfer takes half the orbital period of the transfer ellipse (comparable to half the period of the outer orbit), so in the case of the outer planets this is many years, too long to wait. It is also based on the assumption that the points at both ends are massless, as when transferring between two orbits around Earth, for instance. With a planet at the destination end of the transfer, calculations become considerably more difficult.
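The scale of an Earth-Mars Hohmann transfer can be sketched with the vis-viva equation. This assumes circular, coplanar orbits, and the constants below are standard approximate values (the commonly quoted transfer time is about 8.5 months):

```python
from math import pi, sqrt

MU_SUN = 1.32712e20   # Sun's gravitational parameter, m^3/s^2
R_EARTH = 1.496e11    # Earth's orbital radius, m (circular orbit assumed)
R_MARS = 2.279e11     # Mars' orbital radius, m

a = (R_EARTH + R_MARS) / 2   # semi-major axis of the transfer ellipse
# Transfer time is half the ellipse's period (Kepler's third law).
transfer_days = pi * sqrt(a**3 / MU_SUN) / 86400

v_earth = sqrt(MU_SUN / R_EARTH)                  # Earth's circular speed
v_perihelion = sqrt(MU_SUN * (2 / R_EARTH - 1 / a))  # vis-viva at departure
dv_departure = v_perihelion - v_earth             # burn to enter the transfer

print(f"transfer time ~ {transfer_days:.0f} days")        # ~259 days
print(f"departure burn ~ {dv_departure / 1000:.2f} km/s") # ~2.94 km/s
```

A matching burn is needed at aphelion to match Mars' orbital speed, and the figures ignore escaping Earth's own gravity well, so real mission delta-v budgets are larger.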
One technique, known as the gravitational slingshot, uses the gravity of the planets to modify the path of the spacecraft without using fuel. In a typical example, a spacecraft is sent to a distant planet on a path that is much faster than what the Hohmann transfer would call for. This would typically mean that it would arrive at the planet's orbit and continue past it. However, if there is a planet between the departure point and the target, it can be used to bend the path toward the target, and in many cases the overall travel time is greatly reduced. A prime example of this is the Voyager program, whose two craft used slingshot effects to change trajectories several times in the outer solar system. This method is not easily applicable to Earth-Mars travel, although it is possible to use other nearby planets such as Venus or even the Moon as slingshots.
Another technique uses the atmosphere of the target planet to slow down. In this case the spacecraft is sent on a high-speed transfer, which would normally mean it would shoot right past its target upon arrival. By passing into the atmosphere this extra speed is shed, and the mass of the required heat shield is considerably less than the mass of the rocket fuel that would be needed to dissipate the same amount of energy. This concept, known as aerobraking, was first used on the Apollo program, wherein the returning spacecraft did not bother to enter Earth orbit first, and instead re-entered directly at the end of the journey. Similar systems are included in most basic plans for a manned mission to Mars.
Recent advances in computing have allowed old mathematical solutions to be re-investigated, and have led to a new system for calculating even lower-cost transfers. Paths have been calculated which link the Lagrange points of the various planets into the so-called Interplanetary Superhighway. The transfers on this system are slower than Hohmann transfers, but use even less energy, and are particularly useful for sending spacecraft between the inner planets.
There are a number of designs for more efficient spacecraft propulsion methods (as measured by specific impulse) that could speed up interplanetary space missions greatly and allow greater design "safety margins" by reducing the imperative to make spacecraft lighter. If developed, such designs would use trajectories far different from Hohmann transfers.
The most likely near-term development is that of electric propulsion, which uses an external source such as a nuclear reactor to generate electricity, which is then used to accelerate a chemically inert propellant to speeds far higher than achieved in a chemical rocket. A prototype of this technology has already been used on NASA's Deep Space 1, and a more ambitious, nuclear-powered version is intended for an unmanned Jupiter mission, the Jupiter Icy Moons Orbiter, scheduled for "within a decade".
See the spacecraft propulsion article for a discussion of a number of other technologies that could, in the medium to longer term, be the basis of interplanetary missions. Unlike the situation with interstellar travel, the barriers to fast interplanetary travel involve engineering and economics rather than any basic physics.
While manned interplanetary travel (with the arguable exception of the Apollo program) has not yet been achieved, a trip to Mars is probably feasible, even with chemical rocket propulsion, and could probably be achieved within a decade (at most two) if the funds were made available. NASA's "Design Reference Mission" proposes a Mars exploration program costing $50 billion, but others have made detailed proposals with projected costs much less (see Mars Direct).
- See also: manned space mission
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | <urn:uuid:3d6bc8a4-0ded-48fb-9aa5-cf1cf464ef95> | 3.71875 | 1,543 | Knowledge Article | Science & Tech. | 34.699019
Gypsum, a common mineral composed of calcium sulfate, lies scattered in this newly-released image of the Olympia Undae region of Mars. The picture was made by combining CRISM data with HiRISE observations, and has the same resolution characteristics of the Nili Fossae image. Made on October 1, 2006, the image covers about 12 square kilometers (about 4.6 sq. mi.). Gypsum, used commonly on Earth as the substance in wallboard, is a sulfate-rich mineral that forms by evaporation and requires large amounts of water for its formation. In this image, brighter areas contain more gypsum and darker areas less so. The bottom views are enlargements of the central part of the two versions of the image shown at top. Could this large gypsum deposit be the dead mineral remains of a once mighty sea that occupied this area of the Red Planet? | <urn:uuid:9d48d62b-32f4-4800-8e90-7258ffc2c2cc> | 3.421875 | 183 | Knowledge Article | Science & Tech. | 47.968052
Abell 3376: A cluster of galaxies about 614 million light years from Earth.
Caption: This composite image of the galaxy cluster Abell 3376 combines Chandra and ROSAT X-ray data (gold), an optical image from the Digitized Sky Survey (red, green and blue), and a radio image from the VLA (blue). Two different teams used Chandra observations of galaxy clusters – including Abell 3376 -- to study the properties of gravity on cosmic scales and test Einstein's theory of General Relativity. Such studies are crucial for understanding the evolution of the Universe, both in the past and the future, and for probing the nature of dark energy, one of the biggest mysteries in science.
Scale: Image is 29 arcmin across (about 5 million light years).
Chandra X-ray Observatory | <urn:uuid:860cf87f-c610-47ca-bed5-f3d59a3fbbad> | 3.484375 | 169 | Truncated | Science & Tech. | 38.336372 |
Welcome to the official Cryptography site for Duke CompSci 182 Group 5 Section 1.
A Typical Conversation on Cryptography
Me: Hey, you like feeling secure with your computer, right?
Friend:...riiiiiiiiiight? (having no idea where I'm going with this)
Me: Would you feel like switching over to a new mail client and chat client that would promote the confidentiality of your online communications?
Friend: I guess. Would it be easy?
Me: Oh, probably. How would you also feel about needing to find a public key in order to send an e-mail to someone new and posting a public key so people can talk to you?
Friend: I would need to use this public key thing to talk to you?
Friend: I'm not sure if I like talking to you enough to put up with that.
Courtesy of XKCD
What Is Cryptography?
Cryptography is the practice and study of techniques for secure communication in the presence of third parties. More generally, it is about constructing and analyzing protocols that overcome the influence of adversaries and which are related to various aspects in information security such as data confidentiality, data integrity, and authentication. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce.
Everyday Uses of Cryptography.
Whether you realize it or not, there are a lot of ways that you deal with some form of encryption every day. The simplest example is the password you use to log on to a network. Originally, passwords were sent to the server in plaintext. Not the brightest idea. So the passwords are now encrypted!
If you have ever purchased something online you have likely encountered another form of encryption. Both SSL and S-HTTP are technologies that have been developed to protect such web activity. S-HTTP was designed to allow individual files and messages to be encrypted and then sent over the Internet. SSL, on the other hand, was developed to provide a secure connection between a browser and a web server. In the case of SSL, all data that is sent can be encrypted, rather than only individual messages as with S-HTTP. There are two common levels of encryption, 40-bit and 128-bit. The number of bits is the size of the key, and the longer the key, the stronger the security.
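The difference between those two key sizes is easy to see numerically, since each extra bit doubles the number of keys a brute-force attacker must try:

```python
# Each additional key bit doubles the brute-force search space.
for bits in (40, 128):
    print(f"{bits}-bit key: {2 ** bits:.3e} possible keys")
# prints 1.100e+12 and 3.403e+38
```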
Other familiar uses of encryption involve ATMs. The magnetic strip on the back of an ATM card contains, among many things, an encrypted copy of your account number. With the given PIN number (key), this encryption can be verified and your account accessed. Without the PIN, the card is useless. Additionally, cryptographic techniques are employed to protect the copyrighted material found on DVDs and CDs. And finally, all cell phone data that uses GSM technology has its transmissions encrypted.
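To make the public/private key idea behind many of these systems concrete, here is a toy RSA round trip in Python. The primes and message are tiny made-up values for illustration only; real keys use primes hundreds of digits long:

```python
from math import gcd

# Toy RSA key generation with tiny, made-up primes.
p, q = 61, 53
n = p * q                # public modulus
phi = (p - 1) * (q - 1)  # Euler's totient of n
e = 17                   # public exponent, coprime with phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(recovered)  # 42
```

Anyone can encrypt with the public pair (e, n); only the holder of d can decrypt, and recovering d from (e, n) requires factoring n, which is what makes Shor's algorithm (below) such a threat.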
Current Issues: Quantum Computing
Quantum computers are computers that utilize the power of quantum mechanics to perform computational operations on data. They are fundamentally different from the classical model of a computer. Whereas data for classical computers are encoded in bits, quantum computers employ quantum bits to represent data and to perform computation. These ‘qubits’ can exist not only in the classical 0 and 1 states but also in a quantum superposition of both these states. When a qubit is in this superposition of states, the computer can effectively perform an operation on both values simultaneously. Moreover, a pair of qubits can be in any quantum superposition of 4 states and can therefore operate on 4 values at the same time. Similarly, a three-qubit system can operate on 8 values. Generally, an n-qubit system can perform an operation on 2^n values simultaneously. This method by which quantum computers perform simultaneous computations is called quantum parallelism.
Quantum computers function by manipulating these qubits with a quantum algorithm. With large-scale quantum computers, these algorithms can solve certain problems in a fraction of the time taken by a classical computer. For instance, Shor's algorithm can quickly factor large numbers. Factoring a 1000-digit number on a quantum computer with Shor's algorithm would take twenty seconds, whereas on a classical computer it would take longer than the age of the universe. As we can see, an implementation of Shor's algorithm would have a severe effect on the field of cryptography because it would utterly undermine the security provided by public-key encryption. Cryptographers have assumed that adding more digits to the key can combat the increased performance of computers; however, given the power of quantum parallelism, the number of digits in the key has only a small effect on a quantum computer running Shor's algorithm. The algorithm can crack RSA-140 in a matter of seconds. | <urn:uuid:52e4b63f-8099-4fca-92c9-7056001ad11c> | 3.46875 | 976 | Knowledge Article | Science & Tech. | 45.664439
Interfaces are used to define a contract for classes. They are particularly important when it comes to defining standards.
Consider the following example:
Cars have some standards, and their basic operations are the same. Each car should have operations like startCar, moveCar, stopCar etc. Toyota, Mazda and Honda are cars, and each has its own way of starting, moving and stopping. So each has to have its own startCar, moveCar and stopCar method.
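A minimal sketch of how an interface enforces this contract (Python shown purely for illustration — the original does not name a language; the method names simply follow the example above):

```python
from abc import ABC, abstractmethod

class Car(ABC):
    """The contract: every car must provide these operations."""

    @abstractmethod
    def start_car(self): ...

    @abstractmethod
    def move_car(self): ...

    @abstractmethod
    def stop_car(self): ...

class Toyota(Car):
    # Each concrete car supplies its own implementation.
    def start_car(self):
        return "Toyota starting"

    def move_car(self):
        return "Toyota moving"

    def stop_car(self):
        return "Toyota stopping"

print(Toyota().start_car())  # Toyota starting

# Forgetting an operation makes the class unusable, enforcing the contract:
class BrokenCar(Car):
    def start_car(self):
        return "starting"

try:
    BrokenCar()  # raises TypeError: abstract methods not implemented
except TypeError as err:
    print("contract enforced:", err)
```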
How to enforce | <urn:uuid:1513f15e-ecaf-4d00-bac6-5e501c921a43> | 3.140625 | 102 | Tutorial | Software Dev. | 49.386154 |
An Introduction to Unix Permissions -- Part Two
Remember from last week that what a user can do with a file depends on both the file's permissions and the directory's permissions. Let's make a test directory in our home directory to store some test files:
cd
mkdir testdir
cd testdir
touch testfile
ls -la
total 2
drwxr-xr-x   2 genisis  wheel   512 Aug 20 10:24 .
drwxr-xr-x  14 genisis  wheel  1024 Aug 20 10:23 ..
-rw-r--r--   1 genisis  wheel     0 Aug 20 10:24 testfile
Note the interesting behaviour with the times. The parent directory was last modified when
testdir was created, as its name had to be added to the parent directory's list. (Remember from last week that a directory is simply a file containing a list of the directory's contents). Similarly, the
testdir directory was modified the same time that the
testfile was created, as it also had to be added to its directory list.
Now, I want you to
login as a different user (other than root). Looking at the permissions for the testdir directory, will that user be able to
cd into that directory, use the
ls command, read a file, change a file, create a file, or remove a file? As you try this exercise, remind yourself which permission is allowing or preventing that user from doing something in the
testdir directory. I'll
login as the user biko:
exit
login: biko
Password:
Note the shortcut to return to
genisis' home directory. Looks like the execute permission for everyone on the testdir directory allowed biko to
cd into it.
ls -la
total 2
drwxr-xr-x   2 genisis  wheel   512 Aug 20 10:24 .
drwxr-xr-x  14 genisis  wheel  1024 Aug 20 10:23 ..
-rw-r--r--   1 genisis  wheel     0 Aug 20 10:24 testfile
Looks like the read permission for everyone on the
testdir directory allowed
biko to list its contents.
Looks like the read permission for everyone on the
testfile allowed its contents to be read. Even though the file was empty,
biko did not receive an error message.
touch myfile
touch: myfile: Permission denied
biko doesn't have write permission to this directory, so he won't be creating any files in it.
rm testfile
override rw-r--r--  genisis/wheel for testfile? y
rm: testfile: Permission denied
Again, lack of write permission will prevent
biko from removing files from this directory.
mv testfile ~
mv: rename testfile to /home/biko/testfile: Permission denied
This is a move operation;
biko would need write permission on the
testdir directory for this to work.
cp testfile ~
This copy operation was successful as
biko does have write permission on his home directory.
ls -la >> testfile
testfile: Permission denied.
Here I was trying to append the results of
ls -la to the end of the testfile. If
biko had write permission, he would be able to do this.
Now, how would we give
biko permission to create files in this directory but not delete any files which were created by
genisis? See if you can come up with a solution in both absolute and symbolic mode before testing your theory. Note, you have a choice of giving permission to the primary group, to everyone else, or to both. You'll have to log in as the original user who made the testdir directory in order to change its
permissions. And don't forget to change your directory's sticky bit.
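One common answer uses the sticky bit together with write permission for everyone else — treat the exact mode below as an assumption, since the column deliberately leaves the exercise to the reader. Programmatically the idea can be sketched like this (Python used purely for illustration; the article itself works from the shell):

```python
import os
import stat
import tempfile

# Illustrative sketch only: grant write permission to everyone else and set
# the sticky bit, so other users may create files but delete only their own.
testdir = tempfile.mkdtemp()

mode = (stat.S_IRWXU                                  # rwx for the owner
        | stat.S_IROTH | stat.S_IWOTH | stat.S_IXOTH  # rwx for everyone else
        | stat.S_ISVTX)                               # sticky bit
os.chmod(testdir, mode)

print(stat.filemode(os.stat(testdir).st_mode))
# drwx---rwt  -- the trailing 't' is the sticky bit
```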
Now is a good time to mention groups, as group membership is an important consideration when setting permissions. To see what groups you belong to, simply type:
groups
To see what groups anyone else belongs to, add their login name to the end of the groups command like so:
Here I have two users who don't live in the same group. If I wanted them
to share a directory, I could set permissions on everyone, but this would
also give permissions to everyone else. Alternately, I could make them
members of the same group and set permissions for the group. With this
method, I may also have to change the primary group of the file using the
chown command. Let's create a group called
projects and add these two users to it. Become root, as only root can make new users or groups. We'll use the
pw command to create the group; first, we'll type
pw to get the syntax:
usage: pw [user|group|lock|unlock] [add|del|mod|show|next] [help|switches/values]
Then we'll add a group with the fairly straightforward syntax:
pw group add projects
Then we'll verify that it worked like so:
grep projects /etc/group
projects:*:1006:
Our new group is showing up in the
/etc/group database with a group ID of
1006. Now, I'll want to add the users genisis and
biko to the group like so:
pw groupmod projects -M genisis,biko
grep projects /etc/group
projects:*:1006:genisis,biko
Everything looks good; one last test to verify:
groups genisis
wheel projects
groups biko
biko projects
Note that you can belong to more than one group at a time in FreeBSD. Now
all we have to do is change the primary group of the
testdir directory so we can give permissions to just
biko. The user
genisis can do this, as she owns the directory, so we'll exit out of the root account and log back in as genisis:
cd
chown :projects testdir
Note that the
chown command requires a full colon (
:) to indicate you want to change the primary group, not the owner of the file or directory.
Now, let's see if the change was successful:
ls -la testdir
total 2
drwxr-xr-x   2 genisis  projects   512 Aug 20 10:24 .
drwxr-xr-x  14 genisis  wheel     1024 Aug 20 11:45 ..
-rw-r--r--   1 genisis  wheel        0 Aug 20 10:24 testfile
We can now set permissions for the projects group, and it will only affect biko.
I'd like to end this article with some common permissions to set on directories that you create.
If you create a directory that will contain private data that you only
want yourself to access, set its permissions to
700. Users will be able to see the directory, but they won't be able to
cd into it, list its contents, or modify any of the files in it. Keep in mind that root is not subject to permissions, so nothing is really hidden from the root account.
If you wish to have a directory inaccessible to a group of users, set its
permissions to 705. This works, as FreeBSD stops reading permissions when it finds a match. This means that FreeBSD first checks to see if you are the owner of the file; if you are, you are subject to the owner's permissions. If you are not the owner, it then checks to see if you belong to the primary group of the file; if you do, you are subject to that
group's permissions. If you don't, you are subject to the permissions of everyone else.
If you want a group to be able to write files, but only delete their own
files, set the directory's permissions to
Practice reading directory listings to determine what you can and can't do
with a file. When you see a listing, think which
chmod command would have set that permission. Before you know it, you'll be able to look at a
listing and know how to change its permissions so your users can do what
you want them to do.
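For practising reading listings, the mapping from octal modes to ls-style permission strings can be checked with a short script (an illustrative sketch using Python's standard stat module, not part of the original column):

```python
import stat

# Map a few of the octal modes discussed above to their ls-style strings.
# stat.filemode expects a full st_mode, so OR in the directory type bit.
for octal in (0o700, 0o705, 0o755, 0o1777):
    print(oct(octal), "->", stat.filemode(stat.S_IFDIR | octal))

# 0o700  -> drwx------   private directory
# 0o705  -> drwx---r-x   group locked out; everyone else may enter and list
# 0o755  -> drwxr-xr-x   the default seen in the listings above
# 0o1777 -> drwxrwxrwt   world-writable with the sticky bit set
```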
Next week, we'll do some customizing of the user environment by learning how to change our shell prompt.
Dru Lavigne is a network and systems administrator, IT instructor, author and international speaker. She has over a decade of experience administering and teaching Netware, Microsoft, Cisco, Checkpoint, SCO, Solaris, Linux, and BSD systems. A prolific author, she pens the popular FreeBSD Basics column for O'Reilly and is author of BSD Hacks and The Best of FreeBSD Basics.
Read more FreeBSD Basics columns.
Return to the BSD DevCenter. | <urn:uuid:7d23c2b6-7211-4d62-adaf-f4855fee4454> | 2.78125 | 1,850 | Tutorial | Software Dev. | 62.17305 |
New research shows Earth's clouds are getting a little lower, by about one percent on average.
According to scientists this indicates that something quite important might be going on.
The results have potential implications for future global climate.
Scientists at the University of Auckland in New Zealand analyzed the first 10 years of global cloud-top height measurements
(from March 2000 to February 2010) from the Multi-angle Imaging SpectroRadiometer
(MISR) instrument on NASA's Terra spacecraft.
The study, published recently in the journal Geophysical Research Letters, revealed an overall trend of decreasing cloud height.
Global average cloud height declined by around one percent over the decade, or by around 100 to 130 feet (30 to 40 meters).
Most of the reduction was due to fewer clouds occurring at very high altitudes.
Lead researcher Roger Davies said that while the record is too short to be definitive, it provides a hint that something quite important might be going on.
Longer-term monitoring will be required to determine the significance of the observation for global temperatures.
A consistent reduction in cloud height would allow Earth to cool to space more efficiently, reducing the surface temperature of the
planet and potentially slowing the effects of global warming. This may represent a "negative feedback" mechanism - a change caused
by global warming that works to counteract it. "We don't know exactly what causes the cloud heights to lower," says Davies.
This image of clouds over the southern Indian Ocean was acquired on July 23, 2007 by one of the backward (northward)-viewing cameras
of the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's polar-orbiting Terra spacecraft. Image credit: NASA/JPL-Caltech
"But it must be due to a change in the circulation patterns that give rise to cloud formation at high altitude."
NASA's Terra spacecraft is scheduled to continue gathering data through the remainder of this decade.
Scientists will continue to monitor the MISR data closely to see if this trend continues.
Anomaly In The Earth's Atmosphere Filmed By ISS
This strange pyramid-shaped cloud was filmed by the ISS crew.
What this could be, depends on who you ask.
Some will say it is just a cirrus cloud. Others think it could be a result of HAARP, or a cloaked UFO. Whatever it is, it certainly does not look like an ordinary cloud.
It is up to you to decide what to think.
Unusual Pulsar Or Alien Signals?
The pulse timing of this object is considered unusual.
What kind of phenomenon is related to this object?
It is the first time this kind of phenomenon has been observed by astronomers.
The "Cloaked" Star Was Difficult To Find
An object obscured by dust, and buried in a two-star system enshrouded by dense gas, is not easy to find.
A "cloaked" star was discovered after it ate a little of its neighbor. The meal must have given the star a bit of indigestion, because it
"burped" with a blast of high-energy radiation, which gave it away.
Radio Emission From Ultracool Dwarf Detected By Arecibo Telescope
The Arecibo Telescope in Puerto Rico has discovered sporadic bursts of polarized radio emission from the T6.5 brown dwarf J1047+21.
Because Arecibo is a single, fixed-dish telescope, it has a restricted practical sensitivity to weak, quiescent emission from radio sources...
Invader From Another Galaxy
This alien intruder from another galaxy is in many ways different from other exoplanets observed by astronomers.
Located about 2000 light-years from Earth in the southern constellation of Fornax (the Furnace), the Jupiter-like planet orbits a dying star of
extragalactic origin and risks being engulfed by it.
"Pillars Of Creation" Are Gone
Every time you look at the beautiful and famous image of the Pillars of Creation taken by Hubble back in 1995,
you are actually admiring something that no longer exists.
In fact, the Pillars of Creation were already long gone by the time the image was captured! | <urn:uuid:13c9e012-c8ea-470b-aa58-44264bb33221> | 3.921875 | 907 | Content Listing | Science & Tech. | 44.708052 |
While news headlines this past week highlighted flood and devastation on the Mississippi in the US, Europe has been agonising about the long-term drought in the Mediterranean (see Focus, this issue). Both catastrophes have their own timescales, and both inflict devastating human misery.
In an effort to tame great rivers, many have inadvertently been made worse. For two centuries, engineers have raised the Mississippi's levees ever higher to keep it from inundating its natural flood plain. But, with more than $7 billion spent in the past 60 years, the work has been largely self-defeating.
The Mississippi's swamps and flood plains, from Iowa to Louisiana, were once natural reservoirs, receiving water from rivers when they flood and evening out the seasonal variation in flow. Raise the levees and the flow in the river becomes greater during floods, bringing new threats downstream. Meanwhile, with no water stored on the flood plains, the ...
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content. | <urn:uuid:03691c98-ec82-4e86-a923-716809590c5d> | 3.5 | 218 | Truncated | Science & Tech. | 50.145086 |
HOPPING may be the best way for robotic probes to explore the surface of comets and asteroids. Japanese engineers have built a cylindrical prototype that they say could take 9-metre hops in a low-gravity environment. They propose adding a more advanced version of the probe to MUSES-C, a Japanese mission to return an asteroid sample to Earth in June 2006.
Wheeled robots work well on moons and planets, but the very low gravity of asteroids and comets poses problems. The traction needed for horizontal motion comes from the vehicle's weight pressing down on the surface, but on an asteroid only 2 kilometres across the force of gravity is about 100 000 times weaker than on Earth. That leaves so little traction that the robot's wheels will slip unless they move very slowly.
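The scale of the traction problem can be estimated from g = GM/r² (a back-of-envelope sketch; the density below is an assumption, and the resulting factor of tens of thousands matches the article's figure of roughly 100,000 only to order of magnitude):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
EARTH_G = 9.81  # Earth surface gravity, m/s^2

def surface_gravity(radius_m, density_kg_m3):
    # g = G * M / r^2 with M = (4/3) * pi * r^3 * rho,
    # which simplifies to g = (4/3) * pi * G * rho * r.
    return (4.0 / 3.0) * math.pi * G * density_kg_m3 * radius_m

# A body "2 kilometres across" -> radius ~1 km; density ~2000 kg/m^3 is a guess.
g_asteroid = surface_gravity(1_000, 2_000)
print(f"asteroid surface gravity ~ {g_asteroid:.1e} m/s^2")
print(f"weaker than Earth by a factor of ~ {EARTH_G / g_asteroid:,.0f}")
```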
Researchers at the Jet Propulsion Laboratory in Pasadena, California, have already developed a wheeled rover for MUSES-C, to ...
| <urn:uuid:68bf1308-7d0e-4943-8007-a1b86d9cb9d8> | 3.921875 | 215 | Truncated | Science & Tech. | 47.351818 |
Flame colours - a demonstration
This demonstration experiment can be used to show the flame colours given by alkali metal, alkaline earth metal, and other metal, salts. This is a spectacular version of the ‘flame tests’ experiment that can be used with chemists and non-chemists alike.
This experiment must be done as a demonstration. It takes about ten minutes if all is prepared in advance.
Preparation includes making up the spray bottles and conducting a risk assessment.
Your employer's risk assessment must be customised by determining where to spray the flame to guarantee the audience’s safety.
Samples of the following metal salts (no more than 1 g of each):
Lithium chloride (HARMFUL)
Copper sulfate (HARMFUL, DANGEROUS TO THE ENVIRONMENT)
Ethanol (HIGHLY FLAMMABLE), approx 10 cm3 for each metal salt
or IDA (industrial denatured alcohol) (HIGHLY FLAMMABLE, HARMFUL)
Refer to Health & Safety and Technical notes section below for additional information.
Trigger pump operated spray bottles (Note 2)
Heat resistant mat(s)
Hand-held spectroscopes or diffraction gratings (optional)
Health & Safety and Technical notes
Carry out the whole experiment in a well ventilated area you have previously shown to be safe. Wear eye protection. Ensure that the spray can be safely directed away from yourself and the audience.
Sodium chloride, NaCl(s) - see CLEAPSS Hazcard.
Potassium chloride, KCl(s) - see CLEAPSS Hazcard. Potassium iodide and lithium iodide can be used instead. As a general rule, chlorides are usually suggested as they tend to be more volatile and more readily available. These two iodides are in fact a little more volatile than the chlorides, and potassium iodide is certainly likely to be available - see CLEAPSS Hazcard.
Lithium chloride, LiCl(s), (HARMFUL) - see CLEAPSS Hazcard.
Copper sulfate, CuSO4(s), (HARMFUL, DANGEROUS TO THE ENVIRONMENT) - see CLEAPSS Hazcard.
Ethanol, CH3CH2OH(l), (HIGHLY FLAMMABLE). IDA (industrial denatured alcohol) (HIGHLY FLAMMABLE, HARMFUL) - see CLEAPSS Hazcard. Make a saturated solution of each salt in about 10 cm3 ethanol. To do this, add the salt to the ethanol in small quantities, with stirring, until no more will dissolve – often only a few mg of salt will be needed.
1 Other metal salts (e.g. those of calcium and barium) can also be used provided an appropriate risk assessment is carried out. Barium chloride (TOXIC), calcium chloride (IRRITANT) and strontium chloride (IRRITANT) all give different colours - see CLEAPSS Hazcards. The chlorides of metals are the best but other salts also work - carry out an appropriate risk assessment.
2 Place each salt solution in a spray bottle and label the bottle. The solutions can be retained for future use. They can be stored in the plastic bottles for several weeks at least without apparent deterioration of the bottles. Spray bottles of the type used for products such as window cleaner should be used. These piston-operated spray bottles should be emptied, cleaned thoroughly and finally rinsed with distilled water. Ideally, one bottle is needed for each metal salt. Never use spray bottles with a rubber bulb - the flame may flash back into the container.
a Darken the room if possible.
b Light the Bunsen and adjust it to give a non-luminous, roaring flame (air hole open).
c Conduct a preliminary spray in a safe direction away from the Bunsen flame.
Adjust the nozzles of the spray bottles to give a fine mist.
d Choose one spray bottle. Spray the solution into the flame in the direction you have rehearsed. Repeat with the other bottles.
e A spectacular coloured flame or jet should be seen in each case. The colour of the flame depends on the metal in the salt used.
f As an extension, students can view the flames through hand-held spectroscopes or diffraction gratings in order to see the line spectrum of the element. (Diffraction gratings work better. A better way to produce a steady source of light is to use discharge tubes from the Physics Department – with a suitable risk assessment.)
The colours that should be seen are:
- sodium – yellow-orange (typical ‘street lamp’ yellow)
- potassium – purple-pink, traditionally referred to as ‘lilac’ (often contaminated with small amounts of sodium)
- lithium – crimson red
- copper – green/blue
- calcium – orange-red (probably the least spectacular)
- barium – apple green
- strontium – crimson
The electrons in the metal ions are excited to higher energy levels by the heat. When the electrons fall back to lower energy levels, they emit light of various specific wavelengths (the atomic emission spectrum). Certain bright lines in these spectra cause the characteristic flame colour.
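As an illustration of why each metal gives a characteristic colour, the photon energy for an emission line follows E = hc/λ. The wavelengths below are standard textbook values assumed for illustration (e.g. sodium's familiar yellow doublet near 589 nm), not figures from this page:

```python
# Photon energy for an emission line: E = h * c / wavelength
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

# Approximate prominent lines for some of the flame colours listed above (nm).
lines = {"sodium (yellow)": 589, "lithium (crimson)": 671, "potassium (lilac)": 766}
for name, nm in lines.items():
    print(f"{name}: {nm} nm -> {photon_energy_ev(nm):.2f} eV")
```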
The colour can be used to identify the metal or its compounds (eg sodium vapour in a street lamp). The colours of fireworks are, of course, due to the presence of particular metal salts.
Health & Safety checked June 2007
The University of Edinburgh - gives a simple explanation of flame colours in terms of excited electrons.
Creative Chemistry - gives another slightly different version, involving establishing some flame colours and then using them to identify unknowns.
Page last updated on 02 December 2011 | <urn:uuid:1ff95956-c134-4408-9adb-3deb25aaa8e7> | 3.609375 | 1,227 | Tutorial | Science & Tech. | 43.598552 |
In order to justify text in the way a word processor would, one must set 'linebreak' to 'false' and 'parbreak' to 'true'. For every new line, you must use "\n\n" (two newlines, NOT carriage returns as stated in the docs). If you wish to make two new lines in your finished document, "\n\n\n\n" will not work. Instead, you must use "\n\n \n\n".
For every line to be justified, you should append "\n " (new line and a space) to the end of the text. For this to work, 'linebreak' must be 'true'.
(PECL ps >= 1.1.0)
ps_show_boxed — Output text in a box
Outputs a text in a given box. The lower left corner of the box is at
(left, bottom). Line breaks
will be inserted where needed. Multiple spaces are treated as one.
Tabulators are treated as spaces.
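The line-filling behaviour described here can be modelled with a tiny greedy word-wrapper (an illustrative Python sketch, not pslib's actual implementation — pslib measures real glyph widths in points and also supports hyphenation):

```python
def wrap_into_box(text, box_width_chars):
    """Greedy line filling, as a rough model of how text flows into a box."""
    words = text.split()  # collapses runs of spaces and tabs, like pslib
    lines, current = [], ""
    for word in words:
        candidate = f"{current} {word}".strip()
        if len(candidate) <= box_width_chars:
            current = candidate
        else:
            lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

for line in wrap_into_box("Line breaks   will be\tinserted where needed.", 16):
    print(line)
```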
The text will be hyphenated if the parameter
hyphenation is set to true
and the parameter
hyphendict contains a valid
filename for a hyphenation
file. The line spacing is taken from the value leading.
Paragraphs can be
separated by an empty line just like in TeX. If the value
parindent is set to a value > 0.0 then the first n lines will be indented. The number n
of lines is set by the parameter numindentlines.
In order to prevent
indenting of the first m paragraphs set the value
parindentskip to a value of m.
Resource identifier of the postscript file as returned by ps_new().
The text to be output into the given box.
x-coordinate of the lower left corner of the box.
y-coordinate of the lower left corner of the box.
Width of the box.
Height of the box.
hmode can be "justify", "fulljustify", "right", "left", or "center". The difference between "justify" and "fulljustify" affects only the last line of the box. In fulljustify mode the last line will be left and right justified unless this is also the last line of a paragraph. In justify mode it will always be left justified.
The output of ps_show_boxed() can be configured with several parameters and values which must be set with either ps_set_parameter() or ps_set_value(). Beside the parameters and values which affect text output, the following parameters and values are evaluated.
- leading (value)
Distance between baselines of two consecutive lines.
- linebreak (parameter)
Set to "true" if you want a carriage return to start a new line instead of treating it as a space. Defaults to "false".
- parbreak (parameter)
Set to "true" if you want a carriage return on a single line to start a new paragraph instead of treating it as a space. Defaults to "true".
- hyphenation (parameter)
Set to "true" in order to turn hyphenation on. This requires a dictionary to be set with the parameter "hyphendict". Defaults to "false".
- hyphendict (parameter)
Filename of the dictionary used for hyphenation pattern (see below).
- hyphenminchar (value)
The number of chars which must at least be left over before or after the hyphen. This implies that only words of at least two times this value will be hyphenated. The default value is three. Setting a value of zero will result in the default value.
- parindent (value)
Set the amount of space in pixel for indenting the first m lines of a paragraph. m can be set with the value "numindentlines".
- parskip (value)
Set the amount of extra space in pixel between paragraphs. Defaults to 0 which will result in a normal line distance.
- numindentlines (value)
Number of lines from the start of the paragraph which will be indented. Defaults to 1.
- parindentskip (value)
Number of paragraphs in the box whose first lines will not be indented. This defaults to 0. This is useful for paragraphs right after a section heading or text being continued in a second box. In both case one would set this to 1.
- linenumbermode (parameter)
Set how lines are to be numbered. Possible values are "box" for numbering lines in the whole box or "paragraph" to number lines within each paragraph.
- linenumberspace (value)
The space for the column left of the numbered line containing the line number. The line number will be right justified into this column. Defaults to 20.
- linenumbersep (value)
The space between the column with line numbers and the line itself. Defaults to 5.
Text is hyphenated if the parameter hyphenation is set to true and a valid hyphenation dictionary is set. pslib does not ship its own hyphenation dictionary but uses one from openoffice, scribus or koffice. You can find their dictionaries for different languages in one of the following directories if the software is installed:
Number of characters that could not be written.
See also
- ps_continue_text() - Continue text in next line
Note that there will no box be drawn around the text even if the function name suggests this.
After the box has been drawn you can get the new x and y position with
textx points to the end of the last character written by ps_show_boxed and texty points to the baseline of the last line written (which means, if there is e.g. a 'g' in the last line then the lower part's y-coordinates of the g will be lower than the value of texty. I hope you understand what I meant) | <urn:uuid:aa7ebcf3-9ace-4beb-83cc-684a32b27882> | 3.09375 | 1,277 | Documentation | Software Dev. | 59.16966 |
About earthquakes in the Eastern Tennessee Seismic Zone:
The Eastern Tennessee seismic zone extends across Tennessee and northwestern Georgia into northeastern Alabama. It is one of the most active earthquake areas in the Southeast. Although the zone is not known to have had a large earthquake, a few earthquakes in the zone have caused slight damage. The largest known (magnitude 4.6) occurred on April 29, 2003, near Fort Payne, Alabama. Earthquakes too small to cause damage are felt about once a year. Earthquakes too small to be felt are abundant in the seismic zone, and seismographs have recorded hundreds of them in recent decades. Earthquakes in the central and eastern U.S., although less frequent than in the western U.S., are typically felt over a much broader region. East of the Rockies, an earthquake can be felt over an area as much as ten times larger than a similar magnitude earthquake on the west coast. A magnitude 4.0 eastern U.S. earthquake typically can be felt at many places as far as 100 km (60 mi) from where it occurred, and it infrequently causes damage near its source. A magnitude 5.5 eastern U.S. earthquake usually can be felt as far as 500 km (300 mi) from where it occurred, and sometimes causes damage as far away as 40 km (25 mi). | <urn:uuid:95f0a0d5-501f-499f-af9e-411b8201cc74> | 3.359375 | 273 | Knowledge Article | Science & Tech. | 63.770959 |
History of Gypsy Moths in the U.S. Along with other species, the Gypsy Moth was imported into the United States in the mid-nineteenth century with the intent of finding a species of silk producing moth that could be hybridized to compete favorably with the Silkworm Moth, yet not be subject to the many diseases that the Silkworm Moth suffered in cultures. This experiment was conducted by Leopold Trouvelot, an amateur lepidopterist from Medford, Massachusetts, who at one time had more than a million larvae in cultivation behind his house. In 1868 or 1869, several individuals of adult Gypsy Moths escaped from his house, with ten years elapsing before the neighborhood trees were badly defoliated by resulting populations of the moth. From that start, Gypsy Moths have become one of the most important forest pests in the United States, defoliating millions of acres in the northeastern U.S. The Gypsy Moth continues its spread, extending into Virginia, North Carolina and Michigan, with isolated pockets in the Pacific Coast states.
Distribution method. The Gypsy Moth has special methods of dispersal. The young larvae have hairs with small air pockets that create buoyancy, allowing them to travel great distances when the wind is strong. They have been found as high as 2,000 feet in the air, and are known to travel five miles a day by this method. Adult females commonly pupate and deposit egg masses on motor vehicles, especially trucks and recreational vehicles that are parked near or under trees.
Life History: Females lay eggs on the trunks of trees, each egg mass including several hundred eggs. Gypsy Moths overwinter in the egg stage, and hatch in April or May. The young caterpillars are black and hairy, later becoming mottled gray with tufts of bristlelike hairs, and blue and red spots on the back. There is one generation per year. Gypsy Moths have preference for oaks, but they will attack the foliage of most trees and shrubs. Adults differ in appearance, males being brown with a fine, darker brown pattern on the wings. Females are nearly white, with a few dark markings on the wings. Females do not fly. Caterpillars climb trees and feed mostly at night. They are capable of denuding foliage from trees, and this activity will kill many trees if repeated over a few years. Trees also become weakened and more susceptible to diseases and wood boring insects.
Control of Gypsy Moths. Egg masses can be scraped from trees and burned. Sticky bands may be placed around trees to prevent the larvae from climbing to the foliage. After World War II, DDT was used for chemical control and was very effective. However, many other animals from Honeybees to bald eagles were killed or affected also. Nearly 50 species of insects that are parasitic on Gypsy Moths have been introduced for biological control, and this strategy has undoubtedly prevented the Gypsy Moth from becoming even more destructive. Bacterial, fungal, and viral diseases are other likely control agents, currently being explored as possibilities in integrated pest management for the Gypsy Moth.
Doane, C. C. and McManus, M. L., editors. 1981. The Gypsy Moth: Research toward Integrated Pest Management. U. S. Department of Agriculture Forest Service Science and Education Agency, Technical Bulletin 1584.
Forbush, E. H. and Fernald, C. H. 1896. The Gypsy Moth. Wright & Potter, Boston.
Gerardi, M. H. and Grimm. J. K. 1979. The History, Biology, Damage, and Control of the Gypsy Moth, Porthetria dispar (L.). Associated University Presses, Cranberry, New Jersey.
Leonard, David E. 1974. Recent developments in the ecology and control of the Gypsy Moth. Annual Review of Entomology, Volume 19.
Prepared by the Department of Systematic Biology, Entomology Section,
Information Sheet Number 36
NOTE: This publication can be made available in Braille or audio cassette. To obtain a copy in one of these formats, please call or write :
Office of Visitor Services
Public Inquiry Services | <urn:uuid:204820e2-a3b4-47f0-94c7-61d2e63bf36c> | 3.515625 | 891 | Knowledge Article | Science & Tech. | 49.851133 |
Tails of Wonder!
Help the Stardust spacecraft capture comet dust and bring it back to Earth!
Test your comet IQ and help Stardust capture samples from Comet Wild 2. Each time you answer a question correctly, Stardust will move a little closer to the comet. If you get 8 out of 10 correct, Stardust will capture samples of the comet and bring them home to Earth.
If you don't know an answer, keep reading here!
Stardust is the first space mission to capture dust from a comet and return it to Earth. Stardust was launched on February 7, 1999. On January 2, 2004, Stardust met up with Comet Wild 2 (pronounced "Vilt 2"). Before it got there, the spacecraft had to make two trips around the Sun. Then it captured particles of dust using some very weird stuff called aerogel.
Aerogel looks like frozen smoke. It is so light and wispy you can see right through it. It is the lightest known solid material. The comet particles were trapped gently inside the aerogel and stored safely for the trip home. See more pictures of aerogel.
Stardust made one more trip around the Sun to catch up with Earth again. The samples inside the aerogel, stored in a special reentry capsule, parachuted safely to Earth on January 15, 2006.
Why sample a comet?
But why take all this time and trouble to bring home comet samples?
Comets are believed to be a very old part of our solar system. They are made of the leftover materials that didn't become part of the Sun, the planets, or the moons. If we knew more about comets, we would know more about how our solar system formed over four billion years ago!
The nucleus, or solid part of a comet, is usually less than 10 kilometers (about 6 miles) across. The nucleus is like a dirty snowball. Nobody knows for sure what any comet is like inside. Maybe they are not all similar.
Comets seem to contain a lot of ice, some rocks and dust, and some gas. As they get closer to the Sun and start to heat up, some of their material starts to boil off. This material forms a cloud around the nucleus. The cloud is called the coma and may be hundreds of thousands of kilometers in diameter. And trailing out, oftentimes for millions of kilometers, are the comet's tails.
Why do comets have tails?
Most comets have two tails. The tails appear as the comet approaches the Sun. Sunlight pushes on things, but very gently. Because the comet dust particles are so small, they are pushed away from the Sun into a long tail. Another tail is made of electrically charged molecules of gas (called ions). Very rarely a comet will have a third tail made of sodium, which we usually don't see with our unaided eyes.
In the early time of our solar system, Earth was often hit by comets. Scientists believe comets may have contributed some of the water for our oceans or even some of the molecules from which life eventually evolved.
Some believe it may have been a comet hitting Earth that caused the dinosaurs to become extinct.
The Stardust mission, as well as other comet missions NASA has planned, will teach us much more about these fascinating solar system objects.
Relativistic Doppler effect
The classical Doppler effect can be observed with any type of wave. When the source of the waves is moving towards the observer, it causes the waves to bunch up, resulting in an apparently higher frequency. Similarly, if the source is receding from the observer, the waves are spread out and the frequency appears smaller. The effect of time dilation between a moving source and an observer complicates this situation (frequency is inverse time); the relativistic adjustment to the Doppler effect is called the relativistic Doppler effect.
Transverse Doppler effect
When the source is moving past the observer, displaced in the transverse direction, the observed frequency is

f = γf'

at the moment the source is at closest approach to the observer, and

f = f'/γ

when the observer sees the source at closest approach.
Longitudinal Doppler effect
When the source is moving directly towards or away from the observer, the frequency observed is

f = f'√((1 + β)/(1 − β))

where β = v/c is positive for an approaching source and negative for a receding one.
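A quick numeric sketch of both cases (the formulas used here are the standard textbook ones; the sample numbers are arbitrary):

```python
import math

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def longitudinal_doppler(f_source, beta):
    """Observed frequency for a source approaching head-on at speed beta*c.
    Pass a negative beta for a receding source."""
    return f_source * math.sqrt((1.0 + beta) / (1.0 - beta))

beta = 0.6      # source moving at 0.6 c
f_src = 100.0   # emitted frequency, arbitrary units

g = gamma(beta)                                  # ~1.25 for beta = 0.6
f_approach = longitudinal_doppler(f_src, beta)   # ~200: blueshift
f_recede = longitudinal_doppler(f_src, -beta)    # ~50: redshift

# Transverse cases: blueshift gamma*f' when the source is actually at closest
# approach, redshift f'/gamma when the observer *sees* it at closest approach.
f_source_at_closest = g * f_src   # ~125
f_seen_at_closest = f_src / g     # ~80
```

Note the longitudinal shift (a factor of 2 at 0.6 c) is much larger than the purely time-dilation transverse factors.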
Twin paradox

One twin remains on Earth whilst the other makes a trip at high speed to a distant star and returns. Both twins, apparently, claim that the time of the other twin is dilated during the trip, and thus both claim that the other twin is younger upon reunion. This is not really a paradox. The twin who makes the trip has to undergo acceleration and deceleration, which means that she is not in an inertial reference frame. Thus the stay-at-home twin is right, and the traveling twin is younger than her sister when she returns.
In regard to geometry:
A cube has how many corners?
Answer: 8 Corners
Try to picture a cube sitting in front of you and count the points on top: 4. The same is true for the side lying flat on the table, giving a total of 8 corners.
A cube has 8 vertices (corners, points), 12 edges (line segments), and 6 faces (planes). One formula that works for any solid with flat sides and no holes is vertices+faces-edges=2.
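The V − E + F = 2 check is easy to verify mechanically (the extra solids below are standard examples, not from the question):

```python
# (vertices, edges, faces) for a few convex polyhedra
solids = {
    "cube": (8, 12, 6),
    "tetrahedron": (4, 6, 4),
    "octahedron": (6, 12, 8),
}

for name, (v, e, f) in solids.items():
    # Euler's polyhedron formula: V - E + F = 2 for any convex polyhedron
    assert v - e + f == 2, name

print("all solids satisfy V - E + F = 2")
```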
For me, I like to picture the cube as two squares with arms connecting them. I count up the corners on one square and multiply that by two. Since there are four corners on one square, the equation you get is 4*2=8. So, your answer is 8.
Chandra Finds Abundance of Ultraluminous X-ray Sources
This Chandra X-ray image shows the central regions of two colliding galaxies known collectively as "The Antennae." The latest Chandra data reveal a large population of extremely bright X-ray sources in this area of intense star formation. These "ultraluminous" X-ray sources, which emit 10 to several hundred times more X-ray power than similar sources in our own Galaxy, are believed to be either massive black holes, or black holes that are beaming their energy toward Earth. In this X-ray image, red represents the low energy band, green intermediate and blue the highest observed energies. The white and yellow sources are those that emit significant amounts of both low- and high-energy X-rays.
The Antennae Galaxies, about 60 million light years from Earth in the constellation Corvus, got their nickname from the wispy antennae-like streams of gas seen by optical telescopes. These wisps are believed to have been produced by the collision between the galaxies that began about 100 million years ago and is still occurring.
Published in Agronomy Journal, Volume 75, July 1, 1983, pages 654-656.
Publisher website: http://www.agronomy.org
NOTE: At the time of publication, the author David J. Wehner was affiliated with the University of Illinois at Urbana-Champaign. Currently, April 2008, he is the Dean of the College of Agriculture, Food and Environmental Sciences at California Polytechnic State University - San Luis Obispo.
The components of a turfgrass ecosystem, including plants, an intervening layer of thatch and the underlying soil, influence the fate of topically applied urea fertilizer. The loss of urea N by ammonia volatilization may be governed by the rate of urea hydrolysis. The main objective of this study was to determine the extent of urease activity associated with turfgrass plant tissue, thatch, and the underlying soil. This information may help elucidate the mechanism of ammonia loss following urea application. Because a turfgrass stand frequently possesses an extensive thatch layer that may serve as the primary plant growth medium, additional objectives included: i) determining the effects of air drying and seasonal variation on the activity of urease in thatch; ii) determining the variability in thatch urease activity by analyzing multiple field samples; and iii) determining the variation of urease activity within a thatch profile. Turfgrass clippings, thatch, and underlying Flanagan silt loam soil (Aquic Argiudoll) samples were taken from a field-grown Kentucky bluegrass (Poa pratensis L.) turf in either September 1980 or March 1981. On a dry weight basis, urease activity was 18 to 30 times higher from turfgrass clippings and thatch than from soil. Air drying thatch increased urease activity by 20 % over moist samples while air drying soil samples had no apparent effect. Greenhouse incubation of winter-dormant thatch samples increased urease activity 40 %, presumably in response to the duration of increased temperature. Thatch urease activity varied between sampling sites but still remained extremely high compared to soil activity. Within each thatch sample (1 X 1 X 2 cm), urease activity was highest in the upper 1.0 cm of the profile. It was concluded that thatch urease activity was variable in nature depending upon seasonal conditions which contrasts sharply with extremely stable soil urease activities.
These findings suggest that, because of the high level of urease in thatch, ammonia volatilization will occur from most urea-treated turfgrass stands, regardless of the type of underlying soil unless the urea is thoroughly washed into the soil.
Agronomy and Crop Sciences
The Urban Barcode Project is a science competition in New York City. A DNA barcode is a DNA sequence that uniquely identifies each species of living thing. In the Urban Barcode Project, student research teams use DNA barcoding to explore biodiversity in New York City. TeaBOL is a DNA barcoding project undertaken by students at the Trinity School.
DNA, competition, New York, barcoding, sequencing, biodiversity, school, genetics, tea
Timothy syndrome mutations provide new insights into the structure of L-calcium channel
The human genome encodes 243 voltage-gated ion channels. Mutations in calcium channels can cause severe inherited diseases such as migraine, night blindness, autism spectrum disorders and Timothy syndrome, which leads to severe cardiovascular disorders. Katrin Depil and Anna Stary-Weinzinger together with colleagues from the Department of Pharmacology and Toxicology, University of Vienna analyzed changes in molecular organization of calcium channels caused by Timothy syndrome mutations. Recently, they published their current research results in the Journal of Biological Chemistry. Ion channels are large membrane proteins that conduct potassium, sodium or calcium ions. They regulate electrical signals in the nervous system, control the release of neurotransmitters and are responsible for the regulation of the heart rhythm and muscle contractions. Voltage-gated calcium channels, like all other voltage-gated ion channels open and close in response to changes in membrane potential. The exact mechanisms underlying this gating process are still unexplored. It is however known that mutations can severely affect channel opening and closing, thereby disturbing the calcium homeostasis, which could lead to so called "ion channel diseases" or "channelopathies".
Timothy syndrome, which was first described in the 90s, often leads to sudden cardiac death in early childhood. In 2004 it was discovered that mutations in the calcium channel, which replace two amino acids in the ion channel protein sequence with other amino acids, cause the neurological disorders, autism, severe arrhythmias, and webbing of fingers and toes that are associated with Timothy syndrome. Prof. Hering, Head of the Department of Pharmacology and Toxicology of the University of Vienna, explains: "The Timothy mutations result in enhanced calcium entry caused by defects in channel closure during an action potential. This in turn induces a calcium overflow, causing arrhythmias and multiple disease patterns."
Destabilization of the closed pore
The current research focus of the two young scientists, Katrin Depil and Anna Stary-Weinzinger, are voltage gated calcium channels. In the recently published paper in Journal of Biological Chemistry the authors describe that the Timothy-mutation is part of a highly conserved structure motif, which consists of small amino acids – glycines (G) and alanines (A), which they named the "G/A/G/A"-motif. The strongest effect on channel opening occurs when residues from this motif are replaced with bigger hydrophobic amino acids. Anna Stary-Weinzinger: "We assume that the Timothy G406 and the whole G/A/G/A-motif are essential for sealing of the closed channel pore. Mutations to larger amino acids in this position prevent optimal channel closure. Our data suggest that these residues form an important part of the channel gate."
Guided by systematic mutation and correlation analyzes of specific pore segments in calcium channels, Katrin Depil already succeeded in identifying key amino acid side chain properties, that play a key role in the molecular mechanism of channel opening and closure. Katrin Depil: "By analyzing further interactions in different positions in the pore region we aim to refine our calcium channel homology models. We hope to contribute to a better understanding of Timothy disease and other channelopathies."
Source: University of Vienna
Hspec is a Behaviour-Driven Development tool for Haskell programmers. BDD is an approach to software development that combines Test-Driven Development, Domain Driven Design, and Acceptance Test-Driven Planning. Hspec helps you do the TDD part of that equation, focusing on the documentation and design aspects of TDD.
Hspec (and the preceding intro) are based on the Ruby library RSpec. Much of what applies to RSpec also applies to Hspec. Hspec ties together descriptions of behavior and examples of that behavior. The examples can also be run as tests and the output summarises what needs to be implemented.
import Test.Hspec
import Test.Hspec.QuickCheck
import Test.Hspec.HUnit
import Test.QuickCheck hiding (property)
import Test.HUnit

main = hspec mySpecs
Since the specs are often used to tell you what to implement, it's best to start with undefined functions. Once we have some specs, then you can implement each behavior one at a time, ensuring that each behavior is met and there is no undocumented behavior.
unformatPhoneNumber :: String -> String
unformatPhoneNumber number = undefined

formatPhoneNumber :: String -> String
formatPhoneNumber number = undefined
mySpecs = describe "unformatPhoneNumber" [
A boolean expression can act as a behavior's example.
it "removes dashes, spaces, and parenthesies" (unformatPhoneNumber "(555) 555-1234" == "5555551234"),
The pending function marks a behavior as pending an example. The example doesn't count as failing.
it "handles non-US phone numbers" (pending "need to look up how other cultures format phone numbers"),
An HUnit Test can act as a behavior's example (must import Test.Hspec.HUnit).
it "removes the \"ext\" prefix of the extension" (TestCase $ let expected = "5555551234135" actual = unformatPhoneNumber "(555) 555-1234 ext 135" in assertEqual "remove extension" expected actual),
An IO () action is treated like an HUnit TestCase (must import Test.Hspec.HUnit).
it "converts letters to numbers" (do let expected = "6862377" let actual = unformatPhoneNumber "NUMBERS" assertEqual "letters to numbers" expected actual),
The property function allows a QuickCheck property to act as an example (must import Test.Hspec.QuickCheck).
it "can add and remove formatting without changing the number" (property $ forAll phoneNumber $ \ n -> unformatPhoneNumber (formatPhoneNumber n) == n) ] phoneNumber :: Gen String phoneNumber = do nums <- elements [7,10,11,12,13,14,15] vectorOf nums (elements "0123456789")
- data Spec
- data Result
- describe :: String -> [IO (String, Result)] -> IO [Spec]
- it :: SpecVerifier a => String -> a -> IO (String, Result)
- hspec :: IO [Spec] -> IO ()
- pending :: String -> Result
- descriptions :: [IO [Spec]] -> IO [Spec]
- hHspec :: Handle -> IO [Spec] -> IO Bool
- hspecX :: IO [Spec] -> IO a
- hspecB :: IO [Spec] -> IO Bool
- pureHspec :: [Spec] -> [String]
- pureHspecB :: [Spec] -> ([String], Bool)
describe
  :: String                 -- The name of what is being described, usually a function or type.
  -> [IO (String, Result)]  -- A list of behaviors and examples, created by a list of calls to it.
  -> IO [Spec]
Create a set of specifications for a specific type being described. Once you know what you want specs for, use this.
describe "abs" [ it "returns a positive number given a negative number" (abs (-1) == 1) ]
it
  :: SpecVerifier a
  => String                -- A description of this behavior.
  -> a                     -- An example for this behavior.
  -> IO (String, Result)
Create a description and example of a behavior; a list of these is used by describe. Once you know what you want to specify, use this.
describe "closeEnough" [ it "is true if two numbers are almost the same" (1.001 `closeEnough` 1.002), it "is false if two numbers are not almost the same" (not $ 1.001 `closeEnough` 1.003) ]
Create a document of the given specs and write it to stdout. This does track how much time it took to check the examples. Use this if you want a description of each spec and do need to know how long it takes to check the examples, or want to write to stdout.
Declare an example as neither successful nor failing but pending some other work. If you want to report on a behavior but don't have an example yet, use this.
describe "fancyFormatter" [ it "can format text in a way that everyone likes" (pending "waiting for clarification from the designers") ]
hHspec
  :: Handle      -- A handle for the stream you want to write to.
  -> IO [Spec]   -- The specs you are interested in.
  -> IO Bool
Create a document of the given specs and write it to the given handle. This does track how much time it took to check the examples. Use this if you want a description of each spec and do need to know how long it takes to check the examples, or want to write to a file or other handle.
writeReport filename specs = withFile filename WriteMode (\ h -> hHspec h specs)
Like hspec, except the program exits successfully if all examples ran without failures, or with an error code of 1 if any examples failed.
Like hspec, except it returns a Bool indicating whether all examples ran without failures.
Create a document of the given specs. This does not track how much time it took to check the examples. If you want a description of each spec and don't need to know how long it takes to check, use this.
Portability: portable (H98 + FFI)
This module provides pure functions for compressing and decompressing streams of data represented by lazy ByteStrings. This makes it easy to use either in memory or with disk or network IO.
For example a simple gzip compression program is just:
import qualified Data.ByteString.Lazy as ByteString
import qualified Codec.Compression.GZip as GZip

main = ByteString.interact GZip.compress
Or you could lazily read in and decompress a .gz file using:

content <- fmap GZip.decompress (ByteString.readFile file)
Compress a stream of data into the gzip format. This uses the default compression level, which favours a higher compression ratio over compression speed. Use compressWith to adjust the compression level.
Control amount of compression. This is a trade-off between the amount of compression and the time and memory required to do the compression.
The default compression level is 6 (that is, biased towards high compression at expense of speed).
No compression, just a block copy.
The fastest compression method (less compression)
The slowest compression method (best compression).
A specific compression level between 1 and 9.
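These levels follow the standard zlib 0–9 scale, so the trade-off can be demonstrated with any zlib binding. A sketch using Python's gzip module (not part of this Haskell API) on highly repetitive input:

```python
import gzip

# Highly repetitive input compresses well at any level above 0.
data = b"the quick brown fox jumps over the lazy dog " * 20000  # ~880 kB

sizes = {}
for level in (0, 1, 6, 9):  # no compression, fastest, default, best
    sizes[level] = len(gzip.compress(data, compresslevel=level))
    print(level, sizes[level])

# Level 0 just block-copies (plus headers), so output exceeds input;
# higher levels trade CPU time for smaller output.
assert sizes[0] > len(data)
assert sizes[9] <= sizes[1]
```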
Decompress a stream of data in the gzip format.
There are a number of errors that can occur. In each case an exception will be thrown. The possible error conditions are:
- if the stream does not start with a valid gzip header
- if the compressed stream is corrupted
- if the compressed stream ends prematurely
Note that the decompression is performed lazily. Errors in the data stream may not be detected until the end of the stream is demanded (since it is only at the end that the final checksum can be checked). If this is important to you, you must make sure to consume the whole decompressed stream before doing any IO action that depends on it.
Question 1: The three vectors x, y, z have real-valued components and form the sides of a triangle. Prove the Law of Cosines for these vectors, i.e. that
|z|^2 = |x|^2 + |y|^2 - 2(x · y) = |x|^2 + |y|^2 - 2|x||y|cos(theta)
where theta is the angle between vectors x and y. (The scalar product can be used to solve this.)
Question 2: Let x = (1,1,0,0), y = (-1,1,0,0), z = (0,0,1,1), t = (0,0,-1,1). Show that these vectors are linearly independent. Therefore, any subset of them is linearly independent. Discuss the space spanned by each pair of the vectors.
Can anyone help?
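Not a proof, but a quick numerical sanity check of both questions (the sample vectors used for question 1 are my own choice):

```python
import numpy as np

# Question 1: with the triangle closed by z = x - y, expanding (x - y)·(x - y)
# gives |z|^2 = |x|^2 + |y|^2 - 2 x·y, and x·y = |x||y|cos(theta) by definition.
x = np.array([3.0, 1.0, 2.0])
y = np.array([1.0, 4.0, 0.0])
z = x - y
lhs = z @ z
rhs = x @ x + y @ y - 2 * (x @ y)
assert abs(lhs - rhs) < 1e-9

cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
rhs2 = x @ x + y @ y - 2 * np.linalg.norm(x) * np.linalg.norm(y) * cos_theta
assert abs(lhs - rhs2) < 1e-9

# Question 2: the four vectors are linearly independent iff the matrix with
# them as rows has full rank.
M = np.array([[ 1, 1, 0, 0],
              [-1, 1, 0, 0],
              [ 0, 0, 1, 1],
              [ 0, 0, -1, 1]], dtype=float)
print(np.linalg.matrix_rank(M))  # prints 4 -> linearly independent
```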
Many complex systems are composed of a large number of similar units that are connected in a complicated manner. An important example is provided by neural networks, where nerve cells in the brain communicate by exchanging pulses via synaptic connections. Unlike atoms in a crystal, which are arranged on a regular (e.g. cubic) lattice, nerve cells in the brain grow synaptic connections in a highly specific but irregular fashion. In such systems, a particular question is how rapid coordination, e.g. synchronization, between units of a complex network can be achieved. Three theoretical neurophysicists from the Max Planck Institute for Flow Research in Goettingen have now shed new light on this question for networks of pulse-coupled oscillators, simple models of neural networks in the brain (Physical Review Letters 92: 074101, 2004).
To analyze the impact of network structure on its function, the scientists use the theory of random matrices. Initiated by the work of Wigner on correlations of energy levels in atomic nuclei, random matrix theory has been extensively investigated since the 1950s. Its range of application has been continuously growing since then and today includes the study of phenomena as different as quantum mechanical aspects of chaos and price fluctuations on financial markets. Timme, Wolf, and Geisel have now demonstrated that the theory of random matrices can also be applied to the dynamic evolution of complex networks. This new approach allows the exploration of the impact of a network's topology on its dynamics, systematically and analytically. From the theory of random matrices the researchers derived mathematical expressions which precisely determine how fast neurons can coordinate their activity, i.e. how fast neural networks can synchronize. Using these random matrix theory expressions, the dependence on properties of single neurons as well as of the network topology can be accurately predicted.
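The flavor of the result can be sketched with a toy model (this construction is illustrative, not the authors' exact pulse-coupled model): units that repeatedly average the states of k randomly chosen neighbors synchronize at a rate governed by the second-largest eigenvalue modulus of the random coupling matrix, which random matrix theory predicts is small (about 1/√k) for large sparse random networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 10  # n units, each coupled to k randomly chosen other units

# Row-stochastic random coupling matrix: each unit averages k random neighbors.
A = np.zeros((n, n))
for i in range(n):
    neighbors = rng.choice([j for j in range(n) if j != i], size=k, replace=False)
    A[i, neighbors] = 1.0 / k

# Under the linear update x <- A x, the synchronized state (all entries equal)
# has eigenvalue 1; deviations from it decay like |lambda_2|^t, so the
# second-largest eigenvalue modulus sets the synchronization speed.
moduli = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
print("largest:", moduli[0])   # 1, since every row sums to 1
print("second:", moduli[1])    # small, near 1/sqrt(k)
```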
As might be expected, they found …
Contact: Prof. Theo Geisel
Climate change tops the list of threats. As oceans absorb man-made carbon dioxide from the atmosphere, they become more acidic. This ocean acidification is accelerating at an alarming rate, harming marine ecosystems and species. Coral bleaching caused by increasing temperatures is affecting not only coral reefs, but the tourism and fishing industries and the coastal communities who rely on healthy seas.
But the oceans also provide an effective weapon in the fight against climate change. Coastal ecosystems such as mangroves, wetlands, seagrasses and salt marshes trap and store huge amounts of carbon. And marine-based renewable energy such as offshore wind and tidal power is a promising way to help reduce carbon emissions.
I don’t have much to say about the human side of the Haitian earthquake except that it’s terrible, as media reports will tell you. I saw the waves coming in on our seismograph drum, which gave some idea of the distance and size of the earthquake but not the location. The location in itself is about as bad as possible — a large, shallow earthquake very close to a major city, in a country too poor to afford earthquake-resistant construction. All I can do is suggest that you donate to relief organizations and press the Canadian and other governments to get aid in place as quickly as possible.
I can, at least, say something meaningful about the seismology. The best source for information on recent earthquakes is usually the USGS page. The basic parameters of an earthquake are time, location, magnitude, and source mechanism. Time is generally given in Coordinated Universal Time (UTC) rather than in any particular time zone. The location is given in three dimensions: latitude, longitude, and depth. Depth is the hardest of the three to measure accurately, but the value given (10 km) is relatively shallow — well within the crust. Shallower earthquakes are more likely to be damaging than deeper ones due to the shorter distance to the surface. This map puts the event in context with respect to other earthquakes in the region: Haiti's actually west of the main subduction-zone seismicity — earthquakes are much more frequent elsewhere in the Caribbean. The fault yesterday's earthquake was on is a strike-slip (horizontal motion, like the San Andreas) fault with left-lateral motion (if you stand on one side of the fault, the other side's moving to the left).
The magnitude of the earthquake was, based on current estimates, a 7.0. That’s big, comparable to the 1989 Loma Prieta earthquake that did serious damage to San Francisco and environs. It’s not big enough to be all that rare on a global scale, however — a quick search of the ANSS catalog shows 145 events of magnitude 7.0 or greater since the beginning of 2000 (i.e. a bit over one per month). The earthquake magnitude scale is logarithmic, meaning that the Sumatran earthquake of 2004 (a 9.0) was roughly 100 times the size of this one. An earthquake’s effect on humans depends on a lot more than the magnitude, though — where it is in relation to populated areas is of critical importance, as are the response of local soils (soft ground can make matters worse) and the types of construction used.
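The scaling in the paragraph above can be made concrete. Magnitude is logarithmic in recorded wave amplitude (a factor of 10 per unit), while radiated energy grows roughly as 10^(1.5·M), the Gutenberg–Richter energy relation:

```python
def amplitude_ratio(m1, m2):
    """Ratio of recorded seismic wave amplitudes between magnitudes m1 and m2."""
    return 10.0 ** (m1 - m2)

def energy_ratio(m1, m2):
    """Approximate ratio of radiated energies (E proportional to 10^(1.5*M))."""
    return 10.0 ** (1.5 * (m1 - m2))

# Sumatra 2004 (quoted here as 9.0) vs Haiti 2010 (7.0):
print(amplitude_ratio(9.0, 7.0))  # 100.0 -- the "roughly 100 times" in the text
print(energy_ratio(9.0, 7.0))     # ~1000 -- in energy terms the gap is larger
```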
Finally, there’s the source mechanism. It’s possible to work out, by looking at the waves radiated out in different directions, what the “radiation pattern” of energy transmitted by the earthquake was. There are different characteristic patterns for different types of fault motions. This earthquake’s pattern is characteristic of a slightly oblique strike-slip fault; based on the pattern, two different fault orientations are possible, but one (WSW-ESE) lines up with known fault traces and gives the expected left-lateral motion.
As I mentioned before, the earthquake showed up very strongly on the old analog seismograph we have as part of a museum display. The fancier digital instrument chose this opportune moment to play dead, but I obtained seismograms from an instrument operated by the Canadian National Seismograph Network that is located fairly close by.
The instrument in question records ground motion along three axes (N-S, E-W, and vertical); in the above plot I've rotated the horizontal components into radial (parallel to the path to the source) and transverse (perpendicular to the path to the source) components to show how different the result is. At this distance, four major wave types are observed: P and S waves, which pass through the Earth's interior, and Love and Rayleigh waves, which travel along the surface. The surface waves are much lower in frequency and higher in amplitude than the body waves. P and S waves are also different in frequency and polarization, as we can see by zooming in.
Effect of Bird's Eye chili (Capsicum frutescens) on Gryllus assimilis (the common black cricket)
This science project was conducted to determine if Bird's Eye chili can be used as a form of deterrent against Gryllus assimilis (the common black cricket). The experiment was done by spraying various concentrations of Bird's Eye chili extract on crickets.
How to store bread safely
This experiment was conducted to find out what conditions will help prevent bread becoming moldy. The bread was kept in different environments to observe the time taken for the mold to appear.
How the frequency at which a cricket chirps can be affected by external factors
There are several natural phenomena - the rate ants walk, the rate fireflies flash, the rate of a terrapin's heartbeat, and even the frequency of human alpha brain-wave patterns - that follow the Arrhenius equation closely. This page looks at one well-studied case, the chirp rate of the snowy tree cricket, Oecanthus fultoni. This little cricket, shown below, is familiar to outdoor enthusiasts as a "thermometer cricket," because its chirp rate tracks the ambient temperature.
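A concrete version of that temperature relationship is Dolbear's law for the snowy tree cricket (an empirical rule of thumb, not something stated in the summary above):

```python
def dolbear_fahrenheit(chirps_per_minute):
    """Dolbear's law: estimate air temperature (deg F) from the chirp rate
    of the snowy tree cricket. Empirical rule of thumb, roughly valid over
    the cricket's active temperature range."""
    return 50.0 + (chirps_per_minute - 40.0) / 4.0

print(dolbear_fahrenheit(120))  # 70.0 deg F
```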
Which brand of cooking spray works best?
I wanted to find out how much food would stick to cookware using different brands of cooking spray. My hypothesis stated that name-brand cooking spray would work the best, that supermarket brands would be a close comparison, and that any amount of spray would work better than none at all.
Can water be split into oxygen and hydrogen?
If an electrical current is passed through water between electrodes (the positive and minus poles of a battery), the water is split into its two parts: oxygen and hydrogen. This process is called electrolysis and is used in industry in many ways, such as making metals like aluminum. If one of the electrodes is a metal, it will become covered or plated with any metal in the solution.
Is the water near animal pens contaminated?
The purpose of this experiment is to test the water near animal pens in Batesburg-Leesville for fecal contamination. I hypothesize that the wells located near the animal pens will be more likely to show fecal contamination than the wells that are not near animal pens.
What factors affect air pollution in cities?
The Phoenix metropolitan area, like many large cities, has problems with air pollution at certain times of the year. You can do a simple experiment to determine some of the factors that affect air pollution.
How do Increased CO2 Levels Affect Plant Growth?
The problem with increased CO2 is its effect on global warming. CO2 is not a pollutant, but it does trap infrared heat from radiating back into space. This phenomenon is known as the greenhouse effect. This global warming then affects global ecosystems by having effects on water vapor and other climate features. If CO2 levels continue to rise, the results for the planet are difficult to gauge.
How can plants be used to measure the level of air pollution?
What are the effects of polyacrylamide and polyacrylate on soil erosion?
The purpose of this experiment was to determine whether polyacrylamide or polyacrylate could halt erosion. I believe that polyacrylate will halt erosion the best because there is evidence that polyacrylate has halted erosion.
On September 26, a large solar coronal mass ejection smacked into planet Earth, producing a severe geomagnetic storm and widespread auroras. Captured here near local midnight from Kvaløya island outside Tromsø in northern Norway, the intense auroral glow was framed by parting rain clouds. Tinted orange, the clouds are also in silhouette as the tops of the colorful, shimmering northern lights extend well over 100 kilometers above the ground. Though the auroral rays are parallel, perspective makes them appear to radiate from a vanishing point. Near the bottom of the scene, the even more distant Pleiades star cluster and bright planet Jupiter shine on this cloudy northern night. | <urn:uuid:b6892e4a-fe59-4b8c-8619-bc5c93e5f4ee> | 2.71875 | 159 | Knowledge Article | Science & Tech. | 31.257045
Free Online Dictionary
Babylon English English dictionary
n. (Physics) tau, tauon, negatively charged lepton having a mass almost 3500 times that of the electron (discovered by Martin Lewis Perl)
Wikipedia English The Free Encyclopedia
The tau (τ), also called the tau lepton, tau particle or tauon, is an elementary particle similar to the electron, with negative electric charge and a spin of 1/2. Together with the electron, the muon, and the three neutrinos, it is classified as a lepton. Like all elementary particles, the tau has a corresponding antiparticle of opposite charge but equal mass and spin, which in the tau's case is the antitau (also called the positive tau). Tau particles are denoted by τ− and the antitau by τ+.
© This article uses material from Wikipedia® and is licensed under the GNU Free Documentation License and under the Creative Commons Attribution-ShareAlike License
Babylon English-Norwegian
s. (fysikk) tauon, taulepton, tau, negativt laddet lepton med en masse nesten 3500 ganger mer enn et elektron (oppdaget av Martin Lewis Perl)
| <urn:uuid:6b1dc544-2930-48b0-9b57-6a6cc89bf0d7> | 2.96875 | 431 | Structured Data | Science & Tech. | 37.809289
characteristics and classification
Burrowing barnacles (order Acrothoracica, about 30 species) are small, unisexual forms that lack shells and have fewer than six pairs of cirri. They burrow into hard limy material, such as clam shells and coral. Trypetesa is found only inside snail shells occupied by hermit crabs.
...have a calcareous shell made up of a number of articulated plates. The infraclass Cirripedia is divided into two superorders, Acrothoracica and Thoracica. Members of the Acrothoracica are known as burrowing barnacles because they burrow into calcareous substrates (e.g., limestone, corals, and mollusk shells). The acrothoracicans are recognized as fossils primarily by their burrows, and, while...
| <urn:uuid:1a17affc-83b4-4440-95d0-d13d5717beab> | 3.859375 | 230 | Knowledge Article | Science & Tech. | 36.694929
Scientific Name: Fregata magnificens
Species Authority: Mathews, 1914
Red List Category & Criteria: Least Concern ver 3.1
Reviewer/s: Butchart, S. & Symes, A.
This species has a very large range, and hence does not approach the thresholds for Vulnerable under the range size criterion (Extent of Occurrence <20,000 km2 combined with a declining or fluctuating range size, habitat extent/quality, or population size and a small number of locations or severe fragmentation). The population trend appears to be increasing, and hence the species does not approach the thresholds for Vulnerable under the population trend criterion (>30% decline over ten years or three generations). The population size is very large, and hence does not approach the thresholds for Vulnerable under the population size criterion (<10,000 mature individuals with a continuing decline estimated to be >10% in ten years or three generations, or with a specified population structure). For these reasons the species is evaluated as Least Concern.
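The three quantitative screens quoted in this assessment can be summarized in a short sketch. This is an illustration only: the actual IUCN criteria involve many more subcriteria and qualifiers, and the example numbers for the frigatebird are made up for demonstration.

```python
# Simplified sketch of the three Vulnerable screens quoted above.
# The real IUCN Red List criteria (v3.1) have many more subcriteria;
# the thresholds here are only the ones quoted in the assessment text.
def approaches_vulnerable(extent_km2, decline_pct, mature_individuals):
    range_screen = extent_km2 < 20_000          # small Extent of Occurrence
    trend_screen = decline_pct > 30             # rapid population decline
    size_screen = mature_individuals < 10_000   # small population size
    return range_screen or trend_screen or size_screen

# A species with a huge range, an increasing trend, and a large
# population (illustrative numbers) fails all three screens:
print(approaches_vulnerable(extent_km2=10_000_000,
                            decline_pct=-5,
                            mature_individuals=200_000))  # -> False
```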
Range Description: This species is distributed on the Pacific and Atlantic coasts of the Americas, from California (USA) to Ecuador (including the Galápagos), and from Florida to southern Brazil. One relict population breeds at Cape Verde off the coast of Africa. Outside the breeding season it is largely sedentary, with dispersal of immature and non-breeding individuals.
Native: Anguilla; Antigua and Barbuda; Aruba; Bahamas; Barbados; Belize; Bermuda; Bonaire, Sint Eustatius and Saba; Brazil; Cape Verde; Cayman Islands; Colombia; Costa Rica; Cuba; Curaçao; Dominica; Dominican Republic; Ecuador (Galápagos); El Salvador; French Guiana; Grenada; Guadeloupe; Guatemala; Guyana; Haiti; Honduras; Jamaica; Martinique; Mexico; Montserrat; Nicaragua; Panama; Peru; Puerto Rico; Saint Kitts and Nevis; Saint Lucia; Saint Martin (French part); Saint Vincent and the Grenadines; Sint Maarten (Dutch part); Suriname; Trinidad and Tobago; Turks and Caicos Islands; United States; United States Minor Outlying Islands; Uruguay; Venezuela; Virgin Islands, British; Virgin Islands, U.S.
Vagrant: Argentina; Chile; Denmark; France; Gambia; Mauritania; Portugal; Spain
Present - origin uncertain: Canada; Senegal; Western Sahara
Habitat and Ecology: The Magnificent Frigatebird often nests in mangroves, but also in bushes and even on cactus. It can breed on the ground (del Hoyo et al. 1992). Data reveal that it is almost continuously on the wing, with morphology and flight pattern resulting in extremely low costs of foraging, relying on prey driven to the surface by underwater predators such as tuna. The low cost of flight, due to extensive use of thermals, allows exploitation of tropical waters in which prey is scarce (Weimerskirch et al. 2003). It feeds mainly on flying-fish and squid, but also jellyfish, baby turtles, seabird eggs and chicks, offal and fish scraps (del Hoyo et al. 1992).
Citation: BirdLife International 2012. Fregata magnificens. In: IUCN 2012. IUCN Red List of Threatened Species. Version 2012.2. <www.iucnredlist.org>. Downloaded on 23 May 2013.
| <urn:uuid:b10af90e-8370-41df-b1e3-c4f713f61c94> | 2.703125 | 820 | Knowledge Article | Science & Tech. | 34.148489
IT LOOKS like an ordinary USB memory stick, but a little gadget that can sequence DNA while plugged into your laptop could have far-reaching effects on medicine and genetic research.
The UK company Oxford Nanopore Technologies built the device, called MinION, and claims it can sequence simple genomes - such as those of some viruses and bacteria - in a matter of seconds.
More complex genomes might not be practical at first, but MinION could also be useful for quickly sequencing DNA from cells in a biopsy to look for cancer, for example, or to determine the genetic identity of bone fragments at an archaeological site.
MinION has already sequenced a simple virus called Phi X, which contains 5000 genetic base pairs, the company announced last week at the Advances in Genome Biology and Technology (AGBT) conference in Marco Island, Florida.
This was done as a proof of principle. "Phi ...
| <urn:uuid:b28d5bc9-f7f8-4e3a-a4c1-c1d0e967e2c8> | 3.4375 | 213 | Truncated | Science & Tech. | 40.004577
Mercury and Radioisotope Thermoelectric Generators
Country: United States
Date: Spring 2010
Why was mercury used in some radioisotope thermoelectric generators (RTGs)?
After some extensive searching, I was unable to find a single reference to the use of mercury in a Radioisotope Thermoelectric Generator. Do you have any concrete examples of this? The only use for mercury I can imagine is to act as a circulating cooling fluid.
The metal mercury was used in a Rankine cycle generator on the SNAP-1 RTG to make electric power. It was never deployed. Later RTGs seem to have used thermoelectric methods to generate the electricity. The Rankine system is similar to a steam power plant, but mercury metal is used instead of water. Mercury was found to be the best working fluid under the conditions and temperatures at which space RTG systems operate. NASA tech note TN D-5092 has useful information on how to make a mercury boiler for use in space.

Nuclear reactors have also been used to generate power in space using a mercury Rankine cycle, but these are not RTGs. Russia has many RTGs rusting away all over the country, but I can find no evidence that they use mercury.
Update: June 2012 | <urn:uuid:a0323e69-2aba-4a75-850e-4d247389d087> | 3.28125 | 294 | Q&A Forum | Science & Tech. | 35.424883 |
THE MILKY WAY GALAXY
The study of the main components of our Galaxy (Disks, Halo, Bulge, Clusters) is of the uttermost importance to understand the formation and evolution of external galaxies. It is only in our Galaxy that the properties of the different components can be studied in great detail.
Observations of the chemical and kinematical properties of samples of stars belonging to the halo and thick disk, the discovery of extremely metal-poor stars, and the detailed study of globular clusters allow us to define the properties of the oldest Galactic populations and the different merging and accretion processes that affected our Galaxy during its evolution. This includes the derivation of accurate ages and metallicities for stars and globular clusters, and the study of the role of internal stellar mixing inferred from anomalous chemical compositions. The discovery of multiple populations in globular clusters represents a key ingredient for defining their formation and evolution.
The Galactic Bulge has been studied by dedicated observational projects aiming to derive the age distribution and metallicity of its stars. Other research activities are dedicated to the global morphology of the Milky Way from stellar counts, and to the study of extinction along different lines of sight, to constrain the dust properties in our Galaxy.
Population synthesis methods allow us to define the large-scale properties of the Galactic stellar populations, from color-magnitude diagrams and from kinematic data. The distribution of stars in the CMD carries the signature of the star formation history; their positions and kinematics trace the dynamical evolution. This information is combined in a simulator, used as a tool to recover the structure and evolution of the Milky Way.
Simulations of stellar counts in any position of the sky can be obtained interactively with the TRILEGAL tool at OAPd. The simulator, which includes a wide variety of photometric systems, can be used both to interpret existing data sets and to design future surveys. In fact, it is being applied to the data from the SDSS-III survey, in which OAPd is involved.
A more sophisticated version of the Padova Galaxy Model, including the kinematics of the disks and the halo, is also being developed.
The ESA cornerstone mission Gaia (launch 2013) will allow an unprecedented view of the formation and evolution of our Galaxy, giving the phase-space distribution and metallicity for a billion stars. Padova has important management responsibilities in the DPAC, the international consortium of over 400 scientists that will process the Gaia data. The Padova contribution is mainly focused on the problem of deriving the stellar astrophysical parameters that will be part of the final Gaia catalog.
Gaia has limited spectroscopic capabilities, allowing the derivation of chemical abundances and radial velocities only for bright stars (G < 11 and G < 16, respectively). To complement the Gaia measurements, a 300-author team proposed the Gaia-ESO Public Spectroscopic Survey to ESO and was granted 300 nights. The survey will derive metallicities and radial velocities down to G = 19, for field stars and for open clusters, using FLAMES (GIRAFFE and UVES) on the VLT, to obtain a homogeneous study of all Galactic stellar populations. Padova Observatory is one of the leading institutes in this project (with Firenze and Bologna).
The OAPd is involved in the RAVE survey, which secured spectra of bright, high-galactic-latitude stars in the same wavelength region studied by Gaia. The main objective of the survey is to derive radial velocities and chemical abundances. Researchers at the Observatory are responsible for the data extraction and calibration, and for the validation of the derived atmospheric parameters via the acquisition and analysis of NTT and AAT data.
A sample of red clump stars from the Hipparcos Catalogue is the target of the ARCS spectroscopic survey, conducted at the Asiago telescope. Coupling the astrometric, kinematic, and photometric information will provide important clues for studies of Galactic structure and dynamics. This and other research efforts benefit from the development of a library of synthetic stellar spectra covering the wavelength range from 2500 to 10500 Å at various values of the spectral resolution. This library constitutes the reference database for the automated analysis of spectral surveys. | <urn:uuid:a8d92399-ead0-4e9a-a09a-c9d5f426b32b> | 3.234375 | 889 | Knowledge Article | Science & Tech. | 21.107803
It's often useful to keep any built files completely separate from the source files. In SCons, this is usually done by creating one or more separate variant directory trees that are used to hold the built object files, libraries, and executable programs, etc. for a specific flavor, or variant, of build. SCons provides two ways to do this, one through the SConscript function that we've already seen, and the second through a more flexible VariantDir function.

One historical note: the VariantDir function used to be called BuildDir. That name is still supported but has been deprecated because the SCons functionality differs from the model of a "build directory" implemented by other build systems like the GNU Autotools.
The most straightforward way to establish a variant directory tree uses the fact that the usual way to set up a build hierarchy is to have an SConscript file in the source subdirectory. If you then pass a variant_dir argument to the SConscript function call:
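The call itself is not preserved in this excerpt; a minimal SConstruct matching the transcript that follows would look something like this (the src and build directory names are taken from the listings shown):

```python
# SConstruct: read src/SConscript, but place all built files in build/
SConscript('src/SConscript', variant_dir='build')
```

with src/SConscript containing an ordinary build description such as Program('hello', 'hello.c').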
SCons will then build all of the files in the build subdirectory:
% ls src
SConscript  hello.c
% scons -Q
cc -o build/hello.o -c build/hello.c
cc -o build/hello build/hello.o
% ls build
SConscript  hello  hello.c  hello.o
But wait a minute--what's going on here? SCons created the object file build/hello.o in the build subdirectory, as expected. But even though our hello.c file lives in the src subdirectory, SCons has actually compiled a build/hello.c file to create the object file.
What's happened is that SCons has duplicated the hello.c file from the src subdirectory to the build subdirectory, and built the program from there. The next section explains why SCons does this. | <urn:uuid:21b2d146-6575-4851-ab7d-0417009160b8> | 3.515625 | 398 | Documentation | Software Dev. | 61.463882 |
Inside the Insectarium
By John C. Abbott
“Because of their diversity, natural beauty and sheer power, insects certainly constitute a force of nature.”
A jewel beetle (Chrysina woodi), from the Davis Mountains, is one of the few animals on the planet that can see circularly polarized light in the same way that we use glasses to view 3-D movies.
A non-biting midge in flight with its front legs raised.
The unusual-looking waved light fly (Pyrgota undata) deposits eggs on the backs of June beetles. The larvae become parasites of the beetle.
These striking dung beetles — the male and female rainbow scarab (Phanaeus difformis) — are commonly found in cow pastures throughout Texas.
A brilliant jewel beetle (Chrysina woodi) flies between walnut trees in the Davis Mountains.
The burying beetle (Nicrophorus carolinus) mimics a wasp while in flight by inverting its elytra (forewings). A pair of these intriguing beetles will bury a small animal carcass, lay their eggs on it and then guard the young as they feed.
The emerald euphoria (Euphoria fulgida) is a common visitor to butterfly bait traps in the spring.
The six-spotted tiger beetle (Cicindela sexguttata) is equally capable of avoiding predators by flying in the air or running on the ground.
The American bird grasshopper (Schistocerca americana) gets its name because of its large size and its abilities as a strong flier. | <urn:uuid:1eb8fe47-57f1-4dce-9336-a8bccc679376> | 3.375 | 340 | Truncated | Science & Tech. | 42.660188 |
Luminescence Still Mystery to Science (Mar, 1932)
Luminescence Still Mystery to Science
by Calvin Frazer
ON DECEMBER 28, 1929, the British steamship Talma was off the eastern shores of the Bay of Bengal, en route from Calcutta to the Far East. The weather was calm and clear. Toward seven in the evening an extraordinary display of luminosity was seen in the surrounding sea.
“At first,” says the captain’s report, “what appeared like small globules of phosphorescence rising from below and breaking at the surface were observed. Later these assumed an appearance almost like flashes of lightning under the water, which rapidly formed into regular beams, curved as the curved spokes of a wheel might be, and of a width at the ship of about 30 feet.
“These revolved rapidly from right to left at the rate of two a second—timed as the beams passed the bridge—around a distant center, which could not actually be seen clearly but appeared to be about five miles off.
“This center passed ahead of the ship, being first observed on the port beam, and from there drawing slowly ahead of and across the bows of the ship, fading gradually till on the starboard bow, when the whole phenomenon disappeared about fifteen minutes after it began.”
Here is a tale that science would dismiss as a mere sailor’s yarn, but for one reason —very similar appearances have been reported many times in and about the Indian seas; especially in the eastern part of the Bay of Bengal and the adjacent Strait of Malacca.
A case almost identical with the one seen from the Talma was observed in 1909 from the Danish steamship Bintang. In other cases revolving systems of beams have been seen on both sides of a ship at the same, time. In one case the beams reversed their direction of rotation during the observation.
The beams are generally described as curved, with their concave sides in the direction toward which they move. Most of the displays reported have lasted only a few minutes.
In its ordinary manifestations the so-called phosphorescence of the sea is, of course, a very familiar spectacle and its cause is well understood. It has nothing to do with phosphorus—hence science prefers to apply the term “luminescence” to this and other varieties of light that are accompanied by little or no perceptible heat—but is due to the presence in the water of light-bearing organisms that are to the ocean what glow-worms and fireflies are to the land. It seems, however, quite impossible that these creatures should travel in the water in such a way as to produce the effects just described.
A more plausible assumption is that these apparent evolutions are not due to actual movements of the luminescent organisms, but to the passage of wavelike impulses of some sort over the surface of the sea, in response to which these creatures light up momentarily, after the manner of flashing fireflies.
What these impulses might be is a profound mystery. Vibrations from submarine earthquakes or volcanic outbreaks have been suggested as a possible cause, but this suggestion leaves much to be explained.
There are several other mysterious luminous phenomena of the sea, and there is one on land that has been famous for ages, though it has received little attention from scientific men, especially in recent years. Here is a case reported in 1916 by Dr. Matthew Luckiesh.
The Mysterious Will-o'-the-Wisp
Dr. Luckiesh was tramping one dark night over the desert between Goodsprings, Nevada, and Ivanpah, California. About 2 a.m. he came to an area where a shower and melting mountain snows had left shallow pools of water. Suddenly a light was seen floating in the air about five feet above the ground. As its distance and size were unknown it might have been taken for a light in a cabin window but for the fact that there was no human habitation within twenty miles.
Presently the light sailed off some distance and then stopped. Soon others appeared; some floating apparently stationary, others darting here and there. When the display was at its height hundreds of individual lights were visible simultaneously. The display was seen for more than an hour.
These lights were not fireflies, which are unknown in the region mentioned. Apparently they were will-o’-the-wisp, and they were so described by Dr. Luckiesh—but what is will-o’-the-wisp? Nobody knows.
Most reference books tell you that it is supposed to be due to the spontaneous combustion of gases escaping from decaying matter in the soil, and certain gases in particular are mentioned in this connection; but when tried in the laboratory they fail to produce the effects described. No chemist has yet manufactured a good imitation of will-o’-the-wisp.
Luminous Bacteria
Now it happens that many species of bacteria are luminescent; Dr. Molisch, the great German authority on luminescence, names twenty that shine by their own light, and he has utilized one species in constructing an ingenious "bacterial lamp," which gives enough light to read by.
Prof. Fernando Sanford has suggested that bubbles of gas rising from wet ground are sometimes laden with swarms of luminous bacteria, and that this is the true explanation of will-o’-the-wisp. His suggestion seems to be the best guess thus far offered on the subject. | <urn:uuid:50c4a10d-d884-4e20-a921-6562162ec5db> | 3.46875 | 1,149 | Personal Blog | Science & Tech. | 46.503435 |
Ivars Peterson's MathTrek
April 15, 1996
Scissors-paper-rock is a game that children play, mathematicians analyze, and a certain species of lizard takes very seriously.
In the playground version of the game, each of two players holds a hand behind his back. On the count of three (or by chanting some ritual phrase), both players bring their hidden hands forward in one of three configurations. Two fingers in a "V" represent scissors, the whole hand spread out and slightly curved means paper, and a clenched fist signifies rock. The winner is determined by the following sequence of rules: Scissors cut paper, paper wraps rock, and rock breaks scissors. If both players present the same configuration, the game is a draw.
Is there a winning strategy for this game? It certainly doesn't make sense to show the same configuration each time. An alert opponent would quickly learn to anticipate your move, make the appropriate response, and always win. A similar danger lies in following any kind of pattern. Thus, unless you can find a flaw in your opponent's play, your best bet is to mix the three choices in a random manner.
Of course, this isn't a completely satisfying result. If you stick to a strategy of random choices, your opponent can't profit. But then, you can't profit from your opponent's mistakes either.
Curiously, the scissors-paper-rock game has a counterpart in the mating rituals of a certain species of lizard native to California. Instead of just one mating strategy, these lizards have three, distinct types of behavior that constantly compete with one another in a perpetual cycle of dominance.
In the side-blotched lizard (Uta stansburiana), males have one of three throat colors, each one declaring a particular strategy. Dominant, orange-throated males establish large territories within which live several females. But these territories are vulnerable to infiltration by males with yellow-striped throats -- known as sneakers -- who mimic the markings and behavior of receptive females. The orange males can't successfully defend all their females against these disguised interlopers, who cluster on the fringes of the territories held by the orange lizards.
However, a large population of sneakers, which have no territory of their own to defend, can be quickly overrun by blue-throated males, who defend territories large enough to hold just one female. Sneakers have no chance against a vigilant, blue-throated guard. But once the sneakers become rare, powerful orange males flourish, grabbing territory and females from the blue lizards. Now, the blue males lose out.
As in the scissors-paper-rock game, the wide-ranging, ultradominant strategy of orange males is defeated by the sneaker strategy of the yellow males, which is in turn defeated by the mate-guarding strategy of blue males. The orange strategy defeats the blue strategy to complete the cycle.
Reporting in a recent issue of Nature, biologists Barry Sinervo and Curt M. Lively of Indiana University discuss field data showing that the populations of each of these three types, or morphs, of male lizard oscillate over a six-year period. They found that when a morph population hits a low, this particular type of lizard produces the most offspring in the following year, helping to perpetuate the cycle. This arrangement somehow succeeds in maintaining substantial genetic diversity while keeping the overall population reasonably stable.
The mathematical side of the scissors-paper-rock game gets a little more interesting when a scoring system is introduced. Suppose, for instance, that scissors scores one point against paper, paper scores two against rock, and rock scores three against scissors. In this situation, would you automatically form a rock and hope to score three, or would you expect your opponent to form a rock, which you could beat by forming paper?
As in the basic game, making the same choice every time doesn't work. What seems to make sense, again, is to mix the three choices randomly, forming each of scissors, paper, and rock with a certain probability. The scoring system determines what these probabilities ought to be to achieve an optimal result. In the example given, you can calculate that the probability should be 1/3 for scissors, 1/2 for paper, and 1/6 for rock. Other scoring schemes give different probabilities.
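That calculation can be checked directly: the optimal mixture is the probability vector that makes every pure counter-strategy break even against you. A short NumPy sketch, with the payoff matrix built from the scoring rules above:

```python
import numpy as np

# Zero-sum payoff matrix for the row player, order: scissors, paper, rock.
# Scissors beats paper (+1), paper beats rock (+2), rock beats scissors (+3).
A = np.array([
    [ 0,  1, -3],   # scissors vs scissors, paper, rock
    [-1,  0,  2],   # paper
    [ 3, -2,  0],   # rock
])

# For this symmetric game the optimal mixed strategy q satisfies A @ q = 0
# (every pure reply breaks even), with the probabilities summing to 1.
M = np.vstack([A, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
q, *_ = np.linalg.lstsq(M, b, rcond=None)

print(q)  # scissors 1/3, paper 1/2, rock 1/6
```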
Any deviation from a random mixing strategy gives your opponent an opportunity to profit from your actions. At the same time, by sticking strictly to these probabilities, you forgo any chance of taking advantage of bad play on the part of your opponent. When both players adopt exactly the same strategy, no one wins -- or loses -- in the long run.
What side-blotched lizards have figured out quite naturally, mathematicians can emulate with their reasoning and their proofs.
Copyright © 1996 by Ivars Peterson.
Beasley, John D. The Mathematics of Games. Oxford University Press, 1990.
Sinervo, B., and C.M. Lively. "The rock-paper-scissors game and the evolution of alternative male strategies." Nature, 380 (21 March 1996): 240-243.
Smith, John Maynard. "The games lizards play." Nature, 380 (21 March 1996): 198-199.
Comments are welcome. Please send messages to Ivars Peterson at email@example.com. | <urn:uuid:08181ab7-baf2-45e6-a4f2-4218e3179027> | 3.171875 | 1,103 | Nonfiction Writing | Science & Tech. | 47.143974 |
Map Making with Differential Data
WMAP observes temperature differences between points separated by ~141° on the sky. Maps of the relative sky temperature are reconstructed from the difference data using a modified form of the algorithm adopted by COBE-DMR.
The algorithm WMAP uses to reconstruct sky maps from differential data is an iterative one. In the limit that the instrument noise is white (uncorrelated from one sample to the next in time) the algorithm is mathematically equivalent to a least squares fit of a set of temperature differences to a set of map pixel temperatures. However, the implementation does not require the evaluation or inversion of large matrices and has a very intuitive interpretation as follows. The actual signal WMAP measures is the temperature difference between two points on the sky, DT = T(A)-T(B), where T(A) is the temperature seen by feed A, and likewise for B. Feed A can be thought of as viewing the sky while feed B can be thought of as viewing a comparative reference signal, or vice versa. In WMAP's case, the comparative signal is a different point in the sky. If we knew the temperature T(B) we could recover T(A) using T(A) = DT+T(B), but since we don't know T(B), we use a guess in which T(B) is estimated from a previous sky map iteration. Thus the temperature in pixel i of a map is given by the average of all observations of pixel i after correcting each observation for the estimated signal seen by the opposite feed.
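The update rule described above lends itself to a toy demonstration. This is a simplified sketch, not the actual WMAP pipeline: a handful of pixels, noiseless difference measurements between random pixel pairs, and the iterative correction T(A) = DT + T(B)-estimate. Note that the recovered map is only determined up to an overall constant, since differences carry no absolute zero point.

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 50
true_map = rng.normal(size=npix)             # toy "sky" temperatures

# Differential observations: each row is a pixel pair (A, B) with
# measured difference DT = T(A) - T(B).  Noiseless, for clarity.
pairs = rng.integers(0, npix, size=(5000, 2))
pairs = pairs[pairs[:, 0] != pairs[:, 1]]
dT = true_map[pairs[:, 0]] - true_map[pairs[:, 1]]

# Iterative reconstruction: pixel i is the average, over all observations
# of i, of (DT + estimated temperature seen by the opposite feed).
est = np.zeros(npix)
for _ in range(40):                          # 40 iterations, as in the text
    acc = np.zeros(npix)
    cnt = np.zeros(npix)
    np.add.at(acc, pairs[:, 0], dT + est[pairs[:, 1]])   # feed-A samples
    np.add.at(acc, pairs[:, 1], est[pairs[:, 0]] - dT)   # feed-B samples
    np.add.at(cnt, pairs[:, 0], 1)
    np.add.at(cnt, pairs[:, 1], 1)
    est = acc / np.maximum(cnt, 1)

# Compare after removing the (unconstrained) mean offset.
resid = (est - est.mean()) - (true_map - true_map.mean())
print(np.abs(resid).max())                   # tiny: the map is recovered
```

As the text notes, this only works if each pixel is paired with many different neighbors, which is what the scan strategy must guarantee.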
For this scheme to be successful it is imperative for a given pixel i to be observed with many different pixels on its ring of neighbors. Thus the method requires a carefully designed scan strategy to go with it. The strategy designed for WMAP achieves this while simultaneously avoiding close encounters with the Sun, Earth, and Moon. The algorithm has been tested with the WMAP scan strategy using an end-to-end mission simulation that incorporates a realistic sky signal, instrument noise, and calibration methods. The results of these simulations are described in detail in an Astrophysical Journal article. The main figure from that paper is reproduced here. After 40 iterations of the algorithm, the artifacts that remain in the map due to the map-making itself have a peak-peak amplitude of less than 0.2 µK, even in the presence of Galactic features with a peak brightness in excess of 60 mK. | <urn:uuid:de721bed-10ec-4afb-a60e-d4a37c21a4f6> | 3.09375 | 501 | Academic Writing | Science & Tech. | 41.547381 |
Pacific Coastal & Marine Science Center
Coastal and Marine Earthquake Studies
The Cascadia Megathrust and Tectonic Stress in the Pacific Northwest
To understand the tectonic cause of stress and deformation in the Pacific Northwest, we need to consider not only forces acting along the Cascadia Subduction zone, but also forces acting along the tectonic margins to the north (Queen Charlotte fault) and to the south (San Andreas fault).
In addition, over millions of years, mountain ranges are built that locally affect stress and deformation within the North American plate (i.e., internal forces in diagram at right). In general, the present day state of stress depends, in part, on how the continent was deformed in the past. In other words, the present-day rate of deformation has a "memory" of past deformation episodes.
As one might be led to believe from the previous discussion, modeling the deformation of continents is very complex. The model chosen for this study (LARAMY) was developed over several years by Dr. Peter Bird at UCLA (Earth and Space Sciences). This model incorporates many aspects of continental deformation observed in the field and laboratory measurements of how rocks deform under different pressure and temperature conditions. In particular, LARAMY is quite successful in modeling the formation of the Rocky Mountains using reconstructions of plate motions along the western margin of North America. For this study, however, we focus on the last time step of the model--the present day state of stress and rate of deformation.
COLUMBIA, Mo. - University of Missouri scientist Ray Semlitsch studies creatures most people don't ever see. These creatures are active only at night and thrive in the shallow, cool, wet surroundings of headwater streams, an oft-overlooked biological environment.
A collaborative study with MU graduate student Bill Peterman, recently published in the journal Freshwater Biology, revealed that the biomass (the total mass of a species' individuals in an area) of the black-bellied salamander far exceeds any previous estimate, and that the contribution of the species and its habitat may be critical to the food chain. While the ecological role of the salamander is not fully understood, the radio-telemetry and mark-recapture tracking methods used in the study indicate the salamanders are a critical component in the productivity of headwater streams, possibly ensuring the survival of other species of fauna.
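The mark-recapture approach mentioned above can be illustrated with the classic Lincoln-Petersen estimator for a closed population. The numbers below are invented for illustration and are not from the study:

```python
def lincoln_petersen(marked, caught, recaptured):
    """Classic closed-population estimate: N ~ marked * caught / recaptured."""
    return marked * caught / recaptured

def chapman(marked, caught, recaptured):
    """Chapman's bias-corrected variant, defined even when recaptured == 0."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Hypothetical survey: 50 salamanders are marked and released; a later visit
# captures 40 animals, 10 of which carry marks.
print(lincoln_petersen(50, 40, 10))   # -> 200.0
print(round(chapman(50, 40, 10), 1))  # -> 189.1
```

The logic is simple proportionality: if 10 of 40 captured animals are marked, then the 50 marked animals are presumed to be about a quarter of the population. Real surveys like the one described repeat this over many sessions and correct for detection probability.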
“This is important because it is the first study to uncover the hidden biomass of these salamanders,” said Semlitsch, professor of biological science in the MU College of Arts and Science. “Salamanders typically live underground. They live in places most people don't see, and they live in these small, headwater streams where there are no other fresh-water vertebrates. Fish can't exist in these small streams. This is where water seeps out of the rock, where all streams begin life as a stream.”
These headwater streams, according to the study, are very productive areas for salamanders and Semlitsch advocates the protection of these ecosystems.
“The final take-home message of our study is salamanders comprise a huge amount of protein biomass for these headwater stream ecosystems,” Semlitsch said. “We think that's important because that biomass can then be used by consumers, such as predators, or could be used by decomposers in that system. The salamanders also are consuming aquatic insects. They are a key link, we think, in these headwater stream systems that has not been
Contact: Bryan E. Jones
University of Missouri-Columbia | <urn:uuid:76a970bc-2eed-4f58-8ca9-6fcdbb7e251c> | 3.671875 | 420 | Truncated | Science & Tech. | 28.51391 |
Since the days of Homo habilis we have been creating new things by “chopping out” pieces to obtain components to be stuck together in a building phase. Nature does not work like that. It builds up molecule after molecule, and that provides ways to create the most efficient structures.
This is now within our technical capabilities, and we are seeing the benefits every day.
Scientists at the Pacific Northwest National Laboratory, together with visiting scientists from Wuhan University, have been able to create a layer of manganese oxide crystals of a very specific form that is particularly effective at transporting sodium ions. This increases the yield of batteries and can provide the tools for a new approach to energy storage and distribution.
The prototype yields 128 milliamp-hours per gram of electrode material, much better than any existing battery of this kind.
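To put that figure in perspective, a specific capacity converts directly into charge and moles of electrons stored per gram of electrode. A back-of-the-envelope sketch, where the 128 mAh/g value is the only number taken from the text:

```python
FARADAY = 96485.332          # coulombs per mole of electrons

capacity_mah_per_g = 128.0   # quoted specific capacity of the prototype
charge_c_per_g = capacity_mah_per_g * 3.6    # 1 mAh = 3.6 coulombs
mol_e_per_g = charge_c_per_g / FARADAY       # moles of electrons per gram

print(charge_c_per_g)                 # -> 460.8 (coulombs of charge per gram)
print(round(mol_e_per_g * 1000, 2))   # -> 4.78 (millimoles of electrons per gram)
```

Since each sodium ion carries one electron's worth of charge, the same figure is the millimoles of sodium the electrode cycles per gram, which is why capacity per gram is the standard metric for comparing electrode materials.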
The manganese oxide grid is built by assembling two different kinds of crystals created at different temperatures: one in the shape of pyramids, the other in the shape of octahedra. Mixing the two types together creates tiny channels through which the sodium ions can easily flow.
What is most interesting to me is the level of control over single molecules that we have reached in building complex structures. This is what is really going to change the rules of the game. | <urn:uuid:d64de630-fd4e-4c66-8277-be07e2f0447e> | 3.296875 | 261 | Personal Blog | Science & Tech. | 34.255
Mono: A branch to other platforms
Mono is an open-source, multi-platform CLR (Common Language Runtime) that allows .NET applications (particularly applications written in C#) to run on Linux and Mac OS X. What many people don't know is that the core of .NET, the CLR, is published as an open specification; only the front-end tooling is closed. If the CLR weren't open, Mono wouldn't be here today. Now, before you get to asking “What's the purpose of this? I've heard about this a million times!”, just sit back and relax as I guide you through the concept of Mono and making your first Hello World application.
Now that we have covered a little bit of Mono's start, let's get Mono into our toolbox and create some applications. Much of Mono is itself written in C#, so the main language for building Mono apps would be, you guessed it, C#. The Mono Project is working on VB.NET support, so VB developers, you get a bone too. If you have an existing application you would like to port, go to http://www.mono-project.com and click MoMA. MoMA (the Mono Migration Analyzer) is a tool that tests whether your application will run flawlessly on Mono. Let's start by opening a current project in MoMA.
This is the welcome screen.
Click Next and we will add our project's exe/dll files to MoMA.
OK, now I have added my main project, Open Studio, to the analyzer. Click Next and we will see some results. Depending on how many files are added, the scan will take anywhere from a few seconds to ten minutes. And our results are:
If we click View Detailed Report, we will see a detailed report on what's wrong with our application.
Wow, that's a lot of mistakes. Now, if your application's report contains “Method Missing from Mono” or “Method with [MonoTodo]”, then there's nothing we can do about it. Those just mean the methods haven't been added to Mono yet. But if it's something like P/Invoke, then chances are you can't use it, ever!
Stay tuned for the next article, when we set up Mono and get an IDE for developing on the new platform.
You may download this article as a PDF here
This post has been edited by Amrykid: 17 May 2009 - 05:02 PM | <urn:uuid:d352baf3-f906-405d-82fc-8978ac877465> | 2.71875 | 544 | Comment Section | Software Dev. | 73.900565 |
Regular readers of Nanodot will be aware that the nanotech uses of the exquisite molecular recognition properties of DNA include both the programmed assembly of nanoparticles (which is not atomically precise) and structural DNA nanotechnology (the atomically precise assembly of nanometer-scale structures from DNA strands). A group of German scientists has developed a new slant on DNA nanotechnology by using atomic force microscopy to assemble a DNA scaffold on a surface to which molecular building blocks can then bind. The pattern of molecular building blocks thus assembled can be arbitrarily complex, with individual building blocks spaced about 50 nm apart. The precision of assembly is essentially the lateral precision of the AFM, about 6 nm. So the building blocks are atomically precise, but the larger structure is only precise on a larger scale. The technology is described in a Nanowerk Spotlight written by Michael Berger, which includes a movie showing the assembly of a rather intricate flower pattern about 10 µm across. From “Nanotechnology cut and paste with single molecules”:
Using a hybrid approach that combines the precision of an atomic force microscope (AFM) with the selectivity of DNA interactions, researchers in Germany have successfully demonstrated a technique that fills the gap between top-down and bottom-up since it allows for the control of single molecules with the precision of atomic force microscopy and combines it with the selectivity of self-assembly.
“In the past, great efforts have been put into creating DNA structures like the so called DNA origami or crystals composed of nanoparticles” Dr. Hermann E. Gaub tells Nanowerk. “However, these approaches exclusively rely on self-assembly and are purely bottom-up. They don’t allow control over single molecules and the structures that are formed are predetermined by the design of the experiment.”
Gaub, head of the Biophysics and Molecular Materials Group in the Physics Department at the Ludwig-Maximilians-University (LMU) of Munich, together with Elias Puchner and colleagues from the university’s Center for Nanoscience and the Center for Integrated Protein Science Munich, combined the precision of atomic force microscopy with the selectivity of DNA interaction to create freely programmable nanopatterns of DNA-oligomers on a surface and in aqueous environment.
What the LMU researchers did was create a DNA scaffold by picking biotin bearing DNA oligomers with an AFM tip and depositing them, one by one, in a desired pattern on a surface, basically creating a pattern of attachment points for fluorescent semiconductor nanoparticles conjugated with streptavidin. The small bacterial protein streptavidin is commonly used for the detection of various biomolecules and it binds with high affinity to the vitamin biotin. The strong streptavidin-biotin bond can be used to attach various biomolecules to one another or onto a solid support.
When the sample with the DNA scaffold is incubated with a solution of fluorescent nanoparticles, a rapid self-assembly process of these particles on the predefined scaffold takes place. Watch a movie of this process here.
Berger quotes Gaub as saying that extending the technique to assembly in three dimensions “appears challenging but achievable”. The technique was introduced in Science earlier this year (abstract) and recently elaborated in Nano Letters (abstract).
A subtle but crucial point that makes the technique workable is only clearly illustrated in the online supplementary material to the Science paper. The molecular building block is a single strand of DNA to which other molecules (biotin, a fluorophore, etc.) can be attached. One end of the DNA strand binds to an anchor DNA strand on the surface; the other, shorter end binds to a DNA strand on the AFM tip. The AFM tip moves the molecular building block from an anchor strand in the “depot” area of the surface to an anchor strand in the “target” area where the structure is to be assembled. The key to the technique working is that the anchor strand is linked to the surface by its 5′ end in the depot region and by its 3′ end in the target region. As can be seen in the illustration, this sets up the binding of the molecular building block to the depot region in an “unzip” geometry in which the duplex can be broken one base pair at a time, thus requiring a relatively small retraction force on the AFM tip, while binding to the target region is in a “shear” geometry which requires a greater force to pull the duplex apart. Consequently the AFM tip can remove the building block from the depot area, but leave it on the target area when the tip retracts. The authors report that, amazingly, one cantilever tip was used to transport over 5000 units from the depot to the target area. | <urn:uuid:ca8b01f6-a71f-4998-8a22-25ce043fcb43> | 3.5 | 1,006 | Knowledge Article | Science & Tech. | 26.920765 |