text large_stringlengths 148 17k | id large_stringlengths 47 47 | score float64 2.69 5.31 | tokens int64 36 7.79k | format large_stringclasses 13 values | topic large_stringclasses 2 values | fr_ease float64 20 157 |
|---|---|---|---|---|---|---|
Sea-Ice thickness from ICESat data (Arctic)
Sample map of the ICESat-data based Arctic sea-ice thickness distribution for the period October/November 2005.
The data are available via this link.
Travel time measurements of laser pulses emitted by the GLAS instrument aboard the ICESat satellite allow the determination of the sea-ice freeboard height with centimeter accuracy. Under certain assumptions and taking additional data like the distribution of multiyear ice and the snow depth on sea ice into account (see references), the freeboard height can be converted into an ice thickness.
The data set available here comprises data from 10 ICESat measurement periods, i.e. a time series of two measurements of the Arctic sea-ice thickness distribution every winter (October/November and February/March) for winters 2003/04 until 2007/08 with a grid resolution of 25 km x 25 km.
Coverage, spatial and temporal resolution
Period and temporal resolution:
These are data from selected ICESat-GLAS sensor measurement periods for the years 2003-2008.
These data are a mean/composite over the considered ICESat measurement period. The exact duration of these periods is given in Kwok et al., 2009 (see references); typically the periods comprise 35 days in February/March (2007: March/April) or October/November (2003: September-November).
Coverage and spatial resolution:
Data are stored as 4-byte floating-point values. The header contains the number of grid cells with valid values (2-byte integer), followed by the geographic latitude and longitude, x and y, and the ice thickness; x and y are the distance (in km) of the center of each grid cell to the center of the NSIDC grid.
A routine (IDL) to read, display and store the data as an image can be downloaded here.
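For readers without IDL, here is a minimal Python sketch of a reader for the format described above. The little-endian byte order and the per-cell arrangement of the five values are assumptions to verify against the IDL routine before use.
import struct

def read_icesat_grid(path):
    # Assumed layout: 2-byte integer count, then one record of five
    # 4-byte floats (lat, lon, x, y, thickness) per valid grid cell.
    with open(path, "rb") as f:
        (n,) = struct.unpack("<h", f.read(2))  # number of valid grid cells
        cells = [struct.unpack("<5f", f.read(20)) for _ in range(n)]
    return cells  # list of (lat, lon, x, y, thickness) tuples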
These data give a measure of the mean Arctic sea-ice thickness distribution at two points during the seasonal cycle: October/November, i.e. about one month after freeze-up has started, and February/March, i.e. when sea-ice extent is at its maximum but sea-ice thickness will continue to increase for 1-2 months.
The data set does not contain error estimates.
Kwok et al. found a mean uncertainty of the sea-ice thickness of about 0.7 m using an error propagation. Sea-ice draft estimated from ICESat data on the one hand and measured at moorings on the other hand agree within 0.5 m.
However, the uncertainty can be expected to be considerably higher than the two numbers given above, because of the assumptions used and because of gaps in our knowledge concerning snow properties such as snow depth and its variability.
We recommend taking a look at the references and/or getting into discussion with us or with Ron Kwok directly.
Stefan Kern, Institute: CliSAP / KlimaCampus / ICDC
email: stefan.kern@zmaw.de
Ron Kwok, Institute: Jet Propulsion Laboratory, Pasadena, CA, U.S.A.
email: ronald.kwok@jpl.nasa.gov
Please contact Ron Kwok firstname.lastname@example.org | <urn:uuid:c50caa6c-ae55-4ef0-9878-d7db0be0dedf> | 2.78125 | 688 | Knowledge Article | Science & Tech. | 47.587877 |
Dirichlet Regions for Three Points:
The loci of circle intersections trace the boundaries of three Dirichlet regions, each containing one of the points A, B, C. The Dirichlet region containing A, for example, is the set of points P which are closer to A than to any of the other points B or C. It follows that the boundary between two Dirichlet regions is the set where the distances to two of the points A, B, C are equal and less than the third distance. Finally, the corner or vertex is the point where the 3 distances to A, B, and C are equal. We recognize this as the circumcenter of triangle ABC. Since the circles have the same radius, the points on the circles are at the same distance from their corresponding centers, so the intersection points of two circles are equidistant from two of the three points A, B, C.
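Because the equidistance conditions are linear in the unknown point, the corner point (circumcenter) can be computed directly. A minimal Python sketch, with three example points standing in for A, B, C:
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([2.0, 3.0])
# The corner of the three Dirichlet regions satisfies |P-A|^2 = |P-B|^2
# and |P-A|^2 = |P-C|^2; expanding the squares makes both equations linear in P.
M = 2 * np.array([B - A, C - A])
b = np.array([B @ B - A @ A, C @ C - A @ A])
P = np.linalg.solve(M, b)
print(P)                                             # circumcenter of triangle ABC
print([np.linalg.norm(P - X) for X in (A, B, C)])    # three equal distances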
Dirichlet Regions for Four Points
Dirichlet Regions for Five Points
| <urn:uuid:8aa77f97-c724-42ff-915e-4883357cbc30> | 3.9375 | 225 | Tutorial | Science & Tech. | 55.583526 |
In an earlier post I noted that
the use of average temperatures could actually hide indications of climate change. Inaccurate temperatures would further reduce the accuracy of any analysis.
the use of high and low temperatures instead of hourly temperatures creates a potential bias if temperatures are checked at times other than midnight for reporting periods that end at midnight. Procedures exist for minimizing this problem, but they cannot prevent the possibility of errors that would reduce accuracy.
Although highs and lows usually occur at times other than midnight, that is not always the case. For example, especially in the United States, the passage of a strong cold front through an area not long after midnight may result in the high temperature for the day occurring just after midnight instead of at the normal time in the afternoon. Or the cold front could pass through just before midnight, resulting in the previous day's low occurring just before midnight.
In such situations highs or lows may not be representative of temperatures during the day. Temperatures during the rest of the 24-hour period may differ significantly from either the high or the low. In the U.S., cold fronts are more likely to produce this situation because cold air from the Arctic can be significantly colder than the air it replaces. With warm fronts some warm air moves in, but more of the heating is caused by the sun.
In the central U.S. a cold pocket of air from the north can pass through an area with a warm front from the west moving in behind it. The result can be the same cold air providing the low temperature for more than one area.
Inaccurate equipment, particularly in the 3rd world, can also adversely affect the quality of data. Areas with violence may not be able to provide data at all.
The location of equipment can also reduce accuracy. Temperatures even within a few miles may differ by several degrees. For example, last week the official site in my home town recorded a low of 26 F. I live about 5 or 6 miles away and have a lot of cold-sensitive impatiens in my yard. They didn't suffer any freeze damage.
Characteristics of some sites may limit accuracy. Locations near pavement or air conditioners may show higher temperatures than the surrounding area. | <urn:uuid:c1fa95b8-3429-4013-85f1-eb03080a3220> | 3.21875 | 454 | Personal Blog | Science & Tech. | 48.36503 |
There continue to be many news outlets pushing the idea that the Earth will warm up 4 °C by the year 2100. Back in December I initially addressed the issue here, but a thought has percolated with me about how to really deal with this propaganda. The idea is simple enough: I am going to start tracking the prediction error implied by the warming that will have to take place in order to reach 4 °C of warming by the year 2100. This is a way to see how realistic the projections for the next 87 years are.
To do this I had to pick a starting point. Since 2100 is a nice arbitrary point in the future, I picked January 1, 2000 as my arbitrary starting point for tracking the error. That gives the warmists 100 years for the Earth to warm up 4 °C. 1999 and 2000 both had very similar temperatures so using 2000 as the starting point for the error monitoring is also useful.
The error I will be measuring is the difference between the actual measured temperature (UAH is the one I will be using) and the predicted temperature. So a negative error means the Earth is cooler than predicted.
A trend down in the error indicates that the prediction is wrong because the Earth is not warming.
A trend up in the error indicates that the prediction is wrong because the Earth is warming faster than predicted.
No trend in the error indicates that the predictions are correct.
One particular note: back in December I made the assumption that the NH and SH would not warm by the same amount. When I initially set this up, it was apparent that this assumption drove a huge difference between the NH and SH. As a result I have instead proceeded using the notion that both the NH and SH will see 4 °C of warming by the year 2100. I still believe this is incorrect, but I will use it as the basis for the prediction because the lack of warming makes the error for the NH so absurdly large otherwise. The error is still substantial, but the difference between the hemispheres is greatly reduced.
The following chart is the basis for the prediction. It assumes that the global anomaly will be 4 °C warmer in 2100 than it was in 2000. In the year 2000 the Earth’s actual temperature was right at 14 °C. So the Earth must be at 18 °C in the year 2100. Anyone who thinks that will happen believes in global warming. In order for the Earth to be 18 °C in the year 2100, the following warming must take place, although I will be the first to acknowledge that the Earth doesn’t behave linearly.
Since I started in 2000, I now have 12 complete years in which to calculate the error. I will be showing the error for the Earth as a whole and for each hemisphere individually. All rates will be in °C/month, so multiplying the rate by 120 will give the error per decade. For instance, the linear rates needed to get 4 °C of warming in a 100-year period are:
0.0033 °C / month =
0.04 °C / year =
0.4 °C / decade =
4 °C / century
Since the regressions are based on the monthly data, the slope will be comparable to the °C / month.
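To make the bookkeeping concrete, here is a minimal Python sketch of the error calculation described above; the uah argument is a placeholder for the real monthly UAH anomaly series (relative to the year-2000 baseline), not actual data:
def prediction_errors(uah):
    # Predicted warming is linear: 0.0033 degC/month (4 degC per 1200 months),
    # starting from January 2000. Error = measured - predicted, so a
    # negative error means the Earth is cooler than predicted.
    rate = 4.0 / 1200
    return [measured - rate * month for month, measured in enumerate(uah)]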
The global, NH and SH errors are all negative indicating that we are not on track to hit 4 °C by the year 2100. If the current trend continues, the temperature in the year 2100 will be:
Global Anomaly: 1.0 °C
NH Anomaly: 1.4 °C
SH Anomaly: 0.6 °C
The Earth is currently not on track for the predictions to be correct. | <urn:uuid:9fee6feb-3706-4544-83e4-8b1dbbd2ae48> | 2.734375 | 756 | Personal Blog | Science & Tech. | 73.165784 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Monday, 26 November 2012
New research into the world's major river systems has found that too much water is being taken out.
Tuesday, 20 November 2012
Water conservation can lead to smellier sewers and more corrosion in sewer pipes, according to new research.
Wednesday, 4 April 2012
Ask an Expert If we can tap the Artesian Basin to take water out, why can't we pump water back into it in times of flood?
Wednesday, 19 January 2011
As floods continue to cover parts of Australia, scientists are calling for an urgent change in thinking on how we predict and cope with flood events.
Thursday, 14 October 2010
Opinion Decades of sound science show the future of the Murray-Darling Basin depends upon water cuts, argues John Williams.
Wednesday, 19 May 2010
Opinion It's a simple question, and one we're going to be asking ourselves more and more over the coming years. Why? Because that simple question is going to save the planet, says Bianca Nogrady.
Tuesday, 22 September 2009
Ace Day Jobs Every day Chris Moore makes decisions that affect a city's future. Watch Ace Day Jobs to find out how he became a civil engineer.
Thursday, 7 December 2006
Great Moments in Science Australia is the driest inhabited continent on Earth and it's not uncommon for parts of it to have the occasional water crisis.
Thursday, 19 September 2002
Feature How renewable is the water from the Great Artesian Basin? Can rainfall really replenish the supply, or are we in danger of losing one of our greatest natural resources? | <urn:uuid:712d1b6b-42bd-46be-8e3d-c513e87fa110> | 2.71875 | 349 | Content Listing | Science & Tech. | 55.135242 |
The Structure of the DNA Molecule
Access Excellence Classic Collection
Although scientists as far back in history as Aristotle recognized that the features of one generation are passed on to the next (...like begets like...) it was not until the 1860's that the fundamental principles of genetic inheritance were described by Gregor Mendel. Mendel's work with common garden peas, Pisum sativum, led him to hypothesize that phenotypic traits (physical characteristics) are the result of the interaction of discrete particles, which we now call genes, and that both parents provide particles which make up the characteristics of the offspring. His theories were, however, widely disregarded by scientists of the time. In the last quarter of the 19th century, however, microscopists and cytologists, interested in the process of cell division, developed both the equipment and the methods needed to visualize chromosomes and their division in the processes of mitosis (A. Schneider, 1873) and of meiosis (E. Beneden, 1883).
As the 20th century began many scientists noticed similarities between the theoretical behavior of Mendel's particles and the visible behavior of the newly discovered chromosomes. It wasn't long before most scientists were convinced that the hereditary material responsible for giving living things their characteristic traits and the chromosomes must be one and the same. Yet, questions still remained. Chemical analysis of chromosomes showed them to be composed of both protein and DNA. Which substance carried the hereditary information? For many years most scientists favored the hypothesis that protein was the responsible molecule because of its comparative complexity when compared with DNA. After all, DNA is composed of a mere 4 subunits while protein is composed of 20, and DNA molecules are linear while proteins range from linear to multiply branched to globular. It appeared clear that the relatively simple structure of a DNA molecule could not carry all of the genetic information needed to account for the richly varied life in the world around us!
It was not until the late 1940's and early 1950's that most biologists accepted the evidence showing that DNA must be the chromosomal component that carries hereditary information. One of the most convincing experiments was that of Alfred Hershey and Martha Chase who, in 1952, used radioactive labeling to reach this conclusion (see graphics). This team of biologists grew a particular type of phage, known as T2, in the presence of two different radioactive labels so that the phage DNA incorporated radioactive phosphorus (32P), while the protein incorporated radioactive sulfur (35S). They then allowed the labeled phage particles to infect non-radioactive bacteria and asked a very simple question: which label would they find associated with the infected cell? Their analysis showed that most of the 32P label was found inside of the cell, while most of the 35S was found outside. This suggested to them that the proteins of the T2 phage remained outside of the newly infected bacterium while the phage-derived DNA was injected into the cell. They then showed that the phage-derived DNA caused the infected cells to produce new phage particles. This elegant work showed, conclusively, that DNA is the molecule which holds genetic information. Meanwhile, much of the scientific world was asking questions about the physical structure of the DNA molecule, and the relationship of that structure to its complex functioning.
Watson and Crick
In 1951, the then 23-year old biologist James Watson traveled from the United States to work with Francis Crick, an English physicist at the University of Cambridge. Crick was already using the process of X-ray crystallography to study the structure of protein molecules. Together, Watson and Crick used X-ray crystallography data, produced by Rosalind Franklin and Maurice Wilkins at King's College in London, to decipher DNA's structure.
This is what they already knew from the work of many scientists, about the DNA molecule:
- DNA is made up of subunits which scientists called nucleotides.
- Each nucleotide is made up of a sugar, a phosphate and a base.
- There are 4 different bases in a DNA molecule:
- adenine (a purine)
- cytosine (a pyrimidine)
- guanine (a purine)
- thymine (a pyrimidine)
- The number of purine bases equals the number of pyrimidine bases
- The number of adenine bases equals the number of thymine bases
- The number of guanine bases equals the number of cytosine bases
- The basic structure of the DNA molecule is helical, with the bases being stacked on top of each other
Working with nucleotide models made of wire, Watson and Crick attempted to put together the puzzle of DNA structure in such a way that their model would account for the variety of facts that they knew described the molecule. Once satisfied with their model, they published their hypothesis, entitled "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid" in the British journal Nature (April 25, 1953, volume 171: 737-738). It is interesting to note that this paper has been cited over 800 times since its first appearance!
Here are their words:
"...This (DNA) structure has two helical chains each coiled round the same axis...Both chains follow right handed helices...the two chains run in opposite directions. ..The bases are on the inside of the helix and the phosphates on the outside..."
"The novel feature of the structure is the manner in which the two chains are held together by the purine and pyrimidine bases... The (bases) are joined together in pairs, a single base from one chain being hydrogen-bonded to a single base from the other chain, so that the two lie side by side...One of the pair must be a purine and the other a pyrimidine for bonding to occur. ...Only specific pairs of bases can bond together. These pairs are: adenine (purine) with thymine (pyrimidine), and guanine (purine) with cytosine (pyrimidine)."
"...in other words, if an adenine forms one member of a pair, on either chain, then on these assumptions the other member must be thymine; similarly for guanine and cytosine. The sequence of bases on a single chain does not appear to be restricted in any way. However, if only specific pairs of bases can be formed, it follows that if the sequence of bases on one chain is given, then the sequence on the other chain is automatically determined."
"...It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material."
And with these words, the way was made clear for tremendous strides in our understanding of the structure of DNA and, as a result our ability to work with and manipulate the information-rich DNA molecule.
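The pairing rule Watson and Crick describe means one chain mechanically determines its partner. A minimal Python sketch of that idea (strand orientation is ignored for simplicity):
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    # Apply the Watson-Crick pairing rules base by base.
    return "".join(PAIRS[base] for base in strand.upper())

print(complement("ATGC"))  # prints TACG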
Source: Pamela Peters, Ph.D., Access Excellence, Genentech, Inc.
For further information please see: | <urn:uuid:64b41299-349b-474e-bdb0-ea4b3176f572> | 3.921875 | 1,456 | Knowledge Article | Science & Tech. | 41.58806 |
Science Fair Project Encyclopedia
Classical tests of general relativity
The classical tests of general relativity are the experimentally verifiable consequences of the theory of general relativity for gravitational interaction. Three possible types of experiments were proposed soon after publication of the Einstein field equations in 1916; a fourth test was added much later. They are:
- Gravitational redshift or Einstein shift (clocks in a gravitational field observed from a distance tick slower),
- Deflection of light (when light passes near a mass concentration such as the Sun, its path is slightly bent, also called gravitational lensing),
- Perihelion shift of the planet Mercury (the deviation from Kepler-orbits of a test mass (such as a planet) around a massive object such as the Sun)
- Time delay in radar propagation near the Sun
The gravitational redshift is a direct consequence of the Einstein equivalence principle and was found by Albert Einstein eight years before the full theory.
Experimental verification of this principle requires good clocks, and it was first experimentally confirmed only in 1960, by the Pound-Rebka experiment (Pound, R.V., Rebka, G.A., 1960, Phys. Rev. Lett., 4, 337), later improved by Pound and Snider. The famous experiment is generally called the Pound-Rebka-Snider experiment. The accuracy is typically 1%.
A very accurate gravitational redshift experiment was performed in 1976 (Vessot, R.F.C,. Levine, M.W., Mattison, E.M., et al., 1980, Phys. Rev. Lett. 45, 2081-2084). A hydrogen maser clock on a rocket was launched to a height of 10000 km, and its rate compared with an identical clock on the ground. It tested the gravitational redshift to 0.02%.
Gravitational deflection of light
The first observation of light deflection was performed by noting the change in position of stars near the Sun (F.W. Dyson, A.S. Eddington, C. Davidson, 1920, Philos. Trans. Royal Soc. London, Vol. 220A, p. 291-333). It took place during a total solar eclipse (so that stars near the Sun could be observed) in 1919 and was observed from northern Brazil and from an island off the west coast of Africa. The result was considered spectacular news and made the front page of most international journals. It made Einstein and his General Relativity world famous. The early accuracy, however, was poor (20% at best) and remained poor for about 40 years, until methods were found to accurately measure stellar positions in the sky.
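For reference, the predicted deflection at the solar limb follows from alpha = 4GM/(c^2 R). A quick Python check, using standard values for the constants (they are not given in the article):
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
R_sun = 6.957e8     # solar radius, m
c = 2.998e8         # speed of light, m/s
alpha = 4 * G * M_sun / (c**2 * R_sun)  # deflection in radians
print(alpha * 206265)                   # ~1.75 arcseconds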
Using radio interferometry
Radio interferometry observations (using sources that emit in the radio range) during the 1960s were able to provide accurate (relative) positions of radio sources. The sources used are quasars, some of which are strong radio sources in the sky. The quasars 3C273 and 3C279 have a small angular separation. Each year around October 8 they pass near the Sun, whereby the quasar 3C279 is eclipsed by the Sun. During its approach to the Sun, the bending of light near the Sun can be verified to 1.5% (Fomalont, E.B., and Sramek, R.A., 1976, Phys. Rev. Lett., 236, 1475-1478).
The positional accuracy of any telescope is in principle limited by diffraction; for radio telescopes this is also the practical limit. An important improvement in obtaining high positional accuracies (from milli-arcsec to micro-arcsec) was achieved by combining radio telescopes across the Earth. The technique is called VLBI, Very Long Baseline Interferometry. With this technique, radio observations couple the phase information of the radio signal observed in telescopes separated by large distances. With these accuracies the Einstein light deflection can be determined to an accuracy of 0.2% (D.S. Robertson & W.E. Carter, 1984, Nature, 310, p. 572-574; and D.S. Robertson, W.E. Carter & W.H. Dillinger, 1991, Nature, 349, p. 768-770).
At this level of precision all sorts of systematic effects have to be taken into account to know the precise location of the telescopes on Earth. Important are such effects as: Earth nutation (which has an error in the annual term of 2 milli-arcsec, mas), Earth rotation, atmospheric refraction, tectonic displacement, tidal waves in the ocean, etc. An astronomical limitation is the refraction of radio waves around the Sun, in the so-called solar corona, which extends to several solar radii. Fortunately, gravitational deflection is achromatic (it doesn't depend on wavelength), while the solar corona bends electromagnetic radiation in the radio range in a wavelength-dependent way. This chromatic effect can be used to eliminate the refraction in the solar corona, but uncertainties remain.
Observations with the Hipparcos satellite
In principle we observe almost all of the sky slightly distorted due to the gravitational deflection of light caused by the Sun (the anti-Sun direction excepted). This effect has been observed. The ESA astrometric satellite Hipparcos measured the positions of about 10^5 stars. During the full mission about 3.5 × 10^6 relative positions were determined, each to an accuracy of typically 3 mas (1 mas = 0.001 arcsec; this accuracy is for an 8-9 magnitude star). Since the gravitational deflection perpendicular to the Earth-Sun direction is already 4.07 mas, corrections are needed for practically all stars. Without systematic effects, the error in an individual observation of 3 mas could be reduced by the square root of the number of positions, leading to a precision of 0.0016 mas. Systematic effects, however, limit the accuracy of the determination to 0.1% (Froeschlé, M., Mignard, F., Arenou, F., 1997, Hipparcos Venice, ESA-SP-402, "Determination of the PPN parameter γ with the Hipparcos data").
Perihelion shift of Mercury
The two previous effects, the gravitational redshift and the deflection of light, are derived from null geodesics, the paths of photons. The path of a test particle in Einstein's theory of gravitation also differs from the pure ellipses expected on the basis of Newtonian theory. With small deviations from the Newtonian theory, the effect is that the axis of the ellipse will rotate. Since the point in the orbit of a planet nearest to the Sun is called the perihelion, the most obvious effect is the perihelion shift of planets. The orbit of the planet Mercury, nearest to the Sun, was carefully calculated in the 19th century. The disturbing effects of other planets also cause a perihelion shift, but the calculation left a small amount of 43 arcseconds per century unexplained. This is just the amount predicted by General Relativity.
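The relativistic advance per orbit is 6*pi*G*M / (c^2 * a * (1 - e^2)). A quick Python check for Mercury, using standard orbital elements (not given in the article):
import math

G, M_sun, c = 6.674e-11, 1.989e30, 2.998e8
a, e = 5.791e10, 0.2056              # Mercury's semi-major axis (m) and eccentricity
orbits_per_century = 36525 / 87.97   # Mercury's period is about 88 days
shift = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))  # radians per orbit
print(shift * orbits_per_century * 206265)  # ~43 arcseconds per century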
Time-delay in radar propagation
The previous three tests are called the three classical tests of General Relativity. Much later, in 1964, Shapiro proposed another test to be performed within the solar system. It is generally called the fourth "classical" test of General Relativity. Shapiro predicted a relativistic time delay in the round-trip travel time for radar signals reflecting off other planets (Shapiro, I.I., 1964, Phys. Rev. Lett. 13, p. 789-791, "Fourth test of general relativity").
The curvature of the path of a photon passing near the Sun is too small to have an observable delaying effect, but General Relativity predicts a time delay which becomes progressively larger when the photon passes nearer to the Sun. Observing radar reflections from a planet just before and after it is eclipsed by the Sun shows this effect.
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | <urn:uuid:31e0099b-d001-43c4-870c-600b0c6d199a> | 3.84375 | 1,704 | Knowledge Article | Science & Tech. | 51.269086 |
As short as an attosecond…Wednesday, July 11th, 2012 by Roberto Saracco
Imagine a billionth of a second; it is pretty difficult. Let's say that a billionth of a second is to one second as one second is to about 32 years (in 32 years you have a billion seconds). In a billionth of a second a light beam will move just about 30 cm! And light is pretty fast.
Now take that billionth of a second and divide it by one billion. What you have is an "attosecond", a billionth of a billionth of a second. In that attosecond a light beam will travel a distance of 3 angstroms, that is, a distance about 3 atoms long!
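The arithmetic behind these figures is just distance = speed x time; a quick Python check:
c = 2.998e8       # speed of light, m/s
print(c * 1e-9)   # ~0.3 m: light travel in one nanosecond
print(c * 1e-18)  # ~3e-10 m = 3 angstroms: light travel in one attosecond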
Well, this is actually the feat achieved by researchers at Imperial College London! They managed to generate a laser pulse lasting an attosecond, and they expect this can help other scientists get a better understanding of some of nature's processes, like photosynthesis.
To imagine why it would be so, think of those pictures showing a bullet going through an apple. You have surely seen those pictures; just in case, you can see one here.
To take such a picture you have to freeze the image using a very short burst of light. Well, with a burst of light lasting one attosecond, scientists hope to be able to freeze the movements of atoms in chemical reactions, that is, to "see" the electrons binding atoms to form molecules and hence get a much better understanding of processes like photosynthesis.
Understanding these processes means being able (potentially) to mimic them and hence to create better solar panels (as an example).
This is just an example of a general trend of science, moving to the nano dimension, to the dimension at which Nature works. And this will bring a lot of innovation to our world! | <urn:uuid:4f47be9b-790c-48b1-af51-0a48d41a7e78> | 3.25 | 390 | Personal Blog | Science & Tech. | 53.998877 |
The greatest earthquake risk east of the Rocky Mountains is along the New Madrid fault system. Damaging earthquakes are much less frequent than in California, but when they do occur, the damage can be far greater, due to the underlying geology.
The New Madrid fault system, or the New Madrid seismic zone, is a series of faults beneath the continental crust in a weak spot known as the Reelfoot Rift. It cannot be seen on the surface. The fault system extends 150 miles southward from Cairo, Illinois through New Madrid and Caruthersville, Missouri, down through Blytheville, Arkansas to Marked Tree, Arkansas. It dips into Kentucky near Fulton and into Tennessee near Reelfoot Lake, and extends southeast to Dyersburg, Tennessee. It crosses five state lines, and crosses the Mississippi River in at least three places.
Magnitudes are determined by various methods. The one with which most people are familiar is the Richter Scale. The Richter Scale is a measure of the energy released in an earthquake, determined from ground motion on seismograms. An earthquake has only one Richter magnitude.
What do the numbers on the Richter Scale tell us? Each unit of the Richter Scale is a tenfold increase in the relative size of an earthquake. A magnitude 6.0 is ten times the size of a magnitude 5.0, and one hundred times the size of a 4.0. But energy release is a different matter. Each unit up the Richter Scale is around a 32-times-greater release of energy. A magnitude 6.0 releases 32 times more energy than a magnitude 5.0, and about one thousand times more energy than a magnitude 4.0.
The amount of energy released in a small earthquake is not enough to prevent a large one from occurring. A million magnitude 2.0 earthquakes would release the same amount of energy as one magnitude 6.0 quake. 32,768 magnitude 2.0 earthquakes would release the same amount of energy as one magnitude 5.0
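These ratios follow from the scale's definition: amplitude grows as 10^dm and energy roughly as 10^(1.5*dm) for a magnitude difference dm. A quick Python check:
dm = 6.0 - 2.0
print(10 ** dm)          # amplitude ratio: 10,000
print(10 ** (1.5 * dm))  # energy ratio: ~1,000,000 (a million M2.0s ~ one M6.0)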
Of greatest concern in the near future, then, are the 6.0 to 6.5 events. Damaging earthquakes in this magnitude range are possible within the lifetimes of our children. Two have occurred since 1811-1812: one in 1843, and another in 1895.
Many things can be done to protect ourselves. Education, preparedness planning, and proper building construction are proven means to minimize the deaths, injuries, and economic losses due to earthquakes. Northern California and Armenia recently experienced 6.9-7.1 earthquakes. Northern California was prepared; Armenia was not. In northern California 62 people died and there were more than $6 billion in losses. In Armenia over 25,000 people died and losses were greater than $20 billion. The central United States is more prepared than Armenia, but not nearly as well prepared as northern California.
The choice is ours. We can get ready and reduce our losses, or we can do nothing, and suffer the full consequences of a damaging earthquake. We need to continue to plan, to build better buildings, and make sure that earthquake preparedness becomes a part of all our lives. We cannot prevent the coming of an earthquake, but we can reduce the effects. | <urn:uuid:d798d8df-3ce1-4db5-a25a-2b61ad6cf02e> | 4 | 659 | Knowledge Article | Science & Tech. | 56.111724 |
WrapperTypes are usually trivial wrappers (i.e. newtypes) that are designed to convey some information to the type system. Non-trivial type synonyms and Type class wrapper are both instances of this. This idiom is also in a synergistic relation to Phantom types, Traits type class, and Simulating dependent types.
1 Usages and examples
One use of wrapper types is to add Phantom types to a pre-existing (e.g. 3rd party) type. Another example occurs in WxHaskell to handle interfacing to an OO library, where the sub-typing relationship is represented as nested wrapper types.
Traits type class and wrapper types often give two different approaches to solving the same problem. For example, if you want to compare two Strings for equality in different ways (mainly case-sensitive and case-insensitive) you can either use a wrapper to adapt String to the Eq class,
newtype CIString = CIString String

instance Eq CIString where
  CIString a == CIString b = map toUpper a == map toUpper b
or you can make a Traits type class:
class MyEq traits a where
  cmp :: traits -> a -> a -> Bool

data CaseSensitive
data CaseInsensitive

instance MyEq CaseSensitive String where
  cmp _ = (==)

instance MyEq CaseInsensitive String where
  cmp _ a b = map toUpper a == map toUpper b
As the example illustrates, the two approaches have different trade-offs, but we can also get some of the benefits of both with the following synergy between Phantom types and Traits type class (and perhaps also wrapper types). What we do is store the traits type variable in a phantom type variable (added in this case via a wrapper type) which avoids the need for a Reified type parameter or the construction of a custom class (when the class already exists).
newtype PString a = PString String

data CaseSensitive
data CaseInsensitive

instance Eq (PString CaseSensitive) where
  PString a == PString b = a == b

instance Eq (PString CaseInsensitive) where
  PString a == PString b = map toUpper a == map toUpper b
This gives us the benefit of only having one type that we can parameterize to different implementations and the benefit of working with pre-existing type classes, it does however require us to provide a phantom type argument.
2 See also
- http://www.haskell.org/pipermail/haskell/2004-August/014397.html gives a rather involved example that uses this idiom. | <urn:uuid:c3e0a213-9024-4488-a7b9-647a11376570> | 3.234375 | 560 | Documentation | Software Dev. | 38.967144 |
Scientists at the Natural History Museum have identified for the first time how the tiny structures that make butterfly wings shimmer, and seemingly change colour, actually work
'We've found for the first time that more than one structure within a single scale causes the colour changes in butterflies,' says Dr Abigail Ingram, butterfly expert (zoologist) at the Museum.
Butterflies display some of the most vivid colours in nature and these colours are the result of light striking structures on the surface of the wing scales. They are known as iridescence or structural colours.
The centre of the study was a butterfly from New Guinea called Lamprolenis nitida.
Butterfly wing under SEM (Scanning Electron Microscope)
This butterfly has an intriguing characteristic - it appears matt brown when lit from above but green to red when lit from the front, and blue to violet when lit from the back.
'L. nitida uses 2 nanostructures in a single wing scale, each of which causes a separate iridescent signal of different colours that can be seen in different directions,' says Ingram.
'We've never seen this in any other butterfly, nor indeed other animals or plants. This discovery is therefore new to the field of butterfly structural colour but also more generally, to the study of structural colours in nature.'
L. nitida lives in the forests of New Guinea with thousands of other species of butterflies. They therefore need to be able to recognise each other.
Light levels are very low in these dense forests and occasionally a shaft of light breaks through the overhead canopy and will light up a butterfly and make it very noticeable.
Only the male butterflies of this species use the colour-changing ability, and Dr Ingram and her team say it is probably used in threat displays to warn off other males.
Understanding how these tiny nanostructures in the butterfly wings cause optical effects is important because scientists want to replicate these structures, and other good designs from nature and use them in modern technology.
This is known as biomimetics and already scientists have made iridescent structures using conventional engineering and Museum scientists have grown cells (culture cells) that produce the structures for us. | <urn:uuid:8e5642f0-01c0-428e-b54d-c0cc9ec08c9a> | 3.96875 | 459 | Knowledge Article | Science & Tech. | 33.932062 |
Describing Motion with Velocity vs. Time Graphs
Experiment with velocity-time (and position-time) plots with this interactive Java applet.
PhET Simulation: The Moving Man
Explore the relationship between motion graphs and motion with PHET's The Moving Man simulation.
Apply the Brakes
Explore the difference between constant speed and accelerated motion with the Apply the Brakes simulation.
Flickr Physics
Visit The Physics Classroom's Flickr Galleries and take a visual overview of 1D Kinematics.
Shockwave Studios
Think you get the idea? Try the Graph That Motion activity from the Shockwave Studios.
This interactive simulation will help students relate the shape of v-t (and p-t) plots to the actual motion.
PhET Simulation: The Moving Man
Explore the relationship between graphs and motion with The Moving Man simulation from PHET.
Apply the Brakes
This Apply the Brakes simulation provides multiple representations of constant speed and accelerated motion.
Graph Matching Motion Model
This EJS simulation from Open Source Physics (OSP) contrasts the graphs for constant speed and accelerated motion.
Shockwave Studios
Graph That Motion from the Shockwave Studios is an excellent accompanying activity to this page.
Curriculum Corner
Learning requires action. Give your students this sense-making activity from The Curriculum Corner.
The Laboratory
Looking for a lab that coordinates with this page? Try the Velocity-Time Graphs Lab at The Laboratory. Requires motion detectors.
Treasures from TPF
Need ideas? Explore The Physics Front's treasure box of catalogued resources on kinematic graphing.
Socratic Dialog-Inducing Lab
A collection of "guided construction" activities that introduces the concept of kinematic graphing in an interactive manner.
The Meaning of Shape for a v-t Graph
Our study of 1-dimensional kinematics has been concerned with the multiple means by which the motion of objects can be represented. Such means include the use of words, the use of diagrams, the use of numbers, the use of equations, and the use of graphs. Lesson 4 focuses on the use of velocity versus time graphs to describe motion. As we will learn, the specific features of the motion of objects are demonstrated by the shape and the slope of the lines on a velocity vs. time graph. The first part of this lesson involves a study of the relationship between the shape of a v-t graph and the motion of the object.
Consider a car moving with a constant, rightward (+) velocity - say of +10 m/s. As learned in an earlier lesson, a car moving with a constant velocity is a car with zero acceleration.
If the velocity-time data for such a car were graphed, then the resulting graph would look like the graph at the right. Note that a motion described as a constant, positive velocity results in a line of zero slope (a horizontal line has zero slope) when plotted as a velocity-time graph. Furthermore, only positive velocity values are plotted, corresponding to a motion with positive velocity.
Now consider a car moving with a rightward (+), changing velocity - that is, a car that is moving rightward but speeding up or accelerating. Since the car is moving in the positive direction and speeding up, the car is said to have a positive acceleration.
If the velocity-time data for such a car were graphed, then the resulting graph would look like the graph at the right. Note that a motion described as a changing, positive velocity results in a sloped line when plotted as a velocity-time graph. The slope of the line is positive, corresponding to the positive acceleration. Furthermore, only positive velocity values are plotted, corresponding to a motion with positive velocity.
The velocity vs. time graphs for the two types of motion - constant velocity and changing velocity (acceleration) - can be summarized as follows.
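The two graph shapes are easy to reproduce. A minimal matplotlib sketch (the numbers, +10 m/s and 2 m/s/s, are example values):
import matplotlib.pyplot as plt

t = list(range(11))             # time in seconds
v_const = [10 for _ in t]       # constant velocity: horizontal line, zero slope
v_accel = [2 * ti for ti in t]  # changing velocity: sloped line, positive slope
plt.plot(t, v_const, label="constant velocity (+10 m/s)")
plt.plot(t, v_accel, label="positive acceleration (2 m/s/s)")
plt.xlabel("time (s)")
plt.ylabel("velocity (m/s)")
plt.legend()
plt.show()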
The Importance of Slope
The shapes of the velocity vs. time graphs for these two basic types of motion - constant velocity motion and accelerated motion (i.e., changing velocity) - reveal an important principle. The principle is that the slope of the line on a velocity-time graph reveals useful information about the acceleration of the object. If the acceleration is zero, then the slope is zero (i.e., a horizontal line). If the acceleration is positive, then the slope is positive (i.e., an upward sloping line). If the acceleration is negative, then the slope is negative (i.e., a downward sloping line). This very principle can be extended to any conceivable motion.
The slope of a velocity-time graph reveals information about an object's acceleration. But how can one tell whether the object is moving in the positive direction (i.e., positive velocity) or in the negative direction (i.e., negative velocity)? And how can one tell if the object is speeding up or slowing down?
The answers to these questions hinge on one's ability to read a graph. Since the graph is a velocity-time graph, the velocity would be positive whenever the line lies in the positive region (above the x-axis) of the graph. Similarly, the velocity would be negative whenever the line lies in the negative region (below the x-axis) of the graph. As learned in Lesson 1, a positive velocity means the object is moving in the positive direction; and a negative velocity means the object is moving in the negative direction. So one knows an object is moving in the positive direction if the line is located in the positive region of the graph (whether it is sloping up or sloping down). And one knows that an object is moving in the negative direction if the line is located in the negative region of the graph (whether it is sloping up or sloping down). And finally, if a line crosses over the x-axis from the positive region to the negative region of the graph (or vice versa), then the object has changed directions.
Now how can one tell if the object is speeding up or slowing down? Speeding up means that the magnitude (or numerical value) of the velocity is getting larger. For instance, an object with a velocity changing from +3 m/s to +9 m/s is speeding up. Similarly, an object with a velocity changing from -3 m/s to -9 m/s is also speeding up. In each case, the magnitude of the velocity (the number itself, not the sign or direction) is increasing; the speed is getting bigger. Given this fact, one would conclude that an object is speeding up if the line on a velocity-time graph is changing from near the 0-velocity point to a location further away from the 0-velocity point. That is, if the line is getting further away from the x-axis (the 0-velocity point), then the object is speeding up. And conversely, if the line is approaching the x-axis, then the object is slowing down.
Check Your Understanding
1. Consider the graph at the right. The object whose motion is represented by this graph is ... (include all that are true):
- moving in the positive direction.
- moving with a constant velocity.
- moving with a negative velocity.
- slowing down.
- changing directions.
- speeding up.
- moving with a positive acceleration.
- moving with a constant acceleration. | <urn:uuid:9e13f0aa-dbba-40a9-b382-b9acbcf5e272> | 3.328125 | 1,539 | Tutorial | Science & Tech. | 46.177428 |
Posted By Crosis on Tuesday, November 12 2002
Subject: Scientists spot simultaneous solar flares on opposite sides of sun
Scientists have observed solar flares erupting almost simultaneously on opposite sides of the sun.
The discovery was made by researchers at the National Solar Observatory in southern New Mexico.
| <urn:uuid:875eca96-2ed4-40ef-96fa-adb7eef4a8c0> | 2.921875 | 101 | Comment Section | Science & Tech. | 41.675152 |
Instances of Set and ImmutableSet both provide the following operations:
| Operation | Equivalent | Result |
| len(s) | | cardinality of set s |
| x in s | | test x for membership in s |
| x not in s | | test x for non-membership in s |
| s.issubset(t) | s <= t | test whether every element in s is in t |
| s.issuperset(t) | s >= t | test whether every element in t is in s |
| s.union(t) | s \| t | new set with elements from both s and t |
| s.intersection(t) | s & t | new set with elements common to s and t |
| s.difference(t) | s - t | new set with elements in s but not in t |
| s.symmetric_difference(t) | s ^ t | new set with elements in either s or t but not both |
| s.copy() | | new set with a shallow copy of s |
Note, the non-operator versions of union(),
intersection(), difference(), and
symmetric_difference() will accept any iterable as an argument.
In contrast, their operator based counterparts require their arguments to
be sets. This precludes error-prone constructions like
Set('abc') & 'cbs' in favor of the more readable
Set('abc').intersection('cbs').
Changed in version 2.3.1:
Formerly all arguments were required to be sets.
In addition, both Set and ImmutableSet support set to set comparisons. Two sets are equal if and only if every element of each set is contained in the other (each is a subset of the other). A set is less than another set if and only if the first set is a proper subset of the second set (is a subset, but is not equal). A set is greater than another set if and only if the first set is a proper superset of the second set (is a superset, but is not equal).
The subset and equality comparisons do not generalize to a complete
ordering function. For example, any two disjoint sets are not equal and
are not subsets of each other, so all of the following return False: a<b, a==b, or a>b.
Accordingly, sets do not implement the __cmp__ method.
Since sets only define partial ordering (subset relationships), the output of the list.sort() method is undefined for lists of sets.
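A short illustration of this partial ordering (Python 2 only; the sets module was removed in Python 3):
from sets import Set

a, b = Set([1, 2]), Set([3, 4])  # two disjoint sets
print(a == b)  # False: not equal
print(a < b)   # False: a is not a proper subset of b
print(a > b)   # False: a is not a proper superset of b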
The following table lists operations available in ImmutableSet but not found in Set:
| Operation | Result |
| hash(s) | returns a hash value for s |
The following table lists operations available in Set but not found in ImmutableSet:
| Operation | Equivalent | Result |
| s.update(t) | s \|= t | return set s with elements added from t |
| s.intersection_update(t) | s &= t | return set s keeping only elements also found in t |
| s.difference_update(t) | s -= t | return set s after removing elements found in t |
| s.symmetric_difference_update(t) | s ^= t | return set s with elements from s or t but not both |
| s.add(x) | | add element x to set s |
| s.remove(x) | | remove x from set s; raises KeyError if not present |
| s.discard(x) | | removes x from set s if present |
| s.pop() | | remove and return an arbitrary element from s; raises KeyError if empty |
| s.clear() | | remove all elements from set s |
Note, the non-operator versions of update(), intersection_update(), difference_update(), and symmetric_difference_update() will accept any iterable as an argument. Changed in version 2.3.1: Formerly all arguments were required to be sets.
Also note, the module also includes a union_update() method which is an alias for update(). The method is included for backwards compatibility. Programmers should prefer the update() method because it is supported by the builtin set() and frozenset() types.
| <urn:uuid:571c1b2f-fcc7-44d4-a468-d88e756a74ac> | 3.140625 | 737 | Documentation | Software Dev. | 52.475021 |
Back in the old days of Python, to call a function with arbitrary arguments, you would use apply: apply(f, args, kwargs). apply still exists in Python 2.7, though not in Python 3, and is generally not used anymore. Nowadays, f(*args, **kwargs) is preferred. The multiprocessing.Pool module tries to provide a similar interface.
Pool.apply is like Python's apply, except that the function call is performed in a separate process. Pool.apply blocks until the function is completed.
Pool.apply_async is also like Python's built-in apply, except that the call returns immediately instead of waiting for the result. An ApplyResult object is returned. You call its get() method to retrieve the result of the function call. The get() method blocks until the function is completed. Thus, pool.apply(func, args, kwargs) is equivalent to pool.apply_async(func, args, kwargs).get().
In contrast to Pool.apply, the Pool.apply_async method also has a callback which, if supplied, is called when the function is complete. This can be used instead of calling get(). For example:
import multiprocessing as mp

result_list = []

def foo_pool(x):
    return x * x

def log_result(result):
    # This is called whenever foo_pool(i) returns a result.
    # result_list is modified only by the main process, not the pool workers.
    result_list.append(result)

if __name__ == '__main__':
    pool = mp.Pool()
    for i in range(10):
        pool.apply_async(foo_pool, args=(i,), callback=log_result)
    pool.close()
    pool.join()
    print(result_list)
which may yield a result such as
[1, 0, 4, 9, 25, 16, 49, 36, 81, 64]
Unlike pool.map, the order of the results may not correspond to the order in which the pool.apply_async calls were made.
So, if you need to run a function in a separate process, but want the current process to block until that function returns, use Pool.apply. Like Pool.apply, Pool.map blocks until the complete result is returned.
If you want the Pool of worker processes to perform many function calls asynchronously, use Pool.apply_async. The order of the results is not guaranteed to be the same as the order of the calls to Pool.apply_async. Notice also that you could call a number of different functions with Pool.apply_async (not all calls need to use the same function).
Pool.map applies the same function to many arguments.
Unlike Pool.apply_async, the results are returned in an order corresponding to the order of the arguments. | <urn:uuid:6a273c9a-8fb7-45f7-883a-0574dd790b38> | 2.71875 | 546 | Q&A Forum | Software Dev. | 64.889324 |
New Chameleon Species from the Aberdare Mountains, Kenya
A new species from the central highlands of Kenya is described but may be threatened by fires
Surveys of the highlands of East Africa continue to reveal undescribed chameleon species diversity. Researchers recently surveyed the inaccessible Kinangop Peak (4000m) at the southern end of the Aberdare range, in the central highlands of Kenya. They compared the external morphology of small chameleons they found with other described taxa in the bitaeniatus group, which revealed them to be morphologically distinct. Mitochondrial DNA was used to construct gene trees and showed that the proposed new taxon, Trioceros kinangopensis sp. nov., is a unique evolutionary lineage that is closely related to T. nyirit and T. schubotzi. Trioceros kinangopensis sp. nov. is restricted to the afroalpine zone (>3500m), and the prevalence of fires in this type of habitat may threaten its long-term survival.
For more information, read the full article here. | <urn:uuid:a3fa7ca5-c178-4df8-82e0-fc9c6f28de22> | 2.734375 | 228 | Truncated | Science & Tech. | 36.466491 |
Tag: "sticky" at biology news
Research suggests new way to repair cartilage damage
...hyaluronan was chemically altered to have multiple sticky sites that are used to latch on to each other. The researchers then treat the polymer gel with laser light, turning the liquid into a solid, a process that takes about 30 seconds. "The solid polymer creates a scaffold of support that fills the defe...
Study reveals important new factor in cystic fibrosis lung inflammation
...ne causes the body to produce an abnormally thick, sticky mucus that clogs the lungs and leads to life-threatening lung infections. These thick secretions also obstruct the pancreas, preventing digestive enzymes from reaching the intestines to help break down and absorb food. People with CF have a variety o...
Using science to restore habitat for declining species
...r pecks away the tree bark to make holes that leak sticky resin that deters snakes from entering the nest cavity. One of Conner's many studies suggests that the socially dominant breeding male can actually determine which tree will produce the most resin. Conner also studies the role of red heart fungus, wh...
Stanford researchers go from heaven to Earth in 'lifeguard' test
... "We learned: Don't use electrodes that have very sticky
electrode gel. That stuff comes off when you sweat," said Mundt, who took part in the climbs. The most dramatic test so far put the equipment through an environment as close to extraterrestrial as possible. On that trip, the expedition me...
Chemical Society announces EPA awards for environmentally friendly technology
...tional, Inc. (Memphis, Tenn.) Recycling can be a sticky business when lingering adhesives on envelopes, ma...ll operators typically apply toxic solvents to the sticky gunk. But Buckman Laboratories found a novel enzyme to do the job more safely. The company's product...
Molecular motor shuttles key protein in response to light
... directly. Instead, they are "glued" together by a sticky fat, called phosphoinositides. "Arrestin is pasted onto the myosin motor and is quickly taken to its target destination within the cell," says Montell. "This explains why it moves much faster than if it just moved passively, essentially wandering to ...
Biologists deciphering complex lemur scent language
... And both male and female genital glands produce a sticky goo that females might mix with urine, and which might have an effect on the message the female is encoding." Scordato performed two kinds of studies to understand lemur scent communications. She recorded the rates of different kinds of scent markin...
Researchers show how to assemble building blocks for nanotechnology
... sheets, shells and other unusual structures using sticky patches that make the particles group themselves t...ering, studied the self-assembly of particles with sticky molecular "patches" on their surfaces---discrete interaction sites that cause particles to stick tog...
Proteins show promise for mosquito control
...lity to metabolize cholesterol. Cholesterol, the sticky substance that accumulates on the lining of human arteries, is an important component of cell membranes in vertebrates and invertebrates. In mosquitoes, it is vital for growth, development and egg production. Unlike humans, mosquitoes cannot synthe...
| <urn:uuid:eed07e4a-0c1c-47d9-8dd8-76cd7c179d21> | 2.703125 | 707 | Content Listing | Science & Tech. | 48.38419 |
Nested loop (loop over loop)
In this algorithm, an outer loop is formed over the table with few entries, and then for each entry an inner loop is processed:
Select tab1.*, tab2.* from tab1, tab2 where tab1.col1 = tab2.col2;
For i in (select * from tab1) loop
  For j in (select * from tab2 where col2 = i.col1) loop
    -- matched rows are returned here
  End loop;
End loop;
The Steps involved in doing nested loop are:
a) Identify outer (driving) table
b) Assign inner (driven) table to outer table.
c) For every row of outer table, access the rows of inner table.
In the execution plan it is seen like this:
When optimizer uses nested loops?
The optimizer uses a nested loop when we are joining tables containing a small number of rows with an efficient driving condition. It is important to have an index on the join column of the inner table, as this table is probed every time for a new value from the outer table.
The optimizer tends to avoid a nested loop when:
- The number of rows in both tables is quite high
- The inner query always results in the same set of records
- The access path of the inner table is independent of the data coming from the outer table
Note: You will see more use of nested loops when using the FIRST_ROWS optimizer mode, as it works on the model of showing instantaneous results to the user as they are fetched. There is no need to cache any data before it is returned to the user. In the case of a hash join, caching is needed, as explained below.
Hash join
Hash joins are used when joining large tables. The optimizer uses the smaller of the two tables to build a hash table in memory, and then scans the larger table, comparing hash values (of rows from the larger table) against this hash table to find the joined rows.
The algorithm of a hash join is divided into two parts:
- Build a in-memory hash table on smaller of the two tables.
- Probe this hash table with hash value for each row second table
In simpler terms it works like this:
For each row RW1 in small (left/build) table loop
  Calculate hash value on RW1 join key
  Insert RW1 into the appropriate hash bucket
End loop
For each row RW2 in big (right/probe) table loop
  Calculate the hash value on RW2 join key
  For each row RW1 in the matching hash bucket loop
    If RW1 joins with RW2
      Return RW1, RW2
  End loop
End loop
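The same two-phase algorithm is easy to express in runnable form; a minimal Python sketch, with made-up table and column names for illustration:
def hash_join(build_rows, probe_rows, build_key, probe_key):
    buckets = {}
    for row in build_rows:                 # build phase: hash the smaller input
        buckets.setdefault(row[build_key], []).append(row)
    for row in probe_rows:                 # probe phase: look up each row's key
        for match in buckets.get(row[probe_key], []):
            yield match, row

tab1 = [{"col1": 1}, {"col1": 2}]
tab2 = [{"col2": 2}, {"col2": 2}, {"col2": 3}]
print(list(hash_join(tab1, tab2, "col1", "col2")))  # joins on col1 = col2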
When optimizer uses hash join?
The optimizer uses a hash join while joining big tables or a big fraction of small tables.
Unlike nested loop, the output of a hash join is not instantaneous, as hash joining is blocked until the hash table is built.
Note: You may see more hash joins used with the ALL_ROWS optimizer mode, because it works on the model of showing results after all the rows of at least one of the tables are hashed into the hash table.
Sort merge join
Sort merge join is used to join two independent data sources. They perform better than nested loop when the volume of data is big in tables but not as good as hash joins in general.
They perform better than hash join when the join condition columns are already sorted or there is no sorting required.
The full operation is done in two parts:
- Sort join operation
get first row RW1 from input1
get first row RW2 from input2.
- Merge join operation
while not at the end of either input loop
if RW1 joins with RW2
get next row R2 from input 2
return (RW1, RW2)
else if RW1 < style=""> get next row RW1 from input 1
get next row RW2 from input 2
Note: If the data is already sorted, first step is avoided.
Important point to understand is, unlike nested loop where driven (inner) table is read as many number of times as the input from outer table, in sort merge join each of the tables involved are accessed at most once. So they prove to be better than nested loop when the data set is large.
When optimizer uses Sort merge join?
a) When the join condition is an inequality condition (like <, <=, >=). This is because hash join cannot be used for inequality conditions and if the data set is large, nested loop is definitely not an option.
b) If sorting is anyways required due to some other attribute (other than join) like “order by”, optimizer prefers sort merge join over hash join as it is cheaper.
Note: Sort merge join can be seen with both ALL_ROWS and FIRST_ROWS optimizer hint because it works on a model of first sorting both the data sources and then start returning the results. So if the data set is large and you have FIRST_ROWS as optimizer goal, optimizer may prefer sort merge join over nested loop because of large data. And if you have ALL_ROWS as optimizer goal and if any inequality condition is used the SQL, optimizer may use sort-merge join over hash join | <urn:uuid:aa9bedac-146b-4c29-99b8-ccefabad5e15> | 3.609375 | 1,054 | Documentation | Software Dev. | 57.700849 |
- Wind farms can warm up the surface of the land underneath them during the night.
- Satellite data of surface temperatures from western Texas show a direct correlation between night-time temperature increases and wind farm location.
New research finds that wind farms actually warm up the surface of the land underneath them during the night, a phenomena that could put a damper on efforts to expand wind energy as a green energy solution.
Researchers used satellite data from 2003 to 2011 to examine surface temperatures across as wide swath of west Texas, which has built four of the world's largest wind farms. The data showed a direct correlation between night-time temperatures increases of 0.72 degrees C (1.3 degrees F) and the placement of the farms. | <urn:uuid:0235891f-115b-48ae-86f7-214398d144ad> | 3.59375 | 148 | Personal Blog | Science & Tech. | 51.064884 |
I’m of the opinion that if we’re ever going to achieve long-range space exploration/colonization, human engineering of some sort will be required. This not only applies to space exploration however, but most likely the long-range survival of humanity in general. The common fear in this topic seems to always conjure up a ‘rise of the machines’ scenario when I’m discussing this with others. So I thought that today I’d post a few thoughts on how these fears might be alleviated in the future.
Collective human engineering could only work if it were done in such a way that truly ensured benevolence. Genetic experiments of the 20th century were largely associated with horrfying military actions. That said, there is nothing to suggest that superintelligent beings that grow out of a technological singularity would not be sociopathic. However, mirror-touch synaesthesia could be a beneficial limitation to full-spectrum superintelligence. This simply means that humans with mirror-touch synaesthesia are able to feel the pain of other people. It is a rare disorder that very few people truly experience.
Superintelligent beings need to be engineered in such a way that rewards or punishes their actions. This could very well be that feedback system. Nothing would be destructive if it had to endure every bit of pain it caused. For that matter, this could be looked at as an element of extreme doses of human guilt grafted onto a new artificial matrix.
However, to do this requires a fundamental change in humanity. Humans respond the way they do because of natural development. Elements of aggression and competition in human nature stem from ancestral survival instincts. These aspects of the human psyche were extremely important in primitive mankind.
From desktops to smartphones, from records to mp3s, from tv to streaming video, the pace of change in modern life can be bewildering. But is it our imagination or are these changes really accelerating? And what does it mean if they are?
Some individuals have suggested that drugs could be used to control people. This is certainly a grim manner of control. Rather, people need to aspire to a greater good for the coming technological singularity. We need controls in place every step of the way while scientists work toward a common goal. Feel free to share your thoughts below as always.
Have a great weekend everyone!
Additional Learning Resources:
- Superintelligent Will (acceleratingfuture.com)
- Redefining superintelligence (wetwiring.wordpress.com)
- Boosting the brains of animals (sentientdevelopments.com)
- Interviewed by The Rational Future (acceleratingfuture.com)
- Use of Singularity in Scrutinizer (algorithmist.wordpress.com)
Around the Twitosphere
How long until we see a technological singularity? On a related note, how long until we get kick-ass robot servant like in I, Robot?
people oughta read J. Diamond & Morris !What will happen first? The technological singularity or climate collapse? We gotta win the race! | <urn:uuid:ca270da1-4bd3-46e6-85fc-182b0667a5ba> | 2.765625 | 647 | Personal Blog | Science & Tech. | 39.094469 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Monday, 11 March 2013
The ability of ecosystems to adapt to climate change has been put under the microscope and the news is good for tuna and tropical rainforests.
Monday, 30 July 2012
New light has been shed on the Southern Ocean's ability to store carbon through an international study that pinpoints where carbon capture is most efficient.
Tuesday, 4 October 2011
It's 20 years since the signing of an historic agreement that prevented mining in Antarctica.
Thursday, 28 July 2011
Marine sensors have been deployed to the north of Australia to measure changes in ocean currents moving between the Pacific and Indian oceans.
Wednesday, 18 May 2011
Melting icebergs release iron into the Southern Ocean and stimulate carbon-gobbling phytoplankton, a new study has found.
Thursday, 14 October 2010
A new flotilla of floating sensors, underwater gliders and seal-mounted cameras are being used by Australian marine scientists to probe deep into our oceans.
Tuesday, 3 August 2010
Australia's waters have been ranked as being the most biologically diverse in the world, yet up to 80% of the species in it have yet to be discovered, a new study has shown.
Thursday, 20 May 2010
Scientists investigating deep sea life in a supposedly pristine area of the Southern Ocean near eastern Antarctica have been shocked to find what they believe is evidence of illegal fishing there.
Tuesday, 27 April 2010
The fishing industry needs to harvest a more diverse range of species, to take the pressure off our favourite seafood, argue researchers.
Monday, 26 April 2010
Scientists have measured the most powerful current that helps drive the circulation of the Southern Ocean, paving the way for more accurate climate models.
Friday, 16 April 2010
The supercharging of Earth's water cycle by global warming is making some parts of our oceans saltier, while others parts are getting fresher, according to a new study.
Monday, 15 March 2010
Scientists have discovered a link between winds that circle Antarctica, and changes in the depth of an important ocean layer which impacts the rate of climate change.
Friday, 27 November 2009
Scientists have given the state of Australia's marine environment a low grade in the country's first Marine Climate Change report card released today.
Tuesday, 6 October 2009
Molecular biologist Dr Elizabeth Blackburn says she was following her nose when she and two colleagues found a key to staving off the ravages of aging.
Friday, 28 August 2009
Changes in the sun's radiation cause a small La Nina-like effect on earth, new research has found small La Niņa-like | <urn:uuid:b57bec39-da32-499a-acc9-2670e9d08f52> | 2.875 | 549 | Content Listing | Science & Tech. | 36.029148 |
Wednesday 15 May
Louisiana pancake batfish (Halieutichthys intermedius)
- The Louisiana pancake batfish is a bizarre-looking fish with an enlarged head, a flattened body and limb-like pectoral fins.
- The Louisiana pancake batfish uses its fins to ‘walk’ along the ocean floor in a peculiar motion that resembles that of a walking bat.
Louisiana pancake batfish fact file
- Find out more
- Print factsheet
Louisiana pancake batfish description
First discovered in 2010, the Louisiana pancake batfish (Halieutichthys intermedius) is a bottom-dwelling fish with a truly bizarre appearance (1). Like other batfish (members of the Ogcocephalidae family), this species has a flattened body with an enlarged head and trunk which form a rounded disc shape (1) (2) (3). Its pectoral fins resemble limbs, and the Louisiana pancake batfish uses these and the smaller pelvic fins to ‘walk’ along the ocean floor (2) (4) (5) (6).
Like other batfish, the Louisiana pancake batfish is also peculiar in possessing a fleshy structure at the end of the snout which is used to lure in prey. Formed from a modified dorsal fin spine, this structure has a fleshy ‘bait’ on the end, and when retracted it sits in a cavity at the front of the snout (2) (3) (4) (5) (7). As in other Halieutichthys species, this cavity is very small compared to that of other batfish, and is hidden by puffy folds (1).
The Louisiana pancake batfish’s small mouth is situated on the underside of the body, and can be protruded (2). The eyes of this species are set close together on the top of the head (1) (2), and the small, round gill openings are located at the base of the pectoral fins (3) (4) (7). The Louisiana pancake batfish has a small dorsal fin on the top of its tail (2) (3).
The colouration of live Louisiana pancake batfish has yet to be recorded, but preserved specimens are dark to greyish brown, with a pale network-like pattern on the upper surface of the body. The underside of the body is paler, and the pectoral fins are marked with black bands which extend fully across the fin. Juvenile Louisiana pancake batfish have a large black patch on each pectoral fin, but the patch does not extend completely across the fin (1).
The bodies of batfish are covered in cone-like scales known as tubercles, which sometimes have small spines (1) (2) (3) (4) (5), making the fish look like it is covered in coarse hair (5). In the Louisiana pancake batfish, some smaller individuals have small spines on the tubercles, but these are much reduced in larger specimens. Its blunt tubercles help to distinguish this species from other closely related batfish (1).
The Louisiana pancake batfish can be distinguished from closely related species by having blunt tubercles. In some smaller individuals the tubercles may have small spines, but these are much reduced in larger specimens (1).
A relatively small batfish species, the Louisiana pancake batfish is intermediate in appearance between the closely related Halieutichthys aculeatus and Halieutichthys bispinosus, giving this species its scientific name of intermedius (1).
- Also known as
- tortilla fish. Top
FishBase - Halieutichthys intermedius:
International Institute for Species Exploration, Arizona State University: Top 10 - 2011: Pancake batfish:
The Nature Conservancy - Gulf of Mexico:
ARKive - Newly discovered species:
- Dorsal fin
- The unpaired fin found on the back of the body of fish, or the raised structure on the back of most cetaceans (whales, dolphins and porpoises).
- Animals with no backbone, such as insects, crustaceans, worms, molluscs, spiders, cnidarians (jellyfish, corals, sea anemones) and echinoderms.
- Pectoral fins
- In fish, the pair of fins that are found one on each side of the body just behind the gills. They are generally used for balancing and braking.
- Relating to or inhabiting the open ocean.
- Pelvic fins
- In fish, the pair of fins found on the underside of the body.
- The production or depositing of eggs in water.
- A small, rounded, wart-like bump on the skin or on a bone.
- Ho, H.-C., Chakrabarty, P. and Sparks, J.S. (2010) Review of the Halieutichthys aculeatus species complex (Lophiiformes: Ogcocephalidae), with descriptions of two new species. Journal of Fish Biology, 77: 841-869.
- McEachran, J.D. and Fechhelm, J.D. (1998) Fishes of the Gulf of Mexico. Volume 1: Myxiniformes to Gasterosteiformes. University of Texas Press, Austin, Texas.
Carpenter, K.E. (2002) The Living Marine Resources of the Western Central Atlantic. Volume 2: Bony Fishes Part 1 (Acipenseridae to Grammatidae). Food and Agriculture Organization of the United Nations, Rome. Available at:
- Nelson, J.S. (2006) Fishes of the World. Fourth Edition. John Wiley & Sons, Inc., Hoboken, New Jersey.
FishBase - Family Ogcocephalidae - Batfishes (January, 2013)
American Museum of Natural History - Scientists describe two new species of fish from area engulfed by oil spill (January, 2013)
FishBase - Order summary for Lophiiformes (January, 2013)
International Institute for Species Exploration, Arizona State University: Top 10 - 2011: Pancake batfish (January, 2013)
- Chakrabarty, P., Lam, C., Hardman, J., Aaronson, J., House, P.H. and Janies, D.A. (2012) SPECIESMAP: a web-based application for visualizing the overlap of distributions and pollution events, with a list of fishes put at risk by the 2010 Gulf of Mexico oil spill. Biodiversity and Conservation, 21: 1865-1876.
- view the contents of, and Material on, the website;
- download and retain copies of the Material on their personal systems in digital form in low resolution for their own personal use;
- teachers, lecturers and students may incorporate the Material in their educational material (including, but not limited to, their lesson plans, presentations, worksheets and projects) in hard copy and digital format for use within a registered educational establishment, provided that the integrity of the Material is maintained and that copyright ownership and authorship is appropriately acknowledged by the End User.
Louisiana pancake batfish biology
Batfish are named for their habit of ‘walking’ along the ocean floor on their arm-like fins, a bizarre motion which resembles that of a walking bat (6). Like other batfish, the Louisiana pancake batfish is an awkward swimmer (4) (5).
The Louisiana pancake batfish is well camouflaged against the substrate, helping it to capture unsuspecting prey (6). The peculiar ‘lure’ at the front of its snout can be extended a short way in front of the head (2) (3), and is believed to lure prey to within reach of the fish’s mouth (2) (6) (7). In captivity, other batfish species rarely move around, except for wriggling their lures when prey is present (3), and it is thought that the lure may attract prey by excreting a fluid (3) (6). The diet of the Louisiana pancake batfish is likely to include a range of small invertebrates and fish (2) (3) (5).
Although little is currently known about the breeding behaviour of the Louisiana pancake batfish, other batfish species have pelagic eggs and young, which live in the open ocean rather than on the ocean floor (3) (5). The young batfish are generally transparent and are spherical in shape, only changing into the adult form after settling on the ocean bottom (3).Top
Louisiana pancake batfish range
The Louisiana pancake batfish is known only from the northern Gulf of Mexico, where it has been recorded at depths of up to 366 metres. A few batfish specimens have been recorded further east and north up the Atlantic coast of the United States, but these have not yet been positively identified as this species (1).Top
Louisiana pancake batfish habitat
Like other Halieutichthys species, the Louisiana pancake batfish lives on the ocean floor, where it is likely to be found over sandy substrates (1).Top
Louisiana pancake batfish status
The Louisiana pancake batfish has yet to be classified by the IUCN.Top
Louisiana pancake batfish threats
The known range of the Louisiana pancake batfish occurs within the area affected by the 2010 Gulf of Mexico oil spill, which pumped vast quantities of oil into the marine environment (6) (8) (9). This strange-looking fish was only just discovered prior to the spill (8), and the fate of its population is not yet known. However, the Louisiana pancake batfish is considered to be one of the species that is most likely to have been negatively affected by the oil spill, as well as by the large quantity of dispersants used to tackle it (9).
Little is currently known about how the oil and dispersants are distributed beneath the ocean’s surface, and the potential impacts on the region’s deep sea species is still poorly understood. Possible effects on fish could include changes to their migration routes and spawning grounds, accumulation of pollutants within their bodies, increased mortality, and even local extinctions (9).Top
Louisiana pancake batfish conservation
No specific conservation measures are currently known to be in place for the Louisiana pancake batfish. However, studies have been undertaken to determine which species are likely to be most vulnerable to the effects of the 2010 oil spill, which may help to prioritise future research and conservation efforts. More information is needed on the region’s fish and on the effects of the spill before the fate of unique species such as the Louisiana pancake batfish can be determined (9).Top
Find out more
Find out more about the Louisiana pancake batfish:
More information on conservation in the Gulf of Mexico:
Learn more about newly discovered species on ARKive:
This information is awaiting authentication by a species expert, and will be updated as soon as possible. If you are able to help please contact:
More »Related species
Play the Team WILD game
MyARKive offers the scrapbook feature to signed-up members, allowing you to organize your favourite ARKive images and videos and share them with friends.
Terms and Conditions of Use of Materials
Copyright in this website and materials contained on this website (Material) belongs to Wildscreen or its licensors.
Visitors to this website (End Users) are entitled to:
End Users shall not copy or otherwise extract, alter or manipulate Material other than as permitted in these Terms and Conditions of Use of Materials.
Additional use of flagged material
Green flagged material
Certain Material on this website (Licence 4 Material) displays a green flag next to the Material and is available for not-for-profit conservation or educational use. This material may be used by End Users, who are individuals or organisations that are in our opinion not-for-profit, for their not-for-profit conservation or not-for-profit educational purposes. Low resolution, watermarked images may be copied from this website by such End Users for such purposes. If you require high resolution or non-watermarked versions of the Material, please contact Wildscreen with details of your proposed use.
Creative commons material
Certain Material on this website has been licensed to Wildscreen under a Creative Commons Licence. These images are clearly marked with the Creative Commons buttons and may be used by End Users only in the way allowed by the specific Creative Commons Licence under which they have been submitted. Please see http://creativecommons.org for details.
Any other use
Please contact the copyright owners directly (copyright and contact details are shown for each media item) to negotiate terms and conditions for any use of Material other than those expressly permitted above. Please note that many of the contributors to ARKive are commercial operators and may request a fee for such use.
Save as permitted above, no person or organisation is permitted to incorporate any copyright material from this website into any other work or publication in any format (this includes but is not limited to: websites, Apps, CDs, DVDs, intranets, extranets, signage, digital communications or on printed materials for external or other distribution). Use of the Material for promotional, administrative or for-profit purposes is not permitted. | <urn:uuid:ab86f124-774f-4a74-a45d-68029d4026dc> | 2.828125 | 2,824 | Knowledge Article | Science & Tech. | 40.404669 |
Southern right whale (Eubalaena australis)
|Size||Length: up to 18 m (2)|
Calf length: 5.50 m (2)
Calf weight: 1,000 – 1,500 kg (2)
|Weight||up to 80,000 kg (2)|
The southern right whale is classified as Least Concern (LC) on the IUCN Red List (1). The Chile-Peru subpopulation is listed as Critically Endangered (CR) on the IUCN Red List (1). It is listed on Appendix I of CITES (3) and Appendix I of the Convention for the Conservation of Migratory Species (4). It is also classified as endangered under the Environment Protection and Biodiversity Conservation Act 1999 and protected within Australian waters under the Whale Protection Act 1980 (2).
Known as a right whale because during the height of whaling efforts, this was the ‘right’ whale to catch, as it is large, slow-moving and floats when dead (5). This whale is easy to identify as it has a uniformly dark colour with white callosities (outgrowths of hard skin) on and around the head which can even be used to distinguish individuals (2). The body is rotund and the head is very large, making up one third of the total length. Unusually for baleen whales, the southern right whale does not have a dorsal fin or a grooved throat. The flippers are short and wide, and the blow hole is V-shaped (2).
The southern right whale is only found in the southern hemisphere in all waters between 30 and 60 º south (2).
A migratory species, the southern right whale is found in the open ocean of the most southern region of its range during the summer months where prey populations are more abundant, but migrates up to the coastal regions of more northerly regions of its range during the winter and spring (2).
Southern right whales belong in separate breeding groups which travel to their own areas to reproduce (5). Up to eight males may mate with one female (5) between July and August, but unusually for mammals, aggression between males is minimal (2). Females calve once every three years between June and August, with a gestation period of 11 to 12 months. Calving females go for four months during the winter months without eating, and give birth to a single, large calf weighing up to 1,500 kilograms (2). Females will nurture and feed their calves in the shallows where they are well protected from attacks by orcas and great white sharks (5). Calves are weaned after a year, and will reach sexual maturity at nine to ten years (2).
These enormous animals eat some of the smallest creatures in the ocean, filtering water through long and numerous baleen plates to feed on the small plankton including larval crustaceans and copepods (2).
Southern right whales produce short, low-frequency moans, groans, belches and pulses (7). Typical feeding dives last between 10 and 20 metres and southern right whales are also frequently seen at or above the surface of the water, slapping the water with its tail and flippers, rolling, and breaching (launching out of the water and landing on the side or back). The function of these behaviours is not known (7).
Following serious over-exploitation from the 1600s until the 1930s, the southern right whale population became dangerously low (2) (7). International protection in 1935 allowed a slow increase, but illegal whaling continued into the 1960s. Since then, the population has been increasing at the calculated ‘maximum rate’ (6). However, whilst this huge and unsustainable threat has largely been eliminated, pressures on the southern right whale still exist. Disturbance from vessels, divers, coastal industrial activity, entanglement in fishing gear and pollution are all concerns (2).
International protection by the International Whaling Commission and individual country programs to protect whales has produced significant results since the ban on hunting this species (6). Conservation activities currently include monitoring population numbers and behaviour through the use of photo identification of individuals, assessing the effects of disturbance, and education programs (2).
For further information on whaling see:
The International Whaling Commission:
Department of the Environment and Heritage: Australian Government:
This information is awaiting authentication by a species expert, and will be updated as soon as possible. If you are able to help please contact:
- Baleen: in some whales, the comb-like fibrous plates hanging from the upper jaw that are used to sieve food from sea water. These are often referred to as whalebone.
- Copepod: large and diverse group of minute marine and freshwater crustaceans belonging to the subclass Copepoda. They usually have an elongated body and a forked tail.
- Crustacea: diverse group of arthropods (a phylum of animals with jointed limbs and a hard chitinous exoskeleton) characterised by the possession of two pairs of antennae, one pair of mandibles (parts of the mouthparts used for handling and processing food) and two pairs of maxillae (appendages used in eating, which are located behind the mandibles). Includes crabs, lobsters, shrimps, slaters, woodlice and barnacles.
- Dorsal fin: the unpaired fin found on the back of the body of fish, or the raised structure on the back of most cetaceans.
- Gestation: the state of being pregnant; the period from conception to birth.
- Larval: of the stage in an animal’s lifecycle after it hatches from the egg. Larvae are typically very different in appearance to adults; they are able to feed and move around but usually are unable to reproduce.
- Parasite: an organism that derives its food from, and lives in or on, another living organism at the host’s expense.
- Plankton: aquatic organisms that drift with water movements; may be either phytoplankton (plants), or zooplankton (animals).
IUCN Red List (June, 2009)
Department of the Environment and Heritage: Australian Government (June, 2008)
CITES (November, 2004)
CMS (November, 2004)
Marine Themes (November, 2004)
- Kenney, R.D. (2002) North Atlantic, North Pacific, and Southern Right Whales. In: Perrin, W.F., Würsig, B. and Thewissen, J.G.M. (Eds) Encyclopedia of Marine Mammals. Academic Press, London.
WWF (June, 2008) | <urn:uuid:c4b8045a-6b5d-4a41-b4c8-75ae8109dead> | 3.484375 | 1,415 | Knowledge Article | Science & Tech. | 48.446006 |
We will now see why studying the free energy of a system is useful in determining its behaviour.
The free energy change, ΔG, of a chemical reaction is the difference in free energy between the products of the reaction and the reactants. If the free energy of the products is less than the free energy of the reactants there will be a driving force for the reaction to occur.
For the reaction
the free energy change,
if the standard states .
We see that the free energy change of a reaction is determined by the relative
quantities of reactants and products.
previous | next | <urn:uuid:be535866-f4a6-4b1a-b8d4-53052aafb4a0> | 4.03125 | 124 | Knowledge Article | Science & Tech. | 54.252529 |
Static source code analysis for bug finding ("static analysis" for short) is the process of detecting bugs via an automated tool that analyzes source code without executing it. The idea goes back at least to Lint, which was invented at Bell Labs in the 1970s, but static analysis has undergone a revolution in effectiveness and usability in the last decade. The initial focus of static analysis tools was on the C and C++ programming languages. Such tools are particularly necessary given C/C++'s notorious flexibility and susceptibility to low-level bugs. More recently, tools have flourished for Java and/or Web applications; these are needed because of the prevalence of easily exploitable network vulnerabilities. When using such tools, it is all too easy to deploy them in a way that looks good superficially, but misses important defects, shows many false positives, and brings the tool into disrepute. This article is a guide to the process of deploying a static analysis tool in a large organization while avoiding the worst organizational and technical pitfalls.
Leading commercial static analysis tools with which I am familiar include Coverity, Fortify (now owned by Hewlett-Packard), and Klocwork. Klocwork and Coverity both initially focused on C/C++, although they came from opposite origins: Klocwork from the telephone equipment company Nortel, and Coverity from Stanford University. Fortify's initial focus was on security for Web applications in languages such as Java and PHP. All three companies are now encroaching on each others' territories, but it remains to be seen how well they will do outside of their core competencies.
An excellent, free, but limited, academic static Java byte code analysis tool is FindBugs. Its lack of an integrated database for defect suppression makes its large-scale use difficult in sizable organizations, but its use by individual developers within the Eclipse development environment can be extremely valuable. Similarly, recent versions of Apple's Xcode and Microsoft's Visual Studio development environments contain integrated static analysis tools for C/C++. These are useful for finding relatively shallow bugs while an individual developer is writing code; their short feedback loop bypasses the difficulties of broader deployment of tools that perform deeper analysis. A longer list of tools is provided in Figure 1 (which is taken from "Magic Quadrant for Static Application Security Testing," by Gartner Inc.), albeit with a strong bias towards security and adherence to Gartner's strategy recommendations.
There is no general-purpose introductory textbook on the subject; the best general introduction is a short article by Dawson Engler, the inventor of Coverity, et al. Two of the leaders at Fortify have written an introductory textbook, but it focuses primarily on their tool and on security, and skimps on key static analysis concepts. There is a rigorous academic textbook on the more general concept of static analysis, but it preceded the revolution in static analysis for bug finding.
Getting Started: The Politics
The first question to ask before deciding to do static analysis in an organization is not what tool to buy, nor even whether you should do static analysis at all. It's "why?"
If your purpose is genuinely to help find bugs and get them fixed, then your organizational and political approach must be different from the more usual, albeit unadmitted, case: producing metrics and procedures that will make management look good. (Fixing bugs should actually be your second goal: An even higher goal is preventing bugs in the first place by making your developers learn from their mistakes. This also contraindicates outsourcing the evaluation and fixing of defects, tempting though that may be.)
Political Issues to Settle in Advance
Get buy-in from your testing/quality assurance department. They must support the project and will have authority over quality-related issues, even if they inconvenience the other stakeholders. Quality has a much smaller constituency than the schedule or the smooth running of internal procedures, but it must be the final arbiter for crucial quality-related decisions (see chapter 22 of Joel On Software for more on this topic).
Give some thought to what part of the organization, if any, should be in charge of running the tool once it's set up. If your organization has a tools team, it might seem the obvious owner, but this does need careful consideration. Static analysis for bug finding is probably not your organization's core competency, and you will need to worry about the Iron Law of Bureaucracy: Your tools team's institutional interest will be in the smooth running of the tool, not in the messy changes necessary for finding bugs. Even if you're reluctant to outsource the rest of the process, administration and configuration may be more flexible if done by external players rather than an internal team with its own interests, habits, and procedures. It may also be more productive to hire an expensive consultant for a few hours, rather than a lesser-paid internal resource full-time; an external resource may be more flexible and less prone to establishing an entrenched bureaucracy.
The conventional wisdom is that getting most developers to use a static analysis tool requires a high-ranking management champion to preach its benefits, ensure that the tool is used, and keep the focus on finding bugs and getting them fixed. The flip side to this is that any attempt to herd cats (or to get programmers to adhere to best practices) will cause a backlash. Your tool must withstand scrutiny from developers looking for excuses to stop using it.
Get buy-in from engineering that they will make time in the schedule to review and fix bugs found, even if they are disinclined to do so, which they will be once they see the first false positive. (Or even the first false false positive; more on this later.) Ensure that it's not the least-effective engineers whose time is allotted for reviewing and fixing static analysis bugs (more on this momentarily). You'll also need agreement from the security team that sales personnel will get access to your real source code; more on that is also to follow.
Smart Programmers Add More Value and Subtract Less
Handling static analysis defects is not something to economize on. Writing code is hard, finding bugs in professional code should be hard, and evaluating possible mistakes in alleged bugs is even harder. Learning to evaluate static analysis defects, even in a developer's own code, requires training and supervision. It is necessary to tread delicately around the polite pretense that the code owner is an infallible authority on the behavior of that code. Misunderstandings about the actual behavior of unsigned integers and assertions are, for instance, regrettably common in my experience. | <urn:uuid:70720199-1647-4c0a-ade9-91bba85c6549> | 2.78125 | 1,341 | Personal Blog | Software Dev. | 27.594825 |
Nevertheless, substantial progress has been achieved in better defining the direct effect of a wider set of different aerosols. The SAR considered the direct effects of only three anthropogenic aerosol species: sulphate aerosols, biomass-burning aerosols, and fossil fuel black carbon (or soot). Observations have now shown the importance of organic materials in both fossil fuel carbon aerosols and biomass-burning carbon aerosols. Since the SAR, the inclusion of estimates for the abundance of fossil fuel organic carbon aerosols has led to an increase in the predicted total optical depth (and consequent negative forcing) associated with industrial aerosols. Advances in observations and in aerosol and radiative models have allowed quantitative estimates of these separate components, as well as an estimate for the range of radiative forcing associated with mineral dust, as shown in Figure 9. Direct radiative forcing is estimated to be -0.4 Wm-2 for sulphate, -0.2 Wm-2 for biomass-burning aerosols, -0.1 Wm-2 for fossil fuel organic carbon, and +0.2 Wm-2 for fossil fuel black carbon aerosols. Uncertainties remain relatively large, however. These arise from difficulties in determining the concentration and radiative characteristics of atmospheric aerosols and the fraction of the aerosols that are of anthropogenic origin, particularly the knowledge of the sources of carbonaceous aerosols. This leads to considerable differences (i.e., factor of two to three range) in the burden and substantial differences in the vertical distribution (factor of ten). Anthropogenic dust aerosol is also poorly quantified. Satellite observations, combined with model calculations, are enabling the identification of the spatial signature of the total aerosol radiative effect in clear skies; however, the quantitative amount is still uncertain.
Estimates of the indirect radiative forcing by anthropogenic aerosols remain problematic, although observational evidence points to a negative aerosol-induced indirect forcing in warm clouds. Two different approaches exist for estimating the indirect effect of aerosols: empirical methods and mechanistic methods. The former have been applied to estimate the effects of industrial aerosols, while the latter have been applied to estimate the effects of sulphate, fossil fuel carbonaceous aerosols, and biomass aerosols. In addition, models for the indirect effect have been used to estimate the effects of the initial change in droplet size and concentrations (a first indirect effect), as well as the effects of the subsequent change in precipitation efficiency (a second indirect effect). The studies represented in Figure 9 provide an expert judgement for the range of the first of these; the range is now slightly wider than in the SAR; the radiative perturbation associated with the second indirect effect is of the same sign and could be of similar magnitude compared to the first effect.
The indirect radiative effect of aerosols is now understood to also encompass effects on ice and mixed-phase clouds, but the magnitude of any such indirect effect is not known, although it is likely to be positive. It is not possible to estimate the number of anthropogenic ice nuclei at the present time. Except at cold temperatures (below -45°C) where homogeneous nucleation is expected to dominate, the mechanisms of ice formation in these clouds are not yet known.
Other reports in this collection | <urn:uuid:a8d2fde4-a91d-4cce-9a95-5d2adf493535> | 3.765625 | 671 | Knowledge Article | Science & Tech. | 20.267189 |
Areopaguristes tudgei. That's the name of a new species of hermit crab recently discovered on the barrier reef off the coast of Belize by Christopher Tudge, a biology professor at American University in Washington, D.C.
Tudge has been interested in biology his whole life, from boyhood trips to the beach collecting crustaceans in his native Australia, to his undergraduate and PhD work in zoology and biology at the University of Queensland. He has collected specimens all over the world, from Australia to Europe to North and South America.
Until now, he has never had a species named after him. He only found out about his namesake after reading an article about it in the journal Zootaxa. Apparently, finding out after-the-fact is standard practice in the highly formalized ritual of naming a new species.
The two crustacean taxonomists and authors of the paper who named the new crab after Tudge, Rafael Lemaitre of the Department of Invertebrate Zoology at the Smithsonian Institution's National Museum of Natural History and Darryl L. Felder of the University of Louisiana-Lafayette's Department of Biology Laboratory for Crustacean Research, have known Tudge since he first came to Washington in 1995 as a postdoc research fellow at the Smithsonian.
Lemaitre and Felder have been collecting specimens on the tiny Belizean island for decades and for more than 10 years, they had asked Tudge—who specializes in the structures of crustacean reproduction and how they relate to the creatures' evolutionary history—to join them on one of their semiannual research outings.
Finally, in February 2010, Tudge joined them on a tiny island covered with hundreds of species of their favorite fauna.
It was crab heaven for a cast of crustacean guys.
"So you can take 40 steps off the island and you're on the edge of the reef, and then the back part of the reef is what they call the lagoon," Tudge recalled. "You slowly walk out into ever-increasing depths of water and it's a mixture of sand and sea grass and bits of coral, and then there's some channels. There's lots of different habitats there. Some islands are covered by mangroves. So we would visit all the different habitats that were there."
"We would collect on the reef crest, go and turn over coral boulders on the reef flat, snorkel over the sea grass beds. We pumped sand and mud to get things out of the ground. We walked into the mangroves and collected crustaceans from under the mangrove roots. We even snorkeled in the channels in the mangrove islands."
But discovering the new species was much less involved: Tudge turned over a coral boulder in an intertidal area, saw 50 or so tiny crabs scrambling around, and stuck a dozen or so specimens in a bottle before going on with his work.
Only later in the lab, under the microscope, was it determined that this isolated little group of hermit crabs might be unique.
As the journal authors write: "Given this cryptic habitat and the relatively minute size of the specimens (shield length range = 1.0-3.0 mm), it is not surprising that these populations have gone unnoticed during extensive sampling programs that have previously taken place along the Barrier Reef of Belize."
Getting the Word
Tudge found out only recently found out that Areopaguristes tudgei—a tiny hermit crab differentiated from others in its genus by such characteristics as the hairs growing on some of its appendages—was joining the list of about 3 million known species.
Lemaitre emailed him a PDF of the finished article. A note said only, "Here's a new species. What do you think?" The note had a smiley emoticon.
That's the way it works, said Tudge's colleague American University's College of Arts and Sciences, biology professor Daniel Fong. There's no warning; one day you just find out. Fong has also had species named after him, and he has discovered new ones as well.
"You go through several emotions when a species has been named after you," Fong said. "It is truly an honor, in the most formal sense of the term, that your colleagues have thought of naming a species after you. It is a very special type of recognition of your contribution to your research field by your colleagues."
Amid their exhaustive taxonomic description, complete with drawings and photographs of Areopaguristes tudgei, the journal article authors explain why they chose its name: "This species is named after our colleague Christopher C. Tudge (American University) who first noticed and collected populations of this diminutive hermit crab living under large dead coral boulders during joint field work in Carrie Bow Cay. The name also acknowledges his unique contributions to knowledge of the reproductive biology of hermit crabs."
Maggie Barrett | Source: EurekAlert!
Further information: www.american.edu
More articles from Life Sciences:
Tokyo Institute of Technology research: An insight into cell survival
17.05.2013 | Tokyo Institute of Technology
Asian lady beetles use biological weapons against their European relatives
17.05.2013 | Max-Planck-Institut für chemische Ökologie
Researchers have shown that, by using global positioning systems (GPS) to measure ground deformation caused by a large underwater earthquake, they can provide accurate warning of the resulting tsunami in just a few minutes after the earthquake onset.
For the devastating Japan 2011 event, the team reveals that the analysis of the GPS data and issue of a detailed tsunami alert would have taken no more than three minutes. The results are published on 17 May in Natural Hazards and Earth System Sciences, an open access journal of ...
A new study of glaciers worldwide using observations from two NASA satellites has helped resolve differences in estimates of how fast glaciers are disappearing and contributing to sea level rise.
The new research found glaciers outside of the Greenland and Antarctic ice sheets, repositories of 1 percent of all land ice, lost an average of 571 trillion pounds (259 trillion kilograms) of mass every year during the six-year study period, making the oceans rise 0.03 inches (0.7 mm) per year. ...
About 99% of the world’s land ice is stored in the huge ice sheets of Antarctica and Greenland, while only 1% is contained in glaciers.
However, the meltwater of glaciers contributed almost as much to the rise in sea level in the period 2003 to 2009 as the two ice sheets: about one third. This is one of the results of an international study with the involvement of geographers from the University of Zurich.
Second sound is a quantum mechanical phenomenon, which has been observed only in superfluid helium.
Physicists from the University of Innsbruck, Austria, in collaboration with colleagues from the University of Trento, Italy, have now proven the propagation of such a temperature wave in a quantum gas. The scientists have published their historic findings in the journal Nature.
Below a critical temperature, certain fluids become superfluid ...
Researchers use synthetic silicate to stimulate stem cells into bone cells
In new research published online May 13, 2013 in Advanced Materials, researchers from Brigham and Women's Hospital (BWH) are the first to report that synthetic silicate nanoplatelets (also known as layered clay) can induce stem cells to become bone cells without the need of additional bone-inducing factors.
Synthetic silicates are made ...
17.05.2013 | Physics and Astronomy
17.05.2013 | Physics and Astronomy
17.05.2013 | Physics and Astronomy
17.05.2013 | Event News
15.05.2013 | Event News
08.05.2013 | Event News | <urn:uuid:2d8eb026-db43-4827-b5bc-4fb8de4c48ff> | 3.15625 | 1,637 | Knowledge Article | Science & Tech. | 48.642414 |
1. Electricity is the flow of electrical power or charge.
2. Electricity travels at the speed of light more than 186,000 miles per second.
3. The electricity we use in our homes comes from a primary source like oil, nuclear power, coal, natural gas, the sun, or other natural sources.
4. Water and wind are other sources of energy, called mechanical energy.
5. We use electricity every day for heat, light and power.
6. Before we began generating electricity, over 100 years ago, fireplaces and pot-belly stoves kept homes warm, kerosene lamps and candles lit homes and food was kept cool in iceboxes or underground storage cellars.
7. Benjamin Franklin was the first person to help people understand the principles of electricity and Thomas Edison changed the world with his invention of the electric light bulb.
8. A spark of static electricity can measure up to three thousand (3,000) volts. A bolt of lightning can measure up to three million (3,000,000) volts and it lasts less than one second!
9. According to the United States Energy Information Administration, electricity consumption will increase by 51 percent from 2002 to 2025.
10. The first power plant owned by Thomas Edison opened in New York City in 1882. | <urn:uuid:7eed3d92-d5f3-4a5a-8115-46f17045e64c> | 3.34375 | 268 | Listicle | Science & Tech. | 59.310714 |
Welcome to "The
those who have had the good fortune to view our planet from space,
they are struck by the overwhelming impression that ours is a blue
planet. Indeed over 70% of our planet is covered by water giving our
home this blue aura. On closer inspection, patches of emerald and
aquamarine become apparent in the larger expanse of deep blue. These
patches are in the shallow waters of the tropics, fringing islands
the edges of continents; or, in turn, encircled by the ring-like islands
that we call atolls.
Coming ever closer to Earth and approaching these
oceanic jewels, a
border of white is perceived which is revealed to be surf crashing
against what appears to be a solid bastion of rock. Leaving our vantage
point from above and diving into the sea, we discover that what we
is solid rock is in fact a living mass - a kaleidoscopic vision of
color, shape, and life that is a coral reef.
Coral reefs are among the most amazing of ecosystems
on our planet.
Although found as solitary forms through 400 million years of geological
history, the fossil record shows that corals evolved into modern
reef-building organisms within the past 25 million years. Over those
millions of years, coral reefs have evolved into the rainforests of
sea –a place of great biological diversity that is home to thousands
species that are found no where else. In fact, coral reefs are the
complex, species-rich, and productive of marine ecosystems.
The scientific study of coral reefs has only begun
within the past 200
years. Charles Darwin, James Dwight Dana, and Louis Agassiz were all
pioneers in the study of coral reefs. Initial studies were concerned
with the mode of formation of coral reefs. Darwin, who sailed with
BEAGLE and Dana who accompanied the United States exploring expedition
each made notable contributions in this arena. Darwin’s model
fringing reef on young high islands progressing to older atolls fringing
coral lagoons and Dana’s independently arrived at concept of
progression from young to old followed by the probability of sunken
flat-topped seamounts are as valid today as they were over 150 years
ago. Louis Agassiz conducted his first studies of coral reefs in 1851
when he was commissioned by the Coast Survey to study the Florida
as related to navigation of the Florida Straits.
coral reefs, both modern and fossil, are studied as indicators of
global change; as multi-faceted ecosytems with a plethora of species
that could provide cures for forms of cancer and other ills afflicting
mankind; and as highly endangered ecosystems that suffer from bleaching
episodes related to warming of the global ocean, massive invasions
of predatory species such as crown of thorns starfish, pollution from
chemicals and sediment laden waters, and destructive fishing practices.
Because of their fragility, coral reefs have been compared to the
proverbial “canary in the mine shaft” for the world ocean.
Let us hope that we are able to preserve the beautiful creatures of
the coral reefs and their wonderful ecosystems for future generations.
the many wonderful photographs found here that will help you learn
more about coral reefs in this album, visit the following sites to
learn more about what is being done to preserve and protect these | <urn:uuid:a108c04f-6ecb-4fe9-b040-ae12dae9cf82> | 3.59375 | 726 | Knowledge Article | Science & Tech. | 33.623556 |
Soil pH is an indication of the alkalinity or acidity of soil.
It is based on the measurement of pH, which is based in turn on the activity of hydrogen ions (H+) in a water or salt solution.
For more information about the topic Soil pH, read the full article at Wikipedia.org, or see the following related articles:
Recommend this page on Facebook, Twitter,
and Google +1:
Other bookmarking and sharing tools: | <urn:uuid:f2a49dae-ccb6-4dbf-b4ec-c9f537cb9d0b> | 2.984375 | 98 | Knowledge Article | Science & Tech. | 44.227078 |
For the Danish physicist Niels Bohr, a founder of quantum theory (and to whom Schrödinger’s regretful comment was directed), the answer was that measurements must be made with a classical apparatus. In what has come to be called the standard, or Copenhagen, interpretation of quantum mechanics, Bohr postulated that macroscopic detectors never achieve any fuzzy superposition, but he did not explain exactly why not. “He wanted to mandate ‘classical’ by hand,” says Wojciech Zurek of Los Alamos National Laboratory. “Measurements simply became.” Bohr also recognized that the boundary between the classical and the quantum can shift depending on how the experiment is arranged. Furthermore, size doesn’t necessarily matter: superpositions can persist on scales much larger than the atomic.
In November 1995 Pritchard and his M.I.T. colleagues crystallized the fuzziness of measurement. The team sent a narrow stream of sodium atoms through an interferometer, a device that gives a particle two paths to travel. The paths recombined, and each atom, acting as a wave, “interfered” with itself, producing a pattern of light and dark fringes on an observing screen (identical to what is seen when a laser shines through two slits). The standard formulation of quantum mechanics states that the atom took both paths simultaneously, so that the atom’s entire movement from source to screen was a superposition of an atom moving through two paths.
The team then directed a laser at one of the paths. This process destroyed the interference fringes, because a laser photon scattering off the atom would indicate which path the atom took. (Quantum rules forbid “which-way” information and interference from coexisting.)
On the surface, this scattering would seem to constitute a measurement that destroys the coherence. Yet the team showed that the coherence could be “recovered”— that is, the interference pattern restored—by changing the separation between the paths to some quarter multiple of the laser photon’s wavelength. At those fractions, it was not possible to tell from which path the photon scattered. “Coherence is not really lost,” Pritchard elucidates. “The atom became entangled with a larger system.” That is, the quantum state of the atom became coupled with the measuring device, which in this case was the photon.
Like many previous experiments, Pritchard’s work, which is a realization of a proposal made by the late Richard Feynman many years ago, deepens the mysteries underlying quantum physics rather than resolving them. It demonstrates that the measuring apparatus can have an ambiguous definition. In the case of Schrödinger’s cat, then, is the measurement the lifting of the lid? Or when light reaches the eye and is processed by the mind? Or a discharge of static from the cat’s fur?
A recent spate of Schrödinger’s cat experiments have begun to address these questions. Not all physicists concur that they are looking at bona fide quantum cats—“kitten” is the term often used, depending on the desired level of cuteness. In any event, the attempts do indicate that the quantum-classical changeover— sometimes called the collapse of the wave function or the state-vector reduction— has finally begun to move out of the realm of thought experiments and into real-world study.
Here, Kitty, Kitty
In 1991 Carlos Stroud and John Yeazell of the University of Rochester were experimenting with what are called Rydberg atoms, after the Swedish spectroscopist Johannes Rydberg, discoverer of the binding-energy relation between an electron and a nucleus. Ordinarily, electrons orbit the nucleus at a distance of less than a nanometer; in Rydberg atoms the outer electron’s orbit has swollen several 1,000-fold. This bloating can be accomplished with brief bursts of laser light, which effectively put the electron in many outer orbitals simultaneously. Physically, the superposition of energy levels manifests itself as a “wave packet” that circles the nucleus at an atomically huge distance of about half a micron. The packet represents the probability of the excited electron’s location. | <urn:uuid:7bcb78c8-b323-478d-97bf-fefae08fd762> | 3.625 | 898 | Knowledge Article | Science & Tech. | 37.337727 |
Provide your input on the draft USGS Global Change Science Strategy by April 8, 2011.
Crews respond to spring flooding in the Midwest and Northern Plains.
The USGS is ready to address some of society's most critical issues for years to come.
Join citizens and scientists in tracking The Pulse of Our Planet!
On March 3, the U.S. Geological Survey turned 134. Established by Congress in 1879 and built on a legacy of impartial science, the bureau faces unusual challenges in the near term.
The extent and distribution of the world's ice, primarily in the form of glaciers, provide insight into changes in the Earth's climate and in sea level.
The Nation's next Earth-observing satellite was successfully launched on February 11. Once it is mission-certified in orbit, the satellite will become Landsat 8.
The recent past sheds light on preserving the future of economically and ecologically important native trout populations across the West.
Washington, D.C., is a unique city full of landmarks and buildings that are recognizable worldwide. But how were these stone giants built?
Watch USGS scientists in the Arctic track Pacific walruses to examine how these animals are faring in a world with less sea ice.
The world's longest-running Earth-observing satellite program.
Dust storms on July 21-22 blinded motorists, grounded flights, and knocked out electricity. What's causing the dust storms?
The majority of the nation is facing dry conditions; in most areas drought conditions are expected to persist or intensify.
A contest to celebrate 40 years of Landsat.
Please comment on the USGS' draft science strategies!
Timing is everything! Consider helping track changes in spring’s arrival
Need a historical map for your genealogy research? You are in luck. We’ve got what you need! Download and view USGS historical maps from the comfort of your own home.
Flood Safety Awareness Week is March. 12-16. What can you do to prepare?
National Groundwater Awareness Week is Mar. 11-17, 2012. See how USGS science is connecting groundwater and surface water.
Since Japan’s March 11, 2011, Tohoku earthquake and subsequent tsunami, scientists at the USGS have learned much to help better prepare for a large earthquake in the United States.
Five USGS employees honored with Distinguished Service Awards for their service to the nation
The USGS and UNESCO have produced a book that gives us a new way to look at our shared global heritage.
Groundwater in aquifers on the East Coast and in the Central U.S. has the highest risk of contamination from radium, a naturally occurring radioactive element and known carcinogen.
The proposed USGS budget reflects research priorities to respond to nationally relevant issues, including water quantity and quality, ecosystem restoration, hydraulic fracturing, natural disasters such as floods and earthquakes, and support for the National Ocean Policy, and has a large R&D component.
Caribou expert Layne Adams discusses the lives of reindeer — apart from their famous role on Christmas Eve. How they survive the cold.
Climate science is helping to predict food shortages, identify impacts on human health, and prepare for future conditions.
As demand grows, Landsat data can help us track trends in key resources. Remote-sensing satellites help scientists to observe our world, monitor changes, and detect critical trends in forestry, water, crops, and urban landscapes. Learn more.
A new study provides crucial information for difficult decisions regarding conservation, economic interests, and food and water security. Projected changes for 2010-2099
It's only the beginning of their careers, but these 3 young scientists have forged ahead with innovative research at the frontiers of science. How they've transformed their fields
Oct. 9-15, 2011, is Earth Science Week, themed "Our-Ever Changing Earth," and Oct. 12, 2011, is International Day for Natural Disaster Reduction. Answers to questions posed by a changing world
By 1936, devastating losses of wildlife populations were threatening the Nation’s natural resource heritage. America's first wildlife research center
A dust storm on Tuesday, October 4, blinded motorists and caused a large string of motor vehicle crashes, multiple injuries, and at least one death. What’s causing the dust storms?
USGS scientists study walruses off the northwestern Alaska coast in August as part of their ongoing study of how the Pacific walrus are responding to reduced sea ice conditions in late summer and fall.
USGS scientists are collecting water samples and other data to determine trends in ocean acidification from the least explored ocean in the world.
In support of the Famine Early Warning Systems Network, USGS scientists use satellite remote sensing to assess agricultural conditions that foretell famine.
New USGS research shows that rice could become adapted to climate change and some catastrophic events by colonizing its seeds or plants with the spores of tiny naturally occurring fungi. The DNA of the rice plant itself is not changed; instead, researchers are re-creating what normally happens in nature.
Now that field work has wrapped up at the Ice Age "Snowmastodon" fossil site near Snowmass Village, Colo., USGS and other scientists will begin work on unraveling the climate and environmental history of the area.
USGS scientists are studying the Earth’s conditions 3 million years ago to gain insight into the impacts of future climate. Join us Aug. 3 in Reston, Va., to learn how this information is used to better understand the magnitude of changes forecast for the end of this century.
USGS crews continue to measure streamflow and collect water quality and sediment samples in the Ohio and Mississippi River basins using state-of-art instruments.
Over the past four decades, about 14% of the ice and permanent snow of Washington's Mount Rainier has melted due to combined recent warming and reduced precipitation.
USGS science supports management, conservation, and restoration of imperiled, at-risk, and endangered species.
In a unique application of data, this year's report provides the nation's first assessment of birds on public lands and waters.
The USGS, NASA, and other organizations and Federal agencies are studying how climate change affects wildlife and ecosystems.
Using coral growth records and measurements of changing ocean chemistry from increased atmospheric CO2, USGS scientists are providing a foundation for predicting future impacts of ocean acidification and sea-level rise to coral reefs.
Increased dust storm activity may result from enhanced aridity in the Southwest, according to a USGS study.
Provide your input on the draft USGS Global Change Science Strategy by April 8, 2011.
Sea-ice habitats essential to polar bears would likely respond positively should more curbs be placed on global greenhouse gas emissions, according to a new modeling study published today in the journal, Nature.
Landscape photos taken in the same place but many years apart reveal dramatic changes due to human and natural factors. The USGS Desert Laboratory Repeat Photography Collection, the largest archive of its kind in the world, is 50 years old.
Decreasing pH and warming temperatures are changing ocean conditions and affecting coral and algal growth in South Florida. USGS scientists are conducting field measurements to learn more.
Many coastal wetlands worldwide including several on the U.S. Atlantic coast may be more sensitive than previously thought to climate change and sea-level rise in the this century.
USGS findings support recent predictions that climate change will stress ecosystems at lower elevations more than higher elevations. This information may guide future conservation efforts in helping decision makers develop regional landscape predictions about biological responses to climate changes.
The Earth as Art 3 collection, the latest set of Landsat satellite images selected for their artistic quality, reveals an intricate beauty in Earth’s natural patterns.
USGS scientists are investigating sea turtles and their habitats in Dry Tortugas National Park to provide insight that will be used as decision-support tools for managing coral ecosystems.
Looking for information on natural resources, natural hazards, geospatial data, and more? The USGS Education site provides great resources, including lessons, data, maps, and more, to support teaching, learning, K-12 education, and university-level inquiry and research.
The timing of animal migration and reproduction, and observing when plants send out new leaves and bear fruit, is increasingly important in understanding how climate change affects biological and hydrologic systems. Photo credit Copyright C Brandon Cole.
The United States Group on Earth Observations (USGEO) is working to connect Earth observations with public health, agriculture, climate, and data management and dissemination.
USGS studies the relationships among earth surface processes, ecological systems, understanding current changes in the context of prehistoric and recent earth processes, distinguishing between natural and human-influenced changes, and recognizing ecological and physical responses to changes in climate.
The USGS Science Strategy is a comprehensive report to critically examine the USGS's major science goals and priorities for the coming decade. The USGS is moving forward with these strategic science directions in response to the challenges that our Nation's future faces and for the stewards of our Federal lands.
7 p.m.—Public lecture (also live-streamed over the Internet)
USGS-led survey finds that national wildlife refuges rate highly with visitors.
As the climate has warmed, many plants are starting to grow leaves and bloom flowers earlier. A new study published in the journal, Nature, suggests that most field experiments may underestimate the degree to which the timing of leafing and flowering changes with global warming.
Stressed agricultural lands may be releasing less of the moisture needed to protect the breadbasket of a continent.
Spring rains in the eastern Horn of Africa are projected to begin late this year and be substantially lower than normal.
In recognition of World Forestry Day, let’s take a glimpse at USGS science to understand the fate of forests from climate change.
A new study concludes that fossil fuel emissions are likely contributors to a substantial amount of organic carbon found on glaciers in Alaska. Fossil fuel emissions, which contain organic carbon, can speed up the rate of glacier melt when deposited on glacier surfaces. In addition, the organic molecules associated with these deposits can be transportedContinue Reading
The U.S. Geological Survey had a very busy 2011 — below are a few of our highlights from last year.
Despite news articles warning of large-scale releases of methane due to climate change, recent research indicates that most of the world’s gas hydrate deposits should remain stable for the next few thousand years.
Join us on February 1 to view the Earth from space, and discuss the profound impact Landsat has on many facets of our economy, safety, and environment.
Scientists have discovered an outbreak of coral disease called Montipora White Syndrome in Kāneohe Bay, Oahu. The affected coral are of the species Montipora capitata, also known as rice coral.
USGS scientists will join thousands of scientists, managers, and decision makers in Boston this week to present new findings on toxics at the Society for Environmental Toxicology and Chemistry (SETAC) conference in the Hynes Convention Center, Nov. 13-17.
On Nov. 3, USGS scientists Patrick Barnard and William Ellsworth will present a public lecture in Menlo Park, CA, providing Bay Area residents information about USGS research in the San Francisco Bay Area, including recent discoveries beneath San Francisco Bay and ongoing studies to better understand earthquake probabilities and the potential hazards associated with strong ground shaking.
Rivers and streams in the United States are releasing substantially more carbon dioxide into the atmosphere than previously thought.
Climate Change Impacts to Tribal Communities The USGS is working with Native American communities and organizations to understand climate change impacts to their land and neighborhoods. Projects include interviews with indigenous Alaskans to understand their personal observations of climate change, as well as studying how climate change is impacting sand dunes and posing risksContinue Reading
As climate changes, it affects the timing of when leaves emerge, the amount of foliage that grows as well as the timeframe when leaves begin to fall.
How will accelerated glacial melting over the next 50 years as a result of climate change affect the unique Gulf of Alaska and Copper River coastal ecosystems? USGS scientists are studying these processes and impacts.
USGS scientists are assessing the potential to remove CO2 from the atmosphere for storage in other Earth systems through a process called carbon sequestration.
Receive news and updates: | <urn:uuid:5ee2d4a4-efe2-4a65-a113-d5b5285908a2> | 3.015625 | 2,603 | Content Listing | Science & Tech. | 37.928749 |
Observing Basics: Distances in Space
Light's constant speed helps astronomers and observers better understand the vastness of our universe.
|Light travels at a constant speed of 299,702,458 meters per second (186,282 miles per second). That's equal to more than 670 million mph (1.08 billion km/hr). So the distance light travels in a year is known as the light-year, and it's equivalent to 5.88 trillion miles (9.46 trillion km).|
Expand your observing at Astronomy.com
Check out Astronomy.com's interactive StarDome to see an accurate of your sky. This tool will help you locate this week's targets.
Intro to the Sky: Get to know the night sky
Learn how to use star charts, find constellations, and observe the brightest objects in your night sky with in this handy reference section.
The Sky this Week
Get a daily digest of celestial events coming soon to a sky near you.
After you listen to the podcast and try to find the objects, be sure to share your observing experience with us by leaving a comment at the blog or in the Reader Forums.
Other Observing Basics videos | <urn:uuid:49fc90d1-8ce4-41a4-8133-e6046dc704c1> | 3.25 | 249 | Tutorial | Science & Tech. | 74.456053 |
It’s been an interesting year for climate scientist Judith Curry, who after Climategate split with most of her peers and called for reform in the climate science community. She did this most publicly via a letter published by Climate Audit, a noted skeptic web site. Curry called for more transparency in climate research.
But it’s difficult to paint Curry as a skeptic, especially when considering her scholarly work, the most recent of which was published this week in Proceedings of the National Academy of Sciences (see abstract). It delves into the question of why the Antarctic sea ice hasn’t been melting, and builds upon the consensus climate view.
Given the new paper I thought it a good time to speak with Curry about the last year. Here’s a transcript of our discussion.
We’ve seen rapid melting in the Arctic, but not in the Antarctic. Is this something that has concerned climate scientists?
It’s sort of a paradox. The paradox of why the Antarctic isn’t melting and the Arctic is has gotten a lot of attention, and it’s become one of the skeptics’ arguments. The climate models have generally matched the observations, so scientists have said that’s what the climate models predict, and people haven’t been too bothered by it. But trying to understand exactly what has been going on has not been intuitive. It’s not like there’s been a big debate in the climate community, or a lot of worry about this, because observations have agreed with the models. But that didn’t really explain anything. So in this paper we’ve tried to dig in and find out what really has been going on.
What did you find?
The answer is tied up in a combination of natural variability and global warming. But the most important part of the story is it’s not so much the direct heating from above, but how the precipitation modulates the heating both from below and above. The explanation we’ve found doesn’t translate into a simple sound bite.
So give me a non-sound bite answer.
Sea ice can melt from both above and below, either heating from the ocean below or the atmosphere above. In the case of the Arctic most of the melting is driven from the warmer atmosphere above. In the Antarctic most of the melting has been driven from the ocean below. What our study has identified is that there’s been increased precipitation over the last few decades that has freshened the upper ocean, which makes it more stable so the heat below doesn’t make it up to the sea ice to melt it.
Freshens the upper ocean?
It decreases the saltiness. When you have a fresh layer on top that’s less dense it acts as a barrier to prevent the mixing of warmer water from below. It insulates the ice to some extent. We’ve also seen a big role of natural variability, over the past 30 years or so the dominant climate signal has been from the Antarctic Oscillation rather than from global warming. The net effect of all this has been an increase in precipitation, mostly snow. This diminishes the melting both from below and above. It stops the melting from above because snow has a higher albedo and reflects more sunlight.
At some point does this result in a net loss of ice rather than gains?
What happens in the 21st century projections is that the global warming signal begins to dominate. We still have the freshening of the upper ocean, but the upper ocean is getting warmer because of a warmer atmosphere. And the precipitation starts to fall more as rain than snow. Rain falling on ice speeds the melting from above.
Three different models show Antarctic ice beginning to recede around 2060.
Aside from the new paper, you’ve certainly had an interesting year as a climate scientist.
Oh my gosh, I stepped into it with that little essay I put out. I figured, in for a penny in for a pound.
You have been among the most outspoken scientists in the wake of the Climategate e-mails. Most sought to downplay their significance. You took a position that this was a teachable moment for climate science. Has you gotten any traction on this?
The thing that’s getting traction, the most important thing, is the need for transparency, to get the methods out there and the data out there. There’s a real public demand for accountability on this subject and it’s just plain good science. With the World Wide Web it’s just easy to do. The whole transparency thing, everyone agrees on that, and it’s slowly happening. The other thing I’m seeing is that two of the professional societies, the American Meteorological Association and the American Geophysical Union, are talking about ensuring that skeptical papers get through if they’re of the right quality. Some people were getting their papers rejected because they disagreed with the IPCC. That’s not the way it’s supposed to work. Papers were getting rejected for the wrong reason. It’s good that professional societies are taking this seriously. Those are some good things that have happened in the science community.
What about on the policy side?
On the policy side of it everything seems to have fallen apart. A year ago it seems like we were on track for something to happen, but everything’s fallen apart for a whole host of reasons. It’s not like Climategate caused all that. There were a whole bunch of political and economic issues, like the developed world versus developing countries. Frankly I think this is a good thing that it’s fallen apart in the short term so everyone can sit back and reflect a little bit more on what we should be doing — to try and really understand where our common interests lie and maybe get away from the UN Model and understand the unintended consequences of some of the policies people are talking about. There’s some no-brainer things that people can be doing, and I hope some of this can get started. But in terms of these big, huge far-reaching policies … the work that needs to be done is really in the economic and political arena to figure out what actually makes sense to do.
Solutions that make sense to a broad range of interests?
Exactly. Otherwise it’s just not going to happen.
Have the positions you’ve taken affected your standing in the climate science community?
I have no idea. I haven’t had any obvious ostracism. A few people who were directly involved in Climategate e-mailed me and weren’t particularly happy about it. But I’ve gotten encouragement from other people, and other people don’t seem to be particularly aware of it. I’ve gotten a fair amount of positive feedback but there’s probably a lot going on out there among people who don’t directly communicate with me. I think it’s fair to say it’s pretty unpopular in certain circles.
Oh yes. Those guys are directly involved in Climategate so that’s not a huge surprise. (note: Joe Romm, of Climate Progress, was not directly involved in Climategate as his private e-mails were not published. Gavin Schmidt, of RealClimate, points out that he was the victim of a crime and not guilty of anything.)
Do you think those kinds of sites are helpful in trying to build public confidence in climate scientists?
That’s a tough one. Real Climate, I think they’ve damaged their brand. They started out doing something that people liked, but they’ve been too partisan in a scientific way. Their moderation hasn’t been good. There was a lot of rudeness toward me on one thread that was actually encouraged by the moderators. I don’t think that has served them well.
Why have you been so conversant with some of the so-called skeptical sites, sites that are certainly outside mainstream climate science?
One of the other positives that I think has come out of Climategate is a realization of what other bloggers like (Steve) McIntyre (of Climate Audit) are actually up to. This isn’t a Merchants of Doubt, oil-company-funded effort. It’s a grassroots effort. These are people who are interested, they want to see accountability. They have a certain amount of expertise and they want to play around with climate data. There’s no particularly evil motives behind all this.
We really don’t understand the potential or impact the blogosphere is having. I think it’s big and growing. The sites that are growing in popularity are Watts Up With That, which really have huge traffic. I think there’s a real interest in the subject. I think there’s a hunger for information. I think there’s a huge potential here for public education. People say it’s polarizing, and sure, you have Climate Progress and Climate Depot on the two extremes, but in the middle you’ve got all these lukewarmer blogs springing up. So I can also see a depolarizing effect. There seems to be a lot more stuff building up in the middle right now. With the IPCC, and the expectation that scientists hew to the party line, it was getting pretty evangelical. When I speak up about maybe there’s more uncertainty, some people regard that as heresy. That’s not a good thing for either science or policy. We’ve got to lose that. | <urn:uuid:31df5a93-125b-49e6-ab04-29a5266450f7> | 3.03125 | 1,999 | Audio Transcript | Science & Tech. | 57.235847 |
Comparative Biochemistry and Physiology - Part A: Molecular & Integrative Physiology
1. 1. Seasonal differences in metabolic and water loss rates were examined in three related species of grasshoppers collected from shrub-steppe communities in Utah: Arphia conspersa, A. pseudonietana and
2. 2 cohorts of Trimerotropis pallidipennis. 2. No significant differences (P = 0.05) in metabolic rates were observed between seasons (early vs late), between genera (Arphia vs Trimerotropis) nor among species.
3. 3. Early season (spring) grasshoppers had a higher (but non-significant) mean water loss rate (±X ± SD in mg.g−1-hr−1) (4.81 ± 1.53) than late season (summer) grasshoppers (4.43 ± 1.43).
4. 4. Among species, early season A. conspersa had a significantly higher water loss rate (5.22 ± 1.76) under similar conditions than late season A. pseudonietana (3.67 ± 1.22), but early season T. pallidipennis had a significantly lower water loss rate (4.40 ± 1.17) than the late season generation (5.32 ± 1.12).
5. 5. Because of variables that were not or could not be controlled, the relationship between these physiological traits and season was difficult to address.
Forlow, L. and MacMahon, J. (1988). Seasonal comparison of metabolic and water loss rates of three species of grasshoppers. Comparative Biochemistry and Physiology - Part A: Molecular & Integrative Physiology, 89(1): 51-60. | <urn:uuid:b5e3d614-f632-4849-ba40-caca0960f1ed> | 2.734375 | 370 | Academic Writing | Science & Tech. | 59.830502 |
Debris flow extending down the southwest wall of Janssen K crater (a highlands crater about 16 km in diameter). Image width is 570 meters (NASA/GSFC/Arizona State University).
Two separate debris flows extend down the western interior wall of the the impact crater Janssen K (46.1° S, 42.3° E). A portion of one flow is observed in the upper left of the image, and a larger one extends from the bottom center. The bumpy terrain between the two flows is the underlying crater wall. These flows extend from the upper reaches of the inner crater wall (having traveled across ~4.5 km of the inner crater wall) and are composed of loose debris. The debris lobes and the loose rocks cover a sheet of impact melt that fills the crater floor. Did these debris flows form as part of the impact process, or did they form later seismic processes due to a nearby impact?
Browse the whole NAC Image and see if you can find clues to help determine when these flows formed! | <urn:uuid:36021bb5-85c8-4444-9825-766ae8960516> | 3.375 | 214 | Knowledge Article | Science & Tech. | 68.893788 |
Update: Criss-Cross Caterpillars
The appearance of Xs and Ys in the striping pattern of monarch caterpillars occurs due to a misalignment of segments. These are referred to as teratogens or as teratogenic forms. Most of these forms are developmental in origin but a few may be genetic. We usually spot these in the third instar but it may be that the misalignment just becomes more apparent as the caterpillar gets larger. Most of these do not survive to the adult stage; however, several years ago one of these "criss-cross caterpillars" appeared in our culture, emerged as a female monarch, mated, and laid eggs. The mysterious "X" appeared on only one of her offspring. Photos by Monarch Watch (2002).
See photos on this website:http://www.facebook.com/media/set/?set= ... 114&type=1 | <urn:uuid:36c4d46c-0cea-4d96-99c3-a8fc40307f6f> | 3.5625 | 189 | Comment Section | Science & Tech. | 55.294615 |
For Lottie Williams, the cosmic lottery hit a jackpot on a cool Tulsa, Oklahoma, morning in 1997. Soon after witnessing a satellite breaking up in the atmosphere during her early morning walk, Williams entered the history books as the first, and only, person on record to be hit by falling space junk. The feather-light piece of scalded shielding from what is believed to be a Delta rocket booster lightly tapped her shoulder with no ill effect.
Nearly all the clutter of space travel now hangs above us at distances as far as 22,500 miles (36,200 kilometers) and as near as 150 miles (240 kilometers) from Earth, and some of it has long outlived its usefulness. The oldest known object is the 1958 Vanguard I research satellite, which ceased all functions in 1964 and has since orbited the globe nearly 194,000 times. Orbital debris, the technical term for nonfunctional and human-made space junk, includes not only whole, abandoned satellites, but also pieces of broken satellites, deployed rocket bodies, human waste, and other random objects, like the glove lost by astronaut Ed White during his historic 1965 spacewalk. The newest jettisoned junk is a fridge-size ammonia reservoir released into its own orbit on July 23, following a NASA decision that no other disposal options were feasible.
The United States Space Surveillance Network catalogs about 12,000 pieces of debris larger than about four inches (ten centimeters) in diameter, and tracks their speedy march around the globe to help protect larger satellites, shuttles, and the International Space Station. (Small, undetectable pieces number in the millions, and pose a potentially lethal and largely unpredictable threat to human operations in space.) So, how often does orbital debris make its way back home? NASA estimates that, on average, one piece of cataloged debris reenters the lower atmosphere and falls to Earth every day.—Gabrielle E. Montanez | <urn:uuid:c8178f37-b117-480c-9ee4-8ad07d714f8d> | 3.6875 | 394 | Nonfiction Writing | Science & Tech. | 33.930717 |
Getting a solid view of lightning
Space Science News home
Getting a solid view of lightning New Mexico team develops system to
depict lightning in three dimensions
One of a series of stories covering the quadrennial International Conference on Atmospheric Electricity, June 7-11, 1999, in Guntersville, Ala.
For most of us observing lightning - from a safe distance or inside a building - a lightning bolt is a flat, two-dimensional creature painted against a distant backdrop. But as researchers have shown over the last three decades, lightning has a complex shape that may let scientist pry a few secrets from a storm - including where a tornado might form.
Left: Just a bolt straight down? Is it pointing towards the viewer, or away? And what about the big ones that get away, the cloud-to-cloud flashes that are not seen on the ground? Credit: NOAA
A number of techniques have been developed over the years to look at the 3-D structure," said William Rison of the New Mexico Institute of Mining and Technology in Socorro, N.M. Rison works with Paul Krehbiel who talks today about "3-D Lightning Mapping and Observations" at the International Atmospheric Electricity Conference in Guntersville.
December 3: Mars Polar Lander nears touchdown
December 2: What next, Leonids?
November 30: Polar Lander Mission Overview
November 30: Learning how to make a clean sweep in space
If several radio receivers are set up to record the radio pulse with precision timing, then the location of the pulse origin can be backtracked with a little math. It's similar to the principle used in navigating by satellite where several satellite broadcast the same timing signal and are heard, at different times (depending on distance) by a receiver.
Right: A typical radio waveform for a lightning bolt. The large peak is what scientists want to record. The lower peaks are noise, some caused by lightning from distant storms. Links to 556x413-pixel, 9KB GIF. Credit: New Mexico Tech.
The time of arrival method was pioneered in the 1960s by David Proctor of South Africa using VHF radio receivers. His work was laborious, with the data being collated and analyzed by hand.
In the 1970s and '80s, the technique was expanded by Carl Lennon and Launa Maier at NASA's Kennedy Space Center who developed a real-time system linked by microwave relays. This lets meteorologists tell launch directors if an electrical storm is too close for a safe launch. It's also optimized for Florida thunderstorms that are different in structure from what is seen in the central United States, and the microwave relays are bulky and difficult to move for studies in different areas.
Above: Negative leaders - lightning moving upward from negatively to positively charged regions in a cloud - are readily detected by radio techniques. Positive leaders are rarely observed. What sometimes appears to be a positive leader is actually an inverted cloud where the negative charges are at the top and positive charges are below. A negative step leader in a cloud-to-ground strike is not impulsive but continuous, making it difficult to detect by time-of-arrival methods. Credit: NASA/Marshall
Sign up for our EXPRESS SCIENCE NEWS delivery
Rison said the system comprises 10 automated ground stations, each with a radio receiver, a precision clock developed for satellite navigation receivers, a signal processor to extract and time-tag just the lightning pulse, and tape recorder. In two field campaigns, the network was set up northwest of Oklahoma City in June 1998 and outside Socorro in August 1998. It is scheduled for another campaign in New Mexico in the summer of 1999 and the Nebraska-Kansas-Colorado area the summer of 2000.
Left: Red dots mark where the Lightning Mapping System receivers were positioned outside Oklahoma City for field tests. Norman, Okla., home of the National Sever Storms Laboratory, is in the lower right corner of this map. Links to 800x600-pixel, 24KB GIF. Credit: New Mexico Tech.
After a storm, the data tapes are collected and sent to Krehbiel's team. They then produce 3-D plots showing a lightning bolt's trip through a storm.
"This gives you, with a few qualifications, the full flash," Rison said. The principal qualification is that the system is most sensitive to negative streamers, discharges in which a stream of electrons burrows its way through the air to a positively charged area.
Time of arrival techniques locate impulsive radio emissions. This happens most often with negative leaders propagating into negative charge regions. It does not locate positive leaders into negative charge regions because the positive leaders do not radiate strongly. It often does not locate negative stripped leaders of cloud-to-ground lightning. These radiate strongly but are continuous, not impulsive.
As the lightning streaks across the sky, it ionizes new points in the air. In effect, the transmitter is moving through the sky, sending a new signal from each point. Each is recorded and, when the storm is reconstructed in a computer, becomes an individual point in a three-dimensional grid. The points are color coded to help the human eye follow a bolt's path across the sky. The positional accuracy is best when the storm is close to the array of receivers, but is still good out to ranges of 250 km (155 mi).
Voltage (June 18, 1999) Scientists discuss biology, safety,
and statistics of lightning strikes.|
News shorts from Atmospheric Electricity Conference (June 16, 1999) Poster papers on hurricanes and tornadoes summarized.
Soaking in atmospheric electricity (June 15, 1999) 'Fair weather' measurements important to understanding thunderstorms.
Lightning position in storm may circle strongest updrafts (June 11, 1999) New finding could help in predicting hail, tornadoes
Lightning follows the Sun (June 10, 1999) Space imaging team discovers unexpected preferences
Spirits of another sort (June 10, 1999) Thunderstorms generate elusive and mysterious sprites.
Getting a solid view of lightning (June 9, 1999): New Mexico team develops system to depict lightning in three dimensions.
Learning how to diagnose bad flying weather (June 8, 1999): Scientists discuss what they know about lightning's effects on spacecraft and aircraft.
Three bolts from the blue (June 8, 1999): Fundamental questions about atmospheric electricity posed at conference this week.
Lightning Leaders Converge in Alabama (May 24, 1999): Preview of the 11th International Conference on Atmospheric Electricity.
What Comes Out of the Top of a Thunderstorm? (May 26, 1999): Gamma-rays (sometimes).
Lightning research at NASA/Marshall and the Global Hydrology and Climate Center.
The results are impressive and highly promising, Rison said.
"The initial discharge goes up," he said, describing one bolt recorded during a July 11, 1998, storm. The negative charge was at about 6 km altitude, and the positive charge was at 9 to 10 km. When discharge occurred when electric potential between the two regions became great enough to ionize a channel through air and become as conductive as a metal wire.
"Once that channel is established, then the discharge continued in the horizontal direction," he said. The upper charge center expanded east-west, and the lower center expanded north-south. Like snowflakes, each bolt is unique. "It depends on the structure of the cloud."
Right: The black patch in this plot of lightning indicates that path of a tornado that formed in Oklahoma on June 13, 1998. The Lightning Mapping System did not see the tornado itself, but shows lightning snaking around the strong convective core that spawned the twister. Links to 570x720-pixel, 40KB GIF. Credit: New Mexico Tech.
"There was no time without lightning," Rison said. "The data are very amorphous. It just fills the plot and appears to be continuous. There's no gap in between. It's a real challenge to sort out. You can see tendrils, but not the start of one discharge and the start of another. That hasn't been seen before."
On the other hand, the Lightning Mapping System has recorded storms with apparent dead zones, where almost no lightning appears. But these are not hurricane-like eyes of the storm.
"That's a region of a very strong updraft moving at about 100 to 160 km/h (60-100 mph), vertically," Rison said. Storm watchers observed "a very high turret penetrating to the stratosphere." A little lightning does appear at extreme altitudes.
"We see clear evidence of an updraft," he continued. "We see continued discharging that has to be explained somehow." In this case, no tornado appeared. But Rison and his team have seen lightning dead spots where tornadoes did appear.
Left: Another portion of the June 13, 1998, storm shows an "eye" that isn't - it's a strongly convective core where only sparse lightning (blue circle) occurs, and that's at the top of the core, as seen in the cross-section at right. Links to 570x720-pixel, 64KB GIF Credit: New Mexico Tech.
Radar is the tool that is needed now to complement studies with the Lightning Mapping System. Observations with two or more Doppler radar units would provide three-dimensional data on wind speed and direction.
New Mexico Tech is talking with the Global Hydrology and Climate Center in Huntsville about establishing a system in Huntsville, and with the Federal Aviation Administration, National Severe Storms Laboratory, and Global Atmospheric Inc., about setting up a prototype unit at the Dallas-Forth Worth Airport.
45th Weather Squadron at Patrick AFB,
lightning reference page.
National Severe Storms Laboratory, Norman, OK
Numerical Modeling at NSSL
The New Mexico Tech 3D Lightning Mapping System
Lightning Detection and Ranging project at Kennedy Space Center.
National Severe Storms Laboratory Photo Library, where we got a lot of the neat pictures in these stories.
Join our growing list of subscribers - sign up for our express news delivery and you will receive a mail message every time we post a new story!!!
For more information, please contact:|
Dr. John M. Horackack , Director of Science Communications
Curator: Linda Porter
NASA Official: Ron Koczor | <urn:uuid:e237f96c-19d3-4ae2-878f-5c61b4c00762> | 3.671875 | 2,164 | Content Listing | Science & Tech. | 48.293886 |
The embryo of a guppy fish at 40-times magnification.
Image by Shmuel Silberman.
THIS WEEK’S QUESTION!
Every Sunday, a question will be asked about one of the images from this past week. Be the first to answer correctly, and your blog will be promoted on Monday’s image post and Biocanvas’s main site!
Some flowering plants have evolved ways to inhibit self-fertilization in order to increase the genetic variability of the population.
What is one specific mechanism employed by plants to prevent a flower from using its own pollen to fertilize itself?
Despite being so far apart in evolutionary terms, dolphins and crickets may both use a sound-transmitting lipid to hear things.
You thought Charizard was scary, wait until you meet MRSA…
If you find yourself some place really cold this holiday season, may I suggest stepping outside and having some fun freezing soap bubbles? The crystal growth is quite lovely, as seen in this photograph. If you live in warmer climes, fear not, you can always experiment in your freezer. It would be particularly fun, I think, to see how a half-bubble sitting on a cold plate freezes in comparison to a droplet like this one. (Video credit: Mount Washington Observatory)
For the right flow speeds and incidence angles, a jet of Newtonian fluid can bounce off the surface of a bath of the same fluid. This is shown in the photo above with a laser incorporated in the jet to show its integrity throughout the bounce. The walls of the jet direct the laser much the way an optical fiber does. The jet stays separated from the bath by a thin layer of air, which is constantly replenished by the air being entrained by the flowing jet. The rebound is a result of the surface tension of the bath providing force for the bounce. (Photo credit: T. Lockhart et al.)
Read em and weep anti-space-funding promoters.
Nothing quite compares to the beauty of fluid dynamics on astronomical scales. What you see here are raw photographs of recent storms at Saturn’s north pole. The recent change in Saturnian seasons has afforded Cassini a sunlit view of the northern pole, which had previously lain in darkness. A roiling vortex filled with clouds being twisted and sheared was revealed near the center of its famed polar hexagon. (Photo credit: NASA/JPL-Caltech/Space Science Institute; submitted by J. Shoer)
A 2-year-old girl named Lilly sits on a Victoria water lily inside a tropical greenhouse at the Braunschweig University of Technology’s botanic garden.
The Victoria water lily, named for Queen Victoria of the England, is native to the Amazon river basin and can hold up to 70 pounds of weight.
Does the Universe Have a Purpose? feat. Neil deGrasse Tyson
Amazing new photos from NASA’s Cassini probe orbiting Saturn reveal a dizzying glimpse into a monster storm raging on the ringed planet’s north pole.
Image: NASA / JPL-Caltech / SSI
Cassini took the spectacular Saturn storm photos Tuesday and relayed it back to Earth the same day, mission scientists said in a statement. The pictures reveal a swirling storm reminiscent of the recent Hurricane Sandy that recently plagued our own planet.
Saturn’s mysterious northern vortex, a vast hexagon-shaped storm, dominates this photo taken Tuesday by NASA’s Cassini spacecraft.
The tempest is located in a strange hexagonal cloud vortex at Saturn’s north pole that was first discovered by the Voyager spacecraft in the early 1980s, and sighted more closely by Cassini since then. The strange six-sided feature is thought to be formed by the path of a jet stream flowing through the planet’s atmosphere.
“Cassini’s recent excursion into inclined orbits has given mission scientists a vertigo-inducing view of Saturn ‘s polar regions, and what to our wondering eyes has just appeared: roiling storm clouds and a swirling vortex at the center of Saturn’s famed northern polar hexagon,” Cassini scientists wrote in an online update.
Atlantis as seen from the ISS, 19 July 2011. Backing away from the station for the final time, the Shuttle is hidden in Earth’s Shadow. | <urn:uuid:e9b9a67b-f77b-4684-95c7-481a60ce96cc> | 3.140625 | 909 | Content Listing | Science & Tech. | 51.400394 |
||This article needs to be wikified. (November 2011)|
A chemical property is any aspect of a substance which is only seen by means of a chemical reaction.
This is different from a physical property which can be discovered without changing the substance's chemical structure.
Chemical properties can be used for building chemical classifications. They can be used to identify an unknown substance or to separate or purify it from other substances. | <urn:uuid:1112e349-d3cc-44ba-b5d6-508ecd46e47f> | 3.375 | 87 | Knowledge Article | Science & Tech. | 37.119 |
What will you find in some pond water? Pond critters of course.
We had already completed an initial lesson in the classroom. I had scooped a bucket of pond water from my neighborhood and we examined the unicellular and multicellular life of the pond. In fact, this lesson was taught almost two months ago but then we had some signs of spring. The rains came and with it, outdoor recess became a festival of science! My kiddos found a puddle and were completely enthralled with the life of a puddle. They begged me to bring out the microscopes and slides for RECESS!!
Of course, being the teacher who wants to make the kids happy, I gave them exactly what they wanted. Combined with a little mobile technology and we were identifying the organisms in the puddle in no time. We found that one of the most common organisms was the red fly larvae. Perhaps the most exciting moment for the group was when two different organisms were sucked up into the pipette at the same time and a real live food chain took place! You might have already guessed it. We witnessed the predator/prey relationship. Talk about exciting!
|Red fly larvae|
How to make a wet mount slide using file folders and tape, plus a student worksheet for observing pond life.
Pond Life Identification Kit
Interactive Virtual Pond Dip
Pond Water: a closer look | <urn:uuid:f5b9ad58-485c-4c10-87db-4dee5e837262> | 3.078125 | 288 | Personal Blog | Science & Tech. | 57.084118 |
As Russian speakers will be aware, more and less profane versions of the question “What was that?” punctuated many of the spectacular videos recorded in and around the Siberian city of Chelyabinsk on Friday, as what NASA calls “the Russian fireball” screamed across the sky.
To answer that question, The Lede turned to Kenneth Chang, a colleague on the Science desk, who assured us that it was a meteor and put together the following guide to near-earth-object terminology.
Asteroid: a rock in orbit generally between Mars and Jupiter; fragments of a planet that never came together. Over the eons, because of collisions and gravitational jostling with neighbors, some asteroids have been ejected from the main belt and some are on trajectories that intersect with Earth’s orbit.
Comet: a chunk of ice and rock originating from the outer solar system. Some of them occasionally get gravitationally nudged so that they zoom toward the inner solar system, with the possibility of hitting Earth.
Meteor: the streak of light seen when a space rock — an asteroid or a comet — enters the atmosphere and starts burning up. It’s the scientific synonym for “falling star.”
Meteorite: if a meteor doesn’t entirely burn up, a piece of space rock that lands on Earth is called a meteorite.
Meteoroid: a space rock that’s bigger than a dust grain but smaller than an asteroid. The dividing line between asteroid and meteoroid is fuzzy, but generally space rocks bigger than boulders are asteroids. So a breadbox-size rock would be a meteoroid.
Bolide: astronomers use the term to describe a bright fireball from an incoming meteor; geologists use it as a catch-all term for a comet or an asteroid that hits the Earth. | <urn:uuid:0729b584-8e21-4823-b61c-d66c05739d3d> | 4 | 387 | Knowledge Article | Science & Tech. | 45.028222 |
Index to Plant Chromosome Numbers (IPCN)
The Index to Plant Chromosome Numbers is an NSF funded project that aims to extract and index original plant chromosome numbers of naturally occurring and cultivated plants published throughout the world. A committee of voluntary contributing editors, located in various parts of the world, reviews sets of serial titles assigned to them and returns the information to the editors for collation in the Index and database. Chromosome indexes are published at two or three year intervals. The Index to Plant Chromosome Numbers project has been based at the Missouri Botanical Garden since 1978. Data from published indexes from 1979 onward are available for consultation through this facility.
For additional information, see the last supplement by Goldblatt & Johnson 2006. Index to Plant Chromosome Numbers 2001-2003. Monographs in Systematic Botany from the Missouri Botanical Garden 106.
An Index covering the years 2004-2006 is in preparation and will be published in the fall of 2008 as: Goldblatt & Johnson 2008. Index to Plant Chromosome Numbers 2004-2006. Monographs in Systematic Botany from the Missouri Botanical Garden.
Many but not all data in the printed version of the Index to Plant Chromosome Numbers (1979-- ) are available on the Web in the IPCN database. The printed indexes and the database provide references to chromosome counts reported in the original literature. We therefore request that the IPCN database itself not be cited as the source for chromosome counts. If there is a need to cite the IPCN database, we recommend the following:
Index to plant chromosome numbers. 1979-- . P. Goldblatt & D. E. Johnson, eds. Missouri Botanical Garden, St. Louis. | <urn:uuid:149d5a3e-89a4-435d-856c-376690848276> | 2.75 | 357 | Knowledge Article | Science & Tech. | 42.724848 |
Science subject and location tags
Articles, documents and multimedia from ABC Science
Tuesday, 7 May 2013
The fungus suspected of killing off many of the world's frogs is from an ancient strain that has just recently escaped its niche thanks to globalisation, suggests new research.
Wednesday, 27 March 2013
Higher body temperatures protect frogs from a deadly fungal disease that has decimated amphibian populations around the world, say Australian scientists.
Wednesday, 16 January 2013
An Australian researcher who discovered a new species of flying frog near Ho Chi Minh City says it is a rare find so close to such a big city.
Wednesday, 19 December 2012
A study of the leg muscles of leaping toads might explain why we experience a jolting impact when we step off an unexpected stair.
Monday, 8 October 2012
Male orange-eyed tree frogs trill to advertise their size not to prospective mates, but to other males, a new study has found.
Monday, 28 May 2012
Climate change may have a more unpredictable effect on the distribution of cold-blooded animals than previously thought.
Monday, 19 March 2012
Proteins secreted by frog skin could one day help in the design of new antibiotics to fight superbugs, say researchers.
Thursday, 15 March 2012
Biologists have uncovered a new species of frog in the concrete jungle of New York.
Thursday, 12 January 2012
With voices hardly louder than an insect's buzz, the tiniest frogs ever discovered are smaller than a coin.
Tuesday, 27 September 2011
Two new studies have revealed how some frogs can survive the chytrid fungal disease.
Friday, 9 September 2011
Green tree frogs quench their thirst in arid landscapes by 'mining' moisture in the air.
Wednesday, 31 August 2011
Researchers have discovered that a chemical made by cane toads tadpoles disrupts the development of younger members.
Wednesday, 18 May 2011
One in three of all types of amphibians may yet to be found by scientists, according to a new study.
Wednesday, 11 May 2011
Meet a scientist With one-third of all amphibian species threatened with extinction, Jodi Rowley is passionate about documenting the diversity of Southeast Asian species "before it's too late".
Wednesday, 23 February 2011
For three-quarters of a century, the cane toad has rampaged around northeastern Australia, but scientists hope the toxic terror may be stopped in its tracks by the humble fence. | <urn:uuid:bfa035fb-984f-495c-94c3-e2b71a790097> | 2.703125 | 509 | Content Listing | Science & Tech. | 45.719444 |
Tools for HPC development
Parallel applications development
Examine the tools and utilities one may expect to find in a parallel
- Data set preparation - dealing with data sets that tend to be much
larger than in conventional computing.
- Code debugging - need special tools to debug code that runs on many
- Code analysis and profiling - helps in load balancing.
- Program simulation - cheaper way to applications development.
- Interpretation and presentation of results - again, HPC applications
tend to output vast amounts of data that needs to be displayed
- Data set preparation
- Often dealing with data sets that tend to be much larger than in
- Use of Graphical User Interfaces (GUI's) to set input parameters.
- Code debugging
- Need special tools to debug code that runs on several processors.
- Allows examination of the program source whilst it is executing.
Alternatively, perform a post-mortem.
- Facilities such as 'breakpoints' permit inspection and modification of variables in each of the processes.
- Ideally, there should be a central facility to enable control of
- Code performance analysis and profiling
A typical profiler will show:
- Colour-coded graphics displays indicating when processors are
computing, communicating, or idle.
- Time axis to follow each processor's activities during execution.
- Graphics to indicate communication patterns as a function of time.
- Subroutine usage to guide where parallelism should be concentrated.
- Program simulation
- A simulator allows development of parallel code on a serial machine.
- Assures program will execute but will not predict runtime on
- For example, Cray T3D simulator on Cray YMP.
HPC applications tend to output vast quantities of data.
- Visualization provides an effective means for presentation of results.
- Creative uses of colour, texture, translucency, and others allow
complicated models to be displayed in a form that can be more easily
understood e.g. AVS, PVwave.
- Packages in areas such as advanced visualisation, CFD, Molecular
Modelling, Geographical Information Systems (GIS).
- Grand Challenges as suggested by the HPC Initiative of the U.S. and
- Database and transaction processing required by banks, airline
reservation systems, and others.
- Virtual Reality for both work and leisure activities.
Submitted by Mark Johnston,
last updated on 21 February 1994. | <urn:uuid:153a4b9b-023d-4f08-b59d-afe12fb1bfdb> | 2.9375 | 511 | Tutorial | Software Dev. | 27.34411 |
Having got out my college physics book, I will now prove that focal length does not affect resolution loss due to diffraction. *drum roll*
Now, we know that diffraction creates these 'Airy disks' which look like bulls-eyes - a central dot surrounded by concentric rings. The smaller the aperture, the bigger this central dot becomes - this is as described in http://www.cambridgeincolour.com/tutorials...photography.htm
, to wit:
Some images may help, hopefully he won't mind me linking them here:
Airy disk in 2D
Airy disk in 3D
In order to figure out the maximum resolution of a lens, we figure out the angular separation of two points which are barely resolved - which for the purposes of this exercise will be two points whose central dots are just touching edge-to-edge - this will be the same as the angular size of either central peak.
The equation to figure this out is:sinθ
is the angular separation, λ
is the wavelength of light (considered a constant in this exercise, and set at 500nm) and D
is the aperture diameter.
For simplicity's sake, we'll make sinθ
just plain old θ
, as θ
is so small in any real world scenario to make the difference negligible. That gives us:θ
Now, the experiment: We take two lenses, of 50mm and 100mm focal lengths, and set their apertures to f/2. Take a picture of an object 10 meters away with the 100mm lens, and one of an object 5 meters away with the 50mm lens. We'll take the wavelength of light to be 500nm as a constant.
We do this because for this experiment to be valid, the size of the image on the negative/sensor must be the same for both lenses. Since a 100mm lens gives 2x the magnification of a 50mm lens, we stand back twice the distance with the 100mm lens.
For the 50mm lens, D
, the aperture diameter is 50/2 = 25mm
This gives us:θ
= 2.44x10⁻⁵ rad
For the 100mm lens, D
, the aperture diameter is 100/2 = 50mm
This gives us:θ
= 1.22x10⁻⁵ rad
So, what does that tell us? It tells us that the Airy disk of the 100mm lens is half the angular size of that of the 50mm lens. This is critical to understanding why focal length doesn't matter in terms of diffraction. Longer lenses produce a narrower 'beam' of diffracted light - remember, the number above is in 'rad', so it's an angle, not a distance.
Even though the diffracted light has to travel further to reach the film/sensor in the 100mm lens than the 50mm lens, it's spreading out much more slowly. By the time it reaches the film/sensor it will produce a spot of light exactly as large as the 50mm lens does.
Now to prove this concretely, we want to measure what that means in terms of resolution. To do this, we want to measure the minimum distance between two points on an object that can be resolved with both lenses at the distances and aperture given.
From basic optics:y/s = y'/s'
is the separation of the points on the object, y'
is the separation of the corresponding points on the film/sensor, s
is the distance from lens to object and s'
is the distance from lens to film/sensor.
Thus the angular separations of the object points and the corresponding image points are both equal to θ
, so we get:y/s = θ
andy'/s' = θ
is greater than the focal length of the lens, the image distance s'
is approximately equal to the focal length.
For the 50mm lens, s
= 5m, s'
= 50mm and θ
= 2.44x10⁻⁵ rad:y
/5m = 2.44x10⁻⁵
= 1.22x10⁻⁴m = 0.122mm
/50mm = 2.44x10⁻⁵
= 1.22x10⁻³mm = 0.00122mm
For the 100mm lens, s
= 10m, s'
= 100mm and θ
= 1.22x10⁻⁵ rad:y
/10m = 1.22x10⁻⁵
= 1.22x10⁻⁴m = 0.122mm
/100mm = 1.22x10⁻⁵
= 1.2x10⁻³mm = 0.00122mm
Phew! So, where does that leave us? Both lenses can resolve a minimum distance of .122mm on an object, which translates to a minimum resolvable dot on the film/sensor of .00122mm at those distances and an aperture of f/2. If we change the aperture, we will get different numbers, but the important thing is that they are the same for both lenses - so we've just proved that for the same image size and the same aperture, but different focal lengths, resolution is identical.
Again, the reason for this is that the longer lens produces a narrower cone of diffraction, so even though the light has to travel further to get to the film/sensor, it ends up producing the same size spot of light in both cases.
Thus, focal length does not contribute to resolution loss from diffraction.
Peter: An interesting post.
You say that: "so we've just proved that for the same image size and the same aperture, but different focal lengths, resolution is identical."
I understand what you are saying, and I agree with the point, but I'm not happy with the use of the term resolution, as it is very confusing. In optics the resolution of a lens is a well-defined quantity, and it does depend on the focal length, as the equations you posted show. I think that what you mean is the sharpness of the image on the sensor (whether that be film or a solid-state device), though I'm sure there is a more precise phrase than "sharpness on the sensor".
Blasting the atmosphere of Saturn's moon Titan with X-rays can produce a base component of DNA, a new laboratory study suggests. While the effect may only occur periodically, when meteoroid impacts deliver water to the moon's surface, the finding adds to evidence that Titan may be ripe for life.
In some ways, Titan is more like Earth than any other body in the solar system. It has continents, lakes, clouds, and perhaps even rain – but where Earth has rock and water, Titan has ice and methane. It may also harbour an ocean of liquid water beneath its icy surface that could host life.
With its nitrogen-rich atmosphere and abundance of organic material, Titan seems like a model of the early Earth.
But how did life on Earth get started, and might a similar process have a chance on Titan? For decades, researchers have been trying to recreate life's appearance on the early Earth by zapping materials thought to be present there with electricity or high-energy photons. The first such trial, called the Miller-Urey experiment, was performed in the early 1950s and produced amino acids, the building blocks of proteins.
In the 50 years since, dozens of teams have expanded on Stanley Miller and Harold Urey's original setup, using a wide variety of energy sources and gases that modelled conditions not only on Earth, but on interstellar dust grains and on Titan.
In 1984, a team that included Carl Sagan created adenine, one of five base components of DNA and RNA, in Titan-like conditions, using a spark of electricity meant to simulate lightning.
But so far no evidence of lightning has been found on Titan. And until now studies hitting Titan-like atmospheres with photons, like those it receives from the sun, have produced only organic compounds like benzene – and none of the components of DNA.
Now, researchers led by Sergio Pilling of the Catholic University of Rio de Janeiro in Brazil have produced the base adenine using photons for the first time.
Instead of using ultraviolet (UV) radiation as in previous studies, however, they used low-energy, or "soft", X-rays. "Soft X-rays can penetrate deeper in Titan's atmosphere and reach denser regions [than UV]," Pilling told New Scientist, adding that X-rays set off different chemical reactions in Titan's atmosphere.
They modelled Titan's current atmosphere using a mixture of nitrogen and methane gas, and added water to it to simulate the conditions when the moon is bombarded with water-bearing comets or asteroids – a situation that occurred much more frequently in the early solar system.
A frozen sheet of salty water ice lay below this 'atmosphere' and caused the gas to condense into liquid droplets, like dew settling onto Titan's icy surface.
Then the researchers bombarded the setup with X-rays for up to three days, representing the radiation that Titan would get from the sun over a period of about 7 million years. Afterwards, the still-frozen surface contained some organic compounds, but nothing that could be called the building blocks of life.
But when they heated the samples to room temperature, adenine appeared.
That means Titan's saucepan of proto-life would need a source of extra heat to activate. If there was a warm period in Titan's history, perhaps prompted by volcanic activity or meteoroid impacts, "a primitive life could have had a chance to flourish there", the researchers write.
And Titan is due to be heated up in the next few billion years, when the sun bloats into a red giant star, expanding to the present orbit of Earth, they say.
Chris McKay, an astrobiologist at NASA, says the work is interesting, but adds that it may be difficult for life to get started on the moon's surface most of the time. "Adenine synthesis is important, but because Titan lacks water and essentially lacks any molecule that includes oxygen, prebiotic synthesis cannot get very far," he told New Scientist.
But if impacts sometimes allow water to exist on the moon's surface, "then things might happen", he says. "It is interesting to see how far the chemistry can go."
Jonathan Lunine of the University of Arizona agrees. "This is interesting but not seminal," he told New Scientist.
He points out that adenine is just one of many molecules used by life on Earth, so its creation in the experiment does not mean Titan has all the elements needed to create life as we know it.
Some researchers have speculated that microbes on Titan might breathe hydrogen, eat organic molecules drifting down from the upper atmosphere and excrete methane. But so far there is no evidence of life on the moon, and if any does exist, it may use entirely different building blocks from that on Earth (see Life - but not as we know it).
Journal reference: Journal of Physical Chemistry A (DOI: 10.1021/jp902824v)
Have your say
It's Life Jim. . . but Not As We Know It
Fri Jun 26 01:15:50 BST 2009 by billzfantazy
I wonder at the hubris of people who believe life can only exist in Earth-like conditions... it's all we know... but not all we can imagine.
It's Life Jim. . . But Not As We Know It
Fri Jun 26 07:04:42 BST 2009 by Julian
I've wondered that same thing for a long time. Look at how every living thing on this planet evolves to live in any environment. I think it's all just time, and in time anything can happen.
It's Life Jim. . . But Not As We Know It
Fri Jun 26 09:28:12 BST 2009 by C Sagan
I agree, we haven't ventured past our solar system but many act like they understand the entirety of the universe. The question we should be asking is how we will identify life when we do blunder into it. It may be very hard for us to identify due to our carbon bias.
Leave Us Alone
Fri Jun 26 01:26:06 BST 2009 by derekcolman
As a Titan, I beseech you to halt this research. We have no wish to associate with you primitive apemen on Earth.
Leave Us Alone
Mon Jun 29 22:54:12 BST 2009 by anonymous
I'm a Titan too; unfortunately some of my kind are grievously shortsighted and make silly comments about stopping research. Be it known that many of us wish to share knowledge with the hot fast creatures that have lately popped up on a planet of the inner Solar System (we thought there were no planets within Jupiter's orbit for several million years... it's hard to conduct good astronomical observations from our world, you know).
Do not take advantage of us simply because it takes us six of your hours to write a sentence. We've already developed our super-conducting computers and robots to take care of our metabolic shortcomings AND any hostilities. We've had them for about 100 million years now, and they work faster than you can yet imagine.
They even preserved YOUR world about 65 million years ago...
Consider a system of N particles in a cubic box of volume $V = L^3$.
The particles are assumed not to interact with each other. Thus, the Hamiltonian in Cartesian coordinates may be taken to be
$$H = \sum_{i=1}^{3N} \frac{p_i^2}{2m},$$
where we are assuming that all particles are of the same type.
The microcanonical partition function is
$$\Omega(N,V,E) = \frac{E_0}{N!\,h^{3N}} \int d^{3N}x \int d^{3N}p\; \delta\!\left(E - \sum_{i=1}^{3N} \frac{p_i^2}{2m}\right),$$
where $E_0$ is an arbitrary reference energy that makes $\Omega$ dimensionless and $h$ is Planck's constant.
Since the Hamiltonian is independent of the coordinates, the 3N coordinate integrations can be done straightforwardly. The range of each one is 0 to L. Thus, these integrations give an overall factor $L^{3N} = V^N$:
$$\Omega = \frac{E_0\,V^N}{N!\,h^{3N}} \int d^{3N}p\; \delta\!\left(E - \sum_{i=1}^{3N} \frac{p_i^2}{2m}\right).$$
To do the momentum integrals, we first change variables to
$$y_i = \frac{p_i}{\sqrt{2m}}, \qquad d^{3N}p = (2m)^{3N/2}\, d^{3N}y.$$
Substitution into the partition function gives
$$\Omega = \frac{E_0\,V^N (2m)^{3N/2}}{N!\,h^{3N}} \int d^{3N}y\; \delta\!\left(E - \sum_{i=1}^{3N} y_i^2\right).$$
The 3N-dimensional integral can now be seen to be an integration over the surface of a sphere defined by the equation
$$\sum_{i=1}^{3N} y_i^2 = E.$$
Therefore, it proves useful to transform to 3N-dimensional spherical coordinates $(y, \theta_1, \ldots, \theta_{3N-1})$, where $y$ is the radial coordinate and
$$d^{3N}y = y^{3N-1}\, dy\, d\Omega_{3N-1},$$
where $d\Omega_{3N-1}$ is the solid angle element over the $3N-1$ angles. The partition function now becomes:
$$\Omega = \frac{E_0\,V^N (2m)^{3N/2}}{N!\,h^{3N}} \int d\Omega_{3N-1} \int_0^\infty dy\; y^{3N-1}\, \delta\!\left(E - y^2\right).$$
At this point, we use an identity of $\delta$-functions:
$$\delta(E - y^2) = \frac{1}{2\sqrt{E}}\left[\delta\!\left(y - \sqrt{E}\right) + \delta\!\left(y + \sqrt{E}\right)\right].$$
Only the first term contributes, since the radial integration runs over $y \ge 0$, so the radial integral gives
$$\int_0^\infty dy\; y^{3N-1}\, \delta(E - y^2) = \frac{1}{2}\,E^{3N/2 - 1}.$$
We will also make use of the fact that N is large, so that we may take $3N - 1 \approx 3N$. Using the general formula for an n-dimensional solid angle integral:
$$\int d\Omega_n = \frac{2\pi^{(n+1)/2}}{\Gamma\!\left(\frac{n+1}{2}\right)},$$
where $\Gamma(x)$ is the Gamma function:
$$\Gamma(x) = \int_0^\infty dt\; t^{x-1} e^{-t}, \qquad \Gamma(n) = (n-1)!\ \text{for integer}\ n.$$
Thus, the solid angle integral is
$$\int d\Omega_{3N-1} = \frac{2\pi^{3N/2}}{\Gamma\!\left(\frac{3N}{2}\right)},$$
and the partition function finally becomes
$$\Omega(N,V,E) = \frac{1}{N!\,\Gamma\!\left(\frac{3N}{2}\right)}\,\frac{E_0}{E}\left[V\left(\frac{2\pi m E}{h^2}\right)^{3/2}\right]^N.$$
The entropy S(N,V,E) is given by
$$S(N,V,E) = k\ln\Omega = Nk\ln\!\left[V\left(\frac{2\pi m E}{h^2}\right)^{3/2}\right] - k\ln N! - k\ln\Gamma\!\left(\frac{3N}{2}\right) + k\ln\frac{E_0}{E}.$$
Note that the term $k\ln(E_0/E)$ grows at most like $\ln N$ (since $E \sim N$), which is negligibly small compared to the terms proportional to $N$ and to $\ln N!$. Thus, we can neglect it. Now, we can simplify $\ln\Gamma(3N/2)$ using Stirling's approximation
$$\ln\Gamma\!\left(\frac{3N}{2}\right) \approx \frac{3N}{2}\ln\frac{3N}{2} - \frac{3N}{2},$$
which is valid for N very large. Also, note that
$$\frac{3N}{2}\ln\frac{3N}{2} = N\ln\!\left(\frac{3N}{2}\right)^{3/2}.$$
Substituting these approximations into the expression for the entropy, we obtain
$$S(N,V,E) = Nk\ln\!\left[V\left(\frac{4\pi m E}{3N h^2}\right)^{3/2}\right] + \frac{3}{2}Nk - k\ln N!.$$
We could also simplify the $\ln N!$ term using Stirling's approximation; however, let us keep it as it is for now, since, as we remember from our past treatment of the microcanonical ensemble, this factor was included in the partition function in an ad hoc manner, in order to account for the indistinguishability of the particles. We will want to explore the effect of removing this term. Without it, the entropy is the purely classical entropy
$$S_{\mathrm{cl}}(N,V,E) = Nk\ln\!\left[V\left(\frac{4\pi m E}{3N h^2}\right)^{3/2}\right] + \frac{3}{2}Nk.$$
Other thermodynamic quantities can be easily obtained. For example, the temperature is
$$\frac{1}{T} = \left(\frac{\partial S}{\partial E}\right)_{N,V} = \frac{3Nk}{2E} \quad\Longrightarrow\quad E = \frac{3}{2}NkT,$$
which is the result we obtained from our analysis of the classical virial theorem. The pressure is given by
$$\frac{P}{T} = \left(\frac{\partial S}{\partial V}\right)_{N,E} = \frac{Nk}{V} \quad\Longrightarrow\quad PV = NkT,$$
which is the famous ideal gas law. This is actually the equation of state of the ideal gas, as it expresses the pressure as a function of the volume and temperature. It can also be written as
$$P = \rho k T,$$
where $\rho = N/V$ is the constant density of the gas.
One often expresses the equation of state graphically. For the ideal gas, plotting P vs. V for different values of T gives a family of hyperbolas, $P = NkT/V$.
The different curves are called isotherms, since each represents P vs. V at fixed temperature.
The heat capacity of the ideal gas follows from the expression for the energy:
$$C_V = \left(\frac{\partial E}{\partial T}\right)_{N,V} = \frac{3}{2}Nk.$$
From our previous analysis of the virial theorem, we can conclude that each kinetic mode contributes $k/2$ to the heat capacity; the 3N momentum modes therefore give $3Nk/2$ in total.
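As a quick numerical sanity check of the final entropy expression (with the $\ln N!$ term also expanded via Stirling, which turns it into the familiar Sackur-Tetrode form), the sketch below evaluates $S/(Nk)$ for argon at 1 atm and 300 K; the physical constants and the argon test case are illustrative choices, not part of the derivation:

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J s

def entropy_per_particle(m, T, n):
    """S/(N k) for an ideal gas: ln[1/(n lam^3)] + 5/2, where
    lam = h / sqrt(2 pi m k T) is the thermal de Broglie wavelength."""
    lam = h / math.sqrt(2.0 * math.pi * m * k * T)
    return math.log(1.0 / (n * lam**3)) + 2.5

m_argon = 39.948 * 1.66054e-27    # kg
n = 101325.0 / (k * 300.0)        # number density at 1 atm, 300 K, from PV = NkT
print(entropy_per_particle(m_argon, 300.0, n))   # ~18.6, i.e. ~155 J/(mol K)
```

The result agrees with the tabulated standard entropy of argon, a good sign that no factor was lost along the way.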
This image of the Martian surface shows large valleys called Northwestern sloped valleys (NSV's) that were created by cataclysmic flooding in the distant past.
Courtesy of NASA
The Largest Valley System in the Solar System
News story originally written on August 8, 2001
Scientists studying the features of Mars have discovered the largest known valleys in the solar system. These valleys and gorges lie beneath the surface of the planet, under ash, lava, and dust, and were spotted by a satellite in Mars' orbit whose laser altimeter can reveal the subtle surface traces of buried features. The name of this satellite is the Mars Global Surveyor.
These giant channels were apparently formed by large amounts of water flowing through them, just as similar structures are formed on Earth, except that to carve valleys this size, the flow of water through them would have been 50,000 times that of the Amazon River!
The scientists who completed the study say that Mars probably experienced periods of dramatic heating caused by volcanic activity, and this sharp increase in temperature would have caused floods of an incredible scale. They say that the amount of water flowing through the channels they have discovered would have been enough to fill an ocean three times the size of the Mediterranean Sea in less than two months!
The evidence from this study supports the long-held idea that Mars has vast amounts of frozen water, and that from time to time this water is released by volcanic activity. It also suggests that sometime in the future, Mars may again have oceans, lakes and rivers on its surface.
Over the weekend, The New York Times reported that the Obama administration is preparing to launch biology into its first big project post-genome: mapping the activity and processes that power the human brain. The initial report suggested that the project would get roughly $3 billion over 10 years to fund projects that would provide an unprecedented understanding of how the brain operates.
But the report was remarkably short on the scientific details of what the studies would actually accomplish or where the money would actually go. To get a better sense, we talked with Brown University's John Donoghue, who is one of the academic researchers who has been helping to provide the rationale and direction for the project. Although he couldn't speak for the administration's plans, he did describe the outlines of what's being proposed and why, and he provided a glimpse into what he sees as the project's benefits.
What are we talking about doing?
We've already made great progress in understanding the behavior of individual neurons, and scientists have done some excellent work in studying small populations of them. On the other end of the spectrum, decades of anatomical studies have provided us with a good picture of how different regions of the brain are connected. "There's a big gap in our knowledge because we don't know the intermediate scale," Donoghue told Ars. The goal, he said, "is not a wiring diagram—it's a functional map, an understanding."
This would involve a combination of things, including looking at how larger populations of neurons within a single structure coordinate their activity, as well as trying to get a better understanding of how different structures within the brain coordinate their activity. What scale of neuron will we need to study? Donoghue answered that question with one of his own: "At what point does the emergent property come out?" Things like memory and consciousness emerge from the actions of lots of neurons, and we need to capture enough of those to understand the processes that let them emerge. Right now, we don't really know what that level is. It's certainly "above 10," according to Donoghue. "I don't think we need to study every neuron," he said. Beyond that, part of the project will focus on what Donoghue called "the big question": what emerges in the brain at these various scales?
While he may have called emergence "the big question," it quickly became clear he had a number of big questions in mind. Neural activity clearly encodes information, and we can record it, but we don't always understand the code well enough to understand the meaning of our recordings. When I asked Donoghue about this, he said, "This is it! One of the big goals is cracking the code."
Donoghue was enthused about the idea that the different aspects of the project would feed into each other. "They go hand in hand," he said. "As we gain more functional information, it'll inform the connectional map and vice versa." In the same way, knowing more about neural coding will help us interpret the activity we see, while more detailed recordings of neural activity will make it easier to infer the code.
As we build on these feedbacks to understand more complex examples of the brain's emergent behaviors, the big picture will emerge. Donoghue hoped that the work will ultimately provide "a way of understanding how you turn thought into action, how you perceive, the nature of the mind, cognition."
How will we actually do this?
Perception and the nature of the mind have bothered scientists and philosophers for centuries—why should we think we can tackle them now? Donoghue cited three fields that had given him and his collaborators cause for optimism: nanotechnology, synthetic biology, and optical tracers. We've now reached the point where, thanks to advances in nanotechnology, we're able to produce much larger arrays of electrodes with fine control over their shape, allowing us to monitor much larger populations of neurons at the same time. On a larger scale, chemical tracers can now register the activity of large populations of neurons through flashes of fluorescence, giving us a way of monitoring huge populations of cells. And Donoghue suggested that it might be possible to use synthetic biology to translate neural activity into a permanent record of a cell's activity (perhaps stored in DNA itself) for later retrieval.
Right now, in Donoghue's view, the problem is that the people developing these technologies and the neuroscience community aren't talking enough. Biologists don't know enough about the tools already out there, and the materials scientists aren't getting feedback from them on ways to make their tools more useful.
Since the problem is understanding the activity of the brain at the level of large populations of neurons, the goal will be to develop the tools needed to do so and to make sure they are widely adopted by the bioscience community. Each of these approaches is limited in various ways, so it will be important to use all of them and to continue the technology development.
Assuming the information can be recorded, it will generate huge amounts of data, which will need to be shared in order to have the intended impact. And we'll need to be able to perform pattern recognition across these vast datasets in order to identify correlations in activity among different populations of neurons. So there will be a heavy computational component as well.
Who’s going to pay for it, and where will the money go?
According to Donoghue, the basic outlines of the effort have been in the works for a while. In June of last year, a number of neuroscientists published a paper in Neuron that outlined the basic ideas. These scientists had been working with the Kavli Foundation, which maintains a strong interest in neuroscience. Kavli, in turn, has been in contact with other research organizations, including the Howard Hughes Medical Institute, the Wellcome Trust, and for this particular project, the Allen Institute for Brain Science and the Simons Foundation, among others.
Donoghue told Ars he only got involved after this point, as the effort began to focus more on the human brain and somewhat less on model organisms (though these will likely have to be used for some aspects of the studies). Given the large role that the government plays in funding biomedical research in the US, it was essential to get it involved. "We see this as a coordinating effort for funding that's already there to ensure maximal effective use of the funds," Donoghue said. Although he hopes there will be ways to obtain additional money for the project as a whole, the overall scope of the project is such that many of its goals can be accomplished through existing funding mechanisms.
One of the reasons is that Donoghue expects that most of the work can be done via the existing model of funding many individual investigators. Although the project is being compared to the human genome effort, that was mostly completed through a handful of sequencing centers with highly specialized and expensive equipment. Donoghue expects that most of the work can be done with materials that will be within the reach of independent labs. There will be some need to coordinate data sharing and computational resources, but the actual work of studying the brain is likely to be distributed widely within the research community.
The key features will really be the coordination and focus of the effort, along with the interdisciplinary technology development described above.
Why would we want to do this?
Understanding consciousness, decision making, and memory don't do it for you? That's OK; Donoghue suggested that the work could have huge commercial payoffs in both health and computing.
For computing, we may begin to understand why the human brain badly outperforms computers on a number of tasks like image recognition and language comprehension. As an example, he pointed out that humans can read the distorted text of CAPTCHAs without (usually) too much struggle, yet they still pose a barrier to computers. Understanding how the brain manages this and other feats could allow us to design computers or software that can perform similar tasks. "You'll get a lot more spam," Donoghue joked, "but you might get intelligent readers that recognize spam." He could clearly envision a lot of additional applications for this sort of computerized text comprehension.
On the medical side, Donoghue noted that consciousness emerges from the network of interactions that take place in the brain. Many neural disorders—the loss of memory in Alzheimer's, the erratic thought in schizophrenia, the unregulated emotions of depression—are all disruptions of this underlying network. Understanding how it operates is an essential step in figuring out how to intervene. Donoghue argued that if we could use that knowledge to, say, add 10 years of health to the typical Alzheimer's patient, then we'd save more than the entire cost of the program.
In his view, even if you're not a neurobiologist, you should be hoping this program gets off the ground. | <urn:uuid:3db21e7d-f54e-4116-99c5-d2bd57091162> | 3.25 | 1,813 | Audio Transcript | Science & Tech. | 40.948187 |
Thermodynamic energy problems are presented in a variety of ways designed to illustrate: the definitions of thermal energy changes, the transfer of thermal energy, how the energy of a reaction is measured, and how the energy of a reaction can be calculated using thermodynamic data. The problems given here are meant to be only a representative sample.
·A mixture of propane and oxygen is burned in a cylinder equipped with a piston. The system loses 1200 J of heat to its surroundings and does 500 J of work in moving the piston. What is the change in the internal energy of the system?
·The specific heat capacity of water is 4.184 J/gK. If you take 1 liter of water from a refrigerator at 4°C and let it warm to room temperature (24°C), how much heat is transferred to the water from its surroundings?
·If a 100 g block of aluminum at 100°C is dropped into 1000 mL of water at 20°C, what would be the final temperature of the water? The specific heat capacity of aluminum is 0.908 J/gK.
·Calculate the amount of heat in joules needed to change 50.0 g of ice at −30°C to superheated steam at 120°C. The specific heat of ice is 2.1 J/gK and the specific heat of steam is 2.0 J/gK. The heat of fusion for water is 333 J/g and the heat of vaporization for water is 2260 J/g. (A sketch of this bookkeeping appears after the last problem below.)
·What is the ΔH°rxn for the burning of ethanol with oxygen? The balanced chemical equation is…
C2H5OH (liq) + 3 O2 (gas) → 2 CO2 (gas) + 3 H2O (liq)
You will need to refer to tables of standard enthalpies of formation in order to solve this problem.
·Hydrazine is used as a rocket fuel. When hydrazine reacts with oxygen, the following reaction occurs…
N2H4 (liq) + O2 (gas) → N2 (gas) + 2 H2O (gas)
When 1.00 g of hydrazine is burned in a constant-volume calorimeter, the change in temperature is +3.51 K. If the heat capacity of the bomb calorimeter is 5,510 J/K, what is the quantity of heat evolved in this reaction? What is qrxn?
·What is the ΔH°rxn for the following reaction?
H2C=CH2 + H2 → CH3–CH3
You will need to refer to tables of bond energies in order to solve this problem.
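As an illustration of the bookkeeping needed for the ice-to-steam problem above, here is a minimal sketch; the function name is invented for this example, and every constant comes straight from the problem statement:

```python
C_ICE, C_WATER, C_STEAM = 2.1, 4.184, 2.0   # specific heats, J/(g K)
H_FUS, H_VAP = 333.0, 2260.0                # latent heats, J/g

def heat_to_superheated_steam(mass_g, t_start=-30.0, t_end=120.0):
    """Total heat (J) to take ice at t_start (deg C) to steam at t_end (deg C)."""
    q = mass_g * C_ICE * (0.0 - t_start)     # warm the ice to 0 C
    q += mass_g * H_FUS                      # melt the ice
    q += mass_g * C_WATER * 100.0            # warm the water to 100 C
    q += mass_g * H_VAP                      # boil the water
    q += mass_g * C_STEAM * (t_end - 100.0)  # superheat the steam
    return q

print(heat_to_superheated_steam(50.0))  # about 1.56e5 J
```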
Return to the problem tutorial.
Copyright © August 2000 by Richard C. Banks...all rights reserved. | <urn:uuid:bda93d27-93a2-49eb-bce4-f13b40019b03> | 3.6875 | 597 | Tutorial | Science & Tech. | 75.983596 |
We're constantly imitating nature.
Artificial intelligence researchers study the way babies learn to right themselves after falling down to help train robots to behave similarly.
We're still learning new things about flight dynamics and wing design from butterflies and other animals.
If you've ever carefully tiptoed across the floor to keep from disturbing someone, you're mimicking how a deer walks to avoid alerting predators to its presence.
Okay, that one's a stretch, but if you've ever watched a deer do this, it sure seems like one heck of a coincidence.
In any case, imitation is the sincerest form of flattery. It's also how modern science works - creating models for simple structures in order to approximate the real world. When we succeed, we learn; when we fail, we learn more. It's a painstaking process of trial and error called the scientific method.
Every year the biotechnology industry comes one step closer to learning how to cure our ills and extend the human lifespan. We have further to go than we've come, to be sure, but getting here was no easy trick. After all, biotech research is attempting nothing short of unveiling the secrets of life. | <urn:uuid:63afee7c-a28e-4e62-b6b7-b019fbf64b3d> | 2.796875 | 244 | Nonfiction Writing | Science & Tech. | 50.887243 |
Astroparticle physics, also known as particle astrophysics, is a branch of particle physics that studies elementary particles of astronomical origin and their relation to astrophysics and cosmology. It is a relatively new field of research emerging at the intersection of particle physics, astronomy, astrophysics, detector physics, relativity, solid state physics, and cosmology. Partly motivated by the historic discovery of neutrino oscillations, the field has undergone remarkable development, both theoretically and experimentally, over the last decade.
The field of astroparticle physics evolved out of optical astronomy. With the growth of detector technology came the more mature field of astrophysics, which drew on multiple physics subtopics: mechanics, electrodynamics, thermodynamics, plasma physics, nuclear physics, relativity, and particle physics. Particle physicists found astrophysics necessary because of the difficulty of performing laboratory experiments comparable to what is found in nature. For example, the cosmic-ray spectrum reaches energies as high as 10^20 eV, whereas proton-proton collisions at the Large Hadron Collider produce energies only on the scale of TeV.
The field can be said to have begun in 1910, when a German physicist named Theodor Wulf measured the ionization in the air, an indicator of gamma radiation, at the bottom and top of the Eiffel Tower. He found that there was far more ionization at the top than what was expected if only terrestrial sources were attributed for this radiation.
Victor Francis Hess, then an Austrian physicist, hypothesized that some of the ionization was caused by radiation from the sky. In order to defend this hypothesis, Hess designed instruments capable of operating at high altitudes and performed observations on ionization up to an altitude of 5.3 km. From 1911 to 1913, Hess made ten flights to meticulously measure ionization levels. Through prior calculations, he did not expect there to be any ionization at an altitude of 500m if terrestrial sources were the sole cause of radiation. His measurements however, revealed that although the ionization levels initially decreased with altitude, they began to sharply rise at some point. At the peaks of his flights, he found that the ionization levels were much greater than at the surface. Hess was then able to conclude that “a radiation of very high penetrating power enters our atmosphere from above.” Furthermore, one of Hess’s flights was during a near-total eclipse of the Sun. Since he did not observe a dip in ionization levels, Hess reasoned that the source had to be further away in space. For this discovery, Hess was one of the people awarded the Nobel Prize in Physics in 1936. In 1925, Robert Millikan confirmed Hess’s findings and subsequently coined the term ‘cosmic rays’.
Many physicists knowledgeable about the origins of the field of astroparticle physics prefer to attribute this ‘discovery’ of cosmic rays by Hess as the starting point for the field.
Topics of Research
While it may be difficult to decide on a standard 'textbook' description of the field of astroparticle physics, the field can be characterized by the topics of research that are actively being pursued. The Astroparticle Physics journal, published by Elsevier, accepts papers that are focused on new developments in the following areas:
- High-energy cosmic-ray physics and astrophysics;
- Particle cosmology;
- Particle astrophysics;
- Related astrophysics: Supernova, Active Galactic Nuclei, Cosmic Abundances, Dark Matter etc.;
- High-energy, VHE and UHE gamma-ray astronomy;
- High- and low-energy neutrino astronomy;
- Instrumentation and detector developments related to the above-mentioned fields.
Open Questions
One main task for the future of the field is simply to thoroughly define itself beyond working definitions and clearly differentiate itself from astrophysics and other related topics.
A current unsolved problem for the field of astroparticle physics is the nature of dark matter and dark energy. Judging from the orbital velocities of stars in the Milky Way and the velocities of galaxies in galactic clusters, the energy density of the visible matter in the universe is far too small to account for the observed dynamics. Since the early nineties some candidates have been found that could partially account for the missing dark matter, but they are nowhere near sufficient to offer a full explanation. The discovery of an accelerating universe suggests that a large part of the missing energy density is stored as dark energy in a dynamical vacuum.
Another question for astroparticle physicists is why there is so much more matter than antimatter in the universe today. Baryogenesis is the term for the hypothetical processes that produced the unequal numbers of baryons and antibaryons in the early universe, which is why the universe is made of matter today, and not antimatter.
Experimental Facilities
The rapid development of this field has led to the design of new types of infrastructure. In underground laboratories or with specially designed telescopes, antennas and satellite experiments, astroparticle physicists employ new detection methods to observe a wide range of cosmic particles including neutrinos, gamma rays and cosmic rays at the highest energies. They are also searching for dark matter and gravitational waves. Experimental particle physicists are limited by the technology of their terrestrial accelerators, which are only able to produce a small fraction of the energies found in nature.
Facilities, experiments and laboratories involved in astroparticle physics include:
- IceCube (Antarctica). The largest neutrino detector in the world, completed in December 2010. The purpose of the detector is to investigate high-energy neutrinos, search for dark matter, observe supernova explosions, and search for exotic particles such as magnetic monopoles.
- ANTARES (telescope). (Toulon, France). A Neutrino detector 2.5 km under the Mediterranean Sea off the coast of Toulon, France. Designed to locate and observe neutrino flux in the direction of the southern hemisphere.
- Pierre Auger Observatory (Pampa Amarilla, Argentina). Detects and investigates high energy cosmic rays using two techniques. One is to study the particles interactions with water placed in surface detector tanks. The other technique is to track the development of air showers through observation of ultraviolet light emitted high in the Earth's atmosphere.
- CERN Axion Solar Telescope (CERN, Switzerland). Searches for axions originating from the Sun.
- NESTOR Project (Pylos, Greece). The target of the international collaboration is the deployment of a neutrino telescope on the sea floor off of Pylos, Greece.
- Laboratori Nazionali del Gran Sasso (L'Aquila, Italy). Located within the Gran Sasso mountain with its experimental halls covered by 1400m of rock, which protects experiments from cosmic rays. The facility hosts experiments that require a low noise background environment.
- Aspera European Astroparticle network. Started in July 2006 and is responsible for coordinating and funding national research efforts in astroparticle physics.
See also
- Aspera European network portal
- www.astroparticle.org: all about astroparticle physics...
- Aspera news
- Astroparticle physics news on Twitter
- Virtual Institute of Astroparticle Physics
- Helmholtz Alliance for Astroparticle Physics
- UCLA Astro-Particle Physics at UCLA
- Journal of Cosmology and Astroparticle Physics
- Astroparticle Physics in the Netherlands
- Astroparticle and High Energy Physics
- ASD: Astroparticle Physics Laboratory at NASA | <urn:uuid:5d95c4be-9f3b-4a15-84be-a4a4b240e91a> | 3.28125 | 1,741 | Knowledge Article | Science & Tech. | 26.047512 |
Climate models at different spatial scales and levels of complexity provide the major source of information for constructing scenarios. GCMs and a hierarchy of simple models produce information at the global scale. These are discussed further below and assessed in detail in Chapters 8 and 9. At the regional scale there are several methods for obtaining sub-GCM grid scale information. These are detailed in Chapter 10 and summarised in Section 13.4.
The most common method of developing climate scenarios for quantitative impact assessments is to use results from GCM experiments. GCMs are the most advanced tools currently available for simulating the response of the global climate system to changing atmospheric composition.
All of the earliest GCM-based scenarios developed for impact assessment in the 1980s were based on equilibrium-response experiments (e.g., Emanuel et al., 1985; Rosenzweig, 1985; Gleick, 1986; Parry et al., 1988). However, most of these scenarios contained no explicit information about the time of realisation of changes, although time-dependency was introduced in some studies using pattern-scaling techniques (e.g., Santer et al., 1990; see Section 13.5).
The evolving (transient) pattern of climate response to gradual changes in atmospheric composition was introduced into climate scenarios using outputs from coupled AOGCMs from the early 1990s onwards. Recent AOGCM simulations (see Chapter 9, Table 9.1) begin by modelling historical forcing by greenhouse gases and aerosols from the late 19th or early 20th century onwards. Climate scenarios based on these simulations are being increasingly adopted in impact studies (e.g., Neilson et al., 1997; Downing et al., 2000) along with scenarios based on ensemble simulations (e.g., papers in Parry and Livermore, 1999) and scenarios accounting for multi-decadal natural climatic variability from long AOGCM control simulations (e.g., Hulme et al., 1999a).
There are several limitations that restrict the usefulness of AOGCM outputs for impact assessment: (i) the large resources required to undertake GCM simulations and store their outputs, which have restricted the range of experiments that can be conducted (e.g., the range of radiative forcings assumed); (ii) their coarse spatial resolution compared to the scale of many impact assessments (see Section 13.4); (iii) the difficulty of distinguishing an anthropogenic signal from the noise of natural internal model variability (see Section 13.5); and (iv) the difference in climate sensitivity between models.
The predatory tunicate (Megalodicopia hians) is a species of tunicate (see these two previous posts) which lives anchored along deep-sea canyon walls and the seafloor, waiting for tiny animals to drift or swim into its hood-shaped mouth. Looking something like a cross between a jellyfish and a Venus flytrap (see this post), its mouthlike hood is quick to close when a small animal drifts inside. Once the predatory tunicate catches a meal, it keeps its trap shut until it is ready to eat again. They are known to live in the Monterey Canyon at depths of 200–1,000 metres (660–3,300 ft). They mostly eat zooplankton and tiny animals.
The googly-eyed glass squid (Teuthowenia pellucida) is a rare, slightly blue and transparent deep-sea squid. It gets its name from its disproportionately large eyes. It has eight short arms and one slightly longer pair of tentacles. Its internal digestive organs and, in females, the eggs can be visible through its transparent body. It is able to engorge itself with surrounding water to dramatically increase in size, presenting a more intimidating appearance to potential predators. Like most squid, it can also escape predators using jet propulsion. The cells of its eyes and tentacles form small light-emitting organs (bioluminescent photophores). This array of small lights is used to mask the true identity of the googly-eyed squid from others in the dark. For more on glass squid, see this post.
A photographer's strobe gives a violet sheen to this translucent juvenile roundbelly cowfish (Lactoria diaphana) off the coast of Kona, Hawaii. The Roundbelly Cowfish has also been called the Diaphanous Box-fish, Diaphanous Cowfish, Thorny-back Cowfish, Translucent Boxfish, and Transparent Boxfish. It has a pair of short horns in front of the eyes, a stout spine on the back, and a pair of spines near the anal fin. As its common name suggests, the belly region of the carapace is rounded. The species is yellowish to brown with dusky spots and blotches. Juveniles can be recognized by the almost transparent lower portion of the head and body. The Roundbelly Cowfish occurs in tropical and some temperate waters of the Indo-West and Central Pacific and typically grows to about 30 cm.
The vampire squid (Vampyroteuthis infernalis) looks like something that swam out of a late-night science fiction movie. But in spite of its monstrous name, it is a small creature, growing to only about 6 inches in length. The vampire squid is an ancient species and a phylogenetic relict, meaning that it is the only surviving member of the order Vampyromorphida. It is a unique member of the cephalopod family in that it shares similarities with both squid and octopuses. In fact, it was originally and mistakenly identified as an octopus by researchers in 1903. The vampire squid's body is covered with light-producing organs called photophores. This gives the squid the unique ability to "turn itself on or off" at will through a chemical process known as bioluminescence. When the photophores are off, the squid is completely invisible in the dark waters where it lives. The squid has incredible control over these light organs. It has the ability to modulate the size and intensity of the photophores to create complex patterns that can be used to disorient predators and attract prey. Vampire squid are found throughout the deep oceans of the world in most tropical and temperate regions at depths of between 300 feet (about 90 meters) and 3,000 feet (over 900 meters).
This section contains documents created from scanned original files and other
documents that could not be made accessible to screen reader software. A "#"
symbol is used to denote such documents.
Definition, including P = F*v; example of power for bicycle at two speeds.
Definitions of power, average power, and instantaneous power; power in terms of force (P=F*v) with examples; energy in terms of power (E = P*t); forms of energy.
Definitions, including average and instantaneous power; power in terms of force (P = F*v) with examples; energy in terms of power (E = P*t); forms of energy.
Power defined, with equation (P = F*v); impulse defined; center of mass defined; velocity and momentum of center of mass.
Definition of work-kinetic energy theorem (Kf = Ki + Wf,i); work done by non-constant force; work done along an arbitrary path; line integrals; definition of average power; definition of instantaneous power.
Calculating the stall torque and torque at maximum power of a motor.
Computing the work done and calories burned by a cyclist. | <urn:uuid:b5aaa49c-0260-487e-b1cb-149a1012953b> | 3.546875 | 250 | Content Listing | Science & Tech. | 40.619242 |
Exercise 6.10: stripHtmlTags
Write a method called stripHtmlTags that accepts a Scanner representing an input file containing an HTML web page as its parameter, then reads that file and prints the file's text with all HTML tags removed. A tag is any text between the characters < and >. For example, consider the following text:
```
<html> <head> <title>My web page</title> </head> <body> <p>There are many pictures of my cat here, as well as my <b>very cool</b> blog page, which contains <font color="red">awesome stuff about my trip to Vegas.</p> Here's my cat now:<img src="cat.jpg"> </body> </html>
```
If the file contained these lines, your program should output the following text:
```
My web page There are many pictures of my cat here, as well as my very cool blog page, which contains awesome stuff about my trip to Vegas. Here's my cat now:
```
You may assume that the file is a well-formed HTML document and that there are no stray < or > characters inside tags.
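One possible approach, offered as a sketch rather than a reference solution, is to scan each line character by character and suppress output while inside a tag; because the flag survives across lines, tags that span line breaks are handled too:

```java
import java.util.Scanner;

public static void stripHtmlTags(Scanner input) {
    boolean insideTag = false;
    while (input.hasNextLine()) {
        String line = input.nextLine();
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < line.length(); i++) {
            char c = line.charAt(i);
            if (c == '<') {
                insideTag = true;       // a tag begins; stop copying characters
            } else if (c == '>') {
                insideTag = false;      // the tag ends; resume copying
            } else if (!insideTag) {
                out.append(c);
            }
        }
        System.out.println(out.toString());
    }
}
```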
In the post "pthread creation" we saw how to create a simple thread, and we used the default attributes for the thread.
One of the default attributes is that the thread is always created as joinable, i.e. the parent thread has to call pthread_join for the thread to terminate successfully.
On the other hand, if we do not want to call pthread_join from the parent, but let the child thread finish and then inform the parent, we will have to change the "detachstate" attribute.
If we create a thread in the detached state, then the parent thread need not call pthread_join; it will just have to wait till the thread exits and informs it.
The two states supported by POSIX are PTHREAD_CREATE_JOINABLE and PTHREAD_CREATE_DETACHED.
To set the attributes to their default values we can use
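```c
pthread_attr_t attr;
pthread_attr_init(&attr);   /* fills attr with the default attribute values */
```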
Now to modify the detachstate from joinable we will use the function:
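```c
int pthread_attr_setdetachstate(pthread_attr_t *attr, int detachstate);
```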
Thus to modify the default initialization we will use
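```c
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
```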
Now we can create a thread using the above attribute variable, and we need not call pthread_join on this thread.
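```c
pthread_t tid;
pthread_create(&tid, &attr, hello, NULL);   /* no pthread_join needed for this thread */
```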
Note: hello is the same function we defined in the post "pthread creation"
The full code will look as follows.
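A reconstruction consistent with the description follows; the message printed by "hello" and the parent's wait loop are assumptions, while the POSIX calls themselves are standard:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

void *hello(void *arg)
{
    printf("Hello from the detached thread\n");
    exit(0);   /* terminate the whole process; the parent never joins us */
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);                                     /* default attributes */
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);  /* detached state */

    if (pthread_create(&tid, &attr, hello, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_attr_destroy(&attr);

    /* The parent does not join; it simply waits until the child
       terminates the process from inside hello(). */
    while (1)
        sleep(1);
}
```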
Please note that the function "hello" has the statement exit(0).
This is because the parent is waiting for the child to exit and does not call join to check the child's status.
Now compile it using the -pthread option.
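Assuming the file is saved as detach.c (the file name is just an example):

```
$ gcc -pthread detach.c -o detach
```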
Execute it:
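```
$ ./detach
Hello from the detached thread
```

(The output line is simply whatever "hello" prints; here it is the string from the sketch above.)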
To see the effect of creating a detached thread, remove the exit from the function "hello" and then execute the program. We will see that the process goes into an infinite wait, as the parent thread keeps waiting for the child thread to call exit.
Science subject and location tags
Articles, documents and multimedia from ABC Science
Thursday, 25 February 2010
Palaeontologists in the US have identified the remains of a gigantic, prehistoric shark nicknamed the 'shell crusher', which pulverised shelled animals with its 1000 teeth.
Wednesday, 17 February 2010
A group of British and Canadian palaeontologists have found fossils that show the earliest evidence of animal locomotion.
Wednesday, 10 February 2010
A global drop in oxygen levels may have been the driver that led ancient fish to leave the water and evolve into the first air-breathing animals on land, suggests an Australian study.
Friday, 29 January 2010
The bird family tree just gained a new and distinctive member, according to Chinese palaeontologists.
Thursday, 12 November 2009
An Australian palaeontologist has unearthed the fossilised remains of a new species of dinosaur, which has prompted a re-think about dinosaur evolution.
Friday, 23 October 2009
Scientists have unveiled a tiny dinosaur weighing less than a kilogram and measuring around 70 centimetres long, making it the second smallest ever found.
Thursday, 15 October 2009
A team of Chinese and British palaeontologists has identified a crow-sized fossil that they believe fills a key gap in our understanding of the mysterious flying reptiles known as pterosaurs.
Monday, 5 October 2009
The world's worst mass extinction 250 million years ago was the trigger for a fungus explosion, according to a new study.
Monday, 5 October 2009
Scientists have discovered a piece of fossilised amber that came from a plant living more than 300 million years ago.
Friday, 2 October 2009
The world's oldest and most complete skeleton of a potential human ancestor has been unveiled by an international team of researchers.
Thursday, 1 October 2009
The terrifying Tyrannosaurus rex and its close relatives may have suffered from trichomonosis, a potentially life-threatening disease that still exists today, according to a new study.
Wednesday, 30 September 2009
A team of scientists has overturned the theory that the world's largest lizard evolved on the islands of Indonesia.
Monday, 28 September 2009
To survive freezing temperatures and huge predators, some species of Australian dinosaur took refuge underground, according to a new study.
Friday, 18 September 2009
Scientists say a miniature Tyrannosaurus rex ancestor unearthed in China sheds new light on the evolution of the giant meat-eater.
Thursday, 17 September 2009
Science Feature: Biological curves were used to date the earliest forms of life. But could the discovery of how to make curved inorganic materials in the laboratory throw our understanding of life on Earth into question?
Stem cell everything
Stem cell research is too large an area to review in its entirety, but in 2012, more than in any other year since their stumbling entry onto the world stage, stem cells have proven to be a vitally important part of our quest to master all disease. The ethical problem posed by embryonic stem cells, until recently the only source of total pluripotency, has been largely solved by the ability to roll back adult cells to more stem-y versions (this reprogramming work won the 2012 Nobel Prize in medicine). As a result, research has lurched forward like a dog let slip, and the results are already exciting to say the least.
Stem cells have let researchers treat multiple sclerosis in mice and ease a certain sort of blindness. A patient's stem cells can be used to repair damaged heart tissue following a heart attack, or to grow new heart cells from nothing more than a skin sample. Stem cells can grow us an organ, and they can increase that organ's chance of acceptance by the body. They can be programmed to seek out viruses, like HIV. Perhaps most excitingly, 2012 saw the successful development of a cluster of cloned brain neurons, a tiny grown brain sample with enormous implications for psycho-pharmacological research; if everybody reacts differently to a medicine, just test the medicine on their brain before giving it to them at all. Stem cells have the potential to make the "take this and tell me if you feel bad" school of medicating seem crude, like leeching an infection.
Embryonic cells are still necessary for a few things, however, such as growing bone or bone marrow. Still, the future is bright.
In a paper published in the current issue of Nature Genetics, the researchers reported finding some surprises as they have decoded the genome of the worm, a tiny nematode called Pristionchus pacificus.
"We found a larger number of genes than we expected," says Sandra Clifton, Ph.D., research assistant professor of genetics and a co-author of the paper. "These include genes that help the worms live in a hostile environment, the result of living in and being exposed to the byproducts of decaying beetle carcasses, and others that also have been found in plant parasitic nematodes. The genome supports the theory that P. pacificus might be a precursor to parasitic worms."
Scientists estimate there are tens of thousands of nematode species. The worms are typically just one millimeter long and can be found in every ecosystem on Earth. Parasitic nematodes can infect humans as well as animals and plants.
One nematode in particular is well known in scientific circles: Caenorhabditis elegans has long been used as a model organism in research laboratories. Its genome sequence was completed in 1998 by Washington University genome scientists working as part of an international research collaboration.
Unlike C. elegans, which lives in the dirt, P. pacificus makes its home in an unusual ecological niche: it lives together with oriental beetles in the United States and Japan in order to devour the bacteria, fungi and other small roundworms that grow on beetle carcasses after they have died. While the beetles are alive and the nematodes' food source is scarce, the worms live in a "resting" stage in which they don't eat or reproduce.
This suspended state, called dauer diapause, is thought to be the infective state of parasitic nematodes. According to the World Health Organization, parasitic nematodes infect about 2 billion people worldwide and severely sicken some 300 million.
The genome of P. pacificus is substantially larger and more complex than that of C. elegans. It is nearly 170 million chemical bases long and contains 23,500 protein-coding genes. By comparison, C. elegans and the human parasitic nematode Brugia malayi, whose genome was sequenced in 2007, only have about 20,000 and 12,000 protein-coding genes, respectively. Infection with B. malayi causes lymphatic filariasis, which can lead to elephantiasis, a grotesque enlargement of the arms, legs and genitals.
Interestingly, the P. pacificus genome contains a number of genes for cellulases - enzymes that are required to break down cell walls of plants and microorganisms. These genes are nonexistent in C. elegans, although they have been found in plant parasitic nematodes.
"Using genetic tools, we can analyze the development, behavior and ecology of this highly unusual worm to aid in understanding the evolutionary changes that allowed parasitism to occur," says co-author Richard K. Wilson, Ph.D., director of Washington University's Genome Sequencing Center.
The P. pacificus genome was sequenced at Washington University; Ralf Sommer, Ph.D., and colleagues at the Max-Planck Institute supplied the DNA for sequencing and analyzed the sequence data.
The research was funded by the National Human Genome Research Institute and the Max-Planck Society.
Dieterich C, Clifton S, Schuster L, Chinwalla A, Delehaunty K, Dinkelacker I, Fulton L, Fulton R, Godfrey J, Minx P, Mitreva M, Roeseler W, Tian H, Witte H, Yang S-P, Wilson R, Sommer RJ. The genome sequence of Pristionchus pacificus provides a unique perspective on nematode life-style and the evolution toward parasitism. Nature Genetics (2008).
Washington University School of Medicine's 2,100 employed and volunteer faculty physicians also are the medical staff of Barnes-Jewish and St. Louis Children's hospitals. The School of Medicine is one of the leading medical research, teaching and patient care institutions in the nation, currently ranked fourth in the nation by U.S. News & World Report. Through its affiliations with Barnes-Jewish and St. Louis Children's hospitals, the School of Medicine is linked to BJC HealthCare.
Caroline Arbanas | Source: Newswise Science News
Further information: www.wustl.edu
Oil and gas might be running out, but renewable power sucks so much it accounts for less than 10 per cent of all the energy we use. The answer? Recreate the sun using nuclear fusion, in a sleepy corner of the UK.
No, really. Over the past few decades scientists and engineers have been scratching their heads over how to solve our energy problems using the process that powers stars. In fact, Europe’s biggest nuclear fusion device, the Joint European Torus (JET), is leading the way, and it’s hidden away in the tranquil Oxfordshire countryside. “In effect we’re making a miniature version of the sun in the laboratory,” explains Nick Holloway from JET. But how the hell do they do it, and when will it be powering our laptops?
Alleviating Any Con-Fusion
To answer that, first a crash-course in nuclear physics using my love life as a metaphor. Much like many of my romantic encounters, fusion requires bringing together two objects that tend to repel each other. By joining two hydrogen nuclei, it’s possible to create a new helium nucleus, and simultaneously release serious amounts of energy. For some perspective, just 25g of hydrogen isotopes — the same weight as two iPod Shuffles — could produce enough electricity to last an average European a lifetime. Hear that, solar power?
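As a rough sanity check on that figure (this is back-of-envelope arithmetic, not a JET number; it assumes deuterium-tritium fuel, 17.6 MeV released per fusion and roughly one-third conversion of heat to electricity), a few lines of Python reproduce the order of magnitude:

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13

fuel_g = 25.0                           # the "two iPod Shuffles" of fuel
pair_g_per_mol = 2.014 + 3.016          # one deuteron plus one triton
reactions = fuel_g / pair_g_per_mol * AVOGADRO
thermal_j = reactions * 17.6 * MEV_TO_J
electric_kwh = thermal_j / 3.0 / 3.6e6  # assume ~1/3 plant efficiency

print("%.1e J of fusion heat" % thermal_j)       # ~8e12 J
print("%.0f kWh of electricity" % electric_kwh)  # ~780,000 kWh

At a few thousand kilowatt-hours per person per year, that really is on the order of a lifetime's electricity.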
The tricky bit, though, is getting those damned nuclei close enough to fuse, because they both have a positive charge which means they repel each other. The solution isn’t elegant, but it does work: throw enough heat at the hydrogen, and it becomes plasma — a hot soup of nuclei and electrons, with enough energy to overcome the repulsion. A bit like the lubricating role alcohol used to play in my romantic encounters. But, unlike my love life, the reaction needs to reach a scorching 100 million °C to get going — which, if you’re wondering, is ten times hotter than the sun. So how on earth do they do that in a country where 20 degrees Celsius is considered tropical?
Einstein’s theories of relativity hold that space and time are woven together into a four-dimensional fabric, and that a weighty body like a planet or a star depresses that fabric, like someone sitting on a chair or a trampoline. Gravitational attraction is really just objects following the warped path.
What’s more, the rotation of a massive body would also affect the fabric, so that a distant observer would perceive objects close to a gravitational body as being dragged around. Think of Earth sitting in a vat of liquid — as the planet rotates, the liquid starts to swirl, too, and so does everything near the Earth.
If this is true, the axis of a gyroscope would change when compared to the light from a faraway star. This is what GP-B was designed to do.
Orbiting 400 miles above the Earth in a polar orbit, GP-B contains four gyroscopes made of quartz-silicon spheres that are considered nearly perfect — they’re in the Guinness Book of World Records. It has a telescope that stared at a single star, IM Pegasi, while the satellite made its rounds. If the Earth’s mass did not affect spacetime, the gyroscopes would point in the same direction forever. But they didn’t, experiencing teeny but measurable changes in the direction of their spin. This is exactly what Einstein predicted back in 1916.
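For scale, the frame-dragging part of that drift is tiny. A rough, hedged estimate (using the textbook orbit-averaged Lense-Thirring rate for a polar orbit, roughly G*J / (2 * c^2 * r^3), with standard values for Earth's spin angular momentum) lands near the few-dozen-milliarcsecond-per-year signal GP-B reported:

import math

G = 6.674e-11              # gravitational constant
c = 2.998e8                # speed of light, m/s
J_earth = 5.86e33          # Earth's spin angular momentum, kg m^2/s
r = 6.371e6 + 6.42e5       # orbital radius at ~400 miles up, m

omega = G * J_earth / (2.0 * c**2 * r**3)                # drift rate, rad/s
mas_per_year = omega * 3.156e7 * math.degrees(1) * 3600.0e3
print("%.0f milliarcseconds per year" % mas_per_year)    # ~40 mas/yr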
A sample of a bizarre crystal once considered unnatural may have arrived on Earth 15,000 years ago, having hitched a ride on a meteorite, a new study suggests.
The research strengthens the evidence that this strange "quasicrystal" is extraterrestrial in origin.
The pattern of atoms in a quasicrystal falls short of the perfectly regular arrangement found in crystals. Until January, all known quasicrystals were man-made. "Many thought it had to be that way, because they thought quasicrystals are too delicate, too prone to crystallization, to form naturally," study researcher Paul Steinhardt of Princeton University told LiveScience at the time.
Then researchers announced the presence of a natural quasicrystal in a meteorite found in the Koryak Mountains of Russia. That meteorite was being kept in a museum in Italy. On an expedition to the Russian site where it was found, Steinhardt and his colleagues have now found more natural samples of quasicrystals for analysis.
Quasicrystals were first synthesized in a lab in 1982 by Israeli chemist Dan Shechtman, whose work won the Nobel Prize for Chemistry in 2011. Regular crystals are made up of regular clusters of repeating atoms arranged in particular symmetries. Quasicrystals are orderly, too, but they do not exactly repeat themselves. If regular crystals are like boring bathroom tiles, quasicrystals are like complex tile mosaics. | <urn:uuid:b9953f26-f73b-4ebd-ab97-c197eb05b3e6> | 3.671875 | 309 | Comment Section | Science & Tech. | 31.864163 |
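A one-dimensional toy (not from the study, just a standard illustration) makes the "orderly but never repeating" idea concrete: the Fibonacci word, generated by the substitution rule A -> AB, B -> A, is completely deterministic yet has no repeating period, much like a quasicrystal's tile mosaic:

def fibonacci_word(generations):
    word = "A"
    for _ in range(generations):
        word = "".join("AB" if ch == "A" else "A" for ch in word)
    return word

print(fibonacci_word(8)[:34])   # ABAABABAABAAB... orderly, never periodic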
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
"speed of sound"
We found 16 results on physics.org and 164 results in our database of sites
(153 are Websites, 1 is a Video, and 10 are Experiments)
Search results on physics.org
Search results from our links database
How the new roof at Wimbledon affects tennis ball speeds.
The cross-country navigation of an aircraft involves the vector addition of relative velocities since the resultant ground speed is the vector sum of the airspeed and the wind speed.
The speed distribution for the molecules of an ideal gas is given by Maxwell's speed distribution
Shows a simulation for crossing a river in a boat. Speed of river, boat angle and speed can be changed.
Page explaining speed and velocity, and the difference between the two.
A good site on how animals such as bats or dolphins see with sound (also known as echolocation ).
A good web page that highlights links for beginners about sound.
Diagrams showing how sound is reflected and standing waves.
An interesting page on how sound is refracted and examples of the effect this causes.
Microphones are transducers which detect sound signals and produce an electrical image of the sound
Showing 31 - 40 of 164 | <urn:uuid:ccf259bc-35eb-433a-b645-127f6da29f53> | 3.21875 | 301 | Content Listing | Science & Tech. | 59.3625 |
|Apr3-08, 08:04 PM||#1|
Frames of reference question
1. The problem statement, all variables and given/known data
A smooth level table is centered on a platform which rotates.
- The uniform rotation is at: one revolution in 12 seconds
- Two perpendicular lines are drawn through the centre of the table, intersecting a circle of 1.20m radius at points: A', C', B' & D'.
- Two men, H' and I', sit on the platform at opposite ends of line A'C'
- A third man, J, is above the table so that he can observe the motion of a frictionless puck in a stationary frame of reference.
- J has four marks on the floor, forming two perpendicular reference lines AC and BD through the centre of the table.
As H' passes A he gives the puck a sudden push so that it travels along line AC with a speed of 0.40m/s. Construct a vector diagram to show the velocity that H' gave the puck in J's frame of reference.
Make a diagram of the puck's motion as seen by H' and I'
Suppose that H' launches the puck as he passes A so that I' will catch the puck as he passes D. Construct a diagram which shows the motion of the puck in J's frame of reference. What is the speed of the puck in this frame of reference?
With what speed and in what direction can H' launch the puck as he passes A so that, as J sees it, the puck remains at A?
2. Relevant equations
Fc = mv^2/R
3. The attempt at a solution
I frankly don't know where to begin but for the first question I know that in J's frame of reference the puck will look like it is moving in a straight line.
To H' and I' the motion will look circular.
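Not a full solution, but a quick numeric sketch of the setup might help you start (assuming the push along AC is radial, so it is perpendicular to H's tangential motion):

import math

T, R, push = 12.0, 1.2, 0.40           # period (s), radius (m), push speed (m/s)
v_rim = 2.0 * math.pi * R / T          # H's tangential speed, ~0.63 m/s
v_in_J = math.hypot(push, v_rim)       # vector sum seen in J's frame
print(v_rim, v_in_J)                   # ~0.63 m/s and ~0.74 m/s

# For the last part: the puck stays at A in J's frame only if H' launches
# it at v_rim (~0.63 m/s), directed opposite to his own tangential motion.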
You probably didn’t wake up this morning wondering what happens to the antiprotons that must be created by the collision of cosmic rays with the upper atmosphere. But if you are one of the few who loses sleep over the fact that these antiprotons should be somewhere out there but have yet to be directly detected, we are happy to report that you can rest easy: Astrophysicists have finally found them trapped in an antiproton belt around the Earth.
For those of you who couldn't care less about antiprotons or what an antiproton is, this is less a story about antiprotons or earth-shattering discovery and more a story about good science bearing fruit. See, when cosmic rays from the sun and elsewhere in the cosmos bombard nuclei in the upper atmosphere, the resulting particle collisions are akin to those that occur in particle accelerators here on the ground. And like those laboratory collisions, these smash-ups birth daughter particles. Astronomers have long thought these collisions must produce antiprotons just as they do in the lab, but thus far no one has been able to prove definitively what happens to these antiprotons as they’re difficult to seek out and measure, especially from the ground. Theoretically it made sense that they should be trapped by the Earth’s magnetic field, yet no antiproton cloud was empirically evident.
Enter PAMELA, a low Earth orbiting spacecraft launched in 2006 to seek out antiprotons in cosmic rays. Each day PAMELA makes a pass through the South Atlantic Anomaly, the part of the Van Allen Belts that comes closest to the Earth and a sort of tide pool for energetic particles. If the antiprotons are collecting anywhere, they ought to be here.
And now, after analyzing 850 days of data, it turns out they are. PAMELA tracked down exactly 28 of them, which is actually way more than one might expect to find blowing in the solar wind. In other words, antiprotons are being captured and stored there. Solid scientific theory (and high-tech orbiting hardware) wins again.
A flexible optically transparent fiber, usually made of glass or plastic, through which light can be transmitted by successive internal reflections.
The field of technology that combines the physics of light with electricity. Optoelectronics encompasses the study, design and manufacture of devices that source, detect and control light, converting electrical signals into photon signals and vice versa.
The path of an electron around the nucleus of an atom.
The branch of Chemistry devoted to the study of the element carbon (symbol C), its compounds and their properties. The name organic came from the word organism, as prior to 1828 all organic compounds had been obtained from organisms or their remains. Over six million organic compounds have been characterized, including the foods we eat, furs and feathers and the organisms they came from, but also plastics, dyes and drugs, insecticides, petroleum products etc. Organic chemistry has the biggest impact on daily life, both because of the variety of practical applications and because it helps us better understand life around us.
Chemical synthesis dealing with the synthesis of organic compounds via organic reactions.
The third most abundant element in the universe by mass after hydrogen and helium and the most abundant element by mass in the Earth’s crust. Diatomic oxygen gas constitutes 20.9% of the volume of air.
Ozone (O3) is a form of oxygen. It is a colourless gas that has a very pungent odour. It exists naturally at low concentrations in the stratosphere where it absorbs ultraviolet radiation. In the troposphere it exists naturally at extremely low concentrations. These concentrations increase when sunlight acts on various gases, coming mainly from vehicle exhausts, and ozone then becomes a pollutant in the troposphere. Ozone is a highly corrosive gas and is poisonous to most organisms. At concentrations as low as 0.00001 per cent (or 10 parts per hundred million), it can irritate the membranes lining the nose, throat and airways and can trigger or exacerbate asthma attacks. | <urn:uuid:cdd77568-de19-4c81-b468-7101c2e72eab> | 3.25 | 403 | Structured Data | Science & Tech. | 36.661052 |
WEEK 1 (FRACTALS)
1-1) Play the following "line chaos game": You need a coin , paper and pencil.
Draw a segment across the page; we will call its length "1". The left end is called H (for Heads) and the right is called T (for Tails).
Rules: Pick any point in the segment (called Xo, the "seed"). Flip the coin. If it turns up Heads then move the point toward the H end so that its distance to H is 1/3 of the previous distance. Conversely, if it is Tails then move the point toward the T end so that its distance to T is 1/3 of the previous distance. Repeat...
As an exercise to practice the game, do the following sequence: Start at the mid-point (Xo=1/2) and sketch the orbit for this succession of coin flips: H,H,T. You should get something like this:
Your job is to find out what object would emerge if you would play this game for thousands of flips (that is, regardless of the seed you start with, as long as you erase the first few points, and the particular successions that might occur, is there a PATTERN that emerges? Is this an object you know?). A bit of thinking, and a short VPython program might help you.
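Here is a minimal sketch of that program in plain Python (VPython would only be needed for the graphics); the seed and the coin flips are arbitrary, and the point is what the long-run orbit avoids:

import random

x = random.random()                     # any seed in [0, 1]
points = []
for i in range(100000):
    if random.random() < 0.5:           # Heads
        x = x / 3.0                     # distance to H becomes 1/3 of old
    else:                               # Tails
        x = 1.0 - (1.0 - x) / 3.0       # distance to T becomes 1/3 of old
    if i > 100:                         # discard the transient
        points.append(x)

# every surviving point misses the open middle third (1/3, 2/3)
print(all(not (1.0/3.0 < p < 2.0/3.0) for p in points))

Whatever the seed, the orbit settles onto the middle-thirds Cantor set.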
1-2) Draw by hand the first three iterations of the fractals generated by removal according to the following rules:
a) "FRACTAL +": Start with a square of side length "1". Divide into 9 equal squares (side 1/3 each), and then remove the four corner squares (to get the + shape). Repeat for each remaining square.
b) "FRACTAL H": Start with a square of side length "1". Divide into 9 equal squares (side 1/3 each), and then remove the top and bottom mid- squares (to get the H shape). Repeat for each remaining square.
c) "FRACTAL X": Start with a square of side length "1". Divide into 9 equal squares (side 1/3 each), and then remove the top, bottom, left and right mid- squares (to get the X shape). Repeat for each remaining square.
d) "FRACTAL O": Start with a square of side length "1". Divide into 9 equal squares (side 1/3 each), and then remove the center square (to get the square O shape). Repeat for each remaining square. What is the "technical" name of this fractal?
Can you calculate the fractal dimension of the above objects?
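If you use the self-similarity dimension D = log N / log s (N pieces kept, each scaled down by a factor s), a few lines check all four answers; the counts below are read off the removal rules above:

import math

kept = {"FRACTAL +": 5,   # 9 squares minus the 4 corners
        "FRACTAL H": 7,   # minus top and bottom mid-squares
        "FRACTAL X": 5,   # minus the four edge mid-squares
        "FRACTAL O": 8}   # minus the center square
for name, n in kept.items():
    print(name, math.log(n) / math.log(3))  # s = 3 for every rule

FRACTAL O, with D = log 8 / log 3, about 1.893, is usually called the Sierpinski carpet.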
1-3) Make a notch in the middle of one side of a piece of typing paper. Hold the sides of the paper firmly and pull them apart (in the plane of the paper) until the paper tears. The tear edge will be jagged and fractal-like. Overlay the torn edge on a piece of graph paper and estimate the dimension of the edge by box counting. Does the dimension depend on the speed with which the paper is torn? Try tissue paper instead of typing paper. Does the character of the edge depend on the structure of the paper?
1-4) Make a copy of fine graph paper on a overhead foil. Use this as one of the sheets in the "blob of goo between two sheets of plastic" experiment: Put a small blob of goo (tootpaste and mud work well) on this sheet, cover with a plain overhead transparency foil, and press firmly on the top foil until the small blob is squeezed out to a large, thin blob. Pull the foils apart, producing a fingery pattern. Do box counting to estimate the average dimension of the boundary of this "dendrite-like" pattern. Try the experiment with different pulling speeds. Does the dimension depend on the speed with which the sheets are separated? Also, try different "goos." Does the dimension depend on the material used?
1-5) Measure the fractal dimension of the boundary of a leaf of your choice. Compare with other people's results.
1-6) Measure the fractal dimension of the boundary of a feature on a map of your choice (could be a costaline, lake, country, etc.. Compare with other people's results.
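For all of the box-counting exercises above (torn paper, goo, leaf, map feature), the same rough estimator works once the boundary is scanned or traced into a black-and-white image; this is a sketch, assuming the image is a boolean NumPy array with the boundary pixels set:

import numpy as np

def box_count_dimension(img, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        boxes = 0
        for i in range(0, img.shape[0], s):
            for j in range(0, img.shape[1], s):
                if img[i:i+s, j:j+s].any():   # this box touches the boundary
                    boxes += 1
        counts.append(boxes)
    # the dimension estimate is the slope of log N versus log(1/size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes, float)),
                          np.log(counts), 1)
    return slope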
WEEK 2 (CHAOS)
2-1) Below is a graph of the tent map, defined by the following iteration rule:
Xn+1 = s * Xn          if Xn < 1/2
Xn+1 = s * (1 - Xn)    if Xn > 1/2
Show the maximum height of the tent map graph is s/2.
Show the nonzero fixed point occurs at Xf = s/(1+s).
For s > 1, show the first iterate of s/2 has the value s (1 - s/2 ).
Use graphical iteration to locate four points which the tent map takes to the fixed point w of the Figure below.
Numerically calculate the four values of x you found when s = 2.
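A short numeric check of those answers for s = 2 (the four preimages below are one possible choice: 1/6 and 5/6 map to 1/3, which maps to the fixed point 2/3):

def tent(x, s=2.0):
    return s * x if x < 0.5 else s * (1.0 - x)

print(2.0 / (1.0 + 2.0))          # fixed point s/(1+s) = 2/3
for x in (1.0/6, 1.0/3, 5.0/6, 2.0/3):
    a = x
    for _ in range(3):
        a = tent(a)
    print(x, "->", a)             # every orbit lands on 2/3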
2-2) LIFE AND NONLINEAR DYNAMICS
Write a parable of your own. Specifically, take an instance from your life where a small choice, perhaps made in an offhand fashion, led eventually to a significant difference in your life. For your parable to be apt to the context of this course, the train of events flowing from your choice has to be deterministic. Try to identify some of the nonlinearities in the "dynamics of your life" that helped cause your choice to initiate the unexpected chain of events it did.
To emphasize the magnitude of the effect of this small choice, construct a plausible scenario of the result of your making a different small choice (under the same "life dynamics").
Do some research to find out what economists mean by the "tulip boom/bust" experience.
What do you think is the connection between market crashes and non-linear systems?
And outbreak of war and non-linear systems?
How about epidemics?
2-4) The Game of Life
a) Determine by hand the next generation of the Life configurations below
b) Determine by hand the next two generations of the Life configurations below
Using LifeLab and a random initial condition, observe which stable configurations arise spontaneously from the Life rules. (Allow some time to pass before drawing any conclusions - the transients must be allowed to die down.)
A textbook description is no substitute for the experience of Life. Play with LIFE to get a hint of the Game's many wonders. But here's a warning label for the user: Life is powerfully addictive! Use only when not faced with an impending deadline.
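If LifeLab is not at hand, a minimal stand-in is easy to write; this sketch steps Life on a wrapped (toroidal) grid using the standard rules (a live cell survives with 2 or 3 neighbors, a dead cell is born with exactly 3):

import numpy as np

def life_step(grid):
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))            # count the 8 neighbors
    return (n == 3) | (grid & (n == 2))

grid = np.zeros((5, 5), dtype=bool)
grid[2, 1:4] = True                           # a horizontal blinker...
print(life_step(grid).astype(int))            # ...flips to vertical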
"Human consciousness, free will, or feelings can never be understood in terms of physical laws."
"People are mechanistic entities whose behaviors emerge from the same principles that govern all forms of matter."
Here are two opposite views of humanity.
Construct an argument that supports each position.
Which argument do you find more compelling? | <urn:uuid:5a712ac8-60a6-490e-b283-4441dcae14e0> | 3.609375 | 1,479 | Tutorial | Science & Tech. | 74.26975 |
Wildfires: A Symptom Of Climate Change
Hampton VA (SPX) Sep 29, 2010
This summer, wildfires swept across some 22 regions of Russia, blanketing the country with dense smoke and in some cases destroying entire villages. In the foothills of Boulder, Colo., this month, wildfires exacted a similar toll on a smaller scale.
That's just the tip of the iceberg. Thousands of wildfires large and small are underway at any given time across the globe. Beyond the obvious immediate health effects, this "biomass" burning is part of the equation for global warming. In northern latitudes, wildfires actually are a symptom of the Earth's warming.
'We already see the initial signs of climate change, and fires are part of it," said Dr. Amber Soja, a biomass burning expert at the National Institute of Aerospace (NIA) in Hampton, Va.
And research suggests that a hotter Earth resulting from global warming will lead to more frequent and larger fires.
The fires release "particulates" - tiny particles that become airborne - and greenhouse gases that warm the planet.
"What we found is that 90 percent of biomass burning is human instigated," said Levine, who was the principal investigator for a NASA biomass burning program that ran from 1985 to 1999.
Levine and others in the Langley-led Biomass Burning Program travelled to wildfires in Canada, California, Russia, South African, Mexico and the wetlands of NASA's Kennedy Space Center in Florida.
Biomass burning accounts for the annual production of some 30 percent of atmospheric carbon dioxide, a leading cause of global warming, Levine said.
Dr. Paul F. Crutzen, a pioneer of biomass burning, was the first to document the gases produced by wildfires in addition to carbon dioxide.
"Modern global estimates agree rather well with the initial values," said Crutzen, who shared the Nobel Prize in Chemistry 1995 with Mario J. Molina and F. Sherwood Rowland for their "work in atmospheric chemistry, particularly concerning the formation and decomposition of ozone."
The reason is that, unlike the tropics, northern latitudes are warming, and experiencing less precipitation, making them more susceptible to fire. Coniferous trees shed needles, which are stored in deep organic layers over time, providing abundant fuel for fires, said Soja, whose work at the NIA supports NASA.
"That's one of the reasons northern latitudes are so important," she said, "and the smoldering peat causes horrible air quality that can affect human health and result in death."
Fires in different ecosystems burn at different temperatures due to the nature and structure of the biomass and its moisture content. Burning biomass varies from very thin, dry grasses in savannahs to the very dense and massive, moister trees of the boreal, temperate and tropical forests.
Fire combustion products vary over a range depending on the degree of combustion, said Levine, who authored a chapter on biomass burning for a book titled "Methane and Climate Change," published in August by Earthscan.
Flaming combustion like the kind in thin, small, dry grasses in savannahs results in near-complete combustion and produces mostly carbon dioxide. Smoldering combustion in moist, larger fuels like those in forest and peatlands results in incomplete combustion and dirtier emission products such as carbon monoxide.
Boreal fires burn the hottest and contribute more pollutants per unit area burned.
In Russia, the wildfires are believed caused by a warming climate that made the current summer the hottest on record. The hotter weather increases the incidence of lightning, the major cause of naturally occurring biomass burning.
Soja said she hopes the wildfires in Russia prompt the country to support efforts to mitigate climate change. In fact, Russia's president, Dmitri A. Medvedev, last month acknowledged the need to do something about it.
"What's happening with the planet's climate right now needs to be a wake-up call to all of us, meaning all heads of state, all heads of social organizations, in order to take a more energetic approach to countering the global changes to the climate," said Medvedev, in contrast to Russia's long-standing position that human-induced climate change is not occurring.
An international team of scientists recently announced the discovery of a new species of blind deep-sea crab whose legs are covered with long, pale yellow hairs. This crab was first observed in March 2005 by marine biologists using the research submarine Alvin to explore hydrothermal vents along the Pacific-Antarctic ridge, south of Easter Island. Because of its hairy legs, this animal was nicknamed the "Yeti crab," after the fabled Yeti, the abominable snowman of the Himalayas. The Yeti crab was discovered during the Easter Microplate expedition to the southeast Pacific, led by MBARI scientist Bob Vrijenhoek. The primary goal of this expedition was to learn how bottom-dwelling animals from one deep-sea hydrothermal vent are able to colonize other hydrothermal vents hundreds or thousands of miles away. Vrijenhoek and his team were addressing this question by comparing the DNA of animals at hydrothermal vents in different parts of the Pacific Ocean.
During one Alvin dive, marine biologist Michel Segonzac, from Institut français de recherche pour l'exploitation de la mer (IFREMER) in France, noticed an unusually large (15-cm-long) crab with hairy arms lurking on the seafloor. Segonzac asked the Alvin pilots to collect this crab and bring it back to the surface.
The researchers saw more of these unusual crabs during subsequent Alvin dives. Most of the crabs were living at depths of about 2,200 meters (7,200 feet) on recent lava flows and areas where warm water was seeping out of the sea floor. According to MBARI biologist Joe Jones, "Many of the crabs were hiding underneath or behind rocks—all we could see were the tips of their arms sticking out."
After returning to shore, researchers Segonzac and Jones worked with Enrique Macpherson from the Consejo Superior de Investigaciones Científicas (CSIC) in Spain to identify the crab they had collected. They found that the crab was not only a new species (which they named Kiwa hirsuta), but an entirely new family (Kiwaidae). The Yeti crab is a distant relative of the hermit crabs commonly seen lurking in tide pools.
Deprecated since version 2.6: This module is obsolete. Use the subprocess module. Check especially the Replacing Older Functions with the subprocess Module section.
This module allows you to spawn processes and connect to their input/output/error pipes and obtain their return codes under Unix and Windows.
The primary interface offered by this module is a trio of factory functions. For each of these, if bufsize is specified, it specifies the buffer size for the I/O pipes. mode, if provided, should be the string 'b' or 't'; on Windows this is needed to determine whether the file objects should be opened in binary or text mode. The default value for mode is 't'.
On Unix, cmd may be a sequence, in which case arguments will be passed directly to the program without shell intervention (as with os.spawnv()). If cmd is a string it will be passed to the shell (as with os.system()).
The only way to retrieve the return codes for the child processes is by using the poll() or wait() methods on the Popen3 and Popen4 classes; these are only available on Unix. This information is not available when using the popen2(), popen3(), and popen4() functions, or the equivalent functions in the os module. (Note that the tuples returned by the os module’s functions are in a different order from the ones returned by the popen2 module.)
Executes cmd as a sub-process. Returns the file objects (child_stdout_and_stderr, child_stdin).
New in version 2.0.
On Unix, a class defining the objects returned by the factory functions is also available. These are not used for the Windows implementation, and are not available on that platform.
If not using one of the helper functions to create Popen3 objects, the parameter cmd is the shell command to execute in a sub-process. The capturestderr flag, if true, specifies that the object should capture standard error output of the child process. The default is false. If the bufsize parameter is specified, it specifies the size of the I/O buffers to/from the child process.
New in version 2.0.
The following attributes are also available: fromchild (a file object that provides output from the child process), tochild (a file object that provides input to the child process), childerr (a file object that provides error output from the child process if capturestderr was true for the constructor, otherwise None), and pid (the process ID of the child process).
Any time you are working with any form of inter-process communication, control flow needs to be carefully thought out. This remains the case with the file objects provided by this module (or the os module equivalents).
When reading output from a child process that writes a lot of data to standard error while the parent is reading from the child’s standard output, a deadlock can occur. A similar situation can occur with other combinations of reads and writes. The essential factors are that more than _PC_PIPE_BUF bytes are being written by one process in a blocking fashion, while the other process is reading from the first process, also in a blocking fashion.
There are several ways to deal with this situation.
The simplest application change, in many cases, will be to follow this model in the parent process:
import popen2

r, w, e = popen2.popen3('python slave.py')
e.readlines()
r.readlines()
r.close()
e.close()
w.close()
with code like this in the child:
import os
import sys

# note that each of these print statements
# writes a single long string
print >>sys.stderr, 400 * 'this is a test\n'
os.close(sys.stderr.fileno())
print >>sys.stdout, 400 * 'this is another test\n'
In particular, note that sys.stderr must be closed after writing all data, or readlines() won’t return. Also note that os.close() must be used, as sys.stderr.close() won’t close stderr (otherwise assigning to sys.stderr will silently close it, so no further errors can be printed).
Applications which need to support a more general approach should integrate I/O over pipes with their select() loops, or use separate threads to read each of the individual files provided by whichever popen*() function or Popen* class was used. | <urn:uuid:847cae06-922f-43e8-a7c8-7c141ef75996> | 2.953125 | 898 | Documentation | Software Dev. | 58.97525 |
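For reference, the deprecation notice at the top points to subprocess; a minimal equivalent of the popen3() example is sketched below (Python 2 syntax, to match the examples above; 'python slave.py' is the same placeholder command). communicate() reads both streams for you, which sidesteps the deadlock discussed above:

from subprocess import Popen, PIPE

p = Popen(['python', 'slave.py'], stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = p.communicate()   # drains stdout and stderr without deadlocking
print p.returncode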
The Pentagon Problem
Stephanie J. Morris
The "Pentagon Problem" begins with any pentagon. A second pentagon,
in blue, is formed by connecting the midpoints of each side of the first
pentagon. A third pentagon, in green, is constructed the same way by connecting
the midpoints of pentagon 2.
"As long as each side of the two smaller pentagons
are formed by joining the midpoints of the next larger pentagon,
the two ratios of the areas will always be equal, and the two ratios of
the perimeters will always be equal, no matter what shape the largest pentagon
Do you think that this conjecture is valid? Why or why not?
When I first read this conjecture, I thought that it seemed pretty "logical"
that the conjecture was a valid one. I believed that after several constructions
of various pentagons and the comparisons of the above mentioned ratios,
that I could come up with a proof of this conjecture.
However, after I began investigating the conjecture with the Geometer's
Sketchpad, I soon learned that maybe my own assumptions of the conjecture's
validity and its proof were not so logical after all.
Investigation of Pentagons:
I began with an arbitrary pentagon, i.e. it is not a regular pentagon. I
then constructed the other two pentagons according to the guidelines of
the problem. The original pentagon, pentagon 1 is in red. Pentagon 2 is
in blue and pentagon 3 is in green.
At first glance, it seems that the conjecture just might be true. There
appears to be a constant ratio between the Area of the Pentagon and the
area of the pentagon from which it was constructed.
Area Pentagon 3 / Area Pentagon 2 = .65 and Area Pentagon 2 / Area Pentagon 1 = .65
After seeing this, I conjectured that the same ratio would hold when comparing
the area of pentagon 3 and the area of pentagon 1. I thought that it would
hold since pentagon 3 is constructed from pentagon 2 which is constructed
from pentagon 1. It seemed that since pentagon 3 is indirectly constructed
from pentagon 1, that they would be related in the same way also.
Area Pentagon 3 / Area Pentagon 1 = .42
After seeing that the ratio did not hold, I realized that it did not make
sense for it to hold. I conjectured that the ratio would not be constant,
but that perhaps there would be a constant relationship between the areas
of pentagon 3 and pentagon 1. For instance, a fractional relationship. Maybe
I would discover this when I changed the shape of the original pentagon
later in my investigation.
Next, I decided to look at the ratios of the perimeters of pentagons 1,
2 , and 3.
Perimeter 2 / Perimeter 1 = .80 and Perimeter 3 / Perimeter 2 = .81
These ratios were very close, but not exactly equal. It seemed
likely that the minute difference could be attributed to the way that the
measurements were rounded off by Geometer's Sketchpad. I decided that the
next pentagons I looked at would have their measurements carried out to
the thousandths place. I was curious to see what the ratio would be
between the perimeter of pentagon 3 and the perimeter of pentagon 1.
Perimeter 3 / Perimeter 1 = .64
Once again, this comparison yielded a ratio that was not the same as the
ratios between the other pentagons when they were compared.
I was beginning to doubt the validity of the original conjecture. The ratios
were "approximately equal", but not exactly equal. Exactly equal
seemed to be what the conjecture called for.
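A quick numerical aside (mine, not part of the original investigation) explains why the ratios hovered near .65 and .81: for a regular pentagon, the midpoint pentagon is an exact copy scaled by cos 36° (about .809), giving a perimeter ratio of about .809 and an area ratio of cos² 36° (about .6545). A near-regular starting pentagon therefore gives near-constant ratios. The shoelace formula makes this easy to check:

import math

def shoelace_area(pts):
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

def midpoints(pts):
    return [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
            for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])]

p1 = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
      for k in range(5)]
p2 = midpoints(p1)
print(shoelace_area(p2) / shoelace_area(p1))   # 0.6545..., i.e. cos(36 deg)**2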
The next thing that I decided to do was to change the shape of the original
pentagon from which the other two pentagons were formed. I wondered what
the effects would be on the area and perimeter ratios.
In this investigation, note that all measurements are measured to the thousandths
place. I did this so that I could see just how close the ratios are to being
equal. I was suspicious that some of the ratios might appear to be equal
when in fact, they may have been rounded-off and not equal at all. In doing
this, my suspicions were somewhat confirmed. For area, the ratios were very
close, but not truly equal. So, rounding-off the measurements does make
a big difference.
However, the perimeter ratios were not even close to being equal.
Area 3 / Area 2 = .655 and Area 2 / Area 1 = .656
Area 3 / Area 1 = .429
The perimeter ratios were very surprising and very contradictory to the
Perimeter 3 / Perimeter 2 = .823 and Perimeter 2 / Perimeter 1 = .649
Perimeter 3 / Perimeter 1 = .534
In this pentagon, the perimeter ratios were nowhere close to being constant.
Changing the shape of the original pentagon seemed to make the ratios become
even further apart. This seemed to be a strong counterexample to the conjecture.
I continued changing the shape of the original pentagon. In some cases,
it remained a pentagon and in other cases I changed it to a quadrilateral
or a triangle. The results were the following:
In this figure, the ratios of area between 3 and 2, and between 2 and 1, were
approximately equal, but not exactly equal. The ratio of area between 3 and 1
is not close to being the same. The same results were true for the perimeter ratios.
In this figure, I changed the pentagon to a quadrilateral.
The ratios here seem to be even further apart than before. For area, the
ratios that we expected to be constant differ by .02. In perimeter,
the ratios differ by .038! It seems that changing the shape of the original
pentagon does nothing to validate the conjecture, but rather it only casts
more doubt about its validity. I was convinced that the ratios were not
constant, and thus, that the conjecture is false. However, I wasn't sure
why the ratios were so close, yet unequal.
I decided to compare a few more figures before I made an "official"
decision about my thoughts on the conjecture.
I changed the original pentagon to a quadrilateral, but one of a different
shape than in the above example. The differences in the ratios that were
"supposed" to be constant continued to become larger.
Here, the original pentagon was changed to a triangle. The results are consistent
with what I had found before. The ratios just were not constant.
While I had long since decided that the conjecture could not be true, there
was one more approach that I wanted to try, just to see what would happen.
I constructed a fourth pentagon in the same manner that I had constructed
pentagons 2 and 3.
Doing this, I had a lot of ratios to compare. I will discuss the area ratios
and the perimeter ratios separately. When I compared the area of the pentagon
and the area of the pentagon that it was constructed from, e.g., the area
of pentagon 3 and the area of pentagon 2, the ratios were all very close
to being equal. (See ratios 1,2, and 3.) However, when I compared the ratios
of the areas of a pentagon and a pentagon that it was not directly constructed
from, but certainly a consequence of, the ratio was not similar to the above
mentioned ratios. For example, ratio 4 is not close to ratios 1-3, but upon
constructing ratio 5, I saw that ratios 4 and 5 are very close. Ratio 5 is
the same type of comparison as ratio 4. Ratio 5 is comparing the area of
pentagon 4 to the area of pentagon 2. Pentagon 4 was not directly constructed
from pentagon 2, but it is a consequence of pentagon 2 at the very least.
This was another relationship between the areas that needed to be explored.
A final ratio was constructed. Ratio 6 is the comparison of the area of
pentagon 4 and the area of pentagon 1. Again, pentagon 4 was not directly
constructed from pentagon 1, but pentagon 1 was certainly necessary in the
construction of pentagon 4. Again, the ratio was not approximately equal
to any of the other ratios that I had found
I went on and constructed a fifth pentagon from the fourth pentagon to see
what the ratios would turn out to be. The ratio of pentagon 5 to pentagon
1 was not close to the other ratios. This was not a surprising result since
pentagon 5 is not directly constructed from pentagon 1. This is the same
relationship that I discussed in the previous figure when comparing pentagon
4 to pentagon 1. Pentagon 1 is 3 pentagons "ahead" in the construction
than pentagon 4. I found the ratio of pentagon 5 to the pentagon that it
is 3 "ahead" of in the construction, pentagon 2.
Area p5 / Area p2 = .2798 and Area p4 / Area p1 = .276 (3 apart)
This ratio is approximately the same as the ratio between pentagon 4 and
pentagon 1. I wondered if I might find more similar ratios when comparing
ratios of pentagons that are x pentagons apart in the construction.
Area p3 / Area p1 = .422 and Area p4 / Area p2 = .427 (2 apart)
This was very fascinating to me. While the original conjecture was false,
I had discovered some other properties of the pentagons. When looking at
the original ratios of pentagons that are 1 apart in the construction, I
had found that they were not equal, but very close. I had found the same
to be true when looking at the ratios of the pentagons that are x apart
in the construction. I wanted to know what would happen if I continued to
construct more pentagons and x became larger than 4. Going back to the previous
figure, I constructed more pentagons and compared the ratios.
By constructing a sixth pentagon, I was able to compare the ratios of pentagons
that are 1. 2. 3. and 4 pentagons apart. Ratios of pentagons that are the
same number apart are all approximately equal. For instance:
Area p5 / Area p1 = .181 and Area p6 / Area p2 = .183 (4 apart)
These ratios are approximately the same. Seeing this, I constructed yet
a seventh pentagon.
The seventh pentagon and the ratios that can be formed
from it fit well into my observation about x-apart pentagons. In fact,
the newest ratios in each class here are equal to the previous ratio in
the class, a class being the set of ratios for pentagons that are x apart in the construction.
Not wanting to spoil a good thing, I went on to construct an eighth pentagon.
This construction also backed my conjecture about the different "classes"
of the pentagons and the ratios contained within each class.
When I constructed a ninth pentagon and examined class 7, my "class"
observation also held.
The following figure contains the tenth and final pentagon that I constructed.
By constructing this pentagon, I was able to form class 8 which also confers
with the observations that I had made earlier about the different classes
of pentagons and the ratios within each class.
In this figure, I examined the ratios of the perimeters of different pentagons
in the different classes. Rather than listing all of the individual ratios,
I have opted to list the results, since the ratios are those of the same pentagons
that I examined above. After examining the perimeter ratios, I found that
the same results were true as were for the area ratios. By looking at the
perimeter ratios of the pentagons that were all x apart in the construction,
I discovered that the ratios within each class were all approximately equal
and that they usually converged to one value (in the picture above this
value is the last one listed). This was true for x = 1 to 8.
Even though I concluded that the original conjecture was not true, and thus,
was unable to offer a formal proof of the conjecture, I learned a great
deal about the area and perimeter of pentagons. I did many investigations
and came up with an "altered" version of the original conjecture:
"Altered" Conjecture: "As long as each
side of the smaller pentagons is formed by joining the midpoints of the
next larger pentagon, the ratios of the areas of the pentagons that are
x pentagons apart in the construction will all be approximately equal, and
the ratios of the perimeters of the pentagons that are x pentagons apart
in the construction will all be approximately equal."
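The altered conjecture is easy to probe numerically; the sketch below (my own check, reusing the shoelace-area idea from before) builds ten nested midpoint pentagons from an irregular but non-self-intersecting pentagon and prints the area ratios within each class x. The ratios in a class start out merely close and settle toward a common value as the construction deepens:

import math, random

def area(p):
    return 0.5 * abs(sum(p[i][0] * p[(i + 1) % 5][1] -
                         p[(i + 1) % 5][0] * p[i][1] for i in range(5)))

def mid(p):
    return [((p[i][0] + p[(i + 1) % 5][0]) / 2.0,
             (p[i][1] + p[(i + 1) % 5][1]) / 2.0) for i in range(5)]

# perturb a regular pentagon radially so it stays non-self-intersecting
pent = []
for k in range(5):
    r = 1.0 + 0.6 * random.random()
    a = 2 * math.pi * k / 5
    pent.append((r * math.cos(a), r * math.sin(a)))

chain = [pent]
for _ in range(9):
    chain.append(mid(chain[-1]))

for x in (1, 2, 3):
    print(x, ["%.3f" % (area(chain[i + x]) / area(chain[i]))
              for i in range(len(chain) - x)])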
There are many other avenues that can be explored with this investigation.
For example, the problem could be extended to the construction of other
polygons, including those that are regular and those that are not. Whatever
direction this problem takes, I think that it is certainly a worthwhile
problem for students to explore in mathematics.
Zbiek, Rose Mary. "The Pentagon Problem: Geometric Reasoning with Technology." Mathematics Teacher, February 1996. NCTM: Reston, VA.
Paul Gilster follows up on a story I first saw at MSNBC describing how microbial life can make the iciest and most inhospitable places home. Project SLIce (Signatures of Life in Ice) studies how organic material might behave on other worlds by studying it here on Earth - and it's yielded some useful information.
An early SLIce result, described at the Goldschmidt2009 geochemistry meeting in Davos last week: The best place to look for microorganisms in ice is in the layers close to the surface. That’s good to know, because a planetary rover is going to be able to sample such environments much more readily than those several meters beneath. Also helpful is the team’s discovery that cleaning the rover’s sample scoop is harder than it looks, leaving dead micro-organisms on it even after it had apparently been sterilized. New procedures have resolved the problem, ensuring we don’t inadvertently ‘discover’ Earth organisms that have found their way along for the ride. | <urn:uuid:97b396cd-9814-47e2-aac1-6d6d6468e48d> | 2.75 | 209 | Personal Blog | Science & Tech. | 36.564177 |
JavaPlot is a library that can be used as a way to create gnuplot plots on the fly through pure Java commands. In contrast with other common gnuplot Java libraries, it uses Java structures to store the various plot parameters, including datasets.
Moreover, it is flexible enough to give special parameters to gnuplot, even if the library does not yet support it. JavaPlot software uses Java's Exceptions to inform the user if something went wrong.
Java 1.5 (or better) is needed for this library. The reason is the extensive usage of various 1.5 technologies, such as Generics and autoboxing, to help manipulation of plot data. It has been tested with gnuplot 4.2. Older versions might or might not work.
This library has been checked in Windows XP, Linux (Debian) and Mac OS X (Tiger & Leopard). It should work on any other system, if you fine tune the special parameters needed.
First you have to include this library in your classpath. Then the easiest way to start creating plots, is to create a new instance of JavaPlot object.
A test case can be found under test/com/panayotis/gnuplot/GNUPlotTest.java. It needs JUnit4 to run, but you can safely copy&paste the code from this example to match your needs. For more detailed information, see the provided javadoc. Most methods should be self explanatory.
If you want to go deeper into the library, it is important to understand the "PropertiesHolder" class, which is the base properties holder of this library. This class is able to store pairs of values (such as key-value pairs). Use the set() and unset() methods of this class to add parameters which will be used when creating the gnuplot program.
There are some things that are not supported yet. These are mainly the multiplot environment and splot-family commands. Still, using methods like getPreInit() and getPostInit() you might be able to simulate them.
If you want to use SVG output in Java, you need a library to handle SVG files. Such a library is SVGSalamander provided with this package. There is a bug in this library, though, which ignores color values. Thus all colors in SVG graphs are black.
· Java SE Development Kit
What's New in This Release: [ read full changelog ]
· Basic support of Graph3d (splot).
· Implementation of user-defined terminals. | <urn:uuid:fdb7d0f4-eccc-4aeb-b138-0b98cef4cad0> | 3.109375 | 537 | Documentation | Software Dev. | 58.39493 |
When we have a situation where strings contain multiple pieces of information (for example, when reading in data from a file on a line-by-line basis), then we will need to parse (i.e., divide up) the string to extract the individual pieces.
Issues to consider when parsing a string:
We want to divide up a phrase into words where spaces are used to separate words. For example
the music made it hard to concentrate
String phrase = "the music made it hard to concentrate"; String delims = "[ ]+"; String tokens = phrase.split(delims);
for (int i = 0; i < tokens.length; i++) System.out.println(tokens[i]);
Suppose each string contains an employee's last name, first name, employee ID#, and the number of hours worked for each day of the week, separated by commas. So
String employee = "Smith,Katie,3014,,8.25,6.5,,,10.75,8.5"; String delims = "[,]"; String tokens = employee.split(delims);
After this code executes, the tokens array will contain ten strings (note the empty strings): "Smith", "Katie", "3014", "", "8.25", "6.5", "", "", "10.75", "8.5"
There is one small wrinkle to be aware of (regardless of how consecutive delimiters are handled): if the string starts with one (or more) delimiters, then the first token will be the empty string ("").
Suppose we have a string containing several English sentences that uses only commas, periods, question marks, and exclamation points as punctuation. We wish to extract the individual words in the string (excluding the punctuation). In this situation we have several delimiters (the punctuation marks as well as spaces) and we want to treat consecutive delimiters as one
String str = "This is a sentence. This is a question, right? Yes! It is."; String delims = "[ .,?!]+"; String tokens = str.split(delims);
All we had to do was list all the delimiter characters inside the square brackets ( [ ] ).
Suppose we are representing arithmetic expressions using strings and wish to parse out the operands (that is, use the arithmetic operators as delimiters). The arithmetic operators that we will allow are addition (+), subtraction (-), multiplication (*), division (/), and exponentiation (^) and we will not allow parentheses (to make it a little simpler). This situation is not as straight-forward as it might seem. There are several characters that have a special meaning when they appear inside [ ]. The characters are ^ - [ and two &s in a row (&&). In order to use one of these characters, we need to put \\ in front of the character:
String expr = "2*x^3 - 4/5*y + z^2"; String delims = "[+\\-*/\\^ ]+"; // so the delimiters are: + - * / ^ space String tokens = expr.split(delims);
String s = string_to_parse;
String delims = "[delimiters]+"; // use + to treat consecutive delims as one;
                                 // omit to treat consecutive delims separately
String[] tokens = s.split(delims);
|Atomic mass||79.9 amu|
|Date of discovery||1826|
|Name of discoverer||Antoine J. Balard|
|Name origin||From the Greek bromos ("stench").|
|Uses||fire retardants, ingredients in bug and fungus sprays, antiknock compounds in leaded gasoline, and oil-well completion fluids. The remainder, as elemental bromine, is shipped to various chemical processors for use in chemical reagents, disinfectants, photographic preparations and chemicals, solvents, water-treatment compounds, dyes, insulating foam, and hair-care products.|
|Obtained from||Ocean Water & any brine source.|
Bromine has the atomic number 35. Like chlorine, it is a halogen that easily reacts with other elements. In nature bromine can only be found in compounds. These combinations are called bromides. Bromides are used to obtain pure bromine and to produce bromine products. Among the halogens, only fluorine and chlorine are more reactive. Bromine reacts with many different substances, is very corrosive and destructive of organic material.
Bromine is the only non-metallic element that is liquid at room temperature and standard pressure. It is a red liquid that evaporates easily and has a strong smell. Bromine is approximately 3.12 times denser than water. At temperatures of 58.8 °C and above it becomes gaseous, and at −7.3 °C and lower temperatures it is a solid.
Bromine is a bleach. It is poisonous in fluid form and bromine vapor is destructive of human skin, eyes and respiratory tract. It causes serious burns. A concentration of 1 ppm can cause eye watering, and inhalation of concentrations below 10 ppm causes coughing and irritation of the respiratory tract.
|Periodic Table of the Elements| | <urn:uuid:a8c014b1-ac73-4ab7-b51c-744754c1b66f> | 3.453125 | 394 | Structured Data | Science & Tech. | 38.508945 |
Try This at Home: Fish Out of Water
A persistence of vision illusion occurs when several discrete images are quickly flashed before the eye. These individual images are melded together by the eye and the brain to form a continuous image. This is the same sort of illusion that makes movies and cartoons appear as moving images.
What You Need
- 2 index cards
- Crayons or markers
- Clear tape
- 1 pencil
What To Do
Make sure you have an adult with you to supervise this experiment.
Draw and color a fishbowl, without any fish in it, on one index card.
Draw and color a fish, without a fishbowl or water, on the other index card. Make sure that the fish is the right size, and in the right place on the index card, to fit inside of the fishbowl.
Place the index cards back to back, so that both drawings are showing and are right-side up, and tape the top and sides of the cards together.
Stick the pencil in between the cards so that it is perpendicular to the bottom of the cards. Tape the pencil to the cards in this position.
Make a hypothesis! The fish and the bowl are not together. By spinning them quickly, do you think the fish will ever appear in the bowl?
Spin the cards by rolling the pencil in between both hands quickly. Do you see the fish in the bowl?
As you spin the pencil, each index card will flash before your eyes. The cells in the back of your eye, on the retina, will retain this image for just a little longer than you actually see it. The next image will flash so your cells will see this image as well. Your brain helps to combine the two images into one image; the fish looks like it's inside the fishbowl! You can also do this experiment using two different images, such as a bird in a tree or a puppy in a window. Similarly, many discrete images flash before your eyes when you watch television or go to a movie. The combination of your eyes and your brain work to form the images into one continuous image.
All humans are born colorblind–our color-seeing cells don't start working until we are 4 months old. | <urn:uuid:de84cec6-5b1b-47ba-8686-9774ee9bb608> | 3.890625 | 461 | Tutorial | Science & Tech. | 64.873191 |
Humans may have changed the Earth's climate ever since they began using "slash and burn" tactics to clear forests for growing crops. But today's civilizations must deal with the industrial revolution's contribution to a warming planet and the choice of trying to reverse or balance out such climate change with new geoengineering tactics.

Geoengineering ideas typically aim to stop the warming of the Earth's climate by removing the greenhouse gas carbon dioxide (CO2) or by reflecting more sunlight back into space. Many mimic natural processes such as the cooling effect of volcanic eruptions or boosting the CO2-absorbing effect of forests. But the idea of humans intentionally engineering the Earth's climate on a grand scale still attracts plenty of controversy as well.

Here you can take a look at ratings for some of the wildest geoengineering ideas described in a 2009 report by the UK's Royal Society. The British study has been cited in later U.S. reports by the U.S. National Academy of Sciences (2010) and the Washington-based Bipartisan Policy Center (2011).
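One number helps when reading the sunlight-reflecting entries below. This back-of-envelope sketch (standard textbook values, not figures from the Royal Society report) asks how much the planet's reflectivity would have to rise to cancel the warming from a doubling of CO2:

solar_constant = 1361.0                 # W/m^2 at the top of the atmosphere
sunlight_per_m2 = solar_constant / 4.0  # spherical average, ~340 W/m^2
forcing_2xco2 = 3.7                     # W/m^2, canonical doubled-CO2 forcing

albedo_increase = forcing_2xco2 / sunlight_per_m2
print("reflect about %.1f%% more sunlight" % (100 * albedo_increase))  # ~1.1%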
White clouds based on small micro-droplets of moisture could reflect more sunlight to slow down the heating of the planet. Many proposals have suggested using ships or aircraft to seed clouds with a spray of salty ocean water, or perhaps dropping a special hydrophilic (water-attracting) powder from aircraft.

Impact: Low to Medium. There is uncertainty about producing enough of the cloud seeding effect, and the method is largely limited to areas over oceans.

Affordability: Medium. The cost of ocean water is low, but the cloud seeding must continue almost constantly for a long period of time.

Timeliness: Medium. The effect on lowering temperatures would begin within one year. Deployment could start within years or a few decades.

Safety: Low. The cloud seeding may end up affecting weather patterns and ocean currents. There is also the possibility of pollution if the cloud seeding uses chemicals or materials other than sea-salt.
Humans could capture CO2 directly from the ambient air, much as similar technologies already capture carbon from power plants. The CO2 would be absorbed by solids or alkaline liquids before being moved to long-term deep storage underground.

Impact: High. This idea is both doable and has no limits on the size of its possible effect. It also tackles a main cause of climate change and ocean acidification by removing CO2.

Affordability: Low. The carbon capture methods would have potentially high material and energy costs.

Timeliness: Low. Humans still need to do more work to find cost-effective air capture methods, and would need time to build the infrastructure to do the job. It would also be slow to reduce global temperatures.

Safety: Very high. There are few side effects.
Humans could release a wide range of tiny particles into the stratosphere to reflect sunlight back into space. That mimics the natural cooling effect of huge volcanic eruptions that toss similarly small particles high into the atmosphere. Fleets of aircraft, rockets, balloons or even huge artillery guns could do the job of delivery.

Impact: High. This is already doable and possibly very effective. There is also no limit to its effect on global temperatures.

Affordability: High. This only requires small quantities of materials at relatively low cost.

Timeliness: High. The effect would start to reduce temperatures within one year. Deployment would only require years or possibly a few decades.

Safety: Low. Many possible side effects include damage to the stratospheric ozone layer, effects on high-altitude clouds, and impact on the biological productivity of plants and animals.
Huge sun-shields in space could reflect solar radiation away from Earth. Such shields would require tactics worthy of a science fiction story: arrays of thousands of mirrors, swarms of trillions of reflecting disks, a huge reflector made on the moon out of lunar glass, or a Saturn-like ring of dust particles and shepherding satellites.

Impact: High. There is no limit on its possible effects on global temperatures.

Affordability: Very low to Low. Space launches and operations would mean a high cost for deployment and maintenance, but the methods could have a very long lifetime once deployed.

Timeliness: Very low. Humans would need several decades at the very least to put reflectors into space. Once deployed, the reflectors would begin to reduce global temperatures within a few years.

Safety: Medium. There would be regional climate effects, but no known biochemical effects on the environment.
Weathering naturally eats away at silicate rocks (the most common rocks on Earth), an effect that leaves silicate free to react chemically with CO2 and store it as carbonate rock. The natural process occurs slowly over many thousands of years, but humans could speed up the weathering effect by mining silicate materials and spreading them more widely. They could possibly even store the dissolved materials left over by the chemical reactions in the oceans.

Impact: High. There is plenty of room for storage in either the Earth's soils or oceans. Both methods would address the cause of both climate change and ocean acidification, but dumping materials in the ocean could directly reverse ocean acidification.

Affordability: Low. The mining, processing and transportation of silicate materials would be expensive and possibly require a lot of energy.

Timeliness: Low. This would be slow to reduce global temperatures, would take time to build the necessary infrastructure, and would also require time to investigate its efficiency and possible side effects on the environment.

Safety: Medium or High. May have side effects on soil pH, vegetation, and marine life.
Hot deserts receive high levels of solar radiation through sunlight. One geoengineering proposal suggests covering the deserts with reflective polyethylene-aluminum surfaces to boost their reflective power, an idea similar to the lower-risk concept of making building rooftops white or shiny to reflect sunlight.

Impact: Low to Medium. This idea would require complete and very reflective coverage of all major desert areas (about 10 percent of all land).

Affordability: Very low. The cost of materials, deployment and maintenance could be huge.

Timeliness: High. Could be done very quickly and would prove rapidly effective.

Safety: Very low. There would be huge environmental and ecological impacts on desert ecosystems, as well as probable effects on weather.
Ocean algae floating on surface waters act as natural sponges that soak up CO2, the first step toward storing CO2 in the deep sea as dead organic matter sinks to the bottom. Researchers have tried small experiments to find out if seeding the ocean with iron or other nutrients can boost algae blooms and that CO2 storage effect.

Impact: Low. Humans could try this geoengineering tactic today, but tests have suggested it wouldn't be very effective. The ocean's natural carbon cycling also makes this unlikely as a long-term carbon storage solution.

Affordability: Medium. This would not be very cost-effective, especially for methods other than iron fertilization.

Timeliness: Low or Very low. Ocean fertilization would be slow to reduce the Earth's global temperatures.

Safety: Very low. This method has big risks for "unintended and undesirable ecological side effects," such as increasing the number of ocean "dead zones" starved of oxygen or slightly increasing acidification of the deep ocean.
Can Asia’s large mammals be saved from extinction?
A. Christy Williams
28th October, 2011
The Javan rhino isn’t the only south east Asian mammal whose future looks bleak, says the WWF’s A. Christy Williams
Earlier this year, the results of DNA analysis of rhino dung and tissue samples gathered from the Cat Tien National Park in Vietnam confirmed our worst fears: the last Javan rhino in the country had been poached in early 2010. It's an event that could be the first milestone on a potentially inexorable slide towards the extinction of several large mammals in south east Asia.
In the 1990s, as the region opened up after years of war, there was a real feeling of joy and optimism amongst conservationists, when several large mammals were either re-discovered (the Javan rhinos in Cat Tien) or newly discovered (the Saola and Giant Muntjac in Laos and Vietnam) and protected areas, on paper at least, were being set up at a rapid pace. Non-government organisations (NGOs), with support from government aid agencies, moved quickly to support these protected areas, creating much needed infrastructure, building capacity and helping to develop management plans for some of the parks. Cat Tien was one of the early beneficiaries of this work, largely due to the rediscovery of the Javan rhino, and as a result of the wider conservation efforts that followed, many other species have also been allowed to thrive.
However, while most governments in the region are happy to set aside parks on paper, little investment has gone into practically protecting many of them from hunting and forest loss. Local hunting, combined with the unfounded rumour that rhino horn can cure cancer, is what most likely sealed the fate of the last Javan rhino in Cat Tien, and this same problem is now threatening other rhino populations across Africa and South Asia. With rhino horn prices spiralling on the illegal market, over 1,150 rhinos have been poached in the last five years in southern Africa to meet increasing demand. And as this battle goes on, a population of fewer than 50 rhinos in Ujung Kulon National Park on the Indonesian island of Java is the last remaining hope for Javan rhinos on this planet.
Lacking the cute appeal of a panda or the magnetic charisma and cultural associations of the tiger, the fate of the Javan rhino did not receive the government attention it deserved until this recent extinction event in Vietnam. There is a precedent for decisive action: in 1986, in the face of strong opposition from certain sections of the conservation community, the US Government decided to trap and move all the remaining California condors into captivity because they could not establish what was killing them in the wild and therefore could not offer them protection in their natural habitat. This single step most likely saved the condor from extinction and today they have been successfully reintroduced to the wild.
While I am not advocating removing the remaining Javan rhinos and placing them in captivity, the species faces a similar 'California condor moment'. Unless the Indonesian Government moves quickly to address the problem, whether by improving habitat or by translocating a few individuals to a secure site to improve breeding performance, the Javan rhinos in Indonesia surely face a bleak future. And they are not alone. Asian elephants and the Tonkin snub-nosed monkey in Vietnam, the Saola (also known as the Asian unicorn), one of the world's rarest mammals, found only in Vietnam and Laos, and the Sumatran rhino in Sumatra and Borneo all face a similar fate. It is vital that something is done now to safeguard the future of all of these species - before it is simply too late.
A. Christy Williams leads the WWF Asian Elephant and Rhino Conservation Programme. To find out more about WWF's work to protect the Javan rhino in Indonesia and our efforts to prevent further extinctions in the region, please go to www.wwf.panda.org
Last April, at a meeting of the American Physical Society in Washington, D.C., representatives of three independent laboratories announced new high-precision measurements of the strength of the force of gravity. To the astonishment of the audience, the three measurements disagreed with one another by considerable amounts, and worse, none of them matched the value that physicists have accepted as correct for more than a decade. No one could offer so much as a hint to explain the discrepancies.
To illustrate the magnitude of the predicament, imagine a felon hunted by the police. They know that he is hiding somewhere along a street of ten blocks, with ten houses on each block. On the basis of previous information, the police have concentrated their surveillance on a particular house in the middle of the second block, when suddenly three new and presumably trustworthy witnesses appear. One places the miscreant in the very first house of the first block, the second singles out a dwelling near the end of the first block, while the third witness points to a house way across town at the other end of the street, more than eight blocks from the stakeout.
Experiments to measure G are painfully sensitive to every stray gravitational influence, from sparrows flying over the roof to earthquakes in the antipodes.
What are the cops to do? Go with the majority and move their operation over to the first block? Take an average and wait somewhere in the third block? Try to pick the most reliable witness and concentrate on a single house? Stretch their net to cover the entire ten-block street? Or stay put, discounting the new reports because they contradict one another? Physicists trying to make sense of the new measurements are facing the same unsatisfactory choices.
The goal of the measurements is easy to understand. According to Isaac Newton, any two material objects in the universe attract each other with a force that is proportional to the mass of the objects and that diminishes with their distance from each other. To quantify this phenomenon, physicists define as G the magnitude of the attraction that two one-kilogram masses, exactly one meter apart, exert on each other. Strictly speaking, G is an odd quantity with no intuitive meaning, so for this reason physicists take the liberty of referring to it in more familiar terms as a force. In this case, the value of G is 15.0013 millionths of a millionth of a pound. (G is not to be confused with g, the acceleration of gravity near the surface of Earth, or with the g-force, the effect of an acceleration on a body.)
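To make the feebleness concrete, here is a quick numerical check of that figure (my own illustration, not from the article), using Newton's law F = G*m1*m2/r^2 and a modern value of G:

    # Gravitational attraction between two 1-kg masses 1 m apart,
    # converted to pounds-force.  (The accepted value of G shifts
    # slightly between adjustments, so treat the digits as approximate.)
    G = 6.674e-11            # N m^2 / kg^2
    m1 = m2 = 1.0            # kg
    r = 1.0                  # m

    F_newtons = G * m1 * m2 / r**2
    F_pounds = F_newtons / 4.44822   # 1 lbf = 4.44822 N

    print(f"F = {F_newtons:.3e} N = {F_pounds:.3e} lbf")
    # About 1.5e-11 lbf, i.e. roughly 15 millionths of a millionth of a pound.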
The conceptual simplicity of measuring the strength of gravity contrasts sharply with the practical difficulty of carrying it out. There are two fundamental reasons for the elusiveness of G. For one thing, gravity is pathetically feeble. If the two chunks of matter were ten times closer, or about four inches apart, the force, though it would rise to 100 G, would still amount to no more than about a billionth of a pound—the weight of an average E. coli bacterium.
The other, more subtle, problem is that gravity, unlike all the other forces of nature, cannot be shielded. Electricity and magnetism, for example, which keep molecules from disintegrating, can be neutralized. Positive charges cancel negative charges, south poles offset north poles. Shielding makes it possible to insulate electrical conductors so they can be handled safely, even if they carry 220 lethal volts, and for the same reason, radios, which feed on electromagnetic radiation, fade in highway tunnels. No such shielding is available for gravity, and hence experiments to measure G are painfully sensitive to every stray gravitational influence, from sparrows flying over the laboratory roof to earthquakes in the antipodes.
Newton, who formulated the universal law of gravity and used it to explain a wealth of phenomena, including the orbits of planets, the tides of the ocean, and the flattening of Earth at its poles, did not need to know the value of G. Nor, for that matter, do NASA engineers who plot the paths of space probes with breathtaking precision. Most applications of the theory of gravity depend only on relative values, such as the ratio of the acceleration of the moon to that of an apple, which can be determined with much greater precision than the absolute value of G.
The first accurate measurement of G was not made, in fact, until 1797, more than a century after the discovery of the law of gravity, and it arose from a classic experiment performed by the English nobleman Henry Cavendish. Cavendish was an eccentric. Although he was said to be “the richest of all learned men, and very likely also the most learned of all the rich,” he lived frugally, spending his wealth only on books and scientific equipment. Morbidly taciturn and pathologically reclusive, he was such a confirmed misogynist that he communicated with his female housekeeper only by written notes.
Winfried Michaelis's group in Brunswick, Germany, creates an electric field by means of two electrostatic generators to hold one end of a crossbar in place while the other end (not shown) is subjected to a minute gravitational tug. The crossbar floats on a pool of mercury.
Yet for all his bizarre behavior, Cavendish was one of the most original and productive scientists of his generation. The ingenious device he employed for measuring G, called a torsion balance, had been built by the clergyman and amateur naturalist John Michell and was invented simultaneously by the French electrical pioneer Charles Coulomb, but in Cavendish’s skillful hands it revolutionized the science of precision measurements. Almost all of the hundreds of subsequent determinations of G have used the torsion balance. Furthermore, it has been adapted for countless other applications, such as seismological measurements and electrical calibration—wherever precise control over very small forces is called for.
The conceptual basis of the torsion balance is the observation that it doesn’t take much force to induce a twist, or torsion, in a long, thin wire hanging from the ceiling. (A hanged man twists even in a faint breeze.) If a horizontal crossbar is hung from the lower end of the wire, in the manner of a rod in a mobile, it can serve as a pointer for indicating the angle through which the wire has been twisted. Once such a torsion balance has been calibrated, it becomes a measuring device for minuscule forces applied to one end of the crossbar: a small horizontal push results in a sizable angle of twist.
Cavendish attached a small lead ball to one end of the crossbar, brought an enormous weight on a fixed support to a point slightly in front of the ball, and then watched the wire twist as the ball was attracted to the weight. (Actually, to balance his apparatus, he placed identical balls at both ends of the crossbar, dumbbell fashion, and doubled the attraction by mounting two large weights symmetrically as close to the balls as he could get without their touching.) By measuring the minute twist induced in the wire in this contrivance, Cavendish read off the actual force that caused it. From this, and the measured dimensions of the apparatus, he was able to deduce the value of G by means of simple proportions. The result was in the ballpark of the modern value, but what a huge ballpark it was. Cavendish estimated his precision at about 7 percent, which translates into locating the fugitive felon somewhere within the span of 100 blocks.
Modern measurements are almost a thousand times better, pinning the culprit down to a specific house (although the disagreements among the new results take the bloom off this achievement). But the uncertainty in the value of G remains astronomical by today’s exacting standards. Historically, G was the first universal constant of physics, and ironically it is by a wide margin the least well known. Modern physics is built on such numbers as the speed of light (c); the charge of an electron (e); and the quantum of action (h), which determines the sizes of atoms. Some of these constants have been measured to within one part in 100 million, others to a few parts per million. Our ignorance of G, compared with all of them, shocks by its crudeness.
The constants c, e, and h are entangled with one another in a tight web of interconnections that spans the microworld, in the sense that all measurements of atomic and nuclear properties must ultimately be expressed in terms of these and a small handful of other numbers. Such entanglement entails a complex system of cross-checks and mutual constraints that help fix the fundamental constants with impressive precision. Unfortunately, G does not participate in any of these relationships, because gravity plays no role in the atom. The gravitational attraction among atomic constituents is 30 or 40 orders of magnitude weaker than the competing electrical and nuclear forces and is thus completely irrelevant. In the end G stands naked and aloof, the ancient, unapproachable king of the fundamental constants.
So why not just leave it alone? Why do scientists devote their energies and careers to a better determination of G instead of pursuing more profitable ends? Currently there is no practical value in knowing its magnitude. Neither astronomy nor geology nor space exploration would benefit from a new measurement, so these are not motivations for the new experiments. Instead scientists want to measure G as a matter of principle—just because it’s there. And that is how science progresses. In the late nineteenth century astronomers struggled to tease out a tiny anomaly in the orbit of Mercury—an aberration that would never affect a calendar or the prediction of an eclipse. They measured it just because it was there, with no inkling that it would soon emerge as the sole experimental anchor of a revolutionary new conception of space and time—the general theory of relativity.
The stop() action forces the process that fires the enabled probe to stop when it next leaves the kernel, as if stopped by a proc(4) action. The prun(1) utility may be used to resume a process that has been stopped by the stop() action. The stop() action can be used to stop a process at any DTrace probe point. This action can be used to capture a program in a particular state that would be difficult to achieve with a simple breakpoint, and then attach a traditional debugger like mdb(1) to the process. You can also use the gcore(1) utility to save the state of a stopped process in a core file for later analysis.
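For illustration, a minimal D script using stop() might look like the following sketch. The process name "myapp" is a placeholder, and the script assumes it is run with DTrace's -w flag, since stop() is a destructive action:

    #!/usr/sbin/dtrace -ws
    /*
     * Fire on entry to open(2) for the hypothetical process "myapp";
     * the process then stops when it next leaves the kernel, so mdb(1)
     * can be attached or gcore(1) run at leisure.  stop() is
     * destructive, hence the -w flag above.
     */
    syscall::open:entry
    /execname == "myapp"/
    {
        stop();
        printf("stopped pid %d at open()\n", pid);
    }

Once examined, the stopped process can be resumed with prun(1).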
Species loss tied to ecosystem collapse and recovery
The world's oceans are under siege. Conservation biologists regularly note the precipitous decline of key species, such as cod, bluefin tuna, swordfish and sharks. Lose enough of these top-line predators (among other species), and the fear is that the oceanic web of life may collapse. In a new paper in Geology, researchers at Brown University and the University of Washington used a group of marine creatures similar to today's nautilus to examine the collapse of marine ecosystems that coincided with two of the greatest mass extinctions in the Earth's history. They attribute the ecosystems' collapse to a loss of enough species occupying the same space in the oceans, called "ecological redundancy."
While the term is not new, the paper marks the first time that a loss of ecological redundancy is directly blamed for a marine ecosystem's collapse in the fossil record. Just as ominously, the authors write that it took up to 10 million years after the mass extinctions for enough variety of species to repopulate the ocean – restoring ecological redundancy – for the ecosystem to stabilize.
"It's definitely a cautionary tale because we know it's happened at least twice before," said Jessica Whiteside, assistant professor of geological sciences at Brown and the paper's lead author. "And you have long periods of time before you have reestablishment of ecological redundancy."
If the theory is true, the implications could not be clearer today. According to the United Nations-sponsored report Global Biodiversity Outlook 2, the population of nearly one-third of marine species that were tracked had declined over the three decades that ended in 2000. The numbers were the same for land-based species. "In effect, we are currently responsible for the sixth major extinction event in the history of the Earth, and the greatest since the dinosaurs disappeared, 65 million years ago," the 2006 report states.
Whiteside and co-author Peter Ward studied the mass extinction that ended the Permian period 250 million years ago and another that brought the Triassic to a close roughly 200 million years ago. Both periods are generally believed to have ended with global spasms of volcanic activity. The abrupt change in climate stemming from the volcanism, notably a spike in greenhouse gases in the atmosphere, decimated species on land and in the oceans, wiping out approximately 90 percent of existing marine species in the Permian-Triassic and 72 percent in the Triassic-Jurassic. The widespread loss of marine life and the abrupt change in global climate caused the carbon cycle, a broad indicator of life and death and outside influences in the oceans, to fluctuate wildly. The authors noted these "chaotic carbon episodes" and their effects on biodiversity by studying carbon isotopes spanning these periods.
The researchers further documented species collapse in the oceans by compiling a 50-million-year fossil record of ammonoids, predatory squidlike creatures that lived inside coiled shells, found embedded in rocks throughout western Canada. The pair found that two general types of ammonoids, those that could swim around and pursue prey and those that simply floated throughout the ocean, suffered major losses. The fossil record after the end-Permian and end-Triassic mass extinctions shows a glaring absence of swimming ammonoids, which, because they compete with other active predators including fish, is interpreted as a loss of ecological redundancy.
"It means that during these low-diversity times, there are only one or two (ammonoids) taxa that are performing. It's a much more simplified food chain," Whiteside noted.
Only when the swimming ammonoids reappear alongside their floating brethren does the carbon isotope record stabilize and the ocean ecosystem fully recover, the authors report. "That's when we say ecological redundancy is reestablished," Whiteside said. "The swimming ammonoids have fulfilled that trophic role."
Source: Brown University
This document describes the MD5 message-digest algorithm. The algorithm takes as input a message of arbitrary length and produces as output a 128-bit "fingerprint" or "message digest" of the input. It is conjectured that it is computationally infeasible to produce two messages having the same message digest, or to produce any message having a given prespecified target message digest. The MD5 algorithm is intended for digital signature applications, where a large file must be "compressed" in a secure manner before being encrypted with a private (secret) key under a public-key cryptosystem such as RSA.
The MD5 algorithm is designed to be quite fast on 32-bit machines. In addition, the MD5 algorithm does not require any large substitution tables; the algorithm can be coded quite compactly.
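As a quick illustration of the input/output behavior (using Python's hashlib rather than the RFC's C reference implementation), the digests below are test vectors from the RFC's test suite:

    import hashlib

    # Any message length is accepted; the output is always 128 bits
    # (32 hex digits).
    assert hashlib.md5(b"").hexdigest() == "d41d8cd98f00b204e9800998ecf8427e"
    assert hashlib.md5(b"abc").hexdigest() == "900150983cd24fb0d6963f7d28e17f72"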
The MD5 algorithm is an extension of the MD4 message-digest algorithm [1,2]. MD5 is slightly slower than MD4, but is more "conservative" in design. MD5 was designed because it was felt that MD4 was perhaps being adopted for use more quickly than justified by the existing critical review; because MD4 was designed to be exceptionally fast, it is "at the edge" in terms of risking successful cryptanalytic attack. MD5 backs off a bit, giving up a little in speed for a much greater likelihood of ultimate security. It incorporates some suggestions made by various reviewers, and contains additional optimizations. The MD5 algorithm is being placed in the public domain for review and possible adoption as a standard.
For OSI-based applications, MD5's object identifier is
md5 OBJECT IDENTIFIER ::= {iso(1) member-body(2) US(840) rsadsi(113549) digestAlgorithm(2) 5}
In the X.509 type AlgorithmIdentifier, the parameters for MD5 should have type NULL.
One merit of mathematics few will deny: it says more in fewer words than any other science. The formula e^(i*pi) = -1 expressed a world of thought, of truth, of poetry, and of the religious spirit: "God eternally geometrizes."
In N. Rose, Mathematical Maxims and Minims, Raleigh NC: Rome Press Inc., 1988.
Francois-Joseph Servois: Priest, Artillery Officer, and Professor of Mathematics
While at Metz, Servois began publishing and presenting his mathematical research. His first paper, written in 1805, was a treatise on expanding functions into power series. This paper was presented to the Institut National des Sciences et des Arts* and would undergo several revisions to become his celebrated “Essay” [Servois 1814a] on the foundations of the differential calculus. (The next section describes this paper in more detail.)
Figure 3. Figures from Servois' geometry text (public domain). The left-hand figure was used to construct a parallel to an inaccessible line. The right-hand figure was used to prolong a line beyond an obstacle. The dotted circles in these figures are not part of the constructions, but rather used to verify the validity of the constructions.
Additionally, Servois published his first and only book, titled Solutions peu connues de différents problèmes de géométrie-pratique [Servois 1804] (Little-known Solutions to Various Problems in Practical Geometry), in 1804.** Servois' geometry text was intended to be a reference on applied geometry for military officers, presenting constructions in ruler-geometry that could be used on the battlefield at any time [Bradley 2002]. Taton stated that the work was well received by the public, and Jean-Victor Poncelet (1788-1867) also spoke highly of Servois' book.
The geometry textbook is divided into two sections after a brief introduction: Theory and Practice. There is an index of the practical section, errata, and a table of figures. It is also bound with a 28-page article, a "Letter from S … to F …, Professor of Mathematics," whose addressee is possibly François Joseph Français (1768-1810). The military applications of the geometry text can be seen in the index of the practical section. Note that all of these constructions are performed with straightedge, but no compass.
Interestingly, Poncelet consulted Servois for his expertise on geometry several times during the writing of the 1822 edition of the Traité des propriétés projectives [Taton 1972].
Servois was eventually transferred to the artillery school at La Fère, where he presented papers on the elements of dynamics and on cometary and planetary orbits to the Institut. However, these papers were never published [Taton 1972]. Then in 1810, he wrote and published his “De principio velocitatum virtualium commentatio,” a paper elaborating Joseph-Louis Lagrange's (1736-1813) notion of “virtual velocities.” The paper was entered in a prize competition sponsored by the Academy of Turin. Curiously, his memoir was the only entry the Academy received and, because Servois missed the deadline, nobody won the prize. However, the paper was deemed worthy, so the Academy published it and elected him a corresponding member [Bradley 2002].
*After the French Revolution, the Royal Academy of Sciences of Paris, along with other royal societies, were incorporated into the Institut National des Sciences et des Arts, renamed the Institut de France in 1806.
Petrilli, Jr., Salvatore J., "Francois-Joseph Servois: Priest, Artillery Officer, and Professor of Mathematics," Loci (June 2010), DOI: 10.4169/loci003498
I concluded the discussion of shaft work and flow work. Based on the performance on the first PRS question, it seems that people have a good understanding of these concepts. I then introduced the concept of stagnation quantities (enthalpy, pressure, temperature) using a second PRS question. I find it easiest to understand stagnation properties, and whether temperatures and pressures go up or down for moving and stationary flow, by making arguments about changes in internal energy and kinetic energy. We will spend all of the next lecture on these concepts. I also recommend you read over the notes as well as the old mud responses (T9, T10, T11).
Responses to 'Muddiest Part of the Lecture Cards'
(25 respondents out of 67 students)
1) I don't really understand how the temperature of a piece of fluid that gets stuck to an airplane goes up because it gains k.e. by moving (I guess I don't understand the trade-off). (1 student) When a moving airplane moves a particle in the air somewhere above it, where does the energy come from to move it? (1 student) I still don't understand how you can move or stop air molecules without doing work on them. (1 student) Still confused how one can make a particle move without inputting any energy to it -- does creating pressure waves require energy? (1 student) Why is internal energy traded for k.e. on particles outside the wing? Doesn't the pressure wave impart k.e. on the particles, so i.e. doesn't need to be sacrificed? (1 student) These are all good questions. First note that the quantity that is conserved for these types of flow processes (adiabatic and no external work) is total or stagnation enthalpy, not energy. Total enthalpy has in it a term related to flow work, internal energy and kinetic energy. So when I was describing trade-offs between internal energy and kinetic energy, I was being a little loose (with the intention of having the class understand the overall concept). We will talk more about these issues during the next lecture, but several of the questions may be resolved by reading through some of my old mud responses. I have put two relevant ones in below:
OLD MUD #1: How can a particle be accelerated to a given speed without work? How did the chunk of gas in the example get to position (1) if no work was done on it? and related questions (3 students) There is work, but it is flow work, not external work. Remember, our control volume is defined by a set of streamlines. Between the inlet and the outlet of the streamlines the velocity of the flow changes, but it does this without heat transfer and without external work. Remember work is the transfer of energy across a system boundary. It is when it crosses the system boundary that we label and identify it.
OLD MUD #2: I am unclear as to why the leading edge of the wing heats up in the first example, then later you said temperature drops over the wing causing the condensation trails. How can it get hotter and colder at the same time? (1 student) This is a very good question. It does get hotter and colder at the same time. Just not in the same place. The flow very close to the body gets hotter, the flow farther from the body gets cooler. You need to know a little more about fluids before the answer is clear. There are two different processes going on. The first is like the example of the engine sitting motionless on the ground drawing in air--the (static) temperature and (static) pressure drop. The second is like the example of the skin temperature of a supersonic airplane being significantly elevated above the ambient atmospheric temperature. In the first case, no energy is added to the flow; energy is just converted from internal energy to kinetic energy. In the second case, energy is added to the flow. Now let us discuss why this happens.
PROCESS 1: When a body moves through a fluid it creates a disturbance. That is, it changes the velocity, pressure and temperature of the flow around it. It tells the flow "Get out of my way, I am coming through!" This disturbance is felt some distance away from the body (a distance of about two times the characteristic physical dimension of the body). You can think of this like the bow wave in front of a boat: the water starts to move out of the way before the boat gets there. This information is transmitted upstream of the body at the speed of sound. So for a supersonic body (one traveling faster than the speed of sound), the flow sometimes doesn't know to get out of the way until the body has passed (it happens when the shockwave passes through the region of flow). There is no external work done in causing the flow to move (there can be flow work). So the total energy of the flow is the same. When it moves to get out of the way, the kinetic energy must come from somewhere. It comes from the internal energy (or more appropriately, the enthalpy, since stagnation enthalpy is the conserved quantity for these processes) and thus the temperature and pressure are reduced.
PROCESS 2: For the flow very close to the body (within an inch or so of the surface for a large airplane), the body adds energy to the flow. That is it pulls it along with it. We discussed this in lecture T1 and in a PRS question. Some number of molecules very close to the surface of the body stick to the body. They in turn pull on the particles next to them. This is exactly the same mechanism by which honey sticks to a spoon. The more viscous the fluid, the stickier it is and the more molecules get pulled along with the body (compare how much honey sticks to a spoon--to how much water sticks to a spoon--to how much air sticks to a spoon). What develops is something called a boundary layer. So the velocity very near the surface of the body looks like the sketch shown below. The thickness of the boundary layer depends on how viscous the fluid is, on how fast the body is moving, and on the distance from the leading edge of the body. The boundary layer is thicker the more viscous the fluid is, thinner the faster the body is moving, and thicker the longer the distance from the leading edge. By dimensional analysis you can see that the thickness of the boundary layer is proportional to sqrt(nx/c) where n is the kinematic viscosity (units=m^2/s), x is the distance from the leading edge of the body and c is the speed of the body. So to cause the flow to stagnate on a body (move at the same speed as the body) kinetic energy must be added, thus raising the total energy of the flow.
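To put rough numbers on that scaling (illustrative values of my own, not from the lecture; the dimensional estimate gives the laminar scale, and real turbulent boundary layers on a large aircraft are considerably thicker):

    from math import sqrt

    # Order-of-magnitude boundary-layer thickness, delta ~ sqrt(n*x/c)
    n = 1.5e-5   # m^2/s, kinematic viscosity of air (assumed)
    x = 10.0     # m, distance from the leading edge (assumed)
    c = 250.0    # m/s, speed of the body (assumed)

    delta = sqrt(n * x / c)
    print(f"delta ~ {delta*1e3:.2f} mm")   # about 0.8 mm for these numbers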
2) What are the conditions for the stagnation temperature equation to be true? and related questions (5 students) The stagnation temperature is the temperature you would reach if you stagnated an ideal gas via an adiabatic process with no external work.
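As a concrete sketch of that definition (again with illustrative numbers of my own): for an ideal gas brought to rest adiabatically with no external work, the stagnation temperature is Tt = T + c^2/(2*cp).

    # Stagnation temperature of an ideal gas: T_t = T + c^2 / (2 * c_p)
    c_p = 1004.0   # J/(kg K), specific heat of air at constant pressure (approx.)
    T = 250.0      # K, static temperature (assumed)
    c = 240.0      # m/s, flow speed (assumed)

    T_t = T + c**2 / (2 * c_p)
    print(f"T_t = {T_t:.1f} K")   # about 278.7 K, a rise of nearly 29 K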
3) Are we doing PRS questions on this stuff? (1 student) Yes we will.
4) How do we use these things? (1 student) You will see several examples in the next lecture and on the homework.
5)Why is "c" used for velocity? It is confusing because it is also used for light. (1 student) It is confusing, but we don't have enough letters in the alphabet. The other common letters used for velocity are "v" and "u" and "w" (you will see a lot of these in fluids). But in thermo we use these for volume, internal energy and work.
6)Would be clearer if you emphasized external work=shaft work. (1 student) I tried to say it several times, but I do need to add more words to this effect in the notes.
7) What do flow work and shaft work mean when we are talking about small bits of air on or around an airplane? (1 student) The same thing they mean for any control volume (note shaft work is really a placeholder for all forms of external work).
8) How do you determine the signs of wf and ws? (1 student) The same way we do for all forms of work. If energy flows out of the system (i.e. the system does work on its surroundings) the sign is positive. If energy flows into the system (i.e. the surroundings do work on the system) the sign is negative.
9) Would the temperature drop you talked about lead to accelerated ice formation on an airfoil under the right atmospheric conditions? (1 student) Perhaps. I don't really know. Professor Hansman is an expert on icing and he may know the answer to this.
10) With the turbine engine, is it best to maximize wshaft and minimize wflow? (1 student) In general, one would like to extract as much work from the flow as possible. But note that in the case of the turbine, the flow work is negative, causing ws > w. Note also that in reality there are a variety of fluid mechanical and mechanical design considerations that go into determining the best flow speeds, pressures and temperatures in various parts of the device.
11) No mud (7 students). Good.
Fractal patterns spotted in the quantum realm
Feb 9, 2010
From thunderous mountain landscapes viewed from above to the erratic trajectories of Brownian motion, fractal patterns exist at many scales in nature. Physicists believe that fractals also exist in the quantum world, and now a group of researchers in the US has shown that this is indeed the case. This image shows the fractal pattern that results when the waves associated with electrons start to interfere with each other.
A fractal is a geometric entity whose basic patterns are repeated at ever decreasing sizes. For example, a river system is an approximate fractal pattern as the channels branch off into progressively narrower tributaries moving upstream; at each confluence the pattern is a smaller version of the previous branching.
A sudden transition
Ali Yazdani at Princeton University in the US and his colleagues have revealed that these patterns also exist at the scale of individual atoms in a solid. And the key to this effect is a sudden transition where a material changes from a metal to an insulator. At this transition, the waves associated with individual electrons go from being extended across the whole system to being localized at lattice sites.
At this metal–insulator transition the electron waves become squashed together. They begin to affect each other in a complicated network of constructive and destructive interference, which results in a fractal pattern. Yazdani and his team were able to observe this effect using a scanning tunnelling microscope (STM), which provided the atomic scale resolution.
The material used was the ferromagnetic semiconductor gallium arsenide doped with up to 5% manganese, chosen because the researchers are interested in efficient ways of turning a semiconductor into a magnet. Indeed, doping gallium arsenide in this way has become a popular approach in the burgeoning field of spintronics – electronics that exploits the spin of particles as well as their charge. Spintronics has the potential to boost the speed of computing and electronics.
Talking about his research, Yazdani admits that observing these fractals was not the primary aim of this research. "We do this stuff every day, but once we managed to get the experiment to work with this material, we were confronted with what look like random patterns," he says. His group went on to develop the theory and realized that the electrons they were observing were on the brink of localization.
Yazdani and his team intend to develop their research by comparing the collective versus individual behaviour of electrons in their system and how this influences the spatial patterns. The bigger picture of this research is to connect these patterns with theories of magnetism to advance both fundamental research and the development of spintronics applications.
This research is published in Science.
About the author
James Dacey is a reporter for physicsworld.com
Major Section: PROGRAMMING
Example:

    ACL2 !>(digit-to-char 8)
    #\8

For an integer n from 0 to 15, (digit-to-char n) is the character corresponding to n in hex notation, using uppercase letters for digits exceeding 9. If n is in the appropriate range, that result is of course also the binary, octal, and decimal digit.

The guard for digit-to-char requires its argument to be an integer between 0 and 9, inclusive.
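For digits above 9, the result is an uppercase hexadecimal character; for instance, a session in the same style as the example above would read:

    ACL2 !>(digit-to-char 15)
    #\F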
Record Heat Wave Grips US: But Is It Climate Change?
Thousands of temperature records have been shattered in recent weeks. Awesome or awful?
- A spring heat wave like no other in U.S. and Canadian history peaked in intensity yesterday, during its tenth day. Since record keeping began in the late 1800s, there have never been so many temperature records broken for spring warmth in a one-week period--and the margins by which some of the records were broken yesterday were truly astonishing. Wunderground's weather historian, Christopher C. Burt, commented to me yesterday, "it's almost like science fiction at this point."
- Records are not only being broken across the country, they're being broken in unusual ways. Chicago, for example, saw temperatures above 26.6°Celsius (80°Fahrenheit) every day between March 14-18, breaking records on all five days. For context, the National Weather Service noted that Chicago typically averages only one day in the eighties each April. And only once in 140 years of weather observations has April produced as many 80°Fahrenheit days as this March. Meanwhile, Climate Central reported that in Rochester, Minnesota, the overnight low temperature on March 18 was 16.6°Celsius (62°Fahrenheit), a temperature so high it beat the record high of 15.5°Celsius (60°Fahrenheit) for the same date.
- Speaking at a high-dollar Chicago fundraiser hosted by Oprah Winfrey as the city basked in June-like weather last week, President Barack Obama admitted to being “a little nervous” about global warming: “We’ve had a good day,” Obama said. “It’s warm every place. It gets you a little nervous about what’s happening to global temperatures. But when it’s 75 degrees in Chicago in the beginning of March it gets you thinking … ” “Something’s wrong,” Oprah interjected. “Yeah,” Obama said. “On the other hand, we really have enjoyed the nice weather.”
- Although studies have not yet been conducted on the main factors that triggered this heat wave and whether global warming may have tilted the odds in favor of the event, scientific studies of previous heat events clearly show that global warming increases the odds of heat extremes, in much the same way as using steroids boosts the chances that a baseball player will hit more home runs in a given year. Gabi Hegerl, Chair of Climate System Science at the University of Edinburgh, said there is evidence that extreme heat events have become more common and more severe, including at the regional level in parts of the U.S. “This is consistent with observing more and stronger heat waves,” she said. Hegerl said that in order to draw conclusions about global warming’s role in this particular heat wave, one would need to conduct modeling studies where you compare the odds of this event occurring with and without added greenhouse gases in the atmosphere, such as carbon dioxide, “to see how much the warming has changed the odds.”
Same Wind Conditions, More Productive Turbines
Wind and solar power are the two main pillars of our clean energy future (combined with hydro where it makes sense, deep-rock geothermal when costs can be brought down, wave power, maybe someday thorium, some biomass/biofuels where they make sense, etc.). Taking a closer look at wind power, huge progress has been made in recent decades, and while some advances are obvious – wind turbines have been getting bigger all the time – some improvements are making a difference in more subtle ways.
Your Phone Isn’t the Only Thing Getting Smarter
A lot of progress has been made in making a wind turbine of a given size more productive, from better positioning of individual wind turbines and whole wind farms thanks to better computer models, to improving the efficiency of various components, from rotor blades to all of the mechanical parts inside the nacelle – all of this connected to sensors and clever software that optimize operations under a wider range of conditions. Improving reliability is also a big deal, as it lowers overall operating costs and allows a turbine to capture more wind because of less maintenance downtime. All of this together makes enough of a difference that even turbines with a capacity rating lower than older models' actually end up producing more kWh per year:
GE’s new 2.5-120 wind turbine, announced last week, is a case in point. Its maximum power output, 2.5 megawatts, is lower than that of the 2.85 megawatt turbine it’s superseding. But over the course of a year it can generate 15 percent more kilowatt hours. Arrays of sensors paired with better algorithms for operating and monitoring the turbine let it keep spinning when earlier generations of wind turbines would have had to shut down.(source)
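A back-of-envelope check of that claim is sketched below; the capacity factors are my own assumptions, chosen to reproduce the quoted 15 percent gain, not GE's published figures:

    HOURS_PER_YEAR = 8760

    def annual_kwh(rating_mw, capacity_factor):
        # Annual energy output for a turbine of a given nameplate rating.
        return rating_mw * 1000 * HOURS_PER_YEAR * capacity_factor

    old = annual_kwh(2.85, 0.30)    # older 2.85 MW design (assumed CF)
    new = annual_kwh(2.50, 0.395)   # new 2.5 MW design (assumed higher CF)

    print(f"gain: {new / old - 1:.0%}")   # about 15%

The point of the arithmetic: a lower nameplate rating can still win on annual kWh if better sensors, algorithms, and reliability keep the rotor turning through conditions that would have idled an older machine.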
This, plus bigger turbines and economies of scale, has contributed to more than halving the average cost of wind power during the past two decades: from 15 cents/kWh in 1991 to about 6.5 cents/kWh today, a price that is competitive with new natural gas power plants.
All of these improvements are making the future of wind power look bright, at least as long as we keep improving our power grids to handle more intermittency (something that more plug-in vehicles can help with, since they can be charged off-peak when the wind is blowing):
Indeed, last month the Electric Reliability Council of Texas said that the latest data on wind turbine performance and costs suggests that wind power is likely to be more cost-effective than natural gas over the next 20 years, and it could account for the majority of new generating capacity added over that time in Texas. Before the council factored in the latest data, it had expected all new generation to come from natural-gas plants.(source)
All rights reserved. Technical University of Budapest Institute of Physics, NIIF Institute.
Turning away from physics and the natural sciences is both a European and a worldwide problem that scientists face. A possible explanation of this phenomenon may be that physics is taught in so few lessons at secondary schools that students cannot develop a proper scientific view of life. Experimentation plays a fundamental role in building the right physical approach. During the programme, several experiments will be performed in order to emphasise that physics is present in our everyday life and that it is therefore of utmost importance to study physics. By invoking several spectacular experiments, the complex concepts and laws of physics can be explained in a clear and easily understandable way for the general audience.
Guest post by David Archibald
Willis Eschenbach's post on lab work on coral response to elevated carbon dioxide levels, The Reef Abides, leads to a large-scale natural experiment in Papua New Guinea. There are several places at the eastern end of that country where carbon dioxide is continuously bubbling up through healthy looking coral reef, with fish swimming around and all that that implies.
Coral Reef at Dobu Island with carbon dioxide bubbling through it (photo: Bob Halstead)
What that implies is that ocean acidification is no threat at all. If the most delicate, fragile, iconic ecosystem of them all can handle flat-out saturation with carbon dioxide, what is there to worry about?
That lack of a threat is, though, a threat to a human institution – the Australian Institute of Marine Science (AIMS), based in Townsville, north Queensland, run by Professor Ove Hoegh-Guldberg.
To quote Walter Starck (http://www.bairdmaritime.com/index.php?option=com_content&view=article&id=6171:png-coral-reefs-and-the-bubble-bath&catid=99:walter-starcks-blog&Itemid=123) – “A never ending litany of purported environmental threats to Australia’s Great Barrier Reef has maintained a generous flow of funding for several generations of researchers. The “reef salvation” industry now brings about US$91 million annually into the local economy in North Queensland.
Although none of these threats has ever become manifest as a serious impact and all of the millions of dollars in research has never found any effective solution for anything, the charade never seems to lose credibility or support. The popular threat of the moment is ocean acidification from increasing atmospheric carbon dioxide.”
So AIMS mounted an expedition to Papua New Guinea to examine the large scale, natural experiment that was a threat to their livelihood. They reported in Nature (http://www.nature.com/nclimate/journal/v1/n3/pdf/nclimate1122.pdf?WT.ec_id=NCLIMATE-201106) that while the reefs they examined looked healthy, they didn’t like them. The threat has been averted for the moment, but maintaining funding requires constant vigilance.
To lend credence to David Archibald’s post, here’s a story on Bob Halstead’s diving website.
THE SHELL GAME
By Bob Halstead
The shell game has been of particular interest to me after reading a scientific letter “Volcanic carbon dioxide vents show ecosystem effects of ocean acidification” published in Nature a couple of years ago. Since then there has been a deluge of alarmist warnings on “Ocean Acidification” – including one in the Feb/March issue of Dive Pacific from an organization called the “International Union for the Conservation of Nature” – but no actual reefs destroyed by it, of course.
The letter was illustrated by photographs of eroded shells and predictably concluded that this was due to ocean acidification, caused by too much atmospheric CO2 which Al Gore tells us is caused by bad humans burning fossil fuels to survive and prosper (as he did), instead of buying carbon credits from him and becoming poor.
The reason for my scepticism was my own well-publicised underwater observations at Dobu Island in Milne Bay, where CO2 vents bubble through a thriving coral reef. Just maybe, I thought, these people do not have a clue what they are writing about. So when they approached me to see if they could dive Dobu I said of course, but that I was not interested in cherry picking data to conform to any conspiracy to promote Anthropogenic Global Warming. Interestingly I never heard back from them.
Now we have the astonishing “Climategate” scandal revealing a huge scientific fraud producing the dodgy evidence used by the IPCC and environmental activists to predict Global Apocalypse, and a Copenhagen Treaty more designed to foster World Government than combat pollution. I originally wrote this before the Copenhagen conference so had no idea what a total fiasco and lie-fest it turned out to be.
But I have real news!!
The Woods Hole Oceanographic Institute has, on 1st December 2009, issued a press release titled “In CO2-rich Environment, Some Ocean Dwellers Increase Shell Production”. Here is some of what it says:-
“In a striking finding that raises new questions about carbon dioxide’s (CO2) impact on marine life, Woods Hole Oceanographic Institution (WHOI) scientists report that some shell-building creatures—such as crabs, shrimp and lobsters—unexpectedly build more shell when exposed to ocean acidification caused by elevated levels of atmospheric carbon dioxide (CO2).
Because excess CO2 dissolves in the ocean—causing it to “acidify” —researchers have been concerned about the ability of certain organisms to maintain the strength of their shells. Carbon dioxide is known to trigger a process that reduces the abundance of carbonate ions in seawater—one of the primary materials that marine organisms use to build their calcium carbonate shells and skeletons.
The concern is that this process will trigger a weakening and decline in the shells of some species and, in the long term, upset the balance of the ocean ecosystem.
But in a study published in the Dec. 1 issue of Geology, a team led by former WHOI postdoctoral researcher Justin B. Ries found that seven of the 18 shelled species they observed actually built more shell when exposed to varying levels of increased acidification. This may be because the total amount of dissolved inorganic carbon available to them is actually increased when the ocean becomes more acidic, even though the concentration of carbonate ions is decreased.
“Most likely the organisms that responded positively were somehow able to manipulate…dissolved inorganic carbon in the fluid from which they precipitated their skeleton in a way that was beneficial to them,” said Ries, now an assistant professor in marine sciences at the University of North Carolina. “They were somehow able to manipulate CO2…to build their skeletons.”
“We were surprised that some organisms didn’t behave in the way we expected under elevated CO2,” said Anne L. Cohen, a research specialist at WHOI and one of the study’s co-authors. “What was really interesting was that some of the creatures, the coral, the hard clam and the lobster, for example, didn’t seem to care about CO2 until it was higher than about 1,000 parts per million [ppm].” Current atmospheric CO2 levels are about 380 ppm, she said.”
NOTE “the coral” in the previous paragraph. There is more to the news release, and it ends up by saying:-
Since the industrial revolution, Ries noted, atmospheric carbon dioxide levels have increased from 280 to nearly 400 ppm. Climate models predict levels of 600 ppm in 100 years, and 900 ppm in 200 years.
“The oceans absorb much of the CO2 that we release to the atmosphere,” Ries says. However, he warns that this natural buffer may ultimately come at a great cost.
“It’s hard to predict the overall net effect on benthic marine ecosystems,” he says. “In the short term, I would guess that the net effect will be negative. In the long term, ecosystems could re-stabilize at a new steady state.
“The bottom line is that we really need to bring down CO2 levels in the atmosphere.”
Having studied Climategate it is not difficult to work out how this amazing and welcome press release actually got published instead of being censored or trivialised, as so many other inconvenient anti-AGW scientific papers and observations have been.
The last line is the key (…we really need to bring down CO2 levels in the atmosphere.”). This inclusion was designed to appease the alarmist fanatics, and enable the paper – which is a staggering departure from the usual AGW propaganda – to be published. Brilliant.
Look out! Woods Hole has found a way of beating the Shell Game.
Last year David Archibald sent me another report, by Walter Starck, in PDF form, titled: Observations on Growth of Reef Corals and Sea Grass Around Shallow Water Geothermal Vents in Papua New Guinea
He has similar photos not only of Coral and CO2 bubbling up, but of sea grass patches.
[Photo: Dobu I. corals aerated by bubbling CO2]
[Photo: One of the numerous smaller bubble streams coming up through lush beds of Thalassia.]
On 14 February 2010 we visited two geothermal areas in the D’Entrecasteaux Islands, Milne Bay Province, PNG. One is located near the north end of Normanby Island about 30 m S.E. of the outer end of the wharf at the village of Esa’Ala. The other is a well known dive site known as the “Bubble Bath”. It is located about 20 m offshore near the mid-north coast of Dobu Island, an extinct volcano.
At Esa’Ala the area of bubble venting is scattered along the inner edge of a fringing reef which is about 10-15 m in width. The outside edge slopes steeply into deep water and the inside edge is bordered by grass beds (Thalassia sp.) on a silty bottom of mixed reef and volcanic sediments. The bubbling consists of near-continuous small trickles at numerous points scattered amid both grass and coral areas in water depths of 3-5 m. The location is sheltered from prevailing wind and wave action.
Both coral and plant growth were unusually luxuriant. In the grass beds small juvenile rabbitfish (Siganus sp.) are abundant, feeding on the epiphytic algae growing on the grass blades.
The pH of water samples was measured using a Pacific Aquatech PH-013 High Accuracy Portable pH Meter with a resolution of 0.01 pH. It was calibrated with buffered solutions at pH 6.864 and pH 4.003 immediately before measuring the samples. The Esa’Ala sample was taken immediately adjacent to a Porites coral and about 10 cm from a small bubble stream; the pH was 7.96. A sample from next to a Porites coral at the “Bubble Bath” measured 7.74; this was also about 10 cm from a somewhat larger bubble stream and about 12 m from the main gas vent. A sample next to the main vent measured 6.54. A sample from the open ocean just outside Egum Atoll, about 100 km N.E. of Dobu, read 8.23, which is close to typical for the open ocean in this region.
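Because pH is a base-10 logarithmic scale, the differences between these readings correspond to large relative changes in hydrogen-ion concentration. A minimal sketch, using only the pH values reported above and the standard conversion [H+] = 10^-pH:

```python
# Hydrogen-ion concentration from pH: [H+] = 10**(-pH).
# The pH values below are the readings reported in the text.
readings = {
    "Esa'Ala (near Porites coral)": 7.96,
    "Bubble Bath (near Porites coral)": 7.74,
    "Bubble Bath (main vent)": 6.54,
    "Open ocean (Egum Atoll)": 8.23,
}

ocean = 10 ** -readings["Open ocean (Egum Atoll)"]
for site, ph in readings.items():
    h = 10 ** -ph  # mol/L
    print(f"{site}: pH {ph:.2f}, [H+] = {h:.2e} mol/L, "
          f"{h / ocean:.1f}x open-ocean value")
```

The main-vent sample works out to roughly 50 times the open-ocean hydrogen-ion concentration, and the near-coral samples to roughly 2-3 times it.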
It seems that coral reefs are thriving at pH levels well below the most alarming projections for 2100. The biggest threat we face isn’t to Barrier Reef tourism. The whole modern economy is founded on cheap abundant energy. High energy liquid fuel is essential to all mobile heavy machinery. Trucks, tractors, trains, ships, planes and earth moving equipment cannot be run on sunbeams and summer breezes. The International Energy Agency along with virtually all oil industry analyst groups now recognise that future global oil supplies are likely to be increasingly tight and more expensive.
Read the full report with more photos here (PDF): Walter Starck on coral and other marine life
How fast is lightning? Lightning, in fact, moves not only too fast for humans to see, but so fast that humans can't even tell which direction it is moving. The above lightning stroke did not move too fast, however, for this extremely high time-resolution video to resolve. Tracking at an incredible 7,207 frames per second, the video shows actual time progressing at the bottom. The above lightning bolt starts with many ionized channels forming simultaneously and branching out from a negatively charged pool of electrons and ions that has somehow been created by drafts and collisions in a rain cloud. About 0.015 seconds after appearing -- which takes about 3 seconds in the above slow-motion video -- one of the meandering charge leaders makes contact with a suddenly appearing positive spike moving up from the ground, and an ionized channel of air is created that instantly acts like a wire. Immediately afterwards, this hot channel pulses with a tremendous amount of charge shooting back and forth between the cloud and the ground, creating a dangerous explosion that is later heard as thunder. Much remains unknown about lightning, however, including details of the mechanism that separates charges.
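The numbers in the caption fix the slow-motion factor. A quick sketch (the 3-second playback span comes from the text; the playback frame rate is derived, not stated):

```python
# Slow-motion arithmetic implied by the caption's numbers.
capture_fps = 7207        # camera frame rate, from the text
real_duration = 0.015     # seconds of real time (leader propagation)
playback_duration = 3.0   # seconds the same span takes in the video

frames = capture_fps * real_duration          # ~108 frames captured
playback_fps = frames / playback_duration     # ~36 fps on playback
slowdown = capture_fps / playback_fps         # ~200x slow motion

print(f"{frames:.0f} frames, played at {playback_fps:.0f} fps "
      f"=> roughly {slowdown:.0f}x slower than real time")
```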
Credit & Copyright: Tom A. Warner, ZTResearch, www.weathervideoHD.TV
NGC 253 is not only one of the brightest spiral galaxies visible, it is also one of the dustiest. Discovered in 1783 by Caroline Herschel in the constellation of Sculptor, NGC 253 lies only about ten million light-years distant. NGC 253 is the largest member of the Sculptor Group of Galaxies, the nearest group to our own Local Group of Galaxies. The dense dark dust accompanies a high star formation rate, giving NGC 253 the designation of starburst galaxy. Visible in the above photograph is the active central nucleus, also known to be a bright source of X-rays and gamma rays.
Credit & Copyright:
There are very few countries that compare with Australia for heat and humidity, but the situation seems to be worsening, with signs that the climate is changing. A report by the Australian Bureau of Meteorology seems to confirm the concerns of many experts who have been predicting a significant increase in the temperature of the world.
Indeed, while global warming has been dismissed by many, the figures from Australia seem to suggest there is something behind the theory that the Earth is heating up.
Record temperatures across Australia
One of the issues people will raise time and time again is: when does a change in the weather become a change in the climate trend? Experts will tell you that extreme conditions, on either side of traditional weather ranges, are consistent with a potential change in the overall climate going forward. This would appear to be the case in Australia, which has experienced some extreme weather conditions over the last 12 months.
To give you an example of what is happening across Australia, figures show that Alice Springs experienced 40°C weather for 17 consecutive days; and while this sounds very uncomfortable, compare it with Birdsville, where the run continued for 31 days. These are not isolated instances of changing weather: numerous other areas of Australia have experienced record daily temperatures.
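Statistics like the 17- and 31-day figures above are simply the longest run of consecutive days at or above a threshold. A minimal sketch with hypothetical daily maxima (a real analysis would use Bureau of Meteorology station records):

```python
# Longest run of consecutive days at or above a threshold -- the kind
# of statistic behind the 17- and 31-day figures above. The data are
# hypothetical daily maximum temperatures in deg C.

def longest_run(temps, threshold=40.0):
    best = run = 0
    for t in temps:
        run = run + 1 if t >= threshold else 0
        best = max(best, run)
    return best

daily_max = [41.2, 40.5, 43.0, 39.8, 40.1, 40.9, 44.6, 42.3]
print(longest_run(daily_max))  # 4 (the last four readings)
```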
As of earlier this week, the summer of 2012/13 is the hottest summer in Australia since records began in 1910. The very fact that the months from September 2012 to February 2013, spanning the traditional Australian summer, were all warmer than the corresponding previous highs from 2006/07 certainly reflects concerns on the ground.
Quote from AustraliaForum.com : “Ok I understand the El Niña cycle and all, but seriously!! It was all warm, sunny and fuzzy today. And by midday like from nowhere started raining…I know it is still warm enough to walk around in singlet and shorts but I signed up for endless summer with hot sun and uninterrupted beach season.”
This is a phenomenal shift in weather patterns, further strengthened by the fact that the average summer temperature in 2012/13 was a disturbing 1.4°C above normal and 1.1°C above the average for the period between 1961 and 1990. Those who refute allegations of global warming will no doubt refer back to history, which shows that the Earth has gone through cyclical periods of excessive heat and cooling. However, the situation today in Australia seems to be very different.
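For readers wondering how a figure such as the 1.1°C value is derived: a seasonal anomaly is simply the season's mean temperature minus the mean of the same season over a reference period such as 1961-1990. A minimal sketch with made-up numbers, for illustration only:

```python
# Seasonal anomaly = mean of the season's monthly temperatures minus
# the mean of the same season over a baseline period. Numbers below
# are hypothetical; real analyses use station or gridded data.

def seasonal_anomaly(season_monthly, baseline_seasonal_means):
    season_mean = sum(season_monthly) / len(season_monthly)
    baseline = sum(baseline_seasonal_means) / len(baseline_seasonal_means)
    return season_mean - baseline

summer_2012_13 = [29.4, 30.1, 29.8]          # hypothetical Dec/Jan/Feb means, deg C
baseline_summers = [28.3, 28.9, 28.5, 28.6]  # hypothetical 1961-1990 sample
print(f"anomaly: {seasonal_anomaly(summer_2012_13, baseline_summers):+.2f} C")
```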
Hotter for longer and more often
As we touched on above, there is very strong evidence to suggest that temperatures in Australia are hotter than in previous years and, more concerning, that these hot days occur more often than ever before. We all saw the news over the last few months showing frightening bushfires that took wildlife, human life and property across Australia, and the terrible scenes of devastation left behind by fires that swept across areas of Australia the size of small countries.
Scientists have been warning of increased summer temperatures across Australia for many years now, although a number of those showing concern were ridiculed in public and their reputations shattered. The fact is that recent data released by the Australian authorities seems to confirm growing fears of a step change in the Australian climate.
Global warming refers to an increase in the average temperature of the Earth's surface large enough to cause climatic change. The increase in greenhouse gases such as carbon dioxide is seen as the most important factor. The rise in their atmospheric levels is caused by human activities such as deforestation and the combustion of fossil fuels.
Global surface temperature increased 0.74 ± 0.18 °C (1.33 ± 0.32 °F) during the last century.
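The Fahrenheit figures follow directly from the Celsius ones: temperature differences convert by a factor of 9/5, with no 32-degree offset, since the offset cancels when subtracting. A quick check:

```python
# Temperature differences (not absolute temperatures) convert as
# delta_F = delta_C * 9/5 -- no +32 offset, since that cancels.
delta_c, err_c = 0.74, 0.18
delta_f, err_f = delta_c * 9 / 5, err_c * 9 / 5
print(f"{delta_c} +/- {err_c} C  ->  {delta_f:.2f} +/- {err_f:.2f} F")
# 0.74 +/- 0.18 C  ->  1.33 +/- 0.32 F, matching the quoted figures.
```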
Results from our forum
... that food ran out - they died, without damaging all other life on Earth. Global warming is caused by the heat that man produces by the use of artificial energy and by the ...
Appendix A. Methods for creating current-observed, current-predicted, and future-predicted species distribution models for whitebark pine within British Columbia, Canada.
In 2006, species distribution models for whitebark pine within British Columbia were created by Tongli Wang (University of British Columbia, unpublished) using the climate-envelope modeling technique of Hamann and Wang (2006). The current-observed range for whitebark pine was determined using 479 presence observations from the botanical inventory used to create the BC Ministry of Forests and Range’s Biogeoclimatic Ecological Classification (BEC) system (Fig. 1a). The BEC system is a hierarchical classification system that divides BC’s landbase into 14 zones, 97 subzones and 152 variants based on vegetation, soil, climate and topography (Meidinger and Pojar, 1991). The variant level describes land units comprising relatively homogeneous ecological and geoclimatic features.
To create current-observed species ranges, Hamann and Wang (2006) extrapolated the occurrence data from one-dimensional observation points to two-dimensional BEC variant polygons, under the assumption that a species should be able to grow anywhere within a variant in which it is observed. Variant-level divisions had not been delineated for the alpine tundra BEC zone at the time that Hamann and Wang created the SDMs, so instead they created alpine tundra pseudo-variants based on geographic divisions between mountain ranges. High-elevation areas with permanent icefields were excluded, and polygons with low (< 1/100 of average) predicted species frequencies eliminated.
ClimateBC v3.1 (Wang et al., 2006) was then used to generate biologically-relevant normal (1961-1990) climate variables associated with whitebark pine’s current-observed range. ClimateBC interpolates weather station data using high-resolution digital elevation models that accurately capture climatic variance in BC’s mountainous terrain. Whitebark pine’s current-predicted range (Fig. 1b) was extrapolated by selecting all areas in the province with normal climate conditions in the range of those currently experienced by the species, accounting for climatic interactions. Future-predicted ranges for 2025, 2055 and 2085 were then projected using “middle of the road” (IS92a) Coupled Global Circulation Model ensemble mean (CGCM1 GAX) carbon scenarios developed by the Canadian Centre for Climate Modeling and Analysis (Flato et al., 2000, accessed using ClimateBC). Recently, whitebark pine’s present and future distributions were remodelled using a classification and regression-tree procedure called Random Forests, yielding broadly similar predictions (T. Wang, University of British Columbia, pers. comm.)
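The climate-envelope step described above can be illustrated with a minimal sketch: a grid cell falls inside the predicted range if each climate variable lies within the range observed at presence locations. This toy version uses random data and ignores the climatic interactions that Hamann and Wang (2006) account for; it illustrates the idea, not their implementation:

```python
import numpy as np

# Toy climate-envelope model: a cell is inside the predicted range if
# every climate variable falls within the range observed where the
# species is present. Grid, variables, and occurrences are synthetic.

rng = np.random.default_rng(0)
n_cells, n_vars = 10_000, 4                  # hypothetical grid and variables
grid_climate = rng.normal(size=(n_cells, n_vars))
presence = rng.choice(n_cells, size=300, replace=False)  # hypothetical occurrences

lo = grid_climate[presence].min(axis=0)      # envelope bounds per variable
hi = grid_climate[presence].max(axis=0)
inside = np.all((grid_climate >= lo) & (grid_climate <= hi), axis=1)

print(f"{inside.sum()} of {n_cells} cells fall inside the climate envelope")
# Future-predicted ranges follow by re-testing the same bounds against
# projected (e.g., CGCM1) rather than normal (1961-1990) climate grids.
```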
Flato, G. M., G. J. Boer, W. G. Lee, N. A. McFarlane, D. Ramsden, M. C. Reader, and A. J. Weaver (2000). The Canadian centre for climate modelling and analysis global coupled model and its climate. Climate Dynamics 16(6):451–467.
Hamann, A., and T. Wang (2006). Potential effects of climate change on ecosystem and tree species distribution in British Columbia. Ecology 87:2773–2786.
Meidinger, D., and J. Pojar (1991). Ecosystems of British Columbia. Special Report Series 6, B.C. Ministry of Forests and Range, Victoria, BC. B.C. Ministry of Forests and Range, Victoria.
Wang, T., A. Hamann, D. L. Spittlehouse, and S. N. Aitken (2006). Development of scale-free climate data for western Canada for use in resource management. International Journal of Climatology 26(3):383–397.
- All common compounds of Group I and ammonium ions are soluble.
- All nitrates, acetates, and chlorates are soluble.
- All binary compounds of the halogens (other than F) with metals are soluble, except those of Ag, Hg(I), and Pb. (Pb halides are soluble in hot water.)
- All sulfates are soluble, except those of barium, strontium, calcium, lead, silver, and mercury (I). The latter three are slightly soluble.
- Except for rule 1, carbonates, hydroxides, oxides, silicates, and phosphates are insoluble.
- Sulfides are insoluble except for calcium, barium, strontium, magnesium, sodium, potassium, and ammonium.
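These rules lend themselves to an ordered lookup: check rule 1 first, then work down the list. A minimal sketch, with ions written as simple name strings rather than parsed formulas:

```python
# Minimal sketch encoding the rules above as an ordered checklist.
# Rules are checked in the order listed, so the Group I / ammonium
# rule correctly overrides the later "insoluble" rules.

GROUP_I = {"Li", "Na", "K", "Rb", "Cs"}

def soluble(cation, anion):
    if cation in GROUP_I or cation == "NH4":
        return True                                   # rule 1
    if anion in {"NO3", "C2H3O2", "ClO3"}:
        return True                                   # rule 2
    if anion in {"Cl", "Br", "I"}:
        return cation not in {"Ag", "Hg(I)", "Pb"}    # rule 3
    if anion == "SO4":
        return cation not in {"Ba", "Sr", "Ca", "Pb", "Ag", "Hg(I)"}  # rule 4
    if anion in {"CO3", "OH", "O", "SiO3", "PO4"}:
        return False                                  # rule 5
    if anion == "S":
        return cation in {"Ca", "Ba", "Sr", "Mg", "Na", "K", "NH4"}   # rule 6
    return None  # not covered by these rules

print(soluble("Ag", "Cl"))   # False -- AgCl precipitates
print(soluble("Na", "CO3"))  # True  -- rule 1 overrides rule 5
print(soluble("Ba", "SO4"))  # False -- BaSO4 is insoluble
```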
It is very helpful to be able to recognize the formulas for those gases that may be used or produced during the course of a chemical reaction. The way to indicate that a compound is a gas when you write a chemical equation is to place (g) after the formula, such as HCl(g).
Here is a list of some of the more common gases:
F2, Cl2, H2, N2, O2, SO2, SO3, CO, CO2, H2S, NO, NO2,
NH3, P2O3, P2O5, SiF4, HCl, HBr, HI, HF, N2O5, N2O3, N2O
Tahoe Science Projects supported by SNPLMA
Monitoring past, present, and future water quality using remote sensing (RS)
Our current view of water quality in Lake Tahoe depends heavily on data records from two points within the lake and the points where some of the streams enter the lake. These point data do not provide the temporal and spatial detail needed to understand the changes taking place in different parts of the lake (such as the nearshore zone), or the linkage between the lake observations and the input sources. The intent of this project was to demonstrate the use of remote sensing for measuring water quality parameters at Lake Tahoe. A major benefit of this approach is that a whole-lake view of water quality changes becomes possible, extending into the nearshore, where discrete sources of pollutants could be identified. Linked to this was the possibility that, by using archived satellite data, long-term trends in other parts of the lake (beyond the two sites currently monitored by UC Davis) could be reconstructed. The system capitalized on the local infrastructure developed by NASA and UC Davis, the long-term dataset collected by UC Davis, and the numerous freely available satellite datasets.
Lead Researchers: S. Geoffrey Schladow, University of California, Davis; Simon J. Hook, Jet Propulsion Laboratory (NASA/JPL)
Figure: Preliminary near-shore clarity map derived from satellite imagery (ASTER) data
Final Report [27MB pdf]
Summary of Findings [pdf]
A system was developed to semi-automatically acquire, store, and process satellite imagery to quantify water clarity and near-surface chlorophyll a concentration over the entire lake, as measures of nearshore and offshore water quality at Lake Tahoe. An automated atmospheric correction procedure and processing code were developed to produce high-quality maps and time series of water quality. These lakewide maps show significant variations in Secchi depth and chlorophyll a. The largest variations occur closer to shore, and the lowest Secchi depths and highest chlorophyll a concentrations are frequently associated with stream mouths and with times of spring runoff.

An unexpected finding from comparing the 8-year, monthly averaged Secchi depth around the lake periphery is that Secchi depth is consistently lower on the east side of the lake (from Stateline Point to Tahoe Keys) than on the west side. This appears to hold true at all times of year and is most pronounced closest to shore (at the nearshore sampling stations). The region of consistently lowest clarity is between Glenbrook and Marla Bay. Chlorophyll a, on the other hand, did not show as clear an east-to-west pattern. The images in the report show clearly that the distribution of clarity and chlorophyll a in the nearshore is very often controlled by transport processes within the lake.

In addition to the maps and time series, a web-accessible repository was created to store and distribute these and other satellite data products acquired or developed at Lake Tahoe on a near-real-time basis. The methodology developed for this study can be used to examine historical or future changes in nearshore and offshore water clarity for any region of concern around Lake Tahoe and to help guide management decisions and monitoring efforts related to water quality.
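The summary describes the products rather than the retrieval equations, but empirical algorithms of this kind are commonly regressions of in-situ measurements against band ratios of atmospherically corrected reflectance. A generic sketch with hypothetical matchup data; this illustrates the general approach, not the project's actual algorithm, which is detailed in the full report:

```python
import numpy as np

# Generic empirical water-quality retrieval from satellite bands.
# Assumed here: chlorophyll-a varies with the log of a blue/green
# reflectance ratio, with coefficients fit against in-situ matchups.
# All data below are hypothetical.

def fit_band_ratio(blue, green, chl_insitu):
    """Least-squares fit of log10(chl) = a + b * log10(blue/green)."""
    x = np.log10(blue / green)
    b, a = np.polyfit(x, np.log10(chl_insitu), 1)
    return a, b

def predict_chl(blue, green, a, b):
    return 10 ** (a + b * np.log10(blue / green))

# Hypothetical matchups: corrected reflectances + lab chlorophyll (ug/L)
blue = np.array([0.020, 0.018, 0.015, 0.012])
green = np.array([0.010, 0.011, 0.012, 0.013])
chl = np.array([0.4, 0.6, 1.1, 1.9])

a, b = fit_band_ratio(blue, green, chl)
print(predict_chl(np.array([0.016]), np.array([0.0115]), a, b))
```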
For more information:
NASA JPL Calibration and Validation
NASA JPL Calibration and Validation Lake Tahoe