Punctuated equilibrium and molecular clocks?
Dr. Peter Gegenheimer
PGegen at UKans.nolospamare.edu
Fri Sep 4 18:37:39 EST 1998
On Wed, 2 Sep 1998 15:16:45, Johnjoe McFadden <j.mcfadden at surrey.ac.uk> wrote:
> Punctuated equilibrium and molecular clocks?
> The punctuated equilibrium hypothesis of Gould and Eldridge, in which
> evolution is proposed to go through long periods of stasis interspersed
> with bursts of rapid evolution, is of course related to the fossil
> record. However, periods of rapid evolution should also leave their
> trace in molecular clocks.
> Does the phylogenetic analysis of protein gene sequences suggest that
> their evolution has in some cases been episodic? I guess the evidence,
> if it existed, would come from comparison of sequence divergence of a
> protein like globin with a molecular clock sequence (e.g. ribosomal RNA)
> for the same group of species. Is there any evidence that over
> geological periods of time (measured by the clock sequence) the protein
> undergoes episodic bouts of evolution?
Punctuated equilibrium is almost certainly driven by large-scale genome and
chromosome rearrangements which will not be reflected in the sequences of
individual genes. Rather, it is the organization & spatial/temporal patterns of
gene expression which vary, driven perhaps by the relocation of epigenetic
regulation (e.g. DNA methylation or protein binding). As you can see, this is
the hard-core McClintock line, and I think it will prove to be right.
Sequences of core enzymes, such as rRNA (the catalytic component of the
ribosome), cannot vary greatly over time without losing function. Sequences
involved in developmental regulation and the external form of an organism can vary much more freely.
| Dr. Peter Gegenheimer | Vox: 785-864-3939 FAX: 785-864-5321 |
| Molecular Biosciences | PGegen at UKans.edu |
| Ecol & Evol Biology | http://RNAworld.Bio.UKans.edu/ |
| | |
| University of Kansas | |
| 2045 Haworth Hall | "The sleep of reason produces |
| Lawrence KS 66045-2106 | monsters." Goya |
We have seen that the grand disparity that was believed to exist between the way Nature works here on earth and in the heavens is not valid. The question remains, however, can we learn everything we need to know by investigating phenomena here on earth and extending that result to the Universe at large?
The answer must be no for the following reasons:
1) Who would have thought to look for a law of Universal gravitation without the precise measurements and detailed analysis of Brahe and Kepler? Cavendish's laboratory measurement of G was undertaken in order to interpret results obtained for the solar system.
2) Even if someone would have used the Cavendish apparatus to map out the gravitational force between two bodies, independently of knowing Kepler's results, would we be able to infer a complete understanding of celestial motion?
No. We know Newton's Law of Universal Gravitation is F = G m1 m2 / r^2, but even this law turns out to be incomplete.
For example, there are certain aspects of Mercury's motion that can not be explained using the Newtonian form. The correct explanation of Mercury's orbital motion requires General Relativity. In fact, General Relativity predicts that the path of a beam of light will be bent in a gravitational field. This effect is too feeble to see in a lab on earth. It was first observed by starlight being bent near the disk of the sun in a solar eclipse.
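A quick numerical check makes this concrete. The sketch below is my own illustration, not part of the original notes; the constants are standard textbook values. It evaluates the general-relativistic perihelion-advance formula for Mercury and recovers the famous ~43 arcseconds per century:

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_sun = 1.989e30     # mass of the sun, kg
    c = 2.998e8          # speed of light, m/s
    a = 5.79e10          # Mercury's semi-major axis, m
    e = 0.2056           # Mercury's orbital eccentricity
    period_days = 87.97  # Mercury's orbital period

    # Extra perihelion advance per orbit predicted by General Relativity (radians)
    dphi = 6 * math.pi * G * M_sun / (c**2 * a * (1 - e**2))

    orbits_per_century = 36525.0 / period_days
    arcsec_per_century = dphi * orbits_per_century * math.degrees(1) * 3600
    print(f"GR perihelion advance: {arcsec_per_century:.1f} arcsec/century")  # ~43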
3) If we consider then the solar system to be our laboratory, is that a big enough laboratory to establish all that could be known?
The answer to this must be no as well. Since Zwicky's work in the 1930s, it has been known that either the gravitational force deviates from Newtonian gravity at large distances, or that there is substantial dark matter in and between galaxies. The density of dark matter is so low that it has an imperceptible effect on small-scale motions, like those in the solar system. The data seem to favor the existence of a very large amount of unknown, perhaps even exotic, matter (is this the new celestial matter?).
4) Is the galaxy large enough as a laboratory to pin down all the Laws of Nature?
This seems to require a negative answer as well. There are structures that encompass groups of galaxies, and the anisotropy of the cosmic background radiation is a pattern on an extremely large scale. We have also seen that the luminosity vs. distance plot for supernovae (SNe Ia) suggests that the universe is accelerating in its expansion. This was the discussion about "dark energy" or the cosmological constant. This effect is not seen until we look out to red shifts > 1, or about 6 billion light years.
Sometimes features of the world are not visible unless we look on the large scale.
In fact, the most recent analysis from WMAP, using the angular spot size of the CMBR temperature fluctuations, fits a flat space scenario. Hence, ignoring local gravitational distortions of space-time, the sum of the angles in a triangle that covers most of the universe is 180 degrees!
5) If we could include the entire universe in our laboratory, would we have enough data to explain it all?
This lecture is about ways of looking at DNA sequences in complete genomes and chromosomes, in terms of symmetry elements. There are two parts to this talk. In Part 1, I will discuss the fact that we simply have "Too Much Information" becoming available, and the problem will only get worse in the near future. There are ways of cataloging and organising the data, of course. I have found that the true diversity of genome sizes in Nature is often neglected, so we'll talk for a few minutes about the "C-value paradox", along with some possible ideas for WHY certain organisms have so much DNA.
I would like to think that one way of dealing with the explosion of sequence information, in terms of DNA sequences, is to think about it in biological terms, in particular in physical-chemical terms of structure and function of symmetry elements. For example, there are specific DNA sequences which "code" for a telomere, and different DNA sequences which are specific for centromeres. Specific DNA sequences, their structures, and biological functions will be discussed.
In Part 2, I will introduce "DNA Atlases", first having a look at base composition throughout sequenced chromosomes, and then looking at gene expression throughout the whole genome.
I have also made a separate file, containing specific LEARNING OBJECTIVES for this lecture, as well as a "self-test quiz", which I recommend having a look at, BEFORE the lecture, if possible. I've incorporated the answers to questions 1 and 2 into PART 1 of the lecture notes.
Brevis esse laboro, obscurus fio. ("I strive to be brief; I become obscure.") - Horace
The information in GenBank is doubling every 10 months.
What are the implications of this?
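As a rough illustration of what that growth rate means (my own arithmetic, not part of the lecture notes), a 10-month doubling time multiplies the database by about 2.3 every year, 64 every five years, and roughly 4,000 every decade:

    # Growth factor for a quantity that doubles every 10 months (illustration only).
    doubling_time_months = 10
    for years in (1, 5, 10):
        factor = 2 ** (12 * years / doubling_time_months)
        print(f"after {years:2d} year(s): about a factor of {factor:,.1f}")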
A look at genome sequencing since 1994:
|YEAR||# GENOMES Sequenced|
Although the number of genomes being sequenced is increasing rapidly, one has to put this into perspective - the organisms can be placed into four different classes:

| Organism group | Size (bp) | No. sequenced |
|---|---|---|
| viruses | ~300 bp to ~350,000 bp | 545 |
| prokaryotes | ~250,000 to ~15,000,000 bp | >100 |
| single-celled eukaryotes | ~12,000,000 to ~600,000,000,000 bp | 4 |
| multi-celled eukaryotes | ~20,000,000 to ~500,000,000,000 bp | 3 |
| Drosophila species | Genome size (in base pairs) |
|---|---|
| D. americana | ~300,000,000 bp |
| D. arizonensis | ~225,000,000 bp |
| D. eohydei (male) | ~234,000,000 bp |
| D. eohydei (female) | ~246,000,000 bp |
| D. funebris | ~255,000,000 bp |
| D. hydei | ~202,000,000 bp |
| D. melanogaster | ~180,000,000 bp (~138,000,000 bp sequenced) |
| D. miranda | ~300,000,000 bp |
| D. nasutoides | ~800,000,000 bp |
| D. neohydei | ~192,000,000 bp |
| D. simulans | ~127,500,000 bp |
| D. virilis | ~345,000,000 bp |
In summary, the genome sizes of the Drosophila species that have been examined so far range from about 127 million bp to about 800 million bp. But of course at present we SUSPECT that they contain roughly the same number of genes, although it is possible (likely) that they contain duplicated regions (or perhaps even entire chromosomes; there is ample space to have an entire extra copy (or two or more) of the entire genome). In addition, they also contain various types of repeats, known as "selfish DNA".
Why does amoeba have more than 200x as much DNA as humans?
Think about it for a discussion in class. I have a possible explanation, although I'm not sure anyone really knows the answer to this, to be honest.
This brings us to the first question on the quiz:
Answers to the self-test quiz which you are supposed to do BEFORE the lecture:
1. The short answer - a very long time. About 2.4 x 10^12 years.
That's about 160 times longer than the estimated age of the universe!
2. The piece of paper would be quite thick - it would reach outside the earth's
atmosphere and beyond the orbit of the planet Mars.
Today's lecture will cover:
Next Tuesday's lecture will cover:
One way of dealing with the problem of how to display so much sequence information is to have a look at the whole chromosome at once, smoothing over a large window. The entire bacterial chromosome is displayed as a circle, with different colours representing various parameters. First, as an introduction to atlases, we will look at base-composition. Then we will have a look at levels of expression of mRNA and proteins throughout the chromosome. As an example, I will use my very favourite organism, Escherichia coli K-12.
There are several things to notice in this plot. First, the concentrations of the bases are not uniform throughout the genome; there are "clumps" or clusters where specific bases are a bit more concentrated. Also, the G's (turquoise) clearly are seen to be favoured on one half of the chromosome, whilst the C's (magenta) are on the other. This shows up in the "GC-skew" lane as well (2nd circle from the middle). I have labelled the entire terminus region, which ranges from TerE (around 1.08 million bp (Mbp)) to TerG (~2.38 Mbp) in Escherichia coli K-12. Finally, several genes corresponding to the darker bands (e.g., more biased nucleotide composition) are labelled.
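As a rough illustration of what the GC-skew lane is computing, here is a small Python sketch. It is my own, not part of the course material, and the window and step sizes are arbitrary placeholders rather than the values used to draw the actual atlas:

    def gc_skew(sequence, window=10000, step=1000):
        """Return (midpoint, (G-C)/(G+C)) pairs along a DNA sequence."""
        seq = sequence.upper()
        points = []
        for start in range(0, len(seq) - window + 1, step):
            chunk = seq[start:start + window]
            g, c = chunk.count("G"), chunk.count("C")
            if g + c:                       # skip windows containing no G or C
                points.append((start + window // 2, (g - c) / (g + c)))
        return points

    # Toy example: a G-rich stretch followed by a C-rich stretch flips the sign,
    # just as the real skew flips near the replication origin and terminus.
    toy = "G" * 30000 + "C" * 30000 + "ATAT" * 5000
    for pos, skew in gc_skew(toy)[::10]:
        print(pos, round(skew, 2))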
The same pattern can be seen for the other three Escherichia coli chromosomes which have been sequenced (so far!), as shown in the table below.
| Strain | Sequencing centre / database links | Reference |
|---|---|---|
| K-12, isolate W3110 | DDBJ; NCBI taxonomy | |
| K-12, isolate MG1655 | U. Wisconsin; TIGR CMR; NCBI taxonomy; NCBI Entrez | |
| O157:H7 (substrain EDL933) | U. Wisconsin; NCBI taxonomy; NCBI Entrez | |
| O157:H7 (substrain RIMD 0509952) | Miyazaki, Japan; NCBI taxonomy; NCBI Entrez | DNA Res. 8:11-22 |
In addition to showing overall global properties of the chromosome (such as replication origin and terminus), the base composition can also highlight regions different from the rest of the genome. For example, in the plasmid pO157, there are some regions which are much more AT rich (probably these came about as a result of horizontal gene transfer - we will discuss this again in the next lecture...)
Note that the "toxB" gene is much more AT rich than the average for the rest of the plasmid. This COULD be due to the fact that this gene came from an organism with a more AT rich genome, or (more likely in my opinion) it is more AT rich because it is important for this gene to vary in sequence (e.g., have a higher mutational frequencey).
Escherichia coli is probably the best characterised organism.
There are 4085 predicted genes in Escherichia coli strain K-12 isolate W3110.
There are 4289 predicted genes in Escherichia coli strain K-12 isolate MG1655.
There are 5283 predicted genes in Escherichia coli strain O157:H7 isolate EDL933 (enterohemorrhagic pathogen). There are about 5361 predicted genes in Escherichia coli strain O157:H7 substrain RIMD 0509952 (enterohemorrhagic pathogen).
Roughly 2600 genes have been found to be expressed in Escherichia coli strain K-12 cells, under standard laboratory growth conditions.
About 2100 spots can be seen on 2-D protein gels.
Very roughly 1000 different genes (only about 600 mRNA transcripts) are expressed at "detectable levels" in E. coli cells grown in LB media.
Only about 350 proteins exist at concentrations of > 100 copies per cell. (These make up 90% of the total protein in E.coli!)
Most (>90%) of the proteins are present in very low amounts (less than 100 copies per cell).
It has been known since the 1960's that genes closer to the replication origin are more highly expressed. However, it has only been in the past few years that technology has allowed the simultaneous monitoring of ALL the genes in Escherichia coli. There are 4397 annotated genes in the E. coli K-12 genome. Shown below is an "Atlas plot" of the E. coli K-12 genome, with the outer circle representing the concentration of proteins (roughly in number of molecules/cell) and mRNA (again, roughly number of molecules/cell). Under these conditions (e.g., cells grown to late log phase, in minimal media), there were 2005 genes expressed at detectable levels, and only 233 proteins have been found to exist in "abundant" conditions (e.g., very roughly more than 100 molecules per cell).
For E. coli K-12 cells, grown in minimal media to late log phase:
4397 annotated genes -> 2005 mRNAs expressed -> 233 abundant proteins
(note that these numbers will vary for different experimental conditions....)
In this picture, the outer lane represents the concentration of proteins (blue), the next lane the concentration of mRNA (green), and then the annotated genes.
The inner three circles represent different aspects of the DNA base composition throughout the genome. The innermost circle (turquoise/violet) is the bias of G's towards one strand or the other (that is, a look at the mono-nucleotide distribution of the 4 DNA bases). The next lane is the density of stretches of purine (or pyrimidine) stretches of 10 bp or longer. Note that in both cases purines tend to favour the leading strand of the replicore, whilst pyrimidine tracts are more likely to occur on the lagging strand. Finally, the next circle (turquoise/red) is simply the AT content of the genome, averaged over a 50,000 bp window. Note that the terminus is slightly more AT rich, whilst the rest of the genome is slightly GC rich. (The AT content scale ranges from 45% to 55%).
Link to more atlases for Escherichia coli genomes.
Link to the main "Genome Atlas" web page
Friday (6 April, 2001)
Link to a list of recent papers and talks on DNA structures.
Watson, James D. "A PASSION FOR DNA: Genes, Genomes, and Society", (Oxford University Press, Oxford, 2000).
Sinden, Richard R., "DNA: STRUCTURE and FUNCTION", (Academic Press, New York, 1994).
Calladine, C.R., Drew, H.R., "Understanding DNA: The Molecule and How It Works", (2nd edition, Academic Press, San Diego, 1997).
A List of more than a thousand books about DNA
Delaware Bay — One Link in a 10,000-Mile-Long Chain
During May and early June, the shores of Delaware Bay resonate
with the cheerful chattering of more than 20 species of migratory
shorebirds. Delaware Bay provides an ecologically important
stepping-stone for the birds' spring pilgrimage to Arctic nesting
grounds. The Delaware Bay is the largest spring staging area
for shorebirds in eastern North America. A staging site is
an area with plentiful food where migrating birds gather to
replenish themselves before continuing on their journey. Staging
sites serve as a link in a chain connecting wintering areas
with breeding grounds, sites for which there are no alternatives.
Shorebirds begin to arrive in early May. The numbers of birds
soar upward during mid-month and usually peak between May 18
and 24 (in some years as late as May 28). They have traveled
from the coasts of Brazil, Patagonia, and Tierra del Fuego, from
desert beaches of Chile and Peru, and from mud flats in Suriname,
Venezuela, and the Guyanas. After several days of non-stop flight,
and having come as far as 10,000 miles, they reach the bay beaches
depleted of their energy reserves. Luckily, nature provides an
abundant food supply in this area at just this time of year:
the eggs of hundreds of thousands of horseshoe crabs that have
migrated to Delaware Bay beaches to spawn.
A Feast for Feathered
The shorebirds spend between two to three weeks gorging primarily
on fresh horseshoe crab eggs, although worms and small bivalves
are also plentiful. High in protein and fat, the eggs are an
energy-rich source of food. This high-calorie diet enables
the birds to nearly double or triple their body weight before
continuing on to Arctic nesting areas.
More Than a Million Mouths
Each spring, scientists from the Delaware and New Jersey Divisions
of Fish and Wildlife conduct weekly aerial surveys of migratory
shorebirds on Delaware Bay beaches. In May 2001, scientists
observed more than 775,000 shorebirds along beach habitat.
Ninety-five percent of these birds were represented by four
species: red knots, ruddy turnstones, semipalmated sandpipers,
and dunlins. Migratory shorebirds are also known to utilize
marshes and back-bay habitats. Thus, throughout their spring
migration, the actual number of shorebirds using Delaware Bay
as a staging ground may surpass one million. Click
here to meet a
few of these Delaware diners.
A recent decline in the horseshoe crab population appears to
correlate with a decline in migrating shorebird populations. Click
here to learn more about the problems facing migratory shorebirds.
Click here to learn why horseshoe crabs are decreasing in abundance.
This week’s top news story has been hiding in plain sight on the Internet for two years.
Even so, a September, 2010 report from Deutsche Bank Group entitled “Climate Change: Addressing the Major Skeptic Arguments,” is big news. In Earth Preservers’ opinion, the report has the potential to be a game-changer because it has the clearest, simplest explanation for why man-made climate change is real.
“(This report’s) clear conclusion is that the primary claims of the skeptics do not undermine the assertion that human-made climate change is already happening and is a serious long term threat.
“To us,” the report continues, “the most persuasive argument in support of climate change is that the basic laws of physics dictate that increasing carbon dioxide levels in the earth’s atmosphere produce warming. (This will be the case irrespective of other climate events.) The only way that warming can be mitigated by natural resources is if there are countervailing ‘feedback mechanisms’, such as cooling from increased cloud cover caused by the changing climate.
“A key finding of the current research is that there has so far been no evidence of such countervailing factors. In fact, most observed and anticipated feedback mechanisms are actually working to amplify the warming process, not cool it.”
The report goes on to answer each argument skeptics make in the often rancorous public debate in the US over whether climate change is real, among them:
* Global average temperatures have not risen since 1998
* Climate models are defective and therefore cannot provide reliable projections of future climate trends.
What makes the Deutsche Bank report compelling reading isn’t so much that the information is new. Rather, it’s the way Mark Fulton, Global Head of Climate Change Investment Research, and his team at DB Climate Change Advisors, have presented the information, the source of which is the Columbia Climate Center at the Earth Institute, Columbia University. Each of the skeptics’ arguments is answered simply and directly.
The Bonding Model
A Brief Description
What is it?
A mechanism by means of which atoms, ions or groups of atoms are held together in a molecule or crystal.
A tentative description of a system or theory that accounts for all of its known properties.
Two researchers from the State Key Laboratory of Millimeter Waves at Southeast University in Nanjing, China, have designed and prototyped a device that acts like a black hole for electromagnetic waves in the microwave spectrum. It consists of 60 concentric rings of metamaterials, a class of ordered composites that can distort light and other waves.
Qiang Cheng and Tie Jun Cui called their device “omnidirectional electromagnetic absorber”. The 60 rings of circuit board are arranged in concentric layers and coated in copper. Each of the layers is printed with alternating patterns, which resonate or don’t resonate in electromagnetic waves.
What is indeed very amazing is that their device can draw in 99% of the radiation arriving from all directions, spiraling it inward and converting it into heat, acting like an "electromagnetic black body" (or "hole").
The omnidirectional electromagnetic absorber could be used in harvesting the energy that exists in form of electromagnetic waves and turn them into usable heat. Of course, turning the heat back into electricity isn’t a 100% efficient process (far from it), but directly harvesting electromagnetic waves in the classic antenna-fashion is way too inefficient compared to this black hole.
“Since the lossy core can transfer electromagnetic energies into heat energies, we expect that the proposed device could find important applications in thermal emitting and electromagnetic-wave harvesting.”
Possible uses can vary from powering your phone with the existing electromagnetic energy that surrounds it, to wireless power transmission and even powering space ships – it all depends on the wavelength that the device is tuned to.
The question that arises is: would this kind of devices have other uses than these constructive ones mentioned above?
Electromagnetic wave harvesting? Extremely fascinating. When one thinks about it, it makes sense. Electromagnetism is one of the more powerful forms of the universe (next to gravity and strong/weak nuclear forces). The inner sci-fi geek in me loves the idea and can only imagine what an EM device could do for humanity in the future. But of course the part of me stuck in reality is still skeptical of such technologies and what their applicable use would be. Very very cool science though!-Consumer Energy Alliance "A balanced approach towards America's energy future"
ATP hydrolysis in F1-ATPase
Why is F1Fo-ATP synthase so important?
F1Fo-ATP synthase, or ATP synthase for short, is one of the most abundant proteins in every organism. It is responsible for synthesizing the molecule adenosine tri-phosphate (ATP), the cells’ energy currency. ATP is depicted in Fig. 1 and used to power and sustain virtually all cellular processes needed to survive and reproduce. Even when at rest, the human body metabolizes more than half its body weight in ATP per day, this figure rising to many times the body weight under conditions of physical activity.
What do we know about F1Fo-ATP synthase?
Researchers have been trying to uncover the "secret" behind ATP synthase’s very efficient mode of operation for quite some time. Unfortunately, even after more than 30 years of study, we still don’t fully understand how F1Fo-ATPase really works. The protein consists of two coupled rotary molecular motors, called Fo and F1, respectively, the first one being membrane embedded and the latter one being solvent exposed.
One of the most important breakthroughs in the field was the determination of an atomic resolution X-ray crystal structure for the F1 part of ATP synthase. This allowed researchers, for the first time, to connect biochemical data to the three dimensional structure of the protein (Abrahams et al., Nature 370:621-628, 1994). The X-ray structure beautifully supported Paul Boyer’s "binding change mechanism" (Boyer, Bioch. Bioph. Acta 215-250, 1993) as the modus operandi for ATP synthase’s rotational catalytic cycle and led to the 1997 Nobel Prize in chemistry for Boyer and Walker.
F1-ATPase in its simplest prokaryotic form (shown schematically in Fig. 2) consists of a hexameric assembly of alternating α and β subunits arranged in the shape of an orange. The central cavity of the hexamer is occupied by the central stalk formed by subunits γ, δ and ε. Due to a lack of high resolution structures for the Fo part of ATP synthase, much less is known about this subunit. It is currently thought that a transmembrane proton gradient drives rotation of the c-subunit ring of Fo which is then coupled to movement of the central stalk. The rotation of the latter eventually causes conformational changes in the catalytic sites located in F1 leading to the synthesis of ATP.
What are some of the missing pieces in our understanding of F1?
ATP synthase can be separated into its two constituent subunits F1 and Fo, which can then be studied individually. Solvated F1 is able to hydrolyze ATP and experiments pioneered by Noji et al. (Nature 386:299-302, 1997) have shown that ATP hydrolysis in F1 drives rotation of the central stalk. However, we don’t know if ATP hydrolysis itself or rather binding of ATP to the catalytic sites induces rotation. We would also like to know how the binding pockets cooperate during steady-state ATP hydrolysis to achieve their physiological catalysis rates. It has been suggested that ATP binding and product unbinding provide the main "power stroke" and that the actual catalytic step inside the binding pockets is equi-energetic, but, unfortunately, there is currently no consensus regarding this issue. In any case, since ATP in solution is a very stable molecule, the catalytic sites have to be able to lower the reaction barrier toward product formation considerably in order to cause efficient hydrolysis.
Computational Study of ATP hydrolysis in F1-ATPase
Our research focuses on investigating the ATP hydrolysis reaction and its interaction with the protein environment in the catalytic sites of F1-ATPase using computer simulations. To be able to study a chemical reaction inside the extended protein environment provided by the catalytic sites we employ combined quantum mechanical/molecular mechanical (QM/MM) simulations to investigate both the βTP and βDP catalytic sites. Fig. 3 depicts the quantum mechanically treated region of the former. Quite surprisingly, our simulations show that there is a dramatic change in the reaction energetics in going from βTP (strongly endothermic) to βDP (approximately equi-energetic), despite the fact that the overall protein conformation is quite similar. In both βTP and βDP, the actual chemical reaction proceeds via a multi-center proton relay mechanism involving two water molecules. A careful study of the electrostatic interactions between the protein environment and the catalytic core region as well as several computational mutation studies identified the "arginine finger" residue αR373 as the most significant element involved in this change in energetics.
Several important conclusions can be drawn from our simulations: Efficient catalysis proceeds via a multi-center proton pathway and a major factor for ATPase’s efficiency is, therefore, the ability to provide the proper solvent environment by means of its catalytic binding pocket. Furthermore, the sidechain of the arginine finger residue αR373 is found to be a major element in signaling between catalytic sites to enforce cooperation since it controls the reaction barrier height as well as the reaction equilibrium of the ATP hydrolysis/synthesis reaction.
Zooming in on ATP hydrolysis in F1. Markus Dittrich and Klaus Schulten. Journal of Bioenergetics and Biomembranes, 37:441-444, 2005.
ATP hydrolysis in the bTP and bDP catalytic sites of F1-ATPase. Markus Dittrich, Shigehiko Hayashi, and Klaus Schulten. Biophysical Journal, 87:2954-2967, 2004.
On the mechanism of ATP hydrolysis in F1-ATPase. Markus Dittrich, Shigehiko Hayashi, and Klaus Schulten. Biophysical Journal, 85:2253-2266, 2003.
Other QM/MM projects
This material is based upon work supported by the National Science Foundation under Grant No. 0234938. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Make a Gas Balloon
by Jeanette Cain
1. Vinegar
2. Baking soda
3. Soft drink bottle
4. Balloon
5. Funnel
When a volcano erupts, it gives off gases. This experiment shows how the buildup of gas pressure can inflate a balloon. CAUTION: You will need an adult to help. There is also the possibility that putting too much gas-making mixture in the bottle can cause the balloon to explode!
This is what happens inside a volcano: gas pressure builds up causing an enormous explosion to take place. This explosion often releases a deadly, hot gas cloud.
Place the funnel in the top of the bottle. Add some baking soda. (The funnel needs to be dry or the baking soda will stick to it.) Using the funnel, pour vinegar into the bottle.
This part requires steady, but quick hands: remove the funnel and quickly slip the balloon over the bottle's top. By now, the soda and vinegar are fizzing because they are giving off gas bubbles.
Your balloon begins to inflate due to the pressure (force) of the gas in the bottle; the more gas, the more the balloon inflates. CAUTION: Don't pop the balloon!
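A side note for curious readers (this is not part of the original write-up): the fizz is carbon dioxide released when the baking soda reacts with the acetic acid in vinegar, NaHCO3 + CH3COOH -> CH3COONa + H2O + CO2, and it is this CO2 gas that builds the pressure inside the bottle.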
Water from space is 'raining' onto a planet-forming disc at supersonic speeds, new observations from the Spitzer Space Telescope reveal. The unprecedented detail of the observations at this early stage of the disc's formation could help reveal which of two competing theories of planet formation is correct.
Planets form when matter clumps together in swirling discs of gas and dust, called protoplanetary discs, around infant stars. But many details of how this works are still not known. For example, some scientists think giant planets can form in just a few thousand years, while others argue it takes millions of years.
Now, astronomers led by Dan Watson of the University of Rochester in New York, US, have gained an unprecedented view of a protoplanetary disc at the young age of just a few hundred thousand years old.
They used the Spitzer Space Telescope to examine the spectrum of infrared light coming from the vicinity of an embryonic star called IRAS 4B, which lies about 1000 light years from Earth.
At this very early stage, an outer cocoon of gas and dust called an envelope still surrounds the star and its swirling disc.
Previous observations in the microwave portion of the spectrum suggested that this large cocoon is contracting and sending material onto the disc. But the inner region, where the disc meets the cocoon, could not be seen at these wavelengths.
The Spitzer observations probe this inner region and reveal infrared light emitted by massive amounts of water vapour - the equivalent of five times the content of the Earth's oceans.
The vapour is too hot to be explained by the embryonic star's radiation alone, suggesting another process must be heating it up.
The team believes ice from the cocoon is pelting the disc at a rate faster than the speed of sound there, creating a shock front. "The sonic boom that it endures when it lands on the disc heats it up very efficiently" and vaporises it, Watson told New Scientist.
This supersonic shock "has been searched for and theorised about for decades", Watson says. It is a short-lived phenomenon that only occurs during the first few hundred thousand years of the star and disc formation, while the envelope is still feeding the disc.
The light emitted as the icy particles hit the disc can be used to learn more about the disc itself at this early stage, which could shed light on how planets form.
Most astronomers believe planets form according to a model known as "core accretion", in which small particles snowball into larger and larger objects over millions of years.
A competing idea, called "disc instability", is that turbulence in the disc can cause matter to collapse into planets extremely quickly, producing gas giants such as Jupiter in just a few thousand years.
"If you wanted to test between those scenarios, one of the most important places to look would be the stage we're looking at now," Watson says.
Future observations of such young discs could reveal how turbulent the discs are, and thus whether they boast the conditions required for disc instability, he says. "The whole subject of the very beginnings of the development of solar systems is open to study now," Watson says.
Donald Brownlee of the University of Washington in Seattle, US, agrees. "It's interesting to have a new peek into a period of history of what appears to be a forming planetary system, potentially at a timescale that we've never seen before," he told New Scientist. "It forms another important clue to how planetary systems form."
Journal reference: Nature (vol 448, p 1026)
Barking Up The Wrong Tree
Mon Mar 19 19:02:04 GMT 2012 by Tony Marshallsay
Having been schooled in the theory of planet formation by agglomeration of dust particles over a period of thousands or millions of years, I have recently ceased believing in it for a number of reasons:
1. If agglomeration works so well, why has the Asteroid Belt not agglomerated into a planet? Of course, recent examination by spacecraft has revealed that, while some asteroids are solid, other "potatoes" are, indeed, agglomerates.
2. The agglomeration theory cannot easily - if at all - explain retrograde planetary spins.
3. It is difficult to see how agglomeration and an exceedingly slow increase in self-gravity pressure could result in the creation of "rocky" planets like our Earth, with molten iron cores including heavier, radioactive elements to generate internal heating, since any heat generated by the compression process would be dissipated into space over such a long time, making fusion reactions extremely unlikely.
Accordingly, I have come to the opinion that planets of all types are formed initially not over an exceedingly long time but rather almost instantly as core shards of exploding supernovae.
This view, again, has several implications, viz:
A. Outer core shards consisting of light materials would likely be small, lose heat very quickly and cool into misshapes before being able to become spherical under self-gravity.
B. Inner core shards, on the other hand, would have sufficient thermal capacity and radioactive material to maintain heat long enough to develop a spherical shape and the composition of our Earth (we might thus consider the Earth to be a microcosm of a stellar core, albeit under far less heat and pressure).
C. The shards would be flung in all directions, resulting in the multitude of "free planets" recently observed by Japanese investigators.
D. Some of those free planets would inevitably - sooner or later - become trapped in the gravitational fields of stars, creating planetary systems, such as our own Solar System.
E. The inconsistencies of size and composition of the Solar System's planets can then easily be explained by considering the planets as having been captured "missiles" from various supernovae, perhaps even in other galaxies (do the math - it's possible, even at incredibly slow speeds, when you take a few billion years into consideration).
F. A "Gas Giant" can be formed by a heavy, rocky "seed" gathering a thick coat of gas through happening to have been ejected in the direction of a large gas cloud.
Opinions on the above are welcome (I am becoming accustomed to brickbats descending upon my head from a great height!).
a. Synoptic history
An extratropical low pressure system formed just east of the Turks and
Caicos Islands near 0000 UTC 25 October in response to an upper level
cyclone interacting with a frontal system. The low initially moved
northwestward, and in combination with a strong surface high to the north
developed into a gale center six hours later. By 1800 UTC that day it had
developed sufficient organized convection to be classified using the
Hebert-Poteat subtropical cyclone classification system, and the best track
of the subtropical storm begins at this time
(Table 1 and Figure 1).
Upon becoming a subtropical storm, the cyclone turned northward. This
motion continued for 24 h while the system slowly intensified. The storm
jogged north-northwestward late on 26 October, followed by a
north-northeastward turn and acceleration on the 27th. During
this time, satellite imagery indicated intermittent bursts of central
convection while Air Force Reserve Hurricane Hunter aircraft indicated a
large (75-100 n mi) radius of maximum winds. This evolution was in contrast
to that of Hurricane Michael a week-and-a-half before. Although of similar
origin to the subtropical storm, Michael developed persistent central
convection and completed a transition to a warm-core hurricane.
After reaching a 50 kt intensity early on 27 October, little change in
strength occurred during the next 24 h. The storm turned northeastward and
accelerated further on the 28th in response to a large and cold
upper-level cyclone moving southward over southeastern Canada. A last burst
of organized convection late on the 28th allowed the storm to reach
a peak intensity of 55 kt. A strong cold front moving southward off the New
England coast then intruded into the system, and the storm became
extratropical near Sable Island, Nova Scotia, around 0600 UTC 29 October.
The extratropical center weakened rapidly and lost its identity near eastern
Nova Scotia later that day. It should be noted that the large cyclonic
circulation that absorbed the subtropical storm was responsible for heavy
early-season snowfalls over portions of the New England states and
b. Meteorological statistics
Table 1 shows the best track positions and intensities
for the subtropical storm, with the track plotted in
Figure 1. Figure 2
and Figure 3 depict the curves of minimum central
sea-level pressure and maximum sustained one-minute average "surface" (10 m
above ground level) winds, respectively, as a function of time. These
figures also contain the data on which the curves are based: satellite-based
Hebert-Poteat and experimental extratropical transition intensity (Miller
and Lander, 1997) estimates from the Tropical Analysis and Forecast Branch
(TAFB), the Satellite Analysis Branch (SAB) of the National Environmental
Satellite Data and Information Service (NESDIS), and the Air Force Weather
Agency (AFWA), as well as data from aircraft, ships, buoys and land stations.
The Air Force Reserve Hurricane Hunters flew two mission into the storm with
a total of four center fixes. Central pressures on both flights were in the
997-1000 mb range, and the maximum flight level (1500 ft) winds were 60 kt
on the first flight and 61 kt on the second. A weak temperature gradient
was observed in the system on the first flight, suggesting that the cyclone
still had some baroclinic characteristics. The second flight showed a
uniform airmass within 100 n mi of the center with temperatures of about
The storm had a large envelope, and many ships reported 34 kt or higher
winds. Table 2 summarizes these observations.
There were few observations near the central core. Canadian buoy 44137
reported winds 160/39 kt with a pressure of 979.1 mb at 0200 UTC 29 October,
which is the basis for the lowest pressure. Other reports from this buoy
indicate that the winds increased in the last hour before the center passed,
suggesting that some kind of inner wind maximum was present even as the
storm was becoming extratropical. Earlier, a drifting buoy about 35 n mi
southeast of the center reported a pressure of 996.6 mb at 2051 UTC 27
October, which showed that the storm had begun to deepen.
Sable Island, Nova Scotia, reported a pressure of 980.6 mb as the center
passed over at 0600 UTC on the 29th. Maximum sustained winds were
35 kt after the center passage at 0700 and 0800 UTC. Several other stations
in eastern Nova Scotia and southwestern Newfoundland reported sustained
35-50 kt winds around 1200 UTC on the 29th.
The maximum intensity of this system is uncertain. Satellite intensity
estimates late on the 28th and early on the 29th along
with a 35-40 kt forward motion indicate the possibility of 65-75 kt
sustained winds. However, this is not supported by surface observations
near the center early on the 29th. The maximum intensity is
estimated to have been 55 kt.
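As an aside, the quoted forward motion can be roughly cross-checked from the best-track fixes in Table 1. The short sketch below is an illustration only: the two fixes are taken from the table, while the spherical-earth distance calculation and everything else in it are mine.

    import math

    def distance_nmi(lat1, lon1, lat2, lon2):
        """Great-circle distance in nautical miles (haversine formula)."""
        rad = math.radians
        dlat, dlon = rad(lat2 - lat1), rad(lon2 - lon1)
        h = (math.sin(dlat / 2) ** 2
             + math.cos(rad(lat1)) * math.cos(rad(lat2)) * math.sin(dlon / 2) ** 2)
        return 2 * 3440.065 * math.asin(math.sqrt(h))  # mean earth radius ~3440 n mi

    # Best-track fixes: 28/1800 UTC (38.0N 65.5W) and 29/0000 UTC (40.5N 62.6W)
    speed_kt = distance_nmi(38.0, -65.5, 40.5, -62.6) / 6.0  # fixes are 6 h apart
    print(f"forward speed of the center: about {speed_kt:.0f} kt")  # mid-30s kt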
c. Casualty and damage statistics
No reports of casualties or damage have been received at the National
Hurricane Center (NHC).
d. Forecast and warning critique
No advisories were written on this storm, as a decision was made
operationally to handle it in marine forecasts as an extratropical storm.
Post-analysis of satellite imagery and of 27 October aircraft data are the
basis for classifying the system now as subtropical. Due to the operational
handling, there are no formal NHC forecasts to verify. Large-scale
numerical models generally performed well in forecasting the genesis and
motion of this cyclone. The models did mostly underestimate the
intensification that occurred north of the Gulf Stream. However, this
strengthening was fairly well forecast by the GFDL model.
No tropical cyclone watches or warnings were issued for this storm. Marine
gale and storm warnings were issued in high seas and offshore forecasts from
the Marine Prediction Center and the TAFB of the TPC. Gale warnings were also
issued for portions of the North Carolina coastal waters by local National
Weather Service offices.
Miller, D. W. and M. A. Lander, 1997: Intensity estimation of tropical
cyclones during extratropical transition. JTWC/SATOPSTN-97/002, Joint
Typhoon Warning Center/Satellite Operations, Nimitz Hill, Guam, 9 pp.
Best track, Subtropical Storm, 25-29 October 2000.
| Date/Time (UTC) | Lat. (°N) | Lon. (°W) | Pressure (mb) | Wind Speed (kt) | Stage |
|---|---|---|---|---|---|
| 25 / 0000 | 21.5 | 69.5 | 1009 | 30 | extratropical low |
| 25 / 0600 | 22.5 | 70.0 | 1007 | 35 | extratropical gale |
| 25 / 1200 | 23.5 | 70.9 | 1006 | 35 | " |
| 25 / 1800 | 24.5 | 71.7 | 1005 | 35 | subtropical storm |
| 26 / 0000 | 25.7 | 71.7 | 1004 | 35 | " |
| 26 / 0600 | 26.6 | 71.7 | 1003 | 35 | " |
| 26 / 1200 | 27.4 | 71.8 | 1002 | 40 | " |
| 26 / 1800 | 28.3 | 72.1 | 1000 | 45 | " |
| 27 / 0000 | 29.2 | 72.5 | 997 | 50 | " |
| 27 / 0600 | 30.0 | 72.6 | 997 | 50 | " |
| 27 / 1200 | 30.9 | 72.5 | 997 | 50 | " |
| 27 / 1800 | 32.6 | 71.6 | 996 | 50 | " |
| 28 / 0000 | 34.2 | 70.7 | 994 | 50 | " |
| 28 / 0600 | 35.7 | 69.9 | 992 | 50 | " |
| 28 / 1200 | 36.5 | 68.1 | 990 | 50 | " |
| 28 / 1800 | 38.0 | 65.5 | 984 | 55 | " |
| 29 / 0000 | 40.5 | 62.6 | 978 | 55 | " |
| 29 / 0600 | 44.0 | 60.0 | 980 | 50 | extratropical |
| 29 / 1200 | 46.0 | 59.5 | 992 | 45 | " |
| 29 / 1800 | | | | | absorbed into larger extratropical low |
| 29 / 0200 | 41.7 | 61.6 | 976 | 55 | minimum pressure |
Selected ship and buoy observations of subtropical storm or greater winds associated with the subtropical storm, 25-29 October 2000.

| Ship name | Date/Time (UTC) | Lat. (°N) | Lon. (°W) | Wind dir/speed (kt) | Pressure (mb) |
|---|---|---|---|---|---|
| Dock Express 20 | 25/1200 | 27.0 | 68.9 | 050/45 | 1009.0 |
| Splendour of the Seas | 25/1800 | 28.6 | 65.2 | 070/40 | 1015.0 |

a: 8-minute average wind
b: 10-minute average wind
Figure 1. Best track for the subtropical storm, 25-29 October 2000.
Figure 2. Best track minimum central pressure curve for the subtropical storm, 25-29 October 2000.
Figure 3. Best track maximum sustained 1-minute 10 meter wind speed curve for the subtropical storm, 25-29 October 2000. Vertical black bars denote wind ranges in subtropical and extratropical satellite intensity estimates.
Contact: Ken Kingery, NSCL, Office: 517-908-7482, Kingery@nscl.msu.edu
Published January 4, 2012
For Immediate Release
EAST LANSING, Mich. – The recent measurement of the mass of the short-lived rare isotope manganese-66 has made it possible for nuclear astrophysicists to pin down the underlying heating elements of one of the universe’s most fantastic phenomena—accreting neutron stars.
Out in the cold depths of space, billions of the densest objects known to man sit quietly while their nuclear decomposition processes play out. But some of them are hungry. Some neutron stars sit close enough to a neighboring star for its immense gravity to begin pulling matter from its neighbor into its own mass in an ongoing thermonuclear process. But sooner or later, the fuel for the neutron star is exhausted and it begins to cool rapidly. Through observations of this cooling process and measurements taken at nuclear physics laboratories such as the National Superconducting Cyclotron Laboratory (NSCL), scientists can deduce the inner workings of neutron stars.
In the recent experiment at NSCL, researchers measured the mass of manganese-66, which sits right next to iron-66 on the nuclear chart. Based on the newly discovered mass and previous measurements of iron-66, scientists can determine where in the crust of a neutron star the layer of iron-66 lies, which is one of two heating elements in neutron stars.
“On earth, iron-66 is a rare short-lived isotope with a half-life of about 400 ms,” said Alfredo Estrade, postdoctoral researcher with St. Mary’s University in Halifax, Canada, and GSI in Darmstadt, Germany, and lead author of the study. “However, it also is part of the crust of accreting neutron stars, where it becomes stable due to its high density and it heats the crust by capturing electrons.”
Scientists at NSCL calculated the mass of manganese-66 by doing a time-of-flight experiment. Krypton-86 was accelerated up to 40 percent of the speed of light and smashed into a thin foil of beryllium. Some of the ions shattered after hitting other nuclei in the foil, creating a smorgasbord of new isotopes and particles. The facility then filtered out about 100 desired types of isotopes, some of which they wanted to measure and others that they used for calibrations.
The filtered isotopes traveled down the beamline where they were caught by a detector that identified which isotope was which. Due to their different masses, the different isotopes took different amounts of time to complete their journey. By identifying manganese-66 and measuring the time it took to run the course, the scientists could determine its mass to within one part in 100,000.
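To illustrate why the arrival time reveals the mass, here is a rough sketch. All specific numbers in it, such as the flight-path length and the beam velocity, are placeholders of my own and not the actual NSCL setup. Fragments selected at the same magnetic rigidity share the same momentum, so a heavier ion carries more total energy, moves more slowly, and arrives later:

    c = 2.998e8       # speed of light, m/s
    u = 931.494e6     # one atomic mass unit expressed as a rest energy, eV
    L = 60.0          # assumed flight-path length in metres (placeholder value)
    beta = 0.40       # assumed ion speed as a fraction of c (placeholder value)

    mc2 = 66 * u                          # rough rest energy of a mass-66 ion, eV
    gamma = 1.0 / (1.0 - beta**2) ** 0.5
    pc = gamma * mc2 * beta               # momentum (times c) fixed by the spectrometer, eV

    def time_of_flight(rest_energy_ev):
        total_energy = (pc**2 + rest_energy_ev**2) ** 0.5   # total energy, eV
        return L * total_energy / (pc * c)                  # flight time, seconds

    shift = time_of_flight(mc2 * (1 + 1e-5)) - time_of_flight(mc2)
    print(f"TOF ~ {time_of_flight(mc2) * 1e9:.0f} ns; "
          f"a 1-in-100,000 mass change shifts it by ~{shift * 1e12:.1f} ps")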
The resulting mass was different than what theorists had predicted for the rare isotope, which changes the models of how neutron stars are structured. The result was a bit of a surprise.
“The mass difference between iron-66 and manganese-66 allows us to determine the depth needed to induce the heating reactions and therefore the location of the heat source associated with this reaction inside a neutron star,” explained Hendrik Schatz, nuclear astrophysicist at NSCL and Principal Investigator for the Joint Institute for Nuclear Astrophysics (JINA), who worked on the paper along with Milan Matos, a postdoctoral researcher at Louisiana State University stationed at Oak Ridge National Laboratory. “With this new measurement, all of the critical heat sources now can be located within a neutron star. The heat source turns out to be located much closer to the surface than was assumed before based on theoretical predictions of the mass difference between iron-66 and manganese-66.”
Michigan State University has been working to advance the common good in uncommon ways for more than 150 years. One of the top research universities in the world, MSU focuses its vast resources on creating solutions to some of the world’s most pressing challenges, while providing life-changing opportunities to a diverse and inclusive academic community through more than 200 programs of study in 17 degree-granting colleges.
1 GCC The GCC command invokes the GNU C compiler. GCC file-spec 2 Parameters file-spec A C source file. If no input file extension is specified, GNU C assumes .C as the default extension unless the /PLUS qualifier is given, in which case .CC is assumed as the default extension. If an extension of .CPP is given, then the source file is assumed to be the output of the preprocessor, and thus the preprocessor is not executed. If an extension of .S is given, then the source file is assumed to be the assembly code output of the compiler, and only the assembler is called to generate an object file. 2 Qualifiers GNU C command qualifiers modify the way the compiler handles the compilation. The following is the list of available qualifiers for GNU C: /CASE_HACK /CC1_OPTIONS=(option [,option...]]) /DEBUG /DEFINE=(identifier[=definition][,...]) /G_FLOAT /INCLUDE_DIRECTORY=(path [,path...]]) /LIST[=filename] /MACHINE_CODE /OBJECT[=filename] /OPTIMIZE /PLUS /PROFILE[=identifier] /SCAN=(file[,file...]) /SHOW[=option] /UNDEFINE=(identifier[,identifier,...]) /VERBOSE /VERSION /WARNING 2 Linking When linking programs compiled with GNU C, you should include the GNU C library before the VAX C library. For example, LINK object-file,GNU_CC:GCCLIB/LIB,SYS$LIBRARY:VAXCRTL/LIB You can also link your program with the shared VAX C library. This can reduce the size of the .EXE file, as well as make it smaller when it's running. For example, $ LINK object-file, GNU_CC:GCCLIB/LIB,SYS$INPUT/OPT SYS$SHARE:VAXCRTL/SHARE (If you use the second example and type it in by hand, be sure to type ^Z after the last carriage return). A simpler alternative would be to place the single line: SYS$SHARE:VAXCRTL/SHARE into a file called VAXCRTL.OPT, and then use the link command: $ LINK object-file, GNU_CC:GCCLIB/LIB,VAXCRTL.OPT/OPT If a program has been compiled with /G_FLOAT, then the linking instructions are slightly different. If you are linking with the non-shared library, then the command that you should use would be: LINK object-file,GNU_CC:GCCLIB/LIB,SYS$LIBRARY:VAXCRTLG/LIB - ,SYS$LIBRARY:VAXCRTL/LIB Note that both VAXCRTL and VAXCRTLG must be linked to. If you are using the shared VAX C library, then you should use a command like: $ LINK object-file, GNU_CC:GCCLIB/LIB,SYS$INPUT:/OPTIONS SYS$SHARE:VAXCRTLG/SHARE In the case of the sharable library, only one library needs to be linked to. 2 /CASE_HACK /[NO]CASE_HACK D=/CASE_HACK Since the VMS Linker and Librarian are not case sensitive with respect to symbol names, a "case-hack" is appended to a symbol name when the symbol contains upper case characters. There are cases where this is undesirable, (mainly when using certain applications where modules have been precompiled, perhaps in another language) and we want to compile without case hacking. In these cases the /NOCASE_HACK switch disables case hacking. 2 /CC1_OPTIONS This specifies additional switches to the compiler itself which cannot be set by means of the compiler driver. 2 /DEBUG /DEBUG includes additional information in the object file output so that the program can be debugged with the VAX Symbolic Debugger. To use the debugger it is also necessary to link the debugger to your program, which is done by specifying the /DEBUG qualifier to the link command. With the debugger it is possible to set breakpoints, examine variables, and set variables to new values. See the VAX Symbolic Debugger manual for more information, or type "HELP" from the debugger prompt. 
2 /DEFINE /DEFINE=(identifier[=definition][,...]) /DEFINE defines a string or macro ('definition') to be substituted for every occurrence of a given string ('identifier') in a program. It is equivalent to the #define preprocessor directive. All definitions and identifiers are converted to uppercase unless they are in quotation marks. The simple form of the /DEFINE qualifier: /DEFINE=vms results in a definition equivalent to the preprocessor directive: #define VMS 1 You must enclose macro definitions in quotation marks, as in this example: /DEFINE="C(x)=((x) & 0xff)" This definition is the same as the preprocessor definition: #define C(x) ((x) & 0xff) If more than one /DEFINE is present on the GCC command line, only the last /DEFINE is used. If both /DEFINE and /UNDEFINE are present on a command line, /DEFINE is evaluated before /UNDEFINE. 2 /G_FLOAT Instructs the compiler to use "G" floating point arithmetic instead of "D". The difference is that double precision has a range of approximately +/-0.56e-308 to +/-0.9 e+308, with approximately 15 decimal digits precision. "D" floating point has the same range as single precision floating point, with approximately 17 decimal digits precision. If you use the /G_FLOAT qualifier, the linking instructions are different. See "Linking" for further details. 2 /LIST /LIST[=list_file_name] This does not generate a listing file in the usual sense, however it does direct the compiler to save the preprocessor output. If a file is not specified, then this output is written into a file with the same name as the source file and an extension of .CPP. 2 /INCLUDE_DIRECTORY /INCLUDE_DIRECTORY=(path [,path...]) The /INCLUDE_DIRECTORY qualifier provides additional directories to search for user-defined include files. 'path' can be either a logical name or a directory specification. There are two forms for specifying include files - #include "file-spec" and #include <file-spec>. For the #include "file-spec" form, the search order is: 1. The directory containing the source file. 2. The directories in the /INCLUDE qualifier (if any). 3. The directory (or directories) specified in the logical name GNU_CC_INCLUDE. 4. The directory (or directories) specified in the logical name SYS$LIBRARY. For the #include <file-spec> form, the search order is: 1. The directories specified in the /INCLUDE qualifier (if any). 2. The directory (or directories) specified in the logical name GNU_CC_INCLUDE. 3. The directory (or directories) specified in the logical name SYS$LIBRARY. 2 /MACHINE_CODE Tells GNU C to output the machine code generated by the compiler. The machine code is output to a file with the same name as the input file, with the extension .S. An object file is still generated, unless /NOOBJ is also specified. 2 /OBJECT /OBJECT[=filename] /NOOBJECT Controls whether or not an object file is generated by the compiler. 2 /OPTIMIZE /[NO]OPTIMIZE Controls whether optimization is performed by the compiler. By default, optimization is on. /NOOPTIMIZE turns optimization off. 2 /PLUS Instructs the compiler driver to use the GNU-C++ compiler instead of the GNU-C compiler. Note that the default extension of source files is .CC when this qualifier is in effect. 2 /PROFILE /PROFILE[=identifier] Instructs the compiler to generate function profiling code. You must link your program to the profiler when you use this options. The profile statistics are automatically printed out on the terminal during image exit. (i.e. 
no modifications to your source file are required in order to use the profiler). There are three identifiers that can be used with the /PROFILE switch. These are ALL, FUNCTION, and BLOCK. If /PROFILE is given without an identifier, then FUNCTION is assumed.
3 Block_Profiler
The block profiler counts how many times control of the program passes certain points in your program. This is useful in determining which portions of a program would benefit from recoding for optimization. The report for the block profiler contains the function name, file name, PC, and the source file line number as well as the count of how many times control has passed through the specified source line.
3 Function_Profiler
The function profiler counts how many times each function is entered, and keeps track of how much CPU time is used within each function. You should be careful about interpreting the results of profiles where there are inline functions. When a function is included as inline, then there is no call to the internal data collection routine used by the profiler, and thus there will be no record of this function being called. The compiler does generate a callable version of each inline function, and if this called version is used, then the profiler's data collection routine will be called.
2 /SCAN
/SCAN=(file[,file...])
This qualifier supplies a list of files that will be read as input, and the output will be discarded before processing the regular input file. Because the output generated from the files is discarded, the only effect of this qualifier is to make the macros defined in the files available for use in the main input.
2 /SHOW
/SHOW[=option]
This causes the preprocessor to generate information other than the preprocessed input file. When this qualifier is used, no assembly code and no object file is generated. The output of the preprocessor is placed in the file specified by the /LIST qualifier, if present. If the /LIST qualifier is not present, then the output is placed in a file with the same name as the input file with an extension that depends upon which option is selected.
3 DEFINITIONS
This option causes the preprocessor to dump a list of all of the definitions to the output file. This is useful for debugging purposes, since it lets you determine whether or not everything has been defined properly. If the default file name is used for the output, the extension will be .DEF.
3 RULES
This option causes the preprocessor to output a rule suitable for MAKE, describing the dependencies of the main source file. The preprocessor outputs one MAKE rule containing the object file name for that source file, a colon, and the names of all the included files. If there are many included files then the rule is split into several lines using the '\'-newline. When using this option, only files included with the "#include "file" directive are mentioned. If the default file name is used for the output, a null extension will be used.
3 ALL
This option is similar to RULES, except that it also mentions files included with the "#include <file.h>" directive. If the default file name is used for the output, a null extension will be used.
2 /UNDEFINE
/UNDEFINE cancels a macro definition. Thus, it is the same as the #undef preprocessor directive. If more than one /UNDEFINE is present on the GCC command line, only the last /UNDEFINE is used. If both /DEFINE and /UNDEFINE are present on a command line, /DEFINE is evaluated before /UNDEFINE.
2 /VERBOSE
Controls whether the user sees the invocation command strings for the preprocessor, compiler, and assembler. The compiler also outputs some statistics on time spent in its various phases.
2 /VERSION
Causes the preprocessor and the compiler to identify themselves by their version numbers, and in the case of the compiler, the version number of the compiler that built it.
2 /WARNING
When this qualifier is present, warnings about usage that should be avoided are given by the compiler. For more information, see "Using and Porting the GNU Compiler Collection (GCC)", in the section on command line options, under "-Wall". Warnings are also generated by the preprocessor when this qualifier is given.
2 Known_Incompatibilities_with_VAX-C
There are several known incompatibilities between GNU-C and VAX-C. Some common ones will be briefly described here. A complete description can be found in "Using and Porting the GNU Compiler Collection (GCC)" in the chapter entitled "Using GCC on VMS".
GNU-C provides case hacking as a means of giving case sensitivity to symbol names. The case hack is a hexadecimal number appended to the symbol name, with a bit being set for each upper case letter. Symbols with all lower case, or symbols that have a dollar sign ("$"), are not case hacked. There are times that this is undesirable, namely when you wish to link your program against a precompiled library which was compiled with a non-GNU-C compiler. X-windows (or DECWindows) is an example of this. In these instances, the /NOCASE_HACK switch should be used. If you require case hacking in some cases, but not in others (i.e. Libg++ with DECWindows), then it is recommended that you develop a header file which will define all mixed case functions that should not have a case hack as the lower case equivalents.
GNU-C does not provide the globaldef and globalref mechanism which is used by VAX-C to coerce the VMS linker to include certain object modules from a library. There are assembler hacks, which are available to the user through the macros defined in gnu_hacks.h, which effectively give you the ability to perform these functions. While not syntactically identical, they do provide most of the functionality. Note that globaldefs of enums is not supported in the way that it is under VAX-C. This can be easily simulated, however, by globaldefing an integer variable, and then globalvaluing all of the enumerated states. Furthermore, the way that globalvalue is currently implemented, the data type of the globalvalue variable is seen by the compiler to be a pointer to the data type that you specify. This is necessary in order to make the compiler correctly address the globalvalue variables.
| <urn:uuid:01ae1d1d-f97d-420b-a8d9-926b83fcea12> | 3.3125 | 3,159 | Documentation | Software Dev. | 46.353827 | 412 |
Parsing XML Documents
To manipulate an XML document, an XML parser is needed. The parser loads the document into the computer's memory. Once the document is loaded, its data can be manipulated through the API the parser provides.
We will soon discuss APIs and parsers for accessing XML documents in serial access mode (SAX) and random access mode (DOM). The specifications used to ensure the validity of XML documents are DTDs and Schemas.
DOM: Document Object Model
The XML Document Object Model (XML DOM) defines a standard way to access and manipulate XML documents using any programming language (and a parser for that language).
The DOM presents an XML document as a tree-structure (a node tree), with the elements, attributes, and text defined as nodes. DOM provides access to the information stored in your XML document as a hierarchical object model.
The DOM converts an XML document into a collection of objects in an object model arranged as a tree, which can be manipulated in any way. The textual information in the XML document is turned into a set of tree nodes, and a user can easily traverse any part of the object tree at any time. This makes it easy to modify data, remove it, or insert new data. This mechanism is also known as the random access protocol.
DOM is very useful when the document is small. DOM reads the entire XML structure and holds the object tree in memory, so it is much more CPU and memory intensive. The DOM is most suited for interactive applications because the entire object model is present in memory, where it can be accessed and manipulated by the user.
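The tree model is easy to see in code. Below is a minimal sketch using Python's standard-library xml.dom.minidom parser; the sample document and element names are invented purely for illustration.
import xml.dom.minidom

# Parse a small document into an in-memory node tree.
doc = xml.dom.minidom.parseString(
    "<library><book id='1'><title>XML Basics</title></book></library>")

# Random access: walk to any node, read it, and modify it in place.
for book in doc.getElementsByTagName("book"):
    title = book.getElementsByTagName("title")[0]
    print(book.getAttribute("id"), title.firstChild.data)
    title.firstChild.data = "XML Basics, 2nd Edition"  # edit the text node

# Serialize the modified tree back to XML text.
print(doc.toxml())
Because the whole tree stays in memory, edits like the one above are cheap, but the memory cost grows with the size of the document.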
SAX: Simple API for XML
This API was an innovation developed collaboratively on the XML-DEV mailing list, rather than being a product of the W3C.
SAX (Simple API for XML) like DOM gives access to the information stored in XML documents using any programming language (and a parser for that language).
This standard API works in serial access mode to parse XML documents. Compared to its competitors, it is a very fast mechanism for reading and writing XML data. SAX tells the application what is in the document by notifying it through a stream of parsing events. The application then processes those events to act on the data.
SAX is also called an event-driven protocol, because it works by registering handlers whose callback methods are invoked whenever an event is generated. An event is generated when the parser encounters a new XML tag, encounters an error, or has anything else to report. SAX is memory-efficient to a great extent.
SAX is very useful when the document is large.
DOM reads the entire XML structure and holds the object tree in memory, so it is much more CPU and memory intensive. For that reason, the SAX API is preferred for server-side applications and data filters that do not require any memory-intensive representation of the data.
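For contrast with the DOM sketch above, here is a minimal event-driven sketch using Python's built-in xml.sax module; the sample document and handler are again invented for the example, and only the handler's own counter stays in memory while the document streams past.
import xml.sax

class TitleCounter(xml.sax.ContentHandler):
    """Callback handler: the parser calls these methods as events occur."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def startElement(self, name, attrs):
        if name == "title":
            self.count += 1

handler = TitleCounter()
# parseString streams the document; no tree is built.
xml.sax.parseString(
    b"<library><book><title>XML Basics</title></book></library>", handler)
print(handler.count)  # -> 1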
| <urn:uuid:11e6041b-34da-4e10-a185-a3ccb8d319fb> | 3.5625 | 648 | Documentation | Software Dev. | 45.82441 | 413 |
AfricaAdapt is a bilingual (English/French) network of African researchers, policymakers, civil society organisations and local communities that encourages information sharing on climate change adaptation for Africa.
The network publishes information on its activities including workshops, innovation funding, radio programmes in local languages and news services for mobile phones. It also publishes video, audio and photo stories to present community perspectives on climate change adaptation methods. It links to key organisations and publications on adaptation in several fields including agriculture, fisheries, forestry, energy, water and health.
This site provides access to a suite of climate related observations, projections and predictions for the African continent. Of particular interest are the up-to-date seasonal predictions and the African monsoon bulletin. There is also a searchable archive of climate data and research activities detailed in French. ACMAD also offers 'on the job training' in climatology. The website is also available in French.
The AMMA programme aims to study how the West African monsoon affects meningitis and malaria epidemics. While it focuses on one weather system, the climate factors it looks at can be generalised to other environments. For example, it examines how wind, dust, rainfall, temperature and humidity, amongst others, affect mosquito density and malaria or meningitis epidemics in people. The website also offers a key resource for researchers in the form of an open-access bibliographic database containing more than 250 scientific articles.
This site provides access to reports of projects and case studies conducted as part of the international Assessments of Impacts and Adaptations to Climate Change in Multiple Regions and Sectors (AIACC) initiative. A searchable database of projects by country, region, and sector contains some of the final reports in pdf format.
The projects cover adaptation in almost all sectors, with five projects with final reports in southern Africa, one project in eastern Africa and three projects in western Africa. The site also provides accessible summaries of each project, as well as updates posted throughout the duration of the studies.
This online resource captures current articles, reports, papers and books sourced from nongovernmental organisations and development agencies such as ActionAid, SouthSouthNorth, the International Institute for Environment and Development and the World Bank. The site features short summaries and links to full papers, all of which are relevant to adaptation in sub-Saharan Africa.
The Climate Prediction Centre's African Desk aims to create a partnership between the United States' National Centers for Environmental Prediction (NCEP) and the African Meteorological Services to encourage exchange of data and train meteorologists.
The website contains weather summaries, rainfall, monsoon predictions and various short and long term weather forecasts. The African Desk also hosts two visitors at a time for training in climate change monitoring and predictions methods.
This organisation aims to encourage dialogue and the sharing of good practice by policymakers and opinion leaders on the future of agricultural growth in Africa. It covers topic areas such as climate change, land use, policy processes and science, technology and innovation. The website publishes free to access publications, lists of relevant events and fellowships, and online discussions on issues including small-scale agriculture. It also provides access to resources for policy engagement, such as policy briefs, and a regularly updated list of relevant organisations and websites.
The Guardian Environment website publishes news and commentary on environmental issues such as climate change, energy, ethical living, food and recycling.
It also provides blogs, job listings and multimedia, including audio and video podcasts. Users can comment and are encouraged to join discussions.
The website also aggregates relevant news from members of the Guardian Environment Network, which brings together the world's best environment websites including SciDev.Net, China Dialogue, Real Science and the World Resources Institute.
This non-profit organisation aims to develop sustainable ecological farming in Africa and India. ICRISAT's mission is "to help empower 600 million poor people to overcome hunger, poverty and a degraded environment in the dry tropics through better agriculture".
ICRISAT's BioPower initiative aims to ensure that bioenergy research benefits the poor. Its activities include analysing bioenergy trends and understanding their repercussions for the poor, and enabling governments to formulate pro-bioenergy policies that benefit poor people.
This site is maintained by the Kenya Meteorological Department and contains short term weather forecasts, seasonal forecasts, and agro-meteorological data. Other climatological data is available from the website upon request.
The Southern African Regional Climate Outlook Forum (SARCOF) is a regional seasonal weather outlook prediction and application process adopted by the fourteen countries of the Southern African Development Community (SADC) Member States.
The site provides access to weather forecasts and climate predictions and features weather warnings, mid-season rainfall analysis and rainfall review reports to mitigate extreme climatic conditions.
These country-level reports, published by the Climate Systems and Policy research cluster at the University of Oxford, provide data on observed and projected climates in 52 countries in the developing world.
Each report contains maps, diagrams, tables and a narrative of the data, putting it in the context of the country's general climate. Files in text format with datasets containing underlying and model data can be downloaded for further use.
After the release of the Third Assessment Report of the Intergovernmental Panel on Climate Change in 2001, UNEP and GRID Arendal published this set of 25 graphics focused on the special challenges that Africa faces due to expected long term climate change.
Three sections cover the evidence of change in Africa, the science driving these changes, and vulnerability to — and trends in — extreme events on the continent. The graphics also show the severity of climate impacts on fresh water, human health, and food in Africa. | <urn:uuid:02c598e4-b789-4b16-a239-434d102f9a6c> | 2.90625 | 1,177 | Content Listing | Science & Tech. | 14.803434 | 414 |
Aug. 20, 2012 Botany is plagued by the same problem as the rest of science and society: our ability to generate data quickly and cheaply is surpassing our ability to access and analyze it. In this age of big data, scientists facing too much information rely on computers to search large data sets for patterns that are beyond the capability of humans to recognize -- but computers can only interpret data based on the strict set of rules in their programming.
New tools called ontologies provide the rules computers need to transform information into knowledge, by attaching meaning to data, thereby making those data retrievable by computers and more understandable to human beings. Ontology, from the Greek word for the study of being or existence, traditionally falls within the purview of philosophy, but the term is now used by computer and information scientists to describe a strategy for representing knowledge in a consistent fashion. An ontology in this contemporary sense is a description of the types of entities within a given domain and the relationships among them.
A new article in this month's American Journal of Botany by Ramona Walls (New York Botanical Garden) and colleagues describes how scientists build ontologies such as the Plant Ontology (PO) and how these tools can transform plant science by facilitating new ways of gathering and exploring data.
When data from many divergent sources, such as data about some specific plant organ, are associated or "tagged" with particular terms from a single ontology or set of interrelated ontologies, the data become easier to find, and computers can use the logical relationships in the ontologies to correctly combine the information from the different databases. Moreover, computers can also use ontologies to aggregate data associated with the different subclasses or parts of entities.
For example, suppose a researcher is searching online for all examples of gene expression in a leaf. Any botanist performing this search would include experiments that described gene expression in petioles and midribs or in a frond. However, a search engine would not know that it needs to include these terms in its search -- unless it was told that a frond is a type of leaf, and that every petiole and every midrib are parts of some leaf. It is this information that ontologies provide.
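A toy sketch in Python of how such relationships let software expand a query for "leaf"; the dictionary below is a made-up stand-in for a real ontology and only follows direct links, purely to illustrate the idea.
# Toy ontology: each term maps to (relation, parent term) pairs.
ontology = {
    "frond":   [("is_a", "leaf")],
    "petiole": [("part_of", "leaf")],
    "midrib":  [("part_of", "leaf")],
}

def terms_under(target, onto):
    """Collect every term linked to the target by is_a or part_of."""
    return {term for term, links in onto.items()
            if any(parent == target for _, parent in links)}

# A search for "leaf" can now be expanded to include the tagged subterms.
print({"leaf"} | terms_under("leaf", ontology))  # {'leaf', 'frond', 'petiole', 'midrib'}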
The article in the American Journal of Botany by Walls and colleagues describes what ontologies are, why they are relevant to plant science, and some of the basic principles of ontology development. It includes an overview of the ontologies that are relevant to botany, with a more detailed description of the PO and the challenges of building an ontology that covers all green plants. The article also describes four keys areas of plant science that could benefit from the use of ontologies: (1) comparative genetics, genomics, phenomics, and development; (2) taxonomy and systematics; (3) semantic applications; and (4) education. Although most of the examples in this article are drawn from plant science, the principles could apply to any group of organisms, and the article should be of interest to zoologists as well.
As genomic and phenomic data become available for more species, many different research groups are embarking on the annotation of their data and images with ontology terms. At the same time, cross-species queries are becoming more common, causing more researchers in plant science to turn to ontologies. Ontology developers are working with the scientists who generate data to make sure ontologies accurately reflect current science, and with database developers and publishers to find ways to make it easier for scientists to associate their data with ontologies.
- R. L. Walls, B. Athreya, L. Cooper, J. Elser, M. A. Gandolfo, P. Jaiswal, C. J. Mungall, J. Preece, S. Rensing, B. Smith, D. W. Stevenson. Ontologies as integrative tools for plant science. American Journal of Botany, 2012; 99 (8): 1263 DOI: 10.3732/ajb.1200222
| <urn:uuid:e259c501-12cb-4c8a-8d16-344ec175ca5c> | 3.34375 | 854 | News Article | Science & Tech. | 37.604072 | 415 |
Apr 2, 2009
Stargazers take note: Today marks the beginning of a four-day celestial celebration called 100 Hours of Astronomy, part of the International Astronomical Union's International Year of Astronomy (IYA2009). The IYA2009 marks the 400th anniversary of Italian astronomer Galileo Galilei turning his telescopes to the skies and beginning a new era of astronomical observation.
A kickoff event at the Franklin Institute in Philadelphia today showcased one of Galileo's surviving telescopes. According to the institute, this marks the first time one of the two remaining instruments has left Italy.
An international "star party" is scheduled to take place during which telescopes will be made available to the public at different sites around the globe. Many are amateur telescopes set up in parks or on sidewalks; the 100 Hours of Astronomy Web site has details on many of the planned activities. Most star parties are scheduled to take place on Saturday, but some are planned for other times, such as one beginning this evening in New York City, where Columbia University will set up telescopes in Harlem's Powell Plaza for viewing the moon and Saturn.
| <urn:uuid:486f237c-137a-42f4-a753-124ae39ffd6e> | 2.53125 | 321 | News (Org.) | Science & Tech. | 26.846581 | 416 |
- Introduction to Hubble
- The Current Science Instruments
- Mission Operations and Observations
- Previous Instruments
- Technical Overview
Introduction to Hubble
The Hubble Space Telescope (HST) is a cooperative program of the European Space Agency (ESA) and the National Aeronautics and Space Administration (NASA) to operate a space-based observatory for the benefit of the international astronomical community. HST is an observatory first envisioned in the 1940s, designed and built in the 1970s and 80s, and operational since 1990. Since its preliminary inception, HST was designed to be a different type of mission for NASA -- a long-term, space-based observatory. To accomplish this goal and protect the spacecraft against instrument and equipment failures, NASA planned on regular servicing missions. Hubble has special grapple fixtures, 76 handholds, and is stabilized in all three axes. HST is a 2.4-meter reflecting telescope, which was deployed in low-Earth orbit (600 kilometers) by the crew of the space shuttle Discovery (STS-31) on 25 April 1990.
Responsibility for conducting and coordinating the science operations of the Hubble Space Telescope rests with the Space Telescope Science Institute (STScI) on the Johns Hopkins University Homewood Campus in Baltimore, Maryland. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc. (AURA).
HST's current complement of science instruments includes three cameras, two spectrographs, and fine guidance sensors (primarily used for accurate pointing, but also for astrometric observations). Because of HST's location above the Earth's atmosphere, these science instruments can produce high-resolution images of astronomical objects. Ground-based telescopes are limited in their resolution by the Earth’s atmosphere, which causes a variable distortion in the images. Hubble can observe ultraviolet radiation, which is blocked by the atmosphere and therefore unavailable to ground-based telescopes. In the infrared portion of the spectrum, the Earth’s atmosphere adds a great deal of background, which is absent in Hubble observations.
When originally planned in the early 1970s, the Large Space Telescope program called for return to Earth, refurbishment, and re-launch every 5 years, with on-orbit servicing every 2.5 years. Hardware lifetime and reliability requirements were based on that 2.5-year interval between servicing missions. In the late 70s, contamination and structural loading concerns associated with return to Earth aboard the shuttle eliminated the concept of ground return from the program. NASA decided that on-orbit servicing might be adequate to maintain HST for its 15-year design life. A three-year cycle of on-orbit servicing was adopted. HST servicing missions in December 1993, February 1997, December 1999, March 2002 and May 2009 were enormous successes and validated the concept of on-orbit servicing of Hubble.
The years since the launch of HST in 1990 have been momentous, with the discovery of spherical aberration in its main mirror and the search for a practical solution. The STS-61 (Endeavour) mission of December 1993 corrected the effects of spherical aberration and fully restored the functionality of HST. Since then, servicing missions have regularly provided opportunities to repair aging and failed equipment as well as incorporate new technologies in the telescope, especially in the Science Instruments that are the heart of its operations.
See OPO's Hubble Primer for more information about HST.
The Current Science Instruments
Space Telescope Imaging Spectrograph
A spectrograph spreads out the light gathered by a telescope so that it can be analyzed to determine such properties of celestial objects as chemical composition and abundances, temperature, radial velocity, rotational velocity, and magnetic fields. The Space Telescope Imaging Spectrograph (STIS) can study these objects across a spectral range from the UV (115 nanometers) through the visible red and the near-IR (1000 nanometers).
STIS uses three detectors: a cesium iodide photocathode Multi-Anode Microchannel Array (MAMA) for 115 to 170 nm, a cesium telluride MAMA for 165 to 310 nm, and a Charge Coupled Device (CCD) for 165 to 1000 nm. All three detectors have a 1024 X 1024 pixel format. The field of view for each MAMA is 25 X 25 arc-seconds, and the field of view of the CCD is 52 X 52 arc-seconds.
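Dividing the field of view by the detector format gives a rough feel for the sampling these numbers imply: 25 arc-seconds over 1024 pixels is about 0.024 arc-seconds per pixel for the MAMAs, and 52 arc-seconds over 1024 pixels is about 0.05 arc-seconds per pixel for the CCD (simple arithmetic from the figures above, not an official specification).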
The main advance in STIS is its capability for two-dimensional rather than one-dimensional spectroscopy. For example, it is possible to record the spectrum of many locations in a galaxy simultaneously, rather than observing one location at a time. STIS can also record a broader span of wavelengths in the spectrum of a star at one time. As a result, STIS is much more efficient at obtaining scientific data than the earlier HST spectrographs.
A power supply in STIS failed in August 2004, rendering it inoperable. During the servicing mission in 2009, astronauts successfully repaired the STIS by removing the circuit card containing the failed power supply and replacing it with a new card. Since STIS was not designed for in-orbit repair of internal electronics, this task was a substantial challenge for the astronaut crew.
Near Infrared Camera and Multi-Object Spectrometer
The Near Infrared Camera and Multi-Object Spectrometer (NICMOS) is an HST instrument providing the capability for infrared imaging and spectroscopic observations of astronomical targets. NICMOS detects light with wavelengths between 0.8 and 2.5 microns - longer than the human-eye limit.
The sensitive HgCdTe arrays that comprise the infrared detectors in NICMOS must operate at very cold temperatures. After its deployment, NICMOS kept its detectors cold inside a cryogenic dewar (a thermally insulated container much like a thermos bottle) containing frozen nitrogen ice. NICMOS is HST's first cryogenic instrument.
The frozen nitrogen ice cryogen in NICMOS was exhausted in early 1999, rendering the Instrument inoperable at that time. An alternate means of cooling the NICMOS was developed and installed in the March 2002 servicing mission. This device uses a mechanical cooler to cool the detectors to the low temperatures necessary for operations. The technology for this cooler was not available when the instrument was originally designed, but fortunately became available in time to support the reactivation of the instrument.
Since late 2008, the NICMOS Cooling System (NCS) has experienced difficulties maintaining the instrument’s nominal scientific operating state, in which the detectors are maintained at ~ 77K. Repeated restart attempts have demonstrated that it is not possible to restart the NCS in a cold state immediately following safing events. The main culprit for the problems is believed to be water ice in the primary (circulator) loop of the NCS. An inefficient approach to this problem would be to put the NCS through a several-month warm-up/cooldown cycle and hope that there is an opportunity for science prior to the next payload safing event.
The only feasible path towards satisfactory operation of NICMOS is to remove the putative water by venting the existing contaminated Ne coolant and replacing it with a fresh charge, which is available onboard but has never actually been used on-orbit. Based on the Cycle 18 proposal review results, STScI and Goddard HST Project, with the concurrence of NASA Headquarters, have decided that NICMOS will not be available for science in Cycle 18. A decision on the availability of NICMOS beyond Cycle 18 has not yet been made and awaits further discussion.
Advanced Camera for Surveys
The ACS is a camera designed to provide HST with a deep, wide-field survey capability from the visible to near-IR, imaging from the near-UV to the near-IR with the point-spread function critically sampled at 6300 Å, and solar blind far-UV imaging. The primary design goal of the ACS Wide-Field Channel is to achieve a factor of 10 improvement in discovery efficiency, compared to WFPC2, where discovery efficiency is defined as the product of imaging area and instrument throughput. These gains are a direct result of improved technology since the HST was launched in 1990. The Charge Coupled Devices (CCDs) used as detectors in the ACS, are more sensitive than those of the late 80s and early 90s, and also have many more pixels, capturing more of the sky in each exposure. The wide field camera in the ACS is a 16 megapixel camera.
The ACS was installed during the March 2002 servicing mission. As a result of the improved sensitivity it instantly became the most heavily used Hubble instrument. It has been used for surveys of varying breadths and depths, as well as for detailed studies of specific objects. The ACS worked well until January 2007, at which time a failure in the electronics for the CCDs occurred and has prevented use of those detectors. Engineers and astronauts then developed an approach to remove and replace the failed electronics, which was carried out during the 2009 servicing mission. As with the STIS repair, the ACS repair was challenging, since the instrument was not designed originally with this type of repair in mind.
Fine Guidance Sensors
The Fine Guidance Sensors (FGS), in addition to being an integral part of the HST Pointing Control System (PCS), provide HST observers with the capability of precision astrometry and milliarcsecond resolution over a wide range of magnitudes (3 < V < 16.8). Its two observing modes - Position Mode and Transfer Mode - have been used to determine the parallax and proper motion of astrometric targets to a precision of 0.2 mas, and to detect duplicity or structure around targets as close as 8 mas (visual orbits can be determined for binaries as close as 12 mas).
Cosmic Origins Spectrograph
The Cosmic Origins Spectrograph (COS) is a fourth-generation instrument that was installed on the Hubble Space Telescope (HST) during the 2009 servicing mission. COS is designed to perform high sensitivity, moderate- and low-resolution spectroscopy of astronomical objects in the 115-320 nm wavelength range. It significantly enhances the spectroscopic capabilities of HST at ultraviolet wavelengths, and provides observers with unparalleled opportunities for observing faint sources of ultraviolet light. The primary science objectives of the COS are the study of the origins of large scale structure in the Universe, the formation and evolution of galaxies, the origin of stellar and planetary systems, and the cold interstellar medium.
The COS achieves its improved sensitivity through advanced detectors and optical fabrication techniques. At UV wavelengths even the best mirrors do not reflect all light incident upon them. Previous spectrographs have required multiple (5 or more) reflections in order to display the spectrum on the detector. A substantial portion of the COS improvement in sensitivity is due to an optical design that requires only a single reflection inside the instrument, reducing the losses due to imperfect reflectivity. This design is possible only with advanced techniques for fabrication, which were not available when earlier generations of HST spectrographs were designed.
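As a rough illustration of why the single reflection matters: if each far-UV mirror surface reflected, say, 80% of the incident light (an assumed figure used purely for illustration; real coating performance differs), five reflections would pass only 0.8^5, or about 33%, of the light, whereas a single reflection would pass the full 80%.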
COS has a far-UV and near-UV channel that use different detectors: two side-by-side 16384 x 1024 pixel Cross-Delay Line Microchannel Plates (MCPs) for the far-UV, 115 to 205 nm, and a 1024x1024 pixel cesium telluride MAMA for the near-UV,170 to 320 nm. The far-UV detector is similar to detectors flown on the FUSE spacecraft, and takes advantage of improved technology over the past decade. The near-UV detector is a spare STIS detector.
Wide Field Camera 3
The Wide Field Camera 3 (WFC3) is also a fourth generation instrument that was installed during the 2009 servicing mission. Equipped with state-of-the-art detectors and optics, WFC3 provides wide-field imaging with continuous spectral coverage from the ultraviolet into the infrared, dramatically increasing both the survey power and the panchromatic science capabilities of HST.
The WFC3 has two camera channels: the UVIS channel that operates in the ultraviolet and visible bands (from about 200 to 1000 nm), and the IR channel that operates in the infrared (from 900 to 1700 nm). The performance of the two channels was designed to complement the performance of the ACS. The UVIS channel provides the largest field of view and best sensitivity of any ultraviolet camera HST has had. This is feasible as a result of continued improvement in the performance of Charge Coupled Devices designed for astronomical use. The IR channel on WFC3 represents a major improvement on the capabilities of the NICMOS, primarily as a result of the availability of much larger detectors, 1 megapixel in the WFC3/IR vs. 0.06 megapixels for the NICMOS. In addition, modern IR detectors like that in the WFC3 have benefited from improvements over the last decade in design and fabrication.
Mission Operations and Observations:
Although HST operates around the clock, not all of its time is spent observing. Each orbit lasts about 95 minutes, with time allocated for housekeeping functions and for observations. "Housekeeping" functions include turning the telescope to acquire a new target, switching communications antennas and data transmission modes, receiving command loads and downlinking data, calibrating the instruments and similar activities. On average, the telescope spends about 50% of the time observing astronomical targets. About 50% of the time the view to celestial targets is blocked by the Earth, and that time is used to carry out these support functions.
Each year the STScI solicits ideas for scientific programs from the worldwide astronomical community. All astronomers are free to submit proposals for observations. Typically, 700-1200 proposals are submitted each year. A series of panels, involving roughly 100 astronomers from around the world, are convened to recommend which of the proposals to carry out over the next year. There is only sufficient time in a year to schedule about 1/5 of the proposals that are submitted, so the competition for Hubble observing time is tight.
After proposals are chosen, the observers submit detailed observation plans. The STScI uses these to develop a yearlong observing plan, spreading the observations evenly throughout the period and taking into account scientific reasons that may require some observations to be at a specific time. This long-range plan incorporates calibrations and engineering activities, as well as the scientific observations. This plan is then used as the basis for detailed scheduling of the telescope, which is done one week at a time. Each event is translated into a series of commands to be sent to the onboard computers. Computer loads are uplinked several times a day to keep the telescope operating efficiently.
When possible, two scientific instruments are used simultaneously to observe adjacent target regions of the sky. For example, while a spectrograph is focused on a chosen star or nebula, a camera can image a sky region offset slightly from the main viewing target. During observations the Fine Guidance Sensors (FGS) track their respective guide stars to keep the telescope pointed steadily at the right target.
Engineering and scientific data from HST, as well as uplinked operational commands, are transmitted through the Tracking Data Relay Satellite (TDRS) system and its companion ground station at White Sands, New Mexico. Up to 24 hours of commands can be stored in the onboard computers. Data can be broadcast from HST to the ground stations immediately or stored on a solid-state recorder and downlinked later.
The observer on the ground can examine the "raw" images and other data within a few minutes for a quick-look analysis. Within 24 hours, GSFC formats the data for delivery to the STScI. STScI is responsible for calibrating the data and providing them to the astronomer who requested the observations. The astronomer has a year to analyze the data from the proposed program, draw conclusions, and publish the results. After one year the data become accessible to all astronomers. The STScI maintains an archive of all data taken by HST. This archive has become an important research tool in itself. Astronomers regularly check the archive to determine whether data in it can be used for a new problem they are working on. Frequently they find that there are HST data relevant for their research, and they can then download these data free of charge.
Hubble has proven to be an enormously successful program, providing new insight into the mysteries of the Universe.
Previously Flown Instruments:
- Wide Field Planetary Camera
- Wide Field Planetary Camera 2
- Faint Object Spectrograph
- Goddard High Resolution Spectrograph
- Corrective Optics Space Telescope Axial Replacement
- Faint Object Camera
- High Speed Photometer
Wide Field/Planetary Camera
The Wide Field/Planetary Camera (WF/PC1) was used from April 1990 to November 1993, to obtain high resolution images of astronomical objects over a relatively wide field of view and a broad range of wavelengths (1150 to 11,000 Angstroms).
Wide Field Planetary Camera 2
The original Wide Field/Planetary Camera (WF/PC1) was replaced by WFPC2 on the STS-61 shuttle mission in December 1993. WFPC2 was a spare instrument developed by the Jet Propulsion Laboratory in Pasadena, California, at the time of HST launch. It consisted of four cameras. The relay mirrors in WFPC2 were spherically aberrated in just the right way to correct for the spherically aberrated primary mirror of the observatory. (HST's primary mirror is 2 microns too flat at the edge, so the corrective optics within WFPC2 were too high by that same amount.) The "heart" of WFPC2 consisted of an L-shaped trio of wide-field sensors and a smaller, high resolution ("planetary") camera tucked in the square's remaining corner.
WFPC2 was removed in the May 2009 servicing mission and replaced by the Wide-Field Camera 3 (WFC3).
Faint Object Spectrograph
A spectrograph spreads out the light gathered by a telescope so that it can be analyzed to determine such properties of celestial objects as chemical composition and abundances, temperature, radial velocity, rotational velocity, and magnetic fields. The Faint Object Spectrograph (FOS) was one of the original instruments on Hubble; it was replaced by NICMOS during the second servicing mission in 1997. The FOS examined fainter objects than the High Resolution Spectrograph (HRS), and could study these objects across a much wider spectral range -- from the UV (1150 Angstroms) through the visible red and the near-IR (8000 Angstroms).
The FOS used two 512-element Digicon sensors (light intensifiers). The "blue" tube was sensitive from 1150 to 5500 Angstroms (UV to yellow). The "red" tube was sensitive from 1800 to 8000 Angstroms (longer UV through red). Light entered the FOS through any of 11 different apertures from 0.1 to about 1.0 arc-seconds in diameter. There were also two occulting devices to block out light from the center of an object while allowing the light from just outside the center to pass on through. This could allow analysis of the shells of gas around red giant stars or the faint galaxies around a quasar.
The FOS had two modes of operation: low resolution and high resolution. At low resolution, it could reach 26th magnitude in one hour with a resolving power of 250. At high resolution, the FOS could reach only 22nd magnitude in an hour (before noise becomes a problem), but the resolving power was increased to 1300.
Goddard High Resolution Spectrograph
The Goddard High Resolution Spectrograph (GHRS) was one of the original instruments on Hubble; it failed in 1997, shortly before being replaced by STIS during the second servicing mission. As a spectrograph, HRS also separated incoming light into its spectral components so that the composition, temperature, motion, and other chemical and physical properties of the objects could be analyzed. The HRS contrasted with the FOS in that it concentrated entirely on UV spectroscopy and traded the extremely faint objects for the ability to analyze very fine spectral detail. Like the FOS, the HRS used two 521-channel Digicon electronic light detectors, but the detectors of the HRS were deliberately blind to visible light. One tube was sensitive from 1050 to 1700 Angstroms; while the other was sensitive from 1150 to 3200 Angstroms.
The HRS also had three resolution modes: low, medium, and high. "Low resolution" for the HRS was 2000 -- higher than the best resolution available on the FOS. Examining a feature at 1200 Angstroms, the HRS could resolve detail of 0.6 Angstroms and could examine objects down to 19th magnitude. At medium resolution of 20,000; that same spectral feature at 1200 Angstroms could be seen in detail down to 0.06 Angstroms, but the object would have to be brighter than 16th magnitude to be studied. High resolution for the HRS was 100,000, allowing a spectral line at 1200 Angstroms to be resolved down to 0.012 Angstroms. However, "high resolution" could be applied only to objects of 14th magnitude or brighter. The HRS could also discriminate between variations in light from objects as rapid as 100 milliseconds apart.
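These figures follow from the usual definition of resolving power, R = λ/Δλ: at 1200 Angstroms, R = 2000 corresponds to Δλ = 1200/2000 = 0.6 Angstroms, R = 20,000 to 0.06 Angstroms, and R = 100,000 to 0.012 Angstroms.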
Corrective Optics Space Telescope Axial Replacement
COSTAR was not a science instrument; it was a corrective optics package that displaced the High Speed Photometer during the first servicing mission to HST. COSTAR was designed to optically correct the effects of the primary mirror's aberration for the Faint Object Camera (FOC), the High Resolution Spectrograph (HRS), and the Faint Object Spectrograph (FOS). All the other instruments that have been installed since HST's initial deployment have been designed with their own corrective optics. When all of the first-generation instruments were replaced by other instruments, COSTAR was no longer needed and was removed from Hubble during the 2009 servicing mission.
Faint Object Camera
The Faint Object Camera (FOC) was built by the European Space Agency as one of the original science instruments on Hubble. It was replaced by ACS during the servicing mission in 2002.
There were two complete detector systems for the FOC. Each used an image intensifier tube to produce an image on a phosphor screen that was 100,000 times brighter than the light received. This phosphor image was then scanned by a sensitive electron-bombarded silicon (EBS) television camera. This system was so sensitive that objects brighter than 21st magnitude had to be dimmed by the camera's filter systems to avoid saturating the detectors. Even with a broad-band filter, the brightest object that could be accurately measured was 20th magnitude.
The FOC offered three different focal ratios: f/48, f/96, and f/288 on a standard television picture format. The f/48 image measured 22 X 22 arc-seconds and yielded a resolution (pixel size) of 0.043 arc-seconds. The f/96 mode provided an image of 11 X 11 arc-seconds on each side and a resolution of 0.022 arc-seconds. The f/288 field of view was 3.6 X 3.6 arc-seconds square, with resolution down to 0.0072 arc-seconds.
High Speed Photometer
The High Speed Photometer (HSP) was one of the four original axial instruments on the Hubble Space Telescope (HST). The HSP was designed to make very rapid photometric observations of astrophysical sources in a variety of filters and passbands from the near ultraviolet to the visible. The HSP was removed from HST during the first servicing mission in December, 1993.
For more complete technical information about HST and its instruments, see the HST Primer. | <urn:uuid:eb22ec4d-4069-49a1-8a5e-6fdb526d2a5a> | 3.484375 | 4,978 | Knowledge Article | Science & Tech. | 38.758229 | 417 |
University of Idaho Geologists Take Preventative Measures Before Potential Earthquake
Monday, July 11 2011
IDAHO FALLS, Idaho – Grand Teton National Park is a spectacular site along the Wyoming-Idaho border. The park brings in nearly 4 million visitors a year and creates a scenic background for those who live there.
While the beauty is stunning, it’s tempered by the potential of danger from beneath the ground. The majestic mountain range sits on an active fault line that could one day lead to a severe earthquake.
The University of Idaho and the Idaho Bureau of Homeland Security are working together with local officials to identify areas that would be most affected in Idaho’s Teton County in the event of an earthquake. The results of the survey will allow county leaders and citizens the opportunity to better protect government buildings and private property before an earthquake hits.
“With eastern Idaho’s risk from earthquakes, it is important to have the best information so that emergency managers can be prepared and make informed decisions,” said Brig. Gen. Bill Shawver, director of Idaho Bureau of Homeland Security. “This project is a great cooperative effort between Teton County, the University of Idaho and BHS that will increase the ability of emergency managers to plan for earthquakes.”
Teton County’s governmental seat is the city of Driggs, roughly 20 miles west of the Teton fault. While this fault has been seismically quiet in recorded historic time, geologists believe it could generate a magnitude 7.2 earthquake at some point in the future.
“Such an earthquake could produce heavy damage in Teton County to structures not built to seismic standards,” explained Bill Phillips, research geologist for the Idaho Geological Survey. “The amount of damage during earthquakes also is influenced by local soil and rock conditions. We are constructing a map of these conditions in Teton County so that emergency planners can be better prepared.”
During the week of July 18-22, geologists will be in the field using seismographs and geophone sensors in 25 places around Teton County to determine what type of soil and bedrock make up the area and how those areas would react during potential earthquake activity.
Results from the survey will be given to the county’s emergency services center.
The survey is funded by the Idaho Bureau of Homeland Security through the Earthquake Hazard Reduction grant program. For more information on the survey, contact Bill Phillips from the University of Idaho at (208) 301-8794, or Greg Adams from Teton County at (208) 354-2703.
# # #
About the University of Idaho
Founded in 1889, the University of Idaho is the state’s land-grant institution and its principal graduate education and research university, bringing insight and innovation to the state, the nation and the world. University researchers attract nearly $100 million in research grants and contracts each year. The University of Idaho is classified by the prestigious Carnegie Foundation as high research activity. The student population of 12,000 includes first-generation college students and ethnically diverse scholars, who select from more than 130 degree options in the colleges of Agricultural and Life Sciences; Art and Architecture; Business and Economics; Education; Engineering; Law; Letters, Arts and Social Sciences; Natural Resources; and Science. The university also is charged with the statewide mission for medical education through the WWAMI program. The university combines the strength of a large university with the intimacy of small learning communities and focuses on helping students to succeed and become leaders. It is home to the Vandals, and competes in the Western Athletic Conference. For more information, visit www.uidaho.edu | <urn:uuid:598d39a2-6240-4ac2-9698-b54ba43fcdc1> | 3.21875 | 752 | News (Org.) | Science & Tech. | 31.121045 | 418 |
At 54.6 million km away at its closest, the fastest travel to Mars from Earth using current technology (and no small bit of math) takes around 214 days — that’s about 30 weeks, or 7 months. A robotic explorer like Curiosity may not have any issues with that, but it’d be a tough journey for a human crew. Developing a quicker, more efficient method of propulsion for interplanetary voyages is essential for future human exploration missions… and right now a research team at the University of Alabama in Huntsville is doing just that.
This summer, UAHuntsville researchers, partnered with NASA’s Marshall Space Flight Center and Boeing, are laying the groundwork for a propulsion system that uses powerful pulses of nuclear fusion created within hollow 2-inch-wide “pucks” of lithium deuteride. And like hockey pucks, the plan is to “slapshot” them with plasma energy, fusing the lithium and hydrogen atoms inside and releasing enough force to ultimately propel a spacecraft — an effect known as “Z-pinch”.
“If this works,” said Dr. Jason Cassibry, an associate professor of engineering at UAH, “we could reach Mars in six to eight weeks instead of six to eight months.”
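A back-of-the-envelope way to see what those travel times imply, in Python; this is naive straight-line arithmetic using the closest-approach distance quoted above, so it only gives a feel for the numbers — real transfer trajectories are longer curved paths.
# Closest Earth-Mars separation quoted above, in kilometres.
distance_km = 54.6e6
seconds_per_day = 86400.0

for label, days in [("~214 days (current propulsion)", 214),
                    ("~49 days (7 weeks, pulsed fusion)", 49)]:
    avg_speed = distance_km / (days * seconds_per_day)  # km/s averaged over the trip
    print(label, "->", round(avg_speed, 1), "km/s average")
Even this crude estimate shows that the fusion figure implies sustaining several times the average speed of a conventional transfer.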
The key component to the UAH research is the Decade Module 2 — a massive device used by the Department of Defense for weapons testing in the 90s. Delivered last month to UAH (some assembly required) the DM2 will allow the team to test Z-pinch creation and confinement methods, and then utilize the data to hopefully get to the next step: fusion of lithium-deuterium pellets to create propulsion controlled via an electromagnetic field “nozzle”.
Although a rocket powered by Z-pinch fusion wouldn’t be used to actually leave Earth’s surface — it would run out of fuel within minutes — once in space it could be fired up to efficiently spiral out of orbit, coast at high speed and then slow down at the desired location, just like conventional rockets except… better.
“It’s equivalent to 20 percent of the world’s power output in a tiny bolt of lightning no bigger than your finger. It’s a tremendous amount of energy in a tiny period of time, just a hundred billionths of a second.”
– Dr. Jason Cassibry on the Z-pinch effect
In fact, according to a UAHuntsville news release, a pulsed fusion engine is pretty much the same thing as a regular rocket engine: a “flying tea kettle.” Cold material goes in, gets energized and hot gas pushes out. The difference is how much and what kind of cold material is used, and how forceful the push out is.
Everything else is just rocket science.
Read more on the University of Huntsville news site here and on al.com. Also, Paul Gilster at Centauri Dreams has a nice write-up about the research as well as a little history of Z-pinch fusion technology… check it out. Top image: Mars imaged with Hubble’s Wide-Field Planetary Camera 2 in March 1995. | <urn:uuid:c8b6f20c-f68c-449a-a881-34c4b2fdc078> | 3.953125 | 694 | News Article | Science & Tech. | 51.146838 | 419 |
In programming, classification of a particular type of information. It is easy for humans to distinguish between different types of data. We can usually tell at a glance whether a number is a percentage, a time, or an amount of money. We do this through special symbols -- %, :, and $ -- that indicate the data's type. Similarly, a computer uses special internal codes to keep track of the different types of data it processes.
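A small Python sketch of that idea — the same stored bytes mean nothing until a data type says how to interpret them (the values are arbitrary examples):
import struct

raw = struct.pack("i", 1065353216)   # write four bytes as a 32-bit integer
print(struct.unpack("i", raw)[0])    # read them back as an integer: 1065353216
print(struct.unpack("f", raw)[0])    # read the same bytes as a float: 1.0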
Most programming languages require the programmer to declare the data type of every data object, and most database systems require the user to specify the type of each data field. The available data types vary from one programming language to another, and from one database application to another, but the following usually exist in one form or another: | <urn:uuid:47437832-f7d4-4366-94ba-2caef5456fb8> | 3.609375 | 152 | Knowledge Article | Software Dev. | 27.357179 | 420 |
In 1991 the Galileo spacecraft photographed the asteroid, Gaspra. This picture shows the asteroid in false color. Gaspra circles the Sun between Mars and Jupiter.
Japan And U.S. Join Together for Asteroid Expedition
News story originally written on June 20, 1997
The first asteroid collection mission has been set. Japan and the United States will put joint efforts into the MUSES-C mission to be launched in January 2002 from Kagoshima Space Center, Japan. This will allow the spacecraft to arrive at the NEREUS asteroid in September 2003.
Nereus is a small asteroid approximately one mile in diameter. It was discovered in 1982. At its closest point to the Sun, its orbit takes it just inside the orbit of the Earth.
The MUSES-C spacecraft contains a miniature robotic rover that will conduct surface measurements of the rocky asteroid. The rover weighs less than 2.2 pounds. It is to date the smallest ever flown in space. Asteroid samples will also be taken during the mission and will be returned in January 2006 by a parachute-borne recovery capsule.
This mission is extremely important. If successful, it will grant Earth-bound scientists first-hand information about the materials that helped form the inner, rocky planets more than four billion years ago. Isotopic measurements of the asteroid samples may even unlock information about cosmological beginnings.
Dr. Jurgen Rahe, director of Solar System Exploration at NASA headquarters expressed excitement about the mission by saying, "This ambitious mission is an opportunity for two spacefaring nations to combine their expertise and achieve something truly
| <urn:uuid:bbafcfb3-30eb-4ad6-b65b-900551d18354> | 3.640625 | 695 | Content Listing | Science & Tech. | 49.231404 | 421 |
xmlsh derives its syntax from the Unix shells (see Philosophy). If you are familiar with any of these shell languages (sh, bash, ksh, zsh) you should be right at home. An attempt was made to stay very close to the sh syntax where reasonable, but not all subtleties or features of the Unix shells are implemented. In order to accommodate native XML types and pipelines, some deviations and extensions were necessary. Lastly, as an implementation issue, xmlsh is implemented in Java using the JavaCC parser generator; this made support for some of the syntax and features of the C-based shells difficult or impossible. Future work may try to tighten up these issues.
xmlsh can run in 2 modes, interactive and batch. In interactive mode, a prompt ("$ ") is displayed and in batch mode there is no prompt. Otherwise they are identical. Running xmlsh with no arguments starts an interactive shell. Running with an argument runs in batch mode and invokes the given script.
You can run an xmlsh script by passing it as the first argument, followed by any script arguments
xmlsh myscript.xsh arg1 arg2
For details on xmlsh invocation and parameters see xmlsh command
Commands are executed in an environment that consists of the standard process environment, which is also passed to external (sub-process) commands:
- Current Directory
- Environment variables
- Standard ports ( input/output/error )
The shell itself maintains additional environment which is passed to all subshells, but not to external (sub process) commands.
- Namespaces, including the default namespace (See Namespaces)
- Declared functions (See SyntaxFunction )
- imported modules and packages (See Modules)
- Shell variables (Environment variables and internal shell variables) (See BuiltinVariables)
- Positional parameters ($1 ... $n)
- Shell Options (-v, -x ...)
On startup, xmlsh reads the standard input (interactive mode) or the script file (batch mode), parses one command at a time and executes it. The following steps are performed:
- Parse statement. Statements are parsed using the Core Syntax.
- Expand variables. Variable expansion is performed. See Variables and CoreSyntax.
- Variable assignment. Prefix variable assignment is performed. See Variables and CoreSyntax.
- IO Redirection. IO redirection (input, output, here documents) is performed. See CommandRedirect and CoreSyntax.
- Command execution. Commands are executed. See CommandExecution.
- Exceptions raised can be handled with a try/catch block.
After the command is executed, then the process repeats. | <urn:uuid:693b762b-6579-4096-8873-426b157e93c6> | 2.78125 | 532 | Documentation | Software Dev. | 41.737462 | 422 |
Simple observational proof of the greenhouse effect of carbon dioxide
Posted by Ari Jokimäki on April 19, 2010
Recently, I showed briefly a simple observational proof that the greenhouse effect exists, using a paper by Ellingson & Wiscombe (1996). Now I will present a similar paper that deepens the proof and shows more clearly how different greenhouse gases really are greenhouse gases. I’ll highlight the carbon dioxide-related issues in their paper.
Walden et al. (1998) studied the downward longwave radiation spectrum in Antarctica. Their study covers only a single year, so it is not about how an increase in greenhouse gases affects the radiation over time. They measured the downward longwave radiation spectrum coming from the atmosphere to the surface during the year (usually every 12 hours) and then selected three measurements from clear-sky days for comparison with the results of a line-by-line radiative transfer model.
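For background, a line-by-line radiative transfer model of this kind integrates the standard (Schwarzschild) transfer equation over thousands of absorption lines, with the Planck function as the source term. The two textbook relations below are general background only, not the specific formulation Walden et al. used:

```latex
% Planck spectral radiance: the source function for gas in local thermodynamic equilibrium
B_\nu(T) = \frac{2 h \nu^{3}}{c^{2}} \, \frac{1}{e^{h\nu/(k_B T)} - 1}

% Schwarzschild equation, integrated line by line over the absorption coefficient k_\nu
\frac{dI_\nu}{ds} = k_\nu \, \rho \left[ B_\nu(T) - I_\nu \right]
```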
First they described why Antarctica is a good place for this kind of study:
Since the atmosphere is so cold and dry (<1 mm of precipitable water), the overlap of the emission spectrum of water vapor with that of other gases is greatly reduced. Therefore the spectral signatures of other important infrared emitters, namely, CO2, O3, CH4, and N2O, are quite distinct. In addition, the low atmospheric temperatures provide an extreme test case for testing models
Spectral overlapping is a consideration here because they are using a moderate resolution (about 1 cm^-1) in their spectral analysis. They went on to describe their measurements, the equipment used and its calibration. They also discussed the uncertainties in the measurements thoroughly.
They then presented the measured spectra in a similar style to that shown in Ellingson & Wiscombe (1996). They proceeded to produce their model results. The models were controlled with actual measurements of atmospheric constituents (water vapour, carbon dioxide, etc.). The model is used here because it represents our theories, which are based on numerous experiments in laboratories and in the atmosphere. They then performed the comparison between the model results and the measurements. Figure 1 shows their Figure 11, where total spectral radiance from their model is compared to measured spectral radiance.
The upper panel of Figure 1 shows the spectral radiance and the lower panel shows the difference of measured and modelled spectrum. The overall match is excellent and there’s no way you could get this match by chance so this already shows that different greenhouse gases really are producing a greenhouse effect just as our theories predict. Walden et al. didn’t stop there. Next they showed the details of how the measured spectral bands of different greenhouse gases compare with model results. The comparison of carbon dioxide is shown here in Figure 2 (which is the upper panel of their figure 13).
The match between the modelled and measured carbon dioxide spectral band is also excellent; even the minor details track each other well, except for a couple of places of slight difference. If there were no greenhouse effect from carbon dioxide, or if water vapour were masking its effect, this match would have to be accidental. I see no chance for that, so this seems to be a simple observational proof that carbon dioxide produces a greenhouse effect just as our theories predict.
Walden, V. P., S. G. Warren, and F. J. Murcray (1998), Measurements of the downward longwave radiation spectrum over the Antarctic Plateau and comparisons with a line-by-line radiative transfer model for clear skies, J. Geophys. Res., 103(D4), 3825–3846, doi:10.1029/97JD02433. [abstract] | <urn:uuid:7ca379c3-faf0-4aab-83e1-0999a130f017> | 2.78125 | 746 | Personal Blog | Science & Tech. | 45.861755 | 423 |
Effects of agriculture, urbanization, and climate on water quality in the northern Great Plains
Limnol. Oceanogr., 44(3_part_2), 1999, 739-756 | DOI: 10.4319/lo.1999.44.3_part_2.0739
ABSTRACT: The Qu'Appelle Valley drainage system provides water to a third of the population of the Canadian Great Plains, yet is plagued by poor water quality, excess plant growth, and periodic fish kills. Fossil algae (diatoms, pigments) and invertebrates (chironomids) in Pasqua Lake were analyzed by variance partitioning analysis (VPA) to determine the relative importance of climate, resource use, and urbanization as controls of aquatic community composition 1920-1994. From fossil analyses, we identified three distinct biological assemblages in Pasqua Lake. Prior to agriculture (ca. 1776-1890), the lake was naturally eutrophic with abundant cyanobacterial carotenoids (myxo-xanthophyll, aphanizophyll), eutrophic diatoms (Stephanodiscus niagarae, Aulacoseira granulata, Fragilaria capucina/bidens), and anoxia-tolerant chironomids (Chironomus). Principal components (PCA) and dissimilarity analyses demonstrated that diatom and chironomid communities did not vary significantly (P > 0.05) before European settlement. Communities changed rapidly during early land settlement (ca. 1890-1930) before forming a distinct assemblage ca. 1930-1960 characterized by elevated algal biomass (inferred as beta-carotene), nuisance cyanobacteria, eutrophic Stephanodiscus hantzschii, and low abundance of deep-water zoobenthos. Recent fossil assemblages (1977-1994) were variable and indicated water quality had not improved despite 3-fold reduction in phosphorus from sewage. Comparison of fossil community change and continuous annual records of 83 environmental variables (1890-1994) using VPA captured 71-97% of variance in fossil composition using only 10-14 significant factors. Resource use (cropland area, livestock biomass) and urbanization (nitrogen in sewage) were stronger determinants of algal and chironomid community change than were climatic factors (temperature, evaporation, river discharge). Landscape analysis of inferred changes in past algal abundance (as beta-carotene; ca. 1780-1994) indicated that urban impacts declined with distance from point sources and suggested that management strategies will vary with lake position within the catchment.
How Much Does the Ocean Weigh?
Water does weigh something: about 8.3 pounds per gallon. In research published this week, scientists from the National Oceanography Center and Newcastle University have proposed an idea that will assess the mass of the world ocean by weighing it at a single point. But there is a catch. Global sea level is currently rising at about 3 mm per year, but predictions of rise over the century vary from 30 cm to over a meter. There are two ways global sea level can increase. The water in the oceans can warm and expand, leading to the same weight of water taking up more space. In other words, water density can vary, which must be taken into account. Alternatively, more water added to the ocean from melting of land ice will increase the ocean’s weight.
The National Oceanography Centre’s Prof Christopher Hughes said: “We have shown that making accurate measurements of the changing pressure at a single point in the Pacific Ocean will indicate the mass of the world ocean. And we know where to place such an instrument — the central tropical Pacific where the deep ocean is at its quietest. This pressure gauge needs to be located away from land and oceanic variability. The principle is rather like watching your bath fill: you don’t look near the taps, where all you can see is splashing and swirling, you look at the other end where the rise is slow and steady.”
By a lucky chance, pressure measurements have been made in the Pacific Ocean since 2001, as part of the U.S. National Tsunami Hazard Mitigation Program, which focuses on detecting the small pressure fluctuations produced by the deep ocean waves that become tsunamis at the coast.
From these measurements, the team, including Dr Rory Bingham, based in the School of Civil Engineering and Geosciences at Newcastle University, has been able to show that a net 6 trillion tonnes of water enters the ocean between late March and late September each year, enough to raise sea level by 1.7 cm, and leaves the ocean in the following six months.
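Those two numbers are mutually consistent, as a quick back-of-the-envelope check shows. The ocean surface area used below (roughly 3.6 × 10^8 km^2) is a standard round figure, not a value from the article:

```python
# Back-of-the-envelope check: does 6 trillion tonnes of water give ~1.7 cm of sea level?
ocean_area_m2 = 3.6e14         # ~3.6e8 km^2, a standard round figure for the global ocean
added_mass_kg = 6.0e12 * 1000  # 6 trillion tonnes expressed in kilograms
water_density = 1000.0         # kg per cubic metre

added_volume_m3 = added_mass_kg / water_density
rise_cm = added_volume_m3 / ocean_area_m2 * 100
print(f"Implied sea level rise: {rise_cm:.1f} cm")  # ~1.7 cm
```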
Prof Hughes: “Of course, what we are most interested in is how much water accumulates in the ocean each year, and this is where we currently have a problem. While present instruments are able to measure pressure variations very accurately, they have a problem with long term trends, producing false outcomes.”
Knowing the weight would give an estimate of how much the ocean is increasing, which in turn is related to how much global warming is occurring.
“This is a challenging goal. The pressure changes are smaller than the background pressure by a factor of about 10 million, and the deep ocean is a hostile environment for mechanical components with erosion and high pressures. However, there are many other measurement systems with this kind of accuracy and there is no reason, in principle, why someone with a new idea and a fresh approach could not achieve this.
Article appearing courtesy Environmental News Network.
| <urn:uuid:7c5d0829-26bf-4315-9573-7861ac4c1901> | 3.65625 | 630 | News Article | Science & Tech. | 45.56883 | 425 |
5.8 magnitude earthquake. There was a lot of chatter generated by it, probably disproportionate to the magnitude of the event. There were a few news items that might be of interest to some of you. First, contrary to initial reports, there was some building damage in Virginia and the DC area, including the collapse of finials on the National Cathedral's main tower. Second, PhysOrg explains why the earthquake was felt over such a large area, from Georgia north to Quebec and west to Wisconsin. Scientific American has a list of the top ten East Coast earthquakes. Finally, here is an interesting bird-related note from the National Zoo:
The first warnings of the earthquake may have occurred at the National Zoo, where officials said some animals seemed to feel it coming before people did. The red ruffed lemurs began “alarm calling” a full 15 minutes before the quake hit, zoo spokeswoman Pamela Baker-Masson said. In the Great Ape House, Iris, an orangutan, let out a guttural holler 10 seconds before keepers felt the quake. The flamingos huddled together in the water seconds before people felt the rumbling. The rheas got excited. And the hooded mergansers — a kind of duck — dashed for the safety of the water. | <urn:uuid:888fe84b-8541-4cec-97f6-504bbe4d5df0> | 2.609375 | 266 | Personal Blog | Science & Tech. | 47.627928 | 426 |
The Westerlies, anti-trades, or Prevailing Westerlies, are prevailing winds in the middle latitudes between 30 and 60 degrees latitude, blowing from the high pressure area in the horse latitudes towards the poles. These prevailing winds blow from the west to the east and steer extratropical cyclones in this general manner. Tropical cyclones which cross the subtropical ridge axis into the Westerlies recurve due to the increased westerly flow. The winds are predominantly from the southwest in the Northern Hemisphere and from the northwest in the Southern Hemisphere.
The Westerlies are strongest in the winter hemisphere and at times when the pressure is lower over the poles, while they are weakest in the summer hemisphere and when pressures are higher over the poles. The Westerlies are particularly strong, especially in the southern hemisphere, where there is less land in the middle latitudes to cause the flow pattern to amplify, or become more north-south oriented, which slows the Westerlies down. The strongest westerly winds in the middle latitudes can come in the Roaring Forties, between 40 and 50 degrees latitude. The Westerlies play an important role in carrying the warm, equatorial waters and winds to the western coasts of continents, especially in the southern hemisphere because of its vast oceanic expanse.
If the Earth were a non-rotating planet, solar heating would cause winds across the mid-latitudes to blow in a poleward direction, away from the subtropical ridge. However, the Coriolis effect caused by the rotation of Earth causes winds to steer to the right of what would otherwise be expected across the Northern Hemisphere, and left of what would be expected in the Southern Hemisphere. This is why winds across the Northern Hemisphere tend to blow from the southwest, but they tend to be from the northwest in the Southern Hemisphere. When pressures are lower over the poles, the strength of the Westerlies increases, which has the effect of warming the mid-latitudes. This occurs when the Arctic oscillation is positive, and during winter low pressure near the poles is stronger than it would be during the summer. When it is negative and pressures are higher over the poles, the flow is more meridional, blowing from the direction of the pole towards the equator, which brings cold air into the mid-latitudes.
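For reference, the deflection described above is quantified by the Coriolis acceleration, whose horizontal part depends on latitude through the Coriolis parameter; these are standard results, not something specific to this article:

```latex
% Coriolis acceleration on air moving with velocity \vec{v} on a planet rotating at \vec{\Omega}
\vec{a}_{C} = -2\, \vec{\Omega} \times \vec{v}

% Horizontal deflection scales with the Coriolis parameter f, vanishing at the equator (\varphi = 0)
f = 2\, \Omega \sin\varphi , \qquad \Omega \approx 7.29 \times 10^{-5}\ \mathrm{s^{-1}}
```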
Throughout the year, the Westerlies vary in strength with the polar cyclone. As the cyclone reaches its maximum intensity in winter, the Westerlies increase in strength. As the cyclone reaches its weakest intensity in summer, the Westerlies weaken. An example of the impact of the Westerlies is when dust plumes, originating in the Gobi desert combine with pollutants and spread large distances downwind, or eastward, into North America. The Westerlies can be particularly strong, especially in the Southern Hemisphere, where there is less land in the middle latitudes to cause the progression of west to east winds to slow down. In the Southern hemisphere, because of the stormy and cloudy conditions, it is usual to refer to the Westerlies as the Roaring Forties, Furious Fifties and Shrieking Sixties according to the varying degrees of latitude.
Impact on ocean currents
Due to persistent winds from west to east on the poleward sides of the subtropical ridges located in the Atlantic and Pacific oceans, ocean currents are driven in a similar manner in both hemispheres. The currents in the Northern Hemisphere are weaker than those in the Southern Hemisphere due to the differences in strength between the Westerlies of each hemisphere. The process of western intensification causes currents on the western boundary of an ocean basin to be stronger than those on the eastern boundary of an ocean. These western ocean currents transport warm, tropical water polewards toward the polar regions. Ships crossing both oceans have taken advantage of the ocean currents for centuries.
The Antarctic Circumpolar Current (ACC), or the West Wind Drift, is an ocean current that flows from west to east around Antarctica. The ACC is the dominant circulation feature of the Southern Ocean and, at approximately 125 Sverdrups, the largest ocean current. In the northern hemisphere, the Gulf Stream, part of the North Atlantic Subtropical Gyre, has led to the development of strong cyclones of all types at the base of the Westerlies, both within the atmosphere and within the ocean. The Kuroshio (Japanese for "Black Tide") is a strong western boundary current in the western north Pacific Ocean, similar to the Gulf Stream, which has also contributed to the depth of ocean storms in that region.
Extratropical cyclones
An extratropical cyclone is a synoptic scale low pressure weather system that has neither tropical nor polar characteristics, being connected with fronts and horizontal gradients in temperature and dew point otherwise known as "baroclinic zones".
The descriptor "extratropical" refers to the fact that this type of cyclone generally occurs outside of the tropics, in the middle latitudes of the planet, where the Westerlies steer the system generally from west to east. These systems may also be described as "mid-latitude cyclones" due to their area of formation, or "post-tropical cyclones" where extratropical transition has occurred, and are often described as "depressions" or "lows" by weather forecasters and the general public. These are the everyday phenomena which along with anti-cyclones, drive the weather over much of the Earth.
Although extratropical cyclones are almost always classified as baroclinic since they form along zones of temperature and dewpoint gradient, they can sometimes become barotropic late in their life cycle when the temperature distribution around the cyclone becomes fairly uniform along the radius from the center of low pressure. An extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone, if it dwells over warm waters and develops central convection, which warms its core and causes temperature and dewpoint gradients near their centers to fade.
Interaction with tropical cyclones
When a tropical cyclone crosses the subtropical ridge axis, normally through a break in the high-pressure area caused by a system traversing the Westerlies, its general track around the high-pressure area is deflected significantly by winds moving towards the general low-pressure area to its north. When the cyclone track becomes strongly poleward with an easterly component, the cyclone has begun recurvature, entering the Westerlies. A typhoon moving through the Pacific Ocean towards Asia, for example, will recurve offshore of Japan to the north, and then to the northeast, if the typhoon encounters southwesterly winds (blowing northeastward) around a low-pressure system passing over China or Siberia. Many tropical cyclones are eventually forced toward the northeast by extratropical cyclones in this manner, which move from west to east to the north of the subtropical ridge. An example of a tropical cyclone in recurvature was Typhoon Ioke in 2006, which took a similar trajectory.
- Robert Fitzroy (1863). The weather book: a manual of practical meteorology. Longman, Green, Longman, Roberts, & Green. p. 63.
- Glossary of Meteorology (2009). Westerlies. American Meteorological Society. Retrieved on 2009-04-15.
- Nathan Gasser (2000-08-10). Solar Heating and Coriolis Forces. University of Tennessee at Knoxville. Retrieved on 2009-05-31.
- Ralph Stockman Tarr and Frank Morton McMurry (1909). Advanced geography. W.W. Shannon, State Printing, pp. 246. Retrieved on 2009-04-15.
- National Snow and Ice Data Center (2009). The Arctic Oscillation. Arctic Climatology and Meteorology. Retrieved on 2009-04-11.
- Halldór Björnsson (2005). Global circulation. Veðurstofu Íslands. Retrieved on 2008-06-15.
- James K. B. Bishop, Russ E. Davis, and Jeffrey T. Sherman (2002). "Robotic Observations of Dust Storm Enhancement of Carbon Biomass in the North Pacific". Science 298. pp. 817–821. Retrieved 2009-06-20.
- Walker, Stuart (1998). The sailor's wind. W. W. Norton & Company. p. 91. ISBN 0-393-04555-2, 9780393045550.
- Wunsch, Carl (November 8, 2002). "What Is the Thermohaline Circulation?". Science 298 (5596): 1179–1181. doi:10.1126/science.1079329. PMID 12424356. (see also Rahmstorf.)
- National Environmental Satellite, Data, and Information Service (2009). Investigating the Gulf Stream. North Carolina State University. Retrieved on 2009-05-06.
- Ryan Smith, Melicie Desflots, Sean White, Arthur J. Mariano, Edward H. Ryan (2005). The Antarctic CP Current. The Cooperative Institute for Marine and Atmospheric Studies. Retrieved on 2009-04-11.
- S. Businger, T. M. Graziano, M. L. Kaplan, and R. A. Rozumalski (2004). Cold-air cyclogenesis along the Gulf-Stream front: investigation of diabatic impacts on cyclone development, frontal structure, and track. Meteorology and Atmospheric Physics, pp. 65-90. Retrieved on 2008-09-21.
- David M. Roth (2000). P 1.43 A FIFTY YEAR HISTORY OF SUBTROPICAL CYCLONES. American Meteorological Society. Retrieved on 2008-09-21.
- D. K. Savidge and J. M. Bane (1999). Cyclogenesis in the deep ocean beneath the Gulf Stream. 1. Description. Journal of geophysical research, pp. 18111-18126. Retrieved on 2008-09-21.
- Dr. DeCaria (2007-05-29). "ESCI 241 – Meteorology; Lesson 16 – Extratropical Cyclones". Department of Earth Sciences, Millersville University, Millersville, Pennsylvania. Archived from the original on 2007-05-29. Retrieved 2009-05-31.
- Robert Hart and Jenni Evans (2003). "Synoptic Composites of the Extratropical Transition Lifecycle of North Atlantic TCs as Defined Within Cyclone Phase Space" (PDF). American Meteorological Society. Retrieved 2006-10-03.
- Ryan N. Maue (2009). CHAPTER 3: CYCLONE PARADIGMS AND EXTRATROPICAL TRANSITION CONCEPTUALIZATIONS. Florida State University. Retrieved on 2008-06-15.
- Atlantic Oceanographic and Meteorological Laboratory, Hurricane Research Division (2004). "Frequently Asked Questions: What is an extra-tropical cyclone?". NOAA. Retrieved 2006-07-25.
- Joint Typhoon Warning Center (2009). Section 2: Tropical Cyclone Motion Terminology. United States Navy. Retrieved on 2007-04-10.
- Powell, Jeff, et al. (May 2007). "Hurricane Ioke: 20–27 August 2006". 2006 Tropical Cyclones Central North Pacific. Central Pacific Hurricane Center. Retrieved 2007-06-09. | <urn:uuid:93104ccf-3d6e-43ec-af16-17dcd0d47b03> | 3.96875 | 2,388 | Knowledge Article | Science & Tech. | 49.147338 | 427 |
Initial surveys began this week and are focusing on the collection of water samples for eDNA analysis. Electroshocking and netting survey efforts will also be conducted starting next week. The eDNA surveys will occur in the Sandusky River and Bay, and the Maumee River and Bay. Samples will be collected in the areas where positive eDNA samples were collected in 2011 and at additional locations believed to provide suitable bighead and silver carp habitat. MDNR Research Program Manager Tammy Newcomb said, "Our coordinated sampling efforts with partner agencies are very important in order to revisit areas where positive samples were collected last year, and to expand sampling to areas that may be reproductively favorable for bighead or silver carp. These are the areas where we can be most effective in preventing expansion of these species should they be present."
MDNR and ODNR requested assistance from the USFWS to develop and implement this assessment effort. The USFWS is contributing significant technical and logistical expertise, as well as personnel, survey equipment and vessels. The US Army Corps of Engineers (USACE) will analyze the collected eDNA water samples.
Access the joint release with additional details and links to information including videos and images (click here). [#GLakes]
32 Years of Environmental Reporting for serious Environmental Professionals | <urn:uuid:1da944ff-f613-4437-88c4-f052b1514ec0> | 2.5625 | 263 | News (Org.) | Science & Tech. | 24.497189 | 428 |
Shortly after the Deepwater Horizon disaster, mysterious honeycomb material was found floating in the Gulf of Mexico and along coastal beaches. Using state-of-the-art chemical forensics and a bit of old-fashioned detective work, a research team led by scientists at Woods Hole Oceanographic Institution (WHOI) confirmed that the flotsam were pieces of material used to maintain buoyancy of the pipe bringing up oil from the seafloor.
The researchers also affirmed that tracking debris from damaged offshore oil rigs could help forecast coastal pollution impacts in future oil spills and guide emergency response efforts—much the way the Coast Guard has studied the speed and direction of various floating debris to guide search and rescue missions. The findings were published Jan. 19 in Environmental Research Letters.
On May 5, 2010, 15 days after the Deepwater Horizon explosion, oceanographer William Graham and marine technicians from the Dauphin Island Sea Lab were working from a boat about 32 miles south of Dauphin Island, Ala., when they saw a 6-mile-long, east-west line containing more than 50 pieces of white material interspersed with sargassum weed. The porous material was uniformly embedded with black spheres about a centimeter in diameter. No oil slick was in sight, but there was a halo of oil sheen around the honeycomb clumps.
Two days later, the researchers also collected similar samples about 25 miles south of Dauphin Island. Nobody knew what the material was, with some hypothesizing at first that it could be coral or other substance made by marine plants or animals. Graham sent samples to WHOI chemist Chris Reddy, whose lab confirmed that the material was not biological. But the material’s source remained unconfirmed.
In January 2011, Reddy and WHOI researcher Catherine Carmichael, lead author of the new study, collected a piece of the same unknown material off Elmer’s Beach, Grand Isle, La. In April 2011, they found several large pieces, ranging from 3 to 10 feet, of the honeycomb debris on the Chandeleur Islands off Louisiana.
Oil on all these samples was analyzed at WHOI using comprehensive two-dimensional gas chromatography. The technique identifies the thousands of individual chemical compounds that comprise different oils from different reservoirs. The chemistry of the oil on the debris matched that of oil sampled directly from the broken pipe from the Macondo well above the Deepwater Horizon rig.
In addition, one piece of debris from the Chandeleur Islands retained a weathered red sticker that read “Cuming” with the numbers 75-1059 below it. Reddy found a company called Cuming Corporation in Avon, Mass., which manufactures syntactic foam flotation equipment for the oil and gas industry. He e-mailed photos of the specimen to the company, and within hours, a Cuming engineer confirmed from the serial number that the foam came from a buoyancy module from Deepwater Horizon.
“We realized that the foam and the oil were released into the environment at the same time,” Reddy said. “So we had a unique tracer that was independent of the oil itself to chronicle how oil and debris drifted out from the spill site.”
The scientists overlaid the locations where they found honeycomb debris on May 5 and 7 with daily forecasts produced by the National Oceanic and Atmospheric Administration (NOAA) of the trajectory of the spreading oil slick. NOAA used a model that incorporated currents and wind speeds, along with data from planes and satellites. On both days, the debris was about 6.2 miles ahead of the spreading slick.
The explanation, the scientists said, is the principle of leeway, a measure of how fast wind or waves push materials. The leeway for fresh oil is 3 to 3.3 percent, but the scientists suspected that “the protruding profile of the buoyant material” acted like a sail, allowing wind to drive it faster than and ahead of the floating oil.
In this case, the flotsam served as a harbinger for the oncoming slick, but because different materials can have different leeways, oil spill models may not accurately forecast where oiled debris will head. “Even a small deviation in leeway can, over time, result in significant differences in surface tracks because of typical wind fields,” the scientists wrote.
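To illustrate the scale of the effect, the sketch below computes how far apart two floating targets end up when the same wind pushes them at slightly different leeway fractions. The wind speed, the debris leeway value and the duration are invented numbers for illustration only; they are not taken from the study.

```python
# Hypothetical leeway comparison: a fresh-oil slick versus protruding foam debris in the same wind.
wind_speed_ms = 7.0     # assumed steady wind, metres per second
leeway_oil = 0.03       # ~3% of wind speed, the fresh-oil figure quoted above
leeway_debris = 0.04    # assumed slightly higher leeway for material that sticks up like a sail
days = 15               # assumed drift duration

seconds = days * 24 * 3600
separation_km = (leeway_debris - leeway_oil) * wind_speed_ms * seconds / 1000
print(f"Debris leads the slick by about {separation_km:.0f} km after {days} days")
```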
The Coast Guard has a long history of calculating the leeway of various materials, from life jackets to bodies of various sizes and weights, to improve forecasts of where the materials would drift if a ship sank or a plane crashed into the sea. But calculating leeways has not been standard practice in oil spills.
“We never had solid data to make the case until this study,” said Merv Fingas, who tracked oil spills for more than 38 years for Environment Canada, which is equivalent to the U.S. Environmental Protection Agency.
“These results,” the study’s authors wrote, “provide insights into the fate of debris fields deriving from damaged marine materials and should be incorporated into emergency response efforts and forecasting of coastal impacts during future offshore oil spills.”
This research was funded by the National Science Foundation.
The Woods Hole Oceanographic Institution is a private, independent organization in Falmouth, Mass., dedicated to marine research, engineering, and higher education. Established in 1930 on a recommendation from the National Academy of Sciences, its primary mission is to understand the ocean and its interaction with the Earth as a whole, and to communicate a basic understanding of the ocean's role in the changing global environment. | <urn:uuid:159fe05d-2ff2-40d7-b7fd-3e6b8580676c> | 3.171875 | 1,158 | Knowledge Article | Science & Tech. | 39.867616 | 429 |
The Active Galactic Nucleus (AGN) of Seyfert galaxy M77 (NGC 1068), about 60 million light years from Earth, in the X-ray light, as photographed by Chandra X-ray Observatory.
A composite Chandra X-ray (blue/green) and Hubble optical (red) image of M77 (NGC 1068) shows hot gas blowing away from a central supermassive object at speeds averaging about 1 million miles per hour. The elongated shape of the gas cloud is thought to be due to the funneling effect of a torus, or doughnut-shaped cloud, of cool gas and dust that surrounds the central object, which many astronomers think is a black hole. The X-rays are scattered and reflected X-rays that are probably coming from a hidden disk of hot gas formed as matter swirls very near the black hole. Regions of intense star formation in the inner spiral arms of the galaxy are highlighted by the optical emission. This image extends over a field 36 arcsec on a side.
This three-color high energy X-ray image (red =1.3-3 keV, green = 3-6 keV, blue = 6-8 keV) of NGC 1068 shows gas rushing away from the nucleus. The brightest point-like source may be the inner wall of the torus that is reflecting X-rays from the hidden nucleus. Scale: Image is 30 arcsec per side.
This three-color low energy X-ray image of M77 (NGC 1068) (red = 0.4-0.6 keV, green = 0.6-0.8 keV, blue = 0.8-1.3 keV) shows gas rushing away from the nucleus (bright white spot). The range of colors from blue to red corresponds to high through low ionization of the atoms in the wind. Scale: Image is 30 arcsec per side.
This optical image of the active galaxy NGC 1068, taken by Hubble's WFPC2, gives a detailed view of the spiral arms in the inner parts of the galaxy. Scale: Image is 30 arcsec per side.
Credit: X-ray: NASA/CXC/MIT/P. Ogle et.al.; Optical: NASA/STScI/A. Capetti et.al.
Last Modification: July 12, 2003 | <urn:uuid:ca6a66be-8a64-433f-8fae-bbae7e01bdef> | 3.59375 | 499 | Knowledge Article | Science & Tech. | 79.921157 | 430 |
"Basically what has happened by introducing the toads is it has created really strong evolutionary pressure both on the toads themselves and on animals that interact on the toads," Shine said.
For example, Shine and his colleague Benjamin Phillips previously showed that two native Australian snake species have evolved smaller heads and are no longer able to eat the toads, which carry a lethal toxin.
Other studies have shown that some would-be toad predators have altered their diets to exclude toads, while others have evolved resistance to the cane toad toxin, Shine said.
"These studies tell us a lot about the evolutionary process," said Jonathan Losos, an evolutionary biologist at Washington University in St. Louis, Missouri.
"Invading species are a huge problem, and cane toads are a classic example of that," he added. "But they also represent an inadvertent evolutionary experiment, the sort of experiment you couldn't [normally] conduct."
Rules and regulations prohibit scientists from purposely confronting native species in the wild with a non-native competitor or predator to see how natural selection works, he explained.
The evolutionary processes spawned by the cane toad invasion have occurred in a span of just 70 years. This adds to evidence from the past two decades that populations can adapt quickly when selection pressure is strong.
"We're taught evolution occurs over these very, very long time frames. But in systems like these, it's incredibly fast," Shine, the study co-author, said.
According to Losos, the unusual aspect of the toad leg length adaptation is the mechanism that drives it.
In most instances rapid evolution occurs when an organism enters a new environment and some variation that was previously irrelevant becomes favored. That variation is repeatedly selected until it becomes more common, he explained.
In the case of the cane toads, longer legs make the toads faster, and the fastest toads are always at the invasion front. The lead toads mate, passing their long legs to their offspring.
As long as there is no disadvantage to being the first into a new territory, this process should allow the toads to "evolve faster and faster rates of movement," Shine said.
Cane Toad Management
According to Shine, as scientists learn more about cane toad biology, they can devise strategies for eradicating local populations, such as changing the character of a breeding pond or staking out toad migration routes.
But the toads are likely to be permanent fixtures in Australia and will continue their spread, he said.
While Shine is optimistic that ecosystems will adapt, "there may be some parts of native systems that don't and, in time, will go extinct," he said.
"One message from the work," he added, "is to try to stop invasive species, you probably ought to start as soon as you get a chance. The longer you let it linger, the more formidable the adversary will be."
| <urn:uuid:99526aee-9313-40fd-a7bb-a1193129d6bf> | 3.921875 | 641 | News Article | Science & Tech. | 37.832566 | 431 |
Step by step data modeling with Cassandra:
When working with a relational database, the first thing you do is model your data. A well-defined database model allows you to query its data through SQL queries. Unfortunately, a fully normalized model degrades your performance when joins need to be executed on tables that contain millions of rows. To improve performance, Cassandra advocates a query-first approach, where you first identify your queries and then model your data accordingly. In the next couple of paragraphs, we will gradually explore the Cassandra data structures by developing the mutation data model. Remember, what we are trying to achieve is to be able to quickly calculate mutation frequencies!
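The original post builds the model step by step. As a rough sketch of what a query-first design for mutation frequencies might look like, the table and queries below are hypothetical (the keyspace, column names and example values are invented for illustration) and use the DataStax Python driver; the post itself may model the data differently.

```python
from cassandra.cluster import Cluster

# Query-first design: partition by gene so that "counts for a gene" is a single-partition read.
# Assumes a Cassandra node on localhost and an existing 'genomics' keyspace.
session = Cluster(["127.0.0.1"]).connect("genomics")
session.execute("""
    CREATE TABLE IF NOT EXISTS mutation_counts (
        gene text,
        mutation text,
        sample_count counter,
        PRIMARY KEY (gene, mutation)
    )
""")

# Record one observation of a mutation, then read back all counts for the gene.
session.execute(
    "UPDATE mutation_counts SET sample_count = sample_count + 1 "
    "WHERE gene = %s AND mutation = %s",
    ("BRCA1", "c.68_69delAG"))
for row in session.execute(
        "SELECT mutation, sample_count FROM mutation_counts WHERE gene = %s", ("BRCA1",)):
    print(row.mutation, row.sample_count)
```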
Original title and link: Cassandra as a Mutation Datastore (NoSQL database©myNoSQL) | <urn:uuid:f89668d8-7d97-44b7-9d76-70142645bc74> | 2.59375 | 153 | Truncated | Software Dev. | 23.705714 | 432 |
Mission Type: Flyby
Launch Vehicle: Titan IIIE-Centaur (TC-7 / Titan no. 23E-7 / Centaur D-1T)
Launch Site: Cape Canaveral, USA, Launch Complex 41
NASA Center: Jet Propulsion Laboratory
Spacecraft Mass: 2,080 kg (822 kg mission module)
Spacecraft Instruments: 1) imaging system; 2) ultraviolet spectrometer; 3) infrared spectrometer; 4) planetary radio astronomy experiment; 5) photopolarimeter; 6) magnetometers; 7) plasma particles experiment; 8) low-energy charged-particles experiment; 9) plasma waves experiment and 10) cosmic-ray telescope
Spacecraft Dimensions: Decahedral bus, 47 cm in height and 1.78 m across from flat to flat
Spacecraft Power: 3 plutonium oxide radioisotope thermoelectric generators (RTGs)
Maximum Power: 470 W of 30-volt DC power at launch, dropping to about 287 W at the beginning of 2008, and continuing to decrease
Antenna Diameter: 3.66 m
X-Band Data Rate: 115.2 kbits/sec at Jupiter, less at more distant locations (first spacecraft to use X-band as the primary telemetry link frequency)
Total Cost: Through the end of the Neptune phase of the Voyager project, a total of $875 million had been expended for the construction, launch, and operations of both Voyager spacecraft. An additional $30 million was allocated for the first two years of VIM.
Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi
National Space Science Data Center, http://nssdc.gsfc.nasa.gov/
Solar System Log by Andrew Wilson, published 1987 by Jane's Publishing Co. Ltd.
Voyager Project Homepage, http://voyager.jpl.nasa.gov
An alignment of the outer planets that occurs only once in 176 years prompted NASA to plan a grand tour of the outer planets, consisting of dual launches to Jupiter, Saturn, and Pluto in 1976-77 and dual launches to Jupiter, Uranus, and Neptune in 1979. The original scheme was canceled for budgetary reasons, but was replaced by Voyager 1 and 2, which accomplished similar goals at significantly lower cost.
The two Voyager spacecraft were designed to explore Jupiter and Saturn in greater detail than the two Pioneers (Pioneers 10 and 11) that preceded them had been able to do. Each Voyager was equipped with slow-scan color TV to take live television images from the planets, and each carried an extensive suite of instruments to record magnetic, atmospheric, lunar, and other data about the planets. The original design of the spacecraft was based on that of the older Mariners. Power was provided by three plutonium oxide radioisotope thermoelectric generators (RTGs) mounted at the end of a boom.
Although launched about two weeks before Voyager 1, Voyager 2 exited the asteroid belt after its twin and followed it to Jupiter and Saturn. The primary radio receiver failed on 5 April 1978, placing the mission's fate on the backup unit, which has been used ever since. A fault in this backup receiver severely limits its bandwidth, but the mission has been a major success despite this obstacle. All of the experiments on Voyager 2 have produced useful data.
Voyager 2 began transmitting images of Jupiter on 24 April 1979 for time-lapse movies of atmospheric circulation. They showed that the planet's appearance had changed in the four months since Voyager 1's visit. The Great Red Spot had become more uniform, for example.
The spacecraft relayed spectacular photos of the entire Jovian system, including its moons Amalthea, Io, Callisto, Ganymede, and Europa, all of which had also been imaged by Voyager 1, making comparisons possible. Voyager 2's closest encounter with Jupiter was at 22:29 UT on 9 July 1979 at a range of 645,000 km.
Voyager 1's discovery of active volcanoes on Io prompted a 10-hour volcano watch for Voyager 2. Though the second spacecraft approached no closer than a million kilometers to Io, it was clear that the moon's surface had changed and that six of the volcanic plumes observed earlier were still active.
Voyager 2 imaged Europa at a distance of 206,000 km, resolving the streaks seen by Voyager 1 into a collection of cracks in a thick covering of ice. No variety in elevation was observed, prompting one scientist to say that Europa was "as smooth as a billiard ball." An image of Callisto, studied in detail months later, revealed a 14th satellite, now called Adrastea. It is only 30 to 40 km in diameter and orbits close to Jupiter's rings. As Voyager 2 left Jupiter, it took an image that revealed a faint third component to the planet's rings. It is thought that the moons Amalthea and Thebe may contribute some of the material that constitutes the ring.
Following a midcourse correction two hours after its closest approach to Jupiter, Voyager 2 sped to Saturn. Its encounter with the sixth planet began on 22 August 1981, two years after leaving the Jovian system, with imaging of the moon Iapetus.
Once again, Voyager 2 repeated the photographic mission of its predecessor, although it flew 23,000 km closer to Saturn. The closest encounter was at 01:21 UT on 26 August 1981 at a range of 101,000 km. The spacecraft provided more detailed images of the ring spokes and kinks, as well as the F-ring and its shepherding moons. Voyager 2's data suggested that Saturn's A-ring was perhaps only 300 m thick. It also photographed the moons Hyperion, Enceladus, Tethys, and Phoebe.
Using the spacecraft's photopolarimeter (the instrument that had failed on Voyager 1), scientists observed a star called Delta Scorpii through Saturn's rings and measured the flickering level of light over the course of 2 hours, 20 minutes. This provided 100-m resolution, which was 10 times better than was possible with the cameras, and many more ringlets were discovered.
After Voyager 2 fulfilled its primary mission goals with its flybys of Jupiter and Saturn, mission planners set the spacecraft on a 4.5-year journey to Uranus, during which it covered 33 AU (about 5 billion km). The geometry of the Uranus encounter was designed to enable the spacecraft to use a gravity assist to help it reach Neptune. Voyager 2 had only 5.5 hours of close study during its flyby, the first (and so far, only) human-made spacecraft to visit the planet Uranus.
Long-range observations of Uranus began on 4 November 1985. At that distance, the spacecraft's radio signals took approximately 2.5 hours to reach Earth. Light conditions were 400 times less than terrestrial conditions. The closest approach took place at 17:59 UT on 24 January 1986 at a range of 71,000 km.
The spacecraft discovered 10 new moons, two new rings, and a magnetic field (stronger than that of Saturn) tilted at 55 degrees off-axis and off-center, with a magnetic tail twisted into a helix that stretches 10 million km in the direction opposite that of the sun.
Uranus, itself, displayed little detail, but evidence was found of a boiling ocean of water some 800 km below the top cloud surface. The atmosphere was found to be 85 percent hydrogen and 15 percent helium (26 percent helium by mass). Strangely, the average temperature of 60 K (-351.4 degrees Fahrenheit, -213 degrees Celsius) was found to be the same at the sun-facing south pole and at the equator. Wind speeds were as high as 724 km per hour.
Voyager 2 returned spectacular photos of Miranda, Oberon, Ariel, Umbriel, and Titania, the five larger moons of Uranus. In a departure from Greek mythology, four of Uranus' moons are named for Shakespearean characters and one, Umbriel, is named for a sprite in a poem by Alexander Pope. Miranda may be the strangest of these worlds. It is believed to have fragmented at least a dozen times and reassembled in its current confused state.
Following the Uranus encounter, the spacecraft performed a single midcourse correction on 14 February 1986 to set it on a precise course to Neptune. Voyager 2's encounter with Neptune capped a 7-billion-km journey when on 25 August 1989, at 03:56 UT, it flew about 4,950 km over the cloud tops of the giant planet, closer than its flybys of the three previous planets. As with Uranus, it was the first (and so far, only) human-made object to fly by the planet. Its 10 instruments were still in working order at the time.
During the encounter, the spacecraft discovered five new moons and four new rings. The planet itself was found to be more active than previously believed, with winds of 1100 km per hour. Hydrogen was found to be the most common atmospheric element, although the abundant methane gives the planet its blue appearance. Voyager data on Triton, Neptune's largest moon, revealed the coldest known planetary body in the solar system and a nitrogen ice volcano on its surface.
The spacecraft's flyby of Neptune set it on a course below the ecliptic plane that will ultimately take it out of the solar system. After Neptune, NASA formally renamed the entire project (including both Voyager spacecraft) the Voyager Interstellar Mission (VIM).
Approximately 56 million km past the Neptune encounter, Voyager 2's instruments were put into low-power mode to conserve energy. In November 1998, twenty-one years after launch, nonessential instruments were permanently turned off. Six instruments are still operating. Data from at least some of the instruments should be received until at least 2025. Sometime after that date, power levels onboard the spacecraft will be too low to operate even one of its instruments.
As of March 2010, Voyager 2 was about 92 AU (13.7 billion km) from the sun, increasing its distance at a speed of about 3.3 AU (about 494 million km) per year. | <urn:uuid:8dc77e83-9fed-44f4-9ea1-785ba6eb8250> | 2.8125 | 2,129 | Knowledge Article | Science & Tech. | 54.372877 | 433 |
The current state of arctic sea ice (see graph below) sends a chill down my spine.
So what it says is that the ice is melting furiously, and looks like it’s not yet slowing down even though the days have started to draw in.
However, any scientist will tell you that no single data point can be used as evidence of global warming, there are simply too many fluctuations for anything to be concluded over anything but the longest timescales. We cannot simply look at the mean temperature for a hot year and say, there you go, global warming!
Now, the issue is, there are well-known cycles over pretty much all timescales – this pretty much undermines all serious attempts at prediction.
So, what to do? Well all is not lost; there are still some clever little leading indicators we can look at to give us that sobering wake up call.
Firstly, we know CO2 concentration is up, no doubt or argument, this can be seen in the famous Hawaii data above, complete with the seasonal ‘breathing’ by global plant-life. The argument is about whether the greenhouse models that say this will result in warming will turn out right. I honestly don’t know, but I wouldn’t even have to wonder if the CO2 levels weren’t going up, would I?
#2: A Record Breaking Rate of Record Breaking
Secondly, rather looking at averages or ‘new records’, we can look at the frequency of records. So rather than saying, “we just had the hottest summer ever in some parts of the US, there’s the proof” we can look at how often records are set all over the world – hottest, coldest, wettest, dryest and so on. This approach creates a filter; if it shows there are more records being broken on the hot side than the cold side, could this be an indicator? I hope not, because there are.
Again, it could be part of a long-term cycle that could bottom out any time now. But on the other hand, if it was going the other way, I wouldn’t have to hope, would I?
#3: Sea Ice
Now the sea ice. The sea ice is another proxy for temperature. The reason it’s interesting to climatologists is because it is a natural way to ‘sum-up’ the total warmth for the year and longer; if ice is reducing over several years, it means that there has been a net surplus of warmth.
Today we are seeing a new record set for minimal northern sea ice. And not only is there less area of ice, but it is thinner than previously realized and some models now suggest we could be ice-free in late summer in my lifetime.
Now if that does not strike you cold, then I didn’t make myself clear. This is not some political posturing, not some ‘big-business’ spin, nor greeny fear mongering. It’s a cold clean fact you can interpret for yourself, and it could not be clearer.
So is it time to panic?
Well it can still be argued the melting is part of a cycle, it could of course reverse and hey, no biggy. After all, what does it matter how much ice there is?
Well, yet again, I hate to rely on the ‘hope’ that it’s a cycle. Because if it continues, the next effect will be felt much closer to home…
Sea level is the ultimate proxy for warming. Indeed, sea level change can be so serious, maybe it is the problem rather than the symptom. If the ice on Greenland and Antarctica melt, the rise in sea level would displace hundreds of millions of people and change the landscape so dramatically it’s a fair bet wars and famine will follow. Now that is serious.
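To put a rough number on that, a common back-of-the-envelope estimate divides an ice sheet's volume by the ocean's surface area. The figures below are commonly quoted round numbers for Greenland alone, not values from this post:

```python
# Rough estimate of sea level rise if the Greenland ice sheet melted entirely.
ice_volume_km3 = 2.9e6     # commonly quoted volume of the Greenland ice sheet
ice_density = 917.0        # kg/m^3
water_density = 1000.0     # kg/m^3
ocean_area_km2 = 3.6e8     # global ocean surface area, round figure

rise_km = ice_volume_km3 * (ice_density / water_density) / ocean_area_km2
print(f"~{rise_km * 1000:.1f} m of sea level rise")  # on the order of 7 m
```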
So have we seen sea level rise? Well, yes. Here’s the plot:
Now, it looks pretty conclusive but hold the boat. Some say it’s proof of warming but not everyone agrees. It’s true it could again be a cycle. Also, the sea level rise is fairly gradual; what people are really arguing about is whether we should expect it to speed up. If temperature goes up a few degrees it could go up 5 or 10 times faster. The speed is the issue. Humanity can cope if the level goes up slowly enough, sure, countries like Tuvalu will be in big trouble either way, but countries like Bangladesh and cities like New York and London will only be in real trouble if the rate increases.
Canaries were taken into mines in order to detect poisonous gases; the idea being they would suffer the gas faster than the people, and if the canary dropped, it was time to vacate. Do we have systems that are hypersensitive to climate change?
Yes! There are many delicately balanced ecosystems that can be pushed over a tipping point with the lightest of touches. Is there an increase in the rate of species loss, or an increase in desertification? Yes!
We can also look at how far north certain plants can survive, how high up mountains trees can live or how early the first buds of spring arrive.
Again, these indicators fail to give solace. Everywhere we look we see changes, bleached coral, absent butterflies, retreating glaciers.
The conservative approach is to ascribe these changes to the usual cut and thrust of life on earth; some take solace from the fact that humankind has survived because we are the supreme adapters and that the loss of species is exactly how the stronger ones are selected.
Yes, we are great at adapting; however, to kill any complacency that may create, consider the following: for humans just ‘surviving’ is not the goal, that’s easy, we also need to minimize suffering and death, a much tougher aim. We’ve also just recently reduced our adaptability significantly by creating ‘countries’. Countries may seem innocuous, but they come with borders – and mean we can no longer migrate with the climate. Trade across borders also needs to be of roughly the same value in both directions. While some countries will actually see productivity benefits from global warming, most will not, and without the freedom to move, famine will result. Trade imbalances mean inequality will become extreme. The poorest will suffer the most.
So for now changes are happening, and advances in agricultural technology are easily coping; however, because ecosystems are often a fine balance between strong opposing forces, changes may be fast should one of the ropes snap.
Looking at the long history of the earth we have seen much hotter and much colder scenes. We have seen much higher and much lower sea levels. We are being wishful to assume we will stay as we have for the last 10,000 years. It may last, or it may change. Natural cycles could ruin us. And mankind is probably fraying the ropes by messing with CO2 levels.
Can we predict if we are about to fall off of our stable plateau? No, probably not. But is it possible? Heck yeah.
If you liked this, you may like these earlier posts on the subject of global warming: | <urn:uuid:9a7cd3b6-4f62-4026-a172-8d9ce40f079e> | 2.890625 | 1,518 | Personal Blog | Science & Tech. | 59.33903 | 434 |
I have seen many tutorials on ASP.NET, but most of them start with coding and writing your first ASP.NET program. I have written this tutorial to explain why there is a need for ASP.NET when classic ASP works fine, what the underlying technologies behind ASP.NET are, and what programming model ASP.NET provides to programmers. Now let us get started.
ASP.NET is the new offering for Web developers from Microsoft. It is not simply the next generation of ASP; in fact, it is a completely re-engineered and enhanced technology that offers much, much more than traditional ASP and can increase productivity significantly.
Because it has evolved from ASP, ASP.NET looks very similar to its predecessor, but only at first sight. Some items look very familiar and remind us of ASP. But concepts like Web Forms, Web Services, or Server Controls give ASP.NET the power to build real Web applications.
Looking Back : Active Server Pages (ASP)
Microsoft Active Server Pages (ASP) is a server-side scripting technology that Microsoft created to ease the development of interactive Web applications. With ASP you can use client-side scripts as well as server-side scripts. Maybe you want to validate user input or access a database; ASP provides solutions for transaction processing and managing session state. ASP has been one of the most successful technologies used in web development.
Problems with Traditional ASP
There are many problems with ASP if you think of the needs of today's powerful Web applications.
- Interpreted and Loosely-Typed Code
ASP scripting code is usually written in languages such as JScript or VBScript. The script-execution engine that Active Server Pages relies on interprets code line by line, every time the page is called. In addition, although variables are supported, they are all loosely typed as variants and bound to particular types only when the code is run. Both these factors impede performance, and late binding of types makes it harder to catch errors when you are writing code.
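Classic ASP scripts were typically VBScript, but the late-binding problem described above appears in any loosely typed, interpreted language. The Python sketch below is only an analogy, not ASP code:

```python
# Analogy: with loose typing, mistakes surface only when the offending line actually runs.
def order_total(quantity, unit_price):
    return quantity * unit_price       # no declared types, nothing is checked up front

print(order_total(3, 9.99))            # works as intended: 29.97
print(order_total(3, "9.99"))          # no error either -- silently returns "9.999.999.99"
print(order_total("three", None))      # only now does the interpreter raise a TypeError
```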
- Mixes layout (HTML) and logic (scripting code)
ASP files frequently combine script code with HTML. This results in ASP scripts that are lengthy, difficult to read, and switch frequently between code and HTML. The interspersion of HTML with ASP code is particularly problematic for larger web applications, where content must be kept separate from business logic.
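The maintainability cost of interleaving markup and logic can be sketched without ASP syntax. In the Python illustration below, the first function mixes layout decisions into the logic, while the second keeps the markup in a separate template; this shows the principle only and is not how ASP or ASP.NET pages are written.

```python
from string import Template

def report_mixed(items):
    # Layout and logic interleaved: restyling the page means editing the loop itself.
    html = "<ul>"
    for name, price in items:
        html += f"<li>{name}: {price}</li>"
    return html + "</ul>"

ROW = Template("<li>$name: $price</li>")  # layout lives in one place

def report_separated(items):
    # Logic only decides what data to show; the template alone decides how it looks.
    return "<ul>" + "".join(ROW.substitute(name=n, price=p) for n, p in items) + "</ul>"

items = [("widget", 120), ("bolt", 2)]
print(report_mixed(items))
print(report_separated(items))
```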
- Limited Development and Debugging Tools
Microsoft Visual InterDev, Macromedia Visual UltraDev, and other tools have attempted to increase the productivity of ASP programmers by providing graphical development environments. However, these tools never achieved the ease of use or the level of acceptance achieved by Microsoft Windows application development tools, such as Visual Basic or Microsoft Access. ASP developers still rely heavily or exclusively on Notepad.
Debugging is an unavoidable part of any software development process, and the debugging tools for ASP have been minimal. Most ASP programmers resort to embedding temporary Response.Write statements in their code to trace the progress of its execution.
- No real state management
Session state is only maintained if the client browser supports cookies. Session state information can only be held by using the ASP Session object. And you have to implement additional code if you, for example, want to identify a user.
- Update files only when server is down
If your Web application makes use of components, copying new files to your application should only be done when the Web server is stopped. Otherwise it is like pulling the rug from under your application's feet, because the components may be in use (and locked) and must be registered.
- Obscure Configuration Settings
The configuration information for an ASP web application (such as session state and server timeouts) is stored in the IIS metabase. Because the metabase is stored in a proprietary format, it can only be modified on the server machine with utilities such as the Internet Service Manager. With limited support for programmatically manipulating or extracting these settings, it is often an arduous task to port an ASP application from one server to another.
ASP.NET was developed in direct response to the problems that developers had with classic ASP. Since ASP is in such wide use, however, Microsoft ensured that ASP scripts execute without modification on a machine with the .NET Framework (the ASP engine, ASP.DLL, is not modified when installing the .NET Framework). Thus, IIS can house both ASP and ASP.NET scripts on the same machine.
Advantages of ASP.NET
- Separation of Code from HTML
To make a clean sweep, with ASP.NET you have the ability to completely separate layout and business logic. This makes it much easier for teams of programmers and designers to collaborate efficiently.
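As an illustration of this separation, here is a minimal sketch of a page split into a markup file and a code-behind class. The file names, the control ID, and the greeting text are hypothetical and serve only to show the pattern.

Greeting.aspx:
<%@ Page Language="C#" Inherits="GreetingPage" Src="Greeting.aspx.cs" %>
<html><body>
<form runat="server">
    <asp:Label id="GreetingLabel" runat="server" />
</form>
</body></html>

Greeting.aspx.cs:
using System;
using System.Web.UI;
using System.Web.UI.WebControls;
public class GreetingPage : Page
{
    // The field name matches the id declared in the markup.
    protected Label GreetingLabel;
    private void Page_Load(object sender, EventArgs e)
    {
        // All presentation logic lives in the class, not in the .aspx markup.
        GreetingLabel.Text = "Hello from the code-behind class!";
    }
}

The designer can edit Greeting.aspx while the programmer works on Greeting.aspx.cs, without either stepping on the other's files.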
- Support for compiled languages
Developers can use compiled languages such as VB.NET or C# and access features such as strong typing and object-oriented programming. Using compiled languages also means that ASP.NET pages do not suffer the performance penalties associated with interpreted code. ASP.NET pages are precompiled to byte-code and Just In Time (JIT) compiled when first requested. Subsequent requests are directed to the fully compiled code, which is cached until the source changes.
- Use services provided by the .NET Framework
The .NET Framework provides class libraries that can be used by your application. Some of the key classes help you with input/output, access to operating system services, data access, or even debugging. We will go into more detail on some of them in this module.
- Graphical Development Environment
Visual Studio .NET provides a very rich development environment for Web developers. You can drag and drop controls and set properties the way you do in Visual Basic 6. And you have full IntelliSense support, not only for your code, but also for HTML and XML.
- State management
To refer to the problems mentioned before, ASP.NET provides solutions for session and application state management. State information can, for example, be kept in memory or stored in a database. It can be shared across Web farms, and state information can be recovered, even if the server fails or the connection breaks down.
- Update files while the server is running!
Components of your application can be updated while the server is online and clients are connected. The Framework will use the new files as soon as they are copied to the application. Removed or old files that are still in use are kept in memory until the clients have finished.
- XML-Based Configuration Files
Configuration settings in ASP.NET are stored in XML files that you can easily read and edit. You can also easily copy these to another server, along with the other files that comprise your application.
Here are some points that give a quick overview of ASP.NET.
- ASP.NET provides services to allow the creation, deployment, and execution of Web Applications and Web Services
- Like ASP, ASP.NET is a server-side technology
- Web Applications are built using Web Forms. ASP.NET comes with built-in Web Forms controls, which are responsible for generating the user interface. They mirror typical HTML widgets like text boxes or buttons. If these controls do not fit your needs, you are free to create your own user controls.
- Web Forms are designed to make building web-based applications as easy as building Visual Basic applications
ASP.NET is based on the fundamental architecture of the .NET Framework. Visual Studio provides a uniform way to combine the various features of this architecture.
The architecture is explained from bottom to top in the following discussion.
At the bottom of the architecture is the common language runtime. The .NET Framework common language runtime resides on top of the operating system services. The common language runtime loads and executes code that targets the runtime. This code is therefore called managed code. The runtime gives you, for example, the ability for cross-language integration.
.NET Framework provides a rich set of class libraries. These include base classes, like networking and input/output classes, a data class library for data access, and classes for use by programming tools, such as debugging services. All of them are brought together by the Services Framework, which sits on top of the common language runtime.
ADO.NET is Microsoft’s ActiveX Data Object (ADO) model for the .NET Framework. ADO.NET is not simply the migration of the popular ADO model to the managed environment but a completely new paradigm for data access and manipulation.
ADO.NET is intended specifically for developing web applications. This is evident from its two major design principles:
- Disconnected Datasets—In ADO.NET, almost all data manipulation is done outside the context of an open database connection.
- Effortless Data Exchange with XML—Datasets can converse in the universal data format of the Web, namely XML.
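A minimal sketch of both principles, assuming a local SQL Server database named Cinema with a Movies table; the connection string, table, and file name are hypothetical.

using System.Data;
using System.Data.SqlClient;
public class MovieExport
{
    public static void ExportMovies()
    {
        SqlConnection connection = new SqlConnection("server=(local);database=Cinema;Integrated Security=SSPI");
        SqlDataAdapter adapter = new SqlDataAdapter("SELECT Title, Rating FROM Movies", connection);
        DataSet movies = new DataSet();
        adapter.Fill(movies, "Movies");    // the adapter opens and closes the connection itself
        // The DataSet is now disconnected: it can be modified, cached, or passed around freely.
        movies.Tables["Movies"].Rows.Add(new object[] { "Forever Nowhere", "PG" });
        // Effortless data exchange with XML.
        movies.WriteXml("movies.xml");
    }
}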
The 4th layer of the framework consists of the Windows application model and, in parallel, the Web application model.
The Web application model (presented in the slide as ASP.NET) includes Web Forms and Web Services.
ASP.NET comes with built-in Web Forms controls, which are responsible for generating the user interface. They mirror typical HTML widgets like text boxes or buttons. If these controls do not fit your needs, you are free to create your own user controls.
Web Services brings you a model to bind different applications over the Internet. This model is based on existing infrastructure and applications and is therefore standard-based, simple, and adaptable.
Web Services are software solutions delivered via Internet to any device. Today, that means Web browsers on computers, for the most part, but the device-agnostic design of .NET will eliminate this limitation.
- One of the obvious themes of .NET is unification and interoperability between various programming languages. In order to achieve this, certain rules must be laid down, and all the languages must follow these rules. In other words, we cannot have languages running around creating their own extensions and their own fancy new data types. The CLS (Common Language Specification) is the collection of the rules and constraints that every language that seeks to achieve .NET compatibility must follow.
- The CLR and the .NET Framework in general are designed in such a way that code written in one language can be used seamlessly by another language. Hence ASP.NET can be programmed in any of the .NET-compatible languages, whether it is VB.NET, C#, Managed C++ or JScript.NET.
Quick Start to ASP.NET
After this short excursion with some background information on the .NET Framework, we will now focus on ASP.NET.
File name extensions
Web applications written with ASP.NET will consist of many files with different file name extensions. The most common are listed here. Native ASP.NET files by default have the extension .aspx (which is, of course, an extension to .asp) or .ascx. Web Services normally have the extension .asmx.
The extension of the files containing your business logic will depend on the language you use. So, for example, a C# code-behind file would have the extension .aspx.cs. You already learned about the configuration file Web.config.
Another one worth mentioning is the ASP.NET application file Global.asax - in the ASP world formerly known as Global.asa. But now there is also a code behind file Global.asax.vb, for example, if the file contains Visual Basic.NET code. Global.asax is an optional file that resides in the root directory of your application, and it contains global logic for your application.
All of these are text files
All of these files are text files, and therefore human readable and writeable.
The easiest way to start
The easiest way to start with ASP.NET is to take a simple ASP page and change the file name extension to .aspx.
Here is a quick introduction to the syntax used in ASP.NET.
You can use directives to specify optional settings used by the page compiler when processing ASP.NET files. For each directive you can set different attributes. One example is the language directive at the beginning of a page defining the default programming language.
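For example, a page might start with directives like these (the attribute values are only an illustration):

<%@ Page Language="C#" Debug="true" Trace="false" %>
<%@ Import Namespace="System.Data" %>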
Code Declaration Blocks
Code declaration blocks are lines of code enclosed in <script> tags. They contain the runat=server attribute, which tells ASP.NET that these controls can be accessed on the server and on the client. Optionally you can specify the language for the block. The code block itself consists of the definition of member variables and methods.
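A small sketch of a code declaration block; the member variable and the method are hypothetical:

<script runat="server" language="C#">
    int visitCount = 0;                        // member variable of the page class
    string GetGreeting(string name)
    {
        return "Welcome back, " + name + "!";  // method callable from elsewhere in the page
    }
</script>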
Code Render Blocks
Render blocks contain inline code or inline expressions enclosed by the character sequences shown here. The language used inside those blocks could be specified through a directive like the one shown before.
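For example, assuming the page language is C#, a render block and an inline expression look like this:

<% for (int i = 1; i <= 3; i++) { %>
    <p>Row number <%= i %></p>
<% } %>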
HTML Control Syntax
You can declare several standard HTML elements as HTML server controls. Use the element as you are familiar with in HTML and add the attribute runat=server. This causes the HTML element to be treated as a server control. It is now programmatically accessible by using a unique ID. HTML server controls must reside within a <form> section that also has the attribute runat=server.
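For example, an ordinary HTML text box becomes a server control simply by adding the runat attribute (the id UserName is hypothetical):

<form runat="server">
    <input type="text" id="UserName" runat="server" />
</form>

In the page class the element is then available as an HtmlInputText instance, so its value can be read as UserName.Value.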
Custom Control Syntax
There are two different kinds of custom controls. On the one hand there are the controls that ship with .NET, and on the other hand you can create your own custom controls. Using custom server controls is the best way to encapsulate common programmatic functionality.
Just specify elements as you did with HTML elements, but add a tag prefix, which is an alias for the fully qualified namespace of the control. Again you must include the runat=server attribute. If you want to get programmatic access to the control, just add an Id attribute.
You can include properties for each server control to characterize its behavior. For example, you can set the maximum length of a TextBox. Those properties might have sub properties; you know this principle from HTML. Now you have the ability to specify, for example, the size and type of the font you use (font-size and font-type).
The last attribute is dedicated to event binding. This can be used to bind the control to a specific event. If you implement your own method MyClick, this method will be executed when the corresponding button is clicked, provided you use the server control event binding shown in the slide.
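Putting properties, sub-properties, and event binding together in one sketch; the control IDs and the MyClick handler are hypothetical:

<form runat="server">
    <asp:TextBox id="NameBox" runat="server" MaxLength="40" Font-Names="Verdana" Font-Size="12pt" />
    <asp:Button id="SubmitButton" runat="server" Text="Submit" OnClick="MyClick" />
</form>
<script runat="server" language="C#">
    void MyClick(object sender, EventArgs e)
    {
        // Runs on the server when the button is clicked.
        NameBox.Text = "Hello, " + NameBox.Text;
    }
</script>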
Data Binding Expression
You can create bindings between server controls and data sources. The data binding expression is enclosed by the character sequences <%# and %>. The data-binding model provided by ASP.NET is hierarchical. That means you can create bindings between server control properties and superior data sources.
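A small sketch of a data binding expression; the PageTitle field and the call to DataBind() are hypothetical:

<script runat="server" language="C#">
    protected string PageTitle = "Now showing";
    void Page_Load(object sender, EventArgs e)
    {
        DataBind();    // evaluates all <%# ... %> expressions on the page
    }
</script>
<h2><%# PageTitle %></h2>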
Server-side Object Tags
If you need to create an instance of an object on the server, use server-side object tags. When the page is compiled, an instance of the specified object is created. To specify the object use the identifier attribute. You can declare (and instantiate) .NET objects using class as the identifier, and COM objects using either progid or classid.
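For example, to create an ArrayList instance on the server (the id Items is hypothetical):

<object id="Items" runat="server" class="System.Collections.ArrayList" />

The instance is then available in server code under that id, for example Items.Add("Popcorn").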
Server-side Include Directives
With server-side include directives you can include raw contents of a file anywhere in your ASP.NET file. Specify the type of the path to filename with the pathtype attribute. Use either File, when specifying a relative path, or Virtual, when using a full virtual path.
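The two forms look like this (the file names are hypothetical):

<!-- #include file="header.inc" -->
<!-- #include virtual="/myapp/footer.inc" -->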
To prevent server code from executing, use these character sequences to comment it out. You can comment out full blocks - not just single lines.
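Server-side comments are enclosed in <%-- and --%>, for example:

<%--
    <asp:Label id="DebugLabel" runat="server" />
    <% WriteMovies(); %>
--%>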
First ASP.NET Program.
Now let us have our First ASP.NET program.
Let’s look at both the markup and the C# portions of a simple web forms application that generates a movie line-up dynamically through software.
Web form application part 1 -- SimpleWebForm.aspx
<% @Page Language="C#" Inherits="MoviePage" Src="SimpleWebForm.cs" %>
<H1 align="center"><FONT color="white" size="7">Welcome to <br>Supermegacineplexadrome!</FONT></H1>
<P align="left"><FONT color="lime" size="5"><STRONG>
<U>Showtimes for <%WriteDate();%></U>
<FONT size="5" color="yellow"><%WriteMovies();%></FONT>
And this is where the C# part of a web forms application comes in.
Web form application part 2 - SimpleWebForm.cs
using System;
using System.Web.UI;
public class MoviePage : Page
{
    protected void WriteDate()
    {
        Response.Write(DateTime.Now.ToLongDateString());  // body not shown in the original listing; an assumed implementation
    }
    protected void WriteMovies()
    {
        Response.Write("<P>The Glass Ghost (R) 1:05 pm, 3:25 pm, 7:00 pm</P>");
        Response.Write("<P>Untamed Harmony (PG-13) 12:50 pm, 3:25 pm, 6:55 pm</P>");
        Response.Write("<P>Forever Nowhere (PG) 3:30 pm, 8:35 pm</P>");
        Response.Write("<P>Without Justice (R) 12:45 pm, 6:45 pm</P>");
    }
}
Execution Cycle:
Now let's see what’s happening on the server side. You will shortly understand how server controls fit in.
A request for an .aspx file causes the ASP.NET runtime to parse the file for code that can be compiled. It then generates a page class that instantiates and populates a tree of server control instances. This page class represents the ASP.NET page.
Now an execution sequence is started in which, for example, the ASP.NET page walks its entire list of controls, asking each one to render itself.
The controls paint themselves to the page. This means they make themselves visible by generating HTML output to the browser client.
We need to have a look at what’s happening to your code in ASP.NET.
Compilation, when page is requested the first time
The first time a page is requested, the code is compiled. Compiling code in .NET means that a compiler in a first step emits Microsoft intermediate language (MSIL) and produces metadata—if you compile your source code to managed code. In a following step MSIL has to be converted to native code.
Microsoft intermediate language (MSIL)
Microsoft intermediate language is code in an assembly language–like style. It is CPU independent and therefore can be efficiently converted to native code.
The conversion in turn can be CPU-specific and optimized. The intermediate language provides a hardware abstraction layer.
MSIL is executed by the common language runtime.
Common language runtime
The common language runtime contains just-in-time (JIT) compilers to convert the MSIL into native code. This is done on the same computer architecture that the code should run on.
The runtime manages the code when it is compiled into MSIL—the code is therefore called managed code.
ASP.NET Applications and Configuration
Like ASP, ASP.NET encapsulates its entities within a web application. A web application is an abstract term for all the resources available within the confines of an IIS virtual directory. For example, a web application may consist of one or more ASP.NET pages, assemblies, web services, configuration files, graphics, and more. In this section we explore two fundamental components of a web application, namely global application files (Global.asax) and configuration files (Web.config).
Global.asax is a file used to declare application-level events and objects. Global.asax is the ASP.NET extension of the ASP Global.asa file. Code to handle application events (such as the start and end of an application) reside in Global.asax. Such event code cannot reside in the ASP.NET page or web service code itself, since during the start or end of the application, its code has not yet been loaded (or unloaded). Global.asax is also used to declare data that is available across different application requests or across different browser sessions. This process is known as application and session state management.
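A minimal sketch of a Global.asax file with application- and session-level event handlers; the HitCounter variable is hypothetical:

<%@ Application Language="C#" %>
<script runat="server">
    void Application_Start(Object sender, EventArgs e)
    {
        // Runs once, when the first request for any resource in the application arrives.
        Application["HitCounter"] = 0;
    }
    void Session_Start(Object sender, EventArgs e)
    {
        // Runs at the start of each new browser session.
        Application.Lock();
        Application["HitCounter"] = (int)Application["HitCounter"] + 1;
        Application.UnLock();
    }
    void Application_End(Object sender, EventArgs e)
    {
        // Runs when the application shuts down.
    }
</script>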
The Global.asax file must reside in the IIS virtual root. Remember that a virtual root can be thought of as the container of a web application. Events and state specified in the global file are then applied to all resources housed within the web application. If, for example, Global.asax defines a state application variable, all .aspx files within the virtual root will be able to access the variable.
Like an ASP.NET page, the Global.asax file is compiled upon the arrival of the first request for any resource in the application. The similarity continues when changes are made to the Global.asax file; ASP.NET automatically notices the changes, recompiles the file, and directs all new requests to the newest compilation. A Global.asax file is automatically created when you create a new web application project in the VS.NET IDE.
Application directives are placed at the top of the Global.asax file and provide information used to compile the global file. Three application directives are defined, namely Application, Assembly, and Import. Each directive is applied with the following syntax:
<%@ appDirective appAttribute=Value ...%>
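For example, the top of a Global.asax file might contain the following (the description text is hypothetical):

<%@ Application Description="Cinema sample application" %>
<%@ Assembly Name="System.Data" %>
<%@ Import Namespace="System.Data" %>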
In ASP, configuration settings for an application (such as session state) are stored in the IIS metabase. There are two major disadvantages with this scheme. First, settings are not stored in a human-readable manner but in a proprietary, binary format. Second, the settings are not easily ported from one host machine to another. (It is difficult to transfer information from an IIS metabase or Windows Registry to another machine, even if it has the same version of Windows.)
Web.config solves both of the aforementioned issues by storing configuration information as XML. Unlike Registry or metabase entries, XML documents are human-readable and can be modified with any text editor. Second, XML files are far more portable, involving a simple file transfer to switch machines.
Unlike Global.asax, Web.config can reside in any directory, which may or may not be a virtual root. The Web.config settings are then applied to all resources accessed within that directory, as well as its subdirectories. One consequence is that an IIS instance may have many Web.config files. Settings are applied in a hierarchical fashion: for any given resource, the Web.config file in the most deeply nested directory takes precedence over those inherited from directories above it.
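A minimal sketch of a Web.config file; the particular settings shown are only examples:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.web>
    <compilation defaultLanguage="c#" debug="false" />
    <customErrors mode="RemoteOnly" />
    <sessionState mode="InProc" timeout="20" />
  </system.web>
</configuration>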
Since Web.config is based on XML, it is extensible and flexible for a wide variety of applications. It is important, however, to note that the Web.config file is optional. A default Web.config file, used by all ASP.NET application resources, can be found on the local machine at:
ASP.NET is an evolution of Microsoft's Active Server Pages (ASP) technology. Using ASP.NET, you can rapidly develop highly advanced web applications based on the .NET Framework. Visual Studio includes the Web Form Designer, which allows the design of web applications in an intuitive, graphical manner similar to Visual Basic 6. ASP.NET ships with web controls wrapping each of the standard HTML controls, in addition to several controls specific to .NET. One such example is validation controls, which intuitively validate user input without the need for extensive client-side script.
In many respects, ASP.NET provides major improvements over ASP, and can definitely be considered a viable alternative for rapidly developing web-based applications.
Points of Interest
I have written this tutorial to share my knowledge of ASP.NET with you. You can find more articles and software projects with free source code on my web site http://programmerworld.net .
- Date Posted: June 27, 2003
I am a B.E. in Information Technology from Lingaya's Institute of Management and Technology, Faridabad, India.
I have worked on VC++, MFC, VB, and SQL Server. Currently I am working on .NET, C#, and ASP.NET.
I keep my free source code projects and articles at my website http://programmerworld.net | <urn:uuid:9b858cdc-f880-4b7a-9fbf-1b67bd6e948f> | 3.1875 | 4,994 | Truncated | Software Dev. | 47.079626 | 435 |
This page describes how to specify the direction of a vector. It contains a text description and an animation of an arrow turning counterclockwise that displays the degree that it is at. There are links at the bottom of the page for similar animations.
This tutorial is part of The Physics Classroom. This web site also includes interactive tools to help students with concepts and problem solving, worksheets for student assignments, and recommendations for simple introductory laboratories.
%0 Electronic Source %A Henderson, Tom %D 1996 %T The Physics Classroom: Vector Direction %V 2013 %N 26 May 2013 %9 image/gif %U http://www.physicsclassroom.com/mmedia/vectors/vd.cfm
Disclaimer: ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications. | <urn:uuid:6a11f536-346c-4d0c-958b-70bb1362951b> | 3.46875 | 192 | Tutorial | Science & Tech. | 45.497727 | 436 |
Category: Sponges view all from this category
Description 8" tall (20 cm) and 1" (2.5 cm) wide. Sponge produces brilliantly colored tree-like, intertwined branches in red and orange. Surface is covered with tiny, scattered pores.
Habitat Ocean or bay shallows, Tidepools.
Range Eastern Canada, Florida, New England, Mid-Atlantic, California, Texas, Northwest.
Discussion Branches of the Red Beard Sponge provide important habitats for crustaceans and juvenile fish species. Reproduces both asexually and sexually. Broken branches can regenerate into new sponges. When cells from the Red Beard Sponge are separated, they have the ability to reorganize themselves. | <urn:uuid:5b37c27a-2d82-4de1-8942-df11c18b4567> | 3.09375 | 147 | Knowledge Article | Science & Tech. | 33.730857 | 437 |
What does this program measure?
Sulfur dioxide is measured at 4, 10, 23, and 40 meters above the ground, in units of parts per trillion by volume.
How does this program work?
Why is this research important?
This is part of an effort to monitor long-term emissions from the Kilauea and Mauna Loa volcanoes. It may provide a precursor to the next Mauna Loa eruption. These measurements also detect large sulfur dioxide pollution events from Asia.
Are there any trends in the data?
No clear trends. Kilauea has been in continuous eruption since 1983. Mauna Loa last erupted in 1984. Since 1994, SO2 levels from Mauna Loa have been low (<500 ppt, or parts per trillion).
How does this program fit into the big picture?
What is its role in global climate change?
The SO2 data can be used to detect periods of volcanic pollution at the observatory. This can provide a non-baseline filter for other measurements.
Comments and References
NOAA Sulfur Dioxide Monitoring (SO2) | <urn:uuid:f8bcf8d0-f144-4e8a-b1ee-796934b50837> | 4.03125 | 233 | Knowledge Article | Science & Tech. | 60.74052 | 438 |
- Formation of H2 and CH4 by weathering of olivine at temperatures between 30 and 70°CAnna Neubeck
Department of Geological Sciences, Stockholm University, Sweden
Geochem Trans 12:6. 2011..This may expand the range of environments plausible for abiotic CH4 formation both on Earth and on other terrestrial bodies...
- Methane emissions from Pantanal, South America, during the low water season: toward more comprehensive samplingDavid Bastviken
Department of Thematic Studies Water and Environmental Studies, Linkoping University, Linkoping, Sweden
Environ Sci Technol 44:5450-5. 2010..Future measurements with static floating chambers should be based on many individual chambers distributed in the various subenvironments of a lake that may differ in emissions in order to account for the within lake variability...
- Freshwater methane emissions offset the continental carbon sinkDavid Bastviken
Department of Thematic Studies Water and Environmental Studies, Linkoping University, SE 58183 Linkoping, Sweden
Science 331:50. 2011..Thus, the continental GHG sink may be considerably overestimated, and freshwaters need to be recognized as important in the global carbon cycle...
- Measurement of methane oxidation in lakes: a comparison of methodsDavid Bastviken
Department of Water and Environmental Studies, Linkoping University, Sweden
Environ Sci Technol 36:3354-61. 2002..We conclude that methods using the stable isotope or mass balance modeling approach represent promising alternatives, particularly for studies focusing on ecosystem-scale carbon metabolism...
- Organic matter chlorination rates in different boreal soils: the role of soil organic matter contentMalin Gustavsson
Department of Thematic Studies, Water and Environmental Studies, Linkoping University, 58183 Linkoping, Sweden
Environ Sci Technol 46:1504-10. 2012.... | <urn:uuid:42de6163-2efa-4490-ba7a-c4d0a2a4ed7f> | 2.796875 | 378 | Content Listing | Science & Tech. | 25.039844 | 439 |
Ancient Ichthyosaur Mother Did Not Explode, Scientists Say
An ichthyosaur female with embryos scattered outside her body.
It is unlikely that the body of a mother ichthyosaur exploded, say researchers who offer another explanation for the scattered remains of embryos found around her in rock that was once deep underwater.
Rather, the scattering of the embryos was probably caused by minor sea currents after the expectant mother died and her body decayed some 182 million years ago, the researchers propose.
If this scenario sounds confusing, it is important to know that ichthyosaurs, extinct marine reptiles that lived at the same time as the dinosaurs, did not lay eggs but rather carried their young in their bodies until they gave birth. Ichthyosaurs resembled fish but, unlike most fish, breathed air through lungs.
The nearly intact skeleton of the female ichthyosaur in question was found in Holzmaden, Germany. But the remains of most of the approximately 10 embryos were scattered far outside her body. Other fossilized ichthyosaur remains have been found in similarly strange arrangements, with skeletons usually complete but jumbled to some degree.
A Swiss and German research team set out to examine the idea that after death, such large-lunged marine creatures floated on the surface, with putrefaction gases building up inside them, until the gases escaped, often by bursting. Such explosions would jumble the bones.
The researchers examined the decay and preservation of ichthyosaur skeletons and compared this information with that of modern animals, particularly marine mammals. To get an idea of the amount of pressure that builds up after death during different stages of bloating, they looked at measurements from the abdomens of 100 human corpses.
"Our data and a review of the literature demonstrate that carcasses sink and do not explode (and spread skeletal elements)," the researchers wrote online Feb. 1 in the journal Palaeobiodiversity and Palaeoenvironments.
Generally, carcasses of ichthyosaurs would have sunk to the seafloor and broken down completely. Only under specific circumstances — including in warmer water less than 164 feet (50 meters) deep — would gas inside the body have brought the remains to the surface, said the researchers, led by Achim Reisdorf of the University of Basel in Switzerland. When this happened, the carcass would decompose slowly, scattering bones over a wide area.
Ichthyosaurs' remains stayed neatly in place only under specific conditions, according to the research team: The water pressure had to be great enough to prevent them from floating, scavengers did not pick them over, and strong currents did not disturb them.
The female ichthyosaur died in water about 492 feet (150 m) deep. Decomposition of the body released the embryo skeletons, and minor currents along the seafloor distributed them around her body, the researchers speculate.
MORE FROM LiveScience.com | <urn:uuid:8aa2421f-4491-4216-aa1b-341bb6b55b5b> | 3.390625 | 608 | News Article | Science & Tech. | 33.144118 | 440 |
Defending Planet Earth: Near-Earth-Object Surveys and Hazard Mitigation Strategies
or regional effects near the time and place of the impact, but could include, for large impacts, global climate change or tsunamis. But how large an impact and what kind of impact could cause these effects is still uncertain. A research program is needed to address all of these issues in order to assess and quantify the risks associated with the NEO impact hazard.
The ability to mitigate the impact hazard, or even to define appropriate strategies for mitigating the hazard, likewise depends on the acquisition of the new knowledge and understanding that could be gained through a research program. Even if the only viable mitigation approach to an impending impact is to warn the population and to evacuate, better information is needed for making sound decisions. Under what conditions should warning be provided and when, and who should evacuate? If, however, there are available active mitigation options, like changing the orbit of an impactor, again better information is needed: One must be able to predict with confidence the response of an impactor to specific forms of applied forces, impacts of various types and speeds, or various types of radiant energy, such as x rays. The required information goes beyond the basic physical characterization that determines the size and mass of the impactor and includes surface and subsurface compositions, internal structures, and the nature of their reactions to various inputs.
Just as the scope of earthquake research is not limited only to searching for and monitoring earthquakes, the scope of NEO hazard mitigation research should not be limited to searching for and detecting NEOs. A research program is a necessary part of an NEO hazard mitigation program. This research should be carried out in parallel with the searches for NEOs, and it should be broadly inclusive of research aimed at filling the gaps in present knowledge and understanding so as to improve scientists’ ability to assess and quantify impact risks as well as to support the development of mitigation strategies. This research needs to cover several areas discussed in the previous chapters of this report: risk analysis (Chapter 2), surveys and detection of NEOs (Chapter 3), characterization (Chapter 4), and mitigation (Chapter 5). The committee stresses that this research must be broad in order to encompass all of these relevant and interrelated subjects.
Recommendation: The United States should initiate a peer-reviewed, targeted research program in the area of impact hazard and mitigation of NEOs. Because this is a policy-driven, applied program, it should not be in competition with basic scientific research programs or funded from them. This research program should encompass three principal task areas: surveys, characterization, and mitigation. The scope should include analysis, simulation, and laboratory experiments. This research program does not include mitigation space experiments or tests that are treated elsewhere in this report.
Some specific topics of interest for this research program are listed below. This list is not intended to be exhaustive:
Analyses and simulations of ways to optimize search and detection strategies using ground-based or space-based approaches or combinations thereof (see Chapter 3);
Studies of distributions of warning times versus sizes of impactors for different survey and detection approaches (see Chapter 2);
Studies of the remote-sensing data on NEOs that are needed to develop useful probabilistic bases for choosing active-defense strategies when warning times of impacts are insufficient to allow a characterization mission (see Chapter 4);
Concept studies of space missions designed to meet characterization objectives, including a rendezvous and/or landed mission and/or impactors;
Concept studies of active-defense missions designed to meet mitigation objectives, including a test of mitigation by impact with the measurement of momentum transfer efficiency to the target (see Chapter 5);
Research to demonstrate the viability, or not, of using the disruption of an NEO to mitigate against an impact;
The technological development of components and systems necessary for mitigation;
Analyses of data from airbursts and their ground effects as obtained by dedicated networks, including military systems and fireball (brighter than average meteor) observations; also analyses and simulations to assess | <urn:uuid:1c8537b6-85fc-4358-8e69-44ca13424579> | 3.265625 | 863 | Knowledge Article | Science & Tech. | 12.632292 | 441 |
Destroying a dangerous asteroid with a nuclear bomb is a well-worn trope of science fiction, but it could become reality soon enough.
Scientists are developing a mission concept that would blow apart an Earth-threatening asteroid with a nuclear explosion, just like Bruce Willis and his oilmen-turned-astronaut crew did in the 1998 film "Armageddon."
But unlike in the movie, the spacecraft under development — known as the Hypervelocity Asteroid Intercept Vehicle, or HAIV — would be unmanned. It would hit the space rock twice in quick succession, with the non-nuclear first blow blasting out a crater for the nuclear bomb to explode inside, thus magnifying its asteroid-shattering power.
"Using our proposed concept, we do have a practically viable solution — a cost-effective, economically viable, technically feasible solution," study leader Bong Wie, of Iowa State University, said Wednesday at the 2012 NASA Innovative Advanced Concepts (NIAC) meeting in Virginia. [ 5 Reasons to Care About Asteroids ]
When, not if
Earth has been pummeled by asteroids throughout its 4.5 billion-year history, and some of the strikes have been catastrophic. For example, a 6-mile-wide (10 kilometers) space rock slammed into the planet 65 million years ago, wiping out the dinosaurs.
Earth is bound to be hit again, and relatively soon. Asteroids big enough to cause serious damage today — not necessarily the extinction of humans, but major disruptions to the global economy — have hit the planet on average every 200 to 300 years, researchers say.
So humanity needs to have a plan in hand to deal with the next threatening asteroid, many scientists stress.
That plan should include deflection strategies, they say. Given a few decades of lead time, a threatening space rock could be nudged off course — perhaps by employing a tag-along "gravity tractor" probe, or even by painting the asteroid white and letting sunlight give it a push.
But humanity also needs to be prepared for an asteroid that pops up on scientists' radar just weeks before a potential impact. That scenario might demand the nuclear option that Wie and his colleagues are working to develop.
A one-two punch
NASA engineers identified 168 technical flaws in "Armageddon," Wie said. But one thing the movie got right is the notion that a nuke will be far more effective if it explodes inside an asteroid rather than at its surface. (At a depth of 10 feet, or 3 meters, the bomb's destructive power would be about 20 times greater, Wie said.)
So Wie and his team came up with a way to get the bomb down into a hole, without relying on a crew of spacewalking roughnecks to bore into the space rock.
The HAIV spacecraft incorporates two separate impactors, a "leader" and a "follower." As HAIV nears the asteroid, the leader separates and slams into the space rock, blasting out a crater about 330 feet (100 m) wide.
The nuke-bearing follower hits the hole a split-second later, blowing the asteroid to smithereens. Simulations suggest the explosion would fling bits of space rock far and wide, leaving only a tiny percentage of the asteroid's mass to hit Earth, Wie said.
This is no pie-in-the sky dream: The researchers have received two rounds of funding from the NIAC program, and they say their plan is eminently achievable.
"Basically, our proposed concept is an extension of the flight-proven $300 million Deep Impact mission," Wie said, referring to the NASA effort that slammed an impactor into Comet Tempel 1 in 2005.
Demonstration mission coming?
The HAIV project is still in its early stages, and much more modeling and developmental work is needed. But Wie and his colleagues are ambitious, with plans for a bomb-free flight test in the next decade or so.
"Our ultimate goal is to be able to develop about a $500 million flight demo mission within a 10-year timeframe," Wie said.
The team's current work involves analyzing the feasibility of nuking a small but still dangerous asteroid — one about 330 feet (100 m) wide — with little warning time. However, it wouldn't be too difficult to scale up, Wie said.
"Once we develop technology to be used in this situation, we are ready to avoid any collision — with much larger size, with much longer warning time," Wie said.
© 2013 Space.com. All rights reserved. More from Space.com. | <urn:uuid:e4d7dff6-f3a3-4717-910b-97e700c705b1> | 3.359375 | 1,102 | News Article | Science & Tech. | 49.307155 | 442 |
Environmental News: Media CenterMain page | Archive
FOR IMMEDIATE RELEASE
Press contact: Dr. Susan Subak, 202-289-2417
If you are not a member of the press, please write to us at firstname.lastname@example.org or see our contact page.
Major Changes Ahead Due to Global Warming, Says New Federal Report
Mid-Atlantic Region May Be Particularly Vulnerable
WASHINGTON (June 12, 2000) - The first comprehensive assessment of global warming's potential impact on the United States warns that some U.S. ecosystems are likely to disappear entirely as a result of climate change. The 145-page overview, "Climate Change Impacts on the United States," was released today for public comment by the U.S. Global Change Research Program.
Researchers at Pennsylvania State University released a companion overview on the Mid-Atlantic region earlier this year that details the potential harm caused by rising temperatures and sea levels on an area stretching from southeastern New York state to North Carolina. The Mid-Atlantic Regional Assessment overview report is available on the Web at http://www.essc.psu.edu/mara/.
"The Mid-Atlantic assessment shows that many of the region's distinct natural features could deteriorate as a result of changing climate," says Susan Subak, a senior research associate at the Natural Resources Defense Council. "Whether we're talking about Chesapeake Bay fisheries and recreational areas or Southern Appalachian forest and bird habitats, rising temperatures would put further stress on natural systems in this populous region. Cutting our consumption of fossil fuels, particularly from vehicles and electricity, would limit the increase of heat-trapping gases in the atmosphere, and help prevent the worst-case scenarios."
The national assessment evaluates global warming's potential impact on 20 geographic areas and five sectors: agriculture, coastal areas, forests, human health and water resources. The draft report's main findings include:
- Continued growth in worldwide emissions is likely to increase average temperatures across the United States by 5-10 degrees Fahrenheit by 2100.
- Some ecosystems, such as alpine meadows in the Rocky Mountains and some barrier islands, are likely to disappear entirely, while others, such as forests of the Southeast, are likely to experience major species shifts or break up.
- Drought is an important concern in every region and many regions are at risk from increased flooding and water quality problems. Snowpack changes are especially important in the West, Pacific Northwest, and Alaska.
- Climate change and the resulting rise in sea level are likely to exacerbate threats to buildings, roads, powerlines, and other infrastructure in climatically sensitive places, such as low-lying coastlines.
- Climate change will very likely magnify the cumulative impacts of other stresses, such as air and water pollution and habitat destruction. For some systems, such as coral reefs, the effects are very likely to exceed a critical threshold, bringing irreversible damage.
- Heat stress from higher summer temperatures, more frequent and extensive flooding, and extended ranges of disease-bearing insects may increase the risk of illness and pose additional challenges to the public health care system.
Some adverse impacts will be unavoidable because heat-trapping gases are long-lasting in the atmosphere, and because emissions already have changed our climate. These impacts will be aggravated by other stresses on the environment and by changing socioeconomic conditions.
The national assessment offers Americans suggestions on preparing for global warming. One of the most important is to limit the impact of development on the environment, especially in areas vulnerable to species and habitat loss. By reducing these stresses, we can support the capacity of natural ecosystems and communities to respond to a changing climate.
More than 250 scientists assisted in the national assessment, and countless other experts and stakeholders contributed to the workshops and to the technical review of the documents. There will be a 60-day public comment period after the report is released.
The Natural Resources Defense Council is a national, non-profit organization of scientists, lawyers and environmental specialists dedicated to protecting public health and the environment. Founded in 1970, NRDC has more than 400,000 members nationwide, served from offices in New York, Washington, Los Angeles and San Francisco.
- NRDC meets the highest standards of the Wise Giving Alliance of the Better Business Bureau. | <urn:uuid:938c302c-58f2-4006-9441-a899aff50880> | 2.890625 | 919 | News (Org.) | Science & Tech. | 29.399965 | 443 |
Press Release 10-142
Gulf Oil Spill: NSF Awards Grant to Study Effects of Oil and Dispersants on Louisiana Salt Marsh Ecosystem
Researchers measuring impacts of short- and long-term exposure in extensive Gulf Coast marshes
August 16, 2010
As oil and dispersants wash ashore in coastal Louisiana salt marshes, what will their effects be on these sensitive ecosystems?
The National Science Foundation (NSF) has awarded a rapid response grant to scientist Eugene Turner of Louisiana State University and colleagues to measure the impacts on Gulf Coast salt marshes.
The researchers will track short-term (at the current time, and again at three months) and longer-term (at 11 months) exposure to oil and dispersants.
The coast of Louisiana is lined with extensive salt marshes whose foundation is two species of Spartina grass.
In brackish marshes, Spartina patens is the dominant form. It's locally known as wiregrass, marsh hay and paille a chat tigre (hair of the tiger).
In more saline marshes closer to the Gulf of Mexico, Spartina alterniflora, also called smooth cordgrass and oyster grass, takes over. A tall form of this wavy grass grows on the streamside edge of the marsh; a shorter form grows behind it.
In their NSF study, the biologists will document changes in these critically-important Spartina grasses, as well as in the growth of other salt marsh plants, and in marsh animals and microbes.
Field investigators will collect samples three times at 35 to 50 sites and analyze the oil and dispersants after each expedition.
The first field effort is now underway.
"Data are being collected that may be used as indicators of the long-term health of the salt marsh community," says Turner. "From these data, we will obtain information that precedes potentially far-reaching changes.
"This exceptionally large oil spill and subsequent remediation efforts are landmark opportunities to learn about short- and long-term stressors on salt marsh ecosystems."
Salt marsh stressors, such as those from oil spills, can have dramatic, visible, and immediate direct impacts, Turner says, on marshes and surrounding uplands.
"They also have indirect effects because, as oil and dispersants begin to degrade, they enter food webs via primary consumers such as suspension-feeding oysters, deposit-feeding bivalves, and grazing gastropods," says David Garrison of NSF's Division of Ocean Sciences, which funded the research.
These "primary consumers," in turn, serve as food sources for those at higher trophic levels--including humans.
As contaminants make their way up the food chain, they may become concentrated, as in the well-known example of mercury in fish.
"The effects of environmental stressors can cascade through ecosystems as metabolic pathways are altered," says Todd Crowl of NSF's Division of Environmental Biology, which co-funded the research. "The result may be an ecosystem that's radically altered well into the future."
The research, says Turner, is a benchmark study in salt marsh ecosystem change, and will answer key questions about salt marsh stability.
This NSF grant is one of many Gulf oil spill-related rapid response awards made by the federal agency. NSF's response involves active research in social sciences, geosciences, computer simulation, engineering, biology, and other fields. So far, the Foundation has made more than 60 awards totaling nearly $7 million.
For more on the RAPID program, please see the RAPID guidelines. See also a regularly updated list of RAPIDs targeting the Gulf oil spill response. Because RAPID grants are being awarded continuously, media can also contact Josh Chamot (email@example.com) in the Office of Legislative and Public Affairs for the latest information on granted awards.
Cheryl Dybas, NSF (703) 292-7734 firstname.lastname@example.org
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2012, its budget was $7.0 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.
Useful NSF Web Sites:
NSF Home Page: http://www.nsf.gov
NSF News: http://www.nsf.gov/news/
For the News Media: http://www.nsf.gov/news/newsroom.jsp
Science and Engineering Statistics: http://www.nsf.gov/statistics/
Awards Searches: http://www.nsf.gov/awardsearch/ | <urn:uuid:c05f8f19-595e-4ad5-9208-8d378ff764cb> | 3.140625 | 1,033 | News (Org.) | Science & Tech. | 51.255162 | 444 |
|Feb8-13, 06:53 AM||#1|
Friction conditions on contact point of disc
I have some doubts regarding the friction force on a certain situation. Imagine a disc over a fixed flat surface. Like this:
The disc has two motions, rotational and translational, but these are independent of each other. I mean the translational motion does not come from the rotation of the disc; imagine an external force that moves the disc.
Now, with this in mind, let's say that the disc is moving from left to right, always keeping contact with the surface. At the same time the disc is rotating clockwise with some angular velocity. Because of the different movements, there is slip at the contact point.
Now to the question itself. What kind of friction do I have at the contact point? Do I have rolling or sliding friction there? And in what direction does the friction force act, given that the disc is moving to the right but at the contact point the surface of the disc is moving to the left?
|Similar Threads for: Friction conditions on contact point of disc|
|Help with Point Charges and Charge by Contact||Introductory Physics Homework||12|
|accumulation point unit disc||Calculus & Beyond Homework||4|
|Movement of a point on the perimeter of a disc||Advanced Physics Homework||0|
|Point contact diode||Atomic, Solid State, Comp. Physics||0|
|Contact friction||Introductory Physics Homework||3| | <urn:uuid:71afd96d-d742-443c-bdac-974756c057f4> | 2.75 | 316 | Comment Section | Science & Tech. | 52.911731 | 445 |
PostgreSQL has a large object facility, which provides stream-style access to user data that is stored in a special large-object structure. Streaming access is useful when working with data values that are too large to manipulate conveniently as a whole.
This chapter describes the implementation and the programming and query language interfaces to PostgreSQL large object data. We use the libpq C library for the examples in this chapter, but most programming interfaces native to PostgreSQL support equivalent functionality. Other interfaces might use the large object interface internally to provide generic support for large values. This is not described here. | <urn:uuid:57a87af7-2167-4b33-94bb-7ef763afaa29> | 2.578125 | 118 | Documentation | Software Dev. | 23.697647 | 446 |
A regular expression (regexp) is a text string that describes some set of strings.
Functions that handle regular expressions, based on GNU regexp-0.12, have been implemented (for more details, see the GNU documentation about regexp rules).
The functions available from Search menu provide search forward or backward and replace. Each of them prompts a dialog box to get the target regexp.
Regular expressions are composed of characters and operators that match one or more characters. Here is a summary of common operators:
| matches one of a choice of regular expressions.
[...] matches one item of a list.
[^...] matches a single character not represented by one of the list items.
(...) treats any number of other operators (i.e. subexpressions) as a unit.
\digit matches a specified preceding group.
^ matches the beginning of line.
$ matches the end of line.
Smac provides the following functions:
returns the position of the next regular expression regexp, or -1 if regexp has not been found, or -2 if regexp is not valid.
returns the position of the previous regular expression regexp, or -1 if regexp has not been found, or -2 if regexp is not valid.
returns the beginning position of the substring n of the regexp found by the previous search call to a regexp.
returns the end position of the substring n of the regexp found by the previous search call to a regexp.
replaces the regular expression regexp with the string newstring. If the argument regexp is omitted, the previous search call to a regexp is used. It returns 1 on success, 0 otherwise.
June 2004 Sky from the Keeble Observatory
We will be unable to view the June 8th "transit of Venus" from the Keeble Observatory - the event will be over before the rising Sun clears our obscured eastern horizon. You will, however, be able to view it from several global web sites: The Exploratorium, in San Francisco will carry links to the Penteli Observatory, near Athens, Greece. NASA will offer links to several observatories around the world. The European Southern Observatory will have live coverage, as well.
May weather was not kind to comet watchers in the Center of the Universe. Haze on those evenings when it wasn't actually cloudy, and clouds and rain on other days, meant that it was unlikely that anyone actually saw comets NEAT and LINEAR. June weather, typically, will not be any improvement, and the comets are fading rapidly as their orbits carry them away from the Sun. There will be others in the future, hopefully more fortuitously positioned. Bright comets, like Hale-Bopp of several years ago, appear roughly once per decade.
Comets have long fascinated humankind. Regular meteor showers and the stately motion of the planets through the zodiac could be anticipated. But, what to make of these "hairy stars" which persist for months, seemingly at random? (Our word "comet" comes from the Latin "coma" - which means hair!) Aristotle assumed they were atmospheric phenomena, describing them in his treatise on weather. Tycho Brahe used geometry to show that they were in the realm of the planets (but not the stars). It was once church doctrine that these represented harbingers of evil - firebrands hurled by an angry God to warn sinful humankind. Indeed, Halley's Comet was high above the Battle of Hastings, which was certainly bad luck for the English. It might have been considered good luck for the Norman victors!
Tycho got it right, of course. Comets are part of our solar system, just as much as are the planets, asteroids, and meteoroids. Indeed, meteor showers are residue from comets, which provides us a clue to their origin and makeup.
Some 5 billion years ago, comets were among the first large aggregate objects to condense out of the cloud of gas and dust which formed our solar system. Far from the growing heat source at the center, which was to become our Sun, they contain dust and frozen ices of water, methane, and ammonia. We believe these represent the oldest undisturbed remnants of the original pre-solar cloud. Pristine material from the time the Earth and other planets first formed. Fred Whipple dubbed them "dirty snowballs" and the description is apt. There are two major reservoirs of comets left from these early times. The so-called Kuiper Belt lies beyond the orbit of distant Neptune and lies in the plane of the ecliptic, roughly from 30 to 500 AU. (1 AU - "astronomical unit" is the average distance between Earth and Sun, about 150 million kilometers, or 93 million miles.) Pluto, and recently-discovered Sedna are among the largest denizens of this region ... yes, Pluto is probably just a large comet! Far beyond the Kuiper Belt lies the huge spherical shell called the Oort Cloud - stretching from 10,000 to perhaps 100,000 AU.
A comet's orbit may be disturbed by a collision, or by gravitational perturbations from passing stars or clouds. The comet then falls toward the inner solar system. As sunlight warms the comet, the volatile ices vaporize and carry with them dust and rocks from the nucleus of the comet. These gasses and dust particles are pushed away from the Sun by the pressure of sunlight and the streaming solar wind to form the tails of the comet. Ultraviolet light ionizes the gas and makes it glow. As the comet sweeps through its orbit the dust from its tail is strewn along its orbit, leaving the debris which makes up periodic meteor showers.
Lunar phases for June: Full Moon on the 3rd, at 12:20 am; Last Quarter on the 9th, at 4:03 pm; New Moon on the 17th, at 4:27 pm; First Quarter on the 25th, at 3:08 pm. Summer solstice, when the Sun is highest above the equator, will occur at 8:58 pm on the 20th. This is sometimes called the "longest day of the year," but it's 24 hours just like any other day! In fact, the solstice event will take place after sunset on the 20th! We will experience over 14 hours of sunlight on the 20th and 21st, however.
Evening planet watching in June is largely Jupiter watching. After sunset, Jupiter emerges from twilight high to the southwest, about 50 degrees off the horizon. It sets about 1:00 am. Saturn and Mars are above the horizon at sunset, but low to the northwest and probably lost in the horizon clutter and haze. Mornings are not promising for planet watching, either. Mercury and Venus have returned to the pre-dawn sky, but will be very low (< 10 degrees) on the northeast horizon at sunrise.
An hour or so after sunset at mid-month, the overhead view is essentially out of the plane of the Galaxy, which is nearly coincident with the horizon at 8:30 pm. Castor and Pollux are to the west, settling towards the horizon. High above the southern horizon, almost at zenith, is bright Arcturus, in the constellation Bootes. To the west, just below and to the right of Jupiter, you'll find Regulus, the heart of the Lion in the constellation Leo. The days around the new Moon are a good time to hope for clear skies ... maybe a cold front will sweep away the haze enough so that you can use Jupiter to find some deeper objects with your binoculars. Within 4 degrees to the right and slightly above Jupiter you may find several galaxies from the Messier catalog - M105, M95, and M96 are all in Leo. About 6 degrees above Jupiter is another, known as M65. These objects were all catalogued by Charles Messier to avoid confusing them with comets. We'll say more about the Messier Catalog next month.
Vega is to the ENE about 37 degrees off the horizon and appearing higher each night. It will be prominent in the late summer and autumn skies. Near Vega, use binoculars to find the Ring Nebula. Below Vega rises the constellation Cygnus. This marks the plane of the Galaxy, and the general direction towards which our Sun is moving in its orbit about the distant center of the Milky Way.
For your own monthly star chart, you can direct your web browser to http://www.skymaps.com. You will find extensive descriptions of what's worth looking for, and you can download and print a single copy for your personal use.
Copyright 2004 George Spagna | <urn:uuid:a14c43de-99e0-4bc2-9aff-30cb43f3da1e> | 3.078125 | 1,450 | News (Org.) | Science & Tech. | 60.031577 | 448 |
Fascinating creatures indeed bentley!
Cuttlefish belong to the class Cephalopoda (which also includes squid, octopuses, and nautiluses). Although they are called fish, they are mollusks, not fish! Recent studies indicate that cuttlefish are among the most intelligent invertebrates. Internationally, 120 species of cuttlefish are recognized. I am not sure how many species are in South African waters though!
Cuttlefish have eight arms and two tentacles furnished with suckers, with which they secure their prey.
They have a life expectancy of approximately 2 years and feed on small mollusks, crabs, shrimp, fish, octopuses, worms, and other cuttlefish. They are preyed upon by dolphins, sharks, fish, seals and other cuttlefish.
Their cuttle-bones are porous and used for buoyancy by changing the gas-to-liquid ratio in the chambered cuttle-bone.
Cuttlefish eyes are among the most developed in the animal kingdom.
The blood of a cuttlefish is an unusual shade of green-blue. The reason for this is the fact that they use the copper containing protein hemocyanin to carry oxygen instead of the red iron-containing protein hemoglobin that is found in mammals. They have 3 separate hearts to pump their blood. Two hearts pump blood to the pair of gills and the third pumps blood to the rest of the body.
Photo of two cuttlefish. Here are some interesting facts about their ability to change colours:

Cuttlefish are sometimes referred to as the chameleon of the sea because of their remarkable ability to rapidly alter their skin color at will. Their skin flashes a fast-changing pattern as communication to other cuttlefish and to camouflage them from predators. This color-changing function is produced by groups of red, yellow, brown, and black pigmented chromatophores above a layer of reflective iridophores and leucophores, with up to 200 of these specialized pigment cells per square millimeter. The pigmented chromatophores have a sac of pigment and a large membrane that is folded when retracted. There are 6-20 small muscle cells on the sides which can contract to squash the elastic sac into a disc against the skin. Yellow chromatophores (xanthophores) are closest to the surface of the skin, red and orange are below (erythrophores), and brown or black are just above the iridophore layer (melanophores).

The iridophores reflect blue and green light. Iridophores are plates of chitin or protein, which can reflect the environment around a cuttlefish. They are responsible for the metallic blues, greens, golds, and silvers often seen on cuttlefish. All of these cells can be used in combinations. For example, orange is produced by red and yellow chromatophores, while purple can be created by a red chromatophore and an iridophore. The cuttlefish can also use an iridophore and a yellow chromatophore to produce a brighter green. As well as being able to influence the color of the light that reflects off their skin, cuttlefish can also affect the light's polarization, which can be used to signal to other marine animals, many of which can also sense polarization. | <urn:uuid:b54600a1-7fc1-4680-a155-982badee16b7> | 3.421875 | 703 | Comment Section | Science & Tech. | 39.829133 | 449 |
by Staff Writers
Ann Arbor MI (SPX) Mar 14, 2013
In evolutionary biology, there is a deeply rooted supposition that you can't go home again: Once an organism has evolved specialized traits, it can't return to the lifestyle of its ancestors.
There's even a name for this pervasive idea. Dollo's law states that evolution is unidirectional and irreversible. But this "law" is not universally accepted and is the topic of heated debate among biologists.
Now a research team led by two University of Michigan biologists has used a large-scale genetic study of the lowly house dust mite to uncover an example of reversible evolution that appears to violate Dollo's law.
The study shows that tiny free-living house dust mites, which thrive in the mattresses, sofas and carpets of even the cleanest homes, evolved from parasites, which in turn evolved from free-living organisms millions of years ago.
"All our analyses conclusively demonstrated that house dust mites have abandoned a parasitic lifestyle, secondarily becoming free-living, and then speciated in several habitats, including human habitations," according to Pavel Klimov and Barry OConnor of the U-M Department of Ecology and Evolutionary Biology.
Their paper, "Is permanent parasitism reversible?-Critical evidence from early evolution of house dust mites," is scheduled to be published online March 8 in the journal Systematic Biology.
Mites are arachnids related to spiders (both have eight legs) and are among the most diverse animals on Earth. House dust mites, members of the family Pyroglyphidae, are the most common cause of allergic symptoms in humans, affecting up to 1.2 billion people worldwide.
Despite their huge impact on human health, the evolutionary relationships between these speck-sized creatures are poorly understood. According to Klimov and OConnor, there are 62 different published hypotheses arguing about whether today's free-living dust mites originated from a free-living ancestor or from a parasite-an organism that lives on or in a host species and damages its host.
In their study, Klimov and OConnor evaluated all 62 hypotheses. Their project used large-scale DNA sequencing, the construction of detailed evolutionary trees called phylogenies, and sophisticated statistical analyses to test the hypotheses about the ancestral ecology of house dust mites.
On the phylogenetic tree they produced, house dust mites appear within a large lineage of parasitic mites, the Psoroptidia. These mites are full-time parasites of birds and mammals that never leave the bodies of their hosts. The U-M analysis shows that the immediate parasitic ancestors of house dust mites include skin mites, such as the psoroptic mange mites of livestock and the dog and cat ear mite.
"This result was so surprising that we decided to contact our colleagues to obtain their feedback prior to sending these data for publication," said Klimov, the first author of the paper and an assistant research scientist in the Department of Ecology and Evolutionary Biology.
The result was so surprising largely because it runs counter to the entrenched idea that highly specialized parasites cannot return to the free-living lifestyle of their ancestors.
"Parasites can quickly evolve highly sophisticated mechanisms for host exploitation and can lose their ability to function away from the host body," Klimov said.
"They often experience degradation or loss of many genes because their functions are no longer required in a rich environment where hosts provide both living space and nutrients. Many researchers in the field perceive such specialization as evolutionarily irreversible."
The U-M findings also have human-health implications, said OConnor, a professor in the Department of Ecology and Evolutionary Biology and a curator of insects and arachnids at the U-M Museum of Zoology.
"Our study is an example of how asking a purely academic question may result in broad practical applications," he said. "Knowing phylogenetic relationships of house dust mites may provide insights into allergenic properties of their immune-response-triggering proteins and the evolution of genes encoding allergens."
The project started in 2006 with a grant from the National Science Foundation. The first step was to obtain specimens of many free-living and parasitic mites-no simple task given that some mite species are associated with rare mammal or bird species around the world.
The research team relied on a network of 64 biologists in 19 countries to obtain specimens. In addition, Klimov and OConnor conducted field trips to North and South America, Europe, Asia and Africa. On one occasion, it took two years to obtain samples of an important species parasitizing African birds.
A total of around 700 mite species were collected for the study. For the genetic analysis, the same five nuclear genes were sequenced in each species.
How might the ecological shift from parasite to free-living state have occurred?
There is little doubt that early free-living dust mites were nest inhabitants-the nests of birds and mammals are the principal habitat of all modern free-living species in the family Pyroglyphidae.
Klimov and OConnor propose that a combination of several characteristics of their parasitic ancestors played an important role in allowing them to abandon permanent parasitism: tolerance of low humidity, development of powerful digestive enzymes that allowed them to feed on skin and keratinous (containing the protein keratin, which is found in human hair and fingernails) materials, and low host specificity with frequent shifts to unrelated hosts.
These features, which occur in almost all parasitic mites, were likely important precursors that enabled mite populations to thrive in host nests despite low humidity and scarce, low-quality food resources, according to Klimov and OConnor. For example, powerful enzymes allowed these mites to consume hard-to-digest feather and skin flakes composed of keratin.
With the advent of human civilization, nest-inhabiting pyroglyphids could have shifted to human dwellings from the nests of birds and rodents living in or around human homes. Once the mites moved indoors, the potent digestive enzymes and other immune-response-triggering molecules they carry made them a major source of human allergies.
University of Michigan
Darwin Today At TerraDaily.com
| <urn:uuid:d326f074-df33-47b2-afdc-38759e5eadd0> | 3.046875 | 1,417 | News Article | Science & Tech. | 21.896895 | 450 |
How do people know how old a star is?
Wow, this is a popular question! Scientists have learned a lot about stars, especially the stages in their lives. Since a single star can live for billions of years, scientists study several stars at different stages of their lives.
Certain characteristics of stars are related to each other. The luminosity, temperature, magnitude, spectral class and mass are all related. For example, larger stars are cooler, red in color and are very luminous. All these characteristics are important in determining the age of a star, but scientists found that the composition of a star is the most important.
The Hertzsprung-Russell Diagram is a very famous diagram that shows how these characteristics of stars are related. Stars are divided into different categories depending on their temperature, size, etc. Most stars are either main sequence stars or giants. Scientists realized that the compositions of stars were related to the diagram. Stars spend most of their lives as main sequence stars. During this time they burn hydrogen in their cores.
When a star burns hydrogen it creates helium. At some point the star uses up all the hydrogen, and starts to burn helium. The star expands and cools while burning the helium. During this stage a star is called a giant.
So why tell you all of this? Well, scientists discovered this is a very easy way to compare stars. It is also a great way to tell the age of a star. Scientists can look at the spectra of a star and tell its temperature, which is related to the size, etc. In turn, this information reveals how much hydrogen or helium is left inside the star. We know the rate at which stars burn the gases. Scientists can now tell how old the star is depending on its composition!
Submitted by Dana, Kelly, Michael, Kelli, Tommy, Nick, Randall, (ages 11 &12, North Carolina)
You might also be interested in:
Most stars fall on a rather wide line (highlighted in red) that passes from the lower right to the upper left of the page. This line is called the main sequence. It tells us interesting things about how...more
Follow along the horizontal curves (highlighted in red). These are the locations of the giant stars. Giants don't behave like main sequence stars. The brightness is constant or even increasing as you...more
It depends on which type of motion you are asking about. If you take a birds-eye view from the top of the solar system all the planets orbit around the Sun in a counter-clockwise (or direct) direction....more
Almost everyone has a question or two about living in space. What do astronauts do in space? How do they do everyday things like eat, sleep and go to the bathroom? It's important to note that astronauts...more
There is a really neat internet program called Solar System Live that shows the position of all of the planets and the Sun for any given day. If you go to that page, you'll see an image similar to the...more
The picture of the American Flag (the one put there by the Apollo astronauts) is waving (or straight out) in the wind. How could that be possible if there is no atmosphere on the Moon? Was it some sort...more
I was wondering if there is a new planet? Are there planets (a tenth planet?) after Pluto belonging to our solar system? What are the names of the new planets discovered in the solar system? Are there...more
If that is so, the energy released during the Big Bang must have created many such black holes. Therefore most of the Energy of the Big bang must have disappeared in that form. Then how did the Universe...more | <urn:uuid:c0c6b479-5438-47f4-a412-d5c36f7ad001> | 3.671875 | 816 | Content Listing | Science & Tech. | 63.365947 | 451 |
October 3, 2011 | A team that includes NCAR scientists Anne Boynard and Alex Guenther has found that the rate at which plant canopies emit isoprene, a volatile organic compound, is influenced by circadian rhythms. The discovery has the potential to lead to more accurate predictions of ground-level ozone, which is harmful to human health.
For the study, the researchers made measurements of isoprene in Malaysia above both tropical rain forest and oil palm plantations. They observed for the first time ever a circadian (24-hour) rhythm operating in concert across the entire tree canopy, especially in the palm plantation.
The finding changes how scientists estimate isoprene emissions from plants, as both the palm plantations and rain forest emit less isoprene than shown by computer models of emissions. This has implications for ground-level ozone, which forms when volatile organic compounds such as isoprene react with nitrogen oxides from automobiles and industry.
The researchers incorporated the circadian pattern into the NCAR Model of Emissions of Gases and Aerosols from Nature (MEGAN) model to estimate isoprene emissions for input to ozone models. They then compared simulated ground-level ozone to observed ozone measurements from 290 monitoring sites in the United States. They found that model accuracy was significantly improved. Accounting for circadian impacts on isoprene emissions could especially improve ozone predictions in isoprene-sensitive regions of the world, which include the United States, Mediterranean, Middle East, Japan, and parts of Southeast Asia.
The research was published in Nature Geoscience in September.
C. N. Hewitt, K. Ashworth, A. Boynard, A. Guenther, B. Langford, A. R. MacKenzie, P. K. Misztal, E. Nemitz, S. M. Owen, M. Possell, T. A. M. Pugh, A. C. Ryan, O. Wild, “Ground-level ozone influenced by circadian control of isoprene emissions,” Nature Geoscience, 2011; DOI: 10.1038/ngeo1271 | <urn:uuid:ef2fddd6-6d5b-4319-ad5a-b03922cfdc5c> | 3.296875 | 437 | Academic Writing | Science & Tech. | 42.501375 | 452 |
This is another image I found on Google+
All lines are absolutely straight, parallel and perpendicular but why does it appear to have a curvature?
Related: How does this illusion work?
I like these questions :) Many of these illusions come from Prof. Akiyoshi Kitaoka, a Japanese psychologist and expert in Gestalt psychology. On his website you'll find some more fascinating illusions, and questions to ask here ;)
The illusion above is named Cafe Wall illusion and the newest model to explain those illusions is the contrast-polarity model. Short explanation from his webpage:
The paper explained it better to me:
This explains why you perceive a tilt. If you now position the smaller squares at distinct edges of the big squares, you can achieve 2- and 3-dimensional illusions. Here you can see the tilt increase as more small squares are added:
Here you can see that the positioning of the smaller squares is critical to achieving the 3D bulge effect in the original image from your question:
Notice that Gestalt psychology is a non-reductionistic approach and investigates mainly the phenomenology and underlying Gestalt laws of visual perception. How these Gestalt laws developed at a deeper level is a question of neurobiological evolution, similar to asking "why do some species of apes have color vision and some not?". The ellipses in the explanatory picture above show that our visual system tries to group separated objects (a square and a line of the same contrast/brightness) into one line, and so we see a tilt. I'm guessing here, but this is probably because the brain stores the things and objects we see mainly by contour and shape, rather than pixel by pixel the way a computer or digital camera does, which of course perceives no tilt or 3D illusion in any of these trick images :)
Read the papers for more explanations and examples, not behind a paywall: | <urn:uuid:41b7145c-e537-491e-9369-ebe5ab5d7f66> | 2.71875 | 399 | Q&A Forum | Science & Tech. | 25.346958 | 453 |
Scientific name: Coenonympha tullia
Rests with wings closed. Some have a row of ‘eyespots’ on the underwings, like the Ringlet, but some don’t.
The Large Heath is restricted to wet boggy habitats in northern Britain, Ireland, and a few isolated sites in Wales and central England.
The adults always sit with their wings closed and can fly even in quite dull weather provided the air temperature is higher than 14°C. The size of the underwing spots varies across its range; a heavily spotted form (davus) is found in lowland England, a virtually spotless race (scotica) in northern Scotland, and a range of intermediate races elsewhere (referred to as polydama).
The butterfly has declined seriously in England and Wales, but is still widespread in parts of Ireland and Scotland.
Size and Family
- Family – Browns
- Small/Medium Sized
- Wing Span Range (male to female) - 41mm
- Listed as a Section 41 species of principal importance under the NERC Act in England
- Listed as a Section 42 species of principal importance under the NERC Act in Wales
- Classified as a Northern Ireland Priority Species by the NIEA
- UK BAP status: Priority Species
- Butterfly Conservation priority: High
- European Status: Vulnerable
- Protected in Great Britain for sale only
The main foodplant is Hare's-tail Cottongrass (Eriophorum vaginatum) but larvae have been found occasionally on Common Cottongrass (E. angustifolium) and Jointed Rush (Juncus articulatus). Early literature references to White Beak-sedge (Rhyncospora alba), are probably erroneous.
- Countries – England, Scotland and Wales
- Northern Britain and throughout Ireland
- Distribution Trend Since 1970’s = -43%
The butterflies breed in open, wet areas where the foodplant grows; this includes habitats such as lowland raised bogs, upland blanket bogs and damp acidic moorland. Sites are usually below 500m (600m in the far north) and have a base of Sphagnum moss interspersed with the foodplant and abundant Cross-leaved Heath (the main adult nectar source).
In Ireland, the butterfly can be found where manual peat extraction has lowered the surface of the bog, creating damp areas with local concentrations of foodplant. | <urn:uuid:3a335f27-c035-4215-b4c2-b0179298929c> | 3.5 | 520 | Knowledge Article | Science & Tech. | 28.962464 | 454 |
We said on the Getting Started Page that HTML is
nothing more than a box of highlighters that we use to carefully describe our text. This is mostly the entire story. Normally our content is just text we want to define in some way. But what if our content is not just text? What if, let’s say, we have a bunch of images that we want to include on the page? We certainly can’t type in forty thousand pixel values on the keyboard to make up a 200x200-pixel image…
Motivation and Syntax
When the content we want is not text, then we have to have some way of including that content on the page. The most common example is an image. The problem, however, is that HTML tags are like highlighters: they have an opening tag and a closing tag. Between the opening and closing tags fits the data that is “highlighted” by the tag. If we were to have an <image> tag in HTML (we don’t have that tag, though we do have one close to it), what would go “inside” of it? What might you replace the stuff with inside of <image>stuff</image>?

It simply doesn’t make sense for an <image> tag to exist like all the other HTML tags, because the other HTML tags define something else while an image is, itself, something that can be defined.
The image tag and all such manner of tags are called “element” tags because, just like the name implies, the tags are themselves the elements all their own. For all intents and purposes you can treat element tags just like text.
If your content is like the words in a textbook and regular HTML is like a pack of highlighters, then these special element tags are indeed like the text and not like the highlighters at all.
The XML standard says that every tag must be closed. But we have this new breed of tags that really don’t make sense to be closed. What we have is a compromise between the two extremes. We have a self-closing tag. The tag is just like the tags we learned about on the General Syntax Page with two exceptions.
- There is no closing tag
- There is a / just before the closing > to indicate that the tag is self-closing.
So this looks like:
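<tagname />

(Here, tagname is just a stand-in for whatever element tag is being used; it is not a real HTML tag.)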
(There is commonly a space before the /, but again spacing after the name of the tag is arbitrary.)
You might imagine that there could be a tag that produces the copyright symbol (©). There isn’t (we’ll get to special characters later). But if there were, you might imagine there being an element tag called copyrightsymbol that we could use right in line with our text to produce a ©.
Images on Web sites take the form of image files stored on a server. Much like line breaks, images are element tags that are treated like text. The difference is that the image element tag is replaced by the actual image file.
We mentioned the (non-existent) <image> tag earlier in our discussion on the necessity of the element-style tags. The real tag to include an image on the page is <img>. This tag makes little sense if used without its src attribute.
Let’s say we have the image
image1.jpg, uploaded to the same folder as our HTML file. To include the image on the page, all we have to insert is:
<p><img src="./image1.jpg" /></p>
Which would be rendered (displayed) like this, with the image itself appearing in place of the tag:
And, again, images are like text — they go right in with your content:
<p>This is image1: <img src="./image1.jpg" />. Cool, right?</p>
This would be rendered like:
This is image1: . Cool, right?
(More information on how to reference your images using different paths depending on where your images are stored can be found on the Internet File Management Page of the Web Publishing at the UW online curriculum.)
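As a quick illustration (the file and folder names here are invented), a few common ways to reference an image with a relative path look like this:

<img src="./photo.jpg" />          <!-- photo.jpg sits in the same folder as the HTML file -->
<img src="./images/photo.jpg" />   <!-- photo.jpg sits in an images subfolder -->
<img src="../photo.jpg" />         <!-- photo.jpg sits one folder up from the HTML file -->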
If your images contribute to the content of your site, then you should provide an
alt attribute for your images. The
alt attribute is a text version of your image. Usually it is just a concise sentence describing the image. The
alt attribute will be used if your image is unavailable for any reason (e.g. if you delete the image file, if your viewers can’t see images, etc.)
If we had a picture of a dog jumping into a lake called
spot.jpg, we might use the following HTML to place it on the page:
<p>A picture I took: <img src="./spot.jpg" alt="Spot jumping into a lake." /></p>
If your image is purely a visual element (e.g. an icon next to a download link or an image used in your page’s layout), then you don’t need to provide an
alt attribute. If your web design work is sponsored by the University, be sure to check out the UW’s page on Web Site Accessibility by clicking here.
The spacing rules of HTML say that when we break the line in the source code (e.g. using the enter key on the keyboard), we don’t also break the line on the rendered (displayed) version of the page.
This is why the following two blocks of code:
<p>This is text. This is more text</p>
<p>This is text.
This is more text</p>
…are considered equivalent. They will both be displayed by the web browser in exactly the same way:
This is text. This is more text
If we want to force a line break in the rendered page, we use the <br /> element tag. The following block of code:

<p>
In what particular thought to work I know not; <br />
But in the gross and scope of my opinion, <br />
This bodes some strange eruption to our state.<br />
</p>

…is rendered like:

In what particular thought to work I know not;
But in the gross and scope of my opinion,
This bodes some strange eruption to our state.
Above we imagined that there was an HTML element tag called
copyrightsymbol that would be used to produce a Copyright symbol (©). If there were such a tag, we might have the following HTML:
<p>This page is Copyright (<copyrightsymbol />) 1989 By George Orwell</p>
There turns out to be so many such symbols that the creators of HTML decided to create a whole group of “special symbols” (or “special characters”). These characters are used in the place of any character you cannot type using a standard US-English QWERTY keyboard. They are also used in the place of some “reserved” characters (like the less-than and greater-than signs, <, and >).
There are many such characters. They all start with an ampersand (&) and end with a semicolon (;). The web browser sees these and replaces them with the special character. A few common ones are shown below. You can find a complete listing of all such special characters by doing a search in your favorite search engine for “HTML Special Characters”.
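For example, using a few entities that do exist (&copy; produces ©, &amp; produces &, &lt; produces <, and &gt; produces >), we could write:

<p>This page is Copyright &copy; 1989 By George Orwell</p>
<p>Use &lt;p&gt; tags for paragraphs &amp; keep your markup tidy.</p>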
| <urn:uuid:802e6015-705c-4518-969a-a08bf3e5ad88> | 3.234375 | 1,534 | Documentation | Software Dev. | 64.109588 | 455 |
I was given this code from a thread that I created (i cannot remember who) and I would like somebody (preferably the same person who gave it to me) to explain what each line does ('cause i've got no idea). And also, when I compile it (in MSVC++ 6.0), it does nothing but sit there!! It is meant to display all of the lines of a text file. I really wanted it to store each line in a variable if it wasn't empty, but anyway.....
...here is the code:
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
using namespace std;
int main()
{
    ifstream file("myfile.txt"); // or whatever
    vector<string> Lines;
    string CurrentLine;
    while (getline(file, CurrentLine))  // read the file one line at a time
        if (!CurrentLine.empty())       // store it if not empty
            Lines.push_back(CurrentLine);
    // show the stored lines
    for (int i = 0; i < Lines.size(); i++)
        cout << Lines[i] << endl;
} | <urn:uuid:364ad431-a0c8-44e7-8c15-f614a15b6f5a> | 2.875 | 180 | Personal Blog | Software Dev. | 78.901365 | 456 |
Did you know that most of the computers on which you deploy applications have more power in the GPU on the video card than in the CPU, even multi-core machines? Harnessing the power of the GPU is the next step in the manycore/multicore revolution and can mean astonishing improvements in execution time. Depending on how data parallel your calculations are, you might see a speedup of 5, 10, or even 50 times! Imagine a calculation that takes 24 hours today completing in half an hour instead. What new capabilities would that enable for your users? Until recently, running code on the GPU has meant using one of several "C-like" languages. The upcoming release of C++ Accelerated Massive Parallelism (AMP) means that you can use accelerators like the GPU from native C++. Visual Studio includes debugging and profiling support for C++ AMP, and you don't need to download or install any new libraries to accelerate your code. In this session, see the power of C++ AMP and learn the basic concepts you need to adapt your code to use this massive parallelism. | <urn:uuid:8fa1ab43-5b0e-4ef6-aabd-038ffb133eca> | 3.03125 | 224 | News (Org.) | Software Dev. | 46.038138 | 457 |
Cyanobacterial Emergence at 2.8 Gya and Greenhouse Feedbacks
D. Schwartzman, K. Caldeira & A. Pavlov
Approximately 2.8 billion years ago, cyanobacteria and a methane-influenced greenhouse emerged nearly simultaneously. Here we hypothesize that the evolution of cyanobacteria could have caused a methane greenhouse.
Apparent cyanobacterial emergence at about 2.8 Gya coincides with the negative excursion in the organic carbon isotope record, which is the first strong evidence for the presence of atmospheric methane. The existence of weathering feedbacks in the carbonate-silicate cycle suggests that atmospheric and oceanic CO2 concentrations would have been high prior to the presence of a methane greenhouse (and thus the ocean would have had high bicarbonate concentrations). With the onset of a methane greenhouse, carbon dioxide concentrations would decrease. Bicarbonate has been proposed as the preferred reductant that preceded water for oxygenic photosynthesis in a bacterial photosynthetic precursor to cyanobacteria; with the drop of carbon dioxide level, Archean cyanobacteria emerged using water as a reductant instead of bicarbonate (Dismukes et al., 2001). Our thermodynamic calculations, with regard to this scenario, give at least a tenfold drop in aqueous CO2 levels with the onset of a methane-dominated greenhouse, assuming surface temperatures of about 60°C and a drop in the level of atmospheric carbon dioxide from about 1 to 0.1 bars. The buildup of atmospheric methane could have been triggered by the boost in oceanic organic productivity that arose from the emergence of pre-cyanobacterial oxygenic phototrophy at about 2.8–3.0 Gya; high temperatures may have precluded an earlier emergence. A greenhouse transition timescale on the order of 50–100 million years is consistent with results from modeling the carbonate-silicate cycle. This is an alternative hypothesis to proposals of a tectonic driver for this apparent greenhouse transition. | <urn:uuid:f4947171-69c0-4989-a627-b8ca8e544ab3> | 2.71875 | 417 | Academic Writing | Science & Tech. | 22.795393 | 458 |
The sum utility calculates and prints a 16-bit checksum for the named file and the number of 512-byte blocks in the file. It is typically used to look for bad spots, or to validate a file communicated over some transmission line.
The following options are supported:
Use an alternate (machine-dependent) algorithm in computing the checksum.
The following operands are supported:
A path name of a file. If no files are named, the standard input is used.
See largefile(5) for the description of the behavior of sum when encountering files greater than or equal to 2 Gbyte ( 231 bytes).
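The following shows a typical invocation; the checksum and block count printed here are made-up values, for illustration only.

example% sum myfile
4480 2 myfile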
See environ(5) for descriptions of the following environment variables that affect the execution of sum: LC_CTYPE, LC_MESSAGES, and NLSPATH.
See attributes(5) for descriptions of the following attributes:
|ATTRIBUTE TYPE||ATTRIBUTE VALUE|
“Read error” is indistinguishable from end of file on most devices; check the block count.
Portable applications should use cksum(1).
sum and usr/ucb/sum (see sum(1B)) return different checksums. | <urn:uuid:73ff07cb-09a1-46ee-a241-2dd7b2cf7934> | 2.65625 | 253 | Documentation | Software Dev. | 42.467734 | 459 |
Note: Using access() to check if a user is authorized to e.g. open a file before actually doing so using open() creates a security hole, because the user might exploit the short time interval between checking and opening the file to manipulate it.
Note: I/O operations may fail even when access() indicates that they would succeed, particularly for operations on network filesystems which may have permissions semantics beyond the usual POSIX permission-bit model.
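Rather than checking with access() first, it is usually safer to attempt the operation and handle the failure. A minimal sketch (the read_config() wrapper and file name are illustrative only, not part of the os module):

import errno

def read_config(path):
    # Try the operation directly instead of calling os.access() first;
    # this avoids the check-then-use race described in the notes above.
    try:
        fp = open(path)
    except IOError, e:
        if e.errno in (errno.EACCES, errno.ENOENT):
            return None      # unreadable or missing; let the caller decide
        raise
    try:
        return fp.read()
    finally:
        fp.close()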
Although Windows supports chmod(), you can only set the file's read-only flag with it (via the stat.S_IWRITE and stat.S_IREAD constants or a corresponding integer value). All other bits are ignored.
|path, uid, gid)|
|path, uid, gid)|
'..'even if they are present in the directory. Availability: Macintosh, Unix, Windows.
Changed in version 2.3: On Windows NT/2k/XP and Unix, if path is a Unicode object, the result will be a list of Unicode objects.
0666(octal). The current umask value is first masked out from the mode. Availability: Macintosh, Unix.
FIFOs are pipes that can be accessed like regular files. FIFOs exist until they are deleted (for example with os.unlink()). Generally, FIFOs are used as rendezvous between ``client'' and ``server'' type processes: the server opens the FIFO for reading, and the client opens it for writing. Note that mkfifo() doesn't open the FIFO -- it just creates the rendezvous point.
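A minimal sketch of that rendezvous pattern (Unix-only, like mkfifo() itself; the path and message are invented and error handling is omitted):

import os

path = "/tmp/example_fifo"
if not os.path.exists(path):
    os.mkfifo(path)                # create the rendezvous point; this does not open it

pid = os.fork()
if pid == 0:                       # child acts as the "client" and opens the FIFO for writing
    w = open(path, "w")
    w.write("hello through the FIFO\n")
    w.close()
    os._exit(0)
else:                              # parent acts as the "server" and opens the FIFO for reading
    r = open(path, "r")            # blocks until the writer opens the other end
    print r.read()
    r.close()
    os.waitpid(pid, 0)
    os.unlink(path)                # FIFOs exist until they are deleted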
|filename[, mode=0600, device])|
0777(octal). On some systems, mode is ignored. Where it is used, the current umask value is first masked out. Availability: Macintosh, Unix, Windows.
0777(octal). On some systems, mode is ignored. Where it is used, the current umask value is first masked out. Note: makedirs() will become confused if the path elements to create include os.pardir. New in version 1.5.2. Changed in version 2.3: This function now handles UNC paths correctly.
pathconf_namesdictionary. For configuration variables not included in that mapping, passing an integer for name is also accepted. Availability: Macintosh, Unix.
If name is a string and is not known, ValueError is
raised. If a specific value for name is not supported by the
host system, even if it is included in
OSError is raised with errno.EINVAL for the
os.path.join(os.path.dirname(path), result). Availability: Macintosh, Unix.
>>> import os
>>> statinfo = os.stat('somefile.txt')
>>> statinfo
(33188, 422511L, 769L, 1, 1032, 100, 926L, 1105022698, 1105022732, 1105022732)
>>> statinfo.st_size
926L
>>>
Changed in version 2.3: If stat_float_times returns true, the time values are floats, measuring seconds. Fractions of a second may be reported if the system supports that. On Mac OS, the times are always floats. See stat_float_times for further discussion.
On some Unix systems (such as Linux), the following attributes may also be available: st_blocks (number of blocks allocated for file), st_blksize (filesystem blocksize), st_rdev (type of device if an inode device). st_flags (user defined flags for file).
On other Unix systems (such as FreeBSD), the following attributes may be available (but may be only filled out if root tries to use them): st_gen (file generation number), st_birthtime (time of file creation).
On Mac OS systems, the following attributes may also be available: st_rsize, st_creator, st_type.
On RISCOS systems, the following attributes are also available: st_ftype (file type), st_attrs (attributes), st_obtype (object type).
For backward compatibility, the return value of stat() is also accessible as a tuple of at least 10 integers giving the most important (and portable) members of the stat structure, in the order st_mode, st_ino, st_dev, st_nlink, st_uid, st_gid, st_size, st_atime, st_mtime, st_ctime. More items may be added at the end by some implementations. The standard module stat defines functions and constants that are useful for extracting information from a stat structure. (On Windows, some items are filled with dummy values.)
Note: The exact meaning and resolution of the st_atime, st_mtime, and st_ctime members depends on the operating system and the file system. For example, on Windows systems using the FAT or FAT32 file systems, st_mtime has 2-second resolution, and st_atime has only 1-day resolution. See your operating system documentation for details.
Availability: Macintosh, Unix, Windows.
Changed in version 2.2: Added access to values as attributes of the returned object. Changed in version 2.5: Added st_gen, st_birthtime.
If newvalue is True, future calls to stat() return floats; if it is False, future calls return ints. If newvalue is omitted, return the current setting.
For compatibility with older Python versions, accessing stat_result as a tuple always returns integers.
Changed in version 2.5: Python now returns float values by default. Applications which do not work correctly with floating point time stamps can use this function to restore the old behaviour.
The resolution of the timestamps (that is the smallest possible fraction) depends on the system. Some systems only support second resolution; on these systems, the fraction will always be zero.
It is recommended that this setting is only changed at program startup time in the __main__ module; libraries should never change this setting. If an application uses a library that works incorrectly if floating point time stamps are processed, this application should turn the feature off until the library has been corrected.
For backward compatibility, the return value is also accessible as a tuple whose values correspond to the attributes, in the order given above. The standard module statvfs defines constants that are useful for extracting information from a statvfs structure when accessing it as a sequence; this remains useful when writing code that needs to work with versions of Python that don't support accessing the fields as attributes.
Changed in version 2.2: Added access to values as attributes of the returned object.
None. If given and not
None, prefix is used to provide a short prefix to the filename. Applications are responsible for properly creating and managing files created using paths returned by tempnam(); no automatic cleanup is provided. On Unix, the environment variable TMPDIR overrides dir, while on Windows the TMP is used. The specific behavior of this function depends on the C library implementation; some aspects are underspecified in system documentation. Warning: Use of tempnam() is vulnerable to symlink attacks; consider using tmpfile() (section 14.1.2) instead. Availability: Macintosh, Unix, Windows.
None, then the file's access and modified times are set to the current time. Otherwise, times must be a 2-tuple of numbers, of the form
(atime, mtime)which is used to set the access and modified times, respectively. Whether a directory can be given for path depends on whether the operating system implements directories as files (for example, Windows does not). Note that the exact times you set here may not be returned by a subsequent stat() call, depending on the resolution with which your operating system records access and modification times; see stat(). Changed in version 2.0: Added support for
Nonefor times. Availability: Macintosh, Unix, Windows.
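For instance, to give a copy of a file the same timestamps as the original (the file names are illustrative):

import os

st = os.stat("original.txt")
# apply the original's access and modification times to the copy
os.utime("copy.txt", (st.st_atime, st.st_mtime))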
(dirpath, dirnames, filenames).
dirpath is a string, the path to the directory. dirnames is
a list of the names of the subdirectories in dirpath
'..'). filenames is a list of
the names of the non-directory files in dirpath. Note that the
names in the lists contain no path components. To get a full
path (which begins with top) to a file or directory in dirpath, do os.path.join(dirpath, name).
If optional argument topdown is true or not specified, the triple for a directory is generated before the triples for any of its subdirectories (directories are generated top down). If topdown is false, the triple for a directory is generated after the triples for all of its subdirectories (directories are generated bottom up).
When topdown is true, the caller can modify the dirnames list in-place (perhaps using del or slice assignment), and walk() will only recurse into the subdirectories whose names remain in dirnames; this can be used to prune the search, impose a specific order of visiting, or even to inform walk() about directories the caller creates or renames before it resumes walk() again. Modifying dirnames when topdown is false is ineffective, because in bottom-up mode the directories in dirnames are generated before dirpath itself is generated.
By default errors from the
os.listdir() call are ignored. If
optional argument onerror is specified, it should be a function;
it will be called with one argument, an OSError instance. It can
report the error to continue with the walk, or raise the exception
to abort the walk. Note that the filename is available as the
filename attribute of the exception object.
os.path.islink(path), and invoke
walk(path)on each directly.
This example displays the number of bytes taken by non-directory files in each directory under the starting directory, except that it doesn't look under any CVS subdirectory:
import os
from os.path import join, getsize
for root, dirs, files in os.walk('python/Lib/email'):
    print root, "consumes",
    print sum(getsize(join(root, name)) for name in files),
    print "bytes in", len(files), "non-directory files"
    if 'CVS' in dirs:
        dirs.remove('CVS')  # don't visit CVS directories
In the next example, walking the tree bottom up is essential: rmdir() doesn't allow deleting a directory before the directory is empty:
# Delete everything reachable from the directory named in 'top',
# assuming there are no symbolic links.
# CAUTION: This is dangerous!  For example, if top == '/', it
# could delete all your disk files.
import os
for root, dirs, files in os.walk(top, topdown=False):
    for name in files:
        os.remove(os.path.join(root, name))
    for name in dirs:
        os.rmdir(os.path.join(root, name))
New in version 2.3.
See About this document... for information on suggesting changes. | <urn:uuid:cc9bee38-ec23-4459-b5cc-5601b63c418b> | 2.921875 | 2,355 | Documentation | Software Dev. | 47.354915 | 460 |
16 Jun 2008: Report
The Limits of Climate Modeling
As the public seeks answers about the future impacts of climate change, some climatologists are growing increasingly uneasy about the localized predictions they are being asked to make.
Now that the world largely accepts our climate is changing, and that humans are to blame, we all want to know what the future holds for our own backyard. How bad will it get? Flood or drought? Feast or famine? Super-hurricane or Mediterranean balm?
The statisticians and climatologists who brought us the big picture are now under huge pressure to get local. But they are growing increasingly concerned about whether their existing models and computers are up to the job. They organized a summit in Reading, England, in May to discuss their concerns. As Brian Hoskins of Reading University, one of the British government’s top climate advisers, put it: “We’ve worked out the global scale. But that’s the easy problem. We don’t yet understand the smaller scale. The pressure is on for answers, and we can’t wait around for decades.”
Already, policymakers are starting to take at face value model predictions of — to take a few examples — warming of 18 degrees Fahrenheit (7.8 degrees Celsius) or more in Alaska, and super-droughts in the southwestern United States, but little warming at all in central states.
But is the task doable? Some climate modelers say that even with the extraordinary supercomputing power now available, the answer is no. That, by being lured into offering local forecasts for decades ahead, they are setting themselves up for a fall that could undermine the credibility of the climate models.
Lenny Smith, an American statistician now working on climate modeling at the London School of Economics in the United Kingdom, is fearful. “Our models are being over-interpreted and misinterpreted,” he says. “They are getting better; I don't want to trash them. But policy-makers think we know much more than we actually know. We need to drop the pretense that they are nearly perfect.”
There are two areas of concern. First, how accurate are the global models at mimicking atmospheric processes? And second, are they capable of zooming in on particular areas to give reliable pictures of the future where you live?
Nobody much doubts that greenhouse gases like carbon dioxide accumulating in the atmosphere will cause warming. It would be a contradiction of 200 years of physics if they did not. But exactly how much warming will occur — and how it will be distributed across the globe and impact other climatic features like rainfall — depends on feedbacks in the climate system, the oceans, and the natural carbon cycle. The influence of some of these feedbacks is much less clear.
One big issue is the influence of clouds. The models are pretty hopeless at predicting future cloud cover. And we can’t even be sure whether, overall, extra clouds would warm or cool the planet. (Clouds may cool us in the day, but will usually keep us warm at night.) In the language of Donald Rumsfeld, we would call this problem a “known unknown.”
And there may also be “unknown unknowns.” For instance, a paper in Earth and Planetary Science Letters
in March reported finding fossilized ferns in central Siberia that suggest that in the late Cretaceous era, temperatures there were like modern-day Florida. Yet current climate models predict that the area should have had average temperatures around zero Celsius. The British climate modeler involved in the study, Paul Valdes of Bristol University, says this snapshot from the era of the dinosaurs could mean that “the internal physics of our climate models are wrong.” That the models may also be drastically underestimating likely warming in the 21st century.
This uncertainty at the heart of the models seems surprising when the predictions of most global climate models seem to be in agreement. For more than a decade they have estimated that a doubling of carbon dioxide in the air will warm the world by between 1.5 and 4.5 degrees Celsius.
Some experts think the consensus of the models is bogus. “The modelers tend to tweak them to align them. The process is very incestuous,” one leading British analyst on uncertainty in models told me.
Another, Jerry Ravetz, fellow at Oxford University’s James Martin Institute for Science and Civilisation, says: “The modelers are trained to be very myopic, and not to understand the uncertainty within their models. They become very skilled at solving the little problems and ignoring the big ones.”
These are serious charges. But the custodians of the big models say this is really a communications problem between them and the outside world. Gavin Schmidt of NASA’s Goddard Institute for Space Studies, which runs one of the world’s top climate models, says modelers themselves have a “tacit knowledge” of the uncertainties inherent in their work. But this knowledge rarely surfaces in public discussions, resulting in “an aura of exactitude that can be misleading.”
Steve Rayner, director of the James Martin Institute, says, “What climate models do well is give a broad picture. What they are absolutely lousy at is giving specific forecasts for particular places or times.” And yet that is what modelers are increasingly doing.
At a meeting at Cambridge University in Britain last summer, Lenny Smith singled out for criticism the British government's Met Office Hadley Centre for Climate Prediction and Research, which runs another of the world’s premier climate models. He accuses the Centre of making detailed climate projections for regions of Britain, when global climate models disagree strongly about how climate change will affect the islands.
James Murphy, Hadley’s head of climate prediction, says: “I find it far-fetched that a planner is going to rush off with a climate scientist’s probability distribution and make an erroneous decision because they assumed they could trust some percentile of the distribution to its second decimal point.”
But some say the Hadley Centre invites just such a response in some of its leaflets. One of its reports, “New Science for Managing Climate Risks,” distributed to policymakers at the Bali climate conference last December, included “climate model predictions” forecasting that by 2030 the River Orinoco’s flow will have declined by 18.7 percent, the Zambezi by 34.9 percent, and the Amazon by 13.4 percent.
Many in the modeling community are growing wary of such spurious certainty. Last year, a panel on climate modeling assembled by the UN’s World Climate Research Program under the chairmanship of Jagadish Shukla of the George Mason University at Calverton, Maryland, concluded that current models “have serious limitations in simulating regional features, for example rainfall, mid-latitude storms, organized tropical convection, ocean mixing, and ecosystem dynamics.”
Regional projections, the panel said, “are sufficiently uncertain to compromise the goal of providing society with reliable predictions of regional climate change.” Many of the predictions were “laughable,” according to the panel. Concern is greatest about predicting climate in the tropics, including hurricane formation. This seriously undermines the credence that can be placed on a headline-grabbing prediction in May that the future might see fewer Atlantic hurricanes (albeit sometimes more intense).
This might not matter too much if politicians and policymakers had a healthily skeptical view of climate models. But most do not, a meeting of modelers held in Oxford heard in February. Policymakers often hide behind models and modelers, using them to claim scientific probity for their actions. One speaker likened modern climate modelers to the ancient oracles. “They are part of the tradition of goats’ entrails and tea leaves. They are a way of objectifying advice, cloaking sensible ideas in a false aura of scientific certainty.”
We saw that when European governments at the recent Bali climate conference cited the UN’s Intergovernmental Panel on Climate Change as reporting that keeping carbon dioxide concentrations in the atmosphere below 450 parts per million would prevent warming above 2 degrees Celsius. And that that was a “safe” level of warming. Neither statement is in any IPCC reports, and its scientists have repeatedly stated that what might be regarded as a safe degree of warming is ultimately a political and not a scientific question.
None of this should be taken to suggest either that climate models are not valuable tools, or that they are exaggerating the significance of man-made climate change. In fact, they may well inadvertently be under-estimating the pace of change. Most models suggest that climate change in the coming decades will be gradual — a smooth line on a graph. But our growing knowledge of the history of natural climate change suggests that change often happens in sudden leaps.
For instance, there was a huge step-change in the world’s climate 4,200 years ago. Catastrophic droughts simultaneously shattered human societies across the Middle East, India, China, and the interior of North America. “Models have great difficulty in predicting such sudden events, and in explaining them,” says Euan Nisbet of Royal Holloway, the University of London. “But geology tells us that catastrophe has happened in the past, and is likely to happen again.”
As Pasky Pascual of the U.S. Environmental Protection Agency put it at the Oxford meeting: “All models are wrong; some are wronger.” But they are our best handle on likely climate change in the coming decades.
Acting on the findings of Shukla’s panel, modelers from around the world met at the summit in Reading in May to “prepare a blueprint to launch a revolution in climate prediction.” They said that to meet the demands for reliable local forecasts they needed more than a thousand times more computing power than they currently have, and called for the establishment of a new billion-dollar global climate modeling center, a “Manhattan Project for climate,” to deliver the goods.
| <urn:uuid:b16d65e9-f659-4f06-882f-45c8ec102566> | 3 | 2,143 | News Article | Science & Tech. | 39.759837 | 461 |
Opacity is the measure of impenetrability to electromagnetic or other kinds of radiation, especially visible light. In radiative transfer, it describes the absorption and scattering of radiation in a medium, such as a plasma, dielectric, shielding material, glass, etc. An opaque object is neither transparent (allowing all light to pass through) nor translucent (allowing some light to pass through). When light strikes an interface between two substances, in general some may be reflected, some absorbed, some scattered, and the rest transmitted (also see refraction). Reflection can be diffuse, for example light reflecting off a white wall, or specular, for example light reflecting off a mirror. An opaque substance transmits no light, and therefore reflects, scatters, or absorbs all of it. Both mirrors and carbon black are opaque. Opacity depends on the frequency of the light being considered. For instance, some kinds of glass, while transparent in the visual range, are largely opaque to ultraviolet light. More extreme frequency-dependence is visible in the absorption lines of cold gases. Opacity can be quantified in many ways; for example, see the article mathematical descriptions of opacity.
Quantitative definition
The words "opacity" and "opaque" are often used as colloquial terms for objects or media with the properties described above. However, there is also a specific, quantitative definition of "opacity", used in astronomy, plasma physics, and other fields, given here.
In this use, "opacity" is another term for the mass attenuation coefficient (or, depending on context, mass absorption coefficient, the difference is described here) at a particular frequency of electromagnetic radiation.
More specifically, if a beam of light with frequency ν travels through a medium with opacity κ_ν and mass density ρ, both constant, then the intensity will be reduced with distance x according to the formula

I(x) = I_0 e^(−κ_ν ρ x)

where
- x is the distance the light has traveled through the medium
- I(x) is the intensity of light remaining at distance x
- I_0 is the initial intensity of light, at x = 0
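As a quick worked illustration of what this exponential decay means: after the beam has traveled a distance x = 1/(κ_ν ρ), its intensity has fallen to I_0/e, roughly 37% of its initial value; after five times that distance, less than 1% remains.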
For a given medium at a given frequency, the opacity has a numerical value that may range between 0 and infinity, with units of length²/mass.
Planck and Rosseland opacity
It is customary to define an average opacity, calculated using a certain weighting scheme. Planck opacity uses the normalized Planck black-body radiation energy density distribution as the weighting function, and averages κ_ν directly. Rosseland opacity (after Svein Rosseland), on the other hand, uses the (normalized) temperature derivative of the Planck distribution, ∂B_ν/∂T, as the weighting function, and averages 1/κ_ν:

1/κ_R = ( ∫₀^∞ (1/κ_ν) (∂B_ν/∂T) dν ) / ( ∫₀^∞ (∂B_ν/∂T) dν )
The photon mean free path is λ_ν = 1/(κ_ν ρ). The Rosseland opacity is derived in the diffusion approximation to the radiative transport equation. It is valid whenever the radiation field is isotropic over distances comparable to or less than a radiation mean free path, such as in local thermal equilibrium. In practice, the mean opacity for Thomson electron scattering is

κ_es ≈ 0.20 (1 + X) cm² g⁻¹
where X is the hydrogen mass fraction. For nonrelativistic thermal bremsstrahlung, or free-free transitions, the corresponding mean opacity follows Kramers' law, scaling as

κ_ff ∝ ρ T^(−7/2)

with ρ the density and T the temperature.
The Rosseland mean absorption coefficient including both scattering and absorption (also called the extinction coefficient) is then approximately the sum of these two contributions, κ ≈ κ_es + κ_ff.
See also
- Absorption (electromagnetic radiation)
- Mathematical descriptions of opacity
- Molar absorptivity
- Reflection (physics)
- Scattering theory | <urn:uuid:2d3f3447-b565-4f51-9cdf-55ed1cc136a9> | 3.953125 | 717 | Knowledge Article | Science & Tech. | 14.203316 | 462 |
The troposphere is the lowest portion of Earth's atmosphere. It contains approximately 80% of the atmosphere's mass and 99% of its water vapor and aerosols. The average depth of the troposphere is approximately 17 km (11 mi) in the middle latitudes. It is deeper in the tropics, up to 20 km (12 mi), and shallower near the polar regions, at 7 km (4.3 mi) in summer, and indistinct in winter. The lowest part of the troposphere, where friction with the Earth's surface influences air flow, is the planetary boundary layer. This layer is typically a few hundred meters to 2 km (1.2 mi) deep depending on the landform and time of day. The border between the troposphere and stratosphere, called the tropopause, is a temperature inversion.
The word troposphere derives from the Greek: tropos for "change" reflecting the fact that turbulent mixing plays an important role in the troposphere's structure and behavior. Most of the phenomena we associate with day-to-day weather occur in the troposphere.
Pressure and temperature structure
The chemical composition of the troposphere is essentially uniform, with the notable exception of water vapor. The source of water vapor is at the surface through the processes of evaporation and transpiration. Furthermore the temperature of the troposphere decreases with height, and saturation vapor pressure decreases strongly as temperature drops, so the amount of water vapor that can exist in the atmosphere decreases strongly with height. Thus the proportion of water vapor is normally greatest near the surface and decreases with height.
The pressure of the atmosphere is maximum at sea level and decreases with higher altitude. This is because the atmosphere is very nearly in hydrostatic equilibrium, so that the pressure is equal to the weight of air above a given point. The change in pressure with height can therefore be equated to the density through the hydrostatic equation
dP/dz = −ρ g,
where g is the acceleration due to gravity and ρ is the density of the air.
Since temperature in principle also depends on altitude, one needs a second equation to determine the pressure as a function of height, as discussed in the next section.
The temperature of the troposphere generally decreases as altitude increases. The rate at which the temperature decreases with height, −dT/dz, is called the environmental lapse rate (ELR). The ELR is nothing more than the difference in temperature between the surface and the tropopause divided by the height. The reason for this temperature difference is that absorption of the sun's energy occurs at the ground, heating the lower levels of the atmosphere, while radiation of heat to space occurs at the top of the atmosphere, cooling the earth; this process maintains the overall heat balance of the earth.
As parcels of air in the atmosphere rise and fall, they also undergo changes in temperature for reasons described below. The rate of change of the temperature in the parcel may be less than or more than the ELR. When a parcel of air rises, it expands, because the pressure is lower at higher altitudes. As the air parcel expands, it pushes on the air around it, doing work; but generally it does not gain heat in exchange from its environment, because its thermal conductivity is low (such a process is called adiabatic). Since the parcel does work and gains no heat, it loses energy, and so its temperature decreases. (The reverse, of course, will be true for a sinking parcel of air.)
Since the heat exchanged dQ is related to the entropy change dS by dQ = T dS, the equation governing the temperature as a function of height for a thoroughly mixed atmosphere is
dT/dz = −g/c_p,
where c_p is the specific heat of air at constant pressure and g is the acceleration due to gravity.
If the air contains water vapor, then cooling of the air can cause the water to condense, and the behavior is no longer that of an ideal gas. If the air is at the saturated vapor pressure, then the rate at which temperature drops with height is called the saturated adiabatic lapse rate. More generally, the actual rate at which the temperature drops with altitude is called the environmental lapse rate. In the troposphere, the average environmental lapse rate is a drop of about 6.5 °C for every 1 km (1,000 meters) in increased height.
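The hydrostatic equation and a constant lapse rate together determine how pressure and temperature fall with height. A small numerical sketch follows; the sea-level values and the 6.5 °C/km lapse rate are standard-atmosphere style assumptions, not data from this article:

```python
# Integrate dP/dz = -rho*g with rho = P*M/(R*T) and T(z) = T0 - L*z
g, M, R = 9.81, 0.0289644, 8.314       # gravity (m/s^2), molar mass of air (kg/mol), gas constant
T0, P0, L = 288.15, 101325.0, 0.0065   # sea-level temperature (K), pressure (Pa), lapse rate (K/m)

dz, P, z = 10.0, P0, 0.0
for step in range(1, 1101):            # integrate up to about 11 km
    T = T0 - L * z
    rho = P * M / (R * T)              # ideal-gas density
    P -= rho * g * dz
    z += dz
    if step % 200 == 0:                # report every 2 km
        print(f"z = {z:7.0f} m   T = {T0 - L*z - 273.15:6.1f} C   P = {P/100:7.1f} hPa")
```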
The environmental lapse rate (the actual rate at which temperature drops with height, −dT/dz) is not usually equal to the adiabatic lapse rate (g/c_p for dry air). If the upper air is warmer than predicted by the adiabatic lapse rate (−dT/dz < g/c_p), then when a parcel of air rises and expands, it will arrive at the new height at a lower temperature than its surroundings. In this case, the air parcel is denser than its surroundings, so it sinks back to its original height, and the air is stable against being lifted. If, on the contrary, the upper air is cooler than predicted by the adiabatic lapse rate, then when the air parcel rises to its new height it will have a higher temperature and a lower density than its surroundings, and will continue to accelerate upward.
Temperatures decrease at middle latitudes from an average of 15°C at sea level to about -55°C at the top of the tropopause. At the poles, the troposphere is thinner and the temperature only decreases to -45°C, while at the equator the temperature at the top of the troposphere can reach -75°C.
The tropopause is the boundary region between the troposphere and the stratosphere.
Measuring the temperature change with height through the troposphere and the stratosphere identifies the location of the tropopause. In the troposphere, temperature decreases with altitude. In the stratosphere, however, the temperature remains constant for a while and then increases with altitude. The region of the atmosphere where the lapse rate changes from positive (in the troposphere) to negative (in the stratosphere), is defined as the tropopause. Thus, the tropopause is an inversion layer, and there is little mixing between the two layers of the atmosphere.
Atmospheric flow
The flow of the atmosphere generally moves in a west to east direction. This however can often become interrupted, creating a more north to south or south to north flow. These scenarios are often described in meteorology as zonal or meridional. These terms, however, tend to be used in reference to localised areas of atmosphere (at a synoptic scale). A fuller explanation of the flow of atmosphere around the Earth as a whole can be found in the three-cell model.
Zonal Flow
A zonal flow regime is the meteorological term meaning that the general flow pattern is west to east along the Earth's latitude lines, with weak shortwaves embedded in the flow. The use of the word "zone" refers to the flow being along the Earth's latitudinal "zones". This pattern can buckle and thus become a meridional flow.
Meridional flow
When the zonal flow buckles, the atmosphere can flow in a more longitudinal (or meridional) direction, and thus the term "meridional flow" arises. Meridional flow patterns feature strong, amplified troughs and ridges, with more north-south flow in the general pattern than west-to-east flow.
Three-cell model
The three-cell model attempts to describe the actual flow of the Earth's atmosphere as a whole. It divides the Earth into the tropical (Hadley cell), mid-latitude (Ferrel cell), and polar (polar cell) regions, dealing with energy flow and global circulation. Its fundamental principle is that of balance: the energy that the Earth absorbs from the sun each year is equal to that which it loses back into space, but this balance is not precisely maintained at each latitude, owing to the varying strength of the sun in each "cell" that results from the tilt of the Earth's axis in relation to its orbit. The model demonstrates that a pattern emerges to mirror that of the ocean: the tropics do not continue to get warmer because the atmosphere transports warm air poleward and cold air equatorward, the net effect being a distribution of heat and moisture around the planet.
Synoptic scale observations and concepts
Forcing is a term used by meteorologists to describe the situation where a change or an event in one part of the atmosphere causes a strengthening change in another part of the atmosphere. It is usually used to describe connections between upper, middle or lower levels (such as upper-level divergence causing lower level convergence in cyclone formation), but can sometimes also be used to describe such connections over distance rather than height alone. In some respects, tele-connections could be considered a type of forcing.
Divergence and Convergence
An area of convergence is one in which the total mass of air is increasing with time, resulting in an increase in pressure at locations below the convergence level (recall that atmospheric pressure is just the total weight of air above a given point). Divergence is the opposite of convergence - an area where the total mass of air is decreasing with time, resulting in falling pressure in regions below the area of divergence. Where divergence is occurring in the upper atmosphere, there will be air coming in to try to balance the net loss of mass (this is called the principle of mass conservation), and there is a resulting upward motion (positive vertical velocity). Another way to state this is to say that regions of upper air divergence are conducive to lower level convergence, cyclone formation, and positive vertical velocity. Therefore, identifying regions of upper air divergence is an important step in forecasting the formation of a surface low pressure area.
- "ISS022-E-062672 caption". NASA. Retrieved 21 September 2012.
- McGraw-Hill Concise Encyclopedia of Science & Technology. (1984). Troposphere. "It contains about four-fifths of the mass of the whole atmosphere."
- Danielson, Levin, and Abrams, Meteorology, McGraw Hill, 2003
- Landau and Lifshitz, Fluid Mechanics, Pergamon, 1979
- Landau and Lifshitz, Statistical Physics Part 1, Pergamon, 1980
- Kittel and Kroemer, Thermal Physics, Freeman, 1980; chapter 6, problem 11
- "American Meteorological Society Glossary - Zonal Flow". Allen Press Inc. June 2000. Retrieved 2006-10-03.
- "American Meteorological Society Glossary - Meridional Flow". Allen Press Inc. June 2000. Retrieved 2006-10-03.
- "Meteorology - MSN Encarta, "Energy Flow and Global Circulation"". Encarta.Msn.com. Archived from the original on 2009-10-31. Retrieved 2006-10-13.
- Composition of the Atmosphere, from the University of Tennessee Physics dept.
- Chemical Reactions in the Atmosphere
- http://encarta.msn.com/encyclopedia_761571037_3/Meteorology.html#s12 (Archived 2009-10-31) | <urn:uuid:289d13f0-c4f6-44f9-8b69-3daada4f7990> | 4.375 | 2,265 | Knowledge Article | Science & Tech. | 39.657849 | 463 |
Foundation of Quantum Theory
The following well-known experiments serve as a motivation for studying quantum theory. The experimental results cannot be explained using ideas from classical physics.
1. Blackbody Radiation | 2. Photoelectric Effect | 3. Compton Effect
It is well-known that when a body is heated it emits electromagnetic radiation. For example, if a piece of iron is heated to a few hundred degrees, it gives off e.m. radiation which is predominantly in the infra-red region. When the temperature is raised to 1000C it will begin to glow with reddish color which means that the radiation emitted by it is in the visible red region having wavelengths shorter than in the previous case. If heated further it will become white-hot and the radiation emitted is shifted towards the still shorter wave-length blue color in the visible spectrum. Thus the nature of the radiation depends on the temperature of the emitter.
A heated body not only emits radiation but it also absorbs a part of radiation falling on it. If a body absorbs all the radiant energy falling on it, then its absorptive power is unity. Such a body is called a black body.
An ideal blackbody is realized in practice by heating to any desired temperature a hollow enclosure (cavity) and with a very small orifice. The inner surface is coated with lamp-black. Thus radiation entering the cavity through the orifice is incident on its blackened inner surface and is partly absorbed and partly reflected. The reflected component is again incident at another point on the inner surface and gets partly absorbed and partly reflected. This process of absorption and reflection continues until the incident beam is totally absorbed by the body.
The inner walls of the heated cavity also emit radiation, a part of which can come out through the orifice. This radiation has the characteristics of blackbody radiation - the spectrum of which can be analyzed by an infra-red spectrometer.
Experimental results show that the blackbody radiation has a continuous spectrum (shown in the graph). The intensity of the emitted radiation Eλ is plotted as a function of the wavelength λ for different temperatures. The wavelength of the emitted radiation ranges continuously from zero to infinity. Eλ increases with increasing temperature for all wavelengths. It has very low values for both very short and very long wavelengths and has a maximum in between at some wavelength λmax. λmax depends on the temperature of the blackbody and decreases with increasing temperature.
The shift in the peak of the intensity distribution curve obeys an empirical relationship known as Wien's displacement law:
λmax T = constant.
The total power radiated per unit area of a blackbody can be derived from thermodynamics. This is known as the Stefan-Boltzmann law, which can be expressed mathematically as:
E = σT⁴,
where σ = 5.67 × 10⁻⁸ W m⁻² K⁻⁴ is known as Stefan's constant.
Note that the total power E radiated is obtained by integrating Eλ over all wavelengths. W. Wien proposed an empirical relationship between Eλ and λ for a given temperature T:
Eλ(T) = A exp(−B/λT)/λ⁵,
where the constants A and B are chosen arbitrarily so as to fit the experimental energy distribution curves.
But it was later found that the experimental data don't follow Wien's empirical relation at larger wavelengths [See Fig. below ].
Wien's theory of intensity of radiation was based only on arguments from thermodynamics not on any plausible model. Considering the radiation system as composed of a bunch of harmonic oscillators Rayleigh and Jeans derived (using thermodynamics) an expression for the emitted radiation El:
Eλ = (c/4)(8πkBT/λ⁴).
kB is the Boltzmann constant (kB = 1.38 × 10⁻²³ J/K).
The above expression agrees well with the experimental results at long wavelengths but drastically fails at shorter wavelengths. In the limit λ → 0, Eλ → infinity from the expression above, but in the experiments Eλ → 0 as λ → 0. This serious disagreement between theory and experiment indicates the limitations of classical mechanics.
Max Planck later derived an expression for the emitted radiation using quantum mechanics. He made a bold new postulate that an oscillator can have only energies which are discrete, i.e., an integral multiple of a finite quantum of energy hf, where h is Planck's constant (h = 6.626 × 10⁻³⁴ J·s) and f is the frequency of the oscillator. Thus the energy of the oscillator is,
E = nhf,
where n is an integer or zero. Planck further assumed that the change in energy of the oscillator due to emission or absorption of radiant energy can also take place by a discrete amount hf. Since radiation is emitted from the oscillators, and since according to Planck, the change in energy of the oscillators can only take discrete values, the energy carried by the emitted radiation, which is called a photon, will be hf, and that is also equal to the loss of energy of the oscillator. Again, this is also the energy gain of the oscillator when it absorbs a photon. Based on these ideas Planck derived the expression for the energy distribution of blackbody radiation:
Eλ = (c/4)(8πhc/λ⁵)(1/[exp(hc/λkBT) − 1]).
Rayleigh-Jean's expression and Wien's displacement law are special cases of Planck's law of radiation. Planck's formula for the energy distribution of blackbody radiation agrees well with the experimental results, both for the long wavelengths and the short wavelengths of the energy spectrum.
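To make the comparison concrete, the following short Python sketch evaluates the Rayleigh-Jeans and Planck expressions quoted above at a few wavelengths; the temperature and wavelength values are arbitrary choices for illustration:

```python
import numpy as np

h, c, k_B = 6.626e-34, 3.0e8, 1.381e-23
T = 5000.0                                      # arbitrary temperature in K
lam = np.array([0.1e-6, 0.5e-6, 2e-6, 10e-6])   # wavelengths in metres

rayleigh_jeans = (c / 4) * (8 * np.pi * k_B * T) / lam**4
planck = (c / 4) * (8 * np.pi * h * c / lam**5) / (np.exp(h * c / (lam * k_B * T)) - 1)

for l, rj, pl in zip(lam, rayleigh_jeans, planck):
    print(f"lambda = {l*1e6:5.2f} um   Rayleigh-Jeans = {rj:9.3e}   Planck = {pl:9.3e}")
```

At the longest wavelength the two expressions approach one another, while at 0.1 µm the Rayleigh-Jeans value is enormously larger than the Planck value, which is the short-wavelength failure described above.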
Please click on the simulation below to see a nice interactive demonstration of the physics of blackbody radiation.
Simulation on Blackbody Radiation
Planck's postulate regarding the discrete nature of the possible energy states of an oscillator marked a radical departure from the ideas of classical physics. According to the laws of classical mechanics, the energy of an oscillator can vary continuously, depending only on the amplitude of the vibrations - this is in total contrast to Planck's hypothesis of discrete energy states of an oscillator. The photoelectric effect is another classic example which cannot be explained with classical physics. Einstein was awarded the Nobel Prize for his explanation of the physics of the photoelectric effect.
The basic experiment of photoelectric effect is simple. It was observed that a metal plate when exposed to ultraviolet radiation became positively charged which showed that it has lost negative charges from its surface. These negatively charged particles were later identified to be electrons (later named photoelectrons). This phenomenon is known as photoelectric effect.
Please check out the physics applet below, which shows the effect of light on various metals.
Simulation on Photoelectric effect
The main results of the experiment can be summarized as follows:
On exposure to the incident light, photoelectrons with all possible velocities ranging from 0 up to a maximum vm are emitted from the metal plate. When a positive potential is applied to the collector (which collects the emitted photoelectrons), a fraction of the total number emitted is collected by the collector. This fraction increases as the voltage is increased. For potentials above about +10 volts, all the electrons emitted by the light are collected by the collector, which accounts for the saturation of the photoelectric current [Figs. (a) and (b) below].
On the other hand, when a negative retarding potential is applied to the collector, the lower energy electrons are unable to reach the collector, so the current gradually decreases with increasing negative potential. Finally, for a potential -V0 (known as the stopping potential), the photoelectrons of all velocities up to the maximum vm are prevented from reaching the collector. At this point, the maximum kinetic energy of the emitted electrons equals the energy required to overcome the effect of the retarding potential - so we can write
mvm²/2 = eV0.
Conclusion from the experimental results:
(1) The photoelectric current depends upon the intensity of the light used. It is independent of the wavelength of the light [See Fig. (a) above].
(2) The photoelectrons are emitted with all possible velocities from 0 up to a maximum vm which is independent of the intensity of the incident light, but depends only on its wavelength (or frequency). It is found that if f is the frequency of the light used, then the maximum kinetic energy of the photoelectrons increases linearly with f [See Figs. (b) and (c) above].
(3) Photoelectron emission is an instantaneous effect. There is no time gap between the incidence of the light and the emission of the photoelectrons.
(4) The straight line graph showing the variation of the maximum kinetic energy of the emitted electrons with the frequency f of the light intersects the abscissa at some point f0. No photoelectron emission takes place in the frequency range f<f0. This minimum frequency f0 is known as the threshold frequency. Its value depends on the nature of the emitting material [See Fig. (c) above].
Breakdown of Classical Physics:
According to classical physics -
(a) Light is an electromagnetic wave - the intensity of light is determined by the amplitudes of these electromagnetic oscillations. When light falls on an electron bound in an atom, it gains energy from the oscillating electric field. Larger the amplitude of oscillations, larger is the energy gained by the emitted electron - thus energy of the emitted electrons should depend on the intensity of the incident light. This is in contrast to what has been observed in experiment (point 2 above).
(b) According to the electromagnetic theory, the velocity of the emitted electrons should not depend on the frequency of the incident light. Whatever may be the frequency of the incident light, the electron would be emitted if it gets sufficient time to collect the necessary energy for emission. So the photoelectric emission is not an instantaneous effect. These are in contrary to points 3 and 4 above.
(c) Finally, the incident electromagnetic wave acts equally on all the electrons of the metal surface. There is no reason why only some electrons will be able to collect the necessary energy for emission from the incident waves. Given sufficient time, all electrons should be able to collect the energy necessary for emission. So there is no reason why the photoelectric current should depend upon the intensity of the incident light. However, this is again in contrary to the observed facts (point 1 above).
Einstein's light quantum hypothesis and photoelectric equation:
We have seen from above that the maximum kinetic energy of the emitted photoelectrons increases linearly with the frequency of the incident light. In terms of an equation we have
mvm²/2 = eV0 = af − W
where a and W are constants. W is known as the work function of the emitting material. The constant a was determined experimentally and is found to be equal to Planck's constant h. We can then rewrite the above equation as
mvm²/2 = eV0 = hf − W.
For the special value f = f0 = W/h, the kinetic energy of the emitted photoelectrons becomes zero. So there will be no photoelectron emission if f < f0. f0 is the threshold frequency. The equation
mvm²/2 = eV0 = hf − hf0
is known as the famous Einstein's photoelectric equation.
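As a worked illustration of this equation (the 400 nm wavelength and the 2.28 eV work function, roughly that of sodium, are assumed values, not taken from this text):

```python
h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s
e = 1.602e-19   # electron charge, C

lam = 400e-9            # wavelength of the incident light (assumed)
W = 2.28 * e            # work function of the metal in joules (assumed, ~sodium)

f = c / lam                   # frequency of the light
K_max = h * f - W             # Einstein's photoelectric equation
V0 = K_max / e                # stopping potential

print(f"photon energy      = {h * f / e:.2f} eV")
print(f"max kinetic energy = {K_max / e:.2f} eV, stopping potential = {V0:.2f} V")
```

With these numbers the photon energy is about 3.1 eV, so the electrons leave with at most about 0.8 eV and a stopping potential of about 0.8 V; light with frequency below W/h (here a wavelength longer than roughly 545 nm) would eject no electrons at all.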
Einstein used the quantum hypothesis of Planck to explain the photoelectric effect. He postulated that light is emitted from a source in the form of energy packets of the amount hf known as the light quantum or photon. This is known as Einstein's light quantum hypothesis.
When a photon of energy hf falls on an electron bound inside an atom, the electron absorbs the energy hf and is emitted from the atom provided that hf is greater than the energy of binding of the electron in the atom which is equal to the work function W of the metal. The surplus of energy (hf - W) is taken away by the electron as its kinetic energy. Obviously if hf < W, i.e. f<f0, no photoelectric emission can take place. This explains the existence of the threshold frequency.
Furthermore, according to Einstein's theory, larger the number of photons falling on the metal, greater is the probability of their encounter with the atomic electrons and hence greater is the photoelectric current. So the increase of photoelectric current with the increasing light intensity can be easily explained.
Finally, as soon as the photon of energy hf > W falls on an electron, the latter absorbs it and is emitted instantaneously.
Note that Einstein's light quantum hypothesis postulates the corpuscular nature of light in contrast to the wave nature. We will talk about this wave-particle duality later on in this course.
The discovery of Compton scattering of x-rays provides direct support that light consists of pointlike quanta of energy called photons.
A schematic diagram of the apparatus used by Compton is shown in the Figure below. A graphite target was bombarded with monochromatic x-rays and the wavelength of the scattered radiation was measured with a rotating crystal spectrometer. The intensity was determined by a movable ionization chamber that generated a current proportional to the x-ray intensity. Compton measured the dependence of scattered x-ray intensity on wavelength at three different scattering angles of 45°, 90°, and 135°. The experimental intensity vs. wavelength plots observed by Compton for the above three scattering angles (See Fig. below) show two peaks, one at the wavelength λ of the incident x-rays and the other at a longer wavelength λ'. The functional dependence of λ' on the scattering angle and λ was predicted by Compton to be:
λ' − λ = (h/mec)[1 − cosθ] = λ0[1 − cosθ].
The factor λ0 = h/mec, also known as the Compton wavelength, can be calculated to be equal to 0.00243 nm.
The physics of Compton effect:
To explain his observations Compton assumed that light consists of photons each of which carries an energy hf and a momentum hf/c (as p = E/c = hf/c). When such a photon strikes a free electron the electron gets some momentum (pe) and kinetic energy (Te) due to the collision, as a result of which the momentum and energy of the photon are reduced.
Considering energy and momentum conservation (for the detailed derivation please see here) one can derive the change in wavelength due to Compton scattering:
λ' − λ = (h/mec)[1 − cosθ].
Note that the result is independent of the scattering material and depends only on the angle of scattering.
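A quick numerical check of this formula (the 0.0709 nm incident wavelength, typical of the molybdenum Kα x-rays Compton used, is an assumption here, as are the three angles):

```python
import math

h, m_e, c = 6.626e-34, 9.109e-31, 3.0e8
lambda_0 = h / (m_e * c)            # Compton wavelength, about 2.43e-12 m

lam = 0.0709e-9                     # incident x-ray wavelength (assumed)
for theta_deg in (45, 90, 135):
    shift = lambda_0 * (1 - math.cos(math.radians(theta_deg)))
    print(f"theta = {theta_deg:3d} deg -> lambda' = {(lam + shift) * 1e9:.5f} nm "
          f"(shift = {shift * 1e12:.2f} pm)")
```

At 90° the shift equals one Compton wavelength, about 2.4 pm, independent of the incident wavelength and of the target material, exactly as stated above.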
The appearance of the peak at the longer wavelength in the intensity vs. wavelength curve is due to Compton scattering from the electron which may be considered free, since its energy of binding in the atom is small compared to the energy hf of the photon. The appearance of the other peak at the wavelength of the incident radiation is due to scattering from a bound electron. In this case the recoil momentum is taken up by the entire atom, which being much heavier compared to the electron, produces negligible wavelength shift.
Compton effect gives conclusive evidence in support of the corpuscular character of electromagnetic radiation.
Please check out the simulation below, which shows Compton scattering.
Simulation on Compton Scattering
© Kingshuk Majumdar (2000) | <urn:uuid:ce82c755-4001-49b5-866b-576e95373ef8> | 3.671875 | 3,244 | Academic Writing | Science & Tech. | 44.877323 | 464 |
Bivalves in Time and Space (BiTS): Clams as tools to understand macroevolution
2011 REU Project:
This study is part of a collaborative effort (see also www.bivatol.org/BiTS/) to develop bivalves as a model clade for macroevolutionary studies. By integrating molecular, morphological and paleontological datasets, BiTS aims to test methods of molecular clock dating, ancestral state reconstruction and historical biogeography, as well as to detect spatial and temporal trends in evolution.
BiTS researchers at the Field Museum concentrate on the morphological and paleontological components of the project, investigating the evolution of numerous shell and anatomical features in two of the commonest bivalve lineages - venus clams and cockles.
Research methods and techniques: REU participants in the project will receive an introduction to bivalve morphology and systematics, with particular focus on shell characters - i.e., those that preserve well in fossils. They will prepare specimens, document diagnostic characters with optical and scanning electron microscopy, build and analyze phylogenetic trees, and gain experience with relevant literature research and collection management techniques.
Curator/Advisors: Dr. Rüdiger Bieler, Zoology/Invertebrates, in collaboration with postdoctoral fellow Dr. André Sartori.
REU Intern: KATHERINE ANDERSON
Ecology and Evolutionary Biology major
University of Michigan, Ann Arbor
Symposium Presentation Title: Diversity of Venus clams: building an online resource for species identification
Symposium Presentation Abstract: Venerids, commonly known as Venus clams, are the most diverse family of marine bivalves, with over 500 extant species. They are found on every continent except Antarctica, and many are edible, commercially collected and cultured, comprising an important food source worldwide. Despite their prevalence and economic importance, there is still no freely accessible, online catalogue available to aid in recognition of venerid species. Species identification is crucial not only for economic reasons, but also for conserving biodiversity and ensuring accuracy in scientific studies. An online catalogue consisting of individual species pages with detailed morphological descriptions and high quality photographs is being built in order to provide a resource that is both complete and available for anyone to use. Specimens from the collections of the Field Museum of Natural History were identified to species level using primary and secondary literature. Following identification, the morphology of the shell of each species was thoroughly described based on characteristics of all specimens available. Descriptions include details of overall shape and coloration, as well as features important for bivalve taxonomy, such as the morphology of the hinge teeth, lunule, escutcheon and ligament. In order to aid in identification, differences among similar species that may be commonly confused were also noted on each species page. High quality photographs of the dorsal view, external and internal views of one valve, and the hinge plates of both valves were taken of a representative specimen of each species. Species pages were published on eBivalvia, a collaborative database for information about bivalves, which shares its contents with the Encyclopedia of Life. In addition to species pages, genus pages were also created containing descriptions of characteristics shared by all species within a genus. At this time, over 100 species of venerids have been described and photographed. The species pages may become more comprehensive in the future, as information such as habitat and distribution is appended. The online catalogue not only documents the diversity of Venus clams, but also provides an accurate and accessible resource for species identification that can be utilized by researchers, students and shell collectors, as well as conservation agencies and fisheries. | <urn:uuid:9852024a-517f-481c-8747-c26fb866b524> | 2.6875 | 750 | Academic Writing | Science & Tech. | 4.664802 | 465 |
Robert A. Pritzker Assistant Curator of Meteoritics and Polar Studies Philipp Heck and co-authors from the Max-Planck-Institute for Chemistry in Germany had their paper on the first isotopic analysis of sulfur-rich comet dust published in the April issue of the journal Meteoritics & Planetary Science. The dust was captured during a flyby of Comet Wild 2 by NASA’s Stardust Mission and returned to Earth.
The Robert A. Pritzker Center for Meteoritics and Polar Studies is proud to announce the newest addition to the meteorite collection. The newly named meteorite Thika, recently classified as a L6 ordinary chondrite, was donated to the Center by Collections and Research Committee member Terry Boudreaux in mid-September.
Field Museum researchers at the Robert A. Pritzker Center for Meteoritics and Polar Studies have received a second target foil from the Interstellar Dust Collector onboard NASA's Stardust Mission - that returned the first solid extraterrestrial material to Earth from beyond the Moon.
We announce a call for abstracts for the session P15 “Laboratory Analysis of Extraterrestrial Dust Returned to Earth” at the Fall Meeting 2011 of the American Geophysical Union (AGU), December 5-9, 2011 San Francisco, California, USA.
Collections & Research Committee member Terry Boudreaux donated a very unusual meteorite specimen to The Field Museum’s Robert A. Pritzker Center for Meteoritics and Polar Studies. The meteorite is named NWA 5492 after northwest Africa where it was found. Its petrology and chemical composition are very different compared to other meteorites and it cannot be classified with the existing scheme.
About 470 million years ago – in a time period called Ordovician – the parent asteroid of one of the L chondrites, one of the most common meteorite types, was disrupted in a collision with another body. This event led to a subsequent bombardment of Earth with collisional debris for at least 10 million years. This finding is reported in a recent study in Earth and Planetary Science Letters by Field Museum scientists Dr. Birger Schmitz (Research Associate), Robert A. Pritzker Assistant Curator of Meteoritics and Polar Studies Dr. Philipp Heck, and an international team of coauthors.
Right after the Mifflin Meteorite fell in SW Wisconsin in April 2010 the Robert A. Pritzker Assistant Curator of Meteoritics and Polar Studies Dr. Philipp R. Heck coordinated an international study to determine the time it spent in space and to calculate its size in space before it got ablated and broke apart in our atmosphere. Now, first results obtained from this study are published as extended abstracts, and were presented in more detail in March at the Lunar and Planetary Science Conference in Texas: The new results show that Mifflin was travelling through space as a small 3 feet object for about 20 Million years before it landed in Wisconsin. | <urn:uuid:5f953902-9e10-4542-9451-f2f8fa14bc64> | 2.9375 | 608 | News (Org.) | Science & Tech. | 37.056982 | 466 |
In the applet below you see a 1D realisation of white and correlated noise with equidistant step in x. Independent random points fi with uniform distribution on the interval (-1, 1) make the white noise (the blue curve). Correlated random points Vi (the red curve) are obtained by averaging the white noise in a sphere of radius Rc, i.e. the kernel Ko is used:
Vi = ∑j=-Rc..Rc fi+j     (*)
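A small Python sketch of the same construction (the array length, radius, and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, Rc = 512, 10

f = rng.uniform(-1.0, 1.0, size=n)       # white noise f_i on (-1, 1)

# Correlated noise: V_i = sum of f_{i+j} for j = -Rc..Rc (box kernel K_o)
kernel = np.ones(2 * Rc + 1)
V = np.convolve(f, kernel, mode="same")

print("white noise std:", round(f.std(), 3), "  correlated noise std:", round(V.std(), 3))
```

Neighbouring values of V share most of their contributing f terms, which is exactly what makes the red curve vary smoothly compared with the blue one.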
To get a 2D fractal noise (mountain) you take an elastic string (see Fig.1), then a random vertical displacement is applied to its middle point. The process is repeated recursively for the middle point of every new segment. The random displacement decreases m times each iteration (usually m = 2 is used), as sketched in the code below.
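This midpoint-displacement recursion is easy to sketch in code; the number of levels, the initial amplitude, and the seed are arbitrary choices:

```python
import random

def midpoint_displacement(levels=8, amplitude=1.0, m=2.0, seed=1):
    """1D fractal 'mountain': repeatedly displace segment midpoints,
    shrinking the random displacement by a factor m at each level."""
    random.seed(seed)
    heights = [0.0, 0.0]                      # the initial 'elastic string'
    for _ in range(levels):
        new = []
        for a, b in zip(heights[:-1], heights[1:]):
            mid = (a + b) / 2 + random.uniform(-amplitude, amplitude)
            new += [a, mid]
        new.append(heights[-1])
        heights = new
        amplitude /= m                        # displacement decreases m times per iteration
    return heights

profile = midpoint_displacement()
print(len(profile), "points, height range", round(min(profile), 3), "to", round(max(profile), 3))
```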
Using Fourier transformation for V(r), K(r), f(r)
g(k) = ∫ g(r) e^{ikr} dr,
we get for V(k)
V(k) = K(k) f(k) .
I.e. averaging (*) means the white noise filtration by means of a filter with bandwidth K(k). The bandwidths for the two used filters are shown in Fig.3 (for Rc = 1)
KG(k) ~ exp[−(Rck)²],   Ko(k) ~ sin(Rck)/k.
Finally, a 2D correlated random landscape: to get a smooth potential, a 2D Gauss kernel is used.
Percolation in random potential landscape
Drag mouse to rotate 3D image (with "shift" to zoom it). The white line (in the blue bar to the right) corresponds to the average <V> value. The yellow line corresponds to the Fermi energy εF . Drag the line by mouse to change εF . See also 3D Mountains and Hidden Surface Removal Algorithms. | <urn:uuid:ad8ea79d-d059-4e6b-9162-059c5c2dc206> | 2.6875 | 403 | Academic Writing | Science & Tech. | 76.022895 | 467 |
Lengths of metal strips produced by a machine are normally distributed with a mean length of 150 cm and a standard deviation of 10 cm. Find the probability that the length of a randomly selected strip is
i/ Shorter than 165 cm?
ii/ Longer than 170 cm?
iii/ Between 145 cm and 155 cm? | <urn:uuid:b1c81644-32dd-49d4-ba08-7b8c59cdda58> | 3.1875 | 65 | Q&A Forum | Science & Tech. | 88.035307 | 468 |
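A quick way to check the three answers with Python's built-in normal distribution (this verification sketch is not part of the original question):

```python
from statistics import NormalDist

strips = NormalDist(mu=150, sigma=10)

p_shorter_165 = strips.cdf(165)                    # (i)   ~0.9332
p_longer_170 = 1 - strips.cdf(170)                 # (ii)  ~0.0228
p_between = strips.cdf(155) - strips.cdf(145)      # (iii) ~0.3829

print(round(p_shorter_165, 4), round(p_longer_170, 4), round(p_between, 4))
```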
Marine Wildlife Encyclopedia
Acoel Flatworm Waminoa species
The diminutive acoel flatworms look like colored spots on the bubble coral on which they live. Their ultra-thin bodies glide over the coral surface as they graze, probably eating organic debris trapped by coral mucus. Acoel flatworms have no eyes and instead of a gut, they have a network of digestive cells. They are able to reproduce by fragmentation, each piece forming a new individual. The genus is difficult to identify to species level and the distribution of acoel flatworms is uncertain. | <urn:uuid:7f445df0-7be3-4499-90a1-236dec70c805> | 2.625 | 119 | Knowledge Article | Science & Tech. | 29.705 | 469 |
Newton's first law states that an object will keep doing what it is doing if left alone, in other words - The natural state of an object is static - unchanging - motion.
Newton's second law clarifies the first. Acceleration, or any change in motion, is an unnatural state for an arbitrary object left to its own devices; however, it is a state that clearly exists all around us. Newton defines the "thing" that forces an object to change its state of being - a force.
In this most rigorous sense, a force is defined to be that which causes a change in motion.
The observation of a change in momentum necessitates that there is some force driving that change, so in this sense the two are equivalent (there is an equals sign there, after all): wherever you see a (net) force you will see an acceleration, and wherever you see an acceleration you will find a force responsible for it. However, going back to the first law, acceleration is a change in the (kinetic) state of an object, and an object's natural tendency is to statically maintain its state.
Intuitively it seems unnatural that accelerations would happen spontaneously and that the universe will invent a force just to balance the books if you will. | <urn:uuid:9653235f-f275-4ae2-8e05-52f71a5a082d> | 3.390625 | 273 | Q&A Forum | Science & Tech. | 39.848107 | 470 |
Contrary to popular belief, astronauts still have weight while they are orbiting the earth. In fact, Shuttle astronauts weigh almost as much in space as they do on the earth's surface. But these astronauts are in free fall, together with their ship, and their downward acceleration prevents them from measuring their weights directly.
Instead, astronauts make a different type of measurement—one that accurately determines how much of them there is: they measure their masses. Your weight is the force that the earth's gravity exerts on you; your mass is the measure of your inertia, how hard it is to make you accelerate. For deep and interesting reasons, weight and mass are proportional to one another at a given location, so measuring one quantity allows you to determine the other. Instead of weighing themselves, astronauts measure their masses.
They make these mass measurements with the help of a shaking device. They strap themselves onto a machine that gently jiggles them back and forth to see how much inertia they have. By measuring how much force it takes to cause a particular acceleration, the machine is able to compute the mass of its occupant.
Answered by Lou A. Bloomfield of the University of Virginia | <urn:uuid:5696a8c8-21e4-4bdc-a7a2-5523499690d2> | 4.5 | 240 | Q&A Forum | Science & Tech. | 42.654701 | 471 |
Defects combine to make perfect devices
Jun 26, 2002
Faulty components are usually rejected in the manufacture of computers and other high-tech devices. However, Damien Challet and Neil Johnson of Oxford University say that this need not be the case. They have used statistical physics to show that the errors from defective electronic components or other imperfect objects can be combined to create near perfect devices (D Challet and N Johnson 2002 Phys. Rev. Lett. 89 028701).
Most computers are built to withstand the faults that develop in some of their components over the course of the computer’s lifetime, although these components initially contain no defects. However, many emerging nano- and microscale technologies will be inherently susceptible to defects. For example, no two quantum dots manufactured by self-assembly will be identical. Each will contain a time-independent systematic defect compared to the original design.
Historically, sailors have had to cope with a similar problem – the inaccuracy in their clocks. To get round this they often took the average time of several clocks so that the errors in their clocks would more or less cancel out.
Similarly, Challet and Johnson consider a set of N components, each with a certain systematic error – for example the difference between the actual and registered current in a nanoscale transistor at a given applied voltage. They calculated the effect of combining the components and found that the best way to minimize the error is to select a well-chosen subset of the N components. They worked out that the optimum size of this subset for large numbers of devices should equal N/2.
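The subset-selection idea is easy to demonstrate with a toy simulation; the error values, pool size, and random-search procedure below are illustrative assumptions, not the authors' algorithm:

```python
import random

random.seed(42)
N = 20
errors = [random.gauss(0.0, 1.0) for _ in range(N)]    # systematic error of each component

def aggregate_error(subset):
    """Error of a device that averages the outputs of the chosen components."""
    return abs(sum(errors[i] for i in subset) / len(subset))

best_subset, best_err = [0], aggregate_error([0])
for _ in range(20000):                                  # crude random search over subsets
    k = random.randint(1, N)
    subset = random.sample(range(N), k)
    err = aggregate_error(subset)
    if err < best_err:
        best_subset, best_err = subset, err

print(f"best single component error : {min(abs(e) for e in errors):.4f}")
print(f"best subset: {len(best_subset)} of {N} components, aggregate error {best_err:.6f}")
```

Runs of this kind typically land on a subset of roughly half the components whose errors nearly cancel, echoing the N/2 result quoted above.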
On this basis, the researchers say that it should be possible to generate a continuous output of useful devices using only defective components. To find the optimum subset from each batch of defective devices, all of the defects can be measured individually and the minimum calculated with a computer. Alternatively, components can be combined through trial and error until the aggregate error is minimized. Once the optimum subset has been selected, fresh components can be added to replenish the original batch and the cycle started over again.
Challet and Johnson point out that this process and the wiring together of the components will add to the overall cost of making the device. But they believe that these extra costs are likely to be outweighed by the fact that defective components can be produced cheaply en masse. Hewlett Packard, for example, has already built a supercomputer – known as Teramac – from partially defective conventional components using adaptive wiring.
“Our scheme implies that the ‘quality’ of a component is not determined solely by its own intrinsic error,” write the researchers. “Instead, error becomes a collective property, which is determined by the ‘environment’ corresponding to the other defective components.”
About the author
Edwin Cartlidge is News Editor of Physics World | <urn:uuid:7ed1dcb4-7e35-4561-bffe-329002b7a93d> | 3.671875 | 584 | Truncated | Science & Tech. | 32.359988 | 472 |
Launch Date: December 02, 1997
Mission Project Home Page - http://www.mpe-garching.mpg.de/EQS/eqs_home.html
EQUATOR-S was a low-cost mission designed to study the Earth's equatorial magnetosphere out to distances of 67000 km. It formed an element of the closely-coordinated fleet of satellites that compose the IASTP program. Based on a simple spacecraft design, it carries a science payload comprising advanced instruments that were developed for other IASTP missions.
Unique features of EQUATOR-S were its nearly equatorial orbit and its high spin rate. It was launched as an auxiliary payload on an Ariane-4, December 2nd, 1997. The mission was intended for a two-year lifetime but stopped transmitting data on May 1, 1998.
The idea of an equatorial satellite dates back to NASA's GGS (Global Geospace Science) program, originally conceived in 1980. The equatorial element of the program was abandoned in 1986 and several subsequent attempts to rescue the mission failed, leaving a significant gap in both NASA's GSS and the international IASTP programs.
The Max-Planck-Institut für Extraterrestrische Physik (MPE) decided to fill this gap because of its interest in GSS and the opportunity for a test of an advanced instrument to measure electric fields with dual electron beams. In addition to MPE-internal funds and personnel, the realization of EQUATOR-S was possible through a 1994 grant from the German Space Agency DARA (meanwhile part of DLR). | <urn:uuid:08b61c80-8adc-4d5c-92e2-9ac587918416> | 3.09375 | 335 | Knowledge Article | Science & Tech. | 48.947562 | 473 |
More on those rorquals: part I is required reading. To those who have seen this stuff before: sorry, am going through a busy phase and no time for new material (blame dinosaurs and azhdarchoid pterosaurs… and baby girls). Oh, incidentally, I recently registered Tet Zoo with ResearchBlogging: I haven’t done this before because ‘blogging on peer-reviewed research’ is the norm at Tet Zoo, not the exception. It seems to take ages for posts to be uploaded to ResearchBlogging – like, hours. Is this normal? Anyway…
This time we look at the basics of rorqual morphology and at their feeding behaviour. The rostrum in rorquals is long and tapers to a point (though it is comparatively broad in blue whales) and, in contrast to other mysticetes, a stout finger-like extension of the maxillary bone extends posteriorly, overlapping the nasals and abutting the supraoccipital (the shield-like plate that forms the rear margin of the skull). The dorsal surfaces of the frontals (on the top of the skull) possess large depressions while the ventral surfaces of the zygomatic processes (the structures that project laterally from the cheek regions) are strongly concave, again unlike the condition in other mysticetes [painting above by Valter Fogato].
Rorqual lower jaws are gigantic, beam-like bones that bow outwards along their length. The symphyseal area (the region where the jaw tips meet) is unfused, as is the case in all mysticetes (even the most basal ones) but not other cetaceans, meaning that the two halves of the jaw can stretch apart at their tips somewhat. Exceeding 7 m in blue whales, rorqual lower jaws are the largest single bones in history (ha! Take that Sauropoda).
A section of blue whale jaw was once ‘discovered’ at Loch Ness and misidentified as the femur of a gigantic, hitherto undiscovered tetrapod. Occasionally rorqual skulls have been discovered in which the long lower jaws have been stuck wedged inside various of the skull openings and with their tips protruding like tusks. People unfamiliar with cetacean skulls have then naively assumed that the skull belonged to some sort of tusked prehistoric sea monster. Ben Roesch once discussed the case of the Ataka carcass of 1956: a giant beached animal possessing divergent ‘tusks’ that are in fact the separated halves of a rorqual’s lower jaw (see adjacent image).
I’ve come across another case of this sort of thing. The accompanying newspaper piece, from The Telegraph of June 29th 1908, features a skull trawled up by the Aberdeen vessel Balmedie (sailing out of Grimsby), and thought by the article’s writer to be that of ‘some prehistoric monster’, apparently with tongue preserved. It’s clearly a rorqual skull, and the pointed, narrow rostrum and posterior widening of the mesorostral gutter indicates that it’s a minke whale skull [for other cases in which whale carcasses have been mistaken for 'sea monsters' see the Tecolutla monster article].
Moving back to the morphology of the rorqual lower jaw, a tall, well-developed coronoid process – way larger than that of any other mysticete – projects from each jaw bone and forms the attachment site for a tendinous part of the temporalis muscle, termed the frontomandibular stay.
All of these unusual features are linked to the remarkable feeding style used by rorquals. How do they feed? Predominantly by lunge-feeding (also known as engulfment feeding): by opening their mouths to full gape (c. 90º), and then lunging into a mass of prey. Those depressed areas on the frontals and zygomatic processes house particularly large temporalis and masseter muscles, the muscles involved in closing the jaw. The frontomandibular stay provides a strong mechanical linkage between the lower jaw and skull and primarily serves to amplify the mechanical advantage of the temporalis muscles.
As a rorqual lunge-feeds, a huge quantity of water (hopefully containing prey) is engulfed within the buccal pouch, transforming the whale from ‘a cigar shape to the shape of an elongated, bloated tadpole’ (Orton & Brodie 1987, p. 2898). While a rorqual uses its muscles to open its jaws, the energy that powers the expansion of the buccal pouch is essentially provided by the whale’s forward motion, and not by the jaw muscles. In other words, the engulfing process is powered solely by the speed of swimming. Orton & Brodie (1987) noted that the engulfed water ‘is not displaced forward or moved backward by internal suction, but is simply enveloped with highly compliant material’ (p. 2905). Rorquals do not, therefore, set up a bow wave as they engulf (UPDATE: by complete coincidence, Paul Brodie told me in a recent email [28th Feb 2009] that he’s just completing a long-in-the-pipeline manuscript containing field data on Sei whale. Wow, I really look forward to seeing this).
A rorqual may engulf nearly 70% of its total body weight in water and prey during this action, which in an adult blue whale amounts to about 70 tons (Pivorunas 1979). In order to cope with this, the tissues of the buccal pouch must be highly extensible and able to cope with massive distortion. The ventral surface of the pouch is covered by grooved blubber, on which the 50-90 grooves extend from the jaw tips to as far posteriorly as the umbilicus. The ventral grooves can be extended to 4 times their resting width, and to 1.5 times their resting length. Internal to the grooved blubber is the muscle tissue of the buccal pouch, and this is unique, containing large amounts of elastin, and consisting of an inner layer of longitudinally arranged muscle bands and an outer layer where the bands are obliquely oriented (Pivorunas 1977).
When a rorqual lunges, delicate timing is needed, otherwise the buccal pouch will rapidly fill with seawater and not with prey. How then do rorquals get their timing just right? It seems that rorquals possess batteries of sensory organs within and around the buccal pouch: there are laminated corpuscles closely associated with the ventral grooves that might serve a sensory function, and located around the edges of the jaws, and at their tips, are a number of short (12.5 mm) vibrissae. Long assumed to be vestiges from the time when whale ancestors had body hair, it now seems that these structures have a role in sensing vibrations.
Once a mass of prey is engulfed, a rorqual then has to squeeze the water out through its baleen plates while at the same time retaining the prey. Rorqual baleen plates number between 219 to 475 in each side of the jaw (the number of plates is highly variable within species, with sei whales alone having between 219 and 402), and each plate ranges in length from 20 cm (in the minkes) to 1 m (in the blue). As the whale stops lunging forward, the pressure drops off, allowing deflation of the buccal pouch. Passive contraction of the blubber grooves and active contraction of the muscle layer within the buccal pouch also occurs at this time [adjacent image, showing engulfment process in Fin whale, by Jeremy Goldbogen and Nicholas Pyenson and taken from the UC Berkeley news site. Goldbogen et al.'s research is discussed in the next article. See also Pyenson's site and Goldbogen's site].
For an outstanding sequence of photos illustrating engulfment in action, see Randy Morse’s photos of a feeding blue whale.
So that’s the basics. But there’s so much more to the subject than this. How is it that, during lunge feeding, agile, highly reactive prey remain within the mouth cavity prior to the mouth’s closure? Why do some rorquals make loud noises during lunge-feeding? Why, given their giant size and theoretical high aerobic dive limit, do big rorquals not spend more time lunge-feeding beneath the surface? Why do some rorquals exhibit strongly asymmetrical patterns of pigmentation? And don’t forget that not all rorquals lunge-feed. More on these issues in the following post.
Refs – -
Orton, L. S. & Brodie, P. F. 1987. Engulfing mechanisms of fin whales. Canadian Journal of Zoology 65, 2898-2907.
Pivorunas, A. 1977. The fibrocartilage skeleton and related structures of the ventral pouch of balaenopterid whales. Journal of Morphology 151, 299-314.
- . 1979. The feeding mechanisms of baleen whales. American Scientist 67, 432-440. | <urn:uuid:8f2f560b-dae5-475c-97cf-fc3c37678e04> | 2.671875 | 1,965 | Personal Blog | Science & Tech. | 48.710284 | 474 |
PHP is considered an insecure language to develop in not because of secret backdoors put in by the PHP language developers, but because it was initially developed without security as a major concern and compared to other languages/web frameworks its difficult to develop securely in it.
E.g., if you develop a LAMP/LAPP (linux+apache+mysql/postgresql+PHP) web app, you have to manually code in input/output sanitation to prevent SQL injection/XSS/CSRF, make sure there are no subtle calls that eval user-supplied code (like in preg_replace with a '/e' ending the regexp argument), safely deal with file uploads, make sure user passwords are securely hashed (not plaintext), authentication cookies are unguessable, secure (https) and http-only, etc.
Most modern web-frameworks simplify many of these issues by doing most of these things in a secure fashion (or initially doing them insecurely and then getting secure updates).
The risk of there being a secret backdoor in an open-source PHP is small; and the risk is present in every piece of software (windows/linux/apache/nginx/IIS/postgresql/oracle) you use -- both open-source and closed-source. The open-source ones at least have the benefit that many independent eyes look at it all the time and you could examine it if you wanted.
Also note in principle, even after fully examining the source code and finding no backdoors and fully examining the source code of your compiler (finding no backdoors), if you then recompile your compiler (bootstrap by using some untrusted existing compiler) and then compile the safe source code with your newly compiled "safe" compiler, your executable code could still have backdoors brought in from using the untrusted existing compiler to compile the new compiler. See Ken Thompson's Reflections on Trusting Trust. (The way this is defended against in practice is by using many independent and obscure compilers from multiple sources to compile any new compiler and then compare the output). | <urn:uuid:9f5695bc-5609-4c4f-ad0d-a28ed7a4e1d1> | 2.78125 | 437 | Q&A Forum | Software Dev. | 23.270071 | 475 |
The cerebral cortex, a layer of neural tissue surrounding the cerebrum of the mammalian brain, has been known to play various roles in memory, language, thought, attention, and consciousness. Up until now, no invertebrate equivalent
to the cerebral cortex has been encountered, but Detlev Arendt, Raju Tomer, and colleagues may have found an evolutionary counterpart. The obvious answer is hidden in one simple creature– the worm. Wait, what? Yeah, you heard me. The marine ragworm, found at all water depths, has been shown to possess a tissue resembling that of our mysterious cerebral cortex.
Arendt and his colleagues used a technique called cellular profiling to determine a molecular footprint for each kind of cell in this particular type of ragworm. By utilizing this technique, they were able to uncover which genes were turned on and off in each cell, providing a means for cellular categorization. Surprisingly, mushroom bodies, regions of the ragworm’s brain that are thought to control olfactory senses, show a striking similarity to tissue found in our cerebral cortex. This intriguing discovery may provide remarkable insight into the evolutionary basis of what has developed into an incredibly important cerebral structure.
Read more about this review here, or see the original article in Cell. | <urn:uuid:1a0e3f67-9ec4-493a-8b64-388a43f16bf0> | 3.484375 | 260 | Truncated | Science & Tech. | 31.768566 | 476 |
Object Pool Design Pattern
Object pooling can offer a significant performance boost; it is most effective in situations where the cost of initializing a class instance is high, the rate of instantiation of a class is high, and the number of instantiations in use at any one time is low.
Object pools (otherwise known as resource pools) are used to manage object caching. A client with access to an object pool can avoid creating new objects by simply asking the pool for one that has already been instantiated instead. Generally the pool will be a growing pool, i.e. the pool itself will create new objects if the pool is empty, or we can have a pool which restricts the number of objects created.
It is desirable to keep all Reusable objects that are not currently in use in the same object pool so that they can be managed by one coherent policy. To achieve this, the Reusable Pool class is designed to be a singleton class.
The Object Pool lets others "check out" objects from its pool; when those objects are no longer needed by their processes, they are returned to the pool in order to be reused.
However, we don’t want a process to have to wait for a particular object to be released, so the Object Pool also instantiates new objects as they are required, but must also implement a facility to clean up unused objects periodically.
The general idea for the Connection Pool pattern is that if instances of a class can be reused, you avoid creating instances of the class by reusing them.
Reusable - Instances of classes in this role collaborate with other objects for a limited amount of time, then they are no longer needed for that collaboration.
Client - Instances of classes in this role use Reusable objects.
ReusablePool - Instances of classes in this role manage Reusable objects for use by Client objects.
Usually, it is desirable to keep all Reusable objects that are not currently in use in the same object pool so that they can be managed by one coherent policy. To achieve this, the ReusablePool class is designed to be a singleton class. Its constructor(s) are private, which forces other classes to call its getInstance method to get the one instance of the ReusablePool class.
A Client object calls a ReusablePool object's acquireReusable method when it needs a Reusable object. A ReusablePool object maintains a collection of Reusable objects. It uses the collection of Reusable objects to contain a pool of Reusable objects that are not currently in use.
If there are any Reusable objects in the pool when the acquireReusable method is called, it removes a Reusable object from the pool and returns it. If the pool is empty, then the acquireReusable method creates a Reusable object if it can. If the acquireReusable method cannot create a new Reusable object, then it waits until a Reusable object is returned to the collection.
Client objects pass a Reusable object to a releaseReusable method when they are finished with the object. The releaseReusable method returns a Reusable object to the pool of Reusable objects that are not in use.
In many applications of the Object Pool pattern, there are reasons for limiting the total number of Reusable objects that may exist. In such cases, the ReusablePool object that creates Reusable objects is responsible for not creating more than a specified maximum number of Reusable objects. If ReusablePool objects are responsible for limiting the number of objects they will create, then the ReusablePool class will have a method for specifying the maximum number of objects to be created. That method is indicated in the above diagram as setMaxPoolSize.
Do you like bowling? If you do, you probably know that you should change your shoes when you get to the bowling club. The shoe shelf is a wonderful example of an Object Pool. Once you want to play, you get your pair from it (acquireReusable). After the game, you return the shoes to the shelf (releaseReusable).
To implement the pattern:
- Create an ObjectPool class with a private array of the objects it manages.
- Create acquire and release methods in the ObjectPool class.
- Make sure that your ObjectPool is a Singleton.
Rules of thumb
- The Factory Method pattern can be used to encapsulate the creation logic for objects. However, it does not manage them after their creation; the Object Pool pattern, by contrast, keeps track of the objects it creates.
- Object Pools are usually implemented as Singletons.
Object Pool code examples
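The original examples are not included above, so the following is one possible sketch of the pattern in PHP. The class and method names follow the description above (ReusablePool, acquireReusable, releaseReusable, setMaxPoolSize); the waiting and periodic clean-up behaviour discussed earlier is omitted for brevity.

<?php
class Reusable
{
    // Stand-in for an expensive-to-create resource, e.g. a database connection.
}

class ReusablePool
{
    private static $instance = null;  // the single pool instance
    private $available = [];          // Reusable objects not currently in use
    private $created = 0;             // total number of objects created so far
    private $maxPoolSize = 10;        // upper limit on objects created

    private function __construct() {} // private constructor: use getInstance()

    public static function getInstance()
    {
        if (self::$instance === null) {
            self::$instance = new ReusablePool();
        }
        return self::$instance;
    }

    public function setMaxPoolSize($max)
    {
        $this->maxPoolSize = $max;
    }

    public function acquireReusable()
    {
        if (!empty($this->available)) {
            return array_pop($this->available);   // reuse an idle object
        }
        if ($this->created < $this->maxPoolSize) {
            $this->created++;
            return new Reusable();                // grow the pool
        }
        return null;  // a real pool would wait or queue until an object is released
    }

    public function releaseReusable(Reusable $r)
    {
        $this->available[] = $r;                  // back into the idle pool
    }
}

// Usage: check an object out, collaborate with it, then return it for reuse.
$pool = ReusablePool::getInstance();
$pool->setMaxPoolSize(5);
$obj = $pool->acquireReusable();
// ... use $obj ...
$pool->releaseReusable($obj);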
|This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License| | <urn:uuid:2cf5c550-c98e-4074-8ffd-0e998c786591> | 2.875 | 999 | Documentation | Software Dev. | 41.722381 | 477 |
The re-introduction of wolves in a US National Park in the mid-1990s is not helping quaking aspens (Populus tremuloides) to become re-established, as many researchers hoped.
A study published in the journal Ecology showed that the population of wolves in Yellowstone Park was not deterring elk from eating young trees and saplings.
It was assumed that the presence of wolves would create a “landscape of fear”, resulting in no-go areas for elk.
Researchers writing in the Ecological Society of America’s (ESA) journal said that the aspens were not regenerating well in the park as a result of the elk eating the young trees.
However, they added that conventional wisdom had suggested that, because the wolves were predators of the elk, the elk would eventually learn to avoid high-risk areas in which the wolves were found.
This would then allow plants in those areas – such as aspen – to grow big enough without being eaten and killed by the elk. And in the long-term, the thinking went, the habitat would be restored.
In this latest study, lead author Matthew Kauffman – a US Geological Survey scientist – suggested the findings showed that claims of an ecosystem-wide recovery of aspen were premature.
“This study not only confirms that elk are responsible for the decline of aspen in Yellowstone beginning in the 1890s, but also that none of the aspen groves studied after wolf restoration appear to be regenerating, even in areas risky to elk,” Dr Kauffman explained.
Because the “landscape of fear” idea did not appear to be benefiting aspen, the team concluded that if the Northern Range elk population did not continue to decline (their numbers are 40% of what they were before wolves), many of Yellowstone’s aspen stands were unlikely to recover.
“A landscape-level aspen recovery is likely only to occur if wolves, in combination with other predators and climate factors, further reduce the elk population,” observed Dr Kauffman.
The paper, Are wolves saving Yellowstone’s aspen? A landscape-level test of a behaviorally mediated trophic cascade, has been published online in Ecology. The authors of the paper are: Matthew Kauffman (USGS), Jedediah Brodie (University of Montana) and Erik Jules (Humboldt State University).
Source: ESA press release
This tree diagram shows the relationships between several groups of organisms.
The root of the current tree connects the organisms featured in this tree to their containing group and the rest of the Tree of Life. The basal branching point in the tree represents the ancestor of the other groups in the tree. This ancestor diversified over time into several descendent subgroups, which are represented as internal nodes and terminal taxa to the right.
You can click on the root to travel down the Tree of Life all the way to the root of all Life, and you can click on the names of descendent subgroups to travel up the Tree of Life all the way to individual species.
Insects have a large number of unique, derived characteristics, although none of these are externally obvious in most species. These include (Kristensen, 1991):
- lack of musculature beyond the first segment of antenna.
- Johnston's organ in pedicel (second segment) of antenna. This organ is a collection of sensory cells that detect movement of the flagellum.
- a transverse bar forming the posterior tentorium inside the head
- tarsi subsegmented
- females with ovipositor formed by gonapophyses from segments 8 and 9
- annulated, terminal filament extending out from end of segment 11 of abdomen (subsequently lost in most groups of insects)
One notable feature linking Thysanura + Pterygota is the presence of two articulations on each mandible. Archaeognathans have only one mandibular condyle or articulation point; they are "monocondylic". Thysanura + Pterygota, with their two mandibular condyles, are sometimes called Dicondylia. The many other apomorphies linking Dicondylia are described in Kristensen (1991).
It is possible that the thysanurans are not themselves monophyletic; Thysanura (exclusive of the family Lepidothricidae) plus pterygotes may be monophyletic, with lepidothricids sister to this complex (Kristensen, 1991).
Beutel, R. G. and S. N. Gorb. 2001. Ultrastructure of attachment specializations of hexapods, (Arthropoda): evolutionary patterns inferred from a revised ordinal phylogeny. Journal of Zoological Systematics and Evolutionary Research 39:177-207.
Bitsch, J. and A. Nel. 1999. Morphology and classification of the extinct Archaeognatha and related taxa (Hexapoda). Annales de la Société entomologique de France 35:17-29.
Boudreaux, H. B. 1979. Arthropod Phylogeny with Special Reference to Insects. New York, J. Wiley.
Carpenter, F. M. 1992. Superclass Hexapoda. Volumes 3 and 4 of Part R, Arthropoda 4 of Treatise on Invertebrate Paleontology. Boulder, Colorado, Geological Society of America.
Carpenter, F. M. and L. Burnham. 1985. The geological record of insects. Annual Review of Earth and Planetary Sciences 13:297-314.
Caterino, M. S., S. Cho, and F. A. H. Sperling. 1999. The current state of insect molecular systematics: a thriving tower of Babel. Annual Review of Entomology 45:1–54.
Chapman, R. F. 1998. The Insects: Structure and Function. Cambridge University Press, Cambridge, U.K., New York.
Daly, H. V., J. T. Doyen, and A. H. Purcell III. 1998. Introduction to Insect Biology and Diversity, 2nd edn. Oxford University Press, Oxford.
Dindall, D. L. 1990. Soil Biology Guide. New York, John Wiley & Sons.
Engel, M. S. and D. A. Grimaldi. 2004. New light shed on the oldest insect. Nature 427:627-630.
Evans, H. E. 1993. Life on a Little-Known Planet. New York, Lyons & Burford.
Gereben-Krenn, B. A. and G. Pass. 2000. Circulatory organs of abdominal appendages in primitive insects (Hexapoda : Archaeognatha, Zygentoma and Ephemeroptera). Acta Zoologica 81:285-292.
Grimaldi, D. 2001. Insect evolutionary history from Handlirsch to Hennig, and beyond. Journal of Paleontology 75:1152-1160.
Grimaldi, D. and M. S. Engel. 2005. Evolution of the Insects. Cambridge University Press.
Hennig, W. 1981. Insect Phylogeny. New York, J. Wiley.
Kjer, K. M. 2004. Aligned 18S and insect phylogeny. Systematic Biology 53(3):506-514.
Klass, K. D. 1998. The proventriculus of the Dicondylia, with comments on evolution and phylogeny in Dictyoptera and Odonata (Insecta). Zoologischer Anzeiger 237:15-42.
Kristensen, N. P. 1975. The phylogeny of hexapod "orders". A critical review of recent accounts. Zeitschrift für zoologische Systematik und Evolutionsforschung 13:1–44.
Kristensen, N. P. 1981. Phylogeny of insect orders. Annual Review of Entomology 26:135-157.
Kristensen, N. P. 1995. Forty years' insect phylogenetic systematics. Zoologische Beiträge NF 36(1):83-124.
Kukalová-Peck, J. 1987. New Carboniferous Diplura, Monura, and Thysanura, the hexapod ground plan, and the role of thoracic lobes in the origin of wings (Insecta). Canadian Journal of Zoology 65:2327-2345.
Labandeira, C. C., and J. J. Sepkoski, jr. 1993. Insect diversity in the fossil record. Science 261:310–315.
Larink, O. 1997. Apomorphic and plesiomorphic characteristics in Archaeognatha, Monura, and Zygentoma. Pedobiologia 41:3-8.
Merritt, R. W. and K. W.Cummins, eds. 1984. An Introduction to the Aquatic Insects of North America, Second Edition. Kendall-Hunt.
Naumann, I. D., P. B. Carne, J. F. Lawrence, E. S. Nielsen, J. P. Spradberry, R. W. Taylor, M. J. Whitten and M. J. Littlejohn, eds. 1991. The Insects of Australia: A Textbook for Students and Research Workers. Volume I and II. Second Edition. Carlton, Victoria, Melbourne University Press.
Pass, G. 2000. Accessory pulsatile organs: Evolutionary innovations in insects. Annual Review of Entomology 45:495-518.
Snodgrass, R. E. 1935. Principles of Insect Morphology. McGraw-Hill, New York. 667 pp.
Snodgrass, R. E. 1952. A Textbook of Arthropod Anatomy. Comstock Publishing Associates, Ithaca, N.Y. 363 pp.
Stehr, F. W. 1987. Immature Insects, vol. 1. Dubuque, Iowa: Kendal/Hunt. 754 pp.
Stehr, F. W. 1991. Immature Insects, vol. 2. Dubuque, Iowa: Kendal/Hunt. 974 pp.
Wooton, R. J. 1981. Paleozoic insects. Annual Review of Entomology 26:319-344.
- Smithsonian Institution Department of Entomology.
- Entomology Department of Harvard's Museum of Comparative Zoology
- Entomology Department. California Academy of Sciences.
- The Essig Museum of Entomology. Berkeley, California.
- Insect Division. University of Michigan Museum of Zoology.
- Bishop Museum Hawaii Entomology Home
- Introduction to Insect Biology & Classification. The University of Queensland.
- Virtual Exhibit on Canada's Biodiversity: Insects.
- Entomological Data Information System (EDIS). Staatliches Museum für Naturkunde Stuttgart, Germany.
- Compendium of Hexapod Classes and Orders. North Carolina State University.
- Nomina Insecta Nearctica. A Checklist of the Insects of North America.
- Common Names of Insects in Canada. Entomological Society of Canada.
- The Canadian National Collection (CNC) of Insects, Arachnids and Nematodes.
- Singing Insects of North America. By Thomas J. Walker (crickets and katydids) and Thomas E. Moore (cicadas).
- Entomology Database KONCHU. Species Information Database on Japanese, East Asian and Pacific Insects, Spiders and Mites.
- A Catalogue of the Insects of South Africa.
- CSIRO Entomology Home Page.
- General Entomology Resources from Scientific Reference Resources:
- Entomological Society of America.
- Chemical Ecology of Insects. John A. Byers, USDA-ARS.
- elin. Entomology Libraries and Information Networks.
- Entomological Glossary.
- Popular Classics in Entomology. Colorado State University.
- Forensic Entomology Pages, International.
- Book of Insect Records. University of Florida.
- Alien Empire. Companion piece to a PBS Nature program.
- Entomology Index of Internet Resources. Iowa State University.
- Entomology on the WWW. Colorado State University.
- Entomology on the WWW. Michigan State University.
- BIOSIS BiologyBrowser: Insecta.
Images and Other Media:
- BugGuide.Net. An online community of naturalists who enjoy learning about and sharing observations of insects, spiders, and other related creatures.
- Hawaiian Insect Image Galleries. Bishop Museum.
- Entomology Image Gallery. Iowa State University.
- Very Cool Bugs.
- Dennis Kunkel's Microscopy.
- The Virtual Insectary.
- Thais in 2000: Entomology.
- Reference Library of Digitized Insect Sounds. Richard Mankin, Center for Medical, Agricultural and Veterinary Entomology, Gainesville, Florida.
- Meganeura. Palaeoentomological Newsletter.
- Eocene Fossils.
- Fossil insects from Florissant (Colorado). Peabody Museum of Natural History.
- Stewart Valley Fossil Insects. California Academy of Sciences.
- Amber and Copal: Their Significance in the Fossil Record. Hooper Virtual Natural History Museum.
- The Natural History of Amber. 3 Dot Studio.
- Frozen Dramas. Swedish Amber Museum.
- Nature's Preservative--Organic Flypaper: Amber Gives a Green Light to Study of Ancient Life. The Why files. University of Wisconsin.
- Amber Home. Gary Platt.
- Baltic Amber Inclusions. Wolfgang Wiggers.
- Dominican Amber Fossils. ESP Designs.
- The Amber Room. Steve Kurth.
- Thomas Say (1787-1834), father of American entomology.
- Charles Darwin (1809-1882), AboutDarwin.com.
- Jean-Henri Fabre (1823-1915) e-museum.
- Famous Entomologists on Postage Stamps.
For young entomologists:
- bugbios. Shameless promotion of insect appreciation by Dexter Sear.
- O. Orkin Insect Zoo. National Museum of Natural History. Smithsonian Institution.
- Bug Camp. Field Museum of Natural History, Chicago.
- Insectclopedia. Links to websites about insects.
- Bugscope. Educational outreach project of the World Wide Laboratory.
- The Bug Club for Young Entomologists. UK Amateur Entomologists' Society.
- The Wonderful World of Insects.
- Class: Insecta. Spencer Entomological Museum at the University of British Columbia, Vancouver, Canada.
Page copyright © 2002
All Rights Reserved.
Citing this page:
Tree of Life Web Project. 2002. Insecta. Insects. Version 01 January 2002 (under construction). http://tolweb.org/Insecta/8205/2002.01.01 in The Tree of Life Web Project, http://tolweb.org/ | <urn:uuid:1175fce3-6e6f-4cc3-a81c-14516c744153> | 3.4375 | 2,746 | Knowledge Article | Science & Tech. | 43.62091 | 479 |
By Chris Wickham
LONDON (Reuters) - Large-scale engineering projects aimed at fighting global warming could radically reduce rainfall in Europe and North America, a team of scientists from four European countries have warned.
Geoengineering projects are controversial, even though they are largely theoretical at this point. They range from mimicking the effects of large volcanic eruptions by releasing sulphur dioxide into the atmosphere, to deploying giant mirrors in space to deflect the sun's rays.
Proponents say they could be a rapid response to rising global temperatures but environmentalists argue they are a distraction from the need to reduce man-made carbon emissions.
Critics also point to a lack of solid research into unintended consequences and the absence of any international governance structure for such projects, whose effects could transcend national borders.
A small geoengineering experiment in the UK was recently abandoned due to a dispute over attempts by some of the team involved to patent the technology.
In this new study scientists from Germany, Norway, France and the UK used four different computer models that mimic the earth's climate to see how they responded to increased levels of carbon dioxide coupled with reduced radiation from the sun.
Their scenario assumed a world with four times the carbon dioxide concentration of the preindustrial world, which lead author Hauke Schmidt says is at the upper end, but in the range of what is considered possible at the end of this century.
They found that global rainfall was reduced by about 5 percent on average using all four models.
"Climate engineering cannot be seen as a substitute for a policy pathway of mitigating climate change through the reduction of greenhouse gas emissions," they said in the study, published in Earth System Dynamics, an open access journal of the European Geosciences Union.
Under the scenario studied, rainfall diminished by about 15 percent, or about 100 millimeters per year, compared to pre-industrial levels, in large areas of North America and northern Eurasia.
Over central South America, all the models showed a decrease in rainfall that reached more than 20 percent in parts of the Amazon region.
(Editing by Mark Heinrich) | <urn:uuid:2d6ed62d-530d-4e14-9b7c-e71629d6b1a1> | 3.203125 | 425 | News Article | Science & Tech. | 24.368692 | 480 |
In Writing Secure PHP, I covered a few of the most common security holes in websites. It's time to move on, though, to a few more advanced techniques for securing a website. As techniques for 'breaking into' a site or crashing a site become more advanced, so must the methods used to stop those attacks.
Most hosting environments are very similar, and rather predictable. Many web developers are also very predictable. It doesn't take a genius to guess that a site's includes directory (and most dynamic sites use an includes directory for common files) is at www.website.com/includes/. If the site owner has allowed directory listing on the server, anyone can navigate to that folder and browse files.
Imagine for a second that you have a database connection script, and you want to connect to the database from every page on your site. You might well place that in your includes folder, and call it something like connect.inc. However, this is very predictable - many people do exactly this. Worst of all, a file with the extension ".inc" is usually rendered as text and output to the browser, rather than processed as a PHP script - meaning if someone were to visit that file in a browser, they'll be given your database login information.
Placing important files in predictable places with predictable names is a recipe for disaster. Placing them outside the web root can help to lessen the risk, but is not a foolproof solution. The best way to protect your important files from vulnerabilities is to place them outside the web root, in an unusually-named folder, and to make sure that error reporting is set to off (which should make life difficult for anyone hoping to find out where your important files are kept). You should also make sure directory listing is not allowed, and that all folders have a file named "index.html" in (at least), so that nobody can ever see the contents of a folder.
Never, ever, give a file the extension ".inc". If you must have ".inc" in the extension, use the extension ".inc.php", as that will ensure the file is processed by the PHP engine (meaning that anything like a username and password is not sent to the user). Always make sure your includes folder is outside your web root, and not named something obvious. Always make sure you add a blank file named "index.html" to all folders like include or image folders - even if you deny directory listing yourself, you may one day change hosts, or someone else may alter your server configuration - if directory listing is allowed, then your index.html file will make sure the user always receives a blank page rather than the directory listing. As well, always make sure directory listing is denied on your web server (easily done with .htaccess or httpd.conf).
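For Apache servers, directory listing can typically be switched off with a single directive, either in the relevant <Directory> block of httpd.conf or in an .htaccess file (assuming the server's AllowOverride setting permits it):

Options -Indexes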
Out of sheer curiosity, shortly after writing this section of this tutorial, I decided to see how many sites I could find in a few minutes vulnerable to this type of attack. Using Google and a few obvious search phrases, I found about 30 database connection scripts, complete with usernames and passwords. A little more hunting turned up plenty more open include directories, with plenty more database connections and even FTP details. All in, it took about ten minutes to find enough information to cause serious damage to around 50 sites, without even using these vulnerabilities to see if it were possible to cause problems for other sites sharing the same server.
Most site owners now require an online administration area or CMS (content management system), so that they can make changes to their site without needing to know how to use an FTP client. Often, these are placed in predictable locations (as covered in the last article), however placing an administration area in a hard-to-find location isn't enough to protect it.
Most CMSes allow users to change their password to anything they choose. Many users will pick an easy-to-remember word, often the name of a loved one or something similar with special significance to them. Attackers will use something called a "dictionary attack" (or "brute force attack") to break this kind of protection. A dictionary attack involves entering each word from the dictionary in turn as the password until the correct one is found.
The best way to protect against this is threefold. First, you should add a turing test to a login page. Have a randomly generated series of letters and numbers on the page that the user must enter to login. Make sure this series changes each time the user tries to login, that it is an image (rather than simple text), and that it cannot be identified by an optical character recognition script.
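A rough sketch of such a turing test is shown below, using PHP's GD extension; it stores a random code in the session so the login handler can compare it against what the user typed, and the login form would embed it with an image tag pointing at this script. The file name, session key name and image dimensions are arbitrary choices for the example, and a production version would need the distortion and noise mentioned above to defeat optical character recognition.

<?php
// captcha.php (hypothetical file name): draw a random challenge code as a PNG.
session_start();

// Generate a 6-character code and remember it for the login handler to check.
$code = substr(str_shuffle('ABCDEFGHJKLMNPQRSTUVWXYZ23456789'), 0, 6);
$_SESSION['captcha_code'] = $code;

// Render the code onto a small image using the GD extension.
$image = imagecreatetruecolor(120, 40);
$background = imagecolorallocate($image, 255, 255, 255);
$textColour = imagecolorallocate($image, 30, 30, 30);
imagefilledrectangle($image, 0, 0, 120, 40, $background);
imagestring($image, 5, 20, 12, $code, $textColour);

// Send it to the browser as a PNG.
header('Content-Type: image/png');
imagepng($image);
imagedestroy($image);

The login handler would then reject the attempt unless the submitted value matches $_SESSION['captcha_code'], regenerating the code on every page load.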
Second, add in a simple counter. If you detect a certain number of failed logins in a row, disable logging in to the administration area until it is reactivated by someone responsible. If you only allow each potential attacker a small number of attempts to guess a password, they will have to be very lucky indeed to gain access to the protected area. This might be inconvenient for authentic users, however is usually a price worth paying.
Finally, make sure you track IP addresses of both those users who successfully login and those who don't. If you spot repeated attempts from a single IP address to access the site, you may consider blocking access from that IP address altogether.
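A minimal sketch of the counter and IP tracking might look like the following; the table name, column names, the 15-minute window and the check_credentials() stub are all assumptions made for this example rather than anything prescribed above.

<?php
// Reject logins from an IP address that has failed too many times recently.
$maxAttempts = 5;

$pdo = new PDO('mysql:host=localhost;dbname=example', 'db_user', 'db_pass');
$ip = $_SERVER['REMOTE_ADDR'];

// Count this IP's recent failures (assumed table: login_failures).
$stmt = $pdo->prepare(
    'SELECT COUNT(*) FROM login_failures
     WHERE ip = ? AND attempted_at > (NOW() - INTERVAL 15 MINUTE)'
);
$stmt->execute([$ip]);

if ((int) $stmt->fetchColumn() >= $maxAttempts) {
    exit('Too many failed logins. Please try again later.');
}

// Placeholder: a real application would verify a password hash here.
function check_credentials($username, $password)
{
    return false;
}

if (!check_credentials($_POST['username'] ?? '', $_POST['password'] ?? '')) {
    // Record the failure so repeated guessing eventually locks this IP out.
    $log = $pdo->prepare('INSERT INTO login_failures (ip, attempted_at) VALUES (?, NOW())');
    $log->execute([$ip]);
    exit('Login failed.');
}

// Successful login: clear this IP's old failures and start the session here.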
One excellent way to limit the damage, should someone gain access to your database who shouldn't be able to, is to restrict what the database user can do. Modern databases like MySQL and SQL Server allow you to control what a user can and cannot do. You can give users (or not) permission to create data, edit, delete, and more using these permissions. Usually, I try to ensure that I only allow users to add and edit data.
If a site requires an item be deleted, I will usually set the front end of the site to only appear to delete the item. For example, you could have a numeric field called "item_deleted", and set it to 1 when an item is deleted. You can then use that to prevent users seeing these items. You can then purge these later if required, yourself, while not giving your users "delete" permissions for the database. If a user cannot delete or drop tables, neither can someone who finds out the user login to the database (though obviously they can still do damage).
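A sketch of that approach, with assumed table and column names (items, item_deleted), might look like this:

<?php
$pdo = new PDO('mysql:host=localhost;dbname=example', 'db_user', 'db_pass');

// "Delete" an item from the front end: flag the row instead of removing it,
// so the database account never needs DELETE permission.
$itemId = (int) ($_POST['id'] ?? 0);
$stmt = $pdo->prepare('UPDATE items SET item_deleted = 1 WHERE id = ?');
$stmt->execute([$itemId]);

// Everywhere the site lists items, exclude the flagged rows.
$rows = $pdo->query('SELECT id, title FROM items WHERE item_deleted = 0')->fetchAll();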
PHP contains a variety of commands that have access to the server's operating system and can interact with other programs. Unless you need access to these specific commands, it is highly recommended that you disable them entirely.
For example, the eval() function allows you to treat a string as PHP code and execute it. This can be a useful tool on occasion. However, if using the eval() function on any input from the user, the user could cause all sorts of problems. You could be, without careful input validation, giving the user free reign to execute whatever commands he or she wants.
There are ways to get around this. Not using eval() is a good start. However, the php.ini file gives you a way to completely disable certain functions in PHP - "disable_functions". This directive of the php.ini file takes a comma-separated list of function names, and will completely disable these in PHP. Commonly disabled functions include ini_set(), exec(), fopen(), popen(), passthru(), readfile(), file(), shell_exec() and system().
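As a concrete illustration, the corresponding php.ini line might look like the following (tailor the list to what your application genuinely needs; note that eval() is a language construct rather than a function, so it cannot be switched off this way):

disable_functions = exec,passthru,shell_exec,system,popen,ini_set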
It may be (it usually is) worth enabling safe_mode on your server. This instructs PHP to limit the use of functions and operators that can be used to cause problems. If it is possible to enable safe_mode and still have your scripts function, it is usually best to do so.
Finally, Be Completely and Utterly Paranoid
Much as I hate to bring this point up again, it still holds true (and always will). Most of the above problems can be avoided through careful input validation. Some become obvious points to address when you assume everyone is out to destroy your site. If you are prepared for the worst, you should be able to deal with anything.
Ready for more? Try Writing Secure PHP, Part 3. | <urn:uuid:88a3943e-6d77-4211-8079-34a232f009fd> | 2.546875 | 1,679 | Tutorial | Software Dev. | 48.596991 | 481 |
Spiders: True or False?
Spiders evolved more than 300 million years ago, long before dinosaurs walked the Earth.
- True. Those ancient spiders didn't build webs but sought the safety of burrows dug underground. There, they were shaded from the Sun and protected from predators.
There are as many species of mammals as there are spiders.
- False. While there are about 6,000 mammal species, scientists have identified over 43,000 spider species so far. There may be at least as many still out there to be discovered.
Every spider sheds its exoskeleton, the inflexible outer shell, several times during its life.
- True. Most stop molting once they reach maturity, though females from some relatively primitive families of spiders continue to do so throughout their lives.
Spiders have poor eyesight.
- True. Nearly all have eight simple eyes--consisting of one lens and a retina--arranged in different ways but, for the most part, don't see very well. In most cases, spiders use other senses, like touch and smell, to help capture prey.
All spiders make webs.
- False. Only about 50 percent of known spider species do. Others hunt their prey or burrow underground. One species, Argyroneta aquatica, lives underwater.
All spider silk is the same.
- False. Spiders make many different kinds of silk, each with a property--toughness, flexibility, stickiness--specific to the task it performs.
Spiders are important predators.
- True. By one estimate, the spiders on one acre of woodland alone consume more than 80 pounds (36 kg) of insects a year! Those insect populations would explode without predators.
Not all spiders have venom.
- True. Those in the group Uloboridae don't. Instead of subduing their prey with venom, they wrap it tightly with silk.
I'm very likely to be bitten by a spider.
- False. With a few exceptions, spiders are very shy. They almost always run away rather than bite. In addition, misdiagnosis of insect bites as spider bites is very common.
Spiders don't care for their offspring.
- False. Many spiders do. For instance, a female wolf spider may carry an egg sac containing her young for weeks. Once the spiderlings hatch, she hauls as many as 100 or more of them on her back for another week or so.
Like many animals, spiders are threatened by habitat destruction and introduced species.
- True. Despite these threats, spiders and other non-vertebrates can be overlooked in conservation planning, in part because they're so small. | <urn:uuid:355f91bc-3817-4af4-8f64-cd94b2bfbe6d> | 3.390625 | 556 | Q&A Forum | Science & Tech. | 56.747675 | 482 |
One half of optics was missing
At optical frequencies, electromagnetic waves interact with an ordinary optical material (e.g., glass) via the electronic polarizability of the material. In contrast, the corresponding magnetizability is negligible for frequencies above a few THz, or in other words, its magnetic permeability is identical to unity (μ(ω)=1). Consequently, the optical properties of an ordinary optical material are completely characterized by its electric permittivity ε(ω) (or dielectric function). As a result, we can only directly manipulate the electric component of light with an appropriate optical device while we have no immediate handle on the corresponding magnetic component. One half of optics has been missing.
Artificial magnetism at optical frequencies
Photonic metamaterials open up a way to overcome this constraint. The basic idea is to create an artificial crystal with significantly sub-wavelength periods. Analogous to an ordinary optical material, such a photonic metamaterial can approximately be treated as an effective medium characterized by effective material parameters ε(ω) and μ(ω). However, the proper design of the elementary building blocks (or "artificial atoms" or "meta-atoms") of the photonic metamaterial allows for a non-vanishing magnetic response and even for μ<0 at optical frequencies – despite the fact that the constituent materials of the photonic metamaterial are completely non-magnetic.
Negative refractive index …
Much of the early excitement in the field has been about achieving a negative index of refraction n<0 by simultaneous ε<0 and µ<0 at near-infrared or even at visible frequencies. A negative refractive index means that the phase velocity of light is opposite to the electromagnetic energy flow (the Poynting vector). This unusual situation has inspired fascinating ideas like the so-called "perfect lens", which employs the fact that the optical path length between two spatially separate points can be made equal to zero, rendering the two points equivalent for the purpose of optics.
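As a brief aside (a standard result from the metamaterials literature rather than a statement from this text): for an idealized lossless medium the index follows from $n^2 = \varepsilon\,\mu$, and when both $\varepsilon < 0$ and $\mu < 0$ the causally consistent branch of the square root is the negative one,

$n = -\sqrt{\varepsilon\,\mu} \qquad (\varepsilon < 0,\ \mu < 0),$

so that, for example, $\varepsilon = \mu = -1$ gives $n = -1$, the condition assumed in the original perfect-lens proposal.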
… and beyond
Artificial magnetism is also a necessary prerequisite for obtaining strong optical activity and circular dichroism. These phenomena are based on magnetic dipoles excited by the electric component of the light field and vice versa. Three-dimensional metal helices have been a corresponding paradigm building block in optical textbooks, but their nanofabrication has not been possible until quite recently. Such gold-helix metamaterials can be applied as compact and broadband (more than one octave) circular polarizers - the circular analogue of the good old wire-grid linear polarizer (already used by Heinrich Hertz in his pioneering experiments on electromagnetic waves in Karlsruhe in 1887) and possibly a first down-to-earth application of the deceptively simple but far-reaching ideas of photonic metamaterials.
Transforming optical space
Further flexibility for achieving certain functions arises from intentionally spatially inhomogeneous optical metamaterials. Such structures can be designed using the concepts of transformation optics, which is inspired by Albert Einstein’s theory of General Relativity. In essence, distortions of actual space (e.g., due to heavy masses) can equivalently be mimicked by distortions of optical space, i.e., by tailoring the local index of refraction. Invisibility cloaking structures have been a demanding benchmark example for the strength of transformation optics because invisibility cloaks would have been considered "impossible" just five years ago. Today, direct laser writing has allowed for the first three-dimensional invisibility cloaking structures. Lately, even visible operation frequencies have become accessible.
A complete list of publications can be found here. | <urn:uuid:daebb916-fc77-4989-a71e-ab10d0c822cd> | 3.46875 | 765 | Knowledge Article | Science & Tech. | 13.463016 | 483 |
Using the code
The object is called XMLWriter. It automatically replaces invalid characters such as quotation marks or greater-than signs with the appropriate XML entities. However, it does not throw exceptions on invalid tag names, because the application I’m writing won’t have the possibility of producing invalid tag names. If you want to add tag-name validation to the object, it should not be a difficult task.
The object is created with the new command like so:
var XML = new XMLWriter();
The XMLWriter object has the following public methods:
BeginNode (Name)
EndNode ()
Attrib (Name, Value)
WriteString (Value)
Node (Name, Value)
Close ()
ToString ()
BeginNode writes out an opening tag, giving it the name that you pass to the method. Below is an example, followed by the XML it would produce:
XML.BeginNode("Foo");
<Foo
EndNode ends the current node (if any are still open). So, following from the BeginNode example, if we were to write XML.EndNode(), the writer would write “/>”. The object is smart enough to know if you have written any text or inner nodes out, and will write “</Foo>” if necessary.
Attrib writes out an attribute and value on the currently open node. Below is an example, followed by the XML it would produce:
XML.Attrib("Bar", "Some Value");
Bar="Some Value"
WriteString writes out a string value to the XML document at the current position.
The Node method writes out a named tag and its value in a single call, as illustrated below:
XML.Node("MyNode", "My Value");
<MyNode>My Value</MyNode>
The Close method does not necessarily need to be called, but it’s a good idea to do so. What it does is end any nodes that have not been ended.
The ToString method returns the entire XML document as a single string (duh).
I’ve provided some sample code. The XMLWriter.js file contains all the code you will need to write XML. It is clean code, but uncommented. I’ve tested this code in IE 6.0 and FireFox 1.5. | <urn:uuid:dc06ca57-286f-4242-94bc-3cc2724496d8> | 2.5625 | 440 | Documentation | Software Dev. | 58.905508 | 484 |
Cascading Style Sheets and themes development
Cascading Style Sheets, commonly referred to as CSS, is most often used to style web pages written in HTML and XHTML, but can be used together with any kind of XML document. It is a style sheet language used to describe the look and formatting (presentation semantics) of a document written in a markup language.
The primary use of CSS is to separate document content from document presentation, such as layout, fonts and colors. This allows for tableless web design, gives the web designer more flexibility and control, and makes it possible for multiple pages to share the same formatting.
The CSS specifications are maintained by the World Wide Web Consortium (W3C).
CSS History – the beginning
Before CSS was developed, the presentational attributes of HTML documents were almost always contained within the HTML markup. The web designer had to explicitly describe all backgrounds, font colors, borders, element alignments, etcetera. The aim of CSS was to allow web designers to move most of this information to a separate style sheet.
Style sheets have been around since the early days of SGML (Standard Generalized Markup Language), i.e. since the 1970s. As HTML became more and more widely used, it came to encompass a wide variety of stylistic possibilities to meet the demands of increasingly complex web page designs. The designers gained more and more control, but in the process, HTML became more and more complicated to write and maintain.
Robert Cailliau, the Belgian informatics engineer who together with Tim Berners-Lee developed the World Wide Web, wanted to find a way to separate the structure from the presentation. He also wanted to give the user the option of choosing between three different kinds of style sheets: one for screen presentation, one suitable for printing and one for the editor.
Eventually, nine different style sheet languages were presented to the World Wide Web Consortium. Two of them were chosen: Cascading HTML Style Sheets (CHSS) and Stream-based Style Sheet Proposal (SSP). CHSS had been suggested by Norwegian web pioneer Håkon Wium Lie, while SSP was the brainchild of Dutch computer scientist Bert Bos. Lie teamed up with computer scientists Yves Lafon and Dave Raggett to make Raggett's Arena browser support CSS, while Lie and Bos worked together to turn CHSS into the CSS standard. (The letter H was removed since their style sheets were to be applied to more than just HTML.)
CSS History – CSS level 1 and level 2 Recommendations
In 1994, CSS was presented at the Mosaic and the Web conference in Chicago. Unlike existing style languages such as DSSSL and FOSI, CSS made it possible to use multiple style sheets for the same document and allowed the design to be controlled by both designer and user.
In December 1996, CSS became official through the publishing of the CSS level 1 Recommendation. In May 1998, the World Wide Web Consortium published the CSS level 2 Recommendation. The CSS level 3 Recommendation has not yet been published, even though it has been in the works since 1998. | <urn:uuid:eaa36fc8-ecca-48d1-8983-07e89d8ceb11> | 3.921875 | 649 | Knowledge Article | Software Dev. | 46.424753 | 485 |
Scientists from Stanford University, the Wildlife Conservation Society, the American Museum of Natural History, and other organizations are closing in on the answer to an important conservation question: how many humpback whales once existed in the North Atlantic?
Building on previous genetic analyses to estimate the pre-whaling population of North Atlantic humpback whales, the research team has found that humpbacks used to exist in numbers of more than 100,000 individuals. The new, more accurate estimate is lower than previously calculated but still two to three times higher than pre-whaling estimates based on catch data from whaling records.
Known for its distinctively long pectoral fins, acrobatics, and haunting songs, the humpback whale occurs in all the world's oceans. Current estimates for humpback whale numbers are widely debated, but some have called for the level of their international protection to be dropped.
The study appears in the recently published edition of Conservation Genetics. The authors include: Kristen Ruegg and Stephen Palumbi of Stanford University; Howard C. Rosenbaum of the Wildlife Conservation Society and the American Museum of Natural History; Eric C. Anderson of the National Marine Fisheries Service and University of California-Santa Cruz; Marcia Engel of the Instituto Baleia Jubarte/Humpback Whale Institute, Brazil; Anna Rothschild of AMNH's Sackler Institute for Comparative Genomics; and C. Scott Baker of Oregon State University.
"We're certain that humpback whales in the North Atlantic have significantly recovered from commercial whaling over the past several decades of protection, but without an accurate size estimate of the pre-whaling population, the threshold of recovery remains unknown," said Dr. Kristen Ruegg of Stanford University and the lead author of the study. "We now have a solid, genetically generated estimate upon which future work on this important issue can be based."
"Our current challenge is to explain the remaining discrepancy between the historical catch data and the population estimate generated by genetic analyses," said Dr. Howard Rosenbaum, study co-author and Director of the Wildlife Conservation Society's Ocean Giants Program. "The gap highlights the need for continued evaluations of whale populations, and presents new information informing the debate and challenges associated with recovery goals."
"We have spent a great deal of effort refining the techniques and approaches that give us this pre-whaling number," said Dr. Steve Palumbi of Stanford. "It's worth the trouble because genetic tools give one of the only glimpses into the past we have for whales."
Reaching some 50 feet in length, the humpback whale was hunted for centuries by commercial whaling fleets in all the world's oceans. Humpbacks had predictable migration routes and were reduced to several hundred whales in the North Atlantic. The global population was reduced by possibly 90 percent of its original size. The species received protection from the International Whaling Commission in North Atlantic waters in 1955 due to the severity of its decline.
Since that time, the humpback whales of the North Atlantic have made a remarkable comeback; experts estimate the current size of the North Atlantic's humpback whale population to be more than 17,000 animals. North Atlantic humpback whales are now one of the best-studied populations of great whales in the world and the mainstay of a multi-million dollar whale-watching industry.
But estimating the number of whales that existed prior to commercial whaling is a far more difficult problem, critical in determining when the total population has recovered. Historical catch data from the logs of whaling vessels suggest a population size between 20,000-46,000 whales, but the current genetic analysis indicates a much larger pre-whaling population. The results of the genetic analysis indicate that the North Atlantic once held between 45,000—235,000 humpback whales (with an average estimate of 112,000 animals).
A previous study using the mitochondrial DNA of humpbacks in the North Atlantic suggested a higher pre-whaling population size; an average of 240,000 individuals. To increase the accuracy of the current analysis, the team measured nine segments in the DNA sequences throughout the genome (as opposed to just one DNA segment used in the previous study).
Palumbi, who participated in the first humpback genetic analysis, added: "The International Whaling Commission reviewed the results of the first study and recommended we improve the method in six specific ways. We've done that now and have the best-ever estimate of ancient humpback populations."
Scott Baker, Associate Director of Oregon State University's Marine Mammal Institute and a co-author said: "These genetic estimates greatly improve our understanding of the genetic diversity of humpback whales, something we need to understand the impact of past hunting and to manage whales in the uncertain future."
The research team analyzed genetic samples from whales in the North Atlantic as well as the Southern Hemisphere. Southern Atlantic whales were used to answer one of the six IWC questions: was there intermixing of whale populations across the equator? The samples were analyzed by sequencing specific regions of DNA in known genes. By comparing the genetic diversity of today's population to the genetic mutation rate, Ruegg and colleagues could estimate the long-term population size of humpbacks. They also showed no substantial migration of humpbacks whales across the Equator between the Southern and Northern Atlantic, and no movement from the Pacific to the Atlantic.
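For readers unfamiliar with the approach, the standard population-genetic relationship that underlies such estimates (a general textbook result, not a formula quoted from the study itself) is

$\theta \approx 4 N_e \mu \quad\Longrightarrow\quad N_e \approx \theta / (4\mu),$

where $\theta$ is inferred from the observed nucleotide diversity at the sequenced loci, $\mu$ is the per-generation mutation rate, and $N_e$ is the long-term effective population size. Turning $N_e$ into a census number of whales then requires further assumptions about generation time and the ratio of effective to census population size, which contributes to the wide range quoted above.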
The team recently used the same techniques to estimate pre-whaling numbers for the Pacific gray whale and the Antarctic minke whale. A difference of two to three times also was recorded between the genetic and catch estimates for the grey whale population, but were exactly on target for the Antarctic minke whale, which has not been extensively hunted.
Wildlife Conservation Society: http://www.wcs.org
This press release was posted to serve as a topic for discussion. Please comment below. We try our best to only post press releases that are associated with peer reviewed scientific literature. Critical discussions of the research are appreciated. If you need help finding a link to the original article, please contact us on twitter or via e-mail. | <urn:uuid:191f0960-5692-4290-b26b-c2e038ee1936> | 3.6875 | 1,235 | News Article | Science & Tech. | 27.650889 | 486 |
Active Sunspot Shoots Off Intense New Solar Flare
The Solar Dynamics Observatory (SDO) captured this image of the sun during an M6.1 flare that peaked at 7:44 AM EDT on July 5, 2012. The image is shown in the 304 Angstrom wavelength, which is typically colorized in red.
The sun fired off yet another intense solar flare today (July 5), the latest in a series of storms from a busy sunspot being closely watched by space telescopes and astronomers.
NASA's Solar Dynamics Observatory snapped a daunting new image of a strong M-class solar flare that peaked this morning at 7:44 a.m. EDT (1144 GMT). The M6.1 flare triggered a moderate radio blackout that has since subsided, according to officials at NASA and the National Oceanic and Atmospheric Administration (NOAA).
The eruption came from a sprawling sunspot, called Active Region 1515, which has been particularly dynamic this week. In fact, the sunspot region has now spewed 12 M-class solar flares since July 3, NASA officials said in a statement today. The sunspot region is huge, stretching more than 62,137 miles (100,000 kilometers) in length, they added.
This sunspot region has also produced several coronal mass ejections (CMEs), which are clouds of plasma and charged particles that are hurled into space during solar storms. Powerful CMEs have the potential to disrupt satellites in their path and, when aimed directly at Earth, can wreak havoc on power grids and communications infrastructure.
The CMEs that were triggered by this week's solar flares, however, are thought to be moving relatively slowly, and will likely not hit Earth since the active region is located so far south on the face of the sun, NASA officials said. [More Solar Flare Photos from Sunspot AR1515]
But, the sunspot is slowly rotating toward Earth, and scientists are still monitoring its activity.
"Stay tuned for updates as Region 1515 continues its march across the solar disk," officials at the Space Weather Prediction Center, a joint service of NOAA and the National Weather Service, wrote in an update.
X-class solar flares are the strongest sun storms, with M-class flares considered medium-strength, and C-class the weakest. Today's M6.1 eruption is a little over half the size of the weakest X-class flare, NASA officials said.
Radio blackouts can occur when a layer of Earth's atmosphere, called the ionosphere, is bombarded with X-rays or extreme ultraviolet light from solar eruptions. Disturbances in the ionosphere can change the paths of high and low frequency radio waves, which can affect information carried along these channels.
Radio blackouts are categorized on a scale from R1 (minor) to R5 (extreme). An R2 radio blackout can result in limited degradation of both high- and low-frequency radio communication and GPS signals, NASA officials said.
The sun is currently in an active phase of its roughly 11-year solar weather cycle. The current cycle, known as Solar Cycle 24, is expected to peak in mid-2013.
Volume 9, Issue 2
Learning about Differential Equations from Their Symmetries
Application of MathSym to Analyzing an Ordinary Differential Equation
In the previous section we used a scaling symmetry to help understand the solutions of a pair of differential equations. In each case, the scaling symmetry was found by inspection. Here I present the computation of the complete set of point symmetries for two additional differential equations. Our third example is a nonlinear ordinary differential that we analyze using its two symmetries. The final example is the partial differential equation known as the cubic nonlinear Schrödinger equation .
Example three is the ordinary differential equation
that arises in the study of nonlinear water wave equations. I also show that we can use its two symmetries to begin to learn something about the structure of its solutions.
MathSym returns a system of equations, the determining equations, whose solutions generate the symmetries of equation (8). Internally, the MathSym package denotes all independent variables in an equation as and dependent variables as . This way it can be run on systems of equations with arbitrary numbers of independent and dependent variables without needing to know how to treat different variable names. Furthermore, constants are represented as internally and printed as . With this notation, constants are treated correctly by Mathematica's differentiation routine Dt. MathSym's output is the following list of determining equations.
With the output from MathSym we can continue our analysis of equation (8). First, we solve the determining equations:
The functions and determine two symmetries that can be used to convert equation (8) into two integrals. The reader is directed to similar computations for the Blasius boundary layer equation which appear on pages 118-120 of .
We begin by considering the symmetry that occurs because of the term. Setting and produces a transformation and . We next look for two quantities that do not change under this transformation. Obvious choices are and . If we assume that is a function of and write the differential equation for that arises by insisting that satisfy equation (8), we find
This is a standard reduction of order for autonomous equations that may be found in a sophomore differential equations text such as .
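For readers who want the general idea behind this step (the specific equation (8) is not reproduced here, so the following is the generic textbook reduction rather than the article's own working): for an autonomous second-order equation $y'' = f(y, y')$, one sets

$v = \dfrac{dy}{dx}, \qquad y'' = \dfrac{dv}{dx} = \dfrac{dv}{dy}\,\dfrac{dy}{dx} = v\,\dfrac{dv}{dy},$

which turns the problem into the first-order equation $v\,dv/dy = f(y, v)$. Once $v(y)$ is known, a second integration of $dy/dx = v(y)$ recovers the relationship between $x$ and $y$, exactly the two-step integration structure described in the following paragraphs.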
This equation in and has a symmetry that is generated by the constant appearing in equations (9) and (10). From this symmetry we can derive new variables and and consider as a function of . In terms of and , equation (11) becomes
We have now converted the problem of solving the original equation into two integrations. First we find as a function of giving us a solution of equation (12) and hence of equation (11). Then we return to the original variables and have implicitly as a function of . Integrating again gives a relationship between and .
We can make Mathematica carry out some of these computations. First we will ask that it determine a solution to equation (12) by integrating both sides of the equation.
In the equation for we can return to the original variables and .
What results is an implicit relationship between and and while MathSym has been successful in generating the symmetries of equation (8), it still is a challenge to solve this equation.
Copyright © 2004 Wolfram Media, Inc. All rights reserved. | <urn:uuid:621b4b37-387c-4787-9931-0da8ab590a19> | 3.359375 | 692 | Truncated | Science & Tech. | 36.476891 | 488 |
Joined: 16 Mar 2004
Posted: Tue May 27, 2008 11:36 am Post subject: Assembly lines for nanotechnology.
24 August 2007 RSC Publishing - Chemical Technology
Nano production lines
Researchers in Switzerland have built nanoscale cargo loading stations and shuttles, an important step towards assembly lines for nanotechnology.
Drawing of a nano conveyor belt
Biological assembly lines consist of kinesin proteins which carry cargo, like organelles or vesicles, and literally walk along microtubules. However, as far as man-made systems go, 'nothing comparable to a macroscopic assembly line exists at the nanoscale,' according to Viola Vogel of the Department of Materials at the Swiss Federal Institute of Technology (ETH) in Zurich. 'Imagine if you wanted to build a car by fabricating all of its components, putting them in a glass full of water and hoping that they would self-assemble spontaneously into the finished car.'
The challenge is to tune the interactions in the system so that the cargo remains stuck to the station when not needed, but can be picked up easily by the shuttle. As a test of principle, Vogel and colleagues used gold nanoparticles coated in anti-biotin antibodies as cargo, and compared loading stations made of biotin-tipped DNA with biotin-tipped polyethylene glycol. Biotinylated microtubules, powered by kinesin motors, act as shuttles rather than conveyor belts, as they do in cells.
Vogel and team then tracked the fate of the gold nanoparticles with scanning electron microscopy. They found that the shuttles did indeed pick up the nanoparticles and that they held on to them, with a loss rate of about 28% over 12 minutes. They also found that DNA stations are more effective than polymer ones.
'Future challenges will be to combine the main components of a transport system: pick-up of cargo from defined locations, guided transport, and controlled discharge of the load at the final destination,' commented Vogel. She went on to caution: 'The problems are always in the details of working through the engineering challenges of interfacing biological molecules with synthetic devices.'
Story posted: 24th August 2007 | <urn:uuid:21df35ea-b78d-400a-9ded-412d6c7d5521> | 3.328125 | 466 | Comment Section | Science & Tech. | 29.467236 | 489 |
Another Tropical Cyclone Developing
Hurricane Season 2012: Tropical System 92E (Eastern Pacific Ocean)
While Tropical Storm Aletta is forecast to weaken and dissipate, another tropical cyclone appears to be forming in the eastern Pacific south of Acapulco, Mexico. The TRMM satellite passed above this tropical disturbance (92E) on 18 May 2012 at 0957 UTC. Data captured with this pass by TRMM's Microwave Imager (TMI) and Precipitation Radar (PR) instruments were used in the rainfall analysis shown. This analysis indicates that this area contained very heavy rainfall in the northeastern quadrant of the disturbance. Some storms were producing rainfall at a rate of over 50 mm/hr (~2 inches per hour).
A 3-D image from TRMM PR shows that a few of these strong convective storms reached heights of about 15km (~9.3 miles). Radar reflectivity values of over 50 dBz found by TRMM PR in this stormy area provided more evidence that heavy rainfall was occurring. The National Hurricane Center (NHC) assigned this disturbed area a 40% probability of becoming a tropical cyclone within the 48 hours. It would be named Bud.
SSAI/NASA Goddard Space Flight Center, Greenbelt, Md.
EVEN a material 10 billion times as strong as steel has a breaking point. It seems neutron stars may shatter under extreme forces, explaining puzzling X-ray flares.
Neutron stars are dense remnants of stars gone supernova, packing the mass of the sun into a sphere the size of a city. Their cores may be fluid, but their outer surfaces are solid and extremely tough - making graphene, the strongest material on Earth, look like tissue paper by comparison.
These shells may shatter, though, in the final few seconds before a pair of neutron stars merges to form a black hole - a union thought to generate explosions known as short gamma-ray bursts.
David Tsang of the California Institute of Technology in Pasadena and colleagues have calculated how the mutual gravitational pull of such stars will distort their shape, creating moving tidal bulges. As the stars spiral towards each other, orbiting ever faster, they squeeze and stretch each other ever faster too.
A few seconds before the stars merge, the frequency of this squeezing and stretching matches the frequency at which one of the stars vibrates most easily. This creates a resonance that boosts the vibrations dramatically, causing the star's crust to crack in many places - just as a wine glass may shatter when a certain note is sung, the team says (Physical Review Letters, DOI: 10.1103/physrevlett.108.011102).
The star's gravity is too powerful to let the pieces fly away, but the sudden movement can disturb its magnetic field, accelerating electrons and leading to a powerful X-ray flare. That could explain observations by NASA's Swift satellite in which a blast of X-rays preceded some short gamma-ray bursts by a few seconds.
Combining observations of X-ray flares with those of gravitational waves emitted by the stars as they spiral together could fix the exact frequency at which the shattering occurs, which would reveal more about the stars' mysterious interiors, says Tsang.
A tsunami is a series of waves most commonly caused by violent movement of the sea floor. In some ways, it resembles the ripples radiating outward from the spot where stone has been thrown into the water, but a tsunami can occur on an enormous scale. Tsunamis are generated by any large, impulsive displacement of the sea bed level. The movement at the sea floor leading to tsunami can be produced by earthquakes, landslides and volcanic eruptions.
Most tsunamis, including almost all of those traveling across entire ocean basins with destructive force, are caused by submarine faulting associated with large earthquakes. These are produced when a block of the ocean floor is thrust upward, or suddenly drops, or when an inclined area of the seafloor is thrust upward or suddenly thrust sideways. In any event, a huge mass of water is displaced, producing a tsunami. Such fault movements are accompanied by earthquakes, which are sometimes referred to as “tsunamigenic earthquakes”. Most tsunamigenic earthquakes take place at the great ocean trenches, where the tectonic plates that make up the earth’s surface collide and are forced under each other. When the plates move gradually or in small thrusts, only small earthquakes are produced; however, periodically in certain areas, the plates catch. The overall motion of the plates does not stop; only the motion beneath the trench becomes hung up. Such areas where the plates are hung up are known as “seismic gaps” for their lack of earthquakes. The forces in these gaps continue to build until finally they overcome the strength of the rocks holding back the plate motion. The built-up tension (or compression) is released in one large earthquake, instead of many smaller quakes, and these often generate large deadly tsunamis. If the sea floor movement is horizontal, a tsunami is not generated. Earthquakes of magnitude larger than M 6.5 are critical for tsunami generation.
Tsunamis produced by landslides:
Probably the second most common cause of tsunamis is landslides. A tsunami may be generated by a landslide starting out above sea level and then plunging into the sea, or by a landslide occurring entirely underwater. Landslides occur when slopes or deposits of sediment become too steep and the material falls under the pull of gravity. Once unstable conditions are present, slope failure can be caused by storms, earthquakes, rain, or merely continued deposition of material on the slope. Certain environments are particularly susceptible to the production of landslide-generated tsunamis. River deltas and steep underwater slopes above submarine canyons, for instance, are likely sites for landslide-generated tsunamis.
Tsunamis produced by volcanoes:
The violent geologic activity associated with volcanic eruptions can also generate devastating tsunamis. Although volcanic tsunamis are much less frequent, they are often highly destructive. These may be due to submarine explosions, pyroclastic flows and the collapse of volcanic calderas.
(1) Submarine volcanic explosions occur when cool seawater encounters hot volcanic magma. The water often reacts violently, producing steam explosions. Underwater eruptions at depths of less than 1500 feet are capable of disturbing the water all the way to the surface and producing tsunamis.
(2) Pyroclastic flows are incandescent, ground-hugging clouds, driven by gravity and fluidized by hot gases. These flows can move rapidly off an island and into the ocean, their impact displacing sea water and producing a tsunami.
(3) The collapse of a volcanic caldera can generate a tsunami. This may happen when the magma beneath a volcano is withdrawn back deeper into the earth, and the sudden subsidence of the volcanic edifice displaces water and produces tsunami waves. The large masses of rock that accumulate on the sides of volcanoes may also suddenly slide down slope into the sea, causing tsunamis. Such landslides may be triggered by earthquakes or simple gravitational collapse. A catastrophic volcanic eruption and its ensuing tsunami waves may actually be behind the legend of the lost island civilization of Atlantis. The largest volcanic tsunami in historical times, and the most famous historically documented volcanic eruption, took place in the East Indies: the eruption of Krakatau in 1883.
Tsunami waves:
A tsunami has a much smaller amplitude (wave height) offshore and a very long wavelength (often hundreds of kilometers long), which is why tsunamis generally pass unnoticed at sea, forming only a passing "hump" in the ocean. Tsunamis have historically been referred to as tidal waves because, as they approach land, they take on the characteristics of a violent onrushing tide rather than the sort of cresting waves that are formed by wind action upon the ocean (with which people are more familiar). Since they are not actually related to tides, the term is considered misleading and its usage is discouraged by oceanographers.
These waves are different from other wind-generated ocean waves, which rarely extend below a depth of 500 feet even in large storms. Tsunami waves, on the contrary, involve the movement of water all the way to the sea floor, and as a result their speed is controlled by the depth of the sea. Tsunami waves may travel as fast as 500 miles per hour or more in the deep waters of an ocean basin, yet these fast waves may be only a foot or two high in deep water. They also have very long wavelengths, with as much as 100 miles between crests. With a height of 2 to 3 feet spread over 100 miles, the slope of even the most powerful tsunamis would be impossible to see from a ship or airplane. A tsunami may consist of 10 or more waves forming a ‘tsunami wave train’. The individual waves follow one behind the other anywhere from 5 to 90 minutes apart.
As the waves near shore, they travel progressively more slowly, and the energy lost from the decreasing velocity is transformed into increased wave height. A tsunami wave that was 2 feet high at sea may become a 30-foot giant at the shoreline. Tsunami velocity depends on the depth of water through which it travels: the velocity equals the square root of the product of the gravitational acceleration g and the water depth h, that is V = √(gh). A tsunami travels at roughly 700 km/h in 4000 m of water; in 10 m of water the velocity drops to about 35 km/h. Even on shore, tsunami speed is 35 to 40 km/h, much faster than a person can run. It is commonly believed that the water recedes before the first wave of a tsunami crashes ashore. In fact, the first sign of a tsunami is just as likely to be a rise in the water level. Whether the water rises or falls depends on what part of the tsunami wave train first reaches the coast. A wave crest will cause a rise in the water level and a wave trough causes a water recession.
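For readers who want to check the figures above, here is a minimal sketch of the shallow-water speed relation V = √(gh), plus the standard Green's-law amplitude scaling, which the text does not cite and which accounts for only part of the growth seen at a real shoreline (funneling, resonance and nonlinearity add more).

```python
# Minimal sketch of the shallow-water relations quoted above.
# v = sqrt(g * h) for the wave speed; the amplitude scaling A ~ h^(-1/4)
# ("Green's law") is a standard extra approximation not given in the text.
import math

g = 9.81  # gravitational acceleration, m/s^2

def tsunami_speed_kmh(depth_m):
    """Long-wave (tsunami) speed in km/h for a given water depth in metres."""
    return math.sqrt(g * depth_m) * 3.6

def greens_law_amplitude(a0_m, depth0_m, depth1_m):
    """Amplitude after moving from depth0 to depth1, per Green's law."""
    return a0_m * (depth0_m / depth1_m) ** 0.25

print(tsunami_speed_kmh(4000))              # ~713 km/h in the open ocean, as quoted
print(tsunami_speed_kmh(10))                # ~36 km/h near shore, as quoted
print(greens_law_amplitude(0.6, 4000, 10))  # a 0.6 m (about 2 ft) offshore wave grows ~4.5x
```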
Seiche (pronounced ‘saysh’) is another wave phenomenon that may be produced when a tsunami strikes. The water in any basin will tend to slosh back and forth with a characteristic period determined by the physical size and shape of the basin. This sloshing is known as a seiche. The greater the length of the basin, the longer the period of oscillation; the depth of the basin also controls the period, with greater water depths producing shorter periods. A tsunami wave may set off a seiche, and if the following tsunami wave arrives in step with the next natural oscillation of the seiche, the water may reach even greater heights than it would have from the tsunami waves alone. Much of the great height of tsunami waves in bays may be explained by this constructive combination of a seiche wave and a tsunami wave arriving simultaneously. Once the water in the bay is set in motion, resonance may further increase the size of the waves. The dying out of the oscillations, or damping, occurs slowly as gravity gradually flattens the surface of the water and as friction turns the back-and-forth sloshing motion into turbulence. Bodies of water with steep, rocky sides are often the most seiche-prone, but any bay or harbour that is connected to offshore waters can be perturbed into a seiche, as can shelf waters that are directly exposed to the open sea.
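The dependence of the seiche period on basin length and depth described above is captured by Merian's formula for a closed basin, T = 2L/√(gh) for the fundamental mode. The text does not quote this formula; the sketch below uses it with assumed, illustrative basin dimensions.

```python
# Sketch of the seiche period using Merian's formula for a closed basin,
# T = 2L / sqrt(g*h) for the fundamental mode. This is a standard
# approximation matching the behaviour described in the text
# (longer basins -> longer periods, deeper water -> shorter periods).
import math

g = 9.81  # m/s^2

def seiche_period_minutes(length_m, depth_m):
    return 2.0 * length_m / math.sqrt(g * depth_m) / 60.0

# Illustrative basins (dimensions are assumptions, not measured values):
print(seiche_period_minutes(10_000, 20))   # ~24 min for a 10 km long, 20 m deep bay
print(seiche_period_minutes(50_000, 20))   # ~119 min: longer basin, longer period
print(seiche_period_minutes(10_000, 80))   # ~12 min: deeper basin, shorter period
```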
The presence of a well-developed fringing or barrier coral reef off a shoreline also appears to have a strong effect on tsunami waves. A reef may absorb a significant amount of the wave energy, reducing the height and intensity of the wave impact on the shoreline itself.
The popular image of a tsunami wave approaching shore is that of a nearly vertical wall of water, similar to the front of a breaking wave in the surf. Actually, most tsunamis probably don’t form such wave fronts; the water surface instead is very close to horizontal, and the surface itself moves up and down. However, under certain circumstances an arriving tsunami wave can develop an abrupt, steep front that will move inland at high speed. This phenomenon is known as a bore. In general, the way a bore is created is related to the velocity of shallow-water waves. As waves move into progressively shallower water, the wave in front will be traveling more slowly than the wave behind it. This causes the waves to begin “catching up” with each other, decreasing their distance apart, i.e. shrinking the wavelength. If the wavelength decreases but the height does not, the waves must become steeper. Furthermore, because the crest of each wave is in deeper water than the adjacent trough, the crest begins to overtake the trough in front and the wave gets steeper yet. Ultimately the crest may begin to break into the trough and a bore is formed. A tsunami can cause a bore to move up a river that does not normally have one. Bores are particularly common late in the tsunami sequence, when return flow from one wave slows the next incoming wave. Though some tsunami waves do indeed form bores, and the impact of a moving wall of water is certainly impressive, more often the waves arrive like a very rapidly rising tide that just keeps coming and coming. Normal wind waves and swells may actually ride on top of the tsunami, causing yet more turbulence and bringing the water level to even greater heights. | <urn:uuid:87a817df-e201-474d-b964-dcde3f8d1a17> | 4.90625 | 2,112 | Knowledge Article | Science & Tech. | 39.834447 | 492 |
Quarks are completely confined within protons and neutrons: a phenomenon that we do not completely understand. The artist's view below represents the two interlinked phenomena that drive confinement: the long-range correlations in the physical vacuum that surround the proton and exclude the color field emanating from the quarks, and the extremely strong gluon fields between the quarks.
To make progress on confinement we need to separate these two effects and study each individually. One way to do this is to make a much larger system of quarks and gluons where the role of the vacuum at the surface of the larger system is much reduced. Such a large system can be produced by compressing or heating nuclear matter so that the neutrons and protons begin to overlap. As the boundaries between each neutron and proton disappear, a large volume of a new state of matter should be formed - the quark gluon plasma (QGP). The strong interactions between quarks and gluons dominate the properties of the QGP, and because of the larger volume of the system, the influence of the correlated vacuum is much reduced.
Collisions between two heavy nuclei take place at Relativistic Heavy Ion Collider (RHIC). Our first results from PHENIX indicate that the plasma may be formed in these reactions. Leading the evidence for the QGP is the reduced yield of particles at high transverse momenta (pt). These particles predominantly come from rare, high-momentum collisions between quarks and gluons (partons) that occur in the hot, early stage of the reaction. As high momentum partons travel through the forming plasma, they are predicted to lose a considerable fraction of their energy. Outside the collision zone high-momentum partons fragment into hadrons, and any energy-loss in the plasma softens the hadronic spectrum, i.e. lowers the measured yield of hadrons at high-pt.
The first high-pt spectra from Au+Au collisions measured by PHENIX at RHIC were published in 2001 with the key observation that the high-pt spectra are softer in central than in peripheral collisions.
Because a central reaction would produce a larger volume of QGP, this result is consistent with the hard-scattered parton losing energy in a QGP.
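Suppression of this kind is commonly quantified with a nuclear modification factor: the ratio of the Au+Au yield to the binary-collision-scaled p+p yield in each pt bin. The sketch below illustrates only the bookkeeping; the yields and the number of binary collisions are invented values, not PHENIX measurements.

```python
# Illustrative sketch of a nuclear-modification-style ratio R_AA:
# R_AA = (yield in Au+Au) / (N_coll * yield in p+p), per pt bin.
# All numbers below are invented for illustration; they are not PHENIX data.

def r_aa(yield_auau, yield_pp, n_coll):
    """Ratio of the measured Au+Au yield to the binary-scaled p+p expectation.
    Values near 1 mean no medium effect; values well below 1 mean suppression."""
    return [y_a / (n_coll * y_p) for y_a, y_p in zip(yield_auau, yield_pp)]

pt_bins    = [2, 4, 6, 8]                 # GeV/c (illustrative binning)
pp_yield   = [1.0, 0.1, 0.01, 0.001]      # invented p+p reference yields
auau_yield = [250.0, 10.0, 0.5, 0.04]     # invented central Au+Au yields
n_coll     = 1000                          # assumed number of binary collisions

for pt, r in zip(pt_bins, r_aa(auau_yield, pp_yield, n_coll)):
    print(f"pt = {pt} GeV/c: R_AA = {r:.2f}")   # values < 1 indicate a softened spectrum
```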
The overall caution remains that a heavy-ion reaction is a very complex, challenging environment. A strong case for the existence and properties of the QGP must rely on a broad range of observations. To extend our repertoire of probes, our group is currently analyzing data on J/psi suppression in Au+Au collisions.
Looking forward to the future, at ISU we are leading a major upgrade for PHENIX: a new Si vertex detector to quantitatively probe the early, highest energy-density phase of the matter formed in a heavy-ion reaction by measuring the yield and spectra of heavy-flavored mesons. Measuring the spectra of charm and beauty mesons requires a tracking resolution of better than 100 μm to resolve the decays of mesons displaced from the collision point. | <urn:uuid:995b15b9-4599-4630-b3ea-780e20ab3f77> | 3.34375 | 637 | Academic Writing | Science & Tech. | 40.847735 | 493 |
Penn Physics Professor Finds Light under the Sea
Penn Physics and Astronomy Assistant Professor Alison Sweeney's research on undersea biooptics may have implications for solar energy and the development of biofuels. As reported on the SASFrontiers website, Sweeney's research indicates that proteins in mollusc shells, called reflectins, help propagate solar energy underwater to optimize photosynthesis in algae colonies. Reflectins also enable squids to change their own reflective qualities, and enable them to camouflage themselves in brighter or darker waters.
You can read about Dr. Sweeney's research in SASFrontiers.
Earlier: Alison Sweeney wins Bartholomew Award. | <urn:uuid:4ef20d98-c749-446b-af61-0df21c103cc5> | 2.765625 | 131 | News (Org.) | Science & Tech. | 22.970163 | 494 |
The uncanny valley appears pretty frequently in these pages, at least in presentation — like the disembodied baby head above, for instance, or the wonderfully horrible Telenoid. These robots and others represent the gulf in our robot affinity that gapes open when machines approach a certain level of human likeness.
Masahiro Mori described this phenomenon 42 years ago, when he was a robotics professor at the Tokyo Institute of Technology. His paper was largely unnoticed for decades, but more recently it has become a touchstone for robotics, especially as robots become more lifelike. But the paper was never published in English in its entirety, for whatever reason. Now here it is, in a new translation approved by Mori and appearing in IEEE Spectrum.
Mori notes the eerie sensation that arises when we are tricked into thinking an artificial limb is real, and then realize it’s not — it “becomes uncanny,” and we lose our affinity for it. He expresses this phenomenon in a graph.
“I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley,” the new translation reads.
He also charts our affinities and lack thereof for still and moving objects, noting that our affinity is pretty high for a stuffed animal or a humanoid robot. But movement is key to our affinity — a humanoid robot would not move like a human, so it would be incredibly creepy, he says. “Imagine a craftsman being awakened suddenly in the dead of night. He searches downstairs for something among a crowd of mannequins in his workshop. If the mannequins started to move, it would be like a horror story,” he writes.
A still corpse is also down in the valley. At the deepest point: Zombies.
| <urn:uuid:c801bdac-aa80-4e1f-b826-906d926e8444> | 2.71875 | 434 | News Article | Science & Tech. | 41.352768 | 495 |
Experimental Fusion Research
PPPL fusion research centers on the National Spherical Torus Experiment (NSTX), which is undergoing a $94 million upgrade that will make it the most powerful experimental fusion facility, or tokamak, of its type in the world when work is completed in 2014. Experiments will test the ability of the upgraded spherical facility to maintain a high-performance plasma under conditions of extreme heat and power. Results could strongly influence the design of future fusion reactors.
The Laboratory develops components and scientific data for ITER, which represents the largest step to date toward the development of a commercial fusion reactor. ITER, whose name is Latin for “the way,” is being built in Cadarache, France, by the European Union, the United States, China, India, Japan, Korea and Russia. The facility is designed to produce 500 million watts of fusion power for at least 400 seconds by the late 2020s to demonstrate the feasibility of fusion as a source of energy.
PPPL conducts research on the use of liquid lithium to help keep fusion reactions hot. The Laboratory’s Lithium Tokamak Experiment (LTX) is the world’s first experimental fusion facility to have liquid lithium covering all its walls to absorb plasma particles that escape from magnetic confinement. The shiny metal keeps the particles from re-entering the plasma as a cold gas, retains impurities that can cool the plasma and halt fusion reactions, and prevents damage to the plasma-facing walls. Included in this research are experiments led by Princeton University engineer Bruce Koel on the behavior of lithium and other wall materials. | <urn:uuid:64d593c2-67d9-4e90-8ee9-cb440376d58c> | 3.390625 | 330 | About (Org.) | Science & Tech. | 28.784455 | 496 |
Right now, the accelerator is stopped for the annual maintenance shutdown. This is the opportunity to fix all problems that occurred during the past year both on the accelerator and the experiments. The detectors are opened and all accessible malfunctioning equipment is being repaired or replaced.
In the 27-km long LHC tunnel, surveyors are busy getting everything realigned to a high precision, while various repairs and maintenance operations are on their way. By early March, all magnets will have been cooled down again and prepared for operation.
The experimentalists are not only working on their detectors but also improving all aspects of their software: the detector simulations, event reconstruction algorithms, particle identification schemes and analysis techniques are all being revised.
By late March, the LHC will resume colliding protons with the goal of delivering about 16 inverse femtobarns of data, compared to 5 inverse femtobarns in 2011. This will enable the experiments to improve the precision of all measurements achieved so far, push all searches for new phenomena slightly further and explore areas not yet tackled. The hope is to discover particles associated with new physics revealing the existence of new phenomena. The CMS and ATLAS physicists are looking for dozens of hypothetical particles, the Higgs boson being the most publicized but only one of many.
When protons collide in the LHC accelerator, the energy released materializes in the form of massive but unstable particles. This is a consequence of the well-known equation E = mc², which simply states that energy (represented by E) and mass (m) are equivalent; each one can change into the other. The symbol c² represents the speed of light squared and acts like a conversion factor. This is why in particle physics we measure particle masses in units of energy like GeV (giga electronvolt) or TeV (tera electronvolt). One electronvolt is the energy acquired by an electron through a potential difference of one volt.
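As a rough illustration of these units, the sketch below converts energies in GeV to joules and to an equivalent mass in kilograms. The constants are standard values; the 1 GeV and 7 TeV examples simply echo figures used elsewhere in this post.

```python
# Minimal sketch of the unit conversions behind E = mc^2 as used above.
E_PER_EV = 1.602e-19   # joules per electronvolt
C = 2.998e8            # speed of light, m/s

def gev_to_joules(e_gev):
    return e_gev * 1e9 * E_PER_EV

def gev_to_kg(e_gev):
    # Rearranging E = m c^2 gives m = E / c^2.
    return gev_to_joules(e_gev) / C**2

print(gev_to_joules(1.0))     # ~1.6e-10 J: one GeV is a tiny amount of energy
print(gev_to_kg(1.0))         # ~1.8e-27 kg, roughly the mass of a proton
print(gev_to_kg(7000.0))      # ~1.2e-23 kg: the most mass a 7 TeV collision could create
```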
It is therefore easier to create lighter particles, since less energy is required. Over the past few decades, we have already observed the lighter particles countless times in various experiments, so we know fairly well how many events containing them we should observe. We can tell that new particles are being created when we see more events of a certain topology than we expect from those well-known phenomena, which we refer to as the background.
We can claim that something additional and new is also occurring when we see an excess of events. Of course, the bigger the excess, the easier it is to claim something new is happening. This is the reason why we accumulate so many events, each one being a snapshot of the debris coming out of a proton-proton collision. We want to be sure the excess cannot be due to some random fluctuation.
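The question of whether an excess is "due to some random fluctuation" can be made concrete with simple counting statistics. The toy numbers below are invented for illustration; real LHC analyses use far more sophisticated statistical treatments than this.

```python
# Hedged sketch of the "is this excess just a fluctuation?" question.
# The background and observed counts are assumptions, not LHC numbers.
import math

def poisson_prob_at_least(observed, expected):
    """P(N >= observed) for a Poisson-distributed background with the given mean."""
    p_below = sum(math.exp(-expected) * expected**k / math.factorial(k)
                  for k in range(observed))
    return 1.0 - p_below

background = 100.0   # events expected from known processes (assumed)
observed   = 130     # events actually seen (assumed)

p = poisson_prob_at_least(observed, background)
print(f"chance of a fluctuation this large or larger: {p:.4f}")

# A rough significance estimate: excess / sqrt(background), about 3 "sigma" here.
print((observed - background) / math.sqrt(background))
```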
Some of the particles we are looking for are expected to have a mass in the order of a few hundred GeV. This is the case for the Higgs boson and we already saw possible signs of its presence last year. If the observed excess continues to grow as we collect more data in 2012, it will be enough to claim the Higgs boson discovery beyond any doubt in 2012 or rule it out forever.
Other hypothetical particles may have masses as large as a few thousand GeV or equivalently, a few TeV. In 2011, the accelerator provided 7 TeV of energy at the collision point. The more energy the accelerator has, the higher the reach in masses, just like one cannot buy a 7000 CHF car with 5000 CHF. So to create a pair of particles with a mass of 3.5 TeV (or 3500 GeV), one needs to provide at least 7 TeV to produce them. But since some of the energy is shared among many particles, the effective limit is lower than the accelerator energy.
There are ongoing discussions right now to decide if the LHC will be operating at 8 TeV this year instead of 7 TeV as in 2011. The decision will be made in early February.
If CERN decides to operate at 8 TeV, the chances of finding very heavy particles will slightly increase, thanks to the extra energy available. This will be the case for searches for particles like the W’ or Z’, a heavier version of the well-known W and Z bosons. For these, collecting more data in 2012 will probably not be enough to push the current limits much farther. We will need to wait until the LHC reaches full energy at 13 or 14 TeV in 2015 to push these searches higher than in 2011 where limits have already been placed around 1 TeV.
For LHCb and ALICE, the main goal is not to find new particles. LHCb aims at making extremely precise measurements to see if there are any weak points in the current theoretical model, the Standard Model of particle physics. For this, more data will make a whole difference. Already in 2011, they saw the first signs of CP violation involving charm quarks and hope to confirm this observation. This measurement could shed light on why matter overtook antimatter as the universe expanded after the Big Bang, when matter and antimatter must have been created in equal amounts. They will also investigate new techniques and new channels.
Meanwhile, ALICE has just started analyzing the 2011 data taken in November with lead ion collisions. The hope is to better understand how the quark-gluon plasma formed right after the Big Bang. This year, a special run involving collisions of protons and lead ions should bring a new twist in this investigation.
Exploring new corners, testing new ideas, improving the errors on all measurements and most likely getting the final answer on the Higgs: that is what we are after with the LHC in 2012. Let’s hope that in 2012 the oriental dragon, symbol of perseverance and success, will see our efforts bear fruit.
To be alerted of new postings, follow me on Twitter: @GagnonPauline or sign up on this mailing list to receive an e-mail notification. | <urn:uuid:f37ea100-b3b9-472e-bffa-c0ee6d515f58> | 3.328125 | 1,236 | Personal Blog | Science & Tech. | 47.898318 | 497 |
There has been a flurry of recent commentary concerning Amazon drought – some of it good, some of it not so good. The good stuff has revolved around a recently-completed interesting field experiment that was run out of the Woods Hole Research Center (not to be confused with the Woods Hole Oceanographic Institution), where they have been examining rainforest responses to drought – basically by using a very large rainproof tent to divert precipitation at ground level (the trees don’t get covered up). As one might expect, a rainforest without rain does not do well! But exactly what happens when and how the biosphere responds are poorly understood. This 6 year long field experiment may provide a lot of good new data on plant strategies for dealing with drought which will be used to improve the models and our understanding of the system.
The not-so-good part comes when this experiment is linked too directly to the ongoing drought in the southern Amazon. In the experiment, older tree mortality increased markedly after the third year of no rain at all (with around 1 in 10 trees dying). Since parts of the Amazon are now entering a second year of drought (possibly related to a persistent northward excursion of the ITCZ), the assumption in the Independent story (with the headline ‘One year to save the Amazon’) was that trees will start dying forest-wide next year should the drought continue.
This is incorrect for a number of reasons. Firstly, drought conditions are not the same as no rain at all – the rainfall deficit in the middle of the Amazon is significant, but not close to 100%! Secondly, the rainfall deficits are quite regionally variable, so a forest-wide response is highly unlikely. Also, the trees won’t all die in just one more year and could recover, depending on yearly variation in climate.
While this particular article is exaggerated, there are, however, some issues that should provoke genuine concern. Worries about the effects of the prolonged drought (and other natural and human-related disturbances) in the Amazon are indeed widespread and are partly related to the idea that there may be a ‘tipping point’ for the rainforest (see this recent article for some background). This idea is exemplified in a study last year (Hutyra et al, 2005) which looked at the sharp transition between forest and savannah and related that to the coupling of drought incidence and wildfires with the forest ecosystem. Modelling work has suggested that the Amazon may have two vegetation/regional-climate equilibria, due to vegetation and climate tending to reinforce each other if one is pushed in a particular direction (Oyama and Nobre, 2003). The two alternative states could be one rainforested and wet like today, the other mainly savannah and dry in the Eastern Amazon. Thus there is a fear that too much drought or disturbance could flip parts of the forest into a more savannah-like state. However, there is a great deal of uncertainty in where these thresholds may lie, how likely they are to be crossed, and how fast change would occur. Models range from predicting severe and rapid change (Cox et al, 2004) to relatively mild changes (Friedlingstein et al, 2003). Locally these responses can be dramatic, but of course, these changes also have big implications for the total carbon cycle feedback and so have global consequences as well.
Part of that uncertainty is related to the very responses that are being monitored in the WHRC experiment and so while I would hesitate to make a direct link, indirectly these results may have big consequences for what we think may happen to the Amazon in the future.
Special thanks to Nancy Kiang for taking the time to discuss this with me.
Update: WHRC comments on the articles below. | <urn:uuid:34247e22-9bec-4ed3-b5e5-15b6d905fbaf> | 2.8125 | 773 | Personal Blog | Science & Tech. | 38.063333 | 498 |
Extreme Weather Map Shows 3,527 Monthly Weather Records Shattered in 2012
Top Ten States with Greatest Percentage of Locations with Record-Breaking Heat: TN, WI, MN, IL, IN, NV, WV, ME, CO, MD
NEW YORK, Jan. 15, 2013 /PRNewswire-USNewswire/ — In 2012, there were at least 3,527 monthly weather records for heat, rain and snow broken by extreme weather events that hit communities throughout the U.S., according to an updated interactive extreme weather mapping tool and year-end review released today by the Natural Resources Defense Council. The 2012 tally exceeds the 3,251 records smashed in 2011 and catalogues these record-breaking extreme events in all 50 states.
New this year, the interactive map at www.nrdc.org/extremeweather also ranks all 50 states for the percentage of weather stations reporting at least one monthly heat record broken in 2012. The ten states showing the highest percentage with new heat records are: Tennessee (36%), Wisconsin (31%), Minnesota (30%), Illinois (29%), Indiana (28%), Nevada (27%), West Virginia (26%), Maine (26%), Colorado (25%), and Maryland (24%). Especially hard-hit regions include the Upper Midwest, Northeast, northern Great Plains, and Rocky Mountain states.
“2012’s unparalleled record-setting heat demonstrates what climate change looks like,” said Kim Knowlton, NRDC Senior Scientist. “This extreme weather has awoken communities across the country to the need for preparedness and protection. We know how to reduce local risks, improve our lives and create more resilient communities. Now our leaders must act.”
Because these monthly weather records compete against prior records set over at least the last 30 years at each location, the 3,527 monthly records broken highlight notable patterns of extreme weather in the U.S. In fact, from 1980 through 2011, the frequency of weather-related extreme events in North America nearly quintupled, rising more rapidly than anywhere else in the world, according to international insurance giant MunichRe.
In 2012, Americans experienced the hottest March on record in the contiguous U.S., and July was the hottest single month ever recorded in the lower 48 states. As a whole, 2012 was the warmest year ever recorded in the U.S., according to the National Oceanic and Atmospheric Administration’s (NOAA) State of the Climate report released last week. NOAA has also estimated that 2012 will surpass 2011 in aggregate costs for U.S. annual billion-dollar disasters, and MunichRe also recently revealed that in 2012, more than 90 percent of the world’s insured disaster costs occurred in the U.S.
Some of 2012’s most significant weather disasters include:
- The summer of 2012 was the worst drought in 50 years across the nation’s breadbasket, with over 1,300 U.S. counties in 29 states declared drought disaster areas.
- Wildfires burned over 9.2 million acres in the U.S., and destroyed hundreds of homes. The average size of the fires set an all-time record of 165 acres per fire, exceeding the prior decade’s 2001-2010 average of approximately 90 acres per fire.
- Hurricane Sandy’s storm surge height, 13.88 feet, broke the all-time record in New York Harbor, and ravaged communities across New Jersey and New York with floodwaters and winds. The cost of Sandy reached an estimated $79 billion with at least 131 deaths reported.
There are proactive steps government decision-makers can take to minimize the impact on communities increasingly vulnerable to climate change. NRDC encourages all states to undertake the following key actions to protect public health:
- Enact plans to limit carbon emissions from power plants, vehicles and other major sources of heat-trapping pollution; coupled with increased investment in energy efficiency and renewable energy.
- Emergency planning must incorporate risks from climate change. States and local governments should develop, prioritize, support and implement comprehensive climate change mitigation plans to address climate risks.
- The Federal Emergency Management Agency (FEMA) must also prioritize addressing and preparing for climate change by providing guidance and resources to state and local governments.
For more information about 2012’s record-breaking extreme weather events, see:
- NRDC’s 2012 Extreme Weather Mapping Tool
- Kim Knowlton’s blog: http://switchboard.nrdc.org/blogs/kknowlton/
- Frances Beinecke’s blog: http://switchboard.nrdc.org/blogs/fbeinecke/
- Rocky Kistner’s blog: http://switchboard.nrdc.org/blogs/rkistner/
- NRDC’s What Climate Change Looks Like
The Natural Resources Defense Council (NRDC) is an international nonprofit environmental organization with more than 1.3 million members and online activists. Since 1970, our lawyers, scientists, and other environmental specialists have worked to protect the world’s natural resources, public health, and the environment. NRDC has offices in New York City, Washington, D.C., Los Angeles, San Francisco, Chicago, Livingston, Montana, and Beijing. Visit us at www.nrdc.org and follow us on Twitter @NRDC.
SOURCE Natural Resources Defense Council, Washington, D.C. | <urn:uuid:8887f5a7-30e7-4547-a28b-21b5a9068af8> | 2.578125 | 1,126 | News Article | Science & Tech. | 53.494708 | 499 |