Eta Carinae, one of the most massive stars in the Milky Way Galaxy, has been called a 'supernova in the making'. An outburst seen from the southern hemisphere in 1843 made it briefly the second brightest star in the night sky, although it is an estimated 9,000 light years from Earth. The star itself is not seen in this image. It is hidden by an elaborate nebulosity of gas and dust, produced by past eruptions. Yet infrared observations that penetrate the cloak of dust show that it is shining within, and recent observations with the HST have measured the rate at which matter continues to stream from the star.
The larger, red region of nebulosity is probably the most rapidly moving gas that erupted from Eta Carinae in the 1840s. Some of this outlying material is moving at velocities in excess of two million miles per hour. The two pronounced lobes at the center of the picture shine so brightly because they contain huge numbers of microscopic dust particles that reflect or 'scatter' the light from the hidden central star.
Eta Carinae is about 4 million times more luminous than the Sun, and probably more than 100 times as massive. Presumably it will indeed become a supernova within the next few million years.
Technical Information: Composite made from images taken separately in red, green, and blue light.
Credit: J. Hester (Arizona State University) and NASA
Just as the introduction of the irrational numbers ... is a convenient myth [which] simplifies the laws of arithmetic ... so physical objects are postulated entities which round out and simplify our account of the flux of existence .... The conceptual scheme of physical objects is [likewise] a convenient myth, simpler than the literal truth and yet containing that literal truth as a scattered part.
In J. Koenderink, Solid Shape, Cambridge, Mass.: MIT Press, 1990.
Gerbert d'Aurillac and the March of Spain: A Convergence of Cultures
Gerbert also described a new system of representing numbers – a system very familiar to us today, but unknown in Western Europe at the time, where Roman numerals were still the order of the day. The new numerals, called ghobar (“dust,” from writing the numerals in the dust of a counting board) or, more generally, Hindu-Arabic, consisted of nine symbols, plus (later) a zero. Any number could be represented by placing the symbols next to each other in a particular order. The shape and orientation of the numerals evolved over the years, but at the time of Gerbert they looked something like this:
These numerals had been known in India since the sixth century C.E., and in the eighth century they began to be transmitted to Arabic scholars, who discovered how to represent decimal fractions, as well as whole numbers, with them. It wasn’t long before Arabic works describing the new system of numeration were translated into Latin; the earliest known is The Book of Addition and Subtraction according to the Hindu Calculation, by Muhammed ibn Musa al-Khwarizmi (c. 780–850). Al-Khwarizmi is probably the most famous Eastern mathematician known to students of mathematics in the West. His name gave us the term algorithm, and the word algebra first appeared in his writings. His treatise about “the Hindu calculation,” written about 820, was almost certainly available in the Spain of the tenth century.
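The power of the new numeration lies in place value: a symbol's contribution depends on its position, so nine symbols plus zero can represent any whole number. A minimal sketch of that principle (the digit list and base 10 are just for illustration):

```python
def from_digits(digits):
    """Evaluate a base-10 place-value numeral given as a list of digits,
    most significant first: [4, 0, 7] -> 4*100 + 0*10 + 7*1 = 407."""
    value = 0
    for d in digits:
        value = value * 10 + d  # shift previous digits one place left, add new one
    return value

print(from_digits([4, 0, 7]))  # 407
```

The zero symbol matters precisely because an empty place (the tens place above) must still shift the digits around it.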
Other scholars whose works would have been accessible to Gerbert were Abu al-Rayhan al-Biruni (973–1048), also from Central Asia, and Abu Sahl al-Kuhi, who lived in the second half of the 10th century in Tabaristan (Persia).
There were also Jewish scholars living and working in the courts of the caliphs. Abu Sahl Dunas ibn Tamim was active in Kairouan (Tunisia) in the tenth century. Writing in Arabic in a commentary in the year 955, he mentioned an earlier work in which he described the use of Hindu numerals.
Mathematicians Who Built Mathematics
This article presents short biographies of the great mathematicians with whom mathematics began. It includes short descriptions of the life and work of Thales, Pythagoras, and Euclid.
- Thales (ca. 625–547 B.C.)
- Thales' Work
- Pythagoras (ca. 580–500 B.C.)
- Pythagoras' Work
- Euclid (flourished ca. 300 B.C.)
- Euclid's Work
The world lived in the darkness of thought for centuries until Thales was born, who with his rational thought and skepticism gave the flow of thought a new direction. Many other pursuits, such as art and craft, had existed and flourished for centuries, but with the advent of skepticism came an urge to know. Thales was born in Miletus on the western coast of Asia Minor. Initially he is said to have been a prosperous merchant who acquired wealth to secure his comfort; he then devoted the rest of his life to the study of mathematics and to travelling to various places. He astonished the priests of Egypt by calculating the height of a pyramid from its shadow, using the concept of similar triangles.
He was probably the first Greek astronomer to study the planets deeply, and he helped distinguish astronomy from its source, astrology. He is also honored as a creator of geometry and founder of the Ionian school. He is referred to as the first to think of everything as part of just a single substance; as legend proclaims, he considered everything to be water. This thought was of great importance, as it brought to light the idea of unification, which remains a major problem posed by scientists and physicists today.
Thales is regarded as the first person to give a systematized proof of a theorem. The various mathematical theorems credited to him are:
1. A circle is bisected by a diameter.
2. The base angles of an isosceles triangle are equal.
3. Vertical angles are equal.
4. An angle inscribed in a semi-circle is a right angle.
5. A triangle is completely determined if two angles and the included side are given.
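Theorem 4 in particular is easy to verify numerically: pick any point on a circle and the angle it subtends over a diameter is a right angle. The sketch below uses the unit circle and an arbitrarily chosen point (the angle of 1 radian is not special):

```python
import math

# Endpoints of a diameter of the unit circle.
A, B = (-1.0, 0.0), (1.0, 0.0)
# An arbitrary point P on the circle (here at an angle of 1 radian).
P = (math.cos(1.0), math.sin(1.0))

# Vectors from P to each end of the diameter.
pa = (A[0] - P[0], A[1] - P[1])
pb = (B[0] - P[0], B[1] - P[1])

# Thales: angle APB is a right angle, so the dot product is (numerically) zero.
dot = pa[0] * pb[0] + pa[1] * pb[1]
print(abs(dot) < 1e-12)  # True
```

Changing the 1 radian to any other angle strictly between 0 and π leaves the result unchanged, which is the content of the theorem.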
Pythagoras (ca. 580–500 B.C.)
Pythagoras was the first to use the word mathematike instead of mathemata. Though Thales was the first to give a mathematical proof of something, Pythagoras was the first to relate one proof to another: he used the knowledge of one proof, i.e. a fundamental theorem, to derive further theorems. Pythagoras, who was born on the island of Samos off the western coast of Asia Minor, is considered the first to use the word kosmos for the ordered and understandable whole.
In order to enrich his knowledge and widen his perspective, he travelled through the world and founded the Pythagorean school, which arguably has the honour of being the first university. His pupils were known as Pythagoreans; they spread through the world and propagated his knowledge and teachings among the masses.
His mathematical studies were concentrated mainly on geometry and arithmetic. He was also interested in the art of music and in astronomy. The most famous theorems attributed to him are:
1. The sum of the angles in any triangle equals two right angles.
2. The Pythagorean theorem about the square of the hypotenuse.
3. There is no rational number whose square is 2.
It was Pythagoras who brought in the new idea that for anything to be true, it should be representable as numbers. Though there were great contributions from Pythagoras and his pupils, there is no clear indication of who was the real originator of some of the proofs and theorems. The Pythagoreans contributed greatly: their observations, the problems they encountered, and the questions they raised led to the discovery of irrational numbers. Among the interests of the Pythagoreans were figurate numbers, musical notes, and their relation to numbers.
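Theorems 2 and 3 above can both be illustrated in a short sketch. The 3-4-5 triangle and the denominator bound of 1000 are arbitrary choices for the demonstration:

```python
import math

# Pythagorean theorem: the 3-4-5 right triangle has hypotenuse sqrt(3^2 + 4^2).
print(math.hypot(3, 4))  # 5.0

# Theorem 3: no rational p/q squares to exactly 2. For each denominator q, the
# only plausible numerators are the integers on either side of q*sqrt(2); a
# brute-force check finds no exact solution.
hits = [(p, q)
        for q in range(1, 1001)
        for p in (int(q * math.sqrt(2)), int(q * math.sqrt(2)) + 1)
        if p * p == 2 * q * q]
print(hits)  # [] -- consistent with the Pythagoreans' discovery
```

No finite search can prove irrationality, of course; the classical proof is by contradiction on a fraction in lowest terms, but the empty search result shows what the Pythagoreans were up against.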
Euclid (flourished ca. 300 B.C.)
As it is said, a person is known by the work he has done; this applies more than appropriately to Euclid's Elements. The Elements has been a major mind-teaser for mathematicians for centuries. It starts with the definition, "A point is that which has no part," and consists of 13 books and 465 propositions, deduced from a small set of definitions, postulates, and common notions. What is known about Euclid is that he taught in Alexandria under Ptolemy.
Euclid's role was mainly that of a compiler who collected and arranged the scattered knowledge of his time in one place.
As there is no clear indication of the origin of some of the proofs and theorems in the Elements, these proofs have by default been credited to Euclid. Some of his famous results are:
1. The Euclidean algorithm for finding the G.C.D. of two positive integers.
2. A prime that divides the product of two positive integers necessarily divides one of the factors.
3. Euclid's theorem on the infinitude of primes.
4. A theorem on perfect numbers.
The final book of the Elements is dedicated to the construction of the regular polygons and polyhedra, and it establishes that there are only five regular polyhedra, namely the tetrahedron, cube, octahedron, dodecahedron, and icosahedron. Euclid essentially organized thoughts that had been scattered and thus gave a way of deducing each theorem from already known and assumed fundamental truths. Euclid's model became so famous that it became the standard by which everything was judged true, and facts began to be deduced with Euclidean methods as the reference.
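Two of the results listed above lend themselves to short sketches: the Euclidean algorithm (result 1) and the construction behind the infinitude of primes (result 3). The sample numbers below are arbitrary:

```python
def gcd(a, b):
    """Euclidean algorithm: replace (a, b) with (b, a mod b) until b is 0."""
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21

# Euclid's argument for infinitely many primes: the product of any finite
# list of primes, plus one, leaves remainder 1 on division by each of them,
# so some prime outside the list must exist.
primes = [2, 3, 5, 7, 11, 13]
n = 1
for p in primes:
    n *= p
n += 1  # 30031 = 59 * 509: not prime itself, but its factors are new primes
print(all(n % p != 0 for p in primes))  # True
```

Note that n need not be prime; Euclid's argument only needs that none of the listed primes divides it, as the 30031 example shows.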
Brief Lives And Memorable Mathematics
Published and Distributed by
The Mathematical Association of America
Spikes on Saturn
The beautiful, swirling band stretching around Saturn's northern hemisphere in this false-color image is a storm that raged from 2010 to 2011. NASA's Cassini spacecraft monitored the action in the giant planet's atmosphere, sending interesting and sometimes bizarre data back to scientists on Earth. Some of the strangest data came after the storm seemed to have abated, when a huge disturbance in the upper atmosphere suddenly sent temperatures soaring 150 degrees Fahrenheit and released a burst of ethylene, a gas not normally seen on Saturn. Though the process that caused the "burp" is still a bit of a mystery, NASA Goddard scientists are continuing to study Cassini's data, publishing the events in the November 20 issue of the Astrophysical Journal.
Credit: NASA/JPL-Caltech/Space Science Institute
According to one report, 49 of the 50 U.S. states reported snow on the ground today.
Only Hawaii had no snow. Here in the Atlanta area, snow from the massive storm that left Dallas under nearly a foot of snow (a record) started falling around 1 P.M. and had been reduced to flurries by 8:00 P.M. But with temperatures dropping into the low 20s tonight, we expect icy roads tomorrow. The storm originated in Texas and brought snow to all of the Southeastern states by yesterday; it has since tracked out to sea. Take a look at the CapitalClimate map showing this February storm.
Does all of this snow, not just in the Southeast but especially in the mid-Atlantic states, mean that global warming is over? You'd believe that if you listened to global-warming deniers such as the senator from Oklahoma and some talk-show hosts. Global warming speaks to the issue of climate change, and climate is the long-term trend of weather; the evidence is that planet Earth is warming. Look around and you'll see evidence for this: earlier springs, melting ice, higher sea levels, and more. In fact, climate scientists predicted more ferocious storms because of global warming: higher temperatures → more evaporation → more water vapor in the atmosphere → increased precipitation → big snowstorms. Follow this link to find out the position of the American Meteorological Society on climate change.
Snow scenes in Marietta, Georgia: here is a video and some pictures from Marietta taken yesterday and this morning after the February 12 Texas-size snowstorm.
The projected disappearance of small glaciers* worldwide threatens to eliminate the water supply for numerous towns in valleys, such as the Ecuadorian capital Quito, fed by the rivers that flow down from the surrounding mountains. But retreating ice is also a threat to freshwater fauna. According to a study published in Nature Climate Change, the local and regional diversity of mountain aquatic fauna will be reduced considerably if predictions are realised. Until now, the impact of global thawing on biodiversity in watercourses had never been calculated in detail.
Several hundred million people in Southeast Asia depend, to varying degrees, on the freshwater reservoirs of the Himalayan glaciers. Consequently, it is important to detect the potential impact of climate change on the Himalayan glaciers at an early stage. Together with international researchers, glaciologists from the University of Zurich now reveal that the glaciers in the Himalayas are declining less rapidly than was previously thought. However, the scientists see major hazard potential from outbursts of glacial lakes.
Do you remember that flawed Himalayan glacier melting prediction? Here’s what is truly going on in the world’s highest mountain range – and yes, these figures are science-derived.
Conventional reading suggests glaciers and ice sheets are formed top-down, by the cumulative compaction of snowflakes. New research published in Science today shows there is a bottom-up component too – at least for the East Antarctic ice sheet.
Changing wind patterns due to current climate change sweep up more dust from the Tibetan Plateau, lake-sediment measurements show. Jessica Conroy, a graduate student in paleoclimatology at the University of Arizona in Tucson, presented a dust record dating back …
study of upper-level wind systems
The characteristics of upper-level wind systems are known mainly from an operational worldwide network of rawinsonde observations. (A rawinsonde is a type of radiosonde designed to track upper-level winds and whose position can be tracked by radar.) Winds measured from Doppler-radar wind profilers, aircraft navigational systems, and sequences of satellite-observed cloud imagery have also been...
A warmer atmosphere results in an amplification of the water cycle. Some areas of the world are net importers of rainfall (such as tropical rainforests), while some are net exporters (such as oceans around the tropics). The “amplification” of the cycle means that dry regions become drier, and wet regions become wetter. During the 20th century, total rainfall in the United States increased by about seven percent; the largest increases occurred in the central and eastern regions (net importing regions). Most of this increase in precipitation can be accounted for by heavy and extreme precipitation events becoming even more intense. The amount of rain that falls during the heaviest one percent of rainfall events has increased by 20 percent over the last 100 years.
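As a rough illustration of how a "heaviest one percent of events" statistic like the one above is computed, here is a percentile sketch over made-up rainfall values; the numbers are purely illustrative, and real analyses use long station records:

```python
# Hypothetical daily rainfall totals in mm (invented for illustration only).
rain = [0.3, 1.2, 2.5, 0.8, 4.0, 12.5, 0.1, 7.7, 45.0, 3.3,
        0.6, 9.9, 110.0, 2.2, 0.4, 6.1, 18.0, 0.9, 5.5, 1.1]

events = sorted(rain)
# Size of a 99th-percentile event: the value 99% of the way up the sorted list.
cutoff = events[int(0.99 * len(events))]
# Fraction of total rainfall delivered by events at or above that cutoff.
share = sum(r for r in events if r >= cutoff) / sum(events)
print(f"events of {cutoff} mm and up deliver {share:.0%} of the total")
```

The point of the trend statistic is that this share, computed over decades of real records, has been growing: the largest events deliver an increasing fraction of the total.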
Seasons: Spring, Summer, Fall, Winter
(Source: United States. Climate Change Science Program. Weather and Climate Extremes in a Changing Climate. Synthesis Assessment Product 3.3: GPO. 2008; "U.S. Temperature and Precipitation Trends." U.S. National Oceanic and Atmospheric Administration (NOAA): Climate Prediction Center. 5 January 2005. 26 June 2008; and Soden, B., Wentz, F.J., Santer, B.D. and Zwiers, F. "Climatically-Induced Increases in Water Vapor and Precipitation: Causation and Implications." United States Senate, Washington, D.C. 29 October 2007. Accessed online 17 December 2007, http://www.ametsoc.org/atmospolicy/ESSSarchiveclimatechange.html.)
Soils, Plants, and Invasion
Plants live in tight association with microbes, especially belowground where fungi and bacteria live on and inside the roots of plants. The relationship can be beneficial or harmful to the plant. Some microbes cause plant diseases by decomposing roots. Others trade nutrients with the roots in return for sugars produced aboveground by leaves.
We investigate two main aspects of the relationship between plants and microbes in the soil. First, as it pertains to weeds, we want to know whether soil microbes can help or hinder plant invasions. Three highly invasive weeds of contrasting life-history strategies (cheatgrass, knapweed, and leafy spurge) co-occur with remnants of native plant vegetation. This creates a unique opportunity to observe, characterize, and manipulate interactions between plants and belowground microbial communities. We outline a number of short-, intermediate-, and long-term research projects that will significantly enhance our knowledge of plant-microbe interactions and soil processes, with the overall goal of better understanding, predicting, and counteracting plant invasions, and of restoring and managing invaded ecosystems.
Second, we seek to understand how the relationship between plants and soil influences ecosystem processes. Soil microbes are responsible for organic-matter decomposition and nutrient cycling between the atmosphere and the land. On this project we collaborate with the Earth Microbiome Project (EMP), whose goal is to map and understand the diversity of microorganisms in habitats around the world. We mapped microbial diversity and function across gradients of weed invasions.
The Darwin's frog (Rhinoderma darwinii) is a frog native to the forest streams of Chile and Argentina. It is named after Charles Darwin, who discovered it during his world voyage on HMS Beagle.
The frog is brown or green with a size of about 1.25 inches. Its front feet are not webbed, but some of the toes on the back feet are. It eats insects and other arthropods.
The most striking feature is the way the tadpoles are raised: inside the vocal sac of the male. The female lays about 30 eggs, and the male guards them for about two weeks, until they hatch. Then the male picks up all the survivors and carries the developing young around in his vocal pouch. The tadpoles develop in this baggy chin skin, feeding off their egg yolk. When the tiny froglets have developed (to about half an inch), they hop out and swim away.
Darwin's frog not only has to hunt but also must hide from predators that want to eat it. Its most reliable technique for avoiding hunters is camouflage: it lies on the ground looking like a dead leaf until the predator passes by.
Our Milky Way galaxy is teeming with a wild variety of planets. In addition to our solar system's eight near-and-dear planets, there are more than 800 so-called exoplanets known to circle stars beyond our sun.
A typical robot may struggle to discover objects in its surroundings when it relies on computer vision alone. However, by taking advantage of all of the information available to it - such as an object's location, size, shape and even whether it can be lifted - a robot can continually discover and refine its understanding of objects.
Before the world was wired, the day began and ended with the sun. Work had to be done efficiently so that not a moment of precious daylight was wasted. Now, with easy access to electricity, we light up the night so that life goes on, even in the dark.
A roughly 3.5-mile high Martian mound that some scientists suspect preserves evidence of a massive lake might actually have formed as a result of the Red Planet's famously dusty atmosphere.
Columbia Engineering researchers have developed a technique to isolate a single water molecule inside a buckyball, or C60, and to drive motion of the so-called "big" nonpolar ball through the encapsulated "small" polar H2O molecule, a controlling transport mechanism in a nanochannel under an external electric field.
Electric bicycles, or e-bikes, are great cruisers. Don't want to pedal up that last half-mile or more to get back home? Or go up yet another hill? No problem. Just turn on the electric motor, stop pedaling, and focus on the road or trail in front of you.
Seven years ago, Duke University engineers demonstrated the first working invisibility cloak in complex laboratory experiments. Now it appears creating a simple cloak has become a lot simpler.
A NASA-led modeling study provides new evidence that global warming may increase the risk for extreme rainfall and drought.
Scientists at the University of Manchester says that natural emissions and manmade pollutants may have an unexpected cooling effect on the world's climate by making clouds brighter.
I’ve always loved Voltaic System’s habit of slapping a solar panel on just about anything one could carry around during the day. Backpacks, tablet covers, or laptop bags all become doubly useful once they’re capable of powering the gadgets within.
As planets age they typically become darker and cooler. Saturn, however, is much brighter than expected for a planet of its age, a question that has puzzled scientists since the late sixties.
Coral reefs are one of the most beautiful natural phenomena on our planet, but their purpose goes far beyond visual enjoyment. Did you know that coral reefs provide a living for over 500 million people across the globe?
The delicate wisps of gas seen in the image below make up an object known as SNR B0519-69.0, or SNR 0519 for short.
Solar Impulse embarked this morning on its latest journey, departing the San Francisco Bay Area at dawn, heading toward Phoenix on the first leg of a trip that if all goes well will land the sun-powered airplane in New York City around the Fourth of July.
There’s significant progress to report on an idea that could transform electric vehicles from a potential grid destabilizer to a helpful piece in the energy storage puzzle.
NGC 6559 is a cloud of gas and dust located at a distance of about 5000 light-years from Earth, in the constellation of Sagittarius (The Archer).
Dwellings fashioned out of used intermodal shipping containers continue to spread into new places. Their spread is slowed by the resistance of many US cities to altering their building codes.
In a city better known for turning its rivers bright green every March 17, a new title has been bestowed upon an unassuming little stretch of pavement in the industrial Pilsen section of Chicago.
Half the size of a paperclip, weighing less than a tenth of a gram, it leaps a few inches, hovers for a moment on fragile, flapping wings, and then speeds along a preset route through the air.
In an effort to determine if conditions were ever right on Mars to sustain life, a team of scientists, including a Michigan State University professor, has examined a meteorite that formed on the red planet more than a billion years ago.
Tongueless Frogs, Aglossa
David Cannatella
Pipids are highly aquatic frogs that rarely if ever venture out of water. They have several adaptations to aquatic life, including the loss of the tongue (tongues are not generally useful for feeding in water), and the presence of lateral line organs, which are used to detect wave motion in water (these are present in most groups of fishes). The group is sometimes called the Aglossa.
Pipid frogs are found in Africa and South America, and extend just into Panama. Some species in South America, such as the Surinam toad (Pipa pipa), are extremely flattened and look like roadkill. Females of the genus Pipa have an elaborate mating behavior in which eggs are deposited on the back of the female, and the skin swells up around the eggs to encase them in pockets in which the embryos develop. In some species the eggs hatch out as tadpoles, but in others fully formed froglets emerge from the mother's back.
Tadpoles (when present) lack beaks and denticles, and have paired spiracles (if spiracles are present). This is the Orton type 1 tadpole, also found in Rhinophrynidae. There is much diversity in larval morphology and ecology in pipids. Tadpoles of Xenopus and Silurana are extremely efficient filter feeders. Tadpoles of Hymenochirus are carnivorous, eating larger prey items. In some species of Pipa the eggs (embedded in the mother's back) hatch out as tadpoles, but other species have direct development, in which froglets emerge.
The genus Xenopus (African clawed frogs) has undergone drastic evolution in chromosome number, producing tetraploid (4n), octoploid (8n), and even dodecaploid (12n) species. These higher levels of ploidy may have resulted from hybridization between species. One species, Xenopus laevis, is widely used as a lab animal in molecular and developmental biology. The dwarf clawed frogs (Hymenochirus) are very small, about 20-30 mm, and are widely sold in aquarium stores. The call of many pipid frogs is a clicking sound, which in Xenopus borealis is produced by forcefully pulling apart the large arytenoid cartilages of the larynx (voice box), thus producing a "pop" by implosion.
The definition of the name Pipidae is problematic because of the relationships of Mesozoic and Tertiary taxa to living pipids (Báez, 1981). Báez' cladogram placed †Thoraciliacus, †Cordicephalus, †Saltenia, and †Eoxenopoides outside of the living Pipidae. She used two synapomorphies for Pipidae (including the aforementioned taxa): the absence of a quadratojugal and the absence of mentomeckelian bones. The first of these is also present in the closely related †Palaeobatrachidae, and may be diagnostic of a larger clade. Although mentomeckelians are reported in palaeobatrachids, the examination of †Palaeobatrachus fossils and figures in Spinar (1972) has not convinced Cannatella of their presence.
Cannatella and Trueb (1988a) diagnosed the living Pipidae by a large number of synapomorphies including presence of an epipubis cartilage, an unpaired epipubic muscle, absence of a quadratojugal, free ribs in the larvae, a fused articulation between the coccyx and sacrum, a short, stocky scapula, elongate septomaxillary bones, ossified pubis, a single, median palatal opening of the eustachian tube, lateral line organs in the adults, and absence of a tongue. Several of these characters are present in fossil taxa, and thus may be diagnostic of larger clades. Several others cannot be assessed in fossils. By ignoring the fossils, Cannatella and Trueb (1988a) produced a diagnosis for the family that was misleading. Relationships within the living Pipidae were discussed by Báez (1981), Cannatella and de Sá (1993), Cannatella and Trueb (1988a,b), and de Sá and Hillis (1990).
To ensure stability, Ford and Cannatella (1993) defined the node-based name Pipidae to be the most recent common ancestor of living pipids (Xenopus, Silurana, Hymenochirus, Pseudhymenochirus, and Pipa) and all of its descendants. Taxa considered to be fossil "pipids" (†Thoraciliacus, †Cordicephalus, †Saltenia, †Shomronella, and †Eoxenopoides) are assigned only to the level of Pipimorpha.
Báez, A. M. 1981. Redescription and relationships of Saltenia ibanezi, a Late Cretaceous pipid frog from northwestern Argentina. Ameghiniana 18(3-4):127-154.
Cannatella, D. C., and R. O. de Sá. 1993. Xenopus laevis as a model organism. Syst. Biol. 42(4):476-507.
Cannatella, D. C., and L. Trueb. 1988a. Evolution of pipoid frogs: Intergeneric relationships of the aquatic frog family Pipidae (Anura). Zool. J. Linn. Soc. 94:1-38.
Cannatella, D. C., and L. Trueb. 1988b. Evolution of pipoid frogs: morphology and phylogenetic relationships of Pseudhymenochirus. J. Herpetol. 22:439-456.
de Sá, R. O., and D. M. Hillis. 1990. Phylogenetic relationships of the pipid frogs Xenopus and Silurana: an integration of ribosomal DNA and morphology. Mol. Biol. Evol. 7(4):365-376.
Ford, L. S., and D. C. Cannatella. 1993. The major clades of frogs. Herp. Monogr. 7:94-117.
Spinar, Z. V. 1972. Tertiary frogs from central Europe. W. Junk, The Hague.
University of Texas, Austin, Texas, USA
Correspondence regarding this page should be directed to David Cannatella at
Page copyright © 1995 David Cannatella
Page: Tree of Life Pipidae. Tongueless Frogs, Aglossa. Authored by David Cannatella. The TEXT of this page is licensed under the Creative Commons Attribution License - Version 3.0. Note that images and other media featured on this page are each governed by their own license, and they may or may not be available for reuse. Click on an image or a media link to access the media data window, which provides the relevant licensing information. For the general terms and conditions of ToL material reuse and redistribution, please see the Tree of Life Copyright Policies.
Citing this page:
Cannatella, David. 1995. Pipidae. Tongueless Frogs, Aglossa. Version 01 January 1995 (under construction). http://tolweb.org/Pipidae/16986/1995.01.01 in The Tree of Life Web Project, http://tolweb.org/
Cesty do hlubin zamrzlého času
Journeys into the Depths of Frozen Time, by Petr Pokorný.
Cores through the polar ice sheets provide a remarkable record of past environmental conditions. Ice coring has evolved as a distinct science since the 1960s through the pioneering efforts of scientists from the USA and other nations. Snow accumulating on the polar plateaus of Antarctica and Greenland does so in a regularly ordered fashion that preserves a stratigraphic sequence accessible by drilling vertical boreholes into the ice. Collecting these cores is a specialized engineering challenge. Because ice is a deformable material and flows, ice-coring locations must be chosen carefully to extract the most reliable information. Deep cores require large equipment and fluid-filled boreholes to avoid freezing and collapse of the hole. The deepest cores are collected during multi-year campaigns and provide views more than 700,000 years into the past.
Small (µm–mm scale) aquatic organisms rarely have strong enough morphological features to migrate actively over long distances. Despite this, many taxa occur almost everywhere on Earth, and the expression "everything is everywhere" has been used for these small creatures. During an expedition to one of the most hostile and isolated freshwater systems on Earth, the Dry Valley lakes in Antarctica, we did not, however, expect a high biodiversity of, for example, rotifers and crustacean zooplankton, because of biogeographical barriers (salt-water oceans). We were therefore very, very surprised to find not only the endemic species previously recorded, but also several cosmopolitan species, and thereby the highest biodiversity of rotifers ever recorded on the Antarctic mainland! Some of these species, such as Keratella cochlearis and K. quadrata (upper photo), can be found in any pond or lake, even the one outside the Ecology Building in Lund! A plausible question is therefore: how did they get to the Dry Valley lakes? Although we can only speculate, which we do in a recent paper published in Antarctic Science (for a pdf), a likely explanation is that, despite strong restrictions, these animals have been brought to the Dry Valleys unintentionally by the few scientists who have been permitted to work there. We can also speculate that this "assisted dispersal" occurred more than ten years ago, since several of the rotifers show relatively high population densities, but probably after the mid-1980s, since sampling at that time did not register these cosmopolitan species. Although in this case we can only guess at the processes, it is likely that small organisms get a lot of assistance in their dispersal from larger animals, including humans, even to very remote regions. In addition to this surprising dispersal and high biodiversity, we are also proud to show a photo of the southernmost copepod ever recorded.
Hence, we can also extend the southern limit of dispersal for Boeckella sp. to 77°S (photo below)!
In the Eiffel user discussion group , Ian Joyner recently asked:
A lot of people are now using Result as a variable name for the return value in many languages. I believe this first came from Eiffel, but can’t find proof. Or was it adopted from an earlier language?
Proof I cannot offer, but certainly my recollection is that the mechanism was an original design and not based on any previous language. (Many of Eiffel’s mechanisms were inspired by other languages, which I have always acknowledged as precisely as I could, but this is not one of them. If there is any earlier language with this convention — in which case a reader will certainly tell me — I was and so far am not aware of it.)
The competing conventions are a return instruction, as in C and languages based on it (C++, Java, C#), and Fortran’s practice, also used in Pascal, of using the function name as a variable within the function body. Neither is satisfactory. The return instruction suffers from two deficiencies:
- It is an extreme form of goto, jumping out of a function from anywhere in its control structure. The rest of the language sticks to one-entry, one-exit structures, as I think all languages should.
- In most non-trivial cases the return value is not just a simple formula but has to be computed through some algorithm, requiring the declaration of a local variable just to denote that result. In every case the programmer must invent a name for that variable and, in a typed language, include a declaration. This is tedious and suggests that the language should take care of the declaration for the programmer.
The Fortran-Pascal convention does not combine well with recursion (which Fortran for a long time did not support). In the body of the function, an occurrence of the function’s name can denote the result, or it can denote a recursive call; conventions can be defined to remove the ambiguity, but they are messy, especially for a function without arguments: in function f, does the instruction
f := f + 1
add one to the value of the function’s result as computed so far, as it would if f were an ordinary variable, or to the result of calling f recursively?
Another problem with the Fortran-Pascal approach is that in the absence of a language-defined rule for variable initialization a function can return an undefined result, if some path has failed to initialize the corresponding variable.
The Eiffel design addresses these problems. It combines several ideas:
- No nesting of routines. This condition is essential because without it the name Result would be ambiguous. In all Algol- and Pascal-like languages it was considered really cool to be able to declare routines within routines, without limitation on the depth of nesting. I realized that in an object-oriented language such a mechanism was useless and in fact harmful: a class should be a collection of features — services offered to the rest of the world — and it would be confusing to define features within features. Simula 67 offered such a facility; I wrote an analysis of inter-module relations in Simula, including inheritance and all the mechanisms retained from Algol such as nesting (I am trying to find that document, and if I do I will post it in this blog); my conclusion was that the result was too complicated and that the main culprit was nesting. Requiring classes to be flat structures was, in my opinion, one of the most effective design decisions for Eiffel.
- Language-defined initialization. Even a passing experience with C and C++ shows that uninitialized variables are one of the major sources of bugs. Eiffel introduced a systematic rule for all variables, including Result, and it is good to see that some subsequent languages such as Java have retained that convention. For a function result, it is common to ignore the default case, relying on the standard initialization, as in if "interesting case" then Result := "interesting value" end without an else clause (I like this convention, but some people prefer to make all cases explicit).
- One-entry, one-exit blocks; no goto in overt or covert form (break, continue etc.).
- Design by Contract mechanisms: postconditions usually need to refer to the result computed by a function.
The convention is then simple: in any function, the language declares for you a local variable Result, of the type that you declared for the function result; you can use it as a normal variable, and the result returned by any particular call will be the final value of the variable on exit from the function body.
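As an illustration (this example is mine, not from the original post), a small Eiffel function using the convention. Note that Result needs no declaration and starts at the INTEGER default of 0, so the "interesting case" idiom described above works naturally:

```eiffel
index_of (x: INTEGER; a: ARRAY [INTEGER]): INTEGER
		-- Index of the first occurrence of `x' in `a'; 0 if absent.
	local
		i: INTEGER
	do
		-- `Result' is provided by the language, already initialized to 0.
		from
			i := a.lower
		until
			i > a.upper or Result /= 0
		loop
			if a [i] = x then
				Result := i
			end
			i := i + 1
		end
	ensure
		found_implies_match: Result /= 0 implies a [Result] = x
	end
```

The postcondition shows the Design by Contract point made above: the assertion can refer to Result directly, with no extra notation.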
The convention has been widely imitated, starting with Delphi and most recently in Microsoft’s “code contracts”, a kind of poor-man’s Design by Contract emulation, achieved through libraries; it requires a Result notation to denote the function result in a postcondition, although this notation is unrelated to the mechanisms in the target languages such as C#. As the example of Eiffel’s design illustrates, a programming language is a delicate construction where all elements should fit together; the Result convention relies on many other essential concepts of the language, and in turn makes them possible.
Eiffel Software discussion group, here. | <urn:uuid:c3c69279-448f-4db4-8972-f1218195d726> | 3.265625 | 1,104 | Personal Blog | Software Dev. | 35.035668 |
A language element that took me a second, if not a third, look before I understood it is the Objective-C method declaration. Since I am a newbie as far as coding in Objective-C is concerned, I just accepted things for what they were, understood a bit first, made something work, and learned more about it as I progressed. When you declare an Objective-C method, it follows this basic format:
1. Methods with no parameter
<method type> (<return type>) <method name>;
+ (void) doLogin;
- (void) doLogin;
2. Methods with a single parameter
<method type> (<return type>) <method name>: (<argument type>) <argument name>;
+ (void) doLoginWithUserId: (NSString *) userId;
- (void) doLoginWithUserId: (NSString *) userId;
3. Methods with 2 parameters
<method type> (<return type>) <method name>: (<argument type>) <argument name> <argument 2 label>: (<argument 2 type>) <argument 2 name>;
+ (void) doLoginWithUserId: (NSString *) userId andPassword: (NSString *) pwd;
- (void) doLoginWithUserId: (NSString *) userId andPassword: (NSString *) pwd;
The following are the elements as mentioned in the syntax:
Method Type
Replace <method type> with either a + or a -. The + method type means that the method is a class method or, in the C#/Java world, a static method: a method which can be invoked without instantiating the class. The - method type, on the other hand, marks an instance method, a method which can be invoked only when the class has been instantiated.
Using the sample code in item 1 format above and given each method is declared in separate classes named LoginClass, the class methods or + methods can be invoked as follows:
[LoginClass doLogin];
[LoginClass doLoginWithUserId:@"jojit"];
[LoginClass doLoginWithUserId:@"jojit" andPassword:@"password"];
The instance method or the - method type can be invoked by declaring a variable of type LoginClass and instantiating it, as follows.
LoginClass *loginObj = [[LoginClass alloc] init];
[loginObj doLogin];
[loginObj doLoginWithUserId:@"jojit"];
[loginObj doLoginWithUserId:@"jojit" andPassword:@"password"];
Return Type, Argument Type, and Argument 2 Type
Replace <return type>, <argument type>, and <argument 2 type> with valid data types like void, int, NSString, etc. Note the asterisk after the NSString arguments in the examples above; this is used when you are using an object data type rather than a primitive data type.
Method Name
This, of course, refers to the name of the method.
Argument Name and Argument 2 Name
These elements are the same as method arguments in other languages like C#, VB, and Java, i.e. they are used for passing parameters to the methods. I would admit that I haven't researched whether there is such a thing as in/out or ByVal/ByRef parameters in Objective-C.
Argument 2 Label
Why is there an Argument 2 Label but no Argument 1 Label? I don’t know but actually the label is optional. You may or may not use a label for the arguments 2 onwards but along the way, I realized how to better name my methods and arguments.
I accepted these things as they were initially, but along the way I kept on asking "does it mean all methods are public?" This is because we sometimes want some methods to be invoked only internally, within the class. I tried using C functions, believing that they are always private, but I have to verify this some more. C functions have a different declaration syntax compared to Objective-C methods. I won't discuss it here, but this is interesting to note. Hopefully, I can discuss it in another post or at least post a link to another blog discussing it.
After about a week in coding and several lines of code, I am still itching at finding how to make my methods private. I found the answer just today and it prompted me to post this blog again after about a month hiatus to at least lessen my blogging backlog. I will discuss about that in my next post. | <urn:uuid:8e632dc9-8ade-457c-baa3-158e57e272b3> | 3.125 | 953 | Personal Blog | Software Dev. | 40.481131 |
The Problem of Distribution in an Expanding Universe
Therefore, we must consider the alternative scale of distance, and formulate the law of distribution on the assumption that red-shifts are the familiar velocity-shifts, and do measure the expansion of the universe. The actual recession necessitates one correction to apparent luminosities and another to the epoch of the various surveys. The first correction has already been discussed. Recession reduces the apparent luminosities of the nebulae by the 'recession factors' 1 + dλ/λ, where dλ/λ is the fractional red-shift. When these effects are removed from the measures, the nebulae appear brighter, and the distances are less than those estimated from the uncorrected measures. Now the latter data, as we have just seen, indicate uniform distribution. Consequently, the revised distances might be expected to introduce departures from uniformity, in the sense that the volumes increase less rapidly than the numbers of nebulae, or, in other words, that the distribution increases outwards, leaving the observer in an unwelcome, favoured position. However, this conclusion does not necessarily follow, because the surveys represent different epochs in the history of the expanding universe.
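As a rough numerical illustration of this first correction (the flux, red-shift, and luminosity values below are invented, and the notation is modern shorthand rather than Hubble's):

```python
import math

# Recession dims each nebula by the "recession factor" 1 + dl/l,
# where dl/l is its fractional red-shift. Removing the effect
# brightens the nebula and shrinks its estimated distance.

def corrected_flux(observed_flux, redshift):
    """Undo the recession dimming by multiplying by (1 + dl/l)."""
    return observed_flux * (1.0 + redshift)

def distance(flux, luminosity):
    """Inverse-square law: flux = luminosity / (4 * pi * distance^2)."""
    return math.sqrt(luminosity / (4.0 * math.pi * flux))

obs_flux = 1.0e-12    # arbitrary units
z = 0.1               # fractional red-shift dl/l

d_uncorrected = distance(obs_flux, 1.0)
d_corrected = distance(corrected_flux(obs_flux, z), 1.0)

# As the text says: correction makes the nebula brighter,
# so the revised distance comes out smaller.
print(d_corrected < d_uncorrected)   # True
```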
The light which reaches us today left the limits of the various surveys far back in past time. From the limit of the deepest survey, for instance, the light started about 400 million years ago. It travelled for about 120 million years before it reached the limit of the next deepest survey, and another 130 million years before crossing the limit of the shallowest survey. During these immense intervals of time the nebulae at the limits of the different surveys were receding at enormous velocities to still greater distances.
We count a certain number of nebulae and we know that they were scattered through a certain volume of space when the light left the limit of the survey. But today, many millions of years later, these same nebulae are scattered through a much larger volume of space, and the increase is different for each survey. Evidently, all the surveys must be reduced to the same epoch before the law of distribution can be formulated. Moreover, the law will continually change, for the recession implies that the distribution thins out with time.
These considerations emphasize the complexity of the problem of distribution in an expanding universe. The first step in the solution is the choice of a common epoch to which the different surveys will be reduced in order to make the comparison of numbers of nebulae and volumes of space. As a matter of convenience, the epoch selected is now, the time at which the surveys were made. Then, knowing the law of red-shifts or, in other words, the law of expansion, it seems possible to expand the volumes of all the surveys up to the epoch, now.
At this point the procedure becomes arbitrary. The calculations, in the present stage of knowledge, may be made in various ways, and the choice involves assumptions concerning the nature of the universe. As a simple illustration, does an individual nebula maintain a constant velocity as it recedes into the depths of space, or does its velocity steadily increase with increasing distance? This and other more technical questions must be answered before the reductions can be made with confidence. Thus the problem of reduction to a common epoch forces us to consider cosmological theory and some of the models of the universe which loom in that shadowy realm.
Most of the current models are derived from relativistic cosmology. Moreover, the outstanding exception, Professor Milne's kinematical model, is so outwardly similar, in several of its aspects, to a special case in the relativistic theory, that the observer, faced with a small sample, can scarcely hope to distinguish between them. Therefore, in the brief discussion which follows, the relativistic models alone will be considered. There are, it is said, many compelling reasons for concentrating on the theory, but the observer is not the proper authority to present them in their technical details. Instead, a few of the underlying principles will be mentioned together with the features of the models that may be compared with observations. | <urn:uuid:2bbce61d-a553-421e-b17d-64098015a5d4> | 3.453125 | 831 | Academic Writing | Science & Tech. | 30.314677 |
Accidental Pinhole and Pinspeck Cameras
Accidental pinhole and antipinhole cameras
There are many ways in which pictures are formed around us. The most efficient mechanisms use lenses or narrow apertures to focus light into a picture of what is in front. So a set of occluders (forming a pinhole camera) or a mirror surface (capturing only a subset of the reflected rays) will let us see an image on a surface. In those cases, an image is formed by intentionally building a particular arrangement of surfaces that results in a camera. However, similar arrangements appear naturally, by accident, in many places. Often the observer is not aware of the faint images produced by these accidental cameras.
A shadow is also a form of accidental image. The shadow of an object is all the light that is missing because of the object's presence in the scene. If we were able to extract the light that is missing (that is, the difference between when the object is absent from the scene and when it is present), we would get an image. That image would be the negative of the shadow, and it would be approximately equivalent to the image produced by a pinhole camera whose pinhole has the shape of the occluder. Therefore, a shadow is not just a dark region around an object: a shadow is the negative picture of the environment around the object producing it.
When we walk under the Sun we project a sharp, dark shadow on the ground, and there seems to be nothing special about it. The shadow seems to disappear as soon as we enter the shade of a building. However, even when there is no apparent shadow around us, we are still blocking some of the light that fills the space, producing a very faint penumbra on the ground all around us. That shadow is colorful, and even if we cannot see it, it reveals the scene around us. This effect is what is shown in the two images on the right. In the top image, a person is jumping; as there is no direct sunlight, there seems to be no shadow. However, if we subtract that picture from a picture taken without the person, we can extract a faint shadow, as shown in the image below. The shadow shows the blue of the sky above and the yellow color of the buildings on the side opposite the wall. In fact, an occluder is an accidental antipinhole camera, and its shadow is the picture.
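The frame-difference operation described above is simple to sketch with NumPy. The arrays below are synthetic stand-ins for the two photographs, with made-up brightness values:

```python
import numpy as np

# Two synthetic frames: one without the person, one with the person
# faintly blocking some light (a 2-unit dip on a ~100-unit background).
rng = np.random.default_rng(0)
frame_without = rng.uniform(100.0, 110.0, size=(64, 64))
frame_with = frame_without.copy()
frame_with[20:40, 20:40] -= 2.0        # the person's faint penumbra

# Subtracting the two frames extracts the shadow (the missing light).
shadow = frame_without - frame_with

# The shadow is tiny compared with the scene brightness...
print(float(shadow.max()), float(frame_without.mean()))

# ...so boost the contrast to make it visible, as in the figure.
revealed = np.clip(shadow * 50.0, 0.0, 255.0)
```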
In this work we identify and study two types of accidental cameras (pinholes and antipinholes) that can be formed in scenes revealing the scene outside the picture.
A person moving inside the room projects a faint change of illumination on the wall. The illumination change can be used to reveal the scene outside the room.
When there is no direct sunlight, a person walking seems not to project any shadow. But the shadow is still there, even if it is so faint that it cannot easily be seen. We can reveal this shadow by increasing the contrast, as shown in the bottom images.
The spin-torque effect in a nanomagnet. (top left) If the polarization of the spins in the current (, green arrow) is parallel to the easy axis (black arrow), the spin torque opposes the natural damping and opens up the angle of the precession cone at all points along the cone. (top right) If makes a finite angle with the easy axis, the spin torque opens the precession cone angle around a half circle centered at point , but closes the cone angle in the other half circle centered at point . Applying two successive spin-torque pulses either both at point , or at points and , increases or decreases the net effectiveness of the spin torque in inducing a dynamic switch. (bottom) Schematic of the nanopillar structure. The free layer is the right layer and the current is applied along the horizontal axis of the pillar. The polarization of the current (green arrows) also rotates as a result of the spin-torque effect. | <urn:uuid:21a98938-a863-41ea-8418-873d9a8d2ccc> | 2.953125 | 201 | Academic Writing | Science & Tech. | 46.16 |
Everybody knows that when a stone is dropped in water, a jet of water shoots up. Physicists Detlef Lohse, from the University of Twente in The Netherlands, and Heinrich Jaeger, of The University of Chicago, are combining math, theory and super high-speed videos to try to figure out the basic physics underlying the jet.
Video footage courtesy of Detlef Lohse/University of Twente and John Royer, Eric Corwin, Heinrich Jaeger, University of Chicago. Cover image from jmsuarez/flickr. Music from Prelinger Archives. Produced by Flora Lichtman | <urn:uuid:4bec2120-57e0-457a-b28d-1d692f035942> | 2.90625 | 132 | Truncated | Science & Tech. | 32.615379 |
Sunday, 22 November 2009
British Wildlife: T
Thecodontosaurus antiquus Morris, 1843
Thecodontosauridae; Saurischia; Sauropsida; Chordata
Thecodontosaurus has often been nicknamed 'The Bristol Dinosaur'. Its remains were first found in Clifton, near the Avon Gorge, and it remains one of the earliest dinosaurs to be found in Britain. The remains date to the Late Triassic, a time when dinosaurs were radiating from initially theropod-like forms into prosauropods and basal ornithischians. Thecodontosaurus is either a primitive prosauropod or a basal sauropodomorph. This means it could be ancestral to the first dinosaurs to become either sauropods or prosauropods (long-necks and not-quite-so-long-necks).
Thecodontosaurus antiquus bones in matrix
Bristol City Museum
Talpa europaea Linnaeus, 1758
Talpidae; Eulipotyphla; Mammalia; Chordata
Moles are the third type of 'insectivore' found in Britain (I covered hedgehogs here and shrews here). They are distinctive-looking enough to be familiar to most people, despite hardly ever being seen. This probably has something to do with the excellent The Wind in the Willows by Kenneth Grahame (also see the next 'T' animal). Moles spend all of their time underground; they are only seen when creating molehills or if they have been killed.
Dead European mole
Photo of specimen studied at Anglia Ruskin University, Cambridge
Moles are actually surprisingly small... before seeing one (never seen a live one, only dead ones like this) I thought they were at least hedgehog-sized, but they are really just a large shrew. The first thing one usually notices about a mole is either its claws or lack of eyes. They don't really lack eyes, but they are extremely reduced in size and are hidden under fur, and sometimes skin. The claws are incredible, especially those of the front paw, which are obviously used for digging.
Eurasian mole skeleton
Booth Museum of Natural History, Brighton
Moles mainly eat earthworms, but will also eat other invertebrates and even other vertebrates.
Bufo bufo Linnaeus, 1758
Bufonidae; Anura; Amphibia; Chordata
The common toad is, surprisingly enough, one of the most abundant amphibians in the UK, along with the common frog (Rana temporaria). Britain's other toad, the natterjack (Epidalea calamita), is much rarer and extremely localised in distribution. Common toads are very often found in bodies of water of varying size, and are even found away from water outside the breeding season.
Female common toad
Females are larger than males, and during the breeding season, one can often find pairs in amplexus; this is when the male grips onto the female using 'nuptial pads' (the main method, apart from size, to distinguish the sexes) on his hands. They remain bonded until after the male has fertilised her eggs, which are laid as toadspawn in water. The spawn consists of a double row of black dots encased in a jelly strand.
Immature common toad
Enfield, North London
Common toads metamorphose from tadpoles into miniature toads in a few months, and are often found in damp areas. I briefly kept a toadlet last year, which I named Toad of Toad Hall, feeding it on aphids and ants. It didn't really eat the ants, they would just crawl all over it. Aphids, however, it loved. I kept it for a few weeks and released it back into my garden.
Next week, U: an extinct troglodyte bear, a cryptic yet colourful moth, and an auk that lays pear-shaped eggs. | <urn:uuid:64c79f19-b77c-4aae-ba47-42f6e55e8188> | 3.3125 | 846 | Personal Blog | Science & Tech. | 39.870973 |
Q&A: Galaxies, Galaxy Clusters, AGN, and Quasars
In my local newspaper, an article from the Los Angeles Times has the following quote: "...the center of the galaxy...is a violent place where stars are forming, dying and exploding at furious rates and being buffeted by supernova shock waves." So, how do you define "furious rates"? Is the reporter referring to cosmic terms? Are new stars being formed at the rate of one per second, or one per million years? And exploding at the same rate?
The star formation rate in the central region of the Milky Way varies from place to place. It is concentrated in giant molecular clouds that are common there. In one of these clouds, a few hundred thousand stars have formed over the past ten million years, and a hundred or more supernovas have occurred in that period. In our neck of the Milky Way, the rate is much less. No supernovas have occurred within a few dozen light years in the last 10 million years.
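A quick back-of-the-envelope conversion of the round figures quoted in the answer (for one giant molecular cloud, not the whole Galaxy) answers the questioner's "one per second, or one per million years?" directly:

```python
# Round figures from the answer above (illustrative, not exact):
stars = 300_000        # "a few hundred thousand" stars formed
supernovae = 100       # "a hundred or more" supernovas
years = 10_000_000     # over the past ten million years

years_per_star = years / stars
years_per_supernova = years / supernovae

print(years_per_star)        # one new star roughly every 33 years
print(years_per_supernova)   # one supernova every 100,000 years
```

So the true rate sits between the questioner's two guesses, and far closer to the slow end.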
The charge on the electron was first measured by J.J. Thomson and two co-workers (J.S.E. Townsend and H.A. Wilson), starting in 1897. Each used a slightly different method. Townsend's work will be described as an example.
Townsend's work depended on the fact that drops of water will grow around ions in humid air. Under the influence of gravity, a drop would fall, accelerating until it reached a constant speed.
Several items were measured in this experiment.
1. the mass of a water droplet (actually the average mass of many)
2. the total electric charge carried on all the droplets (this was done by absorbing the water into an acid and measuring the charge picked up.)
3. the velocity of the droplet
4. the total mass of all water droplets (found by measuring the acid's increase in weight)
He determined the e/m ratio of the droplets (item 2 divided by item 4), then multiplied by the mass of one droplet to get the value for e.
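Townsend's arithmetic can be sketched directly. The numbers below are invented for illustration and are not his actual measurements:

```python
# Hypothetical measurements (illustrative values only):
total_charge = 4.8e-13   # item 2: total charge on all droplets, coulombs
total_mass = 3.0e-9      # item 4: total mass of all droplets, kg
droplet_mass = 1.0e-15   # item 1: average mass of one droplet, kg

e_over_m = total_charge / total_mass   # charge-to-mass ratio of the droplets
e = e_over_m * droplet_mass            # charge carried by a single droplet

print(e)   # about 1.6e-19 C with these made-up inputs
```

The method rests on the assumption (one ion per droplet) that the charge per unit mass of the cloud equals the charge per droplet divided by the droplet mass.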
Thomson, Townsend, and Wilson each obtained roughly the same value for the charge on positive and negative ions. It was about 1 x 10⁻¹⁹ coulombs. This work continued until about 1901 or 1902.
II. Robert A. Millikan's Definitive Measurement
Robert A. Millikan started his work on electron charge in 1906 and continued for seven years. His 1913 article announcing the determination of the electron's charge is a classic and Millikan received the Nobel Prize for his efforts.
Here is a diagram of his apparatus, reproduced from his 1913 article:
Here is Millikan's description:
8. THE EXPERIMENTAL ARRANGEMENTS.
The experimental arrangements are shown in Fig. 1. The brass vessel D was built for work at all pressures up to 15 atmospheres but since the present observations have to do only with pressures from 76 cm. down these were measured with a very carefully made mercury manometer M which at atmospheric pressure gave precisely the same reading as a standard barometer. Complete stagnancy of the air between the condenser plates M and N was attained first by absorbing all of the heat rays from the arc A by means of a water cell w, 80 cm. long, and a cupric chloride cell d, and second by immersing the whole vessel D in a constant temperature bath G of gas-engine oil (40 liters) which permitted, in general, fluctuations of not more than .02° C. during an observation. This constant temperature bath was found essential if such consistency of measurement as is shown below was to be obtained. A long search for causes of slight irregularity revealed nothing so important as this and after the bath was installed all of the irregularities vanished. The atomizer A was blown by means of a puff of carefully dried and dust-free air introduced through the cock e. The air about the drop p was ionized when desired by means of Röntgen rays from X which readily passed through the glass window g. To the three windows g (two only are shown) in the brass vessel D correspond, of course, three windows in the ebonite strip c which encircles the condenser plates M and N. Through the third of these windows, set at an angle of about 18° from the line Xpa and in the same horizontal plane, the oil drop is observed.
This is a photo dating from the time of the experiment.
These are some points to be made about the experiment:
1. The two plates were 16 mm across, "correct to about .01 mm."
2. The hole bored in the top plate was very small.
3. The space between the plates was illuminated with a powerful beam of light.
4. He sprayed oil ("the highest grade of clock oil") with an atomizer that made drops one ten-thousandth of an inch in diameter.
5. One drop of oil would make it through the hole.
6. The plates were charged with 5,000 volts.
7. It took a drop with no charge about 30 seconds to fall across the opening between the plates.
8. He exposed the droplet to radiation while it was falling, which stripped electrons off.
9. The droplet would slow in its fall. The drops were too small to see. What he saw was a shining point of light.
10. By adjusting the current, he could freeze the drop in place and hold it there for hours. He could also make the drop move up and down many times.
11. Since the rate of ascent (or descent) was critical, he had a highly accurate scale inscribed onto the telescope used for droplet observation, and he used a highly accurate clock, "which read to 0.002 second."
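For intuition, the balance condition implied by point 10 (the drop hangs still when the electric force cancels its weight) fixes the drop's total charge. A rough sketch, using the plate separation and voltage from the text, a drop radius of half of "one ten-thousandth of an inch" (point 4), and an assumed oil density:

```python
import math

# Balance condition for a "frozen" drop between the plates:
#   electric force  q * V / d  =  weight  m * g,
# so  q = m * g * d / V.
g = 9.81                  # m/s^2
d = 0.016                 # m   (16 mm plate separation, from the text)
V = 5000.0                # volts (from the text)
radius = 1.27e-6          # m   (2.54e-6 m diameter)
density = 900.0           # kg/m^3 (assumed for "clock oil")

mass = density * (4.0 / 3.0) * math.pi * radius ** 3
q = mass * g * d / V

print(q)                  # a few times 1e-19 C
print(q / 1.602e-19)      # i.e. only one or two elementary charges
```

With drops this small, a single gained or lost electron changes the balance voltage appreciably, which is exactly what made single-electron steps observable.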
Millikan's Improvements over Thomson
1. Oil evaporated much slower than water, so the drops stayed essentially constant in mass.
2. Millikan could study one drop at a time, rather than a whole cloud.
3. In following the oil drop over many ascents and descents, he could measure the drop as it lost or gained electrons, sometimes only one at a time. Every time the drop gained or lost charge, it ALWAYS did so in a whole number multiple of the same charge.
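Point 3 is the heart of the argument: if charge always changes by whole-number multiples of one unit, that unit can be recovered from a set of measured drop charges by a common-divisor search. A toy sketch with synthetic data (a real analysis would fit, not scan):

```python
e_true = 1.6e-19
# Synthetic "measured" drop charges: small whole-number multiples of
# the elementary charge, with a little measurement noise added.
multiples = [(3, 0.002), (5, -0.001), (8, 0.0015), (2, 0.001)]
charges = [n * e_true * (1 + noise) for n, noise in multiples]

def unit_charge(charges, candidates):
    """Return the candidate unit that best divides every charge into a
    whole number (a crude common-divisor search)."""
    def misfit(u):
        return sum(abs(q / u - round(q / u)) for q in charges)
    return min(candidates, key=misfit)

candidates = [k * 1e-21 for k in range(50, 400)]   # scan 0.5e-19 .. 4e-19
best = unit_charge(charges, candidates)
print(best)   # close to 1.6e-19
```

Submultiples of the true unit (e/2, e/3, ...) also divide the charges, but the scaled-up noise makes their misfit larger, so the search lands on e itself.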
The value as of 1991 (for the charge on the electron) is 1.60217733(49) x 10⁻¹⁹ coulombs. This is less than 1% higher than the value obtained by Millikan in 1913. The 49 in parentheses shows the plus/minus range of the last two digits (the 33). It is unlikely that there will be much improvement of the accuracy in years to come.
Interesting Fact about Robert Millikan's Experiment
In "The Discovery of Subatomic Particles" by Steven Weinberg there appears a footnote on p. 97. It reads:
. . . . there appeared a remarkable posthumous memoir that throws some doubt on Millikan's leading role in these experiments. Harvey Fletcher (1884-1981), who was a graduate student at the University of Chicago, at Millikan's suggestion worked on the measurement of electronic charge for his doctoral thesis, and co-authored some of the early papers on this subject with Millikan. Fletcher left a manuscript with a friend with instructions that it be published after his death; the manuscript was published in Physics Today, June 1982, page 43. In it, Fletcher claims that he was the first to do the experiment with oil drops, was the first to measure charges on single droplets, and may have been the first to suggest the use of oil. According to Fletcher, he had expected to be co-author with Millikan on the crucial first article announcing the measurement of the electronic charge, but was talked out of this by Millikan.
Evidence for Spectral Lines
Spectral lines were first seen in the sun's spectrum by William Wollaston in 1802. However, they were not systematically studied until 1814, when a German optician named Joseph von Fraunhofer observed and catalogued them. Fraunhofer carefully recorded the positions of the lines, but he didn't attempt to explain why they were there. In the late 1850's, the physicist Gustav Kirchhoff decided to investigate further, with the help of the chemist Robert Bunsen.
Did he invent the Bunsen burner?
And they found that each element had its own unique set of lines?
They certainly did. A given element would always produce the same spectrum, which was
different from that of any other element. In fact, in the 1860's, Kirchhoff and Bunsen
discovered two new elements, cesium and rubidium, when they came across some spectral
lines that didn't fit any of the known elements. Later, the elements gallium,
helium, argon, neon, krypton, and xenon were also discovered using spectroscopy. | <urn:uuid:7ed2a7d3-a3ee-4934-84ae-9c50e7799b7a> | 4 | 240 | Knowledge Article | Science & Tech. | 48.040189 |
Dr Cécile Gaspar of French Polynesia asks: Q: What is the situation of green, hawksbill, and leatherback sea turtles in the world? A:
"Sea turtles are among the most threatened, yet poorly understood creatures of the ocean," says Roderic Mast, Vice President at Conservation International and Director of CI's Sea Turtle Flagship Program. All but one of the world's seven species of sea turtles are considered Endangered or Critically Endangered on the 2004 IUCN Red List of Threatened Species.
Sea turtles have long been a part of coastal peoples' diets wherever the animals occur, and hunting remains a substantial threat to green sea turtles (Chelonia mydas) in many areas. The shell of the hawksbill turtle (Eretmochelys imbricata) was once prized by the fashion industry for its characteristic tortoise-shell pattern. Additional threats to sea turtles and their habitats come from marine pollution (especially ingestible plastics), industrial fishing, and coastal development, including beach lighting, which affects turtles' nesting, feeding, and migratory habitats.
Populations of Pacific leatherback turtles (Dermochelys coriacea) that nest on beaches from Mexico south to Panama have been drastically reduced by years of uncontrolled egg collection and a dramatic increase in their incidental capture by fisheries. In the June 2000 issue of Nature, scientists reported that the population of leatherbacks nesting at Playa Grande, Costa Rica, had dwindled by more than 90 percent in just a decade. Off the Pacific coast of Mexico, fewer than 100 remain.
"Our success in conserving turtles hinges on preserving the ecosystems that support them, the sea itself," says Mast. "Sea turtles are flagship species for the sea; they help us communicate to the public about the complexities of marine conservation." | <urn:uuid:45518250-5a90-4aea-8158-826035a82f80> | 3.484375 | 367 | Q&A Forum | Science & Tech. | 26.459727 |
A reader sent this abstract of a Henrik Svensmark study with a one-word caption: Wow! I agree. The notion that "local" (and by local, we mean unimaginably far away) supernovae could affect the Earth's climate is certainly creative. I haven't even read the thing, so I'm not buying it yet, but it is an amazing hypothesis.
Observations of open star clusters in the solar neighbourhood are used to calculate local supernova (SN) rates for the past 510 Myr. Peaks in the SN rates match passages of the Sun through periods of locally increased cluster formation which could be caused by spiral arms of the Galaxy. A statistical analysis indicates that the Solar system has experienced many large short-term increases in the flux of Galactic cosmic rays (GCR) from nearby SNe. The hypothesis that a high GCR flux should coincide with cold conditions on the Earth is borne out by comparing the general geological record of climate over the past 510 Myr with the fluctuating local SN rates. Surprisingly, a simple combination of tectonics (long-term changes in sea level) and astrophysical activity (SN rates) largely accounts for the observed variations in marine biodiversity over the past 510 Myr. An inverse correspondence between SN rates and carbon dioxide (CO2) levels is discussed in terms of a possible drawdown of CO2 by enhanced bio-productivity in oceans that are better fertilized in cold conditions – a hypothesis that is not contradicted by data on the relative abundance of the heavy isotope of carbon, 13C.
I was initially very skeptical of Svensmark's work attempting to link cosmic rays to cloud formation, with that effect acting as an amplifier (in terms of warming and cooling effects) of changes in solar output. I must say that over time, that work has survived replication efforts pretty well. | <urn:uuid:929d5dcb-09c6-4a1f-950b-8beb75980a8f> | 2.921875 | 369 | Personal Blog | Science & Tech. | 36.41114 |
Aired Wednesday, August 20, at 8:00 p.m. on CPTV
Renowned astrophysicist Neil deGrasse Tyson investigates whether a "doomsday asteroid" the size of the Rose Bowl will hit the earth in 2036, and explores what the consequences could be, and what steps NASA could take to avoid this catastrophe. Other stories include the latest evidence on genes and hormones that regulate human body weight, which helps explain why most attempts at dieting prove so frustrating; a profile of a wildly innovative young MIT roboticist and gifted fiction writer, Karl Iagnemma; and a playful story about the decades-long quest by nuclear chemists to reach the shores of the "Island of Stability," the birthplace of a novel element at the far reaches of the Periodic Table. Tyson's passion for storytelling and communicating science injects dynamism into the provocative, fast-paced series, which delivers reports from the frontlines of scientific research and discovery while illuminating connections to viewers' everyday lives. Learn more... | <urn:uuid:bf754e31-6041-41ce-9139-28076b86c4b8> | 2.6875 | 221 | Content Listing | Science & Tech. | 37.978569 |
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics are as of January 1895.
Please note, Degree Days are not available for Agricultural Belts
Wyoming Temperature Rankings, September 1911
More information on Climatological Rankings
(out of 119 years)
| Period | Ranking | Year | Note |
| --- | --- | --- | --- |
| Jul - Sep 1911 | 3rd Coldest | 1912 | Coldest to Date |
| Jul - Sep 1911 | 116th Warmest | 2012 | Warmest since: 1910 |
| <urn:uuid:6fd3e763-d4b5-4d90-873c-aa50744c41ff> | 2.703125 | 140 | Structured Data | Science & Tech. | 51.857104 |
MySQL is an open source relational database management system that is a popular database option for web applications. MySQL is written using C and C++ and makes use of the SQL database language to access and manipulate data stored in the database.
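To sketch what that SQL access looks like from application code: the snippet below uses Python's standard DB-API pattern, which MySQL client libraries such as MySQL Connector/Python also follow. It is written against the built-in sqlite3 module purely so it runs without a database server; with a real MySQL server you would swap in the connector's connect() call (and its %s parameter placeholders instead of ?). The table and column names are invented for illustration.

```python
import sqlite3

# Connect; with MySQL you would instead call something like
# mysql.connector.connect(host=..., user=..., password=..., database=...)
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# SQL statements define, manipulate, and query the stored data
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("Ada",))
cur.execute("INSERT INTO users (name) VALUES (?)", ("Linus",))
conn.commit()

cur.execute("SELECT name FROM users ORDER BY name")
names = [row[0] for row in cur.fetchall()]
print(names)  # ['Ada', 'Linus']
conn.close()
```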
Although MySQL is primarily used with Linux operating systems, it also supports Windows, BSD, and UNIX-based operating systems. MySQL is available as the database component of the LAMP stack.
MySQL is simple and easy to use, yet powerful enough to handle large volumes of data. Icreon uses MySQL as the database layer in developing web applications. | <urn:uuid:8fab8ec6-0331-4e33-be96-0e84b34bb213> | 2.828125 | 125 | Knowledge Article | Software Dev. | 33.765942 |
Frogs and Toads in Alabama
Frogs and toads are tailless, aquatic, semiaquatic, or terrestrial amphibians characteristically having a smooth, moist skin, webbed feet, and long hind legs adapted for leaping. Adults lack tails and most have a well-developed ear and a voice used to attract mates, drive off intruders, and signal distress and presence. All are carnivorous as adults. With their moist skin, most frogs and toads are prone to desiccation (drying out), and therefore are confined to wet or moist habitats. However, some species have adapted to more arid habitats by burrowing into the soil or hiding beneath rocks or logs to avoid the heat of the day. Most species return to water to breed.
“True” Toads - Family Bufonidae
American Toad Bufo americanus. Fairly common in northeastern Alabama above Fall Line Hills. Breeds in temporary woodland pools January to May. Encountered most frequently late winter to early spring near deciduous forest. Lowest Conservation Concern.
Fowler’s Toad Bufo fowleri. Common statewide in a variety of habitats, including disturbed areas. Breeds March to August, often in more permanent aquatic sites than other toads. Alabama’s most commonly encountered and widely distributed toad; often seen on roads. Lowest Conservation Concern.
Oak Toad Bufo quercicus. Uncommon to fairly common south of Blackland Prairie. Found locally in Coosa River Valley of Ridge and Valley, where it has not been verified for many years. Breeds April to July in temporary pools. Inhabits areas of sandy soils, especially fire-maintained pine flatwoods, where it may be absent from some areas of seemingly suitable habitat. MODERATE CONSERVATION CONCERN.
Southern Toad Bufo terrestris. Common in southern and western portions of Alabama, occupying all of Coastal Plain and western portions of
Treefrogs and Allies - Family Hylidae
Northern Cricket Frog Acris crepitans crepitans. Common above Fall Line Hills and locally common in Coastal Plain. Occurs essentially statewide. Breeds March through August in a wide variety of aquatic habitats, especially margins of permanent water bodies with sparse vegetation. Low Conservation Concern.
Southern Cricket Frog Acris gryllus gryllus. Common in Coastal Plain, locally common above Fall Line Hills, absent from northeastern and extreme northern Alabama. Breeds March through August in, and near, temporary water bodies, preferring weedy shorelines, wet meadows, and similar habitats. Lowest Conservation Concern.
Pine Barrens Treefrog Hyla andersonii. Threatened. Known from fewer than 20 isolated locations in southern Escambia,
Bird-voiced Treefrog Hyla avivoca. Common in Coastal Plain to which it is apparently restricted. If valid, one unverified record from St. Clair County in Ridge and Valley ecoregion would represent a northern disjunct population. Breeds April through July in forested swamps, beaver ponds, and floodplains. Lowest Conservation Concern.
Cope’s Gray Treefrog Hyla chrysoscelis. Common statewide. Breeds April through August in temporary to semi-permanent pools. Found in a variety of habitats, most frequently in association with deciduous forest. Lowest Conservation Concern.
Green Treefrog Hyla cinerea. Common nearly statewide, but rare or absent from portions of Interior Plateau and northern portions of Southwestern Appalachians, Ridge and Valley, and
Pine Woods Treefrog Hyla femoralis. Locally common in Coastal Plain, where most frequently encountered in Dougherty Plain and Southern Pine Plains and Hills. Disjunct populations in Ridge and Valley have not been verified in many years. Breeds April to August in temporary pools and ponds. Typically inhabits pine-dominated forests in areas of sandy soils. Lowest Conservation Concern.
Barking Treefrog Hyla gratiosa. Fairly common in Coastal Plain, scarcer in other regions, where suitable habitats often are limited and distribution more localized. Occurs essentially statewide. Breeds March through July, usually in temporary ponds or fishless semi-permanent ponds. Low Conservation Concern.
Squirrel Treefrog Hyla squirella. Common in Coastal Plain, less common and local in Ridge and Valley and extreme southern Piedmont. Breeds April to August in temporary pools and ponds, exploits a variety of habitats, and often encountered around buildings. Lowest Conservation Concern.
Mountain Chorus Frog Pseudacris brachyphona. Fairly common from Fall Line Hills northward, absent from most of Coastal Plain. Breeds December to April in shallow temporary pools in wooded areas, most often in hilly terrain. Lowest Conservation Concern.
Northern Spring Peeper Pseudacris crucifer crucifer. Common statewide. Breeds January to April in ponds, pools, and swamps in, or near, wooded areas. Rarely encountered during warmer months. Lowest Conservation Concern.
Upland Chorus Frog Pseudacris feriarum feriarum. Common nearly statewide; absent from extreme southern
Southern Chorus Frog Pseudacris nigrita nigrita. Locally common in Dougherty Plain and Southern Pine Plains and Hills. Also occurs in eastern portion of Southern Hilly Gulf Coastal Plain. Breeds February to May, usually in grassy temporary wetlands in, or near, areas of sandy soils. Lowest Conservation Concern.
Little Grass Frog Pseudacris ocularis. Rare and peripheral in Dougherty Plain. This tiniest of North American frogs, found from
Ornate Chorus Frog Pseudacris ornata. Uncommon to locally common in Coastal Plain east of Cahaba and Alabama Rivers. Occurs west of
Leptodactylid Frogs - Family Leptodactylidae
Greenhouse Frog Eleutherodactylus planirostris. Exotic. Apparently confined to coastal areas of Baldwin and
Narrow-mouthed Toads and Allies - Family Microhylidae
Eastern Narrow-mouthed Toad Gastrophryne carolinensis. Common statewide. A secretive burrowing frog that breeds April to September in vegetated margins of lakes, ponds, and ditches. Lowest Conservation Concern.
Spadefoot Toads and Allies - Family Pelobatidae
Eastern Spadefoot Scaphiopus holbrookii. Locally common statewide. A secretive burrowing frog that may breed at any season following heavy rains. Capable of reproducing in pools that hold water only two to three weeks. Susceptible to destruction of breeding habitat, which may not be readily recognized as wetland. Low Conservation Concern.
“True” Frogs - Family Ranidae
Gopher Frog Rana capito. Endangered. Principally a Coastal Plain longleaf pine forest inhabitant, where 10 historic records exist. Subspecific allocation of Alabama populations is problematic; formerly considered R. c. sevosa, dusky gopher frog (see Mississippi gopher frog). Highly terrestrial, but breeds late January to March in open temporary ponds. Alabama’s five extant breeding sites are in Escambia and Covington Counties. The only Ridge and Valley breeding pond (Shelby County) was drained for a subdivision in 1997, and a Barbour County breeding pond was destroyed by road construction. HIGHEST CONSERVATION CONCERN.
American Bullfrog Rana catesbeiana. Common statewide. A large, familiar, highly aquatic frog. Breeds March to August in lakes, ponds, and many streams. Lowest Conservation Concern.
Bronze Frog/Green Frog Rana clamitans ssp. Common statewide, this highly aquatic and familiar frog is bronze (R. c. clamitans) in southern and green (R. c. melanota) in northern Alabama. Breeds April to August. Prefers swamps, small streams, and other aquatic habitats. Lowest Conservation Concern.
Pig Frog Rana grylio. Locally common in Lower Coastal Plain and southernmost tier of counties of Dougherty Plain and Southern Pine Plains and Hills. A large, highly aquatic frog of permanent, open water bodies with emergent vegetation. Breeds April to August. Lowest Conservation Concern.
Mississippi Gopher Frog Rana sevosa. Endangered/Possibly extirpated. Recently described (2001) as western component of what had been considered R. capito sevosa (dusky gopher frog) from Mobile Bay to Louisiana. Currently known from a single population in Mississippi, but also recorded from Gulf Coast Flatwoods of Alabama (mouth of Dog River). Similar in appearance and habitat requirements to gopher frog. Secretive and difficult to survey. Listed as endangered by the U.S. Fish and Wildlife Service. HIGHEST CONSERVATION CONCERN.
River Frog Rana heckscheri. Peripheral and rare in southern portion of Southern Pine Plains and Hills, and potentially in Dougherty Plain of southernmost tier of counties. Common in
Pickerel Frog Rana palustris. Fairly common to uncommon and locally distributed in all regions above Fall Line, with disjunct populations in Lime Hills and Southern Pine Plains and Hills of Coastal Plain (Monroe and Conecuh Counties). Frequently encountered in, and near, cave entrances, but exploits other cool-water habitats. Breeds in winter and early spring. A Conecuh County population, associated with a limestone cave, has not been confirmed in over two decades. Low Conservation Concern.
Southern Leopard Frog Rana sphenocephala. Common statewide. Fairly aquatic but ranges away from water when foraging. Often seen on roads. Breeds mostly December through March in woodland pools, swamps, ponds, and other wetlands. Lowest Conservation Concern.
Wood Frog Rana sylvatica. Rare and local in distribution. Documented from twelve locations in eastern Ridge and Valley and upper Piedmont from Mount Cheaha, in Talladega County, south to Horseshoe Bend in Tallapoosa County. A highly terrestrial frog of deciduous forests. Breeds January to February in woodland pools. Thought to be declining, but status not investigated in over two decades. MODERATE CONSERVATION CONCERN.
Mirarchi. Ralph E., ed. 2004. Alabama Wildlife, Volume One. A Checklist of Vertebrates and Selected Invertebrates: Aquatic Mollusks, Fishes, Amphibians, Reptiles, Birds and Mammals. The University of Alabama Press, Tuscaloosa, AL. 209 pp. | <urn:uuid:e98f178c-ea9f-4a5e-8bba-359451b65392> | 3.6875 | 2,244 | Structured Data | Science & Tech. | 34.176308 |
Dec. 31, 2008 Plants come in all shapes and sizes, from grand Redwood trees to the common Snowdrop. Although we cannot see them, under the ground plants rely on a complex network of roots. What determines the pattern of root growth has been a mystery, but a new paper shows that the shape of the existing root can determine how further roots branch from it – because shape determines hormone concentration. The work also suggests that the root-patterning system shares a deep evolutionary relationship to the patterning system of plant shoots, something that had not been realized previously.
The paper, by Laskowski, Grieneisen, Hofhuis, et al, explores the architecture of the root system of the model organism Arabidopsis thaliana, a plant with the unusual common name "mouse-ear cress." The authors show that the curve of the root is key in provoking new growth. They used computational modeling of the transport of a well-known plant hormone, auxin, and by following the diffusion of this hormone they reveal that its accumulation leads to the specification of new growth regions in the root structure.
In particular, the initial trigger of this accumulation is a difference in cell size between the inner and outer sides of a root curve, which is then amplified by feedback responses from the hormone transport system.
Surprisingly, this new model of root architecture is reminiscent of the way leaves develop around growing tips in plants, a key feature of shoot architecture. This is exciting because it suggests that a deep connection exists between root and shoot architectures, which have hitherto been viewed as entirely separate. The work further shows that a new kind of biology, involving a complete mixing of experiments and computer modeling, is a very powerful tool for probing organismal architecture.
- Laskowski et al. Root System Architecture from Coupling Cell Shape to Auxin Transport. PLoS Biology, 2008; 6 (12): e307 DOI: 10.1371/journal.pbio.0060307
Note: If no author is given, the source is cited instead. | <urn:uuid:e87899f9-8c40-4b9d-9055-0756ab8d864d> | 3.40625 | 450 | Truncated | Science & Tech. | 44.279349 |
About 20 years ago one of the authors of this article took his father's binoculars and tiptoed out of the house at night. The budding astronomer decided that he would look for playmates on other planets going around stars in the sky. To his chagrin, the binoculars made no difference whatsoever. The stars appeared as twinkling points of light to his naked eye, and they were pointlike through binoculars as well. Although the largest stars could engulf our entire solar system within their luminous diameters, every star (aside from the sun) is simply too distant to be resolved with binoculars.
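A rough diffraction-limit estimate makes the point quantitative. The numbers below (550 nm light, a 50 mm binocular objective, Hubble's 2.4 m mirror, a 300 m interferometer baseline, and a roughly 0.05-arcsecond stellar disk like Betelgeuse's) are illustrative assumptions, not values from this article.

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0  # ~206265

wavelength = 550e-9   # visible light, metres (assumed)

def diffraction_limit_arcsec(aperture_m):
    """Rayleigh criterion for a filled aperture: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength / aperture_m * ARCSEC_PER_RAD

binoculars = diffraction_limit_arcsec(0.05)   # 50 mm binocular objective
hubble = diffraction_limit_arcsec(2.4)        # Hubble's 2.4 m mirror
interferometer = wavelength / 300.0 * ARCSEC_PER_RAD  # ~lambda/B, 300 m baseline

betelgeuse = 0.05  # apparent diameter of a large stellar disk, arcsec (approx.)

print(f"50 mm binoculars resolve  ~{binoculars:.2f} arcsec")
print(f"Hubble resolves           ~{hubble:.3f} arcsec")
print(f"300 m baseline resolves   ~{interferometer * 1000:.2f} mas")
```

Even one of the largest apparent stellar disks is far below what binoculars can resolve, while a few-hundred-metre baseline resolves detail over 100 times finer than Hubble, consistent with the claim made in this article.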
Two decades later the same kid can see not just a point of light but a circular disk--at least for some of the brightest stars. This stellar resolution takes advantage of a technique that was suggested more than 130 years ago: interferometry. Instead of looking through binoculars or even a conventional telescope, he must use a computer display connected to a device called an optical interferometer. For more than half a century, interferometry at radio wavelengths has succeeded brilliantly, mapping the structures of distant galaxies and quasars by their radio emissions. Only in the past 15 years, however, has technology allowed interferometry at infrared and visual wavelengths to take off--and the results have been well worth the wait. The Hubble Space Telescope reigns supreme for taking crisp photographs of faint objects, but ground-based optical interferometers can see, for the brightest stars, details 100 times finer than Hubble can. | <urn:uuid:ecaa181c-2e45-4662-9c7a-9aef7b834733> | 3.578125 | 321 | Truncated | Science & Tech. | 36.154588 |
Comments on Vortex Engine - Tame Tornadoes May Generate Power
Not only could this technique produce electricity, it could also serve as a form of planetary air conditioning to counter global warming. (Read the complete story)
|"There may be an application already waiting. "Beyond Invention" a TV Program showcased an Australian engineer who has plans for a very large solar heat / under glass with a very very tall heat rising tower to do the same thing. This tornado technique would eliminate his tower construction problem and readily leverage the captured solar heat, not waste heat from an existing facility. He may have funding already for part of his project, this might make the whole thing he's doing "within reason" financially. - fyi"
(Allen 7/26/2007 9:06:43 PM)
|"Another altogether different solution is to create two buildings. The 1st will be twice as high and long as the 2nd. The 2nd is placed parallel to, and touching, the 1st, longitudinally. Now when a normal wind passes over the entire structure, vortexes are naturally formed. Having a free-moving, spiraling staircase-like structure, whose steps are really blades (wing or propeller shaped), the whole thing will spin like mad. Thus the building can act as a store house or generating plant, distributing free electricity into the GRID."
( 7/26/2007 9:28:29 PM)
Bartendro Robot Bartender
'He sipped the cognac that the robot bartender handed him...' | <urn:uuid:3b6dcbd7-c6c8-4f31-9b0a-f39ed456a97f> | 2.8125 | 859 | Comment Section | Science & Tech. | 52.995056 |
What Time is it?
Sometimes measuring time in millisecond resolution just isn’t accurate enough. Together with industry and community leaders, the W3C Web Performance working group has worked to solve this problem by standardizing the High Resolution Time specification.
To solve this issue, the High Resolution Time specification defines a new time base with at least microsecond resolution (one thousandth of a millisecond). To reduce the number of bits used to represent this number and to increase readability, instead of measuring time from 01 January, 1970 UTC, this new time base measures time from the beginning of navigation of the document, performance.timing.navigationStart. This time base is also different in that it is monotonically increasing and not subject to clock skew or adjustment. The difference between subsequent calls to performance.now() will never be negative.
The specification defines performance.now() as the analogous method to Date.now() for determining the current time in high resolution. DOMHighResTimeStamp is the analogous type to DOMTimeStamp that defines the high resolution time value. | <urn:uuid:e98c6eae-3bc1-4d0e-b6a2-31666f6eaaaf> | 2.9375 | 223 | Documentation | Software Dev. | 24.85957 |
cons creates an ordered pair. car and
cdr return the first and second components,
respectively, of an ordered pair. The function
consp recognizes ordered pairs.
Ordered pairs are used to represent lists and trees. See any Common Lisp documentation for a discussion of how list constants are written and for the many list processing functions available. Also, see programming where we list all the ACL2 primitive functions.
Here are some examples of list constants to suggest their syntax.
'(a . b) ; a pair whose car is 'a and cdr is 'b '(a . nil) ; a pair whose car is 'a and cdr is nil '(a) ; another way to write the same thing '(a b) ; a pair whose car is 'a and cdr is '(b) '(a b c) ; a pair whose car is 'a and cdr is '(b c) ; i.e., a list of three symbols, a, b, and c. '((a . 1) (b . 2)) ; a list of two pairs
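A few sample evaluations, as they would appear at the Lisp prompt, tie the accessors above (including consp, the recognizer for ordered pairs) to these constants; results are shown in comments:

```lisp
(car (cons 'a 'b))   ; => A
(cdr (cons 'a 'b))   ; => B
(car '(a b c))       ; => A
(cdr '(a b c))       ; => (B C)
(consp '(a . b))     ; => T
(consp nil)          ; => NIL
```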
It is useful to distinguish ``proper'' conses from ``improper'' ones,
the former being those cons trees whose right-most branch terminates with
nil. A ``true list'' (see true-listp) is either nil
or a proper cons.
'(a b c . 7) is an improper cons and hence not a true list. | <urn:uuid:771a8586-60b5-41e6-82e2-dd0b3edad08d> | 3.3125 | 293 | Documentation | Software Dev. | 81.952824 |
ADO.NET is an evolution of the ADO data access model that directly addresses
user requirements for developing scalable applications. It was designed specifically
for the web with scalability, statelessness, and XML in mind.
ADO.NET uses some ADO objects, such as the Connection and Command objects,
and also introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and DataAdapter.
The important distinction between this evolved stage of ADO.NET and previous
data architectures is that there exists an object -- the DataSet -- that is
separate and distinct from any data stores. Because of that, the DataSet functions as a standalone entity.
You can think of the DataSet as an always disconnected recordset that knows nothing about the source or destination of the data it contains. Inside a DataSet, much like in a database, there are
tables, columns, relationships, constraints, views, and so forth.
A DataAdapter is the object that connects to the database to fill the DataSet. Then, it
connects back to the database to update the data there, based on operations performed while the DataSet held the data. In the past, data processing has been primarily connection-based. Now, in an effort to
make multi-tiered apps more efficient, data processing is turning to a
message-based approach that revolves around chunks of information. At the
center of this approach is the DataAdapter, which provides a bridge to
retrieve and save data between a DataSet and its source data store. It
accomplishes this by means of requests to the appropriate SQL commands made against the
The XML-based DataSet object provides a
consistent programming model that works with all models of data storage: flat, relational, and hierarchical.
It does this by having no 'knowledge' of the source of its data, and by
representing the data that it holds as collections and data types. No
matter what the source of the data within the DataSet is, it is manipulated
through the same set of standard APIs exposed through the DataSet and its
While the DataSet has no knowledge of the source of its data, the managed
provider has detailed and specific information. The role of the managed
provider is to connect, fill, and persist the DataSet to and from data stores.
The OLE DB and SQL Server .NET Data Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .Net Framework provide four basic objects: the Command, Connection, DataReader and DataAdapter.
In the remaining sections of this document, we'll walk through each
part of the DataSet and the OLE DB/SQL Server .NET Data Providers explaining what they
are, and how to program against them.
The following sections will introduce you to some objects that have evolved,
and some that are new. These objects are:
- Connections. For connection to and managing transactions against a database.
- Commands. For issuing SQL commands against a database.
- DataReaders. For reading a forward-only stream of data records from a SQL Server data source.
- DataSets. For storing, remoting and programming against flat data, XML data and relational data.
- DataAdapters. For pushing data into a DataSet, and reconciling data against a database.
Note When dealing with connections to a database, there are two different
options: SQL Server .NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider (System.Data.OleDb).
In these samples we will use the SQL Server .NET Data Provider.
These are written to talk directly to Microsoft SQL Server. The OLE DB .NET Data Provider
is used to talk to any OLE DB provider (as it uses OLE DB underneath).
Connections are used to 'talk to' databases, and are represented by provider-specific classes such as SQLConnection.
Commands travel over connections and resultsets are returned in the form of streams which can be read by a
DataReader object, or pushed into a DataSet object.
The example below shows how to create a connection object. Connections can be opened
explicitly by calling the Open method on the connection, or will be opened
implicitly when using a DataAdapter.
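A sketch of what that connection example would look like with the SQL Server .NET Data Provider; the server name and security settings in the connection string are assumptions, and the Northwind sample database is the one used in the examples that follow:

```csharp
using System.Data.SqlClient;

// Connection string values are illustrative; adjust for your server.
SqlConnection nwindConn = new SqlConnection(
    "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=Northwind");

nwindConn.Open();    // open explicitly...
// ... execute commands over the connection here ...
nwindConn.Close();   // ...and close when finished
```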
Commands contain the information that is submitted to a database, and are represented by provider-specific classes such as
SQLCommand. A command can be a stored procedure call, an UPDATE statement, or a statement that
returns results. You can also use input and output parameters, and return
values as part of your command syntax. The example below shows how to issue an INSERT statement
against the Northwind database.
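A sketch of such an INSERT; the Shippers table and its CompanyName/Phone columns are part of the standard Northwind schema, while the inserted values and connection string are made up for illustration:

```csharp
using System.Data;
using System.Data.SqlClient;

SqlConnection nwindConn = new SqlConnection(
    "Data Source=localhost;Integrated Security=SSPI;Initial Catalog=Northwind");

// Parameterized INSERT issued through a SqlCommand
SqlCommand insertCmd = new SqlCommand(
    "INSERT INTO Shippers (CompanyName, Phone) VALUES (@Name, @Phone)", nwindConn);
insertCmd.Parameters.Add("@Name", SqlDbType.NVarChar, 40).Value = "Speedy Express";
insertCmd.Parameters.Add("@Phone", SqlDbType.NVarChar, 24).Value = "(503) 555-9831";

nwindConn.Open();
int rowsAffected = insertCmd.ExecuteNonQuery();  // number of rows inserted
nwindConn.Close();
```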
The DataReader object is somewhat synonymous with a read-only/forward-only cursor
over data. The DataReader API supports flat as well as hierarchical data. A DataReader object is returned
after executing a command against a database. The format of the returned DataReader object is different from a recordset. For example, you might use the DataReader to show the
results of a search list in a web page.
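A short sketch of reading rows with a SqlDataReader (the query and column names follow the Northwind schema; an open connection is assumed):

```vb
' Assumes an open SqlConnection named myConnection.
Dim myCommand As SqlCommand = New SqlCommand( _
    "SELECT CustomerID, CompanyName FROM Customers", myConnection)
Dim myReader As SqlDataReader = myCommand.ExecuteReader()
While myReader.Read()   ' forward-only: each Read() advances to the next record
    Console.WriteLine("{0}: {1}", myReader("CustomerID"), myReader("CompanyName"))
End While
myReader.Close()        ' free the connection for other commands
```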
DataSets and DataAdapters
The DataSet object is similar to the ADO Recordset object, but more powerful, and with one other important distinction: the DataSet is always disconnected. The DataSet object represents a cache of data, with
database-like structures such as tables, columns, relationships, and
constraints. However, though a DataSet can and does behave much
like a database, it is important to remember that DataSet objects do not interact directly
with databases, or other source data. This allows the developer to work with a programming model that is always consistent,
regardless of where the source data resides. Data coming from a database, an XML file, from code, or user input can all
be placed into DataSet objects. Then, as changes are made to the DataSet they can be tracked and verified before updating the source data. The GetChanges method of the DataSet object actually creates a second DataSet that contains only the changes to the data. This DataSet is then used by a DataAdapter (or other
objects) to update the original data source.
The DataSet has many XML characteristics, including the ability to produce
and consume XML data and XML schemas. XML schemas can be used to describe
schemas interchanged via XML Web services. In fact, a DataSet with a schema can
actually be compiled for type safety and statement completion.
The DataAdapter object works as a bridge between the DataSet and the source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with a Microsoft SQL Server database. For other OLE DB-supported databases, you would use the OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection objects.
The DataAdapter object uses commands to update the data source after changes have been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT command; using the Update method calls the INSERT, UPDATE or DELETE command for each changed row. You can explicitly set these commands in order to control the statements used at runtime to resolve changes, including the use of stored procedures. For ad-hoc scenarios, a CommandBuilder object can generate these at run-time based upon a select statement. However, this run-time
generation requires an extra round-trip to the server in order to gather required metadata,
so explicitly providing the INSERT, UPDATE, and DELETE commands at design time will result in
better run-time performance.
Dim myConnection As SqlConnection = New SqlConnection("server=(local)\SQLExpress;Integrated Security=SSPI;database=northwind")
Dim mySqlDataAdapter As SqlDataAdapter = New SqlDataAdapter("select * from customers", myConnection)
' The command objects must be created before their properties can be set.
mySqlDataAdapter.InsertCommand = New SqlCommand("sp_InsertCustomer", myConnection)
mySqlDataAdapter.InsertCommand.CommandType = CommandType.StoredProcedure
mySqlDataAdapter.DeleteCommand = New SqlCommand("sp_DeleteCustomer", myConnection)
mySqlDataAdapter.DeleteCommand.CommandType = CommandType.StoredProcedure
mySqlDataAdapter.UpdateCommand = New SqlCommand("sp_UpdateCustomers", myConnection)
mySqlDataAdapter.UpdateCommand.CommandType = CommandType.StoredProcedure
Changed records are mapped to the appropriate command: new rows to the InsertCommand, deleted rows to the DeleteCommand, and modified rows to the UpdateCommand.
Figure: DataAdapters and DataSets
The sample below illustrates loading a DataAdapter via a SELECT statement.
Then it updates, deletes and adds some records within the DataSet. Finally, it
returns those updates to the source database via the DataAdapter. The constructed
DeleteCommand, InsertCommand and UpdateCommand are shown on the page. It also
illustrates using multiple DataAdapter objects to load multiple tables (Customers and Orders) into the same DataSet.
- ADO.NET is the next evolution of ADO for the .NET Framework.
- ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new
objects, the DataSet and DataAdapter, are provided for these scenarios.
- ADO.NET can be used to get data from a stream, or to store data in a cache for updates.
- There is a lot more information about ADO.NET in the documentation.
- Remember, you can execute a command directly against the database
in order to do inserts, updates, and deletes. You don't need to first put data into a DataSet in order
to insert, update, or delete it.
- Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships.
Microsoft .NET Framework SDK QuickStart Tutorials Version 2.0
Copyright © 2005 Microsoft Corporation. All rights reserved.
Could the dream come true? A new study by Stanford researchers suggests that such forecasts may one day be possible—not on Earth, but on the Sun.
"We have learned to detect sunspots before they are visible to the human eye," says Stathis Ilonidis, a PhD student at Stanford University. "This could lead to significant advances in space weather forecasting."
Sunspots are the "butterfly's wings" of solar storms. Visible to the human eye as dark blemishes on the solar disk, sunspots are the starting points of explosive flares and coronal mass ejections (CMEs) that sometimes hit our planet 93 million miles away. Consequences range from Northern Lights to radio blackouts to power outages.
Astronomers have been studying sunspots for more than 400 years, and they have pieced together their basic characteristics: Sunspots are planet-sized islands of magnetism that float in solar plasma. Although the details are still debated, researchers generally agree that sunspots are born deep inside the Sun via the action of the Sun’s inner magnetic dynamo. From there they bob to the top, carried upward by magnetic buoyancy; a sunspot emerging at the stellar surface is a bit like a submarine emerging from the ocean depths.
Their analysis technique is called "time-distance helioseismology," and it is similar to an approach widely used in earthquake studies. Just as seismic waves traveling through the body of Earth reveal what is inside the planet, acoustic waves traveling through the body of the Sun can reveal what is inside the star. Fortunately for helioseismologists, the Sun has acoustic waves in abundance. The body of the Sun is literally roaring with turbulent boiling motions. This sets the stage for early detection of sunspots.
"We can't actually hear these sounds across the gulf of space," explains Ilonidis, "but we can see the vibrations they make on the Sun's surface." Instruments onboard two spacecraft, the venerable Solar and Heliospheric Observatory (SOHO) and the newer Solar Dynamics Observatory (SDO) constantly monitor the Sun for acoustic activity.
Submerged sunspots have a detectable effect on the sun's inner acoustics—namely, sound waves travel faster through a sunspot than through the surrounding plasma. A big sunspot can leapfrog an acoustic wave by 12 to 16 seconds. "By measuring these time differences, we can find the hidden sunspot."
"This is the first time anyone has been able to point to a blank patch of Sun and say 'a sunspot is about to appear right there,'" says Ilonidis's thesis advisor Prof. Phil Scherrer of the Stanford Physics Department. "It's a big advance."
"There are limits to the technique," cautions Ilonidis. "We can say that a big sunspot is coming, but we cannot yet predict if a particular sunspot will produce an Earth-directed flare."
So far they have detected five emerging sunspots—four with SOHO and one with SDO. Of those five, two went on to produce X-class flares, the most powerful kind of solar explosion. This encourages the team to believe their technique can make a positive contribution to space weather forecasting. Because helioseismology is computationally intensive, regular monitoring of the whole Sun is not yet possible—"we don't have enough CPU cycles," says Ilonidis—but he believes it is just a matter of time before refinements in their algorithm allow routine detection of hidden sunspots.
Full name: degree Fahrenheit
Plural form: degrees Fahrenheit
Alternate spelling: F
Category type: temperature
Scale factor: 0.555555555556
The SI base unit for temperature is the kelvin.
1 kelvin is equal to 1.8 degree Fahrenheit.
Fahrenheit is a temperature scale named after the German physicist Gabriel Fahrenheit (1686–1736), who proposed it in 1724.
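The scale factor listed above is 5/9 ≈ 0.5556 kelvin per degree Fahrenheit, which gives the familiar conversion formulas. A quick sketch (the function names are mine):

```python
def fahrenheit_to_kelvin(f):
    """Convert a temperature in degrees Fahrenheit to kelvin."""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

def kelvin_to_fahrenheit(k):
    """Convert a temperature in kelvin to degrees Fahrenheit."""
    return (k - 273.15) * 9.0 / 5.0 + 32.0

# The freezing point of water: 32 °F = 273.15 K
print(fahrenheit_to_kelvin(32.0))   # 273.15
```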
Triangle's default behavior is to find the Delaunay triangulation of a set of vertices. Store the vertices in a .node file, such as spiral.node, illustrated below. The command
triangle spiral
produces the Delaunay triangulation, also illustrated below.
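For reference, a .node file is plain text: the first line gives the vertex count, the dimension (always 2), the number of attributes, and the number of boundary markers; each remaining line gives a vertex number followed by its coordinates. A minimal sketch with made-up coordinates for a unit square:

```
# four vertices, 2 dimensions, 0 attributes, 0 boundary markers
4  2  0  0
1  0.0  0.0
2  1.0  0.0
3  1.0  1.0
4  0.0  1.0
```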
To triangulate a PSLG instead, describe the geometry of the region you wish to mesh in a .poly file, such as face.poly, illustrated below. Use the -p switch to specify that the input is a PSLG (.poly file) rather than a vertex set (.node file). The command
triangle -p face
will produce the constrained Delaunay triangulation, with holes and concavities removed. (The mouth and eye holes are specified in the input file; the concavities are removed automatically.)
The automatic removal of concavities from the triangulation will be detrimental if you have not taken care to surround the area to be triangulated with segments. In the next example, the input file box.poly defines an open region, so the -c switch must be used to prevent the automatic removal of concavities (which would eliminate the whole triangulation).
triangle -pc box
produces the constrained Delaunay triangulation illustrated below. The -c switch causes Triangle to triangulate the convex hull of the PSLG.
A conforming constrained Delaunay triangulation of a PSLG can be generated by use of the -q, -a, or -u switch, in addition to the -p switch. If you don't wish to enforce any angle or area constraints, use `-q0', which requests quality meshing with a minimum angle of zero. The result is demonstrated below.
triangle -pq0 face
A conforming Delaunay triangulation of a PSLG can be generated as above, with the addition of the -D switch to ensure that all the triangles of the final mesh are Delaunay (and not just constrained Delaunay).
triangle -pq0D face
- Posted by dan on July 26, 2010
I've been trying to evangelise BDD (Behaviour Driven Development) at work over the past few months and I've had considerably more success than I thought.
So what is BDD?
BDD or Behaviour Driven Development is a development technique that tries to bring together the numerous factions of development. Namely BA (Business Analyst), QA (Quality Assurance) and Developer.
There seem to be two main flavours: the specification-based tools and the scenario-based tools. I'm going to describe the scenario-based tools. DDD (Domain Driven Design) puts lots of emphasis on the need for a ubiquitous language for your business. In Scrum there's a near-universal understanding of the user story. That tends to follow the "As a" / "I want" / "So that" template, where the "So that" describes the business value being derived and the "As a" describes the user's role. This has proved to be very successful for us. BDD uses "Given" / "When" / "Then" as its vocabulary, where "Given" sets the context, "When" defines the action that is being performed, and "Then" asserts the behaviour that is expected.
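These keywords are usually written down as a scenario. A sketch of what one might look like (the feature and numbers here are hypothetical):

```gherkin
Feature: Account withdrawal
  As an account holder
  I want to withdraw cash
  So that I can pay for things offline

  Scenario: Withdrawal within available balance
    Given an account with a balance of 100
    When the holder withdraws 30
    Then the remaining balance is 70
```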
Just using these three simple words has led to real progress for us. Even without tooling and technologies, simply writing the BDD scenarios down forces the team to think about what we're building in considerable depth. It also nearly always brings out some scenario which we haven't initially considered, and we then write scenarios to cover that. What we end up with is a very good understanding of the problem and of how our proposed solution is going to work.
Archived:Overview of standard QWidget library on Symbian
Qt provides a rich set of standard widgets that can be used to create graphical user interfaces for applications. Qt’s widgets are flexible and can easily be subclassed to suit specialized requirements.
Widgets are visual elements that are combined to create user interfaces. Buttons, menus, scroll bars, message boxes, and application windows are all examples of widgets. Qt’s widgets are not arbitrarily divided between “controls” and “containers”; all widgets can be used both as controls and as containers. Custom widgets can easily be created by subclassing existing Qt widgets, or created from scratch if necessary.
Standard widgets are provided by the QWidget class and its subclasses, and custom widgets can be created by subclassing them and reimplementing virtual functions.
A widget may contain any number of child widgets. Child widgets are shown within the parent widget’s area. A widget with no parent is a top-level widget (a “window”), and usually has its own entry in the desktop environment’s task bar. Qt imposes no arbitrary limitations on widgets. Any widget can be a top-level widget; any widget can be a child of any other widget. The position of child widgets within the parent’s area can be set automatically using layout managers, or manually if preferred. When a parent widget is disabled, hidden, or deleted, the same action is recursively applied to all its child widgets.
Labels, message boxes, tooltips, and other textual widgets are not confined to using a single color, font, and language. Qt’s text-rendering widgets can display multi-language rich text using a subset of HTML and most widgets can be styled using a description language.
Qt widgets used in various user interface components. The widgets were arranged using Qt Designer and rendered using the Plastique style, demonstrating Qt 4’s standard look.
The widgets shown include standard input widgets like QLineEdit for one-line text entry, QCheckBox for enabling and disabling simple independent settings, QSpinBox and QSlider for specifying quantities, QRadioButton for enabling and disabling exclusive settings, and QComboBox, which opens to present a menu of choices when clicked. Clickable buttons are provided by QPushButton. Container widgets such as QTabWidget and QGroupBox are also shown. These widgets are managed specially in Qt Designer to allow designers to rapidly create new user interfaces while helping to keep them maintainable. More complex widgets such as QScrollArea are often used more by developers than by user interface designers because they are often used to display specialized or dynamic content.
Why widgets?
Call them gadgets, mini-applications, or badges, but widgets are becoming more and more popular with both consumers and advertisers. They give users a way to interact with your brand and your content far beyond your web site. Keep reading to find out more about these useful bits of code, and why they may spawn a small revolution in the way we use the web.
The graphical examples below show how widgets can be dragged into place to build user interfaces from QWidgets.
1. QPushButton Screen Shot
2. QSpinBox and QSlider
QSpinBox *spinBox = new QSpinBox;
QSlider *slider = new QSlider(Qt::Horizontal);
5. QToolBar and QToolButton
6. QDial and QSpinBox
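The QSpinBox and QSlider pairing shown above is usually wired together with signals and slots so that each control mirrors the other. A sketch of that wiring (requires Qt; the surrounding QApplication boilerplate is omitted):

```cpp
QSpinBox *spinBox = new QSpinBox;
QSlider *slider = new QSlider(Qt::Horizontal);
spinBox->setRange(0, 130);
slider->setRange(0, 130);

// When one control changes, update the other.
QObject::connect(spinBox, SIGNAL(valueChanged(int)),
                 slider, SLOT(setValue(int)));
QObject::connect(slider, SIGNAL(valueChanged(int)),
                 spinBox, SLOT(setValue(int)));

spinBox->setValue(35);   // the slider follows automatically
```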
Snakes are of major importance as pest controllers because of their extensive predation on destructive mammals such as rats and mice. Some, like the sea snakes and pythons, are highly regarded as food in Asia but, although most are probably edible, snakes are not widely used for meat. The skin is often used for belts, bags, and shoes. Venom is removed from snakes for use in treating certain diseases and to make antivenin for snakebites.
See also snake worship.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved. | <urn:uuid:1bc954db-01d5-4c01-87a8-f68304f5d343> | 2.765625 | 129 | Knowledge Article | Science & Tech. | 39.282764 |
Marine Worm Jaws
Marine worm jaws are easily preserved and are known in nearly every system. Most are composed of chitin (fingernail material). They are black and shiny, and have many teeth. Sea worms live today, and the fossil record of worm trails goes back to Precambrian time. The oldest worm jaws are found in Ordovician rocks, but they are most common in the Silurian rocks of northeastern Illinois. Most are so small that they can only be identified with a magnifying glass. The numbers in the image below, 8x, 10x, etc., indicate the magnification needed to see the jaws at the size shown.
The printed version of Guide for Beginning Fossil Hunters can be purchased from the Shop ISGS Web site.
Updated 09/26/2011 SLD | <urn:uuid:5f1ed730-2b2d-42bc-8885-f216cdc258e5> | 3.671875 | 167 | Knowledge Article | Science & Tech. | 59.950017 |
int setpgid(pid_t pid, pid_t pgid);
pid_t getpgid(pid_t pid);
pid_t getpgrp(void); /* POSIX.1 version */
pid_t getpgrp(pid_t pid); /* BSD version */
int setpgrp(void); /* System V version */
int setpgrp(pid_t pid, pid_t pgid); /* BSD version */
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
_SVID_SOURCE || _XOPEN_SOURCE >= 500 || _XOPEN_SOURCE && _XOPEN_SOURCE_EXTENDED
setpgrp() (BSD), getpgrp() (BSD):
_BSD_SOURCE && ! (_POSIX_SOURCE || _POSIX_C_SOURCE || _XOPEN_SOURCE || _XOPEN_SOURCE_EXTENDED || _GNU_SOURCE || _SVID_SOURCE)
setpgid() sets the PGID of the process specified by pid to pgid. If pid is zero, then the process ID of the calling process is used. If pgid is zero, then the PGID of the process specified by pid is made the same as its process ID. If setpgid() is used to move a process from one process group to another (as is done by some shells when creating pipelines), both process groups must be part of the same session (see setsid(2) and credentials(7)). In this case, the pgid specifies an existing process group to be joined and the session ID of that group must match the session ID of the joining process.
The POSIX.1 version of getpgrp(), which takes no arguments, returns the PGID of the calling process.
getpgid() returns the PGID of the process specified by pid. If pid is zero, the process ID of the calling process is used. (Retrieving the PGID of a process other than the caller is rarely necessary, and the POSIX.1 getpgrp() is preferred for that task.)
The System V-style setpgrp(), which takes no arguments, is equivalent to setpgid(0, 0).
The BSD-specific setpgrp() call, which takes arguments pid and pgid, is equivalent to setpgid(pid, pgid).
The BSD-specific getpgrp() call, which takes a single pid argument, is equivalent to getpgid(pid).
On success, setpgid() and setpgrp() return zero. On error, -1 is returned, and errno is set appropriately.
The POSIX.1 getpgrp() always returns the PGID of the caller.
getpgid(), and the BSD-specific getpgrp() return a process group on success. On error, -1 is returned, and errno is set appropriately.
POSIX.1-2001 also specifies getpgid() and the version of setpgrp() that takes no arguments. (POSIX.1-2008 marks this setpgrp() specification as obsolete.)
The version of getpgrp() with one argument and the version of setpgrp() that takes two arguments derive from 4.2BSD, and are not specified by POSIX.1.
Each process group is a member of a session and each process is a member of the session of which its process group is a member.
A session can have a controlling terminal. At any time, one (and only one) of the process groups in the session can be the foreground process group for the terminal; the remaining process groups are in the background. If a signal is generated from the terminal (e.g., typing the interrupt key to generate SIGINT), that signal is sent to the foreground process group. (See termios(3) for a description of the characters that generate signals.) Only the foreground process group may read(2) from the terminal; if a background process group tries to read(2) from the terminal, then the group is sent a SIGTSTP signal, which suspends it. The tcgetpgrp(3) and tcsetpgrp(3) functions are used to get/set the foreground process group of the controlling terminal.
The setpgid() and getpgrp() calls are used by programs such as bash(1) to create process groups in order to implement shell job control.
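As an illustration (this example is mine, not from the man page), the following sketch forks a child that places itself in a new process group, as bash(1) does when creating a job, and checks from inside the child that its PGID now equals its PID:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that moves itself into a new process group, the way a
 * job-control shell does when launching a pipeline.  Returns 0 if the
 * child observed getpgrp() == getpid() after setpgid(0, 0), nonzero
 * otherwise. */
int spawn_in_new_group(void)
{
    pid_t child = fork();
    if (child == -1)
        return -1;

    if (child == 0) {
        /* In the child: a pgid of 0 means "use my own PID as the new PGID". */
        if (setpgid(0, 0) == -1)
            _exit(2);
        _exit(getpgrp() == getpid() ? 0 : 1);
    }

    /* In the parent: reap the child and report its exit status. */
    int status;
    if (waitpid(child, &status, 0) == -1)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

In a real shell, both the parent and the child typically call setpgid() to avoid a race before the new group is moved to the foreground with tcsetpgrp(3).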
If a session has a controlling terminal, and the CLOCAL flag for that terminal is not set, and a terminal hangup occurs, then the session leader is sent a SIGHUP. If the session leader exits, then a SIGHUP signal will also be sent to each process in the foreground process group of the controlling terminal.
If the exit of the process causes a process group to become orphaned, and if any member of the newly orphaned process group is stopped, then a SIGHUP signal followed by a SIGCONT signal will be sent to each process in the newly orphaned process group. An orphaned process group is one in which the parent of every member of the process group is either itself also a member of the process group or is a member of a process group in a different session (see also credentials(7)).
What is a polynomial?
A polynomial is a mathematical expression that has three allowed ingredients - constants, variables and exponents.
1/2 MV² is a polynomial that has 1/2 as a constant, M and V as variables and, on the V, an exponent of 2. As a group, 1/2 MV² is called a term. The 'poly' in polynomial refers to the fact that there can be many terms (poly means many).
As long as no variable divides another and all exponents are whole numbers, a polynomial can have any number of terms (but not an infinite number!).
How are polynomials used in the real world?
Many features of the natural world can best be described by polynomials. For instance, the path that a thrown object will follow (baseball, rock, hammer) is a curve called a parabola, and the formula for this curve is a polynomial. The equation to graph it with x and y coordinates is: y = ax² + bx + c. (a, b, and c are constants that depend on which parabola you want to describe.)
The formula in the first section, 1/2 MV², is used in physics to calculate the energy of moving objects. It says that the kinetic energy is one-half the mass times the velocity (speed) squared.
A basic problem in polynomials.
A basic problem in polynomials is solving them for a particular variable.
For example, if x is 20, what will the polynomial x² - 5x + 12 equal?
Substituting 20 for x gives 20² - (5 × 20) + 12 = 400 - 100 + 12 = 312.
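In code, the same substitution looks like this (a quick sketch):

```python
def p(x):
    """Evaluate the polynomial x^2 - 5x + 12 at x."""
    return x**2 - 5*x + 12

print(p(20))   # 400 - 100 + 12 = 312
```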
Who invented polynomials?
Polynomials as a class of formulas came about as the result of adopting mathematical notation. Although problems that we would now call polynomials existed, they were only described in words. Something like, Five barrels of the best wine, 16 barrels of medium quality wine, and 4 barrels of poor wine. In modern mathematical notation, that would be written as: 5x + 16y + 4z, a polynomial.
Notation that allowed compact formulas only became popular in the 18th and 19th centuries, with Leonhard Euler (1707 - 1783) credited with many of the symbols we still use today. By then, letters near the beginning of the alphabet (a, b, c) had come to stand for constants, and letters near the end (x, y, z) for variables in polynomial equations.
So, although people have been investigating polynomials since 300 BC (or earlier) the branch of mathematics we now call polynomials is much younger.
An interesting fact about polynomials.
When relationships are written as polynomials, they sometimes reveal something that might otherwise be hidden. The equation Ke = 1/2 MV² tells us how much energy (Ke) an object in motion has. It says you take the mass of the object (M) and multiply it by how fast the object is moving squared (V²) and by 1/2.
We know, instinctively, that if you use a heavier hammer, you can hit something harder. That is an increase in mass. We also know that swinging a hammer faster will also make it hit harder.
What might not be so obvious is that an increase in the speed of the hammer means more than an increase in the weight. Because V (velocity, or speed) is squared, an increase in how fast the hammer is moving increases the final energy more than an increase in the weight.
This is why a lighter bullet, moving very fast, can do more damage than a heavier bullet that moves more slowly. It is also why bat speed is so important in baseball. Without polynomials to show the relationship, this wouldn't be as easy to see. | <urn:uuid:4b07cc64-72a2-42dc-9abf-b7e6d9ac2724> | 4.15625 | 855 | Tutorial | Science & Tech. | 61.84314 |
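The effect of squaring the velocity is easy to check numerically (a quick sketch; the units are arbitrary):

```python
def kinetic_energy(m, v):
    """Kinetic energy Ke = 1/2 * m * v**2."""
    return 0.5 * m * v**2

# Doubling the mass doubles the energy...
print(kinetic_energy(2, 10), kinetic_energy(4, 10))   # 100.0 200.0
# ...but doubling the velocity quadruples it.
print(kinetic_energy(2, 10), kinetic_energy(2, 20))   # 100.0 400.0
```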
"AND now, the weather for the next decade." It sounds wildly optimistic, but meteorologists could be offering such forecasts within 20 years.
In Paris last week, researchers from 60 countries unveiled an international programme of research into climate variability called CLIVAR which could help predict the world's weather for years ahead. But they warned that it will only happen if their governments contribute "hundreds of millions of pounds". CLIVAR will operate under the UN's World Climate Research Programme but has no independent funds.
The scientists, meeting at the headquarters of the UN's science organisation UNESCO, cast aside the view that the weather was inherently chaotic and unpredictable. They said there were recurrent patterns, and they were close to understanding those patterns. "We can start to see into the future. This really is a revolution," said Ed Sarachik, an oceanographer at Seattle University who helped to write the plan for CLIVAR. ...
Species at Risk
What's so special about Nova Scotia's Blanding's turtles?
In Nova Scotia, the Blanding's turtle is one of our "southern relics" because the population is small, isolated and at the northeastern edge of its range. This reptile, and other southern relics such as water-pennywort, northern ribbon snake and southern flying squirrel, moved north during a warm period near the end of the last ice age. As the climate changed to what it is today, only the southwestern part of the province remained warm enough for these species to survive.
Hatchling turtles around a nest cavity
© Parks Canada / Peter Hope / 1991
Blanding's turtles can live to be more than 80 years old. In Nova Scotia, these long-lived turtles do not reach sexual maturity until they are 18 to 24 years old. They are the latest to mature within the species' range. Other populations, living as close as Maine or southern Ontario and Quebec, mature as early as 14 to 20 years.
Blanding's turtles in Nova Scotia are genetically distinguishable from other Blanding's turtles in North America. This means they may make a large contribution to the genetic diversity of the species, because they have some unique behaviours and look slightly different from other Blanding's turtles.
Blanding's turtle on a rock
© Parks Canada / D. Cairns / 1978
In fact, the Nova Scotia population is composed of three sub-populations (Kejimkujik, McGowan, New Elm), each genetically distinguishable from the others. The three sub-populations do not seem to mix. Behavioural differences have also been observed among these populations.
Carbon nanotubes are brimming with possibilities, whether used to strengthen damaged cartilage, act as a drug delivery mechanism or create plastic that's as strong as steel. Some researchers, however, are urging caution in handling these tiny particles, lest they enter the lungs of workers in nanotube manufacturing facilities and lodge in sensitive tissue in the throat and lungs.
The logical comparison here is the carelessness with which asbestos was handled for decades, a mistake that cost many workers their lives from mesothelioma, a deadly cancer of the membrane lining the body's internal organs (in particular the lungs) that can take 30 to 40 years to appear. Eager to avoid similar tragedy, industry and academia are probing the potential dangers of nanotubes to nip any in the bud before they become widely used.
Nature Nanotechnology reported last month that scientists at Queen's Medical Research Institute at the University of Edinburgh/MRC Center for Inflammation Research (CIR) in Scotland found that long, thin carbon nanotubes look and behave like asbestos fibers.
"This is probably the most attention a study on this subject has gotten," says Peter Antoinette, CEO of Nanocomp Technologies
, Inc., a Concord, N.H., manufacturer of carbon nanotubes. He adds that common sense is the best approach: You don't want to inhale any microscopic particles, regardless of whether they're made of carbon or asbestos. "If not handled correctly," he says, "flour is dangerous."
Carbon nanotubes have become increasingly popular because of their extraordinary properties. "They're the strongest materials made by man," Antoinette says. "Stronger than steel, lighter than aluminum and a better conductor than copper."
Although much of the Nature Nanotechnology study resonates with Antoinette, he questions some of the methods the researchers employed. For example, the carbon nanotubes were injected into the stomachs of the lab animals they tested. "I'm not sure that's truly representative of inhalation," he says.
Readers praised attempts to shed light on the potential dangers of carbon nanotubes but were hesitant to write off such a promising technology. "I agree we should look into this more closely," commented Karen Garvin. "However, that doesn't mean we should halt all study or use of nanotubes out of fear of the unknown. As a society, we either tend to ignore or overreact to environmental concerns."
Another reader, Hugh Jones, pointed out that carbon nanotubes are just one of many technologies whose impact on health is unclear at this time. "We don't know with certainty the long-term effects of cell phones, plastic bottles and the like," he wrote. "So it's good to see someone sounding the alarm for potential dangers like this before they cause grievous harm. Or perhaps the dangers mentioned here will motivate the companies to create a safer product."
Of course, the debate is as hypothetical today as it was a decade ago, when Science first likened carbon nanotubes to asbestos, stirring debate among pathologists, nanotube chemists and asbestos researchers who disagreed on the extent of the danger posed by exposure to carbon nanotubes. Let's hope further investigation of carbon nanotubes reveals ways to harness their potential—as well as mitigate their risks.
by Staff Writers
Washington DC (SPX) Nov 04, 2011
Concerns that global warming may have a domino effect - unleashing 600 billion tons of carbon in vast expanses of peat in the Northern Hemisphere and accelerating warming to disastrous proportions - may be less justified than previously thought.
That's the conclusion of a new study on the topic in ACS' journal Environmental Science & Technology.
Christian Blodau and colleagues explain that peat bogs - wet deposits of partially decayed plants that are the source of gardeners' peat moss and fuel - hold about one-third of the world's carbon.
Scientists have been concerned that global warming might dry out the surface of peatlands, allowing the release into the atmosphere of carbon dioxide and methane (a greenhouse gas even more potent than carbon dioxide) produced from decaying organic matter.
To see whether this catastrophic domino effect is a realistic possibility, the scientists conducted laboratory simulations studying the decomposition of wet bog peat for nearly two years.
Far from observing sudden releases of greenhouse gases, they found that carbon release and methane production slowed down considerably in deeply buried wet peat, most likely because deeper peat is shielded from exchange of water and gases with the atmosphere.
In connection with previous work, the study concluded that "even under moderately changing climatic conditions," peatlands will continue to sequester, or isolate from the atmosphere, their huge deposits of carbon and methane.
American Chemical Society
A plant captures energy from the sun through the photosynthesis process, so when plant material is burned, it is actually releasing the energy from the sun. But unlike solar energy, which is produced by the heat of the sun, we refer to the energy derived from plant material as biomass.
Biomass is considered an important option for renewable energy, and this energy can be derived and used in many different ways. Plants can be grown specifically for the production of economically friendly energy, or the residue left after plant material has been processed for other needs can be used as a source of renewable energy. Fuels derived from plant material, or biomass, are called biofuels. The term biofuel broadly covers several fuel types, including bioethanol, biogas and biodiesel.
When biomass is derived from trees, grasses and other non-food sources, it is called cellulosic biomass. Although these plants are not fermented directly into biogas or bioalcohols such as bioethanol, cellulosic biomass can be used as feedstock in the production of ethanol.
Ethanol can be used as an alternative renewable fuel in its pure form, but it is usually blended into gasoline rather than used as a standalone fuel for cars and trucks. Adding ethanol improves the emissions released and increases the octane rating of the vehicle fuel. Ethanol is the most widely used fuel in the biomass category at this time. Animal fats, recycled grease and vegetable oils can all be used to produce biodiesel, which can power vehicles in its pure form. Usually, though, biodiesel is used as an additive to conventional diesel fuel, reducing the levels of hydrocarbons, carbon monoxide and particulates produced by diesel-powered vehicles.
This post is part of a series on the Thamesgate Blog. For more information on renewable energy solutions see our website.
Written by Nick Watkins.
Posted in Renewable Energy
Reducing carbon emissions from deforestation
1st March 2009
We can all agree on what the problem is, it’s settling on a solution that’s the difficult part. Mark Anslow explores the complicated world of deforestation
Do we merely want to preserve the carbon in rainforests, or do we also want to protect them as habitats?
If 2007 was the year in which the world woke up to climate change, then 2008 was the year in which everyone realised just how damn complicated the whole situation is. On paper, reducing levels of deforestation should be one of the easiest areas to tackle. After all, in the words of Tim Yeo MP, chair of the Parliamentary Environmental Audit Committee, ‘there is no rocket science involved in dealing with deforestation – it is not like carbon capture and storage where we are waiting for a technological breakthrough’.
Indeed not. In practice, however, the situation has become so complicated, with so many different proposals on the table, that very few outside the UN and environmental NGOs have the slightest clue what is happening.
Most of the proposals are happy to come together under the general UN banner of REDD – Reducing Emissions from Deforestation and Degradation. The UN hasn’t fixed on one particular proposal for REDD yet, but the concrete is drying fast and will (probably) be set by December of this year.
Before it is, however, there is a battery of problems to tackle. First, there's the sticky issue of setting the 'baselines' or...
Edward Lorenz; Pioneer in Creation of Chaos Theory
Thursday, April 17, 2008
Edward N. Lorenz, 90, a meteorologist who laid the groundwork for chaos theory, memorably asking whether the flap of a butterfly's wings in Brazil can set off a tornado in Texas, died of cancer April 16 at his home in Cambridge, Mass. He was an emeritus professor at the Massachusetts Institute of Technology.
At MIT, Dr. Lorenz accidentally discovered how small differences in the early stages of a dynamic system, such as the weather, can trigger such huge changes in later stages that the result is unpredictable and essentially random.
At the time, Dr. Lorenz was studying why it's so hard to accurately forecast the weather, but the implications of his work go far beyond meteorology.
The new science of chaos fundamentally changed the way researchers address topics from the geometry of snowflakes to the predictability of which movies will become blockbusters. The butterfly effect became a popular way of describing unpredictability, most recently in "An Inconvenient Truth" (2006), the Academy Award-winning documentary with former Vice President Al Gore.
It also "brought about one of the most dramatic changes in mankind's view of nature since Sir Isaac Newton," said the committee that awarded Dr. Lorenz the 1991 Kyoto Prize for basic sciences.
Yet Dr. Lorenz's 1962 paper on chaos theory was largely ignored for years. A decade later, when he gave a talk about predictability, with a title asking the famous butterfly question, the scientific establishment was ready to consider the idea. Other scientists who had been working on similar questions swarmed to the field, and one by one, certain assumptions of science began to falter.
"When I first heard this [butterfly effect] idea, I thought it very clever but it couldn't be literally true," said James Gleick, a science writer and author of "Chaos: Making a New Science" (1987), which explored Dr. Lorenz's work. "But it is literally true. . . . Complex dynamical systems, if they are chaotic, never repeat themselves. They are capable of an infinite variety of behavior."
This means that simple systems can result in complex behavior and that the slightest change in underlying causes can make the result unpredictable.
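The butterfly effect can be seen directly in Lorenz's own equations. The sketch below is an illustration, not code from the article: it integrates his 1963 convection system twice from initial conditions that differ by one part in a billion, using a simple forward-Euler scheme and the classic parameter values (both assumptions of this demo).

```python
def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    # One forward-Euler step of Lorenz's model:
    # x' = s(y - x), y' = x(r - z) - y, z' = xy - bz
    x, y, z = state
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

a = (1.0, 1.0, 1.0)
b_state = (1.0, 1.0, 1.0 + 1e-9)   # a one-part-in-a-billion difference
max_diff = 0.0
for _ in range(5000):               # 50 model time units
    a = lorenz_step(a)
    b_state = lorenz_step(b_state)
    max_diff = max(max_diff, abs(a[0] - b_state[0]))

# The microscopic initial difference grows until the two "forecasts"
# disagree by an amount comparable to the size of the attractor itself.
print(max_diff)
```

The same tiny perturbation, run through a linear system, would stay tiny; in a chaotic system it is amplified exponentially until prediction fails.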
Chaos theory -- also known as the science of nonlinearity, the science of complexity, the science of random recurrent behavior or the science of turbulence and discord -- has thus been called the third great scientific revolution of the 20th century, along with relativity and quantum physics.
Edward Norton Lorenz was born May 23, 1917, in West Hartford, Conn., and graduated from Dartmouth College. He received a master's degree in mathematics in 1940 from Harvard University and served as a weather forecaster for the Army Air Forces during World War II.
In 1948, he received a doctorate in meteorology from MIT and joined its faculty. He remained there the rest of his career.
In 1961, he was using a primitive computer to model weather forecasts, which led to his most renowned work.
Discovery comes in for a perfect landing.
Courtesy of NASA
Shuttle-Mir Program Comes to a Close
News story originally written on June 13, 1998
and Pilot Gorie brought the shuttle Discovery in for a perfect landing yesterday. Discovery touched down at 1:00 p.m. CDT on June 12, 1998. This ends the 11-day mission, which included a 4-day docking with the space station Mir.
While docked with the space station, Andy Thomas came onboard the shuttle for his ride home to Earth after spending four and a half months on Mir.
Yesterday's landing is the end of an 812-day continuous U.S. presence in space! There are no future plans for U.S. involvement with the Russian space station, so Discovery's landing is the official close to the Shuttle-Mir program.
Polar Bear. Photo by NPS.
Climate change threatens some of the most treasured natural and historic places in our nation. An example in the Alaska Arctic bioregion that park managers are concerned about is the potential vegetation shift from tundra to shrubland due to warmer temperatures. This vegetation change, combined with the loss in sea ice, may result in a loss of vital habitat for caribou, birds and polar bears.
L. K. Tamppari (JPL), D. A. Senske (NASA HQ), T. V. Johnson, R. Oberto, W. Zimmerman (JPL), JPL's Team-X Team
Since the arrival of the Galileo spacecraft to the Jovian system in 1995, evidence indicating a liquid water ocean beneath the icy Europan crust has become much stronger. This evidence, combined with the fact that Europa is greater than 90 wt% water, makes it a candidate body to harbor extant or extinct life. The outstanding Europa science questions are to determine whether or not there is or has been a liquid water layer under the ice and whether or not liquid water currently exists on the surface or has in the geologically recent past, what geological processes create the ice rafts and other ice-tectonic processes that affect the surface, the composition of the deep interior, geochemical sources of energy, the nature of the neutral atmosphere and ionosphere, and the nature of the radiation environment, especially with regard to its implications for organic and biotic chemistry. In addition, in situ studies of the surface of Europa would offer the opportunity to characterize the chemistry of the ice including organics, pH, salinity, and redox potential.
In order to address these scientific objectives, a Europa program, involving multiple spacecraft, is envisioned. The JPL Outer Planets program has been helping to lay the groundwork for such a program. This effort is being conducted with particular emphasis on compiling and identifying science objectives which will flow down to a Europa mission architecture. This poster will show the tracability of observational methods from the science objectives.
Also in support of developing a Europa mission architecture, JPL’s Team-X has conducted a variety of Europa mission studies. A comparison of the studies done to date will be presented, highlighting science objectives accomplished, technological challenges, and cost.
A more detailed presentation will be given on a Europa Lander concept study. First, the science objectives and instrumentation will be shown, including instrument mass, power usage, volume, and data rate. Second, the mission design will be discussed, including candidate launch and arrival dates and landing ellipse issues. Third, the technology developments required and other issues will be presented.
This poster presentation will provide an opportunity for the science community to influence future work on developing a Europa architecture, including refinements to a Europa Lander, other mission concepts, and further science objective identification and prioritization.
This work was carried out at Caltech’s Jet Propulsion Laboratory under a contract from NASA.
Although it may be 'apparent' (it is a harmonic of sorts), that apparentness by no means invalidates its importance to weather mechanics. Because the Coriolis effect exists, it causes weather patterns to spiral and vortices to build. Without the Coriolis effect, cyclonic systems could not grow and develop. This is why we do not have hurricanes at the equator. The spiral, interestingly, allows force interactions to build and feed. Without the Coriolis effect we would not have hurricanes... and we would not have the Great Red Spot on Jupiter.
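As a small numerical aside (my illustration, not part of the original comment): the strength of the deflection is measured by the Coriolis parameter f = 2·Omega·sin(latitude), which vanishes at the equator.

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def coriolis_parameter(lat_deg):
    """f = 2*Omega*sin(latitude): the local vertical component of the
    Coriolis effect that lets cyclones spin up."""
    return 2 * OMEGA * math.sin(math.radians(lat_deg))

for lat in (0, 10, 30, 60):
    print(lat, coriolis_parameter(lat))
# f is exactly zero at the equator, which is why tropical cyclones
# cannot organize there and form only some degrees away from it.
```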
One of the most common tasks is to generate the flat text of the email message represented by a message object tree. You will need to do this if you want to send your message via the smtplib module or the nntplib module, or print the message on the console. Taking a message object tree and producing a flat text document is the job of the Generator class.
Again, as with the email.Parser module, you aren't limited to the functionality of the bundled generator; you could write one from scratch yourself. However the bundled generator knows how to generate most email in a standards-compliant way, should handle MIME and non-MIME email messages just fine, and is designed so that the transformation from flat text, to an object tree via the Parser class, and back to flat text, is idempotent (the input is identical to the output).
Here are the public methods of the Generator class:
Optional mangle_from_ is a flag that, when true, puts a ">" character in front of any line in the body that starts exactly with "From " (i.e. From followed by a space at the front of the line). This is the only guaranteed portable way to avoid having such lines be mistaken for Unix-From headers (see "WHY THE CONTENT-LENGTH FORMAT IS BAD" for details).
Optional maxheaderlen specifies the longest length for a non-continued header. When a header line is longer than maxheaderlen (in characters, with tabs expanded to 8 spaces), the header will be broken on semicolons and continued as per RFC 2822. If no semicolon is found, then the header is left alone. Set to zero to disable wrapping headers. Default is 78, as recommended (but not required) by RFC 2822.
The other public Generator methods are:
Optional unixfrom is a flag that forces the printing of the Unix-From (a.k.a. envelope header or From_ delimiter) before the first RFC 2822 header of the root message object. If the root object has no Unix-From header, a standard one is crafted. By default, this is set to 0 to inhibit the printing of the Unix-From delimiter.

Note that for sub-objects, no Unix-From header is ever printed.
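As a hedged sketch of this behavior, using Python 3's email.generator module (where the Generator constructor and its flatten() method carry the same mangle_from_, maxheaderlen and unixfrom options described here):

```python
import io
from email.message import Message
from email.generator import Generator

# Build a message whose body contains a line starting exactly with "From "
msg = Message()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Generator demo"
msg.set_payload("Hello,\nFrom here the text could be mistaken for an envelope header.\n")

buf = io.StringIO()
gen = Generator(buf, mangle_from_=True, maxheaderlen=78)
gen.flatten(msg, unixfrom=True)   # msg has no Unix-From, so one is crafted
text = buf.getvalue()

# The crafted envelope line appears first, and the body line that began
# with "From " has been mangled to ">From " so it cannot be mistaken
# for a message delimiter.
print(text)
```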
As a convenience, see the methods Message.as_string() and str(aMessage), a.k.a. Message.__str__(), which simplify the generation of a formatted string representation of a message object. For more detail, see email.Message.
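A minimal sketch of that convenience (Python 3 syntax, where these helpers live on email.message.Message):

```python
from email.message import Message

msg = Message()
msg["Subject"] = "hi"
msg.set_payload("body text\n")

# as_string() runs a Generator over the message behind the scenes;
# str(msg) invokes the same machinery via __str__()
flat = msg.as_string()
print(flat)
```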
By Andrew Howley & Kike Ballesteros
With all our emphasis on charismatic fish and stunning coral formations, boring old algae tends to get skipped over by most observers of the underwater world. Being out here in the South Pacific with an algae expert though, it doesn’t take long to be won over by these intriguingly important life forms.
Kike Ballesteros of the Centre d’Estudis Avançats de Blanes, CSIC, is NG Explorer-in-Residence Enric Sala’s long-time mentor and collaborator. He is also the master of all things algal for the Pitcairn Islands expedition, and he agreed to enlighten me about the wonders of algae.
Algae Great and Small
First off, there are two major kinds of algae in a coral reef. One is the tiny microalgae living inside the coral itself and providing a food source (and coloration) for these ancient animals. The other is macroalgae, which includes seaweed, and can be either soft and fleshy or hard and crusty (calcareous).
Basis of the Food Chain
In shallow temperate areas huge plains of seaweed and other macroalgae are the main components of the seascape. Here in the tropics the main component is coral. But don’t be fooled—macroalgae are still hard at work making life at the coral reefs possible.
Algae, both soft and crusty, provide the only food source for plant-eating fishes (such as parrotfishes, chubs, damselfishes or surgeonfishes) and invertebrates (some sea urchins, small crustaceans, and snails). Together with the microalgae that live inside the coral colonies, these algae provide the basic food and energy source for much of what lives on the reef.
Builders of the Reef
Crusty algae are also active builders of the reef structure itself.
Red algae (such as Hydrolithon and Lithophyllum in the gallery above) produce limestone which cements together the coral pieces into a solid chunk. Without this ever-rising base, the corals themselves would not be able to build up vertically.
The algae then is what allows the reef to build up towards the ocean surface and cause waves to break off shore, at once forming the lagoon and protecting the interior island’s beaches from the full power of the ocean.
Where Tropical Beach Sand Comes From
Finally, other crusty species of green algae (Halimeda in the gallery) are important sand producers. Beaches along the reef and sand flats under the water are made up of many components, including coral debris and the skeletons of several marine invertebrates (like molluscs, sea urchins, and tiny shelled foraminiferans) but far and away, the biggest contributors are broken up Halimeda.
So next time you’re sipping mai-tais on a tropical beach, raise your glass to algae, the workhorses that make paradise possible.
More From the Pitcairn Islands Expedition
Interfaces are the basis of proper composition and shine in relationship with many design patterns (e.g. the Command pattern). As such, they are fundamental to sound OO design. Teach interfaces as the rule and class inheritance as the exception. Class inheritance (extends) is mostly unnecessary and widely misunderstood and leads to rigidity while promising flexibility. Interface implementation, on the other hand, leads to a more functional design and stimulates creativity.
A proper way to teach interfaces is to create a bit of undo functionality, with an interface 'Command', having two methods 'do' and 'undo' and a command stack.
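A minimal sketch of that exercise in Python (the AppendCommand and CommandStack names are illustrative, not from any particular codebase):

```python
from abc import ABC, abstractmethod

class Command(ABC):
    """A coherent interface: do() applies an action, undo() reverses it."""
    @abstractmethod
    def do(self): ...
    @abstractmethod
    def undo(self): ...

class AppendCommand(Command):
    """One concrete command: appends an item to a shared list."""
    def __init__(self, target, item):
        self.target, self.item = target, item
    def do(self):
        self.target.append(self.item)
    def undo(self):
        self.target.pop()

class CommandStack:
    """Executes commands and keeps them on a stack so they can be undone."""
    def __init__(self):
        self._done = []
    def execute(self, cmd):
        cmd.do()
        self._done.append(cmd)
    def undo_last(self):
        self._done.pop().undo()

doc = []
stack = CommandStack()
stack.execute(AppendCommand(doc, "hello"))
stack.execute(AppendCommand(doc, "world"))
print(doc)          # ['hello', 'world']
stack.undo_last()
print(doc)          # ['hello']
```

Note that CommandStack depends only on the Command interface, never on any concrete class: new commands can be added without touching the undo machinery, which is exactly the flexibility interfaces buy over class inheritance.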
Interfaces should be:
- Coherent (the methods should have an obvious relationship, such as do() and undo(), fork() and join()),
- Obvious to implement (the pre- and post-conditions of each method should be easy to understand, even though the implementation of the interface might be complex). This guarantees encapsulation and reuse,
- Well named (generally the name should be an adjective, although exceptions can be made, such as the Command above).
Finally, interfaces generally define the façade of a more complex system, in order to insulate implementation changes in a future release from the users of this system.
Plate Boundaries and Volcanoes
and is replicated here as part of the SERC Pedagogic Service.
Course: Geology of the National Parks
I have students work in groups of three or four, studying maps of volcanic activity around the world on a base map of tectonic plate boundaries. I then have each group classify three to five plate boundary types on the basis of volcanic activity. (In other words, I have them do the "volcanologist" portion of Dale Sawyer's plate tectonics jigsaw.) When they are done, a few volunteers present their groups' classifications to the class.
I then lecture, very briefly, saying simply that the patterns they noticed are not coincidental, and that we will discover what causes them as the week goes on, and that a large part of science (and geology in particular) involves trying to explain the patterns we observe in the natural world.
I particularly like this exercise because the students can all find patterns in the data, and have fun doing it, and they are doing science -- even the ones who tell me in their (written) introductions that they are science-phobic. And it leads right in to the plate tectonics jigsaw that we do for the rest of the week.
Acting upon the hint which had been conveyed from various investigations in the domain of physics, and concentrating upon the problem all those unmatched powers of intellect which distinguished him, the great inventor had succeeded in producing a little implement which one could carry in his hand, but which was more powerful than any battleship that ever floated. The details of its mechanism could not be easily explained, without the use of tedious technicalities and the employment of terms, diagrams and mathematical statements, all of which would lie outside the scope of this narrative. But the principle of the thing was simple enough. It was upon the great scientific doctrine, which we have since seen so completely and brilliantly developed, of the law of harmonic vibrations, extending from atoms and molecules at one end of the series up to worlds and suns at the other end, that Mr. Edison based his invention.
Every kind of substance has its own vibratory rhythm. That of iron differs from that of pine wood. The atoms of gold do not vibrate in the same time or through the same range as those of lead, and so on for all known substances, and all the chemical elements. So, on a larger scale, every massive body has its period of vibration. A great suspension bridge vibrates, under the impulse of forces that are applied to it, in long periods. No company of soldiers ever crosses such a bridge without breaking step. If they tramped together, and were followed by other companies keeping the same time with their feet, after a while the vibrations of the bridge would become so great and destructive that it would fall in pieces. So any structure, if its vibration rate is known, could easily be destroyed by a force applied to it in such a way that it should simply increase the swing of those vibrations up to the point of destruction.
Now Mr. Edison had been able to ascertain the vibratory swing of many well-known substances, and to produce, by means of the instrument which he had contrived, pulsations in the ether which were completely under his control, and which could be made long or short, quick or slow, at his will. He could run through the whole gamut from the slow vibrations of sound in air up to the four hundred and twenty-five millions of millions of vibrations per second of the ultra red rays.
Having obtained an instrument of such power, it only remained to concentrate its energy upon a given object in order that the atoms composing that object should be set into violent undulation, sufficient to burst it asunder and to scatter its molecules broadcast. This the inventor effected by the simplest means in the world—simply a parabolic reflector by which the destructive waves could be sent like a beam of light, but invisible, in any direction and focused upon any desired point.
Testing the "Disintegrator."
I had the good fortune to be present when this powerful engine of destruction was submitted to its first test. We had gone upon the roof of Mr. Edison's laboratory and the inventor held the little instrument, with its attached mirror, in his hand. We looked about for some object on which to try its powers. On a bare limb of a tree not far away, for it was late in the Fall, sat a disconsolate crow.
"Good," said Mr. Edison, "that will do." He touched a button at the side of the instrument and a soft, whirring noise was heard.
"Feathers," said Mr. Edison, "have a vibration period of three hundred and eighty-six million per second."
He adjusted the index as he spoke. Then, through a sighting tube, he aimed at the bird.
"Now watch," he said.
Another soft whirr in the instrument, a momentary flash of light close around it, and, behold, the crow had turned from black to white!
"Its feathers are gone," said the inventor; "they have been dissipated into their constituent atoms. Now, we will finish the crow."
Instantly there was another adjustment of the index, another outshooting of vibratory force, a rapid up and down motion of the index to include a certain range of vibrations, and the crow itself was gone—vanished in empty space! There was the bare twig on which a moment before it had stood. Behind, in the sky, was the white cloud against which its black form had been sharply outlined, but there was no more crow.
"That looks bad for the Martians, doesn't it?" said the Wizard. "I have ascertained the vibration rate of all the materials of which their war engines whose remains we have collected together are composed. They can be shattered into nothingness in the fraction of a second. Even if the vibration period were not known, it could quickly be hit upon by simply running through the gamut."
"Hurrah!" cried one of the onlookers. "We have met the Martians and they are ours."
DNA-RNA Reverse Transcribing Viruses
This tree diagram shows the relationships between several groups of organisms.
The root of the current tree connects the organisms featured in this tree to their containing group and the rest of the Tree of Life. The basal branching point in the tree represents the ancestor of the other groups in the tree. This ancestor diversified over time into several descendent subgroups, which are represented as internal nodes and terminal taxa to the right.
You can click on the root to travel down the Tree of Life all the way to the root of all Life, and you can click on the names of descendent subgroups to travel up the Tree of Life all the way to individual species.
Page copyright © 2005
All Rights Reserved.
- First online 22 December 2005
Citing this page:
Tree of Life Web Project. 2005. DNA-RNA Reverse Transcribing Viruses. Version 22 December 2005 (temporary). http://tolweb.org/DNA-RNA_Reverse_Transcribing_Viruses/21831/2005.12.22 in The Tree of Life Web Project, http://tolweb.org/
Auxiliary feedwater

Auxiliary feedwater is a backup water supply system found in pressurized water reactors, the most common type of western nuclear power plant. The system, sometimes known as emergency feedwater, can be used during shutdowns, including accident conditions, and sometimes during startup. It works by pumping water to the steam generators (the heat exchangers between the primary and secondary coolant loops) from reserve tanks or from a larger body of water (e.g. a lake, river, or ocean) to remove decay heat, the heat released by radioactive decay, from the reactor.
Sarah, after completing all the readings and looking into the two species concepts in detail - what do you think? Do you think one definition or the other has more merit? If scientists are not unified behind the BSC, what do you think of its central role in school textbooks?
You make a great point about numbers. As soon as we narrow the definition of species, then we have fewer animals within each species. Should protection be all about numbers? Some people have suggested that conservation efforts should be focused on preserving whole habitats and ecosystems rather than boosting the numbers of specific species. Do you think this is a better way to go?
Maureen 28 Sept 9:06PM
I think the preservation of habitats and ecosystems is a plan that protects a greater number of species and promotes continued biodiversity by insulating the areas with the highest potential for speciation. The “lost world” in Papua, New Guinea is just one example of how a “protected” habitat can give rise to a host of new species. The earth’s rain forests are home to so many yet undiscovered species, not to mention those that we know to be in existence, and their importance is only beginning to be understood (i.e. medical treatments, species interactions). By protecting these ecosystems, I think we establish the means for a greater conservation impact overall. I don’t think this plan should be instead of, but in addition to, conservation efforts to preserve dangerously low populations.
Sarah 4 Oct 11:29PM
Hi Maureen, You make a good point about undiscovered species and how protecting habitats can unintentionally protect organisms that are yet to be discovered. I agree with you about protecting whole habitats and ecosystems. While conservation efforts should still be focused on protecting endangered species, they should also expand beyond species protection to ecosystem protection.
Melissa 29 Sept 11:28AM
"As soon as we narrow the definition of species, then we have fewer animals within each species. Should protection be all about numbers? Some people have suggested that conservation efforts should be focused on preserving whole habitats and ecosystems rather than boosting the numbers of specific species. Do you think this is a better way to go?"
If I might jump in, I think that conservation should certainly be focused on numbers, but not necessarily the number of total species. I think that, if push comes to shove, there are two considerations that should take priority over preserving just the number of species: genetic diversity and ecological significance, two measures that are necessary for biota to recover from the stresses that have driven them to this point. Ecological significance is relatively separate from the BSC vs PSC discussion, especially since it is usually discussed on the population level. Genetic diversity, however, is tightly linked to the debate. While comparing the genetic diversity of populations is relatively straightforward, if we're making species vs. species decisions, these two definitions might encompass groups with very different degrees of diversity.
Scientist: Andrea 1 Oct 8:34AM
It does seem strange that biodiversity should be defined by number of taxa, but that is how it has been done traditionally in natural history and paleobiology. This is why the ecological definition of biodiversity, essentially a measure of disparity or niche breadth, is being taken up instead in conservation.
Kathy 30 Sept 11:41AM
This is a very interesting debate and I look forward to reading others' responses. My first thought is to think about the degree of influence humans have had over the species becoming endangered. Now this is obviously very hard to measure and I'd imagine that humans are to some degree responsible for all endangered animals, but animals become extinct for a variety of reasons. I would argue that scientists should try to conserve those populations that are becoming endangered as a result of direct human influence, i.e. chopping down forests to build houses.
Sarah 4 Oct 11:PM
Hi Jody, Based on the information presented in the readings, I feel that the Biological Species Concept has more merit. The reason for this is that the BSC specifically defines a species as a group that can produce fertile offspring whereas the PSC leaves much room for interpretation as to what defines a species. Simply defining a species as a group that has at least one unique feature does not seem stringent or specific enough to yield consistent results. Under the PSC, I feel that someone could define a group of cats with thicker coats as a different species than a group of cats with thinner coats even though the groups could be genetically similar but experience different levels of shedding due to environmental conditions. This, to me, does not seem like a very reliable method of defining species. However, I realize that everyone is entitled to their own opinion and because scientists are not uniformly behind the BSC, I feel that school text books should objectively present both theories and the individual students can decide for themselves which they accept.
To answer your question about conservation and numbers, I do not believe that conservation should solely focus on the numbers of organisms in a species. Instead, I think that efforts should constantly be made to preserve ecosystems. This would involve careful control of predator prey populations, restrictions placed on introducing non-native species into a habitat, and reduction of human impact on natural habitats. I do think that conservation efforts should occur in response to endangered species, but saving individual groups of organisms should not be the sole priority. Without healthy ecosystems, more and more species will become endangered as predator prey relations change and food webs are altered. Thus, by protecting entire habitats and ecosystems rather than simply focusing on species numbers, conservation efforts can be much more efficient.
I do agree that the PSC is more useful when trying to classify extinct organisms where only fossil evidence remains. In this case, one may not know if two organisms can interbreed successfully, and thus may need to use morphological data to draw conclusions. However, I also feel that defining species as groups that have a given genetic variation from other groups would be more effective than either BSC or PSC. Not only is genetic information available in fossilized organisms, but defining a specific degree of genetic difference required for two species eliminates subjectivity.
Mike 4 Oct 11:35PM
It is rather common, and I think appropriate, to teach simplified and/or older versions of a scientific idea, confident that we will clarify and elaborate later. Examples: We teach Newton's Laws before clarifying the physics with quantum theory for the very small realm, special relativity for the very fast realm, and general relativity for the very massive realm. I teach the classical five kingdom model (always dropping the 'g' of course! -- no sexist language in my classroom) before the three domain model. The Bohr model is a great place to begin learning quantum theory, but it is not correct.
After this week's lesson I am quite convinced that the PSC has more general validity. However, I think I will initially define species with the BSC in my middle school classrooms. The difference is, I will return to the topic and introduce the PSC.
We use altitude and azimuth to describe the location of an object in the sky as viewed from a particular location at a particular time.
The altitude is the angular distance an object appears to be above the horizon. The angle is measured up from the closest point on the horizon.

The azimuth of an object is the angular distance along the horizon to the location of the object. By convention, azimuth is measured from north towards the east along the horizon.
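As a quick numeric illustration of this convention (a sketch added for clarity, not part of the original course page), the azimuth of a horizontal direction can be computed from its north and east components:

```python
import math

def azimuth(north, east):
    """Azimuth of a horizontal direction, in degrees.

    Measured from north (0 degrees), increasing towards the east
    (90 degrees), matching the convention described above.
    """
    return math.degrees(math.atan2(east, north)) % 360.0

# Cardinal directions under this convention:
# due north -> 0, due east -> 90, due south -> 180, due west -> 270
```

So an object due southeast of the observer sits at azimuth 135 degrees, whatever its altitude.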
3. The world is warming
- There are three main records of global temperature at the Earth’s surface: the UK Met Office/UEA Climatic Research Unit (CRU) record, a record produced by NASA’s Goddard Institute for Space Studies, and a record produced by the US National Oceanic and Atmospheric Administration (NOAA). The records are in close agreement, showing that global average temperature has increased by approximately 0.75°C since 1900 (Figure 7, below).
Figure 7 (above): The three main records of global average surface temperature. Red line = NOAA record, blue line = NASA record, black line = Met Office/UEA CRU record, with grey shading showing 95% confidence interval on Met Office/UEA CRU record. Source: Met Office Hadley Centre. Before 1850, instrumental time series measurements with global coverage are not available. [IPCC AR4 (2007) (Working Group 1; 1.3.2)]. (Larger version of Figure 7 (PNG, 402 Kb))
- The global average temperature is currently calculated from measurements taken at about 5,000 land-based weather stations and over 1,200 free-floating buoys, as well as ships and moored buoys (Figure 8, below). A number of issues need to be managed to ensure the integrity of these raw data, including accounting for a lack of complete coverage in some areas, relocation of weather stations, and changes of instruments and measurement procedures. These issues are addressed using statistical techniques and the uncertainties they introduce are factored into the "error bars" associated with the records. The size of the error bars decreases towards the present day as the number and accuracy of measurements underpinning the records increases.
Figure 8 (above): Maps showing (top) land stations used to construct the UEA Climatic Research Unit land surface temperature dataset, and (bottom) sea surface temperature measurements made in May 2010 by ships (blue dots), drifting buoys (red dots) and moored buoys (grey dots) (note that over the space of a year, the gaps in sea surface measurements gradually get 'filled in' along the shipping routes by ships and in other places by drifting buoys). Statistical methods are used to calculate the global average in a way that takes account of the uneven distribution of observations. Source: Met Office. (Larger version of Figure 8 (PPT, 498 Kb))
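One of the statistical issues mentioned above is that measurements are not spread evenly over the globe, so a plain mean of all readings would be biased. Below is a minimal sketch of one standard remedy, weighting each reading by the cosine of its latitude (toy numbers only, not the actual Met Office procedure):

```python
import math

def area_weighted_mean(samples):
    """Average (latitude_deg, anomaly) samples, weighting by cos(latitude).

    Grid cells of fixed angular size shrink in area towards the poles,
    so polar readings get proportionally less weight.
    """
    weights = [math.cos(math.radians(lat)) for lat, _ in samples]
    total = sum(weights)
    return sum(w * value for w, (_, value) in zip(weights, samples)) / total

# Toy example: an equatorial reading counts for more than a polar one.
readings = [(0.0, 0.5), (60.0, 0.1), (80.0, -0.2)]
mean = area_weighted_mean(readings)
```

Equal-angle grid cells represent areas proportional to cos(latitude), which is why that factor appears as the weight.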
- Records of global average temperature change on land, at the sea surface and over the oceans at night all show a clear warming trend, with somewhat greater warming over land, as predicted by climate models.
- Satellites measure temperature change in the lower atmosphere (the 'troposphere'). At the global scale, these records are in close agreement with the surface temperature records (Figure 9). In the tropics, warming in the lower atmosphere is expected to be slightly greater than at the surface due to changes in the amount of water vapour in the atmosphere as it warms, although estimates of the precise amount of additional warming from different climate models vary. This higher rate of warming is not seen in some satellite records; however, the error bars on satellite records are large (because satellites measure temperature using indirect methods and the data from them need to be corrected for factors such as instrument temperature and drift of the satellite in orbit).
Figure 9 (above): Three satellite records of global average temperature in the lower atmosphere (black, blue and red lines) compared to the Met Office/UEA CRU record of global average temperature at the surface (green line). Source: Met Office. (Larger version of Figure 9 (PNG, 137 Kb))
- Although other factors might influence each of them individually, trends observed in a wide range of other physical variables over the past few decades are consistent with those expected from the warming seen in temperature records, including:
- a steady rise in global sea level (Figure 10a);
- retreat of Arctic sea ice (the late summer minimum in Arctic sea-ice extent has decreased by about 10% each decade since satellite records began in the 1970s – Figure 10b, below);
- widespread ice mass losses from glaciers and ice caps;
- earlier retreat of snow cover in spring in the Northern Hemisphere;
- shifts in rainfall patterns consistent with those expected in a warming world (including increases in the Northern Hemisphere mid-latitudes and drying in the Northern Hemisphere subtropics and tropics);
- increases in atmospheric humidity in the lower atmosphere;
- increases in the number of heavy rainstorms and heatwaves over many land areas.
Figure 10 (a) (above, top): Annual averages of the global mean sea level based on reconstructed sea level fields since 1870 (red), tide gauge measurements since 1950 (blue) and satellite altimetry since 1992 (black). Units are in mm relative to the average for 1961 to 1990. Error bars are 90% confidence intervals. (Source: IPCC AR4) (Larger version of Figure 10 (a))
Figure 10 (b) (above, bottom): average Arctic sea ice extent in September from 1979 to 2009 (Source: NSIDC) (Larger version of Figure 10 (b))
- Changes in natural systems consistent with a warming trend have also been observed over recent decades, including:
- shifts in the ranges of some terrestrial plant and animal species to higher latitudes and altitudes (for example, the ranges of a number of butterfly species have shifted poleward or uphill in Europe);
- warming of lakes and rivers;
- changes in the distribution of some marine species (for example a northerly shift in the distribution of plankton in the North Atlantic ocean);
- earlier arrival of spring and an increase in the length of the growing season, on average, in many regions of the Northern Hemisphere.
- Natural variability, resulting from internal adjustments in the climate system, solar variability and volcanic activity, causes global average temperature to fluctuate on timescales of a few years to a decade or more. Consequently it is possible to find many periods of a few years when global average temperature has levelled off or declined. In order to detect whether the climate system is changing over and above these natural factors, it is helpful to consider the trend in global temperature after it has been averaged over each decade to remove some of the short-term natural variability (Figure 11, below). It is clear that the decadal-scale trend in temperature has been upward. Even allowing for uncertainties in the observations, the last three decades have each been significantly warmer than the previous one. The size and sustained nature of the warming since the 1950s is unprecedented over the instrumental record.
Figure 11 (above): Global average surface temperature record averaged over each decade since 1850 (expressed as temperature difference from the 1961-1990 average). The uncertainty in the observed estimates is shown in the error bars. Source: Met Office (Larger version of Figure 11 (PNG, 235 Kb))
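The decade-by-decade averaging behind the chart described above is simple to reproduce. The sketch below uses invented anomaly values purely for illustration:

```python
from collections import defaultdict

def decadal_means(annual):
    """Average a {year: temperature_anomaly} series over each decade.

    Averaging over ten-year blocks removes much of the short-term
    natural variability, as discussed above.
    """
    buckets = defaultdict(list)
    for year, value in annual.items():
        buckets[(year // 10) * 10].append(value)
    return {decade: sum(v) / len(v) for decade, v in sorted(buckets.items())}

# Illustrative, invented numbers -- not real observations:
series = {1991: 0.2, 1995: 0.3, 1999: 0.25, 2001: 0.4, 2005: 0.45}
means = decadal_means(series)  # {1990: 0.25, 2000: 0.425}
```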
- The rate of the warming observed has varied regionally, highlighting how important it is to avoid making deductions about global climate from what happens in one part of the world (Figure 12). For example, over the past century the rate of warming in much of the United States has been lower than in many other parts of the world and temperatures have decreased in the northern North Atlantic, while the rate of warming in the Arctic has been almost twice the global average. This regional variation occurs because local factors such as retreat of snow and ice and multi-decadal changes in ocean circulation affect the distribution of warming.
Figure 12 (above): Linear trend of annual temperatures 1901-2005. Grey areas indicate areas with insufficient data to calculate a robust trend. Trends significant at the 5% level are shown by white crosses. (Source: IPCC AR4) (Larger version of Figure 12 (JPG, 252 Kb) )
- 21. IPCC AR4, Working Group I Ch. 3,4&5 (2007)
- 22. Karl et al. (2006)
- 23. The extent of the decline in sea ice cover is variable through the year, reaching its minimum area in September. The decline observed in some winter months is not statistically significant against the background of natural variability.
- 24. National Snow and Ice Data Centre (NSIDC): www.nsidc.org
- 25. Note that changes in glacier extent are affected by changes in precipitation as well as temperature.
- 26. Zhang et al. (2007)
- 27. IPCC AR4, Working Group II Report, Ch 1. (2007)
Brighton Webs Ltd.
Statistics for Energy and the Environment
Monte Carlo Methods - Concept.
If the technique were being named today it would probably be called Las Vegas calculations and the output known as "Vegas Values". Monte Carlo methods are based on random numbers. For a long time, Monte Carlo was one of the best known venues for roulette and, because a fair roulette wheel is one of the earliest random number generators, a branch of mathematics has become linked to a Mediterranean seaside town.
The applications of Monte-Carlo methods are many and various, but most fall under a few common headings.
The Monte-Carlo Method
A trivial example of the method is the estimation of the area of a circle. It's trivial because there is a well known formula which is quick and easy to use, but this example has most of the elements of more complex applications.
Alternatively, you can use your browser to do it for you: enter the radius of the circle (values in the range 1 to 10) into the text box, click on the "estimate" button, and the estimated and exact areas will appear.
The estimate is generated by simulating the drawing of 10, 100, 1000 or 10,000 dots. By increasing the number of simulations, we can increase the accuracy and also the time taken to complete the process.
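The browser demo is not reproduced here, but the same dot-throwing estimate is a few lines of Python (an illustrative sketch, not the page's original code):

```python
import random

def estimate_circle_area(radius, dots=10_000, seed=42):
    """Monte Carlo estimate of a circle's area.

    Throw `dots` random points at the bounding square (side 2*radius)
    and count the fraction that land inside the circle.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(dots):
        x = rng.uniform(-radius, radius)
        y = rng.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            hits += 1
    square_area = (2 * radius) ** 2
    return square_area * hits / dots

estimate = estimate_circle_area(5.0)  # exact value is pi * 25, about 78.5
```

Increasing `dots` tightens the estimate at the cost of run time, exactly as described above.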
A real world situation
Moving on from the trivial to an application which is closer to the real world, that of a venture capital bank, albeit in an example presented in an over simplified form. Such banks invest in high risk projects and need to manage their risk. At one extreme, the performance of a bank with a single investment would be dependent on that investment: if it failed, the bank would lose money; if it was a spectacular success, the bank would be highly profitable. However, by spreading its funds over several ventures, the probability of failure is reduced but the profits from the successful ones are offset by the cost of the failures.
Monte Carlo methods provide a means of modelling the behaviour of a portfolio. In the example below, a fictional bank makes between 1 and 20 investments. Historically, 50% of investments fail to create marketable products. This can be modelled with the binomial distribution, which, for a given probability of success, provides an estimate of the number of successes for a given number of investments. The example below shows the probability of a given number of successes, for 10 investments.
Of those that start trading, the distribution of revenues is shown in the diagram:
It is a perversity of nature that the distribution of desirable outcomes is left skewed, i.e. the probability of a modest success is greater than that of a spectacular one. Hence sales have been modelled as left skewed, whilst costs are right skewed, i.e. the probability of exceeding budgets is great. This is shown in the diagram below:
Using these models we can estimate the bank's ROR (Rate of Return) using the process outlined in the simplified flowchart below:
The value of 1,000 simulations is arbitrary, in practice the number should be appropriate to the application. For example, if an event within the process occurs infrequently, the overall number of cycles should be large enough to ensure that the results include all likely outcomes.
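The loop in the flowchart can be sketched in a few lines. Note that the distributions below are invented stand-ins (triangular, with modest outcomes more likely than spectacular ones), since the article's exact sales and cost curves are not given:

```python
import random

def simulate_ror(n_projects, trials=1000, seed=1):
    """Toy Monte Carlo of the venture portfolio described above.

    Assumed model (for illustration only): each investment has a
    skewed cost around $10m, a 50% chance of failing outright, and
    successful ventures earn skewed revenues where modest returns
    are more likely than spectacular ones.
    """
    rng = random.Random(seed)
    rors = []
    for _ in range(trials):
        cost = revenue = 0.0
        for _ in range(n_projects):
            cost += rng.triangular(8.0, 20.0, 9.0)    # costs tend to overrun
            if rng.random() < 0.5:                    # 50% of ventures succeed
                revenue += rng.triangular(0.0, 60.0, 20.0)
        rors.append((revenue - cost) / cost)
    return rors

def prob_exceeding(rors, threshold=0.0):
    """Fraction of simulated portfolios whose ROR exceeds `threshold`."""
    return sum(r > threshold for r in rors) / len(rors)

p_single = prob_exceeding(simulate_ror(1))    # one volatile bet
p_twenty = prob_exceeding(simulate_ror(20))   # diversified portfolio
```

Evaluating `prob_exceeding` over a range of thresholds and portfolio sizes reproduces the shape of the line graph discussed below: more projects raise the chance of avoiding a loss while trimming the chance of spectacular returns.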
The results for investments in 1, 5, 10, 15 and 20 projects have been presented in the form of a line graph showing the probability that a given ROR will be exceeded.
In this graph, the red line shows the probability of the ROR exceeding 0 (i.e. not making a loss). For a single investment, the probability of not making a loss is 50%; by increasing the number of projects to 20, the probability of not making a loss rises to nearly 80%. However, this reduction in risk is offset by decreased upside (i.e. making large profits). For a single project, the probability of exceeding a 40% ROR is 42%; as the number of investments increases to 20, this figure falls to 34%.
The worldly wise amongst you will be saying that this is just a way of using a computer to illustrate the old adage "don't put all your eggs in one basket". However, this form of analysis does provide some understanding of how many baskets are needed for a given level of security.
The Monte-Carlo process can also provide some insight into the workings of the bank. For example, by analysing the sum of costs for all the investments, it can be seen that for 20 projects, the probability that $210m of capital will be adequate is 90%.
Hopefully between these two examples, I have shown the basic concept of the Monte-Carlo method and illustrated its application to a real world situation.
Page updated: 05-Aug-2011
Mail Online “absolutely wrong” to infer global cooling from new research - but that doesn't stop it warning of new ‘Ice Age’
- 09 May 2012, 15:45
- Verity Payne
New research was published in the journal Nature Geoscience suggesting that a period of low solar activity about 2,800 years ago was associated with a sudden increase in windy weather in western Europe in late winter and early spring.
A Mail Online article uses this research to infer that we might now face 'global cooling' - adding to the ongoing series of articles the Mail has published that appear to be an attempt to dismiss man-made climate change.
Just as on a number of previous occasions in which the Mail has linked new research into the sun's effects on climate to the prospect of a 'new ice age' or 'mini ice age', the Mail's interpretation of the new scientific paper is plain wrong.
The researchers involved studied lake sediments from Germany which were deposited around 2-3 thousand years ago. They find that during a period of very low solar activity called the 'Homeric minimum', which has already been associated with cooling in west Europe, there was a sudden increase in windiness.
They also used climate model simulations to show that low solar activity leads to changes in atmospheric circulation patterns which cause cooling over northern and middle Europe and higher temperatures over Greenland - a finding which agrees with the proxy evidence.
The overall conclusion is that:

"[...] the combination of both proxy data [the sediments] and climate models highlights a possible role of the solar forcing not only during the winter but also on the early spring climate over the European Atlantic sector."
From this, and indeed from the title of the research paper - "Regional atmospheric circulation shifts induced by a grand solar minimum" - it is clear that the paper investigates a regional shift in climate. To paraphrase the paper's conclusions, past changes to the sun's activity affected climate in parts of Europe, as suggested by previous research. The new results are interesting in that they advance and consolidate current understanding.

This may be an obvious point, but the paper isn't discussing changes to global temperature.
However, in sticking to what appears to be a recent decision to step up poor climate science reporting, the Mail Online plumps for the headline "Is 'global cooling' on the way? Lake sediment proves sun cooled earth 2,800 years ago - and it could happen again soon".

The sub-headlines go on to say that the "Sun's activity CAN cause changes in Earth's climate," and that the research "May throw predictions of global warming out of whack".
We asked the study's lead author Celia Martin-Puertas, of the Helmholtz Centre Potsdam, for her views on the Mail article. Did her paper predict that the planet was about to cool? She told us:

"it is absolutely wrong that our study may predict a global cooling in the future."
She also pointed out that the Mail appears to have missed this important paragraph from the press release accompanying the study:
"[these] findings cannot be directly
transferred [to] future projections because the current climate is
additionally affected by anthropogenic forcing, they provide clear
evidence for still poorly understood aspects of the climate system.
[...] Only when the mechanisms of solar-climate links are better
understood a reliable estimate of the potential effects of the next
Grand solar minimum in a world of anthropogenic climate change will
In other words the Mail's claim that this research "May throw predictions of global warming out of whack" is unfounded, and contradicts the press release from the scientists who conducted the research.
Inevitable leap to a 'new ice age'
'Global cooling' is not the only dramatic inaccuracy the Mail introduces into its reporting of this research. The article also manages to claim that we might face an ice age if solar activity declines:

"Some scientists suspect that the current period of high solar activity - including increased sunspots and solar storms thsi [sic] year - will be followed by a 'minimum' period, which could even cause an Ice Age"
The 'impending ice age' warning is an outright misrepresentation. And yet it's a claim that has proved bewilderingly popular with some British newspapers over the last year or so. Warnings of 'ice ages' from low solar activity just won't go away, as we have documented.
Just to be clear: As far as we are aware, and having spoken with various scientists about their work in this area over the past year or so, there is no scientific evidence or research which suggests we're going to see a new ice age in the near future, even in the event of a 'grand solar minimum'.
Based on a blog?
As we've noted before, Mail Online makes a habit of basing its climate stories on climate skeptic blogs. One of its favourite sources seems to be the Register - an IT blog that takes an inexplicably skeptical line on climate science.
So we have to wonder whether it is just a coincidence that yesterday the Register featured a blog post about this new research, saying that it "flies counter to theories offered by carbon-alarmist climate scientists". Could that be where the Mail Online story originated?
Math Skills Review
The Quadratic Equation
Many times in Chemistry, e.g. when solving equilibrium problems, a quadratic equation results. It has the general form:

ax² + bx + c = 0

and its roots are given by the quadratic formula:

x = [-b ± √(b² - 4ac)] / 2a

There are two roots (answers) to a quadratic equation, because of the ± in the formula. In most chemistry problems, only one answer will be meaningful and have physical significance. This means that one answer will make sense, the other answer won't. This will be obvious! Usually when the WRONG answer is plugged in, it will lead to a negative concentration or amount. Since nothing can exist as a negative concentration, the other answer must be the RIGHT one.
Let's work through a typical quadratic calculation that you might find in equilibrium problems.

49.0 = x² / [(0.300 - x)(0.100 - x)]

To expand the denominator, multiply the two terms together:

(0.300 - x)(0.100 - x) = 0.0300 - 0.400x + x²

Then we have:

49.0 = x² / (0.0300 - 0.400x + x²)

If we cross-multiply (See the review on Algebraic Manipulation), we get:

1.47 - 19.6x + 49.0x² = x²

If we then subtract x² from both sides, we can rearrange the equation to get a quadratic equation:

48.0x² - 19.6x + 1.47 = 0

Now, plug the numbers into the quadratic formula, where a = 48.0, b = -19.6 and c = 1.47:

x = [19.6 ± √((-19.6)² - 4(48.0)(1.47))] / (2 × 48.0) = (19.6 ± 10.1) / 96.0

x = 0.309 or x = 0.099
Chemical Equilibrium Application: At this point, it may be difficult for you to see which root (answer) is useful and which one is not. Let me give you the original problem:
Consider the following equilibrium having an equilibrium constant = 49.0 at a certain temperature:

A + B ⇌ C + D

If 0.300 mol of A and 0.100 mol of B are mixed in a 1.00 liter container and allowed to reach equilibrium, what concentrations of A and B will react and what concentrations of C and D will be formed?
In this particular problem, the initial concentrations of two reactants were 0.300 M and 0.100 M - these numbers appeared in the denominator of the original problem. The value of x represents the concentration of these reactants that were converted into products. If 0.309 M of one reactant was lost, that would leave behind (0.300 - 0.309) = -0.009 M of one reactant and (0.100 - 0.309) = -0.209 M of the other reactant. Since it is impossible to have a negative concentration remaining, the 0.309 number is extraneous (meaningless) and the other, x = 0.099 is the root we are interested in. Therefore, A and B both lost 0.099 M and the equilibrium concentrations of both C and D are 0.099 M.
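The root-selection logic can be automated. Here is a small illustrative helper (not from the original page) that solves the quadratic and rejects any root that would leave a negative concentration:

```python
import math

def physical_root(a, b, c, initial_concs):
    """Solve a*x^2 + b*x + c = 0 and return the chemically meaningful root.

    A root x is rejected if subtracting it from any initial
    concentration would leave a negative amount.
    """
    disc = b * b - 4 * a * c
    roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]
    for x in roots:
        if all(conc - x >= 0 for conc in initial_concs):
            return x
    raise ValueError("no physically meaningful root")

# The worked example: 48.0x^2 - 19.6x + 1.47 = 0 with [A]0 = 0.300, [B]0 = 0.100
x = physical_root(48.0, -19.6, 1.47, [0.300, 0.100])  # ~0.099
```

For the worked example it discards 0.309 (which would leave -0.209 M of B) and returns 0.099.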
pvm_gather - A specified member of the group receives messages from each member of the group and gathers these messages into a single array.
C int info = pvm_gather( void *result, void *data, int count, int datatype, int msgtag, char *group, int rootginst)
Fortran call pvmfgather(result, data, count, datatype, msgtag, group, rootginst, info)
result On the root this is a pointer to the starting address of an array datatype of local values which are to be accumulated from the members of the group. If n if the number of members in the group, then this array of datatype should be of length at least n*count. This argument is meaningful only on the root.
data For each group member this is a pointer to the starting address of an array of length count of datatype which will be sent to the specified root member of the group.
count Integer specifying the number of elements of datatype to be sent by each member of the group to the root.
datatype Integer specifying the type of the entries in the result and data arrays.
msgtag Integer message tag supplied by the user. msgtag should be >= 0. It allows the user's program to distinguish between different kinds of messages.
group Character string group name of an existing group.
rootginst Integer instance number of the group member who performs the gather of the messages from the members of the group.
info Integer status code returned by the routine. Values less than zero indicate an error.
pvm_gather() performs a send of messages from each member of the group to the specified root member of the group. All group members must call pvm_gather(), each sends its array data of length count of datatype to the root which accumulates these messages into its result array. It is as if the root receives count elements of datatype from the ith member of the group and places these values in its result array starting with offset i*count from the beginning of the result array. The root task is identified by its instance number in the group.
In using the scatter and gather routines, keep in mind that C stores multidimensional arrays in row order, typically starting with an initial index of 0; whereas, Fortran stores arrays in column order, typically starting with an offset of 1.
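The result layout at the root (member i's values landing at offset i*count) can be modelled in a few lines of ordinary Python - a sketch of the data movement only, not PVM code:

```python
def model_gather(contributions, count):
    """Model pvm_gather's result layout at the root.

    `contributions[i]` is the length-`count` data array sent by the
    group member with instance number i; the root places member i's
    values starting at offset i * count in the result array.
    """
    result = []
    for i, data in enumerate(contributions):
        assert len(data) == count
        result[i * count:(i + 1) * count] = data
    return result

# Three members each contribute two values:
gathered = model_gather([[1, 2], [3, 4], [5, 6]], count=2)  # [1, 2, 3, 4, 5, 6]
```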
Note: pvm_gather() does not block. If a task calls pvm_gather() and then leaves the group before the root has called pvm_gather(), an error may occur.
The current algorithm is very simple and robust. A future implementation may make more efficient use of the architecture to allow greater parallelism.
info = pvm_gather(&getmatrix, &myrow, 10, PVM_INT, msgtag, "workers", rootginst);
CALL PVMFGATHER(GETMATRIX, MYCOLUMN, COUNT, INT4, MTAG, 'workers', ROOT, INFO)
These error conditions can be returned by pvm_gather | <urn:uuid:80d939e4-aa0f-4399-8c08-28c8d67090c5> | 3.25 | 650 | Documentation | Software Dev. | 52.646304 |
I’m not a mathematician (if you are, please send some tips to improve this post), but I saw an interesting math problem going around the internet and I felt like posting my solution. This is the problem as I first saw it:
Everyone seems to have solved this problem by looking for a function that relates the two numbers. I actually tried to solve it assuming the numbers were equal to each other (more on that later). Let’s try looking for a function first.
A simple solution
If you’re looking for a simple pattern, you may notice the numbers on the left are increasing in increments of one, and the numbers on the right by increments of 11. Mind the gap, though, we’re solving for 117 instead of 116. Following this pattern we’ve increased by 2 on the left and should therefore add 22 on the right. So the answer is 79.
This is just a simple linear regression, so you can solve it to find a formula you can use for any number:
y = 11x - 1208
There you go, a simple formula that works for all our test cases, and we can easily plug in 12345 and find that the answer with this pattern would be 134587.
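The fit can be checked in a couple of lines. The closed form y = 11x − 1208 is what the linear pattern through the test cases works out to:

```python
# Linear pattern implied by the test cases: y = 11*x - 1208.
f = lambda x: 11 * x - 1208

assert [f(x) for x in range(111, 116)] == [13, 24, 35, 46, 57]
print(f(117), f(12345))  # 79 134587
```
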
An alternative trick
Some people used an alternative pattern. They took the last digit of the number on the left as the first digit of the answer; the remaining digit(s) of the answer are the sum of the digits of the number on the left. So for 113 the answer starts with 3 (the last digit) and ends with 5 (the sum of 1, 1, and 3), giving 35. Again, this matches all the test cases.
A different interpretation
I was trying to solve a different problem. I assumed the numbers were actually equal. Of course 111 is not equal to 13 in our familiar decimal numeral system, but programmers are used to working with alternative bases like binary or base 2, octal or base 8 and hexadecimal or base 16.
This is the pattern I found:
111 (base 2) = 13 (base 4) = 7
112 (base 3) = 24 (base 5) = 14
113 (base 4) = 35 (base 6) = 23
114 (base 5) = 46 (base 7) = 34
115 (base 6) = 57 (base 8) = 47
If you continue this pattern:
116 (base 7) = 68 (base 9) = 62
117 (base 8) = 79 (base 10) = 79
118 (base 9) = 8A (base 11) = 98
(Note: Once you go past decimal, or base 10, the number system involves letters. So A is the “number” after 9.)
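Python’s int() accepts an explicit base, so the whole base pattern can be verified mechanically. A quick sketch (the digit string just supplies “A” for ten):

```python
digits = "0123456789A"

# 11d in base b equals the two-digit answer (b-1)(b+1) in base b+2,
# for b = 2 .. 9 (covering the rows 111 through 118 above).
for b in range(2, 10):
    left = int("11" + digits[b - 1], b)
    right = int(digits[b - 1] + digits[b + 1], b + 2)
    assert left == right
    print("11%s (base %d) = %s%s (base %d) = %d"
          % (digits[b - 1], b, digits[b - 1], digits[b + 1], b + 2, left))
```
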
I find it interesting that three pretty simple patterns all work for 7 straight test cases, even though my interpretation solved a different problem! However, they do begin diverging at 118.
|x|y (simple solution)|y (composition)|y (number systems)|y (number systems in decimal)|
|---|---|---|---|---|
|117|79|79|79|79|
|118|90|810|8A|98|
The way the problem is stated is not precise. The popular solutions assume there is an implicit function. The problem could be more precisely stated as:
Given: f(111) = 13, f(112) = 24, f(113) = 35, f(114) = 46, f(115) = 57. Find: f(117).
My solution assumes there are unknown implicit bases. I’m not exactly sure how to state that, but it’s something like this:
Given (subscripts denote bases):
111_f(x) = 13_g(x)
112_f(x) = 24_g(x)
113_f(x) = 35_g(x)
114_f(x) = 46_g(x)
115_f(x) = 57_g(x)
Find: f(x), g(x), f(117) and g(117)
I’m curious where this problem originated. Occam’s Razor suggests they had the simple solution in mind, but if the problem is really “for genuises”, then counting by 11 is a bit trivial. It’s too bad they didn’t ask about 118.If the problem was really “for genuises”, then it seems like there’d be a bit more to it than incrementing by 11. They should have asked about 118. | <urn:uuid:6f5b3a6b-b0b4-4296-97c2-6a55fa1c66dd> | 2.9375 | 849 | Personal Blog | Science & Tech. | 74.745645 |
© Roy Anderson
(Maps updated 30th November 2009)
Agonum muelleri (Herbst, 1784)
Description: A 7-9.5mm long black beetle, with brassy or purplish elytra and bright green reflecting, metallic foreparts. It occurs on open, moderately dry ground, including arable land. Widely distributed.
World Distribution: A Eurosiberian Wide-temperate species (64), widespread in Europe except the extreme north, east to western Siberia. Introduced in North America.
Irish Status: Widespread and common throughout.
Ecology: Next to Agonum fuliginosum probably the most widespread of the genus. Occurs frequently on agricultural land and a variety of peatlands, as well as in riparian habitats. | <urn:uuid:538852bd-9d15-4c6f-b4fa-57531d7ec2a9> | 2.859375 | 190 | Knowledge Article | Science & Tech. | 37.496667 |
For four decades, Amory Lovins has been a leading proponent of a renewable power revolution that would wean the U.S. off fossil fuels and usher in an era of energy independence. In an interview with Yale Environment 360, he talks about his latest book, which describes his vision of how the world can attain a green energy future by 2050.
Amory B. Lovins is fond of referring to the Rocky Mountain Institute, where he serves as chairman and chief scientist, as a “think and do” tank, and it’s clear that to Lovins the doing is every bit as important as the thinking. Hardly lacking in confidence or ambition, Lovins — in conjunction with his colleagues at the institute — has published Reinventing Fire, his step-by-step blueprint for how to transition to a renewable energy economy by mid-century.
Impressive in both its scope and detail — Lovins discusses everything from how to redesign heavy trucks to make them more fuel efficient to ways to change factory pipes to conserve energy — the book lays out a plan for the U.S. to achieve the following by 2050: cars completely powered by hydrogen fuel cells, electricity, and biofuels; 84 percent of trucks and airplanes running on biomass fuels; 80 percent of the nation’s electricity produced by renewable power; $5 trillion in savings; and an economy that has grown by 158 percent. … Continue Reading | <urn:uuid:2441bebc-4f2d-49f5-9f43-c777749eef23> | 2.84375 | 291 | Truncated | Science & Tech. | 44.485071 |
These changes were already in place 3.5 million years ago. One of our ancient relatives, Australopithecus afarensis, had a remarkably human foot and was clearly already walking around on two legs. Some scientists have taken this to mean that hominins such as Lucy (the most famous A. afarensis specimen) necessarily walked on the ground. After all, human feet are supposedly ill-suited for life in the trees.
But try telling that to the gentleman in the video below. He’s one of the Twa pygmies—a group of Ugandan hunter-gatherers who often climb trees in search of food, such as honey and fruit. Like other Twa men, he started from an early age. And he’s clear proof that a human foot is no impediment to walking straight up a trunk.
The footage was shot by Vivek Venkataraman, Thomas Kraft andNathaniel Dominy from Dartmouth College. The trio originally started studying the Twa to understand the evolution of their short five-foot stature but were awestruck at how adeptly they could climb. “We tried to climb the same trees, but we found it extremely difficult,” says Venkataraman. “The Twa were quicker, more agile, and highly coordinated.” | <urn:uuid:12c4ac11-3036-47b9-86f8-52a47b300cb8> | 3.359375 | 275 | Truncated | Science & Tech. | 50.829808 |
In the period between 1900 and 1938 (BD) natural selection is complicated by other forces. Though the color of the dark form gives it an advantage over the light, the new trait is introduced into a system of other traits balanced for the light form; thus the dark form is at first at a considerable physiological disadvantage. In fact, when moths of the dark form were crossed with moths of the light form 50 years ago, the resulting broods were significantly deficient in the dark form. When the same cross is made today, the broods contain more of the dark form than one would expect. The system of hereditary traits has become adjusted to the new trait.
There is evidence that other changes take place during the period BD. Specimens of the peppered moth from old collections indicate that the earliest melanics were not so dark as the modern dark form: they retained some of the white spots of the light form. Today a large proportion of the moths around a city such as Manchester are jet black. Evidently when the early melanics inherited one gene for melanism, the gene was not entirely dominant with respect to the gene for light coloration. As the gene complex adjusted to the mutation, however, the new gene became almost entirely dominant.
When the dark form comprises about 10 percent of the population, it may jump to 90 percent in as little as 15 or 20 years. This is represented by period DE on the graph. Thereafter the proportion of the dark form increases at a greatly reduced rate.
Eventually one of two things must happen: either the light form will slowly be eliminated altogether, or a balance will be struck so that the light form continues to appear as a small but definite proportion of the population. This is due to the fact that the moths which inherit one gene for dark coloration and one for light (heterozygotes) have an advantage over the moths which inherit two genes for dark coloration (homozygotes). And when two heterozygotes mate, a quarter of their offspring will have two genes for light coloration, i.e., they will be light. Only after a very long period of time, therefore, could the light forms (and with them the gene for light coloration) be entirely eliminated. This period of removal, represented by EF on the diagram, might be more than 1,000 years. Indications so far suggest, however, that complete removal is unlikely, and that a balance of the two forms would probably occur. In this balance the light form would represent about 5 percent of the population.
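The slow-removal argument can be illustrated with a toy one-locus selection model (an illustration with made-up numbers, not the author's calculation): once the recessive "light" allele is rare it hides in heterozygotes, so selection against the light phenotype barely touches it.

```python
# Toy model: q is the frequency of the recessive "light" allele; the
# light (qq) phenotype suffers a selection disadvantage s each generation.
def next_q(q, s):
    w_bar = 1 - s * q * q            # mean fitness of the population
    return q * (1 - s * q) / w_bar   # standard recessive-selection step

q, s, gens = 0.9, 0.2, 0
while q > 0.05 and gens < 100000:
    q = next_q(q, s)
    gens += 1
print(gens)  # on the order of a hundred generations even with strong selection
```

With weaker selection than the invented s = 0.2 used here, removal stretches out over far more generations, consistent with the 1,000-year figure in the text.
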
The mechanisms I have described are without doubt the explanation of industrial melanism: normal mutation followed by natural selection resulting in an insect of different color, physiology and behavior. Industrial melanism involves no new laws of nature; it is governed by the same mechanisms which have brought about the evolution of new species in the past.
There remains, however, one major unsolved problem. Why is it that, in almost all industrial melanics, the gene for melanism is dominant? Many geneticists would agree that dominance is achieved by natural selection, that it is somehow related to a successful mutation in the distant past. With these thoughts in mind I recently turned my attention away from industrial centers and collected moths in one of the few remaining pieces of ancient Caledonian pine forest in Britain: the Black Wood of Rannoch. Located in central Scotland far from industrial centers, the Black Wood is probably very similar to the forests that covered Britain some 4,000 years ago. The huge pines of this forest are only partly covered with lichens. Here I found no fewer than seven species of moths with melanic forms.
I decided to concentrate on the species Cleora repandata, the dark form of which is similar to the dark form of the same species that has swept through central England. This dark form, like the industrial melanics, is inherited as a Mendelian dominant. Of just under 500 specimens of C. mpandata observed, 10 percent were dark. | <urn:uuid:a13bf5f2-c437-4c78-ba87-77f0f9635791> | 4.21875 | 819 | Academic Writing | Science & Tech. | 45.573827 |
PostgreSQL is an “Object Relational Database Management System (ORDBMS)”. This database management system (DBMS) is similar to a relational database but with an object-oriented database model: objects, classes and inheritance are directly supported in database schemas and in the query language. It supports a large number of programming interfaces, including ODBC, Java (JDBC), Tcl/Tk, PHP, Perl and Python.
Let me explain some advanced features of PostgreSQL:
* Foreign keys
* Transactional integrity
* Table Inheritance
* Multiversion concurrency control
* Supports complex queries
* Server-side languages: SQL, Java, Ruby, Python, Tcl, etc.
A trigger is a specification that the database should automatically execute a particular function whenever a certain type of operation is performed. Triggers can be defined to execute either before or after any INSERT, UPDATE, or DELETE operation, either once per modified row or once per SQL statement. If a trigger event occurs, the trigger's function is called at the appropriate time to handle the event.
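As a sketch of the syntax (the table and function names here are invented; PostgreSQL wraps the trigger body in a function):

```sql
-- Hypothetical example: log the id of every updated row in "accounts".
CREATE FUNCTION log_update() RETURNS trigger AS $$
BEGIN
    INSERT INTO audit_log (row_id, changed_at) VALUES (NEW.id, now());
    RETURN NEW;   -- let the UPDATE proceed with the new row
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_audit
AFTER UPDATE ON accounts
FOR EACH ROW EXECUTE PROCEDURE log_update();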
CREATE VIEW defines a view of a query. The view is not physically materialized. Instead, the query is run every time the view is referenced in a query.
Table inheritance, can be a useful tool for database designers. In PostgreSQL, a table can inherit from zero or more other tables, and a query can reference either all rows of a table or all rows of a table plus all of its descendant tables.
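A minimal sketch of table inheritance (table names invented):

```sql
CREATE TABLE cities (name text, population int);
CREATE TABLE capitals (country text) INHERITS (cities);  -- adds a column

SELECT name FROM cities;       -- rows of cities plus all of capitals
SELECT name FROM ONLY cities;  -- rows of cities alone, descendants excluded
```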
Features like foreign key references and views allow you to hide the complexity of the database from the application. You can also avoid the creation of complicated SQL commands.
Multiversion concurrency control allows two or more sessions to access the same data at the same time in a database.
Basic PSQL commands
\l — List all databases
\c dbname — Connect to a different database
\dt — List all relations/tables
\d tablename — Describe the details of the given table
\h — Get help on the syntax of SQL commands
\? — List all psql slash commands
\set — List all psql system variables
Why use PostgreSQL?
There are several reasons to go with PostgreSQL; I will try to give a brief outline. A well-tuned Postgres is pretty close to MySQL on SELECT performance with small databases. With large tables MySQL has some bad performance problems, and Postgres performs much better. Write performance is also an issue with MySQL: with a lot of traffic, it has serious problems with concurrent writes. Under heavy load, Postgres performs much better.
Security is huge, and PostgreSQL's strengths include its support for and focus on data integrity, granular access controls, and ACID compliance.
PostgreSQL operates on the principle that certain users have certain types of access to data. In PostgreSQL, these are called “roles” and can be created or managed using CREATE ROLE, ALTER ROLE, and DROP ROLE. Unlike MySQL, these can also be mapped and tied to system users, which means it can leverage different forms of authentication: ident server authentication, LDAP server authentication, PAM, and Kerberos. For local connections, you can also use filesystem permissions by changing who can access the UNIX domain socket, and where it is located.
What is a PostgreSQL database used for?
PostgreSQL is used by several different web programming languages including PHP and Python. These programming languages make it extremely easy to connect to a PostgreSQL database. It is also used by many content management systems such as Joomla and WordPress.
In closing, as we know, every advantage comes with disadvantages, and PostgreSQL also has some weak points. But PostgreSQL is ideal for those who want to create a web application and whose main concern is performance. Finally, it is up to you to find which database meets your requirements. If you have any questions, we would be happy to talk to you! :)
About the Author:
Saurabh Suman works as a Software Engineer in Bobcares. He loves reading books and listening to music in his free time. | <urn:uuid:cb964985-aff0-466b-b16d-3d00e01936be> | 3.296875 | 863 | Personal Blog | Software Dev. | 39.664821 |
Of the forces of nature, gravity is probably the least understood. Gravity is everywhere but the mechanism through which it propagates is not known.
Gravity is an effect of mass, it appears. The gravity field can be calculated with good accuracy given the amount and density distribution of a body of matter. Gravity does not seem to be affected by the materials involved and does not seem to be shielded by anything.
Even what fundamentally constitutes mass and why this mass has inertia is not understood. Inertia and gravity are linked because the effects of gravity are not distinguishable from the effects of an accelerating frame. Empirically, an object's mass determines the force with which gravity pulls it and likewise determines the force with which the same object pushes against an accelerating platform.
General relativity predicts the existence of gravity waves but none such have been detected. These waves would have a particle manifestation called the graviton, a bit like light waves have the photon. The graviton would be the quantum of bending of spacetime by gravity.
General relativity does not see gravity exactly as a force like EM for example. Instead, gravity is curvature of spacetime, geometric in nature. We may understand this by thinking that objects on which no external force works travel in straight lines in 'flat' space. When the space is curved, objects continue to follow lines but these are not straight. For example, for an object orbiting a planet, the straight line is now an ellipse. A line followed by an object is called a geodesic. The closer the object comes to the source of the gravity, the more the space is curved, i.e. the more the geodesic deviates from the straight line. This comes basically to the same as seeing gravity as a force acting on all parts of a body, proportionally to the inverse of the square of the distance. Seeing gravity as geometry rather than force has however advantages in more complex cases.
The speed of propagation of gravity is a matter of debate. This is not readily measurable because gravity cannot be modified in an experimental setting, at least not with any generally known technology. If an object moves, does the vector of the force of gravity point to the present position of the object or at a position the object occupied in the past? Observations are not conclusive.
Of the four forces, gravity is by far the weakest. It has been proposed that gravity act along hidden dimensions, using part of its force towards objects not perceived in the 3+1 dimensions of spacetime.
Various technologies for manipulating gravity have been proposed. None of them are officially recognized to work and theoretical understanding of gravity is not to be found in the public domain. Unifying gravity with the three other forces (EM, strong and weak nuclear forces) is considered to be the holy grail of physics. Approaches towards this include string theory, quantum gravity and other multidimensional theories such as Kaluza-Klein theories. There is reason to believe that key work is suppressed from the public domain and that the science establishment is chasing its tail while the 'real' theoretical and applied work takes place in secret.
The Cassiopaea material comments on gravity in many places. Gravity is, according to this source, the binder of all physical and all ethereal. There is nothing that is not derived from gravity. Even thoughts have gravity. Gravity does not go anywhere, it is eternal and omnipresent but it can be gathered and dispersed in the form of so-called unstable gravity waves. The principles of STO and STS are, as all else, also reflected in terms of gravity, as dispersion and gathering of gravity waves, respectively. Gravity pervades all densities. It can be manipulated by consciousness, sound or other means but details are not given. Manipulation of gravity was used in ancient history for building and even for people levitating. We have no theoretical framework for placing this information at present. Items such as the UFO phenomenon suggest that gravity is most likely technologically manipulable, also the evidence of some ancient buildings would suggest that something of the sort is possible. | <urn:uuid:5a46c986-19ad-4241-a964-d2f57be64b9e> | 3.9375 | 835 | Structured Data | Science & Tech. | 38.025181 |
Beware that if your static block throws an Exception, then you may get java.lang.NoClassDefFoundError when you try to access the class which failed to load.
This shows that static methods cannot be overridden in Java and that the concept of method overriding doesn't apply to static methods. Declaring the same static method in a child class is instead known as method hiding in Java.
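A small self-contained sketch of the difference (class names are invented): the static call is resolved against the class named at compile time, while a true instance-method override is dispatched on the object's runtime type.

```java
// Sketch of method hiding vs. overriding (class names are invented).
class Parent {
    static String describe() { return "Parent"; }   // hidden, not overridden
    String who() { return "parent instance"; }
}

class Child extends Parent {
    static String describe() { return "Child"; }    // hides Parent.describe()
    @Override
    String who() { return "child instance"; }       // true overriding
}

class StaticHidingDemo {
    public static void main(String[] args) {
        // Static calls resolve against the class named at compile time:
        System.out.println(Parent.describe());  // Parent
        System.out.println(Child.describe());   // Child
        // Instance calls dispatch on the runtime type of the object:
        Parent p = new Child();
        System.out.println(p.who());            // child instance
    }
}
```
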
Best practices - static variable and static method in Java
Here are some of the best practices you can follow while using static variable and method in Java.
1. Consider making a static variable final in Java to make it constant and avoid changing it from anywhere in the code. Also remember that if you change the value of a static final variable (for example, String constants used in an enum-like pattern), you need to recompile all classes which use that variable, because static final constants are inlined at compile time into the classes that use them.
That's all on What is static variable, method and nested static class in Java. knowledge of static keyword in Java is must for any Java programmer and skill to find out when to use static variable or static method is an important skill. Incorrect and careless use of static variable and static method in Java will result in serious concurrency issues like deadlock and race condition in Java. | <urn:uuid:cd2f0e81-0a2d-4b6e-b4f7-23b5f773823f> | 3.21875 | 250 | Personal Blog | Software Dev. | 46.749044 |
This module implements pseudo-random number generators for various distributions.
For integers, uniform selection from a range. For sequences, uniform selection of a random element, a function to generate a random permutation of a list in-place, and a function for random sampling without replacement.
On the real line, there are functions to compute uniform, normal (Gaussian), lognormal, negative exponential, gamma, and beta distributions. For generating distributions of angles, the von Mises distribution is available.
Almost all module functions depend on the basic function random(), which generates a random float uniformly in the semi-open range [0.0, 1.0). Python uses the Mersenne Twister as the core generator. It produces 53-bit precision floats and has a period of 2**19937-1. The underlying implementation in C is both fast and threadsafe. The Mersenne Twister is one of the most extensively tested random number generators in existence. However, being completely deterministic, it is not suitable for all purposes, and is completely unsuitable for cryptographic purposes.
The functions supplied by this module are actually bound methods of a hidden instance of the random.Random class. You can instantiate your own instances of Random to get generators that don’t share state. This is especially useful for multi-threaded programs, creating a different instance of Random for each thread, and using the jumpahead() method to make it likely that the generated sequences seen by each thread don’t overlap.
Class Random can also be subclassed if you want to use a different basic generator of your own devising: in that case, override the random(), seed(), getstate(), setstate() and jumpahead() methods. Optionally, a new generator can supply a getrandbits() method — this allows randrange() to produce selections over an arbitrarily large range.
New in version 2.4: the getrandbits() method.
As an example of subclassing, the random module provides the WichmannHill class that implements an alternative generator in pure Python. The class provides a backward compatible way to reproduce results from earlier versions of Python, which used the Wichmann-Hill algorithm as the core generator. Note that this Wichmann-Hill generator can no longer be recommended: its period is too short by contemporary standards, and the sequence generated is known to fail some stringent randomness tests. See the references below for a recent variant that repairs these flaws.
Changed in version 2.3: Substituted MersenneTwister for Wichmann-Hill.
random.seed([x])
Initialize the basic random number generator. Optional argument x can be any hashable object. If x is omitted or None, current system time is used; current system time is also used to initialize the generator when the module is first imported. If randomness sources are provided by the operating system, they are used instead of the system time (see the os.urandom() function for details on availability).
Changed in version 2.4: formerly, operating system resources were not used.
If x is not None or an int or long, hash(x) is used instead. If x is an int or long, x is used directly.
random.getstate()
Return an object capturing the current internal state of the generator. This object can be passed to setstate() to restore the state.
New in version 2.1.
Changed in version 2.6: State values produced in Python 2.6 cannot be loaded into earlier versions.
random.setstate(state)
state should have been obtained from a previous call to getstate(), and setstate() restores the internal state of the generator to what it was at the time setstate() was called.
New in version 2.1.
random.jumpahead(n)
Change the internal state to one different from and likely far away from the current state. n is a non-negative integer which is used to scramble the current state vector. This is most useful in multi-threaded programs, in conjunction with multiple instances of the Random class: setstate() or seed() can be used to force all instances into the same internal state, and then jumpahead() can be used to force the instances’ states far apart.
New in version 2.1.
Changed in version 2.3: Instead of jumping to a specific state, n steps ahead, jumpahead(n) jumps to another state likely to be separated by many steps.
random.getrandbits(k)
Returns a python long int with k random bits. This method is supplied with the MersenneTwister generator and some other generators may also provide it as an optional part of the API. When available, getrandbits() enables randrange() to handle arbitrarily large ranges.
New in version 2.4.
Functions for integers:
random.randrange([start], stop[, step])
Return a randomly selected element from range(start, stop, step). This is equivalent to choice(range(start, stop, step)), but doesn’t actually build a range object.
New in version 1.5.2.
random.randint(a, b)
Return a random integer N such that a <= N <= b.
Functions for sequences:
random.choice(seq)
Return a random element from the non-empty sequence seq. If seq is empty, raises IndexError.
random.shuffle(x[, random])
Shuffle the sequence x in place. The optional argument random is a 0-argument function returning a random float in [0.0, 1.0); by default, this is the function random().
Note that for even rather small len(x), the total number of permutations of x is larger than the period of most random number generators; this implies that most permutations of a long sequence can never be generated.
random.sample(population, k)
Return a k length list of unique elements chosen from the population sequence. Used for random sampling without replacement.
New in version 2.3.
Returns a new list containing elements from the population while leaving the original population unchanged. The resulting list is in selection order so that all sub-slices will also be valid random samples. This allows raffle winners (the sample) to be partitioned into grand prize and second place winners (the subslices).
Members of the population need not be hashable or unique. If the population contains repeats, then each occurrence is a possible selection in the sample.
To choose a sample from a range of integers, use an xrange() object as an argument. This is especially fast and space efficient for sampling from a large population: sample(xrange(10000000), 60).
The following functions generate specific real-valued distributions. Function parameters are named after the corresponding variables in the distribution’s equation, as used in common mathematical practice; most of these equations can be found in any statistics text.
random.random()
Return the next random floating point number in the range [0.0, 1.0).
random.uniform(a, b)
Return a random floating point number N such that a <= N <= b for a <= b and b <= N <= a for b < a.
The end-point value b may or may not be included in the range depending on floating-point rounding in the equation a + (b-a) * random().
random.triangular(low, high, mode)
Return a random floating point number N such that low <= N <= high and with the specified mode between those bounds. The low and high bounds default to zero and one. The mode argument defaults to the midpoint between the bounds, giving a symmetric distribution.
New in version 2.6.
random.betavariate(alpha, beta)
Beta distribution. Conditions on the parameters are alpha > 0 and beta > 0. Returned values range between 0 and 1.
random.expovariate(lambd)
Exponential distribution. lambd is 1.0 divided by the desired mean. It should be nonzero. (The parameter would be called “lambda”, but that is a reserved word in Python.) Returned values range from 0 to positive infinity if lambd is positive, and from negative infinity to 0 if lambd is negative.
random.gammavariate(alpha, beta)
Gamma distribution. (Not the gamma function!) Conditions on the parameters are alpha > 0 and beta > 0.
random.gauss(mu, sigma)
Gaussian distribution. mu is the mean, and sigma is the standard deviation. This is slightly faster than the normalvariate() function defined below.
random.lognormvariate(mu, sigma)
Log normal distribution. If you take the natural logarithm of this distribution, you’ll get a normal distribution with mean mu and standard deviation sigma. mu can have any value, and sigma must be greater than zero.
random.normalvariate(mu, sigma)
Normal distribution. mu is the mean, and sigma is the standard deviation.
random.vonmisesvariate(mu, kappa)
mu is the mean angle, expressed in radians between 0 and 2*pi, and kappa is the concentration parameter, which must be greater than or equal to zero. If kappa is equal to zero, this distribution reduces to a uniform random angle over the range 0 to 2*pi.
random.paretovariate(alpha)
Pareto distribution. alpha is the shape parameter.
random.weibullvariate(alpha, beta)
Weibull distribution. alpha is the scale parameter and beta is the shape parameter.
class random.WichmannHill([seed])
Class that implements the Wichmann-Hill algorithm as the core generator. Has all of the same methods as Random plus the whseed() method described below. Because this class is implemented in pure Python, it is not threadsafe and may require locks between calls. The period of the generator is 6,953,607,871,644 which is small enough to require care that two independent random sequences do not overlap.
random.whseed([x])
This is obsolete, supplied for bit-level compatibility with versions of Python prior to 2.1. See seed() for details. whseed() does not guarantee that distinct integer arguments yield distinct internal states, and can yield no more than about 2**24 distinct internal states in all.
class random.SystemRandom([seed])
Class that uses the os.urandom() function for generating random numbers from sources provided by the operating system. Not available on all systems. Does not rely on software state and sequences are not reproducible. Accordingly, the seed() and jumpahead() methods have no effect and are ignored. The getstate() and setstate() methods raise NotImplementedError if called.
New in version 2.4.
Examples of basic usage:
>>> random.random()        # Random float x, 0.0 <= x < 1.0
0.37444887175646646
>>> random.uniform(1, 10)  # Random float x, 1.0 <= x < 10.0
1.1800146073117523
>>> random.randint(1, 10)  # Integer from 1 to 10, endpoints included
7
>>> random.randrange(0, 101, 2)  # Even integer from 0 to 100
26
>>> random.choice('abcdefghij')  # Choose a random element
'c'
>>> items = [1, 2, 3, 4, 5, 6, 7]
>>> random.shuffle(items)
>>> items
[7, 3, 2, 5, 6, 4, 1]
>>> random.sample([1, 2, 3, 4, 5], 3)  # Choose 3 elements
[4, 1, 5]
M. Matsumoto and T. Nishimura, “Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator”, ACM Transactions on Modeling and Computer Simulation Vol. 8, No. 1, January pp.3-30 1998.
Wichmann, B. A. & Hill, I. D., “Algorithm AS 183: An efficient and portable pseudo-random number generator”, Applied Statistics 31 (1982) 188-190.
Complementary-Multiply-with-Carry recipe for a compatible alternative random number generator with a long period and comparatively simple update operations. | <urn:uuid:bacff325-c143-4479-860e-11458e78ccc0> | 3.0625 | 2,439 | Documentation | Software Dev. | 55.404131 |
[Tutor] Recursion doubt
reachmsn at hotmail.com
Tue Apr 15 11:12:09 CEST 2008
At the url http://www.python.org/doc/essays/graphs.html there is some code by Guido Van Rossum for computing paths through a graph - I have pasted it below for reference -
Let's write a simple function to determine a path between two nodes. It takes a graph and the start and end nodes as arguments. It will return a list of nodes (including the start and end nodes) comprising the path. When no path can be found, it returns None. The same node will not occur more than once on the path returned (i.e. it won't contain cycles). The algorithm uses an important technique called backtracking: it tries each possibility in turn until it finds a solution.
def find_path(graph, start, end, path=[]):
    path = path + [start]
    if start == end:
        return path
    if not graph.has_key(start):
        return None
    for node in graph[start]:
        if node not in path:
            newpath = find_path(graph, node, end, path)
            if newpath:
                return newpath
    return None
*** He then says------------------------
It is simple to change this function to return a list of all paths (without cycles) instead of the first path it finds:
def find_all_paths(graph, start, end, path=[]):
    path = path + [start]
    if start == end:
        return [path]
    if not graph.has_key(start):
        return []
    paths = []
    for node in graph[start]:
        if node not in path:
            newpaths = find_all_paths(graph, node, end, path)
            for newpath in newpaths:
                paths.append(newpath)
    return paths
*** I couldn't understand how it was simple to change the function find_path into find_all_paths. How would you think about writing this second function recursively, especially the part "if start == end: return [path]"? I feel you would give square brackets around path here only after first writing the inductive part (for node in graph[start] ...) and then, by trial and error, putting square brackets around path in the basis part. Can someone please explain how to write this code? Thanks!
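One way to see why the base case returns [path] rather than path: the caller loops over the returned value and appends each element to paths, so every return value must be a list of paths, and a successful base case is a list containing exactly one path. Below is a Python 3 adaptation (the now-removed dict.has_key() is replaced with `in`, and the mutable default argument is avoided), run on the sample graph from Guido's essay:

```python
def find_all_paths(graph, start, end, path=None):
    # path=None avoids Python's shared mutable default argument.
    path = (path or []) + [start]
    if start == end:
        # One complete path found; wrap it in a list so the caller
        # can merge result lists from every branch uniformly.
        return [path]
    if start not in graph:          # Python 3: dict.has_key() is gone
        return []
    paths = []
    for node in graph[start]:
        if node not in path:
            for newpath in find_all_paths(graph, node, end, path):
                paths.append(newpath)
    return paths

# The sample graph from the essay.
graph = {'A': ['B', 'C'],
         'B': ['C', 'D'],
         'C': ['D'],
         'D': ['C'],
         'E': ['F'],
         'F': ['C']}

print(find_all_paths(graph, 'A', 'D'))
# [['A', 'B', 'C', 'D'], ['A', 'B', 'D'], ['A', 'C', 'D']]
```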
Continuous Solar Spectra
Name: Justin C.
My query is: why is it assumed that the light coming from the inner regions of the Sun contains a continuous spectrum? (This assumption must be made if we are to assume all dark lines are due to absorption at some point in the journey of the light; if light has been absorbed it must have been there in the first place!) Is it just that there are so many different possible jumps (because there are so many elements present in the Sun) that all wavelengths of photons are emitted?
You raise a valid question. Indeed the Sun (or any other "hot" star) is treated as a blackbody radiator because the "inside" is a very hot plasma (a collection of charged particles). On the surface there are sunspots (storms) that are cooler and appear "black" compared to their surroundings, but are still bright and hot compared to "normal" terrestrial temperatures. It is known from lab experiments that hot plasmas produce essentially a continuum of radiation, so that is some of the data supporting continuous radiation from "inside" the Sun. Also, the theoretical model of how stars produce radiation from the reactions occurring within the Sun and other stars is reasonably well understood, and the model predicts a continuum of radiation. Even if there were some variations in the internal solar spectra, they would be quite difficult to distinguish because the Planck blackbody radiation is just so intense at those temperatures; small variations would be swamped by that radiation. And we are talking about "stable" stars: during a supernova, or for other unstable stars, neutron stars, black holes, etc., things are much more complicated.
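The intensity point can be made concrete with Planck's law. The sketch below (illustrative, assuming a photospheric temperature of roughly 5800 K) evaluates the blackbody spectral radiance across a grid of wavelengths: every wavelength gets power (a smooth continuum), and the curve peaks near 500 nm, as Wien's displacement law predicts.

```python
import math

h = 6.626e-34   # Planck constant (J s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    a = 2.0 * h * c**2 / wavelength_m**5
    return a / (math.exp(h * c / (wavelength_m * k * temp_k)) - 1.0)

T_SUN = 5800.0  # approximate photospheric temperature (K)

# Evaluate on a coarse grid: the spectrum is nonzero everywhere and
# smooth, with a single broad peak -- a continuum.
wavelengths_nm = range(100, 2001, 100)
radiances = [planck(w * 1e-9, T_SUN) for w in wavelengths_nm]
peak_nm = max(zip(radiances, wavelengths_nm))[1]
print(peak_nm)  # near 500 nm, matching Wien's law (2.898e-3 m K / 5800 K)
```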
It is not just speculation that the absorption lines observed in high-resolution spectra of the Sun (and other hot, stable stars) are due to cooler gases surrounding the bright source. It is possible to measure the relative intensity of those absorptions (from any number of elements and/or molecules) and, from data obtained in terrestrial labs, to infer the temperature of the absorbers from the relative intensities of the absorptions. This applies not only to visible light but also to spectra extending from X-rays through the ultraviolet, visible, infrared, and even microwave frequencies. By piecing all the data together to make an internally consistent match to the observed solar (or other stellar) spectra, it is possible to determine whether the absorbers/emitters are on the stellar surface, in the various levels of the solar atmosphere, or even in the space between Earth and the light source.
Nonetheless, I think you raise a perceptive question.
Update: June 2012
A specially modified helicopter with a boom and winch underneath snags the parafoil chute attached to a model Genesis sample return capsule. The hook on the end of the boom collapses the chute, allowing the helicopter to retrieve the capsule in mid-air.
This photo was taken during successful trials of this novel capsule recovery technology.
Genesis returned its solar wind samples to Earth on Sept. 8, 2004.
Image Credit: NASA Jet Propulsion Laboratory
This table and the accompanying observations were first presented to the Russian Chemical Society in March 1869. (Actually, Mendeleev was ill, and his colleague Nikolai Menshutkin presented his paper [Menschutkin 1934].) The paper was published in the first volume of the new society's journal. That same year, a German abstract of the paper, consisting of the table and eight comments, was published in Zeitschrift für Chemie. The German abstract was the vehicle by which Mendeleev's ideas reached chemists working in Western Europe. An English translation of that German abstract is presented here. View a manuscript draft of the table.
By ordering the elements according to increasing atomic weight in vertical rows so that the horizontal rows contain analogous elements, still ordered by increasing atomic weight, one obtains the following arrangement, from which a few general conclusions may be derived.
This table contains several implicit predictions of unknown elements. Mendeleev soon retreated from this prediction of a heavier analogue of titanium and zirconium. His later tables [Mendeleev 1871] erroneously placed lanthanum in this spot. This original prediction was actually borne out in 1923 with the discovery of hafnium.
Rhodium (Rh) is misplaced. It belongs between ruthenium (Ru) and palladium (Pd). Technetium (Tc), the element which belongs between ruthenium and molybdenum (Mo) has no stable isotopes and was not synthesized until 1937.
Most of the elements in this column are slightly out of order. After tungsten (W) should come rhenium (Re), which was not yet discovered, followed by osmium (Os), iridium (Ir), platinum (Pt), gold (Au), mercury (Hg), thallium (Tl), lead (Pb), and bismuth (Bi). Bismuth, however, is placed correctly insofar as it completes the row beginning with nitrogen (N). At this time, lead was frequently miscategorized, placed among elements which form compounds with one atom of oxygen (PbO analogous to CaO, for example); however, lead also forms a compound with two atoms of oxygen (PbO2 analogous to CO2) and it belongs in the same group as carbon (C). Similarly, thallium was often placed among elements which form compounds with one atom of chlorine (TlCl analogous to NaCl, for example); however, thallium also forms a compound with three atoms of chlorine (TlCl3 analogous to BCl3) and it belongs in the same group as boron (B).
The classification of hydrogen has been an issue throughout the history of periodic systems. Some tables place hydrogen with the alkali metals (lithium, sodium, etc.), some with the halogens (fluorine, chlorine, etc.), some with both, and some in a box of its own detached from the main body of the table. Mendeleev's original table did none of the above, placing it in the same row as copper, silver, and mercury.
The prediction of an unknown analogue of aluminum was borne out by the discovery in 1875 of gallium (atomic weight = 70), the first of Mendeleev's predictions to be so confirmed. [Lecoq de Boisbaudran 1877]
Uranium (standard symbol U) is misplaced. Its atomic weight is actually more than double the value given here. The element which belongs between cadmium (Cd) and tin (Sn) is indium (In), and Mendeleev put indium there in the next version of his table [Mendeleev 1871]. The proper place for uranium, however, would not be found until the 1940s.
The prediction of an unknown analogue of silicon was borne out by the discovery in 1886 of germanium (atomic weight = 73). [Winkler 1886]
In German publications, J is frequently used instead of I as the chemical symbol for iodine (Jod, in German). Iodine is placed correctly after tellurium (i.e., with the halogens) despite having a lower atomic weight than tellurium. See comment 7 after the table.
The prediction of an unknown element following calcium is a weak version of Mendeleev's subsequent prediction of the element we now know as scandium, discovered in 1879 [Nilson 1879]. In Mendeleev's 1871 table [Mendeleev 1871] the missing element is correctly placed between calcium and titanium, and as an analogue of yttrium. That the 1869 prediction is flawed can be seen from the fact that every other entry in the bottom reaches of the table is wrong. (See next note.) Still, the prediction deserves more credit than van Spronsen gave it [van Spronsen 1969, p. 220]: "That the element next to calcium later proved to be scandium, was fortuitous; Mendeleev cannot be said to have already foreseen this element in 1869."
The elements placed in the last four rows of the table puzzled Mendeleev, as is apparent from the glut of question marks and the fact that several are out of order according to their assigned atomic weights. Many of these elements were rare and poorly characterized at the time. Didymium appeared in many lists of elements at this time, but it was later proved to consist of two elements, praseodymium and neodymium. The atomic weights of erbium, yttrium (standard symbol Y), indium, cerium, lanthanum, and thorium are wrong. The interdependence of atomic weights and chemical formulas that plagued determinations of atomic weight since the time of Dalton was still problematic for these elements. Most of these elements (erbium, yttrium, cerium, lanthanum, and the component elements of didymium) belong to the family of rare earths, a group whose classification would present problems for many years to come. (Thorium belongs to the group of elements immediately below most of the rare earths.) Many of the rare earths were not yet discovered, and (as already noted) the atomic weights of the known elements were not well determined. The chemical properties of the rare earths are so similar that they were difficult to distinguish and to separate. Mendeleev made some progress with these elements in the next couple of years. His 1871 table [Mendeleev 1871] has correct weights for yttrium, indium, cerium, and thorium, and correct classification for yttrium and indium.
Translator's note: In his 1889 Faraday lecture [Mendeleev 1889], Mendeleev used the word "periodicity" rather than the phrase "stepwise variation" in translating this sentence from his 1869 paper. "Periodicity" is certainly an appropriate term to describe the cyclic repetition in properties evident in this arrangement. It is worth noting, however, that the German words read in 1869 by Western European scientists (stufenweise Abänderung) lack the implication of repetition inherent in the term periodicity. --CJG
Groups of similar elements with consecutive atomic weights are a little-emphasized part of classification systems from Mendeleev's time and before (cf. Newlands 1864) to the present.
The existence of a very regular progression in atomic weight among elements with similar chemical behavior had attracted the attention of chemists almost from the time they began to measure atomic weights [Döbereiner 1829]. The triad of elements Mendeleev cites here includes two (rubidium and cesium) discovered in the early 1860s. Mendeleev's table, however, goes beyond strictly regular isolated triads of elements to a systematic classification (albeit not always correct) of all known elements.
The valence of an element is essentially the number of bonds that element can make when it forms compounds with other elements. An atom of hydrogen, for example, can make just one bond, so its valence is one; we call it monovalent. An atom of oxygen can bond with two atoms of hydrogen, so its valence is two. Some elements, particularly heavier elements, have more than one characteristic valence. (For example, lead has valence 2 and 4; thallium has valence 1 and 3. See note 4 above.) The elements in the cited series have valences 1, 2, 3, 4, 3, 2, and 1 respectively.
Mendeleev is correct in this observation. The two lightest elements, hydrogen and helium (the latter as yet unknown) are the most common elements in the universe, making up the bulk of stars. Oxygen and silicon are the most common elements in the earth's crust. Iron is the heaviest element among the most abundant elements in the stars and the earth's crust.
Although the chemical behavior of elements in the same family is similar, it is not identical: there are differences due to the difference in atomic weight. For example, both chlorine and iodine form compounds with one atom of hydrogen: HCl and HI. These compounds are similar, in that they are both corrosive gases which dissolve readily in water. But they differ in that HI has, for example, a higher boiling point and melting point than HCl (typical of the heavier of a pair of related compounds).
In later publications [Mendeleev 1871] Mendeleev went into considerable detail regarding the properties of predicted elements. The success of these predictions played a part in establishing the periodic system, although apparently not the primary part. [Brush 1996] See Scerri & Worrall 2001 for a discussion of prediction and accommodation in the periodic table.
Mendeleev went on to incorporate this "correction" in his 1871 table [Mendeleev 1871], listing the atomic weight of tellurium as 125. But the "correction" is erroneous. Mendeleev was right to put tellurium in the same group with sulfur and oxygen; however, strict order of atomic weights according to the best information he had available would have required iodine (127) to come before tellurium (128). He was suspicious of this apparent inversion of atomic weight order; as it happens, the atomic weights Mendeleev had available to him agree with the currently accepted values.
While his suggestion to change that of tellurium was wrong, his classification was correct and his faith in the regularity of the periodic system was only slightly misplaced. The natural order of the elements is not quite one of increasing atomic weight, but one of increasing atomic number. In 1913, a discovery by Henry Moseley made the atomic number more than simply a rank order for the elements [Moseley 1913, 1914]. The atomic number is the same as the quantity of positive charge in the nucleus of an atom. The periodic system contains a few "inversions" of atomic weight, but no inversions of atomic number.
The Society for Conservation Biology recently completed a major overhaul of the SCB website.
The new website provides a wealth of information on recent issues in conservation policy. You can access regular updates on conservation policy news by subscribing to the Policy RSS feed.
Other sections of the website provide information on SCB’s regional sections and working groups. The most popular section of the website is the board listing job openings in the field of conservation biology.
A new study published in Science by Brosi and Biber compares species listed under the US Endangered Species Act (ESA) in response to citizen petitions versus initiatives from within the agencies (FWS and NMFS). The authors asked whether citizen involvement, as some claim, diverts scarce conservation resources to species which are at lower risk than those identified by the agencies. The authors found, on the contrary, that species listed in response to citizen petitions were at least as threatened as those proposed by the agencies. These findings support the wisdom of the drafters of the ESA, who included the ability of citizens to petition for species’ listing to help ensure that species are not overlooked in the listing process due to political concerns or other reasons.
As the New York Times notes, “These impressive statistical results also help restate — and re-ratify — the reason the authors of the Endangered Species Act included the public in the first place. There are a lot more of us than there are Fish and Wildlife Service scientists. And the petitioning public isn’t merely an amorphous cross section of Americans. It includes scientists, local specialists, committed conservationists and passionate defenders of nature, who, in many cases, can keep a closer eye on the ground than the Fish and Wildlife Service.”
Science Daily also noted “The public brings diffuse and specialized expertise to the table, from devoted nature enthusiasts to scientists who have spent their whole careers studying one particular animal, insect or plant. Public involvement can also help counter the political pressure inherent in large development projects. The FWS, however, is unlikely to approve the listing of a species that is not truly threatened or endangered, so some petitions are filtered out. “You could compare it to the trend of crowdsourcing that the Internet has spawned,” Brosi says. “It’s sort of like crowdsourcing what species need to be protected.”
A new paper by Levi and Wilmers in the journal Ecology uses a 30-year time series of wolf, coyote, and fox relative abundance from the state of Minnesota, USA, to show that wolves suppress coyote populations, which in turn releases foxes from top-down control by coyotes. The authors conclude “Mesopredator release theory has often considered the consequence of top predator removal in a three species interaction chain (i.e., coyote–fox–prey) where the coyote was considered the top predator (Ritchie and Johnson 2009). However, the historical interaction chain before the extirpation of wolves had four links. In a four-link system, the top predator releases the smaller predator. The implication is that a world where prey species are heavily predated by abundant small predators (mesopredator release) may be similar to the historical ecosystem.” The study’s findings suggest that “among-guild interaction chains with even numbers of species will result in the smallest competitor being suppressed while among-guild interaction chains with odd numbers of species will result in the smallest competitor being released.” These findings have important implications for efforts to predict the consequences of removal or restoration of top predators.
Conservation biologists have long debated whether and how it is appropriate for scientists to influence policy decisions. A pair of essays in the journal Conservation Biology (one published, another in press) asks whether it's appropriate for scientists to review and critique recovery goals for endangered species. Wilhere (2012) argues that because recovery criteria are inherently normative (values driven), scientists are engaging in "inadvertent advocacy" when they criticize such criteria. In a response, my coauthors and I agree with Wilhere that recovery criteria represent an interaction of science and values, but provide a different view on the appropriate role of individual scientists and scientific societies in reviewing recovery criteria and recovery plans. This debate is central to recovery planning for many species, and we suggest a way forward for the agencies to more clearly separate the normative and scientific elements of recovery criteria. We call on the agencies to develop an explicit decision framework that would provide the flexibility needed to address the unique biological circumstances faced by different species but would limit the abuse of discretion that has allowed political interference to drive many listing and recovery decisions.
A new paper in the journal Nature Climate Change finds evidence that declining snowfall in the southwestern US indirectly influences plants and associated birds by allowing greater over-winter herbivory by elk. Abundances of deciduous trees and associated songbirds have declined with decreasing snowfall over 22 years of study in montane Arizona. The researchers experimentally tested the hypothesis that declining snowfall indirectly influences plants and associated birds by allowing greater over-winter herbivory by elk, by excluding elk from one of two paired snowmelt drainages and replicating this paired experiment across three distant canyons. Over six years, the exclosures reversed multi-decade declines in plant and bird populations by experimentally inhibiting heavy winter herbivory associated with declining snowfall. Predation rates on songbird nests decreased in exclosures, despite higher abundances of nest predators, demonstrating the over-riding importance of habitat quality to avian recruitment.
The Connectivity Analysis Toolkit is a software interface that provides conservation planners with tools for both linkage mapping and landscape-level ‘centrality’ analysis. 450 people from around the world have downloaded the CAT since it became available in 2010.
We have just released Version 1.2 with the following changes:
• Approximate shortest-path betweenness centrality allows faster computation of this metric
• Approximate current flow betweenness centrality allows faster computation of this metric; function also uses sparse matrices for lower RAM requirements
• Network flow functions updated to LEMON version 1.2.2
• Updates to manual and tutorial dataset
These are major updates which speed computation in some cases by an order of magnitude. Thanks to Aric Hagberg for his work adding these new functions to NetworkX and thus making them available for the CAT.
The software is freely available at www.connectivitytools.org (a link is also posted on this blog site).
Two new articles published in the Proceedings of the National Academy of Sciences discuss whether management can push forests and other ecosystems into ‘landscape traps’ which may be difficult to restore to former conditions. The ‘landscape trap’ concept resembles previous research on alternate stable ecosystem states, but recognizes the importance of spatial dynamics in maintaining a landscape in a degraded state.
A new report titled “Assessment & Planning for Ecological Connectivity: A Practical Guide” has been produced by a team of scientists convened by the Wildlife Conservation Society’s North America Program. The report can be downloaded here.
A new paper in the journal Science by Chen and colleagues finds that species ranges are moving upward in elevation and towards the poles faster than has been expected from previous studies. Species’ ranges have climbed an average of 11 meters higher and 16.9 km closer to the poles per decade, with species in areas experiencing the greatest climate shift also showing the greatest range movement.
A new paper in Science by Jim Estes and colleagues reviews contemporary findings on the consequences of removing large apex consumers (e.g., top predators) from nature—a process they term trophic downgrading.
The authors highlight the ecological theory that predicts trophic downgrading, consider why these effects have been difficult to observe, and summarize the key empirical evidence for trophic downgrading.
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics begin in January 1895.
Please note, Degree Days are not available for Agricultural Belts
Utah Temperature Rankings, September 1988
More information on Climatological Rankings
(out of 119 years)
| 40th Coldest | 1912 | Coldest since: 1986 |
| 79th Warmest | 2001, 1990 | Warmest since: 1987 |
What is a mirage?
Under a baking sun, a weary traveller trudges across a seemingly never-ending expanse of desert. Looking up, he suddenly spots something in the distance: a sparkling lake. He rubs his eyes. It’s still there. Picking up the pace in glee he strides ahead… only for the water to melt into thin air.
You might think our traveller was hallucinating, but mirages are a naturally-occurring optical illusion. In cartoons, a mirage is often depicted as a peaceful, lush oasis lying in the shade of swaying palm trees, but in reality it is much more likely to look like a pool of water.
The illusion results from the way in which light is refracted (bent) through air at different temperatures. Cold air is denser than warm air, and therefore has a greater refractive index. This means that as light passes down from cool to hot air, it gets bent upwards towards the denser air and away from the ground (see diagram).
To your eyes, these distorted rays seem to be coming from the ground, so you perceive a refracted image of the sky on the ground. This looks just like a reflection on the surface of a pool of water, which can easily cause confusion.
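The bending described above is Snell's law applied across air layers of different density. A rough sketch (the refractive indices and angles are illustrative values, not measurements): a ray descending at a grazing angle through progressively less-dense, hotter air eventually reaches total internal reflection and curves back upward, carrying an image of the sky to the observer.

```python
import math

# Hypothetical refractive indices for successive air layers, decreasing
# toward the hot ground (hotter air is less dense, so n is smaller).
layers = [1.000292, 1.000285, 1.000278, 1.000271]   # top -> bottom

def ray_reflects(theta_deg, layers):
    """Trace a descending ray; True if it is bent back upward (mirage)."""
    theta = math.radians(theta_deg)  # angle measured from the vertical
    for n1, n2 in zip(layers, layers[1:]):
        s = n1 * math.sin(theta) / n2    # Snell: n1 sin(t1) = n2 sin(t2)
        if s > 1.0:
            return True   # total internal reflection: sky image below
        theta = math.asin(s)
    return False

print(ray_reflects(89.8, layers))  # True: a grazing ray curves back up
print(ray_reflects(45.0, layers))  # False: a steep ray passes through
```

Only rays arriving at near-grazing angles reflect, which is why the "water" always appears far ahead and vanishes as you approach.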
A beginner’s guide to mirage spotting
1. There’s no need to trek to the desert to see a mirage: they are very common on roads. In fact, likely locations include anywhere where the ground can absorb a lot of heat. If you’ve ever walked barefoot on hot tarmac or sand, then you’ll know just how hot they can get! A hot ground warms up the air immediately above it, creating a sharp temperature gradient in the air – the first ingredient of a good mirage.
2. Make sure you can see ahead of you well into the distance. The most spectacular mirages occur in wide expanses of flat land as too many hills, dips or bumps will prevent the refracted light from reaching your eyes.
3. Check the weather forecast. If you see a huge puddle ahead on a rainy day you’d be better off steering well clear, as mirages are far more likely to found during dry, sunny weather.
pathChirp is an active probing tool for estimating the available bandwidth on a communication network path. Based on the concept of "self-induced congestion," pathChirp features an exponential flight pattern of probes we call a chirp. Packet chirps offer several significant advantages over current probing schemes based on packet pairs or packet trains. By rapidly increasing the probing rate within each chirp, pathChirp obtains a rich set of information from which to dynamically estimate the available bandwidth.

Another of our tools, STAB, is based on pathChirp and locates available bandwidth bottlenecks.
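To illustrate the chirp idea (with made-up numbers, not pathChirp's actual probe sizes or spread factor): successive inter-packet gaps shrink geometrically, so a single short train sweeps the instantaneous probing rate across a wide range.

```python
# Illustrative chirp: gaps shrink by a constant factor gamma, so each
# successive packet pair probes a rate gamma times higher. The rate at
# which queuing delays start to build indicates the available bandwidth.
packet_size_bits = 8 * 1000   # 1000-byte probes
gamma = 1.2                   # spread factor between successive gaps
first_gap_s = 0.010           # widest (slowest) gap: 10 ms

gaps = [first_gap_s / gamma ** k for k in range(10)]
rates_mbps = [packet_size_bits / g / 1e6 for g in gaps]

print(min(rates_mbps), max(rates_mbps))  # rates swept by one chirp
```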
Steps to unpack the file. All UNIX commands are in bold face. The outputs of certain commands are system dependent and are denoted by square brackets [***].
If you downloaded the Gzip Compressed File:
gunzip ./pathchirp-2.4.1.tar.gz
tar -xvf ./pathchirp-2.4.1.tar

If you downloaded the Uncompressed File:
tar -xvf ./pathchirp-2.4.1.tar

Start by reading the README file in the newly created subdirectory pathchirp-2.4.1 or follow the instructions below.
Running the code.
ls ./
Let us call the result of this command [subdir]. Examples of [subdir] could be i686, i386, sparc, and so on.
cd [subdir]
All the above commands must be run on the SENDER, RECEIVER, and MASTER machines. Chirps travel from the SENDER to the RECEIVER machine. The MASTER starts the experiment and stores the results of the experiment in a file (see above figure).
On the SENDER run ./pathchirp_snd
On the RECEIVER run ./pathchirp_rcv
On the MASTER run ./pathchirp_run -S [sender machine name or IP address] -R [receiver machine name or IP address] -t 300
At the MASTER you will observe the output
Opening file: [resultsfilename]
After 300 seconds (5 minutes) the experiment would have ended. The results will be in the file [resultsfilename] at the receiver in the format [timestamp] [Available bandwidth estimate in Mega bits/sec]
To view the results run the following at the MASTER
To rerun the experiment you only need to restart the ./pathchirp_run program at the MASTER.
You should already have the ns-2 simulator installed. You can obtain ns-2 code here.
In the following, NS-2-DIR refers to your ns-2.* directory (example: ns-2.27).
Save current files:
Before untarring the code below, save a few ns files that will be overwritten so that you can revert to the original easily.
1) cd NS-2-DIR
2) tar -cvf original.tar Makefile.in FILES tcl/lib/ns-default.tcl tcl/lib/ns-packet.tcl common/packet.h
In case you need to revert to the original code:
1) cd NS-2-DIR
2) tar -xvf original.tar
3) make clean
4) ./configure
5) make depend
6) make
Directional wind shear is the change in wind direction with height. In the image (right), the view is looking north. The wind near the surface is blowing from the southeast to the northwest.
As the elevation increases, the direction veers (changes in a clockwise motion), becoming south, then southwest, and finally west.
Speed shear is the change in wind speed with height. In the illustration below, the wind is increasing with height. This tends to create a rolling effect in the atmosphere and is believed to be a key component in the formation of mesocyclones, which can lead to tornadoes.
Strong vertical shear is the combination of a veering directional shear and strong speed shear and is the condition that is most supportive of supercells.
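As a rough numerical illustration (the wind values are hypothetical, not taken from the figures), the combined shear between two levels can be computed as the magnitude of the vector difference of the winds, which captures speed shear and directional (veering) shear together:

```python
import math

def wind_vector(speed_kt, dir_from_deg):
    # Meteorological convention: direction reports where the wind is FROM.
    rad = math.radians(dir_from_deg)
    return (-speed_kt * math.sin(rad), -speed_kt * math.cos(rad))

sfc = wind_vector(15, 135)    # hypothetical: 15 kt from the southeast
top = wind_vector(45, 270)    # hypothetical: 45 kt from the west aloft

# Bulk shear = |upper wind vector - lower wind vector|.
du, dv = top[0] - sfc[0], top[1] - sfc[1]
shear_kt = math.hypot(du, dv)
print(round(shear_kt, 1))
```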
jQuery uses CSS selectors to access and manipulate HTML elements (DOM objects) on a web page.
jQuery also provides a companion UI (user interface) framework and numerous plug-ins.
Many of the largest companies on the Web use jQuery:
You will find an excellent jQuery Tutorial here at W3Schools.
API is short for Application Programming Interface. It is a library of properties and methods for manipulating the HTML DOM.
MooTools also includes some lightweight effects and animation functions.
Here are some other frameworks not covered in this short overview:
Ext JS - Customizable widgets for building rich Internet applications.
Dojo - A toolkit designed around packages for DOM manipulation, events, widgets, and more.
UIZE - Widgets, AJAX, DOM, templates, and more.
You always want your web pages to be as fast as possible. You want to keep the size of your pages as small as possible, and you want the browser to cache as much as possible.
A CDN (Content Delivery Network) solves this. A CDN is a network of servers containing shared code libraries.
Normally you just have to reference a library file from your web page.
In the next chapter of this tutorial we will walk you through a test process for jQuery.
The Weather Channel Position Statement on Global Warming
The scientific issue of global warming can be broken down into three main questions: Is global warming a reality? Are human activities causing it? What are the prospects for the future?
Warming: Fact or Fiction?
The climate of the earth is indeed warming, with an increase of approximately 1 - 1 1/2 degrees Fahrenheit in the past century, more than half of that occurring in the past three decades. The warming has taken place as averaged globally and annually; significant regional and seasonal variations exist.
Impacts can already be seen, especially in the Arctic, with melting glaciers, thawing permafrost, and rapid retreat and thinning of sea ice, all of which are affecting human populations as well as animals and vegetation. There and elsewhere, rising sea level is increasing coastal vulnerability.
Odds are now leaning toward increased frequency and intensity of heat waves in the warm season and warm spells in the cold season in parts of the world, as well as reduced frequency of low temperature extremes. There is evidence in recent years of a direct linkage between the larger-scale warming and short-term weather events such as heat waves.
In some regions there has been a tendency for an increase in precipitation extremes, both wet (including floods) and dry (droughts). These observations over the past several decades are consistent with what theory and global climate models would suggest.
The jury is out on exactly what effect(s) global warming is having or will have in the future upon tropical cyclones.
To what extent the current warming is due to human activity is a complicated question, because large and sometimes sudden climate changes have occurred throughout our planet's history -- most of them before humans could possibly have been a factor. Furthermore, the sun/atmosphere/land/ocean "climate system" is extraordinarily complex, and natural variability on time scales from seconds to decades and beyond is always occurring.
However, it is known that burning of fossil fuels injects additional carbon dioxide and other so-called greenhouse gases into the atmosphere. This in turn increases the naturally occurring "greenhouse effect," a process in which our atmosphere keeps the earth's surface much warmer than it would otherwise be.
More than a century's worth of detailed climate observations shows a sharp increase in both carbon dioxide and temperature. These observations, together with computer model simulations and historical climate reconstructions from ice cores, ocean sediments and tree rings all provide strong evidence that the majority of the warming over the past century is a result of human activities. This is also the conclusion drawn, nearly unanimously, by climate scientists.
Humans are also changing the climate on a more localized level. The replacement of vegetation by buildings and roads is causing temperature increases through what's known as the urban heat island effect. In addition, land use changes are affecting impacts from weather phenomena. For example, urbanization and deforestation can cause an increased tendency for flash floods and mudslides from heavy rain. Deforestation also produces a climate change "feedback" by depleting a source which absorbs carbon dioxide.
The bottom line is that with the rate of greenhouse gas emissions increasing, a significant warming trend is expected to also continue. This warming will manifest itself in a variety of ways, and shifts in climate could occur quickly, so while society needs to continue to wrestle with the difficult issues involved with mitigation of the causes of global warming, an increased focus should be placed on adaptation to the effects of global warming given the sensitivity of civilizations and ecosystems to rapid climate change.
Potential outcomes range from moderate and manageable to extreme and catastrophic, depending on a number of factors including location and type of effect, and amount of greenhouse gas emissions. Not every location and its inhabitants will be affected equally, but the more the planet warms, the fewer "winners" and the more "losers" there will be as a result of the changes in climate. The potential exists for the climate to reach a "tipping point," if it hasn't already done so, beyond which radical and irreversible changes occur.
Copyright The Weather Channel, 2007. All Rights Reserved.
Hash tables are an efficient implementation of a keyed array data structure, a structure sometimes known as an associative array or map. If you're working in C++, you can take advantage of the STL map container for keyed arrays implemented using binary trees, but this article will give you some of the theory behind how a hash table works.
Keyed Arrays vs. Indexed Arrays

One of the biggest drawbacks to a language like C is that there are no keyed arrays. In a normal C array (also called an indexed array), the only way to access an element is through its index number. To find element 50 of an array named "employees" you have to access it like this:

employees[50]

In a keyed array, however, you would be able to associate each element with a "key," which can be anything from a name to a product model number. So, if you have a keyed array of employee records, you could access the record of employee "John Brown" like this:

employees["John Brown"]
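The mechanics behind such a keyed array can be sketched roughly as follows. This is a toy chained hash table in Python for illustration only (the class name, bucket count, and chaining scheme are all assumptions, not the article's code): the key is hashed to pick a bucket, and each bucket holds a short list of (key, value) pairs to resolve collisions.

```python
# Toy chained hash table (illustrative sketch, not production code).
class HashTable:
    def __init__(self, n_buckets=8):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # Hash the key down to a bucket index.
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # existing key: overwrite in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # new key: chain onto the bucket

    def get(self, key):
        # Scan only this key's bucket, not the whole table.
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)

employees = HashTable()
employees.set("John Brown", {"id": 50})
print(employees.get("John Brown"))        # {'id': 50}
```

Python's built-in dict and C++'s std::unordered_map are based on the same idea, with far more sophisticated hashing and automatic resizing.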
Presently, only one supernova has been detected by its neutrinos. This was supernova 1987A, a relatively close supernova which occurred in the Large Magellanic Cloud, a satellite galaxy of our own. When this star exploded, the neutrinos escaped the surface of the star and reached detectors on Earth three hours before the shockwave reached the surface, producing a visible brightening. Yet despite the enormity of the eruption, only 24 neutrinos (or more precisely, electron anti-neutrinos) were detected across three detectors.
The further away an event is, the more its neutrinos will be spread out, which in turn decreases the flux at the detector. With current detectors, the expectation is that they are large enough to detect supernova events at a rate of around 1-3 per century, all originating from within the Milky Way and our satellite galaxies. But as with most astronomy, the detection radius can be increased with larger detectors. The current generation uses detectors with masses on the order of kilotons of detecting fluid, but proposed detectors would increase this to megatons, pushing the sphere of detectability to as much as 6.5 million light years, which would include our nearest large neighbor, the Andromeda galaxy. With such enhanced capabilities, detectors would be expected to find neutrino bursts on the order of once per decade.
Assuming the calculations are correct and that 20% of supernovae implode directly, this means that such gargantuan detectors could detect 1-2 failed supernovae per century. Fortunately, this rate is slightly enhanced by the extra mass of the star, which would make the total energy of the event higher; while this energy wouldn't escape as light, it would correspond to an increased neutrino output. Thus, the detection sphere could be pushed out to potentially 13 million light years, which would incorporate several galaxies with high rates of star formation and, consequently, supernovae.
While this puts the potential for detections of failed supernovae on the radar, a bigger problem remains. Say neutrino detectors record a sudden burst of neutrinos. With typical supernovae, this detection would be quickly followed by the optical detection of a supernova, but with a failed supernova, the follow-up would be absent. The neutrino burst is the beginning and end of the story, which by itself could not positively distinguish such an event from other supernovae, such as those that form neutron stars.
To tease out the subtle differences, the team modeled the supernovae to examine the energies and durations involved. When comparing failed supernovae to ones forming neutron stars, they predicted that the failed supernovae neutrino bursts would have shorter durations (~1 second) than ones forming neutron stars (~10 seconds). Additionally, the energy imparted in the collision that makes up the detection would be higher for failed supernovae (up to 56 MeV vs 33 MeV). This difference could potentially discriminate between the two types.
The enhanced spectral resolution of hyperspectral and control of bandwidths of multispectral data yield an advantage over color aerial photography particularly when coral health and time series analysis of coral reef community structure are of interest. Depending on the type of instrument, a spectral imaging system can be utilized to see multiple colors from ultraviolet through the far infrared range. The AURORA hyperspectral imaging system collected 72 ten nm bands in the visible and near infrared spectral range with a 3 meter pixel resolution. The data was processed to select band widths, which optimized feature detection in shallow and deep water. Photointerpreters can accurately and reliably delineate boundaries of features in the imagery as they appear on the computer monitor using a software interface such as the Habitat Digitizer.
The shallow band IDs and centers were configured as:
1) Band 17 at 508.319 nm
2) Band 22 at 547.918 nm
3) Band 27 at 605.516 nm
The deep band IDs and centers were configured as:
1) Band 11 at 450.001 nm
2) Band 22 at 547.918 nm
3) Band 33 at 663.835 nm
Processing Arithmetic Expressions with the Shunting-Yard Algorithm
(Originally posted on reedbeta.com)
In game development (or programming in general) it’s not uncommon to have a situation where you’d like to let a user enter an arithmetic formula that your code parses and evaluates. For example, in a shader you might like to have an annotation that specifies how a parameter is to be computed in the main application. In various kinds of authoring tools you might like to create a shape, image, or animation based on a mathematical function. Embedding a full-fledged scripting language like Python or Lua is a bit overkill for these kinds of tasks. So how can we handle arithmetic expressions without a large amount of infrastructure?
In textbooks and university computer-science courses, we often hear about a few classic approaches to parsing formulas. One is the so-called reverse Polish notation, where we write formulas in postfix form, with operators following their operands:
2 3 +           // means 2 + 3
a b + c d + *   // means (a + b) * (c + d)
pi 4 / sin      // means sin(pi/4)
The nice things about RPN are that (a) parentheses are not needed, nor the concepts of operator precedence and associativity, since the order of operations is fully specified by the notation; and (b) it can be parsed by a simple algorithm: scan the formula left-to-right, when you see an operand push it on a stack, and when you see an operator, pop the required operand(s) off the stack, apply the operator, and push the result back on. When you’re done, the result is the only item left on the stack (assuming well-formed input).
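That stack algorithm can be sketched in a few lines. The following is a hypothetical Python illustration (not the article's code); the operator set and the constant name "pi" are assumptions made for the sake of the examples above.

```python
# Minimal RPN evaluator: push operands, pop-and-apply operators.
import math
import operator

BINARY = {"+": operator.add, "-": operator.sub,
          "*": operator.mul, "/": operator.truediv}
UNARY = {"sin": math.sin}
CONSTANTS = {"pi": math.pi}

def eval_rpn(tokens):
    stack = []
    for tok in tokens:
        if tok in BINARY:
            b = stack.pop()          # right operand is on top of the stack
            a = stack.pop()
            stack.append(BINARY[tok](a, b))
        elif tok in UNARY:
            stack.append(UNARY[tok](stack.pop()))
        elif tok in CONSTANTS:
            stack.append(CONSTANTS[tok])
        else:
            stack.append(float(tok))
    return stack[0]                  # well-formed input leaves one item

print(eval_rpn("2 3 +".split()))        # 5.0
print(eval_rpn("pi 4 / sin".split()))   # ~0.7071, i.e. sin(pi/4)
```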
That’s quite easy to add to an application; there’s almost no infrastructure needed. But it has the disadvantage that it requires users to work in this unfamiliar and awkward notation. Of course, users can be trained to work in RPN, and with experience it no longer appears unfamiliar or awkward. But let’s take pity on the poor users and let them work with standard mathematical notation. What options are there?
The standard computer-science curriculum at this point would start talking about context-free grammars, abstract syntax trees, and syntax-directed parsers. There are two main approaches to building parsers that are used in practice, i.e. for parsing programming languages: top-down (also known as recursive descent or LL) and bottom-up (aka shift-reduce, LR). Unfortunately, neither of these is a good fit for embedding a simple arithmetic language in an application. Top-down parsing isn't a good fit for arithmetic in general, since each level of operator precedence requires its own nonterminal symbol in the grammar (each of which corresponds to a function call in the parser), and right-associative operators can't be expressed in the grammar without breaking the LL constraint, necessitating some sort of extragrammatical fixup. Bottom-up parsing works by using a large state machine whose transition rules are usually impractical to work out by hand, requiring a parser generator tool such as Bison to compute. Again, that's a lot of infrastructure to throw at what is not such a complicated problem.
Fortunately, there is another way: the shunting-yard algorithm. It is due to Edsger Dijkstra, and so named because it supposedly resembles the way trains are assembled and disassembled in a railyard. This algorithm processes infix notation efficiently, supports precedence and associativity well, and can be easily hand-coded.
How It Works
As in RPN, we scan the formula from left to right, processing each operand and operator in order. However, we now have two stacks: one for operands and another for operators. Then, we proceed as follows:
- If we see an operand, push it on the operand stack.
If we see an operator:
- While there’s an operator on top of the operator stack of precedence higher than or equal to that of the operator we’re currently processing, pop it off and apply it. (That is, pop the required operand(s) off the stack, apply the operator to them, and push the result back on the operand stack.)
- Then, push the current operator on the operator stack.
- When we get to the end of the formula, apply any operators remaining on the stack, from the top down. Then the result is the only item left on the operand stack (assuming well-formed input).
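The steps above can be sketched directly in code. This is a hypothetical Python illustration of the two-stack algorithm (not the article's code), evaluating binary operators as it goes; the operator set is an assumption, and '^' is made right-associative by leaving equal-precedence operators on the stack.

```python
# Shunting-yard with direct evaluation: an operand stack and an
# operator stack, with precedence-driven pop-and-apply.
import operator

OPS = {"+": (1, operator.add), "-": (1, operator.sub),
       "*": (2, operator.mul), "/": (2, operator.truediv),
       "^": (3, operator.pow)}
RIGHT_ASSOC = {"^"}

def apply_top(operands, operators):
    # Pop one operator and its two operands, apply, push the result.
    op = operators.pop()
    b = operands.pop()
    a = operands.pop()
    operands.append(OPS[op][1](a, b))

def shunting_yard(tokens):
    operands, operators = [], []
    for tok in tokens:
        if tok in OPS:
            prec = OPS[tok][0]
            # Pop-and-apply anything of higher precedence, or of equal
            # precedence when the current operator is left-associative.
            while operators and (OPS[operators[-1]][0] > prec or
                                 (OPS[operators[-1]][0] == prec and
                                  tok not in RIGHT_ASSOC)):
                apply_top(operands, operators)
            operators.append(tok)
        else:
            operands.append(float(tok))
    while operators:          # end of formula: apply what's left, top down
        apply_top(operands, operators)
    return operands[0]

print(shunting_yard("2 + 3 * 4".split()))   # 14.0
print(shunting_yard("2 ^ 3 ^ 2".split()))   # 512.0 (right-associative)
```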
Note that “applying” an operator can mean a couple of different things in this context. You could actually execute the operators, in which case the operands would be numerical values of all the terms and subexpressions; you could also build a syntax tree, in which case the operands would be subtrees. The algorithm works the same way in either case.
That’s basically all there is to it, aside from some bells and whistles! As you can see, it has a lot in common with the RPN algorithm, and is just a little more complicated.
I described the algorithm above in its simplest form, but there are several enhancements that can be made to handle more complicated formulas.
Associativity. Above, I said that when processing an operator, any operators of equal precedence at the top of the stack should be popped and applied. This makes those operators left-associative, since the leftmost of the two operators will be applied first. You can implement right-associativity by leaving equal-precedence operators on the stack.
Parentheses. Parens are a bit of a special case. When you see a left paren, push it on the operator stack; no other operators can pop a paren (so it’s as if it has the lowest precedence). Then when you see a right paren, pop-and-apply any operators on the stack until you get back to a left paren, which is popped and discarded.
Unary operators. These generally work just like any binary operators except that they only pop one operand when they’re applied. There is one extra rule that needs to be followed, though: when processing a unary operator, it’s only allowed to pop-and-apply other unary operators—never any binary ones, regardless of precedence. This rule is to ensure that formulas like a ^ -b are handled correctly, where ^ (exponentiation) has a higher precedence than – (negation). (In a ^ -b there’s only one correct parse, but in -a^b you want to apply the ^ first.)
Both prefix and postfix unary operators can be used. The way to tell whether you're in a position to allow prefix or postfix operators is to look at the previous token; if it's an operand, you're looking for binary and postfix unary operators, and if the previous token is an operator (or there's no previous token) you're looking for prefix unary operators. Note that a left paren counts as an operator and a right paren as an operand for this purpose. This rule also allows you to tell whether – is a negation (unary) or a subtraction (binary)—it's a negation if it appears when looking for a prefix unary operator, and a subtraction otherwise.
Function calls. The prefix/postfix rule also allows you to tell when an open paren designates a function call rather than grouping a subexpression (a grouping paren is like a prefix operator while a function-call one is like a postfix operator). When a function-call paren is encountered, the operand at the top of the stack is the function to be called. Push the paren on the operator stack as before, but also set up a list somewhere to hold the function arguments, and maintain a mapping that lets you find that list again from the paren on the stack. (Note that with nested function calls, you could have multiple left parens on the stack.)
Then, when a comma is encountered, pop-and-apply operators back to a left paren; the operand on the top of the stack is then the next argument, and should be popped and added to the argument list. When the right paren is encountered, do the same, then pop and discard the left paren. (Note that the arguments shouldn't be left on the operand stack, at least not without some sort of sentinel between them; otherwise an ill-formed call like f(a, b, +) could be parsed as f(a + b).)
Array subscripts using square brackets can be handled in the same way as function calls.
After all this, the algorithm has grown a bit, and is a little trickier to get right in all the corner cases—but it’s still pretty simple, and certainly less work than a full-blown syntax-directed parser! In my opinion, it’s a shame that the shunting-yard algorithm isn’t more widely discussed as part of standard texts and computer-science university courses. Even the Dragon Book doesn’t so much as mention it! I never heard of this algorithm until I happened to see a forum post that referenced it, but its simplicity, elegance, and efficiency make it a superior solution for many use-cases of processing arithmetic expressions.
Is there a simple explanation of what the Laplace transform does and how it works? Reading my math book has left me in a foggy haze of proofs that I don't completely understand. I'm looking for an explanation in layman's terms so that I understand what it is doing as I make these seemingly "magical" transformations. I searched the site and the closest to an answer was this; however, it doesn't explain things for my simplistic mathematical mind.
Note: please edit tags as I don't really know what to put down for this topic
What's in a Name? Bumps, Bights, Scarps and Shelves
Steve Gittings, Research Coordinator
U.S. National Marine Sanctuary System
You may have seen an old cartoon that shows some elderly ladies sitting around a coffee table chatting, and one says to the other with a deadpan look, "I don't know why I don't care about what lives on the bottom of the ocean, but I don't."
Well, in some ways you can't blame her. At first glance, the ocean bottom is largely a featureless plain. I was once on a five-day survey on the Navy's NR-1 research submarine. Lying on my stomach in a tiny, cushion-covered room below the bridge, looking out portholes for 12 hours a day, I watched the sub cross mile after mile of soft sediment and an occasional burrow. Animals are there, to be sure, but most are so small that they live between sediment grains. These miniatures may see their muddy home as a high rise with an ocean view. I saw, well, mud.
So why look?
Well, that cartoon lady might be surprised to take a second look at what scientists are finding these days in Earth's oceans. Far from featureless, if you know where to look, or at least look long enough, you will find nature's weird and wonderful. I like to say it is a place where the unpredictable is commonplace.
It seems everywhere we look we are finding new forms of beings with imaginative life-styles. Hydrothermal vents along undersea volcanic ridges are populated by vestimentiferan tubeworms harboring bacteria that use chemical energy from hydrogen sulfide instead of sunlight to make food for their host. Bacteria inside bivalves around natural oil seeps in the ocean derive energy from methane for survival. Mussels with sulfide-utilizing bacteria live at depths of 2,000 ft above pools of water seven times saltier than normal seawater.
A New Look at the Ocean
It was with great expectation, then, that the idea for the Islands in the Stream Expedition was born. Starting in Belize, circling the Gulf of Mexico, then heading up the U.S. Atlantic Coast, this mission would stop at places thought to harbor some of the most diverse and productive environments anywhere. We would document this wealth of life, and compare what we found throughout the journey. We would also try to understand how important processes in the ocean affected their development. Who were the predators and who was the prey? Who preferred what kind of habitat? Where would ocean currents carry larvae? (see The Islands)
There would be several segments, or "legs," of this mission, each with a different science crew. One group would study Belize, another Mexico, a third the Flower Garden Banks in the northwest Gulf of Mexico, a fourth the northeastern Gulf, and a fifth the southeast Gulf. Later in the year, there would be groups studying the Oculina Reserve off Florida, the Savannah Scarp off Georgia, the Charleston Bump, and the North Carolina Shelf (The Point, Lophelia Reefs, Cape Fear Terrace). Each, however, would be looking at things like bottom and fish community development, and human influences on these communities.
The Ivory Tower of Babel
Science is a field full of jargon. I am the first to admit that I don't even understand the titles in some science articles. You have your immunoreceptors, buckminsterfullerenes, restriction fragment length polymorphisms, Euclidean 3-manifolds, photonic atoms, and apoptosis. (Of course, you already know what vestimentiferans are.)
In order to do the work of the Islands in the Stream Expedition, it would be necessary to use a common language. We started with English. Beyond that, it was a challenge. There are countless ways to describe ecological principles and environmental observations. We needed to settle on a scheme that would allow us to compare areas hundreds of miles apart being investigated by people who hardly knew each other.
Toward a Common Language
One way to do this is to standardize how the areas would be evaluated, helping to put the whole region in perspective. There are many characterization schemes. Geologists are particularly adept at describing features, which would be an important aspect of the Islands mission. We would be visiting places with names that reflected their geologic nature -- the Flower Garden Banks, Pulleys Ridge, the Pinnacles, and Grays Reef. All have different geologic descriptors. But how different are they? And is it their geology that distinguishes them, or some other aspect of their environment? To answer these and other questions, we needed a scheme that provided for a broad and comprehensive understanding of the places we visited. We also needed to provide a tool to observers with highly varying levels of training in biology and ecology.
During 2001, a habitat characterization protocol was developed and used on the Caribbean and Gulf of Mexico legs of the mission. The protocol will again be used on the South Atlantic Bight leg. A portion of that scheme is presented here to give you an idea of the type of information and level of detail required.
Protocol for Habitat Characterization
Habitat descriptions at a number of sites visited during the Islands in the Stream mission were based on a combination of exploratory, or reconnaissance, dives, using one-person DeepWorker submersibles, and relatively simple transect and roving surveys conducted using ROVs (remotely operated vehicles), and, where practical, scuba divers. Standardized data forms recorded the activities. At locations targeted for habitat characterization during the expedition, the task is accomplished in several steps, as follows:
- Initial classification of features -- available maps and other types of information were used to choose and prioritize target sites.
- Preliminary reconnaissance -- dives started with a get-acquainted session, where pilots recorded the variety and boundaries of habitats within target areas. After becoming familiar with the site, they could begin more detailed assessments.
- Fish community survey -- fish are fundamentally important to ecosystem health, and pilots characterized the community by recording the species they saw, the habitats in which they occurred, and some measure of their abundance (either a relative or absolute abundance).
- Benthic (bottom) video transects -- Pilots used video cameras to record animals and plants living on the bottom. The records were supplemented by extensive verbal descriptions of each habitat. Transects were generally of a predetermined length (usually around 25 m), and organisms were counted later by experts reviewing the videotape.
- Benthic video and audio reconnaissance and sampling -- after completing other tasks, pilots supplemented the data by audio and video recordings of various aspects of each habitat, including species composition, community development, species interactions, species/habitat associations, noteworthy behaviors, and human impacts.
Protocols like this enable consistent observations of each of the places visited during the expedition. Without it, comparisons are very difficult to make, and meaningful application of the information to resource management is unlikely to occur. With it, scientists and managers can determine the condition of the sites as well as the threats, and judge the best ways to respond to problems. Some of the management options may include: (1) establishing protected areas to control human activities, such as destructive fishing practices or collecting for other purposes, (2) changing laws related to activities that affect water quality, or (3) altering local plans for responding to spills or other events that might affect particularly sensitive areas. In addressing these information needs, resource mangers are much more capable of ensuring the long-term survival of critical marine ecosystems.
El Niño and its partner La Niña, the warm and cold phases in the eastern half of the tropical Pacific, play havoc with climate worldwide. Predicting El Niño events more than several months ahead is now routine, but predicting how it will change in a warming world has been hampered by the short instrumental record. An international team of climate scientists has now shown that annually resolved tree-ring records from North America, particularly from the US Southwest, give a continuous representation of the intensity of El Niño events over the past 1100 years and can be used to improve El Niño prediction in climate models. The study, spearheaded by Jinbao Li, International Pacific Research Center, University of Hawai'i at Manoa, is published in the May 6 issue of Nature Climate Change.
Tree rings in the US Southwest, the team found, agree well with the 150-year instrumental sea surface temperature records in the tropical Pacific. During El Niño, the unusually warm surface temperatures in the eastern Pacific lead to changes in the atmospheric circulation, causing unusually wetter winters in the US Southwest, and thus wider tree rings; unusually cold eastern Pacific temperatures during La Niña lead to drought and narrower rings. The tree-ring records, furthermore, match well existing reconstructions of the El Niño-Southern Oscillation and correlate highly, for instance, with δ18O isotope concentrations of both living corals and corals that lived hundreds of years ago around Palmyra in the central Pacific.
"Our work revealed that the towering trees on the mountain slopes of the US Southwest and the colorful corals in the tropical Pacific both listen to the music of El Niño, which shows its signature in their yearly growth rings," explains Li. "The coral records, however, are brief, whereas the tree-ring records from North America supply us with a continuous El Niño record reaching back 1100 years."
The tree rings reveal that the intensity of El Niño has been highly variable, with decades of strong El Niño events and decades of little activity. The weakest El Niño activity happened during the Medieval Climate Anomaly in the 11th century, whereas the strongest activity has been since the 18th century.
These different periods of El Niño activity are related to long-term changes in Pacific climate. Cores taken from lake sediments in the Galapagos Islands, northern Yucatan, and the Pacific Northwest reveal that the eastern-central tropical Pacific climate swings between warm and cool phases, each lasting from 50 to 90 years. During warm phases, El Niño and La Niña events were more intense than usual. During cool phases, they deviated little from the long-term average as, for instance, during the Medieval Climate Anomaly when the eastern tropical Pacific was cool.
"Since El Niño causes climate extremes around the world, it is important to know how it will change with global warming," says co-author Shang-Ping Xie. "Current models diverge in their projections of its future behavior, with some showing an increase in amplitude, some no change, and some even a decrease. Our tree-ring data offer key observational benchmarks for evaluating and perfecting climate models and their predictions of the El Niño-Southern Oscillation under global warming."
More information: Jinbao Li, Shang-Ping Xie, Edward R. Cook, Gang Huang, Rosanne D'Arrigo, Fei Liu, Jian Ma, and Xiao-Tong Zheng, 2011: Interdecadal modulation of El Niño amplitude during the past millennium. Nature Climate Change.
Learn PHP Bitwise Operator:
PHP supports another kind of operator, called the bitwise operator. It is so named because instead of operating on the integer value as a whole, it works on the number's binary representation. For example, suppose we want to evaluate the following:
12 >> 2
Since 12 in binary is 0000 1100, shifting it right twice gives:
After the first shift, 0000 0110; after the second shift, 0000 0011.
The output is 3. From this example it is clear that with each shift every bit moves one position to the right, and the vacated position on the left is filled with 0.
So the right shift operator works on the binary representation of the number, which is why it is known as a bitwise operator.
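The same arithmetic holds in any language with bitwise operators. As a quick cross-check (illustrative only — the tutorial itself uses PHP), this short Python sketch confirms the walkthrough above, and also that shifting right by n bits is the same as integer division by 2**n:

```python
# Right-shifting an integer by n bits drops the n lowest bits,
# which is equivalent to floor division by 2**n.
x = 12                 # binary 0000 1100

print(x >> 1)          # 0000 0110 -> 6
print(x >> 2)          # 0000 0011 -> 3

# Equivalence with integer division:
print((x >> 2) == x // 2**2)   # True
```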
Example of PHP Bitwise Operator:
<?php
$a = 10;  // binary 1010 (values chosen to match the output below)
$b = 2;   // binary 0010

echo "\$a & \$b = " . ($a & $b) . "<br/>";   // bitwise AND
echo "\$a | \$b = " . ($a | $b) . "<br/>";   // bitwise OR
echo "\$a ^ \$b = " . ($a ^ $b) . "<br/>";   // bitwise XOR
echo "~(\$a) = " . ~$a . "<br/>";            // bitwise NOT
echo "\$a >> \$b = " . ($a >> $b) . "<br/>"; // shift right by $b bits
echo "\$a << \$b = " . ($a << $b) . "<br/>"; // shift left by $b bits
?>
Output:
$a & $b = 2
$a | $b = 10
$a ^ $b = 8
~($a) = -11
$a >> $b = 2
$a << $b = 40
This movie shows one of the most beautiful – and mind-boggling – experiments in physics: particles behaving as waves as they pass through a diffraction grating.
Each speck of light represents a single molecule that has passed through the grating. If the molecules obeyed the laws of classical physics – the laws that describe the motion of everyday, macroscopic objects – we’d see a pattern corresponding to the slits in the grating, as if we’d thrown a load of blackcurrants through some railings (as you do).
Instead, we see an interference pattern, even though the molecules go through the grating one by one. This can be explained by each molecule having its own wavefront which goes through all the slits at once – it’s a bit like a blackcurrant turning into a wave (of Ribena?), rippling through all the railings, and combining again into a blackcurrant as it hits the wall. Crazy, I know.
This phenomenon is called wave-particle duality, and this movie is the first time it’s been captured on camera for large molecules. As physicists carry out these kinds of experiments with larger and larger molecules, they’ll be able to understand more about the differences between the world we see around us and the strange, surreal world of atoms and molecules.
If you’re interested in finding out more, I wrote an article about this movie for physicsworld.com – click here to have a read. | <urn:uuid:9bc250d2-928d-4261-8119-81a13262d3e9> | 2.875 | 315 | Personal Blog | Science & Tech. | 48.501483 |
Because of their simple geometry and operating principle, bubble columns are often used
as chemical and biological reactors.
The example shown here investigates the accuracy of 2-D models by comparing numerical predictions
with experimental observations.
The gas mass flow rate is increased from left to right. The lower flow rates lead to a transient bubble
column whereas higher flow rates lead to a stationary distribution of the gas bubbles.
The comparison with the experimental observations reported in the reference below is very good.
Chem.-Ing. Tech. 66 (1994) Nr. 4 S. 505-510 | <urn:uuid:3b1c7c26-c20c-4d8e-a42d-e5f4ace6a0cb> | 2.78125 | 116 | Knowledge Article | Science & Tech. | 48.208403 |