Windhunter is a project for mass production of hydrogen based on electrolysis of seawater. It consists of a platform that supports several wind turbines. They produce electricity, and that electricity produces hydrogen and oxygen through the well-known electrolysis process. Each wind turbine has a power of 2 megawatts. From the experiments presented in the "hydrogen power" category on cars-and-trees.com, only a small amount of current (about 10 amps) is required to make enough hydrogen to run a car. Imagine the possibilities of 2 MW running continuously. Detailed calculations still have to be made, but it is clear that over time it can generate large quantities of energy. For free. The platform can be placed in seas or oceans and doesn't interfere with anyone's view (except maybe ships travelling across the sea). The bad part is that its construction is not cheap at all. You can see the whole project at http://www.windhunter.org or, if you want specific details, you can download a document presenting all the facts of this project here: http://www.windhunter.org/Windhunter_Integrated_Info.doc
From the comments: Electrolysis of seawater produces hydrogen, for sure, but it also produces chlorine in abundance. It might produce some oxygen as well. But what is to be done with the chlorine? The reason is that in electrolysis the most reactive elements are the most likely to be separated at the two electrodes. Since seawater contains a lot of sodium chloride, NaCl, these elements will react preferentially to hydrogen and oxygen. Consequently, the sodium will be attracted to the negative electrode, where it reacts with the H2O to produce hydrogen and sodium hydroxide. The chlorine is attracted to the positive electrode, where it is released. Try it.
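For a sense of scale, here is a rough back-of-the-envelope sketch. The 50 kWh-per-kilogram figure is a typical electrolyzer energy cost assumed for illustration, not a number from the Windhunter documents, and the turbine is assumed to run continuously at full rated power:

```python
# Rough estimate of hydrogen output from one 2 MW turbine.
# Assumptions (not from the Windhunter project): ~50 kWh of electricity
# per kg of hydrogen, and continuous operation at full rated power.

RATED_POWER_KW = 2_000      # 2 MW turbine
KWH_PER_KG_H2 = 50.0        # typical electrolyzer energy cost (assumed)

energy_per_day_kwh = RATED_POWER_KW * 24
h2_per_day_kg = energy_per_day_kwh / KWH_PER_KG_H2

print(f"Energy per day: {energy_per_day_kwh:,.0f} kWh")
print(f"Hydrogen per day: {h2_per_day_kg:,.0f} kg")  # about 960 kg/day
```

Even allowing for generous losses, a single turbine operates on a very different scale from the ~10 A car experiment quoted above.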
Vocabulary:
- mascot - a plant or animal, person, or thing adopted by a group as a representative symbol
- terrestrial - living on or in the ground; not aquatic
- isopod - any fresh, marine, or terrestrial crustacean of the order Isopoda, having seven pairs of legs adapted for crawling and a flattened body
- copepod - any marine or freshwater crustacean of the subclass Copepoda, having an elongated body and a forked tail
Troglobites, Troglophiles, Trogloxenes
Copepods, trogloxenes, troglophiles! This is not gibberish. These are names of different categories of terrestrial cave animals and bacteria. Then we have stygobites, stygophiles, and stygoxenes. These animals and bacteria are a bit different: they live in water. They are aquatic. There are a lot more, like extremophiles. These little organisms live in harsh environments like glaciers, swamps, and volcanoes. They cannot be seen with the naked eye, so you need a microscope. You might think that living in a cave habitat can be rather difficult, and indeed it is. Trogloxenes that you might know are bats, bears, foxes, and raccoons. Bats are the most common trogloxenes, and have become a cave mascot. Trogloxenes can live above ground or below ground. They are cave visitors. Troglophiles are animals that can go either way: they can live in a cave or outside of a cave. Many of these are insects like crickets, centipedes, and some salamanders. These can be called cave lovers. Troglobites are true cave dwellers. Most troglobites have special adaptations that help them adjust to life in complete darkness. Some troglobites have poor eyesight or no eyes at all. They can sense vibrations or moving objects with their very long and sensitive antennae. They are also able to hear, smell, and feel as well. Troglobites are pale, white, or transparent. Because of this, troglobites cannot come in contact with sunlight; the results can prove fatal. Some examples of troglobites are blind flatworms, eyeless shrimp, isopods, and copepods. A cave can be a habitat for many interesting life forms. These life forms have adapted to their lives below or above the surface. These living creatures make the world a more interesting place to be.
May 1, 2012 On 5 and 6 June this year, millions of people around the world will be able to see Venus pass across the face of the Sun in what will be a once-in-a-lifetime experience. It will take Venus about six hours to complete its transit, appearing as a small black dot on the Sun's surface, in an event that will not happen again until 2117. In this month's Physics World, Jay M Pasachoff, an astronomer at Williams College, Massachusetts, explores the science behind Venus's transit and gives an account of its fascinating history. Transits of Venus occur only on the very rare occasions when Venus and Earth are in a line with the Sun. At other times Venus passes below or above the Sun because the two orbits are at a slight angle to each other. Transits occur in pairs separated by eight years, with the gap between pairs of transits alternating between 105.5 and 121.5 years -- the last transit was in 2004. Building on the original theories of Nicolaus Copernicus from 1543, scientists were able to predict and record the transits of both Mercury and Venus in the centuries that followed. Johannes Kepler successfully predicted that both planets would transit the Sun in 1631, a prediction partly verified when Mercury's transit was observed that year. But the first transit of Venus actually to be viewed was in 1639 -- an event that had been predicted by the English astronomer Jeremiah Horrocks. He observed the transit in the village of Much Hoole in Lancashire -- the only other person to see it being his correspondent, William Crabtree, in Manchester. Later, in 1716, Edmond Halley proposed using a transit of Venus to measure the precise distance between Earth and the Sun, known as the astronomical unit. As a result, hundreds of expeditions were sent all over the world to observe the 1761 and 1769 transits. A young James Cook took the Endeavour to the island of Tahiti, where he successfully observed the transit at a site that is still called Point Venus. Pasachoff expects the transit to confirm his team's theory about the phenomenon called "the black-drop effect" -- a strange, dark band linking Venus's silhouette with the sky outside the Sun that appears for about a minute starting just as Venus first enters the solar disk. Pasachoff and his colleagues will concentrate on observing Venus's atmosphere as it appears when Venus is only half onto the solar disk. He also believes that observations of the transit will help astronomers who are looking for extrasolar planets orbiting stars other than the Sun. "Doing so verifies that the techniques for studying events on and around other stars hold true in our own backyard. In other words, by looking up close at transits in our solar system, we may be able to see subtle effects that can help exoplanet hunters explain what they are seeing when they view distant suns," Pasachoff writes. Not content with viewing this year's transit from Earth, scientists in France will be using the Hubble Space Telescope to observe the effect of Venus's transit very slightly darkening the Moon. Pasachoff and colleagues even hope to use Hubble to watch Venus passing in front of the Sun as seen from Jupiter -- an event that will take place on 20 September this year -- and will be using NASA's Cassini spacecraft, which is orbiting Saturn, to see a transit of Venus from Saturn on 21 December. "We are fortunate in that we are truly living in a golden period of planetary transits and it is one of which I hope astronomers can take full advantage," he writes.
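The alternating spacing is easy to check by generating the dates. A minimal sketch, using only the 8-year pair spacing and the alternating 121.5/105.5-year gaps quoted above (the half-years stand for June versus December transits; dates are approximate):

```python
# Venus transit years from the spacing quoted in the article: transits come
# in pairs 8 years apart, and the gap from the end of one pair to the start
# of the next alternates between 121.5 and 105.5 years.
year = 1631.5                 # first predicted transit (Kepler), approximate
gaps = [8, 121.5, 8, 105.5]   # pair spacing, then the alternating long gaps
years = [year]
for i in range(9):
    year += gaps[i % len(gaps)]
    years.append(year)

print(years)
# [1631.5, 1639.5, 1761.0, 1769.0, 1874.5, 1882.5, 2004.0, 2012.0, 2117.5, 2125.5]
```

The generated list matches the historically observed transits (1631, 1639, 1761, 1769, 1874, 1882, 2004, 2012) and the next pair in 2117 and 2125.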
Editor's note: Looking directly at the sun can cause severe and permanent eye damage. Do not look directly at Venus' transit of the sun. For more information, see the Wikipedia article on the subject.
- Jay M Pasachoff. Venus: it's now or never. Physics World, Volume 25, Issue 5, May 2012 [link]
How are chromosomes ‘painted’? Your 23 pairs of chromosomes contain around 24,000 pairs of genes. FISH – fluorescent in situ hybridisation – can be used to ‘paint’ chromosomes. Scientists prepare chromosomes on a microscope slide, tag a copy of the gene they want to find with a fluorescing dye and add it to the slide. Under a special microscope, the same gene shows up as a brightly coloured dot on the chromosome.
The people in south Asia had no warning of the next disaster rushing toward them the morning of December 26, 2004. One of the strongest earthquakes in the past 100 years had just destroyed villages on the island of Sumatra in the Indian Ocean, leaving many people injured. But the worst was yet to come—and very soon. For the earthquake had occurred beneath the ocean, thrusting the ocean floor upward nearly 60 feet. The sudden release of energy into the ocean created a tsunami (pronounced su-NAM-ee)—a series of huge waves. The waves rushed outward from the center of the earthquake, traveling around 400 miles per hour. Anything in the path of these giant surges of water, such as islands or coastlines, would soon be under water. The people had already felt the earthquake, so why didn't they know the water was coming? Energy from earthquakes travels through the Earth very quickly, so scientists thousands of miles away knew there had been a severe earthquake in the Indian Ocean. Why didn't they know it would create a tsunami? Why didn't they warn people close to the coastlines to get to higher ground as quickly as possible? In Sumatra, near the center of the earthquake, people would not have had time to get out of the way even if they had been warned. But the tsunami took over two hours to reach the island of Sri Lanka 1000 miles away, and still it killed 30,000 people! It is important, though, to understand just how a tsunami will behave when it gets near the coastline. As the ocean floor rises near a landmass, it pushes the wave higher. But much depends on how sharply the ocean bottom changes and from which direction the wave approaches. Scientists would like to know more about how actual waves react. MISR (the Multi-angle Imaging SpectroRadiometer on NASA's Terra satellite) has nine cameras, all pointed at different angles, so the exact same spot is photographed from nine different angles as the satellite passes overhead. The image at the top of this page was taken with the camera that points forward at 46°. The image caught the sunlight reflecting off the pattern of ripples as the waves bent around the southern tip of the island. These ripples are not seen in satellite images looking straight down at the surface. Scientists do not yet understand what causes this pattern of ripples. They will use computers to help them find out how the depth of the ocean floor affects the wave patterns on the surface of the ocean. Images such as this one from MISR will help scientists understand how tsunamis interact with islands and coastlines. This information will help in developing the computer programs, called models, that will help predict where, when, and how severely a tsunami will hit. That way, scientists and government officials can warn people in time to save many lives.
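The quoted speed of around 400 miles per hour follows from the shallow-water wave relation v = sqrt(g·d), where d is the ocean depth. A minimal sketch (the 4,000 m depth is a typical open-ocean value assumed for illustration, not a figure from the article):

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
depth_m = 4000    # typical open-ocean depth (assumed, not from the article)

speed_ms = math.sqrt(g * depth_m)   # shallow-water wave speed
speed_mph = speed_ms * 2.23694      # convert m/s to miles per hour

print(f"{speed_ms:.0f} m/s is about {speed_mph:.0f} mph")  # ~198 m/s, ~443 mph
```

This also explains why the wave slows and piles up as the floor rises near a coast: smaller d means smaller v, and the wave's energy goes into height instead.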
The average Earth surface temperature is 14° C. That's 287 kelvin, or 57.2° F. As you probably realize, that number is just an average. The Earth's temperature can be much higher or lower than this. In the hottest places of the planet, in the deserts near the equator, the temperature on Earth can get as high as 57.7° C. And in the coldest place, at the south pole in Antarctica, the temperature can dip down to -89° C. The reason the average temperature on Earth is so high is the atmosphere. It acts like a blanket, trapping infrared radiation close to the planet and warming it up. Without the atmosphere, the temperature on Earth would be more like the Moon's, which rises to 116° C in the day and then dips down to -173° C at night.
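The equivalences quoted above (14 °C ≈ 287 K ≈ 57.2 °F) are easy to verify with the standard conversions; a minimal sketch:

```python
def c_to_kelvin(c):
    return c + 273.15

def c_to_fahrenheit(c):
    return c * 9 / 5 + 32

for c in (14, 57.7, -89):
    print(f"{c} °C = {c_to_kelvin(c):.1f} K = {c_to_fahrenheit(c):.1f} °F")
# 14 °C = 287.1 K = 57.2 °F, matching the figures in the text
```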
Science Fair Project Encyclopedia
- Green algae
- Land plants (embryophytes)
- Non-vascular embryophytes
- Vascular plants (tracheophytes)
- Seedless vascular plants
- Seed plants (spermatophytes)
Plants are a major group of living things (about 300,000 species), including familiar organisms such as trees, flowers, herbs, and ferns. Aristotle divided all living things between plants, which generally do not move or have sensory organs, and animals. In Linnaeus' system, these became the Kingdoms Vegetabilia (later Plantae) and Animalia. Since then, it has become clear that the Plantae as originally defined included several unrelated groups, and the fungi and several groups of algae were removed to new kingdoms. However, these are still often considered plants in many contexts. Indeed, any attempt to match "plant" with a single taxon is doomed to fail, because plant is a vaguely defined concept unrelated to the presumed phylogenic concepts on which modern taxonomy is based.
- See main article at Embryophytes
Most familiar are the multicellular land plants, called embryophytes. They include the vascular plants, plants with full systems of leaves, stems, and roots. They also include a few of their close relatives, often called bryophytes, of which mosses are the most common. All of these plants have eukaryotic cells with cell walls composed of cellulose, and most obtain their energy through photosynthesis, using light and carbon dioxide to synthesize food. About 300 plant species do not photosynthesize but are parasites on other species of photosynthetic plants. Plants are distinguished from green algae, from which they evolved, by having specialized reproductive organs protected by non-reproductive tissues.
Bryophytes first appeared during the early Palaeozoic. They can only survive in moist environments, and remain small throughout their life-cycle. This involves an alternation between two generations: a haploid stage, called the gametophyte, and a diploid stage, called the sporophyte. The sporophyte is short-lived and remains dependent on its parent.
Vascular plants first appeared during the Silurian period, and by the Devonian had diversified and spread into many different land environments. They have a number of adaptations that allowed them to overcome the limitations of the bryophytes. These include a cuticle resistant to desiccation, and vascular tissues which transport water throughout the organism. In many the sporophyte acts as a separate individual, while the gametophyte remains small.
The first primitive seed plants, Pteridosperms (seed ferns) and Cordaites, both groups now extinct, appeared in the late Devonian and diversified through the Carboniferous, with further evolution through the Permian and Triassic periods. In these the gametophyte stage is completely reduced, and the sporophyte begins life inside an enclosure called a seed, which develops while on the parent plant, and with fertilisation by means of pollen grains. Whereas other vascular plants, such as ferns, reproduce by means of spores and so need moisture to develop, some seed plants can survive and reproduce in extremely arid conditions.
Early seed plants are referred to as gymnosperms (naked seeds), as the seed embryo is not enclosed in a protective structure at pollination, with the pollen landing directly on the embryo. Four surviving groups remain widespread now, particularly the conifers, which are dominant trees in several biomes.
The angiosperms, comprising the flowering plants, were the last major group of plants to appear, emerging from within the gymnosperms during the Jurassic and diversifying rapidly during the Cretaceous. These differ in that the seed embryo is enclosed, so the pollen has to grow a tube to penetrate the protective seed coat; they are the predominant group of flora in most biomes today.
Algae and Fungi
The algae comprise several different groups of organisms that produce energy through photosynthesis. The most conspicuous are the seaweeds, multicellular algae that often closely resemble terrestrial plants, found among the green, red, and brown algae. These and other algal groups also include various single-celled creatures and forms that are simple collections of cells, without differentiated tissues. Many can move about, and some have even lost their ability to photosynthesize; when first discovered, these were considered as both plants and animals.
The embryophytes developed from green algae; the two are collectively referred to as the green plants or Viridiplantae. The kingdom Plantae is now usually taken to mean this monophyletic group, as shown above. With a few exceptions among the green algae, all such forms have cell walls containing cellulose and chloroplasts containing chlorophylls a and b, and store food in the form of starch. They undergo closed mitosis without centrioles, and typically have mitochondria with flat cristae. The chloroplasts of green plants are surrounded by two membranes, suggesting they originated directly from endosymbiotic cyanobacteria. The same is true of the red algae, and the two groups are generally believed to have a common origin. In contrast, most other algae have chloroplasts with three or four membranes. They are not in general close relatives of the green plants, acquiring chloroplasts separately from ingested or symbiotic green and red algae.
Unlike embryophytes and algae, fungi are not photosynthetic, but are saprophytes: they obtain their food by breaking down and absorbing surrounding materials. Most fungi are formed by microscopic tubes called hyphae, which may or may not be divided into cells but contain eukaryotic nuclei. Fruiting bodies, of which mushrooms are the most familiar, are actually only the reproductive structures of fungi. They are not related to any of the photosynthetic groups, but are close relatives of animals.
The photosynthesis and carbon fixation conducted by land plants and algae are the ultimate source of energy and organic material in nearly all habitats. These processes also radically changed the composition of the Earth's atmosphere, which as a result contains a large proportion of oxygen. Animals and most other organisms are aerobic, relying on oxygen; those that do not are confined to relatively few, anaerobic environments.
Much of human nutrition depends on cereals. Other plants that are eaten include fruits, vegetables, herbs, and spices. Some vascular plants, referred to as trees and shrubs, produce woody stems and are an important source of building material. A number of plants are used decoratively, including a variety of flowers.
Simple plants like algae may have short life spans as individuals, but their populations are commonly seasonal. Other plants may be organized according to their seasonal growth pattern:
- Annual: live and reproduce within one growing season.
- Biennial: live for two growing seasons; usually reproduce in second year.
- Perennial: live for many growing seasons; continue to reproduce once mature.
Among the vascular plants, perennials include both evergreens that keep their leaves the entire year, and deciduous plants which lose their leaves for some part. In temperate and boreal climates, they generally lose their leaves during the winter; many tropical plants lose their leaves during the dry season.
The growth rate of plants is extremely variable. Some mosses grow less than 1 μm/h, while most trees grow 25-250 μm/h. Some climbing species, such as kudzu, which do not need to produce thick supportive tissue, may grow up to 12,500 μm/h.
Plant fossils include roots, wood, leaves, seeds, fruit, pollen, spores and amber (the fossilized resin produced by some plants). Fossil land plants are recorded in terrestrial, lacustrine, fluvial and nearshore marine sediments. Pollen, spores and algae (dinoflagellates and acritarchs) are used for dating sedimentary rock sequences. The remains of fossil plants are not as common as fossil animals, although plant fossils are locally abundant in many regions worldwide.
Early fossil plants are well known from the Devonian period, including the chert of Rhynie in Aberdeenshire, Scotland. The best preserved examples, from which their cellular construction has been described, have been found at this locality. The preservation is so perfect that sections of these ancient plants show the individual cells within the plant tissue. The Devonian period also saw the evolution of what many believe to be the first modern tree, Archaeopteris. This fern-like tree combined a woody trunk with the fronds of a fern, but produced no seeds.
The Coal Measures are a major source of Palaeozoic plant fossils, with many groups of plants in existence at this time. The spoil heaps of coal mines are the best places to collect; coal itself is the remains of fossilised plants, though structural detail of the plant fossils is rarely visible in coal. In the Fossil Forest at Victoria Park in Glasgow, Scotland, the stumps of Lepidodendron trees are found in their original growth positions.
The fossilized remains of conifer and angiosperm roots, stems and branches may be locally abundant in lake and inshore sedimentary rocks from the Mesozoic and Caenozoic eras. Sequoia and its allies, magnolia, oak, and palms are often found. Petrified wood is common in some parts of the world, and is most frequently found in arid or desert areas where it is more readily exposed by erosion. Petrified wood is often heavily silicified (the organic material replaced by silicon dioxide), and the impregnated tissue is often preserved in fine detail. Such specimens may be cut and polished using lapidary equipment. Fossil forests of petrified wood have been found on all continents.
Fossils of seed ferns such as Glossopteris are widely distributed throughout several continents of the southern hemisphere, a fact that gave support to Alfred Wegener's early ideas regarding continental drift theory.
References and further reading
- Thomas N. Taylor and Edith L. Taylor. The Biology and Evolution of Fossil Plants. Prentice Hall, 1993.
- Tree of Life
- Chaw, S.-M. et al. "Molecular Phylogeny of Extant Gymnosperms and Seed Plant Evolution: Analysis of Nuclear 18S rRNA Sequences" (PDF). Molec. Biol. Evol. 14 (1): 56-68. 1997.
Botanical and vegetation databases
- e-Floras (Flora of China, Flora of North America and others)
- United States of America
- Flora Europaea
- 'Dave's Garden' horticultural plant database
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
Google is now serving up more than a hundred years of photographs from Life Magazine. The pictures of the early days of astronomy are just spectacular. The archives contain images of many astronomers who were critical figures in the development of the field, but who have yet to have telescopes named after them. A large fraction of them also seemed to smoke pipes. A huge hero of mine is Walter Baade. Baade was the guy who essentially took over observations at Mt Wilson during the blackouts of WWII. With the lights of Los Angeles snuffed out, and unable to serve in the military himself, he pushed the telescopes on Mt Wilson to their limits, and established the study of stellar populations in nearby galaxies. There are some terrific pictures of Walter Adams working at Mt Wilson. In the picture below, he's holding the telescope controls used for guiding. During an astronomical observation, you have to move the telescope to compensate for the earth's rotation. Nowadays, your computer can take care of it by adjusting the position to keep a bright star at a fixed position on a CCD camera. Back then, you looked through a little spotting scope, and manually adjusted the telescope position to keep it pointed at the right part of the sky. If you let it drift, your image would be blurry. No pee breaks for you, Dr. Adams! The guy kneeling in the figure below is Gerard Kuiper, working on a telescope at McDonald Observatory. He was a planetary astronomer, and the guy for whom the "Kuiper Belt" in the outer solar system was named, although Edgeworth probably deserved more credit for it. (Kuiper actually does have an airborne observatory named after him). And you have to love this picture of Frank Drake, working at the National Radio Astronomy Observatory in Green Bank, West Virginia. You really can never have enough toggle switches. FYI, Drake is the guy behind the "Drake Equation", used to estimate the likelihood of contact with extraterrestrial civilizations. And finally, a wonderful overhead shot of the 100″ telescope at Mt. Wilson. The pictures above are a tiny fraction of the available pictures of working scientists. Cancel your afternoon appointments and dive in.
Two stories appeared this week with more bad news on climate change. First, a review of 866 papers found that animal and plant species are shifting their ranges northward, while polar species are dying out. The linked article from the Post included several paragraphs on the economic implications, in this case for ski resorts and power companies. I imagine that they are not the only industries facing challenges. "Wild species don't care who is in the White House," Parmesan said. "It is very obvious they are desperately trying to move to respond to the changing climate. Some are succeeding. But for the ones that are already at the mountaintop or at the poles, there is no place for them to go. They are the ones that are going extinct." Among the most affected species, Parmesan said, are highland amphibians in the tropics. She said more than two-thirds of 110 species of harlequin frogs, which occupy mountain cloud forests in Central America, have become extinct in the past 35 years. Meanwhile, many pest species -- including roaches, fleas, ticks and tree-killing beetles -- are surviving warming winters in increasing numbers. "We are seeing throughout the Northern Hemisphere that pests are able to have more generations per year, which allows them to increase their numbers without being killed off by cold winter temperatures," said Parmesan. Meanwhile, the rate of increase of carbon emissions has risen since 2000. Around that year, the annual rate of increase rose from about 1% to 2.5%. The Global Carbon Project, which established those figures, identified two causes. "There has been a change in the trend regarding fossil fuel intensity, which is basically the amount of carbon you need to burn for a given unit of wealth," explained Corinne Le Quere, a Global Carbon Project member who holds posts at the University of East Anglia and the British Antarctic Survey. "From about 1970 the intensity decreased - we became more efficient at using energy - but we've been getting slightly worse since the year 2000," she told the BBC News website. "The other trend is that as oil becomes more expensive, we're seeing a switch from oil burning to charcoal which is more polluting in terms of carbon."
NASA began observing a dust storm on the planet Mars on November 10, 2012. Martian dust storms are the largest such storms in our solar system. Over the century that astronomers have monitored them through telescopes -- and now via spacecraft -- these periodic storms have been known to rage for months and grow to cover the entire planet Mars. This one, however, appeared to be dissipating by early December, 2012. Dust storms on Mars sometimes start in the months before Mars is closest to the sun, as it soon will be. Mars will reach perihelion -- its closest point to the sun -- in January 2013. Each Martian year lasts about two Earth years. Regional dust storms expanded and affected vast areas of Mars in 2001 and 2007, but not between those years and not since 2007. The image above is a mosaic taken by a spacecraft in orbit around Mars, the wonderful Mars Reconnaissance Orbiter, on November 18, 2012. Small white arrows outline the area in Mars' southern hemisphere where the 2012 Martian dust storm was building. The storm was not far from two Mars rovers, Opportunity and Curiosity. At that time, Rich Zurek, chief Mars scientist at NASA's Jet Propulsion Laboratory, Pasadena, California, said: This is now a regional dust storm. It has covered a fairly extensive region with its dust haze, and it is in a part of the planet where some regional storms in the past have grown into global dust hazes. For the first time since the Viking missions of the 1970s, we are studying a regional dust storm both from orbit and with a weather station on the surface. That weather station on Mars comes from the Mars rover Curiosity, which landed on Mars on August 5, 2012. NASA says Curiosity's weather station detected atmospheric changes related to the storm. For example, its sensors measured decreased air pressure and a slight rise in overnight low temperature. In fact, dust storms on Mars are known to raise the air temperature of the planet, sometimes globally. The Opportunity rover on Mars -- that stalwart vehicle that has been tooling around on the Red Planet since 2004 and is now near the Endeavour crater on Mars -- does not have a weather station. Opportunity was within 837 miles (1,347 kilometers) of the storm on November 21, NASA said, and did observe a slight drop in atmospheric clarity from its location. If the storm had taken over the entire planet and clouded over the sky, it would have impacted Opportunity most heavily, because that rover relies on the sun for energy. The rover's energy supply would be disrupted if dust from the air fell on its solar panels. Meanwhile, the car-sized Curiosity rover would fare better since it is powered by plutonium instead of solar cells. Curiosity and the Mars Reconnaissance Orbiter are working together to provide a weekly Mars weather report from the orbiter's Mars Color Imager, which you can see here. Bottom line: As Mars nears its perihelion, or closest point to the sun, in January 2013, a major dust storm broke out in the planet's southern hemisphere, where summer is coming. NASA is tracking the storm with both the Curiosity and Opportunity rovers on the Martian surface, and from above with the Mars Reconnaissance Orbiter. These dust storms on Mars sometimes rage for months and cover the entire planet. This one seems to have died down suddenly.
The idea of the hypernova was first proposed by Dr. Bohdan Paczynski of Princeton University. He wanted to explain the gamma ray bursts that typically last a few seconds at a time, come from seemingly random directions in space, and have the potential to produce more energy than anything else in the rest of the universe for a few seconds. The first remnants of such an explosion were identified by Q. Daniel Wang of Northwestern University, using work done by Dr. You-Hua Chu of the University of Illinois at Urbana-Champaign as a base. The two remnants he identified reside in the galaxy M101 in the Ursa Major constellation. They have been given easily memorable names. One is a nebula that is expanding at a hundred miles a second or more, while the other, NGC5471B, is one of the largest pieces of supernova wreckage known, at 850 light years across. Both are about ten times brighter than any known supernova remnants in our galaxy. Little is known for sure about these powerful explosions, although it is suspected that they are the product of the collapse of extremely massive stars or their collisions with superdense objects such as neutron stars. Both relationships also imply that hypernovae probably have something to do with the formation of black holes. Aside from the physics that I'm not going to butcher by trying to explain, these relationships have been deduced primarily from the locations of gamma ray bursts' points of origin as well as the locations of various remnants of hypernovae, which both tend to be in areas of intensive star formation. Said areas are also hotspots for the formation of neutron stars, black holes, and other associated objects. Another hypernova that may come to play an important role in our lives in the near future is Eta Carinae. While it is not yet a hypernova, it is suspected that it will probably become one relatively soon due to its unstable patterns of brightness and dimness over the past 150 years, which have culminated recently in an intensive brightening spell. It now radiates around 400 million times as much light and energy as the sun and is brightening in a way that astronomers do not understand. On the bright side, though, if it does explode again, it will probably be too far away (7500 light years) to hurt those of us who are protected from gamma ray bursts by an atmosphere. If, however, you are an orbital satellite or have any friends who are orbital satellites, I'd be very frightened indeed. A really cool picture of Eta Carinae can be found at http://earthfiles.com/earth040.html, where I also got much of the information on it. Most of the rest of the information in this writeup comes from http://www.space.com/scienceastronomy/astronomy/astrobizarre_000928.html. It should also be noted that I am the layest of lay persons when it comes to this stuff, so if I've gotten anything wrong, please /msg sludgeel so that I can correct it.
A thermally insulated piston-cylinder device initially contains 0.2 m^3 (0.8 kg) of air at 20 degrees Celsius, and the piston is free to move. Now air at 600 kPa and 80 degrees Celsius is slowly supplied to the device through a supply line until the volume increases by 50 percent. Using constant specific heats and assuming air is an ideal gas, determine the entropy generation. I've been thinking about this for a long time but could not figure it out. I would appreciate any help.
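Not an official solution, but one standard way to set it up, assuming the free piston keeps the pressure constant at its initial value and taking cp = 1.005 and R = 0.287 kJ/kg·K for air:

```python
import math

# Sketch of a standard approach (assumptions: the free piston holds pressure
# constant at its initial value, air is an ideal gas with constant specific
# heats, and the cylinder is adiabatic).
R, cp = 0.287, 1.005              # kJ/kg.K for air

V1, m1, T1 = 0.2, 0.8, 293.15     # initial state: m^3, kg, K
Ti, Pi = 353.15, 600.0            # supply line: 80 C, 600 kPa
P = m1 * R * T1 / V1              # piston pressure, ~337 kPa (held constant)
V2 = 1.5 * V1                     # volume grows by 50%

# Energy balance for adiabatic filling with boundary work:
#   mi*h_i = m2*u2 - m1*u1 + P*(V2 - V1),  with mi = m2 - m1.
# For an ideal gas at constant P this collapses to mi = P*dV / (R*Ti).
mi = P * (V2 - V1) / (R * Ti)
m2 = m1 + mi
T2 = P * V2 / (m2 * R)

# Entropy generation (no heat transfer): S_gen = m2*s2 - m1*s1 - mi*s_i
def ds(T, P_, Tref, Pref):        # s(T,P) - s(Tref,Pref) for an ideal gas
    return cp * math.log(T / Tref) - R * math.log(P_ / Pref)

S_gen = m2 * ds(T2, P, Ti, Pi) - m1 * ds(T1, P, Ti, Pi)
print(f"P = {P:.1f} kPa, m2 = {m2:.3f} kg, T2 = {T2:.1f} K")
print(f"S_gen = {S_gen:.4f} kJ/K")  # roughly 0.06 kJ/K under these assumptions
```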
[Numpy-discussion] "Nyquist frequency" in numpy.fft docstring
Sun Jul 11 18:13:44 CDT 2010
Hi! I'm a little confused: in the docstring for numpy.fft we find "For an even number of input points, A[n/2] represents both positive and negative Nyquist frequency..." but according to http://en.wikipedia.org/wiki/Nyquist_frequency (I know, I know, I've bad mouthed Wikipedia in the past, but that's in a different context): "The *Nyquist frequency*...is half the sampling rate of a discrete signal <http://en.wikipedia.org/wiki/Discrete_signal> processing system...The Nyquist frequency should not be confused with the *Nyquist rate*, which is the lower bound of the sampling frequency that satisfies the Nyquist sampling criterion for a given signal or family of signals...*Nyquist rate*, as commonly used with respect to sampling, is a property of a continuous-time signal <http://en.wikipedia.org/wiki/Continuous-time_signal>, not of a system, whereas *Nyquist frequency* is a property of a discrete-time system, not of a signal."
Yet earlier in numpy.fft's docstring we find: "...the discretized input to the transform is customarily referred to as a *signal*."
Should we be using "Nyquist rate" instead of "Nyquist frequency," and if not, why not?
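A quick numerical illustration of the A[n/2] statement, using numpy's fftfreq helper:

```python
import numpy as np

n = 8                       # an even number of input points
freqs = np.fft.fftfreq(n)   # bin frequencies in cycles per sample

print(freqs)
# [ 0.     0.125  0.25   0.375 -0.5   -0.375 -0.25  -0.125]
# The bin at index n/2 = 4 is -0.5 cycles/sample: for even n the positive
# and negative Nyquist frequencies fall on the same bin, which is why the
# docstring says A[n/2] represents both.
```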
A villager, David, has a plot of land in the shape of a quadrilateral. The village head decided to take over some portion of his plot from one of the corners to construct a health center. David agrees to the proposal on the condition that he be given an equal amount of land adjoining his plot, so that his plot becomes triangular. Explain how this plan can be implemented. Please solve this.
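A sketch of the standard construction (the labels ABCD and the choice of corner D are assumptions for illustration): let the plot be quadrilateral ABCD and suppose the corner at D is the one to be taken. Join the diagonal AC, and through D draw a line parallel to AC, meeting BC produced at E. Triangles ACD and ACE stand on the same base AC and lie between the same parallels AC and DE, so

ar(ACD) = ar(ACE).

Hence

ar(ABCD) = ar(ABC) + ar(ACD) = ar(ABC) + ar(ACE) = ar(ABE).

The village head takes the corner triangle ACD, David receives the adjoining triangle ACE of equal area, and his remaining plot is the triangle ABE.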
4.2. Protoplanetary disks
Contrary perhaps to the expectation that protoplanetary disks would be deeply embedded within the clouds from which they form, and that they would therefore be inaccessible to optical observations, HST revealed many dozens of protoplanetary disks ("proplyds"; e.g., Bally, O'Dell and McCaughrean 2000; O'Dell 2001; O'Dell et al. 1993; O'Dell and Wong 1996, following the initial correct identification by Churchwell et al. 1987 and Meaburn 1988). Many of these disks are seen silhouetted against the background nebular light (when they are shielded from photoionization), with some possessing ionized skins and tails (e.g., Bally, O'Dell and McCaughrean 2000; Henney and O'Dell 1999, Fig. 12).
Figure 12. Protoplanetary disks (Proplyds) in the Orion Nebula (M42), HST/WFPC2. Credit: NASA, C. R. O'Dell (Vanderbilt University), and M. McCaughrean (Max-Planck-Institute for Astronomy). http://hubblesite.org/newscenter/archive/1995/45/
The ubiquity of the protoplanetary dust disks (they are seen in 55%-97% of stars; Hillebrand et al. 1998, Lada et al. 2000) demonstrates that at least the raw materials for planet formation are in place around many young stars. Indeed, in a few cases, like the dust ring and disk in HR 4796A and the nearly edge-on disk surrounding Beta Pictoris, the detailed HST images reveal gaps and warping (respectively) that could represent the effects of orbiting planets (Schneider et al. 1999, Kalas et al. 2000). Another aspect of the protoplanetary disks that is significant for planet formation is the discovery of evaporating disks in the Orion Nebula. As was noted in Section IIIA, some of the Orion proplyds were shown to be evaporating (due to photo-ablation by UV radiation from young, nearby stars) at rates of ~10^-7 to 10^-6 M_sun yr^-1 (e.g., Henney and O'Dell 1999). Given that the masses of these disks are typically of order 10^-2 M_sun (if normal interstellar grains are assumed, so that the observed dust emission can be scaled to the total mass), this implies lifetimes for these disks of 10^5 years or less. There exists, however, some evidence that the grain sizes in Orion's disks may, in fact, be relatively large - perhaps of the order of millimeters (Throop 2000). The latter conclusion is based on the fact that the outer portions of the disks appear to be gray (they do not redden background light), and on the failure to detect the disks at radio wavelengths in spite of the implied large extinction in the infrared (hiding the central star in some cases). The observations are thus consistent with grain sizes in excess of the radio wavelength used, of 1.3 mm. When we think about the potential implications of these two findings (about disk lifetimes and grain sizes), we realize that they may have interesting consequences for the demographics of planets in Orion. The relatively short disk lifetimes but relatively large grain sizes may mean that while rocky (terrestrial) planets can form in these strongly irradiated environments, giant planets (that require the accretion of hydrogen and helium from the protoplanetary disk) cannot (unless their formation process is extremely fast; Boss 2000, Mayer et al. 2002).
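The disk-lifetime figure above is a straightforward division of disk mass by mass-loss rate; a minimal sketch:

```python
# Lifetime = disk mass / photo-ablation rate, using the values in the text.
disk_mass = 1e-2            # typical disk mass, solar masses
loss_rates = (1e-7, 1e-6)   # evaporation rates, solar masses per year

for rate in loss_rates:
    print(f"rate {rate:.0e} M_sun/yr -> lifetime {disk_mass / rate:.0e} yr")
# 1e-07 -> 1e+05 yr, 1e-06 -> 1e+04 yr: hence "10^5 years or less"
```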
It is nevertheless clear from the many observations of "hot Jupiters" (giant planets with orbital radii of order 0.05 AU) that less extreme environments do exist, in which giant planets not only form, but also have sufficient time to gravitationally interact with their parent disk and migrate inward, to produce the distribution in orbital separations we observe today (see, for example, Lin, Bodenheimer, and Richardson 1996, Armitage et al. 2002). While disks around young stars produce jets and form planets, similar structures around old stars perhaps help to shape incredible "sculptures" around dying stars.
Gogo Formation. Location: The Kimberley, NW Australia. Age: Upper Devonian, Frasnian. 350 million years.
Fig. 1. 3D skull of the placoderm Mcnamaraspis kaprios. Courtesy of Dr. J. Long.
The placoderm fish Mcnamaraspis was approximately 25 cm long and, like other placoderms, had a bony head shield which was joined to the 'shark'-like body (Fig. 1). Placoderms were the first jawed fish. The Mcnamaraspis skull exhibits annular cartilage preserved in the snout, which has never been observed in other placoderm specimens. This channelled water over its olfactory organs, and hence its sense of smell was acute. This, together with the sharp teeth, probably made the fish a highly successful predator (Fig. 2).
Fig. 2. Reconstruction of the placoderm Mcnamaraspis kaprios. Courtesy of Dr. J. Long.
Fig. 3. Front view of an arthrodire placoderm fish. Note the hard, bony teeth used for grabbing shrimp-like crustaceans.
Fig. 4. Head plates of the long-snouted placoderm Fallocosteus turnerae. Courtesy of Dr. J. Long.
Fig. 5. The skull and lower jaw of a Gogo lungfish, Griphognathus whitei.
Fig. 6. The 3D skull morphology of another Gogo lungfish, Chirodipterus australis.
Fig. 7. Reconstruction of the Gogo reef fauna. Courtesy of Dr. J. Long.
Common Lisp functions are partial; they are not defined for all possible inputs. But ACL2 functions are total. Roughly speaking, the logical function of a given name in ACL2 is a completion of the Common Lisp function of the same name, obtained by adding some arbitrary but ``natural'' values on arguments outside the ``intended domain'' of the Common Lisp function. ACL2 requires that every ACL2 function symbol have a ``guard,'' which may be thought of as a predicate on the formals of the function describing the intended domain. The guard on the primitive car, for example, is (or (consp x) (equal x nil)), which requires the argument to be either an ordered pair or nil. We will discuss later how to specify a guard for a defined function; when one is not specified, the guard simply allows all arguments. But guards are entirely extra-logical: they are not involved in the axioms defining functions. If you put a guard on a defined function, the defining axiom added to the logic defines the function on all arguments, not just on the guarded domain. So what is the purpose of guards? The key to the utility of guards is that we provide a mechanism, called ``guard verification,'' for checking that all the guards in a formula are true. See verify-guards. This mechanism will attempt to prove that all the guards encountered in the evaluation of a guarded function are true every time they are encountered. For a thorough discussion of guards, see the paper [km97].
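A small illustrative sketch (the function name is hypothetical; the :guard declaration and the verify-guards event are standard ACL2):

```lisp
; A sketch of a guarded definition. The function name and body are made up
; for illustration; the guard mirrors the one on CAR quoted above.
(defun head-or-nil (x)
  (declare (xargs :guard (or (consp x) (equal x nil))))
  (car x))

; Logically, the defining axiom covers ALL x, not just the guarded domain.
; Guard verification asks ACL2 to prove that every guard encountered while
; evaluating the body holds whenever the guard of head-or-nil itself holds:
(verify-guards head-or-nil)
```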
Researchers have developed a more reliable approach to synthetic biology, the assembly of genetic 'standard parts' to create an organism with desired traits. They've been able to combine a library of parts with computer models that help predict the behavior of those parts when they're combined in a living system. The approach takes some of the trial and error out of the process, moving 'tweaking' of the system earlier in the process. The team used their improved method to build a genetic timer for brewer's yeast, capable of causing the yeast to clump together within a fermentation vat at a specific time. We'll talk with a member of the team about the research, and what improved synthetic biology might be used for. Produced by Charles Bergquist, Director and Contributing Producer
Authors: H. Ron Harrison
Galileo studied bodies falling under gravity, and Tycho Brahe made extensive astronomical observations which led Kepler to formulate his three famous laws of planetary motion. All these observations were of relative motion. This led Newton to propose his theory of gravity, which could just as well have been expressed in a form that does not involve the concept of force. The approach in this paper extends the Newtonian theory and the Special Theory of Relativity by including relative velocity, drawing on a comparison with electromagnetic effects and on the form of measured data. This enables the non-Newtonian effects of gravity to be calculated in a simpler manner than by use of the General Theory of Relativity (GR). Application to the precession of the perihelion of Mercury and the gravitational deflection of light gives results which agree with observations and are identical to those of GR. This approach could be used to determine the non-Newtonian variations in the trajectories of satellites.
Comments: 8 Pages. Appendix added
Science Fair Project Encyclopedia
Frequency response is the measure of a system's response as a function of frequency. The term applies to any system, but is usually used in connection with electronic amplifiers and similar systems, particularly in relation to audio signals. Because the human ear is generally not sensitive to phase, the frequency response is typically characterized by the magnitude of the system's response, measured in dB, versus frequency. The frequency response of a system is typically measured by applying an impulse to the system and measuring its response (see impulse response), by sweeping a pure tone through the bandwidth of interest, or by applying a maximum length sequence. Once a frequency response has been measured (e.g., as an impulse response), provided the system is linear time-invariant, its characteristic can be approximated with arbitrary accuracy by a digital filter. Similarly, if a system is demonstrated to have a poor frequency response, a digital filter can be applied to the signals prior to their reproduction to compensate for the problem. Frequency responses are often used to indicate the accuracy of amplifiers and speakers for reproducing audio. As an example, a high fidelity amplifier may be said to have a frequency response of 20 Hz - 20,000 Hz ±1 dB, which tells you that the system amplifies all frequencies within that range equally, to within the limits quoted. Such a measure does not include any other indicators of quality (e.g., non-linear distortions of the signal, signal-to-noise ratio, etc.). Frequency response therefore does not guarantee a given quality of audio reproduction, but only indicates that a piece of equipment meets the basic requirements needed for it.
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details
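As a concrete illustration of magnitude-versus-frequency in dB, here is a minimal sketch using SciPy; the Butterworth low-pass filter is an arbitrary example system assumed for illustration, not something from this article:

```python
import numpy as np
from scipy import signal

# Example system (an assumption for illustration): a 4th-order Butterworth
# low-pass filter with a 1 kHz cutoff, sampled at 48 kHz.
fs = 48_000
b, a = signal.butter(4, 1_000, btype="low", fs=fs)

# Evaluate the frequency response and express its magnitude in dB.
freqs, h = signal.freqz(b, a, worN=2048, fs=fs)
magnitude_db = 20 * np.log10(np.abs(h))

for f_target in (100, 1_000, 10_000):
    i = np.argmin(np.abs(freqs - f_target))
    print(f"{freqs[i]:8.1f} Hz : {magnitude_db[i]:7.2f} dB")
# ~0 dB in the passband, about -3 dB at the cutoff, steep roll-off above.
```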
Image: Cunjevoi illustration (Andrew Howells, © Australian Museum)
Cunjevoi can form large colonies on rock platforms. They have a hard outer coat that is often covered in green and brown algae. Cunjevoi have a cylinder-shaped body with two openings at the top. They can grow up to 30 cm in height.
- Common name: Cunjevoi
- Scientific name: Pyura stolonifera
Cunjevoi live on rocky shores in Australia. They are found on reefs, rock platforms, and wharf pylons. Cunjevoi are found in waters up to 12 m deep. Cunjevoi eat plankton. They take sea water in through one of the openings and remove all plankton from it. The plankton is moved to the stomach. Unwanted water is passed out through the second opening. People fishing use Cunjevoi for bait. Orange Tritons eat Cunjevoi. Cunjevoi are sometimes called sea squirts because they can spray a jet of seawater out of their body when squeezed. Cunjevoi belong to a group of animals called Tunicates.
Geodetic Reference Systems
Definition of the various geodetic reference systems and their realizations is important not only for scientific work; against the background of the ever-increasing use of space geodetic techniques such as GPS and the future Galileo system, it is also of major importance for practical applications in geodesy and navigation. Defining a reference system means fixing its conventional constants and conventions, for example the speed of light, the radius and flattening of the earth, and a Cartesian coordinate system. The individual reference systems are realized at the earth's surface as reference frames through marked points (survey points) with their coordinates. Global terrestrial reference systems are Cartesian coordinate systems having their origin at the earth's center of mass and their orientation aligned with the rotational axis of the earth. The national coordinate and height systems may constitute local reference systems with their own reference ellipsoids, geoid models and map projections, and also densifications of the global reference frame.
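To make the ellipsoid-plus-Cartesian picture concrete, here is a minimal sketch converting geodetic coordinates to earth-centered Cartesian (ECEF) coordinates; the WGS84 constants are standard published values, assumed here for illustration rather than taken from the text:

```python
import math

# WGS84 ellipsoid constants (standard values, assumed for illustration)
A = 6378137.0               # semi-major axis, m
F = 1 / 298.257223563       # flattening
E2 = F * (2 - F)            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert latitude/longitude/ellipsoidal height to ECEF X, Y, Z in meters."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(52.0, 13.4, 100.0))   # a point near Berlin, as an example
```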
Double the Pressure
Science brain teasers require understanding of the physical or biological world and the laws that govern it.
In general, if you have a gas in a container and you double the amount of gas, the new pressure will be double the old pressure. I say "in general" because this isn't exactly true, but it is close enough for the purposes of this teaser. So my question is this: If you have a tire filled to the standard 32 psi and you double the amount of air molecules in the tire, what pressure will your tire gauge now read? Assume that the tire does not expand, and that the first sentence of this teaser is exactly true. The answer is NOT 64 psi.
Hint: Atmospheric pressure is about 15 psi. How does this affect the answer?
Why? Because pressure gauges are set to read "0 psi" when the pressure being read is the same as atmospheric pressure. This is very convenient because you can easily tell from the gauge if the container is under pressure or vacuum. However, this also means that "0 psi" does not really mean that there is no pressure in the container - true zero psi occurs at full vacuum. So to solve this problem, you have to recognize the fact that at 32 psi there are enough air molecules in the tire to increase the pressure from full vacuum (no molecules) to 32 psi. Since atmospheric pressure is about 15 psi, the real pressure is 32 + 15 = 47 psi. Since this is the real pressure in the tire, you can now double it to get 94 psi. If the real pressure is 94 psi, the gauge will read 94 - 15 = 79 psi.
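The same gauge/absolute bookkeeping, as a minimal sketch:

```python
ATM_PSI = 15            # atmospheric pressure, approximate (as in the teaser)

gauge_before = 32
absolute_before = gauge_before + ATM_PSI   # 47 psi absolute
absolute_after = 2 * absolute_before       # doubling the molecules doubles
                                           # the ABSOLUTE pressure: 94 psi
gauge_after = absolute_after - ATM_PSI     # what the tire gauge reads

print(gauge_after)      # 79
```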
Hi, I'm new to coding. I'm learning so that I can build websites for myself and my friends. I'm after some help with coding, i.e. PHP, HTML, etc., to build basic websites that involve images and text. Thanks

I remember when I first started building websites I had trouble understanding which languages were used for what and how to use them. So I'll try to clarify that for you:

HTML - the base language of the web. No matter what language you use on the internet, HTML will be a part of your website. HTML is what puts all the different languages together into one page.

CSS - the design language. CSS will allow you to position elements, change element colors, font styles, and many other things. I highly suggest mastering CSS, as you can do MANY MANY things with it -- even animation now, with CSS3.

PHP - the server-side scripting language. PHP is personally my favorite language just because you can do SO MUCH with it. PHP is what will handle anything like contact forms, form submissions, logins, registrations, database management and everything like that. This is the language you will use if you need to store info in a database. MySQL is also very easy to learn, and really, there's not a lot that you actually have to learn to get it to do what you want. Experience mostly just helps in getting the info faster and more efficiently.

Of the two, I'd begin with PHP. It's a little easier to understand. However, it's a server-side language, so you'll require a host to run the programs. You can set up a local host on your computer, but it's not super simple. (If you have a little cash, just buy cheap shared hosting and work on there. That's what I did.)

All that said, my recommendation is to learn Python. From all the research I did, it's by far the easiest to learn and use. You can even use it for server-side programming instead of PHP. (Although I didn't try it, it looks far more complicated to get running than PHP -- Python 3, that is.) Even if you don't end up using Python, the lessons you learn from it will allow you to quickly pick up any programming language you choose to learn. Programming is a major challenge; Python makes it as easy as possible.

I'd recommend going right to the syntax of the language you want to learn, and studying that until you understand it. I remember struggling through tutorials, feeling like I wasn't really getting it because I wouldn't understand all the examples or terminology. Since then I have always got my head around the concept of a language first, be it a simple markup language or programming, after which I can expand my knowledge of that language very quickly with relative ease. Oh, and did I mention everything you ever need to know about web development is a quick Google search away? I've heard that up to 9 out of 10 web developers are self-taught; all you need is the motivation and time.
Did NASA have a dirty little secret about the Apollo 12 mission? A team of researchers has located and reviewed NASA's archived Apollo-era 16 millimeter film -- and has come up with a definitive answer to the persistent claim, in both the press and on the Web, that a microbe survived 2.5 years on the moon. Apollo 12 was launched at 11:22:00 a.m. EST on November 14, 1969. The mission plan called for a landing in the Oceanus Procellarum (Ocean of Storms) area. This site was near Surveyor III and other earlier unmanned missions to the moon. It landed there almost five days after launch. The astronauts collected rock samples, mostly basalt and igneous rocks. The Surveyor III camera team thought they had detected a microbe that had lived on the moon for all those years, "but they only detected their own contamination," Rummel added. Rummel, along with colleagues Judith Allton of NASA's Johnson Space Center and Don Morrison, a former space agency lunar receiving laboratory scientist, recently presented their co-authored paper: "A Microbe on the Moon? Surveyor III and Lessons Learned for Future Sample Return Missions." Elsewhere, while the Apollo moon "microbes" were being debunked, research by a team of scientists at the University of London reinforced a theory that evidence of life on the early Earth might be found in rocks on the moon that were ejected during the Late Heavy Bombardment period -- about four billion years ago, when the Earth was subjected to a rain of asteroids and comets. Given that material from early Mars has been found in meteorites on Earth, it certainly seems reasonable that tens of thousands of tons of terrestrial meteorites may have arrived there during the Late Heavy Bombardment. Research by a team under Ian Crawford and Emily Baldwin of the Birkbeck College School of Earth Sciences at the University of London in 2008 used sophisticated technology to simulate the pressures any such terrestrial meteorites might have experienced during their arrival on the lunar surface. In many cases, the pressures could be low enough to permit the survival of biological markers, making the lunar surface a productive place to look for evidence of early terrestrial life. Any such markers are unlikely to remain on Earth, where they would have been erased long ago by more than three billion years of volcanic activity, later meteor impacts, or simple erosion by wind and rain. However, meteorites arriving on Earth are decelerated by passing through our atmosphere. As a result, while the surface of the meteorite may melt, the interior is often preserved intact. Could a meteorite from Earth survive a high-velocity impact on the lunar surface? Crawford and Baldwin used finite element analysis to simulate the behavior of two different types of meteors impacting the lunar surface. Crawford and Baldwin's group simulated their meteors as cubes, and calculated pressures at 500 points on the surface of the cube as it impacted the lunar surface at a wide range of impact angles and velocities. In the most extreme case they tested (vertical impact at a speed of some 11,180 mph, or 5 kilometers per second), Crawford reports that "some portions" of the simulated meteorite would have melted, but "the bulk of the projectile, and especially the trailing half, was subjected to much lower pressures." At impact velocities of 2.5 kilometers per second or less, "no part of the projectile even approached a peak pressure at which melting would be expected."
Crawford concluded that biomarkers ranging from the presence of organic carbon to "actual microfossils" could have survived the relatively low pressures experienced by the trailing edge of a large meteorite impacting the moon. Crawford suggests that the key to finding terrestrial material is to look for water locked inside the minerals; these hydrates can be detected using infrared (IR) spectroscopy. Many minerals on Earth are formed in processes involving water, volcanic activity, or both. By contrast, the moon lacks both liquid water and active volcanoes. Crawford and his co-authors believe that a high-resolution IR sensor in lunar orbit could be used to detect any large (over one meter) hydrate-bearing meteorites on the lunar surface, while a lunar rover with such a sensor "could search for smaller meteorites exposed at the surface." Crawford suggests that it might also be necessary to dig below the surface to find terrestrial meteorites. He adds that collecting samples, observing them on the lunar surface, and picking those that warrant a return to Earth for detailed analysis "would be greatly facilitated by a human presence on the moon." The last U.S. astronaut to set foot on the moon, Dr. Harrison Schmitt, was a geologist. With NASA's plans for a return to the moon later in this century shelved, it looks like it will be up to China to search for hydrated rocks and solve the mystery of how life began on the Earth.
<urn:uuid:c97602ef-faa0-4bb6-a20a-e324a6c544e7>
3.90625
1,019
Nonfiction Writing
Science & Tech.
39.706186
The Coriolis Effect
The Coriolis force comes from the rotation of Earth. Earth spins on its axis at a rate of one rotation per 24 hours. At the equator, this is equivalent to approximately 1,600 km per hour; this is the speed a person standing at the equator experiences. But at the North and South Poles, the speed is zero. This differential in speed causes eddies (swirling patterns) in the atmosphere. These in turn affect weather patterns. Put a few drops of food coloring on a tennis ball, gently lower it into a tub of water, and give it a spin with your fingers. Note the patterns of motion that the food coloring makes in the water. Hurricanes spin counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere because of the Coriolis force. We don't notice the spinning of Earth directly, because we move at constant velocity (speed and direction). A popular myth holds that the water in toilets and sinks demonstrates the Coriolis effect (the observed effect of the Coriolis force) by draining counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. However, this actually has to do with the design of the toilet or sink rather than where it is located on Earth. NASA scientists must take Coriolis effects into consideration when they launch rockets. In fact, space launching facilities, including the Johnson Space Center in Houston and Cape Canaveral in Florida, are located in the south to take advantage of the greater speed of Earth's surface at those latitudes (distances north or south of the equator, measured by imaginary lines running east to west parallel to the equator). This activity shows you several ways to demonstrate the Coriolis force and its effects.
- 2-L soda bottle
- food coloring
- metric ruler
- merry-go-round or a swivel chair and weights
- safety goggles
- steel washer, nut, or other small weight
- 1-m nylon fishing string or line
These effects were first described by Gaspard-Gustave de Coriolis (1792–1843), a French engineer and mathematician.
- Fill a 2-L soda bottle with water. Turn it upside down and let the bottle begin to pour out. Swirl the bottle clockwise until a miniature cyclone starts. Study the cyclone as the water pours out. Notice that the swirl will remain, powered by gravity, even if you hold the bottle still. For a more dramatic effect, first release a drop of food coloring from a height of 10 cm and allow it to settle into the water. As an extension, you can vary bottle sizes and mouth openings to find out what conditions work best to support this motion.
- Get on a small merry-go-round and give it a good spin. Move toward the center. Notice what happens to the rate of rotation. You spin faster because of a principle called the conservation of angular momentum. Move back to the edge and the spinning slows down. You can demonstrate the same effect in a swivel chair by holding weights in your arms, spinning, and then moving your arms toward and away from your body, or by observing figure skaters as they change their rate of rotation using their arms.
- Put on your safety goggles, and swing a small weight in a circular orbit at the end of a 1-m string. Let the string wind around your finger as shown. The result is always the same: as the length of the string decreases, the speed of the weight increases. The string may be compared to a nearly massless merry-go-round and the weight to a heavy person. Angular momentum is a quantity that is based upon an object's mass and rate of rotation. 
In physics terms, the weight has a radial velocity (motion along the radius, toward your finger) because of the shortening of the string. The radial velocity interacts with the rotational velocity (the speed and direction the weight turns) to produce an acceleration that is tangential (touching but not intersecting) to the path of the weight and acts to speed up the weight. As artificial satellites fall toward Earth out of their orbit, the radius of their orbit (the distance to Earth's center) decreases and their speed increases, until friction becomes so great that they burn up in the atmosphere.
Rocket Science: 50 Flying, Floating, Flipping, Spinning Gadgets Kids Create Themselves by Jim Wiese (New York: John Wiley & Sons, 1995). The Spinning Blackboard and Other Dynamic Experiments on Force and Motion (Exploratorium Science Snackbook series) by Paul Doherty (New York: John Wiley & Sons, 1996).
Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision. Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state's handbook of Science Safety.
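As a quick check of the rotation-speed figures in this activity (about 1,600 km/h at the equator, zero at the poles), here is a small sketch; Earth's radius is rounded to 6,371 km and the rotation period to 24 hours:

    import math

    def surface_speed_kmh(latitude_deg, radius_km=6371.0, period_h=24.0):
        """Tangential speed of Earth's surface at a given latitude, in km/h."""
        circumference = 2 * math.pi * radius_km * math.cos(math.radians(latitude_deg))
        return circumference / period_h

    print(f"Equator: {surface_speed_kmh(0):.0f} km/h")   # ~1668 km/h, the ~1,600 figure
    print(f"45 deg:  {surface_speed_kmh(45):.0f} km/h")  # ~1180 km/h
    print(f"Pole:    {surface_speed_kmh(90):.0f} km/h")  # 0 km/h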
<urn:uuid:9c20f42c-5b5f-4e59-bccf-7f2b13d70696>
4.03125
1,053
Tutorial
Science & Tech.
49.474717
A directory is a kind of file that contains other files entered under various names. Directories are a feature of the file system. Emacs can list the names of the files in a directory as a Lisp list, or display the names in a buffer using the ls shell command. In the latter case, it can optionally display information about each file, depending on the options passed to the ls command.
directory-files
This function returns a list of the names of the files in the directory directory. By default, the list is in alphabetical order. If full-name is non-nil, the function returns the files' absolute file names. Otherwise, it returns the names relative to the specified directory. If match-regexp is non-nil, this function returns only those file names that contain a match for that regular expression - the other file names are excluded from the list. On case-insensitive filesystems, the regular expression matching is case-insensitive. If nosort is non-nil, directory-files does not sort the list, so you get the file names in no particular order. Use this if you want the utmost possible speed and don't care what order the files are processed in. If the order of processing is visible to the user, then the user will probably be happier if you do sort the names.
(directory-files "~lewis") ⇒ ("#foo#" "#foo.el#" "." ".." "dired-mods.el" "files.texi" "files.texi.~1~")
An error is signaled if directory is not the name of a directory that can be read.
directory-files-and-attributes
This function is similar to directory-files in deciding which files to report on and how to report their names. However, instead of returning a list of file names, it returns for each file a list (file-name . attributes), where attributes is what file-attributes would return for that file. The optional argument id-format has the same meaning as the corresponding argument to file-attributes (see Definition of file-attributes).
file-expand-wildcards
This function expands the wildcard pattern pattern, returning a list of file names that match it. If pattern is written as an absolute file name, the values are absolute also. If pattern is written as a relative file name, it is interpreted relative to the current default directory. The file names returned are normally also relative to the current default directory. However, if full is non-nil, they are absolute.
insert-directory
This function inserts (in the current buffer) a directory listing for directory file, formatted with ls according to switches. It leaves point after the inserted text. switches may be a string of options, or a list of strings representing individual options. The argument file may be either a directory name or a file specification including wildcard characters. If wildcard is non-nil, that means treat file as a file specification with wildcards. If full-directory-p is non-nil, that means the directory listing is expected to show the full contents of a directory. You should specify t when file is a directory and switches do not contain ‘-d’. (The ‘-d’ option to ls says to describe a directory itself as a file, rather than showing its contents.) On most systems, this function works by running a directory listing program whose name is in the variable insert-directory-program. If wildcard is non-nil, it also runs the shell specified by shell-file-name, to expand the wildcards. MS-DOS and MS-Windows systems usually lack the standard Unix program ls, so this function emulates the standard Unix program ls with Lisp code. As a technical detail, when switches contains the long ‘--dired’ option, insert-directory treats it specially, for the sake of dired. 
However, the normally equivalent short ‘-D’ option is just passed on to insert-directory-program, like any other option.
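A few hedged usage sketches of the functions described above; the ~/elisp directory and the *.texi pattern are hypothetical examples, not files assumed to exist:

    ;; Absolute names of all .el files in ~/elisp (hypothetical), unsorted:
    (directory-files "~/elisp" t "\\.el\\'" t)

    ;; Expand a wildcard relative to the current default directory:
    (file-expand-wildcards "*.texi")

    ;; Insert an ls-style listing of the current directory at point:
    (insert-directory default-directory "-l" nil t)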
<urn:uuid:db58271f-d93c-4c47-bbd4-c966b7a25c6d>
3.15625
844
Documentation
Software Dev.
46.495689
Page 3 of 3 Now we come to a sideline in the language paradigm story. There is a lot of talk about dynamic languages at the moment. This is partly because until quite recently the dominant languages Java, C++ and C# were static languages - so it's in large part a reaction against the old guard. The static/dynamic distinction is a very difficult one to pin down precisely. In the old days it could be summed up as the split into compiled and interpreted languages, but today it is more about a split in the approach to how object oriented programming should be done. The current meaning of Dynamic when applied to languages usually refers to typing. When you create an object oriented language you have to decide if it is going to be strongly or weakly typed. Every object defines a type, and in a strongly typed language you specify exactly what type can be used in any given place. If a function needs parameters of type Apple you can't call it using parameters that are Oranges. The alternative approach is to allow objects of any type to be used anywhere and just let the language try to do the best job it can with what it is presented - this is weak typing. Now in a strongly typed language you can choose to enforce the typing when the language is compiled or at run time. This is the main distinction between static and dynamic typing. In a statically typed language you can look at a variable and see that it is being assigned an Apple just by reading the code. In a dynamically typed language you can't tell what is being assigned to a variable until the assignment is done at run time. Clearly there is an interaction between strong and weak typing and static and dynamic. A weakly typed language really doesn't have much choice but to use what looks like dynamic typing. Weak or dynamic typing can make programs easier to write, but the loss of the discipline of controlling type makes it more likely that a runtime error will occur, and arguably makes it harder to find bugs. Dynamic languages may have something to offer the future, but there is a sense in which we are simply returning to the wild, primitive expression of programming that existed in a time before we learned better. Perhaps every so many generations programmers need to experience programming in the raw. The final paradigm is the graphic language - if they can be called languages. This is a strange mix of the object-oriented approach and the declarative with a little procedural thrown in. However, to classify the approach in this way is to miss the bigger picture - no, just to miss the picture. The idea is that if code objects are to mimic real world objects, let's give them a physical appearance. In the world of the user interface we are very well accustomed to this approach - a button that you drag and drop onto a page is a physical representation of the button code object. You get to work with the button as if it was a real button - you can click it, drag it, size it, change its color and so on. Graphical objects in the UI led to the component revolution which we are still developing - from ActiveX to WPF, Widgets and so on. Now consider using the same approach to building programs in general. You could have a loop component, a conditional component, a module component and so on. These could be assembled just like a user interface by a drag-and-drop designer, and "writing the code" would be a matter of connecting them together in a flow-of-control graph. 
Some components would need you to write a few lines of procedural code to specify their actions more precisely, but mostly components fit together naturally without extra code; you just specify a few properties. This approach to building programs has to date mostly been used in languages such as Scratch and the Lego Mindstorms robots to get children interested in programming. However, what is easy for children should be very easy for us, and the method could well translate to more ambitious projects. Google recently announced a graphical programming environment for Android - but it's still in the early stages of testing. Of all the techniques described so far it is graphical programming that I'd bet was the way of the future - but how far in the future is another matter. There are a large number of other approaches to programming that we haven't considered, but they are mostly side issues and special environments. For example, there is the whole issue of synchronous v asynchronous or event driven programming. Then there is the big question of sequential v parallel programming and so on. There is also the convergence of AI and programming. For example, using genetic algorithms you can evolve a program rather than writing it. If you have a favourite approach that has been left out, or want to request an article about an approach or any aspect of programming theory, then email the editor.
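To make the static/dynamic distinction above concrete, here is a small illustrative sketch in Python, a dynamically typed language; the Apple and Orange classes are just the hypothetical types from the discussion, not anything from a real library:

    class Apple:
        def squeeze(self):
            return "apple juice"

    class Orange:
        def squeeze(self):
            return "orange juice"

    def make_juice(fruit):
        # No declared parameter type: Python checks only at run time
        # that 'fruit' actually has a squeeze() method (duck typing).
        return fruit.squeeze()

    print(make_juice(Apple()))    # works
    print(make_juice(Orange()))   # also works - no compile-time type check
    # make_juice(42)              # would raise AttributeError, but only when run

In a statically typed language the last call would be rejected at compile time; here the mistake surfaces only when that line executes, which is exactly the trade-off the article describes.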
<urn:uuid:d76751d3-0d58-4de9-95c0-a1ce4412c76b>
2.90625
1,010
Truncated
Software Dev.
44.909719
This article was published originally on 8/25/2010. Alert readers across Iowa and in neighboring states are asking, "Why are there so many dragonflies this summer?" I'm not sure what explains this larger-than-normal number of dragonflies, but callers are reporting anywhere from "dozens" to "hundreds" of dragonflies flying in swarms during late afternoon to early evening. This year's excessive rainfall does not explain the abundance. Dragonflies develop as nymphs in rivers, streams and lakes. Most take at least one year to develop from the egg to the adult stage, and some take 2 or 3 years. So the swarmers you see now are at least one year old and probably two. These are the offspring of last year's adults (if not of the adults that were flying back in 2008 or 2009). To me, that means this year's abundance is related to what happened 1 to 3 years ago, and not what happened 1 to 3 months ago. In fact, I predict that dragonfly numbers will be down in the next 1 to 3 years, as the flooding of 2010 may have been detrimental to nymphs in flooded streams. More water in the stream, and especially flooding, would seem to work against the dragonflies, not for them. What we do know is that dragonflies are more numerous in high-quality water, so abundance is an indicator of healthy aquatic ecosystems, and that's a good thing. Dragonflies are often observed long distances from the nearest water. It appears they travel long distances and then congregate ("swarm") in areas where there is a plentiful flying food source such as emerging winged ants, mosquitoes, etc. Yes, dragonflies eat mosquitoes, but it's apparent they are not keeping up with this year's bumper crop. More information about dragonflies. Dragonfly photographed near Hungry Jack Lake, MN, by Richard Minnick.
<urn:uuid:c09fd244-f0ba-4b95-834a-b95040e33159>
3.140625
399
Knowledge Article
Science & Tech.
57.702115
A major focus in tissue engineering is to create materials that improve and direct cellular interaction. This interaction can be probed by measuring the relative number of cells adhered to a surface, which is thought to be an important step in the cascade of cellular fate processes such as stem cell renewal or differentiation into a specialized cell type. Now, in new work, a group of Australian researchers have utilized the industrially relevant plasma polymerization technique to chemically modify the surface of a biologically inert substrate for the purpose of enhancing cell adhesion. Specifically, they examined the effect of two plasma polymerization parameters, discharge power and deposition time, on properties such as film thickness, chemical composition, and cellular attachment. In all cases, the plasma polymers deposited onto the substrates as thin (8 to 40 nm) films. The chemical composition of the films was found to be more dependent on discharge power than deposition time. They showed that this outcome could be related to the way in which the precursor (monomer) fragments and the plasma polymer film forms (e.g. by crosslinking). Of the two plasma polymers examined, i.e. those formed from either an amine or an aldehyde precursor, it was found that the stem cells adhered best to plasma polymers that closely resembled the aldehyde monomer, i.e. plasma polymers formed at lower power and shorter times. These results have practical implications for the fast, efficient, and inexpensive surface functionalization of materials for tissue engineering.
<urn:uuid:60497062-85c5-4b78-811f-08dd5ae8ef53>
2.765625
304
Academic Writing
Science & Tech.
25.978432
Can Solar Panels Replace Nuclear Energy? A traditional view is that solar power is cool and hip but doesn't have nearly the production muscle that nuclear has. Solar panels are certainly safer than nuclear energy, but will they ever be able to replace nuclear power plants? One blogger, Dan Hahn, seems to think that solar can replace nuclear. Dan Hahn writes that in 2010, enough solar panels were shipped and installed to match the generating capacity of seventeen nuclear power plants. A typical nuclear power plant has a capacity of about one gigawatt. In 2010 alone, new solar panel systems added seventeen gigawatts of capacity, the equivalent of seventeen nuclear power plants. Not only is solar power safer than nuclear power, but nuclear plants can take years, even decades, to build. Solar panel installation helps get people the power they need without having to wait decades. The relative ease of solar panel installation is another reason solar panels might one day replace nuclear power plants. Another argument for solar versus nuclear is that residential solar panels have been decreasing in cost. In the past, solar panel installation has been too expensive for most homeowners to afford. However, this might already be starting to change. Dan Hahn writes that the price of a residential solar panel system is decreasing. He writes that in many areas of the country, Baltimore for example, residential solar panels can pay for themselves "in just six years," as one Baltimore residential solar installer (http://www.solargaines.com/residential.html) notes. Solar panels are not just good for the environment; they are good for consumers' wallets as well. The decreasing costs of residential solar panel systems and ever-growing environmental awareness bode well for the future of solar power - especially to the tune of 17 gigawatts of capacity added last year! There is hope that this green technology will one day be able to power many of our homes and businesses, safely and affordably. A solar-powered future is a bright, safe future.
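As a rough illustration of how a payback figure like "six years" is computed, here is a hedged back-of-the-envelope sketch; the cost and savings numbers are invented placeholders, not figures from the article, and real values vary widely by region, system size, electricity rates, and incentives:

    # Hypothetical placeholder inputs (assumed, not from the article)
    net_system_cost = 12000.0  # dollars, after rebates and tax credits
    annual_savings = 2000.0    # dollars of electricity offset per year

    payback_years = net_system_cost / annual_savings
    print(f"Simple payback: {payback_years:.1f} years")  # 6.0 with these inputs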
<urn:uuid:666f90a6-d7d3-4f3c-81d0-f78433780cc2>
2.828125
483
Personal Blog
Science & Tech.
43.469194
Special Report - Cultivating Future Technologies As the year 2000 arrives, many people are looking back in time to determine how the world will change in the next millennium. Science has always played a significant role in society, but according to Dr. Gerry Stokes, who leads Pacific Northwest National Laboratory's Environmental Science and Health Division, that role will change in the coming century and in the next millennium. He notes that science has evolved over time and will continue to evolve as it plays an increasingly important role in our daily lives. We asked Dr. Stokes about how science has changed and how it will tackle the tough problems in the future. How has science evolved in the last century? Prior to this century, two kinds of science evolved. First, we had Galileo. He asked, `Why should people just think about something when they could go out and measure it?' This way of thinking led to modern experimental science. Then we had Newton and modern mathematically based theoretical science. He brought rigor to the process of creating a self-consistent explanation of existing facts. In the twentieth century, driven by von Neumann, we began computational science, in which we use computer models to examine the consequences of what we think we already know. While this is related to Newton's theoretical approach, it is very different. Do you see computer models as the wave of the future? Yes, but computers aren't large enough to hold everything we know. We have to decide what to put in them, and that is the heart of computational science. Science has traditionally focused on the process of reductionism - taking things apart and forming specialties to look at every little piece. We have to reassemble knowledge to attack the big complicated problems. For example, we don't know how the human body operates as a whole. We study cells, or systems, or some smaller piece of the puzzle that can be brought into the lab or entered into a computer. Computational science will help make the transition from science of the lab to science in the real world. How will this transition from science in the lab to science in the world take place? As I look to the future, I see science as being necessarily multidisciplinary and perhaps interdisciplinary. Teams of people from different disciplines will have to come together to tackle a problem. This will be challenging because we're used to dealing with things in small pieces. There are some technologies in today's world, like automobiles and aircraft, about which no one person knows everything there is to know. Instead we have specialized experts who understand specific parts and work together to create the product. As we look at the real world, if we're not looking at the whole problem, I don't think we know how to ask the right questions to guide these teams on a path to the solution. Can you explain what you mean about the "right" questions? We have a difficult time articulating the big questions. It's not obvious to me that the breakthroughs we need will come from looking through the small windows of traditional science. In studies of global warming the questions being addressed deal with how much the climate is changing, how fast it is changing and what will happen as a result. I'm not convinced that those are the questions we need to be answering. Maybe the question should be more like `How can we characterize the planet in a way to understand how it changes and how we are affected by those changes?' 
Has the obligation of science changed in the last century? The biggest change is that science is far more central to civilization than it was at the start of the century. The advancement of civilization depends on it and reaps the benefits from it. I think society expects more of us. What does society expect from science? The world wants more than technology. The public wants science to help make sense of the world around us—to put things into perspective. In that regard, science has a lot to offer. I think that the environment and health are the two biggest challenges the public wants addressed. How can Pacific Northwest help address those issues? There are three strands in our environmental mission here at the lab. Environmental science helps us understand the legacy of past practices. Society has created situations that are causing difficulty now, and we need science to help `unfoul the footpath.' Then there's the stewardship issue. What kind of legacy are we leaving behind? For every gallon of gas we use we're putting five pounds of carbon into the atmosphere. We want to know if some seemingly unconnected act, such as driving cars, is causing the extinction of a species or the elimination of a small island nation. The focus now is moving to the question of how the environment impacts human health. How is what we're putting into the environment affecting people? The science we use to answer this question is 20 years old. As a society we've based our conclusions on experiments where animals are exposed to high doses and then inferences are made on how lower doses would affect people. Finding a better way is a new and challenging area for science. It's significant because these results are the basis for environmental legislation and regulation. Why did it take so long for the need to understand how the environment affects human health to rise to the surface? It comes back to whether we're asking the right questions. Health issues can be very personal. Medicine is very diagnostic. People feel bad and they want to be healed. Outside of epidemiology, there haven't been many attempts to deal with populations as a whole. We've had computer models of climate systems for about 10 years and yet there are no models of the public health. We need to ask questions like how would changing the smoking habits of every person affect society's health? How many people would still get lung disease from other causes? We need to understand the compounding factors to truly determine the risk elements of disease. Besides computer modeling, what kinds of research are becoming increasingly important? We're learning what drives biotechnology. We're building an understanding of the human genome, which is the code for life. We will then be able to determine what proteins are being made in cells, but that's only part of the picture. Some are made and destroyed, others combine to form something else. Now we're beginning to determine what proteins are actually present and what they do. This will create a new class of diagnostics to show how humans react to the environment. In the final analysis, computation will be critical here as well. What's the point of knowing something if you don't know the consequences? With computer modeling you can decrease the amount of experimentation it takes to make the world approachable.
<urn:uuid:c1dccecb-7abb-42bb-ba4e-6eafc25b490f>
3.421875
1,358
Audio Transcript
Science & Tech.
52.746225
by Staff Writers Washington DC (SPX) Jun 15, 2012 Two of our Milky Way's neighbor galaxies may have had a close encounter billions of years ago, recent studies with the National Science Foundation's Green Bank Telescope (GBT) indicate. The new observations confirm a disputed 2004 discovery of hydrogen gas streaming between the giant Andromeda Galaxy, also known as M31, and the Triangulum Galaxy, or M33. "The properties of this gas indicate that these two galaxies may have passed close together in the distant past," said Jay Lockman, of the National Radio Astronomy Observatory (NRAO). "Studying what may be a gaseous link between the two can give us a new key to understanding the evolution of both galaxies," he added. The two galaxies, about 2.6 and 3 million light-years, respectively, from Earth, are members of the Local Group of galaxies that includes our own Milky Way and about 30 others. The hydrogen "bridge" between the galaxies was discovered in 2004 by astronomers using the Westerbork Synthesis Radio Telescope in the Netherlands, but other scientists questioned the discovery on technical grounds. Detailed studies with the highly sensitive GBT confirmed the existence of the bridge, and showed six dense clumps of gas in the stream. Observations of these clumps showed that they share roughly the same relative velocity with respect to Earth as the two galaxies, strengthening the argument that they are part of a bridge between the two. When galaxies pass close to each other, one result is "tidal tails" of gas pulled into intergalactic space from the galaxies as lengthy streams. "We think it's very likely that the hydrogen gas we see between M31 and M33 is the remnant of a tidal tail that originated during a close encounter, probably billions of years ago," said Spencer Wolfe, of West Virginia University. "The encounter had to be long ago, because neither galaxy shows evidence of disruption today," he added. "The gas we studied is very tenuous and its radio emission is extremely faint - so faint that it is beyond the reach of most radio telescopes," Lockman said. "We plan to use the advanced capabilities of the GBT to continue this work and learn more about both the gas and, hopefully, the orbital histories of the two galaxies," he added. Lockman and Wolfe worked with D.J. Pisano, of West Virginia University, and Stacy McGaugh and Edward Shaya of the University of Maryland. The scientists presented their findings at the American Astronomical Society's meeting in Anchorage, Alaska.
<urn:uuid:7a86fc5e-2416-4f57-970d-e4d60a1a52e9>
2.6875
805
Truncated
Science & Tech.
41.044724
Lesson 2: Primary Production and Upwelling in the Ocean
Colorful Convection Currents
Materials / Preparation: Review the instructions and video at Easy Science Experiments: Colorful Convection Currents. Each group of students will need the materials listed in those instructions. Group size: two to four students.
Easy Science Experiments: Colorful Convection Currents includes a video demonstration. If you have never seen this activity done, you may want to review the video before trying the activity. This hot and cold water activity can be rather messy. We encourage you to try the activity yourself before doing it in the classroom. Providing students with buckets can help to avoid major water spills during this activity. Make sure that the bottles are only stacked inside the buckets. You may want to do the activity outside. If you feel that your students will not be able to do the activity, you can show the online video, but it is more powerful to have students do the experiment themselves. Keep an ample supply of hot and cold colored water handy. For most students, it is intuitive that hot and cold water would mix. To see the cold water staying at the bottom may challenge their assumption. This is a good thing! But it's important that the students first set up the experiment with the hot water at the bottom. Make sure to have a thorough discussion about what is happening. For students who haven't studied density, you may want to include some basic density concepts at this point. OPTIONAL: If time permits, as an extension students can also experiment with salt water of various concentrations; this will deepen the students' understanding of layers in the water column.
<urn:uuid:4af460aa-762f-4683-bf6f-41f71039e2c4>
4.09375
341
Tutorial
Science & Tech.
47.115169
Hard shell purple bug at the coast of Puerto Rico
Thu, Feb 26, 2009 at 7:12 PM
I was staying at a hotel on the east coast of the island of Puerto Rico and went to the shore to look at the ocean at around midday. This thing was purple, had a hard shell, did not move at all, and was about 5 inches long and 3 inches wide. It was within the rocks. This was in summer 2006. East coast of Puerto Rico.
The creature in your photograph is a Chiton. Chitons are primitive marine molluscs that have shells composed of 8 plates. The shells provide protection against waves, which enables chitons to survive on stormy rocky coasts. Chitons are sometimes called Sea Cradles.
Sat, Feb 28, 2009 at 5:58 AM
Hi Daniel, Ah, another mollusk! This is Acanthopleura granulata (Gmelin, 1791), the West Indian fuzzy chiton. The shell plates of this chiton are actually brownish and are usually very eroded. The pink/purple color on this one is due to a layer of encrusting calcareous red algae. For more info see the Wikipedia article (which I put together.) Best wishes to you, Susan J. Hewitt
Sun, Mar 1, 2009 at 4:43 AM
I wanted to add: 1. That these chitons do move around, but only at night, grazing on microscopic algae which grow on the rock surface. Each one returns to its same spot on the rock at the end of the night. 2. That the maximum size of this species is about 3 inches in length. 3. There is a really excellent book on the chitons of P.R. called "Los Quitones de Puerto Rico" by Cedar I. Garcia Rios.
<urn:uuid:b118e623-a1a2-48f2-9400-c526bbaad919>
2.875
386
Comment Section
Science & Tech.
76.265012
w3schools is a good place to start learning web technologies before reading any other manual or book. Here I am summarizing the relations among most of the XML flavors. I recommend reading it before starting with w3schools. In addition to the above diagram:
- DTD & XSD both do the same job, but XSD is newer than DTD, is itself written in XML, and has more datatypes and other options.
- You need not launch an XSLT processor explicitly to see how your XSL will finally render; one comes with the Internet browser. Just open your XML in a browser: it will read the linked XSL and display the output accordingly.
- XSL-FO can be considered the next version of XSL.
*Editor: I used Eclipse for developing all XML flavours.
HTML vs XHTML
XHTML is nothing but XML-formatted HTML. You need to follow XML rules, nothing else, such as:
- proper nesting of tags
- all tags must be closed
- there must be a root tag
- the value of an attribute must be enclosed in quotes
- tag names and attribute names must be in lowercase
A minimal example follows below.
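A minimal sketch of a well-formed XHTML 1.0 page that follows the rules above; the title, image, and file names are placeholders:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head>
        <title>Sample page</title>
      </head>
      <body>
        <!-- lowercase tags, proper nesting, quoted attribute values -->
        <p>Even empty elements must be closed: <br /></p>
        <img src="logo.png" alt="placeholder logo" />
      </body>
    </html>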
<urn:uuid:ce674d14-2b01-4381-9f44-dee8ed4fd00e>
3.0625
252
Personal Blog
Software Dev.
77.2535
1) Cell growth: You should look into chemotherapy and cancer medicine in general. Because chemo works mostly by killing fast-dividing cells, this has been worked out reasonably well. The 7-10 year number is not really correct; some cells are replaced a lot more slowly. This is why hair often falls out in cancer treatment: the follicle cells are growing quickly. Neurons divide very slowly - if at all - and often are never replaced. Fat cells are in between - probably replaced in the 7-10 year range. Heart cells are replaced, albeit quite slowly - less than 1% per year, which implies that many cells are with you your entire lifetime.
2) Atoms/molecules change: The cell itself is in a continuous state of flux, but different parts of the cell, like cells in the body, change at different rates. Some proteins which make up the cell matrix, or the DNA in the nucleus, are replaced very rarely (through repair or rearrangement of the chromosome, for instance), and most of the chromosomal DNA is with the cell for the entire life of the cell. Most proteins are labelled for degradation and are recycled after a few hours of function. Metabolic compounds such as sugars or salts might drift in and out of the cell continuously, maybe turning over in an hour or so. Fats can be incorporated into the cell and last for years, I think.
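As a quick sanity check on the heart-cell figure, here is a sketch of what "less than 1% per year" implies over a lifetime; the 0.5% annual rate and 80-year span are assumed round numbers, not values from the answer above:

    annual_turnover = 0.005  # assumed: 0.5% of heart cells replaced per year
    years = 80               # assumed lifespan

    remaining = (1 - annual_turnover) ** years
    print(f"Original heart cells remaining after {years} years: {remaining:.0%}")  # ~67%

With these assumptions roughly two-thirds of the original cells survive, consistent with the claim that many heart cells last a lifetime.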
<urn:uuid:d8e27ed7-8a34-4b64-84aa-17d61e576988>
2.796875
285
Q&A Forum
Science & Tech.
52.339461
Interesting case. From the wikipedia article, an average white dwarf has a mass of ~0.6 Msun and a radius of ~0.015 Rsun. If we want it to have the same effective temperature as the Sun, and Earth to end up with the same insolation, then the size a of the orbit is determined by the scaling relation
Rsun^2 / (1 AU)^2 ~ (0.015 Rsun)^2 / a^2 ===> a ~ 0.015 AU ~ 2*10^9 m
which is only about five times the distance to the Moon. Tidal forces depend on mass and the inverse cube of distance, which gives us, in terms of the lunar tidal force:
Ftidal ~ ((1.2*10^30 kg) / (7*10^22 kg)) / 5^3 * Ftidal|lunar ~ 1.5*10^5 Ftidal|lunar
Instead of a tidal bulge on the order of a metre, it would be on the order of 100 km. Meaning that until tidal lock is achieved, the only permanent bodies of water would be lakes with sufficiently steep sides. Anything with shallow sides, including the oceans, would lose its water to two big blobs which would do their best to remain stationary with respect to the new sun while the planet rotates along underneath them. I guess life better stays in those sealed bunkers for the duration. The ordinary scaling law for time to tidal lock is something like
Tlock ~ (10^10 years) / ((Trot in days) * (Ftidal/Ftidal|lunar)^2) ~ (1 year) / (Trot in days)
where Trot is the original (unlocked) day-length with respect to the tide-inducing body. Unless Earth spins a lot faster upon capture than it does now, the result is on the order of 1 year. If, on the other hand, Earth spins a lot slower, which seems like the more likely scenario to me, then the day-length would be mainly determined by the year-length, which works out on the order of 1 day (as in 24 hours), funnily enough. Mind you, I'm not sure if this situation might not be too extraordinary for the scaling relation to apply in that form. It assumes that the combination of atmospheric and ocean tides and planetary deformation is sufficient to actually dissipate rotational energy into heat at the maximal rate, which seems questionable. Assuming that it does, the power output would be vast:
Ptidal ~ Erot / Tlock
Ptidal ~ (1/2 I w^2) / ((1 year) / (Trot in days))
Ptidal ~ (1/2 * (2/5 M R^2) * ((2 pi)/Trot)^2) / ((1 year) * (1 day) / Trot)
Ptidal ~ (7 * (6*10^24 kg) * (6*10^6 m)^2) / ((3*10^7 s) * (10^5 s) * Trot)
Ptidal ~ (5*10^26 kg m^2 / s^2) / Trot
Ptidal ~ Lsun / (Trot in seconds)
I'm not sure where all these numerical coincidences come from, but that aside, it does give one quite a good sense of scale: The Earth has about one ten-thousandth the surface of the Sun, and Trot should be on the order of one hundred thousand seconds, so for the Earth to dissipate that amount of power, it would have to output only one order of magnitude less power per unit surface than the Sun does. Since black-body power output scales with T^4, that means more than half the temperature of the Sun. And as rock begins to melt at as little as 1,000 K, this would directly liquefy the upper portions of the planet. Assuming the interior has been frozen solid in the interim, one ends up with a sort of inverted planet whose surface is hotter than its core. In the long run, what is of more practical import is that this might just be hot enough to boil away (away as in all the way into space) all of our water, even in as little as that one year we're talking about. As I said, the planet might well not actually have the capacity to dissipate power at that rate, so things might not get quite that bad. 
The price for that, though, would be a longer time to lock, so the not-quite-that-bad conditions would last a lot longer. Conclusion: unless the planet is already essentially tidally locked when it assumes a habitable-zone orbit, which seems highly unlikely, the process of locking it would pretty much make it uninhabitable... Catch-22. The only upside I see is that this might indeed just about do what you were hoping for, in the long run, and kick-start the planet's dormant geological activity again.
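For anyone who wants to re-run the arithmetic, here is a small numerical sketch of the orbit and tidal-force scalings above; the constants are rounded textbook values, and the result is order-of-magnitude only (the post's 1.5*10^5 used the rounded factor of five):

    import math

    AU = 1.496e11      # m
    M_wd = 1.2e30      # kg, ~0.6 solar masses (average white dwarf)
    M_moon = 7.35e22   # kg
    d_moon = 3.84e8    # m, mean Earth-Moon distance

    a = 0.015 * AU     # orbit giving solar-level insolation, as derived above
    ratio = a / d_moon # orbit size in lunar distances

    # Tidal force scales as M / r^3; compare with the Moon's tide on Earth.
    tidal_vs_moon = (M_wd / M_moon) / ratio**3

    print(f"Orbit radius: {a:.2e} m ({ratio:.1f} lunar distances)")
    print(f"Tidal force:  ~{tidal_vs_moon:.1e} x the lunar tide")  # order 10^5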
<urn:uuid:d3b3b87d-83e9-49cb-b468-71ed859f82a4>
3.328125
1,063
Comment Section
Science & Tech.
64.952609
The Dip Needle is a compass pivoted to move in the plane containing the magnetic field vector of the earth. It will then show the angle which the magnetic field makes with the vertical. The needle must be accurately balanced so that only magnetic torques are exerted on it. Some texts suggest that the dip angle be measured twice, with the poles of the needle reversed by remagnetization between trials, and the results averaged. Some instruments allow the needle and circle to be rotated to allow use as a compass. The Miami apparatus was made by W. & J. George of London and Birmingham.
This dip needle was made by Ferdinand Ernicke of Berlin, and was on display at the University of Colorado physics department in 1975 when this picture was taken.
The dip needle (or inclination compass) at the left was purchased from Ruhmkorff of Paris, probably in 1875, for Vanderbilt University. It is now on display in the Garland Collection of Classical Physics Apparatus at Vanderbilt. "In carrying out a measurement one sets the needle in the magnetic meridian by turning the support until the needle is vertical, in which case the needle is in a magnetic East-West plane, and then turns the support exactly 90°, at which point the vertical scale circling the needle is in the magnetic meridian. Thereupon the angle the needle makes with the horizontal is the angle of inclination. ... The horizontal circular scale is marked off in half degrees. The associated vernier allows readings to one minute. The vertical scale is marked off in ten-minute intervals." (From Robert A. Lagemann, The Garland Collection of Classical Physics Apparatus at Vanderbilt University (Folio Publishers, Nashville, TN, 1983) pg 152)
The instrument at the left appears to be exactly the same as the one above it. However, it is marked "Gambey à Paris". It is in the apparatus collection of Case Western Reserve University in Cleveland, Ohio.
This Phelps and Gurley (Troy, New York) dip needle was bought by Dartmouth College in 1862. With its case and extra needle it was valued at $20.00. Attached to this apparatus when I looked at it in June 2001 was the following information: "Provided it is well removed from local influences such as iron, magnetite and other ferromagnetic materials, a compass needle that is free to rotate in a vertical plane will point downward in the northern hemisphere at an angle from the horizontal along the line of the Earth's magnetic field. The instrument for measuring this angle is called an inclinometer, dip needle or, most frequently, dip circle."
The dip needle at the left is on display at the University Museum at the University of Mississippi in Oxford. The mechanism pivots so that it can be used either as a dip needle or, in the horizontal orientation, as a compass. The accompanying placard identifies it as being made by Lerebours et Secretan of Paris, but it is not in the 1853 L&S catalogue where so much of the apparatus purchased by Frederick A.P. Barnard in the second half of the 1850s can be found.
The dip needle at the right is at the department of physics at the University of Texas at Austin. The 1888 Queen catalogue lists it as "Inclination Compass. Vertical circle, ten inches in diameter, horizontal circle, five inches; brass posts, base and leveling screws, all delicately finished ..."
This dip needle is at Westminster College in New Wilmington, Pennsylvania. It is about 30 cm high and has no maker's name. It can be flipped horizontally for use as a compass.
<urn:uuid:76a1661c-89cf-47bd-8e0a-26c495acf022>
4.1875
786
Knowledge Article
Science & Tech.
50.417341
Southern Leopard Frog (Rana sphenocephala)
- The southern leopard frog grows to a length of 2 to 3.5 inches (about 5 to 9 cm). Its color varies from tan to several shades of brown to green. The dorsum (back) is usually covered with irregular dark brown spots between distinct light-colored areas. Large dark spots on its legs may create the effect of bands. Other distinguishing characteristics include a light line along its upper jaw, a light spot on its tympanum (ear), and long hind legs and toes. It is slender, with a narrow, pointed head. Males are smaller than females, but with enlarged forearms and thumbs and paired vocal sacs that look like balloons when inflated.
- Life History: Southern leopard frogs are very adaptable and are comfortable in many habitats - they just need cover and moisture. These frogs are great jumpers, traveling high and far in just a few jumps. They consume insects and small invertebrates. Predators such as fish, raccoons, skunks and aquatic snakes feed on the leopard frog. It reaches sexual maturity in the first spring after hatching. In Texas, breeding takes place year round depending on temperature and moisture. Several hundred eggs are laid in a cluster just below the water's surface. Tadpoles hatch in about seven to ten days. Newly hatched tadpoles are only about 20 to 25 mm long. They grow to 65 to 70 mm before metamorphosing into frogs, generally between 60 to 90 days. Southern leopard frogs have a lifespan of 3 years. Southern leopard frogs elude predators by jumping into nearby water and swimming underwater for some distance, while the predator continues looking near the point of entry into the water. They are primarily nocturnal, hiding during the day in vegetation at the water's edge. During wet months, a leopard frog may wander some distance from water, but stays in moist vegetation. They will sometimes wander to colonize. The mating call is a series of abrupt, deep croaks, creating a guttural trill. The trill rate may be as many as 13 per second. Males call from shore or while floating in shallow water. A leopard frog's mottled coloration helps camouflage it. Southern leopard frogs are often used to teach dissection in science classes.
- Shallow freshwater areas are preferred habitat for the southern leopard frog, but they may be seen some distance from water if there is enough vegetation and moisture to provide protection. Southern leopard frogs are also able to live in brackish marshes along the coast.
- Southern leopard frogs range throughout the eastern United States, from New Jersey west as far as Nebraska and Oklahoma and south into the eastern third of Texas.
- The name of the genus comes from the Latin rana (frog). The species name combines the Greek words sphenos (wedge-shaped) and kephale (head) to describe its triangular head. The mating calls of southern leopard frogs are a familiar background sound to many Texans living near ponds, streams and wetlands. To obtain a tape of the calls of frogs and toads of Texas, contact Texas Parks and Wildlife Department, Wildlife Diversity Branch, 512-912-7011.
<urn:uuid:fd96980e-3086-43f3-95d0-b4803516e1e5>
3.5
669
Knowledge Article
Science & Tech.
54.859145
The experimental evidence collected during the last few years has strongly supported the view that the α particle is a charged helium atom, but it has been found exceedingly difficult to give a decisive proof of the relation. In recent papers, Rutherford and Geiger have supplied still further evidence of the correctness of this point of view. The number of α particles from one gram of radium have been counted, and the charge carried by each determined. The values of several radioactive quantities, calculated on the assumption that the α particle is a helium atom carrying two unit charges, have been shown to be in good agreement with the experimental numbers. In particular, the good agreement between the calculated rate of production of helium by radium and the rate experimentally determined by Sir James Dewar, is strong evidence in favour of the identity of the α particle with the helium atom. The methods of attack on this problem have been largely indirect, involving considerations of the charge carried by the helium atom and the value of e/m of the α particle. The proof of the identity of the α particle with the helium atom is incomplete until it can be shown that the α particles, accumulated quite independently of the matter from which they are expelled, consist of helium. For example, it might be argued that the appearance of helium in the radium emanation was a result of the expulsion of the α particle, in the same way that the appearance of radium A is a consequence of the expulsion of an α particle from the emanation. If one atom of helium appeared for each α particle expelled, calculation and experiment might still agree, and yet the α particle itself might be an atom of hydrogen or of some other substance. We have recently made experiments to test whether helium appears in a vessel into which the α particles have been fired, the active matter itself being enclosed in a vessel sufficiently thin to allow the α particles to escape, but impervious to the passage of helium or other radioactive products. The experimental arrangement is clearly seen in the figure. The equilibrium quantity of emanation from about 140 milligrams of radium was purified and compressed by means of a mercury-column into a fine glass tube A about 1.5 cms. long. This fine tube, which was sealed on a larger capillary tube B, was sufficiently thin to allow the α particles from the emanation and its products to escape, but sufficiently strong to withstand atmospheric pressure. After some trials, Mr. Baumbach succeeded in blowing such fine tubes very uniform in thickness. The thickness of the wall of the tube employed in most of the experiments was less than 1/100 mm., and was equivalent in stopping power of the α particle to about 2 cms. of air. Since the ranges of the α particles from the emanation and its products radium A and radium C are 4.3, 4.8, and 7 cms. respectively, it is seen that the great majority of the α particles expelled by the active matter escape through the walls of the tube. The ranges of the α particles after passing through the glass were determined with the aid of a zinc-sulphide screen. Immediately after the introduction of the emanation the phosphorescence showed brilliantly when the screen was close to the tube, but practically disappeared at a distance of 5 cms. Such a result is to be expected. The phosphorescence initially observed was due mainly to the α particles of the emanation and its product radium A (period 3 mins.). 
In the course of time the amount of radium C, initially zero, gradually increased, and the α radiations from it of range 7 cms. were able to cause phosphorescence at a greater distance. The glass tube A was surrounded by a cylindrical glass tube T, 7.5 cms. long and 1.5 cms. diameter, by means of a ground-glass joint C. A small vacuum-tube V was attached to the upper end of T. The outer glass tube T was exhausted by a pump through the stopcock D, and the exhaustion completed with the aid of the charcoal tube F cooled by liquid air. By means of a mercury column H attached to a reservoir, mercury was forced into the tube T until it reached the bottom of the tube A. Part of the α particles which escaped through the walls of the fine tube were stopped by the outer glass tube and part by the mercury surface. If the α particle is a helium atom, helium should gradually diffuse from the glass and mercury into the exhausted space, and its presence could then be detected spectroscopically by raising the mercury and compressing the gases into the vacuum-tube. In order to avoid any possible contamination of the apparatus with helium, freshly distilled mercury and entirely new glass apparatus were used. Before introducing the emanation into A, the absence of helium was confirmed experimentally. At intervals after the introduction of the emanation the mercury was raised, and the gases in the outer tube spectroscopically examined. After 24 hours no trace of the helium yellow line was seen; after 2 days the helium yellow was faintly visible; after 4 days the helium yellow and green lines were bright; and after 6 days all the stronger lines of the helium spectrum were observed. The absence of the neon spectrum shows that the helium present was not due to a leakage of air into the apparatus. There is, however, one possible source of error in this experiment. The helium may not be due to the α particles themselves, but may have diffused from the emanation through the thin walls of the glass tube. In order to test this point the emanation was completely pumped out of A, and after some hours a quantity of helium, about 10 times the previous volume of the emanation, was compressed into the same tube A. The outer tube T and the vacuum-tube were removed and a fresh apparatus substituted. Observations to detect helium in the tube T were made at intervals, in the same way as before, but no trace of the helium spectrum was observed over a period of eight days. The helium in the tube A was then pumped out and a fresh supply of emanation substituted. Results similar to the first experiment were observed. The helium yellow and green lines showed brightly after four days. These experiments thus show conclusively that the helium could not have diffused through the glass walls, but must have been derived from the α particles which were fired through them. In other words, the experiments give a decisive proof that the α particle after losing its charge is an atom of helium. We have seen that in the experiments above described helium was not observed in the outer tube in sufficient quantity to show the characteristic yellow line until two days had elapsed. Now the equilibrium amount of emanation from 100 milligrams of radium should produce helium at the rate of about .03 c.mm. per day. The amount produced in one day, if present in the outer tube, should produce a bright spectrum of helium under the experimental conditions. 
It thus appeared probable that the helium fired into the glass must escape very slowly into the exhausted space, for if the helium escaped at once, the presence of helium should have been detected a few hours after the introduction of the emanation. In order to examine this point more closely the experiments were repeated, with the addition that a cylinder of thin sheet lead of sufficient thickness to stop the α particles was placed over the fine emanation tube. Preliminary experiments, in the manner described later, showed that the lead-foil did not initially contain a detectable amount of helium. Twenty-four hours after the introduction into the tube A of about the same amount of emanation as before, the yellow and green lines of helium were observed; the helium spectrum in this case after one day was of about the same intensity as that after the fourth day in the experiments without the lead screen. It was thus clear that the lead-foil gave up the helium fired into it far more readily than the glass. In order to form an idea of the rapidity of escape of the helium from the lead some further experiments were made. The outer cylinder T was removed and a small cylinder of lead-foil placed round the thin emanation-tube, surrounded by air at atmospheric pressure. After exposure for a definite time to the emanation, the lead screen was removed and tested for helium as follows. The lead-foil was placed in a glass tube between two stopcocks. In order to avoid a possible release of the helium present in the lead by pumping out the air, the air was displaced by a current of pure electrolytic oxygen. The stopcocks were closed and the tube attached to a subsidiary apparatus similar to that employed for testing for the presence of neon and helium in the gases produced by the action of the radium emanation on water (Phil. Mag. Nov. 1908). The oxygen was absorbed by charcoal and the tube then heated beyond the melting-point of lead to allow the helium to escape. The presence of helium was then spectroscopically looked for in the usual way. Using this method, it was found possible to detect the presence of helium in the lead which had been exposed for only four hours to the α rays from the emanation. After an exposure of 24 hours the helium yellow and green lines came out brightly. These experiments were repeated several times with similar results. A number of blank experiments were made, using samples of the lead-foil which had not been exposed to the α rays, but in no case was any helium detected. In a similar way, the presence of helium was detected in a cylinder of tinfoil exposed for a few hours over the emanation-tube. These experiments show that the helium does not escape at once from the lead, but there is on the average a period of retardation of several hours and possibly longer. The detection of helium in the lead and tin foil, as well as in the glass, removes a possible objection that the helium might have been in some way present in the glass initially, and was liberated as a consequence of its bombardment by the α particles. The use of such thin glass tubes containing emanation affords a simple and convenient method of examining the effect on substances of an intense α radiation quite independently of the radioactive material contained in the tube. We can conclude with certainty from these experiments that the α particle after losing its charge is a helium atom. Other evidence indicates that the charge is twice the unit charge carried by the hydrogen atom set free in the electrolysis of water. 
University of Manchester, Nov. 13, 1908

Proc. Roy. Soc. A. lxxxi, pp. 141-173 (1908).
Proc. Roy. Soc. A. lxxxi, p. 280 (1908).
The α particles fired at a very oblique angle to the tube would be stopped in the glass. The fraction stopped in this way would be small under the experimental conditions.
That the air was completely displaced was shown by the absence of neon in the final spectrum.
<urn:uuid:6a2988d4-b120-4574-856d-1ccb64753461>
3.28125
2,242
Academic Writing
Science & Tech.
49.773656
How did spiral galaxy ESO 510-13 get bent out of shape? The disks of many spirals are thin and flat, but not solid. Spiral disks are loose conglomerations of billions of stars and diffuse gas all orbiting a galaxy center. A flat disk is thought to be created by sticky collisions of large gas clouds early in the galaxy's formation. Warped disks are not uncommon, though, and even our own Milky Way Galaxy is thought to have a small warp. The causes of spiral warps are still being investigated, but some warps are thought to result from interactions or even collisions between galaxies. ESO 510-13, pictured above digitally sharpened, is about 150 million light years away and about 100,000 light years across. Credit: Hubble Heritage Team, C. Conselice (U. Wisconsin/STScI) et al.
<urn:uuid:49b1c263-7fa5-42c0-b697-7147a60e3d9b>
3.875
182
Content Listing
Science & Tech.
57.234464
What are Cookies? Interview question and answer by: Hariinakoti | Posted on: 4/7/2012 | Category: ASP.NET Interview questions

A cookie is a small amount of data that a web server stores on the client system.
* Cookie variables are accessible across the different web pages of a website on each client request.
* The importance of cookies lies in storing personal information on the client system (reducing the memory burden on the server) and in identifying the client across different requests.

Cookies are of two types:
1. In-memory cookies
2. Persistent cookies

In-memory cookie: a cookie variable kept within the browser process is called an "in-memory cookie".
Persistent cookie: a cookie placed in hard-disk storage on the client system is called a "persistent cookie".
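As a rough illustration of the two cookie types, here is a sketch in Java servlet terms rather than ASP.NET (the in-memory vs. persistent distinction is the same; the class and cookie names below are made up for the example):

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class CookieDemo {
    public static void addCookies(HttpServletResponse response) {
        // In-memory (session) cookie: a negative max age means the browser
        // keeps it in process memory and discards it when it closes.
        Cookie inMemory = new Cookie("sessionUser", "alice");
        inMemory.setMaxAge(-1);
        response.addCookie(inMemory);

        // Persistent cookie: a positive max age (in seconds) tells the
        // browser to write it to disk and keep it across restarts.
        Cookie persistent = new Cookie("rememberMe", "alice");
        persistent.setMaxAge(7 * 24 * 60 * 60); // keep for one week
        response.addCookie(persistent);
    }
}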
<urn:uuid:e3dbea54-319b-4063-bac5-c13670035df0>
2.9375
187
Q&A Forum
Software Dev.
39.372425
Sphere Packing and Kissing Numbers

Problems of arranging balls densely arise in many situations, particularly in coding theory (the balls are formed by the sets of inputs that the error-correction would map into a single codeword). The most important question in this area is Kepler's problem: what is the most dense packing of spheres in space? The answer is obvious to anyone who has seen grapefruit stacked in a grocery store, but a proof remains elusive. (It is known, however, that the usual grapefruit packing is the densest packing in which the sphere centers form a lattice.) The colorfully named "kissing number problem" refers to the local density of packings: how many balls can touch another ball? This can itself be viewed as a version of Kepler's problem for spherical rather than Euclidean geometry.

- 1st and 2nd Ajima-Malfatti points. How to pack three circles in a triangle so they each touch the other two and two triangle sides. This problem has a curious history, described in Wells' Penguin Dictionary of Curious and Interesting Geometry: Malfatti's original (1803) question was to carve three columns out of a prism-shaped block of marble with as little wasted stone as possible, but it wasn't until 1967 that it was shown that these three mutually tangent circles are never the right answer. See also this Cabri geometry page, the Malfatti circles page, and the Wikipedia Malfatti circles page.
- Algorithmic packings compared. Anton Sherwood looks at deterministic rules for disk-packing on spheres.
- Apollonian Gasket, a fractal circle packing formed by packing smaller circles into each triangular gap formed by three larger circles.
- Basic crystallography diagrams, B. C. Taverner, Witwatersrand.
- The charged particle model: polytopes and optimal packing of p points in n dimensional spheres.
- Circle packing and discrete complex analysis. Research by Ken Stephenson including pictures, a bibliography, and downloadable circle packing software.
- Circle packings. Gareth McCaughan describes the connection between collections of tangent circles and conformal mapping. Includes some pretty PostScript pictures.
- Circles in ellipses. James Buddenhagen asks for the smallest ellipse that contains two disjoint unit circles. Discussion continued in a thread on circles in an ellipse.
- Dense sphere-packings in hyperbolic space.
- Densest packings of equal spheres in a cube, Hugo Pfoertner. With nice ray-traced images of each packing. See also Martin Erren's applet for visualizing the sphere packings.
- A dream about sphere kissing numbers.
- Edge-tangent polytope, illustrating Koebe's theorem that any planar graph can be realized as the set of tangencies between circles on a sphere. Placing vertices at points having those circles as horizons forms a polytope with all edges tangent to the sphere. Rendered by POVray.
- Erich's Packing Page. Erich Friedman enjoys packing geometric shapes into other geometric shapes.
- Figure eight knot / horoball diagram. Research of A. Edmonds into the symmetries of knots, relating them to something that looks like a packing of spheres. The MSRI Computing Group uses this diagram as their logo.
- The fractal art of Wolter Schraa. Includes some nice reptiles and sphere packings.
- Hermite's constants, values associated with dense lattice packings of spheres. Part of Mathsoft's constants pages.
- Improving dense packings of equal disks in a square, D. Boll et al., Elect. J. Combinatorics.
- The Kepler Conjecture on dense packing of spheres.
- Kissing numbers. Eric Weisstein lists known bounds on the kissing numbers of spheres in dimensions up to 24.
- Maximizing the minimum distance of N points on a sphere, ray-traced by Hugo Pfoertner.
- A sample lesson. Ed Dickey advocates teaching about sphere packings and kissing numbers to high school students as part of a strategy involving manipulative devices.
- Min-energy configurations of electrons on a sphere, K. S. Brown.
- Maximum volume arrangements of points on a sphere, Hugo Pfoertner.
- Illumination of a sphere. An interesting variation on the problem of equally spacing points, by Hugo Pfoertner.
- Packing circles in circles and circles on a sphere. Mostly about optimal packing but includes also some nonoptimal spiral and pinwheel packings.
- Packing circles in the hyperbolic plane. Java animation by Kevin Pilgrim illustrating the effects of changing radii.
- Packing pennies in the plane, an illustrated proof of Kepler's conjecture in 2D by Bill Casselman.
- Packing results, D. Boll. C code for finding dense packings of circles in circles, circles in squares, and spheres in spheres.
- Pennies in a tray, Ivars Peterson.
- Packing on a circle and on a sphere.
- Points on a sphere. Paul Bourke describes a simple random-start hill-climbing heuristic for spreading points evenly on a sphere, with pretty pictures and C source.
- Satellite constellations. Sort of a dynamic version of a sphere packing problem: how to arrange a bunch of satellites so each point of the planet can always see one of them?
- Oded Schramm's mathematical picture gallery, primarily concentrating on square tilings and circle packings, many forming fractal patterns.
- N. J. A. Sloane's netlib directory includes many references and programs for sphere packing and clustering in various models. See also his list of sphere-packing and lattice theory publications.
- Soddy's Hexlet, six spheres in a ring tangent to three others, and Soddy's Bowl of Integers, a sphere packing combining infinitely many hexlets.
- Sphere distribution problems. Page of links to other pages, collected by Anton Sherwood.
- Sphere packing and lattices. Razvan Surdulescu computes sphere volumes and describes some lattice packings of spheres.
- Spheres with colorful chickenpox. Digana Swapar describes an algorithm for spreading points on a sphere to minimize the electrostatic potential, via a combination of simulated annealing and conjugate gradient optimization.
- Patterns in disk packings, Lubachevsky, Graham, and Stillinger, Visual Mathematics. A procedure for packing unit disks into square containers produces large grains of hexagonally packed disks with sporadic rattlers along the grain boundaries.
- Waterman polyhedra, formed from the convex hulls of centers of points near the origin in a dense sphere packing. See also Paul Bourke's Waterman Polyhedron page.
- What is an arbelos, you ask?

From the Geometry Junkyard, computational and recreational geometry pointers. Send email if you know of an appropriate page not listed here.
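Two standard numbers behind several of the pages above (noted here for quick reference):

$$\delta_2 = \frac{\pi}{\sqrt{12}} \approx 0.9069, \qquad \delta_3 = \frac{\pi}{\sqrt{18}} \approx 0.7405,$$

the optimal packing density for circles in the plane and, per the Kepler conjecture, the face-centered-cubic ("grapefruit") density for spheres in space. The corresponding kissing numbers are 6 in two dimensions and 12 in three.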
<urn:uuid:fea2ca5c-cf20-4f76-b9e1-3edc485e9b77>
2.953125
1,526
Content Listing
Science & Tech.
44.055508
HERE‘s the latest on an issue to which not nearly enough attention is paid by government and industry: A new study looking at 11,000 years of climate temperatures shows the world in the middle of a dramatic U-turn, lurching from near-record cooling to a heat spike. Research released Thursday in the journal Science uses fossils of tiny marine organisms to reconstruct global temperatures back to the end of the last ice age. It shows how the globe for several thousands of years was cooling until an unprecedented reversal in the 20th century. Scientists say it is further evidence that ...
<urn:uuid:0e577c18-ba39-47c9-99b1-c7b9a7130af7>
2.84375
121
Truncated
Science & Tech.
53.065202
Look up monthly U.S., Statewide, Divisional, and Regional Temperature, Precipitation, Degree Days, and Palmer (Drought) rankings for 1-12, 18, 24, 36, 48, 60-month, and Year-to-Date time periods. Data and statistics are available from January 1895. Please note, Degree Days are not available for Agricultural Belts.

Contiguous U.S. Temperature Rankings, September 1901
More information on Climatological Rankings (out of 119 years)

Apr - Sep 1901: 60th Coldest (ties with 1907); Coldest since: 1899
Apr - Sep 1901: 58th Warmest (ties with 2012); Warmest since: 1900
<urn:uuid:44c6c220-b0b7-41c6-bc3d-957d70d5f2f0>
2.796875
144
Structured Data
Science & Tech.
60.488056
Satellite data suggests that March 2011 was the coolest March in more than a decade. The average worldwide temperature in March was 0.18 degrees below the 30-year average for the month, making it the coldest March since 1999. February was also cold, with temperatures running 0.03 degrees below the long-term average. Satellites began measuring temperature in 1978. The instruments measure the temperature of the atmosphere from the surface up to an altitude of about five miles above sea level. Satellites allow meteorologists to get accurate temperature readings for almost all regions of the Earth, including remote desert, ocean, and rain forest areas where reliable climate data are not otherwise available. The cooling was largely driven by La Nina, a cooling of the equatorial Pacific Ocean.
<urn:uuid:cac9379d-97c9-4cd8-8915-863f64fb6294>
3.875
153
Knowledge Article
Science & Tech.
34.913214
Shale Shocked: ‘Remarkable Increase’ In U.S. Earthquakes ‘Almost Certainly Manmade,’ USGS Scientists Report

A U.S. Geological Survey (USGS) team has found that a sharp jump in earthquakes in America’s heartland appears to be linked to oil and natural gas drilling operations. As hydraulic fracturing has exploded onto the scene, it has increasingly been connected to earthquakes. Some quakes may be caused by the original fracking — that is, by injecting a fluid mixture into the earth to release natural gas (or oil). More appear to be caused by reinjecting the resulting brine deep underground.

Last August, a USGS report examined a cluster of earthquakes in Oklahoma and reported:

Our analysis showed that shortly after hydraulic fracturing began small earthquakes started occurring, and more than 50 were identified, of which 43 were large enough to be located. Most of these earthquakes occurred within a 24 hour period after hydraulic fracturing operations had ceased.

In November, a British shale gas developer found it was “highly probable” its fracturing operations caused minor quakes. Then last month, Ohio oil and gas regulators said “A dozen earthquakes in northeastern Ohio were almost certainly induced by injection of gas-drilling wastewater into the earth.”

Now, in a paper to be delivered at the annual meeting of the Seismological Society of America, the USGS notes that “a remarkable increase in the rate of [magnitude 3.0] and greater earthquakes is currently in progress” in the U.S. midcontinent. The abstract is online. EnergyWire reports (subs. req’d) some of the findings:

The study found that the frequency of earthquakes started rising in 2001 across a broad swath of the country between Alabama and Montana. In 2009, there were 50 earthquakes greater than magnitude-3.0, the abstract states, then 87 quakes in 2010. The 134 earthquakes in the zone last year is a sixfold increase over 20th century levels.

The surge in the last few years corresponds to a nationwide surge in shale drilling, which requires disposal of millions of gallons of wastewater for each well. According to the federal Energy Information Administration, shale gas production grew, on average, nearly 50 percent a year from 2006 to 2010.

I foresee a study in the near future, paid for by the oil industry, that concludes the increase in little earthquakes is actually serving to reduce the frequencies and magnitudes of large ones.
<urn:uuid:55dc7eab-76e0-411f-90d6-503ba50de262>
3.109375
515
Personal Blog
Science & Tech.
43.561945
AROS is a multitasking operating system. This essentially means that multiple programs may be run at the same time. Every running program is called a task. But there are also tasks that are not user-programs. There are, for example, tasks handling the file-system and tasks watching the input devices. Every task gets a certain amount of time in which it is running. After this time it's the next task's turn; the system reschedules the tasks.

Plain tasks are very limited in their capabilities. Plain tasks must not call a function of dos.library or a function that could call a function of dos.library (this includes OpenLibrary() for most cases!). Processes don't have this limitation.

A task is described by a struct Task as defined in exec/tasks.h. This structure contains information about the task like its stack, its signals, and some management data. To get the address of a task structure, use

struct Task *FindTask( STRPTR name );

The name is a pointer to the name of the task to find. Note that this name is case-sensitive! If the named task is not found, NULL is returned; otherwise a pointer to a struct Task is returned. To get a pointer to the current task, supply NULL as name.

The task structure contains a field called tc_UserData. You can use this for your own purposes; it's ignored by AROS.

A task must be in one of the following states (as set in the field tc_State of the task structure):

- TS_INVALID: This state should never be set!
- TS_RUN: The task is currently running. On single processor architectures, only one task can be in that state.
- TS_READY: The task is waiting for its activation.
- TS_WAIT: The task is waiting for one or more signals. As long as no signal arrives, the program doesn't become active; it is ignored on rescheduling. Most interactive programs are in this state most of the time, as they wait for user input.
- TS_EXCEPT: The task is in an exception.

Do not set these states yourself, unless you know exactly what you are doing.

The field tc_Node.ln_Pri of the struct Node embedded in the task structure (see exec/nodes.h and the section about exec lists) specifies the priority of the task. Possible priorities range from -128 to 127. The higher the priority, the more processor time the task gets from the system. To set a task's priority use the function:

BYTE SetTaskPri( struct Task *task, BYTE newpri );

The old priority is returned.

Every task has a stack. A stack is a piece of memory in which a task stores its temporary data. Compilers, for example, use the stack to store variables you use in your programs. On many architectures, the stack is also used to supply library functions with parameters. The size of the stack is limited, therefore only a certain amount of data can be stored in it. The stack-size of a task is chosen by its caller and must be at least 4096 bytes. Tasks should generally not assume that their stack-size is bigger. So, if a task needs more stack, the stack can be exchanged by using the function:

void StackSwap( struct StackSwapStruct *sss );

The only argument, sss, is a pointer to a struct StackSwapStruct as defined in exec/tasks.h. struct StackSwapStruct must contain a pointer to the beginning of the new stack (stk_Lower), to the end of the new stack (stk_Upper), and a new stack-pointer (stk_Pointer). This stack-pointer is normally set either to the same address as stk_Lower or to the same address as stk_Upper, depending on the kind of CPU used. When calling StackSwap(), the StackSwapStruct structure supplied as sss will be filled with information about the current stack.
After finishing with the new stack, the old stack must be restored by calling StackSwap() a second time with the same StackSwapStruct. Normally, only compilers need this function. Handle it with great care, as different architectures use the stack in different ways!

A process is an expanded task. Unlike a task, it can use functions of dos.library, because a process structure contains some special fields concerning files and directories. But of course, all functions that can be used on tasks can also be used on processes. A process is described by a struct Process as defined in dos/dosextens.h. The first field in struct Process is an embedded struct Task. The extra fields include information about the file-system, the console the process is connected to, and miscellaneous other stuff.

Most functions of dos.library set the secondary error-code of the process structure on error. This way the caller can determine why a certain system-call failed. Imagine the function Open(), which opens a named file, fails. There can be multiple reasons for this: maybe the named file doesn't exist, maybe it is read-protected. To find out, you can query the secondary error-code set by the last function by using:

LONG IoErr( void );

DOS-functions return one of the ERROR_ definitions from dos/dos.h. Applications can, of course, process these error-codes as well (which is useful in many cases), but often we just want to inform the user what went wrong. (Applications normally need not care if a file could not be opened because it did not exist or because it was read-protected.) To output human-readable error messages, dos.library provides two functions:

LONG Fault( LONG code, STRPTR header, STRPTR buffer, LONG length );
BOOL PrintFault( LONG code, STRPTR header );

While PrintFault() simply prints an error message to the standard output, Fault() fills a supplied buffer with the message. Both functions take a code argument. This is the code to be converted into a string. You can also supply a header string, which will prefix the error message. The header may be NULL, in which case nothing is prefixed. Fault() also requires a pointer to a buffer, which is to be filled with the converted string. The length of this buffer (in bytes) is to be passed in as the last argument. The total number of characters put into the buffer is returned. You are on the safe side if your buffer has a size of 83 characters plus the size of the header. Examples for the use of these functions can be found in later chapters, especially in the chapter about Files and Directories.

Secondary error-codes from a program are handed back to the caller. If this is a shell, the secondary error-code will be put into the field cli_Result2 of the shell structure (struct CommandLineInterface as defined in dos/dosextens.h, discussed later). You can also set the secondary error-code yourself, either to pass it back to another function in your program or to your caller. To set the secondary error, use:

LONG SetIoErr( LONG code );

code is the new secondary error-code, and the old secondary error-code is returned.
<urn:uuid:7673a73f-8801-4880-9065-a4adbabd2564>
2.890625
1,625
Documentation
Software Dev.
61.993126
hi, this is my first post here. I am a little nervous thinking about how it will turn out, but I will try my best to take you through the topic and make you understand it. The topic I will discuss is called the "Decorator pattern". I will just discuss when to use the decorator pattern and its basic concept.

What is the Decorator pattern?
The decorator pattern is a design pattern used to add more functionality to an existing class dynamically (at runtime).

How does the concept of the decorator pattern work?
The concept is usually regarded as confusing, but it is simple if you just remember the word "wrap".

We have a class "A" which implements an interface "IInterface". "A" has some specific behaviours, "A1", "A2", etc. Now we want to add a new behaviour "B1" to A (this should be done without altering class "A"). We can do it in a simple way:

Create a class "B" implementing "IInterface". "B" will have the behaviour "B1". Then create a class which will help any "IInterface" implementor to wrap around any other "IInterface" implementor. This is our decorator.

Now at runtime, create an instance of "A"; let's call it "objA". Then create an instance of "B"; let's call it "objB". Use the decorator to help objB wrap around objA. The decorator is now nothing but objB with objA inside it, so it has the functionality of both objB and objA, and it will emit the behaviours "A1", "A2", "B1". Note that we did not have to change class A at all.

Now if we want to add another functionality "C1" to this decorator object (having objB and objA), how will we do it? We can do it if we make the decorator class itself implement "IInterface". Recall that the purpose of the decorator is to help any "IInterface" implementor wrap around any other "IInterface" implementor. Now we can have any decorator wrapped in any other decorator.

Hope I was clear. I will give a fuller example in my next post.
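In the meantime, here is a minimal Java sketch of the wrapping idea above (the names are illustrative only, and the separate decorator helper class is folded into "B" for brevity):

interface IInterface {
    String behaviours();
}

// "A" with its own behaviours A1, A2.
class A implements IInterface {
    public String behaviours() { return "A1,A2"; }
}

// "B" adds behaviour B1: it implements IInterface itself
// and wraps any other IInterface implementor.
class B implements IInterface {
    private final IInterface inner;
    B(IInterface inner) { this.inner = inner; }
    public String behaviours() { return inner.behaviours() + ",B1"; }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        IInterface objA = new A();
        IInterface objB = new B(objA);         // wrap objA at runtime
        System.out.println(objB.behaviours()); // prints A1,A2,B1
        // a further class C could wrap objB the same way to add "C1"
    }
}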
<urn:uuid:800cc3b0-0ac4-4945-b6f6-d994be68074d>
3.171875
482
Q&A Forum
Software Dev.
60.226167
Using static and non-static synchronized methods to protect a shared resource is another Java mistake we are going to discuss in this part of our series "Learning from mistakes in Java". In the last article we saw why double and float should not be used for monetary calculations; in this tutorial we will find out why using static and non-static synchronized methods together to protect the same shared resource is not advisable.

I have sometimes seen Java programmers mix a static synchronized method and an instance synchronized method to protect the same shared resource. They either don't know, or fail to realize, that static synchronized and non-static synchronized methods lock on two different objects, which defeats the purpose of synchronizing the shared resource: two threads can concurrently execute the two methods, breaking mutually exclusive access. This can corrupt the state of the mutable object, cause a subtle race condition in Java, or even a horrible deadlock.

Static and non-static synchronized methods in Java
For those who are not familiar: a static synchronized method locks on the class object, e.g. for the String class it is String.class, while an instance synchronized method locks on the current instance, denoted by the "this" keyword in Java. Since these two objects are different, they have different locks; so while one thread is executing the static synchronized method, a second thread does not need to wait for it to return, because it simply acquires the separate lock (on "this" rather than the .class literal) and enters the instance synchronized method. This is even a popular multi-threading interview question, where the interviewer asks which lock a particular method locks on, and it sometimes appears in Java test papers too. The bottom line is: never mix static and non-static synchronized methods to protect the same resource.

Example of mixing instance and static synchronized methods
Here is an example of multithreaded code which uses a static and a non-static synchronized method to protect the same shared resource (see the sketch at the end of this article). Here the shared count is not accessed in a mutually exclusive fashion, which may result in an incorrect count being returned to the caller of getCount() while another thread is incrementing the count using the static increment() method.

That's all on this part of learning from mistakes in Java. Now we know that static and non-static synchronized methods lock on different locks and should not be used to protect the same shared object.
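Here is a minimal sketch matching that description (an illustrative class; the shared static count is read under the instance lock while the static method increments it under a different lock):

public class Counter {

    private static int count = 0;

    // static synchronized: locks on Counter.class
    public static synchronized void increment() {
        count++;
    }

    // instance synchronized: locks on "this", a different lock,
    // so this read is NOT mutually exclusive with increment()
    public synchronized int getCount() {
        return count;
    }
}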
<urn:uuid:23db3bb2-76f3-4261-89ab-37b4430f0042>
3.25
454
Personal Blog
Software Dev.
27.496129
Assembly: System.Xml (in system.xml.dll)

XmlResolver is used to resolve external XML resources, such as entities, document type definitions (DTDs), or schemas. It is also used to process include and import elements found in Extensible StyleSheet Language (XSL) style sheets or XML Schema definition language (XSD) schemas. XmlUrlResolver is a concrete implementation of XmlResolver and is the default resolver for all classes in the System.Xml namespace. You can also create your own resolver.

You should consider the following items when working with the XmlResolver class.

XmlResolver objects can contain sensitive information such as user credentials. You should be careful when caching XmlResolver objects and should not pass the XmlResolver object to an untrusted component.

If you are designing a class property that uses the XmlResolver class, the property should be defined as a write-only property. The property can be used to specify the XmlResolver to use, but it cannot be used to return an XmlResolver object.

If your application accepts XmlResolver objects from untrusted code, you cannot assume that the URI passed into the GetEntity method will be the same as that returned by the ResolveUri method. Classes derived from the XmlResolver class can override the GetEntity method and return data that is different than what was contained in the original URI.

Your application can mitigate memory denial-of-service threats to the GetEntity method by implementing an IStream wrapper that limits the number of bytes read. This helps to guard against situations where malicious code attempts to pass an infinite stream of bytes to the GetEntity method.

The following example creates an XmlReader that uses an XmlUrlResolver with default credentials (note that the settings object must be passed to Create for the resolver to take effect):

// Create an XmlUrlResolver with default credentials.
XmlUrlResolver resolver = new XmlUrlResolver();
resolver.Credentials = CredentialCache.DefaultCredentials;

// Create the reader, passing in the settings that carry the resolver.
XmlReaderSettings settings = new XmlReaderSettings();
settings.XmlResolver = resolver;
XmlReader reader = XmlReader.Create("http://serverName/data/books.xml", settings);

Windows 98, Windows 2000 SP4, Windows CE, Windows Millennium Edition, Windows Mobile for Pocket PC, Windows Mobile for Smartphone, Windows Server 2003, Windows XP Media Center Edition, Windows XP Professional x64 Edition, Windows XP SP2, Windows XP Starter Edition

The .NET Framework does not support all versions of every platform. For a list of the supported versions, see System Requirements.
<urn:uuid:e62b0074-7250-4434-8314-3fccf4f71da8>
2.890625
561
Documentation
Software Dev.
37.327043
4.3 Physical Basis

So far, the motivation for the GCLF as a standard candle is almost totally empirical rather than theoretical. The astrophysical basis for its similarity from one galaxy to another is a challenging problem, and is probably less well understood than for any other standard candle currently in use. Because globular clusters are old-halo objects that probably predate the formation of most of the other stellar populations in galaxies (e.g. Harris 1986, 1988b, 1991; Fall and Rees 1988), to first order it is not surprising that they look far more similar from place to place than their parent galaxies do. Methods for allowing clusters to form with average masses that are nearly independent of galaxy size or type have been put forward by Fall and Rees (1985, 1988), Larson (1988, 1990), Rosenblatt et al. (1988), and Ashman and Zepf (1992) under various initial assumptions. Other constraints arising from cluster metallicity distributions and the early chemical evolution of the galaxies are discussed by Lin and Murray (1991) and Brown et al. (1991). None of these yet serve as more than general guidelines for understanding why the early cluster formation process should be so nearly invariant in the early universe.

After the initial formation epoch, dynamical effects on the clusters, including tidal shocking and dynamical friction, and evaporation of stars driven by internal relaxation and the surrounding tidal field, must also affect the GCLF within a galaxy over many Gyr, and these mechanisms might well behave rather similarly in large galaxies of many different types. Recent models incorporating these effects (e.g. Aguilar et al. 1988; Lee and Ostriker 1987; Chernoff and Shapiro 1987; Allen and Richstone 1988) show that their importance decreases dramatically for distances beyond about 2-3 kpc from the galaxy nucleus, and for the more massive, compact clusters like present-day globulars. In addition, recent photometry (Grillmair et al. 1986; Lauer and Kormendy 1986; Harris et al. 1991) extending in close to the centers of the Virgo ellipticals has shown no detectable GCLF differences with radius. The implication is therefore that today's GCLF resembles the original mass formation spectrum of at least the brighter clusters, perhaps only slightly modified by dynamical processes. Many qualitative arguments can be constructed as to why the GCLFs should, or should not, resemble each other in different galaxies, but at the present time these must take a distant second place to the actual data.
<urn:uuid:3af92f83-cf0f-481c-b4c8-c107f2a3f871>
2.9375
520
Academic Writing
Science & Tech.
40.514769
You are currently browsing the category archive for the 'Combinatorics' category.

Sometimes sequences of numbers are defined recursively, so that given the previous terms of the sequence we can find the next term. A classic example of this is the sequence of Fibonacci numbers, where every two consecutive terms determine the next term, and so if we are given the first two terms, we can calculate the whole sequence. However, if we wanted to, say, calculate the 1000th Fibonacci number, we'd have to start out with the first two, add them to compute the 3rd, add the second and third to compute the 4th, and so on, slowly building up every single Fibonacci number before the 1000th in order to get there. It'd be much nicer if we had a formula for the Fibonacci sequence. That way, if we wanted the 1000th Fibonacci number, all we'd have to do is plug 1000 into the formula to compute it. Of course, the formula is only useful if it takes less time to crunch it out than it does to do it the brute-force way.

A method of finding closed formulas for sequences defined by recurrences is the use of generating functions. Generating functions are functions which "encapsulate" all the information about a sequence, yet you can define them without knowing the actual terms of the sequence. The power of generating functions comes from the fact that you can do things like add and multiply them together to create generating functions of other sequences, or write them in terms of themselves to find an explicit formula. Once you have an explicit form for a generating function, you can use some algebra to "extract" the information from the function, which usually means you can find a formula for the sequence in question.
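To make this concrete, here is the standard worked example for the Fibonacci numbers (added for illustration; it is exactly the "write it in terms of itself, then extract" step described above). Let $F(x) = \sum_{n \ge 0} F_n x^n$ with $F_0 = 0$, $F_1 = 1$ and $F_n = F_{n-1} + F_{n-2}$. Multiplying the recurrence by $x^n$ and summing over $n \ge 2$ gives

$$F(x) - x = x\,F(x) + x^2 F(x), \qquad \text{so} \qquad F(x) = \frac{x}{1 - x - x^2}.$$

A partial-fraction expansion with $\varphi = (1+\sqrt{5})/2$ and $\psi = (1-\sqrt{5})/2$ then "extracts" the closed formula

$$F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}},$$

so the 1000th Fibonacci number can be computed by plugging in $n = 1000$ instead of performing 999 additions.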
<urn:uuid:6cd7ff11-4326-4fa1-9a67-4b6c632dcf70>
2.9375
379
Personal Blog
Science & Tech.
38.804486
A quantum system can emit only photons with energy equal (within the uncertainty) to the difference between two energy states. Even if the atom is in a superposition of energy states

$$\left|\Psi\right> = C_0 \left|0\right> + C_1 \left|1\right> + C_2 \left|2\right> + \ldots \qquad (1)$$

with average energy somewhere between the levels, it can emit only a certain set of photons: $E_1 - E_0$, $E_2 - E_0$, $E_2 - E_1$, etc. Emission of a photon is an act of measurement, since the energy of the emitted particle contains information about the atom. If the energy of the photon is $E_2 - E_1$, then the energy of the electron in the atom is $E_1$, the energy of the final state of the transition. The next photon emitted by this atom will have energy equal to $E_1 - E_0$ for sure.

If one observes photons emitted by an ensemble of atoms in state (1), one will see $E_1 - E_0$ photons with probability $\left|C_1\right|^2$, either of $E_2 - E_0$ and $E_2 - E_1$ with probability $\left|C_2\right|^2$, and so on. The total energy emitted by the system while it is coming to the ground state is equal to the average energy of state (1) multiplied by the number of atoms in the ensemble. Energy conservation is not violated. The same is true for mixed states, for which the probability of a certain photon is determined by the density matrix of the system.
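As a concrete check of this bookkeeping (an illustrative example, not part of the original argument), take the state

$$\left|\Psi\right> = \tfrac{1}{\sqrt{2}}\left(\left|0\right> + \left|2\right>\right), \qquad \left<E\right> = \tfrac{1}{2}\left(E_0 + E_2\right).$$

In an ensemble of $N$ such atoms, about $N/2$ are found in level 2 and each radiates a total of $E_2 - E_0$ (either as a single photon or as the cascade $E_2 - E_1$ followed by $E_1 - E_0$), while the other $N/2$ radiate nothing. The total emitted energy is $\tfrac{N}{2}(E_2 - E_0) = N\left(\left<E\right> - E_0\right)$, i.e. exactly the ensemble's average energy above the ground state.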
<urn:uuid:b33feb16-7bb6-42d7-8b99-6b9496ca2c1b>
3.125
375
Q&A Forum
Science & Tech.
46.866902
When Precipitation Patterns Change

Part A: What is Drought?

In Lab 3, you learned to interpret climographs to understand a location's normal climate. Another way that climographs can be used is to plot current conditions over a background of the average conditions; this provides a graphic way to see how the current year compares to the long-term average. These dynamic graphs indicate if current conditions are abnormally hot, cold, wet, or dry.

- Click the thumbnail image at right to see a larger view of a climograph for San Antonio, Texas. The graph shows conditions for January through mid-July of 2008.
- Examine the graph to interpret the conditions in San Antonio. The background colors (pale red for temperatures and light green for accumulated precipitation) show the average conditions compiled from many years of data. The brighter red and green lines show daily temperatures and accumulated rainfall through July of 2008.
- What does the graph indicate about San Antonio's temperature? The temperature was above average during January but has been in the normal range since then.
- What does the cumulative rainfall graph indicate? Rainfall has been below normal all year. The cumulative total for 2008 is roughly one third of the normal total for the end of July.
- Explore current dynamic weather and climate conditions for stations in the United States via data located at NOAA's Southern Regional Climate Center (SRCC). Once on this page, choose 'Select A Station' from the link under the auto-generated graphic.
- To generate temperature and accumulated precipitation maps for any region of the country, start at the Select a Station link above, and type in the name of your station. On the map that appears, click the map icon and then click 'more.' You can switch between the tabs to see the station information, annual summaries, and climate normals. Use the pull-down menu to change to another year of interest.
- The National Weather Service (NWS) provides local climate records and summaries. Use the following instructions to locate a climograph for your area of interest. Note: instructions will vary by climate office; not all climate offices offer these types of graphs. A few that do include: Cleveland, Ohio, and Burlington, VT.
- Go to the NWS home page Weather.gov, enter your city and state and click Go.
- On the page that opens, click the forecast office title on the upper-left of the page. This will take you to the local climate office page.
- Scroll down the menu list on the left-hand side of the page and click the link for 'Local' under the 'Climate' header.
- On the page that opens choose the Local Data/Records tab. Look at the list of choices and locate the Climate graphs.

Stop and Think 1. List 5 cities or locations for which you examined dynamic climographs or accumulated precipitation maps. Include the date range that you observed. Tell whether each location is wetter than normal, about normal, or drier than normal. Explain your reasoning.

The word "drought" means different things to different people. What visions does the term bring to your mind? Parched land, dried crops, dust storms, and starving livestock are some of the scenes that people associate with the term drought. Unlike most hazardous weather conditions, drought is not always obvious. Drought can be years in the making, as moisture in the soil evaporates and surface water sources disappear due to the lack of rain.

- Read the information at What is drought?
to come up with your own meaningful definition of drought. Discuss your definition with a lab partner to see if it can be improved.

Stop and Think 2. Write a definition for drought, in your own words.

- Learn all about drought at the UNL Drought for Kids page.
- Find out how people study drought by reading the links on the How Do People Study Drought? page.
- Learn about the physical processes that cause or contribute to drought in Earth Observatory's North American Drought article. Read the information about each contributing factor and view the animations about soil moisture (on the second page of the article). The animations will help you to visualize the feedback loop that exists among rainfall, soil moisture, and temperature.
- What are some of the indicators that drought is present? Indicators of drought include soil moisture that is below normal, lower-than-normal rainfall or snowpack, and decreased water levels in streams and reservoirs.
- The 3 main contributors to drought are high temperatures, low soil moisture content, and atmospheric circulation patterns that keep rain away from an area. Tell how each of these factors promotes drought. Higher surface temperatures result in an increase in evaporation of water. This leads to less moisture being available on the surface. If soils are wet, then much of the heat from incoming sunlight is used to evaporate the water they contain, so temperatures are kept cooler. If soil is dry, then there is little or no water available to evaporate and the land surface gets hotter and drier. Air circulation patterns are strongly affected by sea surface temperatures: air rises over areas of warm ocean water, pulling dry air across land.

Is it a drought today?

- Examine the diagram to see the signs of meteorological, agricultural, and hydrological droughts.
- Interpret the chart to answer the following questions.
- What are the causes of soil water deficiency?
- If an area is experiencing reduced streamflow, which stage of drought is occurring?
<urn:uuid:a0c307c8-239d-46b9-a835-41bb2a32f766>
3.765625
1,148
Tutorial
Science & Tech.
49.347593
The H2 Double-Slit Experiment: Where Quantum and Classical Physics Meet

For the first time, an international research team carried out a double-slit experiment in H2, the smallest and simplest molecule. Thomas Young's original experiment in 1803 passed light through two slits cut in a solid thin plate. In the groundbreaking experiment performed at ALS Beamlines 4.0 and 11.0.1, the researchers used electrons instead of light and the nuclei of the hydrogen molecule as the slits. The experiment revealed that only one "observing" electron suffices to induce the emergence of classical properties such as loss of coherence.

Present-day single photoionization experiments demonstrate double-slit self-interference for a single particle fully isolated from the classical environment. But if quantum particles were put in contact with the classical world in a controlled manner, at what scale would quantum interference begin to diminish and particles start to behave classically? The team decided to study the double photoionization (complete fragmentation) of H2, creating two repelling protons acting as a double slit, a fast interfering electron, and a second electron behaving as an active or inactive observer.

Experiments were performed at two different photon energies: Eγ = 240 and 160 eV, leaving about 190 and 110 eV to be shared between the two electrons, respectively. At these high photon energies, double photoionization of H2 led in most cases to one fast and one slow electron. The fast electron's energies were 185 to 190 eV; the slow electron's were 5 eV or less (corresponding to an inactive observer). The interference pattern of the fast electron was conditioned by the presence and velocity of the other: the greater the difference in their speeds, the less their interaction and the more visible the interference patterns. Both electrons were isolated from their surroundings, and quantum coherence prevailed, revealed by the fast electron's wavelike interference pattern at the two protons. However, at high photon-energy levels, the fast electron absorbed almost all the energy of the incident single photon, leaving the system too rapidly for interaction with the slow electron. Yet the slow electron was also ejected from the molecule through the mysterious process of electron–electron correlation. This "secret entanglement" allows two electrons to remain connected even though far apart.

The researchers now had what they needed to build their classical/quantum interface. They chose ionization events where the slow electron had a bit more energy (5–25 eV), allowing it to act as the classical environment (an active observer). The quantum system of the fast electron now interacted with the slow electron and began to decohere, its interference pattern disappearing. However, the overall coherence was still hidden in the two electrons' entanglement. The dielectron's wavelength was short enough to still interfere (the sum energy of the two electrons was high enough), and there was no environment to disturb the interference as the two electrons were now combined into one quasiparticle. Thus, interference between the entangled electrons could be reconstructed by graphing their correlated momenta from the angles at which they were ejected. Two waveforms appeared in the graph, either of which could be projected to show an interference pattern. Because the two waveforms were out of phase with each other, the interference vanished when they were viewed simultaneously.
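A rough back-of-envelope check (our illustration, not a number from the paper) shows why electrons of these energies can interfere on a molecular scale. For a non-relativistic electron of kinetic energy $E$,

$$\lambda = \frac{h}{\sqrt{2 m_e E}} \approx \sqrt{\frac{150\ \mathrm{eV}}{E}}\ \text{angstroms},$$

so a 190 eV electron has $\lambda \approx 0.9$ angstroms, comparable to the H–H internuclear distance of about 0.74 angstroms (and the protons fly farther apart during the Coulomb explosion), so the two nuclei can indeed act as a double slit on the electron's scale.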
If the two-electron system is split into its subsystems and one is thought of as the environment of the other, it becomes evident that classical properties such as loss of coherence can emerge even when only four particles are involved. Yet because the two electrons' subsystems are entangled in a tractable way, their quantum coherence can be reconstructed. In solid-state–based quantum computing devices, such electron–electron interaction represents a key challenge as decoherence and loss of information occur on the tiny scale of a single hydrogen molecule. The good news, however, is that, in theory, the information is not completely lost. Research conducted by D. Akoury, Th. Weber (University Frankfurt, Germany, and Berkeley Lab, U.S.); K. Kreidi, t. Jahnke, A. Staudte, M. Schöffler, N. Neumann, J. Titze, L. Ph. H. Schmidt, A. Czasch, O. Jagutzki, R. A. Costa Fraga, R. E. Grisenti, H. Schmidt-Böcking, R. Dörner (University Frankfurt, Germany); T. Osipov, H. Adaniya, M. H. Prior, A. Belkacem, (Berkeley Lab, CA, U.S.); R. Díez Muiño (Centro de Física de Materiales and Donostia International Physics Center, San Sebastian, Spain); N. A. Cherepkov, S. K. Semenov (State University of Aerospace Instrumentation, St. Petersburg, Russia); P. Ranitovic, C. L. Cocke (Kansas State University, U.S.); J. C. Thompson, A. L. Landers (Auburn University, U.S.) Research funding: Deutsche Forschungsgemeinschaft and by the U. S. Department of Energy, Office of Basic Energy Sciences (BES). Operation of the ALS is supported by BES.
<urn:uuid:d5a97766-4890-4e47-b8ad-668a033e2f72>
3.46875
1,125
Academic Writing
Science & Tech.
43.942948
Image via Wikipedia Did you know that some frogs talk with ultra-sound? In ultrasound, the pitch or frequency of the sound is too high for the human ear to hear. Fish and homing pigeons can see electromagnetic fields, ants can see polarised light, insects and rodents can smell pheromones, so why can't some frogs, somewhere on the planet, hear ultrasound? According to ABC Science News recently they can, if they are the concave-eared torrent frog that lives in the Huangshan Hot Springs, west of Shanghai, in China. There, a continuous torrent of water and sound fills the mountainous environment of the concave-eared torrent frog. Let’s hope that the frogs are actually alive in the Hot Springs. After all, we’ve all heard the fable about frogs and boiling water. Anyhoo it was recently discovered that these frogs can generate and hear sounds that are way up in the ultrasonic. They can generate and hear frequencies over 128 kHz. That's more than six-times better than a human can hear. So next time you want to get your message across in the “mountainous environment of the concave-eared torrent frog” just pick one up and use him as a................................froghorn. And, apparently, there is a Frog magazine in Dutch. On a recent edition, readers were commenting on the quality of the photographs therein. Presumably frogs-porn? It’s the way I am them telling.
<urn:uuid:ec1cf832-4b4c-4628-85cc-09d3ea032a4f>
2.953125
312
Personal Blog
Science & Tech.
70.702906
A chemical reaction between iron-containing minerals and water may produce enough hydrogen “food” to sustain microbial communities living in pores and cracks within the enormous volume of rock below the ocean floor and parts of the continents, according to a new study led by the University of Colorado Boulder. University of Colorado Boulder Assistant Professor Nikolaus Correll likes to think in multiples. If one robot can accomplish a singular task, think how much more could be accomplished if you had hundreds of them. Correll and his computer science research team recently created a swarm of 20 robots, each the size of a pingpong ball, which they call “droplets.” When the droplets swarm together, Correll said, they form a “liquid that thinks.” In 1977, Jimmy Carter was sworn in as president, Elvis died, Virginia park ranger Roy Sullivan was hit by lightning a record seventh time and two NASA space probes destined to turn planetary science on its head launched from Florida. When the space shuttle Atlantis lifted off for its journey to the International Space Station in 2009, it had on board two butterfly habitats, which were part of an experiment conducted by CU-Boulder and K–12 students across the country. Corn and potato crops may soon provide information to farmers about when the plants need water and how much should be delivered, due to a CU-Boulder invention. A tiny sensor clipped to plant leaves charts their moisture content, a key measure of water deficiency and accompanying stress. Data from the leaves is sent wirelessly over the Internet to computers linked to irrigation equipment, ensuring timely watering, reducing excessive water and energy use, and potentially saving farmers millions of dollars a year.
<urn:uuid:6427b637-36e9-419e-9ed4-2363f843d44f>
3.484375
344
Content Listing
Science & Tech.
32.158026
Water from the 'tap' [like our 'Green-House-Gas' or GHG emissions] flows into the 'bath' [like the global atmosphere] and raises the level of the bath-water [like the atmospheric GHG concentration]. But the bath is also drained by the 'plug-hole' [like the natural 'sinks' for GHG], which slows the rate of atmospheric GHG accumulation. To stop the bath over-flowing, the tap must be turned off in the knowledge that the bath level will continue to rise while the tap is being turned off. This is true for emissions, once the need for UNFCCC-compliance in the form of safe and stable future GHG concentrations in the atmosphere is accepted. [A toy numerical sketch of this stock-and-flow picture is given at the end of this section.]

An assessment of 'Contraction & Concentrations' and 'Contraction & Convergence' and the C&C targets and modelling behind 'sink-efficiency' in the UK Government's 'Climate Act'. The '50:50' odds the UK Government gave for avoiding a temperature rise globally of more than two degrees with their emissions scenario are in this context. They are linked to the Government's wholly unsubstantiated claim that atmosphere concentrations will fall after 2050, even though we are projected as only halfway through a 100-year emissions 'contraction-event'. A letter of 8th June 2011 from many eminent persons sent to the Secretary of State for Energy and Climate Change about these matters is here.

Working draft of 'CBAT' - the Carbon Budget Analysis Tool [see here]

C&C in the context of COP-15 Copenhagen [12/2009] with a view on what went wrong and what it takes to get it right. Presentation/Animation - also available for 'download and save' as an swf file for internet browsers or a self-executing [virus-free] Flash file for PCs.

Presentation/Animation - C&C in the context of IPCC AR4 and the carbon-cycle feedbacks reported quantitatively for the first time since IPCC FAR 1990. Essentially, due to 'positive feedback' effects in the carbon cycle, where rising temperature amplifies the rate at which atmospheric GHG concentration increases, accelerated rates of carbon emissions contraction are needed to meet a given concentration outcome. This is the increasingly crucial issue of changing rates of 'sink-efficiency'. In-depth analysis of this in relation to the UK Climate Act is here in this Evidence to the UK Environmental Audit Committee. The rates for Contraction:Concentrations and Contraction:Convergence are compared in this Animation as Acceptable [C1], Dangerous [C2], and Impossible [C3] rates of C&C at four different theoretical rates of sink-failure.

Presentation/Animation that relates the arithmetic of emissions contraction to issues of: science, geo-technology, oil and gas depletion, growth and damages, clean energy and implementation.

The arithmetic of emissions contraction relating to: Globalisation of Consciousness; Climate Science, Rising Risks; Trends of 'Expansion and Divergence'; 'Contraction & Convergence'; 'Syntax for Global Climate Policies'; Presentation/Animation and Notes.

About future 'growth', you should ask an economist 'how long is a piece of string'. He may tell you a witty Woody Allen one-liner about infinity being a really long time, 'especially towards the end' [in other words he'll probably try and avoid the question]. If you ask a string-player how long is a piece of string, s/he'll give you a different answer: exactly twice half its length [half the length giving the perfect octave], exactly three times a third of its length [a third giving the perfect twelfth], as in the audio-visually animated image above.
This Pythagorean 'stringularity' is true because it has 'ontological structure'.
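The bathtub image above is a simple stock-and-flow model, and a toy simulation makes the 'turn the tap off early' point directly. This sketch is illustrative only: the names, units and parameter values are made up, and a real carbon-cycle model such as CBAT is far more elaborate.

public class BathtubModel {
    public static void main(String[] args) {
        double level = 390.0;     // bath level: the atmospheric stock (arbitrary units)
        double tap = 10.0;        // annual inflow: emissions
        double drainRate = 0.02;  // sinks remove this fraction of the stock each year

        for (int year = 0; year <= 100; year++) {
            if (year % 20 == 0) {
                System.out.printf("year %3d: tap %5.2f, level %7.2f%n", year, tap, level);
            }
            tap *= 0.97;                       // contraction: tap turned down 3% a year
            level += tap - drainRate * level;  // level keeps rising until inflow < drain
            // declining sink-efficiency could be modelled by shrinking drainRate over time
        }
    }
}

Even with the tap closing every year, the level keeps rising for decades before the drain finally overtakes the shrinking inflow.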
<urn:uuid:36baadb4-ec9d-4af2-8ca5-d86cafde31e5>
2.96875
850
Knowledge Article
Science & Tech.
38.733326
UK Germination Toolbox - about the database

The Millennium Seed Bank Partnership has successfully collected and stored seed samples from around 90% of the United Kingdom's native seed plant species, as a hedge against extinction and as a conservation resource. The 'missing' species produce either no seeds at all; or seeds that cannot be stored conventionally; or are too rare, or fruit too infrequently, for seed collections to have been made without threatening their survival. As well as further collecting to increase bio-geographic and genetic coverage of those species already in the bank, efforts are also continuing to locate and bank collectable samples from those last few, elusive species.

In this database, the naming and definition of UK native species follows the PLANTATT database (Hill, Preston and Roy, 2004); and includes species designated there as native (N), native endemic (NE) or archaeophyte (AR; introduced before 1500AD) – 1442 species. At present casuals, aliens and more recent introductions are not included.

Germination tests are central to the routine management of seeds collected for the purpose of their conservation. As well as being the most useful means of monitoring seed viability over time in storage, they also provide essential information towards the propagation of new plants. Ultimately, seeds in a bank, even though they may be perfectly viable, will be of little use if we do not know how to grow new plants from them. The MSBP aims to promote conservation by enabling the sustainable use of seeds in the bank, not least for the re-introduction of native species and restoration of degraded natural and semi-natural vegetation. Consequently, this database is intended as a resource for all those who need to propagate UK native species from seed: researchers; conservationists attempting to restore native species and vegetation; and horticulturalists, including those commercial nurseries specialising in growing and supplying UK native species. It will also be useful for researchers in comparative and evolutionary ecology, seeking germination trait data.

The 'toolbox' comprises information on germination from up to three available sources.

1. MSB germination tests

The main purpose of this database is to share the MSB's germination data on UK species with potential users. So, wherever they are available, a search returns a summary of the conditions applied in successful MSB germination tests. By and large the conditions (mostly temperatures) returned by a search for a particular species will be those that resulted in at least 75% germination (i.e. the MSB viability standard is passed). The tests 'accepted' by the MSB are usually those that are easiest to apply and repeat, wherever possible avoiding complicated temperature regimes, or the application of dormancy-breaking chemicals, for example. Please note that the successful germination conditions presented almost always DO NOT result from designed experiments with controls. Thus, they do not exclude other potentially equally successful conditions that have not been tried; nor are unsuccessful conditions reported at present. In a few cases the successful germination conditions result from tests on collections of species native to the UK, but originating elsewhere, usually in Europe.

2. Information from published literature

Despite the MSB's high coverage of UK native species, successful germination conditions are not yet available for some of them.
This is sometimes due to as yet intractable dormancy problems in certain species – and research continues on these. More frequently, the collections currently held of those species are too small (<500 seeds) to commit any of them to germination testing without jeopardising the value of the conservation collection. In such cases, where there is published information available, the database will return a summary of the conditions found to be successful in other laboratories. This part of the database currently relies heavily on the extensive compilation and analysis of published literature by Baskin & Baskin (1998), with further updates to 2001 kindly provided by the authors (cf. Baskin & Baskin, 2003a). Updates beyond that date are from the MSB's own literature searches, which are ongoing, and will be added to the database in due course. Some published germination treatments for UK native species are for material not collected in the UK.

3. Predicting likely successful temperature regimes

Worldwide, around one third of all wild species studied are not exacting in their germination requirements; and this is probably also true of UK species. So long as they have sufficient moisture and a broadly favourable temperature, they are relatively easy to germinate fully. The remainder possess varying degrees of several different kinds of dormancy, presumed to result from evolution to ensure that seedlings emerge when they are most likely to survive, and often also to ensure that emergence is spread over time ('bet hedging'). Synchronous germination to a high percentage is often quite difficult to achieve for these species. However, optimal germination temperature and dormancy-breaking conditions are often related to local climatic conditions; and these can suggest likely successful germination conditions.

For example, seeds of tropical dry-land species, shed at the beginning of the dry season, often require an extended period of relatively high temperature in the dry state before germination occurs in the subsequent rainy season. This requirement appears to be an adaptation to avoid germination in response to sporadic, unreliable rainfall during the dry season, when emerging seedlings would probably be killed by drought. Similarly, cool temperate species shed in autumn may delay germination until temperatures begin to rise in the early spring, by having a requirement for an extended period at low temperature ('cold stratification') before germination can occur, mimicking the passage of winter and the risk of frost damage to sensitive seedlings.

Application of temperature regimes related to seasonal climate cycles forms the basis of 'move-along' experiments (e.g., Baskin & Baskin, 2003b), in which seeds, imbibed on a moist substrate, are transferred between a succession of incubators running at temperatures that approximate to local conditions at the source of the seeds. The start point in the temperature regime is set at the conditions pertaining when seeds are shed naturally (≈ collected) in the field.

To help users predict likely successful temperature sequences, they are able to enter the latitude and longitude of the source of their seed collection (if known; input restricted to decimal degrees at present), as well as month of collection. The system will return the monthly mean minimum and maximum as well as corresponding median temperatures. This facility is mainly to allow users to make predictions of likely germination conditions in the absence of information from MSB germination tests.
However, these predictions can also be used in conjunction with MSB records and published data, where they exist. The temperature values are provided by ‘WORLDCLIM’ (Hijmans et al., 2005), which uses an algorithm to compute interpolated, or modelled, temperature and rainfall data from real data compiled from weather station records worldwide, at high (1 km) spatial resolution. Properties of the algorithm and the uneven distribution of climate stations mean that uncertainty is highest for small islands and mountainous regions. The database currently does not return interpolated monthly rainfall amounts for UK locations, as rainfall appears to have very limited value in predicting germination period in the UK. Especially in seasonal climates, rainfall amounts can give an indication of relatively dry periods (when germination would be less likely) or relatively moist periods, when seedling emergence is likely to take place in the field. Rainfall is more evenly distributed throughout the year in the UK; and though emergence is observed mostly in spring and autumn, newly emerged seedlings of some species can be seen at any time of the year, even during mild weather in winter. Worldwide, observations of species’ seedling emergence timing are scarce or mostly non-existent, whereas it is often quite well documented for UK species (e.g. ECOFLORA).
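To make the ‘move-along’ idea concrete, here is a small illustrative sketch; it is ours, not part of the toolbox, the monthly temperatures are invented placeholders, and in practice the values would come from the toolbox’s WORLDCLIM-based lookup for the user’s latitude and longitude.

    # Hypothetical sketch of building a 'move-along' incubation sequence.
    # monthly_mean_c holds invented placeholder values for Jan..Dec; real
    # values would come from the WORLDCLIM lookup described above.
    monthly_mean_c = [4, 4, 6, 9, 12, 15, 17, 17, 14, 10, 7, 5]

    def move_along_sequence(collection_month):
        # Start the temperature regime at the month when seeds are shed
        # (collected), then step through a full year of approximate
        # local conditions.
        start = collection_month - 1  # convert 1-12 to a 0-based index
        return [monthly_mean_c[(start + i) % 12] for i in range(12)]

    # Seeds collected in September: incubate at ~14 C, then 10, 7, 5, 4, ...
    print(move_along_sequence(9))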
<urn:uuid:18c9526c-1895-4c2e-804e-a442f3f31d81>
3.625
1,649
Knowledge Article
Science & Tech.
22.236137
The genus Isoetes is easy to recognize, but the distinctions between species are more challenging. Fortunately we have only two species in Wisconsin, of the 24 species reported for North America. The most reliable means of identifying the species of Isoetes requires observation of the megaspores with a microscope. Megaspores of I. echinospora are covered with numerous spines, and are easily distinguished from the curving ridges of I. lacustris megaspores. I. lacustris grows on lake beds, usually completely submersed in the water and often overlooked. The water must be reasonably clear to allow Isoetes to grow on the lake bed, and most known locations are in oligotrophic lakes (i.e., lakes of low productivity) with slightly acid water.
<urn:uuid:2a49bc1a-9f25-4da7-aba0-fcc8caafae10>
3.265625
175
Knowledge Article
Science & Tech.
30.454476
Terra/MODIS Color Image of Copahue Eruption Plume Across South America

For the first time since 2000, Copahue is erupting, sending an ash plume across southern South America. So far, the eruption is following the same patterns as the activity that ran from July to October 2000. That activity started with phreatic (water-driven) explosions, so it will be interesting to see if this eruption has new juvenile magma involved. Earlier this year, a study of the summit crater lake suggested new magma was intruding under Copahue, and the SERNAGEOMIN report mentioned that seismicity was rising before today’s eruption. I grabbed the brand new Terra/MODIS imagery for South America and the plume from Copahue was glorious – stretching over 350 km across Argentina to the east of the volcano. For a sense of scale on the image, the distance between Copahue and the Embalse los Barreales is ~225 km. The plume itself has been reported to be over 9.5 km / 30,000 feet tall.

UPDATE 12/22 5 PM EST: Eruptions reader Kirby pointed me to the SERNAGEOMIN webcam pointed at Copahue — check out the eruption live!

UPDATE 12/22 7 PM EST: ONEMI has not called for any evacuations on the Chilean side of Copahue — this article also has a nice gallery of pictures from the eruption as well.

Check out the original post with more details. Erik Klemetti is an assistant professor of Geosciences at Denison University. His passion in geology is volcanoes, and he has studied them all over the world. You can follow Erik on Twitter, where you'll get volcano news and the occasional baseball comment. Follow @eruptionsblog on Twitter.
<urn:uuid:531a4ccd-c5e2-4850-bc78-faeee15054be>
3.390625
384
Personal Blog
Science & Tech.
47.083548
The three major operating systems used today are Microsoft Windows, Apple's Macintosh OS, and the various Unix derivatives. A minor irritation of cross-platform work is that these three platforms all use different characters to mark the ends of lines in text files. Unix uses the linefeed (ASCII character 10), MacOS uses the carriage return (ASCII character 13), and Windows uses a two-character sequence of a carriage return plus a newline. Python's file objects can now support end-of-line conventions other than the one followed by the platform on which Python is running. Opening a file with the mode 'rU' will open a file for reading in universal newline mode. All three line-ending conventions will be translated to a "\n" in the strings returned by the various file methods such as read() and readline(). Universal newline support is also used when importing modules and when executing a file with the execfile() function. This means that Python modules can be shared between all three operating systems without needing to convert the line-endings. This feature can be disabled when compiling Python by specifying the --without-universal-newlines switch when running Python's configure script.
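A short hypothetical example of the feature in use ('mixed.txt' is an invented file name; it could contain any of the three line-ending conventions):

    # In 'rU' mode, every line ending is translated to '\n' on the way in,
    # regardless of which platform wrote the file.
    f = open('mixed.txt', 'rU')
    for line in f:
        print repr(line)   # each line ends with '\n'
    f.close()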
<urn:uuid:89787f62-14b1-4967-9555-b6fa1113a9f4>
3.84375
257
Documentation
Software Dev.
41.155494
Contrary to Hollywood’s portrayal of gigantic man-eating sharks, the three largest species of shark spend their time peacefully roaming the ocean’s surface munching on the ocean’s smallest creatures. Basking Sharks, the second largest species of shark, cruise the seas in search of plankton, filtering up to 2,000 tons of water across their gills per hour. Reaching lengths of thirty-five feet, this shark exists worldwide, yet very little is known about how they live or where they go. To discover more information about this vulnerable species, scientists from the Pacific Shark Research Center (PSRC) and the National Marine Fisheries Service (NMFS) have begun a new type of shark hunt. Unlike the crazed and frantic scenes from the JAWS movie, this shark hunt only requires a boat, camera and telephone! The Spot a Basking Shark Project enlists the help of local seafarers to uncover the demographics and distribution of the California Basking Shark. Once common along the California coast, these gentle giants are now a rare sight. In the past, these social creatures were seen in schools of hundreds or thousands; however, since 1993 no more than three basking sharks have been spotted together. Fishing and eradication efforts by fishermen who believed them to be ‘man-eaters’ contributed heavily to their population decline. Despite the fishery closure in the late 1950s, Basking Shark numbers have remained low, mostly due to human impacts like vessel strikes, fisheries bycatch and illegal shark finning. Based on the decline of Basking Shark numbers and lack of species information, the International Union for Conservation of Nature (IUCN) has listed this species as endangered. If you see a Basking Shark, the PSRC and NMFS want to know! These sharks can be identified by their large size, pointed snouts, and large gill slits that encircle the head. Basking sharks have dorsal fins up to three feet tall that are visible as they slowly swim along the surface with mouths wide open catching plankton. If you see a Basking Shark, call or email the PSRC with your location, the date and time of the sighting, and any photos or videos. Your information helps the PSRC document and understand these majestic and peaceful creatures.
<urn:uuid:193d0731-0e51-4550-9c1c-07d8b21d053c>
3.84375
469
Personal Blog
Science & Tech.
45.40705
- A 3 digit number is multiplied by a 2 digit number and the calculation is written out as shown with a digit in place of each of the *'s. Complete the whole multiplication sum.
- When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x".
- The number 10112359550561797752808988764044943820224719 is called a 'slippy number' because, when the last digit 9 is moved to the front, the new number produced is the slippy number multiplied by
- This challenge is to make up YOUR OWN alphanumeric. Each letter represents a digit and where the same letter appears more than once it must represent the same digit each time.
- Amazing as it may seem the three fives remaining in the following `skeleton' are sufficient to reconstruct the entire long division
- Watch our videos of multiplication methods that you may not have met before. Can you make sense of them?
- Some 4 digit numbers can be written as the product of a 3 digit number and a 2 digit number using the digits 1 to 9 each once and only once. The number 4396 can be written as just such a product. Can. . . .
- Countries from across the world competed in a sports tournament. Can you devise an efficient strategy to work out the order in which they finished?
- What day of the week were you born on? Do you know? Here's a way to
- Find the numbers in this sum
- However did we manage before calculators? Is there an efficient way to do a square root if you have to do the work yourself?
- This addition sum uses all ten digits 0, 1, 2...9 exactly once. Find the sum and show that the one you give is the only
- Choose any 4 whole numbers and take the difference between consecutive numbers, ending with the difference between the first and the last numbers. What happens when you repeat this process over and. . . .
- Read this article to find out the mathematical method for working out what day of the week each particular date fell on back as far as 1700.
- Start with any triangle T1 and its inscribed circle. Draw the triangle T2 which has its vertices at the points of contact between the triangle T1 and its incircle. Now keep repeating this. . . .
- Vedic Sutra is one of many ancient Indian sutras which involves a cross subtraction method. Can you give a good explanation of WHY it
- How would you judge a competition to draw a freehand square?
- Scheduling games is a little more challenging than one might desire. Here are some tournament formats that sport schedulers use. It's like 'Peaches Today, Peaches Tomorrow' but interestingly
- A geometry lab crafted in a functional programming language. Ported to Flash from the original java at web.comlab.ox.ac.uk/geomlab
- Imagine a strip with a mark somewhere along it. Fold it in the middle so that the bottom reaches back to the top. Stretch it out to match the original length. Now where's the mark?
<urn:uuid:f3089b6d-6bb9-41dd-ba6a-02443ec43614>
3.25
690
Content Listing
Science & Tech.
71.074286
Tree of Life This is a tree of life--a diagram that shows how different types of living things, or species, are related. If you follow the lines connecting any two species on the tree, you'll get an idea of how closely related they are. The longer the path is, the more distant the relationship. The 479 species listed on this tree represent only a tiny fraction of the more than 1.7 million species scientists have identified. Many millions more species are believed to exist. Our species, Homo sapiens, is labeled in green in the top left part of the tree. How was it made? Generations of scientists have created tree-of-life diagrams by studying and comparing the physical features of different species. But this tree of life was made by comparing DNA sequences, with physical features playing a supporting role. All living things have some DNA sequences in common because they evolved from a single ancestral species. Closely related species have more DNA in common than distantly related species do, so they are positioned closer to each other on the tree.
<urn:uuid:d9793264-e7aa-404d-b436-b6555ec8d06f>
3.546875
216
Knowledge Article
Science & Tech.
52.626002
Water Evaporated from Trees Cools Global Climate, Researchers Find

ScienceDaily (Sep. 14, 2011) — Scientists have long debated about the impact on global climate of water evaporated from vegetation. New research from Carnegie's Global Ecology department concludes that evaporated water helps cool Earth as a whole, not just the local area of evaporation, demonstrating that evaporation of water from trees and lakes could have a cooling effect on the entire atmosphere. These findings, published Sept. 14 in Environmental Research Letters, have major implications for land-use decision making.

Evaporative cooling is the process by which a local area is cooled by the energy used in the evaporation process, energy that would have otherwise heated the area's surface. It is well known that the paving over of urban areas and the clearing of forests can contribute to local warming by decreasing local evaporative cooling, but it was not understood whether this decreased evaporation would also contribute to global warming.

Earth has been getting warmer over at least the past several decades, primarily as a result of the emissions of carbon dioxide from the burning of coal, oil, and gas, as well as the clearing of forests. But because water vapor plays so many roles in the climate system, the global climate effects of changes in evaporation were not well understood. The researchers even thought it was possible that evaporation could have a warming effect on global climate, because water vapor acts as a greenhouse gas in the atmosphere. Also, the energy taken up in evaporating water is released back into the environment when the water vapor condenses and returns to Earth, mostly as rain. Globally, this cycle of evaporation and condensation moves energy around, but cannot create or destroy energy. So, evaporation cannot directly affect the global balance of energy on our planet.

Article continues: http://www.sciencedaily.com/releases/2011/09/110914161729.htm
<urn:uuid:03518dbf-b4fd-425f-9692-606d2af52def>
3.609375
400
Truncated
Science & Tech.
28.781297
New Zealand's capital city lies within the earthquake-generating collision zone between two of the Earth's great tectonic plates, and sits on top of one of the zone's most active geological faults - the Wellington Fault. The Wellington Fault forms distinctive landscape features running right through the central city. Intensive research has been done to understand the nature of the fault and the best ways to reduce possible earthquake damage and loss. - Wellington's Shaky Foundations - How often do earthquakes occur along the fault? - How much do the Wellington fault lines move? - What would a major Wellington earthquake be like? - How do we know which fault is most likely to rupture next in Wellington? Check out our Wellington Fault video here
<urn:uuid:c2194b69-727d-4f86-9eb1-b7a95200a51f>
3.625
151
Knowledge Article
Science & Tech.
48.687805
Tropical Waves Characteristics
- Interval/Period of 3-4 Days Between Waves
- Lasting from one week to several weeks
- Propagating at 10-15 KT
- Wavelength of 2,000-2,500 Km
- Extend Vertically Between SFC-5 Km

A westward-traveling tropical wave manifests quite well in the lower atmosphere. In the absence of satellite imagery, RAOBS, ship synoptic and surface observations are the best tools for finding these perturbations. Knowledge of climatology across the region is key for tropical wave detection, as shifts in the prevailing wind flow will be the first clue of an approaching tropical wave. Over the eastern Caribbean, the prevailing easterlies will take a more NE component as a tropical wave approaches. As the wave axis moves over the area the easterlies will return, but as it passes the winds will take an ESE component. Over the southeastern Caribbean the tropical waves are harder to find, as their circulation tends to be masked by the ITCZ anchoring low over the Gulf of Panama. Over Panama-Costa Rica the flow during the wet season has a NE component, except when a tropical wave moves west across the region inducing an ESE rotation of the mean flow. Strong waves can then draw the ITCZ north across Panama/Costa Rica into the southern Caribbean.
<urn:uuid:da0fb17d-4ede-42cd-a8aa-de894f4b95ed>
3.328125
286
Knowledge Article
Science & Tech.
36.427526
Nereid was discovered in 1949 through Earth-based telescopes. Little is known about Nereid, which is slightly smaller than Proteus, having a diameter of 211 mi (340 km). The satellite's surface reflects about 14% of the sunlight that strikes it. Nereid's orbit is the most eccentric of any known moon in the solar system, ranging from about 841,100 mi (1,353,600 km) to 5,980,200 mi (9,623,700 km) from Neptune.

Information Please® Database, © 2007 Pearson Education, Inc. All rights reserved.
<urn:uuid:640e3d2a-53ee-41a9-9398-175795852fa7>
3.828125
128
Knowledge Article
Science & Tech.
77.191318
Like Alaska's mighty Yukon, a broad river once flowed across Antarctica, following a gentle valley shaped by tectonic forces at a time before the continent became encased in ice. Understanding what happened when rivers of ice later filled the valley could solve certain climate and geologic puzzles about the southernmost continent. The valley is Lambert Graben in East Antarctica, now home to the world's largest glacier. Trapped beneath the ice, the graben (which is German for ditch or trench) is a stunning, deep gorge. But before Antarctica's deep freeze 34 million years ago, the valley was relatively flat and filled by a lazy river, leaving a riddle for geologists to decode: How did Lambert Graben get so steep, and when was it carved? [Full Story: What Antarctica Looked Like Before the Ice] Last year, Yosemite National Park's famed "firefall" was more of a "firedrizzle" due to lack of snow. But this year, the "firefall" is burning bright. Yosemite's Horsetail Fall flows like lava under a clear sky and favorable lighting. It's a small waterfall that makes big news whenever it glows orange during sunset in mid- to late February. This time of year, the sun is setting at just the right angle and the western sky is just clear enough to create the "firefall" effect. When that happens, the waterfall will glow orange for about 10 minutes. [Full Story: Wilderness 'Paparazzi' Flock to Yosemite's 'Firefall' ] A menacing swarm of locusts that entered southern Israel earlier this week has been largely smitten, according to the Israeli government and local reports. But some of the insects' ilk may be back later this week. Officials sprayed the flying insects with pesticide early this morning (March 6), greatly reducing the number of living, flying insects, according to a statement from the Ministry of Agriculture and Rural Development. [Full Story: Israel Escapes Locust Plague — For Now] A new photo taken from the International Space Station shows an ecologically diverse area of Panama in a new light. The picture is the first taken by a new Earth-observing tool recently installed on the orbiting science laboratory, and shows the San Pablo River emptying into the Gulf of Montijo, reported NASA's Earth Observatory. [Full Story: New Space Station Camera Snaps First Image of Earth ] Emperor penguins “wear” an invisible shield of cold air that helps to prevent body heat loss, allowing the flightless birds to survive the sub-zero temps of Antarctica, a new study finds. The report, published in the journal Biology Letters, demonstrates just how hardy the birds are. [Full Story: Penguins Wear a Shield of Cold Air in Winter ] Scientists are unveiling a rare octopus that has never been on public display before. And unlike other octopuses, where females have a nasty habit of eating their partners during sex, Larger Pacific Striped Octopuses mate by pressing their beaks and suckers against each other in an intimate embrace. [Full Story: Rare Kissing Octopus Unveiled For the First Time ] The huge ocean sloshing beneath the icy shell of Jupiter's moon Europa likely makes its way to the surface in some places, suggesting astronomers may not need to drill down deep to investigate it, a new study reports. Scientists have detected chemicals on Europa's frozen surface that could only come from the global liquid-water ocean beneath, implying the two are in contact and potentially opening a window into an environment that may be capable of supporting life as we know it. 
[Full Story: On Jupiter's Moon Europa, Underground Ocean Bubbles Up to Surface ] The latest in a series of late-season snowstorms is barreling toward the East Coast, dumping nearly a foot of snow on some locales as it passes. The National Weather Service predicts 8 to 12 inches (20 to 30 centimeters) of snow could fall in the Mid-Atlantic states tonight (March 5), with up to 18 inches (45 cm) in West Virginia. Tomorrow (March 6), traffic snarls are expected along Interstate 95 as the system collides with warm air over the East Coast, pummeling northern Virginia, Washington, D.C., Maryland, N.Y.'s Long Island and southern Connecticut with heavy, wet snow. [Full Story: Snowstorm Threatening East Coast Seen from Space ] Camels are the poster animals for the desert, but researchers now have evidence that these shaggy beasts once lived in the Canadian High Arctic. The fossil remains of a 3.5-million-year-old camel were found on Ellesmere Island in Canada's northernmost territory, Nunavut. The camel was about 30 percent bigger than modern camels and was identified using a technique called collagen fingerprinting. The finding, detailed today (March 5) in the journal Nature Communications, suggests that modern camels stemmed from giant relatives that lived in a forested Arctic that was somewhat warmer than today. [Full Story: Giant Camels Roamed the Arctic 3.5 Million Years Ago ] In the second century, an ethnically Greek Roman named Galen became doctor to the gladiators. His glimpses into the human body via these warriors' wounds, combined with much more systematic dissections of animals, became the basis of Islamic and European medicine for centuries. Galen's texts wouldn't be challenged for anatomical supremacy until the Renaissance, when human dissections — often in public — surged in popularity. But doctors in medieval Europe weren't as idle as it may seem, as a new analysis of the oldest-known preserved human dissection in Europe reveals. [Full Story: Grotesque Mummy Head Reveals Advanced Medieval Science ] The European Union has launched a new program to tackle the threat of space junk, which litters the corridors of Earth orbit. Space junk is man-made debris — spent rocket stages, dead satellites and even lost spacewalker tools — orbiting Earth. These bits of detritus pose a risk to orbiting satellites, which even a small piece of space trash could damage or destroy. [Full Story: Europe Takes Aim at Space Junk Menace ]
<urn:uuid:00a1fb6d-3d38-422a-acaa-ee3e5395e692>
3.46875
1,295
Content Listing
Science & Tech.
38.968747
The data types you have seen so far are all concrete, in the sense that we have completely specified how they are implemented. For example, the Card class represents a card using two integers. As we discussed at the time, that is not the only way to represent a card; there are many alternative implementations. An abstract data type, or ADT, specifies a set of operations (or methods) and the semantics of the operations (what they do), but it does not specify the implementation of the operations. That’s what makes it abstract. Why is that useful? When we talk about ADTs, we often distinguish the code that uses the ADT, called the client code, from the code that implements the ADT, called the provider code. In this chapter, we will look at one common ADT, the stack. A stack is a collection, meaning that it is a data structure that contains multiple elements. Other collections we have seen include dictionaries and lists. An ADT is defined by the operations that can be performed on it, which is called an interface. The interface for a stack consists of these operations:

__init__: Initialize a new empty stack.
push: Add a new item to the stack.
pop: Remove and return an item from the stack. The item that is returned is always the last one that was added.
is_empty: Check whether the stack is empty.

A stack is sometimes called a last in, first out or LIFO data structure, because the last item added is the first to be removed. The list operations that Python provides are similar to the operations that define a stack. The interface isn’t exactly what it is supposed to be, but we can write code to translate from the Stack ADT to the built-in operations. This code is called an implementation of the Stack ADT. In general, an implementation is a set of methods that satisfy the syntactic and semantic requirements of an interface. Here is an implementation of the Stack ADT that uses a Python list:

class Stack:
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

    def pop(self):
        return self.items.pop()

    def is_empty(self):
        return (self.items == [])

A Stack object contains an attribute named items that is a list of the items in the stack. The initialization method sets items to the empty list. To push a new item onto the stack, push appends it onto items. To pop an item off the stack, pop uses the homonymous (same-named) list method to remove and return the last item on the list. Finally, to check if the stack is empty, is_empty compares items to the empty list. An implementation like this, in which the methods consist of simple invocations of existing methods, is called a veneer. In real life, veneer is a thin coating of good-quality wood used in furniture-making to hide lower-quality wood underneath. Computer scientists use this metaphor to describe a small piece of code that hides the details of an implementation and provides a simpler, or more standard, interface. A stack is a generic data structure, which means that we can add any type of item to it. The following example pushes two integers and a string onto the stack:

>>> s = Stack()
>>> s.push(54)
>>> s.push(45)
>>> s.push("+")

We can use is_empty and pop to remove and print all of the items on the stack:

while not s.is_empty():
    print s.pop(),

The output is + 45 54. In other words, we just used a stack to print the items backward! Granted, it’s not the standard format for printing a list, but by using a stack, it was remarkably easy to do. You should compare this bit of code to the implementation of print_backward in the last chapter. There is a natural parallel between the recursive version of print_backward and the stack algorithm here.
The difference is that print_backward uses the runtime stack to keep track of the nodes while it traverses the list, and then prints them on the way back from the recursion. The stack algorithm does the same thing, except that it uses a Stack object instead of the runtime stack.

In most programming languages, mathematical expressions are written with the operator between the two operands, as in 1 + 2. This format is called infix. An alternative used by some calculators is called postfix. In postfix, the operator follows the operands, as in 1 2 +. The reason postfix is sometimes useful is that there is a natural way to evaluate a postfix expression using a stack: starting at the beginning of the expression, get one term (operator or operand) at a time; if the term is an operand, push it on the stack; if the term is an operator, pop two operands off the stack, perform the operation on them, and push the result back on the stack. When you get to the end of the expression, there should be exactly one operand left on the stack, and that operand is the result.

To implement the previous algorithm, we need to be able to traverse a string and break it into operands and operators. This process is an example of parsing, and the results, the individual chunks of the string, are called tokens. You might remember these words from Chapter 1. Python provides a split method in both the string and re (regular expression) modules. The function string.split splits a string into a list using a single character as a delimiter. For example:

>>> import string
>>> string.split("Now is the time", " ")
['Now', 'is', 'the', 'time']

In this case, the delimiter is the space character, so the string is split at each space. The function re.split is more powerful, allowing us to provide a regular expression instead of a delimiter. A regular expression is a way of specifying a set of strings. For example, [A-z] is the set of all letters and [0-9] is the set of all numbers. The ^ operator negates a set, so [^0-9] is the set of everything that is not a number, which is exactly the set we want to use to split up postfix expressions:

>>> import re
>>> re.split("([^0-9])", "123+456*/")
['123', '+', '456', '*', '', '/', '']

Notice that the order of the arguments is different from string.split; the delimiter comes before the string. The resulting list includes the operands 123 and 456 and the operators * and /. It also includes two empty strings, one between the consecutive operators * and / and one after the trailing /. To evaluate a postfix expression, we will use the parser from the previous section and the algorithm from the section before that. To keep things simple, we’ll start with an evaluator that only implements the operators + and *:

def eval_postfix(expr):
    import re
    token_list = re.split("([^0-9])", expr)
    stack = Stack()
    for token in token_list:
        if token == '' or token == ' ':
            continue
        if token == '+':
            sum = stack.pop() + stack.pop()
            stack.push(sum)
        elif token == '*':
            product = stack.pop() * stack.pop()
            stack.push(product)
        else:
            stack.push(int(token))
    return stack.pop()

The first condition takes care of spaces and empty strings. The next two conditions handle operators. We assume, for now, that anything else must be an operand. Of course, it would be better to check for erroneous input and report an error message, but we’ll get to that later. Let’s test it by evaluating the postfix form of (56+47)*2:

>>> print eval_postfix("56 47 + 2 *")
206

That’s close enough.

One of the fundamental goals of an ADT is to separate the interests of the provider, who writes the code that implements the ADT, and the client, who uses the ADT. The provider only has to worry about whether the implementation is correct, in accord with the specification of the ADT, and not how it will be used. Conversely, the client assumes that the implementation of the ADT is correct and doesn’t worry about the details.
When you are using one of Python’s built-in types, you have the luxury of thinking exclusively as a client. Of course, when you implement an ADT, you also have to write client code to test it. In that case, you play both roles, which can be confusing. You should make some effort to keep track of which role you are playing at any moment.
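To see what playing the client role looks like in practice, here is a minimal, hypothetical test script for the Stack implementation above (the function name and test values are ours, not the book's):

def test_stack():
    # Client-side sanity checks: we rely only on the Stack interface,
    # not on how it is implemented.
    s = Stack()
    assert s.is_empty()        # a new stack starts out empty
    for item in [1, 2, 3]:
        s.push(item)
    assert s.pop() == 3        # LIFO: the last item pushed comes off first
    assert s.pop() == 2
    assert s.pop() == 1
    assert s.is_empty()        # everything has been removed
    assert eval_postfix("56 47 + 2 *") == 206

test_stack()
print "all tests passed"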
<urn:uuid:4a0df5e6-d1e2-41d7-8f44-77b24b016a93>
4.03125
1,733
Documentation
Software Dev.
60.60727
Some school-going kids use electricity- and magnetism-related science experiments in their school science fairs. Children are always trying to find out what a magnet is, or how electricity works. You can make electricity and magnetism science fair projects using batteries, balloons, etc.; these electricity and magnetism physics projects are interesting and hair-raising. Electricity, magnetism and electromagnetic waves are the topics most often used in these school physics projects. Here we have a video to help you learn more about electricity and magnetism physics experiments.
<urn:uuid:060c7c0a-7914-4268-ba4d-692994040b5a>
3.21875
116
Truncated
Science & Tech.
22.579167
Across two decades and thousands of pages of reports, the world's most authoritative voice on climate science has consistently understated the rate and intensity of climate change and the danger those impacts represent, say a growing number of studies on the topic. This conservative bias, say some scientists, could have significant political implications, as reports from the group – the U.N. Intergovernmental Panel on Climate Change – influence policy and planning decisions worldwide, from national governments down to local town councils.

As the latest round of United Nations climate talks in Doha wraps up this week, climate experts warn that the IPCC's failure to adequately project the threats that rising global carbon emissions represent has serious consequences: The IPCC’s overly conservative reading of the science, they say, means governments and the public could be blindsided by the rapid onset of the flooding, extreme storms, drought, and other impacts associated with catastrophic global warming.

"We're underestimating the fact that climate change is rearing its head," said Kevin Trenberth, head of the climate analysis section at the National Center for Atmospheric Research and a lead author of key sections of the 2001 and 2007 IPCC reports. "And we're underestimating the role of humans, and this means we're underestimating what it means for the future and what we should be planning for."

Underplaying the intensity

A comparison of past IPCC predictions against 22 years of weather data and the latest climate science finds that the IPCC has consistently underplayed the intensity of global warming in each of its four major reports released since 1990.

The drastic decline of summer Arctic sea ice is one recent example: In the 2007 report, the IPCC concluded the Arctic would not lose its summer ice before 2070 at the earliest. But the ice pack has shrunk far faster than any scenario scientists felt policymakers should consider; now researchers say the region could see ice-free summers within 20 years.

Sea-level rise is another. In its 2001 report, the IPCC predicted an annual sea-level rise of less than 2 millimeters per year. But from 1993 through 2006, the oceans actually rose 3.3 millimeters per year, more than 50 percent above that projection.

Some climate researchers also worry that recent institutional changes could accentuate the organization's conservative bias in the fifth IPCC assessment, to be released in parts starting in September 2013. The tendency to underplay climate impacts needs to be recognized, conclude the authors of a recent paper exploring this bias. Failure to do so, they wrote in their study published last month in the journal Global Environmental Change, "could prevent the full recognition, articulation and acknowledgement of dramatic natural phenomena that may in fact be occurring."

The conservative bias stems from several sources, scientists say. Part can be attributed to science's aversion to drama and dramatic conclusions: So-called outlier events – results at far ends of the spectrum – are often pruned. Such controversial findings require years of painstaking, independent verification. Yet some events in nature are dramatic, conclude University of California, San Diego, history and science professor Naomi Oreskes and Princeton University geosciences professor Michael Oppenheimer, co-authors of the study looking at the IPCC's bias.
"If the drama arises primarily from social, political or economic impacts," they wrote, "then it is crucial that the associated risk be understood fully, and not discounted.”
<urn:uuid:5ce8478c-0c2e-4d15-bde4-fd6f24bffdfa>
3.234375
689
Truncated
Science & Tech.
30.25576
The journal Science published a paper this week about the increase and spread of dead zones in the world's oceans. These dead zones are created when large amounts of nutrients decompose and use up all of the oxygen in the water, causing mass deaths (literally suffocation) of marine species. There are various causes for this, but changes in wind, temperature and current regimes are increasingly thought to play an important role. Interestingly, the paper concludes that these dead zones may be a symptom of global warming.

The description of thousands of dead crabs and other crustaceans found at the bottom of the ocean in one of these dead zones brings to mind the severe coral bleaching event which occurred in the Seychelles and elsewhere following the month-long extreme warming of parts of the Indian Ocean in 1998. What was left behind was a mass grave of dead corals and a loss of livelihood for islanders and coastal people. In fact, in a statement by the US Department of State in 1999, the following important conclusion was made:

"These events (i.e. the coral bleaching of 1998) cannot be accounted for by localized stressors or natural variability alone. Nor can El Niño by itself explain the patterns observed worldwide. Rather, the impact of these factors was likely accentuated by an underlying global cause. Thus the geographic extent, increasing frequency, and regional severity of mass bleaching events are likely a consequence of a steadily rising baseline of marine temperatures, driven by anthropogenic global warming."

The number of dead zones has effectively doubled each decade since scientists started to document them. Notable areas include parts of the US coastline, the South African and Namibian coastlines, and other parts of the world. These dead zones further threaten the livelihoods of people who depend upon the coast to survive.

We once believed that the oceans could take all of the world's waste; we were soon proved wrong. We are also wrong about the atmosphere: it cannot take all of our waste. We need to reduce emissions and promote renewables, so that there can be more research, prices can come down, and both China and India will be able to afford these energy technologies. That is the message I have for the skeptics: the signal for immediate action on climate change can be found in the oceans. Islanders have learnt to live with, respect and protect the oceans; the continental world needs to understand how important the ocean is to them as well.
<urn:uuid:d8135aa8-d8ac-4e98-a942-9c4571a50e14>
3.03125
483
Personal Blog
Science & Tech.
38.377727
Air Pollution as Seen From the Skies

From Mt. Etna to China to the Sahara, these striking satellite images of air pollution are from both natural and man-made causes - By Sarah Zielinski - Smithsonian.com, April 20, 2010

Mount Etna, on the Italian island of Sicily, is Europe’s most active volcano, having erupted half a dozen times in the past decade alone. During an eruption, a volcano spews gases that had been dissolved in molten rock. One of those gases is sulfur dioxide, which turns into sulfuric acid in the atmosphere and then condenses into sulfate aerosols. Those aerosols can linger for months in the upper atmosphere, where they block sunlight and destroy atmospheric ozone.
<urn:uuid:16399f08-cd5e-464e-b7dd-94873f60b037>
3.140625
153
Truncated
Science & Tech.
37.703046
The getcontext() function accesses a different Context object for each thread. Having separate thread contexts means that threads may make changes (such as getcontext().prec = 10) without interfering with other threads.

Likewise, the setcontext() function automatically assigns its target to the current thread. If setcontext() has not been called before getcontext(), then getcontext() will automatically create a new context for use in the current thread. The new context is copied from a prototype context called DefaultContext. To control the defaults so that each thread will use the same values throughout the application, directly modify the DefaultContext object. This should be done before any threads are started so that there won't be a race condition between threads calling getcontext(). For example:

# Set applicationwide defaults for all threads about to be launched
DefaultContext.prec = 12
DefaultContext.rounding = ROUND_DOWN
DefaultContext.traps = ExtendedContext.traps.copy()
DefaultContext.traps[InvalidOperation] = 1
setcontext(DefaultContext)

# Afterwards, the threads can be started
t1.start()
t2.start()
t3.start()
. . .
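As an illustrative sketch (ours, not from the documentation), the following program shows that each thread gets its own context, so changing the precision in one thread does not affect the others:

import threading
from decimal import Decimal, getcontext

results = {}

def worker(name, precision):
    # getcontext() returns this thread's own Context object
    getcontext().prec = precision
    results[name] = Decimal(1) / Decimal(7)

t1 = threading.Thread(target=worker, args=('low', 6))
t2 = threading.Thread(target=worker, args=('high', 20))
t1.start(); t2.start()
t1.join(); t2.join()

print results['low']    # 0.142857 (6 digits)
print results['high']   # 0.14285714285714285714 (20 digits)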
<urn:uuid:dd836b42-b587-4802-a366-4dc7592ec3bd>
2.6875
253
Documentation
Software Dev.
43.943062
Early last week, I wrote about parallax and distance measurements. This is a follow-up post to that one. Stellar parallax is very small, and thus correspondingly difficult to measure. The closest star has a parallax of 0.772 arc-seconds (that is nearly 1/4700 of a degree). That is a very tiny angle to measure, and so it is no wonder that it took so long for astronomical technology to advance to where the measurements could be made. The capability to make such small measurements finally came in the early Nineteenth Century. One problem for astronomers trying to measure parallax is that the stars are vast distances away. The farther a star is from us, the smaller the parallax, and the harder that parallax is to measure. Even today, most stars are simply too far away to reliably measure parallax. In the early Nineteenth Century, it was worse. The technology was such that only a handful of the nearest stars had big enough parallaxes to measure. But there are a lot of stars in the sky. Which ones would be the best candidates to study and to attempt to measure? The measurements would be time consuming, and an astronomer would not be able to measure many stars, so he’d have to pick a star and stick with it. But, if the selected star were too far away, then he’d never be able to measure parallax. At first, astronomers had thought all stars to be similar, so the brighter stars were presumed to be the nearer ones. But, that idea had begun to fall by the wayside by the Eighteenth Century. Astronomers realized that brightness may not correlate at all with nearness (and it largely doesn’t). Eager attempts to measure parallax inevitably resulted in failure. The stars were simply far more distant than anyone had been prepared to imagine. But, along the way, there were a lot of interesting discoveries. For example, the search for parallax led to the discovery of the aberration of starlight. But, there was another important factor, besides brightness, that astronomers looked to when trying to decide on a target star: its proper motion. Edmund Halley discovered that some stars had apparently shifted position over historic times. The stars are not fixed in space relative to one another. This apparent shift of the stars, as seen from Earth, is their proper motion. Assuming that most stars are moving at similar speeds, then the nearer stars might appear to have higher proper motion than the more distant ones. You can see this same effect by looking out the window of a car as you drive down the highway. The nearer objects seem to be going past the window far more quickly than the more distant objects. But, that only really holds true when you are looking at stationary objects. Other cars driving in the same direction as you may be far closer than cattle or trees alongside the road, but they will not appear to be moving very quickly with respect to your window because they have very nearly the same speed as your car. Likewise, stars quite near the Sun might not be seen to have a high proper motion if they share the Sun’s motion through the galaxy. Still, this seemed to be a far more promising correlation with distance than the brightness measure. So, the hunt was on. Numerous astronomers were eagerly working to make the first parallax measurements. Among these were Thomas Henderson, working in South Africa, Friedrich Wilhelm Struve, and Friedrich Wilhelm Bessel (both in Europe). Henderson actually got the jump on the others, making measurements of Alpha Centauri.
In 1833, he packed up and went back to England, along with his data. He was in no hurry to reduce his data, so it languished for years. When he finally did get to looking at his measurements, he found that there did seem to be what may have been a parallax shift in Alpha Centauri, but he did not trust his data. He had only 19 measurements, far too few to be certain or conclusive in his findings. Furthermore, the instrument that he had been using had been damaged in shipping to South Africa. He had painstakingly applied corrections to the measurements, but he realized that other astronomers would cast doubt on his findings. He decided to wait for better measurements made with another instrument by his successor at the far southern observatory. Alpha Centauri is indeed the nearest star (actually it is a triple star system, and the closest of the three, Proxima Centauri, is the nearest star other than the Sun), and it really did have a large enough parallax to measure. And, as it turns out, Henderson’s corrections to his data were approximately correct. However, he didn’t know all of that, so he held off publishing his findings until he had more data sent to him. In the mean time, Bessel had acquired a spectacular and very precise instrument ideally suited for the task. It had originally been designed for measuring the sizes of features on the Sun, but he expertly adapted it to measure the distances between stars. Giuseppe Piazzi had shown the star 61 Cygni to have a particularly high proper motion. In fact, it was dubbed the “flying star” and at the time held the record as the star with the highest proper motion (a record that it was to eventually lose to Groombridge 1830, and then to Barnard’s Star). This made it an excellent target star. However, after only a few months, Bessel gave up the endeavor because he found the comparison star that he’d selected to be too dim to follow in poor sky conditions. Other concerns took him away from the task for a number of years. Then, in 1837, Struve announced that he’d measured the parallax of the star Vega. The number that he gave was 0.125″. Bessel pored over Struve’s data, but was not convinced that it was really believable. He feverishly resumed his measurements of 61 Cygni. For the next year, any clear night that he could observe the star, he did, often a dozen times per night, making measurements. After a year, he had hundreds of positions determined using thousands of individual measurements. His data left no doubt that 61 Cygni moved back and forth as the Earth moved around the Sun. He had found clear evidence of parallax. Bessel computed the parallax of 61 Cygni to be 0.314″, and he published his results in late 1838. Soon afterwards, Struve revised his parallax measurement of Vega to a value nearly double what he had originally found. That huge change cast serious doubt as to the reliability of his measurements. So most astronomers, including Struve himself, ceded the first parallax measurement to Bessel. With Bessel and Struve’s measurements available, Henderson finally published his own findings for Alpha Centauri. Interestingly enough, despite Struve’s uncertainty in his own measurements, his original value for Vega’s parallax is amazingly close to the modern accepted value of 0.129″. Bessel’s parallax for 61 Cygni is also not far off of today’s accepted value of 0.285″. It really is hard to say who should get credit for the first parallax measurements.
All three, Struve, Bessel, and Henderson, were working at about the same time. Henderson didn’t believe his measurements, so he didn’t publish them right away, and thus is seldom given credit. Struve published his measurements a year before Bessel, but his measurements were deemed somewhat uncertain, a fact that he most clearly stated himself. Struve, himself, gave Bessel credit for the first unambiguous measurement of stellar parallax. But, I think that all three deserve some mention. If you want to read more about this episode in the history of astronomy, an excellent resource is Alan Hirshfeld’s book Parallax: The Race to Measure the Cosmos. Finder chart for 61 Cygni created using Starry Night Pro software.
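As an aside (the arithmetic here is ours, not the author's): the small-angle geometry behind all of this gives a simple relation between parallax and distance. With the parallax angle p in arc-seconds, the distance d in parsecs is

$$ d\,[\mathrm{pc}] = \frac{1}{p\,[\mathrm{arcsec}]} $$

so Henderson's Alpha Centauri, at p = 0.772″, works out to d = 1/0.772 ≈ 1.30 pc (about 4.2 light-years), and 61 Cygni, at the modern value p = 0.285″, to d = 1/0.285 ≈ 3.5 pc (about 11.4 light-years).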
<urn:uuid:dd7e790e-73c3-4f82-b82e-a22e49919f14>
3.96875
1,691
Personal Blog
Science & Tech.
52.922019
The efficiency of any application depends on how well memory and garbage collection are managed. The following sections provide information on optimizing memory and allocation functions:
- Tracing Garbage Collection
- Other Garbage Collector Settings
- Tuning the Java Heap
- Re-basing DLLs on Windows

It is necessary to monitor garbage collection (GC) activity on the development server, and to tune the JVM and GC settings accordingly, before deploying the server into production. The GC settings vary depending on the application you are running. Garbage collection reclaims the heap space previously allocated to objects no longer needed. The process of locating and removing the dead objects can stall any application and consume as much as 25 percent of throughput.

Almost all Java Runtime Environments come with a generational object memory system and sophisticated GC algorithms. A generational memory system divides the heap into a few carefully sized partitions called generations. The efficiency of a generational memory system is based on the observation that most objects are short lived. As these objects accumulate, a low-memory condition occurs, forcing GC to take place. The heap space is divided into the old and new generations. The new generation includes the new object space (eden) and two survivor spaces. The JVM allocates new objects in the eden space, and moves longer-lived objects from the new generation to the old generation. Keep the heap size low, so that customers can increase the heap size depending on their needs. To increase the heap size, refer to the link http://www.devx.com/tips/Tip/5578.

The young generation uses a fast copying garbage collector which employs two semi-spaces (survivor spaces) in the eden, copying surviving objects from one survivor space to the second. Objects that survive multiple young-space collections are tenured, meaning they are copied to the tenured generation. The tenured generation is larger and fills up less quickly, so garbage is collected less frequently; but each collection takes longer than a young-space-only collection. Collecting the tenured space is also referred to as doing a full generation collection. The frequent young-space collections are quick, lasting only a few milliseconds, while the full generation collection takes longer, tens of milliseconds to a few seconds, depending upon the heap size.

Other GC algorithms, such as the Concurrent Mark Sweep (CMS) algorithm, are incremental. They divide the full GC into several incremental pieces. This provides a high probability of small pauses. This process comes with an overhead and is not required for enterprise web applications.

When the new generation fills up, it triggers a minor collection in which the surviving objects are moved to the old generation. When the old generation fills up, it triggers a major collection which involves the entire object heap. Both HotSpot and Solaris JDK use thread-local object allocation pools for lock-free, fast, and scalable object allocation, so custom object pooling is not often required. Consider pooling only if object construction cost is high and significantly affects execution profiles.

The -Xms and -Xmx parameters define the minimum and maximum heap size. As collections occur when generations fill up, throughput is inversely proportional to the available memory. By default, the JVM grows or shrinks the heap at each collection. This helps maintain the proportion of free space to living objects at each collection within a specific range.
The range is set as a percentage by the parameters -XX:MinHeapFreeRatio=<minimum> and -XX:MaxHeapFreeRatio=<maximum>, and the total size is bounded by -Xms and -Xmx. The JVM heap settings for Web Server should be based on the available memory on the system and the frequency and duration of garbage collection. You can use the -verbose:gc JVM option or the J2SE 5.0 monitoring tools to determine the frequency of garbage collection. For more information on J2SE 5.0 monitoring tools, see J2SE 5.0 Monitoring Tools. The maximum heap size should be determined based on the process data model (32-bit or 64-bit) and the availability of virtual and physical memory on the system. Excessive use of physical memory for the Java heap may cause paging of virtual memory to disk during garbage collection, resulting in poor performance. For more information on Java tuning, see http://java.sun.com/performance/reference/whitepapers/tuning.html.
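As an illustrative sketch only (the sizes below are placeholders, not recommendations, and MyServerApp is a hypothetical main class), a server might be launched with explicit heap bounds, the free-ratio parameters, and GC logging enabled:

java -Xms1024m -Xmx1024m \
     -XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 \
     -verbose:gc \
     MyServerApp

Setting -Xms equal to -Xmx pins the heap at a fixed size, so the JVM does not spend time growing or shrinking the heap at each collection; the -verbose:gc output can then be used to judge whether that size keeps collection frequency and pause times acceptable.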
<urn:uuid:4c7ab6b5-fa80-4636-9639-96c43e580b47>
3.015625
901
Documentation
Software Dev.
38.161999
Some of Palawan’s reefs are sad reflections of warming ocean temperatures. White skeletons are all that remain of previously colorful and varied coral reefs around the island. The phenomenon is known as ‘coral bleaching’, caused by overly warm ocean temperatures. Scientists cited in the article below hold out hope for these damaged reefs. Apparently, some corals can adapt to warming temperatures, and even thrive in them. Studies are being done in Kiribati, an island nation in the Pacific, very close to the equator, where ocean temperatures are the hottest. An international team of scientists, including lead researchers from Canada and Australia, published an article on March 30 in the journal PLoS ONE. Click on the link below to read the article from ScienceDaily.com:

An excerpt from the article says the study:

. . . paves the way towards an important road map on the impacts of ocean warming, and will help scientists identify the habitats and locations where coral reefs are more likely to adapt to climate change. “We’re starting to identify the types of reef environments where corals are more likely to persist in the future,” says study co-author Simon Donner, an assistant professor in UBC’s Department of Geography and organizer of the field expedition. “The new data is critical for predicting the future for coral reefs, and for planning how society will cope in that future.”

When water temperatures get too hot, the tiny algae that provides coral with its colour and major food source is expelled. This phenomenon, called coral bleaching, can lead to the death of corals. The researchers say coral reefs may be better able to withstand the expected rise in temperature in locations where heat stress is naturally more common. This will benefit the millions of people worldwide who rely on coral reefs for sustenance and livelihoods, they say.

“Until recently, it was widely assumed that coral would bleach and die off worldwide as the oceans warm due to climate change,” says lead author Jessica Carilli, a post-doctoral fellow in Australian Nuclear Science and Technology Organisation’s (ANSTO) Institute for Environmental Research. “This would have very serious consequences, as loss of live coral — already observed in parts of the world — directly reduces fish habitats and the shoreline protection reefs provide from storms.”

This is very good news for Palawan. DonnaOnPalawan wishes these scientists and their studies continuing success. My novel’s plot revolves around Palawan’s coral reefs and fish life, as I am very concerned about this issue. Palawan’s coral reefs are a precious resource. We hope the damage will be halted, and the reefs will thrive on into the future.
<urn:uuid:5e163826-772f-4baf-9ef3-7b93d366dd8b>
3.65625
568
Personal Blog
Science & Tech.
38.168048
Nov 10, 2009, 6:09 AM Post #3 of 7 Re: [DivyaG] 'constant' in perl [In reply to]

Can we define any variable as constant in perl (just like const in C and Java), so that its value cannot be modified throughout the program?

There are two techniques to get unmodifiable values in Perl: `use constant;` and `use Readonly;`

The constant pragma is part of the standard Perl installation and is available to any script. It creates a sub that returns the value. It can be used anywhere a sub can.

perl -e 'use constant PI=>atan2(0,-1); print PI, "\n"'

Running the deparser on it, you can see the sub. You can also see that the compile phase replaces the sub with the value for faster execution.

perl -MO=Deparse -e 'use constant PI=>atan2(0,-1); print PI, "\n"'

Readonly is a module available from CPAN: http://search.cpan.org/~roode/Readonly-1.03/Readonly.pm

What it does is tie into the variable and disable its ability to change. This adds some execution overhead every time the variable is used. Its biggest drawback is that Readonly must be installed on every system where the script runs.

I love Perl; it's the only language where you can bless your thingy. Perl documentation is available at perldoc.perl.org. The list of standard modules and pragmatics is available in perlmodlib.
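For completeness, a small hypothetical example of the Readonly style (the variable name and value are just illustrations):

use Readonly;

# Unlike 'use constant', Readonly gives you a real variable that
# interpolates in strings but cannot be modified.
Readonly my $PI => atan2(0, -1);
print "pi is $PI\n";     # interpolation works
$PI = 3;                 # dies at runtime with a read-only error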
<urn:uuid:9d2ca47b-cb5d-4f04-ad58-90f004e2d4a1>
2.703125
358
Comment Section
Software Dev.
66.384231
Flying a spacecraft to a far planet, millions of miles away, takes many talented people working with very special equipment. In planning the mission, engineers and scientists decide what kinds of instruments ride on board the spacecraft and what kind of information is gathered. They plan the spacecraft's journey by making precise calculations of its path through space. The spacecraft itself is a complex machine that must operate perfectly in alien environments -- under conditions of intense radiation, and extreme cold and heat. The spacecraft receives commands from mission controllers and sends scientific data back to Earth. A computer on board the spacecraft manages the two-way communications equipment and controls the scientific instruments and the other activities of the spacecraft. NASA tracks missions using a world-wide communications system called the Deep Space Network. Huge antennas --- some nearly as big across as a football field --- capture the faint signals from spacecraft. The signals carry the science data, which must be decoded into information or images. The Deep Space Network also has powerful transmitters to send commands to distant spacecraft.
<urn:uuid:b13de43d-8e9f-4df2-8414-7592864b12be>
4.46875
224
Knowledge Article
Science & Tech.
33.378675
One of the outstanding achievements of 20th Century science was the realisation that the great diversity of nature is based on a handful of elementary particles acting under the influence of only a few fundamental forces. This talk aims to give an overview of the natural forces that shape everything around us and outlines current research and what we believe remains to be discovered. Prof. Peter Kalmus is Emeritus Professor at Queen Mary, University of London. At various times he has been President of the Physics Section of the British Association, Vice President of the Institute of Physics, and Vice President of the Royal Institution. A distinguished career has seen him awarded the Rutherford Medal for his outstanding role in the discovery of the W and Z particles, an OBE for his contributions to physics and only last week the Kelvin medal for his role in the public understanding of physics. Dramatic advances in the study of stem cells - the precursor cells of blood, skin, bone and nerve cells - could be used one day to help sufferers from Parkinson's disease, hepatitis, leukaemia, diabetes and rheumatoid arthritis. Stem cells hold the key to the ability to grow a patient's own tissue for repair, and are central to the cloning debate. Potentially they could be used to create unlimited supplies of replacement tissue, including nerve, bone, skin and heart muscle, for repairing injuries and for treating disease - potentially saving millions of lives. Cloning offers a way to grow a patient's own stem cells but, by perfecting such technology, scientists could accelerate efforts to conduct so-called reproductive cloning. Professor Richard Gardner, who is chairing the Royal Society's working group on stem cells and therapeutic cloning, provides a rich overview of the how and why of cloning. Professor Wolpert's thesis is that science is not common sense. Common sense is misleading - it can make you accept that a seashell on the top of a mountain is proof of a global flood. In this talk Professor Wolpert gives one scientist's view of the culture of science and why the public's understanding of that culture is so much in error. His thoughtful analysis concludes that scientific thought is unnatural. As well as a CBE, Professor Wolpert is a fellow of the Royal Society and former chairman of the Committee for the Public Understanding of Science. In May he was awarded the Royal Institution's Michael Faraday award for services to the public understanding of science. The common flu of 1918 spread faster than any disease in history, before or since, and killed more people in less time than all of the great plagues of history, doing so in the presence of relatively 'modern' medical science. Even last year, approximately 20,000 people in the UK died from what were thought to be flu-related illnesses. Yet even this was not an epidemic - the Spanish flu pandemic at the close of the First World War is believed to have accounted for the deaths of well over 20m people worldwide, including 280,000 in the UK. The last official flu epidemic was 11 years ago, but the fact that there is no way of guessing when the next one might be is a very serious concern. Dr Elspeth Garman will go through what we know of the different strains of flu virus and outline the progress made in the development of a cure. Ref: `The Origin and Control of Flu Pandemics' Laver and Garman, Science Sep 7, 2001, 1776-1777. 
Cryptography, the science of encrypting and decrypting information, dates as far back as 1900 BC, when a scribe in Egypt first used a derivation of the standard hieroglyphics of the day to communicate. Today cryptography provides the locks and keys to the Information Age. It is the technology that enables private emails to be sent and secure business transactions to take place over the Internet. Simon Singh, author of Fermat's Last Theorem and The Code Book, will give a brief history of cryptography and then discuss its impact in the 21st century. He will also bring with him a genuine Enigma cipher machine. Simon Singh completed his PhD in particle physics at Cambridge. In 1991 he joined the BBC Science Department and worked as a producer and director with 'Tomorrow's World' and 'Horizon'. He is also the author of 'Fermat's Last Theorem' and 'The Code Book', the latter forming the basis for the popular Channel 4 series 'The Science of Secrecy'. This talk is run in conjunction with Blackwells Café Scientifique.
<urn:uuid:5c6e8595-d959-401d-8022-7f7225a700c5>
3.03125
917
Content Listing
Science & Tech.
42.27272
The Western Diamondback Rattlesnake
The western diamondback rattlesnake lives in the badlands and semi-desert areas of North America, where its tough skin prevents it from losing too much moisture. It conserves water by excreting its urine as a thick paste. It adjusts its daily behavior to regulate body heat, alternately basking in the sun and shade. Water is scarce in the diamondback’s arid habitat, but the snake has a tough skin that conserves moisture and a behavior pattern that helps it avoid the worst of the heat. The rattlesnake’s heat-sensing pit organs guide the snake toward its prey, allowing it to strike with deadly accuracy even in total darkness. The western diamondback rattlesnake lives in arid, scrubby semi-deserts of the southwest U.S. from California to Arkansas. Usually found in dry, sandy, or rocky terrain, the rattlesnake sometimes ventures onto cultivated land. The diamondback is adapted to surviving in a barren landscape where less than 1” of rain falls a year. Lack of water in its dusty range poses no problem for the diamondback. It can go for months without drinking, obtaining all the moisture it needs from its prey. The rattlesnake recycles as much of its body fluids as possible, and when it does urinate, it excretes the waste as concentrated uric acid crystals rather than as a fluid. This takes the form of a white paste and is passed with the feces. In hotter areas the diamondback is most active at night; moving around in the open sunshine all day would cause it to overheat. Like other snakes, the rattlesnake cannot generate enough body heat to operate its organs. The diamondback often spends the hottest daylight hours dozing beneath a rock. When hunting, the snake investigates every cranny, its forked tongue flicking in and out to taste the air for the scent of prey. It preys mostly on rodents, but may eat small birds, lizards and larger animals, such as prairie dogs, ground squirrels and rabbits. As the snake strikes, long, hollow fangs swing down to stab into the prey and inject a lethal dose of venom. The jaws dislocate and the skin stretches as the mouth engulfs the victim. The snake cannot chew; it walks its jaws over its prey to ingest it. The female diamondback can breed only once every two years, so there is intense competition among males. In spring, males are drawn to receptive females by scent. Several males may arrive in the same area at the same time; this signals the need for a contest to decide which of them will win the female. Mating may last 24 hours and eggs are fertilized internally. The eggs develop and incubate inside the female’s body for about 165 days, after which the young are born fully developed. The dozen or so young snakes may be up to 12” long and are independent immediately. They soon move off to catch their own prey, already equipped to deliver a killing bite. The western diamondback is not endangered yet, but its numbers are declining, along with those of other rattlesnake species, due to the ‘rattlesnake roundups’ that take place in some states.
<urn:uuid:ac0d9b25-8172-4a3e-8e20-47e04ad202af>
3.75
673
Knowledge Article
Science & Tech.
50.466313
International Business Times Natural climatic changes have led to the total ecosystem collapse of coral reefs, suggests a new report. According to a study by the Florida Institute of Technology, climate shifts stalled reef growth in the eastern Pacific for 2,500 years. The Daily Telegraph ... Pacific to shut down thousands of years ago, scientists have said. And human-induced pollution could worsen the trend in the future, they warned today... The reef shutdown began 4000 years ago and lasted about 2500 years, said the research led by the...
<urn:uuid:41da48c3-3502-49fe-928c-8930884ce95e>
2.6875
107
Content Listing
Science & Tech.
55.262332
[antlr-interest] What do . (period) and Tokens mean in tree grammars? Harald M. Müller harald_m_mueller at gmx.de Sat Dec 29 14:16:13 PST 2007 Sorry that I ask - but I did not find it on the Wiki and not in the ANTLR book: What do . and Tokens mean in tree grammars? AFAIK, . means "any complete subtree." - although this seems not to work in some current builds, if I understood some email of yesterday correctly? Question number two: Does a single Token also match a complete tree with this token at the root, or only a "childless tree" (i.e. the token as a tree node alone)? Question number three: What sort of lookahead is used over . ? For example, would the following work - assume here that the subtrees can be arbitrarily large subconditions (as is usual in expression trees): condition : ^(AND . ^(NOT .)) -> ...rewrite1... | ^(AND . .) -> ...rewrite2... The intention of this is to rewrite an AND tree which has as second child a tree with a NOT root to rewrite1; whereas all other trees are supposed to be rewritten as rewrite2. If ANTLR tree parsing works the way I assume it does - namely, the whole tree is flattened to a node sequence, on which "one-dimensional" parsing techniques (even LL(*)) are applied - then the NOT will be "too far" away even for an LL(*) analysis, because there will be recursively nested expressions on the way between the AND and the NOT. However, if ANTLR goes for real "two-dimensional" parsing, or does some lookahead over arbitrarily large subtrees (to the readily available "later" children!) - which I would call "1.5-dimensional lookahead computation/parsing" - then the above two patterns could be disambiguated.
<urn:uuid:fbf907f8-5cfd-471c-ad04-918cb363fe58>
2.6875
468
Comment Section
Software Dev.
67.344652
When the scientists combined the light from two 8-meter telescopes with MIDI, they could simulate the resolving power of a telescope with a diameter of about 100 meters. These observations gave a "visibility function," which measures how resolved a source is: a visibility of 1 occurs when a source is completely unresolved, while lower visibilities indicate increased resolution. For HD 69830, the scientists did not resolve the star itself, but did resolve the dust emission, as the visibility clearly does not match the pattern of an unresolved source (dashed blue line). The levels of dust emission vary in the wavelength range covered in the observation (8-13 microns, a region of the mid-infrared spectral range), and this variation can also be seen in the visibility function. These results show that the dust lies between 4.7 and 224 million miles (7.5 and 360 million km) from the star (0.05 to 3 times the Earth-Sun distance).
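As an aside for readers curious about how such a visibility curve is modeled (this is an illustration, not the article's actual analysis): for the textbook case of a uniform-disk source, the visibility is |2 J1(x)/x|, where J1 is a Bessel function and x = pi × baseline × angular diameter / wavelength. The disk size, wavelength and baselines below are invented example values, not the HD 69830 measurements.

import numpy as np
from scipy.special import j1

def uniform_disk_visibility(baseline_m, diameter_rad, wavelength_m):
    # |V| = |2*J1(x)/x|; the limit x -> 0 gives V = 1, an unresolved point source.
    x = np.pi * baseline_m * diameter_rad / wavelength_m
    return np.abs(np.where(x == 0, 1.0, 2 * j1(x) / np.where(x == 0, 1.0, x)))

mas = np.radians(1.0 / 3.6e6)                      # one milliarcsecond in radians
baselines = np.array([10.0, 30.0, 60.0, 100.0])    # metres, example values
print(uniform_disk_visibility(baselines, 20 * mas, 10e-6))  # 10-micron light

A point source keeps |V| = 1 at every baseline; the drop below 1 with increasing baseline is exactly the signature the team used to conclude that the dust emission, though not the star, was resolved.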
<urn:uuid:a7858873-054d-46c0-8286-5073936c357e>
3.734375
193
Knowledge Article
Science & Tech.
49.06
A hailstone begins as a frozen raindrop or ice crystal. Strong updrafts of warm air and downdrafts of cool air move the frozen particle up and down through different levels of the storm cloud. The hailstone encounters different forms of moisture as it moves, and layers of frozen ice particles accumulate on its surface. The resulting hailstone has a layered structure.
<urn:uuid:07b2ab16-32ce-4e5d-86ec-d855b1af6ffa>
2.828125
85
Truncated
Science & Tech.
52.258137
How has the sea ice that surrounds Antarctica varied over the period for which there exist comprehensive satellite data? In what follows, we review what has been learned about the subject -- in the order in which it was learned -- starting with the very first year of the current millennium. Noting that "Antarctic sea ice may show high sensitivity to any anthropogenic increase in temperature" -- as per the canary-in-the-coal-mine concept of high-latitude amplification of CO2-induced global warming -- while further noting that most climate models suggest that an increase in surface temperature "would result in a decrease in sea ice coverage," Watkins and Simmonds (2000) analyzed temporal trends in different measures of the sea ice that surrounds Antarctica, using Special Sensor Microwave Imager data obtained from the Defense Meteorological Satellite Program for the nine-year period December 1987-December 1996, in search of the suspected signal. But contrary to what one would expect on the basis of the model simulations, and especially in light of what climate alarmists call the unprecedented warming of the past quarter-century, the two scientists observed statistically significant increases in both sea ice area and extent; and when they combined their results with those of the preceding nine-year period (1978-1987), both parameters continued to show increases over that expanded time period. In addition, they found that the 1990s also experienced increases in the length of the sea-ice season. In a contemporary assessment of Antarctic sea ice behavior, Yuan and Martinson (2000) also utilized Special Sensor Microwave Imager data, but they additionally analyzed brightness temperatures obtained by the Nimbus-7 Scanning Multichannel Microwave Radiometer; determining that the mean trend in the latitudinal location of the Antarctic sea ice edge over the prior 18 years was an equatorward extension of 0.011 degree latitude per year, in harmony with the findings of Comiso (2000), who analyzed Antarctic temperature data obtained from 21 surface stations, as well as from infrared satellites operating from 1979 to 1998, and discovered a 20-year cooling trend of 0.042°C per year in the satellite data and 0.008°C per year in the station data. That Antarctic sea ice had indeed increased in area, extent and season length since at least 1978 was also supported by several subsequent studies. The very next year, for example, Hanna (2001) published an updated analysis of Antarctic sea ice cover -- also based on Special Sensor Microwave Imager data, but for the extended period of October 1987-September 1999 -- finding "an ongoing slight but significant hemispheric increase of 3.7(±0.3)% in extent and 6.6(±1.5)% in area." And one year later, Parkinson (2002) utilized satellite passive-microwave data to calculate and map the length of the sea-ice season throughout the Southern Ocean for each year of the period 1979-1999, finding that although there were opposing regional trends, a "much larger area of the Southern Ocean experienced an overall lengthening of the sea-ice season ... than experienced a shortening." Concurrently, Zwally et al. (2002) also utilized passive-microwave satellite data to study Antarctic sea ice trends. 
Over the 20-year period 1979-1998, they report that the sea ice extent of the entire Southern Ocean increased by 11,181 ± 4,190 square km per year, or by 0.98 ± 0.37 percent per decade, while sea ice area increased by nearly the same amount: 10,860 ± 3,720 square km per year, or by 1.26 ± 0.43 percent per decade. And in contradiction of the ancillary climate-alarmist claim that various aspects of earth's climate should exhibit greater variability when it is warmer than when it is colder, they observed that the variability of monthly sea ice extent declined from 4.0% over the first ten years of the record to 2.7% over the last ten years (which were supposedly the warmest of the prior millennium, according to the world's climate alarmists). One year later, Vyas et al. (2003) analyzed data from the multi-channel scanning microwave radiometer carried aboard India's OCEANSAT-1 satellite for the period June 1999-May 2001, which they combined with data for the period 1978-1987 that had been derived from space-based passive microwave radiometers carried aboard earlier Nimbus-5, Nimbus-7 and DMSP satellites, in order to study secular trends in sea ice extent over the Antarctic region over the period 1978-2001. This work revealed that the mean rate of change of sea ice extent for the entire Antarctic region over this period was an increase of 0.043 M km² per year. In addition, the six researchers concluded that "the increasing trend in the sea ice extent over the Antarctic region may be slowly accelerating in time, particularly over the last decade," which finding they described as "paradoxical in the global warming scenario resulting from increasing greenhouse gases in the atmosphere." In a somewhat similar study, Cavalieri et al. (2003) extended prior satellite-derived Antarctic sea ice records several years by bridging the gap between Nimbus 7 and earlier Nimbus 5 satellite data sets with National Ice Center digital sea ice data, finding that sea ice extent about Antarctica rose at a mean rate of 0.10 ± 0.05 × 10⁶ km² per decade between 1977 and 2002. Likewise, Liu et al. (2004) employed sea ice concentration data retrieved from the Scanning Multichannel Microwave Radiometer on the Nimbus 7 satellite, plus the Special Sensor Microwave Imager on several defense meteorological satellites, to develop a quality-controlled history of Antarctic sea ice variability over the period 1979-2002 (which included different states of the Antarctic Oscillation and several ENSO events), after which they evaluated total sea ice extent and area trends by means of linear least-squares regression. This work revealed, in their words, that "overall, the total Antarctic sea ice extent has shown an increasing trend (~4,801 km²/yr)," and that "the total Antarctic sea ice area has increased significantly by ~13,295 km²/yr, exceeding the 95% confidence level." Shortly thereafter, Parkinson (2004) reviewed the history of satellite observations of sea ice extent in the Southern Ocean about Antarctica, concentrating on data obtained from the Scanning Multichannel Microwave Radiometer aboard the Nimbus 7 satellite and subsequent satellite-based Special Sensor Microwave Imagers, because these platforms provided, in her words, "the best long-term record of changes in the full Southern Ocean ice cover." 
The resulting plot of 12-month running-means of Southern Ocean sea ice extent, which extended from November 1978 through December 2002, revealed significant multi-year variability in the data, which began at the top of a peak and ended at the bottom of a trough. But in spite of the high beginning point and low end point of the data, which would militate against a long-term upward trend, the data exhibited just such a feature, the least-squares-fit slope of which revealed a 12,380 ± 1,730 km² upward trend in sea ice extent per year. In considering this result, it is interesting to note that over the period of time that climate alarmists claim has experienced the most extreme global warming of the past millennium or more, and in spite of the fact they have historically claimed such warming should be most evident in earth's polar regions, and that it should lead to a decrease in polar sea ice extent, just the opposite had occurred to this point in time in the Southern Ocean that surrounds Antarctica. But what is doubly damning to their dogma is the fact that the Southern Ocean's sea ice extent is extremely sensitive to warming, decreasing from a 24-year-average maximum monthly value of 18.23 × 10⁶ km² in September to a similarly-calculated minimum monthly value of 2.98 × 10⁶ km² in February. This decrease represents the disappearance of nearly 84% of each year's maximum sea ice cover; and, therefore, it can be appreciated that given just a little extra seasonal warmth, it would disappear altogether each February. But it hasn't. In fact, it continues to slowly, but ever so surely, grow in the mean. Focusing on the spring-summer period of November/December/January (1981-2000) some four years later, Laine (2008) determined trends in Antarctic ice-sheet and sea-ice surface albedo and temperature, as well as sea-ice concentration and extent, based on Advanced Very High Resolution Polar Pathfinder data in the case of ice-sheet surface albedo and temperature, and the Scanning Multichannel Microwave Radiometer and Special Sensor Microwave Imagers in the case of sea-ice concentration and extent. These analyses were carried out for the continent as a whole, as well as for five longitudinal sectors emanating from the south pole: 20°E-90°E, 90°E-160°E, 160°E-130°W, 130°W-60°W, and 60°W-20°E. This work revealed, in Laine's words, that "all the regions show negative spring-summer surface temperature trends for the study period." In addition, the Finnish researcher found that "sea ice concentration shows slight increasing trends in most sectors, where the sea ice extent trends seem to be near zero." Laine also found that "the Antarctic region as a whole and all the sectors separately show slightly positive spring-summer albedo trends." Consequently, over the last two decades of the 20th century, Antarctica successfully bucked the world's supposedly unprecedented global warming trend by (1) cooling a bit, (2) acquiring slightly more sea ice, and (3) becoming a tad more reflective of incoming solar radiation. Several other studies of the subject were also conducted in 2008. 
Noting that earth's polar regions "are expected to provide early signals of a climate change primarily because of the 'ice-albedo feedback' which is associated with changes in absorption of solar energy due to changes in the area covered by the highly reflective sea ice," for example, Comiso and Nishio (2008) set about to provide updated and improved estimates of trends in Antarctic sea ice cover for the period extending from November 1978 to December 2006, based on data obtained from the Advanced Microwave Scanning Radiometer, the Special Scanning Microwave Imager and the Scanning Multichannel Microwave Radiometer. And in doing so, they found that the 28-year trends in Antarctic sea ice extent and area were +0.9 ± 0.2 and +1.7 ± 0.3% per decade, which is definitely not a "signal" of global warming. In another study employing satellite-borne passive microwave radiometer data that extended the analyses of the sea ice time series reported by Zwally et al. (2002) from 20 years (1979-1998) to 28 years (1979-2006), Cavalieri and Parkinson (2008) found that "the total Antarctic sea ice extent trend increased slightly, from 0.96 ± 0.61% per decade to 1.0 ± 0.4% per decade, from the 20- to 28-year period." The Antarctic sea ice area trend, however, remained constant at 1.2 ± 0.7% per decade. Its variability, however, like that of sea ice extent, declined (from ± 0.7% to ± 0.5% per decade), so that both sets of results indicated a "tightening up" of the two relationships. And why were these things so? The two researchers state that "what is driving the observed changes remains unanswered, and the physical mechanisms explaining these changes remain to be determined." Most recently, Turner et al. (2009) reviewed the history of Antarctic sea ice extent derived from satellite observations, after which they attempted to derive an explanation for the empirical data being what they are, based on climate model simulations. Citing the work of Zwally et al. (2002), they first noted that over the period 1979-1998, sea ice extent surrounding Antarctica increased at a mean rate of 0.98% per decade, and that Comiso and Nishio (2008) derived a value of 0.9% per decade for the period 1978-2006. This sea ice extent increase, according to their modeling work, was largely driven by an autumn increase in the Ross Sea sector that they suggest "is primarily a result of stronger cyclonic atmospheric flow over the Amundsen Sea." And they say that "the trend towards stronger cyclonic circulation is mainly a result of stratospheric ozone depletion, which has strengthened autumn wind speeds around the continent, deepening the Amundsen Sea Low through flow separation around the high coastal orography." On the other hand, and much more simply, the nine researchers report that "statistics derived from a climate model control run suggest that the observed sea ice increase might still be within the range of natural climate variability." In light of these contrasting possibilities, it is clear that the true cause of the near-three-decade-long increase in Antarctic sea ice extent cannot be stated with any confidence. The only thing we can conclude at this point in time, therefore, is that for some still-unproven reason, and in spite of the supposedly unprecedented increases in mean global air temperature and atmospheric CO2 concentration that the planet has experienced since the late 1970s, Antarctic sea ice extent has stubbornly refused to do what climate models say it should be doing, as it just keeps on growing. 
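All of the trend figures quoted above come from ordinary linear least-squares fits to the satellite time series, with the slope then expressed as a percentage of the mean extent per decade. As a rough illustration of that arithmetic only (the numbers below are synthetic, invented for the example, and not the actual satellite record), the calculation looks like this in Python:

import numpy as np

# Synthetic 28-year monthly extent series in km^2: a mean state, a small
# upward trend, a crude annual cycle, and some noise. All values invented.
rng = np.random.default_rng(0)
years = np.arange(12 * 28) / 12.0
extent = (11.0e6 + 1.0e4 * years
          + 7.6e6 * np.sin(2 * np.pi * years)
          + rng.normal(0, 2e5, years.size))

slope, intercept = np.polyfit(years, extent, 1)   # km^2 per year
print("%.0f km^2/yr = %.2f%% per decade" % (slope, 100 * 10 * slope / extent.mean()))

With these made-up inputs the fit recovers a trend of roughly one percent per decade, the same order as the satellite results quoted above; the published studies differ mainly in the sensors used, the period covered, and how the annual cycle and data gaps are handled before the fit.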
Cavalieri, D.J. and Parkinson, C.L. 2008. Antarctic sea ice variability and trends, 1979-2006. Journal of Geophysical Research 113: 10.1029/2007JC004564.
Cavalieri, D.J., Parkinson, C.L. and Vinnikov, K.Y. 2003. 30-Year satellite record reveals contrasting Arctic and Antarctic decadal sea ice variability. Geophysical Research Letters 30: 10.1029/2003GL018031.
Comiso, J.C. 2000. Variability and trends in Antarctic surface temperatures from in situ and satellite infrared measurements. Journal of Climate 13: 1674-1696.
Comiso, J.C. and Nishio, F. 2008. Trends in the sea ice cover using enhanced and compatible AMSR-E, SSM/I, and SMMR data. Journal of Geophysical Research 113: 10.1029/2007JC004257.
Elderfield, H. and Rickaby, R.E.M. 2000. Oceanic Cd/P ratio and nutrient utilization in the glacial Southern Ocean. Nature 405: 305-310.
Hanna, E. 2001. Anomalous peak in Antarctic sea-ice area, winter 1998, coincident with ENSO. Geophysical Research Letters 28: 1595-1598.
Laine, V. 2008. Antarctic ice sheet and sea ice regional albedo and temperature change, 1981-2000, from AVHRR Polar Pathfinder data. Remote Sensing of Environment 112: 646-667.
Parkinson, C.L. 2002. Trends in the length of the Southern Ocean sea-ice season, 1979-99. Annals of Glaciology 34: 435-440.
Parkinson, C.L. 2004. Southern Ocean sea ice and its wider linkages: insights revealed from models and observations. Antarctic Science 16: 387-400.
Turner, J., Comiso, J.C., Marshall, G.J., Lachlan-Cope, T.A., Bracegirdle, T., Maksym, T., Meredith, M.P., Wang, Z. and Orr, A. 2009. Non-annular atmospheric circulation change induced by stratospheric ozone depletion and its role in the recent increase of Antarctic sea ice extent. Geophysical Research Letters 36: 10.1029/2009GL037524.
Vyas, N.K., Dash, M.K., Bhandari, S.M., Khare, N., Mitra, A. and Pandey, P.C. 2003. On the secular trends in sea ice extent over the Antarctic region based on OCEANSAT-1 MSMR observations. International Journal of Remote Sensing 24: 2277-2287.
Watkins, A.B. and Simmonds, I. 2000. Current trends in Antarctic sea ice: The 1990s impact on a short climatology. Journal of Climate 13: 4441-4451.
Yuan, X. and Martinson, D.G. 2000. Antarctic sea ice extent variability and its global connectivity. Journal of Climate 13: 1697-1717.
Zwally, H.J., Comiso, J.C., Parkinson, C.L., Cavalieri, D.J. and Gloersen, P. 2002. Variability of Antarctic sea ice 1979-1998. Journal of Geophysical Research 107: 10.1029/2000JC000733.
Last updated 30 December 2009
<urn:uuid:cd085c5e-c932-4186-a7bd-51a71a37e27c>
3.078125
3,570
Academic Writing
Science & Tech.
55.11015
http://phys.org/news/2012-05-lemons-lem ... oxide.html Making carbon-based products from CO2 is nothing new, but carbon dioxide molecules are so stable that those reactions usually take up a lot of energy. If that energy were to come from fossil fuels, over time the chemical reactions would ultimately result in more carbon dioxide entering the atmosphere—defeating the purpose of a process that could otherwise help mitigate climate change. Professor Yun Hang Hu’s research team developed a heat-releasing reaction between carbon dioxide and Li3N that forms two chemicals: amorphous carbon nitride (C3N4), a semiconductor; and lithium cyanamide (Li2CN2), a precursor to fertilizers. “The reaction converts CO2 to a solid material,” said Hu. “That would be good even if it weren’t useful, but it is.”
<urn:uuid:a890fe39-22cf-4902-a48a-6b2bd82f7809>
4.125
192
Comment Section
Science & Tech.
46.941553
NASA Rocket to Create Clouds Tuesday Posted on 09/15/2009 6:25:43 PM PDT by Free ThinkerNY A rocket experiment set to launch Tuesday aims to create artificial clouds at the outermost layers of Earth's atmosphere. The project, called the Charged Aerosol Release Experiment (CARE), plans to trigger cloud formation around the rocket's exhaust particles. The clouds are intended to simulate naturally-occurring phenomena called noctilucent clouds, which are the highest clouds in the atmosphere. "This is really essentially at the boundary of space," said Wayne Scales, a scientist at Virginia Tech who will use computer models to study the physics of the artificial dust cloud as it's released. "Nothing like this has been done before and that's why everybody's really excited about it." The experiment is the first attempt to create artificial noctilucent clouds. A previous spacecraft, called Aeronomy of Ice in the Mesosphere (AIM), launched in 2007 to observe the natural clouds from space. CARE is slated to launch Tuesday between 7:30 and 7:57 p.m. EDT (2330 and 2357 GMT) from NASA's Wallops Flight Facility in Virginia. Noctilucent means "night shining" in Latin. Although difficult to spot with the naked eye, the clouds are best visible when Earth's surface is in darkness and sunlight from below the horizon illuminates the high-altitude clouds. These clouds, also known as polar mesospheric clouds, are made of ice crystals. The natural ones tend to hover around 50 to 55 miles (80 to 90 km) above the Earth. CARE will release its dust particles a bit higher than that, then let them settle back down to a lower altitude. "What the CARE experiment hopes to do is to create an artificial dust layer," Scales told SPACE.com. "Hopefully it's a creation in a controlled sense, which will allow scientists to study different aspects of it, the turbulence generated on the inside, the distribution of dust particles and such." (Excerpt) Read more at livescience.com ... If it works, Obambi's science advisor can continue to push his demented scheme (releasing sulfur-rich aerosols to combat the "greenhouse effect"). Wonder if these will be visible. Normally I'd say "WOW, this is kinda a cool experiment by NASA" ... but given their recent penchant for bowing towards the Global Warming™ crowd, I would put reservations on the purpose of this near-space experiment. Color MM as 'skeptical' as to the motive for this. It doesn't pay to screw around with Mother Nature. Yep. I'm all wee-weed up. If the dust were made of shredded HR3200 I might be more interested. I am going to make an assumption that the real intent of this experiment is to buy ourselves more time before the threat of man made global warming kills us all. It's just a pile of dust from the vacuum cleaners of a few NASA offices. There is the scary operative phrase --- "Hopefully ...." Wilhelm Reich, please call your office. I remember that ad in the 70's. It would be followed up by an Irish Spring and then maybe you can call me Ray or you can call me Jay or maybe plop plop fizz fizz oh what a relief it is. Didn't they use to do something like this at Wallops Island back in the early days of NASA? Oh, wonderful, now they are experimenting with climate control, just what we need. As in a nuclear scientist saying, "This experiment could trigger a chain reaction and destroy the world, hopefully my calculations are correct and it won't". A few months ago a Shuttle launch made one of these clouds. 
<urn:uuid:4764a0db-a1f4-4608-bb23-d0a63c6db88e>
3.46875
856
Comment Section
Science & Tech.
59.388515
Simple Equations Introduction to basic algebraic equations of the form Ax=B - Let's say we have the equation seven times x is equal to fourteen. - Now before even trying to solve this equation, - what I want to do is think a little bit about what this actually means. - Seven x equals fourteen, - this is the exact same thing as saying seven times x, let me write it this way, seven times x, x in orange again. Seven times x is equal to fourteen. - Now you might be able to do this in your head. - You could literally go through the 7 times table. - You say well 7 times 1 is equal to 7, so that won't work. - 7 times 2 is equal to 14, so 2 works here. - So you would immediately be able to solve it. - You would immediately, just by trying different numbers - out, say hey, that's going to be a 2. - But what we're going to do in this video is to think about - how to solve this systematically. - Because what we're going to find is as these equations get - more and more complicated, you're not going to be able to - just think about it and do it in your head. - So it's really important that one, you understand how to - manipulate these equations, but even more important to - understand what they actually represent. - This literally just says 7 times x is equal to 14. - In algebra we don't write the times there. - When you write two numbers next to each other or a number next - to a variable like this, it just means that you - are multiplying. - It's just a shorthand, a shorthand notation. - And in general we don't use the multiplication sign because - it's confusing, because x is the most common variable - used in algebra. - And if I were to write 7 times x is equal to 14, if I write my - times sign or my x a little bit strange, it might look - like xx or times times. - So in general when you're dealing with equations, - especially when one of the variables is an x, you - wouldn't use the traditional multiplication sign. - You might use something like this -- you might use dot to - represent multiplication. - So you might have 7 times x is equal to 14. - But this is still a little unusual. - If you have something multiplying by a variable - you'll just write 7x. - That literally means 7 times x. - Now, to understand how you can manipulate this equation to - solve it, let's visualize this. - So 7 times x, what is that? - That's the same thing -- so I'm just going to re-write this - equation, but I'm going to re-write it in visual form. - So 7 times x. - So that literally means x added to itself 7 times. - That's the definition of multiplication. - So it's literally x plus x plus x plus x plus x -- let's see, - that's 5 x's -- plus x plus x. - So that right there is literally 7 x's. - This is 7x right there. - Let me re-write it down. - This right here is 7x. - Now this equation tells us that 7x is equal to 14. - So just saying that this is equal to 14. - Let me draw 14 objects here. - So let's say I have 1, 2, 3, 4, 5, 6, 7, 8, - 9, 10, 11, 12, 13, 14. - So literally we're saying 7x is equal to 14 things. - These are equivalent statements. - Now the reason why I drew it out this way is so that - you really understand what we're going to do when we - divide both sides by 7. - So let me erase this right here. - So the standard step whenever -- I didn't want to do that, - let me do this, let me draw that last circle. 
- So in general, whenever you simplify an equation down to a - -- a coefficient is just the number multiplying - the variable. - So some number multiplying the variable or we could call that - the coefficient times a variable equal to - something else. - What you want to do is just divide both sides by 7 in - this case, or divide both sides by the coefficient. - So if you divide both sides by 7, what do you get? - 7 times something divided by 7 is just going to be - that original something. - 7's cancel out and 14 divided by 7 is 2. - So your solution is going to be x is equal to 2. - But just to make it very tangible in your head, what's - going on here is when we're dividing both sides of the - equation by 7, we're literally dividing both sides by 7. - This is an equation. - It's saying that this is equal to that. - Anything I do to the left hand side I have to do to the right. - If they start off being equal, I can't just do an operation - to one side and have it still be equal. - They were the same thing. - So if I divide the left hand side by 7, so let me divide - it into seven groups. - So there are seven x's here, so that's one, two, three, - four, five, six, seven. - So it's one, two, three, four, five, six, seven groups. - Now if I divide that into seven groups, I'll also want - to divide the right hand side into seven groups. - One, two, three, four, five, six, seven. - So if this whole thing is equal to this whole thing, then each - of these little chunks that we broke into, these seven chunks, - are going to be equivalent. - So this chunk you could say is equal to that chunk. - This chunk is equal to this chunk -- they're - all equivalent chunks. - There are seven chunks here, seven chunks here. - So each x must be equal to two of these objects. - So we get x is equal to, in this case -- in this case - we had the objects drawn out where there's two of - them. x is equal to 2. - Now, let's just do a couple more examples here just so it - really gets in your mind that we're dealing with an equation, - and any operation that you do on one side of the equation - you should do to the other. - So let me scroll down a little bit. - So let's say I have I say I have 3x is equal to 15. - Now once again, you might be able to do this in your head. - You're saying this is saying 3 times some - number is equal to 15. - You could go through your 3 times tables and figure it out. - But if you just wanted to do this systematically, and it - is good to understand it systematically, say OK, this - thing on the left is equal to this thing on the right. - What do I have to do to this thing on the left - to have just an x there? - Well to have just an x there, I want to divide it by 3. - And my whole motivation for doing that is that 3 times - something divided by 3, the 3's will cancel out and I'm just - going to be left with an x. - Now, 3x was equal to 15. - If I'm dividing the left side by 3, in order for the equality - to still hold, I also have to divide the right side by 3. - Now what does that give us? - Well the left hand side, we're just going to be left with - an x, so it's just going to be an x. - And then the right hand side, what is 15 divided by 3? - Well it is just 5. - Now you could have also done this equation in a slightly - different way, although they are really equivalent. - If I start with 3x is equal to 15, you might say hey, Sal, - instead of dividing by 3, I could also get rid of this 3, I - could just be left with an x if I multiply both sides of - this equation by 1/3. 
- So if I multiply both sides of this equation by 1/3 - that should also work. - You say look, 1/3 of 3 is 1. - When you just multiply this part right here, 1/3 times - 3, that is just 1, 1x. - 1x is equal to 15 times 1/3 is equal to 5. - And 1 times x is the same thing as just x, so this is the same - thing as x is equal to 5. - And these are actually equivalent ways of doing it. - If you divide both sides by 3, that is equivalent to - multiplying both sides of the equation by 1/3. - Now let's do one more and I'm going to make it a little - bit more complicated. - And I'm going to change the variable a little bit. - So let's say I have 2y plus 4y is equal to 18. - Now all of a sudden it's a little harder to - do it in your head. - We're saying 2 times something plus 4 times that same - something is going to be equal to 18. - So it's harder to think about what number that is. - You could try them. - Say if y was 1, it'd be 2 times 1 plus 4 times 1, - well that doesn't work. - But let's think about how to do it systematically. - You could keep guessing and you might eventually get - the answer, but how do you do this systematically. - Let's visualize it. - So if I have two y's, what does that mean? - It literally means I have two y's added to each other. - So it's literally y plus y. - And then to that I'm adding four y's. - To that I'm adding four y's, which are literally four - y's added to each other. - So it's y plus y plus y plus y. - And that has got to be equal to 18. - So that is equal to 18. - Now, how many y's do I have here on the left hand side? - How many y's do I have? - I have one, two, three, four, five, six y's. - So you could simplify this as 6y is equal to 18. - And if you think about it it makes complete sense. - So this thing right here, the 2y plus the 4y is 6y. - So 2y plus 4y is 6y, which makes sense. - If I have 2 apples plus 4 apples, I'm going - to have 6 apples. - If I have 2 y's plus 4 y's I'm going to have 6 y's. - Now that's going to be equal to 18. - And now, hopefully, we understand how to do this. - If I have 6 times something is equal to 18, if I divide both - sides of this equation by 6, I'll solve for the something. - So divide the left hand side by 6, and divide the - right hand side by 6. - And we are left with y is equal to 3. - And you could try it out. - That's what's cool about an equation. - You can always check to see if you got the right answer. - Let's see if that works. - 2 times 3 plus 4 times 3 is equal to what? - 2 times 3, this right here is 6. - And then 4 times 3 is 12. - 6 plus 12 is, indeed, equal to 18.
<urn:uuid:0fd4521c-d6c5-4976-a39e-6b54d6f49c8b>
4.8125
2,935
Truncated
Science & Tech.
84.041077
In this module we need to be a little bit more precise about temperature and heat energy than we have been so far. Heat energy is usually measured in terms of calories. The calorie was originally defined as the amount of energy required to raise one gram of water one degree Celsius at a pressure of one atmosphere. This definition is not complete because the amount of energy required to raise one gram of water one degree Celsius varies with the original temperature of the water by as much as one percent. Since 1925 the calorie has been defined as 4.184 joules, the amount of energy required to raise the temperature of one gram of water from 14.5 degrees Celsius to 15.5 degrees Celsius. For our purposes here we can ignore the fact that the effect of one calorie of energy varies depending on the temperature of the water. Newton's model of cooling can be thought of, more precisely, as involving two steps. The picture above shows a brick whose length is four centimeters. We mentally divide the brick into two unequal pieces. The lefthand piece has a length of one centimeter and the righthand piece has a length of three centimeters. Heat is flowing across the mental boundary between the two pieces from left to right at the rate of A calories per hour. As a result the average temperature of the lefthand piece is changing at the rate of -kA degrees Celsius per hour. The constant k depends on the composition of the brick and its cross-sectional area. The average temperature of the righthand piece is changing at the rate of kA / 3 degrees Celsius per hour. The three in the denominator comes from the fact that since the righthand piece is three times the length of the lefthand piece, its mass is three times as big.
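To make the two-step picture concrete, here is a small numerical sketch (not part of the original module). It assumes, purely for illustration, that the flow rate A is proportional to the temperature difference between the two pieces; the constants c and k below are invented for the example.

# Two-piece brick from the text: left piece has 1/4 of the mass, right piece 3/4.
c, k = 2.0, 0.5                     # hypothetical constants
T_left, T_right = 100.0, 20.0       # initial average temperatures, degrees Celsius
dt = 0.001                          # time step, hours

for step in range(100000):          # simulate 100 hours
    A = c * (T_left - T_right)      # assumed flow rate, calories per hour
    T_left += -k * A * dt           # left piece changes at -k*A degrees per hour
    T_right += k * A / 3 * dt       # right piece, 3x the mass, at k*A/3

print(round(T_left, 2), round(T_right, 2))

Because the righthand piece has three times the mass, its temperature moves only a third as fast, and the two temperatures settle at the mass-weighted average (here 40 degrees Celsius) rather than at the midpoint of the starting values.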
<urn:uuid:a815ae2f-316e-4506-a921-7c7a166c371c>
4.15625
363
Tutorial
Science & Tech.
50.643539
USING a giant magnetic field, scientists at the University of Nottingham and the University of Nijmegen in the Netherlands have made a frog float in mid-air. The levitation trick works because giant magnetic fields slightly distort the orbits of electrons in the frog's atoms. The resulting electric current generates a magnetic field in the opposite direction to that of the magnet. A field of 16 teslas created a repulsive force strong enough to make the frog float, until it made its escape. The team has also levitated plants, grasshoppers and fish. "If you have a magnet that is big enough, you could levitate a human," says Peter Main, one of the researchers. He adds that the frog did not seem to suffer any ill effects: "It went back to its fellow frogs looking perfectly happy."
<urn:uuid:00e22c8c-93ca-4955-a9a2-776a0dfdcf69>
3.484375
194
Truncated
Science & Tech.
49.86859
Oct. 25, 2012 Japan's "triple disaster," as it has become known, began on March 11, 2011, and remains unprecedented in its scope and complexity. To understand the lingering effects and potential public health implications of that chain of events, scientists are turning to a diverse and widespread sentinel in the world's ocean: fish. Events on March 11 began with a magnitude 9.0 earthquake, the fourth largest ever recorded. The earthquake in turn spawned a massive 40-foot tsunami that inundated the northeast Japanese coast and resulted in an estimated 20,000 missing or dead. Finally, the wave caused catastrophic damage to the Fukushima Dai-ichi nuclear power plant, resulting in the largest accidental release of radiation to the ocean in history, 80 percent of which ended up in the Northwest Pacific Ocean. In a Perspectives article appearing in the October 26, 2012, issue of the journal Science, WHOI marine chemist Ken Buesseler analyzed data made publicly available by the Japanese Ministry of Agriculture, Forestry and Fisheries (MAFF) on radiation levels in fish, shellfish and seaweed collected at ports and inland sites in and around Fukushima Prefecture. The picture he draws from the nearly 9,000 samples describes the complex interplay between radionuclides released from Fukushima and the marine environment. In it, Buesseler shows that the vast majority of fish caught off the northeast coast of Japan remain below limits for seafood consumption, even though the Japanese government tightened those limits in April 2012. Nevertheless, he also finds that the most highly contaminated fish continue to be caught off the coast of Fukushima Prefecture, as could be expected, and that demersal, or bottom-dwelling fish, consistently show the highest level of contamination by a radioactive isotope of cesium from the damaged nuclear power plant. He also points out that levels of contamination in almost all classifications of fish are not declining, although not all types of fish are showing the same levels, and some are not showing any appreciable contamination. As a result, Buesseler concludes that there may be a continuing source of radionuclides into the ocean, either in the form of low-level leaks from the reactor site itself or contaminated sediment on the seafloor. In addition, the varying levels of contamination across fish types point to complex methods of uptake and release by different species, making the task of regulation and of communicating the reasons behind decision-making to the fish-hungry Japanese public all the more difficult. "To predict how patterns of contamination will change over time will take more than just studies of fish," said Buesseler, who led an international research cruise in 2011 to study the spread of radionuclides from Fukushima. "What we really need is a better understanding of the sources and sinks of cesium and other radionuclides that continue to drive what we're seeing in the ocean off Fukushima." - K. O. Buesseler. Fishing for Answers off Fukushima. Science, 2012; 338 (6106): 480 DOI: 10.1126/science.1228250
<urn:uuid:07ac73c1-a9d5-4499-b883-960463833692>
3.875
656
Truncated
Science & Tech.
35.690478
A curve of radius 138 m is banked at an angle of 11°. A 762-kg car negotiates the curve at 82 km/h without skidding. Neglect the effects of air drag and rolling friction. Find the following. (a) the normal force exerted by the pavement on the tires (b) the frictional force exerted by the pavement on the tires (c) the minimum coefficient of static friction between the pavement and the tires
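One standard way to work this problem (offered here as a sketch, not an official solution) is to resolve the normal force N and the friction force f, which acts along the incline, into components: vertically, N cos θ − f sin θ = mg, and horizontally, toward the center of the curve, N sin θ + f cos θ = mv²/r. Solving the pair gives N = m(g cos θ + (v²/r) sin θ) and f = m((v²/r) cos θ − g sin θ), with μ_min = f/N. A short script to grind out the numbers:

import math

m, r = 762.0, 138.0                 # mass in kg, radius in m
theta = math.radians(11)            # bank angle
v = 82 / 3.6                        # 82 km/h converted to m/s
g = 9.81
a_c = v**2 / r                      # centripetal acceleration, m/s^2

N = m * (g * math.cos(theta) + a_c * math.sin(theta))   # (a) normal force
f = m * (a_c * math.cos(theta) - g * math.sin(theta))   # (b) friction force
mu_min = f / N                                          # (c) minimum mu_s

print(f"N = {N:.0f} N, f = {f:.0f} N, mu_min = {mu_min:.3f}")

With these numbers the script gives roughly N ≈ 7.9 kN, f ≈ 1.4 kN (directed down the slope, since the car is moving faster than the no-friction design speed for this bank angle), and μ_min ≈ 0.18.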
<urn:uuid:b19e6075-eff6-45c4-8e41-d7c621e69ae2>
3.25
95
Tutorial
Science & Tech.
67.785223
Last week's announcement of the discovery of a new particle seemed to answer one of the great outstanding questions in physics. But for those who haven't been immersed in all things LHC, the results were likely to raise all sorts of new questions (along with "what was all the fuss about again?"). So, to help navigate the post-Higgs world, we put together a short Q&A, based on questions that some of the Ars staff had. I know we detected it in the Large Hadron Collider, but how did they actually make Higgs bosons? There are two ways to answer that question. The first is that we're simply converting energy into matter. The protons in the collider carry a tremendous amount of energy, and it has to go somewhere. Given Einstein's E = mc2, we know that some of that energy can be converted into matter. That's why things that are much heavier than two protons at rest can pop out of the collisions. But Einstein's equations aren't magic, in that particles don't just poof into existence—there are actual processes that create them. In the LHC, the most common process that ends in a Higgs boson is gluon fusion. Gluons are the (apparently massless) carriers of the strong force that holds quarks together to form things like protons and neutrons. If two of them merge, then one possible outcome is a single Higgs particle. Everyone says that this particle was predicted by the Standard Model, but how exactly? What was missing that made people theorize the Higgs? The Standard Model describes the properties of fundamental particles and the forces that mediate their interactions. Some of these, like the photon, are massless; others, like the W and Z bosons that mediate the weak force, weigh as much as entire atoms (including some that the weak force causes to decay). Although it's possible to just say "this is what these things weigh," physicists find this sort of approach dissatisfying. So, they developed a theoretical mechanism that could supply some particles with mass. Several papers, appearing about the same time, suggested that there's a pervasive field that all particles can interact with. Some, like the photon, don't, and remain massless. Others, like the W and Z bosons, undergo large interactions with the field, picking up a large mass in the process. Peter Higgs published the first paper that indicated that this field should have a corresponding particle, which eventually led to it picking up his name: the Higgs boson. (Physicist Matt Strassler has written much more about the particle's history and role in the Standard Model.) With the discovery of the W, Z, and top quark, the Higgs remained the last particle predicted by the Standard Model that remained undiscovered. Finding it became a key test of whether the Model provided a complete picture of the basic particles and forces. Many scientists are being careful about saying that we've only found a boson that looks like the Higgs. What's that supposed to mean? If you've read our coverage of the Higgs, you know that the Standard Model predicts that it will decay along a variety of specific pathways: two photons, four leptons, etc. The fact that we're seeing something that's a boson, and clearly decays through at least some of these pathways, tells us that we've seen something very much like the Standard Model Higgs. But it may not be precisely the Standard Model version. So far, we don't have enough collisions to tell the Standard Model apart from some related theories. For example, one rare decay pathway should produce two tau particles. 
(Taus are part of the lepton family, which includes the electron and its heavier cousin the muon. Think of the tau as the electron's morbidly obese uncle.) So far, the CMS detector has seen none of these decays (the ATLAS team hasn't performed this analysis yet), but their absence isn't yet statistically significant. If that continues as more Higgs are produced, then it will suggest that we're looking at a non-standard Higgs. What could that be? There are a number of variations on the theory that predict it may take some of those pathways more or less often than the vanilla version of the theory. And there's a major extension to the Standard Model, called supersymmetry, that suggests that the Standard Model's particles are all parts of larger families, meaning that there would be multiple Higgs bosons, and we've only found one. Matt Strassler told Ars that a few more exotic theories suggest there will be Higgs-like particles that do very different things, some involving extra dimensions. It's only by making more of these bosons that we can start to tell these possibilities apart. Which brings us to our next question. The key thing here is that, if we haven't found the Standard Model Higgs, then we don't get to keep the Standard Model as it is. We could end up with a mildly tweaked version, we could have a Standard Model plus extensions, or we could be seeing hints of something much more significant. Until we have a better understanding of the particle we're seeing, we can't tell any of these apart. If the Large Hadron Collider was made to find the Higgs, what's it going to do now? Make more Higgs, so we can answer the previous question, for starters. CERN's director announced that it will run for a few extra months specifically to get a better statistical handle on whether this is the Standard Model Higgs. Beyond that, many other theoretical particles, including some of those predicted by things like supersymmetry, are already within reach of the energies at the LHC. Once it restarts in a couple of years, it will be running at much higher energies, opening up a greater range for discovery. Even if you don't think it's worth chasing down theoretical particles, the Universe keeps telling us that dark matter is likely to be composed of a heavy fundamental particle. The LHC should be able to spot these if they're really out there. Does this eliminate the need to build another collider? Actually, it will certainly inform, and possibly motivate, the construction of anything that comes next. The LHC may have been a great Higgs discovery machine, but it's actually not so hot if we want to look at the Higgs in detail (and wanting the answers to the above should suggest we do). The problem is that proton collisions are messy, since you're actually colliding what's essentially a bag of quarks, gluons, and virtual particles, all of which may end up carrying some fraction of the total energy. All sorts of things spill out of the resulting collisions, making it difficult to separate out the Higgs decay. Some of the decay channels are so noisy that they actually made the discovery statistics worse in the recent announcements. A much cleaner way of going about looking at the Higgs would be to collide fundamental particles, ideally with their antiparticles. We could then tune the energy to make producing our 126 GeV Higgs much more likely. That was what motivated the construction of SLAC's linear collider, which smashed electrons and positrons together to produce large numbers of Z bosons.
Unfortunately, building one will be a real challenge. Electrons don't like to go around in circles (they quickly radiate their energy away), so we'd have to build a linear collider, one longer than anything we've built previously. That gets expensive. The alternative is to build a muon collider, but that would involve developing lots of new and unproven technology. In an age of tight science budgets, the prospects for a major construction project look bleak.

That reminds me: the LHC cost a lot of money. Couldn't that have been put to better use?

It's really difficult to guess which scientific advances are going to pay dividends. Logic gates were first considered around 1900; quantum mechanics was developed in the 1920s. It took until the 1970s for the two to be married in the form that all of us now use. Restriction enzymes were discovered in the 1960s, when people were trying to figure out why only some viruses could infect some bacteria. They ended up being an essential foundation for the biotech industry. I could go on with examples for ages. If anyone tells you which areas of basic research will have the largest economic impact 30 years from now, I'd bet money they're wrong.

Might the money have done more good in applied research? Possibly, but even there, there are no guarantees. The technology we actually get is often radically different from what we'd want or expect based on the state of scientific knowledge. In other words, we may want and expect flying cars, but we end up with always-online smartphones. And I'd trade them both for fusion power, the basic physics of which we nailed down decades ago.
<urn:uuid:774c0b9f-f635-4705-8028-03beced576be>
3.3125
1,850
Q&A Forum
Science & Tech.
55.928429
Family: Dipluridae (Funnelweb Mygalomorphs)

Description: 1/8" (3-5 mm). Tan to yellowish-brown to reddish-brown; hairy. Large body. Belongs to the group of primitive spiders commonly called tarantulas. Constructs tube-shaped webs.

Habitat: Mature coniferous (spruce-fir) forest.

Range: Southern Appalachian Mountains; western North Carolina and eastern Tennessee.

Endangered Status: The Spruce-fir Moss Spider is on the U.S. Endangered Species List. It is classified as endangered in North Carolina and Tennessee. This species has strict habitat requirements: it lives in damp mats of moss growing on rocks in the deep shade of mature Fraser Fir and Red Spruce trees. These old-growth forests have been under attack for centuries, felled by timber interests and suffering from the effects of human dominance of the landscape. More recently, an introduced insect, the Balsam Woolly Adelgid, has been killing off fir and spruce trees, and the spider has declined as well. Only two populations of the Spruce-fir Moss Spider are known to survive: one on Grandfather Mountain, North Carolina, the other on Mount LeConte, Tennessee.
<urn:uuid:d7ea8be6-59d4-4342-bb2e-f04a598588b4>
2.734375
265
Knowledge Article
Science & Tech.
44.725
1 Streams and lazy evaluation (40 points)

We know that comparison sorting requires Ω(n log n) comparisons when sorting n elements. Suppose we only need the first f(n) elements of the sorted list, for some function f. If we know that f(n) grows asymptotically more slowly than log n, then it would be wasteful to sort the entire list. We can instead implement a lazy sort that returns a stream representing the sorted list. Each time the stream is accessed to get the head of the sorted list, the smallest element is found in the list; this takes linear time. Retrieving the first f(n) elements then takes O(n·f(n)).

For this question we use the following datatype definitions. There are also some helper functions defined.

(* Suspended computation *)
datatype 'a stream' = Susp of unit -> 'a stream

(* Lazy stream construction *)
and 'a stream = Empty | Cons of 'a * 'a stream'

Note that these streams are not necessarily infinite, but they can be.

Q1.1 (20 points) Implement the function lazysort: int list -> int stream'. It takes a list of integers and returns an int stream' representing the sorted list. This should be done in constant time. Each time the stream' is forced, it gives either Empty or a Cons(v, s'). In the case of the cons, v is the smallest element of the sorted list and s' is a stream' representing the remaining sorted list. The force should take linear time. For example:

- val s = lazysort( [9, 8, 7, 6, 5, 4] );
val s = Susp fn : int stream'
- val Cons(n1, s1) = force(s);
val n1 = 4 : int
val s1 = Susp fn : int stream'
- val Cons(n2, s2) = force(s1);
val n2 = 5 : int
val s2 = Susp fn : int stream'
- val Cons(n3, s3) = force(s2);
val n3 = 6 : int
val s3 = Susp fn : int stream'

Here is what is given as code:

(* Suspended computation *)
datatype 'a stream' = Susp of unit -> 'a stream

(* Lazy stream construction *)
and 'a stream = Empty | Cons of 'a * 'a stream'

(* Lazy stream construction and exposure *)
fun delay (d) = Susp (d)
fun force (Susp (d)) = d ()

(* Eager stream construction *)
val empty = Susp (fn () => Empty)
fun cons (x, s) = Susp (fn () => Cons (x, s))

(* Inspect a stream up to n elements
   take : int -> 'a stream' -> 'a list
   take': int -> 'a stream -> 'a list *)
fun take 0 s = []
  | take n s = take' n (force s)
and take' 0 s = []
  | take' n (Cons (x, xs)) = x::(take (n-1) xs)

My attempt at a solution

I tried the following, which takes the int list and transforms it into an int stream':

(* lazysort: int list -> int stream' *)
fun lazysort ([]:int list) = empty
  | lazysort (h::t) = cons (h, lazysort(t));

But when calling force it does not return the minimum element. I have to search for the minimum, but I do not know how. I thought of doing an insertion sort, like the following:

fun insertsort [] = []
  | insertsort (x::xs) =
    let fun insert (x:real, []) = [x]
          | insert (x:real, y::ys) =
              if x <= y then x::y::ys
              else y::insert(x, ys)
    in insert(x, insertsort xs) end;

But that sorts the whole list eagerly; what I need is to search for the minimum on each force, not sort the list up front and then wrap it as a stream. Any help would be appreciated.
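One possible direction, sketched below: keep the list unsorted and, on each force, make a single linear pass that splits off the minimum. This is a minimal sketch rather than a definitive solution; it reuses the delay, force, and empty helpers given above, and extractMin is a hypothetical helper introduced here.

(* extractMin: return the smallest element together with the list of
   the remaining elements, in one linear pass. It is only ever called
   on non-empty lists below, so the missing [] case is never reached. *)
fun extractMin [x] = (x, [])
  | extractMin (x::xs) =
      let val (m, rest) = extractMin xs
      in if x <= m then (x, xs) else (m, x::rest) end

(* lazysort: int list -> int stream'
   Building the Susp is constant time (it just captures a closure);
   each force performs one linear scan via extractMin. *)
fun lazysort [] = empty
  | lazysort l =
      delay (fn () =>
        let val (m, rest) = extractMin l
        in Cons (m, lazysort rest) end)

With this, take 3 (lazysort [9, 8, 7, 6, 5, 4]) should evaluate to [4, 5, 6], and each individual force matches the transcript in the question: constant-time construction, one O(n) scan per element extracted, and O(n·f(n)) total for the first f(n) elements, as described at the top.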
<urn:uuid:67b304b1-c964-4636-a74b-4a0ce25fa1ba>
3.53125
876
Q&A Forum
Software Dev.
78.829554